Main Post Office, P.O. Box 751                    www.asyousow.org

Berkeley, CA 94704

   

BUILDING A SAFE, JUST, AND SUSTAINABLE WORLD SINCE 1992

 

 

Notice of Exempt Solicitation Pursuant to Rule 14a-103

 

Name of the Registrant: Meta Platforms Inc. (META)
Name of persons relying on exemption: As You Sow
Address of persons relying on exemption: Main Post Office, P.O. Box 751, Berkeley, CA 94704

 

Written materials are submitted pursuant to Rule 14a-6(g)(1) promulgated under the Securities Exchange Act of 1934. Submission is not required of this filer under the terms of the Rule, but is made voluntarily in the interest of public disclosure and consideration of these important issues.

 

 

Meta Platforms Inc. (META)
Vote Yes: Item #10 – Shareholder Proposal Regarding Report on Enforcement of Community Standards and User Content

Annual Meeting: May 31, 2023

 

CONTACT: Andrew Behar | abehar@asyousow.org

 

THE RESOLUTION

 

RESOLVED:  Shareholders request the Board, at reasonable expense and excluding proprietary or legally privileged information, prepare and publish a report analyzing why the enforcement of “Community Standards” as described in the “Transparency Center” has proven ineffective at controlling the dissemination of user content that contains or promotes hate speech, disinformation, or content that incites violence and/or causes harm to public health or personal safety.

 

SUPPORTING STATEMENT: Proponent suggests the report include for each of Meta’s products, including Facebook, Messenger, Instagram, WhatsApp, and others with over 100 million users:

 

· A quantitative and qualitative assessment by external, independent, and qualified experts of the effectiveness of Meta’s algorithms, staff, and contractors to locate and eliminate content violating Community Standards;

 

· Examination of benefits to users and impact to revenue if Company voluntarily follows existing legal frameworks established for broadcast networks (e.g. laws governing child pornography and political advertisements); and

 

· Analysis of the benefits of the Company continuing to conduct technology impact assessments focused on how Meta’s platforms affect society.

 

   
 

 

     


 

 

SUMMARY

 

Meta Platforms (“Meta”) claims in its Statement of Opposition that the goal of its Community Standards is to “create a place for expression” and to “give people a voice.”1 Yet the Company concedes that, “in some limited cases, we allow content—which would otherwise go against our standards—if we believe it is newsworthy and in the public interest.” This is, perhaps, the answer to why Meta’s application of its Community Standards and Guidelines (“Standards”) has been ineffective year after year: the desire to promote “newsworthy” information regardless of its consequences. Shareholders have asked Meta to analyze why the enforcement of Community Standards as described in its Transparency Center has proven ineffective at controlling the dissemination of user content that contains or promotes hate speech, disinformation, or content that incites violence and/or causes harm to public health or personal safety.

 

Meta has a long history of ineffective content moderation and enforcement of its Standards. This failure creates reputational, business, and regulatory risk, harming Meta’s ability to compete and innovate. Meta is not unique among its peers in having to address the proliferation of misinformation; it is, however, unique in the degree of reputational risk it has created for itself. Meta has a responsibility to its investors and users to address this growing reputational impact by affirmatively assessing why its Standards are not robustly and equitably enforced.

 

RATIONALE FOR A YES VOTE

 

1. Continued proliferation of misinformation and harmful content exposes Meta to material business and regulatory risk.

 

2. Meta’s reputation for misinformation on its platforms increases risk by inhibiting the Company’s ability to innovate and compete with other social media and artificial intelligence companies.

 

DISCUSSION

 

1. Continued proliferation of misinformation and harmful content exposes Meta to material business and regulatory risk.

 

For many years, Meta has received harsh criticism from regulators and public officials for its inability to effectively check misinformation and harmful content on its platforms. This trend continued in 2022. The Company’s approach to election misinformation during the 2022 election cycle demonstrated Meta’s ability to limit the spread of political misinformation.2 Yet concern about how Meta enforces its Standards has continued into 2023. Senator Mark Warner, Chairman of the Senate Select Committee on Intelligence, sent a letter to CEO Mark Zuckerberg in February 2023 “pressing the company on its efforts to combat the spread of misinformation, hate speech, and incitement content around the world.”3 The letter specifically pointed to recent failures to adequately enforce Facebook’s Community Standards to combat misinformation that has led to violence in other countries.

 

_____________________________

1 https://www.sec.gov/ix?doc=/Archives/edgar/data/1326801/000132680123000050/meta-20230414.htm, p. 86

2 https://www.bloomberg.com/news/articles/2022-08-16/meta-pulls-out-familiar-playbook-in-advance-of-2022-elections

3 https://www.warner.senate.gov/public/index.cfm/2023/2/warner-presses-meta-on-facebook-s-role-in-inciting-violence-and-spreading-misinformation-around-the-world

 


 

 

Failure to address these concerns presents an ongoing risk. An instructive example is the transfer of Twitter’s ownership to Elon Musk in 2022. Musk publicly underscored his desire to relax the platform’s content moderation standards and to cut staffing devoted to misinformation.4 Brands and advertising firms, already wary of the rapid change of ownership, dramatically reduced their spending, primarily over concerns about content moderation.5 The result was a 40% drop in Twitter’s revenue in 2022.6

 

Advertiser fear of misinformation and harmful content has been an issue for Meta in the past, even leading to advertiser boycotts and intensive public pressure campaigns urging advertisers to withdraw ads from the Company’s platforms.7 Continued missteps could cause harm at critical times for Meta’s shifting strategic vision.

 

Meta’s third-quarter 2022 earnings report showed a drop in ad revenue, the first such decline since 2007, which sent shockwaves through the technology sector.8 This loss led to intense public speculation about Meta’s ability to grow revenue and effectively manage content on its social media platforms.9 Articles described falling ad prices on Facebook and Instagram, along with indications that Meta was becoming less discriminating about ad content in order to raise revenue.10,11 Articles also analyzed how much ad revenue Facebook generates from approved ads containing blatantly harmful and violent information, further entrenching Meta’s reputation as a home for disinformation.12

 

Advertiser, user, and regulator focus on Meta’s handling of misinformation has not ebbed, warranting deep analysis of why the enforcement of Meta’s Standards, as described in the Transparency Center, has proven ineffective at controlling the dissemination of harmful content and disinformation. The Proponent’s requested report would provide more transparency to users and investors, while offering Meta the chance to begin repairing its public image.

 

2. Meta’s reputation for misinformation on its platforms increases risk by inhibiting the Company’s ability to innovate and compete with other social media and artificial intelligence companies.

 

Meta’s ambition to be a leader in artificial intelligence (“AI”) is at risk due to its reputation as a company that has not acted effectively to stop misinformation on its platforms. CEO Mark Zuckerberg has been open about Meta’s push to develop AI products and services. In a Facebook post on February 27, 2023, Zuckerberg announced the creation of a “new top-level product group at Meta focused on generative AI,” promising to “turbocharge” the Company’s AI work and integrate AI “into all of our different products.”13 With competitors like Microsoft and Google making huge bets on generative AI, alongside an explosion of AI startups, there is a clear incentive for Meta to establish itself as a leader in the field.14

 

_____________________________

4 https://apnews.com/article/voting-rights-elon-musk-twitter-inc-technology-dd4273dbda5b15343753f56c1f43a659; https://www.washingtonpost.com/technology/2022/11/29/twitter-covid-misinformation-policy/

5 https://www.washingtonpost.com/business/musks-latest-move-at-twitter-can-only-sink-ad-revenue/2022/11/14/1326f162-6442-11ed-b08c-3ce222607059_story.html; https://www.firstpost.com/world/twitter-revenue-falls-by-more-than-40-per-cent-since-elon-musk-took-12256072.html

6 https://www.firstpost.com/world/twitter-revenue-falls-by-more-than-40-per-cent-since-elon-musk-took-12256072.html

7 https://www.nytimes.com/2020/08/01/business/media/facebook-boycott.html

8 https://www.npr.org/2022/10/26/1131590734/meta-earnings-slump; https://mashable.com/article/tiktok-news-source-facebook-decrease

9 https://www.npr.org/2022/10/26/1131590734/meta-earnings-slump

10 https://www.nytimes.com/2023/02/11/technology/bad-digital-ads.html

11 https://www.nytimes.com/2023/02/11/technology/bad-digital-ads.html

12 https://www.theregister.com/2022/12/02/facebook_ads_screening/; https://www.nytimes.com/2023/02/11/technology/bad-digital-ads.html; https://fortune.com/2022/08/15/facebook-failed-to-catch-blatant-misinformation-in-ads-ahead-of-brazils-2022-election/; https://thehill.com/policy/technology/3595977-facebook-profits-from-ads-on-searches-for-hate-group-pages-report/; https://nadler.house.gov/news/documentsingle.aspx?DocumentID=394784

13 https://www.facebook.com/zuck/posts/pfbid02zHwANqWrZLMimhq7U97i3xaHkMEHu8CLsa9TGRj1QeejwDxRFChxSK1zY6yPak5Kl

14 https://techcrunch.com/2023/01/23/microsoft-invests-billions-more-dollars-in-openai-extends-partnership; https://www.bloomberg.com/news/articles/2023-02-03/google-invests-almost-400-million-in-ai-startup-anthropic; https://www.axios.com/2023/01/10/artificial-intelligence-hype-explosion-generative-ai-chatgpt

 


 

 

However, Meta’s reputation for promoting misinformation puts these goals at risk. Yann LeCun, Meta’s chief AI scientist, explained that Meta had to pull back its own generative AI model, Galactica, because of its tendency to make up, or “hallucinate,” false information.15 While other generative AI models have had similar issues,16 the blowback was uniquely damaging to Meta because of its reputation for disinformation on its platforms. LeCun explained that “OpenAI and other small companies are in a better position to actually get some credit for releasing this kind of thing . . . they are not going to get the same kind of blowback.”17 In short, because of Meta’s marred reputation stemming from misinformation on its platforms, it is reluctant to take on the additional reputational risk associated with experimenting with AI. Its poor reputation thus threatens its position in the emerging generative AI market at a critical inflection point for growth.

 

While AI may have potential future applications in identifying misinformation on social media,18 it is not currently a viable solution for fighting misinformation, nor does it address the present issue of Meta’s tarnished reputation.19 The report requested in the Proposal would give Meta the opportunity to acknowledge and assess why the enforcement of its Community Standards, as described in the Transparency Center, has proven ineffective. Assessing and addressing this shortcoming would allow Meta to expand into new areas of innovation without its reputation acting as a limiting factor.

 

RESPONSE TO META BOARD OF DIRECTORS’ STATEMENT IN OPPOSITION

 

“We are committed to keeping people safe while giving them the ability to express themselves.”

 

While the mission is admirable, Meta’s commitment to safety has been consistently compromised. The Proponent recognizes the complexity of moderating content across multiple social media platforms and constant streams of content, but Meta too often falls short of providing a safe user experience. From prolific hate speech20 to the lax treatment of high-profile and popular users known to disseminate harmful misinformation,21 Meta’s commitment has not resulted in a consistently safe experience for users across its platforms.

 

_____________________________

15 https://www.nytimes.com/2023/02/07/technology/meta-artificial-intelligence-chatgpt.html

16 https://www.theguardian.com/technology/2023/mar/14/chat-gpt-4-new-model

17 https://www.nytimes.com/2023/02/07/technology/meta-artificial-intelligence-chatgpt.html

18 https://www.wired.co.uk/article/fact-checkers-ai-chatgpt-misinformation

19 https://scholar.harvard.edu/cvt/artificial-intelligence-struggles-moderate-covid-misinformation; https://www.rappler.com/newsbreak/in-depth/they-are-getting-smarter-how-disinformation-peddlers-avoid-regulation/

20 https://www.businessinsider.com/facebooks-local-partners-say-hate-speech-stays-on-the-platform-2023-4; https://www.pbs.org/newshour/world/facebooks-system-approved-dehumanizing-hate-speech

21 https://www.npr.org/2021/10/22/1048543513/facebook-groups-jan-6-insurrection

 


 

 

Moreover, given the Company’s recent mass layoffs, it is not unreasonable to assume Meta is even less able to provide a safe user experience. Reports from multiple sources indicate that Meta laid off about 75% of its “information problems engineering team, which helps enable fact-checking.”22 This change alone calls into question Meta’s ability to enforce its Standards. The report requested by the Proposal is timely and necessary to assess Meta’s current ability to effectively enforce its Standards and fight misinformation.

 

Meta also notes in opposition that it provides “users with the tools to take control of their experience on our apps.”23 Whether such user control is effective in practice has not been demonstrated. Further, posts containing misinformation are more likely to be circulated, are more visible, and can drown out content offering an opposing viewpoint or fact check.24 To say that users are in control of their experience is at best a partial truth, and it shifts the burden of a safe experience onto those least able to control, or least interested in controlling, misinformation.

 

“We regularly report on the actions that we take, including our use of third-party assessments to review the accuracy of our reporting and the strength of our processes.”

 

This statement refers to Meta’s commissioning of an Ernst & Young LLP (“EY”) assessment of the way the Company “designed, implemented, operated, and monitored effective controls” over the process of preparing and conducting the Community Standards Enforcement Report.25 That assessment, however, was not intended to “express an opinion or any other form of assurance over Meta’s enforcement policies and its identifying, labeling, or actioning of violating content in accordance with such policies.”26

 

“We work closely with global experts to continually improve our services.”

 

Meta maintains an Oversight Board, currently composed of 22 independently appointed members with professional expertise in human rights, law, and academia, that makes non-binding recommendations on various aspects of Meta’s activity.27 When the Oversight Board issues a decision on Meta’s conduct, the Company has 90 days to review and respond. The Oversight Board makes decisions within the context of Meta’s Standards and has issued several “decisions” (non-binding recommendations) on issues arising from specific cases, such as gender identity and nudity,28 as well as policy advisory opinions, such as a recent opinion regarding Meta’s cross-check program, in which misinformation was a central theme.29

 

_____________________________

22 https://www.wsj.com/articles/mark-zuckerberg-says-meta-will-slow-hiring-wont-rule-out-future-layoffs-98653b09

23 https://www.sec.gov/ix?doc=/Archives/edgar/data/1326801/000132680123000050/meta-20230414.htm

24 https://www.nytimes.com/2022/10/13/technology/misinformation-integrity-institute-report.html

25 https://about.fb.com/wp-content/uploads/2022/05/EY-CSER-Independent-Assessment-Q4-2021.pdf, p. 4

26 https://about.fb.com/wp-content/uploads/2022/05/EY-CSER-Independent-Assessment-Q4-2021.pdf, p. 4 (emphasis added).

27 https://www.oversightboard.com/meet-the-board/

28 https://www.oversightboard.com/decision/BUN-IH313ZHJ/

29 https://www.oversightboard.com/decision/PAO-NR730OFI/

 


 

 

Despite the Oversight Board’s non-binding reviews, unequal application of Meta’s Standards to high-profile members known to disseminate misinformation and harmful content has been a consistent issue.30 Under the cross-check program, content from popular accounts with high user engagement is subject to a distinct review process. Part of that process is a secondary review of the content to assess whether it complies with the Standards, called “ER Secondary Review.”31 If such content is flagged as harmful or as misinformation, it is not immediately taken down, unlike content from other accounts. The Oversight Board has taken issue with this unequal application of Meta’s Standards and has called for greater transparency, particularly regarding which accounts are on the cross-check list.32 Meta responded that it is “[c]urrently reviewing how to improve the criteria for identifying entities who should receive ER Secondary Review.”33 Yet Meta does not appear to have adopted the Oversight Board’s recommendation at this time.

 

The Proponent recognizes that enforcing standards is a complex task, especially for content from high-profile users and on contentious public issues, but doing so is incumbent on Meta as the administrator of its platforms. Meta’s stated intent to review its policies provides a prime opportunity for the Company to issue the Proposal’s requested report, which should include a review of Meta’s cross-check program and ER Secondary Review as part of a larger analysis of why the enforcement of its Standards has proven ineffective.

 

 

CONCLUSION

 

Meta’s reputation as a company that cannot effectively control misinformation presents material business and regulatory risk, including the potential to harm its ability to innovate effectively. The Company has a unique and timely opportunity to assess its ability to enforce its Standards and to assure users and investors that it recognizes these risks and is willing to take action to mitigate them.

 

Vote “Yes” on Shareholder Proposal Item #10

 

--

 

For questions, please contact Andrew Behar, As You Sow, abehar@asyousow.org

 

THE FOREGOING INFORMATION MAY BE DISSEMINATED TO SHAREHOLDERS VIA TELEPHONE, U.S. MAIL, E-MAIL, CERTAIN WEBSITES AND CERTAIN SOCIAL MEDIA VENUES, AND SHOULD NOT BE CONSTRUED AS INVESTMENT ADVICE OR AS A SOLICITATION OF AUTHORITY TO VOTE YOUR PROXY. THE COST OF DISSEMINATING THE FOREGOING INFORMATION TO SHAREHOLDERS IS BEING BORNE ENTIRELY BY ONE OR MORE OF THE CO-FILERS. PROXY CARDS WILL NOT BE ACCEPTED BY ANY CO-FILER. PLEASE DO NOT SEND YOUR PROXY TO ANY CO-FILER. TO VOTE YOUR PROXY, PLEASE FOLLOW THE INSTRUCTIONS ON YOUR PROXY CARD.

 

_____________________________

30 https://www.nytimes.com/2022/12/06/technology/meta-preferential-treatment.html?smid=nytcore-ios-share&referringSource=articleShare

31 https://transparency.fb.com/pao-cross-check-policy

32 https://www.nytimes.com/2022/12/06/technology/meta-preferential-treatment.html?smid=nytcore-ios-share&referringSource=articleShare

33 https://transparency.fb.com/pao-cross-check-policy

 

 
