Why is Meta shutting down fact-checkers? | Explained
Meta CEO Mark Zuckerberg has announced the removal of fact-checkers and a shift towards user-driven content moderation, sparking concerns over misinformation.
The story so far: On January 7, Meta CEO Mark Zuckerberg said the company would get rid of fact-checkers and simplify its content policies by removing restrictions on topics, calling the existing approach “out of touch with mainstream discourse.” In a five-minute video, he said the company would return to its roots, as fact-checkers had been “too politically biased” and had “destroyed more trust than they created, especially in the U.S.”
After the 2016 U.S. presidential election, Meta, then known as Facebook, faced serious backlash over accusations that it had amplified political posts that helped tilt the election in favour of Donald Trump, now the U.S. President-elect. To rebuild its reputation, Facebook roped in content moderators globally and developed technology to filter harmful content.
Meta started its independent fact-checking programme in partnership with the International Fact-Checking Network (IFCN) and the European Fact-Checking Standards Network (EFCSN). Over time, Meta became one of the largest donors to IFCN.
Meta worked with fact-checkers to address misinformation on its platforms: Facebook, Instagram and Threads. The fact-checkers identified misinformation and rated it based on the seriousness of the violation, while Meta followed up with enforcement action and informed users of the measures it took. Beyond fact-checking, partner organisations worked across Meta’s platforms to carry out research and rate content on a qualitative scale: false, altered, partly false, missing context, satire, and true. Per the IFCN’s ‘State of Fact-Checkers in 2023’ report, income from Meta’s Third-Party Fact-Checking Programme and grants remain fact-checkers’ predominant revenue streams. The report also notes that 68% of fact-checking organisations have 10 or fewer employees, while only 6.6% employ 31 or more people.
Fact-checkers play a vital role in finding false and misleading content promoted on social media platforms by domestic accounts, and at times by foreign regimes. They also played a crucial role during the COVID-19 pandemic by correcting misinformation on social platforms. If content on Meta is rated false or altered, its distribution across Meta’s apps is reduced. If key information is missing or the content is satirical, Meta might provide the needed facts. Content rated poorly by a fact-checker may not be suggested to users, and repeat offenders face penalties such as restricted reach and the loss of the ability to monetise their content or turn it into a news page.
Apart from relying on fact-checkers, Meta set up an Oversight Board to adjudicate cases involving serious content policy violations, making binding decisions to uphold or overturn Meta’s own actions. Gradually, Meta began moving away from news content in general to keep its platforms free of disinformation-prone content. The company said it would not “proactively recommend content about politics on recommendation surfaces across Instagram and Threads”, noting that it wanted these apps to be a “great experience” for all.
Now that is starting to change under Joel Kaplan, Meta’s new chief of global affairs. Mr. Kaplan said “civic content” about elections and politics would return to the apps, and that users could choose what they want to see. Expanding on Mr. Zuckerberg’s video, he noted that the platform would also get rid of a number of restrictions “on topics like immigration and gender identity that are the subject of frequent political discourse and debate”. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms,” he said.