Meta Scraps Fact-Checkers, Eases Content Restrictions

Fact-checkers are headed to the dustbin of history at Meta.

“We will end the current third-party fact-checking program in the United States and instead begin moving to a Community Notes program,” Meta’s Chief Global Affairs Officer Joel Kaplan announced in a company blog on Tuesday.

Kaplan added that Meta would also be addressing the “mission creep” that has made the rules governing the company’s platforms too restrictive and prone to over-enforcement.

“We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate,” he wrote. “It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.”

In addition, Meta will be modifying the automated systems that scan its platforms for policy violations. “[T]his has resulted in too many mistakes and too much content being censored that shouldn’t have been,” Kaplan wrote.

Going forward, the systems will focus on illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud, and scams, while less severe violations will be acted on only after users report them.

Meta is also making it harder to remove content from its platforms by requiring multiple reviewers to reach a determination before anything is taken down. Separately, the company will let users who want it see more civic content — posts about elections, politics, or social issues.

Censorship Tool

Kaplan explained that when Meta launched its independent fact-checking program in 2016, it didn’t want to be the arbiter of truth, so it handed the responsibility of fact-checking content to independent organizations.

“The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read,” he wrote.

“That’s not the way things played out, especially in the United States,” he continued. “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how.”

“Over time, we ended up with too much content being fact-checked that people would understand to be legitimate political speech and debate,” he noted. “Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.”

David Inserra, a fellow for free expression and technology at the Cato Institute, a Washington, D.C. think tank, served on a Facebook content policy team and said he was bothered by the selection bias of the group. “The only people who joined to be fact-checkers wanted to moderate content,” he told TechNewsWorld. “People who wanted users to make their own decisions about content didn’t become fact-checkers.”

“My experience with the effectiveness of Facebook’s fact-checking was pretty mixed overall,” added Darian Shimy, CEO and founder of FutureFund, a fundraising platform for K-12 schools and PTAs, in Pleasanton, Calif.

“It’s safe to say that it added a layer of accountability, but candidly, I found it was too slow and inconsistent to keep up with the pace of viral misinformation,” he told TechNewsWorld. “Talking to many people in my circle and researching internally, I found that most people felt that relying on third-party fact-checkers created a perception of bias, which didn’t always help build trust with users.”

‘Not a Victory for Free Speech’

Irina Raicu, director for internet ethics at Santa Clara University’s Markkula Center for Applied Ethics, noted that there was plenty of disinformation showing up on Facebook under the existing fact-checking regime.

“Part of the problem was the automation of content moderation,” she told TechNewsWorld. “The algorithmic tools were pretty blunt and missed the nuances of both language and images. And the problem was even more widespread in posts in languages other than English.”

“With billions of pieces of content posted daily, it was simply impossible for human fact-checkers to keep up,” added Paul Benigeri, co-founder and CEO of Archive, a company that develops software to automate e-commerce digital marketing workflows, in New York City.

“Fact-checking felt more like a PR move,” he told TechNewsWorld. “Sometimes it worked, but it never came close to catching the full volume of misleading posts.”

Tal-Or Cohen Montemayor, founder and executive director of CyberWell, a San Francisco-headquartered non-profit dedicated to fighting antisemitism on social media, questioned Meta's decision to scrap its fact-checking system.

“While the previous fact-checking system has proven to be an ineffective and unscalable method of combatting misinformation and disinformation during real-time conflicts and emergencies,” she told TechNewsWorld, “the answer cannot be less accountability and less investment from the side of the platforms.”

“This is not a victory for free speech,” she declared. “It’s an exchange of human bias in a small and contained group of fact-checkers for human bias at scale through Community Notes. The only way to prevent censorship and data manipulation by any government or corporation would be to institute legal requirements and reforms on big tech that enforce social media reform and transparency requirements.”

Flawed Community Solution

Meta’s Community Notes replacement for fact-checking is modeled on a similar scheme deployed on X, formerly Twitter. “The community-based approach is nice in that it deals partially with the scale issue,” said Cody Buntain, an assistant professor at the College of Information at the University of Maryland. “It allows many more people to engage with this process and add context.”

“The problem is that community notes, while it can work in the large aggregate scale for occasional pieces of information or the occasional story that goes viral, it is generally not fast enough and gets totally overwhelmed with new major events,” he explained.

“We saw this in the aftermath of the attacks in Israel back in October of 2023,” he continued. “There were people highly engaged in the community note process, but Twitter as a platform just got swamped and overwhelmed with the amount of misinformation going on around this event.”

“When the platforms say, ‘We’re going to wash our hands of this and let the community deal with it,’ that becomes problematic in these moments where the only people who really can deal with massive influxes of high-velocity, low-quality information are the platforms,” he said. “Community notes aren’t really set up to deal with those issues, and those are the moments when you want high-quality information the most.”

“I’ve never been a fan of community notes,” added Karen Kovacs North, clinical professor of communication at the Annenberg School for Communication and Journalism at the University of Southern California.

“The type of people who are willing to put notes on something are usually polarized and passionate,” she told TechNewsWorld. “The middle-of-the-roaders don’t take time to put their comments down on a story or a piece of content.”

Currying Trump’s Favor

Vincent Raynauld, an assistant professor in the Department of Communication Studies at Emerson College, noted that while community moderation sounds great in theory, it has some problems. “Even though the content might be flagged as being disinformation or misleading, the content is still available to people to consume,” he told TechNewsWorld.

“So even though some people might see the community note, they might still consume that content, and that content might still have an impact on their attitudes, knowledge, and behavior,” he explained.

Along with the Kaplan announcement, Meta released a video of CEO Mark Zuckerberg hailing the company’s latest moves. “We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies, and restoring free expression on our platforms,” he said.

“Zuckerberg’s announcement has nothing to do with making Meta’s platforms better and everything to do with currying favor with Donald Trump,” asserted Dan Kennedy, a professor of journalism at Northeastern University, in Boston.

“There was a time when Zuckerberg cared about his products being used to promote dangerous misinformation and disinformation, about the January 6 insurrection and Covid,” he told TechNewsWorld. “Now Trump is returning to office, and one of Zuckerberg’s rivals, Elon Musk, is running amok with Trump’s indulgence, so Zuckerberg is just getting with the program.”

“No system of fact-checking and moderation is perfect,” he added, “but if Zuckerberg really cared, he’d work to improve it rather than getting rid of it altogether.”

Musk as Trend Setter

Damian Rollison, director of marketing for SOCi, a comarketing cloud platform headquartered in San Diego, pointed out an irony in Meta’s latest move. “I think it’s safe to say that no one predicted Elon Musk’s chaotic takeover of Twitter would become a trend other tech platforms would follow, and yet here we are,” he told TechNewsWorld.

“We can see now, in retrospect, that Musk established a standard for a newly conservative approach to the loosening of online content moderation, one that Meta has now embraced in advance of the incoming Trump administration,” he said.

“What this will likely mean is that Facebook and Instagram will see a spike in political speech and posts on controversial topics,” he continued.

“As with Musk’s X, where ad revenues are down by half, this change may make the platform less attractive to advertisers,” he added. “It may also cement a trend whereby Facebook is becoming the social network for older, more conservative users and ceding Gen Z to TikTok, with Instagram occupying a middle ground between them.”
