Tech giant Meta has announced the end of its U.S. fact-checking program, replacing it with a community-based system similar to X’s “Community Notes.”
This move could shake up brand safety, as less stringent content moderation opens the door for misinformation to spread, jeopardizing ad performance and brand reputation across Meta’s platforms.
The shift, revealed today, reflects CEO Mark Zuckerberg’s commitment to restoring free expression and simplifying content moderation across Facebook, Instagram, and Threads.
The fact-checking program, launched in 2016, was designed to combat misinformation by partnering with independent organizations.
This move will see Meta's platforms rely more on users to flag misleading content, a strategy already used by X.
In a statement posted on Instagram, Zuckerberg said it's "time to get back to our roots around free expression."
Meta’s decision follows recent appointments of conservative figures to its leadership team, including Joel Kaplan as global affairs head and Dana White to its board.
The transition is expected to reshape Meta’s policy landscape.
The company says it will focus on improving the accuracy of its content filters while reducing reliance on external fact-checking groups.
As Meta phases into the new Community Notes system, it faces heightened scrutiny over its ability to curb disinformation without relying on third-party fact-checkers.
In the coming months, Meta will bolster its Community Notes system and shift content moderation teams to new locations outside California.
The company will focus on addressing severe violations such as terrorism and illegal content, while leaving less urgent misinformation to be flagged by users.
The shift will affect over 3 billion users of Meta’s platforms, and its success will be closely monitored, particularly as it competes with X’s own model.
Meta’s independent Oversight Board welcomed the decision, but some former partners, like Check Your Fact, expressed surprise and concern.
The changes come amid rising concerns about the spread of misinformation on social media, with critics arguing that a community-based approach could worsen the issue.
Ross Burley, co-founder of the Centre for Information Resilience, says the move is a step back for content moderation.
Meta Shift Impacts Business Dynamics
Meta's transition to community-based content moderation signals a significant evolution in how the platform manages content.
Businesses should now consider developing internal protocols to verify the accuracy and appropriateness of their content. In other words, brands must place greater emphasis on self-regulation and accountability.
For example, a healthcare company may need to build a dedicated team to monitor and verify health-related posts to prevent misinformation that could harm public health, and to take extra care with opinions that users may find offensive.
Additionally, advertisers may need to reevaluate their placements on Meta's platforms.
The loosening of content oversight could increase brand safety risks, especially for companies seeking to avoid association with controversial or misleading information.
With Meta's evolving moderation strategy, brands must stay agile and adapt quickly, tracking shifts in audience behavior and algorithm changes to refine their messaging and protect their reputation.
By staying informed and responsive to these changes, businesses can safeguard their online reputation and continue to thrive.
It's not the first time Meta has made changes to enhance user experience and safety.
The company has recently taken steps to improve online safety for younger users on Instagram.