Creators, NGOs, politicians, and activists have responded with a mix of dismay, concern, cheers and applause to changes announced by Meta to its content moderation and fact-checking policies, which are set to drastically change the face of, and user experience on, Instagram, Facebook and Threads in the US.
The new policies, announced by Meta CEO Mark Zuckerberg earlier this week, will eliminate independent fact-checking of content posted to Instagram, Facebook and Threads. It will be replaced with a system of Community Notes similar to that of X (formerly Twitter), which relies on users to provide context and corrections to potentially misleading content.
Meta will also loosen content moderation restrictions and revise its Hateful Conduct policies to allow more content on controversial topics such as immigration, gender identity, and elections.
The key changes include:
- Eliminating fact-checkers: Independent fact-checking partnerships are being replaced by a “Community Notes” model, similar to X (formerly Twitter), which relies on user-generated context and corrections instead of professional fact-checking.
- Relaxing moderation policies: Meta will now allow more content on controversial topics such as immigration, gender identity, and elections, arguing that these subjects are central to political discourse.
- Loosening hateful conduct policies: Content targeting minorities, such as allegations linking race or gender to mental illness, or advocating for gender-based job restrictions, is now permitted under Meta’s revised guidelines.
- Removing language linking hate speech to offline violence: Despite past cases like Facebook’s role in inciting violence against minorities in Myanmar, Meta has removed the explicit connection between hate speech and violence from its policies.
Meta said it would continue to remove illegal content, such as that related to drugs, firearms or child sexual abuse material.
“We're going to simplify our content policies and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse,” Zuckerberg said. “What started as a movement to be more inclusive has increasingly been used to shut down opinions and shut out people with different ideas, and it's gone too far.”
Political and issues-based creators to make a comeback
Seeking to reassure content creators, Instagram head Adam Mosseri said creators would feel the greatest impact in relation to what he called “over-enforcement [of hateful conduct rules] and political content”.
“We will now show political recommendations, that means posts from accounts you do not follow across our platforms and in a personalised way,” Mosseri said in an Instagram Reel. “And you don’t have to worry about your account becoming non-recommendable overall if you predominantly post about politics.”
Meta deprioritised political content on Instagram and Facebook following the 2020 US presidential elections. At the time, many political content creators and activists complained about reduced reach and engagement on their accounts, and argued that limiting political content could suppress public debate and the dissemination of different viewpoints.
Meta has now dropped the policy in what it says is a response to user demand for more political discourse on the platforms.
Mosseri said the changes were designed to help “anybody with something worth sharing to be able to be creative on our platform”.
Instagram, Facebook and Threads users can expect to see a steady stream of political content from accounts they don’t follow in their feeds almost immediately.
Free speech or free hate? Meta changes raise concerns for the future of a safe and ethical web
But critics around the world decried the Meta policy changes as an attempt to curry favour with the incoming Trump administration in the US, with many arguing the changes would result in the spread of misinformation, and increased trolling and bullying of minority groups.
Trump and his Republican allies have been highly critical of content moderation policies designed to stamp out hate speech, arguing content moderation is an attack on free speech.
Responding to the Meta policy changes, Kolsquare Founder and CEO Quentin Bordage expressed concern that the uncontrolled approach to content moderation was a return to past practices, when misinformation and violent speech were rife on Meta platforms and led to real-world consequences.
“It seems frightening. These changes come amid accusations of political bias and censorship from certain US political figures, including President-elect Trump. Mark Zuckerberg’s framing of the move as addressing user trust raises concerns about tech companies prioritising business interests over societal responsibility,” Bordage said.
“The changes at Meta underscore the urgent need for global standards and regulation in the social media space. While we continue to observe how these developments will impact Europe, Kolsquare advocates for an international framework that connects social media regulation with ethical advertising and content practices.”
NGOs and activist groups lined up to denounce the changes to content moderation policies, which they said would unleash a wave of hate speech directed at minorities and expose children to more harmful content online.
“Zuckerberg’s removal of fact-checking programs and industry-standard hate speech policies make Meta’s platforms unsafe places for users and advertisers alike. Without these necessary hate speech and other policies, Meta is giving the green light for people to target LGBTQ people, women, immigrants, and other marginalised groups with violence, vitriol, and dehumanising narratives,” GLAAD President and CEO Sarah Kate Ellis said in a statement.
“With these changes, Meta is continuing to normalise anti-LGBTQ hatred for profit — at the expense of its users and true freedom of expression. Fact-checking and hate speech policies protect free speech.”
Ian Russell, chairman of the Molly Rose Foundation, whose daughter died after seeing harmful content on Instagram, said:
“Meta’s decision to roll back on content moderation is a major concern for safety online. We are dismayed that the company intends to stop proactive moderation of many forms of harmful content and to only act if and when a user complaint is received.”
For US creators and influencer marketers, Meta’s content moderation changes compound uncertainty in the social media landscape as TikTok ban looms
Meta’s decision to abandon independent fact-checking programs and loosen restrictions on potentially harmful content further roils an already turbulent US social media landscape.
With the spectre of a TikTok ban coming into force on January 19, creators have been scrambling to build and replicate their communities on other platforms, including Instagram and Facebook. The new policies may force some to rethink their commitment to maintaining active accounts on these platforms.
Business Insider reported creators were divided along partisan lines in their opinions about the changes to content moderation and fact-checking on Meta.
Meta’s decision this week also means content creators active on several platforms will have to contend with vastly different sets of rules. To date, neither YouTube nor its parent company Google has responded to the Meta decision or indicated similar changes to their content moderation and fact-checking policies.
To what extent Facebook and Instagram users and creators will quit the platforms remains to be seen. However, Meta’s roiling of the user experience on its platforms can be expected to lead to some fragmentation of the social media ecosystem as users seek alternatives.
In the aftermath of Meta’s announcement, TechCrunch reported a surge in Google searches seeking information on deleting Instagram and Facebook, and for ‘Facebook alternative’ platforms like Mastodon and Bluesky.
Early winners from Meta’s content moderation and fact-checking changes could include Bluesky, which has already banked a surge of users who fled X out of discontent with content and curation on the platform following Elon Musk’s changes and his heavy involvement in the US election.
X, which has bled users and advertisers in the wake of owner Elon Musk’s takeover of the platform and subsequent downgrading of content moderation policies, welcomed the Meta decision.
Threads, which despite onboarding millions of users early on has been criticised as boring and banal due to its refusal to host political content, could also see a sudden boost in traction and engagement in feeds.
For influencer marketers, the changing user experience on Meta platforms will force deeper analysis of creators’ social media histories and values to avoid risks to brand safety, and of audience data to ensure that target audiences remain responsive on those platforms.
Combined, Meta apps have a daily active user base of some 3.29 billion people globally, making them an indispensable destination for marketers and creators. And while this is likely to remain the case, influencer marketers overly reliant on Meta platforms would be advised to investigate the relevance of other platforms to their strategies in the wake of changing content moderation policies.
While some advertisers expressed concern about the potential impact of loosened content moderation on brand safety, analysts predicted advertisers would be reluctant to leave Meta platforms due to their size, importance, and an advertising return on investment that is hard to beat.
Europe flexes its muscles: EC pushes back against censorship accusations, warns changes to content moderation policies are subject to DSA oversight
In response to Meta’s announcement, European leaders warned Meta that any changes to content moderation and fact-checking policies could not be implemented without first conducting a risk assessment that must be approved by the European Commission.
“I spoke with the management of Meta France this evening [Tuesday] and was assured that this functionality [the elimination of independent fact-checkers with a system of Community Notes] is only being deployed in the US for the moment,” France’s Minister for Artificial Intelligence and Digital Clara Chappaz said on X.
“In Europe, the Digital Services Act will be respected. You can count on my vigilance on this subject.”
Under the EU’s Digital Services Act (DSA), Meta’s platforms — and others like YouTube and X with more than 45 million monthly active users — must engage in rigorous content risk assessments, provide detailed and transparent reporting on activities taken to mitigate the risk of misinformation and harmful content, work with independent fact checkers, and submit to independent audits of these activities.
Failure to comply with these provisions can result in fines of up to 6% of a company’s global revenue.
The European Commission is already investigating Meta’s handling of disinformation, political content and processes for flagging illegal content under the DSA.
In his statement announcing the policy change, Zuckerberg took direct aim at Europe’s digital safety laws, saying he would work with US President-elect Donald Trump to push back against censorship around the world.
“Europe has an ever increasing number of laws institutionalising censorship and making it difficult to build anything innovative there,” Zuckerberg said.
In response, the EC said it only forced platforms to take down content that is illegal or may be harmful, such as to children or to the EU’s democracies, Reuters reported.
“We absolutely refute any claims of censorship,” an EC spokesperson said.
In the UK, the Department for Science, Innovation and Technology said it was looking closely at Meta’s announcement impacting its US platform.
“The UK’s Online Safety Act will oblige them to remove illegal content and content harmful to children here in the UK, and we continue to urge social media companies to counter the spread of misinformation and disinformation hosted on their platforms,” the department said, as reported by The Guardian.
About Kolsquare
Kolsquare is Europe’s leading Influencer Marketing platform, offering a data-driven solution that empowers brands to scale their KOL (Key Opinion Leader) marketing strategies through authentic partnerships with top creators.
Kolsquare’s advanced technology helps marketing professionals seamlessly identify the best content creators by filtering their content and audience, while also enabling them to build, manage, and optimize campaigns from start to finish. This includes measuring results and benchmarking performance against competitors.
With a thriving global community of influencer marketing experts, Kolsquare serves hundreds of customers—including Coca-Cola, Netflix, Sony Music, Publicis, Sézane, Sephora, Lush, and Hermès—by leveraging the latest Big Data, AI, and Machine Learning technologies. Our platform taps into an extensive network of KOLs with more than 5,000 followers across 180 countries on Instagram, TikTok, X (Twitter), Facebook, YouTube, and Snapchat.
As a Certified B Corporation, Kolsquare leads the way in promoting Responsible Influence, championing transparency, ethical practices, and meaningful collaborations that inspire positive change.
Since October 2024, Kolsquare has become part of the Team.Blue group, one of the largest private tech companies in Europe, and a leading digital enabler for businesses and entrepreneurs across Europe. Team.Blue brings together over 60 successful brands in web hosting, domains, e-commerce, online compliance, lead generation, application solutions, and social media.