Meta's recent announcement of profound changes to its content moderation and fact-checking policies is shocking and raises critical questions about the future of information regulation.
In an era marked by misinformation and political polarization, critics such as writer Michael Harriot go so far as to say that Mark Zuckerberg "didn't just announce changes to Meta's content moderation policy. He didn't even announce that Meta's content policy will change. He announced that his company is willing to help kill people."
But the decision is not merely ideological; it is also about business. While ideology plays a role, the core motivation is protecting a business model that thrives on fake news and hatred. Hate sells, or, more accurately, the reaction to it does: it is the outrage that generates engagement. Far-right figureheads are also often more willing to defend "free speech" when it comes to hateful speech, which in turn eases the burden on social media platforms to moderate content, saving them money.

Fact checking, in general, has proven increasingly ineffective. Individuals rarely change their beliefs when confronted with factual corrections; instead, they tend to double down on their pre-existing views. This means that fact checking, though far from censorship, can inadvertently reinforce false beliefs rather than dispel them: people do not really seek the truth, and corrections can make them more likely to distrust the media and deepen their attachment to conspiracies.
To protect their business model, Meta and billionaire Elon Musk's X have aligned themselves with the far right, seeking to shield themselves from regulatory pressure from the European Union (EU) and other nations. They are prioritizing a business model over the integrity of information and democracy.
Unrestricted free speech is a concept that exists only in the U.S.; elsewhere it clashes with local legislation. For these companies, if democracy is bad for business, then down with democracy. Yet democracy is far more than the will of the majority, or simply letting everyone speak as if all ideas had equal weight.
This is also why a Brazilian Supreme Court judge ordered the suspension of X in Brazil for months last year, after Elon Musk refused to comply with orders to suspend profiles that spread hate and threatened the country's democracy. Free speech wasn't the issue; respecting Brazilian law was.
Meta's recent changes coincide with a broader trend among social media platforms to cater to right-wing populist sentiments. This alignment serves dual purposes: it bolsters the platforms' user engagement by appealing to a specific demographic, while simultaneously protecting their business interests against regulatory scrutiny. The partnership with Trump's administration comes as no surprise, as Musk, Zuckerberg, and others can claim to defend free speech when they are simply protecting their businesses.
The implications of this alignment extend beyond U.S. borders. Countries like Brazil have already taken steps to regulate platforms like X, demonstrating that governments can (and should) intervene when social media becomes a breeding ground for extremism. The EU must now consider similar actions against Meta to safeguard public discourse and prevent the spread of harmful misinformation.
It is imperative for regulators to explore alternatives to traditional fact checking methods, and the best way to do so is through algorithmic governance. Unlike conventional content moderation, which often reacts after misinformation has spread, algorithmic governance proactively shapes information flows through data-driven approaches. This method aims to suppress harmful content before it gains traction, fostering a more balanced digital environment without amplifying lies and extremism.
Although moderation decisions make for lively debate, the content that ultimately appears on social media platforms is primarily determined by algorithms that remain largely beyond the scope of political discourse.
By involving civil society and governments in the development of these algorithms, we can create a system that reflects collective values and priorities rather than the whims of a few powerful individuals or corporations. Those few pose a real danger to democracy: their power is not just economic but stems from absolute control over information flows.
The rise in misinformation demands urgent global action. As platforms like Meta retreat from responsibility, it falls upon governments and civil society to step in and establish frameworks that hold these companies accountable.
This includes creating mechanisms for users to understand and appeal decisions made by platforms, and ensuring that those who spread misinformation face appropriate consequences: partnerships with the judiciaries of different countries could speed up takedowns and the review of valid complaints from users. It also means tackling misinformation for what it is, punishing those who spread it, banning them from social media, and even prosecuting them.
A handful of billionaires shouldn't be able to decide how public debate takes place, nor which topics can or cannot be discussed, and even less which political groups or parties should be in the spotlight.
Raphael Tsavkko Garcia is a Brazilian journalist and editor based in Belgium. He holds a PhD in human rights from the University of Deusto (Spain).
The views expressed in this article are the writer's own.