Facebook promises to protect public debate, but it doesn’t do what it says

When it comes to misinformation, Facebook’s actions don’t match its rhetoric about protecting democracy and public debate. A recent study reveals how inadequate the platform’s efforts to combat the scourge of “fake news” have been precisely when this type of content is most prevalent: during elections and health crises.

During the 2018 and 2020 Brazilian elections and the COVID-19 pandemic, we examined Facebook’s use of warning labels, fact checks, and other initiatives aimed at curbing widespread misinformation. The research and its conclusions are summarized in the article “Mind the Gap: Facebook’s measures against information disorder do not go far enough”, published in the journal Media, Culture and Society. What we found was a mess of inconsistent policies, loopholes, and apparent indifference that allowed false or misleading content to slip past these mitigation processes and keep spreading among the platform’s users.

Even when fact-checking agencies partnered with Facebook through Meta’s Third-Party Fact-Checking program confirmed that something was demonstrably false or misleading, this did not always result in warning labels being applied on the platform. We found cases where content was initially labeled as false or misleading but continued to spread under the radar after superficial edits by the disseminators of this type of content. Proven disinformation was able to bypass Facebook’s mechanisms based on something as simple as how the false claim was visually displayed.

Posts pushing the same lies about COVID-19 vaccines could inexplicably receive different treatment: one presentation format might carry a “false information” label while another carried nothing at all.

Even more worrying is how predictable many of these shortcomings should have been for a tech giant like Facebook. Of course low-tech tricks, like slightly altering images or routing claims through website link previews, would be used to circumvent moderation attempts.
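To illustrate why such trivial tweaks work, consider a deliberately simplified stand-in for content matching (our study does not describe Facebook’s internal detection systems): a pipeline that fingerprints already-labeled images with an exact cryptographic hash will fail to recognize a copy that differs by a single byte. A minimal Python sketch, with hypothetical placeholder data:

```python
import hashlib

# Hypothetical stand-in bytes for an image that fact-checkers
# have already labeled as false.
original = b"fake-vaccine-infographic-pixels"

# A naive pipeline fingerprints labeled content with an exact hash.
labeled_fingerprint = hashlib.sha256(original).hexdigest()

# A disseminator re-uploads the image with a trivial change:
# one recolored pixel, a recompression, a thin added border.
altered = original + b"\x00"  # simulate the tiny alteration

# The exact-match check no longer fires; the digests are unrelated.
print(hashlib.sha256(altered).hexdigest() == labeled_fingerprint)  # False
```

Production systems typically rely on perceptual hashes and classifiers that tolerate small edits, but the dynamic is the same arms race: every detector invites a mutation just large enough to slip past it.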

Clearly, Facebook’s venerated algorithms, despite their touted complexity, cannot reliably detect misinformation in all the forms it takes on a platform of billions of users, where bad actors constantly adapt their tactics. Human content moderators also inevitably miss things amid the sheer volume.

But what our study really highlights is the lack of genuine motivation. Between the inconsistent rules, the systematic loopholes, and the financial incentives built into the recommendation system, it is clear that Facebook is failing to disrupt the misinformation business model that keeps its ad machine running.

While legitimate news outlets must follow draconian rules about anything that could be considered disinformation, disinformation factories always seem to find loopholes, because Facebook’s economic incentive is to keep engagement and outrage at a maximum.

Facebook’s response has always been to point to its “substantial investments,” but we have reached a point where strengthening disinformation labels, hiring more fact-checkers, or doubling down on artificial intelligence will not be enough. If Facebook wants to honor its civic responsibilities and self-declared values, it will need to rebuild the fundamental dynamics of its platform so that the viral spread of misinformation stops being a sustainable business model.

This would mean strictly depriving all propagators of false content of the ability to monetize through autoplay videos, engagement-driven algorithmic amplification, or paid promotion gimmicks. It would mean ruthlessly reducing the reach of any actors who persistently push misinformation, relegating them to a shadow purgatory where their lies cannot spread. And it would mean making it easier to identify even the subtlest mutations of recycled disinformation campaigns before they can flourish again.

More fundamentally, it would mean protecting public discourse from being overwhelmed by the easy engagement that the disinformation industry harvests from anger and outrage.

This may sound like an existential dilemma for Facebook. The platform’s current uncoordinated and insufficient efforts – perhaps motivated more by public relations than genuine protection of democracy – leave it complicit in the real social dangers and harms of misinformation. It’s time for Zuckerberg and his team to decide how far they will go to change course and live up to their stated ideals.

