Meta's Fight to Curb Hate Speech Ahead of U.S. Elections
Nov 05, 2024 | Shibasis Rath
As the U.S. elections approach, Meta, the parent company of Facebook and Instagram, faces mounting challenges in moderating hate speech. New research underscores how difficult it remains to curb harmful content and the scrutiny the company continues to face.
The Battle to Moderate Hate Speech
Meta, Facebook and Instagram's parent company, is facing intense scrutiny over its failure to moderate hate speech ahead of the United States presidential election. A new study shared with the Thomson Reuters Foundation indicates the company still has much work to do in managing hateful content, despite years of criticism.
Hate Speech Moderation Test
A nonprofit called Global Witness tested Meta's response to hate speech by analyzing 200,000 comments posted on the pages of 67 U.S. Senate candidates over the course of a month. Using Facebook's reporting tools, Global Witness reported 14 egregious comments that violated Meta's community standards on hate speech, but Meta did not act on them in a timely manner. Other comments targeting people based on their religion and sexual orientation remained online until the researchers contacted Meta directly.
Delayed Response and Broader Criticisms
According to Ellen Judson, a researcher at Global Witness, Meta's delayed response reveals flaws in the company's strategy for combating hate speech. The incident adds to a long list of criticisms Meta has faced from researchers and lawmakers around the world. The European Commission, for instance, opened an investigation in April into whether Meta complies with the EU's rules on online content.
Meta responded that the sample from which Global Witness drew its conclusions was small and that, in any case, most of the flagged comments fell under other policy provisions. Judson countered that even small amounts of hate speech can cause psychological harm and discourage political participation.
Reduced Investment in Election Security
According to a former Twitter public policy official, Meta's struggles stem from its reduced investment in election readiness. The company has cut back the teams that monitor political content, even though its platforms remain widely used: 68% of U.S. adults use Facebook, and nearly half use Instagram, which Meta also owns. Meta says hate speech is at historic lows, accounting for under 0.02% of content views, though a former data scientist argued that its automated systems lack the nuance to fully understand context.
Transparency and Accountability
Meta has previously pointed to a $20 billion investment in election integrity. Yet critics argue the platform is not doing enough to bring transparency to its hate speech moderation. In October, misleading election ads were found to have bypassed Meta's ad review process. Judson and others are now calling for more transparency, including data on how many users are exposed to hate speech and insight into how Meta's moderation processes work.
Judson warns that unless Meta increases transparency and accountability, it will remain in "catch-up" mode, unable to fully safeguard its platform for U.S. elections.
This article reflects ongoing discussions around Meta's content moderation challenges ahead of the U.S. elections.