Anti-Muslim Hate

Failure to protect

Social media companies are failing to act on anti-Muslim hate 89% of the time.

Earlier reports by CCDH have shown that platforms have similarly failed to act on antisemitism, anti-Black racism, misogynist abuse, and dangerous vaccine misinformation.

Join us and demand that Facebook, Instagram, TikTok, Twitter, and YouTube take swift action to stop profiting from and ban anti-Muslim hate on their platforms.


About

In a joint statement in 2019, Meta, Twitter, and Google committed to upholding the Christchurch Call to Eliminate Terrorist and Violent Extremist Content Online. They stated that they would be “resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence”.


Once again, their press releases prove to be nothing more than empty promises. This report reveals that social media companies, including Facebook, Instagram, TikTok, Twitter, and YouTube, failed to act on 89% of posts containing anti-Muslim hatred and Islamophobic content reported to them. Using the platforms’ own reporting tools, CCDH researchers reported 530 posts containing disturbing, bigoted, and dehumanizing content that targets Muslim people through racist caricatures, conspiracies, and false claims.

These posts were viewed at least 25 million times. Much of the abusive content was easily identifiable, yet the platforms still failed to act. Instagram, TikTok, and Twitter allow users to post hashtags such as #deathtoislam, #islamiscancer, and #raghead. Content spread using these hashtags received at least 1.3 million impressions.

What is the impact of inaction? When social media companies fail to act on hateful and violent content, they know there is a significant threat of offline harm. Anti-Muslim hate seeks to dehumanize and marginalize communities of people who have historically been the subject of violent threats, attacks, discrimination, and hostility. Enabling this content to be promoted and shared on platforms without effective interventions and consequences further endangers these communities by driving social divisions, normalizing abusive behavior, and encouraging offline attacks and abuse. Worse still, platforms profit from this hate, gleefully monetizing content, interactions, and the resulting attention. For them, hate is good business.

Contrary to their press releases and pledges, Facebook and Instagram failed to act on 89% of content promoting the “Great Replacement” conspiracy theory, which inspired the terrorists behind the Christchurch mosque attacks in 2019 and the Tree of Life synagogue shooting in 2018. This content falls directly within the scope of the Christchurch Call commitments, which the companies in this study pledged to advance.

The conspiracist and racist content identified in this report spreads and perpetuates hatred of Muslims and their faith. It has a chilling effect on these communities and prevents Muslim people from exercising their freedom of religion and speech online. Like Big Tech’s failures to act on antisemitism, anti-Black racism, misogynistic abuse, and misinformation, the companies’ continued failure to act on anti-Muslim hate creates an ecosystem that restricts freedom of expression and pushes marginalized people off their platforms, all while allowing white supremacist, extremist, and hateful content to thrive and deliver record profits to their shareholders.

Legislators, regulators, and civil society no longer believe social media companies when they promise to act on extremism and hate. Systemic and unchecked failures, like those identified in this report, must be addressed, and technology companies must be held to account. Meta has been sued by victims of the Rohingya genocide for its failure to address anti-Muslim attacks on its platforms, and yet Facebook failed to act on 94% of posts in this sample. The status quo is insufficient to incentivize technology companies to take their responsibilities towards Muslim communities and other groups seriously.


Our experience as an organization suggests that three things are missing from existing powers globally:
1) The power to compel transparency around algorithms (which select which content is amplified and which is not); enforcement of community standards (which rules are applied, and how and when); and economics (where, when, by whom, and using which data advertising, which makes up the bulk of social media platforms’ revenues, is placed).
2) The power to hold social media platforms accountable at an individual, community, and national level for the impact of the content they monetize.
3) The power to hold social media executives accountable for their conduct as administrators of platforms that hold enormous power over discourse, not just in terms of content moderation but also the amplification of content, the institutional and user experience design of the systems through which discourse occurs, and equity in user experience for marginalized communities.


Speaking personally for a moment, if I may. My mom is Muslim. She is a good, hardworking, kind, and loving woman. She deserves better from those who have the power to protect her from the amplified hate of conspiracy theorists and devious, capable merchants of hate, and yet fail to do their bit. I cannot sleep when I see injustice. It makes me want to act. I cannot, for the life of me, fathom how the billionaires who own these platforms sleep at night when they know they could do so, so much more.

Imran Ahmed
CEO, CCDH