Recommending hate: How TikTok’s search algorithms amplify hate speech

Algorithms amplify stereotypes and discrimination, including against marginalised women

The big idea: TikTok’s search engine systematically links hateful and misogynistic search prompts to content featuring presumed members of marginalised groups, exposing them to increased harassment and perpetuating societal biases.

Didn’t we already know that? Previous research has documented algorithmic bias in social media feeds and search engines, notably Google Photos’ 2015 failure, when its image recognition algorithm mislabelled black people as ‘gorillas’, and the over-sexualisation of black women in search results. This new report confirms that TikTok’s search engine, a relatively under-examined area, similarly fails to filter hateful content and instead amplifies it. It adds new evidence of how TikTok’s algorithms associate hateful search terms with African-descended groups and other marginalised communities across four languages: English, French, German and Hungarian.

Why it matters: TikTok’s role as a major information source for younger users globally means its search algorithms can normalise and spread online gender-based violence (OGBV) and racial discrimination. The report highlights the platform’s failure to adequately moderate hateful content, which risks reinforcing systemic misogyny and racism, including anti-black and anti-Romani hate. This is especially critical for African and diaspora communities, who face disproportionate exposure to harmful stereotypes that damage their mental health and safety online.

Key findings:

Nearly two-thirds (197 out of 300) of TikTok videos returned by hateful search prompts perpetuated harmful stereotypes targeting marginalised groups, including black and Romani women.

Only ten videos explicitly contained the hateful search terms; most results were surfaced through indirect algorithmic matching, such as partial keyword matches, synonyms or visually inferred metadata.

The algorithms often linked anti-black slurs to videos featuring black individuals, including some linked to Nigeria, showing a harmful conflation between slurs and African identities.

TikTok’s opaque ranking and matching system prevents a clear understanding of how hateful content is surfaced and why certain videos are recommended.

The platform’s current content moderation and artificial intelligence detection systems are insufficient to prevent the spread of misogynistic and racist content in search results.

Go deeper: What exactly did the research find?

  • The report’s multilingual approach found consistent patterns of algorithmic bias in English, French, German, and Hungarian, with hateful prompts targeting Arab/Muslim, black, and Romani women.
  • In Hungarian, a sexist slur combined with a Romani self-descriptor disproportionately surfaced videos of Romani women, reinforcing gendered and ethnic stereotypes.
  • In French, hateful slurs against black women surfaced videos containing related derogatory terms or their translations, showing the risks of algorithmic query expansion.
  • The algorithm’s ‘auto-correct’ behaviour linked the German anti-black slur ‘N*gerin’ to ‘Nigeria’, demonstrating how spelling similarity can dangerously associate hateful terms with African content creators (a simplified sketch of this matching effect follows this list).
  • The report calls for TikTok to increase transparency, incorporate gender analysis in algorithm design, and involve impacted communities in risk assessments to mitigate these harms.
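How might spelling similarity pull unrelated content into the results for a hateful query? The report does not disclose TikTok’s matching logic, but the minimal Python sketch below shows how a naive fuzzy matcher over an index of tags can redirect a slur towards ‘Nigeria’-related terms. The term list, the censored stand-in query and the 0.6 similarity cutoff are illustrative assumptions, not details from the report.

```python
# Illustrative only: TikTok's real matching system is undisclosed. This naive
# fuzzy matcher shows how small spelling differences can conflate a hateful
# query with unrelated, legitimate terms.
import difflib

# Hypothetical index vocabulary of tags a search backend might match against.
index_terms = ["nigeria", "nigerian", "germany", "budapest", "lagos"]

# Censored stand-in for the German slur cited in the report.
query = "n*gerin"

# difflib.get_close_matches ranks candidates by SequenceMatcher similarity
# (default cutoff 0.6), so near-identical spellings are treated as matches.
matches = difflib.get_close_matches(query, index_terms, n=3, cutoff=0.6)
print(matches)  # -> ['nigerian', 'nigeria']
```

A production search system would layer ranking signals, metadata and safety filters on top of any such matching; the point of the sketch is only that similarity-based query handling, without safeguards, can attach hateful prompts to content from African creators.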

The bottom line: TikTok’s search engine algorithms are not neutral; they replicate and amplify societal biases that fuel online hate and gender-based violence. Without urgent reforms in moderation, transparency, and algorithmic accountability, marginalised groups — especially black and Romani women and African-descended communities — will continue to face disproportionate harm. Policymakers must enforce stronger oversight and data access to enable independent auditing and mitigation of algorithmic discrimination on platforms such as TikTok.

Want more? Get the full evidence and context here:

This summarised article was produced by CfA editor-in-chief Justin Arenstein, using a customised agentic AI toolkit built on Perplexity.AI, and was copy-edited by Gloria Aradi and proofread by CfA iLAB editor Athandiwe Saba.
