
UK: X created a 'staggering amplification of hate' during the 2024 riots

False claims on X (formerly Twitter) spread rapidly after tragic triple murder in Southport, contributing to violent racist riots across the UK.

Amnesty International’s analysis links X’s design and policy choices to the amplification of content which incited violence against Muslims and migrants.

'One year on, even with the arrival of the Online Safety Act, it appears nothing has changed' - Sacha Deshmukh, Amnesty International UK Chief Executive

A year after racist riots erupted across England and Northern Ireland following the murder of Bebe King (aged 6), Elsie Dot Stancombe (aged 7) and Alice da Silva Aguiar (aged 9) in Southport, Amnesty International has found that social media platform X played a central role in the spread of false narratives and content which incited violence against Muslim and migrant communities. 

Our analysis of X’s open-source code reveals that the platform’s content ranking algorithms prioritise the type of content that can spread misinformation and hate, with deeply inadequate safeguards to prevent human rights abuses.

“Our research demonstrates that these design choices significantly exacerbated human rights risks for racialised communities in the wake of the Southport riots - and continue to present a serious human rights risk today,” said Pat de Brún, Amnesty International’s Head of Big Tech Accountability.

Islamophobic and xenophobic rhetoric on X after the Southport attack

Within hours of the murder of the three young girls and attempted murder of ten others by 17-year-old Axel Rudakubana, incendiary posts by far-right influencers went viral on X. Within 24 hours, posts falsely claiming the attacker was Muslim, a refugee and/or a migrant who had arrived by boat reached an estimated 27 million impressions.

Our analysis of X’s recommender system and its removal of safeguards demonstrates that the platform is set up to amplify this type of harmful content, with little in place to prevent it. The net result was a staggering amplification of hate speech and anti-migrant sentiment. Andrew Tate, a notorious online influencer, posted a video falsely claiming the attacker was an “undocumented migrant” who “arrived on a boat”, whilst Elon Musk claimed that “civil war is inevitable”.

Stephen Yaxley-Lennon’s (also known as Tommy Robinson’s) posts on X, including claims such as there being “more evidence to suggest Islam is a mental health issue rather than a religion of peace” received over 580 million views in the two weeks following the Southport attack - an unprecedented reach for a figure banned on most mainstream platforms for breaching hate speech rules.

Algorithmic Amplification of Harmful Content

Amnesty International’s analysis of X’s open-source recommender algorithm found that it gives top priority to content that drives “conversation”, even when that conversation is driven by misinformation or hate. This is exacerbated by the artificial amplification of posts from paying “premium” verified subscribers. These systems enabled toxic, racist, and false content to thrive on X in the wake of the stabbings, fuelling and contributing to violence, including racist attacks.

“X’s algorithm favours what would provoke a response and delivers it at scale. Divisive content that drives replies, irrespective of their accuracy or harm, may be prioritised and surface more quickly in timelines than verified information,” said Pat de Brún, Amnesty’s Head of Big Tech Accountability.
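To illustrate the mechanism described above, the sketch below shows in simplified Python how an engagement-weighted ranker of this kind can favour a reply-provoking post from a paid account over an accurate but less provocative one. The signal names, weights and premium multiplier are illustrative assumptions loosely modelled on public reporting about X’s open-sourced code, not the platform’s actual values.

```python
# Illustrative sketch of engagement-weighted ranking. All names and numbers
# here are assumptions for demonstration, not X's actual open-source values.
from dataclasses import dataclass

# Relative weights: reply-driving outcomes dwarf simple likes, so posts that
# provoke "conversation" rise fastest, whatever that conversation contains.
ENGAGEMENT_WEIGHTS = {
    "p_like": 0.5,                   # predicted probability of a like
    "p_repost": 1.0,                 # predicted probability of a repost
    "p_reply": 27.0,                 # replies weighted far above likes
    "p_author_engages_reply": 75.0,  # author replying back: strongest boost
    "p_report": -369.0,              # reports penalise, but only after harm
}

PREMIUM_BOOST = 4.0  # assumed multiplier for paid "premium" verified authors


@dataclass
class Post:
    author_is_premium: bool
    predictions: dict  # model-predicted engagement probabilities per signal


def rank_score(post: Post) -> float:
    """Weighted sum of predicted engagements, boosted for premium authors."""
    score = sum(
        ENGAGEMENT_WEIGHTS[signal] * probability
        for signal, probability in post.predictions.items()
        if signal in ENGAGEMENT_WEIGHTS
    )
    return score * PREMIUM_BOOST if post.author_is_premium else score


# A divisive premium post with modest like odds but high predicted replies
# outranks an accurate post that viewers merely like:
divisive = Post(True, {"p_like": 0.02, "p_reply": 0.10})
accurate = Post(False, {"p_like": 0.30, "p_reply": 0.01})
assert rank_score(divisive) > rank_score(accurate)
```

Under weights like these, a single strong “conversation” signal outweighs many weaker approval signals, which is why divisive posts can surface in timelines faster than verified information during a fast-moving crisis.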

“One year on, public safety still at risk”

The Southport tragedy unfolded against the backdrop of Elon Musk’s takeover of X in late 2022. Since then, X has dismantled or weakened many of its safety guardrails aimed at curbing hate speech and disinformation, from mass layoffs of content moderation staff to the reinstatement of previously banned accounts - with no evidence of human rights impact assessments.

Today, the way X’s system weights, ranks, and boosts content - particularly posts that generate heated replies, or that are shared or created by “blue” or “premium” accounts, often paying users with limited identity verification - means that inflammatory or hostile posts are likely to gain traction during periods of heightened social tension. Where such content targets racial, religious and other marginalised groups, portraying them as threatening or violent, X’s algorithms risk inciting discrimination, hostility or violence.

Sacha Deshmukh, Amnesty International UK Chief Executive said:

“By amplifying hate and misinformation on such a massive scale, X acted like petrol on the fire of racist violence in the aftermath of the Southport tragedy. The platform’s algorithms not only failed to ‘break the circuit’ and stop the spread of dangerous falsehoods; they are highly likely to have amplified them."

"One year on, it appears nothing has changed. Indeed, just two weeks ago we saw false online rumours about the transfer of people seeking asylum spark protests outside the Brittania Hotel in Canary Wharf. The UK’s online safety regime fails to keep the public safe, and the risk of X fuelling violence, discrimination and social tensions remains as high as they were during the rioting last year.

"The UK government must address the gaps in the Online Safety Act and challenge the racist rhetoric and scapegoating of refugees that are flourishing on social media. Regulators must hold X to account for its repeated role in human rights abuses and recognise that the self-regulation model is clearly failing. In cases where X’s algorithm is found to have amplified content that led to racist attacks during the riots, X should provide an avenue for remedy and establish a restitution fund for affected communities."
