
Fighting the Twitter Trolls

Image: a woman looks tense at her laptop; the overlaid text reads "Amnesty is recruiting volunteers to track abuse against women on #ToxicTwitter"

By Milena Marin, Amnesty International's Senior Innovations Campaigner

Many women on Twitter deal with abusive tweets on a regular basis. Although such tweets clearly violate Twitter’s own hateful conduct policy, which prohibits, among other things, “non-consensual racial tropes” and “violent threats”, they remain online.

Women clearly can’t rely on Twitter to identify abuse and deal with it. That’s why Amnesty International has devised a unique crowdsourcing project which gives anyone with a phone or laptop a way to fight back.

Last month Amnesty International published extensive research into women’s experiences of reporting violence and abuse to Twitter, which has become notorious as a platform that tolerates the darkest and nastiest elements of misogyny. We documented widespread frustration at Twitter’s inconsistent enforcement of its own policies, and its failure to explain or account for its sometimes-baffling decisions about when abusive content is permissible.

Despite multiple requests to Twitter, the company has refused to release any data about the number of reports of abuse it receives, how it responds to them, or how it trains its moderators. This hides the extent of the problem, and makes it very difficult to know how the current reporting system - which is clearly not working for so many women - could be improved. Collecting the data is the first step in holding Twitter to account for its failure to protect human rights online – and this is where you come in.

Troll Patrol is a new platform which empowers internet users to play a part in making Twitter a safer, less toxic place. Harnessing the power of a global network of digital activists, Troll Patrol is at the forefront of how Amnesty International is combining machine learning and crowdsourcing to tackle human rights abuses.

Here’s how Troll Patrol works: 

We collected hundreds of thousands of tweets sent to female politicians and journalists in 2017. Some of these tweets may be threatening or abusive while others are harmless. We are now asking volunteers to help us sort through these tweets, by flagging the ones that are abusive or problematic. Eventually, we’ll have enough data about what online abuse looks like that we’ll be able to use machine learning to automatically detect abusive tweets.

Over the past three weeks, more than 5,000 people from all over the world have volunteered to take part in Troll Patrol. 

Volunteers can dedicate as little as 30 seconds to read a tweet, tell us whether it contains problematic or abusive content, and categorise the type of abuse. Does it contain sexism or homophobia? Are there sexual or physical threats?
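For readers curious about what happens to those answers, each volunteer's response can be thought of as a small structured record. The sketch below is purely illustrative and based on our own assumptions (the field names and category list are hypothetical, not Troll Patrol's actual data model):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical abuse categories, mirroring the questions above.
# The real project's taxonomy may differ.
CATEGORIES = ["sexism", "homophobia", "sexual threat", "physical threat", "other"]

@dataclass
class VolunteerLabel:
    tweet_id: str                 # which tweet the volunteer was shown
    volunteer_id: str             # who gave the answer
    is_abusive: bool              # "does it contain problematic or abusive content?"
    categories: List[str] = field(default_factory=list)  # e.g. ["sexism"]

# Example: one volunteer flags a tweet as abusive and sexist.
example = VolunteerLabel(tweet_id="t123", volunteer_id="v42",
                         is_abusive=True, categories=["sexism"])
```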

We know this issue isn’t black and white. Not everybody defines or experiences abuse in the same way, and sometimes content that may not seem so bad on its own can have harmful effects if sent repeatedly.

So, we verify all contributions and piece them together with thousands of other people’s work. The result will be a massive database of examples of abusive tweets against women, which will help us trace and better understand patterns in online abuse. Using cutting-edge machine learning technologies, innovators at Amnesty International will eventually be able to build an algorithm that can detect abuse automatically.  
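To make those two steps more concrete, here is a minimal sketch of how many volunteers' judgements on the same tweet could be combined by majority vote and then used to train a simple text classifier. It is a toy example built on assumptions (hypothetical inputs and an off-the-shelf scikit-learn model), not the actual system Amnesty International is building:

```python
from collections import Counter, defaultdict

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def aggregate_by_majority(volunteer_labels):
    """Collapse many volunteers' judgements per tweet into one consensus label.

    volunteer_labels: iterable of (tweet_id, is_abusive) pairs, one per judgement.
    Returns a dict mapping tweet_id -> True/False by simple majority vote.
    """
    votes = defaultdict(list)
    for tweet_id, is_abusive in volunteer_labels:
        votes[tweet_id].append(is_abusive)
    return {tid: Counter(v).most_common(1)[0][0] for tid, v in votes.items()}

def train_abuse_classifier(tweet_texts, consensus):
    """Fit a bag-of-words classifier on the crowd-labelled tweets.

    tweet_texts: dict mapping tweet_id -> tweet text.
    consensus: dict mapping tweet_id -> True/False (from aggregate_by_majority).
    """
    texts = [tweet_texts[tid] for tid in consensus]
    labels = [consensus[tid] for tid in consensus]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)
    return model

# Usage sketch (with hypothetical data):
# consensus = aggregate_by_majority(volunteer_labels)
# model = train_abuse_classifier(tweet_texts, consensus)
# model.predict(["an unseen tweet"])   # -> array([ True]) or array([False])
```

In practice the verification of contributions and the eventual models will be far more careful than a simple majority vote, but the overall shape, crowd labels in and a trained detector out, is the idea described above.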

Once we can automatically identify abuse, we can easily answer questions like: who receives online abuse? Who are the perpetrators? And most importantly, what more can Twitter do to stop it? 

Other social media platforms are also using machine learning to crack down on online abuse, but Troll Patrol is unique in that it allows social media users to contribute to finding a solution. 

In the past Amnesty International has used a similar platform to analyse data about oil spills in the Niger Delta and expose serious negligence by oil companies. By crowdsourcing research, we were able to build up vital evidence in a fraction of the time it would take one Amnesty researcher, without losing the human judgement which is so essential to our work.

Troll Patrol isn’t about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that presenting the data from Troll Patrol will compel it to make that change. The more Twitter users know about the extent of the problem, the more confident they can feel in calling out abuse.

If it’s possible for volunteers to identify online abuse in a systematic way, then Twitter should be able to as well. The company responded to our research last month by saying that it “cannot delete hatred and prejudice from society”. We’re not asking Twitter to do this, but we are asking it to stop nurturing hatred and prejudice by allowing abuse on its platform to go unchecked. 

Want to help beat the trolls? Anyone with an internet connection and a mobile, tablet or computer can contribute to Troll Patrol: https://decoders.amnesty.org/projects/troll-patrol

 

About Amnesty UK Blogs
Our blogs are written by Amnesty International staff, volunteers and other interested individuals, to encourage debate around human rights issues. They do not necessarily represent the views of Amnesty International.