Global: New Amnesty toolkit arms activists to hold states and tech giants accountable for harmful AI
The toolkit is a practical guide for uncovering AI harms in welfare, policing, healthcare, and education
‘Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies’ - Damini Satija, Amnesty Tech
Amnesty International is launching its Algorithmic Accountability toolkit, which aims to equip rights defenders, activists and communities to shed light on the serious implications that Artificial Intelligence (AI) and automated decision-making (ADM) systems have for our human rights.
The toolkit draws on Amnesty’s investigations, campaigns, media and advocacy work in the United Kingdom, Denmark, Sweden, Serbia, France, India, the Occupied Palestinian Territory (OPT), the United States and the Netherlands. It provides a ‘how-to’ guide for investigating, uncovering and seeking accountability for harms arising from algorithmic systems that are becoming increasingly embedded in our everyday lives, specifically in the public sector realms of welfare, policing, healthcare, and education.
Regardless of the jurisdiction in which these technologies are deployed, a common outcome of their rollout is not “efficiency” or “improved” societies, as many government officials and corporations claim, but bias, exclusion and human rights abuses.
Damini Satija, Programme Director at Amnesty Tech, said:
“The toolkit is designed for anyone looking to investigate or challenge the use of algorithmic and AI systems in the public sector, including civil society organisations (CSOs), journalists, impacted people and community organisations. It is adaptable to multiple settings and contexts.
“Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies, and to meeting this moment of supercharged investment in AI, given how these systems can enable mass surveillance, undermine our right to social protection, restrict our freedom to protest peacefully, and perpetuate exclusion, discrimination and bias across society.”
The toolkit introduces a multi-pronged approach based on lessons from Amnesty’s investigations in this area over the last three years, as well as from collaborations with key partners. This approach not only provides tools and practical templates for researching these opaque systems and the human rights violations they cause, but also lays out comprehensive tactics for those working to end these abusive systems by seeking change and accountability through campaigning, strategic communications, advocacy or strategic litigation.
One of the many case studies the toolkit draws on is Amnesty’s investigation into Denmark’s welfare system, which exposed how the Danish welfare authority, Udbetaling Danmark (UDK), uses AI tools to flag individuals for social benefits fraud investigations, fuelling mass surveillance and risking discrimination against people with disabilities, low-income individuals, migrants, refugees, and marginalised racial groups.
The investigation would not have been possible without collaboration with impacted communities, journalists and local civil society organisations, and in that spirit, the toolkit is premised on deep collaboration between groups from different disciplines. It situates human rights law as a critically valuable component of algorithmic accountability work, especially given that human rights law remains a gap in the ethical and responsible AI fields and in existing audit methods. Amnesty’s method ultimately emphasises collaborative work while harnessing the collective influence of a multi-method approach. Communities, and their agency to drive accountability, remain at the heart of the process.
“This issue is even more urgent today, given rampant, unchecked claims and experimentation around the supposed benefits of using AI in public service delivery. State actors are backing enormous investments in AI development and infrastructure and giving corporations a free hand to pursue their lucrative interests, regardless of the human rights impacts now and further down the line,” said Damini Satija.
“Through this toolkit, we aim to democratise knowledge and enable civil society organisations, investigators, journalists, and impacted individuals to uncover these systems and the industries that produce them, demand accountability, and bring an end to the abuses enabled by these technologies.”
Highlighting how these systems are already harming people in the UK, Alba Kapoor, racial justice lead at Amnesty International UK, said:
“Increasingly, we’re seeing the UK state rely on AI as a silver bullet to improve ‘efficiency’ and cut costs. Yet time and time again, these technologies prove to be flawed and harmful, violating our rights to privacy and to equality and non-discrimination.
“This is happening across the board – from the rise of so-called ‘predictive policing’ used by UK police forces with little regard for people’s rights, to the DWP’s automated welfare systems that exclude people from accessing the support they need. Add to this the police’s use of facial recognition technology, which has been shown to misidentify Black people at dramatically higher rates than white people. Scrutiny of these technologies is more vital than ever.”