Why ‘The Great Hack’ is just the tip of the iceberg
Edited blog based on Joe Westby’s original article (AI and Big Data Researcher, Amnesty Tech)
It was the scandal that exposed the dark side of the internet. The inside story of how Cambridge Analytica misused Facebook data to manipulate voters in the US election is told in “The Great Hack” – a new Netflix documentary out today.
But this is not just about one company. The film goes further, opening our eyes to the way our lives are constantly monitored – and controlled – through technology. It exposes the fact that the entire business models of some Big Tech companies may pose a deep threat to our human rights.
Your “data exhaust”
In the digital world, everything you do leaves a “data exhaust” – a record of your activity in the physical world too, from when you put petrol in your car to which websites you visit.
When combined, even seemingly innocuous data can reveal a LOT about a person.
Facebook and Google have, of course, long affirmed their commitment to respecting human rights. But they have also amassed data vaults with an unprecedented amount of information on people. This goes far beyond the data that you choose to share on their platforms.
Mass corporate surveillance on such a scale threatens the very essence of the right to privacy.
To what extent are we susceptible to such manipulation?
One of the most urgent and uncomfortable questions raised in The Great Hack is: to what extent are we susceptible to such behavioural manipulation?
This behavioural manipulation poses a real threat to our ability to make our own autonomous decisions. It even challenges our rights to freedom of opinion, privacy and dignity.
Advertising and propaganda aren’t new. But there is no precedent for targeting individuals in such intimate depth, and at the scale of whole populations.
The current toxic trend
These practices may also be helping to fuel discrimination.
Companies – and governments – could easily abuse data analytics to target people based on their race, ethnicity, religion, gender, or other protected characteristics.
The push to grab users’ attention and to keep them on platforms can also encourage the current toxic trend towards the politics of demonisation. People are more likely to respond to clickbait – sensationalist or incendiary material – meaning platforms systematically privilege conspiracy theories, misogyny, and racism.
What is to be done?
The business models of the Tech giants present a systemic and structural issue that will not be easy to address. It requires a mix of political and regulatory solutions.
Stronger data protection is part of the answer. Enforcing the EU’s General Data Protection Regulation (GDPR), which has international reach, and using it as a model in other countries, would limit the extent of data-mining and profiling.
Calls to break up the Big Tech companies are becoming more common, and the industry is already being examined by competition authorities in various jurisdictions. A recent decision in Germany to limit data sharing and aggregation between Facebook and WhatsApp is an example of a targeted measure to counter the concentration of power in the hands of the biggest players.
Whatever regulatory tools are deployed, it is vital that they are grounded in the human rights risks posed by the business model. Human rights provide the only international, legally binding framework that can capture the ways this model affects our lives and what it means to be human – and hold the companies to account.
It is high time to confront the human rights impacts of “surveillance capitalism” itself.
Related: read more about how thousands of you came together to speak out against Google's plan to relaunch its search engine in China
Our blogs are written by Amnesty International staff, volunteers and other interested individuals, to encourage debate around human rights issues. They do not necessarily represent the views of Amnesty International.