Human rights take a backseat in AI regulation
The European Union (EU) is in the final stages of drafting a pioneering framework, the EU Artificial Intelligence (AI) Act, to regulate the development and use of AI systems. In line with civil society demands, proposed regulatory standards are based on a classification of AI systems and their uses by their risk level. But civil society has expressed concern about insufficient human rights safeguards. In the final stretch of the process and anticipating that the EU AI Act may prepare the ground for further global regulation, civil society continues to push for stronger human rights protections and the elimination of loopholes.
Artificial Intelligence (AI) technology has existed for decades, and we’ve used it for years – on our smartphones, when searching the web, shopping online, watching Netflix, making automatic translations, programming a thermostat, or using car navigation. But recently AI made a great leap forward: it became able to produce human-like speech. That’s when we all took notice.
But civil society had long been researching and warning of the risks – to democratic processes, privacy, data protection and fundamental human rights – caused by opaque algorithms replicating stereotypes and biases and by outright misuse of AI-powered tools, as well as a related series of tricky safety, security and liability issues.
Spurred on by such concerns, in 2021 the European Union (EU) launched a process to develop a set of rules to address challenges associated with AI technologies and regulate their development, deployment and use across member states. It was while this process was underway, in November 2022, that ChatGPT was launched. This burgeoning of generative AI – AI that produces or manipulates text, images, audio and video content – was something EU negotiators struggled to get to grips with, showing how far ahead of regulatory efforts the technology is.
The legal tool that has resulted from this process, the EU AI Act, broadly follows the lines advocated by civil society in basing regulation on a classification of risks. But civil society groups remain concerned about the current, near-final version of the Act. As the process enters its closing phase, they continue to call for human rights protections to be strengthened and loopholes to be closed.
Uses and risks
As defined in the EU AI Act, an AI system is a kind of software that ‘can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with’. It includes software such as search engines, virtual assistants and speech and facial recognition systems, as well as software ‘embodied’ in objects such as drones, robots, cars and other devices.
There’s been a lot of discussion about whether ‘AI’ is an accurate designation or a misnomer. Some say it isn’t really artificial, as opposed to human, given that it depends on the continuous production of thought and art by real, creative human beings. And they claim it isn’t really intelligent either, at least not yet and likely not for a long time to come, because human intelligence is so much more than pattern-matching.
In any case, as with all successful technologies before it, AI is spreading fast because of its many benign uses. Thanks to its ability to process and systematise large quantities of data, identify patterns and make predictions, AI is already being used in areas ranging from manufacturing, sales, food and energy production, farming and transportation to health research, healthcare, public administration and disaster preparedness and mitigation. In the online sphere, it’s being used to detect disinformation and identify and combat cyberthreats, among other things.
But here’s the catch: alongside the good uses come a lot of bad ones. AI can be used to fend off attacks on democratic processes and preserve the integrity of elections, but it can also be used to encourage online echo chambers, polarise public opinion, create and spread deepfakes, destroy reputations, distort decision making, manipulate voters, perpetrate fraud and repress people mobilising for rights. The latest technologies governments are using to identify, harass and intimidate protesters are powered by AI.
While some potential negative impacts of AI – such as making some skills redundant and eliminating whole categories of jobs – are shared with other technologies that preceded it, others are specific to AI.
At the top of the list, digital rights organisation Access Now highlights the human rights violations, disproportionately affecting excluded groups, that result from biases in algorithms and lack of transparency in programming. Because it is fed historical data that contains inequities and inequalities, AI often further reinforces Western bias and stereotypes based on characteristics such as age, sex and class. Biometric technologies are generally designed to work with what’s construed as a ‘normal’ body, perpetuating prejudice against people with disabilities, and racial biases built into these systems can lead to terrible outcomes.
AI-based physical and behavioural biometric technologies such as facial and voice recognition systems, used to identify people and make predictions about them, often aren’t as accurate and reliable as they’re presented to be – and they’re particularly prone to function creep, being overstretched from their original, seemingly innocuous purposes. A biometric voice recognition system designed for healthcare workers to detect mental distress, for instance, could easily be repurposed as a lie detector in the hands of immigration law enforcement.
Voices from the frontline
Omran Najjar, Kshitij Sharma, Leen D’hondt, Nasilele Amatende and Petya Kangalova are part of the Humanitarian OpenStreetMap Team, an international civil society group dedicated to humanitarian action and community development through open mapping.
The biggest challenges are the biases and the lack of transparency of the algorithms embedded in existing AI solutions.
Most current AI models are closed. It’s not clear how they were trained. They are like black boxes: you provide input, then magic happens and you obtain a certain output. The more you train a model, the better it becomes, but the output will always depend on your input. And those providing the input are often biased.
The problem with existing models is that you cannot even know if they are biased, or how they are biased, because they are black boxes. You cannot know what’s inside, and training data and processes are not traceable.
In our work, we seek to tackle biases by localising models, meaning we are not looking at the general model that works everywhere. And we counter the lack of transparency by using fully open-source AI models.
Whether AI should be regulated probably depends on the kind of AI and the risk it poses. The EU has developed a regulatory framework, the EU AI Act, to regulate AI systems on a sliding scale of risk. For instance, AI systems that carry unacceptable risk – such as social scoring systems and AI applications that remotely monitor people in real time in public spaces – would be prohibited. High-risk AI systems – such as AI deployed in medical devices, the management of critical infrastructure, employment recruitment tools, credit scoring applications and so on – would have to comply with very strict requirements to ensure transparency, data governance and record-keeping, and human oversight, among other things.
This is an edited extract of our conversation with Humanitarian OpenStreetMap Team. Read the full interview here.
Regulation
In 2021, the European Commission proposed the establishment of a comprehensive set of rules to address the ethical and legal challenges of AI technologies and regulate their development, deployment and use across EU states. Following the passage by the European Parliament of a first version of the EU AI Act in June 2023, a final round of debates – known as a ‘trilogue’ – was held between the European Commission, Council and Parliament.
The Commission was pressed to finish the process before the end of 2023 so the Act could be submitted to a parliamentary vote before the 2024 European Parliament elections, scheduled for June. Although contentious issues remained, a provisional political agreement on what’s slated to be the world’s first comprehensive piece of legislation on AI was reached on 8 December.
The text, not yet fully final, was hailed in Brussels as achieving the best possible balance between businesses and people, law enforcement and freedoms, innovation and rights protections. Civil society begs to differ.
On the eve of the EU agreement, a global coalition of over 50 civil society and human rights organisations from more than 30 countries issued a ‘Civil Society Manifesto for Ethical AI’, an initiative to steer AI policies towards safeguarding rights and decolonise AI discourse. The manifesto demanded the inclusion of people’s voices – and therefore of civil society – in the process to develop genuinely global, inclusive and accountable standards.
The 🇪🇺 AI Act is a global first.
A unique legal framework for the development of AI you can trust.
And for the safety and fundamental rights of people and businesses.
A commitment we took in our political guidelines - and we delivered.
I welcome today's political agreement.
— Ursula von der Leyen (@vonderleyen) December 8, 2023
The EU is celebrating reaching a deal on the AI Act –– but its human rights protections have major exceptions.
Here's @djleufer, @infofannny, + @CaterinaRodelli on the deal and why the AI Act has always been a concession to industry + law enforcement. https://t.co/5IahzL09kx
— Access Now (@accessnow) December 14, 2023
Voices from the frontline
Nadia Benaissa is legal policy advisor at Bits of Freedom, a Dutch civil society organisation (CSO) that aims to protect the rights to privacy and freedom of communication by influencing legislation and policy on technologies, giving policy advice, raising awareness and undertaking legal action. Bits of Freedom also took part in the negotiations for the EU AI Act.
As negotiations on the Act proceed, a coalition of 150 CSOs, including Bits of Freedom, has urged the European Commission, Council and Parliament to prioritise people and their fundamental rights.
Alongside other civil society groups, we have actively collaborated to draft amendments and engaged in numerous discussions with members of the European and Dutch Parliaments, policymakers and various stakeholders. We firmly pushed for concrete and robust prohibitions, such as those concerning biometric identification and predictive policing. Additionally, we emphasised the significance of transparency, accountability and effective redress in the use of AI systems.
We have made significant advocacy achievements, which include the prohibition of real-time and retrospective (‘post’) remote biometric identification, a better formulation of prohibitions, mandatory Fundamental Rights Impact Assessments, the recognition of more rights regarding transparency, accountability and redress, and the establishment of a mandatory AI database.
But we recognise that there is still work to be done. We’ll keep pushing for the best possible protection of human rights and we’ll continue to focus on the demands made in our statement to the EU trilogue, which boil down to empowering affected people with a framework of accountability, transparency, accessibility and redress; drawing limits on harmful and discriminatory surveillance by national security, law enforcement and migration authorities; and pushing back on Big Tech lobbying by removing loopholes that undermine regulation.
This is an edited extract of our conversation with Nadia. Read the full interview here.
A risk-based approach
The Act sets standards and obligations that become more stringent the higher the risk posed, both to individual people and to society as a whole. While AI systems with risk levels deemed ‘unacceptable’ are banned outright, those deemed to be of limited risk are merely subjected to transparency requirements.
Generative AI models are classed as posing limited risk and should therefore comply with minimal transparency requirements that allow users to make informed decisions. For instance, users must be made aware that they’re interacting with AI rather than with a person, or that content was generated by AI, so they can decide whether they want to engage with it. Models must be designed so they’re prevented from generating illegal content, and summaries of copyrighted data used for AI training must be published.
The Act acknowledges that this is a fast-evolving field and concedes that high-impact general-purpose AI models such as the more advanced GPT-4 might pose systemic risk, so they should undergo thorough evaluation, and any serious incidents would have to be reported.
One step further up the risk scale, AI systems posing potential risks to health, safety, fundamental rights, the environment, democracy or the rule of law are rated as high risk and therefore subjected to stricter obligations, including mandatory Fundamental Rights Impact Assessments and Conformity Assessments, as well as data governance, registration, risk management, quality management and transparency requirements, among others.
This applies to two categories of AI systems: those used in products already covered by EU product safety legislation, such as toys, cars and medical devices, and those in specific areas, to be registered in separate databases, including those used for the management of critical infrastructure, education and vocational training, human resources, recruitment and worker management, access to public services and benefits, law enforcement and migration, asylum and border control. High-risk AI systems must be assessed before being put on the market and throughout their lifecycle.
The AI systems to be banned due to their unacceptable levels of risk include those that involve the cognitive behavioural manipulation of people or specific vulnerable groups, the exploitation of people’s vulnerabilities and social scoring based on social behaviour or personal characteristics, as well as emotion recognition systems – which interpret facial expressions – and biometric identification and categorisation systems.
But even for AI systems banned due to unacceptable risk, the AI Act intends to allow exceptions for law enforcement purposes. Real-time remote biometric identification, for instance, may be allowed in a limited number of particularly serious cases, while non-real-time biometric identification may be allowed with court approval to prosecute other serious crimes.
The devil in the details
The fact that the political agreement reached on 8 December didn’t include a definitive legal text has raised the stakes at subsequent technical meetings. With work continuing to sort out details and clean up the draft to be submitted to Parliament and adopted by the Council, civil society is racing against the clock to try to narrow or eliminate exceptions and increase human rights protections.
Civil society’s biggest concerns focus on mass surveillance and emotion recognition, and the use of these and other AI-powered systems against people on the move and people of colour.
One point of contention is that the ban on the use of emotion recognition technologies, initially meant to apply to education, the workplace, law enforcement and migration, ended up leaving out their use in law enforcement and migration contexts. The draft also appears to include a dangerous loophole, allowing emotion recognition to be used for medical or safety purposes: the ‘aggression detection’ systems already on the market are notorious for classifying images of Black men as more aggressive than those of white men.
Human rights violations could also result from changes to the original intent to unconditionally ban the use of live facial recognition. While the draft allows for the limited use of facial recognition and subjects it to safeguards, human rights groups have argued that there should be a complete ban on its use in public spaces and at borders, because no safeguards can prevent the harms to human rights, civic space and the rule of law that facial recognition can inflict. Civil society also criticises the double standards implied in the Act’s failure to ban the export of harmful AI technologies that will be illegal in the EU, including those used for social scoring.
#ProtectNotSurveil, a coalition of civil society groups, activists, researchers and others working to ensure the EU AI Act safeguards people on the move from harm as a consequence of AI, has warned that people on the move and Europe’s racial minorities are the testing ground for AI-powered surveillance and tracking tools – but once the technologies have been put in place, everyone will be affected. And the implications don’t stop there, since EU laws could influence any eventual global regulatory regime.
To stop this, it demands several changes to the EU AI Act. One is the regulation of all high-risk AI systems used in the area of migration, particularly surveillance technologies used for border control and identity checks, so they’re subject to oversight and accountability measures. Another is a ban on harmful AI practices in relation to migration, including predictive analytics systems used for preventing migration, automated risk assessments and profiling systems that entrench racism and bias, ‘lie detectors’ and other pseudo-scientific technologies that claim to infer emotions from biometric data, and remote biometric identification enabling mass surveillance of borders and detention facilities.
For civil society, the bottom line is clear: AI systems deemed to pose unacceptable risks to fundamental rights that aren’t susceptible to technical fixes or procedural safeguards must be banned outright. The stakes are too high not to do so.
OUR CALLS FOR ACTION
- EU states must introduce all changes to the AI Act required to align regulation with the protection of fundamental human rights rather than the tech industry’s interests.
- EU states must introduce a full ban on the use of facial recognition technologies and the export of all technologies banned in the EU.
- Civil society should continue to advocate for higher human rights standards that could eventually feed into global regulation efforts.
Cover photo by John Moore/Getty Images