Human rights and justice must be at the heart of the upcoming Commission guidelines on the AI Act implementation
16 January 2025
The following statement has been written collectively by the AI Act civil society coalition and the #ProtectNotSurveil coalition following the European Commission consultation on the AI Act prohibitions and AI system definition.
On 11 December 2024, the European Commission’s consultation on its Artificial Intelligence (AI) Act guidelines closed. These guidelines will determine how those creating and using AI systems interpret the rules on which types of systems are in scope, and which systems are explicitly prohibited.
Since the final AI Act contains serious loopholes in the protection of fundamental rights, particularly in the areas of policing and migration, it is important that the guidelines clarify that fundamental rights are the central guiding basis for meaningful AI Act enforcement.
More specifically, we urge the AI Office to ensure the upcoming guidelines on AI Act prohibitions and AI system definition include the following as a necessary basis for fundamental rights-based enforcement:
1) Clarify that comparatively ‘simple’ systems are explicitly within the scope of the AI system definition: such systems should not be considered out of scope of the AI Act merely because they use less complex algorithms. We are concerned that developers might exploit the definition of an AI system and the classification of high-risk AI systems to bypass the obligations of the AI Act. For instance, converting an AI system into a rule-based system could circumvent the Act’s requirements while maintaining the same functionality and carrying the same risks. Regulation must therefore focus on potential harm, not just technical methods. The Dutch SyRI scandal is a clear example of a system that appeared simple and explainable but had devastating consequences for people’s rights and lives, especially for racialised people and those with a migrant background.
2) Clarify the prohibitions of systems posing an ‘unacceptable’ risk to fundamental rights, in order to prevent the weaponisation of technology against marginalised groups and the unlawful use of mass biometric surveillance. Specifically, the guidelines should reflect:
3) Concerning the interplay with other Union law, the guidelines must ensure that human rights law, in particular the EU Charter of Fundamental Rights, is the central guiding basis for implementation, and that all AI systems are viewed within the wider context of discrimination, racism, and prejudice. For this reason, the guidelines must emphasise that the prohibitions serve a preventative purpose and must therefore be interpreted broadly in the context of harm prevention.
Lastly, we note the shortcomings of the Commission’s consultation process: the lack of advance notice and the short time frame for submissions, the failure to publish the draft guidelines, which would have enabled more targeted and useful feedback, the lack of accessible formats for feedback, and strict character limits on complicated, and at times leading, questions that required elaborate answers. For example, Question 2 on the definition of AI systems asked only for examples of AI systems that should be excluded from the definition, thereby allowing the definition of AI to be narrowed but not widened.
We urge the AI Office and the European Commission to ensure that all future consultations related to the AI Act implementation, both formal and informal, give a meaningful voice to civil society and impacted communities and that our views are reflected in policy developments and implementation.
As civil society organisations actively following the AI Act, we expect that the AI Office will ensure a rights-based enforcement of this legislation, and will prioritise human rights over the interests of the AI industry.
Signatories
Organisations
- Access Now
- AlgorithmWatch
- Amnesty International
- ARTICLE 19
- Border Violence Monitoring Network (BVMN)
- Danes je nov dan
- Electronic Frontier Norway (EFN)
- Electronic Privacy Information Center (EPIC)
- Equinox Initiative for Racial Justice
- EuroMed Rights
- European Center for Not-for-Profit Law (ECNL)
- European Digital Rights (EDRi)
- European Disability Forum (EDF)
- European Network Against Racism (ENAR)
- Federación de Consumidores y Usuarios CECU
- Homo Digitalis
- Lafede.cat – Organitzacions per la Justícia Global
- Panoptykon Foundation
- Privacy International
- SHARE Foundation
- Statewatch
Individuals
- Associate prof. Øyvind Hanssen (UiT Arctic University of Norway)
- Douwe Korff, Emeritus Professor of International Law
- Dr. Derya Ozkul (University of Warwick)
- Prof. Jan Tobias Muehlberg (Université Libre de Bruxelles)