How to fill the gaps in the AI Act

10 September 2025

 

Have you attended a protest in the last year? While the AI Act is meant to protect you from the dangers of artificial intelligence (AI), its very exceptions could allow you to be a target of surveillance even months after a peaceful rally.

This raises the question: does this new European law truly protect us, or does it, under the guise of security, create dangerous legal loopholes for the surveillance of citizens?

Recommendations regarding the exceptions in the AI Act

The AI Act, which has been in effect since August 2024, brings important changes to the use of AI. In February 2025, the provisions on prohibited practices (those posing an unacceptable risk to citizens' safety and fundamental rights) came into force. But with them, certain exceptions that deeply concern us also took effect.

For instance, the measures do not apply to AI systems used exclusively for military, defense, or national security purposes. At the same time, the list of prohibited practices is largely a list of conditional prohibitions. We are therefore concerned that these regulatory gaps will be exploited to undermine democracy, democratic processes, and the rule of law, with a chilling effect on public gatherings, the right to protest, privacy, and other rights.

With this in mind, Danes je nov dan has prepared recommendations for the state to prevent the exploitation of legal loopholes and ensure that technology serves people, rather than controlling them and threatening their freedoms.

Excluding the use of AI in the field of national security

The AI Act does not apply to the field of national security because, under the Treaty on European Union, this area remains the sole responsibility of each member state. However, this does not mean that such activities are completely exempt from EU law. When using AI systems, states must still comply with general principles, such as the principle of proportionality: every measure must be necessary and appropriate to achieve a specific goal while respecting the essence of fundamental rights.

Given that the concept of national security is broad, there is a danger that this exception could be abused. For example, protests or other forms of public gathering could be monitored by authorities using AI systems under the pretext of "national security," even though such surveillance should, in reality, fall under the more regulated area of crime prevention in the AI Act. To prevent this, the state must urgently adopt a clear protocol or resolution for the use of AI for these purposes.

Conditional prohibitions for some high-risk systems

The AI Act defines some practices that are in principle prohibited but allows for numerous exceptions. The vague exceptions that could permit the widespread use of AI systems for population surveillance are particularly worrying:

  • Migration and border control: The Act does not cover some harmful AI systems used in the fields of migration and border management, leaving civil society organizations and individuals without adequate protection.
  • Insufficient transparency: Obligations regarding transparency and public notification are loose, which prevents effective public oversight. The public and journalists will not know where and under what circumstances the most invasive systems are being used.
  • Remote biometric identification: The AI Act differentiates between real-time biometric identification and post-identification (e.g., from video recordings). While real-time identification is heavily restricted, the conditions for post-identification are lax. This could lead to abuse, mass surveillance, and violations of the right to privacy and freedom of expression. For example, police could later identify protest participants from video footage, even if they committed no crime. Such systems also reinforce the power imbalance between those who observe and those who are observed.

What should the state do?

In light of these risks, it is crucial that the state does not adopt additional legislation that would permit the use of real-time biometric systems. At the same time, the competent authorities should:

  1. Introduce more restrictive measures for the use of AI systems, especially in migration and biometric identification.
  2. Ensure a clear and legally regulated process for obtaining permits to use these systems.
  3. Establish a public registry of AI systems to enable oversight of their use.
  4. Ensure an effective system of internal monitoring and the possibility of immediately shutting down systems in the event of any violation of fundamental rights.
 

Co-funded by the European Union and Impact4Values logos

Funded by the European Union. The views and opinions expressed are solely those of the author(s) and do not necessarily reflect the views and opinions of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor the EACEA can be held responsible for them.

 
