One year ago the AI Act became law: What does this mean for our rights?
1 August 2025

One year has passed since the Artificial Intelligence (AI) Act officially became EU law. But as authorities across Europe get ready for its full implementation, some of the hard-won safeguards are already under threat – especially when it comes to the use of AI technologies in law enforcement and migration.
Instead of using this time to strengthen and implement key measures that protect human rights, EU institutions are attempting to devalue them. Behind closed doors, the European Commission is siding with tech corporations and some repressive governments and, under the guise of "simplification," striving to weaken the already minimal protections defined in the Act.
Dangerous Systems Get the Green Light
In the past year, the European Commission organised several consultations on how the Act should be implemented in practice. However, it seems that instead of protecting human rights, these procedures are opening the door to dangerous AI tools that could be used with minimal oversight and without properly defined accountability for potential harm. The consequences of such leniency are all the more serious because various AI technologies are already being used today to profile, target, and harm people on the move as well as other marginalised communities.
Furthermore, some EU governments are already exploiting the legal loopholes allowed by the AI Act to implement mass surveillance measures. In Austria, the government used a biometric system to identify climate activists at protests, while in Hungary, a law was passed allowing the use of facial recognition technology at pride parades.
We Demand Stricter Oversight and Full Transparency
The European AI Office recently consulted on which AI systems should be classified as "high-risk". Together with the Protect not Surveil coalition and EDRi's working group on the AI Act, we demand that this list include: hand-held facial recognition and fingerprinting tools used by police and border guards, forecasting tools that predict human mobility in the humanitarian sector, and surveillance technology at borders.
We advocate the principle: no transparency, no deployment. The AI Act currently allows police and migration authorities to operate in the dark, exempt from disclosing information to the public about the systems they use. If you can’t explain how a system works or who it harms, it has no place in law enforcement or migration control.
EU actors can't dodge responsibility. If EU companies or agencies deploy harmful AI abroad, they must still be bound by the AI Act. No loopholes. No outsourcing of harm.
Some Systems Don't Need Regulation, They Need a Ban
We maintain that some systems currently permitted are too dangerous for even the strictest regulation and should therefore be banned entirely:
- social scoring, including in migration procedures,
- AI “lie detectors” used by police and migration authorities,
- all forms of remote biometric identification in public, not only real-time uses,
- predictive policing systems, including location-based predictions,
- forecasting tools in border management,
- dialect recognition used during asylum procedures.
A Crucial Moment for Action
We are at a critical moment: there is a serious risk that the way the AI Act is introduced and implemented will open the door to dangerous technologies. This is part of a broader trend of securitisation, in which national security becomes an excuse to deploy harmful technologies and erode rights protections.
It is therefore essential that we work together to prevent the abuse of artificial intelligence by police and migration authorities. Help us share information about the risks in this field.