Definition
The EU AI Act prohibits certain AI practices considered incompatible with fundamental rights. These include using AI to manipulate people or exploit vulnerable individuals, social scoring systems (whether operated by public or private actors), and real-time remote biometric identification in publicly accessible spaces for law enforcement, except in narrowly defined situations. Separately, AI systems that significantly affect people's rights, such as those used in hiring or judicial decisions, are not banned but are classified as high-risk and must meet strict safeguards for transparency and accountability.
The EU AI Act sets clear limits on how artificial intelligence systems may be used. Some practices are considered so harmful that they are banned outright. These prohibitions are designed to protect people’s rights, safety and ability to make informed decisions, while still allowing responsible innovation.
Understanding these rules helps explain how regulation can support trustworthy AI, not just restrict it.
What counts as a prohibited AI practice
The Act bans specific uses of AI that risk significant harm or unfair treatment of people.
These include AI systems that:
- Use subliminal, manipulative or deceptive techniques that distort a person's behaviour in a way that impairs their ability to make informed decisions and is likely to cause significant harm
- Exploit vulnerabilities of individuals or groups, such as those related to age, disability, or social or economic situation, in ways that distort behaviour and are likely to cause significant harm
- Create social scoring systems that evaluate or classify people over time based on social behaviour or personal characteristics, leading to unjustified or disproportionate negative treatment, or negative treatment in unrelated contexts
- Assess or predict the risk of a person committing a criminal offence based solely on profiling or personality traits, unless the system only supports a human assessment that is already based on objective and verifiable facts directly linked to criminal activity
- Build or expand facial recognition databases by untargeted scraping of images from the internet or CCTV footage
- Infer emotions in workplaces or educational institutions, except where used for medical or safety reasons
- Categorise people using biometric data to infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, with limited exceptions related to lawfully acquired law enforcement datasets
- Use real-time remote biometric identification in publicly accessible spaces for law enforcement, except in strictly defined and limited situations
Strict limits on real-time biometric identification
Even where real-time remote biometric identification is allowed for law enforcement, its use is tightly controlled. It may be used only when strictly necessary for specific objectives, such as:
- Searching for victims of serious crimes like abduction, trafficking or sexual exploitation, or for missing persons
- Preventing a specific, substantial, and imminent threat to life or physical safety, or a genuine and present or foreseeable terrorist threat
- Identifying a person suspected of serious criminal offences for investigation, prosecution or execution of a criminal penalty
Its use must be proportionate and limited in time, geography and the people concerned. Authorities must consider both the seriousness of the threat and the impact on people’s rights and freedoms.
Authorisation, oversight and safeguards
The EU AI Act does not just ban certain AI practices. It also builds in oversight to ensure sensitive AI uses are controlled, accountable and rights-respecting.
For high-risk uses such as real-time remote biometric identification in public spaces for law enforcement, strict conditions apply. Use must be necessary and proportionate, limited in time, place and scope, and in principle authorised in advance by a judicial or independent authority. Urgent cases are tightly constrained, and use must stop and data be deleted if authorisation is refused. No decision with adverse legal effects on a person may rely solely on AI output.
Oversight also operates more broadly through:
- Fundamental rights impact assessments before certain AI uses
- Registration of systems in an EU database
- Notification of market surveillance and data protection authorities
- National legal rules on authorisation, supervision and reporting, with the option for stricter national limits
- Annual reporting by national authorities and aggregated reporting by the European Commission
How regulation aims to support trustworthy innovation
By clearly banning the most harmful practices and placing strict conditions on highly sensitive uses, the EU AI Act aims to ensure AI systems respect fundamental rights and human dignity. These rules help build trust by:
- Preventing manipulation and exploitation
- Limiting intrusive surveillance
- Reducing risks of unfair discrimination and unjustified treatment
- Ensuring human oversight and legal accountability
This framework is designed so that innovation can continue, but within boundaries that prioritise safety, fairness, and respect for people.
Key takeaways
- The EU AI Act bans AI practices that manipulate, exploit vulnerabilities, enable social scoring, misuse biometric data, or create serious risks of harm
- Real-time biometric identification in public spaces for law enforcement is only allowed in narrowly defined, strictly necessary situations
- Strong safeguards include prior authorisation, fundamental rights assessments, registration, notification and oversight
- In these cases, no decision with adverse legal effects on a person may be based solely on AI outputs
- The aim of these rules is to ensure AI development remains responsible, rights-respecting and trustworthy