
Find your risk tier.

Answer a few questions about your AI system and we will tell you which EU AI Act risk tier it falls into and what obligations apply.

Question 1

What does your AI system do?

Select the option that best describes the primary function of the AI system you want to classify.

REFERENCE

The four risk tiers.

Unacceptable Risk

AI practices entirely prohibited under the EU AI Act.

Deploying subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques to materially distort behaviour causing significant harm

Art. 5(1)(a)

Exploiting vulnerabilities of a person or group due to age, disability or a specific social or economic situation to materially distort behaviour causing significant harm

Art. 5(1)(b)

Social scoring — evaluating or classifying natural persons or groups based on their social behaviour or personal characteristics, with the social score leading to detrimental or unfavourable treatment that is unjustified, disproportionate, or unrelated to the context in which the data was generated

Art. 5(1)(c)

Making risk assessments of natural persons to assess or predict the risk of a person committing a criminal offence, based solely on profiling or on assessing personality traits and characteristics

Art. 5(1)(d)

Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage

Art. 5(1)(e)

Inferring the emotions of natural persons in the workplace or in education institutions, except where the use of the AI system is intended for medical or safety reasons

Art. 5(1)(f)

Biometric categorisation systems that categorise individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation

Art. 5(1)(g)

Real-time remote biometric identification systems in publicly accessible spaces for law enforcement, except for narrowly defined exceptions involving search for victims, prevention of specific imminent threats, and serious criminal offence suspects

Art. 5(1)(h)

High Risk

AI systems subject to strict requirements before placement on the market.

Biometrics

Remote biometric identification systems, biometric categorisation systems, and emotion recognition systems not falling under Article 5

Annex III, §1

Critical infrastructure

AI intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity

Annex III, §2

Education and vocational training

AI systems used to determine access to or admission to educational and vocational training institutions, to evaluate learning outcomes, to assess the appropriate level of education, and to monitor and detect prohibited behaviour of students during tests

Annex III, §3

Employment, workers management and access to self-employment

AI for recruitment and selection, for making decisions affecting terms of work-related relationships, for task allocation based on individual behaviour or personal traits, and for monitoring and evaluating performance and behaviour

Annex III, §4

Access to and enjoyment of essential private services and essential public services and benefits

AI for evaluating eligibility for public assistance benefits, for creditworthiness assessment, for risk assessment and pricing in life and health insurance, and for evaluating and classifying emergency calls or dispatching emergency services

Annex III, §5

Law enforcement

AI for assessing the risk of a natural person becoming a victim of criminal offences, for polygraphs and similar tools, for evaluating the reliability of evidence, for assessing the risk of a natural person offending or reoffending, and for profiling in the course of the detection, investigation or prosecution of criminal offences

Annex III, §6

Migration, asylum and border control management

AI for polygraphs and similar tools during examination of applications, for assessing risks including security risks, for examining applications for asylum, visa and residence permits, and for detecting, recognising or identifying natural persons in the context of migration

Annex III, §7

Administration of justice and democratic processes

AI intended to be used by a judicial authority, or on its behalf, to assist in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or AI intended to be used for influencing the outcome of an election or referendum

Annex III, §8

Limited Risk

AI systems with specific transparency obligations under Article 50.

AI systems intended to interact directly with natural persons shall be designed so that the natural person is informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use

Art. 50(1)

Providers of AI systems that generate synthetic audio, image, video or text content shall ensure the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated

Art. 50(2)

Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system

Art. 50(3)

Deployers of AI systems that generate or manipulate image, audio or video content constituting a deep fake shall disclose that the content has been artificially generated or manipulated

Art. 50(4)

Minimal Risk

All other AI systems — no mandatory requirements under the EU AI Act beyond AI literacy.

No tier-specific requirements apply. Voluntary codes of conduct are encouraged under Article 95 to foster the application of some or all of the requirements for high-risk AI systems.

Art. 95

AI literacy obligations under Article 4 still apply to all providers and deployers regardless of risk tier.

Art. 4

Examples include: spam filters, AI-enabled video games, inventory management systems, and AI-powered content recommendation systems not falling under other risk categories.

Art. 95
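The four tiers above are checked in order of severity: a system is assigned the first (most restrictive) tier whose criteria it meets. As a rough sketch of that decision logic — not the navigator's actual implementation, and with illustrative flag names standing in for the full questionnaire — the ordering looks like this:

```python
# Hedged sketch of the tier-assignment order. The boolean flags are
# illustrative assumptions, not fields from the actual navigator.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    prohibited_practice: bool   # matches any Art. 5(1)(a)-(h) practice
    annex_iii_use_case: bool    # listed in Annex III, §1-§8
    transparency_trigger: bool  # chatbot, synthetic content, emotion
                                # recognition, or deep fake (Art. 50)

def classify(profile: SystemProfile) -> str:
    """Return the first (most severe) matching risk tier."""
    if profile.prohibited_practice:
        return "Unacceptable Risk"  # Art. 5: practice is banned outright
    if profile.annex_iii_use_case:
        return "High Risk"          # Annex III: strict pre-market requirements
    if profile.transparency_trigger:
        return "Limited Risk"       # Art. 50: transparency obligations
    return "Minimal Risk"           # Art. 95 voluntary codes; Art. 4
                                    # AI literacy still applies

print(classify(SystemProfile(False, True, True)))  # prints "High Risk"
```

Note that in practice the tiers are not mutually exclusive: a high-risk system that also generates synthetic content still carries the Article 50 transparency obligations on top of its high-risk requirements. The sketch only returns the most severe tier, which is what determines the headline classification.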

Know your tier. Now check your compliance.

Take the free assessment to see where you stand against your specific obligations.