Delancy

The EU AI Act in 5 minutes.

You do not need to read 144 pages of regulation. This page covers what the Act is, whether it applies to you, what it requires, and when.

01

What is the EU AI Act.

The EU AI Act is the world's first comprehensive law regulating artificial intelligence. It was published on 12 July 2024 and entered into force on 1 August 2024. Its official name is Regulation (EU) 2024/1689.

It works like product safety legislation. Before you can sell a toaster in the EU, it has to meet safety standards. The AI Act does the same thing for AI systems: before you can place an AI system on the EU market or use one in the EU, it has to meet certain requirements depending on how risky it is.

The Act has 113 articles across 13 chapters, plus 13 annexes. But you do not need to know all of them. Most businesses only need to understand a handful.

02

Does it apply to you.

Probably yes, if you do business in the EU and either build AI systems or use them. The Act applies to:

Providers

You develop an AI system (or commission someone to develop it for you) and place it on the EU market or put it into service under your own name or trademark. This includes companies outside the EU if your AI system is used inside the EU.

Deployers

You use an AI system under your authority. If you buy an AI tool from a vendor and use it in your business operations, you are a deployer. This is the category most businesses fall into.

Importers

You bring an AI system from outside the EU into the EU market. You have to verify the provider has done their compliance work before you can sell it.

Distributors

You make an AI system available on the EU market without modifying it. You need to check the CE marking and documentation are in order.

It does not apply to: AI used purely for military or national security purposes, AI used for scientific research before it reaches the market, and AI used by individuals for purely personal, non-professional activity.
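The four roles above can overlap, and one organisation can hold several at once. As a rough first pass (this is an illustrative sketch with invented flag names, not the Act's legal tests):

```python
# Hypothetical triage: which EU AI Act role(s) might apply to an organisation.
# The question names are simplified stand-ins, not legal definitions.

def likely_roles(develops_ai: bool, sells_under_own_brand: bool,
                 uses_ai_in_operations: bool, imports_from_outside_eu: bool,
                 resells_unmodified: bool) -> list[str]:
    roles = []
    if develops_ai and sells_under_own_brand:
        roles.append("provider")
    if uses_ai_in_operations:
        roles.append("deployer")
    if imports_from_outside_eu:
        roles.append("importer")
    if resells_unmodified:
        roles.append("distributor")
    return roles

# A typical business buying an AI tool from a vendor and using it in-house:
print(likely_roles(develops_ai=False, sells_under_own_brand=False,
                   uses_ai_in_operations=True, imports_from_outside_eu=False,
                   resells_unmodified=False))  # ['deployer']
```

If the result is more than one role, you carry the obligations of each.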

03

The four risk tiers.

The Act sorts AI systems into four buckets based on risk. The higher the risk, the stricter the rules. Most AI you encounter in everyday business falls into the bottom two tiers.

BANNED
Unacceptable Risk

These AI practices are completely prohibited. You cannot build them, sell them, or use them in the EU.

Examples: Social scoring systems (rating citizens to determine access to services). AI that manipulates people through subliminal techniques. Scraping the internet to build facial recognition databases. Using emotion recognition AI on employees or students.

Enforceable since 2 February 2025. Fines up to 35 million EUR or 7% of global turnover.

STRICT
High Risk

Allowed, but with heavy requirements. You need a risk management system, data governance, technical documentation, human oversight, conformity assessment, and ongoing monitoring.

Examples: AI used in recruitment and HR decisions. AI that determines creditworthiness. AI used in law enforcement or border control. AI in medical devices. AI that evaluates students or determines access to education.

Most obligations apply from 2 August 2026.

DISCLOSE
Limited Risk

Just one main obligation: tell people they are dealing with AI. If your chatbot talks to customers, they need to know it is a chatbot. If you generate images, audio, or video with AI, you need to label it as AI-generated.

Examples: Customer service chatbots. AI-generated marketing images. Deepfakes (which must be labelled as artificially generated). Emotion recognition systems (where not prohibited).

FREE
Minimal Risk

No mandatory obligations. This covers the vast majority of AI systems in use today. Voluntary codes of conduct are encouraged but not required.

Examples: Spam filters. AI-powered inventory management. Video game AI. Content recommendation systems. Most internal business automation.

One obligation still applies to everyone: AI literacy. Your staff need to understand the AI tools they use (Article 4).
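The tier logic above works like a cascade: check the prohibitions first, then the high-risk categories, then the disclosure cases, and whatever is left is minimal risk. A rough sketch of that cascade (the flag names are simplified stand-ins; the real tests live in the Act's prohibited-practice and high-risk provisions):

```python
# Illustrative first-pass triage of one AI tool into the Act's four tiers.
# These boolean flags are hypothetical simplifications for this sketch,
# not the regulation's actual criteria.

def triage_tier(does_social_scoring: bool, decides_about_people: bool,
                interacts_with_public: bool, generates_content: bool) -> str:
    if does_social_scoring:                      # prohibited practices first
        return "unacceptable (prohibited)"
    if decides_about_people:                     # e.g. recruitment, credit scoring
        return "high risk"
    if interacts_with_public or generates_content:
        return "limited risk (disclosure)"
    return "minimal risk"                        # everything else

# A customer-facing chatbot that makes no decisions about people:
print(triage_tier(False, False, True, False))  # limited risk (disclosure)
```

A real classification needs the full lists in the Act, but the ordering — prohibitions, then high risk, then disclosure, then minimal — is the shape of the analysis.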

04

General-purpose AI models.

There is a separate set of rules for the companies that build large AI models, such as the models behind ChatGPT, Claude, and Gemini. These are called "general-purpose AI models" (GPAI).

If you just use these models (through an API or a product built on them), you are a deployer, not a GPAI provider. The GPAI obligations fall on the company that trained the model, not on you.

What GPAI providers must do: publish training data summaries, provide technical documentation to downstream providers, comply with EU copyright law, and for models with "systemic risk" (very large models), conduct adversarial testing and report serious incidents.

These rules have applied since 2 August 2025.

05

What you actually need to do.

For most UK and Channel Islands businesses using AI tools built by someone else:

1
Know what AI you are using

Audit your AI tools. Which ones interact with customers? Which ones make decisions about people? Which ones handle sensitive data? This determines which risk tier each tool falls into.

2
Check you are not using anything prohibited

This is already enforceable. If any of your AI tools do social scoring, scrape biometric data, infer emotions in the workplace, or manipulate behaviour, stop using them now.

3
Disclose AI to your customers

If a chatbot talks to your customers, tell them it is AI. If you publish AI-generated content, label it. This is cheap and easy to implement.

4
Train your staff

Article 4 requires all providers and deployers to ensure staff have sufficient "AI literacy". This means your people understand what AI tools they are using, what those tools can and cannot do, and how to use them appropriately.

5
If you use high-risk AI, prepare for August 2026

If any of your AI systems fall into the high-risk category (HR decisions, credit scoring, critical infrastructure), you will need documented processes: human oversight procedures, risk assessments, logging, and monitoring. Start mapping these now.
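Step 1 above can be as simple as a running inventory: one entry per tool, the tier you believe it falls into, and the action it triggers. A minimal sketch (tool names and tiers here are invented examples):

```python
# Minimal AI inventory for the audit step: each tool, its believed risk
# tier, and the follow-up action it triggers. All entries are examples.

inventory = [
    {"tool": "support chatbot",   "tier": "limited", "action": "disclose AI to users"},
    {"tool": "CV screening tool", "tier": "high",    "action": "prepare for Aug 2026"},
    {"tool": "spam filter",       "tier": "minimal", "action": "none"},
]

# Surface only the tools that still need work:
for item in inventory:
    if item["action"] != "none":
        print(f"{item['tool']}: {item['action']}")
```

A spreadsheet does the same job; the point is that every tool gets a tier and an owner before the August 2026 deadline.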

06

Key dates.

PASSED
2 February 2025

Prohibited AI practices became enforceable. If you are doing anything on the banned list, you are already in violation.

PASSED
2 August 2025

Rules for general-purpose AI models and the EU governance structure became applicable.

NEXT
2 August 2026

The big one. Most of the regulation becomes applicable: high-risk requirements, transparency obligations, penalties, market surveillance. This is the deadline most businesses are preparing for.

2027
2 August 2027

High-risk AI embedded in regulated products (medical devices, vehicles, machinery) must comply.

2030
2 August 2030

Final deadline. All remaining transitional provisions expire. Legacy AI systems used by public authorities must comply.

07

What happens if you do not comply.

The fines are structured like GDPR: they scale with the severity of the violation and the size of your business, and each cap is the higher of a fixed sum and a percentage of global annual turnover.

35M EUR or 7% of global turnover
Prohibited practices — using banned AI systems.

15M EUR or 3% of global turnover
Other violations — non-compliance with the Act's requirements.

7.5M EUR or 1% of global turnover
Incorrect information — supplying misleading data to authorities.

SMEs and start-ups get proportionate caps (for them, the lower of the two amounts applies). The regulation explicitly requires fines to be "effective, proportionate and dissuasive" — not punitive for small businesses that make genuine efforts to comply.
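The arithmetic behind the caps is simple: take the fixed sum and the turnover percentage, and the ceiling is the higher of the two (this sketch assumes that "whichever is higher" rule, which applies to large businesses rather than SMEs):

```python
# Sketch of the fine-ceiling arithmetic: the cap is the greater of a
# fixed sum and a percentage of global annual turnover.

def fine_ceiling_eur(fixed_eur: float, pct_of_turnover: float,
                     global_turnover_eur: float) -> float:
    return max(fixed_eur, pct_of_turnover * global_turnover_eur)

# Prohibited-practice tier for a firm with 1 billion EUR global turnover:
# 7% of 1bn = 70M, which exceeds the 35M fixed sum.
print(fine_ceiling_eur(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For a smaller firm with, say, 100M EUR turnover, 7% is only 7M, so the 35M fixed sum becomes the ceiling — which is why the SME-specific caps matter.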

08

Does this apply in the UK and Channel Islands.

The EU AI Act is EU law. It does not directly apply in the UK, Jersey, Guernsey, or Gibraltar.

But it applies to you if your AI systems are used by people in the EU, or if the output of your AI systems is used in the EU. If you serve EU clients, process EU citizen data with AI, or sell AI-powered products into the EU market, you fall within scope.

The UK is developing its own AI regulation framework. The Channel Islands and Gibraltar are likely to follow with their own proportionate approaches. But for now, the EU AI Act is the only enforceable AI-specific regulation with cross-border reach.

Practical advice: If you have any EU exposure at all, treat the EU AI Act as your baseline. Anything you do to comply with it is likely to meet or exceed whatever the UK eventually requires.

Want to go deeper.

Use the full Navigator to browse all 113 articles, check your compliance obligations by role, or track the implementation timeline.

Need help mapping the Act to your business operations?

Book a Discovery Call →