Compliance is designed in, not bolted on
Every AI system Delancy builds goes through EU AI Act risk classification during the architecture phase. We identify the risk tier, map the regulatory obligations, and design the system to meet them from day one.
AI Act compliance is part of every build
When a client maps their processes with Delancy, we design the build architecture on top of that map. For every node where AI is involved, we classify it against the EU AI Act before a single line of code is written.
Risk classification
Each AI component is classified against the four EU AI Act risk tiers: unacceptable, high, limited, or minimal. We identify the exact application area and relevant articles.
Obligation mapping
Based on the risk tier, we map the specific obligations that apply: risk management systems, technical documentation, human oversight, transparency requirements, and post-market monitoring.
Compliance documentation
Every build architecture includes an EU AI Act compliance summary documenting the classification, obligations, and design decisions for each AI component in the system.
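Conceptually, the three steps above reduce to a lookup: each AI component is assigned a tier, and the tier determines the headline obligations. A minimal Python sketch of that data model (all names and the obligation lists are illustrative, not Delancy's actual tooling):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of Regulation (EU) 2024/1689."""
    UNACCEPTABLE = "unacceptable"  # Article 5 - prohibited practices
    HIGH = "high"                  # Annex III applications
    LIMITED = "limited"            # Article 50 transparency obligations
    MINIMAL = "minimal"            # AI literacy and voluntary codes

# Illustrative tier-to-obligation mapping (simplified, not exhaustive).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not build"],
    RiskTier.HIGH: ["risk management system", "technical documentation",
                    "human oversight", "conformity assessment",
                    "post-market monitoring"],
    RiskTier.LIMITED: ["disclose AI interaction",
                       "label AI-generated content"],
    RiskTier.MINIMAL: ["staff AI literacy", "voluntary codes of conduct"],
}

@dataclass
class AIComponent:
    """One AI-involved node in a build architecture."""
    name: str
    tier: RiskTier
    articles: list = field(default_factory=list)

    def obligations(self):
        return OBLIGATIONS[self.tier]

# Example: a CV-screening node falls under Annex III (employment).
screener = AIComponent("cv-screening", RiskTier.HIGH, ["Annex III", "Art. 14"])
assert "human oversight" in screener.obligations()
```

The compliance summary for a build is then just this record set serialised per component, which is why the same classification can be re-run when new workflows are added later.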
The four risk tiers under the EU AI Act
The EU AI Act (Regulation 2024/1689) classifies AI systems into four tiers based on the level of risk they pose. Each tier carries different obligations for providers and deployers.
Prohibited AI practices (Article 5)
AI systems that pose a clear threat to safety, livelihoods, or rights. These include social scoring, subliminal manipulation, real-time remote biometric identification in publicly accessible spaces, and emotion inference in workplaces. Delancy will never build systems that fall into this category.
Strict compliance required (Annex III)
AI used in employment decisions, credit scoring, education, critical infrastructure, or law enforcement. Requires risk management systems, technical documentation, human oversight, conformity assessment, and post-market monitoring. Delancy designs these obligations into the system architecture.
Transparency obligations (Article 50)
AI that interacts with people or generates content. Requires disclosure that users are interacting with AI, labelling of AI-generated content, and informing users when emotion recognition is in use. Most Delancy chatbots and content tools fall here.
AI literacy and voluntary codes (Articles 4 and 95)
Internal automation, data processing, document extraction, and operations management. Requires AI literacy for staff and encourages voluntary codes of conduct. Most operational workflow systems Delancy builds are in this tier.
Included with every AI build
AI Act compliance summary
A documented classification of every AI component in your system, with risk tier, relevant articles, and provider obligations.
Risk-aware architecture
Systems designed with the right level of human oversight, logging, and transparency baked into the architecture from the start.
Human oversight by design
For high-risk applications, we build in override capabilities, monitoring dashboards, and escalation paths as required by Articles 14 and 26.
Classification at every stage
If you come back for additional workflows or AI agents, each new component goes through the same classification process before it's built.
Ready to build with confidence?
Book a discovery call to discuss your processes and how Delancy can design an AI system that meets EU AI Act requirements from day one.
Book a Discovery Call
EU AI Act tools
Use our free interactive tools to understand your current position.
Risk Classifier
Find out which risk tier your AI system falls into.
Compliance Assessment
Check your readiness across five key areas.
Compliance Checklist
Role-based obligations for providers, deployers, and importers.
Article Browser
Search and browse all 113 articles of the regulation.