EU AI Act Compliance

High-risk provisions become enforceable on August 2, 2026. Here is what compliance actually requires, what enforcement will look like, and how to scope a readiness assessment that beats the deadline.

The EU AI Act is the first comprehensive AI regulation with real enforcement teeth. The deadline is no longer abstract. The work to be ready has to start now.

The Act takes a risk-based approach. AI systems are categorised by the harm they could cause, and obligations scale to the category. Most enterprise systems doing meaningful work in regulated sectors (employment, credit, education, essential services, law enforcement, migration) land in the high-risk tier. The obligations for that tier are extensive: a documented risk management system across the AI lifecycle, data governance evidence, technical documentation, transparency disclosures, and demonstrable human oversight measures. None of these are check-box items. Each is the kind of thing a regulator can ask for in writing.

What changes in 2026 is the timeline. The high-risk provisions become enforceable on August 2, and the grace period European enterprises have spent the past twelve months relying on ends with them. The fine structure is large enough (up to 7 percent of global turnover for the worst category) that it has moved from a legal-team conversation to a CFO conversation.

What readiness actually looks like

A real readiness assessment maps every AI system the company runs against the Act's risk categories, identifies the obligations that apply to each, audits the existing evidence, and produces a remediation roadmap with owners and dates. The output is a document you can show a regulator without needing a follow-up clarification. The work is rarely glamorous; the result is a defensible posture.
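
The per-system output of such an assessment can be captured as a simple record. This is a rough sketch only; the field names and example values are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class SystemAssessment:
    name: str
    risk_tier: str            # "prohibited" | "high" | "limited" | "minimal"
    obligations: list[str]    # obligations that apply to this tier and use case
    evidence_gaps: list[str]  # obligations with no current documentation
    owner: str                # who remediates
    due: str                  # target date, ISO format

# Hypothetical example: employment use cases fall in the high-risk tier.
hr_screener = SystemAssessment(
    name="CV screening model",
    risk_tier="high",
    obligations=["risk management system", "data governance",
                 "technical documentation", "transparency", "human oversight"],
    evidence_gaps=["technical documentation", "human oversight"],
    owner="HR engineering lead",
    due="2026-07-01",
)

# The remediation roadmap is simply the systems with open evidence gaps.
inventory = [hr_screener]
roadmap = [s for s in inventory if s.evidence_gaps]
```

The point of the structure is the last line: once every system is inventoried this way, the roadmap falls out of the data rather than out of a meeting.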

The systems that turn out to be most exposed are usually not the ones the AI team has been worrying about. They are the older predictive models that have been running for years, embedded in HR or credit or operations workflows, that were never built with the AI Act's documentation requirements in mind. The new generative work is often more visible and easier to govern. The legacy work is where the surprises live.

Where Navedas fits

Navedas runs a one-week Readiness Assessment that maps your AI systems against the high-risk provisions, quantifies fine exposure, and delivers a remediation roadmap. The realtime decision layer then handles the ongoing oversight requirement: human-in-the-loop checkpoints, audit logs, and policy enforcement for every covered decision.

Frequently asked questions

What is the EU AI Act?

The EU AI Act is the European Union's risk-based regulation of artificial intelligence systems. It categorises AI systems into prohibited, high-risk, limited-risk, and minimal-risk tiers, with requirements scaled to the category. The high-risk tier carries the heaviest documentation, monitoring, and oversight obligations, and is the tier most enterprise systems fall into.

When do the high-risk provisions become enforceable?

The high-risk obligations apply from August 2, 2026 for AI systems already on the market and at first deployment for new ones. Prohibited practices have been enforceable since February 2025. General-purpose AI model obligations applied from August 2, 2025. The high-risk deadline is the one most enterprise teams are now scoping toward.

What does compliance actually require?

Five categories of obligation: a risk management system across the AI lifecycle; data and data governance documentation; technical documentation and record-keeping; transparency and instructions for deployers; and human oversight measures. Each comes with specific evidence a regulator can request. Compliance is a documentation and process discipline, not a single tool purchase.

What are the fines?

Up to 35 million euros or 7 percent of worldwide annual turnover for prohibited-practice violations, whichever is higher. Up to 15 million or 3 percent for non-compliance with most other obligations. Up to 7.5 million or 1 percent for supplying incorrect information to authorities. The exposure is large enough that the cost of readiness is small relative to the cost of being wrong.
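
The "whichever is higher" rule is simple arithmetic. A minimal sketch using the figures above (the function name and tier labels are our own; amounts in euros, turnover as worldwide annual turnover):

```python
def fine_ceiling(violation: str, turnover: int) -> int:
    """Maximum possible fine: the higher of a fixed amount or a
    percentage of worldwide annual turnover."""
    tiers = {
        "prohibited_practice":   (35_000_000, 7),  # fixed euros, percent
        "other_obligation":      (15_000_000, 3),
        "incorrect_information": (7_500_000, 1),
    }
    fixed, percent = tiers[violation]
    return max(fixed, turnover * percent // 100)

# A company with 2 billion euros turnover: 7% (140M) exceeds the 35M floor.
print(fine_ceiling("prohibited_practice", 2_000_000_000))  # → 140000000
```

For any company above 500 million euros in turnover, the percentage term dominates the worst tier, which is why the exposure scales with the business rather than capping out.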

Beat the August 2026 deadline.

Run the one-week Readiness Assessment. Map every AI system, quantify fine exposure, and get a remediation roadmap with owners and dates.