GDPR & AI Compliance
GDPR has been quietly setting the bar for automated decisions since 2018. The EU AI Act now builds on it. Here is what Article 22 actually requires and how to satisfy both regimes with one operational discipline.
GDPR is not a new regulation. It is the regulation that anticipated where AI was going. Most of what the AI Act demands has a GDPR analogue, which means most of the work is already familiar to any data protection function.
Article 22 is the part that gets quoted most often in the AI context. It gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. There are exceptions (contract necessity, explicit consent, member-state law), but the operational requirements are constant: the controller has to be able to explain the logic, describe the significance, articulate the envisaged consequences, and provide a route to human review. None of these are obvious to retrofit. All are straightforward to design in.
The interaction between GDPR and the EU AI Act is now the day-to-day reality for most European compliance teams. GDPR governs the personal data the AI uses. The AI Act governs the system itself. The same AI system frequently needs a DPIA under one regime and a conformity assessment under the other, with substantial documentation overlap. Teams that run both processes in parallel, with shared evidence, spend roughly half of what teams running them as separate workstreams do.
What changes operationally
Three things. First, every automated decision with significant effect needs an audit-ready record (the GDPR Article 22 case for human intervention requires that the human can actually see what happened). Second, the lawful basis for the processing has to hold up across the AI lifecycle, not just at deployment, which means consent and legitimate-interest assessments need to be revisited as the system evolves. Third, data subject rights have to be exercisable in practice, not just in theory: a customer who asks for human review of an AI decision should not be the one tracing the system back to find out what it did.
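To make the first two requirements concrete, the sketch below shows one way an audit-ready record could capture the inputs, lawful basis, and policy version at decision time, so a later human reviewer can see what actually happened. It is a minimal sketch: the AutomatedDecisionRecord fields, the lawful_basis values, and the log_decision helper are illustrative assumptions, not a prescribed schema.

```python
# Illustrative sketch only: field names, values, and storage are assumptions,
# not a prescribed schema or a specific product's format.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class AutomatedDecisionRecord:
    subject_id: str        # pseudonymous reference to the data subject
    inputs: dict           # the data the model or rules actually saw
    lawful_basis: str      # e.g. "contract_necessity" or "explicit_consent"
    policy_version: str    # version of the rules/model in force at decision time
    outcome: str           # the decision produced
    reasoning: str         # human-readable account of the logic applied
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    human_review_requested: bool = False


def log_decision(record: AutomatedDecisionRecord, sink) -> None:
    """Append the record to an audit sink (file, queue, database, ...)."""
    sink.write(json.dumps(asdict(record)) + "\n")
```

Writing the record at decision time, rather than reconstructing it later, is what keeps the lawful-basis and human-review obligations checkable as the system evolves.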
Where Navedas fits
The realtime decision layer produces the audit trail that satisfies both regimes. Each AI decision is logged with inputs, applied rule, citation, verdict, and policy version. When a data subject exercises their Article 22 rights, the answer already exists in a form a human reviewer can read.
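As a rough illustration of how such a log answers an Article 22 request, the sketch below queries logged entries for one data subject and renders them for a human reviewer. The function names, the subject_id field, and the dict-based entries are illustrative assumptions, not the Navedas interface; only the logged fields named above (inputs, applied rule, citation, verdict, policy version) come from the description.

```python
# Hypothetical retrieval path for an Article 22 human-review request.
# Field names mirror the logged fields described above; the functions and
# the dict-based entries are illustrative assumptions, not a product API.
from typing import Iterable


def decisions_for_subject(log_entries: Iterable[dict], subject_id: str) -> list:
    """Return every logged decision affecting one data subject."""
    return [e for e in log_entries if e.get("subject_id") == subject_id]


def review_packet(entry: dict) -> str:
    """Render one logged decision in a form a human reviewer can read."""
    return (
        f"Decision for subject {entry['subject_id']} "
        f"(policy version {entry['policy_version']})\n"
        f"  Inputs:       {entry['inputs']}\n"
        f"  Applied rule: {entry['applied_rule']}\n"
        f"  Citation:     {entry['citation']}\n"
        f"  Verdict:      {entry['verdict']}\n"
    )
```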
Articles & resources
Audit-Ready Compliance
Decision logs that satisfy GDPR Article 22 and AI Act documentation simultaneously.
Explore →
Resource: EU AI Act Readiness
Map systems against AI Act high-risk obligations alongside existing GDPR posture.
Explore →
Solution: AI Risk Containment
The realtime decision layer that makes human-intervention rights operationally viable.
Explore →
Tool: Quarterly Exposure Calculator
Quantify exposure from automated decisions running without an Article 22-ready record.
Calculate →
Frequently asked questions
What does GDPR Article 22 require for AI?
Article 22 gives data subjects the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. There are exceptions (contract necessity, explicit consent, member-state law) but in all cases the controller has to provide meaningful information about the logic, the significance, and the envisaged consequences, plus a route to human review.
How does GDPR interact with the EU AI Act?
They are complementary, not duplicative. GDPR governs the personal data the AI uses; the AI Act governs the system itself. A given AI system may need a DPIA under GDPR and a conformity assessment under the AI Act, with overlap in the documentation but distinct legal bases. Compliance teams now run both processes in parallel, ideally with shared evidence.
What is a DPIA and when is it required for AI?
A Data Protection Impact Assessment is a structured analysis of the privacy risks of a processing activity. It is required when processing is likely to result in high risk to data subjects, which most AI systems involving personal data trigger. The DPIA covers necessity, proportionality, the risks identified, and the measures taken to address them.
What rights do data subjects have over AI decisions?
The right to be informed of automated decision-making. The right to obtain human intervention. The right to express their point of view and contest the decision. Plus the underlying GDPR rights: access, rectification, erasure, restriction, portability, and objection. Operationalising these requires the same audit trail that AI Act compliance demands, which is why the two regimes overlap so heavily.
Related topics
Satisfy GDPR and the AI Act with one operational discipline.
See how the realtime decision layer produces the audit trail both regimes require, with the human-intervention path already wired in.