EU AI ACT · REGULATION 2024/1689 · IN FORCE AUG 2024 · GDPR · REGULATION 2016/679 · COMPLIANCE ATLAS
Compliance Atlas · 2024–2026

EU AI ACT
& GDPR
COMPLIANCE
BLUEPRINT

A comprehensive reference of every key article across the EU AI Act and GDPR — with real-world use cases, violation examples, and a full blueprint for a law-compliant AI agent.

113 EU AI Act Articles
99 GDPR Articles
4 Risk Tiers (AI Act)
8 GDPR Data Principles
€35M Max AI Act Fine

Part I · Regulation (EU) 2024/1689

EU Artificial Intelligence Act

The world's first comprehensive legal framework for artificial intelligence — classifying AI systems by risk and imposing obligations on providers, deployers, and importers across the EU.

Art. 1 Subject Matter & Objectives General
Establishes the AI Act's purpose: to lay down harmonised rules for AI systems placed on or used in the EU market, ensuring safety and respect for fundamental rights.
  • An EU company developing an AI tool for document review knows they must comply before market placement
  • A US company exporting an AI hiring tool to Germany is made subject to these rules
  • A hospital deploying a diagnostic AI confirms it is in scope before procurement
  • Claiming a product is "just software" to escape AI Act scope when it uses ML models
  • A non-EU developer ignoring the regulation because they are based outside the EU
Art. 2 Scope of Application General
Defines who is covered: providers, deployers, importers, distributors, and manufacturers placing AI systems in the EU market or affecting EU persons, regardless of where they are established.
  • A UK AI startup selling to EU clients is captured under extraterritorial scope
  • An open-source AI framework used commercially in the EU is in scope for deployers
  • EU military AI systems are explicitly excluded and handled separately
  • A US firm deploying facial recognition in EU airports claiming extraterritorial exemption
  • A deployer misclassifying as a "distributor" to avoid provider-level obligations
Art. 3 Definitions General
Defines 65+ key terms including "AI system," "provider," "deployer," "general-purpose AI model," "high-risk AI system," and "intended purpose."
  • A regulator uses Art. 3's definition to determine whether a rule-based system qualifies as an "AI system"
  • A company determines they are a "deployer" (not provider) because they use a third-party AI system
  • Legal teams use the "general-purpose AI" definition to understand GPT-4 class model obligations
  • Misclassifying a system that adapts outputs based on training as "non-AI" to avoid compliance
  • Calling yourself a "distributor" to evade provider obligations when you substantially modify an AI system
Art. 5 Prohibited AI Practices Unacceptable
Prohibits AI systems that: use subliminal manipulation, exploit vulnerabilities, perform biometric categorisation by sensitive attributes, conduct untargeted scraping of facial images, carry out social scoring, or infer emotions in workplace and education contexts. Only narrowly drawn exceptions exist (e.g. judicially authorised real-time biometric identification for serious crimes, or safety-motivated emotion recognition).
  • Real-time biometric ID by law enforcement with prior judicial authorisation in serious crime cases
  • Post-hoc biometric analysis of a recorded crime scene by police with authorisation
  • Emotion detection for safety-critical workplace monitoring with worker consent and oversight
  • A retail chain scraping social media images to build a customer emotion database
  • A government deploying social credit scoring to restrict citizens' access to services
  • An insurer using subliminal audio cues in claims calls to influence customer decisions
  • A school deploying emotion AI to profile students' attentiveness without consent
  • Real-time facial recognition in public spaces without law enforcement justification
Art. 6 Classification of High-Risk AI High Risk
An AI system is high-risk if it is used as a safety component of regulated products (Annex I), or falls in Annex III categories including biometrics, education, employment, credit scoring, law enforcement, migration, and justice.
  • An AI CV-screening tool used in hiring decisions is classified as high-risk (Annex III)
  • An AI medical device component triggers both MDR and AI Act obligations
  • An AI credit-scoring model used for mortgage decisions is high-risk
  • AI in autonomous vehicles is high-risk under Annex I machinery safety
  • Labelling a high-risk hiring AI as "decision support only" without the required governance
  • Deploying a biometric access AI in a workplace without Annex III classification assessment
Art. 9 Risk Management System High Risk
High-risk AI providers must establish and maintain a continuous risk management system covering identification, estimation, evaluation, and mitigation of known and foreseeable risks — throughout the entire lifecycle.
  • A financial AI company documents a risk register with residual risk acceptance for each identified harm
  • A healthcare AI provider continuously monitors post-deployment outputs for risk drift
  • An AI hiring tool provider updates its risk assessment when adding a new job category
  • Performing a one-time pre-launch risk assessment and never updating it
  • Failing to assess risks arising from interaction of the AI system with other systems
  • Accepting residual risks without documenting reasoning or testing mitigation measures
Art. 10 Data & Data Governance High Risk
Training, validation, and testing data for high-risk AI must meet quality criteria: relevance, representativeness, freedom from errors, completeness, and appropriate consideration of characteristics of the person groups that may be affected.
  • A credit AI provider audits training data for racial under-representation before deployment
  • A medical AI company validates test data against diverse demographic subgroups
  • An NLP provider documents data sources and pre-processing pipelines for each model version
  • Training a recidivism prediction AI on historically biased criminal justice data without bias mitigation
  • Using scraped internet data without filtering for known errors, hate speech, or outliers
  • Failing to document the proportion of demographic groups in training sets
Art. 11 Technical Documentation High Risk
Providers must maintain technical documentation (Annex IV) before market placement: general description, design logic, training methodology, performance metrics, limitations, and testing results.
  • A startup documents model architecture, training data, and known limitations in an Annex IV package
  • A notified body reviews technical documentation to certify a high-risk AI before CE marking
  • An AI provider keeps versioned documentation to trace changes across model updates
  • Providing only a marketing brochure instead of Annex IV technical documentation
  • Failing to update technical documentation when a model undergoes substantial modification
Art. 12 Record-Keeping & Logging High Risk
High-risk AI systems must automatically log events relevant to identifying risks and post-market monitoring — with logs retained for the period specified in the conformity assessment procedure (minimum 6 months).
  • A law enforcement AI logs every query, operator, and output timestamp for audit trails
  • A medical AI logs confidence scores alongside decisions for each patient interaction
  • An HR AI preserves decision logs for disputed hiring or promotion decisions
  • Deleting AI decision logs after 30 days for storage cost reasons
  • Logging outputs but not logging the inputs that triggered each decision
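To make the logging obligation concrete, here is a minimal sketch of an Art. 12-style decision log record in Python. The field names, JSONL sink, and `log_decision` helper are illustrative assumptions, not requirements taken from the Act:

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionLogRecord:
    """One automatically generated log entry per AI decision (Art. 12)."""
    event_id: str      # unique reference for audits and disputed decisions
    timestamp: str     # ISO 8601, UTC
    operator_id: str   # natural person overseeing the session (Art. 14)
    input_digest: str  # hash or reference to the input that triggered the decision
    output: str        # the system's decision or recommendation
    confidence: float  # model confidence, kept for post-market review

def log_decision(operator_id: str, input_digest: str, output: str,
                 confidence: float, sink: str = "decision_log.jsonl") -> DecisionLogRecord:
    record = DecisionLogRecord(
        event_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        operator_id=operator_id,
        input_digest=input_digest,
        output=output,
        confidence=confidence,
    )
    # Append-only storage; the retention period comes from the conformity assessment.
    with open(sink, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

Note that the record captures inputs and outputs together, which is precisely what the "logging outputs but not inputs" violation above fails to do.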
Art. 13 Transparency & Information High Risk
High-risk AI systems must be sufficiently transparent for deployers to understand their purpose, capabilities, limitations, accuracy, and the population the AI is intended to serve — documented in instructions for use.
  • An AI tool for judges includes documentation noting demographic groups where accuracy drops
  • A credit AI's instructions clearly state it should not be used for insurance pricing
  • A medical imaging AI documents performance benchmarks across imaging device types
  • Providing "black box" AI to a bank with no documentation on feature importance
  • Claiming 99% accuracy without disclosing this applies only to a specific demographic
Art. 14 Human Oversight High Risk
High-risk AI systems must be designed so natural persons can effectively oversee them. This includes the ability to understand outputs, disregard/override/stop the system, and not be over-reliant on AI decisions ("automation bias" safeguards).
  • A benefits assessment AI always routes final decisions to a human case worker
  • A parole AI gives officers an override button with mandatory reason documentation
  • An autonomous drone has a remote pilot kill-switch with a dead-man timeout
  • Designing an AI so fast that humans cannot realistically review decisions before they execute
  • Presenting AI recommendations as "final decisions" to eliminate perceived human liability
  • Training operators only to accept AI outputs and never to question or override them
Art. 15 Accuracy, Robustness & Cybersecurity High Risk
High-risk AI must achieve appropriate accuracy levels, be resilient to errors and inconsistencies, and be resistant to adversarial attacks (prompt injection, data poisoning, model evasion) particularly where outputs influence significant decisions.
  • A fraud detection AI is tested against adversarial examples before deployment
  • A medical AI undergoes robustness testing under corrupted or noisy input conditions
  • A credit AI reports confidence intervals alongside each score to flag uncertain decisions
  • Deploying a hiring AI that can be fooled by simple keyword stuffing in CVs
  • Using a model that was not tested against adversarial inputs in a law enforcement context
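A toy robustness probe in the spirit of the CV keyword-stuffing example above. `score_cv` is a hypothetical model interface supplied by the caller, and the tolerance is an arbitrary illustration:

```python
def keyword_stuffing_test(score_cv, base_cv: str, keywords: list[str],
                          max_allowed_lift: float = 0.05) -> bool:
    """Toy Art. 15 robustness probe: appending job-ad keywords to an
    otherwise unchanged CV should not materially lift its score."""
    baseline = score_cv(base_cv)
    stuffed = score_cv(base_cv + "\n" + " ".join(keywords * 20))
    return (stuffed - baseline) <= max_allowed_lift
```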
Art. 16–17 Provider Obligations & QMS High Risk
Providers must implement a Quality Management System covering: strategy, design, development, testing, deployment monitoring, complaint handling, and corrective actions — documented and subject to audit.
  • A medical AI startup builds an ISO 9001-aligned QMS with AI-specific controls
  • A fintech documents its model lifecycle from data collection to post-deployment monitoring
  • A provider implements a formal change management process for model updates
  • Pushing model updates to production without a documented change assessment
  • No formal process for handling user-reported AI errors or bias complaints
Art. 26 Deployer Obligations High Risk
Deployers (organisations using high-risk AI) must: use AI per provider instructions, assign human oversight, monitor performance, and inform affected natural persons of AI use. Public bodies and certain private deployers must also conduct a fundamental rights impact assessment (Art. 27) before deployment.
  • A local authority completes a Fundamental Rights Impact Assessment before deploying predictive policing
  • An employer notifies workers that an AI monitors productivity and routes disputes to HR
  • A bank assigns a responsible officer to review AI credit decisions weekly
  • A public hospital using an AI triage tool without telling patients it influences their care pathway
  • A firm using an AI hiring tool outside its documented intended purpose (e.g. for promotion decisions)
Art. 43 Conformity Assessment High Risk
Before placing high-risk AI on market, providers must carry out a conformity assessment. For most Annex III systems, this is self-assessment. For biometrics and critical infrastructure, a third-party notified body assessment is mandatory.
  • A credit AI undergoes internal conformity assessment with a signed Declaration of Conformity
  • A biometric verification system contracts a notified body for third-party review
  • An AI provider repeats conformity assessment when making a "substantial modification" to the model
  • Affixing a CE mark without completing the required conformity assessment procedure
  • Claiming self-assessment sufficiency for a real-time facial recognition system at a border
Art. 47 EU Declaration of Conformity High Risk
The provider must draw up and sign an EU Declaration of Conformity (DoC) before placing a high-risk AI on the market. The DoC must include: provider details, system description, standards applied, notified body reference (if applicable), and a signed statement of compliance.
  • An AI diagnostic tool's DoC references Annex IV technical documentation and EN ISO 13485
  • A regulator pulls the DoC to verify compliance during a post-market inspection
  • A distributor checks the DoC before agreeing to carry a high-risk AI product
  • Signing a Declaration of Conformity without the underlying technical documentation being complete
  • Not updating the DoC when the AI system is substantially modified
Art. 50 Transparency for Limited-Risk AI Limited Risk
Providers of AI systems that interact with users (chatbots), generate deepfakes, or produce other synthetic content must ensure people are informed they are interacting with an AI or viewing AI-generated content. Synthetic content must be marked in a machine-readable format, with watermarking where technically feasible.
  • A customer service chatbot displays "You are chatting with an AI assistant" at the start of each session
  • A news agency's AI-generated video carries C2PA provenance metadata as its machine-readable marking
  • A virtual therapist app discloses at onboarding that the therapist is an AI
  • A call centre bot pretending to be a human named "Sarah" without disclosure
  • A political campaign releasing AI-generated candidate videos without any disclosure label
  • A media company removing AI watermarks from synthetic news images
Art. 51 GPAI Classification GPAI
General-purpose AI models are classified by systemic risk. A GPAI model presents systemic risk if trained with compute exceeding 10^25 FLOPs or if the Commission designates it based on capabilities and reach (e.g. GPT-4 class or above).
  • OpenAI's GPT-4 and similar frontier models are subject to systemic-risk GPAI obligations
  • A smaller open-source model (e.g., 7B parameters, low compute) falls under standard GPAI rules only
  • The EU AI Office uses the FLOP threshold to objectively trigger systemic risk review
  • A GPAI provider underreporting training compute to fall below the 10^25 FLOP threshold
  • A foundation model API provider failing to notify the AI Office of a new high-compute model release
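The 10^25 FLOP trigger can be pre-screened with the common 6·N·D heuristic (training compute ≈ 6 × parameters × training tokens). This is a rough engineering estimate, not a method prescribed by the Act:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2)

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Standard 6*N*D rule of thumb for dense transformer training compute.
    return 6 * n_params * n_tokens

# Example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)  # ≈ 6.3e24
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False: below the threshold
```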
Art. 53 GPAI Provider Obligations GPAI
All GPAI model providers must: maintain technical documentation, provide information to downstream providers, put in place a copyright-compliance policy, and publish a summary of the content used for training. Systemic-risk models additionally require adversarial testing and incident reporting (Art. 55).
  • A GPAI provider publishes a detailed model card covering training data, capabilities, and known limitations
  • An API provider implements usage policies refusing misuse cases (e.g., CSAM generation)
  • A frontier lab conducts red-team adversarial testing and submits results to the AI Office
  • A GPAI provider releasing a model with no model card or documentation of training data sources
  • Failing to report a serious incident involving GPAI misuse for critical infrastructure attacks
  • Not having copyright clearance processes for training data scraped from the internet
Art. 55 Systemic Risk Obligations GPAI SR
Providers of GPAI models with systemic risk must: evaluate models using standardised protocols, perform adversarial testing (red-teaming), assess and mitigate systemic risks, track and report serious incidents, and ensure cybersecurity protections for the model and its physical infrastructure.
  • Anthropic conducts structured red-teaming before each Claude major release per Art. 55
  • A GPAI provider reports a prompt-injection incident that caused harmful output to the EU AI Office within 72 hours
  • A lab partners with external safety evaluators for standardised capability benchmarking
  • A frontier model provider not reporting a serious misuse incident that caused financial harm at scale
  • Claiming red-teaming was performed without adequate documentation of methodology
Art. 72 Post-Market Monitoring High Risk
Providers of high-risk AI must implement a post-market monitoring plan from day one, actively collecting and analysing performance data in deployed environments. The plan feeds into corrective actions and regulators' oversight activities.
  • A credit AI provider monitors quarterly accuracy drift across demographic segments post-deployment
  • A healthcare AI developer collects clinician feedback reports as part of post-market surveillance
  • An AI provider publishes annual post-market performance summaries to market surveillance authorities
  • Considering compliance complete at launch and never revisiting model performance
  • Not establishing a feedback loop from deployers to providers about AI system errors
Art. 73 Serious Incident Reporting High Risk
Providers must report serious incidents to the market surveillance authority without undue delay and no later than 15 days after becoming aware (deaths: 10 days; widespread infringements or critical-infrastructure disruption: 2 days). A "serious incident" includes death or serious harm caused by an AI system, or serious disruption of critical infrastructure.
  • A hospital AI misdiagnosis causing a patient death triggers an incident report within 10 days
  • An AI safety system failure in an autonomous vehicle is reported within 15 days
  • A predictive policing error leading to wrongful arrest is escalated as a serious incident
  • A provider concealing an AI-related patient death to avoid regulatory scrutiny
  • Reporting after 30 days because the internal review process was too slow
Art. 99 Penalties & Fines Enforcement
Prohibited AI practices (Art. 5): up to €35M or 7% of global annual turnover. Other violations: up to €15M or 3% of turnover. Supplying incorrect information to regulators: up to €7.5M or 1% of turnover. For SMEs and startups, each cap is the lower of the stated amount and percentage.
  • A startup in a regulatory sandbox benefits from reduced penalties for first-time violations
  • Voluntary disclosure of a compliance gap before enforcement can reduce penalty severity
  • Demonstrated good-faith QMS and post-market monitoring mitigates financial exposure
  • Deploying an unacceptable-risk AI (social scoring) commercially — €35M / 7% turnover
  • A Fortune 500 AI provider with €20B global turnover faces potential fines above €600M at the 3% tier
  • Providing false information during a market surveillance investigation
Art. 57–61 AI Regulatory Sandboxes Minimal Risk
Member States must establish regulatory sandboxes allowing AI development and testing in controlled real-world conditions before market launch, with reduced liability for participating providers if safety protocols are followed.
  • A startup tests a healthcare AI on live patient data within a supervised sandbox without full conformity requirements
  • A city partners with an AI provider to trial a traffic management system in a designated sandbox zone
  • A sandbox participant is shielded from certain penalties while testing a novel high-risk AI
  • Using sandbox status to deploy commercially beyond the approved test boundary
  • Not maintaining the required monitoring and documentation even while in the sandbox
Key Timing: Prohibited practices (Art. 5) were enforceable from 2 Feb 2025. GPAI obligations apply from 2 Aug 2025. High-risk AI (Annex III) obligations apply from 2 Aug 2026. Full Act enforcement from 2 Aug 2027.
Part II · Regulation (EU) 2016/679

General Data Protection Regulation (GDPR)

The foundational data protection law governing collection, processing, storage, and transfer of personal data — critically relevant to any AI system that processes information about natural persons.


Lawfulness, Fairness & Transparency

Art. 5(1)(a) — Data must be processed lawfully, fairly, and in a transparent manner to the data subject.

Purpose Limitation

Art. 5(1)(b) — Data collected for specified, explicit, legitimate purposes and not further processed incompatibly.

Data Minimisation

Art. 5(1)(c) — Data must be adequate, relevant, and limited to what is necessary for the processing purpose.

Accuracy

Art. 5(1)(d) — Personal data must be accurate and kept up to date; inaccurate data must be erased or rectified.

Storage Limitation

Art. 5(1)(e) — Data kept in identifiable form only as long as necessary for the stated purpose.

Integrity & Confidentiality

Art. 5(1)(f) — Processed with appropriate security, protecting against unauthorised access or accidental loss.

Accountability

Art. 5(2) — The controller is responsible for demonstrating compliance with all of the above principles.

Special Category Data

Art. 9 — Extra restrictions on health, genetic, biometric, racial, political, religious, and sexual orientation data.

Art. 6 Lawful Basis for Processing Core
Processing is lawful only on one of six grounds: consent, contract performance, legal obligation, vital interests, public task, or legitimate interests (balancing test required for the last).
  • An AI recommendation engine processes purchase history under Art. 6(1)(b) — performance of contract
  • A healthcare AI processes patient records under Art. 6(1)(c) — legal obligation (medical records law)
  • A fraud detection AI is justified under Art. 6(1)(f) — legitimate interest (fraud prevention outweighs individual impact)
  • Using purchase data to train an unrelated marketing AI without a lawful basis
  • Claiming legitimate interest for profiling without completing a Legitimate Interest Assessment
  • Processing employee personal data for AI training on the basis of employment contract without specific consent
Art. 7 Conditions for Consent Core
Consent must be freely given, specific, informed, and unambiguous (a clear affirmative act). Pre-ticked boxes are invalid. Consent is freely given only if refusal has no negative consequence. Consent can be withdrawn at any time.
  • An AI personalisation platform shows a clear opt-in checkbox, separate from terms of service
  • A mental health app requests separate consent for anonymised data use in AI model training
  • A user withdraws consent; the company deletes their data from the training pipeline within 30 days
  • Burying AI training consent in a 47-page privacy policy with a pre-ticked box
  • Conditioning access to a service on consent to AI training data use
  • An employer asking employees to consent to biometric AI monitoring (consent not freely given)
Art. 9 Special Category Data Sensitive
Processing of health, genetic, biometric (for identification), racial/ethnic, political, religious, sexual data is prohibited by default. Exceptions exist for explicit consent, employment law, vital interests, public health, and research with appropriate safeguards.
  • A hospital uses patient health data to train a diagnostic AI under Art. 9(2)(h) — medical treatment justification
  • A research institution trains an AI on de-identified genetic data under Art. 9(2)(j) — scientific research exception
  • An employer uses fingerprint authentication under Art. 9(2)(b) — employment law obligation (with consent backup)
  • A retailer using facial recognition to infer customer emotions (biometric + health data) without explicit consent
  • An insurer training AI on inferred health conditions from purchase data
  • A political party using social media profiling to infer political opinions for targeting
Art. 11 Processing Without Identification Reduced Risk
If a controller does not require identification of individuals for its purposes, it has reduced obligations under certain GDPR provisions — particularly if processing anonymised or pseudonymised data where re-identification is not reasonably possible.
  • An AI trained on fully anonymised census data may not require the same GDPR protections as identifiable datasets
  • A company uses differential privacy techniques to ensure individual re-identification is computationally infeasible
  • Claiming anonymisation when data can be re-identified by cross-referencing with other datasets (Netflix re-identification case precedent)
  • Applying Art. 11 exemptions when the AI system can infer individual identity from "anonymised" inputs
Art. 13–14 Right to Information Transparency
At data collection, individuals must be informed: who the controller is, the purpose and legal basis, retention period, recipients, automated decision-making logic, and rights available to them — in clear, plain language.
  • An AI recruitment platform's privacy notice explains that CV ranking is automated and describes the logic
  • A chatbot discloses at the start that conversations are used to improve its AI model, with opt-out link
  • A lender's privacy notice explains that credit scoring is automated and describes the main factors used
  • A company mentioning AI processing only in footnote 47 of a 60-page privacy policy
  • Failing to disclose that voice interactions are used to train a speech recognition AI
Art. 15 Right of Access Subject Rights
Individuals can request confirmation of whether their data is processed, access to that data, and meaningful information about any automated processing including the logic involved, significance, and envisaged consequences.
  • A user asks an AI credit company for the features that influenced their credit score — company provides a feature importance report
  • A candidate requests all personal data held by an AI hiring platform and receives it within 30 days
  • A bank's AI fraud system provides a redacted explanation of why a transaction was flagged
  • Telling a rejected applicant "the AI decided" with no further information
  • Taking 3 months to respond to a Subject Access Request
  • Providing raw data without any explanation of AI processing as required by Art. 15(1)(h)
Art. 17 Right to Erasure Subject Rights
The "right to be forgotten" — individuals can request erasure when: consent is withdrawn, data is no longer necessary, data was unlawfully processed, or a valid objection is raised. Critical challenge: erasing from trained AI models (machine unlearning).
  • A company implements machine unlearning to remove a user's data influence from a trained model after erasure request
  • A search company removes personal data references from index and requests downstream cache clearing
  • A healthcare AI deletes patient records and retrains affected model components on erasure request
  • Deleting the database record but leaving the individual's data embedded in a production AI model
  • Refusing erasure citing "we can't un-train the model" without demonstrating this is technically impossible
  • Not notifying downstream processors (e.g., cloud AI API vendors) of erasure obligations
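A sketch of how an erasure request might be propagated end to end, covering the three failure modes above. `datastore`, `training_registry`, and `processors` are hypothetical interfaces standing in for real systems:

```python
from datetime import datetime, timezone

def handle_erasure_request(subject_id: str, datastore, training_registry, processors) -> dict:
    """Illustrative Art. 17 workflow: erase stored data, flag the subject's
    data for machine unlearning or retraining, and notify downstream processors."""
    datastore.delete(subject_id)                        # 1. primary records
    training_registry.mark_for_unlearning(subject_id)   # 2. model influence (unlearn/retrain)
    receipts = [p.request_deletion(subject_id)          # 3. downstream processors
                for p in processors]                    #    (Art. 17(2), Art. 19)
    return {
        "subject": subject_id,
        "completed_at": datetime.now(timezone.utc).isoformat(),
        "processor_receipts": receipts,
    }
```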
Art. 18 Right to Restriction Subject Rights
Individuals can request restriction of processing (data "frozen" in place but not processed) while accuracy disputes, objections, or unlawful processing claims are resolved.
  • A person disputes AI-inferred credit score; company restricts using that score in any decision until reviewed
  • A job applicant objects to automated screening; company pauses AI processing of their application
  • Continuing to use a disputed AI profile to serve targeted ads during the restriction period
  • Not having a technical mechanism to "freeze" individual records from AI pipeline processing
Art. 20 Right to Data Portability Subject Rights
When processing is based on consent or contract and done by automated means, individuals can receive their data in a structured, machine-readable format and transmit it to another controller.
  • A user exports their full activity history from an AI fitness app in JSON format to import into a competitor
  • A bank customer requests their transaction history in CSV format to submit to a new lender's AI underwriting system
  • Providing data only as a PDF that cannot be machine-parsed by another AI system
  • Charging a fee for a data portability export without legal justification
Art. 21 Right to Object Subject Rights
Individuals can object at any time to processing based on legitimate interests or public task — including profiling. The controller must stop unless it can demonstrate compelling legitimate grounds that override the individual's interests.
  • A user objects to their browsing data being used for AI behavioural profiling; platform stops immediately
  • A candidate objects to AI profiling for job targeting; recruiter must cease and demonstrate overriding grounds or stop
  • Ignoring an objection to AI-based direct marketing profiling (this must always be honoured)
  • Making the objection process so difficult (dark patterns) that it is practically unusable
Art. 22 Automated Decision-Making & Profiling Critical for AI
Individuals have the right not to be subject to a decision based solely on automated processing — including profiling — which produces legal or similarly significant effects. Exceptions: contract necessity, legal authorisation, or explicit consent. If an exception applies, safeguards are mandatory: human review, contestation right, and explanation of the decision logic.
  • A bank's AI loan decision always routes to a human reviewer before the decision is communicated, so the decision is not "solely automated" and Art. 22's prohibition does not apply
  • A user rejected by an AI credit score is told the key factors and given the right to request human review
  • A company obtains explicit consent to allow AI-only screening for low-risk job shortlisting
  • An insurer using AI pricing ensures a human can review and override any premium above a threshold
  • An AI scoring citizens for welfare-fraud risk without meaningful human involvement (cf. the Dutch SyRI judgment)
  • An AI hiring system auto-rejecting candidates based on name inference (proxy for race) with no appeal
  • A bank's chatbot making binding credit decisions without human review or explanation
  • Not disclosing that a hiring decision was made automatically and not providing contestation rights
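A minimal sketch of an Art. 22-style human-review gate for a credit decision. `model.score_with_factors` and `human_queue.submit` are assumed interfaces, and the 0.7 cutoff is arbitrary:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    solely_automated: bool
    explanation: str

def credit_decision(features: dict, model, human_queue) -> Decision:
    """Illustrative Art. 22 gate: a credit decision has legal effect, so it
    is never released as solely automated. It is routed to a human reviewer
    together with the model's key factors."""
    score, top_factors = model.score_with_factors(features)
    draft = Decision(
        outcome="approve" if score >= 0.7 else "refer",
        solely_automated=False,  # human review is mandatory before communication
        explanation=f"Key factors: {top_factors}",
    )
    human_queue.submit(draft)  # reviewer can confirm, amend, or override
    return draft
```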
Art. 25 Privacy by Design & Default Design
Data protection must be implemented from the design stage of AI systems — not bolted on afterward. By default, only data strictly necessary for each purpose must be processed, and systems must automatically apply the most privacy-protective settings.
  • An AI engineer implements differential privacy at the model training stage, before any data processing begins
  • A chatbot collects only conversation metadata (not content) by default; full logging requires opt-in
  • A recommendation AI is architected with federated learning so personal data never leaves user devices
  • Building a data maximisation strategy into an AI system's architecture from day one
  • Defaulting all users to full data sharing and profiling with no opt-out at launch
  • Adding privacy controls as a post-launch patch after a regulator complaint
Art. 28 Processor Contracts (DPA) Contracts
When an AI vendor processes personal data on behalf of a controller (e.g., an AI API provider), a Data Processing Agreement (DPA) is mandatory. The DPA must specify purposes, data types, security measures, subprocessor rules, and deletion obligations.
  • A company using OpenAI's API to process customer queries signs OpenAI's DPA before production deployment
  • A healthcare provider ensures its AI vendor's DPA prohibits training on patient data
  • A DPA specifies that the AI vendor cannot subcontract to a cloud provider not on the approved list
  • Using an AI SaaS product that processes employee data without a signed DPA
  • An AI vendor sub-processing data to a third-party model trainer without controller authorisation
Art. 35 Data Protection Impact Assessment Mandatory for AI
A DPIA is mandatory before processing that is "likely to result in high risk" — including systematic profiling, large-scale special category data, or systematic monitoring of public areas. For AI, this is almost always triggered. A DPIA must describe processing, assess necessity, risks, and mitigations.
  • A retailer deploying AI customer emotion analysis completes a DPIA before pilot launch
  • An insurer conducting DPIA for AI health-risk scoring from lifestyle data before product launch
  • A DPIA for an AI HR tool identifies high risk → company adds human oversight control to mitigate
  • A public authority's DPIA consultation with their DPO results in redesigning a surveillance AI
  • Deploying large-scale biometric AI at a music festival without conducting a DPIA
  • Conducting a superficial DPIA that identifies risks but does not implement any mitigations
  • Not consulting the supervisory authority after a DPIA shows high residual risk cannot be mitigated
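The three triggers named in Art. 35(3) reduce to a simple screening check. The sketch below covers only those three statutory triggers; supervisory authorities publish longer national trigger lists:

```python
def dpia_required(systematic_profiling: bool,
                  large_scale_special_category: bool,
                  public_area_monitoring: bool) -> bool:
    """Art. 35(3) screening: any one trigger makes a DPIA mandatory."""
    return any([systematic_profiling,
                large_scale_special_category,
                public_area_monitoring])
```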
Art. 37–39 Data Protection Officer (DPO) Governance
A DPO is mandatory for: public authorities, organisations conducting large-scale systematic monitoring, or organisations processing special category data at scale. The DPO advises on DPIAs, monitors compliance, acts as contact point for supervisory authorities, and must be independent.
  • A tech company processing 10M users' behavioural data for AI personalisation must appoint a DPO
  • A DPO reviews a proposed AI health monitoring product before launch and raises Art. 9 concerns
  • A DPO is consulted on an Art. 36 prior consultation because a DPIA found irresolvable high risk
  • Appointing the CTO as DPO — creating a conflict of interest (DPO must be independent)
  • Not providing the DPO with resources to monitor AI system compliance across the organisation
Art. 32 Security of Processing Security
Appropriate technical and organisational measures must ensure security appropriate to the risk: pseudonymisation, encryption, confidentiality, integrity, availability, resilience, and the ability to restore access after incidents. Tested regularly.
  • An AI health platform encrypts all personal data at rest and in transit using AES-256 and TLS 1.3
  • An AI model server is isolated on a private VPC with strict IAM policies and zero-trust network access
  • An AI training pipeline pseudonymises all personal identifiers before data enters model training
  • Storing training data (containing personal information) in a public S3 bucket without encryption
  • Not having a disaster recovery plan for an AI system that processes medical records
  • Model weights containing memorised personal data accessible via inference API without access controls
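Keyed pseudonymisation is one of the Art. 32 measures named above. A minimal sketch using HMAC-SHA256; note that pseudonymised data remains personal data under GDPR because the controller retains the key:

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Keyed pseudonymisation (HMAC-SHA256) applied before data enters
    training. The key is held outside the training environment, so the
    mapping is reversible only by the controller: pseudonymisation, not
    anonymisation."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()
```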
Art. 33–34 Breach Notification Incident
A personal data breach must be notified to the supervisory authority within 72 hours of becoming aware. If likely to result in high risk to individuals, those individuals must also be notified without undue delay with a clear, plain-language explanation.
  • An AI training dataset is exfiltrated; company notifies the ICO within 48 hours with breach description
  • A model memorisation vulnerability exposes names; affected users receive personal notifications
  • A processor (AI vendor) notifies the controller within 24 hours of discovering a breach, enabling controller's 72h window
  • Discovering a training data breach and waiting 2 weeks for the PR team to draft messaging before notifying
  • Notifying the authority but not telling affected individuals whose financial data was exposed
Art. 44–49 International Data Transfers Transfers
Personal data transfers to non-EU countries require an adequacy decision (e.g., EU-US Data Privacy Framework), Standard Contractual Clauses (SCCs), Binding Corporate Rules, or other approved safeguards. Cloud AI inference in non-EU regions is a transfer.
  • A European company using a US AI API executes SCCs with the US vendor before routing EU personal data
  • A multinational uses Binding Corporate Rules to allow intra-group AI training data transfers globally
  • An EU company checks adequacy status before enabling an AI feature that routes queries to US servers
  • Routing EU patient data to a US AI inference API without SCCs or adequacy decision in place
  • Using SCCs that pre-date the 2021 updated clauses for new processing activities
  • Relying on the Privacy Shield framework after its Schrems II invalidation
Art. 83 Administrative Fines Enforcement
Tier 1 violations (core principles, rights, consent, special data, unlawful transfers): up to €20M or 4% of global annual turnover. Tier 2 (controller/processor obligations, certification): up to €10M or 2% turnover.
  • Meta fined €1.2B for unlawful EU-US data transfers (largest GDPR fine to date)
  • Amazon fined €746M for unlawful targeted advertising processing
  • Clearview AI fined €20M each by the Italian, Greek, and French authorities for unlawful biometric scraping
  • Training an AI on special category health data without explicit consent or Art. 9 exception
  • Violating Art. 22 by allowing AI to make fully automated legal decisions without safeguards
  • Mass DPIA failure before deploying AI across millions of EU users
Part III · Architectural Blueprint

The Compliant AI Agent

A comprehensive visual anatomy of an AI Agent designed from the ground up to comply with both the EU AI Act and GDPR — with specific article citations at every stage of the agent lifecycle.

COMPLIANT AI AGENT — LIFECYCLE BLUEPRINT

Every layer of the agent's lifecycle, with the specific EU AI Act and GDPR provisions that govern it, and why they apply.

Lifecycle stages: 🎯 Purpose Definition → 📐 Design & Architecture → 📊 Data Governance → 🏗️ Training & Development → Conformity Assessment → 🚀 Deployment & Operation → 👤 User Interaction → 📈 Post-Market Monitoring → 🔄 Iteration & Retirement
🎯
PURPOSE DEFINITION & RISK CLASSIFICATION
Lifecycle Phase · Pre-Design
Before any development begins, the agent's intended purpose, deployment context, and risk tier must be formally determined and documented.
EU AI Act Provisions
Art. 3 — Definitions · Art. 6 — Risk Classification · Art. 9 — Risk Mgmt System · Annex III — High-Risk Categories · Annex I — Regulated Products
Art. 6 mandates classifying the agent against Annex III categories (employment, credit, biometrics, law enforcement, etc.) before any design work. Art. 3 definitions determine whether the agent qualifies as an "AI system" and who is the "provider" vs "deployer." Art. 9 requires a risk management plan to be established from the outset — not retroactively. An agent used in hiring, credit, or critical infrastructure decisions is automatically high-risk, triggering the full Chapter III compliance regime.
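A simplified triage of the Art. 5/6 classification logic described above. The area labels are shorthand for Annex III categories, and the Art. 6(3) derogation is deliberately omitted, so treat this as a screening aid rather than legal analysis:

```python
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services_and_credit", "law_enforcement",
    "migration_and_border", "justice_and_democracy",
}

def classify_risk_tier(prohibited_practice: bool,
                       annex_i_safety_component: bool,
                       annex_iii_area: str | None) -> str:
    """Simplified Art. 5/6 triage, not a substitute for legal analysis."""
    if prohibited_practice:
        return "unacceptable"       # Art. 5: may not be placed on the market
    if annex_i_safety_component or annex_iii_area in ANNEX_III_AREAS:
        return "high"               # Chapter III obligations apply
    return "limited_or_minimal"     # Art. 50 transparency may still apply
```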
GDPR Provisions
Art. 5 — Core Principles · Art. 6 — Lawful Basis · Art. 35 — DPIA Trigger · Art. 25 — Privacy by Design
Purpose limitation (Art. 5(1)(b)) must be defined before data collection begins — the agent's purpose determines the lawful basis under Art. 6. Art. 35 DPIA screening occurs here: does the agent involve systematic profiling, large-scale special category data, or public monitoring? If yes, a full DPIA must be completed before proceeding. Art. 25 means privacy architecture decisions (federated learning, differential privacy, data minimisation) are selected at this stage.
📐
ARCHITECTURE & DESIGN
Lifecycle Phase · Design
System architecture, human oversight mechanisms, transparency interfaces, tool use boundaries, and safety controls are designed and documented.
EU AI Act Provisions
Art. 9 — Risk Mgmt Architecture · Art. 11 — Technical Documentation · Art. 14 — Human Oversight Design · Art. 15 — Robustness & Cybersecurity · Art. 5 — Prohibited Practices Check
Art. 14 requires that the agent is designed so a natural person can understand outputs, override decisions, and stop operation — this must be in the architecture, not added later. Art. 11 means design documents, model selection rationale, and architecture diagrams form part of mandatory Annex IV technical documentation. Art. 15 requires cybersecurity-by-design: protection against prompt injection, model evasion, and adversarial inputs. Art. 5 compliance is checked at design: if the agent could enable subliminal manipulation or social scoring, the feature must be removed.
GDPR Provisions
Art. 25 — Privacy by Design · Art. 32 — Security by Design · Art. 22 — Automated Decision Architecture · Art. 28 — Processor Architecture
Art. 25 mandates that the minimum necessary data flows are built into the architecture — if the agent can complete its task without retaining conversation history, it must not retain it by default. Art. 22 determines whether the agent's decision pipeline is "solely automated" — if it produces legally significant effects, a mandatory human review step must be architected in. Art. 32 requires encryption, access controls, and isolation to be in the design specification, not security-bolted-on post-launch.
📊
DATA GOVERNANCE
Lifecycle Phase · Pre-Training
Training, validation, and test datasets are curated, audited for quality and bias, documented, and processed under lawful grounds — before any model training begins.
EU AI Act Provisions
Art. 10 — Data & Data Governance · Art. 9 — Risk-Based Data Selection · Art. 53 — GPAI Training Data Transparency · Art. 11 — Training Data Documentation
Art. 10 is the data quality mandate: training data must be relevant, sufficiently representative across demographic subgroups, and free from errors. Bias audits are mandatory for high-risk agents — underrepresentation of protected groups in training data creates discriminatory model outputs. Art. 10(5) allows processing special category data in training only to the extent strictly necessary for bias detection and correction. If the agent uses a GPAI foundation model, Art. 53 requires the GPAI provider to have documented training data sources and copyright compliance.
GDPR Provisions
Art. 6 — Lawful Basis for Training Data · Art. 7–9 — Consent & Special Categories · Art. 5 — Data Minimisation · Art. 17 — Erasure from Training Sets · Art. 44 — Transfer Rules for Cloud Training
Every data subject whose personal data appears in the training set must have a lawful basis under Art. 6 for that processing. Consent (Art. 7) or research exceptions (Art. 89) are commonly cited. Art. 5(1)(c) data minimisation means only data necessary for the training objective is included — a name-prediction model should not train on health data. Art. 17 creates the machine unlearning obligation: if a person withdraws consent, their data must be removable from training pipelines. Art. 44 applies if training data is transferred to or processed in non-EU cloud infrastructure.
🏗️
TRAINING & DEVELOPMENT
Lifecycle Phase · Build
Model training, fine-tuning, RLHF/safety alignment, adversarial testing, bias evaluation, and Quality Management System documentation.
EU AI Act Provisions
Art. 9 — Risk Mitigation in Training · Art. 15 — Adversarial Testing · Art. 16–17 — QMS Requirements · Art. 55 — Red-Teaming (Systemic Risk) · Art. 10 — Training Data Quality Checks
Art. 15 requires testing for robustness against adversarial inputs (prompt injection, jailbreaking, data poisoning attacks) — this happens during training and fine-tuning. Art. 16–17 QMS mandates documented procedures for model development, testing criteria, and sign-off processes. For systemic risk GPAI agents, Art. 55 requires structured red-teaming before deployment. Safety alignment (RLHF or constitutional AI approaches) is the mechanism that implements Art. 5 prohibited practice avoidance at the model level.
GDPR Provisions
Art. 32 — Security During Training · Art. 25 — Privacy-Enhancing Techniques · Art. 28 — Sub-processor Chain
Art. 32 applies to the training infrastructure: compute clusters handling personal data must have access controls, encryption, and audit logging. Privacy-enhancing technologies mandated by Art. 25 are implemented here: differential privacy (adding calibrated noise to gradients), federated learning (keeping data on-device), or secure multi-party computation. Art. 28 requires DPAs with every cloud provider, GPU compute vendor, or annotation subprocessor in the training pipeline.
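The "calibrated noise to gradients" technique referenced above is DP-SGD (Abadi et al., 2016). A minimal NumPy sketch of one private step; the clip norm and noise multiplier are illustrative values:

```python
import numpy as np

def dp_sgd_step(per_example_grads: np.ndarray, clip_norm: float = 1.0,
                noise_multiplier: float = 1.1) -> np.ndarray:
    """One differentially private gradient step: clip each example's
    gradient to bound individual influence, average, then add calibrated
    Gaussian noise."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    mean_grad = clipped.mean(axis=0)
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=mean_grad.shape)
    return mean_grad + noise
```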
CONFORMITY ASSESSMENT & DOCUMENTATION
Lifecycle Phase · Pre-Launch
All conformity assessment procedures, technical documentation, EU Declaration of Conformity, CE marking, and regulatory registration are completed before market placement.
EU AI Act Provisions
Art. 11 — Annex IV Technical Docs · Art. 43 — Conformity Assessment · Art. 47 — Declaration of Conformity · Art. 49 — EU Database Registration · Art. 13 — Instructions for Use
Art. 43 prescribes the conformity assessment route: self-assessment for most Annex III systems; mandatory notified body for biometric and critical infrastructure AI. The Annex IV technical documentation package (Art. 11) must be complete and on file before the Declaration of Conformity (Art. 47) is signed. Art. 49 mandates registration in the public EU database for high-risk systems. Art. 13 requires publishing clear Instructions for Use for deployers — including system capabilities, limitations, accuracy statistics by demographic group, and human oversight requirements.
GDPR Provisions
Art. 35 — DPIA Completion · Art. 36 — Prior Consultation · Art. 30 — Records of Processing · Art. 37 — DPO Sign-off
The DPIA (Art. 35) must be completed and signed off before launch — if residual risk cannot be mitigated, Art. 36 prior consultation with the supervisory authority is mandatory (this can delay launch significantly). Art. 30 Records of Processing Activities must be updated to reflect the new AI processing activity. The DPO (Art. 37–39) must review and sign off on DPIA findings and ROPA entry before launch approval.
🚀
DEPLOYMENT & OPERATIONAL CONTROLS
Lifecycle Phase · Live Operation
The live agent operates with real users. Operational controls, monitoring infrastructure, incident response, and deployer obligations are all active.
EU AI Act Provisions
Art. 26 — Deployer Obligations · Art. 12 — Operational Logging · Art. 14 — Human Oversight Active · Art. 50 — AI Disclosure to Users · Art. 72 — Post-Market Monitoring Start
Art. 26 places active obligations on the deployer: assign human oversight responsibility, monitor performance against documented metrics, and ensure the agent is only used for its documented intended purpose. Art. 12 logging is live from day one: inputs, outputs, timestamps, operator IDs, and confidence scores are automatically logged. Art. 50 requires clear disclosure that users are interacting with an AI system. Art. 14 human oversight is operationally active: a human reviewer can pause the agent's decision output at any time. Art. 72 post-market monitoring plan goes live on deployment day.
GDPR Provisions
Art. 13–14 — Privacy Notices Active · Art. 22 — Automated Decision Safeguards · Art. 32 — Security Controls Active · Art. 5(1)(e) — Retention Controls
Privacy notices (Art. 13–14) must be presented to users at the point of first interaction, disclosing AI processing and any automated decision-making. Art. 22 safeguards are live: if the agent makes legally significant decisions, a human review route must be available and communicated to users. Art. 32 security controls (WAF, rate limiting, anomaly detection, encryption) are active. Data retention schedules (Art. 5(1)(e)) are enforced: interaction logs are deleted after the defined retention period with automated purging.
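Automated retention purging might look like the following sketch, where `log_store` is a hypothetical interface over the interaction-log table and the 180-day period stands in for whatever the documented retention schedule specifies:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=180)  # illustrative; set per the documented schedule

def purge_expired(log_store, now: datetime | None = None) -> int:
    """Automated Art. 5(1)(e) purge job: delete interaction logs older
    than the defined retention period. Records are assumed to carry a
    timezone-aware `created_at` timestamp."""
    now = now or datetime.now(timezone.utc)
    expired = [r for r in log_store.all() if now - r.created_at > RETENTION]
    for record in expired:
        log_store.delete(record.id)
    return len(expired)  # count reported to the retention audit log
```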
👤
USER INTERACTION & RIGHTS FULFILMENT
Lifecycle Phase · Ongoing
The agent's interface and backend systems support all data subject rights and provide meaningful transparency for users affected by AI decisions.
EU AI Act Provisions
Art. 13 — Transparency to Deployers/Users · Art. 14 — User Oversight Mechanisms · Art. 26(6) — Inform Affected Persons · Art. 50 — AI Disclosure Obligation
Art. 26(6) requires deployers to inform natural persons when they are subject to an AI system's output — especially in high-risk contexts (credit, hiring, benefits). The agent's interface must display a clear "AI-powered" disclosure. Art. 14 mechanisms let users invoke human review: a clearly visible "Request human review" button or escalation pathway. Art. 13 transparency means affected persons can access the agent's documented accuracy, limitations, and intended use scope.
GDPR Provisions
Art. 15 — Subject Access Request Handling · Art. 17 — Erasure Requests · Art. 18 — Restriction Requests · Art. 20 — Data Portability · Art. 21 — Objection Handling · Art. 22 — Contestation of AI Decisions
A dedicated rights management portal must be built into the agent ecosystem: Art. 15 SAR responses within 30 days including AI decision logic explanations; Art. 17 erasure triggers machine unlearning workflows; Art. 20 portability exports interaction data in structured JSON/CSV; Art. 21 objection immediately pauses AI profiling; Art. 22 contestation routes the specific decision to a human reviewer who must provide a reasoned response. All rights requests are logged in the ROPA and reported to the DPO monthly.
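A sketch of the dispatch layer such a portal needs. The request kinds, deadlines, and `handlers` mapping are illustrative assumptions; Art. 12(3) GDPR sets the one-month baseline for responses:

```python
from datetime import timedelta

# Statutory response windows (Art. 12(3) GDPR: one month, extendable).
DEADLINES = {
    "access": timedelta(days=30),        # Art. 15
    "erasure": timedelta(days=30),       # Art. 17 (triggers unlearning workflow)
    "restriction": timedelta(days=30),   # Art. 18
    "portability": timedelta(days=30),   # Art. 20
    "objection": timedelta(days=0),      # Art. 21: profiling paused immediately
    "contestation": timedelta(days=30),  # Art. 22(3): human review of decision
}

def route_request(kind: str, subject_id: str, handlers: dict) -> dict:
    """Illustrative rights-portal dispatcher; `handlers` maps each request
    kind to the backend workflow that fulfils it."""
    if kind not in DEADLINES:
        raise ValueError(f"Unknown rights request: {kind}")
    ticket = handlers[kind](subject_id)
    return {"subject": subject_id, "kind": kind,
            "deadline": DEADLINES[kind], "ticket": ticket}
```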
📈
POST-MARKET MONITORING & INCIDENT RESPONSE
Lifecycle Phase · Ongoing
Continuous monitoring of agent performance, bias drift, security vulnerabilities, and incident management — feeding back into the risk management system.
EU AI Act Provisions
Art. 72 — Post-Market Monitoring Plan · Art. 73 — Serious Incident Reporting · Art. 9 — Risk Management Updates · Art. 15 — Ongoing Robustness Testing · Art. 55 — GPAI Incident Reporting
Art. 72 requires a formal post-market monitoring plan: KPIs for accuracy, fairness metrics by demographic subgroup, false positive/negative rates in high-risk decisions, and a reporting cadence to the market surveillance authority. Art. 73 mandates that serious incidents (death, serious harm, critical-infrastructure disruption) are reported without undue delay and no later than 15 days (deaths: 10 days; widespread infringements or infrastructure disruption: 2 days). Discovered performance degradation or bias drift requires updating the Art. 9 risk register and implementing corrective actions. Regular adversarial testing (Art. 15) continues post-deployment to catch new attack vectors.
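One of the fairness KPIs named above, the demographic parity gap, is simple to compute per monitoring cycle. A sketch with illustrative data:

```python
def selection_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_by_group: dict[str, list[int]]) -> float:
    """Post-market fairness KPI: the largest gap in positive-outcome rates
    across demographic subgroups. Breaching an internal threshold should
    open a corrective action in the Art. 9 risk register."""
    rates = [selection_rate(v) for v in outcomes_by_group.values() if v]
    return max(rates) - min(rates)

# Example: one quarterly monitoring snapshot (illustrative data)
gap = demographic_parity_gap({"group_a": [1, 0, 1, 1], "group_b": [0, 0, 1, 0]})
assert gap == 0.5  # 0.75 vs 0.25: flag for investigation
```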
GDPR Provisions
Art. 33–34 — Breach Notification · Art. 32 — Security Incident Response · Art. 35 — DPIA Re-evaluation · Art. 5 — Ongoing Accuracy Obligation
Art. 33 breach notification obligations are operationalised: a 72-hour incident response plan is maintained with pre-approved supervisory authority notification templates. Art. 32 security monitoring includes anomaly detection for unusual data access patterns by the AI system. If the agent's scope or data processing changes substantially, a new DPIA (Art. 35) is triggered. Art. 5(1)(d) accuracy principle means model drift resulting in inaccurate outputs about individuals must trigger correction — e.g., stale credit profiles must be updated before influencing decisions.
🔄
ITERATION, SUBSTANTIAL MODIFICATION & RETIREMENT
Lifecycle Phase · Change Management
Any substantial change to the agent's model, scope, or use case restarts key compliance steps. At retirement, data deletion and documentation archiving obligations apply.
EU AI Act Provisions
Art. 43 — Re-assessment on Modification · Art. 47 — Updated DoC Required · Art. 11 — Updated Technical Documentation · Art. 9 — Risk Register Update · Art. 12 — Log Retention Post-Retirement
A "substantial modification" (changed architecture, new training data, new intended purpose, materially different performance) restarts the conformity assessment (Art. 43) and requires an updated Declaration of Conformity (Art. 49) and updated Annex IV technical documentation (Art. 11). Minor updates require an updated risk register entry. At retirement, Art. 12 requires operational logs to be retained for the minimum period specified in the conformity assessment (often 10 years for high-risk AI in regulated sectors). The EU database registration must be updated to reflect retirement.
GDPR Provisions
Art. 5(1)(e) — Storage Limitation on Retirement · Art. 17 — Erasure of Personal Data · Art. 30 — ROPA Update · Art. 35 — New DPIA on Material Change · Art. 28 — DPA Termination Obligations
At retirement, Art. 5(1)(e) storage limitation requires deletion of all personal data from the agent's databases, model weights (if memorisation is a risk), and operational logs beyond the defined retention period. Art. 17 erasure obligations must be flowed to all processors (cloud providers, sub-processors) with written confirmation of deletion. The ROPA (Art. 30) is updated to mark the processing activity as ceased. If the change involves a new processing purpose, a fresh DPIA (Art. 35) is mandatory. DPAs with processors must include data return/destruction clauses activated at contract termination.
Quick Reference · Compliance Matrix
Agent Component | Key EU AI Act Articles | Key GDPR Articles | Primary Obligation
Risk Classification Engine | Art. 6, Annex III, Art. 3 | Art. 35 (DPIA trigger) | Correctly classify AI risk tier before any work begins
Training Data Pipeline | Art. 10, Art. 53 (GPAI) | Art. 6, 7, 9, 5(1)(b,c) | Lawful basis, quality audits, bias detection, data minimisation
Model Architecture | Art. 15, Art. 14, Art. 11 | Art. 25, Art. 32 | Robustness, human override capability, security by design
Decision Output Layer | Art. 14, Art. 13, Art. 26 | Art. 22, Art. 15 | Human oversight gate, explainability, contestation mechanism
User Interface | Art. 50, Art. 26(6) | Art. 13–14, Art. 21 | AI disclosure, privacy notice, rights access point
Audit & Logging System | Art. 12, Art. 72 | Art. 30, Art. 5(1)(e) | Automated logs, retention schedule, ROPA maintenance
Rights Management Portal | Art. 26(6), Art. 14 | Art. 15, 17, 18, 20, 21, 22 | SAR, erasure, restriction, portability, objection, contestation
Incident Response System | Art. 73, Art. 9 | Art. 33, Art. 34 | AI Act 2-/10-/15-day incident reporting; GDPR 72h breach notification
QMS & Documentation | Art. 11, 16–17, Art. 47 | Art. 30, Art. 35 | Annex IV docs, DPIA, ROPA, Declaration of Conformity
Vendor / Subprocessor Chain | Art. 25 (value-chain responsibilities) | Art. 28, Art. 44–49 | DPAs with all processors; SCCs for non-EU transfers
Governance & Oversight | Art. 26, Art. 43, Art. 9 | Art. 37–39 (DPO), Art. 36 | Human oversight role, DPO involvement, prior consultation if needed
The Golden Rule: A fully compliant AI agent is not built by adding compliance at the end — it is architected around compliance from the first line of design documentation. The EU AI Act and GDPR are not constraints on AI innovation; they are the engineering specification for trustworthy AI that users can rely on and regulators can audit.