AI Agent Data Privacy: GDPR, SOC 2 & What You Need to Know
A complete guide to AI agent data privacy: GDPR obligations, SOC 2 criteria, EU AI Act risk tiers, and a practical checklist of what your team must do now.
Frequently Asked Questions
Does GDPR apply to AI agents?
Yes. GDPR applies to any system that processes the personal data of individuals in the EU, regardless of whether that processing is done by a human or an AI agent. If your agent reads emails, accesses CRM records, or generates outputs containing personal data, every GDPR principle (lawful basis, data minimization, purpose limitation) applies in full. See our [guide to AI agent security](/blog/ai-agent-security/) for related technical controls.
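Data minimization in practice means the agent should see as little personal data as the task allows. A minimal sketch of a pre-processing step, assuming simple regex-based redaction (real deployments need far more robust PII detection, and all names here are illustrative):

```python
import re

# Strip obvious personal data (emails, phone numbers) from text before
# it reaches an AI agent's context window. Placeholder tokens preserve
# the structure of the text without exposing the underlying identifiers.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(text: str) -> str:
    """Replace recognizable personal data with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(minimize("Contact alice@example.com or +44 20 7946 0958."))
```

The same idea scales up with a dedicated PII-detection service; the key design point is that redaction happens before the data leaves your trust boundary, not inside the agent's prompt.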
Who is the data controller when an AI agent processes personal data?
The organization that determines the purpose and means of the data processing is the data controller — not the AI vendor, and not the agent itself. In a multi-agent system where Agent A delegates to Agent B, the organization that deployed Agent A remains the controller. The vendor providing the underlying LLM or tool infrastructure is typically a data processor, and a Data Processing Agreement (DPA) is legally required.
What is Article 22 GDPR and how does it affect AI decision-making?
Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significant effects, such as loan approvals, job screening, or medical triage. If your AI agent makes or materially influences such decisions, you must provide meaningful human oversight, allow data subjects to contest the decision, and offer a meaningful explanation of the logic involved.
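One common engineering pattern for this requirement is a human-in-the-loop gate: the agent may propose an outcome, but any decision flagged as having significant effects cannot take effect without a human reviewer. A sketch, with all class and function names hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    outcome: str             # e.g. "loan_denied"
    significant_effect: bool # does this fall under Article 22?
    rationale: str           # explanation surfaced to the data subject

def finalize(decision: Decision, human_review: Callable[[Decision], bool]) -> str:
    """Apply the decision only after human oversight where Article 22 applies."""
    if decision.significant_effect:
        # A human must approve before the agent's decision takes effect;
        # a rejection routes the case back for reassessment.
        approved = human_review(decision)
        return decision.outcome if approved else "escalated_for_reassessment"
    return decision.outcome  # low-stakes decisions may remain automated

d = Decision("subj-42", "loan_denied", True, "income below stated threshold")
print(finalize(d, human_review=lambda dec: False))  # escalated_for_reassessment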
Do AI agents need to be SOC 2 certified?
SOC 2 certification is not legally mandated, but enterprise customers and regulated industries increasingly require it. Security is the only mandatory Trust Service Criterion; for AI agents, however, all five (Security, Availability, Processing Integrity, Confidentiality, and Privacy) are typically in scope. AI-specific concerns such as prompt injection, model hallucination logging, and audit trails for autonomous actions require new controls beyond traditional SOC 2 scope.
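An audit trail for autonomous actions is one of those new controls. A minimal sketch of a tamper-evident log, where each entry embeds a hash of the previous one so that silent edits to history break verification (the class and field names are illustrative, not a real library):

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only, hash-chained log of agent actions."""

    def __init__(self):
        self.entries = []

    def record(self, agent: str, action: str, target: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "target": target,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; any mutated entry fails verification."""
        prev = "0" * 64
        for e in self.entries:
            expected = dict(e)
            h = expected.pop("hash")
            if expected["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(expected, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != h:
                return False
            prev = h
        return True

trail = AuditTrail()
trail.record("agent-a", "read", "crm/contact/123")
trail.record("agent-a", "send_email", "recipient-redacted")
print(trail.verify())  # True on an untampered log
```

In production this log would be shipped to write-once storage; the point of the chain is that an auditor can independently confirm no autonomous action was retroactively erased or altered.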
What is a DPIA and when is one required for AI agents?
A Data Protection Impact Assessment (DPIA) is a formal risk analysis required under GDPR Article 35 whenever data processing is "likely to result in a high risk" to individuals. AI agents that process large-scale personal data, use profiling, or operate in sensitive domains (health, finance, HR) almost always trigger the DPIA requirement. The assessment must document the processing purpose, risks identified, and mitigation measures before deployment.
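A DPIA is a governance document rather than code, but tracking its three required elements (purpose, identified risks, mitigations) in machine-readable form keeps the assessment reviewable alongside the agent's configuration. A minimal sketch, with all field and method names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DPIARecord:
    """Structured stand-in for the elements GDPR Article 35 requires."""
    processing_purpose: str
    risks: list = field(default_factory=list)        # identified high-risk factors
    mitigations: list = field(default_factory=list)  # one or more per risk
    approved_before_deployment: bool = False

    def ready_for_deployment(self) -> bool:
        # Deployment gate: risks must be enumerated, every risk needs at
        # least one mitigation, and sign-off must precede go-live.
        return (
            bool(self.risks)
            and len(self.mitigations) >= len(self.risks)
            and self.approved_before_deployment
        )

dpia = DPIARecord(
    processing_purpose="Automated triage of HR support tickets",
    risks=["large-scale profiling of employees"],
    mitigations=["pseudonymize ticket text before model input"],
    approved_before_deployment=True,
)
print(dpia.ready_for_deployment())  # True
```

Wiring a check like `ready_for_deployment()` into your release pipeline is one way to enforce the "before deployment" requirement mechanically rather than by policy alone.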