
AI and Automated Decision-Making
Data protection compliance for AI systems and automated decisions
AI and automated decision-making sit at the intersection of data protection and technology governance. When organisations deploy AI systems to make or influence decisions about people, they are processing personal data. The question is not “is this AI compliant?” but “does this data processing comply with UK GDPR?”. This is where Bratby Law advises: mapping the data protection obligations that apply when AI systems touch personal data, from initial training data assessment through to the rights that individuals retain to challenge automated decisions.
When AI becomes a data protection problem
The moment you deploy an AI system that processes personal data, you have a data protection problem. This is true whether you are building your own model, using a third-party AI platform with pre-trained capabilities, training a model on your own customer data, implementing recommendation engines, deploying chatbots, automating recruitment decisions, or using AI to score or profile individuals. The common mistake is treating this as a technology governance question. It is fundamentally a data protection question. You cannot solve it with technical controls alone. You must start with lawful basis, transparency, and the rights you owe to people whose data the system touches.
Why AI governance matters now
The regulatory landscape shifted in February 2026. The Data (Use and Access) Act 2025 fundamentally reformed the rules on automated decision-making. The original Article 22 of the UK GDPR gave individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, subject only to narrow exceptions. The DUAA replaced this with Articles 22A to 22D. Under the new regime, solely automated decision-making that does not involve special category personal data is permitted, provided controllers implement mandatory safeguards: telling individuals that a decision has been automated, enabling them to make representations and obtain meaningful human intervention, and giving them a right to contest the decision.
The Information Commissioner’s Office has committed substantial resource to AI oversight. The ICO’s AI and Biometrics Strategy (June 2025) prioritises transparency, bias, and rights. The ICO has signalled that it will publish a statutory code of practice on AI and data protection in 2026. In January 2026, the ICO published its first assessment of agentic AI risks, identifying specific data protection challenges: purpose definition, accuracy and hallucination, special category data inference, cybersecurity, and transparency across agent-to-agent communication chains. For organisations serving EU markets, the EU AI Act’s extraterritorial reach means compliance obligations extend beyond the UK GDPR.
Behind all of this is a misunderstanding that persists among boards and technology teams. Many organisations treat AI governance as a standalone regime, creating separate AI committees, separate AI compliance frameworks, and separate risk registers. This is inefficient and often conflicts with existing data protection obligations. The truth is simpler: AI governance is not a new category of law. It is the application of existing UK GDPR principles to AI systems. Lawful basis, transparency, fairness, purpose limitation, and accountability already cover the core obligations. The task is to implement them rigorously when AI is involved.
Where organisations get AI governance wrong
Common failures cluster around six points:
- No lawful basis assessment for training data. Organisations scrape web data, licence datasets, or repurpose internal data to train models without asking what lawful basis allows the processing. Relying on consent is often infeasible for historical data; legitimate interests may be available, but only if you have genuinely balanced the rights of data subjects against your business need.
- Transparency notices that mention “AI” but do not explain what matters. A privacy notice that says “we use machine learning” tells you nothing about whether a decision is automated, what the consequences are, or whether the system will be used to profile you.
- DPIAs that describe the AI system but do not assess data protection risk. A DPIA that reads like a technical specification has missed the point.
- Treating Article 22 (now Articles 22A to 22D) as binary: either the decision involves a human or it does not. This misses the requirement for “meaningful” human involvement under the new regime. A human in the loop who rubber-stamps algorithmic output is not providing meaningful intervention.
- Failing to address bias as a fairness obligation. Article 5(1)(a) of the UK GDPR requires fair processing; if your AI system produces systematically adverse outcomes for protected groups, you are in breach. Bias is not a technical issue to be left to data science teams but a legal obligation to be managed through your data protection framework.
- Ignoring purpose limitation when repurposing data for model training. Data collected for transaction processing cannot simply be fed into a model that predicts customer behaviour for marketing; each stage of the AI lifecycle requires its own lawful basis and compatibility assessment.
What good AI governance looks like
Bratby Law’s approach is to treat AI governance as an extension of data protection compliance, not a separate workstream. This means mapping each stage of the AI lifecycle through the GDPR lens:
- Data collection: confirm that a lawful basis exists and that your transparency notice fairly describes how the data will be used in AI systems.
- Training data: especially where data is scraped from the web or repurposed for model training, carry out an explicit lawful basis assessment and document controller and processor roles across the supply chain.
- Inference and output: implement the mandatory safeguards under Article 22C. Individuals must know that a decision is automated, be able to make representations, have access to human review, and be able to contest the outcome.
- DPIAs: assess data protection risk, not just technical function.
- Bias: map it as a fairness obligation under Article 5(1)(a), with testing and monitoring built into model governance.
- Roles: allocate controller and processor responsibilities clearly where the AI system is built by a third party and operated by you.
- Documentation: model governance records must satisfy UK GDPR accountability requirements and, where applicable, EU AI Act transparency and risk management standards. They should cover training data provenance, model performance validation, bias testing, decision logging, individual complaint handling, and audit trails (an illustrative decision log sketch follows below).
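By way of illustration only, the sketch below shows one way a decision log supporting the Article 22C safeguards might be structured. It is a hypothetical Python example written for this page, not a prescribed or standard format: the `AutomatedDecisionRecord` name and its fields are our own assumptions about what a defensible audit trail could capture.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical audit-trail entry for a significant automated decision."""
    subject_reference: str                 # pseudonymised identifier for the data subject
    decision: str                          # outcome communicated to the individual
    model_version: str                     # which model produced the recommendation
    lawful_basis: str                      # documented Article 6 basis for the processing
    decision_is_automated: bool            # disclosed to the individual (transparency safeguard)
    inputs_summary: dict                   # key features relied on, to support later explanation
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_reviewer: Optional[str] = None            # who exercised meaningful human involvement, if anyone
    representations_received: Optional[str] = None  # any representations made by the individual
    challenge_outcome: Optional[str] = None         # result of any contest and human re-review

# Example: recording a contested decision and its human re-review
record = AutomatedDecisionRecord(
    subject_reference="applicant-4821",
    decision="application declined",
    model_version="credit-risk-v3.2",
    lawful_basis="Article 6(1)(f) legitimate interests",
    decision_is_automated=True,
    inputs_summary={"affordability_score": 0.42, "missed_payments_12m": 3},
    human_reviewer="senior underwriter",
    representations_received="disputed missed-payment count",
    challenge_outcome="decision overturned after manual file review",
)
```

The point is not the technology but the record: whatever system you use, it should be able to show, per decision, what was automated, what the individual was told, and how any challenge was handled.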
When to instruct specialist AI and data protection counsel
Internal teams typically handle day-to-day AI governance: writing AI usage policies, documenting training data, implementing transparency notices, and training staff on the Article 22A to 22D obligations. Specialist counsel becomes essential in several scenarios:
- A DPIA reveals novel risks, or the processing is complex enough that the safeguards required under Articles 22C and 22D are unclear.
- You are repurposing data for model training, especially web-scraped data, where the lawful basis assessment is difficult and you need advice on whether legitimate interests is defensible against the rights of data subjects.
- The ICO engages you in an investigation or enforcement action relating to an AI system, and you need counsel who understands both the technical facts and the GDPR argument.
- You operate in both the UK and the EU and need advice on dual compliance: what Articles 22A to 22D require in the UK, and what the EU AI Act requires for high-risk systems.
- You are involved in a transaction where AI technology is an asset and need due diligence on the data rights underlying the model, compliance gaps in the seller’s governance, and allocation of liability between buyer and seller.
- An individual challenges an automated decision, asserting their rights under Article 22C, and you need advice on your legal position and your obligations to investigate and respond.
Frequently asked questions about AI and data protection
Do we need consent to use AI systems that process personal data?
Consent is one lawful basis under Article 6 UK GDPR, but it is rarely the right one. Obtaining valid consent for AI processing is difficult: the notice must be granular (which AI systems, what decisions, what purposes), consent must be freely given (it is invalid if declining is not a realistic option), and you must respect withdrawal. For most organisations, legitimate interests is more practical, provided you have balanced your interests against data subject rights in a documented assessment. Special category data requires a separate Article 9 condition: explicit consent is one, but others (employment law obligations, vital interests, substantial public interest) may be available. The test is always whether the basis is lawful and fair, not which basis feels easiest administratively.
If we have a human in the loop, does Article 22 no longer apply?
Not necessarily. Under Article 22A, a decision counts as based solely on automated processing where there is no meaningful human involvement in taking it, so inserting a human into the process does not by itself remove the obligations. What matters is whether the human involvement is meaningful: the reviewer must have access to the relevant information, be able to question the algorithmic recommendation, and be empowered to override it. A human who reviews algorithmic output in a process designed to confirm rather than question it is not exercising meaningful involvement. The reviewer must have genuine discretion and accountability for the decision.
Are we required to carry out a DPIA before deploying an AI system?
UK GDPR Article 35 requires a DPIA for processing that is likely to result in a high risk to individuals’ rights and freedoms. High-risk categories include large-scale processing of special category data, large-scale systematic monitoring, automated decision-making with legal or similarly significant effects, and innovative use of new technologies. Most AI systems fall into at least one of these categories. Even where a DPIA is not strictly mandatory, it is prudent practice to carry one out. A good DPIA will surface data protection risks, not just technical risks, and force you to document mitigations. For deployed AI systems, the DPIA becomes part of your accountability record: evidence that you identified and managed data protection risk.
What counts as “special category data” for the purposes of AI and automated decision-making?
Special category data includes personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, together with genetic data, biometric data used for identification, health data, and data concerning sex life or sexual orientation. Under the new regime, Article 22B restricts significant decisions based entirely or partly on special category data: they may be taken solely by automated means only in limited circumstances, principally where the data subject has given explicit consent or where the processing is necessary for reasons of substantial public interest and authorised by law. Machine learning systems that infer special category data (for example, inferring religious affiliation from behavioural patterns, or ethnicity from a name) can engage these protections. Organisations commonly underestimate the risk of inference: if your system infers special category data and uses that inference to make a significant decision, you are likely to be in breach unless one of these conditions applies.
What happens if someone challenges an automated decision?
Under Articles 22C and 22D, individuals have a right to contest significant automated decisions. You must respond to the challenge promptly and meaningfully. This does not mean automatically overriding the algorithm. It means giving the individual an explanation, allowing them to provide further information, and demonstrating that a human has reviewed their challenge. If you have automated a decision that would otherwise require human judgment (recruitment, credit assessment, benefits eligibility), be prepared to defend the decision in writing. Document your algorithm’s logic, the training data, performance metrics, and the human review that occurred. If you cannot explain why the algorithm recommended what it did, you may not be able to satisfy the Article 22C obligation to allow meaningful human intervention and contestation.
What is the difference between bias (in our AI system) and discrimination (under data protection law)?
Bias is a technical concept: the system produces systematically different outputs for individuals in different groups, whether or not this was intended. Discrimination is a legal concept: the processing breaches the Article 5(1)(a) fairness principle or equality law. A system can be biased without being discriminatory if the disparity is immaterial, but if an AI system makes recruitment decisions and produces systematically worse outcomes for women, for particular age groups, or for ethnic minorities, the bias becomes discrimination. You must test for bias as part of AI governance, and testing alone is not sufficient: you must monitor performance after deployment and have a process to remediate if bias emerges, documenting all of this as part of your fairness obligation under Article 5(1)(a). The ICO’s AI and Biometrics Strategy flagged bias and discrimination as a priority enforcement area.
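As an illustration of what ongoing bias monitoring can involve in practice, the short sketch below computes selection rates by group from decision outcomes and flags large disparities using the “four-fifths” screening heuristic. It is a simplified, hypothetical example: the function names, data and the 0.8 threshold are our own assumptions, and UK law does not prescribe any particular metric or threshold.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, e.g. ('group_a', True)."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def flag_disparity(rates, threshold=0.8):
    """Flag groups whose selection rate falls below threshold x the highest group's rate.

    The 0.8 ('four-fifths') figure is a common screening heuristic, not a UK legal test.
    """
    best = max(rates.values())
    return {group: rate for group, rate in rates.items() if best > 0 and rate / best < threshold}

# Hypothetical shortlisting outcomes from a recruitment screening model
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 25 + [("group_b", False)] * 75)

rates = selection_rates(outcomes)   # {'group_a': 0.4, 'group_b': 0.25}
print(flag_disparity(rates))        # {'group_b': 0.25} -> investigate, document, remediate
```

A flag from a check like this is the start of the analysis, not the end: the legal question is whether the disparity can be justified and whether the processing remains fair under Article 5(1)(a).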
Need data protection advice for your AI product?

Representative experience
Recent and representative matters include:
- Advised a telecoms operator on the data protection framework for an AI-driven customer churn prediction model, including the Article 22 implications of automated retention offers and pricing decisions.
- Prepared a DPIA for a financial services firm deploying machine learning for fraud detection, assessing the lawful basis for profiling under Article 6(1)(f) and the safeguards required under Article 22(2)(b).
- Advised on the transparency obligations under Articles 13 and 14 for an AI recruitment screening tool, including the requirement to provide meaningful information about the logic involved in automated shortlisting.
- Reviewed the data protection compliance of a large language model deployment by a professional services firm, addressing training data provenance, purpose limitation, and the application of the research processing exemption.
- Advised a health-tech company on the interaction between the UK GDPR automated decision-making provisions and the Equality Act 2010 in the context of algorithmic triage of patient referrals.
Related data protection pages
See also our other data protection pages:
- Data Protection Impact Assessments
- UK/EU Data Protection Divergence
- Data Breach Response and ICO Notification
- PECR and ePrivacy
- Data Governance, Transfers and Accountability
- UK GDPR Compliance
- Sector-Specific Data Protection
See also: Open Banking and Variable Recurring Payments.
Independent directory rankings
Our specialist expertise is recognised in major independent legal directories:
- Chambers & Partners: Rob Bratby is ranked as a band 2 lawyer in the UK Guide 2026 in the “Telecommunications” category: Chambers
- The Legal 500: Rob Bratby is listed as a “Leading Partner – Telecoms” in London (TMT – IT & Telecoms): The Legal 500
- Lexology: Rob Bratby is featured on Lexology’s expert profiles as a Global Elite Thought Leader for data: Lexology



