AI regulation and compliance

AI and Automated Decision-Making

Automated decision-making under the UK GDPR has been reformed. Section 80 of the Data (Use and Access) Act 2025 replaced Article 22 with new Articles 22A to 22D, which came into force on 20 June 2025 under SI 2025/1356. The old prohibition on solely automated decisions with legal or similarly significant effects has been replaced with a safeguards regime: such decisions are now permitted for most personal data, provided controllers put the required safeguards in place. Special category data is treated more restrictively. This page explains what the new regime requires, where organisations go wrong, and when to instruct specialist counsel. For the wider UK AI regulatory picture, see our UK AI regulation hub.

We advise controllers and processors on UK GDPR compliance for AI and automated decision-making systems, including model training, lawful basis, the new Article 22A safeguards, DPIAs, transparency, vendor contracts and international transfers. The work ranges from product counsel for fintechs deploying credit decisioning to regulatory advice on AI-enabled network management for telecoms operators and response to ICO enquiries.

When AI becomes a data protection problem

AI becomes a data protection problem the moment personal data enters the pipeline. That includes training data, inference inputs, model outputs that identify or profile individuals and logs that record who decided what. The lawful basis, transparency and accountability obligations in the UK GDPR apply to each of those stages, not just to the final decision.

The controllers most exposed are those running solely or largely automated decisions that produce legal or similarly significant effects for individuals: credit scoring, fraud detection, recruitment screening, benefits eligibility, insurance underwriting and tariff personalisation. Since 20 June 2025 these decisions are governed by the new Article 22A safeguards regime rather than the old Article 22 prohibition. The shift is legally material: the default has flipped from “not permitted unless” to “permitted provided”. The compliance task is now to evidence safeguards, not to argue an exception.

Old Article 22 and new Articles 22A to 22D compared

| Issue | Old Article 22 (pre-20 June 2025) | New Articles 22A to 22D (in force) |
| --- | --- | --- |
| Default position | Prohibition on solely automated decisions with legal or significant effects | Permitted for most personal data, subject to safeguards |
| Special category data | Prohibited unless explicit consent or substantial public interest | Prohibited unless explicit consent, substantial public interest or other Article 9 condition (Article 22C) |
| Safeguards | Right to human intervention, to express a view, to contest the decision | Information, human review on request, ability to make representations, ability to contest (Article 22A(1)) |
| Partial automation | Not clearly addressed | Article 22B extends safeguards to decisions that are "predominantly" automated even where a human rubber-stamps the outcome |
| Transparency | Articles 13 to 15 reference to Article 22 | Retained, updated to reference Articles 22A to 22D (Article 22D) |
| Source | Article 22 UK GDPR (repealed 20 June 2025) | DUAA 2025 section 80, commenced by SI 2025/1356 |

Why AI governance matters now

The legal framework has tightened in practice even as the statutory default has loosened. Four moving parts explain why AI governance is now a board-level concern.

First, the DUAA 2025 reforms. Section 80 of the Data (Use and Access) Act 2025 rewrote the rules for automated decision-making. The new Articles 22A to 22D require controllers to inform data subjects, provide human review on request, accept representations and allow the decision to be contested. The safeguards must be in place before the decision is taken, not added after complaint. Part 5 of the DUAA also changes the ICO’s structure, powers and complaints handling, which has operational consequences for every controller.

Second, ICO enforcement. The ICO’s AI and biometrics strategy identifies automated decision-making, generative AI training data and biometric systems as enforcement priorities. The ICO’s current ADM and profiling guidance is the operational reference. The ICO has consulted on a statutory code of practice on AI and automated decision-making under the DUAA; that code has not yet been made and is not in force, so controllers should treat the existing guidance as the live compliance baseline until the code is issued.

Third, AI model supply chains. Most organisations are not training foundation models themselves. They are buying AI-enabled products whose training data, model weights and inference pipelines sit with third-party processors or sub-processors. Controller accountability under Article 5(2) UK GDPR does not transfer with the software: the controller remains responsible for lawful basis, transparency, data subject rights and DPIA.

Fourth, divergence with the EU. The EU AI Act is being phased in through 2026 and the EU GDPR retains the old Article 22 prohibition. UK organisations with EU operations must comply with both regimes. We cover the developing gaps on our UK/EU data protection divergence page.

Where organisations get AI governance wrong

Most AI data protection failures come from the same recurring mistakes. The pattern is familiar across sectors.

Treating AI as an IT project, not a controller obligation. Procurement and engineering teams select models and vendors without data protection sign-off. The first DPIA is written after launch, not before. By the time the legal team sees the pipeline, training data has been ingested, models have been fine-tuned and the lawful basis is being retro-fitted. Article 35 UK GDPR requires a DPIA prior to processing likely to result in high risk, and the ICO lists AI processing of personal data in its mandatory DPIA criteria.

Mis-stating the lawful basis. Consent is often claimed where it does not work (training data scraped from the open web, employee data used for productivity models, legacy customer data repurposed for machine learning). Legitimate interests is often asserted without a documented balancing test and without considering the ICO’s three-part test in the ADM and profiling guidance. Contract performance is sometimes stretched to cover processing that goes well beyond what the contract requires.

Failing the Article 22A safeguards test. Since 20 June 2025 controllers using solely or predominantly automated decisions with legal or significant effects must provide specified information, offer human review on request, accept representations and allow the decision to be contested. The common failures are: no notice that the decision was automated, a human review pathway that does not actually reach a qualified reviewer, a complaints process that does not distinguish automated decisions from general queries and an escalation route that cannot meet the ICO’s 30-day acknowledgement standard under the DUAA.

Weak processor contracts. Article 28 UK GDPR obligations are often missing or boilerplate. Sub-processor chains for foundation models are rarely mapped. Audit rights are limited to SOC 2 reports that do not address training data provenance. International transfer mechanisms (UK IDTA or addendum, transfer risk assessment) are frequently absent for models hosted in the United States.

Transparency that tells the data subject nothing. Privacy notices say “we use AI” without naming the purposes, the logic, the categories of data used or the consequences of the decision. Articles 13 and 14 UK GDPR require meaningful information; the ICO will not accept generic language in an ADM context.

What good AI governance looks like

Good AI governance starts before procurement and ends only when the system is decommissioned. The operational minimum has six elements.

A documented lawful basis, recorded per processing activity, not per project. Legitimate interests must be supported by a legitimate interests assessment; consent by granular, freely given, informed and withdrawable consent mechanics; Article 9 special category processing by a specific condition in Schedule 1 DPA 2018.

A DPIA completed before the AI system goes live and updated at each material change. The DPIA must name the controller, the processor, the data categories, the lawful basis, the risks to data subjects and the mitigations. We describe our approach on our DPIA page.

Article 22A safeguards operationalised, not just documented. The notice to data subjects must be specific. The human review pathway must be reachable, the reviewer must be competent and independent of the automated decision, and the reviewer must have authority to overturn the decision. The ability to make representations and to contest must be described in the privacy notice and delivered in the customer journey.

Processor contracts that reflect Article 28 obligations in full, with specific provisions on training data, model outputs, audit rights, sub-processor flow-down, incident notification and international transfers. We cover the transfer mechanics on our data governance, transfers and accountability page.

Transparency that tells a data subject what is happening. The ICO’s guidance sets a high bar: meaningful information about the logic, the significance and the envisaged consequences. Plain language, not system architecture diagrams.

An incident response plan that covers AI-specific failure modes: hallucination in customer-facing outputs, model drift, adversarial inputs, training data leakage through inference, data subject complaints about automated decisions and regulator enquiries. The plan must sit alongside general data breach response procedures, not replace them.

When to instruct specialist AI and data protection counsel

Specialist advice pays for itself at four points in the AI lifecycle.

At design, when the lawful basis, the Article 22A safeguards and the DPIA scope are being set. Getting this wrong is expensive to fix later because it requires either re-papering or switching off the system.

At procurement, when the vendor contract, the Article 28 terms, the international transfer mechanism and the audit rights are being negotiated. Vendor standard terms routinely fall short of UK GDPR requirements.

At launch, when the privacy notice, the customer communications and the human review pathway need to be stress-tested. Launch is the last moment to catch a transparency failure before data subjects become complainants.

On complaint or investigation, when the ICO or a regulated sector regulator (FCA, Ofcom, PSR) starts asking questions. The 30-day acknowledgement standard under the DUAA applies; the substantive response horizon is shorter than most in-house teams expect. We advise on ICO investigations as part of our UK GDPR compliance work.

Frequently asked questions about AI and automated decision-making

Has Article 22 of the UK GDPR been repealed?

Yes. Section 80 of the Data (Use and Access) Act 2025 replaced the old Article 22 with new Articles 22A to 22D. The change came into force on 20 June 2025 under SI 2025/1356. The old prohibition on solely automated decisions with legal or similarly significant effects has been replaced with a safeguards regime that permits such decisions for most personal data, subject to information, human review, representation and contest rights.

Do we need consent to use AI systems that process personal data?

Usually not. Consent under Article 6(1)(a) UK GDPR must be freely given, specific, informed and withdrawable, which is difficult to achieve for AI training and inference at scale. Legitimate interests under Article 6(1)(f) is more often the right basis, provided you have done a documented legitimate interests assessment. Special category data needs an Article 9 condition in addition. The lawful basis should be set per processing activity before the system goes live, not retro-fitted.

When is a DPIA required for an AI system?

Article 35 UK GDPR requires a DPIA for processing likely to result in high risk. The ICO lists AI processing of personal data in its mandatory DPIA criteria. In practice any AI system making decisions about individuals, profiling individuals at scale, processing special category data or using new or novel technology requires a DPIA. The DPIA must be completed before processing starts and updated whenever the purpose, data or risk profile materially changes.

What does Article 22A require in practice?

Article 22A requires controllers making solely or predominantly automated decisions with legal or similarly significant effects to provide four things: information to the data subject about the decision, an ability to make representations, the right to obtain human intervention or review of the decision and the right to contest the decision. The ICO expects each of these to be operational, not theoretical. A privacy notice that mentions a right to human review without a reachable pathway to exercise it will not satisfy the requirement.

Can we use AI on special category data?

Yes, but the constraints are tighter. Article 9 UK GDPR requires a specific condition (explicit consent, employment, vital interests, substantial public interest, or one of the others), which must be read alongside Schedule 1 of the Data Protection Act 2018. Article 22C adds further restrictions on solely automated decisions using special category data. The default should be that AI systems do not process special category data unless an Article 9 condition is clearly available, documented and reviewed.

Are there different rules for generative AI?

The UK GDPR applies in the same way to generative AI as to any other processing of personal data. The ICO has said in its AI and biometrics strategy that training data provenance, data subject rights in foundation models and transparency are priority areas. Using a public large language model with personal data in the prompt is a disclosure to the model provider and must be treated as a processor relationship (or, in some architectures, as an independent controller relationship) with appropriate contractual and transfer protections.

Is there a statutory ICO code of practice on AI?

Not yet. The DUAA 2025 gives the Secretary of State power to require the ICO to produce a code of practice on AI and automated decision-making. The ICO has consulted on that code but it has not been issued or brought into force. Until it is, the live compliance references are the UK GDPR as amended by the DUAA, the DPA 2018 and the ICO’s existing ADM and profiling guidance.

How is UK regulation diverging from the EU AI Act?

Materially. The EU AI Act is a dedicated, horizontal regulation being phased in through 2026 with prohibited practices, high-risk system obligations and conformity assessment. The UK has no equivalent statute. UK AI regulation is delivered through sector regulators (ICO, Ofcom, FCA, CMA) plus the DUAA 2025 reforms to automated decision-making. EU GDPR still includes the old Article 22 prohibition; UK GDPR now has the Article 22A safeguards regime. Organisations operating in both jurisdictions face divergent compliance obligations. We track the gaps on our UK/EU data protection divergence page and in the UK AI regulation hub.

Need data protection advice for your AI product?

Representative experience

Recent and representative matters include:

  • Advised a telecoms operator on the data protection framework for an AI-driven customer churn prediction model, including the Article 22 implications of automated retention offers and pricing decisions.
  • Prepared a DPIA for a financial services firm deploying machine learning for fraud detection, assessing the lawful basis for profiling under Article 6(1)(f) and the safeguards required under Article 22(2)(b).
  • Advised on the transparency obligations under Articles 13 and 14 for an AI recruitment screening tool, including the requirement to provide meaningful information about the logic involved in automated shortlisting.
  • Reviewed the data protection compliance of a large language model deployment by a professional services firm, addressing training data provenance, purpose limitation, and the application of the research processing exemption.
  • Advised a health-tech company on the interaction between the UK GDPR automated decision-making provisions and the Equality Act 2010 in the context of algorithmic triage of patient referrals.

Related data protection pages

See also our other data protection pages, including Open Banking and Variable Recurring Payments.

Independent directory rankings

Our specialist expertise is recognised in major independent legal directories:

  • Chambers & Partners: Rob Bratby is ranked as a band 2 lawyer in the UK Guide 2026 in the “Telecommunications” category: Chambers
  • The Legal 500: Rob Bratby is listed as a “Leading Partner – Telecoms” in London (TMT – IT & Telecoms): The Legal 500
  • Lexology: Rob Bratby is featured on Lexology’s expert profiles as a Global Elite Thought Leader for data: Lexology

See our TelXL case study for an example of how we advise on AI analytics and automated decision-making.

Ready to discuss your matter?