Automated Decision-Making After the DUAA: What the New UK Regime Means for AI-Enabled Products


The DUAA rewrites the rules on automated decision-making

Section 80 of the Data (Use and Access) Act 2025 replaced Article 22 of the UK GDPR with four new articles (22A to 22D) that took effect on 5 February 2026. The old default was prohibition: organisations could not make solely automated decisions with legal or similarly significant effects unless an exception applied. The new default is permission, subject to safeguards. For any business deploying AI systems that make or inform consequential decisions about individuals, the change is structural. The ICO has now opened a consultation on draft guidance covering the new regime, closing on 29 May 2026, which gives the first indication of how the regulator expects the reformed framework to operate in practice.

What changed: Articles 22A to 22D

Article 22A of the UK GDPR now defines two key concepts. A “significant decision” is one that produces a legal effect concerning a data subject, or has a similarly significant effect. A decision is “based solely on automated processing” where there is no meaningful human involvement in taking the decision. These definitions matter because the safeguards in Article 22C only apply to significant decisions made solely by automated processing. If a decision is not significant, or if there is meaningful human involvement, the specific ADM safeguards do not apply (although general data protection obligations under Articles 5 and 6 of the UK GDPR still do).

Article 22B restricts automated decisions based on special category data (health, racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric data, sexual orientation). A significant automated decision based entirely or partly on special category data processing may only be taken where the data subject has given explicit consent, or where the decision is required or authorised by law. This is the one area where the prohibition survives largely intact.

Article 22C sets out mandatory safeguards for all other significant solely automated decisions. Controllers must: (a) provide the data subject with information about the decision; (b) enable the data subject to make representations; (c) enable the data subject to obtain human intervention; and (d) enable the data subject to contest the decision.

Article 22D gives the Secretary of State power to make regulations defining “meaningful human involvement” and “similarly significant effect”, and to prescribe additional safeguard requirements. No such regulations have yet been made.
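The scoping logic of Articles 22A to 22D can be sketched as a first-pass triage. The sketch below is illustrative only (the class and function names are our own, and the attribute flags compress legal tests that in practice require careful assessment); it is not a substitute for legal analysis of a specific system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical attributes of one automated decision (names are illustrative)."""
    legal_or_similar_effect: bool       # Article 22A: is this a "significant decision"?
    meaningful_human_involvement: bool  # Article 22A: if present, not "solely automated"
    special_category_data: bool         # Article 22B trigger
    explicit_consent: bool = False
    required_or_authorised_by_law: bool = False

def uk_adm_triage(d: Decision) -> str:
    """Rough first-pass triage under UK GDPR Articles 22A to 22D.

    Note: "out of scope" means only that the specific ADM safeguards do not
    apply; general obligations under Articles 5 and 6 still do.
    """
    if not d.legal_or_similar_effect:
        return "out of scope: not a significant decision"
    if d.meaningful_human_involvement:
        return "out of scope: not solely automated"
    if d.special_category_data:
        # Article 22B: prohibition survives unless a narrow gateway applies
        if d.explicit_consent or d.required_or_authorised_by_law:
            return "permitted under Article 22B"
        return "prohibited by Article 22B"
    return "permitted with Article 22C safeguards"
```

The ordering mirrors the statute: significance and "solely automated" status gate everything else, and special category data is tested before the permissive default applies.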

The practical gap: what counts as meaningful human involvement

The most consequential question under the new regime is whether a human review process qualifies as “meaningful”. The DUAA’s Explanatory Notes state that the term may be clarified by secondary legislation “in light of constantly emerging technologies”, but no definition exists in the Act itself. In the meantime, the ICO’s draft guidance is the closest thing to a regulatory steer.

The ICO’s Recruitment Rewired report, published on 31 March 2026, examined automated decision-making in hiring across more than 30 employers. The ICO found that where employers used AI-driven screening tools to filter candidates, human review was often perfunctory: reviewers rubber-stamped outputs without access to the underlying reasoning. The ICO concluded that this did not constitute meaningful human involvement, and called for human reviewers to have the competence, authority and genuine ability to override automated outcomes.

For AI-enabled products beyond recruitment, the same principle applies. A compliance officer who reviews an automated fraud detection decision, but who in practice never overrides the system, is unlikely to satisfy the meaningful involvement threshold. The human must be able to intervene, must have access to sufficient information to form an independent view, and must actually exercise judgment. A tick-box process will not suffice.

UK/EU divergence: prohibition versus permission

The DUAA’s approach to automated decision-making is one of the clearest points of UK/EU data protection divergence. Under Article 22 of the EU GDPR, the default remains a general prohibition on solely automated decisions with legal or similarly significant effects. The data subject has a right not to be subject to such a decision, with narrow exceptions for contractual necessity, legal authorisation, or explicit consent.

Under the UK regime from 5 February 2026, automated decision-making involving non-special category data is permitted as a starting point, provided the Article 22C safeguards are in place. The controller does not need to establish an exception before deploying the system; it need only ensure that the safeguards operate and that data subjects can access them.

| | EU GDPR (Article 22) | UK GDPR (Articles 22A–22D, post-DUAA) |
|---|---|---|
| Default position | Prohibition: data subject has right not to be subject to solely automated significant decisions | Permission: solely automated significant decisions permitted with safeguards |
| Non-special category data | Permitted only with explicit consent, contractual necessity, or legal authorisation | Permitted under any lawful basis (except recognised legitimate interests) with safeguards |
| Special category data | Prohibited except with explicit consent or substantial public interest | Prohibited except with explicit consent or legal authorisation |
| Safeguards | Right to obtain human intervention, express point of view, contest decision | Information, representations, human intervention, right to contest |
| Regulatory definition powers | None (CJEU case law fills gaps) | Secretary of State may define “meaningful human involvement” and “similarly significant effect” by regulations |
| Enforcement approach | DPA enforcement per Member State | ICO enforcement; draft ADM guidance consultation open until 29 May 2026 |

For organisations operating across both jurisdictions, this creates a dual compliance challenge. A system that is lawful under the UK regime may still breach Article 22 EU GDPR if it relies on the UK’s more permissive default without meeting an EU exception. Firms should not assume that UK compliance equates to EU compliance. (For more on how the DUAA creates divergence on lawful bases, see our analysis of recognised legitimate interests under the DUAA.)
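The dual compliance trap can be made concrete with a minimal sketch. The function names and boolean inputs below are our own shorthand for the legal tests, assuming the defaults described above; the point is only that the two regimes can diverge on the same facts.

```python
def eu_art22_permitted(explicit_consent: bool,
                       contractual_necessity: bool,
                       legal_authorisation: bool) -> bool:
    """EU GDPR Article 22: a solely automated significant decision
    needs one of the narrow exceptions."""
    return explicit_consent or contractual_necessity or legal_authorisation

def uk_permitted(special_category_data: bool,
                 safeguards_in_place: bool,
                 explicit_consent: bool = False,
                 legal_authorisation: bool = False) -> bool:
    """Post-DUAA UK default: permitted with Article 22C safeguards,
    unless special category data engages Article 22B."""
    if special_category_data:
        return explicit_consent or legal_authorisation
    return safeguards_in_place

# A system relying on the UK's permissive default, with no EU exception:
uk_ok = uk_permitted(special_category_data=False, safeguards_in_place=True)
eu_ok = eu_art22_permitted(False, False, False)
# uk_ok is True while eu_ok is False: the same system is lawful in the UK
# but would breach Article 22 EU GDPR.
```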

This divergence is also being watched by the EDPB, which assessed the UK’s adequacy position in the context of the DUAA reforms. The ADM changes were specifically noted as a point of concern. Organisations that depend on the UK adequacy decision for cross-border data transfers should monitor this closely.

What this means for AI-enabled products

The reformed ADM regime affects any product or service that uses AI to make or inform consequential decisions about individuals. Credit scoring, insurance underwriting, automated claims assessment, content moderation, fraud detection and candidate screening are all in scope where the decision is significant and solely automated.

Controllers deploying these systems need to address four operational requirements. First, determine whether each automated decision is “significant” within Article 22A. A decision that triggers a legal consequence (refusing credit, terminating a contract, rejecting an application) will clearly qualify. Decisions with similarly significant effects require case-by-case assessment.

Second, assess whether there is meaningful human involvement. If so, the specific ADM safeguards do not apply (but general fairness and transparency obligations do). If not, the full Article 22C safeguard suite must be implemented.

Third, ensure the safeguard mechanisms are operational, not theoretical. Data subjects must be able to access information about the decision, make representations, obtain human review and contest the outcome. This requires clear processes, trained staff and defined response timescales.

Fourth, where the automated decision involves special category data, Article 22B applies and the controller must establish explicit consent or legal authorisation before the system operates. A data protection impact assessment under Article 35 UK GDPR is likely to be required for any high-risk ADM system, regardless of whether special category data is involved.
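The third requirement, that safeguards be operational rather than theoretical, lends itself to a simple audit structure. The sketch below is a hypothetical record format (field names are our own) for checking that each Article 22C safeguard has an owner and a documented process.

```python
from dataclasses import dataclass

@dataclass
class SafeguardProcess:
    """Illustrative record of how one Article 22C safeguard is operationalised."""
    safeguard: str       # "information", "representations", "human intervention", "contest"
    owner: str           # trained staff responsible for handling requests
    response_days: int   # defined response timescale
    documented: bool     # process written down, not merely asserted

# The four mandatory Article 22C safeguards
REQUIRED = {"information", "representations", "human intervention", "contest"}

def audit_safeguards(processes: list[SafeguardProcess]) -> list[str]:
    """Return the Article 22C safeguards that are missing or merely theoretical."""
    covered = {p.safeguard for p in processes if p.documented and p.owner}
    return sorted(REQUIRED - covered)
```

Running this over a system's safeguard inventory surfaces gaps (for example, a contest mechanism that exists on paper but has no owner) before the regulator does.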

Viewpoint

The DUAA’s ADM reform is pragmatic. A blanket prohibition on automated decision-making was always a poor fit for AI systems that assist, rather than replace, human judgment. The new framework correctly focuses on safeguards rather than gatekeeping. But the regime’s effectiveness depends entirely on what “meaningful human involvement” turns out to mean, and right now that question is unanswered in law.

In our experience advising clients deploying AI decision systems in regulated sectors, the operational difficulty is not the principle of human review but its execution at scale. A telecoms operator handling thousands of automated credit checks per day, or a payments firm running real-time fraud screening, cannot route every flagged case through a senior decision-maker. The challenge is designing review processes that are both proportionate and substantive. The ICO’s recruitment findings suggest that perfunctory review will not pass muster, but the regulator has not yet drawn the line between adequate and inadequate oversight for high-volume, low-latency decisions.

Organisations should not wait for the ICO’s final guidance (expected summer 2026) before acting. The regime is already in force. The practical step is to audit existing automated decision-making systems, map which decisions are “significant”, assess whether human involvement is genuinely meaningful, and build the safeguard processes that Article 22C requires. If your organisation needs advice on AI and data governance, the time to address this is now, not when the ICO opens an investigation.

Bratby Law advises telecoms operators, payments firms and technology businesses on the data protection requirements for AI-enabled products, including automated decision-making compliance under the DUAA. For advice on auditing your ADM systems or responding to the ICO’s consultation, contact Rob Bratby at rob@bratby.law.
