ICO AI biometrics: Code of Practice mandated from 12 May 2026

In short: from 12 May 2026, ICO AI biometrics compliance moves to a statutory footing. SI 2026/425 comes into force on that date, placing the Information Commissioner under a statutory duty to prepare a single Code of Practice on artificial intelligence and automated decision-making, alongside live workstreams on foundation-model engagement and police facial recognition audits.

By Rob Bratby, Managing Partner, Bratby Law. Lexology Global Elite Thought Leader for Data Protection. Chambers UK Band 2 (Telecommunications). Legal 500 Leading UK Telecoms Partner. 30+ years in telecoms and data protection regulation, including Oftel and senior operator roles.

Firms that build or deploy AI on personal data face a fixed compliance benchmark from 12 May 2026. On that day, the Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 (SI 2026/425) come into force, placing the Information Commissioner under a statutory duty to prepare a single Code on AI development, AI deployment and automated decision-making. The strategy update the ICO published in March 2026 already signals what the Code will require. The practical question for controllers is whether to align now or wait.

Key findings (ICO AI and biometrics strategy and SI 2026/425)

  • The Information Commissioner has a statutory duty to prepare a Code of Practice on AI and automated decision-making from 12 May 2026 (regulation 2(1) SI 2026/425). Source: SI 2026/425.
  • The Code-making powers in s.124A and s.124B Data Protection Act 2018 were inserted by ss.92 and 93 of the Data (Use and Access) Act 2025. Source: SI 2026/425 footnotes.
  • The replacement Article 22 framework (new Articles 22A to 22D UK GDPR) was inserted by DUAA 2025 s.80 and came into force on 5 February 2026. Source: SI 2026/82.
  • The strategy targets three priority workstreams: automated decision-making in recruitment and central government, foundation model development, and police live facial recognition. Source: ICO strategy.
  • The ICO is engaging with eleven foundation model developers and is auditing five police forces (South Wales and Gwent, Essex, Leicestershire, West Yorkshire, Greater Manchester). Source: March 2026 update.

| Priority area | Primary statutory hook | Status |
| --- | --- | --- |
| Automated decision-making | UK GDPR Articles 22A to 22D (DUAA 2025 s.80); s.50C DPA 2018 for law enforcement processing | In force 5 February 2026; Code mandated 12 May 2026; draft ADM guidance expected during 2026 |
| Live facial recognition (police) | UK GDPR Article 35 (DPIA); DPA 2018 Part 3 (law enforcement); R (Bridges) v South Wales Police | Five force audits underway; Greater Manchester Police audit running April 2026 |
| Foundation model development | UK GDPR Articles 5(1)(a)–(b), 6, 9, 35; ICO consultation responses 2024 | ICO engaging eleven major developers; further policy positions expected |
| Recognised legitimate interests | UK GDPR Article 6(1)(ea) and Annex 1 (DUAA 2025 s.70 and Schedule 4) | In force 5 February 2026 |

Regulatory background to ICO AI biometrics oversight

The ICO AI biometrics strategy was published on 25 June 2025 under the title Preventing harm, promoting trust. The strategy sat alongside the Information Commissioner’s existing toolkit under the UK GDPR (the EU GDPR as retained and amended) and the Data Protection Act 2018, and aligned with the enduring objectives of the ICO25 strategic plan. It signalled four planned outputs: a statutory code of practice for AI and automated decision-making, an evidence-led foundation-model engagement programme, an audit programme for police live facial recognition, and updated guidance on automated decision-making and profiling.

The Data (Use and Access) Act 2025 (c.18) filled in the statutory architecture. Section 70 of DUAA 2025 amended Article 6 UK GDPR to insert the new lawful ground at Article 6(1)(ea) (recognised legitimate interest) and to bring in Annex 1 through Schedule 4. Section 80 replaced old Article 22 with new Articles 22A to 22D, which set out the framework for significant decisions based solely on automated processing. Sections 92 and 93 inserted s.124A and s.124B DPA 2018, the Code-making powers the Secretary of State has now exercised. The bulk of these provisions came into force on 5 February 2026 by SI 2026/82. The strategy update the ICO published on 17 March 2026 reported progress against each workstream and previewed the draft ADM guidance.

ICO AI biometrics: what SI 2026/425 actually requires

SI 2026/425 was made on 16 April 2026, laid before Parliament on 21 April 2026 and comes into force on 12 May 2026. It is short. Regulation 2(1) requires the Commissioner to prepare an “appropriate code of practice giving guidance as to good practice in the processing of personal data under the relevant data protection legislation in relation to (a) developing and using artificial intelligence, and (b) automated decision-making”. Regulation 2(2) requires the Code to include guidance on processing children’s personal data. Regulation 3 carves national security out of the panel review under s.124B DPA 2018. The instrument itself imposes no new compliance obligations on controllers; the binding work for ICO AI biometrics compliance is done by the underlying UK GDPR and DPA 2018, and by the Code once it is published.

Automated decision-making: Articles 22A to 22D

Automated decision-making is the largest pillar of the ICO AI biometrics workstream. The replacement Article 22 framework changes the default. Article 22 UK GDPR as originally retained operated as a near-prohibition: a data subject had the right not to be subject to a decision based solely on automated processing producing legal or similarly significant effects, except where the processing was necessary for a contract, authorised by law or based on explicit consent. Article 22A reframes that into a definition (a decision based solely on automated processing means one with no meaningful human involvement; a significant decision is one with legal or similarly significant effects), Article 22B sets out the lawful conditions, Article 22C provides the safeguards (meaningful information about the logic, the right to obtain human review, the right to contest), and Article 22D contains the regulation-making power. Special-category data continues to attract additional conditions.
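
For engineering teams building the audit trail behind those safeguards, the sketch below shows one way a decision record might capture the Article 22A "meaningful human involvement" question and the Article 22C safeguard evidence. It is a minimal Python illustration: the schema, field names and the rubber-stamp heuristic are our own assumptions, not anything the UK GDPR or the ICO prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SignificantDecisionRecord:
    """Hypothetical audit record for a significant decision (Arts. 22A to 22C)."""
    subject_id: str
    outcome: str                        # e.g. "shortlisted", "declined"
    model_version: str
    logic_summary: str                  # meaningful information about the logic (Art. 22C)
    human_reviewer: str | None = None   # None signals no human in the loop
    review_notes: str | None = None     # evidence the review was substantive
    contested: bool = False             # right to contest exercised (Art. 22C)
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def solely_automated(self) -> bool:
        # Heuristic only: a named reviewer with no recorded reasoning looks like
        # a rubber stamp, which is unlikely to count as meaningful involvement.
        return self.human_reviewer is None or not (self.review_notes or "").strip()
```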

Foundation models: the ICO’s five-chapter framework

Foundation models are the second ICO AI biometrics priority. The ICO is engaging with eleven major foundation model developers and has commissioned research into harms across the foundation-model lifecycle. The supervisory backbone is the 2024 generative AI consultation series, which set out positions across five chapters: lawful basis for web scraping, purpose limitation across the lifecycle, accuracy of model outputs, engineering individual rights into models, and allocation of controller-processor responsibility across the supply chain. The strategy update of 17 March 2026 confirms the ICO is now developing further policy positions on these chapters, building on the consultation responses, with publication “in the coming months”.

On lawful basis for training data, the ICO’s position is that legitimate interests under Article 6(1)(f) UK GDPR is the only realistic route for training a model on web-scraped personal data, and that controllers must satisfy the full three-part assessment, with the balancing test taking reasonable expectations into account. Public availability does not displace lawful basis; it informs the balancing test only. Article 9 imposes additional conditions on special-category data scraped at scale; Article 5(1)(a), (b) and (d) impose the fairness, transparency, purpose-limitation and accuracy overlay across the lifecycle.

On individual rights, the ICO has signalled that controllers cannot evade Articles 12 to 22 simply because a model has memorised training data in opaque parameters. Engineering rights-respecting models from the design stage, including provision for erasure requests, output filtering, and purpose-bounded retraining, is treated as part of Article 35 DPIA and Article 25 data-protection-by-design compliance. On accountability, the ICO’s framework treats foundation-model developers, fine-tuners and deployers as separate roles in the supply chain, each carrying its own controller or processor obligations. The strategy commits to ongoing scrutiny: the eleven-developer engagement is evidence-led rather than guidance-led, and the resulting findings will feed into the Code mandated by SI 2026/425.

Integrating foundation models in the UK: what deployers should focus on

Most UK organisations do not train foundation models. They integrate them: through API access to a US-hosted model, through fine-tuning a vendor model on internal data, or through retrieval-augmented generation that combines a third-party model with a customer-data store. The ICO AI biometrics framework treats integration as a separate compliance question from training, with its own lawful basis, transparency, DPIA and supply-chain obligations.

The first question for a UK deployer is whether the underlying model was trained lawfully. The ICO’s published position is that a UK controller integrating a third-party model cannot rely on the developer’s lawful-basis assertion alone: due diligence on training data, on opt-out mechanisms, and on the developer’s compliance with Article 5 fairness and transparency obligations is part of the deployer’s own accountability. In practice this means contractual warranties from the model provider, vendor-DPIA review, and where the model has been trained at scale on web-scraped personal data, an articulated view on the residual risk to UK data subjects. A deployer relying on a model that fails an ICO foundation-model investigation downstream will not be insulated by the supplier’s own controller status.
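
As an illustration of what that diligence file might look like in practice, the sketch below structures the points above as a checklist with a gap report. The keys, question wording and function are hypothetical working tools, not an ICO-issued checklist.

```python
# Hypothetical training-lawfulness diligence checklist for a third-party model.
VENDOR_DILIGENCE = {
    "training_data_provenance": "Has the developer disclosed its training-data sources?",
    "lawful_basis_assertion": "Which Article 6 basis does the developer assert, on what evidence?",
    "opt_out_mechanism": "Is there a working opt-out for web-scraped personal data?",
    "art5_fairness_transparency": "How does the developer evidence Article 5 compliance?",
    "residual_risk_position": "What is our documented view on residual risk to UK data subjects?",
}

def diligence_gaps(answers: dict[str, str]) -> list[str]:
    """Return checklist items still unanswered before integration sign-off."""
    return [key for key in VENDOR_DILIGENCE if not answers.get(key, "").strip()]
```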

The second question is what the integration itself does with personal data. Three patterns recur. First, prompt-engineering: where staff or end-users feed personal data into a model through a prompt, the deployer is processing that data and needs a lawful basis and a retention rule for prompt and conversation logs under Article 30 UK GDPR. Second, fine-tuning on customer or employee data: the deployer becomes a controller for the training set, with Articles 5, 6, 9, 13 and 14 obligations and an Article 35 DPIA where the use case is high-risk. Third, automated decisions on model outputs: where a hiring shortlist, credit decision, underwriting outcome or content-moderation action turns on a model output without meaningful human involvement, Articles 22A to 22D engage in full, with the Article 22C safeguards running directly to the data subject.
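
The first pattern is the easiest to operationalise. A minimal sketch of a prompt-log retention rule follows; the 90-day period is a placeholder assumption to be replaced by the controller's own documented retention schedule, and the log structure is invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Placeholder retention period: substitute the figure from your retention schedule.
PROMPT_LOG_RETENTION = timedelta(days=90)

def purge_expired_prompt_logs(entries: list[dict]) -> list[dict]:
    """Keep only prompt/conversation log entries inside the retention window.

    Each entry is assumed to carry a timezone-aware "logged_at" datetime.
    """
    cutoff = datetime.now(timezone.utc) - PROMPT_LOG_RETENTION
    return [e for e in entries if e["logged_at"] >= cutoff]
```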

The third question is the supply-chain split, which the ICO AI biometrics framework treats as a separate compliance vector. Most third-party foundation models are accessed via APIs operated outside the UK, frequently in the US. The deployer is the controller for the integration use case; the model provider is typically a processor for the deployer’s prompts and a separate controller for the underlying model. The vendor agreement therefore needs to satisfy Article 28 UK GDPR processor obligations on the prompt-data flow, an international-transfer mechanism (UK addendum to the EU SCCs or another lawful route) for export of prompt data, and clear allocation of responsibility for memorisation and output-side personal data breach. Records of processing under Article 30 should describe the integration use case as a separate processing activity from any internal AI development.
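
To make the record-keeping point concrete, the sketch below models an Article 30 entry for the integration use case as its own processing activity. The field names loosely track the Article 30(1) headings; the schema and sample values are assumptions for illustration, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """Hypothetical Article 30 record entry; fields loosely track Art. 30(1)."""
    name: str
    purpose: str
    data_categories: list[str]
    recipients: list[str]
    transfers: str
    retention: str
    security_measures: str

llm_integration = ProcessingActivity(
    name="Third-party foundation model integration (prompt-data flow)",
    purpose="Drafting assistance for customer-support agents",
    data_categories=["customer contact data", "support-ticket content"],
    recipients=["US-hosted model provider (Article 28 processor for prompts)"],
    transfers="United States; UK Addendum to the EU SCCs",
    retention="Prompt logs purged per documented schedule",
    security_measures="TLS in transit; access-controlled log store",
)
```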

Police live facial recognition: the press lead, narrower private-sector reach

The police live facial recognition pillar attracts the largest share of press coverage of the ICO AI biometrics strategy, but the data-protection mechanics for non-police controllers are narrower and turn on familiar UK GDPR provisions. Police processing falls within Part 3 DPA 2018 (the law enforcement regime). The leading authority is R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058, in which the Court of Appeal held that a deployment resting on an insufficient legal framework and an inadequate DPIA was unlawful under Article 8 ECHR.

The Divisional Court has now applied the Bridges test to the Metropolitan Police’s tightened policy. In R (Thompson and Carlo) v Commissioner of Police of the Metropolis [2026] EWHC 915 (Admin) (Holgate LJ and Farbey J, 21 April 2026), the Court dismissed a judicial review of the Metropolitan Police’s 11 September 2024 live facial recognition policy. The Court held that the policy is “in accordance with the law” under Article 8 ECHR and “prescribed by law” under Articles 10 and 11 ECHR, rejecting the claimants’ argument that the policy left officers with unlawfully broad discretion as to where, when and against whom to deploy the technology. The claimants have indicated they will appeal. The contrast with Bridges is instructive: Bridges struck down a policy for excess of discretion; Thompson confirms that a sufficiently articulated framework, with constrained watchlist criteria and deployment rules, can meet the foreseeability bar. The lesson for private-sector deployers of biometric systems is the same in form: documented constraint on the discretion the system permits, evidenced through the Article 35 DPIA, the discriminatory-impact analysis and the Article 9 special-category condition. Most Bratby Law clients sit on that side of the line.

What changed: old Article 22 versus new Articles 22A to 22D

| Theme | Old Article 22 UK GDPR (pre-DUAA) | New Articles 22A to 22D (DUAA 2025 s.80) |
| --- | --- | --- |
| Default | Right not to be subject to a decision based solely on automated processing | Permitted where safeguards are in place; conditions retained for special-category data |
| Threshold | “Solely automated” undefined; ICO guidance read meaningful human review narrowly | Article 22A defines a decision as based solely on automated processing where there is “no meaningful human involvement” |
| Significant effect | “Legal effects or similarly significant effects” undefined | Article 22A defines a significant decision as one with legal or similarly significant effects |
| Safeguards | Article 22(3): right to human intervention, right to express a view, right to contest | Article 22C: meaningful information about the logic, right to human review, right to contest, technical and organisational measures against discrimination |
| Code of practice | None mandated | Statutory code mandated by SI 2026/425 from 12 May 2026 |

ICO AI biometrics compliance: what controllers should focus on now

The cross-cutting ICO AI biometrics compliance points sit above the integration analysis. The new Article 6(1)(ea) recognised legitimate interest is narrowly drawn (DUAA 2025 s.70 and Annex 1) and does not assist AI training; Article 6(1)(f) remains the realistic route. Records of processing under Article 30 should describe AI development, integration and decisioning as separate activities. DPIAs under Article 35 should be re-run, not refreshed, when an AI use case is added or materially changed. Bratby Law’s AI and automated decision-making page covers the substantive position; our AI and data governance advice guide sets out the standard scope for an instruction.
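
A worked illustration of the re-run discipline: the sketch below treats DPIA re-assessment as a hard trigger on any material change to the use case. The change categories are invented for illustration; they are not ICO criteria and each controller should define its own.

```python
# Hypothetical material-change categories that should trigger a full DPIA re-run.
MATERIAL_CHANGES = {
    "new_model_or_version",
    "new_personal_data_category",
    "new_decision_output",
    "new_jurisdiction_or_transfer",
}

def dpia_rerun_required(changes: set[str]) -> bool:
    """A full re-run, not a refresh, whenever any material change is present."""
    return bool(changes & MATERIAL_CHANGES)
```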

Viewpoint

I read the SI 2026/425 timetable as the supervisory turn the ICO AI biometrics strategy foreshadowed when it was published in June 2025. The press will lead on the police live facial recognition workstream; for most controllers in regulated sectors that is not where the compliance pressure sits. The eleven-developer foundation-model engagement and the central government ADM engagement (including DWP) tell me the ICO is moving from horizon-scanning to investigation in the parts of the strategy that touch the private sector directly. In our experience advising telecoms operators, payments firms and fintechs on data governance, the binding constraint on AI compliance is rarely the headline regime; it is the discipline of running the DPIA, recording the human-review architecture, and documenting the Article 5 fairness analysis with rigour. The Code will codify that discipline. Controllers integrating third-party foundation models without documented training-lawfulness diligence will find the regime less accommodating than the pre-DUAA Article 22 implied.

Frequently asked questions

Does SI 2026/425 itself impose new compliance obligations on controllers?

No. SI 2026/425 binds only the Information Commissioner. Regulation 2(1) requires the Commissioner to prepare a code of practice on AI and automated decision-making. The substantive compliance obligations on controllers continue to flow from the UK GDPR, Articles 22A to 22D in particular, the Data Protection Act 2018 and the existing ICO guidance and case law. The Code, when published, will be guidance on good practice rather than a free-standing source of obligation.

When will the Code itself be published?

The ICO has not committed to a publication date, but two primary-source anchors point to a 2027 timetable. The strategy update of 17 March 2026 confirms that “preparation work for the development of the code is ongoing” and that the ICO’s “draft ADM guidance will inform parts of our AI and ADM code of practice”. The ICO’s Technology guidance plans page (last updated 27 April 2026) shows the ADM guidance is in drafting, the public consultation is open until 29 May 2026, and final ADM guidance is due Summer 2026. The Code itself sits downstream of that guidance and must then complete the panel and laying-before-Parliament process under ss.124B and 125 DPA 2018, with reg 3 SI 2026/425 carving national security out of panel review. A realistic earliest finalisation is 2027; commercially it would be imprudent to wait.

Does the new Article 22A regime change the position for routine AI in HR or finance?

Yes, in two practical respects. First, the meaningful human involvement test is now anchored in the statutory definition rather than in guidance, which raises the documentary bar. Recruitment platforms relying on a human reviewer who rubber-stamps an algorithmic shortlist will struggle to argue the decision is not “based solely on automated processing”. Second, the Article 22C safeguards are calibrated to support meaningful contest: a generic appeal route will not satisfy them. Insurance, credit and recruitment systems are the segments most exposed.

Does Article 6(1)(ea) recognised legitimate interest help with AI training?

Generally no. The Annex 1 conditions are narrow: disclosure for an Article 6(1)(e) public-interest task, national security and defence, response to an emergency, detecting or investigating crime, and safeguarding a vulnerable individual. Foundation-model training does not fit any of them. Controllers training models on personal data should rely on Article 6(1)(f) ordinary legitimate interest with a documented three-part assessment, or Article 6(1)(a) consent where feasible.
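
For teams documenting that assessment, a minimal record structure might look like the sketch below. The three parts mirror the familiar purpose/necessity/balancing framing; the schema itself is our own illustration, not a prescribed form.

```python
from dataclasses import dataclass

@dataclass
class LegitimateInterestsAssessment:
    """Hypothetical Article 6(1)(f) three-part assessment record."""
    purpose: str      # part 1: the legitimate interest pursued
    necessity: str    # part 2: why this processing is necessary for it
    balancing: str    # part 3: balancing test, incl. reasonable expectations
    outcome: str      # e.g. "proceed", "proceed with mitigations", "do not proceed"
```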

What should a UK firm integrating a third-party foundation model do first?

Run a training-lawfulness diligence on the model, not only on the integration. The ICO will not treat the deployer as insulated by the developer’s controller status. Map the integration use case against Articles 5, 6 and 22A, identify the prompt-data flow as a separate processing activity under Article 30, and put the vendor agreement on an Article 28 footing for prompt processing. Where the model is hosted outside the UK, set the international-transfer mechanism in writing.

For advice on aligning ICO AI biometrics compliance, automated decision-making safeguards or biometric processing with the new Code of Practice, contact Rob Bratby at Bratby Law.
