UK AI regulation: the practitioner framework

In short: There is no UK AI Act. UK AI regulation sits on the UK GDPR (as amended by the Data (Use and Access) Act 2025, whose automated decision-making provisions came into force on 5 February 2026), sector regulators applying their existing rules (ICO, Ofcom, FCA), and DSIT policy direction. For clients with EU exposure the EU AI Act applies in parallel. Compliance starts with a DPIA, an Article 22A–22D analysis, and Article 28 controls on third-party model vendors.
The shape of UK AI regulation is not what most clients expect. There is no UK AI Act and none on the legislative timetable. The framework sits across three layers: the UK General Data Protection Regulation as amended by the Data (Use and Access) Act 2025, sector regulators applying their existing rules to AI systems (ICO, Ofcom, FCA, PSR), and DSIT policy direction. Clients with EU market exposure also face the EU AI Act, which applies in parallel and on its own timetable. For in-house counsel and compliance teams responsible for AI deployment, the practical question is which provisions of UK AI regulation actually bind, where the regulators sit, and what to do at deployment.
There is no UK AI Act, and the framework is policy plus existing law
The UK has not enacted standalone AI legislation. The White Paper of 29 March 2023, A pro-innovation approach to AI regulation (CP 815), set out five non-statutory principles (safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; contestability and redress) and asked sector regulators to apply them within their existing remits. The AI Opportunities Action Plan, published by DSIT on 13 January 2025, reinforced that approach and shifted government emphasis from safety-led caution to active promotion of AI-enabled growth. The AI Safety Institute, established in November 2023 alongside the Bletchley Park AI Safety Summit, was renamed the AI Security Institute in February 2025 and refocused as part of that shift in emphasis. The King’s Speech in July 2024 did not include an AI Bill. As at April 2026 there is no published draft and no commencement timetable. The current shape of UK AI regulation is policy plus existing law, not a horizontal AI statute.
Where UK AI regulation sits: the UK GDPR as amended by DUAA 2025
Where an AI system processes personal data, the primary statutory layer of UK AI regulation is the UK GDPR. Section 80 of the Data (Use and Access) Act 2025 replaced the retained Article 22 (automated individual decision-making) with a new Chapter III, Section 4A containing Articles 22A to 22D, in force from 5 February 2026 under SI 2026/82 reg 2(j).
The new architecture is more permissive than the old. Article 22A defines the trigger: a decision is “based solely on automated processing” only where there is no meaningful human involvement, and is a “significant decision” if it produces a legal effect or a similarly significant effect on the data subject. Article 22B preserves the heightened restriction for solely-automated significant decisions on special category data: explicit consent, contractual necessity coupled with Article 9(2)(g), or a legal authorisation. Article 22C requires safeguards for any solely-automated significant decision on personal data, namely information about the decision, the right to make representations, the right to obtain human intervention, and the right to contest. Article 22D gives the Secretary of State a regulation-making power to refine those safeguards, subject to the affirmative resolution procedure.
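The trigger mechanics are simple enough to model. A minimal sketch follows (Python; the field names and the `required_safeguards` helper are illustrative shorthand, not statutory language, and the Article 22B special-category conditions are flagged rather than modelled):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Illustrative model of a decision for Articles 22A-22C triage."""
    meaningful_human_involvement: bool           # Art 22A: none => "based solely on automated processing"
    legal_or_similarly_significant_effect: bool  # Art 22A: "significant decision"
    special_category_data: bool                  # engages the stricter Art 22B conditions (not modelled here)

# Safeguards Article 22C requires for a solely-automated significant decision.
ART_22C_SAFEGUARDS = (
    "information about the decision",
    "right to make representations",
    "right to obtain human intervention",
    "right to contest the decision",
)

def required_safeguards(d: Decision) -> tuple[str, ...]:
    """Apply the Article 22A trigger: safeguards attach only where the
    decision is both solely automated and significant."""
    solely_automated = not d.meaningful_human_involvement
    if solely_automated and d.legal_or_similarly_significant_effect:
        return ART_22C_SAFEGUARDS
    return ()

# A recommendation a human acts on without substantive review:
assert required_safeguards(Decision(False, True, False)) == ART_22C_SAFEGUARDS
```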
DUAA 2025 also recalibrated the lawful-basis architecture by inserting a new Annex 1 to the UK GDPR (Schedule 4 to DUAA 2025) creating “recognised legitimate interests”. This is a closed list of conditions: disclosure for an Article 6(1)(e) public-interest task, national security and defence, public security, responding to an emergency (within the Civil Contingencies Act 2004 meaning), the detection, investigation and prosecution of crime, and safeguarding a vulnerable individual (under 18 or an adult at risk). Where one of these conditions applies, the controller need not run the Article 6(1)(f) balancing test. General commercial AI processing falls outside the closed list; controllers continue to run the standard legitimate interests assessment. Where AI is used to recommend or moderate user-generated content on a regulated user-to-user service, the Online Safety Act 2023 also engages, with safety duties applying to the recommender and moderation systems alongside the data protection regime.
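Returning to the closed list: its effect on the lawful-basis analysis reduces to a membership check. A minimal sketch, with the Annex 1 conditions paraphrased as illustrative labels rather than statutory wording:

```python
# Annex 1 UK GDPR recognised legitimate interests, paraphrased as labels.
RECOGNISED_LEGITIMATE_INTERESTS = {
    "disclosure_for_public_interest_task",        # assisting an Art 6(1)(e) task
    "national_security_or_defence",
    "public_security",
    "emergency_response",                         # Civil Contingencies Act 2004 sense
    "crime_detection_investigation_prosecution",
    "safeguarding_vulnerable_individual",
}

def balancing_test_required(purpose: str) -> bool:
    """Art 6(1)(f) balancing is dispensed with only inside the closed list;
    general commercial AI processing stays on the standard assessment."""
    return purpose not in RECOGNISED_LEGITIMATE_INTERESTS

assert balancing_test_required("personalised_advertising")   # standard LIA applies
assert not balancing_test_required("emergency_response")     # recognised interest
```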
The regulators: ICO at the centre, Ofcom and FCA on sector overlays
UK AI regulation places the Information Commissioner’s Office at the centre wherever an AI system processes personal data. Its AI and data protection guidance addresses lawfulness, fairness, transparency, training data, the new automated decision-making framework, and the foundation-model questions that follow generative AI. Updated ADM guidance reflecting the DUAA 2025 changes is in consultation; pre-DUAA guidance remains available but reads subject to the new statutory text. The ICO operates an AI and data protection regulatory sandbox for novel deployments where compliance is genuinely uncertain.
Ofcom regulates AI within its existing telecoms, online safety and broadcasting remits. Its open letter to communications providers on frontier AI cyber risk under the Telecommunications (Security) Act 2021 is the practical illustration: AI-assisted attack capability raises the baseline against which network security measures are judged, and the same logic carries into the Article 32 UK GDPR analysis for any provider that also processes personal data.
The Financial Conduct Authority regulates AI in financial services through the Consumer Duty (PRIN 2A), the Senior Managers and Certification Regime, and its model risk and operational resilience expectations. Its AI Lab is the structured engagement channel for novel applications. Where an AI system makes a credit, pricing, or fraud-detection decision affecting a consumer, the Consumer Duty outcomes on understanding, fair value and support apply alongside the Article 22A–22C framework. The Payment Systems Regulator addresses AI in payments through scheme governance and authorised push payment fraud rules.
The DSIT framework is policy direction, not law
Sitting above the statutory and regulator layers of UK AI regulation is a layer of government policy that is sometimes mistaken for regulation itself. The AI Opportunities Action Plan, the £500m Sovereign AI Fund announced in April 2026, and the Compute Roadmap set the direction for AI infrastructure, public-sector adoption, and skills. They are policy commitments rather than legal obligations on private operators. Where they fund or procure AI in regulated sectors (healthcare, financial services, telecoms infrastructure), recipients still owe the full UK GDPR and sector-rule compliance obligations. The Action Plan does not displace the sector-regulator model; it accelerates deployment within it.
How UK AI regulation compares with the EU AI Act
UK AI regulation diverges from the EU framework more sharply than almost any other area of post-Brexit regulatory law. The EU has enacted a comprehensive, binding, cross-sectoral AI regulation, Regulation (EU) 2024/1689 (the AI Act), in force from 1 August 2024 and applying in phases. The UK has not. The two frameworks differ in instrument, architecture, automated decision-making rules, GPAI treatment, prohibited uses, enforcement, and penalties. They also differ in extraterritorial reach, with practical consequences for any UK-headquartered business whose AI systems touch the EU market.
The EU AI Act is risk-tiered. Article 5 prohibits a defined set of practices outright (including social scoring, untargeted facial-recognition image scraping, certain manipulative subliminal techniques, and real-time remote biometric identification in public spaces with limited exceptions). High-risk systems are listed in Annex III and cover AI used in biometrics, critical infrastructure, education and vocational training, employment, access to essential public and private services, law enforcement, migration and border control, the administration of justice, and certain democratic processes; they are subject to conformity assessment, technical documentation maintained for ten years (Article 18), a risk management system, human oversight, and pre-market registration. Limited-risk systems carry transparency obligations. Minimal-risk systems carry no obligations under the Act. A separate regime in Chapter V applies to general-purpose AI (GPAI) models, with stricter obligations for systemic-risk models above the 10^25 floating-point operations training compute threshold (Article 51).
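The tiering reads naturally as a top-down classification. The sketch below compresses it into three yes/no inputs; in practice each input stands for the full Article 5, Annex III and Annex I analysis, so treat the booleans as placeholders for that work:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Article 5 practice: cannot be placed on the EU market"
    HIGH = "Annex III / Annex I: conformity assessment before placement"
    LIMITED = "transparency obligations only"
    MINIMAL = "no obligations under the Act"

def classify(prohibited_practice: bool,
             annex_iii_or_annex_i: bool,
             transparency_use_case: bool) -> RiskTier:
    # Order matters: the Act is checked top-down, strictest tier first.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_iii_or_annex_i:
        return RiskTier.HIGH
    if transparency_use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Article 51: systemic risk is presumed above 10^25 FLOPs of training compute.
def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > 1e25
```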
The application timetable is staged. The Act entered into force on 1 August 2024. The Article 5 prohibitions and the AI literacy obligations applied from 2 February 2025. The GPAI obligations applied from 2 August 2025. The bulk of the high-risk obligations apply from 2 August 2026. High-risk AI systems embedded in regulated products under existing EU sectoral law (medical devices, machinery, vehicles) have until 2 August 2027.
The Act’s territorial scope (Article 2) reaches well beyond EU-established providers and deployers. It captures providers that place AI systems on the EU market or put them into service in the EU regardless of location, and providers and deployers established outside the EU where the output of the AI system is used in the EU. A UK fintech offering an AI credit-scoring model to consumers in Germany is within the Act, even if the firm has no EU subsidiary. Penalties under Article 99 are tiered: up to €35 million or 7% of worldwide annual turnover, whichever is higher, for breach of the prohibited practices; up to €15 million or 3% for non-compliance with other obligations including the high-risk regime; and up to €7.5 million or 1% for supplying incorrect or misleading information. SMEs face a lower-of cap structure. Enforcement is shared between the EU AI Office (which supervises GPAI providers directly) and national supervisory and market surveillance authorities in each Member State.
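The cap structure is straightforward arithmetic once the tier is identified: the higher of the fixed sum and the turnover percentage, flipped to the lower of the two for SMEs. A worked sketch, with illustrative turnover figures:

```python
def article_99_cap(fixed_eur: int, pct: float, worldwide_turnover_eur: int,
                   sme: bool = False) -> float:
    """Maximum fine for a tier: the higher of the fixed sum and the turnover
    percentage; Article 99 flips this to the lower of the two for SMEs."""
    by_turnover = pct * worldwide_turnover_eur / 100
    return min(fixed_eur, by_turnover) if sme else max(fixed_eur, by_turnover)

# Prohibited-practice tier, undertaking with EUR 600m worldwide turnover:
assert article_99_cap(35_000_000, 7, 600_000_000) == 42_000_000
# Same breach by an SME with EUR 20m turnover: the lower-of cap bites.
assert article_99_cap(35_000_000, 7, 20_000_000, sme=True) == 1_400_000
```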
The two frameworks compare across the dimensions in the table below.
| Dimension | UK framework | EU AI Act |
|---|---|---|
| Type of instrument | No standalone AI statute. Sector-regulator overlay on UK GDPR (DUAA 2025), TSA 2021, Consumer Duty, Online Safety Act 2023. | Regulation (EU) 2024/1689; directly applicable in EU law. |
| Architecture | Sector-regulator model: ICO, Ofcom, FCA, PSR, CMA apply existing remits to AI. | Risk-tiered: prohibited / high-risk / limited / minimal, plus separate GPAI regime in Chapter V. |
| Status as at April 2026 | DUAA 2025 in force 5 February 2026. No AI Bill on the legislative timetable. | In force 1 August 2024. Article 5 prohibitions applied 2 February 2025. GPAI obligations applied 2 August 2025. High-risk obligations apply from 2 August 2026. |
| Automated decision-making | UK GDPR Articles 22A–22D (DUAA 2025): “based solely on automated processing” trigger; safeguards model in Article 22C. | EU GDPR Article 22 (not amended): same “based solely on” trigger; AI Act high-risk classification overlays many ADM use cases. |
| Lawful basis (general) | UK GDPR Article 6, with new Annex 1 recognised legitimate interests (closed list). | EU GDPR Article 6 (no recognised legitimate interests equivalent). |
| Pre-market authorisation | None. | Conformity assessment required for high-risk AI systems before market placement. |
| Documentation retention | DPIA at deployment under Article 35 UK GDPR; no fixed retention period prescribed. | Technical documentation for high-risk systems retained for 10 years (Article 18). |
| GPAI / foundation models | No statutory regime. Voluntary AISI evaluations. | Chapter V GPAI obligations: technical documentation, training-data summary, copyright compliance policy. Systemic-risk threshold 10^25 floating-point operations of training compute (Article 51). |
| Prohibited AI | None statutorily. Consumer-protection, competition, data-protection or sectoral rules engage instead. | Article 5 prohibitions: social scoring, untargeted facial-recognition scraping, certain manipulative techniques, real-time remote biometric identification in public spaces (limited exceptions). |
| Enforcement | Sector regulators using existing powers. ICO is the central regulator for personal-data processing. | EU AI Office for GPAI providers; national supervisory and market surveillance authorities for in-scope systems. |
| Penalties | Sectoral. ICO fines under UK GDPR up to the higher of £17.5m or 4% of worldwide turnover. | Article 99: up to €35m or 7% (prohibited practices); €15m or 3% (other obligations including high-risk); €7.5m or 1% (incorrect or misleading information). |
| Extraterritoriality | UK GDPR territorial scope (Article 3). No AI-specific extraterritorial regime. | Article 2: providers placing systems on the EU market regardless of location, and non-EU providers and deployers whose AI output is used in the EU. |
The headline reading that UK AI regulation is “lighter touch” than the EU regime is broadly correct as a matter of statute, but it understates two things. First, the EU AI Act is a horizontal layer that sits on top of EU GDPR rather than replacing it; the UK GDPR (post-DUAA) is doing more of the regulatory work. Second, any UK business with an EU customer-facing AI deployment is in the EU AI Act regardless of UK domicile, which means a UK-only compliance posture is rarely commercially viable for a scaling business.
The compliance reality for UK and EU AI deployers
For an in-house counsel asked how to launch an AI-enabled product into the UK, UK AI regulation reduces to a practical sequence of four steps. The first is a data protection impact assessment under Article 35 UK GDPR. An AI deployment that processes personal data on any meaningful scale falls within the ICO’s published high-risk-processing criteria, and a DPIA is mandatory rather than discretionary.
The second is an Articles 22A to 22D analysis. Where the system makes (or could make) decisions producing legal or similarly significant effects on individuals, the question is whether human involvement in the workflow is meaningful enough to take the decision outside the “based solely on automated processing” definition. If it is not, the safeguards in Article 22C apply: information, representations, human intervention and contest. AI systems that surface recommendations on which a human acts without substantive review tend to fall within the regime in practice.
The third is an Article 28 UK GDPR controller and processor analysis for every third-party model vendor, hosting provider, fine-tuning service or output-surfacing tool involved in the processing, allocating accountability accordingly. Off-the-shelf vendor terms typically need amendment to meet the Article 28 specifications and to support the controller’s downstream obligations.
The fourth is security under Article 32. The DSIT open letter on AI-powered cyber attacks raises the state-of-the-art baseline against which “appropriate” technical and organisational measures are judged; see the firm’s analysis of the controller duties under Article 32 after the DSIT open letter. The four steps are gathered in the sketch below.
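The structure below is illustrative shorthand for the four steps above, nothing more:

```python
# The four-step UK deployment sequence, as illustrative shorthand only.
UK_DEPLOYMENT_SEQUENCE = [
    ("UK GDPR Art 35",       "DPIA against the ICO high-risk-processing criteria"),
    ("UK GDPR Arts 22A-22D", "meaningful-human-involvement test; Art 22C safeguards if solely automated"),
    ("UK GDPR Art 28",       "controller/processor allocation for every vendor in the chain"),
    ("UK GDPR Art 32",       "security measures against the post-open-letter state of the art"),
]

for provision, task in UK_DEPLOYMENT_SEQUENCE:
    print(f"{provision}: {task}")
```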
For a deployer with EU market exposure, the EU AI Act adds a parallel workstream that is not satisfied by the UK analysis. The first task is a risk classification: prohibited, high-risk (against the Annex III list and the Annex I sectoral product law), limited risk, or minimal. High-risk classification triggers conformity assessment, technical documentation, the risk management system, human oversight design, and pre-market registration. The second task is a downstream-provider assessment for any GPAI model used in the product: a UK firm building on Anthropic Claude, OpenAI GPT-4 or Google Gemini for an EU-facing service is a downstream provider in the Act’s terms and inherits a slice of the GPAI obligations. The third is to identify the responsible national supervisory authority in each Member State of deployment, since enforcement is decentralised. The EU GDPR Article 22 analysis is run separately from (and on a different test to) the UK Articles 22A–22D analysis for any data subject in the EEA. Practical scope guidance for the most common UK and EU AI deployment questions sits at the firm’s AI and data governance page.
Viewpoint
UK AI regulation reads, on paper, lighter than the EU AI Act. In practice, across the matters the firm sees, the regulatory burden on a UK-only deployer is rarely lower. The UK GDPR (now with Articles 22A to 22D), the Consumer Duty for retail financial services, and the TSA 2021 cyber overlay together set a higher bar than headline coverage of the “no UK AI Act” position suggests. What clients usually want is not a single statutory test to pass, but a clear allocation of who is the controller, who is the processor, where the decisions are taken, and which regulator has to be satisfied. That allocation has not got easier with DUAA 2025: the new automated decision-making regime is more permissive at the level of lawful basis, but the safeguards in Article 22C land hard on whoever owns the deployment.
For a scaling business with EU exposure, the practical centre of gravity is increasingly the EU AI Act rather than UK AI regulation. Once a UK firm crosses the EU customer-facing threshold the Act applies regardless of UK domicile, the conformity-assessment burden lands on the same product the UK framework treats more permissively, and the dual-track compliance architecture (UK GDPR Articles 22A–22D plus EU GDPR Article 22 plus EU AI Act risk classification) becomes the operating reality. Expect ICO enforcement under the new UK ADM framework once the final guidance lands later this year, and expect EU AI Act compliance to set the methodology that UK practice converges on for any client with cross-border ambition.
For advice on the deployment of AI products into UK or EU markets, allocation of controller and processor responsibility under Articles 22A to 22D and Article 28, or AI-related due diligence on a regulated target, contact Rob Bratby at Bratby Law. The firm advises across data protection, including on AI and automated decision-making.
