UK AI Regulation: What the Law Actually Says

The UK has no standalone AI law. Unlike the European Union, which passed the AI Act in 2024, the UK has chosen a sector-led approach under which existing regulators apply existing law to AI systems within their remit. This article sets out what that means in practice in 2026: which statutes and regulators bind AI developers and deployers, what the Data (Use and Access) Act 2025 changed for automated decision-making, where copyright law stands after the government stepped back from its proposed training exception, what Ofcom, the FCA, the PSR and the CMA expect in their sectors, how the EU AI Act reaches UK firms supplying the European market, and what is left of the earlier commitments to AI-specific legislation. It is written for legal, compliance and product teams who need to know which rules apply today, rather than which rules might apply in future.

The UK approach in one paragraph

The sector-led framework was set out in the March 2023 AI Regulation White Paper and confirmed in the government’s February 2024 consultation response. It asks existing regulators to apply existing powers to AI systems operating within their remit, guided by five cross-cutting principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. The regulators with the most significant reach over AI are the Information Commissioner’s Office for data protection, Ofcom for telecoms and online safety, the Financial Conduct Authority for financial services, the Payment Systems Regulator for payments, the Competition and Markets Authority for competition, the Medicines and Healthcare products Regulatory Agency for medical devices, and the Health and Safety Executive for workplace safety. None of them has, or is expected to receive, AI-specific statutory powers. Each continues to apply the law it already administers. Coordination runs through the Digital Regulation Cooperation Forum, which brings the ICO, Ofcom, the FCA and the CMA together on cross-cutting AI issues.

Data protection is where most UK AI regulation bites

Any AI system that processes personal data is regulated by UK GDPR and the Data Protection Act 2018. That covers most AI deployments. Article 5(1)(a) requires fair and transparent processing. Articles 13 and 14 require data subjects to be told when and how their data is processed, including by automated systems. Article 35 requires a data protection impact assessment for high-risk processing, which includes most uses of AI at scale. The lawful basis analysis under Article 6 is where most AI data protection problems begin and end: scraping, repurposing and large-scale profiling are all harder to justify than organisations often assume.
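For product and compliance teams, the questions in this paragraph reduce to a short screening checklist. The following sketch is a purely illustrative Python model (the field names and the simplified DPIA trigger are ours, not the ICO's) of how a team might record the Article 35 and Article 6 screening questions for a proposed deployment.

```python
from dataclasses import dataclass

@dataclass
class AIDeployment:
    """Screening facts for a proposed AI system (illustrative fields only)."""
    processes_personal_data: bool
    large_scale: bool              # e.g. profiling across a substantial user base
    data_scraped_or_repurposed: bool
    lawful_basis: str | None       # e.g. "consent", "contract", "legitimate interests"

def dpia_required(d: AIDeployment) -> bool:
    # Article 35 UK GDPR: a DPIA is required for high-risk processing,
    # which the ICO treats as including most uses of AI at scale.
    return d.processes_personal_data and d.large_scale

def lawful_basis_flags(d: AIDeployment) -> list[str]:
    # Article 6 is where most AI data protection problems begin and end:
    # scraping and repurposing are harder to justify than often assumed.
    flags = []
    if d.processes_personal_data and d.lawful_basis is None:
        flags.append("no Article 6 lawful basis identified")
    if d.data_scraped_or_repurposed:
        flags.append("scraped or repurposed data: lawful basis needs close scrutiny")
    return flags
```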

The ICO has been the most active UK regulator in the AI space. Its Guidance on AI and data protection remains the leading practical resource for controllers. The ICO’s AI and Biometrics Strategy, published in June 2025 and updated in March 2026, sets out the Commissioner’s enforcement priorities and commits the regulator to developing a statutory code of practice on AI and automated decision-making. That code will be issued under powers introduced by the DUAA 2025 and is in development at the time of writing. For a fuller treatment of how UK GDPR applies to AI systems, see the Bratby Law data protection practice page and the specialist guidance on AI and automated decision making under UK GDPR.

What the DUAA 2025 changed for automated decision-making

Section 80 of the Data (Use and Access) Act 2025 replaced Article 22 of the UK GDPR with new Articles 22A to 22D. The reform came into force on 5 February 2026 under the Commencement No. 6 Regulations. Under the new provisions, the old near-total prohibition on solely automated decision-making has gone. Solely automated decisions that produce legal or similarly significant effects are now permitted outside the special category data context, provided the controller implements safeguards: transparency about the automated nature of the decision, the ability for the data subject to make representations, a right to human intervention, and a right to contest. Significant decisions based on special category personal data remain subject to tighter conditions under Article 22B.
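To make the shift concrete, the sketch below models the four safeguards as a pre-deployment checklist. The structure and names are ours, offered for illustration only; they are not drawn from the Act or from ICO guidance, and the special category branch is deliberately conservative.

```python
from dataclasses import dataclass

@dataclass
class ADMSafeguards:
    """Pre-deployment checklist for a solely automated decision with legal
    or similarly significant effects (illustrative model only)."""
    discloses_automated_nature: bool   # transparency about the automation
    accepts_representations: bool      # data subject can make representations
    offers_human_intervention: bool    # right to human intervention
    allows_contest: bool               # right to contest the decision

def adm_permitted(s: ADMSafeguards, uses_special_category_data: bool) -> bool:
    if uses_special_category_data:
        # Article 22B: tighter conditions apply to significant decisions
        # based on special category data; take specific advice rather than
        # relying on the default-permitted route.
        return False
    return all([
        s.discloses_automated_nature,
        s.accepts_representations,
        s.offers_human_intervention,
        s.allows_contest,
    ])
```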

For a deeper analysis of the new Articles 22A to 22D, including what safeguards controllers must build into their systems, see our companion post on Automated Decision-Making After the DUAA.

The substitution is more than cosmetic. It moves automated decision-making from the exceptions list into a default-permitted regime governed by procedural safeguards. That does not mean controllers can now automate at will. It means the compliance question has shifted from “do we qualify for an exception?” to “have we built the safeguards correctly?”. The ICO’s first substantive enforcement guidance on the new framework is expected later in 2026, following the publication of its recruitment ADM investigation findings in March 2026. For the enforcement dimension, including the ICO’s new investigatory powers and the sharper PECR fine cap, see our post on the DUAA enforcement regime.

Copyright and AI training

The Copyright, Designs and Patents Act 1988 makes unauthorised reproduction of copyright works an infringement. Section 29A provides a narrow exception for text and data mining for non-commercial research but does not cover commercial AI training. Unless a licence is in place or the training falls within section 29A, training a commercial AI model on UK copyright works is infringement.

The government consulted between December 2024 and February 2025 on four options, including a broad text and data mining exception with a rights reservation opt-out modelled on the EU Digital Single Market Directive. In its Report on Copyright and Artificial Intelligence, published on 18 March 2026 under sections 135 to 137 of the DUAA, the government confirmed that it no longer had a preferred option and would engage stakeholders on narrower alternatives. The broad exception is not on the table. No replacement has been legislated. The House of Lords Communications and Digital Committee, reporting shortly before the government, recommended a licensing-first approach and urged ministers to rule out the opt-out model. For the detail, see the analysis of how the government stepped back from its proposed copyright exception.

The AI legislation that has not arrived

The 2024 King’s Speech committed the Labour government to introducing “appropriate legislation” on the most powerful AI models. That Bill has not been introduced. In February 2025 the government said that “most AI systems should be regulated at the point of use” and that “existing expert regulators are best placed to do this”, a pivot back towards the sector-led framework inherited from the White Paper. The current parliamentary session has been extended to 2026 without a further King’s Speech, so the legislative vehicle for any AI Bill is still notional. A private member’s Artificial Intelligence (Regulation) Bill [HL] has been tabled in the House of Lords but carries no government support and is unlikely to become law.

For businesses, the practical implication is that no UK statute labelled “AI” is coming in the 2026 session. Planning on the assumption that one will arrive is not prudent.

Telecoms, online safety and Ofcom’s AI remit

Ofcom regulates telecoms networks and services under the Communications Act 2003, telecoms security under the Telecommunications (Security) Act 2021, and user-to-user and search services under the Online Safety Act 2023. It has not proposed AI-specific rules in any of those domains and has been explicit that it will not. Ofcom’s strategic approach to AI confirms that its regulation is technology-neutral: regulated companies are free to deploy AI without seeking Ofcom’s permission, provided they meet their existing outcome-based obligations.

For the broader UK telecoms regulatory landscape, including Ofcom’s General Conditions, security duties and online safety remit, see our Telecoms Regulation practice area.

Three threads matter in practice. First, online safety. Ofcom’s illegal content and child safety codes under the Online Safety Act apply equally to AI-generated content. Deepfakes, synthetic non-consensual intimate imagery and AI-assisted illegal material are treated the same as human-generated equivalents. The duties on user-to-user and search services are identical regardless of how content was produced. For a fuller treatment, see our post on the Online Safety Act.

Second, telecoms network security. TSA 2021 obligations on public telecoms providers require them to manage security risks to their networks and services. Where operators deploy AI for threat detection, traffic management or fraud prevention, the underlying security duties in the TSA and Ofcom’s Security Code of Practice remain the framework against which deployment is assessed. AI adoption in the security layer is encouraged, not regulated separately.

Third, network innovation. Ofcom co-runs SONIC Labs with Digital Catapult as a testbed for AI in mobile networks, reflecting the regulator’s commitment to enabling rather than gating AI-driven network evolution. The same logic applies to broadcasting: AI is widely used by broadcasters for captions, translation and automated metadata, and none of this engages a separate AI regime.

The practical point for telecoms and online services is that Ofcom's existing tools are the ones that will bite. There is no AI licence, no AI code and no prior approval: compliance with existing outcome-based duties simply absorbs AI deployment.

Financial services and payments

The FCA has not proposed AI-specific rules. Its AI Update, published in April 2024 in response to the White Paper, explains how existing rules already apply to AI systems in regulated firms. The Consumer Duty (PRIN 2A) operates as the de facto AI fairness rule for retail financial services: firms deploying AI in customer-facing decisions must deliver good outcomes, avoid foreseeable harm and support customer understanding. SYSC 8 outsourcing rules apply when firms rely on third-party foundation models. SMCR personal accountability reaches the senior managers responsible for AI governance.

For the broader UK payments regulatory landscape, including FCA authorisation, PSR rules, safeguarding and the Consumer Duty, see our Payments Regulation practice area.

The FCA launched its AI Lab in 2024 and ran the AI Consortium, Live Testing and Supercharged Sandbox programmes from May 2025. Participation is voluntary but signals the regulator’s preference for engagement over prior authorisation. The joint Bank of England and FCA AI in UK financial services survey, most recently conducted in 2024, continues to shape the regulators’ understanding of deployment across the sector.

The Payment Systems Regulator has taken a similar line, applying its competition and consumer protection powers under the Financial Services (Banking Reform) Act 2013 rather than issuing AI-specific rules. Agentic AI systems that initiate payments sit within the consent framework of the Payment Services Regulations 2017. The legal analysis of authentication, authorisation and liability is worked through in our post on agentic AI payments and the PSRs 2017.

Competition, foundation models and the CMA

The Competition and Markets Authority has taken a more structural view of the AI market than any other UK regulator. Its AI Foundation Models Initial Report (September 2023) and Update Paper (April 2024) set out six principles intended to protect competition and consumer interests as the foundation model sector develops: access to key inputs, diversity of business models, choice for deployers and end users, flexible switching, fair dealing, and transparency. The principles are not legally binding, but the CMA has been clear that it will use its existing competition law powers where the dynamics the principles describe are undermined, and that it will monitor the sector for conduct of concern as markets develop.

The more significant development for transactions is the CMA’s scrutiny of minority investments and partnerships between large cloud providers and AI developers. In 2024 the CMA reviewed several such arrangements under the UK merger regime, ultimately deciding not to refer them for in-depth investigation but signalling that it would continue to monitor sector dynamics. The position in 2026 is that the CMA’s investigatory reach over foundation model markets is broad under existing law, even without AI-specific powers. Transactions involving foundation model assets or AI compute capacity should factor a CMA review into the timetable.

Children’s data and the Age-Appropriate Design Code

The ICO’s Age-Appropriate Design Code, also known as the Children’s Code, applies to online services likely to be accessed by children under 18. It is a statutory code under section 123 of the DPA 2018 and is directly enforceable through the Commissioner’s ordinary UK GDPR powers. For AI systems, the code bites on profiling, nudging, recommender systems and automated age assurance. It interacts with the Online Safety Act’s child safety duties and the two regimes are being aligned through joint ICO and Ofcom statements, with a third joint statement on age assurance expected in 2026.

Any business deploying AI in consumer-facing services that may reach children should assume the Children’s Code applies. The ICO has signalled a particular focus on the mobile games sector in 2026 and has said it will consider enforcement action where services use AI to profile, recommend to or nudge children contrary to the code. For services designed for adults that may nevertheless be accessed by children, the age assurance obligation is the gating question.

The EU AI Act’s reach into UK business

The EU AI Act does not apply in the UK. It does, however, apply to UK-based providers and deployers where the AI system is placed on the EU market or where its output is used in the Union. The extraterritorial hook in Article 2 of the AI Act (Regulation (EU) 2024/1689) means UK firms serving EU customers will fall within scope even without EU establishment.

The obligations are tiered by risk. Prohibited practices have applied since 2 February 2025. Obligations on providers of general-purpose AI models applied from 2 August 2025. The principal compliance deadline for high-risk AI systems is 2 August 2026: high-risk providers and deployers must meet the full conformity assessment, risk management, data governance, transparency, human oversight and post-market monitoring obligations. A limited set of high-risk AI systems used by public authorities has a transitional window extending to 2 August 2030.
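For planning purposes, the staged deadlines reduce to a simple lookup. The sketch below encodes the dates set out above; the category labels are our own shorthand, and the table is a planning aid rather than a statement of the Regulation's scope.

```python
from datetime import date

# Application dates under Regulation (EU) 2024/1689, as set out above.
# The category labels are our own shorthand, not the Act's terminology.
AI_ACT_DEADLINES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_provider_obligations": date(2025, 8, 2),
    "high_risk_systems": date(2026, 8, 2),
    "high_risk_public_authority_transitional": date(2030, 8, 2),
}

def in_force(category: str, on: date) -> bool:
    """True if the given obligation tier applies on the given date."""
    return on >= AI_ACT_DEADLINES[category]

# Example: the principal high-risk deadline has passed by 1 September 2026.
assert in_force("high_risk_systems", date(2026, 9, 1))
```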

The practical point for UK firms is that the EU AI Act runs in parallel with UK law, not instead of it. A single AI system deployed in both markets will need to meet UK GDPR under the UK regime and the AI Act’s high-risk obligations under the EU regime, as well as EU GDPR where personal data is processed. The UK’s lighter-touch framework does not reduce EU compliance cost for firms operating on both sides of the Channel.

What this means for UK businesses

The most useful mental model for UK AI regulation in 2026 is polyphonic. A single AI deployment can trigger several regulators at once. A retail bank using AI to score loan applications engages UK GDPR (ICO), the Consumer Duty (FCA), SMCR personal accountability, and potentially the EU AI Act if it has EU operations. A broadcaster using AI to generate captions engages the Online Safety Act for user-to-user elements (Ofcom) and UK GDPR for staff and contributor data (ICO). A telecoms operator using AI for network security engages the TSA 2021 and Ofcom’s Security Code of Practice, plus UK GDPR for personal data flowing through the network. A payments firm deploying agentic AI engages the PSRs 2017 (FCA and PSR) and UK GDPR.
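That mapping exercise lends itself to a decision table. The sketch below encodes the worked examples in this paragraph as a hypothetical screening function; the attribute names are ours, and the output is a starting point for legal analysis rather than a substitute for it.

```python
def applicable_regimes(
    processes_personal_data: bool,
    retail_financial_services: bool,
    user_to_user_or_search: bool,
    public_telecoms_provider: bool,
    initiates_payments: bool,
    serves_eu_market: bool,
) -> list[str]:
    """Map deployment attributes to the regimes discussed in this article."""
    regimes = []
    if processes_personal_data:
        regimes.append("UK GDPR / DPA 2018 (ICO)")
    if retail_financial_services:
        regimes.append("Consumer Duty and SMCR (FCA)")
    if user_to_user_or_search:
        regimes.append("Online Safety Act 2023 (Ofcom)")
    if public_telecoms_provider:
        regimes.append("TSA 2021 and Security Code of Practice (Ofcom)")
    if initiates_payments:
        regimes.append("PSRs 2017 (FCA and PSR)")
    if serves_eu_market:
        regimes.append("EU AI Act, plus EU GDPR where personal data is processed")
    return regimes

# The retail bank example from the text: personal data, retail financial
# services and EU operations engage the ICO, the FCA and the EU AI Act.
print(applicable_regimes(True, True, False, False, False, True))
```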

There is no UK AI Act and one is unlikely in the 2026 session. UK GDPR is the workhorse: if an AI system touches personal data, the ICO’s rules apply and Articles 22A to 22D now govern solely automated decision-making. Copyright is binding: training a commercial AI model on UK works without a licence is infringement and no exception is on the horizon. Sector regulators will not issue AI-specific rules. They will apply the powers they already have under the Consumer Duty, the Online Safety Act, the PSRs 2017, the TSA 2021 and similar frameworks. The CMA has a distinct competition-law line of sight into foundation model markets and into transactions involving AI assets. The EU AI Act reaches UK firms that serve EU customers. The Children’s Code overlays any consumer service that may be accessed by children.

Viewpoint

The UK’s sector-led approach is defensible for a jurisdiction with a mature regulatory estate. It avoids the rigidity of the EU AI Act and lets each regulator calibrate intervention to its own sector. The cost is the polyphonic compliance burden: a single AI system may engage four or five UK regulators and the EU AI Act at the same time, with no single source of truth to reconcile them. That burden falls disproportionately on regulated firms in financial services, telecoms and payments, which are the firms most likely to deploy AI in high-stakes decisions. Unregulated technology businesses can operate almost entirely within UK GDPR; regulated firms cannot.

The practical answer for most clients is to treat UK GDPR as the central framework and branch out into copyright, the Consumer Duty, the Online Safety Act, the TSA 2021, the PSRs 2017, the CMA’s foundation model principles, the Children’s Code and the EU AI Act only where the specific deployment engages them. Mapping a single AI system against the applicable regimes is a first-principles exercise that pays off in reduced duplication and cleaner governance. That is the default approach Bratby Law takes on UK AI projects in 2026.

If you are building or deploying AI systems in the UK and need to map your regulatory obligations, get in touch.
