AI Regulation in the UK – An overview

Introduction

In contrast to the overarching, prescriptive approach of the EU AI Act, the UK focuses on regulating the use of artificial intelligence (AI) rather than the underlying AI technology. The UK’s objective is to foster innovation whilst managing risks.

2023 AI White Paper

The UK government’s March 2023 White Paper, A pro-innovation approach to AI regulation, proposed five non-statutory principles:

  • safety, security and robustness
  • appropriate transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress

Sector regulators’ responses

In February 2024, the government followed the white paper with non-statutory guidance to sector regulators on how to apply these principles within their existing statutory frameworks. In the following months (before the July 2024 general election), various regulators set out their responses to the white paper and guidance.

2024 General Election: new government evolves prior policy

On 4 July 2024, following a general election, a new Labour government took over from the prior Conservative administration. Unlike in some other policy areas, the new government maintained many aspects of the existing UK AI regulatory and policy framework, in particular the ‘pro-innovation’ sector approach that leverages existing regulation and regulators to address potential harms (see the January 2025 Government response to the Science, Innovation and Technology Select Committee). However, it made a significant new legislative proposal to address safety concerns around the most powerful AI models (see the King’s Speech 2024 and the January 2025 response to the Select Committee), including putting the UK AI Security Institute (previously the AI Safety Institute) on a statutory footing. In addition, specific measures were proposed to ban the creation and sharing of sexually explicit deepfakes.

Sector regulators continue to publish more information about their approaches.

In future posts, I will look in more detail at the approach of different UK sector regulators and how existing UK law is, and might be, applied to emerging AI issues.

Existing UK legislation applicable to AI

Existing UK legislation (applied by regulators and the courts) applicable to AI includes:

Data & Privacy

  • UK GDPR / Data Protection Act 2018 – data use & automated decisions

Equality & Employment

  • Equality Act 2010 – discrimination & bias
  • Employment Rights Act 1996 – hiring & monitoring

Consumer & Product Safety

  • Consumer Protection Act 1987 – defective AI products
  • Consumer Rights Act 2015 – fairness & transparency
  • Product Security and Telecommunications Infrastructure Act 2022 – AI-enabled device security & resilience

Online Content & Misuse

  • Online Safety Act 2023 – AI content moderation
  • Computer Misuse Act 1990 – AI cybercrime
  • Fraud Act 2006 – AI scams & fraud
  • Defamation Act 2013 – AI-generated content / deepfakes

Intellectual Property

  • Copyright, Designs & Patents Act 1988 – authorship & data mining
  • Patents Act 1977 – inventorship rules

Competition & Markets

  • Competition Act 1998 / Enterprise Act 2002 – AI markets & dominance

Sector-Specific (NB: not comprehensive)

  • Network and Information Systems Regulations 2018 – cybersecurity for essential services & digital providers (incl. AI systems)
  • Financial Services and Markets Act 2000 – FCA/PRA oversight of AI
  • Medical Devices Regulations 2002 (UKCA) – AI in healthcare

(Disclosure: In case you are wondering, this post was written by a person. I used a generic LLM (ChatGPT), a RAG-based specialised legal model (Lexis Protege) and a source analyser (NotebookLM) to help me produce this draft, but each had its own drawbacks, so all mistakes are my own! Image courtesy of ChatGPT.)
