The Sovereign AI Fund: £500m and the UK’s regulatory divergence bet

In short: The £500 million UK Sovereign AI Fund, launched on 16 April 2026, is a pump-primer, not a war chest. Individual US AI companies have closed single funding rounds at several times that figure in the past 18 months. The UK’s real pitch to AI founders is regulatory divergence. The Data (Use and Access) Act 2025 amended UK GDPR in February 2026 to permit automated decision-making under new safeguards and to add a narrow recognised legitimate interest lawful basis. That opens product headroom in the UK. It also creates an EEA access problem that has to be designed around from day one.
On 16 April 2026, the Secretary of State for Science, Innovation and Technology, Liz Kendall, launched the UK Sovereign AI Fund at Wayve’s London offices. Seven companies received first-wave backing: AI infrastructure startup Callosum, which took the Fund’s first equity investment, and six further firms granted fully funded access to UK supercomputing capacity. The total vehicle is £500 million. As a capital commitment that is modest: individual US AI companies have closed single funding rounds at several times that figure over the past 18 months. The Fund’s role is ecosystem signalling and compute de-risking, not closing the capital gap with US venture flows.
What the UK is selling is not the cheque. It is the regulatory deal underneath it. The Data (Use and Access) Act 2025 amended UK GDPR in targeted ways. Part 5 came into force on 5 February 2026 under the Commencement No. 6 Regulations (SI 2026/82). For AI founders, two of the amendments matter: the new automated decision-making regime and the narrow recognised legitimate interest basis. The divergence those amendments open up between the UK and the EU is the real pitch.
Permission-with-safeguards in the UK, prohibition-with-gateways in the EU
Section 80 of the DUAA 2025 replaced UK GDPR Article 22 with four new provisions. Article 22A defines “solely” automated decision-making as processing with “no meaningful human involvement”. Article 22B retains a prohibition where the decision uses special category data, subject to explicit consent or lawful authority gateways. Article 22C mandates four safeguards for permitted automated decisions with legal or similarly significant effects: information to the data subject, a right to make representations, a right to human intervention, and a right to contest. Article 22D gives the Secretary of State regulation-making power to specify what each safeguard requires in practice. The ICO’s draft guidance on automated decision-making, open for consultation until 29 May 2026, sets out regulator expectations on the “meaningful human involvement” test.

Section 70 adds Article 6(1)(ea), a new lawful basis for “recognised legitimate interest”, backed by a new Annex 1 listing five pre-approved purposes: crime prevention and detection, safeguarding vulnerable people, emergency response, national security and defence, and disclosure to a public authority for a public interest task.
The EU has taken the opposite route. The EU AI Act (Regulation 2024/1689) applies risk-tiered horizontal rules to AI systems by use case. EU GDPR retains the original Article 22 near-prohibition on solely automated decisions with legal or similarly significant effects, unlocked only through the three narrow gateways: contractual necessity, legal authorisation, or explicit consent. There is no UK-style permission-with-safeguards opening. The UK approach gives AI founders working on regulated-sector decision engines, in financial services, insurance, recruitment and health, a compliance architecture that the EU does not offer.
The EEA access problem
Divergence cuts both ways. A product built to the UK’s Article 22A to 22D architecture, and relying on ordinary legitimate interest for training, cannot in the same form serve EU data subjects. EU GDPR Article 22 prohibits the underlying decision engine unless one of the three gateways applies. The EU AI Act then adds its own, independent risk classification. A UK-only compliance design does not translate at the border.
For Sovereign AI Fund recipients, and for any founder whose product roadmap includes EEA customers, the practical consequence is dual-track architecture from day one: one track built to UK rules, one built to EU rules, sharing infrastructure where possible. The cost of designing this in early is materially lower than the cost of discovering it at Series A due diligence, when investor counsel asks for the EU AI Act risk classification and the EU GDPR Article 22 analysis and finds neither exists.
What to build in, and when
Four items belong in the first 90 days of product and compliance work for any firm taking Fund backing or otherwise scaling a UK AI product. First, a data protection impact assessment under Article 35 UK GDPR for any high-risk processing, including large-scale profiling, special category data and biometric identification. The ICO’s AI and biometrics strategy update (March 2026) confirms transparency, bias mitigation, and rights and redress as the three priorities for regulator scrutiny.
Second, a lawful basis analysis for each processing activity. Recognised legitimate interest under Article 6(1)(ea) is a narrow tool. The Annex 1 purposes do not cover commercial product development, so most AI founders cannot rely on it. Ordinary legitimate interest under Article 6(1)(f) remains available for training at scale but requires a documented necessity and balancing assessment that holds up under ICO scrutiny.
Third, an Article 22 architecture for any product that produces decisions with legal or similarly significant effects on individuals. The architecture must either secure documented human involvement with authority, competence and time to override the output, or, if the decision remains solely automated, deliver all four Article 22C safeguards. For decision-engine products in regulated sectors this is the gating compliance question.
Fourth, the parallel EU layer. EU AI Act classification and EU GDPR Article 22 analysis must run alongside the UK work, not after it. The Article 28 flow-down matters too: Fund recipients supplying regulated customers will find contractual warranties on UK and EU compliance written into supply agreements. Where that foundation needs laying, see our guidance on AI and data governance.
Viewpoint
The Sovereign AI Fund is a policy signal dressed as a capital vehicle. £500 million across seven recipients will not move the commercial needle against US venture capital flows. What the UK is trying to do is different. It is offering a regulatory deal on automated decision-making and lawful basis that is genuinely lighter than the EU, using the Fund as credible commitment to stay the course on that policy direction, and betting that the combination of regulatory headroom and ecosystem signalling compounds into an AI industry presence that outweighs the capital shortfall.
It can work, but not for founders who read the Fund as a UK-first steer. In my experience advising AI and data-led businesses, the product architecture built in the first 12 months decides what markets the firm can serve in the next five. A UK-optimised stack on day one is a product trap by Series B, when EEA customers require EU AI Act and EU GDPR evidence the firm has not built. The right read of the UK’s bet is that divergence creates commercial headroom at the front end, provided founders treat EU compliance as parallel work, not a later translation exercise. Public capital and regulatory divergence are both levers. Neither is enough alone.
Links
- DSIT announcement, Sovereign AI Fund first investments, 16 April 2026
- Secretary of State speech at Wayve, 16 April 2026
- Data (Use and Access) Act 2025
- DUAA 2025 Commencement No. 6 Regulations (SI 2026/82)
- ICO draft guidance on automated decision-making and profiling (consultation closes 29 May 2026)
- ICO AI and biometrics strategy
- Related Bratby Law analysis: DUAA takes effect: ICO enforcement; Recognised legitimate interests: UK/EU divergence; AI and automated decision-making practice page
For Sovereign AI Fund recipients, and for firms deploying AI products into UK and EU markets, Bratby Law advises on AI and data governance. Contact Rob Bratby to discuss UK and EU compliance architecture and EU AI Act classification work.
