AI cyber threats and UK GDPR Article 32: controller duties after the DSIT open letter

In short: the joint open letter from the Secretary of State for Science, Innovation and Technology and the Security Minister on AI cyber threats, updated on 22 April 2026 alongside the launch of the Cyber Resilience Pledge at CyberUK in Glasgow, does not create new law. It sharpens the lens through which the Information Commissioner’s Office will judge existing controller and processor obligations under UK GDPR Article 32 and section 66 of the Data Protection Act 2018. Controllers deploying AI systems should assume that “appropriate technical and organisational measures” now include controls against AI-specific attack vectors, and should update their risk assessments, records of processing and incident response plans accordingly. Regulated telecommunications operators and payment firms should also check that their AI deployments sit inside the sectoral security regimes that already bind them.
Regulatory background
The open letter is a political signal, not a legal instrument. Its three asks are voluntary: that boards take ownership of cyber risk, that firms adopt Cyber Essentials across their supply chains, and that organisations sign up to the National Cyber Security Centre's Early Warning Service. The companion Cyber Resilience Pledge formalises the same three commitments into a scheme that firms may sign.
The binding obligations sit elsewhere. UK GDPR Article 32 requires every controller and processor to implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk. The Regulation names four benchmarks: pseudonymisation and encryption where appropriate; ongoing confidentiality, integrity, availability and resilience of processing systems and services; the ability to restore availability and access in a timely manner after an incident; and a process for regularly testing and evaluating the effectiveness of those measures. Article 32 was amended by Schedule 11 paragraph 10 of the Data (Use and Access) Act 2025, with that amendment in force from 20 August 2025.
For law enforcement controllers and their processors, section 66 of the Data Protection Act 2018 imposes a parallel duty, similarly risk-proportionate, and recently supplemented by a new subsection (3) inserted by DUAA 2025 section 84(5), also in force from 20 August 2025.
What “appropriate” means when AI is in the stack
“Appropriate” does the heavy lifting in both provisions. The controller must calibrate its measures to the state of the art, the cost of implementation, and the likelihood and severity of harm to data subjects. AI systems move all three dimensions. The state of the art for securing large language models, retrieval-augmented generation pipelines and agentic systems is a developing field, implementation costs are often substantial, and the threat surface is broader than a conventional enterprise application.
The evidence base has moved. On 7 April 2026, Anthropic announced Project Glasswing and disclosed that its unreleased Claude Mythos Preview model had autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser. The UK’s AI Security Institute published its own evaluation of Mythos Preview on 15 April 2026, reporting that it was the first system to complete a 32-step enterprise attack simulation autonomously. The National Cyber Security Centre judged that defenders retain the advantage, but only if they act on it. For Article 32 purposes this is material. The Regulation’s “state of the art” and “likelihood and severity” tests are calibrated against what is publicly known, and the UK government has now published the evidence.
Against that backdrop, four AI-specific attack vectors should now feature in the risk register. Training data poisoning, where an attacker corrupts the dataset to bias or weaken the model, can compromise the integrity of processing. Model inversion and membership inference attacks allow adversaries to reconstruct training data, including personal data, from model outputs. Prompt injection can exfiltrate confidential material or cause the model to breach access controls. And supply-chain compromise of third-party foundation models or embedding APIs introduces risks the controller may not see in its own logs. The ICO has flagged these vectors in its AI guidance and they should be reflected in AI governance documentation.
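By way of illustration, a minimal sketch of how those four vectors might be captured as structured risk-register entries follows. The field names, scoring scale and candidate controls are assumptions for illustration, not a prescribed ICO or NCSC format.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names, scoring scale and controls are
# assumptions, not a prescribed regulatory format.
@dataclass
class AIRisk:
    vector: str                 # attack vector
    asset: str                  # processing activity or system affected
    article_32_impact: str      # which Article 32 benchmark is threatened
    likelihood: int             # 1 (rare) .. 5 (expected), internal scale
    severity: int               # 1 (negligible) .. 5 (severe harm to data subjects)
    controls: list[str] = field(default_factory=list)

REGISTER = [
    AIRisk("training data poisoning", "fraud-scoring model",
           "integrity of processing", 2, 4,
           ["dataset provenance checks", "holdout-set drift monitoring"]),
    AIRisk("model inversion / membership inference", "customer-service LLM",
           "confidentiality", 3, 4,
           ["output filtering", "privacy-preserving training where feasible"]),
    AIRisk("prompt injection", "RAG assistant over case files",
           "confidentiality and access control", 4, 4,
           ["input sanitisation", "least-privilege retrieval scope"]),
    AIRisk("supply-chain compromise", "third-party embedding API",
           "integrity and availability", 2, 5,
           ["vendor assurance", "egress monitoring", "contractual audit rights"]),
]

# Simple triage: surface the highest-scoring entries for board reporting.
for risk in sorted(REGISTER, key=lambda r: r.likelihood * r.severity, reverse=True):
    print(f"{risk.likelihood * risk.severity:>2}  {risk.vector} -> {risk.asset}")
```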
Three practical consequences follow. First, records of processing activities under Article 30 should reflect each AI system as a processing activity in its own right, with the security measures described with enough specificity to evidence Article 32 compliance. A generic “access controls and encryption at rest” entry will not satisfy the ICO’s security guidance, which is currently under review following the Data (Use and Access) Act 2025. Second, incident response plans should explicitly cover scenarios involving AI systems, including model compromise, prompt-injection data exfiltration, and degraded integrity following suspected training-data tampering. Third, Article 32’s testing and evaluation duty is not a one-off compliance artefact. For systems that are retrained or fine-tuned on live data, testing should be continuous.
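On the third point, a minimal sketch of what continuous testing could look like in practice, assuming the firm plugs in its own evaluation harness; the test names, log format and evaluate() stub are illustrative assumptions, not a mandated suite.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative security test suite; the names are assumptions, not an
# ICO-mandated list.
SECURITY_TESTS = [
    "membership_inference_auc",    # should stay near 0.5 (chance level)
    "prompt_injection_pass_rate",  # share of injection probes refused
    "pii_leakage_rate",            # personal data surfaced in sampled outputs
]

def evaluate(model_path: str, test: str) -> float:
    """Stub: plug in the firm's own evaluation harness here."""
    return 0.0

def record_evaluation(model_path: str) -> dict:
    """Re-run the security suite against a model artefact and append an
    auditable record, so the Article 32 duty of 'regularly testing and
    evaluating' is evidenced per retrain rather than as a one-off artefact."""
    with open(model_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "model_sha256": digest,  # ties the results to the exact artefact tested
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
        "results": {test: evaluate(model_path, test) for test in SECURITY_TESTS},
    }
    with open("security_eval_log.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```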
Telecommunications operators: TSA sits above Article 32
For public electronic communications providers, the DSIT letter lands on top of a considerably thicker security regime than it describes. Sections 105A to 105Z of the Communications Act 2003, as inserted and amended by the Telecommunications (Security) Act 2021, impose statutory duties to identify, reduce and prepare for security compromises, and to report specified compromises to Ofcom. The Electronic Communications (Security Measures) Regulations 2022 translate those duties into specific technical requirements covering network architecture, redundancy and resilience, supply-chain assurance, patching, logging, monitoring, access controls and vendor equipment. Ofcom’s Telecommunications Security Code of Practice then sets out the measures that, where implemented, Ofcom will treat as meeting the statutory duties. The Code applies in staged tiers linked to relevant annual turnover: Tier 1 providers (turnover of £1 billion or more) and Tier 2 providers (between £50 million and £1 billion) are subject to the most demanding measures with implementation deadlines now broadly live, and Ofcom publishes annual compliance updates.
The notification architecture is also denser than the DSIT letter suggests. Section 105K of the Communications Act 2003 requires providers to notify Ofcom of security compromises that have a significant effect on the network or service, backed by Ofcom’s power to impose penalties of up to ten per cent of relevant turnover. Where the same compromise involves personal data, Article 33 of the UK GDPR requires notification to the ICO within 72 hours, and Regulation 5A of the Privacy and Electronic Communications (EC Directive) Regulations 2003 adds a sector-specific personal data breach notification regime applicable to providers of public electronic communications services. A single compromise can therefore be notifiable under three separate regimes, to two regulators, on different triggers and timelines. On supply chain, the designated vendor regime in sections 105Z1 to 105Z22 of the Communications Act 2003, read alongside the National Security and Investment Act 2021 clearance regime, places board-level ownership of third-party AI model provenance squarely inside the existing regulatory perimeter rather than leaving it to procurement discretion.
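To make the layering concrete, the sketch below paraphrases the three notification triggers as decision logic. The statutory tests are simplified for illustration; the exact thresholds and timelines must be taken from the instruments themselves.

```python
from dataclasses import dataclass

# Paraphrased triggers only; each test is simplified for illustration and
# must be checked against the statutory wording before use.
@dataclass
class Incident:
    significant_network_effect: bool  # s105K CA 2003 test (paraphrased)
    personal_data_breached: bool      # UK GDPR Art 33 trigger
    pecs_provider_in_scope: bool      # public electronic communications service

def notification_duties(i: Incident) -> list[str]:
    duties = []
    if i.significant_network_effect:
        duties.append("Ofcom under s105K Communications Act 2003")
    if i.personal_data_breached:
        duties.append("ICO under UK GDPR Art 33 "
                      "(72 hours, unless unlikely to result in a risk)")
    if i.personal_data_breached and i.pecs_provider_in_scope:
        duties.append("ICO under PECR Reg 5A (sector-specific regime)")
    return duties

# The same compromise can engage all three regimes at once:
print(notification_duties(Incident(True, True, True)))
```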
Where an operator processes subscriber data through an AI system, whether for fraud detection on call detail records and session metadata, network automation and self-organising network (SON) orchestration, customer service automation, spam and scam filtering, or traffic analytics, three regulatory regimes apply to the same control environment: UK GDPR Article 32 supervised by the ICO, the TSA regime supervised by Ofcom, and PECR where personal data of subscribers and users of a public electronic communications service is in scope. The DSIT letter’s asks are baseline cyber hygiene. They do not displace the TSA’s prescriptive controls, they do not reduce the ICO’s expectation that AI-specific risks are reflected in the controller’s technical and organisational measures, and they do not substitute for the PECR confidentiality rules. A board minute recording that the firm has adopted the Cyber Governance Code is not, by itself, evidence of compliance with any of those three regimes. The Ofcom Code and the ICO’s security guidance are the respective “appropriate measures” benchmarks, and on any given control the stricter standard will govern.
Payment firms: Regulation 98 and operational resilience
Authorised payment institutions, electronic money institutions and registered account information service providers face a similar layering. Regulation 98 of the Payment Services Regulations 2017 requires payment service providers to establish a framework with appropriate mitigation measures and control mechanisms to manage the operational and security risks relating to the payment services they provide. Under Regulation 98(2), payment service providers must provide the FCA each year with an updated and comprehensive assessment of the operational and security risks and the adequacy of their mitigation measures and control mechanisms. The content of “appropriate” for these purposes is set by the FCA’s Approach Document on payment services and electronic money, its guidance on ICT and security risk management, and its operational resilience policy statement PS21/3 together with the SYSC 15A rules. Regulation 99 requires prompt notification of major operational or security incidents to the FCA, which in turn passes relevant details to other affected authorities.
Strong customer authentication under Regulation 100 of the PSRs 2017, with the dynamic linking requirement for remote electronic payment transactions specified in the onshored regulatory technical standards (Commission Delegated Regulation (EU) 2018/389), is a core obligation that directly engages AI models in many firms. The transaction risk analysis exemption allows payment service providers to forgo strong customer authentication for low-risk electronic payments where the firm’s real-time fraud monitoring can keep fraud rates below the prescribed reference thresholds. In practice, that monitoring is almost always AI or rules-plus-AI. A failure in the underlying model, whether from training-data drift, prompt injection in an agentic variant, or supply-chain compromise of a third-party scoring API, can breach the exemption conditions and give rise to a regulatory breach independent of any data protection dimension. The allocation of liability for unauthorised transactions between payer and payment service provider under Regulations 76 and 77 adds a commercial consequence to any failure of this kind.
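The arithmetic behind the exemption is worth making concrete. The sketch below uses the reference fraud rates for remote electronic card-based payments set out in the Annex to the SCA RTS; firms should verify the applicable onshored figures rather than rely on this illustration.

```python
# Worked sketch of the transaction risk analysis (TRA) exemption arithmetic.
# Reference rates below are the Annex figures for remote card-based payments;
# check the onshored RTS before relying on them.
TRA_THRESHOLDS = [  # (exemption threshold value in EUR, reference fraud rate)
    (500, 0.0001),  # up to EUR 500 requires a fraud rate below 0.01%
    (250, 0.0006),  # up to EUR 250 requires a fraud rate below 0.06%
    (100, 0.0013),  # up to EUR 100 requires a fraud rate below 0.13%
]

def max_tra_exemption(fraud_value: float, total_value: float) -> int:
    """Highest exemption threshold value the provider may apply, given its
    rolling fraud rate for this transaction type. Returns 0 where the
    exemption is unavailable and SCA must be applied."""
    rate = fraud_value / total_value
    eligible = [etv for etv, ceiling in TRA_THRESHOLDS if rate < ceiling]
    return max(eligible, default=0)

# A model failure that lets the fraud rate drift from 0.009% to 0.05%
# collapses the available band from EUR 500 to EUR 250:
print(max_tra_exemption(9_000, 100_000_000))   # -> 500
print(max_tra_exemption(50_000, 100_000_000))  # -> 250
```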
The wider resilience regime reinforces the point. FCA SYSC 15A requires firms to identify their important business services, set impact tolerances for the disruption of each, and map dependencies, including AI components used in authentication, fraud detection, customer identification or transaction monitoring flows. For firms within the scope of the PRA rulebook, SS1/21 imposes parallel operational resilience obligations. Electronic money institutions and payment institutions providing payment services into EEA markets also fall within the Digital Operational Resilience Act regime for those flows, with threat-led penetration testing and ICT third-party risk requirements that go beyond the UK rules. A security failure in an AI-driven fraud or transaction-monitoring system can therefore engage the ICO under Article 32, the FCA under the PSRs 2017 and SYSC 15A, and the Payment Systems Regulator in its oversight of scheme-level resilience, in parallel and on different timelines. The ministerial letter does not change that regulatory perimeter; it sharpens board-level scrutiny of whether the controls mapped to it actually work.
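A skeletal version of the SYSC 15A mapping exercise might look like the sketch below; the service names, impact tolerances and dependencies are invented for illustration.

```python
from collections import Counter

# Skeletal SYSC 15A-style mapping: important business services, impact
# tolerances, and the AI components each depends on. All entries invented.
IMPORTANT_BUSINESS_SERVICES = {
    "card payment authorisation": {
        "impact_tolerance": "no more than 2 hours' disruption",
        "ai_dependencies": ["real-time fraud scoring model",
                            "vendor device-fingerprinting API"],
    },
    "customer onboarding": {
        "impact_tolerance": "no more than 24 hours' disruption",
        "ai_dependencies": ["document verification model (vendor-hosted)",
                            "vendor device-fingerprinting API"],
    },
}

# An AI dependency shared across important business services is a
# concentration risk the mapping should surface for the board.
shared = Counter(dep for svc in IMPORTANT_BUSINESS_SERVICES.values()
                 for dep in svc["ai_dependencies"])
print([dep for dep, count in shared.items() if count > 1])
# -> ['vendor device-fingerprinting API']
```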
Viewpoint
The ministerial letter is a prompt, not a deadline, and it is narrower than some of the commentary suggests. Controllers that treat it as a PR exercise will be exposed; those that treat it as a compliance trigger will be better positioned when the ICO, Ofcom or FCA investigates the next AI-related incident. The ICO has been signalling for over a year that it will look first at whether security measures were commensurate with the specific risks of the technology deployed, not at whether generic enterprise controls were in place. The letter’s public framing now makes it harder for a controller to argue, after the fact, that AI-specific threats were not reasonably foreseeable on 22 April 2026. For regulated telecommunications and payments firms, the sharper point is that the existing sectoral regimes already require exactly the analysis the letter now publicly calls for. The question is whether the documentation evidences it.
If you would like to discuss how Article 32, section 66, the TSA regime or the PSRs 2017 security rules apply to your AI deployment, or how to update your Article 30 register and incident response plan to reflect the current regulatory posture, please get in touch.
Rob Bratby is Managing Partner of Bratby Law, the boutique telecoms, data and payments regulatory law firm. Rob has over 25 years’ experience advising on UK and EU data protection and privacy law, including at Oftel, and holds General Counsel appointments across the payments and communications sectors. Bratby Law advises controllers, processors, fintechs, electronic money institutions and telecommunications operators on the intersection of UK GDPR, sector-specific security regimes and AI governance.
