When Regulators Regulate Themselves: UK judiciary and ICO guidelines for internal AI use

UK Judges and ICO publish internal AI use guidelines
The Judiciary of England and Wales and the Information Commissioner’s Office (ICO) have both published internal policies on the use of artificial intelligence. These are not consultation papers or policy statements directed at industry. They are operational frameworks governing how these public authorities will use AI in their own work, demonstrating governance in practice.
The judiciary’s approach to AI
The Judicial Office’s AI Guidance for Judicial Office Holders (October 2024) draws a clear line. AI may assist with administrative and preparatory tasks such as summarising case law, drafting correspondence and organising documents. It must not influence judicial reasoning or the substance of decisions. Where AI-generated content forms part of any submission, courts expect it to be identified and verified. All AI outputs must be treated as secondary sources, not authoritative legal material. Any AI use in administrative or preparatory work must be capable of explanation and justification.
This approach accepts AI’s potential efficiencies while protecting impartiality and public trust in the justice system. It reflects the UK’s wider pro-innovation but accountable model of AI regulation.
The ICO’s internal AI use policy
The ICO’s internal AI use policy, published alongside its Strategic Approach to AI Regulation (April 2024), is equally significant. As the UK’s data protection regulator, the ICO enforces responsible data processing across both public and private sectors. By setting AI governance rules for itself, it applies its own regulatory principles in practice.
The policy allows ICO staff to use AI for document review, investigation triage and data analysis, but within a strict governance framework. Staff must assess risks before deploying AI tools, conduct Data Protection Impact Assessments (DPIAs) for higher-risk applications, and maintain human oversight of AI outputs. The ICO will not use AI in enforcement or decision-making processes without explicit governance board approval.
Alignment with the UK AI governance framework
Both policies sit within the UK government’s cross-sectoral AI governance framework, set out in the White Paper on a Pro-Innovation Approach to AI Regulation and the February 2024 guidance to regulators. The five cross-sectoral principles of safety, transparency, fairness, accountability and contestability were designed for sectoral interpretation. What makes these two policies notable is that both institutions have operationalised those values internally.
This matters for three reasons. First, public confidence in AI depends on the behaviour of the institutions that use it; when courts and regulators model responsible AI use, they build trust across sectors. Second, these policies align with similar approaches taken by Ofcom, the FCA and other UK regulators, creating consistency across governance frameworks. Third, they show how abstract principles translate into concrete action: governance boards, audit trails, DPIAs and human-in-the-loop safeguards.
Viewpoint
For organisations subject to UK regulation, these developments raise the bar. Regulators will now scrutinise how AI tools are used in decision-making, not only whether they are used lawfully. Organisations should maintain governance documentation covering approval processes, oversight mechanisms and audit records. Under the EU AI Act (2024/1689), public authorities face binding requirements for transparency and risk management when deploying AI. The UK’s lighter, principle-based model converges in spirit: both demand explainability, auditability and clear human responsibility.
By defining how they govern their own use of AI, the judiciary and the ICO are not just regulating others. They are showing what responsible AI governance looks like from the inside.
Subscribe below for updates on data protection, or contact Rob Bratby at Bratby Law.
