AI regulation and data protection lawyer | UK compliance

AI and Data

Regulation, governance and risk

Artificial intelligence and data-driven technologies are reshaping how organisations operate, innovate and manage risk. As regulatory frameworks develop at pace – across the UK’s principles-based model, GDPR, international data transfer regimes and emerging AI governance expectations – businesses, boards and legal teams need AI and data advice that is technically robust and commercially grounded.

Bratby Law provides senior, specialist support on AI and data regulation, governance and risk. Our approach combines long-standing experience advising on complex technology and regulatory matters with a strong technical understanding and the effective use of advanced tools, including AI-assisted analysis where appropriate. The result is clear, practical guidance that helps organisations adopt, deploy and scale AI responsibly.

We advise on regulatory strategy, data protection compliance, AI governance frameworks, impact assessments, cross-border data issues and the practical controls businesses need to demonstrate accountability. The practice is led by a recognised global expert in data protection with experience at a UK regulator, in-house and as a partner in international law firms.

Our AI and Data Services

Our AI and Data practice is organised around four specialist areas.

AI regulation

Clear, senior guidance on UK and international AI regulatory requirements. Advice on the UK’s principles-based AI framework, the EU AI Act and related guidance, including how these interact with UK GDPR, the Data Protection Act 2018 and sector-specific regimes.

We help clients:

  • Assess whether and how their use of AI is regulated.
  • Map AI systems and use-cases across the lifecycle.
  • Design proportionate governance and documentation, including AI risk assessments and model assurance.
  • Engage constructively with regulators and other stakeholders.

Learn more about our AI regulation support

Data protection

Comprehensive support across UK GDPR, PECR and data-governance obligations, including data mapping, DPIAs and AIAs, high-risk processing, transparency and fairness, data retention, security, profiling and automated decision-making.

We advise on:

  • Data strategy and governance for AI and analytics programmes.
  • Lawful bases, purpose limitation and compatibility for re-use of data.
  • Data subject rights, including explainability for AI-enabled decisions.
  • ICO engagement, codes of practice and regulatory investigations.

Learn more about our data protection services

AI in telecoms

Sector-specific advice on how AI and data use interact with telecoms regulation and digital infrastructure reform, including AI-enabled network management, security and resilience obligations, and customer-facing services.

Learn more about AI and data in telecoms

Governance and risk

Board-level and senior management support on AI and data governance, risk oversight, corporate governance and accountability.

Our work includes:

  • Designing AI and data governance frameworks, policies and committees.
  • Integrating AI risk into existing risk management and assurance processes.
  • Training boards and senior teams on regulatory expectations and practical oversight.
  • Supporting internal audits, independent reviews and regulatory enquiries.

Learn more about governance and risk

What clients ask us about AI and data

Typical questions include:

  • How do the UK’s approach to AI regulation, the EU AI Act and UK GDPR interact for our business?
  • What governance, policies and documentation do we actually need for AI systems in production?
  • How do telecoms-specific duties on security, resilience and consumer protection apply when we use AI in networks and customer services?
  • What should our board and senior management be doing now to demonstrate appropriate governance and oversight?

Our role is to answer these AI and data questions clearly, explain the regulatory logic and translate it into concrete governance, contractual and operational steps.

How we help

We combine regulatory depth, sector knowledge and technical understanding.

Regulatory experience

Experience at Oftel, in-house at operators and as a partner and practice leader in international firms provides an end-to-end view of how regulation is made, interpreted and enforced.

Data and AI expertise

Recognised strength in data protection and data governance, now applied to AI systems, model assurance and lifecycle governance.

Technical and analytical capability

A scientific first degree and formal governance training support a structured approach to AI and data risk. We use modern tools, including AI-assisted analysis where appropriate, to work efficiently while maintaining rigorous professional judgement.

Commercial focus

Advice is calibrated to commercial objectives and risk appetite, helping clients to move projects forward rather than simply catalogue risk.

Who we work with

We provide AI and data advice to:

  • Telecoms operators and digital-infrastructure providers.
  • Technology companies, platforms and AI-enabled businesses.
  • Investors and acquirers assessing AI and data risks in transactions.
  • Law firms and consultancies seeking specialist co-counsel input on AI and data issues.

Our unique end-to-end perspective


Our approach is shaped by a rare combination of regulatory, operator and private-practice experience:

The Regulator’s Perspective

Work at Oftel, the predecessor to Ofcom, provides first-hand experience of how UK communications regulation is developed, interpreted and enforced. This includes leadership of the project to liberalise the UK’s international telecoms infrastructure market (subsea cables and satellite), and a detailed understanding of regulatory intent and enforcement dynamics.

The Operator’s Perspective

Senior in-house roles at COLT and embedded general-counsel roles within operator-side businesses provide practical insight into how networks are built, where risks arise, how compliance is operationalised and how commercial and regulatory decisions are made inside carriers and infrastructure operators.

The Adviser’s Perspective

As a former partner and practice leader at international law firms in London and Singapore, Rob Bratby has advised operators, platforms, investors and global technology companies on major regulatory matters, market-shaping projects, cross-border transactions and multi-jurisdictional compliance programmes.

This combination enables advice that is legally rigorous, commercially aligned and technically grounded.

Why a specialist boutique?


Bratby Law is structured to provide a clear alternative to broad TMT practices and larger City firms:

Boutique approach | TMT or City firm
Specialist, sector-specific focus | Broad TMT coverage with variable depth
Senior delivery on all matters | Work delegated to teams of varying experience
Integrated regulatory, operator and advisory experience | Limited practical or regulatory grounding
Predictable, flexible engagement models | Rigid, process-driven structures

As a boutique, Bratby Law provides specialist regulatory depth, partner-level delivery and commercially aligned advice shaped by practical operator-side experience. Engagement models are flexible and predictable, including direct instruction, specialist co-counsel and fractional general-counsel support.

What clients say

Related insights

Our Insights blog tracks key developments in AI and data regulation and related telecoms and platform issues.

Discuss your AI and data question

To explore a specific AI, data or governance issue, please get in touch to arrange an initial discussion.

Independent directory rankings

Our specialist expertise is recognised in major independent legal directories:

  • Chambers & Partners: Rob Bratby is ranked in the UK Guide 2026 in the “Telecommunications” category: Chambers
  • The Legal 500: Rob Bratby is listed as a “Leading Partner – Telecoms” in London (TMT – IT & Telecoms): The Legal 500
  • Lexology: Rob Bratby is featured on Lexology’s expert profiles (Global Elite Thought Leader): Lexology

Related sub-pages

Frequently asked questions about AI and data

What laws regulate AI in the UK?

The UK does not yet have a single AI Act. Instead, AI is governed through a combination of existing legislation and sector-specific regulators applying the UK’s five cross-sector AI principles (safety, transparency, fairness, accountability and contestability). Key legal regimes include the GDPR/UK GDPR, the Data Protection Act 2018, the Equality Act 2010, consumer protection law, financial services regulation, product safety legislation and competition law. Regulators such as the ICO, Ofcom, the FCA and the CMA are increasingly active in AI oversight.

How does the UK’s approach differ from the EU AI Act?

The UK takes a “principles-based, sector-led” approach rather than a binding horizontal AI statute. The EU AI Act is prescriptive, categorising AI systems by risk and imposing detailed compliance requirements. UK-based organisations working in or selling into the EU may need to comply with both frameworks.

When will the UK introduce binding AI regulation?

Government policy currently prioritises regulator-led implementation of the five principles, but ministers retain the option of legislating if voluntary adoption is insufficient. The ICO and CMA have already begun using existing powers to regulate AI systems, and binding requirements may emerge through amendments to data protection legislation or new sector-specific rules.

What are an organisation’s core obligations when developing or deploying AI systems?

Obligations depend on context, but typically include:
  • Lawfulness and transparency of data use.
  • Data minimisation and purpose limitation.
  • Meaningful human oversight.
  • Security and robustness controls.
  • Testing and validation to manage bias and accuracy risk.
  • Accountability frameworks, including model governance and documentation.
  • Third-party risk management and contractual controls for AI services and models.

Do we need a separate AI governance framework?

Yes. AI use introduces lifecycle governance obligations that go beyond standard data protection compliance. A formal governance structure typically includes an AI policy, roles and responsibilities, risk assessment processes, model registers, impact assessments, technical assurance procedures and escalation channels.

How do we comply with UK GDPR when training AI models?

Compliance requires:
  • Identifying a lawful basis for training.
  • Ensuring training data is fair, relevant and not excessive.
  • Applying appropriate anonymisation or pseudonymisation.
  • Assessing high-risk activities using a DPIA or AI-specific impact assessment.
  • Providing adequate transparency about model training.
  • Managing data subject rights, including the right to object or request erasure if applicable.

Can we use public data, scraped data or open datasets for training?

Potentially, but not without risk. Public availability does not remove data protection obligations. Copyright, database rights, confidentiality and terms of service may also apply. Organisations should assess provenance and establish a defensible lawful basis before using scraped or aggregated datasets.

Do we need consent to use personal data in AI systems?

Not necessarily. Consent is often impractical. Most organisations rely on legitimate interests or contractual necessity, supported by a DPIA and appropriate safeguards. However, consent may be required for high-risk biometric or sensitive data deployments.

How does AI risk differ from general technology or data protection risk?

AI systems introduce new categories of risk: model drift, hallucination, explainability limitations, bias, training data toxicity, unexpected behaviour and dependency on upstream model providers. These require tailored oversight, technical assurance and operational governance.

How do we manage cross-border data transfers in AI systems?

Where personal data is processed outside the UK or EEA, organisations must implement transfer safeguards such as the UK International Data Transfer Agreement (IDTA) or UK Addendum, or the EU SCCs. Transfer risk assessments should evaluate local surveillance risk, model training practices and downstream sub-processors.

Are foundation model providers responsible for compliance?

Providers have obligations under data protection, consumer law and competition law, but deploying organisations remain responsible for ensuring their use of the model is lawful, fair and safe. Regulators increasingly expect organisations to interrogate provider assurances rather than rely on them blindly.

What should we do if our AI system produces inaccurate or harmful outputs?

Organisations should have clear escalation and remediation processes, including human review, override mechanisms, incident reporting, documentation, retraining measures and notification procedures where legal obligations (e.g. data breaches) arise.

How will the ICO enforce AI compliance?

The ICO has confirmed that it will use its full enforcement powers (including fines) to address unlawful AI training, bias, opaque automated decision-making and inadequate transparency. The ICO has already published detailed AI auditing guidance, which sets practical expectations for organisations.

What are the consequences of non-compliance?

Consequences include regulatory enforcement, litigation risk (including group claims), contractual disputes, reputational harm, financial losses and operational disruption. For regulated sectors such as telecoms or financial services, non-compliance can impact licensing and supervisory relationships.

Do we need an AI Register or model inventory?

Yes. A model inventory is now a widely accepted best practice and increasingly expected by regulators. It assists with governance, accountability, risk management, audits and the ability to explain and document system behaviour.
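By way of illustration only, and assuming a simple internal register rather than any prescribed regulatory template, the sketch below shows the kind of fields such an inventory might record for each system. The field names and example values are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: the fields below are assumptions about what a
# model register might record, not a regulatory template or standard.
@dataclass
class ModelRegisterEntry:
    system_name: str                  # internal name of the AI system or model
    business_owner: str               # accountable individual or team
    purpose: str                      # what the system is used for
    personal_data_used: bool          # whether personal data is processed
    lawful_basis: str | None          # e.g. "legitimate interests", if applicable
    risk_rating: str                  # e.g. "low", "medium", "high"
    dpia_completed: bool              # whether a DPIA / AI impact assessment exists
    third_party_provider: str | None  # upstream model or service provider, if any
    last_reviewed: date               # date of the most recent governance review
    notes: list[str] = field(default_factory=list)  # assurance and audit notes

# Hypothetical example entry
entry = ModelRegisterEntry(
    system_name="customer-service-chat-assistant",
    business_owner="Head of Customer Operations",
    purpose="Drafting responses to routine customer queries",
    personal_data_used=True,
    lawful_basis="legitimate interests",
    risk_rating="medium",
    dpia_completed=True,
    third_party_provider="External foundation model provider",
    last_reviewed=date(2025, 1, 15),
)
```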

Should boards oversee AI governance?

Yes. Boards are expected to oversee AI risk in the same way they oversee cybersecurity or data protection. They should receive regular reporting, approve governance frameworks and ensure adequate resources and controls.

Are SMEs expected to meet the same AI and data standards as large organisations?

Regulators adopt a risk-based approach, but SMEs remain responsible for lawful and safe AI use. Simple frameworks, lightweight governance and proportionate documentation are acceptable provided they address the risks.

How does AI relate to telecoms regulation?

AI is increasingly relevant to network management, automated customer engagement, fraud detection, content moderation and communications platforms. Ofcom is actively considering AI-related risks across online safety, spectrum management and network resilience. Providers must manage both AI-specific and telecoms-specific obligations.

What practical steps should we take to begin AI compliance?

A typical starting point includes:
  • Mapping AI use cases.
  • Reviewing data flows and training datasets.
  • Implementing an AI governance framework.
  • Completing an AI impact assessment.
  • Updating contracts and third-party arrangements.
  • Aligning policies (data protection, security, acceptable use, procurement).
  • Conducting staff training.
  • Establishing monitoring and oversight arrangements.
