
AI, Data and Governance Advice
Data protection compliance for AI-enabled products and automated decision-making
Who this is for
Need to know whether your data processing is compliant, or how to fix it if it is not? We advise on UK GDPR and DPA 2018 compliance across all processing activities – from lawful basis and accountability to DPIAs, international transfers and breach response.
| Product type | Likely lawful basis | DPIA required? | Key compliance risks |
|---|---|---|---|
| AI/ML model training on personal data | Legitimate interests (UK GDPR, Art 6(1)(f)) | Yes (high risk processing) | Purpose limitation; data minimisation; transparency about training data sources |
| Automated decision-making with legal effects | Contract performance or explicit consent (Art 22(2)) | Yes (Art 35(3)(a)) | Right to human review; meaningful information about logic; fairness and bias testing |
| Customer profiling and segmentation | Legitimate interests (Art 6(1)(f)) | Likely (systematic monitoring) | Balancing test documentation; opt-out mechanism; data retention limits |
| Data analytics platform | Varies by use case | Depends on scale and data types | Controller/processor classification; international transfers; purpose specification |
| IoT and connected devices | Consent or legitimate interests | Yes if systematic monitoring | PECR consent for device storage; data minimisation; security by design |
Typical triggers
- Your product team has built a feature that processes personal data and your DPO needs a DPIA completed before launch
- The board has asked for a legal opinion on whether a specific product’s data governance arrangements are adequate
- You are responding to an ICO enquiry about the use of automated decision-making in a specific product
- You need to assess whether Article 22 UK GDPR applies to a particular product feature that makes or informs decisions about individuals
- A customer or regulator has asked how your AI model was trained and what personal data it processes, and you need a defensible answer
- The business is deploying an AI-enabled product and needs to know whether a data protection impact assessment is required
- A third-party AI platform provider’s terms raise questions about controller and processor roles and international data transfers
- The business receives a data subject access request that covers AI-generated profiles or automated decisions
- An investor or acquirer asks about the company’s AI governance and data protection compliance as part of due diligence
What we deliver
AI products raise IP questions at every stage of development and deployment. We advise on IP rights in training data, including copyright protection in curated datasets and database rights in compiled data, plus third-party content clearance obligations. We help you structure ownership and licensing of trained models and AI-generated outputs, and we advise on trade secret protection for proprietary algorithms and model architectures.
Our approach integrates IP analysis with data protection compliance, because the two frameworks often apply to the same data assets. Find out more: see our data commercialisation and licensing page, and our three-framework analysis of how IP, proprietary information and personal data interact.
- IP rights in training data: copyright protection for curated datasets, database rights in compiled data, third-party content clearance and rights negotiation
- Ownership of trained models and outputs: who owns the trained model, model weights and outputs generated by AI systems, and licensing structures for downstream use
- Trade secret protection: protecting proprietary algorithms, model architectures and training processes as confidential information
- Licensing AI-derived content: structuring licences for analytics, predictions, recommendations and AI-generated content for commercial use
- DPIA: a data protection impact assessment for a specific product or processing activity, compliant with Article 35 UK GDPR
- Article 22 assessment: a written opinion on whether automated individual decision-making provisions apply to a specific feature
- Governance memo: a board paper confirming the data protection compliance position of a product or AI use case
- ICO response: a draft response to an ICO enquiry about a specific product’s data processing
- Remediation plan: where a product does not currently comply, a plan to bring it into compliance with identified steps and timeline
If you need data protection or AI governance advice for a regulated product, we can provide an initial assessment and recommended next steps within a week.
Related direct legal advice pages
See also our other direct legal advice pages:
- Do I need regulatory authorisation before offering my product in the UK?
- What should I do if a regulator is investigating my business?
- What regulatory risks should I check before buying a regulated business?
- Payments Product, Safeguarding and Scheme Governance Advice
- Commercial and Technology Contract Support
- Telecoms Product Launch Advice
- Deal Structuring and Negotiation
- Direct Legal Advice (overview)
Representative experience
Recent and representative matters include:
- Designed and implemented a UK GDPR compliance framework for a telecoms operator, covering records of processing, a DPIA programme and data subject rights procedures that satisfied an ICO audit.
- Advised a SaaS platform on lawful basis for B2B marketing data processing, completing legitimate interests assessments that allowed the client to proceed without consent dependency.
- Conducted a compliance gap analysis for a health-tech company, with focus on special category data processing, producing a prioritised remediation plan delivered within four weeks.
- Advised on cookie compliance and direct marketing obligations for an e-commerce platform, restructuring the consent management platform to achieve PECR compliance before a planned ICO review.
- Supported an organisation through ICO engagement following a personal data breach, managing the notification process and remediation plan that led to no further regulatory action.
Frequently asked questions
Do we need a DPIA before deploying an AI system?
If your AI system processes personal data in a way that is likely to result in a high risk to individuals, Article 35 of the UK GDPR requires you to carry out a data protection impact assessment before the processing begins. The ICO considers that most AI systems involving profiling, automated decision-making, or large-scale processing of personal data will meet the high-risk threshold. We assess whether a DPIA is required, and if so, conduct it in a format that satisfies the ICO’s expectations.
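The screening logic described above can be sketched as a simple check: a DPIA is needed where personal data is processed and a high-risk indicator is present. This is a purely illustrative sketch, not legal advice; the indicator labels below are hypothetical shorthand, not the ICO's published list of criteria.

```python
# Hypothetical labels for the high-risk indicators mentioned above
# (profiling, automated decision-making, large-scale processing).
HIGH_RISK_INDICATORS = {
    "profiling",
    "automated_decision_making",
    "large_scale_processing",
}

def dpia_required(processes_personal_data: bool, indicators: set) -> bool:
    """Return True where Article 35 UK GDPR is likely to require a DPIA."""
    return processes_personal_data and bool(indicators & HIGH_RISK_INDICATORS)

# An AI product that profiles customers at scale crosses the threshold:
print(dpia_required(True, {"profiling", "large_scale_processing"}))  # True
```

In practice the assessment is qualitative rather than binary, which is why the threshold question itself is often part of the instruction.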
Are we the controller or processor when we use a third-party AI platform?
The controller-processor classification depends on who determines the purposes and means of the personal data processing, not on who owns the technology. If you decide what data to feed into the AI platform and what outputs to act on, you are likely the controller. If the platform provider also uses the data for its own model training, it may be a joint controller. We analyse your specific arrangements and advise on the correct classification, which determines your respective obligations under the UK GDPR.
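As a rough sketch of the classification rule described above: the role turns on who determines the purposes and means of processing, with the provider's own training use pointing towards joint controllership. This is illustrative only; real classification is fact-specific, and the boolean inputs below are simplifying assumptions.

```python
def classify_role(you_determine_purposes_and_means: bool,
                  provider_trains_own_models_on_your_data: bool) -> str:
    """Rough controller/processor screen for use of a third-party AI platform."""
    if you_determine_purposes_and_means:
        if provider_trains_own_models_on_your_data:
            # The provider decides purposes for its own training use too
            return "you: controller; provider: likely joint controller for training use"
        return "you: controller; provider: processor"
    return "you: likely processor; further analysis needed"
```

The classification matters because it allocates UK GDPR obligations between the parties, which is why platform terms deserve close review.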
Can we transfer personal data to US-based AI providers?
International transfers of personal data to the US are permitted under the UK-US data bridge, which came into effect in October 2023. However, the data bridge only covers US organisations that have self-certified under the UK Extension to the EU-US Data Privacy Framework. If your AI provider is not certified, you will need to rely on alternative transfer mechanisms such as standard contractual clauses and conduct a transfer risk assessment. We advise on the correct transfer mechanism for your specific provider arrangements.
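The fallback logic above can be expressed as a short decision rule: the data bridge applies only where the provider is certified; otherwise an alternative mechanism plus a transfer risk assessment is needed. The mechanism names below are shorthand labels for illustration, not statutory terms.

```python
def uk_us_transfer_route(provider_certified_under_data_bridge: bool) -> list:
    """Suggest a transfer route for sending personal data to a US AI provider."""
    if provider_certified_under_data_bridge:
        # UK Extension to the EU-US Data Privacy Framework ("UK-US data bridge")
        return ["uk_us_data_bridge"]
    # Otherwise: an alternative mechanism such as standard contractual
    # clauses, together with a transfer risk assessment
    return ["standard_contractual_clauses", "transfer_risk_assessment"]
```

Certification status can change, so the check is worth repeating at contract renewal.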
What does the UK GDPR require for automated decision-making?
Article 22 of the UK GDPR gives individuals the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or similarly significant effects. If your AI system makes decisions about individuals without meaningful human involvement, you need a lawful basis under Article 22(2), and you must provide individuals with information about the logic involved, the significance of the processing, and the right to obtain human intervention. Note that the Data Use and Access Act 2025 has reformed this regime (see the FAQ below). We advise on how to structure your AI decision-making processes to comply with these requirements.
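The two-limb screen above reduces to a simple rule: Article 22 is engaged where there is no meaningful human involvement and the decision has legal or similarly significant effects. This sketch is illustrative only; whether human involvement is "meaningful" is a qualitative judgement that cannot be reduced to a boolean in practice.

```python
def article_22_engaged(meaningful_human_involvement: bool,
                       legal_or_similarly_significant_effect: bool) -> bool:
    """Rough screen: is a decision 'based solely on automated processing'
    with legal or similarly significant effects on the individual?"""
    # A decision is "solely" automated only where no human involvement
    # meaningfully influences the outcome (e.g. rubber-stamping does not count).
    return (not meaningful_human_involvement) and legal_or_similarly_significant_effect
```

A human reviewer who merely rubber-stamps the algorithm's output does not make the involvement meaningful, which is why the boundary question is often contested.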
How should we handle data subject access requests relating to AI outputs?
A data subject access request under Article 15 of the UK GDPR extends to personal data processed by AI systems, including any profiles, scores, or decisions generated about the individual. You must also provide meaningful information about the logic involved in automated decision-making. We advise on how to identify and extract the relevant personal data from AI systems, what explanations you need to provide about algorithmic logic, and how to manage vexatious or disproportionate requests using the manifestly unfounded exemption.
Do I need a DPIA for my product?
If the product processes personal data and uses automated processing likely to result in a high risk to individuals, a DPIA is required under Article 35 UK GDPR. We assess whether the threshold is met and, if so, prepare the DPIA.
Does Article 22 UK GDPR apply to AI-assisted decisions?
Article 22 applies to decisions based solely on automated processing that produce legal effects or similarly significant effects on individuals. Where there is meaningful human involvement, it may not apply. We advise on the boundary for specific products.
Is the UK bringing in standalone AI legislation?
The UK has adopted a sector-specific approach through existing regulators. The primary legal framework remains the UK GDPR and the DPA 2018. We advise on what that means for specific products rather than on AI policy in the abstract.
How does this differ from ongoing fractional GC support?
This page is for a defined instruction: a DPIA, an Article 22 assessment, a board paper, a response to the ICO. For ongoing embedded data protection support, see the Fractional General Counsel pages.
How does this page differ from the Data Protection page?
The Data Protection page explains the regulatory framework. This page is for a live instruction where you have a specific product compliance question that needs answering.
What has the Data Use and Access Act 2025 changed for automated decision-making?
The Data Use and Access Act 2025 replaced the previous Article 22 UK GDPR framework with a new regime for automated decision-making. Under the new rules, organisations must inform individuals when a significant decision has been taken by automated means, provide a meaningful explanation of the decision, and offer a route to human review. The safeguards are broader than the old Article 22 framework because they apply to decisions that are significant rather than only those that are solely automated. We advise on how to implement these requirements in your product or service.
Do I need to tell my customers that my product uses AI?
UK data protection law requires transparency about automated processing. Under Articles 13 and 14 of the UK GDPR, you must inform individuals about the existence of automated decision-making, including profiling, and provide meaningful information about the logic involved. The DUAA 2025 extended these transparency obligations to any significant automated decision. In practice, this means your privacy notice, product documentation and customer-facing communications should explain where AI is used, what decisions it informs, and how individuals can seek human review. We advise on the wording and placement of these disclosures.
