ICO enquiry into Meta AI smart glasses: what it tells AI product teams about outsourced human review

In short: The ICO wrote to Meta in March 2026 over outsourced human review of audio-visual data captured by Ray-Ban Meta smart glasses. The enquiry tests controller/processor allocation under UK GDPR Article 28, lawful basis for AI training data, and whether the DUAA 2025 “meaningful human involvement” standard is met where reviewers validate AI outputs at scale.
The ICO’s ongoing enquiry into the Ray-Ban Meta smart glasses, announced in March 2026, asks questions that every organisation deploying AI-enabled products with outsourced human review must now answer under the Data (Use and Access) Act 2025 (DUAA 2025), in force since 5 February 2026. Workers at Sama, a third-party contractor in Nairobi, Kenya, reviewed audio and visual data captured by the glasses to train Meta’s AI models. The ICO confirmed it was writing to Meta to request information on how the company meets its obligations under UK data protection law. The architecture of Meta’s pipeline (a controller directing a processor, which in turn directs an outsourced labour pool) is not unique to Meta. Any organisation running a similar structure should audit it against the same framework now.
What the ICO is enquiring about: outsourced human review of AI training data
The ICO’s engagement follows a March 2026 investigation by Swedish media, which revealed that Sama workers in Kenya had labelled and annotated video, audio and transcript data from the Ray-Ban Meta glasses. Meta told the BBC it uses recordings to improve its AI systems only in defined circumstances and that users can manage their data through device settings.
This is the information-gathering stage of the ICO’s regulatory cycle, a step taken before the regulator decides whether to open a formal investigation. The maximum penalty under UK GDPR is £17.5 million or 4% of global annual turnover, whichever is higher. For companies operating AI pipelines with a comparable structure, the ICO’s lines of enquiry are a proxy for the questions they should be asking of their own operations.
How UK GDPR jurisdiction engages
UK jurisdiction does not turn on where Meta or its contractors sit. Under UK GDPR Article 3(2), the regime applies to controllers and processors not established in the United Kingdom where the processing relates to offering goods or services to data subjects in the United Kingdom (Article 3(2)(a)) or to monitoring their behaviour in the United Kingdom (Article 3(2)(b)). The Ray-Ban Meta glasses are sold to UK consumers and capture audio and visual data of UK data subjects in UK locations. Both limbs are engaged. That puts the AI training pipeline, the contractor chain and the transparency framework within the ICO’s enforcement reach, regardless of where individual reviewers operate.
The controller/processor question under UK GDPR Article 28
Under UK GDPR Article 28, a controller that engages a processor must do so under a binding contract specifying the subject matter, nature and purpose of the processing and imposing on the processor the mandatory obligations in Article 28(3): act only on documented instructions, maintain confidentiality, implement Article 32 security measures, obtain written authorisation before sub-processing, assist the controller with data subject rights, delete or return the data at the end of the engagement, and submit to audits.
Meta is the data controller here: it determines that data will be collected via the glasses and used for AI training. Sama, as the reviewing entity, is the candidate processor. The processor label holds only where the outsourced party acts solely on the controller’s instructions and exercises no independent discretion over purpose. Where the reviewing entity exercises genuine judgment (deciding what to label, how to categorise content, or whether a data point warrants retention), it may step into a joint controller role under Article 26, with materially different accountability consequences. The role analysis must come before the contract.
Sub-processor chain liability follows directly. If Sama is Meta’s processor, Meta must authorise all sub-processors Sama engages and ensure Article 28(3) obligations flow down. A gap in that chain is a standalone compliance failure.
Lawful basis: why consent and legitimate interests both face difficulty
For audio and visual data captured in public spaces, UK GDPR Article 6 requires a lawful basis. Neither consent nor legitimate interests is straightforward on these facts.
Consent under Article 7 must be informed and unambiguous. A bystander recorded without knowledge cannot give meaningful consent before the capture occurs. Meta’s position that users can manage their own recordings does not address third-party data subjects who appear in those recordings.
Legitimate interests under Article 6(1)(f) requires the three-part assessment: identify the interest, test necessity, and balance against the data subject’s interests. For AI training data captured at scale in public spaces without notice, the balancing test is difficult. The DUAA 2025 introduced pre-cleared recognised legitimate interests (RLIs) under Article 6(1)(ea). The prescribed RLI categories include matters such as national security, crime prevention and safeguarding vulnerable individuals. Commercial AI model training is not among them. The RLI route is closed.
Audio-visual data captured in public spaces may also incidentally capture health information, religious observance or sexual orientation, bringing Article 9 special category processing into play. That elevates the lawful basis requirement and makes a data protection impact assessment (DPIA) under Article 35 very likely mandatory.
DUAA 2025 and the meaningful human involvement standard
DUAA 2025 s.80 (in force 5 February 2026 under SI 2026/82 reg. 2(j)) replaced the old UK GDPR Article 22 with a new framework in Articles 22A to 22D. Under Article 22A(1)(a), a decision is based solely on automated processing “if there is no meaningful human involvement in the taking of the decision.” Article 22A(2) requires that, when assessing whether human involvement is meaningful, a person must consider, among other things, the extent to which the decision is reached by means of profiling. The test applies to any significant decision, meaning one producing a legal or similarly significant effect for a data subject. Article 22D gives the Secretary of State power to prescribe by regulations cases in which there is, or is not, to be taken to be meaningful human involvement under Article 22A(1)(a), making this a watching brief for AI product teams.
For organisations relying on human reviewers to keep their processing outside the Article 22B restrictions or Article 22C safeguard obligations, the statutory formulation is precise and demanding. The question is not whether a human being is present at some point in the pipeline. It is whether that person exercises genuine independent judgment capable of altering the output. Reviewers confirming AI-generated classifications at volume, with limited discretion to reject them, are unlikely to satisfy the Article 22A standard. Our automated decision-making practice page covers the full post-DUAA framework, and our earlier analysis of the ADM regime after the DUAA addresses the practical application of Articles 22A to 22D.
Practical steps for AI product teams relying on outsourced human review
The Meta enquiry identifies four steps for organisations deploying AI-enabled products with outsourced human review in their pipelines.
First, map the processor chain. Every entity touching personal data in the AI training or inference pipeline should be characterised: processor, joint controller or independent controller. Article 28 contracts must be in place and current. Sub-processor authorisation and obligation flow-down must be documented, not assumed.
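To make the audit concrete, the sketch below shows a minimal processor-chain register in Python. The entity names, roles and fields are hypothetical illustrations, not drawn from the Meta enquiry, and a real register would carry far more (contract dates, transfer mechanisms, audit history); the point is that the chain should be walkable and its gaps machine-reportable.

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    CONTROLLER = "controller"
    PROCESSOR = "processor"
    JOINT_CONTROLLER = "joint controller"

@dataclass
class Entity:
    name: str
    role: Role
    jurisdiction: str
    article_28_contract: bool   # binding contract in place and current
    authorised: bool            # controller's written authorisation (for sub-processors)
    sub_processors: list["Entity"] = field(default_factory=list)

def chain_gaps(entity: Entity, path: str = "") -> list[str]:
    """Walk the chain and report missing contracts or authorisations."""
    here = f"{path}/{entity.name}"
    gaps = []
    if entity.role is Role.PROCESSOR and not entity.article_28_contract:
        gaps.append(f"{here}: no current Article 28 contract")
    for sub in entity.sub_processors:
        if not sub.authorised:
            gaps.append(f"{here} -> {sub.name}: sub-processing not authorised in writing")
        gaps.extend(chain_gaps(sub, here))
    return gaps

# Hypothetical chain: a controller, its review vendor, and an unvetted sub-vendor.
chain = Entity("ControllerCo", Role.CONTROLLER, "UK", True, True, [
    Entity("ReviewVendor", Role.PROCESSOR, "KE", True, True, [
        Entity("SubVendor", Role.PROCESSOR, "KE", False, False),
    ]),
])
for gap in chain_gaps(chain):
    print(gap)
```

Each line of output is a standalone Article 28 finding to remediate before a regulator asks.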
Second, verify the lawful basis. A generic legitimate interests assertion is not sufficient where the processing involves AI training data, public-space capture, third-party data subjects and incidental special category data. Each category of data requires its own analysis. A DPIA is likely mandatory.
Third, audit the transparency notices. Articles 13 and 14 require privacy notices to identify the recipients or categories of recipients of the personal data. A privacy notice that describes AI processing at a high level, without identifying the outsourced review component and the jurisdictions in which it operates, will not meet the ICO’s current transparency posture.
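As a crude cross-check, a team can diff the entities in its processor-chain register against the recipients its privacy notice actually names. The sketch below is illustrative only, with hypothetical entity names; it assumes both the register and the notice have been reduced to simple sets of names.

```python
def undisclosed_recipients(chain_entities: set[str], notice_recipients: set[str]) -> set[str]:
    """Entities that touch personal data but are never named in the notice."""
    return chain_entities - notice_recipients

# Hypothetical example: the notice omits the outsourced review vendor.
chain_entities = {"ControllerCo", "CloudHost Ltd", "ReviewVendor"}
notice_recipients = {"ControllerCo", "CloudHost Ltd"}
print(undisclosed_recipients(chain_entities, notice_recipients))  # {'ReviewVendor'}
```

Any name in the output is a disclosure gap to close before the notice is next reviewed.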
Fourth, test the meaningful involvement claim. Where reliance is placed on reviewer involvement to meet the Article 22A threshold, document the independent judgment reviewers exercise and the rate at which they diverge from AI-generated outputs. Presence in the process is not a defence.
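One way to generate that evidence, sketched below with hypothetical field names: log every review with the model-proposed label alongside the reviewer’s final label, and track the divergence rate. The metric is indicative rather than conclusive; a low rate can also mean an accurate model, so pair it with qualitative evidence of the discretion reviewers actually hold.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    item_id: str
    ai_label: str      # classification proposed by the model
    final_label: str   # label after human review

def divergence_rate(records: list[ReviewRecord]) -> float:
    """Share of reviews in which the human altered the AI-generated output."""
    if not records:
        return 0.0
    return sum(r.ai_label != r.final_label for r in records) / len(records)

# Hypothetical batch: reviewers changed one output in four.
batch = [
    ReviewRecord("a1", "reject", "reject"),
    ReviewRecord("a2", "approve", "approve"),
    ReviewRecord("a3", "approve", "reject"),
    ReviewRecord("a4", "reject", "reject"),
]
print(f"{divergence_rate(batch):.0%}")  # 25%
```

A near-zero rate over a large sample is the pattern the Article 22A analysis should treat as a red flag. Our AI and data governance advice service covers processor chain audits, DPIA preparation and ADM compliance reviews for regulated and AI-enabled businesses.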
Viewpoint
The ICO’s engagement with Meta over the Ray-Ban glasses is not primarily about Meta. It signals that outsourced human review is no longer a structural shortcut in AI compliance. In my work advising AI product companies on UK AI and data protection frameworks, the most consistent gap I see is the treatment of the reviewer function as a default compliance measure rather than as a data processing activity in its own right, one requiring its own controller/processor characterisation, its own lawful basis and its own transparency chain. The DUAA 2025 has raised the bar on what “meaningful” human involvement means: a rubber-stamp review function at scale almost certainly does not meet it. The ICO’s AI and data protection guidance has long flagged this area; the Meta enquiry shows the regulator is now applying direct scrutiny to pipeline architecture, not just output. Teams with human reviewers in their AI pipeline should have audit-ready answers to all four points above before, not after, the ICO writes.
For advice on controller/processor analysis, lawful basis for AI training data, or DUAA 2025 ADM compliance for AI-enabled products, contact Rob Bratby at Bratby Law.
