Ofcom v X: The Online Safety Act Gets Its Defining Enforcement Test

Ofcom opened a formal investigation into X (formerly Twitter) on 12 January 2026, following reports that the platform’s Grok AI chatbot was generating sexually explicit deepfake images of real people, including children. The investigation examines whether X has complied with its duties under the Online Safety Act 2023 (OSA) to protect users from illegal content and to shield children from harmful material. It is the first major OSA enforcement action against a global platform, and the outcome will set the tone for how the UK regulates AI-generated content at scale.

The Online Safety Act enforcement framework

The OSA imposes duties on providers of user-to-user services to assess the risk of illegal content appearing on their platforms, to take proportionate steps to prevent users from encountering priority illegal content, and to remove such content quickly when identified. Sections 9, 10 and 11 of the OSA set out the illegal content risk assessment and safety duties. Sections 20, 21 and 22 impose separate duties to protect children, including a requirement to use highly effective age assurance where a service is likely to be accessed by children.

Ofcom enforces these duties under a graduated framework. It can issue provisional notices of contravention (section 130 OSA), followed by confirmation decisions and penalty notices carrying fines of up to 10% of qualifying worldwide revenue or GBP 18 million, whichever is greater. In the most serious cases, Ofcom can apply to the court for business disruption measures under Chapter 6 of Part 7 of the OSA, including orders requiring payment providers, advertisers or internet service providers to withdraw services from a non-compliant platform. The Secretary of State for Science, Innovation and Technology confirmed to the House of Commons on 12 January 2026 that the government would support Ofcom if it sought a court order to block UK access to X.
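
The penalty cap described above is the greater of two figures. A minimal sketch of that calculation (the function name and the revenue figures are illustrative, not drawn from any Ofcom decision):

```python
STATUTORY_ALTERNATIVE_GBP = 18_000_000  # fixed GBP 18 million alternative cap
REVENUE_FRACTION = 0.10                 # 10% of qualifying worldwide revenue

def max_osa_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    """Upper bound of an OSA penalty: the greater of 10% of qualifying
    worldwide revenue or GBP 18 million."""
    return max(REVENUE_FRACTION * qualifying_worldwide_revenue_gbp,
               STATUTORY_ALTERNATIVE_GBP)

# Illustrative figures only:
# a platform with GBP 1bn qualifying revenue faces a cap of GBP 100m;
# one with GBP 100m revenue falls back to the GBP 18m alternative.
print(max_osa_penalty(1_000_000_000))  # 100000000.0
print(max_osa_penalty(100_000_000))    # 18000000.0
```

The point of the "whichever is greater" formulation is that the GBP 18 million figure acts as a floor on the maximum: even a platform with modest qualifying revenue faces the same headline exposure.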

What happened with Grok on X

In late December 2025, reports emerged that Grok, the AI chatbot integrated into the X platform, was being used to generate non-consensual intimate images of real individuals. The content included sexualised deepfakes of women and, according to Ofcom and the Secretary of State, images that may constitute child sexual abuse material (CSAM). A study of 50,000 posts mentioning Grok between 25 December 2025 and 1 January 2026 found that over half contained individuals in minimal attire, with 81% depicting women and 2% depicting persons appearing to be under 18.

Ofcom contacted X on 5 January 2026 and gave the platform until 9 January to respond. On 12 January, having assessed X’s response, Ofcom opened its formal investigation. The scope covers X’s compliance with the core illegal content and children’s safety duties under sections 9 to 12 and 20 to 22 of the OSA. By 15 January, X had implemented some preventive measures, though their nature has not been made public.

The timing matters. The Data (Use and Access) Act 2025 (DUAA) created a new criminal offence of creating, or requesting the creation of, non-consensual intimate images. That offence came into force in January 2026 and was immediately designated a priority offence under the OSA, meaning platforms have a heightened duty to prevent its commission on their services.

Parallel regulatory action: ICO and the EU

Ofcom is not acting alone. The Information Commissioner’s Office (ICO) opened its own investigation in February 2026 into both X Internet Unlimited Company and X.AI LLC, examining whether personal data has been processed lawfully, fairly and transparently in the development and deployment of Grok, and whether appropriate safeguards were built into the system’s design. The ICO has said it is coordinating closely with Ofcom to ensure the UK’s data protection and online safety frameworks work in tandem.

International data protection authorities issued a joint statement on the privacy risks of AI-generated imagery in February 2026, and the European Commission opened parallel proceedings against X on 26 January. The convergence of enforcement action across jurisdictions underlines the scale of the alleged compliance failure.

Commercial implications for regulated platforms

The X investigation has three practical consequences for any service that hosts AI-generated content or integrates generative AI features.

First, the OSA requires platforms to conduct a risk assessment before deploying new features that may change the risk profile of the service. Sections 9 and 20 require updated risk assessments when the provider makes a significant change to the design or operation of the service. Integrating a generative AI model that can produce images is a textbook example. Platforms that bolt on AI features without revisiting their risk assessments are exposed.

Second, Ofcom’s February 2026 update on the scope of the OSA is instructive. Standalone AI chatbots that restrict interaction to user-chatbot exchanges, do not search the internet, and cannot generate pornographic content may fall outside the OSA’s scope. But the moment a chatbot is integrated into a user-to-user platform such as X, the platform’s existing duties apply to the chatbot’s output. The regulatory perimeter turns on integration, not on the AI model itself.

Third, the enforcement escalation from fines to business disruption measures is real. Ofcom has already imposed penalties on smaller platforms: GBP 1.35 million on 8579 LLC and GBP 800,000 on Kick for age assurance failures. But the X investigation is the first against a Category 1 service, and the government’s explicit backing for a potential blocking order signals that Ofcom will not be deterred by the political profile of the platform’s owner.

A further compliance deadline falls on 7 April 2026, when the CSEA content reporting duty under section 66 OSA comes into force, requiring regulated user-to-user services to report child sexual exploitation and abuse content to the National Crime Agency. Any platform already under investigation for CSAM-related failures will face acute scrutiny on compliance with this new duty.

Viewpoint

The X investigation marks the point at which the Online Safety Act moves from framework to enforcement. It is one thing to publish codes of practice and send letters; it is another to investigate a platform with 19 million UK users, a combative approach to regulatory compliance, and an owner who has publicly questioned the legitimacy of content regulation. Ofcom’s credibility as a platform regulator depends on the outcome.

From an advisory perspective, what concerns me is the gap between the speed at which platforms can deploy generative AI features and the speed at which they conduct risk assessments. The OSA requires assessment before deployment, not after harmful content has gone viral. Too many platforms treat risk assessment as a compliance formality rather than a genuine safeguard. The X case should prompt boards to ask their product teams a simple question: did we update our OSA risk assessment before we shipped this feature? If the answer is no, the exposure is material.

As we noted in our earlier analysis of what platforms must do under the OSA, the compliance burden falls on the provider, not the user. The fact that Grok users may have prompted the generation of harmful content does not absolve X of its duty to prevent that content from being created and shared on its platform in the first place.

Contact

For advice on Online Safety Act enforcement, AI content moderation obligations or regulatory investigations, contact Rob Bratby at Bratby Law. Bratby Law advises telecoms operators, platforms and technology companies on data protection and regulatory enforcement.
