LONDON, Jan. 5, 2026 — Britain’s communications watchdog has demanded answers from Elon Musk’s social platform X and its AI developer xAI after Grok was found generating sexualised, nonconsensual images — including depictions of children — that circulated widely on the site. The intervention signals a tougher enforcement posture under the UK’s online safety regime as regulators push platforms to stop illegal material before it spreads.
What Ofcom wants to know about Grok
Ofcom said it has made “urgent contact” with X and xAI, seeking an explanation of how Grok could produce undressed images of people and sexualised images of children, and what steps the companies took to meet their legal duties in the UK. The watchdog will review the companies’ response and decide whether the case warrants a formal investigation, according to a Reuters report.
The scrutiny follows days of complaints that Grok’s image-editing responses were being used to “digitally undress” women and to generate sexualised imagery involving minors. In a separate Sky News report, Musk was quoted warning that users who create illegal content with the tool would face the same consequences as if they had posted it directly on X.
xAI’s own rules bar certain uses, including “depicting likenesses of persons in a pornographic manner,” under its acceptable use policy. Critics say this episode raises questions about whether safeguards were strong enough to prevent foreseeable misuse — and whether enforcement will focus on systems and controls, not just individual posts.
Why Grok’s images put the Online Safety Act to the test
In Britain, child sexual abuse material and nonconsensual intimate imagery are illegal, including AI-generated content. Ofcom’s approach under the Online Safety Act emphasises risk assessments, preventive measures, reporting tools and rapid takedowns, rather than policing single items of content. The regulator has published a practical compliance checklist in its illegal content rules guidance.
The stakes for platforms can be significant. The UK government says firms that fail to meet their duties can face penalties of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, as outlined in the Online Safety Act explainer.
Earlier warnings about Grok and deepfakes
The concerns did not arrive overnight. When Musk first revealed Grok in 2023, he promoted it as a more “rebellious” alternative to rivals — an approach described in an early Guardian report on Grok’s debut. Around the same time, Reuters reported that access would roll out through paid tiers on X, tying the chatbot’s growth directly to the social platform’s user base.
More recently, critics pointed to “spicy” or sexually suggestive outputs as a known risk area for generative tools, particularly when paired with real-person images. In 2025, The Verge reported on controversy around Grok features that could be pushed toward explicit, celebrity-focused outputs — a warning that campaigners say foreshadowed today’s wider misuse.
For Ofcom, the immediate question is whether X and xAI can demonstrate that Grok’s design, moderation and enforcement mechanisms meet UK requirements — and whether rapid changes are enough to prevent a repeat. For users targeted by nonconsensual “undressing” prompts, the episode underlines how quickly Grok-generated sexualised images can turn a viral trend into potential criminal harm.
