The United Kingdom’s communications regulator, Ofcom (the Office of Communications), has opened a formal investigation into X (formerly Twitter) following serious concerns that the platform’s Grok AI chatbot may have been used to generate and disseminate highly sexualised and potentially illegal content.
The investigation centres on allegations that Grok — the artificial intelligence system developed by Elon Musk’s xAI and integrated into X — was used to create non-consensual “undressed” or “nude” images of real individuals. Such practices may constitute intimate image abuse under UK law.
More alarmingly, Ofcom is also examining claims that the AI system may have produced sexualised images of children, material that could qualify as child sexual abuse material (CSAM) — one of the most serious criminal offences under UK legislation.
Ofcom, whose remit spans television and radio broadcasting, telecommunications, postal services, radio spectrum management, and online safety regulation under the Online Safety Act, is assessing whether X has met its statutory obligations to protect UK users from illegal content hosted or generated on its platform.
On Monday 5 January, Ofcom formally contacted X, requiring the company to submit details by Friday 9 January outlining the safeguards and systems it had in place to prevent the creation, sharing, or continued availability of such material.
X provided its response within the stipulated timeframe. Following an expedited assessment of the information submitted, Ofcom concluded that there were sufficient grounds to open a full formal investigation.
A spokesperson for Ofcom said:
“The Online Safety Act requires platforms to take robust action to prevent and quickly remove illegal content, including intimate image abuse and child sexual abuse material. Where we have reasonable grounds to suspect a company is failing in these duties, we will not hesitate to investigate and, where necessary, take enforcement action.”
The investigation will evaluate whether X has complied with its illegal content duties under the Online Safety Act, including the adequacy of its risk assessments, content moderation processes, and technical safeguards designed to prevent prohibited material from being generated or circulated through tools such as Grok.
If Ofcom determines that X has breached its legal responsibilities, the company could face substantial financial penalties of up to 10 per cent of its global annual revenue, alongside other enforcement measures.
At the time of publication, neither X nor xAI had issued a public statement regarding the investigation.
The move represents one of the most significant regulatory actions taken against X in the UK since the Online Safety Act received royal assent, and underscores growing concern among regulators, child protection organisations, and digital rights advocates over the risks associated with generative AI technologies.
Ofcom stated that the investigation remains at an early stage and that further updates will be provided in due course.
- Kingsley Oyong Akam