Elon Musk’s AI company xAI has restricted the use of its controversial AI image generator Grok to paying subscribers, following a growing outcry over its use to spread sexualised deepfakes of women and children.
The start-up announced the move on Friday morning, days after it emerged that the chatbot had been used to create explicit images of people without consent.
Those revelations prompted lawmakers in the EU, France and the UK to threaten the platform with fines and bans unless it took action.
“Image generation and editing are currently limited to paying subscribers,” Grok posted on X. xAI did not immediately respond to a request for further comment.
Grok has been intentionally designed to have fewer content guardrails than competitors, with Musk calling the model “maximally truth-seeking”. The company’s chatbot also includes a feature that allows users to generate sexually suggestive images.
UK Prime Minister Sir Keir Starmer promised to take action against X on Thursday, urging the social media platform to “get their act together” and stop its AI chatbot tool Grok from producing sexualised images of children.
After xAI limited access to Grok on Friday, the UK technology secretary Liz Kendall said she expects the communications regulator Ofcom to use its powers to intervene, including potentially blocking the service in the country. “It is an insult and totally unacceptable for Grok to still allow this if you’re willing to pay for it,” she said.
Three US senators also urged Apple and Google to remove the X and Grok apps from their app stores until Musk addressed “these disturbing and likely illegal activities”.
The European Commission has ordered X to retain internal documents relating to Grok until the end of the year. French ministers have also reported the sexual images that Grok has generated to prosecutors and media regulators.
On January 3, Musk posted on X that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
The rise of generative AI has led to an explosion of non-consensual deepfake imagery, as the technology makes such images trivially easy to create.
The Internet Watch Foundation, a UK-based non-profit, said AI-generated child sexual abuse imagery had doubled in the past year, with material becoming more extreme.
While xAI said it had taken down illegal AI-generated images of children, the latest incident will raise further concerns about how easy it is to override safety guardrails in AI models. The tech industry and regulators have been grappling with the far-reaching social impact of generative AI.
In 2023, researchers at Stanford University found that a popular database used to create AI-image generators was full of child sexual abuse material.
Laws governing harmful AI-generated content are patchy. In May 2025, the US enacted the Take It Down Act, which tackles AI-generated “revenge porn” and deepfakes.
The UK is also working on a bill to make it illegal to possess, create or distribute AI tools that can generate child sexual abuse material, and to require AI systems to be thoroughly tested to check they cannot generate illegal content.
Additional reporting by David Sheppard


