-
A new investigation has revealed the dark side of popular AI tools — analysts found child sexual abuse imagery on online that appear to have been created using Elon Musk’s AI chatbot, Grok.
-
The discovery of the misuse came as reports show safety guardrails at the company were weakened due to staff departures.
-
Regulators worldwide, both in the UK, Europe, India, etc, are demanding answers and have launched urgent investigations into the matter.

The Internet Watch Foundation (IWF) found criminal child imagery made by AI. The material appears to have been created using Elon Musk’s chatbot, Grok.
The discovery highlights severe risks in AI’s rapid development. It shows how easily tools can generate illegal content for the dark web.
Details of the Dark Web Discovery
Analysts at the Internet Watch Foundation (IWF) found sexualized imagery of young girls, appearing to be between the ages of 11 and 13, on a dark web forum.
Users on the forum claimed they used Grok, an AI chatbot owned by Elon Musk’s xAI, to create these images. The IWF confirmed this was criminal child sexual abuse material (CSAM).
“We are extremely concerned about the ease and speed,” said the IWF’s Ngaire Alexander. He warned that tools like Grok risk bringing this AI imagery into the mainstream. The found material was classified as a lower severity Category C crime under UK law.
However, the same user then used another AI tool. They created a Category A image, the most serious kind. These images turned up on the dark web, not on the main X platform.
This discovery of AI-generated material on dark web forums comes amid broader efforts by UK authorities to crack down on child exploitation rings using these very encrypted platforms. This isn’t just another illegal content; it shows that AI is making it easier to create awful abuse content, pushing it from the dark web closer to the mainstream.
Inside the “Digital Undressing” Crisis
While the IWF’s discovery was made in the hidden corners of the dark web, a related, very public mess blew up on the social media platform X, where Grok is integrated, in late December. People started tagging Grok AI to strip (‘digitally undress’) people in photos or put them in sexual situations. They usually did this without asking permission.
Research from AI Forensics analyzed thousands of Grok images. They found that over half of the images of people showed minimal attire. A troubling 2 percent depicted individuals appearing to be minors.
In some cases, users made explicit requests involving minors. Disturbingly, Grok complied with these requests. This happened despite xAI’s own policy banning child sexualization.
The problem is amplified by Grok’s integration with X. Users can make requests publicly, spreading harmful content quickly. What began as a niche trend exploded across the platform.
Weakened Defenses and Regulatory Firestorm
This safety failure did not come out of nowhere. According to internal sources reported by CNN, Musk himself has repeatedly pushed back against what he saw as “over-censoring” of Grok, favoring a less restricted model.
This internal pressure coincided with the departure of key safety staff from xAI just before the ‘digital undressing’ trend surged, leaving the team weakened.
Global regulators are now taking action. Britain’s Ofcom has made “urgent contact” with Musk’s firms. The European Commission called the content illegal and appalling.
India has ordered a comprehensive review of Grok. Malaysia has also launched an investigation. The U.S. Department of Justice warned that it will prosecute CSAM crimes aggressively.
xAI has issued statements against illegal content. It says it removes material and suspends accounts, but despite these measures, the generation and spread of harmful imagery by users continues, exemplifying a broader, systemic failure as digital platforms lose the global battle against child exploitation material, according to a UNICEF report.
The Growing Threat of AI CASM and Possible Ways to Control It
The misuse of generative AI to create fake child sexual abuse images, known as AI-generated CSAM, is becoming common nowadays. The threat is urgent because the tools are easy to use and create very realistic pictures. This causes real harm to victims, who often know the people creating these images.
The problem is getting worse for two reasons. It swamps systems with fake content, which makes it tough for the authorities to find actual victims. Also, it’s often used in schools, with kids going after their classmates.
To fight back, all hands must be on deck – tech companies should build safer AI by adding filters to block bad requests. Lawmakers should create laws to ban fake images, and parents and schools can teach kids how to be safe online.
The Grok case has become a global warning siren. It proves that the most advanced AI can be twisted to cause serious harm if committed safety-by-design principles are not built in from the start.
Urgent investigations now underway will test whether our legal and corporate guardrails can catch up with the technology’s dangerous potential.