A study of the artificial intelligence chatbot Grok, which is embedded in X, estimates it created 3 million sexualized images in 11 days in January, including 23,000 depicting children.
Meanwhile, European regulators have yet to decide how to handle the explosion of nonconsensual deepfakes on the already embattled platform.
The study, by the Center for Countering Digital Hate, analyzed a sample of 20,000 posts on Grok’s X handle out of a total of 4.6 million over an 11-day period. It found that 65 percent of the sample consisted of sexualized images and that 0.5 percent likely depicted children.
Grok’s image-generating feature went viral just before the end of 2025, particularly due to its ability to undress people. The platform took some steps to restrict the feature on Jan. 9 and again on Jan. 14.
The European Commission, which enforces both the Digital Services Act covering large platforms like X and the EU’s AI law, is considering which tool to wield.
A fresh probe into Grok under the powerful social media regulation is in the works. X has already been fined and is being investigated on several fronts by the Commission.
The EU is also considering a ban on AI nudification apps, though it is unclear whether such a ban would apply to general-purpose tools like Grok.
The Center for Countering Digital Hate did not analyze the prompts behind the posts, so it cannot say whether any of the people depicted consented to having their images altered.
Imran Ahmed, the CEO of the civil society group, was recently banned from traveling to the United States by the administration over the group’s alleged role in censorship.