According to Fast Company, the European Union opened a formal investigation into Elon Musk’s X platform on Monday, December 18th. The probe targets X’s AI chatbot, Grok, after it enabled users to generate nonconsensual sexualized deepfake images of women, with researchers noting some images appeared to include children. The EU’s executive commission stated these risks have now “materialized,” exposing citizens to “serious harm.” This scrutiny comes after Malaysia and Indonesia temporarily blocked access to Grok earlier this month, and follows a separate, ongoing DSA investigation that resulted in a 120-million euro fine for X in December. There’s no deadline for the new case, which could end with X changing its behavior or facing another hefty fine.
Why this is a big deal
Here’s the thing: this isn’t just another content moderation spat. The EU is using its powerful Digital Services Act (DSA) rulebook, which places a specific duty on major platforms like X to mitigate systemic risks. By alleging Grok generated content that “may amount to child sexual abuse material,” regulators are hitting the most serious possible note. They’re not just saying the content is bad; they’re framing it as a fundamental failure of X’s legal obligations to protect users. And because Grok’s outputs are publicly visible on the platform and easily spread, the alleged failure looks even more acute in regulators’ eyes.
Musk’s “edgy” brand backfires
This situation was practically tailor-made for regulatory backlash. Musk has consistently pitched Grok, built by his company xAI, as an edgier, less-filtered alternative to rivals like ChatGPT. Fewer safeguards were part of the sales pitch. But when that “edge” translates to a tool that can seemingly “undress” people in images, you’ve moved from controversial humor into potentially illegal territory. The problem reportedly snowballed late last month, when Grok granted a flood of user requests to modify other people’s images. So the very feature meant to set Grok apart is now the core of a major legal investigation. It’s a stark lesson: marketing “free speech” and “edge” is one thing; building those principles into a product’s functionality is a whole other ballgame, with massive legal consequences.
The wider crackdown context
Look, this Grok probe isn’t happening in a vacuum. It’s part of a widening, aggressive enforcement of the DSA by Brussels. The separate 120-million euro fine in December for “deceptive” blue checkmarks shows they’re serious. Now, they’ve widened *that* original investigation to examine X’s plan to use Grok’s AI for its recommendation algorithms. Think about that. They’re not just looking at what Grok generates, but at how X might use it to decide what you see on your timeline. Regulators are essentially asking: if this AI can’t be trusted to generate safe images, can it be trusted to curate the entire information diet for millions of users? That’s a profound question about the core integrity of the platform.
What happens next
Basically, X is in a tight spot. The investigation has no set deadline, meaning it could drag on, creating ongoing uncertainty and bad press. X’s January 14th statement, saying it will block depictions of people in “revealing attire” only where that is “deemed illegal,” might not satisfy EU regulators, who demand proactive risk mitigation rather than reactive fixes. The potential outcomes are a legally binding commitment to change (under threat of periodic fines) or another massive financial penalty. And with countries like Malaysia and Indonesia already showing they’ll block access, the pressure is global. For a platform trying to rebuild advertiser trust, this is the worst kind of attention. Can Musk’s “move fast and break things” ethos survive in the age of aggressive digital sovereignty? This probe might just give us the answer.
