Deepfakes, Denials and a Watchful Eye: UK Regulator Keeps X Under Pressure

The U.K. is not going to let this one go. Even as other inquiries quietly fade into bureaucratic limbo, this investigation is sticking.

A British media watchdog said on Thursday that it would press ahead with an investigation of X over the spread of AI-generated deepfake images — despite the platform’s insistence that it’s cracking down on harmful content.

At the center of the dispute are deepfake images, often sexualized and always fabricated, that have proliferated on X. The regulator’s fear is far from hypothetical.

With these images, a reputation can be ruined in minutes, and once they’re out there, containing their spread is all but impossible.

Officials say they need to know whether X’s systems are actually preventing this material or merely reacting once the damage is done.

And that’s a good question, isn’t it? We’ve heard the promises before. The same broader fear of AI becoming a runaway image generator has prompted similar inquiries: Germany is scrutinizing Musk’s Grok chatbot, and Japan has just launched an investigation of its own into the same kind of image-generation risks.

What’s fascinating – perhaps even a bit ironic – is that X’s owner, Elon Musk, has long framed the platform as a defender of free expression.

But regulators are not discussing free speech as an abstraction; they have to contend with harm.

When AI generates fake pornography of real people, most often women, it is no longer a philosophical debate; it’s a public safety issue.

Meanwhile, other countries are already acting on that logic.

Malaysia, for example, recently cut off access to Grok entirely after AI-generated explicit images appeared, a development that sent a shudder through the tech community.

The U.K. investigation also comes at a time when regulators in general are flexing more muscle around AI governance.

Europe is taking a broader tack, with sweeping legislation aimed at holding platforms to account for how AI systems are used and governed.

The direction of travel looks clear when you see the EU’s landmark AI rules being pitched as a template for the rest of the world.

Here’s my hot take, for whatever it’s worth. This inquiry isn’t primarily about X in isolation. It’s about whether tech companies can continue to demand trust while shipping tools that can be misused at scale.

The U.K. regulator appears to be saying, politely but firmly, “Show us it works – or we’ll keep looking.”

And honestly, that feels overdue. Deepfakes are no longer just a future threat. They’re here, they’re messy, and regulators are finally beginning to act like it.