Not a Human — AI: California Forces Chatbots to Spill the Beans

California has officially told chatbots to come clean.

Starting in 2026, any conversational AI that could be mistaken for a person will have to clearly disclose that it’s not human, thanks to a new law signed this week by Governor Gavin Newsom.

The measure, Senate Bill 243, is the first of its kind in the U.S.—a move that some are calling a milestone for AI transparency.

The law sounds simple enough: if your chatbot might fool someone into thinking it’s a real person, it has to fess up. But the details run deep.

It also introduces new safety requirements for kids, mandating that AI systems remind minors at least every three hours that they're chatting with an artificial entity and should take a break.

In addition, companies will need to report every year to the state’s Office of Suicide Prevention on how their bots respond to self-harm disclosures.

It’s a sharp pivot from the anything-goes AI landscape of just a year ago, and it reflects a growing global anxiety about AI’s emotional influence on users.

You’d think this was inevitable, right? After all, we’ve reached a point where people are forming relationships with chatbots, sometimes even romantic ones.

The difference between “empathetic assistant” and “deceptive illusion” has become razor-thin.

That’s why the new rule also bans bots from posing as doctors or therapists—no more AI Dr. Phil moments.

In announcing the signing, the governor's office emphasized that the bill is part of a broader effort to protect Californians from manipulative or misleading AI behavior, a stance outlined in the state's wider digital safety initiative.

There’s another layer here that fascinates me: the idea of “truth in interaction.” A chatbot that admits “I’m an AI” might sound trivial, but it changes the psychological dynamic.

Suddenly, the illusion cracks—and maybe that’s the point. It echoes California’s broader trend toward accountability.

Earlier this month, lawmakers also passed a rule that requires companies to clearly label AI-generated content, an expansion of the state's earlier AI transparency law aimed at curbing deepfakes and disinformation.

Still, there’s tension brewing under the surface. Tech leaders fear a regulatory patchwork—different states, different rules, all demanding different disclosures.

It’s easy to imagine developers toggling “AI disclosure modes” depending on location.
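As a thought experiment, here is roughly what such a toggle might look like. This is a minimal, hypothetical sketch: the policy table, the names like `DisclosurePolicy` and `requires_ai_disclosure`, and the intervals are my own illustrative assumptions, not anything drawn from the bill's text or from a real product.

```python
# Hypothetical sketch of a jurisdiction-based "AI disclosure mode" toggle.
# All names, regions, and intervals are placeholders, not legal guidance.

from dataclasses import dataclass

@dataclass
class DisclosurePolicy:
    requires_ai_disclosure: bool        # must state up front that the bot is not human
    minor_reminder_hours: int | None    # periodic reminder interval for minors, if any

# Assumed per-jurisdiction settings; values are illustrative only.
POLICIES = {
    "US-CA": DisclosurePolicy(requires_ai_disclosure=True, minor_reminder_hours=3),
    "EU": DisclosurePolicy(requires_ai_disclosure=True, minor_reminder_hours=None),
    "DEFAULT": DisclosurePolicy(requires_ai_disclosure=False, minor_reminder_hours=None),
}

def opening_message(region: str, is_minor: bool) -> str:
    """Prefix the first reply with an AI disclosure where the local policy demands it."""
    policy = POLICIES.get(region, POLICIES["DEFAULT"])
    parts = []
    if policy.requires_ai_disclosure:
        parts.append("Heads up: I'm an AI assistant, not a human.")
    if is_minor and policy.minor_reminder_hours:
        parts.append(f"I'll remind you of this every {policy.minor_reminder_hours} hours.")
    parts.append("How can I help?")
    return " ".join(parts)

print(opening_message("US-CA", is_minor=True))
```

The point of the sketch is less the code than the pattern it implies: once disclosure depends on where the user happens to be, honesty becomes a configuration setting rather than a default.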

Legal experts are already speculating that enforcement could get murky, since the law hinges on whether a “reasonable person” might be misled.

And who defines “reasonable” when AI is rewriting the norms of human-machine conversation?

The law’s author, Senator Steve Padilla, insists it’s about drawing boundaries, not stifling innovation. And to be fair, California isn’t alone.

Europe's AI Act already requires similar disclosures when people interact with AI systems, while India's new framework for labeling AI-generated content hints that global momentum is building.

The difference is tone—California’s approach feels personal, like it’s protecting relationships, not just data.

But here’s the thing I keep coming back to: this law is as much philosophical as it is technical. It’s about honesty in a world where machines are getting too good at pretending.

And maybe, in an age of perfectly written emails, flawless selfies, and AI companions that never tire, we actually need a law that reminds us what’s real—and what’s just really well-coded.

So yeah, California’s new rule might seem small at first glance.

But look closer, and you’ll see the start of a social contract between humans and machines. One that says, “If you’re going to talk to me, at least tell me who—or what—you are.”