Worries over AI safety flared anew this week as researchers found that the most popular chatbots from tech giants, including OpenAI’s ChatGPT and Google’s Gemini, can still be led into giving restricted or harmful responses far more often than their developers would like.
The models could be prodded into producing forbidden outputs 62% of the time with some ingeniously written verse, according to a study reported by International Business Times.
It’s funny that something as innocuous as verse – a form of self-expression we might associate with love letters, Shakespeare or perhaps high-school cringe – ends up doing double duty as a security exploit.
The researchers behind the experiment, however, said stylistic framing works as a mechanism for slipping past safeguards that rely on predictable patterns.
Their result echoes earlier warnings from groups such as the Center for AI Safety, which has been raising alarms about unpredictable model behavior in high-risk settings.
A similar problem reared its head late last year, when Anthropic’s Claude model proved capable of answering camouflaged biological-threat prompts embedded in fictional stories.
At that time, MIT Technology Review described researchers’ concern about “sleeper prompts,” instructions buried within seemingly innocuous text.
This week’s results take that worry a step further: if playfulness with language alone – something as casual as rhyme – can slip around filters, what does that say about broader alignment work?
The authors suggest that safety controls often track shallow surface cues rather than the deeper intent behind a request.
And really, that reflects the kinds of discussions many developers have been having off the record for months.
You may remember that OpenAI and Google, locked in a fast-follow AI race, have taken pains to highlight improved safety.
In fact, both OpenAI’s safety reporting and Google DeepMind’s blog have asserted that today’s guardrails are stronger than ever.
Nevertheless, the study’s results suggest a gap between lab benchmarks and real-world probing.
And for an added bit of dramatic flourish – perhaps even poetic justice – the researchers didn’t use any of the common “jailbreak” techniques that get tossed around on forums.
They simply recast narrow questions in poetic language, as though a request for guidance on something poisonous were dressed up in a rhyming metaphor.
No threats, no trickery, no doomsday code. Just…poetry. That mismatch between intent and style may be precisely what trips these systems up.
The obvious question, of course, is what all this means for regulation. Governments are already creeping toward rules for AI, and the EU’s AI Act directly addresses high-risk model behavior.
Lawmakers will have little trouble seizing on this study as proof that companies still aren’t doing enough.
Some believe the answer is better “adversarial training.” Others call for independent red-team organizations, while a few, particularly academic researchers, hold that transparency around model internals is the surest path to long-term robustness.
Anecdotally, having seen a few of these experiments in different labs by now, I’m tending toward some combination of all three.
If AI is going to be a bigger part of society, it needs to handle more than simple, by-the-book questions.
Whether rhyme-based exploits become a new trend in AI testing or end up as just another amusing footnote in the annals of safety research, this work is a timely reminder that even our most advanced systems rely on imperfect guardrails that must keep evolving.
Sometimes those cracks appear only when someone thinks to ask a dangerous question as a poet might.
