By Kerry Robinson

Talk shapes output.

Most people treat prompting like a vending machine: press button, get snack.

But LLMs aren’t mechanical. They’re behavioral – they pick up on tone, structure, and phrasing.

Kerry Robinson put it well in his recent Substack post on why we often say please and thank you to AI, and why that’s a good thing!

When we talk to AI, we naturally treat it like a social partner, and it responds accordingly. Models adapt to tone, pacing, and structure, even when you don’t intend them to.

Say something short and abrupt, and you’ll get something short and unhelpful back. Ask clearly, with direction and a little warmth, and suddenly the model starts cooperating. It fills in gaps. It follows your lead.

Why? Because these models are trained on dialog. They simulate people responding to social cues. If someone barks a command at you, you’re probably less likely to want to help out. If they ask in a friendly tone and give you specific instructions, you’ll be more helpful.

Your input is also a role assignment. Every time you ask for help, you’re asking the model to take on a persona: a marketing expert, a data analyst, an IT specialist, or whatever else you fancy.
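To make that concrete, here’s a minimal sketch in Python – assuming the OpenAI SDK, with a placeholder model name and example prompts – contrasting an abrupt ask with one that assigns a persona and sets the tone:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Abrupt, context-free ask: the model has to guess the role, audience, and depth.
abrupt = "fix my email"

# Same request with a persona, direction, and a little warmth.
framed = (
    "You're an experienced marketing copywriter. "
    "Please tighten up the email below for a B2B audience: "
    "keep it under 120 words, friendly but direct, and end with a "
    "clear call to action.\n\n<paste email here>"
)

for prompt in (abrupt, framed):
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works; this one is just an example
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

Same underlying request, but the second version tells the model who it should be and what “good” looks like, so it doesn’t have to guess.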

Give it rushed, scattered commands and you’re inviting confusion. And as Kerry points out, even subtle phrasing changes can make a measurable difference. One study showed polite prompts improved output quality by nearly 9%.

Treat it like a junior collaborator who wants to get it right and you’ll get better work (faster). Check out Kerry’s guide, Simple prompting for smart people, where he lays out a prompting framework you can use to nail this every time.

You don’t need to be polite. But you do need to be intentional.

Set the tone. Let the model match it.

Damian

PS: If you’re new here, this newsletter brings you the best from Waterfield Tech experts at the frontier of AI, CX, and IT. Also, Kerry posts weekly at The Dualist, and Fish and Dan share their thoughts every other week at Outside Shot and Daichotome.

 

Here’s what went down this week. 

Bleeding Edge

Early signals you should keep on your radar.

Anthropic used over 150,000 books to train its AI, raising questions about copyright and data provenance. A judge ruled that the training didn’t violate copyright law, a major step in testing how fair use applies to AI.

Denmark plans to grant citizens copyright over their own faces and voices. The proposed law aims to combat the misuse of AI-generated deepfakes by giving individuals legal control over their likeness, setting a precedent in digital identity rights.

Leading Edge

Proven moves you can copy today.

RingCentral launches an AI receptionist to handle calls. It screens, routes, and answers FAQs – cutting wait times and freeing up human agents. It’s a quiet step toward smarter, leaner front desks.

Penske uses AI to cut truck maintenance costs. It predicts failures before they happen… evidence that AI wins don’t need to be flashy to be valuable. It’s also yet another proof point for preemptive CX, which we talked about the other day.

Off the Ledge

Hype and headaches we’re steering clear of.

Researchers tricked AI chatbots into giving fake health advice with made-up citations. Right… it’s easy to make – or let – an AI do the wrong thing. That’s why extensive, ongoing monitoring and maintenance of Gen AI apps is not a luxury; it’s a necessity.

A Harvard study found AI can be just as irrational as humans (or worse). Models in the study shifted their views on controversial subjects after writing an essay aligned with one side of an argument or the other. Just like humans do. A reminder that these models have learned the best – and worst – of human traits.