I had a spirited WhatsApp exchange last week that illuminates the gap in understanding of Generative AI at the highest levels in business.
My mate is a top insurance industry executive.
It started when I shared Sam Altman’s recent TED interview, which I described as incredible: a fantastic, nuanced, authentic-feeling performance from the OpenAI CEO.
I suggested he’d learn a lot about what’s coming in AI, but also that we could all learn a little from Altman’s poise.
It was a masterclass.
My friend said, thoughtfully:
“I thought his answer about IP theft was a bit of a swerve – it focussed too much on the creative output, and putting cool stuff in the hands of the creatives but not at all on remunerating rights holders appropriately or even crediting their work adequately”
Fair cop. But I shared back a different perspective, which we’ve made a fundamental principle in our internal AI usage guidelines:
“With AI, YOU are the author… not the AI. You own the output.”
The argument centred on an excellent Charlie Brown cartoon that the interviewer had made with the latest OpenAI image model. It looked great, and it was quite funny and poignant. Just like the originals.
The audience cheered when the interviewer challenged Altman on this ‘IP theft’.
But I take a different view.
With enough skill, anyone could have drawn the cartoon with a pencil and paper, and that’s entirely legal. ChatGPT helps you be more creative. But if you abuse the copyright holder’s rights through inappropriate public use, you should be penalised.
Only slightly tongue in cheek, I retorted:
“We don’t ban pen and paper. Nor do we require BIC to censor what you produce with a biro”
That was when my mate dropped the bombshell:
“but the very nature of a LLM trained on copyrighted material is that it is reusing or copying it – inspiration is a human not a machine trait.”
Wrong.
This got me thinking – how many people share this misconception? So I quickly crafted an article (with the help of AI, of course!) to set the record straight.
Here are the key points I’d hope every industry exec would get by now:
LLMs Are More Like Brains Than Photocopiers
The training process for large language models isn’t about memorizing text — it’s more like spending years in the British Library with a notebook that can’t do copy-and-paste.
The model adjusts billions of numeric parameters as it encounters information, storing distributed patterns rather than exact content.
Sound familiar? That’s because humans do something remarkably similar:
Humans: Strengthen synaptic connections through experience; memories are distributed across neurons
LLMs: Adjust weights through gradient descent; knowledge is distributed across parameters
Both: Reconstruct rather than replay when producing content
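For the technically curious, here is a deliberately tiny sketch in Python of what “adjusting parameters” actually means. It is not a real LLM: the toy corpus, the bigram model, and every number in it are invented purely for illustration. But it shows the key point. Training nudges a grid of numbers with gradient descent, and generation later reconstructs text from those numbers; no copy of the training text is stored anywhere.

```python
import numpy as np

text = "the cat sat on the mat. the dog sat on the log."
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# All the model "remembers" is this V x V grid of floats -- no copy of the text.
W = np.zeros((V, V))

# Training examples: each character paired with the character that follows it.
pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]

lr = 0.5
for _ in range(200):                      # gradient-descent training loop
    grad = np.zeros_like(W)
    for a, b in pairs:
        logits = W[a]
        p = np.exp(logits - logits.max())
        p /= p.sum()                      # softmax over possible next characters
        p[b] -= 1.0                       # gradient of the cross-entropy loss
        grad[a] += p
    W -= lr * grad / len(pairs)           # nudge the numbers -- nothing else changes

# Generation: reconstruct text from the learned patterns, one step at a time.
rng = np.random.default_rng(0)
out = [idx["t"]]
for _ in range(40):
    logits = W[out[-1]]
    p = np.exp(logits - logits.max())
    p /= p.sum()
    out.append(int(rng.choice(V, p=p)))
print("".join(chars[i] for i in out))
```

Scale that grid up to billions of parameters and a transformer architecture and you have, in spirit, an LLM: patterns stored as numbers, not a filing cabinet of documents.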
The Copy-Paste Myth: What The Research Actually Shows
Studies have found that GPT models rarely reproduce long spans of training text word for word. Even with aggressive methods designed to extract memorized content, researchers typically recover only a few hundred passages out of the billions in the training corpus.
Memorization does become more common as models get bigger, but the principle is the same.
And it’s not copy/paste!
Like humans who occasionally recite memorized poems or lyrics, LLMs can sometimes reproduce text verbatim — but that’s far from their default behavior.
They’re creating novel outputs that never existed before, based on the patterns in language, ideas, and human knowledge that they encountered during training.
And it’s the user’s input that drives the model to create the output.
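For those wondering how researchers actually test this, here is a rough Python sketch of the idea behind those extraction studies: take model output and check whether any long run of words appears verbatim in the training data. The toy corpus, the sample output, and the eight-word threshold are chosen just for this example; real studies use vastly larger corpora and far more efficient lookups, but the spirit of the test is the same.

```python
# A rough sketch of what "extraction" studies measure: slide a window over the
# model's output and ask whether any long run of words appears verbatim in the
# training text. Corpus, output, and threshold are toy values for illustration.
def verbatim_spans(output: str, corpus: str, n: int = 8):
    out_words = output.split()
    hits = []
    for i in range(len(out_words) - n + 1):
        span = " ".join(out_words[i:i + n])
        if span in corpus:                 # crude substring check, fine for a sketch
            hits.append(span)
    return hits

corpus = ("it was the best of times it was the worst of times "
          "it was the age of wisdom it was the age of foolishness")
output = "the model wrote that it was the best of times it was the worst of times indeed"

matches = verbatim_spans(output, corpus)
print(f"verbatim 8-word spans found: {len(matches)}")
for m in matches:
    print("-", m)
```

If a model only trips this kind of check for a tiny fraction of its output, that is evidence of reconstruction from patterns, not wholesale copying.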
Understanding how these models actually work changes everything about how we harness them:
1. They’re closer to “inspired” co-authors than photocopiers
2. The space of possible recombinations is astronomically large
3. The user’s prompt, curation, and publishing decisions represent true authorship
As we say in our guidelines:
Respect the input. Own the output.
And if the output infringes copyright or other IP, don’t use it!
Kerry
PS: If you want a more regular dose of insights, follow or connect with me on LinkedIn for posts on conversational AI, mindset, and egg juggling, among other things!
PPS: You are building with GenAI right now, aren’t you? If not, what’s stopping you? Check out our blog on Gen-AI blockers, or sign up for a complimentary Strategy Workshop to help you get started.
If someone forwarded this to you, please subscribe yourself for weekly insights that’ll make you think differently about your IVR, voice, and chatbots.
Helping you get maximum ROI from conversational AI — whatever the platform.