By Kerry Robinson

I remember when I first played with a ‘word embedding’ model.

You gave it a word, and got back a list of numbers. The embedding.

But you could do math with those numbers, too.

Take the numbers for King, subtract the numbers for Man, add the numbers for Woman, and you’d get a result that was very close to the numbers for Queen.

King – Man + Woman ≈ Queen

Wow!
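
If you fancy trying this yourself, here’s a rough Python sketch. I’m assuming the gensim library and one of its small, freely downloadable GloVe models; any decent word-embedding model will show the same effect:

```python
# A minimal sketch of word-vector arithmetic, assuming the gensim library
# and one of its small pretrained GloVe models (the vectors are lowercase).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained model, ~66 MB

# king - man + woman ≈ ?
result = vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # 'queen' typically comes out on top
```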

I didn’t know it then, but that was my first contact with what’s now known as ‘latent space’.

Latent space is the term we use to describe the ‘mind of the model’ when it comes to modern AI.

It’s a crazily complex multidimensional landscape. Imagine taking a dictionary and twisting and turning it, folding and stretching it, until similar words end up next to each other.

Then imagine it’s not just a dictionary but all of human knowledge. 

Imagine the whole of the internet as a napkin. Then fold it and squish it and stretch it until similar concepts are nearby. That’s basically what we’re doing when we train a Large Language Model like ChatGPT.

Inside this strange landscape there’s a place where Zen Buddhism intersects with Shakespeare’s Romeo and Nelson Mandela’s Long Walk to Freedom.

Where the subtle differences between Ochre and Red are distinguished. 

Where different notions of freedom are teased apart.

When we interact with an AI model, we’re literally taking a walk through this landscape. 

We’re exploring latent space.

I took the students of our Personal AI empowerment course through these concepts recently. It was fun to bring them to life and apply them.

One of the exercises we did was to pick three famous people, and use ChatGPT to explore a specific subject with them. My chosen topic was the mystery of consciousness, and I decided to speak with Einstein, Nelson Mandela and Peppa Pig.

It sounds crazy, but it works. Try it yourself. Pick a subject of interest, and choose some interesting people to discuss it with. Here’s how I started my discussion. This is the exact prompt I put into ChatGPT:

I want to explore the relationship between intelligence, consciousness and free will. I’m going to invite 3 interesting people to a fictional dinner party and have a conversation with them. We are all milling around getting to know each other. I bring a fresh drink to Albert Einstein and I ask him: how should I think about intelligence, consciousness and free will? You play the role of Albert. What might he say?

There are a couple of tricks here: we set up a context, note it’s fictional, and ask the model to play a role. 
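
If you’d rather script the experiment than type into the chat window, the same prompt works through an API. Here’s a rough sketch using OpenAI’s Python SDK (the model name is just an example, and you’ll need your own API key):

```python
# A sketch of the dinner-party prompt sent via OpenAI's Python SDK.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name is an example, not a requirement.
from openai import OpenAI

client = OpenAI()

prompt = (
    "I want to explore the relationship between intelligence, consciousness "
    "and free will. I'm going to invite 3 interesting people to a fictional "
    "dinner party and have a conversation with them. We are all milling around "
    "getting to know each other. I bring a fresh drink to Albert Einstein and "
    "I ask him: how should I think about intelligence, consciousness and free "
    "will? You play the role of Albert. What might he say?"
)

messages = [{"role": "user", "content": prompt}]
response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```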

“Einstein’s” response was illuminating:

“Now, consciousness–this is a mystery that even our best science struggles to illuminate. It is the awareness of self, of being an observer in this grand universe. Some believe consciousness is an emergent property of the brain, a complex interplay of neurons. But I am not so sure we will explain it so simply. The great physicist Schrödinger, for example, believed consciousness was fundamental, not derived from matter but something more intrinsic to reality itself. Who is to say he was wrong?”

“Einstein” continues to wax lyrical on consciousness and free will. Then I introduce Mandela. Here’s the prompt I used:

It turns out Nelson Mandela was right next to Einstein and heard everything he said. He chips in… answering the question as if it was posed to him. You play Mandela. He’s in comfortable, private company, so he reveals his inner truth. What might he say?

Again, a couple of tricks here… I made it clear Mandela heard everything, and gave the model an excuse to riff on Mandela’s ‘inner truth’. By asking ‘what might he say’, I gave the model room to make stuff up.
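
(If you’re scripting this with the API sketch above, continuing the conversation is just a matter of appending the model’s reply and your next prompt to the message list:)

```python
# Continuing the dinner party: keep the history, then add the next prompt.
# Builds on the 'messages' and 'response' variables from the earlier sketch.
messages.append({"role": "assistant", "content": response.choices[0].message.content})
messages.append({
    "role": "user",
    "content": (
        "It turns out Nelson Mandela was right next to Einstein and heard "
        "everything he said. He chips in... answering the question as if it "
        "was posed to him. You play Mandela. He's in comfortable, private "
        "company, so he reveals his inner truth. What might he say?"
    ),
})
response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```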

“You say, my friend, that we must live as though we have free will, even if the universe suggests otherwise. But I say, it does not matter what the universe suggests. Free will is not given–it is taken. It is claimed in the heart of every man who chooses to stand when he could kneel, to love when he could hate, to build when he could destroy.”

After some more intriguing perspectives, I invite Peppa Pig to the conversation. She nails it. Obviously:

“Maybe that’s what makes us special! Maybe being smart means asking good questions, being conscious means knowing everything is amazing, and having free will means making up new choices that nobody else thought of before!”

This isn’t just a fun exercise. It’s a powerful tool. Give it a go.

By taking things to an extreme, you’ll realise just how wide the range of possible responses from a model can be.

You’ll realise how much your input affects the output.

How just a few words can totally change the responses you get.

But also how much we’re in control. You choose how to shape the input. And therefore the output.

Getting a feel for latent space means you stop thinking about an AI model as a question-answering machine, and start seeing it as a way to explore a latent space of ideas.

You take responsibility for the inputs and the outputs. 

You’re exploring possibilities. Not just expecting answers.

So, who are you going to invite to your conversation?

Kerry

PS: If you want a more regular dose of insights, follow or connect with me on LinkedIn for posts on conversational AI, mindset, and egg juggling, among other things!

PPS: You are building with GenAI right now, aren’t you? If not, what’s stopping you? Check out our blog on Gen-AI blockers, or sign up for a complimentary Strategy Workshop to help you get started.

If someone forwarded this to you, please subscribe yourself for weekly insights that’ll make you think differently about your IVR, voice, and chatbots.

Helping you get maximum ROI from conversational AI — whatever the platform.