There’s been a bunch of releases and announcements this week: Anthropic released Claude 3.7 – a strong model that now combines immediate responses with ‘deeper thinking’ when it thinks it’s warranted.
They beat OpenAI to deploy this approach. Sam Altman shared that he ‘hates’ the model picker, and after their next model release – it sounds like GPT-4.5 is around the corner – they’ll start combining all their models so that the AI decides the best way to handle a query.
And I finally got hold of Grok 3 from xAI, which I’ve been using from inside the X app. I’ve been impressed by the user interface, the model’s speed, and how elegantly it leverages web sources and X posts. What really caught my eye, though, is how deftly it handled some thorny political discussions, questions, and challenges.
Its answer to my query, and my subsequent poking and provoking about the context and impact of the appointment of Kash Patel as Director of the FBI, was quite extraordinary. It was balanced without ducking the issues. It went deep when pushed. And it never failed to make a point or take a stance, for and against every side of the debate.
To me this was a great example of the power of ‘truth seeking AI’, which I discussed in last week’s email. In the past, AIs would have dodged the question.
I’ve talked before about how this world needs quality conversations. And how I learned to leverage AI to explore my biases and understand others’ points of view. Most of our differences come from failing to understand the context and motivations of those we disagree with.
From assuming right and wrong is easy to decide.
From failing to carefully understand, and thoughtfully challenge others’ logic and assumptions, and our own.
The quality of that conversation with Grok 3 gave me new hope for the future of AI and our local, national, and global discourse.
Humanity’s first major encounter with AI was via the algorithms that drove our social media feeds. Those algorithms didn’t seek truth. They sought to hack our minds and emotions to maximize engagement and ad revenue.
That drove us apart.
But our latest AI tools have the potential to augment us in a much more positive way. To hack our minds and emotions for good.
To help us notice our biases.
Challenge our assumptions.
Seek truth.
I couldn’t be more excited – and positive – about what that means for human thriving in the ‘Intelligence Age’ – as Sam Altman has called this era.
Thriving in the Intelligence Age just happens to be the name of the new Personal AI empowerment course we’re running at Waterfield Tech. Because we’ve realized that there are two sides to the AI-first contact center and AI-first business transformation.
We’ve focused a lot on how businesses can leverage AI to deliver a better customer experience, and more effective and efficient customer services.
But we’ve been relatively quiet on how the individuals within a business can drive that change. How we empower individuals to be the best they can be – with AI – and, through that approach, achieve better and faster adoption of AI, and even crowdsource the ideas, the AI prompts, and the process adjustments that will help us become an AI-first business.
I’m excited to delve into this new area and look forward to sharing what we learn with you.
Kerry
PS: If you want a more regular dose of insights, follow or connect with me on LinkedIn for regular posts on conversational AI, mindset, and egg juggling, among other things!
If someone forwarded this to you, please subscribe yourself for weekly insights that’ll make you think differently about your IVR, voice, and chatbots.
Helping you get maximum ROI from conversational AI — whatever the platform.