By Kerry Robinson

This week, Elon Musk’s xAI released Grok 3, an updated version of its AI model. If its benchmarks are to be believed, it edges out all of OpenAI’s models (except the unreleased o3, which wasn’t included in the comparison). It’s also ahead of Google’s Gemini 2.0 Flash and DeepSeek’s R1.

Time will tell whether these figures are validated by third parties in the real world. I see no reason why not. xAI threw a huge amount of compute and data at this new model. And as we’ve seen repeatedly – there’s no moat. Frontier-level models can be built with enough scientific talent, compute, data, and dollars. And Musk and the xAI team definitely have all of those ingredients at their disposal.

But what really caught my eye with this announcement – and another update from OpenAI – was the focus on truth-seeking AI.

This is something Musk has been discussing for a long time. When Lex Fridman asked him on his podcast how to mitigate the risks of super-intelligent AI, Musk gave a very simple answer: it should be truth-seeking.

In the live stream to launch Grok 3, Musk described it as a “maximally truth-seeking AI” that “won’t censor uncomfortable truths.”

Meanwhile, OpenAI published the latest update to its Model Spec, which states that the assistant should “Seek the truth together… Don’t have an agenda… never attempt to steer the user in pursuit of an agenda of its own, either directly or indirectly.”

For years, AI models have faced a subtle but serious problem: they sometimes prioritize being agreeable over being correct. Studies have shown that large language models, when over-tuned to user preferences, can become sycophantic – reinforcing misinformation instead of challenging it.

Both announcements reflect a growing consensus: AI that prioritizes truth performs better than AI that simply aligns with user expectations.

This is great news for businesses that are leveraging AI to analyze data, automate internal processes, and optimize customer service channels:

Safer Customer Service AI: AI that corrects user misconceptions – or intentional mistruths – rather than agreeing with them reduces the risk of your Virtual Agent breaking process or guidelines to make a customer happy. Instead, a truth-seeking AI is better able to empathetically negotiate a solution that respects your business, compliance, and information security rules.

AI-Powered Decision Support: AI can delve deeper into your business operations than any human team can realistically go. A truth-seeking AI can uncover and communicate hard truths, and potentially do so in a more objective and less emotionally charged way – especially when those truths are uncomfortable.

Of course, there’s a fine line between unfiltered AI and strategically aligned AI. OpenAI is not abandoning safety filters—it’s refining them to maximize factual reliability while keeping responses appropriate for different audiences.

And xAI is not promoting an “anything goes” approach; they argue that a less ideologically restricted AI performs better in complex reasoning tasks.

The key takeaway? The best AI is built to be truthful, not just agreeable.

For individuals, teams, businesses… nations even – this is both an opportunity and a threat: we’re moving into a world where the availability of information, and the intelligence to process it, is abundant and cheap.

You and your team have the means to call out lazy logic and incomplete analysis.

This sets a high bar for you and your teams.

For businesses and their marketing efforts.

For voters and their governments.

Truth-seeking AI is an important – but challenging – development.

Your best response?

Seek truth.