OpenAI’s new GPT-5 has learned to say three magic words: “I don’t know.” It was also trained to stop flattering you and start giving you facts.
In every conversation about AI, you hear the same refrain: “Yeah, it’s amazing,” quickly followed by, “but it makes stuff up” and “you can’t really trust it.” Even among the most dedicated AI enthusiasts, these complaints are legion.
During my recent trip to Greece, a friend who uses ChatGPT to help her draft public contracts put it perfectly. “I like it, but it never says ‘I don’t know.’ It just makes you think it knows,” she told me. I asked her if the problem might be her prompts. “No,” she replied firmly. “It doesn’t know how to say ‘I don’t know.’ It just invents an answer for you.” She shook her head, frustrated that she was paying for a subscription that wasn’t delivering on its fundamental promise. For her, the chatbot was the one getting it wrong every time, proof that it couldn’t be trusted.
It seems OpenAI has been listening to my friend and millions of other users. The company, led by Sam Altman, has just launched its brand-new model, GPT-5, and while it’s a significant improvement over its predecessor, its most important new feature might just be humility.
As expected, OpenAI’s blog post heaps praise on its new creation: “Our smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone’s hands.” And yes, GPT-5 is breaking new performance records in math, coding, writing, and health.
But what’s truly noteworthy is that GPT-5 is being presented as humble. This is perhaps the most critical upgrade of all.