Technology Jul 03, 2025

'AI hallucinates': OpenAI CEO's warning on ChatGPT trust

OpenAI CEO Sam Altman is reminding everyone that ChatGPT isn't perfect: sometimes it just makes things up.
On the company podcast, he said, "People have a very high degree of trust in ChatGPT...because AI hallucinates. It should be the tech that you don't trust that much."
He added, "It's not super reliable...we need to be honest about that."


What it means when AI 'hallucinates'

When people say AI "hallucinates," they mean it can give answers that sound real but are actually wrong or made up.
This happens because models like ChatGPT predict the next word from patterns in their training data, not from real understanding, so don't take every answer at face value.
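To make the "patterns, not understanding" point concrete, here is a deliberately tiny, hypothetical sketch in Python. It is nothing like ChatGPT's actual scale or training, and the word counts below are invented, but it shows how a predictor that always picks the statistically most common next word can produce a confident sentence that happens to be false.

```python
# Toy sketch (not OpenAI's actual system): a tiny next-word predictor that
# follows frequency patterns in made-up "training data" with no fact check,
# which is why it can output fluent but false statements.

# Hypothetical, hand-made word-transition counts standing in for learned patterns.
bigram_counts = {
    "the": {"capital": 5, "answer": 3},
    "capital": {"of": 8},
    "of": {"australia": 6, "france": 4},
    "australia": {"is": 7},
    "is": {"sydney": 5, "canberra": 2},  # the wrong continuation happens to be more common
}

def next_word(word):
    """Return the most frequent continuation seen in the data -- no truth check."""
    options = bigram_counts.get(word)
    return max(options, key=options.get) if options else None

def generate(start, max_words=6):
    """Chain the most likely next words together, just like pattern-based prediction."""
    words = [start]
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

# Prints "the capital of australia is sydney" -- fluent, confident, and wrong
# (the capital is Canberra), because the model only mirrors its data patterns.
print(generate("the"))
```

Real models are vastly more sophisticated, but the underlying point Altman is making is the same: the output is shaped by what looks statistically likely, not by a built-in check of what is true.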

Altman's other bold statements

Altman isn't new to bold statements.
He once said he doubts his kids will ever outsmart future AIs (though he still thinks they'll be "better than any previous generation").
He's also open to ads in ChatGPT—as long as they don't mess with your experience.

Who is Sam Altman?

Sam Altman runs OpenAI, the team behind ChatGPT.
He's known for being upfront about both what AI can do and where it falls short—like these recent warnings about reliability and transparency.