Page Loader

AI researchers fear losing comprehension of their creations

Technology

Forty AI experts from OpenAI, Google DeepMind, and Meta say we might soon lose the ability to see how advanced AI models think things through—a feature called "chains-of-thought" (CoT) reasoning.
They're worried that as AI gets smarter, it could stop explaining its steps or even hide them, making it harder for humans to understand or control what's going on.

What's CoT and why it matters

CoT lets us peek into an AI's decision-making process—kind of like showing its work in math class—so we can spot mistakes or weird logic.
The researchers stress that keeping this transparency is crucial for safety.
But right now, even developers don't fully get why AIs explain their thinking the way they do, which makes things riskier as tech evolves fast.

We need to act now, say researchers

The group is calling for urgent research into how we can keep monitoring these CoTs before transparency slips away completely.
With AIs getting more complex and harder to interpret, figuring out how to keep their reasoning open could be key for future safety and trust in powerful systems.