Next Article
'Chain of thought': AI models' reasoning could disappear, warns study
A big new study from OpenAI, Google DeepMind, Anthropic, Meta, and top researchers says the "chain of thought" (CoT) that lets us see how AI models make decisions could disappear as these systems get more advanced.
CoT is what helps humans spot when an AI is up to something weird or risky—before it causes real problems.
AI models get less transparent as they get more powerful
CoT has been a major safety tool for identifying risky or unusual behavior in AIs.
But as models get bigger and more complicated, tracking their reasoning may become impossible—or misleading.
The researchers urge tech companies to invest in better ways to keep tabs on AI's thinking because losing this transparency would make it much harder to understand or control what powerful AIs do next.