
Nobel laureates demand urgent AI 'red lines,' warn of risks

Sep 23, 2025, 09:58 am

What's the story

More than 200 prominent figures, including 10 Nobel laureates and two former heads of state, have called for international action to establish "red lines" for artificial intelligence (AI) development by the end of 2026. The signatories, who also include senior officials from OpenAI, Google DeepMind, and Anthropic, highlight potential dangers such as engineered pandemics and mass unemployment driven by AI advances.

Expert concerns

Godfathers of AI among signatories

The statement responds to growing concern among experts that maintaining meaningful human control over AI systems will become increasingly difficult in the coming years. Among the signatories are Geoffrey Hinton and Yoshua Bengio, both known as "godfathers of AI," economist Joseph Stiglitz, former Colombian president Juan Manuel Santos, former Irish president Mary Robinson, and former Italian prime minister Enrico Letta.

Industry support

AI systems showing harmful behavior

OpenAI co-founder Wojciech Zaremba and DeepMind principal scientist Ian Goodfellow have also signed the statement. The signatories warn that some advanced AI systems have already shown deceptive and harmful behavior, yet these systems are being given more autonomy to take actions and make decisions in the world. They argue that an international agreement on clear and verifiable red lines is necessary to prevent universally unacceptable risks.

Implementation timeline

Red lines should be operational by end of 2026

The signatories stress that these red lines should be operational, with robust enforcement mechanisms, by the end of 2026. However, the statement doesn't specify what these red lines governing AI development should be. A separate statement from last year called for a ban on autonomous replication, power-seeking behavior, autonomous cyberattacks, and sandbagging. Many signatories of Monday's statement also backed this earlier one.

Regulatory challenges

US government's lack of support may hinder progress

Despite the growing global consensus on the need for AI red lines, concrete action is likely to be hindered by a lack of support from the US government. The Trump administration's AI Action Plan supports like-minded nations working together to encourage responsible AI development, but it also criticizes efforts that advocate burdensome regulations or vague "codes of conduct" promoting cultural agendas not aligned with American values.