AI CEO thinks only Chernobyl-level disaster can spur action
Technology
At a recent summit, UC Berkeley's Stuart Russell shared that a leading AI company CEO privately thinks only a disaster as big as Chernobyl could finally push governments to take AI risks seriously.
The 1986 nuclear accident caused widespread health and economic damage, making the comparison a stark one.
AI leaders acknowledge risk but keep building anyway
Russell said many AI leaders privately acknowledge serious existential risk — he specifically cited Anthropic's CEO putting it at roughly 25% — yet they keep building anyway, fearful of falling behind in the race for smarter machines.
He accused governments of a "total dereliction of duty" for letting private firms gamble with everyone's future.
According to Russell, only Anthropic's CEO has openly said the industry should consider pausing development for safety's sake.