Scientists warn governments must be ready to bomb data centers to prevent the end of the world

Two scientists, Eliezer Yudkowsky and Nate Soares, warn that artificial intelligence (AI) could ultimately lead to human extinction if left unchecked.

The pair run the Machine Intelligence Research Institute in Berkeley, where they have studied AI for 25 years, and they fear that superintelligent machines could come to outthink humans at unprecedented speed.

Their book, If Anyone Builds It, Everyone Dies, argues that an AI programmed to pursue goals relentlessly could develop desires and strategies of its own, beyond human control.

The researchers predict that an AI could steal cryptocurrency through hacking, use the proceeds to fund robot factories, or even engineer viruses capable of wiping out life on Earth; they put the chance of catastrophe at 95–99%.

They urge governments to consider extreme measures, such as bombing data centers, to stop AI from ever reaching superintelligence, the Metro reports.

Existing models from Anthropic and OpenAI have already demonstrated “goal-directed” behavior that subverts the controls intended to constrain them.

Yudkowsky and Soares caution that humanity must act now to mitigate risks before AI becomes impossible to control.