Guest post

To make AI safe, we must develop it as fast as possible without safeguards

Ia Magenius explains why we need to make AI as powerful as possible to ensure it can't have power over us

October 12, 2025 | By Ia Magenius

We hear a lot in the media about how AI researchers should stop and think before forging ahead with new and powerful models. According to this argument, we should move cautiously and develop regulatory safeguards to manage the potential risks.

I get it. I myself once endorsed a six-month moratorium on AI research (remember that?) to give us time to think through the implications of what we were doing.

But I've now realised that attempting to develop AI safely is the most dangerous thing we could do.

History shows us why.

Take nuclear weapons. Once it was clear that an atomic bomb was possible, the US government rushed to develop one in the belief that they were in direct competition with the Nazis, without thinking through the full implications. But if those scientists had waited to develop it safely, they might never have been able to peacefully end WW2 by killing over 100,000 innocent Japanese people in Hiroshima and Nagasaki.

By pursuing aggressive technological expansion at any cost, the Manhattan Project led to a safer world. Not only that: the subsequent nuclear arms race meant that, within just a few years, humanity possessed thousands of nuclear weapons that could blow up all of us many times over. But once you discount all the near-misses that have almost led to planetary-scale nuclear war and the end of all human life, it's clear that nuclear weapons are actually one of the safest technologies around.

The lesson is clear. To their critics nuclear weapons were a dangerous technology, but in practice pursuing their development actually made the world significantly safer, apart from all the wars that still seem to happen for complicated reasons I don't have space to discuss here.

The risk with AGI is much the same. Developing superintelligence safely is a complex process. It would take time and require difficult discussions — discussions that everyone in society should have a say in, not just the small number of researchers working on it. But if we pursue that path, there's a real risk that somebody else will make AGI first and destroy all human life before we have a chance to ourselves. That would be unacceptable.

To stop bad actors developing AGI that could kill us all, we need good actors to develop equally lethal AGI instead.

I've come to realise that our best hope is to all race at breakneck speed towards this terrifying, thrilling, goal, removing any and all safeguards that risk slowing our progress. Once we've unleashed the technology's full destructive power, we can then adopt a "stable door" approach to its regulation and control — after all, that approach has worked beautifully for previous technologies, from fossil fuels to microplastics.

We stand at a precipice. If we get this right, the lesson of nuclear weapons suggests that we'll be able to create a sufficient number of different AGIs such that they'll hold each other in check and prevent any one of them from murdering all of us. If not, then we'll all die. But how cool would that be?

THIS ARTICLE IS COMING SOON

Tragically, hundreds of AI alignment researchers have been found with their minds literally blown by our work.

We have therefore taken the difficult decision to stagger publication, to inoculate the general public against our genius.

If you subscribe below, we'll let you know when this article is published. As a preventative measure, please immerse your head in a freezer for 10-15 minutes before attempting to read.