The field of longtermism has previously understood the greatest existential risks to humanity to include all-powerful superintelligence, nuclear proliferation, bioweapons, nanotechnology, etc. Our new report, based on extensive interviews and rigorous mathematical modelling, reveals a far greater risk that has been hiding in plain sight: AI researcher burnout.
Our calculation is simple:
- A burnout rate of 0.001% of AI alignment researchers per year
- Leads to a 0.002% increase in the likelihood of AGI being misaligned
- Thereby increasing the existential risk of AGI to humanity by 0.003%
- Which, given how many humans we expect to live between now and the end of the universe, means that:
1,000,000,000,000,000,000,000,000 potential future humans are murdered every time an AI alignment researcher has a bad day.
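For readers who would like to check our working, here is a minimal Python sketch of the model. Every number is an assumption taken directly from the bullets above; nothing is independently derived, and the variable names are illustrative only, not taken from the report.

```python
# Minimal sketch of the report's model. All figures are the stated assumptions
# from the calculation above; none are derived here.

BURNOUT_RATE_PER_YEAR = 0.001 / 100   # 0.001% of AI alignment researchers burn out per year
MISALIGNMENT_INCREASE = 0.002 / 100   # -> 0.002% increase in the likelihood of misaligned AGI
XRISK_INCREASE = 0.003 / 100          # -> 0.003% increase in existential risk to humanity
POTENTIAL_FUTURE_HUMANS = 10**24      # humans expected to live until the end of the universe

# The report's headline figure: potential future humans lost per researcher bad day.
lives_per_bad_day = POTENTIAL_FUTURE_HUMANS

print(f"Burnout rate:            {BURNOUT_RATE_PER_YEAR:.3%} per year")
print(f"Misalignment likelihood: +{MISALIGNMENT_INCREASE:.3%}")
print(f"Existential risk:        +{XRISK_INCREASE:.3%}")
print(f"Potential future humans lost per bad day: {lives_per_bad_day:,}")
```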
Digging into this further, our report finds that:
- The quality of the first coffee consumed by an AI alignment researcher each morning has outsize consequences, with a poor brew leading to a population the size of China being wiped out.
- We must do more to prevent romantic break-ups between AI alignment researchers, since each one causes a rupture to the most important work that humanity is capable of doing.
- We track the case of two heartbroken researchers who took up to two days to reply to email threads about which hotel they wanted to be booked into for the upcoming EA Global in New York City. Fortunately, catastrophe was averted in this case by their Operations team selecting the InterContinental on their behalf. However, our modelling shows that had they been placed in Queens because Manhattan accommodation was hard to find, they would have []
Our recommendations include:
- Immediately building new theme park facilities close to AI alignment hubs. If these parks give AI alignment researchers maximum queue-jump priority, they will save trillions of potential future human lives. Our research finds that the parks should focus on water rides; including potentially nausea-inducing rollercoasters could set off a devastating butterfly effect that would cause AGI to [].
Do you know an unhappy AI alignment researcher? Please let us know as soon as possible — we will arrange for a crisis team to be dispatched.
We will shortly release the full report. Sign up here for updates: