
We're pivoting to reportless reporting

Our latest research finds that no one reads our research, including the research about no one reading our research. So we're pivoting to a new model of reportless reporting.

October 11, 2025

We're committed to continuous assessment of our own research practices to ensure we're efficiently burning through large piles of money.

With this in mind, we recently published a research report which found that no one reads our research reports — including our research into no one reading our reports.

We’ve therefore decided to pivot to a new model of reportless reporting, which we believe will streamline our research outputs and help us cut through the noise.

We understand that this change may not be completely intuitive to begin with, so here are a few examples of how our new reportless reporting will work in practice.

Condensing the message

We previously published a report with the following conclusion:

Our expectation is that in 5-7 years, AGI capabilities, backed by 1,000x compute vs. present-day levels, will be advanced enough to replace most human workers and increasingly be operationalized in civil-control contexts.

Now, under our reportless reporting model, we will simply open the window and scream:

FUUUUUUUUUUUUUUCK.

'Need-to-know' distribution

In a recent report, we concluded:

We consider there to be an 80-90% likelihood that future AI will have strong synergies with defence technologies, offering further opportunities for the enhancement of drone-based surveillance and precision-based elimination of 'pre-criminal' civilians.

Under reportless reporting, we would first print out this report before putting it through a shredder, and then we would burn it, burn it to ashes to bury deep under the cold, cold earth and pretend it never existed, returning to the office to nervously drum our fingers on the desk while we stare out the window and repress the forbidden thoughts that our funders don't want to hear, preparing instead to talk publicly about how AI will help us detect cancer faster or some other optimistic bullshit.

Actionable intelligence

The primary conclusion of our deep-dive ‘AI Alignment: 2025 Landscape’ report, drawing on months of technical analysis and interviews, was that frontier AI labs are still not doing enough to evaluate the safety of their new models. However, our report did not have the desired impact, with limited uptake of our recommendations.

In fact, the few labs that actually read the report saw that their competitors weren't bothering with safety evaluations at all and so terminated their own safety programmes, meaning that our report made AI significantly less safe.

Under reportless reporting, we would instead start making discreet enquiries about purchasing remote properties in New Zealand.


Thank you for your patience as we transition to this new model. If you have any feedback, please note that since our recent transition to feedbackless feedback, you can go fuck yourself.
