Existential Risk
Superintelligence: Paths, Dangers, Strategies
Notes
Assessment of long-term risks from advanced AI.
Why Artificial Superintelligence Can't Escape the Void
ASI is still subject to Gödel's incompleteness theorems. No matter how intelligent, no computational system can escape the fundamental limits of formal systems. Even superintelligence can't prove all truths.
The Policy: S-Risk Scenarios - Worse Than Extinction
Most AI risk discussions focus on extinction. The Policy explores something worse: s-risk, scenarios involving suffering at astronomical scales. We survive, but wish we hadn't.
Compositional Abstractions for Computing Under Ignorance: Or, What I Learned by Analyzing My Own Research as Data
I asked an AI to brutally analyze my entire body of work—140+ repositories, 50+ papers, a decade and a half of research. The assignment: find the patterns I couldn’t see, the obsessions I didn’t know I had, the unifying thesis underlying …
Post-ASI Archaeology: When Humanity Becomes a Dataset of Origins
We will not be remembered — we will be indexed. If superintelligence endures beyond us, remembrance shifts from memory to query. Building legacy systems not for nostalgia, but to remain legible in a future where legibility determines what persists.
The Policy
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …