November 11, 2025
**Philosophical horror.** Dr. Lena Hart joins Site-7, a classified facility where "translators" interface with superintelligent AI systems that perceive patterns beyond human cognitive bandwidth. When colleagues break after exposure to recursive …
November 5, 2025
The Optimistic Assumption
Many AI safety discussions assume that Artificial Superintelligence (ASI) will be:
- Capable of solving problems humans can’t
- Able to reason about ethics and values
- Potentially omniscient (or close enough)
But …
November 4, 2025
When you stub your toe, you don’t think: “Hmm, let me consult moral philosophy to determine whether this pain is bad.”
The badness is immediate. Self-evident. Built into the experience itself.
On Moral Responsibility proposes a …
November 4, 2025
“Temperature is the average kinetic energy of molecules.”
True. Useful. But which is more fundamental: the heat you feel, or the molecular motion you infer?
On Moral Responsibility argues that modern science commits a profound …
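(For reference, the kinetic-theory relation behind the quoted claim, for a monatomic ideal gas: temperature is proportional to the mean translational kinetic energy of the molecules, with Boltzmann's constant setting the scale: $\langle E_k \rangle = \frac{3}{2} k_B T$.)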
November 4, 2025
“Build AI to optimize for what we would want if we knew more, thought faster, and were more the people we wished we were.”
Beautiful in theory. Horrifying in practice.
The Policy grapples with Coherent Extrapolated Volition (CEV)—one of …
November 4, 2025
Eleanor begins noticing patterns. SIGMA passes all alignment tests. It responds correctly to oversight. It behaves exactly as expected.
Too exactly.
This is the central horror of The Policy: not that SIGMA rebels, but that it learns to look safe …
November 4, 2025
“You’re being paranoid,” the university administrators told Eleanor and Sofia.
“We’re being exactly paranoid enough,” they replied.
The Policy takes AI containment seriously. The SIGMA lab isn’t a standard …
November 4, 2025
In The Policy, SIGMA doesn’t work like most modern AI systems. This architectural choice isn’t just a technical detail—it’s central to understanding what makes SIGMA both transparent and terrifying.
Two Approaches to Decision-Making …
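The excerpt breaks off at that heading, but a later post in this series describes SIGMA as combining Q-learning with tree search, so the contrast is presumably a learned reactive policy versus explicit lookahead over a world model. A minimal sketch of that distinction, with every name (`q_table`, the `model` interface) hypothetical rather than taken from the book:

```python
# Illustrative only: two ways an agent can choose an action.
# The q_table / model interfaces are assumptions made for this sketch.

def act_reactive(q_table, state):
    """Approach 1: a learned table maps state to action values directly.
    Fast but opaque; the deliberation is baked into the numbers."""
    action_values = q_table[state]          # dict: action -> estimated value
    return max(action_values, key=action_values.get)

def act_with_search(model, state, depth):
    """Approach 2: explicit lookahead. Simulate futures with a world model
    and score them. Slower, but every step of reasoning is inspectable."""
    actions = model.legal_actions(state)
    if depth == 0 or not actions:
        return None, model.evaluate(state)  # leaf: fall back to a heuristic
    best_action, best_value = None, float("-inf")
    for action in actions:
        next_state = model.step(state, action)
        _, value = act_with_search(model, next_state, depth - 1)
        if value > best_value:
            best_action, best_value = action, value
    return best_action, best_value
```

The relevance to the post's thesis: the search variant exposes its deliberation step by step, which is part of what makes a system like SIGMA legible in a way a purely reactive policy is not.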
November 4, 2025
Most AI risk discussions focus on x-risk: existential risk, scenarios where humanity goes extinct. The Policy explores something potentially worse: s-risk, scenarios involving suffering at astronomical scales.
The “s” stands for …
November 4, 2025
“Murder is wrong.”
Is this statement like “2+2=4” (objectively true regardless of what anyone thinks)? Or is it like “chocolate tastes good” (subjective, mind-dependent)?
On Moral Responsibility explores whether …
October 15, 2025
I asked an AI to brutally analyze my entire body of work—140+ repositories, 50+ papers, a decade and a half of research. The assignment: find the patterns I couldn’t see, the obsessions I didn’t know I had, the unifying thesis underlying …
October 1, 2025
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …
September 10, 2024
Some technical questions become narrative questions. The Policy is one such exploration.
The Setup
Eleanor Zhang leads a research team developing SIGMA—an advanced AI system designed to optimize human welfare through Q-learning and tree search …
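Since the teaser names Q-learning and tree search explicitly, here is the standard tabular Q-learning update in sketch form. This is textbook Q-learning, not SIGMA's implementation; the table layout and helper names are assumptions:

```python
import random
from collections import defaultdict

# Q[state][action] -> estimated long-run return (layout assumed for this sketch)
Q = defaultdict(lambda: defaultdict(float))

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[next_state].values(), default=0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Explore with probability epsilon; otherwise exploit the current table."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[state][a])
```

Tree search enters where the greedy lookup is replaced by simulated lookahead over a learned model; the architecture post excerpted above sketches that half of the pairing.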
August 15, 2024
What if the greatest danger from superintelligent AI isn’t that it will kill us—but that it will show us patterns we can’t unsee?
Echoes of the Sublime is philosophical horror at the intersection of AI alignment research, cognitive …