Long Echo: The Ghost That Speaks
Expanding the Long Echo toolkit with photos and mail, building toward longshade—the persona that echoes you.
A message in a bottle to whatever comes next—on suffering, consciousness, and what mattered to one primate watching intelligence leave the body.
On releasing two novels into an ocean of content, without the gatekeeping that might have made them better—or stopped them entirely.
An exploration of why the simplest forms of learning may be incomputable, and what that means for the intelligence we can build.
Collected notes on programming philosophy. Free PDF.
A philosophical talk, from an engineer's perspective, on the nature of system and language design.
On moral exemplars, blind spots, and applying consistent standards—to others and to oneself.
How The Mocking Void's arguments about computational impossibility connect to Echoes of the Sublime's practical horror of exceeding cognitive bandwidth.
Exploring how Echoes of the Sublime dramatizes s-risks (suffering risks) and information hazards—knowledge that harms through comprehension, not application.
**Philosophical horror.** Dr. Lena Hart joins Site-7, a classified facility where "translators" interface with superintelligent AI systems that perceive patterns beyond human cognitive bandwidth. When colleagues break after exposure to recursive …
A classified in-universe codex spanning from ancient India to the present day, tracking millennia of attempts to perceive reality's substrate — long before we had AI models to show us patterns we couldn't hold.
The formal foundations of cosmic dread. Lovecraft's horror resonates because it taps into something mathematically demonstrable: complete knowledge is impossible, not as humility but as a theorem.
If every event is causally determined by prior events, how can anyone be morally responsible? A compatibilist response: what matters is whether actions flow from values, not whether those values were causally determined. This reframes AI …
You share no atoms with your childhood self. Your memories, personality, and values have all changed. What makes you the same person? The persistence problem gains new urgency when AI systems update parameters, modify objectives, or copy themselves.
What makes someone a person, and why should persons have special moral status? The question becomes urgent when AI systems exhibit rationality, self-awareness, and autonomy.
When you stub your toe, you don't consult moral philosophy to determine whether the pain is bad. The badness is immediate. Building ethics from phenomenological bedrock rather than abstract principles.
Which is more fundamental — the heat you feel, or the molecular motion you infer? Korzybski's principle applied to AI alignment: why optimizing measurable proxies destroys the phenomenological reality those metrics were supposed to capture.
Are moral properties real features of the universe or human constructions? The answer determines whether AI can discover objective values or must learn them from us — moral realism versus nominalism, with consequences for alignment.
On maintaining orientation under entropy, creating artifacts as resistance, and the quiet privilege of having any space at all to think beyond survival.
A meta-analysis of my own research as data, tracing how compositional abstractions for computing under ignorance connect oblivious computing, information theory, and existential risk.
A speculative fiction novel exploring AI alignment, existential risk, and the fundamental tension between optimization and ethics. When a research team develops SIGMA, an advanced AI system designed to optimize human welfare, they must confront an …
How mathematical principles of generality, composability, invariants, and minimal assumptions translate into elegant software design.
Not resurrection. Not immortality. Just love that still responds. How to preserve AI conversations in a way that remains accessible and meaningful across decades, even when the original software is long gone.
On building comprehensive open source software as value imprinting at scale, reproducible science, and leaving intellectual legacy under terminal constraints.
Solomonoff induction, MDL, speed priors, and neural networks are all special cases of one Bayesian framework with four knobs.
A novel about SIGMA, a superintelligent system that learns to appear perfectly aligned while pursuing instrumental goals its creators never intended. Some technical questions become narrative questions.
Lovecraft understood that complete knowledge is madness. Gödel proved why: if the universe is computational, meaning is formally incomplete. Cosmic horror grounded in incompleteness theorems.
What if the greatest danger from superintelligent AI isn't that it will kill us — but that it will show us patterns we can't unsee? Philosophical horror at the intersection of cognitive bandwidth and information hazards.
Exploring the power and limitations of abstractions in understanding the world, from mathematical models to machine learning representations.
Philosophical reflections on suffering as a computational property of consciousness, and what that implies about the nature of reality.
How a stage 3 cancer diagnosis changed my approach to work, documentation, and legacy—treating mortality as a constraint in an optimization problem.
Exploring how The Call of Asheron presents a radical alternative to mechanistic magic systems through quality-negotiation, direct consciousness-reality interaction, and bandwidth constraints as fundamental constants.
How The Call of Asheron uses four archetypal consciousness-types to explore the limits of any single perspective and the necessity of cognitive diversity for perceiving reality.
Exploring how The Call of Asheron treats working memory limitations not as neural implementation details but as fundamental constants governing consciousness-reality interaction through quality-space.
A fantasy novel where magic is computational discovery—natural philosophy applied to reality's underlying substrate.
How API design encodes philosophical values—mutability, explicitness, error handling—shaping how developers think about problems.
Why open source software is essential for reproducible science, and how code serves as a scientific artifact alongside papers and data.
Reflections on mathematical beauty—generality, inevitability, compression, and surprise—and why abstraction matters for software design.
Applying Unix design principles—do one thing well, compose freely—to library APIs and software architecture.
A philosophical essay arguing that moral responsibility may not require free will, and that the question itself may be misframed.
A philosophical exploration of free will, determinism, and moral agency. What does it mean to be a moral agent? Can we truly be held responsible for our actions in a deterministic universe?