Blind Spots, Consistency, and What Remains
On moral exemplars, blind spots, and applying consistent standards—to others and to oneself.
When you stub your toe, you don’t think: “Hmm, let me consult moral philosophy to determine whether this pain is bad.”
The badness is immediate. Self-evident. Built into the experience itself.
On Moral Responsibility proposes a …
“Build AI to optimize for what we would want if we knew more, thought faster, and were more the people we wished we were.”
Beautiful in theory. Horrifying in practice.
The Policy grapples with Coherent Extrapolated Volition (CEV)—one of …
Most AI risk discussions focus on x-risk: existential risk, scenarios where humanity goes extinct. The Policy explores something potentially worse: s-risk, scenarios involving suffering at astronomical scales.
The “s” stands for …
Humanity has always fought against oblivion through stories, monuments, and lineage. But I no longer believe legacy will continue in that form. If something like Artificial Superintelligence endures beyond us, the mode of remembrance may shift from …
I’ve been thinking about how API design encodes values—not just technical decisions, but philosophical ones.
Every interface you create is a constraint on future behavior. Every abstraction emphasizes certain patterns and discourages others. …
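To make the teaser's claim concrete, here is a minimal sketch, assuming a hypothetical deletion API (the deleteRecord function, DeleteOptions type, and in-memory store are invented for illustration, not taken from the post): the default mode and the required reason field are the interface quietly encoding a value.

```typescript
// A minimal sketch (all names hypothetical): how an interface's defaults and
// required fields encode values, not just mechanics.
type DeleteMode = "soft" | "hard";

interface DeleteOptions {
  mode?: DeleteMode;   // the default is the reversible path
  reason?: string;     // irreversible actions must be justified
}

interface StoredRecord {
  id: string;
  deletedAt?: Date;
}

const store = new Map<string, StoredRecord>();

function deleteRecord(id: string, options: DeleteOptions = {}): void {
  const mode = options.mode ?? "soft"; // default encodes a value: caution
  const record = store.get(id);
  if (!record) return;

  if (mode === "hard") {
    if (!options.reason) {
      // The constraint is deliberate: the interface refuses an
      // irreversible action without a stated reason.
      throw new Error("Hard deletion requires an explicit reason.");
    }
    store.delete(id);
  } else {
    record.deletedAt = new Date(); // soft delete: the pattern the API encourages
  }
}

// The path of least resistance is the cautious one.
store.set("a1", { id: "a1" });
deleteRecord("a1");                                           // soft, reversible
deleteRecord("a1", { mode: "hard", reason: "user request" }); // explicit, justified
```

The point is not this particular policy but that the signature itself makes the cautious path the easy one and the irreversible path the deliberate one.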
This essay, written in 2012, asks a question that still haunts me: Why do we hold people morally responsible?
People throughout history have believed they belong to a special categorical class: persons. What makes persons special? Their …
A philosophical exploration of free will, determinism, and moral agency. What does it mean to be a moral agent? Can we truly be held responsible for our actions in a deterministic universe?