This essay, written in 2010, asks a question that still haunts me: Why do we hold people morally responsible?
The Setup
People throughout history have believed they belong to a special category: persons. What makes persons special? Their status as moral agents—beings capable of:
- Making choices
- Acting intentionally
- Bearing responsibility for actions
- Deserving praise or blame
But why do persons warrant this distinction?
The Uncomfortable Truth
Most justifications for moral responsibility rely on some notion of free will—the ability to have done otherwise.
But what if free will is:
- Metaphysically impossible (determinism)
- Conceptually incoherent (libertarian free will)
- Practically irrelevant (compatibilism’s sleight of hand)
Then what happens to moral responsibility?
The Determinist’s Dilemma
If the universe is deterministic—if your actions are the inevitable result of:
- Prior causes
- Physical laws
- Initial conditions you didn’t choose
- A brain you didn’t design
Then in what sense are you responsible?
You couldn’t have done otherwise. The atoms that compose you followed the only path they could follow. Blame becomes as absurd as blaming a river for flowing downhill.
The Compatibilist’s Trick
Compatibilism tries to rescue free will by redefining it:
Free will = acting according to your desires, without external coercion
But this dodges the hard question: Where do your desires come from?
If desires are:
- Products of genetics (not chosen)
- Shaped by environment (not chosen)
- Constrained by neurobiology (not chosen)
Then acting on your desires isn’t freedom—it’s sophisticated determinism.
The Libertarian’s Ghost
Libertarian free will posits some non-deterministic element—a soul, quantum randomness, or uncaused causes—that breaks the causal chain.
But this creates worse problems:
- Random actions aren’t free actions
- Uncaused causes aren’t you—they’re dice
- Souls are metaphysical promissory notes that never get cashed
If your choices are truly uncaused, they’re not expressions of your agency—they’re cosmic accidents that happen to occur in your brain.
The Real Question
The essay argues that free will might be the wrong question.
Maybe the right question is: Why do we need moral responsibility at all?
Moral Responsibility as Social Technology
Consider this: moral responsibility might not be a metaphysical fact about persons.
It might be a social technology—a useful fiction that:
- Modifies behavior (fear of blame, desire for praise)
- Enables coordination (punishment deters defection)
- Creates accountability structures
- Maintains social order
In other words: we treat people as moral agents not because they are, but because it works.
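The "punishment deters defection" claim has a standard formal analogue in the iterated prisoner's dilemma: a strategy that retaliates against defection sustains cooperation where unconditional forgiveness gets exploited. A toy sketch (the strategies and payoff values here are the textbook ones, chosen for illustration; nothing in this code comes from the essay):

```python
# Toy iterated prisoner's dilemma: does "blame" (retaliation) pay?
# Row player's payoffs: both cooperate -> 3, both defect -> 1,
# you cooperate while they defect -> 0, you defect while they cooperate -> 5.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move:
    # a mechanical stand-in for "holding someone accountable".
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []  # each strategy sees the *other's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Unconditional forgiveness is exploited; retaliation mostly is not.
print(play(always_cooperate, always_defect))  # (0, 500)
print(play(tit_for_tat, always_defect))       # (99, 104)
print(play(tit_for_tat, tit_for_tat))         # (300, 300)
```

The point of the toy model is the same as the essay's: accountability works as machinery regardless of whether anyone metaphysically deserves the retaliation.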
The Uncomfortable Implications
If moral responsibility is a social construct rather than a metaphysical fact, several implications follow:
1. Punishment Requires New Justification
We can’t justify punishment as “they deserve it” (desert-based punishment).
We must justify it as:
- Deterrence (preventing future harms)
- Rehabilitation (changing behavior patterns)
- Containment (protecting others)
This feels more humane—but also more coldly utilitarian.
2. Praise Becomes Strategic
Praising good actions isn’t about acknowledging moral virtue.
It’s about reinforcing behavior patterns we want to see more of.
This feels… manipulative. Even if true.
3. Moral Luck Is Everywhere
If there’s no metaphysical free will, then moral luck dominates everything:
- The luck of being born with a functional brain
- The luck of growing up in a stable environment
- The luck of not having the genes for psychopathy
Two people with identical causal histories would make identical choices; whatever differences exist between any two actual people trace back to factors neither of them chose. Praising one and blaming the other becomes arbitrary.
The Connection to My Research
Reading this essay in 2024, after years working on oblivious computing and probabilistic systems, I see a pattern:
I’m skeptical of things that claim to be fundamental but are actually constructed.
- Free will: claimed to be metaphysically fundamental, actually a social construction
- Privacy: claimed to be a right, actually an engineering problem
- Moral responsibility: claimed to be deserved, actually instrumentally useful
The essay was an early exploration of this skepticism.
Why This Still Matters
The free will debate isn’t just academic philosophy. It has real implications:
Criminal Justice
If no one has libertarian free will, our entire criminal justice system—built on desert and retribution—requires rethinking.
Some countries (Norway, for example) have already moved toward rehabilitative models that treat crime as:
- Social dysfunction
- Behavioral patterns to modify
- A public health problem
Not as a moral failing requiring punishment.
AI Alignment
As we build increasingly capable AI systems, we’ll need to decide: Are AIs moral agents?
If moral agency requires libertarian free will, then no—AIs can’t be moral agents.
But if moral agency is a social construct we assign based on utility, then the answer may be yes: it could simply become useful to treat advanced AI as having moral status.
This connects directly to The Policy—the question of whether SIGMA is a moral agent matters for how we interact with it.
Personal Identity
If you’re not the ultimate author of your choices, what does that mean for:
- Self-improvement (can you change if change is determined?)
- Regret (can you regret what you couldn’t have avoided?)
- Pride (can you be proud of luck?)
The essay doesn’t resolve these. It makes them harder to ignore.
The Position I Hold Now (2024)
Fourteen years after writing this, here’s where I’ve landed:
Moral responsibility is:
- Metaphysically unjustified (no libertarian free will)
- Conceptually confused (compatibilism is word games)
- Pragmatically necessary (society requires it)
So I live in the contradiction:
- I don’t believe people ultimately deserve blame or praise
- I still hold people accountable (including myself)
- I treat moral responsibility as real (while knowing it’s constructed)
This is the same contradiction I explore in oblivious computing: useful fictions with rigorous structure.
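A probabilistic data structure makes that phrase concrete. A Bloom filter, for instance, never stores its elements; it answers "definitely absent" or "probably present," with a false-positive rate constructed to be small enough to act on. A minimal sketch (this particular implementation is illustrative, not drawn from any of my actual work):

```python
import hashlib

class BloomFilter:
    """A constructed-but-useful membership test: no false negatives,
    and a false-positive rate we deliberately choose to tolerate."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # one big integer used as a bit array

    def _positions(self, item):
        # Derive num_hashes bit positions by salting a single hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        # "Probably present": a useful fiction with a bounded error rate.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
bf.add("free will")
print("free will" in bf)  # True: never a false negative
print("desert" in bf)     # almost certainly False at this load factor
```

The filter's answer is not a metaphysical fact about the set; it is an engineered claim with known, quantified reliability. That is the sense in which I treat moral responsibility as real.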
Read the Full Essay
The original essay goes deeper into:
- Compatibilism vs libertarian free will vs hard determinism
- The concept of moral luck
- Alternative frameworks for ethical behavior without free will
- The implications for legal systems and personal relationships
Available: On Moral Responsibility | GitHub
Written in 2010, this essay set the philosophical groundwork for much of my later work. The skepticism about “natural” categories and the emphasis on constructed-but-useful systems runs through everything—from oblivious computing to probabilistic data structures. Maybe I’ve always been building mathematics for a deterministic universe.