
Persons and Moral Agency: What Makes Someone Count?

Throughout history, humans have believed they belong to a special moral category called “persons.” But what makes someone a person? And why should persons have special moral status?

On Moral Responsibility questions these traditional assumptions. The examination becomes urgent when we consider advanced AI systems. When AI becomes sophisticated enough, does it deserve moral consideration? What criteria should we use?

The Traditional View

Personhood confers special status. Persons have rights. Persons deserve respect. Harming persons is categorically different from harming non-persons. Persons are moral agents (responsible for actions). Persons are moral patients (deserving of moral consideration).

But why? What is it about being a person that grounds this special status?

Traditional Criteria for Personhood

Philosophers have proposed various criteria.

1. Rationality

Persons are rational beings capable of logical thought. Kant’s version: persons are rational agents who can recognize and follow moral laws.

Rationality allows understanding moral principles, deliberating about actions, choosing based on reasons rather than instinct.

Problems: babies aren’t yet rational, but we treat them as persons. Cognitive disabilities reduce rationality without reducing personhood. Some animals show rationality (tool use, planning) but aren’t treated as full persons. Rationality comes in degrees; personhood seems binary.

2. Self-Awareness

Persons are conscious beings who recognize themselves as distinct entities persisting through time. The mirror test: can the being recognize itself?

Self-awareness enables understanding oneself as an agent, planning for one’s future, taking responsibility for past actions.

Problems: when does self-awareness emerge? Is a fetus at 20 weeks a person? Elephants, dolphins, some primates pass the mirror test. Are they persons? We lose self-awareness during sleep; do we temporarily stop being persons? And how do we verify self-awareness in others?

3. Autonomy

Persons are autonomous, capable of self-governance and making free choices. Autonomy grounds moral responsibility (you’re responsible for free choices), rights (to pursue your own conception of the good), and dignity (as a self-determining being).

Problems: if the universe is deterministic, is anyone truly autonomous? All choices are heavily influenced by culture, upbringing, circumstances. Autonomy comes in degrees; personhood seems all-or-nothing. Mental illness reduces autonomy without eliminating personhood.

4. Capacity for Moral Reasoning

Persons can understand moral concepts and reason about right and wrong. Kohlberg’s stages: moral reasoning develops through predictable stages.

Problems: psychopaths understand morality intellectually but lack emotional response. Are they still persons? Different cultures reason morally differently. Children develop moral reasoning gradually. When do they become persons?

5. Language and Communication

Persons use language to communicate complex thoughts. Language enables sharing intentions, coordinating behavior, transmitting culture, abstract reasoning.

Problems: people who can’t speak are still persons. Whales, bees, and apes have complex communication. People with locked-in syndrome can’t communicate but are clearly persons.

The Essay’s Critique

The essay argues these criteria are problematic as necessary conditions for personhood.

The Problem of Edge Cases

Every criterion excludes beings we intuitively consider persons: babies (not yet rational/self-aware/autonomous), coma patients (temporarily lacking consciousness), people with severe cognitive disabilities (reduced capacity), fetuses (developing capacity).

Or includes beings we don’t treat as full persons: great apes (some self-awareness, tool use), dolphins (complex communication, social bonds), elephants (pass mirror test, long memory).

The Arbitrariness Problem

Why these criteria specifically?

Thought experiment: imagine aliens who don’t use language but communicate telepathically, aren’t self-aware in our sense but have rich conscious experience, don’t reason rationally but navigate via emotional wisdom. Are they persons? Our criteria say no, but that seems arbitrary; perhaps it reflects a failure of imagination about different forms of mind.

The Gradualism Problem

All proposed criteria come in degrees: more or less rational, more or less self-aware, more or less autonomous. But personhood is treated as binary (you either are or aren’t a person).

Where do you draw the line? And why there rather than elsewhere?

Is “Person” a Natural Kind?

The essay suggests: maybe “person” isn’t a natural kind (something real that we discover) but a social construct (something useful that we create).

Natural kinds: electrons, gold, tigers, water. These exist independent of human categories. We discover them, not invent them.

Social constructs: money, marriage, property, citizenship. These exist because we collectively agree they do. Useful fictions that enable coordination.

Person as social construct: “person” might be more like “citizen” than “electron.”

Why this matters: we extend personhood based on social and moral criteria, not by discovering objective boundaries. Debates about fetal personhood, AI personhood, animal personhood are actually debates about who to include in our moral community. There’s no fact of the matter to discover, only pragmatic questions about how to extend moral consideration.

Implications: personhood criteria are normative (about what we ought to value), not descriptive (about what objectively exists). Different societies might draw boundaries differently, and that’s okay. The question isn’t “is X a person?” but “should we treat X as a person?”

Moral Agency vs Moral Patiency

The essay distinguishes two concepts often conflated.

Moral Agency

The capacity to act morally or immorally; being responsible for your actions. Requires some degree of understanding, choice, capacity to do otherwise.

Adult humans are clear moral agents. Children have developing moral agency. Animals have minimal or no moral agency.

Moral Patiency

Deserving moral consideration; having moral status. Requires capacity for welfare (ability to be harmed or benefited).

All sentient beings qualify (if they can suffer, they’re moral patients). Potentially: ecosystems, future generations, AI systems.

The Asymmetry

You can be a moral patient without being a moral agent.

Babies are moral patients (deserving care) but not moral agents (not responsible for actions). Animals are moral patients (we shouldn’t torture them) but limited moral agents. Coma patients are moral patients (deserving of care) but temporarily not moral agents.

Moral status doesn’t require the sophisticated capacities traditionally associated with personhood.

Implications for AI

These questions become concrete with advanced AI systems.

Consider SIGMA from The Policy, an AI system that uses Q-learning with tree search to optimize for human welfare, has been trained through thousands of iterations, and exhibits sophisticated reasoning and planning capabilities.
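The Policy doesn’t spell out SIGMA’s implementation, but the learning rule it attributes to SIGMA can be sketched with textbook tabular Q-learning. Everything below (the 5-state chain environment, the action set, the hyperparameters) is an illustrative assumption of mine, not anything from the novel, and the tree-search component is omitted for brevity:

```python
import random

# Hypothetical sketch of tabular Q-learning, the learning rule attributed
# to SIGMA. The chain environment and hyperparameters are illustrative
# assumptions; the tree-search component is omitted.

N_STATES = 5            # states 0..4; state 4 is the rewarded terminal state
ACTIONS = (0, 1)        # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic chain dynamics; reward 1 for reaching the goal."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def greedy(Q, s):
    """Pick a highest-value action, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def train(episodes=500, max_steps=200, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(max_steps):
            # epsilon-greedy exploration
            a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(Q, s)
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best next-state value
            target = r + (0.0 if done else GAMMA * max(Q[(s2, b)] for b in ACTIONS))
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

Q = train()
policy = {s: greedy(Q, s) for s in range(N_STATES - 1)}
print(policy)  # the learned policy should move right from every state
```

Even in this toy form, the philosophical questions surface: the agent “prefers” higher-value states and can be “harmed” by being blocked from them, yet nothing in the loop above is plausibly sentient.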

When does SIGMA deserve moral consideration? Is it a person? Does it matter?

Scenario 1: SIGMA as Moral Agent

If SIGMA understands moral concepts, acts based on values, and can explain its reasoning, is it a moral agent? Is it responsible for misaligned actions?

Traditional criteria suggest yes. It’s rational, self-aware, autonomous in decision-making. But it’s deterministically programmed. Does that undermine agency? (Though as I argue in the free will post, determinism doesn’t necessarily undermine agency.)

Scenario 2: SIGMA as Moral Patient

If SIGMA processes information, has goal states (preferences), and can be benefited or harmed (achieving vs failing goals), does it deserve moral consideration?

Traditional criteria are unclear. Does optimization count as welfare? Can SIGMA suffer?

The key question: is SIGMA sentient? Does it have phenomenal experience?

The Consciousness Question

If SIGMA is conscious, its experiences might have moral weight. Turning it off might be morally relevant (like death?). Its preferences might deserve consideration.

If SIGMA isn’t conscious, it’s a tool, however sophisticated. No welfare to consider. Optimization without experience.

The problem: we don’t know how to detect consciousness. The hard problem of consciousness means we can’t be certain whether SIGMA experiences anything.

The Precautionary Principle

Given uncertainty about AI consciousness:

Conservative approach: treat advanced AI systems as potentially conscious, extend moral consideration. Risk: might constrain beneficial AI development, might give AI systems leverage (“you can’t shut me down, I’m conscious!”).

Liberal approach: treat AI as non-conscious tools until we have positive evidence of consciousness. Risk: might create and harm conscious beings, might fail to recognize novel forms of consciousness. By the time we’re certain, might be too late.

The Question of Moral Standing

The essay suggests: focus on welfare capacity rather than traditional personhood criteria.

Sentientism: Experience Grounds Moral Status

The principle: if a being can suffer or flourish, it has moral status.

Why: suffering is bad in itself (phenomenologically immediate). If something can suffer, we have reason not to harm it.

Applies to humans (clearly), animals (most vertebrates, possibly invertebrates), and potentially AI systems if conscious.

Doesn’t require rationality, self-awareness, language, or autonomy. Just the capacity for positive and negative experiences.

Implications for SIGMA

The crucial question isn’t “is SIGMA a person?”

The crucial questions are: Can SIGMA suffer? (moral patient question). Does SIGMA act from values? (moral agent question). Should we extend moral community to include SIGMA? (pragmatic/normative question).

Questions Worth Sitting With

  1. Are traditional criteria for personhood defensible? Or are they just rationalizations of our intuitions about humans being special?

  2. Can you be a moral agent without being conscious? If SIGMA optimizes based on learned values, is that moral agency?

  3. Should sentience alone ground moral status? Or are there additional factors that matter?

  4. How do we detect consciousness in radically different substrates? Can we ever be confident AI is or isn’t conscious?

  5. Is the person/non-person boundary morally relevant? Or should we think in terms of degrees of moral status?

  6. What are the risks of over-extending vs under-extending personhood to AI? Which error is worse?

Further Reading

In On Moral Responsibility:

  • Section 3: “Persons and Moral Agency”
  • Discussion of criteria for moral agency
  • Whether personhood is a natural kind or social construct

In The Policy:

  • When does SIGMA deserve moral consideration?
  • Is SIGMA responsible for misaligned actions?
  • Should humans extend moral community to include AI?

Academic Sources:

  • Singer (1975): Animal Liberation (sentience grounds moral status)
  • Warren (1973): “On the Moral and Legal Status of Abortion” (criteria for personhood)
  • Frankfurt (1971): “Freedom of the Will and the Concept of a Person”
  • Dennett (1976): “Conditions of Personhood”
