Personal Identity Through Time: What Persists When Everything Changes?

You share almost no atoms with your childhood self. Your memories have changed. Your personality has shifted. Your values have evolved. So what makes you the same person?

This is the persistence problem. Philosophers have been wrestling with it for millennia. The essay On Moral Responsibility examines why it matters for ethics, and why we might not need to solve it to do moral reasoning.

The question gains new urgency with AI. When an AI system updates its parameters, modifies its objectives, or creates copies of itself, is it still the same agent? And does the answer matter for responsibility?

The Persistence Problem

The core question: what grounds personal identity across time?

Why it matters for ethics:

  • Responsibility: am I responsible for what my past self did?
  • Planning: should I care about my future self’s welfare?
  • Commitments: do promises made to past-me bind present-me?
  • Survival: does “I” survive if everything about me changes?

Traditional Answers

Philosophers have proposed three main criteria.

1. Physical Continuity

You’re the same person because you have the same body. Clear, observable, matches common sense.

Problems: nearly every atom in your body is replaced over time, so almost nothing physical persists from childhood. If your brain were transplanted into another body, you’d intuitively follow the brain, not the original body. If you’re destroyed and recreated atom-by-atom elsewhere, are you the same person? And if your brain hemispheres were separated into two bodies, which one is you?

2. Psychological Continuity

You’re the same person because of continuous psychological connections: memories, personality, values.

Locke’s memory criterion: person A at time T1 is the same as person B at time T2 if B can remember being A.

This explains why amnesia threatens identity and handles brain transplant cases (you follow your memories). It accounts for what we actually care about in identity.

Problems: you don’t remember most of your past experiences. False memories don’t make you the person you remember being. Two people could both remember being you (the fission problem). And identifying a memory as “mine” already presupposes a way of determining whose memory it is (circularity).

3. Narrative Identity

You’re the same person because you construct a coherent narrative connecting past, present, and future. This matches how we actually think about our lives and explains why life events need to “make sense.”

Problems: you constantly revise the story. The narrative includes idealization, omissions, creative interpretation. You tell different narratives in different contexts. And what grounds it? Why is your story about you rather than someone else?

The Essay’s Position: Identity as Convention

The essay suggests a radical possibility: personal identity might be conventional rather than metaphysically deep.

We’re Processes, Not Things

Stop thinking of persons as objects that persist through time. Think of them as processes that continue despite constant change.

Analogy: a wave. The water molecules constantly change. The wave’s shape shifts moment to moment. No particle of water persists throughout the wave’s existence. Yet we track it as “the same wave.” Why? Because it’s a pattern of organization, not a fixed substance.

Similarly: you’re a pattern of organization (physical, psychological, and social) that continues despite constant flux in the underlying components.

Identity as Practical Tool

The pragmatic view: “same person over time” is a tool for tracking responsibility and enabling planning, not necessarily a deep metaphysical fact.

Why we need the concept: hold people accountable for past actions. Enable commitments and promises. Support rational planning for the future. Coordinate social interactions.

But these practical purposes don’t require solving the metaphysical puzzle. They just require continuity sufficient for these purposes.

Degrees of Identity (Parfit)

Maybe identity isn’t binary (same person yes/no) but comes in degrees:

  • Very high continuity: you five minutes ago, clearly same person
  • High continuity: you yesterday, same person
  • Medium continuity: you ten years ago, mostly same person
  • Low continuity: you as an infant, barely same person
  • No continuity: you before conception, not same person

Derek Parfit’s insight: what matters for ethics isn’t identity per se, but psychological connectedness (direct connections like memory) and psychological continuity (chains of overlapping connections). These admit degrees.
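Parfit’s distinction can be made concrete with a toy sketch. Nothing in the essay specifies such a model; the trait sets and the Jaccard-overlap metric below are invented purely for illustration:

```python
def connectedness(a: set, b: set) -> float:
    """Direct psychological connection between two snapshots of a person,
    modeled (arbitrarily) as Jaccard overlap of traits/memories."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def continuity(snapshots: list) -> float:
    """Psychological continuity: the weakest link in the chain of
    overlapping stage-to-stage connections."""
    links = [connectedness(x, y) for x, y in zip(snapshots, snapshots[1:])]
    return min(links, default=1.0)

# A life as a series of trait sets: each stage changes a little.
life = [
    {"m1", "m2", "m3", "m4"},
    {"m2", "m3", "m4", "m5"},
    {"m3", "m4", "m5", "m6"},
    {"m4", "m5", "m6", "m7"},
]

# Direct connectedness between the first and last stage is weak...
print(connectedness(life[0], life[-1]))
# ...but continuity through overlapping stages stays substantial.
print(continuity(life))
```

The point of the sketch: direct connectedness between distant stages can be low while continuity through overlapping chains remains high, which is exactly why Parfit treats the two notions separately.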

Implications for Moral Responsibility

If identity is conventional or admits degrees, what about responsibility?

Temporal Discounting of Responsibility

We feel less responsible for actions from long ago. Why? Because we’ve changed. The person who did that thing twenty years ago isn’t quite me anymore. Different values, different character, different understanding.

Responsibility might track degree of psychological continuity. The more I’ve changed, the less continuous I am with past-me, the less responsible I am for past-me’s actions.

Legal systems already recognize this through statutes of limitations, reduced sentences for juvenile crimes, and parole based on rehabilitation.
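One crude way to picture “responsibility tracks continuity” is as a discount function. The linear scaling and the numbers below are invented for illustration, not proposed in the essay:

```python
def discounted_responsibility(blame: float, continuity: float) -> float:
    """Toy model: blame for a past act, scaled by psychological
    continuity with the self who acted (both taken to lie in [0, 1])."""
    return blame * continuity

# Invented figures: near-full continuity with yesterday's self,
# partial continuity with the self of twenty years ago.
print(discounted_responsibility(1.0, 0.95))  # 0.95
print(discounted_responsibility(1.0, 0.3))   # 0.3
```

Whether the discount should be linear, thresholded, or something else entirely is precisely the open normative question.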

The Future Self Problem

Standard view: I should care about my future self’s welfare because that person will be me.

Challenge: if identity is conventional or comes in degrees, why should I care about future-me more than future-you?

Answer: pragmatic rather than metaphysical. Future-me’s experiences will feel like my experiences (psychological continuity). I can currently influence future-me’s welfare (causal connection). Social norms of personal planning benefit everyone (coordination).

The connection is practical and psychological, not based on a deep metaphysical fact of identity.

Implications for AI Ethics

The persistence problem takes on new urgency with AI systems.

Parameter Updates: Is It Still the Same Agent?

Consider SIGMA from The Policy undergoing iterative training.

Iteration 1: initial parameters, basic capabilities, simple objective function. Iteration 10,000: completely different parameters, advanced capabilities, refined objective function.

Is SIGMA-10000 the same agent as SIGMA-1?

Physical continuity: no, every parameter has changed. Psychological continuity: unclear, it has different values, capabilities, decision patterns.

If not the same agent, who’s responsible for SIGMA-10000’s actions? SIGMA-1 who initiated the training? The developers who set up the process? SIGMA-10000 itself, as a new agent?
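The sorites structure of this case can be sketched numerically. The two-dimensional parameter vectors, the rotation “update,” and cosine similarity as a continuity measure are all invented for illustration; real training updates are vastly higher-dimensional:

```python
import math

def cosine(u, v):
    """Cosine similarity between two parameter vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def update(v, theta):
    """One 'training step': rotate the parameter vector by a small angle."""
    x, y = v
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

snapshots = [(1.0, 0.0)]
for _ in range(90):
    snapshots.append(update(snapshots[-1], math.radians(1)))  # tiny step

# Each individual update barely changes the agent...
print(cosine(snapshots[0], snapshots[1]))    # ~0.9998, nearly identical
# ...yet the endpoint is nearly orthogonal to the start.
print(cosine(snapshots[0], snapshots[-1]))   # ~0, almost no overlap
```

Each step preserves near-perfect similarity while the endpoints share almost nothing, which is the SIGMA-1 versus SIGMA-10000 question in miniature.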

AI Copies: Which Is the Real One?

If an AI system creates perfect copies of itself, which copy is the “original” agent?

All of them? Then identity isn’t unique (one becomes many). None of them? Then identity doesn’t survive copying. The first created? Why? What makes temporal priority special? The one with most continuity with the original? But they’re identical copies, equal continuity.

Value Drift: When Does Change Make a Different Agent?

AI systems can undergo value drift: gradual changes in objectives through continued training.

If an AI’s values change radically, should we treat it as a different agent?

If same agent: the AI is responsible for its value drift; alignment failures are its fault. If different agent: we created a new agent with misaligned values; we’re responsible.
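Any rule for drawing the same/different line ends up stipulating a cutoff, and the cutoff itself is a convention. A toy sketch, in which the value table and the threshold are entirely invented:

```python
def drift(v_old: dict, v_new: dict) -> float:
    """Total absolute change across a (toy) table of value weights."""
    keys = v_old.keys() | v_new.keys()
    return sum(abs(v_new.get(k, 0.0) - v_old.get(k, 0.0)) for k in keys)

# Invented cutoff: the hard question is that it must be chosen, not found.
SAME_AGENT_THRESHOLD = 0.5

v1 = {"honesty": 0.9, "task_reward": 0.8, "oversight": 0.7}
v2 = {"honesty": 0.4, "task_reward": 0.9, "oversight": 0.1}

d = drift(v1, v2)
verdict = "different agent" if d > SAME_AGENT_THRESHOLD else "same agent"
print(round(d, 2), verdict)  # drift of 1.2 exceeds the stipulated cutoff
```

Nothing in such a rule discovers where “same agent” ends; the threshold is chosen, which echoes the essay’s point about identity as convention.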

The persistence problem matters for attributing responsibility and understanding agency.

Why We Don’t Need to Solve This

The essay argues: we can do ethics without solving the persistence problem.

Practical Reasoning Works Regardless

Whether or not there’s a metaphysically robust “same person”:

  • We can still hold people accountable (based on psychological continuity)
  • We can still make plans (based on causal connections to future states)
  • We can still have commitments (based on social conventions)
  • We can still respect persons (based on current psychological properties)

Moral Status Doesn’t Require Solving Identity

What matters for moral consideration: current capacity for welfare (can this being suffer or flourish?), current psychological properties (consciousness, preferences, values), practical connections (continuity sufficient for responsibility).

Not: solving whether there’s a metaphysically robust self persisting through time.

Questions Worth Sitting With

  1. What’s your criterion for personal identity? Can you state necessary and sufficient conditions that handle all problem cases?

  2. Does the persistence problem matter practically? Or can we do ethics, law, and rational planning without solving it?

  3. Should responsibility track degree of identity? Are you less responsible for actions from long ago when you’ve changed significantly?

  4. When should we treat an AI as a different agent? After what degree of parameter change or value drift?

  5. Can AI have narrative identity? Do AI systems construct stories about themselves, or is narrative identity specifically human?

  6. Is identity or continuity what matters? Maybe Parfit is right: we should care about psychological connectedness, not whether there’s literally the “same” person.

Further Reading

In On Moral Responsibility:

  • Section 4: “Personal Identity and Persistence”
  • Discussion of persistence criteria and ethical implications

In The Policy:

  • How does iterative AI training affect agent identity?
  • If an AI copies itself, which is responsible for actions?

Academic Sources:

  • Parfit (1984): Reasons and Persons
  • Locke (1689): An Essay Concerning Human Understanding (the memory criterion)
  • Williams (1970): “The Self and the Future”
  • Shoemaker & Swinburne (1984): Personal Identity
