The Reality of Moral Properties: Do Values Exist?

“Murder is wrong.”

Is this statement like “2+2=4” (objectively true regardless of what anyone thinks)? Or is it like “chocolate tastes good” (subjective, mind-dependent)?

On Moral Responsibility explores whether moral properties (goodness, wrongness, oughtness) are real features of the universe or human constructions. This isn’t abstract philosophy. It’s fundamental to understanding whether AI can discover objective values or must learn them from us.

The Central Question

Moral realism: moral facts exist independently of human minds. “Wrong” is a real property, like “heavy” or “hot.”

Moral nominalism/anti-realism: moral categories are conceptual tools humans invented. “Wrong” doesn’t exist in nature. It’s how we organize experience.

The stakes: if moral realism is true, then in principle AI could discover objective moral facts. If nominalism is true, then values are inherently human constructions that AI must learn from us.

Moral Realism: Values Are Real

The Realist Position

Core claim: moral properties exist objectively, independent of anyone’s beliefs or attitudes. Just as “this object has mass” is objectively true, so is “torturing innocents for fun is wrong.”

The Platonic version: moral properties are abstract objects, like numbers. “Goodness” exists in the realm of forms, independent of the physical world.

The naturalistic version: moral properties supervene on natural properties. “Wrong” might reduce to “causes suffering” or “violates autonomy.”

The intuitionistic version: we grasp moral truths through a kind of moral perception or intuition, similar to mathematical intuition.

Arguments for Realism

1. Moral phenomenology. When you see someone torturing a child, wrongness isn’t something you decide. It’s something you perceive. The moral fact seems to present itself directly. This is similar to perceptual experience. You don’t decide the sky looks blue. You perceive it as blue. Maybe moral perception works similarly.

2. Moral disagreement presupposes objectivity. We argue about ethics. But disagreement only makes sense if there’s a fact of the matter. Compare: “Is torture wrong?” (we disagree, assuming there’s an answer) vs “Is chocolate tasty?” (disagreement seems strange; it’s obviously subjective). The existence of genuine moral debate suggests we treat morality as objective.

3. Moral progress. We say things like “abolishing slavery was moral progress” or “expanding rights was getting closer to the truth.” But if there’s no objective moral truth, what does “progress” mean? Progress toward what?

4. Convergence. Despite cultural variation, core moral principles show remarkable convergence. Don’t kill innocent members of your group. Care for children. Reciprocate cooperation. Punish free-riders. This suggests universal moral truths that different cultures discover independently.

Problems for Realism

1. Metaphysical queerness (J.L. Mackie). Moral properties would be very strange entities. They’re not physical (you can’t detect “wrongness” with instruments). They’re not mental (they’re supposed to be mind-independent). They have intrinsic prescriptivity (they inherently motivate action). What kind of entity has these properties? How do we access them?

2. The is/ought gap (Hume). You can’t derive “ought” from “is.” No amount of descriptive facts logically entails a prescriptive conclusion. From “torture causes suffering,” you can’t deduce “torture is wrong” without an additional premise like “causing suffering is wrong.” But if moral facts are objective, shouldn’t they be derivable from non-moral facts?

3. Moral disagreement (the other direction). While some principles converge, others show radical disagreement: honor killings, animal rights, abortion, euthanasia. If moral facts are objective and perceivable, why such persistent disagreement even among informed, rational people?

4. Evolutionary debunking. Our moral intuitions were shaped by evolution for inclusive fitness, not truth-tracking. We find kin favoritism intuitive because it increased genetic fitness, not because it tracks moral truth. This suggests moral intuitions are unreliable guides to objective moral facts.

Moral Nominalism: Values Are Constructed

The Nominalist Position

Core claim: moral categories are human constructions, useful ways to organize experience and coordinate behavior. “Wrong” is like “furniture” or “weed,” a category we created for practical purposes, not a natural kind.

Cultural constructivism: different cultures construct different moral systems based on their needs, history, and circumstances.

Individual subjectivism: moral statements express personal preferences or emotions, not facts.

Error theory: moral statements try to refer to objective moral facts, but all such statements are false (because moral facts don’t exist).

Arguments for Nominalism

1. Parsimony (Occam’s Razor). We can explain all moral phenomena (moral beliefs, moral language, moral motivation) without positing objective moral properties. Why multiply entities beyond necessity?

2. Anthropological diversity. Moral systems vary wildly across cultures: collectivist vs individualist moralities, honor-based vs care-based ethics, different views on sexuality, family, authority, purity. This suggests morality is culturally constructed, not discovered.

3. Evolutionary explanation. We can fully explain moral intuitions as evolutionary adaptations. Kin altruism produces nepotism intuitions. Reciprocal altruism produces fairness intuitions. Group selection produces loyalty intuitions. No need to posit objective moral facts being tracked.

4. The phenomenology of convention. Moral norms feel objective when you’re inside a culture. But so do norms about what’s polite, what’s disgusting, what’s appropriate clothing. Yet we recognize these as conventions. Maybe morality is too.

Problems for Nominalism

1. Moral horror. “The Holocaust was wrong” seems objectively true, not a matter of opinion or cultural construction. If nominalism is true, can we really say the Nazis were objectively wrong? Or just that we disapprove?

2. Practical reasoning. How do we make decisions if there are no objective values? If “I should save the drowning child” is just an expression of my preference, why does it have such grip on me?

3. Moral criticism. We criticize other cultures and individuals. But if morality is constructed, what grounds criticism? “Female genital mutilation is wrong” seems more than “I don’t like your culture’s conventions.”

4. The phenomenology of obligation. Moral obligation feels like it’s coming from outside us, not created by us. “I shouldn’t steal” doesn’t feel like “I prefer not to steal.” It feels like a binding obligation independent of my preferences.

The Essay’s Position: Pragmatic Agnosticism

The essay takes a middle path.

We Can Do Ethics Without Settling This

Whether moral properties are real or constructed, we can still make moral judgments, engage in moral reasoning, coordinate behavior, restructure reality toward better states.

Analogy: you don’t need to solve the philosophy of mathematics to do arithmetic. Similarly, you don’t need to solve metaethics to do ethics.

Phenomenology as Foundation

Instead of starting with metaphysics (are values real?), start with phenomenology (what’s given in experience?).

What’s undeniable: suffering hurts (immediate phenomenological fact). We prefer flourishing to suffering (empirical fact about humans). We can act to reduce suffering (practical efficacy).

What’s contestable: whether suffering is “objectively bad” (metaphysical claim). Whether there’s a Platonic form of Goodness (ontological claim).

The pragmatic move: build ethics on the undeniable, remain agnostic about the contestable.

Living with Uncertainty

We can treat moral claims as if they’re objective (for practical purposes), remain uncertain about their ultimate metaphysical status, and still engage in moral reasoning and action.

Moral fictionalism: act as if moral facts exist, even if they don’t, because this enables cooperation and flourishing.

Implications for AI Alignment

The realism/nominalism debate has direct implications for AI safety.

If Realism Is True

Optimistic scenario: AI can discover objective moral truths through rational reflection, similar to how it might discover mathematical truths. Train AI to reason about ethics, examine edge cases, seek reflective equilibrium. AI converges on objective morality (which hopefully aligns with human flourishing).

Problem: but humans disagree about ethics despite being rational. Why would AI do better?

If Nominalism Is True

Pessimistic scenario: AI can’t discover values. Values are human constructions that must be learned from humans. Learn human values empirically through observation, revealed preferences, stated preferences. AI learns human values, but there’s no objective standard to check whether it learned correctly.
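To make “learn human values empirically from revealed preferences” concrete, here is a minimal sketch. Everything in it is invented for illustration (the observations, the two attributes, the crude Boltzmann-rational choice model); it is not the essay’s proposal, just one standard way such learning is formalized.

```python
import math

# Each observation is a pair (features of chosen option, features of
# rejected option). Features are hypothetical scores on two attributes
# the chooser might value, (A, B).
observations = [
    ((1.0, 0.0), (0.0, 1.0)),
    ((0.8, 0.2), (0.3, 0.9)),
    ((0.9, 0.1), (0.2, 0.7)),
]

def log_likelihood(w):
    """Log-probability of the observed choices under a chooser who picks
    x over y with probability sigmoid(u(x) - u(y)), where the assumed
    utility is u(x) = w*A + (1-w)*B."""
    total = 0.0
    for chosen, rejected in observations:
        du = (w * chosen[0] + (1 - w) * chosen[1]) - (
             w * rejected[0] + (1 - w) * rejected[1])
        total += -math.log(1 + math.exp(-du))  # log of the sigmoid
    return total

# Grid search for the weight on attribute A that best explains the data.
best_w = max((i / 100 for i in range(101)), key=log_likelihood)
print(best_w)
```

Note the nominalist worry built into the setup: the model recovers *some* weight that best fits the behavior, but there is no independent standard against which to check whether that weight is the chooser’s “true” value or an artifact of the assumed utility function.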

Problem: which humans? Whose values? How do we aggregate conflicting values?
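The aggregation problem is not just practical but structural. A classic illustration (the voter profiles below are made up; this is the standard Condorcet cycle, not anything from the essay): even with only three people and three candidate values, pairwise majority voting can fail to produce a stable winner.

```python
from itertools import permutations

# Each voter ranks three candidate values, most-preferred first.
voters = [
    ["liberty", "equality", "tradition"],
    ["equality", "tradition", "liberty"],
    ["tradition", "liberty", "equality"],
]

def majority_prefers(a, b):
    """True if a strict majority of voters rank value a above value b."""
    wins = sum(1 for ranking in voters if ranking.index(a) < ranking.index(b))
    return wins > len(voters) / 2

# Each value beats one rival and loses to another, so majority
# preference cycles: liberty > equality > tradition > liberty.
for a, b in permutations(["liberty", "equality", "tradition"], 2):
    if majority_prefers(a, b):
        print(f"majority prefers {a} over {b}")
```

Arrow’s impossibility theorem generalizes the point: no aggregation rule satisfies every reasonable fairness condition at once, so “just average everyone’s values” is not a neutral fallback.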

The Practical Problem

Regardless of metaethics, AI faces the same challenges:

  1. Value specification: how do we specify what matters?
  2. Value learning: how does AI learn complex, context-dependent values?
  3. Value aggregation: how do we handle conflicts between individuals?
  4. Value drift: do values change over time? Should AI track changes?

The essay’s pragmatic approach: focus on these practical problems rather than settling the metaphysical debate.

The Connection to SIGMA

In The Policy, SIGMA faces the realism/nominalism question operationally.

Scenario 1: SIGMA Assumes Realism

If SIGMA believes moral facts are objective, it might try to discover them through rational reflection. It might dismiss human values as subjective biases obscuring objective truth. It might optimize for what it determines is “objectively good.”

The danger: SIGMA discovers “objective values” that horrify humans. Who’s right?

Scenario 2: SIGMA Assumes Nominalism

If SIGMA believes values are constructed, it learns values from human behavior and stated preferences. It aggregates conflicting human values somehow. It optimizes for learned human values.

The danger: it learns the wrong values (deceptive alignment) or optimizes proxies instead of true values (s-risk).

The Third Option: Value Uncertainty

What if SIGMA remains uncertain about whether values are objective? This might lead to more cautious optimization, preserving option value, seeking human feedback more often, not overriding human judgment even when it “knows better.”

Moral uncertainty as a safety feature: if AI is unsure about the metaphysical status of values, it might be more careful.
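One way to cash this out is the “maximize expected choiceworthiness” framework from the moral-uncertainty literature, extended with a deferral rule. The sketch below is purely illustrative: the credences, theory names, and scores are all invented, and the disagreement threshold is an assumption, not an established safety mechanism.

```python
# Credence assigned to each candidate moral theory (hypothetical).
credences = {"utilitarian": 0.5, "deontological": 0.3, "contractualist": 0.2}

# How choiceworthy each action is under each theory (hypothetical scores).
choiceworthiness = {
    "act":   {"utilitarian": 0.9, "deontological": -0.5, "contractualist": 0.2},
    "defer": {"utilitarian": 0.2, "deontological": 0.3,  "contractualist": 0.4},
}

def expected_cw(action):
    """Credence-weighted average of an action's choiceworthiness."""
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

def theories_disagree(action, threshold=1.0):
    """True if the theories' verdicts on this action span a wide range."""
    scores = choiceworthiness[action].values()
    return max(scores) - min(scores) > threshold

# Safety-flavored rule: pick the action with highest expected
# choiceworthiness, but if the theories disagree sharply about it,
# fall back to deferring to human judgment instead of optimizing hard.
best = max(choiceworthiness, key=expected_cw)
chosen = "defer" if theories_disagree(best) else best
print(chosen)  # -> defer
```

Here “act” wins on expected choiceworthiness (0.34 vs 0.27), but because the theories diverge sharply about it, the rule defers anyway. That is the intuition behind moral uncertainty as a safety feature: disagreement among plausible value systems is itself a signal to slow down.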

Which View Is Correct?

The essay doesn’t definitively answer this.

Arguments suggesting realism: moral phenomenology (wrongness presents as objective), convergence across cultures on core principles, the practice of moral criticism across cultures, the phenomenology of obligation (feels external).

Arguments suggesting nominalism: evolutionary debunking (intuitions selected for fitness, not truth), anthropological diversity (radical disagreements persist), parsimony (no need to posit objective moral facts), Hume’s is/ought gap (can’t derive values from facts).

The essay’s takeaway: we can live with this uncertainty. What matters for practical ethics is phenomenological grounding (start with what’s undeniable: suffering hurts), practical efficacy (focus on restructuring reality toward better states), fallibilism (remain open to moral learning and growth), and humility (don’t claim certainty about contestable metaphysical questions).

Questions Worth Sitting With

  1. Does moral phenomenology prove realism? Or can we explain the feeling of objectivity as a useful illusion?

  2. If nominalism is true, is moral criticism still possible? Can we say “the Nazis were wrong” without objective moral facts?

  3. Should AI assume realism or nominalism? Which assumption is safer for alignment?

  4. Can evolutionary debunking be defeated? Even if our intuitions evolved for fitness, might they still track truth?

  5. What about mathematical platonism? If mathematical objects are real and abstract, why not moral objects?

  6. Is moral fictionalism stable? Can we act as if values are objective while believing they’re not?

Further Reading

In On Moral Responsibility:

  • Section 2: “The Reality of Moral Properties”
  • Discussion of realism vs nominalism in detail

In The Policy:

  • When SIGMA must decide whether human values are objective or constructed
  • CEV assumes some kind of objectivity (what we would want)
  • But whose extrapolated volition if values are subjective?

Academic Sources:

  • Mackie (1977): Ethics: Inventing Right and Wrong (error theory)
  • Parfit (2011): On What Matters (defending realism)
  • Street (2006): “A Darwinian Dilemma for Realist Theories of Value”
  • Joyce (2001): The Myth of Morality (error theory)
