by Brian Tomasik
First written: 22 Aug. 2014; last update: 1 Mar. 2017
Sometimes it's claimed that we should assume moral realism is true because if it's false, nothing matters, but if it's true, what we do does matter. I reject this argument on the grounds that what we do still matters (to us) even if moral realism is false.
Elsewhere I've sketched why I find metaphysical moral realism to be implausible. This piece replies to a prudential argument according to which we should act as if moral realism is true even if it's probably false.
The moral-realism wager
There's an old argument in defense of moral realism that runs as follows. Start with the premise of
Nonrealist Nihilism: If moral realism is false, then nothing matters.
Now, suppose you think the probability of moral realism is P. Then when you consider taking some action, the expected value of the action is
P * (value if moral realism is true) + (1-P) * (value if moral realism is false)
= P * (value if moral realism is true) + (1-P) * 0,
where the substitution with 0 follows from the Nonrealist Nihilism premise. Therefore, we can prudentially assume that moral realism is true.
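To make the arithmetic concrete, here is a minimal sketch in Python. The probability and payoff numbers are made-up illustrations, not part of the argument:

```python
# Sketch of the wager's expected-value arithmetic.
# All numbers below are purely illustrative assumptions.

def expected_value(p_realism, value_if_realism, value_if_nonrealism):
    """Expected value of an action across the realism/nonrealism split."""
    return p_realism * value_if_realism + (1 - p_realism) * value_if_nonrealism

# Nonrealist Nihilism sets the nonrealism branch to 0, so even a modest
# P leaves the realism branch as the only contribution:
ev = expected_value(0.25, 100, 0)
print(ev)  # 25.0 -- entirely from the realism branch
```

With the nonrealism branch zeroed out, any action's expected value scales with its value conditional on realism, which is what licenses the prudential assumption.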
I don't know who originated this point. In 1997, Eliezer Yudkowsky held a version of it in the context of superintelligence. More recently, some Oxford effective altruists have defended it. "Meta-Ethical Uncertainty" on Felicifia has further discussion.
Reply 1: Moral realism is confused
My first reply to this argument was to insist that moral realism wasn't even well formed, and hence that introducing it into a wager was nonsense. I felt it was similar to the following:
Omnibenevolent Square-Circle Wager: Let Q be the probability that if you pray to an Omnibenevolent Square Circle, it will do more good in the world than could be done by any other possible outcome infinitely many times over. The expected value of praying is then Q * (something huge) compared against a much smaller expected value for any other action, so we should act as if the Square Circle exists.
In general, one can introduce any number of random wagers about absurd ideas, but in most cases it's a waste of time and probably misleading to take them seriously.
The counter-argument to this is as follows: Moral realism is not just a random hypothesis invented out of thin air; it's something most of the world actually believes. Even among professional philosophers, 56.4% subscribe to moral realism, and only 27.7% embrace anti-realism. Moreover, moral realism does not mainly stem from confusion about theism: in the same survey, 72.8% of philosophers came down as atheists, and only 14.6% as theists. Even if you can't understand what moral realism means, maybe that's just a limitation of your cognitive abilities. Maybe you're making a logical mistake. The probability of that can't be too small.
This argument for taking peer disagreement seriously is based on a premise:
Modesty: If other smart people similar to you reach a different conclusion on a topic they're qualified to assess, you should maintain reasonable probability that they're more correct than you.
Reply 2: Morality still matters given nonrealism
Suppose one accepts the Modesty premise and gives some nontrivial probability to moral realism being correct. Does the moral-realist wager then follow? I think not. The reason is that even if moral realism is false, what happens in the world can still matter to us. We create our own value, and this can be at least as important as values that would have come from "on high".
Perhaps the would-be moral realist rejects this notion. After all, the Nonrealist Nihilism premise with which we began this discussion asserted that if moral realism is false, nothing matters. But the Nonrealist Nihilism premise is not universally shared. Indeed, most moral nonrealists reject it and feel that things still matter (to them and others) even if there's no such thing as moral truth, in a similar way as most atheists still care about behaving nicely even without belief in God. So Modesty seems to contradict Nonrealist Nihilism.
If Nonrealist Nihilism has a nontrivial chance of being false, then the moral-realism wager doesn't go through, because the expected value of our actions given nonrealism is at least a nontrivial fraction of their expected value given realism (depending on the probability of Nonrealist Nihilism and its negation). It's worth pointing out that moral values across different axiologies don't have a definite exchange rate, but suppose we've fixed some exchange rate for purposes of this discussion.
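The point can be illustrated by extending the earlier arithmetic. Again, the numbers and the exchange rate are made-up assumptions for illustration only:

```python
# If things can still matter under nonrealism, the nonrealism branch is no
# longer 0, and the wager loses its force. Numbers are purely illustrative,
# and we stipulate a fixed exchange rate between the two axiologies.

def expected_value(p_realism, value_if_realism, value_if_nonrealism):
    return p_realism * value_if_realism + (1 - p_realism) * value_if_nonrealism

# Same action, with the value under nonrealism set (say) to half its
# value under realism at the stipulated exchange rate:
with_nihilism = expected_value(0.25, 100, 0)      # nonrealism branch zeroed
without_nihilism = expected_value(0.25, 100, 50)  # nonrealism branch counts
print(with_nihilism, without_nihilism)
```

Once the nonrealism branch contributes, the expected value no longer hinges on realism being true, so acting as if realism is true is no longer forced on us.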
We could say that the moral-realism wager is question-begging. It asserts that unless moral realism is true, we have no reasons for acting one way or another, so we should assume it's true. But why should moral realism have a monopoly on reasons for action? Compare to another argument: If God doesn't exist, we can't justify any of our beliefs, because without divine revelation there is no true knowledge; therefore, we should assume divine revelation is true, because our actions are completely uninformed if it's false. The same reply works in both cases: You're just declaring by fiat that we can't have reasons for action or knowledge without some mysterious thing. But why would we ever believe that? You have to assume that moral realism does have special bearing on actions before the moral-realist wager gets off the ground. Likewise, you have to assume that only divine knowledge is genuine before the argument for God gets off the ground.
Quibble: Truth by definition
Jacy Reese replied to this piece as follows:
Moral realism is, by definition, what you *should* worry about and what does *matter.* You can claim you wouldn't care about moral realism if it were true, or you'd have some other basis for actions, but that doesn't mean moral realism doesn't matter.
[...] I disagree that the modesty argument "fails." The modesty argument doesn't intend to claim that moral realism should be personally compelling, so it can't fail to do that since it never tried[.]
Technically Jacy is right. If moral realism is defined by fiat as "what matters objectively" (whatever that means), then Nonrealist Nihilism is true by definition. Modesty toward the views of moral non-realists won't change that. But this definition is so divorced from what we ordinarily mean by morality (namely, non-selfish considerations for action) that using it will only lead to misunderstandings. It would be as if we insisted that "I have you in my heart" meant actually having a full-scale copy of a person inside one's heart. By that definition, no person has any other person in his/her heart.
The situation is similar to that of free will. We have the "Varieties of Free Will Worth Wanting", as Daniel Dennett says, just like we have the varieties of morality worth wanting -- namely, other-regarding personal reasons for action. If we insist that free will must be defined in the libertarian sense, then determinism would rule out free will by definition, and no amount of modesty toward compatibilists can change a definition. But saying "determinism implies we don't have free will" leads to terrible misunderstandings, because people confuse the weird libertarian definition of free will with the action-relevant compatibilist one. Likewise, talking about morality as what matters objectively (whatever that means) confuses most people who are really talking about "what matters to us" when they reason about moral truths.
You might reply: "But I really do care about objective morality (whatever that means), not just personal morality." I'm skeptical. A main reason your brain developed positive associations with "objective morality" is that in practice what people mean by it is "human morality", and society's endorsement of ethics is really an endorsement of human-created ethics. Suppose that objective morality only commanded that you eat live human babies. If you had been born in a world where it was known that objective morality imposed that obligation, you would probably regard objective morality as an evil to be eliminated. So why do you claim to value objective morality now? There's a tiny but nonzero chance that objective morality will turn out to command you to eat live babies. Is your commitment to it really so strong that you'd submit to its authority even if such a discovery were made? Why?
If you really do only care about the "moral truth" and nothing else, then the moral-realism wager may work. But only caring about moral truth is itself a subjective judgment call that you're making, just like "caring about reducing suffering regardless of moral truth" would be. This page says:
Fundamentally, Sartre believes mankind cannot escape responsibility by adopting an external moral system, as the adoption of such is in itself a choice that we endorse, implicitly or explicitly, for which we must take full responsibility. Sartre argues that one cannot escape this responsibility, as each attempt to part one's self from the freedom of choice is in itself a demonstration of choice, and choice is dependent on a person's wills and desires.
Personally, I don't much care what the moral truth is even if it exists. If the moral truth were published in a book, I'd read the book out of interest, but I wouldn't feel obligated to follow its commands. I would instead continue to do what I am most emotionally moved to do. Thus, the moral-realism wager fails because it doesn't really matter to me if moral realism is true or not!
What's the harm with moral realism?
Eliezer Yudkowsky begins his "Coherent Extrapolated Volition" paper with a warning: "Beware of things that are fun to argue." Moral realism is a fun philosophical topic that inevitably generates heated debates. But does it matter for practical purposes?
For the most part, no. On account of this, I don't make moral-(non)realism discussions prominent, though I admit being tempted to have fun by engaging with them when they arise. It has even been suggested that "'Moral Realism' May Prompt Better Behavior", in a similar way as belief in libertarian free will or God may do as well. So maybe moral realism can be considered one of those harmless fictions that help people live better lives.
One case where moral realism seems problematic is regarding superintelligence. Sometimes it's argued that advanced artificial intelligence, in light of its superior cognitive faculties, will have a better understanding of moral truth than we do. As a result, if it's programmed to care about moral truth, the future will go well. If one rejects the idea of moral truth, this quixotic assumption is nonsense and could lead to dangerous outcomes if taken for granted.
In addition, recognizing moral nonrealism may have subtle effects in other areas. Nonrealism has made me more aware of the ultimate arbitrariness of moral judgments, as a result of which I've probed more novel moral views than I would have if I thought there was a right answer that everyone else had probably already figured out. When making moral assessments, nonrealism helps me reflect on myself as a neural network combining inputs from many parts of my brain. If I thought there was a "right answer", I might dismiss some parts of my intuitions as logically faulty or wrong-headed. If I were a moral realist, I might also force my ethical views to be more abstract and concise, heeding Occam's razor as one would do in science. In contrast, a nonrealist morality can better tolerate complexity, subtlety, pluralism, and emotional depth.