by Brian Tomasik
First written: 2005; last edit: 20 Feb. 2018
Even though consequentialists ultimately care about which outcomes are actually realized, they should judge actions on the basis of what consequences could reasonably have been anticipated at the time the action was taken. The aim is to encourage people in the future to act based on proper calculations, not to reward lottery winners for their luck. We can eliminate confusion about moral judgment by transforming the problem into one of incentive design -- enforcing rules that tend to elicit the best behavior, rather than rewarding or punishing the "intrinsic good/evil essences" of a person.
[A]ctions are evaluated in terms of the range of likely consequences. [...] The actual consequences of an action may be highly significant, but they do not bear on the moral evaluation of the action.
--Noam Chomsky, Hegemony or Survival: America's Quest for Global Dominance (2003), p. 187
The preceding quote might sound odd in an essay on utilitarianism. If the consequentialist goal is to maximize good outcomes, why are we judging actions on the basis of expectations? That sounds more like an appeal to intention-based morality....
The general object which all laws have, or ought to have, in common, is to augment the total happiness of the community; and therefore, in the first place, to exclude, as far as may be, every thing that tends to subtract from that happiness: in other words, to exclude mischief.
[...] But all punishment is mischief: all punishment in itself is evil. Upon the principle of utility, if it ought at all to be admitted, it ought only to be admitted in as far as it promises to exclude some greater evil.
--Jeremy Bentham, An Introduction to the Principles of Morals and Legislation, Chapter 13 (1789)
What, then, is the greater good that punishment accomplishes?
General prevention ought to be the chief end of punishment, as it is its real justification. If we could consider an offence which has been committed as an isolated fact, the like of which would never recur, punishment would be useless. It would be only adding one evil to another. But when we consider that an unpunished crime leaves the path of crime open not only to the same delinquent, but also to all those who may have the same motives and opportunities for entering upon it, we perceive that the punishment inflicted on the individual becomes a source of security to all. That punishment, which, considered in itself, appeared base and repugnant to all generous sentiments, is elevated to the first rank of benefits, when it is regarded not as an act of wrath or of vengeance against a guilty or unfortunate individual who has given way to mischievous inclinations, but as an indispensable sacrifice to the common safety.
--Jeremy Bentham, The Rationale of Punishment, Book 1, Chapter 3 (1830)
Of course, there are many cases in which punishment does not accomplish the aim of prevention, and in those cases, it is not justified. The point is merely that it can be justified when it works -- such as for deterring white-collar crime.
Moral judgment serves the same purpose as punishment: changing future behavior. Like punishment, saying that an action is "moral" or "immoral" serves the instrumental goal of causing good future outcomes. It does so by changing the immediate individual utility that people feel toward different options.
Example. Alice is giving medicine to her ill bunny. She neglects to read the dosage label and gives her bunny far too many pills. As a result the bunny becomes even more sick. While Alice was only trying to do the right thing, she ended up doing more harm than good. Ought we to express disapproval?
The answer depends, of course, on what disapproval would accomplish. If it would make Alice significantly more conscientious in the future, then we ought to scold her for making a bad decision. If it would make her more depressed and less able to care for her bunny, then we ought to console her instead.
There seems often to be a notion that people deserve, in some ultimate sense, punishment for their bad actions. But what good would it accomplish to increase the amount of pain in the universe by inflicting punishment, other than to deter future behavior? Once we recognize the logical incoherence of ultimate libertarian free will, the notion of just deserts largely dissolves.
Evolution endowed us with feelings of vengeance to serve as a credible threat of retaliation, even when exacting revenge provides no reparation for the harms committed. In this sense, irrationality can be rational. That said, now that governments administer punishment, feelings of revenge are less important for this purpose.
Rule. At time t0, Bob must make a decision among several options. At later time t1, we must decide whether to judge his decision at t0 right or wrong. Labeling a decision "right" will reinforce Bob's behavior; labeling it "wrong" will motivate Bob to choose a different and better option next time. If the conditions that prevailed at t0 remain true at t1 and into the indefinite future, then we ought to say that Bob acted correctly when he chose the option of maximum expected value. This is true irrespective of whether Bob's decision actually did maximize value ex post.
Example. Bob is trying to choose between two medical procedures to perform on his patient.
- Option A: Causes 10 minutes of pain with certainty.
- Option B: With probability 0.9, causes no pain; with probability 0.1, causes 20 minutes of pain.
Assuming no long-term consequences to the pain, it seems Bob ought to choose option B, because the expected minutes of pain are 2, rather than 10.
Bob chooses option B, but unfortunately, it so happens that this is one of the few times when the procedure does cause 20 minutes of pain. The actual outcome is worse than if Bob had chosen option A. Yet, we still ought to say that Bob's decision was a good one. Why? Because the purpose of evaluating the wisdom of a decision at all is not to change the past but to affect the future. If Bob were to make the decision over again at t1, he still ought to choose option B because option B still has the better expected value. Since we want to encourage Bob to choose option B in the future, we say that he was right to choose option B at t0.
Of course, this assumes that Bob's assessment of the probabilities was relatively accurate. If not, then the failure of option B to produce good consequences may be a signal to update his beliefs.
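The expected-value comparison driving this example can be sketched in a few lines. This is only a minimal illustration; the probabilities and pain durations come directly from the scenario above:

```python
# Each option is a list of (probability, minutes_of_pain) outcomes,
# using the numbers from Bob's example.
option_a = [(1.0, 10)]            # 10 minutes of pain with certainty
option_b = [(0.9, 0), (0.1, 20)]  # usually painless, occasionally 20 minutes

def expected_value(outcomes):
    """Probability-weighted sum over the possible outcomes."""
    return sum(p * v for p, v in outcomes)

ev_a = expected_value(option_a)  # 10.0 expected minutes of pain
ev_b = expected_value(option_b)  # 0.9*0 + 0.1*20 = 2.0 expected minutes

# The ex ante choice minimizes expected pain, regardless of how
# any single trial happens to turn out ex post.
best = min([("A", ev_a), ("B", ev_b)], key=lambda t: t[1])[0]
print(best, ev_a, ev_b)  # B 10.0 2.0
```

Even in the unlucky 10% of cases where option B produces 20 minutes of pain, rerunning this calculation at t1 still recommends option B, which is why we judge Bob's choice as correct.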
There's a classic debate in ethics regarding "moral luck." Philosophers sometimes confuse themselves by trying to assess the "intrinsic rightness or wrongness" of an action rather than taking the approach that's both easier and more helpful: looking at what kind of judgment-based incentive structures would produce the best results in the long run. I'll examine the first three of Thomas Nagel's categories of moral luck.
Constitutive moral luck
This is the idea that people's personalities are shaped by genes, childhood experiences, and other factors out of their control. Should someone then be blamed for acting badly as a result?
The answer is simple: Would blaming the person be the best way to rectify the behavior? In most cases I'd assume not. Blame and punishment tend to make people feel worse, act in even more inappropriate ways, and transfer these unsalutary environmental conditions to their children. Of course, in moderation, blame may be appropriate, and sometimes the consequences of a bad action are so severe that we need to impose harsh penalties mainly for the deterrence value. Even if a person's inclination toward murder was entirely due to abuse during childhood, say, we still need to threaten harsh sentences for murder insofar as this serves to deter an awful outcome.
Resultant moral luck
An example here is of two drivers who speed through a red light, one of whom happens to hit a child and one of whom doesn't. In legal terms, the penalties are worse when the driver hits the child, yet the moral recklessness of each actor prior to the accident seems to have been the same.
From the standpoint of setting up appropriate incentives, it appears naively that we should punish both drivers equally, since the goal is to prevent people from running red lights in the future. If running the red light was equivalent to flipping a coin that determined whether the child died, this would be the end of the story.
But in practice, there are complications:
- Maybe the driver who hit the child was more reckless than the driver who didn't. Perhaps the first driver didn't even look at the street, while the second driver did. That the first driver was the only one to hit the child is Bayesian evidence that she was more reckless and hence warrants harsher judgment.
- If we punish only for running the red light, then once a person decides to run the red light, he has no further incentive not to hit the child (apart from perhaps feeling bad about it, being late for his appointment, etc.). So once the driver decides to run the red light, he may not bother with precautions against hitting someone, which is not what we want. In general, when incentives target intermediate outputs, they can fail to track the final outcomes they're intended to optimize.
These two considerations help account for the intuition that the driver who hit the child does in fact deserve more blame.
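The first consideration can be made concrete with Bayes' theorem. The numbers below are purely hypothetical, chosen only to show the direction of the update: observing a hit raises our credence that the driver was the more reckless type.

```python
# Hypothetical model: among drivers who run red lights, some never look
# at the street ("reckless") and some at least glance ("careful-ish").
p_reckless = 0.5              # assumed prior that a red-light runner is reckless
p_hit_given_reckless = 0.10   # assumed chance of hitting someone if not looking
p_hit_given_careful = 0.02    # assumed chance if at least glancing

# Total probability of a hit, marginalizing over the two driver types.
p_hit = (p_reckless * p_hit_given_reckless
         + (1 - p_reckless) * p_hit_given_careful)

# Bayes' theorem: P(reckless | hit)
posterior = p_reckless * p_hit_given_reckless / p_hit
print(round(posterior, 3))  # 0.833 -- up from the 0.5 prior
```

Under these made-up numbers, seeing the hit moves our credence in recklessness from 50% to about 83%, which is one reason differential blame for the two drivers isn't simply irrational.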
Circumstantial moral luck
The example in Nagel's essay was of a German who became a Nazi and committed atrocities during Hitler's ascendancy. Had this person emigrated elsewhere before Hitler's rise to power, he would not have committed those deeds. Should we blame this person?
Yes, we should blame the person for the atrocities because violations need to be punished. Society has established a norm against committing atrocities; this norm is like a wire that buzzes if and only if it's touched. The Nazi touched the wire, so it's appropriate for the alarm to go off. Had he emigrated, he would not have touched the wire. Incentives of this kind work by imposing penalties whenever the norm is violated.
But what about the counterfactual person who emigrated? Should he also be blamed for being the type of person who would commit atrocities given the right circumstances? This depends on whether those character traits are likely to result in other harm in the alternate circumstances and whether blame would be the best means to rectify the situation. (The answer to the latter is probably no, but there may be exceptions.) In this case, the problem reduces to constitutive moral luck, addressed above.
Some American conservatives promulgate the notion of personal responsibility: "the idea that human beings choose, instigate, or otherwise cause their own actions. A corollary idea is that because we cause our actions, we can be held morally accountable or legally liable. Personal responsibility can be contrasted to the idea that human actions are caused by conditions beyond the agent's control."
I propose we strike a balance between personal and societal responsibility based on where blame would be most useful. For example:
- In the case of a corporate executive who knowingly violates crucial workplace-safety regulations in order to increase company profits, personal responsibility makes sense because blaming this person for her behavior is plausibly the best way to prevent similar actions in the future.
- Suppose it's true that high levels of lead significantly increase crime rates. Suppose a country can appreciably reduce lead levels by banning leaded gasoline. Should we ignore lead's contribution to crime and only focus on the "personal responsibility" of the individual criminals? No, because reducing lead levels appears to be an extremely effective way to reduce crime—perhaps more effective than wagging our fingers at a bunch of individual people who commit robbery and murder. Of course, we can also wag our fingers to some degree insofar as doing so is useful and not overly cruel.
As with the nature-vs.-nurture debate, the obvious answer to the personal-vs.-societal-responsibility debate is: "Both!" Many factors come together to cause outcomes, and many factors can be tweaked to change outcomes.
Norcross (2006)'s critique
Norcross (2006) discusses (p. 225) an idea that has been running through this piece: "An action is wrong if and only if it is optimific to punish the agent." Norcross (2006) quotes Sidgwick (p. 225):
From a Utilitarian point of view, as has been before said, we must mean by calling a quality, “deserving of praise,” that it is expedient to praise it, with a view to its future production: accordingly, in distributing our praise of human qualities, on utilitarian principles, we have to consider primarily not the usefulness of the quality, but the usefulness of the praise. (Sidgwick 1962: 428)
Norcross (2006) then says (p. 225):
The utilitarian will, of course, say the same about censure as Sidgwick says about praise: we should assess whether it is good to punish or blame someone by assessing the utility of doing so. Punishing and blaming are actions just like promise-keeping and killing and, like those actions, their value is determined by their consequences, their power to produce utility.
Norcross (2006) then discusses two counterexamples to the idea that the wrongness of an action is determined by whether punishment of the action is optimific. The first criticism is the following principle (p. 225):
If action x is wrong, then an action y done by someone in exactly similar circumstances, with the same intention and the same consequences, is also wrong.
Norcross (2006) considers this a counterargument, but I don't think it is. If the circumstances are "exactly similar", then whether it's optimific to punish x will also tell us whether it's optimific to punish y. If it's optimific to punish x but not y, then the circumstances weren't "exactly similar" after all.
Perhaps one could invent thought experiments in which the differences between x and y seem irrelevant to the wrongness of those actions, such as if you're abducted by aliens and told to punish x but not y or else everyone on Earth will be harmed. This kind of counterexample has the flavor of Norcross (2006)'s second counterargument (p. 226):
it can sometimes be optimific to punish a utility-maximizer. For example, imagine that Agnes has always produced as much utility as it was possible for her to produce. Moreover, none of her actions has led to any unfortunate consequences, such as someone’s untimely death or suffering. Punishing her as a scapegoat might nevertheless produce more utility than not doing so. It is absurd to say that she has done something wrong just in virtue of the fact that it is appropriate or optimific to punish her.
I agree that this is a good objection, and it does suggest that our ordinary conceptions of "wrongness" don't perfectly map onto "being optimific to punish". I guess my reply is: "So what? I'm not very interested in defining 'wrongness'. I'm just interested in what's optimific to do." Of course, using words in the ways they're normally understood can be optimific, so this question of how to define "wrongness" may incidentally have instrumental value.
"Should We Base Moral Judgments on Intentions or Outcomes?" continues this discussion with a more sophisticated analysis of the merits of different ways of evaluating actions.
The Internet Encyclopedia of Philosophy's "Consequentialism" article has sections on Expectable Consequentialism, Reasonable Consequentialism, and Dual Consequentialism, which make similar points as this article.