Summary
In evolved organisms, severe pains tend to be more intense than severe pleasures. However, when we consider the algorithmic implementations of suffering and happiness, the two processes seem fairly symmetric in principle, though there are certainly qualitative differences between them. Even if one thinks that arbitrary minds could experience pleasures as strong as the strongest pains, this doesn't imply that such pleasures would have equal moral gravity to their corresponding pains. Preventing suffering has a particular moral urgency that creating new happiness lacks.
Contents
- Summary
- Introduction
- Is extreme suffering inherently stronger than extreme happiness?
- Is extreme suffering more important than extreme happiness?
- Small is beautiful
- There is no right answer
- See also
- Acknowledgements
- Footnotes
Introduction
In the 1980s, a practice called "necklacing" emerged in South Africa. It involved wrapping a gasoline-filled tire around a victim and then setting it on fire, causing the victim to slowly burn to death, possibly over the course of 20 minutes. You can see images of necklacing here. And this page features a video of an African man being burned alive (though not using a tire).
I feel strongly that experiences like these are so bad they can't be compensated for by good things. No amount of happiness can "cancel out" the awfulness of extreme torture. But there are two possible interpretations of this claim:
- that extreme suffering, by its nature, is somehow inherently stronger than extreme happiness, or
- that extreme suffering is inherently incomparable with extreme happiness, but preventing suffering has morally dominant priority.
In this piece I defend #2 against #1.
Is extreme suffering inherently stronger than extreme happiness?
One reason to focus more on reducing suffering than creating happiness is that suffering seems easier to prevent. Given the hedonic treadmill, it's not obvious that any given attempt to improve the welfare of someone who's already well off will actually have any net effect.
Another motivation for focusing on suffering is that suffering seems more severe. In the life of a given organism, the most intense negative experiences typically outweigh the most intense positive ones. It's often observed that "Bad is Stronger Than Good" (negativity bias), and for most of us, there's no single pleasurable event that we would buy at the price of an equivalent number of minutes of torture. Carl Shulman explains this in terms of evolutionary fitness: The worst possible single event for an organism is to lose all future reproductive potential at once, while the best possible event is to win some chance of starting one pregnancy or to quickly gain power/status that may help to woo future mates.
But while bad dominates good for evolved creatures, need this be true for arbitrary minds? Or are suffering and happiness actually symmetric, with evolution merely concentrating the negative signal more densely than the positive signal for instrumental reasons?
Components of hedonic experience
Let's contrast some of the cognitive processes that comprise suffering against some that comprise happiness. The following list is not exhaustive:
| Effect | Suffering | Happiness |
| --- | --- | --- |
| Stimulus detection | Nociceptors (for heat, cold, texture, etc.) transmit alarm signals to the brain | Positive sensory receptors (for touch, smell, sight, etc.) transmit pleasant signals to the brain |
| Valence assessment | Neural networks combine various signals (from sensory detectors plus other knowledge and brain states) to determine that the current situation is painful | Neural networks combine various signals (from sensory detectors plus other knowledge and brain states) to determine that the current situation is pleasurable |
| Reinforcement learning | Negative valence information triggers reinforcement learning that reduces inclination to take similar actions in the future (unless the negative valence has already been predicted, in which case action disinclinations remain constant) | Positive valence information triggers reinforcement learning that increases inclination to take similar actions in the future (unless the positive valence has already been predicted, in which case action inclinations remain constant) |
| Hormones | Pituitary gland releases stress hormones | Pituitary gland releases "good" hormones |
| Physiology | Heart rate increases, pupils dilate, blood sugar rises, etc. | (I'm not sure of the exact details, but presumably some of the physiological effects differ from the pain case) |
| Memory | Lay down strong traumatic memories | Lay down strong joyful memories |
| Verbal responses | Screaming, crying, swearing with pain | Screaming, crying, swearing with pleasure |
| Behavior | Fight or flight, attempt to escape or recoil (avoidance) | Attempt to continue the pleasant experience (seeking) |
| Evaluative thoughts | Think to oneself, "This is awful!" | Think to oneself, "This is wonderful!" |
| Planning | Cogitate on how to make the pain stop | Cogitate on how to make the pleasure continue |
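The reinforcement-learning row encodes the standard idea that learning is driven by prediction error rather than by raw valence. Here's a minimal sketch of that point (my own illustration, using a generic delta-rule update with made-up numbers, not anything from the table's sources):

```python
# Minimal sketch of the prediction-error idea in the reinforcement-
# learning row above (a standard delta-rule update; the numbers are
# illustrative). Learning is driven by the *surprise* in the valence
# signal: a fully predicted punishment produces no further change in
# the inclination to act.

def update_inclination(inclination, predicted_value, received_value,
                       learning_rate=0.1):
    """Delta-rule update: shift the action inclination by the
    prediction error, not by the raw valence."""
    prediction_error = received_value - predicted_value
    return inclination + learning_rate * prediction_error

# Unpredicted pain (-1 received, 0 expected) reduces the inclination...
print(update_inclination(0.5, predicted_value=0.0, received_value=-1.0))   # 0.4
# ...but fully predicted pain leaves it unchanged.
print(update_inclination(0.5, predicted_value=-1.0, received_value=-1.0))  # 0.5
# Symmetrically, unpredicted pleasure raises the inclination.
print(update_inclination(0.5, predicted_value=0.0, received_value=1.0))    # 0.6
```

Notice that the update rule itself is sign-symmetric: nothing in this bookkeeping distinguishes pain from pleasure except the sign of the error.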
Some caveats are in order:
- As noted above, in most evolved creatures, the magnitudes of the pain effects in the above table will be stronger than for the pleasure components. The question at hand is, rather, whether in principle minds could be constructed such that pain and pleasure were symmetric. This is relevant when considering digital minds whose designs don't conform to evolutionary or other task-instrumental pressures.
- For simple reinforcement-learning agents, it's actually not clear when a stimulus counts as having positive rather than negative valence. For discussion of this point, see the section "What’s the boundary between positive and negative welfare?" on p. 18 of "Do Artificial Reinforcement-Learning Agents Matter Morally?" However, for sufficiently complex and human-like minds, the distinction between positive and negative valence should be more clear because there are many reference markers for comparison. For example, assuming that eating chocolate is a positive experience for a typical human, if a given reward shows the same kind of brain effects^a as eating chocolate but with greater intensity, we can consider it a more positive experience.
Are the components of suffering and happiness symmetric?
Following is a hypothetical dialogue between someone who defends symmetry and someone who denies it:
Symmetry: For most of the items on the list, the pleasure version can be obtained by "flipping the sign" of the pain version. For instance, negative valence for pain can be flipped to positive valence for pleasure.
Asymmetry: Some of the components don't easily "flip". For instance, consider hormones and physiology. The effects that happen as a result of suffering aren't obviously a mirror image of those that happen with pleasure. They're just qualitatively different.
Symmetry: Fine, but it's doubtful that physiological changes constitute the bulk of what makes suffering bad or happiness good. Most of the morally relevant processes concern evaluations in the brain.
Asymmetry: The distinction between avoiding and seeking seems fundamental to what separates pain from pleasure, and avoiding can be seen as a qualitatively different kind of impulse from seeking.
Symmetry: I don't know—they seem pretty symmetric to me. One says "Make it stop" while the other says "Make it continue". Also, consider the case of a simple reinforcement-learning agent whose rewards take the form of scalar numbers between 0 and 1. If we set some reward level as "hedonic zero"—say rewards of 0.5—then a "pleasure" magnitude of 0.2 can offset a "pain" magnitude of 0.2, because if the agent doesn't discount future rewards, it would be equally willing to take either of the following:
- two experiences with rewards of 0.5 (i.e., hedonic zero)
- one experience with reward of 0.3 (-0.2 from hedonic zero) and one with reward of 0.7 (+0.2 from hedonic zero).
Asymmetry: Ok, but it's arbitrary where you set the "hedonic zero" level, especially in such simple reinforcement-learning agents. If I set hedonic zero as the biggest possible reward (reward = 1), then most of the agent's experiences would be suffering relative to that zero level.
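To make this exchange concrete, here's a minimal sketch (my own illustration, not from any source) assuming an undiscounted agent whose rewards are scalars in [0, 1]. It shows both Symmetry's offsetting argument and Asymmetry's point that the zero level is a free interpretive choice:

```python
import math

# Toy model of the dialogue's example (my own sketch): an undiscounted
# agent with scalar rewards in [0, 1], where "hedonic zero" is a free
# parameter chosen when interpreting rewards as welfare.

def total_reward(rewards):
    """An undiscounted agent only cares about the sum of its rewards."""
    return sum(rewards)

def classify(rewards, hedonic_zero):
    """Label each reward as pleasure or pain relative to a chosen zero."""
    return ["pleasure" if r > hedonic_zero else
            "pain" if r < hedonic_zero else "neutral"
            for r in rewards]

# Symmetry's point: the agent is indifferent between two hedonically
# neutral experiences and a -0.2 "pain" paired with a +0.2 "pleasure".
neutral_life = [0.5, 0.5]
mixed_life = [0.3, 0.7]
assert math.isclose(total_reward(neutral_life), total_reward(mixed_life))

# Asymmetry's point: the zero level is arbitrary. Re-zeroing at the
# maximum possible reward reclassifies every experience as suffering,
# without changing the agent's behavior at all.
print(classify(mixed_life, hedonic_zero=0.5))  # ['pain', 'pleasure']
print(classify(mixed_life, hedonic_zero=1.0))  # ['pain', 'pain']
```

The agent's choices depend only on reward sums, so no behavioral test can settle where hedonic zero "really" sits, which is exactly Asymmetry's objection.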
Can subjective evaluation be symmetric?
In the above section I looked at the pieces that comprise hedonic experience from a third-person perspective, but another way to look at the contrast is by subjective reports. People can comment on the qualitative and quantitative nature of their pains and pleasures and compare them against each other. In theory, could a mind have a positive experience that it would subjectively judge as being equal and opposite to, e.g., the experience of necklacing?
A trivial answer is "yes, because in the realm of programmed minds, any subjective evaluation is possible". For instance, a computer can print out the statement that "This experience is worth being tortured for." But in order for subjective evaluations to be meaningful, they have to track underlying brain events in some deeper way. Human brains seem to do this reasonably well, though even human reporting mechanisms may not flawlessly track the exact details of brain operation—due to noisy or skewed aggregation of lower-level events, cognitive biases, top-down expectations, social pressure, or whatever else.
Notwithstanding all those caveats, it does seem plausible that a theoretical mind could subjectively report in a meaningful way that some pleasure was as intense as the pain of torture. Of course, this depends on what counts as a "meaningful" self-report. Presumably a self-report would be meaningful if it adequately considered the components surveyed in the table above rather than ignoring them and pretending via higher-level thoughts that the experience was more/less intense than the underlying brain states implied.
Can implied tradeoffs be symmetric?
Another way to assess welfare is by revealed preferences: What kinds of tradeoffs does the organism make? This works to some degree in biological creatures, whose choices may mirror their hedonic valuations, though it's far from perfect, as witnessed by akrasia and other forms of decisional myopia. For instance, sharing drug needles that may carry HIV presumably doesn't yield enough pleasure to outweigh the expected future suffering, yet people do it anyway. So behavioral tradeoffs are not an ideal guide to hedonic comparisons.
That said, insofar as we do pay any heed to an organism's choices, it seems plausible that a mind could be built that is willing to endure torture, including all the components that torture involves in human brains, in order to also experience some immense pleasure. As I noted when discussing subjective reports, it's important to make sure that these choices aren't arbitrarily imposed but rather reflect the realities of the brain dynamics underlying the experiences.
Minsky (2006) on pleasure/pain symmetry
Minsky (2006) offers a list of the functions of pleasure and pain similar to the table above. Some of the functions of pain are as follows (p. 67):
Pain makes you focus on the body parts involved.
It makes it hard to think about anything else.
Pain makes you move away from its cause.
It makes you want that state to end, while teaching you, for future times, not to repeat the same mistake.
And for pleasure (p. 68):
Pleasure often makes you focus on the body parts involved.
It makes it hard to think about anything else.
It makes you draw closer to its cause.
It makes you want to maintain that state, while teaching you, for future times, to keep repeating the same "mistake."
Minsky (2006) comments (p. 68): "All this suggests that both pleasure and pain engage some of the same kinds of machinery; both constrict one's range of attention, both have connections with how we learn, and both reduce the priorities of almost all one's other goals."
Is extreme suffering more important than extreme happiness?
Based on the above discussion, it looks like many (though certainly not all) of the components that constitute suffering experiences could be mirrored in corresponding happy experiences. Even if this is the case, I maintain a strong intuition that reducing suffering demands greater moral urgency. This section elaborates on why.
Incommensurability of viewpoints
Toby Ord contends that it's wrongheaded to weight suffering more heavily than the organisms experiencing it do:
It would suggest to me that you are illicitly using your tastes as to how bad things are for people instead of theirs (by tastes I mean how much they would enjoy or suffer from different stimulus, not what they believe before they try it). This would be analogous to me saying that feeding people mushrooms is bad for them (and the world) just because I don't like mushrooms. We should use their scales of how much having a particular thing done to them makes them suffer, not ours.
Valuing experiences according to the scale of the organism itself seems like an appealing approach. But what is the organism's own scale? Revealed preferences? Verbalized subjective comparisons? I'll assume it's one of these two.
Suffering can get so bad that an organism would, in the moment, rather trade away all future pleasure in return for making the pain stop now and would report that moment as stronger than the sum of all future pleasure. Perhaps this is the experience of some Muslims tortured by the CIA: The Muslims may believe that if they give in, they'll lose out on an eternity in heaven. (I'm just speculating here, but this is plausibly true for a few of them.) O'Brien explains this in the torture scene of 1984 (a scene that influenced my moral views on this topic):
'By itself,' he said, 'pain is not always enough. There are occasions when a human being will stand out against pain, even to the point of death. But for everyone there is something unendurable—something that cannot be contemplated. [...] You will do what is required of you.'
And of course, the reverse situation could hold: A pleasure might be so intense that the organism, in the moment, would agree to an eternity of suffering in order to continue the pleasure for a while longer. The judgments of organisms in these two states can't be reconciled. There's no compensation sufficient for someone in the throes of extreme agony or extreme bliss.
One might resolve the situation by considering the assessments of someone in a cool-headed state: Would she agree to be necklaced in return for some sufficiently large reward later? But the problem is that the person-moment making the choice now is different from her person-moment during torture, who would regret her former self's choice to be tortured.
We might give more weight to the judgments of those who have already been tortured, since their evaluations of the experiences are presumably more accurate. Unfortunately, here too people may forget the severity of their past suffering. This empathy gap often happens to me when I think back on my own past experiences of intense suffering: I find myself unable to conjure up feelings of how awful I felt at the time.^b
There is no impartial point from which to judge these suffering-for-happiness trades. Everyone has to take a stand somewhere, and my stand is to side with the agony of those in the moment of torture.
Objection: Revealed preferences are faulty
The above discussion replied to Ord on his terms—making valuations according to an organism's own scale. But Andrew McKnight reminded me of how unreliable an organism's choices are for assessing actual comparisons of hedonic value, such as in cases of addiction or "the heat of the moment". Maybe we should back off from looking at an organism's behavior and instead focus on the actual brain processes going on. In that case, it's less relevant that sufficiently extreme torture would force someone to give in, since the magnitudes of the physiological/algorithmic changes in extreme torture presumably aren't infinitely greater than those in cases of moderate suffering (although we can, if we choose, qualitatively regard extreme suffering as infinitely worse). The next section considers a different intuition pump for giving priority to extreme suffering.
Creating happiness is fundamentally less morally important
Consider the following thought experiment, which can serve as a litmus test for one's intuitions about the priority of suffering prevention:
One tortured, two blissful. You find yourself with a button in an otherwise empty room. If you leave the button alone, nothing happens. If you press the button, one tortured person-like mind and two blissful person-like minds will be created, with the torture and bliss being roughly symmetric in the kinds of ways discussed above (except for whatever is intrinsically asymmetric between pain and pleasure). You would be helpless to do anything to aid the tortured person. Do you press the button?
It feels obvious to me that the button should not be pressed. I suspect that at least half of people would feel the same way, though actually surveying opinions on this would be interesting. The implications for altruism are clear: We can either push our efforts and policies in directions that will prevent the creation of future horrors, or in directions that will create massive amounts of happiness alongside some possible horrors.
Objection: Hard to imagine
One objection to my thought experiment might be that people can't imagine happiness equal in magnitude to torture because, for biological humans, such pleasures don't seem to exist (except maybe in a person with some rare mutation). To resolve this problem, we could reduce the magnitudes of pain and pleasure in the thought experiment to a point where both can be imagined. I suspect that many people would still prefer preventing the suffering over offsetting it with happiness.
[Image: a depiction of more imaginable events: one person enduring great fear during interrogation, and two people enjoying the bliss of kissing upon being reunited.] Fear during interrogation is probably not nearly as bad as, e.g., the pain of being burned alive, though it's still probably more intense per second than the joy of being reunited. So this illustration shouldn't be taken too literally.
Objection: Inequality intuitions
Maybe people wouldn't press the button because if they did so, the resulting situation would feel unfair: It's not equitable for two people to be happy and one to suffer. I suspect that more people would press the button if doing so created a single person alternating between one hour of torture and two hours of bliss.
However, I don't think considerations of personal identity help here. In the moment of torture, the tortured person wishes to not exist, even though she knows she'll experience bliss later. That person-moment is never compensated; it just exists timelessly in a state of agony. So I don't think condensing three people into one adequately reduces the inequity of the situation.
Perhaps evaluations would be different for small pains. A single person can agree to endure some pain in the knowledge that doing so will secure more future pleasure. I don't have much moral objection to this case because the person-moment enduring the suffering is consenting, unlike the person-moment whose agony is so severe that she'd give up anything to make it stop. (See "Consent-based negative utilitarianism" below.) So even if you'd press the button for this particular case, I don't think that answer translates to the scenario involving unbearable torture.
Objection: Active vs. passive intervention
Maybe some object to pressing the button in the above thought experiment because this involves "actively" creating the tortured person, leaving blood on the hands of the button presser. If instead the three people already existed, and the button would make them vanish, perhaps fewer people would prefer nonexistence. This may be, but it's worth noting that most expected future suffering will result from humans actively expanding computational power throughout the galaxy, in which case the original thought experiment is more apt.
Consent-based negative utilitarianism?
I have an intuition, first inspired by a friend of mine in 2006, that small pains don't matter much but that some suffering can be so bad as to be incapable of compensation. With this kind of "threshold utilitarianism", it's unclear where to set the threshold past which suffering is so bad that it can't be outweighed. One approach I've suggested is consent: Would the person-moment experiencing the suffering agree to continue it in order for future person-moments to obtain something they value? If this is true at all points in time during the suffering, then the suffering doesn't pass the threshold of unbearableness and thus can be outweighed by happiness.
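As a toy formalization of this consent rule (my own sketch; the person-moment model and the consent predicate are illustrative assumptions, not anything defined elsewhere in this piece):

```python
# Toy formalization of consent-based threshold negative utilitarianism
# (an illustrative sketch, not a definitive account).

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PersonMoment:
    intensity: float                  # negative values = suffering
    # Given the future value on offer, does this moment consent to
    # continuing the experience?
    consents: Callable[[float], bool]

def tradeoff_is_acceptable(moments: List[PersonMoment],
                           future_value: float) -> bool:
    """The suffering can be outweighed by happiness only if every
    suffering person-moment consents to continuing, given the future
    value it buys. A single non-consenting moment vetoes the deal."""
    return all(m.consents(future_value)
               for m in moments if m.intensity < 0)

# A bearable pain: the moment consents if the payoff is large enough.
bearable = PersonMoment(-2.0, lambda v: v >= 2.0)
# Unbearable agony: no future payoff induces consent.
unbearable = PersonMoment(-100.0, lambda v: False)

print(tradeoff_is_acceptable([bearable], future_value=10.0))    # True
print(tradeoff_is_acceptable([unbearable], future_value=10.0))  # False
```

On this formalization, the threshold of unbearableness isn't a fixed intensity level; it's simply whatever intensity makes a person-moment's consent function return False, which is where the discontinuity discussed below arises.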
This approach would seem to allay Toby Ord's concerns about negative utilitarianism overriding personal choice. It would also answer Ord's claim that threshold negative utilitarianism implies "a very strange discontinuity in suffering or happiness" in which "there are two very similar levels of intensity of suffering such that the slightly more intense suffering is infinitely worse". In consent-based threshold negative utilitarianism, that discontinuity arises naturally at the intensity of suffering where a person would press the "Stop" button rather than the "Continue" button during a painful experience endured for the sake of future reward.
However, consent-based negative utilitarianism has its problems. It might still be vulnerable to a pinprick argument if the person being pricked got no compensation from the process and thus had no selfish reason to accept the pinprick. One possible revision would be to suppose that the single person would experience all pleasures and pains of the world in his future and then ask whether those would, in the moment of pain, be enough to induce consent to the present pain. Such a setup would typically avoid rejecting paradise due to a pinprick but would still reject paradise when the pain became sufficiently extreme.
Of course, these judgments would vary from person to person. For instance, some might still reject a pinprick because they simply don't care about the pleasure of paradise. And the hypothetical process of one person with a fixed neural architecture experiencing all of the diverse pleasures and pains in the world, including those by non-humans, is not exactly coherent.
A consent-based approach also runs into the obstacles that
- minds sufficiently simpler than those of humans may not be able to express consent or to weigh future gains against present costs as ably
- even if the ruling coalition of a mind expresses consent, there will likely still be dissenting factions in the underlying neural populace who don't consent.
Elliot Olds proposes an interesting alternative formulation of consent-based negative utilitarianism. Suppose an agent decides to accept some period of torture in exchange for some other reward. However, during the moment of torture, the agent's suffering is so intense that it would give up anything to make the pain stop. According to regular consent-based negative utilitarianism, nonexistence of this agent would have been better than existence. Olds's alternative is that rather than choosing between continued torture and nonexistence, the agent is also allowed to choose to reset itself, including its brain state, back to the prior point where it made the decision to accept torture. Because the agent's brain state is reset, it will make the same choice as before. But this "reset" option allows the tortured agent the "immediate gratification" of escaping torture right now without repudiating the overall decision to be tortured in return for greater reward. If the agent chooses to reset rather than not exist, it can be said to implicitly endorse the torture/reward tradeoff even during the moment of torture. (Personally, if I were being tortured in this scenario, I expect I would choose nonexistence because I wouldn't want my current situation to repeat later. But others might feel differently.)
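To make the structure of Olds's proposal concrete, here's a small simulation sketch (my own construction; the option names and the loop cap are illustrative). Since a reset restores the exact pre-decision brain state, a reset-choosing agent deterministically re-enters the same tradeoff:

```python
# Toy simulation of Olds's "reset" variant (my own illustrative
# construction). A reset restores the exact brain state at the decision
# point, so the agent deterministically makes the same choice again.

CONTINUE, NONEXIST, RESET = "continue", "not exist", "reset"

def run_trade(mid_torture_choice, max_resets=1000):
    """Simulate the torture-for-reward deal under a fixed mid-torture choice."""
    resets = 0
    while True:
        # The agent accepts the deal at the decision point, is tortured,
        # and is then offered a choice mid-torture.
        if mid_torture_choice == CONTINUE:
            return "endures the torture and collects the reward"
        if mid_torture_choice == NONEXIST:
            return "prefers nonexistence: the deal is repudiated mid-torture"
        # RESET: escape the current torture-moment, roll back the brain
        # state, and (deterministically) re-enter the same deal.
        resets += 1
        if resets >= max_resets:
            return ("keeps resetting: escapes each torture-moment while "
                    "implicitly endorsing the overall tradeoff")

print(run_trade(RESET))
```

The loop makes Olds's point visible: the reset option lets the tortured agent reject the present pain without ever rejecting the deal itself.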
Why focus on consent to suffering?
Consider a scenario where a person initially judges that he would accept some brief period of torture in return for long periods of future happiness. Consent-based negative utilitarianism says that if the person changes his mind during torture and is willing to give up all future happiness to make the torture stop, then the future happiness can't compensate for the tortured person-moment, and the "torture + future happiness" deal should be rejected.
But, as several readers of this piece have pointed out to me, we can imagine a symmetric situation with respect to extreme pleasure. For example, imagine that a person initially judges that some brief period of extreme pleasure isn't worth it if it will cause a long period of future suffering. However, suppose the person happens to get a taste of the extreme pleasure, and during that moment, he changes his mind and is willing to accept any amount of future suffering in order to make the pleasure continue. (One example of this might be religious fundamentalists who have premarital sex despite the possibility they'll be eternally tortured for doing so. That said, one can dispute whether continuing to have sex in such cases is driven more by pleasure or by aversion to the pain of stopping.)
In light of this example, why not endorse "consent-based positive utilitarianism", which says that a tradeoff of "brief extreme pleasure + lots of future suffering" can only be rejected if all person-moments of the proposed experience would agree with rejecting it? Consent-based negative utilitarianism rejects any tradeoff where at least one person-moment says "This is too painful. Make it stop!" Meanwhile, consent-based positive utilitarianism endorses any tradeoff where at least one person-moment says "This is too pleasurable. Make it continue!"
Consent-based positive utilitarianism is certainly a valid (if counterintuitive) viewpoint. My reply is that I side with consent-based negative utilitarianism rather than positive utilitarianism just because negative utilitarianism aligns with my emotion-driven intuitions about what's morally important.
Consent-based views aren't limited to pleasure and pain. One could imagine a hypothetical agent who, during at least one moment, would give up any amount of future happiness in order to prevent a single paperclip from getting destroyed. In general, there's a multiplicity of possible moral viewpoints that one might hold, and we simply have to choose which one(s) align best with our own intuitions.
Small is beautiful
Fundamentally, I don't understand what's so compelling about happiness that makes it morally urgent to create de novo. Sure, many people crave pleasure, but if they don't get it, they suffer as a result. So pleasure in those cases prevents suffering. But what makes it so important to create new beings with happy lives that it's worth creating some tortured beings along the way?
Small is beautiful. Nonexistence is wonderful; there's nothing wrong with it. People who never exist aren't sad that they don't exist.
You might say that if such people existed, they would be glad to have been created. Sure, but if paperclip maximizers existed, they would be glad if you filled the universe with paperclips. I don't therefore find it morally urgent to fill the universe with paperclips. What's the morally relevant difference between desires for pleasure and desires for paperclips? As far as I can tell, it's just the ideological prejudices of the person making the moral evaluation. (Of course, my giving precedence to desires not to suffer is also an ideological prejudice—it's just one that feels more right to me.)
The idea that it would be a cosmic loss for the universe not to be filled with happy experiences strikes me as almost as bizarre as thinking it a great loss not to fill the universe with paperclips. I'm willing to concede some ground in favor of happiness creation because some already existing people really care about that, but I don't intuitively share their compulsion. As far as I'm concerned, never having existed, and hence never desiring, is as blissful as heaven. (Update: Sometimes I do have the feeling that it would be cool to create a bunch of meaningful and wonderful experiences on the part of new minds. But while this would be great if it had no risk of downside, it doesn't feel morally urgent. The universe will be fine without those extra happy experiences, whereas someone enduring torment is not fine.)
There is no right answer
The choice of whether, and if so how much, to value creating new happy people varies from person to person and even within a given person from moment to moment. Sometimes when I'm feeling particularly joyful or accomplished, I can feel a small twinge of the sense of loss that many people feel when imagining that such an experience had never existed. However, this thought is quickly overridden when I remember how serious torture would be and how trivial all the positive experiences in my life would appear by comparison.
My suffering bias is very likely based on the biological fact that bad is stronger than good. Had I grown up in a world where the worst pains were only moderate and 5 seconds of the best pleasures was stronger than all the pains of one's whole life combined, I might indeed find pleasure more important than suffering, though it's hard to say for sure. But this observation doesn't seem to undercut my negative bias. After all, if I had developed in a society that valued paperclips above all else, my moral values would plausibly be biased toward paperclips. And so on.
Counterfactually, my brain could have been modified in vast numbers of ways toward vast numbers of values. But it wasn't, and I now care about what I now care about. Other people now care about what they now care about. There is no right answer—just differing intuitions, sometimes pushing against each other. To me, extreme suffering seems like the worst thing in the world. To other people, failure to create astronomical numbers of happy lives feels equally horrifying. That's just the way it is.
See also
"Pleasure, Unpleasure, and Pain: Symmetries and Asymmetries" in this paper.
Acknowledgements
This piece draws from ideas by Carl Shulman, as well as some of my effective-altruist friends in Basel, Switzerland. Conversations with Simon Knutsson also refined my discussion. One example was inspired by David Althaus.
Footnotes
a. These will mostly be neural/algorithmic effects. One point that took me a long time to understand is that pleasure and pain in brains don't reside in "chemicals of emotion", despite terminology sometimes used in popular media. Rather, emotions are the activation of various neural patterns and subroutines, which often use molecules of various types for signaling, making chemicals helpful markers of the neural activity of interest. In a similar way, brain activity is not constituted by blood flow, even though we can use blood flow to track brain activity in fMRI. (back)
b. Sometimes people who suffer enormously do look back and judge that their suffering wasn't worth it. Consider the case of Dax Cowart, who suffered so badly that he wished he had been killed, even in retrospect. (back)