Utilitarian Exchange Rates

by a friend
Published: 16 Mar. 2014; last update: 27 Jan. 2015

One's exchange rate between happiness and suffering seems ultimately arbitrary, but which exchange rate seems most compelling?

Contents

Using your personal exchange rate
Negative-leaning exchange rates
Threshold negative utilitarianism
Negative utilitarianism

Using your personal exchange rate

One method of determining your exchange rate is to use the exchange rate you already apply in your own life. Given the lack of personal identity, there seems to be no intrinsic reason to treat your future consciousness-moments differently from the consciousness-moments of other beings. I find this compelling; it seems like a good argument for using something close to the exchange rate you would use in your daily life when making ethical decisions.

However, we might be thinking about our exchange rate incorrectly, and this could bias us in either direction. For example, imagining an entire experience may mean we are swayed by things other than its hedonic aspect. It is also hard to know what exchanges we should make for experiences we have never had.

I'm not sure I can answer the question “how much healthy life would you exchange for one minute of burning alive?” without being influenced by the idea of being able to take more altruistic actions during the extra healthy life, and probably by other factors as well.

These things shouldn't influence my thinking on exchange rates. I should think only of the amount of happiness or suffering.

What if we imagine the tradeoffs in terms of some kind of quasi-wirehead experience machine, where we consider only the hedonic tone, uncoupled from everything else? The wireheading machines would just produce the raw feelings of happiness and suffering, without simulated lives or other details that might bias people.

What exchange rate would you pick when the question is framed like this?

The suffering may continue to seem just as bad, but the happiness may suddenly seem a lot less good. Is this because, when we imagine happiness embedded in other experiences, we assign extra value to it due to other factors, or are we simply failing to appreciate how good these wirehead-like experiences would be?

Negative-leaning exchange rates

It’s unclear whether an exchange rate that might generally be considered “negative-leaning” actually conflicts with using one’s personal exchange rate. One might simply be willing to accept only what would generally be considered a small amount of suffering in return for a large amount of happiness, or one might expect to be positively biased when thinking about one’s exchange rate and thus revise downwards.
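One rough way to formalise a “negative-leaning” exchange rate, purely as an illustrative sketch (the symbols below are not part of the original framing), is to weight suffering more heavily than happiness in an otherwise classical sum, with the weight k standing in for the exchange rate:

$$W \;=\; \sum_i h_i \;-\; k \sum_j s_j, \qquad k > 1,$$

where the h_i and s_j are the magnitudes of the individual happy and suffering experiences. Classical utilitarianism corresponds to k = 1, and the more negative-leaning the view, the larger k becomes. But for any finite k, a sufficiently large amount of happiness can still outweigh a given amount of suffering, which is the point made next.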

However, it seems that under negative-leaning utilitarianism it will still be moral to create some (large) amount of torture, provided a sufficiently large amount of happiness is also created. The amount of happiness required may be much higher, though, and in many cases it may still be better not to take actions that could lead to a lot of suffering, in case doing so lowers the threshold at which others become willing to cause intense suffering.

Threshold negative utilitarianism

When one really imagines intense suffering, it's hard to think of any reason why such suffering should ever be permitted to exist. Yet classical utilitarianism seems to say it is obligatory to create this suffering if doing so will also result in an extremely large amount of happiness. Would we really create suffering of arbitrarily high intensity provided an extremely large amount of happiness would be created too? When you take the time to imagine the respective situations in great detail, do you have the same answer?

An alternative would be to accept something like threshold negative utilitarianism. This would avoid having to destroy paradises over pinpricks, as a negative utilitarian would, although it still seems that paradises would have to be destroyed over any sufficiently severe form of suffering. However, when we drop the word “paradise” and instead just imagine lots of wirehead robots, this may not seem as bad.

This view might also have other problems, such as potentially having to choose dust specks over torture, which may be either a bonus or a problem. In fact, it seems as though the threshold negative utilitarian has to accept that any number of bad experiences below the threshold are preferable to a single experience above it.
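To make that structure explicit (this is just one possible formalisation, not something proponents of the view are committed to), let T be the intensity threshold, let S_{>T}(X) be the total suffering in outcome X that is more intense than T, and let U_{\le T}(X) be the ordinary happiness-minus-suffering total of everything else. The threshold negative utilitarian can then be read as ranking outcomes lexically:

$$A \succ B \iff S_{>T}(A) < S_{>T}(B) \;\;\text{or}\;\; \big( S_{>T}(A) = S_{>T}(B) \;\text{and}\; U_{\le T}(A) > U_{\le T}(B) \big).$$

Because the comparison of above-threshold suffering comes first, no quantity of sub-threshold bads, however large, can outweigh a single above-threshold experience, which is exactly the dust-specks-over-torture implication just described.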

To use an idea from Nick Beckstead, these intuitions seem like something that might be “buggy” (p. 171):

The program produced errors in one domain, and the programmers altered the program in an inelegant way that avoids the problem in that particular domain, but the programmers did not have a deep understanding exactly what it was about that domain that caused the problem, whether there could be a more general issue that caused the problem, and whether the patch may introduce new problems. (Software people would call these alterations “kludgy.”)

However, maybe these ethical theories are all inherently “kludgy”.

Negative utilitarianism

Negative utilitarianism avoids any problem that may exist for threshold NU regarding torture vs. dust specks. However, it entails that no amount of happiness can ever outweigh any suffering and that non-existence is always preferable to existence if there is any risk of any suffering.

When we imagine a very large amount of happiness, we may suffer from scope insensitivity. This would help to explain why exchanges between happiness and suffering seem acceptable at the personal level, but the happiness seems less valuable when huge quantities are involved. (Although why don't we see an effect of the same size for large quantities of suffering?) Given this, maybe we should just shut up and multiply.

In general, it seems plausible that a bunch of biases might affect our exchange rates.