Why I Don't Focus on the Hedonistic Imperative

By Brian Tomasik

First written: 5 Oct 2016. Last nontrivial update: 20 Jul 2017.

Summary

This piece enumerates several reasons why I don't prioritize bio-related technologies as a strategy for reducing suffering. Biology seems likely to be replaced by machines within centuries, and even if not, influencing political/social trajectories of the future may be more important than pushing on relatively crowded technological developments that many rich people already want for selfish reasons. Unfortunately, I also expect that high-tech solutions for helping wild animals are less promising than merely reducing wild-animal populations by expanding those human activities that already have this effect. That said, promoting a general movement of suffering reducers seems quite valuable.

Contents

  Introduction
  Far-future considerations
  Machine suffering
  Influencing power dynamics is important
  Speeding up tech can be bad
  Suffering and empathy
  Shorter-term considerations
  Human-focused HI is not a neglected area
  Animal-focused HI seems hard
  Reducing invertebrate populations is probably more effective
  HI's suffering focus is important
  See also

Introduction

David Pearce's The Hedonistic Imperative had a major influence on my life. That book was one of the key factors that led to my focus on suffering as the most important altruistic priority. However, on a practical level, my approaches to reducing suffering tend to have little connection with the technological visions that Pearce expounds, such as improved drugs, genetic engineering, brain research, and bio/nano/info technologies in general. I'll refer to these technological approaches as the "Hedonistic Imperative" project (HI), which is not italicized in order to distinguish it from Pearce's online book.

While I haven't analyzed proposals for HI in detail, I generally suspect that technologies for reducing biological suffering aren't the optimal place to focus resources for several reasons, which I discuss in the rest of this piece.

Note: Any misconstruals of Pearce's views are my fault; I last read The Hedonistic Imperative in 2006. And the present essay is only an expression of amiable epistemic disagreement. As a friend and visionary, I admire Pearce greatly and wish only the best for his work.

Far-future considerations

The most important reason why I don't prioritize HI is that I don't expect biological life to exist for more than another few hundred years, since biology is likely to be replaced by machine intelligence. This may not be true if civilization stalls or collapses, but those scenarios are less consequential than scenarios where machine intelligence is developed and colonizes space, thereby multiplying suffering astronomically. Plus, if civilization permanently collapses, it's doubtful that we'll have the resources or motivation to spread HI technology throughout the biosphere.

Machine suffering

Approaches to reducing machine suffering will look rather different from approaches to reducing biological suffering. And in any case, future generations of machine intelligences can work out the details of reducing suffering, if, as seems somewhat unlikely, they care enough about suffering to do so. It seems more important to me to improve the values of our machine descendants and to prevent worst-case future dynamics in that domain than to work on biological technologies.

Of course, Pearce is skeptical that digital computers will ever be conscious. But even someone who holds this view ought to assign some probability to digital consciousness, and given that the far future may contain vastly more machine consciousness than biological consciousness, scenarios where machine consciousness is possible might still dominate expected-value calculations. Suppose, though, that one is extremely confident that biology will always be the primary locus of suffering. Even so, I'm skeptical that abolitionist technology is the most important focus area, because steering the values and power dynamics of the future plausibly matters more, as discussed next.
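
To make the expected-value point concrete, here is a toy calculation in Python. Every number in it is a hypothetical placeholder of my own choosing, not an estimate from this essay or from Pearce; the sketch only shows how a modest credence in digital consciousness can dominate the calculation when potential machine minds vastly outnumber biological ones.

    # Toy expected-value comparison. All numbers are hypothetical
    # placeholders for illustration, not estimates from the essay.

    p_digital = 0.1        # hypothetical credence that digital minds can suffer
    bio_minds = 1.0        # far-future biological minds (arbitrary units)
    machine_minds = 1.0e6  # hypothetical count of far-future machine minds

    # Morally relevant minds at stake under each hypothesis.
    ev_bio_only = (1 - p_digital) * bio_minds  # digital consciousness impossible
    ev_machine = p_digital * machine_minds     # digital consciousness possible

    print(f"EV if only biology counts: {ev_bio_only:.2f}")  # 0.90
    print(f"EV from machine scenarios: {ev_machine:.2e}")   # 1.00e+05
    # Even a 10% credence in digital consciousness dominates the
    # calculation by five orders of magnitude under these assumptions.

On this toy model, one's credence in digital consciousness would have to fall below roughly one in a million before the biological scenario dominated, which is why even skeptics of machine consciousness might take these scenarios seriously.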

Influencing power dynamics is important

Even many people who doubt machine consciousness admit the possibility that machines will inherit the Earth. And even if Homo sapiens retains the reins of power, the values of the far future may be quite different from our values. Evolutionary forces, if not held in check, are likely to erode our present-day, spandrel-ridden values, and even if current values persist, society may inexorably move in directions that most of humanity doesn't approve of.

Even if we develop technologies to animate people by gradients of bliss, it's not clear that these technologies will remain in use forever. If suffering mind-states confer any competitive edge (as they may well do, given that evolution produced both happiness and suffering rather than gradients of bliss alone), then people who retain the capacity to suffer may have an economic and evolutionary advantage. Unless we create a global singleton that stops evolutionary arms races, our ideals are liable to be overridden by competitive pressures.
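
As a purely illustrative sketch of how such a dynamic could play out, the following Python snippet runs a toy replicator model in which suffering-capable agents are assumed to enjoy a small fitness edge over bliss-only agents. The fitness numbers are invented; nothing in this essay or in Pearce's work asserts them. The point is only that a small, persistent edge compounds.

    # Toy replicator dynamics: two strategies competing for population
    # share. The fitness values are invented for illustration only.

    fitness = {"suffering_capable": 1.02, "bliss_only": 1.00}  # hypothetical
    share = {"suffering_capable": 0.01, "bliss_only": 0.99}    # bliss starts dominant

    for generation in range(500):
        # Each strategy grows in proportion to its fitness; renormalize.
        grown = {k: share[k] * fitness[k] for k in share}
        total = sum(grown.values())
        share = {k: v / total for k, v in grown.items()}

    print({k: round(v, 3) for k, v in share.items()})
    # A mere 2% fitness edge takes the suffering-capable strategy from a
    # 1% minority to roughly 99.5% of the population in 500 generations.

In this toy model, the outcome doesn't depend on anyone endorsing suffering; differential growth alone is enough, which is the worry about arms races in the absence of a singleton.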

In other words, I think that influencing the steering wheel of the future is plausibly where we should focus. That could mean influencing the development of artificial general intelligence, or influencing political/social trajectories, or spreading suffering-focused values. It may also mean pushing on beneficial technologies, but mainly with an eye toward how those technologies shape future power dynamics. It's not clear to me that wellbeing-enhancement technologies will give suffering-reduction values more control over humanity's future, though I'm open to hearing a case for this viewpoint.

Speeding up tech can be bad

HI research often involves advancing neuroscience and other scientific/technological frontiers. But the net impact of speeding up technology is often not obvious, and it may be negative. For instance, accelerating neuroscience may hasten the development of powerful machine intelligence, and it may also enhance humanity's ability to create sentient digital beings, some of whom will suffer significantly.

Progress in biotech may also increase risks of global catastrophe, and whether such catastrophic risks are net good or bad is itself a complicated question from a suffering-reduction standpoint.

Suffering and empathy

There may be a correlation between personal experience of depression or significant life pain and a focus on reducing suffering, although I hope this topic will be explored more rigorously. If people who have never faced depression are on average less sympathetic to the plight of others, could HI reduce compassion for suffering non-humans and far-future beings? Or perhaps the opposite is true, and those with richer emotional lives are more inclined toward compassion. Perhaps HI technology aimed at compassion-building could overcome this concern.

Shorter-term considerations

My own efforts are not entirely focused on the far future of machine intelligence. I also devote significant resources to reducing wild-animal suffering in the shorter term, partly for emotional "fuzzies" reasons and partly for other heuristic reasons. But even with respect to short-run suffering reduction, I think interventions other than HI are more promising, for the reasons below.

Human-focused HI is not a neglected area

Most humans throughout history have aimed to reduce their suffering and improve their wellbeing. Of course, modern technological approaches to this project haven't been pursued as extensively, and new breakthroughs await us. Still, the field doesn't seem very neglected, given that many people in rich countries are willing to pay for hedonic improvements. For example, we have lots of research on treatments for depression and other mood disorders. If improvements were relatively easy to discover, pharmaceutical companies and others surely would have jumped on them already.

In some cases, irrational or rent-seeking political restrictions may inhibit research, such as on psychedelics. But lobbying to change these rules would meet intense opposition and seems unlikely to be low-hanging fruit.

Designer babies of the future might make big strides toward improving the wellbeing of rich humans, but this development also seems likely to be propelled forward by the self-interest of the elite classes and is a crowded issue politically. Moreover, the overall impact of designer babies on the far future is far from clear, since more intelligent people will speed up harmful technologies as well as helpful ones.

Animal-focused HI seems hard

Humans make up a tiny fraction of the animals on Earth, and even a relatively small fraction of all animal neurons. When reducing short-term suffering, the most important place to focus is probably non-human animals, which are not only more numerous than humans but also often have worse lives.

Very few people are working on HI technologies for animals, although perhaps some human HI discoveries would be applicable across species. While it might seem that animal-focused HI research could be a neglected cause, I'm skeptical that it has much promise, because it's so hard to convince people to incur costs to help animals. We can't even convince governments to outlaw factory farming or require more humane slaughter methods in 2016. It seems hard to imagine that we'll see large investments in costly research on hedonic improvements for animals, especially wild animals. Humanity is just not that compassionate, and I doubt social attitudes will change enough in the century or two that biological animals have left on this planet before machines take over. Maybe a few wealthy "philfaunapists" (philanthropists for non-human animals) will develop and disseminate some HI improvements for animals, but such efforts are likely to operate on a small scale or affect only a few species.

I expressed similar pessimism about HI for animals in this piece on gene drives. Given that there's debate on whether we should use gene drives to eliminate or re-engineer a single type of insect (mosquitoes) in order to save over a million human lives each year, it seems almost unimaginable that people would ever consider using risky gene drives on a much wider scale to reduce the suffering of insects without any human benefit.

What about the argument that nature might not be eliminated by machine intelligence within a few centuries? I think the loss of most of Earth's biological life within a few centuries is more than 50% likely, so planning for the very long-term future of biosphere reengineering will not pay off in many plausible future scenarios. Moreover, if biology is still prominent a few centuries from now, then presumably one of the following will hold (a toy calculation after this list sketches the implication):

  1. civilization will have collapsed, in which case society won't have the resources to implement the Hedonistic Imperative, or
  2. civilization will be more advanced than it is now, in which case reengineering ecosystems is likely to be much easier and cheaper than it would be for near-future humans.
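
To make this disjunction concrete, here is a toy scenario-weighted calculation in Python. All of the probabilities and payoffs are hypothetical placeholders of my own, not figures from this essay; they only exhibit the structure of the argument.

    # Toy scenario analysis of investing now in biosphere-reengineering
    # research. All probabilities and payoffs are hypothetical placeholders
    # chosen to mirror the qualitative disjunction in the text.

    scenarios = {
        # name: (probability, payoff of near-term research in that world)
        "machines replace biology": (0.60, 0.0),  # research never deployed
        "civilization collapses":   (0.15, 0.0),  # no resources to deploy it
        "advanced biological civ":  (0.25, 0.2),  # mostly superseded by
                                                  # cheaper future methods
    }

    expected_payoff = sum(p * v for p, v in scenarios.values())
    print(f"expected payoff: {expected_payoff:.2f}")  # 0.05
    # Under these illustrative assumptions, near-term reengineering research
    # pays off in few scenarios, and only partially even then.

Of course, different placeholder numbers would change the bottom line; the sketch just shows why I find the disjunction decision-relevant.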

Reducing invertebrate populations is probably more effective

As is evident from my comments above, I'm skeptical about humanity's selflessness and compassion. Strategies that rely on future humans doing altruistic things on a large scale seem unrealistic. Even widespread interventions directly intended to reduce the natural suffering of wild animals in mundane, non-high-tech ways strike me as pretty unlikely.

Instead, I suspect that we can help wild animals the most by channeling existing human activities, such as those that reduce wildlife habitats. Humans may already have inadvertently reduced the total suffering on Earth by ~10%(?) through reductions in wild-animal populations. This may be a bigger impact than most HI programs could hope to achieve.

I favor researching the net impacts of various human environmental activities on wild-animal suffering and then pushing on those that reduce wild-animal (especially invertebrate) populations. We can encourage less investment in habitat conservation and rewilding, more emphasis on creating jobs than on species preservation, and expansion of those forms of economic development that reduce invertebrate numbers. For example, rainforest-destroying palm-oil production in Indonesia is often considered an important vehicle for poverty reduction (although I'm not completely certain of the net impact of palm-oil production on insect suffering).

Reducing wild-animal populations can be done by working with the grain of human selfishness rather than against it. For this reason, reducing populations seems plausibly more impactful as a way to prevent animal suffering in the short run than developing expensive, high-tech solutions that won't arrive for many years and that humans have no selfish reason to deploy.

HI's suffering focus is important

As hinted in the "Introduction", one aspect of the HI philosophy that I do think has significant impact is its focus on suffering reduction in general. Promoting suffering-focused moral viewpoints plays a crucial role in expanding the suffering-reduction movement overall and will create more activists and thinkers who can pursue various kinds of projects. Given the number of people who have been reached by the HI message, Pearce's work has had an enormous impact in spreading suffering-reduction values.

See also

A discussion with David Pearce about this piece.