Interview on Effective Altruism

Brian Tomasik interviewed by Rhys Southan
First written: 12 Nov. 2013; last update: 24 Apr. 2014

Summary

In this interview, I answer questions on a variety of topics in effective altruism (EA), including how demanding we should be about behaving altruistically, which types of activities count as altruistic, why I focus so much on reducing suffering, what future scenarios I find most pressing, and why we should care so much about insects.

A quote from this interview appeared in Rhys's article for Aeon Magazine. (Forgive the sensationalist title chosen by the editors.)

Interview

How altruistic can people be?

Rhys:
Some people say that effective altruists want to do "as much good as possible," while others say that it's possible to be an effective altruist while donating as little as 10 percent to effective charities. Is the definition of effective altruism pretty loose, or do you think it should be seen as an applied utilitarianism, an attempt to do as much good as possible?

Brian:
Ultimately more good is better, but it remains a strategic and empirical question how to frame the definition of effective altruism to best do good. It's plausible that setting the minimum at 10% broadens the tent significantly without greatly discouraging above-and-beyond efforts by those who want to put in more. In general it's best to encourage small steps that are feasible within ordinary people's lives, and if anything, this seems more likely to snowball into greater dedication later. Of course, this "foot in the door" approach has a cousin, the "door in the face" approach, where you start out with an unreasonable demand and then lower it, but my guess is the latter would turn a lot of people off in the case of a voluntary activity like charitable giving.

It's good not to be too hard on yourself, lest you burn out or turn against altruism later on. The idea that utilitarianism is too demanding and hence shouldn't be followed reflects a misunderstanding of utilitarianism: If you find a supposedly utilitarian stance to be self-defeating, it wasn't actually the utilitarian stance in the first place, because the utilitarian stance is the one that actually works.

I'm very lenient with myself and almost never force myself to do something I don't enjoy. Most of the time I do useful activities anyway because they're inherently interesting and satisfying, as well as because I have many friends doing them who keep me engaged. I also take a broad view about what counts as altruistic. I think it's important for altruists to learn about many subjects, and this may mean spending time on activities that don't seem immediately useful.

Rhys:
In his post "To stressed-out altruists," Ben Kuhn reminds effective altruists that they aren't perfect utilitarian robots and not to judge themselves for failing that standard. Kuhn is obviously correct that it's impossible for EAs to perfectly determine the actions with the best consequences and then do only those, being human and all. But is there a sense among many EAs that utilitarian robothood is sort of the goal, if only it were possible?

Brian:
Again it comes back to a strategic question: Would idealizing robothood help people do more for others? My guess is not. I suspect many would object to the sense that becoming a robot would "deprive people of their essential humanity," and it doesn't seem especially profitable to work on modifying such attitudes.

What counts as altruism?

Rhys:
You wrote, "I also take a broad view about what counts as altruistic." Are there ways in which you take a broader view about what counts as altruism than other EAs might? What are some activities that may not seem immediately useful, but might actually be?

Brian:
When you want to improve the world, you have to know how the world works. Sometimes we may end up hyper-focusing on one set of considerations and not noticing others. This is fine in many careers—e.g., your electrician can focus her knowledge in great depth on how to be an electrician, without worrying too much about, say, international political dynamics. But if you're taking a big-picture view of how to do a lot of good for the world, you need some understanding of almost everything. This is both because (1) strategic cause selection requires surveying a wide array of possible charitable interventions, and (2) even within a cause, there are a lot of moving parts to how our actions impact the world—some good, some bad, and the net balance is often unclear. Having a deeper understanding of a variety of perspectives is important for reducing overconfidence when approaching these issues.

Studying widely across both academic and real-world topics helps build this perspective and improve the wisdom of the courses you pursue. Keep in mind that most EAs, myself included, are younger than 30, and we have a lot of wisdom yet to accrue. The faster we learn, the faster that wisdom accumulates, and the better we can make a positive difference with our actions. Wisdom is a very general property that doesn't just come from books and scientific articles; it also comes from life events, new emotional experiences, and a deeper understanding of how other people feel about ethical issues. This is one reason why even some "normal" activities like recreation and relationships, in moderation, are not wastes of time—they can teach you more about life as a whole. Of course, another reason is to avoid emotional burnout. That said, we certainly should strive to spend more time on narrowly altruistic activities than is the social norm.

Rhys:
Effective Altruists seem to be skeptical about the direct value people create through their work if their work isn't EA- or charity-related. For instance, artists often like to think that what they create is really beneficial for the world, whereas most EAs wouldn't think that making a nice painting, play, or movie is a major contribution to reducing suffering and increasing happiness. It might help a little, but the effect is basically negligible when you look at all the suffering in the world. Are artists being self-indulgent when they congratulate themselves for making people happy, or could their works count for something even if the entertainers aren't donating money to effective causes or promoting EA?

Brian:
Much of the social status associated with philanthropy comes purely from the fact that the philanthropist is giving away wealth voluntarily in a way that doesn't benefit himself or his immediate kin. This kind of "pure" altruism is atypical among evolutionarily optimized animals and is driven partly by psychological spandrels but also partly by social praise. Alas, it's about equally praiseworthy in the eyes of society to donate to a new opera house as to donate toward an animal-welfare organization, and indeed, it might be frowned upon to donate to a "dirty" political-lobbying group. These norms are reflected in government standards about which kinds of donations are tax-deductible.

This sort of cause agnosticism may make sense for the government to apply, but for our personal donations, we should be more selective. It's hard to imagine a new art gallery being even in the same ballpark of value as what could be purchased for the same amount of money on behalf of reducing suffering. Maybe some argument could be marshalled about flow-through effects of art helping people act more prosocially, but one could also argue in reverse that more art means people spending less time on important civic issues that are no less spiritually satisfying. Of course, some people value art highly, and they should be able to continue pursuing that passion. In addition, I think even EAs should consume some art, especially literature and other media that help us expand our imaginations, understand one another more deeply, and explore what we value most. But I'm doubtful that creating more paintings or symphonies is the best thing we can do with our time or money.

I think some philanthropists regard art not just as a means to an end, helping to edify its consumers and make them more reflective, but also as something intrinsically valuable—e.g., even if the so-called "last man" on Earth were about to die, it would still be wrong to destroy humanity's collection of artwork. I don't share this intuition, and I suspect it may derive from our being unable to imagine ourselves not appreciating the artwork, because it's hard to envisage our own nonexistence. That said, insofar as intrinsic reverence for art represents a stable intuition by some people, I can see that they would value it more than I do. (Perhaps one way to attenuate their intuitions is to point out that all possible artwork already and eternally exists somewhere in the quantum multiverse, if only by quantum fluctuations. With reducing suffering, we care about decreasing the quantity that exists, but with artwork, it seems you'd only care about its existence or nonexistence in a binary fashion. So if all art already exists with some measure, isn't that good enough?)

Ethical injunctions

Rhys:
Impartiality is an important concept in EA, but how far can/should that be taken? Obviously EAs don't pick their causes based on their own personal interests, like brain cancer research if they had a family member die of a brain tumor, and instead focus on suffering reduction and wellbeing enhancement more generally. Like maybe they never thought much about malaria before, but then they hear that fighting malaria is a great way to improve life for lots of people inexpensively, so that's where they put their money. Or maybe they start thinking about insect suffering, even though they aren't particularly interested in bugs. But from what I've seen, this impartiality only goes so far; EAs are expected to stay within the bounds of social norms, even if lying and stealing would appear in some instances to increase wellbeing and reduce suffering on the whole. But if EAs tend to focus on instrumental rather than inherent value, might it not be the case that social norms have only instrumental value, and that breaking these norms might sometimes lead to better consequences? And if that's the case, why wouldn't EA allow this, even if only secretly and unofficially? Isn't that a form of partiality? What determines the limitations on the general EA focus on consequences, then, if it's not all about consequences?

Brian:
This is an important question, because it's a subtle point. Indeed, when I first discovered utilitarianism in 2005, I assumed that there would be some cases where lying and stealing would be best, maybe even on a regular basis in one's ordinary life. As I read more perspectives on the matter, including Eliezer Yudkowsky's "ethical injunctions" sequence, I began to see that a more rule-utilitarian approach was called for. This goes back to the point that EA shouldn't be self-defeating: If feeling like you have to violate strong norms in certain cases causes your ethical stance to be disliked and leads to more total harm in society by degrading those norms in general, then your effort to be clever wasn't a good idea after all. If the goal is to actually do good, then pedantically adhering to a naive decision theory is unwise.

The reason we should abide by strong norms against lying and stealing is partly empirical: It seems to be the case that when people in general try to cheat in this way, even for altruistic reasons, it tends to cause more harm than good. They might get caught and tarnish their cause. They might be misguided in their beliefs about what's the best course of action and thereby cause harm that others, unaware of their plans, couldn't step in to prevent. And so on. I list many more reasons in my essay, "Why Honesty is a Good Policy." It is all about the consequences, but from a broad view, it seems that the best consequences result from being honest and otherwise respecting strong social norms.

Why focus on reducing suffering?

Rhys:
Most of the effective altruists I've talked to seemed to be positive utilitarians if they were utilitarians at all. I asked a couple of effective altruists about negative utilitarianism, and one of them expressed uncertainty about whether negative utilitarians would be welcome in the EA movement at all. (This was on the presumption that the negative utilitarian would ultimately desire the extinction of all sentient beings, David Benatar style.) But from what I understand, you lean more toward negative utilitarianism. Is that true? And if so, why do you like that approach best, and does this sometimes lead you to desire different outcomes than what other effective altruists desire (and if so, what are some of these differences)?

Brian:
There are many reasons why various people feel that reducing suffering is more urgent than increasing happiness. One is the relatively uncontroversial point that, at least in the world today, it's more cost-effective to reduce suffering because there's so much suffering at such a high intensity that can be prevented, and the hedonic treadmill tends to limit how much happiness can be increased per organism. Another is the feeling that suffering is just a lot worse than happiness is good. Another is that creating happiness for new beings doesn't have the same moral urgency as preventing suffering does; happiness is great, but it's not right to leave others to suffer and instead go off to create more orgasms. A Hindu/Buddhist approach would say that our goal is ultimately liberation from unsatisfied desires.

Note also that many other ethical views share properties with negative-leaning utilitarianism. Prioritarians also feel that suffering warrants much greater urgency than increasing happiness. Egalitarians should be concerned about larger populations and future technologies that are likely to widen the gap between the worst and best off. Similarly for a Rawlsian maximin view.

A negative-leaning stance sometimes leads to similar conclusions as would be accepted by positive-leaning utilitarians (such as the severity of factory farming and wild-animal suffering), but sometimes it leads to different emphasis (such as a slightly greater concern by negative-leaning utilitarians about risks that would result from technological progress and space colonization). While the negative-leaning focus is different from the positive-leaning one, it's not the case that negative-leaning utilitarians can't contribute helpfully to the effective-altruist movement, because ultimately we need to compromise with those of differing values. Both sides can be made better off by making some concessions to the other. If negative-leaning utilitarians excluded themselves from working with others or advocated extreme measures, this would again be an instance of a self-defeating stance that ultimately doesn't advance their goals. Nearly everyone agrees that reducing suffering is important, and if negative-leaning utilitarians can improve safety measures against risks of suffering without too much obstruction of what others care about, this can be a win for many value systems, not just negative-leaning utilitarians.

Rhys:
You are a pretty big fan of parking lots because of the way they cover up organic biomass and reduce the ability of sentient life to form. Would you be in favor of paving over the entire world? (I suppose another way of phrasing this would be, do you favor extinction for all sentient beings, or do you think some lives are worth living?)

Brian:
Many people care very strongly about the survival of themselves, their families, their work, and humanity as a whole. Many people also care strongly about having some wilderness preserved. As a result, we shouldn't pave the entire world. Still, on the margin, I think it would be better if there were more parking lots. People already build parking lots based on market-based cost-benefit calculations, and if the positive externality of reducing wild-animal suffering were added to the equation, it would favor more parking lots than we have now.

I do think some lives are worth living, including probably a significant fraction of human lives. Even some lucky animals may have net positive lives, but these are the exception and are dwarfed by a vast sea of animals that die in gruesome ways shortly after birth.

Rhys:
Does your negative-leaning utilitarianism make you feel differently about existential risks than other EAs with an interest in existential risks? (Even if it's not necessarily different goals, but different reasons for those goals. For instance, those who want to stop humans from going extinct usually want to do so for humanity's own sake, but maybe you would want to save humans for more instrumental reasons—like that humans have more power than other lifeforms and could use that power to reduce suffering. On the other hand, a lot of the existential risk EAs might want to colonize space, while it seems you would not.)

Brian:
I am concerned about the potential for space colonization to astronomically multiply suffering, though as you point out, there could also be upside in terms of reducing suffering that already exists. Other people really care about space colonization, and realistically, the best prospects for reducing suffering are to mitigate the damage that might result, by pushing for stronger safety measures and compromise arrangements in the lead-up to artificial general intelligence and the move into space. It's in the interest of all major value systems, including suffering reduction, to foster conditions that can lead to international cooperation, so that everyone can get some fraction of what they want rather than fighting destructive winner-takes-all battles. One "existential risk" I see as very pressing is the scenario in which one faction acquires great power and then runs roughshod over everything that all the other factions care about, and I encourage more thinking about ways to reduce this danger at many different levels: political dynamics, social institutions, cultural norms, and so on.

Rhys:
To the extent that effective altruism is utilitarian, in general it seems like effective altruists are positive utilitarians for humans and negative utilitarians for animals. I sometimes think of it as welfarism for humans, abolitionism for animals. Why is this? Is there a sense that human lives are net positive and animal lives are net negative? Does it have to do with [...] links with the vegan movement, which tends to see extinction as the simplest, most plausible answer to (domesticated) animal suffering?

Brian:
For utilitarians, the difference is probably mainly due to what you said: Humans often have net positive lives, while farm animals almost always have net negative ones.

Also, humans matter instrumentally more than animals because helping them has compound returns, although I would point out that (a) it's not obvious whether we want faster or slower economic progress in terms of maximizing the safety of positive outcomes in the future, so the sign of the compound returns isn't wholly obvious, and (b) promoting concern for animals itself has compound returns, contributing to a more positive future. Point (b) is just an argument for animal charities, not for causing more animals to be born.

For non-utilitarians, the difference mainly comes from objecting to the use of animals for human purposes. Eliminating such exploitation requires abolitionism in the eyes of those who hold this view.

Vegans and wild animals

Rhys:
Why do many vegans avoid the question of wild animal suffering?

Brian:
Here are a few main reasons:

See also "Does the Animal-Rights Movement Encourage Wilderness Preservation?"

The possibility of insect suffering

Rhys:
Why is possible insect sentience a pressing ethical issue?

Brian:
Earth contains about a billion insects for every human. These insects have a nontrivial chance of being sentient (I'd say ~50%, but certainly not less than 5-10%), and if sentient, I think each would count for a nontrivial fraction as much as a human (I'd say ~1/20th as much, but even with a much more extreme ratio, insects will probably still dominate). I think insects matter more than their size would suggest because they are autonomous agents with their own utility scales, and their brains perform many of the same functions as ours do with much greater efficiency.
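
To make the arithmetic concrete (a rough illustration using the figures above, not a precise estimate): with roughly 10^9 insects per human, a ~50% chance of sentience, and a weight of ~1/20 per insect, the expected insect stake per human comes to about 10^9 × 0.5 × (1/20) ≈ 2.5 × 10^7 "human-equivalents." Even with a sentience probability of only 5% and a weight of just 1/10,000 per insect, the product would still be around 5,000, so insects dominate in expectation across a wide range of assumptions.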

There are many possible ways in which humans might be able to reduce insect suffering, even in the short run. One is to research the net impact of various environmental policies on insects, keeping in mind that insects in the wild very likely endure more suffering than happiness because 99+% of offspring die within a few days after birth. In doing these analyses, we have to consider other factors as well, such as the risk of environmental degradation increasing global instability in the future.

Another approach is to explore the relative painfulness of different insect-control methods and encourage farms to switch to pest-management approaches that are more humane; insofar as this keeps pest populations constant, it's a clear-cut intervention in the sense that it doesn't require understanding the longer-term implications of increasing or decreasing populations of a given insect (though in practice there would of course be some complicating factors). Millions or tens of millions of insects can be killed per hectare of pesticide application, so even if you discount insects heavily, this is a big deal. But note that I don't encourage reducing insecticide use wholesale, because it's plausible that insecticides preclude more suffering than they cause by averting vast numbers of births and rapid subsequent deaths by future insects. This is an important topic to study in its own right.
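
To put the per-hectare figure in perspective (again just an illustrative calculation, not a precise estimate): at 10^7 insects killed per hectare, even discounting each insect death by a factor of a million relative to the death of a larger animal would leave the equivalent of roughly 10 such deaths per hectare treated, and insecticides are applied over very large areas every year.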

Some people find it "weird" to consider insects morally relevant, but I think as time goes on, more and more of us will come to adopt the perspective that they matter. Quoting Christof Koch:

We have literally no idea at what level of brain complexity consciousness stops. Most people say, "For heaven's sake, a bug isn't conscious." But how do we know? We're not sure anymore. I don't kill bugs needlessly anymore. [...] Probably what consciousness requires is a sufficiently complicated system with massive feedback. Insects have that.

And from the same article, Nicholas Strausfeld:

Many people would pooh-pooh the notion of insects having brains that are in any way comparable to those of primates. But one has to think of the principles underlying how you put a brain together, and those principles are likely to be universal.

Rhys:
Are there other effective altruists you know of who are concerned about insects, or are you the only EA with an interest in that issue for now?

Brian:
I think many EAs concerned with animal welfare recognize the possibility of insect suffering as an important matter to explore. Not all of them bite bullets about how much insects matter, but I think many of them recognize that insects may matter to some degree. Some avoid talking about the issue so as not to turn people off, rather than because they don't actually find it compelling. One of my EA friends has even considered the idea of studying insect neuroscience in grad school.

Rhys:
If insects do indeed suffer, it seems unlikely that they suffer more intensely than do larger animals like mammals (or do you disagree?). The case for giving insects special attention, then, comes down to the number of insects—which is incredibly large—and looking at insect suffering in the aggregate rather than on an individual level. But since insects suffer as individuals and not as an aggregate (if they suffer at all), someone could make the case that insect suffering isn't that big a deal if their capacity for suffering as individuals is relatively muted compared to more complicated organisms, given that there is apparently no being who feels all insect suffering at once, and so aggregating insect suffering together is just an abstraction—and perhaps a misleading one as it is not indicative of an actual sensation of massive combined pain that anyone feels. Do you agree that having a major concern for insects relies on aggregating pains across individuals, and if so, why do you think it makes sense to do this kind of aggregation?

Brian:
There's no objective way to make interpersonal (or, in this case, inter-species) comparisons of utility. Whether insects suffer more or less intensely than mammals is a question for us to decide based on how much we care about each type of organism. Most people choose to care less about insects, but this choice is based on their own aesthetic sensibilities, such as the sense that brain complexity should be relevant. To an insect itself, it is the whole world, and its choices treat its own welfare as basically the only thing that matters, just as is the case for mammals.

So we can't make statements comparing insects vs. mammals without introducing an ethical perspective that bakes in our own feelings on the matter. But in that case, why not bake in our own feelings about aggregating across organisms?

If you wanted to say, "I care less about each insect than each mammal, and if I care a low enough amount about a given organism, I set its value to zero in the aggregation," that's a legitimate thing to do, but it's not something I endorse. If it helps, picture the insects collectively as a unified whole similar to the collection of neurons in a larger brain. In any event, as noted above, there's no objective reason I couldn't say "Each insect suffers more than a mammal relative to my values." That's not how my personal values actually weigh things, but neither is rejecting aggregation.

One motivation for doing aggregation is that it's what we ourselves do for selfish tradeoffs with regard to our various future selves. So it seems plausible that society should do it for social tradeoffs. Of course, what we selfishly do with regard to our future selves is not a perfect guide, because for instance, we also discount the welfare of our farther future selves, but this isn't ethically right. A more basic argument for aggregation is that suffering is bad regardless of what mind is experiencing it, and greater suffering is worse.