Should Altruists Focus on Reducing Short-Term or Far-Future Suffering?

By Brian Tomasik

First written: 6 Feb 2015. Last nontrivial update: 17 Mar 2015.

Summary

Should altruists aiming to reduce suffering focus on clear ways of averting agony in the short run or on projects explicitly optimized to shape humanity's long-term future in more humane directions? Both are very worthwhile efforts. Naive expected-value calculations might incline one to assume that far-future work is obviously superior, but there are many heuristic reasons why this may not be the case. All told, it's plausible that organizations like the Foundational Research Institute, which deliberately explore big questions that affect priorities for suffering-reduction efforts, have more expected benefit from a cold, calculating perspective than charities like the Humane Slaughter Association, which help millions of animals in the short run; but this is far from obvious. Moreover, I feel a "spiritual" need to do some work to reduce clear instances of torture-level suffering in the short term.

Note: Most other authors who have written about the topics I discuss in this piece focus on the contrast between work on "existential-risk reduction" versus more mainstream causes like international aid. Even though I've drawn on such literature, when I talk about long-term suffering-reduction work in this essay, I don't mean reducing extinction risk via prevention of bio, nano, asteroid, etc. disasters, because I fear that such work may increase net suffering by elevating the chance of space colonization, which would astronomically multiply suffering relative to what we see on Earth. Rather, the far-future work I have in mind seeks to steer the future in more humane directions, without affecting the probability that there will be a far future. Given that rogue artificial intelligences (AIs) would probably colonize space, I don't categorize work against "AI risk" along with work on other types of extinction risks, and unlike for the other risks, I find it reasonably likely that efforts to reduce AI risk will reduce net expected suffering.

Introduction

Most people who care about reducing suffering in the world focus on the short term: Improving conditions for humans and animals in the present in various ways. Among such altruists, there are disagreements about the right level of abstraction at which to tackle social problems, with some arguing that short-term approaches are "band aids" that don't strike at the root cause, while others contend that broader efforts inspired by radical philosophies tend to either accomplish nothing or else end up causing harm in their own right (e.g., communism).

Some effective altruists take a particularly bold stance on the question of whether to optimize for the short term or long term. They point out that the far future may contain vast numbers of digital creatures (say, the equivalent of 10^38 humans living for at least billions of years), which probably implies vast amounts of future suffering. Even if there's a tiny chance we can reduce some of that suffering by our efforts, trying to do so may overwhelmingly dominate the importance of short-term charity.

This essay critiques the argument for the dominating importance of the far future. Ultimately, I suspect that actions targeting the far future do dominate in expected value, but this has to be weighed alongside strong heuristics against tilting at windmills, as well as strong "spiritual" impulses to help those suffering in clearly preventable ways in the present.

A sketch of the argument

A prototypical example of a charity focused on the short term is the Humane Slaughter Association (HSA), which reduces the agony of slaughter for millions of animals in the coming decades but doesn't explicitly target long-term outcomes. A prototypical long-term charity for suffering reducers is the Foundational Research Institute (FRI), which explores possible game-changing insights about how altruists should shape humanity's future trajectory. In general, future-focused work tends to involve a lot of philosophy and broad exploration of many disciplines, because improving the far future is a much harder problem than reducing suffering in the short run and so requires more thorough analysis. [Full disclosure: I helped to create FRI, while I'm merely a supporter of HSA.]

Here's an outline of a common argument why the far future should dominate altruistic calculations. Suppose we're deciding between a charity, ST, that focuses on reducing suffering in the short term and a charity, LT, that focuses on the long term. Their expected impacts are a sum of expected short-term and long-term effects. Suppose LT is just 1% more effective at pushing the far future in better directions than ST is. To make the argument conservative, assume LT has no short-term suffering-reduction impact. Then

expected value of ST = (expected short-term suffering reduced) + (expected long-term suffering reduced)
= (at most ~billions of animals helped) + (small multiplier) * (at least 10^38 future organisms helped * 10^10 years)

while

expected value of LT = (expected short-term suffering reduced) + (expected long-term suffering reduced)
= 0 + (small multiplier) * (1.01 for being 1% more effective) * (at least 10^38 future organisms * 10^10 years).

So as long as the "small multiplier" factor is bigger than ~10^-37, LT has higher expected value.
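
To make the arithmetic concrete, here's a minimal sketch of the comparison in Python, using the illustrative magnitudes from the formulas above; the particular "multiplier" value is a placeholder assumption, not a figure from the argument.

    # Minimal sketch of the expected-value comparison above. All
    # magnitudes are illustrative; "multiplier" is an assumed value.
    short_term_helped = 1e9      # ~billions of animals (ST's direct impact)
    future_stakes = 1e38 * 1e10  # future organisms * years at stake
    multiplier = 1e-20           # assumed discount on far-future influence

    ev_st = short_term_helped + multiplier * future_stakes
    ev_lt = 0 + multiplier * 1.01 * future_stakes  # LT: 1% more effective

    print(ev_lt > ev_st)  # True whenever multiplier exceeds ~1e-37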

The multiplier rightly discounts efforts to target the far future for several reasons, including the difficulty of knowing whether far-future-focused actions will make things better or worse, the possibility that they won't have any lasting effect at all, and the general unreliability of speculative reasoning.

The argument for far-future dominance maintains that these complications are small enough that even taken together, they don't make the "small multiplier" less than 10^-37.

The far-future argument can be strengthened further by noting that there's a nonzero chance that future post-humans will create infinite suffering, so any nonzero improvement in the probability of preventing bad outcomes of that sort dominates anything else.

Weak arguments against far-future dominance

This section reviews some weak arguments against the idea that the far future dominates. It focuses specifically on the case of those who care most about reducing suffering.

Sequence vs. cluster thinking

The far-future argument is compelling, and I think the most likely way in which it would be wrong is at a fundamental level, i.e., that the approach of maximizing expected value over gambles involving tiny probabilities of huge payoffs is not the most effective way to make choices. This is Holden Karnofsky's main reply. Karnofsky suggests that the expected-value approach represents "sequence thinking", in contrast to the more informal, common-sense approach of "cluster thinking" that people normally use. It's not obvious which epistemological framework is better suited to altruistic prioritization. Of course, cluster thinking doesn't imply that it's better to focus on the short term; rather, it suggests that the far future can't automatically dominate the calculations purely by its sheer magnitude.

Anti-fanaticism heuristics

Sequence thinking often falls prey to situations in which a low probability of astronomical payoff seems to dominate the calculations. This doesn't just happen on rare occasions but occurs all the time, causing conclusions to constantly flip-flop. Even if you think you've reached the last "crucial consideration" (in Nick Bostrom's terminology), you probably haven't, and your conclusions would likely change again with further exploration. This has happened to me over and over.

There are at least two main ways to deal with fanaticism difficulties:

  1. Adopt stabilizing heuristics that prevent fanaticism when fringe possibilities seem to dominate in expected value.
  2. Avoid acting rashly on any given fanatical hypothesis and instead seek to build organizational, epistemic, and motivational capacity to more thoroughly explore important questions. This includes pushing artificial intelligence (AI) in better directions so that it's more likely to behave how we intend when it confronts these kinds of dilemmas.

#1 takes a cluster-thinking approach. #2 makes sense even using sequence thinking, by recognizing the expected value of further information and of preparing the ground for our wiser descendants to take the analysis to the next level.

I think #2 is basically right, but #1 may be a helpful guard rail as well. This isn't to say that fanatical possibilities should never dominate our choices, but we should wait until we have high confidence that our conclusions won't change before dedicating our efforts to them.

I used to think that most people didn't "get" rationality and therefore ignored high-risk, high-reward speculative possibilities. But as I've gotten older, I've come to see that common sense has more sense in it than I had supposed. This view has been reinforced by my greater appreciation of heuristics in artificial intelligence. For example, complex GOFAI navigation calculations tend to do worse than model-free, behavior-based robots, at least without astronomical amounts of computing power. Likewise, for boundedly rational humans, heuristics may outperform complex trains of logic most of the time. In the long run, sophisticated reasoning is a goal to strive for, but it doesn't help to pretend we can do it rigorously now.

Anthropic penalty may be linear in size of future

Most anthropic arguments for why we may not have overwhelming influence on the far future predict an anthropic penalty that increases as the size of the far future increases. Indeed, an anthropic penalty of this type would be necessary to combat fanaticism from infinite gambles. If the anthropic penalty scales proportionally to the size of the future (such as in the simulation argument), then it's no longer obvious that the far future dominates after all. This seems to be the strongest argument against far-future fanaticism within the realm of expected-value calculations. See "Pascal's Muggle" for debate about such an approach.
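
As a schematic illustration (my notation, not drawn from the sources cited): if the probability that our apparent influence is real falls off inversely with the size N of the future at stake, then N cancels out of the expected value.

    % Schematic: anthropic penalty proportional to the stakes N.
    \[
      \mathbb{E}[\text{value}]
      = \underbrace{\frac{k}{N}}_{\text{penalized probability}}
        \times \underbrace{N}_{\text{organisms at stake}}
      = k,
    \]
    % independent of N, so astronomical stakes no longer dominate.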

However, this point breaks down if there's any non-tiny probability that the anthropic penalty doesn't in fact scale proportionally to the size of the future. Indeed, common sense tells us that we really are in a position to exert possibly astronomical influence on our region of the universe for billions of years to come. It seems difficult to claim that the probability of this common-sense view is in fact smaller than ~10^-30 or whatever is needed to avoid far-future domination.

Still, this counterargument is itself a fanatical kind of wager, which suggests that maybe we're hitting the limits of where expected-value calculations can reasonably be applied.

Action correlations

What we choose has logical implications for what other agents in other contexts choose, depending on the degree of correlation between our decision algorithms and theirs. So the value of our choices comes not just from their causal consequences in terms of spatiotemporally contiguous reverberations but also in terms of what they logically imply about choices in general. One way to think about this is in terms of the categorical imperative: if you could fix an algorithm to have some output throughout the universe, what output would have the best consequences?

It's not obvious whether the logical implications of far-future speculation are better or worse than those of shorter-term helping. steven0461 puts it this way: "The question is whether one can get more value from controlling structures that—in an astronomical-sized universe—are likely to exist many times, than from an extremely small probability of controlling the whole thing." See here for more on this point.

Flow-through effects

A strawman version of the far-future argument might claim that far-future organizations automatically dominate short-term ones because the far future may be so huge. This ignores the reality that all charities have nontrivial effects on humanity's trajectory—side effects that are sometimes called "flow-through effects". So what's required for demonstrating that a far-future charity is better is to show that its flow-through effects on long-term outcomes are better than the (perhaps unintended) flow-through effects of a short-term charity.

That said, it seems likely a priori that an organization targeting the far future should have better far-future effects than one targeting the near term, since the far-future organization is optimizing with the far future in mind. steven0461 makes this point for a comparison of charities working on international aid vs. AI risk: "it would seem to be a fairly major coincidence if the policy of saving people’s lives in the Third World were also the policy that maximized safety." However, I once again hasten to add that I may not support many types of "existential risk" work (e.g., to reduce bio or nano risks) because they may increase future suffering by making space colonization more likely.

As a counterpoint to the argument that it would be coincidental if the best short-term charity was also an optimal long-term charity, we could note that there are many more short-term charities, so it may be that one could find a short-term charity whose work is effective enough that its flow-through effects exceed those of an explicit long-term charity.

In any case, the observation that all charities have flow-through effects at least shows that far-future charities don't dominate short-term ones by millions or billions of times, and probably not even by thousands of times.

Broad market efficiency

As I've grown older, I've become increasingly impressed by the talents of other people. For most suffering-related causes that one can think of, there's someone working on it. Karnofsky calls this finding "broad market efficiency". Philosophy is a case in point.

I think some philosophy is pointless and some is confused. But a decent fraction of philosophers are discussing important foundational issues in constructive ways, in fields like anthropics, philosophy of physics, interpretations of quantum mechanics, philosophy of computation, functionalist philosophy of mind, questions about modal realism, epistemology of disagreement, and some moral philosophy. If these questions are as important for the far future as they seem, does this imply that, say, funding a philosophy institute could be more valuable than preventing suffering in the short run? Naively it doesn't seem that way; it feels weird to maintain that the best way to reduce suffering in the world is to fund abstract philosophy papers in the ivory tower. Of course, I think FRI can be many times more valuable than a general philosophy institute because FRI looks at how philosophical discoveries affect the best strategies for reducing suffering. But on a daily basis, a lot of FRI's work looks like regular philosophy, and conversely, some non-FRI philosophers already think about ethical implications of their ideas.

A few issues haven't yet gone as mainstream as I think they should; wild-animal suffering, discussed later in this piece, is one example.

Topics like these are discussed to some degree in and outside of academia, so like the other issues cited above, they're not wholly neglected. But because these topics entail a strong moral statement that general philosophical exploration lacks, I think it's more important for altruists in particular to focus on them. It's a priori less crucial for altruists to promote exploration of intellectual topics that smart people would investigate anyway because of their sheer interestingness, although there may be exceptions in practice for topics that are especially neglected.

Far-future interventions are not only compelling to suffering reducers

Most big-picture questions relevant to the far future are interesting to many different value systems, including classical utilitarianism and other forms of consequentialism. So many far-future questions are already being explored to some degree by people who aren't focused on reducing suffering. This should at least lead us to ask whether there are other interventions more specifically tailored to suffering reducers.

Of course, it's often good to work on questions of wide interest, since if everyone tried to optimize for projects unique to his/her particular values, there would be a tragedy of the commons with respect to exploration of broader questions. However, this isn't always true, as the next section explains.

Discovering insights can help the majority at the expense of suffering reducers

Most people want to spread life, even if doing so increases total suffering. There are many fewer negative utilitarians in the world. This means that if the two camps are diametrically opposed on a policy question, and if each camp's effectiveness is boosted by some amount, then the boost may make things worse for the negative utilitarians, because the gain by the life-spreaders more than outweighs that by the negative utilitarians.
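
A toy calculation, with made-up influence numbers, shows why a symmetric boost can hurt the minority camp:

    # Toy model (hypothetical numbers): two camps push the same policy
    # lever in opposite directions, and a shared insight boosts both.
    life_spreaders = 10.0     # influence of the larger, majority camp
    suffering_reducers = 1.0  # influence of the smaller, minority camp

    net_against_minority = life_spreaders - suffering_reducers  # 9.0

    boost = 1.10  # both camps become 10% more effective
    net_after = (life_spreaders - suffering_reducers) * boost   # 9.9

    print(net_against_minority, net_after)  # 9.0 -> 9.9: worse for the minority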

This means that sometimes, generating knowledge can harm suffering reducers. My hope is that this isn't often the case, and I think that cooperation and reputation heuristics generally favor sharing insights rather than not doing so. Still, this consideration does reduce the expected value of research-oriented work. For more discussion, see "Expected Value of Shared Information for Competing Agents".

Far-future employees are more expensive

People who work for charities focused on the far future tend to have strong mathematical and computer-science competence. Because they could take remunerative private-sector jobs, they may cost more to hire, and if they're sufficiently altruistic, they might have been earning and donating a lot had they not been doing direct research. Thus, their total (wage + opportunity) costs may range from slightly higher to many times higher than those of employees at, say, an animal-welfare organization. This should already be factored into a value-per-dollar comparison of charities, but my point here is just that value per dollar is not value per employee when comparing across different types of charities.

Bias toward fun, intellectual topics

I've heard the worry that philosophy-oriented altruists focus on far-future speculations because those are fun intellectual problems, compared with the boring nitty-gritty of actually effecting change in the short run. I doubt this is true for all or even most far-future altruists, but it may sway the feelings of some.

If I had no altruistic commitments, I might study philosophy or related theoretical topics just because these questions are very interesting. But then I should be suspicious when I also conclude that they happen to be optimal areas for altruistic focus. Of course, it's also plausible that because I find these topics interesting, they constitute my comparative advantage. In contrast, most people find big-picture questions too abstract.

Bias toward "cool" topics

Some altruists may also be tempted toward far-future speculation because the issues tend to be more cutting-edge and sexy: AI, digital minds, technology policy, cosmology, and so on. In contrast, research on humane slaughter sounds less cool to most people. Peter Hurford calls this discrepancy the "wow factor" bias.

Hurford adds that the more cool topics have higher status among effective altruists. But maybe this higher status for far-future speculation was earned by good arguments, in which case the higher status of far-future charities is evidence in their favor.

Youthful naïveté?

With some exceptions, many of the altruists who advocate influencing the far future are young—often less than 30 years of age and usually less than 40. In contrast, activists for other causes tend to better span the age spectrum, although they still probably also have a skew toward idealistic young people.

The youthfulness of far-future activism may be a coincidence. Or it may be a sign that the movement is breaking new intellectual ground that older generations are hesitant to explore. But it may also suggest that far-future altruists are somewhat naïve. I think several people working to shape the far future vastly overestimate their own importance on a global scale. This is a small yellow flag about their epistemologies in general. However, there are many exceptions to these observations.

Short-term work is less likely to backfire

Generally I encourage discussion of novel, non-mainstream topics because these seem to offer the highest leverage. Spreading an unconventional idea seems more important than spreading an idea that's already somewhat well known. However, sometimes people object to this approach because of a concern that if an idea is too radical, it will scare people away from more moderate stances. How can people care about suffering digital animals in the far future if they don't even care about biological animals in the present? While I'm not particularly worried about this point, since I usually speak to more intellectually curious audiences, it may have some weak force in supporting more short-term interventions, which are likely to be more mainstream. For instance, HSA's efforts to expand humane slaughter are not radical in the eyes of ~90% of the population, and implementing such reforms helps to make animal welfare more mainstream, which is a first step toward ethical consideration of all sentient entities.

Gut check

Here may be the strongest point: Does it make sense at a gut level that the best thing for suffering reducers to do in a world filled with animal suffering is to explore abstruse philosophy of the type that's also interesting to non-altruists? An initial reaction is: No, doing philosophy is a diversion from getting your hands dirty to actually make a difference. This view seems to be shared by some of my friends, especially those who do more in-the-trenches work for the animal movement.

Of course, gut reactions are often wrong, and there are some very compelling reasons why applying philosophical insights may be on the whole too neglected by suffering reducers. The gut check is just a reason for special caution and skepticism.

Weak arguments for far-future dominance

Far-future altruism can be defended on more than just sequence-thinking grounds. Following are some heuristic, cluster-thinking arguments.

Cause flexibility

Even if a given short-term cause happens to have strong flow-through effects on the far future, this happens by accident. It's more robust to actually study various avenues of long-run impact and optimize among them. In the worst case, the long-run analysis would just converge on the original short-run charity.

That said, the discipline of short-run focus may have its own virtues. For instance, effective short-term projects achieve more concrete results in the "real world". Maybe this has more value than one would estimate from one's armchair. (Or maybe it doesn't.)

More neglected

The flip side of a common-sense argument for short-term focus is the point that if far-future efforts are promising, they're probably very promising, because the future is traditionally neglected. Most people seem to ignore the far future in part because they don't care as much about future generations, so those who do care a lot about future generations may find more low-hanging fruit in future-oriented work.

Scope neglect and risk aversion

Our judgments tend to be biased by scope neglect, which makes us feel satisfied with a little bit of short-term suffering reduction compared with a lot of expected long-term suffering reduction. Scope neglect leads to risk aversion, because we want a higher probability of making some difference, even if the expected value of the difference is lower. Far-future efforts tend to involve only a small chance of actually making an appreciable impact, while the probability of accidentally making a negative impact on the far future is often uncomfortably close to 50%.

Of course, charities focused on the short term can't escape the uncertainties of far-future activism, since their flow-through effects will have some impact on future trajectories as well. Short-term charities may have smaller impacts on the far future and therefore smaller uncertainty, but if so, then they also have correspondingly smaller upside.

Guesses are not useless

One of Hurford's arguments against speculative charities is that common-sense intuitions often perform poorly at evaluating cost-effectiveness. This may be, but as long as intuition is right 50.0000000001% of the time and wrong 49.9999999999% of the time on gambles with equal prior odds, then the far-future argument can still go through because it has so much cushioning against tiny probabilities due to its astronomical stakes.
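
Plugging in the illustrative magnitudes from the earlier sketch (placeholder numbers again, not estimates), even that sliver of an edge swamps the short-term stakes:

    # Hypothetical arithmetic: a ~2e-12 reliability edge times
    # astronomical stakes still dwarfs short-term impact.
    edge = 0.500000000001 - 0.499999999999  # ~2e-12
    stakes = 1e38 * 1e10                    # organisms * years, as before
    short_term = 1e9                        # ~billions of animals helped

    print(edge * stakes)  # ~2e36 expected organism-years
    print(short_term)     # 1e9 -- vastly smaller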

My compromise position

From a cool-headed, "rational" perspective, the case for far-future work seems somewhat stronger to me. It's not a slam dunk because, even if the far future probably wins on explicit expected-value calculations, the debate is partly one of epistemological frameworks to begin with, and short-term work can often win when various types of cluster thinking are used.

However, this does not mean that I think we should focus only on the far future. A portion of my neural electorate feels strongly that it would be wrong not to work toward reducing clear suffering in the short run, such as by promoting humane slaughter. This is a very powerful kind of "spiritual" sentiment, and it's not something I want to give up. Rather, I have at least two distinct utility functions in my brain: one that is willing to maximize expected value regarding far-future speculations, and one that needs to prevent torture-level suffering in the short run.

Eliezer Yudkowsky has harsh words for the emotional approach:

Altruism isn't the warm fuzzy feeling you get from being altruistic. If you're doing it for the spiritual benefit, that is nothing but selfishness. The primary thing is to help others, whatever the means. So shut up and multiply!

A lot of my brain agrees with that. But one can also offer a few replies:

  1. Epistemology: As Karnofsky says, expected-value calculations are often flawed as a model of rationality. Rarely is the situation as clean as in a philosopher's thought experiment. In the real world, if you feel uneasy about a really crazy-sounding gamble, it's quite plausible there's actually something wrong with it. In a similar way, if you feel uneasy about utilitarianism's apparent endorsement of lying and stealing for the greater good, then you've felt the right emotion, because lying and stealing for the greater good is actually a bad idea all things considered. Emotions often embody wisdom that explicit deduction lacks.
  2. Ethics: Any set of values looks absurd when pushed to its logical limits. Preventing short-run suffering may seem untenable—after all, it seems you're giving greater weight to expected suffering in the short run than expected suffering in the long run. But many moral choices are ultimately equally "silly" from an abstract perspective. Incorporating risk aversion into one's intrinsic values can be a legitimate moral choice.
  3. Diligence: The competition between short-term and long-term altruism can force long-term projects to stay on course, rather than devolving into only fun intellectualizing with unclear payoff.
  4. Staying compassionate: My feelings of compassion for suffering impel me to do something about animal suffering in the short run. I can't remain fully empathetic and impartially optimize an abstract expected-value calculation at the same time. Some might say this is a reason to ditch compassion, but that's extremely dangerous, because by doing so, I could easily slide toward some aesthetically motivated moral view that had little inspiration from empathy at all—and one that might well increase suffering.

I am haunted by the knowledge that others are enduring torture-level suffering. Images of people and animals in extreme agony have defined my purpose in life, and I just have to do something about suffering like this. I find it unsettling that many "rationalist" effective altruists don't seem to share the same degree of deep emotional pain from others' suffering. It's not that they necessarily should, since maybe they wouldn't be able to shut up and multiply as well if they did. But many in-the-trenches animal activists do share my sense of horror at suffering, and this makes me wonder whether I should lean more in their direction.

Ultimately, my position is to settle for a compromise between my brain's two utility functions: Donate some money/time toward short-run efforts (HSA) and some toward long-run efforts (FRI). This is essentially Yudkowsky's suggestion to "Purchase Fuzzies and Utilons Separately". Some effective altruists (including my former self) like to complain that splitting donations among different causes is often "irrational" because one or the other cause is generally better and so should deserve all the money. But splitting is rational if you have more than one utility function. Specifically, I currently feel the need to give at least ~40% of resources toward short-term efforts, with the rest allowed to go toward far-future analysis.
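
Here's a minimal sketch, with made-up and equally weighted utility functions, of why splitting is rational when two utility functions with diminishing returns are in play:

    # Minimal sketch (hypothetical utilities): two utility functions,
    # each with diminishing returns in its own cause's funding, make
    # an interior split optimal.
    import math

    def total_utility(x):
        """x = fraction of budget to short-term work; 1 - x to long-term."""
        return math.sqrt(x) + math.sqrt(1 - x)  # concave in each cause

    best = max((i / 100 for i in range(101)), key=total_utility)
    print(best)  # 0.5 -- an interior split beats all-or-nothing

With unequal weights on the two functions, the optimum shifts (toward, say, the ~40/60 split described above) but generally remains interior.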

Carl Shulman has suggested a similar approach for those whose fuzzies are more attached to saving human lives in the near term, such as by donating to the Against Malaria Foundation (AMF):

AMF clearly saves lives in the short run. If you give that substantial weight rather than evaluating everything solely from a "view from nowhere" long run perspective where future populations are overwhelmingly important, then it is clear AMF is good. It is an effective way to help poor people today and unlikely to be a comparably exceptional way to make the long run worse. If you were building a portfolio to do well on many worldviews or for moral trade it would be a strong addition.

Occam's imaginary razor

I don't intend offense to anyone in particular, but I think many people who deny arguments for working on far-future speculation do so because they claim that far-future work in fact doesn't have higher expected value. Typically the rejections focus on the difficulty of knowing whether our actions will be good or bad, the possibility that our actions won't have any lasting effects, or the hypothesis that short-term change will ultimately have more far-future impact because it's more palatable to mainstream society. All of these are important points, and in some cases, they may indeed deflate the importance of far-future work. But they aren't silver bullets, and the case for the far future is unlikely to be wholly defeated by any given objection.

My position, in contrast, is that I acknowledge the epistemic force of far-future arguments but maintain some commitment to short-term helping as an intrinsic spiritual impulse. Along the lines of Occam's imaginary razor, this allows me to avoid distorting my beliefs about the far-future question based on emotional pulls to stop torture-level suffering in the present. In the face of emotion-based cognitive dissonance, it's often better to change your values than to change your beliefs.

Bounded utility functions can still favor far future

Carl Shulman notes that even if we think future generations have less moral urgency because of diminishing marginal moral weight for additional people, we should also think that present generations have relatively low weight compared with the first humans or first animals. As a result, far-future arguments may still go through even on views that ethically discount the future.

I have two replies:

  1. Views postulating diminishing marginal weight are designed to justify our intuitions. They weren't developed because they make sense in the abstract (except maybe to force convergence of infinite sums). So if these views yield conclusions other than what they were designed to yield, then they're not working properly. They could be replaced by a moral view that favors organisms near to us right now, rather than organisms nearer to some notional starting line for intelligent life on Earth.
  2. That said, I think my "help animals now" impulse might—to the extent this raw emotion can be properly considered a formal theory at all—be better described as risk/ambiguity aversion than diminishing marginal weight for later organisms. Since the far future is inherently more risky, Shulman's argument can't rescue it in this case.

Application to specific charities

How do FRI and HSA compare? The above arguments apply to them broadly, but following are some additional thoughts.

In favor of FRI:

  1. Its research directly targets the far future, where the scale of potential suffering is vastly larger, as the earlier expected-value sketch suggests.
  2. It examines how big-picture discoveries affect the best strategies for reducing suffering, rather than doing philosophy for its own sake.

In favor of HSA:

  1. It demonstrably reduces torture-level suffering for millions of animals in the short run.
  2. Its humane-slaughter reforms are uncontroversial to ~90% of the population and help make animal welfare more mainstream.

A charity that somewhat straddles the short- vs. long-term divide is Animal Ethics (AE). It has the advantage over HSA of promoting non-mainstream ideas about wild-animal suffering that are very important for the future of the animal-rights movement. A disadvantage is that, like FRI, it mainly occupies the realm of ideas and doesn't achieve any concrete policy changes.

I should also add that HSA and AE are the only two animal charities I feel comfortable supporting. This is because I'm uncertain whether vegetarianism prevents net animal suffering, and I worry that generic animal-rights organizations may increase enthusiasm for wilderness conservation. Worse, some animal charities actively lobby for habitat preservation, predator reintroduction, and so on.

Personalizing the question

If short-term and long-term charities are at least competitive in relative value, then your personal talents, passions, and interests can help sway the decision of where to focus your energies. However, this observation is not a blank check to claim that one or the other type of charity is better purely because you have a pre-existing bias for it; your talents and preferences can change to some degree.

Optional: My history with this topic

I first became altruistically motivated in 2000, after hearing a speech by Ralph Nader. From the beginning, I felt passionate about long-run outcomes, in part because Nader emphasized big-picture change: "systemic approaches to systemic injustices." Nader's challenge to the two-party political system by running as a Green and later independent candidate represented a long-shot gamble for producing significant change down the road. Granny D emphasized this point:

As we enter this period of great struggle, let us be willing to have short-term losses for long-term gains. This means that we must vote our hearts and let the chips fall where they may. [...]

Don't think of your vote as a day trader's investment in the candidate of the moment; vote for the long term. Invest in the moral progress of your nation.

In 2004, I wrote a journal entry, "The Trunk of the Problem" (p. 9), encouraging long-run over short-run altruism.

Until 2005, I was (misguidedly) an environmentalist, and I preferred this cause over helping humans in the short run because I thought environmental preservation would reduce more human suffering in the long run. (I wasn't thinking about non-human animals at that point.) Combating climate change is an interesting example of one of the few areas of mainstream altruism whose time scale extends beyond the next few decades.

It wasn't until late 2005 that I learned about astronomical altruism scenarios. Because I was a relatively simple-minded expected-value maximizer at the time, I quickly concluded that these far-future speculations did indeed dominate utilitarian calculations. I've continued to believe something along those lines ever since, but the degree of nuance in my position has deepened, and I now regard the question as far less obvious than I did in the past.