by Brian Tomasik
First written: 2 Nov. 2014; last update: 8 Dec. 2015
This piece describes my views on a few charities. I explain what I like about each charity and what concerns me about it. Currently, my top charity recommendation for someone with values similar to mine is the Foundational Research Institute (an organization that I co-founded and volunteer for).
- 1 Summary
- 2 Introduction
- 3 Rankings
- 4 Explanations
- 4.1 Foundational Research Institute (FRI)
- 4.2 Effective Altruism Foundation (EAF) (in Switzerland)
- 4.3 Machine Intelligence Research Institute (MIRI)
- 4.4 Animal Ethics (AE)
- 4.5 An insect charity (doesn't yet exist)
- 4.6 Animal Charity Evaluators (ACE)
- 4.7 Humane Slaughter Association (HSA)
- 4.8 Future of Humanity Institute (FHI)
- 5 My donation plans
- 6 Older version of this piece
It's hard to find charities that align with my values, because either their goals differ from mine (the typical case) or, when their goals do align, they often miss important pitfalls that could undercut the value of their work. This piece aims to paint a picture of how I rank a few notable charities relative to my values and beliefs. Most of the ranking scores are based on ideological alignment and overarching strategy, rather than good management or effective employees, because picking the overall direction of work seems to me the most crucial thing to get right. Productivity in the wrong direction is not very useful.
In the following table, I've rated charities both according to how much expected suffering I think they reduce in risk-neutral, "cold and calculating" terms and according to how "spiritually meaningful" they feel based on their abilities to clearly reduce suffering in the short term rather than gambling on speculative scenarios about far-future possibilities. The division is partly inspired by my own psychology and partly by Eliezer Yudkowsky's "Purchase Fuzzies and Utilons Separately".
The error bars specified by the "+/-" in the "utilons" column might be something like 75% confidence intervals, but they're not intended to be at all precise.
| Charity | Expected impartial value ("utilons") | Expected spiritual value ("fuzzies") |
| --- | --- | --- |
| Foundational Research Institute | 200 +/- 400 | 50 |
| Effective Altruism Foundation (EAF) (in Switzerland) | 150 +/- 300 | 25 |
| Machine Intelligence Research Institute | 90 +/- 300 | 10 |
| An insect charity (doesn't yet exist) | 20 +/- 150 | 25 |
| Animal Ethics | 40 +/- 150 | 50 |
| Animal Charity Evaluators | 5 +/- 100 | 5 |
| Humane Slaughter Association | 10 +/- 30 | 100 |
| Future of Humanity Institute | 0 +/- 300 | -20 |
This section explains the motivations for the "utilons" estimates in the preceding table. The "fuzzies" estimates were easier to pin down because they're visceral, although they're somewhat informed by the utilons estimates (as in the case of FRI), combined with the directness and clarity of the charity's suffering-reduction work.
Foundational Research Institute (FRI)
- Studies the most important questions, and unlike all other far-future organizations of which I'm aware, FRI is foremost focused on reducing suffering. FRI's challenge is to consolidate the many insights that other organizations, academics, and individuals have already developed and analyze what they imply about where suffering reducers should push.
- Not yet a notable organization.
- May be hard to find top talent because so few people share my primary focus on reducing suffering.
Effective Altruism Foundation (EAF) (in Switzerland)
- Building a movement of people concerned with reducing suffering, some of whom will spill over into FRI and other areas that have highest priority.
- Less targeted toward the most important issues than FRI specifically.
Machine Intelligence Research Institute (MIRI)
- Studying AI scenarios and design principles seems close to the best thing altruists can do, and it appears more likely than not that controlled AI will reduce net expected suffering.
- MIRI recognizes the importance of cooperation among competing value systems. For example, MIRI promotes the ideal of shaping AI values in a democratic way rather than pushing for AI with MIRI's specific flavor of consequentialism. And MIRI studies game/decision-theoretic issues regarding cooperation on prisoner's dilemmas and how to divide gains from trade.
- MIRI focuses on risks from uncontrolled AIs that would probably colonize space if they were created, so MIRI's work doesn't necessarily increase the probability of Earth-originating space colonization very much.
- It might turn out that controlling AI increases net expected suffering, in which case MIRI's work would be harmful.
- MIRI promotes scenarios like filling the galaxy with sentience as good outcomes. It's clustered with other "existential risk" work, some of which may be bad from the standpoint of suffering reduction. (On the other hand, MIRI may also take "existential risk" people away from less savory projects.)
- Filling MIRI's funding gaps might lead more people to fund non-AI "existential risk" projects instead, which could be bad. (On the other hand, helping MIRI expand and become more popular might increase its long-term room for funding.)
- Some types of "AI safety" work might indeed increase the probability that suffering-spreading space colonization eventually occurs. For instance, generic work against automation disasters and risks from out-of-control nano-machines probably makes it more likely that Earth-originating intelligence will eventually colonize the galaxy.
My current guess is that there's a ~65% chance that MIRI's work is net positive by negative-utilitarian lights and ~35% that it's net negative. But given the high leverage of MIRI's work, the expected benefits of MIRI are still substantial.
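The arithmetic behind this guess can be sketched as a simple expected-value calculation. The probabilities come from the text above, but the payoff magnitudes are my own toy assumptions (chosen symmetric for illustration); with them, the result happens to line up with the 90-utilon figure for MIRI in the earlier table:

```python
# Toy expected-value sketch. The 65%/35% split is from the text; the
# +/-300 payoff magnitudes are hypothetical assumptions for illustration.
p_positive = 0.65          # assumed chance MIRI's work is net positive
p_negative = 0.35          # assumed chance MIRI's work is net negative
value_if_positive = 300    # hypothetical "utilons" if net positive
value_if_negative = -300   # hypothetical "utilons" if net negative

expected_value = (p_positive * value_if_positive
                  + p_negative * value_if_negative)
print(round(expected_value, 2))  # 90.0
```

The point of the sketch is just that even a substantial (here 35%) chance of net harm can leave the expected value well above zero when the intervention is high-leverage in both directions.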
Animal Ethics (AE)
- AE is the only animal charity of which I'm aware that prominently and explicitly discusses wild-animal suffering not caused by humans.
- AE also does some conventional animal-rights advocacy.
- AE can more effectively influence animal advocates in better directions than non-animal organizations can.
- AE's messages about suffering in nature are often cautious and guarded. This could mean that many people influenced by AE will still think wildlife is good to preserve. The usual first instinct when you care about wild animals is, "Don't bulldoze them!" Unfortunately, this may be the wrong stance to take when all factors are considered, but that inferential leap could be too hard for most people to make.
- Doesn't target the far future directly or work on highest-leverage issues like AI.
- Impact on human-extinction scenarios is probably low. Probably the biggest effect is to slightly reduce climate change via encouraging veg*ism, and it's not obvious whether reducing climate change is net good or net bad, both for wild animals in the short run and with respect to astronomical suffering in the long run. At best the expected value of this effect is probably roughly zero compared with the more targeted impacts of Animal Ethics's work.
An insect charity (doesn't yet exist)
- I worry that increased concern for insects might lead people to favor preserving big insect populations, in the same way that concern for humans typically leads people to favor preserving large human populations -- ignoring the fact that most insects probably have terrible lives. People might protest against insecticides even if they reduce net insect suffering. (Whether they do is unclear to me.)
- If the charity lobbied against entomophagy, this might increase rather than decrease interest in that cruel practice, because any press is good press for entomophagy companies and bad press for insect welfare.
Animal Charity Evaluators (ACE)
- Does valuable research and has solid staff.
- In the long run, may multiply donations to good charities relative to just donating to those charities directly.
- ACE currently supports veg charities and will probably always support relatively conventional animal charities. I worry whether these might cause harm by increasing environmentalist sentiments. Also, veg*ism doesn't obviously reduce suffering in the short run, though I think it's more likely good than bad on balance, especially when the far future is considered.
- Doesn't target the far future directly or work on highest-leverage issues like AI.
Humane Slaughter Association (HSA)
I wrote more about HSA here.
- Plausibly reduces the pain of millions of slaughters per year in expectation.
- Insofar as humane slaughter keeps constant the number of farmed animals, it's a "safer bet" than veg*ism because it doesn't rely on assumptions about whether crop cultivation and climate change are net good or bad for wild animals.
- Personally I find humane slaughter to be a helpful meme because it shows that you can take moderate steps to improve the welfare of less powerful beings without going to extremes. This kind of thinking will be important in the future when we need to make tradeoffs about how much to run suffering computations that are economically valuable.
- Doesn't directly target the far future and high-leverage scenarios like AI.
- Many of my friends worry that humane slaughter is actually a harmful meme because it encourages people not to care about animals in the same way they do about humans. I personally don't find this argument compelling, but ultimately it's an empirical question how people at large are affected by different messages.
- Impact on human-extinction scenarios is probably low. Probably the biggest effect is through HSA's effects on climate change from meat consumption, but it's not clear whether HSA increases or decreases meat consumption on balance. It's also not obvious whether reducing climate change is net good or net bad, both for wild animals in the short run and with respect to astronomical suffering in the long run.
Future of Humanity Institute (FHI)
- Similar pros as for MIRI above regarding AI.
- Explores crucial considerations for the far future that can benefit many value systems at once.
- Works on non-AI existential risks, which may or may not be good from the standpoint of suffering reduction.
- Like MIRI, FHI promotes memes about spreading humanity to the stars, which will likely increase suffering.
- Work on "Human Enhancement" seems not terribly important relative to other questions.
- Because FHI faces pressure to publish formally, it may be less cautious about what it releases. For instance, did "Whole Brain Emulation: A Roadmap" help to speed up the technology? (I gather Anders Sandberg thinks speeding up whole-brain emulation may be good because it would reduce discontinuities in tech progress, but others disagree.)
My donation plans
If I resume earning to give, I might donate something like this:
- $60K/year to HSA for fuzzies value, to avoid feeling like I'm ignoring clear suffering in front of me
- $40K/year to FRI or EAF depending on room for funding
- $5K/year to MIRI depending on room for funding
- $5K/year to Animal Ethics for fuzzies value, to avoid feeling like I'm ignoring clear suffering in front of me
- $20K/year to my donor-advised fund, saving until better giving opportunities come along later
- the remainder to my private savings for retirement, charity, or non-charity altruism projects down the road.
Tax-deductibility for charitable donations is capped at 50% of adjusted gross income in the USA, so there's no benefit to donating all income after costs of living to my donor-advised fund. However, HSA is not currently deductible in the USA, so I might have to give to HSA out of the after-tax dollars that I can't deduct due to hitting the 50%-of-income donation limit.
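The cap described above is simple arithmetic, sketched below with made-up figures (actual US tax rules have many more wrinkles, e.g. different percentage limits for different charity types and carryover provisions):

```python
# Toy sketch of the US charitable-deduction cap: deductions are limited to
# a fraction (here 50%) of adjusted gross income (AGI). All figures below
# are hypothetical, for illustration only.
def deductible_amount(donations, agi, cap_fraction=0.5):
    """Return the portion of donations that can be deducted this year."""
    return min(donations, cap_fraction * agi)

agi = 200_000        # hypothetical adjusted gross income
donations = 130_000  # hypothetical total charitable donations

print(deductible_amount(donations, agi))  # min(130000, 100000) = 100000
```

On these toy numbers, $30,000 of the donations would be non-deductible, which is why additional giving beyond the cap (such as to a non-deductible charity like HSA) comes out of after-tax dollars either way.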
Older version of this piece
The current essay is an updated version of "My Donations: Past and Present".