Note from :
The details in this piece are slightly outdated. Maybe I'll update this page at some point, but for now, here's a quick summary of my current views.
In terms of maximizing expected suffering reduction over the long-run future, my top recommendation is the Center for Reducing Suffering (CRS), closely followed by the Center on Long-Term Risk (CLR). (I'm an advisor to both of them.) I think both of these organizations do important work, but CRS is more in need of funding currently.
CRS and CLR do research and movement building aiming to reduce risks of astronomical suffering in the far future. This kind of work can feel very abstract, and it's difficult to know if your impact is even net good on balance. Personally I prefer to also contribute some of my resources toward efforts that more concretely reduce suffering in the short run, to avoid feeling like I'm possibly wasting my life on excessive speculation. For this reason, I plan to donate my personal wealth over time toward charities that work mainly or exclusively on improving animal welfare. (I prefer welfare improvements over reducing meat consumption because the sign of the latter for wild-animal suffering is unclear.) The Humane Slaughter Association is my current favorite. Many of the charities that receive grants from the EA Funds Animal Welfare Fund also do high-impact animal welfare work. I donate a bit to Animal Ethics as well.
Update from :
My top two charities for reducing short-term suffering are now
- Shrimp Welfare Project, which has been enormously successful so far in reducing the painfulness of shrimp slaughter on a small budget.
- Legal Impact for Chickens, which reduces both moderate and extreme suffering by the most numerous type of highly sentient farmed animal.
I still think the Humane Slaughter Association and Animal Ethics are good choices as well.
There are so many effective-altruism-associated animal charities these days that I can't keep track of them all, and probably there are some other excellent ones that I'm missing. I prefer not to donate to animal charities that spend nontrivial resources on reducing beef consumption (in case beef decreases net wild-animal suffering, which may or may not be the case). I also tend to shy away from organizations that use environmentalist/sustainability language in case this reinforces the idea that more nature is better, but I don't know how much this matters in practice. Some welfarist organizations seem to focus mainly on mild suffering during an animal's life, whereas I prefer charities that focus on extreme suffering—especially slaughter, which is often the worst event that a farmed animal experiences.
Summary
This piece describes my views on a few charities. I explain what I like about each charity and what concerns me about it. Currently, my top charity recommendation for someone with values similar to mine is the Foundational Research Institute (an organization that I co-founded and volunteer for).
Contents
- Summary
- Introduction
- Rankings
- Explanations
- Foundational Research Institute (FRI)
- Effective Altruism Foundation (EAF) (in Switzerland)
- Machine Intelligence Research Institute (MIRI)
- Animal Ethics (AE)
- An insect charity (doesn't yet exist)
- Against Malaria Foundation (AMF)
- Animal Charity Evaluators (ACE)
- Humane Slaughter Association (HSA)
- Future of Humanity Institute (FHI)
- My donation plans
- Older version of this piece
Introduction
It's hard to find charities that align with my values: either a charity's goals differ from mine (the typical case), or, when its goals do align, I think it often misses important pitfalls that could undercut the value of its work. This piece aims to paint a picture of how I rank a few notable charities relative to my values and beliefs. Most of the ranking scores are based on ideological alignment and overarching strategy, rather than good management or effective employees, because picking the overall direction of work seems to me the most crucial thing to get right. Productivity in the wrong direction is not very useful.
Rankings
In the following table, I've rated charities both according to how much expected suffering I think they reduce in risk-neutral, "cold and calculating" terms as well as how "spiritually meaningful" they feel based on their abilities to clearly reduce suffering in the short term rather than gambling on speculative scenarios about far-future possibilities. The division is partly inspired by my own psychology and partly by Eliezer Yudkowsky's "Purchase Fuzzies and Utilons Separately".
The error bars specified by the "+/-" in the "utilons" column might be something like 75% confidence intervals, but they're not intended to be at all precise.
| Charity | Expected impartial value ("utilons") | Expected spiritual value ("fuzzies") |
| --- | --- | --- |
| Foundational Research Institute | 200 +/- 400 | 50 |
| Effective Altruism Foundation (EAF) activities besides FRI | 170 +/- 300 | 50 |
| Machine Intelligence Research Institute | 35 +/- 300 | 10 |
| Animal Ethics | 40 +/- 80 | 50 |
| An insect charity (doesn't yet exist) | 25 +/- 75 | 25 |
| Against Malaria Foundation | 2 +/- 50 | 6 |
| Animal Charity Evaluators | 10 +/- 100 | 10 |
| Humane Slaughter Association | 20 +/- 50 | 75 |
| Future of Humanity Institute | 0 +/- 300 | 0 |
Explanations
This section explains the motivations for the "utilons" estimates in the preceding table. The "fuzzies" estimates were easier to pin down because they're visceral, although they're somewhat informed by the utilons estimates (as in the case of FRI), combined with the directness and clarity of the charity's suffering-reduction work.
Foundational Research Institute (FRI)
- Pro:
- Studies the most important questions, and unlike all other far-future organizations of which I'm aware, FRI is foremost focused on reducing suffering. FRI's challenge is to consolidate the many insights that other organizations, academics, and individuals have already developed and analyze what they imply about where suffering reducers should push.
- Con:
- Not yet a notable organization.
- May be hard to find top talent because so few people share my primary focus on reducing suffering.
Effective Altruism Foundation (EAF) (in Switzerland)
- Pro:
- Building a movement of people concerned with reducing suffering, some of whom will spill over into FRI and other areas that have highest priority.
- Con:
- Less targeted toward the most important issues than FRI specifically.
Machine Intelligence Research Institute (MIRI)
- Pro:
- Studying AI scenarios and design principles seems close to the best thing altruists can do, and it appears more likely than not that controlled AI will reduce net expected suffering.
- MIRI recognizes the importance of cooperation among competing value systems. For example, MIRI promotes the ideal of shaping AI values in a democratic way rather than pushing for AI with MIRI's specific flavor of consequentialism. And MIRI studies game/decision-theoretic issues regarding cooperation on prisoner's dilemmas and how to divide gains from trade.
- MIRI focuses on risks from uncontrolled AIs that would probably colonize space if they were created, so MIRI's work doesn't necessarily increase the probability of Earth-originating space colonization very much.
- Con:
- It might turn out that controlling AI increases net expected suffering, in which case MIRI's work would be harmful.
- MIRI advances the (not very mainstream) idea that we should create astronomical amounts of sentience in the future and that failing to do so would be a great moral loss. Peli Grietzer writes regarding the rationalist community of which MIRI is a part: "Almost every philosophically informed person I know outside the rationalist community accepts some form of asymmetry thesis to the effect that extra good lives aren’t in themselves a major improvement of the world, whereas no one I know of in the rationalist community does." Unfortunately, ensuring that astronomical amounts of sentience get created increases the likelihood that astronomical amounts of suffering get created.
- MIRI is clustered with other "existential risk" organizations, some of whose efforts may be bad from the standpoint of suffering reduction. (On the other hand, MIRI may also take "existential risk" people away from less savory projects.)
- Filling MIRI's funding gaps might lead more people to fund non-AI "existential risk" projects instead, which could be bad. (On the other hand, helping MIRI expand and become more popular might increase its long-term room for funding.)
- Some types of "AI safety" work might indeed increase the probability that suffering-spreading space colonization eventually occurs. For instance, generic work against automation disasters and risks from out-of-control nano-machines probably makes it more likely that Earth-originating intelligence will eventually colonize the galaxy.
My current guess is that there's a ~62% chance that MIRI's work is net positive by negative-utilitarian lights and ~38% that it's net negative. But given the high leverage of MIRI's work, the expected benefits of MIRI are still substantial.
In 2018, I reduced the utilon value of MIRI a bit because MIRI became less funding-constrained than in the past, while charities like FRI remain relatively funding-constrained.
Animal Ethics (AE)
- Pro:
- AE is the only animal charity of which I'm aware that explicitly and prominently discusses wild-animal suffering not caused by humans.
- AE also does some conventional animal-rights advocacy.
- AE can more effectively influence animal advocates in better directions than non-animal organizations can.
- Con:
- AE's messages about suffering in nature are often cautious and guarded. This could mean that many people influenced by AE will still think wildlife is good to preserve. The usual first instinct when you care about wild animals is, "Don't bulldoze them!" Unfortunately, this may be the wrong stance to take when all factors are considered, but that inferential leap could be too hard for most people to make.
- Doesn't target the far future directly or work on highest-leverage issues like AI.
- Unclear:
- Impact on human-extinction scenarios is probably low. Probably the biggest effect is to slightly reduce climate change via encouraging veg*ism, and it's not obvious whether reducing climate change is net good or net bad, both for wild animals in the short run and with respect to astronomical suffering in the long run. In any case, the expected value of this effect is probably roughly zero compared with the more targeted impacts of Animal Ethics's work.
An insect charity (doesn't yet exist)
- Pro:
- Con:
- I worry that increased concern for insects might lead people to favor preserving big insect populations, in the same way that concern for humans typically leads people to favor preserving large human populations—ignoring the fact that most insects probably have terrible lives. People might protest against insecticides even if they reduce net insect suffering. (Whether they do is unclear to me.)
- If the charity lobbied against entomophagy, this might only increase interest in that cruel practice rather than decrease it, because for entomophagy companies, any press is good press, while the same publicity does little for insect welfare.
Against Malaria Foundation (AMF)
- Pro:
- Plausibly reduces invertebrate populations (although this should be verified by further research).
- Con:
- No messaging about the importance of animal suffering.
Animal Charity Evaluators (ACE)
- Pro:
- Does valuable research and has solid staff.
- In the long run, may multiply donations to good charities relative to just donating to those charities directly.
- Con:
- ACE currently supports veg charities and will probably always support relatively conventional animal charities. I worry that these might cause harm by increasing environmentalist sentiments. Also, veg*ism doesn't obviously reduce suffering in the short run, though I think it's more likely good than bad on balance, especially when the far future is considered.
- Doesn't target the far future directly or work on highest-leverage issues like AI.
Humane Slaughter Association (HSA)
I wrote more about HSA here.
- Pro:
- Plausibly reduces the pain of millions of slaughters per year in expectation.
- Insofar as humane slaughter keeps constant the number of farmed animals, it's a "safer bet" than veg*ism because it doesn't rely on assumptions about whether crop cultivation and climate change are net good or bad for wild animals.
- Personally I find humane slaughter to be a helpful meme because it shows that you can take moderate steps to improve the welfare of less powerful beings without going to extremes. This kind of thinking will be important in the future when we need to make tradeoffs about how much to run suffering computations that are economically valuable.
- Con:
- Doesn't directly target the far future and high-leverage scenarios like AI.
- Many of my friends worry that humane slaughter is actually a harmful meme because it encourages people not to care about animals in the same way they do about humans. I personally don't find this argument compelling, but ultimately it's an empirical question how people at large are affected by different messages.
- Unclear:
- Impact on human-extinction scenarios is probably low. Probably the biggest effect is through HSA's effects on climate change from meat consumption, but it's not clear whether HSA increases or decreases meat consumption on balance. It's also not obvious whether reducing climate change is net good or net bad, both for wild animals in the short run and with respect to astronomical suffering in the long run.
Future of Humanity Institute (FHI)
- Pro:
- Similar pros as for MIRI above regarding AI.
- Explores crucial considerations for the far future that can benefit many value systems at once.
- Con:
- Works on non-AI existential risks, which may or may not be good from the standpoint of suffering reduction.
- Like MIRI, FHI promotes memes about spreading humanity to the stars, which will likely increase suffering.
- Work on "Human Enhancement" seems not terribly important relative to other questions.
- Because it faces pressure to publish formally, FHI may be less cautious about what information it releases. For instance, did "Whole Brain Emulation: A Roadmap" help to speed up the technology? (I guess Anders Sandberg thinks it may be good to speed up whole-brain emulation to reduce discontinuities in tech progress, but others disagree.)
My donation plans
If I resume earning to give, I might donate something like this:
- $60K/year to HSA for fuzzies value, to avoid feeling like I'm ignoring clear suffering in front of me
- $40K/year to FRI or EAF depending on room for funding
- $5K/year to MIRI depending on room for funding
- $5K/year to Animal Ethics for fuzzies value, to avoid feeling like I'm ignoring clear suffering in front of me
- $20K/year to my donor-advised fund, saving until better giving opportunities come along later
- the remainder to my private savings for retirement, charity, or non-charity altruism projects down the road.
Tax-deductibility for charitable donations is capped at 50% of adjusted gross income in the USA, so there's no benefit to donating all income after costs of living to my donor-advised fund. However, HSA is not currently deductible in the USA, so I might have to give to HSA out of the after-tax dollars that I can't deduct due to hitting the 50%-of-income donation limit.
Older version of this piece
The current essay is an updated version of "My Donations: Past and Present".