by Brian Tomasik
First written: 2008; last updated: 28 Aug. 2014
This page reviews some of the major insights that I've gleaned from various academic subjects. I would especially recommend these topics to students just beginning to learn about altruism. This list is far from exhaustive, and I haven't necessarily kept it up-to-date because there are just too many important things to read for me to list them all here.
I save some interesting articles that either I have read or would like to read on my Diigo page.
Below are some authors whose writings I've found especially worthwhile and transformative, together with particular topics that they cover well. I've cited some of their best individual articles in the hyperlinks of subsequent sections. I don't necessarily endorse (and sometimes oppose) specific ethical views or recommendations by some of these authors.
- Nick Bostrom (infinite ethics, future of human evolution, astronomical stakes of the far future, degrees of experience)
- Eliezer Yudkowsky (intelligence explosion, cognitive biases, reductionism, philosophical zombies, Solomonoff induction, clusters in thingspace, fake explanations)
- Gary Drescher (free will, many-worlds interpretation of quantum mechanics, reductionist understanding of consciousness)
- Paul Almond (naturalism, reductionist approach to consciousness)
- David Pearce (wild-animal suffering, general transhumanism, replacing pleasure-pain axis with gradients of happiness)
- Robin Hanson (Aumann's disagreement theorem, AI emulation scenarios, information markets)
- Peter Singer (obligation to assist, general animal suffering)
- Yew-Kwang Ng (wild-animal suffering, hedonistic vs. preference utilitarianism)
- Max Tegmark (levels of multiverses)
- Oscar Horta (wild-animal suffering)
A key notion is the idea of maximizing an objective function subject to constraints. This shows up in microeconomics with the consumer's utility-maximization problem and the firm's profit-maximization problem. A lot of good decision-analysis tools are taught in the context of strategic corporate finance, where the objective function is the net present value of the firm. The notion of the time value of money is itself an important concept, as is the efficient-market hypothesis (EMH). (EMH may not be strictly correct, but it probably is in the practical sense: that unless you spend a substantial fraction of your life researching investments, you may as well pick stocks within a given risk class at random.)
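The time value of money can be made concrete with a small net-present-value calculation. This is a minimal sketch with made-up cash flows, not a recommendation about any actual investment:

```python
# Sketch: net present value as a worked example of the time value of money.
def npv(rate, cash_flows):
    """Discount each period's cash flow back to the present and sum.

    cash_flows[0] occurs now, cash_flows[1] one period from now, etc.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A hypothetical project: pay 100 now, receive 40 per year for three years.
flows = [-100, 40, 40, 40]
print(round(npv(0.05, flows), 2))  # positive at a 5% discount rate: 8.93
```

The same project has negative NPV at a sufficiently high discount rate, which is the whole point: a dollar later is worth less than a dollar now, and how much less depends on the rate.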
When you have a fixed budget of resources (say, a fixed amount to donate), it generally suffices to choose the option with the highest marginal utility, which leads naturally to the discipline of cost-effectiveness analysis. Health economists, for instance, often report the DALYs averted per dollar for various interventions. Similar principles can be applied to some degree in charitable giving more generally, though it's also important to consider qualitative factors.
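The "fixed budget, highest marginal utility" rule can be illustrated with a toy comparison. The interventions and cost figures below are invented purely for illustration; real cost-effectiveness estimates are far more uncertain:

```python
# Toy cost-effectiveness comparison: cost per DALY averted (made-up numbers).
interventions = {
    "bednets": 50,     # dollars per DALY averted (hypothetical)
    "deworming": 80,
    "surgery": 400,
}

budget = 10_000
# With a fixed budget and constant returns, fund the cheapest DALYs first.
best = min(interventions, key=interventions.get)
dalys = budget / interventions[best]
print(best, dalys)  # bednets 200.0
```

In reality marginal cost per DALY rises as the cheapest opportunities are exhausted, which is why the constant-returns version above is only a first approximation.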
Decision-making under uncertainty is also important. The von Neumann-Morgenstern theorem shows that any agent whose preferences satisfy certain axioms must act as if she's maximizing the expected value of some utility function. This is a representation theorem, so it doesn't automatically imply that the utility function in question corresponds to subjective well-being. However, Yew-Kwang Ng has argued that it in fact does. In any event, I think it's normatively clear that we ought to maximize the expected value of subjective welfare, summed over all organisms that experience such emotions. There are complications in making this specific (What welfare number does a given experience have? What physical operations count as emotions?), but they don't render the project infeasible.
Most people are risk-averse because of diminishing marginal utility of income, or so the standard model says. People are also scope insensitive, perhaps because of diminishing marginal utility as a function of good accomplished. (Helping 2,000 rats doesn't feel twice as good as helping 1,000.) Indeed, many have faulty intuitions about diminishing marginal utility of utility that lead them to be risk-averse when they ought to be risk-neutral.
The rule of maximizing expected value leaves open the question: Where do the probabilities come from? The answer is given by Bayesian epistemology. Bayes' theorem takes a little bit of time to understand fluently, but the effort is well worth it. An understanding of measure theory and such concepts as countable vs. uncountable infinities can be enlightening but isn't strictly necessary.
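The standard diagnostic-test example shows why Bayes' theorem takes some time to internalize: even a fairly accurate test for a rare condition yields mostly false positives. The numbers below are made up for illustration:

```python
# Bayes' theorem on a textbook diagnostic-test example (made-up numbers).
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    # Total probability of testing positive, with or without the condition.
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Rare condition (1% base rate), 90% sensitive test, 5% false-positive rate.
print(round(posterior(0.01, 0.9, 0.05), 3))  # about 0.154
```

Despite the "90% accurate" test, the posterior is only about 15%, because the 5% false-positive rate acts on the much larger healthy population. Neglecting the prior here is the classic base-rate fallacy.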
I like the information partition framework for probability, in which we start with a set of complete descriptions of possible states of the world, and each new fact we learn cuts down that set to the subset of possible states in which that fact is true. This is, for instance, the model used by Robert Aumann in his important paper on whether we ought to have common prior probabilities.
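The partition picture is easy to mechanize. A minimal sketch in which "worlds" are just truth-value assignments to two propositions I made up:

```python
# Minimal sketch of the information-partition picture: each learned fact
# cuts the set of possible worlds down to those in which the fact holds.
from itertools import product

# Worlds: complete truth-value assignments to (rain, cold).
worlds = set(product([True, False], repeat=2))

def learn(possible, fact):
    """Keep only the worlds consistent with the newly learned fact."""
    return {w for w in possible if fact(w)}

possible = learn(worlds, lambda w: w[0])        # learn "it is raining"
possible = learn(possible, lambda w: not w[1])  # learn "it is not cold"
print(possible)  # a single world remains: {(True, False)}
```

Conditioning a probability distribution works the same way: zero out the excluded worlds and renormalize over what remains.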
Indeed, the standard complaint about Bayesian statistics is that your prior probability distribution is arbitrary. This is somewhat a restatement of the fundamental problem of epistemological uncertainty in philosophy, but I do think some priors are better than others. In particular, Occam's razor is a fundamental principle that I think our priors ought to reflect, and algorithmic information theory seems like a promising approach for formalizing that intuition.
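The algorithmic-information-theory idea can be caricatured in a few lines: weight each hypothesis by 2 to the minus its description length, so that simpler hypotheses start with higher priors. The "descriptions" below are just labeled bit lengths I invented, a stand-in for real program lengths, which are uncomputable in general:

```python
# Toy Occamian prior: weight each hypothesis by 2^(-description length),
# in the spirit of algorithmic information theory. The hypothesis names
# and bit lengths here are invented for illustration.
hypotheses = {"h0": 1, "h1": 2, "h2": 3, "h3": 3}  # name -> length in bits

raw = {h: 2.0 ** -length for h, length in hypotheses.items()}
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}  # normalize to sum to 1

print(prior)  # shorter descriptions receive higher prior probability
```

Solomonoff induction develops this idea rigorously, summing over all programs that reproduce the observed data rather than over a hand-picked finite list.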
We probably live in a multiverse of some type in which, for instance, infinitely many copies of you exist. This leads to the question of how to define, say, reductions in suffering when the expected amount of suffering in the multiverse is infinite.
The block universe conception of time implies that past suffering is just as real as that in the present and future, though there appears to be little we can do to affect it. The many-worlds interpretation of quantum mechanics, if correct, implies that our universe is deterministic and that, moreover, there exist hell-like branches of the multiverse containing suffering as bad as is physically possible; these, too, appear beyond our control and so shouldn't use up too much of our cognitive resources. (The hell worlds can be even worse in a Tegmarkian level-4 multiverse, but similar qualifications apply.)
The mathematical universe hypothesis of Max Tegmark is fun to think about. So is Jürgen Schmidhuber's idea of an ensemble of computable universes.
From a consequentialist perspective, there is no intrinsic distinction between doing harm and failing to do good: Decreasing an organism's utility from 0 to -5 has the same effect as failing to take advantage of the opportunity to increase a suffering organism's utility from -5 to 0. The status quo is arbitrary and so has no claim to being a preferred option to fall back to. (This is the problem with Pareto efficiency.) Of course, there may often be instrumental reasons to respect the status quo.
Ethicists have spilled much ink trying to solve the is-ought problem, but without success (see Singer's "The Triviality of the Debate Over 'Is-Ought' and the Definition of 'Moral'"), because fundamentally, justification of an ethical claim reduces to explaining why I care about it.
David Hume was a philosopher with an extraordinary degree of insight relative to his times. Many modern ideas, not only in ethics but also in metaphysics, epistemology, personal identity, etc., Hume got almost exactly right by the lights of the modern naturalist worldview.
Yew-Kwang Ng has examined the predominance of suffering among r-selected species. Stephen Jay Gould's "Nonmoral Nature" is a more casual and detailed examination of cruelty in nature, with a focus on ichneumon wasps. Oscar Horta has written several pieces on disvalue in nature, including a bibliography of papers relevant to wild-animal suffering.
Nick Cooney's book Change of Heart discusses psychology studies in the context of what causes people to modify how they think and act. Another classic in this field is Dale Carnegie's How to Win Friends and Influence People.