by Brian Tomasik
First written: 14 Jan. 2015; last update: 30 Apr. 2017

Introduction

Following is a quick summary of my beliefs on various propositions and my moral values. The topics include philosophy, physics, artificial intelligence, and animal welfare. A few of the questions are drawn from The PhilPapers Surveys. Sometimes I link to essays that justify the beliefs further. Even if I haven't taken the time to defend a belief, I think sharing my subjective probability for it is an efficient way to communicate information. What a person believes about a proposition may be more informative than any single object-level argument, because a probability assessment aggregates many facts, intuitions, and heuristics together.

Pablo Stafforini has written his own version of this piece.

Beliefs

Belief (probability)
"Aesthetic value: objective or subjective?" Answer: subjective 99.5%
"Abstract objects: Platonism or nominalism?" Answer: nominalism 99%
Compatibilism on free will 98%
Moral anti-realism 98%
Artificial general intelligence (AGI) is possible in principle 98%
Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it 80%
Earth will eventually be controlled by a singleton of some sort 72%
Soft AGI takeoff 70%
Eternalism on philosophy of time 70%
Human-inspired colonization of space will cause more suffering than it prevents if it happens 69%
Type-A physicalism regarding consciousness 69%
"Science: scientific realism or scientific anti-realism?" Answer: realism 68%
Rare Earth explanation of Fermi Paradox 67%
By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 67%
A government will build the first human-level AGI, assuming humans build one at all 62%
MIRI reduces net expected suffering 62%
By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past 60%
Electing more liberal politicians reduces net suffering in the far future 55%
At bottom, physics is discrete rather than continuous 53%
Human-controlled AGI in expectation would result in less suffering than uncontrolled 52%
Climate change will cause more suffering than it prevents 50%
The effective-altruism movement, all things considered, reduces rather than increases total suffering in the far future (not counting happiness) 50% [1]
Cognitive closure of some philosophical problems 50%
Faster technological innovation increases net suffering in the far future 50%
Crop cultivation prevents net suffering 50%
Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) 50%
Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) 50%
Faster economic growth will cause net suffering in the far future 43%
Modal realism 40%
The multiverse is finite 40%
Many-worlds interpretation of quantum mechanics (or close kin) 40% [2]
Whole brain emulation will come before de novo AGI, assuming both are possible to build 37% [3]
A full world government will develop before human-level AGI 25%
Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing 15%
Humans will go extinct within millions of years for some reason other than AGI 5% [4]
A design very close to CEV will be implemented in humanity's AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) 5%

Values

While I'm a moral anti-realist, I find the Parliamentary Model of moral uncertainty helpful for thinking about the different and incompatible values that I hold. One might also think in terms of the fraction of one's resources (time, money, social capital) that each of one's values controls. A significant portion of my moral parliament, as revealed by my actual choices, is selfish, even if in theory I would prefer to be perfectly altruistic. Among the altruistic portion of my parliament, what I value roughly breaks down as follows:

Value system (fraction of moral parliament)
Negative utilitarianism focused on extreme suffering 90%
Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) 10%

The following breakdown shows how much I care about suffering at different levels of abstraction.

The kind of suffering that matters most is... (fraction of moral parliament)
hedonic experience 70%
preference frustration 30%

My negative-utilitarian intuitions lean toward a "threshold" view according to which small, everyday pains don't really matter, but extreme pains (e.g., burning in a brazen bull or being disemboweled by a predator while conscious) are awful and can't be outweighed by any amount of pleasure, although they can be compared among themselves. I don't know how I would answer the "torture vs. dust specks" dilemma, but this issue doesn't matter much for practical purposes.

I assess the degree of consciousness of an agent roughly in terms of analytic functionalism, i.e., with a focus on what the system does rather than other factors that don't relate to its idealized computation, such as what it's made of or how quickly it runs. That said, I reserve the right to care about non-functional parts of a system to some degree. For instance, I might give greater moral weight to a huge computer implementing a given subroutine than to a tiny computer implementing the exact same subroutine.

Weighting animals by neuron counts

I feel that the moral badness of suffering by an animal with N neurons is roughly proportional to N^(2/5), based on a crude interpolation of how much I care about different types of animals. By this measure, and based on Wikipedia's neuron counts, a human's suffering at some organism-relative intensity would be about 11 times as bad as a rat's suffering at a comparable organism-relative intensity and about 240 times as bad as a fruit fly's. Note that this doesn't lead to anthropocentrism, though. It's probably much easier to prevent 11 rats or 240 fruit flies from suffering terribly than to prevent the same for one human. For instance, in some buildings over the course of a summer, dozens of rats may be killed, while hundreds of fruit flies may be crushed, drowned, or poisoned.
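
For concreteness, here is a minimal sketch of that calculation under the N^(2/5) rule, using approximate neuron counts (on the order of 86 billion for a human, 200 million for a rat, and 100,000 for a fruit fly); the counts are ballpark figures, not exact values:

```python
# Rough calculation of relative moral weights under the N^(2/5) rule.
# Neuron counts are approximate ballpark figures, not precise data.

NEURON_COUNTS = {
    "human": 86_000_000_000,
    "rat": 200_000_000,
    "fruit fly": 100_000,
}
EXPONENT = 2 / 5

def moral_weight(neurons: int) -> float:
    """Moral badness of suffering, up to a constant factor: N^(2/5)."""
    return neurons ** EXPONENT

human = moral_weight(NEURON_COUNTS["human"])
for animal, n in NEURON_COUNTS.items():
    print(f"human suffering ~ {human / moral_weight(n):.0f}x {animal} suffering")
# Prints ratios of about 1, 11, and 240, matching the numbers above.
```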

My intuitions about the exact exponent on N change a lot over time. Sometimes I use N^(1/2), N^(2/3), or maybe even just N for weighting different animals. Exponents closer to 1 can be motivated by not wanting tiny invertebrates to completely swamp all other animals in moral calculations (Shulman 2015). The same effect could also be achieved with a piecewise moral-weight function of N, such as one that uses a small exponent within the set of mammals and another small exponent within the set of insects but puts a big gap between mammals and insects.
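
Purely as an illustration, one such piecewise scheme might look like the sketch below; the cutoff, exponent, and gap factor are invented for the example, not values I endorse:

```python
# Illustrative piecewise moral-weight function of neuron count N:
# a small exponent within each group, plus a fixed gap between groups.
# The cutoff, exponent, and gap factor are made up for this sketch.

INSECT_MAMMAL_CUTOFF = 10_000_000  # assumed boundary in neuron count
WITHIN_GROUP_EXPONENT = 0.1        # weights vary only mildly within a group
GAP_FACTOR = 100.0                 # fixed multiplier separating the groups

def piecewise_weight(n: int) -> float:
    """Moral weight as a function of neuron count, up to a constant factor."""
    if n < INSECT_MAMMAL_CUTOFF:
        return n ** WITHIN_GROUP_EXPONENT
    return GAP_FACTOR * n ** WITHIN_GROUP_EXPONENT

# A fruit fly (~1e5 neurons) and a honey bee (~1e6) differ by only ~26% here,
# while any mammal outweighs any insect by at least the gap factor of 100.
print(piecewise_weight(100_000), piecewise_weight(1_000_000), piecewise_weight(200_000_000))
```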

Footnotes

  1. Why isn't this number higher? One reason it's not close to 100% is that it's extremely difficult to predict the long-run effects of one's actions. Even if the effective-altruism movement were completely suffering-focused, my probability here might not be more than 60-65% or so. However, the reason my probability sits at 50% rather than higher is that many parts of the effective-altruism movement run contrary to suffering-reduction goals. For example, unlike most people in the world, many effective altruists consider it extremely important to ensure that humanity fills the cosmos with astronomical amounts of sentience. But creating astronomical amounts of sentience in expectation means creating astronomical amounts of suffering, especially if less than perfectly compassionate values control the future. That said, many effective-altruist projects aim to improve the quality of astronomical-sentience futures assuming they happen at all, and these efforts presumably reduce expected suffering.

    Maximizing ideologies like classical utilitarianism, which are more common among effective altruists than among other social groups, seem more willing than common-sense morality to take big moral risks and incur big moral costs for the sake of creating as many blissful experiences as possible. Such ideologies may also aim to maximize creation of new universes if doing so is possible. And so on. Of course, some uncontrolled-AI outcomes would also lead to fanatical maximizing goals, some of which might cause more suffering than classical utilitarianism would, and effective altruism may help reduce the risk of such AI outcomes.

  2. A friend asked me why I place so much confidence in MWI. The main reason is that almost everyone I know of who has written about the topic accepts it: Sean Carroll, Max Tegmark, Scott Aaronson, David Deutsch, David Wallace, David Pearce, Gary Drescher, Eliezer Yudkowsky, etc. Secondarily, unlike the Copenhagen or Bohmian interpretations, it doesn't introduce additional formalism. I'm inclined to take simple math at face value and worry about the philosophy later; this is similar to my view about physicalism and consciousness. I remain confused about many aspects of MWI. For instance, I'm unclear about how to interpret measure if we do away with anthropics, and I don't know what to make of the problem of preferred basis. The main reason I maintain uncertainty about MWI is not that I think the Copenhagen or Bohmian interpretations are likely but that something else entirely may plausibly be true. If that something else turns out to be an extension or small revision of MWI, I consider that to be part of my probability that MWI is true. I interpret "MWI being false" to mean that something radically different from MWI is true. I don't understand the Consistent Histories interpretation well enough to evaluate it, but it seems like a promising sister alternative to MWI.
  3. I formerly gave slightly higher probability to brain emulation. I've downshifted it somewhat after learning more about neuroscience, including mapping of the C. elegans connectome. The McCulloch–Pitts picture of neurons is way too simple and ignores huge amounts of activity within neurons, as well as complex networks of chemical signaling, RNA transcription, etc. To get an appreciation of this point, consider the impressive abilities of unicellular organisms, which lack neurons altogether.

    The difficulty of modeling nervous systems raises my estimate of the difficulty of AGI in general, both de novo and emulation. But humans seem to do an okay job of developing useful software systems without needing to reverse-engineer the astoundingly complicated morass that is biology, which suggests that de novo AGI will probably be easier. As far as I'm aware, most software innovations have come from people making up their own ideas, and very few have relied crucially on biological inspiration.

  4. Discussion of this estimate here.