Summary of My Beliefs and Values on Big Questions

By Brian Tomasik

First published: . Last nontrivial update: .

Introduction

Following is a quick summary of my beliefs on various propositions and my moral values. The topics include philosophy, physics, artificial intelligence, and animal welfare. A few of the questions are drawn from The PhilPapers Surveys. Sometimes I link to essays that justify the beliefs further. Even if I haven't taken the time to defend a belief, I think sharing my subjective probability for it is an efficient way to communicate information. What a person believes about a proposition may be more informative than any single object-level argument, because a probability assessment aggregates many facts, intuitions, and heuristics together.

While a few of the probabilities in this piece are the results of careful thought, most of my numbers are just quick intuitive guesses about somewhat vague propositions. I use numbers only because they're somewhat more specific than words like "probable" or "unlikely". My numbers shouldn't be taken to imply any degree of precision or any underlying methodology more complex than "Hm, this probability seems about right to express my current intuitions...".

Pablo Stafforini has written his own version of this piece.

Note: By "causes net suffering" in this piece, I mean "causes more suffering than is prevented", and the opposite for "prevents net suffering". For example, an action that causes 1 unit of suffering and prevents 4 other units of suffering prevents 3 units of net suffering. I don't mean the net balance of happiness minus suffering. Net suffering is the relevant quantity for a negative-utilitarian evaluation of an action; for negative utilitarians, an action is good if it prevents net suffering.
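To make the bookkeeping concrete, here is a minimal sketch in Python of the definition above; the function name and the numbers are just the example from the note, not anything used elsewhere:

    def net_suffering_prevented(suffering_caused, suffering_prevented):
        """Suffering prevented minus suffering caused; happiness is ignored."""
        return suffering_prevented - suffering_caused

    # The example from the note: cause 1 unit, prevent 4 units -> 3 units net prevented.
    print(net_suffering_prevented(suffering_caused=1, suffering_prevented=4))  # prints 3
    # Under a negative-utilitarian evaluation, an action is good when this value is positive.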

Beliefs

Belief (probability)
"Aesthetic value: objective or subjective?" Answer: subjective 99.5%
"Abstract objects: Platonism or nominalism?" Answer: nominalism 99%
Compatibilism on free will 98%
Moral anti-realism 98%
Artificial general intelligence (AGI) is possible in principle 98%
Humans will eventually build human-level AGI conditional on no other major intervening disruptions to civilization as we know it 80%
Human-inspired colonization of space will cause more suffering than it prevents if it happens 72%
Earth will eventually be controlled by a singleton of some sort 72%
Soft AGI takeoff 70%
Eternalism on philosophy of time 70%
Type-A physicalism regarding consciousness 69%
Rare Earth explanation of Fermi Paradox 67%
By at least 10 years before human-level AGI is built, debate about AGI risk will be as mainstream as global warming is in 2015 67%
A government will build the first human-level AGI, assuming humans build one at all 62%
By 2100, if biological humans still exist, most of them will regard factory farming as a great evil of the past 60%
The Foundational Research Institute reduces net suffering in the far future 58%
The Machine Intelligence Research Institute reduces net suffering in the far future 53%
Electing more liberal politicians reduces net suffering in the far future 52%
Human-controlled AGI in expectation would result in less suffering than uncontrolled 52%
Climate change will cause more suffering than it prevents 50%
The effective-altruism movement, all things considered, reduces rather than increases total suffering in the far future (not counting happiness) [1] 50%
Cognitive closure of some philosophical problems 50%
Faster technological innovation increases net suffering in the far future 50%
Crop cultivation prevents net suffering 50%
Conditional on a government building the first human-level AGI, it will be the USA (rather than China, etc.) 50%
Earth-originating intelligence will colonize the entire galaxy (ignoring anthropic arguments) 50%
Faster economic growth will cause net suffering in the far future 47%
Modal realism 40%
Many-worlds interpretation of quantum mechanics (or close kin) 40% [2]
At bottom, physics is discrete/digital rather than continuous 40%
The universe/multiverse is finite 37%
Whole brain emulation will come before de novo AGI, assuming both are possible to build 30% [3]
A full world government will develop before human-level AGI 25%
Wild-animal suffering will be a mainstream moral issue by 2100, conditional on biological humans still existing 15%
Humans will go extinct within millions of years for some reason other than AGI 5% [4]
A design very close to CEV (coherent extrapolated volition) will be implemented in humanity's AGI, conditional on AGI being built (excluding other value-learning approaches and other machine-ethics proposals) 0.5%

Values

While I'm a moral anti-realist, I find the Parliamentary Model of moral uncertainty helpful for thinking about different and incompatible values that I hold. One might also think in terms of the fraction of one's resources (time, money, social capital) that each of one's values controls. A significant portion of my moral parliament as revealed by my actual choices is selfish, even if I theoretically would prefer to be perfectly altruistic. Among the altruistic portion of my parliament, what I value roughly breaks down as follows:

Value system (fraction of moral parliament)
Negative utilitarianism focused on extreme suffering 90%
Ethical pluralism for other values (happiness, love, friendship, knowledge, accomplishment, diversity, paperclips, and other things that agents care about) 10%

However, as is true for most people, my morality can at times be squishy, and I may have random whims in a particular direction on a particular issue. I also may have a few deontological side-constraints on top of consequentialism.

While I think high-level moral goals should be based on utilitarianism, my intuition is that once you've made a solemn promise or entered into a trusting friendship/relationship with another person, you should roughly act deontologically ("ends don't justify the means") in that context. On an emotional level, this deontological intuition feels like a "pure" moral value, although it's also supported by sophisticated consequentialist considerations. Nobody is perfect, but if you regularly and intentionally violate people's trust, you might acquire a reputation as untrustworthy and lose out on the benefits of trusting relationships in the long term.

What kind of suffering?

The kind of suffering that matters most is... (fraction of moral parliament)
hedonic experience 70%
preference frustration 30%

This section discusses how much I care about suffering at different levels of abstraction.

My negative-utilitarian intuitions lean toward a "threshold" view according to which small, everyday pains don't really matter, but extreme pains (e.g., burning in a brazen bull or being disemboweled by a predator while conscious) are awful and can't be outweighed by any amount of pleasure, although they can be compared among themselves. I don't know how I would answer the "torture vs. dust specks" dilemma, but this issue doesn't matter as much for practical situations.
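As a very rough illustration of this threshold structure (not a precise statement of my view), here is a sketch in which extreme suffering is compared lexically before anything else; the pain scale, the cutoff value, and the outcome representation are all invented for the example:

    # Rough sketch of the lexical-threshold idea: extreme pains are compared
    # first, and pleasure never enters the comparison at all.  The intensity
    # scale and cutoff below are made up purely for illustration.
    EXTREME_PAIN_THRESHOLD = 1000  # hypothetical cutoff for "extreme" pain

    def badness(outcome):
        """Return a sort key: (total extreme pain, total mild pain).
        Tuple comparison makes extreme pain lexically prior to mild pain."""
        extreme = sum(p for p in outcome["pains"] if p >= EXTREME_PAIN_THRESHOLD)
        mild = sum(p for p in outcome["pains"] if p < EXTREME_PAIN_THRESHOLD)
        return (extreme, mild)

    # Extreme pains can still be compared among themselves...
    assert badness({"pains": [5000, 5000]}) > badness({"pains": [5000]})
    # ...but no amount of pleasure offsets them, since pleasure isn't in the key.
    blissful_but_awful = {"pains": [5000], "pleasure": 10**15}
    mundane = {"pains": [3, 3, 3], "pleasure": 0}
    assert badness(blissful_but_awful) > badness(mundane)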

I assess the degree of consciousness of an agent roughly in terms of analytic functionalism, i.e., with a focus on what the system does rather than other factors that don't relate to its idealized computation, such as what it's made of or how quickly it runs. That said, I reserve the right to care about non-functional parts of a system to some degree. For instance, I might give greater moral weight to a huge computer implementing a given subroutine than to a tiny computer implementing the exact same subroutine.

Weighting animals by neuron counts

I feel that the moral badness of suffering by an animal with N neurons is roughly proportional to N^(2/5), based on a crude interpolation of how much I care about different types of animals. By this measure, and based on Wikipedia's neuron counts, a human's suffering with some organism-relative intensity would be about 11 times as bad as a rat's suffering with comparable organism-relative intensity and about 240 times as bad as a fruit fly's suffering. Note that this doesn't lead to anthropocentrism, though. It's probably much easier to prevent 11 rats or 240 fruit flies from suffering terribly than to prevent the same for one human. For instance, consider that in some buildings, over the course of a summer, dozens of rats may be killed, while hundreds of fruit flies may be crushed, drowned, or poisoned.
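A small sketch reproduces those ratios. The neuron counts below (about 86 billion for a human, 200 million for a rat, and 100,000 for a fruit fly) are rough ballpark figures chosen to match the ratios in the text; treat them as illustrative assumptions rather than authoritative counts:

    # Sketch of the N^(2/5) weighting, using rough neuron counts chosen to
    # match the ratios mentioned above (treat the counts as illustrative).
    NEURON_COUNTS = {"human": 86e9, "rat": 2e8, "fruit fly": 1e5}
    EXPONENT = 2 / 5

    def moral_weight(n_neurons, exponent=EXPONENT):
        """Relative moral weight of suffering for an animal with n_neurons."""
        return n_neurons ** exponent

    for species in ("rat", "fruit fly"):
        ratio = moral_weight(NEURON_COUNTS["human"]) / moral_weight(NEURON_COUNTS[species])
        print(f"human suffering ~{ratio:.0f}x as bad as {species} suffering")
    # With these counts, this prints ~11x for the rat and ~237x (roughly 240x)
    # for the fruit fly.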

My intuitions about the exact exponent on N change a lot over time. Sometimes I use N^(1/2), N^(2/3), or maybe even just N for weighting different animals. Exponents closer to 1 can be motivated by not wanting tiny invertebrates to completely swamp all other animals in moral calculations (Shulman 2015), although the same effect could be achieved with a piecewise function for moral weight as a function of N: for example, one with a small exponent within the set of mammals, another small exponent within the set of insects, and a big gap between the two groups.
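For concreteness, one hypothetical piecewise scheme of this kind might look as follows; every constant here (the reference counts, the within-group exponent, and the cross-group discount) is invented purely to illustrate the shape of such a function:

    # Hypothetical piecewise weighting: a small exponent within mammals, a small
    # exponent within insects, and a fixed gap between the two groups.  All
    # constants are made up to illustrate the shape of such a scheme.
    MAMMAL_REF = 2e8        # reference neuron count (roughly a rat)
    INSECT_REF = 1e5        # reference neuron count (roughly a fruit fly)
    WITHIN_GROUP_EXPONENT = 1 / 5
    INSECT_DISCOUNT = 0.01  # the cross-group gap: insects start at 1% of the mammal baseline

    def piecewise_weight(n_neurons, group):
        """Moral weight that scales gently within each group but jumps between groups."""
        if group == "mammal":
            return (n_neurons / MAMMAL_REF) ** WITHIN_GROUP_EXPONENT
        elif group == "insect":
            return INSECT_DISCOUNT * (n_neurons / INSECT_REF) ** WITHIN_GROUP_EXPONENT
        raise ValueError(f"no weighting defined for group {group!r}")

    # Within-group differences stay modest, so huge numbers of insects don't
    # automatically swamp mammals, yet insects still count for something.
    print(piecewise_weight(86e9, "mammal"))  # human: ~3.4
    print(piecewise_weight(2e8, "mammal"))   # rat: 1.0
    print(piecewise_weight(1e5, "insect"))   # fruit fly: 0.01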

Footnotes

  1. Why isn't this number higher? One reason it's not close to 100% is that it's extremely difficult to predict the long-run effects of one's actions. Even if the effective-altruism movement were completely suffering-focused, my probability here might not be more than 60-65%. However, the reason my probability is at 50% rather than higher is that many parts of the effective-altruism movement run contrary to suffering-reduction goals. For example, unlike most people in the world, many effective altruists consider it extremely important to ensure that humanity fills the cosmos with astronomical amounts of sentience. But creating astronomical amounts of sentience in expectation means creating astronomical amounts of suffering, especially if less than perfectly compassionate values control the future. That said, many effective-altruist projects aim to improve the quality of astronomical-sentience futures assuming they happen at all, and these efforts presumably reduce expected suffering.

    Maximizing ideologies like classical utilitarianism, which are more common among effective altruists than among other social groups, seem more willing than common-sense morality to take big moral risks and incur big moral costs for the sake of creating as many blissful experiences as possible. Such ideologies may also aim to maximize the creation of new universes if doing so is possible. And so on. Of course, some uncontrolled-AI outcomes would also lead to fanatical maximizing goals, some of which might cause more suffering than classical utilitarianism would, and effective altruism may help reduce the risk of such AI outcomes.

  2. A friend asked me why I place so much confidence in MWI. The main reason is that almost everyone I know of who has written about the topic accepts it: Sean Carroll, Max Tegmark, Scott Aaronson, David Deutsch, David Wallace, David Pearce, Gary Drescher, Eliezer Yudkowsky, etc. Secondarily, unlike the Copenhagen or Bohmian interpretations, it doesn't introduce additional formalism. I'm inclined to take simple math at face value and worry about the philosophy later; this is similar to my view about physicalism and consciousness. I remain confused about many aspects of MWI. For instance, I'm unclear about how to interpret measure if we do away with anthropics, and I don't know what to make of the preferred-basis problem. The main reason I maintain uncertainty about MWI is not that I think the Copenhagen or Bohmian interpretations are likely but that something else entirely may be true. If that something else turns out to be an extension or small revision of MWI, I consider that to be part of my probability that MWI is true. I interpret "MWI being false" to mean that something radically different from MWI is true. I don't understand the Consistent Histories interpretation well enough to judge it, but it seems like a promising sister alternative to MWI.
  3. I formerly gave slightly higher probability to brain emulation. I've downshifted it somewhat after learning more about neuroscience, including the mapping of the C. elegans connectome. The McCulloch–Pitts picture of neurons is way too simple and ignores huge amounts of activity within neurons, as well as complex networks of chemical signaling, RNA transcription, etc. To appreciate this point, consider the impressive abilities of unicellular organisms, which lack neurons altogether.

    The difficulty of modeling nervous systems raises my estimate of the difficulty of AGI in general, both de novo AGI and brain emulation. But humans seem to do an okay job of developing useful software systems without needing to reverse-engineer the astoundingly complicated morass that is biology, which suggests that de novo AGI will probably be easier. As far as I'm aware, most software innovations have come from people making up their own ideas, whether through theoretical insight or trial and error, and relatively few discoveries have relied crucially on biological inspiration. Tyler (2009) makes a similar point:

    Engineers did not learn how to fly by scanning and copying birds. Nature may have provided a proof of the concept, and inspiration - but it didn't provide the details the engineers actually used. A bird is not much like a propeller-driven aircraft, a jet aircraft or a helicopter.

    The argument applies across many domains. Water filters are not scanned kidneys. The Hoover Dam is not a scan of a beaver dam. Solar panels are not much like leaves. Humans do not tunnel much like moles do. Submarines do not closely resemble fish. From this perspective, it would be very strange if machine intelligence was much like human intelligence.

    Marblestone et al. (2016):

    The artificial neural networks now prominent in machine learning were, of course, originally inspired by neuroscience [...]. While neuroscience has continued to play a role [...], many of the major developments were guided by insights into the mathematics of efficient optimization, rather than neuroscientific findings [...].


  4. Discussion of this estimate here.