by Brian Tomasik
First written: 2 Aug. 2013; last update: 31 Jan. 2017

Summary

When we're not sure whether to count small brains equally with big brains, it seems naively like we should maintain some probability that they are equal, and if this probability is not too tiny, the small organisms will dominate big ones in the calculations due to their numerosity. However, we could flip this argument around the other way, causing the bigger organisms to be relatively more important in expectation. This problem is analogous to the two-envelopes paradox. A similar situation applies to uncertainty regarding whether to count child quantum universes equally with their parents and in general to many other morally uncertain tradeoffs.

Pascalian wagers on brain size

Should the moral weight of a brain's experiences depend on the size of that brain? This question has been debated by utilitarians, with adherents on both sides of the dispute. You might try to take a moral-uncertainty approach as follows:

Naive human-viewpoint wager. We don't know if insect-sized brains matter equally with human-sized brains or if they matter a lot less. If they matter equally, the universe contains a lot more (dis)value than if they matter less, so I should mostly act as though they matter equally on Pascalian grounds.

Alas, a Pascalian argument can be made in the other direction too:

Naive insect-viewpoint wager. We don't know if human-sized brains matter equally with insect-sized brains or if they matter a lot more. If humans matter a lot more, the universe contains more (dis)value than if they matter only equally, so I should mostly act as though humans matter more on Pascalian grounds.

As a practical matter, this Pascalian update is less dramatic than the other one, because even if you weight by brain size, there are more insect than human neurons on Earth, and it's also possible that insect brains could be more efficient per neuron at doing relevant computations. Still, the principle is the same, and if the world contained many orders of magnitude fewer insects but still more than humans, then the wager would push us very strongly toward focusing on humans.

Two elephants and two envelopes

Two elephants and a human. Suppose naively that we measure brain size by number of neurons. An old, outdated estimate[1] suggested elephants had 23 billion neurons, compared with a human's 85 billion. For simplicity, say this is 1/4 as many.

Two elephants and one human are about to be afflicted with temporary pain. There are two envelopes in front of us: One contains a ticket that will let us stop the human from being hurt, and the other will let us stop the two elephants from being hurt. We can only pick one ticket. Which should we take?

First, suppose you plan to help the human. Say you think there's a 50% chance you weight by brain size and a 50% chance you count each organism equally. If organisms are equal, then helping the elephants saves 2 individuals instead of 1. If you weight by brain size, then helping the elephants is only 2 * (1/4) = 1/2 as worthwhile as helping the human. 50% * 2 + 50% * 1/2 = 5/4 > 1, so you should actually help the elephants, not the human.

Now suppose you plan to help the elephants. If all animals count equally, then helping the 1 human is only 1/2 as good as helping the 2 elephants. If you weight by brain size, then helping the human is 4 times as good per organism, or 2 times as good overall, as helping the elephants. 50% * 1/2 + 50% * 2 = 5/4, so you should save the human instead.
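
To make the flip explicit, here is a minimal Python sketch of the two naive calculations above. It's purely illustrative: the 50/50 probabilities and the 1/4 elephant-to-human ratio are just the assumptions of this example, and the function name is mine.

    # Naive moral-uncertainty expected values for the two-elephants example.
    # Probabilities: 50% "count individuals equally", 50% "weight by brain size".
    # Assumption from the example: an elephant counts 1/4 as much as a human.

    P_EQUAL, P_SIZE = 0.5, 0.5
    ELEPHANT_RATIO = 1 / 4  # elephant weight relative to a human under size weighting

    def ev_of_switching(fixed_side):
        """Expected value of helping the *other* side, measured in units where
        the side we currently plan to help is worth 1."""
        if fixed_side == "human":
            # Other side = 2 elephants: worth 2 if equal, 2 * 1/4 if size-weighted.
            return P_EQUAL * 2 + P_SIZE * 2 * ELEPHANT_RATIO
        else:
            # Fixed side = 2 elephants (worth 1 total in these units).
            # Other side = 1 human: worth 1/2 if equal, (1/2) / (1/4) = 2 if size-weighted.
            return P_EQUAL * (1 / 2) + P_SIZE * (1 / 2) / ELEPHANT_RATIO

    print(ev_of_switching("human"))      # 1.25 > 1: "help the elephants instead"
    print(ev_of_switching("elephants"))  # 1.25 > 1: "help the human instead"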

What we see here is something structurally analogous to the classic two-envelopes problem.

Applying a prior distribution

The Bayesian solution to the two-envelopes paradox is to realize that given the value of your current envelope, it's not possible for the other envelope to be equally likely to have 1/2 or 2 times the value of yours, for all possible values of your envelope. As the value in your envelope increases, it becomes more likely you got the bigger of the two.

One simple way to model the situation could be to use a fixed uniform prior distribution: The value in the larger envelope is uniform on [0,1000], which implies that the value in the smaller envelope is uniform on [0, 500]. Suppose you find that your envelope contains an amount in the range 300 +/- 1/2.[2] The probability of this is 1/500 if this is the smaller amount or 1/1000 if this is the larger amount. Therefore, if you started with equal priors between getting the smaller and larger amount (which you should have, given the symmetry of envelope picking), the posterior is that you got the smaller envelope with 2/3 probability. Then (2/3)*600 + (1/3)*150 > 300, so you should switch, and this is not fallacious to do. Similar reasoning should work for a more complicated prior distribution.
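
Here's a small Monte Carlo sketch (my own illustration, using the uniform prior assumed above) that conditions on finding roughly 300 in your envelope and checks that switching really does pay on average:

    import random

    # Larger amount ~ Uniform(0, 1000); smaller = larger / 2.
    # Condition on your envelope containing roughly 300 and compare staying vs. switching.

    random.seed(0)
    observed, tolerance = 300.0, 0.5
    payoff_stay, payoff_switch, n_hits = 0.0, 0.0, 0

    for _ in range(2_000_000):
        larger = random.uniform(0, 1000)
        smaller = larger / 2
        mine, other = random.choice([(larger, smaller), (smaller, larger)])
        if abs(mine - observed) < tolerance:
            n_hits += 1
            payoff_stay += mine
            payoff_switch += other

    print(payoff_stay / n_hits)    # ~300
    print(payoff_switch / n_hits)  # ~(2/3)*600 + (1/3)*150 = 450, so switching wins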

In some sense, the two-envelopes problem is a non-fallacious form of the gambler's fallacy. In the gambler's fallacy, a person who has made a series of lucky dice rolls fears that his luck has been used up, so that on the next roll, he's more likely to lose than win. In the two-envelopes case, someone who has found lots of money in his envelope rationally worries that it's more likely he'd be left with less money if he switched envelopes than if he stayed put.

The human envelope has a low amount?

I have the intuition that what I experience as a human isn't vastly important compared to how important I can imagine things being. In this way, it feels like when I open the envelope for how much a human is worth, I find a smaller amount than I would have expected if this was actually the higher of the envelopes. If this is true, it would not be an error to "switch envelopes," i.e., care more about insects than a brain-size weighting suggests.

A good counterargument to this is that my prior distribution is biased by factors like my past experience of caring a lot about insects, which makes it seem counterintuitive that the universe could matter so little, even if it actually does.

Also, as Carl Shulman notes, I can't subjectively assess the quantity of my experience, only its quality. Therefore, my impressions of how much I matter aren't valid, because I need to rely on third-person criteria to determine this.

Update, 2015: I'm doubtful that the Bayesian solution to the ordinary two-envelopes problem can work for the moral-uncertainty version. In the ordinary two-envelopes problem, there's an actual right answer as far as what the envelopes contain. In the moral-uncertainty case, there's no "true amount of moral importance" that a given thing embodies, so I don't think it makes sense to conceive of something as having a "low amount" or "high amount" of moral importance in any absolute sense. Within a given moral framework, we can multiply all utilities by any given positive constant and retain the same relative comparisons of outcomes.

Pascalian wagers in the other direction

Let n be the number of neurons in a single brain and W(n) be the moral weight of that brain's experiences. The debate here is whether to take W(n) = n (brain-size weighting) or W(n) = 1 (individual-count weighting). Of course, there are other options, like W(n) = n^2 or W(n) = 2^n or W(n) = busybeaver(n). The Kolmogorov complexity of using the busybeaver weighting is not proportionate with the size of the busybeaver values, so Pascalian calculations may cause that to dominate. In particular, there's some extremely tiny chance that a mind 100 times as big as mine counts busybeaver([85 billion] * 100) / busybeaver(85 billion) times as much, in which case my decisions would be dominated by the tiniest chance of the biggest possible mind, with everything else not mattering at all by comparison.

Of course, this runs afoul of two-envelopes problems too, but in this case, my (ultimately unfounded) intuition that the value of myself appears to be a small amount doesn't save me; it actually strengthens the wager. If all minds counted equally regardless of n, then the fact that my brain's value seems relatively small would be strange. If value scales superlinearly in n, then the fact that I seem not to matter too much is unsurprising.

So the Pascalian wagers don't just go one way, and if I'm not willing to bite the above bullet, it's not clear I can sustain the Pascalian wager on insects either.

That said: Moral-uncertainty calculations do not need to conform to Occam's razor. I'm allowed to care about whatever I want however much I want, and I'm not obligated to enforce consistent probabilistic calculations over possible moral values the way I have to over factual variables. So I can keep caring a lot about insects if I want; it's just that I can't ground this in a Pascalian moral-uncertainty wager without accepting the other attendant consequences thereof.

But I don't need a Pascalian wager if I think it's very likely that I don't want to scale the importance of an animal's emotions by brain size. If I take it as a moral primitive that brain size doesn't matter, then caring about insects (if they're conscious) falls out naturally. In the extreme case where I said I was certain that brain size didn't matter, then two-envelopes problems would go away. If I maintain non-negligible probability on each possibility, then two-envelopes problems remain.

General moral-uncertainty wagers

Moral realists use moral-uncertainty calculations in the same way as everyone uses factual-uncertainty calculations: They're uncertain what the true morality is, so they take a weighted average based on probability of truth. Non-realists often still use moral-uncertainty calculations, not to express the probability that a normative claim is true but just the probability that, if they thought about the issue more and were more informed, they would come to adopt a given stance.[3]

The brain-size wager is a special case of the general problem of moral-uncertainty wagers. "What's the relative importance between a brain with 20 billion neurons and one with 80 billion neurons?" is like "What's the relative importance between one person stubbing his toe for utilitarians vs. one person being told a small lie for anti-lying deontologists?" In the latter case, we can have the same sort of two-envelope wagers:

  • Suppose the badness of toe stubbing is -10. The badness of the lie might be half this, -5, or it might be two times this, -20. Assuming equal probabilities, it looks like lying is worse in expectation.
  • An entirely symmetric argument works the other way.

Moral-uncertainty mugging

Naively, it seems like moral-uncertainty calculations can lead to muggings by extreme possibilities -- the moral theories that posit the highest possible amounts of moral (dis)value. For instance, if you're not sure that bacteria don't count morally, then they may dominate other Earth-based life. If you're not sure whether quarks and leptons matter, they may dominate, if you can figure out a way to make them suffer less than they do by default. Or in the flip direction, if huge brains matter super-super-super-exponentially more than small ones, big brains dominate. In each of these cases, if you adopt the stance of the thing that's dominating the calculation, hold it fixed, and vary possibilities for other things mattering, then the other things will start to dominate (or at least matter a lot more than otherwise).

Many-worlds interpretation

Note: I'm not an expert on this topic, and the way I describe things here may be misguided. Corrections are welcome.

In the many-worlds interpretation of quantum mechanics, when a parent universe splits into two child universes, does the moral importance thereby double, because we now have two worlds instead of one, or is the moral importance of each world divided in half, to align with their measures? Each split world "feels on the inside" just as real and meaningful as the original one, so shouldn't we count it equally with the parent? On the other hand, as far as I know, no uniquely new worlds are created by splitting: All that happens is that measure is reapportioned,[4] so it must be the measure rather than merely the existence of the universe that matters? Of course, we could, if we wanted, regard this reapportioning of measure as happening by creating new copies of old universes rather than by just renormalizing. If there were new copies, then the (dis)value in those universes would be multiplied, and the multiverse would become more important over time.[5]

So do we want to regard quantum measure as dividing up the value of the universe over time (i.e., each universe becomes less important) or do we want to regard the parent universes as splitting into child universes that are each as important as the parent?[6] Say the value of the parent is 1, and say it splits over time into 1000 child universes, all with equal measure. Consider a choice between an action with payoff 2 today in the parent universe vs. 1 tomorrow in each child universe. If the children count less, then each one matters only 1/1000, so the value of acting today is 2*1 vs. a value of 1*(1/1000)*1000 = 1 for acting tomorrow in all the child universes combined. On the other hand, if the child universes also count with weight 1, then the children matter just as much as the parent, so the value of acting later is 1*1000. Say we have 50% probability that the children should count equally with the parent. Then the expected value of acting tomorrow is (0.5)(1) + (0.5)(1000) = 500.5, which is greater than the expected value of 2 for acting today. This is analogous to insects dominating our calculations if we have some chance of counting them equally with big-brained animals.

But now look at it the other way around. Say the value of a child universe is 1. Then either the parent universe matters equally, or the parent universe matters 1000 times as much. If they matter equally, then the comparison of acting today vs. tomorrow is 2*1 vs. 1*1000. On the other hand, if the parent matters 1000 times as much, then the comparison is 2*1000 vs. 1*1000. If our probabilities are 50%-50% between counting the parent equally with the children or 1000 times as much, then the expected value for acting today is (0.5)(2)+(0.5)(2000) = 1001, which is greater than the expected value of 1000 for acting tomorrow. We should act today. But this is the opposite of the conclusion we reached when fixing the value of the parent and considering various possible values for the children. Once again, the two-envelopes paradox reigns.
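
A compact sketch of both framings, using the illustrative numbers above (1000 children, 50% credence in each weighting):

    # Action A: payoff 2 today in the parent. Action B: payoff 1 tomorrow in each
    # of 1000 equal-measure children. 50% credence in each weighting scheme.

    N_CHILDREN = 1000
    P = 0.5

    # Framing 1: fix the parent's value at 1; children are worth either 1/1000 each
    # (measure-weighted) or 1 each (count-weighted).
    act_today = 2 * 1
    act_tomorrow = P * (1 * (1 / N_CHILDREN) * N_CHILDREN) + P * (1 * 1 * N_CHILDREN)
    print(act_today, act_tomorrow)  # 2 vs. 500.5 -> act tomorrow

    # Framing 2: fix a child's value at 1; the parent is worth either 1 or 1000.
    act_today = P * (2 * 1) + P * (2 * N_CHILDREN)
    act_tomorrow = 1 * N_CHILDREN
    print(act_today, act_tomorrow)  # 1001 vs. 1000 -> act today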

Value pluralism on brain size

There are many plausible perspectives from which to look at the brain-size question, and each can feel intuitive in its own way.

  • The argument for size weighting can cite paradoxes about turning one mind into two merely by gradually separating the components. It can point to the fuzziness of the boundary of "a mind." It can suggest the seeming absurdity that would result if we placed non-vanishing probability on the sentience of bacteria. It can point to the fact that morally relevant computations are done more numerously in bigger brains. And so on.
  • The argument against size weighting can suggest that we may care at least somewhat about the unified essence of an organism acting as a single unit, rather than just caring about the subcomponents. Ultimately there's just one action that an organism takes in response to environmental conditions, no matter how many neurons went into that decision. We may care the most about the highest levels of aggregation for a brain's evaluations of its emotions and less about the subcomponents that gave rise to that evaluation.

Both of these approaches strike me as having merit, and not only am I not sure which one I would choose, but I might actually choose them both. In other words, more than merely having moral uncertainty between them, I might adopt a "value pluralism"[7] approach and decide to care about both simultaneously, with some trade ratio between the two. In this case, the value of an organism with brain size s would be V(s) = f(s) + w * 1, where f(s) is the function that maps brain size to value (not necessarily linearly), and w * 1 is the weight I place on a whole organism independent of brain size. w needs to be chosen, but intuitively it seems plausible to me that I would set it such that one human's suffering would count as much as, maybe, tens to hundreds of insects suffering in a similar way. We can draw an analogy between this approach and the Connecticut Compromise for deciding representation in the US Congress for small vs. large states.

Note that if we have moral uncertainty over the value of w, then a two-envelopes problem returns. To see this, suppose we had uncertainty between whether to set w as 1 or 10^100. If we set it at 10^100, then V(s) is much bigger, so this branch of the calculations dominates. Effectively we don't care about brain size and only count number of individuals. But what if we instead flipped things around and considered V'(s) = w' * f(s) + 1 = V(s)/w, where w' = 1/w. Either w' is 1, or w' is 1/10^100, and in the former case, V' is vastly larger than in the latter case, so our calculations are dominated by assuming w' = 1, i.e., that the size-weighted term f(s) matters quite a bit.
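
A small illustration of this flip, with a made-up f(s) and made-up brain sizes, just to show which branch dominates under each choice of units:

    P = 0.5

    def f(s):
        return s  # a simple stand-in for the size-weighted term

    sizes = [1e6, 8.5e10]  # a made-up insect-ish brain vs. a human-ish brain

    # Fix the units of the size-weighted term and be uncertain whether w is 1 or 1e100:
    for s in sizes:
        print(P * (f(s) + 1) + P * (f(s) + 1e100))
        # Both organisms come out around 5e99: the w = 1e100 branch swamps brain size.

    # Fix the per-individual term's units instead: V'(s) = w' * f(s) + 1 with w' = 1/w,
    # so w' is either 1 or 1e-100:
    for s in sizes:
        print(P * (1 * f(s) + 1) + P * (1e-100 * f(s) + 1))
        # Now the values scale with brain size: the w' = 1 (size-weighted) branch dominates.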

If we had no uncertainty about w, we wouldn't have a two-envelopes problem, but once we do have such uncertainty, the issue rears its head once more.

I'm doubtful that a value function like V(s) = s + w is the right approach, because if w is not trivially small, then for small critters like insects and bacteria, w might be much bigger than s, but then a single insect and a single bacterium would have close to the same value, which seems wrong. More plausible is to have a V(s) function that grows significantly with s but less than linearly, at least above a certain minimal threshold.

Suffering in fundamental physics

In "Is There Suffering in Fundamental Physics?", I suggest reasons we might care a lot about micro-scale physical processes like atomic interactions. The main argument comes down to the sheer scale of fundamental physics: Its computations astronomically exceed those that can be intentionally run by intelligent civilizations.

Naively, we might see a further prudential argument to take suffering in physics seriously: If fundamental physics does matter, then there would be vastly more suffering in the universe than if not. Therefore, our actions would be vastly more important if quarks can suffer, so we should act as though they can.

However, because the question of how much we care about electrons and protons is partly a moral one, the two-envelopes problem rears its head once more. We could argue in the reverse direction that either intelligent computations matter very little compared with physics, or they matter vastly more than physics. In the latter case, we'd be able to have more impact, so we should assume that intelligent computations dominate in importance compared with low-level physical processes.

Note that the two-envelopes problem doesn't infect the generic case for why fundamental physics may be overwhelmingly important, because that argument assumes a fixed moral exchange rate between, say, human suffering and hydrogen suffering and then points out the overwhelming numerosity of hydrogen atoms. Two-envelopes problems only arise when trying to take an expected value over situations involving different moral exchange rates.

Uncertainty between traditional vs. negative utilitarianism

Consider a person, Jim, who lives a relatively pain-free childhood. At age 23, Jim develops cancer and suffers through it for several months before death.

A traditional utilitarian (TU) would probably consider Jim's life to be positive on balance. For instance, the TU might say that Jim experienced 10 times more happiness than suffering during his life.

In contrast, a weak negative utilitarian (NU) would consider Jim's extreme suffering to be more serious than the TU thought it was. The NU points out that at some points during Jim's cancer, Jim felt so much pain that he wished to be dead. The NU thinks Jim experienced 10 times more suffering than happiness in his life.

Now suppose we're morally uncertain between TU and NU, assigning 50% probability to each. First, let's say the TU assigns values of +10 and -1 to Jim's happiness and suffering, respectively. Since the NU considers Jim's suffering more serious than the TU did, and in fact, the NU thinks Jim experienced 10 times more suffering than happiness, the NU's moral assignments could be written as +10 happiness and -100 suffering. These numbers are on average much bigger than the TU's numbers, so the NU's moral evaluation will tend to dominate naive expected-value calculations over moral uncertainty. For instance, using these numbers and 50% probability for each of TU and NU, Jim's life was net negative in expectation: 0.5 * (10 - 1) + 0.5 * (10 - 100) < 0.

But we can flip this around. The TU again assigns values of +10 to happiness and -1 to suffering. Now let's suppose that the NU agrees on how bad the suffering was (-1) but merely gives less moral weight to happiness (+0.1, ten times less). Now the TU's numbers are on average much bigger than the NU's, so the TU's moral perspective will tend to dominate in naive expected-value calculations over moral uncertainty. Using these numbers and 50% probability for each of TU and NU, Jim's life was net positive in expectation: 0.5 * (10 - 1) + 0.5 * (0.1 - 1) > 0.
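
The same numbers in a short sketch, showing how the choice of which view's unit to hold fixed flips the sign of the naive expected value:

    # Jim's life under 50/50 uncertainty between traditional utilitarianism (TU)
    # and weak negative utilitarianism (NU), using the two normalizations above.

    P_TU, P_NU = 0.5, 0.5

    # Normalization 1: both views agree happiness = +10; NU inflates suffering.
    tu = 10 - 1
    nu = 10 - 100
    print(P_TU * tu + P_NU * nu)   # -40.5: NU's big numbers dominate; life looks negative

    # Normalization 2: both views agree suffering = -1; NU deflates happiness.
    tu = 10 - 1
    nu = 0.1 - 1
    print(P_TU * tu + P_NU * nu)   # 4.05: TU's big numbers dominate; life looks positive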

Can moral uncertainty be converted to factual uncertainty?

The two-envelopes problem bedevils moral uncertainty in a different way than it bedevils factual uncertainty. With factual uncertainty, there is a right answer to a given question. So as long as one's probability distributions are appropriate, one can simply take an expected value. This works for the original two-envelopes problem once one's probability distribution is adjusted based on the prior probability that envelopes contain different amounts of money.

With non-realist moral uncertainty, there is no "right answer". We invent a unit of value and make comparisons relative to that. The two-envelopes problem for moral uncertainty shows us that expected-value calculations seemingly depend on one's unit of comparison.

Ben West asks why we can't convert moral uncertainty into factual uncertainty in many cases. For instance, suppose we're utilitarians who care about utilons. Then it's always a factual question how many utilons a given organism has, right? Since that's a purely factual question, once we modify our probability distributions appropriately, we can just compute expected utility.

The problem is that, "No, Virginia, there really is no such thing as a utilon in any non-arbitrary sense. Happiness and suffering are not actually cardinal numbers that live in the physics of the universe that we can measure. Rather, we use numbers to express how much we care about an experience."

If moral realism were true and the correct morality was utilitarianism, then I suppose there would be a "right answer" for how many utilons a given system possessed (up to a positive affine transformation). But I don't take moral realism seriously.

Updated thoughts, Apr. 2016

The two-envelopes problem is soluble in the case of factual uncertainty because factual uncertainty concerns questions where the units are fixed and precise -- e.g., that the value contained in an envelope is measured in dollars. This means we can hold a well-defined prior distribution over how much money an envelope is likely to contain. The two-envelopes problem for moral uncertainty seems insoluble because there is no real unit of moral value "out there" that we can have a prior over.

I think the key to unlocking the moral-uncertainty version of the paradox is to realize that in the moral case, we're just dealing with different and incompatible utility functions. For example, suppose a morally uncertain person gave 50% probability to the view that a human matters only as much as a fish and 50% probability that a human matters 10 times more than a fish. Suppose we can either prevent the suffering of two fish or one human. A naive calculation would say that with 50% probability, helping the fish has value 2 and helping the human has value 1. And with 50% probability, helping the fish has value 2 while helping the human has value 10. So the expected value for helping the human is higher: 0.5 * 1 + 0.5 * 10. And had we measured value in different units, we might have concluded that helping the fish was better.

But this is wrongheaded. There's no reason to think the units in the two cases can be compared. What the statement of 50% probability means is: "My brain's moral parliament has 50% of its members belonging to a political party that weighs fish and humans equally." The two different moral views are just in conflict, and we can't unthinkingly combine them into a single expected-value calculation. This is similar to the difficulty of combining utilitarianism with deontology under moral uncertainty. Does a deontological prohibition on stealing get infinite weight, thereby overriding all (finite) utilitarian considerations? Does the stealing prohibition get weight of -100? -5? There's no right answer, because there's no unique way to combine moral views. There are just different moral views, and we need to invent some procedure for reconciling them. The case where we have uncertainty about the relative moral weights of different minds fools us into thinking that the uncertainty can be handled with an expected-value calculation, because unlike in the deontology case, both views are utilitarian and just differ numerically. But it's not so.

What about the following scenario? Suppose the neuroscience of fish suffering is not fully known. If it's eventually discovered that fish possess brain structure X, then you'll care about a fish equally as a human. If fish lack structure X, you'll care about a human 10 times more than a fish. Scientists currently say there's a 50% chance that fish have structure X. Can't we calculate an expected value here? The answer is again "no", because even though we have a real probability, we still can't non-arbitrarily combine different moral views. This is easier to see if we imagine that an unknown factual discovery would convince us either of utilitarianism or deontology. It's clear that there's no unique way to shoehorn those two different views onto the same yardstick of value.

Rather than computing an expected value, we can allow the two factions of our moral parliament to trade. Before neuroscientists confirm whether structure X exists, the fish-equality faction of a brain's moral parliament can bargain with the humans-matter-more faction when making policy choices. Both factions know that they each have 50% probability of getting all the seats in the parliament once the neuroscientists arrive at certainty regarding structure X, but before that happens, they can reach compromises.

Two envelopes and interpersonal utility comparisons

This section contains updated thoughts from Jan. 2017.

The two-envelopes problem is actually the same as the problem of interpersonal comparisons of utility in a different guise.

Return to the "Two elephants and a human" example from the beginning of this piece. Let u1 be the utility function that values human and elephant suffering equally. That is, u1(help 1 human) = u1(help 1 elephant) = 1. Let u2 be the utility function that cares 4 times as much about a human: u2(help 1 human) = 1 but u2(help 1 elephant) = 1/4. Suppose the "true" utility function u is either u1 or u2. Naively, it looks like we can compute an expected utility as

E[u] = (1/2) u1 + (1/2) u2.

But this forgets a fundamental fact about utility functions: They're invariant to positive affine transformations. This ability to rescale the utility functions arbitrarily is exactly what gives rise to the two-envelopes paradox. To see this, let's first consider helping the two elephants. The naive expected utility would be

E[u(help 2 elephants)] = (1/2) * 2 * u1(help 1 elephant) + (1/2) * 2 * u2(help 1 elephant) = (1/2) * 2 * 1 + (1/2) * 2 * 1/4 = 5/4.

Meanwhile, you can calculate that E[u(help 1 human)] comes out to 1/2 + 1/2 = 1.

But we can rescale u1 and u2 without changing the preference orderings they represent. In particular, let's multiply u1 by 1/2 and u2 by 2. With this change, you can compute that E[u(help 2 elephants)] = 1/2 + 1/2 = 1, while E[u(help 1 human)] = 1/4 + 1 = 5/4. These calculations exactly reproduce those done in the earlier "Two-elephants and a human" discussion.
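
A short sketch of this rescaling, using the same numbers (the scale factors are the only free choice):

    # Rescaling each utility function (a preference-preserving transformation)
    # flips the naive expected-utility comparison.

    P1, P2 = 0.5, 0.5

    def expected_utilities(scale1, scale2):
        u1 = {"help 1 human": 1.0, "help 1 elephant": 1.0}    # species-egalitarian
        u2 = {"help 1 human": 1.0, "help 1 elephant": 0.25}   # brain-size-weighted
        ev_elephants = (P1 * 2 * scale1 * u1["help 1 elephant"]
                        + P2 * 2 * scale2 * u2["help 1 elephant"])
        ev_human = P1 * scale1 * u1["help 1 human"] + P2 * scale2 * u2["help 1 human"]
        return ev_elephants, ev_human

    print(expected_utilities(1, 1))    # (1.25, 1.0): help the elephants
    print(expected_utilities(0.5, 2))  # (1.0, 1.25): help the human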

Given that the two-envelopes problem for moral uncertainty is isomorphic to the problem of interpersonal utility comparisons, we can apply various strategies from the latter to the former. For example, we could use the "zero-one rule" in which we normalize utility such that "Your maximum utility = 1. Your minimum utility = 0."

Let's apply the zero-one rule to the "Two elephants and a human" example. Suppose the worst possible outcome is that all three individuals get hurt, and the best possible outcome is that no one does. First consider the action of helping the two elephants. If humans and elephants matter equally, then helping the elephants changes the situation from the worst case (utility = 0) to 2/3 toward the best case, because 2/3 of the individuals are helped. So the increase in utility here is 2/3. Meanwhile, if an elephant matters only 1/4 as much as a human, helping the two elephants moves us from utility of 0 to utility of 1/3, since the suffering of the human matters twice as much as the suffering of both the elephants we spared. The expected utility over moral uncertainty for helping the two elephants is then (1/2) * (2/3) + (1/2) * (1/3) = 1/2. Meanwhile, for the action of saving the human, you can compute that the expected utility is (1/2) * (1/3) + (1/2) * (2/3) = 1/2. So interestingly, the zero-one rule tells us to be indifferent in this particular case.
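
Here's a minimal sketch of that zero-one calculation (the weights and the worst/best outcomes are the assumptions stated above):

    # Zero-one normalization for the two-elephants example.
    # Worst outcome: all three are hurt (utility 0); best: no one is hurt (utility 1).

    P_EQUAL, P_SIZE = 0.5, 0.5

    def normalized_gain(weights, helped):
        """Fraction of the worst-to-best range recovered by preventing the harm
        to the individuals in `helped`."""
        total = sum(weights.values())
        return sum(weights[k] for k in helped) / total

    equal = {"human": 1, "elephant A": 1, "elephant B": 1}
    sized = {"human": 1, "elephant A": 0.25, "elephant B": 0.25}

    for helped in (["elephant A", "elephant B"], ["human"]):
        ev = P_EQUAL * normalized_gain(equal, helped) + P_SIZE * normalized_gain(sized, helped)
        print(helped, ev)   # both come out to 0.5: the zero-one rule is indifferent here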

In other cases, the zero-one rule gives more substantive recommendations. For example, if you're uncertain whether only your own welfare matters or whether the welfare of all animals on Earth matters, and if you assign decent probability to each possibility, then the zero-one rule probably advises you to be selfish (at least if this would actually increase your own welfare), because it's probably a lot easier to achieve, say, a given increase in your own normalized welfare than the same-sized increase in the normalized welfare of all animals. (Of course, we might debate what counts as the worst vs. best possible utility values here. Is the "best possible utility" that which can be achieved given your existing brain, which has a hedonic treadmill? Or does the "best possible utility" include the possibility of rewiring your brain to bypass hedonic adaptation, to be massively bigger, to last forever, etc.?)

Needless to say, I don't favor egoism on the grounds of the above argument, and more generally, I don't agree with a blanket application of the zero-one rule in cases of moral uncertainty (just as many philosophers reject it as an approach to interpersonal utility comparisons). Unfortunately, there don't seem to be satisfactory approaches to interpersonal utility comparisons, and for the same reason, I'm doubtful about satisfactory approaches to moral uncertainty (although I haven't read much of the literature on this topic).

Acknowledgements

DanielLC first pointed out to me that the Pascalian argument for caring about small brains can be flipped around into a Pascalian argument for large brains. The comparison to the two-envelopes problem was made by Carl Shulman, whose additional thoughts on this question have been helpful.

Footnotes

  1. The "23 billion" figure comes from an old, incorrect version of Wikipedia's "List of animals by number of neurons". I'm continuing to use it here so that my example will still work, but keep in mind that it's not factually accurate. A better estimate for elephant neurons comes from an interview with Suzana Herculano-Houzel. She reports that elephants have 257 billion neurons, "BUT 98% of those neurons are located in the elephant cerebellum". If we think cerebellar neurons count less than cortical neurons, then the effective number of neurons would be much lower than this.

    By the way, elephant brain mass is roughly 3.5 times that of humans, which almost perfectly aligns with relative numbers of neurons.

  2. I'm using a range here, because my probability distribution is continuous, so the probability of any given point value is 0.
  3. Note that I think "idealized-self" approaches assume far more convergence in one's future views than is likely to be the case.
  4. From "Parallel Universes" by Max Tegmark:

    If physics is unitary, then the standard picture of how quantum fluctuations operated early in the big bang must change. These fluctuations did not generate initial conditions at random. Rather they generated a quantum superposition of all possible initial conditions, which coexisted simultaneously. Decoherence then caused these initial conditions to behave classically in separate quantum branches. Here is the crucial point: the distribution of outcomes on different quantum branches in a given Hubble volume (Level III [quantum many-worlds multiverse]) is identical to the distribution of outcomes in different Hubble volumes within a single quantum branch (Level I [big universe containing many Hubble volumes]). This property of the quantum fluctuations is known in statistical mechanics as ergodicity.

    The same reasoning applies to Level II [inflationary multiverse]. The process of symmetry breaking did not produce a unique outcome but rather a superposition of all outcomes, which rapidly went their separate ways. So if physical constants, spacetime dimensionality and so on can vary among parallel quantum branches at Level III, then they will also vary among parallel universes at Level II.

    In other words, the Level III multiverse adds nothing new beyond Level I and Level II, just more indistinguishable copies of the same universes--the same old story lines playing out again and again in other quantum branches. The passionate debate about Everett's theory therefore seems to be ending in a grand anticlimax, with the discovery of less controversial multiverses (Levels I and II) that are equally large.

    [...] Does the number of universes exponentially increase over time? The surprising answer is no. From the bird perspective, there is of course only one quantum universe. From the frog perspective, what matters is the number of universes that are distinguishable at a given instant--that is, the number of noticeably different Hubble volumes. Imagine moving planets to random new locations, imagine having married someone else, and so on. At the quantum level, there are 10 to the 10^118 universes with temperatures below 10^8 kelvins. That is a vast number, but a finite one.

    From the frog perspective, the evolution of the wave function corresponds to a never-ending sliding from one of these 10 to the 10^118 states to another. Now you are in universe A, the one in which you are reading this sentence. Now you are in universe B, the one in which you are reading this other sentence. Put differently, universe B has an observer identical to one in universe A, except with an extra instant of memories. All possible states exist at every instant, so the passage of time may be in the eye of the beholder [...].

  5. If we also used this "multiplication of copies" approach for anthropics, then it would be likely we'd be extremely far down on the quantum tree. Of course, we could, if we wanted, divorce anthropic weight from moral weight.

    If we do think splitting increases the importance of the universe, our valuation will be dominated by what happens in the last few seconds of existence. David Wallace notes: "Everettian branching is ubiquitous: agents branch all the time (trillions of times per second at least, though really any count is arbitrary)." Since physics will exist far longer than intelligent life, a view that our universe's importance increases trillions of times per second would require that suffering in physics dominate altruistic calculations.

  6. Note that regardless of which way we go on this question, the premise behind quantum suicide makes no sense for altruists. Quantum suicide only makes sense if you care about at least one copy of you existing, regardless of its measure. In contrast, altruists care about the number of copies or the amount of measure for worlds with good outcomes.
  7. Some ethical views claim to be foundationally monist rather than pluralist in that they care only about one fundamental value like happiness. But on closer inspection, happiness is not a single entity but is rather a complex web of components, and our valuation of a hedonic system with many parts is typically some complex function that weighs the importance of various attributes. In other words, almost everyone is actually a value pluralist at some level.