by Brian Tomasik
First written: 13 June 2014; last update: 15 Apr. 2017


Nick Bostrom's seminal work on anthropic bias proposed reasoning as though you are a random observer from a reference class of observers. While helpful in many respects, this framework leaves the reference class arbitrary. Moreover, reference classes don't map onto anything "real" physically. Radford M. Neal suggests a method for thinking about anthropics without Bostrom's machinery. It works well for small finite cases but runs into problems for a sufficiently large multiverse. For infinite scenarios, some anthropic conclusions, including validation of certain scientific discoveries, may be possible just by a principle of indifference among subjectively indistinguishable observers. For other questions, further assumptions are needed, such as that multiverse hypotheses are intrinsically more likely if they imply the existence of more copies of you. Even without further assumptions, we should act as if the multiverse is really big because we can have more impact in that case. That said, there remain some cases where Bostrom's approach intuitively still feels like the right answer to me. To address this, I suggest a physics-based rather than observer-based version of anthropics, similar to a proposal by Robin Hanson. It seems to solve most anthropic puzzles but may lead to solipsism when combined with Solomonoff probabilities. A Solomonoff-inspired anthropics might avoid this problem but leads to its own counterintuitive consequences. All told, a proper account of anthropics seems to remain out of reach.

Update, Feb. 2015: In a newly added ending to this piece, I explain why most of the previous reasoning was partly confused. A proper way to think about yourself is as the collection of all your copies at once. This allows for essentially eliminating regular anthropic reasoning and just relying on non-anthropic prior probabilities to arbitrate among hypotheses (see "Anthropic decision theory").


What can we infer about the universe based on the fact that we exist and are located in a particular time and place? This is the project of anthropic reasoning. Nick Bostrom helped to organize and advance this field in his classic 2002 book, Anthropic Bias. However, in recent years, new approaches to anthropics are emerging. I'm only vaguely acquainted with post-Bostrom theories, but this piece aims to present one take on how to think about anthropics without Bostrom's core machinery.

Bostrom's anthropics

Bostrom's book centers on the self-sampling assumption (SSA), which suggests that you should reason as though you're a random sample from all observers in your reference class. SSA has the potential for many counterintuitive implications, possibly including the doomsday argument and others outlined in Ch. 9 of Bostrom's book, depending on what observers are included in the reference class.

In Ch. 10, Bostrom proposes refining SSA to the strong self-sampling assumption (SSSA), which talks about "observer-moments" instead of "observers." He further suggests that if we choose our reference class carefully, we can avoid many counterintuitive implications while retaining SSA's ability to account for science and common-sense probabilistic conclusions. Bostrom believes that the choice of reference class may always retain an element of subjectivity, much as the choice of Bayesian priors does, even if some reference classes / priors are clearly less defensible than others.

What is an observer, anyway?

Perhaps more than the arbitrariness of choosing a reference class, what strikes me as fishy about Bostrom's anthropics -- or any anthropic approach that involves sampling among a set of observers -- is that a "set of observers" is not well defined. Often it's assumed that an observer is somehow related to a "conscious" agent, and there does seem to be a connection between being an observer and being conscious. Maybe the set of all observers is equal to the set of all conscious beings. But this immediately raises a red flag, because it takes a confused perspective on consciousness. Consciousness is not a binary property of physical processes. Rather, there is just physics, and what we call "conscious" represents our brains carving up physical processes into categories with labels. It doesn't correspond to something "real" at bottom.

Some might say that "being an observer" is not synonymous with being "conscious." Maybe there can be non-conscious agents that are also observers. Or maybe even conscious agents only count as observers if they're smart enough to engage in anthropic reasoning. Regardless, trying to define a hard cutoff for what counts as an observer runs into the same problems that arise when trying to define a hard cutoff for consciousness. Do you have to be actively thinking about your existence right now to count as an observer? Does it still count if you're thinking about your plans to go to an event later tonight, rather than thinking about anthropic philosophy? Does it count if you're thinking about existentialism? How about nihilism? What if you can't engage in complex cognitive self-representation but still make decisions based on an implicit notion of yourself existing over time? Does a Python program that calls "print(self.variable_name)" count as an observer? Is an arrow pointing at itself an observer? How about two mirrors facing each other? A particle detector in a test of Bell's inequality?

One could maintain that what counts as an observer is fuzzy just like the choice of reference class is fuzzy but that there are some things that clearly are and aren't observers. This might be okay if you're doing some hacky calculation, but for a fundamental theory of anthropics, the degree of hand-waving seems excessive.

Robin Hanson echoes this point: "It seems hard to rationalize [a reference class based on intelligent observers] outside a religious image where souls wait for God to choose their bodies." He continues: "The universe doesn't know or care whether we are intelligent or conscious, and I think we risk a hopeless conceptual muddle if we try to describe the state of the universe directly in terms of abstract features humans now care about."

What about reference-class forecasting?

If reference classes are not "real," how do we explain the success of reference-class forecasting? This approach seems to treat yourself as a random sample from a set of people in some reference class. Here too the proper choice of reference class is unclear and may result in "reference class tennis" in which each side of a debate changes the reference class to better support its conclusion. Still, it's undeniable that reference-class forecasting yields significant predictive success in many domains. I'm not sure what a complete response to this objection looks like, but here's my best guess.

Reference-class forecasting is a statistical method. Suppose you have sparse data about a particular case -- e.g., how long your marriage will last. If you've never been married before, you have no data on how long you've tended to stay married in the past. Thus, it can help to expand your data set to encompass other people in similar situations and look at their numbers. They are not the same people as you, so there is a bias between the average value for the reference class and your own value. But it's worth taking on this bias for the variance reduction that the larger sample affords.

If you had tons of data, you'd be better off narrowing your reference class to reduce bias. For instance, if you live in the United States, you could filter the reference class to just US marriages. If you're a male between the ages of 30 and 35, you could filter to just that segment. If you have a college degree and a job in education, you could filter further still. And so on. In the limit of massive data, you'd filter to an extremely small reference class that contains only people almost identical to you. In other words, reliance on a reference class larger than yourself is just a crutch to help with the analysis; it's not "actually" the case in some quasi-ontological sense that you're a random sample from that reference class.

The reference-class problem in probability theory

While the reference-class problem is sometimes thought mainly to beset frequentism, Alan Hájek claims that it infects most interpretations of probability. I'm not an expert on the literature, but I actually don't think the reference-class problem applies at a theoretical level -- only at a practical level. If we condition on everything we know, rather than on one or another subset of our information, then the problem seems to go away.

For instance, imagine that possible worlds are infinite bitstrings, and we live on one of them. We can see the first few digits S of our bitstring -- say S = 1011. To compute probabilities, we could consider various reference classes, such as the reference class C of bitstrings that contain at least two 1's in the first 3 digits. We could then compute the probability of various worlds given C, and since S falls within C, this would improve our probability estimates. But why don't we just compute probabilities given S directly? We should be using everything we know.

The reason we can't do this in real life is that the computation would be impossibly expensive. If we could compute every possible event in all possible worlds, we could identify all locations where an agent like us observes exactly the data we've observed, and then we could simply count how many agents live in which kinds of worlds and weight those worlds by prior probabilities. When we know much less, we have to fudge and include some people not quite like us in our reference class, hoping that similarities of traits will yield somewhat similar outcomes.

But the lesson that, in theory, it's better to condition on all data should be borne in mind, and it's the basis of the proposal we consider in the next section.

Neal's anthropics

In 2006, Radford M. Neal published "Puzzles of Anthropic Reasoning Resolved Using Full Non-indexical Conditioning." Neal was dismayed by the arbitrariness and paradoxes of SSA. He offered a simpler approach to anthropics called Full Non-indexical Conditioning (FNC), which proposes to condition one's beliefs on all available evidence, not just the fact that you're a member of a given reference class. The heart of Neal's approach is the following application of Bayes' theorem:

P(theory of the universe | I exist) ∝ P(I exist | theory of the universe) * P(theory of the universe).

For example, imagine a theory which says the universe is finite, small, and not fine-tuned for life. On this theory, the probability that you would come into existence is tiny; hence, the theory is unlikely to be correct unless its prior probability is overwhelmingly high.
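As a toy numerical illustration (all numbers invented for the example), Neal's update rule can be sketched in a few lines of code:

```python
# Toy illustration of Neal's FNC-style update (all numbers invented).
# Each theory has a prior and a likelihood P(I exist | theory); the
# posterior is proportional to their product.

def posteriors(theories):
    """theories: dict name -> (prior, likelihood). Returns normalized posteriors."""
    unnorm = {name: prior * lik for name, (prior, lik) in theories.items()}
    total = sum(unnorm.values())
    return {name: w / total for name, w in unnorm.items()}

theories = {
    "small, not fine-tuned": (0.5, 1e-12),  # you'd almost surely not exist
    "large, fine-tuned": (0.5, 0.9),        # you'd very likely exist
}

post = posteriors(theories)
# The small, life-hostile theory ends up with negligible posterior mass.
assert post["large, fine-tuned"] > 0.999
```

Despite equal priors, nearly all posterior probability lands on the theory under which your existence was likely.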

Neal suggests (p. 23) that FNC avoids the doomsday argument in a simple and elegant way: Once you condition on your birth rank among humans, the probability that someone with your characteristics exists at that specific birth rank is independent of how many other people exist, as long as there are at least as many people as the number of your birth rank. For instance, if you find yourself as the 12th person born, then as long as your theory of human history includes at least 12 people, the probability that a birth-rank-of-12 person with your traits exists is the same whether the universe contains 15 or 15,000,000 people (to a first approximation; the number of eventual people is entangled with other facts about nature, but we should condition on those separately).
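Neal's independence point can be illustrated with a deliberately simplified model (my own assumption for illustration, not Neal's exact formalism), in which each birth slot independently contains a person with your traits with some small probability p:

```python
# Deliberately simplified model (an illustrative assumption, not Neal's
# exact formalism): each birth slot independently contains a person
# with your traits with small probability p. Then the chance that the
# 12th-born person has your traits is just p, for any total population
# N >= 12 -- so observing yourself at rank 12 carries no information
# about N.

def prob_rank_has_your_traits(rank, total_people, p):
    if total_people < rank:
        return 0.0  # that birth rank is never filled
    return p        # independent of how many people come later

p = 1e-9
assert prob_rank_has_your_traits(12, 15, p) == prob_rank_has_your_traits(12, 15_000_000, p)
```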

Neal's approach is similar to pre-Bostrom anthropic theories that mainly focused on the requirement that the universe must exist in such a way as to produce observers. Bostrom (Ch. 3) rejected such views as inadequate because of the "freak-observers problem": It's plausible that we live in an infinite multiverse in which all physically possible combinations of matter are realized. Suppose we conduct a physics experiment and observe some outcome. Regardless of the actual laws of physics in the multiverse, as long as the multiverse is big enough, it will contain at least one Boltzmann brain that thinks it's observing that outcome in response to a physics experiment it thinks it was conducting. Hence, the probability of someone observing the outcome of the experiment is 1.0 regardless of which physical theory is true. Applied to FNC, this rebuttal suggests that

P(I exist | theory of the universe) = 1.0 as long as the universe is big enough,

so FNC doesn't help us learn anything. We just fall back on our prior probabilities.

Neal aims to squirm out of this by hoping our universe is not big enough to contain all possible observations, but he doesn't present a convincing case. Moreover, FNC itself predicts that the universe should be as big as possible, because this will maximize P(I exist | theory of the universe) -- unless P(theory of the universe) for such hypotheses is sufficiently small.

Even though FNC strictly speaking doesn't work to distinguish hypotheses in a big multiverse, it may fundamentally be on the right track: Our beliefs about where we are should be based on everything we know, not an artificially constrained subset of information (e.g., only that we're humans with particular birth ranks). This point underlies several criticisms of the doomsday argument, which claim that based on other things we know, we're not a purely random selection from all human/intelligent observers, in a similar way as a baby is not a purely random selection from the lifetime of an individual. As Robin Hanson put it:

All else is not equal; we have good reasons for thinking we are not randomly selected humans from all who will ever live.

You should include everything you know when doing inference, and you usually know things that imply you are not random. If you edit a fashion magazine, you have reason to think you will hear of fashions on their way in. If you never even read fashion magazines, you have reason to think you will hear of fashions on their way out. Similarly, standard calculations about doom suggest we are earlier than random humans.

Principle of indifference over indistinguishable agents

For huge multiverses, we can overcome the freak-observer problem in some cases using a principle of indifference. This section describes the principle, and the next section explores where it does and doesn't work to address freak observers.

Consider a simple agent, George, who perceives a limited number of observations -- e.g., that the sky is blue and the grass is green. There may be many such agents in the multiverse on different planets and even in universes with different laws of physics. George doesn't know where in the multiverse he resides. We can consider the set of agents that George thinks he might be, i.e., the set of agents that George finds subjectively indistinguishable from himself. Bostrom calls this the "minimal reference class." Many of these agents are in fact different -- e.g., some may have black beards and others may have white beards -- but George himself can't measure his beard color, so to him, these agents are indistinguishable. Furthermore, some agents live in a universe with laws of physics of type X and others in a universe with laws of physics of type Y. Some live 13 billion years after that universe's big bang, and others 14 billion years. Some live toward the center of a galaxy, and others on the periphery. There are many concrete dimensions along which the agents differ, but George doesn't know enough to tell which he is.

By a simple principle of indifference, it seems intuitive that George should assign equal credence to being any of these indistinguishable versions. Then George can engage in reasoning like this:

"Universes of type X have vastly more instances of agents that are subjectively indistinguishable from myself than do universes of type Y. A priori, universes of types X and Y are equally frequent. Therefore, I'm probably in a universe of type X."

The concept of subjective indistinguishability is binary: Either an agent can distinguish itself from being another agent or it can't. In practice, our measurements, models, and calculations are fuzzy and error-prone, so we might wish to include the possibility that we think we're probably not a given agent, but we're not completely sure because our observations or reasoning may have been faulty. In this case, rather than adopting a uniform distribution over all indistinguishable agents we might be, we can adopt a weighted probability distribution over possible agents we might be. Anthropic reasoning can then proceed as before. For example, suppose universes of type X have 10 instances of agents that are 100% indistinguishable from you; universes of type Y have 100 instances of agents that have 50% chance of giving the observations you have due to noise. If X and Y are equally frequent a priori, then you're 5 times more likely to be in a universe of type Y. In theory, indistinguishability can always be made binary if observation-space is fine-grained enough (either you observe exactly this sequence of observations, to the 10th decimal place, or you don't), but in practice a fuzzy approach is more computationally tractable and can help account for our own uncertainty about whether we did our calculations correctly.
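The arithmetic in the example above can be checked directly:

```python
# Checking the arithmetic of the example above. Weight each universe
# type by (number of candidate agents) * (chance each one would produce
# exactly your observations).

x_weight = 10 * 1.0    # type X: 10 perfectly indistinguishable agents
y_weight = 100 * 0.5   # type Y: 100 agents, each 50% likely to match

# With equal priors on X and Y, the odds ratio is just the weight ratio.
odds_y_over_x = y_weight / x_weight
assert odds_y_over_x == 5.0  # you're 5 times more likely to be in type Y
```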

For an illustration of a practical discussion where the principle of indifference comes into play, see the Appendix.

How to define "subjective indistinguishability"?

The previous section used the terminology of "agents" and "class of agents you might be," which I rejected earlier in this piece as non-physical. However, the use here is much more constrained, because we need only identify those agents (physical processes) that have identical data streams as the one that we have experienced. Pinpointing our life's "data stream" can be tricky, though it's easier to conceptualize for simple examples like a computer program that stores floating-point numbers representing its observations over time. We can see how to check whether two programs recorded the same data stream or not (though we might need to inter-convert their representation formats). For a person, it's not accurate to say that all of her experiences through time are her data stream, because she will have forgotten many older experiences, others will be inaccessible, and still others are encoded only implicitly in her collection of neural weights. We may need to define distinguishability as some active comparison process that a mind, say Susan, makes against a mental model of what other agents in other cases would look like, after which Susan guesses whether she could tell subjectively that she's not that other mind. In other words, a crude definition of distinguishability is whatever Susan tells you when you ask, "Are you sure you're not agent XYZ in ABC part of the multiverse?" As an example, I know I'm not the president of the United States because my physical features and memories don't resemble his.

This is a tricky topic where it's easy to trip up. For instance, in a somewhat different context, Don N. Page suggested:

I shall also assume that the set of observations is mutually exclusive, so that any particular observation is a unique and distinct member of the set. In particular, each observation is to be complete and not a subset of another observation, to avoid double counting and unnormalized probabilities. The simplest way I know to impose this is to take observations to be conscious perceptions, all that an organism is phenomenally aware of at once [28, 61].

But this idea that our conscious perceptions all "come together" at a single moment to be perceived as a discrete, unified whole with clear boundaries is the central idea torn apart by Daniel Dennett's Consciousness Explained. Instead, we may need a more operational approach, where "indistinguishability is as indistinguishability does", i.e., indistinguishability is defined by whether the agent can recognize that it's different from some other agent in a way that affects subsequent action. If Bob's "conscious perception" at some moment is identical to Joe's except for a little yellow duck in the corner of Bob's eye that Bob's V1 neurons process but that Bob's higher thoughts fail to register to any degree, it seems that Bob and Joe are subjectively indistinguishable at that moment.

If you can't compute that "I'm this agent and not that one", then you don't know which you are, and the two are operationally indistinguishable. To this degree, I suppose that subjective indistinguishability could include a case where an agent's data would actually distinguish it from a different agent, but the agent simply lacks the time to complete the calculation.

Indifference helps with where, not what

The principle of indifference from the previous section sometimes helps in tackling Bostrom's freak-observers problem. Suppose, as in Bostrom's example, we're testing a theory about our universe's physical constants, asking whether they're set at values T1 or T2. Let's say that in our multiverse, there is a universe that has constants T1 and another that has constants T2. We observe T1 in our experiments. That means there are lots of observers in the T1 universe making our exact observations under normal circumstances. There's also a tiny number of freak observers in the T2 universe making our observations by luck. The principle of indifference tells us that we're overwhelmingly likely to reside in the T1 universe because there are vastly more agents there who are subjectively indistinguishable from us. More generally, the principle of indifference helps us determine where we are within a given huge multiverse.

In fact, a combination of FNC for small finite situations with a principle of indifference for infinite situations can reproduce most anthropic conclusions for which we would have wanted to use Bostrom's SSA, including most of the examples in Ch. 5 of Anthropic Bias.

What the principle of indifference can't do is tell us what kind of huge multiverse we reside in. For example, suppose we're debating two hypotheses:

  • H1: Life-friendly universes are vastly more common in the multiverse than life-hostile universes
  • H2: Life-friendly universes are vastly less common in the multiverse than life-hostile universes.

The probability of someone observing what we do given either H1 or H2 is 1.0 in a sufficiently big multiverse, so we can't distinguish between the hypotheses. The indifference principle can help tell us where we might be within one or the other of those hypothesized multiverses but not which one exists.

Two possible reasons to think we're common

Even if we can't distinguish the H1 and H2 hypotheses discussed above, we may have reason to act more in line with one than the other. In particular, if H1 is true, then there are vastly more (near and exact) copies of me, vastly more life forms to be helped, and generally vastly more potential for altruistic impact. Thus, we have a prudential reason to favor H1, at least relative to its prior probability, if not in an absolute sense. This same idea applies when comparing certain other hypotheses, such as whether the sheer size of the multiverse is just big or really, really big. If the multiverse is as big as possible, there will be as many copies of me as possible to collectively make the biggest altruistic difference, so I should act as though the multiverse is huge and ignore physical theories that predict a small multiverse.

Self-indication assumption

Another approach is to adopt the self-indication assumption (SIA), which in Bostrom's original formulation says that we should give higher probability to hypotheses that contain more observers, other things being equal. This version of SIA still relies on the nebulous "observer" concept. But sometimes the term "SIA" also refers to "SSA+SIA", which favors more copies of you specifically, without needing to define observers in general. Let me explain.

Suppose we're considering a hypothesis in which there are R total observers in your reference class, r of which are subjectively indistinguishable from you. Bostrom's original SIA says we should multiply whatever prior probability we thought this hypothesis had by R, the total number of observers. Then we apply SSA, which says the probability you'd be chosen from the reference class is r/R. Combining the multipliers for SIA and SSA, we have R * (r/R) = r. That is, we just weight each hypothesis by how many copies of you it contains. Note that the calculation would be the same irrespective of the reference class, so long as the reference class contained all observers subjectively indistinguishable from you. Shulman and Bostrom explain this point on p. 9 of "How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects". Katja Grace seems to define SIA as SSA+SIA on p. 12 of "Anthropic Reasoning in the Great Filter".
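The cancellation of R can be made explicit in a short sketch (the numbers are arbitrary):

```python
import math

# Sketch of why SSA+SIA depends only on r (the number of your copies),
# not on the reference-class size R: SIA multiplies the prior by R,
# SSA multiplies by r/R, and the R's cancel.

def ssa_sia_weight(prior, R, r):
    return prior * R * (r / R)  # algebraically equal to prior * r

# The same hypothesis scored with two different reference classes
# (numbers arbitrary) gets the same weight either way:
w_small_class = ssa_sia_weight(0.5, R=1_000, r=10)
w_big_class = ssa_sia_weight(0.5, R=1_000_000, r=10)
assert math.isclose(w_small_class, w_big_class)
assert math.isclose(w_small_class, 0.5 * 10)  # just prior * r
```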

When I say "SIA" in this piece, I mean Bostrom's original definition.

Prudential and SIA approaches don't mix

The prudential approach is distinct from SIA, because the prudential approach just "acts as if" we're common in the multiverse as a result of expected-value computations, while SIA "actually believes" we're common in the multiverse.

One might assume that, if we adopt SIA, the prudential argument still applies too, so there's a double whammy in favor of acting like we live in a highly populous multiverse. However, Adrian Hutter pointed out to me that the prudential and SIA cases can't mix, as we'll see now.

Consider this setup:

Modified God's coin toss. God flips a biased coin with probability Q of coming up heads. If it comes up heads (H), he creates one copy of you. If it comes up tails (T), he creates two copies of you. You're perfectly altruistic toward your copies and value their welfare as much as your own. If you correctly guess whether the coin came up H or T, you get 1 util; else you get 0 utils. When two copies of you exist, they both answer exactly the same because their decision algorithms are identical.

If you adopt SSA, then there's no anthropic update to the probability Q of the coin having come heads, because what you see is perfectly predicted by either coin flip. But consider now the prudential side of the story: If the coin landed T, your two copies can get double the utility because there are two of you. In particular:

  • Expected utility of guessing H = 1 * P(H) = 1 * Q
  • Expected utility of guessing T = 2 * P(T) = 2 * (1-Q).

This means you should guess H iff

1 * Q > 2 * (1-Q)
Q > 2 - 2Q
3Q > 2
Q > 2/3.
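The threshold can be checked numerically:

```python
# Numerically checking the 2/3 threshold derived above: total expected
# utility (summed across copies) of each guess as a function of Q.

def eu_guess_H(q):
    return 1 * q        # one copy exists and guesses right iff heads

def eu_guess_T(q):
    return 2 * (1 - q)  # on tails, two copies each earn 1 util

# Just below the threshold T is the better bet; just above, H is.
assert eu_guess_T(0.66) > eu_guess_H(0.66)
assert eu_guess_H(0.67) > eu_guess_T(0.67)
```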

Now suppose instead you adopted SIA. In this case, your anthropic update is to favor the T hypothesis by a factor of 2 because the T scenario contains twice as many copies of you. In other words, you multiply P(T) = 1-Q by a factor of 2 and then renormalize. The ratio P(H) / P(T) is now Q / [2(1-Q)]. So H will be the more likely outcome iff

Q / [2(1-Q)] > 1
Q > 2(1-Q)
Q > 2 - 2Q
3Q > 2
Q > 2/3,

which is the same threshold you found with the prudential reasoning above.

But what if you combine prudential reasoning with SIA? Rather than just asking whether H is more likely, you should ask whether betting on H has higher expected utility. This happens when

P(H) * 1 > P(T) * 2
P(H)/P(T) > 2
Q / [2(1-Q)] > 2
Q > 4(1-Q)
Q > 4 - 4Q
5Q > 4
Q > 4/5.

Which Q threshold is right? 2/3 or 4/5? To find out, suppose Q = 3/4. Under either the prudential approach or SIA alone, you should guess H in this case because Q = 3/4 > 2/3, but with prudential+SIA, you should still guess T because Q = 3/4 < 4/5.

  • Suppose you go with the prudential+SIA strategy: You always guess T. You do this on many trials. In the long run, 3/4 of the time the coin lands H, so you earn nothing. 1/4 of the time the coin lands T and you earn 2 utils across your copies. Cumulative earnings approach 1/4 * 2 * (num trials).
  • If instead you go with either prudence or SIA alone, you always guess H. Over many trials, 3/4 of the time you earn 1 util, and 1/4 of the time you earn nothing. Cumulative earnings approach 3/4 * 1 * (num trials).

The second approach does better, so the decision threshold of prudential+SIA was wrong.
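The long-run comparison above can be verified with a small Monte Carlo simulation (a sketch, with an arbitrary random seed and trial count):

```python
import random

# Monte Carlo check of the argument above. With Q = 3/4, always guessing
# H (prudence or SIA alone) beats always guessing T (prudential+SIA).
# Seed and trial count are arbitrary.

def total_utils(guess, q, trials, seed):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        heads = rng.random() < q
        if heads and guess == "H":
            total += 1  # one copy exists and guessed correctly
        elif not heads and guess == "T":
            total += 2  # two copies each earn 1 util
    return total

trials = 100_000
utils_H = total_utils("H", 0.75, trials, seed=0)
utils_T = total_utils("T", 0.75, trials, seed=0)  # same coin sequence

# Expected per-trial earnings: H -> 3/4 * 1 = 0.75; T -> 1/4 * 2 = 0.5.
assert utils_H > utils_T
```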

One way to think about what's happening here is that when the coin lands T, you act as though you control yourself and your other copy, because your and your copy's decisions are correlated. So you credit yourself with getting 2 utils for choosing T. But your copy is reasoning the same way and claiming that she is the one who is single-handedly earning the 2 utils. So the copies overstate the value that they're bringing by their actions. Credit assignment is tricky.

How could the copies fix this?

  • They would get the right answer if they ignored their correlations and only counted the value that their particular spatiotemporally localized atoms earn. This would mimic the pure SIA approach.
  • Alternatively, if the copies want to count their correlated decision as earning 2 utils, they should treat themselves as being one organism (with two distinct parts), which doesn't get double the anthropic credit under SIA for having two observers. This would mimic the prudential approach.

Being more common helps restore science

Once we favor hypotheses on which there are more copies of us, our ability to do scientific inference about the character of the multiverse as a whole can often be saved from freak-observer problems. Before, we were unable to distinguish between a multiverse hypothesis M1 that strongly predicted our observations and another hypothesis M2 that contradicted our observations but contained some freak observers thinking they had our observations. But the M1 hypothesis, in general, predicts vastly more observers that are copies of us, since freak observers are rare and those having our exact fake observations are rarer still.

Of course, thinking there are probably more copies of us opens up the usual presumptuous philosopher thought experiments, in which we can claim without doing any science that the universe must be very big -- indeed, infinite. It also heavily favors hypotheses on which life is common.

Hypotheses that, for example, the multiverse is tiled with nothing but identical copies of your brain are also well favored, although these also need a heavy discount in prior probability due to Occam's razor: Why your brain specifically? That seems to be a much more detailed assumption than the assumption of a few very general laws of physics that give rise via evolution to lots of complexity including many copies of you. The Theory of Everything for the multiverse may be very compact -- perhaps just a small set of equations. In contrast, even an adumbral description of your brain might be vastly longer. If, for example, the Kolmogorov complexity of a multiverse tiled with a crude version of your brain were 10,000 times the Kolmogorov complexity of the Theory of Everything for what we think is our own multiverse, the brain-tiled multiverse would need to have 2^10,000, or about 10^3,010, times more approximate copies of your brain to be the favored hypothesis.a
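The arithmetic behind these numbers (treating the extra description length as roughly 10,000 bits, a simplification of the scenario above) is:

```python
import math

# Checking the arithmetic above, treating the extra description length
# of the brain-tiled hypothesis as roughly 10,000 bits. An Occam-style
# prior penalizes it by a factor of 2^-10,000, i.e., about 10^-3,010.

extra_bits = 10_000
decimal_exponent = extra_bits * math.log10(2)  # ~3010.3
assert 3010 < decimal_exponent < 3011

# So the brain-tiled multiverse would need roughly 10^3,010 times more
# approximate copies of your brain to end up the favored hypothesis.
```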

If we adopt only the prudential argument in favor of the existence of more copies of us, I don't see the above puzzles as being terribly worrisome. Yes, we act as if the multiverse is huge and life-dense because we can make the biggest impact in those cases, but that stance almost seems obvious once you realize it. It doesn't mean we need to actually think the multiverse must be enormously infinite and life-dense. Also, a hypothesis that the universe is tiled with just your brain needn't be highly favored prudentially, because if it's just your brain, there aren't a lot of other suffering creatures to be helped by your efforts. Sure, you could think happy thoughts, but the magnitude of impact per brain is much smaller than if those brains are able to act on an external world.

I kind of like this prudential approach because it still allows for doing science (at least in the sense of "acting as if" your experiments tell you about the external world for prudential reasons) without controversial assumptions like SIA. Still, there is something unsettling about only believing scientific results about the multiverse as a whole because of a gamble where you choose the action of high expected payoff. But remember that science concerning what universe we occupy within the multiverse can be believed without prudential arguments, just using the indifference principle. If the multiverse is big enough, then science about what universe we occupy within it may constitute many of the relevant discoveries that scientists make, as discussed further below.

Suppose your current multiverse hypothesis says there are 3 types of universes (A, B, and C) that each exist infinitely often. Physicists conduct experiments and find that our universe seems to be of a new type, D. Unless we use SIA or the prudential argument for our being common, we can't rule out or even challenge our original hypothesis, because the A, B, and C universes have some freak observers that think they saw evidence for D.

On the other hand, suppose our hypothesis was modal realism -- that all universe types exist: A, B, C, D, and so on. Now the physics discoveries do change our beliefs about where we live in our multiverse, since there are many more copies of us in a D universe correctly observing D than in an A, B, or C universe thinking we observe D. Since a modal-realism multiverse contains all possibilities, this kind of science update will work regardless of which possible universe we observe.

Modal realism also has the virtue of being exquisitely simple -- arguably the simplest hypothesis of all apart from the hypothesis that nothing exists. If you pick out any subset of all possibilities to be the ones that are realized, you have to explain why they're chosen and not others, and it takes effort to specify which are the universes to "make actual." That said, maybe this discussion is confused. What does "possible" mean anyway? Maybe "possible" just means "actual," and anything that doesn't exist isn't possible. It's not clear that logical possibility specifically is the right measuring stick; why should the universe care what our dumb logical rules tell us should be true? Logic may be just a useful construction that evolved brains made up to help them think more effectively.

Also, as Alexander Vilenkin points out, modal realism needs to specify a measure over worlds, which may make it more complicated than it seemed at first. However, such a measure could potentially emerge naturally. For instance, in Jürgen Schmidhuber's multiverse model, all possible universes are computed via dovetailing, in which the first universe A1 has one instruction run every second step, the second universe A2 has one instruction run every second of the remaining steps, etc. This implicitly gives measure (1/2)^n to universe An because that fraction of all instructions go toward computations in An. We've thereby created a measure over universes essentially for free.
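
One concrete way to realize this schedule (a sketch; the step-allocation rule below is my own rendering of the dovetailing idea) is the "ruler sequence": at step t, run an instruction of universe An, where n - 1 is the number of times 2 divides t. Universe An then receives exactly a (1/2)^n fraction of all steps:

```python
from collections import Counter

def universe_for_step(t):
    # A1 gets the odd steps, A2 every second remaining step, etc.:
    # the universe index is one more than the number of trailing
    # factors of 2 in the step number t.
    n = 1
    while t % 2 == 0:
        t //= 2
        n += 1
    return n

steps = 2 ** 16
counts = Counter(universe_for_step(t) for t in range(1, steps + 1))
print(counts[1] / steps, counts[2] / steps, counts[3] / steps)
# → 0.5 0.25 0.125, i.e., measure (1/2)^n for universe An
```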

...but there are still cases where I wish for SSA

Unfortunately, there remain cases where the reasoning in the previous section fails. I see at least two instances:

  1. Uncertainty over what's "possible." Modal-realist hypotheses may differ over what they think is possible. For instance, suppose you're uncertain whether the axiom of choice is legitimate. If so, imagine that it allows for more possible universes that aren't possible if it's false. Physicists then make observations consistent with a universe that would only be possible if the axiom were true. You can't update in favor of the axiom of choice because there are also copies of you in a modal-realist universe where the axiom is false thinking they had these experiences. In fact, Daniel Kokotajlo points out that this shortcoming applies to even mundane logical computations. Suppose you reason from premises to a conclusion. You can't update in favor of that conclusion being true, because even if it were false, there would be some freak observers thinking they observed the premises and then observing a (logically wrong) conclusion. This problem seems severe.
  2. Modal realism hypotheses can differ over the measure they assign to different universes (basically, what fraction of universes are of which type). Say you're deciding between (i) a hypothesis that assigns measure 0.99 to your type of universe within the modal-realist multiverse and (ii) another hypothesis that assigns measure 10^-1000 to your type of universe. Your observations can't update your beliefs because each type of multiverse still has some observers in your kind of world.

Both of these are cases where I really want to invoke SSA and just attack the issue head-on. If we regarded ourselves as being sampled from a reference class that included observers who weren't subjectively indistinguishable, we could then say that freak observations are highly improbable within that class.

SSA does potentially feel like the right response here. Of course, we can also use SIA or prudential arguments, but I'm not sure if they would feel as satisfying.

SSA on physics rather than observers?

For another proposal with some similarities to what's discussed in the current section, see this interesting comment by drnickbone.

One of my main objections to SSA was that it assumed a discrete, soul-like notion of an observer as distinguished from his environment -- a conceptual boundary not present in physics itself. Rather, the universe is an interconnected web of computations, with "us" being part of our environment. Where one observer ends and another begins, and what counts as an observer at all, aren't ontologically primitive.

Could we restore SSA by applying it to a category that has a better claim of being a natural kind? For instance, in a cellular-automaton world, natural kinds might be the cells themselves, while the "creatures" these worlds contain are higher-level abstractions with fuzzy boundaries. In our universe, maybe there are some physical/computational primitives analogous to cellular-automaton squares, especially if the universe is digital. We might then amend SSA to something like the following, which I'll call the "physics sampling assumption" (PSA): Regard the physical primitives you observe as a random sample from the set of all primitives that exist. Here, "random" doesn't mean that each primitive is sampled independently of the others; rather, the sample is drawn from the joint probability distribution of physical stuff. That is, we sample contiguous chunks of the universe.

Let's take an example. For simplicity, collapse the cellular-automaton world to one dimension (a bitstring), and assume there is no time dimension (just a static string). Consider two hypotheses, H1 and H2. Under H1, these are the worlds in the multiverse:

H1: 01001, 10000, 1010011101.

Under H2, these are the worlds:

H2: 11111, 10111, 110100111.

We, as a ghostly creature in one of these worlds, can see only 3 bits, which happen to be 111. We consider the set of random samples of 3 bits under each hypothesis:

H1's set of random observations: 010, 100, 001, (from the first bitstring) 100, 000, 000, (from the second bitstring) 101, 010, 100, 001, 011, 111, 110, 101 (from the third bitstring).

The probability of drawing what we observed, 111, is 1/14 because the string appears once in the above list of 14 possible observations. For the other hypothesis:

H2's set of random observations: 111, 111, 111, (from the first bitstring) 101, 011, 111, (from the second bitstring) 110, 101, 010, 100, 001, 011, 111 (from the third bitstring).

The probability of what we observed, 111, is now 5/13. We could then update our priors by multiplying by this "likelihood" term, just as we did with SSA.
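
The sampling procedure above is easy to mechanize. A minimal sketch (the helper names are mine) that reproduces both likelihoods:

```python
from fractions import Fraction

def substrings(world, k):
    # All length-k windows of a bitstring world.
    return [world[i:i + k] for i in range(len(world) - k + 1)]

def psa_likelihood(worlds, observation):
    # Probability that a random chunk of the multiverse, the same
    # length as our observation, matches what we observe.
    k = len(observation)
    samples = [w for world in worlds for w in substrings(world, k)]
    return Fraction(samples.count(observation), len(samples))

H1 = ["01001", "10000", "1010011101"]
H2 = ["11111", "10111", "110100111"]
print(psa_likelihood(H1, "111"))  # 1/14
print(psa_likelihood(H2, "111"))  # 5/13
```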

While this approach was motivated to avoid making confused assumptions about unified observers ("souls"), it also avoids the doomsday argument and similar problems with SSA, because the total number of "slots" in the universe is the same whether or not humans go extinct. For instance, assume that 0's represent human extinction, H's represent early people on Earth, and P's represent post-humans. The doomsday hypothesis, viewed over the temporal dimension of the universe, is

HH00000000

The non-doom hypothesis is

HHPPPPPPPP

We observe H. The probability of this is the same in either case, so there's no doomsday update. In contrast, ordinary SSA would say that only H and P count as observers, so drawing randomly from observers, we'd be less likely to get H in the non-doom case.
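
A toy check of this claim, using hypothetical ten-slot universes (the particular strings are mine):

```python
from fractions import Fraction

doom    = "HH00000000"  # humans, then extinction: empty slots remain
no_doom = "HHPPPPPPPP"  # humans, then post-humans fill the same slots

def p_observe(world, symbol):
    # PSA: probability that a random slot contains the given symbol.
    return Fraction(world.count(symbol), len(world))

print(p_observe(doom, "H"), p_observe(no_doom, "H"))  # 1/5 1/5 -- no update
```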

Basically, where SSA says "My observations are randomly sampled from all observations in the universe (or some reference class)", PSA says "The computations that correspond to my observing are randomly sampled from all computations in the universe". This shift of perspective was partly inspired by my changing outlook on consciousness: Where I used to see consciousness as located in discrete organisms, I now see at least traces of consciousness throughout all types of computation.

PSA also avoids the other paradoxes in Ch. 9 of Bostrom's book (Adam & Eve, Lazy Adam, Eve’s Card Trick, UN++, Quantum Joe).

PSA can also go where FNC fails, because it can extend to infinite universes. Rather than just asking whether what you observe will exist somewhere, as FNC does, PSA instead asks how common what we see is within all parts of physics. In many ways PSA is just FNC normalized to the size of the universe. FNC asks the probability of the stuff we see existing at all, while PSA asks the probability that a random chunk of physics will be the chunk we see. In fact, a good way to think about PSA is as follows:

PSA: Favor hypotheses in which the universe contains higher density of the stuff you immediately perceive.

Because of this difference, PSA (unlike FNC) does not favor multiverse hypotheses to explain fine-tuning. For instance, suppose the fine-tuning probability is 10^-100, and we're deciding whether there's just 1 universe or 10^500 universes. FNC favors the hypothesis of 10^500 universes, since this would make it overwhelmingly likely that we would exist. PSA is indifferent between the two because the per-universe probability of us existing is 10^-100 either way.
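
To see the difference numerically, here's a scaled-down sketch (a fine-tuning probability of 1/100 and 10^4 universes stand in for the astronomical values above):

```python
from fractions import Fraction

p = Fraction(1, 100)     # scaled-down fine-tuning probability
small, big = 1, 10 ** 4  # scaled-down universe counts

def fnc_likelihood(n):
    # FNC: probability that we exist somewhere, i.e., that at least
    # one of the n universes happens to be life-permitting.
    return 1 - (1 - p) ** n

def psa_density(n):
    # PSA: per-universe density of us, the same regardless of n.
    return p

print(float(fnc_likelihood(small)))  # 0.01
print(float(fnc_likelihood(big)))    # ≈ 1.0: FNC strongly favors many universes
print(psa_density(small) == psa_density(big))  # True: PSA is indifferent
```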

It's not obvious what to take as the substance within which density is calculated for PSA. Is it mass? Volume? Some other measure of physical "stuff"? Computations? Presumably it should refer to an ontological primitive? Since our only evidence comes from perceptions, presumably PSA should take "what you see now" to mean "instances of the computations that correspond to your perceiving at this moment"? What do we do when two possible instances of perception overlap physically, as we saw with 111 being drawn in 3 possible ways from 11111? What are the boundaries of "my perception", given that "myself" is not a natural kind? Why only sample length-3 substrings rather than length-2 substrings (11) or length-4 substrings (1111)? There are lots of details to fill in, and my sketch of PSA is rough. But fuzziness on these points doesn't prevent us from using PSA for many practical questions.

Unlike the principle of indifference suggested in prior sections, PSA overcomes the two problems in the preceding section, because now the probability of those freak observations or tiny-measure worlds is low relative to all the possible stuff that could have been observed. The essential idea is that PSA, unlike the principle of indifference, normalizes by how much total stuff there is. But unlike Bostrom's SSA, PSA doesn't depend sensitively on how many observers there are.

As we saw in previous sections, a principle of indifference combined with a presumption (either prudential or based on SIA) that there are more total copies of us also solves the two problems in the previous section. However, such a view infinitely favors an infinite universe and so falls prey to the presumptuous-philosopher problem. PSA solves the same two problems, but it doesn't make room for presumptuous philosophers because PSA cares about the fraction of all stuff that's like what you observe rather than the total amount of such stuff. (Of course, even if we use PSA, prudential arguments will still lead us to act as if hypotheses of bigger total multiverses are true.)

SIA, when combined with the Great Filter, implies its own kind of doomsday argument, though it's not necessarily as strong as SSA's doomsday argument. The same reasoning shows that PSA yields an SIA-like doomsday argument.

Also like SIA, PSA takes the thirder position on Sleeping Beauty, although it would take the halfer position if, in order for Beauty to be wakened twice, the universe had to be doubled in size.

We could try to extend the cell-based account of PSA beyond cellular automata. Maybe one crude proposal would be to slice the universe into tiny boxes and observe what each box contains. For instance, the cubic centimeter directly in front of my eyes contains air molecules, the cubic centimeter 0.7 meters in front of my eyes contains a chunk of LCD monitor, etc. We then compare these observations with samples drawn from stuff in the universe conditional on various hypotheses. In fact, I don't think discretization is necessary (though it helps conceptually and computationally). Even if we are continuous blobs in a continuous universe, we can still randomly sample ourselves from all stuff that exists.

To make a discrete approximation of physical stuff, suppose we partition space into a grid of cubic nanometers, and we designate the substance that's most common in each cube (maybe water molecules, air molecules, some protein, etc.) as its contents. Let's simplify this to one dimension. Also suppose the universe has only 52 possible types of stuff, which we designate by the letters A-Z and a-z. Suppose my observations are composed of five cubes of five substances: B, r, i, a, and n. The universe is

zqBrianmkvwBrianjtxeh

If we randomly sample length-5 substrings from the universe, "Brian" can be drawn twice from this universe, so my density is 2/(number of length-5 substrings). Or maybe just 2/(length of the universe string)?
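
This sliding-window count is straightforward to compute (a sketch; the 21-cube universe string is an illustrative one containing "Brian" twice):

```python
from fractions import Fraction

def density(universe, pattern):
    # Fraction of length-|pattern| windows of the universe string
    # that exactly match the pattern.
    k = len(pattern)
    windows = [universe[i:i + k] for i in range(len(universe) - k + 1)]
    return Fraction(windows.count(pattern), len(windows))

universe = "zqBrianmkvwBrianjtxeh"  # hypothetical 21-cube universe
print(density(universe, "Brian"))  # 2/17: two matches among 17 windows
```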

Similarity to Hanson's anthropics

Robin Hanson suggests a similar approach to PSA with his box diagrams in "Critiquing the Doomsday Argument". Hanson explains: "choosing states and priors in a more physics-oriented way seems to eliminate the doom-enhancing effects of" SSA. Hanson suggests some possible pitfalls with his approach, but I don't think they apply to my proposal:

  • "[Hanson's approach] seems to suggest that a non zero prior probability of a universe with an infinity of humans implies probability one that we find ourselves in an infinite universe." This isn't true for my proposal, because if you normalize by the total number of slots, the probability of being a human in such a universe is still far less than 1, just as in a finite universe. Hence the probability of our observations given a hypothesis of a finite universe could be comparable to that for a hypothesis of an infinite universe.
  • "And it seems difficult to use [Hanson's proposal] when universes have varying numbers of space-time slots." Once again, this doesn't apply to my proposal, because I'm normalizing by the total number of slots.

Hanson's approach adopts the same method as PSA of dividing the universe into a grid and allowing us to be sampled from any cell in that grid, but because Hanson doesn't appear to normalize by the size of the universe, his principle seems to end up being SSA+SIA. (Thanks to Adrian Hutter for pointing this out.)

PSA might still encounter paradoxes if an agent can change the total number of cells in the universe in a way that doesn't preserve the fraction of cells of the type she observes relative to total cells. For instance, suppose the default outcome for the universe was 1 year of a human followed by zillions of years of empty space. The human can then choose whether to push a button that destroys all the future years of empty space. Pressing the button would make her current observations vastly more probable, since the zillions of years of empty space would then never exist, making it much more likely that she would appear where she did.

PSA's Presumption of Denseness and Cosmological Doomsday

Adrian Hutter points out that, while PSA dodges the presumptuous-philosopher problem of SIA, it carries its own cases where philosophy may trump physical observation:

Presumption of Denseness: Physicists are debating two theories of cosmology. On theory T1, the universe has volume V and contains one subjectively indistinguishable copy of each person on Earth. On theory T2, the universe has volume V * 10^100, but because the extra space in this theory doesn't permit any life, everyone on Earth still has only one copy. The physicists are evenly divided between T1 and T2 based on the evidence. Then a presumptuous PSA philosopher butts in to the conversation and says that T1 is obviously correct, because T2 has a vastly tinier density of us in it. No need to carry out further experiments!

Hutter notes that we should be more skeptical of T2 a priori just because it's bigger and so is disfavored by Occam's razor, but for this example, we can imagine there's some evidence in favor of T2 that exactly offsets this prior bias against it.

Presumption of Denseness is essentially PSA's version of the doomsday argument. SSA fell prey to the doomsday argument because it normalized by the number of observers; PSA falls prey to Presumption of Denseness because it normalizes by the size of the universe.

Actually, PSA has an even less hypothetical doomsday prediction:

Cosmological Doomsday Argument: Even though it appears based on current cosmology that the universe is expanding forever and will eventually become uninhabitable (Big Freeze), PSA disfavors this hypothesis because it involves an eternity of mostly empty space in which the only copies of you are very rare Boltzmann brains. Instead, PSA favors the Big Crunch or an oscillatory universe so that the fraction of physics that exists as copies of you is vastly higher.

That PSA can favor Big Crunch over Big Freeze to an astronomical degree seems problematic. One might try to patch this problem in a way similar to how Bostrom patched SSA's doomsday argument: by declaring that physics after the heat death of the universe is in a different reference class from physics before. This seems arbitrary and unjustified, though if SSA is allowed to get away with such shenanigans, PSA should too.

We can now summarize some of the main anthropic views and their problems:

|                                                     | Uses reference classes                       | No reference classes; just favors more copies of you                                              |
| Normalizes                                          | SSA (falls prey to doomsday argument)        | PSA (falls prey to Presumption of Denseness and Cosmological Doomsday Argument)                   |
| No normalization; favors biggest possible universes | SIA (falls prey to presumptuous philosopher) | SSA+SIA (falls prey to presumptuous philosopher and prefers Boltzmann brains if they're possible) |

How to define "density" in physics?

Another puzzle that PSA confronts is what it means to speak of our density within the universe when more abstract models of physics are considered. It's easy enough to slice up fixed (3+1)-dimensional spacetime into chunks. But what about the fact that space itself is expanding? And what if our universe is restricted to a lower-dimensional brane in a higher-dimensional universe? Would everything in our (3+1)-dimensional universe have zero measure compared with something in the higher-dimensional space, in the same way that a line has zero measure relative to a 2-dimensional plane?

PSA does not necessarily imply solipsism...

Since PSA favors hypotheses with higher density of your experiences, you might wonder whether PSA leads to a high probability of solipsism -- the hypothesis that your experiences are all that exist in the universe. Solipsism does have an extremely high density of what you see, and so it incurs the smallest possible penalty under PSA. But solipsism needs an extremely high prior discount, which ends up making it not likely even given PSA. Following is an explanation why.

Before thinking at all about anthropics, what probability would you give to solipsism being true for anyone at all (not necessarily yourself)? Take a God's eye view here. I might pick something like P(solipsism for someone) = 0.1, say. Now supposing solipsism is true for someone, what's the probability (from a God's eye view) that it would be exactly you who is the sole mind in the universe? The solipsist mind could have been any random configuration of matter, so the probability that it would have been you seems to be roughly the fraction of the universe that you constitute. This makes the actual prior probability of solipsism for you in particular something like 0.1 * P(random chunk of physics is you). When you do your PSA update, you multiply the prior by the density of you in the universe, but for solipsism, that density is 1, so the posterior remains 0.1 * P(random chunk of physics is you).

Contrast this with the non-solipsist hypothesis. It has prior probability 0.9. PSA applies a penalty based on your density in the universe, which equals P(random chunk of physics is you). So the posterior is 0.9 * P(random chunk of physics is you). Comparing this with your solipsist hypothesis, the probability comparison is 0.9 for normal universe vs. 0.1 for solipsism of you. The hypothesis that you are the solipsist mind ends up just being your prior that someone would be a solipsist mind.
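
The cancellation in this argument can be made explicit (a sketch; the 0.1 prior is the illustrative figure from the text, and the value for P(random chunk of physics is you) is an arbitrary stand-in):

```python
from fractions import Fraction

p_solipsism_for_someone = Fraction(1, 10)  # God's-eye prior from the text
p_chunk_is_you = Fraction(1, 10 ** 6)      # arbitrary stand-in value

# Solipsism-of-you: the prior includes picking you as the lone mind;
# PSA's density of you is then 1, so there is no further penalty.
post_solipsism = p_solipsism_for_someone * p_chunk_is_you * 1

# Normal universe: prior 0.9; PSA then penalizes by your density,
# which is P(random chunk of physics is you).
post_normal = (1 - p_solipsism_for_someone) * p_chunk_is_you

print(post_solipsism / (post_solipsism + post_normal))  # 1/10
```

Notice that p_chunk_is_you cancels: the posterior odds are just the original 1:9 prior odds, whatever stand-in value is used.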

A similar point applies regarding quasi-solipsist hypotheses, such as that the universe is very small. Even in a tiny universe, the God's-eye prior probability that a given chunk of stuff will be you may be the same as in a large universe. PSA differentially affects hypotheses that predict different densities of you, not different sizes of the universe for the same density. A generic small-universe hypothesis should predict a small number of copies of you and so isn't favored.

...but it may if we use a Solomonoff prior

I said above that if solipsism is true for someone, the probability it's true for you in particular is P(random chunk of physics is you). This seems intuitive if we imagine that the solipsist could just as easily be any chunk of physics, so we use a uniform prior over all chunks. We can explain the same point in the language of bits by saying that all of the N you-sized chunks of physics are equally likely to be the solipsist chunk, so we need to apply -lg[P(random chunk of physics is you)] = -lg(1/N) = lg(N) bits of penalty to the claim that you specifically are the solipsist mind.

Alternatively, we can explain this by saying that we need lg(N) bits to locate you within the universe. So the complexity of a solipsism-of-you hypothesis is (at most) the complexity of a regular universe with evolved life plus the number of bits to locate you (plus maybe a few extra bits to specify solipsism, i.e., to say that the hypothesis throws away all information except for the computations happening in your brain). The extra bits to locate you are the justification for the extra penalty of lg(N) bits in prior probabilities.

But there's a problem. What if we don't use a uniform prior over all chunks of physics in our code for identifying you? In the space of all programs, there may indeed be a shorter way to specify where you are within the universe. For instance, maybe our code ignores empty space and just focuses on galaxies, stars, planets, etc. Your location might be (to make up numbers) supercluster #293,844,238,493 (Virgo Supercluster), galaxy group #98 (Local Group), galaxy #27 (Milky Way), star #192,238,942,874 (our sun), planet #3 (Earth), look for a cluster of cells characteristic of the most technologically advanced species on the planet ("human"), human who goes by the string of characters "Sam Blorkins", born at a time specified in the human time-keeping system as 17 Dec. 1986 at 14:23, and the program then picks out the snapshot of you at 23.174 Earthly revolutions of the sun following your birth time.

A coding scheme like this might save on bits because it doesn't need to code locations of vast stretches of empty space. The code is biased toward denser clusters of (non-dark) matter. In that case, maybe PSA based on fractions of matter, rather than fractions of spatial volume, would be closer to this kind of coding scheme. But in general, there's a vast space of possible programs to encode something's location, and the minimum over all of them might find cleverer strategies than a code based on a uniform distribution over all matter or computations or whatever.

Suppose there is a program with length L < lg(N) that, given a universe, specifies where you are in that universe. Then the Kolmogorov complexity of "solipsism for you specifically" would be at most

(bits to specify a regular, non-solipsist universe) + L + (some small constant to specify solipsism).

In contrast, the probability that there really is a universe and that you're a PSA-random chunk of it is, in bits:

(bits to specify a regular, non-solipsist universe) + lg(N),

where lg(N) represents the PSA anthropic penalty. So if lg(N) > L + (some small constant to specify solipsism), then solipsism is favored by Solomonoff probabilities. Thus, it seems that PSA combined with Solomonoff probabilities does plausibly lead to solipsism. Such a conclusion is counterintuitive and is also disfavored by the prudential consideration that we should act as if we can help tons of other beings because we can have more impact in that case.
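
In summary form, the comparison is just an inequality between description lengths (a sketch; N, L, and the small solipsism constant are all hypothetical numbers):

```python
import math

def solipsism_favored(N, L, c_solipsism):
    # PSA's anthropic penalty is lg(N) bits (locating you among N
    # you-sized chunks); the solipsist hypothesis instead pays L bits
    # for a locator program plus a small constant to say "discard the
    # rest". Solipsism wins when its total is smaller.
    return L + c_solipsism < math.log2(N)

# Hypothetical numbers: 2^300 you-sized chunks, a 200-bit locator.
print(solipsism_favored(N=2 ** 300, L=200, c_solipsism=10))  # True
```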

Here's one possible objection to this solipsist trap. Kolmogorov complexity is the length of the shortest program to produce some output, and when considering hypotheses about the universe, the desired output is the universe. In the solipsist hypothesis, we still have to compute the non-solipsist universe first, and then we pick out the section of the universe that's not to be thrown away. But if computing the hypothetical program is the universe (a metaphysical assumption that could be debated), then once we've computed it to find you, we can't just pretend that computation never happened. Unfortunately, it seems this reply could be overcome. For example, suppose that once we find the mind who is to be the solipsist, we run it not once but repeatedly, 9↑↑↑↑↑↑↑↑↑↑↑↑9 times (where ↑ is Knuth's up-arrow) or whatever insanely huge (but algorithmically simple) number of times is necessary so that PSA would declare that solipsist to dominate in density throughout the universe. The Solomonoff probability of this solipsist would still exceed the probability of you being in a non-solipsist universe to which a PSA anthropic penalty is applied.

Why isn't SSA also vulnerable to solipsism? After all, a reference class of just you would score as high as possible on SSA. However, even with a normal reference class (e.g., all humans), SSA's anthropic penalties are minuscule compared with PSA's, because PSA has to locate you within an entire vast universe rather than just within a group of ~100 billion humans who have existed. The cost of locating you in the whole universe is far greater than lg(100 billion), so the solipsist hypothesis can't beat SSA when SSA has a narrow reference class. Of course, I think SSA's use of a reference class is pure cheating. SSA never pays the cost of specifying its reference class. If SSA did have to spend bits to specify what the reference class of humans was (as it should), then it would be open to the same solipsist attack as was mounted against PSA, because the solipsist hypothesis could steal or improve upon SSA's anthropic coding scheme when specifying you as the sole mind in the universe.

Is your location compressible?

Above I've been assuming that there exists a program P of length L that can specify your location more compactly than by dividing the universe into N you-sized chunks and representing an index using lg(N) bits. In other words, I've been assuming that your location is compressible. Your location specified by lg(N) bits is some string, say S = 10010...11101. If P can also locate you, it could then reconstruct S by computing your coordinates. In other words, P could compress S. But most strings are incompressible, so it's not obvious that S should be compressible. On the other hand, the distribution of stuff in the universe has a particular structure, which might make it more plausible that S is compressible.

Imagine that the universe was a huge M-by-M grid of bits. You are a 1 bit, and all other bits are 0. We want to locate you. A PSA-style code assigns equal probability to all M^2 bits and uses -lg(1/M^2) = lg(M^2) bits to locate you. Could we improve on this?

One naive approach might be to code the x and y coordinates of the 1 bit separately. We need lg(M) bits to specify x and lg(M) to specify y. Unfortunately, lg(M) + lg(M) = 2 lg(M) = lg(M^2), so this has the same cost as before.
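
A quick check of this identity with a hypothetical grid size:

```python
import math

M = 2 ** 20  # hypothetical grid side length
# Coding x and y separately costs lg(M) + lg(M) bits, exactly the
# same as the lg(M^2) bits of the uniform PSA-style code.
print(math.log2(M) + math.log2(M), math.log2(M ** 2))  # 40.0 40.0
```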

Another approach is to search for the 1 bit, such as using this Python program:

def find_one(cell, M):
    for x in range(M):
        for y in range(M):
            if cell[x][y] == 1:
                return (x, y)

Ignoring tabs and newlines, this program has length about 90 characters (or 90*8 = 720 bits for one-byte charactersb), but if M is big enough that 720 < lg(M^2), this would indeed compress the location information.c

Could we take a similar approach for locating you in the real universe? A program that checked each you-sized chunk of physics and returned when it found you would work, but "you" are extremely complicated, and if we had a full specification of you already, the solipsism hypothesis wouldn't need to consider a non-solipsist universe and find your location; it would already have all the information about you that it needed. Moreover, I suspect that specifying "you" in all your detail takes more bits than specifying the universe and then your location. The calculations in the Appendix help to suggest why this would be true, since a Boltzmann brain is similar to a brute-force description of "you", while the laws of physics + the probability of humans evolving is a description of how to generate your reference class. To that reference class would need to be added details about locating you in particular, the complexity of which could be quite high (inasmuch as it counts not just actual humans but possible humans) but plausibly would still be less than that of a Boltzmann brain because human brains are constrained within a tiny subspace of possible atom combinations.

Could a hypothetical solipsist program locate you in a simpler way than by checking for exact copies of you? Maybe there are much simpler cues for which it could search that would constrain the set of you-sized chunks that could be candidates. For example, maybe a water molecule isn't vastly complex to describe and search for, and then a program could search for you-sized chunks that are roughly 60% +/- 10% liquid water, distributed fairly evenly over the chunk. This criterion alone should narrow things down to Earth-like planets that can support liquid water, and such a search would probably return mostly human-sized organisms, plus maybe some marine regions that have water and other substances spread pretty evenly. Within this narrower subset of chunks, you could be specified by your chunk number, or else the program could impose further criteria to narrow things down.

To make such a program concrete, following is some pseudocode. As before, assume the universe is an M-by-M grid. (Of course, in reality, it has at least 4 dimensions.) Also assume we have some way of describing water and a HasHumanLikeWaterDistribution function.

def locate_you(cell, M):
    yourIndexAmongWateryThings = 402912342301
    currentIndex = 0
    for x in range(M):
        for y in range(M):
            if HasHumanLikeWaterDistribution(cell[x][y]):
                if currentIndex == yourIndexAmongWateryThings:
                    return (x, y)
                currentIndex += 1

I don't know if this proposal would use fewer bits than brute-force location specification by coordinates. If M were big enough and the number of watery things small enough, it seems like it might. Even if this particular program is not better than brute force, it seems plausible there still are programs that could compress your location, because you are rather special. Analogously, it seemed intuitive that there should be programs that could compress a huge M-by-M grid of 0s that contained a 1 somewhere, and indeed, I showed such a program above (assuming M was big enough).
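To make the bit counting concrete, here is a minimal sketch with made-up numbers (the grid size M, the count W of watery chunks, and the cost of describing the water criterion are all hypothetical): brute-force location costs 2 * lg(M) bits for coordinates, while the watery-index scheme costs lg(W) bits plus the cost of the search criterion itself.

```python
import math

# Hypothetical numbers: an M-by-M universe containing W human-like watery chunks.
M = 10**1000          # made-up grid size
W = 10**10            # made-up number of watery chunks
criterion_bits = 1000  # made-up cost of describing the water-based search program

brute_force_bits = 2 * math.log2(M)          # specify (x, y) coordinates directly
watery_bits = math.log2(W) + criterion_bits  # specify index among watery chunks

# With M big enough and W small enough, the watery encoding wins.
assert watery_bits < brute_force_bits
```

For these particular made-up values, the coordinates cost ~6,644 bits while the watery index plus criterion costs ~1,033 bits; with a smaller M or a larger W, the comparison could easily flip.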

Still, it's not obvious that a universe containing you is simple enough for your location to be compressed. If not, then PSA would survive the solipsist onslaught.

Kolmogorov-complexity anthropics

PSA was vulnerable to solipsism because its probability penalty in bits might have been more than the minimal number of bits needed to locate the observer in question. Inspired by this, we might try using the Kolmogorov complexity of locating an observer directly as our anthropic penalty. In particular, I'll call this:

Kolmogorov-complexity anthropics (KCA): Given a hypothesis involving a universe U, your probability of being a particular observer O is 1/2 raised to the Kolmogorov complexity of locating O within U.

This specification avoids the most obvious solipsist challenge, because a solipsism program that proceeds by specifying the entire universe and then your location within it will have at least the same description length as the description length of a non-solipsist universe combined with its anthropic penalty.

Of course, this doesn't rule out the possibility of some other, more compact way to specify the solipsist hypothesis. For instance, if you were not a complex, evolved human but instead a tiny bitstring like 11010, it would be vastly simpler to postulate that the universe consisted entirely of that bitstring than that it consisted of what we think are the laws of physics plus a specification of where that bitstring lives.

What stance does KCA take on various anthropic puzzles?

  • Multiverses: Like PSA, KCA doesn't blindly favor huge multiverses, because even though we're more likely to exist somewhere in a huge multiverse, it also takes more bits to specify where we are if there are more universes. In this sense, KCA prefers a higher density of us rather than just a bigger volume of space. That said, KCA might have non-linear tricks for shortening its code lengths in big multiverses, so it's not clear that KCA is indifferent between one vs. many universes the way PSA is.
  • Doomsday argument: Unlike PSA, KCA might have some inclination in the direction of a doomsday argument. For instance, if there were only 100 billion humans, it might be easier to specify where you are than if there were 10^40 humans. This would certainly be true if specification of you was based on a uniform distribution over all humans: lg(100 billion) vs. lg(10^40). A uniform-distribution encoding like this would reproduce SSA's doomsday updates, since, for instance, a doubling of population would require an extra bit and would multiply the probability of being any particular human by 1/2 in both KCA and SSA. However, KCA could invent smarter codes than this. At a minimum, it could use shorter codes for smaller integers (see iterated logarithms below), i.e., taking fewer bits to specify human #98,238,481,023 than human #98,238,481,023,204,043,105,023,230,274. But probably KCA wouldn't even locate humans by birth rank at all. Maybe it could instead code by turning points in history, noteworthy people and places, etc. There's even a small chance that KCA would produce an anti-doomsday effect if humans as a species became easier to specify with higher populations. For instance, suppose humans went on to colonize the galaxy and were the first to produce some massive change in physics that shook the whole universe. Then we wouldn't even need to explicitly locate Earth's address in the galaxy but could instead say "This person had birth rank #73,205,230,142 (relative to such-and-such founding members of the species) out of the species of organisms that led to the artificial intelligence that produced the big change to the universe." If birth-rank integers had much shorter code lengths for smaller birth ranks, the penalty for having more total people wouldn't even be that large and might well be made up for by reducing the cost of specifying Earth's location in the universe.
  • Sleeping Beauty: If Beauty awakenings were specified purely by spatiotemporal coordinates, KCA would look like PSA and take a thirder position because Beauty-heads-Monday would take the same number of bits to locate as Beauty-tails-Monday, which would have the same cost as Beauty-tails-Tuesday. If Beauty awakenings were specified by locating Beauty and then, in the case of tails, using an extra bit to say whether the day was Monday or Tuesday, then KCA would reproduce the halfer position because each of the two coin-came-tails Beautys would need an extra bit to be located. Plausibly an exact KCA searching through all possible encodings would yield something between 1/3 and 1/2 because it could improve upon the brute-force spatiotemporal encoding by PSA but could also improve upon the crude one-bit approach that halfers/SSA would take. It's also remotely possible that KCA would find shorter encodings for the coin-came-tails Beautys than for the coin-came-heads Beauty, thus yielding P(Heads) < 1/3.
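The Sleeping Beauty arithmetic can be made concrete with a small sketch. Assuming KCA turns code lengths into credences by normalizing weights of 2^(-bits), and using a hypothetical base cost n for locating Beauty at all, the two encodings discussed above yield the thirder and halfer answers respectively:

```python
def kca_probs(code_length_bits):
    """Convert KCA code lengths (in bits) into normalized credences 2^(-bits)."""
    weights = {name: 2.0 ** -bits for name, bits in code_length_bits.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

n = 40  # hypothetical base cost (bits) of locating a Beauty awakening

# Spatiotemporal encoding: all three awakenings cost the same -> thirder.
thirder = kca_probs({"heads-Mon": n, "tails-Mon": n, "tails-Tue": n})
assert abs(thirder["heads-Mon"] - 1/3) < 1e-9

# Beauty-then-day encoding: each tails awakening needs one extra bit -> halfer.
halfer = kca_probs({"heads-Mon": n, "tails-Mon": n + 1, "tails-Tue": n + 1})
assert abs(halfer["heads-Mon"] - 1/2) < 1e-9
```

An exact KCA, searching over all encodings, would presumably land somewhere between these two extremes, as noted above.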

Paul Christiano has described a view similar to KCA, both for anthropic and moral weighting. I personally find the view repugnant in the moral case, because I feel the same computation should count equally regardless of how hard it is to locate. But what about the anthropic case? It seems to produce counterintuitive statements, such as that you were more likely to have been George Washington (1st president of a group of humans that would be called the United States of America) than to have been the fourth child of an illiterate and non-notable slave family in some remote part of South Carolina around the time of Washington's birth. The reason is that it's easier to locate George Washington via simple specifications. And the probability of being some random human computation billions of years into the future would be even smaller than that of being the slave child.

The preceding point is disputed by drnickbone, who claims that concepts like "president" are too complex to code for and that a KCA approach would probably use simpler physical measures to locate people instead. Even if this is correct, we still face a different form of arbitrariness: If the coding scheme identifies a person as number N in some list, it may give higher anthropic weight to, say, person number 1,000,000,000 (which can be written as 1e9) than to person number 1,842,017,023 (which probably can't be easily compressed under most coding schemes).

Maybe computations that are simpler to locate are run more often in the universe, and this could be a legitimate-seeming reason to think it's more likely we would be them. But it's not at all clear that the frequency with which a computation is run should relate to its Kolmogorov complexity, especially not in the specific fraction of 1/2 raised to the Kolmogorov complexity. Maybe the frequency with which something is computed would more sensibly relate to that computation's density in the universe, which would favor a PSA viewpoint. Until this situation is clarified, I find KCA counterintuitive. I might sooner abandon Solomonoff probabilities and go with PSA than I would adopt KCA.

Christiano acknowledges that some may object to Solomonoff anthropics:

If you are unhappy with the anthropic reasoning implicitly used by solomonoff induction, then you may want to adjust solomonoff induction as a principle for defining your experiences, and likewise you might want to adjust your moral theory in a similar way.

(I concede that these issues loom larger for the universal prior as an allocator of moral value than as a predictive theory. But I do think that to the extent that we dislike solomonoff induction as an allocator of moral value we should also reject it as a prior over experiences–it’s just that in the latter case it may be a more acceptable approximation.)

Infinity and anthropics

Even mainstream views of cosmology predict that the universe contains not one but infinitely many copies of us in a sea of infinitely many observers. This raises some puzzles for anthropics, which is generally applied in the context of finite sets of observers.

SIA and sizes of infinity

If it's possible for the universe to be infinite, presumptuous SIA philosophers tell us that the universe is infinite with probability 1 because an infinite (ergodic) universe would contain infinitely many copies of us. But by the same token, if it's possible for different sizes of infinity to exist, then SIA tells us we should be in a universe with the size of the biggest possible infinity. Alas, there is no biggest size of infinity. So if our universe has infinite cardinality ℵ_n, we have to puzzle over why it doesn't have the next biggest cardinality ℵ_(n+1) instead.

Measure problem

In "Eternal inflation and its implications", Alan Guth discusses the "youngness paradox" (sec. 5): Given that the number of new universes is multiplied by about exp(10^37) every second as measured in a synchronous time coordinate, at any given time cutoff, there are so many more young universes than old ones that we should find ourselves to be as young as possible. In particular, I would extend this to say that even if it looks like our universe is relatively old, it's more likely that the universe is actually young and we're mistaken. Guth notes that this problem likely results from the way he's measuring probabilities.

If we choose a different "measure" for computing fractions of observers of different types, we arrive at a different paradox: Why are we not Boltzmann brains? It looks like vacuum fluctuations in space will continue forever, even beyond the heat death of the universe. Thus, in a given region of space, there exist at most finitely many "real" (evolved, uploaded, etc.) copies of you but infinitely many fluke Boltzmann-brain copies of you.

Cosmologists have written extensively on the measure problem. As just one example, "Stationary Measure in the Multiverse" claims to overcome both the youngness paradox and the Boltzmann-brain problem.

Differing responses to Boltzmann brains

  • SSA+SIA welcomes Boltzmann brains insofar as they infinitely multiply the copies of you in existence.
  • PSA assigns negligible probability to the Big Freeze (as discussed above), but if the Big Freeze is true, PSA has no trouble telling us that we're Boltzmann brains, since that's where most copies of us would seem to reside. Indeed, if the Big Freeze is true, PSA wants Boltzmann brains to exist so that there will be a somewhat higher density of you in the universe. Note that PSA rejects the Big Freeze whether or not the empty space contains Boltzmann brains, because PSA regards even the empty space as part of its "reference class".
  • SSA is fine with an observer-less Big Freeze. SSA only starts to complain once the scenario involves large numbers of Boltzmann brains because almost all such brains are observers with disordered experiences, making the probability that a random observer is us negligibly small. Note that SIA-based anthropics doesn't do this because it doesn't care how many other observers there are, just how many copies of you there are.

Even though SSA normally seems like one of the less plausible anthropic theories, it gives the most normal-sounding answer in this case.

Note that Sean Carroll and colleagues dispute whether quiescent empty space would actually yield Boltzmann brains.

Maintaining model uncertainty

Each anthropic view seems to have its own problems:

  • SSA yields the doomsday argument, Adam & Eve, Lazy Adam, etc. (and might yield solipsism if it didn't cheat by not paying for its reference class).
  • SIA yields the presumptuous philosopher.
  • FNC tells us we should be in an infinite universe and then provides no guidance beyond that.
  • A principle of indifference over copies of ourselves, even given modal realism, can't decide between certain hypotheses about the universe where it seems obvious we should update our beliefs based on evidence.
  • PSA yields an arbitrary-seeming Presumption of Denseness and may (or may not?) imply solipsism when combined with Solomonoff probabilities.
  • KCA tells us we were more likely to have been George Washington than a random, non-notable person. Or, if that's not the case, it at least arbitrarily favors people whose birth ranks are simpler numbers.

Anthropic reasoning is confusing, contentious, and hard to validate because we don't have clear feedback about it. However, we can't abandon these questions, because they have momentous implications for where we should focus our resources. For instance, doomsday arguments (whether on the part of SSA and KCA directly, or SIA and PSA via the Great Filter) might suggest that the far future matters less than we thought.

Given the history of contradicting anthropic theories and counterintuitive implications, we should be skeptical of any given approach to anthropics. At the same time, some concepts seem more solid than others. For instance, the Copernican principle has served us well in science and seems highly intuitive. We should be wary of claims that we are likely to be in an extraordinarily special position in the universe. In general, we need to strike some balance between updating in response to counterintuitive anthropic implications and remaining sane, skeptical, and determined to explore the issue further.

Can we justify anthropics?

Daniel Kokotajlo raises an interesting problem: Our view of life, the universe, and everything is based on evidence, and evidence can only work if we make enough anthropic assumptions to justify our not being freak observers. How, then, can we justify an anthropic theory that depends on notions like "observer" or "physical stuff"? In some sense this is not a deeper puzzle than epistemological justification more generally, but it's still worth wondering about.

It seems useful for evolved creatures to make the types of inferences from data that we make. But how do we know we're evolved? We might be a Boltzmann brain with wacky ideas about evolution that don't match what the actual universe is like.

In any case, anthropics as a global epistemology is more radical than the assumption that what we observe (at least somewhat) accurately reflects reality. Most anthropic theories yield crazy-seeming conclusions. PSA appears to me more sane than other approaches to anthropics, but probably it will accumulate its fair share of counterintuitive implications if people think about it further.

Update, Feb. 2015: You are all your copies

I've come to realize that some of the language of this essay is out of date. In particular, the following is my current way of thinking about anthropics, based on ideas from Stuart Armstrong and Wei Dai.

ata on LessWrong summarized well this updated perspective:

It seems to me that anthropic reasoning was never fundamentally about fuzzy concepts like "updating on consciousness" or "updating on the fact that you exist" in the first place; indeed, I've always suspected that whatever it is that makes anthropic problems interesting and confusing has nothing to do with consciousness. Currently, I think that in essence it's about a decision algorithm locating other decision algorithms correlated with it within the space of possibilities implied by its state of knowledge.

Suppose your thinking algorithm is being implemented by several different computers. Same source code but different hardware, different locations, etc. There are two different meanings of "you":

  1. the atoms, electron currents, etc. that make up any given computer where you're running
  2. the abstract algorithm that's being implemented on all the computers at once.

For purposes of thinking about yourself as an agent that makes choices, it's more relevant to think of "you" as being your algorithm. In this sense, "you" are simultaneously the set of all the computers where copies of you are running, because if "you" make a choice, all those computers implement that choice. When deciding how "you" want to act, you should consider the implications on all of your instantiations combined. See here for a fanciful example to illustrate.

Previously I discussed a principle of indifference. This doesn't make sense when the instances of your algorithm are identical, because "you" (your algorithm) are all the instances at once. There's no need to apportion probabilities among them. It's true that any given chunk of matter from the set of computers running you could be in one computer or another, but that's not relevant to your algorithm.

This view also helps clarify the simulation argument. John Searle makes a distinction between simulation and duplication that prima facie seems like it might undermine the hypothesis that we are being simulated on a computer:

The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.

Given that we are all copies of our algorithm at once, this objection is defused. Poetically, "we" are both real and simulated at the same time, with our algorithm "controlling" many physical systems at once. The claim is not that all systems implementing our algorithm are physically identical (they're not); it's just that our algorithm determines the evolution of many types of physical systems jointly, both "real" and computer-simulated. (Note that our algorithm also implies facts about the execution of other, related algorithms, so in some sense, we are all algorithms in the multiverse weighted by the degree of control that our decisions have on their outputs.)

The idea that we are all copies of our algorithm at once can help rescue science on prudential grounds. For example, take Bostrom's scenario of measuring the cosmic microwave background (CMB) temperature at 2.7 K. How do we know if it's actually 2.7 K or if we're just a freak observer thinking it's 2.7 K when in fact it's 3.1 K? The answer is that "we" are all instances of ourselves observing 2.7 K, both the normal and freak-observer copies at the same time. It's better to choose actions consistent with a world where 2.7 K is the actual CMB temperature because that will cause our non-freak copies to take useful altruistic actions in their worlds, whereas our freak copies, who wrongly believe that the CMB temperature in their worlds is 2.7 K, aren't much harmed by their false beliefs, since they soon vanish anyway.

Eliezer Yudkowsky almost got to the point of dispensing with anthropics in a 2009 post. He agreed that anthropics was suspicious. But he hesitated to throw out anthropics because he worried that without anthropics, he should conclude that he was a Boltzmann brain. Where Yudkowsky went wrong (and I did in all the preceding sections of this piece) was in acting as though "he" was just one or the other clump of atoms: either a Boltzmann brain or an organically evolved person. In fact, "he" (his algorithm) is all of those copies at once. Even if there are vastly more Boltzmann-brain copies than evolved copies, this doesn't mean he's a Boltzmann brain; he's still all the copies at once. And since his choices are most robustly carried out by the evolved copies, he should focus on what happens to them.

You might still have this intuition: Given that you find yourself as your algorithm, your algorithm should be more common in the multiverse. I actually don't see an epistemic justification for this intuition. Given that your algorithm is running, all we know is that it's running at least somewhere. But the intuition that you should be common can be accommodated as a prudential point: You can have more impact if your algorithm is running more often, so you should act as if you're common. Since, as noted above, prudential and epistemic reasons to think we're common can't be combined, there may be nothing lost by only using the prudential argument anyway.

A similar statement applies with respect to quantum probabilities in the many-worlds interpretation. Say you've just done a quantum experiment with 90% probability for outcome A and 10% for outcome B. I claim that the "measure" of an observation as computed by the Born rule isn't an anthropic probability. It's not that "you" are 90% likely to find yourself in the branch where the outcome is A. Rather, versions of "you" find themselves in both branches. What's different between the copies is that the copy in branch A has higher measure (by 9 times) than the copy in branch B. So assuming you value outcomes in proportion to their measure (which you should do or else nothing makes any difference in a quantum multiverse because all outcomes are realized with some nonzero measure), then your actions have 9 times as much impact in branch A, which means you should generally act as if you're in branch A other things being equal.

One puzzle that remains is how to reason about uncertainty over what computation we are. For instance, suppose some computers run algorithm A, and others run B, but instances of A can't tell that they're not B and vice versa. In this case, it makes sense for each algorithm to be uncertain about whether it's A or B. Exactly how to apportion these probabilities is not clear to me. For example, suppose there are 100 computers running A and 2 running B. Should you give 1/2 prior odds of being either algorithm? And then, since A has more copies, you can make a bigger difference if you're A, so you should act as if you're A?


I learned of many of the ideas in this piece from Carl Shulman, though he doesn't necessarily endorse my statements here. Some arguments trace their roots to Wei Dai and others on LessWrong. The discussion of how to define subjective indistinguishability was inspired by a conversation with Daniel Kokotajlo. Daniel Kokotajlo also pointed out to me why PSA + Solomonoff may lead to solipsism. Adrian Hutter suggested some ideas I've included here and helped me clarify some explanations.

Appendix: Boltzmann brains and fine-tuning

Christian apologists are typically not happy about multiverse hypotheses. They often see them as inelegant attempts to explain why our universe appears so fine-tuned for life without resorting to Creationism.

In "Modern Cosmology and Anthropic Fine-tuning: Three approaches," Christian philosopher Robin Collins argues against multiverse explanations of fine-tuning. In a previous version of the same paper, titled "The Fine-tuning of the Cosmos: A Fresh Look at its Implications," he explained why as follows (pp. 6-7):

Because [Boltzmann Brains] BBs have a finite probability of occurring in any region of space-time with a positive energy density, in a sufficiently large universe that has some lower-bound to its mass-energy density, many BBs are almost certain to occur. This is true even if the constants and parameters of physics are not fine-tuned -- for example, if the dark energy density is too large for galaxies and stars to form. Furthermore, in standard versions of the multiverse [...] the bubble universes that are produced are infinite in size and meet this minimal mass-energy density condition. Such infinite universes will have an infinite number of BBs, whether or not their constants are fine-tuned.

William Lane Craig echoes this idea in a post, "Invasion of the Boltzmann Brains." Collins and Craig are implicitly citing the principle of indifference here. They're claiming that since there are vastly more BBs with your experiences than evolved versions of your brain under the multiverse hypothesis, you're probably a BB. Then, because this seems absurd, the multiverse hypothesis must be wrong.

The problem with this argument is that throughout a multiverse, BBs are not astronomically more common than evolved brains; in fact, the opposite is probably true. To see this, let's consider a multiverse that contains N non-fine-tuned universes for every fine-tuned universe. We'll compare the expected number of BB copies of you to the expected number of evolved copies of you. This calculation draws inspiration from a comment by Carl Shulman.

Approximating N

First, what is N? 1/N is basically the probability of a fine-tuned universe. Philosophers debate how to define this. Some have claimed, essentially, that because certain constants could take on any value between 0 and +infinity, a principle of indifference over all possible values yields an improper prior, whose integral diverges. See "The Normalizability Objection to Fine-Tuning Arguments" for further discussion.

However, algorithmic probability gives us a very sensible way to assign prior probabilities to physical constants. For a constant c, the probability of it taking on that value is roughly 2^(-K(c)), where K(c) is the Kolmogorov complexity of the number c. Kolmogorov complexity is defined relative to some universal Turing machine (UTM), but exactly which UTM doesn't matter too much. Kolmogorov complexity is uncomputable and in any case is more exact than we need for the present calculation. We can bound Kolmogorov complexity from above with the description length used in a minimum description length (MDL) framework in which we develop explicit, human-understandable encodings of constants and model structure. These code lengths exceed the Kolmogorov complexity, but this only makes our calculation more conservative.

In MDL, an arbitrary positive integer n can be encoded using a number of bits that's roughly lg*(n). (See "A Universal Prior for Integers and Estimation by Minimum Description Length.") Physical constants are typically decimals, but we can make them positive integers by multiplying them by 10^m for some m and then flipping the sign if they were negative. m can be set as the number of decimal points we want to keep. It's claimed that if the constants for either gravitation or weak nuclear force were different by one part in 10^100, our universe could not support life. For other constants, the margin of error is said to be one part in 10^40. We can adjust the required precision m on a per-constant basis by encoding m as lg*(m). We can also encode the constant's original sign using an extra bit. Thus, the complexity of a given physical constant k (say, k = 9.10938291 * 10^-31 kg for the mass of an electron) can be written in the following number of bits using this encoding scheme:

lg*(abs(round(10^m * k))) + lg*(m) + 1,

where round() is a function that rounds a decimal to a whole number, and abs() is the absolute value. In this equation, the first summand is the length of the encoding of the positive-integer version of the constant; lg*(m) is the number of bits to tell us how many places left to move the decimal point to restore the actual constant; and 1 is for the bit to tell us the sign of the constant. Probably this encoding is inefficient and could be improved, but this extra slack just makes the calculation more conservative.

It's estimated that we may need something like 26 constants in our current physical theories. Many physicists hope the number can be reduced, but for generosity, say it's actually four times this: 100 constants. Say each one needs m = 100 digits of precision. And say the maximum value of a given constant on its own is k = 10^100 (just to make up a number). Then the number of bits required to encode these 100 constants is

100 * [ lg*(10^100 * 10^100) + lg*(100) + 1 ]
= 100 * [ lg*(10^200) + 11.36 + 1 ]
= 100 * [ lg(10^200) + lg*(lg(10^200)) + 11.36 + 1 ]
= 100 * [ 200 * lg(10) + lg*(200 * lg(10)) + 11.36 + 1 ]
= 100 * [ 664 + lg*(664) + 11.36 + 1 ]
= 100 * [ 664 + 15.05 + 11.36 + 1 ]
= 69141

or approximately 10^5.
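This arithmetic can be checked with a short script. Here lg* is implemented as the simple iterated-logarithm sum used above (lg(n) + lg(lg(n)) + ..., keeping only positive terms), not the exact universal-code length:

```python
import math

def lg_star(n):
    """Iterated base-2 log: lg(n) + lg(lg(n)) + ..., keeping only positive terms."""
    total, x = 0.0, float(n)
    while True:
        x = math.log2(x)
        if x <= 0:
            return total
        total += x

def constant_cost_bits(k, m):
    """Bits to encode constant k at m digits of precision, per the scheme above."""
    return lg_star(abs(round(10**m * k))) + lg_star(m) + 1

# 100 constants, each with maximum value k = 10^100 and m = 100 digits of precision:
total = 100 * constant_cost_bits(10**100, 100)
assert 68_000 < total < 71_000  # matches the ~69,141-bit hand calculation
```

The small discrepancy from 69,141 comes from rounding 664.4 down to 664 in the hand calculation.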

We also need to encode the equations of physics and other features of how the model is specified. An equation could be encoded as a string of symbols from some alphabet of constants and mathematical operators. Say there are ~1000 required symbols, each equation string is ~100 symbols long, and there are ~1000 equations. We could encode this with lg*(1000) bits to specify the symbol-dictionary size and then lg(1000) ≈ 10.0 bits per symbol times 100 symbols per equation times 1000 equations, or about 10^6 bits. This is very likely an overestimate; for instance, lg(1000) bits per symbol could be greatly improved upon by a compression algorithm that gives shorter code lengths to more common symbols (just as gzip can compress text files). But this works as an upper bound.

Mathematical operators have meaning to humans who understand how they work. In order for a Turing machine to compute our equations, we'd need additional work to convert those operators into an actual algorithm. I don't know how complex this would be, but it's probably conservative to assume that this mapping could be specified in fewer than, say, 10^10 bits (just making up a number).

Assuming we require at most ~10^10 bits in total to specify a fine-tuned universe, 1/N is bounded below by 2^(-10^10), so N is bounded above by 2^(10^10). This equals

(10^(log 2))^(10^10) = 10^((log 2) * 10^10) = 10^(0.3 * 10^10).

Let's be generous, ignore the 0.3 factor, and call this 10^(10^10).
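The base conversion can be verified directly (log here means log base 10):

```python
import math

# 2^(10^10) = 10^(log10(2) * 10^10) ≈ 10^(0.30 * 10^10)
exponent = math.log10(2) * 10**10
assert 3.0e9 < exponent < 3.02e9  # i.e., roughly 0.30 * 10^10
```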

How many BBs?

What's the probability that any given brain-sized region of universe contains a BB? Your brain has about 10^26 atoms. But maybe the brain could be many times smaller and still be subjectively indistinguishable. In particular, suppose for the sake of concreteness that it could be 10^10 times smaller. Such a brain wouldn't have nearly the memory or reasoning capacity of yours, but maybe it could still hold the instantaneous observation that you're making right now. So let's downgrade this estimate to 10^16 atoms. Let's assume that BBs are formed by random combinations of atoms. (In fact, in non-fine-tuned universes, there probably aren't atoms(?), so the particles composing a BB would typically be more elementary. This would only make BB formation exponentially harder.) If we assumed a probability for any given atom to go to the right spot of 10^-1 (which is way too high), the probability of the whole brain forming would be (10^-1)^(10^16) = 10^(-10^16). This is for a given brain-sized region of fluctuating space-time.

For simplicity if not accuracy, suppose each universe in the multiverse is the size of the observable universe, which is about 10^80 (hydrogen) atoms. Dividing by 10^16 atoms per brain, this gives ~10^64 brains that could exist within a given universe, ignoring the fact that some of the atoms in brains are bigger than hydrogen. (I also did this calculation based on the volume of a brain and the volume of the observable universe, and it yielded ~10^61, which is a nice affirmation of the estimate.)

Now, to get the total expected number of BBs in the non-fine-tuned parts of the universe, we multiply

(N universes) * (number of brain-sized atom clumps per universe) * P(BB forming in a brain-sized clump)
= 10^(10^10) * 10^64 * 10^(-10^16)
= 10^(10^10 + 64 - 10^16) << 10^(-10^15)

because the -10^16 dominates in the exponent. So there aren't many BBs after all! Of course, in the real multiverse, there are infinitely many fine-tuned universes and many times more non-fine-tuned universes, making infinitely many BBs, but this calculation is just examining relative proportions.
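The product above can be sketched by adding the exponents in log space, using the made-up numbers from the preceding sections:

```python
# Expected number of BBs across the non-fine-tuned multiverse, in log10 space.
log10_n_universes = 10**10     # N = 10^(10^10) universes
log10_clumps = 64              # 10^64 brain-sized clumps per universe
log10_p_bb = -(10**16)         # P(BB forming in one clump) = 10^(-10^16)
log10_expected_bbs = log10_n_universes + log10_clumps + log10_p_bb
print(log10_expected_bbs < -(10**15))   # True: the -10^16 term dominates
```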

How many evolved brains?

Now, how many expected versions of your brain are in the 1 universe that's fine-tuned relative to the N not fine-tuned? This is hard to calculate precisely. We could decompose it slightly:

Expected number of copies of your brain = P(life evolving in the observable universe | fine-tuning) * P(humans evolving | life) * P(a given human's brain-moment is indistinguishable from you | humans) * (expected number of humans at a given time).

To make up numbers, I'd guess this is something like

10^-1 * 10^(-10^3) * 10^(-10^5) * 10^10, which basically equals 10^(-10^5).

There's massive uncertainty on these numbers. 10^(-10^5) for some human having a state indistinguishable from mine seems generously small. It should be vastly higher than the probability of a BB because the set of accessible configurations of an evolved brain is a tiny subspace of the set of all configurations of a comparable number of atoms in general.
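Multiplying the made-up factors in log space confirms that the -10^5 term swamps the others:

```python
# The four factors from the estimate above, as log10 exponents.
log10_factors = [-1, -(10**3), -(10**5), 10]
log10_expected_copies = sum(log10_factors)
print(log10_expected_copies)   # -100991, which basically equals -10^5
```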

A simpler approach

If we look back on the previous calculation, we can see that the only potentially important numbers were:

  • the probability F of a fine-tuned universe (bounded below by 10^(-10^10) in this example)
  • the probability B of a BB of you forming randomly in a non-fine-tuned universe (estimated as 10^(-10^16))
  • the probability E of evolving your brain in a fine-tuned universe (estimated as 10^(-10^5)).

The relevant comparison is B vs. F * E. Because of the way one factor basically dominates all others, this comparison ends up being essentially equivalent to B vs. min(F,E). Since E is necessarily bigger than B (your brain is way more likely to emerge in an evolved organism's head than appear randomly), the only real comparison we need to do is B against F: Is a Boltzmann brain in a given brain-sized clump of atoms more or less likely than a fine-tuned universe?
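A quick log-space sketch with the estimates above shows both claims: F * E is essentially min(F, E), and B is far smaller than either:

```python
# Comparing B against F * E in log10 space, using the text's estimates.
log10_F = -(10**10)    # probability of a fine-tuned universe
log10_E = -(10**5)     # probability of evolving your brain, given fine-tuning
log10_B = -(10**16)    # probability of a BB in one brain-sized clump
log10_FE = log10_F + log10_E
# F * E is dominated by the smaller factor, F:
print(abs(log10_FE - min(log10_F, log10_E)) / abs(log10_FE))  # tiny (~1e-5)
print(log10_B < log10_FE)   # True: the evolved-brain route wins
```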


Even if an astronomical proportion of the universes in our multiverse can't support life, the expected number of evolved brains is probably vastly bigger than the expected number of Boltzmann brains. Hence, by a principle of indifference, we're probably evolved brains, contrary to Creationist claims. That said, these calculations are extremely messy and involve many debatable parameters.


  1. This is because the favored hypothesis is the one with highest

    (prior probability of hypothesis) * P(observations | hypothesis) * (number of copies of me | hypothesis).

    For the hypothesis that the multiverse is roughly as it appears, these factors are

    (some prior probability) * 1.0 * (some number of copies of me).

    For the brain-tiled hypothesis, the factors are

    [ roughly (some prior probability) * 10^-3,010 ] * 1.0 * [ (some number of copies of me) * 10^(big) ]

    for some value of "big".  (back)

  2. Of course, Python programs are hardly optimal from the perspective of short program lengths. For instance, the "return" statement could easily be shortened to one letter if Python were to choose small size over readability. Moreover, we could use variable-length character encodings rather than requiring 8 bits for every character, so that more common characters would have short codes. Maybe we wouldn't even need a full 2^8 = 256 possible characters in the language.  (back)
  3. If the M^2 cells were numbered from left to right in reading order, then the zero-indexed number of the returned cell would be y*M + x. When written in binary, that number would be the bitstring representing your location.  (back)