by Brian Tomasik
First written: 17 Aug. 2014; last update: 16 Nov. 2015
Summary
This essay explores the speculative possibility that fundamental physical operations -- atomic movements, electron orbits, photon collisions, etc. -- could collectively deserve significant moral weight. While I was initially skeptical of this conclusion, I've since come to embrace it. In practice I might adopt a kind of moral-pluralism approach in which I maintain some concern for animal-like beings even if simple physics-based suffering dominates numerically. I also explore whether, if the multiverse does contain enormous amounts of suffering from fundamental physical operations, there are ways we can change how much of it occurs and what distribution of "experiences" it entails. An argument based on vacuum fluctuations during the eternal lifetime of the universe suggests that if we give fundamental physics any nonzero weight, then almost all of our expected impact may come through how intelligence might transform fundamental physics to reduce the amount of suffering it contains. Alas, it's not clear whether negative-leaning consequentialists should actively promote concern for suffering in physics, even if they personally care a lot about it.
Note: I'm not an expert on the topics discussed here, so corrections are welcome.
- 1 Summary
- 2 Preface
- 3 Introduction
- 4 Why fundamental physics may matter
- 4.1 Numerosity
- 4.2 Either smallest or largest should dominate
- 4.3 Physics is fundamental
- 4.4 Continuity of levels of organization
- 4.5 When "dumb" physics is intelligent
- 4.6 Panpsychist/eliminativist theories of consciousness
- 4.7 Information as fundamental in physics and consciousness
- 4.8 Biases of observability
- 4.9 Biases of size
- 4.10 Clock speed
- 4.11 Historical trends regarding compassion
- 4.12 Mystical/religious intuitions
- 5 How much do I care?
- 6 Practical implications?
- 7 In the long run, physics dominates?
- 8 Unknown unknowns
- 9 Ontological indeterminacy
- 10 Should negative-leaning consequentialists promote this issue?
- 11 Other literature
- 12 Appendix: Physics disasters
- 13 Appendix: Hypercomputation
- 14 My history with this topic
- 15 Reactions to this piece
- 16 Acknowledgments
- 17 Footnotes
Preface
In 2005, Nathan Poe coined Poe's law, whose general form states:
Without a blatant display of humor, it is impossible to create a parody of extremism or fundamentalism that someone won't mistake for the real thing.
I've heard this modified to a statement like the following:
Any sufficiently advanced consequentialism is indistinguishable from its own parody.
The present article is sincere, though it might come across as absurd depending on one's perspective. I write it in the spirit of exploring new ideas rather than because I'm committed to the line of reasoning I advance.
Introduction
In order to reduce suffering, we have to decide which things can suffer and how much. Suffering by humans and animals tugs our heartstrings and is morally urgent, but we also have an obligation to make sure that we're not overlooking negative subjective experiences in other places. I've written elsewhere about suffering in insects and digital minds. This piece explores what is arguably the most extreme possibility: seeing at least traces of suffering in fundamental physics.
I've defended a kind of computational panpsychism in which every physical system can be thought of as having its own kind of consciousness, even if it's too simple or too alien for us to possibly imagine. Another essay on whether video-game characters have moral significance elaborates on more particular ways in which we can see sentience-like operations in very simple systems. It mentions that we could potentially apply Daniel Dennett's intentional stance to some "dumb" physical systems like electrons orbiting atoms or a washer tied to a string. It further notes how we can see some empathy-inducing similarities between us and all of physics.
As an example, even a metal ball -- like an animal -- could be said to take in inputs (various forces acting on it, conveyed via gauge bosons and gravitons), integrate those inputs (compute the net force), and act in response (move in the direction of the net force). Information integration, feedback loops, and (at least implicit) optimization among choices are seemingly relevant attributes of agent-like minds but are also rampant throughout mundane physics. An electron often "chooses" the path of least resistance, based on integrating signals about the physical landscape where it lies.[a] A maglev train initially falls downward due to gravity, but then is pushed back up by magnets, leading to a "happy" equilibrium (dare we say "homeostasis"?) position.
The remainder of this essay takes a more abstract view and proposes general reasons why it's plausible that basic physics could be seen to contain suffering -- perhaps enormous amounts of suffering. It then elaborates on how much I care and whether there are practical ways we could ameliorate the situation.
Why fundamental physics may matter
Why might we take seriously the possibility that fundamental physical operations contain significant amounts of suffering? Following are several weak arguments.
Numerosity
Operations by fundamental physics are the most numerous things in the universe. (Of course, this claim depends on how we define "things".) Hence, even if we value them only an extremely tiny bit, they may collectively dominate in our valuations.
The observable universe contains roughly 10^80 hydrogen atoms. Contrast this with 10^30 bacteria, 10^19 insects, and 10^10 humans on Earth. If a hydrogen atom matters even 10^-70 as much as a person, total hydrogen suffering in the observable universe would rival total human suffering. (This comparison might need some adjustment depending on the specific computational states of the humans and protons. For instance, a human in great pain would count orders of magnitude more than a human with just an itch.)
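As a back-of-the-envelope check, the arithmetic above can be sketched as follows. The counts are the round numbers from the text, and the per-atom weight is a purely hypothetical illustration:

```python
# Rough counts from the text; the per-atom moral weight is a purely
# hypothetical illustration, not a claim about actual moral weights.
HYDROGEN_ATOMS = 1e80    # hydrogen atoms in the observable universe
HUMANS = 1e10            # humans on Earth
WEIGHT_PER_ATOM = 1e-70  # hypothetical weight of one atom vs. one person

hydrogen_total = HYDROGEN_ATOMS * WEIGHT_PER_ATOM  # in "person-equivalents"
# At a weight of exactly 10^-70, the hydrogen total matches the human
# total; any weight above that tips the balance toward hydrogen.
```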
And each hydrogen atom contains its own little world. A hydrogen atom has a diameter on the order of 10^-10 meters, while the Planck length, "the limit below which the very notions of space and length cease to exist", is on the order of 10^-35 meters. This is really small:
the smallest particle, the electron, is about 10^20 times larger [than the Planck length] (that's the difference between a single hair and a large galaxy).
Superstring theory proposes that the fundamental particles of physics, vibrating strings, are on the order of the Planck length in size. Some suggest that spacetime may be discrete, with "pixels" roughly the size of the Planck length.
Finally, while the hydrogen atoms I've been discussing are all part of ordinary matter, dark matter and dark energy comprise roughly 95% of the mass-energy in the universe.
A later section of this piece, "In the long run, physics dominates?", explains why the amount of fundamental physics is actually vastly higher than what I've said here if we think about the long-term future of our universe well past when intelligence can survive.
In general, doesn't it seem odd that a moral theory declares almost everything that exists to be basically morally irrelevant? Of course, perhaps this would be taken as an objection to sentiocentrist valuation rather than an objection to our presumption that most of the multiverse lacks sentience. One might say that most of the multiverse doesn't have subjective experiences but still has (dis)value for non-utilitarian intrinsic reasons.
Either smallest or largest should dominate
For many simple functions defined on an interval, the maximum occurs at either the left or right end of the interval; it takes more structure to get a maximum in between. For instance, a non-horizontal line's maximum on an interval is always at an endpoint, whereas a downward-opening parabola's maximum can fall somewhere in the interior. Even then, the maximum sits at an endpoint whenever the interval doesn't contain the parabola's vertex.
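To make the endpoint intuition concrete, here's a minimal sketch using a simple grid search; the particular functions and interval are arbitrary illustrations:

```python
def argmax_on_interval(f, a, b, n=1001):
    """Return the grid point in [a, b] where f is largest."""
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]
    return max(xs, key=f)

# A non-horizontal line always peaks at an endpoint:
line_peak = argmax_on_interval(lambda x: 2 * x + 1, 0.0, 10.0)  # 10.0

# A downward parabola can peak in the interior, at its vertex:
inner_peak = argmax_on_interval(lambda x: -(x - 4.0) ** 2, 0.0, 10.0)

# ...but if the vertex lies outside the interval, the peak is again
# at an endpoint:
edge_peak = argmax_on_interval(lambda x: -(x - 40.0) ** 2, 0.0, 10.0)
```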
In particular, if we plot "size" of objects on the x axis and "collective amount of suffering at that size" on the y axis, it's plausible that this function has an optimum either at the smallest or largest possible scale. In other words, either suffering by elementary particles dominates, or else high-level emergent behavior of the single "multiverse-wide brain" that we live in dominates.
Physics is fundamental
Physics appears to be the most basic way to describe reality, so it's plausible that our ethics should have something to say about that level of reality and not just about bigger emergent processes.
Continuity of levels of organization
Similar mathematics can describe high-level and low-level systems. Complex processes familiar to us in the human realm may have extremely rudimentary counterparts in the (sub)atomic realm. It seems hard to specify a dividing line where a process becomes too simple to matter.
The same concepts and structures reappear at many levels of organization. For example, oscillating brain networks are sometimes thought to be crucial for consciousness, yet the mathematics used to model these dynamical systems is general, and the same oscillatory structures show up in other basic physical processes.
Neuroscience teaches us that consciousness is certain complex patterns of computation. That feels weird, but we know it's true because we are conscious but can also see that our brains are all that's going on. Fundamental physics displays simpler patterns of computation. Instinctively we think these aren't conscious either -- and certainly they don't have the algorithmic machinery to contain any of the high-level thinking or rich sensory integration that we do -- but why are we so sure that simple computations aren't also conscious in their own, elementary ways? It seems weird to proclaim a dividing line that separates some computations as fundamentally privileged compared with others. Rather, I suggest we see all computations along a continuum of complexity and animal-like behavior. Basic physics seems very different but not wholly different, very unimportant but not wholly unimportant.
Consider two systems:
- A hot stovetop triggers a reflex that leads to withdrawal of a hand.
- Two protons placed near each other repel due to both being positively charged.
Each system can be modeled at a high level in the same way: If a certain condition is satisfied, then produce a withdrawal behavior. Of course, the hand reflex contains astronomically greater complexity in its lower layers of physical processing. But the most abstract description of the process is relatively simple and recurs throughout levels of organization.
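As a toy sketch of that shared abstract description -- the pain threshold and separation values below are made-up illustrations, not physiological or experimental figures:

```python
COULOMB_K = 8.9875e9       # Coulomb constant, N*m^2/C^2
PROTON_CHARGE = 1.602e-19  # elementary charge, C

def hand_response(temperature_c, pain_threshold_c=50.0):
    """Reflex arc, maximally abstracted: condition met -> withdrawal."""
    return "withdraw" if temperature_c > pain_threshold_c else "stay"

def proton_response(separation_m):
    """Coulomb repulsion, same abstract schema: like charges -> move apart."""
    force = COULOMB_K * PROTON_CHARGE ** 2 / separation_m ** 2
    return "withdraw" if force > 0 else "stay"  # always repulsive here

# Both systems satisfy the same high-level rule:
# "if a certain condition is satisfied, produce a withdrawal behavior".
```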
I suspect many people would respond to this example with the claim that they don't care about either "unconscious" reflexes or proton-proton repulsion. But what if we augmented the hot-stove scenario to include many follow-on effects in the brain, making the event more "neurally famous" and hence more conscious? By analogy, what if the proton-proton repulsion affected lots of neighboring physical systems, making it more "physically famous" and leaving relatively permanent traces ("memories") that physicists could later use to determine that the repulsion had happened?
When "dumb" physics is intelligent
Ant colonies and slime molds are often praised for their abilities to solve complex optimization problems. But similar sorts of intelligence can be found in supposedly "dumb" physics as well. Many swarm intelligence algorithms are inspired by physics: particle swarm optimization, gravitational search algorithm, river formation dynamics, self-propelled particles, etc. To be sure, the exact details of these algorithms involve operations that physics isn't implementing on its own. But the general structure is often similar, and if we squint, we see how dumb physics is solving its own optimization problems.
Of course, solving optimization problems is not the same thing as feeling emotions. But we typically think that minds are more ethically important if they're more intelligent. So if some parts of physics are solving problems, and other parts are implementing emotion-like operations, is the whole system an emotional, intelligent mind in a vague way? Of course, it's not the kind of mind we should treat as a game-theoretic agent, and its intelligent computations are not necessarily united toward a common goal. But it still seems plausible that all this "thinking" on the part of physics counts for something.
Panpsychist/eliminativist theories of consciousness
Elsewhere I've argued that many mainstream theories of consciousness are in some sense panpsychist, because there are at least trivial interpretations of even simple physical systems that implement those theories.
According to the Internet Encyclopedia of Philosophy:
many contemporary philosophers have argued that panpsychism is simply too fantastic or improbable to be true. However, there is actually a very long and distinguished history of panpsychist thinking in Western philosophy, from its beginnings in ancient Greece through the present day. Some of the greatest names in philosophy have argued for some form of panpsychism, or expressed a strong sympathy toward the idea. Notably, as we progress into the 21st century, we find the beginnings of a philosophical renaissance for the subject.
In The Conscious Mind (1996), David Chalmers suggests that even thermostats may have experiences and that "If there is experience associated with thermostats, there is probably experience everywhere: wherever there is a causal interaction, there is information, and wherever there is information, there is experience." (p. 297)
- A nontrivial minority of philosophers of mind are panpsychists (although I think many of them are misguided in their conceptions of panpsychism).
- Historically, other philosophers have been neutral monists.
Chalmers classifies both of these as "Type-F Monism" views.
In Representation and Reality (p. 121), Hilary Putnam proves that "Every ordinary open [physical] system is a realization of every abstract finite automaton." While Putnam took this as a refutation of functionalism, a functionalist can take it as a demonstration that even simple physical systems have some degree of morally relevant mental life. Likewise, Ned Block warns that functionalism, if it's not overly chauvinist by denying mental states to beings that have them, ends up being overly liberal in attributing mental states to systems that (allegedly) don't have them. But as a liberal functionalist, I embrace the attribution of (at least some degree of) mental states to all kinds of systems. See "What is a computation?" for further discussion.
One of the leading theories of consciousness, integrated information theory (IIT), attributes nonzero degrees of consciousness to even hydrogen ions. Scott Aaronson has shown that seemingly trivial mathematical operations which can be run on present-day computers could have integrated information in excess of that found in human brains. While I agree with Aaronson that this example suggests a flaw in IIT, others could take it to mean that consciousness may be more abundant in unexpected places than we thought. And perhaps a more plausible future account of consciousness would more legitimately find vast quantities of sentience in physical operations we had previously assumed were morally irrelevant.
Dennett characterizes "presentiments" (thoughts) in this way (Consciousness Explained, p. 365):
What there is, really, is just various events of content-fixation occurring in various places at various times in the brain. These are nobody's speech acts, and hence they don't have to be in a language, but they are rather like speech acts; they have content, and they do have the effect of informing various processes with this content.
From a very abstracted perspective, we can see this as just some information-bearing physical events influencing others. In animal brains, these informational events conform to certain regularities that produce adaptive responses, but the broadest outlines of what Dennett describes seem to run throughout physics.[b] While I don't fully endorse his article, I agree with Tam Hunt's claim that
Daniel Dennett is a panpsychist. He wouldn’t admit it in public, and he might not even realize it. Yet Dennett, one of the foremost materialists in the early part of the 21st century, advocates views regarding consciousness, biology, and philosophy that unavoidably lead to that most ridiculous of philosophical views: that all things have some degree of consciousness, otherwise known as panpsychism.
Theologian Keith Ward has remarked:
if I thought that people were just very complicated physical mechanisms and nothing more, I would give people really no more respect than I would give to atoms. I mean, I might give some respect to an atom [though] I don't know how I'd do that [...].
I disagree that a person would deserve only the same degree of respect as an atom, since the human is astronomically more complex. But I agree with the fundamental point. Ward's modus tollens is my modus ponens.
It's natural -- common sense -- for us to approach the world by dividing it into things with minds (you, me, other people, dogs, birds...) and things without minds (stones, trees, pencils, fingernails). Reflecting on intermediate cases, such as various types of worms, one might sense trouble for a sharp distinction here, but vagueness along a single spectrum of mindedness isn't too threatening to common sense. The essential difference between the minded and the un-minded remains, despite a gray zone.
Figdor's picture challenges all that. If what she says about "prefer" also goes for some other important psychological terms (as she thinks it does), then mentality spreads wide into the world. [...]
Figdor has taken, I think, a crucial step toward jettisoning the remnants of the traditional dualist view of us as imbued with special immaterial souls -- toward instead seeing ourselves as only complex material patterns whose kin are other complex patterns, whether those patterns appear in other mammals, or in coral, or inside our organs, or in social groups or ecosystems or swirling eddies. Some complexities we share and others we do not. That is the radical lesson of materialism, which we do not fully grasp if we insist on saying "here are the minds and here are the non-minds", demanding a separate set of verbs for each, with truly "mental" processes only occurring in certain privileged spaces.
Information as fundamental in physics and consciousness
Many modern neuroscience theories consider "information processing" of certain sorts as fundamental to what consciousness is. Likewise, some physicists are increasingly suggesting that the universe may be fundamentally informational. John Wheeler proposed an "It from bit" idea:
Otherwise put, every 'it'--every particle, every field of force, even the space-time continuum itself--derives its function, its meaning, its very existence entirely--even if in some contexts indirectly--from the apparatus-elicited answers to yes-or-no questions, binary choices, bits.
Hence, if we see consciousness as fundamentally about computation, it's plausible to see consciousness as fundamental in the universe. Of course, if we think that only particular kinds of computational configurations count as conscious, then it doesn't follow that shades of consciousness appear throughout physics.
Biases of observability
We tend to care more about things we see on a regular basis. This is only natural, because what's immediately presented to us becomes most salient. This is one reason why most of us care a lot about ourselves, more about our community than some community in another country, more about humans than animals, and -- arguably -- more about happenings on the human scale than things at vastly smaller or larger scales. Insofar as this is a bias rather than a stance we prefer to take even upon reflection, we may need to give more ethical weight to things we don't typically observe.
Following are some animations of interactions at the molecular level:
- animation of water molecules
- polymer animation
- animation of cell organelles, proteins, and genetic regulation.
While these objects don't look sentient the way animals do, we can see some "life-like" elements to them -- much more than when we look at a solid object from a macroscopic perspective. If we wanted to faunapomorphize, we could imagine the molecules as little creatures going about their day doing various things. We could tell their stories. While such a perspective is not legitimate because it sneaks in huge amounts of cognitive machinery from our imagination that's absent from the objects we're observing, it may at least be plausible that all these atomic-scale hustlings and bustlings matter for something. They are their own little, simple societies.
Most ethicists have probably taken only a few courses in physics. When one hasn't studied a topic in depth, it's easy to write it off as unimportant. By analogy, hearing a poem in another language that you don't understand might sound like uninteresting gibberish. The more immersed you are in a topic, the more vivid it becomes and the more plausible it seems that its subject matter has moral significance.
Biases of size
When people explain why they don't care much about insects, one answer they give is "They're so small!" Indeed, it's hard to feel extensive moral concern for an entity so tiny that it's easily stepped on without noticing.
However, the logical complexity of a system is not necessarily the same as its physical size. To carry information about whether "you still want me", one could either "tie a yellow ribbon 'round the old oak tree" or encode the one bit of information in a transistor several dozen nanometers in length. If human brains were computed by insect-sized devices that crawled on the floor, perhaps we would have different intuitions about the value of human life, or about our need to watch our steps. Compare to Rick Moranis's character's meticulous efforts in Honey, I Shrunk the Kids to avoid treading on the lawn where his shrunken children were lost.
Of course, there is some reason behind size bias. Especially given constant neural hardware, brain size is a necessary (if not sufficient) condition for a complex mental life. In addition, one possible interpretation of Nick Bostrom's insulator thought experiment is that raw physical size of computing materials is morally relevant even if the logical computation remains constant. I personally feel skeptical about this and believe that algorithmic/logical complexity is more morally important. As a result, tiny physical particles don't automatically count as vastly less significant just due to their size.
Clock speed
Atomic and subatomic interactions occur at blazing speeds compared with macroscopic algorithms. Hence, we might poetically conceive of micro-operations as having vastly greater "clock speeds" (rates of completing one cycle of a computational loop). This may increase the moral weight per second of micro-operations relative to macro-operations.
Also: "Plasmas are by far the most common phase of ordinary matter in the universe, both by mass and by volume." Plasmas in stars change rapidly and hence are more "animated" and "lifelike" than, e.g., molecules in a solid. Compare to the cheela in Dragon's Egg that ran a million times faster than humans. (Of course, I suppose one could argue that since life has relatively low entropy, plasmas are less lifelike than solids?) I don't know if the same is true for the plasma of the intergalactic medium.
Historical trends regarding compassion
Over the ages, there has been a trend of what Peter Singer called the "expanding circle". We've seen increasing spheres of ethical concern: from ourselves, to kin, to other powerful tribe members, to all white men, to all men, to women, to gays, to mammals, to birds, to fish, to insects, and so on. Where is the end point of this process? Right now many people put a line somewhere within the animal kingdom, or perhaps between animals and plants. But as Singer himself has noted in the "Equality for Animals?" chapter of Practical Ethics:
It is easy for us to criticize the prejudices of our grandfathers, from which our fathers freed themselves. It is more difficult to distance ourselves from our own beliefs, so that we can dispassionately search for prejudices among them. What is needed now is a willingness to follow the arguments where they lead, without a prior assumption that the issue is not worth attending to.
Valuing fundamental physical operations seems to be a kind of bound on how "crazy" our moral views can get, at least within our standard conception of physics and using an aggregationist approach for (dis)valuing suffering, in which we assess suffering in individual parts of a system and try to sum them. Ultimately this aggregation approach is misguided, because the universe is one big whole not separated into isolated parts. But at the moment I don't have a better replacement for ethical aggregation other than a modification in which we sum over all levels of abstraction together (i.e., in addition to valuing A, B, and C separately, we also value the unified system that emerges from the collective behavior of A, B, and C).
Albert Einstein expressed similar ideas when he discussed the concept of circles of compassion in 1950:
A human being is a part of the whole, called by us "Universe", a part limited in time and space. He experiences himself, his thoughts and feelings as something separated from the rest -- a kind of optical delusion of his consciousness. This delusion is a kind of prison for us, restricting us to our personal desires and to affection for a few persons nearest to us. Our task must be to free ourselves from this prison by widening our circle of compassion to embrace all living creatures and the whole of nature in its beauty.
While Einstein emphasizes nature's beauty, I would point out that nature may also be filled with tiny horrors -- disvaluable atomic or subatomic interactions. Any one of them may seem trivial, but collectively they might matter a lot.
Mystical/religious intuitions
The idea of universal consciousness sounds woo-woo, and perhaps it is. But it seems to have been a component of several mystical and religious traditions throughout human history, showing that the idea has at least been something of an attractor in the space of human thought paradigms.
According to David Skrbina: "Monotheism and the Christian worldview were fundamentally opposed to panpsychism [...]." Rather, Christian metaphysics seemed to align well with dualism. The excessive influence of Christian thinking over modern Western culture probably helps explain why panpsychism seems so weird. In contrast, Graham Parkes reports that "Most of traditional Chinese, Japanese and Korean philosophy would qualify as panpsychist in nature." Consider also the animism of many indigenous peoples. Some Western spiritual movements likewise embrace ideas of nonduality.
How much do I care?
In "Empathy vs. aesthetics", I describe the dilemma that I feel about going "too far down" regarding what I consider to be morally relevant suffering. On the one hand, I don't want to just enjoy playing with elegant theoretical ideas while actually important animals are being eaten alive as we speak. But at the same time, if there is potentially immense suffering-like computation going on in strange places, I don't want to ignore it.
In practice, I may take the following approach. I adopt a kind of "moral pluralism", similar to the parliamentary model of Nick Bostrom and Toby Ord: I devote some fraction of my attention, resources, and donations to each of several levels of focus. For instance, the following breakdown seems plausible:
- 10% of resources on tangible, clearly important suffering like that by higher animals, especially in nature
- 25% of resources toward concrete suffering by insects and other creatures whose sentience is more questionable but whose numbers imply vast importance
- 40% of resources toward far-future speculation about digital minds that will be created throughout our astronomical supercluster in the coming gigayears
- 25% of work (such as the writing of this essay) toward the most speculative scenarios of all, such as whether we should care about fundamental physics.
Apportioning resources is straightforward enough, but some policies that help one value might hurt another. In this case, the conflict is resolved by imagining some game-theoretic compromise between the parties, in which each side maintains values it considers most important in return for giving up on points that matter less.
This approach prevents crazy-seeming conclusions, such as that all our efforts should go to save the suffering neutrinos and gluons, while at the same time allowing room for opening our hearts to realms previously ignored.
Practical implications?
Suffering may be more prevalent than we thought
If we think of suffering as existing mainly in systems as big and sophisticated as animals, then it appears that most of the universe is currently devoid of suffering. Colonizing space and utilizing vast amounts of computing power looks worrisome because this allows for creating vastly greater numbers of minds capable of at least animal-like suffering.
If most of the universe's suffering lies in fundamental physical operations, then most of the harnessable energy in the universe may not be currently "dead" but may in fact have significant disvalue. If so, then suffering reducers might look more favorably on greater human intelligence if humans could find ways to reduce suffering in fundamental physics. On the other hand, if fundamental physics contains significant amounts of suffering, it presumably also contains significant amounts of happiness, so it's plausible that happiness-focused altruists would push to create more mentally active physical computations rather than fewer, especially if there aren't more fine-grained ways to change the net balance of happiness vs. suffering within basic physics. This could make greater human intelligence look even worse than before.
Changing the amount of mind within physics?
If fundamental physics does contain significant mind-like operations, are there ways to change how many of them happen? Creating ordinary computing hardware seems to be one means to increase the amount of high-level mentation within physics. But are there proposals to change the atomic-scale operations too?
Maybe changing the temperature of systems would make a difference? When thinking about harnessable computation, Landauer's principle says that erasing one bit of information must dissipate at least k_B * T * ln(2) joules as heat, where k_B is the Boltzmann constant and T is the absolute temperature of the system. As Anders Sandberg explains, this bound on computing cost is two orders of magnitude lower at the 3 K cosmic-background temperature than at ~300 K room temperature. Wei Dai suggests that advanced civilizations might use black holes to dissipate excess heat and thereby improve efficiency under the Landauer bound.
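A quick check of the two-orders-of-magnitude figure using Landauer's formula:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temperature_k):
    """Minimum heat dissipated to erase one bit at temperature T."""
    return K_B * temperature_k * math.log(2)

room = landauer_limit_joules(300.0)  # roughly 2.9e-21 J
cmb = landauer_limit_joules(3.0)     # at the cosmic background temperature
ratio = room / cmb                   # 100: two orders of magnitude cheaper
```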
Sandberg has discussed limits on computation by an expanding superintelligent civilization. Perhaps similar ideas could be used to change the amount of morally relevant sentience. Or maybe the "excess heat" of computation has moral status too? Maybe sentience in fundamental physics relates directly to energy, in which case the law of conservation of energy implies that we can't change how much exists?
Another interesting question is the moral status of reversible computing and reverse computation, which aim to compute more cheaply than the Landauer limit. Presumably a sentience-like operation matters just as much if it's reversed as if not? Or do we include entropy creation as a fundamental component of our valuation? Does a less energy-efficient computer matter more per operation?
Robert A. Freitas proposed a "sentience quotient" (SQ) to measure the information-processing rate (bits/second) of a given piece of matter. The word "sentience" here is not meant to align with "consciousness" necessarily but just with "information". Freitas calculated a lower limit SQ of -70 and an upper limit of 50, with human brains hovering around 13.
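Freitas defines SQ as the base-10 logarithm of information-processing rate per unit mass (bits per second per kilogram). A minimal sketch follows; the specific brain figures (~1.4 kg processing on the order of 10^13 bits/s) are illustrative assumptions of mine chosen to reproduce the human value, not Freitas's exact inputs:

```python
import math

def sentience_quotient(bits_per_second, mass_kg):
    """Freitas's SQ: log10 of bits processed per second per kilogram of matter."""
    return math.log10(bits_per_second / mass_kg)

# Illustrative, assumed figures for a human brain:
print(round(sentience_quotient(1e13, 1.4)))  # -> 13, matching Freitas's human SQ
```

On this log scale, the span from the lower limit of -70 to the upper limit of +50 covers 120 orders of magnitude, which is why Freitas's comparison is usually quoted in SQ points rather than raw rates.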
Changing the quality of mind within physics?
While it seems easier to imagine how we might change how much computation the universe does, it's less clear how to improve the "welfare" of (sub)atomic processes. It's hard enough looking after the welfare of needy humans! Of course, artificial intelligence and nanotechnology may one day allow us to, as David Pearce suggests, "micro-manage every cubic metre of the planet". But micro-managing every atom is probably impossible, since the policing agent would itself presumably need to be many atoms big.
Rather than active management, the main ways in which we would improve (sub)atomic welfare would be through changing physical systems at large. We would have to ask whether some kinds of physical processes contain less micro-suffering than others. This seems like almost an absurd task at present, since it's hard to imagine which kinds of fundamental physical operations seem more suffering-like than which others, but perhaps if we reflected on the problem for a long time and studied physics extensively, we would develop nontrivial intuitions on the matter.
Large-scale changes to the dynamics of physics seem possible for a superintelligent civilization. For instance, maybe operations in the sun are more painful than operations in asteroids, so a civilization could aim to speed up conversion of solar matter to heavy metals. Maybe dense stars contain less suffering than diffuse ones. Maybe energy is more painful than matter, so we'd like to slow down nuclear fission. And so on.
This piece's appendix on physics disasters reviews some scenarios that people have feared at high-energy particle accelerators. While these appear extremely unlikely in current Earth-based facilities, they illustrate the ways in which there may be many levers for tweaking physics that we naively hadn't even imagined. False-vacuum decay seems to be a very bad idea because it would destroy almost everything that most people consider valuable -- and who knows, maybe the true-vacuum state would have more physics-based suffering? But perhaps there are similar proposals, not yet discovered, that would be widely embraced and would push physics in a more humane direction.
In the long run, physics dominates?
Suppose you think the importance of a proton is exceedingly tiny -- much less than 10^-70 times the importance of a human. Since the observable universe contains roughly 10^80 protons and roughly 10^10 humans, this implies that humans currently have more collective importance than all the universe's protons combined. But it doesn't get you off the hook. In the long run, the tortoise of fundamental physics may still win the race if it has any importance at all.
Consider the timeline for cosmological events of the far future. Suppose that post-humans fill their region of the universe with highly sentient life, some of which might suffer. While computational systems would presumably(?) become impossible well before the black hole era, let's be extra generous and assume they could persist until the longest estimated time at which all nucleons will have decayed: 10^200 years.
But even after nucleons vanish, space will not be completely empty. It always contains vacuum energy due to quantum uncertainty -- estimated at a nontrivial 10^-9 joules per cubic meter in free space. Virtual particles are created and vanish soon thereafter, and presumably they would have nonzero ethical significance if bosons, photons, etc. do. How long would these fluctuations last? Well, basically forever, given the most likely cosmological scenario of "big freeze".
Actually, because of quantum fluctuations, it's estimated that a new big bang would be created on the order of 10^(10^56) years from now. So if we pretend that point marks a finish line (even though the original, parent universe continues to exist beyond that point?c), we get for each big bang a ratio of
- at most 10^200 years of intelligent computations, versus
- 10^(10^56) years of vacuum energy.
The ratio of the second to the first is 10^(10^56 - 200). The 200 is completely negligible in the face of 10^56, and we still end up with an inconceivably huge number. Unless you assign vacuum energy basically exactly zero weight, all the computations of intelligent civilization represent an imperceptible blip in the face of vacuum fluctuations. In the end, physics dominates in (dis)value.
Of course, there's model uncertainty here. The numbers I sketched are based on current cosmological understanding, which is likely to change with time. They also appear to give rise to the Boltzmann brain problem: If there are infinitely many vacuum fluctuations in the universe's future, why aren't we one of those?d And maybe there's some wacky scenario by which intelligently directed computations could last forever along with vacuum energy.
Questions about where the line of sentience begins have previously been raised in the context of Boltzmann brains: How small does a fluctuation have to be before we count it as sentient? This relates to the continuity argument why even fundamental physics may matter a teeny tiny bit.
Also note that this argument does not depend on my particular view of consciousness. If you think consciousness is an objective, binary property that some systems have and others don't, then you should still assign nonzero probability to the claim that a virtual-particle pair, say, is conscious. Even if that probability is 10^(-10^55), it still washes away into nothingness when multiplied by 10^(10^56) years:
(10^(-10^55)) * (10^(10^56)) = 10^(10^56 - 10^55) = 10^(9 * 10^55).
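Numbers like 10^(10^56) overflow any floating-point type, but the multiplication above only adds exponents, so it can be checked with exact integer arithmetic on the base-10 exponents alone:

```python
# Represent 10^e by its exponent e; multiplying powers of 10 means adding exponents.
prob_exponent = -(10**55)      # probability 10^(-10^55) that a fluctuation is conscious
duration_exponent = 10**56     # 10^(10^56) years of vacuum fluctuations

product_exponent = prob_exponent + duration_exponent
print(product_exponent == 9 * 10**55)  # True: the product is 10^(9 * 10^55)
```

Python's arbitrary-precision integers make this exact, with no rounding at any step.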
What the above point may suggest is that speculative scenarios to change the long-run future of physics may dominate any concrete work to affect the welfare of intelligent computations -- at least within the fraction of our brain's moral parliament that cares about fundamental physics. The main value (or disvalue) of intelligence would be to explore physics further and seek out tricks by which its long-term character could be transformed. For instance, if false-vacuum decay did look beneficial with respect to reducing suffering in physics, civilization could wait until its lifetime was almost over anyway (letting those who want to create lots of happy and meaningful intelligent beings run their eudaimonic computations) and then try to ignite a false-vacuum decay for the benefit of the remainder of the universe (assuming this wouldn't impinge on distant aliens whose time wasn't yet up). Triggering such a decay might require extremely high-energy collisions -- presumably more than a million times those found in current particle accelerators -- but it might be possible. On the other hand, such decay may happen on its own within billions of years, suggesting little benefit to starting early relative to the cosmic scales at stake. In any case, I'm not suggesting vacuum decay as the solution -- just that there may be many opportunities like it waiting to be found, and that these possibilities may dwarf anything else that happens with intelligent life.
Unfortunately, intelligence might also make things worse. Perhaps the universe would have evolved to a relatively peaceful state, but our post-human descendants find a way to spice up post-dark era physics and make it more lively -- creating in it both more happiness and more suffering. This could be worse on balance than anything else intelligence ever does.
These ideas are all extremely speculative, and our views of physics will continue changing radically in the coming decades, centuries, and beyond. Our conceptions about what counts as suffering might also mutate at least as dramatically. For now the main point of these musings is to remind us how much we don't know and how radically our understandings of altruism may migrate upon reflection. We should generally eschew policies whose goodness depends sensitively on particular assumptions about sentience, physics, and other considerations. Most of these questions will have to be handed off to our far-future descendants. The appendix on hypercomputation elaborates on this point.
Do vacuum fluctuations exist?
One clear illustration of the disruptive potential of model uncertainty is a 2014 proposal by Sean Carroll and colleagues that vacuum fluctuations don't actually exist in quiescent de Sitter space because such space lacks out-of-equilibrium systems to produce decoherence. This idea has been endorsed by Scott Aaronson.
If this is correct, does it reduce or eliminate the potential moral importance of physics after the Big Freeze?e If so, does the argument for physics dominating in the long run fail? Or are there still other physical phenomena happening until the end of time that also command ethical weight?
Of course, physics can still dominate in importance even if the eternal restlessness of vacuum fluctuations is ruled out, since inanimate matter/energy/etc. in the short and medium terms still vastly outnumber biological and digital minds. It's just that the case is less of a slam dunk.
Presumably non-quiescent space still containing real physical variation will exist for astronomically longer than the time after which intelligence-controlled computation becomes impossible.
What if physics is fake?
It seems reasonably likely that we are in fact digital minds in a virtual reality (VR) run by another civilization. In order to save computing power, most VRs will skimp on computing physics in any great detail and will focus on computing the intelligent minds in the VR worlds. In this case, most of the physics we think we see doesn't actually exist beyond its surface appearances, except when we probe it more precisely in physics laboratories. If so, the astronomical importance of suffering in physics would be undercut, because there isn't in fact much physics.f
That said, there is still presumably a lot of physics in the "basement universe" that's running our VR. If our actions correlate with actions of the basement agents to any reasonable degree, or if we can influence the basement by writing ethical arguments that the basement agents can read, then we might still be able to contribute toward reducing suffering in the basement's physics. Our ability to do this is just much more constrained than if we directly inhabit a universe with lots of real, detailed physics.
Of course, this VR possibility also seems to undercut the importance of normal attempts to reduce suffering in the far future: if we're in VR, it's much less likely that our descendants will have access to astronomical amounts of computing power in the coming billions of years (since this would be expensive to compute), so it's (thankfully) much less likely that our descendants will create lots of suffering that we should try to reduce. Thus, if we're in VR, both efforts to reduce suffering in organism-like computations of the future and efforts to reduce suffering in fundamental physics matter less, and it's not clear if or how much this changes the relative urgency of organism-like suffering vis-à-vis physics suffering.
Ontological indeterminacy
The picture of fundamental physics that I've discussed has been framed in terms of particles moving about. However, this may not be accurate. For instance, Sean Carroll argues that the universe is actually made of fields:
The universe is full of fields, and what we think of as particles are just excitations of those fields, like waves in an ocean. An electron, for example, is just an excitation of an electron field.
The ontological interpretation of quantum field theory is an active area of philosophical debate, with four main candidate views. Similar questions arise in the interpretation of quantum mechanics, and many alternate interpretations of physical theories exist. Often the underlying mathematics is terribly abstract and bears no obvious relation to familiar entities.
These ideas are well understood in the philosophy of science. Pessimistic induction suggests that entities we regard as ontologically "real" may not stand the test of time. This becomes particularly relevant to ethics if we consider fundamental physics to be marginally sentient, because determining what we're attributing sentience to becomes unclear. If particles are really fields defined over all space, this gives a somewhat different conception of the objects whose welfare we're caring about. While we're accustomed to thinking about aggregation of experiences by discrete agents, it becomes less intuitive (though perhaps ultimately more consonant with reality) to aggregate the "experiences" of fields or abstract mathematical objects. The utilitarian aggregation framework itself may need revision to accommodate these strange new perspectives.
Should negative-leaning consequentialists promote this issue?
In general, if you think something is morally relevant, it makes sense to promote moral concern for that thing so that others will get on board. But for negative-leaning utilitarians and other negative-leaning consequentialists, the question is more difficult, because majority opinion is not negative-leaning. For instance, if negative-leaning utilitarians think there's net suffering in physics, but the median voter of the future thinks there's net happiness, this could lead to policies that would horrify the negative-leaning utilitarian.
Following are some considerations on each side of the question.
Reasons to promote concern for physics:
- If the balance of happiness minus suffering in physics is incorrigibly negative even as judged by the median person, then greater concern for physics should generally lead to a reduction of suffering in physics, such as by reducing the size of the multiverse or by reducing the activity level of physics, assuming these are possible.
- Maybe promoting concern for physics has little long-term effect, because superintelligences would have converged on the idea anyway, but peeking ahead to the issue sooner can improve the wisdom of near-term altruism. (Of course, this can also mean improving the effectiveness of pro-physics ideologies.)
- If there are ways to reduce suffering in physics while keeping the total amount of physics constant, then these would benefit both negative- and positive-leaning utilitarians. However, I expect that changing the size of the multiverse can probably make a vastly bigger expected impact than changing the dynamics of a fixed-size piece of the multiverse, so I'm skeptical of how optimistic this point should make us.
- There's a general heuristic that more discussion of moral topics is better, especially since dialogue benefits many value systems.
- If suffering reducers discuss this issue first, they imbue the subsequent debate with a negative-leaning bias.
Reasons against promoting concern for physics:
- If the balance of happiness minus suffering in physics is positive as judged by the median person, or if it can be made positive by human efforts, then this should lead to policies that negative-leaning consequentialists oppose, such as expanding the amount of physics that exists, if possible. If negative-leaning consequentialists remain silent about the issue, it's less likely positive-leaning consequentialists will pick it up as an important topic and thus less likely they'll cause harm.
- Plausibly it's easier to create more physics than to eliminate some of what already exists. In such case, there might be more expected harm from physics creation than expected benefit from physics diminution.
- Many people are not utilitarians but instead value existence, complexity, beauty, etc. in their own rights. These kinds of values would tend to favor increasing the size of physics. Of course, people with these values would presumably feel this way whether or not the idea of suffering in physics is discussed, but maybe more discussion of suffering in physics would make more salient the fact that physics also contains these other values.
The net balance of these considerations is unclear to me. I think it's about equally likely that happiness outweighs suffering as that suffering outweighs happiness in fundamental physics -- as judged by a median person -- since aversion-like processes should be roughly balanced by seeking-like processes, at least a priori. If there are many ways to change physical dynamics, it becomes more likely that at least one of these would yield net happiness as judged by the median voter, which may suggest that it's more likely post-humans would increase physics. This is sad.
Some Buddhists would consider both aversive and appetitive physical processes as suffering, since all are forms of "striving". Schopenhauer thought of even electricity and gravity as "fundamental forces of the will" -- endless desire that causes suffering. Unfortunately, these negative-biased views are not common among the general population.
While panpsychism in general is a mainstream philosophical topic, there are few discussions of the ethical implications that panpsychism would entail. One piece that tackles the issue is "If Matter Matters: Navigating the Moral Implications of Panpsychism", which concludes that even if panpsychism is true, electrons and other components of basic physics don't warrant moral consideration.
Appendix: Physics disasters
High-energy particle accelerators -- such as the Relativistic Heavy Ion Collider (RHIC), sponsored by Brookhaven National Laboratory -- have in the past sparked concern over the following three potential physics disasters. A response report (henceforth called "RHIC report") discounted all three of these as virtually impossible on both theoretical and empirical grounds.
If colliding particles were compressed to extremely small sizes, they might create a gravitational singularity.
"Strange matter" is a form of quark matter hypothesized to reside in the high-pressure cores of neutron stars ("RHIC report", p. 11). It might be possible for strange matter to exist in zero-pressure environments, in which case the material would be called a "strangelet" ("RHIC report", p. 12). If a particle accelerator like RHIC produced a negatively charged, moderately stable strangelet through a heavy-ion collision ("RHIC report", p. 4), the strangelet would absorb surrounding atoms, fall to the center of the Earth, and compress the entire planet into a ball roughly 200 meters in diameter ("Will relativistic heavy-ion colliders destroy our planet?" by Dar et al., pp. 1-2).
In Catastrophe: Risk and Response, Richard Posner estimated a 10^-7 probability of a strangelet disaster due to RHIC over the next decade. But "RHIC report" (p. 5), based on high-energy collisions on the surface of the moon, finds the probability to be far smaller. One might contrive an "'ad hoc' hypothesis" to explain why we don't observe the effects of a strangelet disaster; Dar et al. examine one such hypothesis and find its probability to be on the order of 10^-49 (p. 9). But Adrian Kent, in "Problems with empirical bounds for strangelet production at RHIC" (2000), acts as a sincere "devil's advocate" and points out a number of other overlooked empirical questions, including whether positively charged strangelets might pose a hazard if they accidentally reached the sun by hitching a ride on spacecraft.
We may currently live in a "false vacuum" -- a region of higher energy density than the "true vacuum" ground state (Bostrom 2002, "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards"). If this false vacuum decayed into a true one, we would be engulfed by a bubble of destruction expanding outward at a rate asymptotically approaching the speed of light (Coleman and De Luccia 1980, "Gravitational Effects on and of Vacuum Decay", p. 3305):
Vacuum decay is the ultimate ecological catastrophe; in a new vacuum there are new constants of nature; after vacuum decay, not only is life as we know it impossible, so is chemistry as we know it. However, one could always draw stoic comfort from the possibility that perhaps in the course of time the new vacuum would sustain, if not life as we know it, at least some structures capable of knowing joy. [But because space-time in this new vacuum would be prone to frequent gravitational collapse,] This possibility has now been eliminated. [p. 3314]
Some fear that high-energy particle accelerators could trigger vacuum decay. Hut and Rees, "How stable is our vacuum?", argue that this risk is "completely negligible since the region inside our past light cone has already survived some 10^5 cosmic ray collisions at centre of mass energies of 10^11 GeV and higher."
The fact that life on Earth has already survived 4 * 10^9 years without any of these disasters does not in itself provide evidence that they are unlikely to occur. Tegmark and Bostrom, "Astrophysics: Is a Doomsday Catastrophe Likely?" (2005), take account of this observer-selection effect and calculate a new upper bound of 10^-12 for the annual probability of destruction due to all of the risks of particle accelerators.
There's always a chance -- especially in theoretical physics -- that our models are entirely wrong or that we neglect an important consideration (Bostrom 2002). It's hard to see how such risks could be reduced to zero:
In everyday research, a hypothesis which has perhaps at best a 1% chance of being correct is often dismissed as not worth pursuing, and a scenario which relies on two or three such hypotheses is very unlikely to be taken seriously. Such judgements have to be made: science would progress much more slowly if disproportionate attention was paid to unlikely hypotheses and implausible explanations. However, when trying to exclude the possibility of a global catastrophe, a 10^-2, 10^-4, or even 10^-6 probability of error is far from negligible, indeed alarmingly high. [Kent 2000]
Ord et al., "Probing the Improbable", discuss the problem of model uncertainty in such calculations more generally, using the Large Hadron Collider as an example.
Appendix: Hypercomputation
Hypercomputation is the speculative idea of infinite computation of various sorts. A number of theoretical models have been proposed, but many of them are ungrounded in physics. A few are potentially physically possible, though all seem to be stretches. Scott Aaronson's "NP-complete Problems and Physical Reality" argues against the workability of various physics tricks to efficiently solve NP-complete problems.
One scenario that isn't shown to be physically impossible is Mark Hogarth's proposal. Hogarth's idea involves an M-H spacetime, which allows a computer to travel along a world line with infinite proper time, sending a signal to an observer who travels along a different world line where only finite proper time elapses. Hogarth admits that all the examples of M-H spacetimes that he presents are physically dubious, violating a principle of Roger Penrose. Still, as "On the Possibility of Supertasks in General Relativity" argues, they're not completely ruled out in physical terms at this point.
Regardless, Hogarth's point is more general: that it's good to avoid jumping to conclusions about the way the universe works just because something seems unintuitive. The more I've studied math and physics, the more I've learned not to be weirded out by seemingly crazy ideas. I sometimes find that atheists reject religion for the wrong reasons: They say it "seems absurd" to imagine realms beyond the ordinary things that we know in daily life. A better reason to reject creationism is based on Occam's razor. The same lesson applies more generally when we're assessing how much we know about what's possible in the future. Our high degree of uncertainty about unknown unknowns pushes more in favor of robustly improving cooperation, wisdom, and altruism in the future in ways that don't make assumptions about particular physical or even mathematical frameworks.
Hogarth's lecture proffers an analogy with geometry in the late 1800s. Immanuel Kant had asserted that we could know Euclidean geometry was the "true" geometry just through a priori reasoning. Other mathematicians of the day held similar views. Then in the early 1900s, general relativity turned everything upside down. Hogarth cites a nice poem from Emily Dickinson:
Experiment escorts us last -
His pungent company
Will not allow an Axiom
An Opportunity -
My history with this topic
I first learned about panpsychism in summer 2006 from Yew-Kwang Ng's "Utilitarianism and interpersonal comparison". At the time, I wrote the following email reply to it:
If we assign panpsychism a nonzero probability, then might this not significantly affect our expected-utility calculations?
It would be hard to identify the smallest size of a chunk of matter at which consciousness exists (if consciousness exists in every atom, why do we--combinations of trillions of atoms [actually, about 7 * 10^27 atoms]--have only one consciousness that doesn't change when we shed those atoms?). But suppose chunks of matter bigger than, say, 1000 cm^3 have consciousness. Then, nonliving consciousness dominates all living consciousness by many orders of magnitude. (Assuming that such comparisons make sense when both living and nonliving consciousness might be infinite.)
Maybe one might make the argument that, since we have no idea what it's like to be a rock or a collection of water molecules, the expected change in the utility of nonliving matter that we could effect by any action would be zero. Of course, we might create new matter [...] or convert existing energy into matter and thereby create more nonliving utility. But we have no way of knowing whether the net utility of, say, newly created asteroids is positive or negative.
This idea that "even if matter is conscious, we don't know how to benefit it, so it cancels out of our calculations" is a common refrain. For example, Felicifia commenter Arepo wrote:
If plants (or grains of sand) do feel pain despite having no evolutionary impetus to do so, it seems impossible to predict how, why, in what form etc. [...]
When we have no information on which to go, I think it's a good epistemic principle to assume equal expected value to your ignorances.
But this logical move is too fast. It ignores the possibility that even a small amount of further reflection on the problem could reveal asymmetries in the happiness-vs.-suffering tradeoff. Moreover, axiologies that regard suffering as more significant than happiness already have an asymmetry from the get-go. Neglecting panpsychism may thus reflect motivated stopping. In my case, ignoring the panpsychist argument may have come from a feeling like, "Basic physics is too abstract for my empathy, so I don't want to go there." Now that I use a moral-pluralism approach to my ethical concern, this barrier to taking panpsychism seriously is weakened, because now I can safely contemplate suffering in basic physics without worrying that it's going to derail my focus on animal-like creatures that so obviously have moral importance.
That said, there are some better objections to the argument that a small chance of panpsychism being true leads basic physics to dominate in expected value.
- I think panpsychism is not "true" or "false" but is a moral attitude that we adopt toward physical processes. Hence, panpsychism is not a question of factual uncertainty but moral uncertainty. As a result, it's subject to the two-envelopes paradox for moral uncertainty: If we set human sentience as the baseline, then under panpsychism, fundamental physics cumulatively matters vastly more, but if we set fundamental physics as the baseline and reject panpsychism, then human sentience cumulatively matters infinitely more! Different axiologies don't have objective exchange rates among their values.
- The "panpsychist wager" is one of untold numbers of Pascalian wagers that we can concoct. Each wager tells us that some random consideration seemingly dominates everything else. Pascalian wagers in general have a fundamental brittleness problem and should never be taken at face value. Rather, one needs to keep account of how much one doesn't know and how much one's views will change with further discoveries.
What led me to take the panpsychist argument seriously is that it no longer seems like a random possibility drawn from a huge set of them but appears plausibly to follow from other world views that I entertain. In particular, during 2014 I updated toward a kind of panpsychism (different from the befuddled notion I discussed in my quotation from 2006 above). Then combining this with the most widely accepted scenarios about the future of the universe and taking scope sensitivity seriously leads to a compelling argument. My heart does not fully embrace it, since it feels cold and abstract compared against really terrible suffering I can identify with, but I do take the conclusions somewhat seriously.
Reactions to this piece
- Anders Sandberg: "Somebody think of the electrons!"
This piece was inspired partly by my own reading of neuroscience and philosophy of mind and partly by several conversations, including an email exchange with Anders Sandberg, in which Sandberg mentioned that Nick Bostrom has mused about whether the (perhaps unlikely) possibility of tiny consciousness in physics might dominate utilitarian expected-value calculations. Michael Moor suggested some clarifications to the text. Lukas Gloor pointed out that anthropic decision theory doesn't object to the existence of infinitely many Boltzmann brains in the future.
- One reader of this piece, Barry Kort, offered another analogy:
According to the Buddhist view, desire (or attachment) is the root of suffering.
Consider a transistor, which is a 3-layer sandwich of P-type and N-type (doped) silicon. The electrons naturally seek (e.g. "desire") to be in their lowest possible energy state in the lattice. But because of doping (in N-type materials), there are more electrons than there are low-energy resting (i.e. "home") states. The surplus electrons are then "homeless" wanderers, in search of a scarce vacancy in the lattice where they could come to rest. While it might seem poetic to ascribe emotional terms (like attachment, desire, suffering, or wandering homelessness) to electrons, the mathematical modeling and associated dynamics do fit the analogy.
- Dennett clarifies (p. 457) that by "content" he implies some notion of intentionality as connected to the intentional stance. I'm not familiar enough with Dennett's theory of content to say for sure, but maybe this fact weakens the claim that Dennett's characterization of thought runs throughout physics. That said, I think physical vs. intentional stances are themselves fuzzy; any physical process can have crude intentional interpretations. (back)
My reason for setting a finish line at 10^(10^56) years is to make the ratio as conservative as possible. In fact, when we consider any given universe in isolation, it has infinitely many years of vacuum fluctuations compared with a finite period of intelligence beforehand(?). However, if we watch the timeline of big bangs as they proceed along, the ratio is not infinite. I'm worried this might be one of those cases in cosmology where the relative measures depend on how the limit is taken.
In particular, if we follow along an intuitive timeline for how the universes progress, the ratio converges to roughly that of 10^200 versus 10^(10^56) as discussed in the text. To see this, suppose we start with one big bang, which then has one long period of 10^(10^56) years (ratio of big bangs to long periods afterward = 1/1). Then we get another big bang, and each of these universes has a long period of 10^(10^56) years. Now we've had 3 periods of 10^(10^56) years and 2 big bangs (ratio = 2/3). Now each of those two universes has a big bang and then all four universes have a period of 10^(10^56) years (ratio = 4/7). The ratios continue along as 8/15, 16/31, 32/63, ..., which converges to 1/2. Relative to the magnitudes involved, this is basically the same as a ratio of 1. Hence, if we take the limit this way, we get a more modest ratio like that used in the main article.
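The branching bookkeeping above can be checked directly: after n doublings there are 2^n cumulative big bangs and 2^(n+1) - 1 cumulative long vacuum periods, so the ratio tends to 1/2. A quick sketch:

```python
def bang_to_period_ratio(n):
    """Cumulative big bangs divided by cumulative long vacuum periods
    after n doublings of the universe-branching process."""
    return 2**n / (2**(n + 1) - 1)

print([round(bang_to_period_ratio(n), 3) for n in range(1, 5)])
# -> [0.667, 0.571, 0.533, 0.516], i.e. 2/3, 4/7, 8/15, 16/31
print(bang_to_period_ratio(50))  # very close to the limiting value 1/2
```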
- The time until the first Boltzmann brain is about 10^(10^50) years, while the time until the first new universe is 10^(10^56) years. This conservatively implies the following number of Boltzmann brains per universe:
10^(10^56) / 10^(10^50) = 10^(10^56 - 10^50), which basically equals 10^(10^56).
This astronomically exceeds any number of mind-moments that could be intentionally created by intelligent computation given our present understanding of physics. So even if you only care about humans, it seems that vacuum fluctuations still dominate in importance over biologically evolved or digitally computed beings. However, the proposed existence of such vast numbers of Boltzmann brains suggests that this picture of physics is wrong, since if it were correct, we should be a Boltzmann brain with disordered experiences. Thus, we can't draw any solid conclusions, and we should maintain copious model uncertainty. [Update, 2015: Anthropic decision theory, which I now think is the best view of anthropics so far developed, has no trouble accepting vast numbers of Boltzmann brains and so doesn't find anything wrong with this picture of cosmology. Hence, it does suggest that there may be infinite numbers of human experiences as Boltzmann brains in the future, which we might be able to affect if there's a way to radically change physics.] (back)
- I'm not completely sure if the proposal by Carroll eliminates all virtual particles after the Big Freeze, but that's my impression. (back)
- Alternatively, there might be both "real" and "virtual" copies of us, but if there are lots more virtual copies, then the impact of our algorithm's choices on the virtual minds may be more significant in aggregate because there are so many more of them. (back)