Summary
Traditional utilitarian aggregation treats minds as discrete individuals whose welfare can be summed. But when we disabuse ourselves of dualism, we see that minds are not separate units but rather blur together into a single, fuzzy whole. Moreover, big minds can be composed of many little minds, each with its own individual consciousness. I suggest a few proposals for amending utilitarian aggregation to handle this difficulty, but none seems particularly promising. For the moment, I continue to use the traditional, kludgey approach of individuating the most salient subsystems at each salient level of organization. The problem of fuzzy, nested objects/entities confounds not just utilitarianism but many other forms of consequentialism.
Introduction
Jeremy Bentham's utilitarianism was premised on the principle "each to count for one". Welfare economics similarly aggregates the preferences of individuals. But what is an individual?
Conventional Western thinking sees a single, discrete soul or unified consciousness within each person and sentient animal. An individual is then a unit inhabited by a soul or a unified subject of experience.
But Daniel Dennett's view of consciousness vanquishes the Cartesian theater—a single place where all consciousness happens. Rather, consciousness is distributed, with different events happening at different places and times and interacting to generate what we end up calling a stream of consciousness. Mind boundaries become fuzzy, since there's no principled separation between the agent and its environment.
Moreover, little minds might combine to form bigger minds, which can combine to form yet bigger minds. Hence, minds are no longer disjoint. The next section elaborates some cases where this problem arises.
Examples of nested minds
Burd, Gregory, and Kerbeshian discuss brain-tissue cultures and ask when a clump of neural tissue is sufficiently complex as to constitute "a person". If mental life emerges with tissue clusters smaller than a full brain, then our skulls might contain several "minds". Indeed, we can see this to some degree in split-brain patients.
Our brains are built from many subprocesses, some of which may be sufficiently complex and independent that they could deserve to be seen as little minds of their own ("demons" in Dennett's terminology). Global Workspace Theory (GWT) talks about "coalitions" of lower-level components that compete for attention. GWT usually calls these subprocesses "unconscious" and only declares global information broadcasts to be "conscious", but this separation of conscious from unconscious is artificial.
For functionalists, one of the clearest cases of minds within minds is the China brain thought experiment, in which the population of China implements, via telephone communication and the like, the right organized functional processes to create a collective mind. The Chinese citizens don't suddenly become unconscious during this process, so the result is clearly (at least) two layers of consciousness, one nested within the other.
I've discussed the China brain and related thought experiments elsewhere.
Difficulty with utilitarian aggregation
At first glance, nested minds might seem to pose little problem for utilitarian aggregation: We can count all the Chinese citizens and then also count their collective brain (perhaps with a different moral weight than we use to count an ordinary human). If nested minds were always cleanly separated like this, such a procedure would work fine. But in practice, "Mind boundaries are not always clear". If all of China is a brain, what if we omit one random Chinese person whose information processing isn't that essential? Does "China minus one person" also constitute another collective mind? Does it deserve its own ethical weight in addition to the full China mind?
Eric Schwitzgebel argues that "If Materialism Is True, the United States Is Probably Conscious". But if the USA as a whole is conscious, how about each state? Each city? Each street? Each household? Each family? When a new government department is formed, does this create a new conscious entity? Do corporate mergers reduce the number of conscious entities? These seem like silly questions—and indeed, they are! But they arise when we try to individuate the world into separate, discrete minds. Ultimately, "we are all connected", as they say. Individuation boundaries are artificial and don't track anything ontologically or phenomenally fundamental (except maybe at the level of fundamental physical particles and structures). The distinction between an agent and its environment is just an edge that we draw around a clump of physics when it's convenient to do so for certain purposes.
My own view is that every subsystem of the universe can be seen as conscious to some degree and in some way (functionalist panpsychism). In this case, the question of which systems count as individuals for aggregation becomes maximally problematic, since it seems we might need to count all the subsystems in the universe.
Possible solutions
Sum over all sufficiently individuated "objects"
The standard approach to the aggregation problem is pretty hacky (a toy sketch in code follows the list):
- Individuate seemingly distinct levels of computational organization (e.g., Chinese citizens are one level and the collective brain is another level).
- At each level, individuate systems that seem relatively unified as distinct individuals (e.g., distinguish each Chinese person from each other Chinese person).
- Weigh each individual by how sentient it appears.
- Sum over all individuals across all levels of organization.
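To make the procedure concrete, here is a minimal Python sketch of the four steps. The level names, sentience weights, and welfare numbers are made-up illustrative assumptions, not outputs of any principled method.

```python
# Toy sketch of the "individuate and sum" procedure. All names, sentience
# weights, and welfare values below are arbitrary illustrative assumptions.

# Step 1: individuate levels of organization (here, citizens vs. the
# collective China brain).
levels = {
    "citizens": [
        # Step 2: individuate systems at this level.
        # (name, apparent sentience weight, current welfare)
        ("citizen_1", 1.0, +0.5),
        ("citizen_2", 1.0, -0.2),
        # ... and so on for the rest of the population
    ],
    "collective_brain": [
        ("china_brain", 0.3, +1.0),  # the 0.3 weight is a pure guess
    ],
}

# Steps 3-4: weigh each individual by its apparent sentience and sum over
# all individuals across all levels.
total_welfare = sum(
    weight * welfare
    for individuals in levels.values()
    for (_name, weight, welfare) in individuals
)

print(total_welfare)  # 1.0*0.5 + 1.0*(-0.2) + 0.3*1.0 = 0.6
```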
This procedure is sensitive to how many layers of abstraction are envisioned (step 1) and to the artificial boundaries of what things count as individuated objects (step 2). For instance, maybe each cell of an animal counts and each organ of the animal counts. Do pairs of cells also count? Do individual muscle fibers count in addition to the whole muscle? Does each chamber of the heart count in addition to the whole heart? Each layer of skin in addition to the whole skin? Each lung or just the pair? Each continent or just the whole Earth? Each force that acts on an object or just their sum? Each subroutine or just the whole program?
The problem with utilitarian aggregation arises because minds are both fuzzy and nested. The approach of individuating objects tries to eliminate the fuzziness of minds, though still allowing them to be nested. The next proposal discussed below instead aims to eliminate the nestedness of minds, since if minds must be disjoint, then the number of possible minds is vastly reduced.
Only count smallest or largest scales
We could try to sum the smallest systems, ignoring bigger systems. For instance, only sum the individual Chinese people and ignore their collective brain. But this is wrong, because the collective brain has a consciousness different from each of the Chinese people.
The opposite approach is to count the collective brain and ignore the individual Chinese citizens, but this is again wrong, because the Chinese citizens don't lose consciousness just because they're implementing the collective brain. (Schwitzgebel makes a similar point in his paper's section on "Anti-Nesting Principles".)
Also note that in practice, the smallest level of minds is not individual Chinese people but perhaps subsystems within their brains, individual neurons, individual atoms, or even individual superstrings. And the largest conscious mind is not just China but the whole multiverse taken together.
Sum over all systems
A brute-force approach could be to sum over all systems, i.e., all subsets of the whole universe! This exponentially explodes the computational complexity of the utilitarian analysis, since given a set of size N, its power set has size 2^N. So for instance, if we restricted attention to just the observable universe and just to atoms as the basic building blocks, ~10^80 hydrogen atoms would imply ~2^(10^80) subsystems within this universe.
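As a small illustration of how fast this blows up, here's a toy Python snippet; the 20-"atom" universe is an arbitrary assumption chosen just to keep the enumeration feasible.

```python
from itertools import chain, combinations

def all_subsystems(parts):
    """Enumerate every subset (the power set) of a collection of parts."""
    return chain.from_iterable(
        combinations(parts, r) for r in range(len(parts) + 1)
    )

# Even a toy "universe" of just 20 atoms has 2^20 = 1,048,576 subsystems.
print(sum(1 for _ in all_subsystems(range(20))))

# For ~10^80 atoms, the count would be ~2^(10^80): hopeless even to
# enumerate, let alone to evaluate each subsystem for sentience.
```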
It also seems odd to count disconnected subsets of the universe—e.g., a system composed of half of my brain and half of the brain of someone on the other side of the planet.
Sum over all spatiotemporally connected systems
Maybe we could restrict ourselves to spatiotemporally connected subsets of the universe. But even then, the set of systems remains vast.
Counting all connected subsets also retains a lot of redundancy. For instance, there's a system consisting of my body, but there's also a system consisting of my body plus one cubic centimeter of air in front of my forehead, and another system consisting of my body plus a trapezoidal-shaped region of air in front of my left pinky, and astronomically more such permutations. Most of these represent roughly the same conscious individual because the contribution of the air molecules to the system is minimal. Intuitively, these should all collapse together into just the most important part of the system, or else the systems that include trivial air molecules should count vastly less in utilitarian calculations.
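To give a feel for how many connected systems remain even after imposing this restriction, here's a toy Python sketch that counts the connected regions of a tiny 3x3 grid of "space cells"; the grid and its four-neighbor adjacency are illustrative assumptions standing in for real spacetime.

```python
from itertools import chain, combinations

# A toy 3x3 grid of "space cells"; two cells are adjacent if they share an edge.
cells = [(x, y) for x in range(3) for y in range(3)]

def is_connected(subset):
    """Check whether a nonempty set of grid cells forms one connected region."""
    remaining = set(subset)
    frontier, seen = [next(iter(remaining))], set()
    while frontier:
        x, y = frontier.pop()
        if (x, y) in seen:
            continue
        seen.add((x, y))
        for neighbor in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if neighbor in remaining:
                frontier.append(neighbor)
    return seen == remaining

nonempty_subsets = chain.from_iterable(
    combinations(cells, r) for r in range(1, len(cells) + 1)
)
print(sum(1 for s in nonempty_subsets if is_connected(s)))
# A couple hundred connected regions out of 511 nonempty subsets, and most
# of them differ from one another by only a cell or two around the edges.
```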
Sentience templates
Utilitarian individuation is a two-step process: First identify systems that seem relatively unified, and then assess their degrees of sentience. Maybe we could combine the two steps (a toy sketch in code follows the list). That is:
- Consider each kind of sentience—e.g., happiness of a specific type, fear of a specific type, seeing blue in a specific way. What general "template" of functional processing does each involve?
- Consider all subsets of the universe, trying to pattern-match that template to that subset.
- Take the smallest possible subsets of the universe that implement the template to be the individuals. So, for instance, if one template was for my body, then this approach would ignore the joke system "(my body) + (air molecules in front of me)" because that's not a minimal instance of the template.
- End up with, for each template, a list of systems that best fit the template. Aggregate over those. (Alternatively, measure the degree of match of each possible system to the template, and sum each possible system weighted by its degree of match.)
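Here's a minimal Python sketch of this template-matching idea. The "fear template", its component names, the match-scoring rule, and the minimality criterion are all illustrative assumptions rather than a worked-out proposal.

```python
# Toy sketch of the "sentience template" approach: systems are frozensets of
# named parts, and a template is a function scoring how well a system matches.

def match_fear_template(system):
    """Hypothetical template: degree to which a system implements 'fear'."""
    # Stand-in for matching a functional template; here we just check for
    # made-up components.
    needed = {"threat_detector", "avoidance_response"}
    return len(needed & system) / len(needed)

def minimal_matches(systems, template, threshold=1.0):
    """Keep systems that fully match and have no fully matching proper subset."""
    full = [s for s in systems if template(s) >= threshold]
    return [s for s in full
            if not any(t < s and template(t) >= threshold for t in full)]

# Toy "universe": a brain subsystem, the same subsystem plus irrelevant air,
# and an unrelated rock.
systems = [
    frozenset({"threat_detector", "avoidance_response"}),
    frozenset({"threat_detector", "avoidance_response", "air_molecules"}),
    frozenset({"rock"}),
]

for s in minimal_matches(systems, match_fear_template):
    print(sorted(s))  # only the minimal fear-implementing system survives
```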
This approach seems somewhat promising, though for non-parochial views of sentience, the number of templates may be astronomically high—in the limit, maybe as high as the number of subsets of the universe. We might identify a few crucial processes that we think constitute phenomenal experience and just use those. Indeed, this is what many theories of consciousness do. But there's a risk of leaving out large swaths of morally relevant computation.
That said, the problem of having lots of templates also bedevils the other individuation approaches because they too need to assess the degree of sentience of each system.
Dispense with individuation altogether
The holy grail would be to avoid looking at subsets of the universe in the first place—if we could make this work in a way that accords with our intuitions. But we're so used to thinking in individuated terms that our intuitions are likely to rebel against a non-individuating approach. Individuation is also important for assessing utility functions, since a utility function emerges from the preferences of a unified, individuated physical process.
There are trivial ways to dispense with individuation. For instance, we could just sum the universe's total number of elementary particles, total kinetic energy, or some other physical quantity. But it's hard to see how this has much direct ethical import.
Ultimately, we want some more complex function that maps from universes to (dis)utilities in a way that doesn't decompose the universe into subsets. Maybe this is an impossible task for any welfarist theory of value, since welfare seems to require assessing the feelings and preferences of individual systems.
Other consequentialisms also face nesting issues
Utilitarianism is not alone; the individuation and aggregation problems also confound most other consequentialisms.
For example, a paperclip maximizer might cram little paperclips inside bigger paperclips. And what if you consider the subset of a paperclip that omits its ends? Surely that could still hold paper too? What about an inner cylindrical ring within the paperclip? That also has a paperclip shape. What about the system consisting of a paperclip plus the dust on it? Or a paperclip plus a few molecules of the table on which it rests? Is a clothes hanger a paperclip? How about two human fingers? Three human fingers? How about two magnets placed on opposite ends of a piece of paper? What about lots of little magnets used for that purpose? Is any subset of the little magnets a paperclip as long as there's at least one magnet on each side of the papers? And what counts as paper? Could a few plant fibers suffice? In that case is a tree a paperclip? And every ring of the tree also a paperclip? And air molecules that surround the tree? And the outer space surrounding the Earth?
As another example, suppose we wanted to maximize love. The complex concept of love includes many components, like (1) attraction, (2) doing nice things, (3) increased energy levels, etc. Trivial instances of these can be seen throughout the universe, like (1) when magnets stick together, (2) when sunlight warms a chilly animal, and (3) when the wind spins a whirligig. And love can nest, such as when electrons are attracted to protons (level 1), within one of two neurons that have recently strengthened a synapse (level 2), within the brain of a person who's in love (level 3), within a country that has recently strengthened bonds with its allies (level 4), within a planet that continues to be pulled toward its sun (level 5), etc.
Consequentialist aggregation becomes wickedly complicated unless an extremely precise specification of value is imposed. But most consequentialist values—happiness, love, fairness, knowledge, etc.—are inherently fuzzy and would fail to capture our intended meanings if formalized both precisely and concisely at the same time.
Conclusion
I don't know the right answer. I hope that the standard, hacky approach of individuation within the most salient levels of organization can serve as a good enough approximation of whatever more elegant approach(es) I would eventually endorse upon further reflection.
Performing any exact computations seems intractable for now, but we may still be able to wave our hands at general principles. For instance, if animal-like agents seem vastly more sentient than other kinds of systems, we can mostly focus our evaluations on how our actions affect animal-like agents.
In the long run, though, our best bet is to punt these questions to future generations. What we can do now is push the future in better directions, setting the stage for our more intelligent descendants to explore these puzzles further.
Maybe our descendants would reject our entire approach as a misguided attempt to patch inherently confused moral notions of individual experiences. But I feel very strongly that, e.g., common-sense instances of animal suffering are terrible; an approach that's too abstract might lose sight of this fact. So I'm not sure to what extent I would regard overhauls to utilitarian aggregation as moral progress vs. a corruption of my values.
Acknowledgements
Some of these ideas may have been influenced by early drafts of Caspar Oesterheld's "Formalizing Preference Utilitarianism in Physical World Models". Jonah Sinick and David Pearce also helped inspire parts of this paper.