by Brian Tomasik
First written: 30 Dec. 2014; last update: 27 Apr. 2017

Summary

When we notice ourselves to be conscious, we do so by specific neural processes of self-reflection. Any emotion that we can imagine in a first-person way is an emotion that we are reflecting upon. Hence it's natural to assume that the only emotions that matter morally are those upon which someone reflects. This gives rise to a higher-order account of consciousness. Such a view is problematic, because from a neutral standpoint, it's not clear why self-reflective algorithms are so special, whether overly simple self-reflection matters, and why the operations being reflected on don't also have moral significance in their own right. I think self-reflection can be one important consideration among many when we're assessing a mind's degree of consciousness, but excluding all cognitive processing that isn't noticed by some sufficiently complex reflection algorithm seems overly parochial.

Introduction

It's clear that consciousness must reduce to physical processes, probably to patterns of neural activity. This is because we observe exquisite correlations between brain dynamics and consciousness in humans, and given that it's no less strange for consciousness to be neural activity than for it to be anything else, it seems very likely that it is just neural activity.

But this leaves unanswered the question: Which neuronal operations give rise to consciousness? It seems that most of our brain operates in the dark, so what makes the objects of our attention "light up" with a distinctive feeling?

Follow the neurons

In order to explain what happens in politics, it's helpful to follow the money back to campaign donors. Likewise, in explaining our mental lives, we should follow the neurons. When you notice to yourself that "I'm conscious", what's going on in your brain to make that thought happen? Presumably you have some concept of what consciousness and raw feelings are, and this combines with a neuronal cluster representing yourself, perhaps along with your current processed data input stream. I'm just speculating about the exact implementation details here, but the details aren't crucial. There must be some specific neural steps that implement your thought that you're conscious, and these steps explain why you think you're conscious.

"But", you might protest, "that sequence of neural steps doesn't explain why my consciousness is lit up in a special way. Why isn't that affirmation of self-consciousness happening in the dark, the way it would for a robot?" Of course, this thought also is produced by some sequence of neural steps, which we could trace in your brain if we had high-resolution scanning devices. What else could that raw feeling be besides neural activity? If it were anything else, wouldn't consciousness be just as strange?

You can't think outside your brain; any confusions you have about consciousness are implemented in the physical machinery that seems so unlike the vivid phenomenology that you experience.

Eliezer Yudkowsky makes the same point:

If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of "I am aware" and "My awareness is separate from my thoughts" and "I am not the one who speaks my thoughts, but the one who hears them" and "My stream of consciousness is not my consciousness" and "It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior."

You can even say these sentences out loud, as you meditate.  In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.

This certainly seems like the inner listener is being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.

Am I conscious now?

In Zen and the Art of Consciousness, Susan Blackmore's first question is "Am I conscious now?" Her answer is "Of course I am. Yes, I am conscious now." If we follow the neurons, we can see that this response is some kind of self-reflection operation in Blackmore's brain that, when activated, always produces a "yes" answer. Blackmore has compared this to always seeing the light on in the fridge: "You may keep opening the door, as quickly as you can, but you can never catch it out - every time you open it, the light is on." Kevin O'Regan calls this the "refrigerator light illusion".

Michael Graziano's attention schema theory proposes something similar:

If you are attending to an apple, a decent model of that state would require representations of yourself, the apple, and the complicated process of attention that links the two. [...]

When you look at the colour blue, for example, your brain doesn't generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. [...] The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information.

The above views might be considered narrative-interpretative theories of consciousness. The most famous such theory is Daniel Dennett's multiple drafts model, in which consciousness is constructed from pieces of brain activity on an as-needed basis. Like executing queries against a distributed and constantly changing database, we compute and combine the information that's required for our current object of attention. Various "probes" (e.g., verbal questions, action choices) may fix a snapshot of some contents of our minds, but snapshots are constructed on the fly and needn't be fully consistent with one another.a
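
To make the database analogy concrete, here's a tiny Python sketch of my own (it's not Dennett's model itself) in which "probes" construct snapshots from a store that keeps being revised:

    # Independently revised "drafts" of the current percept.
    drafts = {"color": "red", "shape": "circle"}

    def probe(keys):
        """Construct a snapshot from whatever the store contains right now."""
        return {k: drafts[k] for k in keys}

    snapshot_1 = probe(["color", "shape"])  # {'color': 'red', 'shape': 'circle'}
    drafts["color"] = "orange"              # a later revision of the same content
    snapshot_2 = probe(["color"])           # {'color': 'orange'}

    # The snapshots describe the "same" percept yet disagree, and there's no
    # master copy that settles which version was "really" experienced.
    print(snapshot_1, snapshot_2)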

Was I conscious a moment ago?

Blackmore's second question is: "What was I conscious of a moment ago?" I'll revise it slightly: "Was I conscious a moment ago?" Even if we agree that there's a specific kind of self-reflection process that corresponds to active thinking about our consciousness, there also seems to be a kind of implicit consciousness that we carry around as we perform our daily tasks. 99.9% of the time, we're not actively reflecting on our consciousness, and yet we seem still to feel an implicit what-it's-like-ness.

...at least, that's what we think when introspecting on the matter. The only way we can tell now is by inspecting memories, and when we inspect memories, we reconstruct a mental image of ourselves being self-aware at the moment that we're recollecting. But this could just be a Graziano-style operation in which our brains now claim they were conscious in the past, because they can access the past data. Data that weren't globally broadcast in the past didn't remain in memory, so we claim we weren't conscious of the non-broadcast data, in agreement with the observations of global-workspace theory; but even for the data that were globally broadcast, this retrospective memory may be the first time we actively ask whether we were conscious of them. Moreover, even if we have false memories of some fictional event, we can clearly see in our mind's eye that we were conscious when it happened. Hence, recollection of having been conscious in the past is not foolproof.

Reasoning of this type may have been the motivation for Eliezer Yudkowsky's comment:

Maybe humans are conscious only while wondering about whether or not we're conscious, and so we observe that we're conscious each time we check, but at all other times our experiences are of no ethical value.

(Of course, it's also possible that some baseline self-reflective operations of noticing that you're conscious are actually running all the time in some implicit, non-verbal, non-distracting way.)

I think Yudkowsky's view here is interesting, and it seems to me like the main plausible contender to the moral standpoint advanced in the current piece. If one insists that "the act of thinking (verbally or non-verbally) that oneself is conscious" is essential for consciousness, then even humans may be "unconscious" much of the time, and it's unclear to what extent non-human animals would be conscious, depending on what degree of complexity is involved with "telling oneself (possibly non-verbally) that oneself is conscious". On the other hand, if we think human experience still matters even in "flow" states where we lose track of our own minds and are merely taking in the world without noticing that fact, then the arguments in this piece seem to apply.

Higher-order theories

The narrative-interpretative theories discussed above see consciousness as a construction in which our explicit-thought machinery makes sense of our mental events. This narration is a form of self-reflection. However, the manner of self-reflection differs somewhat from conventional higher-order theories of consciousness, which propose that consciousness consists in reflection on lower-level brain processing in general, rather than reflection on the specific question of whether I'm conscious right now. Still, the general idea of self-reflection as the crucial component seems shared among these approaches, and in the remainder of this piece I talk about "higher-order theories" loosely, meaning views according to which self-reflection of some sort is required for consciousness. My critiques may not apply to every specific higher-order view.

The following diagram illustrates the general idea of higher-order theories:
[Figure: depiction of higher-order processing]

If the only way we tell ourselves that we're conscious is through some self-reflective higher-order thought about our minds, then isn't higher-order reflection consciousness itself? Several philosophers think so. But I feel this assessment is too hasty, for the reasons that follow.

Why higher-order theories are inadequate

Oversimplified instances of self-reflection seem to matter less

We don't know exactly what these self-reflective operations are in which we tell ourselves that we're conscious, but suppose we had an algorithm for them in human brains. Suppose we then, step by step, took away pieces of brain functionality not essential to self-reflection, like removing blocks from a Jenga tower. Would the result still be conscious? Suppose we simplify the self-reflection algorithm itself by compressing some complicated steps into a slightly simpler step. Is that brain still conscious? As we continue stripping away details, does consciousness become extinguished at some point? In the limit of extreme simplicity, is this Python agent conscious when it prints the string "Does it feel like something to see red?" and then prints a boolean answer computed by a rudimentary but still self-reflective function?
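
For concreteness, here's a minimal sketch of the sort of toy agent I have in mind; the class and function names are my own illustrative choices rather than the exact program linked above:

    # A deliberately trivial "self-reflective" agent. The names and details are
    # illustrative assumptions, not the linked code.
    class TinyAgent:
        def __init__(self):
            # The agent's entire "mental state": a single record of its input.
            self.current_input = "red"

        def reflect(self):
            """A rudimentary higher-order step: inspect one's own state and
            report whether anything is being represented at all."""
            return self.current_input is not None

    agent = TinyAgent()
    print("Does it feel like something to see red?")
    print(agent.reflect())  # prints True -- but does that make it conscious?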

I think the most plausible response here is to hold that consciousness comes in degrees depending on the complexity of the self-reflective operations and the complexity of the mind being reflected upon. A complex self-reflective algorithm operating on an empty brain, with just enough fake inputs to make the self-reflection work, doesn't seem legitimately conscious to anywhere near the same degree that an actual brain is.

"Conscious" reflection in human brains involves broadcasting thoughts throughout the brain so that other brain components can access the information. Insofar as this broadcasting process loops in many brain components, there can't be a richly complex self-reflection broadcast without also having richly complex processes that receive the broadcast. Of course, some parts of the brain are less connected to brain-wide broadcasts, so if you have intuitions favoring a higher-order approach, you might give those disconnected brain modules much less weight.

What counts as self-reflection?

David Rosenthal's higher-order thought theory specifically excludes conscious inferential reasoning from the set of thoughts that can make unconscious processing become conscious. For instance, if you learn about your unconscious brain via psychoanalysis, Rosenthal would say that this doesn't make those low-level operations conscious. Only direct access within the brain can make unconscious processes "light up". But why? Many of our "unconscious" thoughts are also forms of inference, such as the brain's efforts to resolve optical illusions or read misspelled words. Where is the boundary between this kind of inference and "conscious" deduction?

Consider another example of higher-order reflection on "unconscious" processing: Seeing your brain light up in an fMRI. There's no fundamental difference between neurons sending signals internally versus fMRI images sending photons to your retina. Both are just information processing of various sorts. And if there is no fundamental difference, would this mean that those who deny the consciousness of, say, fish should believe that a fish (or, rather, the fish+observer system) becomes conscious when neuroscientists inspect the fish's real-time brain functioning in sufficient detail? Or are these higher-order thoughts by the experimenters about the fish's brain not of the right type to generate consciousness according to higher-order theory?

As before, higher-order theorists could answer these questions by adopting a sliding-scale approach to self-reflection, in which what counts as a higher-order thought comes in degrees based on context. The resulting higher-order theories would still be workable, but we might question whether insistence on the necessity of self-reflection is overly dogmatic. Depending on how one defines self-reflection, trivial instances of it occur all the time. In any physical process where event A influences event B, we could call event B a "higher-order thought" about event A.

The extended mind

There's no principled distinction between oneself and the external world; we just cluster some parts of physics together because they're relatively more self-contained. The cells in your body tend to move together as you walk, whereas the clouds above you may be moving in the opposite direction, so it's useful to talk about the cells in your body as being "part of you", while the clouds aren't. But this distinction is fuzzy. For example, suppose it starts to rain, and you ingest some raindrops or inhale moisture-rich air. Photons reflecting off the clouds may enter your eyes, triggering brain processing. And so on.

If there's no sharp separation of oneself from the outside world, then why can't everything happening in the outside world count as part of your "extended mind"? And in that case, doesn't lower-order brain activity (e.g., early stages of visual processing) constitute a higher-order thought about what's happening in the external environment? Let's take an example. Ordinarily we might envision higher-order thoughts like this:

First-order processing: brain visually identifies a cloud in the sky.
Second-order thought: think to yourself, "I see a cloud".

But if the cloud is part of your extended mind, then we could reconceptualize the situation like this:

First-order processing: cloud moves through the sky.
Second-order processing: brain visually identifies a cloud in the sky.
Third-order thought: think to yourself, "I see a cloud".

If anything above first-order thinking counts as conscious, then early-stage visual processing is conscious according to the latter framing. And of course, we could construct a framing in which the cloud's movement is itself a higher-order thought about earlier events.

What's the boundary between first- and higher-order processing?

My sense is that the distinction between first-order and higher-order thoughts may blur as we look at the brain more closely. Of course, there will still be distinctions like "the primary visual cortex mostly processes more raw data than the third visual complex does". But a clear separation between low-level and high-level processing is likely to be elusive and unhelpful. Ultimately, there's just lots of complex, interacting stuff going on, which we can look at from a variety of points of view.

For instance, suppose we think that early stages of visual processing are unconscious, while later stages are conscious. But what about when this later visual processing feeds back on earlier stages of visual processing? Does the "consciousness" of the higher processing get transferred to the purportedly "unconscious" lower-level processing? And what happens when later stages of visual processing influence other supposedly unconscious events, like hormone release? More generally, how do we define "higher" processing in a brain where each part is connected to tons of other parts in a messy network of interactions? As an analogy, where in the World Wide Web does "higher-order processing" occur, in contrast to lower-order processing?

Dennett (2016) articulates the gist of my feeling here when he writes about a somewhat different topic (pp. 69-70):

I submit that, when we take on the task of answering the Hard Question [namely, ‘And then what happens?’], specifying the uses to which the so-called representations are put, and explaining how these are implemented neurally, some of the clear alternatives imagined or presupposed [...] will subtly merge into continua of sorts; it will prove not to be the case that content (however defined) is sharply distinguishable from other properties, in particular from the properties that modulate the ‘reactions and associations evoked’. [...] The answer may well be that these distinctions do not travel well when we [...] get down in the trenches of the scientific image.

Maybe we could define higher-order thoughts based on specific functions, such as language generation. Language is somewhat localized in the brain, so very roughly separating (this aspect of) higher-order thought from other brain processing would not be completely misguided. Still, I would question why we're so insistent on separating higher-order thoughts in the first place, rather than taking a more holistic perspective on the messy biology of brains.

Which computational systems make "judgments"?

We might hold the view that "we have a conscious feeling of X" when "we judge that we have feeling X". For example, suppose I see a shadowy figure in the dark and am alarmed by it. We might say that my conscious feeling of fear starts when my brain judges to itself that "I'm feeling afraid."

But where do these "judgments" happen? One suggestion could be that judgments happen when I tell myself, via verbal inner monologue, that something is the case, e.g., by thinking to myself, "I'm scared of that shadowy figure." But I think most people would agree that verbal reports aren't necessary for conscious experience, since we might imagine a human who was raised by wolves and never learned language but who otherwise had a brain very similar to mine. And there are many experiences that I seem to be conscious of without ever verbalizing them.

So, if we're looking for the stage of brain processing where "judgments" occur, we should look at some stage prior to our brains producing verbal reports. Suppose we identify some such stage. Presumably it will consist of some configuration of our brain state and/or some set of processing steps.

But then we can ask: Do subsystems in our brains also make judgments? For example, before my whole brain became aware of the frightening shadowy figure, maybe a subset of my brain processed the visual input and triggered "alarm bells" to the rest of the brain. Could those "alarm bell" signals be considered a judgment by the danger-detection subsystem of my brain? That judgment wouldn't be expressed in words; rather, it would be expressed in a more abstract language of neural activation. But it would still be a "statement" of sorts that one entity was passing along to others.

If subsystems of our brains can also make judgments, how about individual neurons? For example, a nociceptive neuron could be seen as making the very simple "judgment" that "there's some tissue-damaging stimulus here".

And so on. Even if we try to cash out "consciousness" in terms of "brain's judgments", we find that there's not a principled way to distinguish judgments from non-judgments. Rather, different systems have different degrees of complexity in the "judgments" that they make.
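
To make this gradation concrete, here's a rough Python sketch of my own contrasting a single neuron's "judgment" with a slightly more elaborate subsystem "judgment" (the cues and weights are made up for illustration):

    def nociceptor_judgment(stimulus_intensity, threshold=5.0):
        """A single neuron's 'judgment': fire iff the stimulus exceeds a threshold."""
        return stimulus_intensity > threshold

    def danger_subsystem_judgment(visual_features):
        """A subsystem's 'judgment': a crude weighted combination of cues that,
        if strong enough, broadcasts an alarm to the rest of the system."""
        weights = {"looming": 2.0, "dark_shape": 1.5, "sudden_motion": 1.0}
        score = sum(weights.get(f, 0.0) for f in visual_features)
        return score > 2.5

    print(nociceptor_judgment(7.2))                                    # True
    print(danger_subsystem_judgment(["dark_shape", "sudden_motion"]))  # True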

Rothman (2017), describing Daniel Dennett:

He regards the zombie problem as a typically philosophical waste of time. The problem presupposes that consciousness is like a light switch: either an animal has a self or it doesn’t. But Dennett thinks these things are like evolution, essentially gradualist, without hard borders. The obvious answer to the question of whether animals have selves is that they sort of have them. He loves the phrase “sort of.” Picture the brain, he often says, as a collection of subsystems that “sort of” know, think, decide, and feel. These layers build up, incrementally, to the real thing. Animals have fewer mental layers than people—in particular, they lack language, which Dennett believes endows human mental life with its complexity and texture—but this doesn’t make them zombies. It just means that they “sort of” have consciousness, as measured by human standards.

From a different perspective, we actually care about first-order computations

Consider why we associate consciousness with moral significance in the first place. Presumably it's because when we explore what we care about, we do so by imagining ourselves having emotional experiences. You might, for instance, think of burning your hand and then screaming in pain as you realize what happened. Since this image involves reflection on an emotion, you may conclude that self-reflection on emotions is what you care about.

But in a different sense, our brain algorithms "actually" care about the emotional experiences themselves -- those "in the dark" brain operations upon which we imagine introspecting. Our implicit behavior is tuned to optimize actual rewards, not just noticed rewards. Carl Shulman makes this point in response to Yudkowsky:

The total identification of moral value with reflected-on processes, or access-conscious (for speech) processes, seems questionable to me. Pleasure which is not reflected on or noticed in any access-conscious way can still condition and reinforce. Say sleeping in a particular place induced strong reinforcement, which was not access-conscious, so that I learned a powerful desire to sleep there, and not want to lose that desire. I would not say that such a desire is automatically mistaken, simply because the reward is not access-conscious.

Shulman adds that we may feel that "the computations that matter are the sensory processing and reinforcement learning, not the [higher-order thoughts]. The action-guiding, conditioning computations that the reflections are about."

Brandon Keim wrote about insects:

The nature of their consciousness is difficult to ascertain, but we can at least imagine that it feels like something to be a bee or a cockroach or a cricket. That something is intertwined with their life histories, modes of perception, and neurological organisation. For insects, says Koch, this precludes the reflective aspects of self-awareness: they don’t ponder. Rather, like a human climber scaling a cliff face, they’re immersed in the moment, their inner voice silent yet not absent. Should that seem a rather impoverished sort of being, Koch says it’s worth considering how many of our own experiences, from tying shoelaces to making love, are not self-conscious. He considers that faculty overrated.

Why are representations so important?

Higher-order theories seem to place special importance on mental representations: Having internal models of external objects and of one's own thoughts and behaviors regarding those objects.b But why is representation so special?

In a famous paper, Rodney A. Brooks wrote:

When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.
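
To make Brooks's slogan concrete, here's a minimal Python sketch of my own (not Brooks's code) of a reactive controller that consults the world directly instead of an internal map:

    import random

    def sense_obstacle_ahead():
        """Stand-in for a real sensor reading; simulated randomly here."""
        return random.random() < 0.3

    def reactive_step():
        # No stored world model is consulted or updated -- the decision depends
        # only on what the world itself reports at this moment.
        return "turn" if sense_obstacle_ahead() else "move_forward"

    for _ in range(5):
        print(reactive_step())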

A cognitive representation is a simplistic model of complex real-world phenomena. But why can't the real-world phenomena themselves be an ethically relevant "mental representation" (where this representation exists in the "extended mind" of an organism rather than within its neurons specifically)? Indeed, the real world is a vastly more sophisticated "model" than what occurs within an organism's brain.

Suppose a bacterium is attracted in a certain direction by chemical cues. A higher-order theorist would say that the bacterium isn't conscious because it doesn't have complex mental representations. But the physical world in which the bacterium lives does have such a representation -- namely, the bacterium itself, in all its elaborate detail. Why can't we regard the world as a "brain" in which the bacterium is "cognitively" represented? And in that sense, the bacterium's behavior is indeed "conscious" in the mind of the world itself, according to this strained higher-order view.

And if we interpret the world itself as its own representation, then other organisms who interact with the world can be said to be "using" that "cognitive" representation. For example, a paramecium that eats bacteria uses a representation of the world (namely, the world itself) when seeking out and devouring its food.

The following figure illustrates this point:

If the representational model that exists within a thought bubble is a "conscious" thought, then why can't we put the world itself (the best model of all) inside a thought bubble and thereby consider it to be a conscious thought?

Self-reflection looks like other brain processing

What makes the self-reflection algorithm so special anyway? I expect that up close, the steps would look pretty simple and uninspiring -- much like many other types of brain processing. Shulman notes this as well:

I don't see [Yudkowsky] presenting great evidence that the information processing reflecting on sense inputs (pattern recognition, causal models, etc) is so different in structure [from other types of information processing].

The higher-order view looks to me like it's falling into a dualist fallacy of supposing that self-reflection (beyond a certain minimal complexity) "really is" conscious, while everything else is not. Why else would it so privilege self-reflection, which is just one algorithm among many?

Blackmore (2012):c

all brain events entail the same kinds of processes – waves of depolarisation travelling along axons, chemical transmitters crossing synapses, summation of inputs at cell bod[i]es and so on. What could it mean for some of these to be “giving rise to” or “creating” conscious experiences while all the rest do not? If the hard problem really is insoluble or meaningless then shifting it to apply only some brain events does not help at all.

We don't just care about what's immediately introspectively visible

Suppose you learned tomorrow that there are invisible ghosts with brains just like yours (except implemented in ghostly substrate) that suffer when you wear green hats but enjoy themselves when you wear red hats. Upon learning this, it seems you should care about the ghosts and adjust your head wardrobe, even though you have no introspective access to ghost brain operations. You know rationally that ghost brains matter even if you can't subjectively access their pained responses to green hats directly.

Likewise, the verbal, deliberative parts of our minds only have immediate access to the brain operations that they reflect upon. In a pre-scientific world, we would care only about the brain operations that we could introspectively access, because those would be the only operations whose existence we would know about. But now that we have third-person data on all the other things happening in our brains, why can't we extend sympathy to those processes as well -- just as we extended sympathy to the ghosts? The original motivation for focusing on the reflected-upon operations seems to have been undercut. It's like the drunkard looking for his keys under the streetlamp, but then the morning sun comes up, at which point he can look elsewhere too.

Visceral inclination toward higher-order views

We can care about whatever we want, and I feel there is a gut inclination in favor of a higher-order view of consciousness, because self-reflection on our emotions is the most immediate and obvious characterization of what's going on when we feel bad or good.

Before 2005, I assumed that animals weren't conscious because they lacked language, and while my memory is hazy, I think the idea might have been that animals couldn't be conscious of emotions if they couldn't manipulate them using verbal machinery. In 2005 I learned that most scientists believed that at least mammals and birds were conscious, but I maintained a less extreme higher-order view: Namely, that nociception by itself didn't matter but only reflection upon nociception. My opinion here was heavily influenced by Vegan Outreach's quotations from The Feeling of What Happens by Antonio Damasio:

Would one or all of those neural patterns of injured tissue be the same thing as knowing one had pain? And the answer is, not really. Knowing that you have pain requires something else that occurs after the neural patterns that correspond to the substrate of pain – the nociceptive signals – are displayed in the appropriate areas of the brain stem, thalamus, and cerebral cortex and generate an image of pain, a feeling of pain. But note that the “after” process to which I am referring is not beyond the brain, it is very much in the brain and, as far as I can fathom, is just as biophysical as the process that came before. Specifically, in the example above, it is a process that interrelates neural patterns of tissue damage with the neural patterns that stand for you, such that yet another neural pattern can arise -- the neural pattern of you knowing, which is just another name for consciousness. [...]

Tissue damage causes neural patterns on the basis of which your organism is in a state of pain. If you are conscious, those same patterns can also allow you to know you have pain. But whether or not you are conscious, tissue damage and the ensuing sensory patterns also cause the variety of automated responses outlined above, from a simple limb withdrawal to a complicated negative emotion. In short, pain and emotion are not the same thing.

I maintained this view roughly intact until 2013, when comments by Carl Shulman similar to those quoted above forced me to re-evaluate. It really doesn't make sense to specially privilege self-reflection once we extricate ourselves from the dualism-like trap of thinking that consciousness is a special event that suddenly turns on when we notice ourselves thinking. Rather, noticing ourselves thinking is just one immediate way in which we can realize that thinking is happening, but neuroscience shows us other ways.

This discussion reports:

Mr. Shulman thinks it is reasonable to expect that our moral intuitions, by default, would not treat some kinds of cognitive processes as morally relevant — specifically, those cognitive processes of which “we” (our central, stream-of-consciousness decision-making center) have no conscious awareness, e.g. the enteric nervous system, the non-dominant brain hemisphere, and other cognitive processes that are “hidden” from “our” conscious awareness. Upon reflection, Mr. Shulman does not endorse this intuitive discounting of the moral value of these hidden-to-“us” cognitive processes.

Since 2013, I've been haunted by the question of whether, and how much, I care about processes that aren't reflected on in some elaborate self-modeling way. Viscerally I can still see how we might not care about unreflected-on emotions, since we can't imagine such emotions from a first-person perspective. Any emotions we do imagine are reflected-on emotions because the imaginative act makes them reflected on. Out of sight, out of mind. But we don't feel the same way about suffering in other minds of which we're unaware. It still matters if someone gets hurt, even if I never know that it happened. So maybe I should care about unreflected-on emotions too.

This dilemma raises a broader question: What do I care about in general, and why? The answer in my case seems to be something like: When I imagine myself suffering, it feels really bad, and I want to stop it. Likewise, if I know that something else is suffering in a similar way, I want to stop that. But what does "in a similar way" mean? Naively, it means "in a way that I can imagine for myself". But then this seems to include only emotions that are reflected upon, since any brain processing that I imagine from a first-person perspective is reflected upon. Rationally, however, it seems I should broaden my moral circle to include minds that are too alien for me to imagine being directly but that still share third-person similarities to myself. In this case, processes not reflected on may begin to matter somewhat as well.

Higher-order theories are complementary to first-order theories

For the reasons discussed above, I find it implausible that a proper account of moral standing will only include (sufficiently complex) higher-order reflection. It seems that the properties of what's being reflected upon matter as well, and we can use various first-order accounts of consciousness (global-workspace theory, integrated-information theory, etc.) to ground some of our intuitions about which first-order processes matter and how much. I suggest that higher-order theory can represent one factor among many that we consider when evaluating how much we care about a given cognitive system.

Different theories of consciousness are not mutually exclusive. Global broadcasting of integrated information can lead to higher-order thoughts. All of these components are parts of what happens in brains, and some theories of consciousness just privilege certain of them more than others. I don't think any one of these accounts by itself does justice to the depth of what we want "consciousness" to mean. Robert Van Gulick makes the same point: "There is unlikely to be any single theoretical perspective that suffices for explaining all the features of consciousness that we wish to understand."

Is self-reflection special?

My views on the subject of this essay vacillate. There are times when I feel like consciousness should be seen as relatively binary, while at other times it seems very graded. We have a strong intuition that there's something special going on in our heads that's different from "dumb" processes outside us and even many dumb processes in our brains. Pre-theoretically the distinction is cast in terms of a feeling of what it's like to be ourselves. Post-theoretically, it's plausible to explain the distinctiveness in terms of our brain's ability to reflect on itself in such a complex way that it synthesizes narratives about its experience and generates strong intuitions about its unified selfhood and consciousness. This kind of sophisticated self-reflection is indeed rather distinctive among physical processes, though of course self-reflection comes on a continuum, since any system is self-reflective to some degree.

The big question is whether a self-reflective system has to be sufficiently complex and has to assert its own consciousness like our brains do before it has nontrivial moral status. Does the fact that we intuitively find our own consciousness to be special imply that we actually don't care about much simpler systems? Alas, there is no straightforward answer to the question of "what do we really care about?" We just have fragments of intuitions in various directions at various times. There's not a concrete thing we're pointing to when we describe "our values". Feelings about whether self-reflection of a sufficiently complex sort is special among physical operations vary from person to person, context to context, and thought experiment to thought experiment.

An anthropic principle for consciousness

Several modern theories of physics predict a multiverse containing many kinds of universes, most of which are unsuitable for life. The reason we find ourselves in a universe that can support life is simply that we couldn't exist in the others. This is the "anthropic principle".

Likewise, if consciousness of some sort exists throughout computational systems, why do we find ourselves in complex human bodies rather than in, say, our laptops?d Again the answer is a kind of anthropic argument: At least on present-day Earth, the only computations that are capable of asking these kinds of questions about themselves in a nontrivial way are intelligent biological minds, especially advanced mammals and birds. (Maybe a few computer programs can already ask these questions in a nontrivial way as well; it's not clear.)

We can take this observation one of two ways:

  1. Only care about agents that can reflect on their own consciousness to a nontrivial degree, since if we're asking this kind of question, we couldn't be anything else.
  2. Care about all consciousness to some degree and remember this anthropic principle as just an explanation of why we find ourselves inside the bodies of highly intelligent animals.

I tend to prefer #2, though I can imagine why #1 might be somewhat compelling as well.

Anthropic observer ≠ conscious observer

Sometimes it's suggested that anthropic reasoning casts doubt on animal consciousness because if animals were conscious, they're so numerous that we should be them rather than humans. Yvain proposes this in "It's not like anything to be a bat" (2010), and Russell K. Standish presents a more nuanced version of the argument in "Ants are not Conscious" (2013). (Standish clarifies: "An alternative reading of this paper is that it is not talking about consciousness per se, but what is, or is not, allowable within the anthropic reference class." That's fine, but in that case the paper's title is potentially misleading.)

These arguments are wrong because most conscious observers are not anthropic observers. An anthropic observer is someone who asks questions like, "Why am I in this universe rather than another?" The types of physics in universes not friendly to life can't configure themselves into the right patterns to ask that question (except by chance in the case of certain Boltzmann brains), which is why we can conclude we're not in those universes. That's the anthropic principle.

But there's little connection between that and being conscious. After all, even most humans don't reflect on being observers in one place rather than another 99.99% of the time. In that sense, even most humans usually aren't anthropic observers. If we were forced to divide up spacetime into regions that are anthropic observers and those that aren't, we'd carve out little circles around those minds that happen to be thinking about anthropics, and everything else -- both in our universe and in other, inhospitable universes -- would not be anthropic observers. Being "conscious" doesn't require being an anthropic observer. There's nothing special about thinking about anthropics per se as far as consciousness goes. Even higher-order theorists probably agree with this, since they think high-level reflection of some sort is crucial for consciousness, not high-level reflection on one's status as an anthropic observer.

Of course, one could extend the anthropic principle to ask: "What universes are capable of supporting (complex) conscious minds of any kind, whether they reflect on their own existence or not?" But that's a different question from what anthropics ordinarily asks about, and reflection on "what observer we find ourselves to be" has little to say about the extent of consciousness in general throughout the rest of the world.

I think the self-sampling assumption in anthropics is wrong, but even if it were right, it wouldn't apply to "conscious" minds in general. Why would it? It's not as though any conscious mind is always implicitly affirming its own status as an anthropic observer at all times. Moreover, "conscious" minds themselves don't even exist as discrete entities with clear boundaries. This whole business of anthropic reasoning based on discrete observers is confused.

Footnotes

  a. In the language of databases, the brain is more BASE than ACID. The Cartesian theater idea of a single, consecutive stream of consciousness with a definite finish line for when something becomes conscious might be compared with Single Version of the Truth.  (back)
  b. Maybe higher-order theorists don't care so much about representations of objects and care more about thoughts about one's own thoughts. But I think the idea that mental representations are important for consciousness is reminiscent of the higher-order view, which is why I'm generally conflating the ideas in this informal essay.  (back)
  c. The linked page says "Do not quote" because this is an early draft of a book chapter. But the full book is super expensive, so I don't have a copy of it.  (back)
  d. Note that this can be asked without invoking confused notions of "observers" as discrete "souls" that inhabit bodies. Rather, we're just asking why the computations that are capable of asking questions like these reside in big animals. This is no more mysterious than asking why pollen is found on flowers rather than in a plant's roots.  (back)