Higher-Order Theories of Consciousness by Themselves Are Too Parochial

By Brian Tomasik

First written: 30 Dec 2014. Last nontrivial update: 31 Aug 2017.

Summary

When we notice ourselves to be conscious, we do so by specific neural processes of self-reflection. Any emotion that we can imagine in a first-person way is an emotion that we are reflecting upon. Hence it's natural to assume that the only emotions that matter morally are those upon which someone reflects. This gives rise to a higher-order account of consciousness. Such a view is problematic, because it's not clear why self-reflective algorithms are so special, whether overly simple self-reflection matters, and why the operations being reflected on don't also have moral significance in their own right. I think self-reflection can be one important consideration among many when we're assessing a mind's degree of consciousness, but excluding all cognitive processing that isn't noticed by some sufficiently complex reflection algorithm seems overly parochial.

Note: In this piece I use "higher-order theories" in a very imprecise way to designate views that generally consider reflection of some sort to be crucial for consciousness.

A caveat on imprecise language

Carruthers (2016) warns:

the different versions of a higher-order theory of phenomenal consciousness need to be kept distinct from one another, and critics should take care to state which version of the approach is under attack, or to frame objections that turn merely on the higher-order character of all of these approaches. (Compare the care that one has to take if one wishes to mount an attack on utilitarian moral theory, given the multitude of different theories that actually fly under that banner.)

My current article does not heed this warning. I'm not an expert on the literature about higher-order theories, and I have not tried to challenge the precise claims of any particular view. Rather, I discuss the general idea that reflection is important for consciousness, which is a common thread both in the philosophy of mind and in layperson discussions of consciousness. I use "higher-order theories" merely as a convenient label for this general, messy class of views.

My replies are equally imprecise. Rather than challenging some particular higher-order theory, I aim to problematize some of the conceptual foundations or assumptions that underlie at least some higher-order theories.

I feel as though some of the philosophical literature presumes false precision. For example, philosophers distinguish "Higher-Order Perception Theory" from "Higher-Order Thought Theory". But I'm not sure whether the distinction between perceptions vs. thoughts carves reality at its joints. Plausibly both of these concepts will be eliminated in favor of more precise terms by neuroscience. One way to see this is by considering computers: what's the difference between a perception and a thought in a computer? Ultimately there are just various data structures and algorithms, transforming certain kinds of data into other kinds of data, creating various high-level representations that can be used in various ways. Likewise, I suspect that philosophy's account of higher-order theory should be replaced by the more detailed facts of neurobiology, where seemingly sharp distinctions will, I predict, be seen to lose their force.

In my article, I haven't always been careful to distinguish reflection vs. self-reflection. That is, I haven't distinguished mere higher-order thoughts that occur during ordinary consciousness vs. the kind of self-reflective thought that occurs when you think to yourself, "Hey, I'm conscious!" I've sometimes conflated these into the general category of "reflection of some sort".

Follow the neurons

In order to explain what happens in politics, it's helpful to follow the money back to campaign donors. Likewise, in explaining our mental lives, we should follow the neurons. When you notice to yourself that "I'm conscious", what's going on in your brain to make that thought happen? Presumably you have some concept of what consciousness and raw feelings are, and this combines with a neuronal cluster representing yourself, perhaps together with your current stream of processed sensory input. I'm just speculating about the exact implementation details here, but the details aren't crucial. There must be some specific neural steps that implement your thought that you're conscious, and these steps explain why you think you're conscious.

"But", you might protest, "that sequence of neural steps doesn't explain why my consciousness is lit up in a special way. Why isn't that affirmation of self-consciousness happening in the dark, the way it would for a robot?" Of course, this thought also is produced by some sequence of neural steps, which we could trace in your brain if we had high-resolution measurement devices. What else could that raw feeling be besides neural activity? If it were anything else, wouldn't consciousness be just as strange?

You can't think outside your brain; any confusions you have about consciousness are implemented in the physical machinery that seems so unlike the vivid phenomenology that you experience.

Am I conscious now?

In Zen and the Art of Consciousness, Susan Blackmore's first question is "Am I conscious now?" Her answer is "Of course I am. Yes, I am conscious now." If we follow the neurons, we can see that this response is some kind of self-reflection operation in Blackmore's brain that, when activated, always produces a "yes" answer. Blackmore has compared this to always seeing the light on in the fridge: "You may keep opening the door, as quickly as you can, but you can never catch it out - every time you open it, the light is on." Kevin O'Regan calls this the "refrigerator light illusion".

Michael Graziano's attention schema theory proposes something similar:

If you are attending to an apple, a decent model of that state would require representations of yourself, the apple, and the complicated process of attention that links the two. [...]

When you look at the colour blue, for example, your brain doesn't generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. [...] The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information.

The above views might be considered narrative-interpretative theories of consciousness. The most famous such theory is Daniel Dennett's multiple drafts model, in which consciousness is constructed from pieces of brain activity on an as-needed basis. Like executing queries against a distributed and constantly changing database, we compute and combine the information that's required for our current object of attention. Various "probes" (e.g., verbal questions, action choices) may fix a snapshot of some contents of our minds, but snapshots are constructed on the fly and needn't be fully consistent with one another.

Was I conscious a moment ago?

Blackmore's second question is: "What was I conscious of a moment ago?" I'll revise it slightly: "Was I conscious a moment ago?" Even if we agree that there's a specific kind of self-reflection process that corresponds to active thinking about our consciousness, there also seems to be a kind of implicit consciousness that we carry around as we perform our daily tasks. 99.9% of the time, we're not actively reflecting on our consciousness, and yet we seem still to feel an implicit what-it's-like-ness.

...at least, that's what we think when introspecting on the matter. The only way we can tell now is by inspecting memories, and when we inspect memories, we reconstruct a mental image of ourselves being self-aware at the moment that we're recollecting. But this could just be a Graziano-style operation in which our brains now claim they were conscious in the past, because they can access the past data. Data that weren't globally broadcast in the past didn't remain in memory, so we claim we weren't conscious of the non-broadcast data, in agreement with the observations of global-workspace theory; but even for the data that were globally broadcast, this retrospective memory may be the first time we actively ask whether we were conscious of them. Moreover, even if we have false memories of some fictional event, we can clearly see in our mind's eye that we were conscious when it happened. Hence, recollection of having been conscious in the past is not foolproof.

Reasoning of this type may have been the motivation for Eliezer Yudkowsky's comment:

Maybe humans are conscious only while wondering about whether or not we're conscious, and so we observe that we're conscious each time we check, but at all other times our experiences are of no ethical value.

(Of course, it's also possible that some baseline self-reflective operations of noticing that you're conscious are actually running all the time in some implicit, non-verbal, non-distracting way.)

I think Yudkowsky's proposal is interesting, and it seems to me like the main plausible contender to the moral standpoint advanced in the current piece. If one insists that "the act of thinking (verbally or non-verbally) that oneself is conscious in some very complex/specific way" is essential for consciousness, then even humans may be "unconscious" much of the time, and it's unclear to what extent non-human animals would be conscious, depending on what degree of complexity is involved with "telling oneself (possibly non-verbally) that oneself is conscious". On the other hand, if we think human experience still matters even in "flow" states where we lose track of our own minds and are merely taking in the world without noticing that fact, then the arguments in this piece seem to apply.

Higher-order theories

The narrative-interpretative theories discussed above see consciousness as a construction in which our explicit-thought machinery makes sense of our mental events. This narration is a form of self-reflection. However, the manner of self-reflection differs somewhat from conventional higher-order theories of consciousness, which propose that consciousness consists in reflection on lower-level brain processing in general, rather than self-reflection that yields thoughts like "I'm conscious now". Still, the general idea of reflection as the crucial component seems shared among the approaches, and in the remainder of this piece I talk about "higher-order theories" loosely as views of consciousness according to which reflection of some sort is required for consciousness.

The following diagram illustrates the general idea of higher-order theories:
[Figure: depiction of higher-order processing]

Gennaro (2005) explains his higher-order-thought (HOT) theory (p. 9):

In very much a Kantian spirit, the idea is that first we passively receive information via our senses. This occurs in what Kant (1781/1965) calls our “faculty of sensibility.” Some of this information will then rise to the level of unconscious mental states which can, of course, also cause our behavior in various ways. But such mental states do not become conscious until the “faculty of understanding” operates on them via the application of concepts. I contend that we should understand such concept application in terms of HOTs directed at the incoming information. Thus, I consciously experience the brown tree as a brown tree partly because I apply the concepts 'brown' and 'tree' (in my HOTs) to the incoming information via my visual perceptual apparatus. More specifically, I have a HOT such as “I am seeing a brown tree now.”

Rather than saying that HOTs are directed at perceptual data, I would say that HOTs arise from perceptual data being processed in various complex ways, but this is merely a different preference about how to use words.

Problems for higher-order theories

Oversimplified instances of self-reflection seem to matter less

We don't know exactly what the self-reflective operations are in which we tell ourselves that we're conscious, but suppose we had an algorithm for them in human brains. Suppose we then, step by step, took away pieces of brain functionality not essential to self-reflection, like removing blocks from a Jenga tower. Would the result still be conscious? Suppose we simplify the self-reflection algorithm itself by compressing some complicated steps into a slightly simpler step. Is that brain still conscious? As we continue stripping away details, does consciousness become extinguished at some point? In the limit of extreme simplicity, is this Python agent conscious when it prints the string "Does it feel like something to see red?" and then prints a boolean answer computed by a rudimentary but still self-reflective function?
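
Concretely, a trivially "self-reflective" agent of roughly that sort might look like the following sketch (the class and method names here are just illustrative, not the actual program referred to above):

```python
# Toy illustration of an agent whose "self-reflection" is nothing more than
# a function inspecting its own internal state.

class TrivialAgent:
    def __init__(self):
        # The agent's entire "mind": a single flag set when red input arrives.
        self.seeing_red = False

    def perceive(self, color):
        # "First-order" processing: record whether the input is red.
        self.seeing_red = (color == "red")

    def reflect(self):
        # "Higher-order" processing: a rudimentary check of the agent's own state.
        return self.seeing_red


if __name__ == "__main__":
    agent = TrivialAgent()
    agent.perceive("red")
    print("Does it feel like something to see red?")
    print(agent.reflect())  # prints: True
```

Here the reflect method is the entire "higher-order" machinery; the question is whether running it confers any degree of consciousness at all.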

I think the most plausible response here is to hold that consciousness comes in degrees depending on the complexity of the self-reflective operations and the complexity of the mind being reflected upon.

"Conscious" reflection in human brains involves broadcasting thoughts throughout the brain so that other brain components can access the information. Insofar as this broadcasting process loops in many brain components, there can't be a richly complex self-reflection broadcast without also having richly complex processes that receive the broadcast.

Meanwhile, a complex self-reflective algorithm operating on an otherwise empty brain, with just enough fake inputs to make the self-reflection work, doesn't seem legitimately conscious to anywhere near the same degree that an actual brain is. This idea about higher-order reflection on empty inputs is called "the targetless higher-order representation problem" (Carruthers 2016). Carruthers (2016): "it seems that a higher-order experience of a perception of red, say, or a higher-order thought about a perception of red, might exist in the absence of any such perception occurring."

Carruthers (2016) goes on to describe an argument by Ned Block:

It would be remarkable (indeed, mysterious) if a higher-order belief should have all of the causal powers of the mental state that the belief is about. And in particular, there is no reason to expect that a higher-order belief that one is in pain should possess the negative valence and high-arousal properties of pain itself. But the latter are surely crucial components of phenomenally conscious pain. If so, then a higher-order belief that one feels pain in the absence of first-order pain will not be sufficient for the conscious feeling of pain.

I agree with this basic point, but rather than talking about whether a brain state "really is" conscious pain or not, I would say that "conscious pain" is a graded, fuzzy concept that covers lots of underlying neural detail, some parts of which can be present in the absence of other parts.

Finally, note that this question about how to treat oversimplified instances of consciousness also applies to first-order theories, such as global-workspace theory. So this isn't a unique challenge for higher-order theorists.

What counts as self-reflection?

Consider this example of "higher-order reflection" on "unconscious" processing: seeing your brain light up in an fMRI. There's no fundamental difference between neurons sending signals internally versus fMRI images sending photons to your retina. Both are just information transmission of various sorts. And if there is no fundamental difference, would this mean that those who deny the consciousness of, say, fish should believe that a fish (or, rather, the fish+observer system) becomes conscious when neuroscientists inspect the fish's real-time brain functioning in sufficient detail? Or are these higher-order thoughts by the experimenters about the fish's brain not of the right type to generate consciousness according to higher-order theory?

As before, higher-order theorists could answer these questions by adopting a sliding-scale approach to self-reflection, in which what counts as a higher-order thought comes in degrees based on context. The resulting higher-order theories would still be workable, but we might question whether insistence on the necessity of self-reflection is overly dogmatic. Depending on how one defines self-reflection, trivial instances of it occur all the time. In any physical process where event A influences event B, we could call event B a "higher-order thought" about event A.

Gennaro (2005) concedes (p. 10): "To be sure, even nonconscious mental states also involve some form of conceptualization or categorization in so far as they have intentional content." But, Gennaro (2005) argues, first-order theories can't account for the distinction between conscious vs. unconscious mental states (p. 10). That distinction, he says, must come from the concepts deployed during higher-order thoughts (p. 10).

I think if you define "what it is like" to have experiences based on very particular kinds of concepts that the human brain deploys when information is aggregated in very particular kinds of ways, then you can argue that other information-processing systems, such as those in your gut or in a tree, are not conscious. But this is just a trivial matter of definitions, and the real question is why you single out those particular kinds of higher-order thoughts and not others. Is it because those are the thoughts that are most associated with verbal report? Is there any less arbitrary way to pick the higher-order thoughts that "count" vs. those that don't?

Carruthers (2016) insists that not just any higher-order representations involve phenomenal consciousness:

For example, a belief can give rise to a higher-order belief without thereby being phenomenally conscious. What is distinctive of phenomenal consciousness is that the states in question should be perceptual or quasi-perceptual ones (e.g. visual images as well as visual percepts). Moreover, most cognitive/representational theorists will maintain that these states must possess a certain kind of analog (fine-grained) or non-conceptual intentional content. What makes perceptual states, mental images, bodily sensations, and emotional feelings phenomenally conscious, on this approach, is that they are conscious states with analog or non-conceptual contents.

I don't buy the idea that "beliefs", "visual percepts", "emotional feelings", etc. can be so cleanly separated at the neurobiological level. (I guess we intuitively feel differently about, e.g., the quale of a visual scene compared against the experience of deducing a new belief, and presumably there are reasons we feel differently between these two cases....) Also, the "fine-grained" attribute comes on a continuum, suggesting no clean distinction between states with content that's "fine-grained enough" or not.

I agree with Van Gulick (2006)'s "Extra Conditions Problem" for higher-order theories (pp. 13-14):

Higher-order theories need to include further conditions to rule out obvious counterexamples to the sufficiency of the higher-order analysis (e.g., that the meta-intentional state be simultaneous with its lower-order object and that it be arrived at noninferentially). But those conditions call into question the theory’s basic idea of explaining consciousness solely in terms of meta-intentional content. The extra conditions may be required to sort the cases correctly, but it is not clear why they should matter[...].

Since I don't think consciousness is a "real, objective thing", I guess someone could legitimately say that "I choose these extra conditions on my higher-order theory because I want to (due to various specific neurobiological and cultural reasons that lead me to want to)". Another response could be to embrace the unintuitive implications of jettisoning the extra conditions and conclude that, e.g., a higher-order thought on a temporally distant first-order state still counts as conscious.

Extra conditions in Rolls (2008)

An example of what I consider unmotivated "extra conditions" can be found in the higher-order theory of Rolls (2008). One condition is that (p. 147):

the system that is having syntactic thoughts about its own syntactic thoughts (higher-order syntactic thoughts or HOSTs) would have to have its symbols grounded in the real world for it to feel like something to be having higher-order thoughts. The intention of this clarification is to exclude systems such as a computer running a program when there is in addition some sort of control or even overseeing program checking the operation of the first program. We would want to say that in such a situation it would feel like something to be running the higher-level control program only if the first-order program was symbolically performing operations on the world and receiving input about the results of those operations, and if the higher-order system understood what the first-order system was trying to do in the world.

Why is it necessary for the symbols to be grounded in lots of detail? Page 148:

The need for this is that the reasoning in the symbolic system must be about stimuli, events, and states, and remembered stimuli, events and states, and for the reasoning to be correct, all the information that can affect the reasoning must be represented in the symbolic system, including for example just how light or strong the touch was, etc.

Sure, detailed symbol grounding is important for the higher-order reasoning to be correct, but why is it necessary for the higher-order reasoning to feel like something? This condition strikes me as a hacky way to rule out certain "too simple" computer programs from being considered conscious.

Self-reflection comes in degrees

Yudkowsky, explaining his higher-order intuitions:

To spell it out in more detail, though still using naive and wrong language for lack of anything better: my model says that a pig that grunts in satisfaction is not experiencing simplified qualia of pleasure, it’s lacking most of the reflectivity overhead that makes there be someone to experience that pleasure. Intuitively, you don’t expect a simple neural network making an error to feel pain as its weights are adjusted, because you don’t imagine there’s someone inside the network to feel the update as pain. My model says that cognitive reflectivity, a big frontal cortex and so on, is probably critical to create the inner listener that you implicitly imagine being there to ‘watch’ the pig’s pleasure or pain, but which you implicitly imagine not being there to ‘watch’ the neural network having its weights adjusted.

Similarly, Dennett (1995) says:

Are the "pains" that usefully prevent us from allowing our limbs to assume awkward, joint-damaging positions while we sleep experiences that require a "subject" (McGinn, 1995), or might they be properly called unconscious pains? Do they have moral significance in any case? Such body-protecting states of the nervous system might be called "sentient" states without thereby implying that they were the experiences of any self, any ego, any subject. For such states to matter—whether or not we call them pains or conscious states or experiences—there must be an enduring, complex subject to whom they matter because they are a source of suffering.

My response is that self-reflection, selfhood, etc. come in degrees. What is a "self" other than a collection of cognitive representations referring to other internal parts of a computational system? Such self-representations come in varying degrees of sophistication.

For example, let's take Yudkowsky's example of "a simple neural network making an error". Suppose that this neural network (NN) is part of a larger software system, and when the NN makes an error, this not only updates the NN weights but also updates a few simple meta-level records about the network itself, such as a running count of how many errors the NN has made and a flag indicating whether its recent performance is getting worse.

These meta-level components could be considered an extremely crude "self-model" of the NN. Of course, these particular pieces of information are still super simple, and the self-reflectiveness of this software should be considered very low. But I don't see a clear dividing line anywhere between simple self-modeling by this NN software vs. the very complex self-modeling that humans do.
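
As a rough sketch of the scenario above (with an invented learner and invented meta-level variable names), consider a perceptron whose training loop also maintains a tiny "self-model":

```python
# Hedged sketch of a "crude self-model": the learner and the meta-level
# variable names are made up for illustration; the point is only that the
# meta-level bookkeeping is extremely simple.
import random

class MonitoredPerceptron:
    def __init__(self, n_inputs, lr=0.1):
        self.weights = [0.0] * n_inputs
        self.lr = lr
        # Crude "self-model": simple facts the system records about itself.
        self.error_count = 0
        self.recent_errors = []          # sliding window of recent mistakes
        self.performance_worsening = False

    def predict(self, x):
        return 1 if sum(w * xi for w, xi in zip(self.weights, x)) > 0 else 0

    def train_step(self, x, target):
        error = target - self.predict(x)
        if error != 0:
            # First-order update: adjust the weights.
            for i, xi in enumerate(x):
                self.weights[i] += self.lr * error * xi
            # Meta-level update: the system's crude model of itself.
            self.error_count += 1
        self.recent_errors.append(1 if error != 0 else 0)
        self.recent_errors = self.recent_errors[-20:]
        self.performance_worsening = sum(self.recent_errors) > 10


if __name__ == "__main__":
    random.seed(0)
    nn = MonitoredPerceptron(n_inputs=2)
    for _ in range(100):
        x = [random.choice([0, 1]), random.choice([0, 1])]
        nn.train_step(x, target=x[0])  # learn to copy the first input
    print("errors so far:", nn.error_count)
    print("performance worsening?", nn.performance_worsening)
```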

As a real-world example, Babu and Suresh (2012) developed a "Meta-cognitive Neural Network" in which learning is adapted based on extremely simple forms of self-reflection. The "Meta-cognitive component" of the system computes various measures, such as the predicted classification label for a new training example and the confidence in this classification based on how strong the network's output score for the input example was (p. 89). Based on this information, the system might decide to, e.g., avoid training on a training example where the classifier already gets the classification correct and has high confidence in this classification (p. 89). Meanwhile, if the classifier gets a training example's class wrong, and if the training example also seems to contain significant new information, then a new hidden-layer neuron will be added to the radial basis function neural network (pp. 89-90). And so on.
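
The decision logic is roughly of the following shape (a simplified sketch with made-up thresholds, not the actual algorithm from the paper):

```python
# Highly simplified sketch of a meta-cognitive decision rule of the kind
# described above; thresholds and names are illustrative only.

def metacognitive_decision(predicted_label, true_label, confidence,
                           novelty, conf_threshold=0.9, novelty_threshold=0.5):
    """Decide what the learner should do with one training example."""
    if predicted_label == true_label and confidence >= conf_threshold:
        # Already classified correctly with high confidence: nothing to learn.
        return "skip example"
    if predicted_label != true_label and novelty >= novelty_threshold:
        # Wrong answer on an example carrying significant new information:
        # grow the network (e.g., add a hidden RBF neuron).
        return "add neuron and learn"
    # Otherwise, just update the existing parameters.
    return "update parameters"


print(metacognitive_decision("cat", "cat", confidence=0.95, novelty=0.1))  # skip example
print(metacognitive_decision("dog", "cat", confidence=0.40, novelty=0.8))  # add neuron and learn
```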

Hadoop JobTracker

As another example of crude self-modeling, take a JobTracker daemon in Hadoop 1, which monitors ("higher-order thoughts") the status of a distributed MapReduce computation ("first-order thoughts"). deRoos (n.d.):

The JobTracker maintains a view of all available processing resources in the Hadoop cluster and, as application requests come in, it schedules and deploys them to the TaskTracker nodes for execution.

As applications are running, the JobTracker receives status updates from the TaskTracker nodes to track their progress and, if necessary, coordinate the handling of any failures. The JobTracker needs to run on a master node in the Hadoop cluster as it coordinates the execution of all MapReduce applications in the cluster, so it’s a mission-critical service.

JobTracker could be said to deploy abstract concepts within its higher-order thoughts that are directed at the machines in the Hadoop cluster. For example, JobTracker monitors TaskTracker nodes, and if those nodes "do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker." While I haven't read the Hadoop source code, I would guess that JobTracker has one or more state variables to indicate whether a given TaskTracker has failed. This can be seen as an extremely crude "summary concept" about the first-order computations. Like when humans deploy the concept of "beautiful" in response to a majestic landscape, JobTracker deploys the concept of "failed" in response to (the lack of) heartbeat signals from a TaskTracker node.
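
A guess at the flavor of that bookkeeping (invented names and thresholds, not actual Hadoop code) might look like this:

```python
# Invented sketch of the kind of bookkeeping described above; this is not
# actual Hadoop code, and the class/field names are made up.
import time

HEARTBEAT_TIMEOUT = 30.0  # seconds without a heartbeat before a node is deemed failed

class ToyJobTracker:
    def __init__(self):
        # "Higher-order" state: crude summary concepts about each TaskTracker.
        self.last_heartbeat = {}  # tracker id -> timestamp of last heartbeat
        self.failed = {}          # tracker id -> the summary concept "failed"

    def receive_heartbeat(self, tracker_id):
        self.last_heartbeat[tracker_id] = time.time()
        self.failed[tracker_id] = False

    def check_trackers(self):
        now = time.time()
        for tracker_id, last_seen in self.last_heartbeat.items():
            if now - last_seen > HEARTBEAT_TIMEOUT and not self.failed[tracker_id]:
                # Deploy the concept "failed" and reschedule the node's work.
                self.failed[tracker_id] = True
                self.reschedule_tasks(tracker_id)

    def reschedule_tasks(self, tracker_id):
        print(f"TaskTracker {tracker_id} deemed failed; rescheduling its work.")


tracker = ToyJobTracker()
tracker.receive_heartbeat("tasktracker-1")
tracker.check_trackers()  # no output: the heartbeat is still fresh
```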

Game servers

According to Wikipedia, "A game server [...] is a server which is the authoritative source of events in a multiplayer video game. The server transmits enough data about its internal state to allow its connected clients to maintain their own accurate version of the game world for display to players. They also receive and process each player's input."

Collecting keyboard/mouse inputs, rendering the graphical display, and other computations happen on each player's client machine, and then high-level summary information, such as player movements and other actions, is aggregated by the game server. The game server may, among other things, "Verify legality of moves, requests, etc." And the game server is likely to monitor its own "bodily health": "Hardware-related information, such as CPU and RAM usage, can be provided along with game specific information, such as average player-latency and number of players active on the server" (Google 2017).

Thus, we can think of a game server as an extremely crude second-order computation, which aggregates, verifies, and processes first-order computations by client machines; monitors its own "body" state; and then takes the action of sending new state information to clients.

Massively multiplayer games may require many game-server machines, which are the targets of third-order "thoughts" by other components, such as autoscaling algorithms, logging of in-game data, and analytics (Google 2017). Chu (2008) adds that "All game servers can schedule a repeated keep-alive message to the Web and database back end. [...] The server status information can [...] be incorporated into any game server load-balancing algorithm, so users are not sent to a game server that is apparently down."
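
Putting these pieces together, a toy sketch of the game server's "second-order" role (all names, fields, and the notion of a "legal" move invented for illustration) could look like this:

```python
# Toy sketch of a game server as a crude second-order computation.
import os

class ToyGameServer:
    def __init__(self):
        self.world_state = {}                          # authoritative game state
        self.health = {"cpu_load": 0.0, "players": 0}  # crude model of its own "body"

    def process_client_inputs(self, inputs):
        # Aggregate and verify the first-order computations done on client machines.
        for player, move in inputs.items():
            if self.is_legal(move):
                self.world_state[player] = move

    def is_legal(self, move):
        return move in ("up", "down", "left", "right")

    def monitor_self(self):
        # Self-monitoring: summary info that third-order components
        # (load balancers, autoscalers) could consume.
        self.health["players"] = len(self.world_state)
        self.health["cpu_load"] = os.getloadavg()[0] if hasattr(os, "getloadavg") else 0.0
        return self.health

    def broadcast_state(self):
        return dict(self.world_state)  # state that would be sent back to clients


server = ToyGameServer()
server.process_client_inputs({"alice": "up", "bob": "teleport"})  # "teleport" is rejected
print(server.broadcast_state())  # {'alice': 'up'}
print(server.monitor_self())
```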

The extended mind

There's no principled distinction between oneself and the external world; we just cluster some parts of physics together because they're relatively more self-contained. The cells in your body tend to move together as you walk, whereas the clouds above you may be moving in the opposite direction, so it's useful to talk about the cells in your body as being "part of you", while the clouds aren't. But this distinction is fuzzy. For example, suppose it starts to rain, and you ingest some raindrops or inhale moisture-rich air. Photons reflecting off the clouds may enter your eyes, triggering brain processing. And so on.

If there's no sharp separation of oneself from the outside world, then why can't we consider everything happening in the outside world as part of your "extended mind"? And in that case, doesn't lower-order brain activity (e.g., early stages of visual processing) constitute a higher-order thought about what's happening in the external environment? Let's take an example. Ordinarily we might envision higher-order thoughts like this:

First-order processing: brain visually identifies a cloud in the sky.
Second-order thought: think to yourself, "I see a cloud".

But if the cloud is part of your extended mind, then we could reconceptualize the situation like this:

First-order processing: cloud moves through the sky.
Second-order processing: brain visually identifies a cloud in the sky.
Third-order thought: think to yourself, "I see a cloud".

If anything above first-order thinking counts as conscious, then early-stage visual processing is conscious according to the latter framing. And of course, we could construct a framing in which the cloud's movement is itself a higher-order thought about earlier events.

"Rock" objection

Carruthers (2016) explains this objection, due to Alvin Goldman:

We don't think that when we become aware of a rock (either perceiving it, or entertaining a thought about it) that the rock thereby becomes conscious. So why should our higher-order awareness of a mental state (either via a perception-like state, produced by inner sense, or via a higher-order thought about it) render that mental state conscious? Thinking about a rock doesn't make the rock ‘light up’ and become phenomenally conscious. So why should thinking about my perception of the rock make the latter phenomenally conscious, either?

Gennaro (2005) offers the following reply (pp. 4-5):

We must first and foremost distinguish rocks and other nonpsychological things from the psychological states that [higher-order] HO theories are attempting to explain. HO theories must maintain that there is something not only special about the meta-state [...], but also something special about the object of the meta-state, which, when combined in certain ways, result in a conscious mental state. The HO theorist must initially boldly answer the problem of the rock in this way in order to avoid the reductio whereby a thought about any x will result in x's being conscious. So the HOT theory does not really prove too much in this sense and various principled restrictions can be placed on the nature of both the lower-order and the meta-state in order to produce the mature theory. In this case, a rock is not a mental state and so having a thought about one will not render it conscious. After all, the HOT theory is attempting to explain what makes a mental state a conscious mental state. This is not properly recognized by those who put forward the problem of the rock.

Okay, so what are these principled distinctions between mental and nonmental states? Gennaro (2005), p. 5:

What makes a state a mental state? There are differing views here, but one might, for example, insist that mental states must fill an appropriate causal-functional role in an organism (Armstrong 1981). Alternatively, one might even simply identify mental states with certain neural or bio-chemical processes in an organism (Crick 1994). Either way, however, it is clear that external objects, such as rocks, cannot meet these criteria. The lower-order states in question thus have certain special properties which make it the case that they become conscious when targeted by an appropriate HOT.

Unfortunately, I don't see any sharp lines distinguishing the mental from the nonmental. For example, in reply to the claim that "rocks[...] cannot meet these criteria", consider Gray (2013), which reports: "Scientists have found tiny clumps of iron inside neurons in the ears of birds that may allow the animals to detect the Earth’s magnetic field as they fly. [...] The scientists now hope to carry out more research on the iron spheres, which are up to 2,000 times smaller than the width of a human hair, to find out whether they react when moving through a magnetic field." These iron spheres are miniature "rocks", and assuming they participate in sensation, they certainly seem to be part of a bird's mental state. But what difference would it make if, rather than being in birds' heads, these rocks were external to birds and transmitted the appropriate information by radio waves? I see no important difference. However, Gennaro (2005) absurdly suggests that location within the head might indeed matter (p. 5):

if we return to the idea that the meta-state is an intrinsic part of a complex conscious state, then it is also clear that rocks cannot be rendered conscious by the appropriate HOT. This is because, on such a view, the meta-state must be more intimately connected with its object, and it is most natural to suppose that the target object must therefore be “in the head.” That is, both “parts” of the complex conscious state must clearly be internal to the organism. Van Gulick (2000, 2004), who calls this “the generality problem,” makes a similar point when he says that “having a thought...about a non-mental item such as the lamp on my desk does not make the lamp conscious...because [the lamp] cannot become a constituent of any such global [brain] state.” (Van Gulick 2000, p. 301)

This is silliness. If the concern is that "the meta-state must be more intimately connected with its object", then what if you look at your rock all the time? Perhaps you're meditating upon your rock while staring at it for hours on end. The rock would certainly seem to be an intimate part of your cognitive system during that period.

Van Gulick (2006) replies to the rock objection by arguing (p. 36) that on his version of higher-order theory, "the nonconscious–conscious transformation is a matter of its being recruited into a globally integrated state that crucially embodies a heightened degree of higher-order reflexive intentionality, and not merely a matter of having a separate HO state directed at it. Pencils and stones are never recruited into any such globally integrated states—indeed the idea of their being so does not even really make any sense." I don't fully understand Van Gulick's view because the middle of his article reads to me more like poetry than a theory, which makes discussing the theory difficult. But the basic point seems to be that consciousness of something requires that it be tightly linked into one's complexly interacting brain states. However, if that's the view, it's not clear to me why consciousness isn't a matter of degree, depending on how tightly integrated something is. On such a view, wouldn't rocks and pencils be conscious to some small degree, depending on how deeply the rest of your brain interacts with them? (I personally would embrace a view like this.)

Brons (2017) argues that the "Problem of the Rock" is based on a misunderstanding. He contends that higher-order theories imply only that a rock can be the patient of consciousness, i.e., that someone is conscious of the rock, not that the rock itself is a conscious subject, and the former claim is completely non-controversial. I don't know enough to comment on this argument, but it seems like Carruthers (2016) had something slightly different in mind than the truism that people can be conscious of rocks when he phrased the Problem of the Rock as follows: "Thinking about a rock doesn't make the rock ‘light up’ and become phenomenally conscious."

What's the boundary between first- and higher-order processing?

My sense is that the distinction between first-order vs. higher-order thoughts may blur together as we look at the brain more closely. Of course, there will still be distinctions like "the primary visual cortex mostly processes more raw data than the third visual complex does". But a clear separation between low-level vs. high-level processing is likely to be elusive and unhelpful. Ultimately, there's just lots of complex, interacting stuff going on, which we can look at from a variety of points of view.

For instance, suppose we think that early stages of visual processing are unconscious, while later stages are conscious. But what about when this later visual processing feeds back on earlier stages of visual processing? Does the "consciousness" of the higher processing get transferred to the purportedly "unconscious" lower-level processing? And what happens when later stages of visual processing influence other supposedly unconscious events, like hormone release? More generally, how do we define "higher" processing in a brain where each part is connected to tons of other parts in a messy network of interactions? As an analogy, where in the World Wide Web does "higher-order processing" occur, in contrast to lower-order processing?

Carruthers (2016) motivates higher-order theories with, among other things, the example of blindsight: "What is it about a conscious perception that renders it phenomenal, that a blindsight perceptual state would correspondingly lack? Higher-order theorists are united in thinking that the relevant difference consists in the presence of something higher-order in the first case that is absent in the second. The core intuition is that a phenomenally conscious state will be a state of which the subject is aware." He mentions that one possible first-order reply to this argument is as follows: "It might be said, for example, that conscious perceptions are those that are available to belief and thought, whereas unconscious ones are those that are available to guide movement (Kirk 1994)." While I haven't read the cited literature, I interpret the contrast here as something of a distinction without a difference. Both first-order and higher-order views agree that consciousness involves certain kinds of thoughts about sensory information, especially those thoughts that can be verbalized. How we partition the neural chain of events between first-order processing and higher-order processing is a bit arbitrary. Ultimately, there's just a bunch of stuff going on, with neural networks triggering other neural networks, which trigger other neural networks, with lots of feedback connections among a multitude of brain regions. While brain functions are somewhat localized, the idea of separating the morass of brain activity into clean "first-order" and "higher-order" buckets seems naive.

Dennett (2016) articulates the gist of my feeling here when he writes about a somewhat different topic (pp. 69-70):

I submit that, when we take on the task of answering the Hard Question [namely, ‘And then what happens?’], specifying the uses to which the so-called representations are put, and explaining how these are implemented neurally, some of the clear alternatives imagined or presupposed [...] will subtly merge into continua of sorts; it will prove not to be the case that content (however defined) is sharply distinguishable from other properties, in particular from the properties that modulate the ‘reactions and associations evoked’. [...] The answer may well be that these distinctions do not travel well when we [...] get down in the trenches of the scientific image.

Maybe we could define higher-order thoughts based on specific functions, such as language generation. Language is somewhat localized in the brain, so very roughly separating (this aspect of) higher-order thought from other brain processing would not be completely misguided. Still, I would question why we're so insistent on separating higher-order thoughts in the first place, rather than taking a more holistic perspective on the messy biology of brains.

Implicit judgments and representations

We might hold the view that "we have a conscious feeling of X" when "we judge that we have feeling X". For example, suppose I see a shadowy figure in the dark and am alarmed by it. We might say that my conscious feeling of fear starts when my brain judges to itself that "I'm feeling afraid."

But where do these "judgments" happen? One suggestion could be that judgments happen when I tell myself, via verbal inner monologue, that something is the case, e.g., by thinking to myself, "I'm scared of that shadowy figure." But I think most people would agree that verbal reports aren't necessary for conscious experience, since we might imagine a human who was raised by wolves and never learned language but who otherwise had a very similar brain to mine. And there are many experiences that I feel I'm conscious of without ever verbalizing them.

So, if we're looking for the stage of brain processing where "judgments" occur, we should look at some stage prior to our brains producing verbal reports. Suppose we identify some such stage. Presumably it will consist of some configuration of our brain state and/or some set of processing steps.

But then we can ask: Do subsystems in our brains also make judgments? For example, before my whole brain became aware of the frightening shadowy figure, maybe a subset of my brain processed the visual input and triggered "alarm bells" to the rest of the brain. Could those "alarm bell" signals be considered a judgment by the danger-detection subsystem of my brain? That judgment wouldn't be expressed in words; rather, it would be expressed in a more abstract language of neural activation. But it would still be a "statement" of sorts that one entity was passing along to others.

If subsystems of our brains can also make judgments, how about individual neurons? For example, a nociceptive neuron could be seen as making the very simple "judgment" that "there's some tissue-damaging stimulus here".

And so on. Even if we try to cash out "consciousness" in terms of "brain's judgments", we find that there's not a principled way to distinguish judgments from non-judgments. Rather, different systems have different degrees of complexity in the "judgments" that they make.
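
To caricature the gradation, here are three "judgments" of increasing complexity, each just a mapping from inputs to a "statement" of sorts (all details invented for illustration):

```python
# Toy framing of "judgments" at three levels of complexity.

def nociceptor_judgment(stimulus_intensity, threshold=0.7):
    # A single neuron's "judgment": fire if the stimulus exceeds a threshold.
    return stimulus_intensity > threshold

def danger_subsystem_judgment(visual_features):
    # A subsystem's "judgment": raise an alarm if enough threat cues are present.
    threat_cues = sum(1 for f in ("looming", "dark", "humanoid") if f in visual_features)
    return threat_cues >= 2

def verbal_judgment(alarm_raised):
    # The whole system's explicit, reportable "judgment".
    return "I'm scared of that shadowy figure." if alarm_raised else "Nothing to worry about."


alarm = danger_subsystem_judgment({"dark", "humanoid"})
print(nociceptor_judgment(0.9))  # True
print(verbal_judgment(alarm))    # "I'm scared of that shadowy figure."
```

Each function transforms inputs into an output that other parts of the system can use; they differ in complexity, not in kind.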

Rothman (2017), describing Daniel Dennett:

He regards the zombie problem as a typically philosophical waste of time. The problem presupposes that consciousness is like a light switch: either an animal has a self or it doesn’t. But Dennett thinks these things are like evolution, essentially gradualist, without hard borders. The obvious answer to the question of whether animals have selves is that they sort of have them. He loves the phrase “sort of.” Picture the brain, he often says, as a collection of subsystems that “sort of” know, think, decide, and feel. These layers build up, incrementally, to the real thing. Animals have fewer mental layers than people—in particular, they lack language, which Dennett believes endows human mental life with its complexity and texture—but this doesn’t make them zombies. It just means that they “sort of” have consciousness, as measured by human standards.

In reviewing a paper on "A higher-order theory of emotional consciousness", Hankins (2017) writes: "At the risk of begging the question a bit we might say that if you don’t know you’re afraid, you’re not feeling the kind of fear LeDoux and Brown want to talk about." But what does it mean to "know you're afraid"? Doesn't the amygdala sort of "know it's afraid" in a simplistic and implicit way? And, I maintain, higher-level "awareness" of fear comes in degrees depending on how intelligently information can be combined and deployed for appropriate responses across a variety of contexts and how rich are one's concepts of one's emotional state in connection with other concepts. In fairness to LeDoux and Brown, they do answer the question of what it means to "know you're afraid", by pointing to a specific sketch of higher-order representations of representations within human brains. But in my view, saying that only this particular collection of cognitive processing counts as conscious emotion is too parochial.

LeDoux and Brown (2017, SI) discuss the idea of a "deflationary view" of awareness in which "we are conscious of our first-order states but not because of any kind of distinct higher-order awareness. To have the state is to be conscious of it, and nothing else is required" (p. 12). By analogy, I propose we could state a deflationary view of judgments according to which "judging that X" is just "having neural activity representing that X", without requiring some higher-order notion of judgment. LeDoux and Brown (2017, SI) reply: "The problem with this view is that it is unable to distinguish conscious states from non-conscious states. In fact, this kind of deflationary awareness seems to accompany every state of the brain, which then, would make all brain states phenomenally conscious" (p. 12). To this I say: "Exactly!" Their modus tollens is my modus ponens. I take the deflationary view of awareness or judgments to imply that awareness and judgments don't have sharp boundaries but can be seen in gradations in all cognitive processing. The presumption that there is a clear distinction separating "conscious states from non-conscious states" is one that I reject, so the failure of an account of awareness or judgments to find such a sharp distinction is not a flaw.

Carruthers (2016) criticizes an approach that allows for merely implicit higher-order representations:

Van Gulick (2006), in contrast, suggests that all of the higher-order representing sufficient to render an experience phenomenally conscious can be left merely implicit in the way that the experience enters into relationships with other mental states and the control of behavior. So animals that lack the sorts of explicit higher-order concepts tested for in comparative ‘theory of mind’ research can nevertheless be phenomenally conscious. The difficulty here, however, is to flesh out the relevant notion of implicitness in such a way that not every mental state, possessed by every creature (no matter how simple), will count as phenomenally conscious. For since mental states can't occur singly, but are always part of a network of other related states, mental states will always carry information about others, thus implicitly representing them. It is implicit in the behavior of any creature that drinks, for example, that it is thirsty; so the drinking behavior implicitly represents the occurrence of the mental state of thirst. Likewise it is implicit in the state of any creature that is afraid that the creature is representing something in the environment as dangerous; so fear implicitly represents the occurrence of a representation of danger. And so on and so forth.

As with LeDoux and Brown (2017, SI), Carruthers (2016)'s modus tollens is my modus ponens. I think caring to some degree about merely implicit representations (where the implicit/explicit distinction is itself fuzzy) is the kind of non-parochial higher-order view I could get behind.

Van Gulick (2006), p. 23:

From a teleopragmatic perspective, reflexive meta-intentionality is a pervasive and major feature of the mental domain. Once one recognizes the diversity of degrees and forms in which it can occur implicitly as well as explicitly, one finds it playing a key role in all sorts of contexts and organisms that one might not ordinarily associate with meta-intentionality. Rather than coming in only at the latest and most sophisticated levels of evolution and mentation, one finds some degree of reflexive meta-intentionality playing an important role at both the lower levels of the phylogenetic scale and the lower organizational levels of complex mental systems.

From a different perspective, we actually care about first-order computations

Consider why we associate consciousness with moral significance in the first place. Presumably it's because when we explore what we care about, we do so by imagining ourselves having emotional experiences. You might, for instance, think of burning your hand and then screaming in pain as you realize what happened. Since this image involves reflection on an emotion, you may conclude that self-reflection on emotions is what you care about.

But in a different sense, our brain algorithms "actually" care about the emotional experiences themselves—those "in the dark" brain operations upon which we imagine introspecting. Our implicit behavior is tuned to optimize actual rewards, not just noticed rewards. Carl Shulman makes this point in response to Yudkowsky:

The total identification of moral value with reflected-on processes, or access-conscious (for speech) processes, seems questionable to me. Pleasure which is not reflected on or noticed in any access-conscious way can still condition and reinforce. Say sleeping in a particular place induced strong reinforcement, which was not access-conscious, so that I learned a powerful desire to sleep there, and not want to lose that desire. I would not say that such a desire is automatically mistaken, simply because the reward is not access-conscious.

Shulman adds that we may feel that "the computations that matter are the sensory processing and reinforcement learning, not the [higher-order thoughts]. The action-guiding, conditioning computations that the reflections are about."

I don't know if the following example is something that actually happens, but it seems plausible that it could be true for humans or at least an artificial mind similar to humans. Suppose you've become an expert at distracting yourself from pain. You perform a trick for your friends in which you touch a flame for 15 seconds in a row while thinking about something else. Afterward, you think to yourself and tell your friends that "I didn't feel any pain." But the next day, when your friends ask to see the trick again, you have a strange sense of hesitation, and you can't bring yourself to repeat the flame-touching demonstration. That's because (in this hypothetical scenario) your brain still learned that the flame was punishing and seeks to avoid flames in the future. Your verbal statements claimed no pain, but your behavior is implicitly "declaring" that you did feel pain. Why should we privilege verbal reports over behavioral "reports"? (Perhaps a defender of higher-order views could bite the bullet, contending that implicit behavioral "reports" don't matter and that the only thing bad about your not wanting to touch the flame again is the high-level, verbalizable hesitation/angst caused by your aversion to repeating the flame demonstration. In other words, nothing bad happened on day 1 of this scenario, only on day 2.)
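
A toy model of the scenario (as hypothetical as the scenario itself) shows how the learned aversion and the verbal report can come apart, because they're computed by separate mechanisms:

```python
# Toy illustration of the flame scenario: the agent's learned aversion and its
# verbal report are produced by different mechanisms, so they can disagree.
class DistractedAgent:
    def __init__(self):
        self.value_of_touching_flame = 0.0  # learned by reinforcement
        self.attended_to_pain = False       # whether pain reached "report" machinery

    def touch_flame(self, distracted=True):
        reward = -10.0  # the burn is punishing either way
        # Simple value update, regardless of whether the pain was attended to.
        self.value_of_touching_flame += 0.1 * (reward - self.value_of_touching_flame)
        self.attended_to_pain = not distracted

    def verbal_report(self):
        return "That hurt!" if self.attended_to_pain else "I didn't feel any pain."

    def willing_to_repeat(self):
        return self.value_of_touching_flame > -0.5  # behavioral "report"


agent = DistractedAgent()
agent.touch_flame(distracted=True)
print(agent.verbal_report())      # "I didn't feel any pain."
print(agent.willing_to_repeat())  # False: behavior "declares" the pain anyway
```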

Brandon Keim wrote about insects:

The nature of their consciousness is difficult to ascertain, but we can at least imagine that it feels like something to be a bee or a cockroach or a cricket. That something is intertwined with their life histories, modes of perception, and neurological organisation. For insects, says Koch, this precludes the reflective aspects of self-awareness: they don’t ponder. Rather, like a human climber scaling a cliff face, they’re immersed in the moment, their inner voice silent yet not absent. Should that seem a rather impoverished sort of being, Koch says it’s worth considering how many of our own experiences, from tying shoelaces to making love, are not self-conscious. He considers that faculty overrated.

Burton (2017):

According to Daniel Dennett[...] consciousness is nothing more than a “user-illusion” arising out of underlying brain mechanisms. He argues that believing consciousness plays a major role in our thoughts and actions is the biological equivalent of being duped into believing that the icons of a smartphone app are doing the work of the underlying computer programs represented by the icons.

In this analogy, the app icons are like the higher-order thoughts by which our brains represent to themselves that "I'm conscious", "I feel pain", and so on. If we were going to morally value smartphones, wouldn't it be odd to only value the high-level presentation of apps and not the underlying computations behind them? Would we regard a smartphone as morally unimportant if it had no user interface but could still run the same computations? Obviously this is an oversimplified analogy, since the "app icons" that are our folk-psychological understandings of our experiences feed back into the underlying computations that our brains run in crucial ways, while I assume that the app icons of a smartphone don't affect how the smartphone apps themselves run. (Smartphone user interfaces of course do affect how a smartphone runs by influencing the user of the smartphone, but I'm trying to omit the smartphone user from this analogy because the smartphone user would be like a homunculus to the smartphone, and brains don't contain full-fledged homunculi.)

LeDoux and Brown (2017, SI) mention findings with split-brain patients (p. 2):

In later studies of these patients behavioral responses were triggered from the right hemisphere, and the patient was then asked why he did that. Verbal reports from the left hemisphere explained the behavior in ways that made some sense given what was observed (if the right hemisphere produced a scratching action by the left hand, the left hemisphere said, “I had an itch”). But these were fabrications. Such observations suggested that a role of consciousness is to explain responses generated by non-conscious brain systems[...].

But isn't it odd to regard the (in this case wrong) storytelling by the left brain as conscious while the actual events being reported upon (the actions of the right brain) aren't counted as conscious? That seems like saying that "news" is only "the process of writing articles and producing TV segments by journalists", while the happenings covered in those stories, as well as other current events never reported upon, aren't actually part of the news. Even if we embrace that way of speaking about news and consciousness, the point remains that it seems weird to exclude from ethical consideration the underlying details that are being reported upon.

Probably some higher-order defenders would reply that their theories don't require language, and that the right brain in a split-brain patient is still performing the requisite higher-order computations. Fair enough, but I think the lesson generalizes: it seems weird to privilege certain high-level summaries (whether verbal or pre-verbal) to the exclusion of the detailed work being summarized.

Self-reflection looks like other brain processing

What makes the self-reflection algorithm so special anyway? I expect that up close, the steps would look pretty simple and uninspiring—much like many other types of brain processing. Shulman notes this as well:

I don't see [Yudkowsky] presenting great evidence that the information processing reflecting on sense inputs (pattern recognition, causal models, etc) is so different in structure [from other types of information processing].

The higher-order view looks to me like it's falling into a dualist fallacy of supposing that self-reflection (beyond a certain minimal complexity) "really is" conscious, while everything else is not. Why else would it so privilege self-reflection, which is just one algorithm among many?

Blackmore (2012):

all brain events entail the same kinds of processes – waves of depolarisation travelling along axons, chemical transmitters crossing synapses, summation of inputs at cell bod[i]es and so on. What could it mean for some of these to be “giving rise to” or “creating” conscious experiences while all the rest do not? If the hard problem really is insoluble or meaningless then shifting it to apply only some brain events does not help at all.

As an oversimplification, we can imagine first-order processing as a brain system that combines sensory inputs into a higher-level representation, such as what's done by a neural network or a collection of them. Meanwhile, we can imagine higher-order processing as a brain system that combines lower-order cognitive representations into a new set of "summary information" about what's going on. Perhaps this higher-order summary draws on information about the organism's environment, body state, memories, self-concept, etc. What makes the higher-order processing qualitatively different from lower-order processing? From a distance, they both look like processes of combining inputs and producing outputs, and what's different is what inputs they operate on and what kinds of outputs they produce. This is a main reason why I have trouble locating any sharp dividing line between "morally irrelevant" first-order processing and "morally relevant" higher-order thoughts. Of course, one could insist that the kinds of inputs that a system processes matter crucially to its moral importance and that the more sophisticated, high-level inputs processed by higher-order thinking are required for the processing to count morally. But this strikes me as overly parochial—why privilege only a certain kind of inputs? That said, I probably do care more (per unit of computation) about higher-order processing than lower-order processing, because higher-order processing seems more like the cartoon picture of consciousness that I carry around in my mind.
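
Schematically, the structural similarity I have in mind looks something like this (an oversimplified sketch with made-up representations at both levels):

```python
# Both "levels" are just functions from inputs to summary outputs; what
# differs is which inputs they operate on.  All representations here are
# invented for illustration.

def first_order_processing(sensory_inputs):
    # Combine raw sensory inputs into a higher-level representation.
    return {"object": "tree", "color": "brown"} if "bark-texture" in sensory_inputs else {}

def higher_order_processing(representation, body_state, self_concept):
    # Combine lower-order representations (plus context) into summary information.
    return {
        "summary": f"I ({self_concept}) am seeing a {representation.get('color')} "
                   f"{representation.get('object')}",
        "arousal": body_state.get("heart_rate", 60) > 100,
    }


percept = first_order_processing({"bark-texture", "green-top"})
report = higher_order_processing(percept, {"heart_rate": 72}, "this organism")
print(report["summary"])  # I (this organism) am seeing a brown tree
```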

Perhaps some would say there is a qualitative difference between higher-order brain processing and lower-order sensory representations. For example, Rolls (2008) suggests that consciousness arises from higher-order syntactic thoughts that "involve syntactic manipulation of symbols, probably with several steps in the chain[...]. The first or lower order thoughts might involve a linked chain of ‘if’ ... ‘then’ statements that would be involved in planning[...]. The hypothesis is that by thinking about lower-order thoughts, the higher-order thoughts can discover what may be weak links in the chain of reasoning at the lower-order level, and having detected the weak link, might alter the plan" (p. 146). Perhaps there are some ways in which syntactic symbol-manipulating thoughts differ qualitatively from non-syntactic thoughts, and in which higher-order syntactic thoughts differ qualitatively from lower-order ones. Even if so, at a more abstract level, both symbolic and non-symbolic computation, both lower-order and higher-order, involve transforming inputs into outputs in potentially complex ways that, when viewed from a distance, can be seen to be intelligently performing certain functions. Assuming we care about this general property more than particular algorithmic implementations, it's hard to find sharp edges among types of computations.
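
As a toy illustration of the Rolls-style hypothesis quoted above, suppose (purely for the sake of example) that a lower-order "plan" is a list of if-then steps with confidence scores, and a higher-order process scans the chain for its weakest link and patches it. Everything in this sketch is invented for illustration rather than drawn from Rolls's own formalism:

```python
# Toy illustration of a higher-order process "thinking about" a lower-order
# chain of if-then planning steps, spotting the weakest link, and revising it.
# All steps and confidence values are made up for this example.

lower_order_plan = [
    {"if": "fruit visible", "then": "climb tree", "confidence": 0.9},
    {"if": "branch reachable", "then": "grab branch", "confidence": 0.4},  # weak link
    {"if": "branch holds", "then": "pick fruit", "confidence": 0.8},
]

def higher_order_revision(plan):
    """Inspect the lower-order chain, find its weakest step, and patch it."""
    weakest = min(plan, key=lambda step: step["confidence"])
    weakest["then"] = "knock the fruit down with a stick instead"
    weakest["confidence"] = 0.7
    return plan

revised_plan = higher_order_revision(lower_order_plan)

# Abstractly, both levels remain input->output transformations: the lower
# level maps conditions to actions, and the higher level maps a plan to a
# revised plan.
```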

We don't just care about what's immediately introspectively visible

Suppose you learned tomorrow that there are invisible ghosts with brains just like yours (except implemented in ghostly substrate) that suffer when you wear green hats but enjoy themselves when you wear red hats. Upon learning this, it seems you should care about the ghosts and adjust your head wardrobe, even though you have no introspective access to ghost brain operations. You know rationally that ghost brains matter even if you can't subjectively access their pained responses to green hats directly.

Likewise, the verbal, deliberative parts of our minds only have immediate access to the brain operations that they reflect upon. For example, Rolls (2008) explains (p. 142):

It is of interest that the basal ganglia (and cerebellum) do not have back-projection systems to most of the parts of the cerebral cortex from which they receive inputs (Rolls and Treves 1998; Rolls 2005a). In contrast, parts of the brain such as the hippocampus and amygdala, involved in functions such as episodic memory and emotion respectively, about which we can make (verbal) declarations (hence declarative memory, Squire and Zola 1996) do have major back-projection systems to the high parts of the cerebral cortex from which they receive forward projections (Treves and Rolls 1994; Rolls 2008).

In a pre-scientific world, we would care only about the brain operations that we could introspectively access, because those would be the only operations whose existence we would know about. But now that we have third-person data on all the other things happening in our brains, why can't we extend sympathy to those processes as well—just as we extended sympathy to the ghosts? The original motivation for focusing on the reflected-upon operations seems to have been undercut. It's like the drunkard looking for his keys under the streetlamp, but then the morning sun comes up, at which point he can look elsewhere too.

Dialogues

Here's one hypothetical dialogue between a first-order adherent and a higher-order adherent:

First-order theorist: Whenever we introspect upon our consciousness, we do so using higher-order thoughts. Consciousness seems to be a combination of underlying neural representations plus a (nonverbal) feeling/thought that "this feels like something". This leads us to believe that higher-order reflection is essential to consciousness, because we never see consciousness apart from higher-order reflection. But actually, higher-order thought is just the "measurement device" that's always present when observing the first-order computations. First-order computations can still matter even when not being observed.

Higher-order theorist: You misunderstand. Consciousness isn't the first-order computations that are sometimes observed using higher-order thoughts. Consciousness is the observation by higher-order thoughts itself. What you consider an extraneous observation process I consider the core of consciousness.

First-order theorist: Well, even if I held your view, I would consider it unclear what counts as "observation of lower-order computation". Why don't higher layers in a neural network count as "observing" the lower layers? Why doesn't a high-level model of lower-level phenomena count as an "observer" of lower-order computations in a variety of domains, such as when a central computer monitors the computations of worker machines on a computer cluster?

Higher-order theorist: I consider consciousness to be only the specific kind of higher-order observation of one's mind that human minds do when they generate thoughts like "there's something this feels like". Other forms of higher-order observation don't count, even if they look superficially similar and perform similar functions.

First-order theorist: Ok, but that seems like a rather parochial conception of consciousness. (Hence the title of this essay.) Also, the "what it's like" concept that our brains generate when noticing sensations might be a fairly graded thing. For example, if concepts are subsets of networks in which nodes acquire their meaning from connected nodes, then concepts come in degrees depending on the size and complexity of these interconnected networks.

Here's another hypothetical dialogue between two illusionists about consciousness:

Higher-order theorist: In response to the question of "Why does it feel like something to see a sunset?", the answer is: "There's no ontological thing that is 'feeling like something', but your brain represents to itself that 'it feels like something'."

First-order theorist: I agree.

Higher-order theorist: But in that case, a certain flavor of higher-order theory is clearly correct, since the best referent for "feels like something" is this process of generating some higher-order representation ("it feels like something") about one's own sensations. Only a high-level, conceptual, quasi-linguistic thought framework even has the concept "feels like something" that can be attributed to sensations.

First-order theorist: Hold on. You're equating consciousness with a very particular kind of brain process in which the concept that it feels like something is represented to oneself. But the lesson of illusionism is that "feels like something" concept attribution is hollow, in the sense that it's one particular, simplistic thing our brains tell us in trying to make sense of a far more complicated mess of computations. But "consciousness" isn't just this process of telling ourselves that things feel like something. Once we understand illusionism, we see that there's no fundamental distinction between some brain processes vs. others; it's just that some processes are accessible to be combined with high-level, conceptual thoughts that "it feels like something". That doesn't mean other processes that don't happen to be so combined don't also matter.

Metaphysically, this is just a dispute over definitions, but it does matter ethically insofar as our conception of consciousness affects which things we morally care about.

Visceral inclination toward higher-order views

We can care about whatever we want, and I feel there is a gut inclination in favor of a higher-order view of consciousness, because self-reflection on our emotions is the most immediate and obvious characterization of what's going on when we feel bad or good.

Before 2005, I assumed that animals weren't conscious because they lacked language, and while my memory is hazy, I think the idea might have been that animals couldn't be conscious of emotions if they couldn't manipulate them using verbal machinery. In 2005 I learned that most scientists believed that at least mammals and birds were conscious, but I maintained a less extreme higher-order view: namely, that nociception by itself didn't matter; only reflection upon nociception did. My opinion here was heavily influenced by Vegan Outreach's quotations from The Feeling of What Happens by Antonio Damasio:

Would one or all of those neural patterns of injured tissue be the same thing as knowing one had pain? And the answer is, not really. Knowing that you have pain requires something else that occurs after the neural patterns that correspond to the substrate of pain – the nociceptive signals – are displayed in the appropriate areas of the brain stem, thalamus, and cerebral cortex and generate an image of pain, a feeling of pain. But note that the “after” process to which I am referring is not beyond the brain, it is very much in the brain and, as far as I can fathom, is just as biophysical as the process that came before. Specifically, in the example above, it is a process that interrelates neural patterns of tissue damage with the neural patterns that stand for you, such that yet another neural pattern can arise—the neural pattern of you knowing, which is just another name for consciousness. [...]

Tissue damage causes neural patterns on the basis of which your organism is in a state of pain. If you are conscious, those same patterns can also allow you to know you have pain. But whether or not you are conscious, tissue damage and the ensuing sensory patterns also cause the variety of automated responses outlined above, from a simple limb withdrawal to a complicated negative emotion. In short, pain and emotion are not the same thing.

I maintained this view roughly intact until 2013, when observations by Carl Shulman similar to those comments of his I quoted above forced me to re-evaluate. I realized that it doesn't make sense to specially privilege a very particular kind of high-level self-reflection once we extricate ourselves from the dualism-like trap of thinking that consciousness is a special event that suddenly turns on when we notice ourselves having perceptions.

This discussion reports:

Mr. Shulman thinks it is reasonable to expect that our moral intuitions, by default, would not treat some kinds of cognitive processes as morally relevant — specifically, those cognitive processes of which “we” (our central, stream-of-consciousness decision-making center) have no conscious awareness, e.g. the enteric nervous system, the non-dominant brain hemisphere, and other cognitive processes that are “hidden” from “our” conscious awareness. Upon reflection, Mr. Shulman does not endorse this intuitive discounting of the moral value of these hidden-to-“us” cognitive processes.

History is written by the victors, and my explicit morality is written by the parts of my brain that can talk. But the speaking parts of my brain only have access to a tiny fraction of the cognitive events occurring within my brain. Like people who help others nearby because nearby people are more visible, my spoken judgments intuitively argue in favor of concern for those neural processes introspectively accessible to them.

Since 2013, I've been haunted by the question of whether, and how much, I care about processes that aren't reflected on in some elaborate self-modeling way. Viscerally I can still see how we might not care about unreflected-on emotions, since we can't imagine such emotions from a first-person perspective. Any emotions we do imagine are reflected-on emotions because the imaginative act makes them reflected on. Out of sight, out of mind. But we don't feel the same way about suffering in other minds of which we're unaware. It still matters if someone gets hurt, even if I never know that it happened. So maybe I should care about unreflected-on emotions too.

This dilemma raises a broader question: What do I care about in general, and why? The answer in my case seems to be something like: When I imagine myself suffering, it feels really bad, and I want to stop it. Likewise, if I know that something else is suffering in a similar way, I want to stop that. But what does "in a similar way" mean? Naively, it means "in a way that I can imagine for myself". But then this seems to include only emotions that are reflected upon, since any brain processing that I imagine from a first-person perspective is reflected upon. Rationally, however, it seems I should broaden my moral circle to include minds that are too alien for me to imagine being directly but that still share third-person similarities to myself. In this case, processes not reflected on may begin to matter somewhat as well.

I think the moral case for caring about simple forms of self-reflection, or even unreflective processes, is much easier to see from a third-person standpoint. When I think about my own consciousness in first-person terms, it intuitively feels like a binary, on/off property that's far too complex to be present in simple systems. However, from a third-person perspective, it seems more clear that there are no sharp dividing lines between simpler and more complex computational processes—just shades of gray. Indeed, it's often difficult from a third-person perspective to see consciousness emerging at all from a complicated combination of mechanistic parts like those in the human brain. The fact that everything looks at first glance to be unconscious from a third-person, mechanistic perspective makes it more intuitive that "simple and apparently unconscious" processes may not be fundamentally distinct from "complex and conscious" processes, even if they feel that way during first-person introspection.

Luke Muehlhauser's intuitions

Muehlhauser (2017) and I are both illusionists regarding consciousness. Muehlhauser (2017) reports that, for him, there may not be any significant moral implications of illusionism:

After all, my intuitions about (e.g.) the badness of conscious pain and the goodness of conscious pleasure were never dependent on the “reality” of specific features of consciousness that the illusionist thinks are illusory. Rather, my moral intuitions work more like the example I gave earlier: I sprain my ankle while playing soccer, don’t notice it for 5 seconds, and then feel a “rush of pain” suddenly “flood” my conscious experience, and I think “Gosh, well, whatever this is, I sure hope nothing like it happens to fish!” And then I reflect on what was happening prior to my conscious experience of the pain, and I think “But if that is all that happens when a fish is physically injured, then I’m not sure I care.” And so on.

I emotionally sympathize with the intuition that I don't care about pain when it's not "noticed". But unlike Muehlhauser (2017), I think illusionism does have major implications for my moral sensibilities here. That's because prior to illusionism, one imagines one's "conscious" feelings as "the real deal", with the "unconscious" processes being unimportant. But illusionism shows that the difference between conscious and unconscious feelings is at least partly a sleight of hand. (Conscious and unconscious experiences do have substantive differences, such as in how widely they recruit various parts of the brain (Dehaene 2014).)

Put another way, what is the "this" that's referred to when Muehlhauser cares about "whatever this is"? From a pre-illusionism mindset, "this" refers to the intrinsic nature of pain states, which is assumed by many philosophers to be a definite thing. After embracing illusionism, what does "this" refer to? It's not clear. Does it refer to whatever higher-order sleight of hand is generating the representation that "this feels like something painful"? Is it the underlying signaling of pain in "lower" parts of the nervous system? Both at once? Unlike in the case of qualia realism, there's no clear answer, nor does there seem to me to be a single non-realist answer that best carves nature at its joints. That means we have to apply other standards of moral reasoning, including principles like non-arbitrariness. And as my current article has explained, the principle of non-arbitrariness makes it hard for me to find an astronomical gulf between "noticed" and "unnoticed" pains, especially after controlling for the fact that "noticed" pains tend to involve a lot more total brain processing than "unnoticed" ones. Even on a "per recruited neuron" basis, I think "noticed" pain does matter more than "unnoticed" pain because the extra self-reflectiveness entailed by "noticed" pain seems to add extra nuance to the process. But I struggle, especially looking at the neural systems from a third-person standpoint, to see an enormous difference in per-neuron moral importance between the two cases.

Muehlhauser (2017) says: "The pain I felt 5 seconds after I twisted my ankle is a positive example of conscious experience, and whatever injury-related processing occurred in my nervous system during those initial 5 seconds is, as far as I know, a negative example." If we take it as given that the pre-"noticing" nociception wasn't "conscious", then we can find an enormous gap between the pre-"noticing" and post-"noticing" pain. But this raises the question of why we assumed the pre-"noticing" pain wasn't conscious to begin with, given that the gulf between pre-"noticing" and post-"noticing" looks less sharp in light of illusionism and neuroscience.

Muehlhauser (2017) acknowledges "the possibility that somewhere in my brain, there was a conscious experience of my injured ankle before 'I' became aware of it." He seems to hold out hope that future empirical and theoretical discoveries will make it more clear whether to classify the pre-"noticing" neural activity as conscious or unconscious. Personally, I suspect the matter will always remain a non-obvious and contested judgment call, but certainly future findings may help clarify the issues in play.

Higher-order theories are complementary to first-order theories

For the reasons discussed above, I find it implausible that a proper account of moral standing will only include (sufficiently complex) higher-order reflection. It seems that the properties of what's being reflected upon matter as well, and we can use various first-order accounts of consciousness (global-workspace theory, integrated-information theory, etc.) to ground some of our intuitions about which first-order processes matter and how much. I suggest that higher-order theory can represent one factor among many that we consider when evaluating how much we care about a given cognitive system.

Different theories of consciousness are not mutually exclusive. Global broadcasting of integrated information can lead to higher-order thoughts. All of these components are parts of what happens in brains, and some theories of consciousness just privilege certain of them more than others. I don't think any one of these accounts by itself does justice to the depth of what we want "consciousness" to mean. Robert Van Gulick makes the same point: "There is unlikely to be any single theoretical perspective that suffices for explaining all the features of consciousness that we wish to understand."

Is self-reflection special?

My views on the subject of this essay vacillate. There are times when I feel like consciousness should be seen as relatively binary, while at other times it seems very graded. We have a strong intuition that there's something special going on in our heads that's different from "dumb" processes outside us and even many dumb processes in our brains. Pre-theoretically the distinction is cast in terms of a feeling of what it's like to be ourselves. Post-theoretically, it's plausible to explain the distinctiveness in terms of our brain's ability to reflect on itself in such a complex way that it synthesizes narratives about its experience and generates strong intuitions about its unified selfhood and consciousness. This kind of sophisticated self-reflection is indeed rather distinctive among physical processes, though of course self-reflection comes on a continuum, since any system is self-reflective to some degree.

The big question is whether a self-reflective system has to be sufficiently complex and has to assert its own consciousness like our brains do before it has nontrivial moral status. Does the fact that we intuitively find our own consciousness to be special imply that we actually don't care about much simpler systems? Alas, there is no straightforward answer to the question of "what do we really care about?" We just have fragments of intuitions in various directions at various times. There's not a concrete thing we're pointing to when we describe "our values". Feelings about whether self-reflection of a sufficiently complex sort is special among physical operations vary from person to person, context to context, and thought experiment to thought experiment.

An anthropic principle for consciousness

Several modern theories of physics predict a multiverse containing many kinds of universes, most of which are unsuitable for life. The reason we find ourselves in a universe that can support life is simply that we couldn't exist in the others. This is the "anthropic principle".

Likewise, if consciousness of some sort exists throughout computational systems, why do we find ourselves in complex human bodies rather than in, say, our laptops?c Again the answer is a kind of anthropic argument: At least on present-day Earth, the only computations that are capable of asking these kinds of questions about themselves in a nontrivial way are intelligent biological minds, especially advanced mammals and birds. (Maybe a few computer programs can already ask these questions in a nontrivial way as well; it's not clear.)

We can take this observation one of two ways:

  1. Only care about agents that can reflect on their own consciousness to a nontrivial degree, since if we're asking this kind of question, we couldn't be anything else.
  2. Care about all consciousness to some degree and remember this anthropic principle as just an explanation of why we find ourselves inside the bodies of highly intelligent animals.

I tend to prefer #2, though I can imagine why #1 might be somewhat compelling as well.

Anthropic observer ≠ conscious observer

Sometimes it's suggested that anthropic reasoning casts doubt on animal consciousness because if animals were conscious, they're so numerous that we should be them rather than humans. Yvain proposes this in "It's not like anything to be a bat" (2010), and Russell K. Standish presents a more nuanced version of the argument in "Ants are not Conscious" (2013). (Standish clarifies: "An alternative reading of this paper is that it is not talking about consciousness per se, but what is, or is not, allowable within the anthropic reference class." That's fine, but in that case the paper's title is potentially misleading.)

These arguments are wrong because most conscious observers are not anthropic observers. An anthropic observer is someone who asks questions like, "Why am I in this universe rather than another?" The types of physics in universes not friendly to life can't configure themselves into the right patterns to ask that question (except by chance in the case of certain Boltzmann brains), which is why we can conclude we're not in those universes. That's the anthropic principle.

But there's little connection between that and being conscious. After all, even most humans don't reflect on being observers in one place rather than another 99.99% of the time. In that sense, even most humans usually aren't anthropic observers. If we were forced to divide up spacetime into regions that are anthropic observers and those that aren't, we'd carve out little circles around those minds that happen to be thinking about anthropics, and everything else—both in our universe and in other, inhospitable universes—would not be anthropic observers. Being "conscious" doesn't require being an anthropic observer. There's nothing special about thinking about anthropics per se as far as consciousness goes. Even higher-order theorists probably agree with this, since they think high-level reflection of some sort is crucial for consciousness, not high-level reflection on one's status as an anthropic observer.

Of course, one could extend the anthropic principle to ask: "What universes are capable of supporting (complex) conscious minds of any kind, whether they reflect on their own existence or not?" But that's a different question from what anthropics ordinarily asks about, and reflection on "what observer we find ourselves to be" has little to say about the extent of consciousness in general throughout the rest of the world.

I think the self-sampling assumption in anthropics is wrong, but even if it were right, it wouldn't apply to "conscious" minds in general. Why would it? It's not as though any conscious mind is always implicitly affirming its own status as an anthropic observer at all times. Moreover, "conscious" minds themselves don't even exist as discrete entities with clear boundaries. This whole business of anthropic reasoning based on discrete observers is confused.

Footnotes

  a. In the language of databases, the brain is more BASE than ACID. The Cartesian theater idea of a single, consecutive stream of consciousness with a definite finish line for when something becomes conscious might be compared with Single Version of the Truth.
  b. The linked page says "Do not quote" because this is an early draft of a book chapter. But the full book is super expensive, so I don't have a copy of it.
  c. Note that this can be asked without invoking confused notions of "observers" as discrete "souls" that inhabit bodies. Rather, we're just asking why the computations that are capable of asking questions like these reside in big animals. This is no more mysterious than asking why pollen is found on flowers rather than in a plant's roots.