What Knowledge of Consciousness Says about Theories of Consciousness

By Brian Tomasik

First written: 4 Sep 2014. Last nontrivial update: 13 Mar 2017.

Summary

Philosophical frameworks for understanding consciousness need to explain how we know that we're conscious—both physically, in the patterns of our neurons, and phenomenally, in our subjective experiences. Some views on consciousness explain our self-knowledge of being conscious more simply than others. Functionalism gives the most elegant account. Its rejection by many philosophers rests on a religious-type intuition, not shared by everyone, that there is a hard problem of consciousness.

Introduction

In Zen and the Art of Consciousness, Susan Blackmore explores the nature of subjective experience via "ten Zen questions" for introspection. In this piece I propose another Zen-sounding question that any theory of consciousness must answer:

How do you know you're conscious?

I don't mean this in the skeptical sense of "How can you know anything?" Rather, I intend the question more literally: By what mechanism does knowledge of your being conscious enter your physical brain and phenomenal awareness?

Views

In this section, I describe how various perspectives answer this question. I refer to the views by their ordinary names as well as by the letters that David Chalmers gives them in his "Consciousness and its Place in Nature".

For each view, I present a diagram showing a proposed pathway. We need to explain two things:

  1. Why neural networks in the brain have configurations corresponding to the belief that I'm conscious. These networks produce, among other things, verbal attestations of my consciousness. Even so-called philosophical zombies have these network configurations. I circle these explanations in green in my diagrams.
  2. Why I have the phenomenal experience of feeling conscious. Philosophical zombies, if they were possible, would lack this part. I circle this in orange in my diagrams.

Functionalism (type A)

Functionalism has an easy time answering these questions. For functionalism, phenomenal consciousness just is certain operations of information broadcasting and self-reflection. Hence, the physical mechanism by which one's brain learns about itself as a conscious mind is the phenomenal experience of consciously perceiving oneself as a conscious mind.


Functionalist explanation of how I know I'm conscious. I release this image into the public domain.
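
To make the functionalist picture concrete, here is a minimal toy sketch in Python (my own illustration with invented names, not a model from the literature): the system's "knowledge of being conscious" is nothing over and above a process that reads its own state and broadcasts a report.

```python
# Toy illustration of the functionalist account (not a model of real brains):
# "knowing I'm conscious" is just self-monitoring machinery producing a report.

class ToyMind:
    def __init__(self):
        # First-order processing: ordinary perceptual/cognitive states.
        self.state = {"seeing": "red", "feeling": "warm"}

    def introspect(self):
        # Second-order processing: the system reads its own first-order state.
        # On the functionalist view, this operation IS the phenomenal
        # self-knowledge; no further non-physical ingredient is needed.
        return {"having_states": True, "contents": dict(self.state)}

    def report(self):
        # The same functional pathway that constitutes the self-knowledge
        # also drives the verbal attestation.
        summary = self.introspect()
        if summary["having_states"]:
            return "I am conscious: I notice " + ", ".join(
                f"{k}={v}" for k, v in summary["contents"].items())
        return "No states noticed."

print(ToyMind().report())
# -> I am conscious: I notice seeing=red, feeling=warm
```

The point of the toy is that the introspect-and-report pathway is the whole story; there is no further ingredient whose connection to the report would need a separate explanation.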

Interactionism (type D)

Interactionist views, such as substance dualism, claim that matter is not sufficient for mind. Rather, the mind is some other stuff that interacts with matter. Dualists must recognize that our physical brains contain knowledge of ourselves as conscious. But how does that information enter our brains? One possibility is that conscious experience tells our physical brains about its existence:


One possible interactionist explanation of how I know I'm conscious. I release this image into the public domain.

It's not clear how mind helps the neurons configure themselves, since it would seem that their behavior can be explained by physics alone. Maybe mind is a sort of "guiding spirit" that helps push the neurons along so that they move in ways they wouldn't without the spirit.

Epiphenomenalism (type E)

Epiphenomenalism holds that the mind is causally influenced by physical operations, but it's an effete byproduct that causes nothing further. This raises the question: How does the physical brain know that it's conscious? It must do its own physical computations, analogous to those that the functionalist claimed it did. Then perhaps those beliefs are transmuted into phenomenal beliefs by the epiphenomenon-generation process?


One possible epiphenomenalist explanation of how I know I'm conscious. I release this image into the public domain.

However, justifying belief in one's consciousness because of the physical operations seems questionable, because as Chalmers notes: "my zombie twin would produce the same reports (e.g., 'I am conscious'), caused by the same mechanisms". So, Chalmers suggests, maybe we should consider phenomenal experience to be self-justifying: "consciousness plays a role in constituting phenomenal concepts and phenomenal beliefs. A red experience plays a role in constituting a belief that one is having a red experience, for example. If so, there is no causal distance between the experience and the belief." I draw this self-justification with a circular arrow in the next diagram, which proposes another possible epiphenomenalist account:


Another possible epiphenomenalist explanation of how I know I'm conscious. I release this image into the public domain.

Depending on the mechanics of the interaction, one might fear that there would no longer be a necessary connection between conscious beliefs and the belief states of one's physical brain. Chalmers agrees: "the relationship between consciousness and reports about consciousness seems to be something of a lucky coincidence, on the epiphenomenalist view."

Eliezer Yudkowsky lambasts this seeming absurdity:

Once you've postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the "mysterious redness of red"?

Isn't Descartes taking the simpler approach, here?  The strictly simpler approach?

[...]

I am not endorsing Descartes's view.  But at least I can understand where Descartes is coming from.  Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness.  Fine.

But now the zombie-ists postulate that this mysterious stuff doesn't do anything, so you need a whole new explanation for why you say you're conscious.

Robert Kirk likewise finds this situation worrisome. He calls it "the problem of epistemic contact": Under epiphenomenalism, our bodies couldn't epistemically access our actual qualia.

Neutral monism (type F)

Neutral monism proposes that the world has the single causal structure described by physics, but that physics itself has a certain essence, one that is neither physical nor mental but "neutral" and can give rise to both the physical and mental aspects of reality.


One possible neutral-monist explanation of how I know I'm conscious. I release this image into the public domain.

But this leaves it unexplained why we believe we're conscious. How does the fundamental essence of what makes up the world create patterns of brain activity that correspond to knowledge of our consciousness?

One could posit something like parallelism, where our physical brain states just happen to be correlated with our phenomenal states, but this strongly violates Occam's razor.

Alternatively, one could claim there's a direct causal chain from essences to our physical brain states. That would be odd, like wood somehow shaping itself into a house. But even if that is possible, why does the essence of physics cause our brains to correctly know that we're conscious rather than leading them to mistakenly believe that we're not? And even if we overcome that question, we still seem to end up with something that looks like either substance dualism (essence causes physical events, which in turn affect the essence) or epiphenomenalism (the essence of physics is the cause and physical processes are the epiphenomenal result).

Type-B physicalism

Type-B views such as type physicalism suggest that consciousness is identical with certain brain states but that there remains an epistemic gap between physics and consciousness. The diagram here turns out to be the same as that for type-F views, because type-B physicalism says that, at least conceptually, consciousness means something different from just functional processes (even though the two things turn out to be identical, whatever that means). For this reason, I and others claim that type-B physicalism is really property dualism (type E, except without an arrow from physics to consciousness, because type-B views don't say that physical operations cause consciousness but that physical operations are consciousness, whatever that means).

Ned Block's "biological theory" of consciousness suggests that functional operations are not sufficient for consciousness, but rather the specific electrochemical constitution of brains may be required. For example: "Information in the brain is coded electrically, then transformed to a chemical code, then back to an electrical code, and it would be foolish to assume that this transformation from one form to another is irrelevant to the physical basis of consciousness" (p. 1113).

Property-dualist views have the same problem as type-F ones did: If consciousness is separate from functional processing, how does our consciousness inform our physical brains that we're conscious? Block acknowledges this point: On his view, "the biological machinery of consciousness has no necessary relation to the biological machinery underlying reporting" (p. 1113).

Idealist monism

Idealism suggests that there is only thought, and physics doesn't exist except as patterns of conscious perceptions.


One possible idealist explanation of how I know I'm conscious. I release this image into the public domain.

Why functionalism wins

I find the functionalist (type A) account simplest and most intuitive. The diagrams may disguise hidden complexity in the other accounts by using fewer words for them, but substance dualism is actually more complex than functionalism, because within the realm of the mind there are presumably operations similar to those in a functionalist account, or else an exponentially large plethora of fundamental primitives. Otherwise, how would we generate such a variety of conscious experiences? Likewise, idealism looks simple, but the laws specifying how illusions of the physical world change over time involve as much complexity as if the physical world actually existed.

Here is the main point of these diagrams: Suppose there were some extra "consciousness stuff" over and above functions. How would we know we have it? We would know we have it because it would causally affect our material brains to tell them about its presence. But in that case, why does it help for the "consciousness stuff" to be non-functional (non-physical, epiphenomenal, an "essence", or whatever), if its only contribution to our knowing about it is via its function? As John Lennon would have said: "All you need is function." This idea that non-functional properties of a brain don't affect an agent's behavior, verbal reports, etc. is a motivation behind Chalmers's principle of organizational invariance.
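
A toy way to dramatize this point (a hedged sketch of my own, in Python with hypothetical names): give an agent an extra "consciousness stuff" attribute that no functional pathway ever reads. Nothing the agent does or says can then track that attribute, so the attribute can't be how the agent knows it's conscious.

```python
# Toy illustration of why non-functional "consciousness stuff" can't explain
# our knowledge of consciousness: a property that no function reads cannot
# affect any output, including self-reports.

class Agent:
    def __init__(self, phenomenal_essence):
        # The posited extra ingredient. Crucially, no method below reads it.
        self._phenomenal_essence = phenomenal_essence

    def report_consciousness(self):
        # The report is generated purely by functional machinery.
        return "I am conscious!"

me = Agent(phenomenal_essence=True)
zombie_twin = Agent(phenomenal_essence=False)

# Behavior is identical regardless of the extra ingredient, echoing Chalmers's
# observation that a zombie twin would produce the same reports:
assert me.report_consciousness() == zombie_twin.report_consciousness()
print(me.report_consciousness())  # -> I am conscious!
```

The assert passes precisely because the extra attribute is causally idle, which is the principle of organizational invariance in miniature.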

Honestly, I think the next-best picture to functionalism (type A) is interactionism (type D)—good old Cartesian dualism. Yudkowsky seems to agree, judging from the passage quoted previously. At least on this picture the consciousness is actually doing something, and it's clear by what mechanism we learn about it. But I think interactionism is a wrong-headed concept. If something influences and can be influenced by physics, it seems proper to call that thing "physics" as well. Fundamentally, there would be no ontological difference between that thing and all other parts of physics. But in that case, interactionism reduces to (functionalist) physicalism. And if the interactionist substance has a different nature than the rest of physics, then that's just property dualism (type E). So I don't think there's a coherent interactionist position that doesn't collapse to something else.

In fact, I think we can generalize this point. I have yet to come across a philosophical approach to consciousness that doesn't boil down to one of type-A physicalism, idealism, or property dualism. As we saw, identity theories (type B) are really just property dualism. Substance dualism collapses to either type-A physicalism or property dualism, depending on how we interpret it. Neutral monism, again, is either type-A physicalism (if we don't postulate consciousness as a separate property that can either be present or not in physical systems) or property dualism (if we postulate that the intrinsic consciousness of physical stuff is somehow a separate conceptual property from the behavior of physical systems). Basically, we either think that there's just physics in motion, or we think consciousness is a property that's different from physics in motion (property dualism), or we adopt idealism (which is ultimately just like physicalism but perhaps more complicated). All the detailed proposals I've ever seen boil down to this trilemma.

Objection: "We know about consciousness by other means"

The following section is most easily presented as a dialogue between a hypothetical defender of type-E and type-F views (call him "EF") and myself ("Me"), a defender of type A.

EF: You've been assuming that in order for our physical brains to learn that X exists, something about X has to causally affect our brains. For instance, I learn that rocks exist because photons bounce off the rocks and enter my eyes. But maybe we learn that we have phenomenal consciousness by some other means. In this case, epiphenomenalism and neutral monism have an easier time explaining why our physical brains correctly believe that there's an extra what-it's-like-ness over and above functional processes. For example, one possibility is that our possession of what-it's-like-ness is a truth in a platonic realm of truths that we can access in the same way as we access mathematical truths.

Me: Well, I don't accept mathematical platonism either. In fact, the reasons why I reject platonism are the same as the reasons why I reject type-E and type-F views: It complicates the ontology to postulate a separate realm of existence (whether phenomenal or mathematical), and even if it did exist, how would we come to know about it? The latter question is called "Benacerraf's Epistemological Problem", and it's essentially the same as the argument in this piece against epiphenomenalism or neutral monism.

EF: Even if you don't buy platonism, maybe it can still be the case that our physical brains deductively conclude in a correct manner that there's an extra what-it's-like-ness to our experiences. That's how the correct physical belief in phenomenal consciousness can exist even on type E or F views.

Me: The neural configurations that correspond to our feeling of what-it's-like-ness are visceral, intuitive, and widespread. In contrast, an inferential argument for our having what-it's-like-ness over and above physical functions is abstract and probably only thought about by philosophers. So even if our brains could configure themselves to correctly believe in what-it's-like-ness via logical reasoning, that's not how most people acquire the belief. A further explanation is needed.

EF: Maybe the intuition about our what-it's-like-ness is hard-wired by evolution, in a similar way as our intuition that modus ponens is true is hard-wired. This would explain why what-it's-like-ness is intuitive.

Me: Believing in modus ponens helps intelligent animals survive better because they can reason more powerfully. In other words, the truth of modus ponens implies an evolutionary landscape that causes intelligent organisms to come to believe it, and if modus ponens were false, we would see a different kind of evolutionary landscape (assuming the coherence of such a thought experiment). But how does the truth of type-E or type-F what-it's-like-ness imply anything different about the evolutionary landscape than if those views are false? I don't see a way in which the evolutionary landscape would induce our physical beliefs to track the truth of this supposed non-functional feature of reality. In other words, we believe in modus ponens because that fact has structural/functional consequences for physics, which affects the structural/functional process of evolution. But epiphenomenal/intrinsic consciousness properties have no structural/functional implications for physics and so no influence on the process of evolution.

EF: Maybe the intuitive belief in what-it's-like-ness is ultimately based on rational arguments by previous generations of philosophers that have diffused into society at large. Or maybe people just get lucky that intuitive what-it's-like-ness feelings match the logical deductions of our reasoning about consciousness. This still doesn't show that the logical reasoning is wrong.

Me: At least you must admit that logical reasoning about having what-it's-like-ness can be wrong. You believe that zombies are possible, right?

EF: Of course. Zombies are the basis of my belief that consciousness is more than physical/functional properties.

Me: Zombies engage in logical reasoning to prove that they're conscious in a special non-physical and non-functional way, but by definition, this conclusion is in error. Since zombies are possible, it's possible to derive false conclusions from logical reasoning by a rational agent when he deduces that there's something it's like to be him.

EF: Fine.

Me: How do you know you're not a zombie? Rationally, you don't. All the logical arguments that you might put forth are also made by your zombie twin.

EF: "I" include both my physical and phenomenal qualities, not just my physical qualities. And my phenomenal qualities know in some way that they're conscious. So even though just my physical body can't tell if it's inhabited by consciousness, the set of (physical body + phenomenal mind) does know.

Me: All of your actions are taken only by the physical/functional part of "you", so even if you try to group the extra phenomenal experiences in with "you", they don't affect the ways in which you act. The (alleged) fact that you're conscious has no impact on your actions to make the world a better place, and so on. It seems to be just a coincidence that there's an extra phenomenal nature/essence dangling on (or inherent within) the physics of your behavior.

EF: Things that are coincidences can be true.

Me: Sure, but coincidences can also be false (and usually are). Occam's razor favors the view that you're actually a zombie merely claiming to have dangling/inherent phenomenal properties.

EF: Why do you believe in Occam's razor? That's an a priori, religious-type belief of yours. If you can have your "Occam's razor" religion, why can't I have my "non-type-A consciousness" religion?

Me: Fair enough. Of course, Occam's razor works very well on many practical problems. But you're right that its application to untestable metaphysical views is something of a leap of faith. I believe in Occam's razor without complete proof of it. Your physical/functional brain believes in a kind of consciousness for which it has and can never have any direct evidence. Religious people believe in a God who never reveals Himself. And so on.

Dennett's response

Daniel Dennett also addresses this topic in Consciousness Explained, Ch. 12.5. After noting that "There could not be an empirical reason [...] for believing in epiphenomena", he suggests that the main alternative would be to believe in epiphenomenal qualia for a priori reasons. He finds this absurd, because, for instance, if someone told you "that there are fourteen epiphenomenal gremlins in each cylinder of an internal combustion engine"—gremlins which have no effect on anything physical—it would be crazy to postulate them a priori (p. 403).

Dennett then considers the objection that qualia, unlike gremlins, are sincerely believed and have played "a major role in our conceptual scheme." Dennett replies (p. 404):

And what if some benighted people have been thinking for generations that gremlins made their cars go, and by now have been pushed back by the march of science into the forlorn claim that the gremlins are there, all right, but are epiphenomenal? Is it a mistake for us to dismiss their 'hypothesis' out of hand? [...] These are not views that deserve to be discussed with a straight face.

A defender of epiphenomenalism might object that the consciousness case is fundamentally different from other reductions because all other reductions make intuitive sense, while the reduction of consciousness does not. But isn't this likely to be just an artifact of contingent neural wiring? Maybe Newton and Leibniz would have found it equally preposterous to explain the universe without God. For my own part, I often don't find Dennett's view very strange at all, because I've rewired my intuitions to see how much sense it makes (though I can also return to the explanatory-gap view on demand, by activating different intuitions). Similarly, special relativity rewires your intuitions about time and length. Intuitions are rubber. If you can't imagine Dennett's view being intuitive, maybe it would help to absorb yourself in it more fully for a while and see whether that perspective changes.

Explanatory gap?

Some would object to my drawing an orange box in the functionalist diagram. They might claim that the physical can't account for the mental. This is the whole reason that, say, epiphenomenalists draw a separate box: physical processing isn't necessarily consciousness, so we need something else to be consciousness. But why? Isn't it at least as plausible that the physical processing by which one perceives one's own brain configuration is what we call phenomenal awareness of one's awareness, as it is that phenomenal awareness is some other, mysterious thing?

In "Dissolving the hard problem of consciousness", Glenn Carruthers and Elizabeth Schier assert that arguments purporting to demonstrate the existence of the hard problem are circular: Only if you begin with intuitions that physics alone doesn't necessitate subjective experience do zombie-type arguments for the explanatory gap have any force. But what if we have intuitions, perhaps refined based on other philosophical arguments, that zombies are impossible?

The hard problem is fundamentally a religious dispute—a battle of conflicting presuppositional intuitions. As Dennett notes in "Explaining the 'Magic' of Consciousness" (p. 17): "There is no way to nudge these two alternative positions closer to each other; there are no compromises available. One side or the other is flat wrong." According to The PhilPapers Surveys, 16.0% of philosophers find zombies inconceivable. (Only 23.3% find them metaphysically possible.)

Consciousness as illusion

Alice: How do you know you're conscious?

Bob: It's obvious! It's the most certain thing in the world.

Alice: Ok, but how does it work? What is phenomenal consciousness made of?

Bob: As David Chalmers says, maybe "experience itself [is] a fundamental feature of the world, alongside mass, charge, and space-time."

Alice: So does the feeling of knowing that you're conscious pop into your subjective experience instantaneously and atomically, with no smaller parts?

Bob: Umm, no. The subjective experience is made of smaller parts that combine together to create phenomenal states.

Alice: Why is that any less weird than physics combining to create phenomenal states? I can ideally conceive of a mind with all the right subjective-experience parts that isn't conscious.

Bob: Hmm, well then maybe consciousness is a single, unified, indivisible, instantaneous thing.

It's common to see people embrace the idea of conscious thought as somehow unbound by physical limitations. "Mind over matter", as they say. For example, in Bruce Coville's The Search for Snout (p. 52), Tar Gibbons explains: "while the limit on physical speed seems to be that of light, we have found that the speed of thought is instantaneous." Our naive conception imagines consciousness as the kind of thing that can make itself manifest in some pure, immanent way.

Reductionists are right to call this an illusion. Phenomenal experience is not an ethereal thing to which we have instantaneous, special access. Any claim we make to being certain about our consciousness is a claim that our mechanical neurons produce by mechanical means. We can already, or will in the future be able to, trace this via brain imaging and neuroscientific modeling. It's an illusion to suppose we can have experiences in some ghost realm that grant us access to non-corporeal truths. When people protest, "But I'm certain I'm conscious", we can say: "Yes you are, but that feeling of being conscious is a series of material steps, not something that's 'just known' as if truths could appear in conscious minds like rabbits in hats."

"How do you know you're conscious?" This is the koan that first brought me a feeling of enlightenment in fall 2009. When I internalized how every sensation I had, including indubitable certainty of my own consciousness, corresponded to a set of physical operations in my brain, I felt a sense of elation, as though I could finally see through a haze. There are times when I can recover the same frisson of insight.

Taking a cue from the protest slogan about democracy, I understand that "This is what physics [of certain types] feels like." But contrary to neutral monism as Chalmers describes it, physics doesn't have a separate essence that somehow embodies an ineffable stuff that is phenomenal experience; rather, phenomenal experience is exactly and nothing more than the physical operations that comprise it. What else could consciousness be? How would it help for it to be anything else? Talk about "the mind" sets us up for failure, because then our brains picture a separate thing and can't figure out how it relates to physical things. The mind is not a separate substance, nor an ineffectual byproduct, nor an internal essence—the mind just is the set of physical operations, full stop.

As we saw above, if consciousness were anything other than the processing itself, we'd have to invent stories of how we know we have it. We'd double the number of entities involved—creating one pathway for physically knowing ourselves to be conscious and a separate pathway for phenomenally perceiving ourselves to be conscious. Rather, there is only the processing, and talk about "what it feels like" is a kind of poetry that we apply to soothe our otherwise restless (material) souls.

Awareness vs. self-awareness

This essay has focused on the process of reflecting on our own consciousness as an important test of philosophical accounts of consciousness. However, this doesn't mean that consciousness requires self-reflection. I think this is where higher-order theorists confuse themselves. We report on and think about our consciousness via reflection, but that doesn't imply that nothing important is going on before we reflect. Indeed, once we realize that physics is all there is, it becomes clear that first-order brain processing is in a sense fundamentally akin to second-order thoughts about first-order processing. It's not as though some light magically turns on when a layer of monitoring is added.

What's so special about cognitive modeling, anyway? The actual world is vastly more vivid and detailed than a model. Rodney Brooks built robots that lacked internal models and instead acted on perceptions drawn directly from the world. The world is in some sense the best possible model. So how important is it for a mind to reflect on itself as a mind that does things, compared with just doing those things? It seems plausible to me that modeling adds some moral importance, but it doesn't contain the entire importance of the mind.
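
As a minimal sketch of that contrast (in Python, with invented sensor and action names, in the spirit of Brooks's reactive controllers): the reactive agent re-queries the world at every step, while a model-based agent consults a stored model that can go stale.

```python
# Toy contrast between a Brooks-style reactive agent and a model-based agent.
# The world dictionary and action names are made up for illustration.

WORLD = {"obstacle_ahead": True}  # stands in for the actual environment

def reactive_step(world):
    # Re-reads the world each step: "the world is its own best model."
    return "turn" if world["obstacle_ahead"] else "go_forward"

class ModelBasedAgent:
    def __init__(self):
        # Internal model, which may drift out of date with the world.
        self.model = {"obstacle_ahead": False}

    def step(self):
        # Acts on the stored model rather than re-sensing the world.
        return "turn" if self.model["obstacle_ahead"] else "go_forward"

print(reactive_step(WORLD))      # -> turn (tracks the actual world)
print(ModelBasedAgent().step())  # -> go_forward (stale model misleads)
```

The reactive agent gets things right here not because it's smarter but because it lets the world do the bookkeeping; the question in the text is how much extra moral weight the modeling layer itself carries.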

Penrose-Hameroff theory

Sir Roger Penrose and Stuart Hameroff have proposed orchestrated objective reduction (Orch-OR) to explain consciousness via quantum mechanics. Hameroff claims that classical neural computation (and hence non-quantum artificial intelligence) can perform information processing, but Orch-OR is required for phenomenal experience. Technically Orch-OR is a brand of physicalism, but not all physicalist hypotheses are created equal. Let's ask Orch-OR the same question as we asked non-physicalist theories of consciousness: How does the theory explain your believing and reporting that you're conscious?

Neuroscience based on classical computation does this fine, because neural activity in thought and speech centers can generate beliefs and reports of having phenomenal experience. How does Orch-OR explain it? Presumably either the quantum process posited by Orch-OR interacts with classical neurons to generate the right output (which looks similar to type-D interactionist dualism, even though Orch-OR is technically physicalist), or else Orch-OR does its thing to create consciousness while classical neurons generate the right observed outputs by some other means (which looks similar to type-E epiphenomenalism). Either way, the Orch-OR part seems unnecessary, because it's easy enough to imagine the classical neurons generating the right outputs on their own. (Orch-OR would become more compelling if somehow the observed neuron-level activity of thinking and reporting couldn't be explained via classical computation and instead required quantum effects to behave the way it does. If that were the case, then Orch-OR wouldn't just be important for understanding consciousness but would also be a crucial part of ordinary, third-person neuroscience. I don't know enough about Orch-OR to say whether this is what it claims or whether the quantum stuff in the theory is seen as explanatorily separate from higher-level neural firing patterns.)

We can see that as a general principle, if we try to separate consciousness from thinking and behaving, we run into trouble. Good theories of consciousness will directly and clearly explain why we physically say we're conscious.