Summary
Physicalists are divided on the question of whether there's a hard problem of consciousness. David Chalmers taxonomizes the two main camps of the debate as "type-A" and "type-B" physicalists. This piece defends type-A physicalism, which is the view that there is no hard problem of consciousness because consciousness is not an ontologically primitive thing.
Note, 22 Jul. 2017: When first writing this piece, I mistakenly assumed that type-B physicalists were a monolithic group. When I criticize type-B views, I have in mind those particular views that take the identity between certain physical and phenomenal states as unexplained and a posteriori. However, some type-B adherents of the phenomenal concept strategy, for example, seek to explain the epistemic gap between the physical and the phenomenal in sensible, non-mysterious ways. Rather than criticizing "type-B" views, I perhaps should have criticized what's known as "a posteriori physicalism". Anyway, I'm not an expert on the type-B literature, so further corrections are welcome.
Contents
- Summary
- Introduction
- The views contrasted
- Defending type A
- Phenomenal experience as acquaintance knowledge
- Meanings and reductions
- Type-B physicalism is disguised property dualism
- Dilemma against type B: Many interpretations or epiphenomenalism
- A heuristic case against the "zombic hunch"
- Type A feels more right
- Why this question matters
- Analogy with the measurement problem
- What is indexicality?
- Acknowledgements
- Footnotes
Introduction
- "I know I'm conscious, but I can't be sure others are."
- "What's the probability that insects are conscious?"
- "How is it that neural firing gives rise to the qualia I feel?"
These are probably the most common ways to think about consciousness among science-minded people.
However, some thinkers, like Daniel Dennett and Marvin Minsky, contest these statements as embodying a residual dualism: Such ideas reify consciousness as more than the functional operations that brains perform.
This piece contrasts the naive view with the Dennettian view. Chalmers taxonomized them as "type-B materialism" and "type-A materialism", respectively, in his "Consciousness and its Place in Nature". (In my article, I typically use the word "physicalism" rather than "materialism".)
The views contrasted
Both type-A and type-B views are physicalist. They both agree that consciousness is a natural process emerging from physical operations. They may even both agree on functionalism: that consciousness is best thought of in terms of what a system does. Since the views converge on these points, arguments defending physicalism in general don't help resolve the dispute; this is an intra-family feud.
Where the perspectives disagree is on what kind of thing consciousness is. This relates to whether they find the hard problem of consciousness compelling. The following table summarizes the differences, which I'll briefly explain below the table:
| | Type A | Type B |
|---|---|---|
| Theory | analytic functionalism or, equivalently, eliminativism on qualia | non-analytic identity theories or non-analytic functionalism |
| Is there a "hard problem" / "explanatory gap"? | no | yes |
| If you know the laws of physics, then whether a given thing is conscious can in principle be determined | a priori | a posteriori |
| Zombies are | inconceivable (for analytic functionalists), or we are zombies (for eliminativists)a | conceivable (though impossible) |
Explaining the table
The entries within each column are not necessarily equivalent claims, but for each view, the answers in the lower rows all follow from the position stated in the "Theory" row.
Analytic functionalism
Analytic functionalism looks for functional processes in the brain that roughly capture what we mean by words like "awareness", "happy", etc., in a similar way as a biologist may look for precise properties of replicators that roughly capture what we mean by "life". Just as there can be room for fuzziness about where exactly to draw the boundaries around "life", different analytic functionalists may have different opinions about where to define the boundaries of "consciousness" and other mental states. This is why consciousness is "up to us to define". There's no hard problem of consciousness for the same reason there's no hard problem of life: consciousness is just a high-level word that we use to refer to lots of detailed processes, and it doesn't mean anything in addition to those processes. Given the laws of physics, we could deduce the existence of consciousness in an a priori way (such as by simulating the laws of physics and looking at what creatures emerged) in the same way as we could deduce the existence of life a priori. Since consciousness/life are only defined in functional terms, we could determine whether a hypothetical simulation of the universe contained instances of those functional processes. Finally, a philosophical consciousness zombie is inconceivable for the same reason that it's inconceivable to have a creature that satisfies all the biological properties of life but isn't "alive".
Note: I'm not an expert on analytic functionalism, and I think some authors may use the term to refer to a descriptive hypothesis about how ordinary people use language. I intend the term as a metaphysical thesis about what "mental states" actually are. (In particular, the thesis, on my interpretation, is that mental states are merely labels that we choose to apply to certain sorts of functional relations within physical systems.) I guess that how people use language matters in the following way:
- if by "consciousness" people mean "such-and-such functional processes", then analytic functionalism is the better way to describe type-A physicalism, while
- if by "consciousness" people mean "the intrinsic quality of a mental state (such as what qualiaphiles describe)", then eliminativism (see below) is the better way to describe type-A physicalism, since that kind of consciousness doesn't exist.
Eliminativism
Eliminativism amounts to the same metaphysical view as analytic functionalism; eliminativists just choose to use words in a different way. Rather than appropriating philosophers' words like "qualia" and giving them functional meanings, eliminativists reject the existence of the kind of qualia that philosophers (such as Nagel or Chalmers) have in mind. Eliminativists instead prefer to use less baggage-laden words to talk about consciousness.
For the eliminativist, because "consciousness" (as the philosophers define it) doesn't exist, we're free to define our own new language to describe brain processes. There's no hard problem of consciousness because phenomenal consciousness doesn't exist. Likewise, we can tell a priori whether a given thing has consciousness because no things have (the philosopher's kind of) consciousness. Zombies are conceivable, and in fact, we are zombies.
Type-B physicalists
Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior. I don't have a great analogy for this view because, in my opinion, this view is wrongheaded, and there are no legitimate analogies for it. The identity between water and H2O is often given as a rough analogy for type-B physicalism, but I think this is wrong, because water's properties are even conceptually nothing but the functional behavior of H2O molecules, and "water" is merely a label we apply to the kind of substance that behaves physically/functionally in such-and-such ways.
Type-B views see a hard problem of consciousness because consciousness is not just conceptually reducible to structural/functional properties, and it's unclear where the phenomenal nature itself of consciousness comes from. Determining whether a given thing is conscious requires more than just knowledge of physics because consciousness is not merely defined in structural/functional terms but has to be empirically correlated with structural/functional properties (such as through research on the neural correlates of consciousness). Because consciousness is not conceptually just structure/function, zombies are conceivable (though because physicalism is actually true, zombies are not metaphysically possible; conceivability does not imply metaphysical possibility).
Examples of type-A and type-B physicalists
Here are some people who have advanced what I construe as type-A views:
- Daniel Dennett. See "Quining Qualia". Seth (2007): "According to Dennett, the only ‘hard problem’ of consciousness is acknowledging that there is nothing more to consciousness than mechanisms of conscious access and global influence." Dennett (2012), p. 88: "This makes me, in Chalmers’ taxonomy, a ‘type-A materialist’ as contrasted with ‘type-B materialists’ such as Ned Block and ‘property dualists’ such as Chalmers himself."
- Marvin Minsky: "Our old ideas about our minds have led us all to think about the wrong problems. We shouldn't be so involved with those old suitcase ideas like consciousness and subjective experience. [...] 'consciousness' is only a name for a suitcase of methods that we use for thinking about our own minds. Inside that suitcase are assortments of things whose distinctions and differences are confused by our giving them all the same name."
- Susan Blackmore:b "From this perspective there is no mystery because there are no contents of consciousness and no difference between conscious and unconscious processes or events. [...] the science of consciousness is built on false premises." And:c "I do not doubt that neuroscientists can find, in ever greater detail, the [neural correlates] NCs of specific actions, thoughts, perceptions, and so on. [...] But they will never find the NCs of an extra added ingredient – ‘consciousness itself’ – for there is no such thing."
- Michael Graziano: "the argument here is that there is no subjective impression; there is only information in a data-processing device."
- Carruthers and Schier (2017): "We have no reason to think there is a Hard Problem of consciousness because we have no reason to think the Hard Phenomenon exists."
- Frankish (2012): "if we reject classic qualia realism, we should accept that all that needs explaining are ‘zero’ qualia – our dispositions to judge that our experiences have classic qualia." Frankish (2016): "Is the illusionist claiming that we are mistaken in thinking we have conscious experiences? It depends on what we mean by ‘conscious experiences’. If we mean experiences with phenomenal properties, then illusionists do indeed deny that such things exist."
- Rob Bensinger: "By process of elimination, I conclude that phenomenal anti-realism, or eliminativism, is probably true: phenomenal consciousness is neither reducible nor irreducible (in our universe), because it doesn’t exist. [...] physicalists who appreciate the severity of the hard problem should shift their focus from defending reductionism to constructing eliminative theories."
- Aaron Sloman
- Stanovich (1991): "The failure of vernacular consciousness to match up in any way with the stages posited by third-person information processing theories is a finding quite congenial to eliminative materialists, who are skeptical of folk psychology. That 'consciousness' plays no causal role is certainly unproblematic from an eliminative point of view. And the reason 'consciousness' is not causal, the reason it doesn't map consistently to stages in information processing models, or to the idea of cognitive resources [...], or to whatever theoretical term we pick from empirical psychology is because the notion of consciousness is incoherent."
- Jacy Reese: "I identify as a type-A materialist or a type-A physicalist. This means I don't believe there's a hard problem of consciousness."
- Many people on LessWrong
- Brian Tomasik.
Meanwhile, here are some adherents of type-B views. I'm less familiar with people's exact positions in this space, so I've mostly omitted names:
- Ned Block
- Many philosophers of mind. Dennett (2012), pp. 88-89: "Chalmers thinks ‘It is worth noting that the majority of materialists (at least in philosophy) are type-B materialists and hold that there are epistemologically further facts’ (fn 27, p. 43). He’s probably right about this, too, more’s the pity, but I think it tells us more about the discipline of philosophy than about the likely truth."
- "Clark and Hardcastle. These two are clearly realists about phenomenal consciousness, and they are equally clearly materialists. They reconcile the two by embracing an empirical identity between conscious experiences and physical processes. Although consciousness is not equivalent a priori to a structural or functional property (as type-A materialists might suggest), the two are nevertheless identical a posteriori. We establish this identity through a series of correlations: once we find that consciousness and certain physical processes are correlated, the best hypothesis is that the two are identical. And this postulated identity bridges the explanatory gap." (Chalmers 1997)
- Most neuroscientists (at least implicitly). As one example, I interpret Rolls (2008) as a type-B physicalist based on the following passage (p. 147): "the present approach suggests that it just is a property of [higher-order syntactic thought] HOST computational processing with the representations grounded in the world that it feels like something. There is to some extent an element of mystery about why it feels like something, why it is phenomenal, but the explanatory gap does not seem so large when one holds that the system is recalling, reporting on, reflecting on, and reorganizing information about itself in the world in order to prepare new or revised plans." In my opinion, if Rolls (2008) were a type-A physicalist, he wouldn't find any element of mystery, because the task would not be to explain why it actually feels like something, but merely to explain how we believe/claim/represent to ourselves that it feels like something, which doesn't involve any ontological mystery.
Formerly I included David Papineau in the type-A list based on the following quote from Papineau (2003): "if you think that the cognitive workings of intelligent beings depend on nothing but the operation of normal physical forces, without any extra forces operating only in brains, then you will see things differently. You may begin your textbooks with a few remarks about the distinguishing characteristics of conscious systems, but once this essentially classificatory question is out of the way, you won't want to spend any more time agonising about the nature of consciousness" (p. 3, emphasis added). This seems a type-A view to me because Papineau recognizes that consciousness is just a label we attach to certain physical processes, not a phenomenon that has to be empirically correlated with brain function in an a posteriori way. However, Chalmers's "Consciousness and its Place in Nature" mentions Papineau (1993) (which I haven't read) as an example of a type-B physicalist view. Also, Papineau (2003) seems to advance a "phenomenal concept strategy" argument, and those are typically associated with type-B physicalism. So maybe Papineau is technically a type-B physicalist. But I feel that his position amounts to the same as a type-A view for practical purposes. In contrast, the idea held by other type-B philosophers that consciousness is mysterious and must be empirically correlated with physical events does imply a significantly different picture of consciousness and how much we can know about it than a type-A view does. (I'm not an expert on the fine terminological distinctions here, so actual philosophers are invited to correct me.)
Defending type A
Like most science-literate people, I held an implicitly type-B view until 2009. Of course, I didn't know what these words meant at the time, but I assumed that consciousness was a definite thing that a creature either did or didn't have in an objective sense. After 2009, I shifted toward a type-A view. I now think type A is basically right, but I have sympathy for type B and can still be pulled by its intuitive gravity.
This section answers some common complaints against type-A views.
Objection: Denies consciousness
Claim: How can you ignore subjective experience? That I have phenomenal consciousness is the most certain thing in the world for me—far more certain than whatever theories or experiments you're citing in order to suggest otherwise.
Reply: Type-A views don't deny that you have phenomenal consciousness. What's at issue is what the nature of that consciousness is. Is it just "things happening consciousness-wise" in the way that an election is "things happening election-wise"? That is a non-obvious, intellectual question requiring concepts and (possibly faulty) intuitions to answer. Philosophers don't agree on it.
So it's wrong to say that the distinction between types A and B is the most obvious thing in the world, in a similar way as it's wrong to say that the second line is longer than the first in the Müller-Lyer illusion or that it's impossible for one twin to age slower than another by traveling near the speed of light. Where concepts and reasoning are concerned, naive intuitions can be wrong.
It's not that you're mistaken about having consciousness at all. As philosophers say, there is no appearance/reality distinction for the existence of consciousness because the appearance is the reality. But we can be mistaken about what the nature of consciousness is, including whether its reduction differs from or is the same as reductions of other processes like digestion, volcanic eruptions, or weather patterns.
It's not obvious that the so-called "what it's like" of experience must be some independent property rather than being a concept our brains use to refer to an emergent process in exactly the same way as our brains refer to tables and fireworks.
I think if we get to the point of declaring a fundamental mystery like the explanatory gap because of the way we conceptualize something, we're most likely making a mistake with our mode of thinking. This doesn't tell us what the mistake is, but it's a red flag that we are confused and need to approach the question a different way. Trying to draw dramatic ontological conclusions from our confusion the way Chalmers does is an epistemic failure mode—in a similar way as rearranging your ontology because of mental distortion following brain damage would be.
Objection: Consciousness doesn't mean functions
Claim: When we talk about consciousness, we mean something more than functional operations. We're referring to what it's like.
Reply: Imagine yourself as a small child. You see a car for the first time. Your mom tells you "that's a car". You learn that you can get inside of it, and it takes you somewhere. It makes noise and has seat belts. And so on. At this point, the meaning of "car" to you is a thing that moves, makes noise, has seat belts, etc.
Years later you learn that a car is made of parts: Engine, body, seats, steering wheel, etc. And each of these is made of smaller parts. Ultimately, those parts are made of atoms (and subatomic particles, etc.).
An identity theorist says a car is identical with atoms arranged car-wise. That's something we discovered. But when we were young, "car" meant the thing that drove, not atoms of metal and plastic in particular configurations held together by electromagnetic bonds. Hence the identity is a posteriori.
However, the analytic functionalist disputes this. She observes that if you knew how atoms behave, you could simulate their behavior under various arrangements. For some of the arrangements, you'd reproduce high-level car structure and function. Hence, it's a priori that cars reduce to certain atomic configurations because you could figure that out just by simulation; you wouldn't need to actually observe it in the real world.
The functionalist on consciousness asserts that a similar reduction applies for phenomenal experience. Chalmers claims, as do other explanatory-gap proponents, that there's a fundamental difference in trying to reduce consciousness compared with trying to reduce cars, or water, or genes. I think this is a mistake that results because consciousness is not just a propositional fact but also a procedural experience, not just third-person but first-person. This point is elaborated in later sections of this piece.
How consciousness does mean functions
Why do we think consciousness doesn't refer to something functional or structural? I don't see why the vague cluster of confusion that characterizes our notions of consciousness couldn't be shown to be functional upon clarification. In fact, I think that's what we do see. Otherwise consciousness is just a messy blob of mysteriousness, whose existence and nature are no better explained than is the supposed explanatory gap between brain functions and phenomenal experience.
The following seems to me like a reasonable meaning for the ill-defined notion of consciousness: Phenomenal consciousness is stuff like whatever constitutes the phenomenal experiences I have. (This kind of definition implies something like an exemplar view of the concept of phenomenal consciousness.) Pre-scientifically, phenomenal consciousness could be anything; for instance, maybe our consciousness results from air bouncing off our skin. But then we study what correlates with phenomenal experiences, and we find that various brain processes do. We also develop computational theories to understand why certain types of functional information processing make good sense as candidates for what we had been more vaguely pointing to as our phenomenal experiences. This allows for a plausible analytic reduction of phenomenal consciousness to those functional processes.
This is not very different from reductions in other contexts. For instance, suppose we live in 10,000 BC and see a rock. We have no idea what the rock reduces to: It could be made of fairies holding hands, or maybe it's just indivisible, ontologically primitive "rock stuff". Then gradually people perform experiments and find various types of atoms in rocks. They further develop theoretical models showing why certain sorts of atomic configurations would make good sense as candidates for the constituents of rocks. This allows for a plausible analytic reduction of rocks to those atomic configurations.
Another reply: Failure to refer
Another way to reply to the objection is to say: "So what if people don't mean functional behavior when they talk about qualia?" When non-pantheists talk about "God", they don't mean "the universe", but that doesn't imply there's a better referent for their word. The same could be true for consciousness.
Objection: Appearance is reality for consciousness
Claim: John Searle maintains that "where consciousness is concerned, the existence of the appearance is the reality."
Reply: I agree, but maybe "what we mean by consciousness" is pointing to a confused conception of consciousness, and the "reality" giving rise to the appearance is a type-A ontology.
More precisely, we could restate Searle's claim as follows: If you consciously think you're conscious, then you're conscious. But let's be careful with words here. The first use, call it "conscious1", talks about the reality of how we're actually conscious. The second use is how it appears to us; call it "conscious2". The claim is that the existence in consciousness1 of the appearance that you have consciousness2 is the reality that you have consciousness1. Or, more succinctly: If you consciously1 think you're conscious2, then you're conscious1. But this is just a more complicated form of the tautological claim that if you're conscious1, then you're conscious1. It imposes no constraints on what conscious1 is, and so it could very well be a type-A notion of consciousness.d
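To make the structure of this reply explicit, here is a rough formalization (my own notation, not Searle's). Let C1(x) mean "x is conscious1", C2(x) mean "x is conscious2", and T1(x, p) mean "x consciously1 thinks that p". Since consciously1 thinking anything already entails being conscious1, we have T1(x, p) implies C1(x) for every proposition p, so the restated claim

$$T_1(x, C_2(x)) \;\Rightarrow\; C_1(x)$$

holds trivially, whatever conscious1 turns out to be. That is the sense in which the argument imposes no constraints on the ontology of conscious1.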
As an extreme illustration of this point, imagine taking the eliminativist position on phenomenal consciousness. Then the argument that "If you phenomenally think you're conscious then you're phenomenally conscious" proves nothing, because the eliminativist denies the antecedent. Now, most of us feel this is absurd—it gets back to the "you're denying consciousness" objection. We think this can't be right because we have an immediate, overwhelming experience of consciousness. But so too a born-again Christian may have an immediate, overwhelming experience of God, and if, as I claimed, there is still an appearance/reality distinction for the nature of qualia, then we can't claim an asymmetry between our immanent impression of qualia and our immanent impression of God. In the God case, there is really something going on; it's just not what it seems to be. So too it can be in the consciousness case.
Magnus Vinding channels Searle when he says "All the beliefs we are aware of appear in consciousness". But again, this is consciousness1, whose ontological status is not determined. For example, many people think that present-day computers aren't conscious, but present-day computers can process inputs and form crude "beliefs" about things (e.g., that the user has clicked the "Restart" button). Why can't our own beliefs about ourselves being conscious be similar in kind to how computers form "beliefs" based on their "observations"?
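For a deliberately crude illustration of what "beliefs" means here, consider a toy sketch (hypothetical class and event names, nothing more intended): a program observes an input event, stores a representation of what happened, and can later report that representation, with no further ontological ingredient involved.

```python
# A toy system that forms and reports crude "beliefs" from its "observations".
class ToyAgent:
    def __init__(self):
        self.beliefs = {}

    def observe(self, event):
        # Store a representation of the observed event.
        if event == "restart_button_clicked":
            self.beliefs["the user wants a restart"] = True

    def report(self):
        # Report whichever representations are currently held.
        return [claim for claim, held in self.beliefs.items() if held]

agent = ToyAgent()
agent.observe("restart_button_clicked")
print(agent.report())  # -> ['the user wants a restart']
```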
Descartes was probably correct with "I think; therefore, I am." Or, as others have revised the statement to be less presuppositional: "Thinking is occurring." Or we could say: "There's stuff happening thinking-wise." Similarly, we can infer from our feelings that "There's stuff happening feeling-wise." But the exact nature of the stuff happening feeling-wise is undetermined by this observation.
Comparison with philosophy of time
Dwight Holbrook has proffered an argument against temporal eliminativism that sounds much like Searle's argument against phenomenal eliminativism. In "Is Present Time a Precondition for the Existence of the Material and Public World?", Holbrook explains:
Try knowing anything before or outside the moment when you come to know or acquire knowledge of it. Whatever it is you have come to know, however much thought or reflection you may have put into knowing it correctly, there is no possibility—no conceivable way—of it being something you know unless this act of apprehension occurs in the first place.
[...]
"absence of present time" runs up against the contradiction of presupposing what is being denied. This constraint acts upon all human knowledge claims. To know necessarily implies the provision of a present time to know it in.
We might re-state this point in closer analogy with Searle's: If it appears that there is a real present moment, then because our thinking that it appears as such is an act that occurs at some time, it must be that there actually is a present moment. Appearance implies reality.
My reply is similar as in the consciousness case: Yes, there is something going on that creates this sense of the present, but its exact nature is not clear. In fact, Holbrook seems to acknowledge this:
Is this occurring of knowing—the construction of it as a temporal act—tantamount to saying that knowing takes place only in present time?
At first glance, the question would seem to answer itself. What can an occurrence be, as a temporal act, if not something that takes place in a present time, whenever that present time might be? The underlying issue, however, is where this temporal act, this occurring we call NOW, is taking place. [...] do the tenses point outward to the nature and ontology of external reality or merely inward and indexically to the speaker using such tenses? Hence, the opposing sides of that debate, the tense and tenseless (McTaggart’s A and B time) theorists.
I didn't understand most of Holbrook's paper, so I won't say more about his subsequent discussion.
Phenomenal experience as acquaintance knowledge
When we imagine deriving the behavior of a car from its atomic configuration, we can think about all parts of the system in third-person terms. We adopt a physical stance toward the atoms, and the metal/plastic pieces, and the whole joint system. Phenomenal experience is different because this requires shifting from third-person to first-person, which is a switch that no other scientific reduction needs.
I think this is where philosophers like Chalmers err. Philosophers tend to think that if you can put a name on something, it's a thing that you can describe in a third-person way. So, we feel "what it's like" and call this "qualia", and now that we have a name for it, we can think about qualia as something out there. Then we puzzle ourselves about why it doesn't seem derivable from physics.
No. Instead, qualia are in here; they are what my physics is doing. Physics must produce phenomenal experience, but this isn't deducible from physical laws in the way that any other third-person observation is, because observing consciousness in the je ne sais quoi sense of "what it's like" requires not propositional knowledge but acquaintance knowledge. As William James explained in The Principles of Psychology: Volume One (p. 221):
I know the color blue when I see it, and the flavour of a pear when I taste it; [but] I cannot describe them, make a blind man guess what blue is like [...]. At most, I can say to my friends, Go to certain places and act in certain ways, and these objects will probably come.
If we knew the laws of physics and were in God's position, we could create a universe with those laws (though it would be unethical to do so), and the creatures in that world would necessarily be acquainted with phenomenal experience. There's a sense in which this is a priori given physical laws, because we just did a kind of deduction from those laws—where the universe we created is the deduction. A priori thinking is always a computation done with initial rules, and the playing out of a newly created universe is one such possible computation.
At the same time, there's a sense in which the acquaintance knowledge of our creatures is inaccessible to us, because we are not them. Likewise, I may know propositionally that my friend's deceased mother existed without ever having been acquainted with her. So type-A and type-B materialists are kind of right at the same time: Phenomenal consciousness is analytically functional, but it's also sort of epistemically inaccessible in an acquaintance rather than propositional sense. The acquaintance view makes us feel better about explaining qualia than crude type-A caricatures do, while not allowing for the ideal conceivability of zombies, as type-B views would.
Chalmers complains "that there is a sense in which any type-B materialist position gives up on reductive explanation. Even if type-B materialism is true, we cannot give consciousness the same sort of explanation that we give genes and the like, in purely physical terms." But acquaintance knowledge accounts for the seeming explanatory gap because it says, in Massimo Pigliucci's words, that the explanatory gap is a "category mistake". Acquaintance is a different kind of thing than propositions:
Of course an explanation isn't the same as an experience, but that's because the two are completely independent categories, like colors and triangles. It is obvious that I cannot experience what it is like to be you, but I can potentially have a complete explanation of how and why it is possible to be you.
Earl Conee uses the concept of acquaintance knowledge to explain what Mary the super-scientist learns upon seeing red for the first time. This idea also relates to the indexical form of the "phenomenal concept strategy".
Objection: How do you know?
A type-B supporter might object to the acquaintance-knowledge solution:
Claim: If, as you admit, acquaintance knowledge is fundamentally private and inaccessible to anyone other than the mind itself, how can you know that the mind has it? Maybe there's not anything it's like to be that mind after all.
Reply: I worry about this, but I think it misunderstands the idea. Any system necessarily has some sort of acquaintance knowledge of itself in some form. Acquaintance knowledge is another way of talking about what it is to be a thing rather than to look at that thing from the outside. A system cannot fail to be itself.
Maybe this is just linguistic trickery and fails to capture the heart of the seeming explanatory gap. But the insistence that there is an explanatory gap assumes that the "what it's like" is more than being the thing. I can't defeat that fundamental intuition, but I'm trying to defuse it by showing that another perspective is possible. If what we mean by phenomenal consciousness is just being the thing that computes, then zombies are not conceivable.
Note that ordinary reductions also require leaps of faith. It's impossible to mathematically simulate all the atoms in a car and show that they reproduce the behavior of the car. We assume that it works because what we can see gestures in that direction, and nature would be simpler if that were the case. Likewise, we can't ever "verify" that functional operations necessitate phenomenal experience, and in fact, this is impossible, because there's no way to subjectively simulate a thing without being that thing. But that doesn't mean we should add some extra property to our ontology and then impose a brute law that the property is identical with the functional operations.
Is acquaintance knowledge a posteriori because we can't access it by reasoning? If so, would that align more with type-B views? I think not, because acquaintance knowledge is neither a priori nor a posteriori; it's not really knowledge at all in the conventional sense of those words. Rather, it's a mode of being.
Meanings and reductions
Type-A proponents claim that phenomenal experiences are categories into which we classify physical operations when those operations are viewed "from the inside". Type-B advocates object that talk about qualia means something more than this analytic definition. Previously I showed with the example of a car how meanings can initially refer to high-level properties that we observe but then can be reduced to more precise, analytic definitions once we understand the systems better. This section elaborates on that idea.
In a Monty Python skit, Michael Palin asked: "What do I mean by the word 'mean'?" There are entire subfields of philosophy and linguistics devoted to that question, so there's no short answer. But I'll suggest one approach, inspired by artificial intelligence.
Clustering
One common instance of unsupervised machine learning is clustering. For instance, an algorithm may take a set of input images and apply a similarity measure sim(x1,x2) to compare each pair of images x1 and x2. Then images that have close similarity can be grouped together into a single cluster that refers to the set of images in that neighborhood. For instance, if the algorithm was fed images of various horses and various barns, it would—if it processed the data well—put most of the horse pictures together and most of the barn pictures together. It could then refer to the two clusters using labels. The algorithm might call the labels "1" and "2", but we could call them "horses" and "barns". If the computer later referred to those labels, it would mean "some image that looks kinda like the images in this cluster". Presumably human language does something similar.
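As a concrete (and heavily simplified) illustration, here is a minimal sketch of this kind of similarity-based grouping. It's a toy: the "images" are made-up feature vectors, the similarity measure is just negative Euclidean distance, and the greedy grouping rule is only one of many ways to implement the clustering step.

```python
# Toy illustration of similarity-based clustering (hypothetical feature vectors,
# not a real image pipeline). Each "image" is reduced to a small feature vector.
import math

def sim(x1, x2):
    """Similarity measure: higher means more alike (negative Euclidean distance)."""
    return -math.dist(x1, x2)

def cluster(images, threshold=-2.0):
    """Greedily group items: join an existing cluster if similar enough to its
    first member, otherwise start a new cluster."""
    clusters = []  # each cluster is a list of feature vectors
    for x in images:
        for c in clusters:
            if sim(x, c[0]) > threshold:
                c.append(x)
                break
        else:
            clusters.append([x])
    return clusters

# Hypothetical features: (size, leggedness) for "horse-like" vs. "barn-like" images.
images = [(1.0, 4.0), (1.2, 4.1), (9.0, 0.0), (8.7, 0.2), (1.1, 3.9)]
groups = cluster(images)
print(len(groups), "clusters")        # -> 2 clusters
print([len(c) for c in groups])       # -> [3, 2]: the "horses" and the "barns"
```

The cluster labels here play the role of words like "horse" and "barn": for the algorithm, to "mean" one of them is just to point at the corresponding cluster of examples.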
Car reduction
Returning to the example of cars, a child may have a mental cluster of images, sounds, smells, tactile sensations, episodic memories, etc., associated with cars, which defines his/her notion of what a "car" is. S/he can classify new objects as cars or not based on their similarity to the existing cluster along various dimensions. But now suppose the child is taught how engines work, that the car's components are made of atoms, etc. These aren't what the child means by car, are they? No, not at first. The child's meaning for car is still the cluster of data that s/he had in his/her head about cars. But given that the principles of how motors work, how atomic interactions create metals, etc., necessarily explain the higher-level features of the data in the child's head, the child can be convinced that a reduction is going on. And in principle, one could create analytic definitions of car-ness based on properties of how the underlying atoms behave, though in practice this would be obscenely complicated.
A similar situation applies to phenomenal experience. Initially what we mean by it is the data that we feel directly. Then as we learn more, we see how the kinds of phenomenological features that we experience are deducible from underlying neural dynamics. Of course, as noted, this reduction is weirder than other reductions because it requires changing our perspective. But this is not a fundamental problem, as the next example shows.
Change of perspective
Suppose you live in a neighborhood where neighbors don't talk with each other. No one ever invites neighbors over for social gatherings. You've also never gone to stores, schools, or hospitals because your butlers provide those services to you at home. You can see your house from the inside. It looks a certain way—walls, staircases, windows, etc. You also go outside and see your house from the outside. It looks different on the outside, but you notice how the regularities in shape on the outside determine the regularities in shape on the inside. You go for strolls on the sidewalk and can see all the other houses in the neighborhood from the outside. But you can't enter any of them, so you wonder: Do they look like anything from the inside? Or is it all empty in there?
One of your brothers, David, tells you he can ideally conceive of things that look like houses on the outside but have nothing on the inside. Another brother, Daniel (an analytic functionalist), thinks that's nonsense. "We can explain the way my house looks from the inside based on its structure", Daniel says. "Anything which has this kind of structure must have an inside. We can see that the other houses have this kind of structure. So there must be something they look like on the inside too. Or at least, they would look like something if we could get inside them, which we can't do because our neighbors are unfriendly."
The analogy with the case of consciousness is similar though not perfect. As Susan Blackmore observes, there is no inside or outside to our consciousness. Rather, I think, "inside" means being the algorithm in motion, while "outside" means looking at an algorithm that's in motion somewhere else. It's still an indexical shift, but not one that has a physical boundary.
Another analogy
I would say that "consciousness" is "stuff like what's going on to generate my thoughts and feelings", where those words are understood to be handwavy gestures, similar to a pre-scientific person looking at lightning and saying that "lightning" is "stuff like that bright, loud thing". Then as we investigate further, we see that various kinds of algorithms and physical processes are what we were trying to point at with our vague language.
This page says regarding U.T. Place's view:
to the objection that "sensations" do not mean the same thing as "mental processes", Place could simply reply with the example that "lightning" does not mean the same thing as "electrical discharge" since we determine that something is lightning by looking and seeing it, whereas we determine that something is an electrical discharge through experimentation and testing. Nevertheless, "lightning is an electrical discharge" is true since the one is composed of the other.
We can choose exactly what parts of the more precise account of consciousness to include in our definition, just like the lightning watcher can decide where to draw the boundaries around what constitutes lightning (e.g., is a shock of static electricity on a door handle lightning? is a computer simulation of lightning still lightning?).
Type-B physicalism is disguised property dualism
Following is a hypothetical dialogue between types A and B:
A: What is "consciousness"? It's the way things seem to a mind. What is "the way things seem"? It's certain types of functional input/processing/output behavior.
B: No, the meaning of a quale like pain is more than just its functional role in the brain; there's something the quale is like to experience.
A: Look at Max Black's "distinct property argument": If pain and functional operations are different, that's really just a form of property dualism. What is the extra "property" doing? Nothing. It doesn't even explain why our physical brains believe it exists. So why keep it in our theory?
B: Type-B physicalism is a monist theory because consciousness is reducible to brain states by way of identity between the two.
A: Where does that identity come from? Reduction is about showing how one set of facts yields another deductively. The identity you declare isn't deductive. It's just claimed by fiat and/or inductively. So it's not really a reduction. You're only describing a correlation between two properties (physical brain states and phenomenal states), which is just property dualism.
Chalmers concurs with this criticism of type B:
If one acknowledges the epistemically primitive connection between physical states and consciousness as a fundamental law, it will follow that consciousness is distinct from any physical property, since fundamental laws always connect distinct properties. So the usual standard will lead to one of the nonreductive views [such as property dualism]. By contrast, the type-B materialist takes an observed connection between physical and phenomenal states, unexplainable in more basic terms, and suggests that it is an identity. This suggestion is made largely in order to preserve a prior commitment to materialism. Unless there is an independent case for primitive identities, the suggestion will seem at best ad hoc and mysterious, and at worst incoherent.
[...] there is a sense in which any type-B materialist position gives up on reductive explanation. [...] the view may preserve the letter of materialism; but by requiring primitive bridging principles, it sacrifices much of materialism's spirit.
Chalmers (1997): "even if type-B materialism is accepted, the explanatory picture one ends up with looks far more like my naturalistic dualism than a standard materialism."
Dennett (2012) says (p. 89) that Chalmers "gives very good arguments for type-A materialism, and finds no flaws in them. He also sides with type-A materialism against type-B materialism."
If one accepts this point, then the real argument is not between type-A and type-B physicalism but between type-A physicalism and a non-reductive theory. Many materialists happily occupy a middle ground in type-B land, hoping to have their cake (physicalism) and eat it too (consciousness being a posteriori). This is not a consistent stance. They must either affirm genuine physicalism, as Dennett does, or affirm dualism or neutral monism, as Chalmers does.
Searle's view
Searle's views on consciousness are somewhat in dispute. While he proclaims physicalism, he admits the conceivability and maybe even possibility of zombies. Such a position looks like property dualism. Searle denies this, though others aren't convinced.
I agree with most of Searle's paper against property dualism. His main argument resembles the "change of perspective" that I suggested earlier:
What is the difference between consciousness and other phenomena that undergo an ontological reduction on the basis of a causal reduction, phenomena such as colour and solidity? The difference is that consciousness has a first-person ontology; that is, it only exists as experienced by some human or animal, and therefore, it cannot be reduced to something that has a third-person ontology [...].
This mirrors Pigliucci's point above. A system X that's different from another system Y is not Y (doesn't have Y's first-person ontology), even if X can observe Y and crudely simulate it (view Y's third-person ontology). But if this is Searle's view, it's unclear why he finds zombies conceivable, since the zombies themselves should have a first-person ontology (they should be identical to themselves). Given that
human neurobiology + first-person ontology = consciousness,
zombies should necessarily be conscious.
Dilemma against type B: Many interpretations or epiphenomenalism
In type-B physicalism, qualia are real properties that a system may have or lack, although for a given physical system, whether they are present is fixed by its physical nature (i.e., zombies are impossible). However, this presents a dilemma for type-B proponents:
- If qualia supervene purely on the functional (i.e., if we adopt a posteriori functionalism), then there's a problem because a system has no unique interpretation as being one function rather than another. For more on this, see "What is a computation?"
- As a stylized example, suppose we think that "addition" and "subtraction" are distinct properties that supervene on arithmetical functions. Then suppose a computer does the operation 001 + 100 = 101, with the 0s and 1s being physical voltages. If we interpret the digits as unsigned binary numbers, this operation is 1 + 4 = 5, on which the "addition" property supervenes. But if we interpret them as signed numbers, "100" will be negative (its value depends on the exact signed representation chosen), so that the "subtraction" property instead supervenes on the same physical process. A similar situation could obtain for "happiness" versus "suffering" properties with respect to more complicated functions. (A short code sketch after this list makes the two readings explicit.)
- If qualia also involve non-functional elements, those elements don't do any work to explain why we say we have qualia.
- For instance, in Ned Block's "biological theory", the constitutional nature of electrochemical physics plays an important role in making something phenomenally conscious. But if so, this extra "essence" of electrochemical physics doesn't contribute to why we physically say we have qualia—all of that work is done by the functional behavior of the system. So the qualia resulting from the electrochemical essence are epiphenomenal.
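Here's the promised sketch of the stylized addition/subtraction example. It reads the same three-bit patterns once as unsigned integers and once as two's-complement signed integers (one common choice of signed representation); nothing about the bits themselves privileges one reading over the other.

```python
# The same 3-bit patterns, interpreted two ways.
def as_unsigned(bits):
    return int(bits, 2)

def as_twos_complement(bits):
    """Read a bit string as a signed integer in two's complement."""
    value = int(bits, 2)
    if bits[0] == "1":            # sign bit set
        value -= 1 << len(bits)
    return value

a, b, result = "001", "100", "101"

# Unsigned reading: 1 + 4 = 5 ("addition" supervenes).
print([as_unsigned(x) for x in (a, b, result)])          # [1, 4, 5]

# Two's-complement reading: 1 + (-4) = -3, i.e., 1 - 4 ("subtraction" supervenes).
print([as_twos_complement(x) for x in (a, b, result)])   # [1, -4, -3]
```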
Type-A analytic functionalism avoids the second problem by definition, but it also avoids the first problem, because there is no extra property apart from a system's functional mechanics that may or may not be instantiated. A type-B physicalist needs to worry that the operations in my brain could, under some interpretation, be seen as playing out the functions of your brain, but in this case it's not clear why my brain doesn't have the phenomenal properties of both my qualia and yours. Type-A functionalists can just leave the systems to be what they are, without anything extra needing to be attached to them. Of course, when it comes to making ethical assessments, we act as if a given functional system has a given quale, because this is the way our minds conceptualize sentience. But this is not an ontological puzzle—just an anomaly of our valuation process that we need to search our hearts to resolve.
Type-B physicalism also faces other puzzles regarding where a mind begins and ends and how to deal with nested minds. Type-A physicalists face these problems in the moral realm—deciding how they want to interpret and morally value different physical processes. Type-B adherents, in contrast, need to believe that these questions have real answers and that there's some set of actual laws of nature that map from physical systems to how much consciousness they contain and with what sorts of precise experiences.
Nonreductive functionalism, such as that proposed by Chalmers, faces the same difficulties as type-B functionalism as far as deciding which interpretations of physical systems are given actual consciousness vs. which interpretations are fake. This isn't surprising, because as discussed above, type-B physicalism is actually just property dualism.
A heuristic case against the "zombic hunch"
So far I've been arguing for type-A physicalism directly in terms of why we should think it's true and how we can overcome our intuitions that it's in error. In this section, I take another approach: I admit that type-A physicalism seems absurd but suggest that the alternative is even less plausible. We might say: "Type-A physicalism is the worst view of consciousness, except for all the others." Rather than thinking in terms of logical persuasion, take a gut check on the situation the way you would if you were assessing claims of ghosts or the afterlife: Does it really make sense to give so much weight to a single, persistent belief held by very fallible brains?e As a result of this kind of System 1 reasoning, we can conclude that even if we don't have clear arguments for the plausibility of type-A physicalism, it's still very likely to be correct.
As I see it, the only serious barrier between our present position and a philosophically satisfactory, type-A-physicalist account of consciousness is what Dennett calls the "zombic hunch": "the conviction that there is a real difference between a conscious person and a perfect zombie". The analogy between analytically reducing cars to atoms and analytically reducing consciousness to neural activity would be acceptable to most philosophers were it not for the feeling that neural activity of the sort that happens in my brain could exist in the absence of qualia. The zombic hunch is all that stands in the way.
The zombic hunch also creates a conundrum: It seems to yield a picture of consciousness that no explanation could make sense of even in principle. This is the idea behind new mysterianism. As Susan Greenfield says: "If I said to you I'd solved the hard problem, you wouldn't be able to guess whether it would be a formula, a model, a sensation, or a drug. What would I be giving you?" It seems that no answer would actually be an answer. There's nothing that this kind of consciousness could be, because whatever it was, that thing would still have its own hard problem: Why is that thing conscious?f
The zombic hunch is just a single, powerful intuition. Humans have many reproducible intuitions that are shown to be wrong, such as our Newtonian conceptions of time and space. It seems just preposterous that the speed of light should be the same in all inertial reference frames (call it the "anti-relativity hunch").g
We can only imagine 3 dimensions (+ time), not the 11 dimensions that may actually exist. No one can visualize 11 dimensions. We don't believe in 11 dimensions because we can imagine them but only because other arguments are sufficiently compelling. Likewise, even if we can't imagine how qualia are necessarily just functional, we can still believe that proposition because of other compelling arguments.
Sometimes human brains consistently fail in predictable ways. Optical illusions, delusional misidentification syndromes, and similar cognitive errors show this forcefully. Thus, it's quite plausible that the same is true for our inability to see how "the right sorts of neural activity necessarily produce consciousness" in a similar way as "the right sorts of atomic arrangements necessarily produce a car". Probably our brains are just wired to think about consciousness in a distorted way, maybe because we so clearly separate physical from phenomenal stances.
So we have a dilemma:
- conclude that the zombic hunch is one example of many in which our brains' intuitive reasoning fails, or
- maintain the zombic hunch, throw a monkey wrench into an otherwise basically flawless physicalist picture of nature (minus the mysteries of the origin and fundamental ontology of the universe), and insist on a conception of consciousness for which any explanation appears inadequate!
The choice seems clear to me.
Note that rejecting the zombic hunch does not cast doubt on the certainty of your being conscious. The only dispute is about whether zombies are logically consistent, i.e., whether we should conceive of consciousness in non-analytic terms. This is a technical question on which your certitude of being conscious seems to have little to say.
Of course, the existence of confusions is sometimes a signal that there's a deeper problem at play. If your programming debugger gives you an error, sometimes you can make a simple fix to remove the error, or assume the error is just a false positive. But sometimes the error indicates a more systemic problem with your approach. We should maintain the possibility that we're all thinking about the consciousness dilemma wrong in some fundamental way. But given the many instances in which we know that our most sincere beliefs are erroneous—relativity, optical illusions, cognitive biases, paranormal reports, alien abductions, etc.—it seems to me much more likely that the zombic hunch is another instance of this sort than that it requires a complete philosophical revolution.
My experience from four years of doing data science at Bing was that if you got really unexplained results, almost certainly you made a mistake, and very rarely was the weird finding genuine. The history of UFOs, ghosts, parapsychology, and a hundred other paranormal phenomena points in a similar direction. There are a few cases of paranormal events that are really hard to explain, but they almost certainly aren't genuine, because accepting them would incur too large an Occam penalty.
The phenomenal concept strategy (PCS) is an effort by physicalists to suggest specific ways in which the zombic hunch fails. Chalmers has a "Master Argument" against PCS, but it relies on the conceivability of zombies, which type-A physicalists deny. In other words, if you start out denying the zombic hunch, you can continue doing so, and PCS offers some possibilities for where the zombic hunch comes from. If you start out believing the zombic hunch, and if Chalmers's argument works, you won't be much moved by PCS. Hence, either stance remains self-consistent.h
By the way, one of the best PCS-style accounts I've come across is Richard Loosemore's paper "Human and Machine Consciousness as a Boundary Effect in the Concept Analysis Mechanism". Loosemore presents what I consider a biologically plausible sketch of connectionist concept networks in which a concept's meaning is assessed based on related concepts. For instance, "chair" activates "legs", "back", "seat", "sitting", "furniture", etc. (p. 294). As we imagine lower-level concepts, the associations that get activated become more primitive. At the most primitive level, we could ask for the meaning of something like "red". Since our "red" concept node connects directly to sensory inputs, we can't decompose "red" into further understandable concepts. Instead, we "bottom out" and declare "red" to be basic and ineffable. But our concept-analysis machinery still claims that "red" is something—namely, some additional property of experience. This leads us to believe in qualia as "extra" properties that aren't reducible.
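As a toy sketch of the "bottoming out" behavior described above (my own illustration, not code from Loosemore's paper; the miniature concept graph and node names are invented):

```python
# Toy concept-analysis mechanism: a concept's "meaning" is the set of concepts
# it activates. Nodes with no outgoing links stand in for concepts wired
# directly to sensory input, where analysis "bottoms out".
CONCEPTS = {
    "chair": ["legs", "back", "seat", "sitting", "furniture"],
    "legs":  ["long", "support"],
    "seat":  ["flat", "support"],
    "red":   [],  # connected straight to sensory input; nothing to decompose
}

def analyze(concept):
    related = CONCEPTS.get(concept, [])
    if related:
        return f"'{concept}' means: " + ", ".join(related)
    # The mechanism still insists the concept is *something*; it just can't say what.
    return f"'{concept}' is basic and ineffable (analysis bottoms out)"

print(analyze("chair"))  # decomposes into associated concepts
print(analyze("red"))    # bottoms out and gets reported as an irreducible extra quality
```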
The zombic hunch is not infallible
There's a strong intuition that our consciousness can't be an illusion. Sure, it may be that the existence of phlogiston was illusory, but our subjectivity is different, because it's the most certain, direct information we have. But, as I've tried to explain previously, the impossibility of consciousness being an illusion doesn't speak against consciousness being as the type-A physicalist portrays it. Whether type-A physicalism is right or wrong, your consciousness does exist, and so your certitude of its reality doesn't invalidate a type-A view. Rather, what's up for grabs is only whether the explanatory gap is real, along with all the other equivalent manifestations of the same idea.
In particular, the choice is between
- Hypothesis 1 (H1): The explanatory gap is real (which is equivalent to type-A physicalism being false, zombies being conceivable, the hard problem existing, etc.)
- Hypothesis 2 (H2): The explanatory gap is illusory (which is equivalent to type-A physicalism being true, zombies being contradictory, the hard problem not existing, etc.).
Both H1 and H2 predict your feeling that there's an explanatory gap equally well. So the only question that remains is a third-person, objective one about the relative probabilities of H1 vs. H2.
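In Bayesian terms (my own gloss, using notation not in the original discussion), let $E$ be the datum "I feel that there's an explanatory gap". Then

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)} \;=\; \frac{P(H_1)}{P(H_2)},$$

since, by the point just made, $P(E \mid H_1) = P(E \mid H_2)$. The feeling itself washes out of the comparison, and everything hinges on the prior plausibility of the two hypotheses.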
Some arguments for each side:
- H1: Typically when our reasoning finds a gap somewhere, we're correct in that assessment. Moreover, lots of smart people continue to find a gap even after hearing many arguments to the contrary. This is easier to explain if there is in fact a gap than if there isn't.
- H2: Occam's razor penalizes the existence of a gap to an extreme degree, because any ontology over and above functionalism fails to explain why we viscerally believe we're conscious and hence does no explanatory work in that regard. H1 can only help explain belief in the explanatory gap insofar as that belief results from rational deduction, in the way that rational calculation of pi leads to 3.14159.... But all the arguments for an explanatory gap, zombies, etc. rely heavily on raw, undecomposed intuitions. It's quite easy to imagine intuitions of this type being misguided, given that we see misguided intuitions in so many other domains of physics, mathematics, and philosophy. For instance, someone who knew nothing of infinite series might find Zeno's paradox intuitively compelling. Intellectual history is full of instances where mental intuitions have tripped us up. It's not very surprising that belief in the explanatory gap exists as just one more, particularly strong instance of faulty intuition.
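To make the structure of this comparison concrete, here is a toy Bayesian calculation (all numbers are invented for illustration): because both hypotheses predict the felt explanatory gap equally well, that piece of evidence leaves the odds untouched, and everything rides on the priors, which is where the Occam penalty bites.

    # Toy Bayesian comparison of H1 (real gap) vs. H2 (illusory gap).
    # All numbers are made up purely to illustrate the structure of the argument.
    prior_H1 = 0.01   # illustrative Occam penalty for the extra ontology
    prior_H2 = 0.99

    # Both hypotheses predict the feeling of an explanatory gap equally well,
    # so this evidence has a likelihood ratio of ~1 and doesn't move the odds.
    p_feel_gap_given_H1 = 0.9
    p_feel_gap_given_H2 = 0.9

    posterior_odds = (prior_H1 * p_feel_gap_given_H1) / (prior_H2 * p_feel_gap_given_H2)
    print(f"Posterior odds of H1 vs. H2: {posterior_odds:.3f}")  # same as the prior odds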
As best I can tell, this is where the fundamental debate lies. The disagreement is over probabilities for different ontological hypotheses being true. This helps make sense of why different people go in different directions and why the two camps don't seem to budge much. It's hard to persuade people about ontological assumptions that can't be arbitrated by empirical observation.
Many weak arguments for type-A physicalism
Belief in phenomenal consciousness over and above functional processes tends to rely on a single strong intuition, that I just know I'm phenomenally conscious. The type-A physicalist can marshal many weak arguments for her own view, such as Occam's razor, that type-A physicalism fits better with everything else we know about the world, that type-A physicalism continues a long tradition of (non-mysterious) reductionism in science, that other views on consciousness seem more supernatural, that beliefs people hold with utmost conviction are often wrong, that type-A physicalism avoids perplexing edge cases regarding whether a given physical system (such as a homomorphically encrypted emulation of your braini) is really conscious, and so on.
Broadly speaking, the Cartesian project is to start with the things you feel most certain of and to base everything else on them. Confidence/knowledge is eroded as we move from firmer premises to less firm conclusions, but enough is transmitted from our high-probability (if not outright certain) foundations that we can achieve our practical goals.
That's a nice idea on paper, and there are obviously limited settings in which we do follow processes similar to that. But on the whole the way things actually work is that we muddle through a bunch of competing lines of evidence and reasoning simultaneously, and do everything in a piecemeal and out-of-order fashion, and try to figure out rough and often (though hopefully temporarily) internally inconsistent probabilities for different claims, and shift those probabilities up and down based on complicated factors concerning how the claims relate to each other and how long we've had to think about them. I think in the long run I trust that overall flawed messy process (of thinking through the evidence and trying to update away from ideas incrementally as more and more evidence against them accumulates) more than I trust any particular claim or category of claim.
Why non-reductive views violate Occam's razor
We face a dilemma between two contradictory intuitions:
- Explanatory gap on consciousness
- Occam's razor.
"But", you might ask, "how bad can the Occam penalty be for positing an explanatory gap? In other cases where our intuitive reasoning leads to a conclusion, we tend to endorse that conclusion." The answer is that an explanatory gap entails an immense Occam penalty, maybe bigger than any other single conclusion of reasoning. It's no accident that non-reductive philosophers of consciousness sometimes remark that consciousness requires a revolution in our ontologies.
There are two main problems:
- Multiplying entities: If consciousness exists as a property or substance beyond matter, it needs its own ontological laws, expressing how smaller conscious parts combine into bigger ones, how they evolve with time, and so on. These would mirror many of the algorithmic principles already at work in physical brains. So non-reductive views essentially double the size of our ontological commitments. (Or maybe multiply them by 1.5 or 1.3 or something, if phenomenal reality is simpler than physical reality.)
- Correlations: Why are physical states so perfectly correlated with their phenomenal counterparts? For instance, why does my physically believing that there's an explanatory gap correspond to a phenomenal experience of believing there's an explanatory gap, rather than to a phenomenal experience of sitting on the beach?
Maybe #2 could be answered without too much difficulty, such as by stipulating that each physical state gets hooked up to the phenomenal experience that shares as much content as possible with it.
Even more simply, maybe there could just be physical processes, and some non-physical "consciousness dust" is sprinkled on top of them in the realm of ontological properties, turning biological puppets into real conscious boys. While the "consciousness dust" proposal is not much more ontologically complex than physicalism and does allow for the possibility of zombies (i.e., zombies are copies of you that lack consciousness dust), this proposal doesn't provide many dualist desiderata. For instance, it fails to allow for inverted color qualia, since it's not possible within physicalism to invert color experiences while keeping neural wiring fixed, and consciousness dust only adds one more degree of freedom (whether something is conscious or not), which doesn't allow for also specifying whether, e.g., green and red are inverted. Of course, we could also hypothesize a theory with two degrees of freedom: conscious or not, and whether red vs. green are inverted. But then we also need degrees of freedom for orange vs. blue being inverted, pain feeling like tickles, and many more variations. Accounting for all of those would bring us back to an extremely complex ontology.
And this whole philosophical mess is created by a single, non-decomposable, unfalsifiable, malleable intuition: the zombic hunch / explanatory gap / etc.
I feel like if you generate a hard problem of any sort, chances are you're thinking about the question wrong.j
Expgap syndrome
Capgras syndrome involves a persistent false belief that a family member or other close person is an impostor. Following is one account:
The first episode occurred one day when, after coming home, Fred asked her where Wilma was. On her surprised answer that she was right there, he firmly denied that she was his wife Wilma, whom he "knew very well as his sons' mother", and went on plainly commenting that Wilma had probably gone out and would come back later.
Perhaps we're all Capgras patients when it comes to the explanatory gap (call us "Expgap patients"). Even though a Capgras patient can see that the person in front of him is identical with his wife on any dimension that might be assessed, the patient still insists that the person is not really his wife—that there's something missing. Likewise, even though a believer in the zombic hunch can see that a zombie is identical with a normal human on any dimension that might be assessed, the person still insists that the zombie is not really conscious as she is—that there's something missing.
Jonathan Erhardt suggests explaining Capgras delusions as follows:
the part of my brain which should be active when I see my close relatives is not active when I see them, and that *this* explains my lack of emotional response.
But we can turn this into an explanation of Expgap syndrome:
the part of my brain which should be active at attributing the presence of phenomenal experiences when I imagine a zombie is not active when I imagine them, and that *this* explains my belief in the explanatory gap.
Erhardt replied:
the "should" in the Capgras case is a "should" of proper function, not of truth or correctness. It is just fairly typical that this part of the brain is active and generates emotional responses when seeing relatives. This is why we say it should be active, not because we're making a cognitive mistake if its not active. (It's not clear how lack of emotional response can be a cognitive mistake, although I guess some philosophers think so.) But the "should" of conceivability is a rational "should". You should believe that 2+2=4, or that it is conceivable that there are zombies, else you're making a cognitive mistake.
But as we saw above, Capgras syndrome can involve false beliefs, not just absent feelings. Fred might say: "I just know that's not Wilma, despite complete similarity on all objective criteria." In a similar way, when imagining zombies, we just know they're not conscious in the same way we are, despite complete similarity on all objective criteria. Both are beliefs that people just have and can't subjectively decompose.
William Hirstein seems to agree that lack of emotion is not all that's at play in Capgras:
According to my current approach, we represent the people we know well with hybrid representations containing two parts. One part represents them externally: how they look, sound, etc. The other part represents them internally: their personalities, beliefs, characteristic emotions, preferences, etc. Capgras syndrome occurs when the internal portion of the representation is damaged or inaccessible. This produces the impression of someone who looks right on the outside, but seems different on the inside, i.e., an impostor. This gives a much more specific explanation that fits well with what the patients actually say. It corrects a problem with the earlier hypothesis in that there are many possible responses to the lack of an emotion upon seeing someone. The patient could think: There’s something wrong with me, or My feelings about Dad have changed, or Dad is playing a trick on me, or Dad has some disorder, etc.
Wikipedia adds:
Most likely, more than an impairment of the automatic emotional arousal response is necessary to form Capgras delusion, as the same pattern has been reported in patients showing no signs of delusions.[28] Ellis and Lewis suggested that a second factor explains why this unusual experience is transformed into a delusional belief;[29] this second factor is thought to be an impairment in reasoning, although no definitive impairment has been found to explain all cases.[30]
Regardless of the details for Capgras specifically, it's clear there are pathological cases of persistent errors in belief and reasoning.
Anosognosia is another example of major errors in reasoning. Yvain (2009):
Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
Erhardt asked if non-pathological humans have any cases of brute errors in reasoning that don't resolve when the mistakes are pointed out. My answer is that I'm not sure. It certainly seems plausible, but by the nature of the question, it's hard to tell if we've found one, since unlike for rare mental defects, we would all be subject to these cognitive limitations. Something like the explanatory gap—where a huge metaphysical morass gets built around one leap of reasoning—seems like a good candidate. In math, there are many examples where two seemingly reasonable intuitions collide, such as
- the intuition that there are more counting numbers than even counting numbers, despite the cardinalities of these sets being equal
- the Banach–Tarski paradox, which depends crucially on the seemingly plausible axiom of choice.
Most mathematical paradoxes revolve around infinite sets, and maybe we could fix this by rejecting infinity, but it seems likely that the universe is in fact infinite.
Toy examples of cognitive closure
Our brains are kludges—piles of different systems that push in various directions. We have lots of special-purpose logic, including an entire brain region for detecting faces. We can do general, abstract reasoning, but it's easy to imagine this reasoning sometimes being corrupted by other mental processes. Since we can't introspect on most of our brain modules, these errors appear "brute" and unexplained. (This could theoretically change with advances in reverse engineering the brain, but the brain is so complicated that most of its mysteries may remain forever out of our grasp. I should also point out that I probably don't support speeding up neuroscience research.)
Following is a toy example of a "brute" cognitive error in a hypothetical mind:
    def add(x, y):
        # Hard-wired, "brute" error: the agent's arithmetic subroutine
        # misfires on this one input pair.
        if x == 2 and y == 2:
            return 5
        else:
            return x + y
Assuming the agent doesn't have access to its source code, this error would remain a mystery: "Why, when I think about 2+2, is the answer 5 rather than 4? This seems unexplained by the ordinary laws of addition. I'm forced to conclude that addition can't always be explained with its simple mathematical formulation."
An example closer to home is the following:
    def conceivability(x, y):
        # Hard-wired exception: for this one pair of concepts, the subroutine
        # skips its usual test and reports conceivability anyway.
        if x == "physical processing" and y == "phenomenal consciousness":
            return "yes, I can conceive of x without y"
        elif XPredictsEverythingAboutY(x, y):
            return "no, I can't conceive of x without y"
        else:
            return "yes, I can conceive of x without y"
Since we can't see our source code, we take the output of this subroutine at face value and thus generate the hard problem of consciousness.
Case study: "The Dress" illusion
In Feb. 2015, a new optical illusion went viral: "The Dress". (If you haven't seen it yet, take a look now.) When I first saw the photo, I thought it was clearly white and gold. This seemed so obvious that I wondered whether the controversy was even real until I searched for additional sources to confirm. I tried to see the colors as blue and black, but I just couldn't: That they were white and gold was just so immediately certain.
With enough staring, eventually I could see the white color as light blue, but the gold color remained. I tried harder, but the colors wouldn't budge. Eventually I tried hiding the back light and only looking at a segment of the dress itself. Within a few seconds, the gold turned to black. Aha! Now I could clearly see dark blue and black colors. I allowed myself to see the whole image again. Now the whole dress was dark blue and black. Success!
However, at that point, I couldn't return to seeing white and gold; I was stuck at black and blue. Eventually I could convince myself that the dark stripes were gold, but the blue stripes still looked light blue, not white. Since then, the gold color has returned, and now I can't see black again.
The exact details don't matter, but the point is just how astonishing it is that we can be wrong about something that feels infallible. What's more, I felt helpless to change the color I saw no matter how hard I tried. The same is true for the hard problem of consciousness. Once someone frames the hard problem (analogy: once a neural coalition wins its election and makes you see white and gold), it's extremely hard to extricate yourself from it (analogy: to switch to seeing blue and black). Often the hard problem feels least weird to me when I haven't pondered it in a while, presumably because the dictatorial rule of the initial winning interpretation in my brain has weakened by that point. Once you see something, it's hard to unsee it, but a delay can help.
A friend replied to this point:
I don't really mind at all that I see white/gold when others see blue/black. I don't even try to think about it in terms of right and wrong. Color as perceived by the eye is subjective and context-dependent. The people who see it as white/gold aren't even wrong, they're just implicitly using different information inside their brains to judge how to "see" the image.
My reply is that there is a right and wrong answer to the dress question because the wavelengths of light most reflected by the dress do fall in the blue/black range, not the white/gold range. The challenge is then to predict that fact based on the data in the image. Some cognitive scientists characterize perceptions as hypotheses. We "see" the most probable hypothesis or at least are more likely to see hypotheses with greater a posteriori probability.
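Here is a minimal sketch of the "perceptions as hypotheses" idea applied to the dress. The hypotheses, priors, and likelihoods are all invented for illustration; the point is only that the same ambiguous data can yield different winning interpretations depending on what prior assumptions a viewer's visual system brings:

    # Toy "perception as hypothesis selection" model for The Dress.
    # All numbers are invented; this is not a model of actual human vision.

    # How well each joint hypothesis about (dress color, lighting) explains
    # the ambiguous pixel data -- assumed equal, so the priors decide.
    likelihood = {
        "white/gold dress in bluish shadow": 0.6,
        "blue/black dress in warm light": 0.6,
    }

    def perceive(priors):
        """Return the interpretation with the highest posterior probability."""
        posterior = {h: priors[h] * likelihood[h] for h in priors}
        return max(posterior, key=posterior.get)

    # A viewer whose visual system assumes bluish daylight "sees" white/gold;
    # one who assumes warm artificial light "sees" blue/black.
    print(perceive({"white/gold dress in bluish shadow": 0.7,
                    "blue/black dress in warm light": 0.3}))
    print(perceive({"white/gold dress in bluish shadow": 0.3,
                    "blue/black dress in warm light": 0.7}))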
Intuitions about whether zombies are conceivable are also subjective, based on various information inside our brains that we use to judge the matter. From this jumble of internal information, we try to predict whether there really is an explanatory gap for consciousness or not. Some conclude "yes", and others conclude "no". Both judgments can happen, just like both color perceptions can happen, but one or the other of them corresponds to a more accurate characterization of reality.
Do error theories lead to nihilism?
Jonathan Erhardt presents a valuable counterpoint to the kind of reasoning used above:
It seems to me that there are many potential problems for this use of Occam's razor. It is, for example, much simpler to think that nothing exists and nothing is in need of explanation than to think that our traditional views on what explananda exist are right. And we can defend this view by relying on an error theory along the lines of what you invoke to explain the zombie hunch.
I think the claim by the ontological nihilist that "there's actually nothing, but you mistakenly think stuff exists" is different than the analytic-functionalist claim that "there is actually just physics performing functions, but you mistakenly think there's something more." Why? The hard problem of consciousness can be explained away by a single cognitive error—the belief that there's "something it's like" to be conscious over and above functional brain processing. Those who deny the hard problem of consciousness can still account for the structure of all phenomenal facts that the qualiaphiles put forward—e.g., that we can see colors and shapes, that we can taste sweetness and sourness, and so on. The only thing allegedly missing from the functionalist accounts of these qualia is the "feeling of what they're like". Believing that this feeling of what things are like isn't explained by functionalism is a single cognitive mistake. In contrast, a physics eliminativist who claimed that physics doesn't exist and we merely think it does would have to explain a great many cognitive mistakes, such as why we believe that chairs exist, why we believe chairs are solid when touched, why chairs fall down when thrown, why quantum-mechanical experiments show the results they do, why the moon and sun appear to rise and set in the sky, and many more things. It seems that each of these observations requires a different cognitive mistake; or at least we require enough cognitive mistakes to collectively generate this vast set of mistaken observations. The key difference from the hard problem is that in the case of phenomenal consciousness, type-A functionalism provides all the backbone needed regarding how our qualia are structured (e.g., that we see different colors, hear different sounds, etc.). In contrast, an error theory about physics still needs to specify all the structure that physics itself specifies, which suggests that such an error theory about physics may be at least as complicated in terms of minimum description length as physics itself is.
Perhaps the physics error theorist could make a more radical proposal: that, as Erhardt said, "nothing is in need of explanation". So, rather than explaining why we believe chairs exist, why we believe trees exist, and so on, we would just assume that we don't need to explain those observations. But this is more extreme than what the type-A physicalist is suggesting. Type-A physicalism can explain all our observations by postulating a single cognitive mistake. The more radical physics error theorist is denying the need to explain anything at all, including all the messy empirical details of our world. This seems to me like cheating.
Erhardt continues:
[Ontological nihilism] is an extreme case, but I guess we could construe other cases. We could say that although we do not know presently how Newtonian mechanics can explain all the evidence which seems to be conflicting with it, we can be very sure that Newtonian mechanics is true. It is, after all, so much simpler than any of the other contenders. True, there is almost universal agreement that QM and GR epistemically necessitate the evidence whereas Newtonian mechanics doesn't, but we are most likely just mistaken about what Newtonian mechanics epistemically necessitates.
Once again, the view that Erhardt proposes amounts to cheating. Suppose we believed that Newtonian mechanics is true, and we just fail to realize what it actually predicts—e.g., we're mistaken in thinking that it allows objects to move faster than the speed of light. From this error-theory view, we would still be unable to predict the right answers to physical experiments. In order to predict observations accurately, we would still need to use post-Newtonian physics, even though we claimed to think those theories were wrong. And the combination of "Newtonian physics + error theory about our understanding of Newtonian physics + use post-Newtonian physics anyway to get the right answers" is more complicated in a minimum-description-length sense than "Newtonian physics + post-Newtonian physics" when we're predicting our observations. In contrast, "type-A physicalism + error theory about the hard problem" has no similar flaw, because it can predict everything the qualiaphiles want with complete accuracy, with no need to sneak extra theories in the back door.
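A toy minimum-description-length comparison may make the asymmetry clearer. The bit counts below are pure inventions; only the structure of the comparison matters:

    # Toy minimum-description-length (MDL) comparison.
    # All bit counts are invented purely to illustrate the shape of the argument.
    mdl_newtonian = 1_000                # bits to specify Newtonian mechanics
    mdl_post_newtonian = 5_000           # bits to specify QM + GR
    mdl_one_intuition_error = 50         # "the zombic hunch misfires"
    mdl_newtonian_error_theory = 2_000   # explain away all anomalous observations

    # The Newtonian error theorist still has to smuggle in post-Newtonian
    # physics to predict experiments correctly:
    newton_plus_dodge = mdl_newtonian + mdl_newtonian_error_theory + mdl_post_newtonian
    just_accept_new_physics = mdl_post_newtonian

    # Type-A physicalism adds only a single small error theory on top of physics:
    type_a_total = mdl_post_newtonian + mdl_one_intuition_error

    print(newton_plus_dodge > just_accept_new_physics)   # True: the dodge costs more
    print(type_a_total - mdl_post_newtonian)             # tiny extra cost: 50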
Originally I wrote a different response to Erhardt, which I've now consigned to a footnote.k
Type A feels more right
When I read Dennett, Minsky, Blackmore, Aaron Sloman, and other type-A physicalists, it feels like a breath of fresh air. These people actually make sense, amidst a mass of what is otherwise confusion. The feeling of "Ah, this is it" can be a useful heuristic for choosing better theories. Of course, it's not foolproof. For instance:
- Before special relativity was shown correct, most people probably felt the "Ah, this is right" sensation for the vastly more intuitive Newtonian physics.
- Some people convert to Christianity because they feel that the Gospels "speak to them".
But most of the time, things that feel wrong are actually wrong. Many conclusions in science seem counterintuitive, but up close they make a fair amount of sense.
Of course, many people think Dennett's writings feel completely wrong. Maybe they regard Chalmers's hard problem as speaking to them more (as I did in the past and still do sometimes). So taking into account differing peer intuitions on the subject weakens the force of my point.
Castles in the clouds
One reason type-A physicalism feels right is that it's just a restatement of what we know from neuroscience, without extra machinery that philosophers have invented. To me, philosophy of mind, while insightful, feels like building castles in the clouds. One philosopher discusses a theoretical idea, such as Thomas Nagel's "what it's like". Then other philosophers take that as a real thing and build explanatory gaps upon it. Then still more philosophers pile theories upon theories to explain our epistemic gaps.
Of course, the mind-body problem is far older than Nagel or even than Descartes, but a lot of the specific machinery that philosophers use to debate these ideas (such as modal arguments, zombies, etc.) is more modern and seems contingent to me, because it's not grounded in concrete reality the way the sciences are. I can imagine a counterfactual philosophical community developing qualitatively different concepts had history gone in another direction.
I don't want to downplay the importance of theorizing. Many of the more neglected questions in altruism involve theoretical speculation, because practical questions can be more easily funded or developed with profit incentive. I just think we have to take theoretical speculation with appropriate doses of salt before we reach dramatic conclusions.
Why this question matters
Philosophy of mind may seem like a hopeless cesspool of confusion, but it has practical consequences for what we care about ethically, assuming (as I do) that qualia are a crucial component of what makes a mind matter morally.
- If physicalism were false, then there might not be anything it's like to be a given person, animal, or robot. (However, because zombies are indistinguishable from non-zombies, we could never tell which was which unless we discovered a way to tap into non-physical knowledge.)
- On type-A views, qualia are not privileged things that can be picked out from the totality of nature, so it's not meaningful to say that a system is or isn't conscious except by whether we regard it as such.
- Type-B views suggest that consciousness does describe something, but that thing is necessarily present given certain physical conditions.
- The "acquaintance knowledge" proposal says there's a sort of ineffable, inaccessible "what it's like" to be a given system. As type-A views observe, consciousness is not a separate thing to be carved out in our ontology, but there is a poetic sense in which a system necessarily has its own subjectivity—a way things seem to it, which we call qualia, that outsiders can't become acquainted with in the same way. This is why there appears to be something missing from reductive explanations when in fact there isn't. This view has similar ethical conclusions as type A.
Analogy with the measurement problem
It's widely known that discussions of quantum mechanics (QM) outside of physics are risky, and discussions of QM in the context of consciousness are often red flags. This section draws an analogy between the measurement problem of Copenhagen QM and introspection, but it's only an analogy. I don't think QM has much to do with consciousness as a matter of physical fact.
In the course of normal life, we go about doing various things, with various parts of our brains and bodies carrying out their functions, many of them "unconscious". This is like the quantum wavefunction evolving unitarily, with particles in superpositions. Then we might ask ourselves, as Susan Blackmore does: "Am I conscious now?" We introspect using high-level thought centers and see that we are conscious. In so doing, we change our conscious state and focus just on what we can see—typically the high-level verbalizable thoughts. This may incline us to think that only those high-level thoughts were conscious, when in fact the whole neural ecosystem is really working together in a continuous fashion.
This analogy is not perfect, because there still seems to be a distinction between the so-called "conscious" and "unconscious" parts of our brains even when we're not explicitly thinking about the question. Of course, how can we know for sure? Whenever we think about this question, we might be "collapsing" the system by imagining that only the parts of our brains that our high-level thought centers can see (either through lingering neural activity or memory logs) were "conscious".
Say you glance at a visual scene that contains a firetruck. You think: "Oh, a scene." Then you look at the firetruck in more detail: "Oh, a firetruck." Then you notice the redness of the firetruck: "Oh, there's a redness to its red color." You wonder why there's a redness of red. I conjecture that the redness feeling is a higher-order reflection on a piece of data that had previously been just part of your brain's recognitional processes. When you pick out the redness attribute, it feels like it embodies more than just information because your focus on that attribute creates a complicated thought about it. This is different from how the redness information was processed before you noticed it as being a quale.
In Consciousness and the Brain, Stanislas Dehaene mentions another analogy with QM. Our "unconscious" brain seems to store a probability distribution over models that explain our sense data. But only one of those models becomes "consciously" broadcast. This may explain optical illusions like the Necker cube where we only see one interpretation at a time but can switch between them.
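A crude sketch of that picture (with invented numbers and an arbitrary switching rule): the "unconscious" stage holds a distribution over interpretations of an ambiguous stimulus, while the "conscious broadcast" exposes only one winner at a time, which can flip as with the Necker cube:

    import random

    # Toy version of "a distribution over models, of which one gets broadcast".
    # The probabilities and switching rule are invented for illustration.
    interpretations = {
        "Necker cube seen from above": 0.55,
        "Necker cube seen from below": 0.45,
    }

    def conscious_broadcast(dist):
        """Pick a single interpretation to 'broadcast'; the rest of the
        distribution stays unconscious."""
        r = random.random()
        cumulative = 0.0
        for model, prob in dist.items():
            cumulative += prob
            if r <= cumulative:
                return model
        return model  # fallback for floating-point rounding

    # At any moment only one interpretation is "seen", but repeated queries
    # can flip between them, as in bistable illusions.
    for _ in range(5):
        print(conscious_broadcast(interpretations))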
What is indexicality?
I've suggested that being a chunk of physics can explain the first-person nature of our experiences. This seems to be a step forward, but it leaves the question: What does it mean to be a given chunk of physics?
A naive notion of indexicality might conceive of a "soul" that can be embodied in various spatiotemporal locations—this person, that animal, that tree, etc. These souls are discrete things that can occupy specific subsystems within physics, such as the set of cells in my body. We see this language of souls used in a non-literal but still conceptual way in discussions of anthropic reasoning. For instance, John Leslie said: "it seems wrong to treat ourselves as if we were once immaterial souls harbouring hopes of becoming embodied [...]." The incoherence of this soul-like conception of anthropics was the main motivation behind my essay "Anthropics without Reference Classes", though I'm not sure if my proposal there is successful.
A less confused notion of what it means "to be a given physical system" might be that reflective thoughts are directly hooked up to the system rather than merely conceptualizing the system in the abstract. This seems to track what we mean when we distinguish first person from third person. An experience is first-person if information about it influences our (verbal and non-verbal) thoughts via direct wiring within a person's body. It's third-person if we only see it, hear of it, or imagine it. When we imagine zombies, we do so by looking at them "from the outside", so our thoughts can avoid activating subroutines that, if they were executed directly in our brains, would cause us to feel and say that we were conscious.
Anthropic reasoning seems relevant to consciousness. For instance, I often ask myself why "I" seem to be my linguistic thinking brain rather than, say, the nerves in my arm or computations in my basal ganglia. Presumably the answer is that, by a selection effect, if I'm asking this question in a linguistic way, I must be the linguistic brain regions. Of course the language regions are going to say they're the center of my conscious universe because they're the only ones who can answer my query. As John Gregg analogizes:
The chatterbox produces words, and words are very potent or sticky tags in memory. They are not merely easy to grab hold of, they are downright magnetic. They are velcro. The output of this particular module seduces us into thinking that what it does, its narrative, is "what I was thinking" or "what I was experiencing" because when we wonder what we were experiencing or thinking, its report leaps to answer.
I think this selection effect misleads some people into thinking that language is essential for consciousness, because whenever they ask questions about where their consciousness is, it's the chatterbox who speaks up. But of course there's plenty of activity elsewhere in the mind. As Dennett says, this chatterbox-created "self" is a kind of "center of narrative gravity"—a conceptual construct that doesn't actually exist as a concrete thing.
Suppose I have my higher thought and language centers disabled, perhaps under general anaesthesia. Then I undergo an operation. What is it like to be my peripheral nerves and lower-level brain centers? Does it still "hurt" for them? When my verbal centers try to answer this, they can only talk in terms they understand. They can make metaphors and gestures in some direction, but they themselves can't be something they're not. The parts of my brain that decide what they care about in an explicit way seem mainly to have access to the explicit thoughts, which leaves out the peripheral nerves and such unless I include them using abstract reasoning of the type I'm doing now. One of my biggest sources of moral uncertainty is whether I care about the parts of me that my explicit thinking processes are not. This question haunts me on a regular basis.
Acknowledgements
Thanks to Jonathan Erhardt for fruitful conversations and for inspiring me to explore standard philosophy-of-mind literature more than I had done in the past. Out of that exploration emerged this essay as a new way to frame an old debate that I had with others and myself during 2009-2014. Adriano Mannino challenged me on whether "consciousness" means "functional processing". Diego Caleiro pointed out some confused statements I had written about indexicality.
Footnotes
- This depends on which notion of consciousness zombies are defined to lack. If "phenomenal consciousness" has the analytic-functionalist meaning, then zombies are logical contradictions. If "phenomenal consciousness" means the kind of consciousness that eliminativists reject—some additional essence of experience beyond functional processing—then zombies are not only conceivable, but in fact, we are zombies! (back)
- The linked page says "Do not quote" because this is an early draft of a book chapter. But the full book is super expensive, so I don't have a copy of it. (back)
- The linked page says "please do not quote from this version", but I don't have access to the final published text. (back)
- Alternatively, if we interpret the statement as "If you consciously1 think you're conscious2, then you're conscious2", then it's obvious this might be false. For example, if you consciously think that you're an elephant, this doesn't make you an elephant. (back)
- You might reply that the hard problem of consciousness is taken seriously by many smart people who reject ghosts and religion, but remember that in previous centuries, the smartest minds (Newton, Leibniz, etc.) did believe in religion. Elite beliefs are not infallible and change on long time scales. (back)
- Taking consciousness as a primitive of nature still retains just as much mystery. You can't solve mysteries by declaring them to be brute facts; you can only sweep them under your rug of ignorance. In any case, as I've noted elsewhere, taking consciousness as primitive doesn't even explain why we believe in consciousness the way physicalism does. (back)
- When I was somewhere around 10-12 years old, my mom told me about time dilation based on a book she was reading. I replied with something like this: "That cannot be right. It's logically absurd! You must have understood the book wrong." (back)
- An alternate way to phrase this reply to Chalmers's "Master Argument" is to embrace our status as zombies relative to a non-type-A conception of consciousness. In this case, using Chalmers's symbols, "P&~C is not conceivable", and Chalmers's premise (5) that "Zombies do not share our epistemic situation" is false (since we are zombies). (back)
- Scott Aaronson:
Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker. In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry. What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point. So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc. But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.
You can probably see where this is going. What if we homomorphically encrypted a simulation of your brain? And what if we hid the only copy of the decryption key, let’s say in another galaxy? Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?
- Erhardt offered a reply to this claim:
You write that if we discover a hard problem in some domain, we probably think about the problem the wrong way.
I think there are billions of little "hard problems", and these hard problems lead us to the best physical theories we have. There is, for example, the hard problem of the moon. Explaining the functional structure of Earth and how it affects earthquakes, etc. is simply not sufficient to explain the existence and behaviour of the moon. This point is so trivial that we don't even notice it usually: of course describing earth facts does not explain moon facts! But the reason we believe this, the cognitive mechanism behind this view, is a conceivability move. Imagine all the earth facts hold - do they epistemically necessitate the moon facts? Can you imagine a "moon zombie", namely a world where all the earth facts hold but the moon facts don't? And yes, we clearly can. The "moon hard problem" is the reason why we postulate additional stuff to earth-stuff to explain the moon. And if we look at the number of hard problems in the history of our scientific inquiry, then the hard problem intuition might be among the most reliable intuitions we've ever had. For every single additional existence postulate, be it an elementary particle or a galaxy, we've accepted some hard problem intuition.
Of course we don't call them "hard problems" because they're not hard. It's pretty obvious what kind of facts explain moon facts. But the mechanism behind it is exactly as in the hard problem of consciousness. Question: Do A-facts explain B-facts? Conceivability: Try conceiving of a B-zombie, a world where A facts obtain and B facts don't. Conclusion: If a B-zombie is conceivable, then A-facts don't explain B-facts.
This is a good point. I guess what I meant by a "hard problem" was a question where a sensible explanation doesn't seem forthcoming. To me, and apparently to most scientists, all non-reductive explanations of consciousness set off absurdity alarms. In any case, it's not clear there are really "consciousness facts" over and above physical facts. (back)
- An error theory directed against more-than-Newtonian physics is possible, but it seems somewhat different from rejecting the zombic hunch, because beliefs in more-than-Newtonian physics don't rest on just one single intuition; rather, those ideas can be fleshed out from many angles. In contrast, zombies, Mary's room, Nagel's bats, and all the rest seem to me to be one single idea—and one which gives us no help in engineering GPS navigation the way relativity does. The possibility of nonsense in philosophy is much higher than in physics, because philosophy lacks experimental feedback.
In any case, Erhardt's example actually helps prove my point in the following sense. Physicists don't claim that Newtonian mechanics really does explain our data and that they simply can't see how; but most leading physicists do expect that some simple theory of everything explains our data even though they can't (yet) see how. Because of Occam's razor, most of them believe there will be a theory of everything simpler than our current set of equations and constants, in spite of failures so far to produce it. Likewise, for those who think there does seem to be an explanatory gap, Occam's razor should lead them to expect the explanatory gap to eventually be dissolved, in spite of the (alleged) failure to dissolve it so far. This is what Chalmers calls type-C physicalism, which I would expect to eventually collapse into a type-A view. One counterargument might be that physics has made some progress already—gradually converging toward a theory of everything, slowly but surely unifying seemingly disparate phenomena. But so too has naturalist philosophy gradually converged toward physicalism, slowly but surely removing ghosts, gods, Platonic forms, and the like. Moreover, my personal feeling of an explanatory gap for consciousness has dropped steadily as I've learned more neuroscience and artificial intelligence. (back)