A Simple Program to Illustrate the Hard Problem of Consciousness

By Brian Tomasik


Summary

This page links to a simple Python program that's designed to illustrate the hard problem of consciousness. The basic idea is that the hard problem is confusing because our brains create a thought/intuition that there's something it's like to be us. The details of how this idea arises are hidden from introspection, much as the neural mechanisms behind optical illusions are hidden from our ability to describe them using introspection alone.

Of course, this doesn't explain what thoughts even are in the first place. For further puzzling over this, see "My Confusions about the Hard Problem of Consciousness".

Translations: Español


The program

The Python code for the agent is here: HardProblemAgent.py (GitHub version)

Luke Muehlhauser created a better-commented version of the program.

A sample run

Here's the output from one run of the program:

Hi there.

I'm going to look at an object.
(Wavelength = 662.)
I see red.
It reminds me of firetrucks.

Cool. Now, let me see if it feels like something to see red.
Does it feel like something to see red?
Answer: yes
Ok, but _why_ does it feel like something to see red?
This seems completely unexplained. It's clear that my brain can perceive colors, but why, when I ask myself whether there's something it feels like to perceive these inputs, do I realize that yes, there is something it's like? Hmm. Off to read more David Chalmers, I guess.
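
For readers who don't want to open the linked file, here's a rough sketch of how a program could produce output like the above. This is my own reconstruction for illustration, not the actual HardProblemAgent.py; the wavelength thresholds and function names are assumptions.

import random

# A hypothetical reconstruction, for illustration only; the real
# HardProblemAgent.py may be organized differently.

def perceive(wavelength):
    # Map a wavelength (in nm) to an internal color label and an association.
    # The thresholds below are illustrative, not taken from the original program.
    if wavelength >= 620:
        return {"color": "red", "association": "firetrucks"}
    elif wavelength >= 495:
        return {"color": "green", "association": "grass"}
    else:
        return {"color": "blue", "association": "the sky"}

def feels_like_something(percept):
    # The agent's self-model simply asserts that perceiving feels like something.
    # It has no subroutine for inspecting how this answer gets produced.
    return True

def main():
    print("Hi there.\n")
    print("I'm going to look at an object.")
    wavelength = random.randint(400, 700)
    print("(Wavelength = {}.)".format(wavelength))
    percept = perceive(wavelength)
    color = percept["color"]
    print("I see {}.".format(color))
    print("It reminds me of {}.\n".format(percept["association"]))
    print("Cool. Now, let me see if it feels like something to see {}.".format(color))
    print("Does it feel like something to see {}?".format(color))
    print("Answer: {}".format("yes" if feels_like_something(percept) else "no"))
    print("Ok, but _why_ does it feel like something to see {}?".format(color))

if __name__ == "__main__":
    main()

The point to notice is that the "yes" comes from a component the program cannot inspect any further, which is the parallel this page draws to our own introspective reports.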

Moral of the program

The moral of this program is as follows. Whenever we think we feel the redness of red or other qualia, this is a thought that our brains produce. We have some notion of what the raw feels of experience are, and when we notice our feelings and ask whether they have a distinct texture, we think: Yes, they do!

Why? Because our minds generate the thought that our sensations do "feel like something". We have some intuitive notion of being conscious that's somehow encoded in our neural networks, and the best linguistic expression of this that we can produce is to say "it feels like something". We don't have much better vocabulary for putting into words the way our neurons reflect on their own network configurations.

"Ineffable qualia" are our best descriptions of neural codes that get activated by various stimuli, because our brains lack the machinery to analyze these basic neural representations further. Similarly, this Python program has no subroutines to meaningfully reflect on its own lower-level processing. See also Gary Drescher's discussion of qualia as comparable to "gensyms".

Note that this program is not constructed in the same way animal brains are; it's just intended to convey the gist of an idea.

Application to inverted spectrum

This program also affords insight into the inverted spectrum argument.

Assuming this program had rudiments of color qualia, what would it take to reverse those qualia? For instance, how would we swap green and blue?

We could try swapping the agent's internal color codes, the words it attaches to them, or the associations they trigger, and we could keep going, but no matter how we slice it, we either end up with trivial symbolic changes or else something becomes inconsistent. Ultimately, colors are as colors do: a color perception just is the set of functional operations involved with processing that color. Chapter 12.4 of Consciousness Explained elaborates on this point using similar arguments.

Alternatively, if one rejects Juliet's views about roses and considers the symbol itself to be where the color qualia reside (an extreme version of an identity theory), then the qualia could be inverted by a mere symbol change—but this could be done in a completely non-mysterious, physical way. For example, in analogy with the idea that C-fiber firing should be called "pain", one could declare that the neural code '10' should be called "blue experience", even if that code results from receiving green light and reminds the observer of grass. But using words in this way is rather silly and meaningless.
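
Here's a toy, hypothetical illustration of that dilemma; the internal code names and associations are my own inventions in the spirit of the program:

# The internal code "10" is produced by green light and triggers grass associations.
processing = {
    "10": {"label": "green", "triggered_by": "green light", "reminds_of": "grass"},
    "11": {"label": "blue", "triggered_by": "blue light", "reminds_of": "the sky"},
}

# Option 1: swap everything attached to the two codes.  Functionally nothing
# changes; we've merely renamed internal symbols, which is a trivial change.
inverted_all = {"10": processing["11"], "11": processing["10"]}

# Option 2: swap only the label, as the "qualia live in the symbol" view suggests.
inverted_label_only = {
    "10": {"label": "blue", "triggered_by": "green light", "reminds_of": "grass"},
    "11": {"label": "green", "triggered_by": "blue light", "reminds_of": "the sky"},
}

state = inverted_label_only["10"]
print("A '{}' experience, triggered by {}, reminding me of {}.".format(
    state["label"], state["triggered_by"], state["reminds_of"]))
# Prints a "blue" experience that arises from green light and evokes grass,
# which is a merely verbal inversion.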

Why are conscious experiences unified?

When I look at the world, it seems as though I can see everything in front of me at once. But if, as the hard-problem agent suggests, computing thoughts about a certain visual input (in its case, a color) is an important part of perceiving that input, then how can I perceive all of my visual field at once? In Consciousness Explained, Dennett reminds us that even though it seems we "see" our entire visual field at once, we're actually focused on only a small portion of it, as change blindness and inattentional blindness demonstrate.

While an accurate explanation of why we seem to perceive our full visual field requires neuroscience research, here's a possible theoretical account, based on Dennett's multiple-drafts model. Our eyes take in lots of visual information, and our early visual cortices process it. Higher-level visual information then becomes available in the brain. Because all the data are there, whenever our attention focuses on any given portion of the visual field, we find that we can see it. All the other items in the periphery remain available, and we could quickly move attention to them as well. So our visual field just is the simultaneous activation of lots of visual neurons conveying information. There's no need to re-broadcast all the data in some Cartesian theater. Whenever we notice something in depth, our brain simply gives prominence to a particular portion of the data that it already has.

From Consciousness Explained (pp. 257-58):

incautious formulations of 'the binding problem' in current neuroscientific research often presuppose that there must be some single representational space in the brain (smaller than the whole brain) where the results of all the various discriminations are put into registration with each other—marrying the sound track to the film, coloring in the shapes, filling in the blank parts. There are some careful formulations of the binding problem(s) that avoid this error, but the niceties often get overlooked.

The high resolution of what we see comes from the immense numbers of visual neurons that humans have. Our "olfactory field" and "gustatory field" are vastly less detailed.

This sort of account of sensing input data is weakly present in even simple computer programs: Data come in, get stored in globally accessible data structures, and can be used as needed by subsequent processes that focus more on whatever portion of the data they're interested in.
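
For instance, here's a minimal sketch (my own, not from any particular program) of a globally stored "visual field" that later processes query only as needed:

# The whole "visual field" is stored at once as ordinary data.
visual_field = [[(x + y) % 256 for x in range(640)] for y in range(480)]  # fake pixel values

def attend(top, left, bottom, right):
    # "Attention" is just a later process reading out one region of data that
    # was already sitting there; nothing is re-broadcast to a central theater.
    return [row[left:right] for row in visual_field[top:bottom]]

patch = attend(100, 200, 110, 210)
print(len(patch), "rows of", len(patch[0]), "pixels attended to")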

One time, I saw the letters "tornad" from far away on my computer. They were actually the first part of the word "tornado", but from a distance, I interpreted them as "tomas"—the first five letters of my last name. My brain's language model assigns high prior probability to the unigram "tomasik", so it interpreted the characters as that word. Not only did I think at an abstract level that I was viewing the first letters of "tomasik", but I could actually see that those were the letters in front of me: "tomas". But if consciousness gives us high-resolution access to everything in our visual field, then I shouldn't have seen the wrong word. Dennett's model of consciousness better explains this anecdote: our judgments about what's in our visual field are neurally represented in more abstract ways, and we tell ourselves that we see certain things when the need arises to make such judgments. When my brain thought about the word it was seeing, it asked for that information from the part of my brain that (mis)classified the word.

"Am I conscious?" "Yes!"

One way to think about the eliminativist view on consciousness, inspired by Michael Graziano, is with yes-or-no questions. Consider the rich visual scene in front of you. You ask yourself, "Am I seeing a rich visual scene?" And your brain computes, "Yes, I am! It has various shapes and colors to it." Likewise for the sounds you hear, the feeling of your body in your chair, and the train of thoughts running through your head.

And so on. Whenever we ask ourselves whether we're conscious, we judge that we are, via various internal computations that reflect on ourselves and our brain's stored data. It's not hard to imagine writing simple computer programs that would caricature these more complex mental processes.
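
For example, a caricature of that self-questioning might look something like the following sketch (the stored "brain state" and wording are of course invented for illustration):

# Whenever the system asks itself whether it's experiencing something, a
# hard-coded self-model answers "yes" and cites whatever data it has stored.
brain_state = {
    "a rich visual scene": "various shapes and colors",
    "background sounds": "a low hum",
}

def am_i_experiencing(thing):
    if thing in brain_state:
        return "Yes, I am! It has {} to it.".format(brain_state[thing])
    return "I don't notice that right now."

print(am_i_experiencing("a rich visual scene"))
print(am_i_experiencing("background sounds"))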

Of course, our brains don't (usually) literally ask yes-or-no questions. Most of the relevant processing is more implicit. But the idea is similar. Our brain components represent to ourselves that we are conscious in whatever sundry ways those brain components have of telling us things.

There's no need for an extra ontological property that is "consciousness". Our brains just claim that they're conscious whenever we ponder the question, and we believe them.

Suppose you run your finger across your arm. What is that feeling that you have? It's activation of various sensory receptors, which trigger follow-on effects in the brain, including eventually construction of thoughts like "I feel my finger on my arm." But why does it feel like something? It doesn't. Your brain just insists that it does whenever you think about the issue, because your brain models yourself as having ineffable feelings. Any thought you have that "No, there really is something it feels like!" has already been tampered with by your brain's propaganda agency to ensure conformity with the ideology that you have experiences that "feel like something". You can't reach the end of a rainbow, because as you move, the optical illusion that is the rainbow moves with you. Likewise, if you try to think faster than your thoughts to find the "real qualia" that exist before your propagandistically censored thoughts about qualia occur, you will be engaging in further thoughts (further "optical illusions") that will still represent qualia as being further away (i.e., not yet explained).

As-needed judgments

In one discussion on consciousness, Alan Alda noted: "I star in all of my dreams." Why do we star in all our dreams? Presumably it's because we star in all experiences that our brain has, in the sense that when our brain has perceptions, it attributes them to ourselves. This attribution likely involves activation of some self-model to make sense of the situation.

That said, it's important not to think of this self-awareness attribution as being added to our experiences before they reach consciousness in the Cartesian theater. Rather, we merely have many brain processes computing what they compute, and when it becomes relevant to ask whether we're the star of our dreams (such as when you ask the question now), then this attribution of selfhood can contribute to the output responses, such as by influencing speech acts (including silent talk to oneself), mental imagery, etc. In general, information is used as needed. It doesn't all glob together into one big pile of stuff that all gets presented simultaneously to a Cartesian theater.