The Hard Problem of Consciousness Isn’t a Problem

It’s the Behaviour That Matters

Below is a lightly edited transcript. For the article that inspired this one, see Stupid Questions.

Welcome to the btrmt. Lectures. My name is Dr Dorian Minors, and if there’s one thing I’ve learnt as a brain scientist, it’s that there’s no instruction manual for this device in our head. But there are patterns to the thing—patterns of thought, patterns of feeling, patterns of action. Because that’s what brains do. So let me teach you about them. One pattern, one podcast. You see if it works for you.

Now, this is another one in my series of bits on questions that people ask me which seem important but don’t actually end up mattering for most people, certainly not in the way they’re typically deployed. When people ask me about this, it’s because they’ve heard about an interesting problem in science. But often when I see these questions out in the wild, they’re deployed like stupid questions that make the asker seem smart.

So let me tell you about these irrelevant questions so you can avoid wasting mental energy on them.

The Redness of Red

What is consciousness? Everybody wants to know lately, largely because we wonder if AI is conscious. But understanding whether something is conscious means that we first have to understand what consciousness is. And this is where you start running into problems. And these are problems that I actually don’t really think matter.

Now, the typical way that people teach consciousness is to talk about the colour red—the redness of red. And I have no idea why that’s the convention. Maybe it’s because it’s a good illustration of the thing, and there aren’t actually many good illustrations of the thing.

So let me give you an example of this. And for me, the cleanest example is Frank Jackson’s thought experiment that he called Mary’s Room.

So you imagine this woman, Mary. She’s a brilliant scientist, just like me, and for whatever reason, she was forced to investigate the world from some kind of black-and-white room. And she’s looking into a computer screen that is also completely black-and-white. And what she specialises in is the neurophysiology of vision. And in the process of her studying the world from a black-and-white room through a black-and-white computer screen, she obtains everything there is to know—every physical fact there is to obtain about colour, about red, about ripe tomatoes or how we see the sky. She knows the terms red and blue. She can describe everything about how colour is processed in the brain.

But the question we want answered is: what happens when Mary is released from the black-and-white room, or her computer screen is transformed into a colour monitor? Does she learn something new?

Mary has never seen the colour red before, but she knows everything there is to know about it. And then one day she sees it. Has she learned something new about the colour red?

I think you’ll agree that she has. Certainly Frank Jackson thought so. Lots of people think so. There is some kind of knowledge about things beyond the physical properties that we understand. For red, it’s its redness. And knowing about red isn’t the same as experiencing it.

Now, this—whatever this is, the redness of the colour red, the feeling of pain that we experience when we’re slapped across the face, the feeling of beauty that we experience when we’re looking out over a vista—whatever this is, is an example of what’s known as qualia.

And Thomas Nagel is another famous philosopher, famous for his work on consciousness, who puts it in an interesting way. And I’ll link the paper, although honestly, it’s a bit impenetrable. But what he says is that although we can in theory understand everything there is to know about how bats echolocate—how they make the clicking sounds that allow them to see through their ears—we can understand the physics of sound waves, we can understand the physiology, we can understand the behavioural responses, the information processing that happens in the brain. We can understand all of this stuff, but we will never know what it’s like to experience the world through echolocation.

There is something that it is like to be a bat, and it doesn’t really seem like you can pass along that subjective phenomenal character of batness with a description. That is consciousness.

Chalmers and the Hard Problem

So now to the problem. And the problem is: why does red have redness? Why is there something it is like to be at all, never mind something it is like to be a bat? And it’s a difficult question to answer.

Chalmers famously called this the hard problem of consciousness. And he called it the hard problem of consciousness because he poses it against what he calls easy problems. There are phenomena that are related to qualia, to experience, that we can in theory explain. I’ll quote his book: “the performance of all the cognitive and behavioural functions in the vicinity of experience—perceptual discrimination, categorisation, internal access, verbal report.” That’s the quote.

You know, all of these are processes that lend themselves to examination. They can be explained functionally and mechanistically. To Chalmers, they are easy problems. The hard problem is explaining why these things are accompanied by a sense of experience, by qualia. Why does Mary learn something new when she sees red beyond knowing all its physical properties? And why can’t we know what it’s like to be a bat? That’s the question.

And Chalmers uses the example of a sort of automaton to drive this home. So you could imagine a person who goes about behaving in all the ways that you or I do, but they have absolutely no experience attached to that behaviour. They’re some kind of zombie or a robot, just mechanically acting and reacting to the world around them.

There doesn’t seem on the surface of it any reason to build that robot so that it also has to experience the stuff. If you were going to save money, you would save money on the experience. It doesn’t need it—at least in theory, conceivably. That is the hard problem of consciousness.

The Non-Solutions

And there are a lot of solutions to it. And I’m going to detail them briefly in a second. But what I really want to point out is that none of them seem to really matter very much. So let’s get into that.

So lots of people try and solve the hard problem of consciousness in lots of ways. And I have an entire article that details this. So I’ll link that in the show notes. But I will talk about them here kind of briefly—maybe even a little more than briefly, to be honest, because I think it is kind of interesting.

So the first solution to the hard problem of consciousness is the non-materialist view. Non-materialists say that there’s both material stuff, physical stuff, and there’s this separate kind of experience stuff. A classic example of this is dualism. So this is the idea that there is a distinction between the mind and the body or the body and the soul. And this is kind of sliding out of fashion in an increasingly secular world, but it can be a secular position.

Then there are the emergentists and the functionalists. And these people say that consciousness is just some kind of special property of the material world that emerges from specific configurations of material stuff. So in the same way that you get water when you put together two hydrogen atoms and an oxygen atom—and with it this sort of property of liquidity—if you arrange neurones in a certain way, you get consciousness. That’s the basic idea.

And then there’s people who treat it as a mistake. There are illusionists and eliminativists. And these people say that thinking of consciousness at all is a mistake.

Illusionists, the easier position to describe, they basically say that consciousness is an illusion. In the same way, I guess, that movies are a sort of illusion. Movies produce the illusion of motion by flashing still images so fast that we process them as moving. And in the same way, maybe consciousness, this experience, is produced by a bunch of snapshots of what the brain is doing at any given point in time. I think it was Daniel Dennett who called it an edited digest of all the events going on in the brain, like a general sense of the shape of things.

And then there’s the sort of last group, and these guys are called panpsychists. And what they say is that maybe consciousness sits in the space that physics doesn’t explain. So maybe it’s like the intrinsic nature of stuff—and I’m going to have to explain that a little bit, aren’t I?

So physics tells us what things do. It doesn’t tell us what things are. Physics can tell us that atoms have a certain mass, for example, but mass is characterised by behavioural properties. So you’ve got gravitational attraction or resistance to acceleration. I don’t know, I’m not a physicist. But these physical properties don’t actually have anything to say about their underlying nature. So maybe physics describes what things do, and consciousness is what things are.

Why None of It Matters

And at this point, I think we can stop, because if you’re getting tired, I’ve sort of made my point. Because what’s super annoying about all of this is that it’s impossible to have a conversation with all these different perspectives in the room, because you basically have to take whichever one you prefer on faith. All of them—and I’ll link to an article that describes this in more detail—all of them suffer from the same explanatory gap.

Whether you think that consciousness arises from the configurations of neurones, or it comes from a soul, or it comes from this sort of space that physics leaves unexplained, you still have to explain how it actually interacts with the stuff that we do know about—Chalmers’ easy problems. And nobody’s managed this. Nobody’s even close.

Now, the modal position in academia is the emergentist one—that consciousness sort of comes about with the right configuration of neurones or whatever. Back at my old lab, you ask any given brain scientist there, and this is basically what they would say. And in fact, they would probably be confused that there were other perspectives on this. I would have said this a while ago, even having studied consciousness as part of my academic trajectory.

And I think that this view is so popular because it feels like science, even though the explanatory gap is actually identical. It feels more scientistic, and that’s what we sort of value now. And maybe it also feels reasonable because consciousness certainly seems like it’s dependent on our perceptions. The hard problem seems related to the easy problem because you can’t really experience something without perceiving it first.

So I think there’s this sort of optimism that if we just study the brain hard enough, eventually consciousness will turn from a hard problem into an easy one. And so what you see is all these debates about whether AI systems have genuine experiences or whether honeybees are conscious. Or there’s one of my favourite articles out there by a guy called Brian Key, writing in the equivalent of academic caps lock—some of the most vehement academic writing I’ve ever seen—arguing that fish cannot feel pain, followed by dozens of people responding with articles arguing that fish do feel pain, that they do have the neurobiology, that they can suffer, that they are conscious.

People love to talk about this stuff, even though none of them have actually got any closer to solving the hard problem of consciousness.

So What?

And more to the point, you don’t need to pay attention to any of this because none of it matters. Let me wrap up and show you why.

You know, these kinds of academic debates are precisely when I start to lose interest in the project, because so what? Under precisely what circumstances does any of this stuff matter? When would it actually matter whether something was truly conscious or illusorily conscious? Because if things seem conscious, we already know what to do about it.

And Sam Harris is a philosopher who has a nice bit on this, and I’ll link to it in the show notes. He’s got both an essay and a TED Talk. And what he says—and I’ll quote him—“Why is it that we don’t have ethical obligations towards rocks? Why don’t we feel compassion for rocks? It’s because we don’t think rocks can suffer. And if we’re more concerned about our fellow primates than we are about insects, as indeed we are, it’s because we think they’re exposed to a greater range of potential happiness and suffering.” That’s the quote.

So I think Sam’s pointing at this sort of secret hope that we have that by working out what consciousness is, we can reduce suffering. But we can do that now. We can do it by caring about things that seem to suffer in a way that makes them seem to suffer less. And we can do all of that without proving that they have qualia, because we’re just not close to understanding this distinction.

And some people reckon that we’ll never crack it. There’s a group that I didn’t talk about called mysterians, and they reckon that just as an ant will never crack calculus, we’re never going to crack consciousness.

But I think, again, that’s sort of a distraction, because even if we could, I can’t actually tell what would change. Would we stop caring about animal welfare if we proved that they weren’t strictly conscious? Would we treat rocks differently if we found out that they were?

Of course not. Because it’s not an interesting question. Whatever consciousness rocks might have isn’t likely to change how we treat them, because what matters is the behaviour and how the behaviour expresses suffering. Whether there’s some ineffable “what it’s like” behind the curtain is practically irrelevant.

So why bother asking?

I’ll leave you with that.
