
Anil K Seth is professor of cognitive and computational neuroscience at the University of Sussex, and co-director of the Sackler Centre for Consciousness Science. He is also editor-in-chief of Neuroscience of Consciousness. He lives in Brighton.

https://aeon.co/essays/the-hard-problem-of-consciousness-is-a-distraction-from-the-real-one


3,600 words

Edited by Nigel Warburton

What is the best way to understand consciousness? In philosophy, centuries-old debates continue to rage over whether the Universe is divided, following René Descartes, into 'mind stuff' and 'matter stuff'. But the rise of modern neuroscience has seen a more pragmatic approach gain ground: an approach that is guided by philosophy but doesn't rely on philosophical research to provide the answers. Its key is to recognise that explaining why consciousness exists at all is not necessary in order to make progress in revealing its material basis; that is, to start building explanatory bridges from the subjective and phenomenal to the objective and measurable.

In my work at the Sackler Centre for Consciousness Science at the University of Sussex in Brighton, I collaborate with cognitive scientists, neuroscientists, psychiatrists, brain imagers, virtual reality wizards and mathematicians (and philosophers too), trying to do just this. And together with other laboratories, we are gaining exciting new insights into consciousness: insights that are making real differences in medicine, and that in turn raise new intellectual and ethical challenges. In my own research, a new picture is taking shape in which conscious experience is seen as deeply grounded in how brains and bodies work together to maintain physiological integrity, that is, to stay alive. In this story, we are conscious 'beast-machines', and I hope to show you why.

Let's begin with David Chalmers's influential distinction, inherited from Descartes, between the 'easy problem' and the 'hard problem'. The easy problem is to understand how the brain (and body) gives rise to perception, cognition, learning and behaviour. The hard problem is to understand why and how any of this should be associated with consciousness at all: why aren't we just robots, or 'philosophical zombies', without any inner universe? It's tempting to think that solving the easy problem (whatever this might mean) would get us nowhere in solving the hard problem, leaving the brain basis of consciousness a total mystery.

But there is an alternative, which I like to call the real problem: how to account for the various properties of consciousness in terms of biological mechanisms, without pretending it doesn't exist (easy problem) and without worrying too much about explaining its existence in the first place (hard problem). (People familiar with 'neurophenomenology' will see some similarities with this way of putting things, but there are differences too, as we will see.)

There are some historical parallels for this approach, for example in the study of
life. Once, biochemists doubted that biological mechanisms could ever explain
the property of being alive. Today, although our understanding remains
incomplete, this initial sense of mystery has largely dissolved. Biologists have
simply gotten on with the business of explaining the various properties of living
systems in terms of underlying mechanisms: metabolism, homeostasis,
reproduction and so on. An important lesson here is that life is not one thing; rather, it has many potentially separable aspects.

In the same way, tackling the real problem of consciousness depends on distinguishing different aspects of consciousness, and mapping their phenomenological properties (subjective first-person descriptions of what conscious experiences are like) onto underlying biological mechanisms (objective third-person descriptions). A good starting point is to distinguish between conscious level, conscious content, and conscious self. Conscious level has to do with being conscious at all: the difference between being in a dreamless sleep (or under general anaesthesia) and being vividly awake and aware. Conscious contents are what populate your conscious experiences when you are conscious: the sights, sounds, smells, emotions, thoughts and beliefs that make up your inner universe. And among these conscious contents is the specific experience of being you. This is conscious self, and it is probably the aspect of consciousness that we cling to most tightly.


What are the fundamental brain mechanisms that underlie our ability to be conscious at all? Importantly, conscious level is not the same as wakefulness. When you dream, you have conscious experiences even though you're asleep. And in some pathological cases, such as the vegetative state (sometimes called 'wakeful unawareness'), you can be altogether without consciousness, but still go through cycles of sleep and waking.

So what underlies being conscious specifically, as opposed to just being awake? We know it's not just the number of neurons involved. The cerebellum (the so-called 'little brain' hanging off the back of the cortex) has about four times as many neurons as the rest of the brain, but seems barely involved in maintaining conscious level. It's not even the overall level of neural activity: your brain is almost as active during dreamless sleep as it is during conscious wakefulness. Rather, consciousness seems to depend on how different parts of the brain 'speak' to each other, in specific ways.

A series of studies by the neuroscientist Marcello Massimini at the University of Milan provides powerful evidence for this view. In these studies, the brain is stimulated with brief pulses of energy, using a technique called transcranial magnetic stimulation (TMS), and its electrical 'echoes' are recorded using EEG. In dreamless sleep and general anaesthesia, these echoes are very simple, like the waves generated by throwing a stone into still water. But during conscious states, a typical echo ranges widely over the cortical surface, disappearing and reappearing in complex patterns. Excitingly, we can now quantify the complexity of these echoes by working out how compressible they are, similar to how simple algorithms compress digital photos into JPEG files. The ability to do this represents a first step towards a 'consciousness-meter' that is both practically useful and theoretically motivated.
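
To give a flavour of the idea, here is a toy sketch (not the actual analysis pipeline used in these studies): binarise a signal around its median and count how many distinct phrases a simple Lempel-Ziv-style parse needs to describe it. A stereotyped, repetitive 'echo' needs few phrases; a richer one needs many. The threshold choice and the synthetic signals below are assumptions made purely for illustration.

```python
import numpy as np

def lempel_ziv_complexity(binary_string):
    """Count the phrases in a simple LZ78-style parse of a '0'/'1' string:
    richer, less compressible sequences need more phrases."""
    seen = set()
    phrase = ""
    n_phrases = 0
    for ch in binary_string:
        phrase += ch
        if phrase not in seen:
            seen.add(phrase)
            n_phrases += 1
            phrase = ""
    return n_phrases + (1 if phrase else 0)

def binarise(signal):
    """Threshold a real-valued signal around its median before parsing."""
    thresh = np.median(signal)
    return "".join("1" if x > thresh else "0" for x in signal)

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 1000)

# A simple, stereotyped 'echo': a single damped oscillation.
simple_echo = np.exp(-3 * t) * np.sin(2 * np.pi * 10 * t)

# A richer 'echo': several frequencies plus noise, closer to waking activity.
complex_echo = (np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 23 * t)
                + 0.8 * rng.standard_normal(t.size))

print("simple echo :", lempel_ziv_complexity(binarise(simple_echo)))
print("complex echo:", lempel_ziv_complexity(binarise(complex_echo)))
```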

Complexity measures of consciousness have already been used to track changing levels of awareness across states of sleep and anaesthesia. They can even be used to check for any persistence of consciousness following brain injury, where diagnoses based on a patient's behaviour are sometimes misleading. At the Sackler Centre, we are working to improve the practicality of these measures by computing brain complexity on the basis of spontaneous neural activity (the brain's ongoing 'echo') without the need for brain stimulation. The promise is that the ability to measure consciousness, to quantify its comings and goings, will transform our scientific understanding in the same way that our physical understanding of heat (as average molecular kinetic energy) depended on the development, in the 18th century, of the first reliable thermometers. Lord Kelvin put it this way: 'In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it.' More simply: to measure is to know.

But what is the quality that brain-complexity measures are measuring? This is where new theoretical ideas about consciousness come into play. These start in the late 1990s, when Gerald Edelman (my former mentor at the Neurosciences Institute in San Diego) and Giulio Tononi (now at the University of Wisconsin in Madison) argued that conscious experiences were unique in being simultaneously highly informative and highly integrated.


Consciousness is informative in the sense that every experience is different from every other experience you have ever had, or ever could have. Looking past the desk in front of me, through the window beyond, I have never before experienced precisely this configuration of coffee cups, computers and clouds, an experience that is even more distinctive when combined with all the other perceptions, emotions and thoughts simultaneously present. Every conscious experience involves a very large reduction of uncertainty: at any time, we have one experience out of vastly many possible experiences, and reduction of uncertainty is what, mathematically, we mean by 'information'.

Consciousness is integrated in the sense that every conscious experience appears as a unified scene. We do not experience colours separately from their shapes, nor objects independently of their background. The many different elements of my conscious experience right now (computers and coffee cups, as well as the gentle sounds of Bach and my worries about what to write next) seem tied together in a deep way, as aspects of a single encompassing state of consciousness.

It turns out that the maths that captures this co-existence of information and integration maps onto the emerging measures of brain complexity I described above. This is no accident: it is an application of the real-problem strategy. We're taking a description of consciousness at the level of subjective experience, and mapping it to objective descriptions of brain mechanisms.
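
To make the two ingredients concrete, here is a minimal toy sketch (not Edelman and Tononi's actual formalism): treat a 'system' as two binary variables, let the entropy of the joint state stand in for information, and let the mutual information between the parts stand in for integration. Only a system that is both varied and coupled scores well on both counts.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a sequence of discrete states."""
    counts = Counter(samples)
    probs = np.array([c / len(samples) for c in counts.values()])
    return float(-np.sum(probs * np.log2(probs)))

def information_and_integration(x, y):
    """Information = entropy of the joint state (how many states the system
    visits); integration = mutual information between the two parts."""
    joint = list(zip(x, y))
    info = entropy(joint)
    integration = entropy(x) + entropy(y) - entropy(joint)
    return info, integration

rng = np.random.default_rng(1)
n = 10_000

# Independent coin flips: highly informative, but not integrated.
x1, y1 = rng.integers(0, 2, n), rng.integers(0, 2, n)

# Perfect copies: integrated, but the joint repertoire is small.
x2 = rng.integers(0, 2, n); y2 = x2.copy()

# Noisy coupling: a balance of information and integration.
x3 = rng.integers(0, 2, n)
y3 = np.where(rng.random(n) < 0.8, x3, rng.integers(0, 2, n))

for label, (x, y) in [("independent", (x1, y1)),
                      ("copies", (x2, y2)),
                      ("coupled", (x3, y3))]:
    info, integ = information_and_integration(x, y)
    print(f"{label:12s} information={info:.2f} bits, integration={integ:.2f} bits")
```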

Some researchers take these ideas much further, to grapple with the hard problem itself. Tononi, who pioneered this approach, argues that consciousness simply is integrated information. This is an intriguing and powerful proposal, but it comes at the cost of admitting that consciousness could be present everywhere and in everything, a philosophical view known as panpsychism. The additional mathematical contortions needed also mean that, in practice, integrated information becomes impossible to measure for any real complex system. This is an instructive example of how targeting the hard problem, rather than the real problem, can slow down or even stop experimental progress.

When we are conscious, we are conscious of something. What in the brain determines the contents of consciousness? The standard approach to this question has been to look for so-called 'neural correlates of consciousness' (NCCs). In the 1990s, Francis Crick and Christof Koch defined an NCC as the minimal set of neuronal events and mechanisms jointly sufficient for a specific conscious percept. This definition has served very well over the past quarter-century because it leads directly to experiments. We can compare conscious perception with unconscious perception and look for the difference in brain activity, using (for example) EEG and functional MRI. There are many ways of doing this. One of the most popular is binocular rivalry, in which different images are presented to each eye so that conscious perception flips from one to the other (while sensory input remains constant). Another is masking, in which a briefly flashed image is rapidly followed by a meaningless pattern. Here, whether the first image is consciously perceived depends on the delay between the image and the mask.
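
The logic of a masking experiment can be sketched in a few lines. The logistic visibility function and its parameters below are hypothetical, invented only to illustrate how visibility is mapped out as a function of the target-mask delay (the stimulus-onset asynchrony, or SOA).

```python
import numpy as np

rng = np.random.default_rng(2)

def p_seen(soa_ms, threshold_ms=50.0, slope=0.15):
    """Hypothetical probability of consciously seeing the target as a
    function of the target-mask delay (longer delays -> more visibility)."""
    return 1.0 / (1.0 + np.exp(-slope * (soa_ms - threshold_ms)))

soas = np.array([17, 33, 50, 67, 83, 100])   # delays in milliseconds
n_trials = 200                                # simulated trials per delay

for soa in soas:
    reports = rng.random(n_trials) < p_seen(soa)   # simulated 'seen' reports
    print(f"SOA {soa:3d} ms: reported seen on {reports.mean():.0%} of trials")
```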

Experiments such as these have identified brain regions that are consistently associated with conscious perception, independently of whether that perception is visual, auditory or in some other sensory modality. The most recent chapter in this story involves experiments that try to distinguish the brain regions involved in reporting a conscious percept (for example, saying: 'I see a face!') from those involved in generating the conscious percept itself. But as powerful as these experiments are, they do not really address the real problem of consciousness. To say that a posterior cortical 'hot spot' (for instance) is reliably activated during conscious perception does not explain why activity in that region should be associated with consciousness. For this, we need a general theory of perception that describes what brains do, not just where they do it.

In the 19th century, the German polymath Hermann von Helmholtz proposed that the brain is a prediction machine, and that what we see, hear and feel are nothing more than the brain's best guesses about the causes of its sensory inputs. Think of it like this. The brain is locked inside a bony skull. All it receives are ambiguous and noisy sensory signals that are only indirectly related to objects in the world. Perception must therefore be a process of inference, in which indeterminate sensory signals are combined with prior expectations or beliefs about the way the world is, to form the brain's optimal hypotheses about the causes of these sensory signals: of coffee cups, computers and clouds. What we see is the brain's best guess of what's out there.
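
In its simplest textbook form (a generic sketch, not a model drawn from this essay), such inference is Bayesian: a prior expectation is combined with a noisy sensory measurement, each weighted by its reliability (precision), to yield a 'best guess'. The numbers below are made up.

```python
# A minimal Gaussian example of perception as inference: the best guess is
# a precision-weighted average of prior belief and sensory data.

def perceive(prior_mean, prior_sd, sensory_mean, sensory_sd):
    """Combine a Gaussian prior with a Gaussian likelihood."""
    prior_precision = 1.0 / prior_sd**2
    sensory_precision = 1.0 / sensory_sd**2
    posterior_precision = prior_precision + sensory_precision
    posterior_mean = (prior_precision * prior_mean
                      + sensory_precision * sensory_mean) / posterior_precision
    return posterior_mean, (1.0 / posterior_precision) ** 0.5

# Expecting a friend (prior: a figure about 1.7 m tall) but getting a blurry,
# foggy glimpse suggesting something a little shorter.
best_guess, uncertainty = perceive(prior_mean=1.7, prior_sd=0.1,
                                   sensory_mean=1.5, sensory_sd=0.3)
print(f"best guess: {best_guess:.2f} m (sd {uncertainty:.2f} m)")
# The guess (about 1.68 m) sits close to the prior because the sensory
# evidence is unreliable: expectation dominates the foggy view.
```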

It's easy to find examples of predictive perception, both in the lab and in everyday life. Walking out on a foggy morning, if we expect to meet a friend at a bus stop, we might perceive her to be there, until closer inspection reveals a stranger. We can also hear words in nonsensical streams of noise, if we are expecting those words (play 'Stairway to Heaven' backwards and you can hear satanic poetry). Even very basic elements of perception are shaped by unconscious beliefs encoded in our visual systems. Our brains have evolved to assume (believe) that light comes from above, which influences the way we perceive shapes in shadow.


The classical view of perception is that the brain processes sensory information in a bottom-up or 'outside-in' direction: sensory signals enter through receptors (for example, the retina) and then progress deeper into the brain, with each stage recruiting increasingly sophisticated and abstract processing. In this view, the perceptual heavy lifting is done by these bottom-up connections. The Helmholtzian view inverts this framework, proposing that signals flowing into the brain from the outside world convey only prediction errors: the differences between what the brain expects and what it receives. Perceptual content is carried by perceptual predictions flowing in the opposite (top-down) direction, from deep inside the brain out towards the sensory surfaces. Perception involves the minimisation of prediction error simultaneously across many levels of processing within the brain's sensory systems, by continuously updating the brain's predictions. In this view, which is often called 'predictive coding' or 'predictive processing', perception is a controlled hallucination, in which the brain's hypotheses are continually reined in by sensory signals arriving from the world and the body. 'A fantasy that coincides with reality', as the psychologist Chris Frith eloquently put it in Making Up the Mind (2007).
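
The core loop is easy to caricature in code. Below is a minimal one-level sketch (a deliberate simplification, not any particular lab's model): the brain holds a prediction, the world delivers a noisy sensory sample, only the mismatch flows 'up' as prediction error, and the prediction is nudged to shrink that mismatch. The learning rate, noise level and hidden cause are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(3)

true_cause = 4.0        # the hidden state of the world generating the input
prediction = 0.0        # the brain's current hypothesis (starts badly wrong)
learning_rate = 0.1     # how strongly each error updates the hypothesis

for step in range(50):
    sensory_input = true_cause + 0.5 * rng.standard_normal()  # noisy signal
    prediction_error = sensory_input - prediction             # what flows 'up'
    prediction += learning_rate * prediction_error            # update the guess
    if step % 10 == 0:
        print(f"step {step:2d}: prediction = {prediction:5.2f}, "
              f"error = {prediction_error:5.2f}")

# After a few dozen steps the prediction hovers near the true cause, and the
# remaining errors are mostly sensory noise: perception as the brain's
# continually updated best guess.
```
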
Armed with this theory of perception, we can return to consciousness. Now, instead of asking which brain regions correlate with conscious (versus unconscious) perception, we can ask: which aspects of predictive perception go along with consciousness? A number of experiments are now indicating that consciousness depends more on perceptual predictions than on prediction errors. In 2001, Alvaro Pascual-Leone and Vincent Walsh at Harvard Medical School asked people to report the perceived direction of movement of clouds of drifting dots (so-called random dot kinematograms). They used TMS to specifically interrupt top-down signalling across the visual cortex, and they found that this abolished conscious perception of the motion, even though bottom-up signals were left intact.

More recently, in my lab, we've been probing the predictive mechanisms of conscious perception in more detail. In several experiments, using variants of the binocular rivalry method mentioned earlier, we've found that people consciously see what they expect, rather than what violates their expectations. We've also discovered that the brain imposes its perceptual predictions at preferred points (or phases) within the so-called alpha rhythm, an oscillation in the EEG signal at about 10 Hz that is especially prominent over the visual areas of the brain. This is exciting because it gives us a glimpse of how the brain might actually implement something like predictive perception, and because it sheds new light on a well-known phenomenon of brain activity, the alpha rhythm, whose function has so far remained elusive.
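
For readers curious how a 'preferred phase' can be measured at all, a standard signal-processing recipe (a generic sketch, not the lab's actual analysis) is to band-pass filter the EEG around 10 Hz, take the Hilbert transform to obtain the instantaneous phase, and then test whether the events of interest cluster at particular phase angles.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

# Toy 'EEG': a 10 Hz alpha rhythm buried in broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 1.5 * rng.standard_normal(t.size)

# Band-pass filter to the alpha band (8-12 Hz).
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
alpha = filtfilt(b, a, eeg)

# Instantaneous phase of the alpha rhythm via the analytic signal.
phase = np.angle(hilbert(alpha))

# Alpha phase at some (here, arbitrary) event times; in a real experiment
# these would be moments tied to perception.
event_samples = rng.integers(0, t.size, 200)
event_phases = phase[event_samples]

# A simple measure of phase clustering: the length of the mean resultant
# vector (0 = phases spread uniformly, 1 = all events at the same phase).
clustering = np.abs(np.mean(np.exp(1j * event_phases)))
print(f"phase clustering of events: {clustering:.2f}")
```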

Predictive processing can also help us understand unusual forms of visual experience, such as the hallucinations that can accompany psychosis or psychedelic trips. The basic idea is that hallucinations occur when the brain pays too little attention to incoming sensory signals, so that perception becomes unusually dominated by the brain's prior expectations. Different sorts of hallucination, from simple geometric experiences of lines, patterns and textures to rich hallucinatory narratives full of objects and people, can be explained by the brain's over-eagerness to confirm its predictions at different levels in the cortical hierarchy. This research has significant clinical promise, since it gets at the mechanisms that underlie the symptoms of psychiatric conditions, in much the same way that antibiotics tackle the causes of infection while painkillers do not.
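
Continuing the toy Gaussian sketch from earlier (again with made-up numbers), 'paying too little attention to sensory signals' corresponds to treating them as very imprecise: as sensory reliability falls, the best guess drifts away from the data and towards the prior expectation.

```python
# How the balance of precisions shifts perception towards prior expectation.
def best_guess(prior_mean, prior_sd, sensory_mean, sensory_sd):
    wp, ws = 1 / prior_sd**2, 1 / sensory_sd**2      # precisions
    return (wp * prior_mean + ws * sensory_mean) / (wp + ws)

prior_mean, sensory_mean = 1.0, 0.0   # expectation says 'face', data say 'no face'
for sensory_sd in (0.1, 0.5, 2.0, 10.0):
    guess = best_guess(prior_mean, prior_sd=0.5,
                       sensory_mean=sensory_mean, sensory_sd=sensory_sd)
    print(f"sensory sd {sensory_sd:5.1f} -> best guess {guess:.2f} "
          "(1.0 = fully the expected 'face')")
```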

Of the many distinctive experiences within our inner universes, one is very special. This is the experience of being you. It's tempting to take experiences of selfhood for granted, since they always seem to be present, and we usually feel a sense of continuity in our subjective existence (except, of course, when emerging from general anaesthesia). But just as consciousness is not just one thing, conscious selfhood is also best understood as a complex construction generated by the brain.

There is the bodily self, which is the experience of being a body and of having a particular body. There is the perspectival self, which is the experience of perceiving the world from a particular first-person point of view. The volitional self involves experiences of intention and of agency: of urges to do this or that, and of being the cause of things that happen. At higher levels, we encounter narrative and social selves. The narrative self is where the 'I' comes in, as the experience of being a continuous and distinctive person over time, built from a rich set of autobiographical memories. And the social self is that aspect of self-experience that is refracted through the perceived minds of others, shaped by our unique social milieu.

In daily life, it can be hard to differentiate these dimensions of selfhood. We move through the world as seemingly unified wholes, our experience of bodily self seamlessly integrated with our memories from the past, and with our experiences of volition and agency. But introspection can be a poor guide. Many experiments and neuropsychological case studies tell a different story, one in which the brain actively and continuously generates and coordinates these diverse aspects of self-experience.


Let's take the example of bodily selfhood. In the famous rubber-hand illusion, I ask you to focus your attention on a fake hand while your real hand is kept out of sight. If I then simultaneously stroke your real hand and the fake hand with a soft paintbrush, you may develop the uncanny feeling that the fake hand is now, somehow, part of your body. This reveals a surprising flexibility in how we experience owning our bodies, and raises a question: how does the brain decide which parts of the world are its body, and which aren't?

To answer this, we can appeal to the same process that underlies other forms of perception. The brain makes its best guess, based on its prior beliefs or expectations and the available sensory data. In this case, the relevant sensory data include signals specific to the body, as well as the classic senses such as vision and touch. These bodily senses include proprioception, which signals the body's configuration in space, and interoception, which involves a raft of inputs that convey information from inside the body, such as blood pressure, gastric tension, heartbeat and so on. The experience of embodied selfhood depends on predictions about body-related causes of sensory signals across interoceptive and proprioceptive channels, as well as across the classic senses. Our experiences of being and having a body are 'controlled hallucinations' of a very distinctive kind.

Research in our lab is supporting this idea. In one experiment, we used so-called 'augmented reality' to develop a new version of the rubber-hand illusion, designed to examine the effects of interoceptive signals on body ownership. Participants viewed their surroundings through a head-mounted display, focusing on a virtual-reality version of their hand, which appeared in front of them. This virtual hand was programmed to flash gently red, either in time or out of time with their heartbeat. We predicted that people would experience a greater sense of identity with the virtual hand when it was pulsing synchronously with their heartbeat, and this is just what we found. Other laboratories are finding that similar principles apply to other aspects of conscious self. For example, we experience agency over events when incoming sensory data match the predicted consequences of actions, and breakdowns in experienced agency (which can happen in conditions such as schizophrenia) can be traced to abnormalities in this predictive process.
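
The logic of the cardio-visual manipulation can be sketched as follows (a hypothetical simplification, not the experiment's actual code): given the times of detected heartbeats, the synchronous condition flashes the virtual hand on each beat, while the asynchronous control shifts every flash by half an inter-beat interval, preserving the rate but breaking the alignment with the heart.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical detected heartbeat times (seconds): roughly 70 beats per
# minute with a little natural variability.
inter_beat_intervals = 60 / 70 + 0.05 * rng.standard_normal(30)
heartbeat_times = np.cumsum(inter_beat_intervals)

def flash_schedule(beats, synchronous=True):
    """Times at which the virtual hand should flash red."""
    if synchronous:
        return beats                          # flash on every detected beat
    # Asynchronous control: shift each flash by half the local beat interval,
    # keeping the rate but destroying the alignment with the heart.
    local_interval = np.diff(beats, prepend=beats[0] - np.mean(np.diff(beats)))
    return beats + 0.5 * local_interval

sync = flash_schedule(heartbeat_times, synchronous=True)
async_ = flash_schedule(heartbeat_times, synchronous=False)
print("first synchronous flashes :", np.round(sync[:4], 2))
print("first asynchronous flashes:", np.round(async_[:4], 2))
```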

These findings take us all the way back to Descartes. Instead of 'I think, therefore I am', we can say: 'I predict (myself), therefore I am.' The specific experience of being you (or me) is nothing more than the brain's best guess of the causes of self-related sensory signals.

There is a final twist to this story. Predictive models are good not only for figuring out the causes of sensory signals; they also allow the brain to control or regulate these causes, by changing sensory data to conform to existing predictions (this is sometimes called 'active inference'). When it comes to the self, especially its deeply embodied aspects, effective regulation is arguably more important than accurate perception. As long as our heartbeat, blood pressure and other physiological quantities remain within viable bounds, it might not matter if we lack detailed perceptual representations of them. This might have something to do with the distinctive character of experiences of being a body, in comparison with experiences of objects in the world, or of the body as an object.
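
A crude way to see the difference between perceiving and regulating (a toy illustration, not a formal active-inference model) is a thermostat-like loop over a single physiological variable: the system holds a fixed 'prediction' (a set-point) and, instead of revising that prediction when errors arrive, it acts so that the sensed value is pulled back towards what was predicted.

```python
import numpy as np

rng = np.random.default_rng(6)

set_point = 37.0        # the fixed 'prediction' for body temperature (deg C)
temperature = 35.0      # actual state, starting out of range
action_gain = 0.3       # how strongly prediction error drives corrective action

for step in range(20):
    sensed = temperature + 0.1 * rng.standard_normal()    # noisy interoception
    prediction_error = set_point - sensed
    # Active inference in caricature: keep the prediction, change the world.
    temperature += action_gain * prediction_error
    temperature += 0.05 * rng.standard_normal()           # external disturbance
    if step % 5 == 0:
        print(f"step {step:2d}: temperature = {temperature:.2f} deg C")

# Regulation succeeds without the system ever building a detailed model of
# why temperature drifted: the error is cancelled by acting, not by updating
# the prediction, which is the contrast drawn in the paragraph above.
```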

And this returns us one last time to Descartes. In dissociating mind from body, he argued that non-human animals were nothing more than 'beast machines', without any inner universe. In his view, basic processes of physiological regulation had little or nothing to do with mind or consciousness. I've come to think the opposite. It now seems to me that fundamental aspects of our experiences of conscious selfhood might depend on control-oriented predictive perception of our messy physiology, of our animal blood and guts. We are conscious selves because we too are beast machines: self-sustaining flesh-bags that care about their own persistence.
