
Can consciousness emerge in a machine simulation?

Yissar Lior Israeli

There are many sci-fi books, movies and TV series depicting conscious machines: artificial entities
that are aware of themselves and their surroundings, machines that both think and emote. Could
such conscious machines ever be actualized? If so, it seems that an explanation is required as to how
consciousness could exist in inorganic matter. Even before that, to appreciate what such a
possibility would involve, we have to deal with some fundamental questions: What is consciousness?
How could we come to know that a machine is conscious?
It should be obvious by now that there is a relation between brain and mind. The most
straightforward evidence of this is that people with damaged brain tissue, due to injury or illness,
behave and experience the world differently than before. For example, those suffering from
Alzheimer’s disease, which results from a degeneration of the brain, not only lose their memory but
also exhibit mild to dramatic behavioural changes and a loss of self-identity.

This being the case, most philosophers believe that something like the possession of a brain is a
necessary but not sufficient condition for consciousness, i.e. without a brain, consciousness is not
possible, but having a brain is not enough; there is an extra something that enables consciousness.
The main discussion in philosophy of mind concerns what exactly this “extra” is and how it can explain
the relation of the brain to consciousness. While avoiding the trap of supplying an explicit definition of
consciousness, a term which often proves elusive, I will say that when philosophers use the term
consciousness, two criteria are commonly referred to: having a first-person point of
view, and having subjective experience in the sense of “what it is like” for me to be in this state. Over
the course of what follows, I hope to clarify and expand upon these criteria by presenting
different theories of mind that try to explain what consciousness is and how it can be accounted
for. In the first section of this essay I will briefly cover some of the main theories of consciousness in
philosophy of mind. Following this, I will present an argument in favour of
accepting consciousness as an emergent phenomenon. Lastly, I will argue that if consciousness is an
emergent property, then there is no reason in principle why it should not emerge from a machine
simulation.

1. Overview

Discussions of what consciousness is can be found in early Greek and Hindu philosophy; however,
Descartes may have been the first in the Western philosophical tradition to frame the mind-body
problem in a clear and reasoned way. Descartes’ view is that consciousness (mind) is immaterial and
somehow interacts with matter (the brain); thus, the material brain and the immaterial mind are two
ontologically distinct substances. This thesis is what has become known in the literature as Cartesian
dualism. Today, only a few philosophers or scientists support Cartesian substance dualism.

In contrast with dualism, physicalism, a form of monism, holds that there is nothing beyond physical
matter, and hence consciousness is a result of brain activity. We may not yet have the correct
scientific explanation of how consciousness arises from the brain but such an explanation involves
nothing beyond the world of physics. Physicalists usually explain consciousness via supervenience –
the idea that psychological states supervene on physical states; if two persons are indistinguishable
in all of their physical properties, they must also be indistinguishable in all of their mental properties.
The same goes for a computer running software: if my computer runs solitaire, then it is impossible
for any other computer with exactly the same electronic states not to be running solitaire.
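The supervenience claim can be made vivid with a toy computational sketch (an illustrative analogy only, not an argument; the decoding rule is hypothetical and chosen purely for illustration): if the low-level state of a machine fully fixes its high-level behaviour, then two machines with byte-identical states cannot differ in the “program” they are running.

```python
def running_program(memory: bytes) -> str:
    """Toy 'machine': which program runs is entirely a function of the
    low-level state, so the high-level fact supervenes on the bytes."""
    # hypothetical decoding rule, for illustration only
    return "solitaire" if memory and memory[0] == 0x2A else "idle"

a = bytes([0x2A, 0x00, 0x07])
b = bytes([0x2A, 0x00, 0x07])  # a physically indistinguishable duplicate

assert a == b                                    # identical low-level states...
assert running_program(a) == running_program(b)  # ...force identical high-level states
```

Any difference at the “software” level would require some difference in the bytes; this is exactly the no-mental-difference-without-a-physical-difference structure that supervenience asserts.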

Thomas Nagel (1974) introduced the following argument against physicalism: physical information
does not tell you what it is like to be a bat – how the world is from another point of view. Nagel: “An
organism has conscious mental states if and only if there is something it is like to be that organism –
something it is like for the organism.”1 Consider pain versus C-fibre firing: according to Nagel,
there is no such thing as ‘pain in itself’ (C-fibre firing) as an objective experience; there is only how
pain strikes me – a subjective experience.

Another strong argument against physicalism is the knowledge argument, in which Frank Jackson
(1982, 1986) introduces Mary, a brilliant scientist who learns all the physical truths about the world,
vision and colour from within a black-and-white room2. The argument posits that when Mary leaves
her B&W room and sees a red tomato, she learns a new fact – what it is like to see red. This occurs as
a result of her having a new subjective experience by which she becomes familiar with qualia, the
intrinsic qualities of phenomenal experience; in this case, redness. The conclusion of the argument is
that physicalism is false. While the gist of Nagel’s argument is that one can have all the physical facts

1 Nagel, “What Is It Like to Be a Bat?” (1974), p. 436.
2 Jackson, “Epiphenomenal Qualia” (1982), p. 130.

without having knowledge of another person’s point of view, the point of Jackson’s argument is that
one can have all the physical facts without having knowledge of qualia.

Several other arguments that support the existence of qualia include the conceivability argument,
inverted qualia, philosophical zombies and the explanatory argument. Chalmers (2003) says that
these arguments are part of a general problem, exemplified in what has become known as the
explanatory gap; Chalmers named this the ‘hard problem’ of consciousness – the problem of
explaining how qualia, or subjective experience, emerge in our minds3.

Physicalists have not remained silent in the face of these objections and have suggested several
counter-arguments in response. One such counter-argument, the ability hypothesis (Nemirow and
Lewis), argues that rather than having learned new facts or truths, Mary acquires new abilities; she
gains know-how, not knowledge-that4. Another response emphasizes the possibility of Mary acquiring
“knowledge by acquaintance”5. On this view, Mary becomes directly acquainted with the
phenomenal character of colour experience, in the way that one can become acquainted with a city
by visiting it.

Another view, adopted by both Patricia and Paul Churchland, is that when it is said that consciousness
emerges from the brain, there is a suggestion that something else besides the neural activity is going
on, something correlated with but distinct from the neural activity. But then again, what is that extra
thing that happens? According to this view, all questions about consciousness can be reduced to
what Chalmers calls the “easy problems” and eventually be solved. Put otherwise, the concepts of
folk psychology that we use to explain our mental states (intentions, beliefs, desires, etc.) will
eventually be replaced by neurobiological models that have yet to be developed. According to
Patricia Churchland, the fact that it is currently very hard for us to imagine a solution to the problem
of consciousness tells us absolutely nothing about whether or not this phenomenon can actually be
explained. In her view, it is too easy to conclude that a phenomenon such as consciousness is
inexplicable simply because current human psychology cannot grasp it6.

The knowledge argument continues to inspire ongoing discussion about the nature of consciousness
and its relation to the physical world. The main point of discussion is how subjective experience
and mental states arise from brain states.
3 Chalmers, “Facing Up to the Problem of Consciousness”, 1995
4 Lewis, “What Experience Teaches”, 1990
5 Conee, “Phenomenal Knowledge”, 1994, p. 144
6 Patricia Churchland, “Chalmers’ Zombies and The Hornswoggle Problem”, 2003

2. Emergence

Sometimes a system with multiple interacting components gives rise to surprising dynamics that
cannot be found or predicted by looking at any of the components in isolation; such an emergent
phenomenon is not a priori predictable from its substrata, and none of the components share the
property that the system at large holds. The explanation of emergent phenomena takes place at a
level distinct from that of the substrata.
Just as water (H2O) has the novel properties of wetness and liquidity that cannot be found in either
hydrogen or oxygen, so, by analogy, mental states arise from brain states but do not share their
properties; mental states are not identical to any brain states but instead emerge from them. If the
latter is the case, then it offers an account of why qualia cannot be reduced to any particular
physical substratum.

Emergent properties are properties of a system, dependent on that system’s components, their
properties and their configuration; emergence arises when the system in question passes a critical
threshold of complexity and organization. Philosophers who support emergentism view emergence
as compatible with physicalism, in the sense that the universe is made exclusively of physical
entities, while at the same time rejecting the reducibility of the mental to the physical. Moreover, it is
important to note that the truth of emergentism is consistent with the falsity of substance dualism
(in the Cartesian sense).

David Chalmers (2006) drew a distinction between weak and strong emergence in order to
capture the different usages of the term ‘emergence’ in science and philosophy.
Strong Emergence: “We can say that a high-level phenomenon is strongly emergent with respect to a
low-level domain when the high-level phenomenon arises from the low-level domain, but truths
concerning that phenomenon are not deducible even in principle from truths in the low-level domain.
Strong emergence is the notion of emergence that is most common in philosophical discussions of
emergence, and is the notion invoked by the British emergentists of the 1920s”7
Weak Emergence: “We can say that a high-level phenomenon is weakly emergent with respect to a
low-level domain when the high-level phenomenon arises from the low-level domain, but truths
concerning that phenomenon are unexpected given the principles governing the low-level domain.
Weak emergence is the notion of emergence that is most common in recent scientific discussions of
emergence, and is the notion that is typically invoked by proponents of emergence in complex
systems theory.”8

7 Chalmers, “Strong and Weak Emergence”, in The Re-Emergence of Emergence, 2006
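A standard concrete illustration of weak emergence in this sense is Conway’s Game of Life: the “glider”, a shape that travels across the grid, is nowhere mentioned in the purely local cell-level rules and is unexpected given them, yet it is fully deducible in principle by simply simulating those rules. A minimal sketch:

```python
from collections import Counter

def life_step(alive):
    """One Game of Life generation on an unbounded grid.
    `alive` is a set of (row, col) live cells. The only rules are local:
    a live cell survives with 2-3 live neighbours; a dead cell with
    exactly 3 live neighbours is born."""
    neighbour_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in alive
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# the glider: a 5-cell pattern whose 'motion' is a high-level fact
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)

# after 4 generations the same shape reappears one cell down and right
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

Nothing in `life_step` refers to gliders or motion; the travelling pattern is a system-level regularity recovered only by running the low-level rules – weak emergence in Chalmers’s sense, unexpected but deducible in principle.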

While we can usually say that instances of strong emergence are also instances of weak emergence –
the phenomenon is both not deducible and unexpected – the converse does not hold.
Weak emergence is common in complex-systems theory and other scientific fields where
complexity, self-organization, functional organization and system behaviour are paramount.
Let us look at an ant colony as an example of weak emergence. The ant colony exhibits complex
behaviour – food is gathered, tunnels are excavated, waste is discarded, and the colony is protected
and populated – without any centralized decision making (the queen does not give direct orders). All
of these processes are successfully carried out despite the fact that each individual ant acts
independently, communicating with other ants only by leaving chemical traces that are picked up
and acted upon by other colony members.
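The way local pheromone rules yield a colony-level “decision” can be sketched with a toy model of the classic double-bridge experiment (a hypothetical minimal model; the parameters and deposit rule are chosen only for illustration): each ant picks one of two paths with probability proportional to the pheromone on it and deposits pheromone in inverse proportion to the path’s length; no ant ever compares the paths, yet reinforcement tends to concentrate traffic on one of them.

```python
import random

def double_bridge(n_ants=1000, evaporation=0.01, seed=1):
    """Toy ant-colony model: purely local choices plus pheromone feedback.

    Each ant chooses a path with probability proportional to its pheromone
    level and deposits an amount inversely proportional to the path length.
    No ant has a global view of the colony or of the path lengths."""
    rng = random.Random(seed)
    length = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}   # start unbiased
    choices = {"short": 0, "long": 0}
    for _ in range(n_ants):
        total = pheromone["short"] + pheromone["long"]
        path = "short" if rng.random() < pheromone["short"] / total else "long"
        choices[path] += 1
        pheromone[path] += 1.0 / length[path]  # local deposit only
        for p in pheromone:                    # slow evaporation of all trails
            pheromone[p] *= 1.0 - evaporation
    return choices

traffic = double_bridge()
```

Which path dominates on a given run depends on early random choices, but the colony-level commitment itself is nowhere written into the individual ant’s rule – it arises only from the feedback between many local acts.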

An objection to strong emergence – mainly raised by physicalists – is that even if strong emergence
is possible, there are no real cases of it in the world. In response, its proponents point to mental
properties. Chalmers (2006) considers consciousness the only clear example of strong emergence in
nature9. If consciousness is quite a unique state, which it is, then its uniqueness lends support to the
idea that strong emergence is also something unique.

When considering the behaviour of an ant colony, although we can analyse the behaviour of a
specific ant or of the entire colony, the mechanism that gives rise to the organization and behaviour
of the colony as a whole eludes us. Similarly, fMRI (functional magnetic resonance imaging) and
other tools enable scientists to explore the behaviour of neurons and their functions, probing the
neurophysiological structure of the brain; even so, the mechanism that gives rise to consciousness
from brain matter eludes us. Emergence gives us the best explanation to date of that ‘mechanism’
in claiming that multiple autonomous agents (ants, neurons) interact and communicate with other
local agents, or occasionally remote ones, and that this results in a system-level behaviour which
cannot be reduced to the individual agents.

8 Ibid.
9 Ibid.

In response to this, I concede that biology and neuroscience may not yet have the means to explain
the mechanism(s) underlying emergence, but the fact that it is presently inexplicable does not mean
that emergentism is a false theory, or that it will always be inexplicable.
Patricia Churchland observes that in the history of science many problems seemed inexplicable, yet
with the progress of science an explanation was eventually found10. She makes a distinction
between “we cannot now explain” and “we can never explain”. At this point in time our
understanding of the mind is still immature, and we should not rush to frame consciousness as a
‘hard problem’. “When not much is known about a topic, don't take terribly seriously someone else's
heartfelt conviction about what problems are scientifically tractable. Learn the science, do the
science, and see what happens”.11

3. Emergence of consciousness in machine simulation

While there are several ongoing projects around the world researching the brain, one in particular –
the European Human Brain Project (HBP)12 – has brain simulation as one of its objectives: “Simulate
the brain - Develop ICT tools to generate high-fidelity digital reconstructions and simulations of the
mouse brain, and ultimately the human brain.”13 The HBP’s director, Prof. Henry Markram, believes
this can be achieved by 2023.

Suppose, then, that consciousness is an emergent property of a physical system (the brain). If this is
the case, would creating an accurate simulation of a human brain on another platform – one
equivalent in all its neurons, synapses and the intricate, complex activity of the brain – bring about
the sufficient conditions for consciousness to emerge?

Before going on, an important distinction must be made here between artificial intelligence and
artificial consciousness. It is not the purpose of this essay to explore whether consciousness can be
engineered in some machine (machine consciousness), or whether an artificial intelligence (AI) can
exhibit conscious behaviour, but rather to explore the possibility of consciousness emerging in a
machine running a brain simulation.

10 Patricia Churchland, “The Hornswoggle Problem”, 1996
11 Ibid.
12 https://www.humanbrainproject.eu
13 https://www.humanbrainproject.eu/en_GB/roadmap

With this in mind, let us approach the question above. While it seems obvious that the brain is a
necessary requirement for having consciousness, is it sufficient? Some philosophers believe that
aspects of the agent’s body (other than the brain) are constitutive of cognitive processing. This view
is termed embodied cognition, and the following examples14 are representative:
1. We typically gesture when we speak to one another, and gesturing facilitates not just
communication but language processing itself (McNeill 1992).
2. Vision is often action-guiding, and bodily movement and the feedback it generates are more
tightly integrated into at least some visual processing than traditional models of vision have
anticipated (O'Regan and Noë 2001).
3. There are mirror neurons that fire not only when we undertake an action, but also when we
observe others undertaking the same action (Rizzolatti and Craighero 2004).
4. We are often able to perform cognitive tasks, such as remembering, more effectively by
using our bodies and even parts of our surrounding environment to off-load storage and
simplify the cognitive processing (Donald 1991).

The brain simulation should have input and output capabilities for interacting with the world. Sally
Goerner and Allan Combs write: “Consciousness always has an object. In other words, it is
always about something. We are not just conscious, we are conscious of the taste of food, the smell
of the sea, a tooth ache. We are conscious of joy, of boredom, of the meaning of words on the page
in front of us, of the sound of music playing in the next room, of our own thoughts, of memories.
The point is that virtually all experience is experience of something. … Consciousness would seem to
be intimately involved with the informing of the brain and mind by objects of attention.” 15
I would concede that input and output mechanisms are needed as causal processes – sounds, sights,
etc. – but these mechanisms are not part of the conscious states. When we see a sunset or listen to
music, a chain of events occurs: receptors are activated, proximal stimuli are transduced, neurons
fire in the brain, and the signal is integrated with memory and invested with emotion and meaning.
Somewhere along this process consciousness emerges, but the stimuli (certain wavelengths in the
case of the sunset, a series of sounds in the case of music) are not what the conscious state occurs
within or what it is constituted of; the conscious state – the experience of seeing a sunset or
listening to music – emerges within this process.

14 Wilson, Robert A. and Foglia, Lucia, “Embodied Cognition”, The Stanford Encyclopedia of Philosophy (Fall 2011 Edition), Edward N. Zalta (ed.)
15 Sally Goerner and Allan Combs, “Consciousness as a Self-Organizing Process: An Ecological Perspective”, 1998, pp. 123–127

4. Conclusion
The question of how the brain gives rise to consciousness is the key question, and science has yet to
identify the mechanism involved. This does not mean that the explanation is not ‘out there’ to be
discovered; as Patricia Churchland prompts us, go do the science and let the future be the judge of
it. Notwithstanding, the best explanation so far is given by emergence: consciousness arises from
the brain as an emergent property. It follows that if a simulation of the brain can be created in a
machine, it is possible that consciousness could emerge. I acknowledge, of course, that it is not as
simple as that, as proponents of embodied cognition are quick to point out: mere brain simulation is
not enough; we should also simulate sense organs, and the simulation should be able to interact
with the world. Today we do not have the technology at our disposal to create a brain simulation of
a small mammal, let alone a human brain with around 100 billion neurons – the HBP is extremely
ambitious in its timeline objective. But if the technology were to become available and allowed us to
build such a simulation in a machine, another difficulty arises: how would we ever know that it was
conscious?

As there is no ‘test for consciousness’, there is no method of coming to know with certainty whether
other people share the same conscious states that we experience; we have a sense of self and
subjective experience, we assume that other people share this experience, and we believe them
when they say they do. So even if a brain simulation were eventually produced and it told us that it
is aware of itself, feels, and experiences “what it is like”, our knowing whether or not it would qualify
as conscious would be questionable at best.
How, then, can we test whether a machine has qualia if there is no test available for knowing of the
presence of consciousness in other people?
So long as there is no formal test, I would suggest that the same practice of belief that we extend to
other people should be extended to the machine simulation: if its behaviour is indistinguishable
from that of (what we assume to be) a conscious person and it reports to us about its subjective
experience, we should believe it.

"When we understand consciousness - when there is no more mystery -consciousness will be


different, but there will still be beauty, and more room than ever for awe" – Dennett, Consciousness
Explained (1991)

Bibliography
Chalmers D., "Facing Up to the Problem of Consciousness", Journal of Consciousness Studies, 1995,
pp.  200–219

Chalmers D., “Strong and weak emergence”, in P. Davies & P. Clayton (eds.) “The re-emergence of
emergence”, Oxford University Press, 2006

Churchland Patricia, “The Hornswoggle Problem”, Journal of Consciousness Studies 3, 1996, pp. 402-
8

Conee E., “Phenomenal Knowledge”, Australasian Journal of Philosophy, 1994, pp. 136-150

Jackson F., “Epiphenomenal Qualia”, Philosophical Quarterly 32, 1982, pp. 127–136

Lewis D., “What Experience Teaches”, In William G. Lycan (ed.), Mind and Cognition, 1990, pp. 29--
57

Nagel T., "What is it Like to Be a Bat?", In Philosophical Review 83 (October), 1974, pp. 435-50

Wilson, Robert A. and Foglia, Lucia, "Embodied Cognition", The Stanford Encyclopedia of


Philosophy  (Fall 2011 Edition), Edward N. Zalta (ed.)

Sally Goerner and Allan Combs, “Consciousness as a self-organizing process: an ecological


perspective”, 1998, pp. 123–127

Links
https://www.humanbrainproject.eu
https://www.humanbrainproject.eu/en_GB/roadmap
