
Minds & Machines (2011) 21:203–219

DOI 10.1007/s11023-011-9236-0

Philosophy of Mind Is (in Part) Philosophy of Computer Science

Darren Abramson
Dalhousie University, Halifax, NS B3H 4P9, Canada
e-mail: da@dal.ca

Received: 1 January 2010 / Accepted: 21 September 2010 / Published online: 5 March 2011
© Springer Science+Business Media B.V. 2011

Abstract In this paper I argue that whether or not a computer can be built that
passes the Turing test is a central question in the philosophy of mind. Then I show
that the possibility of building such a computer depends on open questions in the
philosophy of computer science: the physical Church-Turing thesis and the extended
Church-Turing thesis. I use the link between the issues identified in philosophy
of mind and philosophy of computer science to respond to a prominent argument
against the possibility of building a machine that passes the Turing test. Finally,
I respond to objections against the proposed link between questions in the philos-
ophy of mind and philosophy of computer science.

Keywords: Turing test · Philosophy of computer science · Church-Turing thesis · Embodied cognition · Philosophy of mind

Introduction

In this paper I argue that whether or not a machine can be built that passes the
Turing test1 is a question that depends on open questions in the philosophy of
computer science. I will show that if the extended Church-Turing thesis and
physical Church-Turing theses are true,2 then a machine can in principle be built
that passes the test. I define what I mean by ‘in principle’ below. On the other hand,
suppose that either or both of these theses are false, and that it can be shown what
physically possible conditions provide counterexamples to them. Then, I argue,
strong evidence will be available concerning whether or not a machine can in
principle be built that passes the test. Since I take the question of whether a machine
can be built to pass the Turing test to be central to the philosophy of mind, I
conclude that the philosophy of mind is, in part, philosophy of computer science.

1. A machine can pass the Turing test if its verbal behavior in a text-mediated format is indistinguishable, on average and in the long run, from that of a human being. See Turing (1950).
2. I define these theses below.

Open Issues in Philosophy of Mind and Philosophy of Computer Science

In this section, I begin by arguing that whether a machine can pass the Turing test is
a central question for the philosophy of mind, despite controversy over whether
passing the test is a sufficient condition for having a mind. I then identify broad
questions in the philosophy of computer science that are, at present, open, and show
that whether a machine can pass the Turing test is decided (but not necessarily
known to be decided) by resolution of these questions.

Verbal Behavior and the Philosophy of Mind

The question of whether or not a machine can reproduce the purely verbal behavior
of a person has long been at issue in the philosophy of mind. I have argued recently
that Descartes’s ‘language test,’ and Turing’s imitation game (now referred to as the
Turing test) play nearly identical epistemological roles for their respective authors
(Abramson, forthcoming). I will summarize briefly the similar roles and a bit of the
evidence for understanding the two tests in this unified fashion. In doing so, I will
attempt to bolster the claim that understanding whether a machine can reproduce the
verbal behavior of a person is significant whether or not one takes the view that
successful impersonation of the verbal behavior of a person implies thinking.
The term ‘language test’ refers to Descartes’s proposal for a way to distinguish
between any automaton and a being with a soul. In his Discourse on Method,
Descartes claims that machines are unable to "produce different arrangements of
words so as to give an appropriately meaningful answer to whatever is said in its
presence, as the dullest of men can do" (Descartes 1637, 139–140). So, Descartes's
language test purports to distinguish between machines and beings with souls by
measuring the ability to converse in natural language on whatever topic an
investigator chooses.
First, I have argued, following John Cottingham, that Descartes understands his
own test as a nearly certain method for, in all cases, showing that a particular object
is not a mere machine (Cottingham 1992). This is quite different from understanding
the test as a fallible method for showing that objects are not machines. Descartes
cannot imagine that an assemblage of mechanical responses to the environment
could display the endlessly diverse, flexible responses that human language users
display. So for Descartes the ability to respond meaningfully to queries in natural
language shows that the object possessing this ability is more than such an
assemblage. An object displaying the ability to respond meaningfully to whatever is
said in its presence must also have a soul, according to Descartes.


Many have detected the obvious similarity between Turing’s presentation of his
own test and Descartes’s discussion of the evidential basis for the presence of a soul.
However, Turing is still often considered a behaviorist (i.e., one who holds that
having mental states is nothing more than possessing dispositions to behave in
certain ways) despite his similarity to Descartes on this matter. I have shown
elsewhere that Turing was aware of Descartes’s views on evidence for the presence
of a soul in an object, and argue there that, similarly, Turing thought that flexible
language use implied the presence of an inner cause of a particular sort (thus failing
the definition of behaviorism) (Abramson, forthcoming).
The sorts of considerations that Descartes and Turing present in defence of the
evidentiary support verbal behavior gives for the presence of a mind have become
standard in the philosophy of mind (see, for example, the papers collected in Shieber
(2004)). We may disagree over whether or not passing the Turing test is conclusive
for the possession of a mind, or whether any mere machine could ever be built that
passed the Turing test. In discussing these questions, though, I take it as
indisputable that we are centrally engaged in the philosophy of mind.
So, a moderate position to be taken in this paper is that whether machines can be
built that pass the Turing test is a central issue in the philosophy of mind. For
example, as just noted, one might take the test to be a successful
operationalization—merely a very reliable indicator—of the possession of mind without thereby
taking a behaviorist view of mental states and processes. My moderate view will
interact with other views in the philosophy of mind to the extent that those other
views consider the language test, and the Turing test, to be methods for measuring
the possession of mind.
I will attempt to refute the view that holds both that human minds are essentially
‘embodied’ (in senses to be discussed below), and that the Turing test can,
necessarily, distinguish human minds from mere machines due to the essentially
embodied character of human minds. Whether or not the Turing test can distinguish
computers from human beings with embodied minds will, instead, depend on open
questions in the philosophy of computer science.
I argue that whether a computer can, in principle, pass the Turing test does not
depend on whether human verbal behavior reveals, through simple probes, our
embedded and embodied experience. But whether a computer can pass the Turing
test depends instead on particular open questions in the philosophy of computer
science, to be discussed in the next section.
I do not stake a position on whether machines that are indistinguishable from
embodied human beings deserve to be attributed mental states or processes. (I do
briefly discuss this issue in the penultimate section of this paper.)

Two Versions of the Church-Turing Thesis

In this section I will introduce two modified versions of the Church-Turing thesis
whose resolutions, at present, are open questions in the philosophy of computer
science. The Church-Turing thesis says that all of the effectively computable
functions are Turing computable. In less formal terms, the thesis states that all of the
functions that a human being can compute by using unbounded amounts of time,
paper, pencil, and simple rules can be computed by a regular Turing-equivalent
computer. I now consider an expansion of the Church-Turing thesis, called the
physical Church-Turing thesis (following terminology in Deutsch (1985)): all of the
physically computable functions are Turing computable. One way to see the
difference between the physical Church-Turing thesis and the original Church-
Turing thesis is to ask whether having access to novel physical objects (say, a chunk
of radium and a Geiger counter) allows a person to compute a function that is not
Turing computable. The physical Church-Turing thesis is a major open question in
the philosophy of computer science.
One of the first formulations of the physical Church-Turing thesis is given by
Robin Gandy, who uses the name ‘Thesis M’ for it (Gandy 1980). Gandy gives an
argument for principles that, he claims, guarantee the physical Church-Turing thesis.
Lately, there has been an explosion of articles identifying processes that appear to be
logically and/or physically possible and that allow 'hypercomputation': the physical
computation of a function that is not Turing computable (for a small selection, see
Copeland and Sylvan (1999), Hamkins (2002), Shagrir and Pitowsky (2003), Shagrir
(2004), and Siegelmann (2003)). On the other hand, arguments have been attempted
to disprove the possibility of hypercomputation, but have been found wanting (for
example, see Cotogno (2003) and response in Ord and Kieu (2005)).
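A stock construction behind several of these proposals (see, for example, Shagrir 2004) is the accelerating Turing machine, which performs its nth step in 2^-n units of external time. Since the step durations form a convergent series,

    \sum_{n=1}^{\infty} 2^{-n} = 1,

such a machine completes infinitely many steps, and so could settle a halting question, within one unit of an observer's time. Whether any physical process realizes such a schedule is precisely what is at issue.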
The second question in the philosophy of computer science to be considered is an
extension of the Church-Turing thesis out of the realm of computability and into
computational complexity. The second question asks whether any function
computable by a physical device can be computed by a Turing-equivalent computer
with, at most, polynomial slowdown. This ‘extended Church-Turing thesis’, as Peter
Shor (1998) calls it, might be false. In the paper in which Shor introduces the
extended Church-Turing thesis, he discusses an algorithm with which a ‘quantum
computer’ can compute prime factors of a natural number in an amount of time that is
a polynomial function of the size of the number to be factored.3 If quantum
computers can be built in a manner that allows them to factor ever larger numbers in
an amount of time that is a polynomial function of the size of the number to be factored, and if
certain open questions in complexity theory are answered as expected, then we will
have as a result that the extended Church-Turing thesis is false (Hagar 2010,
Section 3.1.4).

3. A quantum computer is a physical device in which computation relies on 'qubits,' or physical states that both encode information and are in a state of superposition. See Hagar (2010).
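With all three theses now on the table, they can be stated schematically as follows. This is one informal rendering, not notation drawn from the works cited; EC, PC, TC, and TC_poly abbreviate 'effectively computable', 'physically computable', 'Turing computable', and 'Turing computable with at most polynomial slowdown':

    (CT)   \forall f\, [\mathrm{EC}(f) \rightarrow \mathrm{TC}(f)]
    (PCT)  \forall f\, [\mathrm{PC}(f) \rightarrow \mathrm{TC}(f)]
    (ECT)  \forall f\, [\mathrm{PC}(f) \rightarrow \mathrm{TC_{poly}}(f)]

On this rendering, since Turing computability with polynomial slowdown entails Turing computability, ECT entails PCT.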
The two modified versions of the Church-Turing thesis, as I understand them,
belong to both the foundations of computer science and the philosophy of computer
science. The status of these theses and their relationship to the standard Church-
Turing thesis have recently been widely discussed in philosophical venues,
including, notably, the 2002 special issue of Minds and Machines devoted to
hypercomputation (Vol. 12, No. 4). Nevertheless, research in computer science itself
may inform the resolution of questions concerning the various versions of the
Church-Turing thesis.
For example, in his seminal mathematical paper introducing the modern notion of
a Turing machine, Alan Turing attempts to make the notion of 'effective
computability' precise, and to argue that the effectively computable functions are
Turing computable (and in doing so, argues for the standard Church-Turing thesis)
(Turing 1936, Section 9). Turing gives an argument for the Church-Turing thesis
that relies on the claim that, in effectively computing a function, a human being may
only enter a bounded, finite number of states of mind. The Church-Turing thesis is
itself subject to both philosophical and mathematical criticism, as in McCarty
(1987) and in Abramson (2006b). Given the approaches taken to the various
versions of the Church-Turing thesis, and the venues in which they have appeared,
there is a widespread commitment to the view that their resolution will result both
from work in the formal sciences and from philosophical reflection on the formal
sciences.

Suppose the Theses are Shown to be True

I do not have an answer here as to whether the physical or extended Church-Turing
theses are true. I have argued previously that there are not, as yet, any convincing
refutations of the possibility of building a machine that outcomputes a standard
Turing machine (Abramson 2006a). As for the extended Church-Turing thesis, there
has been considerably less interest in the philosophy of computer science in
investigating whether physical objects might display complexity properties that
standard computers don’t, prompting one researcher to comment that there is ‘‘. . .
an overemphasis on the concept of computability’’ (Volchan 2002, 61).
If the two modified Church-Turing theses are true, then it seems that it is, in
principle, possible to construct a Turing-equivalent computer that passes the Turing
test. Here is how.

How to Build a Machine that Passes the Turing Test

1. Pick a person. Draw an imaginary line around their brain and nervous system.
2. Code up a complete model of everything inside the line. That is, for completely
deterministic portions of the tissue involved, construct a model which behaves
exactly as the tissue does. For non-deterministic portions, have the model
behave in identically probabilistic fashion.
3. Code up a model of the effect the following stimuli have on the nervous system
just modeled:
(a) being invited to participate in a Turing test;
(b) arriving at the test site; (If the model outputs some other decision than
participating, erase the session, go back to the first model, and keep
simulating until a decision is made to participate. If the model turns out to
be recalcitrant, pick someone more cooperative to model.)
(c) sitting down at the terminal, and seeing questions flashed on a screen in a
room just like the one in which the real human subject is sitting.
4. Apply the above stimuli to the model. Copy the model to the computer in the
Turing test scenario.


5. For each question, deliver inputs to the brain model in the form of simulated
inputs to the nervous system; track simulated efferent outputs to motor neurons,
thereby tracking simulated key presses, and send the output to the questioner.
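Viewed as pseudocode, the recipe is just a simulation loop. The toy sketch below shows that control flow and nothing more: every name in it (NervousSystemModel, the canned responses, and so on) is a hypothetical stand-in for modeling technology whose in-principle availability is exactly what the two theses decide.

    import random

    class NervousSystemModel:
        """Toy stand-in for the whole-nervous-system model of steps 1-2.
        A real model would reproduce the tissue's dynamics; here a canned
        lookup marks the deterministic portion, and random.choice marks
        where 'identically probabilistic' behavior would be sampled."""

        def stimulate(self, stimulus):
            pass  # step 3: invitation, arrival, seating (a no-op in the toy)

        def respond(self, question):
            canned = {"Can you move your ring fingers?": "No! I had no idea I couldn't."}
            return canned.get(question, random.choice(["Yes.", "Could you rephrase that?"]))

    def run_session(questions):
        model = NervousSystemModel()
        for s in ("invited to participate", "arrived at test site", "seated at terminal"):
            model.stimulate(s)  # step 4: apply the stimuli to the model
        # step 5: deliver each question as simulated input, track simulated key presses
        return [(q, model.respond(q)) for q in questions]

    print(run_session(["Can you move your ring fingers?"]))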
Notice that the computer program described will not only pass the Turing test, but in
the case that the program is competing against the ‘source’ for the model, it will be
indistinguishable from the competing real human being. The reason for this should
be obvious. We have imagined coding a program which faithfully reproduces the
outputs of a human nervous system. Then, we have imagined stimulating this
simulated nervous system with the inputs a nervous system would receive in taking
the Turing test. The only source of difference we might find would be in errors in
either simulating the input, or the nervous system itself, and we are supposing no
such errors have been made.
It should be noted that the above recipe for building a machine that passes the
Turing test requires that only a single person, and their responses following a single
successful invitation to participate in the test, be modeled.
The nature of the possibility of creating a machine in the manner just described is
tied directly to the modal scope of the modified Church-Turing theses. Together, the
theses under review state that every function computed by a physical object is
computable, and efficiently computable, by some Turing machine.
Consider the function computed by a particular person participating in a Turing test
through their responses to queries. The inputs to the function are the questions asked
(or, perhaps, a concatenation of all questions asked so far of the subject, together
with previous responses). The responses of the subject are the output of the function.
If the extended and physical Church-Turing theses are true, then there is some
Turing machine that efficiently computes (is not eventually outpaced by an actual
person for ever-longer inputs) the function just described. Therefore, while an
attempt to build a computer that efficiently computes the function just described
might be limited by resources available, or by our ability to discover which machine
ought to be built, it would not be limited in principle by the nonexistence of a
program description matching the task at hand.
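Schematically (my notation, not anything in the sources), the subject S determines a transcript-to-reply function over an alphabet \Sigma:

    f_S : \Sigma^* \to \Sigma^*, \qquad f_S(q_1 r_1 \cdots q_{n-1} r_{n-1} q_n) = r_n,

ignoring for simplicity the probabilistic portions of the subject's behavior, which the recipe above handles by sampling. The joint claim of the physical and extended theses is then that some Turing machine computes f_S with at most polynomial overhead in the length of the transcript.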

Suppose the Theses are Shown to be False

At least since 1961, philosophers have published arguments that purport to show
that human mathematical ability demonstrably exceeds that of any computer. (See
Lucas (1961); for a more recent version, see Penrose (1994).) Turing envisioned
such arguments and responded to them in his 1950 paper introducing the Turing test
under the description ‘the mathematical objection’ to the claim that machines can
think (Turing 1950, 444–445). I have argued at length that Turing’s method for
dismissing the mathematical objection to the claim that machines can think presages
the response given by many other mathematicians and logicians to it over the past
60 years (Abramson 2008).
If the mathematical objection succeeds, and human beings can compute functions
that no machine can, then the physical Church-Turing thesis fails. On the other
hand, suppose the physical Church-Turing thesis failed for some reason other than
the outstripping of computers' abilities by human abilities, say because a machine
perched at the event horizon of a black hole could outcompute a Turing machine, as
suggested by Shagrir and Pitowsky (2003).4 Human beings, though, tend not to be
perched at the event horizons of black holes. So the same possibility, described
above, of building a computer that passed the test would hold, if such exotic
conditions were discovered to be required in order to produce a counterexample to
the physical Church-Turing thesis. Therefore, if the physical Church-Turing thesis
fails, and we know under what conditions it fails, then (given suggestions in the
literature of cases in which the thesis fails) it seems very likely that we will discover
whether machines that pass the Turing test are thereby ruled out.

4. In essence, such a scenario would result in a computer having a locally infinite amount of time to complete its computation, while to an observer the same computation would appear to take a finite amount of time.
Exactly analogous arguments can be made in the case of the extended Church-
Turing thesis. If physical objects (including human beings) are simulable to
arbitrary accuracy in an efficient manner by a Turing machine, then it follows that
some Turing machine can simulate the performance of a person in the Turing test.
(For a contrary view on the possibility of simulating all physical systems efficiently
by a general purpose computer, see Wolfram (1985).)
In fact, some have argued that results in complexity theory suggest that human
mathematical activity is simulable by Turing machines, notably Kurt Gödel, in a
letter anticipating the question of whether P = NP. (See Gödel’s letter to von
Neumann, translated and reproduced in Sipser (1992).) So, given investigations into
the status of variations on the Church-Turing thesis, constructive falsifications of
them would be informative as to whether a machine can be built, in principle, that
passes the Turing test.

Can a Mere Machine Pass the Turing Test?

In this section, I present a prominent argument that attempts to show that, due to
considerations of the embedded and embodied nature of cognition, no computer can
pass the Turing test. I argue that the argument fails precisely in that it does not
address the questions in the foundations of computer science discussed above.

Subcognition and the Turing test

This part of the paper involves a short demonstration, requiring the cooperation of
able readers.
Please bring your hands together, palms pressed together, as if you were
praying, touching the fingertips of your left hand with the corresponding
fingertips of your right hand. Fold down your two middle fingers—and
only your two middle fingers—so that the middle knuckles of both come
together. (The tips of your thumbs, index, ring and pinky [little] fingers should
still be together.) Now, move your other fingers one at a time and report
what happens. (French 2000, 337)
Can you move your ring fingers? Did you know you wouldn’t be able to? Finally,
what would happen if you asked this question in the Turing test of the two subjects
being interviewed? Before we get to his argument, let us notice what Robert French’s
demonstration does for our intuitions. Surely no programmer could ever anticipate
every such question; thus, by constructing tasks like these for the subjects in a Turing
test, we can build a toolbox for easily distinguishing human from machine.
In an earlier paper, French imagines applying myriad results from cognitive
psychology to the Turing test, by asking questions of the candidates concerning
word similarity and semantic appropriateness of novel word strings (French 1990).
French concludes that the physical experience of human beings radically affects
cognitive processing in ways that are revealed by these so-called ‘subcognitive’
questions. I have no argument with this claim. However, he goes further to make
claims about the necessary relationship between the ability to answer subcognitive
questions and having had certain experiences.
French suggests asking the two subjects in the Turing test even more salient
questions, questions that will reveal not just cognitive differences which depend on
physical differences, but will directly reveal physical differences (French 2000). For
example, the subjects can be asked to observe novel properties of their own
physiology, which they may not even be aware of until being asked, as we just saw.
French anticipates objections that either these performative questions (such as the
one involving observations of one’s folded fingers above), or subcognitive questions
(such as one involving the semantic appropriateness of novel word strings), are not
fair game in the Turing test scenario. I have no such objections.
French’s claim, in short, is that
. . . as a test of general intelligence, the Turing test is not particularly
appropriate precisely because it is so hard: it tests not for intelligence, in general,
but, rather, for culturally oriented human intelligence. In order to pass it, a
machine would have to experience the world in essentially the same manner as
we humans had, and, in order to do this, it would have to have a body and a set
of experiences very similar to our own. And it is this that would make it
virtually impossible for any machine to actually pass the Turing test. (French
2000, 339).
Rather than rehearse the various subcognitive and embodiment-related questions
that French mentions, I offer the following well-constrained example, which is both
illustrative of his argument and makes clear the issue in question.
Consider, for example, the question ‘does freshly baked bread smell nicer than
a freshly mowed lawn?’ A machine that had never smelled either baking bread
or a newly mowed lawn would have a great deal of trouble answering this
question, unless, of course, it had been specifically programmed to answer that
particular question. But there are infinitely many such questions that humans
can answer immediately because they can make a judgement, based on actual
physical experience, about the degree of pleasure associated with each.
Further, on average, most people within a particular culture will respond to
this type of question in a similar manner. It is this fact that will be used to
infallibly trip up the computer. (French 2000, 334–335).
It is worth noticing that French’s characterization of the only reasonable method
for programming machines that pass the Turing test is explicitly rejected by Turing
himself. I have shown elsewhere (Abramson 2008) that Turing’s presentation of the
construction of ‘learning machines’ is motivated precisely by his rejection that
computers can only do what we explicitly tell them to do—witness Turing’s lengthy
discussions of Lady Lovelace’s objection, for instance (Turing 1950, 450–451;
454–460). As we have seen, the philosophy of computer science has already done a
great deal to make issues about what implemented computers can do more precise.
French elsewhere emphasizes the modal nature of his claim that in order to be
prepared to answer certain questions, minds must physically interact in certain ways
with the environment. He claims that being prepared to answer questions at the
subcognitive level ‘‘[is] the product of a lifetime of interaction with the world which
necessarily involves human sense organs, their location on the body, their sensitivity
to various stimuli, etc.’’ (French 1990, 62; emphasis in original.)
French’s argument is as follows. Necessarily, if you can appropriately answer
subcognitive questions, then you have had experience of physical interaction with
the world. Computers haven’t had this lifetime of interaction. Therefore, by modus
tollens, computers can’t appropriately answer subcognitive questions and are
therefore unable to pass the Turing test.

Responding to French’s argument: An Outline

French’s argument purports to show that no computer could pass the Turing test,
since no computer has experienced the world as a human being has. The claim that
no computer has experienced the world as a human being has is a plausible,
suppressed premise in French’s argument.
In the next section I will suggest that the general conclusion, that no computer
can pass the Turing test, does not follow from French’s argument. Later I will revisit
the suppressed premise and consider the possibility that French’s argument fails
merely because the suppressed premise is false, and that any computer that passes
the Turing test has experienced the world as a person has. I do not take the overall
success of this paper, of tying together potential success in building machines that
pass the Turing test to questions in the philosophy of computer science, to hinge on
the issue of whether simulations of human beings have experience of the world.

Really Hard Questions for Machines

If construction of a machine running the program described in "Open Issues in
Philosophy of Mind and Philosophy of Computer Science" above is possible, and if
the program is run so as to efficiently simulate a person’s performance in the Turing
test, then it will answer subcognitive questions in a manner indistinguishable from a
human. It will respond with (apparent) amazement that it cannot move its ring
fingers away from one another, when asked to complete the task described above.
Plausible responses to subcognitive questions concerning present and past embodied
experience will be offered despite the computer’s lacking a human body altogether.
I will argue in this and the next subsection that the program described in "Open
Issues in Philosophy of Mind and Philosophy of Computer Science", if possible at
all, can even—if programmed well—pass questions that are so hard that even
French excuses them from counting as fair game during the Turing test. In his 1990
article, French supposes that the judge prepares him or herself by doing a series of
experiments on the psychological phenomenon of ‘associative priming’, which I
will now explain.
Suppose the judge in the Turing test has available a series of macros for sending
strings at a preset pace over the terminal hookups in the test, and a piece of
software scanning responses from subjects on the screen and measuring latency.
Having practiced earlier with a host of human subjects in similar terminal
hookups, the judge can come in with statistically reliable predictions of variable
latencies for responding to key presses for, say, distinguishing words from non-
words. Associative priming is just the name for the psychological phenomenon that
results in the profile of latencies common to human beings with similar
backgrounds.
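A sketch of the judge's instrumentation may make the setup concrete. This is an illustration only: send and read_response are hypothetical hooks into the terminal hookup, and the timing logic is the entire point.

    import time
    import statistics

    def primed_trial(send, read_response, prime, target):
        """One primed lexical-decision trial: present a prime, then time
        how long the subject takes to classify the target as a word or a
        non-word over the test's text channel."""
        send(prime)
        send(target)
        start = time.monotonic()
        answer = read_response()  # the subject's 'word'/'non-word' keypress
        return answer, time.monotonic() - start

    def priming_effect(related_latencies, unrelated_latencies):
        """Humans respond reliably faster after related primes; a positive
        difference of the right size is the human profile the judge tests for."""
        return statistics.mean(unrelated_latencies) - statistics.mean(related_latencies)

    print(priming_effect([0.45, 0.48, 0.44], [0.55, 0.60, 0.52]))  # toy data: ~0.1 s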
It turns out that people respond faster in identifying a string of letters as a word if
it is preceded by a semantically related word. As French tells us, "[if] ... the item
'butter' is preceded by the word 'bread', it would take significantly less time to
recognize that 'butter' was a word than had an unassociated word like 'dog' or a
nonsense word preceded it" (French 1990, 57). The familiar conclusion, that no
computer could pass the Turing test, is presumed to follow from the ability of the
judge to ask such questions.
The machine would invariably fail this type of test because there is no a priori
way of determining associative strengths (i.e., a measure of how easy it is for
one concept to activate another) between all possible concepts. Virtually the
only way a machine could determine, even on average, all of the associative
strengths is to have experienced the world as the human candidate and the
interviewees had. (French 1990, 58)
Precisely what is left out by the word ‘virtually’ here? Perhaps French means
that, barring exceptional luck and accident, there is no way to build a machine
that one expects to pass such tests. He supposes that a critic might object to
these questions, since they don’t really seem in the spirit of the test. After all,
these are explicitly subcognitive questions, and the Turing test is intended to
make only cognitive distinctions. So, French "obligingly disallows" such
questions (ibid.). I will argue now that disallowing these questions is not
necessary. Machines designed as I have described can, if certain programming
choices are made, pass psychometric tests of associative priming perfectly well.
Also, by seeing that this is true, we can help avoid a mistake that comes up
from time to time in discussions of cognition and computation.


Simulation and Emulation

Cummins et al. discuss ways in which you might decide between connectionist and
language of thought implementations of cognitive architecture in people. They
distinguish between ‘primary’ and ‘incidental’ effects of algorithms (Cummins
et al. 2001). Primary effects refer to the computational task a machine or organism
accomplishes, while incidental effects are all those facts which may be true or false
without affecting whether the task is performed. If one machine emits a beep while
factoring natural numbers, but the other emits a squawk, then they might have the
same primary effects (factoring numbers correctly) but different incidental effects.
There might be connectionist and rule-based parsers that recognize precisely the
same strings, but Cummins et al. argue that there is reason to think that
connectionist parsers would be revealed by little or absent growth in processing
time compared to input size. However, rule-based parsers, they say, would have
some positive correlation between input and processing time.
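The proposed measurement is easy to state as code. In this sketch, str.split merely stands in for whichever parser is under test: flat growth in the resulting profile would fit the connectionist prediction, a positive correlation with input size the rule-based one.

    import time

    def timing_profile(parse, sizes):
        """Record processing time as a function of input size."""
        profile = []
        for n in sizes:
            sentence = "the cat sat " * n  # toy input, roughly 3n words
            start = time.perf_counter()
            parse(sentence)
            profile.append((n, time.perf_counter() - start))
        return profile

    for n, t in timing_profile(str.split, [1_000, 10_000, 100_000]):
        print(f"n = {n:>7}: {t:.4f} s")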
Here is their summary of the possible sources of difference between two systems
in their incidental effects.
One important source of incidental effects thus lies in the algorithm producing
the primary effects. But systems that implement the same algorithm may
exhibit different incidental effects. If the underlying hardware is sufficiently
different, they may operate at greatly different speeds, as is familiar to anyone
who has run the same program on an Intel 486/50 MHz [computer] and on a
Pentium III [computer] running at 750 MHz. A significant difference in
response times between two systems, in other words, does not entail that they
implement different algorithms; it may be a direct result of a difference at the
implementation level. Hence, two systems that have identical primary effects
may exhibit different incidental effects either because they compute different
algorithms or because the identical algorithms they compute are implemented
in different hardware (Cummins et al. 2001, 174).
Notice that these authors are not saying that, if architecture and/or algorithm are
different, then there will necessarily be a difference in incidental effects. In fact,
they go on to say that one cannot in general infer from sameness of incidental
effects to sameness of functional architecture, which presumably includes both
algorithm and implementation.
However, they also appeal to our idea that if you have two completely different
functional systems, surely timing will generally go askew between them. But I will
now briefly argue that this is not correct.

Ms. Pacman at 2.4 GHz

I will stipulate the following definition of emulation: one computer emulates another
just in case they display both all the same primary effects and all the same
secondary effects (Cummins et al.'s incidental effects), on the definitions offered above. Furthermore, I will limit
secondary effects to those that can be detected by a judge in the context of the
Turing test. So, latency between stimuli and responses will count as secondary
effects, but the warmth of the processor of the computer being tested will not.
There was a time when video games from one PC platform were unplayable on
another. However, in dealing with rapidly increasing processor speeds, programmers
tied event timings to some absolute measure instead of clock ticks. With a
computer equivalent to a universal Turing machine, the ability to simulate exactly
the timing of another computational system depends only on the speed at which it
operates and the number of instructions it can execute per cycle. Routinely, popular
web browsers can display interactive windows in which entire computer
architectures in production less than a decade ago are emulated in the sense defined here.5

5. Due to copyright restrictions governing the systems emulated, websites offering such services come and go quickly. For a selection of such sites, one can Google the terms 'web based emulator.'

There is a short lesson to be taken from results in complexity theory, a lesson
implicit in the experience we have had with actual computers emulating previous
generation computers. If one computer can complete a task efficiently, then another
computer, if fast enough (though constructed with completely different
architecture), can complete the same task at the same speed as the first computer by
simulating the performance of the first computer on the task. If the second
computer is fast enough, then the first and second computer can be made
indistinguishable in terms of secondary effects. These comments summarize results
discussed in Denker and leCun (1992).
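The mechanism is the familiar fixed-timestep loop: the faster host executes one guest step, then idles until the guest's clock would next have ticked, so the observable latency profile is the guest's rather than the host's. A minimal sketch, with a counter increment standing in for a guest instruction:

    import time

    def emulate(guest_step, guest_hz, n_steps):
        """Run guest_step at a fixed real-time rate regardless of host speed.
        A sufficiently fast host hides its own timing completely; a host too
        slow to meet its deadlines (sleep_for <= 0) falls behind and becomes
        distinguishable in exactly the way the text describes."""
        period = 1.0 / guest_hz
        deadline = time.monotonic()
        for _ in range(n_steps):
            guest_step()
            deadline += period
            sleep_for = deadline - time.monotonic()
            if sleep_for > 0:
                time.sleep(sleep_for)

    state = {"cycles": 0}
    start = time.monotonic()
    emulate(lambda: state.update(cycles=state["cycles"] + 1), guest_hz=60, n_steps=60)
    print(state["cycles"], round(time.monotonic() - start, 2))  # 60 cycles in about 1.0 s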
Therefore, the restriction that French charitably makes on the Turing test, of
disallowing questions that measure timing effects due to embodied experience, is
not necessary in principle so long as some Turing machine efficiently emulates the
verbal performance of a human being. Results in the philosophy of computer
science not only determine the possibility of a computer passing the Turing test, but
determine the possibility of a program that has a timing profile indistinguishable
from a person.
So, I hope to have shown so far that whether a computer can, in principle, pass
the Turing test is not decided by the issue of whether human verbal behavior
depends sensitively and somewhat uniformly on features of embedded and
embodied experience. But whether a computer can pass the Turing test does
depend on open questions in the philosophy of computer science.

Objections

In this section I consider objections to the central claim of this paper, the claim that
the in-principle possibility of building a computer that passes the Turing test is
related to open questions in the philosophy of computer science. In particular, I
consider objections to the possibility of building a computer that passes the Turing
test that do not depend on either version of the Church-Turing thesis. I show that
they fail, and that French’s argument cannot be supplemented in various ways. I also
consider whether French’s suppressed premise, discussed above, fails, in which case

5
Due to copyright restrictions governing the systems emulated, websites offering such services come and
go quickly. For a selection of such sites, one can Google the terms ‘web based emulator.’

123
Philosophy of Mind Is (in Part) Philosophy of Computer Science 215

French’s conclusion does not obtain, but his claimed relationship between passing
the Turing test and embodiment does obtain.

Brains and their Environment Cannot be Separated, Even in Principle

Recent opinions in the foundations of cognitive science suggest that the biological
basis of mind cannot be separated from an organism's environment, even in principle.6
Consider the idea that human minds are intrinsically embodied and embedded. A
strength of French’s argument, one might claim, is that it takes advantage of this
property of persons. If the objection is taken to mean something like ‘if you isolate a
brain from body and environment then you no longer have a person’, I am likely to
agree. However, the computer program described in "Open Issues in Philosophy of
Mind and Philosophy of Computer Science" is not an attempt to reproduce a
person—it is merely an attempt to simulate part of a body. So, the view just
presented merely bears on the relationship between embodiment and personal
identity; it does not inform the possibility of building a machine that passes the
Turing test.

6. For example, consider work supporting the so-called 'enactive theory of perception' as presented, for example, in Noë (2004).

If the objection is taken to mean something stronger, like ‘one cannot in principle
isolate and emulate a biological subsystem, namely the brain and spinal cord’, then
the objection appears weak. No doubt isolating and emulating any biological
subsystem is prohibitively difficult with present technology. For all I know, it is
impossible: discussions of variations of the Church-Turing thesis above showed
why emulating a human body might be impossible.
In practical terms, despite continuous interaction between neurons and the rest of
the world, it seems at least plausible that future scientists could arbitrarily pick a
physical boundary—say synaptic junctions at distal nerve endings—and model
everything inside. And, if they couldn’t do this in principle (in the sense discussed
above), then, by measuring the interaction a nervous system has with the
environment, we could compute a function that no computer could.

We Cannot, in Principle, Model Input to the Nervous System

Someone might argue that although computer models of the human body can be
constructed (pace variations of the Church-Turing thesis), we cannot model
appropriate input to such a model so as to simulate the performance of an actual
person in the Turing test.
There are a few problems once we have our nervous system modeled. We must
be able to specify a physical scenario in somewhat large-scale terms, and then
translate that scenario into simulated effects on individual synapses. There is no
doubt that translating environmental stimuli into simulated effects on individual
neurons is a difficult task. A few examples of imaginary beings capable of doing the
difficult task in question come to mind: Descartes’s evil demon, and the managers of
the fictional ‘Matrix’ in the popular movie by that name. Let us consider this

6
For example, consider work supporting the so-called ‘enactive theory of perception’ as presented, for
example, in Noë (2004).

123
216 D. Abramson

objection in isolation from the one above. Suppose that we are only worried about
simulating inputs to a nervous system, whether the nervous system itself is real or
simulated. Someone who objects to the possibility of accurately modeling the input
that particular environments provide to the nervous system is committed to the
success of a judge in what I will call the Wachowski test.7

7. The Wachowski brothers directed, produced, and wrote the scripts for the Matrix movie trilogy.

The Wachowski test is a lot like the Turing test. However, instead of trying to
figure out which chat window is connected to a machine and which is connected to a
person, the goal of the judge is to distinguish a person experiencing a simulated
terminal, entirely computer-generated and delivered directly to the nervous system,
from the person who really is sitting at a terminal typing responses. Again, for all I
know, no one, even in tandem with the best engineering possible, could produce a
judge-fooling entry for the Wachowski test. Two observations should suffice for
rejecting this objection against my counterexample to French’s argument, however.
First, it is at least possible that no judge can beat the Wachowski test for sufficiently
advanced simulations. Second, and now I am considering both objections presented
so far, French never mentions any reason to think we can’t simulate either the
environment or the nervous system. Instead, as we have seen, questions in the
philosophy of computer science do offer direction as to whether objects can be
modeled in a manner permitting a computer to pass the Turing test.

Embodied Cognition Revisited

Recall that, according to French, no computer has experienced the world as a human
being has. This is a required premise for his conclusion that no computer can pass
the Turing test, since to pass the Turing test, on his view, requires having
experienced the world as a human being has. The questions French formulates are
intended to probe for the test’s subjects having had embodied and embedded
experiences of the world as a human has.
French could respond to the claim of this paper as follows. Suppose the extended
and physical Church-Turing theses are true. Also suppose we succeed in
constructing a simulation of a particular person’s participation in the Turing test.
Then, instead of performing a modus tollens on French’s position, we can perform a
modus ponens. French could claim that the computer successfully simulating a
particular person has experienced the world as a human has.
To claim that a computer that passes the Turing test experiences the world as a
person does is to hold a substantive position in the philosophy of mind beyond the
question of what is required to have human-like verbal properties: it is to take a
computational functionalist position for qualitative content.8 Therefore, the
response under discussion to the view I am presenting ties issues in philosophy
of mind even more closely to issues in philosophy of computer science. I will make
only some preliminary remarks on the idea that any machine that passes the Turing
test has experiences that reflect our embodied and embedded experiences.

8. I have avoided in this paper discussion of computational functionalism: the theory that having a mind is nothing more than implementing a particular computer. If true, then building minds involves nothing more than building computers. Arguably, too, issues in philosophy of mind can then be subsumed under the philosophy of computer science.

First, if French’s argument is intended to have a general conclusion, then it must
deal with computers randomly selected from the space of all possible computers. It
is true that given the space of possible computers complex enough to simulate a
human brain, very few will produce Turing test-passing output. In fact, very few
will produce recognizable outputs at all! However, there will be some which
simulate my brain, others that simulate your brain, and so on for every other brain
(granting, for the moment, the truth of the two versions of the Church-Turing thesis
discussed above). The mathematical existence of brain-simulating computers holds
even granting that which computers simulate which brains will depend on the
program we use for supplying simulated environments. So, while French’s brain
may play an epistemic role in programming a particular machine, the program itself
hasn’t had any experience—it waits quietly in Plato’s heaven until we retrieve it
using a neural scanner, or some other such putative device.
Second, there is a strong sense in which the computer hasn’t had any of the
experiences that French has had. In a naive sense, the computer—if it had
experiences at all—merely had those of being manufactured, plugged in, and then
programmed. Its program doesn’t even contain mention of interactions with the
environment, only numerical solutions to differential equations, perhaps, modeling
neural spike trains given certain stimuli.
Third, experiences presume an experiencer. If we are able to program a computer
as I have described, does that imply that computational functionalism, as a theory of
qualitative experience, is true? It doesn’t seem so. Notice that if we are committed
to the proposed computer’s sharing French’s experiences, then we presume the truth
of computational functionalism. I find it difficult to see how the joint truth of the
physical and extended Church-Turing theses by themselves could rule out possible
worlds where computers simulate brains, but minds—which do the experiencing in
those possible worlds—exist as epiphenomenal immaterial souls.

Conclusion

If the extended and physical Church-Turing theses are true, then any function
computable by a physical object is computable with, at most, polynomial slowdown
by a Turing machine. These theses are under considerable scrutiny from
philosophers, mathematicians, and computer scientists. Discovering that these
theses are true would guarantee that, in principle, a computer can be built that passes
the Turing test. A negative answer to the truth of either thesis, supposing that the
nature of counterexamples to the theses were understood, would similarly inform us
of whether a human being’s performance in the Turing test can be reproduced by a
computer. I have considered an argument that claims that no computer can pass the
Turing test because a recently programmed box of electronics has not experienced
the world as a person has. Such arguments either fail, or they tie questions in the
philosophy of mind to the philosophy of computer science even more closely than
has been argued for here. Perhaps by revealing the relationships between these

123
218 D. Abramson

disciplines, greater investments of resources will allow questions of interest to


diverse research communities to be answered more quickly.

Acknowledgments The author is grateful to three anonymous referees, whose comments have resulted
in significant improvements to this paper. I am also grateful to audiences at the Dalhousie University
Philosophy Department Colloquium Series and the 2010 meeting of the Society for the Study of Artificial
Intelligence and Simulation of Behavior, who commented on previous versions of this paper. Finally, I
would like to thank Sara Louise Parks and Duncan MacIntosh for their encouragement and feedback.

References

Abramson, D. (2006a). Church’s thesis and philosophy of mind. In Church’s Thesis after 70 Years. Ontos
Verlag.
Abramson, D. (2006b). Computability and mind. Unpublished doctoral dissertation. Indiana University:
Department of Philosophy, Program in Cognitive Science.
Abramson, D. (2008). Turing’s responses to two objections. Minds and Machines, 18(2), 147–167.
Abramson, D. (forthcoming). Descartes’s influence on Turing. Studies in History and Philosophy of
Science.
Copeland, J., & Sylvan, R. (1999). Beyond the universal Turing limit. Australasian Journal of
Philosophy, 77, 46–66.
Cotogno, P. (2003). Hypercomputation and the physical Church-Turing thesis. British Journal for the
Philosophy of Science, 54, 181–223.
Cottingham, J. (1992). Cartesian dualism: Theology, metaphysics, and science. In: Cottingham, J. (Ed.),
The Cambridge Companion to Descartes. Cambridge: Cambridge University Press.
Cummins, R., Blackmon, J., Byrd, D., Poirier, P., Roth, M., & Schwarz, G. (2001). Systematicity and the
cognition of structured domains. Journal of Philosophy, 98, 167–185.
Denker, J. S., & leCun, Y. (1992). Natural versus "universal" probability, complexity, and entropy.
AT&T Technical Memorandum. Republished in the proceedings of the 1992 IEEE Workshop on the
Physics of Computation.
Descartes, R. (1985/1637). Discourse on the method. In The philosophical writings of Descartes, Vol. I
(J. Cottingham, R. Stoothoff & D. Murdoch, Trans.). Cambridge: Cambridge University Press.
Deutsch, D. (1985). Quantum theory, the Church-Turing principle and the universal quantum computer.
Proceedings of the Royal Society of London A, 400, 97–117.
French, R. (1990). Subcognition and the limits of the Turing test. Mind, 99(393), 53–65.
French, R. M. (2000). Peeking behind the screen: The unsuspected power of the standard Turing Test.
Journal of Experimental and Theoretical Artificial Intelligence, 12(3), 331–340.
Gandy, R. (1980). Church's thesis and principles for mechanisms. In Studies in logic and the foundations
of mathematics (pp. 123–148). Amsterdam, New York: North-Holland Publishing.
Hagar, A. (2010). Quantum computing. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy.
Spring 2010 edition.
Hamkins, J. D. (2002). Infinite time Turing machines. Minds and Machines, 12, 521–539.
Lucas, J. (1961). Minds, machines, and Gödel. Philosophy, 36, 112–127.
McCarty, D. C. (1987). Variations on a thesis: Intuitionism and computability. Notre Dame Journal of
Formal Logic, 28(4), 536–580.
Noë, A. (2004). Action in perception. Cambridge, MA: MIT Press.
Ord, T., & Kieu, T. D. (2005). The diagonal methods and hypercomputation. British Journal for the
Philosophy of Science, 56, 147–156.
Penrose, R. (1994). Shadows of the mind. London: Vintage.
Shagrir, O. (2004). Super-tasks, accelerating Turing machines and uncomputability. Theoretical
Computer Science, 317.
Shagrir, O., & Pitowsky I. (2003). Physical hypercomputation and the Church-Turing thesis. Minds and
Machines, 13, 87–101.
Shieber, S. (2004). The Turing test: Verbal behavior as the hallmark of intelligence. Cambridge, MA:
MIT Press.

Shor, P. W. (1998). Quantum computing. Documenta Mathematica—Extra Volume—Proceedings of the
International Congress of Mathematicians, I, 467–486.
Siegelmann, H. T. (2003). Neural and super-Turing computing. Minds and Machines, 13, 103–114.
Sipser, M. (1992). The history and status of the P versus NP question. In Proceedings of the twenty-fourth
annual ACM symposium on theory of computing (pp. 603–618). Association for Computing
Machinery.
Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem.
Proceedings of the London Mathematical Society, Series 2, 42, 230–265.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.
Volchan, S. B. (2002). What is a random sequence? The American Mathematical Monthly, 109, 46–63.
Wolfram, S. (1985). Undecidability and intractability in theoretical physics. Physical Review Letters,
54(8), 735–738.
