
Thinking With Machines:

Intelligence Augmentation,
Evolutionary Epistemology, and Semiotic
Peter Skagestad

Introduction

Philosophical interest in the computer revolution has to date largely focused on the as-yet
unfulfilled promise of artificial intelligence, and the debate of the past three decades
has canvassed the two tightly intertwined questions of whether a thinking machine is
possible, and whether the concept of such a machine provides an appropriate model of
the human mind. While this debate has been going on, a different kind of computer
revolution has taken place and is still taking place-to wit, the transformation of knowledge
work through the ubiquity of the personal computer, and more specifically the networked
personal computer. Although philosophers, like other knowledge workers, have grown
to depend on the personal computer and to take it for granted as a tool of their trade,
the personal-computer revolution has attracted only negligible attention from
philosophers.1 Part of the thesis of the present essay is that the (still unfolding) personal-
computer revolution is philosophically as consequential and as interesting as the (actual
or potential) AI revolution. Also, as computers increasingly become ubiquitous as internal
components of automobiles, stereo systems, and dishwashers, discussion of how our lives
are affected by “the computer,” understood as any physical embodiment of a finite Turing
machine, rather than by particular types of computers, understood in terms of their
functionality, makes less and less sense.2 At any rate, as we shall see, the pioneers of the
personal-computer revolution did not theorize about the essence of the computer, but
focused rather on the essence of human thinking, and then sought ways to adapt computers
to the goal of improving human thinking.
This goal was understood ambitiously: the automation of intellectual housekeeping
tasks was intended to bring about qualitative changes in our thinking, not just to enable
us to do more of the same kind of thinking we had been doing all along. Like the American,
the French, and the Russian revolutions, the personal-computer revolution has been the
work of people who had very definite ideas about what they were doing and who were

Skagestad, Peter. University of Massachusetts-Lowell, MA 01854.

Journal of Social and Evolutionary Systems 16(2):157-180. Copyright © 1993 by JAI Press, Inc.
ISSN: 0161-7361. All rights of reproduction in any form reserved.

not shy about committing those ideas to paper. Just as the Bolsheviks were dead wrong
in their understanding of the project they were embarked on-namely, preserving the
Russian Empire from the disintegration that seemed a certain consequence of World War
One-so this new breed of revolutionaries may prove quite mistaken about the meaning
of their revolution; yet we can no more understand this revolution in isolation from the
self-understanding of the computer revolutionaries than we could hope to understand the
Bolshevik Revolution without at least some acquaintance with Lenin’s ideas.
Specifically, this essay examines the seminal writings of some of those who pioneered
the personal-computer revolution, and documents an explicit conceptual framework,
comprising certain definite ideas about how people think, how people adopt artifacts as
tools for thought and how those artifacts in turn affect our thinking through feedback
loops, and finally, how computer systems can be developed to extend or “augment” our
intellectual capabilities. I argue that this explicit conceptual framework-which, following
Douglas C. Engelbart, I refer to as “augmentationism”-rests on certain implicit
philosophical assumptions regarding language, mind, and knowledge, assumptions that
have at various times and in various ways been made explicit in the philosophical literature.
We trace the presence of these assumptions in the evolutionary epistemology which has
been formulated by Donald Campbell and Karl Popper, and which itself restates some
of the central doctrines of Charles Peirce’s semiotic.
Although this paper does not directly concern itself with the philosophic critique of
artificial intelligence, it documents the existence, in the sources mentioned, of a third
philosophical paradigm, different from both the computational paradigm and the
intentionalist paradigm, which is the chief source of critiques of computationalism.3

The Augmentationist Framework

The term “augmentation” was adopted as a slogan by Douglas C. Engelbart in a report
written in 1962.4 Over the next fifteen years, Engelbart was in charge of the Augmentation
Research Center (ARC) at the Stanford Research Institute (SRI), a project funded by
ARPA’s Information Processing Techniques office (IPTO), which was headed initially
by J. C. R. Licklider of MIT and subsequently by Ivan Sutherland, the inventor of bit-
mapped graphics. For a detailed account of the pioneering work done under the direction
of these three men, I can do no better than refer the reader to Howard Rheingold’s
fascinating account Tools for Thought, which reads like a novel.5
Suffice it to list their chief practical results in (roughly) chronological order: interactive
computing, time sharing, word processing, networking, and graphical user interfaces. ARC
was dissolved in 1977; by then, dozens of Engelbart’s collaborators were already at Xerox’s
Palo Alto Research Center (PARC), developing the first personal computer, the Alto,
many of whose most revolutionary features, e.g., windows, pulldown menus, and mice,
were later incorporated into the Apple Macintosh, under the direct management of
Engelbart’s erstwhile colleague Bob Belleville.6 Although regarding current technology as
a literal embodiment of Engelbart’s ideas would be an error, there is nonetheless an
unmistakable causal chain from his ideas to today’s technology.7
In a 1960 paper, Licklider8 inaugurated the augmentationist program in all but
name, by proposing the development of a temporary “man-computer symbiosis.” Instead

of directly challenging the artificial intelligence paradigm, Licklider simply granted that,
ultimately, by 1980 at the earliest, the computer might outdo the human brain in most
respects; meanwhile, however, most intellectual advances would still have to be made by
people, who could benefit from computer systems specifically designed to extend the
intellectual capabilities of humans. To demonstrate the need for such a symbiotic
relationship, Licklider noted that he had at one point kept track of exactly how he spent
his working hours, and reached the depressing conclusion that about 85 percent of his
time was spent getting ready to think, e.g., finding information, plotting graphs, or
converting units of measurements to make different data sets comparable:

Throughout the period I examined, in short, my “thinking” time was devoted mainly to
activities that were essentially clerical or mechanical: searching, calculating, plotting,
transforming, determining the logical or dynamic consequences of a set of assumptions or
hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what
to attempt or not to attempt were determined to an embarrassingly great extent by
considerations of clerical feasibility, not intellectual capability.

Licklider went on to list the requirements of a human-computer symbiosis: greater


computer speed, more memory, greater tolerance of human natural language, and new,
graphically based user interfaces, such as were soon to be made possible through the efforts
of Licklider’s proteges Engelbart and Sutherland.
What a human-computer symbiosis might look like was fleshed out in great detail
by Engelbart in his 1962 report, which in part draws on, and includes lengthy quotations
from, Vannevar Bush’s 1945 description9 of the “Memex”, a notional workstation using
dry photography rather than electronic storage media, and incorporating the earliest
description of what later has become known as “hypertext,” a term introduced by Theodore
Nelson in 1965. In his article, Bush addressed the problem-an aggravated one in the age
of rapidly increasing information-of getting at data organized in counterintuitive ways,
e.g., alphabetically, numerically, or in descending hierarchies of subclasses. The human
mind, Bush observes, functions rather by association; hence information retrieval via trails
of association is something that ought to be mechanized:

When the user is building a trail, he names it, inserts the name in his code book, and taps
it out on his keyboard. Before him are two items to be joined, projected onto adjacent viewing
positions. . . . The user taps a single key, and the items are permanently joined. . . . Thereafter,
at any time, when one of these items is in view, the other can be instantly recalled by tapping
a button below the corresponding code space. Moreover, when numerous items have been
thus joined together to form a trail, they can be reviewed in turn, rapidly or slowly, by deflecting
a lever like that used for turning the pages of a book. It is exactly as though the physical
items had been gathered together from widely separated sources and bound together to form
a new book. It is more than this, for any item can be joined into numerous trails.
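
Bush’s description is concrete enough to be read as a specification of bidirectional, named hypertext links. The following sketch models it in contemporary terms; the class and method names are my own illustrative inventions (Bush’s Memex was, of course, an electromechanical device, not a program):

```python
# Hypothetical sketch of Bush's associative "trails": items are
# permanently joined in pairs, named trails are recorded in a code
# book, and any item may belong to numerous trails at once.

class Item:
    def __init__(self, title):
        self.title = title
        self.links = []          # items "permanently joined" to this one

class Memex:
    def __init__(self):
        self.code_book = {}      # trail name -> ordered list of items

    def join(self, a, b):
        # "The user taps a single key, and the items are permanently joined"
        a.links.append(b)
        b.links.append(a)

    def build_trail(self, name, items):
        # join consecutive items and record the trail under its name
        for a, b in zip(items, items[1:]):
            self.join(a, b)
        self.code_book[name] = list(items)

    def review(self, name):
        # "they can be reviewed in turn, rapidly or slowly"
        return [item.title for item in self.code_book[name]]

m = Memex()
bow, arrow, crossbow = Item("bow"), Item("arrow"), Item("crossbow")
m.build_trail("archery", [bow, arrow, crossbow])
print(m.review("archery"))
```

Because joining is symmetric and an item’s links accumulate, the sketch preserves Bush’s closing point: the same item can be joined into numerous trails, unlike a page bound into a single book.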

Nonlinear thinking is popularly considered a discovery of the 1980s; amazingly, the above
was written in the 1940s. Bush’s essay is also striking in that, though he mentions that
the Memex might be connected to a computer to provide calculations, the storage-and-
retrieval system of the Memex is not itself computerized. In other words, both the concept
of the personal workstation and that of hypertext were formulated quite independently

of the concept of the digital computer, which was later to prove the enabling technology
for their realization.10 But perhaps this ought not surprise us. In the 1960s, computers
were generally thought of as data-processing machines. However, punched card data
processing was invented by Herman Hollerith in 1890, decades before the advent of the
digital computer.11 The uses to which computers are put have always been independently
conceived, never dictated by the potentialities specifically inherent in the digital computer.
We note the irony, however, that Bush, himself a pioneer of analog computers, initially
had no use for digital computers and actually opposed their early development.12
Finally, in his 1962 report, Engelbart formulated a detailed conceptual framework for
developing new technologies to “augment” the human intellect. He also accepted the
cyberneticist William Ross Ashby’s term “intelligence amplification,” but with a caveat. In
Design for a Brain (1952), Ashby had introduced the term “amplification” apparently in
the sense of leverage-specifically with the example of knowing how to use a dictionary
as opposed to knowing the meanings of a number of words.13 But the word is used differently
in his classic 1956 paper, “Design for an Intelligence-Amplifier.” Ashby here presented a
possibility-proof of a machine that would solve problems its creators were incapable of
solving. This machine would amplify human intelligence the way human physical power
is amplified by the steam engine, and unlike the way it was earlier amplified by the lever
or cog and pulley.14 This analogy, along with its companion contrast, blurs the distinction
between intelligence amplification and artificial intelligence. The steam engine, while human-
made, is an independent source of power, unlike the lever which augments the human power
exerted on it. (Of course human energy may be expended on shovelling coal into the steam
engine, but that is not the only, or most important, energy which the steam engine
transforms.) Without going into the details of Ashby’s usage, Engelbart stresses (with Ashby)
that his objective is not to increase our native intelligence, but to augment the human being
with means for organizing experience and solving problems so that an intelligent system
results in which the human being is the central component (possibly in contrast with Ashby,
whose intelligence-amplifier was conceived of as a free-standing system).
The other components are called “augmentation means,” of which Engelbart defines
the four basic classes-artifacts, language, methodology, and training-and accordingly
defines the intelligent system as a “Human using Language, Artifacts, Methodology, in
which he is Trained,” or, for short, the H-LAM/T system.15 Humans living in our
contemporary society have already been significantly augmented by the language, artifacts,
and methodologies provided by our culture, and Engelbart identifies three stages in the
evolution of our intellectual capabilities: first, concept manipulation, by which he means
the formation of mental abstractions “in their raw, unverbalized form”; second, symbol
manipulation, by which Engelbart does not intend the (admittedly extremely important)
ability to communicate through symbols, but simply the ability of the individual to
represent objects to him or herself through symbols (e.g., the ability of a shepherd to keep
track of the flock by remembering the cardinal number 27, instead of having to visualize
27 different sheep); and third, manual, external symbol manipulation, such as that made
possible by “a stick and sand, pencil and paper and eraser, straight edge or compass, and
so on.” Again, the evolutionary significance Engelbart is concerned with is not the “value
derived from human cooperation made possible by speech and writing,” but the effect
that supplementation of memory and powers of visualization with graphical
representations of symbols has on an individual’s thinking.16

Taking note of Benjamin Lee Whorf’s hypothesis that a culture’s world view is
constrained by the expressive repertoire of its language, Engelbart proceeds to formulate
an analogous hypothesis (termed “the Neo-Whorfian Hypothesis”) regarding the other
augmentation means: “Both the language used by a culture, and the capability for effective
intellectual activity, are directly affected during their evolution by the means by which
individuals control the external manipulation of symbols.”17 As a corollary of this
hypothesis, Engelbart deduces that significant modifications in our language and our way
of thinking could result from “integrating the capabilities of a digital computer into the
intellectual activity of individual humans . . . ,” and whether such modifications would be
useful ones remains to be determined. This integration would constitute a fourth stage
in the evolution of our intellectual capabilities, the stage of automated external symbol
manipulation:

In this stage, symbols with which the human represents the concepts he is manipulating can
be arranged before his eyes, moved, stored, recalled, operated upon according to extremely
complex rules-all in very rapid response to a minimum amount of information supplied
by the human, by means of special cooperative technological devices.18

These devices, Engelbart goes on, could conceivably be provided by a highly interactive
digital computer hooked up to a three-dimensional color display, as in the technology
we today call “virtual reality”.19 Imagining all the new types of symbol manipulation that
this technology would enable was impossible in the pre-computer age (1962), just as, at
an earlier stage of mental symbol manipulation-prior to the evolution of external symbol
manipulation-explicit imagining of the bar graph or the process of long division was
impossible.
The scope and magnitude of the intellect augmentation involved could nonetheless
be illustrated by an exercise in de-augmentation. For this purpose, Engelbart performed
the simple experiment of tying a pencil to a brick and measuring the effect on writing
speed; what he found was that a sentence which took seven seconds to type and 20 seconds
to write in cursive script took 65 seconds to write in “de-augmented” (brick) script-
although the writing time could be reduced to 42 seconds by enlarging the script to the
point where the sentence took up a whole page.20
Now, if Engelbart had wanted to prove experimentally that a brick is not an ideal writing
implement, then that of course was hardly a startling result. What was intended to provide
a mental jolt here is rather the very idea of the extent to which our intellectual, social, and
cultural life depends on the physical characteristics-literal weight and otherwise-of such
tools as pencils. Indeed, that graphite has a weight relative to human muscular strength such
as to produce the vectors resulting in “normal” human writing speed is only a fortuitous
fact-it could have been otherwise, and with profoundly different consequences:

Brains of power equal to ours could have evolved in an environment where the combination
of artifact materials and muscle strengths were so scaled that the neatest scribing tool
(equivalent to a pencil) possible had a shape and mass as manageable as a brick would be
to us-assuming that our muscles were not specially conditioned to deal with it. . . . How would
our civilization have matured if this had been the only manual means for us to use in graphical
manipulation of symbols?21

In response to his own question, Engelbart makes the two observations that (a) the record-keeping
that underlies our organization of commerce and government would undoubtedly
have taken a different form, and (b) far fewer people would have made the physical effort
required to engage in reasoned argumentation or experiment with new concepts. This same
point has recently been made by Henry Petroski in his fascinating history of the pencil.
Referring to the ancient use of lumps of charcoal as writing implements, Petroski writes
that “Writing or drawing with a lump of anything for an extended period of time can
cramp the fingers, thus cramping one’s style and perhaps even cramping one’s mind.”22
Think about trying to write the Principia Mathematica with a lump-not a stick-of
charcoal, and the dependence of abstract thought on the specific physical characteristics
of the implements of intellectual work positively leaps out at you.
Three further comments are in order here. First, as has been noted by Michael Heim,
Engelbart’s observations strikingly parallel Eric Havelock’s almost simultaneously
appearing interpretation of Plato’s philosophy as, in part, a cultural by-product of the
emergence of literacy-Havelock’s observation being that Plato’s preference for
abstractions over sense experience reflects a commitment to modes of thought that could
not have been sustained within a predominantly oral culture.23 And second, although the
brick-pencil has a certain dramatic force, there is actually no need to take recourse to
make-believe to make Engelbart’s point. We do, after all, know of cultures like that of
the ancient Norse, whose only writing implements were chisels and stone tablets which
must have made the writing of even the simplest sentence an extremely laborious and time-
consuming task. (This point is also illustrated by Petroski’s description of the ancient use
of charcoal.) And we know that, among the ancient Norse, literacy was far from
widespread, and that the runes were regarded as sacred symbols understood only by a
few initiates, and not as tools for developing rational arguments or experimenting with
abstract concepts. [Editor’s Note: And see also Innis and McLuhan in the works cited
in Note 23 for discussion of similar effects in the slow-writing culture of ancient Egypt.
See also my Mind at Large: Knowing in the Technological Age, Greenwich, CT: JAI Press,
1988, pp. 131-132, “The Book in History,” for consideration of writing systems and their
impact on religions.] But third, Engelbart was not writing for the same audience as
Havelock. He was writing specifically for the technocrats within the computer science
community and, as he has later made clear, his objective was to distinguish his research
project, on the one hand, from the speculative approach of artificial intelligence and, on
the other hand, from the nuts-and-bolts issues of information retrieval technology.24
Borrowing C. P. Snow’s dichotomy of the “two cultures,” Havelock’s book on Plato may
be described as an event in a different culture from Engelbart’s report, making the parallel
all the more striking.
Having distinguished the above four stages of evolution, Engelbart fills in his
conceptual framework with a detailed description of a hypertext system-one which builds
on and significantly extends Bush’s seminal ideas.25 But I shall not on this occasion pursue
these ideas further. Let us instead conclude our overview of the augmentationist
framework. In this framework, thinking is not a process that takes place inside the brain.
Thinking is an activity in which the brain participates along with the eyes and the hands
and a multitude of external devices, ranging in complexity from the pencil or the straight
edge and compass to today’s interactive computer systems. The computer, in turn, is not
a neutral tool for the more efficient execution of intellectual projects antecedently conceived

and adopted; rather, the computer is a medium that informs our choices of which
intellectual projects to adopt, and that thereby transforms our conception of what we are
doing and perhaps of who we are.26
We have noted Engelbart’s acknowledged indebtedness to Ashby, Bush, and Whorf.
I can find no trace of any explicitly philosophical influences on Engelbart, nor, as far
as I know, have Engelbart’s ideas as yet had any influence on contemporary philosophy.
In what follows, I examine a philosophical framework which, I shall argue, stands in a
logical, not an historical, relationship to Engelbart’s conceptual framework. No influences
in either direction will be imputed. In other words, in Popper’s and Peirce’s doctrines we
find a more general framework which elucidates the augmentationist framework by relating
it to other phenomena, and which itself is validated by whatever success the personal
computer may eventually turn out to have in augmenting the human intellect. The question
of such success will not be raised here, but I cannot leave the subject without concurring
with both Engelbart and his critic Heim that the personal computer must be evaluated
not simply on the basis of the greater efficiency with which it enables us to carry out familiar
tasks, but on the basis of the value of the new modes of thinking this machine enables
and inspires-or perhaps enforces.

Evolutionary Epistemology

Karl Popper’s evolutionary epistemology was originally sketched in his 1965 Arthur Holly
Compton Memorial Lecture “Of Clouds and Clocks,” and further elaborated in the two
papers “Epistemology Without a Knowing Subject” (1967) and “On the Theory of the
Objective Mind” (1968). The term “evolutionary epistemology” is not actually found in
these articles, but was introduced in 1974 in the landmark essay of that title by the
psychologist Donald Campbell who, incidentally, has been profoundly influenced by
Ashby as well as by Popper. Campbell’s essay explores the blind-variation-selective-
retention paradigm exemplified by both Darwinian evolution and the Popperian scientific
method of conjectures and refutations, but Campbell himself was first introduced to this
paradigm by Ashby in the 1950s.27 The past twenty years have seen a veritable explosion
of publications on evolutionary epistemology, but I shall here focus simply on Popper’s
original formulations.
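
The blind-variation-selective-retention paradigm can be illustrated by a toy generate-and-test loop: variants are produced without foresight of which will succeed, and a variant is retained only if it passes a selection test. The target string, alphabet, and error measure below are illustrative assumptions of mine, not Campbell’s:

```python
import random

# Toy model of blind variation and selective retention: conjectures are
# mutated at random ("blindly") and a mutation is retained only if it
# does at least as well as its predecessor against the selection test.

def bvsr(target, alphabet="abcdefgh", seed=0):
    rng = random.Random(seed)
    retained = [rng.choice(alphabet) for _ in target]   # initial blind conjecture

    def errors(candidate):
        # selection test: count of positions that fail to match the target
        return sum(a != b for a, b in zip(candidate, target))

    while errors(retained) > 0:
        # blind variation: mutate one randomly chosen position
        candidate = list(retained)
        i = rng.randrange(len(candidate))
        candidate[i] = rng.choice(alphabet)
        # selective retention: keep the variant only if it is no worse
        if errors(candidate) <= errors(retained):
            retained = candidate
    return "".join(retained)

print(bvsr("cabbage"))
```

The point of the sketch is that nothing in the variation step "knows" the target; all the direction comes from which variants survive, which is exactly the parallel Campbell draws between Darwinian selection and Popper’s conjectures and refutations.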
In the 1965 lecture, Popper raised and discussed two problems: “Compton’s problem,”
or the problem of how to reconcile rationality with physical determinism, and “Descartes’
problem,” or the problem of mind-body interaction. Drawing on both Charles Peirce’s
indeterministic cosmology and the evolutionary language theory propounded by Popper’s
erstwhile teacher Karl Bühler, Popper proceeded to sketch an account of human evolution
as being largely exosomatic: “Human evolution proceeds, largely, by developing new
organs outside our bodies or persons: ‘exosomatically,’ as biologists call it, or ‘extra-
personally.’ These new organs are tools, or weapons, or machines, or houses.”28 Noting
that other animals build lairs, nests, and dams, Popper points to the greater role played
by exosomatic organs in human evolution, stressing the higher functions of language
already postulated by Bühler:

Yet the kind of exosomatic evolution which interests me here is this: instead of growing better
memories and brains, we grow paper, pens, pencils, typewriters, dictaphones, the printing
press, and libraries. These add to our language-and especially to its descriptive and
argumentative functions-what may be described as new dimensions. The latest development
(used mainly in support of our argumentative abilities) is the growth of computers.29

This passage calls to mind Engelbart’s use of the de-augmented pencil as an inverted
metaphor for computer augmentation. In a footnote to an earlier passage in Popper’s paper
there is an even more striking echo of Ashby’s characterization of an “intelligence-
amplifier” as a machine that will solve problems people are incapable of solving. Against
the view, attributed to Alan Turing, that people are-or are indistinguishable from-
computers, Popper points out that the difference between people and computers is precisely
what makes computers useful to people:

Moreover, we use, and build, computers because they can do many things which we cannot
do; just as I use a pen and pencil when I wish to tot up a sum I cannot do in my head. ‘My
pencil is more intelligent than I,’ Einstein used to say. But this does not establish that he is
indistinguishable from his pencil.”30

In Popper’s and Einstein’s view, then, as in the view propounded by Engelbart and Ashby,
the specialized tools devised for symbol manipulation are valuable to the extent that they
extend, augment, or amplify human intelligence, rather than replicating it. As far as I
understand, no one denies that trying to make computers reason “like people”-whatever
exactly that may mean-may be heuristically useful. What is denied, however, is that
success in making computers perform certain specified tasks as well as people will prove
that people, in some meaningful sense, are computers.
Central to Popper’s evolutionary epistemology, as set forth in “Of Clouds and Clocks,”
is the notion of hierarchical systems of plastic (i.e., nondeterministic) control. This
epistemology implies a “view of organisms as incorporating-or, in the case of man,
evolving exosomatically-this growing hierarchical system of plastic controls.“31 This
system, of course, is in all essential respects the same as Engelbart’s H-LAM/T system.
Thus far, I am pointing to parallels between what Popper is saying in the context of trying
to solve two age-old philosophical problems and what Engelbart had already said in the
context of formulating an R & D (research and development) strategy for computer
technology. The existence of these parallels is intriguing, especially as there is no evidence
that Popper had ever heard of Engelbart or vice versa, although, as we shall see, they
share an indebtedness to Whorf-but what Popper the philosopher has to add to what
Engelbart the engineer had already said is a more comprehensive framework, unifying
the above ideas with a framework for understanding the nature of knowledge in general
and of scientific knowledge in particular.
To examine that framework, we turn to Popper’s 1967 paper, “Epistemology Without
a Knowing Subject,” published a decade before the appearance of personal computers.
This is where Popper introduces his distinction among the first, second, and third worlds,
later renamed World 1, World 2, and World 3. World 1 is the physical world, and World
2 is the mental, or inner world-both familiar from Descartes’s dualism. World 3 is less
familiar; this is “the world of objective contents of thought, especially of scientific and

poetic thoughts and works of art.”32 Popper admits that World 3 bears resemblances to
Plato’s world of ideas and to Hegel’s objective spirit, while also differing significantly from
these; it most closely resembles Gottlob Frege’s realm of objective contents of thought-
actually, Frege himself referred to this as “a third realm,” a nomenclature apparently
unnoticed by Popper, most of whose references to Frege are to his classic paper “On Sense
and Reference,” where this expression does not occur.33 The ontological status of Frege’s
“thoughts” or “thought contents” is of course debatable; since Frege states that they cannot
be perceived by the senses, it would seem that they are intensional objects.34 Popper does
not, however, engage in an exegesis of Frege, but proceeds to list the principal denizens
of World 3; they are: “theoretical systems, . . . problems, . . . problem situations, . . . critical
arguments, . . . the state of a discussion or the state of a critical argument; and, of course,
the contents of journals, books, and libraries.“35
Although the above items, so characterized, could be regarded as purely intensional
objects, Popper goes on to list not only the contents of books, etc. as World 3, but the
books-as-media themselves, and he makes clear that he regards these not as expressions
of mental states, or as tools of communication, but as autonomous carriers of knowledge.36
Since World 3 contains both physical objects, such as journals and books, and intensional
objects that can be physically embodied, such as theories and arguments, one is led to
conclude that the inmates of World 3 are physical objects regarded as carriers of meaning.
This corresponds closely to Charles Peirce’s definition of a “sign,” to be explored below;
also, of course, this makes World 3 a category of “augmentation means,” in Engelbart’s
sense. To continue, the inmates of Popper’s World 3 are all human-made; they are artifacts
like spider’s webs and bird’s nests; but, like other artifacts, they also have an autonomous
existence, and Popper’s chief thesis is that, to the extent that the theory of knowledge seeks
to understand scientific knowledge, it should focus on World 3 objects, rather than on the
beliefs and other subjective states of mind emphasized by mainstream epistemology from
Descartes to Russell. Popper views as a fundamental mistake the consideration of books
and articles as only the outward expressions of knowledge that is “really” residing inside
the human mind. A book must obviously be readable or it would not be a book, but it
need never actually be read and it would still be a book. In a passage eerily reminiscent
of Peirce’s definition of a “sign” as something capable of being interpreted, Popper goes on:

It is its possibility or potentiality of being understood, its dispositional character of being
understood or interpreted, or misunderstood or misinterpreted, which makes a thing a book.
And this potentiality or disposition may exist without ever being actualized or realized.37

In an extreme case, proposed by Popper, a book of logarithms could be computer
generated and printed, and it need never be read by anybody, but it would still constitute
knowledge, in the scientific sense.38 Similarly, though human languages are products of
the human mind, the externality of language is far more important than its mind-
dependency. Referring to Karl Bühler’s distinction of three functions of language-the
expressive, communicative, and descriptive functions-Popper notes that the externality
of language is an enabling condition of the critical method that is at the heart of science
in particular and of rational thought in general:

Without the development of an exosomatic descriptive language-a language which, like a tool, develops outside the body-there can be no object for our critical discussion. But with the development of a descriptive language (and further, of a written language), a linguistic third world can emerge; and it is only in this way, and only in this third world, that the problems and standards of rational criticism can develop.39

Popper reiterates the evolutionary theme from “Of Clouds and Clocks,” to wit, that human
cultural evolution proceeds largely through the invention and improvement, via selection,
of exosomatic organs that embody ideas that are generated by the human mind, but that
could not play the role they do if they remained within the mind. False ideas in the mind
(e.g., about what happens if you jump out a tenth-floor window) may kill the person who
has them and acts on them; false ideas outside the mind can be bloodlessly killed through
rational criticism: “Scientists try to eliminate their false theories, they try to let them die in their stead. The believer-whether animal or man-perishes with his false beliefs.”40
This analogy between natural selection and the scientific selection among hypotheses was
stated already in the earlier lecture along with a warning against concluding that only
the “utilitarian” theories which help us survive will themselves survive. Artifacts are indeed
selected from the vantage point of their fitness to some human aim-say, discovering truth
or understanding nature-and obviously that aim will survive only if we survive;
nonetheless, Popper holds the indirectness of the selection process to be essential to its functioning.41
The most important thesis Popper adds is that we can learn more about the human
mind by studying its externalized products without regard to the specifics of their human
origins than vice versa: “Contrary to first impressions, we can learn more about production
behaviour by studying the products themselves than we can learn about the products by
studying production behaviour.”42 There are two reasons for this. First, Popper notes-
with an acknowledgement to the economist and Nobel laureate Friedrich Hayek, who
had in turn repeatedly credited Descartes, David Hume, and Adam Ferguson-that the
most important products of the human mind are the unintended by-products of purposive
behavior directed towards some other end. Language is paradigmatic in this respect: it
is the result of intelligent, purposive behavior, but it is not the result of any attempt to
create language.43 Second, the mind itself is affected and shaped through feedback from
the evolution of its own externalized creations. As an example, Popper instances Whorf’s
celebrated interpretation of the difference between English and Hopi time perceptions as
products of the respective languages-an interpretation rooted in Whorf’s relativistic
hypothesis later embraced and extended to the realm of non-linguistic symbols by
Engelbart, as we saw above. In Popper’s words:

I personally find Benjamin Lee Whorf’s report on the Hopi Indians and their utterly different intuition of time convincing.... Should Whorf be right, then our intuitive grasp of time-the way in which we ‘see’ temporal relations-would partly depend on our language and the theories and myths incorporated in it: our own European intuition of time would owe much to the Greek origins of our civilization, with its emphasis on discursive thought.44

These themes are further pursued in the paper “On the Theory of the Objective Mind,”
with special emphasis on their application to the history of science. After arguing that
Galileo’s mistaken theory of the tides is best understood with reference to its (World 3)
problem situation, rather than with reference to biographical factors, Popper proceeds to compare and contrast his own method, called “situational analysis”, with the philosopher-historian R. G. Collingwood’s method of “reenactment,” without noting that the two methods had been explicitly equated by Popper’s student Joseph Agassi.45
Although concurring with Collingwood on his emphasis on problems and problem
situations, Popper then dismisses the reenactment doctrine as a subjective method focusing
on World 2 and using the analysis of objective problem situations as a mere aid to subjective
understanding.46 As I have argued elsewhere, this is somewhat of a misunderstanding of Collingwood, whose doctrine was much closer to Popper’s than the latter realized. Briefly,
what Collingwood held was that any statement is intelligible only as the answer to a definite
question and that any action is intelligible only as the solution to a problem. In a passage
quoted by Popper, Collingwood argues that, to understand a certain edict by Emperor
Theodosius, the historian must re-enact in his own mind the experience of the emperor; but he also makes clear that what this means is finding and attempting to solve the problem confronting the emperor, and arriving at the solution expressed in the edict.47 There is
no suggestion that, once the problem has been matched with its solution, there is some
further subjective understanding or experience required for historical knowledge truly to
exist; Collingwood’s method is thus no less objective than Popper’s.
To summarize this section: In Popper’s epistemology we find an explicit recognition
of the dependence of thought-specifically of rational, critical thought-on evolved external
artifacts, which are human-made, but which through their externality attain an essential
autonomy vi&vis the conditions and purposes of their creation. Popper repeatedly makes
the important point that the externality of, say, linguistic formulations is essential to their criticizability, and hence to their ability to function as vehicles of rational thought. In Popper’s often stated view, only once something has been written down does it become criticizable and hence epistemologically interesting.48 In addition to explicitly embracing
the Whorfian hypothesis of the linguistic relativity of thought, Popper thus implicitly lends
support to Engelbart’s “Neo-Whorfian hypothesis” that our future intellectual evolution
will take place chiefly through advances in the tools for our manipulation of external
symbols. Popper also explicitly classifies the computer with the pencil as tools for the
manipulation of World 3 objects-years before the appearance of the personal computer.
What we do not find in Popper is an explicit statement that some important World
3 objects-theories, arguments, books, journals, etc.-are physically embodied, and that
their specific physical embodiment, as distinct from their mere externality, affects their
epistemic role. Although the mind-expanding function of the pencil was noted, the
dependence of that function on the pencil’s weight and shape-a dependence noted by
both Engelbart and Petroski-is not explicitly noted by Popper. This defect in Popper’s
characterization of World 3 has been aptly noted by Paul Levinson, in a study that is
otherwise highly sympathetic towards Popper:

The problem with Popper’s model is that its World 3 is drawn in terms too primarily ideational:
the fundamental criterion for World 3 citizenship is being a humanly produced idea, with
the material expression of ideas awarded a second-class or derivative status.49

Levinson proceeds to formulate a technologically emended version of Popper’s World 3 epistemology and apply it to the development of a philosophy of technology in general, and of media technology in particular.50

For a classic thematization of the dependence of thought on the specific physical characteristics of the implements and vehicles of thought, in which the material expression of ideas is brought to the foreground, we shall turn, next, to Popper’s great predecessor Peirce.

Peirce’s Semiotic

A closer relationship than with Collingwood-meaning more points of contact-may be detected between Popper’s World 3 epistemology and the semiotic tradition represented by Popper’s teacher Bühler, as well as by Bühler’s predecessor Peirce and Peirce’s latter-day disciple Umberto Eco.51 When Popper says, as already quoted, “we can learn more about production behaviour by studying the products themselves than we can learn about the products by studying production behaviour,” the very wording foreshadows Eco’s later distinction between a theory of codes and a theory of sign production, as well as the claim that the latter presupposes the former. The autonomy of World 3 is also expressed with all desirable clarity in the following passage from Eco’s A Theory of Semiotics:

A signification system is an autonomous semiotic construct that has an abstract mode of existence independent of any possible communicative act it makes possible. On the contrary (except for stimulation processes) every act of communication to or between human beings-or any other intelligent biological or mechanical apparatus-presupposes a signification system as its necessary condition.52

This semiotic tradition goes back, through the intermediary of Charles Morris, to the great
nineteenth-century American astronomer, mathematician, logician, and philosopher
Charles Sanders Peirce, who defined a sign as “something which stands to somebody for
something in some respect or capacity.”53 Peirce’s various “anticipations” of Popper have,
of course, been widely documented. As we have seen above, Popper himself traces his
indeterministic cosmology back to Peirce, the common element being their shared
conviction that classical mechanics is compatible with indeterminism. In addition, Eugene
Freeman and Henryk Skolimowski have instanced: fallibilism (a label used by both Peirce and Popper); Peirce’s analysis of abductive reasoning and Popper’s method of conjectures and refutations (first equated, I believe, by A. J. Ayer54); Peirce’s pragmatic maxim and Popper’s principle of falsifiability; and their shared evolutionary conception of science as a living and growing enterprise.55 In a paper that is somewhat critical of Freeman and
Skolimowski, as well as of Popper, Ilkka Niiniluoto has also pointed to Popper’s
rediscovery of Peirce’s propensity theory of probability, and has documented that the
common features listed by Freeman and Skolimowski are traceable to a shared ancestry in the thought of the nineteenth-century philosopher and historian of science William Whewell, an indebtedness repeatedly acknowledged by Peirce.56 Finally, Israel Scheffler
has noted that Peirce’s rejection of foundationalism is echoed by Popper, in a wording
that closely recalls Peirce’s substitution of the bog for the bedrock as a metaphor for the
basis of scientific knowledge.57
One parallel, however, has not been noted by any of the above. Although Freeman
and Skolimowski discuss both Peirce’s doctrine of signs and Popper’s World 3

epistemology, they do not note that the entities comprising Popper’s World 3 are signs
in the Peircean sense, or that Peirce’s doctrine of signs represents a World 3 epistemology
that in many respects is more detailed and developed than Popper’s; nor has this been
noted by any other writer that I am aware of. Again, I do not mean simply that World
3 consists of objects which Peirce would have classified as signs-that would be a rather
trivial observation, given the ambitious scope of Peirce’s doctrine of signs. Rather, as we
have already seen, Popper’s own statement that a World 3 object, such as a book, is
constituted by its “dispositional character of being understood or interpreted” is what
recapitulates Peirce’s definition of a sign as whatever is capable of being interpreted.
Peirce’s doctrine of signs was first introduced in a series of articles published in The
Journal of Speculative Philosophy in 1868. Peirce here specifically targeted Descartes and
the Cartesian spirit in philosophy, and claimed that we have no ability to think without
signs. This claim presupposes a prior argument that all self-knowledge can be accounted
for as inferences from external facts and that, therefore, there is no reason to posit any
power of introspection.58 We are directed, therefore, to look to external facts for evidence
of our thoughts; thus, we encounter a near-tautology if we conclude that the only thoughts
so evidenced are in the form of signs: “If we seek the light of external facts, the only cases
of thought which we can find are of thought in signs.”59 This doctrine is repeated in 1909 in the following words, which recall Socrates and Aristotle: “All thinking is dialogic in form.... Consequently, all thinking is conducted in signs that are mainly of the same general structure as words....”60
We have just seen one of Peirce’s definitions of “sign”; in 1868 he gave this formulation:

Now a sign has, as such, three references: first, it is a sign to some thought which interprets
it; second, it is a sign for some object to which in that thought it is equivalent; third, it is
a sign, in some respect or quality, which brings it into connection with its object.61

What brings the sign into connection with its object is some material quality it possesses,
which enables it to represent its object in a particular way. A picture signifies by means
of an association that connects it, in the interpreter’s brain, with its object (the picture
resembles its object in some way). A weathervane or a tally signifies simply by being
physically connected with its object by a chain of causation. Words, finally, can function
as signs only because they are capable of being connected into sentences by means of a
copula (what we would call an abstract or convention-based connection).62 Later, Peirce
was to classify all signs as: icons, which signify by virtue of resemblance; indices, which
signify by virtue of a physical connection with the object; and symbols, which signify by
virtue of the existence of a rule governing their interpretation.63
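Peirce’s trichotomy is, in effect, a small classification scheme, and it can be rendered as a sketch in code. The following is purely illustrative (the type name, descriptions, and examples are mine, not Peirce’s):

```python
from enum import Enum

class SignKind(Enum):
    """Peirce's trichotomy of signs; descriptions paraphrase the text above."""
    ICON = "signifies by resemblance to its object"
    INDEX = "signifies by a physical connection with its object"
    SYMBOL = "signifies by a rule governing its interpretation"

# Illustrative classification of a few familiar signs (my examples):
examples = {
    "portrait": SignKind.ICON,           # resembles the person depicted
    "weathervane": SignKind.INDEX,       # causally moved by the wind it indicates
    "the word 'wind'": SignKind.SYMBOL,  # interpreted by linguistic convention
}

for sign, kind in examples.items():
    print(f"{sign}: {kind.name} ({kind.value})")
```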
The exact physical embodiment of symbols is of course largely, but not entirely, a
matter of convention. The choice of Roman versus Old English lettering may be purely
conventional, but that is certainly not the case with one’s preference for Arabic over Roman
numerals for the purpose of performing long division-a point also made by Peirce, and
later by Engelbart.64 The important observation Peirce made in 1868, and was later to
develop in depth, as we shall see below, is that the existence of symbols depends crucially
on the existence of a notation that is capable of symbolic interpretation, and moreover
that our thinking is facilitated or impeded by the specific physical features of our notation.
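The point about Arabic versus Roman numerals can be made concrete with a small sketch (the code and the particular numbers are my illustration, not Peirce’s or Engelbart’s): positional notation supports division digit by digit, whereas Roman numerals must first be translated out of their own notation before the algorithm can even begin.

```python
# Mapping of Roman numeral symbols to values (standard, not from the essay).
ROMAN = {"M": 1000, "D": 500, "C": 100, "L": 50, "X": 10, "V": 5, "I": 1}

def roman_to_int(s: str) -> int:
    """Convert a Roman numeral to an integer, handling subtractive pairs like IV."""
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = ROMAN[ch]
        # Subtract when a smaller symbol precedes a larger one (e.g. CM = 900).
        total += -v if nxt in ROMAN and ROMAN[nxt] > v else v
    return total

# In positional notation, long division proceeds digit-wise; here the machine's
# integer division stands in for the familiar paper-and-pencil algorithm:
dividend, divisor = 1998, 37
print(dividend // divisor, dividend % divisor)

# "MCMXCVIII ÷ XXXVII" offers no such digit-wise procedure; the symbols must
# first be converted into positional form:
print(roman_to_int("MCMXCVIII") // roman_to_int("XXXVII"))
```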

By claiming that all thinking is signification, i.e., the production and interpretation
of signs, Peirce was not denying the psychological fact that thoughts can be subjectively
experienced as internal states of mind; what he is claiming is that such experiences are
merely psychological facts: “[Every] thought, in so far as it is a feeling of a peculiar sort, is simply an ultimate, inexplicable fact.”65 The meaning or intellectual value of a thought
lies in its potential for further interpretation, whether by my mind or by some other mind-
that is, it lies in its signhood:

Finally, no present actual thought (which is a mere feeling) has any meaning, any intellectual
value; for this lies not in what is actually thought, but in what this thought may be connected
with in representation by subsequent thought; so that the meaning of a thought is something
altogether virtual.66

In a footnote, Peirce adds: “just as we say that a body is in motion, and not that motion is in a body, we ought to say that we are in thought and not that thought is in us.”67 These
are the ideas which Peirce proceeded to develop into the famous and widely discussed
theory of meaning known as pragmatism; here we shall look at some less known aspects
of this doctrine of signs.
The recognition of the sign character of thought led Peirce repeatedly to emphasize
the externality and artifactuality (if I may so call it) of knowledge. Thus, in 1902 Peirce
wrote, in a passage echoed by Popper sixty-odd years later:

I think it more accurate to say that there are stores of knowledge in a library than to say
that there are books there which, if I was reading, having become acquainted with the
languages they are printed in, would be conveying knowledge to my mind.68

Similarly, in another manuscript from the same year: “In my opinion it is much more
true that the thoughts of a living writer are in any printed copy of his book than they are in his brain.”69 These are not just casual observations, but corollaries of the doctrine
of signs.
A sidelight on just how seriously Peirce took the physical embodiment of knowledge
as an essential constituent is provided by his obituary for Charles Babbage, the inventor
of history’s first programmable computer, the Analytical Engine. After praising Babbage’s
engine as “the most stupendous work of human invention,” Peirce goes on to instance
Babbage’s publication of a volume of logarithms, where Babbage tried fifty different colors of paper, and ten of ink, before settling on the combination that was easiest to read.70
From Peirce’s perspective, Babbage’s choice of the right paper-ink combination for his
table of logarithms was itself as much a contribution to scientific knowledge as the
computation of the logarithms themselves. Knowledge, in Peirce’s semiotic doctrine,
consists less in states of mind (“ultimate, inexplicable facts”) than in the potentiality of
external objects to induce certain states of mind, and this potentiality depends on the
specific physical characteristics of said external objects.
Peirce never denied the existence of consciousness, but he did deny that consciousness
is an essential attribute of mind.71 In an especially topical passage, Peirce goes on to
emphasize the dependence of our language faculty on external tools for linguistic
expression:
Thinking with Machines - 171

A psychologist cuts out a lobe of my brain (nihil animale a me alienum puto) and then, when I find I cannot express myself, he says, “You see, your faculty of language was localized in that lobe.” No doubt it was; and so, if he had filched my inkstand, I should not have been able to continue my discussion until I had got another. Yea, the very thoughts would not come to me [emphasis added]. So my faculty of discussion is equally localized in my inkstand.72

As is indicated by the emphasized sentence, Peirce is not making the trivial point that
without ink he would not be able to communicate his thoughts. The point is, rather, that
his thoughts come to him in and through the act of writing, so that having writing
implements is a condition for having certain thoughts-specifically those issuing from
trains of thought that are too long to be entertained in a human consciousness. This is
precisely the idea that, sixty years later, motivated Engelbart to devise new technologies
for writing so as to improve human thought processes, as well as the idea that motivated
Havelock’s interpretation of Plato.
As was pointed out by Freeman, Peirce’s doctrine of signs implies that all reasoning
is diagrammatic, a corollary repeatedly made explicit by Peirce himself, e.g., in this passage
quoted by Freeman: “For reasoning consists in the observation that where certain relations
subsist certain others are found, and it accordingly requires the exhibition of the relations
reasoned within an icon.”73 The value of the iconic representation, Peirce repeatedly
insisted, lay in the possibility it afforded of performing experiments on our thoughts, by
changing some elements in the diagram and literally seeing new relations appear. Since
Popper has opined that there is “a big gulf” between Peirce’s dictum “all reasoning is
diagrammatic” and his own “all reasoning is critical,” we need to note that, in Peirce’s
view, the chief benefit of the externalization of thought is the opportunity it offers for
logical criticism through the performance of the type of experiments just mentioned.74 We
should also note that Peirce regarded the self-critical use of language as the chief difference
between humans and other animals. People not only use signs; we think of signs as signs,
and this mode of thinking allows us to control our thinking through criticism. In Peirce’s
words:

All thinking is by signs; and the brutes use signs. But they perhaps rarely think of them as
signs. To do so is manifestly a second step in the use of language. Brutes use language and
seem to exercise some little control over it. But they certainly do not carry this control to
anything like the same grade that we do. They do not criticize their thought logically.75

In short, we differ from other animals through a use of language that enables logical
criticism of our thought-precisely the doctrine Popper inherited from Bühler.
In keeping with this view, Peirce developed his system of “existential” graphs as a
new notation for logic. (That Peirce wrote extensively on topology and cartography is
also not coincidental.) As has been observed by Martin Gardner, no doubt correctly, the
graphs were not intended as a practical improvement over existing algebraic notations,
but as a means of laying bare the diagrammatic essence of thought and laying before the
reader, in Peirce’s words, “a moving picture of thought.”76 Such a moving picture can
also be exhibited by a machine, as Peirce made clear in his 1887 article on the logic machines
constructed by William Stanley Jevons and Allan Marquand:

The secret of all reasoning machines is very simple. It is that whatever relation among the objects reasoned about is destined to be the hinge of a ratiocination, that same general relation must be capable of being introduced between certain parts of the machine.77

To illustrate, the syllogism, ‘If A then B, if B then C, therefore, if A then C,’ can be embodied in a machine where pushing the lever A activates the piston B, which in turn rotates the wheel C, so that pushing A in effect activates C. Machines, then, are capable of reasoning, in the sense of drawing inferences, but in Peirce’s view this is not a peculiarity of logic machines but a general characteristic of machines, including here a wide variety of apparatus not generally thought of as machines: “Accordingly, it is no figure of speech to say that the alembics and cucurbits of the chemist are instruments of thought, or logical machines.”78
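The lever-piston-wheel illustration can itself be sketched in a few lines of code (a toy model of mine, not Peirce’s): each mechanical linkage realizes one conditional, and the machine “infers” the composite conditional simply by propagation of motion.

```python
# Each linkage embodies one conditional: lever A drives piston B ("if A then B"),
# and piston B drives wheel C ("if B then C"). These part names come from the
# illustration above; the dictionary representation is mine.
linkages = {"A": "B", "B": "C"}

def push(part: str) -> set[str]:
    """Return every part set in motion by pushing `part`."""
    moving = {part}
    while linkages.get(part):
        part = linkages[part]
        moving.add(part)
    return moving

# Pushing the lever activates the piston and, through it, the wheel: the machine
# has in effect drawn the inference "if A then C".
print(push("A"))
```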
Without here venturing into speculations about what Peirce might have said about
modern digital computers, we shall observe that he pointed to two differences between
people and the logic machines of his day. First, machines are “destitute of all originality, of all initiative”. In a machine, Peirce stresses, this is a good thing; it is precisely the machine’s
lack of originality that makes it predictable and hence useful; a balloon, for instance, has
limited usefulness because it has too much initiative: “We no more want an original machine than a house-builder would want an original journeyman, or an American board of college trustees would hire an original professor.”79 This is precisely the view of machines later
echoed by Popper, as well as by Licklider. It was also expressed more than thirty years
before Peirce wrote the above, by Charles Babbage’s associate Lady Ada, Countess
Lovelace, history’s first computer programmer: “The Analytical Engine has no pretensions
whatever to originate anything. It can do whatever we know how to order it to perform.”80
Second, Peirce observes, a logic machine is limited by its design: “it has been contrived to do a certain thing, and it can do nothing else.”81 Now, Peirce is not denying, but explicitly omitting, that there could be nondeterministic machines constructed like Jacquard looms, incorporating if-then loops, and capable of handling a great variety of problems.82 What
is of chief interest here is not, however, Peirce’s observations about machines per se, but
what he is obliquely saying about people. People are like machines, in that the unaided
individual is as limited by his/her design as is the machine. But people do not think unaided, and therein lies the difference: “The unaided mind is also limited in this as in other respects; but the mind working with a pencil and plenty of paper has no such limitations.”83 (There are of course other limitations; sometimes you need tools for building three-dimensional models to visualize relations that cannot be represented two-dimensionally.) Peirce goes
on to extol the virtues of algebra, in particular the empowering function of the parenthesis,
which enables levels of abstraction that would be literally unthinkable without it; later
in life, as he perfected his logical graphs, he was to express a preference for diagrammatic over algebraic notation, but his general point remains that people reason by means of
parentheses, diagrams, alembics, cucurbits, and other machinery (what we would today
call soft or virtual and hard), and that no one machine is likely ever to capture this general
machine-dependency of human reasoning.
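The “empowering function of the parenthesis” can be illustrated with a small sketch (my example, not Peirce’s): nesting lets one expression stand, as a single unit, inside another, to any depth; the nesting depth is a crude measure of the levels of abstraction that flat notation cannot express.

```python
def depth(expr: str) -> int:
    """Maximum nesting depth of parentheses in an expression."""
    d = best = 0
    for ch in expr:
        if ch == "(":
            d += 1
            best = max(best, d)
        elif ch == ")":
            d -= 1
    return best

print(depth("a+b"))              # no abstraction: a flat expression
print(depth("(a+b)*c"))          # the sum treated as a single object
print(depth("((a+b)*c - d)/e"))  # and that product, in turn, as another
```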

Peirce’s view of thinking as the production, development, and interpretation of signs led
him to attribute reasoning to a wide range of artifacts, simple as well as complex. But

the distinctly human type of reasoning consists in the use of external artifacts as aids to
thought. For Peirce, as later for Popper, this externalization of the objects of thought
is what made logical criticism, and hence rationality, possible. Peirce, however, showed
a far greater awareness of the importance of the exact physical embodiment that this
externalization took, an awareness that led him to spend a great portion of his life studying
and perfecting various types of notation, diagrammatic as well as symbolic. Again, for
both thinkers, there are no utilitarian or instrumentalist implications regarding this aspect of cognition: that thoughts are instruments for survival (as some thoughts certainly
are) is not primary; the point is rather that a variety of external things are instruments
for our thinking. Further, our conception of how we think will determine how we structure
those external things, just as their structure in turn guides the direction of our thinking.
In the above we have examined and documented a coherent set of ideas common
to Engelbart’s augmentationism, Popper’s evolutionary epistemology, and Peirce’s
pragmatistic semiotic. Let us summarize these ideas in eight theses, not all of them explicitly
shared by all three thinkers:

1. Human evolution proceeds exosomatically, through the evolution of external artifacts.
2. Human knowledge does not reside in the head, but in a subclass of our evolved
artifacts: Engelbart’s “external symbols,” Popper’s “World 3”, Peirce’s “signs.”
3. The Whorfian hypothesis: Our fundamental world view is shaped by one of our
evolved exosomatic organs, our language.
4. The Neo-Whorfian hypothesis: Our thinking is also shaped in a variety of ways
by other exosomatic organs, specifically the tools by which we manipulate
external symbols.84
5. Abstract, critical thought is made possible only by the existence of written
language. This thesis is also embraced by Bühler and Havelock.
6. The possible degrees of abstraction (as well as the possible kinds of intellectual
projects) available to us depend on the logical or effective features of the symbols
by which we represent abstractions-Arabic numerals, parentheses, and the
like-i.e., not what the symbols look like, but what they can and cannot do.
7. The possible degrees of abstraction available to us depend on the physical features
of the tools with which we produce and manipulate symbols-pencils, computers,
and the like.
8. The digital computer is most fruitfully regarded as a means for augmenting human intelligent activity by automating and accelerating the production and manipulation of symbols, rather than as an independent source of intelligent behavior.

Theses 6 and 7 are not explicitly found in Popper, and Thesis 8 is perforce not found in
this exact form in Peirce, although his comments on logic machines come very close. All
technology, as has been made clear by Paul Levinson among others, is the embodiment of ideas.85 But only some technologies are the embodiment of ideas about ideas (what Levinson calls “meta-cognitive” technology); such a technology is today’s personal-computer technology. The digital computer itself may be considered the physical embodiment of Boolean algebra together with the recursive Church-Turing definition of computability.86
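The claim that the computer physically embodies Boolean algebra can be made concrete with a standard observation (textbook material, not from the essay): every Boolean connective is realizable from a single physical gate type, such as NAND, which is why a machine built of uniform physical parts can embody the whole algebra.

```python
# Every Boolean connective built from one gate type, NAND (a standard result).

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:          # NOT a   =  a NAND a
    return nand(a, a)

def and_(a: int, b: int) -> int:  # a AND b =  NOT (a NAND b)
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:   # a OR b  =  (NOT a) NAND (NOT b)
    return nand(not_(a), not_(b))

# Exhaustive check over all inputs, in the spirit of a truth table:
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
print("NAND suffices for AND, OR, NOT")
```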

The articulation and extrapolation of these ideas as an approach to understanding the computer as a reasoning machine may seem to lead to a mechanistic paradigm of reasoning and of mental activity in general-a paradigm actually embraced by Turing himself.87 Whether such a paradigm is in any way implied by the Church-Turing thesis is obviously too large a question to be raised here. But, although this question is not irrelevant to the subject of this paper, we need to emphasize that the ideas embodied in the construction of the inference engine are of little help in explaining what the computer is for, or how to use it for maximum benefit to the human race.88
These are the questions that were raised by the augmentationists-Bush, Licklider, Engelbart, et al.-and answered with reference to human capabilities and to the design of the computer’s user interface, rather than to the principles governing the operations of the inference engine behind the interface.89 And the thesis with which I shall conclude
is that, by studying the revolution the augmentationists have accomplished in the computer
and our conception of the computer, as well as the ideas motivating that revolution, we
are led to a nonmechanistic, evolutionary, and semiotic conception of mind and knowledge
as the most appropriate philosophical paradigm for understanding this latest revolution
in technology.

Notes

1. Notable exceptions are Michael Heim; see Electric Language: A Philosophical Study of Word Processing, New Haven: Yale University Press, 1987, and Paul Levinson, who approaches the subject from the perspective of a profoundly philosophical student of media; see esp. “Intelligent Writing: The Electronic Liberation of Text,” Technology in Society, 11 (1989): 387-400, and “Electronic Text and the Evolution of Media,” Journal of Social and Biological Structures, 13(2) (1990): 141-149.
2. A somewhat similar point was made a few years ago by Alan Kay, who can probably be fairly described as the father of the laptop computer; see “Inventing the Future,” in The AI Business: The Commercial Uses of Artificial Intelligence, edited by Patrick H. Winston and Karen A. Prendergast, 103-112, esp. 108, Cambridge, Massachusetts: MIT Press, 1984: “Computers are going to disappear as physical objects. They will disappear into the wiring of our houses and into the clothes that we wear.”
3. No purpose will be served here by attempting a bibliography of this debate. However, indispensable sources include Daniel Dennett, Brainstorms, Cambridge, Massachusetts: MIT Press, 1978; Jerry Fodor, Representations: Philosophical Essays on the Foundations of Cognitive Science, Cambridge, Massachusetts: MIT Press, 1983; Hubert Dreyfus, What Computers Can’t Do, Revised Edition, New York: Harper & Row, 1979; John Searle, Minds, Brains, and Science, Cambridge, Massachusetts: Harvard University Press, 1984; and the papers collected in John Haugeland, ed., Mind Design: Philosophy, Psychology, Artificial Intelligence, Cambridge, Massachusetts: MIT Press, 1981.
4. D. C. Engelbart, “Augmenting Human Intellect: A Conceptual Framework,” Summary Report, SRI Project No. 3578, AFOSR-3223, Contract AF 49(638)-1024, 1962.
5. Howard Rheingold, Tools for Thought: The People and Ideas Behind the Next Computer
Revolution, New York: Simon & Schuster, 1985. See also Rheingold, Virtual Reality, 76-81, New
York: Simon and Schuster, 1991, and my review of the latter in the Journal of Social and
Evolutionary Systems, 16(1), 1993.
6. D. C. Engelbart, "The Augmented Knowledge Workshop," in A History of Personal
Workstations, edited by Adele Goldberg, 187-236, esp. 206-210, New York: ACM Press, 1988.
Thinking with Machines - 175

7. Just how direct the causal chain is can be gauged from the fact that Engelbart holds the
patent to the computer mouse. But Engelbart has distanced himself from current trends in user-
interface design by taking exception to a perceived sacrifice of long-term to short-term productivity
gains in the name of "user friendliness" (i.e., he thinks real gains in human cognitive ability, not
superficial ease of use, are the important results for measurement of the worth of computers, including
graphic interfaces); see "The Augmented Knowledge Workshop," 200, 217-218.
8. J. C. R. Licklider, “Man-Computer Symbiosis,” reprinted in Goldberg, 131-140, esp. 133-
134.
9. Vannevar Bush, "As We May Think," The Atlantic Monthly, (July 1945), 101-108.
Reprinted in Goldberg, 237-247, esp. 245-246. Also, this and several other classic papers by Bush
and others are now available in James Nyce and Paul Kahn, eds., From Memex to Hypertext:
Vannevar Bush and the Mind's Machine, Cambridge, Massachusetts: Academic Press, 1991.
10. On enabling technologies, see Rheingold, Virtual Reality, 61.
11. See Lawrence S. Orilia, Introduction to Business Data Processing, 54-55, New York:
McGraw-Hill, 1979.
12. See Raymond Kurzweil, The Age of Intelligent Machines, 198, Cambridge, MA: MIT
Press, 1990.
13. W. Ross Ashby, Design for a Brain, 236, London: Chapman and Hall, Ltd., 1952;
second revised edition 1960.
14. W. Ross Ashby, "Design for an Intelligence-Amplifier," in C. E. Shannon and J.
McCarthy, eds., Automata Studies, 215-233, Princeton: Princeton University Press, 1956. The
contrast between the steam engine and the lever is explicitly drawn on p. 218.
15. Engelbart, "Augmenting Human Intellect," 9-11. The potential for misunderstanding
inherent in the concept of intelligence augmentation is illustrated by a recent rebuttal of the myth
that the possession of a computer makes the owner inherently smarter, a myth mistakenly claimed
to have one of its roots in Engelbart's augmentationism; see Steve Aukstakalnis and David Blatner,
Silicon Mirage: The Art and Science of Virtual Reality, 307-308, Berkeley, CA: Peachpit Press,
1992. In my innocence I had not encountered this particular myth before seeing it rebutted, but
I suppose there is no limit to the range of superstitions that may spring up around a new technology.
Engelbart, to make this point absolutely clear, is not even claiming that computer training makes
a person inherently smarter; rather, through training the person is augmented by becoming part
of a complex system that is smarter than the unaided person.
16. Engelbart, "Augmenting Human Intellect," 21-23. The effect on our thinking of
graphical representation of symbols has more recently been literally illustrated by Edward R. Tufte
in his two remarkable works, The Visual Display of Quantitative Information, Cheshire, CT:
Graphics Press, 1983, and Envisioning Information, Cheshire, CT: Graphics Press, 1990.
17. Engelbart, "Augmenting Human Intellect," 24. Engelbart cites Benjamin Lee Whorf,
Language, Thought, and Reality, New York: MIT and John Wiley & Sons, 1956. For a delightful
popular exercise in applied Whorfian linguistics, see Howard Rheingold, They Have a Word for
It: A Lighthearted Lexicon of Untranslatable Words and Phrases, Los Angeles, CA: Jeremy P.
Tarcher, Inc., 1988. I shall not here pursue the intriguing question of whether the chief chronicler
of Engelbart's ideas also being a practicing Whorfian is more than coincidence.
18. Engelbart, “Augmenting Human Intellect,” 25.
19. Engelbart’s role as a pioneer of virtual reality has been chronicled in Rheingold, Virtual
Reality; see Note 5 above.
20. Engelbart, "Augmenting Human Intellect," 27, Fig. 2.
21. Ibid., 26.
22. Henry Petroski, The Pencil: A History of Design and Circumstance, New York: Alfred
Knopf, 1989, 27. Of related interest is the same author's The Evolution of Useful Things, 23, New
York: Alfred Knopf, 1992.

23. Eric Havelock, Preface to Plato, Cambridge, Massachusetts: Harvard University Press,
1963. For a discussion of Havelock's relevance to the personal computer revolution, see Heim, 46-
57. [Editor's Note: See also the texts of Innis and McLuhan (e.g., Harold Innis, The Bias of
Communication, Toronto: University of Toronto Press, 1951, and Marshall McLuhan, The Gutenberg
Galaxy, New York: Mentor, 1962) for work earlier than and contemporaneous to Havelock's in this
area. See also Paul Levinson, "The Technological Determination of Philosophy," in Culture and
Communication: Methodology, Behavior, Artifacts, and Institutions, vol. 3, edited by Sari Thomas,
21-26, Norwood, NJ: Ablex, 1987. -PL.]
24. Engelbart, "The Augmented Knowledge Workshop," 196.
25. Engelbart's pioneering role in hypertext development is noted in Emily Berk and Joseph
Devlin, "A Hypertext Timeline," in Hypertext/Hypermedia Handbook, edited by E. Berk and J.
Devlin, 13-16, New York: McGraw-Hill, 1991; Learning With Interactive Multimedia, edited by
Sueann Ambron and Kristina Hooper, 9-10, Redmond, Washington: Microsoft Press, 1990; and Nigel
Woodhead, Hypertext & Hypermedia: Theory and Applications, 5-6, Reading, Massachusetts:
Addison-Wesley, 1991. All of the above find even earlier intimations of hypertext in Vannevar Bush's
pioneering paper "As We May Think"; see Note 9 above. For a general overview of hypertext, see
Edward Barrett, ed., The Society of Text: Hypertext, Hypermedia, and the Social Construction
of Information, Cambridge, Massachusetts: MIT Press, 1989.
26. See Licklider, reference in Note 8 above, and Engelbart, "Augmenting Human Intellect,"
p. 46. The concept of the computer as a medium has been articulated, e.g., by Alan Kay, who in
this respect has been influenced by Marshall McLuhan; see Rheingold, Virtual Reality, p. 85.
27. The three papers by Popper are all published in his Objective Knowledge: An Evolutionary
Approach, Oxford: Oxford University Press, 1972. Campbell's "Evolutionary Epistemology" is
found in The Philosophy of Karl R. Popper, edited by P. A. Schilpp, 412-463, La Salle, IL: Open
Court, 1974. References to Ashby are found in this paper, 421-423, and elsewhere in Campbell's
writings. An acknowledgement that Campbell was first introduced to this paradigm by Ashby is
found in Campbell, "Methodological Suggestions from a Comparative Psychology of Knowledge
Processes," Inquiry, No. 2, 152-182, esp. 154, 1959. For some reservations about the scope of this
paradigm, see Peter Skagestad, "Taking Evolution Seriously: Critical Comments on D. T. Campbell's
Evolutionary Epistemology," The Monist, Vol. 61, No. 4, 611-621, 1978. Finally, a comprehensive
bibliography of evolutionary epistemology, compiled by Campbell and Gary A. Cziko, has been
published in the Journal of Social and Biological Structures, Vol. 13, No. 1, 41-82, 1990.
28. Popper, Objective Knowledge, 238.
29. Ibid., 238-239.
30. Ibid., 225, n. 39.
31. Ibid., 242.
32. Ibid., 196.
33. Gottlob Frege, "The Thought: A Logical Inquiry," in P. F. Strawson, ed., Philosophical
Logic, Oxford: Oxford University Press, 1967, pp. 17-38, esp. p. 29. See also "On Sense and Reference,"
in Translations from the Philosophical Writings of Gottlob Frege, edited by P. Geach and M. Black,
56-78, esp. 57, Oxford: Basil Blackwell, 1966. Popper's World 3 also bears a certain resemblance
to John William Miller's "Midworld," i.e., the world of artifacts or "functioning objects," which
in Miller's view was indispensable to bridging the gulf between objectivity and subjectivity; see his
The Paradox of Cause, 106-129, New York: W. W. Norton & Co., 1978.
34. Frege, 29. For further discussion of this subject, see Peter Skagestad, Making Sense of
History: The Philosophies of Popper and Collingwood, Oslo, Norway: Universitetsforlaget, 1975,
ch. 6.
35. Popper, Objective Knowledge, 107.
36. Ibid., 107-108.

37. Ibid., 116. Peirce's definition, of which more below, also anticipates Roman Jakobson's
insistence on translatability as an essential character of the linguistic sign, as has been argued by
Jakób Liszka; see his "Peirce and Jakobson: Towards a Structuralist Reconstruction of Peirce,"
Transactions of the Charles S. Peirce Society, Vol. XVII, No. 1, 41-61, Winter 1981.
38. Popper, Objective Knowledge, 115. See the Peirce quotation cited in Note 68 below.
39. Ibid., 120.
40. Ibid., 122.
41. Ibid., 253. For Charles Peirce's reliance on this same distinction, see Peter Skagestad,
"Peirce and Pearson: Pragmatism vs. Instrumentalism," in Language, Logic, and Method, edited
by R. S. Cohen and M. W. Wartofsky, Boston Studies in the Philosophy of Science, Vol. 31,
263-282, Dordrecht: D. Reidel, 1983.
42. Popper, Objective Knowledge, 114.
43. Ibid., 113; 117. Popper here references Friedrich Hayek's Studies in Philosophy, Politics,
and Economics, 96, 100, and 111, The University of Chicago Press, 1967; the last page referenced
has a pertinent reference to Hume, while p. 96 references Adam Ferguson, including his famous
formulation: "the result of human action, but not the execution of any human design." See also
Popper's "On the Theory of the Objective Mind," 159, with further references to Hayek and to Butler.
44. Popper, Objective Knowledge, 135. Whorf's view, I should add, was that our European
tendency to objectify leads us to posit make-believe entities whenever we use cardinal numbers,
whereas the Hopi would not use cardinal numbers except for things already ascertained to exist.
Thus we can think of days and years as things, which we count and measure, whereas the Hopi
simply think of day as a condition; to the Hopi, then, you do not stay 'ten days', you stay 'until
the eleventh day'. See Benjamin Lee Whorf, Language, Thought, and Reality, 57, 139-140.
45. Joseph Agassi, "Towards an Historiography of Science," History and Theory, Beiheft 2,
50, 1963. Agassi does suggest that Collingwood imposed needless restrictions on the use of the method
because he, unlike Popper, sought certainty. This seems to me a mistake. Collingwood held that,
in cases where the agent's solution is the only evidence of his problem, we cannot rediscover the
problem unless the solution was successful. Whether or not we accept this view does not appear
to me to have anything to do with whether or not we are looking for certainty; where there is
no evidence, there surely can be no knowledge, certain or conjectural.
46. Popper, 187-188.
47. R. G. Collingwood, The Idea of History, Oxford: Oxford University Press, 283, 1946; also An
Autobiography, Oxford: Oxford University Press, 68-71, 1939. For further discussion of this issue, see Peter
Skagestad, Making Sense of History, esp. Ch. 5. For an exceptionally detailed discussion of the re-
enactment doctrine, see Heikki Saari's dissertation "Re-Enactment: A Study in R. G. Collingwood's
Philosophy of History," Acta Academiae Aboensis, Ser. A, Vol. 63, No. 2, 1984. Some interesting
parallels between Collingwood and Peirce are discussed in Kenneth L. Ketner, "An Emendation
of R. G. Collingwood's Doctrine of Absolute Presuppositions," Graduate Studies, No. 4, Texas Tech
University, July 1973.
48. E.g., Popper, 96.
49. Paul Levinson, Mind at Large: Knowing in the Technological Age, Greenwich, CT: JAI
Press, 1988, p. 79.
50. On Levinson’s philosophy of technology, see also my forthcoming review of Levinson’s
Electronic Chronicles (1992) in the Journal of Social and Evolutionary Systems.
51. On Bühler's contributions to semiotic, see, e.g., Robert E. Innis, Karl Bühler: Semiotic
Foundations of Language Theory, New York: Plenum, 1982, as well as Semiotics: An Introductory
Anthology, edited by Robert E. Innis, Bloomington: Indiana University Press, 1985, 66-86.
52. Umberto Eco, A Theory of Semiotics, Bloomington, IN: Indiana University Press, 9, 1979,
emphasis in the original. The Peircean ancestry of the discipline is noted by Eco, 15-16.
53. Charles Sanders Peirce, Collected Papers, Vols. 1-6, edited by Charles Hartshorne and
Paul Weiss, Vols. 7-8, edited by A. Burks, Cambridge, Massachusetts: The Belknap Press of Harvard
University Press, 1935-1958, Vol. 2, para. 228. On Peirce's semiotic, see Peter Skagestad, "C. S. Peirce:
Discourse and Semiotic," in The Philosophy of Discourse, edited by C. Sills and G. H. Jensen,
53-69, Portsmouth, New Hampshire: Heinemann, 1992; for Peirce's philosophy more generally, see
my The Road of Inquiry: Charles Peirce's Pragmatic Realism, New York: Columbia University Press,
1981. Charles Morris' classic contributions to semiotic include his Foundations of the Theory of
Signs, Chicago: University of Chicago Press, 1938, and Signs, Language, and Behavior, New York:
Prentice-Hall, 1946.
54. A. J. Ayer, The Origins of Pragmatism, London: Macmillan, 83-84, 1968.
55. Eugene Freeman and Henryk Skolimowski, "The Search for Objectivity in Peirce and
Popper," in Schilpp, 464-519. As has been pointed out by Ilkka Niiniluoto, however, Freeman misses
the mark by claiming that Peirce's fallibilism, unlike Popper's, is qualified by a theory of 'manifest'
truth; Peirce's ideal-limit theory is in fact nothing of the kind; see Niiniluoto, "Notes on Popper
as Follower of Whewell and Peirce," Ajatus, Vol. 37, 272-327, esp. 316-317, 1978. On Peirce's theory
of truth, see Skagestad, "Peirce's Conception of Truth: A Framework for Naturalistic Epistemology?"
in Naturalistic Epistemology: A Symposium of Two Decades, edited by Abner Shimony and Debra
Nails, 73-90, Dordrecht: Reidel, 1987.
56. Niiniluoto, p. 320. The influences on Popper, other than the repeatedly acknowledged
influences of Kant and Bühler, are harder to trace, as is somewhat testily noted by Niiniluoto, 274,
n. 4. As Freeman and Skolimowski remark, 509, Popper was not familiar with Peirce's thought
until 1952, by which time his own philosophical outlook was largely formed. Niiniluoto observes,
276, n. 8, that Popper does not mention Whewell until 1954, and then lumps him together with
his opponent J. S. Mill as an inductivist. As Niiniluoto also notes, 274, n. 5, Whewell's 'anticipations'
of Popper seem to have been first observed by Imre Lakatos in 1970; see Lakatos, "Criticism and
the Methodology of Scientific Research Programmes," in Criticism and the Growth of Knowledge,
edited by I. Lakatos and A. Musgrave, 91-195, Cambridge: Cambridge University Press, 1970.
57. Israel Scheffler, Four Pragmatists: A Critical Introduction to Peirce, James, Mead and
Dewey, New York: Humanities Press, 57, 1974.
58. Peirce, Coiieeted Papers, Vol. 5, paras. 247-249.
59. Ibid., Vol. 5, para. 251.
60. Ibid., Vol. 6, para. 338.
61. Ibid., Vol. 5, para. 283.
62. Ibid., Vol. 5, para. 286. Peirce's profound and lifelong interest in notation is documented,
e.g., in Carolyn Eisele, "Charles S. Peirce: Semiotician in Mathematics and the History of Science,"
in Studies in Peirce's Semiotic, edited by Kenneth L. Ketner and Joseph M. Ransdell, 31-39, Institute
for Studies in Pragmaticism, Lubbock, Texas, 1979.
63. Peirce, Vol. 2, paras. 276-292. See also Arthur W. Burks, "Icon, Index, Symbol,"
Philosophy and Phenomenological Research, 9, 673-689, 1949.
64. Arabic numerals were introduced to the Christian world through Fibonacci's Liber Abaci,
the definitive second edition of which appeared in 1228; for Peirce's analysis of the same, see Carolyn
Eisele, "The Liber Abaci," in Studies in the Scientific and Mathematical Philosophy of Charles S.
Peirce, edited by Richard M. Martin, 11-34, The Hague: Mouton, 1979; see also David M. Burton,
The History of Mathematics, Boston: Allyn and Bacon, 268-274, 1985, and Engelbart, "Augmenting
Human Intellect," 35. Howard DeLong has drawn my attention to the superior ease of use of
Leibniz's notation for the calculus over Newton's as an analogous example.
65. Peirce, vol. 5, para. 289.
66. Ibid.
67. Ibid., n. 1.
68. Ibid., vol. 2, para. 54.
69. Ibid., Vol. 7, para. 364.

70. Writings of Charles S. Peirce: A Chronological Edition, Vol. 2, edited by E. C. Moore,
457-459, esp. 459, Bloomington, IN: Indiana University Press, 1984.
71. Peirce, Vol. 7, para. 366: “I hold that purpose, or rather, final causation, of which purpose
is the conscious modification, is the essential subject of psychologists’ own studies; and that
consciousness is a special, and not a universal, accompaniment of mind.”
72. Ibid.
73. Freeman and Skolimowski, 477; Peirce, Vol. 3, para. 363. See Peirce, Vol. 4, paras. 530;
571. For a related discussion, see Jaakko Hintikka, "C. S. Peirce's 'First Real Discovery' and Its
Contemporary Relevance," in The Relevance of Charles Peirce, edited by Eugene Freeman, 107-
118, Monist Library of Philosophy, La Salle, Illinois, 1983.
74. Popper, "Freeman on Peirce's Anticipations of Popper," in Schilpp; reprinted in Freeman,
The Relevance of Charles Peirce, 78-79.
75. Peirce, Vol. 5, para. 534.
76. Martin Gardner, Logic Machines and Diagrams, Second Edition, Chicago: University
of Chicago Press, 54-58, 1982. Gardner exaggerates, however, when he postulates the need for a
great deal of practice and study to master Peirce's intricate technique, 58. Kenneth L. Ketner has
brilliantly shown that at least Peirce's graphical notation for propositional logic is not significantly
more intricate or difficult to learn than the algebraic notation normally used in introductory logic
courses; see Ketner, "The Best Example of Semiosis and its Use in Teaching Semiotics," American
Journal of Semiotics, Vol. 1, No. 1-2, 47-83, 1981.
77. C. S. Peirce, "Logical Machines," American Journal of Psychology, Vol. 1, No. 1, 165-
170, esp. 168, 1887. Gardner, 116, Note 2, observes that the first known diagram of electrical switching
circuits for Boolean operators is found in a letter from Peirce to his former student Marquand written
in 1886, and rediscovered only in the 1970s; Gardner references Arthur W. Burks, "Logic, Biology,
and Automata: Some Historical Reflections," International Journal of Man-Machine Studies, Vol.
7, 297-312, 1975.
78. Peirce, "Logical Machines," 168. See also Collected Papers, Vol. 2, para. 58, in which
Peirce attributes reasoning to blocks of wood dragged through water by yacht designers to study
hydrodynamics.
79. Peirce, “Logical Machines,” 169.
80. Quoted in Jeremy Bernstein, The Analytical Engine, Second Edition, New York: William
Morrow, 57, 1981. I am indebted to Howard DeLong for reminding me of this passage, although
Countess Lovelace's 'anticipation' of Peirce in this respect is also noted in Gardner, 151 (without
a direct quotation, however). I have found no references to Lovelace in Peirce's published works,
although he was of course familiar with Babbage's work; see Note 70 above.
81. Peirce, “Logical Machines,” 169.
82. Ibid., 170; Gardner, 151.
83. Peirce, “Logical Machines,” 169.
84. That our thinking is shaped in various ways by our other tools, ranging from ploughshares
to automobiles, is of course not denied; the Neo-Whorfian hypothesis simply emphasizes one
particular class of tools, namely those by which we manipulate symbols.
85. Levinson, Mind at Large, passim, but esp. chapters 4 and 5.
86. Excellent sources for these ideas are Martin Davis, Computability and Unsolvability, New
York: Dover, 1982; George S. Boolos and Richard C. Jeffrey, Computability and Logic, Second
Edition, Cambridge, UK: Cambridge University Press, 1980; and Howard DeLong, A Profile of
Mathematical Logic, Reading, MA: Addison-Wesley, 1970.
87. Alan M. Turing, "Computing Machinery and Intelligence," Mind, Vol. 59, 433-460, 1950.
An admirably clear statement of the case for mechanism based on the Church-Turing thesis has
been made by Judson Webb, in his "Gödel and Church: A Prologue to Mechanism," in Cohen
and Wartofsky, 309-353, which also cites relevant passages from Peirce and Babbage. For an
opposing view, see Margaret Boden, Artificial Intelligence and Natural Man, New York: Basic
Books, 426-433, 1977. Peirce, I should add, opposed mechanism on the ground, among others, that
the logic of relations (later proven undecidable) required creativity and inventiveness even for the
derivation of logically necessary conclusions; see his Collected Papers, Vol. 3, paras. 618, 641, as
well as the article by Hintikka referenced in Note 73 above.
88. To use a somewhat obvious, albeit inexact, analogy: we cannot fully grasp the concept
of an automobile without some understanding of the internal combustion engine, yet automotive
design owes little to this understanding and a great deal to materials science, aerodynamics, and
ergonomics.
89. Bush was not actually talking about the computer, or at least he did not know he was;
see Note 12 above, and the preceding text.

Acknowledgements

I am indebted to Howard DeLong of Trinity College for numerous helpful comments
on successive drafts of this paper.

About the Author

Peter Skagestad is author of The Road of Inquiry: Charles Peirce's Pragmatic Realism
(New York: Columbia University Press, 1981) and is on the editorial board of this Journal.
