


Current version: 04 August 87 17:44
Started: 25 Jul 87


The 1987 William James Lectures

Allen Newell
4 August 1987

Departments of Computer Science and Psychology

Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213


Table of Contents
1. The human is a symbol system
2. System Levels
3. The Timescale of Human Action
4. The Neural Band
5. Neural Circuit Level
6. The Real-Time Constraint on Cognition
7. The Cognitive Band
8. Level of Simple Operations
9. Level of General Operations
10. The Intendedly Rational Band
11. Higher Bands: Social, Historical and Evolutionary
12. Summary

WJ Ch. 3. HCA: Preliminary draft of 04 August 87 17:44. Limited distribution. Do not quote.



List of Figures
Figure 2-1: Expansion of space with levels.
Figure 2-2: Expansion of time with levels.
Figure 3-1: Timescale of human action.
Figure 4-1: The neural level.
Figure 5-1: The neural circuit level.
Figure 7-1: The necessary phases of deliberation.
Figure 7-2: Properties for automatic and controlled behavior.
Figure 7-3: Example of automatic and controlled behavior (Shiffrin & Schneider, 1977).
Figure 8-1: The four levels of the cognitive band.
Figure 9-1: Compression of the level scale factor to squeeze N+1 levels into N.
Figure 9-2: Residence times for various tasks (SimoN72).
Figure 9-3:
Figure 9-4:
Figure 10-1: The intendedly rational band: knowledge-level systems.
Figure 11-1: Higher bands.



Unified Theories of Cognition

Chapter 3. Human Cognitive Architecture


In this lecture we turn to the human cognitive architecture. The last lecture provided the basic concepts necessary to understand intelligent systems. Representation, knowledge, symbols and search apply as much to machines as to humans. Humans, of course, are specific in many ways. Back in Figure UTC-CONSTRAINTS we laid out many of the constraints that shape the human mind. For example, they are constructed of neural technology, they arose through evolution, and they must be highly autonomous.

Our ultimate goal is a unified theory of human cognition. That theory, we have maintained, will be expressed as a theory of the architecture of human cognition, that is, of the fixed (or slowly varying) structure that forms the framework for the immediate processes of cognitive performance and learning. Thus, we need to set out that architecture. We will do so in phases. In this lecture we will attempt to derive some aspects of the human cognitive architecture, attending only to the way the human is situated in the world. This will lay the groundwork for proposing in the fourth lecture a specific architecture in detail. (Though, even there, various aspects will remain open.)

Alternatively, the architecture could be laid out as a total system, a close approximation to the axiomatic ideal. An advantage of the phased description, in addition to whatever didactic virtues it might have, is to separate what can be claimed about the architecture on general grounds from what must be justified by detailed experimental data. Thus, this lecture will be devoted to a quite general argument. It will start from the neural technology, which is quite clearly the technology of the human cognitive architecture. This architecture must support mind-like behavior, and from the prior lecture we already have a characterization of what that means. We will add to these what we will call the real-time constraint on human cognition. From these three constraints, we will derive a number of key aspects of the cognitive architecture.
However, whatever we are able to derive from them will no longer be open options in specifying the architecture. That these constraints are indeed rather general, and that the strategy is one of divide and conquer, should be evident from the discussion in the last lecture. There (Figure FCS-ARCHVAR) we saw that large degrees of freedom are available to construct architectures of symbolic systems. We also noted that the architecture may be an essentially hidden variable, in that there is no way to determine what representations or what control structures are used. Since any general (i.e., universal) architecture can mimic any other, the situation is hopeless. Clearly, then, whatever aspects of the architecture can be pinned down from the general situation within which the human architecture is constructed and operates can only be of great help in making the internal details identifiable. One might have some doubts about whether anything can be said at all, but as we will see immediately, that is far from so.
Here is another way to think about the enterprise of this lecture. Evolution is the designer of the human cognitive architecture. It will pick and choose systems that will aid in the survival of the species. Our problem as scientists is to guess what design evolution has settled for to date. We know a little about evolution as a designer. It never starts over, always working with what is available. In Jacob's (1977) now famous phrase, evolution is a tinkerer. For instance, once a species is committed to a K-strategy (heavy investment in few progeny), evolution will not shift to the opposite r-strategy (light investment in many progeny); it is too hard to get from here to there. Evolution picks its solutions within the design constraints posed by the situation in which the organism finds itself. If we can understand these constraints, then we can see more clearly the limited field within which evolution must operate.
This lecture will be fairly speculative. Any attempt to get at general characteristics must run this risk. We will


attempt to assess that risk at the end of the lecture, after the results are before us. Even if the speculations are wrong, I think they will be wrong in interesting ways.
Here is a preview of the results. Different cognitive worlds have different time scales. The different kinds of cognitive worlds that we see, the neurological, cognitive, rational, and social, are governed by the time scales on which they occur. They are what they are to enable mind-like behavior to emerge, that is, to get to computational, symbolic systems. Within the cognitive band there must be a distinction between automatic and controlled processes, which again is a distinction in level. In addition, the architecture has to be recognition-based and there has to be a continual shift towards increased recognition. This corresponds to continual movement along the isobar in Figure FCS-PREPDELIBTRADEOFF, towards increased preparation.

1. The human is a symbol system

The treatment in the last lecture was abstract in a very particular way. It discussed symbol-level systems and knowledge-level systems as the general structure that was necessary to obtain general intelligent behavior. I did not specifically make the case that humans were knowledge systems and symbol systems, though it was clearly understood that that was what we were after. Rather, we settled for understanding the nature of these various systems and what gave rise to them, namely the need to deal with large amounts of variability in response functions.

At this point we wish to be explicit that humans are symbol systems that are at least modest approximations of knowledge systems. They might be other kinds of systems as well, but at least they are symbol systems. The grounds for this argument, as made clear by the entire previous lecture, is the variety of response functions that the human uses. If the variety of response functions is immense enough, a system will be driven to compose its response functions by means of a computational system that constructs representations by means of composed transformations and uses symbols to obtain distal access. That whole apparatus exists because of the demands of variety. The groundwork is laid. We just wish to be explicit about its application to humans.
One weak argument is that humans can undoubtedly emulate a universal machine. They might do it rather slowly, because they may have to spend a large amount of time memorizing the new states. But, if we wait long enough, they can perform the operations of a universal machine. They are of course limited in their lifetimes (measured in terms of the total number of operations they can perform) and ultimately in the reliability of their memory. But neither of these is of the essence, just as they are not for computers, which also have limited lifetimes and reliabilities. Thus, technically, humans are the kind of beast that can be a universal machine. And with that comes, again technically, all the other properties. However, the argument is weak, because the imagined type of verification (specifying to a human a specific universal machine, say a Turing machine, and then observing the person's execution of it, interpretively so to speak) is artificial. It is not an observation on the kind of life that humans lead. It could be the exploitation of a capability that is actually irrelevant to the regular style with which the human interacts with the environment.
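To make concrete what such an emulation involves, here is a minimal sketch of a rule-table interpreter in Python. The specific machine (a unary incrementer) and all names are illustrative choices, not anything from the lecture; the point is only that a finite table of state-symbol rules plus an extendable tape is all that the imagined verification would require a person to follow.

```python
# Minimal Turing-machine interpreter. A person could follow this rule
# table by hand; the machine below (a unary incrementer) is an
# illustrative assumption, not an example from the lecture.

def run_tm(rules, tape, state="start", pos=0, blank="_", max_steps=1000):
    """Apply (state, symbol) -> (new_state, write, move) rules until 'halt'."""
    cells = dict(enumerate(tape))          # sparse tape, extendable both ways
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += {"L": -1, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Unary incrementer: scan right over 1s, write one more 1, halt.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}
print(run_tm(rules, "111"))  # -> 1111
```

The interpreter is deliberately tiny: the "universality" lies entirely in the rule table supplied to it, which is exactly why executing such a table interpretively says so little about ordinary human behavior.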
The more substantial argument is to reflect on the variety of response functions that humans exhibit. Not can do, but do. The fact is that they seem to create new response functions all the time. To adopt the well-worn device of the Martian biologist, what would impress him most if he looked at the human species biologically is the efflorescence of adaptation. Humans appear to go around simply creating opportunities of all kinds to build different response functions. Look at the variety of jobs in the world. Each one has humans doing different kinds of response functions. Humans invent games. They have all different kinds of sports. They no sooner invent one than they invent new ones. They not only invent card games, they collect them in a book, 150 strong, and publish them (Hoyl50). That implies that people buy them, so they can develop yet new response functions by the score. They also dance, write books, have conversations. A conversation is nothing but an opportunity to interact with the



environment in a way that was different than prior interactions, that is, to build new response functions. Think of the Library of Congress as evidence of the variety of response functions that humans have exhibited by writing books, will exhibit by reading them, and want to exhibit by building buildings to make them available. To academics the mention of books suggests intellectual functions, as if all this were perhaps a phenomenon of the high end of the socio-economic scale. What then of rapping? A creation of the black ghetto, rapping is an invention to produce an opportunity for new responses to a highly dynamic environment. It serves other functions as well, but it builds on the human proclivity to invent new forms of responses. People simply create these opportunities. Indeed, our Martian biologist would not be wrong to conclude that the biggest biological puzzle about earthlings is why they have developed this efflorescence of adaptation.1
Ethology, looking at other organisms from digger wasps to herring gulls, has properly become intrigued with the adaptations they exhibit. Each adaptation is a unique biological phenomenon; each is to be understood by exploring the behavioral and physiological mechanisms that support it. They are to be curated one by one. Not so with humans; they will invent new adaptations faster than they can be recorded. There is no enumerating their adaptations. The act of recording is itself one more adaptation, or rather a whole generator of them, as the problems of such a scientific enterprise unfold and are responded to.
What I am saying is not new, nor is it supposed to be. There are many ways of talking about the life of homo sapiens, so as to reveal what is the life of the mind. Always essentially the same gross facts are to be described. The issue is what it takes to convince ourselves that humans deal with sufficient variety that they must be symbol systems, that no system of less power and universality could suffice. I am attempting a description that takes as the givens observations on the behavior of humans, to wit, on the variety of that behavior. I wish to avoid judgments on the content of that behavior, for example, on its rationality or the degree of its adaptation. I especially wish to avoid involvement with any internal or structural aspects of the human. My objective is to ground the assertion that the human is a symbol system on external aspects, so that it can serve as a design constraint for considering the nature of the internal structure.
To indicate this behavioral character of humans, I use the phrase unlimited qualitative adaptation. Clearly, humans are not infinitely adaptive. That is easy to show. Just pit two humans against each other in a competition; one will win, the other will lose. The human that loses clearly was not sufficiently adaptive. Thus it is the variety of adaptations that is asserted, their range, qualitatively speaking. Here there seems to be no limit whatsoever. What might have seemed a limit, in terms of specific sensors and time-bound contact, humans manage to transcend by instruments and histories. They even bury time capsules so that some far-future human will have one additional opportunity to behave adaptively with respect to a past he or she might otherwise have missed.


This argument has a soft spot, which should be noted. It is an asymptotic argument. That is, as the variety of functions that can be exhibited by a system increases without limit, we know the set of functions becomes the set of computable functions. Furthermore, we know such a set can be generated only by a universal computational system (exactly so, since these two notions simply go together). If we consider systems that produce an ever greater variety of functions, at some point they must have the structure of a universal system, i.e., of a symbol system. A human is capable of producing an immense variety of functions and does so in its everyday life. But is this enough variety so that the structure must be that of a symbol system? Computational theory does not yet provide any useful answers to this question, in part because it hasn't sought them, although in general such questions are hard to answer in useful ways. It would be nice to have some theory like that, but I don't know of any. It is

1 Of course, if he would puzzle about it, he might a fortiori know the answer, because the Martian we imagine would exhibit that same efflorescence. But no metaphor can ...


instructive to observe that computers are in the same situation: to compute any computable function, a computer has to have the structure of a symbol system. We construct computers so that they have this structure. But need we? Is the variety of actual functions that we want to compute such that we could get by with some other structure, perhaps far removed from a symbol system? It seems hardly likely, but there is not the mathematical theory to provide more definitive answers.

In any event, we will now take it as established that the architecture of human cognition is a symbol system.

2. System Levels
Let us turn to understanding the technology out of which the human architecture is constructed. The first point is that, of necessity, intelligent systems are built up of multiple levels of systems. A system level is a collection of components, which are linked together in some arrangement and which interact, thus producing behavior at that system level. Multiple levels means that the components at one level are realized by systems at the next level down. We have of course been here before, as witnessed by Figure FCS-COMPSYSHIERARCHY, which showed the computer systems levels. Empirically, everything we understand about engineering such systems to get intelligence is to build up multiple levels. This is one of the great empirical invariances: although many different ways have been found to construct information processing systems, they still all consist of a hierarchy of levels, and indeed essentially the same hierarchy.
I wish to maintain that the human architecture is built up of a hierarchy, indeed that it cannot be otherwise structured, so that the discovery of the architecture can proceed within this assumption. That engineered computer systems seem to have a hierarchical structure, as just reviewed, can certainly be taken as one pillar of support. A second pillar comes from Herb Simon's analysis of hierarchy (Simon, 1962). The argument there was that stability dictates that systems have to be hierarchical. To build complicated systems without first building stable subassemblies will always fail; the entire structure will disintegrate before it all gets put together. If stable subassemblies are created, layer upon layer, then each one has a reasonable probability of being constructed out of a few parts. Thus, there exists a general argument that stability dictates the existence of levels. The stability argument, of course, can be taken to underlie the entire hierarchical structure of matter, from nucleons, to atoms, to molecules, and on up. So the levels hypothesis may have nothing to do with intelligent systems, but simply with the way all systems are put together. All we need for the argument is that intelligent systems will be hierarchical.
Levels are clearly abstractions, being alternative ways of describing the same system, each alternative ignoring some of what is specified at the level beneath it. In that sense it is all in the head of the observer. But there is more to it than that. Levels can be stronger or weaker, depending on how well the behavior of the system, as described at a level, can be predicted or explained by the structure of the system described at the same level. In standard treatments of systems analysis (SystemAnalysis), systems are called state determined when their future behavior is determined by their current state. That is what holds for a strong level. A level is weak if considerations from lower levels enter into determining the future course. In engineered systems (Figure FCS-COMPSYSHIERARCHY again), great care is taken to make strong levels, to seal off each level from the one below. When dealing with logic circuits there is no need to understand the continuous circuitry underlying them, except when things go wrong. When dealing with programs there is no need to understand the register-transfer circuits that realize the operations and the interpreter; again, except when things go wrong. And so on. These are all very strong system levels, as evidenced by how small the failure rates are in commercially successful systems. Many natural system levels are also very strong, such as the atomic and molecular levels. The stronger a level is, the more it forms a distinct world, in which nothing must be known about lower levels in order to live within it. However, not all natural-system levels need be strong. In particular, though it may be taken that intelligent systems are organized in levels, it cannot be taken that these are strong levels. There could be lots of ways in which phenomena from lower levels percolate upwards.


As one moves up the hierarchy of system levels, size increases and speed decreases. That is an obvious, but important, property that follows directly from the nature of levels. Figure 2-1 shows the situation. At level N a collection of K components, say of characteristic size s, are put together to form a component at level N+1. The size of this higher component is at least Ks, but it could be somewhat bigger because the level-N components are simply spread out in space, with a certain amount of interstitial tissue of some sort. The collection of K components of level N+1 are put together to form a component of level N+2. This higher component is at least K(Ks) in size, hence K^2 s in terms of the bottom level. In general, components of level N+m are K^m times the size of components of level N. Of course the number of components need not be the same at each level, so that K is some (geometric) mean value. In summary, not only does the size increase with level, it does so geometrically, so that the right unit is the logarithm.

Figure 2-1: Expansion of space with levels.
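The geometric growth of size with level can be sketched in a few lines. K = 10 is the minimal characteristic factor discussed later in this chapter, and the base size is an arbitrary unit; both are illustrative assumptions, not measured values.

```python
import math

# Size of a level-(N+m) component is roughly K**m times a level-N
# component, so the natural scale is logarithmic. K = 10 and base_size
# are illustrative assumptions only.
K = 10
base_size = 1.0

sizes = [base_size * K**m for m in range(5)]
for m, size in enumerate(sizes):
    print(f"level +{m}: size ~{size:g} (log10 = {round(math.log10(size))})")
```

Each step up multiplies size by K, which is why the chapter measures levels in logarithmic units rather than raw sizes.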


Expansion occurs with time as well, as Figure HCA-TIMEEXPANSION shows. Suppose that a system at level N takes time t to generate a response, given a change in its inputs. If K of them are connected together as components to form a system at level N+1, then it will take this system longer to produce its outputs than the level-N components. How much longer is somewhat more complicated than in the spatial case. If the K components were arranged in series it would take time Kt, that is, K times as long. If they were arranged in a big loop in which circulation continued until some state were achieved, then it could take L(Kt), for as many iterations (L) as necessary; this could be indeterminately longer than t. If they were arranged in a binary discrimination net they might take only (log2 K)t. The time could be reduced to just t, but only if the K components didn't interact at all, and each one simply delivered its output; but this of course is the limiting case where the higher system does not exist at all. Thus, what exact multiplier will exist depends on the processing organization at the level. But there will be some T such that the characteristic time at level N+1 is Tt. Then, just as in the spatial case, the time for level N+m is T^m times the time for level N, where T is some (geometric) mean of the time multiplier at each level. There is no invariant relationship between T (the temporal multiplier) and K (the spatial multiplier), except that in general T increases with K (perhaps sublinearly), since in general additional components play some role. In summary, as one goes up the scale, everything slows down. Atoms behave more rapidly than molecules, which in turn behave more rapidly than macromolecules, which behave more rapidly than cells, and so on up to planetary systems and galaxies. The scale is geometric, so that the unit is the logarithm of time.
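The three organizations just mentioned (series, loop, discrimination net) give concrete time multipliers. In the sketch below the values of K, t, and the loop count L are illustrative assumptions, not figures from the lecture.

```python
import math

# Characteristic time of a level-(N+1) system built from K level-N
# components of cycle time t, under three processing organizations.
# K, t, and L are illustrative values only.
K = 10        # number of components
t = 1e-3      # component cycle time: 1 ms
L = 5         # loop iterations until the loop settles

serial_time = K * t              # components strung in series: Kt
loop_time   = L * (K * t)        # circulate the loop L times: L(Kt)
net_time    = math.log2(K) * t   # binary discrimination net: (log2 K)t

print(f"series: {serial_time*1e3:.2f} ms")
print(f"loop:   {loop_time*1e3:.2f} ms")
print(f"net:    {net_time*1e3:.2f} ms")
```

The spread between the net and the loop shows why the level-to-level multiplier T depends on the processing organization, not just on K.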




Figure 2-2: Expansion of time with levels.

One point is interesting: each level must be about a factor of ten bigger than the next lower level. Some number of things has to be put together and allowed to interact for a few cycles in order to get a new system level with new properties. It can't be done with two or three components and it can't be done with just a couple of system times. Novel behavior can't build up that rapidly. On the other hand, there are lots of cases where less than 100 or 1000 components or component-times suffice. Thus, we can take a factor of 10 to be the minimum characteristic factor to get from one level to the next. This factor is highly approximate, of course. It might be only 3 in some extreme cases, it might be as many as 30. Let us use %10 as a special notation to indicate such very approximate factors of ten.2

3. The Timescale of Human Action

Let us now consider the timescale at which human action occurs, as shown in Figure HCA-TIMESCALE. At the left, time is measured in seconds, using a log scale. The next column to the right names the time units, milliseconds (ms) to seconds to minutes to hours. The units themselves are, of course, geometrically related to each other. The next column names the system with that characteristic operation time. Time is the useful measure of system level for us, rather than space, but of course the characteristic physical size increases correspondingly. Different levels are characterized by different theories, and these are shown in the right hand column.
This figure provides an overview of where we are going. Starting at the bottom, there is the neural band of three levels: organelles, which are a factor of ten down from neurons; neurons themselves; and neural circuits, a factor of ten up. As we'll see, neurons have a characteristic operation time of about 1 ms, and neural circuits have a characteristic time of about 10 ms. Then there is the cognitive band. Here the levels are unfamiliar; we have called them deliberate acts, cognitive operations and unit tasks. Each of them takes about ten times as long as the level beneath. It will be the main task of this lecture to establish these levels and something of their character. Above the cognitive band lies the rational band, which is of the order of minutes to hours. All of these levels are given the same label, namely tasks. We will see why that is appropriate. Finally, even higher up there lies the social band, which we will have only a little to say about.
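The bands just surveyed can be summarized in a small table. The specific times are the chapter's order-of-magnitude characterizations, and the band boundaries follow the description above; none of the numbers should be read as more than a %10 approximation.

```python
import math

# Timescale of human action: each level is roughly a factor of ten
# slower than its components. Times are order-of-magnitude
# characterizations only, following the chapter's description.
levels = [
    ("organelle",           1e-4, "neural band"),
    ("neuron",              1e-3, "neural band"),
    ("neural circuit",      1e-2, "neural band"),
    ("deliberate act",      1e-1, "cognitive band"),
    ("cognitive operation", 1e0,  "cognitive band"),
    ("unit task",           1e1,  "cognitive band"),
    ("task",                1e2,  "rational band"),
]
for name, seconds, band in levels:
    print(f"10^{round(math.log10(seconds)):+d} s  {name:<20} {band}")
```

Reading the table down a column, each row is a factor of ten above the one before, which is the minimal-level property the next paragraph takes up.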
One striking feature of Figure HCA-TIMESCALE is that each level is only the minimal factor of %10 above its components. This is evident directly from what we know empirically about the levels of the neural band. Each new level occurs, so to speak, as soon as it can in terms of components.3 This justification of the minimal property

2 %10 rather than ~10 because we want to preserve ~10 to indicate the usual degrees of approximation. In the lectures we used a special character that is not available here; %10 is a place holder for that notation.

3 Note: This needs work. One interesting aspect is that what makes for larger steps is aridity, as in system plains. On the contrary, if one is trying to get as much complication as possible, then one wants levels as soon as possible.



Fodor, J. A. The Modularity of Mind. Cambridge, MA: Bradford Books, MIT Press, 1983.
Kolers, P. A. Memorial consequences of automatized encoding. Journal of Experimental Psychology: Human Learning and Memory, 1975, 1, 689-701.
Newell, A. & Simon, H. A. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall, 1972.
Posner, M. I. & Snyder, C. R. R. Facilitation and inhibition in the processing of signals. In Rabbitt, P. M. A. & Dornic, S. (Eds.), Attention and Performance V. New York: Academic Press, 1975.
Schneider, W. & Shiffrin, R. M. Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 1977, 84, 1-66.
Shepherd, G. M. The Synaptic Organization of the Brain, 2nd Ed. New York: Oxford University Press, 1979.
Shiffrin, R. M. & Schneider, W. Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 1977, 84, 127-190.
Simon, H. A. The architecture of complexity. Proceedings of the American Philosophical Society, 1962, 106, 467-482.
