Do we need representations to understand and explain cognition?

Hielke Prins
6359973

In need of metaphors to get a firmer grip on the nature of cognition, we have compared the brain to a
computer running software that implements cognitive processes. As a consequence, cognition became a
problem-solving ability that involves producing behavioral output given certain perceptual input,
handling discrete symbols as representations of the problem state. In this paper I illustrate that one
should be cautious in making such a comparison, because neither cognition nor representations can be
that neatly isolated from their environmental and evolutionary context. I then argue in favor of internal,
albeit dynamic, representations as useful in understanding cognition but oblique when explaining it.

COGNITION AS PROBLEM SOLVING


There are some clear advantages to the computer metaphor approach to cognition, such as abstracting
away from the intractable properties of human brains as the only possible implementing substrate for
cognition and the ability to focus on the cognitive processes themselves. As Brooks [1] points out,
however, abstraction entails the danger of over-simplification. Traditional artificial intelligence (AI)
tends to separate solving cognitive problems from the unaddressed issues involved in interaction with the
environment. Furthermore, the problems solved were of an intrinsically symbolic nature due to the chosen
domains of theorem proving, chess and block worlds, to name a few examples.
From an evolutionary perspective it makes sense to define cognition as an ability to solve problems as
they are posed by the environment. The nature of these problems, however, changes during evolution
along with changes in the environment. It seems unlikely that the problems studied in classical AI were
of any importance, or even existent, during early stages of evolution. Likewise, the abilities of an agent
to solve cognitive problems have changed over time, as solutions to existing problems were reused in
solving new ones. Brooks' horizontal decomposition of cognition into activity-specific layers enabled
him to follow a similar iterative approach when developing obstacle-avoiding robots.
The individual layers are implemented by finite state machines and are thus still composed of several
parts, but no clear distinction between action, perception and intermediate cognition is necessary.
The layers operate asynchronously, and interaction between the layers is weakly defined in terms of
suppression and inhibition mechanisms. These mechanisms influence the in- and output of existing lower-
level finite state machines, affecting the way the environment determines the behavior of the robot,
without the need for detailed central representations. Brooks' robots solve navigation problems using
embodied mechanisms and the environment as its own representation.
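
To make the layered control scheme concrete, the following is a minimal sketch in the spirit of Brooks'
subsumption architecture, not his actual implementation: his layers were augmented finite state machines
running asynchronously, whereas here a higher avoidance layer simply suppresses the output of a lower
wander layer when a (hypothetical) sonar reading signals a nearby obstacle; the class names and the
threshold are invented for illustration.

```python
import random

class WanderLayer:
    """Lower-level layer: produces a default behavior without any world model."""
    def act(self) -> str:
        return random.choice(["forward", "turn_left", "turn_right"])

class AvoidLayer:
    """Higher-level layer: suppresses the lower layer's output near obstacles."""
    def act(self, sonar_distance: float, lower_command: str) -> str:
        if sonar_distance < 0.3:      # toy threshold in meters, chosen arbitrarily
            return "turn_away"        # suppression: the wander command is overridden
        return lower_command          # otherwise the lower layer's command passes through

# One control step: sensing flows directly into action; the obstacle itself,
# via the sonar reading, does the work a central world model would otherwise do.
wander, avoid = WanderLayer(), AvoidLayer()
command = avoid.act(sonar_distance=0.2, lower_command=wander.act())
print(command)  # -> "turn_away"
```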

THE ROLE OF REPRESENTATIONS IN PROBLEM SOLVING
Encouraged by his results, Brooks argues that internal representations are unnecessary to explain
cognition. Indeed, the ability of his robots to sustain themselves in a dynamic environment is a major
improvement over the unrealistic behavior of Shakey, which relied on intensive computation of detailed
internal representations of the outside world. In terms of realistic real-time behavioral responses, even
the simple tortoise-like robots of Walter [2] seem to outperform Shakey. However, none of these
robots was able to play chess or perform the kind of deductive reasoning Shakey used to manipulate
an external block world in order to reach a goal state.
As long as the mechanisms embodied in these robots are hardwired, there is no way they can store goal
states or the intermediate steps required to reach them. According to Brooks no such explicit
representations of goals are necessary, because competition between activity-specific layers
implicitly embodies the purpose of a robot. This seems to imply that in order to reprogram the robot to
follow different instructions (reach a different goal state) it has to be rewired. Furthermore, since the
world serves as its own representation, the set of solvable problems is limited to those that can be
decomposed into incremental steps, each of which is an implicitly embodied goal in its own right. In other
words, the robots seem to be incapable of sophisticated learning and detached reasoning.
Like the artificial cricket of Reeve et al. [3], his networks of finite state machines seem to be examples
of an embodied solution to an existing problem, rather than of embodied cognition that handles arbitrary
new problems as they are continuously posed by a realistic environment. Addressing these issues at the end
of his paper, Brooks proposes an additional, isolated subsystem capable of learning in a way comparable
to insects, although it is not fully implemented yet. The finite memory capacities Brooks requires,
as well as the breakdown of a cricket's behavior into components in order to design an artificial
counterpart, already seem to entail operations on some sort of representation, whether discrete or not.
In A Machine That Learns [4], Walter describes a mechanism capable of associating neutral stimuli
with evolutionarily useful ones (for instance, the ringing of a bell with the availability of food). Describing
such a mechanism in more detail, he explains that the impact of a neutral stimulus has to be extended in time
in order to register its significance for the associated event, and has to trigger a (gradual) memory
consolidation process that changes the internal state of the mechanism so as to influence future
behavior. Even behavioristic learning, in short, seems to require some sort of representation.
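
To make this concrete, here is a minimal sketch of such an associative mechanism, my own toy construction
in code rather than Walter's electrical circuit, with arbitrarily chosen decay and consolidation rates: a
decaying trace extends the bell's impact in time, and a weight that grows gradually over repeated pairings
plays the role of the changed internal state.

```python
class LearningBox:
    """Toy analogue of Walter's 'learning box': a decaying trace extends the
    neutral stimulus in time; the association weight consolidates gradually."""
    def __init__(self, trace_decay=0.8, learning_rate=0.2):
        self.trace = 0.0        # lingering impact of the neutral stimulus (bell)
        self.weight = 0.0       # strength of the bell-food association (internal state)
        self.trace_decay = trace_decay
        self.learning_rate = learning_rate

    def step(self, bell: bool, food: bool) -> bool:
        # Extend the bell's impact in time with a decaying trace.
        self.trace = 1.0 if bell else self.trace * self.trace_decay
        # Gradual consolidation: strengthen the association only when food
        # arrives while the bell's trace is still active.
        if food:
            self.weight += self.learning_rate * self.trace * (1.0 - self.weight)
        # Respond to the bell alone once the association is strong enough.
        return bell and self.weight > 0.5

box = LearningBox()
for _ in range(20):                       # repeated pairings: bell, then food shortly after
    box.step(bell=True, food=False)
    box.step(bell=False, food=True)
print(box.step(bell=True, food=False))    # eventually True: the bell alone triggers a response
```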
The mechanism is implemented by carefully selecting electrical circuit components that can perform
the operations required to meet these conditions. Together these components fill in a functional part of
the circuit that is labeled the “learning box”. Brooks' concerns about the required redesign of used pieces
of equipment, followed by precise descriptions of the requirements¹, illustrate how the problems solved by
his robots are likewise decomposed into smaller ones in order to facilitate the design process. Although
Brooks would probably refuse to accept such a weakened definition, an appropriate mapping of these
components and their states might well define a useful representation of a partially solved problem.

1 For a “high-speed low-power processing box to run specially developed vision algorithms at 10 frames per second”, 8.3
THE NATURE OF REPRESENTATIONS
The kind of representations necessary for cognition has been debated vigorously. In mental imagery
research this debate has centered on the question whether representations are of a depictive or a descriptive
kind. Depictive representations are characterized by features that allow them to be processed during
imagination in much the same way as during visual perception. Kosslyn [5] argues for their depictive
nature using conceptual arguments drawn from experiments showing a correlation between the amount of
rotation and spatial scanning required to manipulate imagined objects and the time needed to do so.
Empirical evidence, he believes, is provided by neuro-imaging studies showing activation of brain
areas involved in visual perception during mental imagery.
Both of these arguments, however, only imply that perception and imagination use similar mechanisms,
not necessarily detailed internal picture-like representations. O'Regan and Noë [6], for example, instead
claim that seeing is a way of acting. Like Brooks, they argue that the world serves as its own
representation and that seeing is one of the ways of exploring this world. In doing so, cognitive agents
acquire a skill described as mastery of the laws governing sensorimotor contingencies.
SKILL ACQUISITION AND PROBLEM SOLVING
Skills can be seen as a form of implicit knowledge, in this case an understanding of the way that
changes in the relationship between object and agent affect perception. For visual perception, for
instance, this means that changing the distance or angle between the retina and an object changes the
size and shape of the sensory input on the retina. Properties of the object and of the retina, such as a cup
having an ear or the eye having a blind spot, as well as eye movements or object-moving actions performed
by the agent, all have their influence on these sensorimotor contingencies.
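
As a toy illustration of one such lawful contingency (my own sketch, not an example from O'Regan and
Noë): the visual angle an object subtends shrinks in a fixed, predictable way as the agent steps back from
it, and it is this regularity, rather than any stored picture, that the seeing agent is taken to master.

```python
import math

def visual_angle(object_size: float, distance: float) -> float:
    """Visual angle (degrees) subtended by an object of a given size seen from
    a given distance; a stand-in for one law of sensorimotor contingency."""
    return math.degrees(2 * math.atan(object_size / (2 * distance)))

# Stepping back from a 10 cm object: the projected size shrinks lawfully.
for d in (0.5, 1.0, 2.0):  # distances in meters
    print(f"at {d:.1f} m -> {visual_angle(0.10, d):.1f} degrees")
```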
Unlike artificial crickets and designed robots, evolution has brought much more arbitrarily wired agents
into being, continuously recycling existing solutions while exploring the environmental problem space,
even within modality-specific layers. It thus seems likely that such recycling also took place within layers
dedicated to visual perception, and it is not image processing alone that plays a role here. Image
manipulation itself might, for instance, depend on motor-related rather than mental-image processing
algorithms. In sum, multiple circuits in different layers might deploy different sensorimotor
contingencies, mastered by the agent to different degrees.
EMBODYING SKILL ACQUISITION
Skilled human chess players are believed to solve problems posed by a chessboard in different ways,
depending on their level of experience [7]. The degree to which they apply abstract pattern matching in
the evaluation of relevant constellations on the board seems to vary with their chess-playing skills.
Taken as mastery of sensorimotor contingencies, these skills are the implicit knowledge of rules that
govern the interaction between the game and the player's actions. It is clear that the explicit rules of
chess play an important role in governing this interaction. The discrete nature of a chess problem
implied by these rules probably bears a more straightforward relation to the acquired skill than
handling the chess pieces and perceiving the external problem state. After all, the latter are not relevant
for survival in a chess world.
These rules describe relations such as the action range of a queen and the moves allowed for a
knight. They also define dimensions like the value of pieces or a discrete time axis, and abstract problem
states like checkmate. All of these features might have their impact on the acquisition of the skills that
exploit them.
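
As a toy illustration of my own (not drawn from the literature cited here) of how such an explicit rule
defines a discrete relation over problem states, the queen's action range on an otherwise empty board can
be enumerated exhaustively:

```python
def queen_moves(file: int, rank: int) -> set[tuple[int, int]]:
    """Squares a queen on (file, rank) may reach: any distance along a rank,
    file, or diagonal (board coordinates 0..7, other pieces ignored)."""
    moves = set()
    for df, dr in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]:
        f, r = file + df, rank + dr
        while 0 <= f < 8 and 0 <= r < 8:
            moves.add((f, r))
            f, r = f + df, r + dr
    return moves

print(len(queen_moves(3, 3)))   # a centrally placed queen reaches 27 squares
```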

EXTENDING THE METAPHOR FOR COGNITION
To exploit the more modern internet metaphor for cognition: not only the existence or precise number of
available representations plays a role in understanding a virtual world, but also the way these
representations are entered or generated, structured, encoded, related and labeled, as well as the way
they become available and are exploited when transferred, indexed, aggregated, searched or visualized.
In fact, there is no straightforward relationship between stored representations and available information,
nor between formalized and usage-based representations. Cognition might well consist in mining and
constructing distributed, incomplete, multi-modal and dynamic representations.



[1] R. A. Brooks, “Intelligence without representation,” Artificial Intelligence, vol. 47, no. 1, pp. 139–159, Jan. 1991.
[2] W. G. Walter, “An imitation of life,” Scientific American, vol. 182, no. 5, pp. 42–45, 1950.
[3] R. Reeve, A. van Schaik, C. Jin, T. Hamilton, B. Torben-Nielsen, and B. Webb, “Directional hearing in a silicon cricket,” Biosystems, vol. 87, no. 2, pp. 307–313, Feb. 2007.
[4] W. G. Walter, “A machine that learns,” Scientific American, vol. 185, no. 2, pp. 60–63, 1951.
[5] S. M. Kosslyn, “Mental images and the brain,” Cognitive Neuropsychology, vol. 22, no. 3, p. 333, 2005.
[6] J. K. O'Regan and A. Noë, “A sensorimotor account of vision and visual consciousness,” Behavioral and Brain Sciences, vol. 24, no. 5, pp. 939–973, 2001.
[7] K. A. Ericsson and A. C. Lehmann, “Expert and exceptional performance: Evidence of maximal adaptation to task constraints,” Annual Review of Psychology, vol. 47, no. 1, pp. 273–305, 1996.
