grow a brain. The central technique in this research is referred to as the Darwin
Machine and bears more than a passing resemblance to the Frankenstein theme. Like
Dr Frankenstein, the developers of the Darwin Machine are seeking to cobble
together a quasi-human machine.[8] Like most cutting-edge `techno-science'
research projects, including the exercise of Alife in general, the researchers
conveniently and almost unnoticeably omit any mention of just what this
technology might be used for.
Anthropologist Stefan Helmreich has noted that the Alife community consists
predominantly of 30-40 year old straight white males who are agnostic or atheist,
of judeo-christian backgrounds. Within this community, subjective and value-laden
assumptions of the researchers themselves are disguised as axioms. As an example
of these assumptions, Helmreich quotes Tom Ray as saying "I'm omniscient to the
physics of the world I create" and notes the similarity of this position to the
judeo-christian notion of the omniscient creator.
A greater cause for concern is the particular `flavor' of Darwinism which is enlisted.
The very simplistic, individualistic and mechanistic evolutionary narrative chosen
has a decidedly nineteenth-century ring to it, and implicitly supports social
darwinism.[9]
Central to the ideology of Alife is the conception of the feasible separation of the
`informational' from its material substrate. Alife avoids the aspects of cell dynamics
and evolution in which the informational and the material are "deeply
entangled",[10] thereby endorsing the simplistic DNA = algorithm generalisation,
the keystone upon which Watson and Crick's conceptualization of DNA was
originally built.[11] Thus Alifers maintain that there is no reason why `life'
cannot exist with silicon as a substrate, rather than carbon. It must again be
emphasised that this is a rhetorical device, a narrative construction which takes as
its basis the nature of the digital computer, in which hardware and software are
separate. This separation is the defining quality of the computer, a machine which
is the quintessential reified form of Cartesian Dualism.
This separation is then `naturalised' by the nomenclature Artificial Life, and the
nature of the computer is applied to organic life. This induces the (quite wrong)
assumption that modern computational techniques are structurally similar to the
deep structure of biological life. This phenomenon is similar to the trend
Francisco Varela (et al.) calls `cognitivism' in cognitive science.[12] Proposing a
division between matter and information in biological systems is yet another
instance of a narrative construction rooted in Enlightenment precepts. It serves to
reinforce other such contrived dualistic structures as mind and body, form and
content.[13]
Elsewhere I have discussed the similarity between the attitudes of St Augustine and
Descartes to the body, and those of cyberpunks, epitomized by Gibson's words "the
body is meat".[14] It is through examples such as these that we can see just how
clearly so-called `objective science' can be heavily value-laden, perpetuating
dualistic and colonialising ideologies.[15] High-tech enterprises, such as Artificial
Intelligence and Top-down robotics validate and reinforce these dichotomies with
the rhetorical power they derive from being scientific, high tech and futuristic.
Paradigm Busters
Scientific ideas have been a powerful influence in shaping western culture. The
power of influence of the hard sciences has encouraged social sciences and
humanistic disciplines to become more `scientific' (and therefore, by definition,
blocked. If the properties of matter and energy at any given level of organisation
cannot be explained by the properties of the underlying levels, it follows that
biology cannot be reduced to physics or anthropology to biology."[22] Or, one
might add, psychology to physiology, or knowledge to information.
The Top-Down Artificial Intelligence paradigm has come in for its fair share of
bashing in recent years, one of the earliest and loudest whistle-blowers being
Hubert Dreyfus,[23] who refers to the paradigm as GOFAI: `Good Old Fashioned
Artificial Intelligence'. The Top-Down paradigm exactly replicates the dualistic
structures outlined above: like the hardware/software pair, the Top-Down method
centralises `intelligence' in a central processor, into which data is fed from
(unintelligent) sensors, and which in turn instructs actuators, having meantime
prepared a master plan. The problem with this method is that it is computationally
intensive and causes processing bottlenecks with real-world problems that lack
formally bounded domains.[24]
The observation that a cockroach is better at crossing a road than the best Top-Down
robot (the cockroach can eat and reproduce into the bargain) led to a variety
of research projects which searched for alternatives to the Top-Down paradigm.
These include parallel processing and connectionist computer architectures, the
development of Brooksian Subsumption Architecture, a revival of the study of neural
nets and the exploration of emergent order. Parallel processing and connectionist
computer architectures propose alternatives to the production-line-for-data model
of standard serial processing. Subsumption, emergent order and other approaches
share a `Bottom-Up' strategy. The distinction between Top-Down and Bottom-Up is
not consistent, but can be characterised in terms of dualism. Top-Down embraces
the notion of a panoptical mind-proxy in control; Bottom-Up strategies (in different
ways) make a less clear distinction between `mind' and `body'. Subsumption
architecture does not centralize processing: often the behavior of a robot is
organised by several unconnected processors! Similarly, the exploration of
emergent order is based on the notion that a group or multitude of simple units can
generate complex phenomena `accidentally'. There is substantial political force in
this trend, as bottom-up theories implicitly oppose the authoritarian power
structures which dualistic structures reinforce.
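The layered, decentralised control that subsumption architecture substitutes for a master plan can be sketched in a few lines. This is a toy illustration, not Brooks's implementation: the robot, its two behaviors and the sensor names are all invented for the example. The point is that no central processor holds a world model; control is simply an ordered stack of reflexes in which a higher layer, when it has an opinion, subsumes the layers below.

```python
# A minimal sketch of subsumption-style layered control (illustrative only).
# There is no central planner: each layer is a reflex, and the highest layer
# with an opinion wins on each tick.

def wander(sensors):
    return "move-forward"          # lowest layer: always proposes an action

def avoid(sensors):
    if sensors.get("obstacle"):
        return "turn-left"         # higher layer: fires only on an obstacle
    return None                    # otherwise defers to the layers below

LAYERS = [avoid, wander]           # ordered with the highest priority first

def act(sensors):
    """One tick of arbitration: the first layer to answer subsumes the rest."""
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle": False}))    # -> move-forward (wander layer wins)
print(act({"obstacle": True}))     # -> turn-left (avoid layer subsumes it)
```

Note that "intelligence" never sits anywhere in particular here; road-crossing competence of the cockroach kind is meant to emerge from the interplay of such cheap layers.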
Another techno-scientific paradigm which has caused havoc when ported into the
humanities, and particularly into telematic arts, is Claude Shannon's
Communication Theory. This entirely technological theory assesses communication
in terms of a match between bits sent and bits received.[25] In human
communication, `interpretation' is everything.[26] Horst Hendriks-Jansen offers an
example from Piaget which suggests that Shannon's communication theory is not
particularly relevant to the study of human communication. The example
concerns the behavior of a suckling infant which triggers responses in the mother by
appearing to be intentional. This `bootstraps' the child into meaning.[27] The
significance of such `exchange' is that the message received was never sent!
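Shannon's measure of communicative success is mechanical enough to state in a few lines of code. The sketch below (function names and the error rate are invented for illustration) scores a transmission purely by how many bits survive a noisy channel intact, which is precisely what makes it blind to the infant-and-mother case: a message that was never sent scores nothing, however meaningful the exchange.

```python
# A sketch of Shannon-style channel fidelity: communication "succeeds" to the
# extent that received bits match sent bits. Names and values are illustrative.
import random

def transmit(bits, error_rate=0.1):
    """Simulate a noisy channel that flips each bit with some probability."""
    return [b ^ 1 if random.random() < error_rate else b for b in bits]

def fidelity(sent, received):
    """The fraction of bits that arrive intact -- all the theory can measure."""
    return sum(s == r for s, r in zip(sent, received)) / len(sent)

sent = [random.randint(0, 1) for _ in range(1000)]
received = transmit(sent, error_rate=0.1)
print(f"fidelity: {fidelity(sent, received):.2f}")
# Note what the measure cannot see: whether either party *meant* anything,
# or whether the "message" was ever deliberately sent at all.
```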
Artificial Life and Interactive Art:
Putting the Darwin Machine to work.
The Esthetics of Interactivity.
Artists are confronting unexplored territory: the esthetics of machine mediated
interactivity. Contrary to the industry-driven rhetoric of freedom and liberation,
freedom in machine interaction is an entirely constrained affair. As Sarah Roberts
succinctly puts it: "Interactive is not a bad word, but the illusion that goes along
with it is of a kind of democracy, ... that the artist is sharing the power of choice
with the viewer, when actually the artist has planned every option that can
happen...its a great deal more complex than if you [the user] hadn't had a sort of
choice, but its all planned".[28] Indeed, it is a great deal more complex. Designing
the interactive experience adds an entire dimension to the esthetic endeavour, one
without precedent in the visual and plastic arts. In the west, the visual arts have no
tradition of an esthetics of interactivity. Six hundred years of painting has resulted
in a rich esthetics of the still image, of color and line, shape and area, of
representational geometry and perspective. One hundred years of moving image has
given us an esthetics of time-based image, of camera angle and movement, wipe
and cut. But we do not have an esthetic language of real time interaction.
Interactive media artists do not create instances of representation; they create a
virtual machine which itself produces instances of representation based on real time
inputs.
The machine, of course, has no idea that it's interacting with anyone. It has no idea
of anything. So how are we to conceive of this machine-mediated interactivity, with
respect to the artist, and with respect to the user? Winograd and Flores argue that
computers simply facilitate human interaction.[29] So can we say that the
machine is a proxy for the artist? This seems just a little simplistic. The interactive
artwork is not precisely conversation; it is an encapsulated cultural act. Interaction
may occur with that cultural capsule even after the maker is deceased. The
approach of Winograd and Flores veils a subtle anthropomorphism: the machine
`represents' the artist. But does a user (say of a video game) imagine s/he is in
conversation with the artist through the narrative and the characters? In the early
days of AI and interactivity, the goal was the emulation of the human. Certainly the
famous historical examples, `Eliza' and the Turing Test before it, were
unselfconsciously anthropomorphic.[30] It seems no longer necessary to require
that the interactive work is anything more than a machinic system which simulates
responsiveness. A reflexive idea of emulation arises here: in order to function, the
work must successfully trigger associations in the user which make the experience of
the work meaningful. For a user to recognise a machine event as a response, s/he
must `re-cognise' it, by comparison with previous experience.[31] But this
experience is not necessarily of the organic world; we are already attuned to a
plethora of machinic responses, from traffic lights to ringing telephones to the DOS
prompt.
When we posit a synthetic agent doing work in cyberspace, such as locating
references on a certain subject at various sites, the environment of the agent is
alien to us. It and its environment are digital, with little equivalence of the
geography and physics which we inhabit. We could not emulate it because it is
entirely unfamiliar. Yet it must be represented in some way that allows for
information to be exchanged, for work to be done. The interface becomes the glass
of the aquarium in a dramatic way: we are looking at an alien species in its
environment. The interface is the zone of translation. We understand only our
image of it, which is to say, we extrapolate from our cultural experience examples
which carry some traits which seem to have an analogous relationship with what it
is we think we're seeing. [32]
As Erkki Huhtamo has noted: "One might argue that the proliferation of forms of
computer mediated interactivity in our everyday lives has already given rise to a
new subject position in relation to modes of audio-visual experience."[33]
Emergent Behavior in Art
What bearing do ideas of emergent order and techniques of Artificial Life have on
interactive art practice, as ideas or tools? The most profound, as I see it, is that they
offer an alternative to the current all-too-deterministic paradigm of interactivity as
pre-set responses to user navigation through an ossified database. This paradigm is
firmly within the Top-Down camp. Emergent interactive behavior would not be
derived from a set of pre-determined alternatives. Rather, behaviors might arise
through a contingent and unconnected chain of triggers. In the behavior of
termites (described by Grassé), highly complex building behavior arises when a
simple behavior produces an action or a product which then triggers a higher-level
behavior.[34] This is a new paradigm of interactivity, radically different from the
notion of a pre-linked database. Simply regarding this method as a possibility
points up the presence of deterministic top-down strategies in current interactive
art practice. This must lead us to consider just how much of interactive art theory
(such as it is) is predicated upon Shannonesque and Top-Down approaches which
are quite questionable in our context. An interactive work, like any work,
consciously or unconsciously embodies a value system. Particularly in the realm of
computer art, we are always subject to the insinuation of the very C19th value
system of engineering. Considering an emergent approach to interactivity is a way
of bringing those value laden basic assumptions into visibility.
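The termite mechanism Grassé described, in which a product left in the environment (not any message or plan) triggers the next level of behavior, can be sketched very simply. Everything here is invented for illustration: the grid, the cell names and the threshold of three pellets are stand-ins, not Grassé's data. The structural point survives the toy scale: the "higher" building behavior is cued by the accumulating product itself.

```python
# A minimal sketch of stigmergy: a low-level "deposit" behavior leaves a
# product in the environment, and the product itself triggers a higher-level
# "build" behavior once it crosses a threshold. Threshold and cells invented.

def tick(grid):
    """One pass: every cell gains a pellet; tall piles trigger building."""
    built = []
    for cell, height in list(grid.items()):
        grid[cell] = height + 1            # low-level behavior: deposit a pellet
        if grid[cell] >= 3:                # the accumulated product is the cue
            built.append(cell)             # higher-level behavior: raise a pillar
    return built

grid = {"a": 0, "b": 2}
print(tick(grid))   # -> ['b']  (cell "b" reaches the threshold first)
print(tick(grid))   # -> ['b']
```

An emergent interactive artwork built this way would chain user actions and machine products in the same contingent fashion, rather than looking responses up in a pre-linked database.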
Do complexity and Alife offer us tools for an esthetics of behavior? A
celebrated early application of self-organising behavior is the now well-established
field of procedural modeling. Craig Reynolds' Boids are perhaps the best-known
example: rather than draw a flock of birdlike images and animate
them, Reynolds equipped a virtual space with a terrestrial physics and equipped
individual `boids' with some basic flocking behavior, and voila! a virtual flock flew
through the space and around obstacles. More recently Jessica Hodgins has used
similar techniques to generate a `herd' of cyclists. Jeffrey Ventrella, Karl Sims and
others have combined the notion of procedural modeling with simulated evolution,
`breeding' new characters according to the strictures of their environment.
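The flocking behavior Reynolds gave his boids reduces to three local steering rules; a sketch of one update step follows. The weights, radii and two-dimensional physics here are illustrative stand-ins, not Reynolds' published values; the point is only that nothing in the code mentions a flock, yet flock-like motion emerges from the rules.

```python
# A sketch of Reynolds-style flocking. Each boid steers by three local rules:
# cohesion (toward the others' centre), separation (away from close
# neighbours) and alignment (toward the others' average velocity).
# Weights and the closeness threshold are invented for illustration.

def step(boids):
    """Advance every boid one tick. boids: dicts with 'pos' and 'vel' tuples."""
    new = []
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        # Cohesion: offset toward the centre of the other boids.
        cx = sum(o["pos"][0] for o in others) / n - b["pos"][0]
        cy = sum(o["pos"][1] for o in others) / n - b["pos"][1]
        # Separation: push away from neighbours closer than a small radius.
        def close(o):
            return (abs(b["pos"][0] - o["pos"][0]) +
                    abs(b["pos"][1] - o["pos"][1])) < 2
        sx = sum(b["pos"][0] - o["pos"][0] for o in others if close(o))
        sy = sum(b["pos"][1] - o["pos"][1] for o in others if close(o))
        # Alignment: nudge velocity toward the others' average velocity.
        ax = sum(o["vel"][0] for o in others) / n - b["vel"][0]
        ay = sum(o["vel"][1] for o in others) / n - b["vel"][1]
        vx = b["vel"][0] + 0.01 * cx + 0.05 * sx + 0.1 * ax
        vy = b["vel"][1] + 0.01 * cy + 0.05 * sy + 0.1 * ay
        new.append({"pos": (b["pos"][0] + vx, b["pos"][1] + vy),
                    "vel": (vx, vy)})
    return new

flock = [{"pos": (0.0, 0.0), "vel": (1.0, 0.0)},
         {"pos": (5.0, 5.0), "vel": (0.0, 1.0)},
         {"pos": (10.0, 0.0), "vel": (1.0, 1.0)}]
for _ in range(50):
    flock = step(flock)
```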
Of course, you can breed anything, a physics, a joke, if you establish the payoffs
and taxes, the right `rugged fitness landscape' and manipulate your mutation rate
to avoid becoming stranded on local fitness peaks.[35] You could have a self-evolving
user interface, in which the environment it adapted to was the habits and
interests of the individual user. Such an interface might hide little used functions in
sub-menus or mutate a new floating palette. Use of the palette would reward that
mutation, allowing its progeny a better rate of survival in the next generation. This
could be fun: you'd never know quite what your interface would look like, though I
guess the mutation would be fairly conservative, given the trained and habit bound
nature of user activity. Radically novel mutations would be nipped in the bud
because the user wouldn't understand them.
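The evolving-interface thought experiment maps directly onto a toy genetic algorithm. Everything below is a hypothetical stand-in: each "genome" is a list of feature flags for a palette layout, fitness is agreement with an invented table of user habits, and the deliberately low mutation rate models the conservatism argued for above, in which radical layouts are nipped in the bud.

```python
# A toy genetic algorithm for the evolving-interface idea. Genomes, habits,
# rates and population sizes are all invented for illustration.
import random

random.seed(1)
USER_HABITS = [1, 0, 1, 1, 0, 0, 1, 0]       # features the user actually uses

def fitness(genome):
    """Reward layouts that surface used features and hide unused ones."""
    return sum(g == h for g, h in zip(genome, USER_HABITS))

def mutate(genome, rate=0.05):
    """A conservative mutation rate: radical new layouts stay rare."""
    return [g ^ 1 if random.random() < rate else g for g in genome]

# Start from random layouts, then select and mutate over many generations.
population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]               # selection: fittest half survive
    population = survivors + [mutate(g) for g in survivors]

best = max(population, key=fitness)
```

On this toy landscape there is a single smooth peak, so the conservatism is harmless; on the "rugged fitness landscape" mentioned above, too low a mutation rate is exactly what strands a population on a local peak.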
If procedural modeling will replace figure animation, perhaps genetic algorithms will
breed not only creatures, but storylines as well! And indeed, this is possible now,
not simply in screen-based simulation, but in the more difficult case of mobile
robotic platforms where power and computation are limited, as Luc Steels has
recently demonstrated.[36] This presents a curious prospect: that art practice might
become a kind of virtual horticulture as evolutionary models are adopted. The role
of designer then becomes one step further removed from the surface of the work.
The artist is not making the work as it appears, nor is she making a rule base which
generates the work. She is building an ecology-like model with environmental
constraints within which variation can occur.
Behavior and mimesis
I've previously observed that mimesis has been the focus of western art since the
Greeks, and historically, has advanced by utilising the most advanced technologies
and techniques available. [37] The abstraction of modernism is an aberration in
that historical flow. Certainly the history of popular media followed the mimetic
path. From the automata of the eighteenth century, such as Vaucanson's famous
Duck, a mechanical automaton which could flap its wings, eat and excrete foul-smelling
waste matter, mimesis has been the guiding principle. This trajectory
continues through photography, C19th visual novelties such as Wheatstone's
stereograms and Daguerre's Diorama, to cinema, TV, computer animation and
theme park attractions.[38]
Now interactive media and artificial life offer a quite new type of mimesis, one
which combines the trajectory of technological mimesis with ideas influenced by the
`systems art' of the 70s. The representationalism here tends to be not so much
optical as systematic. It is the dynamics of biological systems which are modeled.
This change of order is akin to the move from harnessing the products of
biodiversity to harnessing the mechanism of biodiversity which I discussed earlier.
Numerous works employ `nature' not as a representation but in the structure of the
systems: biological growth algorithms, simulated ecosystems or communities,
genetic algorithms, neural networks. [39]
The parallelism inherent in Paul Cézanne's early modernist dictum that `Art is a
harmony parallel to nature' is very like the goals of Alife research. This makes Alife
researchers `artists' in Cézanne's terms. The discussion of mimesis is complexified
by the intrusion of this goal of a parallel order. If the Alife researcher seeks a
condition parallel to nature, then this is, in a sense, very like the goals of the
modernist artists following on from Cézanne, whose goals (in the words of Paul
Klee) were not to represent the world, but to `render visible'. One of the active
issues in an esthetics of interactivity, then, is a question of mimesis and
modernism: is the artist concerned with simulating or interactively representing an
existing being, or with inventing another possible being?[40]
Conclusion
The new ideas in Complexity and Alife seem to have great potential in art, both as
techniques and as food for philosophical thought. But several cautions must be
noted. I've tried to show that while the disciplines I've discussed do take radical
positions with respect to traditional ideas, in other ways they perpetuate a view of
the world which is deterministic and instrumentalizing, and are thus themselves
ripe for critical examination. Foremost among these, in the case of Artificial Life, is
the instrumentalization of a very particular notion of `evolution'.
New scientific ideas powerfully inform the value systems and world view of our
culture. New technologies are almost always clad in utopian rhetoric. Any
technology which is trumpeted as a `liberation' should be examined extra-carefully.
Historically, those technologies have transpired to be the most oppressive. Artists
must be careful not to unconsciously and unquestioningly endorse the value
systems and narratives embedded in scientific discourses, where they often lie
hidden, disguised as axioms.