
Vrije Universiteit Brussel

Faculteit van de Wetenschappen


Departement Informatica


Exploring the Amorphous


Computing Paradigm

Ellie D’Hondt
August 2000

In partial fulfillment of the requirements for the degree of


Master’s in Computer Science.

Promotor: Prof. Dr. Bernard Manderick


Contents

Acknowledgements 1

1 Introduction 2

2 Amorphous Computing: An Overview 5


2.1 The Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Metaphors from Biology . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Group Formation . . . . . . . . . . . . . . . . . . . . . 7
2.2.2 Establishing a Coordinate System . . . . . . . . . . . 13
2.2.3 Tropisms and Pheromones . . . . . . . . . . . . . . . . 16
2.3 Metaphors from Physics . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Waves . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.2 Diffusion: the Gray-Scott model . . . . . . . . . . . . 20
2.3.3 General Conservation Equations . . . . . . . . . . . . 22
2.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

3 Amorphous Computing Media 26


3.1 Hardware Implementations . . . . . . . . . . . . . . . . . . . 26
3.1.1 Gunk on the Wall . . . . . . . . . . . . . . . . . . . . 27
3.1.2 Constructing the Particles . . . . . . . . . . . . . . . . 29
3.2 Simulator Implementations . . . . . . . . . . . . . . . . . . . 31
3.2.1 hlsim: A High Level Simulator for Gunk . . . . . . . 32
3.2.2 A Simulator in Java . . . . . . . . . . . . . . . . . . . 38
3.3 Cellular Computing . . . . . . . . . . . . . . . . . . . . . . . 40
3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4 The Growing Point Language 46


4.1 Introductory Examples . . . . . . . . . . . . . . . . . . . . . . 46
4.1.1 Drawing a Line Segment . . . . . . . . . . . . . . . . . 47
4.1.2 Tropism Expressions . . . . . . . . . . . . . . . . . . . 49
4.1.3 A CMOS Inverter . . . . . . . . . . . . . . . . . . . . 50
4.2 Implementation: ecoli and the gpl Illustrator . . . . . . . . 55
4.2.1 Common Aspects . . . . . . . . . . . . . . . . . . . . . 56
4.2.2 ecoli: The Extensible Language of Local Interactions 58

4.2.3 The gpl Illustrator . . . . . . . . . . . . . . . . . . . . 61
4.3 Further Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.3.1 Impact of the Domain Layout . . . . . . . . . . . . . . 63
4.3.2 Extensions to gpl . . . . . . . . . . . . . . . . . . . . 65
4.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

5 Thoughts on Amorphous Geometry 67


5.1 Robot Motion Planning . . . . . . . . . . . . . . . . . . . . . 68
5.2 Drawing a Polygon . . . . . . . . . . . . . . . . . . . . . . . . 72
5.3 Finding the Diagonals of a Convex Quadrangle . . . . . . . . 75
5.4 Triangulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

6 Conclusion 88

Appendix: The Secretion Finite State Machine 91

List of Figures

2.1 The communication model. . . . . . . . . . . . . . . . . . . . 6


2.2 Result of the overlapping clubs algorithm. . . . . . . . . . . . 9
2.3 Result of the tight clubs algorithm. . . . . . . . . . . . . . . . 10
2.4 The tree region algorithm at work. . . . . . . . . . . . . . . . 12
2.5 Gradients originating from two corner anchors. . . . . . . . . 15
2.6 Error in position after triangulation. . . . . . . . . . . . . . . 15
2.7 A two-dimensional wave. . . . . . . . . . . . . . . . . . . . . . 19
2.8 Reflection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.9 Refraction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.10 Reaction-diffusion: regular versus amorphous layout. . . . . . 20
2.11 Simulation of the Gray-Scott model. . . . . . . . . . . . . . . 22
2.12 Domain partition. . . . . . . . . . . . . . . . . . . . . . . . . 23

3.1 Central Processing Unit of M68HC11. . . . . . . . . . . . . . 27


3.2 Two versions of the EPPP. . . . . . . . . . . . . . . . . . . . 30
3.3 A lawn of EPPPs. . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4 Simulation of a potential. . . . . . . . . . . . . . . . . . . . . 36
3.5 Simulation of the clubs algorithm. . . . . . . . . . . . . . . . 37
3.6 The Gray-Scott simulator. . . . . . . . . . . . . . . . . . . . . 39
3.7 Synchronous simulation of the Gray-Scott model. . . . . . . . 40
3.8 The two idealised cases for a biological inverter. . . . . . . . . 42
3.9 The biological inverter process schematically. . . . . . . . . . 42
3.10 The transfer curve I. . . . . . . . . . . . . . . . . . . . . . . . 43
3.11 Dynamical behaviour of a biological inverter gate. . . . . . . 44

4.1 Construction of a line segment. . . . . . . . . . . . . . . . . . 49


4.2 Primitive tropism expressions. . . . . . . . . . . . . . . . . . . 50
4.3 A CMOS inverter circuit. . . . . . . . . . . . . . . . . . . . . 51
4.4 gpl’s realisation of a CMOS inverter circuit. . . . . . . . . . 54
4.5 Tropism implementation. . . . . . . . . . . . . . . . . . . . . 56
4.6 The secretion finite state machine. . . . . . . . . . . . . . . . 60
4.7 Partial structure of the gpl illustrator. . . . . . . . . . . . . . 61

5.1 Simulation results for the robot motion planning problem. . . 70

5.2 Drawing a quadrangle. . . . . . . . . . . . . . . . . . . . . . . 72
5.3 Several polygons constructed with gpl. . . . . . . . . . . . . 73
5.4 The structure of the diagonal growing point. . . . . . . . . . 76
5.5 The composition of the diagonal growing point. . . . . . . . 77
5.6 A quadrangle’s diagonal. . . . . . . . . . . . . . . . . . . . . . 79
5.7 A camera’s view in an art gallery. . . . . . . . . . . . . . . . . 80
5.8 A polygon with a possible triangulation and 3-colouring. . . . 80
5.9 Triangulation tropisms. . . . . . . . . . . . . . . . . . . . . . 81
5.10 The extract-pheromone procedure. . . . . . . . . . . . . . . 83
5.11 Triangulation of a pentangle and a hexagon. . . . . . . . . . . 86
5.12 Star shapes inscribed in a pentangle and a hexagon. . . . . . 86

1 The secretion finite state machine: conditions. . . . . . . . . . 92


2 The secretion finite state machine: actions. . . . . . . . . . . 93
3 The secretion finite state machine: processes. . . . . . . . . . 94

Acknowledgements

I thank my promotor Bernard Manderick for granting me the opportunity


to carry out the research I intended. Specifically, I am grateful for being
provided with the means along with a stimulating environment to investigate
concretely the field of amorphous computing. I appreciate Tom Lenaerts for
updating my computer with all necessary software and for proofreading this
work. Likewise, I thank the rest of the ASyG-team for sitting the "holidays"
through with me in 10F720. Without all that work activity surrounding me,
I am not sure if I would have managed to finish this thesis.
I also want to thank Daniel Coore for his help and feedback on his growing
point language, and for sending me the necessary files to run simulations
with it. As the only one that really knows gpl through and through, his
extensive answers to my emails proved very valuable, and I regret not having
been able to experiment with gpl in time to include it in this work.
I thank my parents and my sister Maja for their support and for their
valuable comments on my writing. Although I am certain that they helped
me with pleasure, I think they would prefer to read on other subjects if
they had the choice. More specifically, I thank my father for his suggestions
concerning computational geometry and his help on the secretion finite state
machine, my mother for meticulously checking the number of parentheses
in code extracts, and Maja for her help with the conclusion. Likewise, I
appreciate Patrick for supporting me and lending me an ear whenever I
needed to voice my thoughts.

Chapter 1

Introduction

Modern-day computer science generally produces systems which lack ro-


bustness against adaptations to new situations. They are unreasonably de-
pendent on the strict correctness of the implementation, and they cannot
be easily modified to accommodate design adjustments, extra specifications
or evolving requirements. These are fundamental problems, and to find a
solution for them calls for an entirely different way of thinking. Amorphous
computing is a new field in computer science, and while it does not claim
to be the ultimate solution to the above problems, it is a new computing
paradigm which allows one to reason about computation in a fundamentally
different way.

Amorphous computing emerged in anticipation of the fast evolving fields


of microfabrication and cellular engineering. Within the context of these evo-
lutions, mass production of small computational units with limited power
seems feasible, provided that there need be no guarantees that each unit
works faultlessly. However, while producing a system composed of such
units is within our reach, as yet there exist no programming paradigms ap-
plicable to it. Amorphous computing is exactly the domain trying to fill the
gap between the construction and the programming techniques required of
such a system, consequently referred to as an amorphous computer. While
cellular computing already works with a multitude of particles cooperating
to achieve some computational goal, fundamentally new aspects of the amor-
phous computing model are evolvability, asynchrony and locality. The latter
is reflected in the communication model, which is such that particles may
only communicate with neighbours within a certain communication radius.
Hence, amorphous computing particles have to achieve global goals through
local communication and cooperation, such that they cannot rely on any
global knowledge about the system they are a part of.


One of the main goals of amorphous computing is to identify methods for


obtaining predefined behaviour through local interaction only. Since sciences
such as biology and physics often operate within this context, they were
adopted as a source for metaphors applicable within the field of amorphous
computing. For example, biological organisms tend to organise themselves
from cells into tissues, tissues into organs, and organs into systems, thus
creating a hierarchical structure. Taking amorphous computing particles as
the metaphorical equivalent of cells, one can define algorithms to create a
hierarchical structure of clubs. Furthermore, biology often applies chemical
gradients to create a sense of direction within an organism; analogously, an
amorphous computing system can establish a coordinate system through its
equivalent of chemical gradients. Physics, whenever particles come into play,
also provides a rich source of metaphors. As a result, a physical wave, or
any conserved system in general, can be adopted as behavioural paradigm
for amorphous computing particles.

At a certain point, algorithms must be converted into programs, and a


framework must be present in which these programs can be executed. Such
an amorphous computing medium’s only constraint is that it behaves like an
actual amorphous computer would, whatever the implementation behind it.
As such, an amorphous computing medium can either be simulated or im-
plemented in hardware, and both approaches have already been elaborated
to some degree. Hardware media come in two flavours, both still in a pre-
liminary stage: one relies on microprocessors for amorphous computing
particles, while the other uses biological cells. Because hardware implemen-
tations are as yet incapable of serving as full-blown amorphous computing
media, several simulators were constructed. As such, amorphous algorithms
can be simulated and tested, while at the same time investigations towards
the construction of a "real" amorphous computer are carried out.

Eventually, the idea of amorphous computing is to develop engineering


principles and languages to control, organise and exploit the behaviour of
a set of programmable computing particles. The first serious attempt to
achieve this is the growing point language, a language adopting the con-
struction of complex patterns as model for global behaviour. The specifica-
tion of these patterns relies on biological ingredients such as tropisms and
pheromones, which help direct the growth of structures towards or away
from others. With the help of these and other abstractions, complex pat-
terns such as digital circuits can be constructed. The growing point language
is implemented through an interpreter, but preliminary work towards the
development of a compiler was also carried out. Extending the language as
well as completing the compiler hopefully leads towards the construction of
the first amorphous computing language ever.

As a first step towards the extension of the preliminary amorphous com-


puting paradigm existing at this stage, we propose to incorporate the im-
portant and useful discipline of computational geometry. Since the physical
layout of an amorphous computer serves as a basis for a one-to-one corre-
spondence with geometrical configurations, it seems natural to look at what
we will refer to as amorphous geometry. Moreover, there is a whole set
of easily imaginable useful applications, such as smart surfaces interacting
with their environment in any way. Exploring the possibilities of amorphous
geometry, we show that amorphous computing can deal with this class of
problems as well, while at the same time we expand the field considerably
and provide a basis for useful applications.

The structure of this dissertation is as follows. In the chapter Amorphous


Computing: An Overview we give an overview of the amorphous comput-
ing model, as well as some applications based on metaphors from biology
and physics. As such, we are the first to provide a proper synthesis of the
field as well as a clear classification of existing amorphous algorithms. Also
novel is that the discussion up to this stage is deliberately independent of
the associated amorphous computing medium. These media are classified
properly in the chapter Amorphous Computing Media, where the presently
available hardware implementations and simulators are covered, as well as
concrete implementations of some of the algorithms presented in the prior
chapter. Next follows the chapter The Growing Point Language, which gives
an overview of the language of the same name. Through some example specifi-
cations, it is shown how the growing point language constructs the associated
patterns. In this chapter we also discuss the current limitations of gpl, as well
as possible extensions. The chapter Thoughts on Amorphous Geometry in-
troduces computational geometry within the field of amorphous computing.
Proposed solutions to well-known problems such as robot motion planning
and polygon triangulation underline that amorphous computing is powerful
enough for dealing with geometrical problems as well. The final chapter
contains a conclusion.
Chapter 2

Amorphous Computing: An
Overview

Extrapolating recent developments in cellular engineering and microfabrica-


tion, one can easily imagine that in the near future it will be feasible to
assemble systems that incorporate myriads of information-processing units
at almost no cost, provided that the units need not all work correctly, and
that there is no need to manufacture precise geometrical arrangements of
units or interconnections among them. For example, one could construct
"particles" containing microelectronic components, such as logic circuits,
sensors, actuators and communication devices, and immerse them in bulk
materials to obtain smart paint, bridges, buildings, and the like. More strik-
ingly, advances in cellular engineering could lead to the use of cells instead
of electronics as the basics of unit construction. Computation within this
context, as well as the construction and programming of computers, take
on a fundamentally different form. Moreover, from the viewpoint of cellular
automata, one can wonder if global synchronisation and a crystalline lat-
tice are fundamental aspects, or if useful, more general models can be built
without depending on these properties [Rau99]. The above realisations lead
to the emergence of the field known as amorphous computing.
The structure of this chapter is as follows. In section 2.1, we discuss the ba-
sic attributes of amorphous computing. The bulk of this chapter describes
aspects of amorphous computing, relying on metaphors from biology (sec-
tion 2.2) and physics (section 2.3). We finish with a summary in section
2.4.

2.1 The Basics


The starting point of amorphous computing is synthesised in the following
question:


How do we obtain coherent behaviour from the cooperation of


large numbers of unreliable parts that are interconnected in un-
known, irregular, and time-varying ways?

In the context of biological organisms, cooperation of animals such as bees


and ants, or even in human behaviour, this question has been recognised
for generations as fundamental. Amorphous computing brings it down to
the field of computation, with as a goal the identification of engineering
principles and languages that can be used to observe, control, organise, and
exploit the behaviour of programmable multitudes.

The basic model of amorphous computing consists of numerous identical,


mass produced computing elements randomly placed on a surface or in a
volume, where each processor has modest computing power and a modest
amount of memory. These particles possess no a priori knowledge of their
location, orientation, or of the other particles’ identities; each can talk to a
few neighbours, though not reliably. All processors operate asynchronously
but they have similar clock speeds. Components are sensitive to the envi-
ronment, may effect actions, and are unreliable: fault tolerance is achieved
via redundancy rather than relying on hardware perfection. Although in
principle individual processors are mobile, most of the work done in amor-
phous computing assumes fixed positions. The only fundamental property
of the communication model is that communications should be local and
occur by wireless broadcast; in practice, however, one usually adopts a model
in which each processor can talk to those processors within a certain fixed radius,
referred to as the communication radius. Moreover, all processors use a sin-
gle channel, which means that collisions can occur when two processors with
overlapping broadcast areas send messages simultaneously. The receiver can
detect a collision by receiving a garbled message, while the sender cannot,
since transmitting and listening to messages cannot take place simultane-
ously. The communication model is illustrated in figure 2.1.

The important task in amorphous computing is the identification of appro-


priate organising principles and programming methodologies for controlling
amorphous systems. Since both biology and physics often work with sys-
tems of numerous, locally interacting units (cells and particles respectively),
many principles can be borrowed from these sciences. Also, since an amor-
phous system is actually a massively parallel computing system, another
source of ideas can be found in the domain of cellular automata. The rest
of this chapter presents an overview of the work already done along these
lines of thought.

[Figure: processors A, B and C, where r = communication radius; if A and B broadcast a message at the same time, C will detect a collision.]

Figure 2.1: The communication model.

2.2 Metaphors from Biology


Since the general model for an amorphous computer is reminiscent of a
biological organism, preliminary ideas focus on hints from biology. The
growth of organisms demonstrates that well-defined shapes and functional
structures can develop through the interaction of cells under the control of
a genetic program. Hence, biology provides a rich source of metaphors for
amorphous computing. The subjects presented in this section all rely in one
way or another on such a biological metaphor; these range from an algorithm
for constructing a hierarchical structure (2.2.1), over one for establishing a
global coordinate system (2.2.2), to pattern formation (2.2.3).

2.2.1 Group Formation


In biology, it is common practice to organise cells into tissues, tissues into
organs, and organs into systems, thus creating a hierarchical structure. Like-
wise, one can organise amorphous particles into hierarchies of groups that
can act as single entities and can collaborate to form higher-level groups,
such that members within a group can specialise to perform particular func-
tions. In [CNW97] a method to achieve exactly this is discussed extensively.
It consists basically of two parts: first, the lowest level in the hierarchy is
established by the clubs algorithm; next, the higher levels are taken care of
by the tree algorithm. All algorithms presented further on follow the same
basic steps, shown in the box below.
DO
Leader Election: Processors compete for leadership.
Recruiting: New leaders recruit their neighbours. Recruited processors
stop competing for leadership and become members of the leaders’ group.
UNTIL
Termination: All processors are either leaders or group members.

The Clubs Algorithm


A club is a group of processors at the first level of the group hierarchy.
Each club has a leader, who is chosen in the following way. Assuming that
each processor has a random number generator, a processor chooses a ran-
dom number within a certain fixed range [0, R[ and decrements this number
silently. Once a certain processor reaches zero without being interrupted,
it declares itself a leader and recruits all its neighbours by sending them a
recruit-message. As such, the dimension of a club is determined by the com-
munication radius. Once a processor is declared a follower it stops counting
down and listens for other recruit-messages. Hence, in this algorithm, a
single processor can belong to several clubs, which is why it is also referred
to as the overlapping clubs algorithm. After R time each processor is either
itself a leader or recruited by a leader, and the process terminates. The code
executed by a single processor is as follows (from [CN98b]):

integer R (upper bound for random numbers)

boolean leader, follower = false

procedure CLUBS ()
  ti := R
  ri := random[0,R)
  while (not follower and not leader)
    ti := ti - 1
    if (ri > 0)
      ri := ri - 1
      if (not empty(msg_queue))
        if (first(msg_queue) = "recruit")
          follower := true
    else
      leader := true
      broadcast("recruit")
  while (ti >= 0)
    listen for other leaders
    ti := ti - 1

The resulting clubs contain exactly one leader, which guarantees a min-
imum separation of a broadcast radius between leaders and reduces the
overlap between neighbouring clubs. Leadership conflicts may occur when
two processors declare themselves leader at the same time, violating the
minimum separation between leaders. If these conflicts are tolerated, the
time required for the algorithm is simply R. Leadership conflicts can be
eliminated by running the algorithm several times: after a time R, those
regions where a conflict has been detected (by neighbourhood conference)
run another round of overlapping clubs. If a processor detects a collision it
assumes that at least two leaders tried to recruit it at the same time, and
it becomes a follower. During leadership conflict resolution, it determines
who the leaders are. Hence, one can see that the algorithm is robust against
collisions. A result of the overlapping clubs algorithm can be seen in figure
2.2, where each circle represents a club.¹

Figure 2.2: Result of the overlapping clubs algorithm.

Leaders act as coordinators of their group, since they can communicate


with all their members. The disadvantage of the overlapping clubs algorithm
is that an entire group can become disconnected if its leader fails. One
way out of this is to work with adaptive clubs [CN98a], in which case a
club elects a new leader when the former leader fails. This issue is also
addressed in a variation on the clubs algorithm, the tight clubs algorithm, by
constructing clubs that are more tightly coupled. The basic differences with
the overlapping clubs algorithm are that each processor belongs to exactly
one group, and that each member of a club can communicate with all the
other members. In this way, each processor can become the leader of its
club. Details of this algorithm can be found in [CNW97]; for a graphical
view on the result see figure 2.3.
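The countdown scheme of the overlapping clubs algorithm can be sketched as a small synchronous simulation (the real model is asynchronous; the function name, point layout, and parameter values below are illustrative only). Each processor counts down a random number in [0, R); on reaching zero without being recruited, it declares itself a leader and recruits every neighbour within the communication radius.

```python
import math
import random

def clubs(positions, r, R=100, seed=0):
    """Synchronous sketch of overlapping clubs leader election."""
    rng = random.Random(seed)
    n = len(positions)
    nbrs = [[j for j in range(n) if j != i and
             math.dist(positions[i], positions[j]) <= r] for i in range(n)]
    counters = [rng.randrange(R) for _ in range(n)]
    role = [None] * n                       # None = still competing
    for _ in range(R):
        # processors whose counter reaches zero this step declare leadership
        for i in [i for i in range(n) if role[i] is None and counters[i] == 0]:
            role[i] = "leader"
            for j in nbrs[i]:               # broadcast "recruit"
                if role[j] is None:
                    role[j] = "follower"
        for i in range(n):
            if role[i] is None:
                counters[i] -= 1
    return role

rng = random.Random(1)
pts = [(rng.random(), rng.random()) for _ in range(50)]
roles = clubs(pts, r=0.2)
assert all(x in ("leader", "follower") for x in roles)  # terminates after R steps
```

Because two neighbours with equal counters both become leaders in the same step, the simulation also reproduces the leadership conflicts discussed above.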

The Tree Regions Algorithm


Establishing the higher levels of the hierarchy is accomplished by the tree
regions algorithm. This depends on the inter-group communication capabil-
ities provided by the immediate lower level to construct processor groupings
¹ The figures in this section are from [CNW97].

Figure 2.3: Result of the tight clubs algorithm.

(at the first level, a minimum of two communication hops is required for
communication between leaders). The construction of level n groups starts
with a leader selection mechanism at level n − 1 in the same way as de-
scribed for the club algorithm. The main difference here is in the recruiting
mechanism: an extra parameter h determines the diameter bound of the
groups formed in the following way. Once a group is elected as leader, it
becomes a tree root. It then seeds a tree of fixed height h by recruiting its
neighbouring groups as children. The child group then tries to recruit its
own neighbours, unless it is itself at depth h from the root. Hence, when all
level n − 1 groups are either recruited or leader, one obtains level n groups
which are trees with a maximum depth h.

This algorithm can be run on top of either tight clubs or overlapping


clubs; the procedures used for seeding the tree and recruiting neighbours
are as follows (from [CNW97]):

integer R              ; range for choosing random numbers
integer T              ; expected time to create a region/tree
integer MAX_H          ; maximum height a tree can grow to
integer current_h = infinity
boolean competing = true

procedure MAIN()
  r = random(R) * T
  ELECTION_LOOP()

procedure ELECTION_LOOP()
  if (not empty(msg_queue))
    msg = pop(msg_queue)
    if (msg = "recruit")
      RECRUITED(msg)
  else if (competing)
    decrement r
    if (r = 0)
      SEED_TREE()
  else
    ELECTION_LOOP()

procedure RECRUITED(msg)
  if ((msg.h < current_h) && (msg.h < MAX_H))
    broadcast("recruit", msg.h + 1)   ; recruit neighbours for depth h + 1
    current_h = msg.h
    competing = false
    DONE()

procedure SEED_TREE()
  root := true
  broadcast("recruit", 1)             ; recruit neighbours for depth 1
  DONE()

procedure DONE()
  wait for time out

Figure 2.4 shows the algorithm running on top of overlapping clubs. In


this case, club leaders coordinate the club’s decision to seed or join a tree,
with messages between leaders routed through club members. The lines
connecting club leaders represent the spanning trees, in this case of bounded
height 2, formed by the algorithm.

Applications
An amorphous computing hierarchy, established as explained above or in
any other way, simplifies programming an amorphous computer and facili-
tates the analysis of amorphous computing algorithms. This is because such
a hierarchy is really an abstraction of an amorphous computer that simplifies
high level programming. In this way, algorithms already designed for other

Figure 2.4: The tree region algorithm at work.

architectures can easily be implemented, where the physical properties of


the hierarchy stress the connection between the geometrical arrangement of
the processors and programming abstractions. Also, robustness is increased,
since aggregating processors increases reliability above that of a single pro-
cessor.

As an example, let us look at the mergesort algorithm. An amorphous


computing hierarchy provides a simple and efficient framework for distribut-
ing problems, inherent in divide and conquer algorithms such as merge-
sort. Treating groups in the hierarchy as computational units (instead of
single processors), mergesort can be invoked on a certain group, which par-
titions the sorting task amongst its children and merges the results. In
[CNW97], the following pseudo-code is presented for mergesort expressed in
terms of a method on a group; in this implementation, the array of integers
to be sorted is stored in the group’s state.

// inherited state:
//   Members = list of members
group MergeSortGroup {
  // state
  IntegerArray my_ints

  // methods
  void sort () {
    if (expected_compute_time(my_ints)
        < time_to_distribute_problem_to_members)
      quicksort()
    else {
      for all members in this.Members, in parallel {
        member.set_my_ints(sublist)
        member.sort()
      }
      merge_member_lists()
      // set my_ints to the merge of the members' my_ints
    }
  }
}

A language abstraction built on top of the hierarchy would allow a user to


ignore intimate details of implementation of coordinated group behaviour,
by providing operations on groups of processors. Coherent group behaviour
is attained through the group leader, which coordinates and distributes the
subtasks to member groups, and through which the group maintains state.
The task a group has to accomplish is partitioned according to the num-
ber of subgroups and their resources. In this example, a group determines
whether additional recursion is needed by comparing the cost of distribut-
ing the task with that of executing it in-place. In general, any divide and
conquer algorithm, for example factorisation and primality testing, can be
implemented efficiently and in a straightforward manner on an amorphous
computer.
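The group abstraction above can be mirrored by a small sequential Python sketch, in which a hypothetical Group class either sorts in place or distributes the task among its child groups and merges the results. The class, the size threshold, and the chunking scheme are illustrative, not from [CNW97].

```python
from heapq import merge

class Group:
    """A node in the group hierarchy; leaves have no children."""
    def __init__(self, children=()):
        self.children = list(children)

    def sort(self, ints):
        # distribute only when the group has children and the task is
        # large enough to be worth the cost of distributing it
        if not self.children or len(ints) < 4:
            return sorted(ints)               # sort "in place"
        k = len(self.children)
        chunks = [ints[i::k] for i in range(k)]
        sorted_chunks = [child.sort(chunk)    # members sort in parallel
                         for child, chunk in zip(self.children, chunks)]
        return list(merge(*sorted_chunks))    # merge the members' results

leaves = [Group() for _ in range(3)]
root = Group([Group(leaves[:2]), Group(leaves[2:])])
print(root.sort([5, 3, 8, 1, 9, 2, 7]))       # → [1, 2, 3, 5, 7, 8, 9]
```

On an actual amorphous computer the recursive calls would of course be messages routed through group leaders rather than function calls, but the division of labour is the same.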

Another application follows directly from the clubs algorithm itself, which
can be used to solve the maximum independent set (MIS) problem. This
is so because a set of club leaders with no leadership conflicts is in itself
a MIS. The clubs algorithm can be further extended to solve the problem
known as ∆ + 1 vertex colouring, which is closely related to MIS. In this
problem, the goal is to find a graph colouring which uses at most ∆ + 1
colours, where ∆ is the maximum degree of the graph. This is achieved
by modifying the clubs algorithm so that there are no leaders or followers;
instead, when a processor’s countdown reaches zero, it simply chooses the
”smallest” colour it has not heard and broadcasts that colour. Both the
MIS and the ∆ + 1 vertex colouring algorithms implemented as above on an
amorphous computer are O(log N), where N is the total number of elements
[CN98a].

2.2.2 Establishing a Coordinate System


For many applications, for example intelligent environments such as bridges
monitoring traffic loads, airplane wings actively reducing drag, or smart
walls that cancel noise, it is important that the amorphous computing com-
ponents know their physical location relative to other elements in order to
interpret sensor information. Therefore, the question arises whether it is
possible to establish a global coordinate system within an amorphous com-
puting medium. More precisely, can each processor assign itself a logical
coordinate that maps approximately to its global physical location, starting
from local information only?

One approach is inspired by biological systems that use chemical gradients


to determine the positions of cells [Nag99] – a chemical is released from a cell,
such that the concentration of this chemical decreases as one moves further
away from the cell, hence giving an indication of distance. Relying on the
fact that the position of a point on a 2-dimensional plane can be determined
uniquely relative to three non-colinear points, the algorithm consists of the
following basic steps:

1. three non-colinear anchor processors are chosen, either by external


stimulus or by a leader election mechanism (see section 2.2.1)
2. each anchor produces a gradient, thus enabling other processors to
determine their distance from the three anchors
3. a triangulation formula is used to convert the determined distances to
cartesian coordinates

Step (2) is achieved as follows: an anchor processor sends out a message to


its neighbours with a count set to one. Each of these neighbours forwards
the message, incrementing the count by one. In this way, a wave of mes-
sages propagates outwards from the anchor. Each processor remembers the
minimum count value it receives, ignoring messages with higher ones. The
minimum count value hi maintained by processor i will eventually be the
shortest path length to the anchor expressed in communication hops. Since
a communication hop has a maximum physical distance associated with it –
the communication radius r – processors with the same count tend to form
concentric rings, of width approximately r, around the anchor points. The
circular shape and the uniformaty of width of the ring improve with increas-
ing average neighbourhood size, because then the shortest communication
path between processors is closer to the straight-line connection between
them. Figure 2.5 2 shows gradients originating from two corner anchors.
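The gradient step can be sketched as a breadth-first relay over the communication graph. The following Python fragment is an illustration only – the point layout, radius and names are our own, not those of [Nag99].

```python
import math
import random
from collections import deque

def hop_count_gradient(points, r, anchor):
    """BFS over the communication graph: each processor keeps the minimum
    count it has heard (ignoring higher ones) and forwards count + 1."""
    n = len(points)
    nbrs = [[j for j in range(n)
             if j != i and math.dist(points[i], points[j]) <= r]
            for i in range(n)]
    h = {anchor: 0}
    queue = deque([anchor])
    while queue:
        i = queue.popleft()
        for j in nbrs[i]:
            if j not in h:            # a higher count would be ignored anyway
                h[j] = h[i] + 1
                queue.append(j)
    return h

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(300)]
r = 0.12
h = hop_count_gradient(pts, r, anchor=0)
assert h[0] == 0
```

Since every hop covers at most a distance r, the hop count times r upper-bounds the true distance to the anchor – the property the smoothing and triangulation steps rely upon.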

To obtain a better resolution – at this stage, distances are integral multiples


of the communication radius – a smoothing step is performed by each proces-
sor, where they average their distance values with those of their neighbours,
using the following formula

    si = (Σ_{j∈nbrs(i)} hj + hi) / (|nbrs(i)| + 1) − 0.5

(The figures in this section are from [Nag99].)

Figure 2.5: Gradients originating from two corner anchors.
Finally, the resulting distances serve as input for a triangulation formula,
such that cartesian coordinates can be determined for each particle.
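The triangulation step can be illustrated as follows: subtracting pairs of circle equations (x − xi)² + (y − yi)² = di² cancels the quadratic terms and leaves a 2×2 linear system in (x, y). The sketch below is a Python rendering of this standard formula, not code from [Nag99].

```python
import math

def triangulate(anchors, dists):
    """Convert distances to three non-colinear anchors into cartesian
    coordinates by subtracting pairs of circle equations."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21        # nonzero for non-colinear anchors
    x = (b1 * a22 - b2 * a12) / det    # Cramer's rule
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# A processor at (0.3, 0.4) with exact distances to three corner anchors:
anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_pos = (0.3, 0.4)
dists = [math.dist(true_pos, a) for a in anchors]
x, y = triangulate(anchors, dists)
assert abs(x - 0.3) < 1e-9 and abs(y - 0.4) < 1e-9
```

In practice the inputs are the smoothed hop-count distances, so the recovered coordinates carry the errors discussed below.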

Figure 2.6: Error in position after triangulation.

Figure 2.6 shows the result of the algorithm with three anchors chosen as
shown, and each processor represented by a line whose length is the difference
between the actual distance from the anchor and the estimated distance.
There are three main sources of errors in the position estimates:

• errors in the gradient step due to the discrete distribution of processors

• errors in the smoothing step due to variations in the processor densities

• region specific errors in the triangulation step

Theoretical analysis in [Nag99] shows the following two key results: there
is a critical minimum average neighbourhood size of 15 for good accuracy,
and there is a fundamental limit on the resolution of any coordinate system
determined strictly from local communication (that is, on the order of the
communication radius). It is also shown that random distributions of pro-
cessors produce significantly better accuracy than regular processor grids,
such as those used by cellular automata. While larger anchor triangles
generally produce better results, for very large areas the use of a
single anchor triangle may not be feasible because of the time taken and the
accumulation of errors. In this case one can construct a coordinate system
by creating a manifold of multiple overlapping patches of coordinates.

Another approach to the coordinate problem is given in [Coo98], where


only one anchor point is chosen. Once the gradient has been established, a
coordinate system is found by solving Laplace’s equations within a circle.
While the algorithm seems to produce similar average errors, it is much more
sensitive to variations in density, because it is based on iteratively averaging
neighbour information, and termination is difficult to detect locally.

2.2.3 Tropisms and Pheromones


The use of chemical gradients to determine the position of a cell is believed
to play a role in biological pattern formation. Hence, drawing further
on this metaphor, one can attempt to organise amorphous processes into
prespecified patterns. As an example, diffusion waves can be used to pro-
duce regions of controlled size, just by having particles relay a message only
if the hop count is below a certain bound. Regions constructed in this way
can influence each other’s growth. For instance, say that A and B are two
particles producing a diffusion wave, such that the B-wave can be relayed
only by particles that have not been visited by the A-wave. One might in-
terpret this phenomenon by saying, metaphorically, that A generates a wave
that inhibits the growth started from B. As a second example, suppose that
the B-wave can only be relayed by the particle located closest to A in each
neighbourhood. Our biological metaphor explains this by stating that the
region growing from B has a tropism attracting it towards A. “Closest to A”
can be interpreted biologically through the presence of pheromones, whose
concentration decreases monotonically further from the source. Therefore,


in a certain neighbourhood the particle with the highest concentration of a
pheromone is closest to the source secreting that pheromone.
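As an illustration of bounded growth and inhibition, the following Python sketch grows a region of at most five hops around a particle A, then grows a second region whose wave may not be relayed through any particle the A-wave has visited. All names and parameters are illustrative.

```python
import math
import random
from collections import deque

def diffusion_wave(points, r, source, bound, blocked=frozenset()):
    """Relay a message outwards from `source`: a particle records its hop
    count and forwards only while the count is below `bound`; particles in
    `blocked` refuse to relay (or accept) the message at all."""
    visited = {source: 0}
    queue = deque([source])
    while queue:
        i = queue.popleft()
        if visited[i] >= bound:
            continue                       # region boundary: stop relaying
        for j in range(len(points)):
            if j not in visited and j not in blocked \
                    and math.dist(points[i], points[j]) <= r:
                visited[j] = visited[i] + 1
                queue.append(j)
    return set(visited)

random.seed(2)
pts = [(random.random(), random.random()) for _ in range(400)]
region_a = diffusion_wave(pts, r=0.1, source=0, bound=5)
# Inhibition: B's wave may only be relayed by particles the A-wave never
# reached, so the two regions cannot overlap.  B is a particle far from A.
b = max(range(len(pts)), key=lambda i: math.dist(pts[i], pts[0]))
region_b = diffusion_wave(pts, r=0.1, source=b, bound=5, blocked=region_a)
assert region_b.isdisjoint(region_a)
```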

Diffusion wave mechanisms fit well within an amorphous computing con-


text, since phenomena such as growth, tropism and inhibition are insensitive
to the precise arrangement of individual particles. Because of this, faulty
communication or malfunctioning particles do not have a serious impact as
long as there are sufficiently many particles. Based on these concepts a pro-
gramming language was designed, the growing point language (gpl), which
enables programmers to specify complex patterns. However, since gpl is the
first serious attempt at developing a programming paradigm for amorphous
computing, we feel that it deserves a chapter of its own. Hence, a discussion
of gpl and its applications is deferred to chapter 4.

2.3 Metaphors from Physics


Physics, as well as biology, can be a source of new metaphors for amor-
phous computing. Most biological metaphors rely in one way or another on
a diffusion process. Diffusion is a natural candidate for simulation in amor-
phous media because such dissipative processes lose information, erasing mi-
croscale errors that occur in the computation. The fundamental processes
in physics, however, are conservative, and therefore harder to simulate on
an amorphous computer since errors accumulate instead of die out. One
approach is to simulate conservative processes in terms of discrete compu-
tational tokens; global conservation is achieved by local exchanges of the
tokens. One important problem with this approach is that one must ensure
that tokens are not lost or duplicated due to faulty processors of an imperfect
communication network. One idea is to represent the tokens in a redundant
distributed form. While in the spirit of amorphous computing this prob-
lem must surely be taken care of, the details of how to accomplish this still
remain an important challenge. We will discuss the token-approach for con-
servative processes characterised by wave equations in section 2.3.1, followed
by a discussion on reaction-diffusion processes, which are also conservative,
in section 2.3.2. Lastly, we give an overview on general conservative systems
in section 2.3.3.

2.3.1 Waves
The two-dimensional scalar wave equation is given by

    ∂²u/∂t² = c² ∇²u

where the amplitude u(x, y) is the displacement of the surface in the direction
perpendicular to it. Imagine this surface to be an amorphous computing
medium densely sprinkled with processors. The conservative wave process
described by this equation can then be viewed as an ongoing exchange of
momentum ”tokens” between neighbouring processors. Each processor up-
date corresponds to a transaction of the given conserved quantity, such that
the gain of one processor is the loss of another. After the transaction, each
processor goes on to interact with other neighbours. The wave process is
thus described discretely, but as opposed to most discrete models of differ-
ential equations, such as cellular automata, an amorphous computer has no
regular lattice, nor a synchronous update. One can interpret this by saying
that an amorphous computer is a very local discrete model of a continuous
process [Rau99].

Taking momentum exchange as the sole possible interaction, the amount


of momentum transacted between processors i and j is determined by the
force f between them

    Δpi = f(ui − uj) Δt
    Δpj = −Δpi
where ui and uj are the amplitudes of processors i and j respectively. The
second equation is exactly the momentum conservation; it implies that the
force f is anti-symmetric. In [Rau99] Hooke’s law is adopted, which states
that

fij = k(ui − uj )
Hence, after the momentum transaction, each site i can update its amplitude
ui independently, according to the following rule

    Δui = (pi / m) Δt
A possible extension of this model is one where energy as well as momentum
is conserved – we refer to [Rau99] for further details.
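A minimal rendering of the transaction scheme, for a one-dimensional chain of processors, might look as follows. The sign convention (each processor gains k(uj − ui)Δt, making the interaction restoring) and the sweep-based update order are choices of this sketch, not prescriptions of [Rau99].

```python
import random

def wave_step(u, p, pairs, k=1.0, m=1.0, dt=0.05):
    """One sweep of pairwise momentum transactions followed by local
    amplitude updates.  Each transaction moves dp from j to i, so total
    momentum is conserved exactly: what one gains, the other loses."""
    for i, j in pairs:
        dp = k * (u[j] - u[i]) * dt
        p[i] += dp
        p[j] -= dp
    for i in range(len(u)):
        u[i] += (p[i] / m) * dt          # dui = (pi / m) dt
    return u, p

# A chain of 50 processors, a displacement bump in the middle, pinned ends.
n = 50
u = [0.0] * n
u[n // 2] = 1.0
p = [0.0] * n
pairs = [(i, i + 1) for i in range(n - 1)]
rng = random.Random(0)
for _ in range(400):
    rng.shuffle(pairs)                   # transactions in arbitrary order
    u, p = wave_step(u, p, pairs)
    u[0] = u[-1] = 0.0                   # fixed boundary conditions
assert abs(sum(p)) < 1e-9                # momentum is still conserved
assert any(abs(x) > 1e-6 for x in u[1:-1])   # the bump has spread
```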

By starting an amorphous computer in some initial configuration and
watching the evolution of the system, it is clear that this model supports all
phenomena of physical waves. Moreover, the solutions found agree very well
with the exact solutions of the continuous wave equation [Rau99]. Figure
2.7 shows the evolution of a two-dimensional wave, where there are fixed
boundary conditions.

(The simulation of waves itself will be discussed in section 3.2.1.)

Figure 2.7: A two-dimensional wave.

A wave ripples out and reflects off the boundaries to interfere with itself.
Different colours represent different amplitudes; between snapshots, each
processor has made roughly 15 transitions. Figures 2.8 and 2.9 show
the reflection and refraction, respectively, of a one-dimensional wave.

In figure 2.8 the right boundary is shaped like a parabola, focusing the
wave into a point before it reflects back.

Figure 2.8: Reflection.

In figure 2.9, there is a central region of more densely packed processors;


inside this region, the communication radius is decreased, so that the av-
erage neighbourhood size remains the same. Hence, the central region has
a different ”refraction index” than the rest of the medium and refracts the
wave.

Figure 2.9: Refraction.

(The figures in this section are from [Rau99].)

2.3.2 Diffusion: the Gray-Scott model


Reaction-diffusion systems are described by partial differential equations of
the form

    ∂u/∂t = k∇²u + F(u)
where u is a vector-valued field, which at each time t describes the changing
spatial concentrations of a set of chemical substances. Since it has often
been suggested that reaction-diffusion can explain the mechanisms of
morphogenesis, these chemical substances are also referred to as morphogens.
The resulting patterns for various reaction specifications F have been widely
simulated and, whether or not reaction-diffusion is the actual mechanism be-
hind morphogenesis, have been shown to produce regular patterns similar
to those found in nature.

Figure 2.10: Reaction-diffusion: regular versus amorphous layout.

Usually, simulations of reaction-diffusion are carried out by working with


a regular grid as a representation of discretised space, such that each grid
square acts as reaction locus. However, simulations on an amorphous com-
puter have shown that the resulting patterns are often independent of the
initial state of the system. In figure 2.10, for example, the same simulation
was performed on a square grid as well as an amorphous “grid”; although
the details differ, one can see that the general character of the patterns
developed is similar.
(Morphogenesis: the formation of the structure of an organism or part; also
the differentiation and growth of tissues and organs during development.
Figure 2.10 is from http://www.swiss.ai.mit.edu/projects/amorphous/GrayScott/.)

Let us now discuss a particular example of reaction-diffusion and its im-


plementation on an amorphous computer, taking into account the issues
mentioned at the beginning of this section. The Gray-Scott model involves
the morphogens U , V and P , interacting in the following way

U + 2V → 3V
V → P

The reaction kinetics of this system is described by two coupled non-linear


partial differential equations in u and v, the concentrations of U and V , as
follows

    ∂u/∂t = ru∇²u − uv² + f(1 − u)
    ∂v/∂t = rv∇²v + uv² − (f + k)v

where p, the concentration of P, can be determined from u and v. These
equations are of the form of a general reaction-diffusion equation, with
reaction terms F(u) = −uv² + f(1 − u) and F(v) = uv² − (f + k)v; ru and rv are the (constant)
diffusion rates. On an amorphous computer, each processor is regarded as
a reaction site, maintaining information on the concentration of different
chemical species at that locus [CN98c]. Since reaction-diffusion is a conser-
vative process, we will again work with token exchange, the tokens in this
case being the morphogens. Following the standard regular grid approach,
updating the values for u and v at a certain site i incorporates the whole
processor’s neighbourhood N (i). This is because the discrete approximation
of the term ∇²u is taken to be the mean neighbour value of u

    ∇²u(i) ≈ Σ_{in∈N(i)} (u(in) − u(i)) / |N(i)|

However, since in an amorphous computer the neighbourhood size is no


longer a constant, this approximation does not conserve the total amount
of U in the system. A better idea is to work with pairs of processors, as in
section 2.3.1, and view the ∇2 -operator as a continuous description of the
essentially discrete process of molecule exchange between neighbouring sites.
In this model, the amount transferred from a processor i to a neighbour j is
proportional to u(i) − u(j) and vice versa; hence, conservation of matter is
upheld. To update a value from time step n to time step n+1 for a processor
i, transacting with processor j, one uses the forward Euler formula

    ui^(n+1) = ui^n + Δt( ru(uj^n − ui^n) − ui^n(vi^n)² + f(1 − ui^n) )

A similar formula is used for v. Figure 2.11 shows a simulation of the
Gray-Scott model for 2500 processors with a communication radius of 0.05,
where the initial concentrations u and v are chosen randomly between 0 and
1, ∆t = 1, ru = 1.6, rv = 0.4, f = 0.04 and k = 1. The random initial
configuration at the left evolves into a regular pattern at the right after
20000 time steps.

Figure 2.11: Simulation of the Gray-Scott model.

Looking at this specific pattern, one can contemplate the possibility of or-
ganising amorphous computing systems by starting with the particles in a
random state and solving a discrete analog of a reaction-diffusion system,
an intriguing alternative to the clubs algorithm presented in section 2.2.1.
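The pairwise, matter-conserving update can be sketched as below. The parameter values are illustrative (chosen for numerical stability of this toy scheme) and do not reproduce the simulation of figure 2.11; the check at the end switches the reaction off to confirm that the exchange alone conserves the total amount of U.

```python
def gray_scott_step(u, v, pairs, ru=0.16, rv=0.08, f=0.04, k=0.06, dt=1.0):
    """One round of pairwise morphogen exchange plus local reaction.
    Diffusion moves an amount proportional to the concentration difference
    between the two transaction partners, so the exchange itself conserves
    matter; the reaction terms are evaluated purely locally."""
    du = [0.0] * len(u)
    dv = [0.0] * len(v)
    for i, j in pairs:                     # morphogen token exchange
        du[i] += ru * (u[j] - u[i]); du[j] -= ru * (u[j] - u[i])
        dv[i] += rv * (v[j] - v[i]); dv[j] -= rv * (v[j] - v[i])
    for i in range(len(u)):                # local Gray-Scott kinetics
        r = u[i] * v[i] ** 2
        u[i] += dt * (du[i] - r + f * (1.0 - u[i]))
        v[i] += dt * (dv[i] + r - (f + k) * v[i])
    return u, v

# With the reaction switched off (f = k = 0, v = 0), pure exchange must
# conserve the total amount of U exactly.
import random
random.seed(4)
n = 30
u = [random.random() for _ in range(n)]
v = [0.0] * n
pairs = [(i, (i + 1) % n) for i in range(n)]   # a ring of transaction partners
before = sum(u)
u, v = gray_scott_step(u, v, pairs, f=0.0, k=0.0)
assert abs(sum(u) - before) < 1e-9
```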

2.3.3 General Conservation Equations


A conservation law asserts that the rate of change of a certain substance u in
fixed domain G is equal to the flux f of that substance across the boundary
of G, ∂G. This is expressed as

    d/dt ∫_G u dv = − ∫_{∂G} f·n ds
with n the vector normal to ∂G. In differential form this becomes

    ∂u/∂t + div f = 0
where, for simplicity, sources of u within G are ignored. The flux function
f is usually made a function of u; for example, if we take f = au, we obtain
the one-way wave equation, while taking f = a·grad u results in the heat (or
diffusion) equation. (Figure 2.11 is from
http://www.swiss.ai.mit.edu/projects/amorphous/white-paper/amorph-new.html.)

Working within the framework of an amorphous computer, one has to


discretise the problem at hand. In general, this can be done by partitioning
a domain G, which contains a series of amorphous computing processors, as
depicted in figure 2.12. Each processor is connected with its neighbours; at
the midpoint of each connection vector, a line is drawn orthogonally to it.
If our domain is packed densely enough with processors – “densely enough”
meaning that when connecting all processors to their neighbours, the resulting
triangles are all acute-angled – this construction results in a series of
surfaces, such that each surface contains exactly one processor.

Figure 2.12: Domain partition. (Two neighbouring processors with values ui
and uj, connected by the vector hij.)

Each of these so constructed surfaces conserves the quantity u if the flux of u


across its boundary equals the time derivative of u within this surface; hence,
exchanging tokens of u between processors, while adapting the value for u
maintained by each processor, is a suitable representation of a conservation
equation. Using the notation of figure 2.12, we can now represent the flux
term of the conservation equation for processor i as follows. First, since
the vector hij is orthogonal to the boundary for each j, we can replace the
normal unit vector n by the factor hij/|hij|. Next, suppose fij is the flux from
processor i to processor j; then the total flux fi of u across the boundary of
the surface Gi associated with processor i is given by

    ∫_{∂Gi} fi·n ds = Σ_{j∈N(i)} fij |hij| cij

with

    cij = (1/|hij|) ∫_{∂Gij} ds

where ∂Gij denotes the portion of the boundary of Gi facing processor j.
Furthermore, if ui does not change too fast within Gi , we can make the
following approximation

    ∫_{Gi} u dv ≈ Ai ui
where Ai is the area of Gi . Putting all this together, we obtain the following
result

    Δui = −(|Δt|/Ai) Σ_{j∈N(i)} fij |hij| cij

Applying this to the heat equation, in which case fij ≈ (a/|hij|)(uj − ui) (as a
result of the mean value theorem), the new value of u for processor i is found
by applying the forward Euler equation

    ui^(n+1) = ui^n − (a|Δt|/Ai) Σ_{j∈N(i)} cij(uj^n − ui^n)

In the case of the heat equation, one can prove that the amorphous computing
forward Euler method provides a convergent approximation to the heat
conservation law problem [Kat99] (this remains true with all cij equal to 1).
Other conservation problems can be treated similarly.
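As an illustration, the following Python sketch applies a forward Euler scheme with all cij equal to 1 on a random amorphous layout. Note two simplifying assumptions of this sketch: we adopt the Fourier-law sign for the flux (so heat flows from hot to cold), and a uniform cell area for all processors.

```python
import math
import random

def heat_step(points, u, r, a=1.0, dt=0.01, area=1.0):
    """Forward-Euler heat update on an amorphous mesh with every cij = 1
    and a uniform cell area.  Because flux terms between a pair cancel,
    the total amount of u is conserved exactly."""
    n = len(points)
    new = list(u)
    for i in range(n):
        flux = sum(u[j] - u[i] for j in range(n)
                   if j != i and math.dist(points[i], points[j]) <= r)
        new[i] = u[i] + (a * dt / area) * flux
    return new

random.seed(5)
pts = [(random.random(), random.random()) for _ in range(100)]
pts[1] = (pts[0][0] + 0.05, pts[0][1])   # ensure the hot spot has a neighbour
u = [0.0] * 100
u[0] = 1.0                                # initial condition: one hot processor
before = sum(u)
for _ in range(20):
    u = heat_step(pts, u, r=0.2)
assert abs(sum(u) - before) < 1e-9        # total heat is conserved
assert max(u) < 1.0                       # the hot spot has diffused outwards
```

Because dt times the largest neighbourhood size stays below one here, each update is a convex combination of neighbouring values, which keeps the scheme stable.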

Programming an amorphous computer to solve a conservation problem
will require each processor to be able to do the following: retrieve
the u-values from its neighbours, broadcast its own value for u, and
evaluate the updating formula. When receiving a “retrieve” message, a
processor must send a “my-u-value” message; each processor must however
have the ability to distinguish between messages received from its neighbours
and those originated elsewhere. Once a processor receives all its neighbours’
u-value messages, it calculates the new u-value and stores it. Each processor
updates after regular time intervals ∆t; also, boundary and/or initial condi-
tions have to be assigned to each processor. While there are some subtleties
due to the asynchronism of an amorphous computer, this scheme is not at
all unattainable. Hence we have the important result that amorphous media
can be programmed to integrate any differential equation for a conservation
law, provided the processors are sufficiently dense.
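A hypothetical per-processor message handler for this scheme might look as follows; the message names mirror those above, while the state layout, the unit cell area and the flux sign are assumptions of this sketch.

```python
def handle_message(state, msg):
    """Minimal message handler for one processor.  `state` holds its id,
    its neighbour set, the current u-value and a buffer of neighbour
    values (all names illustrative).  Returns messages to broadcast."""
    kind, sender, payload = msg
    if sender not in state["neighbours"]:
        return []                          # ignore non-local traffic
    if kind == "retrieve":
        return [("my-u-value", state["id"], state["u"])]
    if kind == "my-u-value":
        state["buffer"][sender] = payload
        if set(state["buffer"]) == state["neighbours"]:
            # all neighbour values in: apply the forward Euler update
            flux = sum(state["buffer"].values()) \
                   - len(state["buffer"]) * state["u"]
            state["u"] += state["dt"] * state["a"] * flux
            state["buffer"].clear()        # ready for the next interval
    return []

state = {"id": 7, "neighbours": {3, 5}, "u": 0.0,
         "buffer": {}, "dt": 0.1, "a": 1.0}
assert handle_message(state, ("retrieve", 3, None)) == [("my-u-value", 7, 0.0)]
handle_message(state, ("my-u-value", 3, 1.0))
handle_message(state, ("my-u-value", 5, 1.0))
assert abs(state["u"] - 0.2) < 1e-12   # 0 + 0.1 * 1.0 * ((1 + 1) - 2*0)
```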

2.4 Summary
Amorphous computing emerged in anticipation of the fast developing disci-
plines of microfabrication and cellular biology. These enable the construction
of numerous identical information-processing units cooperating as a
computational framework, which is precisely the basis of an amorphous
computer. In addition, an amorphous computer has no regular geometrical
arrangement of processors and no synchronous updating of them, as opposed
to a cellular automaton. Also, communication between processors occurs
only locally. Taking into account that processors are mass produced, any
amorphous computer should be fault tolerant, since individual units as well
as the communication between them might be nonfunctional.
Being a relatively new domain, there exist as yet no general engineering techniques
or languages to exploit and control the behaviour of such programmable
multitudes – in other words, there is no amorphous computing paradigm.
The central challenge of amorphous computing is achieving exactly this, so that one
can program the particles to produce predefined global behaviour, resulting
from local interactions only. Due to obvious parallels with biology and
physics, most work done until now has concentrated on metaphors from
these sciences. This resulted in a series of applications, such as constructing
a coordinate system, establishing a hierarchy, and the simulation of waves.
Chapter 3

Amorphous Computing
Media

Contemplating applications and simulations on an amorphous computer,
one inevitably wonders how to construct and program one. At a certain
point, algorithms must be converted into programs, and a framework must
be present in which this program can be executed. We will refer to such a
framework as an amorphous computing medium. An amorphous computing
medium consists of independent computational particles, all identically pro-
grammed; each particle can maintain state, send messages to other particles
within a certain range, and respond to received messages. Particles might
also be able to move, or possess sensors and actuators to interact with the en-
vironment. Any framework with these properties is suitable for amorphous
computation, whatever the actual implementation behind this behaviour.
As such, an amorphous computing medium can either be simulated or hard-
ware implemented, and both approaches have already been investigated.
Hardware implementations will be discussed in section 3.1, while section
3.2 covers simulator implementations, most importantly hlsim, a high level
simulator for an amorphous computer. Some ideas on cellular computing,
where an amorphous computing particle is represented by a biological cell,
are presented in section 3.3. Lastly, a summary is given in section 3.4.

3.1 Hardware Implementations


Although a completely functional amorphous computer with all necessary
attributes has not been constructed as yet, there does exist an experimental
prototype, which is, however, not completely within the spirit of amorphous
computing. This is mainly because the processors used are based on micro-
electronical elements existing already, but not specifically built for use in an
amorphous computer. Nevertheless, experiments can and have been done
with this prototype, and the results are promising. Next to this, there is


ongoing research towards ”real” amorphous computing particles, designed


specifically for that purpose, and having all the necessary requirements.
This topic will be discussed in section 3.1.2. First, however, we discuss the
existing prototype and its use in section 3.1.1.

3.1.1 Gunk on the Wall


The first amorphous computer is nicknamed Gunk on the Wall, referring to its
appearance. Its particles are based on Motorola's M68HC11 microcontroller;
see figure 3.1 for a graphical representation of its extremely simple CPU.

Figure 3.1: Central Processing Unit of M68HC11.

The particles are capable of communication by wired broadcast, control


four LEDs, and have a sensor which can be triggered by, for example, a
laser beam. The communications layer is provided through a UART, or
Universal Asynchronous Receiver/Transmitter. Thus, while asynchrony is
maintained, a first non-amorphous aspect of Gunk is the wired, in prin-
ciple long-range communication that is provided. This implies that Gunk
should be configured manually as an amorphous computer, by attaching
wires such that local communication is simulated; however, a configuration
with all-to-all communication is in principle possible. Secondly, the amor-
phous computing particles are of a size which does not allow them to be
“sprinkled” randomly and in great numbers on a surface or in a volume.
This means that at this stage, Gunk works with a modest amount of pro-
cessors, manually placed on a wall. Nevertheless, the prototype can be used
to run amorphous programs on, and several experiments have already been

carried out. However, programming the particles is not very user-friendly


since this must be done in machine code. As an example, the code for Gunk
to initiate a hop count (part of the technique used to establish a coordinate
system in section 2.2.2) is given below (this example is from [Bee]):

 1. ORG USERPROG
 2. VER FCB $04
 3. PROGBEGIN COMA
 4. JSR DELAYINT
 5. CMPB #$04
 6. BHS SENSEHIT
 7. TSTB
 8. BEQ PROGBEGIN
 9. LDAA #$7F
10. BRA SENDNEIGHBORS
11. SENSEHIT LDAA #$01
12. SENDNEIGHBORS LDAB RBUF
13. INCB
14. JSR LESSAB
15. STAA PORTB
16. LDAB #$01
17. JSR BROADCAST
18. BRA SENDNEIGHBORS
19. PROGEND NOP

Here LDAA #$nr and LDAB #$nr denote loading the hexadecimal number nr
into accumulators A and B respectively, CMPB indicates that the subsequent
value should be compared with that in register B, and JSR stands for jump
to subroutine. Hence, DELAYINT, LESSAB and BROADCAST are subroutines
specified elsewhere; an overview of these subroutines, as well as Gunk
programming in general, can be found in [Bee]. The full translation of this
program is as follows:
1. each user program starts with this command
2. accumulators A and B are automatically initialised to 0
3. take the COMplement of A - in order to flash
4. DELAY, but be open to INTerruptions
5. was a sensor hit?
6. if so, goto SENSEHIT
7. did the delay run out without interruptions?
8. then toggle lights and delay again
9. a message was received
10. display it and send it to your neighbours


11. I was hit - display and broadcast one
12. get the contents of the received message
13. increment this by one
14. put the LESSer in A and the greater in B
15. display the lesser...
16. ... and start broadcasting it
17. jump to broadcast subroutine
18. keep on broadcasting it
19. each user program ends with this command
Note that this program does not take into account that messages might be
garbled due to collisions.

Another experiment carried out on Gunk is that of pulse synchronisation


[Bee97]. The idea is to synchronise individual amorphous computing pro-
cessors by applying results from biology, where the analogous problem of
firefly synchronisation is already investigated. The initial implementation of
the algorithm was executed on Gunk and iteratively modified by analysing
experimental results. In a subsequent stage, the algorithms were transferred
to hlsim, a simulator for an amorphous computer (see section 3.2.1). This
was necessary in order to carry out experiments with a more realistic number
of particles. After significant experimentation, a mathematical model arose
explaining the observed behaviour. This approach shows the importance of
an amorphous computing environment in which hypotheses can be tested
and modified according to experimental results.
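As a flavour of the synchronisation idea, here is a Mirollo–Strogatz-style pulse-coupled oscillator sketch – a standard firefly model, not the actual algorithm of [Bee97]. Each oscillator runs freely until the leader fires; every listener's concave “voltage” then jumps by ε, and oscillators pushed over threshold fire in the same instant and are absorbed into the leader's cluster.

```python
import math
import random

def f(phi, b=3.0):
    """Concave voltage curve of an integrate-and-fire oscillator."""
    return math.log(1.0 + (math.e**b - 1.0) * phi) / b

def f_inv(x, b=3.0):
    return (math.e**(b * x) - 1.0) / (math.e**b - 1.0)

def firing_cycle(phases, eps=0.1):
    """Advance every oscillator until the leader fires, then apply the
    pulse: listeners' voltages jump by eps; any pushed over threshold
    fire too and reset together with the leader (absorption)."""
    lead = max(phases)
    out = []
    for p in phases:
        p += 1.0 - lead                  # free run until the firing instant
        if p >= 1.0 - 1e-12:             # the leader and its cluster fire
            out.append(0.0)
        else:
            x = f(p) + eps               # pulse received
            out.append(0.0 if x >= 1.0 else f_inv(x))
    return out

rng = random.Random(7)
phases = [rng.random() for _ in range(10)]
for _ in range(3000):
    phases = firing_cycle(phases)
assert len(set(phases)) == 1             # the ensemble has synchronised
```

The concavity of f is what makes absorption possible: listeners near the end of their cycle receive a disproportionately large phase jump, so clusters only ever grow.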

3.1.2 Constructing the Particles


Currently, there exist two main ideas for the physical construction of an
amorphous computing particle: the PPP or pinless processor particle, and
the EPPP or explicitly powered processor particle [AKS96]. The PPP is
more attractive since it has no physical interconnections for communica-
tions nor for power, which is derived from the environment. However, while
it is possible to build it in principle, in practice this is difficult using easy-to-
obtain processes. Therefore, attention was focused on the EPPP, a particle
which has explicit connections to the power supply but still communicates
wirelessly. It is expected that an EPPP can be fabricated with currently ac-
cessible technology in a few square millimeters of silicon. While full wireless
particles are the ultimate amorphous computing particles, a particle such as
the EPPP would be sufficient for near-term experiments.

Wireless communication can be provided, for example, by electroqua-


sistatic coupling, where each particle has an ”antenna” electrode that is

capacitively coupled to similar antenna electrodes on neighbouring parti-


cles. The main problem with this approach is the parasitic capacitance
between the antenna and the substrate of the processor to which it is at-
tached. This can be avoided either by using non-planar antennas, that can
be unfolded out of the chip, or by removing the silicon substrate under the
antenna structure to reduce the parasitic capacitance. In this way, it is pos-
sible to attain communication speeds among neighbouring particles in the
range of tens of megabaud. While some might think that radio communica-
tion schemes might be a better approach, this solution results in problems
because much higher frequencies are required and the resulting commu-
nication network is not very local, leading to severe interference between
signals. There are several alternatives for providing a particle with wireless
power, for example photodiodes, acoustic transducers, microwaves, mag-
netic coupling, and chemical methods. However, each of these possibilities
raises problems. Photodiodes are not capable of obtaining sufficient power
for computational applications, acoustic transducers suffer from impedance
mismatches, microwaves require an environment with high levels of ambient
microwave radiation, magnetic coupling is difficult for large systems, and
chemical methods are insufficient for long-term power supply. Since all of
these difficulties appear to be technical rather than fundamental, wireless
power supply will probably be solved by one or more emerging technologies
in the near future. However, since amorphous computing focuses on
the organisation of computation, not on the technology behind it, for the
time being it is more productive to avoid these issues and rely on explicitly
powered particles – in other words, to concentrate on the development of the
EPPP.

Figure 3.2: Two versions of the EPPP (each comprising power contacts, I/O,
memory, processor and sensors).

An EPPP communicates by capacitive coupling, as explained above. Power


is obtained by planting each particle into a three-layer elastomer with con-
ductive outer layers separated by an insulator. Contact with these layers is
established by providing each processor with pads in appropriate locations;
as such, each processor is explicitly powered by the elastomer. A graphical
representation of two versions of an EPPP is given in figure 3.2. The upper


processor communicates through an antenna attached to a pad. The lower
processor is designed to work with a five-layer elastomer, such that particles
communicate by sensing voltages and injecting currents into a conductive
layer. Sensors are present in the part of the particle protruding from the
elastomer; the easiest approach is to use photodiodes or temperature sensors.
It is suggested to use RAM, which would result in something like 32 KBytes
per particle. This implies using an instruction set with high code density, or
interpreting a high-density instruction set. It is also important that each
processor includes a random-number generator – not pseudo-random –
since the ability to choose different numbers, used as particle identities, is
crucial in the envisioned algorithms. The idea is then to insert the particles
randomly and with little precision into the elastomer, resulting in a “lawn” of
computing elements, as shown in figure 3.3.

Figure 3.3: A lawn of EPPPs.

3.2 Simulator Implementations


Although hardware developments are underway, they are not yet at a stage
where an amorphous computer with a realistic number of processors can be
constructed. That is why the need arose for a simulating environment in
which several thousand processors can be run, each processor receiving a
copy of the top level program. Which top-level environment is employed is a
development choice. For the moment,
there exist two simulators: hlsim, which is written in MIT Scheme, and a
synchronous simulator written in Java. Since most experiments have been
[3] The figures in this section are adapted from [AKS96].

done in hlsim, and the Growing Point Language, discussed in chapter 4, is


built on top of hlsim, we first give an extensive overview of this simulator
in section 3.2.1. Afterwards, we discuss the Java simulator in section 3.2.2.

3.2.1 hlsim: A High Level Simulator for Gunk


hlsim is a simulator for an ensemble of Gunk processors, which are distin-
guished by having no input or output other than a short-range radio-like
communications device, possibly augmented with some local sensors and
actuators. It allows programs to be written in Scheme and simulates the
execution of copies on thousands of asynchronous processors, as if they were
running in parallel. We discuss several aspects of hlsim: implementation,
use, and several examples, amongst which some of the algorithms presented
in chapter 2, worked out within this framework.

Implementation
An amorphous computer is a huge system of processors which act in paral-
lel and execute their Gunk program independently, if necessary interacting
through local communication. hlsim simulates this parallel activity by asso-
ciating each processor with a different thread. Next to this, a Gunk Scheme
program may itself be multithreaded, starting off with one thread and cre-
ating more whilst executing. This means that there are actually two levels
of threads: on the higher level, there are the processors’ threads, while on
the lower level, there may exist several threads for each processor. All these
threads are synchronised by events, which may be signalled (i.e. associated
with a value) or unsignalled; an event object becomes signalled when its as-
sociated thread terminates. Each processor has the ability to schedule local
events in time using their associated threads. hlsim transforms these local
events into a global schedule, such that the order of events in this sched-
ule is consistent with the order the events would fall in if each processor
were running independently in parallel. That is why the Scheme code for a
Gunk processor cannot be loaded into a simulation directly, but must first
be compiled into continuation passing style. This is a style of writing pro-
grams such that the events in the future, i.e. the rest of the computation,
are passed as an explicit parameter, called the continuation. A continuation
is a procedure that takes the value of the current expression, which represents the “past” of the computation, and invokes it with the “future” of
the computation. Hence, each event is partitioned into a part which has
already been computed, and a part which still has to be computed. As such
one can halt an event at any time, and resume after an arbitrary pause.
This is the mechanism behind the sequential simulation of parallel events.
hlsim pays attention to all the details involved in a particle simultaneously
handling local computation and communication. Each particle is given a

time slice within which it executes part of its program. Handshaking be-
tween particles through events is used to force an active particle to wait for
cooperation-dependent commands to terminate.
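The halt-and-resume mechanism can be illustrated with a toy example in continuation-passing style. This is a minimal Python sketch of the general idea, not hlsim's actual compilation machinery; the `pause` scheduler merely stands in for hlsim's event queue.

```python
# A computation in continuation-passing style: each step receives the
# value computed so far (the "past") and a continuation (the "future").
def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

# Compute (3 + 4)^2, but suspended between the two steps: instead of
# calling the continuation immediately, we hand it back to a scheduler.
pending = []

def pause(value, k):
    pending.append((value, k))   # suspend: remember where to resume

result = []
add_cps(3, 4, lambda s: pause(s, lambda s2: square_cps(s2, result.append)))

# The scheduler can resume the suspended computation at any later time,
# exactly as hlsim interleaves the work of many particles.
value, k = pending.pop()
k(value)
# result[0] is now 49
```

Because the "future" of every computation is an explicit procedure, the simulator is free to run other particles' continuations between any two steps.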

Programming with hlsim


There is a whole series of commands available in hlsim, centered mostly
around simulations, multithreading, devices and communications. We dis-
cuss the most important of these commands – for an overview see [Ada97].
A typical series of commands to create, run and show a new simulation of a
Gunk program foo would look like this:

(define sim (make-sim/1 1000 .05))


(simulation.display! sim #T)
(simulation.load sim "init" "foo")
(simulation.run sim)

The first line constructs a simulation with 1000 processors and a commu-
nication radius of 0.05 units; the processors are uniformly distributed in
a unit square. Alternatively, one could use the procedures make-sim/j,
make-sim/f or make-sim/c, which produce respectively a fixed grid (with
an optional “jiggly” factor), a grid with varying neighbourhood density
(determined by a function f), and a grid which satisfies a
predicate constraint for its neighbourhood density. As a result of the second
step the use of a Scheme graphics window is included, in which each proces-
sor is displayed as a colour-coded dot. Next, the Gunk program in question
is loaded into the simulation, together with the “init” file, which contains
the initialisation code for a processor and should be loaded into the simula-
tion first. Finally, the last line in the example runs the simulation. Once it
is finished, one can extract certain information by using simulation.eval.
For example, the number of collisions each processor detected is found by
executing (simulation.eval sim ’collisions), which returns a vector
containing the result for each processor. Also, a simulation object may have
additional slots and operations added to it at any time; this is done through
the following procedures

simulation.set! simulation name value


simulation.get simulation name
simulation.operation simulation name argument argument ...

The first two commands are straightforward; the third applies the operation
in the slot name to the given arguments.

Several procedures are provided for multithreading, the most important of


which are the following. The procedure parallel starts a thread for each of

its arguments, and waits for all these to terminate. The select form, which
takes as arguments an arbitrary set of clauses, allows a nondeterministic
choice between several events. Each clause can be in one of the following
forms

(guard expression-sequence)
(guard => procedure)
(else expression-sequence)

where a guard evaluates to an event, a thread or false. All guards are


evaluated; when an event becomes or is already signalled, the clause in
question is chosen. In the => form, procedure is called with the value with
which the event is signalled. Primitive-message-event is a global variable
which, if bound to an event, signals it with every incoming message. One
can create events with make-event and make-timeout-event, which create
an unsignalled event and an event which becomes signalled after a given
duration respectively. Last, wait waits for an event to be signalled and
returns the associated value. A thread is suspended while it waits for a
given event.
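The flavour of select and timeout events can be sketched with ordinary threading primitives. The Python analogue below polls its guards rather than blocking, and its names (`select`, `make_timeout_event`) merely mirror the hlsim forms described above; it is a sketch, not hlsim's implementation.

```python
import threading
import time

def make_timeout_event(seconds):
    """An event that becomes signalled after the given duration."""
    ev = threading.Event()
    threading.Timer(seconds, ev.set).start()
    return ev

def select(clauses, poll_interval=0.005):
    """Poll the guards until one is signalled, then run its handler;
    an ('else', thunk) clause fires when nothing is signalled yet."""
    while True:
        else_thunk = None
        for guard, handler in clauses:
            if guard == 'else':
                else_thunk = handler
            elif guard.is_set():        # event already signalled?
                return handler()
        if else_thunk is not None:
            return else_thunk()
        time.sleep(poll_interval)       # brief pause, then re-poll

message_event = threading.Event()       # never signalled in this run
timeout = make_timeout_event(0.05)
outcome = select([(message_event, lambda: 'message'),
                  (timeout, lambda: 'timed out')])
```

Here the message guard never fires, so the timeout clause is eventually chosen, just as a Gunk particle falls back on its global timeout.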

There are several procedures concerning devices and communication worth


mentioning. Creating a sensor or actuator I/O device is done in the follow-
ing way:

(make-io-device initialiser action)

where both arguments are procedures. Controlling these devices is done


through read-sensor and apply-actuator. Finally, a message is sent with
the command broadcast; messages take a time message-delay to be trans-
mitted, and a garbled message can be detected because it arrives as an
ordinary message consisting of the symbol collision.

Establishing a Potential
As a first example [4], let us again consider the implementation for a hop count,
a common ingredient in amorphous computing algorithms (see sections 2.2.2
and 3.1.1). We start with a randomly chosen processor, acting as a source
for a potential, which broadcasts a message containing a potential of 36.
When any other processor receives a message, it compares the potential in
the message with the highest potential it has recorded. If the potential in
the message is greater, the processor updates its highest potential, changes
its color to reflect the potential, and broadcasts a message with a potential
[4] This example is from [DeR97].

one less than that received. If the potential in the received message is less
than or equal to the highest recorded potential, no action is taken. When
a processor detects a collision, it flashes white for 100 time units. This is
reflected in the following loop, which is carried out for all processors apart
from the source:

(select
(global-timeout ’done)
(primitive-message-event
=> (lambda (message)
(event.clear! primitive-message-event)
(if (eq? message ’collision)
(begin (colour-me "white")
(wait (make-timeout-event 100))
(colour-me my-colour))
(if (and (number? message) (> message potential))
(begin
(set! potential message)
(broadcast (-1+ message))
(set! my-colour
(remainder (truncate (/ message 5)) 7))
(colour-me my-colour))))
(propagator-loop)))))

Initially, each processor is coloured blue, while the source is cyan. The
eventual colour of a processor is determined by a number between 0 and 6,
i.e. that number is taken as index for the following vector

(define colours ’#("blue" "orange" "yellow" "green" "cyan"


"magenta" "pink"))

The result of a simulation with 10000 processors and a communication radius


of 0.02 is shown in figure 3.4.
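Stripped of collisions and asynchrony, the potential broadcast reduces to a breadth-first relaxation. The following Python sketch is a synchronous idealisation (with a hypothetical adjacency-list representation, not hlsim's message passing) that computes the same final potentials the loop above converges to.

```python
from collections import deque

def hop_count_potential(neighbours, source, initial=36):
    """BFS version of the potential broadcast: every particle records the
    highest potential it has heard and re-broadcasts one less."""
    potential = {p: 0 for p in neighbours}
    potential[source] = initial
    queue = deque([source])
    while queue:
        p = queue.popleft()
        for n in neighbours[p]:
            if potential[p] - 1 > potential[n]:
                potential[n] = potential[p] - 1   # update and re-broadcast
                queue.append(n)
    return potential

# Tiny example topology: a chain of five particles.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
pot = hop_count_potential(chain, source=0)
# pot == {0: 36, 1: 35, 2: 34, 3: 33, 4: 32}
```

Each particle ends up with the initial potential minus its hop distance from the source, which is exactly the gradient the colour coding in figure 3.4 visualises.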

The Clubs Algorithm


We now present an implementation of the clubs algorithm, covered in section
2.2.1. The main loop is as follows

(define (club-event-loop)
(select
(global-timeout ’done)
((and still-competing? leader-timeout)
(event.clear! leader-timeout)

Figure 3.4: Simulation of a potential.

(color-me "red")
(set! still-competing? #F)
(broadcast id) ; a LEADER message
(club-event-loop))
(primitive-message-event
=> (lambda (message)
(event.clear! primitive-message-event)
(if (eq? message ’collision)
(set! collisions (+ collisions 1))
(begin
(set! still-competing? #F)
(set! clubs-i-belong-to
(cons message clubs-i-belong-to))
(color-me (length clubs-i-belong-to))))
(club-event-loop)))))

Processor ids are chosen as random numbers in a large interval. Both global-
timeout and leader-timeout are defined with make-timeout-event, such that
leader-timeout is less than global-timeout. The second select clause is carried
out each time a processor declares itself as a club leader; the third select
clause is the recruiting stage. Initially, all processors are coloured blue;
when a leader is elected, it is coloured red. When the simulation is finished,
each processor has a colour “equal” to the number of clubs it belongs to.
Any number not falling within the colour vector is associated with the colour
white. The result of a simulation with 2000 processors and a communication

radius of 0.1 is given in figure 3.5.

Figure 3.5: Simulation of the clubs algorithm.
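Abstracting away timeouts and message collisions, the clubs algorithm can be sketched sequentially: a random wake-up order stands in for the random leader-timeouts. This Python idealisation is not the hlsim code, but it exhibits the key invariant that no two leaders are neighbours.

```python
import random

def clubs(neighbours, seed=0):
    """Sequential sketch of the clubs algorithm: processors wake up in a
    random order; a processor that has heard no leader yet declares
    itself leader, recruiting all of its neighbours into its club."""
    rng = random.Random(seed)
    order = list(neighbours)
    rng.shuffle(order)                    # random wake-up times
    leaders, membership = [], {p: [] for p in neighbours}
    for p in order:
        if not membership[p]:             # still competing: become a leader
            leaders.append(p)
            membership[p].append(p)
            for n in neighbours[p]:
                membership[n].append(p)   # recruit every neighbour
    return leaders, membership

grid = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
leaders, membership = clubs(grid)
# every processor belongs to at least one club, and no leader has a
# leader as a neighbour
```

The leaders thus form a maximal independent set of the communication graph, which is why the clubs cover the whole domain.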

Simulating Waves
The figures in section 2.3.1 are all results of simulations carried out with
hlsim. The colour of a processor in these figures represents the amplitude
of the wave at that point. The implementation of a simulated wave process
is actually quite straightforward. The central procedure is the following [5]:

(define (do-exchange other)


(let* ((p (my-p)) (pn (neighbour-p other))
(q (my-q)) (qn (neighbour-q other))
(result (momentum-conserving-exchange q p qn pn)))
(set-my-q! (enforce-boundary-conditions (first result)))
(set-my-p! (second result))
(set-neighbour-q! other
(enforce-n-boundary-conditions other (third result)))
(set-neighbour-p! other (fourth result))

(show-self (my-q) q)
(show-neighbour other (neighbour-q other) qn))

where momentum-conserving-exchange determines the new values for am-


plitude and momentum after an exchange between neighbours, using the
[5] The code extracts are from [Rau99], where the full simulation program can be found.

formulas derived in section 2.3.1. Since the amplitudes of the boundary


are fixed by the boundary conditions, one has to check if a processor is on
the boundary before updating its value for q, hence enforce-boundary-
conditions is called. After the exchange, the colours of the processors are
adapted, reflecting the change of their internal state. To start the simula-
tion, one first imposes the initial conditions; then, one calls the activity
procedure for each processor, which is defined as follows

(define (activity)
(define (start lst)
(if (null? lst)
’()
(begin
(make&start-thread
(lambda () (pause (random period))
(wait-and-do-exchange (car lst))))
(start (cdr lst)))))
(start neighbor-list))

where wait-and-do-exchange is a procedure which waits and then executes


do-exchange. Calling activity sets up a thread for each neighbour, where
each thread pauses and then does the exchange with that neighbour. The
start times of the threads are staggered, such that the exchange with neigh-
bours occurs one by one, but the order in which this happens changes over
time.
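The staggering of per-neighbour threads can be mimicked with ordinary threads and random delays. The sketch below is a Python analogue of the activity procedure, with a hypothetical do_exchange callback standing in for the amplitude exchange; it is illustrative only.

```python
import random
import threading
import time

def start_activity(neighbours, do_exchange, period=0.05, seed=0):
    """One thread per neighbour, each sleeping a random fraction of the
    period before its exchange, so the exchanges happen one by one but
    in a different order each time."""
    rng = random.Random(seed)
    threads = []
    for n in neighbours:
        delay = rng.random() * period
        t = threading.Thread(
            target=lambda d=delay, n=n: (time.sleep(d), do_exchange(n)))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

log = []
lock = threading.Lock()

def exchange(n):
    # Stand-in for do-exchange: just record which neighbour was served.
    with lock:
        log.append(n)

start_activity([1, 2, 3], exchange)
# all three exchanges ran, in a randomised order
```

The random per-thread delay is what de-correlates the order of exchanges over time, as the text describes.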

Since the simulation of waves requires each processor to know who its
neighbours are, normally a message should be broadcast requesting an ex-
change. Each processor that receives this message should then reply co-
operatively. Another approach is to establish the neighbours beforehand,
keeping a list of neighbours as part of the internal state of each processor.
However, this simulation takes a shortcut by “cheating”, i.e. stepping
outside the abstraction of each Gunk processor running isolated from
its surroundings. This is possible because hlsim provides certain procedures
which can deliver information on the whole simulation, such as each
processor’s neighbours. In this example one can argue that without cheating
the program would have been less clear, and that it is certainly possible to simulate
waves without preliminary knowledge on each processor’s neighbours. Nev-
ertheless, using global information is against the spirit of Gunk, and thus
not recommended.

3.2.2 A Simulator in Java


Next to hlsim, there is a synchronous Java simulator available, which, to our
knowledge, has been used mainly for simulations of the Gray-Scott diffusion

equations, discussed in section 2.3.2. For this reason, as well as the fact
that this simulator is synchronous, we will not discuss it in great detail.
However, it is interesting to compare results obtained with an asynchronous
and a synchronous simulation.

Figure 3.6: The Gray-Scott simulator.

A Java applet for the Gray-Scott equations is presented in figure 3.6 [6].


The large window in the upper-left corner is analogous to the hlsim graphic
displays presented above. When a processor is clicked, its neighbourhood
is coloured white, and detailed information on that processor is displayed
below the window. Again, the colour of a processor is a measure of its
internal state, in this case the concentration of u or v, depending on whether
the “watch u” or “watch v” button is marked. At the upper right the
running time measured in steps is displayed; it is also possible to change the
update rate. The three buttons below speak for themselves. The simulation
parameters are more or less the same as those for hlsim; for example, one can
choose the number of processors, the grid (random or regular), the communication
radius and the initial conditions. Then there is a series of parameters specific
to the Gray-Scott simulation; these were defined in section 2.3.2.
[6] This applet as well as a tutorial for it can be found at http://www.swiss.ai.mit.edu/projects/amorphous/jsim/sim/GrayScott.html.

The result of a simulation with about the same parameters as for figure 2.11
is shown in figure 3.7. It is important to notice that the qualitative
nature of the resulting patterns is comparable to that of the patterns
obtained with the asynchronous hlsim simulations from section 2.3.2. In
that section we already mentioned the similarity of the patterns obtained with a regular as
opposed to a random grid. So it seems that amorphous media are at least
as expressive as synchronous and/or regular simulations.

Figure 3.7: Synchronous simulation of the Gray-Scott model.
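For reference, a synchronous Gray-Scott update like the one this applet performs can be sketched in a few lines. The parameter values below are illustrative, not those of the applet, and the grid here is a regular periodic one.

```python
def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One synchronous update of the Gray-Scott equations on a periodic
    grid:  du/dt = Du*lap(u) - u*v^2 + F*(1-u),
           dv/dt = Dv*lap(v) + u*v^2 - (F+k)*v."""
    n = len(u)
    def lap(a, i, j):
        # five-point Laplacian with periodic boundaries
        return (a[(i-1) % n][j] + a[(i+1) % n][j] +
                a[i][(j-1) % n] + a[i][(j+1) % n] - 4 * a[i][j])
    new_u = [[u[i][j] + dt * (Du * lap(u, i, j) - u[i][j] * v[i][j]**2
                              + F * (1 - u[i][j])) for j in range(n)]
             for i in range(n)]
    new_v = [[v[i][j] + dt * (Dv * lap(v, i, j) + u[i][j] * v[i][j]**2
                              - (F + k) * v[i][j]) for j in range(n)]
             for i in range(n)]
    return new_u, new_v

# Uniform steady state u=1, v=0 with a small seed of v in the centre.
n = 8
u = [[1.0] * n for _ in range(n)]
v = [[0.0] * n for _ in range(n)]
v[4][4] = 0.5
for _ in range(10):
    u, v = gray_scott_step(u, v)
```

The amorphous version replaces the regular Laplacian by an exchange with whatever neighbours happen to be in range, which is why the comparison between the two simulators is interesting.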

3.3 Cellular Computing


At the beginning of the previous chapter, we mentioned the influence of
recent advances in cellular engineering on the emergence of the field of
amorphous computing. Thanks to these developments, one can imagine
using biological cells instead of micro-electronic components
as amorphous computing particles. Cells are small, energy efficient, have
limited “computational” power, can easily be configured amorphously (that
is, randomly distributed over a surface or in a volume), and are governed
by identical genetic programs. They are isolated, controlled environments,
in which complex chemical reactions can be used to control and organise
biological processes, just as micro-electronic elements employ electrical
mechanisms to control electrical processes. Moreover, cells reproduce
themselves, allowing the creation of many copies with little manufacturing effort.

All these aspects fit perfectly within the framework of an amorphous com-


puting medium. So, if we could co-opt existing biochemical mechanisms or
even construct novel ones for our purposes, biology could become an actual
implementation as well as an important source of metaphors for amorphous
computing.

An important subset of cellular engineering is cellular computing, which


investigates the construction of digital-logic circuits within living cells. The
basic idea is to represent signals by concentrations of naturally occurring
DNA-binding proteins, where the nonlinear amplification is implemented by
in vivo DNA-directed protein synthesis [KS98]. Production of a particu-
lar protein is controlled by the sequence of DNA codons in the associated
gene. In the first step of protein synthesis, known as transcription, the
primary DNA sequence for the gene in question is copied into messenger
RNA (mRNA), under control of an enzyme complex called RNA polymerase
(RNAp). Next follows the translation step, in which the mRNA is used by
another enzyme complex, the ribosome, to manufacture proteins. Since
mRNA as well as proteins are gradually degraded within a cell, the con-
tinuous presence of a particular protein requires ongoing transcription and
translation processes. Which portions of the gene are copied is carefully
controlled by a series of mechanisms, which form the basis of biological logic
gates. Each gene is accompanied by an upstream non-coding control region,
signalling the binding location for RNAp, which catalyses the creation of
mRNA, and binding locations for promoters and repressors. Promoters fa-
cilitate the binding of RNAp, while repressors physically interfere with its
binding by overlapping the binding site. Hence, if we take an “input” protein A which acts as a repressor for the synthesis of an “output” protein Z,
and a gene GZ which codes for the production of Z, we have all the neces-
sary ingredients for a biological inverter. Indeed, presence of the A protein
(signal 1) implies absence of the output protein Z (signal 0), and vice versa,
as shown schematically in figure 3.8 [7].

An essential aspect of any digital logic gate is a high-quality nonlinearity.


In biological systems, the main chemical technique to achieve this is cooper-
ative binding. This refers to mechanisms in which the first of several protein
binding reactions occurs at a relatively low rate, while subsequent binding
reactions take place more rapidly due to the presence of the already bound
protein, which enhances the binding affinity. As shown in figure 3.9, we
can divide the protein synthesis process into modules, namely cooperative
binding, transcription and translation.

[7] Figures in this section are from http://www.swiss.ai.mit.edu/~rweiss/bio-programming/dimacs99-evocomp-talk/.

Figure 3.8: The two idealised cases for a biological inverter.

Here φA and φZ denote the synthesis rates of the input and output protein
respectively, ρA is the repression activity, and ψZ is the mRNA synthesis rate.

[Figure: the inverter as three modules in series: the input protein synthesis rate φA enters cooperative binding (C), yielding the repression activity ρA; transcription (T) yields the mRNA synthesis rate ψZ; translation (L) yields the output protein synthesis rate φZ.]

Figure 3.9: The biological inverter process schematically.

Each module is responsible for a different aspect of the inverter signal: co-
operative binding C takes care of the nonlinearity, transcription T does the
actual inverting of the signal, and translation L is mostly a linear process,
which means that it results in a scaling of the output. Putting all this
together, we obtain the following expression for the inversion relation I

φZ = I(φA ) = L ◦ T ◦ C(φA )
shown graphically in figure 3.10. The resulting curve is similar to that of an
NMOS inverter.
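The composition I = L ∘ T ∘ C can be made concrete with stand-in functions for the three modules. In the sketch below, cooperative binding is modelled by a Hill function with coefficient 2, and the constants (K, psi_max, gain) are arbitrary illustrative choices, not measured kinetic rates.

```python
def cooperative_binding(phi_a, K=0.5, hill=2):
    """C: input protein level -> repression activity in [0, 1];
    a Hill coefficient > 1 supplies the nonlinearity."""
    return phi_a**hill / (K**hill + phi_a**hill)

def transcription(rho, psi_max=1.0):
    """T: the more repression, the less mRNA synthesis (the inversion)."""
    return psi_max * (1.0 - rho)

def translation(psi, gain=2.0):
    """L: roughly linear scaling from mRNA level to protein output."""
    return gain * psi

def inverter(phi_a):
    # phi_Z = I(phi_A) = L o T o C (phi_A)
    return translation(transcription(cooperative_binding(phi_a)))

low_in, high_in = inverter(0.05), inverter(2.0)
# low input -> high output, high input -> low output
```

Sweeping phi_a through a range of values traces out a sigmoidal transfer curve of the kind shown in figure 3.10.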

Natural gene regulation systems exhibit characteristics useful for implement-


ing in vivo logic circuits. An example is the phage λ promoter PR and
repressor (cI) mechanism [HKW99], of which the chemical reaction system
and the associated kinetic constants are fairly well known. Figure 3.11 shows

Figure 3.10: The transfer curve I.

the dynamic behaviour of the inverter as modeled with this system; shown
from top to bottom are the concentrations of the input protein A, the active
(i.e. unrepressed) gene GZ and the output protein Z. As one can see, this
system behaves like a “normal” electrical inverter.

Once inverters are constructed, it is easy to build any logical circuit.


Multiple-input logic functions, for example, can be manufactured by com-
bining several inverters, each having the same binding protein as a product.
Experiments with a NAND-gate, ring oscillator and RS latch have been
carried out [HKW99], showing the intended dynamic behaviour. An aid in
the construction of general circuits is Biospice [HKW99], a prototype system
for simulating and verifying genetic digital circuits. It takes as input a
series of gene expression systems and simulates the dynamic characteristics
of the target system, consulting a database of reaction kinetics and diffusion
rates. Also, the protein synthesis approach can be extended to developing
cells which act as actuators, when the gene in question codes for an enzyme
that effects some action within the cell, such as motion, illumination or cell
death. Analogously, the input of a cellular gate could consist of a sensor
instead of a protein, reacting in response to illumination or a chemical in
the environment. Eventually, the objective is that of microbial circuit de-
sign, where a desired logic circuit and a database of kinetic rates are taken
as input, and a genetic network is produced that implements the circuit.
However, this requires an ambitious research program, since at the moment
there is no such database available.

Several implementation issues should be taken into consideration. First,


there exist certain complexity limitations, since each input protein required
must be different from other input proteins, and not used elsewhere in the

[Figure: three stacked time series (time in units of 100 sec, 0 to 50) of protein concentration (1 µM = 598 molecules/cell) for a gene count of 10: the input protein A, the active gene Gz, and the output protein Z.]

Figure 3.11: Dynamical behaviour of a biological inverter gate.

host cell’s control mechanisms. Complexity is also constrained due to the


finite volume of the host cell, implying a maximum limit on internal pro-
tein concentrations. Another issue is the interference with a cell’s life cycle.
Next to this, the choice of the host organism is critical. Either a very simple
existing organism should be used (such as yeast or the bacterium Escherichia
coli), or an engineered, dramatically simplified cell. Above all, these gates
are slow, which seems to imply that their use will be in addition to, not
in place of electrical gates. Besides the obvious application of control of
biological processes to medicine, the main goal is to co-opt these processes
to manufacture novel material and structures at molecular scale. Since cells
have limited computational power and frequently fail, programming them
falls within the amorphous computing paradigm. Mastering mechanisms of
biological differentiation, morphogenesis and pattern formation, one can use
home-made biological entities as construction agents for building and main-
taining complex ultramicroscopic electronic systems. Hence, while biological
gates are slow, they can aid in molecular-scale manufacturing of macroscopic
objects. It is these objects we desire, or as stated in [KS98], the slow bio-
logical systems are our machine shops, with proteins as the machine tools,
and with DNA as our control tapes.

3.4 Summary
An amorphous computing medium is an environment which behaves as if it
were a collection of randomly spread, locally interacting computing parti-
cles that update asynchronously. The way to write an amorphous program
depends on the medium at hand. The most logical amorphous computing
medium is, of course, a “real” amorphous computer. At this stage there
exists a hardware prototype, nicknamed Gunk, based on existing microprocessors.
However, this is not completely within the spirit of amorphous
computing, since it uses a limited number of processors and wired communication.
Therefore, there are ongoing experiments towards the construction of an
amorphous computer particle which would fit better in the context, with
wireless local communication and of a size rendering feasible the sprinkling
of many such particles on a surface or in a volume. Since hardware implementations
are not yet at the point of behaving truly amorphously, a
simulating environment is called for. Hence, we arrive at the next group of
amorphous computing media: simulators. At the moment there is a Java
synchronous simulator available, as well as an asynchronous one, hlsim,
which has been used in many experiments, including some of the algorithms
discussed in the previous chapter. Lastly, a more futuristic medium can be
found in biological cells. While as yet it is not possible to have cells cooperate
amorphously, progress has been made on the subject of using cells to
construct arbitrary logic gates.
Chapter 4

The Growing Point Language

Ultimately, programming an amorphous computer comes down to specify-


ing microscopic interactions to attain predetermined macroscopic goals. An
amorphous computer is a very general system in the sense that it is capable
of producing a wide range of global behaviour, allowing one to engineer the
desired outcome by constraining local interactions in a particular way. How-
ever, the associated engineering techniques are not at all well known at this
stage. While several applications show that this approach is certainly fruit-
ful, it is important to extract general methodologies from specific examples,
such that a programming paradigm can be developed. Within this con-
text the Growing Point Language originated, as an effort to investigate how
global behaviour can be obtained by differentiating local behaviour of amor-
phous computing particles in a controlled way – i.e. knowing the emergent
behaviour in advance. In the growing point language (henceforth referred to
as gpl) pattern generation of interconnect topologies was chosen as model
global behaviour. This non-trivial problem fits very well within the context
of the above; moreover, it is easy to verify whether one has achieved the pre-
specified goal of reproducing a certain pattern. Nevertheless, it is important
to keep in mind that the motive behind these nice pattern reproductions is
the development of an amorphous computing paradigm.
The structure of this chapter closely follows that of [Coo99], the disserta-
tion in which gpl is presented. First, we give an overview of the language
through some introductory examples in section 4.1. Section 4.2 describes
the implementation of gpl. Next, some further issues, such as the effect of
the domain layout and possible extensions, are covered in section 4.3. Last,
we give a summary in section 4.4.

4.1 Introductory Examples


The principal concept in gpl is the growing point, a locus of activity that
describes a path through connected elements in the system. At each time,

46

a growing point resides at a single location in the gpl domain, called its
active site. A growing point propagates by transferring its activity from a
particle to one of its neighbours according to its tropism (recall the pre-
liminary discussion on gpl in section 2.2.3). At its active site, a growing
point may deposit material and it may secrete pheromones – in fact, both
processes are realised by setting certain state variables available at each
location. A growing point’s tropism is specified in terms of the neighbour-
ing pheromone concentrations. Actually, these concentrations determine a
scalar field whose gradient directs the motion of the growing point. A path
of a growing point is detectable only if there was material deposited. In
the context of pattern formation, presence of a material at a location im-
plies that part of the pattern has been established there. Typically, each
material is represented by a certain colour in the gpl domain, comparable
to simulation representations of hlsim. Of course, each material also corre-
sponds to a pattern fragment type. Invoking a growing point at a location
may yield one or more successor locations in the domain, unless the growing
point terminates at that location. The step-size is the fundamental unit
distance, defining a growing point’s neighbourhood, giving an upper limit
to the distance over which a growing point moves, and applied as the unit
by which all distances in gpl are expressed. We will make clear how gpl
works, and what means it provides to construct a certain pattern, by
discussing some introductory examples. We start with a simple program
that draws a line segment between two points.

4.1.1 Drawing a Line Segment


Initial conditions required to draw a line are, of course, the endpoints of the
segment, say A and B. To make a line “grow” between its endpoints we
establish a growing point at A, in the following way:

(define-growing-point (A-to-B-segment)
(material A-material)
(size 0)
(tropism (ortho+ B-pheromone))
(actions
(when ((sensing? B-material)
(terminate))
(default
(propagate)))))

Next to its (globally scoped) name a growing point may have parameters; in
this case however, there are none. The definition of a growing point has two
parts: attributes and instructions. The attributes given for A-to-B-segment
are material, size and tropism. A material is one of the two possible primitive

objects of gpl. Depositing A-material corresponds to setting a logical


marker in each visited location. Each location keeps a list of material tags
that may be polled by visiting growing points, allowing a growing point to be
sensitive to the presence of a certain material. If the material attribute for
a growing point is left unspecified, its trajectory cannot be displayed, since
colour-coding of paths occurs through deposited materials. The following
attribute for the line segment is its size, indicating the half-width of the
growing point in step-size units – i.e. the thickness of the line segment.
The integer value for size is used as the radius of the neighbourhood with
which the material tag will be associated. In this case, we might also have
omitted the size attribute, which is by default taken to be 0. Next comes the
tropism attribute, which specifies the direction in which the growing point
moves. It determines to what pheromones the growing point is sensitive
and in what way. Pheromones are, next to materials, the second primitive
object present in gpl. For A-to-B-segment the tropism expression ortho+
directs the growing point towards increasing concentrations of B-pheromone.
The instruction set of a growing point is executed at each active site, such
that a growing point may move only if all commands within this set are
executed. In the example, the growing point will terminate when it senses
any B-material at its current location. Otherwise, it tries to move according
to its tropism attribute. There must be at least one pheromone present in a
neighbour to guide the movement of a growing point; if not, the active site
of a growing point is stalled until one becomes detectable. If the appropriate
pheromones are present but no acceptable location can be found, the active
site retries up to three times. If there still is no acceptable site found, the
active site halts execution, and the growing point is stuck.
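The stall/retry policy just described can be sketched as follows. This is a schematic Python model of our own (neither gpl nor ecoli code); the `acceptable` predicate stands in for the tropism test, and for simplicity the neighbour snapshot is assumed fixed between retries.

```python
MAX_RETRIES = 3

def try_to_move(neighbour_concs, acceptable):
    """Schematic move attempt of a growing point's active site.

    neighbour_concs maps neighbour ids to a pheromone concentration,
    or None where no pheromone is detectable. Returns the chosen
    successor, 'stalled' when no pheromone is detectable at all, or
    'stuck' after three failed retries.
    """
    visible = {n: c for n, c in neighbour_concs.items() if c is not None}
    if not visible:
        return "stalled"   # wait until a pheromone becomes detectable
    for _ in range(MAX_RETRIES):
        # in the real system neighbour state may change between retries;
        # in this fixed snapshot every retry sees the same data
        candidates = [n for n in visible if acceptable(visible[n])]
        if candidates:
            # prefer the highest concentration, as ortho+ would
            return max(candidates, key=lambda n: visible[n])
    return "stuck"         # the active site halts execution
```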

As A-to-B-segment is written, it will require some cooperation from the
endpoint B in order to produce the desired line segment, as is indicated by
the presence of B-pheromone and B-material in the code above. Hence, a
growing point is installed at B in the following way

(define-growing-point (B-point)
(material B-material)
(size 0)
(for-each-step
(secrete 10 B-pheromone)))

This growing point does not have a tropism declaration, which implies that
it is stationary (use of propagate is prohibited in this case). Also, B-point’s
only action is to secrete B-pheromone, the substance to which the former
growing point is attracted. The distribution of B-pheromone is radially
symmetric about the source B, such that its concentration is monotonically
decreasing in the distance from B for up to 10 step-size units.

Figure 4.1: Construction of a line segment.

Finally, to initialise the growing points at A and B, we add the following

(with-locations
(A B)
(at A (start-growing-point A-to-B-segment))
(at B (start-growing-point B-point)))

The with-locations command is part of the interface between gpl and the
user; only here are locations ever explicitly named. It should be clear at this
stage how both growing points interact to create the desired line segment.
Due to the secretion of B-pheromone by B-point, any A-to-B-segment
located within 10 step-size units of B will grow positively orthotropically –
i.e. directly – towards B-point. The line produced will be straight[1] since
the pheromone distribution is radially symmetric. Once A-to-B-segment
reaches B-point it will detect B-material and halt. For a schematic repre-
sentation of the establishment of the line segment see figure 4.1.
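To see why a radially symmetric pheromone yields a straight line, consider the following small simulation. This is a Python sketch of our own, with an idealised continuous concentration on a grid standing in for the hop-count pheromone:

```python
import math

def concentration(p, source=(10, 0), extent=10):
    """Radially symmetric pheromone: decreases with distance from the
    source, with range `extent` (zero beyond it)."""
    return max(0.0, extent - math.dist(p, source))

def grow_segment(start, source=(10, 0)):
    """Follow ortho+ (steepest increase of concentration) from `start`
    until the source itself is reached; returns the visited path."""
    path, p = [start], start
    while p != source:
        x, y = p
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        p = max(neighbours, key=concentration)  # ortho+ step
        path.append(p)
    return path
```

Starting the growing point at A = (0, 0) with the source at B = (10, 0), each ortho+ step strictly increases the concentration, and by radial symmetry the selected neighbour always lies on the straight path (0,0), (1,0), ..., (10,0).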

4.1.2 Tropism Expressions


As an intermezzo, we give an overview of the available tropism expressions
in gpl. Basically, there are two sorts of tropism expressions: primitive
and compound. At this stage there exist five primitive tropism expressions,
namely

• ortho+: move in direction of maximal increase of pheromone concentration

• ortho-: move in direction of maximal decrease of pheromone concentration

• plagio+: determine the average of the highest surrounding pheromone
  concentration and that of the current location; move in direction closest
  to this value

• plagio-: determine the average of the lowest surrounding pheromone
  concentration and that of the current location; move in direction closest
  to this value

• dia: move in direction of most similar pheromone concentration

[1] I.e. the line is straight within the range of B-pheromone. The user should
determine secretion ranges such that their extent covers the domain where
necessary.

A graphical representation of the primitive tropism expressions is given in
figure 4.2.

Figure 4.2: Primitive tropism expressions.

Compound tropism expressions are obtained from primitive ones mainly
through the use of and and or, which take an arbitrary number of arguments.
A broad range of tropism behaviour can be described by combining the five
given primitive tropisms. However, if required, gpl can be augmented with
extra primitive tropisms.
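The five primitive tropisms amount to different preference rules over neighbouring concentrations. The following Python rendering is a schematic sketch of our own (it ignores the filtering stage and breaks ties arbitrarily):

```python
def select_neighbour(tropism, my_conc, neighbour_concs):
    """Pick the neighbour a growing point would prefer (schematic).

    neighbour_concs maps neighbour ids to pheromone concentrations;
    my_conc is the concentration at the current location.
    """
    if tropism == "ortho+":     # steepest increase
        return max(neighbour_concs, key=lambda n: neighbour_concs[n])
    if tropism == "ortho-":     # steepest decrease
        return min(neighbour_concs, key=lambda n: neighbour_concs[n])
    if tropism == "dia":        # most similar concentration
        return min(neighbour_concs,
                   key=lambda n: abs(neighbour_concs[n] - my_conc))
    if tropism in ("plagio+", "plagio-"):
        # average of the highest (resp. lowest) surrounding value and
        # the current one; move closest to that target value
        extreme = max if tropism == "plagio+" else min
        target = (extreme(neighbour_concs.values()) + my_conc) / 2
        return min(neighbour_concs,
                   key=lambda n: abs(neighbour_concs[n] - target))
    raise ValueError(f"unknown tropism: {tropism}")
```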

4.1.3 A CMOS Inverter


The construction of the CMOS inverter pattern presented in figure 4.3 will
guide us through the remainder of gpl. Apart from being a didactically well-
chosen example, CMOS circuit layouts served as a case-study throughout
[Coo99], which led to the development of a CMOS library for constructing
arbitrary circuits. The means to combine these library elements will become
clear from the inverter example itself.

Figure 4.3: A CMOS inverter circuit.

Let us suppose that the rails (parallel blue lines) as well as an initial start-
ing point at the left are given as initial conditions. Parallel lines can be
constructed by having one line segment, constructed as in the previous ex-
ample, secrete a long-range pheromone to which the growing point for the
second line grows diatropically. Once the rails are established, all other com-
ponents of the pattern can be specified as either parallel or orthogonal to
them. Let us start with the horizontal polysilicon (red) segments, of which
there are three[2]. By having both rails secrete a long-range pheromone,
these segments can grow diatropically to both of these pheromones. How-
ever, since in this way there are still two possible directions, we require an
additional pheromone to distinguish between the left and right direction.
We can secrete this pheromone at the starting point, which is given as an
initial condition, in the following way

(define-growing-point (orient-poly length)
(material poly)
(size 1)
(tropism (and (ortho- dir-pheromone)
(and (dia up-rail) (dia down-rail))))
(actions
(when ((< length 1)
(terminate))
(default
(propagate (- length 1))))))

The orient-poly growing point is a first example of a growing point with a
parameter, used to determine the length of the segment. Also, its movement
is determined by a compound tropism expression involving three pheromones:
those secreted by the rails, and dir-pheromone, secreted by the initially
given starting point.

[2] The fourth segment is the start of another CMOS inverter coupled to the
first.

The disadvantage of this approach is that the
dir-pheromone would have to be secreted over a very long range in order to
have an effect on all constituent horizontal polysilicon segments. Therefore, we
rely on a different strategy that uses two growing points: the one given above
to get the segment started, and another one which continues the segment
for as long as we require, without requiring the presence of dir-pheromone.
The second growing point is defined as follows

(define-growing-point (poly-with-inertia length)
(material poly)
(size 1)
(tropism (and (ortho- poly-pheromone)
(and (dia up-rail) (dia down-rail))))
(actions
(secrete+ 3 poly-pheromone)
(when ((< length 1)
(terminate))
(default
(propagate (- length 1))))))

This growing point looks almost exactly the same as orient-poly, but no
longer relies on dir-pheromone. However, we do have to add the clause
(secrete poly-pheromone) to the action attributes of orient-poly. Also
worth mentioning is the use of the secrete+ command in poly-with-inertia.
Where secrete always remembers the highest pheromone concentration
upon receipt of several secretions, secrete+ accumulates those values. By
using secrete+ in the definition of the above growing point, we ensure that
it propagates in the same direction on each step. Finally, to put the two
pieces together, we define horiz-poly in the following way

(define-growing-point (horiz-poly length)
(material poly)
(size 1)
(actions
(--> (start-growing-point orient-poly 2)
(start-growing-point poly-with-inertia (- length 2)))))

Here we use -->, an alias for the connect command, which links two grow-
ing points, and is the first of several gpl commands that can be used to
build larger constructions from existing growing points. Through this com-
mand, one can sequence growing points without having to invoke the second
growing point explicitly in the actions of the first. As such one can define
growing points independently and use them as building blocks for any con-
nected structure.
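The distinction drawn earlier between secrete and secrete+ can be sketched as a per-location update rule. This is an illustrative Python model of our own, not gpl code:

```python
def receive_secretion(table, pheromone, conc, accumulate=False):
    """Update a location's pheromone table on receiving a secretion.

    Plain secrete keeps only the highest concentration seen so far;
    secrete+ (accumulate=True) adds the new value to the stored one.
    """
    old = table.get(pheromone, 0)
    table[pheromone] = old + conc if accumulate else max(old, conc)
    return table[pheromone]
```

With accumulation, a location that is secreted over repeatedly (as by poly-with-inertia on successive steps) keeps growing its stored value, which is what biases the growing point to keep moving away from its own trail.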

The vertical polysilicon segments are easier to define as being orthotropic
to the rail pheromones; this results in the growing points up-poly and
down-poly, which are very similar to orient-poly. This leaves us with the
p-fet (yellow) and n-fet (green) components of the CMOS inverter. Each fet
consists of three growing points: a horizontal polysilicon segment, a diffu-
sion segment growing upwards, and one growing downwards. These growing
points are easy to define in themselves, but in order to compose them into
a larger abstraction we require something other than the connect command.
This is because with connect, growing points have to follow each other se-
quentially. However, groups of growing points viewed as one structure do not
necessarily have to be connected one after another. Therefore, gpl provides
network abstractions, allowing growing points to be composed in a broader way.
For example, to define a p-fet one would require the following definition

(define-network (vert-p-fet (gate) (src drain))
(at gate
(start-growing-point horiz-poly 2)
(--> (start-growing-point up-p-diff FET-HEIGHT)
(->output src))
(--> (start-growing-point down-p-diff FET-HEIGHT)
(->output drain))))

The definition for vert-n-fet is analogous. The code above confronts us
with several network-related commands. First of all, define-network is a
top level command which defines a relation between two sets of locations
called the inputs and the outputs. Hence, it is an extension of a growing
point, which takes only one input and output. For a p-fet there is one input,
the gate, which is connected to two outputs, source and drain. At gate,
the three growing points start producing the associated line segments, such
that both vertical segments are connected to the two outputs source and
drain. This is made clear through the command ->output, which is used to
associate actual locations with names. FET-HEIGHT is a global constant, as
are all upper case words by default.

Finally, we have all the ingredients to define an inverter network, which
is done in the following way

(define-network (inverter (poly-in) (poly-out))
(let-locs
((p-fet n-fet)
(at poly-in
(--> (start-growing-point horiz-poly POLY-LEN)
(secrete+ (* 2 POLY-LEN) dir-pheromone)
(--> (start-growing-point up-poly
(+ INV-HEIGHT N-WIDTH))
(--> (start-growing-point horiz-poly
(+ POLY-LEN (* 2 N-WIDTH)))
(->output p-fet)))
(--> (start-growing-point down-poly
(+ INV-HEIGHT N-WIDTH))
(--> (start-growing-point horiz-poly
(+ POLY-LEN (* 2 N-WIDTH)))
(->output n-fet))))))
(==> (p-fet n-fet) inverter-fets (poly-out))))

Again, we encounter several new commands. First, let-locs has the general
form (let-locs ((<loc-names>) <loc-defn>) <body>). The seman-
tics are that the locations given in loc-names are bound to the results of
evaluating loc-defn, after which body is evaluated within the scope of this
new binding. Apart from the last line the whole program constructs all
polysilicon segments present in the pattern. The last line arranges for the
fet components to grow; also, it uses the new command ==>, which is an alias
for cascade. This command takes a set of inputs, networks and outputs,
such that each network’s outputs serve as inputs for the next. In this case
there is only one network given, inverter-fets, which basically arranges
a p-fet and an n-fet to grow from the outputs of the polysilicon patterns.
Hence, we constructed an inverter abstraction in gpl, which in itself can
serve as a building block for more complicated circuits such as, for example,
an AND-gate. The simulation result of the inverter construction with gpl
is shown in figure 4.4.

Figure 4.4: gpl’s realisation of a CMOS inverter circuit.



4.2 Implementation: ecoli and the gpl Illustrator


Initially, every amorphous computing particle is governed by the same ge-
netic program, and apart from possibly having a different entry point into
this program, each particle behaves identically. Differentiating the particles
occurs over time and is self-induced through communication with neigh-
bours, sensory input values and random numbers. Within the context
of gpl, sensory input values come down to the actual presence of materials
and concentration of pheromones. Communication between particles occurs
only through the gpl-commands secrete and propagate, which therefore
play a special role in the implementation of gpl. In this section, we give an
overview on how a gpl program is interpreted by each particle in an amor-
phous computer. This discussion is twofold in the sense that there are
two separate levels of abstraction, namely ecoli – from Extensible Calculus
Of Local Interactions – and the gpl Illustrator. ecoli is intended to be a
language for an amorphous computing particle, although at this stage it is
more of a programming discipline applied to Scheme. It is a predecessor of
a gpl-compiler, where the compiled program is executed by an amorphous
computing medium; as such it does not implement medium-dependent in-
gredients like communication details. While we present ecoli independently
of the medium, it was used in relation with hlsim, whose architecture may
therefore be kept in mind to fix one's thoughts (see section 3.2.1). Besides ecoli
there is the gpl illustrator, an interpreter for gpl programs. The gpl il-
lustrator provides a simulator environment specifically for gpl programs;
hence, it does not require the aid of an amorphous computing medium such
as hlsim. It was developed for conceptually exploring the domain as well
as for practical reasons: ecoli converts even a compact gpl program into
unwieldy code to be executed by hlsim. Since hlsim simulates execution
of a program on several thousand processors on one computer, lengthy code
can and does lead to memory problems, such that the length of the code is
inversely proportional to the number of processors attainable in a simula-
tion. Therefore, an interpreter was required as a transitional phase towards
a more complete ecoli language.

Section 4.2.2 discusses ecoli, specifically the communication-dependent
commands secrete and propagate, which are represented more accurately
in this model. Next, we describe the gpl illustrator in section 4.2.3, where we
mainly cover communication-independent commands, which allow a higher
level of abstraction. First, however, we discuss some aspects of gpl which
are common to both ecoli and the gpl illustrator.

4.2.1 Common Aspects


The common aspects of ecoli and the gpl illustrator are concentrated
around the internal representations of primitive datatypes and the pre-
processing of tropism expressions. The primitive datatypes of gpl are
pheromones, materials and growing points. Each location keeps a table of
all pheromones it has ever detected, where a pheromone is represented as
a pair containing the pheromone name and its concentration. Also, the set
of materials is remembered by each location; when a growing point visits a
location, its associated material is added to this set. The colour of a particle
in the gpl domain is determined by the number and kinds of materials in
its material set. Considering growing points, there is static as well as dy-
namic information which must be maintained. Static information, such as
size, material and tropism, can be determined from a growing point’s defi-
nition and is represented as a vector of attributes. For each growing point
the attributes vector is shared by all particles throughout the gpl domain,
since it is derived from the gpl program which each processor inherits. A
growing point’s dynamic information is generated when the growing point
is invoked, evolving as its active site moves from particle to particle. It con-
tains information such as parameter values, previous pheromone result and
the current continuation, all passed along to where a growing point’s active
site moves. One might think that continuations are too heavy to be sent
along in a message, but as yet a growing point cannot be nested, which im-
plies that its enclosing environment can extend at most one level deep. As
such a growing point continuation can be represented simply by the bindings
for its parameters at the current location, together with a pointer into the
text of the program. Therefore, these continuations are small enough to be
communicated between particles.
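Such a continuation can be modelled as a small record. The following is an illustrative Python sketch; the field and procedure names are our own:

```python
from dataclasses import dataclass

@dataclass
class GPContinuation:
    """Growing point continuation: parameter bindings plus a pointer
    into the (shared) program text. Since growing points cannot be
    nested, this is all the environment that needs to travel."""
    bindings: dict        # parameter name -> current value
    program_pointer: int  # index into the program text

def resume(program, k):
    """Resume a received continuation: look up the next instruction in
    the locally stored program text and reinstate the bindings."""
    return program[k.program_pointer], dict(k.bindings)
```

Because every particle already holds the program text, only the record itself travels in the CONFIRM message; this is what keeps the messages small.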

Figure 4.5: Tropism implementation.

Tropisms are implemented as a combination of a filtering and a sorting
process, as shown schematically in figure 4.5. Before a growing point moves,
it sends a stream of pairs of neighbouring pheromone concentrations through
the translated tropism. These pairs are first filtered according to the
tropism in question, whereafter they are sorted in order of preference. On
the basis of this result a growing point moves its active site towards a
specific neighbour.
More specifically, the analysis of a tropism occurs through the following code
extract

(define (analyse-tropism tropism-exp targets)
(let ((filter-gen-maker (tropism->filter tropism-exp targets)))
(let ((sort-gen (pred-gen->sorter-gen
(tropism->pred-gen tropism-exp targets))))
(lambda (empty-case non-empty-case)
(lambda (my-result last-result)
(let ((sorter (sort-gen my-result last-result)))
((filter-gen-maker
empty-case
(lambda (results) (non-empty-case (sorter results))))
my-result last-result)))))))

A tropism is actually translated into a procedure that first takes two con-
tinuations, empty-case and non-empty-case, applied when the filtering
process results in an empty and non-empty list respectively. This procedure
returns a filter-generator, which takes the pheromone values at the prede-
cessor and the current location – last-result and my-result in the above
– and does the actual filtering and sorting of the given list of pheromone val-
ues. Filtering and sorting translators first decide whether they are dealing
with a primitive or a compound tropism expression, and then dispatch on the
particular type. For example, in the case of the ortho- tropism, filtering is
done through the predicate (lambda (last-result) (< last-result
my-result)); sorting, coincidentally, also occurs through the < predicate,
such that the neighbour with the lowest pheromone concentration is pre-
ferred. Compound tropisms are handled by cascading the filters for each
constituting primitive tropism and combining their sorters in the following
way

(define (compose-pred-gen gen1 gen2)
(lambda (my-result last-result)
(let ((pred1 (gen1 my-result last-result))
(pred2 (gen2 my-result last-result)))
(lambda (elt1 elt2)
(or (pred1 elt1 elt2)
(and (not (pred1 elt2 elt1))
(pred2 elt1 elt2)))))))

In other words, sorting predicates are applied left to right, such that later
sorters are taken into account for a set of elements only if these elements
have no ordering with respect to the previous sorters.
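The same lexicographic composition can be expressed in Python. This is our own transliteration of compose-pred-gen, with the my-result/last-result generator stage elided:

```python
from functools import cmp_to_key

def compose_preds(pred1, pred2):
    """Combine two strict ordering predicates lexicographically:
    pred2 only decides pairs that pred1 leaves unordered."""
    def combined(a, b):
        return pred1(a, b) or (not pred1(b, a) and pred2(a, b))
    return combined

def sort_by_pred(pred, items):
    """Sort with a strict predicate, as the tropism sorter does."""
    def cmp(a, b):
        if pred(a, b):
            return -1
        if pred(b, a):
            return 1
        return 0
    return sorted(items, key=cmp_to_key(cmp))
```

For two pairs that the first predicate cannot order (equal first components, say), the composed predicate falls through to the second one, exactly as in the Scheme version.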

4.2.2 ecoli: The Extensible Language of Local Interactions


Each particle has the following four components at its disposal

• the working environment, which maintains bindings for variables that
  were assigned or referenced by messages;

• the working dictionary, which maintains a set of bindings from mes-
  sage types to their associated handlers;

• the current agenda, which keeps track of actions that have been
  started but not yet terminated;

• a message queue, which serialises arriving messages and buffers them
  until they can be processed.

ecoli may affect the first three of these components directly. At any time, a
processor’s message queue represents the interaction of that processor with
its neighbours. Execution occurs through the following ecoli code extract

(define (eval-msg message dictionary)
(if (pair? message)
(let ((handler (get dictionary (car message))))
(if handler
(apply handler (cdr message))
#f))
(error "Improper message format: " message)))

After the handler is applied, the current agenda is executed, after which a
new message is retrieved from the message queue. In order to run a gpl pro-
gram on an amorphous computer, each particle must be capable of dealing
with all available gpl commands. These commands can be divided concep-
tually in those that can be executed independently on each processor, and
those that require communication between particles. Since the ecoli proto-
col was invented for the purposes of the latter class, consisting of secrete
and propagate, we head off with discussing their implementation.

The secrete command


The ecoli secrete command is implemented according to the finite state
machine presented in figure 4.6. Secretion relies on a hop count mechanism,
which was covered in sections 2.2.2 and 3.4. The implementation in itself is
actually completely analogous to that of establishing a potential, explained
in the latter section, augmented with an acknowledgement mechanism such
that each particle knows when the secretion has been established to its full
extent. The secretion finite state machine consists of seven states, where
transitions occur through the messages SECRETE, UPDATE and RESTORE, or
through a TIMEOUT mechanism. This mechanism works through a timer,
which is reset when a message is received; if the timer reaches countdown,
a state transition occurs. States 5 and 6 are source particle states; the
secretion source waits for a RESTORE message, implying that the secretion
is done, upon which it transitions to state 6. Following timeout the source
particle enters its end state, after which it can continue pending calculations.
The situation for non-source particles is slightly more complicated. State
1 is the state in which the first secretion wave is established; it is reached
after receiving a SECRETE message. After timeout, a particle enters state
2, where possible UPDATE messages are handled. If such a message arrives
with a lower count than that previously established, a particle goes to state
4 to deal with this situation. Otherwise, it stays in state 2. State 3 is
entered after timeout or when a particle receives a RESTORE message. If
a particle manages to time out of state 3, the secretion is finished and
the particle can continue with its calculations. Important also is that if a
received message has a count greater than the extent of the secretion it is
ignored, while if it is equal the particle goes straight from state 0 to state
3. In the figure arrows are labeled with conditions necessary to make the
transition, where an unlabeled arrow means that the message is ignored.
For the full series of graphs – labeled with actions, conditions and processes
respectively – we refer to the appendix.
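Abstracting away from message timing and acknowledgements, the net effect of a secretion is a minimal hop count at every particle within the extent. The following synchronous Python sketch of our own computes that fixed point directly; the asynchronous UPDATE/RESTORE machinery above exists precisely to reach it reliably:

```python
from collections import deque

def secretion_result(source, neighbours, extent):
    """Flood-fill from `source`, keeping the minimal hop count at each
    particle within `extent` hops; the stored concentration is
    extent - hops, monotonically decreasing away from the source."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        p = queue.popleft()
        if hops[p] == extent:
            continue  # the count has reached the secretion's extent
        for q in neighbours[p]:
            if q not in hops:  # breadth-first order gives minimal counts
                hops[q] = hops[p] + 1
                queue.append(q)
    return {p: extent - h for p, h in hops.items()}
```

On a chain of particles 0–1–2–3–4, a secretion of extent 3 from particle 0 establishes the concentrations 3, 2, 1, 0 and never reaches particle 4.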

The propagate command


When a growing point wants to move its active site, it sends a REQUEST
message to its neighbours, to determine their pheromone values. Neighbours
respond either with an ACKNOWLEDGE message containing the requested con-
centrations, or a WAIT-RETRY message to ask for more time. When a particle
receives a REQUEST message, first it deactivates the associated handler, such
that the particle cannot be recruited twice for the same growing point. Next,
it installs the ACKNOWLEDGE message handler, which retrieves the required
pheromone concentrations; also, a timeout mechanism is installed, such that
a WAIT-RETRY message is sent when the timer reaches countdown. After all
necessary pheromone values are acquired at the growing point’s active site,
it sends an ACTIVATE message to itself, which results in the filtering and
sorting of neighbouring pheromone concentrations according to the prepro-
cessed tropism expression. As such, a successor location is selected, after
which that location is sent a CONFIRM message, containing its own and the
predecessor's identity, together with all required dynamic information of
the growing point. At the successor location, the same steps are repeated;
hence, the growing point’s path is established throughout the gpl domain.
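One round of this handshake can be sketched as follows. This is a schematic Python model of our own; the message names follow the text, but the data representation is invented for illustration:

```python
def propagate_step(active, pheromones, neighbours_of, choose):
    """One propagate round: the active site REQUESTs pheromone values
    from its neighbours, each neighbour ACKNOWLEDGEs with its
    concentration, and the successor chosen by the tropism (`choose`)
    receives a CONFIRM carrying the growing point's dynamic state."""
    # REQUEST / ACKNOWLEDGE: gather neighbouring pheromone values
    acks = {n: pheromones[n] for n in neighbours_of[active]}
    # ACTIVATE: filter and sort via the preprocessed tropism
    successor = choose(acks)
    # CONFIRM: hand the active site over, naming the predecessor
    return successor, {"predecessor": active}
```

Iterating this step particle by particle is what traces out the growing point's path.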
Figure 4.6: The secretion finite state machine.

4.2.3 The gpl Illustrator


The gpl illustrator is an interpreter for the growing point language written
in Scheme. Its implementation closely mirrors the Scheme interpreter pre-
sented in [ASS96], using a case to divide between expression types, where
each expression’s syntax is abstracted into a series of syntax procedures. For
gpl there are cases at several levels, as shown schematically in figure 4.7,
where only the main gpl commands are mentioned at each level for the sake
of clarity. The first partition of gpl occurs through the eval-top-level
procedure, which handles all possible top-level gpl commands such as define-growing-point
and define-network. The lower level takes care of gpl commands which
may be used within the body of a top-level command, leading to a case for
network commands like at, cascade and let-locs, and a case for
growing point commands such as secrete, propagate and connect. The
figure also indicates at which level the actual tropism analysis, discussed
in section 4.2.1, is carried out.

Figure 4.7: Partial structure of the gpl illustrator.

The gpl illustrator maintains an agenda of active particles – i.e. particles
that are currently evaluating the body of some growing point. Hence, fewer
particles are given a time slice as compared with hlsim, where all particles
are considered active. Therefore, simulations with the gpl illustrator run
faster. The gpl illustrator guarantees that each command on a single parti-
cle is completed before the next one in sequence is evaluated. However, this
policy only approximates what happens in hlsim for the propagate and
secrete command. Interpreting these commands with the illustrator ba-
sically neglects acknowledgement mechanisms such as those present in the
ecoli implementation. This implies that each particle participating in a
communication-dependent command trusts that the other cooperating par-
ticles will carry out the rest of the command correctly, as opposed to ecoli
where this is checked explicitly through acknowledgement. For this reason,
as well as to cover another subject than in the previous section, we will not
discuss communication-dependent commands for the gpl illustrator. We
shall limit our discussion to gpl commands concerned with combination
of growing points, namely the growing point command connect and the
network command cascade.

The connect command


The connect command is implemented through growing point continua-
tions, such that the first argument to connect is evaluated, while the next
arguments are passed along wrapped up in a continuation (recall the justifi-
cation in section 4.2.1 on the feasibility of passing around continuations). It
cooperates with the procedure illustrate-terminate, which checks, once
a growing point terminates, whether there are deferred growing points waiting
to be connected behind it. Once a terminate command is encountered,
the continuation containing the next arguments to connect is invoked by
illustrate-terminate. The precise code for illustrating connect is as
follows

(define (illustrate-connect exp env point cont stop
                            id last-result size gp)
(let ((start-command (car exp))
(end-commands (cdr exp)))
(illustrate-command start-command env point cont
(make-gp-continuation
end-commands env process-next-point
stop id last-result size gp)
id last-result size gp)))

Here point represents the active site, cont captures the pending compu-
tation following the connect command, while the stop continuation is the
one that is invoked by illustrate-terminate.

Networks and the cascade command


The define-network command is interpreted through a projection operation,
such that a particle emulating a particular network terminal filters out
the commands within the body of the network definition that are not
relevant to that terminal. This is done through the following code extract

(define (project-seq terminal seq env point cont stop id)
(cond ((null? seq) (cont))
((null? (cdr seq))
(project terminal (car seq) env point cont stop id))
(else
(project terminal (car seq) env point
(lambda ()
(project-seq terminal (cdr seq) env
point cont stop id))
stop id))))

Again, the first command is projected while the rest are wrapped up in a con-
tinuation; note that network continuations take a terminal name as parame-
ter, which makes sense since a network is not necessarily a connected pattern.
The project procedure dispatches on the type of network command. As
a result a network command is parsed to find any instructions that are as-
sociated with terminal; these instructions are executed when the location
corresponding to this terminal is instantiated, typically in an at or cascade
command. The implementation of cascade is analogous to that of connect,
with the command ->output now playing the role of terminate. Hence,
the network continuation to be invoked after the first argument to cascade
is evaluated, is now called by illustrate->output. Specifically, the ex-
pression (==> (i1 ... in) network-a network-b ... (o1 ... om)) is interpreted in
two steps: first, network-a is constructed at the locations (i1 ... in) to pro-
duce the outputs (a1 ... ak). Next, the expression (==> (a1 ... ak) network-b ...
(o1 ... om)) is evaluated, as this is the continuation invoked by the ->output
command of network-a.
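Operationally, cascade thus reduces to threading tuples of locations through a chain of networks. As a Python sketch of our own, with networks modelled as functions from input locations to output locations:

```python
from functools import reduce

def cascade(inputs, *networks):
    """Sketch of ==>/cascade: each network's outputs serve as the
    inputs of the next; the final outputs are the result."""
    return reduce(lambda locs, net: net(locs), networks, inputs)
```

In the interpreter this chaining is realised lazily through continuations invoked by ->output, rather than by an eager fold as above, but the resulting wiring is the same.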

4.3 Further Issues


Although the examples and implementation above give confidence that gpl
is a truly powerful tool for an amorphous computer, as yet we have not
traced out what the actual theoretical and practical limitations of gpl are,
nor how we should incorporate amorphous computing aspects such as faulty
communications or malfunctioning processors. The former issue is closely
related to the layout of the domain and will be discussed in section 4.3.1.
Next, we cover possible extensions to gpl in section 4.3.2.

4.3.1 Impact of the Domain Layout


The main proposition of [Coo99] is stated as follows

Given a planar graph G, there is a gpl program that, when
evaluated on a square lattice, either draws G or decides that it
has failed to do so.

Along with the proof for this proposition comes a general method for gener-
ating such a program. Two ingredients of the statement above are important
in relation to the domain. First, it holds on a square lattice, immediately
leading to the question of what the effect of a randomly distributed domain
is. Secondly, there is the possibility that a growing point “fails to do so”
because it gets stuck, again a domain-related issue. In general, there are
three important concerns that are directly affected by the geometry of the
gpl domain, namely

• can growing points propagate at all, i.e. is the domain connected;

• can they propagate in the right direction, i.e. is every direction repre-
sented by a neighbour from each particle's viewpoint;

• are pheromone levels monotonically decreasing in the shortest-path
length from their source, i.e. is the hop count closely correlated to
the Euclidean distance.

The density of particles in the domain affects all three of these issues; more
particularly, a low mean and a high variance of the neighbourhood density
are aspects that can have a negative effect. Experimental and theoretical
results show that the mean neighbourhood density should be relatively large
(at least 6) in order for the first two issues above to be taken care of. While
this condition is necessary in the general case, it is not a sufficient one; in
other words, while chances are considerably lower that a growing point gets
stuck or that it cannot propagate in the desired direction, it is still a pos-
sibility. For example, even with every direction represented, all neighbours
might be poisonous for a growing point, or they might not have a pheromone
gradient according to the growing point's tropism. However, high average
neighbourhood sizes do have a penalty, since the secretion process is delayed
proportionally. As for the third issue in the list above, in order to have a good
correlation between the hop count and the Euclidean distance, a necessary
condition on the domain distribution is that the variance of the density of
neighbours be low. Without such a correlation, growing point tra-
jectories can be disturbed indirectly, because in this case the pheromone
secretion patterns provide a growing point with a non-Euclidean sense of
distance. Low variance in neighbourhood sizes means that the particles are
more evenly distributed throughout the domain. Therefore, a particle is just
as likely to have a neighbour in one direction as in another, resulting in a
Euclidean secretion pattern and, hence, a non-biased propagation of growing
points.
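The density figures above can be explored with a small Monte-Carlo sketch;
the particle count and communication radius below are illustrative choices,
not values from the text:

```python
import math
import random

def mean_neighbourhood_density(n_particles, radius, trials=5):
    """Scatter particles uniformly in the unit square and return the
    average number of neighbours within the communication radius."""
    totals = []
    for _ in range(trials):
        pts = [(random.random(), random.random()) for _ in range(n_particles)]
        neighbours = 0
        for i, (x1, y1) in enumerate(pts):
            for j, (x2, y2) in enumerate(pts):
                if i != j and math.hypot(x1 - x2, y1 - y2) <= radius:
                    neighbours += 1
        totals.append(neighbours / n_particles)
    return sum(totals) / trials

# away from the boundary, the expected density is about n * pi * r^2
density = mean_neighbourhood_density(500, 0.08)
assert abs(density - 500 * math.pi * 0.08 ** 2) < 3.0
```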

4.3.2 Extensions to gpl


Several extensions to gpl spring to mind, some of which are short-term
attainable changes in line with what is already present, while others funda-
mentally broaden gpl and require considerably more effort. As for the former
class, one obvious extension is to generalise tropism expressions. Examples
are extending the set of primitive tropisms or making them user-definable,
broadening the means of combination of predecessor pheromone concentra-
tions, and allowing growing points to propagate to multiple successor sites.
It would also be interesting to let time come into play, allowing tropisms
as well as secretion mechanisms to be time-varying. Currently, pheromones
are static in that once they are secreted, their concentration remains the
same throughout the execution of the program. A more realistic model is
to have the pheromone levels decay by default after some time. A third
possible short-term extension would be denotable growing points, such that
entire trajectories instead of single active sites can be named and handled.
Since a growing point's path can be viewed as a non-local link between its
endpoints, one possible application is enforcing non-local constraints. Also,
one could allow entire trajectories to move, for example when they come too
close to one another, or design miniature versions of a certain pattern, af-
terwards letting the denoted pattern grow to smooth out irregularities.
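As a sketch of the suggested time-varying model, decaying pheromone levels
could follow a simple half-life law; the function and its parameters are
hypothetical, not part of gpl:

```python
def pheromone_level(initial, half_life, t):
    """Exponentially decaying pheromone concentration: the secreted
    level fades by default instead of persisting forever."""
    return initial * 0.5 ** (t / half_life)

# a pheromone secreted at level 10 has dropped to 2.5 after two half-lives
assert abs(pheromone_level(10.0, half_life=5.0, t=10.0) - 2.5) < 1e-9
```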

There are several aspects of amorphous computing that have not been
taken into account in gpl. gpl operates in an extremely friendly envi-
ronment, where communications always succeed and particles never mal-
function. As these reliability issues are ingredients of the amorphous com-
puting model, at some stage they should be incorporated into gpl. Since the
only communication-dependent commands are propagate and secrete, the
implementations of these commands should be adapted when faulty com-
munications are taken into account. Secretion already relies on redundant
message sending between several cooperating particles, so there probably
need to be no fundamental changes to its implementation. However,
for propagation lost messages can have a more serious effect, since growing
points move to only one location. Considering malfunctioning particles, the
fact that a growing point's propagation is basically the responsibility of
one particle also poses problems. However, since a location is notified be-
fore it becomes the successor, one could use neighbouring particles, which
inevitably hear this message as well, as potential stand-ins for a malfunc-
tioning successor particle. In this context there have also been some in-
vestigations into how a certain pattern can be maintained once it has been
established. For these so-called self-healing structures [Zuc99] it is suggested
that monitoring, diagnosis and repair mechanisms be installed to cooperate
in restoring already constructed patterns.

A problem that seems a little harder to solve is that of particles malfunc-
tioning by producing incorrect answers to certain computations. One way
to get around this is to perform redundant computations for each step of the
process on many neighbouring particles. Further extensions are the inclu-
sion of sensors and actuators and, more fundamentally, allowing particles
to be mobile.
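The redundant-computation idea can be made concrete with simple majority
voting over the neighbours' results; the sketch below assumes a fault model
(a minority of incorrect answers) that the text leaves open:

```python
from collections import Counter

def majority_vote(results):
    """Combine redundant results computed by neighbouring particles:
    the most frequently reported answer wins, masking a minority of
    malfunctioning processors."""
    value, _count = Counter(results).most_common(1)[0]
    return value

# five neighbours redundantly compute one step; one of them malfunctions
assert majority_vote([42, 42, 17, 42, 42]) == 42
```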

4.4 Summary
The growing point language is the first serious attempt to develop a pro-
gramming language for an amorphous computer. The main goal of such a
language is to provide a means to attain predetermined global behaviour
from local interactions only. While gpl uses pattern reproduction as a model
for global behaviour, one should look at it from the aforementioned broader
point of view. gpl is capable of producing a whole range of complicated
patterns, such as, for example, CMOS electrical circuits, through the ab-
stractions of, amongst others, growing points, tropisms, pheromones and
networks. The implementation of gpl exists on two levels: one is the gpl
illustrator, an interpreter working on top of Scheme; the other is ecoli, a
precursor of a gpl compiler which cooperates with hlsim. An important
theoretical result is that gpl either reproduces a general pattern or decides
that it is stuck. The latter case is closely related to the domain lay-
out, where a high mean neighbourhood size with low variance is required
in order to avoid stuck growing points as much as possible. Several exten-
sions were presented, of which the most important is the incorporation into
gpl of typical aspects of the amorphous computing model, such as faulty
communications and malfunctioning particles.
Chapter 5

Thoughts on Amorphous
Geometry

Previous examples have shown that amorphous computing can deal with a
variety of problems. As a contribution, we decided to study yet another
family of applications: that of computational geometry. Geometric algo-
rithms play a fundamental role in a whole series of application domains,
such as computer graphics, geographic information systems and robotics.
Since the physical layout of an amorphous computer serves as a basis for a
one-to-one correspondence with geometrical configurations, it seems natural
to look at what we will refer to as amorphous geometry. Moreover, within
the context of amorphous computing one can easily visualise applications:
for example, think of smart floors in large buildings such as hospitals and
companies, where coloured floor lines direct the way. Amorphous comput-
ing particles composing these lines would have to behave dynamically with
respect to their colour and the shape of the lines, finding shortest paths
between points, going around obstacles and the like. The idea is to show
that an amorphous computer can solve problems of this kind as well
as a conventional one. We will present most of our suggested solutions as
well as the associated simulation results within the context of gpl, which
is specialised in pattern generation and hence a useful tool for a direct rep-
resentation of geometrical problems and their solutions. The goal of this
chapter is to report our results, as well as comment on the flexibility of the
environment – gpl – in which our experiments were tested.
Section 5.1 addresses the problem of robot motion planning on hlsim as well
as with gpl. The next sections prepare the way towards the triangulation
problem, within the context of gpl. First, we discuss the construction of ar-
bitrary polygons in section 5.2, showing simulation results for polygons with
up to six vertices. Next we search for a quadrangle's diagonals in section
5.3. Based on the ideas from these sections, we present general guidelines
for solving the triangulation problem in section 5.4; triangulations for a pen-
tangle and hexagon are shown. Last, we give a summary of the chapter in
section 5.5.

5.1 Robot Motion Planning


One aspect of autonomous robots is that they should be able to plan their
own motion. In order to do this, a robot must have some knowledge about
the environment in which it is moving. Some of this information is static and
available from floor plans – for example where walls and machines are lo-
cated. Dynamic information, such as temporary obstacles, must be derived
through sensory information. The robot motion planning problem has to be
solved whenever a robot wants to move from its origin towards a goal posi-
tion, such that the resulting solution is the shortest path avoiding all
obstacles, static or dynamic. We discuss this problem on hlsim for a static
planar environment with a point robot that can move in every direction. The
path found consists of a series of particles less than one communication radius
from one another; hence, it is a chain of line segments whose maximal
length is the communication radius. It is constructed in essence through
the following code fragment:

(select
 (primitive-message-event
  => (lambda (message)
       (if (not obstacle?)
           (begin
             (event.clear! primitive-message-event)
             (cond ((eq? message 'collision)
                    (if up
                        (set-retry-up-timer!
                         (+ 1000 (random 5000))))
                    (loop ttl up down))
                   ((and (eq? (first message) 'UP)
                         (> (second message) ttl))
                    (if destination?
                        (set-retry-down-timer! 500)
                        (set-retry-up-timer! (random 100)))
                    (loop (second message) (third message) down))
                   ((eq? (first message) 'DOWN)
                    (set-retry-up-timer! #F)
                    (cond ((= (second message) processor-id)
                           (if (not source?)
                               (set-retry-down-timer! 100))
                           (loop ttl up (third message)))
                          ((and down up
                                (= (third message) up))
                           (set-retry-down-timer! #F)
                           (colour-me "yellow")
                           (event.signal! break #T))
                          (else (loop ttl up down))))
                   (else (loop ttl up down)))))))
 (retry-down-timer (broadcast `(DOWN ,up ,processor-id))
                   (set-retry-down-timer!
                    (+ 100 (random 200)))
                   (loop ttl up down))
 (retry-up-timer (broadcast `(UP ,(-1+ ttl) ,processor-id))
                 (set-retry-up-timer!
                  (+ 1000 (random 10000)))
                 (loop ttl up down))
 (continue (event.clear! continue)
           (loop ttl up down))
 ((or break global-timeout) 'DONE)))

(The code presented is based on the find example from [DeR97].)

The loop procedure consists of a colouring clause along with the above
select form; it is more extensive but comparable to the hlsim code ex-
tracts presented in section 3.2.1. Initially, the source is coloured yellow,
obstacles cyan and the rest of the processors blue. In the above, ttl, up
and down refer to time to live, up-neighbour and down-neighbour respec-
tively, while global-timeout, continue, break, never, retry-up-timer
and retry-down-timer are all events. The basic idea is as follows: a source
processor – the origin of the robot – sends out an UP message with a time
to live of maximally 50 hops, through the retry-up-timer event. When
a processor, which is not an obstacle, receives an UP message, it rebroad-
casts it with a decremented time to live, following the same protocol as
in the potential example from section 3.2.1. However, when a rebroadcast
occurs, each processor remembers the identity of the processor from which
the message was received – i.e. the up-neighbour. Hence the UP message
travels away from the source, such that each processor knows the identity of
its up-neighbour. This is reflected in the first select clause, which deals
with the reception of messages. Importantly, an obstacle ignores all
messages, meaning that it cannot aid in the construction of the path and
therefore cannot be part of it, exactly what we want if the robot has to
avoid obstacles. Once an UP message reaches the destination processor –
the goal of the robot – it broadcasts a DOWN message carrying the identity
of its up-neighbour and of the sender (the down-neighbour), through the
retry-down-timer event. Hence the DOWN message travels back to the
source, following the chain of up-neighbours. Whenever a message is broad-
cast, retries will occur automatically until such time as they are turned off.
Retries UP cause chains to be optimised and they continue to be sent until
any DOWN message is received. DOWN retries are turned off whenever a
DOWN message is received that was sent by the UP neighbour. Processors
change colour to magenta when they rebroadcast UP, so that the extent of
propagation is evident. As the DOWN message travels back from destina-
tion to source, the processors en route change colour to yellow, defining the
path avoiding all obstacles which the robot should take.

Figure 5.1: Simulation results for the robot motion planning problem.

Figure 5.1 presents the path found with the above code for a simulation
of 2000 processors with a communication radius of 0.05. Source and
destination processors were given as initial conditions, as well as obstacles,
which were chosen in the following way
(define obstacle?
  (let ((x (random 10)))
    (if (= x 5)
        #T
        #F)))

Of course, realistic problems would require defining obstacle processors
more precisely. Also, dynamic information should be taken into consideration;
however, since processors equipped with sensors are part of the amorphous
computing model, this should pose no problem in theory.
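Viewed centrally, the UP flood and DOWN walk together compute a shortest
hop path, i.e. a breadth-first search over the communication graph. The
following sketch illustrates that effect only; it is not hlsim code:

```python
from collections import deque

def shortest_hop_path(neighbours, source, goal, obstacles):
    """BFS over the communication graph, skipping obstacle processors;
    the flood mirrors the UP phase (each node remembers its up-neighbour),
    the backtrack mirrors the DOWN phase."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == goal:                 # DOWN: walk back along up-neighbours
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbours[node]:     # UP: flood outwards, remember sender
            if nxt not in parent and nxt not in obstacles:
                parent[nxt] = node
                queue.append(nxt)
    return None                          # goal unreachable

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
assert shortest_hop_path(graph, 0, 4, obstacles={1}) == [0, 2, 3, 4]
```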

While the above program obviously works, it is not really easy to read.
One inevitably wonders how the robot motion planning problem could be
solved within the more user-friendly gpl environment. First of all, setting
static obstacles through initial conditions is done by generating the floor plan
pattern, analogously to the generation of electrical circuit layouts. Through
the abstraction of a network, one would construct the floor plan, after which
a growing point is initiated at the robot’s initial location. Basically, the
construction of the robot’s path is done through the following growing points

(define-growing-point (robot-path)
(material path-material)
(size 0)
(tropism (ortho+ goal-pheromone))
(avoids obstacle-pheromone)
(actions
(when ((sensing? goal-material)
(terminate))
(default
(propagate)))))

(define-growing-point (goal)
(material goal-material)
(size 0)
(for-each-step
(secrete 10 goal-pheromone)))

In other words, a straight path from initial location to goal is constructed,
such that locations where obstacle-pheromone is present are avoided. There-
fore, all growing points that construct the floor plan of obstacles should
secrete this pheromone over a short range.

5.2 Drawing a Polygon


The problem of drawing a polygon can be divided into the subproblems of
constructing line segments from one vertex to another. For example, when
drawing a quadrangle with vertices a, b, c and d, one starts with establish-
ing the growing points a-to-b-segment and b-point, as defined in section
4.1.1, at the vertices a and b respectively. Next, one has to define equivalent
growing points at the couples of vertices (b,c), (c,d) and (d,a). As a result,
each vertex is equipped with two growing points, a <vertex>-to-<next-
vertex>-segment and a <vertex>-point growing point. We say "equivalent"
and not "equal", since if we used the same definitions for the required
growing points at every vertex, interference of pheromones would result.
More particularly, suppose we install identical growing points as defined
in section 4.1.1 at the points a, b, c and d. This implies that at these points
B-pheromone is secreted as well as B-material deposited. However, since
each vertex is at the same time a startpoint and an endpoint for a line seg-
ment, the growing point A-to-B-segment would halt at once because of the
presence of B-material. Moreover, even when we allow the segment-like
growing point initially to ignore the presence of its halting material, the
B-pheromones secreted at each vertex would interfere with one another. As
a result, line segments do not grow straight and might not even halt at the
correct vertex. Hence, we have to use a unique pheromone and material
for each vertex, and define an associated segment-like growing point that
grows towards that particular pheromone. The whole process is presented
schematically in figure 5.2.

Figure 5.2: Drawing a quadrangle.

Finally, to start off drawing the quadrangle, we execute the following code
fragment:

(with-locations
(a b c d)
(at a (start-gp a-point)
(--> (start-gp a-to-b-segment)
(--> (start-gp b-to-c-segment)
(--> (start-gp c-to-d-segment)
(start-gp d-to-a-segment)))))
(at b (start-gp b-point))
(at c (start-gp c-point))
(at d (start-gp d-point)))

Simulation results for the construction of a quadrangle with the gpl-illustrator
in the above way, as well as for the analogous construction of a triangle, pen-
tangle and hexagon, are shown in figure 5.3. It is clear from these results
that arbitrary polygons can be constructed amorphously.

Figure 5.3: Several polygons constructed with gpl.

(start-gp is an alias for start-growing-point.)

In the above solution for drawing a quadrangle, we are immediately con-
fronted with the inability of gpl to parametrise pheromones and materials.
If that were possible, we would define the following growing points:

(define-growing-point (edge next-vertex-pheromone next-vertex-material)
(material edge-material)
(size 0)
(tropism (ortho+ next-vertex-pheromone))
(actions
(when ((sensing? next-vertex-material)
(terminate))
(default
(propagate)))))

(define-growing-point (vertex vertex-pheromone vertex-material)
(material vertex-material)
(size 0)
(for-each-step
(secrete 10 vertex-pheromone)))

With the help of these growing points, the following code constructs the
desired quadrangle:

(with-locations
(a b c d)
(at a (start-gp (vertex a-pheromone a-material))
(--> (start-gp (edge b-pheromone b-material))
(--> (start-gp (edge c-pheromone c-material))
(--> (start-gp (edge d-pheromone d-material))
(start-gp (edge a-pheromone
a-material))))))
(at b (start-gp (vertex b-pheromone b-material)))
(at c (start-gp (vertex c-pheromone c-material)))
(at d (start-gp (vertex d-pheromone d-material))))

This example clarifies the need for equipping gpl with parametrisation pos-
sibilities for growing point ingredients such as pheromones, materials and
poisons.

A more fundamental problem emerges when we try to construct polygons
with an arbitrary number of vertices. (Note that detection of edge material
naturally leads to the identification of self-intersecting polygons; this is
typically a much more complex operation in conventional geometry.) While
it is obvious how to extend the above reasoning to vertex numbers higher
than four, a preferable solution

is to write a procedure which takes the vertices as arguments and
constructs the necessary edges through an iterative loop, as follows:

(define (draw-polygon . vertices)
(define (draw-iter begin rest)
(let* ((carrest (car rest))
(first carrest.location))
(if (null? (cdr rest))
(with-locations
(first begin)
(at first
(start-gp (edge begin.pheromone begin.material)))
(at begin
(start-gp (vertex begin.pheromone begin.material))))
(let* ((cadrrest (cadr rest))
(second cadrrest.location))
(with-locations
(first second)
(at first
(start-gp (edge second.pheromone second.material)))
(at second
(start-gp (vertex second.pheromone second.material)))
(draw-iter begin (cdr rest))))))
(let ((carvert (car vertices)))
(draw-iter carvert.location vertices)))

Here we presented each vertex as an abstract data type of the form (<location>
<pheromone> <material>). However, the problem is that as yet it is not
possible to mix Scheme and gpl as above. Since a gpl program is not ac-
tually run on any one processor, there is a translation process converting
it to a set of instructions for each processor. In order to combine Scheme
and gpl, we would have to clarify the semantics of Scheme in this new con-
text. For example, exactly which particles are involved in iterating the loop
– since we are trying to get the whole ensemble to cooperate in producing
the polygon. This problem is more fundamental, and to solve it requires
thorough investigation.

5.3 Finding the Diagonals of a Convex Quadrangle


Let us now look at the problem of finding a quadrangle’s diagonals. As a
simplification, we work with a convex quadrangle; in this case we know there
are always exactly two diagonals. A diagonal can be defined as a straight
line between non-adjacent vertices that lies entirely on the inside of the
quadrangle. Hence, it has to grow away from its startpoint, as well as from
the two adjacent vertices. We can specify these conditions as negative or-
thotropisms with respect to the three pheromones secreted by these vertices
during the construction of the quadrangle. Following the discussion of the
previous section, a diagonal growing from a has to be negatively orthotropic
to a-pheromone, b-pheromone and d-pheromone. Next, a diagonal must
end at the opposite vertex, meaning it has to be positively orthotropic to
the pheromone secreted by that vertex – c-pheromone, in this case. A
clarifying schematic representation of this process is given in figure 5.4.

Figure 5.4: The structure of the diagonal growing point.

However, we found that defining the diagonal growing point in the above
way resulted in stuck growing points. This is because we overspecified its
tropism – combinations of several primitive tropism expressions proved to be
inflexible, at least for simulations of moderate size. Besides being inflexible,
we also found most of these tropism expressions superfluous. Specifying only
that a diagonal grows away from itself and towards the opposite vertex leaves
the growing point more freedom to propagate, while adding tropisms towards
the other vertices results in the growing point not being able to move at all.
Hence, we define a diagonal growing from, for example, a to be negatively
orthotropic towards a pheromone secreted by itself (diagonal-pheromone)
and positively orthotropic to c-pheromone.

One extra complication is that in order for gpl to know what part of the
domain is the quadrangle's inside, we need an additional initial location –
referred to as center below. (The choice of an internal point is inspired by
[Coo99]; alternatives could have been considering polygons filled with a
specific material, or two-material edges.) At center, we establish the
following growing point:

(define-growing-point (inside)
(actions
(secrete 10 center-pheromone)))

This stationary growing point has default size and no associated material,
for it does not have to be detectable by other growing points. Its only task
is to secrete center-pheromone, which directs the diagonals towards the
inside of the quadrangle. However, in order for the diagonal to reach the
opposite vertex, at some point the positive orthotropism towards center has
to be eliminated. Therefore, we have to split up the construction of the
diagonal as shown in figure 5.5.

Figure 5.5: The composition of the diagonal growing point
(diagonal = diagonal-init + diagonal-rest).

More concretely, the diagonal growing point is a combination of two grow-
ing points: diagonal-init, which ensures that the diagonal starts off to-
wards the inside of the quadrangle, and diagonal-rest, which constructs
the rest of the diagonal. The growing point definitions required for the above
scheme are as follows:

(define-growing-point (diagonal-init length)
(material diagonal-material)
(size 0)
(tropism (ortho+ center-pheromone))
(actions
(secrete+ 1 diagonal-pheromone)
(when ((< length 1)
(terminate))
(default
(propagate (- length 1))))))

(define-growing-point (diagonal-rest)
(material diagonal-material)
(size 0)
(tropism (and (ortho- diagonal-pheromone)
(ortho+ c-pheromone)))
(actions
(secrete+ 1 diagonal-pheromone)
(when ((sensing? c-material)
(terminate))
(default
(propagate)))))

(define-growing-point (diagonal)
(material diagonal-material)
(size 0)
(actions
(--> (start-gp diagonal-init 2)
(start-gp diagonal-rest))))

As one can see, diagonal-init determines the initial direction, while diagonal-
rest behaves like a growing point with inertia, maintaining the same direc-
tion by growing away from its own pheromone. (Note the use of secrete+,
which ensures that the diagonal grows in the same direction.) Once
diagonal-rest senses the presence of c-material – i.e. when it has reached
the opposite vertex – it terminates. Finally, to initiate the process of
creating a diagonal, we execute the following:

(with-locations
(center a)
(at center (start-gp inside))
(at a (start-gp diagonal)))

Figure 5.6 shows the simulation result for the construction of a quadrangle
diagonal in the above way. Again, it is shown that amorphous diagonal
construction is very satisfactory.

The second diagonal can be constructed analogously; however, the associ-
ated growing point has to secrete a pheromone different from diagonal-
pheromone in order to avoid interference. Once again, the need to incorpo-
rate parametrisable pheromones in gpl becomes clear.

Figure 5.6: A quadrangle's diagonal.

5.4 Triangulation

The triangulation problem is closely connected to the art gallery problem,
which can be formulated as follows [dBvKOS00]: given an art gallery, how
many cameras are required to guard it and how do we decide where to
place them? Since a gallery's floorplan is sufficient information to solve this
problem, we model a gallery as a polygonal region in the plane; camera
positions correspond to points in the plane. Since a camera can rotate
horizontally, it sees those points in the polygon to which it can be connected
by an open line segment that lies in the interior of the polygon, as shown in
figure 5.7. The first part of the art gallery problem concerns the number
of cameras required, a question clearly related to the complexity of the
polygon. Because it is difficult to determine the number of cameras for a
complex polygon, we first decompose it into pieces that are easy to guard,
namely triangles – a process called triangulation.

Figure 5.8 shows, in the middle, a possible triangulation for the polygon
shown on the left. Before discussing the triangulation
problem for an amorphous computer, we trace its further connection to the
art gallery problem. The key result is that for each triangulated polygon there
exists a 3-colouring [dBvKOS00]; an example is shown at the right of figure
5.8. Since a 3-colouring is such that any two vertices connected by an edge
or a diagonal have different colours, it follows that we can guard the polygon
by placing cameras at all equally-coloured vertices, giving an answer as to
where we should place the cameras. To return to the question of how many
cameras are required, choosing the smallest colour class leads to the following
result:

Figure 5.7: A camera's view in an art gallery.

Art Gallery Theorem: For a simple polygon with n vertices,
⌊n/3⌋ cameras are occasionally necessary and always sufficient
to have every point in the polygon visible from at least one of
the cameras.

where simple polygon means that we do not allow regions with holes.

Figure 5.8: A polygon with a possible triangulation and 3-colouring.
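The counting behind the theorem can be checked on small instances. The
sketch below fan-triangulates a convex polygon and 3-colours it; the fan
construction and the colouring scheme are our own illustrative choices (a
fan is only a valid triangulation for convex polygons):

```python
def fan_triangulation(n):
    """Triangulate a convex polygon with vertices 0..n-1 by connecting
    vertex 0 to every non-adjacent vertex (a 'fan')."""
    return [(0, i, i + 1) for i in range(1, n - 1)]

def three_colouring(n):
    """A 3-colouring for the fan: vertex 0 gets colour 0, the remaining
    vertices alternate between colours 1 and 2."""
    return [0] + [1 + (i % 2) for i in range(n - 1)]

n = 6  # a hexagon
triangles = fan_triangulation(n)
colours = three_colouring(n)
assert len(triangles) == n - 2            # a triangulation has n - 2 triangles
for a, b, c in triangles:                 # every triangle sees all three colours
    assert {colours[a], colours[b], colours[c]} == {0, 1, 2}
# cameras at the smallest colour class: at most floor(n/3) of them
assert min(colours.count(k) for k in (0, 1, 2)) <= n // 3
```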

While the above theorem is only optimal in the worst case, it is clear
that polygon triangulation lies at the heart of the problem. Therefore, we
require an algorithm that constructs a triangulation for a given polygon.
Basically, we want to establish a growing point at each vertex that constructs
a diagonal, such that once a set of non-intersecting diagonals that determines
a triangulation is found, the remaining growing points that did not aid in
this construction are cancelled. In order to do this, we should be able to
use the results from the previous section. However, there are three main
differences with the construction of a quadrangle’s diagonals, namely
1. there are an arbitrary number of vertices;
2. a diagonal’s endpoint cannot be predetermined;
3. diagonals cannot intersect.
Let us first look at the generalisation towards polygons with more than four
vertices. First, when constructing a diagonal starting at a given vertex,
we have to know which pheromones are secreted by that vertex as well as
both adjacent ones in order to define negative orthotropisms towards them.
Next, we have to take into account that a diagonal’s endpoint cannot be
uniquely predetermined. For a quadrangle, a diagonal starting at a given
vertex can only end at one vertex; however, for a general polygon there
might be several possibilities. Therefore, we have to change the (ortho+
c-pheromone) tropism from the diagonal growing point in section 5.3 into
positive orthotropisms towards all possible endpoints. A clarifying schematic
representation of this process is given in figure 5.9.

Figure 5.9: Triangulation tropisms.

As in the previous section, we found that listing all these tropisms for each
diagonal results in stuck growing points; however, if one omits the (super-
fluous) negative orthotropism expressions, correct results are obtained. A
preferable solution is to generate this tropism expression for each diagonal
automatically. (Note that in the following reasoning we again use
parametrisation of pheromones.) In order to do this, we require a function
extract-pheromones,
which, given a vertex, the list of all vertices and the associated list of
pheromones (secreted during the polygon’s construction as in section 5.2),
extracts the pheromones to which a diagonal starting at the given vertex
should have a positive orthotropism. Figure 5.10 shows schematically how
this is done through the following steps

• determine the given vertex’ position in the list of vertices;

• locate the pheromone at the same position in the pheromone list;

• return a list containing all pheromones apart from the above deter-
mined pheromone as well as those at the previous and following loca-
tion in the pheromone list.
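The three steps above can be sketched centrally as follows; extract-pheromones
is the function proposed in the text, while the parallel-list representation of
vertices and pheromones is our assumption:

```python
def extract_pheromones(vertex, vertices, pheromones):
    """Return the pheromones a diagonal starting at `vertex` may grow
    towards: all of them except the one secreted by the vertex itself
    and by its two (cyclically) adjacent vertices."""
    i = vertices.index(vertex)          # position of the vertex
    n = len(vertices)
    excluded = {i, (i - 1) % n, (i + 1) % n}
    return [p for j, p in enumerate(pheromones) if j not in excluded]

vertices = ["a", "b", "c", "d", "e"]
pheromones = ["a-pher", "b-pher", "c-pher", "d-pher", "e-pher"]
assert extract_pheromones("a", vertices, pheromones) == ["c-pher", "d-pher"]
```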

Through extract-pheromones, we can establish a diagonal at each vertex
in the following way:

(with-locations
(... x ...)
...
(at x (start-gp (diagonal (extract-pheromones x))
...)

where the dots indicate identical clauses for all vertices; note that the above
code presumes that the lists of vertices and pheromones are defined globally.
The diagonal growing point then includes in its tropism an ortho+ clause
for each of the pheromones in the list returned by extract-pheromones;
these tropism expressions are combined with the tropism or form.
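As an illustration of this generation step, a small helper (hypothetical, and
in Python rather than Scheme for brevity) can assemble the textual tropism
expression from such a pheromone list:

```python
def diagonal_tropism(pheromones):
    """Assemble the tropism s-expression for a diagonal growing point:
    move away from its own trail, towards any admissible endpoint."""
    choices = " ".join("(ortho+ %s)" % p for p in pheromones)
    return "(and (ortho- diagonal-pheromone) (or %s))" % choices

print(diagonal_tropism(["C-pheromone", "D-pheromone"]))
# (and (ortho- diagonal-pheromone) (or (ortho+ C-pheromone) (ortho+ D-pheromone)))
```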

Finally, the last difference with section 5.3 is that diagonals cannot in-
tersect. This actually simplifies the solution a bit, since it can be enforced
by having all diagonals secrete the same pheromone, which each diagonal
then avoids. Putting all these results together, we arrive at the following
diagonal growing point definitions:

(define-growing-point (diagonal-init length pheromones)
  (material diagonal-material)
  (size 0)
  (tropism (ortho+ center-pheromone))
  (avoids diagonal-pheromone)
  (actions
    (secrete+ 1 diagonal-pheromone)
    (when ((< length 1)
           (terminate))
          (default
           (propagate (- length 1))))))

6 Note that in the following reasoning we again use parametrisation of pheromones.

Figure 5.10: The extract-pheromones procedure.

(define-growing-point (diagonal-rest pheromones)
  (material diagonal-material)
  (size 0)
  (tropism (and (ortho- diagonal-pheromone)
                (or (map ortho+ pheromones))))
  (avoids diagonal-pheromone)
  (actions
    (secrete+ 1 diagonal-pheromone)
    (when ((sensing? vertex-material)
           (terminate))
          (default
           (propagate)))))

(define-growing-point (diagonal)
  (material diagonal-material)
  (size 0)
  (actions
    (--> (start-gp diagonal-init 2)
         (start-gp diagonal-rest))))

One issue which is not entirely clear to us at this stage is how to cancel
those growing points still trying to establish a diagonal after the triangula-
tion has finished. A possible solution is to first search for the opposite
vertex and then reconstruct the found diagonal, analogously to the construction
of a robot path with hlsim in section 5.1. In this case, we would only display
the diagonal material when backtracking, thus avoiding a cluttered result due
to unfinished growing points desperately trying to find a vertex while none
are left. However, further investigations are required in order to clarify this
issue. Nevertheless, it should be clear from the above reasoning that solv-
ing the triangulation problem is entirely within reach of gpl. The problem
of shutting down eager growing points after a solution is found is practi-
cal rather than theoretical, since the main goal of constructing a polygon
triangulation is reached anyhow.

While the above general solution is theoretically within reach of gpl, as yet
we have to cope without assets such as pheromone parameterisation. There-
fore, in order to concretely triangulate a specific polygon without jumping
ahead of gpl's capacities, one has to define each diagonal growing point in-
dependently as well as manually. Carrying out pentangle triangulation, we
found that for the order of simulations we could run, the avoids clauses re-
sulted in stuck growing points. This is because for moderately sized simulations
the density of particles is too small to leave enough choice for a diagonal
growing point when encountering other diagonals. Therefore, we only es-
tablished as many diagonals as required for the triangulation in question,
thus eliminating the need for the avoids clause as well as the need to cancel
superfluous diagonals once a triangulation has been constructed. For exam-
ple, for a pentangle with vertices a, b, c, d and e in clockwise direction, we
established the two diagonals A-diagonal and B-diagonal at points a and
b respectively, which are defined in the following way7:

(define-growing-point (A-diagonal-rest)
  (material diagonal-material)
  (size 0)
  (tropism (and (ortho- diagonal-pheromone)
                (or (ortho+ D-pheromone)
                    (ortho+ C-pheromone))))
  (actions
    (secrete+ 1 diagonal-pheromone)
    (when ((or (sensing? C-material)
               (sensing? D-material))
           (terminate))
          (default
           (propagate)))))

7 Without the avoids clause, the definition of diagonal-init is identical to that of
section 5.3.

(define-growing-point (A-diagonal)
  (material diagonal-material)
  (size 0)
  (actions
    (--> (start-gp diagonal-init 2)
         (start-gp A-diagonal-rest))))

(define-growing-point (B-diagonal-rest)
  (material diagonal-material)
  (size 0)
  (tropism (and (ortho- diagonal-pheromone)
                (or (ortho+ D-pheromone)
                    (ortho+ E-pheromone))))
  (actions
    (secrete+ 1 diagonal-pheromone)
    (when ((or (sensing? D-material)
               (sensing? E-material))
           (terminate))
          (default
           (propagate)))))

(define-growing-point (B-diagonal)
  (material diagonal-material)
  (size 0)
  (actions
    (--> (start-gp diagonal-init 2)
         (start-gp B-diagonal-rest))))

Simulation results for a pentangle and (analogous) hexagon triangulation
are shown in figure 5.11.

Figure 5.11: Triangulation of a pentangle and a hexagon.

In the above simulations we found that listing each diagonal's endpoints
through the or compound tropism expression proved to be sequence-dependent.
More specifically, in our simulations only the first tropism in an or form
was successful, whereas trying subsequent tropisms when the first one does
not deliver a possible propagation location resulted in stuck growing points.
Hence, while in principle the or form lists several choices (each of which
should succeed with equal probability), in practice we found that only the first
choice is given a chance of finding a solution. This is clearly a problem, since
in order to find a triangulation at all one should thus know the endpoints
when specifying the tropism for each diagonal. We suggest that the or form
be replaced by a parallel or, which chooses one of its clauses arbitrarily with
each execution, thus giving each tropism an equal chance of determining a
suitable successor location.
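Such a parallel or could behave as in the following Python sketch
(hypothetical, since gpl provides no such form): the clauses are tried in a
freshly shuffled order on every propagation step, so that no clause is
structurally privileged.

```python
import random

def parallel_or(tropisms, candidates):
    """Hypothetical 'parallel or': instead of always trying the tropism
    clauses in textual order, shuffle them on every propagation step, so
    each clause has an equal chance of selecting the successor first."""
    for tropism in random.sample(tropisms, k=len(tropisms)):
        location = tropism(candidates)
        if location is not None:     # first clause that finds a location wins
            return location
    return None                      # no clause succeeds: growing point stuck

# toy tropisms: each accepts only neighbour locations carrying 'its' pheromone
def toward(pher):
    return lambda cands: next((c for c in cands if pher in c[1]), None)

candidates = [("n1", {"d-pher"}), ("n2", {"c-pher"})]
loc = parallel_or([toward("c-pher"), toward("d-pher")], candidates)
print(loc)  # either n1 or n2, each chosen with equal probability
```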

Experimenting further with diagonal construction along the above lines,
we constructed stars inscribed in a pentangle and a hexagon. Each star
shape is defined by connecting several diagonals, defined analogously to
those in the previous section. Simulation results are shown in figure 5.12.

Figure 5.12: Star shapes inscribed in a pentangle and a hexagon.



5.5 Summary
Computational geometry is an important field within computer science with
a whole series of applications, many of which are easily interpretable within
the context of amorphous computing. Each processor governs a geometrical
patch, such that the whole medium cooperates in finding a global solution to
the problem at hand. To show that amorphous computing has the potential
to deal with this kind of problem as well, we introduced the new domain of
amorphous geometry. We covered how a series of typical problems borrowed
from computational geometry are viewed, solved and simulated within the
context of amorphous geometry. First, we discussed the problem of robot
motion planning, both with hlsim and gpl. This highlighted the differences
between the two tools, especially the higher accessibility of the gpl solution,
as well as some of gpl's weaknesses. The latter centre mainly on the
inability to parameterise pheromones and, more fundamentally, the unclear
semantics of ordinary Scheme within the context of gpl. Next, we discussed
the construction of general polygons as well as the generation of a
quadrangle's diagonals, notable problems in themselves which were also
useful in preparation for the final problem discussed, polygon triangula-
tion. While the aforementioned gpl weaknesses reemerged, simulation re-
sults confirmed that amorphous geometry is well within reach of amorphous
computing, encouraging further explorations in these directions and provid-
ing a solid basis for its development into a full-blown branch of amorphous
computing.
Chapter 6

Conclusion

Amorphous computing is an emerging domain in computer science that has
the potential to result in an entirely new computing paradigm, of equal
value to the already existing ones of imperative, functional, logic and object-
oriented programming. However, at this stage only the first tentative, albeit
promising, steps have been made towards the construction of an amorphous
computing paradigm. In order to develop it, one has to extract general
behaviour and engineering techniques from the investigations already car-
ried out within amorphous computing. Therefore, we felt it was necessary to
provide an extensive overview and classification of the existing work within
this field, especially since to our knowledge this has not yet been done.
As opposed to what we found in the literature, we deliberately separated the
layout of amorphous algorithms from their implementation on a specific
amorphous computing medium. Both aspects reside on top of the basic
amorphous computing model, which consists of a multitude of locally com-
municating particles, each with limited computational power. Due to the
expected mass production of these processors, one should design algorithms
and programs which are robust with respect to faulty processors and mes-
sage loss. The crucial difference with previous work on cellular automata
lies in the locality of the model: individual processors have no a priori
knowledge of their environment nor of the global configuration. Since the
achievement of predefined global goals through local interactions only is rem-
iniscent of certain aspects of biology and physics, these sciences were relied
on as a source of metaphors for initial investigations into the potential of
amorphous computation. We classified the resulting algorithms first accord-
ing to the scientific discipline from which they originate, next according to
the specific application within these sciences. Importantly, what we
desire to achieve with these applications is not mere emergent behaviour; as
engineers our goal is to program the amorphous computing particles such
that the resulting global behaviour is as we planned it.


While we specifically discussed metaphors apart from the possible media,
one requires interaction between both aspects of amorphous computing in
order to investigate the field properly. However, we stress that the general
model as well as the metaphorical applications are to be viewed as top level,
in the sense that they can be developed on any medium. Nevertheless, when
implementing an amorphous program on a particular medium one needs to
take into account its specific attributes. Therefore, once again we felt it
was of value to be the first to provide a proper overview of the existing media
and their specifications, as well as to make a proper classification. This
resulted in the partition of amorphous computing media into hardware im-
plementations and simulators. The first class can again be subdivided into
microelectronically based media and biological media. The former is still in
an experimental stage; nevertheless it is already possible to test amor-
phous computing on a small scale, while at the same time investigations
concerning more amorphous-like hardware implementations are carried out.
Tentative explorations of the possible utilisation of biological cells as amor-
phous computing particles are also underway. While this scheme is not yet
within our reach, it is within sight: experiments with biological logic
gates have already delivered promising results. Besides hardware implementa-
tions, the other class of amorphous computing media is that of simulators,
most importantly hlsim, a high level simulator for Gunk. While not en-
tirely user friendly, hlsim provides an adequate basis for testing the validity
and the expected outcome of amorphous programs.

The above classifications of metaphors and media are very useful in trying
to bridge the gap from isolated applications towards a general paradigm.
As yet there exists one concrete proposition for achieving exactly this:
the Growing Point Language (gpl). While we do not claim that gpl is
the solution to the problem of constructing a paradigm, we felt that, be-
cause of its uniqueness, it was important enough to cover in detail. gpl
starts out as another metaphor-based application, but it goes a lot further
in establishing general rules for achieving global goals from local interactions
only. Using pattern generation as a model for global behaviour, it is shown
that a whole range of representations can be generated through the rela-
tively simple abstraction of growing points. Next to this, an overview of the
internal implementation provides insight into which issues one
has to consider when laying the first stones for the development of a fully
operational amorphous computing language. While gpl clearly provides a
first flavour of an amorphous computing paradigm, there nevertheless re-
main several important topics that are not included. First, gpl assumes
faultless communication and processors, which is clearly in violation of
the amorphous computing model. Next, a series of extensions to the lan-
guage itself are required in order to have more powerful growing point
abstractions. Last, it is important to clarify the semantics of Scheme within
the context of gpl, since at this stage it is not possible to interweave both
languages, although the environment provides the possibility of doing just
that. Even better would be to extend gpl into a language that incorporates
all relevant Scheme capabilities, so that it can be viewed as an independent
language specifically designed for amorphous computation, instead of as an
additional tool for Scheme.

As a first fundamental extension to the preliminary amorphous comput-
ing paradigm as thus far constructed, we studied amorphous geometry. The
choice to study computational geometry is justified by it being an impor-
tant discipline of computer science with a broad range of useful applications;
furthermore, within the context of amorphous computation it is very inter-
esting to study geometrical problems, since these can be directly translated
into the physical layout of an amorphous computer. Above all, even at this
stage it is easy to think of interesting applications, such as coloured floor
lines dynamically directing the way around obstacles in buildings, and more
generally any smart surface requiring geometrical information in order to
carry out repairs or interact in any way with its environment. Using gpl
to concretely investigate amorphous geometry further confirmed the previously
stated issues concerning its power as well as its weaknesses. However, it is
clear that amorphous computing is perfectly capable of dealing with this
class of problems as well, and, more specifically, that gpl has the potential
to evolve into a language adequate for expressing amorphous geometric
problems.

Besides extensions to the growing point language and further elaborations
on amorphous geometry, it should be clear from this work that amorphous
computing itself needs to be extended into a fully respectable computational
branch. To achieve this, the main goal is the development of an associated
paradigm. Starting with a general amorphous computing model, one has
to learn how to narrow down the behaviour of each particle in order to achieve
the desired goals. We have already seen that a whole range of problems
can be solved in this way – but what can we conclude in general from all
these particular case studies? How does one restrict local communications
and reactions to messages so that one attains global goals, not merely as
unanticipated emergent behaviour, but intentionally? These questions are the
essence of amorphous computation, and to solve them is to construct an
amorphous computing paradigm. Next to the already existing imperative,
functional, logic and object-oriented paradigms, once again a fundamen-
tally different view on computation will be provided. Hopefully, at that
stage we will have amorphous computers at our disposal, in order to fully
exploit the power of the amorphous computing paradigm.
Appendix: The Secretion Finite State Machine

Following is the complete graphical representation of the secretion finite
state machine. Figure 1 shows arrows labeled with the conditions necessary
to make a transition, where an unlabeled arrow means that the message is
ignored. Next, figure 2 gives an overview of the actions carried out with each
transition. Last, figure 3 clarifies which processes are executed; this figure
provides us with a synthesis of the actual code for the finite state machine.
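Although the diagrams themselves do not reproduce well in this text version,
the structure they describe is that of a table-driven finite state machine, in
which every transition combines a guard condition (figure 1), an action
(figure 2) and a process (figure 3). The following Python skeleton illustrates
this pattern only; the state names are taken from the figures, but the single
transition shown is hypothetical and does not reproduce the actual machine.

```python
# Structural sketch of a table-driven secretion FSM. State names follow the
# figures; the one example transition below is hypothetical, for illustration.
DEFAULT, T_SECRETE, WAIT, T_RESTORE, T_UPDATE, SRC_DEFAULT, SRC_T_RESTORE = range(7)

class SecretionFSM:
    def __init__(self, minimum):
        self.state = DEFAULT
        self.min = minimum
        self.counts = []
        # transition table: (state, message) -> (guard, action, next state)
        self.table = {
            (DEFAULT, "secrete"):
                (lambda ext, cnt: cnt < self.min,           # condition (fig. 1)
                 lambda ext, cnt: self.counts.append(cnt),  # action (fig. 2)
                 WAIT),
        }

    def handle(self, message, *args):
        """Dispatch a message; unlisted messages or failing guards are ignored."""
        entry = self.table.get((self.state, message))
        if entry is None:
            return                    # unlabeled arrow: message ignored
        guard, action, next_state = entry
        if guard(*args):
            action(*args)             # the follow-up process (fig. 3) runs here
            self.state = next_state

fsm = SecretionFSM(minimum=3)
fsm.handle("secrete", 5, 1)   # guard 1 < 3 holds: record count, go to wait
print(fsm.state == WAIT, fsm.counts)  # True [1]
```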

[Figure 1, not reproduced: the secretion finite state machine with states
default (0), t-secrete (1), wait (2), t-restore (3), t-update (4), src-default (5)
and src-t-restore (6); transitions are triggered by the messages
secrete(extent,count), update(extent,old-count,new-count), restore(count) and
timeout, guarded by conditions such as count<min, count+1>=min, count=extent
and count>extent.]

Figure 1: The secretion finite state machine: conditions.


[Figure 2, not reproduced: the same state machine, with transitions now
labeled by their actions, such as add count, replace count, set min,
secrete(MIN), update(MIN,new-min) and restore(new-min).]

Figure 2: The secretion finite state machine: actions.


[Figure 3, not reproduced: the same state machine, with transitions labeled
by the processes executed, such as init, simple, correction?, relevant? and
done.]

Figure 3: The secretion finite state machine: processes.
Bibliography

[AAC+99] H. Abelson, D. Allen, D. Coore, C. Hanson, G. Homsy, T. Knight,
R. Nagpal, E. Rauch, G. Sussman, and R. Weiss. Amorphous computing.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1999.

[AAC+00] H. Abelson, D. Allen, D. Coore, C. Hanson, G. Homsy, T. Knight,
R. Nagpal, E. Rauch, G. Sussman, and R. Weiss. Amorphous computing.
Communications of the ACM, 43(5), May 2000.

[Ada97] S. Adams. A high level simulator for gunk.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1997.

[AKS96] H. Abelson, T. Knight, and G. Sussman. Amorphous computing manifesto.
http://www.swiss.ai.mit.edu/projects/amorphous/white-paper/amorph-new/amorph-new.html,
1996.

[ASS96] H. Abelson, G. Sussman, and J. Sussman. Structure and Interpretation
of Computer Programs, 2nd ed. MIT Press, 1996.

[Bee] W. Beebee. M68HC11 gunk API book.
http://www.swiss.ai.mit.edu/projects/amorphous/HC11/api.html.

[Bee97] W. Beebee. Pulse synchronisation in amorphous computers.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1997.

[CN98a] D. Coore and R. Nagpal. An algorithm for group formation and maximal
independent set in an amorphous computer.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1998.

[CN98b] D. Coore and R. Nagpal. An algorithm for group formation in an
amorphous computer. http://www.swiss.ai.mit.edu/projects/amorphous/, 1998.

[CN98c] D. Coore and R. Nagpal. Implementing reaction-diffusion on an
amorphous computer. http://www.swiss.ai.mit.edu/projects/amorphous/, 1998.

[CNW97] D. Coore, R. Nagpal, and R. Weiss. Paradigms for structure in an
amorphous computer. http://www.swiss.ai.mit.edu/projects/amorphous/, 1997.

[Coo98] D. Coore. Establishing a coordinate system on an amorphous computer.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1998.

[Coo99] D. Coore. Botanical Computing: A Developmental Approach to Generating
Interconnect Topologies on an Amorphous Computer. PhD thesis, MIT, 1999.
http://www.swiss.ai.mit.edu/projects/amorphous/.

[dBvKOS00] M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf.
Computational Geometry: Algorithms and Applications, 2nd ed. Springer, 2000.

[DCG99] M. Dorigo, G. Di Caro, and L. Gambardella. Ant algorithms for
discrete optimization. Artificial Life, 5(3):137–172, 1999.

[DeR97] D. DeRoure. Introductory gunk examples for hlsim.
http://www.swiss.ai.mit.edu/projects/amorphous/6.966/dder-examples/, 1997.

[HKW99] G. Homsy, T. Knight, and R. Weiss. Toward in vivo digital circuits.
DIMACS Workshop on Evolution as Computation, 1999.
http://www.swiss.ai.mit.edu/projects/amorphous/.

[Kat99] J. Katzenelson. Notes on amorphous computing.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1999.

[KS98] T. Knight and G. Sussman. Cellular gate technology.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1998.

[Nag99] R. Nagpal. Organizing a global coordinate system from local
information on an amorphous computer.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1999.

[Rau99] E. Rauch. Discrete, amorphous physical models. Master's thesis, 1999.
http://www.swiss.ai.mit.edu/projects/amorphous/.

[Zuc99] J. Zucker. Self-healing structures in amorphous computing.
http://www.swiss.ai.mit.edu/projects/amorphous/, 1999.