
Draft Manuscript

http://www.csu.edu.au/ci/

Please note that this manuscript has yet to be accepted by Complexity International

Simulation meta-architecture for analyzing the emergent behavior of agents

Lisa A. Schaefer

The MITRE Corporation
McLean, Virginia 22102
Email: LisaAnn@mitre.org

Philip M. Wolfe, John Fowler

Department of Industrial Engineering
Arizona State University
Tempe, AZ 85287
Email: john.fowler@asu.edu, wolfe@asu.edu

Timothy E. Lindquist

Department of Electronics and Computer Engineering
Arizona State University East
Mesa, AZ 85212
Email: tim@asu.edu

Abstract
This paper describes a meta-architecture that includes functional segments, events, and a
communication link between the functional segments and the events. Using this object-oriented
architecture, systems of agents can be analyzed to determine their emergent behavior. In an agent
system, each agent iteratively executes its own set of rules during its lifetime. Each agent is an
individual entity with its own intelligence defined by its rule set. Rule sets for any given system can
have many variations, and it is not known a priori which variation will result in the most desired
outcome. Since each agent is a separate entity, intelligence is distributed throughout the system
rather than residing in a centralized unit. The architecture in this paper is a framework for
experimenting with variations of rule sets to assist in discovering a rule set that results in
desirable system-level behavior. We also describe a case study in which the architecture is used to
simulate rule sets for a group of robot agents and to determine the system-level average effective
speed of the robots resulting from their interactions at the individual robot level.

Received: 31 Aug 2002
Accepted: Pending
© Copyright 2002
http://www.csu.edu.au/ci/draft/schaef01/

1. Introduction
In complex agent systems, many interactions among the agents occur over time as they carry
out their individual rule sets. Multi-agent systems are difficult to analyze because of the many
possible outcomes that can result from slight deviations in each interaction. However, with
modern computing capabilities it is becoming faster and easier to analyze complex systems,
and we see an increasing trend in the number of systems that will be analyzed as multi-agent
systems. We therefore developed a simulation architecture for the analysis of multi-agent
systems. Results of analyses performed within this architecture can be used to quantify the
emergent behavior of agent systems.
Which property of a system the emergent behavior represents depends on the system being
analyzed and on the property the system's observer is looking for. Emergent behavior can be
defined as the system-level behavior resulting from the interactions at the level of the
individual (autonomous) components of the system [1]. According to Maes [2], one of the
founders of agent research, “autonomous agents are computational systems that inhabit some
complex dynamic environment, sense and act autonomously in this environment, and by doing
so realize a set of goals or tasks for which they are designed.” The means by which an agent
decides how to act in its environment can be a brain, as in a bird [3], or the processor in an
autonomous robot.
Mataric [4] conducted experiments with physical robots to understand the system-level
behaviors that emerge in the group when local behaviors are changed at the individual agent
level. The concept behind our simulation architecture is similar: we conduct experiments
within the architecture to understand the system-level behavior of general agent-like entities
as local agent behaviors change. However, since our analysis is a simulation, the experiments
are much cheaper than with physical robots.
Finding emergent behavior is similar to the goal of the architecture described by Oka et al.
[5], in which they attempt to find the desired properties of agents. The architecture described in
this paper gives researchers an object-oriented structure in which to determine the emergent
behavior of an agent system. The researcher can experiment with the behaviors of the agents to
determine which behaviors result in the desired properties of the system.
Object-oriented modeling is well suited for simulating systems of many similar entities that
interact [6]. In such a complex system with many agents, the attributes of each agent or the
system state at future time steps cannot be predicted a priori. Agent simulations have much to
gain from object-oriented programming, since it provides a natural way to model the
agents and their environment, their dependencies, and their relations [7].
The specifications for the emergent behavior analysis architecture are discussed in Section
2 of this paper. The architecture is described in Section 3. Its use is demonstrated in Section 4
through a simulation of a group of free-range mobile robots under several variations of
flocking rules. In Section 5 we conclude with a table of possible applications for our
simulation architecture.

2. Model Specifications
To guide the development of the architecture for this research, model specifications for a
general agent simulation were defined. There are three aspects of this problem we needed to
consider when determining the specifications for our simulation architecture: the fact that the
system will be analyzed with a discrete computer, the definition of the system to be simulated,


and the type of emergent solution desired. We describe these three aspects and their
specifications.
A) Simulation abstraction: Since the simulation will be performed on a computer, the actions
of the real system must be defined in abstract terms. One must be able to mathematically
abstract system and agent parameters and time into computer-understandable relations.
B) System definition: The system to be simulated is a group of many agents changing their n-
dimensional parameters over time according to a set of system-dependent rules. The
actions and results at each time step are dependent upon the rules and the system state at
that moment. The items that exist in the system are the agents and a community
representing a group to which the agents belong. The agents must be able to detect some of the
parameters of other agents in their environment. The agents must be able to execute rules
toward achieving their individual goals.
C) Type of emergent solution: The emergent behavior is the solution to the simulation
analysis in the form of parameters that describe system-level behavior. In order to
determine how the system-level behavior varies as a function of different system
parameters and different agent rule sets, one must be able to change the system parameters
and agent rules according to an experimental design. Each time the design parameters are
reset, the simulation must execute several runs to develop and collect output. After all
output is gathered, one must analyze the output to determine the results.
The specifications derived for each aspect above are summarized in Figures 1a through 1c.

Figure 1a. Specifications to address computer-based simulation: (1) execution order (advance
time, execute agent rules, update attributes); (2) initial conditions (e.g. goal=3, red, blue,
lazy, fast, small); (3) ability to keep track of simulated time.

Figure 1b. Specifications to address the type of system being simulated: (1) many agents;
(2) a group to which the agents are assigned; (3) communication among agents; (4) local
rules; (5) goals; (6) attributes.


Figure 1c. Specifications to address the type of solution required from the simulation:
(1) ability to specify input to the system; (2) ability to change experimental design values;
(3) ability to gather output; (4) analysis.

3. Architecture
The architecture consists of elements that describe the architectural functional segments, the
simulation and analysis events, and the communication that links the segments and their
events. Functional segments are groupings of software entities, such as agents. Events are
groupings of software activities, such as execution of agent rule sets. Communication controls
the timing at which each entity executes a specific activity.
In this section we present high-level and detailed views of the functional segments,
high-level and detailed views of the software events, and the communication that links the
segments with the events. Figure 2 shows the relationships of the architectural elements as a
meta-architecture.

Figure 2. Meta-architecture: functional segments (architectural component diagram, detailed
by the object diagram), events (hierarchical diagram, detailed by the algorithms), and the
communication sequence diagram that links them.

The purpose of the architectural component diagram is to show a general view of the
functional segments of the architecture developed for this paper. The object diagram shows
the implementation view of the functional segments. The purpose of the hierarchical diagram
is to show a general view of the events portion of our architecture. The algorithms show the
implementation view of the events. The communication sequence links functional segments to
events.

3.1 Functional segments


The general view of the functional segments is shown in Figure 3 and mainly consists of
control, community, and analysis. The Platform, which could exist on one or multiple
processors, contains all functional segments required for the simulation described below.


Figure 3. General view of functional segments: architectural component diagram. The
Platform contains the Input, Control, Analysis, and System Behavior segments and the
Community; the Community contains the Simulation and the Agents, each of which has a
Goal, Parameters, and Behavior.

The control segment is in charge of making sure that the main activities required for a
simulation analysis (initialization, simulation, and output analysis) are executed. The input
segment is the interface through which the user specifies simulation parameters.
The community coordinates global data and communication among the group of agents
that belong to it. It contains information that the agents broadcast to the community for
other agents to read. The community is similar to the open software-agent architecture
developed by Martin et al. [8]; in our architecture, however, a simulation capability resides
within the community. The simulation uses the goals, parameters, and behaviors of the agents to
change the system state at each time step. The community can be considered a field for
animals to run on, a factory floor with robots delivering materials, a traffic intersection with
cars and pedestrians as agents, a network of computers, a group of stock traders, or a (virtual)
auction room. Several instances would represent several floors or intersections in a single
system.
Agents can be homogeneous or heterogeneous and can represent any type of real-world
agent (e.g. pedestrians, robots, airplanes, cars, stockbrokers, auction bidders) that can change
with respect to the state of its neighborhood. The state of the neighborhood is described by
characteristics of other agents such as location, speed, or financial wealth. The agents are
objects that all exist on the same virtual machine or each on its own virtual machine, depending
upon the nature of the simulation. The behavior, goals, and parameters describing each agent
reside within the respective agent.
The analysis segment converts data produced by the simulation into an analytical format
that describes the behavior of the system that emerges during the simulation. Table 1 lists
tangible software and hardware entities that can be implemented to instantiate each of the
functional segments.


Table 1. Possible realizations for elements of agent simulation architecture


Functional segment: possible realizations
Platform: virtual machine; processor; network; JavaSpace
Input: text; GUI
Control: main object
Community: instance of an object; machine; server
Community simulation: set of object methods; agent-accommodating simulation package
Agents: instances of objects; machines; clients
Agent behavior: set of object methods
Analysis: statistical software; user-defined program; spreadsheet
System behavior: equation; system state defined by parameter sets

Figure 4 shows a more detailed description of the architecture for the simulation and
analysis of emergent behavior in the form of an object diagram. Three types of objects are
required for this simulation: a main program that controls the execution of each functional
mode of the simulation, the community in which the agents are grouped, and the agents
themselves. The main program embodies the control element of the agent simulation
architecture shown in Figure 3. The community object controls the simulation and owns the
groups of agents that belong within the community. Each instance of an agent within the
community inherits from an agent interface. Agents have rules and goals. The particular rules
and goals reflect the type of real-world entity that the agent represents. Pedestrians may have a
goal that represents a destination and rules for walking. Robots may have goals to locate all
land mines and rules for searching. An airplane's goal may be to arrive on time, with rules
drawn from air traffic control flight procedures.

Figure 4. Implementation view of functional segments: object diagram. The Main object
(inputs, outputs; initialize system, execute simulation, data manipulation) creates the
Community and the Agents. The Community holds the number of agents in the system, the
number of time steps in a run, the number of runs, agent group characteristics, a vector of
agents, and the system state, and provides the time loop, initiation of the agent simulation
loop, the check of agents in the neighborhood, and the check for inconsistencies. Each Agent
holds its agent characteristics, current state, and goal location, and provides decision rules,
an operation to try to reach its goal, and an operation to broadcast information. Pedestrian,
Robot, and Airplane are specializations of Agent.
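To make the object diagram more concrete, the following Python sketch outlines one possible realization of the object types in Figure 4. Class and member names follow the figure, but the method signatures, types, and the Robot placeholder are illustrative assumptions rather than the exact implementation used in the case study; a Main (control) object would create one Community and its Agents and then invoke initialization, simulation, and data manipulation in turn, as sketched after Figure 6.

# Minimal Python rendering of the Figure 4 object diagram (names follow the
# figure; signatures and types are illustrative assumptions).
from abc import ABC, abstractmethod

class Agent(ABC):
    """Agent interface: each agent owns its characteristics, state, goal, and rules."""

    def __init__(self, characteristics, goal_location):
        self.characteristics = characteristics    # agent characteristics / current state
        self.goal_location = goal_location        # goal location

    @abstractmethod
    def execute_rules(self, neighborhood):
        """Decision rules: try to reach the goal, given nearby agents' broadcasts."""

    def broadcast(self):
        """Information published to the community for other agents to read."""
        return {"characteristics": self.characteristics, "goal": self.goal_location}

class Robot(Agent):                                # Pedestrian and Airplane are analogous
    def execute_rules(self, neighborhood):
        pass                                       # e.g. homing and safe-wandering (Section 4)

class Community:
    """Owns the group of agents, the system state, and the time loop."""

    def __init__(self, agents, num_time_steps, num_runs):
        self.agents = agents                       # vector of agents
        self.num_time_steps = num_time_steps       # number of time steps in a run
        self.num_runs = num_runs                   # number of runs
        self.system_state = {}

    def neighborhood_of(self, agent):
        """Check agents in the neighborhood (here simply all other agents' broadcasts)."""
        return [other.broadcast() for other in self.agents if other is not agent]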


3.2 Events
The general view of the functional segments relates to control, community, and analysis,
whereas the general view of the events relates to initialization, simulation, and analysis.
Within initialization, the community object and agent objects are given all characteristics
needed for participation in the simulation. The simulation activity loops through each
time step to call each agent to execute an iteration of its rules. The data manipulation activity
transforms output into usable information. The general view of the events is shown in Figure 5.
Figure 5. General view of events: hierarchical event diagram. Execution comprises three
activities: Initialize (read input; initialize the community and the agents), Simulate (execute
rules; check for inconsistencies; calculate performance measures; increment time step), and
Data Manipulation (change experimental design values; storage and analysis).

Each of the three activities contains several subactivities. Within initialization, input
sources are read and assigned as initial values to the community and agent attributes shown in
the object diagram. Within simulation, agent rules are executed, checks for inconsistencies
validate the system state, and system-level performance is assessed at each time step. Within
data manipulation, parameters are transformed to represent the desired experimental design.
Attribute values in either the community or agents can be changed, agents can be created or
destroyed, or an indication of whether a certain rule set should or should not be executed
during subsequent time steps can be set. Upon completion of all time steps, the overall
system-level performance throughout the entire simulation (emergent behavior) is assessed.

3.3 Algorithms
The general algorithms for execution and the three activities are shown in Figure 6. The
general algorithm for behavioral rule execution is shown in Figure 7. Inconsistency check and
performance calculation algorithms are specific to each system being simulated. The time
increment can be a discrete event calendar or an iterative loop. A detailed example of all
simulation algorithms, with specific behavioral rules, for a robot application is included in
Section 4.


Highest-level execution algorithm:
    Initialize system
    Execute simulation
    Manipulate output

Simulation algorithm:
    While time remaining
        For all agents
            Execute rules
        For all agents
            Check if inconsistencies exist
            If true, correct inconsistencies
        Update current values of performance measures
        Increment time

Initialization algorithm:
    Read input
    Create community
    Initialize system variables
    Create agents
    Initialize agent variables

Output manipulation algorithm:
    If at end of time step
        Write performance measures to file
    If n runs complete
        Change system and/or agent parameters
    If all combinations of parameters complete
        Perform data analysis

Figure 6. Implementation view of events: execution, initialization, simulation, and output algorithms
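As a sketch of how the Figure 6 algorithms map onto the object skeleton given with Figure 4, the Python fragment below wires the execution, initialization, simulation, and output steps together. The inconsistency check, performance measure, and analysis steps are injected as plain functions because they are system-specific; all names here are illustrative assumptions rather than the original implementation.

# Illustrative driver for the Figure 6 algorithms, assuming the Community/Agent
# sketch given with Figure 4. System-specific pieces are injected as functions.

def simulate(community, check_and_fix, measure):
    """Simulation algorithm: while time remains, run every agent's rules,
    repair inconsistencies, then record performance for that time step."""
    measures = []
    for _ in range(community.num_time_steps):
        for agent in community.agents:
            agent.execute_rules(community.neighborhood_of(agent))
        for agent in community.agents:
            check_and_fix(agent, community)        # check for / correct inconsistencies
        measures.append(measure(community))        # e.g. current effective speed
    return measures

def execute(design_points, num_runs, build_community, check_and_fix, measure, analyze):
    """Highest-level execution algorithm: initialize, simulate, manipulate output."""
    output = []
    for params in design_points:                   # change experimental design values
        for run in range(num_runs):
            community = build_community(params)    # initialization (fresh state each run)
            output.append((params, run, simulate(community, check_and_fix, measure)))
    analyze(output)                                # data analysis across all runs

Keeping the system-specific pieces as injected functions mirrors the architecture's separation between the general events and the rule sets and performance measures that vary from one application to another.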

If at goal
    Calculate new goal
Check state of neighborhood
Execute rules

Figure 7. Behavioral rule execution algorithm

3.4 Communication
Communication is the link between the functional segments and events. Figure 8 is an activity
diagram that describes the order of events during a simulation run. The arrows depict the
direction of the communication that occurs among the main execution object, community, and
agents when each algorithm is called. The sequence in which the communication causes
algorithms to execute can be followed from top to bottom of the communication diagram.

Figure 8. Communication link between objects and algorithms (sequence from top to bottom):
Main reads input and creates the Community, which creates the Agents. During simulation,
the Community directs each Agent to execute its rules (check goal, gather system parameters,
make a decision, update attributes), checks for inconsistencies, and increments the time step.
Main then changes parameters for the experimental design, performs output manipulation,
and finally destroys the objects.


4. Implementation
The simulation architecture documented above was implemented in an object-oriented
language. We used the architecture on a case study for determining the emergent behavior of
an agent system. The classification of agents we chose to simulate (out of the three types
given by Maes [9]) was a system of autonomous robots. An analysis was performed to
determine an appropriate navigation rule set to insert in the simulation architecture described
above for simulating free-range (as opposed to track-guided) mobile robots for material
handling in a manufacturing cell [10].
The navigation rules used in this study were documented in Mataric [4]. Mataric
experimented with a herd of terrain-searching robots. Part of her thesis focused on the
combination of basis behaviors to form higher-level behaviors, such as flocking or herding, as
shown in Figure 9. Basis behaviors are listed at the bottom with the higher levels of behaviors
in the upper layers. The basis behaviors are an analysis and extension of Reynolds’ [3]
flocking algorithms.
Mataric’s [4] agents were programmed with a set of four basis behaviors: homing, safe-
wandering, dispersion, and aggregation. Since the agents in our study do not need to stay in a
group, aggregation was determined to be unnecessary for our agents. We decided to call the
higher-level behavior necessary for our research “delivery.” This behavior requires only
homing and safe-wandering. Thus the “delivery” (delivery agents also discussed in [11])
behavior was added to Mataric’s behavior-level diagram, shown in Figure 9. Mataric’s
original diagram is depicted in regular font. The additions to the original diagram created for
this document are depicted in bold.

Figure 9. Behavior-level diagram of mobile robots (modified from Mataric [4]). Basis
behaviors (bottom level): Homing, Safe-wandering, Dispersion, Aggregation, Following.
Intermediate level: Delivery (added for this work), Flocking, Surrounding. Top level: Herding.

The basis behavior “Homing” is called at the “Execute Rules” command of the rule
execution algorithm in Figure 7. The basis behavior “Safe-Wandering” is a collision
avoidance strategy and is also part of the “Execute Rules” call of the rule execution
algorithm. The homing and safe-wandering behaviors [4] are described by the very simple
rules shown in Figure 10.

Homing algorithm:
    Turn toward direction of goal location
    Go forward
    If at goal, stop

Safe-wandering algorithm:
    If robot is in path
        If at the right only
            Turn left, go forward
        If at the left only
            Turn right, go forward
        If on both sides
            Wait

Figure 10. Homing rule set and safe-wandering rule set [4]
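For concreteness, one straightforward way to encode the Figure 10 rule sets is sketched below in Python. The geometry convention (heading measured counterclockwise), the turning angle, the step size, and the left/right sensing flags are our own illustrative assumptions and are simpler than the full rule sets documented in Schaefer [10].

# Sketch of the Figure 10 rule sets; parameter values and the sensing flags are
# illustrative assumptions, not those used in the case study.
import math

def homing_step(x, y, heading, goal, speed, goal_tolerance=0.5):
    """Homing: turn toward the goal location, go forward, stop when at the goal."""
    gx, gy = goal
    if math.hypot(gx - x, gy - y) <= goal_tolerance:
        return x, y, heading                       # if at goal, stop
    heading = math.atan2(gy - y, gx - x)           # turn toward direction of goal location
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading

def safe_wandering_step(x, y, heading, speed, robot_on_right, robot_on_left,
                        turn=math.radians(30)):
    """Safe-wandering: if another robot is in the path, steer away from it or wait."""
    if robot_on_right and robot_on_left:
        return x, y, heading                       # robots on both sides: wait
    if robot_on_right:
        heading += turn                            # robot at the right only: turn left
    elif robot_on_left:
        heading -= turn                            # robot at the left only: turn right
    return x + speed * math.cos(heading), y + speed * math.sin(heading), heading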


Each agent was modeled as an instance of the same object in an object-oriented
programming language. All agents therefore follow the same set of navigation rules and have
the same set of attributes, although the values of the attributes may differ for each agent.
Each agent is given a random destination location as its goal and a random starting location.
Other attributes of the agents in this analysis include speed, x and y values for location,
direction, goal location, current distance from the goal, turning angle, distance traveled toward
the goal during the current time step, and size. If animation is used, color may also be an attribute.
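The attribute list above maps naturally onto a simple data structure. The sketch below uses a Python dataclass whose field names follow the text; the types, default values, and the random initialization helper are illustrative assumptions.

# Illustrative container for the agent attributes listed above; types, defaults,
# and the initialization helper are assumptions.
import math
import random
from dataclasses import dataclass

@dataclass
class RobotAgent:
    speed: float                       # desired speed
    x: float                           # location
    y: float
    direction: float                   # heading, in radians
    goal: tuple                        # goal (destination) location
    turning_angle: float
    size: float
    distance_to_goal: float = 0.0      # current distance from goal
    distance_traveled: float = 0.0     # distance moved toward goal this time step
    color: str = "blue"                # used only if animation is enabled

def random_robot(floor_width, floor_height, speed=1.0, turning_angle=0.5, size=1.0):
    """Each agent receives a random start location and a random goal, as described above."""
    return RobotAgent(speed=speed,
                      x=random.uniform(0.0, floor_width),
                      y=random.uniform(0.0, floor_height),
                      direction=random.uniform(0.0, 2.0 * math.pi),
                      goal=(random.uniform(0.0, floor_width),
                            random.uniform(0.0, floor_height)),
                      turning_angle=turning_angle,
                      size=size)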
The emergent behavior of this system was represented by equations that assist in answering
three questions from our analysis:
1. How many agents can exist on a given floor space before congestion becomes a problem
(“problem” is user-defined)?
2. What is the degree of inefficiency in delivering material due to maneuvering around other
agents?
3. Which system characteristics affect that inefficiency?
Visual animation was used to qualitatively assess whether the system state had any
inconsistencies at each time step, such as agents running over each other. Figure 11 depicts a
screen shot of the animation. A collision check (the algorithm shown in Figure 12) was
performed at the end of each time step to verify that agents did not overlap. If an agent has
an inconsistency, its location is not advanced, so it stops at its location before the collision.
It then continues to execute the safe-wandering rules at the next time step. The simulation
portion of Figure 5 is recreated in Figure 13 with the specific steps for this application
substituted in each module.

Figure 11. Screen shot of agents navigating themselves on a floor space

For all agents
    Check all other agents
        If distance to agent < (agent size + safety factor)
            Number of collisions ++
            Tag agent as having inconsistency

Figure 12. System state validation algorithm
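A direct transcription of the Figure 12 check into Python might look like the following; the safety factor value and the way agents are tagged (an inconsistent flag set on the agent) are illustrative assumptions.

# Sketch of the Figure 12 validation; each pair of agents is checked once.
import math

def check_collisions(agents, safety_factor=0.1):
    """Count pairwise collisions and tag the agents involved so their positions
    are not advanced this time step (they stop short of the collision)."""
    num_collisions = 0
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:                   # check all other agents
            if math.hypot(a.x - b.x, a.y - b.y) < (a.size + safety_factor):
                num_collisions += 1
                a.inconsistent = True              # tag agent as having inconsistency
                b.inconsistent = True
    return num_collisions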


Figure 13. Simulation hierarchy for the physical agent case study. Simulate comprises:
Homing and Safe-wander (determine if agents are on the left and/or right; determine the
angular direction to the goal; turn, stop, or go forward), Check for Collisions, Calculate
effective speed and potential collisions, and Increment Time Step.

Within the architecture, we varied the safe-wandering rule sets and parameters according to an
experimental design. After the performance measures were saved to a file, the output was analyzed
using spreadsheets and statistical software to determine the emergent behavior of the system.
The emergent behavior for this system was defined as a set of equations that specify the
system performance measures (effective speed and collision probability) as a function of the
system parameters. The equations can be used to determine good ranges of parameters for running
an actual system. A discussion of the results of this analysis can be found in Schaefer [10].
Figure 14 shows one of the equations and the plots of this equation, derived from
simulating the system using the safe-wandering rule set shown in Figure 10. The plot shows
that the average speed of the agents is reduced as the amount of floor space occupied by
agents and the variation in the directions the agents are traveling increase. The average speed
does not increase significantly when the agents try to travel faster if the floor space occupancy
or directional variation is high.


Figure 14. Effective speed as a function of floor space utilization, amount of variation in
agent travel direction, and agents' desired speed.


5. Conclusion
The framework described in this paper allows one to experiment with and tune the items
depicted in Figures 1a through 1c to determine a system's emergent behavior, as we did in
Section 4 with navigation rules and their parameters. Table 2 summarizes other potential
applications that can be simulated within the architecture described in this paper. For each of
the applications listed, examples of agents, communities, and their attributes are shown.

Table 2. Possible applications of agent simulation architecture


Auction — Agents: buyer, seller; Agent attributes: money available; Agent goals: minimize price (buyer), maximize profit (seller); Community: auction room; Community attributes: buyer membership, seller membership; Emergent behavior: final prices; References: [12] (Wurman et al. 1998).

Criminal behavior — Agents: criminal; Agent attributes: last hit; Agent goals: steal, hide, find loot; Community: city, country; Community attributes: set of financial institutions; Emergent behavior: pattern of victims; References: [13].

Insect habits — Agents: ants, roaches; Agent attributes: gender, last mealtime; Agent goals: follow pheromones, find food, find mate; Community: picnic area; Community attributes: list of ants; Emergent behavior: ant trail; References: [14, 15, 16].

Material handling — Agents: robots, parts; Agent attributes: speed, current location, travel direction, next machine; Agent goals: go to machine, minimize time, avoid collisions; Community: factory floor; Community attributes: set of active robots, machine locations; Emergent behavior: delivery rate; References: [10, 17, 18].

Robot soccer — Agents: robots, ball; Agent attributes: have ball, current location, team designation; Agent goal: bring ball to location; Community: field; Community attributes: set of robots in each team; Emergent behavior: winning team; References: [19].

Scheduling — Agents: parts, machines; Agent attributes: process time, start time; Agent goals: schedule, minimize wait; Community: manufacturing cell; Community attributes: machine set, parts set; Emergent behavior: schedule; References: [20].

Software virtual machine — Agents: object code; Agent attributes: methods, attributes, object type; Agent goals: access other processors, change parameters; Community: computer network; Community attributes: computers, printers, etc. currently on line; Emergent behavior: filter unwanted information; References: [21].

Stock market — Agents: traders; Agent attributes: money available; Agent goal: maximize portfolio; Community: trading room; Community attributes: available stocks; Emergent behavior: portfolio profile; References: [22].

Street — Agents: pedestrians, vehicles; Agent attributes: size, speed; Agent goals: avoid collisions, minimize time, cross intersection; Community: intersection; Community attributes: size, pedestrian count, vehicle count; Emergent behavior: throughput; References: [1, 17].

Terrain searching — Agents: robots; Agent attributes: speed; Agent goals: find a soil sample, find a mine; Community: distant planet, mine field; Community attributes: set of robots, mine locations; Emergent behavior: rate of locating items; References: [4].

War games — Agents: soldiers, weapons; Agent attributes: ammo power; Agent goal: hit target; Community: battle field; Community attributes: set of targets; Emergent behavior: target accuracy; References: [23].


Acknowledgements
This work was partially supported by the Federal Highway Administration under contract
DTFH61-97-P-00137.

References
[1] K. L. Sanford, J. A. Wentworth, and S. McNeil, “A-Life in the Fast Lane,” in Proceedings
of the AI 1995 15th International Conference, 1995, pp. 431-440.
[2] P. Maes, “Artificial Life meets Entertainment: Interacting with Lifelike Autonomous
Agents,” Communications of the ACM vol. 38-11, pp. 108-114, 1995.
[3] C. W. Reynolds, “Flocks, herds, schools: A distributed behavioral model,” Computer
Graphics vol. 21-4, pp. 25-34, 1987.
[4] M. Mataric, “Designing and understanding adaptive group behavior,” Adaptive Behavior
vol. 4-1, pp. 51-80, 1995.
[5] T. Oka, J. Tashiro, and K. Takase, “Object-oriented BeNet programming for data-focused
bottom-up design of autonomous agents,” Robotics and Autonomous Systems 28, pp.
127-139, 1999.
[6] I. Jacobson, Object-Oriented Software Engineering, ACM Press and Addison-Wesley:
Reading MA, 1990.
[7] I. Tzikas, “Agents and Objects: The Difference,” M.Sc. Thesis, Department of Computer
and Systems Sciences, Stockholm University, 1997.
[8] D. Martin, A. Cheyer, and D. Moran, “The Open Agent Architecture: A framework for
building distributed software systems,” Journal of Autonomous Agents and Multi-Agent
Systems vol. 4, pp. 143-148, 2001.
[9] P. Maes, “Modeling adaptive autonomous agents,” Artificial Life vol. 1, 1-2, ed. C. G.
Langton, Cambridge, MIT Press, 1995.
[10] L. A. Schaefer, “Flow Theory for Flocks of Autonomous Mobile Physical Agents,” Ph.D.
Thesis, Department of Industrial Engineering, Arizona State University, 2000.
[11] B. Hayes-Roth, K. Pfleger, P. Lalanda, P. Morignot, and M. Balabanovic, “A Domain-
Specific Software Architecture for Adaptive Intelligent Systems,” IEEE Transactions on
Software Engineering, 1994.
Werner, G. M. and Dyer, M. G. 1992. Evolution of herding behavior in artificial animals.
Proceedings, From Animals to Animats 2, Second International Conference on
Simulation of Adaptive Behavior, 432-441.
[13] Hexmoor, H., Lafary, M., and Trosen, M. 1999. Adjusting Autonomy by Introspection.
AAAI Spring Symposium, Stanford, CA.
[14] Millonas, M. M. 1992. A connectionist type model of self-organized foraging and
emergent behavior in ant swarms. Journal of Theoretical Biology 159, 529-552.


[15] Schweitzer, F., Lao, K., and Family, F. 1997. Active random walkers simulate trunk trail
formation by ants. Biosystems 41, 153-166.
[16] Parunak, V. 1996. Go to the ant. Proceedings from the Santa Fe Institute Conference, 15-
21.
[17] Schaefer, L. A., Mackulak, G. T., Cochran, J. K, and Cherilla, J. L. 1998. Application of
a General Particle System Model to Movement of pedestrians and vehicles. In
Proceedings of the 1998 Winter Simulation Conference, 1155-1160.
[18] Doty, K. L. and Van Aken, R. E. 1993. Swarm robot materials handling paradigm for a
manufacturing workcell. IEEE International Conference on Robotics and Automation.
[19] Balch, T. 2000. Hierarchic Social Entropy: An Information Theoretic measure of robot
group diversity. Autonomous Robots 8, 3.
[20] Baker, “Metaphor or Reality: A Case Study where Agents Bid with Actual Costs to
Schedule a Factory,” Market-Based Control: A Paradigm for Distributed Resource
Allocation, S. H. Clearwater (ed.), River Edge, NJ: World Scientific Publishing Co., pp.
184-223, 1996.
[21] De Bra, P. and Post, R. 1994. Information Retrieval in the World-Wide Web: Making
Client-Based Searching Feasible. Journal on Computer Networks and ISDN Systems 27,
183-192.
[22] Kephart, J. O., Hanson, J. E., and Greenwald, A. R. 2000. Dynamic pricing by software
agents.
[23] Lee, T., Ghosh S., and Nerode A. 2000. Asynchronous, Distributed, Decision-Making
Systems with Semi-Autonomous Entities: A Mathematical Framework. IEEE
Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 30, 1.
Sycara, K., Decker, K., Pannu, A., Williamson, M., and Zeng, D. 1996. Distributed Intelligent
Agents. IEEE Expert.
Tambe, M., Pynadath, D., and Chauvat, N. 2000. Building dynamic agent organizations in
cyberspace. IEEE Internet Computing 4, 2.

