
A Behavior-Based Intelligent Control Architecture with Application to Coordination of Multiple Underwater Vehicles
Ratnesh Kumar*
Dept. of Electrical Engineering, Univ. of Kentucky
Lexington, KY 40506
Email: kumar@engr.uky.edu

James A. Stover**
Applied Research Lab., Pennsylvania State University
University Park, PA 16802
Email: jjs5@psu.edu

November 20, 2005
* The work was completed while the first author was on a sabbatical leave at the Applied Research Laboratory (ARL) on an invitation from Prof. Asok Ray of PSU and Dr. Shashi Phoha of ARL. The support was provided by the University of Kentucky, and by ARL through an Office of Naval Research (ONR) grant. The authors wish to thank Dr. Jerry Shelton of ARL for his valuable inputs in formalizing some of the concepts of the intelligent control architecture, and Mr. Atilla Kiraly for assisting with developing the figures that model the response controller of the underwater vehicles.
** This research is supported in part by the National Science Foundation under Grants NSF-ECS-9409712 and NSF-ECS-9709796, and in part by the ONR under Grant ONR-N00014-96-1-5026.
Abstract

The paper presents a behavior-based intelligent control architecture for designing controllers which, based on their observation of sensor signals, compute discrete control actions. These control actions then serve as the set-points for the lower level controllers. The behavior-based approach yields an intelligent controller which is a cascade of a perceptor and a response controller. The perceptor extracts the relevant symbolic information from the incoming continuous sensor signals, which enables the execution of one of the behaviors. The response controller is a discrete event system that computes the discrete control actions by executing one of the enabled behaviors. The behavioral approach additionally yields a hierarchical, two-layered response controller, which provides better complexity management. The inputs from the perceptor are used first to compute the higher level activities, called behaviors, and next to compute the corresponding lower level activities, called actions. This paper focuses on the discrete event subsystem, namely the response controller. We illustrate the intelligent control architecture by discussing its application to the design of intelligent controllers for autonomous underwater vehicles used for ocean sampling missions. A complete set of discrete event models of the response controllers of the underwater vehicles for this application is given, and their formal verification is discussed.
Keywords: Intelligent control, discrete event control, hierarchical control and coordination,
autonomous underwater vehicles, ocean sampling
1 Introduction

Many of the dynamical systems that need to be controlled, called plants, are complex, large-scale, highly non-linear, time-varying, stochastic, and operate in an uncertain and unpredictable environment. As a result of these characteristics, such dynamical systems are not amenable to accurate modeling. Hence, conventional control techniques that are model-based, i.e., that rely on the plant model, are not suitable for controller design for such systems. The difficulties in extracting a model of a plant using physical laws are many, and include: (i) the plant behavior is too complex to understand, (ii) the models are difficult or expensive to evaluate, (iii) the plant behavior is subject to unpredictable environmental disturbances, and (iv) the plant behavior is distributed, non-linear, time-varying, and stochastic.
Moreover, conventional controllers are designed primarily for purposes such as stabilization, regulation and tracking, robustness, disturbance rejection, model matching, and performance optimization, and they use the plant model for control design. However, the control requirements for complex systems go far beyond those mentioned above and include additional requirements such as reconfigurability, learning capability, safety, failure and exception handling, the capability to manage dynamically changing mission goals, multi-system coordination, and increased autonomy, many of which are elastic (such as control of room temperature).

The inadequacy of the conventional control techniques for the reasons described above, namely, the need to (i) control systems without accurate models of them, and (ii) meet specifications beyond the scope of conventional control, has led to research into non-conventional control techniques, also called intelligent control. Intelligent control offers an alternative to conventional control for designing controllers whose structure and consequent outputs in response to external commands and environmental conditions are determined by empirical evidence, i.e., the observed input/output behavior of the plant, rather than by reference to a mathematical or model-based description of the plant. For an exposition of intelligent control techniques readers are referred to the edited volumes [10, 21, 5, 9].
There is little to be gained by intelligent control when the plant model is well known and the control requirements fall within the scope of conventional control. For this reason the control is generally hierarchically structured as shown in Figure 1, where conventional control is exercised at the lower level, whereas intelligent control is used at the higher level.

Figure 1: Control hierarchy with an intelligent controller (the intelligent controller sits above the conventional controllers, which in turn control the plant)

The lower level conventional controllers are model-based, offering conventional control capabilities; the higher level intelligent controllers, on the other hand, operate on models constructed through empirical evidence, offering control capabilities beyond the purview of conventional control. Tasks are delegated from the higher level to the lower level, whereas the sensory feedback is passed from the lower level to the higher level.
Given that intelligent control is used when plant models are ill-defined and control requirements are beyond the scope of conventional control, such controllers are inherently non-linear. Several techniques for such non-linear controller design have been proposed in the literature, including expert systems, fuzzy logic systems, artificial neural networks, and genetic algorithms.

Although various alternative techniques for intelligent control are being actively researched, there is comparatively little research effort directed towards the design of intelligent control architectures.
One such architecture, by Saridis [19], is hierarchical with three layers: the execution layer at the bottom, the coordination layer in the middle, and the organization layer at the top. One of the main ideas of this architecture is the principle of increasing intelligence with decreasing precision. Meystel [17] has proposed a nested hierarchical control architecture for the design of intelligent controllers. A model-based autonomous systems architecture by Zeigler-Chi [22] consists of models of planning, operations, perceptions, diagnostics, and recovery. These models are systematically used for achieving control with a high degree of autonomy. A general approach to task-based model development is also presented. An architecture consisting of a network of intelligent nodes is proposed by Levis [16] as a model for distributed intelligent systems. Intelligence in each node is the consequence of its five-stage model, namely, situation assessment, information fusion, task processing, command interpretation, and response selection.
Another architecture, called the real-time control system (RCS) reference model architecture, is by Albus [4]. RCS is also arranged in a hierarchy, where each node in the hierarchy performs sensor processing, value judgment, world modeling, and behavior generation at a level of abstraction and resolution appropriate for the position of the node in the intelligent control hierarchy. The sensor processing subsystem receives sensor inputs from the environment and their predicted values from the world modeling subsystem, and provides updates for the world modeling by comparing the two signals. It also extracts relevant features from the sensor inputs, which are evaluated for significance by the value judgment subsystem and then passed on to the world modeling subsystem for updating its database. The behavior generation subsystem generates a plan of actions based on both the current state available from the world modeling subsystem and its knowledge of the mission goals. The world modeling subsystem receives the plan from the behavior generation subsystem, simulates it, and sends the results of the simulation to the value judgment subsystem for evaluation. The value judgment subsystem then passes its plan evaluations to the behavior generation subsystem. Once a plan is finalized, the behavior generation subsystem issues actuator commands. Another control computation architecture, called the cerebellar model articulation controller (CMAC), was also proposed by Albus [3, 2] to model control computations in intelligent biological systems.
A structure-based hierarchical architecture is proposed by Acar-Ozguner [1]. It embeds intelligence in control via a special hierarchical organization based on the physical structure of the system. Each node in the hierarchy is systematically assigned its intelligence or functionality through its accomplishable tasks and the procedures to accomplish them.

Another architecture, called the subsumption architecture, is by Brooks [6]. This architecture is based on the idea of levels of increasing competence of an intelligent system, which need to be identified at the beginning of the design phase. The design then proceeds by constructing a control system that achieves the level-zero competence. Next, a new control layer is added which examines signals from level zero and also injects signals into level zero to either inhibit or replace an existing signal. This layer, with the aid of level zero, achieves the level-one competence, and so on.
In this paper we present a new architecture, called the behavior-based intelligent control architecture, for designing intelligent controllers. The controllers, based on their inputs of sensor signals and mission goals, compute control actions. As shown in Figure 1, these control actions then serve as set-points for the lower level conventional controllers. The intelligent control architecture is a cascade of four subsystems: the input interface, the perceptor, the response controller, and the output interface. The perceptor extracts the relevant symbolic information from the incoming continuous sensor signals, while the response controller is a discrete event system [7, 12] that computes discrete control actions in response to the discrete inputs from the perceptor.

Our approach to the design of intelligent controllers is behavioral. Behaviors are certain high level activities (that are independent of each other) that determine the manner in which the system reacts to changing external/environmental conditions and thereby executes subtasks of the given mission tasks. Behaviors can be configured in different modes, and each behavior mode can be executed by the execution of certain primitive activities, called actions. This behavioral approach to intelligent control design naturally yields:
1. The perceptor subsystem, which has the task of extracting those features from incoming continuous sensor signals that enable the execution of one of the behaviors.

2. The response controller subsystem with a two-layered hierarchy, where the lower layer consists of behavior controllers (one per behavior) and the higher layer consists of a single coordinator. An implicit advantage of such a hierarchically structured response controller is that it provides better complexity management.

Thus in our approach the design of an intelligent controller requires identifying certain high level activities, the behaviors, that the system under control should exhibit; next identifying the different modes of each behavior; and finally identifying the primitive actions which, when executed in an appropriate sequence, execute a specific behavior mode. An action is a primitive activity that is executed by executing a unique associated algorithm that computes the set-points for the lower level controllers. The perceptor is designed so as to identify the external/environmental conditions that lead to the enablement of the appropriate behaviors in the response controller.
This paper focuses on the details of the response controller subsystem; an introduction to the perceptor subsystem can be found in [20]. Section 2 describes an application, ocean sampling using a network of autonomous underwater vehicles, where the intelligent controllers for the underwater vehicles have been designed in the architecture presented here. An overview of the intelligent control architecture is presented in Section 3. The architectural details of the response controller are discussed in Section 4. Section 5 discusses the interacting-automata based discrete event system model of the response controller that is used for its analysis and verification. Section 6 details the intelligent control design for the autonomous underwater vehicles in our architecture and gives the discrete event models of the response controllers of the underwater vehicles, whereas the control (mission) specification models used for verification are presented in Section 7. Section 8 concludes the work presented here, and a preliminary introduction to discrete event systems is presented in the appendix. Some results of this paper were presented in the conference papers [13, 14].
2 An application of the intelligent control architecture

We use a SAmpling MObile Network (SAMON) of autonomous underwater vehicles (AUVs) to illustrate the intelligent control architecture [18]. The controller for each of the AUVs in this application is designed in the intelligent control architecture presented here. A schematic diagram of SAMON is shown in Figure 2.
Figure 2: Schematic diagram of ocean SAMON (the TC at the top, SAUVs below it, AUVs below the SAUVs, and FSPs at the bottom)
A typical ocean sampling mission begins by seeding the ocean floor of the region of interest with a number of fixed sensor packages (FSPs) and deploying a group of autonomous underwater vehicles (AUVs) to explore this region and gather data from the FSPs. FSPs are capable of sensing parameters of interest, recording data, and communicating the recorded data by sonar to the AUVs that explore the region. They may also serve as position references for the group of AUVs. AUVs are capable of navigating from one point to another, and can communicate with each other and with FSPs by sonar. The group of AUVs includes a certain number of supervisory AUVs (SAUVs) which can communicate by radio with the tactical coordinator (TC), located on shore, and by sonar with their subordinate AUVs.
As shown in Figure 2, SAMON is a four-layered hierarchy consisting of the TC, SAUVs, AUVs, and FSPs. Once the group of AUVs is deployed, they perform an initial self-organization in which each SAUV determines the number and locations of its subordinate AUVs through sonar queries, and then surfaces to report its availability for ocean sampling operations to the TC by radio. Once the TC has been contacted by all SAUVs and has determined the resources available, it issues mission orders (such as gather data, relocate, etc.) to each SAUV, and also provides search region coordinates. Each SAUV distributes the search region among its subordinate AUVs, and determines a rendezvous point and a time to meet afterwards. Each subordinate AUV then develops an itinerary for exploring its own subregion, and gathers sampled data from the FSPs within that subregion. At the rendezvous point each SAUV makes sonar contact with its subordinates, collects any data, and surfaces to make radio contact with the TC to download the data and receive further mission instructions.

SAMON is thus a reconfigurable, four-layered, hierarchical and distributed architecture of command-control-communication AUVs. Each node in this hierarchy is an intelligent controller that, based on its observations of the sensor signals and the signals from lower level controllers, computes the discrete control actions which serve as set-points for the lower level controllers and its own effector subsystems. We apply the intelligent control architecture presented in this paper to the design of the controller for a typical AUV/SAUV.
3 Intelligent control architecture

As described above, the intelligent control architecture is used for designing controllers which, based on their observations of sensor signals, compute discrete control actions that serve as set-points for the lower level controllers. The intelligent controller is a cascade of four subsystems, namely the input interface, the perceptor, the response controller, and the output interface, as shown in Figure 3. We discuss the functionality of each of the subsystems in the following.
Figure 3: Subsystems within the intelligent control architecture: the input interface (get_sensor_data) produces SENSOR_DATA, the perceptor (do_perception) produces STIMULUS_DATA, the response controller (compute_response) produces CONTROL_DATA, and the output interface (send_output_data) transmits it. By convention, rectangular blocks depict systems, whereas the oval blocks depict signals.
Input interface: The intelligent controller receives sensor data in the form of data packets over a network after the raw sensor signals have been processed by an appropriate signal processing system. The input interface reads the sensor data packets off the network and stores them in an appropriate form in SENSOR_DATA. This is achieved by the execution of the function-call event get_sensor_data.
Perceptor: The perceptor reads the data from SENSOR_DATA, extracts the features of interest, and stores them in appropriate forms in STIMULUS_DATA by transforming continuous signals into discrete symbols. A fuzzy pattern classifier, called the continuous inferencing net (CINET) [20], is used for this purpose. The perceptor is invoked by the execution of the function-call event do_perception.
Response controller: The response controller reads the data from STIMULUS_DATA, computes the discrete control actions, and stores them in CONTROL_DATA. It is a discrete event system that receives discrete sensor symbols, maintains discrete states, and outputs discrete control actions. The response controller computes the control actions hierarchically to manage complexity, where the STIMULUS_DATA inputs are used first to compute the higher level activities, called behaviors, and next to compute the corresponding lower level activities, called actions. This correspondence is shown in Figure 5, where the response controller consists of: (i) a top level coordinator that, based on the STIMULUS_DATA inputs, selects the behaviors to be exhibited, and (ii) several lower level behavior controllers (one for each behavior) that determine the control actions for exhibiting the selected behaviors. The response controller computes the control action each time the event compute_response is executed. This computation, in turn, is achieved by a sequence of events (in the form of function calls) shown in Figure 6. Details of the functions are given below.
Output interface: The output interface reads the control actions stored in CONTROL_DATA and generates a data packet in an appropriate form for the network. The network receives the control action data from the intelligent controller and transmits them as the set-points for the lower level controllers. This process is achieved by the execution of the function-call event send_control_data.

A more detailed description of the two interfaces and the perceptor is beyond the scope of this paper; only the response controller is described in detail. An introduction to the CINET based perceptor is presented in [20].
The sequence of function calls get_sensor_data, do_perception, compute_response, and send_control_data is executed to invoke the input interface, the perceptor, the response controller, and the output interface, respectively. These executions are done each time either a new sensor data packet arrives or a time-out occurs. In other words, the intelligent controller waits for a new sensor data packet to arrive for a maximum previously determined time, and when a sensor data packet arrives within this time or when the time-out occurs, it executes the sequence of function-call events. It is possible for a new sensor data packet to arrive before the last complete controller cycle is executed, in which case a new controller cycle is initiated, pre-empting the execution of the last one. The cycle is shown in Figure 4.
Figure 4: Event sequence the controller cycles through: on arrival_of_new_datapacket or time_out, the cycle get_sensor_data, do_perception, compute_response, send_control_data is executed.
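To make the cycle concrete, the following is a minimal Python sketch of the controller cycle described above (pre-emption of an unfinished cycle is omitted). The function names mirror the function-call events of Figure 4; the time-out value, the queue-based packet arrival, and the placeholder subsystem bodies are our own illustrative assumptions, not part of the architecture specification.

```python
import queue

CYCLE_TIMEOUT_S = 1.0  # assumed maximum wait for a new sensor data packet

def get_sensor_data(packet, sensor_data):
    """Input interface: store the received packet (or nothing on time-out) in SENSOR_DATA."""
    if packet is not None:
        sensor_data["last_packet"] = packet

def do_perception(sensor_data, stimulus_data):
    """Perceptor: turn continuous sensor values into discrete symbols (placeholder rule)."""
    pkt = sensor_data.get("last_packet", {})
    stimulus_data["NEW-ORDERS"] = bool(pkt.get("orders"))

def compute_response(stimulus_data, control_data):
    """Response controller: choose a discrete control action (placeholder rule)."""
    control_data["set_point"] = "surface" if stimulus_data.get("NEW-ORDERS") else "hold"

def send_control_data(control_data):
    """Output interface: package CONTROL_DATA and send it to the lower level controllers."""
    print("set-point ->", control_data.get("set_point"))

def controller_cycle(packets: "queue.Queue"):
    """Run one controller cycle per new packet or per time-out, as in Figure 4."""
    sensor_data, stimulus_data, control_data = {}, {}, {}
    while True:
        try:
            packet = packets.get(timeout=CYCLE_TIMEOUT_S)  # wait for a packet
        except queue.Empty:
            packet = None                                  # time-out: cycle anyway
        get_sensor_data(packet, sensor_data)
        do_perception(sensor_data, stimulus_data)
        compute_response(stimulus_data, control_data)
        send_control_data(control_data)
```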
4 Response controller

The response controller is the subsystem in the intelligent control architecture that is responsible for making the control decisions, i.e., computing the control actions.

Our approach to designing the response controller is a behavioral approach. Behaviors are certain high level activities (that are independent of each other) that determine the manner in which the system reacts to changing external/environmental conditions and thereby executes subtasks of the given mission tasks. For example, in the SAMON application, the overall mission is to sample a given region of the ocean floor by using a network of SAUVs and AUVs. In order to perform this mission, each SAUV should be able to exhibit the following behaviors:

Form its team of AUVs
Communicate with the TC
Communicate with the AUVs in its team
Reconnaissance
Refuel
Avoid collisions
Loiter
Similarly, each AUV should be able to exhibit the following behaviors:

Communicate with its SAUV
Communicate with the FSPs
Reconnaissance
Refuel
Avoid collisions
Loiter
Thus the design phase begins with identifying such higher level activities, called behaviors, that the system under control should exhibit for executing subtasks of the given missions. A certain behavior may be configured in different modes. As an example of modes in the SAMON application, the SAUV behavior "communicate with the TC" may be configured in one of the following modes:

Communicate with the TC to establish a contact
Communicate with the TC to download the gathered data

Once the set of behaviors and the associated modes have been identified, we next identify the lower level primitive activities, called actions, with the property that each behavior mode can be executed by executing a certain sequence of primitive actions. For example, in the SAMON application, the SAUV behavior mode "communicate with the TC to establish contact" is executed by executing the following sequence of primitive actions:

surface, send hello by radio, receive acknowledgment on radio

On the other hand, the SAUV behavior mode "communicate with the TC to download the gathered data" is executed by executing the following sequence of primitive actions:

surface, send download request by radio, receive download acknowledgment on radio, transmit gathered data on radio
Thus actions are primitive or atomic activities whose execution in a certain sequence results in the execution of a particular behavior mode. An action is primitive in the sense that it specifies a unique set-point for the lower level controllers that are controlled by the intelligent controller (refer to Figure 1). For example, in the SAMON application, the "surface" action specifies the set-point of zero depth and zero speed for the auto-pilot controller. Similarly, the "send hello by radio" action specifies the set-point of a hello message for the radio communication controller.
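As an illustration of how a primitive action resolves to set-points for the lower level controllers, the Python sketch below encodes the two examples just mentioned; the set-point field names and the lower level controller identifiers are hypothetical, chosen only for readability.

```python
# Hypothetical mapping from primitive actions to lower level controller set-points.
# The field names ("depth_m", "speed_mps", "message") are illustrative assumptions.
ACTION_SET_POINTS = {
    "surface": {
        "controller": "auto-pilot",
        "set_point": {"depth_m": 0.0, "speed_mps": 0.0},
    },
    "send_hello_by_radio": {
        "controller": "radio-communication",
        "set_point": {"message": "hello"},
    },
}

def execute_action(action: str) -> dict:
    """Look up the set-point that the given action specifies for its lower level controller."""
    return ACTION_SET_POINTS[action]

if __name__ == "__main__":
    print(execute_action("surface"))
```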
To summarize, the design of an intelligent controller requires identifying certain high level activities, the behaviors, that the system under control should exhibit; next identifying the different modes of each behavior; and finally identifying the primitive actions which, when executed in an appropriate sequence, execute a specific behavior mode. An action is a primitive activity that is executed by executing a unique associated algorithm that computes the set-points for the lower level controllers. The perceptor is designed so as to identify the external/environmental conditions that lead to the enablement of the appropriate behaviors in the response controller.
This behavioral approach for designing intelligent controllers naturally leads to a response controller that has a two-layered hierarchy. The lower layer consists of behavior controllers, one for each behavior, and the higher layer consists of a single coordinator. This hierarchy is shown in Figure 5.

The coordinator receives sensor inputs (orders) from the STIMULUS_DATA, selects the behavior to execute, suggests the mode of the behavior execution, monitors the progress of the behavior execution, modifies its selections in case of problems, and sends reports out in case of failures. The task of a behavior controller, on the other hand, is to monitor sensor inputs and accordingly advise the coordinator of its enablement, receive mode configuration inputs from the coordinator, select the behavior mode to execute, issue set-point inputs to the lower level controllers for executing the corresponding sequence of primitive actions, monitor the progress of the action sequence execution, and inform the coordinator in case of failures.

The functionality of the coordinator and the behavior controllers within the two-layered response controller described in the previous paragraph is achieved by a combination of functions (drawn as rectangular blocks in Figure 5) and the signals they process/generate (drawn as oval blocks in Figure 5). Figure 5 depicts the domain and range spaces (drawn as oval blocks) of each function (drawn as rectangular blocks) within the response controller. We next discuss these functions individually.
In order to maintain uniformity, the coordinator and the behavior controllers are structured identically. Each consists of two functions, the enabler function and the planner function, and maintains internal state variables called directives.

Coordinator enabler: This is the enabler function in the coordinator. It checks for the arrival of any new orders in the STIMULUS_DATA from a higher level (such as the TC level for the SAUVs, and the SAUV level for the AUVs in the SAMON application), and enables the appropriate behaviors by setting the appropriate COORDINATOR_DIRECTIVES in the coordinator. One of the enabled behaviors is later selected for execution.

Behavior enabler: Each behavior has its own behavior enabler function, which checks for the arrival of any relevant input in the STIMULUS_DATA that implies the particular behavior be enabled. The behavior enabler function then sets the appropriate COORDINATOR_DIRECTIVES, informing the coordinator that the particular behavior is enabled in response to the arrival of a certain input in the STIMULUS_DATA.

Coordinator planner: The task of the coordinator planner is twofold. Firstly, depending upon its current state of COORDINATOR_DIRECTIVES, which contains the information about the set of enabled behaviors, it generates a COORDINATOR_PLAN, a sequence of enabled behaviors in which the enabled behaviors should be executed. It thus assigns an order to the set of enabled behaviors. Secondly, it sets BEHAVIOR_DIRECTIVES in the enabled behaviors for configuring their modes of execution.
Figure 5: Details of the response controller: the coordinator (c_enabler, c_planner, b_evaluator, with C_DIRECTIVE and C_PLAN) sits above the behavior controllers (bi_enabler, bi_planner, a_evaluator, a_executor, with Bi_DIRECTIVE and B_PLAN), which read STIMULUS_DATA and write CONTROL_DATA.
Behavior planner: Associated with each behavior is its own planner. Whenever the current COORDINATOR_PLAN begins with the planner's owner behavior, then based on the current BEHAVIOR_DIRECTIVES, which contain information about the enabled behavior modes, the planner selects a behavior mode and generates the BEHAVIOR_PLAN corresponding to that mode. A BEHAVIOR_PLAN is a sequence of actions in which the selected behavior mode is to be executed.

Action executor: The action executor checks the current BEHAVIOR_PLAN and executes the first action in it. The execution of the action computes the appropriate set-points for the lower level controllers using any pertinent information from the STIMULUS_DATA, and outputs them as CONTROL_DATA. The output interface function send_control_data, when executed, reads the CONTROL_DATA, forms an appropriate data packet, and sends it out on the network as set-points for the lower level controllers.

Action evaluator: This function evaluates the success of the last action executed. It checks the current BEHAVIOR_PLAN to determine the first action in it. (Initially, when the BEHAVIOR_PLAN is empty, the action evaluator acts vacuously.) It then compares the assigned set-point in the CONTROL_DATA with the result of the lower level control reported as new sensor data in the STIMULUS_DATA. If the action was successfully executed, it deletes the action from the BEHAVIOR_PLAN; otherwise it sets an appropriate BEHAVIOR_DIRECTIVE to inform the behavior controller.

Behavior evaluator: This function evaluates the success of the last behavior executed. It checks the current COORDINATOR_PLAN to determine the first behavior in it. (Initially, when the COORDINATOR_PLAN is empty, the behavior evaluator acts vacuously.) It then checks the corresponding BEHAVIOR_DIRECTIVES to determine if there was any problem with the execution of the last action. If there was a problem that the behavior controller was unable to resolve, it sets an appropriate COORDINATOR_DIRECTIVE to inform the coordinator. Otherwise, it checks the BEHAVIOR_PLAN, and if that is empty, it determines that the behavior was successfully executed, in which case it deletes the behavior from the COORDINATOR_PLAN.
To summarize, we list each function in the response controller along with its domain and range space:

coordinator_enabler : STIMULUS_DATA -> COORDINATOR_DIRECTIVE
behavior_i_enabler : STIMULUS_DATA -> COORDINATOR_DIRECTIVE
coordinator_planner : COORDINATOR_DIRECTIVE -> COORDINATOR_PLAN x (Prod_i BEHAVIORi_DIRECTIVE)
behavior_i_planner : COORDINATOR_PLAN x BEHAVIORi_DIRECTIVE -> BEHAVIOR_PLAN
action_executor : BEHAVIOR_PLAN -> CONTROL_DATA
action_evaluator : BEHAVIOR_PLAN x CONTROL_DATA x STIMULUS_DATA -> BEHAVIOR_PLAN x (Prod_i BEHAVIORi_DIRECTIVE)
behavior_evaluator : COORDINATOR_PLAN x BEHAVIOR_PLAN x (Prod_i BEHAVIORi_DIRECTIVE) -> COORDINATOR_DIRECTIVE x COORDINATOR_PLAN
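Read as code, the listing above amounts to a set of typed transformations over the controller's state variables. The following Python sketch records those signatures as stubs; the concrete types (sets of directive symbols, lists of behaviors and actions) are our assumptions about one reasonable realization, not definitions from the paper.

```python
from typing import Dict, List, Set, Tuple

# Assumed concrete types for the state and shared variables (one reasonable realization).
StimulusData = Dict[str, object]          # discrete symbols produced by the perceptor
ControlData = Dict[str, object]           # set-points for the lower level controllers
CoordinatorDirective = Set[str]           # enabled behaviors / coordinator status flags
BehaviorDirective = Set[str]              # enabled modes / behavior status flags
CoordinatorPlan = List[str]               # ordered sequence of enabled behaviors
BehaviorPlan = List[str]                  # ordered sequence of primitive actions

def coordinator_enabler(s: StimulusData) -> CoordinatorDirective: ...
def behavior_i_enabler(s: StimulusData) -> CoordinatorDirective: ...
def coordinator_planner(cd: CoordinatorDirective
                        ) -> Tuple[CoordinatorPlan, Dict[str, BehaviorDirective]]: ...
def behavior_i_planner(cp: CoordinatorPlan, bd: BehaviorDirective) -> BehaviorPlan: ...
def action_executor(bp: BehaviorPlan) -> ControlData: ...
def action_evaluator(bp: BehaviorPlan, c: ControlData, s: StimulusData
                     ) -> Tuple[BehaviorPlan, Dict[str, BehaviorDirective]]: ...
def behavior_evaluator(cp: CoordinatorPlan, bp: BehaviorPlan,
                       bds: Dict[str, BehaviorDirective]
                       ) -> Tuple[CoordinatorDirective, CoordinatorPlan]: ...
```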
As described above, the response controller gets invoked through the execution of the event (function call) compute_response, which gets executed each time the controller cycles through the sequence of events shown in Figure 4. This happens each time the intelligent controller receives a new sensor data packet or a time-out occurs. The function compute_response is a macro-event whose functionality is achieved by the functions of the response controller executed in the sequence shown in Figure 6.

Figure 6: Event sequence that achieves compute_response: situation evaluation (a_evaluator, b_evaluator, c_enabler, bi_enabler(s)), followed by operation planning (c_planner, bi_planner(s)), followed by operation execution (a_executor).
This sequence of execution of the functions can be classified into the phases of situation evaluation, operation planning, and operation execution, as discussed below:

Situation evaluation: This is achieved in two steps. Firstly, the effect of the last action is evaluated by executing the action_evaluator followed by the behavior_evaluator. The execution of these functions modifies the COORDINATOR_PLAN and BEHAVIOR_PLAN if the action was successful, and generates appropriate COORDINATOR_DIRECTIVES and BEHAVIOR_DIRECTIVES otherwise.

Secondly, the arrival of a new input in the STIMULUS_DATA is evaluated and then the appropriate behaviors are enabled by executing the coordinator_enabler followed by the behavior_enabler for each of the behaviors. The execution of these functions generates appropriate COORDINATOR_DIRECTIVES and BEHAVIOR_DIRECTIVES.

Operation planning: This is achieved by executing the coordinator_planner followed by the behavior_planner for each individual behavior. Based on the current state of the COORDINATOR_DIRECTIVES and all BEHAVIOR_DIRECTIVES, the planners generate a COORDINATOR_PLAN, followed by a BEHAVIOR_PLAN.

Operation execution: This is achieved by executing the action_executor, which executes the first action in the BEHAVIOR_PLAN and generates the set-points for the lower level controllers in the CONTROL_DATA.

The response controller goes through the phases of situation evaluation, operation planning, and operation execution to make its control decision.
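As a minimal sketch of this sequencing, the Python fragment below strings the function-call events of Figure 6 into the three phases. The response-controller state is kept in a small class, and the per-behavior functions are assumed to be supplied in dictionaries keyed by behavior name; that packaging is an assumption of this sketch, not of the architecture.

```python
from typing import Callable, Dict, List, Set

class ResponseControllerState:
    """State variables of the response controller (initially empty, see Section 5)."""
    def __init__(self, behaviors: List[str]):
        self.coordinator_directive: Set[str] = set()
        self.behavior_directive: Dict[str, Set[str]] = {b: set() for b in behaviors}
        self.coordinator_plan: List[str] = []
        self.behavior_plan: List[str] = []

def compute_response(state: ResponseControllerState,
                     stimulus_data: dict,
                     control_data: dict,
                     a_evaluator: Callable, b_evaluator: Callable,
                     c_enabler: Callable, b_enablers: Dict[str, Callable],
                     c_planner: Callable, b_planners: Dict[str, Callable],
                     a_executor: Callable) -> None:
    """Macro-event compute_response: run the three phases in the order of Figure 6."""
    # Phase 1: situation evaluation.
    a_evaluator(state, control_data, stimulus_data)   # judge the last action
    b_evaluator(state)                                 # judge the last behavior
    c_enabler(state, stimulus_data)                    # enable behaviors from new orders
    for enabler in b_enablers.values():
        enabler(state, stimulus_data)                  # per-behavior enablement
    # Phase 2: operation planning.
    c_planner(state)                                   # order the enabled behaviors
    for planner in b_planners.values():
        planner(state)                                 # expand the first behavior into actions
    # Phase 3: operation execution.
    a_executor(state, stimulus_data, control_data)     # execute the first action
```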
5 Discrete event system model of response controller

The response controller is a discrete event system since it has discrete states which change in response to discrete events (the function calls). We model the response controller by a collection of interacting automata, which is described in detail in the appendix. The derivation of the formal discrete event model of the response controller aids the analysis and verification in several ways:

1. The models can be used for the formal verification of the controller to see whether or not it meets the mission specification. For this, the mission specification is also represented as a discrete event system that defines the sequences of behavior modes that the controlled system should execute in order to accomplish the mission task. The control design is verified by showing that the sequences that are allowed by the design automata are also allowed by the specification automata. The formal verification problem is thus one of language containment [11], and can be performed using standard tools such as COSPAN [15].

2. The models can be used for automated code generation. Once the control design has been formally verified, standard tools such as DIADEM [8] can be used to automatically generate the software code that implements the behavior that is modeled by the interacting automata.

3. The availability of automata models allows easy redesign and maintenance. Any design change, due to either changes in the system or in the mission specification, can be more easily incorporated if the formal model of the controller is available. Once the changes have been incorporated, the design can easily be verified and then converted into executable software code. Availability of the model thus also facilitates software reuse.
Each automaton in the discrete event system model of the response controller tracks the evolution of one particular state variable of the response controller, i.e., there is one automaton per state variable of the response controller. The response controller has the following state variables:

COORDINATOR_DIRECTIVE
BEHAVIORi_DIRECTIVE, for each i in the set of behaviors
COORDINATOR_PLAN
BEHAVIOR_PLAN
Thus if there are n behaviors, then the discrete event system model of the response controller consists of a collection of n + 3 interacting automata. In practice, however, in order to simplify the complexity of any individual automaton it may be convenient to represent it by a collection of interacting automata, so the response controller model may consist of more than n + 3 automata.

Since there are no directives or plans initially in the response controller, the initial state of each of the above automata is the empty set.

The event set of an automaton is the set of those controller functions whose range space contains the state variable the automaton is tracking. The following table gives the event set of each automaton listed above:

Automaton                 Event set
COORDINATOR_DIRECTIVE     coordinator_enabler, behavior_evaluator, behavior_i_enabler
BEHAVIORi_DIRECTIVE       coordinator_planner, action_evaluator
COORDINATOR_PLAN          coordinator_planner, behavior_evaluator
BEHAVIOR_PLAN             action_evaluator, action_executor, behavior_i_planner
This interacting-automata based DES model of the response controller involves two shared variables: STIMULUS_DATA and CONTROL_DATA. The value of the shared variable STIMULUS_DATA is modified by the perceptor, whereas the value of the shared variable CONTROL_DATA is modified by the response controller.

The guard condition of a state transition that occurs in response to a certain event is a predicate over the set of those shared and state variables that are in the domain space of the response controller function to which the event corresponds. The following table lists the guard variables (the set of variables over which a predicate of the guard condition is defined) for each event:

Event                 Guard variables
coordinator_enabler   STIMULUS_DATA
behavior_i_enabler    STIMULUS_DATA
coordinator_planner   COORDINATOR_DIRECTIVE
behavior_i_planner    BEHAVIORi_DIRECTIVE, COORDINATOR_PLAN
action_executor       STIMULUS_DATA, BEHAVIOR_PLAN
action_evaluator      CONTROL_DATA, STIMULUS_DATA, BEHAVIOR_PLAN
behavior_evaluator    BEHAVIOR_PLAN, COORDINATOR_PLAN, BEHAVIORi_DIRECTIVE
Finally, the state transitions that occur in response to the event action_executor have assignment functions associated with them, which modify the shared variable CONTROL_DATA. Figures 7(a), 7(b), 7(c), and 7(d) depict the possible transition types in the automata tracking the evolution of the state variables COORDINATOR_DIRECTIVE, BEHAVIOR_PLAN, BEHAVIORi_DIRECTIVE, and COORDINATOR_PLAN, respectively.
Figure 7: Types of transitions in the DES model of the response controller. Each transition is labeled with an event (c_enabler, bi_enabler, c_planner, bi_planner, a_evaluator, a_executor, or b_evaluator) and a guard predicate P over the corresponding guard variables; a_executor transitions additionally carry the assignment CONTROL_DATA := f(STIMULUS_DATA, B_PLAN).
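To make the transition structure concrete, here is a minimal Python sketch of one such interacting automaton: a state variable updated by guarded transitions, where each transition is keyed by an event name, guarded by a predicate over the shared and state variables, and applies an update to the tracked variable. The specific guard and update shown at the bottom are illustrative, not taken from Figures 8-15.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set

@dataclass
class Transition:
    event: str                                  # e.g. "c_enabler"
    guard: Callable[[dict, Set[str]], bool]     # predicate over (shared vars, tracked state)
    update: Callable[[Set[str]], Set[str]]      # new value of the tracked state variable

@dataclass
class Automaton:
    """Tracks one state variable of the response controller (initially the empty set)."""
    state: Set[str] = field(default_factory=set)
    transitions: List[Transition] = field(default_factory=list)

    def step(self, event: str, shared: dict) -> None:
        for t in self.transitions:
            if t.event == event and t.guard(shared, self.state):
                self.state = t.update(self.state)
                return  # at most one transition fires per event occurrence

# Illustrative automaton for COORDINATOR_DIRECTIVE: on c_enabler, if a search order is
# present in STIMULUS_DATA, push a DO-RECON directive (hypothetical rule for this sketch).
coordinator_directive = Automaton(transitions=[
    Transition(
        event="c_enabler",
        guard=lambda shared, st: shared["STIMULUS_DATA"].get("ORDERS") == "SEARCH",
        update=lambda st: st | {"DO-RECON"},
    ),
])

coordinator_directive.step("c_enabler", {"STIMULUS_DATA": {"ORDERS": "SEARCH"}})
print(coordinator_directive.state)  # {'DO-RECON'}
```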
6 Application to SAMON

As described in Section 2, SAMON is a sampling mobile network of autonomous underwater vehicles whose intelligent controllers are designed in the intelligent control architecture presented here. In the present design each vehicle is endowed with six different behaviors that are deemed necessary for accomplishing the mission goal of sampling a certain region of the ocean floor. These are:

S-COM: S-COM stands for sonar communication. This behavior is exhibited by an SAUV to communicate with AUVs and FSPs, and also by an AUV to communicate with SAUVs and FSPs. Since these communications occur in the water, they are sonar based. There are three modes in which S-COM can be configured:

Send-query: This is exhibited to send a query by an SAUV to AUVs, and by an AUV to FSPs, to establish contact.

Send-orders: This is exhibited by an SAUV to issue orders to AUVs, or by an AUV to issue orders to FSPs, for task assignments.

Send-reports: This is exhibited by an AUV or an FSP to report acknowledgments to queries, to report compliance to orders, or to report gathered samples.

R-COM: R-COM stands for radio communication. This behavior is executed by an SAUV only, to communicate with the TC. Since this communication occurs in the air, it is radio based. There are two modes in which R-COM can be configured:

Send-report1: This is exhibited by an SAUV to provide a status report to the TC.

Send-report2: This is exhibited by an SAUV to report gathered samples to the TC.

RECON: RECON stands for reconnaissance. This behavior is exhibited by a vehicle to survey a certain designated region of the ocean floor. This operation is the only mode of this behavior.

GOTO: As the name suggests, this behavior is exhibited by a vehicle to move to a certain designated point. This operation is the only mode of this behavior.

REFUEL: As the name suggests, this behavior is exhibited by a vehicle to visit a refueling dock and refuel. This operation is the only mode of this behavior.

LOITER: As the name suggests, this behavior is exhibited by a vehicle to remain idle while maintaining its current position, which is the only mode of this behavior.
Since each vehicle is endowed with six behaviors, the response controller for each vehicle in the SAMON application consists of six behavior controllers and a single coordinator. The coordinator and the behavior controllers, through their enablers, monitor the STIMULUS_DATA and generate COORDINATOR_DIRECTIVES to enable an appropriate behavior mode. The automata representing the evolution of the directives and the plans in response to the various events (function calls) are depicted in Figures 8-15. For convenience, the set of directives is partitioned into the long-term directives, the short-term directives, the count directives, and the points directives.
The STIMULUS_DATA is one of the shared variables, and for the SAMON application consists of the following:

ORDERS: For each vehicle this consists of orders received from the next higher level, namely, the TC level for the SAUVs, and the SAUV level for the AUVs. It is monitored by the vehicle's coordinator_enabler. The list of orders in the SAMON application consists of:

Resume
Search
Goto-point
Dump-samples

The variable NEW-ORDERS is set true when a new order is received.

QUERIES: For each AUV, this consists of the list of SAUVs from whom it has received queries for establishing contacts. It is monitored by the AUV's S-COM enabler. The variable NEW-QUERY is set true when a new query is received.

CONTACTS: For each SAUV (resp., AUV), this consists of the list of AUVs (resp., FSPs) with whom it has established contact. It is monitored by the vehicle's S-COM enabler. The variable AUV-CONTACT (resp., STATION-CONTACT) is set true when the SAUV's (resp., AUV's) contacts list is nonempty, and the variable NEW-AUV-CONTACTS (resp., NEW-STATION-CONTACT) is set true when the SAUV (resp., AUV) establishes a contact with a new AUV (resp., FSP).

SAMPLES: For each vehicle this consists of the list of sample data that it has gathered.

AUTHORITY: For each vehicle this is assigned one of two values, SAUV or AUV, depending on whether the vehicle is to take the role of an SAUV or an AUV. Thus, by changing this variable, the role of the vehicle can be changed dynamically.

SYS-TIME: This is the value used for setting the time when the system timer times out.

FUEL: This is the value that determines the minimum fuel level that a vehicle is allowed to have before it must refuel.

DH: This is the value that determines the maximum amount of drift in the position error that is allowed.
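Collected into one record, the STIMULUS_DATA for a SAMON vehicle can be sketched as the following Python dataclass; the field types and default values are our assumptions about one plausible encoding of the items listed above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SamonStimulusData:
    """Sketch of the SAMON STIMULUS_DATA shared variable (field types are assumptions)."""
    orders: List[str] = field(default_factory=list)    # Resume, Search, Goto-point, Dump-samples
    new_orders: bool = False
    queries: List[str] = field(default_factory=list)   # SAUVs that have queried this AUV
    new_query: bool = False
    contacts: List[str] = field(default_factory=list)  # AUVs (for an SAUV) or FSPs (for an AUV)
    new_auv_contacts: bool = False
    samples: List[dict] = field(default_factory=list)  # gathered sample data
    authority: str = "AUV"                              # "SAUV" or "AUV"; may change dynamically
    sys_time: float = 0.0                               # system timer time-out setting
    fuel: float = 1.0                                   # fuel level (GO-REFUEL when FUEL < LO-FUEL)
    dh: float = 0.0                                     # maximum allowed position-error drift
```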
By monitoring the STIMULUS_DATA consisting of ORDERS, QUERIES, CONTACTS, SAMPLES, AUTHORITY, SYS-TIME, FUEL, and DH, the various enablers generate the appropriate coordinator directives. Using these directives, the coordinator planner generates a coordinator plan, and also configures each enabled behavior in its appropriate mode by generating the corresponding behavior directives. The coordinator plan in this case is a sequence of enabled behaviors belonging to the set {S-COM, R-COM, RECON, GOTO, REFUEL, LOITER}. The planner for the first behavior in the coordinator plan generates a behavior plan using its behavior directives. The first action in this plan is then executed by the action executor, generating the appropriate CONTROL_DATA, which provides the set-points for the lower level conventional controllers. In the SAMON application, the lower level conventional control exists for three subsystems, namely, the navigation subsystem, the sonar communication subsystem, and the radio communication subsystem.

In the SAMON application, the following primitive actions exist, which, when executed in a particular sequence, result in the execution of a certain behavior mode:
s-command: This is used to send a query/order by sonar.
s-response: This is used to send status reports/samples by sonar.
r-query: This is used to send a query by radio.
r-dump: This is used to send samples by radio.
transit: This is used to move to a designated position.
hang-close: This is used to maintain the present position.
The following table lists the different behavior modes and the corresponding sequence of primitive actions that executes each behavior mode.

Behavior mode            Action sequence
S-COM [send-query]       <s-command(query), s-command(query)>
S-COM [send-order]       <s-command(order)>
S-COM [send-report]      <s-response(report)>
R-COM [send-report1]     <transit(top), r-query>
R-COM [send-report2]     <transit(top), r-dump>
RECON                    <transit(point), s-command(order)>
GOTO                     <transit(point)>
REFUEL                   <transit(dock)>
LOITER                   <hang-close>
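Expressed as data, the table above is simply a mapping from a behavior mode to its action sequence; the behavior planner selects a mode and copies the corresponding sequence into the BEHAVIOR_PLAN. A small Python sketch follows; the mode names for the single-mode behaviors and the helper select_behavior_plan are our own labels, not from the paper.

```python
from typing import Dict, List, Tuple

# Behavior mode -> sequence of primitive actions (parameters shown as strings).
BEHAVIOR_MODE_PLANS: Dict[Tuple[str, str], List[str]] = {
    ("S-COM", "send-query"):   ["s-command(query)", "s-command(query)"],
    ("S-COM", "send-order"):   ["s-command(order)"],
    ("S-COM", "send-report"):  ["s-response(report)"],
    ("R-COM", "send-report1"): ["transit(top)", "r-query"],
    ("R-COM", "send-report2"): ["transit(top)", "r-dump"],
    ("RECON", "recon"):        ["transit(point)", "s-command(order)"],
    ("GOTO", "goto"):          ["transit(point)"],
    ("REFUEL", "refuel"):      ["transit(dock)"],
    ("LOITER", "loiter"):      ["hang-close"],
}

def select_behavior_plan(behavior: str, mode: str) -> List[str]:
    """Return a fresh copy of the action sequence for the selected behavior mode."""
    return list(BEHAVIOR_MODE_PLANS[(behavior, mode)])

print(select_behavior_plan("R-COM", "send-report1"))  # ['transit(top)', 'r-query']
```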
Figures 8-15 depict the automata tracking the various state variables of the intelligent controller for the underwater vehicles. The state transitions in these automata take place when the events (function calls) indicated in the oval boxes occur. These events occur in the order depicted in Figure 6, thus dictating the order of the state transitions.

In Figure 8, for example, the state transitions of the long-term coordinator directive are depicted. When the execution of behavior_evaluator occurs, the state variable evolves as depicted in the top portion of the figure, while the middle (resp., bottom) portion depicts the evolution when the coordinator_enabler (resp., RECON enabler) is executed. Each transition edge has a pair of labels: the label above the transition edge gives the guard condition (a predicate over the shared variables and the set of state variables) under which the transition is executed, and the label below the transition edge gives the resulting modification of the state variable. The transition edge of the topmost branch in Figure 8, for example, is executed when the event behavior_evaluator occurs, the COORDINATOR_PLAN contains REFUEL as the first behavior (indicated as the guard CP = <REFUEL,...>), and the vehicle is an SAUV (indicated as the guard AUTHORITY = SAUV); as a result, a state transition that pushes LOCATE-AUVS and AT-DOCK onto the long-term coordinator directives takes place. All state transitions are self-descriptive and can be interpreted similarly.
Figure 8: Coordinator long-term directives (transitions on b_evaluator, c_enabler, and RECON_enabler)
Figure 9: Coordinator short-term directives (transitions on S-COM_enabler, c_enabler, and a_executor)
Figure 10: S-COM long-term directives (transitions on c_planner, b_evaluator, and a_evaluator)
Figure 11: S-COM short-term directives (transitions on c_planner, a_evaluator, and a_executor)
Figure 12: RECON/GOTO long/short-term directives (transitions on b_evaluator, a_evaluator, and a_executor)
Figure 13: Points and Count directives (transitions on c_planner, b_planner, a_evaluator, and a_executor)
Figure 14: Coordinator plan (CP) (transitions on c_planner and b_evaluator)
Figure 15: Behavior plan (BP) (transitions on b_planner, b_evaluator, and a_evaluator)
7 Models for specification and its verification

The mission task, i.e., the control specification, that the intelligent controller should achieve is also represented by automata models. The specification automata represent the sequences of behavior modes that the controlled system must execute in order to accomplish the mission task. As an example, Figure 16(a) shows the specification automaton for an SAUV, and Figure 16(b) shows that for an AUV.
Figure 16: SAUV and AUV specification automata. (a) The SAUV automaton cycles through behavior modes such as S-COM[send-query], S-COM[send-orders], RECON, R-COM[send-report1], R-COM[send-report2], and REFUEL, triggered by conditions such as ORDERS -> SEARCH, ORDERS -> RESUME, NEW-CONTACTS, NEW-SAMPLES, and FUEL < LO-FUEL. (b) The AUV automaton cycles through LOITER, RECON, S-COM[send-report], and REFUEL, triggered by conditions such as NEW-QUERIES, ORDERS -> SEARCH, ORDERS -> RESUME, ORDERS -> DUMP, and FUEL < LO-FUEL.
The control design is verified by showing that the sequences that are allowed by the design automata are also allowed by the specification automata. The verification problem is thus one of language containment [11] and can be performed using standard tools such as COSPAN [15].
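As a self-contained illustration of the language-containment check (and not of the COSPAN tool itself), the Python sketch below tests whether every behavior-mode sequence generated by a small design automaton is also allowed by a specification automaton, by exploring their synchronous product and looking for a design-enabled event that the specification blocks. The two example automata at the bottom are toy inputs of our own, assuming deterministic, prefix-closed automata in which all states are accepting.

```python
from collections import deque
from typing import Dict, Tuple

# An automaton is given by its initial state and a transition map: (state, event) -> next state.
Automaton = Tuple[str, Dict[Tuple[str, str], str]]

def contained_in(design: Automaton, spec: Automaton) -> bool:
    """True iff every event sequence of the design is also allowed by the specification
    (both automata deterministic and prefix-closed, i.e., all states accepting)."""
    (d0, dt), (s0, st) = design, spec
    seen, frontier = {(d0, s0)}, deque([(d0, s0)])
    while frontier:
        qd, qs = frontier.popleft()
        for (state, event), nd in dt.items():
            if state != qd:
                continue
            if (qs, event) not in st:        # design allows an event the spec forbids
                return False
            pair = (nd, st[(qs, event)])
            if pair not in seen:
                seen.add(pair)
                frontier.append(pair)
    return True

# Toy design: an AUV that loiters, surveys on a search order, then reports by sonar.
design = ("LOITER", {
    ("LOITER", "ORDERS->SEARCH"): "RECON",
    ("RECON", "S-COM[send-report]"): "LOITER",
})
# Toy specification: the same sequences plus a refuel option; containment therefore holds.
spec = ("L", {
    ("L", "ORDERS->SEARCH"): "R",
    ("R", "S-COM[send-report]"): "L",
    ("L", "FUEL<LO-FUEL"): "F",
    ("F", "REFUEL"): "L",
})
print(contained_in(design, spec))  # True
```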
8 Conclusion

We have presented a new intelligent control architecture for the design of intelligent controllers. Intelligent control is exercised when conventional control is inadequate due to (i) the lack of an accurate plant model, and/or (ii) the presence of control requirements that fall beyond the scope of conventional control. Our approach to the design of intelligent controllers is a behavioral approach, where a behavior is a certain high level activity that determines the manner in which a system must react to external conditions to execute subtasks of the mission tasks. This behavioral approach naturally yields a cascaded architecture consisting of a perceptor subsystem and a response controller subsystem. The task of the perceptor is to extract those features from the sensor signals that enable the execution of one of the behaviors, and the task of the response controller is to monitor such features and to execute one of the enabled behaviors. The perceptor and the response controller are both hierarchical, allowing complexity management for systems that must operate in complex environments and carry out complex tasks.

The behavior-based intelligent control architecture presented in this paper is motivated by the functional model of the brain, and hence a controller designed in this architecture will be an intelligent controller. This architecture has been successfully applied to design intelligent controllers in practice, and one such application is discussed in this paper. Other applications include ship damage control automation, medical diagnosis, and defense applications. Further work is needed to make this architecture a more realistic functional model of the brain, and to enhance it further so as to allow it to perform tasks such as learning. The controllers designed in the current architecture are able to adapt to changing environmental conditions in the manner they are programmed to; however, they are unable to devise any new control scheme, which a system that has the capability to learn would be able to do.
References
[1] L. Acar and U. Ozguner. Design of structure-based hierarchies for distributed intelligent control. In An introduction to intelligent and autonomous control, pages 79-108. Kluwer Academic Publishers, Boston, MA, 1993.
[2] J. S. Albus. Data storage in the cerebellar model articulation controller (CMAC). Journal of Dynamic Systems, Measurement, and Control, 93:228-233, 1975.
[3] J. S. Albus. A new approach to manipulator control: The cerebellar model articulation controller (CMAC). Journal of Dynamic Systems, Measurement, and Control, 93:220-227, 1975.
[4] J. S. Albus. A reference model architecture for intelligent systems design. In An introduction to intelligent and autonomous control, pages 27-56. Kluwer Academic Publishers, Boston, MA, 1993.
[5] P. J. Antsaklis and K. M. Passino, editors. An introduction to intelligent and autonomous control. Kluwer Academic Publishers, Boston, MA, 1993.
[6] R. A. Brooks. A robust layered control system for a mobile robot. IEEE Transactions on Robotics and Automation, 2(3):14-23, 1986.
[7] C. G. Cassandras and S. Lafortune. Introduction to Discrete Event Systems. Kluwer Academic Publishers, Norwell, MA, 1995.
[8] A. Deshpande and J. Borges de Sousa. Real-time multi-agent coordination using DIADEM: Applications to automobile and submarine control. In 1997 IEEE Conference on Systems, Man and Cybernetics, 1997.
[9] M. M. Gupta and N. K. Sinha, editors. Intelligent control: Theory and applications. IEEE Press, Piscataway, NJ, 1996.
[10] C. J. Harris, editor. Advances in intelligent control. Taylor & Francis, Bristol, PA, 1994.
[11] J. E. Hopcroft and J. D. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, Reading, MA, 1979.
[12] R. Kumar and V. K. Garg. Modeling and Control of Logical Discrete Event Systems. Kluwer Academic Publishers, Boston, MA, 1995.
[13] R. Kumar and J. A. Stover. A behavior-based intelligent control architecture. In IEEE International Symposium on Intelligent Control, pages 549-553, Gaithersburg, MD, September 1998.
[14] R. Kumar, J. A. Stover, and A. Kiraly. Discrete event modeling of a behavior-based intelligent control architecture. In Joint Conference on Information Systems, pages 288-291, RTP, NC, October 1998.
[15] R. Kurshan. Computer-Aided Verification of Coordinating Processes: The Automata-Theoretic Approach. Princeton University Press, Princeton, NJ, 1994.
[16] A. H. Levis. Modeling and design of distributed intelligence systems. In An introduction to intelligent and autonomous control, pages 109-128. Kluwer Academic Publishers, Boston, MA, 1993.
[17] A. Meystel. Nested hierarchical control. In An introduction to intelligent and autonomous control, pages 129-161. Kluwer Academic Publishers, Boston, MA, 1993.
[18] S. Phoha, E. Peluso, P. A. Stadter, J. Stover, and R. Gibson. A mobile distributed network of autonomous undersea vehicles. Technical report, Applied Research Laboratory, Pennsylvania State University, University Park, PA, 1997.
[19] G. N. Saridis. Analytical formulation of the principle of increasing precision with decreasing intelligence for intelligent machines. Automatica, 25(3):461-467, 1989.
[20] J. A. Stover, D. L. Hall, and R. E. Gibson. A fuzzy logic architecture for autonomous multisensor data fusion. IEEE Transactions on Industrial Electronics, 43(3), 1996.
[21] D. A. White and D. A. Sofge, editors. Handbook of intelligent control. Van Nostrand Reinhold, New York, NY, 1992.
[22] B. P. Zeigler and S. Chi. Model-based architecture concepts for autonomous systems design and implementation. In An introduction to intelligent and autonomous control, pages 57-78. Kluwer Academic Publishers, Boston, MA, 1993.
A Discrete event systems: preliminaries
A discrete event system (DES) is an event-driven system that has discrete states, which change in response to the asynchronous occurrences of certain discrete activities, called events. Examples of DESs include manufacturing systems, communication networks, reactive computer programs, database management systems, and automated traffic systems. For a detailed introduction to DESs, readers are referred to [7, 12].
A DES can be modeled by a finite collection of interacting automata. Each automaton has a finite set of states and events, and the automata share a finite set of variables. An automaton transitions from one state to another in response to the execution of an event, provided a certain guard condition is satisfied. A guard condition is a predicate over the states of the other automata and the values of certain shared variables. A state transition may modify the values of certain shared variables.
Formally, let $\{G_i\}$ be a finite collection of automata indexed by $i$, with $S$ denoting its finite set of shared variables. Each variable $s \in S$ takes values over a countable domain $D(s)$. We use $D(S) := \prod_{s \in S} D(s)$ to denote the domain of the set of shared variables. Each automaton is a quadruple
$$G_i := (X_i, \Sigma_i, E_i, x_i^0),$$
where $X_i$ is its finite set of states, $\Sigma_i$ is its finite set of events, $E_i$ is its finite set of state transitions, and $x_i^0 \in X_i$ is its initial state. Each transition $e \in E_i$ is a quintuple of the form
$$e := \bigl(x_e,\ \sigma_e,\ P_e\bigl(S \cup \textstyle\bigcup_{j \neq i} X_j\bigr),\ f_e,\ y_e\bigr),$$
where $x_e \in X_i$ is the state where the transition is executed, $\sigma_e \in \Sigma_i$ is the event that causes the state transition, $y_e \in X_i$ is the state resulting from the execution of the transition, $P_e(S \cup \bigcup_{j \neq i} X_j)$ is the guard condition, i.e., a predicate over the shared variables and the states of the other automata, which must be satisfied for the transition to be executed, and $f_e : D(S) \to D(S)$ is the shared-variable assignment function that assigns new values to the shared variables. A transition $e \in E_i$ is equivalently also represented as
$$e := x_e \xrightarrow{\ \sigma_e,\ P_e(S \cup \bigcup_{j \neq i} X_j),\ S := f_e(S)\ } y_e.$$
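To make this encoding concrete, the following sketch shows one possible Python representation of guarded transitions and automata with shared variables. The class and field names are our own illustrative choices (they are not part of the architecture or of any tool mentioned in the paper), and guards and assignment functions are represented simply as Python callables.

from dataclasses import dataclass
from typing import Callable, Dict, List

Shared = Dict[str, int]            # a valuation of the shared variables in S

@dataclass
class Transition:                  # the quintuple (x_e, sigma_e, P_e, f_e, y_e)
    source: str                                        # x_e
    event: str                                         # sigma_e
    guard: Callable[[Shared, Dict[str, str]], bool]    # P_e over S and other automata's states
    assign: Callable[[Shared], Shared]                 # f_e : D(S) -> D(S)
    target: str                                        # y_e

@dataclass
class Automaton:                   # G_i = (X_i, Sigma_i, E_i, x_i^0)
    states: List[str]
    events: List[str]
    transitions: List[Transition]
    initial: str
    current: str = ""

    def __post_init__(self):
        self.current = self.initial

    def step(self, event: str, shared: Shared, others: Dict[str, str]) -> bool:
        """Execute 'event' if some transition out of the current state is enabled."""
        for t in self.transitions:
            if t.source == self.current and t.event == event and t.guard(shared, others):
                shared.update(t.assign(shared))        # apply f_e to the shared variables
                self.current = t.target
                return True
        return False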
Figure 17: A simple manufacturing system
Example 1 Consider for example the simple manufacturing system, consisting of two machines and a buffer, shown in Figure 17.
In this manufacturing system, the first machine $M_1$ fetches a part from an infinite supply when it is in its idle state. The arrival of a part into $M_1$ is denoted by the event $a_1$, and causes $M_1$ to change its state from idle to working. When $M_1$ finishes processing, the part is deposited in buffer $B$ (event $d_1$). This event causes $M_1$ to change its state from working to idle. The second machine $M_2$ fetches a part from the buffer when it is in its idle state and the buffer is nonempty. This is denoted by the event $a_2$, whose occurrence causes the machine $M_2$ to change its state from idle to working. After $M_2$ finishes processing a part, the part departs from the manufacturing system, denoted by the event $d_2$, which causes $M_2$ to return to its idle state.
We model this manufacturing system as a pair of interacting automata $G_1$ and $G_2$, where for $i = 1, 2$, the automaton $G_i$ models the machine $M_i$. Each automaton has two states, $X_i := \{\mathrm{idle}_i, \mathrm{working}_i\}$, with $\mathrm{idle}_i$ being the initial state of $G_i$. The event set of the automaton $G_i$ consists of $\Sigma_i := \{a_i, d_i\}$. The number of parts in the buffer $B$, denoted by $b$, is the shared variable. The sets of transitions of the two automata are shown in Figure 18.
Figure 18: DES model of the manufacturing system ($G_1$, $G_2$, and $G_1 \| G_2$)
$G_1$ starts in the initial state $\mathrm{idle}_1$, and when event $a_1$ occurs it transitions to the state $\mathrm{working}_1$. It returns to the initial state when event $d_1$ occurs, also incrementing the value of the shared variable, the buffer content, by 1. $G_2$ starts in the initial state $\mathrm{idle}_2$, and when event $a_2$ occurs it transitions to the state $\mathrm{working}_2$ provided the value of the shared variable exceeds zero, also decrementing the value of the shared variable by 1. $G_2$ returns to its initial state when event $d_2$ occurs.
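For illustration, the two machines of Example 1 can be encoded with the Transition and Automaton classes sketched earlier; the encoding below is our own and is only meant to mirror the transitions of Figure 18 (it assumes the classes defined in the previous sketch).

# Shared variable: number of parts in buffer B.
shared: Shared = {"b": 0}

G1 = Automaton(
    states=["idle1", "working1"], events=["a1", "d1"], initial="idle1",
    transitions=[
        Transition("idle1", "a1", lambda S, o: True, lambda S: S, "working1"),
        Transition("working1", "d1", lambda S, o: True,
                   lambda S: {**S, "b": S["b"] + 1}, "idle1"),   # deposit a part in B
    ])

G2 = Automaton(
    states=["idle2", "working2"], events=["a2", "d2"], initial="idle2",
    transitions=[
        Transition("idle2", "a2", lambda S, o: S["b"] > 0,       # guard [b > 0]
                   lambda S: {**S, "b": S["b"] - 1}, "working2"),
        Transition("working2", "d2", lambda S, o: True, lambda S: S, "idle2"),
    ])

# A sample run: a part enters M1, is deposited in B, and is then fetched by M2.
for automaton, event in [(G1, "a1"), (G1, "d1"), (G2, "a2"), (G2, "d2")]:
    automaton.step(event, shared, others={})
print(shared, G1.current, G2.current)    # {'b': 0} idle1 idle2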
A collection of interacting automata $\{G_i := (X_i, \Sigma_i, E_i, x_i^0)\}$ that models a DES can be combined using synchronous composition to form a single automaton that is an equivalent model of the DES. Without loss of generality, we define the synchronous composition of two interacting automata $\{G_i := (X_i, \Sigma_i, E_i, x_i^0)\}_{i=1,2}$ which share variables in a set $S$. Their synchronous composition, denoted $G_1 \| G_2$, is the automaton $G_1 \| G_2 := (X, \Sigma, E, x^0)$, where $X := X_1 \times X_2$, $\Sigma := \Sigma_1 \cup \Sigma_2$, $x^0 := (x_1^0, x_2^0)$, and the set of transitions $E = E^{12} \cup E^1 \cup E^2$, where:
$$E^{12} := \{((x_e^1, x_e^2), \sigma_e, P_e, f_e, (y_e^1, y_e^2)) \mid (x_e^1, \sigma_e, P_e^1, f_e, y_e^1) \in E_1,\ (x_e^2, \sigma_e, P_e^2, f_e, y_e^2) \in E_2 \text{ s.t. } P_e^1 \wedge P_e^2 = P_e\}$$
$$E^{1} := \{((x_e^1, x_e^2), \sigma_e, P_e, f_e, (y_e^1, x_e^2)) \mid (x_e^1, \sigma_e, P_e, f_e, y_e^1) \in E_1 \text{ s.t. } \sigma_e \in \Sigma_1 - \Sigma_2\}$$
$$E^{2} := \{((x_e^1, x_e^2), \sigma_e, P_e, f_e, (x_e^1, y_e^2)) \mid (x_e^2, \sigma_e, P_e, f_e, y_e^2) \in E_2 \text{ s.t. } \sigma_e \in \Sigma_2 - \Sigma_1\}$$
$E^{12}$ is the set of transitions which occur synchronously with the participation of both $G_1$ and $G_2$, whereas $E^1$ and $E^2$, respectively, are the sets of transitions that occur asynchronously with the participation of $G_1$ and $G_2$ only.
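A possible construction of the transition set of $G_1 \| G_2$ in the same Python sketch is given below; it builds $E^{12}$, $E^1$, and $E^2$ as above, taking the conjunction of the component guards for shared events and, as a simplification, reusing the assignment function of the first component (the definition above assumes both components carry the same $f_e$). The function reuses the classes and the automata G1, G2 sketched earlier and is illustrative only.

def compose(A1: Automaton, A2: Automaton) -> Automaton:
    """Synchronous composition A1 || A2 over product states (x1, x2)."""
    shared_events = set(A1.events) & set(A2.events)
    states = [f"({x1},{x2})" for x1 in A1.states for x2 in A2.states]
    transitions = []
    # E12: shared events executed jointly; guard is the conjunction of the guards.
    for t1 in A1.transitions:
        for t2 in A2.transitions:
            if t1.event == t2.event and t1.event in shared_events:
                transitions.append(Transition(
                    f"({t1.source},{t2.source})", t1.event,
                    lambda S, o, g1=t1.guard, g2=t2.guard: g1(S, o) and g2(S, o),
                    t1.assign, f"({t1.target},{t2.target})"))
    # E1 / E2: private events of one automaton; the other component stays put.
    for t1 in A1.transitions:
        if t1.event not in shared_events:
            for x2 in A2.states:
                transitions.append(Transition(f"({t1.source},{x2})", t1.event,
                                              t1.guard, t1.assign, f"({t1.target},{x2})"))
    for t2 in A2.transitions:
        if t2.event not in shared_events:
            for x1 in A1.states:
                transitions.append(Transition(f"({x1},{t2.source})", t2.event,
                                              t2.guard, t2.assign, f"({x1},{t2.target})"))
    return Automaton(states=states, events=sorted(set(A1.events) | set(A2.events)),
                     transitions=transitions, initial=f"({A1.initial},{A2.initial})")

# For the machines of Example 1 all events are private, so G1 || G2 simply
# interleaves the two machines while they coordinate through the shared buffer b.
G = compose(G1, G2)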
Example 2 The synchronous composition of the automata $G_1$ and $G_2$ described in Example 1 is shown in Figure 18.