
Towards a Theory of Delegation for Agent-based Systems
Cristiano Castelfranchi - Rino Falcone
IP - CNR - Rome, Italy
Division of "AI, Cognitive Modelling and Interaction"
{cris, falcone}@pscs2.irmkant.rm.cnr.it
Abstract
In this paper a theory of delegation is presented. There are at least three reasons for developing such a theory. First, one
of the most relevant notions of "agent" is based on the notion of "task" and of "on behalf of". In order to found this
notion a theory of delegation among agents is needed. Second, the notion of autonomy should be based on different
kinds and levels of delegation. Third, the entire theory of cooperation and collaboration requires the definition of the
two complementary attitudes of goal delegation and adoption linking collaborating agents.
After motivating the necessity for a principled theory of delegation (and adoption) the paper presents a plan-based
approach to this theory. We analyze several dimensions of the delegation/adoption (on the basis of the interaction
between the agents, of the specification of the task, of the possibility to subdelegate, of the delegation of the control,
of the help levels). The agent's autonomy and levels of agency are then deduced.
We describe the modelling of the client from the contractor's point of view and vice versa, with their
differences, and the notion of trust that directly derives from this modelling.
Finally, a series of possible conflicts between client and contractor are considered: in particular collaborative conflicts,
which stem from the contractor's intention to help the client beyond its request or delegation and to exploit its own
knowledge and intelligence (reasoning, problem solving, planning, and decision skills) for the client itself.
1. Introduction
There are at least three reasons for developing a theory of delegation. First, one of the most relevant
notions of "agent" is based on the notion of "task" and of "on behalf of". In order to found this notion a
theory of delegation among agents is needed. Second, the notion of autonomy should be based on
different kinds and levels of delegation. Third, the entire theory of cooperation and collaboration requires
the definition of the two complementary attitudes of goal delegation and adoption linking collaborating
agents.
Let us briefly discuss these reasons underlying the need for a principled theory of delegation as a
foundational notion for Agents and Multi Agent Systems (MAS).
1.1. "On behalf of": delegation and the notion of agent
Although there are many definitions of agent, some of which are in full disagreement with each other, the
majority of them are based on the notions of task and of "on behalf of", or make explicit reference to
delegation.
For example, P. Maes [20], in her definition of the software agent, characterizes its autonomy in this
way: "... it knows how to act on your behalf";
M. Wooldridge & N. Jennings [28] in their weak notion of agency say: ".. Think of agents as human-
like assistants or drones that are limited in their abilities: i) you can give them tasks to do, and they
can go away and cooperate with other agents to achieve tasks ....";
B. Crabtree, M. Wiegand, J. Davies [12] in their definition of agents' autonomy say: ".. When an agent
is performing some tasks on behalf of a user it needs to work on its own and only interacts with a user
when it has something to pass on";
F.C. Cheong [9]: "Agents are primarily human-delegated software entities that can perform a variety of
tasks for their human masters".
J. Ousterhout [26]: ".. The agents need to be able to make decisions that are complex and subtle, and we
need to be able to trust them enough that we don't have to check up on them constantly";
J. White [26]: ".. The two principal meanings of the term agent, which are captured in the adjectives
"intelligent" and "mobile", have in common the objective of moving the user beyond interaction with
computers to delegation of responsibility to computers ..."
Finally, R. Goodwin [14]: "An agent is an entity created to perform some tasks or set of tasks. Any
property of an agent must therefore be defined in terms of the task and the environment in which the task
is to be performed".
This attempt to found the agent notion on that of task (and/or delegation), presupposes that an
ontologically and formally clear notion of task (and/or delegation) already exists.
This is not really true. In our view, it is not clear, for example:
i) precisely what a task is;
ii) what the relationship is between users' goals and agents' goals (how they are interwoven);
iii) what the agent's autonomy is in performing a task;
iv) what is meant by "on behalf of" (it is only intuitively clear);
v) how and why an agent (human or artificial) trusts another agent.
In this paper we work towards a theory of delegation able to provide a principled foundation for this
notion of agency.
1.2. Delegation and Autonomy
The main problem we have with agents (both theoretically and practically) concerns their autonomy: how
this autonomy could be realized and regulated. This is a problem also humans have in
cooperation/collaboration both in occasional interactions and in structured interactions (in organisations).
Why is autonomy so important and central in a theory of agency?
In many cases the user (or the delegating agent) needs local and decentralised knowledge and decision
from the delegated agent. This agent - delegated to take care of a given task - has to choose from among
different possible recipes (plans), or to adapt abstract or previous plans to suit new situations; it has to
find additional (local and updated) information; it has to solve a problem (not just to execute a function,
an action, or implement a recipe); sometimes it has to exploit its expertise. In all these cases this agent
takes care of the interests or goals of the former "remotely", i.e. far from it and without its monitoring
and intervention (control), and autonomously. This requires what we will call an open delegation:
basically the delegation "to bring it about that ...". The agent is supposed to use its knowledge, its
intelligence, its ability, and to exert a degree of discretion (in this paper we do not consider as part of
the agent's autonomy the fact that the agent itself could have its own goals to pursue, or the consequent
possible conflicts).
Moreover, given that the knowledge of the delegating agent/user (client) concerning the domain and the
helping agents is limited (possibly both incomplete and incorrect), the "delegated task" (the request or the
elicited behaviour) might not be so useful for the client itself. Either the expected behaviour is useful
but cannot be executed, or it is useless or self-defeating, or dangerous for the client's other goals, or else
there is a better way of satisfying the client's needs; and perhaps the helping agent is able to provide
greater help with its knowledge and ability, going beyond the "literally" delegated task. We will call this
"over-help" or "critical-help". To be really helpful this kind of agent must take the initiative of opposing
(not for personal reasons/goals) the other's expectations or prescriptions, either proposing or directly
executing a different action/plan. To do this it must be able to recognise and reason about the goals,
plans and interests of the client, and to have/generate different solutions.
Open delegation and over/critical help distinguish a collaborator from a simple tool [15], and
presuppose intelligence and autonomy (discretion) in the agent.
However, of course, there is a trade-off between pros and cons both in open delegation and in over-help:
the more intelligent and autonomous the agent (able to solve problems, to choose between alternatives, to
think rationally and to plan), the less quickly and passively "obedient" it is. The probability increases that
the solution or behaviour provided does not correspond exactly to what we expected and delegated.
In addition, possible conflicts arise between a "client" delegating certain tasks to an agent, and the
"contractor" or in general the agent adopting and/or satisfying those tasks; conflicts which are either due
to the intelligence and the initiative of the delegated agent or to an inappropriate delegation by the client:
we are interested here only in conflicts due to the agent's willingness to collaborate and to help the other
better and more deeply: a sort of collaborative conflict.
1.3. The theory of cooperation and its basic ingredients: delegation and adoption
Delegation and adoption are two basic ingredients of any collaboration and organization [16]. In fact, the
huge majority of DAI and MA, CSCW and negotiation systems [23], communication protocols,
cooperative software agents [27], are based on the idea that cooperation works through the allocation of
some task (or sub-task) by a given agent (individual or complex) to another agent, via some "request"
(offer, proposal, announcement, etc.) meeting some "commitment" (bid, help, contract, adoption, etc.).
This is basically correct, although it is not general enough: it is restricted to the most complex (least
basic) forms of delegation and adoption. For example, it ignores unilateral delegation and exploitation,
and spontaneous help. We must start from more basic notions.
In this paper we in fact describe a theory of cooperation among agents by identifying the elementary
mechanisms on which any collaboration must be founded.
Our research (see also [3, 4, 5]) is based on three fundamental claims:
i) only on the basis of a principled theory of cooperation will it be possible both to really understand
human cooperation and to design cooperation among artificial agents, among humans and artificial
agents, and among humans through artificial agents;
ii) this theory must be founded on the main actions of delegation and adoption;
iii) the analysis of the delegation/adoption theory must be based on the plan model of the action.
Starting from an ontology of actions, results and plans we propose a definition of delegation and
adoption, the identification of their various levels, the characterization of their basic principles and
representations.
The aim of this analysis is to provide some instruments for characterizing high levels of agents'
cooperativeness and autonomy.
In the first part, we present our definition of delegation and adoption, a plan-based definition of tasks,
and of different kinds and levels of delegation and adoption. On the same basis we also characterise
different levels of agency and autonomy (in the delegated agent).
In the second part, we describe the modelling of the client by the contractor and vice versa: their
differences, and the notion of trust that derives directly from this description.
Finally, in the third part a series of possible conflicts between client and contractor will be considered, in
particular collaborative conflicts, which come from the contractor's intention to help the client beyond the
latter's request or delegation and to exploit its own knowledge and intelligence (reasoning, problem
solving, planning, and decision-making skills) for the client itself.
2. Delegation/Adoption theory
As we said the notion of delegation is already explicitly present in the theory of MAS, of collaboration
[16], and team-work. However, we have based our analysis on much more basic notions.
Informally,
in delegation an agent A needs or likes an action of another agent B and includes it in its own plan. In
other words, A is trying to achieve some of its goals through B's actions; thus A has the goal that B
performs a given action.
A is constructing an MA plan and B has a "part", a share in this plan: B's task (either a state-goal or an
action-goal).
On the other hand:
in adoption an agent B has a goal since and for so long as it is the goal of another agent A, that is, B
usually has the goal of performing an action since this action is included in the plan of A. So, also in
this case B plays a part in A's plan (sometimes A has no plan at all but just a need, a goal).
In our model, delegation and adoption are characterized in terms of the particular set of mental states
(cognitive ingredients) of the agents involved in the interaction. In fact, a delegation (or an adoption) is a
set of agent's (agents') beliefs, goals, intentions, commitments, etc.: externally there may be no interaction
between the agents, the delegation (adoption) being only in the mind of one of the agents (unilateral
delegation/adoption).
At this basic level delegation (adoption) can be established also between a cognitive and a non cognitive
agent.
We assume that to delegate an action necessarily implies delegating some result of that action.
Conversely, to delegate a goal state always implies the delegation of at least one action (possibly
unknown to A) that produces such a goal state as a result. Thus, we consider the action/goal pair τ=(α,g)
as the real object of delegation, and we will call it a task. Then by means of τ, we will refer to the action
(α), to its resulting world state (g), or to both.
Hereafter we will call A the delegating-adopted agent (the client) and B the delegated-adopting agent (the
contractor).
Rationality in delegation can be considered in two respects:
- the rational allocation of tasks: to which agent one should delegate which task. This is a well-studied
problem in DAI and MAS since the beginning of Contract Nets, and several approaches in various
domains (from economics to organisation theory) have been proposed;
- the rational decision of delegating or not delegating a task to a given agent.
The latter aspect is more general and basic and closer to the analytic level of this paper.
When is it rational to delegate a task to an agent?
This depends on several factors. Is the client able to perform the task? Or is it strongly dependent on the
contractor for the task? How much does the client depend on the contractor: are there possible alternative
contractors? In fact, in our view an agent is less dependent on another the more alternatives it has, and
these alternatives increase its negotiation power [24]. So it is rational to delegate the task to the
contractor iff the value for the client of the goal (achievable by the delegated task) is greater than the cost
of delegation for the client itself, and if there are no better alternatives (lower costs with other possible
contractors).
By delegation costs we mean the cost of delegating in itself (for example the cost of communicating or of
influencing the contractor), plus the cost implied by the contractor's conditions for adoption (e.g.
future reciprocation, or exchange, or reward), plus the cost of possible delegation conflicts (see §4).
We consider now the case in which the client is able to achieve the goal of the task by itself: in this case it
is rational to delegate if it is preferable (weak dependence): i.e. if the difference between the value of the
goal and the cost of delegation is greater than the difference between the value of the goal and the cost of
the performance by the client.
However, this is not enough for a rational delegation. In fact there are also probability factors relative to
the success and to the costs. In particular, what is very important is the client's trust in the contractor's
capability and willingness. It is rational for the client to delegate a task to the contractor iff the client's
trust in the contractor's action (and hence in the possibility of realizing the goal through the contractor's
action) is greater than its trust in the alternatives. So the rationality of delegation is reduced to the level of
trust the agent requires for delegating. If the client is absolutely sure about the contractor's performance,
what we just said holds; but if the client is not so sure, it is a matter of degree of trust: delegation is in
fact a risk, a bet on the contractor. The client's bet is rational depending on the probability of winning, on
the costs, and on the risks.
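To make this decision rule concrete, here is a minimal Python sketch (our illustration, not the authors' formalism), assuming scalar goal values and costs and a probability-like degree of trust; all names are ours:

    # Sketch of the rational-delegation rule discussed above: choose the option
    # with the best trust-weighted expected surplus, where "self" is one option.

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str          # "self" or a contractor's name
        cost: float        # cost of delegating (or of performing it yourself)
        trust: float       # estimated probability of success (degree of trust)

    def best_choice(goal_value: float, options: list[Option]) -> Option:
        # Expected surplus = trust-weighted value of the goal minus the cost.
        return max(options, key=lambda o: o.trust * goal_value - o.cost)

    choice = best_choice(
        goal_value=10.0,
        options=[
            Option("self", cost=6.0, trust=0.9),   # weak dependence: A could do it
            Option("B",    cost=2.0, trust=0.8),   # cheaper, slightly riskier
            Option("C",    cost=1.0, trust=0.4),   # cheapest, untrustworthy
        ],
    )
    print(choice.name)  # -> "B": delegation is rational here

With these numbers, delegating to B beats both self-performance and the cheaper but untrustworthy C, which is exactly the trade-off between cost and degree of trust discussed above.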
As for the rationality of adoption, we can say: autonomous agents are not automatically benevolent. They
adopt others' goals, and they spend their resources for the others, only for their own motives (either selfish or
altruistic). Among autonomous agents, adoption is always subordinated and instrumental to the motives
of the agent: agents are self-interested or self-motivated (although not necessarily selfish) [11]. Thus
adopting and helping others is rational when the costs of adoption (spent resources and renounced goals)
are less than the utility of the achieved motives (the reasons/goals for adopting), and when this difference is
greater than that of the alternative choices (of not adopting and pursuing one's own goals, if any).
However, this is the most obvious aspect of rationality in help. What is more interesting here is to
connect rationality to our theory of the levels of adoption and possible conflicts (see §2.10).
2.1. Plan Ontology
Let Act = {α_1, ..., α_n} be a finite set of actions, and Agt = {A_1, .., A_n, B, C, ...} a finite set of agents. Each
agent has an action repertoire, a plan library [17, 21], resources, goals (in general by "goal" we mean a
partial situation (a subset) of the "world state"), beliefs, motives, interests [11].
The general plan library is Π = Π_a ∪ Π_d, where Π_a is the abstraction hierarchy rule set and Π_d is the
decomposition hierarchy rule set. As usual, for each action there are: body, preconditions, constraints,
results. (Notice that we do not call body the decomposition rule of a plan, as is customary, but only the
procedural attachment of the elementary actions and their procedural composition in the case of complex
actions.)
We call α a composed action (plan) in Π if in Π_d there is at least one rule (called a reduction rule):
α -> α_1, ..., α_n. (The actions α_1, ..., α_n are called component actions of α. They could be either
elementary or complex actions.)
We call α an abstract action in Π if in Π_a there is at least one rule: α --> α_1
(α_1 is called a specialized action of α).
An action α ∈ Act is called an elementary action in Π if there is no rule r in Π such that α is the left part
of r.
We will call BAct (Basic Actions) the set of elementary actions in Π, and CAct (Complex Actions) the
remaining actions in Act: BAct ⊆ Act, CAct = Act - BAct.
Given α_1, α_2 and Π_d, we can say that α_1 dominates α_2 (or α_2 is dominated by α_1) if there is a set of
rules (r_1, .., r_m) in Π_d such that: (α_1 = Lr_1) ∧ (α_2 ∈ Rr_m) ∧ (Lr_i ∈ Rr_{i-1}),
where Lr_j and Rr_j are, respectively, the left part and the right part of the rule r_j, and 2 ≤ i ≤ m. We can say
that α_1 dominates α_2 at level k if the set (r_1, .., r_m) includes k rules (that is to say, m = k). In the same way
it is possible to define the dominance relation between two actions in the abstraction hierarchy rule set.
Hereafter, we will consider the dominance relation in the decomposition hierarchy rule set Π_d alone.
We denote as:
- Π_A, A's plan library, and
- Act_A, the set of actions known by the agent A (Act_A ⊆ Act), i.e. all the actions included in Π_A.
The set of irreducible actions (through decomposition or specification rules) included in Π_A is composed
of two subsets: the set of actions that A believes to be elementary actions (BAct_A) and the set of actions
that A believes to be complex but for which it has no reduction rules (NRAct_A: Non-Reduced Actions).
Then BAct_A ⊆ Act and possibly BAct_A ≠ BAct.
In fact, given an elementary action, an agent may (or may not) know the body of that action. We define
as the skill set of an agent A, S_A, the actions in BAct_A whose body is known by A (the action repertoire of
A). S_A ⊆ BAct_A, and ∪ S_{A_i} (over all A_i ∈ Agt) ⊆ BAct.
In sum, an agent A has its own plan library, Π_A, in which some actions (CAct_A and NRAct_A) are
complex actions (and it knows the reduction rules of the actions in CAct_A) while some other actions (BAct_A) are
elementary actions (and it knows the body of a subset - S_A - of them).
A has a complete executable know-how of α if either α ∈ S_A or in Π_A there is a set of rules (r_1, .., r_m) able
to reduce α into (α_1, .., α_k), with α_i ∈ S_A for each 1 ≤ i ≤ k. We can define an operator CEK(A,α) that
returns (α) if α ∈ S_A, or (α_1, .., α_k) if there are rules able to reduce α as described above. Then
CEK(A,α) ≠ ∅ (∅ is the empty set) when A has at least a complete executable know-how of α.
In fact, CEK(A,α) characterizes the executive autonomy of the agent A relative to α.
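A minimal sketch of how CEK could be computed, assuming a plan library encoded as a dictionary of reduction rules and a skill set of elementary actions (the encoding is ours; the paper gives no algorithm):

    # Sketch (our reconstruction) of the CEK operator: it returns the skill-level
    # reduction of an action alpha if agent A has a complete executable know-how
    # of it, and the empty tuple otherwise.

    def cek(alpha, skills, rules):
        """skills: elementary actions whose body A knows (S_A);
        rules: A's reduction rules, mapping an action to its components (Pi_A)."""
        if alpha in skills:
            return (alpha,)
        if alpha in rules:
            parts = []
            for component in rules[alpha]:
                reduced = cek(component, skills, rules)
                if not reduced:          # a component cannot be reduced to skills
                    return ()            # -> no complete executable know-how
                parts.extend(reduced)
            return tuple(parts)
        return ()                        # alpha unknown or non-reducible (NRAct_A)

    # Example: "make-pasta" reduces to skills, "build-house" does not.
    rules = {"make-pasta": ["boil-water", "cook"], "build-house": ["lay-bricks"]}
    skills = {"boil-water", "cook"}
    print(cek("make-pasta", skills, rules))   # ('boil-water', 'cook')
    print(cek("build-house", skills, rules))  # ()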
To execute an action α means:
- to execute the body of α, if α is an elementary action, or
- to execute the body of each elementary action to which α can be reduced (through one of the possible
sequences of rules in Π), if α is not an elementary action.
A simplifying assumption is: given any two agents A and B and any elementary action α, with α ∈ S_A and
α ∈ S_B, the body of α is the same for A and B: an elementary action cannot be performed in different
ways.
From the previous assertions it follows that an action α might be an elementary action for a given agent
A while it might be a plan for another agent B. Moreover, the same plan might have different reduction
rules for different agents.
Agents execute actions to achieve goals: they look up in their memory the alternative actions/plans fitting
their goals, and select and execute one of them.
Given some world state c, we will call R(α,c) the operator that, when applied to an action α and to c,
returns the set of results produced by α (when executed alone). When the relevant world state in which
an action is applied changes, the action's results will change. Different conditions and results characterize
different actions. R(α,c) may thus be denoted as R(α), because c is defined univocally.
We define P(α) = Pc(α) ∪ Cn(α), where Pc is the operator that, when applied to any action α, returns the
set of preconditions of α, and Cn(α) is the set of constraints of α. An action α can be executed if the
preconditions and constraints of α are satisfied.
R_A(α) returns the results that A believes α will produce when executed. R_A(α) might (or might not)
correspond to R(α); however, for the sake of simplicity, when an action has been executed, each agent
in Agt has the same perception of its results: precisely R(α).
We call relevant results of an action for a goal the subpart of the results of that action which corresponds
to the goal; more formally, given α and g, we define the operator Rr as follows:
Rr(α,g) = {g_i | g_i ∈ g} if g ⊆ R(α), = ∅ otherwise.
Therefore, the same action has different relevant results when used for different goals.
Let us suppose that α is a component action of α' and Rr(α',g) ≠ ∅; we define as pertinent results of α in α'
for g, Pr(α,α',g), the results of α useful for that plan α' aimed at achieving goal g; they correspond
to a subset of R(α) such that:
Pr(α,α',g) = {q_i ∈ R(α) | (q_i ∈ Rr(α',g)) ∨ ((q_i ∈ P(α_1)) ∧ (α' dominates α) ∧ (α' dominates α_1) ∧ (α_1 ≠ α))};
in other terms, an action α is in a plan α' (aimed at a goal g) either because some of its results are
relevant results of α' (aimed at g) or because some of its results produce the preconditions of another
action α_1 in that plan.^1
The pertinent results of an action α in α' represent the real reason for which that action is in that
plan α'. In other words, a plan is not an arbitrary list of actions: an action α is in a plan α' if and only if
it contributes, directly or indirectly, to the intended results of α'.
Considering the abstraction relation, we can say that if α is a specialized action of α':
∀ q_i ∈ Rr(α,g), ∃ q_j ∈ R(α') | (q_i is a specialization of q_j).
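As a small illustration of the results operators, here is a sketch of Rr under our simplifying encoding of world states and results as sets of literals (the operator name follows the text; the encoding is our assumption):

    # Sketch of Rr(alpha, g): keep the part of alpha's results corresponding to
    # goal g, empty unless alpha achieves the whole goal (g subset of R(alpha)).

    def relevant_results(results: set, goal: set) -> set:
        return set(goal) if goal <= results else set()

    r_alpha = {"water-boiling", "pot-hot", "stove-on"}          # R(alpha)
    print(relevant_results(r_alpha, {"water-boiling"}))          # {'water-boiling'}
    print(relevant_results(r_alpha, {"water-boiling", "pasta-done"}))  # set(): goal not fully met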
2.2. Types and levels of delegation
Delegation (if realized between two cognitive agents) is generally a social action [11, 24], and also a
meta-action, since its object is an action. We introduce an operator of delegation with four parameters:
Delegates(A B d), where A,BAgt, =(,g), d=deadline. This means that A delegates the task to B
with the deadline d. In the following we will put aside both the deadline of , and the fact that, in
delegating , A could implicitly delegate also the realization of preconditions (which normally implies
some problem-solving and/or planning).

^1 Let us define as temporary results of an action α in a plan α' the results of α that are not results of α':
Tr(α,α') = {q_i ∈ R(α) | q_i ∉ R(α')}.
We define as transitory results (or pertinent temporary results) of an action α in a plan α' aimed at goal g:
TRr(α,α',g) = Tr(α,α') ∩ Pr(α,α',g).
They correspond to those results of α that enable another action α_1 in α' but that are not results of α' aimed at goal g:
TRr(α,α',g) = {q_i ∈ R(α) | (q_i ∉ R(α')) ∧ (q_i ∈ P(α_1)) ∧ (α' dominates α) ∧ (α' dominates α_1) ∧ (α_1 ≠ α)}.
Let us define as relevant results of α in α' aimed at g: Rr(α,α',g) = {q_i ∈ R(α) | q_i ∈ Rr(α',g)}.
In fact, in our model delegating a task consists in specific mental states of (at least one of) the
interacting agents, in the actions that follow, and in their results both in the world and in the cognitive
states of the agents themselves.
2.3. Types of delegation based on the interaction between the agents
On the basis of the kind of interaction between the delegating agent and the delegated one, it is possible
to define various types of delegation. The five basic types of delegation are shown in table 1 [6].
                      Unilateral                        Acceptance-based
by Exploitation       no mutual belief &                mutual belief &
                      passive achievement of τ          passive achievement of τ
by Induction          no mutual belief &                mutual belief &
                      active achievement of τ           active achievement of τ
by Agreement                                            mutual belief &
                                                        mutual active achievement of τ
Table 1
We call weak delegation (line 1, column 1, in Table 1) delegation based on exploitation, on the passive
achievement by A of the task. In it there is no agreement, no request or even influence: A is just
exploiting in its plan a fully autonomous action of B. In fact, A has only to recognize the possibility that
B will realize τ by itself and that this realization will be useful for A, which "passively" awaits the
realization of τ.
More precisely,
a) The achievement of τ (the execution of α and its result g) is a goal of A.
b) A believes that there exists another agent B that has the power of [8] achieving τ.
c) A believes that B will achieve τ in time.
c-bis) A believes that B intends to achieve τ in time (in the case that B is a cognitive agent).
d) A prefers^2 to achieve τ through B.
e) The achievement of τ through B is the goal of A.
f) A has the goal (relativized to (e)) of not achieving τ by itself.
We consider (a, b, c, and d) what the agent A views as a "Potential for relying on" the agent B, its trust;
and (e and f) what A views as the "Decision to rely on" B. We consider "Potential for relying on" and
"Decision to rely on" as two constructs temporally and logically related to each other.
We call mild delegation (line 2, column 1, in Table 1) the delegation based on induction, on the active
indirect achievement by A of the task. In it there is no agreement, no request, but A is itself eliciting,
inducing in B the desired behaviour in order to exploit it.
More precisely,
a') The achievement of τ is a goal of A.
b') A believes that there exists another agent B that has the power of achieving τ.
c') A does not believe that B will achieve τ by itself.
d') A believes that if A realizes an action α' then, as a consequence, it will be possible for B to achieve τ.
e') A prefers to achieve τ through B.
f') A intends to do α'.
g') The achievement of τ through B is the goal of A.
h') A has the goal (relativized to (g')) of not achieving τ by itself.
We consider (a', b', c', d' and e') what the agent A views as a "Potential for relying on" the agent B; and
(f', g' and h') what A views as the "Decision to rely on" B.

^2 This means that, either relative to the achievement of τ or relative to a broader goal g' that includes the achievement
of τ, A believes itself to be dependent on B [24].
We will call strict delegation (line 3 in Table 1) delegation based on explicit agreement, on the active
achievement by A of the task through an agreement with B. It is based on B's adopting A's task in
response to A's request/order. Strict delegation is always connected with strict adoption (see below): we
will give a more detailed description of strict delegation in the "delegation-adoption" paragraph.
2.4. Adoption (Help)
In analogy with delegation we introduce the corresponding operator for adoption: Adopts(B A τ). This
means that B adopts the task τ for A. Let us now consider some dimensions of the adoption notion. In
fact, to adopt a task means, as for delegation, to identify (to describe) the mental states of the interacting
agents, the actions that follow and their results in the world and in the cognitive state of the agents.
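The two operators can be encoded as plain records, which is how we would represent them in an implementation sketch (our encoding; the paper treats them as logical operators). A task is the pair τ = (α, g), and the deadline is carried but, as in the text, otherwise ignored:

    # Our illustrative encoding of the Delegates and Adopts operators as records.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Task:
        alpha: Optional[str]    # the action, possibly unspecified (open delegation)
        g: Optional[frozenset]  # the goal state, possibly unspecified

    @dataclass(frozen=True)
    class Delegates:
        client: str
        contractor: str
        task: Task
        deadline: Optional[float] = None

    @dataclass(frozen=True)
    class Adopts:
        contractor: str
        client: str
        task: Task

    d = Delegates("A", "B", Task("make-pasta", frozenset({"pasta-done"})))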
2.5. Types of adoption based on the interaction between the agents
On the basis of the kind of interaction between the adopting agent and the adopted one, it is possible to
define various types of adoption. The three basic types of adoption are shown in Table 2.
                      Unilateral                        Acceptance-based
by Spontaneous        no mutual belief &                mutual belief &
Initiative            passive achievement of τ          passive achievement of τ
by Agreement                                            mutual belief &
                                                        mutual active achievement of τ
Table 2
Let us call weak adoption (line 1, column 1, in Table 2) adoption based on spontaneous initiative, on the
interfering (wished-for) action of B that permits the passive achievement by A of the task. There is no
agreement, no information or even influence: B unilaterally and spontaneously has the goal of
performing a given action because this action is either contained in A's plan or is (according to B) an
interest of A itself.
Notice that this kind of help may even be ignored by A. In other words, B can adopt some of A's goals
independently of A's delegation or request.
More precisely,
a'') B believes that the achievement of τ is a goal of A.
b'') B believes that B has the power of achieving τ.
c'') B intends to achieve τ for A (i.e., B has the goal of achieving τ, relativized to the previous beliefs).
In analogy with the weak delegation, we consider (a'' and b'') what the agent B views as a "Potential for
weak adoption" of the agent A; and (c'') what B views as the "Choice of weakly adopting" A.
We will call strict adoption (line 2 in Table 2) the adoption based on explicit agreement, on the active
achievement by B of the task τ delegated/requested by A. It is based on B's adopting A's task in response
to A's request/order (see the "delegation-adoption" paragraph).
2.6. Delegation-Adoption (Contract)
In Strict Delegation, the delegated agent knows that the delegating agent is relying on it and accepts the
task; in Strict Adoption, the helped agent knows about the adoption and accepts it (very often both these
acceptances are preceded by a process of negotiation between the agents).
In other words, Strict Delegation requires Strict Adoption, and vice versa: they are two facets of a unitary
social relation that we call "delegation-adoption" or "contract".^3
There is a delegation-adoption relationship between A and B for τ, when:
"Potential for request of contract" from A to B:
- From A's point of view:
a) The achievement of τ is a goal of A.
b) A believes that an agent B exists that has the power of achieving τ.
d) A prefers to achieve τ through B.
- From B's point of view:
b'') B believes that B has the power of achieving τ.
After the "Agreement":
A series of mutual beliefs (MB) are true (line 3, column 2, in Table 1, and line 2, column 2 in Table 2):
(MB A B (a, b, c, d, e, f, h, a'', b'', c''))
where: h) B is socially committed to A to achieve τ for A.
The delegation/adoption relation is the core of the social commitment [25, 13, 1] relationship. Thus, it is
a basic ingredient for joint intentions, true cooperation and team work [18, 15]. In other words, we claim
that in collaborative activity each partner relies on the other partners, "strongly" delegating to them some
tasks, and, vice versa, each partner adopts some tasks from the other partners.
2.7. Types of delegation based on the specification of the task
Another important dimension of the delegation/adoption problem concerns how the task is specified in
the delegation action; how this specification influences the contractor's autonomy, how different
interpretations of the specification of the task (or different levels of granularity in the interpretation of the
task specification) for client and contractor could produce misunderstanding and conflicts.
The object of delegation (τ) can be minimally specified (open delegation), completely specified (close
delegation) or specified at any intermediate level.
Let us consider two extreme main cases:

^3 In [16] particular relevance is given to the delegation/adoption notions; however, in that interesting work a series of
assumptions about the delegation/adoption model reduces the effective value that these basic notions have for a general
cooperation theory. For example, the delegation (called task delegation) concerns only a set of actions: it is not possible
for an agent to delegate either states of the world (results) or complex actions (of which the delegating agent does not
know the breakdown into elementary actions). In practice, it is not possible for the client to rely on the contractor's
problem solving abilities. Moreover, in the adoption (task adoption) it is necessary that the task be already a contractor's
goal (it must be present in the "potential for cooperation" of the contractor). In this way the actual definition of the term
adoption is distorted: in what sense does an agent adopt a goal if this is already included in its set of goals?
We introduce the more basic notions of weak adoption and weak delegation. Our notions are based on a structural theory
of plans. In our treatment the contractor acquires a new goal (changes its mind) and does so just because it is a goal of
the other agent (the client). This makes our notion of adoption more dynamic and flexible, covering several types of
social relationship, and also clearly related to the notion of influencing [8]. We do not presuppose any coincidence of
goals.
- Pure Executive (Close) Delegation
From the client's (contractor's) point of view: when either α ∈ S_A or α ∈ BAct_A (α ∈ S_B or α ∈ BAct_B), or
g is the relevant result of α and α ∈ S_A or α ∈ BAct_A (α ∈ S_B or α ∈ BAct_B). In other words, the delegating
(delegated) agent believes it is delegating (adopting) a completely specified task: what A expects from B
is just the execution of an elementary action (what B believes A delegated to it is simply the execution of
an elementary action).
- Open Delegation
From the client's (contractor's) point of view: either α ∈ CAct_A or α ∈ NRAct_A (either α ∈ CAct_B or
α ∈ NRAct_B); and also when g is the relevant result of α and α ∈ CAct_A or α ∈ NRAct_A (α ∈ CAct_B or
α ∈ NRAct_B).
In other words, the client (contractor) believes it is delegating (adopting) an incompletely specified task:
either A (B) is delegating (adopting) a complex or abstract action, or it is delegating (adopting) just a
result (state of the world). The agent B can (or must) realize the delegated (adopted) task by exerting its
autonomy.
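The close/open distinction can be read directly off the action classes of Section 2.1, as in this sketch (our encoding of an agent's action sets as Python sets):

    # Sketch: classify a delegation as close or open from one agent's viewpoint.

    def delegation_kind(alpha, agent_sets) -> str:
        if alpha in agent_sets["S"] or alpha in agent_sets["BAct"]:
            return "close"     # a completely specified, elementary task
        if alpha in agent_sets["CAct"] or alpha in agent_sets["NRAct"]:
            return "open"      # a complex/abstract action: discretion is needed
        return "open"          # a bare goal state g: fully open

    client_view = {"S": {"stir"}, "BAct": {"stir", "pour"},
                   "CAct": {"make-pasta"}, "NRAct": {"fix-engine"}}
    print(delegation_kind("pour", client_view))        # close
    print(delegation_kind("make-pasta", client_view))  # open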
The importance of open delegation in collaboration theory should be examined.
On the one hand, we would like to stress that open delegation is not only due to the client's preference
(utility) or limited know-how or limited skills. Of course, when A is delegating τ to B, it is dependent on
B as for τ [24]: it needs B's action for some of its goals (either some domain goals or goals like saving
time, effort, resources, etc.). However, open delegation is fundamental because it is also due to A's
ignorance about the world and its dynamics. In fact, frequently enough it is not possible or convenient to
fully specify τ, because some local and updated knowledge is needed in order for that part of the plan to
be successfully executed.
Open delegation is one of the bases of the flexibility of distributed and MA plans. To be radical,
delegating actions to an autonomous agent always requires some level of "openness": the agent at least
cannot avoid monitoring and adapting its own actions, during their execution.
Moreover, the distributed character of the MA plans derives from open delegation.
As we saw, A can delegate to B either an entire plan or some part of it (partial delegation). The
combination of partial delegation (where the contractor can ignore the other parts of the plan) and open
delegation (where the client can ignore the sub-plan chosen and developed by the contractor) creates the
possibility that A and B collaborate in a plan that they do not share and that nobody knows fully: that is
to say a truly distributed plan [15, 11]. However, for each part of the plan there will be at least one agent
that knows it.
The object of the delegation can be a practical or domain action as well as a meta-action (searching,
planning, choosing, problem solving, and so on).
When A is open-delegating some domain action to B, it is necessarily also delegating to B some meta-
action: at least searching for a plan, applying it, and sometimes deciding between alternative solutions.
We call B's discretion concerning τ the fact that some decision about τ is delegated to B.
Given this distinction between kinds of delegation, also the problem of the rationality of delegation could
be reviewed in the same perspective by considering the relationship between rationality and the level of
delegation.
It is rational to closely delegate when we are able to specify the plan as it is needed in the context of
execution, and when the cost of specifying the plan and communicating it is not high, or when we do not
trust the ability/willingness of the contractor to find or develop a plan for the goal (of the task), or the
contractor's understanding of our needs.
So, rationality in the level of delegation strictly depends on the client's model of the contractor (see §3)
and on the client's knowledge of the domain plans and of the execution situation.
2.8. Subdelegation and delegation properties
We will say that B subdelegates the task τ if there are at least two other agents (A, C ∈ Agt) such that:
Delegates(A B τ) ∧ Delegates(B C τ)
where A, B and C are different agents, and there are no constraints on the performer(s) of the action α
(or of α' if τ = g and an α' exists such that Rr(α',g) ≠ ∅). B is delegated τ, but by means of whom/what it
will achieve τ is not defined.
- In strict delegation, the responsibility for the achievement of τ, without fixing the performer(s) of the
action(s), is assigned to B through either explicit or implicit agreement;
- in weak delegation, B can achieve τ and A relies on this achievement, although τ could be realized
through other agents.
When A is aware of B's subdelegation we have an interesting special case of open delegation.
We can say that B subdelegates a subtask τ' if: Delegates(A B τ) ∧ Delegates(B C τ'), where τ' is a subpart
of τ.
It is important to distinguish the previous definition of subdelegation from the delegation to delegate: in
the latter case there is a prescription to realize the specific meta-action of delegation, and the task of the
contractor is to establish another delegation relation.
The client can allow the contractor to subdelegate the task. We can have at least three possibilities
relative to the task and three possibilities relative to the subdelegated agent (see the sketch after these lists).
Considering the task, in the client's view the contractor:
- can/must subdelegate the entire delegated task and/or
- can/must subdelegate just a specified part of the delegated task
- cannot/must not sub-delegate any part of the delegated task.
Considering the sub-delegated agent, in the client's view the contractor:
- can/must subdelegate the task to an agent in a set specified by the client;
- can/must subdelegate the task to anybody;
- cannot/must not sub-delegate the task.
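These two orthogonal dimensions can be encoded as a small permission policy attached to a delegation, as in the following sketch (the type and field names are ours):

    # Sketch: subdelegation permissions as a policy record with two dimensions.

    from dataclasses import dataclass
    from enum import Enum

    class TaskScope(Enum):
        WHOLE = "may/must subdelegate the entire task"
        PART = "may/must subdelegate only a specified part"
        NONE = "may not subdelegate any part"

    class AgentScope(Enum):
        LISTED = "only to agents in a client-specified set"
        ANYONE = "to anybody"
        NOBODY = "to no one"

    @dataclass
    class SubdelegationPolicy:
        task_scope: TaskScope
        agent_scope: AgentScope
        allowed_agents: frozenset = frozenset()

    policy = SubdelegationPolicy(TaskScope.PART, AgentScope.LISTED, frozenset({"C"}))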
Finally, we would like to underline some elementary properties of the delegation relation.
Delegation as we defined it here is neither a symmetrical nor a transitive relation.
In fact:
- from the fact that Delegates(A B τ) it does not follow that Delegates(B A τ);
- from the fact that Delegates(A B τ) and Delegates(B C τ) it does not follow that Delegates(A C τ).
In fact, it should not be overlooked that delegation is defined in terms of a set of cognitive states of the
client and/or the contractor: from the conjunction of the fact that A believes and intends that B will do τ,
and the fact that B believes and intends that C will do τ, it does not follow that A believes and intends that
C will do τ.
2.9. Delegation of the control
The control (or check-up) is an action aimed at ascertaining whether another action has been successfully
executed (or whether a given state of the world has been realized or maintained).
Controlling an action means verifying that its relevant results hold (including the execution of the action
itself).
Given Rr(α,g) of any action α ∈ Act, a set of actions Act_c ⊆ Act - that we call "controlling actions of α
aimed at g" - may be associated with it. Each action in Act_c can be either an elementary or a complex
action.
The relevant results of each α_k ∈ Act_c for the goal of controlling α can be indicated through:
Rr(α_k, (control g)). It returns the truth values of each g_i ∈ g in Rr(α,g).
Plans typically contain control actions of some of their actions.
When the client is delegating a given object-action, what about its control actions?
Considering, for the sake of simplicity, that the control action is executed by a single agent, when
Delegates(A B τ) there are at least four possibilities:
i) ∃ α_k ∈ Act_c | Delegates(A B α_k)
(i.e., A delegates the control to B: the client does not (directly) verify the success of the action
delegated to the contractor);
ii) ∃ α_k ∈ Act_c, ∃ X ∈ Agt (with (X ≠ A) ∧ (X ≠ B)) | Delegates(A X α_k)
(i.e., A delegates the control to a third agent);
iii) for each α_k ∈ Act_c and for each X ∈ Agt, α_k is not executed
(i.e., A gives up the control: nobody is delegated to control the success of α);
iv) ∃ α_k ∈ Act_c | (α_k is executed) ∧ (Agent(α_k) = A)
(i.e., A maintains the control for itself).
Each of these possibilities could be explicit or implicit in the delegation of the object-action, in the roles
of the agents (if they are part of a social structure), in the preceding interactions between the client and
contractor, etc.
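The four arrangements can be summarized in a small dispatch sketch (our names) that answers the question "who, if anybody, is expected to execute the controlling action α_k?":

    # Sketch of the four control arrangements for a delegated object-action.

    from enum import Enum, auto

    class ControlMode(Enum):
        TO_CONTRACTOR = auto()   # (i)   A delegates the control to B
        TO_THIRD_PARTY = auto()  # (ii)  A delegates the control to some X
        GIVEN_UP = auto()        # (iii) nobody checks the success of alpha
        KEPT = auto()            # (iv)  A performs the control action itself

    def controller(mode: ControlMode, client="A", contractor="B", third="X"):
        return {ControlMode.TO_CONTRACTOR: contractor,
                ControlMode.TO_THIRD_PARTY: third,
                ControlMode.GIVEN_UP: None,
                ControlMode.KEPT: client}[mode]

    print(controller(ControlMode.TO_THIRD_PARTY))  # 'X'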
2.10. Levels of adoption relative to the delegated task
In order for the adoption to be an effective help (deep cooperation), the contractor should consider/foresee
the client's plan (in which the delegated task is inserted), its goals and interests and, on the basis of the
circumstances, deeply understand/improve/preserve the requested help. In this way it is possible to
classify the contractor's adoption at various levels:
Literal help
The contractor adopts exactly what has been delegated by the client (see fig. 1). More formally:
Delegates(A B τ=(α,g)) ∧ Adopts(B A τ).
[fig. 1: within the client's larger plan α'(g'), the adopted task coincides with the delegated task α(g)]
Overhelp
The contractor goes beyond what has been delegated by the client without changing the client's plan
(see fig. 2). More formally:
Delegates(A B τ=(α,g)) ∧ (α' dominates α) ∧ Adopts(B A τ_1) ∧ (α' dominates-or-equals α_1) ∧ (α_1 dominates α)
[fig. 2: within the client's plan α'(g'), the adopted task α_1(g_1) dominates the delegated task α(g)]
Critical help
The contractor achieves the relevant results of the requested plan/action, but modifies the plan/action
(see fig. 3). More formally:
Delegates(A B τ=(α,g)) ∧ (α' dominates α) ∧ Adopts(B A τ_x) ∧ (τ_x=(α_x,g))
In fact, what happens is that B adopts g, that is, it is sufficient for B to find in Act_B any action α_x
such that Rr_B(α_x,g) ≠ ∅.
[fig. 3: within the client's plan α'(g'), the delegated task α(g) is replaced by an alternative adopted action α_x(g) achieving the same goal]
Critical overhelp
The contractor implements an overhelp and in addition modifies/changes the plan/action (see fig. 4).
More formally:
Delegates(A B τ=(α,g)) ∧ (α' dominates α) ∧ Adopts(B A τ_x) ∧ (τ_x=(α_x,g'))
In fact, what happens is that B adopts the higher goal g', that is, it is sufficient for B to find in Act_B any
action α_x such that Rr_B(α_x,g') ≠ ∅.
[fig. 4: the delegated task α(g) belongs to the client's plan α'(g'); the adopted action α_x(g') is an alternative achieving the higher goal g']
Hyper-critical help
The contractor adopts goals or interests of the client that the client itself did not take into account: by
doing so, the contractor neither performs the delegated action/plan nor totally achieves the results that
were delegated (see fig. 5). More formally (calling IA the set of the interests of the client):
Delegates(A B τ=(α,g)) ∧ (α' dominates α) ∧ Adopts(B A τ_1) ∧ (τ_1=(α_1,g_1)) ∧ (g_1 ∈ IA).
In practice, B is preserving an interest of A that the plan/goal the client is pursuing could jeopardize.
[fig. 5: instead of the delegated task α(g) in the client's plan α'(g'), the contractor adopts a different task α_1(g_1) protecting an interest of A]
Given the above analysis of adoption levels, we can reconsider the rationality of adoption. Is it rational to
adopt a request of the client's which the contractor believes to conflict with the client's aims? Is it rational
to do as requested and delegated by the client when a better solution/plan exists for the client's delegated
goal?
We do not consider the conflicts arising between B's personal interest and the client's request. We are
mainly interested in possible conflicts arising from the contractor's willingness to help. The conflict is
just due to the contractor's intention of helping the client as much as possible: this is why it is over- or
critically-helping.
Precisely in this perspective, to help (over)critically is rational: since the contractor wants to help, it is
better to help in a useful or optimal way. If the delegated task is wrong, it is not rational to waste one's
resources on a useless plan; if the plan is not wrong but there is a better plan, and the plan is better also
with regard to the ratio between costs and benefits, it is rational to change the plan.
In sum, for a helpful rational agent, over-help or critical help are unavoidable.
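A rough classifier for the five help levels can be sketched as follows, assuming tasks are (action, goal) pairs, dominance is given as a relation over actions, and IA is the set of the client's interests (all simplifications of ours):

    # Sketch: classify B's adoption relative to A's delegated task tau=(alpha,g).

    def help_level(delegated, adopted, dominates, interests):
        d_alpha, d_goal = delegated
        a_alpha, a_goal = adopted
        if adopted == delegated:
            return "literal help"
        if a_goal in interests:                   # B protects an interest of A
            return "hyper-critical help"
        if a_goal == d_goal:                      # same goal, different action
            return "critical help"
        if (a_alpha, d_alpha) in dominates:       # B climbs A's own plan
            return "overhelp"
        return "critical overhelp"                # new action for a higher goal

    dominates = {("make-dinner", "make-pasta")}
    print(help_level(("make-pasta", "pasta-done"),
                     ("make-dinner", "dinner-done"), dominates, set()))  # overhelp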
2.11. Levels of agency
On the basis of the previous analysis of the delegation and adoption dimensions, it is possible to identify
various levels and types of agency.
Types and levels of delegation characterize the autonomy of the delegated agent. There are at least two
meanings of autonomy: one is equal to self-sufficiency, not being dependent on others for one's own
goals [24]; on this side, the less dependent B is on A regarding the resources necessary for the task, the
more autonomous B is of A regarding that task.
The other meaning is related to actions and goals and to their levels. One could distinguish between
performance or executive autonomy (the agent is not allowed to decide anything but the execution of
the entire delegated plan [11]: in our terms, given an agent A and a plan α, CEK(A,α) ≠ ∅, where α
is completely specified in the delegation itself); planning autonomy (the agent is allowed to plan by
itself: CEK(A,α) ≠ ∅, where α is not completely specified in the delegation itself); and goal autonomy
(the agent is allowed to have/find goals). Here we ignore the autonomous goals of the delegated agent, so
we can characterise different degrees of autonomy in delegation as follows.
The autonomy of the contractor vis-à-vis the client increases along various dimensions:
- the more open the delegation (the less specified the task is), or
- the more control actions are given up or delegated to B by A, or
- the more decisions are delegated (discretion),
the more autonomous B is of A regarding that task.
Open delegation presupposes some cognitive skills in the contractor. The same holds also for certain
kinds of adoption, which presuppose B's abilities of plan abduction, plan recognition and agent modelling.
3. Agent modelling
Both delegation and adoption might require some level of agent modelling.
We examine which model the client should have of the contractor, given Delegates(A B τ), and, vice versa,
which model the contractor should have of the client, given Adopts(B A τ). In both cases, depending on
the beliefs concerning τ, and on the level of delegation and adoption, specific goals or competencies are
attributed to the other agent.
In other terms, it is possible to predict different needs of agent-modelling, from the two different
structural positions in the contract relationship (we neglect how the client models the "willingness" of the
contractor):
Basically, the client should model abilities and reliability of the contractor. While in weak delegation A
must have a belief about the intention of the other (cognitive) agent, thus possibly using some intention
recognition (IR), in strict delegation (based on agreement) A usually does not need either intention or
plan recognition (PR) about B.
Conversely, the contractor in general does not need to model the client's capabilities, while (in deep
cooperation) it needs to model the client's plans and goals quite well; it will thus apply PR and IR.
This asymmetry has, of course, some exceptions:
- sometimes the contractor should model the client's abilities in order to realize its help better (consider
overhelp due to the necessity of realizing tasks that the client did not delegate but is not really able to do;
consider an underhelp due to the fact that the client is able to do part of τ itself, and B lets it do it);
- sometimes it is useful for the client to apply PR to the contractor's activity, for example to monitor B's
task execution when A has not delegated the control as well. As we said, PR is necessary in weak delegation,
where A must exploit B's autonomous activity and intentions without any agreement, but PR and IR
might be necessary also to attain strict delegation, when negotiation is needed to influence and convince
the contractor.
Both at high levels of delegation and at high levels of adoption the other agent is modelled as a cognitive
agent endowed with cognitive skills, goals and plans. Only cognitive agents can be literally open-
delegated and over-helped (given our plan-based definition). Conversely, only cognitive agents can open-
delegate and over-help.
Such a modelling of the other agent is constructed on different bases:
a) previous experience of the other's behaviour (if B was once able to do α, A assumes that B is still able
now; if i was once an interest of the client, by default the contractor will believe this is still the case);
b) an explicit declaration of the agent (B declared to A that it is able to do α; A explicitly informed B of the
goals that motivate τ);
c) an implicit communication of the agent (B declared to A that it intends to do α, which, if B is sincere,
implies that B believes it is able to do α);
d) attributions to the category or role the agent belongs to [22] (if the contractor belongs to a class of
agents that are able to do α then it too is able to do α; if the client belongs to a class of agents that have
the motivation g then it too has such a goal g).
As we said a fundamental role is played by the modelling of abilities and expertise of the other agent. In
particular, in our approach the knowledge of the plan library of the other agent is critical. In our model
knowledge about actions and rules (plans) can be distributed among the agents, in a hierarchical way.
We mean that there are plans and/or actions that can be ascribed to any agent (universal competence);
others that can be ascribed to sub classes or categories of agents (like roles in organizations) that have
specialized expertise or skills (class competence); others that pertain only to certain agents (personal
competence).
This is a classical approach to User Modelling [22] which can be applied to actions and plan libraries in
agent modelling.
Note that such a plan-ascription is fundamental both for modelling the contractor's competencies and for
recognizing the client's plans.
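A toy sketch of such hierarchical plan-ascription, with competence looked up in personal, then class/role, then universal layers (the data layout is our assumption):

    # Sketch: competence ascribed to an agent is the union of the personal,
    # class/role, and universal competence layers described above.

    UNIVERSAL = {"walk", "grasp"}
    CLASS = {"cook": {"make-pasta"}, "plumber": {"fix-pipe"}}
    PERSONAL = {"B": {"make-fettuccini-marinara"}}

    def ascribed_competence(agent: str, roles: set) -> set:
        layers = [PERSONAL.get(agent, set())]
        layers += [CLASS.get(r, set()) for r in roles]
        layers.append(UNIVERSAL)
        return set().union(*layers)

    print(ascribed_competence("B", {"cook"}))
    # contains 'walk', 'grasp', 'make-pasta', 'make-fettuccini-marinara'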
3.1. Modelling the client
For simple adoption (literal help) the contractor does not need to model the client: it needs only to
understand the task (request). Indeed, as we said, deeper levels of cooperation require going beyond the
request. Thus modelling the client's plans, goals, motivations, and interests is necessary. The various adoption
levels were illustrated above; at each level it was already apparent which aspects of the client the contractor
should model: for example, B could ascribe to A certain goals or interests, and/or recognize A's plans, in
order to adopt them.
A very important problem is the following: how can the contractor recognize the client's plans in which
the delegated task (τ) is inserted? The ideal recognition of the active plan and motivations of the client would be
possible on the basis of the true plan library of the client itself. Lacking this, the contractor works on its
own plan library to recognize the plans of the other! In other words, the contractor in practice and by
default assumes that its plan library is shared.
The contractor is normally supposed to be more expert in the delegated task (to have a richer or more
correct plan library) than the client, or to have more "local", specific and updated knowledge. This is
generally true for the plans dominated by τ, not for the plans which dominate τ. It follows that in certain
cases the contractor could have expertise problems in understanding the client's higher plans (for ex.
soldier-general relationships); and, vice versa, that the client could have some problems in monitoring the
executive plans of a more expert or local contractor. In a complete theory of delegation types, one
should distinguish between delegation of tasks in which the client is as great an expert as (or a greater
expert than) the contractor, and delegation of tasks to an expert agent.
3.2. Modelling the contractor: trust
The client's model of the contractor has mainly to do with the contractor's competence and willingness.
In particular, the model of competencies is relative to domain competencies, meta- or planning
competencies, control capabilities and capabilities for subdelegation; the model of
willingness is instead related to goals, commitments, and reliability.
In our definition of delegation there is a set of client's beliefs:
b) A believes that there exists another agent B that has the power of achieving τ.
c) A believes that B will achieve τ in time.
c-bis) A believes that B intends to achieve τ in time (if B is a cognitive agent);
h) A believes that B is socially committed to A to achieve τ for A.
These beliefs are part of the agent modelling and represent the client's trust in the contractor.
Note that the notions of reliability and trust are quite different in weak and strict delegation. In fact, in
weak delegation B is reliable precisely because of its predictability (based on frequency of behaviour or
on natural laws) or, in the case of cognitive agents, because of its internal/personal commitment to a given
intention (persistence) [2], plus some regularity due to habits, scripts, roles or other constraints. Indeed,
in strict delegation reliability is also based on social commitment and on the consequent obligations
(client's rights) [7]. Thus, A should model B as honest and sincere in committing, and as loyal and
trustworthy. Not only does A believe that B will do τ, and that it will do τ because it intends to do τ, but A
also has beliefs concerning certain reasons and motives (social commitments and obligations) underlying
B's intention.
Now, given the model of a task the client delegates to the contractor, we are interested in the consequent
model of the contractor itself.
If Delegates(A B τ), A should believe about B that:
a) when τ = α:
- either CEK(B,α) ≠ ∅, or
- CEK(B,α) = ∅, but B can subdelegate to an agent C the part of the task it is unable to execute or
decompose;
b) when τ = g:
- either there is in Act_B an action α such that Rr_B(α,g) ≠ ∅ [which takes us back to case a) about α];
- or there is no action α in Act_B such that Rr_B(α,g) ≠ ∅, but Delegates(B C g).
4. Conflicts between client and contractor
For the sake of brevity, in this paper we will not consider the rich and important typology of control-related
conflicts, although several breakdowns in collaboration are due to conflicts or misunderstandings over
control.
4.1. Conflicts due to the level of adoption of the contractor
Given our characterisation of delegation and adoption and of their plan-based levels, we can derive a
basic ontology of the conflicts arising between the two agents when there is a mismatch between the intended
delegation and the intended adoption. These mismatches are due neither to simple misunderstandings of
A's request/expectation or of B's offer, nor to a wrong or incomplete plan/intention recognition by B.
Conflicts due to the contractor's sub-help
The supporter or the contractor might either offer adoption of, or actually satisfy, merely a sub-goal of the
delegated task.
In other words, B does not satisfy the delegated task. Example in conversation: A: "What time is it?", B:
"I don't know." A's subgoal of getting an answer from B is satisfied, but the goal (to know the time) is not.
Example in a practical domain: A delegates to B make-fettuccini-marinara and B just makes the marinara
sauce.
This could be due to several possible causes: B is not able to do the whole task; the task is not convenient
for B; it does not want to complete the task because, for example, it believes that A is able to do a part by
itself, etc.
Let us return to what we are mainly interested in - collaborative conflicts - which come from B's
intention to help A beyond its request or delegation and to exploit its own knowledge and intelligence
(reasoning, problem solving, planning, and decision-making skills) for A [10].
Conflicts due to the contractor's over-help, critical help, critical over-help, hyper-critical help
In any case of over-, critical, and hyper-critical adoption there is apparently a conflict, since A has the goal
that B does τ, while B is doing, or intends to do, something different for A. Normally these conflicts can
be quickly solved, for two reasons. First, B's intention is to help A: it is a collaborative intention; second,
B is normally "entitled" (see below) by A, either explicitly or implicitly, to provide this deeper help, and
A is expecting this initiative and autonomy. Thus, normally there is no real conflict, since A is ready to
accept B's collaborative initiative. However, sometimes these cases trigger serious conflicts which have to
be negotiated. This is especially true in organizations and among different roles.
Leaving aside possible cases of misunderstanding between client and contractor about A's
request/expectation or B's offer (or a wrong plan/intention recognition by B), we can divide the reasons for
conflict (i.e., for A being against B's initiative) into two main classes:
i) Trouble for A's goals
B can jeopardize A's goal achievement for at least two reasons:
i1) Lack of coordination
A plan is composed of many actions (assigned to several agents, when there is a partial delegation), so
a unilateral initiative on the part of one agent to change that plan without reconsidering the general
plan might be fatal (because of interference) or lead to a waste of resources, time, etc. (because of
redundancy).
i2) Disagreement about action results
A knows or believes that the action executed by B does not bring about the results expected or
believed by B itself.
ii) Role and Status
In this case the conflict concerns the entitlement of B by A to take the initiative of changing the
delegated task. (We say that B is entitled by A to τ through the delegation Delegates(A B τ) when there is
knowledge, common to A and B, that A is committed not to oppose, and not to be astonished, if B pursues
τ [7].) For reasons of power, job, subordination, or role, B, while providing such sub-, over- or critical help,
is going beyond what it is permitted to do (according to A).
This important aspect of conflicts extends beyond the plan-based analysis of delegation we are
illustrating here.
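The two classes of reasons just discussed can be summarised in a small diagnostic sketch. The boolean inputs and names are our own illustration of the taxonomy, not part of the paper's formal apparatus:

```python
def conflict_reasons(interferes: bool, redundant: bool,
                     results_doubted: bool, entitled: bool) -> list:
    """Why might A oppose B's collaborative initiative?
    i1) lack of coordination: B's unilateral plan change interferes with the
        general plan, or wastes resources/time through redundancy;
    i2) disagreement about action results: A believes B's action will not
        bring about the results B expects;
    ii) role and status: B is not entitled by A to change the delegated task."""
    reasons = []
    if interferes or redundant:
        reasons.append('lack of coordination')
    if results_doubted:
        reasons.append('disagreement about action results')
    if not entitled:
        reasons.append('role and status (no entitlement)')
    return reasons
```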
4.2. Conflicts over task specification
Implicit aspects of the delegation (or different representations of the task specification) could produce
various possible misunderstandings between the agents. A's perspective could clash with B's point of
view: τ can be considered at different levels of complexity by the two interacting agents (see Table 3).
A's point of view                             B's corresponding point of view

τ = α with                                    (α ∈ S_B)(α ∈ BAct_B)
(α ∈ S_A)(α ∈ BAct_A)                         "It is an elementary action!" -> No Conflict
"It is an elementary action!"                 (α ∈ CAct_B)(α ∈ NRAct_B)
                                              "It is a complex action!" -> Conflict

τ = α with                                    (α ∈ S_B)(α ∈ BAct_B)
(α ∈ NRAct_A)(α ∈ CAct_A)                     "It is an elementary action!" -> Conflict
"It is a complex action!"                     (α ∈ CAct_B)(α ∈ NRAct_B)
                                              "It is a complex action!" -> No Conflict

Table 3
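In operational terms, Table 3 reduces to checking whether the two agents classify α at the same level of complexity. A minimal sketch, assuming each agent's view is summarised by a boolean ("does it treat α as elementary?"):

```python
def specification_conflict(a_sees_elementary: bool, b_sees_elementary: bool) -> bool:
    """Table 3 in code: a conflict over the specification of alpha arises
    exactly when A and B classify it at different levels of complexity."""
    return a_sees_elementary != b_sees_elementary

assert not specification_conflict(True, True)    # both elementary -> No Conflict
assert specification_conflict(True, False)       # A elementary, B complex -> Conflict
assert specification_conflict(False, True)       # A complex, B elementary -> Conflict
assert not specification_conflict(False, False)  # both complex -> No Conflict
```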
5. Conclusions
We claimed that any theory of agents, of autonomy, and of cooperation and collaboration requires an
analytic theory of delegation and adoption. We attempted to contribute to this theory with a plan-based
analysis of delegation and adoption.
This theory must be based on the analysis of actions, plans, social actions, and the agents' minds.
We presented our definition of delegation, adoption, and tasks. We characterized different kinds and
levels of delegation and adoption. We discussed the different kinds of agents and their degrees of
autonomy.
We analyzed how the client models the contractor and vice versa.
We illustrated the most interesting forms of collaborative conflicts which arise when the
provided/proposed help does not match the intended delegation.
We hope that after this analysis the reader will agree with our thesis that delegation, autonomy and their
related conflicts are the core of cooperation with any kind of agent.
Autonomous agents provide collaboration, but their autonomy can produce several possible conflicts
with the client/user.
6. References
[1] A.H. Bond, Commitments, Some DAI insights from Symbolic Interactionist Sociology. In Proceedings of the 9th
International AAAI Workshop on Distributed Artificial Intelligence. (Menlo Park, 1989) 239-261.
[2] M.E. Bratman, What is intention?, in P.R. Cohen, J. Morgan, and M.E. Pollack (eds.), Intentions in Communication,
(MIT Press, 1990).
[3] C. Castelfranchi & R. Falcone, Levels of help, levels of delegation and agent modeling. AAAI-96 Agent Modeling
Workshop, 4 August 1996.
[4] C. Castelfranchi & R. Falcone, Delegation Conflicts, in M. Boman & W. Van de Velde (eds.), Proceedings of the 8th
European Workshop on Modelling Autonomous Agents in A Multi-Agent World (MAAMAW'97), Multi-Agent
Rationality, Lecture Notes in Artificial Intelligence, 1237 (Springer-Verlag 1997) 234-254.
[5] C. Castelfranchi & R. Falcone, From Task Delegation to Role Delegation, in M Lenzerini (Editor), AI*IA97:
Advances in Artificial Intelligence, Lecture Notes in Artificial Intelligence, 1321 (Springer-Verlag 1997) 278-289.
[6] C. Castelfranchi, Principles of Individual Social Action. In Tuomela R., & Hintikka, G. (eds.) Contemporary Action
Theory. (Kluwer, in press).
[7] C. Castelfranchi, Commitment: from intentions to groups and organizations. In Proceedings of ICMAS'96,
San Francisco, (AAAI-MIT Press, 1996).
[8] C. Castelfranchi, Social Power: a missed point in DAI, MA and HCI. In Decentralized AI. Y. Demazeau &
J.P.Mueller (eds) (Elsevier, Amsterdam 1991) 49-62.
[9] F.C. Cheong, Internet Agents: Spiders, Wanderers, Brokers, and Bots, (New Riders Publishing, Indianapolis, USA,
1996).
[10] J. Chu-Carroll, S. Carberry, Conflict detection and resolution in collaborative planning, IJCAI-95 Workshop on
Agent Theories, Architectures, and Languages, Montreal, Canada (1995).
[11] R. Conte & C. Castelfranchi, Cognitive and Social Action (UCL Press, London, 1995).
[12] B. Crabtree, M. Wiegand, J. Davies, Building Practical Agent-based Systems, PAAM Tutorial, London, 1996.
[13] R. E. Fikes, A commitment-based framework for describing informal cooperative work. Cognitive Science, (1982) 6:
331-347.
[14] R. Goodwin, Formalizing Properties of Agents. Technical report, CMU-CS-93-159 (1993).
[15] B. Grosz, Collaborative Systems. AI Magazine (summer 1996) 67-85.
[16] A. Haddadi, Communication and Cooperation in Agent Systems (Springer, 1996).
[17] H.A. Kautz, A Formal Theory of Plan Recognition. Ph.D. thesis, University of Rochester, 1987. Also TR 215,
Dept. of Computer Science, University of Rochester, Rochester, N.Y., 1987.
[18] D. Kinny, M. Ljungberg, A. Rao, E.Sonenberg, G. Tidhar and E. Werner, Planned Team Activity, in Proceedings of
MAAMAW-92, S. Martino al Cimino, Italy, July 1992.
[19] H.J. Levesque, P.R. Cohen, J.H.T. Nunes, On acting together. In Proceedings of the 8th National Conference on
Artificial Intelligence, 94-100. San Mateo, California: Kaufmann, 1990.
[20] P. Maes, Situated agents can have goals. In P. Maes, editor, Designing Autonomous Agents, pp. 49-70. The MIT
Press, 1990.
[21] M.E. Pollack, Plans as complex mental attitudes, in P.R. Cohen, J. Morgan, and M.E. Pollack (eds.), Intentions in
Communication, MIT Press, USA, pp. 77-103, 1990.
[22] E. Rich, User Modeling via stereotypes. Cognitive Science, 3:329-354, 1979.
[23] J.S. Rosenschein and G. Zlotkin, Rules of Encounter: Designing Conventions for Automated Negotiation among
Computers. Cambridge, MA: MIT Press, 1994.
[24] J. Sichman, R. Conte, C. Castelfranchi, Y. Demazeau, A social reasoning mechanism based on dependence networks.
In Proceedings of the 11th ECAI, 1994.
[25] M.P. Singh, Multiagent Systems: A Theoretical Framework for Intentions, Know-how, and Communications.
Springer Verlag, LNCS, volume 799, 1995.
[26] Virtual Roundtable, Internet Computing on-line Journal, July-August issue, 1997.
[27] E. Werner, Cooperating agents: A unified theory of communication and social structure. In L. Gasser and
M.N. Huhns (eds.), Distributed Artificial Intelligence: Volume II. Morgan Kaufmann Publishers, 1990.
[28] M. Wooldridge and N. Jennings, Intelligent Agents: Theory and Practice, The Knowledge Engineering Review, Vol.
10, N. 2, pp. 115-152, 1995.