Trust and Reputation in Multi-Agent Systems
Dr. Jordi Sabater Mir
jsabater@iiia.csic.es
Outline
Introduction
Approaches to control the interaction
Computational reputation models
eBay
ReGreT
Trust
A complete absence of trust
would prevent [one] even getting
up in the morning.
Niklas Luhmann, 1979
Trust
A couple of definitions that I like:
Trust
Epistemic
Motivational
Reputation
What a social entity says about a target regarding his/her behavior
[Figure: what Mr. Pink and Mr. Green say about Mr. Yellow, based on their direct experiences with him, circulates among others as Mr. Yellow's reputation.]
Security approach
Institutional approach
Computational reputation models
Classification dimensions
Paradigm type
Mathematical approach
Cognitive approach
Information sources
Direct experiences
Witness information
Sociological information
Prejudice
Visibility types
Subjective
Global
Models granularity
Single context
Multi context
Subjective vs Global
Global
The reputation is maintained as a centralized resource.
All the agents in that society have access to the same reputation values.
Advantages:
Reputation information is available even to newcomers and does not depend on how well connected you are or how good your informants are.
Agents can be simpler because they don't need to calculate reputation values, just use them.
Disadvantages:
The particular mental states of an agent and its singular situation are not taken into account when reputation is calculated. Therefore, a global view is only possible when we can assume that all the agents think and behave similarly.
It is not always desirable for an agent to make information about its direct experiences public or to submit that information to an external authority. Therefore, high trust in the central institution managing reputation is essential.
Subjective vs Global
Subjective
The reputation is maintained by each agent and is calculated according to its
own direct experiences, information from its contacts, its social relations...
Advantages:
Reputation values can be calculated taking into account the current state of
the agent and its individual particularities.
Disadvantages:
The models are more complex, usually because they can use extra sources of
information.
Each agent has to worry about getting the information to build reputation
values.
Less information is available so the models have to be more accurate to
avoid noise.
eBay model
eBay model
Specifically oriented to scenarios with the following
characteristics:
A lot of users (we are talking about millions)
Few chances of repeating interaction with the same partner
Easy to change identity
Human oriented
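The centralized scheme behind this kind of model can be sketched in a few lines of Python. This is a toy illustration only: the class name and the +1/0/-1 feedback scale are assumptions for the sketch, not eBay's actual mechanism.

```python
from collections import defaultdict

class CentralFeedbackStore:
    """Toy global reputation store: one shared value per target agent."""

    def __init__(self):
        # target id -> list of +1 / 0 / -1 feedback ratings
        self._feedback = defaultdict(list)

    def rate(self, target, score):
        assert score in (-1, 0, 1)
        self._feedback[target].append(score)

    def reputation(self, target):
        # A single global value: every agent in the society sees the same number.
        return sum(self._feedback[target])

store = CentralFeedbackStore()
store.rate("seller42", 1)
store.rate("seller42", 1)
store.rate("seller42", -1)
print(store.reputation("seller42"))  # 1
```

Note how the design matches the "global" visibility type above: the store, not the individual agents, owns the reputation values.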
The ReGreT system
[Figure: the ReGreT system architecture. Three databases (ODB: outcomes, IDB: information, SDB: sociograms) feed the Direct Trust, Witness reputation, Neighbourhood reputation and System reputation modules; a Credibility module filters witness information; the Reputation model combines with Direct Trust to produce Trust.]
Outcome = Contract + Fulfillment
Contract:    Price_c = 2000, Quality_c = A, Quantity_c = 300
Fulfillment: Price_f = 2000, Quality_f = C, Quantity_f = 295
Each outcome is evaluated with respect to ground relations (e.g. offers_good_prices, maintains_agreed_quantities), each with an associated importance Imp(o, φ_i).
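How an outcome can be turned into a rating for one ground relation can be sketched as follows. This is an illustrative reading only: the function name, the linear mapping and the 10% tolerance are assumptions for the sketch, not ReGreT's actual rating function.

```python
def rate_quantity(contracted, fulfilled, tolerance=0.1):
    """Rate the ground relation "maintains_agreed_quantities" in [-1, 1].

    Returns 1.0 when the agreed quantity was fully delivered, falling
    linearly to -1.0 as the shortfall approaches `tolerance` (10%).
    """
    shortfall = max(0.0, (contracted - fulfilled) / contracted)
    return max(-1.0, 1.0 - 2.0 * shortfall / tolerance)

# Contract: Quantity_c = 300; Fulfillment: Quantity_f = 295
print(round(rate_quantity(300, 295), 2))  # 0.67
```

A shortfall of 5 units out of 300 still yields a fairly positive rating; delivering 270 or fewer would rate -1.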
Direct Trust
Trust relationship calculated directly from an agent's outcomes database:

DT_{a→b}(φ) = Σ_{o_i ∈ ODB^{a,b}_{gr(φ)}} ρ(t, t_i) · Imp(o_i, φ)

where the recency weight is

ρ(t, t_i) = f(t_i, t) / Σ_{o_j ∈ ODB^{a,b}_{gr(φ)}} f(t_j, t),   with f(t_i, t) = t_i / t

so that outcomes closer to the current time t count more.
Direct Trust
DT reliability
The reliability of DT_{a→b}(φ) combines two factors:
Number of outcomes (No): No^{a,b}(φ) = sin(π/(2·itm) · |ODB^{a,b}_{gr(φ)}|) if |ODB^{a,b}_{gr(φ)}| ≤ itm, and 1 otherwise, with itm = 10 (the number of outcomes considered an intimate relation).
Deviation (Dv): the greater the variability in the rating values, the more volatile the other agent will be in the fulfillment of its agreements.
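A toy version of the direct-trust computation with its number-of-outcomes reliability factor is shown below. The symbols follow the slides (recency weight f(t_i, t) = t_i / t, intimate level itm), but the code is an illustrative sketch, not the reference ReGreT implementation.

```python
import math

def direct_trust(outcomes, now, itm=10):
    """Compute (DT, No) from a list of (timestamp, rating) pairs in the ODB.

    Recent outcomes weigh more: f(ti, t) = ti / t, normalised over the
    database. Reliability No grows as sin(pi/(2*itm) * n) with the number
    of outcomes n, saturating at 1 once the intimate level itm is reached.
    """
    weights = [t_i / now for (t_i, _) in outcomes]
    total = sum(weights)
    dt = sum(w / total * rating for w, (_, rating) in zip(weights, outcomes))
    n = len(outcomes)
    no = math.sin(math.pi / (2 * itm) * n) if n < itm else 1.0
    return dt, no

# Three outcomes at times 1, 5 and 9, evaluated at time 10 (toy data):
dt, rl = direct_trust([(1, 0.8), (5, 0.6), (9, -0.2)], now=10)
```

With only three outcomes the reliability is still below 0.5, so in ReGreT's spirit this direct trust value would be complemented by reputation.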
Witness reputation
Reputation that an agent builds about another agent based on the beliefs gathered from society members (witnesses).
Problems of witness information:
Can be false.
Can be incomplete.
It may suffer from the correlated evidence problem.
[Figure: sociogram of trade relations among agents a1, a2, b1, b2, c1, c2, d1, d2 and u1–u9.]
[Figure: cooperation sociogram among the same agents. Cooperation implies a big exchange of sincere information and some kind of predisposition to help if it is possible.]
[Figure: competition sociogram among the same agents. In a competition relation, agents tend to use all the available mechanisms to take some advantage from their competitors.]
Witness reputation
Step 1: Identifying the witnesses.
Initial set of witnesses: agents that have had a trade relation with the target agent.
[Figure: trade sociogram around the target agent b2.]
Witness reputation
Step 1: Identifying the witnesses.
Grouping agents that interact frequently with one another, and considering each one of these groups as a single source of information:
Minimizes the correlated evidence problem.
Reduces the number of queries to agents that would probably give us more or less the same information.
Witness reputation
[Figure: the sociogram is analysed using graph concepts such as central-points and cut-points to partition the witnesses into groups.]
Witness reputation
Step 1: Identifying the witnesses.
[Figure: the initial witness set is reduced to a representative subset (e.g. u2, u3, u5) using the trade sociogram.]
Witness reputation
Step 1: Identifying the witnesses.
Step 2: Who can I trust?
Each selected witness reports its trust on the target together with a reliability: Trust_{u2→b2}(φ), TrustRL_{u2→b2}(φ) and Trust_{u5→b2}(φ), TrustRL_{u5→b2}(φ).
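Step 2 can be sketched as a reliability-weighted aggregation of the witnesses' reports. The numbers below are hypothetical, and in the full ReGreT system the weights would also involve each witness's credibility, not only its self-reported reliability.

```python
def witness_reputation(reports):
    """Aggregate (trust_value, reliability) pairs from selected witnesses.

    Each witness's trust value is weighted by the reliability it attaches
    to that value; the result is the weighted mean.
    """
    total = sum(rl for _, rl in reports)
    if total == 0:
        return 0.0
    return sum(t * rl for t, rl in reports) / total

reports = [(0.7, 0.9),   # Trust_{u2->b2}, TrustRL_{u2->b2}
           (0.3, 0.4)]   # Trust_{u5->b2}, TrustRL_{u5->b2}
print(round(witness_reputation(reports), 3))  # 0.577
```

The more reliable witness u2 dominates the aggregate, pulling it toward 0.7.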
Credibility model
Two methods are used to evaluate the credibility of
witnesses:
Credibility
(witnessCr)
Social relations
(socialCr)
Past history
(infoCr)
Credibility model
socialCr(a,w,b): the credibility that agent a assigns to agent w when w gives information about b, considering the social structure among w, b and a itself.
[Figure: possible social structures among source agent a, witness w and target agent b, with competitive and cooperative relations between them.]
w - witness
b - target agent
a - source agent
Credibility model
Regret uses fuzzy rules to calculate how the structure of
social relations influences the credibility on the information.
IF coop(w,b) is h
THEN socialCr(a,w,b) is vl
[Figure: fuzzy membership functions. coop(w,b) takes the labels low (l), moderate (m) and high (h); socialCr(a,w,b) takes very_low (vl), low (l), moderate (m), high (h) and very_high (vh).]
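A minimal sketch of how such a fuzzy rule could be evaluated is shown below. The triangular membership shapes and their breakpoints are assumptions for illustration; ReGreT's actual membership functions are not reproduced here.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at 1 in b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def social_credibility(coop):
    """Toy evaluation of: IF coop(w,b) is high THEN socialCr(a,w,b) is very_low.

    The more the witness w cooperates with the target b, the stronger the
    rule fires, and the lower the social credibility of w's report.
    """
    high = tri(coop, 0.5, 1.0, 1.5)   # degree to which coop(w,b) is "high"
    return 1.0 - high                  # pushed towards very_low as it fires

print(social_credibility(1.0))  # 0.0 -> witness fully cooperates with target
```

A witness with no cooperative relation to the target (coop near 0) keeps full social credibility under this single rule.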
Neighbourhood reputation
The trust in the agents that are in the neighbourhood of the target agent, and their relation with it, are the elements used to calculate what we call Neighbourhood reputation.
ReGreT uses fuzzy rules to model this reputation, for example:
IF DT_{a→n_i}(offers_good_quality) is X AND coop(b, n_i) is not low
THEN R_{a→b}(offers_good_quality) is X
System reputation
The idea behind System reputation is to use the common knowledge about social groups, and the role the agent plays in the society, as a mechanism to assign reputation values to other agents.
The knowledge necessary to calculate a system reputation is usually inherited from the group or groups to which the agent belongs.
Trust
If the agent has a reliable direct trust value, it will use that as its measure of trust. If that value is not reliable enough, it will use reputation.
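This fallback can be sketched as a reliability-weighted blend. The linear form is an assumption made for the sketch; it simply illustrates the idea of backing off from direct trust to reputation as reliability drops.

```python
def trust(direct_trust, dt_reliability, reputation):
    """Blend direct trust and reputation by the reliability of direct trust."""
    return dt_reliability * direct_trust + (1 - dt_reliability) * reputation

# Reliable direct experience dominates:
print(round(trust(0.9, 0.8, 0.2), 2))  # 0.76
# Unreliable direct experience defers to reputation:
print(round(trust(0.9, 0.1, 0.2), 2))  # 0.27
```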
[Properly] Integrating
Social evaluation
A social evaluation, as the name suggests, is the evaluation by a social
entity of a property related to a social aspect.
Social evaluations may concern physical, mental, and social properties of
targets.
A social evaluation includes at least three sets of agents:
a set E of agents who share the evaluation (evaluators)
a set T of evaluation targets
a set B of beneficiaries
We can find examples where the different sets intersect totally, partially, etc.
e (e in E) may evaluate t (t in T) with regard to a state of the world that is in b's (b in B) interest, but of which b is not necessarily aware.
Example: the quality of TV programs during children's timeshare.
Image
An evaluative belief; it tells whether the target is good or bad with respect
to a given behaviour [Conte & Paolucci]
It is the result of internal reasoning on different sources of information that leads the agent to create a belief about the behaviour of another agent.
[Figure: an image is a belief about a social evaluation.]
Reputation
A voice is something that is said, a piece of information that is being transmitted.
Reputation: a voice about a social evaluation that is recognised by the members of a group to be circulating among them.
[Figure: reputation as a belief about a circulating social evaluation, B(S(f)).]
Reputation
Implications:
Because it is not implicit that the agent spreading a reputation believes the associated social evaluation, it takes no responsibility for that social evaluation (the responsibility associated with the action of spreading the reputation is another matter).
This fact allows reputation to circulate more easily than image (less/no fear of retaliation).
Notice that if an agent believes whatever people say, image and reputation collapse.
This distinction has important advantages from a technical point of view.
Gossip
In order for reputation to exist, it has to be transmitted. We cannot have
reputation without communication.
Gossip currently has the meaning of idle talk or rumour, especially about the personal or private affairs of others. It usually has a bad connotation, but in fact it is an essential element of human nature.
The antecedent of gossip is grooming.
Studies from evolutionary psychology have found gossip to be very important as a mechanism to spread reputation [Sommerfeld et al. 07, Dunbar 04].
Outline
A cognitive view on Reputation
Repage, a computational cognitive reputation model
[Properly] Integrating
RepAge
What is the RepAge model?
It is a reputation model that evolved from the cognitive theory of reputation by Conte and Paolucci.
The model is designed with special attention to the internal representation of the elements used to build images and reputations, as well as the inter-relations among these elements.
RepAge memory
[Figure: the RepAge memory holds interrelated predicates such as Img (image) and Rep (reputation), each with a value and a strength (e.g. Strength: 0.6).]
Outline
A cognitive view on Reputation
Repage, a computational cognitive reputation model
[Properly] Integrating
[Figure: agent architectures. Inputs and communication feed a planner and a decision mechanism, shown first as a reactive black box and then opened up to provide a value to the planner.]
Not only reactive...
... proactive
BDI model
A very popular model in the multi-agent community.
It has its origins in the theory of human practical reasoning [Bratman] and the notion of intentional systems [Dennett].
The main idea is that we can talk about computer programs as if they had a mental state.
Specifically, the BDI model is based on three mental attitudes:
Beliefs - what the agent thinks it is true about the world.
Desires - world states the agent would like to achieve.
Intentions - world states the agent is putting efforts to achieve.
BDI model
The agent is described in terms of these mental attitudes.
The decision-making model underlying the BDI model is known as
practical reasoning.
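The three attitudes and a much simplified deliberation step can be sketched as follows. The class structure and the naive "commit to unmet desires" filter are assumptions for illustration, not a faithful BDI interpreter.

```python
class BDIAgent:
    """Toy BDI agent: beliefs, desires and intentions as plain sets."""

    def __init__(self, beliefs, desires):
        self.beliefs = set(beliefs)      # what the agent thinks is true
        self.desires = set(desires)      # world states it would like to achieve
        self.intentions = set()          # world states it commits effort to

    def deliberate(self):
        # Simplistic filter: commit to every desire not already believed true.
        self.intentions = {d for d in self.desires if d not in self.beliefs}

agent = BDIAgent(beliefs={"door_open"}, desires={"door_open", "light_on"})
agent.deliberate()
print(agent.intentions)  # {'light_on'}
```

Real BDI interpreters add option generation, filtering against existing intentions, and means-ends reasoning to turn intentions into plans.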
Multicontext systems
UNITS: each unit has its own logic and theory.
Bridge Rules connect the units. A bridge rule such as
U1: b, U2: d ⊢ U3: a
derives formula a in unit U3 whenever b holds in unit U1 and d holds in unit U2.
Multicontext
BC-LOGIC
Generating Intentions
Outline
A cognitive view on Reputation
Repage, a computational cognitive reputation model
[Properly] Integrating
Ex 2: Linguistic Labels
Argumentation level
[Figure: reputation-related information and a consequence relation (the reputation model), specific to each agent. Agent i acts as proponent and agent j as opponent, with roles such as informant (Inf), sell(q) (quality) and sell(dt) (delivery time).]
Outline
+ PART II:
Trust Computing Approaches
Security
Institutional
Social
Evaluation of
An example:
Public Key Infrastructure
[Figure: PKI flow among client, Registration authority, Certificate authority and an LDAP directory: 1. Client identity; 3. Public key sent; 4. Publication of certificate; 5. Certificate sent.]
EASSS 2010, Saint-Etienne, France
Institutional approach
Institutions have successfully regulated human societies for a long time:
- created to achieve particular goals while complying with norms.
- responsible for defining the rules of the game (norms), enforcing them, and assessing penalties in case of violation.
Examples: auction houses, parliaments, stock exchange markets, ...
The institutional approach focuses on the existence of organizations:
Providing an execution infrastructure
Controlling access to resources
Sanctioning/rewarding agents' behaviors
An example: e-institutions
Social approach
The social approach is based on the idea of a self-organized society (Adam Smith's invisible hand):
Each agent has its own evaluation criteria of what is expected: no social norms, just individual norms.
Each agent is in charge of rewards and punishments (often in terms of more/less future cooperative interactions).
No central entity at all: it consists of a completely distributed social control of malicious agents.
Trust as an emergent property.
Avoids privacy issues caused by centralized approaches.
Realistic Data
We need to generate realistic data to test trust and reputation in agent systems.
Several technical/design problems arise:
How many users, ratings and items do we need?
How dynamic should the society of agents be?
But the hardest part is the psychological/sociological one:
How do individuals take trust decisions? Which types of individuals are there?
How do real human societies trust? How many of each individual type belong to a real human society?
Realistic Data
Large-scale simulation with NetLogo (http://ccl.northwestern.edu/netlogo/)
Others: MASON (https://mason.dev.java.net/), RePast (http://repast.sourceforge.net/)
But these are mainly ad hoc simulations that are difficult for third parties to repeat.
Many of them use unrealistic agents with a binary altruist/egoist behaviour based on game-theoretic views.
ART Domain
ART Interface
Evaluation criteria
There is a lack of criteria on which of the very different trust decisions should be considered, and how.
Conte and Paolucci 02 distinguish:
epistemic decisions: those about updating and generating trust opinions from received reputations;
pragmatic-strategic decisions: how to behave with partners using this reputation-based trust;
memetic decisions: how and when to share reputation with others.
Trust Strategies in Evolutive Agent Societies
An evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population of players, cannot be invaded by any alternative strategy.
An evolutionarily stable trust strategy is a strategy which, if it becomes dominant (adopted by a majority of agents), cannot be defeated by any alternative trust strategy.
Justification: the goal of trust strategies is to establish some kind of social control over malicious/distrustful agents.
Assumption: agents may change their trust strategy. Agents with a failing trust strategy would get rid of it and adopt a successful trust strategy in the future.
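That assumption can be illustrated with a toy replicator-style update, in which the share of agents using each trust strategy grows or shrinks with its relative payoff. The payoff numbers and the update rule are hypothetical, chosen only to show how a failing strategy dies out.

```python
def replicate(shares, payoffs):
    """One replicator-style step over trust strategies.

    shares: {strategy: population share}; payoffs: {strategy: average payoff}.
    Each strategy's share is scaled by its payoff relative to the mean.
    """
    mean = sum(shares[s] * payoffs[s] for s in shares)
    return {s: shares[s] * payoffs[s] / mean for s in shares}

shares = {"trusting": 0.9, "cheating": 0.1}
payoffs = {"trusting": 1.2, "cheating": 0.8}  # cheaters punished by social control
for _ in range(20):
    shares = replicate(shares, payoffs)
print(round(shares["trusting"], 3))
```

With these (fixed) payoffs, the trusting strategy's share approaches 1: once dominant, no agent gains by switching away, which is the informal sense of evolutionary stability above.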
ART game: 16 participants in the 2007 competition.

Game | Winner        | Earnings | Loser         | Earnings
1    | iam2          | 17377    | xerxes        | -8610
2    | iam2          | 14321    | lesmes        | -13700
3    | iam2          | 10360    | reneil        | -14757
4    | iam2          | 10447    | blizzard      | -7093
5    | agentevicente | 8975     | Rex           | -5495
6    | iam2          | 8512     | alatriste     | -999
7    | artgente      | 8994     | agentevicente | 2011
8    | artgente      | 10611    | agentevicente | 1322
9    | artgente      | 8932     | novel         | 424
10   | iam2          | 9017     | IMM           | 1392
11   | artgente      | 7715     | marmota       | 1445
12   | artgente      | 8722     | spartan       | 2083
13   | artgente      | 8966     | zecariocales  | 1324
14   | artgente      | 8372     | iam2          | 2599
15   | artgente      | 7475     | iam2          | 2298
16   | artgente      | 8384     | UNO           | 2719
17   | artgente      | 7639     | iam2          | 2878
18   | iam2          | 6279     | JAM           | 3486
19   | iam2          | 14674    | artgente      | 2811
20   | artgente      | 8035     | iam2          | 3395
CompetitionRank
EvolutionRank
Agent
ExcludedInGame
artgente
iam2
JAM
18
UNO
16
zecariocales
13
spartan
12
marmota
11
13
IMM
10
10
novel
15
10
agentevicente
11
11
alatriste
12
12
rex
13
Blizzard
14
reneil
14
15
lesmes
16
16
xerxes
Consciousness Scale
Too much quantification (AI is not just statistics).
Compare agents qualitatively: measure their level of consciousness.
A scale of 13 conscious levels according to the cognitive skills of an agent: the Cognitive Power of an agent.
The higher the level obtained, the more the behavior of the agent resembles human behavior.
www.consscale.com
[Figure: the ConsScale levels and associated cognitive skills.
Cognitive skills: Perception, Adaptation, Attention, Set Shifting, Planning, Imagination, Emotion, Feeling, Feeling of a Feeling, Fake Emotions, I Know, I Know I Know, I Know You Know, I Know You Know I Know.
Consciousness levels (top to bottom): Super-Conscious, Human-like, Social, Empathic, Self-Conscious, Emotional, Executive, Attentional, Adaptive, Reactive.]
Thank you !