Figure 6: The variation of cumulative network payoff of the network of players consisting of ZD-Pavlov and ZD-GC strategies.
The network is a scale-free network consisting of 1000 nodes. The game was iterated for 200 time-steps.
Figure 7: The variation of the average degrees of the network of players consisting of ZD-Pavlov and ZD-GC strategies. The
network is a scale-free network consisting of 1000 nodes. The game was iterated for 200 time-steps.
Figure 2: The variation of cumulative network payoff of a collection of random strategy distributions on a set of players playing the prisoner's dilemma game. The variable b was set to 1.8. The underlying network has a scale-free topology, consisting of 1000 nodes. The game was iterated for 10,000 time-steps. The strategy distributions are sorted based on the cumulative payoff.

Figure 3: The variation of the average degrees of the cooperators and defectors of a collection of random strategy distributions on a set of players playing the prisoner's dilemma game. The variable b was set to 1.8. The underlying network has a scale-free topology, consisting of 1000 nodes. The game was iterated for 10,000 time-steps. The strategy distributions are sorted based on the cumulative payoff.
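The random-configuration experiment behind Figs. 2 and 3 can be sketched in a few lines. This is a minimal illustration rather than the authors' code: the simplified preferential-attachment generator, the weak-dilemma payoffs (T = b, R = 1, P = S = 0), the 200-node size, and the 100 sampled configurations are all assumptions chosen to keep the sketch small and runnable.

```python
import random

def barabasi_albert(n, m, rng):
    """Grow a scale-free graph by preferential attachment (simplified sketch)."""
    adj = {i: set() for i in range(n)}
    repeated, targets = [], list(range(m))
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t); adj[t].add(v)
        repeated.extend(targets); repeated.extend([v] * m)
        targets = rng.sample(repeated, m)   # degree-biased choice of next targets
    return adj

def network_payoff(adj, strategy, b):
    """Weak prisoner's dilemma payoffs (T=b, R=1, P=S=0), summed over all nodes."""
    total = 0.0
    for v, neigh in adj.items():
        coop_neigh = sum(strategy[u] for u in neigh)      # cooperating neighbours
        total += coop_neigh * (1.0 if strategy[v] else b) # R per C-neighbour, or T per C-neighbour
    return total

rng = random.Random(1)
adj = barabasi_albert(200, 2, rng)
b = 1.8
configs = []
for _ in range(100):
    strat = {v: rng.random() < 0.5 for v in adj}   # True = cooperator
    configs.append((network_payoff(adj, strat, b), strat))
configs.sort(key=lambda c: c[0])                   # sorted by cumulative payoff, as in Fig. 2
best_payoff, best_strat = configs[-1]
print(best_payoff)
```

From the best configuration one can then compare the mean degree of cooperators against that of defectors, which is the quantity plotted in Fig. 3.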
ZD and Pavlov strategies. Fig. 6[a] depicts the increase
of the cumulative payoff of the players when the networks
are being evolved, suggesting that in memory-one strategies
too, strategy placement does contribute to the optimisation
of cumulative network payoff. Fig. 7[a] shows the evolution
of player configuration using genetic optimisation. As the
figure shows, there is an apparent increase in the average degree of the Pavlov strategy compared to the ZD strategy within the network population as the average cumulative payoffs of the networks are optimised. Even though the payoff of a ZD node would be higher against a Pavlov node, Pavlov performs well against itself compared to the ZD strategy, making it evolutionarily stable against ZD. This suggests that when Pavlov and ZD strategies are mixed in a population of players, the cumulative payoff of the entire network could be maximised by assigning the Pavlov strategy to the hubs.

Figure 4: The variation of cumulative network payoff of the network of players playing the prisoner's dilemma game. The variable b was set to 1.8. The network is a scale-free network consisting of 1000 nodes. The game was iterated for 10,000 time-steps.

Similarly, when the ZD strategy is mixed with the General Cooperator strategy, the GC strategy tends to occupy the hubs as the networks are evolved over time. As with the case of ZD vs Pavlov strategies, the cumulative payoffs of the networks continue to increase when the initial configuration of the strategies is changed, as shown in Fig. 6[b]. Fig. 7[b] shows the evolution of the average degree of the nodes occupying the two strategies over time.
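The claim that Pavlov does better in self-play than ZD can be checked with a small iterated-game simulation. The sketch below is illustrative, not the paper's setup: the conventional payoffs (T, R, P, S) = (5, 3, 1, 0), the particular extortionate ZD probability vector, and mutual cooperation on the first move are all assumptions made for the example.

```python
import random

# Memory-one strategies as P(cooperate | previous outcome), outcomes ordered
# (CC, CD, DC, DD) from the focal player's perspective.
PAVLOV = (1.0, 0.0, 0.0, 1.0)           # win-stay, lose-shift
ZD_EXTORT2 = (11/13, 1/2, 7/26, 0.0)    # an extortionate ZD vector from the literature
                                        # (assumed here; valid for T=5, R=3, P=1, S=0)

PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
IDX = {('C', 'C'): 0, ('C', 'D'): 1, ('D', 'C'): 2, ('D', 'D'): 3}

def average_payoffs(p, q, rounds=200000, seed=0):
    """Iterated play between memory-one strategies p and q; mean payoff per round."""
    rng = random.Random(seed)
    a, b = 'C', 'C'                     # both cooperate on the first move (an assumption)
    tot_a = tot_b = 0
    for _ in range(rounds):
        tot_a += PAYOFF[(a, b)]
        tot_b += PAYOFF[(b, a)]
        na = 'C' if rng.random() < p[IDX[(a, b)]] else 'D'
        nb = 'C' if rng.random() < q[IDX[(b, a)]] else 'D'
        a, b = na, nb
    return tot_a / rounds, tot_b / rounds

pav_self, _ = average_payoffs(PAVLOV, PAVLOV)
zd_self, _ = average_payoffs(ZD_EXTORT2, ZD_EXTORT2)
print(pav_self, zd_self)
```

Pavlov against itself locks into mutual cooperation and earns the reward payoff every round, while the extortionate ZD vector against itself is eventually absorbed into mutual defection, which is the asymmetry the hub-placement argument above relies on.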
Next, we mixed the Pavlov and General Cooperator strategies to observe which strategy tends to occupy the hubs as the cumulative payoffs of the networks evolve, as shown in Fig. 8. Again, it is when the Pavlov strategy is placed on the hubs that the cumulative payoff of the network tends to increase, as shown in Fig. 9.
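The placement-optimisation idea itself can be sketched with a simple hill climb over strategy assignments. The pairwise payoff table `M`, the graph size, and the swap-based search below are hypothetical stand-ins for the genetic optimisation used in the paper; the table's values merely encode the qualitative pattern described above (ZD beats Pavlov head-to-head, but Pavlov excels in self-play).

```python
import random

def barabasi_albert(n, m, rng):
    """Preferential-attachment graph (simplified sketch)."""
    adj = {i: set() for i in range(n)}
    repeated, targets = [], list(range(m))
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t); adj[t].add(v)
        repeated.extend(targets); repeated.extend([v] * m)
        targets = rng.sample(repeated, m)
    return adj

# Assumed long-run per-round payoffs M[(s, t)] for strategy s playing strategy t
# (hypothetical numbers, chosen only to reflect the qualitative relationships).
M = {('PAV', 'PAV'): 3.0, ('PAV', 'ZD'): 1.2, ('ZD', 'PAV'): 2.0, ('ZD', 'ZD'): 1.0}

def cumulative(adj, strat):
    """Sum each node's payoff against all of its neighbours."""
    return sum(M[(strat[v], strat[u])] for v in adj for u in adj[v])

rng = random.Random(7)
adj = barabasi_albert(100, 2, rng)
nodes = list(adj)
strat = {v: ('PAV' if rng.random() < 0.5 else 'ZD') for v in nodes}

# Hill climbing over placements: swap two labels, keep the swap if payoff improves.
best = cumulative(adj, strat)
for _ in range(5000):
    a, c = rng.sample(nodes, 2)
    if strat[a] == strat[c]:
        continue
    strat[a], strat[c] = strat[c], strat[a]
    now = cumulative(adj, strat)
    if now > best:
        best = now
    else:
        strat[a], strat[c] = strat[c], strat[a]   # revert the swap

def avg_degree(s):
    members = [v for v in nodes if strat[v] == s]
    return sum(len(adj[v]) for v in members) / max(1, len(members))

print(avg_degree('PAV'), avg_degree('ZD'))
```

Because a PAV–PAV edge contributes the largest joint payoff in this table, the search pressure favours clustering the self-cooperative strategy on the high-degree core, which mirrors the trend in Figs. 8 and 9.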
[Figure: cumulative network payoff, and the average degrees of cooperators and defectors, plotted over 200 time-steps.]