Table I: Simulated Cloud datacenters (DC) characteristics

DC    Latency   # Hosts   Proc. power   RAM     Storage   Cores
D1    0.80 s.   30        7,200 MIPS    32 GB   500 GB    8
D2    1.75 s.   50        9,900 MIPS    32 GB   1 TB      6
D3    0.32 s.   20        8,036 MIPS    16 GB   500 GB    8
D4    2.00 s.   30        7,500 MIPS    16 GB   1 TB      8
D5    0.25 s.   50        7,200 MIPS    32 GB   500 GB    8
D6    1.50 s.   10        4,008 MIPS    8 GB    1 GB      4
D7    0.29 s.   20        5,618 MIPS    12 GB   500 GB    6
D8    2.20 s.   20        5,200 MIPS    8 GB    500 GB    4
D9    0.50 s.   50        6,600 MIPS    12 GB   500 GB    8
D10   1.20 s.   50        7,527 MIPS    16 GB   500 GB    8

Each job had a number of instructions between 1,333,293 and 2,712,789 MI. Lastly, the experiments had input files of 291.7 Kbytes and output files of 587.1 Kbytes. We evaluated the performance of executing the user PSE-jobs as the number of jobs to be performed increased, i.e., 25 ∗ i jobs with i = 40, 80, ..., 400. That is, the base job set, comprising 25 jobs of the plane strain plate PSE obtained by varying the value of η, was cloned to obtain larger sets.
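The job-set scaling described above can be sketched in a few lines. The helper name and job labels below are illustrative, not from the paper; only the 25-job base set and the 25 ∗ i cloning scheme are taken from the text.

```python
# Sketch of how the job sets were scaled: the base set of 25 PSE-jobs
# is cloned so that 25 * i jobs are obtained for i = 40, 80, ..., 400,
# i.e., 1,000 to 10,000 jobs. Job labels are illustrative.
base_set = [f"job-{k}" for k in range(25)]  # 25 plane strain plate PSE jobs

def cloned_job_set(i):
    """Return the base 25-job set cloned i times (25 * i jobs in total)."""
    return base_set * i

sizes = [len(cloned_job_set(i)) for i in range(40, 401, 40)]
print(sizes)  # 1000, 2000, ..., 10000
```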
B. Performed experiments

We compared our ACO-based broker-level scheduler against two alternative broker schedulers:
• Latency-Aware Greedy (LAG), which maintains a list of all interconnected datacenters sorted by their latencies. Each time a user requires a number of VMs to execute their PSE, LAG selects the datacenter with the lowest latency in the list. Then, whenever a datacenter has no more physical resources to allocate VMs, the algorithm selects the next datacenter in the list with low latency.
• Round Robin (RR), which maintains a list of all network-interconnected datacenters that make up the Cloud, sorted by increasing latency, and assigns each VM required by the user to a datacenter from the list in circular order.

Figure 2: Relative makespan reduction regarding RR

Figure 2 illustrates the relative makespan reduction with respect to the worst competitor (RR) as the number of jobs is increased. In the results, our ACO-based broker-level algorithm is the one that produces the lowest makespan for the user with respect to LAG and RR. The makespan is 81.23, 97.76 and 118.08 minutes when the number of jobs to be executed is 1,000 for ACO, LAG and RR, respectively, and the gains obtained by ACO with respect to LAG and RR are 16.91% and 31.21%, respectively. However, when the number of jobs to be executed is increased to 10,000, the makespan is 720.75, 737.28 and 757.60 minutes for ACO, LAG and RR, respectively, and the gains of ACO with respect to LAG and RR are 2.24% and 4.86%, respectively. Note that the larger the number of jobs to be executed, the lower the impact of the latencies on the makespan, because the latencies are set at the moment of creating the virtual infrastructure.

Since the ACO broker scheduler considers both network latencies and the percentage of VMs that can be allocated in a datacenter, it avoids exploring datacenters with lower latency that can allocate only few VMs; i.e., most VMs are created in datacenters with low latencies. As can be seen in Figure 3, the low-latency datacenter that can allocate at least 60% of the VMs required by the user is D7, to which ~68% of the total VMs are assigned. The remaining VMs, which were not allocated in D7, i.e., ~32% of the VMs, were allocated in D5. Note that only 32% of the VMs had to explore both D7 and D5 looking for resource availability.
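The relative makespan reductions reported for Figure 2 can be checked with a few lines. The helper function below is our own shorthand; the makespan values are the ones quoted in the text.

```python
# Quick check of the relative makespan reductions quoted for Figure 2.
# reduction(aco, other) is the gain of ACO (makespan aco) over a competitor.
def reduction(aco, other):
    return round((other - aco) / other * 100, 2)

# 1,000 jobs: ACO = 81.23 min, LAG = 97.76 min, RR = 118.08 min
print(reduction(81.23, 97.76), reduction(81.23, 118.08))    # 16.91 31.21
# 10,000 jobs: ACO = 720.75 min, LAG = 737.28 min, RR = 757.60 min
print(reduction(720.75, 737.28), reduction(720.75, 757.60)) # 2.24 4.86
```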
Second is the LAG algorithm, which always first selects the datacenter with the lowest latency (see D5 in Table I), but without considering the amount of available resources in that datacenter. Later, to allocate the VMs, the ACO algorithm at the infrastructure level has to explore up to 80% of the hosts that compose datacenter D5 (50 hosts) to find which ones are available to allocate VMs. As can be seen in Figure 3, only ~35% of the VMs are allocated in datacenter D5. The reason is that this datacenter has few available resources. To allocate the remaining ~65% of failed VMs, LAG must select a new low-latency datacenter. The new datacenter, D7, is explored again for each VM which could not be allocated in the previous datacenter, i.e., 65% of the total VMs had to explore both the D5 and D7 datacenters looking for available resources. The number of network messages sent within a datacenter to determine resource availability directly impacts the makespan. This is because, for each message sent, the datacenter latencies delay the answers. The larger the number of VMs that can be allocated in a datacenter with low latency, the fewer the failed VMs that trigger the exploration of new datacenters with greater latency.
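The two baseline broker policies compared in this section can be sketched as follows. The function names are ours, and the sketch only reproduces the datacenter selection order; host-availability checks and VM capacities, which the paper handles at the infrastructure level, are omitted. Latencies are taken from Table I.

```python
from itertools import cycle

# Table I latencies (seconds) per datacenter.
latencies = {"D1": 0.80, "D2": 1.75, "D3": 0.32, "D4": 2.00, "D5": 0.25,
             "D6": 1.50, "D7": 0.29, "D8": 2.20, "D9": 0.50, "D10": 1.20}

def lag_order(latencies):
    """LAG: try datacenters in increasing-latency order; a datacenter is
    abandoned only when it runs out of physical resources."""
    return sorted(latencies, key=latencies.get)

def rr_assign(latencies, n_vms):
    """RR: assign each required VM to the next datacenter of the
    latency-sorted list, in circular order."""
    ring = cycle(lag_order(latencies))
    return [next(ring) for _ in range(n_vms)]

print(lag_order(latencies)[:3])  # ['D5', 'D7', 'D3'] -- lowest latencies first
print(rr_assign(latencies, 12)[:3])
```

Note that RR visits every datacenter regardless of latency, which is why the highest-latency datacenters (D4, D8) still receive VMs under RR.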
Figure 3: Number of allocated VMs by datacenter
Finally, the greatest makespan was obtained when the datacenters are selected by RR. The reason is that RR explores all datacenters in circular order for each VM to be allocated, without considering their latencies and resource availability (see Figure 3). The number of VMs allocated on each datacenter depends on the number of available hosts and on the characteristics of the hosts that compose each datacenter. The hosts that compose datacenter D6, for instance, do not have enough processing power to allocate the type of VM required. In this algorithm, the greater network latencies of some datacenters have a greater impact on the makespan.
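The trade-off the broker exploits, namely preferring low latency only among datacenters that can host enough of the requested VMs, can be illustrated with a toy scoring rule. This is NOT the paper's ACO algorithm (its pheromone mechanics are not reproduced here); the 60% threshold comes from the text, while the scoring function and the allocatable fractions are illustrative assumptions consistent with the Figure 3 discussion.

```python
# Toy scoring: favor low latency, but discard datacenters that cannot host
# at least `min_fraction` of the requested VMs. Illustrative only; the
# paper's actual ACO broker uses pheromone-based selection.
def score(latency_s, allocatable_fraction, min_fraction=0.6):
    if allocatable_fraction < min_fraction:
        return 0.0
    return allocatable_fraction / latency_s

# Assumed fractions: D5 is the lowest-latency datacenter but can host only
# ~35% of the VMs, while D7 can host well above the 60% threshold.
candidates = {"D5": (0.25, 0.35), "D7": (0.29, 0.68)}
best = max(candidates, key=lambda dc: score(*candidates[dc]))
print(best)  # D7
```

Under such a rule D5 is filtered out despite its lower latency, which mirrors why most VMs end up in D7 in Figure 3.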
V. CONCLUSIONS

PSEs are popular in CM experiments, and involve running many CPU-intensive jobs. The growing popularity of federated Clouds and SI-inspired algorithms has increased the research in resource allocation mechanisms. Scheduling at the three associated levels therefore plays a fundamental role.

In this work we have described an ACO-based broker-level scheduler for the efficient selection of datacenters, taking into account both their network latencies and the availability of resources. The broker scheduler was combined with an ACO algorithm for allocating the VMs in the selected datacenters at the infrastructure level, and FIFO for mapping the PSE-jobs. The performed experiments suggest that our ACO-based broker scheduler provides better makespan to the user than LAG and RR.

We will explore other bio-inspired techniques such as Artificial Bee Colony [18]. Another issue is considering multi-tenancy, i.e., providing schedulers for allocating resources to several independent users. We will also extend our scheduler to consider other optimization criteria (e.g., monetary cost). For example, in many Clouds different providers offer multiple types of VMs with different capacities and pricing. Thus, if one application is mapped to VMs from different Cloud providers, it may result not only in different makespans but also in different monetary costs.

Finally, an important issue to consider in federated Clouds is to enhance the scheduler with dynamic optimization capabilities, enabling the dynamic reallocation (migration) of VMs from one host to another. The migration of VMs might make it possible to meet a specific optimization criterion, such as reducing the number of hosts in use to minimize energy consumption [19] or balancing the workload of all resources to avoid resource saturation and performance slowdown.

ACKNOWLEDGMENTS

We acknowledge the support by ANPCyT (grants PICT-2012-0045, PICT-2014-1430) and UNCuyo (project 06/B308).

REFERENCES

[1] R. Coutinho, L. Drummond, Y. Frota, and D. de Oliveira, "Optimizing virtual machine allocation for parallel scientific workflows in federated clouds," Future Generation Computer Systems, 2014, in press.
[2] J. Tordsson, R. Montero, R. Moreno-Vozmediano, and I. Llorente, "Cloud brokering mechanisms for optimized placement of virtual machines across multiple providers," Future Generation Computer Systems, vol. 28, no. 2, pp. 358–367, 2012.
[3] J. Kennedy, "Swarm Intelligence," in Handbook of Nature-Inspired and Innovative Computing. Springer US, 2006, pp. 187–219.
[4] E. Pacini, C. Mateos, and C. García Garino, "Balancing throughput and response time in online scientific clouds via ant colony optimization," Advances in Engineering Software, vol. 84, pp. 31–47, 2015.
[5] C. García Garino, M. Ribero Vairo, S. Andía Fagés, A. Mirasso, and J.-P. Ponthot, "Numerical simulation of finite strain viscoplastic problems," Journal of Computational and Applied Mathematics, vol. 246, pp. 174–184, Jul. 2013.
[6] R. Calheiros, R. Ranjan, A. Beloglazov, C. De Rose, and R. Buyya, "CloudSim: A toolkit for modeling and simulation of Cloud Computing environments and evaluation of resource provisioning algorithms," Software: Practice & Experience, vol. 41, no. 1, pp. 23–50, 2011.
[7] U. Singha and S. Jain, "An analysis of swarm intelligence based load balancing algorithms in a cloud computing environment," Journal of Hybrid Information Technology, vol. 8, no. 1, pp. 249–256, 2015.
[8] P. Pendharkar, "An ant colony optimization heuristic for constrained task allocation problem," Journal of Computational Science, vol. 7, pp. 37–47, 2015.
[9] R. Tavares Neto and M. Godinho Filho, "Literature review regarding Ant Colony Optimization applied to scheduling problems: Guidelines for implementation and directions for future research," Engineering Applications of Artificial Intelligence, vol. 26, no. 1, pp. 150–161, 2013.
[10] E. Pacini, C. Mateos, and C. García Garino, "SI-based Scheduling of Parameter Sweep Experiments on Federated Clouds," in First HPCLATAM - CLCAR Joint Conference (CARLA), vol. 845, 2014, pp. 28–42.
[11] Y. Kessaci, N. Melab, and E.-G. Talbi, "A pareto-based metaheuristic for scheduling HPC applications on a geographically distributed cloud federation," Cluster Computing, vol. 16, no. 3, pp. 451–468, 2013.
[12] J. Lucas-Simarro, R. Moreno-Vozmediano, R. Montero, and I. Llorente, "Scheduling strategies for optimal service deployment across multiple clouds," Future Generation Computer Systems, vol. 29, no. 6, pp. 1431–1441, 2013.
[13] L. Agostinho, G. Feliciano, L. Olivi, E. Cardozo, and E. Guimaraes, "A Bio-inspired approach to provisioning of virtual resources in federated Clouds," in DASC 2011. IEEE Computer Society, 2011, pp. 598–604.
[14] A. Noda and A. Raith, "A Dijkstra-like method computing all extreme supported non-dominated solutions of the biobjective shortest path problem," Computers & Operations Research, vol. 57, pp. 83–94, 2015.
[15] C. García Garino, F. Gabaldón, and J. M. Goicolea, "Finite element simulation of the simple tension test in metals," Finite Elements in Analysis and Design, vol. 42, no. 13, pp. 1187–1197, 2006.
[16] J. Jung, S. Jung, T. Kim, and T. Chung, "A study on the Cloud simulation with a network topology generator," World Academy of Science, Engineering & Technology, vol. 6, no. 11, pp. 303–306, 2012.
[17] S. Malik, F. Huet, and D. Caromel, "Latency based group discovery algorithm for network aware Cloud scheduling," Future Generation Computer Systems, vol. 31, pp. 28–39, 2014.
[18] D. Karaboga, B. Gorkemli, C. Ozturk, and N. Karaboga, "A comprehensive survey: artificial bee colony (ABC) algorithm and applications," Artificial Intelligence Review, March 2012.
[19] R. Jeyarani, N. Nagaveni, and R. Vasanth Ram, "Design and implementation of adaptive power-aware virtual machine provisioner (APA-VMP) using swarm intelligence," Future Generation Computer Systems, vol. 28, no. 5, pp. 811–821, 2012.