317846
D4.3
Public

Socially-aware Management of New Overlay Application Traffic with Energy Efficiency in the Internet

European Seventh Framework Project FP7-2012-ICT-317846-STREP

Deliverable D4.3
Prototype Evaluations and Overall Project Assessment

Version 1.0
Copyright 2015, the Members of the SmartenIT Consortium
Document Control
Title: Prototype Evaluations and Overall Project Assessment
Type: Public
Editor(s): Rafal Stankiewicz
E-mail: rstankie@agh.edu.pl
Author(s): Andi Lareida, Thomas Bocek, Manos Dramitinos, David Hausheer, Ioanna Papafili, Frederic Faucheux, Domenico Gallico, Roman Lapacz, Rafal Stankiewicz, Zbigniew Dulinski, Sabine Randriamasy, Sergios Soursos, Valentin Burger, Florian Wamser, Fabian Kaup, Jeremias Blendin, Michael Seufert, Burkhard Stiller, Patrick Poullie, Gerhard Hasslinger, George Stamoulis, Gino Carrozzo, Paolo Cruschelli, Grzegorz Rzym, Piotr Wydrych, Krzysztof Wajda
Doc ID: D4.3-v1.0
AMENDMENT HISTORY (Version | Date | Author | Description/Comments)
V0.0 | 03/2015 | Rafal Stankiewicz | ToC defined
V0.1 | 2015/06/10 | Rafal Stankiewicz |
V0.2 | 2015/07/30 | |
V0.3 | 2015/09/03 | S. Soursos, S. Randriamasy, I. Papafili, M. Dramitinos, T. Bocek, R. Lapacz, D. Gallico, A. Lareida, P. Cruschelli, M. Seufert, R. Stankiewicz |
V0.4 | 2015/09/22 | S. Soursos, S. Randriamasy, I. Papafili, G. Carrozzo, T. Bocek, M. Dramitinos, R. Lapacz, D. Gallico, A. Lareida, P. Cruschelli, M. Seufert, R. Stankiewicz, Z. Dulinski |
V0.5 | 2015/10/07 | |
V0.6 | 2015/10/17 | |
V0.7 | 2015/10/29 | D. Hausheer, T. Bocek, P. Cruschelli, V. Burger, F. Kaup, M. Dramitinos, G. Hasslinger, Z. Dulinski |
V1.0 | 2015/10/31 | R. Stankiewicz, G. Rzym, Z. Dulinski, S. Randriamasy, I. Papafili, M. Dramitinos, D. Hausheer, G. D. Stamoulis |
Legal Notices
The information in this document is subject to change without notice.
The Members of the SmartenIT Consortium make no warranty of any kind with regard to this document, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The Members of the SmartenIT Consortium shall not be held liable for errors contained herein or for direct, indirect, special, incidental, or consequential damages in connection with the furnishing, performance, or use of this material.
Table of Contents
Amendment History
List of Figures
List of Tables
1 Executive summary
2 Introduction
7 Overall assessment
9 SMART Objectives
10 References
11 Abbreviations
12 Acknowledgements
13 Appendices
List of Figures
Figure 1: The SmartenIT ecosystem and the involved architectural entities
Figure 2: The DTM architecture (a) and deployment (b) diagrams
Figure 3: The RB-HORST architecture (a) and deployment (b) diagrams
Figure 4: SmartenIT entities and components involved in MUCAPS
Figure 5: Variable and static parameters and type of experiments
Figure 6: Logical topology for S-to-S experiment
Figure 7: Logical topology for M-to-M experiment
Figure 8: Traffic growth during the billing period and a cost map. SDN controller mode: "reactive with reference" (OFS#1.1)
Figure 9: Traffic patterns on links 1 and 2 during the billing period. SDN controller mode: "reactive with reference" (OFS#1.1)
Figure 10: Compensation vector values received by remote SBox (AS3 domain). SDN controller mode: "reactive with reference" (OFS#1.1)
Figure 11: Compensation vector values received by remote SBox (AS3 domain). SDN controller mode: "reactive without reference" (OFS#1.1)
Figure 12: 5-min samples observed on inter-domain links during a single 7-day long billing period (OFS#1.2)
Figure 13: Distributions of 5-min samples collected during a single 7-day long billing period (OFS#1.2)
Figure 14: The distribution of 5-min sample pairs on a cost map (OFS#1.2)
Figure 15: The distribution of 5-min sample pairs - long flows, reactive mode
Figure 16: The distribution of 5-min sample pairs - long flows, proactive mode
Figure 17: Traffic patterns on links 1 and 2 during the billing period (OFS#1.3)
Figure 18: 5-min samples observed on inter-domain links during a single 1-day long billing period (OFS#1.3)
Figure 19: 5-min samples observed on inter-domain links during a single 1-day long billing period when the hierarchical policer is inactive; only DTM operates (OFS#1.3)
Figure 20: Traffic growth during the billing period and a cost map. Domain AS1 (OFS#2.1)
Figure 21: Traffic growth during the billing period and a cost map. Domain AS4 (OFS#2.1)
Figure 22: The distribution of 5-min sample pairs on a cost map in domain AS1 (OFS#2.2)
Figure 23: The distribution of 5-min sample pairs on a cost map in domain AS4 (OFS#2.2)
Figure 24: 5-min samples observed on inter-domain links LA1 and LA2 (AS1) at the end of the billing period (OFS#2.2)
Figure 25: Testbed setting for evaluation of caching functionality
Figure 26: Cache hit rate with varying number of devices
Figure 27: Transferred data with varying number of devices
Figure 28: Cache hit rate with varying request time interval
Figure 29: Transferred data with varying request time interval
Figure 30: Cache efficiency in large scale study trace
Figure 31: Overlay of uNaDas after large scale study
Figure 32: Number of video views
Figure 33: Cache efficiency
Figure 34: Location of request processing
Figure 35: Prefetching overhead
Figure 36: Quality of Experience for videos served by uNaDas
Figure 37: Activity patterns of the participating smartphones
Figure 38: RTTs as measured from the smartphones
Figure 39: Network uplink as measured from the smartphones
Figure 40: Network downlink as measured from the smartphones
Figure 41: Measured CDF of the power consumption of the smartphones using 3G and WiFi
Figure 42: CDF of the power consumption using identical traffic traces and the power model of the Nexus 5 for the individual interfaces
Figure 43: Activity patterns of the participating uNaDas
Figure 44: CDF of the power consumption over the uNaDas participating in the study
Figure 45: MUCAPS prototype set-up
Figure 46: Screenshot with MUCAPS OFF for all types of UEP access
Figure 47: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a LAN/FTTX access
Figure 48: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a WiFi access
Figure 49: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a 3G access
Figure 50: GÉANT network topology in Europe
Figure 51: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for DTM/DTM++
Figure 52: DC traffic evolution and estimated traffic savings due to DTM/DTM++ on an 8-year basis
Figure 53: Forecasted values of the price of transit for the next 8 years [39]
Figure 54: Total costs vs. total benefits for DTM/DTM++ (benefits are calculated both for 8% and 15% traffic savings)
Figure 55: Overall economic gains assessment of DTM/DTM++
Figure 56: Tiered caching scenario
Figure 57: Expected adoption rate of RB-HORST
Figure 58: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for RB-HORST
Figure 59: Video traffic evolution and estimated traffic savings due to RB-HORST on an 8-year basis
Figure 60: Total costs vs. total benefits (benefits are calculated both for 50% and 75% traffic savings)
Figure 61: Overall economic gains assessment for RB-HORST
Figure 62: MUCAPS employed for transparent in-network optimization
Figure 63: Evolution of SmartenIT solutions
Figure 64: Sequences of incentives for DTM and DTM++ in the context of stakeholders' relations
Figure 65: General overview of hardware testbed extension
List of Tables
Table 1: SmartenIT mechanisms mapping
Table 2: Performance metrics for ICC
Table 3: ICC parameters
Table 4: Performance metrics for DTM
Table 5: DTM parameters
Table 6: Caching functionality metrics in RB-HORST
Table 7: Caching functionality parameters in RB-HORST
Table 8: Large scale RB-HORST study metrics
Table 9: Large scale RB-HORST study parameters
Table 10: MONA performance metrics
Table 11: MONA parameters (selected configuration underlined)
Table 12: MUCAPS performance metrics
Table 13: MUCAPS parameters
Table 14: SmartenIT metric categories mapping
Table 15: Experiment summary
Table 16: Experiment Report Card Template
Table 17: Total traffic costs achieved with DTM and expected without DTM obtained for various SDN controller modes (OFS#1.1)
Table 18: KPI values for DTM run with various SDN controller modes (OFS#1.1)
Table 19: The number of vector updates exchanged by SBoxes during the billing period
Table 20: Inter-domain traffic costs and KPI values for DTM (OFS#1.2)
Table 21: Manageable traffic generator setting for generation of long flows
Table 22: Inter-domain traffic costs in domains AS1 and AS4 (OFS#2.1)
Table 23: KPI for AS1 and AS4 (OFS#2.1)
Table 24: Inter-domain traffic costs in domains AS1 and AS4 (OFS#2.2)
Table 25: KPI for AS1 and AS4 (OFS#2.2)
Table 26: Analysis of the device activity
Table 27: Comparison of the power measurements for 3G and WiFi while active
Table 28: Results of the simulation using different interfaces
Table 29: Measured power consumption of the uNaDas
Table 30: ISP-specific ALTO values for RC and BWS chosen for the 3 AEP classes
Table 31: Metric weights chosen for the 3 tested access types
1 Executive summary
This document is Deliverable D4.3 of the SmartenIT project. It is entitled "Prototype Evaluations and Overall Project Assessment" and documents the results of the validation and performance evaluations of the mechanisms implemented within the SmartenIT project.
All measurable results, including traffic measurements, metrics on performance,
robustness, energy efficiency, and QoE, and results obtained by means of experiments,
simulations, and theoretical investigations are provided in the first part of this deliverable.
The second part of the deliverable is devoted to the overall assessment of the solutions
and mechanisms proposed by the SmartenIT project. It contains the broad picture from
various points of view (such as perspectives of different stakeholders, single layer and
cross-layer views, incentives of stakeholders, various business models and scenarios, and
new technologies implemented within SmartenIT), based on all methods applied (theory,
modeling, experiments) and evaluation techniques utilized (simulations, testbed). Final
project conclusions as well as implementation guidelines and usability assessment are
also provided in this document. The deliverable also outlines the major achievements of
the SmartenIT project as a whole with respect to its general approach and its specific
network management mechanisms.
To this end, this deliverable first positions the SmartenIT solutions and traffic management mechanisms in the current Internet, taking into account the perspective of stakeholders and current challenges. SmartenIT has conducted a deep analysis of the current Internet in two main complementary and synergetic directions, which led to the development of two scenario categories, namely OFS (Operator Focused Scenario) and EFS (End-user Focused Scenario). OFS is dedicated to the interaction among stakeholders acting as Cloud and particularly as Internet Service Providers, while EFS is centered on the end-user and their interaction with other users as well as with the Cloud. The main SmartenIT mechanisms pertaining to OFS (namely ICC, DTM, and their superposition DTM++) constitute powerful tools for service differentiation and OPEX reduction for ISPs, thus contributing both to the cost reduction and the value generation aspects of ISPs' business. These mechanisms have consciously been designed for the Best Effort Internet, but can also be adapted to a Differentiated Services model and/or a Beyond Best Effort Internet. Also, MRA ensures a fair resource allocation among federated Cloud Service Providers.
Regarding EFS, all relevant SmartenIT mechanisms are suitable for certain business contexts. The business context that is most relevant for RB-HORST (which is the combination of several mechanisms) is the Terminal and Service business model. Also, the SmartenIT EFS mechanisms, namely RB-HORST, SEConD, vINCENT, MONA, and MUCAPS, which can deliver energy efficiency, advanced services and resource management, and increased agility and performance, can be very useful means for new differentiated services and terminal operations, such as cloudlets.
Furthermore, the deliverable summarizes the evaluation criteria, methodologies, and metrics used for the assessment of the traffic management mechanisms selected for testbed experiments, and reports the testbed experiment results and assessments achieved during the SmartenIT validation activities. It should be noted that all runs in the testbeds included both a functional test assessment and a performance test. For the OFS experiments on DTM and DTM++, a virtualized testbed environment was employed, with four testbed instances launched at three SmartenIT partners' premises, covering both Single-to-Single and Multi-to-Multi domain topologies with a tariff based on both volume and the 95th percentile. The EFS experiments evaluate caching and energy consumption in RB-HORST, as well as the Quality of Service of videos for MUCAPS. In fact, the large-scale study for RB-HORST involved the provision of videos to users in the premises of five different partners.
Additionally, other dimensions of the assessment of SmartenIT solutions and the finally implemented traffic management mechanisms are introduced, and a deep analysis of the costs and benefits expected from the deployment of SmartenIT solutions over the next few years is provided, together with a SWOT analysis.
Finally, an overall assessment of all SmartenIT solutions and mechanisms is given, which serves as a summary of all project activities and outcomes. A map of the solutions, their stage of development, and the maturity they achieved is presented, and it is summarized how the key design goals and objectives defined at the beginning of the project were addressed and achieved. In particular, the main conclusions of this final assessment are as follows:
For the OFS scenario, DTM/DTM++ proves to be an excellent solution to be cast over the ISP domain, mainly because: a) it is based on incentive-driven collaboration among different administrative domains, providing a technical framework that allows traffic management between Cloud and ISP in a win-win situation; b) it resolves the problem of information asymmetry by providing partial information exchange between the cloud layer and the network layer, thus enabling cross-layer aware traffic management; c) it boosts Cloud Federation business models, thus demonstrating that the SmartenIT solutions for the OFS scenario are well deployable both for large-scale Cloud and Internet Service Providers and for smaller ones that together widen their geographical footprint; d) it optimizes inter-cloud traffic by providing an excellent tool to optimally route the selected inter-domain traffic; and e) it is a non-disruptive solution, meaning that it can be deployed over provider domains without changing the entire infrastructure, with a high deployability potential.
For the EFS scenario, the SmartenIT prototype and the related artifacts have proved to be a stable solution, capable of achieving good and measurable performance and of being cast in real-world large-scale deployments over a wide geographical area with an accordingly high number of end-users. The selected EFS traffic management mechanisms proved to be an excellent solution, mainly because: a) they provide a technical framework that allows traffic management between the Cloud provider and the end-user in a win-win fashion; b) they entail incentive-based collaboration among end-users based on a novel trust schema that leverages social awareness information; c) they achieve energy efficiency for end-user mobile devices and a significant enhancement of the Quality of Experience as perceived by the end-user; d) they provide novel content caching and prefetching schemes for content downloading, thus indirectly attaining savings of inter-domain traffic and energy efficiency in the provider's domain due to traffic localization; and e) they are easily deployable on end-user's and ISP's premises with negligible impact on the existing infrastructure.
Overall, the SmartenIT solutions, evaluated with the elaborated methodology reported in this deliverable, constitute a stable technical framework with measurable performance that ensures concrete benefits for the involved stakeholders (including traffic and energy savings), high deployability over both the ISP's infrastructure and the end-user's premises, high configurability in terms of functionalities and capabilities, and good positioning in the actual cloud landscape, thus fulfilling all major objectives of the SmartenIT project.
2 Introduction
Deliverable D4.3 reports the outcome of task T4.3 (Evaluation of Prototype) and task T4.4 (Overall Assessment). The goal of task T4.3 was to conduct the validation and performance evaluation of the SmartenIT prototype based on the application use cases and experiments identified in task T4.2. Specifically, the prototype developed in WP3 had to be parameterized and deployed on top of the mobile and wired testbed defined in task T4.1, and had to be evaluated with respect to the scenarios, metrics, and characteristic dimensions defined in task T4.2. This includes the efficiency, scalability, and energy-awareness of the developed network management mechanisms. The aim of these evaluations was to reveal the positive impact of the SmartenIT mechanisms on network traffic load and on performance parameters, the potential for cost savings, and the trade-offs between energy-efficient computation in the cloud and energy consumption on mobile end-user devices, ultimately measurable in overall energy savings and cost efficiency.
In parallel to task T4.3, the goal of task T4.4 was to assess all results achieved with respect to the performance evaluations of task T4.3 as well as the overall project outcomes, including the model- and simulation-based evaluations carried out within WP2. Also, QoE assessment based on theoretical models and subjective tests had to be performed, interpreted, and summarized within task T4.4. Benefits from the deployment of SmartenIT solutions for the respective stakeholders were also evaluated and documented. This assessment provides valuable interpretations of the results, to outline the major achievements of the SmartenIT project as a whole with respect to both its specific network management mechanisms (and the circumstances under which they are efficient) and its general approach. This approach also enables the provision of feedback to the theoretical work carried out by WP2 (theory) and to the engineering work carried out by WP3 (engineering), as well as the final system-related and research-related public messages to the interested stakeholders.
2.1 Purpose of this Document
The purpose of Deliverable D4.3 is to document the results of the validation and performance evaluations of the mechanisms implemented within the SmartenIT project. To this end, all measurable results, including traffic measurements, performance metrics, robustness, energy efficiency, and QoE, are provided here. Moreover, this deliverable aims to provide an overall assessment of the solutions and mechanisms proposed by the SmartenIT project, to derive the final project conclusions, and to give implementation guidelines and a usability assessment. Finally, the deliverable also aims to outline the major achievements of the SmartenIT project as a whole with respect to its specific network management mechanisms and its general approach.
2.2 Document Outline
Section 3 positions the SmartenIT solutions in the current Internet and Cloud landscape. Section 4 summarizes the evaluation criteria, methodologies, and metrics used for the assessment of the traffic management mechanisms selected for the testbed experiments presented later in Section 5. The metrics are categorized as economic, performance, and user experience metrics. Moreover, this section provides a business perspective on the assessment of the project outcomes. Subsequently, the testbed experiment results and assessments achieved during the SmartenIT validation activities are given in Section 5. Results of several experiments are presented and discussed; the evaluations are done using the methodologies and metrics presented in Section 4. Furthermore, Section 6 provides the assessment of the main project outcomes. This section introduces other dimensions of the assessment of SmartenIT solutions and the finally implemented traffic management mechanisms. It provides a deep analysis of the costs and benefits of the deployment of SmartenIT solutions expected over the next few years. Additionally, a SWOT analysis is presented. Finally, an overall assessment of all proposed SmartenIT solutions and mechanisms is given in Section 7. This section serves as a summary of all project activities and outcomes. It presents a map of the solutions and the stage of development and maturity they achieved. It also summarizes how the key design goals and objectives defined at the beginning of the project were achieved and addressed. Section 8 summarizes and concludes this deliverable.
• A great traffic increase driven by new applications has a tremendous impact on the actual global network built over ISP infrastructure. Cloud traffic, which in the majority of cases can be seen as overlay traffic, is not directly managed by ISP organizations, and this results in poor end-to-end traffic management capabilities.
• New requirements are emerging from the end-user's point of view, together with the metrics to evaluate them. In the Cloud era, the QoE metric (an aggregate metric which measures the end-user's experience with a purchased service) is gaining importance.
• The role of the end-user is continuously becoming more central and is also shifting from a passive role (where the user just receives a service) to a more active role, with the possibility to interact with other users (social networks) and with the service providers themselves (in the form of Nano Datacenters) by allowing the provider to utilize the equipment at customer premises as additional computing/storage resources.
The SmartenIT project has conducted a deep analysis oriented in two main complementary and synergetic directions.
The first direction is the analysis of the Cloud landscape from the point of view of the stakeholders and their relationships, in order to understand their requirements and their conflicting needs, and to devise a rationale which can convert those conflicting interests into synergetic, collaborative ones. This analysis has led to the development of two scenario categories, namely OFS (Operator Focused Scenario) and EFS (End-user Focused Scenario), which summarize two complementary perspectives of the Cloud landscape. The first is dedicated to the interaction among stakeholders acting as service providers, while the second is centered on the end-user and their interaction with the other users as well as with the Cloud.
The second direction is the analysis of suitable traffic management mechanisms (and their composition) to be cast over the two reference scenarios, intended to mitigate the drawbacks of the Cloud landscape by reducing the impact of inter-cloud traffic on providers' domains, by providing new schemas for efficient content caching and prefetching, and by providing a framework that fosters collaboration among multiple independent domains. The traffic management mechanisms have been designed to span a certain number of SmartenIT entities (the SBox implementing the SmartenIT logic, the SDN Controller, the Network Entity, the uNaDa, i.e., an enhanced home gateway, and the End User Entity) that constitute the SmartenIT architecture. The overall SmartenIT architecture has been designed to be modular in order to be deployable over different scenarios according to the actual needs.
In the next subsection, the main results of the SmartenIT analysis will be reported with focus on:
• the SmartenIT architecture (high-level description with focus on the SmartenIT entities and their deployment capabilities in different scenarios).
3.1 Stakeholders' roles
Five actors can be identified within the SmartenIT ecosystem, namely Cloud Service
Providers, Data Center Operators, Internet Service Providers, End Users, and the Social
Information Provider. Their interactions will be described in this section.
The main interest of the Cloud Service Provider (CSP) is the monetization of services purchased by the end users. Satisfaction of end users is a crucial issue; to ensure this goal, the QoS as well as the energy requirements should be met. By collecting and utilizing social information, new services may be offered. The CSP attempts to attract demand and differentiate its product offerings so as to serve various market segments with different prices/packages and consequently extract as much of its customers' surplus as possible. Monetization can also be achieved either by making the end-user accept a certain level of advertisement or by introducing systems that prevent users from accessing webpage content without a paid subscription to the service. On the other hand, there is pressure to keep the costs for ISP infrastructure and Cloud resource consumption, i.e., for renting resources from Data Center Operators, low. The CSP's main concern is also to respect the SLAs with its customers and to properly dimension its infrastructure to cover the respective demand and redundancy requirements. The CSP also needs to interact with the ISP to make sure that the provided network meets the requirements of the provided cloud service. Interconnection with other CSPs can reduce the transit fees a CSP pays to the ISP.
The Data Center Operator (DCO) is primarily concerned with the efficient use of its infrastructure, i.e., making the best use of its resources while minimizing the energy consumed for the completion of the maximum possible amount of tasks. It is possible that a part of its infrastructure is purchased from some CSP under an IaaS or PaaS model. Monetization of the infrastructure is achieved by guaranteeing satisfactory QoS/QoE parameters for end users. The DCO's main concern is to respect the SLAs it has with its customers, which may also be CSPs, to properly dimension its infrastructure to cover the respective demand, and to reduce its costs. Reduction of costs, in this case, focuses on the best possible utilization of hardware, both resource-wise and energy-wise. In particular, it aims to improve the utilization of its servers, to avoid or limit congestion on its links and platform, and to limit the energy consumption of its data centers. An important cost factor besides energy is the transit costs for data transfers. An emerging trend is datacenter interconnection, similar to the donut peering model of ISPs [44]: datacenters are gradually interconnecting with other datacenters via exchange points so as to reduce transit costs. Another interesting emerging prospect is that of federations [44]: since in the data center business multi-location presence and locality in the execution of tasks are crucial for competitiveness, small data center operators may team up and build a federation by combining their resources and aligning their business processes so as to appear as one large virtual data center operator to their potential large business customers. In this case, interactions within the federation are policed by means of business policy rules defining the rights and obligations of each federation member.
The Internet Service Provider (ISP) aims at maintaining good quality in its network, which will give it a competitive edge in the market and ultimately a good RoI (Return on Investment). This means that ISPs are interested in optimizing network bandwidth utilization, while avoiding congestion and preserving low latency. The ISP interacts with other ISPs via peering and possibly transit agreements for the management of the traffic of its customers, both upstream and downstream. Providing guarantees for the delivery of portions of the traffic and/or isolating a part of the traffic of Data Centers and/or ISPs (e.g., via VPNs, leased lines, or private paid peering agreements) is a source of potential revenue for the ISP. This revenue can be increased by a high quality of network services that leads to higher satisfaction of DCOs and also of end users. Supporting new services, possibly by employing additional information (the main example here being social information), may be attractive and simultaneously make ISPs more competitive towards end users and cloud providers. Offering new services to CSPs would open new market opportunities for ISPs. The main goal of the ISP is to manage the incoming traffic that is possibly delivered to it via multiple interconnection points. An additional goal for Tier-1 and Tier-2 ISPs is to sell transit interconnection agreements to other ISPs, DCOs, and CSPs. At the same time, their own charged transit link traffic has to be kept as low as possible. Traffic Engineering solutions that prioritize the traffic of customers according to the respective value/cost tradeoff are used in parallel with the management of inter-domain business agreements and the establishment of presence in Internet exchange points and areas that can attract demand from customers.
End users are to a large extent unaware of the business agreements and interactions among the other stakeholders, being primarily clients of the ISP and the CSP. The only thing they are aware of is the description and cost of the services they purchase, mainly Internet access from an ISP and file hosting from a CSP. End users are sensitive to the reliability and availability of the service, the support they receive when needed, as well as the performance of the service and the resulting QoE attained. All those features are typically defined in the contract/SLA of the service between the end user and the respective service provider. In case multiple providers are needed for the provisioning of a certain service, the respective interactions are hidden from the end user due to the nesting of the internal SLAs; the end user solely interacts with a single point of service, i.e., the provider from which he has purchased the end-to-end service. The end user's main concerns are his own QoE, the network access cost, and energy consumption. The energy efficiency of services and network access are factors of increasing importance from the point of view of the end user, especially when considering mobile network access (e.g., through the limitation of battery lifetime when using a mobile device). It is noteworthy that the costs for the end user can often take the form of exposure to advertisements instead of involvement in a monetary flow. Typically, the end user will not be interested in disclosing social information directly; however, he may be willing to give it up for some profit or improved QoS/QoE.
The Social Information Provider wants to monetize the social information it owns. Therefore, it can sell the social information to CSPs, DCOs, or ISPs. If the information can be used to improve a service, this creates incentives for end users to provide information to the Social Information Provider. If the Social Information Provider offers a service itself, like an Online Social Network (OSN), these incentives lead to a higher number of OSN users, which can further increase the revenue (e.g., from advertisements) of its service.
Having presented the main stakeholders and their roles, we now provide some insight into their business relationships and interactions, as well as the tussles amongst them, in the context of the Best Effort Internet and beyond.
Asymmetric information between the ISP and the buyer of the connectivity service is a major issue, since ISP networks are black boxes to other ISPs and to cloud/application service providers, rendering inter-domain monitoring impossible. Networks do not allow other parties to monitor their network performance and do not provide their customers any guarantees besides uptime. Furthermore, ISPs do not exchange any quality or control information, and there is a lack of reward schemes for ISPs that would be willing to provide assured quality to inter-domain traffic, since networks solely exchange data and BGP information, lacking standardized service-aware inter-provider service coordination at both the business and the technical layer. This is also due to the fact that interconnection contracts pertain to large traffic aggregates, thus leaving no service-specific overlay for optimizations, and provide only uptime guarantees and absolutely no guarantees on quality.
This lack of end-to-end inter-provider SLAs and of respective multi-provider service-aware connectivity products drives high quality out of the market, according to Akerlof's theory of the market for lemons [45]: since there are no quality guarantees, it is impossible for buyers to predict the quality of inter-domain flows. Thus, they base their willingness to pay on the average quality observed in the market. Average quality is by definition lower than that of high-quality connectivity, thus essentially driving any higher-quality connectivity product out of the market and resulting in a market of lemons. This is also evident in the context of mobile networks, where content delivery is highly monetized and thus any quality degradation is unacceptable: extranet solutions such as IPX are increasingly popular, thus creating an alternative to the Internet and a potential threat to the applicability of the latter for sustaining the content delivery business, especially due to the increasing number of mobile users and sophisticated mobile terminals.
ISPs' possible opportunistic routing and/or prioritization of their own flows over inter-domain traffic further degrades quality of service. Overall, these inefficiencies are due to the lack of service-aware coordination and cooperation among ISPs and clouds, as well as between ISPs and clouds, further rendering inter-provider cloud and content service provisioning problematic and greatly affecting the end-users' QoE. This issue is of particular interest to Europe and similar areas, where there are typically multiple geographically limited ISPs, data centers, and cloud service providers without a global Information and Communication Technology infrastructure, rendering intra-provider traffic management and cross-layer network-cloud optimization solutions inadequate.
To this end, data centers, CDNs, and proxy servers attempt to mitigate these well-known inefficiencies to some extent and to improve service performance by bringing data and services closer to the users. It is therefore interesting that the inefficiencies of the existing Internet and the resulting degradation in service quality actually constitute a business opportunity and sustain a market of considerable size, that of CDNs. Similarly, these inefficiencies lead to opportunities for SmartenIT too!
This complex business landscape gives rise to interesting tussles among the stakeholders, which have also been analysed in D2.5 [8]. These tussles are greatly affected both by the existing business relationships and practices among the operators and by the deployed technical Internet protocols. Regarding the tussles among the operators in the OFS, the optimal traffic destination when multiple candidate destinations exist comprises an interesting tussle: the selection of the optimal destination would be performed by the sending entity, CSP/DC or uNaDa, without any knowledge of the underlying network load, which may affect both the time within which the data transfer is completed and the underlying network load. Also, the BGP aggregation function further limits the choices for optimizing the traffic routing. The net neutrality tussle pertains to the scope of reasonable traffic management. The information asymmetry, already explained earlier in this section, and the potential inter-domain traffic changes resulting from edge storage devices and the support of video content dissemination may have a significant impact on an ISP's inter-connection costs and peering ratios, and thus on the sustainability of its interconnection agreements. Last but not least, the pricing tussle over traffic is also closely related to the net neutrality tussle and is partly resolved via competition, business models, and regulation. However, the existing status quo in pricing, as well as the introduction of new schemes, may create interesting spill-over effects and new tussles, since more or less control and options are provided to some of the stakeholders in the Internet services value chain. To this end, pricing is both the cause of tussles and, as a control mechanism, a way to resolve them, but it also serves as the outcome of tussles and business modeling in the Internet.
Similar tussles also appear in the context of federations among clouds and datacenters. The degree of cooperation and of information and control delegation among the federation members comprises an interesting tussle. This is largely due to the fact that the federation members are also competing directly in the market for customers and try to maximize their profits and the customers obtained from the federation. This is closely related to the information asymmetry and quality discrimination tussles already explained in the previous paragraph for the operators' case. The definition of the federation policies, rules, and information model is also a highly important tussle for the cloud service providers, since this determines customer ownership, intermediation, revenue flows, and revenue sharing schemes, i.e., crucial aspects of the CSP's business.
Tussles also appear in the context of the EFS, including the reduction of cloud provider traffic traversing ISPs and the traceability of user behavior, which are directly related to the intermediation of CDNs or alternative caching entities, such as uNaDas. These tussles, the caching mechanisms that attempt to mitigate them, and the related performance issues are also tied to pricing and inter-domain traffic changes, and to the respective impact on the hardware and network upgrades needed.
All these tussles and their correlation with the business and technology status quo of the Internet indicate the highly disruptive impact of technology on the Internet ecosystem and the resulting distribution of power among its stakeholders. The complexity of these tussles also indicates that a holistic approach, encompassing both the business and the technology aspects of Internet service delivery, is needed to mitigate them. This is well in line with the SmartenIT approach and the mechanisms developed, which are specified so as to respect neutrality, fair competition, and design-for-tussle principles. The respective mechanisms, their scope and goal, as well as their relation to the business landscape, are presented in the next subsection.
3.2 Mechanisms landscape
This section provides an overview of the landscape of SmartenIT mechanisms and the
respective cloud business models under which these can be utilized. Regarding the
Operator Focused Scenario (OFS), the main mechanisms and their respective
achievements are:
All SmartenIT OFS mechanisms can be of high value in certain business contexts related to the Internet and Cloud services landscape. A high-level overview can be found in Chapter 5 of D2.5 [8]. Next, we focus on ICC, DTM, and their combination DTM++, since DTM++ has been fully implemented and constitutes one of the two major tangible outputs, i.e., software products, of the project, as well as on MRA.
ICC, DTM, and their superposition DTM++ constitute powerful tools for service differentiation and OPEX reduction for Internet Service Providers, thus contributing both to the cost reduction and the value generation aspects of ISPs' business. ICC, DTM, and DTM++ have consciously been designed for the Best Effort Internet so as to maximize the OFS mechanisms' impact and applicability. However, the mechanisms' design can also be adapted to a Differentiated Services model, by extending the number of traffic classes and the respective differentiated treatment of the traffic, and/or to a Beyond Best Effort Internet. Regarding the latter, DTM and DTM++ tunnels can utilize assured-quality inter-domain paths so as to further optimize the distribution of traffic on a per-service level, by using separate tunnels per service and per destination and optimizing the cost distribution over them. ICC can comprise multiple classes of service with different prioritization of the traffic, taking into account the respective services' requirements. In this case, statistical guarantees per service type can be assured.
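The DTM experiment figures (e.g., Figures 10 and 11) refer to "compensation vector" updates exchanged between SBoxes. As a rough, hedged illustration of that idea, the sketch below compares the traffic actually carried on each inter-domain link against a cost-optimal reference split and signals the difference; the reference shares, units, and function name are assumptions for illustration, not the prototype's algorithm.

```python
# Hedged sketch of the compensation-vector idea: positive entries mean
# "send more manageable traffic via this link", negative entries mean
# "send less". Reference shares and sample values are illustrative.

def compensation_vector(reference_share, achieved_mb):
    total = sum(achieved_mb.values())
    return {link: reference_share[link] * total - achieved_mb[link]
            for link in achieved_mb}

reference = {'link1': 0.7, 'link2': 0.3}      # assumed cost-optimal split
achieved = {'link1': 600.0, 'link2': 400.0}   # MB observed since last update
print(compensation_vector(reference, achieved))
# -> {'link1': 100.0, 'link2': -100.0}: shift upcoming flows toward link1
```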
MRA is an attractive solution for cloud federations. This is also the case for all the other OFS mechanisms which, in addition to the aforementioned features, comprise powerful tools for providing customizable Network as a Service (NaaS) connectivity. In particular, this may apply over the entire ISP infrastructure or over network slices, potentially configured and managed according to the Software Defined Networking (SDN) paradigm.
Furthermore, the brokering business context is another one where MRA, ICC, DTM and
DTM++ could be of value (additional details in Section 3.2 of Deliverable D2.4 [7]). Brokers
must be extremely efficient in meeting the needs for aggregation, integration and
customization of services and resources and delivering to their customers a unified service
that meets the customized needs of the specific customer.
Last but not least, DTM can also apply in the emerging 5G and Internet of Things (IoT) contexts, where virtualization will lead to the conception and deployment of network slices and resource/traffic aggregates. DTM can offer virtual routers serving as points of presence for data centers. These can be deployed in separate slices belonging to different virtual operators so as to provision 5G or IoT services. Since multi-provider service delivery, including cloud and IoT resources, over software-defined networks is part of the 5G vision, DTM++ constitutes a candidate for this market as well.
Regarding the End-user Focused Scenario (EFS), the main mechanisms and their respective achievements are:
• Replicating Balanced Tracker and Home Router Sharing based on Trust (RB-HORST), which provides WiFi access to trusted users to offload their mobile traffic from 3G/4G to WiFi. RB-HORST also enables home routers of trusted users, identified through a social network, to be organized in a content-centric overlay network supporting content prefetching and caching.
• Socially-aware Efficient Content Delivery (SEConD), which employs social information, AS-locality awareness, chunk-based P2P content delivery, and prefetching. A centralized node acts as a cache and as a P2P tracker with a proxy, to improve the Quality of Experience of video streaming for users of OSNs and to reduce inter-AS traffic.
• Virtual Incentives (vINCENT), which aims to leverage unused wireless resources among users, while ensuring security by employing tunneling among trusted users. vINCENT exploits social relationships derived from the OSN, as well as interest similarities and locality of the exchange of OSN content.
• Mobile Network Assistant (MONA), which schedules wireless data transmissions to reduce the energy spent on the air-interfaces, exploiting changes in mobile network performance that depend on location and time patterns.
• Multi-Criteria Application Endpoint Selection (MUCAPS), which improves users' Quality of Experience by selecting communication endpoints with awareness of the underlying network topology, provided by ALTO protocol extensions, and of the end-user access; a simple ranking sketch follows this list.
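To make the MUCAPS selection step concrete, the sketch below ranks candidate application endpoints by a weighted combination of two ISP-provided ALTO metrics, routing cost (RC, lower is better) and bandwidth score (BWS, higher is better), with weights depending on the end-user access type, in the spirit of Tables 30 and 31. All numbers, names, and the linear scoring rule are illustrative assumptions rather than the MUCAPS implementation.

```python
# Hedged sketch of multi-criteria endpoint ranking in the spirit of MUCAPS.
# Weights per access type and all values below are assumptions.
WEIGHTS = {
    '3g':   {'rc': 0.3, 'bws': 0.7},  # bandwidth matters most on 3G
    'wifi': {'rc': 0.5, 'bws': 0.5},
    'fttx': {'rc': 0.7, 'bws': 0.3},  # cost dominates on fast fixed access
}

def rank_endpoints(candidates, access_type):
    """candidates: (name, rc, bws) tuples; returns best-first ordering."""
    w = WEIGHTS[access_type]
    # lower rc is better, higher bws is better -> minimize weighted rc - bws
    return sorted(candidates, key=lambda c: w['rc'] * c[1] - w['bws'] * c[2])

candidates = [('ep-local', 5, 60), ('ep-peering', 10, 80), ('ep-transit', 50, 90)]
print(rank_endpoints(candidates, '3g')[0][0])  # -> 'ep-peering'
```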
All the SmartenIT EFS mechanisms are suitable for certain business contexts. Next, we primarily focus on the RB-HORST mechanism, which is the superposition of various EFS mechanisms and one of the software products of the project, and discuss its fit to the business landscape.
The business context that is most relevant is the Terminal and Service business model. In particular, the terminal+service trends and the respective business models indicate an increasing interest in innovation on the end-user device side. Thus, the SmartenIT EFS mechanisms, namely RB-HORST, SEConD, vINCENT, MONA, and MUCAPS, which can deliver energy efficiency, advanced services and resource management, and increased agility and performance, can be very useful means for new differentiated services and terminal operations. A prominent example is the cloudlets business case: cloudlets are decentralized and widely dispersed Internet infrastructures whose compute cycles and storage resources can be leveraged by nearby mobile computers, i.e., a "data center in a box". The aforementioned SmartenIT EFS mechanisms are well positioned within this emerging business context.
This is also evident in the latest developments regarding Windows 10. In particular, mobile terminals running Windows 10 let their users exchange Wi-Fi network access with their Facebook friends, Outlook.com contacts, or Skype contacts, to give and get Internet access without seeing each other's Wi-Fi network passwords [22].
Furthermore, 5G is an emerging business landscape where EFS mechanisms are of prominent importance. The large emphasis on Quality of Experience (QoE), energy efficiency, and ubiquitous connectivity, as opposed to mobility, renders the RB-HORST functionality well placed in this context, since its logic could be utilized for the efficient coordination of horizontal and vertical hand-offs in a manner opaque to the user.
The novel features that RB-HORST brings to end-user devices and the respective services can be of high value for specific services whose monetization is greatly affected by QoE: video streaming delivery comprises a primary market segment whose size and dynamics justify the investment in RB-HORST and similar solutions. In this context, MUCAPS is also applicable, since it could be used for smart selection of content or service delivery endpoints: this is of high value in the video market, as well as in that of CDNs. Furthermore, the MUCAPS functionality could also be extended so as to perform smart content delivery in the context of 5G networks and mobile edge computing: user devices would use the MUCAPS service to associate end users' flows with the least costly interface and service/content delivery point available.
RB-HORST's mechanisms for caching, prefetching, and offloading are especially interesting for businesses that lack their own 3G/4G network with wide Internet coverage. Such a business, e.g., an ISP that has set-top boxes or routers at the premises of its customers, can quickly gain high WLAN coverage and compete with 3G/LTE providers with respect to data transmission. Furthermore, making the set-top box or router more intelligent, so that users could install various router apps, could boost software development on those devices, provided an open API is offered, as with Android or iOS.
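As a hedged illustration of the offloading decision such a business could ship on its routers, the sketch below lets a device join a shared home WiFi only when the router's owner is trusted via the social graph (here: friends or friends-of-friends). The trust rule and all names are our assumptions, not the RB-HORST prototype's API.

```python
# Hedged sketch of RB-HORST-style trusted WiFi offloading.
# Social graph, trust rule, and router records are illustrative.

def is_trusted(owner, user, graph):
    """Trust OSN friends and friends-of-friends of the user (assumed rule)."""
    friends = graph.get(user, set())
    return owner in friends or any(owner in graph.get(f, set()) for f in friends)

def pick_access(user, nearby_routers, graph):
    """Prefer the first trusted shared WiFi; otherwise stay on cellular."""
    for router in nearby_routers:
        if router['shared'] and is_trusted(router['owner'], user, graph):
            return router['ssid']
    return 'cellular'

graph = {'alice': {'bob'}, 'bob': {'alice', 'carol'}}
routers = [{'ssid': 'carol-home', 'owner': 'carol', 'shared': True}]
print(pick_access('alice', routers, graph))  # -> 'carol-home' (friend-of-friend)
```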
With MUCAPS, end-users get a better QoE due to the ISP-assisted application traffic optimization. The ISP can optimize routing costs and resource usage by means of either a direct usage of MUCAPS or applications and end-users agreeing to use the MUCAPS option. As a result, CSPs get an increased satisfaction rating from their customers. If CSPs buy services from CDNs, the CDNs will easily keep them as customers. Besides, ISPs save inter-domain traffic costs due to content delivery from sources hosted locally or in a peering network. ISPs may buy MUCAPS from a vendor to optimize their routing costs and resource usage by optimizing application traffic. ISPs may also buy an ALTO server if they don't have one. As MUCAPS is transparent to them, ISP customers (CSPs, CDNs) and end users do not need to buy or employ any additional service, but may benefit from the improved resource allocation offered by the ISP thanks to MUCAPS. Besides, end users may also agree to have their connection type disclosed to the ISP.
3.3 System architecture
As already mentioned, SmartenIT envisions two approaches for the efficient management of Internet traffic. These are:
a) the Operator Focused approach, which manages traffic between Cloud-based services, large content providers, or data centers, typically residing in Tier-2 domains, and involving traffic crossing Tier-1 domains as well;
b) the End-user Focused approach, which manages traffic destined to end users (in Tier-3 domains) from Cloud-based infrastructures with multiple Points of Presence (PoPs), i.e., caches, mirrors, surrogates, CDNs.
In deliverable D3.3 "Final Report on System Architecture" [11], the final system architecture was presented and documented in detail. The entities which constitute the SmartenIT architecture are the SBox (implementing the SmartenIT logic), the SDN Controller, the Network Entity (i.e., the router), the uNaDa (i.e., an enhanced home gateway), and the End User Entity (i.e., smartphone, laptop). The first three entities refer to the Operator Focused approach, while the last two refer to the End-user Focused approach. Figure 1 provides a visual representation of the aforementioned entities and their envisioned deployment.
[Figures 2 and 3: architecture (a) and deployment (b) diagrams. The DTM deployment shows the SBox server (a Jetty container running sbox.war/sbox.jar, an sqlite DB accessed via JDBC, an inter-SBox communication service over TCP, and the Traffic Manager, Economic Analyzer and QoS Analyzer components) together with a Floodlight 0.90 SDN controller (sdn.jar) managing network devices via OpenFlow, NETCONF and SNMP.]
implementation of new features. The last releases also proved that the architecture was well designed: it was flexible enough to incorporate additional management mechanisms.
3.4 Business aspects of traffic management mechanisms
The business aspects of traffic management mechanisms depend on the goals and incentives of all parties involved in service provisioning over the Internet, which differ between the perspectives of popular Internet platforms and of the users, as well as between content delivery networks and network providers (ISPs) on the transport path.
Four main categories of Internet traffic to be optimized by different traffic management mechanisms in SmartenIT can be distinguished:
- traffic distributed via over-the-top (OTT) content platforms supported by a large global CDN, i.e., CDN-to-user traffic,
- traffic via smaller web platforms without support by large global CDNs, i.e., cloud/server-to-user traffic,
- user-to-user traffic, e.g., conversational voice or video, P2P file sharing, etc.,
- transit traffic between data centers (DC-to-DC traffic).
The impact of each category on the Internet traffic mix varies and develops over time. In the period 2001-2005, user-to-user traffic was dominant, driven by high-volume file sharing applications [25] (since 2004 P2P traffic has been declining). Since then, client-server based traffic has been increasing, mainly on video and IP-TV platforms, e.g., YouTube or Netflix, together with other cloud servers of different sizes, which generate CDN/server/cloud-to-user type traffic. A major portion of this traffic is distributed over only a few CDNs (Google, Akamai, etc.) with a dense global footprint, but traffic from smaller clouds and from spontaneously popular web sites without global CDN support also sums up to a considerable fraction of IP traffic. The volume of file sharing traffic in Europe is constant or slightly decreasing according to Cisco [24], and its share decreased below 10% of the total downstream traffic in recent Internet traffic reports by Sandvine [30], because other traffic types are growing much faster (for detailed graphs and figures, refer to D2.1 [4]). The current composition of those traffic types has significant influence on the options for traffic engineering and control mechanisms. In the following discussion of appropriate management measures for each traffic type, a reduction of the traffic load always means lower cost and energy consumption, because upgrades of capacity may be delayed or resources may be switched off due to better resource usage. Over the last decade, energy consumption has grown into a major component of operational network costs (OPEX), especially in high speed core networks and in the radio access of wireless/mobile networks. Energy consumption is becoming a limiting factor of bandwidth growth in future IP networks. Therefore studies on energy saving options, as provided e.g. by the MONA mechanism in Section 4.2.4 or the ICC mechanism in Section 4.2.1, are an important focus of SmartenIT.
For OTT cloud services and web sites supported by a global CDN, i.e., CDN-to-user traffic, the CDN often has control over most of the transport path from the original site of the web server to caching and content distribution servers, which are available at main Internet exchange points (IXPs) [38] around the world and have peering connections with most ISPs. Based on a large server infrastructure in the Internet, CDNs with a global footprint can apply load balancing under the control of their own servers, regarding the load in the distributed server architecture and the network load on the connections between the distribution servers and the sites providing the original content. For this purpose, the dynamic traffic management (DTM) mechanism as described in Section 4.2.1 can be applied. Moreover, global CDNs also maintain and optimize the connections from their servers to the user, including fast switching procedures to another server offering better performance when QoS degradation in terms of low bandwidth or long delays is observed by QoS monitoring [31].
Moreover, CDN-to-user traffic is also supported by caching within the platform of the CDN provider. On the other hand, caching options beyond the CDN are often prohibited because they only partly conform to the business interests of global CDNs. Additional caching in user-controlled nano data centers (uNaDas), as proposed by the RB-HORST approach in Sections 4.2.2-4.2.3, or in home gateways under control of an ISP, can further improve the throughput and delays of services. This may be less relevant for small ISP networks whose peering with global CDN servers is already close to the users. For large ISPs, which still have a number of hops between a peering CDN server and their users, and for mobile and wireless network providers, where the access link via the air interface is the bottleneck, caching in global CDN servers is often far from an optimal transport solution. Nonetheless, global CDNs usually have a higher business interest in keeping complete control over the connections to the users in order to get full information about user activity, because usage statistics are essential for the revenues of content and service providers.
Therefore cooperative approaches between large CDN providers and ISPs seem to be the only viable way. In fact, such approaches have already been developed. For example, the SmartenIT partners TDG and TUD have studied an approach where the CDN provider maintains full control at the application layer, while the network provider can optimize the network layer independently, with an SDN controller interface for the exchange of information about requests from users and the corresponding selection of an appropriate server and QoS support within the ISP network. Moreover, the MUCAPS mechanism and developments proposed by BENOCS [43] investigate similar cooperative traffic management approaches.
The situation is different for the second type, cloud/server-to-user traffic without support by a global CDN, which allows for transparent caching in uNaDas as part of the RB-HORST mechanism and/or in home gateways of an ISP or similar caching facilities in local networks. Small clouds and other web services without a background CDN would face a considerable performance handicap as compared to OTT services and therefore can benefit a lot, in terms of improved throughput and shortened delays, from transparent caches in ISP networks and user premises. ISPs benefit from a transparent caching architecture because it reduces the load on expensive interconnection and peering links and improves throughput for their subscribers. The RB-HORST caching approach can still exploit some incentive for the users, who get improved performance, but naturally it is more challenging to set up an efficient cooperative service with limited shared storage resources.
In principle, DTM is also applicable to inter-domain traffic of the cloud/server-to-user type between clouds and ISP domains. However, on both sides many connections would have to be maintained with different small to medium size clouds or ISP domains, respectively, whereas DTM assumes established links between one cloud provider and an ISP. The options for monitoring and for influencing the traffic flows would also have to be adapted to such a scenario.
The user-to-user traffic type is also appropriate for caching options, if a small set of popular web objects is requested by a large user population, as a general precondition for caching efficiency. Moreover, approaches based on application layer traffic optimization, as discussed in the ALTO working group at the IETF, are relevant [27]. Such approaches are included in the MUCAPS mechanism in Section 4.2.5 as an advanced ALTO implementation. The basic concept of an ALTO server that provides network layer information, especially distance metrics between users, in order to optimize transport efficiency and costs, is also considered as part of the DTM concept and of the overall SmartenIT architecture. However, the concept relies on ISPs to deploy and operate ALTO servers, although their incentive is not entirely clear. Indeed, enabling higher throughput, e.g., for file sharing data, may end up in even higher P2P traffic demand rather than reduced network load [25]. On the other hand, an ALTO server has to adapt to the requirements of different applications, and it is not certain whether an application is willing to trust the server's recommendations, which are based on unknown ISP preferences and may lead to unexpected and hardly controllable side effects. However, the most relevant data for the ALTO service is information about the autonomous system to which a user belongs, which is available in public data about the IP address ranges of ISPs and other organizations, without the need to rely on special ISP involvement. A more detailed picture of regional user locations often does not help, because even a transport path between users in the same region usually has to pass through a core PoP, and transport in the core of an ISP has the lowest per-Mb/s costs on high speed backbone links.
Last but not least, traffic between data centers (DC-to-DC) amounts to a considerable portion, about one third, of the Internet traffic according to reports by Cisco Systems [24]. Most of this traffic is again flowing between data centers of the largest global CDN providers. In other cases, when data centers in different domains are connected, DTM can be applied if several paths are prepared for the data center exchange. The characteristics of inter-data-center traffic are expected to be much more variable than for CDN/cloud-to-user traffic, because there is no statistical multiplexing gain comparable to the one exploitable when distributing data over a large user population.
DC-to-DC traffic should be deferred to low traffic load periods, e.g., overnight rather than in busy hours, wherever possible, as is done in the ICC mechanism in Section 4.2.1. The daily DC-to-user traffic profiles usually follow a sinusoid-like curve with a peak in the evening hours and less than half of the peak traffic rate in the early morning hours, as shown in traffic statistics, e.g., of the world's currently largest Internet exchange point (DE-CIX) [32]. Then, prefetching content at night time that is expected to become popular on the next day can reduce the daily peak rate. In general, the caching strategy can be modified, e.g., from least recently used (LRU) with a steady cache input flow to a strategy that defers any cache update towards off-peak periods.
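As a minimal sketch of this deferred-update variant (an assumed interface, not project code): hits are served and promoted as in plain LRU at any time, while insertions of new objects are queued during peak hours and applied only off-peak.

from collections import OrderedDict

class DeferredLRUCache:
    """LRU cache that defers insertions of new objects to off-peak hours."""

    def __init__(self, capacity: int, off_peak_hours=range(1, 6)):
        self.capacity = capacity
        self.off_peak_hours = set(off_peak_hours)  # e.g. 01:00-05:59
        self.store = OrderedDict()   # key -> object
        self.pending = []            # insertions queued during peak hours

    def get(self, key):
        if key in self.store:
            self.store.move_to_end(key)  # normal LRU promotion on a hit
            return self.store[key]
        return None

    def put(self, key, value, hour: int):
        if hour in self.off_peak_hours:
            self._insert(key, value)           # steady cache input off-peak
        else:
            self.pending.append((key, value))  # defer: no peak-hour updates

    def flush_pending(self):
        """Apply all deferred insertions, e.g. from a nightly job."""
        for key, value in self.pending:
            self._insert(key, value)
        self.pending.clear()

    def _insert(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used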
User demands are the original source of traffic of any type. In principle, the users decide which applications and traffic types they generate. However, the mass of Internet users prefers low rates for Internet access and IP services for free or at low prices. As a consequence, OTT business models are partly based on advertisement or on small monthly fees, and access network providers mainly offer best-effort services, whereas special QoS support is reserved for a small traffic portion devoted to business customers. Nonetheless, the users can expect a basic quality level, which is more and more closely controlled by the European regulators in recent quality initiatives based on a development towards large-scale performance measurements in the IETF LMAP standardization (cf. [23], [29]). Finally, the relationships between users in social networks have been studied in the RB-HORST mechanism in order to predict data requests for improving caching strategies to hold the most popular data close to the users.
Concluding, different traffic types require different traffic engineering mechanisms, where load balancing, ALTO location servers, and caching options can be combined and adapted to many scenarios in order to optimize the data transport of services. The main aim of the SmartenIT mechanisms is to provide refined monitoring methods and the flexibility for fast responses to changing network traffic patterns, which can improve QoS/QoE measures in a cost efficient way, provided that stability demands are still met and any higher solution complexity can still be handled in network operation. The development towards advanced traffic management methods is indispensable to cope with fast growing data volumes as well as the more thorough control of QoS guarantees currently enforced by the European regulators (cf. [23], [27]). While traffic engineering has already been optimized on provider platforms under unique administration [26], there is still considerable potential for improving end-to-end transport through heterogeneous network domains via CDNs, clouds, and distributed data centers operated by multiple providers [28] with different business goals and traffic engineering approaches in each domain, although all parties have the goal of efficient service and QoS provisioning in common. Table 1 highlights which specific SmartenIT mechanisms cover each type of traffic.
Table 1: SmartenIT mechanisms mapping

Main categories of Internet traffic | SmartenIT traffic management mechanism
CDN-to-user traffic | RB-HORST, MUCAPS
Cloud/server-to-user traffic | DTM, RB-HORST
User-to-user traffic | RB-HORST, MUCAPS
DC-to-DC traffic | ICC, DTM
4.1
4.2 Metrics and parameters
The metrics and key performance indicators have been described initially in D2.2 [5], D2.3 [6], and D4.2 [14]. They have also been elaborated in D2.5 [8], which summarizes the simulation results and WP2 outcomes. The following section provides an overview of those metrics and parameters, explaining why they were used for the experiments and why they are important. Based on those parameters and metrics of the experiments, an overview of the high-level evaluation criteria and metrics is given.
4.2.1 OFS: ICC, DTM & DTM++
ICC, and in particular its network traffic management logic, has been partly integrated in DTM++. The respective ICC performance metrics are essentially the same as those of DTM++ when the goal is the reduction of the ISP's 95th percentile charge. Table 2 below provides an overview of the performance metrics for ICC. Table 3 provides an overview of the ICC parameters.
Table 2: Performance metrics for ICC

Metric | Assessment
Total amount of traffic at the end of the billing period | Represents total traffic transferred during the billing period.
Total cost achieved | The traffic patterns of the ISP manageable and non-manageable traffic over the transit link with and without ICC.
Time-shiftable traffic extra delays | The extra delays incurred for the time-shiftable (delay-tolerant) traffic due to ICC operation.
Table 3: ICC parameters

Parameter | Assessment
Number of epochs y (static, y=10) | The number of times that ICC is invoked within the 5-min interval. Within the context of the DTM++ experiments this ICC parameter is essentially the same as the invocation of DTM, i.e., y=10.
Threshold parameters tholds (static, values 1,...,y-1 set to 0.9, y-th value set to 1) | These parameters define the target rate within each epoch of the 5-min interval for ICC.
Traffic (variable) | The traffic upon which the ICC rate control will be applied. For DTM++ experiments this is by definition the same, since ICC is an integral part of DTM++.
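To illustrate the role of y and the tholds parameters (the normative ICC rate-control logic is specified in D2.5 [8]; the mapping below is only one plausible reading), each epoch i of the 5-minute interval gets a target rate equal to tholds[i] times the reference threshold rate:

def icc_epoch_target_rates(threshold_rate_bps: float, y: int = 10,
                           tholds=None):
    """Per-epoch target rates for the y epochs of a 5-min interval."""
    if tholds is None:
        # Epochs 1..y-1 capped at 0.9 of the threshold, the last at 1.0,
        # mirroring the static setting used in the DTM++ experiments.
        tholds = [0.9] * (y - 1) + [1.0]
    assert len(tholds) == y
    return [t * threshold_rate_bps for t in tholds]

# Example: a 10 Mbit/s threshold yields nine 9 Mbit/s epochs and one
# final 10 Mbit/s epoch within each 5-minute sample.
print(icc_epoch_target_rates(10e6))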
The DTM experiments are dedicated to performance evaluation for this management
mechanism. In D4.2 [14] Section 3.1, a set of experiments has been defined. These
experiments are divided into two groups depending on the charging rules applied for the
inter-domain traffic (volume and 95th percentile rules). In the experiments we evaluate the
DTM system for different inter-cloud connections (single-to-single and multi-to-multi cloud
communication). The operation of the system is inspected for different billing periods, cost
functions and traffic patterns. The set of evaluation metrics and KPIs has been defined in
D4.2 [14].
It was decided that the performance of DTM++ would not be evaluated experimentally; only functional tests would be performed. DTM++ is a synergetic solution that was designed during the last project year. The specification of the realization of ICC using a hierarchical policer (cf. D2.5 [8]) and its integration with DTM was a difficult and time-consuming process. The evaluation of the ICC and DTM++ implementations required building a new testbed environment (on hardware routers). Finally, before the end of the project only functional tests of the DTM++ implementation were done; they are presented in this deliverable in Section 5.1.3. The results of the ICC functional test (operation without DTM) have been presented in D2.5 [8]. In the table below the performance metrics for DTM and DTM++ are collected.
Table 4: Performance metrics for DTM

Metric | Assessment
Total amount of traffic at the end of the billing period | Represents the total traffic transferred during the billing period via the respective links. One can compare the total amount of traffic sent when DTM (used also for DTM++) is working with the case when it is not. This is very useful in the case of DTM++, when ICC operates too, because it indicates how ICC influences the total traffic transferred.
Total cost achieved | The absolute transit cost benefit (or loss) from using DTM, if link 1 or 2 is considered as the default path, respectively. The related KPI represents the ratio of the achieved cost to the cost expected if the achieved distribution of traffic among links were exactly equal to the reference vector; it is especially useful when DTM++ is evaluated.
Amount of traffic |

Parameter | Assessment
Traffic profiles: amount of traffic on each inter-domain link; share of manageable traffic in overall traffic; pattern of manageable traffic |
DTM configuration parameters |
In the following table, the parameters are assessed. Two variable parameters were used in
the experiment, while two static parameters were fixed. Changing these two static
parameters to variable parameters could have a high impact on the experiments.
Table 7: Caching functionality parameters in RB-HORST

Parameter | Assessment
Number of devices (variable, 1-8 devices) | The number of devices shows the scalability of RB-HORST. The numbers are from 1-8, thus representing 1-8 users connected to a uNaDa. Besides the RB-HORST software, a further limiting factor for the number of devices connected to a uNaDa is the WiFi capacity.
Cache size (static, ~15 GB) |
Request mechanism (static, Zipf distribution) |
Metric | Assessment
Energy consumption of the uNaDa | The energy consumption of the uNaDa (Odroids) is to be determined using system utilization traces collected during the large-scale study. Using this metric, the overall energy cost of using RB-HORST can be determined, based on which comparisons with conventional streaming can be derived.
Prefetching efficiency |
Requests served |
Bandwidth utilization from traffic traces | The bandwidth utilization describes the traffic on the Ethernet and WiFi interfaces of the uNaDa. From these measurements, first the power consumption of the uNaDa is to be derived; secondly, the changes in network traffic between on-demand streaming and social offloading using RB-HORST are to be determined.
Inter-domain traffic produced by prefetching |
Table 9 presents the experiment parameters and their values used during the large-scale studies. Due to the size and duration of a large-scale study it is impractical to vary several parameters and re-run the whole study. Therefore, all parameters are static and were chosen to produce the most data points for the later evaluation. E.g., the predictions were executed every hour, which allows the granularity to be reduced by taking only every 2nd or 4th prediction into account for the assessment.
Table 9: Large scale RB-HORST study parameters

Parameter | Assessment
Cache TTL (static, 10 min) | The TTL for the cache determines how long a video may reside in the cache after being watched the last time. Videos with expired TTL may be removed by the cache maintenance task even if the cache is not full or near full.
Overlay Updates Interval (static, 60 min) |
Overlay Predictions Interval (static, 60 min) |
Metric | Assessment
Energy consumption |
Throughput |
RTT |
Signal Strength |
For the analysis of the system, the parameters were selected as given in Table 11. The
network technology is automatically selected by the participating devices based on
availability. For the energy modeling of the network connections on the mobile device, the
power model of the Nexus 5 was selected, as it is a widely used device implementing the
required network technologies.
Table 11: MONA parameters (selected configuration underlined)

Parameter | Assessment
Network technology | Automatically selected by the participating devices based on availability.
Mobile device (Nexus S, Nexus 5) | Nexus 5 selected due to availability of 4G connectivity.
uNaDa hardware (Raspberry Pi, Odroid-C1) |
Throughput measurement interval (on connect; periodically: 1 min, 5 min, 10 min, 15 min, 60 min) | 1 minute interval selected to be able to determine changes in network behavior while limiting additional load on the system.
Metric | Assessment
NFrz, DFrz |
%corrupted frames |
The Cross Layer Utility (CLU) represents the proximity of the AEP performance vector V(aep) to an ideal (RC, BWS) vector Vid, composed of the best observed performance values among the candidate AEPs. CLU is the L1 norm of a weighted distance vector between V(aep) and Vid. The selected AEP must have the maximal CLU value.
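Formalizing the above description (the weights w_k and their normalization are not specified here; this is one consistent reading of the text):

    CLU(aep) = \| w \circ ( V(aep) - V_{id} ) \|_{1} = \sum_{k} w_{k} \, | V_{k}(aep) - V_{id,k} |,
    \qquad aep^{*} = \arg\max_{aep} CLU(aep)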
Parameter | Assessment
MUCAPS Status (variable: G1, G2, G3, G4) |
UEP access (variable: WiFi, LAN, 3G) |
Video Resolution (variable: HR, MR, LR) |
4.3 Summary
Summarizing the detailed presentation of the metrics evaluated in the experiments for the OFS and the EFS in Sections 4.2.1-4.2.5, these metrics can be divided into three dimensions:
- economic metrics,
- performance metrics,
- user experience metrics.
Many performance metrics have a direct influence on energy efficiency, e.g., cache hit rate or bandwidth. The higher the cache hit rate, the fewer requests have to be performed over the Internet, thus saving energy. The same applies to bandwidth and several other metrics: if bandwidth can be saved, then energy will be saved as well. Energy efficiency was evaluated indirectly in the other scenarios. It is important to mention that the other two metric dimensions (economics and user experience) may be counterproductive with respect to energy efficiency, as the cheapest link may not be the most energy efficient link.
Furthermore, parameters were already divided into the following categories:
- static parameters,
- variable parameters.
From the experiments, it can be observed that experiments with a simulative character have more variable parameters. For example, EFS#2 has only static parameters, as the system was implemented with real devices and real users; similarly, OFS has only one variable parameter. Parameters such as the social prediction interval or the cache size were fixed, because only one large-scale test run could be performed. EFS#1 and EFS#4, on the other hand, have a simulative character, and more variable parameters were evaluated.
Privacy and security were not evaluated in the experiments and were not part of any metric or parameter. However, e.g., for the large-scale experiments, certificates were used to ensure that data is securely transferred. Thus, although security was not directly evaluated in a quantitative manner, it was considered qualitatively in the experiments and actually provisioned in some of them. Additionally, mechanisms such as caching on uNaDas do not encrypt content, raising privacy issues, as the uNaDa owner may see what content his or her friends consume. On the other hand, prefetching to a friend's phone hides this information, as the friend may or may not consume the content in the end. A more elaborate analysis of the privacy and security issues applicable to SmartenIT mechanisms is future work.
Many of the parameters and metrics that were reported in D2.2 [5], D2.3 [6], and D4.2 [14] were evaluated in specific scenarios; for those parameters and metrics not evaluated, detailed reasons were provided why they were not involved. Future work should include the evaluation of EFS and OFS in different scenarios. For instance, the uNaDa caching mechanisms can also be applicable to mobile/fixed edge computing.
Table 14: SmartenIT metric categories mapping

Metric categories | Experiment
Economic metrics | MONA, ICC
Performance metrics | DTM, RB-HORST
User experience metrics | MUCAPS
Thus, the metrics chosen for all the experiments show that all three metric dimensions - economic metrics, performance metrics, and user experience metrics - are represented and evaluated accordingly, as shown in Table 14. For the parameters, a mix of variable and static parameters was chosen. Depending on the type of experiment, more parameters could be evaluated (simulative experiments), while for other experiments many parameters had to be fixed (real-world experiments). The relation between those is shown in Figure 5.
Experiment ID | Experiment name | Testbed owner | Experimenter | Supporting partners
OFS | OFS#1.1.1 (S-to-S volume) | IRT | IRT | AGH, PSNC
OFS | | AGH | AGH | PSNC
OFS | | IRT | IRT | AGH, PSNC
OFS | | AGH | AGH | PSNC
OFS | | AGH | AGH | PSNC
OFS | | PSNC | PSNC | AGH
OFS | | PSNC | PSNC | PSNC
EFS | EFS#1 | TUD (testbed), UZH (uNaDa) | UZH, UniWue | TUD
EFS | EFS#2 | TUD, UZH | TUD, UZH | All
EFS | EFS#3 | TUD, UZH | TUD | UZH
EFS | EFS#4 (MUCAPS) | ALBLF | ALBLF | TUD
of view), while the second is a performance test (where the traffic management mechanism to be investigated is evaluated, according to the actual deployment, from a quantitative point of view with large-scale, stress, and load test activities).
4. Details regarding the testbed where the single experiments will run, the testbed owner, the experimenter, as well as the supporting partners for the experiments have been provided.
All experiments have been described using a common Experiment report card template, which has been agreed among the project partners and is intended to provide a coherent and comprehensive way to collect and present the results. The Experiment report card plays a central role in the SmartenIT overall performance assessment phase. This template is provided in Table 16 and includes dedicated entries such as:
- the reference scenario and a description of the use case which generated the actual experiment/test card,
- the experiment identifier.
Experiment report cards are followed by plain-text descriptions of the results achieved, relevant plots and figures, discussion of the results, conclusions, as well as future research.
Table 16: Experiment Report Card Template

Field | Description
Scenario |
Use case |
Experiment Id |
Goal |
TM mechanism |
Experiment type |
Experiment setup (architecture, parameters) |
Recorded data and raw data format |
Measured metrics and evaluation methodology |
5.1 OFS experiments

The preparations for the OFS experiments included:
- a single instance at IRT premises for experiments with Single-to-Single domain (S-to-S) topology (Figure 6) and volume based tariff,
- two instances at AGH premises for experiments with S-to-S topology with volume based as well as 95th percentile based tariff,
- the definition of performance metrics and KPIs (separately for the volume based tariff and the 95th percentile based tariff).
[Figure 6: Logical testbed topology for S-to-S experiments: data centers DC-A (Cloud A) and DC-B (Cloud B), domains AS1, AS2 and AS3 (router DA-1), border routers BG-1.1 and BG-1.2, SBoxes, an SDN controller, and traffic generators (senders) and receivers in each domain. Legend: BGP router, intra-domain router, inter-domain link, intra-domain link.]
[Figure 7: Logical testbed topology for M-to-M experiments: Clouds A-D with data centers DC-A to DC-D, domains AS1-AS5 (routers DA-1 and DA-4, border routers BG-1.1, BG-1.2, BG-4.1 and BG-4.2), SBoxes and SDN controllers, DC traffic generators, traffic generators GA1, GA2, GC1, GC2 and traffic receivers RA1, RA2, RC1, RC2.]
The experiments are distinguished by:
- basic topology (stemming from the number of clouds/DCs sending or receiving the manageable traffic), namely S-to-S (Figure 6) and M-to-M (Figure 7) experiments,
- type of tariff used for billing the traffic on inter-domain links, namely volume based and 95th percentile based,
- short billing period: usually 30 minutes for the volume based tariff and 500 minutes for the 95th percentile based tariff (in order to collect exactly 100 5-minute samples within a billing period).
In performance tests, each traffic source generated traffic with a daily envelope. The traffic envelope shape is realistic: it is derived from the average traffic in bits per second flowing through the DE-CIX Internet exchange [32] from January 18/19, 2015 midnight CET to January 19/20, 2015 midnight CET. The length of the billing period was selected in two ways:
- 1-day long billing period - chosen if a quick evaluation of the influence of some parameter settings was needed. Additionally, in the case of the 95th percentile tariff, tests with a 1-day long billing period were used as a preparatory step for experiments with a 7-day long billing period. This was needed since the system behavior and the system operating point are difficult to predict. Thus, settings for the 7-day long billing period were first validated and tuned with tests with a 1-day billing period. In contrast, in the case of the volume based tariff, the system operating point can be quite easily pre-calculated and such a preparatory run was not needed. The amount of non-manageable traffic on both links as well as the amount of overall manageable traffic sent between DCs in the billing period can be calculated from the traffic generators' settings.
- 7-day long billing period - chosen for the main performance test runs. Instead of using a realistic 1-month billing period we assumed 7 days. A single experiment with a 1-month billing period would need to run for at least 3 months to collect usable results for 2 billing periods (the first billing period is an experiment warm-up period and its measurements are useless). The performed experiments showed that a 7-day billing period is long enough to obtain reliable results due to the averaging effect. The amount of traffic in the case of the volume based tariff is large enough, and lengthening the billing period does not improve the accuracy. The experiments with 95th percentile based charging are significantly more sensitive to the length of the billing period. A too short billing period results in the collection of few 5-minute samples, and the obtained distribution of samples might be unrealistic, especially influencing the calculation of the percentile value. The number of 5-minute samples collected during 7 days equals 2016, which is sufficient to obtain distributions of samples close to realistic. We observed that the value of the calculated percentile converges with the growing number of days (due to the effect of averaging); the improvement after 7 extra days is very small.
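For illustration, a minimal sketch of the 95th percentile computation discussed above (the handling of the fractional sample count is an assumption; conventions differ between ISPs):

import math

def percentile95_sample(samples_bytes):
    """Size of the 5-min sample that the 95th percentile tariff charges:
    sort descending, discard the top 5% of samples, charge the next one."""
    ordered = sorted(samples_bytes, reverse=True)
    discard = math.floor(0.05 * len(ordered))  # 100 samples for a 7-day period
    return ordered[discard]

# 7 days of 5-minute samples: 7 * 24 * 12 = 2016, of which 100 may exceed
# the charged threshold without increasing the bill.
assert 7 * 24 * 12 == 2016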
A separate testbed environment was created for OFS experiment 1.3. It is a new experiment employing DTM++, which was designed after the release of D4.2 [14]. The status of the specification and implementation of DTM++ did not allow experiments to be defined at that time.
The testbed environment for OFS 1.3 is not based on virtualization on a physical server but is built with real network equipment, namely hardware routers and switches. Such an approach is determined by the way the ICC functionality is implemented for DTM++. Namely, the ICC functionality is realized with the use of hierarchical policers that are available on physical routers offered by certain vendors. The specification of the ICC implementation for DTM++ can be found in deliverable D2.5 [8]. More details on the experiment can be found in Section 5.1.3.
5.1.1 OFS#1.1 (S-to-S Volume based charging rule)
The goal of this experiment was to evaluate DTM performance under the assumption that the total volume charging rule and the S-to-S topology are used. The logical topology is presented in Figure 6. The cost of traffic transferred on each inter-domain link (L1 and L2) is calculated at the end of the billing period using the total amount of traffic transferred through a given link during the whole billing period.
The primary goal of the OFS#1.1.x experiments was to evaluate the performance of DTM when the volume based tariff is used. In the first experiments the SDN controller mode used was "reactive with reference". First results showed that DTM is able to distribute manageable traffic among inter-domain links in such a way that the target traffic cost is achieved with a high accuracy. The measured traffic vector followed the reference vector very well during the whole billing period. This motivated the experimenters to check whether satisfactory results could be achieved with different, less complex SDN controller modes. Therefore, the experiments were repeated with the "reactive without reference" mode of the SDN controller.
In both modes, the compensation vector is calculated by the SBox located in the domain that optimizes traffic (domain AS-1). The calculation is done periodically, at an interval defined by a parameter called "report period DTM", which was set to 30 s in all OFS#1.1.x experiments. The compensation vector is calculated using measurements of the current traffic vector and reflects the deviation of the current traffic vector from the reference vector.
In the case of the "reactive with reference" mode, the newly calculated compensation vector is always sent to the remote SBox (in domain AS3), that is, every 30 seconds. The values of the received compensation vector components are used to decide through which tunnel the new flows should be sent. Once a new compensation vector is received, one of the tunnels is selected. Then the amount of traffic sent through this tunnel is measured. If the amount of traffic sent reaches the value of the compensation vector, the SDN controller starts to balance the traffic among the tunnels in the proportion stemming directly from the reference vector components' values.
In the case of the "reactive without reference" mode the mechanism is simplified. The remote SBox does not take into account the value of the compensation vector; only the sign is considered, and it indicates which of the tunnels is to be used. The tunnel used may be switched only when the sign of the new compensation vector component changes. Thus, the SBox calculating the compensation vector does not need to send it if the sign remains unchanged after the calculation. Therefore, the new compensation vector is calculated every 30 s ("report period DTM") but sent only if the sign has changed (and, in releases 3.0 and 3.1, some preconfigured threshold is exceeded), that is, when the tunnel used must be switched. Additionally, according to the specification, the compensation vector is always sent (regardless of the sign) when the time defined by the parameter "compensation period" elapses (in the experiment it was set to 5 minutes).
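The suppression rule just described can be summarized in a few lines (an illustrative sketch, not the SBox implementation; the additional preconfigured threshold of releases 3.0 and 3.1 is omitted):

class CompensationSender:
    """'Reactive without reference' update rule: the compensation vector is
    recomputed every report period (30 s) but sent to the remote SBox only
    when a component changes sign (a tunnel switch is needed) or when the
    compensation period (5 min) has elapsed since the last update."""

    def __init__(self, compensation_period_s: float = 300.0):
        self.compensation_period_s = compensation_period_s
        self.last_sent = None          # last vector actually transmitted
        self.last_sent_time = float("-inf")

    def maybe_send(self, vector, now: float) -> bool:
        sign_changed = self.last_sent is not None and any(
            (a < 0) != (b < 0) for a, b in zip(vector, self.last_sent))
        period_elapsed = now - self.last_sent_time >= self.compensation_period_s
        if self.last_sent is None or sign_changed or period_elapsed:
            self.last_sent = list(vector)
            self.last_sent_time = now
            return True   # transmit the update to the remote SBox
        return False      # suppress: sign unchanged, period not elapsed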
The benefits of using the "reactive without reference" mode are as follows:
- lower complexity of the functionality of the remote SBox and SDN controller (no need to measure the traffic sent over the tunnels).
One of the experiment goals was to evaluate whether DTM performance is degraded when "reactive without reference" is used.
The last SDN controller mode tested was "proactive without reference". In the "reactive" modes the new selection of a tunnel (link) is applied to newly generated flows only (active flows are still sent over the link selected when they were created). In turn, in the "proactive" modes, if an arriving compensation vector indicates a change of the tunnel used, all flows (including active ones) are immediately switched to the newly selected tunnel. The advantage of such a solution is that the SDN controller does not need to maintain a large table of flows (only one rule for all manageable traffic is sufficient); thus it offers better scalability. Another benefit of the "proactive" mode may arise if long-lasting flows dominate the manageable traffic profile and new flows are generated relatively rarely. The "proactive" mode offers a faster reaction to the need to compensate the traffic than the "reactive" mode. However, this is mostly important if the 95th percentile tariff is used (this will be elaborated more in the OFS#1.2 experiment description). The drawback of this mode is that packets may arrive at the destination out of order if end-to-end delays are different on the two paths. The goal of those experiments was to evaluate the performance of DTM under the "proactive" mode.
Field | Description
Scenario |
Use case |
Experiment Id | OFS#1.1
Goal | To evaluate the performance of DTM when the tariff used by the ISPs is based on total traffic volume. To compare performance for different settings of the SDN controller mode.
TM mechanism | DTM
Experiment type | Testbed
Experiment setup |
Topology: Figure 6
Tariff: volume based
Cost functions for links L1 and L2: defined as functions of the total amount of traffic x (in bytes) transferred over each link during the billing period
Compensation period: 30 s
Traffic generator settings:

Background on L1:
- Flow inter-arrival-time: exponential (mean 109.5 ms)
- Flow length: Pareto (scale 3400, shape 1.5)
- Packet inter-arrival-time: exponential (mean 35 ms)
- Packet payload size: normal mixture ((1358, 25) with probability 0.6, (158, 20) otherwise)

Background on L2:
- Flow inter-arrival-time: exponential (mean 218.5 ms)
- Flow length: Pareto (scale 3400, shape 1.5)
- Packet inter-arrival-time: exponential (mean 35 ms)
- Packet payload size: normal mixture ((1358, 25) with probability 0.6, (158, 20) otherwise)

Manageable DC-DC:
- Flow inter-arrival-time: exponential (mean 118 ms)
- Flow length: exponential (mean 2400 ms)
- Packet inter-arrival-time: exponential (mean 35 ms)
- Packet payload size: normal mixture ((1358, 25) with probability 0.95, (158, 20) otherwise)

(For the reactive with reference mode (OFS#1.1.1) the above values are equal.)

Recorded data and raw data format | Vector values at the end of each billing period; refer to D4.2 [14] Table 3 for more details.
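For illustration, the following sketch samples one background flow on L1 from the generator settings above; reading the parameter pairs as the mean of each exponential, (scale, shape) of the Pareto, and (location, scale) of the normal mixture components is an assumption about the table layout.

import random

def sample_payload(p_large=0.6):
    """Normal mixture: (1358, 25) with probability p_large, else (158, 20)."""
    if random.random() < p_large:
        return max(0.0, random.gauss(1358, 25))
    return max(0.0, random.gauss(158, 20))

def sample_background_flow_l1():
    inter_arrival_ms = random.expovariate(1 / 109.5)  # exponential, mean 109.5 ms
    flow_length = 3400 * random.paretovariate(1.5)    # Pareto, scale 3400, shape 1.5
    pkt_gap_ms = random.expovariate(1 / 35.0)         # exponential, mean 35 ms
    return inter_arrival_ms, flow_length, pkt_gap_ms, sample_payload()

print(sample_background_flow_l1())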
Measured metrics and evaluation methodology
The DTM performance for the volume based tariff was verified under three different settings of the SDN controller mode. For each mode, the performance of DTM was observed over a few billing periods. The obtained results are very similar. In all cases, the measured traffic vector met the reference vector with a high accuracy and, consequently, the cost of transit traffic was very close to optimal (expected); the KPI is close to 1 (Table 18). The small difference between the expected and achieved cost (below 1%) stems from the statistical features of the traffic pattern: the total amount of traffic transferred may vary between billing periods. In the presented results the traffic volume in the observed billing period was a bit lower than in the previous one, i.e., the one used to calculate the reference vector. In other billing periods (not presented here) we also observed slightly higher traffic volumes in the current versus the previous billing period.
The absolute costs for transit traffic presented in Table 17 as well as the KPIs (Table 18) show that for each SDN controller mode not only was the expected traffic cost achieved, but it was also lower than for the possible static routing solutions. The ISP using DTM saves costs: if DTM was not enabled and link 1 or link 2 was used for transferring all manageable traffic (as a static BGP path), the cost would be higher.
Table 17: Total traffic costs achieved with DTM and expected without DTM obtained for various SDN controller modes (OFS#1.1)

Cost | Reactive with reference | Reactive without reference | Proactive without reference
Expected cost | 3228.5 | 3347.9 | 3298.6
Cost achieved with DTM | 3211.9 | 3312.0 | 3269.7
 | 3203.8 | 3294.6 | 3254.9
Cost without DTM (all manageable traffic on link 1) | 3451.3 | 3570.1 | 3503.1
Cost without DTM (all manageable traffic on link 2) | 3903.8 | 3968.0 | 3956.1
Table 18: KPI values for DTM run with various SDN controller modes (OFS#1.1)

KPI | Reactive with reference | Reactive without reference | Proactive without reference
Achieved cost / cost without DTM (link 1) | 0.9307 | 0.9277 | 0.9333
Achieved cost / cost without DTM (link 2) | 0.8228 | 0.8347 | 0.8264
Absolute cost benefit vs. link 1 | -239.31 | -258.13 | -233.82
Absolute cost benefit vs. link 2 | -691.86 | -656.03 | -686.85
Achieved cost / expected cost | 0.99487 | 0.98926 | 0.99111
Also, the plots for all SDN controller modes are very similar, so it was decided to present figures showing traffic patterns and traffic growth on a cost map for the "reactive without reference" mode only (Figure 8 and Figure 9). It can be observed that the manageable traffic is distributed between the two links. During the periods when the traffic volume on both links is very low, the whole manageable traffic is sent over link 1. Since there was not enough manageable traffic to be shifted, the measured traffic vector diverges from the reference vector over some time. However, this is compensated later, and the current traffic vector again converges with the reference vector (the periods when this effect is most apparent are indicated by a green circle in Figure 8 and Figure 9). This temporal discrepancy is, however, very low, and one can notice that the manageable traffic is distributed in such a way that the measured current traffic vector follows the direction of the reference vector very well and converges at the optimal point at the end of the billing period (Figure 8). The values of the compensation vector are high during such a period, indicating the amount of manageable traffic that needs to be shifted to converge to the direction of the reference vector (Figure 10).
When the traffic is growing on both links, the manageable traffic is distributed among both links. Around the end of the daily peak period there is too much background traffic on link 1, so DTM decides to send most of the manageable traffic over link 2, but the traffic is still distributed over both links. In that period, the compensation vector oscillates around 0 (Figure 10).
Figure 8: Traffic growth during the billing period and a cost map. SDN controller mode:
"reactive with reference". (OFS#1.1)
Figure 9: Traffic patterns on link 1 and 2 during the billing period. SDN controller mode:
"reactive with reference". (OFS#1.1)
Figure 10: Compensation vector values received by remote SBox (AS3 domain). SDN
controller mode: "reactive with reference". (OFS#1.1)
Figure 11: Compensation vector values received by remote SBox (AS3 domain). SDN
controller mode: "reactive without reference". (OFS#1.1)
The difference between the operation of DTM in the modes with and without reference appears in the values of the compensation vector (Figure 10 and Figure 11). In the "with reference" mode the updates of the compensation vector are sent regularly and very often (every 30 s in this experiment). Thus the compensation is corrected every 30 seconds and is very accurate. In the "without reference" mode the compensation vector is updated when its value changes sign (there is a need to switch to another link) or after 5 minutes. The visible effect is that the oscillations of the compensation vector values are higher, since as a result of the less frequent updates the mechanism may slightly overcompensate the traffic before a new update arrives. However, we do not observe any negative effect of this on the cost achieved. In turn, the number of vector updates during the billing period is significantly lower for the "without reference" modes (Table 19). Therefore, from the overhead and scalability perspective it is recommended to use an SDN controller mode of the "without reference" type.
Table 19: The number of compensation vector updates during the billing period (OFS#1.1)

Reactive with reference: 20160
Reactive without reference: 9154
Proactive without reference: 9413
Finally, considering the proactive vs. reactive mode for the volume based tariff, looking at DTM performance and KPIs we did not observe any significant indicator for choosing one of them over the others; the results are similar. However, since the "proactive" mode affects active flows (in contrast to the "reactive" mode, which chooses a link only for new flows), which may result in out-of-order packet delivery, we recommend using the "reactive" mode for the volume based tariff.
Conclusion: To sum up, the OFS#1.1 experiments proved that DTM performs well and is capable of decreasing the ISP's costs of transit traffic when the volume based tariff is used. The KPIs calculated for the experiments presented in this deliverable indicate a potential for inter-domain traffic cost reduction of 7% to 18%. Similar values were obtained for most of the experiments done (including those not presented in this deliverable); benefits of up to 30% were also observed. However, the actual ISP's benefits depend on the traffic volumes on the inter-domain links, the static routing configurations, and the cost functions on the links. The number of experiments and DTM configurations tested allowed recommendations to be defined for choosing DTM configuration settings, especially the choice of the SDN controller mode.
5.1.2 OFS#1.2 (S-to-S 95th percentile based tariff)
In this experiment DTM performance was evaluated under the assumption of single-to-single topology and the 95th percentile charging rule. Unlike in the case of the volume based tariff, the time to react to traffic bursts and to compensate for them by balancing the traffic among inter-domain links is very short. As shown in the OFS#1.1 experiments, for the volume based tariff DTM has the whole billing period to compensate for traffic bursts and the irregular character of the traffic: the traffic on both links is summed up over a long period of time, i.e., the whole billing period (7 days in the experiment). In the case of the 95th percentile tariff, each 5-minute sample must be treated separately, since each sample might potentially become one of the 5% highest. Therefore, for a given 5-minute long period, the system has only 5 minutes to compensate undesired traffic growth and distribute the traffic among the links in such a way that the sizes of both 5-minute samples (collected on the two inter-domain links at the same time) are below the thresholds stemming from the reference vector. The goal of the experiment was to evaluate the performance of DTM when the 95th percentile tariff is used by the ISP and to evaluate the potential benefits for the ISP. The detailed experiment definition can be found in D4.2 [14] Section 3.1.1.
Field | Description
Scenario |
Use case |
Experiment Id | OFS#1.2
Goal | To evaluate the performance of DTM when the 95th percentile tariff is used by the ISP
TM mechanism | DTM
Experiment type | Testbed
Experiment setup | Topology: Figure 6; Tariff: 95th percentile
Cost functions for links L1 and L2: defined as functions of the amount of traffic x (in bytes); at the end of the billing period, x takes a value equal to the size of the 5-min sample calculated using the 95th percentile rule.
Background on L1:
- Flow inter-arrival-time: exponential (mean 109.5 ms)
- Flow length: Pareto (scale 3400, shape 1.5)
- Packet inter-arrival-time: exponential (mean 35 ms)
- Packet payload size: normal mixture ((1358, 25) with probability 0.6, (158, 20) otherwise)

Background on L2:
- Flow inter-arrival-time: exponential (mean 218.5 ms)
- Flow length: Pareto (scale 3400, shape 1.5)
- Packet inter-arrival-time: exponential (mean 35 ms)
- Packet payload size: normal mixture ((1358, 25) with probability 0.6, (158, 20) otherwise)

Manageable DC-DC:
- Flow inter-arrival-time: exponential (mean 118 ms)
- Flow length: exponential (mean 2400 ms)
- Packet inter-arrival-time: exponential (mean 35 ms)
- Packet payload size: normal mixture ((1358, 25) with probability 0.95, (158, 20) otherwise)
Recorded data and raw data format
Measured metrics and evaluation methodology
The experiments were run with a 7-day long billing period. The number of 5-min samples collected during such a billing period equals 2016. Therefore, 5% of them (i.e., 100 samples) are allowed to exceed the 95th percentile threshold. The rest of the samples must be below the threshold stemming from the reference vector to achieve the expected traffic cost. As the experiments' results show, DTM was able to achieve this goal. Table 20 shows the achieved costs and KPI values. The desired cost is achieved very accurately. On link 1 the achieved 95th percentile threshold was slightly higher than the respective component of the reference vector, while on link 2 it is a bit lower than the respective reference value (see Figure 13 showing the sizes of samples collected in time order). Therefore the cost of inter-domain traffic on link 1 is a bit higher than expected, while on link 2 it is lower than expected. The resulting total cost is less than 1% higher than desired by the ISP (Table 20).
Table 20: Inter-domain traffic costs and KPI values for DTM (OFS#1.2)

Cost | KPIs
3311.0 | 0.9520
3366.7 | 0.7117
3362.0 | -169.79
3536.6 | -1363.54
4730.3 | 1.008
The principle of DTM operation for the 95th percentile based tariff is based on modifying sample sizes by shifting manageable traffic between the inter-domain links. As a result, the distribution of samples is changed. Figure 13 shows the actual distribution of 5-minute samples with DTM compared to the distribution of samples that would be obtained if DTM was not used and link 1 or link 2 was used as a part of a static BGP path between the communicating data centers DC-B and DC-A: source and receiver of manageable traffic, respectively (cf. Figure 6). It is crucial to influence the highest samples, since they may potentially fall into the set of the 5% highest samples, thus increasing the cost on the link.
Figure 12: 5-min samples observed on inter-domain links during a single 7-day long billing
period (OFS#1.2)
Figure 13: Distributions of 5-min samples collected during a single 7-day long billing period
(OFS#1.2)
In Figure 14 we present the distribution of samples in a two-dimensional space with a cost map in the background. Each dot in the figure represents a pair of samples collected on the two inter-domain links at the same time. It can be easily noticed that when DTM is used, the sample pairs are condensed around the direction of the reference vector (blue dots). Orange and red dots show the distribution of sample pairs that would have been obtained if DTM had not been used and all manageable traffic had been sent over link 1 or link 2, respectively.
Figure 14: The distribution of 5-min sample pairs on a cost map. (OFS#1.2)
During the set of experiment runs it was noticed that the accuracy of DTM traffic management increases when the assumed billing period is longer and, consequently, the number of 5-min samples collected increases (we started with a 1-day long billing period, then 3 days, and finished with 7 days). The more samples in a billing period, the better the convergence to the reference vector. Observing this tendency, we believe that DTM will perform even better for a realistic billing period of 1 month.
Additional experiment to evaluate DTM performance under an "unfriendly" traffic pattern
As discussed before, it is more challenging for DTM to manage and optimally distribute the traffic when the 95th percentile tariff is used, since the size of each 5-min sample should be kept below the threshold and DTM has only those 5 minutes to compensate an undesired traffic distribution. If the manageable traffic pattern consists of many short flows generated frequently, the reactive mode can quite easily perform traffic compensation. There are enough new flows that can be shifted to the currently chosen tunnel. At the same time, flows remaining on the previously chosen tunnel end quickly.
We decided to consider a traffic pattern consisting of many relatively long flows (as compared to the length of a sample, i.e., 5 minutes) with long inter-arrival times. Such traffic conditions are difficult for the "reactive" mode. In such a case the "proactive" mode should help, since it makes it possible to switch active flows between tunnels.
A short experiment with traffic patterns consisting of long flows with large inter-arrival times was prepared. The traffic generator was set up as presented in Table 21. The mean flow length is then 162.5 seconds.
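Reading the Table 21 parameters as the mu and sigma of the underlying normal distribution (in milliseconds), this mean follows from the lognormal mean formula:

    E[L] = e^{\mu + \sigma^{2}/2} = e^{11.5474 + 0.94975^{2}/2} \approx 1.625 \cdot 10^{5}\ \mathrm{ms} \approx 162.5\ \mathrm{s}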
Two experiment runs were launched: with the "reactive" and the "proactive" mode. Due to time limitations, those experiment runs were performed with a 1-day long billing period.
Table 21: Manageable traffic generator setting for generation of long flows

Manageable DC-DC | Distribution | Distribution parameters
Flow inter-arrival-time | exponential | mean 8000 ms
Flow length | lognormal | mu = 11.5474, sigma = 0.94975
Packet inter-arrival-time | exponential | mean 35 ms
Packet payload size | normal mixture | (1358, 25) with probability 0.95, (158, 20) otherwise
The obtained results for the "reactive" and "proactive" modes are presented in Figure 15 and Figure 16, respectively. When the proactive mode is used, the 5-minute sample pairs are very well condensed around the reference vector. In the case of the "reactive" mode they are more spread, and clearly the achieved actual 95th percentile threshold does not meet the reference vector. This is because active flows are in fact non-shiftable in the "reactive" mode. Therefore, the amount of manageable traffic that can be effectively managed is in practice lower than the nominal amount of manageable traffic. Finally, the "proactive" mode performs better than the "reactive" one and is able to manage a traffic pattern consisting of long flows more effectively.
Conclusion: In the main experiment presented above the inter-domain traffic cost reduction is about 5% or 29%, depending on which inter-domain link is considered to be the default BGP path for manageable traffic (see KPIs in Table 20). Typical values of the cost reduction (observed in various experiments not reported here due to space limitations) varied between 8 and 15%. Similarly to the OFS#1.1 experiment, the actual cost saving will depend on the amount of traffic on the inter-domain links, the share of manageable traffic, the cost functions, and also on the manageable traffic patterns. An ISP deploying DTM for 95th percentile charging needs to be more careful with choosing DTM settings to obtain higher benefits. Especially, the choice of the SDN controller mode should take into account the pattern of manageable traffic: the distribution of flow lengths and flow inter-arrival times.
Figure 15: The distribution of 5-min sample pairs - long flows, reactive mode
Figure 16: The distribution of 5-min sample pairs - long flows, proactive mode
Field: Description
Scenario:
Use case:
Experiment Id: OFS#1.3
Goal:
TM mechanism: DTM++
Experiment type: Testbed
Experiment setup: Cost functions for links L1 and L2 — piecewise-linear functions of x
(coefficients not reproduced here), where x is the amount of traffic (in bytes). At the end
of the billing period x takes a value equal to the size of the 5-min sample calculated using
the 95th percentile rule.
Background on L1
Flow inter-arrival-time     exponential       1; 109.5 [ms]
Flow length                 Pareto            3400; 1.5
Packet inter-arrival-time   exponential       1; 35 [ms]
Packet payload size         normal mixture    with probability 0.6: 1358, 25; otherwise: 158, 20

Background on L2
Flow inter-arrival-time     exponential       1; 218.5 [ms]
Flow length                 Pareto            3400; 1.5
Packet inter-arrival-time   exponential       1; 35 [ms]
Packet payload size         normal mixture    with probability 0.6: 1358, 25; otherwise: 158, 20

Manageable DC-DC, delay sensitive
Flow inter-arrival-time     exponential       1; 118 [ms]
Flow length                 exponential       2400 [ms]
Packet inter-arrival-time   exponential       1; 35 [ms]
Packet payload size         normal mixture    with probability 0.95: 1358, 25; otherwise: 158, 20
Recorded data and raw data format
Measured metrics and evaluation methodology
The results of DTM++ operation are presented in Figure 17 and Figure 18. The former
shows 30-second samples of the mean traffic rate, while the latter shows the sizes of the
respective 5-min samples. In both figures, the patterns of the three types of traffic are
distinguished. It can be noticed that manageable traffic (both sensitive and tolerant) is
distributed between the two links. This is the result of plain DTM operation. Additionally,
on both links a limitation of delay tolerant traffic can be observed. It is best visible on link
L1 (Figure 17). When the aggregate traffic rate on link L1 reaches the threshold stemming
from the reference vector component (around the 12-hour mark), the tolerant traffic starts
to be limited to prevent the mean total throughput from exceeding the threshold. This is
seen by observing that the curves of the total traffic and of the background+sensitive
traffic almost coincide; thus, only delay tolerant traffic is affected. This operation is
realized by a hierarchical policer. The aggregate throughput of background and delay
sensitive traffic is not limited. In order to avoid killing TCP flows, the filter is configured
such that it allows transferring small amounts of delay tolerant traffic even in the peak
period: the link capacity available for tolerant traffic is not limited to zero, but the tolerant
traffic is allowed to be transferred at a rate no greater than some preconfigured
percentage of the threshold value. In the experiment this was set to 2%.
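The policing rule can be sketched as follows (a minimal Python sketch of the idea; the
function name and interface are illustrative, not the prototype's actual code):

def tolerant_rate_limit(threshold_bps, priority_rate_bps, floor_fraction=0.02):
    # Hierarchical-policer idea in DTM++: delay-tolerant traffic may use
    # whatever capacity is left below the 95th-percentile threshold, but
    # is never throttled below a small floor (2% of the threshold) so
    # that TCP flows survive peak periods.
    # threshold_bps     -- rate corresponding to the reference-vector component
    # priority_rate_bps -- current background + delay-sensitive rate (not policed)
    leftover = threshold_bps - priority_rate_bps
    floor = floor_fraction * threshold_bps
    return max(leftover, floor)

# Example: threshold 10 Mbit/s, priority traffic currently at 9.9 Mbit/s
# -> tolerant traffic is still granted 0.2 Mbit/s instead of 0.1 Mbit/s.
print(tolerant_rate_limit(10e6, 9.9e6))  # 200000.0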
When the peak period ends, the link capacity available for tolerant traffic increases again.
Then TCP sources increase their transfer rate and flush the data accumulated in buffers.
As a result, the generated throughput is for some time higher than 2.2 Mbit/s (the rate is
not limited by the TCP source or the access link, but by the application).
Figure 17: Traffic patterns on links L1 and L2 during the billing period (OFS#1.3)
Figure 18 shows the corresponding values of the 5-min samples. The effect of limiting
delay tolerant traffic can also be noticed, and it can also be seen that the 95th percentile
threshold achieved in this billing period is below the reference vector.

Figure 18: 5-min samples observed on inter-domain links during a single 1-day long billing
period (OFS#1.3)

The effect of limiting tolerant traffic during peak periods is clear from the above
considerations. Additionally, a similar experiment with the hierarchical policer switched off
was performed to visualize how the traffic would be distributed if just DTM were used
(instead of DTM++). The results are shown in Figure 19. The lack of limitation of tolerant
traffic during the peak period is clearly visible. Additionally, the reference vector values in
the case of DTM are higher than for DTM++ ((735.9, 340) [MB] and (700, 300.9) [MB],
respectively). It might be expected that the total costs for inter-domain traffic achieved
with DTM++ are lower than for DTM. However, due to the character of the experiments,
those results can be assessed only roughly.
Figure 19: 5-min samples observed on inter-domain links during a single 1-day long billing
period when hierarchical policer is inactive; only DTM operates (OFS#1.3)
Conclusion: A qualitative assessment of the presented results of these simple experiments
shows that DTM++ has the potential to better control an ISP's costs for inter-domain
traffic and thus to achieve further cost reductions.
Description
Scenario
Use case
Experiment
Id
OFS#2.1
Goal
To evaluate the performance of DTM in a multi-ISP environment when the tariff used by
the ISPs is based on total traffic volume.
TM
mechanism
DTM
Experiment
type
Testbed
Experiment
setup
Domain AS1 — cost functions for links LA1 and LA2: piecewise-linear functions of the
traffic volume x (coefficients not reproduced here).
Domain AS4 — cost functions for links LC1 and LC2: piecewise-linear functions of the
traffic volume x (coefficients not reproduced here).
Background traffic generators: all four background sources share flow length Pareto
(3400; 1.5), packet inter-arrival-time exponential (1; 35 [ms]) and packet payload size
normal mixture (with probability 0.6: 1358, 25; otherwise: 158, 20); they differ in the flow
inter-arrival-time (exponential):
Background on LA1:   1; 91 [ms]
Background on LA2:   1; 182 [ms]
Background on LC1:   1; 146 [ms]
Background on LC2:   1; 210 [ms]

Manageable traffic generators: four manageable flows, each with flow length exponential
(2400 [ms]), packet inter-arrival-time exponential (1; 35 [ms]) and packet payload size
normal mixture (with probability 0.95: 1358, 25; otherwise: 158, 20); the flow
inter-arrival-times are exponential with means 113.5, 738, 282.5 and 207.5 [ms].

Recorded data and raw data format
Traffic samples on each of the 4 inter-domain links (LA1, LA2, LC1, LC2) every 30 seconds
Compensation vector values in domain AS1 — every 30 seconds
Reference vectors' values (in domains AS1 and AS4) at the end of each billing period
Refer to D4.2 [14], Table 8, for more details.
Measured metrics and evaluation methodology
For domain AS1, the cost achieved with DTM is compared against four static routing cases:
traffic from DC-B and from DC-D via LA1
traffic from DC-B and from DC-D via LA2
traffic from DC-B via LA2, traffic from DC-D via LA1
traffic from DC-B via LA1, traffic from DC-D via LA2
Analogously for domain AS4 with links LC1 and LC2.
In this experiment there were two independent autonomous systems (owned by different
ISPs) that run DTM, namely domains AS1 and AS4 (cf. Figure 7). In domain AS1
datacenter DC-A is located; in AS4 there is datacenter DC-C. Both datacenters receive
traffic from two sources, DC-B and DC-D, located in two remote domains, AS3 and AS5,
respectively. The complexity of this scenario lies in the following:
The ISPs managing inbound traffic (AS1 or AS4) must rely on the cooperation of two
remote ISPs, and have to manage manageable traffic generated by two independent
and uncorrelated sources located in two distinct domains.
The ISP in whose domain the source of the traffic is located (AS3 or AS5) receives
reference and compensation vectors from two independent domains running DTM
(AS1 and AS4). The values of those vectors are not correlated. Traffic from a
datacenter must be distributed among 4 tunnels based on the vectors received, and
requests from both AS1 and AS4 must be served simultaneously.
The goal of this experiment was to evaluate whether DTM is able to perform effectively
under such conditions.
To assess the benefits of using DTM, we need to consider four possible configurations of
static BGP paths (separately for AS1 and AS4). For instance, AS1 has two inter-domain
links (LA1 and LA2) and two remote traffic sources (DC-B and DC-D). Four static cases
might be considered:
static paths selected by BGP are such that manageable traffic from both
datacenters passes link LA1
static paths selected by BGP are such that manageable traffic from both
datacenters passes link LA2
the static path from AS3 (where DC-B is located) passes link LA2, while the static
path from AS5 (DC-D) passes link LA1
the static path from AS3 (where DC-B is located) passes link LA1, while the static
path from AS5 (DC-D) passes link LA2
Similar considerations apply to AS4. Costs and KPIs achieved with DTM are compared to
the costs that would be generated in each of the four scenarios. Different color lines are
used in Figure 20 and Figure 21 to show the traffic growth for each static case. Similarly,
inter-domain traffic costs are calculated for each case (Table 22 and Table 23). In this
experiment the traffic costs achieved with DTM were lower than in any of the static
routing cases, i.e., DTM deployment results in a cost reduction. These statements are
valid for both domains AS1 and AS4 (see the more detailed analysis below).
Figure 20: Traffic growth during the billing period and a cost map. Domain AS1. (OFS#2.1)
Figure 21: Traffic growth during the billing period and a cost map. Domain AS4. (OFS#2.1)
Table 22: Inter-domain traffic costs in domains AS1 and AS4 (OFS#2.1)

Domain AS1                                                             Cost
Expected (estimated in prev. billing period, reflected by the
reference vector)                                                      3297.4
Achieved with DTM                                                      3243.4
New reference vector (computed for the next billing period)            3217.2
w/o DTM: traffic from DC-B and from DC-D via link LA1                  3496.1
w/o DTM: traffic from DC-B and from DC-D via link LA2                  3948.6
w/o DTM: traffic from DC-B via link LA2, traffic from DC-D via LA1     3670.7
w/o DTM: traffic from DC-B via link LA1, traffic from DC-D via LA2     3377.4

Domain AS4                                                             Cost
Expected (estimated in prev. billing period, reflected by the
reference vector)                                                      3866.7
Achieved with DTM                                                      3902.4
New reference vector (computed for the next billing period)            3895.3
w/o DTM: traffic from DC-B and from DC-D via link LC1                  4223.8
w/o DTM: traffic from DC-B and from DC-D via link LC2                  6141.6
w/o DTM: traffic from DC-B via link LC2, traffic from DC-D via LC1     4234.7
w/o DTM: traffic from DC-B via link LC1, traffic from DC-D via LC2     4689.3

Table 23: KPIs — cost achieved with DTM relative to each static routing case,
with the absolute cost difference in parentheses (OFS#2.1)

                                               Domain AS1        Domain AS4
DTM vs. both flows via LA1 / LC1               0.9277 (-252.6)   0.9240 (-321.4)
DTM vs. both flows via LA2 / LC2               0.8214 (-705.2)   0.6354 (-2239.1)
DTM vs. DC-B via LA2/LC2, DC-D via LA1/LC1     0.8836 (-427.3)   0.9215 (-332.5)
DTM vs. DC-B via LA1/LC1, DC-D via LA2/LC2     0.9603 (-133.9)   0.8322 (-786.9)
Achieved vs. expected cost                     0.9836            1.0093
The cost savings in domain AS1 vary from ~4% to ~18%, depending on which static
routing case the DTM results are compared to. In domain AS4 the cost savings vary
between ~8% and ~36% (Table 23). The highest savings are achieved if the static BGP
paths from both remote domains, AS3 (where DC-B is located) and AS5 (DC-D), are
assumed to pass link LC2.
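These percentages follow directly from the cost ratios; a short Python sketch of the KPI
computation, using the AS4 values from Table 22 (names are illustrative):

# KPI: ratio of the cost achieved with DTM to the cost of a static
# BGP routing case; the saving is 1 - KPI (values from Table 22).
cost_dtm_as4 = 3902.4
static_cases_as4 = {
    "both via LC1": 4223.8,
    "both via LC2": 6141.6,
    "DC-B via LC2, DC-D via LC1": 4234.7,
    "DC-B via LC1, DC-D via LC2": 4689.3,
}
for case, cost in static_cases_as4.items():
    kpi = cost_dtm_as4 / cost
    print(f"{case}: KPI = {kpi:.4f}, saving = {1 - kpi:.1%}")
# "both via LC2" gives the highest saving: 1 - 0.6354 ≈ 36%.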
The accuracy of DTM traffic management was high in both domains. In the presented
billing period the cost achieved in domain AS1 was lower than that stemming from the
reference vector (ratio 0.9836), while in domain AS4 it was a bit higher than expected
(ratio 1.0093). As explained before, such small deviations are normal, since statistically
the amounts of traffic in different billing periods are never identical (the reference vector
used in the current billing period reflects a prediction of traffic calculated using traffic
measurements from the previous billing period).
Conclusion: To sum up, experiment OFS#2.1 proved that DTM performs well in a multi-ISP
topology and offers inter-domain cost reduction. The typical potential benefits for an ISP
deploying DTM are similar to those in the previous experiments. Depending on the traffic
volumes, the cost functions and the static routing configuration taken as a reference, the
cost reduction varies from 5-8% to 15-18%, but even higher savings are possible. The most
important conclusion is that the DTM prototype was able to operate effectively in a
multi-domain scenario. Two ASes were serving as sources of manageable traffic (the
SBoxes in those domains distribute traffic over four tunnels using two reference vectors
received from the two domains receiving the traffic). In turn, each of the two ISPs
optimizing its inter-domain traffic costs had to rely on cooperation with two remote
domains in which the DCs generating the traffic were located.
5.1.5 OFS#2.2 (M-to-M 95% rule)
Field
Description
Scenario
Use case
Experiment
Id
OFS#2.2
Goal
To evaluate the performance of DTM in multi-ISP environment when tariff used by the
ISPs is based on 95th percentile rule.
TM
mechanism
DTM
Experiment
type
Testbed
Experiment
setup
Domain AS1 — cost functions for links LA1 and LA2: piecewise-linear functions of the
95th percentile traffic volume x (coefficients not reproduced here).
Domain AS4 — cost functions for links LC1 and LC2: piecewise-linear functions of the
95th percentile traffic volume x (coefficients not reproduced here).
Background traffic generators: all four background sources share flow length Pareto
(3400; 1.5), packet inter-arrival-time exponential (1; 35 [ms]) and packet payload size
normal mixture (with probability 0.6: 1358, 25; otherwise: 158, 20); they differ in the flow
inter-arrival-time (exponential):
Background on LA1:   1; 91 [ms]
Background on LA2:   1; 182 [ms]
Background on LC1:   1; 146 [ms]
Background on LC2:   1; 210 [ms]
Manageable traffic generators: four manageable flows, each with flow length exponential
(2400 [ms]), packet inter-arrival-time exponential (1; 35 [ms]) and packet payload size
normal mixture (with probability 0.95: 1358, 25; otherwise: 158, 20); the flow
inter-arrival-times are exponential with means 113.5, 738, 282.5 and 207.5 [ms].

Recorded data and raw data format
Traffic samples on each of the 4 inter-domain links (LA1, LA2, LC1, LC2) every 30 seconds
5-minute samples on each of the 4 inter-domain links (LA1, LA2, LC1, LC2), with
distinction of manageable and non-manageable traffic
Compensation vector values in domain AS1 — every 30 seconds
Reference vectors' values (in domains AS1 and AS4) at the end of each billing period
Refer to D4.2 [14], Table 8, for more details.
Measured metrics and evaluation methodology
For domain AS1, the cost achieved with DTM is compared against four static routing cases:
traffic from DC-B and from DC-D via LA1
traffic from DC-B and from DC-D via LA2
traffic from DC-B via LA2, traffic from DC-D via LA1
traffic from DC-B via LA1, traffic from DC-D via LA2
Analogously for domain AS4 with links LC1 and LC2.
Figure 22: The distribution of 5-min sample pairs on a cost map in domain AS1 (OFS#2.2)
Figure 23: The distribution of 5-min sample pairs on a cost map in domain AS4 (OFS#2.2)
Similarly to the OFS#2.1 experiment, four cases of static BGP path configurations were
considered. However, since plots presenting the sample pair distribution on a cost map
would be completely illegible if all four cases were plotted together, we decided to present
only two cases: all manageable traffic passing LA1 or LA2 in AS1 (LC1 or LC2 in AS4,
respectively) — Figure 22 and Figure 23. In turn, inter-domain traffic costs and KPIs are
presented for all cases (Table 24 and Table 25).

The figures showing the distribution of sample pairs are similar to the plot shown for the
OFS#1.2 experiment. Sample pairs are condensed around the reference vector. In both
domains, the achieved 95th percentile thresholds are lower than those determined by the
reference vector. As a result, the achieved costs in both domains are lower than expected
(Table 24). In the presented billing period the achieved-to-expected cost ratios in AS1 and
AS4 are equal to 0.9616 and 0.9707, respectively, i.e., the achieved cost was ~3-4%
lower than expected (as stemming from the reference vector calculated using traffic
measurements from the previous billing period).

Looking at Table 24, one can also notice that the cost achieved with DTM is not only lower
than expected, but also lower than for any static routing case (using the paths selected by
BGP for manageable traffic). This is confirmed by the KPIs presented in Table 25: the
values calculated for each domain and for each possible static routing scenario are all
below 1. The ISP's benefit varies from ~12% to ~22% in domain AS1. In turn, the cost
reduction thanks to using DTM is between 3% and 24% in domain AS4, depending on
which static routing case is considered as the reference. Moreover, the DTM optimization
algorithm, taking the traffic statistics from the previous period, converges to an even better
solution: the cost would be even lower if the traffic distribution were slightly different
(Table 24). It finds a new reference vector for the consecutive billing period; using the new
vector, DTM will try to further lower the traffic costs in the next billing period.
Table 24: Inter-domain traffic costs in domains AS1 and AS4 (OFS#2.2)

Domain AS1                                                             Cost
Expected (estimated in prev. billing period, reflected by the
reference vector)                                                      1144.3
Achieved with DTM                                                      1100.3
New reference vector (computed for the next billing period)            1068.9
w/o DTM: traffic from DC-B and from DC-D via link LA1                  1296.9
w/o DTM: traffic from DC-B and from DC-D via link LA2                  1402.7
w/o DTM: traffic from DC-B via link LA2, traffic from DC-D via LA1     1266.3
w/o DTM: traffic from DC-B via link LA1, traffic from DC-D via LA2     1244.9

Domain AS4                                                             Cost
Expected (estimated in prev. billing period, reflected by the
reference vector)                                                      1149.7
Achieved with DTM                                                      1116.3
New reference vector (computed for the next billing period)            1105.8
w/o DTM: traffic from DC-B and from DC-D via link LC1                  1237.3
w/o DTM: traffic from DC-B and from DC-D via link LC2                  1469.0
w/o DTM: traffic from DC-B via link LC2, traffic from DC-D via LC1     1150.6
w/o DTM: traffic from DC-B via link LC1, traffic from DC-D via LC2     1218.4

Table 25: KPIs — cost achieved with DTM relative to each static routing case,
with the absolute cost difference in parentheses (OFS#2.2)

                                               Domain AS1        Domain AS4
DTM vs. both flows via LA1 / LC1               0.8484 (-196.6)   0.9022 (-121.1)
DTM vs. both flows via LA2 / LC2               0.7844 (-302.4)   0.7599 (-352.8)
DTM vs. DC-B via LA2/LC2, DC-D via LA1/LC1     0.8689 (-165.9)   0.9701 (-34.4)
DTM vs. DC-B via LA1/LC1, DC-D via LA2/LC2     0.8838 (-144.6)   0.9162 (-102.1)
Achieved vs. expected cost                     0.9616            0.9707
Figure 24 presents an interesting phenomenon that may occur when the end of the billing
period is approaching. It shows how DTM reacts to the current traffic measurements in
order to optimally distribute the traffic and achieve the target (reference vector) 95th
percentile thresholds. First, DTM, using the statistics of the collected 5-minute samples,
decides to switch off the compensation on link LA2: the distribution of samples collected
until this moment is such that, regardless of the size of the samples remaining to be
collected until the end of the billing period, the target 95th percentile threshold will not be
exceeded. Therefore, the compensation on link LA2 is switched off and all manageable
traffic is sent over that link to help achieve the target threshold on link LA1. After some
time the same situation occurs on link LA1, and the compensation on that link can also be
switched off. From that moment until the end of the billing period, DTM balances the
traffic among both links.
It may also happen that DTM does not manage to achieve the 95th percentile thresholds
before the end of the billing period. In such a case DTM continues distributing
manageable traffic in order to minimize the excess over the target threshold (reference
vector).
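The switch-off decision can be sketched as follows (a minimal Python sketch of the
described rule; names are illustrative, not taken from the prototype):

def can_stop_compensation(samples_so_far, total_samples, target):
    # True if the target 95th percentile can no longer be exceeded.
    # samples_so_far -- 5-min sample sizes collected so far in this period
    # total_samples  -- number of 5-min samples in the whole billing period
    # target         -- reference-vector component for this link
    allowed_over = int(0.05 * total_samples)  # samples free of charge
    over_so_far = sum(1 for s in samples_so_far if s > target)
    remaining = total_samples - len(samples_so_far)
    # Even if every remaining sample exceeded the target, the billed
    # (95th percentile) sample would still be at or below the target.
    return over_so_far + remaining <= allowed_over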
[Figure 24 plots the 5-min samples on links LA1 and LA2 over the last hours (159 h to
168 h) of a billing period ending 23:59:59 05.10.2015, showing the total traffic, the
background traffic, the reference vector threshold and the actual 95th percentile
threshold, and marking the moments when the compensation ends on link LA2 and then
on link LA1.]
Figure 24: 5-min samples observed on inter-domain links LA1 and LA2 (AS1) at the end of
billing period (OFS#2.2)
Conclusion: To sum up, experiment OFS#2.2 proved that DTM performs well in a multi-ISP
topology and offers inter-domain cost reduction when the 95th percentile rule is used for
charging for inter-domain traffic. This conclusion is confirmed by the quantitative values
presented and discussed above.
5.2 EFS experiments
Field: Description
Scenario: End-user focused
Use case:
Test Card:
Goal:
TM mechanism: RB-HORST
Experiment type: Testbed
Experiment setup — Equipment set:
Home router/end device: hard disk size of home router, up/downlink bandwidth, CPU, RAM
Functionality tests:
Performance tests:
Figure 28: Cache hit rate with varying request time interval
Figure 28 and Figure 29 show that the request interval does not have an influence on the
cache hit rate. The bandwidth also remains stable when simulating with different request
time intervals. This means that time does not have a significant influence on the
functionality of caching in RB-HORST.
Finally, the trace of the large scale study was used to evaluate the performance of caches
of different sizes. For this purpose the simulative evaluation framework was configured
with one cache per user taking part in the study. The cache size in number of items was
varied, and the performance of caches working in an overlay was compared to the case
where each cache serves only the requests of its owner, referred to as the no-overlay
case. To evaluate which replacement strategy works best in the study, we compare the
simple LRU policy to the gated approach k-LRU, with k=1, where the cache is preceded
by one virtual cache that stores only hashes. Upon request, an item is only placed in the
cache if its hash is found in the virtual cache. This prevents the cache from being polluted
with rarely requested items. Figure 30 shows the average hit rate of the caches. As
expected, the cache efficiency increases with the cache size in all cases. An overlay as
deployed in RB-HORST highly increases the efficiency of the caches. The gated cache
replacement performs worse than simple LRU replacement. This is because the requests
in the trace are highly dynamic and follow a daily pattern according to the instructions
given in the study.
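The admission rule of the gated variant can be sketched as follows (a minimal Python
sketch of 1-LRU as described above; this is not the code of the evaluation framework):

from collections import OrderedDict

class GatedLRUCache:
    # 1-LRU: a virtual LRU cache of hashes gates admission to the real
    # cache. An item is stored only on its second recent request, which
    # keeps one-off requests from polluting the cache.
    def __init__(self, size):
        self.size = size
        self.items = OrderedDict()    # real cache: id -> content
        self.hashes = OrderedDict()   # virtual cache: stores ids (hashes) only

    def request(self, item_id, fetch):
        if item_id in self.items:               # cache hit
            self.items.move_to_end(item_id)
            return self.items[item_id], True
        content = fetch(item_id)                # cache miss: fetch remotely
        if item_id in self.hashes:              # seen recently -> admit
            self.items[item_id] = content
            if len(self.items) > self.size:
                self.items.popitem(last=False)  # evict least recently used
        else:                                   # first sight: remember hash only
            self.hashes[item_id] = True
            if len(self.hashes) > self.size:
                self.hashes.popitem(last=False)
        return content, False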
Conclusion: The saved bandwidth with 8 devices in this experiment is around 6.5 GB.
Without the uNaDa, the Ethernet traffic would be more than 8.5 GB, while with the uNaDa
it is only a bit more than 2 GB. The average transferred data remains at around 6 GB,
while the saved bandwidth on average is 4 GB. The evaluation of the large scale study
trace showed that an overlay highly increases the performance of the caches.
5.2.2 EFS#2: Evaluation of RB-HORST large scale study
This experiment has been performed as a large scale study at the premises of UZH,
UniWue, TUD, ICOM, and AUEB. In the following, the test card related to the experiment
is reported.
Field
Description
Scenario
Use case
Test Card
Goal
TM mechanism
RB-HORST
Experiment type
Testbed
Experiment setup
Parameters:
Duration: 4 weeks
uNaDa: overlay neighbors; video request, prefetching and serving events; cache hits and deletes
Smartphone: connected SSID
In the large scale study, 23 active users participated. A limited catalogue of 100 videos
was provided and a central hub node was chosen. To generate a social-network-like
structure with central contributors, all study participants established a friend relationship
with the Facebook profile of the central hub node. During the study, 1005 videos were
requested and the uNaDas were interconnected via the RB-HORST overlay. Figure 31
shows the overlay, visualizing the uNaDas as nodes with their respective MAC addresses
and their overlay connections.
The left plot of Figure 33 shows the cache hit rate of the uNaDas, which is the ratio of
requests served by the uNaDas to the users' video requests. It can be seen that the
median cache hit rate is around 0.4; however, for some uNaDas the cache hit rate
reaches up to 0.75. In the right plot of Figure 33, the cache hit rate is depicted depending
on the number of video requests at a given uNaDa. It can be seen that especially caches
with many requests have a high cache hit rate, while for caches with few requests the
cache hit rate takes a broad range of values.
requested and watched by the end user, which could not be served locally, but also
prefetched videos. It can be seen in the left plot that nine uNaDas save requests, as the
ratio is below 1. For some uNaDas the ratio is 1 or slightly above 1, but some also have a
high prefetching overhead, which is better visible when zooming out in the right plot. It is
not well understood what causes these high ratios, but the extreme outliers on the far
right might be due to benchmarking tests and bug fixing by some participants of the study;
if this assertion is right, they will not occur when deploying the RB-HORST mechanism.
Conclusion: To sum up, these results show that the uNaDas form an overlay as expected.
The cache hit rates among the different uNaDas reach up to 0.75 and were high for those
uNaDas that had many requests. It could be observed that half of all user requests could
be served locally, although some users faced a high prefetching overhead. When
streaming a video from the uNaDa, users had a good QoE for 75% of the videos, because
the download times (plus a safety margin) were shorter than the video playtime.
5.2.3 EFS#3: Energy consumption of RB-HORST
Any newly deployed system must prove its utility and induced profit versus the generated
cost to justify its deployment. One increasingly important parameter is the consumed
energy, in particular in comparison to the status quo and alternative systems. Hence, a
thorough analysis of the power consumption of RB-HORST and the mobile client is
conducted.
The energy assessment of RB-HORST is based on dedicated experiments assessing the
network performance (i.e., RTT, throughput) over different connection options. The study
is conducted on the RB-HORST testbed, which is extended by monitoring software that
determines system and network utilization. Based on these measurements, combined
with a power model of the deployed devices, the power consumption of the individual
components is determined.
The influence on the energy consumption of mobile users is calculated exemplarily using
Android smartphones equipped with measurement software. Besides the system
monitoring, dedicated throughput and RTT tests are also conducted. Similarly to the
uNaDas, the power consumption of the smartphones is determined with a model-based
approach.
5.2.3.1 Analysis of mobile content access energy consumption
The energy efficiency of mobile content access is determined using dedicated
measurements from the smartphones. Measurement points are placed at:
uNaDa
These measurement points are selected to represent possible content locations in
RB-HORST. Content may be pre-fetched/cached on the uNaDa, and hence is available
with low delay and high throughput. Alternatively, the content may be located at a CDN;
these are usually located close to or within the ISP networks.
To achieve comparable measurements, the same measurement environment needs to be
set up on the uNaDas and the measurement servers. Hence, a suitable location for the
measurement server representing the content server is required. The best possible
approximation allowing flexible measurements is to use EmanicsLab [46] servers, which
are distributed over Europe and are usually well connected. From these, the closest
server is identified by selecting the one with the lowest connect duration; this also avoids
selecting highly loaded servers. The best-server selection is always executed after a
connection change, ensuring that the fastest server is used for all measurements.
In this experiment, the performance was measured for connections via the uNaDa as well
as the 2G/3G/4G cellular network using different network providers. Thus, a wide variety
of connectivity options is reflected in the results.
The power consumption of the end-user devices is calculated based on the power model
of the Nexus 5. The system utilization of all devices is therefore not collected relative to
the respective interface; instead, absolute numbers are collected (e.g., CPU cycles, bytes
transmitted/received). Hence, the utilization trace of any smartphone can be mapped to a
Nexus 5, and thus the power consumption of a Nexus 5 calculated.
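Such a model-based estimate can be sketched as follows (a minimal Python sketch; the
coefficient values are illustrative placeholders, not the calibrated Nexus 5 model from the
literature):

# Model-based power estimation: map absolute utilization counters of any
# device onto a reference power model (hypothetical Nexus 5 coefficients).
NEXUS5_MODEL = {
    "idle_w": 0.25,              # baseline power [W] (illustrative value)
    "per_cpu_cycle_j": 1e-10,    # energy per CPU cycle [J] (illustrative)
    "per_byte_wifi_j": 5e-7,     # energy per byte over WiFi [J] (illustrative)
}

def estimate_energy_joules(duration_s, cpu_cycles, wifi_bytes, model=NEXUS5_MODEL):
    # Energy for one measurement interval from absolute counters.
    return (model["idle_w"] * duration_s
            + model["per_cpu_cycle_j"] * cpu_cycles
            + model["per_byte_wifi_j"] * wifi_bytes)

# Example: a 30 s interval with 3e9 CPU cycles and 5 MB transferred.
print(estimate_energy_joules(30, 3e9, 5e6))  # ≈ 10.3 J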
Field
Description
Scenario
Use case
Test Card
Goal
TM mechanism
RB-HORST
Experiment type
Testbed
Experiment setup
Parameters:
Duration: 4 weeks
Using RB-HORST:
RTT between smartphone and uNaDa
Reference (3G/4G)
Throughput (bidirectional) between smartphone and measurement server
Metrics are directly recorded on the device. See "Recorded data and raw data format".
Measurement Results
The activity patterns of the smartphones in Figure 37 show the active times of the different
devices. Some devices were active more often than others. Out of the 20 devices, 15
recorded useful data, which is discussed in the following. Table 26 lists the activity of the
network interfaces over the course of the study. On average, the network interfaces of the
participating devices were active for 10% of the time. Still, the variability of the observed
patterns is high: some devices didn't provide any or just a small number of measurements,
while a few others were active for 80% of the time, collecting useful metrics for the
analysis of the power consumption of the devices.
Table 26: Activity of the network interfaces over the course of the study

Min     Mean    Median    Max
0 %     9.4 %   1 %       80 %
From the active devices, the network delay was measured using the native ping
command. Approximately 100,000 individual measurements were conducted from the
mobile phones. 60% of these were end-to-end measurements using RB-HORST access
points; these are indicated by the red line in Figure 38. Of the remaining 40% of
measurements, one third measures the RTT to the local uNaDa (green line), one third
measures the RTT between uNaDa and measurement server (blue line), and the last third
measures the cellular network for reference.
The measurements in Figure 38 show that in 80% of the cases a service running on the
local uNaDa provides the best performance, which is consistent with the expectations.
The worst performance is obtained with the different cellular technologies. Using
RB-HORST for on-demand offloading, e.g., as a simple access point, the performance is
improved compared to the cellular network, falling in the region of 10 ms over a large
range of measurements.
The blue line shows the RTT between the uNaDa and the measurement server. This is in
the range of 10 ms to 100 ms. The steps visible within this range are caused by the
different access technologies available to the study participants. Measurements from
well-connected office environments clearly show better performance compared to DSL or
cellular uplinks.
Figure 41: Measured CDF of the power consumption of the smartphones using 3G and
WiFi
The active time of each interface is listed in Table 27. It is visible that the WiFi interface
was on average active approximately 20% longer than the 3G interface, relative to the idle
states. This is explained by the much shorter ramp and tail states of the interface. From
the mean power and the active time, the energy consumption of the individual interfaces
can be calculated by multiplying the active power by the active time. On average, each
interface has used about 1/20 of the energy of a conventional smartphone battery. Still,
these measurements exclude other power-intensive components like display, GPS, and
processing.
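As a plausibility check, assuming a typical smartphone battery of about 8.7 Wh (e.g.,
2300 mAh at 3.8 V, as in the Nexus 5):

0.536 Wh / 8.7 Wh ≈ 6%   and   0.435 Wh / 8.7 Wh ≈ 5% ≈ 1/20,

consistent with the mean energy values in Table 27.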
Table 27: Comparison of the power measurements for 3G and WiFi while active

Metric            3G                  WiFi
P_min [W]         0.4                 0.46
P_mean [W]        0.68                0.55
P_max [W]         1.05                1.23
Percent active    0.08 %              0.1 %
Mean energy       1886 J (0.536 Wh)   1557 J (0.435 Wh)
To better compare the measurements, the collected network utilization is used to calculate
the power consumption of the individual interfaces based on the power model from [34].
The CDFs of the power states are given in Figure 42 and the corresponding metrics in
Table 28. As is apparent in this figure, the power consumption of the device mainly
depends on the used interface (horizontal axis). The fraction of time the interfaces are
active also differs between interfaces. Similarly to the above plots, large fractions of the
active time are spent in idle modes. This is particularly visible for the 4G connection,
which spends 40% of the time in ramp or tail states, while for 55% of the time the
connection is only lightly used. Only a small fraction of time is spent transferring larger
amounts of traffic.
Figure 42: CDF of the power consumption using identical traffic traces and the power
model of the Nexus 5 for the individual interfaces
Table 28 shows the measurements derived from the simulations when calculating the
power consumption of the different interfaces using the same traffic traces. The average
power consumption of the different interfaces while active is comparable in the case of
WiFi and 3G, while 4G requires almost double the power. When calculating the active
periods of the devices, large differences become visible: compared to WiFi, the 4G
interface is active more than double the time, while the 3G interface is active triple the
time, all using the same traces. Combining the active time with the mean power
consumption of the respective interfaces, the energy cost of transmitting the data used
during the study becomes apparent. Using WiFi alone, the data can be transferred using
0.63 Wh, which is less than 10% of a conventional smartphone battery. On both 3G and
4G, the energy consumption is approximately quadrupled.
Table 28: Results of the simulation using different interfaces

Metric            3G                  4G                  WiFi
P_min [W]         0.4                 0.6                 0.46
P_mean [W]        0.69                0.95                0.55
P_max [W]         1.34                1.44                1.23
Percent active    0.46 %              0.38 %              0.15 %
Mean energy       8503 J (2.36 Wh)    9610 J (2.67 Wh)    2284 J (0.63 Wh)
Considering the large power saving potential when only using the WiFi interface, the
deployment of WiFi access points should be greatly increased. The measurements above
were conducted for real-life traffic patterns using an empirically derived power model.
Based on these results, the deployment of uNaDas, and in particular the use of WiFi
sharing, should be increased. As the traffic patterns visible in the measurements show
only a small number of larger file transfers and a huge fraction of small transfers, the
influence of connection establishment and tear-down affects not only the QoS
experienced by the end-user, but also the energy consumed by the smartphone. This also
affects the QoE, as already proposed in [36]. By making use of the extended availability of
WiFi access points, up to 75% of the energy required for communication may be saved.
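This figure is consistent with the mean energy values in Table 28:

1 − 0.63/2.36 ≈ 73% saving versus 3G,   1 − 0.63/2.67 ≈ 76% saving versus 4G.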
Conclusion: Given the power models available on the smartphones, the cheapest
connection for a given type of traffic can be determined a priori. Hence, additional energy
savings are possible which are not visible in the observed traffic pattern. Knowing which
type of content is accessed (e.g., real-time messaging, (live) video streaming), the power
consumption of the smartphones can be reduced compared to using a fixed connection
type.
5.2.3.2 Energy analysis of the uNaDas
Field: Description
Scenario:
Use case:
Test Card:
Goal:
TM mechanism: RB-HORST
Experiment type: Testbed
Experiment setup — Parameters:
Duration: 4 weeks
System utilization: CPU, RAM, Disk
Measurement Results
The uNaDas used during the course of the study were all of type Odroid-C1 and
connected to the router of the home gateway. As these do not include a WiFi chip, a WiFi
dongle is attached (detailed in Appendix 13.1); it is run in AP mode to provide the required
WiFi networks.
Based on the average power consumption of a single uNaDa, the power consumption of
the full uNaDa deployment can be calculated. As 25 devices were active during the study,
the power consumption of the full deployment is 69.5 W — less than one conventional PC
when idle. Considering that a considerable amount of traffic usually transferred from CDN
nodes through the ISP network to the local end-user devices may be eliminated, and that
the load on the CDN nodes can thus be reduced, the additional power consumption within
the end-user premises is comparatively small.
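This is the per-device mean of Table 29 scaled to the deployment size:
25 × 2.783 W ≈ 69.6 W, matching the reported 69.5 W up to rounding.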
Considering the potential savings in the network backbone by reducing the load on the
intermediate routers between end-user and CDN node, and the decreased load on the
CDN nodes, for a comparatively small increase of end-user energy consumption, the
energy savings of RB-HORST are apparent.
Table 29: Measured power consumption of the uNaDas

Metric                           Value
Minimum power                    0.072 W
Mean power                       2.783 W
Median power                     2.804 W
Maximum power                    3.167 W
5% - 95% percentile              2.740 W - 2.811 W
Full deployment (25 devices)     69.5 W
Figure 44: CDF of the power consumption over the uNaDas participating in the study
Conclusion: The RB-HORST system in the current configuration reduces the power
consumption of the mobile devices as well as within the network backbone and CDN, at
the cost of a marginal increase of end-user energy consumption. Considering that the
RB-HORST functionality may be integrated in future home gateways, the additional
energy expense is negligible. Considering the reduced energy consumption on the
smartphones in combination with the reduced response time for locally available content,
the QoE of the end-user is also increased.
5.2.4 EFS#4: MUCAPS
This prototype validation has been performed at the ALBLF premises. In the following, the
test card related to the experiment is reported.
Field: Description
Scenario: End-user focused
Use case:
Experiment Id:
Goal:
TM mechanism: MUCAPS
Experiment type: Prototype validation
Experiment setup: parameters, architecture
Equipment set — VITALU: one PC
Test parameters: MUCAPS modality
Performance tests:
For the ISP: difference in routing cost with and without MUCAPS, all modalities —
computed by MUCAPS; computed by VITALU
Table 30: RC and BWS values of the candidate AEPs

        RC    BWS
AEP1    11    18 [45, 160]
AEP2    7     8 [4, 9]
AEP3    60    20 [45, 160]
The weights associated with those metrics are set depending on the UEP access type, as
presented in Table 31, with the following assumptions:
In WiFi/ADSL: BW has a higher influence, as the QoE will rapidly degrade when there
are many active users in the zone covered by an access point.
In 3G: seeking the highest possible path bandwidth is useless, as mobile devices can
only support a limited bitrate. On the other hand, the routing cost for the operator is
very important.
Table 31: Metric weights chosen for the 3 tested access types

        LAN/FTTX    WiFi/ADSL    3G
RC      2.3
BWS
The AEPs have IP addresses as documented in Table 32. AEP3 is a private server that
only has a public address, which for confidentiality is not revealed here. Addresses of
hosts other than LAN addresses are partially masked for the same reason.
Table 32: IP addresses of the 3 video servers

Host    LAN            VPN                PUBLIC/PRIVATE
AEP1    10.133.10.1    172.alu.vpn.142    sp1.pub.srv.113
AEP2    10.133.10.2    172.alu.vpn.51     sp1.pub.srv.111
AEP3                                      spB.prv.srv.37
$ dig videoserver.alto
# Without ALTO, on LAN
; <<>> DiG 9.9.5-3-Ubuntu <<>> videoserver.alto
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11339
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;videoserver.alto.              IN      A

;; ANSWER SECTION:
videoserver.alto.       10      IN      A       spB.prv.srv.37
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       sp1.pub.srv.111

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165
Figure 46: Screenshot with MUCAPS OFF for all types of UEP access
$ dig @172.alu.vpn.165 videoserver.alto
# With ALTO, on LAN
; <<>> DiG 9.9.5-3-Ubuntu <<>> @172.alu.vpn.165 videoserver.alto
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51825
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;videoserver.alto.              IN      A

;; ANSWER SECTION:
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       sp1.pub.srv.111
videoserver.alto.       10      IN      A       spB.prv.srv.37

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165
Figure 47: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a LAN/FTTX
access
$ dig @192.alu.wfi.145 videoserver.alto
# With ALTO, on Wi-Fi
; <<>> DiG 9.9.5-3-Ubuntu <<>> @192.alu.wfi.145 videoserver.alto
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49285
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;videoserver.alto.              IN      A

;; ANSWER SECTION:
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       spB.prv.srv.37
videoserver.alto.       10      IN      A       sp1.pub.srv.111

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165
Figure 48: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a WiFi access
;; ANSWER SECTION:
videoserver.alto.       10      IN      A       sp1.pub.srv.111
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       spB.prv.srv.37

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165
Figure 49: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a 3G access
The three tables below compare, for each access type, the AEP performances in
cross-layer utility (CLU) for all 4 MUCAPS modalities, computed by the AEP ranking
module of MUCAPS. Table 33, Table 34, and Table 35 provide the AEP CLU for the LAN,
WiFi and 3G access, respectively. The CLU represents the proximity of the AEP
performance vector V(aep) to an ideal (RC, BWS) vector Vid, composed of the best
observed performance values among the candidate AEPs; in the present case,
Vid = (7, 20). The CLU is based on the L1 norm of a weighted distance vector between
V(aep) and Vid. In all three tables, the best choice is the AEP that maximizes the CLU.
For LAN access: the best CLU performance is obtained in the ON+(RC, BW) mode,
when AEP1 is selected; the CLU equals 0.765.
For WiFi access: the best CLU performance is obtained in the ON+(RC, BW) mode,
when AEP1 is selected; the CLU equals 0.7418.
For 3G access: the best CLU performance is obtained in the ON+(RC, BW) mode,
when AEP2 is selected; the CLU equals 0.76.
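A sketch of how such a ranking could be computed is given below (Python; the exact
normalization and weights used by the MUCAPS ranking module are not specified here,
so this sketch will not reproduce the exact CLU values of Tables 33-35 — it only
illustrates the weighted L1-distance idea; AEP2's RC value of 7 follows from the reported
(RC, BWS) gain of (53, -12) relative to AEP3):

def clu(v_aep, v_id, weights, scale):
    # Cross-layer utility sketch: 1 minus a weighted, normalized L1
    # distance between the AEP's (RC, BWS) vector and the ideal vector.
    # weights -- per-metric weights (access-type dependent, cf. Table 31)
    # scale   -- per-metric normalization constants (assumed)
    distance = sum(w * abs(a - i) / s
                   for w, a, i, s in zip(weights, v_aep, v_id, scale))
    return max(0.0, 1.0 - distance)

# Candidates from Table 30, ideal vector Vid = (7, 20); the RC spread
# (60 - 7 = 53) is used here as an assumed normalization constant.
candidates = {"AEP1": (11, 18), "AEP2": (7, 8), "AEP3": (60, 20)}
ranked = sorted(candidates,
                key=lambda n: clu(candidates[n], (7, 20), (0.5, 0.5), (53.0, 20.0)),
                reverse=True)
print(ranked)  # best AEP first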
Table 33: AEP performances in CLU for all 4 MUCAPS modalities for LAN access

LAN           Selected AEP    RC    BWS    CLU
OFF           AEP3            60    20     0.558
ON+RC         AEP2                         0.7
ON+BW         AEP3            60    20     0.558
ON+(RC, BW)   AEP1            11    18     0.765
Table 34: AEP performances in CLU for all 4 MUCAPS modalities for WiFi access

WiFi          Selected AEP    RC    BWS    CLU
OFF           AEP3            60    20     0.47
ON+RC         AEP2                         0.6191
ON+BW         AEP3            60    20     0.47
ON+(RC, BW)   AEP1            11    18     0.7418
Table 35: AEP performances in CLU for all 4 MUCAPS modalities for 3G access

3G            Selected AEP    RC    BWS    CLU
OFF           AEP3            60    20     0.47
ON+RC         AEP2                         0.76
ON+BW         AEP3            60    20     0.47
ON+(RC, BW)   AEP2                         0.76
Test videos — resolution and media bitrate:

Video           Resolution    Media Bitrate
Gone Girl LR    1152x480      638 kb/s
Gone Girl MR    1728x720      1316 kb/s
Gone Girl HR                  1224 kb/s
                854x480       662 kb/s
The QoE analysis results for all three types of access are provided in Table 37. The
values are averaged over the several streaming sessions done for each video item. For
each access type, lines in slanted orange text show QoE results for the initial choice,
while lines in bold green show the results for the MUCAPS-assisted AEP selection. The
text is colored black for LAN/FTTX, blue for WiFi/ADSL, and purple for 3G.
VQS max, also called the theoretical VQS, is the VQS proposed by the video encoder.
It is the best possible VQS value, assuming the encoder provides the best possible
MBR, so it does not make sense to try to reach a higher VQS. The VQS takes values
in the range [1, 5].
Start Time (s) is the time in seconds elapsed between the user clicking on play and the
start of the video playout.
RTT (ms) is provided here to illustrate how VITALU performs, but is only conclusive in
the 3G case, since the AEP1 and AEP2 class servers are located in the ALBLF
premises. Indeed:
o when the LAN/FTTX connection is used, all application paths go through a proxy
and the RTT applies to the path between the end-user and the proxy;
o when WiFi is used, the RTT illustrates the difference between server AEP3,
located outside the ALBLF premises and ISP network, and AEP1 and AEP2.
Freezes (number, average and maximal duration) are actually the metrics with the
most impact on QoE, as the observed freeze duration can last up to 30 seconds; this
is highly perceptible and disturbing for human users.
Table 37: QoE analysis results when streaming the test videos, for all access types
(empty freeze cells denote sessions without freezes; the "Number of Freezes" values did
not survive extraction)

Server  Res   Mode   RTT (ms)  Start (s)  VQS      VQS (max)  Frz avg (s)  Frz max (s)
AEP1    1080  LAN    0.24      0.35       4.51057  4.64
AEP1    480   LAN    0.35      0.18       3.83629  3.75
AEP1    720   LAN    0.28      0.26       4.2258   4.22
AEP2    1080  LAN    0.28      13.33499   1.73389  4.87       14.70        20.25
AEP2    480   LAN    0.30      3.69       3.19114  3.75
AEP2    720   LAN    0.25      6.92       3.29814  4.22
AEP3    1080  LAN    0.21      0.53       4.4028   4.64
AEP3    480   LAN    0.26      0.26       3.78476  3.76
AEP3    720   LAN    0.27      0.34       3.82556  4.75
AEP1    1080  Wi-Fi  1.96      2.12       3.96123  4.64
AEP1    480   Wi-Fi  1.98      0.65       3.61145  3.76
AEP1    720   Wi-Fi  1.93      1.13       3.87519  4.22
AEP2    1080  Wi-Fi  1.34      13.44      1.68577  4.88       15.37        20.48
AEP2    480   Wi-Fi  1.25      3.72       3.18884  3.75
AEP2    720   Wi-Fi  1.23      6.85       3.30156  4.22
AEP3    1080  Wi-Fi  17.66     2.08       3.96879  4.64
AEP3    480   Wi-Fi  17.84     0.64       3.61467  3.76
AEP3    720   Wi-Fi  23.10     1.11       3.88009  4.22
AEP1    1080  3G     1139      16.84      1.34103  4.95       24.69        28.25
AEP1    480   3G     1069      4.63       2.872    3.86
AEP1    720   3G     1139      8.99       2.55859  4.64       8.71         8.71
AEP2    1080  3G     1069      15.74      1.42838  4.95       23.38        30.14
AEP2    480   3G     1039      3.27       3.10624  3.86
AEP2    720   3G     1159      6.66       3.19339  4.65
AEP3    1080  3G     759       5.90       3.3512   4.95773
AEP3    480   3G     819       1.01       3.20611  3.86
AEP3    720   3G     989       2.13       3.5393   4.65
Table 38, Table 39, and Table 40 below provide the performance gains or losses on the
measured metrics, for each type of access.
Perf Gain/f gives, for each QoE metric, the average performance gain per flow for the
target resolution. The symbol = denotes equal performance.
GPerf Gain/f gives, for each metric, the average performance gain per flow, averaged in
turn over all the sustainable resolutions. The symbol = denotes equal performance.
Results for LAN:
We consider the case of a UEP streaming an HR video. The target resolution is HR; the
sustainable resolution levels are HR, MR and LR.
The initial choice without MUCAPS is AEP3. With MUCAPS/G4, the selected server is
AEP1. In both cases VQS/VQS max is the same, equal to 0.786. The theoretical gain on
(RC, BWS) is equal to (49, -2), with a CLU performance gain equal to 37.1%. The move
from AEP3 to AEP1 thus yields the same VQS/VQS max ratio.
The average gain in hops is 10, with values in [8, 12].
Table 38: Impact of MUCAPS on video streaming QoE for LAN/FTTX UEP access

LAN/FTTX        MUCAPS OFF: AEP3    MUCAPS G4: AEP1   Perf Gain/f          GPerf Gain/f
VQS/VQS max     3.89/4.95 = 0.786   3.89/4.95         -0.09%
Start time (s)  0.581               0.391             +32.7% = -0.19 s     +35.4% = -0.154 s
RTT (ms)        0.35                0.29              +17.14% = -0.06 ms   +16.85% = -0.06 ms
NFrz
NFrz avg (s)
Table 39: Impact of MUCAPS on video streaming QoE for WiFi/ADSL UEP access

WiFi/ADSL       MUCAPS OFF: AEP3     MUCAPS G4: AEP1   Perf gain/f          GPerf gain/f
VQS/VQS max     3.91/4.22 = 0.9265   3.91/4.22 = 0.9265
Start time (s)  1.11                 1.13              -1.8% = +0.05 s      -0.892% = -0.03 s
RTT (ms)        23.10                1.93              +91.65% = -21.17 ms  +90.27% = -18.52 ms
NFrz
NFrz avg (s)
Results for 3G:
We consider the case of a UEP streaming an LR video. The target resolution is MR; the
sustainable resolution level is LR. In this scenario, only AEP3 is directly reachable on the
Internet, whereas AEP1 and AEP2 are behind a firewall. Thus the path performances of
S1 and S3 remain comparable.
The initial choice without MUCAPS is AEP3. With MUCAPS/G4, the selected server is
AEP2. There is a slight gain on VQS and start time. In both cases VQS/VQS max is the
same, equal to 0.829. The theoretical gain on (RC, BWS) is equal to (53, -12), with a CLU
performance gain equal to 61.7% with the 3G metric weights. The move from AEP3 to
AEP2 though yields an equivalent VQS/VQS max ratio equal to 0.805.
The average gain in number of hops is 3, with values in [2, 4].
Table 40: Impact of MUCAPS on QoE for video streaming for 3G UEP access

3G              MUCAPS OFF: AEP3    MUCAPS G4: AEP2   Perf gain/f       GPerf gain/f
VQS/VQS max     3.20/3.86 = 0.829   3.20/3.86 = 0.829
Start time (s)  1.01                3.27              -223% = +2.26 s
RTT (ms)        819                 1039              -26.86%
NFrz
NFrz avg
NFrz max
Conclusion: This section provides experiment results and an assessment for MUCAPS,
which has been tested in prototype validation mode. The goals of these experiments were
to:
Verify the ability of MUCAPS to revise the initial AEP selection whenever appropriate,
that is, when the content location selection without MUCAPS does not provide the
optimal location;
Evaluate the impact of the MUCAPS selection on the QoE of a video streaming
session, that is, compare QoE results without and with MUCAPS involvement in the
AEP selection.
The average gain in number of hops is 7.6, with values in [2, 12].
In all access cases the MUCAPS intervention maintains the user VQS level, which
fulfills the core requirement for the ALTO protocol to be attractive to applications. The
highest observed VQS decrease is for the 3G access, where the average observed
VQS decrease equals 2.9%. The average performance increase for LAN equals
9.57%, whereas the VQS remains almost unchanged for WiFi access.
Freezes occur when the video resolution is too high w.r.t. the application path
bandwidth: the duration ranges between 8 and 30 seconds, and the mean number of
occurrences, whenever they occur, is 4. This is highly perceptible and disturbing for
human users.
o In LAN/FTTX and WiFi/ADSL, freezes occur when High Resolution videos are
requested from AEP2, which offers a path bandwidth suited at most for Medium
Resolution on a WiFi/ADSL connection.
o In 3G, freezes appear when the resolution is medium or high, but never when it
is low.
The experiments also show:
the need to use AEP selection metrics that are suited to the needs of an application:
video requires path bandwidth, and routing cost should not be the only decision metric;
the impact of the UEP connection type on the influence of the metric: experiments on
3G connections show that application traffic optimization should not systematically
seek the maximum bandwidth path, as this is useless and sometimes
counterproductive.
The next step of this evaluation, rather than adding more AEPs to the candidate list,
would be to evaluate the MUCAPS impact in a real-life situation, when it is hooked to a
set of ISP DNS resolvers. This would imply a realistic ISP-scale study of DNS responses
in CDN applications. In particular, one would need to measure the proportion of optimal
AEP selections in DNS responses. The same holds for DHT responses, when the
MACAO selection is requested by clients hooked to DHTs in end systems.
For the RTT, it is to be noted, besides its non-conclusive values in the prototype validation
set-up, that it has little influence on the VQS in Video on Demand applications. Its
influence is important in video-conferencing, especially when it reaches values between
150 and 300 ms, in which case the selection of metrics and their weights will be different.
The important factors that really bother the end-user are the start time delay and the
freezes.
Market and risk analysis, where we identify the involved stakeholders and their
interests, target markets and competition.
Then, we focus on the main stakeholder that employs the SmartenIT mechanism, and
thus is the one bearing the deployment costs as well as enjoying certain benefits due to its
operation.
For each stakeholder considered, we further break the analysis into the following parts:
- Cost analysis², including CAPEX, OPEX, and estimated cost modification. CAPEX comprises equipment cost, deployment of service, i.e., implementation and installation, and any other setup overhead. On the other hand, OPEX comprises maintenance cost, i.e., energy cost, cost of monitoring, troubleshooting and upgrade, and any other cost associated with the mechanism operation. Costs are estimated based on our experience with developing and running the considered traffic management mechanisms and/or taking into account their complexity. Cost modification refers to the impact of the introduction of the mechanism on all aforementioned cost categories and ultimately on the total cost.
¹ The cost-benefit analysis does not address SWOT, as this is the subject of another section of this deliverable (see Section 7.3).
² Costs and benefits are measured in the same timescale (e.g., week, month), depending on the TM mechanism considered, so as to be comparable and utilized in calculations.
- Break-even analysis, where we compare the costs and benefits defined in the previous parts of the analysis to derive the net profits for the stakeholder. This last part is provided only for DTM/DTM++ and RB-HORST/RBH++, as the derived cost modifications allowed us to demonstrate quantifiable gains.
[Tables: Market and risk analysis and cost-benefit summary for DTM/DTM++, flattened by extraction; only row labels and a few cell values survived. Recoverable entries: Stakeholders — the ISP, which transfers data between the ultimate (sender/receiver) DCs; Market of interest — Internet access; Competing approaches — none identified. The remaining rows cover target customers, customer value, customer ownership, value chain/network organization, revenue model, frequency of use, increase of demand, topology, CAPEX (deployment of service, implementation, installation), OPEX (maintenance, monitoring, troubleshooting, upgrade, energy), cost modifications (energy, deployment, management, other — largely "None"), benefit modifications (latency, throughput, QoE — largely "None"), competitive advantage, vertical integration, bundling opportunities, and revenues by increased demand (if any).]
Figure 51: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for DTM/DTM++.
Next, we consider the achieved traffic savings for a medium-sized ISP with 6 inter-domain links of 10 Gbps capacity each. We assume that inter-datacenter traffic is 40% of the total IP traffic that crosses the links of the considered ISP, while all of this traffic (i.e., 100%) is assumed to be manageable by the ISP. As DC/cloud traffic is expected to increase at an annual rate of 30% until 2019 [24], we also assume that ISPs will perform appropriate upgrades. Nevertheless, the cost of these upgrades is driven by the explosion of IP traffic and not by the deployment of DTM/DTM++; thus, it is excluded from this analysis. According to the evaluations of DTM/DTM++, the traffic reduction achieved on inter-domain links reaches 8-15%. Based on the outcome of the evaluations, we calculate the traffic savings in the aforementioned setup. The calculated traffic savings and DC/cloud traffic evolution on an 8-year basis are depicted in Figure 52.
Figure 52: DC traffic evolution and estimated traffic savings due to DTM/DTM++ on an 8-year basis.
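To make the above assumptions concrete, the following back-of-the-envelope sketch (illustrative Python, not project code; the assumption that capacity upgrades track traffic growth is ours) reproduces the order of magnitude of the savings curve:

```python
# Estimate annual inter-DC traffic and DTM/DTM++ savings for a medium-sized
# ISP, using the figures quoted in the text.
LINKS = 6                  # inter-domain links
CAPACITY_GBPS = 10         # capacity per link
INTER_DC_SHARE = 0.40      # fraction of total IP traffic that is inter-DC
ANNUAL_GROWTH = 0.30       # expected annual growth of DC/cloud traffic
REDUCTION = (0.08, 0.15)   # traffic reduction range achieved by DTM/DTM++

inter_dc_gbps = LINKS * CAPACITY_GBPS * INTER_DC_SHARE  # year-1 inter-DC load
for year in range(1, 9):
    low, high = (inter_dc_gbps * r for r in REDUCTION)
    print(f"year {year}: inter-DC {inter_dc_gbps:.1f} Gbps, "
          f"savings {low:.1f}-{high:.1f} Gbps")
    inter_dc_gbps *= 1 + ANNUAL_GROWTH  # assume upgrades track growth
```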
According to [39], the price of transit in 2015 is $0.63/Mbps, which corresponds to €0.56/Mbps. This price has been dropping continuously over the last years at an annual rate of 33%, and it is forecasted to keep dropping at a similar rate. The forecasted values of the price of transit for the next 8 years are depicted in Figure 53.
Figure 53: Forecasted values of the price of transit for the next 8 years [39].
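The forecast above amounts to a simple geometric decay from the 2015 anchor; a minimal sketch (illustrative only, anchor value from [39]):

```python
# Transit-price forecast: 33% annual decline from $0.63/Mbps in 2015.
price = 0.63  # $/Mbps
for year in range(2015, 2023):
    print(f"{year}: ${price:.3f}/Mbps")
    price *= 1 - 0.33
```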
Taking into account the derived CAPEX, OPEX, traffic savings, and cost savings (considered as benefits), we deduce a graph presenting total costs vs. total benefits due to the introduction and operation of the DTM/DTM++ mechanism in the network of an ISP. Figure 54 depicts the derived results. Unfortunately, due to the dramatic drop of the price of transit, savings for the ISP diminish over the years. Still, when 15% traffic savings are achieved, the ISP realizes significant cost savings annually.
Figure 54: Total costs vs. total benefits for DTM/DTM++ (benefits are calculated both for
8% and 15% traffic savings).
The accompanying figure presents the accumulated costs and benefits, the annual gain (i.e., the difference between costs and benefits), and the accumulated gains over the years. The break-even point for 15% traffic savings is reached already in the 2nd year of operation. Thus, as can be observed, DTM/DTM++ can achieve significant cost reduction for rather low investment and OPEX, and is therefore considered a highly cost-efficient solution.
6.1.2 Cost-benefit analysis of RB-HORST
6.1.2.1 Market and risk analysis
[Tables: Market and risk analysis and cost-benefit summary for RB-HORST, flattened by extraction; only row labels and a few cell values survived. Recoverable entries: Installation — none; Energy — energy is saved at the ISP cache, in the cellular network, and on end-user devices such as smartphones; Deployment cost — no cost modification. The remaining rows cover stakeholders, major players, market of interest, target customers, customer value, value chain/network organization, revenue model, frequency of use, increase of demand, topology, deployment of service, implementation, OPEX (maintenance, monitoring, troubleshooting, upgrade), energy cost, management cost, other cost, latency, throughput, QoE, competitive advantage, vertical integration, bundling opportunities, and revenues by increased demand (if any).]
bound of 40% (see Figure 57). In practice, similar solutions that are deployed by ISPs,
such as FON, are adopted more quickly and by a larger market segment, as enhanced
home routers are provided to end-users by the ISP itself with a new contract or any
subsequent renewal.
Figure 58: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for RB-HORST.
Figure 59: Video traffic evolution and estimated traffic savings due to RB-HORST on an 8-year basis.
To calculate traffic cost savings, we again consider the price of transit, which, as discussed in Section 6.1.1.6, is $0.63/Mbps in 2015, corresponding to €0.56/Mbps, and is forecasted to continue dropping over the next years at an annual rate of 33%. Moreover, we consider savings of cache resources for the ISP. According to the evaluations of RB-HORST, about 30% of cache storage can be saved. Therefore, if the video catalogue has 10⁷ objects consuming 1 GB each, considering also replication and quality layers, which equals 10⁴ TB, then 30% cache savings account for 3000 TB and €3000/month.
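A quick check of these figures (a sketch with one assumption: the implied unit storage price of €1/TB/month is inferred from the quoted €3000/month, not stated in the text):

```python
# Back-of-the-envelope check of the cache-savings arithmetic above.
objects = 10**7                             # video catalogue size
size_gb = 1                                 # GB per object (incl. replication)
catalogue_tb = objects * size_gb / 1000     # 10^7 GB = 10^4 TB
saved_tb = 0.30 * catalogue_tb              # 30% savings = 3000 TB
eur_per_tb_month = 1.0                      # assumption, inferred from the text
print(saved_tb, saved_tb * eur_per_tb_month)  # 3000.0 TB -> 3000.0 EUR/month
```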
Version 1.0
Copyright 2015, the Members of the SmartenIT Consortium
D4.3
Public
Furthermore, due to caching at end-user premises, 40%-60% of total energy is saved at the ISP cache and in the cellular network, i.e., 100 Joule of savings per Gbit [19]. One may claim that the energy cost is transferred to the end-user. Indeed, the power consumption of the uNaDa is increased by a maximum of 150 mW compared to the idle mode when running RB-HORST. Considering a similar increase in power consumption in the case of a deployment on home routers (being conceptually similar in architecture to the uNaDas), the additional energy cost at the end-user premises may be neglected. In practice, RB-HORST uses resources which otherwise remain unused while still consuming energy. Compared to the QoE benefits made possible by RB-HORST, this marginal increase of power consumption is justified.
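To illustrate why the overhead is negligible, the following rough comparison (our own arithmetic on the figures above; the break-even interpretation is an assumption) relates the uNaDa's extra power draw to the network-side savings per offloaded Gbit:

```python
# Daily uNaDa energy overhead vs. network-side savings per offloaded Gbit.
overhead_w = 0.150                           # max extra power for RB-HORST
overhead_j_per_day = overhead_w * 24 * 3600  # ~12,960 J/day
savings_j_per_gbit = 100                     # ISP/cellular savings per Gbit [19]
breakeven_gbit = overhead_j_per_day / savings_j_per_gbit
print(f"~{breakeven_gbit:.0f} Gbit/day offloaded offsets the overhead")
```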
Returning to the cost analysis and taking into account the derived CAPEX, OPEX, traffic savings and the respective cost reduction (considered as benefit), we deduce a graph presenting total costs vs. total benefits due to the introduction and operation of the RB-HORST mechanism in the network of an ISP. Figure 60 depicts the derived results. Unfortunately, due to the dramatic drop of the price of transit, savings for the ISP diminish over the years. We observe that benefits exceed costs from the first year of operation for 75% traffic savings due to RB-HORST, while benefits exceed costs only after the 3rd year for 50% traffic savings.
Figure 60: Total costs vs. total benefits (benefits are calculated both for 50% and 75%
traffic savings).
In Figure 61, the blue and red lines present the accumulated costs and benefits over the 8-year period. Moreover, the green and purple lines depict the annual gain, i.e., the difference between costs and benefits, and the accumulated gains over the years, respectively. As can be observed, the break-even point for 75% traffic savings is reached already in the 1st year of operation. Thus, RB-HORST can achieve highly significant cost reduction for a medium-sized investment, mainly due to the equipment upgrade which is considered to be borne by the ISP in this analysis, and relatively low OPEX, and is therefore considered a highly cost-efficient solution.
[Table: Market and risk analysis for MUCAPS, flattened by extraction; most cell values were lost. Row labels cover major players, market of interest, target customers, competing approaches, customer ownership, value chain/network organization, customer value, and revenue model. Surviving entry: Business model (in accordance with the business modeling analysis in [7]) — ISP managed service.]
6.1.3.3 Considered scenario
[Tables: Considered scenario and cost-benefit summary for MUCAPS, flattened by extraction; most cell values were lost. Row labels cover number of potential users, frequency of use, increase of demand, topology, deployment of service, implementation, installation, OPEX (maintenance, monitoring, troubleshooting, upgrade, energy), energy cost, deployment cost (not foreseen), management cost, other cost (not foreseen), latency (not addressed), QoE (evaluated on LAN/FTTX and on 3G), competitive advantage, vertical integration, bundling opportunities (not foreseen), and revenues by increased demand (if any).]
6.2.1 DTM/DTM++
[SWOT table for DTM/DTM++, flattened by extraction; the Strengths and Threats cells did not survive.]
Weaknesses:
- Generic traffic management mechanism; inadequate focus on interaction with Cloud Management systems (although conceptual considerations were done)
- Missing high-availability features (controller redundancy, SBOX redundancy), which are requested in typical providers' domains
- Security issues: communication channels between components are not secured
- Implementation requires hardware routers with hierarchical policers (DTM++ only)
Opportunities:
- Implemented tunneling could be extended for assured-quality interconnection among network and cloud operators
6.2.2 RB-HORST
Strengths:
- Offers both WiFi roaming and content prefetching
- Reduced/balanced traffic loads on the core network
Weaknesses:
- Variable (legal) framework for caching/serving content (DRM, etc.) in different countries
Threats:
- Similar products/services (WiFi roaming) with wider customer base and more mature design
6.2.3 MUCAPS
[SWOT table for MUCAPS, flattened by extraction; the Strengths, Weaknesses, and Opportunities cell contents did not survive.]
7 Overall assessment
This section summarizes the overall assessment of all SmartenIT solutions and mechanisms proposed, and presents a clearly laid out picture of the various SmartenIT perspectives, methods, evaluation techniques, and results. The overall assessment builds on previous sections of this deliverable, but also encompasses the full range of findings of SmartenIT during its past 3 years of operation and research.
While Section 3 of this deliverable presented the mechanisms landscape explicitly, recalled the relevant stakeholders and their relations and perspectives, and recalled the finalized and stable SmartenIT system architecture, here the major overall assessment is undertaken, including parameters and metrics, the business perspective, experiment results, a cost-benefit analysis, and a SWOT analysis.
During the three years of work, the SmartenIT project evolved taking into account its key design goals and the five main objectives defined in the Description of Work.
SmartenIT generated a wide range of ideas and solutions. These ideas were compiled into
mechanisms, which took the form and style of management mechanisms, addressing
more broadly Network and Service Management aspects of network operations performed
by the main stakeholders involved, as well as the user. During these 3 years, a set of
carefully selected mechanisms has achieved a mature state in terms of formal
specification and design, while some of them have additionally been implemented to
demonstrate, within a working prototype, that respective control loops and mechanisms
can be cast into operational and efficient software, too. The SmartenIT approach encompassed evaluations in terms of simulations (based on the formal specifications) and in terms of prototypical software performance and functional evaluations (for the implemented mechanisms). Besides the insights gained from each of those two evaluation paths separately, the conclusive finding from simulation results and operational performance results revealed that theory and implementation can come very close together, while real-life considerations are fully integrated into both the definition of the evaluation set-ups and the interpretation of their outcomes.
Therefore, Table 41 summarizes all traffic management mechanisms that have been considered by the SmartenIT consortium. As already explained in Section 3, they are mapped onto two separate scenarios, namely the EFS (End-user Focused Scenario) and the OFS (Operator Focused Scenario). For each mechanism initiated and developed within SmartenIT, Table 41 provides the key information on (a) the stage of its development
achieved and (b) its respective general assessment with respect to the benefits or losses of the stakeholders involved. More explicitly, each of those 11 mechanisms in total is clearly distinguished in terms of its intended and designed impacts (cf. Deliverables D2.2 [5], D2.4 [7], and D2.5 [8]) and consequently in terms of its formal specification. This indicates that the respective mechanism was carefully investigated with respect to how that intended impact can be reached, either in a general case or in the dedicated use-cases SmartenIT had defined within WP1. A subset of those mechanisms (more specifically, six of them) was evaluated by means of simulations, by (a) either determining a relevant simulation model, implementing it, and running respective simulations, or (b) utilizing known simulation models and applying them to the dedicated situation under investigation.
[Table 41, flattened by extraction: traffic management mechanisms grouped by scenario — EFS: RB-HORST, SEConD, vINCENT, MONA, MUCAPS; OFS: DTM, ICC, MRA, DTM++ (DTM + ICC network layer functionality) — with columns for specification, simulation, implementation, testbed experiments, and a general assessment of stakeholders' benefits/losses for the ISP, the end-user, and the overlay provider (CSP/CDN/DCO); surviving assessment entries range from "No-lose" to "Benefits", and the testbed annotation "F+P" survives for several mechanisms.
Legend: F — functional test; M — measurement extension; P — performance test; * — network layer functionality of ICC implemented on a hierarchical policer [8], cloud layer functionality not implemented.]
Additionally, the mechanisms with the highest potential impact in practical operations as well as the highest innovation potential were chosen by SmartenIT to become prime candidates for a full-fledged implementation. Thus, three mechanisms have reached that stage of development within SmartenIT. Note that implementations in a prototypical manner led to various practical experiments and evaluations; those evaluations addressed complex and real-life characteristics. It is also important to note that not all mechanisms with a similar level of impact and innovation potential could be implemented, due to limited resource availability within SmartenIT (implementations require much more
resources than simulations). However, those prototypes and their respective show-cases demonstrate clearly that it is neither easy to reach exactly the same behavior between a simulated and an implemented mechanism, nor is it desirable to do so for every mechanism. SmartenIT, as a research and development project, needs to provide general insights for future design and implementation work as well as measurable results on selected mechanisms. Thus, three mechanisms (RB-HORST, DTM/DTM++, and MUCAPS) were implemented and became part of the SmartenIT prototype, which was first show-cased for the Year 2 review and will also be show-cased in its advanced form for the Year 3 review, indicating that besides the written documentation of such implementations and evaluations, the respective effects and impacts can be seen in real-life operation.
Finally, Table 41 also shows, at an abstract and intentionally high level of assessment, the key stakeholders' benefits and losses. Since detailed evaluations are available within Deliverables D2.2 [5], D2.4 [7], D2.5 [8], and D4.3 (this document), this general overview focuses on the main stakeholders, which are considered to be the usual major players, especially (a) the Internet Service Provider (ISP), (b) the end-user, and (c) the overlay provider, either in terms of the Cloud Service Provider (CSP), Content Delivery Network (CDN), or the Data Center Operator (DCO). Thus, Table 41 provides an overview of which stakeholders are involved in which mechanisms and indicates a general assessment of the potential benefits that may be reached with a given solution, or the possible losses that may arise for the respective stakeholder.
Furthermore, Figure 63 shows specifically the evolution of the SmartenIT solutions, which are based on the mechanisms designed and the evaluations performed. Starting from all 9 standalone basic mechanisms, namely RB-Tracker, HORST, MONA, vINCENT, SEConD, and MUCAPS for the End-user Focused Scenario under investigation, and DTM, ICC, and MRA for the Operator Focused Scenario, synergies have been investigated on the level of formal investigations, simulations, and integrations. The resulting synergetic mechanisms combine key functionality of the basic mechanisms and form practically applicable solutions of SmartenIT know-how, especially RB-HORST and DTM++, which were subsequently chosen for an integrated implementation and prototyping. The fully integrated cross-scenario solution, termed RB-HORST-DTM-ICC, demonstrates, as documented in Deliverable D2.5 [8], that the various functionalities developed within SmartenIT form a tool box, which can serve as the basis for combinations as (a) the dedicated field of application may require or (b) the set-up of stakeholders and their dedicated technology in operation may demand. Thus, the combination of the developed components into a complete system addresses the key project objectives and design goals in full.
Within Figure 63, the blue boxes indicate those mechanisms that have been described only, or described and specified. A significant number of those have been inspected in the closest possible detail by using theoretical models and simulation techniques. The red boxes present mechanisms which have been described and specified, theoretically investigated, and implemented, too. RB-Tracker and HORST do not have separate implementations, as the combined functionality offered by these two mechanisms is implemented in a fully integrated approach, termed the RB-HORST mechanism. The arrows in Figure 63 show the parent-child relations between these mechanisms, but they also represent the evolution of the integration process within the SmartenIT project, especially for performing the engineering and prototyping of these mechanisms within the SmartenIT architecture. The DTM++ mechanism integration followed a slightly different evolution, since the parent DTM mechanism was implemented first, then was improved by
its integration with the ICC mechanism, and finally evaluated in full in that integrated form. Thus, the ICC mechanism was implemented in its integrated form as part of DTM++. Therefore, the DTM++ mechanism possesses the functionality of both the pure DTM and the ICC mechanisms.
The vertical grey line in Figure 63 separates the originally defined EFS and OFS mechanisms. However, as SmartenIT progressed, clear insights were gained that a future integration of the SmartenIT end-users' and operators' views can form a fully functional mechanism, and in turn a product, able to offer integrated features and functionality related to OFS and EFS as a whole. This integration level has already been discussed and was considered in further detail in D2.5 [8]. Therefore, the green arrows indicate the next possible integration paths, already considered and studied within the scope of SmartenIT, but not implemented. Furthermore, the integration of SEConD, MONA, and vINCENT with RB-HORST has been discussed in depth in the consortium due to its innovation potential and has already resulted in an implementation specification (the RB-HORST++ mechanism), but due to resource constraints such an implementation was not further pursued.
The main functionality and innovation of all mechanisms presented in Figure 63 may be summarized as follows:
- RB-HORST combines the approaches of RB-Tracker and HORST by using RB-Tracker to form and manage an overlay among the uNaDas participating in HORST.
- MONA adds intelligent traffic control on the smartphone, thus reducing the energy consumption of mobile data access and increasing the QoE of the end-user by selecting the optimal network connection depending on the current network availability, network performance, and user interaction.
- vINCENT provides an incentive scheme for users to offer WiFi network access even to unknown parties. In the scheme, each user participates with his mobile phone and his uNaDa, and is granted an appropriate incentive by being allowed to consume a fair share of the offloading capabilities of other access points. vINCENT exploits social relationships derived from the OSN, as well as interest similarities and locality of exchange of OSN content, and addresses the asymmetries between rural and urban participants of an offloading scheme to derive fair incentives for resource sharing among all users.
- SEConD enhances the Online Social Network (OSN) users' QoE by eliminating stall events during video viewing. The means to attain this improvement are effective video prefetching and peer-assisted, QoE-oriented video delivery. SEConD reduces the contribution of the origin server in video delivery, and thus its associated load.
- MUCAPS revises the initial endpoint selection done by the application overlay by adding awareness of the underlying network topology and of transport costs, through information not available via traditional end-to-end on-line measurement mechanisms. The selection criteria used in MUCAPS are an abstraction of end-to-end performances on application paths. This abstraction itself relies on an ISP-centric abstraction of the Internet topology, provided to applications via the IETF ALTO protocol and one of its extensions supporting several network cost metrics and constraints combining several logical operators and metrics, see [40]. MUCAPS also integrates the access technology used by the device receiving the application resource into its decision making, thus possibly adapting its decisions accordingly.
- DTM reduces traffic transfer costs on inter-domain links. It works for both traffic-volume and 95th-percentile charging rules. It reacts dynamically to traffic changes on inter-domain links, moving some part of the traffic (manageable traffic) from one link to another. This management mechanism works in a distributed way; it requires cooperation between operators, namely the ISPs receiving and generating traffic. It has been applied to inter-Cloud communication for Clouds located in different ISP domains; this traffic is classified as manageable traffic.
- ICC attains substantial cost savings for the ISP under both full information and partial information (traffic expectation from statistics) via smart traffic management. ICC always attains the transit charge reduction goal under full information on traffic. ICC is net neutral: it incentivizes the key ISP business customers that generate time-shiftable traffic to allow ICC to manage the time-shiftable portion of their traffic, through a discount proportional to each customer's fair share of the total volume of traffic managed by ICC. The pricing scheme associated with the ICC mechanism is a good compromise between simplicity and sophistication to offer the right incentives.
- DTM++ combines the "traffic shift in space" of DTM and the "traffic shift in time" of ICC to achieve further optimization of the traffic distribution across multiple transit links, while delaying delay-tolerant traffic when transit links are heavily loaded. The DTM part of the mechanism distributes traffic through tunnels traversing the inter-domain links. It tries to keep stable the proportion of traffic (background and manageable) that traverses each link. DTM++ does not control the amount of traffic allowed to pass a particular link; only the aforementioned proportion is controlled. The problem of excess traffic is mitigated by the ICC functionality that controls the amount of traffic sent by clouds. Only the time-shiftable traffic part (selected TCP traffic) generated by a cloud can be controlled. DTM++ has to mark the time-shiftable traffic, which can then be recognized by a hierarchical policer that discards this part of the traffic when it exceeds a predefined limit; the TCP sliding window then regulates the amount of traffic reaching the inter-domain links (a minimal policer sketch is given after this list).
- MRA allocates multiple heterogeneous resources, such as CPU, RAM, disk space, and bandwidth, in a fair manner between the customers of a cloud or a cloud federation. As presented in D2.5 [8], the results show that physical resources can have complex/unpredictable dependency patterns when consumed by VMs.
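The following minimal sketch (illustrative Python, not project code; rates, burst sizes, and the two-bucket structure are our assumptions) shows the kind of hierarchical policing described for DTM++: delay-tolerant traffic is subject to its own rate limit on top of the link-level one, and dropped excess is throttled by the senders' TCP windows.

```python
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps
        self.tokens = burst_bits
        self.burst = burst_bits
        self.last = 0.0

    def allow(self, now, size_bits):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            return True
        return False

link = TokenBucket(rate_bps=10e9, burst_bits=1e8)       # whole-link limit
shiftable = TokenBucket(rate_bps=4e9, burst_bits=4e7)   # time-shiftable share

def police(now, size_bits, is_time_shiftable):
    # Time-shiftable packets must conform to their own limit first; the
    # dropped excess is recovered later by the sender's TCP window. All
    # packets are then subject to the link-level bucket.
    if is_time_shiftable and not shiftable.allow(now, size_bits):
        return "drop"
    return "forward" if link.allow(now, size_bits) else "drop"

print(police(0.001, 12_000, is_time_shiftable=True))    # -> "forward"
```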
When we consider RB-HORST, we have three further beneficiaries without a direct monetary incentive, which nevertheless obtain substantial efficiency gains that inevitably translate into cost reduction, and thus into indirect monetary gains. These stakeholders are the CSP, who saves resources, the HRM (home router manufacturer), who sells dedicated equipment, and finally the DCO, who saves energy. All these stakeholders and their benefits are presented in Section 6.
Another important note is that the RB-HORST equipment may operate without any ISP support, as it is a single router with the RB-HORST software. If an RB-HORST-enabled home router is sold as an appliance, support is provided by the vendor. If the RB-HORST home router is based on an open source implementation, software updates are provided by the group distributing the software. One can say that this mechanism is ISP/CSP/DCO agnostic. Any single end user can deploy this device without any permission; the end user only relies on the HRM offering this solution. From the perspective of the ISP and the end user, their motivations for RB-HORST deployment are aligned and not contradictory. The only downside of the uNaDa deployment is the increase of the end users' energy consumption due to the RB-HORST functionality.
An ISP may easily deploy MUCAPS to optimize routing costs and resource usage while maintaining end-user QoE. As MUCAPS is deployed within the ISP network, its customers may benefit from improved resource availability without needing to buy anything. End users may get refined guidance based on their connection type, to better adapt the content location to the capabilities of their current access. To this end, users may be asked to agree to having their connection type disclosed to the ISP. As for the value chain in terms of network organization: the end-user gets a better QoE as a result of ISP-initiated application traffic optimization. The ISP gets optimized routing costs and resource usage by means of direct usage of MUCAPS, or through applications and end-users accepting to use the MACAO option. Likewise, the CSPs get an increased satisfaction rating from their customers. If CDNs use MACAO to optimize their application traffic, they can improve the satisfaction of the CSPs buying their services and will easily keep them as customers. Last, the ISP saves inter-domain traffic costs due to content delivery from sources hosted locally or in a peering ISP network.
OFS
The ISP-friendliness of the TM mechanisms has been a prerequisite during the design and development phases. Thus, the TM mechanisms are aligned with the interests of a real ISP due to the transit cost minimization/reduction attained by means of most of the OFS TM mechanisms, i.e., DTM or ICC and their combination, namely DTM++, as well as selected EFS TM mechanisms, such as SEConD. The latter is discussed in the synergies part of this section, thus it is not re-discussed here.
In the case of DTM/DTM++, three stakeholders were identified: ISP, DCO, and CSP. The deployment of DTM/DTM++ requires cooperation between the site that receives traffic and the one that generates traffic. The mechanism has been designed to lower the ISP's transit costs by shaping inter-Cloud communication. If Clouds are located in different domains, they operate on resources provided by the DCOs. The main incentive originator in the case of the DTM/DTM++ solutions is the ISP aiming to decrease its transit cost. Note that if the ICC payment rule is adopted (D2.5 [8], Appendix F), then incentives are provided to the DCOs as well by their respective home ISPs. Several incentive chains that encourage partners to deploy DTM or DTM++ were considered.
In the simplest case (Figure 64(a)), ISP-A receives traffic from the cloud hosted by ISP-B. ISP-A asks ISP-B to deploy the respective DTM part, and offers the same DTM deployment to ISP-B. Both ISPs benefit since they decrease their transit costs. In Figure 64(b) and Figure 64(c) we present the situation where DCOs or CSPs are federated. In this case, an incentive originated by one stakeholder may result in a sequence of incentives, as also presented in Figure 64. ISP-A offers DCO-A a lower cost for connecting to its network infrastructure. This cost reduction can take place if DCO-A convinces DCO-B to deploy DTM (Figure 64(b)). The incentive scheme used by DCO-A to do this would be that DCO-A can offer some free services to DCO-B (which can improve DCO-B's operation and competitiveness). The incentives of the value chain stakeholders are once more aligned and not contradictory, thus all stakeholders attain tangible benefits. A similar sequence of incentives may appear in the case of a CSP federation. ISP-A initiates the whole encouragement procedure for DTM deployment by contacting DCO-A. Next, DCO-A offers a reward, such as a discount on the infrastructure services provided, to CSP-A, which is federated with CSP-B. CSP-B operates in the ISP-B network, where it can deploy the DTM elements (SDN controller and OpenFlow switch, e.g., Open vSwitch [48]) in the form of virtual or physical machines (the cloud can operate on DCO resources, so with virtualization the deployment can be done without the DCO's permission). Each mentioned player gains some benefit.
Figure 64: Sequences of incentives for DTM and DTM++ in the context of stakeholders' relations.
When ISP-A wants to use DTM++, cooperation with CSP-B is required, since the latter is responsible for marking the delay-tolerant traffic. So again the initial incentive starts with ISP-A and the final participant is CSP-B. The considered sequences of incentives for DTM++ are presented in Figure 64(c), (d), (e). When some stakeholders are federated, this relation allows omitting ISP-B in the sequence (Figure 64(c), (d)). The federated entities are usually in a direct relation and have common business aims. For instance, in Figure 64(d), where DCO-A is federated with DCO-B (DCO-B delivers resources for operation to CSP-B), ISP-A may contact CSP-B via DCO-A instead of contacting ISP-B, who hosts DCO-B (who in turn has a business relation with CSP-B). It may be easier to encourage DCO-A than ISP-B to contact CSP-B. All incentives appearing in this sequence are compatible.
Last but not least, these incentive chains, though differently structured for federated and independent operation of the stakeholders, all result in aligning the stakeholders' incentives. This is extremely important, since federations are gaining momentum as a means of reducing CAPEX and OPEX for ISPs and Clouds, while maintaining elastic access to resources over large geographical regions with fine geo-spatial granularity. This trend is expected to intensify in the coming years due to the increasing adoption of the Software Defined Networking and Network Function Virtualization paradigms, which render the cost-efficient, fast, and dynamic provisioning of Network as a Service sustainable. Regarding Clouds and overlays, such federations already exist, e.g., XiFi, OnApp (details provided in D2.4 [7], Section 3).
Finally, note that the optional Cloud-ISP interface of the ICC mechanism, whose
implementation is not part of the current DTM++ prototype, can serve as a means of
communicating preferences and providing incentives between the cloud and the network
layers, thus enabling the on-demand creation of such value and incentives chains.
Incentives for resource sharing and load/VM migration among Clouds (with or without
federation) or load sharing among home routers have also been investigated by dedicated
models in D2.5 [8], Section 4. The optimization criteria served are minimization of transit
charges for the ISP, cloud metrics, QoS/QoE and energy metrics. These cross-layer
optimizations also form new incentive chains, which we refrain from presenting here for
brevity reasons. Once again, the direct and indirect benefits stemming from the operation
of the respective SmartenIT mechanism are propagated from the chain originator across
the stakeholders of the value chain by means of incentive schemes such as discounts,
monetary rewards and improved performance/richer services functionality.
Tussle Analysis
SmartenIT has performed tussle analysis within the scope of the two scenarios, namely OFS and EFS, of their hybrid combination including synergies between SmartenIT mechanisms, as well as of the SmartenIT models. Since an exhaustive analysis of the entire SmartenIT solution space would not be feasible, the project opted to provide an indicative analysis over specific instances of each major SmartenIT solution type, namely mechanisms, scenarios, and models. This analysis has also been explicitly mapped to the business models that are relevant for SmartenIT and have been presented in Chapter 3 of SmartenIT deliverable D2.4 [7].
SmartenIT designed mechanisms that address the incentives of the various involved stakeholders, so as to avoid potential tussles among them. We focused on two types of incentives: monetary incentives, e.g., revenues for providing a specific service, and non-monetary incentives.
To this end, available OSN datasets were collected, and a Facebook dataset was analyzed to identify important parameters for predicting popularity [20]. In order to optimize the prefetching accuracy of content, a regression model was developed. This model was used in SmartenIT mechanisms like RB-HORST to exploit the meta-information provided in social networks. To use the services provided by RB-HORST, a user has to connect with Facebook and enter her or his credentials. Thus, meta-information on the user's interests and friends is acquired. This meta-information is used to improve content delivery by prefetching content, and to enable a trusted WiFi hotspot for users who are friends on Facebook. In a large-scale study (cf. Section 5.2.2), the social-aware concept of RB-HORST was proven to save more than half of the traffic by caching popular content. In addition to the social prediction, an overlay-based prediction algorithm was investigated and implemented. This relies on the distance between uNaDas, in terms of AS hops, and aims to minimize the inter-domain traffic. This works because friends in social networks who are potentially interested in the same content are, with high probability, in close proximity.
In SEConD, the meta-information provided in OSNs is exploited for category-based prefetching. Content is prefetched if a certain threshold is exceeded; this threshold is based on the percentage of videos watched on the media channel and on the viewers' interests. The evaluation shows that, in the considered scenario, a reduction of the total inter-AS traffic of 87% is achieved on average, compared to the inter-AS traffic created by the traditional client-server architecture. The contribution of the origin server where the video is hosted is reduced by 88%. The SEConD mechanism enables QoE-aware content delivery for video streaming, utilizing available resources in end-user networks.
7.1.4 Energy Efficiency
Energy efficiency, as one of the main objectives within SmartenIT, was addressed in the MONA approach by analyzing the power consumption of the full deployment, which was constructed with energy efficiency in mind.
From the measurements of MONA with respect to the RB-HORST deployment, it was derived that the power consumption on end-user devices using caching and WiFi offloading can increase, in particular for highly interactive applications requiring only little traffic. Simulations have shown that on-demand WiFi offloading results in slightly higher energy consumption on the mobile devices, but this effect can be mitigated by caching the content on the local uNaDas, thus providing improved response times and overall throughput compared to the on-demand case, while saving energy.
By offloading the cellular network, the load within the network is equalized. By reducing
traffic peaks due to content availability on the uNaDas, the currently installed infrastructure
may be used for a longer time, making it possible to serve a larger number of users with
the currently deployed hardware. This reduces the energy cost per user, which would
otherwise increase in the cellular network due to increasing demand.
Caching video content on the uNaDas and re-distributing it within the ISP's network leads to effects on the traffic generated in the Internet backbone similar to those in the cellular case. An equalized traffic distribution and reduced peak demands allow deferring infrastructure upgrades, as peaks in traffic demand are smoothed out. Hence, satisfying the demands of a larger number of users with the currently deployed hardware becomes possible.
The content providers and CDNs also benefit in terms of energy efficiency from the
generally reduced load and in particular reduced traffic peaks, as these networks need to
be built to fulfill the peak demand. By exchanging content within the ISP, the currently
deployed hardware may be used longer.
In the OFS case, the energy consumption, as discussed in D4.1 [13], is mainly static. Still, energy savings can be achieved by re-configuring interfaces for a lower data rate, or by sending traffic along the paths with the most energy-efficient intermediate hops. The ICC Cloud layer, though not implemented and integrated in the DTM++ prototype within the project lifespan, also considers energy efficiency as a key decision factor when selecting the optimal destination for inter-cloud communication use cases where multiple candidate destinations for the respective flows exist, e.g., when performing a datacenter backup. This is an attractive feature that also contributes to the energy awareness and efficiency of the SmartenIT traffic management mechanisms.
considerations as used by tools such as "Traceroute" can benefit from the addition of ISP policy considerations, in exchange for the ISP providing hints on its network capabilities.
Overall, the SmartenIT integrated prototypes offer a wide variety of solutions that can be adopted (partly or in their entirety) by vendors and solution providers, so as to meet the modern needs of ISP subscribers and deal with the complex traffic management problems that operators face nowadays. Being research prototypes, the SmartenIT solutions cannot be adopted as-is to become (parts of) commercial solutions. However, the underlying design and implementation principles allow their extension for the benefit of the involved stakeholders.
Objective 1
SmartenIT will lay the basis for its traffic management mechanisms for traffic characterization by the definition of key stakeholders, which require mechanisms with incentive compatibility, QoE- and social-awareness, and energy efficiency for emerging scenarios of tomorrow's applications.
Objective 2
SmartenIT will develop theory for new pricing and business models, applicable in example use cases, combined with the design and prototyping of appropriate traffic management mechanisms that can be combined with today's and the IETF's proposed relevant communication protocols.
Objective 3
SmartenIT will investigate such models economically, QoE-, and energy-wise in theory by simulations, to guide the respective prototyping work in terms of a successful, viable, and efficient architectural integration, framework and mechanisms engineering, and its subsequent performance evaluation.
Objective 4
SmartenIT will evaluate use cases selected out of three real-world scenarios (1) inter-cloud
communication, (2) global service mobility, and (3) exploiting social networks information
(QoE and social awareness) by theoretical simulations and on the basis of the prototype
engineered.
Objective 5
SmartenIT will disseminate and publish project results, will support the standardization work of relevant IETF working groups, and will exploit those results with respect to the partner-specific plans.
Objective 1
WP1 has investigated the SmartenIT environment by identifying traffic characteristics of cloud-based applications, characterizing stakeholder relationships, classifying cloud services, and selecting applications for simulations and trials.
After identifying the key aspects of the scenarios from different perspectives, the basic scenarios were consolidated and merged into two major scenarios, namely the end-user focused scenario (EFS) and the operator focused scenario (OFS). The EFS focuses on the retail market and aims to improve the QoE for end users by considering the mobility of users, services, and contents. The EFS further aims to utilize social-awareness and to improve content delivery and placement by prefetching, caching, and data offloading. On the other hand, the OFS focuses on the wholesale market and aims to optimize costs and energy efficiency for operators and providers through the federation of data center and cloud operators, as well as the interaction between operator and ISP.
Realistic and interesting use cases were specified and associated parameters and metrics
defined. Drafts for traffic management solutions were designed employing the identified
use cases.
Finally, WP2 has developed traffic management mechanisms that pursue and promote incentive-compatibility across stakeholders, e.g., a reduction of transit costs for ISPs with a simultaneous improvement of QoS for CSPs, or an improvement of end-users' QoE with a simultaneous reduction of inter- and intra-domain traffic.
Objective 2
WP2 has identified eleven relevant use cases in the playing field of SmartenIT. A well-structured template for use case description has been built to prove that the solutions proposed in SmartenIT are applicable to solving real and concrete problems of incentive-based traffic management for overlay networks and cloud-based applications, driven by social awareness, QoE awareness, and energy efficiency. Moreover, WP2 studied business models extensively and performed a mapping of use cases and mechanisms to specific business models. Tussle analysis has been employed for the single OFS and EFS frameworks, so as to demonstrate the incentive compatibility achieved by the SmartenIT mechanisms and their synergies.
A pricing classification framework has been introduced that details where the different layers and granularities of pricing mechanisms related to SmartenIT operate and how they work together. A concrete pricing scheme for ICC was proposed, which meets the goals of simplicity, incentive compatibility, and design-for-tussle. Furthermore, a model of Cloud federation, including the pricing of users and the pricing of each provider by the others, has been studied; both strong and weak federations have been considered, and profit sharing in the context of the ICC mechanism has been introduced. Finally, WP2 has designed TM mechanisms that address the main aspects of the OFS and the EFS, namely the main design pillars of SmartenIT: incentive-compatibility, inter-data center communication, social awareness, and energy efficiency.
WP3, on the other hand, has transformed the most prominent mechanisms identified in WP2 into research prototypes. These prototypes have been implemented following industry standards and have adopted well-accepted communication protocols, like OpenFlow, REST, etc. Among the implemented prototypes, DTM/DTM++ and RB-HORST have closely followed the architectural design established in the project and hence are integrated into the SmartenIT architecture. MONA and MUCAPS have followed standalone implementations, but an initial mapping to the SmartenIT architecture has also been provided.
Objective 3
SmartenIT has investigated extensively the economic aspects of inter-cloud communication and has formulated a pricing framework, while pricing aspects of a cloud federation have also been explicitly described. QoE has been considered in all use-cases of the EFS as a major KPI; it has been indirectly addressed by the RB-HORST, SEConD, MONA, and RBH++ mechanisms, and directly by MUCAPS. Furthermore, theoretical models have been developed and employed by TM mechanisms so as to quantify the energy consumption of mobile devices in the EFS.
Theory developed in WP2 has been evaluated by simulations to guide the respective
prototyping work in terms of a successful, viable, and efficient architectural integration,
framework and mechanisms engineering in WP3, and its subsequent performance
evaluation in WP4.
During the design (WP2) and development (WP3) of the prototypes, work focused on the QoE of the end-user and on reducing the energy consumption of the overall system. QoE is addressed by moving content as close to the end-user as possible, while energy consumption is addressed by using the most energy-efficient data transfer mode that provides sufficient QoE to the end user.
The architecture designed and the prototypes developed within WP3 have been based on the requirements and use cases defined in WP1 and WP2. The software engineering methods followed during the implementation phase led to prototypes that are very close to industry standards. The generated prototypes can easily be adapted to fit established systems, as well as extended to match specific requirements.
Objective 4
SmartenIT considered two scenarios, OFS and EFS. The OFS covers inter-cloud communication and even extends this scenario, because it considers the relation between underlay and overlay networks. The EFS is a compound of the last two scenarios mentioned in Objective 4, namely global mobility and the exploitation of social network information.
The prototypical management mechanisms DTM/DTM++ and ICC, addressing the OFS, and RB-HORST, MONA, and MUCAPS, addressing the EFS, were inspected in various ways. The simulations of the OFS mechanisms (DTM and ICC) gave promising results, and the performance indicators showed the validity of these mechanisms' concepts. All mechanisms were validated in the testbed environment. Multiple testbeds were designed; the methods for testing the software products evolved as well, and we constructed several software-based testbeds in different locations at the partners' premises. Finally, a hardware-based testbed (which uses routers produced by leading vendors) was prepared. The tests in the hardware testbed show that the SmartenIT system (and particularly DTM++) can work in real network environments; for example, operator-class routers were used. For the performance tests, we designed and implemented dedicated software for traffic generation, online visualization, and measurement. For ICC we built a separate hardware testbed dedicated to functional tests, and the same was done for DTM++.
Almost the same approach was applied to RB-HORST. Theoretical analysis and some simulations were first done for HORST and RB-Tracker, which constitute RB-HORST. The obtained results motivated the SmartenIT consortium to choose both mechanisms for implementation. The authors of these mechanisms decided to put their efforts into specifying and implementing a single solution offering the functionalities of both mechanisms. RB-HORST and MONA were tested in a real network where a group of end users participated in the experiments. These EFS performance experiments also proved the maturity of their concepts. The performance indicators measured in all experiments (DTM, RB-HORST, MONA, MUCAPS) showed the operational quality of these mechanisms and that they are really
beneficial for the stakeholders. The experiments also showed that QoE was improved and that the social awareness used by RB-HORST properly supports the operation of this mechanism.
The MUCAPS prototype was tested with end users connected via various technologies and with servers physically deployed both in the test lab and on the Internet. Its benefits for users were quantified in terms of an actual Video Quality Score and associated QoE-specific metrics measured via the fully-fledged pre-industrial QoE analysis tool VITALU. ISP benefits were evaluated in terms of the augmentation of a multi-objective utility function of the routing and bandwidth performance. This utility is computed based on abstracted routing costs and path bandwidth performance values assumed to be entered by the ISP in an ALTO Server. The experiments illustrate the importance of the application as well as of the user connection type. For example, results show that maximizing user bandwidth is not systematically the best option when the end-user capabilities cannot sustain it.
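As an illustration of such a multi-objective utility (a hedged sketch only: the weights, normalization, and candidate values below are our assumptions, not the actual MUCAPS/MACAO formula, which is specified in [40]):

```python
def utility(routingcost, bandwidth, w_rc=0.5, w_bw=0.5,
            max_cost=100.0, max_bw=100.0):
    # Normalize both metrics to [0, 1]; a lower routing cost and a higher
    # path bandwidth both increase the utility.
    return w_rc * (1 - routingcost / max_cost) + w_bw * (bandwidth / max_bw)

candidates = {"AEP1": (20, 80), "AEP2": (5, 10)}  # (routing cost, Mbps)
best = max(candidates, key=lambda a: utility(*candidates[a]))
print(best)  # AEP1: for video, bandwidth outweighs the higher routing cost
```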
Objective 5
Since dissemination, exploitation, external liaisons, and standardization have dedicated deliverables (D5.3 [17] and D5.4 [18]) that are due by the end of the project, those aspects are not covered here. Project activities and assessment in those areas are covered in the respective deliverables.
A proper time schedule and the use of multiple replicas of the SmartenIT testbed deployed at the partners' premises have led to a successful completion of the experiment execution. It is worth observing that, in fact, a broader set of experiments than initially planned has been executed. The extra experiments were conducted to fully assess the SmartenIT performance under a broader range of identified configuration parameters (DTM case), or to provide an additional testbed for a live deployment of a SmartenIT standalone implementation (the MUCAPS traffic management mechanism), whose deployment had not been initially considered but was decided at a later stage.
The overall experiment plan has been carefully considered in order to cover all SmartenIT technical solutions to the fullest extent.
The SmartenIT project has produced a large variety of assets in the form of traffic management mechanisms, which span the two identified reference scenarios, namely OFS and EFS, providing a rich landscape of solutions able to be deployed over a wide range of practical real-world scenarios. The traffic management mechanisms identified by SmartenIT provide concrete and measurable enhancements to the actual cloud-Internet landscape.
Solutions for OFS scenario (DTM/DTM++)
The main SmartenIT solution for the OFS scenario, namely DTM/DTM++, has been deeply evaluated by means of well-focused experiments conducted on the implemented prototype, confirming that the prototype is stable, achieves good performance, and is able to withstand large-scale, intensive, real-world traffic patterns. Moreover, the DTM/DTM++ prototype is able to successfully manage traffic under different combinations of background traffic levels and traffic profiles.
DTM/DTM++ thus proves to be an excellent solution to be deployed in an ISP domain, for the following key reasons:
- It boosts the Cloud Federation business models, thus demonstrating that the SmartenIT solutions for the OFS scenario are deployable not only in a scenario dominated by large-scale Cloud and Internet Service Providers, but also in a market segment composed of smaller service providers that team up to widen their geographical footprint.
- As an effect of the previous point, this SmartenIT solution also achieves (at least in an indirect way) energy efficiency in the ISP domain by realizing a fairer infrastructure utilization, which is appropriate for serving traffic exchanged between Clouds for energy efficiency reasons.
- The solution depicted so far is, on the one side, fully applicable over typical Internet best-effort connections, and on the other side, by having different management schemes for delay-sensitive and delay-tolerant traffic, it realizes a sort of QoS-driven model where traffic is managed according to a QoS classification.
- It has a high deployability potential, being a solution that can be deployed with several combined features and cast over different scenarios, from public to hybrid cloud deployments.
The SmartenIT solutions for the EFS scenario share the following key characteristics:
- They are based on incentivized collaboration among different administrative domains and provide a technical framework that allows traffic management between the Cloud provider and the end-user in a win-win situation.
- They entail incentivized collaborations among end-users, based on a novel trust scheme that leverages social-awareness information.
- They can be easily deployed at end-user and ISP premises with negligible impact on existing infrastructures.
- They provide novel content caching and prefetching schemes for content downloading, thus indirectly saving inter-domain traffic.
9 SMART Objectives
Through this document, four SmartenIT SMART objectives defined in Section B1.1.2.4 of the SmartenIT Description of Work (DoW, [1]) have been partially addressed: namely, one overall (O4, see Table 42) and three practical (O1.1, O3.4, and O4.3; see Table 42) SMART objectives.
The overall Objective 4 is defined in the DoW as follows:
Objective 4 SmartenIT will evaluate use cases selected out of three real-world scenarios
(1) inter-cloud communication, (2) global service mobility, and (3) exploiting
social networks information (QoE and social awareness) by theoretical
simulations and on the basis of the prototype engineered.
This document gives an overview of SmartenIT in the current Internet (Section 3). Moreover, a description of the assessment criteria and methodology is provided (Section 4), and the testbed experiment results and assessment achieved during the SmartenIT validation activities are documented (Section 5). Furthermore, the assessment of the main project outcomes is given (Section 6), and an overall assessment of all SmartenIT solutions and mechanisms is provided (Section 7).
These results constitute the outcome of tasks T4.3 and T4.4, in which the SmartenIT prototype was evaluated and assessed in experiments. As such, deliverable D4.3 marks the final step in the evaluation of SmartenIT use cases and real-world scenarios as part of Objective 4, which started with deliverable D4.1 (Testbed Set-up and Configuration, end of project month 24) and was extended in deliverable D4.2 (Experiment Definition and Set-up, end of project month 30).
Table 42: Overall SmartenIT SMART objective addressed. (Source: [1])

Objective No.: O4
Specific: Evaluation of use cases
Measurable (Deliverable Number): D4.1, D4.2, D4.3
Achievable: Implementation, evaluation
Relevant: Complex
Timely (Milestone Number): MS4.3

Objective ID: O1.1
Achievable: Design, simulation (T1.4, T2.1, T2.2, T2.5, T4.4)
Relevant: Major output of relevance for providers and in turn users
Timely (Project Month): M36

Objective ID: O3.4
Specific: How to monitor energy efficiency and take appropriate coordinated actions?
Achievable: Design, simulation, prototyping (T1.3, T2.3, T4.1, T4.2, T4.4)
Relevant: Highly relevant output of relevance for users
Timely (Project Month): M36

Objective ID: O4.3
Specific: Which assessment schemes will have to be run by the providers before successful deployment of the developed services in their network?
Measurable (Metric): Number of identified/considered schemes, number of feasibility analyses for the identified schemes
Achievable: Design, simulation, prototyping (T4.4)
Relevant: Extremely relevant output of relevance for providers
Timely (Project Month): M36
connections on the mobile device, the power model of a widely used device (Nexus 5)
was selected. Finally, the energy assessment of RB-HORST was conducted using
dedicated experiments assessing the network performance (i.e., RTT and throughput)
over different connection options (cf. Section 5.2.3).
Objective O4.3: Which assessment schemes will have to be run by the providers
before successful deployment of the developed services in their network?
After deploying DTM for testing, an operator may perform the tests described in
Section 5.1. Depending on the charging scheme, an ISP can assess whether the
mechanism offers a satisfactory transfer cost reduction. As mentioned before, this
strongly depends on the cost functions used on the inter-domain links, but the ISP is
always in a no-lose position. An operator needs to observe the system behavior over a
few billing periods, because the DTM system learns the optimization reference
parameters, namely the reference vector, from traffic measurements; this parameter is
then used in the consecutive billing periods. DTM++ additionally requires observing the
system behavior in the busy hours and the hours after. The experiment related to
DTM++ (cf. Section 5.1.3) exemplifies which parameters should be observed in order to
assess the efficiency of DTM++ and can be reused by a particular operator. An operator
possesses records of how much traffic from data centers/clouds traverses its inter-domain
links, so it can compare the traffic with DTM++ and without it, as sketched below.
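To illustrate the kind of comparison an operator can make, the following minimal Python
sketch applies per-link cost functions to traffic volumes measured in two billing periods;
the linear cost function, prices, and volumes are hypothetical examples only, and the
concrete cost functions must match the ISP's actual charging scheme.

# Minimal sketch: estimate the transfer cost reduction after enabling DTM by
# applying per-link cost functions to measured inter-domain traffic volumes.
# All cost functions, prices, and volumes below are hypothetical examples.

def link_cost(volume_gb, rate_per_gb):
    # Hypothetical linear tariff for one inter-domain link; real deployments
    # may use volume- or 95th-percentile-based charging instead.
    return volume_gb * rate_per_gb

def total_cost(volumes_gb, rates_per_gb):
    # Total transfer cost over all inter-domain links.
    return sum(link_cost(v, r) for v, r in zip(volumes_gb, rates_per_gb))

rates = [0.05, 0.02]          # assumed per-GB prices for link 1 and link 2
without_dtm = [800.0, 200.0]  # volumes (GB) measured in a period without DTM
with_dtm = [300.0, 700.0]     # DTM shifts traffic towards the cheaper link

baseline = total_cost(without_dtm, rates)
optimized = total_cost(with_dtm, rates)
print("Estimated cost reduction: %.1f%%" % (100.0 * (baseline - optimized) / baseline))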
Also in the case of RB-HORST, after deploying the mechanism, an operator may perform
tests in which it observes the traffic on its inter-domain links. The amount of traffic
traversing the inter-domain links should be monitored, with special attention paid to
traffic directed to networks where RB-HORST has been deployed. On a long time scale,
the measurements should show a decrease of the traffic related to the communication
between RB-HORST users and the outside world (outside the operator's domain).
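A minimal sketch of such a long-timescale check, written in Python with purely illustrative
daily volumes, could look as follows:

# Minimal sketch: compare the average daily inter-domain traffic towards
# RB-HORST-enabled networks before and after the deployment. The daily
# volumes (in GB) are illustrative only; in practice they would be read
# from the operator's inter-domain link counters.
from statistics import mean

before_deployment = [410, 395, 420, 405, 415]
after_deployment = [380, 360, 350, 340, 335]

drop = 100.0 * (mean(before_deployment) - mean(after_deployment)) / mean(before_deployment)
print("Average inter-domain traffic decrease: %.1f%%" % drop)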
In summary, the SMART objectives set within the SmartenIT DoW that are of relevance
for D4.3 and for the respective tasks within WP4, i.e., T4.3 and T4.4, have been met.
10 References
[1] The SmartenIT project: Grant Agreement for STREP: Annex I 'Description of Work (DoW)'; 2012.
[2]
[3] The SmartenIT project: Deliverable D1.2: Report on Cloud Service Classifications and Scenarios; October 2013.
[4]
[5]
[6]
[7]
[8]
[9] The SmartenIT project: Deliverable D3.1: Report on Initial System Architecture; April 2013.
Annual IEEE Conference on Local Computer Networks (LCN), Clearwater Beach, FL, USA, October 2015.
[21] Partnering Orchestration, Monetisation and Ecosystem Enablement for CSPs and
DSPs, Infonova, on-line:
https://www.infonova.com/pdf/Infonova_R6__ODE_EEP_Oct_2013.pdf
[22] Windows Phone, Wi-Fi Sense FAQ, on-line: https://www.windowsphone.com/en-us/how-to/wp8/connectivity/wi-fi-sense-faq
[23] Bundesnetzagentur, Initiative on network quality (in German), https://www.initiative-netzqualitaet.de
[24] Cisco Systems, Visual networking index, forecast and methodology, 2014-2019; White paper series (2015): https://www.cisco.com
[25] G. Hasslinger, ISP platforms under a heavy peer-to-peer workload, Proc. Peer-to-Peer Systems and Applications, Eds.: R. Steinmetz and K. Wehrle, Springer LNCS 3485 (2005) 369-382.
[26] G. Hasslinger, G. Nunzi, C. Meirosu, C. Fan and F.-U. Andersen, Traffic engineering
supported by inherent network management: Analysis of resource efficiency and cost
saving potential, International Journal on Network Management (IJNM), Special Issue
on Economic Traffic Management, Vol. 21 (2011) 45-64
[27] Internet Engineering Task Force (IETF)- ALTO WG on application layer traffic
optimization, <tools.ietf.org/wg/alto/charters>
[28] Internet Engineering Task Force (IETF)- CDNI WG on CDN interconnection,
<tools.ietf.org/wg/cdni/charters>
[29] Internet Engineering Task Force (IETF)- LMAP WG on large scale measurement of
broadband performance <tools.ietf.org/wg/lmap/charters>
[30] Sandvine Inc., Global Internet Phenomena, Asia-Pacific & Europe, White paper report
(2015) <www.Sandvine.com>
[31] A.-J. Su, D.R. Choffnes, A. Kuzmanovic and F.E. Bustamante, Drafting behind Akamai, IEEE/ACM Trans. on Networking 17 (2009) 1752-1765.
[32] DE-CIX (statistics and traffic traces), available on-line at: https://www.de-cix.net/
[33] Cachebench at Github, on-line: https://github.com/pettitor/cachebench
[34] F. Kaup, P. Gottschling, and D. Hausheer, PowerPi: Measuring and Modeling the
Power Consumption of the Raspberry Pi, in LCN, 2014.
[35] F. Kaup, M. Wichtlhuber, S. Rado, and D. Hausheer, Analysis and Modeling of the
Multipath-TCP Power Consumption for Constant Bitrate Streaming, Darmstadt, 2015.
[36] F. Kaup and D. Hausheer, Optimizing Energy Consumption and QoE on Mobile
Devices, in International Conference on Network Protocols (ICNP), 2013, pp. 13.
[37] I. Poese, B. Franck, B. Ager, G. Smaragdakis, S. Uhlig and A. Feldmann: Improving
Content Delivery with PaDIS; Internet Computing, June 2012
[38] Definition of Internet exchange point, Wikipedia,
http://en.wikipedia.org/wiki/Internet_exchange_point
[39] Internet Transit Prices - Historical and Projected; White Paper; DrPeering International; on-line: http://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
11 Abbreviations
3G  Third Generation
5G PPP  5G Public-Private Partnership
AEP  Application EndPoint
ADSL  Asymmetric Digital Subscriber Line
AGH  AGH University of Science and Technology
ALBLF  Alcatel-Lucent Bell Labs France
ALTO  Application-Layer Traffic Optimization
AP  Access Point
API  Application Programming Interface
AS  Autonomous System
AUEB  Athens University of Economics and Business
BGP  Border Gateway Protocol
BWS  BandWidth Score
CAPEX  Capital Expenditures
CDN  Content Delivery Network
CET
CLU  Cross-Layer Utility
CPU  Central Processing Unit
CSP  Cloud Service Provider
DA
DC  Data Center
DCO  Data Center Operator
DFrz  Duration of Freezes
DHT  Distributed Hash Table
DNS  Domain Name System
DoW  Description of Work
DRM
DSL  Digital Subscriber Line
DTM  Dynamic Traffic Management
EEF
EEP
EFS  End-user-Focused Scenario
EP  End Point
FTTH  Fiber To The Home
FTTX  Fiber to the X
GPerf  Gain
GRE  Generic Routing Encapsulation
H2H  Human-to-Human
H2M  Human-to-Machine
HD  High Definition
HTTP  Hypertext Transfer Protocol
ICC  Inter-Cloud Communication
ICN  Information-Centric Networking
ICOM  Intracom Telecom
IETF  Internet Engineering Task Force
I/O  Input/Output
IP  Internet Protocol
IRT  Interoute S.p.A.
ISP  Internet Service Provider
J  Joule
JSON  JavaScript Object Notation
KPI  Key Performance Indicator
LD  Low Definition
LNR
LRU  Least Recently Used
LTE  Long Term Evolution
M2M  Machine-to-Machine
MACAO
MARC
MARS
MBR
MONA
MPLS  Multiprotocol Label Switching
MRA  Multi-Resource Allocation
M-to-M  Multiple to Multiple
MUCAPS  Multi-Criteria Application endPoint Selection
NaaS  Network as a Service
NFrz  Number of Freezes
NREN  National Research and Education Network
NSP  Network Service Provider
NFV  Network Function Virtualization
OFS  Operator-Focused Scenario
OPEX  Operating Expenditures
OSN  Online Social Network
OTT  Over-The-Top
OVS  Open vSwitch
P  Power
P2P  Peer-to-Peer
PC  Personal Computer
PM  Person Month
PoP  Point-of-Presence
PSNC  Poznan Supercomputing and Networking Center
QoE  Quality of Experience
QoS  Quality of Service
RAM  Random-Access Memory
RC  Routing Cost
REST  Representational State Transfer
RB-HORST
RTT  Round-Trip Time
SD  Standard Definition
SLA  Service Level Agreement
S-to-S  Single to Single
SDN  Software-Defined Networking
SEConD
SMART  Specific, Measurable, Achievable, Relevant, Timely
SNMP  Simple Network Management Protocol
SSID  Service Set Identifier
SWOT  Strengths, Weaknesses, Opportunities, Threats
TCP  Transmission Control Protocol
TDG  Telekom Deutschland GmbH
TM  Traffic Management
ToS  Type of Service
TTL  Time To Live
TUD  Technische Universität Darmstadt
UDP  User Datagram Protocol
UEP  User EndPoint
uNaDa  user Nano Data center
UniWue  University of Würzburg
UZH  University of Zurich
W  Watt
WiFi  Wireless Fidelity
WLAN  Wireless Local Area Network
vINCENT  Virtual Incentives
VITALU
VM  Virtual Machine
VPC  Virtual Private Cloud
VPN  Virtual Private Network
VQS
12 Acknowledgements
Besides the authors, this deliverable was made possible due to the great and open help of the
WP4 team of the SmartenIT consortium. Many thanks to all of them! Special thanks go to the
internal reviewers Sergios Soursos (ICOM) and George Stamoulis (AUEB) for their detailed
review and valuable comments.
13 Appendices
13.1 Testbed extension: Odroid-C1
13.1.1 Overview
The Odroid-C1 is a replacement for the Raspberry Pi device, which is described as a
testbed extension in Deliverable D4.1, Section 5.3.1 (Extension: Raspberry Pi).
Therefore, only the setup procedure is described here, as the rest of the setup is the
same as for the Raspberry Pi.
A typical hardware setup of an Odroid-C1 is as follows:
Model: Odroid-C1
CPU: ARM Cortex-A5, 4 cores, 1.5 GHz
Memory: 1 GB
Storage: 16 GB microSDHC card with SD adapter (class 6 or 10)
WLAN dongle: Wireless N USB adapter
Operating System: Ubuntu 14.04 LTS with custom kernel
Finally, install the RB-HORST software and reboot the device with the WiFi dongle attached.
A script is started automatically during boot and completes the configuration process.
The device reboots and is then ready to be used.
http://odroid.in/ubuntu_14.04lts/
Then, an IP network is defined to be used as the address range for the remote access clients.
This network can use arbitrary IP addresses, as long as there are no overlaps with other
address ranges. In this description, 172.16.1.0/24 is used. For each admin, a /30 subnet is
used, with the lower usable IP address assigned to the VPN server side and the higher
address to the remote client. It has to be ensured that the selected IP network is allowed in
the local iptables firewall to access the management network. Furthermore, a unique port
number has to be assigned to each user: starting with port 1196 for OpenVPN, the port
number is increased until an unused port is found.
The remote admin configuration needs to be named accordingly; the <config-name> should
include the partner shortcut and the admin name. Create a file named <config-name>.conf
in the folder /etc/openvpn with the following content:
dev tun
ifconfig <172.16.1.X> <172.16.1.X+1>
secret <config-name>.key
keepalive 10 120
proto udp
port <unique port number>
persist-tun
status openvpn-<config-name>.log
The final step is to create a configuration file named <config-name>-client.ovpn for the
client as follows:
dev tun
proto udp
remote <public IP address of the VPN server> <unique port number>
ifconfig <172.16.1.X+1> <172.16.1.X>
route 10.200.0.0 255.255.0.0
keepalive 10 120
#secret dtm-agh.key
<secret>
<copy the content of the file <config-name>.key in this location>
</secret>
The administrator installs an OpenVPN client, e.g., from the OpenVPN website, and imports
the <config-name>-client.ovpn file to use the remote access.
https://openvpn.net/index.php/open-source/downloads.html
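Since the same /30 and port arithmetic has to be repeated for every admin, the steps above
can also be scripted. The following Python sketch, with hypothetical example names and
addresses, generates a matching pair of server and client configurations following the
scheme described above:

# Minimal sketch: generate the per-admin OpenVPN server and client
# configurations following the /30 addressing and port scheme above
# (172.16.1.0/24, lower usable address for the server, ports from 1196).

def make_configs(admin_index, config_name, server_public_ip):
    base = 4 * admin_index                  # each admin gets one /30 subnet
    server_ip = "172.16.1.%d" % (base + 1)  # lower usable address: VPN server
    client_ip = "172.16.1.%d" % (base + 2)  # higher usable address: client
    port = 1196 + admin_index               # unique port per admin

    server_conf = "\n".join([
        "dev tun",
        "ifconfig %s %s" % (server_ip, client_ip),
        "secret %s.key" % config_name,
        "keepalive 10 120",
        "proto udp",
        "port %d" % port,
        "persist-tun",
        "status openvpn-%s.log" % config_name,
    ])
    client_conf = "\n".join([
        "dev tun",
        "proto udp",
        "remote %s %d" % (server_public_ip, port),
        "ifconfig %s %s" % (client_ip, server_ip),
        "route 10.200.0.0 255.255.0.0",
        "keepalive 10 120",
    ])
    return server_conf, client_conf

# Hypothetical example: first admin of partner AGH, VPN server at 198.51.100.10.
server_conf, client_conf = make_configs(0, "agh-admin1", "198.51.100.10")
print(server_conf + "\n---\n" + client_conf)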
For traffic generation, the servers (virtual machines) Cloud-S and Background are used
(Figure 65). The server Background is used for the generation of background traffic; this is
done in the same way as in the case of pure DTM. On the server Cloud-S, an instance of an
HTTP server is installed together with the same traffic generator that was used for pure
DTM. In the case of DTM++, the traffic generator is used for generating delay-sensitive
traffic, and the iptables firewall is used for marking this traffic (Expedited Forwarding); this
generator generates UDP flows. In AS1, the server Cloud-R is deployed as a receiver of the
traffic from Cloud-S. The wget application is used for downloading files from the HTTP
server on Cloud-S. This traffic is not marked (Best Effort), is classified as delay-tolerant
from the ICC point of view, and consists of TCP flows.
The PortMirror switch is used for port mirroring. In particular, the links from routers AS1-BG1,
AS1-BG2, AS2, and AS22 are connected to separate ports, and the traffic from all these
ports is mirrored to another port connected to the TrafficAnalyzer. The TrafficAnalyzer is a
PC used for traffic filtering and counting; tcpdump is used for traffic analysis. The traffic is
filtered using the ToS field and the GRE source and destination addresses. This way, the
amount of traffic traversing each inter-domain link can be recognized, and by employing the
ToS field, delay-sensitive and delay-tolerant traffic can be distinguished. The amounts of
total, delay-sensitive, and delay-tolerant traffic on the inter-domain links are measured, as
the sketch after this paragraph illustrates.
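As a sketch of the ToS-based classification performed on the TrafficAnalyzer, the following
Python snippet splits a captured traffic volume into delay-sensitive (EF-marked) and
delay-tolerant traffic; it assumes the scapy library is installed and that a capture file from
the mirror port is available under the hypothetical name mirror_port.pcap.

# Minimal sketch: classify captured traffic by the ToS byte, as done on the
# TrafficAnalyzer. Assumes scapy (pip install scapy) and a hypothetical
# capture file recorded on the mirror port.
from scapy.all import rdpcap, IP

EF_TOS = 0xB8  # DSCP 46 (Expedited Forwarding) in the ToS byte, ECN bits zero

sensitive_bytes = 0
tolerant_bytes = 0
for pkt in rdpcap("mirror_port.pcap"):
    if IP in pkt:
        if pkt[IP].tos == EF_TOS:
            sensitive_bytes += len(pkt)   # delay-sensitive (EF-marked) traffic
        else:
            tolerant_bytes += len(pkt)    # delay-tolerant (best-effort) traffic

print("delay-sensitive: %d B, delay-tolerant: %d B" % (sensitive_bytes, tolerant_bytes))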
The S-Box-AS3 and the SDN controller are deployed in the same way as in the case of pure
DTM. S-Box-AS1 collects counter readouts from the inter-domain links via SNMP, but this
time the data is collected from routers AS1-BG1 and AS1-BG2 instead of AS1-DA. Note that
the tunnels are terminated on these routers in domain AS1; this is imposed by the
implementation of the ICC functionality in DTM++. In order to establish SNMP
communication between S-Box-AS1 and the mentioned border routers, the GUI was
appropriately modified. The traffic constraints for the hierarchical policers are sent by
S-Box-AS1 to routers AS1-ICC1 and AS1-ICC2 via NetConf; this communication must be
enabled on these routers via proper router configuration. The S-Box GUI also provides a
suitable NetConf configuration template.
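A minimal sketch of such a counter readout is shown below; it assumes the net-snmp
command-line tools are installed, SNMP is enabled on the border routers, and it uses
hypothetical management addresses, community string, and interface indexes.

# Minimal sketch: read the 64-bit outbound octet counters that S-Box-AS1
# polls from the border routers. Assumes the net-snmp tools (snmpget) and
# hypothetical router addresses, community string, and interface indexes.
import subprocess

def read_out_octets(router, community, if_index):
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", community, "-Oqv", router,
         "IF-MIB::ifHCOutOctets.%d" % if_index],
        text=True,
    )
    return int(out.strip())

for router, if_index in [("192.0.2.11", 2), ("192.0.2.12", 2)]:  # AS1-BG1, AS1-BG2
    print(router, read_out_octets(router, "public", if_index))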
EBGP is configured on the links between the autonomous systems. The OSPF protocol runs
in AS1, and an IBGP session is configured between AS1-BG1 and AS1-BG2. The
configuration procedures are the same as in the case of pure DTM.
Only one MX240 router was used in the testbed. This router offers a virtualization feature
called logical systems. The MX240 can also operate without virtualization; in that case it
uses only its main logical system (which can be treated as the hypervisor logical system). All
routers presented in Figure 65 are logical systems. The same router has been used as an
OpenFlow switch, but this functionality can be enabled only in the main logical system,
meaning that, out of several virtual routers, only one can operate as an OpenFlow switch.