

Socially-aware Management of
New Overlay Application Traffic with
Energy Efficiency in the Internet
European Seventh Framework Project FP7-2012-ICT-317846-STREP

Deliverable D4.3
Prototype Evaluations and Overall Project
Assessment

The SmartenIT Consortium


Universität Zürich, UZH, Switzerland
Athens University of Economics and Business - Research Center, AUEB, Greece
Julius-Maximilians-Universität Würzburg, UniWue, Germany
Technische Universität Darmstadt, TUD, Germany
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie, AGH, Poland
Intracom SA Telecom Solutions, ICOM, Greece
Alcatel Lucent Bell Labs, ALBLF, France
Instytut Chemii Bioorganicznej PAN, PSNC, Poland
Interoute S.P.A, IRT, Italy
Telekom Deutschland GmbH, TDG, Germany

Copyright 2015, the Members of the SmartenIT Consortium


For more information on this document or the SmartenIT project, please contact:
Prof. Dr. Burkhard Stiller
Universität Zürich, CSG@IFI
Binzmühlestrasse 14
CH-8050 Zürich
Switzerland
Phone: +41 44 635 4331
Fax: +41 44 635 6809
E-mail: info@SmartenIT.eu

Version 1.0


Document Control

Title: Prototype Evaluations and Overall Project Assessment
Type: Public
Editor(s): Rafal Stankiewicz
E-mail: rstankie@agh.edu.pl
Author(s): Andi Lareida, Thomas Bocek, Manos Dramitinos, David Hausheer, Ioanna Papafili, Frederic Faucheux, Domenico Gallico, Roman Lapacz, Rafal Stankiewicz, Zbigniew Dulinski, Sabine Randriamasy, Sergios Soursos, Valentin Burger, Florian Wamser, Fabian Kaup, Jeremias Blendin, Michael Seufert, Burkhard Stiller, Patrick Poullie, Gerhard Hasslinger, George Stamoulis, Gino Carrozzo, Paolo Cruschelli, Grzegorz Rzym, Piotr Wydrych, Krzysztof Wajda
Doc ID: D4.3-v1.0

AMENDMENT HISTORY

Version | Date | Author(s) | Description/Comments

V0.0 | 03/2015 | Rafal Stankiewicz
    ToC defined.

V0.1 | 2015/06/10 | Rafal Stankiewicz
    Updated ToC, preliminary content definition and responsibility assignment.

V0.2 | 2015/07/30 | R. Lapacz, I. Papafili, V. Burger
    Template for cost-benefit analysis, inputs to cost-benefit analysis for DTM and RB-HORST.

V0.3 | 2015/09/03 | S. Soursos, S. Randriamasy, I. Papafili, M. Dramitinos, T. Bocek, R. Lapacz, D. Gallico, A. Lareida, P. Cruschelli, M. Seufert, R. Stankiewicz
    Section 3 introduction, input to Section 3.2 mechanisms landscape and Section 3.3 System architecture, template for Section 4, inputs to Section 4, parameters and metrics for DTM and DTM++, SEConD, RB-HORST, MUCAPS (initial), introduction to Section 5, general description of OFS experiments (Section 5.1), experiment report cards for OFS#1.1 and OFS#1.2, first results for OFS#1.2, introduction to Section 5.2, first result descriptions for the EFS#1 experiment, next steps in cost-benefit analysis, draft SWOT for DTM, draft content in Section 7.

V0.4 | 2015/09/22 | S. Soursos, S. Randriamasy, I. Papafili, G. Carrozzo, T. Bocek, M. Dramitinos, R. Lapacz, D. Gallico, A. Lareida, P. Cruschelli, M. Seufert, R. Stankiewicz, Z. Dulinski
    Description of metrics in EFS#2.2 (RB-HORST), metrics and assessment of the parameters for MONA, updated ICC parameters and metrics, revised Section 3.2, SWOT analysis for RB-HORST, Section 3.1 stakeholders' roles, updates to Section 4, new content on OFS experiments.

V0.5 | 2015/10/07 | T. Bocek, P. Poullie, G. Hasslinger, R. Stankiewicz, F. Kaup, I. Papafili, S. Randriamasy, K. Wajda
    Business perspective (Section 4.2.7), updates on SWOT analysis for DTM and RB-HORST, update in Section 6.1: cost-benefit analysis for DTM/DTM++ and RB-HORST/RBH++, 4.2.7 (Metrics, Parameters, and Assessment Summary), pre-final version of Section 4, contributed content for EFS#3 experiment results, contributions to Section 7 Overall assessment, skeleton of the section.

V0.6 | 2015/10/17 | A. Lareida, Z. Dulinski, G. Rzym, R. Stankiewicz, P. Wydrych, T. Bocek, M. Dramitinos, I. Papafili, V. Burger, P. Cruschelli, S. Soursos, M. Seufert, F. Kaup
    Cleaning of all sections, editorial work, cleaned references, updates on Sections 3 and 7, added stakeholders' roles in Section 3.1, text for Section 7.1.3, objectives addressed in Section 7.3, functionality and innovation for HORST and RB-HORST added in Section 7, several updates to Section 7, cleaned the SWOT section, new content in Section 5, added appendices, finalized cost-benefit analysis. Version ready for internal reviewers; MUCAPS parts added in parallel to the review of Sections 1-6.

V0.7 | 2015/10/29 | D. Hausheer, T. Bocek, P. Cruschelli, V. Burger, F. Kaup, M. Dramitinos, G. Hasslinger, Z. Dulinski
    Reviewers' comments on Sections 1-6 addressed; reviews ready for the MUCAPS content and Sections 7, 8 and 9.

V1.0 | 2015/10/31 | R. Stankiewicz, G. Rzym, Z. Dulinski, S. Randriamasy, I. Papafili, M. Dramitinos, D. Hausheer, G. D. Stamoulis
    Addressed reviewers' comments and revisions to Sections 7, 8 and 9 and the MUCAPS-related content. Final editorial work, last updates to Section 7 and Section 8, revised Section 1, mastering of the layout.

Legal Notices
The information in this document is subject to change without notice.
The Members of the SmartenIT Consortium make no warranty of any kind with regard to this document,
including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The
Members of the SmartenIT Consortium shall not be held liable for errors contained herein or for direct, indirect,
special, incidental or consequential damages in connection with the furnishing, performance, or use of this
material.


Table of Contents
Amendment History

List of Figures

List of Tables

1 Executive summary .......................................................................................... 11
2 Introduction ....................................................................................................... 13
  2.1 Purpose of the Document D4.3 ................................................................... 13
  2.2 Document Outline ....................................................................................... 13
3 SmartenIT in the current Internet ...................................................................... 15
  3.1 Stakeholders' roles ...................................................................................... 16
  3.2 Mechanisms landscape ............................................................................... 20
  3.3 System architecture .................................................................................... 24
    3.3.1 Implementation process ........................................................................ 26
  3.4 Business perspective and applicability to main IP traffic categories ........... 27
4 Assessment criteria and methodology .............................................................. 31
  4.1 Methodology for Parameters and Metrics ................................................... 31
  4.2 Traffic Management Mechanism Evaluation Criteria .................................. 32
    4.2.1 OFS: ICC, DTM & DTM++ .................................................................... 32
    4.2.2 EFS#1: RB-HORST Caching ................................................................ 35
    4.2.3 EFS#2: RB-HORST Large Scale Study ................................................ 37
    4.2.4 EFS#3: MONA (including EEF) ............................................................ 39
    4.2.5 EFS#4: MUCAPS .................................................................................. 41
  4.3 Metrics, Parameters, and Assessment Summary ....................................... 42
5 Testbed experiments results and assessment ................................................. 45
  5.1 OFS experiments ........................................................................................ 47
    5.1.1 OFS#1.1 (S-to-S Volume based charging rule) .................................... 49
    5.1.2 OFS#1.2 (S-to-S 95th percentile based tariff) ...................................... 57
    5.1.3 OFS#1.3 (S-to-S 95% rule) ................................................................... 65
    5.1.4 OFS#2.1 (M-to-M volume) .................................................................... 70
    5.1.5 OFS#2.2 (M-to-M 95% rule) ................................................................. 77
  5.2 EFS experiments ......................................................................................... 84
    5.2.1 EFS#1: Evaluation of Caching Functionality in RB-HORST ................. 84
    5.2.2 EFS#2: Evaluation of RB-HORST large scale study ............................ 89
    5.2.3 EFS#3: Energy consumption of RB-HORST ........................................ 93
    5.2.4 EFS#4: MUCAPS .................................................................................. 103
6 Assessment of project outcomes ...................................................................... 114
  6.1 Assessment of stakeholders' benefits and costs ........................................ 114
    6.1.1 Cost-benefit analysis of DTM/DTM++ .................................................. 115
    6.1.2 Cost-benefit analysis of RB-HORST .................................................... 122
    6.1.3 Cost-benefit analysis of MUCAPS ....................................................... 130
  6.2 SWOT analysis ........................................................................................... 136
    6.2.1 DTM/DTM++ ......................................................................................... 137
    6.2.2 RB-HORST ........................................................................................... 138
    6.2.3 MUCAPS ............................................................................................... 139
7 Overall assessment .......................................................................................... 140
  7.1 Key design goals addressed ....................................................................... 147
    7.1.1 Incentive compatible network management mechanisms ................... 147
    7.1.2 QoE-awareness .................................................................................... 153
    7.1.3 Social awareness ................................................................................. 154
    7.1.4 Energy Efficiency .................................................................................. 155
  7.2 Assessment of SmartenIT outcomes from an industrial point of view ........ 156
  7.3 Project objectives addressed ...................................................................... 157
8 Summary and conclusions ................................................................................ 161
9 SMART Objectives ............................................................................................ 164
10 References ...................................................................................................... 167
11 Abbreviations ................................................................................................... 170
12 Acknowledgements ......................................................................................... 173
13 Appendices ...................................................................................................... 174
  13.1 Testbed extension: Odroid-C1 .................................................................. 174
    13.1.1 Overview ............................................................................................. 174
    13.1.2 OS and Extra Packages Installation ................................................... 174
  13.2 Testbed extension: Remote Access for Admins with OpenVPN ............... 175
  13.3 Testbed extension: Hardware testbed ...................................................... 176


List of Figures
Figure 1: The SmartenIT ecosystem and the involved architectural entities. ..................... 24
Figure 2: The DTM architecture (a) and deployment (b) diagrams .................................... 25
Figure 3: The RB-HORST architecture (a) and deployment (b) diagrams ......................... 25
Figure 4: SmartenIT Entities and components involved in MUCAPS ................................. 26
Figure 5: Variable and static parameters and type of experiments .................................... 44
Figure 6: Logical topology for S-to-S experiment ............................................................... 47
Figure 7: Logical topology for M-to-M experiment.............................................................. 48
Figure 8: Traffic growth during the billing period and a cost map. SDN controller
mode: "reactive with reference". (OFS#1.1) ....................................................... 54
Figure 9: Traffic patterns on link 1 and 2 during the billing period. SDN controller
mode: "reactive with reference". (OFS#1.1) ....................................................... 55
Figure 10: Compensation vector values received by remote SBox (AS3 domain). SDN
controller mode: "reactive with reference". (OFS#1.1) ...................................... 56
Figure 11: Compensation vector values received by remote SBox (AS3 domain). SDN
controller mode: "reactive without reference". (OFS#1.1) ................................. 56
Figure 12: 5-min samples observed on inter-domain links during a single 7-day long
billing period (OFS#1.2) .................................................................................... 60
Figure 13: Distributions of 5-min samples collected during a single 7-day long billing
period (OFS#1.2) .............................................................................................. 61
Figure 14: The distribution of 5-min sample pairs on a cost map. (OFS#1.2) .................... 62
Figure 15: The distribution of 5-min sample pairs - long flows, reactive mode................... 64
Figure 16: The distribution of 5-min sample pairs - long flows, proactive mode................. 64
Figure 17: Traffic patterns on link 1 and 2 during the billing period. (OFS#1.3) ................. 67
Figure 18: 5-min samples observed on inter-domain links during a single 1-day long
billing period (OFS#1.3) .................................................................................... 68
Figure 19: 5-min samples observed on inter-domain links during a single 1-day long
billing period when hierarchical policer is inactive; only DTM operates
(OFS#1.3) ......................................................................................................... 69
Figure 20: Traffic growth during the billing period and a cost map. Domain AS1.
(OFS#2.1) ......................................................................................................... 74
Figure 21: Traffic growth during the billing period and a cost map. Domain AS4.
(OFS#2.1) ......................................................................................................... 75
Figure 22: The distribution of 5-min sample pairs on a cost map in domain AS1
(OFS#2.2) ......................................................................................................... 80
Figure 23: The distribution of 5-min sample pairs on a cost map in domain AS4
(OFS#2.2) ......................................................................................................... 81


Figure 24: 5-min samples observed on inter-domain links LA1 and LA2 (AS1) at the
end of billing period (OFS#2.2) ......................................................................... 83
Figure 25: Testbed setting for evaluation of caching functionality ...................................... 86
Figure 26: Cache hit rate with varying number of devices ................................................. 86
Figure 27: Transferred data with varying number of devices ............................................. 87
Figure 28: Cache hit rate with varying request time interval .............................................. 87
Figure 29: Transferred data with varying request time interval .......................................... 88
Figure 30: Cache efficiency in large scale study trace ....................................................... 88
Figure 31: Overlay of uNaDas after large scale study ....................................................... 90
Figure 32: Number of video views ..................................................................................... 90
Figure 33: Cache efficiency ............................................................................................... 91
Figure 34: Location of request processing ......................................................................... 91
Figure 35: Prefetching overhead........................................................................................ 92
Figure 36: Quality of Experience for videos served by uNaDas ......................................... 92
Figure 37: Activity patterns of the participating smartphones............................................. 95
Figure 38: RTTs as measured from the smartphones ....................................................... 96
Figure 39: Network uplink as measured from the smartphones ......................................... 97
Figure 40: Network downlink as measured from the smartphones .................................... 98
Figure 41: Measured CDF of the power consumption of the smartphones using 3G
and WiFi ............................................................................................................ 98
Figure 42: CDF of the power consumption using identical traffic traces and the power
model of the Nexus 5 for the individual interfaces............................................. 99
Figure 43: Activity patterns of the participating uNaDas .................................................. 101
Figure 44: CDF of the power consumption over the uNaDas participating in the study ... 102
Figure 45: MUCAPS prototype set-up ............................................................................. 105
Figure 46: Screenshot with MUCAPS OFF for all types of UEP access .......................... 106
Figure 47: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a LAN/FTTX
access ............................................................................................................. 107
Figure 48: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a WiFi
access ............................................................................................................. 107
Figure 49: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a 3G access. 108
Figure 50: GÉANT network topology in Europe .............................................. 117
Figure 51: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for
DTM/DTM++. .................................................................................................. 119
Figure 52: DC traffic evolution and estimated traffic savings due to DTM/DTM++ on an
8-year basis. ................................................................................................... 120
Figure 53: Forecasted values of the price of transit for the next 8 years [39]................... 120

Figure 54: Total costs vs. total benefits for DTM/DTM++ (benefits are calculated both
for 8% and 15% traffic savings). ..................................................................... 121
Figure 55: Overall economic gains assessment of DTM/DTM++. .................................... 121
Figure 56: Tiered caching scenario. ................................................................................. 124
Figure 57: Expected adoption rate of RB-HORST. .......................................................... 127
Figure 58: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for
RB-HORST. .................................................................................................... 128
Figure 59: Video traffic evolution and estimated traffic savings due to RB-HORST on
an 8-year basis. .............................................................................................. 128
Figure 60: Total costs vs. total benefits (benefits are calculated both for 50% and 75%
traffic savings). ................................................................................................ 129
Figure 61: Overall economic gains assessment for RB-HORST...................................... 130
Figure 62: MUCAPS employed for transparent in-network optimization. ......................... 133
Figure 63: Evolution of SmartenIT solutions .................................................................... 143
Figure 64: Sequences of incentives for DTM and DTM++ in the context of
stakeholders' relations .................................................................................... 150
Figure 65: General overview of hardware testbed extension ........................................... 176


List of Tables
Table 1: SmartenIT mechanisms mapping ........................................................................ 30
Table 2: Performance metrics for ICC ............................................................................... 32
Table 3: ICC parameters.................................................................................................... 32
Table 4: Performance metrics for DTM .............................................................................. 33
Table 5: DTM parameters .................................................................................................. 34
Table 6: Caching functionality metrics in RB-HORST ........................................................ 35
Table 7: Caching functionality parameters in RB-HORST ................................................. 36
Table 8: Large scale RB-HORST study metrics................................................................. 37
Table 9: Large scale RB-HORST study parameters .......................................................... 38
Table 10: MONA performance metrics............................................................................... 39
Table 11: MONA parameters (selected configuration underlined) ..................................... 40
Table 12: MUCAPS performance metrics .......................................................................... 41
Table 13: MUCAPS parameters ........................................................................................ 42
Table 14: SmartenIT metric categories mapping ............................................................... 43
Table 15: Experiment summary ......................................................................................... 45
Table 16: Experiment Report Card Template .................................................................... 46
Table 17: Total traffic costs achieved with DTM and expected without DTM obtained
for various SDN controller modes (OFS#1.1) ..................................................... 53
Table 18: KPI values for DTM run with various SDN controller modes (OFS#1.1) ............ 53
Table 19: The number of vector updates exchanged by SBox-es during the billing
period ................................................................................................................. 57
Table 20: Inter-domain traffic costs and KPI values for DTM (OFS#1.2) ........................... 59
Table 21: Manageable traffic generator setting for generation of long flows ...................... 63
Table 22: Inter-domain traffic costs in domains AS1 and AS4. (OFS#2.1) ........................ 76
Table 23: KPI for AS1 and AS4 (OFS#2.1) ........................................................................ 76
Table 24: Inter-domain traffic costs in domains AS1 and AS4. (OFS#2.2) ........................ 82
Table 25: KPI for AS1 and AS4 (OFS#2.2) ........................................................................ 82
Table 26: Analysis of the device activity ............................................................................ 95
Table 27: Comparison of the power measurements for 3G and WiFi while active ............. 99
Table 28: Results of the simulation using different interfaces .......................................... 100
Table 29: Measured power consumption of the uNaDas ................................................. 102
Table 30: ISP-specific ALTO values for RC and BWS chosen for the 3 AEP classes ..... 105
Table 31: Metric weights chosen for the 3 tested access types ....................................... 105

Table 32: IP addresses of the 3 video servers ................................................................. 106


Table 33: AEP performances in CLU for all 4 MUCAPS modalities for LAN access ........ 108
Table 34: AEP performances in CLU for all 4 MUCAPS modalities for WiFi access ....... 109
Table 35: AEP performances in CLU for all 4 MUCAPS modalities for 3G access .......... 109
Table 36: Name, resolution and bitrate of the six videos used for the prototype
validation .......................................................................................................... 109
Table 37: QoE analysis results when streaming the test videos, for all access types ...... 110
Table 38: Impact of MUCAPS on video streaming QoE for LAN/FTTX UEP access ....... 111
Table 39: Impact of MUCAPS on video streaming QoE for WiFi/ADSL UEP access ...... 112
Table 40: Impact of MUCAPS on QoE for video streaming for 3G UEP access .............. 112
Table 41: Traffic management mechanisms developed by SmartenIT ............................ 141
Table 42: Overall SmartenIT SMART objective addressed. (Source: [1]) ........................ 164
Table 43: Practical SmartenIT SMART objective addressed. (Source: [11]).................... 164


1 Executive summary
This document is Deliverable D4.3 of the SmartenIT project. It is entitled "Prototype Evaluations and Overall Project Assessment" and documents the results of the validation and performance evaluations of the mechanisms implemented within the SmartenIT project. All measurable results, including traffic measurements, metrics on performance, robustness, energy efficiency, and QoE, and results obtained by means of experiments, simulations, and theoretical investigations, are provided in the first part of this deliverable.
The second part of the deliverable is devoted to the overall assessment of the solutions
and mechanisms proposed by the SmartenIT project. It contains the broad picture from
various points of view (such as perspectives of different stakeholders, single layer and
cross-layer views, incentives of stakeholders, various business models and scenarios, and
new technologies implemented within SmartenIT), based on all methods applied (theory,
modeling, experiments) and evaluation techniques utilized (simulations, testbed). Final
project conclusions as well as implementation guidelines and usability assessment are
also provided in this document. The deliverable also outlines the major achievements of
the SmartenIT project as a whole with respect to its general approach and its specific
network management mechanisms.
To this end, this deliverable first positions the SmartenIT solutions and traffic management mechanisms in the current Internet, taking into account the perspective of stakeholders and current challenges. SmartenIT has conducted a deep analysis of the current Internet in two main complementary and synergetic directions, which led to the development of two scenario categories, namely OFS (Operator Focused Scenario) and EFS (End-user Focused Scenario). OFS is dedicated to the interaction among stakeholders acting as Cloud Providers and particularly as Internet Service Providers, while EFS is centered on the end-user and their interaction with other users as well as with the Cloud. The main SmartenIT mechanisms pertaining to OFS (namely ICC, DTM and their superposition DTM++) constitute powerful tools for service differentiation and OPEX reduction for ISPs, thus contributing both to the cost reduction and the value generation aspects of ISPs' business. These mechanisms have consciously been designed for the Best Effort Internet, but can also be adapted under a Differentiated Services model and/or under a Beyond Best Effort Internet. Also, MRA ensures a fair resource allocation among federated Cloud Service Providers. Regarding EFS, all relevant SmartenIT mechanisms are suitable for certain business contexts. The business context that is most relevant for RB-HORST (which is the combination of several mechanisms) is the Terminal and Service business model. Also, the SmartenIT EFS mechanisms, namely RB-HORST, SEConD, vINCENT, MONA, and MUCAPS, which can deliver energy efficiency, advanced services and resource management, and increased agility and performance, can be very useful means for new differentiated services and terminal operations, such as cloudlets.
Furthermore, the deliverable summarizes the evaluation criteria, methodologies and metrics used for the assessment of the traffic management mechanisms selected for testbed experiments, and reports on the testbed experiment results and assessment achieved during SmartenIT validation activities. It should be noted that all runs in the testbeds included both a functional test assessment and a performance test. For the OFS experiments of DTM and DTM++, a virtualized testbed environment was employed, with four testbed instances launched in three SmartenIT partners' premises, covering both Single-to-Single and Multi-to-Multi domain topologies with tariffs based on both volume and the 95th percentile. The EFS experiments evaluate caching and energy consumption in RB-HORST
as well as the Quality of Experience of video streaming for MUCAPS. In fact, the large-scale study for RB-HORST involved the provision of videos to users in the premises of five different partners.
Additionally, further dimensions of assessment of the SmartenIT solutions and of the finally implemented traffic management mechanisms are introduced, and a deep analysis of the costs and benefits expected from the deployment of SmartenIT solutions over the next several years is provided, together with a SWOT analysis.
Finally, an overall assessment of all SmartenIT solutions and mechanisms is given, which serves as a summary for all project activities and outcomes. A map of the solutions, their stage of development and their achieved maturity is presented, and a summary is given of how the key design goals and objectives defined at the project's beginning were addressed and achieved. In particular, the main conclusions of this final assessment are as follows:
For the OFS scenario, DTM/DTM++ proves to be an excellent solution to be deployed in the ISP domain, mainly because: a) it is based on incentive-driven collaboration among different administrative domains, providing a technical framework that allows traffic management between Cloud and ISP in a win-win situation, b) it resolves the problem of information asymmetry by providing partial information exchange between the cloud layer and the network layer, thus enabling cross-layer aware traffic management, c) it boosts Cloud Federation business models, thus demonstrating that SmartenIT solutions for the OFS scenario are well deployable both for large-scale Cloud and Internet Service Providers and for smaller ones that together widen their geographical footprint, d) it optimizes inter-cloud traffic by providing an excellent tool to optimally route selected inter-domain traffic, and e) it is a non-disruptive solution, meaning that it can be deployed over provider domains without changing the entire infrastructure, with a high deployability potential.
For the EFS scenario, the SmartenIT prototype and the related artifacts have proved to be a stable solution, capable of achieving good and measurable performance and of being deployed in real-world large-scale settings over a wide geographical area with an accordingly high number of end-users. The selected EFS traffic management mechanisms proved to be an excellent solution, mainly because: a) they provide a technical framework that allows traffic management between the Cloud provider and the end-user in a win-win fashion, b) they entail incentive-based collaborations among end-users based on a novel trust schema that leverages social awareness information, c) they achieve energy efficiency for end-user mobile devices and a significant enhancement of the Quality of Experience as perceived by the end-user, d) they provide novel content caching and prefetching schemes for content downloading, thus indirectly attaining inter-domain traffic savings and energy efficiency in the provider's domain due to traffic localization, and e) they are easily deployable over end-user's and ISP's premises with negligible impact on the existing infrastructure.
Overall, the SmartenIT solutions, evaluated with the elaborated methodology reported in this deliverable, constitute a stable technical framework with measurable performance that ensures concrete benefits for the involved stakeholders (including traffic and energy savings), high deployability over both the ISP's infrastructure and the end-user's premises, high configurability in terms of functionalities and capabilities, and good positioning in the current cloud landscape, thus fulfilling all major objectives of the SmartenIT project.


2 Introduction
Deliverable D4.3 reports the outcome of task T4.3 (Evaluation of Prototype) and task T4.4 (Overall Assessment). The goal of task T4.3 was to conduct the validation and performance evaluation of the SmartenIT prototype based on the application use cases and experiments identified in task T4.2. Specifically, the prototype developed in WP3 had to be parameterized and deployed on top of the mobile and wired testbed defined in task T4.1, and had to be evaluated with respect to the scenarios, metrics, and characteristic dimensions defined in task T4.2. This includes the efficiency, scalability, and energy-awareness of the developed network management mechanisms. The aim of these evaluations was to reveal the positive impact of the SmartenIT mechanisms on network traffic load and on performance parameters, the potential for cost savings, and the trade-offs between energy-efficient computation in the cloud and energy consumption on mobile end-user devices, ultimately measurable in overall energy savings and cost efficiency.
In parallel to task T4.3, the goal of task T4.4 was to assess all results achieved with respect to the performance evaluations of task T4.3 as well as the overall project outcomes, including the model- and simulation-based evaluations carried out within WP2. Also, QoE assessment based on theoretical models and subjective tests had to be performed, interpreted, and summarized within task T4.4. Benefits from the deployment of SmartenIT solutions for the respective stakeholders were also evaluated and documented. This assessment provides valuable interpretations of the results, outlining the major achievements of the SmartenIT project as a whole with respect to both its specific network management mechanisms (and the circumstances under which they are efficient) and its general approach. This approach also enables the provision of feedback to the theoretical work carried out by WP2 (theory) and to the engineering work carried out by WP3 (engineering), as well as the final system-related and research-related public messages to the interested stakeholders.

2.1 Purpose of the Document D4.3

The purpose of Deliverable D4.3 is to document the results of the validation and performance evaluations of the mechanisms implemented within the SmartenIT project. To this end, all measurable results, including traffic measurements, performance metrics, robustness, energy efficiency, and QoE, are provided here. Moreover, this deliverable aims to provide an overall assessment of the solutions and mechanisms proposed by the SmartenIT project, to derive the final project conclusions, and to give implementation guidelines and a usability assessment. Finally, the deliverable also aims to outline the major achievements of the SmartenIT project as a whole with respect to its specific network management mechanisms and its general approach.

2.2 Document Outline

The remainder of this deliverable is organized as follows. An overview of SmartenIT in the current Internet is given in Section 3. The goal of this section is to position the SmartenIT solutions and traffic management mechanisms in the current Internet, taking into account the perspective of stakeholders and current challenges. It also serves as an introduction to the other sections by recalling the main project outcomes. The SmartenIT system architecture is also summarized. Moreover, Section 4 provides a description of the assessment criteria and methodology. This section summarizes the evaluation criteria, methodologies and metrics
used for the assessment of the traffic management mechanisms selected for testbed experiments, as presented later in Section 5. The metrics are categorized as economic, performance, and user experience metrics. Moreover, this section provides a business perspective on the assessment of project outcomes. Additionally, the testbed experiment results and the assessment achieved during SmartenIT validation activities are given in Section 5. The results of several experiments are presented and discussed. Evaluations are performed using the methodologies and metrics presented in Section 4. Furthermore, Section 6 provides the assessment of the main project outcomes. This section introduces further dimensions of assessment of the SmartenIT solutions and of the finally implemented traffic management mechanisms. It provides a deep analysis of the costs and benefits of the deployment of SmartenIT solutions expected over the next several years. Additionally, a SWOT analysis is presented. Finally, an overall assessment of all proposed SmartenIT solutions and mechanisms is given in Section 7. This section serves as a summary for all project activities and outcomes. It presents a map of the solutions and the stage of development and maturity they achieved. It also summarizes how the key design goals and objectives defined at the project's beginning were achieved and addressed. Section 8 summarizes and concludes this deliverable.


3 SmartenIT in the current Internet


In recent years the emerging Cloud paradigm has had a great impact on the Internet landscape, where the Cloud can be understood, at least from a very general point of view, as a new way to offer products and services to end-users. Superimposed on the impact of the Cloud paradigm (and correlated to it both directly and indirectly), other important shifts and trends should be considered, such as the exponential growth of end-user mobile access capabilities, the increasing demand for user/service mobility, the increasing focus on energy efficiency (affecting both the service provider and the end-user domain), and the emerging importance of social network frameworks and their exploitation potential.
All the abovementioned key topics have caused a number of broad effects on the Internet landscape, which can be summarized in the following main points:

- A great traffic increase driven by new applications, with a tremendous impact on the global network built on ISP infrastructure. Cloud traffic, which in most cases is overlay traffic, is not directly managed by ISPs, and this results in poor end-to-end traffic management capabilities.

- The emergence of new requirements from the end-user's point of view, and of metrics to evaluate them. In the Cloud era, QoE (an aggregate metric which measures the end-user's experience with a purchased service) is gaining importance.

- A new landscape of interactions between different parties. Typically, a Cloud service offered to an end-user is composed of a combination of technologies spanning multiple service provider domains, resulting in an end-to-end vertical and horizontal macro-composition. Moreover, the Cloud service should be perceived in a transparent way by the end-user, meaning that end-users must not be aware of the underlying framework used to provide the service. In this multi-sided and complex landscape, a new framework is needed (from both a technical and a business point of view) that encompasses cross-layer and cross-domain interaction among service providers in order to satisfy increasing end-to-end SLA requirements.

- The role of the end-user, which is continuously becoming more central and is also shifting from a passive role (where the user just receives a service) to a more active role, with the possibility to interact with other users (social networks) and with the service providers themselves (e.g., in the form of Nano Data Centers) by allowing the provider to utilize equipment at the customer premises as additional computing/storage resources.

The SmartenIT project has conducted a deep analysis oriented in two main complementary and synergetic directions.
The first direction is the analysis of the Cloud landscape from the point of view of stakeholders and their relationships, in order to understand their requirements and their conflicting needs, and to devise a rationale which can convert those conflicting interests into synergetic, collaborative ones. This analysis has led to the development of two scenario categories, namely OFS (Operator Focused Scenario) and EFS (End-user Focused Scenario), which summarize two complementary perspectives of the Cloud landscape. The first is dedicated to the interaction among stakeholders acting as service providers, while the second is centered on the end-user and their interaction with other users as well as with the Cloud.


The second direction is the analysis of suitable traffic management mechanisms (and their composition) to be deployed over the two reference scenarios, which are intended to mitigate the drawbacks of the Cloud landscape: by mitigating the impact of inter-cloud traffic on providers' domains, by providing new schemes for efficient content caching and pre-fetching, and by providing a framework which fosters collaboration among multiple independent domains. The traffic management mechanisms have been designed to span a certain number of SmartenIT entities (the SBox implementing the SmartenIT logic, the SDN Controller, the Network Entity, the uNaDa, i.e., an enhanced home gateway, and the End User Entity) that constitute the SmartenIT architecture. The overall SmartenIT architecture has been designed to be modular in order to be deployable over different scenarios according to actual needs.
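To make the caching idea concrete, the following minimal sketch (in Python) illustrates the kind of edge cache a uNaDa could host. It assumes a simple LRU policy and is illustrative only; it is not the RB-HORST implementation evaluated later in this deliverable.

    from collections import OrderedDict

    class EdgeCache:
        """Minimal LRU content cache of the kind a uNaDa could host (illustrative)."""

        def __init__(self, capacity):
            self.capacity = capacity      # number of items the home gateway can store
            self.items = OrderedDict()    # content id -> True, ordered by recency
            self.hits = 0
            self.requests = 0

        def request(self, content_id):
            """Serve a request locally if cached; otherwise fetch and cache it."""
            self.requests += 1
            if content_id in self.items:
                self.items.move_to_end(content_id)   # refresh recency
                self.hits += 1
                return "served locally (no inter-domain traffic)"
            if len(self.items) >= self.capacity:
                self.items.popitem(last=False)       # evict least recently used item
            self.items[content_id] = True            # fetched from origin, now cached
            return "fetched from origin"

        def hit_rate(self):
            return self.hits / self.requests if self.requests else 0.0

Every cache hit in this sketch is a request served inside the access network; at scale, such hits are the source of the inter-domain traffic savings and QoE improvements quantified for RB-HORST later in this deliverable.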
In the next subsections, the main results of the SmartenIT analysis are reported, with focus on:

- SmartenIT stakeholder descriptions (roles, needs, interests and conflicts);

- SmartenIT traffic management mechanisms (divided on a per-scenario basis, with proper focus on their features and the enhancements they are able to produce);

- the SmartenIT architecture (a high-level description with focus on the SmartenIT entities and their deployment capabilities in different scenarios).

3.1 Stakeholders' roles

Five actors can be identified within the SmartenIT ecosystem, namely Cloud Service
Providers, Data Center Operators, Internet Service Providers, End Users, and the Social
Information Provider. Their interactions will be described in this section.
The main interest of the Cloud Service Provider (CSP) is the monetization of services
purchased by the end users. Satisfaction of end users is a crucial issue. To ensure this
goal, the QoS as well as the energy requirements should be met. By collecting and utilizing social information, new services may be offered. The CSP attempts to attract demand and differentiate its product offerings so as to serve various market segments with different prices/packages, and consequently to extract as much of its customers' surplus as possible.
The monetization can also be achieved by either making the end-user accept a certain
level of advertisement or by introducing systems that prevent users from accessing
webpage content without a paid subscription to the service. On the other hand, there is
pressure to keep the costs for ISP infrastructure and Cloud resource consumption low, i.e.,
for renting resources from Data Center Operators. Its main concern is also to respect the
SLAs with its customers and properly dimension its infrastructure to cover the respective
demand and redundancy requirements. The CSP also needs to interact with the ISP to
make sure that the provided network meets the requirements of the provided cloud
service. Interconnection with other CSPs can reduce the transit fees a CSP pays to the
ISP.
The Data Center Operator (DCO) is primarily concerned with the efficient use of its
infrastructure, i.e., making the best use of its resources while minimizing energy consumption for the completion of the maximum possible amount of tasks. It is possible
that a part of its infrastructure is purchased by some CSP under an IaaS or PaaS model.
Monetization of infrastructure is done by guaranteeing satisfactory QoS/QoE parameters
for end users. Its main concern is to respect the SLAs it has with its customers, which may
also be CSPs, and properly dimension its infrastructure to cover the respective demand
and reduce its costs. Reduction of costs, in this case, focuses on best possible utilization
of hardware both resource-wise and energy-wise. In particular, it aims to improve the
utilization of its servers, to avoid or limit congestion on its links and platform, as well as to limit the energy consumption of its data centers. An important factor for its costs, besides energy, is the transit costs for data transfers. An emerging trend is datacenter interconnection, similar to the donut peering model of ISPs [44]: datacenters are
gradually interconnecting with other datacenters via exchange points so as to reduce
transit costs. Another interesting emerging prospect is that of federations [44]: since in the
data center business multi-location presence and locality in execution of tasks is crucial so
as to be competitive, small data center operators may team up and build a federation by
combining their resources and aligning their business processes so as to appear as a
large virtual data center operator to their potential large business customers. In this case,
interactions within the federation are policed by means of business policy rules defining
the rights and obligations of each federation member.
The Internet Service Provider (ISP) aims at maintaining good quality in its network, which gives it a competitive edge in the market and ultimately a good RoI (Return on Investment). This means that ISPs are interested in optimizing the network bandwidth
utilization, while avoiding congestion, and preserving low latency. The ISP interacts with
other ISPs via peering and possibly transit agreements for the management of its customers' traffic, both upstream and downstream. Providing guarantees for the delivery of
portions of the traffic and/or isolating a part of the traffic of Data Centers and/or ISPs (e.g.,
via VPNs, leased lines, or private paid peering agreements) is a source of potential revenue
for the ISP. This can be increased by high quality of network services that lead to higher
satisfaction of DCOs and also of end users. Supporting new services, possibly by
employing additional information (with the main example here being social information),
may be attractive and simultaneously make ISPs more competitive towards end users and
cloud providers. Offering new services to CSPs would open new market opportunities for
ISPs. The main goal of the ISP is to manage the incoming traffic that is possibly delivered to it via multiple interconnection points. An additional goal for Tier-1 and Tier-2 ISPs is to
sell transit interconnection agreements to other ISPs, DCOs and CSPs. At the same time, its own charged transit link traffic has to be kept as low as possible. Traffic Engineering solutions that prioritize customers' traffic according to the respective value/cost tradeoff are applied in parallel with the management of inter-domain business agreements and the establishment of presence at Internet exchange points and in areas that can attract demand from customers.
End users are to a large extent unaware of the business agreements and interactions among the other stakeholders, being primarily clients of ISPs and CSPs. The only things they are aware of are the description and cost of the services they purchase, mainly Internet access from an ISP and file hosting from a CSP. The end users are sensitive to the reliability
and availability of the service, the support they receive when needed, as well as the
performance of the service and the resulting QoE attained. All those features are typically
defined in the contract/SLA of the service between the end user and the respective service
provider. In case multiple providers are needed for the provisioning of a certain service, the respective interactions are hidden from the end user due to the nesting of internal SLAs; the end user solely interacts with a single point of service, i.e., the provider from which he has purchased the end-to-end service. The end user's main concerns are his own QoE,
network access cost, and energy consumption. The energy efficiency of services and
network access are factors of increasing importance from the point of view of the end user,
especially when considering mobile network access (e.g., through the limitation of battery
lifetime when using a mobile device). It is noteworthy that costs, in the case of the end user, can often take the form of exposure to advertisements instead of involvement in a monetary flow. Typically, the end user will not be interested in disclosing social information directly; however, he may be willing to give it up for some profit or improved QoS/QoE.
The Social Information Provider wants to monetize the social information it owns. Therefore, it can sell the social information to CSPs, DCOs, or ISPs. If the information can
be used to improve a service, this creates incentives for end users to provide information
to the Social Information Provider. If the Social Information Provider itself offers a service, such as an Online Social Network (OSN), these incentives lead to a higher number of OSN users, which can further increase the revenue (e.g., from advertisement) of its service.
Having presented the main stakeholders and their roles, we now provide some insight into their business relationships and interactions, as well as the tussles amongst them, in the context of the Best Effort Internet and beyond.
Asymmetric information between the ISP and the buyer of connectivity service is a major
issue, since ISP networks are black boxes to other ISPs and cloud/application service
providers, rendering inter-domain monitoring impossible. Networks do not allow other
parties to monitor their network performance and do not provide to their customers any
guarantees besides uptime. Furthermore, ISPs do not exchange any quality or control
information and there is lack of reward schemes for ISPs that would be willing to provide
assured quality to inter-domain traffic since networks solely exchange data and BGP
information, lacking standardized service-aware inter-provider service coordination in both
business and technical layer. This is also due to the fact that interconnection contracts
pertain to large traffic aggregates, thus there is no service-specific overlay for
optimizations, and provide only uptime guarantees and absolutely no guarantees on
quality.
This lack of end-to-end inter-provider SLAs and respective multi-provider service-aware connectivity products drives high quality out of the market, according to Akerlof's theory on the "market for lemons" [45]: since there are no quality guarantees, it is impossible for buyers to predict the quality of inter-domain flows. Thus, they base their willingness to pay on the average quality observed in the market. Average quality is by definition lower than that of high-quality connectivity, which essentially drives any higher-quality connectivity product out of the market and results in a market of lemons. This is also evident in the context of mobile networks, where content delivery is highly monetized and thus any quality degradation is unacceptable: extranet solutions such as IPX are increasingly popular, thus creating an alternative to the Internet and a potential threat to the applicability of the latter for sustaining the business of content delivery, especially due to the increasing number of mobile users and sophisticated mobile terminals.
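A stylized numeric illustration of this effect (the figures are hypothetical): suppose half of the connectivity products on offer deliver high quality, worth 100 to a buyer, and half deliver low quality, worth 40. Unable to verify quality in advance, a risk-neutral buyer offers around the expected value 0.5 · 100 + 0.5 · 40 = 70; since 70 is below the price at which high-quality connectivity is sustainable, high-quality sellers exit the market, the average quality and hence the buyers' offers drop further, and only the "lemons" remain.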
ISPs' possible opportunistic routing and/or prioritization of their own flows over inter-domain traffic further degrade quality of service. Overall, these inefficiencies are due to the lack of service-aware coordination and cooperation among ISPs and clouds, as well as between ISPs and clouds, further rendering inter-provider cloud and content service provisioning problematic and greatly affecting the end-users' QoE. This issue is also of particular interest to Europe and similar areas, where there are typically multiple geographically limited ISPs, data centers and cloud service providers without a global Information and Communication Technology infrastructure, rendering intra-provider traffic management and cross-layer network-cloud optimization solutions inadequate.


To this end, data centers, CDNs and proxy servers attempt to mitigate these well-known
inefficiencies to some extent and improve service performance by bringing data and
services closer to the users. It is therefore interesting that the inefficiencies of the existing Internet and the resulting degradation in service quality actually constitute a business opportunity and sustain a market of considerable size, that of CDNs. Similarly, these inefficiencies lead to opportunities for SmartenIT too.
This complex business landscape gives rise to interesting tussles among the stakeholders,
which have been also analysed in D2.5 [8]. These tussles are greatly affected by both the
existing business relationships and practices among the operators and by the technical
Internet protocols deployed. Regarding the tussles among the operators in the OFS, the
optimal traffic destination when multiple candidate destinations exist comprises an
interesting tussle. The selection of the optimal destination would then be performed by the sending entity, CSP/DC or uNaDa, without any knowledge of the underlying network load, which may affect both the time within which the data transfer will be completed and the underlying network load. Also, the BGP aggregation function further limits the choices for optimizing the traffic routing. The net neutrality tussle pertains to the scope of
reasonable traffic management. The information asymmetry, already explained earlier in this section, and the potential inter-domain traffic changes resulting from edge storage devices and the support of video content dissemination may have a significant impact on an ISP's inter-connection costs and peering ratios, and thus on the sustainability of its
interconnection agreements. Last but not least, the pricing tussle over traffic is also closely
related to the net neutrality tussle and partly resolved via competition, business models
and regulation. However, the existing status quo in pricing, as well as the introduction of
new schemes, may create interesting spill-over effects and new tussles, since more or less control and options are provided to some of the stakeholders in the Internet services value chain: to this end, pricing is both the cause of tussles and a way to resolve them as a control mechanism, but it also serves as the outcome of tussles and business modeling in the
Internet.
Similar tussles also appear in the context of federations among clouds and datacenters.
The degree of cooperation and information and control delegation among the federation
members comprises an interesting tussle. This is largely due to the fact that the federation
members are also competing directly in the market for customers and also try to maximize their profits and the customers obtained from the federation. This is closely related to the
information asymmetry and quality discrimination tussles already explained in the previous
paragraph for the operators' case. The definition of the Federation policies, rules and information model is also a highly important tussle for the cloud service providers, since this determines customer ownership, intermediation, revenue flows and revenue sharing schemes, i.e., crucial aspects of the CSPs' business.
Tussles also appear in the context of EFS, including the reduction of cloud provider traffic
traversing ISPs and the traceability of user behavior, which are directly related to the
intermediation of CDNs or alternative caching entities, such as uNaDas. These tussles, the
caching mechanisms that attempt to mitigate them, as well as performance issues are
also related to the pricing and inter-domain traffic changes, and to the respective impact
on hardware and network upgrades needed.
All these tussles and their correlation with the business and technology status quo of the
Internet indicate the highly disruptive impact of technology in the Internet ecosystem and
the resulting distribution of power among its stakeholders. The complexity of these tussles
also indicates that a holistic approach, encompassing both the business and technology
aspects of Internet services delivery, is needed to mitigate them. This is well in line with
the SmartenIT approach and the mechanisms developed, which are specified so as to
respect neutrality, fair competition and design-for-tussle principles. The respective
mechanisms, their scope and goal, as well as their relation with the business landscape
are presented in the next subsection.

3.2 Mechanisms landscape

This section provides an overview of the landscape of SmartenIT mechanisms and the respective cloud business models under which they can be utilized. Regarding the Operator Focused Scenario (OFS), the main mechanisms and their respective achievements are:

• Dynamic Traffic Management (DTM), which minimizes the inter-domain traffic cost in a multi-homed AS by influencing the distribution of the traffic among links. DTM is designed and implemented to work with volume-based tariffs and 95th-percentile-based tariffs (a short billing sketch follows this list).
• Inter-Cloud Communication (ICC), which attains a reduced 95th-percentile transit charge by controlling the rate of delay-tolerant traffic, marked a priori accordingly by the ISP's business customer (e.g., a cloud/datacenter), and shifting its transmission to off-peak intervals.
• Multi-Resource Allocation (MRA), which ensures a fair resource allocation among federated Cloud Service Providers.
• DTM++, which employs features of DTM and ICC to further improve load balancing and the 95th-percentile inter-connection charge compared to the individual mechanisms.
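To make the two billing schemes concrete, the following Python sketch (with hypothetical sample data and prices, not project measurements) computes a volume-based charge and a 95th-percentile charge from 5-minute link load samples:

    # Sketch: volume-based vs. 95th-percentile transit billing.
    # Sample data and prices are hypothetical, for illustration only.

    def volume_charge(samples_mbps, price_per_gb, interval_s=300):
        """Charge proportional to the total volume sent in the billing period."""
        total_gb = sum(r * interval_s / 8 / 1000 for r in samples_mbps)  # Mbit/s -> GB
        return total_gb * price_per_gb

    def percentile95_charge(samples_mbps, price_per_mbps):
        """Charge based on the 95th percentile of 5-minute load samples: the
        top 5% of samples are discarded and the highest remaining sample
        determines the billed rate."""
        ranked = sorted(samples_mbps)
        idx = max(int(0.95 * len(ranked)) - 1, 0)  # 95th-percentile sample
        return ranked[idx] * price_per_mbps

    # One day of 5-minute samples (288 values) with an evening peak.
    samples = [100 + 80 * (i > 200) for i in range(288)]  # hypothetical profile
    print(volume_charge(samples, price_per_gb=0.01))
    print(percentile95_charge(samples, price_per_mbps=2.0))

ICC lowers the second figure by shifting delay-tolerant traffic away from the samples that would otherwise end up above the percentile cut-off, while DTM redistributes traffic among links so that the combined cost over the links' cost functions is minimized.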

All SmartenIT OFS mechanisms can be of high value in certain business contexts related to the Internet and cloud services landscape. A high-level overview can be found in Chapter 5 of D2.5 [8]. Next, we focus on ICC, DTM and their combination DTM++, since DTM++ has been fully implemented and constitutes one of the two major tangible outputs, i.e., software products of the project, as well as on MRA.
ICC, DTM and their superposition DTM++ constitute powerful tools for service differentiation and OPEX reduction for Internet Service Providers, thus contributing both to the cost reduction and the value generation aspects of ISPs' business. ICC, DTM and DTM++ have consciously been designed for the Best Effort Internet, so as to maximize the OFS mechanisms' impact and applicability. However, the mechanisms' design can also be adapted to a Differentiated Services model, by extending the number of traffic classes and the respective differentiated treatment of the traffic, and/or to a Beyond Best Effort Internet. Regarding the latter, DTM and DTM++ tunnels can utilize assured-quality inter-domain paths so as to further optimize the distribution of traffic on a per-service level, by using separate tunnels per service and per destination and optimizing the cost distribution over them. ICC can comprise multiple classes of service with different prioritization of the traffic, taking into account the respective services' requirements. In this case, statistical guarantees per service type can be assured.
MRA is an attractive solution for cloud federations. This is also the case for all the other OFS mechanisms which, in addition to the aforementioned powerful features, comprise powerful tools for providing customizable Network as a Service (NaaS) connectivity. In particular, this may apply over the entire ISP infrastructure or over network slices, potentially configured and managed according to the Software Defined Networking
paradigm. Therefore, service differentiation and customization at the network layer can be supported both for legacy IP networks and for cutting-edge SDN technologies.
This is an additional means for ISPs to differentiate themselves from being dumb-pipe providers and is in line with the shift towards Value Added Services. Thus, the OFS mechanisms can serve as a means for ISPs to extend beyond the low-margin commodity market for dumb connectivity into the high-value market of customizable and differentiated services. Furthermore, they can be bundled with additional services that ISPs already provide, such as IPTV or any other Value Added Services targeted by TM Forum in [21], in order to materialize the Repeatable Digital Services Provisioning model. Therefore, ICC, DTM and DTM++ can be beneficial for the declared TM Forum attempt to adapt the Amazon repeatable retail business model to cloud and Internet services based on an Ecosystem Enablement Platform (EEP); details are provided in D2.4 [7]. This can be accomplished by providing rich functionality to digital services, where Over The Top (OTT) providers are extremely active, either cooperating with or competing against ISPs. The traffic management functionality of these mechanisms, and also the cloud layer functionality of ICC for optimal destination (i.e., cloud, data center) selection in inter-cloud communication cases where this is possible, such as bulk data transfers for data replication, could comprise a core module of the EEP platform, offering a substantial competitive advantage for such Value Added Services provisioned at the application layer. Note that this is also well aligned with the increasing importance of the network in cloud services and its substantial impact on the users' perceived quality, and thus with the potential for monetizing the end-user flows of the service via subscription or per-use fees (e.g., for video on demand).
Therefore, ICC, DTM and DTM++ could serve as a means for ISPs to capture portions of the high-value segments of the private, hybrid and public cloud markets, including, through bundling with other services, the Dynamic Business Processes and Business Processes as a Service segments (refer to Figure 3-1 of Deliverable D2.4 [7]). Integrating these OFS mechanisms over the ISP infrastructure could enable the ISP to acquire a portion of those highly profitable markets by increasing the competitive advantages of the bundled services. In particular, ICC and DTM could be used stand-alone, or combined as DTM++, as advanced network management mechanisms that efficiently meet complex business processes' needs through smart quality-class differentiation and tunneling of the respective traffic. To this end, ISPs could bundle and provision such services under a Managed Services approach to their business customers, or offer this advanced functionality under IaaS/PaaS to third-party Over-The-Top providers such as Application Service Providers.
The Federation model (additional details are provided in subsection 3.4.2 of Deliverable D2.4 [7]) is also a business landscape where MRA, ICC, DTM and DTM++ are applicable, since these mechanisms' design and deployment are facilitated by a federation of data centers/clouds. The roll-out of all the OFS mechanisms in this context would greatly contribute to reducing the OPEX of both the cloud federation members and their host ISPs, due to the respective transit cost reduction.
As also stated previously, the ISP Managed Services model is also a suitable business context for ICC, DTM and DTM++. The respective functionality could be an integral part of this paradigm as a whole, and also of particular managed services such as Virtual Private Cloud (VPC) and Managed Storage and Backup (additional details are provided in Section 3.4 of Deliverable D2.4 [7]).

Furthermore, the brokering business context is another one where MRA, ICC, DTM and DTM++ could be of value (additional details in Section 3.2 of Deliverable D2.4 [7]). Brokers must be extremely efficient in meeting the needs for aggregation, integration and customization of services and resources, and in delivering to their customers a unified service that meets the customized needs of the specific customer.
Last but not least, DTM can also be applied in the emerging 5G and Internet of Things (IoT) contexts, where virtualization will lead to the conception and deployment of network slices and resource/traffic aggregates. DTM can offer virtual routers serving as points of presence for data centers. These can be deployed in separate slices belonging to different virtual operators so as to provision 5G or IoT services. Since multi-provider service delivery, including cloud and IoT resources, over software-defined networks is part of the 5G vision, DTM++ constitutes a candidate for this market as well.
Regarding the End user Focused Scenario (EFS), the main mechanisms and their respective achievements are:

• Replicating Balanced Tracker and Home Router Sharing based on Trust (RB-HORST), which provides WiFi access to trusted users to offload their mobile traffic from 3G/4G to WiFi. RB-HORST also enables home routers of trusted users, identified through a social network, to be organized in a content-centric overlay network supporting content prefetching and caching.
• Socially-aware Efficient Content Delivery (SEConD), which employs social information, AS-locality awareness, chunk-based P2P content delivery and prefetching. A centralized node acts as a cache and as a P2P tracker with a proxy, to improve the Quality of Experience of video streaming for users of OSNs and to reduce inter-AS traffic.
• Virtual Incentives (vINCENT), which aims to leverage unused wireless resources among users, while ensuring security by employing tunneling among trusted users. vINCENT exploits social relationships derived from the OSN, as well as interest similarities and locality of exchange of OSN content.
• Mobile Network Assistant (MONA), which schedules wireless data transmissions to reduce the energy expended on the air interfaces, exploiting changes of mobile network performance depending on location and time patterns.
• Multi-Criteria Application Endpoint Selection (MUCAPS), which improves users' Quality of Experience by selecting communication endpoints with awareness of the underlying network topology, provided by ALTO protocol extensions, and of the end-user access.

All SmartenIT EFS mechanisms are suitable for certain business contexts. Next, we primarily focus on the RB-HORST mechanism, which is the superposition of various EFS mechanisms and one of the software products of the project, and discuss its fit to the business landscape.
The business context that is most relevant is the Terminal and Service business model. In particular, the terminal-plus-service trends and respective business models indicate an increasing interest in innovation at the end-user device side. Thus, the SmartenIT EFS mechanisms, namely RB-HORST, SEConD, vINCENT, MONA and MUCAPS, which can deliver energy efficiency, advanced services and resource management, and increased agility and performance, can be very useful means for new differentiated services and terminal operations. A prominent example is the cloudlets business case: cloudlets are decentralized and widely-dispersed Internet infrastructures whose compute cycles and
storage resources can be leveraged by nearby mobile computers, i.e., a "data center in a box". The aforementioned SmartenIT EFS mechanisms are well positioned within this emerging business context.
This is also evident in the latest developments regarding Windows 10. In particular, mobile terminals running Windows 10 let their users exchange Wi-Fi network access with their Facebook friends, Outlook.com contacts, or Skype contacts, to give and get Internet access without seeing each other's Wi-Fi network passwords [22].
Furthermore, 5G is an emerging business landscape where the EFS mechanisms are of prominent importance. The large emphasis on Quality of Experience (QoE), energy efficiency and ubiquitous connectivity, as opposed to mere mobility, renders the RB-HORST functionality well placed in this context, since its logic could be utilized for the efficient coordination of horizontal and vertical hand-offs in a manner opaque to the user.
The novel features that RB-HORST brings to end-user devices and the respective services can be of high value for specific services whose monetization is greatly affected by QoE: video streaming delivery comprises a primary market segment whose size and dynamics justify the investment in RB-HORST and similar solutions. In this context, MUCAPS is also applicable, since it could be used for smart endpoint selection for the content or service delivery points: this is of high value in the video market, as well as in that of CDNs. Furthermore, the MUCAPS functionality could also be extended so as to perform smart content delivery in the context of 5G networks and mobile edge computing: user devices would use the MUCAPS service to associate end users' flows with the least costly interface and service/content delivery point available.
RB-HORST's mechanisms for caching, prefetching, and offloading are especially interesting for businesses that lack their own 3G/4G network with wide Internet coverage. Such a business, e.g., an ISP that has set-top boxes or routers at the premises of its customers, can quickly gain high WLAN coverage and compete with 3G/LTE providers with respect to data transmission. Furthermore, making the set-top box or router more intelligent, so that users could install various router apps, could boost software development on those devices, provided an open API is offered as with Android or iOS.
With MUCAPS, end users get a better QoE due to ISP-assisted application traffic optimization. ISPs can optimize routing costs and resource usage by means of either direct usage of MUCAPS, or of applications and end users accepting to use the MUCAPS option. As a result, CSPs get increased satisfaction ratings from their customers. If CSPs buy services from CDNs, the CDNs will easily keep them as customers. Besides, ISPs save inter-domain traffic costs due to content delivery from sources hosted locally or in a peering network. ISPs may buy MUCAPS from a vendor to optimize their routing costs and resource usage by optimizing application traffic. ISPs may also buy an ALTO server if they don't have one. As MUCAPS is transparent to them, ISP customers (CSPs, CDNs) and end users do not need to buy or employ any additional service, but may benefit from the improved resource allocation offered by the ISP thanks to MUCAPS. Besides, end users may also agree to have their connection type unveiled to the ISP.

3.3 System architecture

As already mentioned, SmartenIT envisions two approaches for the efficient management of Internet traffic. These are:
a) the Operator Focused approach, which manages traffic between Cloud-based services, large content providers, or data centers, typically residing in Tier-2 domains and involving traffic crossing Tier-1 domains as well,
b) the End-user Focused approach, which manages traffic destined to end users (in Tier-3 domains) from Cloud-based infrastructures with multiple Points of Presence (PoPs), i.e., caches, mirrors, surrogates, CDNs.
In deliverable D3.3 "Final Report on System Architecture" [11], the final system architecture was presented and documented in detail. The entities which constitute the SmartenIT architecture are the SBox (implementing the SmartenIT logic), the SDN Controller, the Network Entity (i.e., the router), the uNaDa (i.e., an enhanced home gateway) and the End User Entity (i.e., smartphone, laptop). The first three entities refer to the Operator Focused approach, while the last two refer to the End-user Focused approach. Figure 1 provides a visual representation of the aforementioned entities and their envisioned deployment.

Figure 1: The SmartenIT ecosystem and the involved architectural entities.


The SmartenIT architecture has been designed in such a way as to be able to host any of the initially described mechanisms. However, as certain mechanisms became more mature than others, two of them were selected for implementation. This fact, however, does not limit the applicability of the SmartenIT architecture to the rest of the envisioned mechanisms. In the following paragraphs, the architectural diagrams of these two mechanisms, namely DTM and RB-HORST, are described.

[Figure 2 diagram: the SBox server (Jetty container hosting sbox.war and sbox.jar, an sqlite DB accessed via JDBC, the Inter-SBox Communication Service over TCP, and the Traffic Manager, Economic Analyzer and QoS Analyzer components) communicates over HTTP/JSON with the SDN Controller server (sdn.jar on Floodlight 0.90), which controls the Network Device via OpenFlow; the Network Device is also managed via NETCONF and SNMP.]
Figure 2: The DTM architecture (a) and deployment (b) diagrams


Figure 2(a) presents the SmartenIT architecture instantiated for the DTM mechanism. The various components have been grouped into the respective entities, and the interfaces between components and entities have been identified. The color-coding of the components defines whether a component was implemented by SmartenIT (blue) or a third-party implementation was used (white). Figure 2(b) shows which specific technologies and containers were used to deploy the implemented components.


Figure 3: The RB-HORST architecture (a) and deployment (b) diagrams


Figure 3(a) presents the SmartenIT architecture instantiated for the RB-HORST mechanism. The various components have been grouped into the respective entities, and the interfaces between components and entities have been identified. The same color-coding rule applies here as well. Figure 3(b) shows which specific technologies and containers were used to deploy the implemented components. For more details on the generic SmartenIT architecture and its specific instantiations for the DTM and RB-HORST mechanisms, please refer to D3.3 [11].
Figure 4 presents the mapping between the SmartenIT architecture and MUCAPS. MUCAPS was not part of the project team's implementations, as its purpose is to add ISP-defined awareness of the transport network topology and costs to existing mechanisms. MUCAPS was developed as a standalone add-on that can easily be patched onto the SmartenIT architecture. The MUCAPS components are pictured in yellow.

Figure 4: SmartenIT Entities and components involved in MUCAPS


3.3.1 Implementation process
In the first year of work, the SmartenIT partners chose the mechanisms which were to be implemented in the following years. The main candidates for implementation were RB-HORST and DTM. The implementation process was preceded by system design. From that moment, the project team responsible for implementation started working on the system architecture. In the first stage of design, the core functionalities were identified, and the system components and the internal and external interfaces were defined for each selected mechanism. The authors of the other mechanisms added the elements required for future extensions specific to their particular mechanisms. Each mechanism was specified on two levels: the mechanism description level (WP2 perspective) and the implementation level (WP3 perspective). The architecture was then frozen and implementation started. The implementation went smoothly. During the project, new functionalities were added to the system and presented in the form of several releases. The key SmartenIT system releases were published on GitHub.
The last release contains functionalities related to several mechanisms (RB-HORST, DTM and ICC). Generally, the SmartenIT work on software followed the agile programming paradigm. The process of producing consecutive releases of the SmartenIT system evolved over three years.
The work on the last releases showed that the cooperation between partners had matured: the two levels of specification took the form of a single document containing both the WP2 and WP3 perspectives, the communication between the partners responsible for the selected components improved, and an appropriate form of specification was worked out, which accelerated the programming process. The interfaces between components were properly defined, enabling the implementation of new features. The last releases also proved that the architecture was well designed: it was flexible enough to incorporate further management mechanisms.

3.4 Business perspective and applicability to main IP traffic categories

The business aspects of traffic management mechanisms depend on the goals and incentives of all parties involved in service provisioning over the Internet, which differ between popular Internet platforms, users, content delivery networks, and network providers (ISPs) on the transport path.
Four main categories of Internet traffic to be optimized by different traffic management mechanisms in SmartenIT can be distinguished:

• traffic distributed via over-the-top (OTT) content platforms supported by a large global CDN, i.e., CDN-to-user traffic,
• traffic via smaller web platforms without support by large global CDNs, i.e., cloud/server-to-user traffic,
• user-to-user traffic, e.g., conversational voice or video, P2P file sharing, etc.,
• transit traffic between data centers (DC-to-DC traffic).

The impact of each category on the Internet traffic mix varies and develops over time. In the period 2001-2005, user-to-user traffic was dominant, driven by high-volume file sharing applications [25] (since 2004, P2P traffic has been declining). Since then, client-server based traffic has been increasing, mainly on video and IP-TV platforms, e.g., YouTube or Netflix, together with other cloud servers of different sizes, which generate CDN/server/cloud-to-user type traffic. A major portion of this traffic is distributed over only a few CDNs (Google, Akamai, etc.) with a dense global footprint, but traffic from smaller clouds and from spontaneously popular web sites without global CDN support also sums up to a considerable fraction of IP traffic. The volume of file sharing traffic in Europe is constant or slightly decreasing according to Cisco [24], and its share decreased below 10% of the total downstream traffic in recent Internet traffic reports by Sandvine [30], because other traffic types are growing much faster (for detailed graphs and figures, refer to D2.1 [4]). The current composition of those traffic types has significant influence on the options for traffic engineering and control mechanisms. In the following discussion of appropriate management measures for each traffic type, a reduction of the traffic load always means lower cost and energy consumption, because upgrades of capacity may be delayed or resources may be switched off due to better resource usage. Over the last decade, energy consumption has grown into a major component of operational network costs (OPEX), especially in high-speed core networks and in the radio access of wireless/mobile networks. Energy consumption is becoming a limiting factor of bandwidth growth in future IP networks. Therefore, studies on energy saving options, as provided e.g. by the MONA mechanism in Section 4.2.4 or the ICC mechanism in Section 4.2.1, are an important focus of SmartenIT.
For OTT cloud services and web sites with support by a global CDN, i.e., CDN-to-user traffic, the CDN often has control over most of the transport path, from the original site of the web server to the caching and content distribution servers, which are available at the main Internet exchange points (IXP) [38] around the world and have peering connections with most ISPs. Based on a large server infrastructure in the Internet, CDNs with a global footprint can apply load balancing under the control of their own servers, taking into account the load in the distributed server architecture and the network load on the connections between the distribution servers and the sites providing the original content. For this purpose, the dynamic traffic management (DTM) mechanism as described in Section 4.2.1 can be applied. Moreover, global CDNs also maintain and optimize the connections from their servers to the user, including fast switching procedures to another server offering better performance when QoS degradation in terms of low bandwidth or long delays is observed by QoS monitoring [31].
Moreover, CDN-to-user traffic is also supported by caching within the platform of the CDN provider. On the other hand, caching options beyond the CDN are often prohibited, because they only partly conform to the business interests of global CDNs. Additional caching in user-controlled nano data centers (uNaDas), as proposed by the RB-HORST approach in Sections 4.2.2 - 4.2.3, or in home gateways under the control of an ISP, can further improve the throughput and delays of services. This may be less relevant for small ISP networks whose peering with global CDN servers is already close to the users. For large ISPs, which still have a number of hops between a peering CDN server and their users, and for mobile and wireless network providers, where the access link via the air interface is the bottleneck, caching in global CDN servers is often far from an optimal transport solution. Nonetheless, global CDNs usually have a higher business interest in keeping complete control over the connections to the users in order to get full information about user activity, because usage statistics are essential for the revenues of content and service providers.
Therefore, cooperative approaches between large CDN providers and ISPs seem to be the only viable way. In fact, such approaches have already been developed. For example, SmartenIT partners TDG and TUD have studied an approach where the CDN provider maintains full control at the application layer. At the same time, the network provider can optimize the network layer independently, with an SDN controller interface for the exchange of information about requests from users and the corresponding selection of an appropriate server and QoS support within the ISP network. Moreover, the MUCAPS mechanism and developments proposed by BENOCS [43] investigate similar cooperative traffic management approaches.
The situation is different for the second type, cloud/server-to-user traffic without support by a global CDN, which allows for transparent caching in uNaDas as part of the RB-HORST mechanism, and/or in home gateways of an ISP or similar caching facilities in local networks. Small clouds and other web services without a background CDN would face a considerable performance handicap as compared to OTT services, and can therefore benefit a lot, in terms of improved throughput and shortened delays, from transparent caches in ISP networks and user premises. ISPs benefit from a transparent caching architecture that reduces the load on expensive interconnection and peering links and improves throughput for their subscribers. The RB-HORST caching approach can still offer some incentive to the users, who get improved performance, but naturally it is more challenging to set up an efficient cooperative service with limited shared storage resources.
In principle, DTM is also applicable to inter-domain traffic of the cloud/server-to-user type
between clouds and ISP domains. However, on both sides many connections have to be
maintained with different small to medium size clouds or ISP domains, respectively,
whereas DTM assumes established links between one cloud provider and an ISP. The
options for monitoring and for influencing the traffic flows also have to be adapted to such
a scenario.
The user-to-user traffic type is also amenable to caching options, if a small set of popular web objects is requested by a large user population, as a general precondition for caching efficiency. Moreover, approaches based on application-layer traffic optimization, as discussed in the ALTO working group at the IETF, are relevant [27]. Such approaches are included in the MUCAPS mechanism in Section 4.2.5 as an advanced ALTO implementation. The basic concept of an ALTO server that provides network-layer information, especially distance metrics between users, in order to optimize transport efficiency and costs, is also considered as part of the DTM concept and of the overall SmartenIT architecture. However, the concept relies on ISPs to deploy and operate ALTO servers, although their incentive is not entirely clear. Indeed, enabling higher throughput for, e.g., file sharing data may end up in even higher P2P traffic demand rather than reduced network load [25]. On the other hand, an ALTO server has to adapt to the requirements of different applications, and it is not certain whether an application is willing to trust the server's recommendations, which are based on unknown ISP preferences and may lead to unexpected and hardly controllable side effects. However, the most relevant data for the ALTO service is information about the autonomous system to which a user belongs, which is available in public data about the IP address ranges of ISPs and other organizations, without the need to rely on special ISP involvement. A more detailed picture of regional user locations often does not help, because even a transport path between users in the same region usually has to pass through a core PoP, and transport in the core of an ISP has the lowest per-Mb/s costs on high-speed backbone links.
Last but not least, traffic between data centers (DC-to-DC) amounts to a considerable portion, about one third, of the Internet traffic according to reports by Cisco Systems [24]. Most of this traffic again flows between data centers of the largest global CDN providers. In other cases, when data centers in different domains are connected, DTM can be applied if several paths are prepared for the data center exchange. The characteristics of inter-data-center traffic are expected to be much more variable than for CDN/cloud-to-user traffic, because there is no comparable statistical multiplexing gain as is exploitable when distributing data over a large user population.
DC-to-DC traffic should be deferred to low-load periods, e.g., overnight rather than during busy hours, wherever possible, as done by the ICC mechanism in Section 4.2.1. The daily DC-to-user traffic profiles usually follow a sine-like curve, with a peak in the evening hours and less than half of the peak traffic rate in the early morning hours, as shown in traffic statistics, e.g., of the world's currently largest Internet exchange point (DE-CIX) [32]. Prefetching, at night time, content that is expected to become popular the next day can then reduce the daily peak rate. In general, the caching strategy can be modified, e.g., from least recently used (LRU) with a steady cache input flow to a strategy that defers cache updates to off-peak periods.
User demands are the original source of traffic of any type. In principle, the users decide which applications and traffic types they generate. However, the mass of Internet users prefers low rates for Internet access and IP services for free or at low prices. As a consequence, OTT business models are partly based on advertisement or on small monthly fees, and access network providers mainly offer best-effort services, whereas special QoS support is reserved for a small traffic portion devoted to business customers. Nonetheless, the users can expect a basic quality level, which is more and more closely controlled by the European regulators in recent quality initiatives, based on a development towards large-scale performance measurements in the IETF LMAP standardization (cf. [23], [29]). Finally, the relationships between users in social networks have been studied in
the RB-HORST mechanism in order to predict data requests, improving caching strategies so as to hold the most popular data close to the users.
In conclusion, different traffic types require different traffic engineering mechanisms, where load balancing, ALTO location servers and caching options can be combined and adapted to many scenarios in order to optimize the data transport of services. The main aim of the SmartenIT mechanisms is to provide refined monitoring methods and flexibility for fast responses to changing network traffic patterns, which can improve QoS/QoE measures in a cost-efficient way, provided that stability demands are still met and the potentially higher solution complexity can still be handled in network operation. The development towards advanced traffic management methods is indispensable to cope with fast-growing data volumes as well as the more thorough control of QoS guarantees currently enforced by the European regulators (cf. [23], [27]). While traffic engineering has already been optimized on provider platforms under a single administration [26], there is still considerable potential for improving end-to-end transport through heterogeneous network domains via CDNs, clouds and distributed data centers operated by multiple providers [28] with different business goals and traffic engineering approaches in each domain, although all parties have the goal of efficient service and QoS provisioning in common. Table 1 highlights which specific SmartenIT mechanisms cover each type of traffic.
Table 1: SmartenIT mechanisms mapping

Main categories of Internet traffic | SmartenIT traffic management mechanism
CDN-to-user traffic | RB-HORST, MUCAPS
Cloud/server-to-user traffic | DTM, RB-HORST
User-to-user traffic | RB-HORST, MUCAPS
DC-to-DC traffic | ICC, DTM

4 Assessment criteria and methodology

In order to evaluate the performance of existing applications and newly proposed traffic management solutions, models of the addressed use cases and their key performance metrics and parameters have been defined. These key performance metrics and parameters were used in analytical and simulative evaluations. To assess the overall criteria and methodology, the key performance metrics and parameters for the experiments and simulations per traffic management mechanism are used as a basis; details of those are provided below. Based on these detailed metrics and parameters, an assessment of metric and parameter categories is also carried out. Furthermore, the business perspectives for those traffic management mechanisms are described as well.

4.1 Methodology for Parameters and Metrics

SmartenIT employed a common methodology to describe parameters and metrics across the selected use cases. As outlined in D2.3 [6], the parameters are used as follows:
Parameters should be interpreted as configuration parameters for the testbed, e.g., link/router capacity or initial load in a data center facility, or as parameters to be used as the starting condition for the use case. This interpretation pertains to the input assumptions and configuration decisions of the use case. Typical value ranges used during the performance assessment phase can also be provided alongside the respective parameter to which they apply, so as both to investigate the impact of crucial parameters on the use case scenarios run and to perform sensitivity analysis (if needed).
The parameters have been divided into generic and specific parameters. While the specific parameters are relevant for the experiments, the generic parameters provide a high-level overview and are not tailored to a specific use case or traffic mechanism.
As outlined in D2.3 [6], metrics are used as follows:
Metrics are considered as the means to quantify the output of the use case and to provide numerical insight into the most important aspects of the use case and the respective performance issues. Additionally, specific metrics may also serve to assess the use case under the assumption that a certain mechanism is employed.
Relevant metrics are described in [6] for each use case, providing information related to the definition of the metric itself as well as a proper description of how, where and when the measurements are done.
The following high-level metrics are of interest within the scope of SmartenIT: traffic reduction for ISPs, QoE improvement, cost reduction, and energy savings from a global perspective. In the following, for each traffic management mechanism, the metrics and parameters are assessed. For the metrics, it is explained how and why they are used. The parameters are split into dynamic parameters (variable) and static parameters, explaining in each case why the decision (static or variable) has been made and why these parameters have been chosen. While all the dynamic parameters are listed, only the most important of the static parameters are described.

4.2 Traffic Management Mechanism Evaluation Criteria

The metrics and key performance indicators were described initially in D2.2 [5], D2.3 [6], and D4.2 [14]. They have also been elaborated in D2.5 [8], which summarizes the simulation results and WP2 outcomes. The following section provides an overview of those metrics and parameters, explaining why they were used for the experiments and why they are important. Based on those parameters and metrics of the experiments, an overview of high-level evaluation criteria and metrics is given.
4.2.1 OFS: ICC, DTM & DTM++
ICC, and in particular its network traffic management logic, has been partly integrated into DTM++. The respective ICC performance metrics are essentially the same as those of DTM++ when the goal is the reduction of the 95th percentile of the ISP. Table 2 below provides an overview of the performance metrics for ICC. Table 3 provides an overview of the ICC parameters.
Table 2: Performance metrics for ICC

Metric | Assessment
Total amount of traffic at the end of the billing period | Represents the total traffic transferred during the billing period.
Total cost achieved | The transit cost of the ISP with and without ICC.
Traffic patterns attained | The traffic patterns of the ISP's manageable and non-manageable traffic over the transit link, with and without ICC.
Time-shiftable traffic extra delays | The extra delays incurred for the time-shiftable (delay-tolerant) traffic due to ICC operation.
Table 3: ICC parameters

Parameter | Assessment
Transit link capacity C (static) | This parameter provides the upper bound for the link utilization.
Target 95th percentile Ctarget (static) | The target 95th percentile. ICC is assessed on whether this goal was reached; if not, the respective positive/negative deviation from the target is quantified.
Number of epochs y (static, y=10) | The number of times that ICC is invoked within the 5-min interval. Within the context of the DTM++ experiments, this ICC parameter is essentially the same as the invocation of DTM, i.e., y=10.
Threshold parameters tholds (static; values 1,...,y-1 set to 0.9, the y-th value set to 1) | These parameters define the target rate within each epoch of the 5-min interval for ICC.
Traffic (variable) | The traffic upon which the ICC rate control will be applied. For the DTM++ experiments this is by definition the same, since ICC is an integral part of DTM++.
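One plausible reading of how these parameters interact is sketched below: in each of the y epochs of a 5-minute interval, the rate budget for delay-tolerant traffic is whatever headroom the threshold-scaled target leaves once non-manageable traffic is accounted for. This is a simplified sketch under the static settings of Table 3, not the hierarchical-policer realization described in D2.5 [8]:

    # Simplified sketch of ICC per-epoch rate control. Assumes the static
    # parameter settings from Table 3: y = 10 epochs per 5-minute interval,
    # thresholds 0.9 for epochs 1..y-1 and 1.0 for the last epoch.

    Y = 10
    THOLDS = [0.9] * (Y - 1) + [1.0]

    def allowed_tolerant_rate(epoch, c_target, nonmanageable_rate_mbps):
        """Rate budget for delay-tolerant traffic in the given epoch (0-based):
        the threshold-scaled 95th-percentile target minus the rate already
        consumed by traffic that ICC cannot shift."""
        budget = THOLDS[epoch] * c_target - nonmanageable_rate_mbps
        return max(budget, 0.0)   # never a negative sending rate

    # Hypothetical example: 1 Gbit/s target percentile, 850 Mbit/s background.
    for e in range(Y):
        print(e, allowed_tolerant_rate(e, c_target=1000, nonmanageable_rate_mbps=850))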

The DTM experiments are dedicated to the performance evaluation of this management mechanism. In D4.2 [14], Section 3.1, a set of experiments has been defined. These experiments are divided into two groups, depending on the charging rule applied for the inter-domain traffic (volume and 95th-percentile rules). In the experiments, we evaluate the DTM system for different inter-cloud connections (single-to-single and multi-to-multi cloud communication). The operation of the system is inspected for different billing periods, cost functions and traffic patterns. The set of evaluation metrics and KPIs has been defined in D4.2 [14].
It was decided that the performance of DTM++ would not be evaluated experimentally; only functional tests were performed. DTM++ is a synergetic solution that was designed during the last project year. The specification of the realization of ICC using a hierarchical policer (cf. D2.5 [8]) and its integration with DTM was a difficult and time-consuming process. The evaluation of the ICC and DTM++ implementations required building a new testbed environment (on hardware routers). Consequently, before the end of the project, only functional tests of the DTM++ implementation were done; they are presented in this deliverable in Section 5.1.3. The results of the ICC functional test (operation without DTM) have been presented in D2.5 [8]. The table below collects the performance metrics for DTM and DTM++.
Table 4: Performance metrics for DTM

Metric | Assessment
Total amount of traffic at the end of the billing period | Represents the total traffic transferred during the billing period via the respective links. One can compare the total amount of traffic sent when DTM (also used for DTM++) is active against the case when it is not. This is particularly useful for DTM++, where ICC operates too, because it indicates how ICC influences the total traffic transfer.
Total cost achieved | The total cost the ISP pays for inter-domain traffic in a billing period. One can compare the cost when DTM (also used for DTM++) is active with the situation without it.
Total cost expected | Calculated by DTM (also used for DTM++) based on the reference vector. It indicates the expected total transfer cost for the next billing period. Comparing the total cost achieved with the expected cost shows how well the compensation mechanism operates.
Amount of manageable traffic | Indicates how much traffic is generated by the source DC. If measured at the receiving side, it shows how much traffic is received there.
Relative monetary gain of using DTM (one KPI per link) | The ratio of the total cost with traffic management to the total cost without traffic management. DTM may be used in a multi-homed autonomous system; in the experiments, DTM optimizes the cost of inter-domain traffic in an AS which possesses two inter-domain links, link 1 and link 2 (cf. D4.2 [14]). The KPI for link 1 indicates the relative cost reduction for the case when the default BGP path includes link 1 (all manageable traffic passes inter-domain link 1). In turn, the KPI for link 2 denotes the monetary gain of balancing the traffic with DTM instead of using link 2 as the default BGP path. If both values are lower than 1, the ISP benefits from using DTM regardless of which link would be used as the default BGP path. If, for instance, the KPI for link 1 is greater than or equal to 1, it is better for the ISP to transfer all manageable traffic via link 1, i.e., to use this link as the default BGP path without employing DTM.
Absolute cost benefit | The absolute transit cost benefit (or loss) from using DTM, if link 1 or link 2, respectively, is considered as the default path.
Achieved-to-expected cost ratio | This KPI represents the ratio of the achieved cost to the cost expected if the achieved distribution of traffic among links were exactly equal to the reference vector. It is especially useful when DTM++ is evaluated.
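The cost-ratio KPIs in Table 4 reduce to simple arithmetic; a minimal sketch with hypothetical cost figures is given below:

    # Sketch of the DTM cost KPIs from Table 4 (hypothetical cost figures).

    def relative_gain(cost_with_dtm, cost_without_dtm):
        """Ratio of total inter-domain cost with traffic management to the
        cost without it; values below 1 mean the ISP benefits from DTM."""
        return cost_with_dtm / cost_without_dtm

    def absolute_benefit(cost_with_dtm, cost_without_dtm):
        """Absolute transit cost saved (positive) or lost (negative)."""
        return cost_without_dtm - cost_with_dtm

    cost_dtm = 900.0     # billing-period cost with DTM balancing both links
    cost_link1 = 1000.0  # cost if all manageable traffic used link 1 as default
    cost_link2 = 1200.0  # cost if all manageable traffic used link 2 as default

    for default_cost in (cost_link1, cost_link2):
        print(relative_gain(cost_dtm, default_cost),
              absolute_benefit(cost_dtm, default_cost))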

Table 5: DTM parameters

Parameter | Assessment
Traffic profiles: amount of traffic on each inter-domain link; share of manageable traffic in overall traffic; for DTM++, share of delay-sensitive and delay-tolerant traffic in manageable traffic; pattern of manageable traffic | Traffic profiles are not DTM configuration parameters but input constraints important for the configuration of DTM and its optimization potential. The potential of DTM (and DTM++) to lower the ISP's cost of inter-domain traffic depends on the amount of manageable traffic: the more manageable traffic, the more freedom for distributing traffic among inter-domain links and the more potential for cost optimization. The traffic patterns also play a role. Generally, short flows are easier to manage, especially in the case of a 95th-percentile-based tariff. If long flows dominate and the flow inter-arrival time is long, the ISP needs to select the DTM configuration parameters more carefully (especially the SDN controller mode). The potential for better control of the traffic cost provided by the ICC functionality in DTM++ depends on the amount of delay-tolerant traffic. In the case of a volume-based tariff, the most important aspect is the share of manageable traffic in the total traffic; the detailed traffic pattern is less important, due to the long time period (the whole billing period) available for traffic compensation.
Cost function on inter-domain links and type of tariff | This function again has the role of an input constraint for DTM. If the cost functions on the links are identical or purely linear, then DTM cannot lower costs; in such a case DTM++ (thanks to the ICC functionality) may still offer cost reduction. Otherwise, if the cost functions on the links differ and are not linear (they must be at least piecewise linear), both DTM and DTM++ offer cost reduction. The type of tariff (volume or 95th percentile) is important for the selection of the DTM configuration parameters.
DTM configuration parameters: DTM report period and compensation period; SDN controller mode; compensation threshold (for modes without reference); ICC threshold (for DTM++) | In reactive modes, a new decision on tunnel (link) selection (determined by the compensation vector) is applied to new flows only; active flows still transfer packets over the previously selected tunnel, and the SDN controller must maintain a table of all active flows. In proactive mode, once a new compensation vector is received, all flows (active and new) are switched to the tunnel indicated by the C vector value; in this case the SDN controller does not need to maintain a table of all active flows. Generally, the reaction to the need for traffic compensation is faster in proactive mode; especially in the case of long flows and large flow inter-arrival times, proactive mode offers efficient management. The drawback of proactive mode is the possibility of out-of-order packet delivery. The distinction between "with reference" and "without reference" modes adheres to the placement of the responsibility for the current assessment of the compensation effect (the SBox at the ISP doing the optimization, i.e., receiving the traffic, or the SDN controller at the sender side). The DTM reporting period and the compensation period determine the frequency of the C vector calculation and updating. More details on the DTM parameters can be found in D2.4 [7], D2.5 [8], and D3.4 [12].

4.2.2 EFS#1: RB-HORST Caching

As stated in D4.2 [14], the goal of this experiment is to test the basic caching functionality of RB-HORST and to evaluate the cache performance. The metrics are presented in Table 6. An evaluation of the caching performance with respect to response time is not considered, as our implementation is a proof of concept written in an interpreted language (Python). Comparing the response times of a highly optimized web server to our prototype would not give a clear picture of the performance.
Table 6: Caching functionality metrics in RB-HORST

Metric | Assessment
Cache hit rate of uNaDa | The hit rate measures how much content was served by a uNaDa. When content is downloaded from a uNaDa, inter-AS traffic is saved; thus, the higher the cache hit rate, the more traffic is saved. Since the requests follow a strong Zipf distribution, a similar result is expected with respect to the cache hit rate. Such a result indicates that the basic caching functionality is working.
Requests served by uNaDa | The served requests measure how many times content was served by a uNaDa. Since the caching experiment has an increasing number of users, an increasing number of served requests is expected. Thus, if such a behavior can be observed, the basic caching functionality is working.
Bandwidth utilization from traffic traces | The bandwidth utilization and the saved traffic should lead to the same conclusion as the cache hit rate and the requests served. This test metric shows that the video data was transferred over the network and that the video content was delivered.
Energy consumption | The caching experiment was conducted with Odroids, for which no energy model is available. Thus, the energy consumption metric could not be evaluated. Since the main goal of this experiment, as stated in D4.2 [14], was to test the basic caching functionality and cache performance, the energy consumption evaluation was omitted.
QoE (compute stalling events from download bandwidth and video bitrate) | The QoE evaluation was also not considered in the caching experiment, for the same reason as the energy evaluation: the main goal was to test the basic caching functionality and cache performance, which is achieved with the first three metrics (cache hit rate, requests served by uNaDa, and bandwidth utilization).

In the following table, the parameters are assessed. Two variable parameters were used in
the experiment, while two static parameters were fixed. Changing these two static
parameters to variable parameters could have a high impact on the experiments.
Table 7: Caching functionality parameters in RB-HORST

Parameter | Assessment
Number of end user devices (variable, 1-8 devices) | The number of devices shows the scalability of RB-HORST. The numbers range from 1 to 8, thus representing 1-8 users connected to a uNaDa. Besides the RB-HORST software, a further limiting factor for the number of devices connected to a uNaDa is the WiFi capacity.
Video request rate (variable, 1/16 min - 1/0.5 min) | Since the video request rate is difficult to model, the following values have been used: 1/16 min, 1/8 min, 1/4 min, 1/2 min, 1/1 min, 1/0.5 min. As the expected result of this experiment is that these values should not have an influence, an accurate model is not required.
Cache size (static, ~15 GB) | The cache size has been set to the device limit of ~15 GB. This setting is the same as in the large-scale study. A lower cache size would eventually lead to higher Internet usage, as fewer videos would be cached.
Request mechanism (static, Zipf distribution) | The request mechanism is a Zipf distribution, where popular content is requested more often. Typically, such Zipf distributions are used to model user behavior. The worst request mechanism would be to pick videos at random, which would lead to a much lower cache hit rate.
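To illustrate why the Zipf request model matters for the expected hit rate, the following sketch simulates cache hits under Zipf-distributed requests (hypothetical catalogue size and skew, and a plain LRU cache rather than the RB-HORST implementation):

    import random
    from collections import OrderedDict

    def simulate_hit_rate(num_videos=1000, cache_slots=50, requests=10000, skew=1.0):
        """Simulate an LRU cache under Zipf-distributed video requests:
        rank r is drawn with probability proportional to 1/r^skew."""
        weights = [1.0 / (r ** skew) for r in range(1, num_videos + 1)]
        cache = OrderedDict()                  # LRU cache of video ids
        hits = 0
        for _ in range(requests):
            video = random.choices(range(num_videos), weights=weights)[0]
            if video in cache:
                hits += 1
                cache.move_to_end(video)       # refresh LRU position on a hit
            else:
                cache[video] = True
                if len(cache) > cache_slots:
                    cache.popitem(last=False)  # evict least recently used video
        return hits / requests

    # With Zipf-skewed requests the hit rate far exceeds the roughly 5% that
    # uniformly random requests would yield for 50 cache slots and 1000 videos.
    print(simulate_hit_rate())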

4.2.3 EFS#2: RB-HORST Large Scale Study

As stated in D4.2 [14], the goal of this experiment is to test the prediction and caching mechanisms of RB-HORST under realistic conditions. The metrics for the assessment are presented in Table 8.
Table 8: Large scale RB-HORST study metrics

Metric | Assessment
Energy consumption of home router | The energy consumption of the uNaDa (Odroids) is to be determined using system utilization traces collected during the large-scale study. Using this metric, the overall energy cost of using RB-HORST can be determined, based on which comparisons with conventional streaming can be derived.
Prefetching efficiency | Prefetching has to be analyzed in terms of efficiency. This metric is calculated by dividing the number of watched and prefetched videos by the total number of prefetched videos. The metric indicates the success ratio of the prefetching.
Cache hit rate | The cache hit rate measures how much content was served by a uNaDa. When content is downloaded from a uNaDa, traffic is saved (inter-AS traffic for the ISP); thus, the higher the cache hit rate, the more traffic is saved. The large-scale study is an excellent opportunity to evaluate the caching functionality in uNaDas. Therefore, this metric is used to assess the caching functionality, i.e., the cache size and the cache policy.
Requests served | Requests served is a basic metric which indicates the consumption of Vimeo videos through the uNaDa. There are three types of requests being assessed, differing in the way they were served: from Vimeo, from a neighbor uNaDa, and from the local uNaDa.
Bandwidth utilization from traffic traces | The bandwidth utilization describes the traffic on the Ethernet and WiFi interfaces of the uNaDa. From these measurements, first the power consumption of the uNaDa is to be derived; secondly, the changes in network traffic between on-demand streaming and social offloading using RB-HORST are to be determined.
Inter-domain traffic produced by prefetching | Inter-domain traffic is the primary concern of RB-HORST, and therefore it is crucial to assess how much additional inter-domain traffic is generated by the prefetching mechanism. This metric is related to the prefetching efficiency metric, which gives the percentage of unnecessary inter-domain traffic. The inter-domain traffic generated also depends on the source of the content, since content served from uNaDas that are in the same or even a neighboring AS introduces less inter-domain traffic than downloading from Vimeo servers.
Inter-domain traffic saved | This is a key performance indicator (KPI) of RB-HORST. With this metric, the combined performance of the prefetching and caching mechanisms can be assessed.
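The two prefetching-related metrics in Table 8 combine as in the short sketch below (the log counts and volume are hypothetical):

    # Sketch of the prefetching KPIs from Table 8 (hypothetical figures).

    def prefetching_efficiency(watched_and_prefetched, total_prefetched):
        """Share of prefetched videos that were actually watched."""
        return watched_and_prefetched / total_prefetched if total_prefetched else 0.0

    def wasted_interdomain_bytes(prefetched_bytes, efficiency):
        """Extra inter-domain traffic caused by prefetched-but-unwatched content."""
        return prefetched_bytes * (1.0 - efficiency)

    eff = prefetching_efficiency(watched_and_prefetched=120, total_prefetched=400)
    print(eff)                                  # 0.3
    print(wasted_interdomain_bytes(50e9, eff))  # bytes fetched in vain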

Table 9 presents the experiment parameters and their values used during the large-scale study. Due to the size and duration of a large-scale study, it is impractical to vary several parameters and re-run the whole study. Therefore, all parameters are static and were chosen to produce the most data points for the later evaluation. For example, the predictions were executed every hour, which allows the granularity to be reduced by taking only every 2nd or 4th prediction into account for the assessment.
Table 9: Large scale RB-HORST study parameters

Parameter | Assessment
Cache size (static, 1100 MB) | The maximum size of the cache used for prefetching and caching.
Cache size threshold (static, 100 MB) | The threshold used to trigger cache maintenance. Contents which have expired (have been in the cache longer than their time to live) are removed during this maintenance.
Cache time to live (static, 120 h = 5 days) | The TTL for the cache determines how long a video may reside in the cache after being watched the last time. Videos with an expired TTL may be removed by the cache maintenance task even if the cache is not full or near full.
Social queries interval (static, 10 min) | The time between two consecutive updates of the social network data from Facebook. The friends list, feed items, etc. are updated after the specified interval.
Overlay updates interval (static, 60 min) | The interval between two consecutive overlay updates. Overlay updates are used to make sure that no stale data resides in the overlay and that non-responsive neighbors are removed.
Overlay predictions interval (static, 60 min) | The time interval between two overlay predictions. It was set to a low value (60 min) for the large-scale study to obtain many predictions for the assessment.
Social predictions interval (static, 60 min) | The time interval between two social predictions. It was set to a low value (60 min) for the large-scale study to obtain many predictions for the assessment.
Overlay prediction score threshold (static, 0) | The overlay prediction threshold defines what score is needed for a video to be prefetched. To gain the most data points for the assessment, it was set to 0 for the large-scale study.
Social prediction score threshold (static, 0) | The social prediction threshold defines what score is needed for a video to be prefetched. To gain the most data points for the assessment, it was set to 0 for the large-scale study.
AS hop limit (static, 1 hop) | The AS hop limit is the maximum number of AS hops to a neighbor uNaDa from which content can be downloaded. This was set to 1 for the study to ensure that only very close neighbors were used to prefetch content. With a maximum of 1 hop, only neighboring ASes were used; by definition, both must then be access networks and therefore have a peering agreement rather than a transit contract.

4.2.4 EFS#3: MONA (including EEF)

The metrics of interest are listed and described in Table 10. The main goal of the analysis is to derive the energy consumption of the overall system. As it is not possible to directly measure the power consumption of devices handed out to end users, a model-based measurement approach was chosen. Hence, intermediate metrics are collected on all devices, allowing the power consumption of the devices to be determined using a calibrated power model.
Table 10: MONA performance metrics

Metric | Assessment
Energy consumption | The energy consumption of the smartphones is compared depending on different connectivity options. These include streaming via 3G/LTE, on-demand streaming/offloading using the RB-HORST AP, and streaming content from the local cache.
Throughput | The throughput of the individual connections is measured, as it limits the available video quality and also influences the power consumption of the device when streaming or downloading content.
RTT | The RTT is measured from the smartphone with respect to the local uNaDa and a server acting as remote endpoint for up- and download measurements. The RTT between the uNaDa and the measurement server is also measured. From these measurements, the influence of RB-HORST on the performance of time-critical services is derived.
Signal Strength | The signal strength of the respective wireless network connection limits the maximum throughput and influences the RTT by increasing the number of retries on lower layers. The energy consumption is also influenced by increased re-transmissions, lower throughput, and the higher transmission power required to reach the remote network.
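
To make the model-based approach concrete, the following Python sketch shows how intermediate metrics sampled on a device could be turned into an energy figure with a calibrated power model. The coefficients and the linear model form are illustrative placeholders, not the calibrated Nexus 5 values.

    # Baseline radio power per network technology in mW (made-up values).
    BASE_POWER_MW = {"3G": 800.0, "LTE": 1200.0, "WiFi": 400.0}

    def estimate_energy_joules(samples):
        """Integrate a simple power model over time.

        `samples` is a list of (duration_s, technology, throughput_mbps)
        tuples collected on the device; power is modeled as a technology
        baseline plus a throughput-dependent term.
        """
        energy_mws = 0.0
        for duration_s, tech, throughput_mbps in samples:
            power_mw = BASE_POWER_MW[tech] + 50.0 * throughput_mbps
            energy_mws += power_mw * duration_s
        return energy_mws / 1000.0   # mW*s -> J

    # Example: 60 s of LTE streaming at 8 Mbit/s, then 60 s on WiFi.
    print(estimate_energy_joules([(60, "LTE", 8.0), (60, "WiFi", 8.0)]))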

For the analysis of the system, the parameters were selected as given in Table 11. The
network technology is automatically selected by the participating devices based on
availability. For the energy modeling of the network connections on the mobile device, the
power model of the Nexus 5 was selected, as it is a widely used device implementing the
required network technologies.
Table 11: MONA parameters (selected configuration indicated)

Parameter | Assessment
Network technology | One of 3G/4G/WiFi/RB-HORST WiFi, determined by availability (with preference for RB-HORST WiFi).
Smartphone device type | Nexus S, Nexus 5. The Nexus 5 was selected due to the availability of 4G connectivity.
uNaDa device type | Raspberry Pi, Odroid-C1. The Odroid-C1 was selected because of the system performance requirements.
Signal strength sampling interval | Continuous on change, while active on change, periodically. The change of signal strength was observed only while the device was active, limiting the energy consumed by the measurements while maximizing the number of measurements.
RTT measurement interval | On connect, periodically (1 min, 5 min, 10 min, 15 min, 60 min). The 1-minute interval was selected to be able to detect changes in network behavior while limiting the additional load on the system.
Throughput measurement interval | On connectivity change, periodically. Executed on connectivity change to sample the maximum number of network technologies while reducing the cost generated at the end-user side.
Throughput test duration | 5 s, 10 s, 30 s. The interval of 10 s was chosen to allow throughput stabilization also on links with higher RTT (e.g., 3G) while minimizing traffic cost.


4.2.5 EFS#4: MUCAPS


The MUCAPS experiments aim at selecting the best Application EndPoint (AEP), in the present case the video server best able to serve a user requesting video content with the best possible quality. Using both analytical and measurement evaluations, the experiments also depend on the player device and the type of connection. The goal is to make a selection that satisfies the ISP, by following its preferences as set as input utility, and also the user, whose Quality of Experience is evaluated by a specific tool.
The experiments measure the QoE on a user endpoint (UEP), i.e., the end user device receiving a video. The QoE is assessed w.r.t. the metrics listed in Table 12.
Table 12: MUCAPS performance metrics

Metric | Assessment
Cross Layer Utility (CLU) | This metric involves information such as routing costs and path BWS (BandWidth Score) that are defined by the ISP. The use of MUCAPS aims at optimizing these values so that the ISP can serve more users in a satisfactory way.
Video Quality Score (VQS) | This metric allows evaluating the global quality of experience of a video viewer. The goal is to serve all users with a satisfactory VQS value (at least 3/5).
Start Time Delay | Defined as the delay between the request of a video by a user and the beginning of the playout. A short start time delay (less than a few seconds) is considered necessary when watching videos, especially short ones.
NFrz, DFrz | Respectively the Number of Freezes and the Duration of Freezes during the playout of a video; these metrics capture how many times and for how long a video has stopped playing because of re-buffering events.
Media Bit Rate (MBR) | A higher media bit rate induces a better quality video. Useful to compare similar videos.
% corrupted frames | This metric is generally not useful when TCP is used, except when the decoder experiences performance problems. Because of the propagation of errors between successive frames, a few erroneous bits can affect many frames.

The Cross Layer Utility (CLU) represents the proximity of the AEP performance vector V(aep) to an ideal vector Vid of (RC, BWS) values, composed of the best observed performance values among the candidate AEPs. The CLU is based on the L1 norm of a weighted distance vector between V(aep) and Vid. The selected AEP is the one with the maximal CLU value.
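
Under one plausible reading of this definition (the weights, the sign convention, and all values below are our assumptions), the selection can be sketched in Python as follows; turning the weighted L1 distance into a negative utility makes "maximal CLU" coincide with "closest to the ideal vector".

    def clu(v_aep, v_ideal, weights=(0.5, 0.5)):
        """CLU sketch: negative weighted L1 distance between an AEP's
        (RC, BWS) performance vector and the ideal vector Vid, so that
        the selected AEP is the one maximizing the returned value."""
        return -sum(w * abs(a - i)
                    for w, a, i in zip(weights, v_aep, v_ideal))

    # Hypothetical candidate AEPs: (routing cost RC, bandwidth score BWS).
    candidates = {"server_A": (10.0, 0.9), "server_B": (4.0, 0.7)}
    # Vid: best observed value per dimension (lowest RC, highest BWS).
    v_ideal = (min(rc for rc, _ in candidates.values()),
               max(bws for _, bws in candidates.values()))
    best_aep = max(candidates, key=lambda aep: clu(candidates[aep], v_ideal))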


VQS is derived in two steps:

1. The QoE measurement step extracts from a video stream the values of the following metrics: DateTime Capture (s), Date, Clip Name, Resolution, Operator, Session ID, RTT (Avg, Std, Max), Video Access Time (s), Video Duration (s), Video Bitrate (bps), VQS Avg (Real), VQS Avg (Theor.), VQS Std (Real), VQS Std (Theor.), Frame Error (%), Freeze (%), Nb of Freezes, Freeze Duration (Avg) (s), Freeze Duration (Max) (s), Nb of Error Frames (for UDP streams only), Nb. of Packets, Packets Retransmitted (%), Network Video Jitter (Avg) (s), Network Video Jitter (Max) (s), Network Bitrate (Avg) (bps), Network Bitrate (Std) (bps), Late & Err. Frame Ratio (%), Frame Rate (Avg) (fps), Frame Rate (Std) (fps), Nb of Frame Resolutions, Frame Resolution (px).
2. In SmartenIT, we use the following metrics derived from the above measurement list: Media Bit Rate, Start Time, RTT, VQS, Freeze Nb, Freeze Duration (avg), Freeze Duration (max).
These metrics will be measured in several experimental conditions characterized by the
parameters listed in Table 13.
Table 13: MUCAPS parameters

Parameter | Assessment
MUCAPS Status (variable: G1, G2, G3, G4) | This composite parameter characterizes the degree of involvement of MUCAPS in the application session and takes 4 values, each defining a test group: G1: MUCAPS OFF; G2: MUCAPS ON + routing cost; G3: MUCAPS ON + BWS; G4: MUCAPS ON + routing cost + BWS.
UEP access (variable: WiFi, LAN, 3G) | This parameter defines the access technology used by the end user device receiving the video and impacts the capabilities of the end system receiving the video.
Video Resolution (variable: HR, MR, LR) | This parameter defines the image resolution of the displayed video and impacts the VQS, depending on the capabilities of the end user device receiving the video and the application path bandwidth. It takes the values High Resolution (HR), e.g., suited to a TV screen or a large PC; Medium Resolution (MR), e.g., a PC with WiFi access; Low Resolution (LR), e.g., a mobile phone.

4.3 Metrics, Parameters, and Assessment Summary

Summarizing the detailed presentation of the metrics evaluated in the experiments for the OFS and the EFS in Sections 4.2.1-4.2.5, these metrics can be divided into three dimensions:

- Economic metrics (e.g., cost, billing, and energy consumption)
- Performance metrics (e.g., throughput, signal strength, and bandwidth)
- User experience metrics (e.g., video quality score, startup delay)


Many performance metrics have a direct influence on energy efficiency, e.g., cache hit rate or bandwidth. The higher the cache hit rate, the fewer requests have to be served over the Internet, thus saving energy. The same applies to bandwidth and several other metrics: if bandwidth can be saved, then energy will be saved as well. Energy efficiency has thus been evaluated indirectly in the other scenarios. It is important to mention that the other two metric dimensions (economic and user experience) may be counterproductive with respect to energy efficiency, as the cheapest link may not be the most energy efficient one.
Furthermore, parameters were divided into the following categories:

- Static parameters
- Variable parameters

From the experiments, it can be observed that experiments with a simulative character have more variable parameters. For example, EFS#2 only has static parameters, as the system was implemented with real devices and real users, similar to the OFS experiments, which have only one variable parameter. Thus, parameters such as the social prediction interval or the cache size were fixed, because only one large-scale test run could be performed. EFS#1 and EFS#4, on the other hand, have a simulative character, and more variable parameters were evaluated.
Privacy and security were not evaluated in the experiments and were not part of any metric or parameter. However, e.g., for the large-scale experiments, certificates were used to ensure that data is securely transferred. Thus, although security was not directly evaluated in a quantitative manner, it was considered qualitatively in the experiments and actually provisioned in some of them. Additionally, mechanisms such as caching on uNaDas do not encrypt content, raising privacy issues, as the uNaDa owner may see what content his or her friends consume. On the other hand, prefetching to a friend's phone hides this information, as the friend may or may not consume the content in the end. A more elaborate analysis of the privacy and security issues applicable to the SmartenIT mechanisms is future work.
Many of the parameters and metrics that were reported in D2.2 [5], D2.3 [6], and D4.2 [14] were evaluated in specific scenarios; for those parameters and metrics that were not evaluated, detailed reasons were provided. Future work should include the evaluation of EFS and OFS in different scenarios. For instance, the uNaDa caching mechanisms can also be applicable to mobile/fixed edge computing.
Table 14: SmartenIT metric categories mapping

Metric category | Experiment
Economic metrics | MONA, ICC
Performance metrics | DTM, RB-HORST
User experience metrics | MUCAPS

Thus, the metrics chosen for all the experiments show that all three metric dimensions (economic metrics, performance metrics, and user experience metrics) are represented and evaluated accordingly, as shown in Table 14. For the parameters, a mix of variable and static parameters was chosen. Depending on the type of experiment, more parameters could be evaluated (simulative experiments), while for other experiments many parameters had to be fixed (real-world experiments). The relation between those is shown in Figure 5.

The EFS/OFS experiments (blue cloud in Figure 5) are classified as real-world experiments with static parameters. However, as some experiments have variable parameters and some experiments had a simulative character, the EFS/OFS experiments overlap with those dimensions as well.

Figure 5: Variable and static parameters and type of experiments


5 Testbed experiments results and assessment

This section reports the results achieved during the SmartenIT validation activities related to Task T4.3. The SmartenIT experiments have been described and classified in deliverable D4.2 [14], except for a small subset of experiments which were proposed during the validation activity and are for this reason documented here. The global list of experiments is summarized in Table 15.
Table 15: Experiment summary

Reference scenario | Experiment ID | Experiment name | Testbed owner | Experimenter | Supporting partners
OFS | OFS#1.1.1 (S-to-S volume) | Evaluation of inter-domain traffic cost reduction in DTM: S-to-S case | IRT | IRT | AGH, PSNC
OFS | OFS#1.1.2 (S-to-S volume) | Evaluation of inter-domain traffic cost reduction in DTM: S-to-S case | AGH | AGH | PSNC
OFS | OFS#1.1.3 (S-to-S volume) | Evaluation of inter-domain traffic cost reduction in DTM: S-to-S case | IRT | IRT | AGH, PSNC
OFS | OFS#1.2 (S-to-S 95% rule) | Evaluation of inter-domain traffic cost reduction in DTM: S-to-S case | AGH | AGH | PSNC
OFS | OFS#1.3 (S-to-S 95% rule) | Evaluation of inter-domain traffic cost reduction in DTM++: S-to-S case, hardware routers | AGH | AGH | PSNC
OFS | OFS#2.1 (M-to-M volume) | Evaluation of inter-domain traffic cost reduction in DTM: M-to-M case | PSNC | PSNC | AGH
OFS | OFS#2.2 (M-to-M 95% rule) | Evaluation of inter-domain traffic cost reduction in DTM: M-to-M case | PSNC | PSNC | PSNC
EFS | EFS#1 | Evaluation of caching functionality in RB-HORST | TUD (testbed), UZH (uNaDa) | UZH, UniWue | TUD
EFS | EFS#2 | Large-scale RB-HORST study, enhanced environment | TUD, UZH | TUD, UZH | All
EFS | EFS#3 | Evaluation of data offloading functionality in RB-HORST | TUD, UZH | TUD | UZH
EFS | EFS#4 | MUCAPS | ALBLF | ALBLF | TUD

Following Table 15, the following key points can be observed:

1. All experiments have been divided across the two main reference scenarios identified in the SmartenIT WP1 activities, namely OFS and EFS.
2. Single experiments are further classified to investigate a certain traffic management mechanism according to certain configurations and parameters.
3. All experiments foresee a double-sided activity: the first part is a functional test assessment (where the main features of the traffic management mechanism under investigation are evaluated, according to the actual deployment, from a qualitative/functional point of view), while the second is a performance test (where the traffic management mechanism under investigation is evaluated, according to the actual deployment, from a quantitative point of view with large-scale, stress, and load test activities).
4. Details regarding the testbed where each experiment runs, the owner of the experiment, as well as the supporting partners for the experiment have been provided.
All experiments have been described by using a common Experiment report card template, which has been agreed among the project partners and which is intended to provide a coherent and comprehensive way to collect and present the results. The Experiment report card plays a central role in the phase of the SmartenIT overall performance assessment. This template is provided in Table 16 and includes the following dedicated entries:

- Reference scenario and description of the use case which generated the actual experiment/test card
- Experiment identifier
- Description of the main goal of the experiment
- Traffic management mechanism investigated in the actual experiment
- The type of experiment
- Description of the setup used during the experiment
- Recorded data and raw data format
- Description of how metrics are estimated

Experiment report cards are followed by plain text descriptions of the results achieved, relevant plots and figures, discussion of the results, conclusions, as well as future research.
Table 16: Experiment Report Card Template

Field | Description
Scenario | SmartenIT scenario the experiment belongs to
Use case | SmartenIT use case the experiment belongs to
Experiment Id | Id of experiment from Table 15, Test card X.Y.Z
Goal | Why this experiment is needed
TM mechanism | Which TM mechanism is evaluated
Experiment type | Testbed / Prototype validation
Experiment setup | Details on topology, configurable parameters, architecture, assumptions, setup, etc.
Recorded data and raw data format | Direct measurements made, measurement points, frequency of measurements, raw data format
Measured metrics and evaluation methodology | How metrics are estimated (direct observation, aggregation, averaging, calculation, formulas)

5.1 OFS experiments

The testbed environments, testbed configurations, and experiment definitions were introduced in D4.1 [13] and D4.2 [14]. In general, all OFS experiments are compatible with the basic testbed topology and extensions described in those deliverables. OFS experiments 1.1, 1.2, 2.1 and 2.2 (Table 15) use a virtualized testbed environment. In total, four testbed instances were launched in three locations:

- a single instance at IRT premises for experiments with Single-to-Single domain (S-to-S) topology (Figure 6) and volume-based tariff
- two instances at AGH premises for experiments with S-to-S topology with volume-based as well as 95th percentile based tariff
- a single instance at PSNC premises for experiments with Multi-to-Multi domain (M-to-M) topology (Figure 7) and volume-based as well as 95th percentile based tariff

Configuration details were presented in D4.2 [14]. They include:

- physical testbed configurations
- mapping of the experiments' logical topology to physical machines
- localization of virtual machines, deployment of virtual routers, switches, servers, traffic generators and receivers
- network addresses of nodes
- definition of measurement points
- definition of performance metrics and KPIs (separately for the volume-based tariff and the 95th percentile based tariff)
Figure 6: Logical topology for S-to-S experiment



Figure 7: Logical topology for M-to-M experiment


In deliverable D4.2 [14], the OFS experiments are distinguished along two dimensions:

- basic topology (stemming from the number of clouds/DCs sending or receiving the manageable traffic); namely, S-to-S (Figure 6) and M-to-M (Figure 7) experiments are distinguished
- type of tariff used for billing the traffic on inter-domain links, namely volume-based and 95th percentile based

An additional dimension was the distinction between functional and performance tests.

In deliverable D4.2 [14], two main experiments are defined using the first dimension: S-to-S (in Section 3.1.2 of D4.2) and M-to-M (Section 3.1.3). The former maps to the OFS#1.1 (1.1.1, 1.1.2 and 1.1.3) and OFS#1.2 experiments defined in Table 15. The latter maps to experiments OFS#2.1 and OFS#2.2.
Practically, for each of the OFS experiments presented in Table 15, at least a single run of the functionality test was performed first, followed by a set of performance tests. The goal of the functionality test was primarily to verify the proper configuration of the testbed infrastructure, including validation of the topology, accuracy of the traffic generators, and operability of the measurement points, as well as to check whether the DTM prototype works as expected. Additionally, functionality tests supported the pre-selection of some parameters for the performance tests. The main settings (simplifications) distinguishing functionality tests from performance tests were as follows:

- short billing period: usually 30 minutes for the volume-based tariff and 500 minutes for the 95th percentile based tariff (in order to collect exactly 100 5-minute samples within a billing period)
- flat envelope for manageable and non-manageable traffic


In the performance tests, each traffic source generated traffic with a daily envelope. The traffic envelope shape is realistic: it is derived from the average traffic in bits per second flowing through the DE-CIX internet exchange [32] from January 18/19, 2015, midnight CET, to January 19/20, 2015, midnight CET. The length of the billing period was selected in two ways:

- A 1-day long billing period was chosen if a quick evaluation of the influence of some parameter settings was needed. Additionally, in the case of the 95th percentile tariff, tests with a 1-day long billing period were used as a preparatory step for experiments with a 7-day long billing period. This was needed since the system behavior and the system operating point are difficult to predict; thus, the settings for the 7-day long billing period were first validated and tuned with tests with a 1-day billing period. In contrast, in the case of the volume-based tariff, the system operating point can quite easily be pre-calculated and such a preparatory run was not needed: the amount of non-manageable traffic on both links, as well as the amount of overall manageable traffic sent between the DCs in the billing period, can be calculated from the traffic generators' settings.
- A 7-day long billing period was chosen for the main performance test runs. Instead of using a realistic 1-month billing period, we assumed 7 days. A single experiment with a 1-month billing period would need to run for at least 3 months to collect usable results for 2 billing periods (the first billing period is an experiment warm-up period, and its measurements are useless). The performed experiments showed that a 7-day billing period is long enough to obtain reliable results due to the averaging effect. The amount of traffic in the case of the volume-based tariff is large enough, and lengthening the billing period does not improve the accuracy. The experiments with 95th percentile based charging are significantly more sensitive to the length of the billing period: a too short billing period results in the collection of few 5-minute samples, and the obtained distribution of samples might be unrealistic, especially influencing the calculation of the percentile value. The number of 5-minute samples collected during 7 days equals 2016, which is sufficient to obtain sample distributions close to realistic ones. We observed that the value of the calculated percentile converges with the growing number of days (due to the averaging effect); the improvement after 7 extra days is very small. A sketch of the 95th percentile computation is given below.
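
As a minimal illustration of the charging rule (our sketch, not project code), the following Python snippet computes the charged 5-minute sample for a 7-day billing period, in which the 100 highest of the 2016 samples are free of charge:

    import random

    SAMPLES_PER_DAY = 24 * 12          # one sample per 5-minute interval
    BILLING_DAYS = 7                   # 7 * 288 = 2016 samples

    def charged_sample(samples, free_fraction=0.05):
        """95th percentile rule: the highest 5% of the 5-minute samples
        (100 of 2016 here) are ignored; the largest remaining sample
        determines the bill."""
        allowed_above = int(free_fraction * len(samples))   # 100 for 2016
        return sorted(samples)[len(samples) - 1 - allowed_above]

    # Synthetic per-sample traffic volumes, for illustration only.
    samples = [random.lognormvariate(11.5, 0.9)
               for _ in range(BILLING_DAYS * SAMPLES_PER_DAY)]
    print(charged_sample(samples))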

A separate testbed environment was created for OFS experiment 1.3. It is a new experiment employing DTM++, which was designed after the release of D4.2 [14]; the status of the specification and implementation of DTM++ did not allow experiments to be defined at that time.

The testbed environment for OFS 1.3 is not based on virtualization on a physical server but is built with real network equipment, namely hardware routers and switches. Such an approach is determined by the way the ICC functionality is implemented for DTM++: the ICC functionality is realized with the use of hierarchical policers that are available on physical routers offered by certain vendors. The specification of the ICC implementation for DTM++ can be found in deliverable D2.5 [8]. More details on the experiment can be found in Section 5.1.3.
5.1.1 OFS#1.1 (S-to-S volume based charging rule)

The goal of this experiment was to evaluate DTM performance under the assumption that the total volume charging rule and the S-to-S topology are used. The logical topology is presented in Figure 6. The cost of traffic transferred on each inter-domain link (L1 and L2)

is calculated at the end of the billing period using the total amount of traffic transferred through the given link during the whole billing period.
The primary goal of the OFS#1.1.x experiments was to evaluate the performance of DTM when the volume-based tariff is used. In the first experiments, the SDN controller mode used was "reactive with reference". The first results showed that DTM is able to distribute manageable traffic among the inter-domain links in such a way that the target traffic cost is achieved with high accuracy: the measured traffic vector followed the reference vector very well during the whole billing period. This motivated the experimenters to check whether satisfactory results could be achieved with different, less complex SDN controller modes. Therefore, the experiments were repeated with the "reactive without reference" mode of the SDN controller.

In both modes, the compensation vector is calculated by the SBox located in the domain that optimizes traffic (domain AS-1). The calculation is done periodically, with the interval defined by the parameter called "report period DTM"; it was set to 30 s in all OFS#1.1.x experiments. The compensation vector is calculated using measurements of the current traffic vector, and it reflects the deviation of the current traffic vector from the reference vector.

In the case of the "reactive with reference" mode, the newly calculated compensation vector is always sent to the remote SBox (in domain AS3), that is, every 30 seconds. The values of the received compensation vector are used to decide through which tunnel the new flows should be sent, taking the values of the compensation vector components into account. Once a new compensation vector is received, one of the tunnels is selected. Then the amount of traffic sent through this tunnel is measured. When the amount of traffic sent reaches the value of the compensation vector, the SDN controller starts to balance the traffic among the tunnels in the proportion stemming directly from the reference vector components' values.
In the case of "reactive without reference" mode the mechanism is simplified. The remote
SBox does not take into account the value of the compensation vector. Only the sign is
considered and indicates one of the tunnels to be used. The tunnel used may be switched
only when the sign of the new compensation vector component is changed. Thus, the sbox calculating the compensation vector does not need to send it if the sign remains
unchanged after calculation. Therefore, the new compensation vector is calculated every
30 s ("report period DTM") but sent only if the sign is changed (and some preconfigured
threshold is exceeded release 3.0 and 3.1), that is, the tunnel used must be switched.
Additionally, according to the specification, compensation vector is always sent (regardless
the sign) when the time defined by parameter "compensation period" elapses (in the
experiment it was set to 5 minutes).
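
The update rule of the "reactive without reference" mode can be condensed into a few lines; this is our own sketch of the behavior described above, with the threshold handling simplified:

    def needs_update(prev_cv, new_cv, seconds_since_last_send,
                     compensation_period=300.0, threshold=0.0):
        """Return True if the SBox computing the compensation vector must
        notify the remote SBox: either a component changed its sign (a
        tunnel switch is needed, subject to a preconfigured threshold) or
        the compensation period (5 minutes in the experiment) elapsed."""
        sign_flip = any((p >= 0) != (n >= 0) and abs(n) > threshold
                        for p, n in zip(prev_cv, new_cv))
        return sign_flip or seconds_since_last_send >= compensation_period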
The benefits of using the "reactive without reference" mode are as follows:

- less overhead (fewer messages need to be sent between the SBoxes)
- less complex functionality of the remote SBox and the SDN controller (no need to measure the traffic sent over the tunnels)

One of the experiment goals was to evaluate whether DTM performance is degraded when "reactive without reference" is used.
The last SDN controller mode tested was "proactive without reference". In the "reactive" modes, the new selection of a tunnel (link) is applied to newly generated flows only (active flows are still sent over the link selected when they were created). In turn, in the "proactive" modes, if an arriving compensation vector indicates a change of the tunnel used, all flows (including active ones) are immediately switched to the newly selected tunnel. The advantage of such a solution is that the SDN controller does not need to maintain a large table of flows (a single rule for all manageable traffic is sufficient); thus it offers better scalability. Another benefit of the "proactive" mode may arise if long-lasting flows dominate the manageable traffic profile and new flows are generated relatively rarely. The "proactive" mode offers a faster reaction to the need to compensate the traffic than the "reactive" mode. However, this is mostly important if the 95th percentile tariff is used (this is elaborated further in the OFS#1.2 experiment description). The drawback of this mode is that packets may arrive at the destination out of order if the end-to-end delays differ on the two paths. The goal of those experiments was to evaluate the performance of DTM in the "proactive" mode.
Field | Description
Scenario | Operator Focused Scenario
Use case | Bulk data transfer for cloud operators
Experiment Id | OFS#1.1.1, OFS#1.1.2 and OFS#1.1.3
Goal | To evaluate the performance of DTM when the tariff used by the ISPs is based on total traffic volume; to compare the performance for different settings of the SDN controller mode.
TM mechanism | DTM
Experiment type | Testbed
Experiment setup |
- Tariff: total traffic volume
- Topology: Figure 6
- Cost functions for links L1 and L2, where x is the total traffic volume (in bytes) at the end of the billing period:
    C_L1(x) = 200                      for 0 <= x <= 168*10^9
            = 2.976*10^-6 * x - 300    for 168*10^9 < x <= 880.32*10^9
            = 8.929*10^-6 * x - 5540   for x > 880.32*10^9
    C_L2(x) = 1.19*10^-6 * x           for 0 <= x <= 369.6*10^9
            = 5.952*10^-6 * x - 1760   for 369.6*10^9 < x <= 559.78*10^9
            = 11.9*10^-6 * x - 5092    for x > 559.78*10^9
- Billing period: 7 days
- Simulation time: 22 days (>3 billing periods)
- SDN controller mode: reactive with reference vector (OFS#1.1.1); reactive without reference vector (OFS#1.1.2); proactive without reference vector (OFS#1.1.3)
- Compensation period: 30 s
- Traffic generator settings:
    Background on L1: flow inter-arrival-time: exponential (mean 109.5 ms); flow length: Pareto (3400, 1.5); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.6 and (158, 20) with probability 0.4
    Background on L2: flow inter-arrival-time: exponential (mean 218.5 ms); flow length: Pareto (3400, 1.5); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.6 and (158, 20) with probability 0.4
    Manageable DC-DC: flow inter-arrival-time: exponential (mean 118 ms); flow length: exponential (mean 2400 ms); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.95 and (158, 20) with probability 0.05
Recorded data and raw data format |
- traffic samples on both links every 30 seconds
- compensation vector values every 30 seconds: calculated by the SBox in domain AS-1 and sent to the SBox in domain AS-3 (only when a transmission is due; for the reactive with reference mode (OFS#1.1.1) the calculated and the sent values are equal)
- reference vector values at the end of each billing period
Refer to D4.2 [14] Table 3 for more details.
Measured metrics and evaluation methodology | Total cost (expected, achieved, predicted for the non-DTM scenario); KPIs as defined in D4.2 [14]

The DTM performance for the volume-based tariff was verified under three different settings of the SDN controller mode. For each mode, the performance of DTM was observed over a few billing periods. The obtained results are very similar. In all cases, the measured traffic vector met the reference vector with high accuracy and, consequently, the cost of transit traffic was very close to the optimal (expected) cost; the corresponding KPI is close to 1 (Table 18). The small difference between the expected and achieved cost (below 1%) stems from the statistical features of the traffic pattern: the total amount of traffic transferred may vary between billing periods. In the presented results, the traffic volume in the observed billing period was a bit lower than in the previous one, i.e., the one used to calculate the reference vector. In other billing periods (not presented here) we also observed slightly higher traffic volumes in the current versus the previous billing period.


The absolute costs for transit traffic presented in Table 17, as well as the KPIs (Table 18), show that for each SDN controller mode not only was the expected traffic cost achieved, but it was also lower than the cost of the possible static routing solutions. The ISP using DTM saves costs: if DTM were not enabled and link 1 or link 2 were used for transferring all manageable traffic (as a static BGP path), the cost would be higher.
Table 17: Total traffic costs achieved with DTM and expected without DTM obtained for various SDN controller modes (OFS#1.1)

Cost | Reactive with reference | Reactive without reference | Proactive without reference
Expected (estimated in the previous billing period, reflected by the reference vector) | 3228.5 | 3347.9 | 3298.6
Achieved with DTM | 3211.9 | 3312.0 | 3269.7
Optimal (estimated based on the traffic vector achieved in the current billing period) | 3203.8 | 3294.6 | 3254.9
w/o DTM, if the default BGP path selected for manageable traffic was link 1 | 3451.3 | 3570.1 | 3503.1
w/o DTM, if the default BGP path selected for manageable traffic was link 2 | 3903.8 | 3968.0 | 3956.1

Table 18: KPI values for DTM run with various SDN controller modes (OFS#1.1)

KPI (as defined in D4.2 [14]) | Reactive with reference | Reactive without reference | Proactive without reference
Cost ratio: achieved / w/o DTM (link 1) | 0.9307 | 0.9277 | 0.9333
Cost ratio: achieved / w/o DTM (link 2) | 0.8228 | 0.8347 | 0.8264
Cost difference: achieved - w/o DTM (link 1) | -239.31 | -258.13 | -233.82
Cost difference: achieved - w/o DTM (link 2) | -691.86 | -656.03 | -686.85
Cost ratio: achieved / expected | 0.99487 | 0.98926 | 0.99111
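
The ratios and differences in Table 18 can be reproduced directly from the absolute costs in Table 17. The short sketch below does so for the "reactive with reference" column, assuming the KPIs compare the cost achieved with DTM against the two static-routing baselines and the expected cost (which is what the reported values match, up to the rounding of Table 17):

    # Costs from Table 17, "reactive with reference" column (rounded to 0.1).
    achieved, expected = 3211.9, 3228.5
    static_link1, static_link2 = 3451.3, 3903.8

    print(achieved / static_link1)   # ~0.9306  (Table 18: 0.9307)
    print(achieved / static_link2)   # ~0.8228  (Table 18: 0.8228)
    print(achieved - static_link1)   # ~-239.4  (Table 18: -239.31)
    print(achieved - static_link2)   # ~-691.9  (Table 18: -691.86)
    print(achieved / expected)       # ~0.9949  (Table 18: 0.99487)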

The plots for all SDN controller modes are also very similar, so it was decided to present the figures showing the traffic patterns and the traffic growth on a cost map for the "reactive without reference" mode only (Figure 8 and Figure 9). It can be observed that the manageable traffic is distributed between the two links. During the periods when the traffic volume on both links is very low, the whole manageable traffic is sent over link 1. Since there was not enough manageable traffic to be shifted to link 1, the measured traffic vector diverges from the reference vector over some time. However, it is compensated later, and the current traffic vector again converges with the reference vector (the periods where this effect is most apparent are indicated by a green circle in Figure 8 and Figure 9). This temporal discrepancy is, however, very small, and one can notice that the manageable traffic is distributed in such a way that the measured current traffic vector follows the direction of the reference vector very well and converges to the optimal point at the end of the billing period (Figure 8). The values of the compensation vector are high during such a period, indicating the amount of manageable traffic that needs to be shifted to converge to the direction of the reference vector (Figure 10).


When the traffic grows on both links, the manageable traffic is distributed among both links. Around the end of the daily peak period there is too much background traffic on link 1, so DTM decides to send most of the manageable traffic over link 2, but the traffic is still distributed over both links. In that period, the compensation vector oscillates around 0 (Figure 10).

Figure 8: Traffic growth during the billing period and a cost map. SDN controller mode:
"reactive with reference". (OFS#1.1)


Figure 9: Traffic patterns on link 1 and 2 during the billing period. SDN controller mode:
"reactive with reference". (OFS#1.1)


Figure 10: Compensation vector values received by remote SBox (AS3 domain). SDN
controller mode: "reactive with reference". (OFS#1.1)

Figure 11: Compensation vector values received by remote SBox (AS3 domain). SDN
controller mode: "reactive without reference". (OFS#1.1)
The difference between the operation of DTM in the modes with and without reference appears in the values of the compensation vector (Figure 10 and Figure 11). In the "with reference" mode, the updates of the compensation vector are sent regularly and very often (every 30 s in this experiment). Thus, the compensation is corrected every 30 seconds and is very accurate. In the "without reference" mode, the compensation vector is updated when its value changes sign (there is a need to switch to another link) or after 5 minutes. The visible effect is that the oscillations of the compensation vector values are higher: since updates are less frequent, the mechanism may slightly overcompensate the traffic before a new update arrives. However, we do not observe any negative effect of this on the achieved cost. In turn, the number of compensation vector updates during the billing period is significantly lower for the "without reference" modes (Table 19). Therefore, from an overhead and scalability perspective, it is recommended to use an SDN controller mode of the "without reference" type.
Table 19: The number of compensation vector updates exchanged by the SBoxes during the billing period

Reactive with reference | Reactive without reference | Proactive without reference
9154 | 9413 | 20160

Finally, considering the proactive vs. reactive modes for the volume-based tariff, looking at the DTM performance and KPIs we did not observe any significant indicator for choosing one over the other; the results are similar. However, since the "proactive" mode affects active flows (in contrast to "reactive", which chooses a link only for new flows), which may result in out-of-order packet delivery, we recommend using the "reactive" mode for the volume-based tariff.

Conclusion: To sum up, the OFS#1.1 experiments proved that DTM performs well and is capable of decreasing the ISP's costs of transit traffic when a volume-based tariff is used. The KPIs calculated for the experiments presented in this deliverable indicate a potential for inter-domain traffic cost reduction of 7% to 18%. Similar values were obtained for most of the experiments performed (including those not presented in this deliverable); benefits of up to 30% were also observed. However, the actual ISP benefits depend on the traffic volumes on the inter-domain links, the static routing configurations, and the cost functions on the links. The number of experiments and DTM configurations tested allowed for defining recommendations on choosing DTM configuration settings, especially the choice of the SDN controller mode.
5.1.2 OFS#1.2 (S-to-S 95th percentile based tariff)

In this experiment, DTM performance was evaluated under the assumption of the single-to-single topology and the 95th percentile charging rule. Unlike in the case of the volume-based tariff, the time to react to traffic bursts and to compensate for them by balancing the traffic among the inter-domain links is very short. As shown in the OFS#1.1 experiments, with the volume-based tariff DTM has the whole billing period to compensate for traffic bursts and the irregular character of the traffic: the traffic on both links is summed up over a long period of time, i.e., the whole billing period (7 days in the experiment). In the case of the 95th percentile tariff, each 5-minute sample must be treated separately, since each sample might potentially become one of the 5% highest. Therefore, for a given 5-minute long period, the system has only 5 minutes to compensate for undesired traffic growth and distribute the traffic among the links in such a way that the sizes of both 5-minute samples (collected on the two inter-domain links at the same time) are below the thresholds stemming from the reference vector. The goal of the experiment was to evaluate the performance of DTM when the 95th percentile tariff is used by the ISP and to evaluate the potential benefits for the ISP. The detailed experiment definition can be found in D4.2 [14], Section 3.1.1.
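
To fix ideas, the per-window task can be sketched as a fluid approximation (purely illustrative; the prototype steers individual flows, not a fluid rate): given the per-link thresholds derived from the reference vector and the background traffic currently observed, decide how the manageable traffic of the current 5-minute window should be split.

    def split_manageable(bg1, bg2, manageable, thresholds):
        """Fluid sketch of the per-window goal under the 95th percentile
        tariff: route the manageable volume so that, if possible, both
        5-minute samples (background plus manageable share) stay below
        the per-link thresholds (t1, t2) stemming from the reference
        vector."""
        t1, t2 = thresholds
        room1 = max(t1 - bg1, 0.0)          # spare volume under threshold 1
        to_link1 = min(manageable, room1)   # fill link 1 up to its threshold
        to_link2 = manageable - to_link1    # remainder goes to link 2
        exceeded = bg2 + to_link2 > t2      # True if the pair exceeds anyway
        return to_link1, to_link2, exceeded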
Field | Description
Scenario | Operator Focused Scenario
Use case | Bulk data transfer for cloud operators
Experiment Id | OFS#1.2
Goal | To evaluate the performance of DTM when the 95th percentile tariff is used by the ISP
TM mechanism | DTM
Experiment type | Testbed
Experiment setup |
- Topology: Figure 6
- Tariff: 95th percentile
- Cost functions for links L1 and L2, where x is the amount of traffic (in bytes); at the end of the billing period it takes a value equal to the size of the 5-min sample calculated using the 95th percentile rule:
    C_L1(x) = 200                     for 0 <= x <= 160*10^9
            = 3*10^-6 * x - 280       for 160*10^9 < x <= 700*10^9
            = 9*10^-6 * x - 4480      for x > 700*10^9
    C_L2(x) = 2.8*10^-6 * x           for 0 <= x <= 180*10^9
            = 7.8*10^-6 * x - 900     for 180*10^9 < x <= 340*10^9
            = 20.3*10^-6 * x - 5150   for x > 340*10^9
- Billing period: 7 days
- Simulation time: 22 days
- SDN controller mode: reactive with reference vector
- Compensation period: 30 s
- Traffic generator settings:
    Background on L1: flow inter-arrival-time: exponential (mean 109.5 ms); flow length: Pareto (3400, 1.5); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.6 and (158, 20) with probability 0.4
    Background on L2: flow inter-arrival-time: exponential (mean 218.5 ms); flow length: Pareto (3400, 1.5); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.6 and (158, 20) with probability 0.4
    Manageable DC-DC: flow inter-arrival-time: exponential (mean 118 ms); flow length: exponential (mean 2400 ms); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.95 and (158, 20) with probability 0.05
Recorded data and raw data format |
- traffic samples on both links every 30 seconds
- 5-minute samples, with distinction of manageable and non-manageable traffic
- compensation vector values every 30 seconds
- reference vector values at the end of each billing period
Refer to D4.2 [14] Table 4 for more details.
Measured metrics and evaluation methodology | Total cost (expected, achieved, predicted for the non-DTM scenario); KPIs as defined in D4.2 [14]

The experiments were run with a 7-day long billing period. The number of 5-min samples collected during such a billing period equals 2016. Therefore, 5% of them (i.e., 100 samples) are allowed to exceed the 95th percentile threshold. The rest of the samples must stay below the threshold stemming from the reference vector in order to achieve the expected traffic cost. As the experiments' results show, DTM was able to achieve this goal. Table 20 shows the achieved costs and KPI values. The desired cost is achieved very accurately. On link 1 the achieved 95th percentile threshold was slightly higher than the corresponding component of the reference vector, while on link 2 it is a bit lower than the respective reference value (see Figure 12, showing the sizes of the samples collected in time order). Therefore, the cost of inter-domain traffic on link 1 is a bit higher than expected, while on link 2 it is lower than expected. The resulting total cost is less than 1% higher than desired by the ISP (Table 20).
Table 20: Inter-domain traffic costs and KPI values for DTM (OFS#1.2)

Cost | Value
Expected (estimated in the previous billing period, reflected by the reference vector) | 3311.0
Achieved with DTM | 3366.7
Optimal (estimated based on the traffic vector achieved in the current billing period) | 3362.0
w/o DTM, if the default BGP path selected for manageable traffic was link 1 | 3536.6
w/o DTM, if the default BGP path selected for manageable traffic was link 2 | 4730.3

KPIs (as defined in D4.2 [14]): 0.9520, 0.7117, -169.79, -1363.54, 1.008

The principle of DTM operation for the 95th percentile based tariff consists in modifying the sample sizes by shifting manageable traffic between the inter-domain links; as a result, the distribution of the samples is changed. Figure 13 shows the actual distribution of 5-minute samples with DTM compared to the distribution of samples that would be obtained if DTM were not used and link 1 or link 2 were used as part of a static BGP path between the communicating datacenters DC-B and DC-A (source and receiver of the manageable traffic, respectively; cf. Figure 6). It is crucial to influence the highest samples, since they may potentially fall into the set of the 5% highest samples, thus increasing the cost on the link.


Figure 12: 5-min samples observed on inter-domain links during a single 7-day long billing
period (OFS#1.2)


Figure 13: Distributions of 5-min samples collected during a single 7-day long billing period (OFS#1.2)

In Figure 14 we present the distribution of samples in a two-dimensional space, with a cost map in the background. Each dot in the figure represents a pair of samples collected on the two inter-domain links at the same time. It can easily be noticed that when DTM is used, the sample pairs are condensed around the direction of the reference vector (blue dots). The orange and red dots show the distributions of sample pairs that would have been obtained if DTM had not been used and all manageable traffic had been sent over link 1 or link 2, respectively.


Figure 14: The distribution of 5-min sample pairs on a cost map. (OFS#1.2)
During the set of experiment runs it was noticed that the accuracy of DTM traffic management increases when the assumed billing period is longer and, consequently, the number of collected 5-min samples increases (we started with a 1-day long billing period, then 3 days, and finished with 7 days). The more samples in a billing period, the better the convergence to the reference vector. Observing this tendency, we believe that DTM will perform even better for a realistic billing period of 1 month.
Additional experiment to evaluate DTM performance under an "unfriendly" traffic pattern

As discussed before, it is more challenging for DTM to manage and optimally distribute the traffic when the 95th percentile tariff is used, since the size of each 5-min sample should be kept below the threshold and DTM has only those 5 minutes to compensate for an undesired traffic distribution. If the manageable traffic pattern consists of many short flows generated frequently, the reactive mode can quite easily perform the traffic compensation: there are enough new flows that can be shifted to the currently chosen tunnel while, at the same time, the flows remaining on the previously chosen tunnel end quickly.


We decided to consider a traffic pattern consisting of many relatively long flows (as compared to the length of a sample, i.e., 5 minutes) and with long inter-arrival times. Such traffic conditions are difficult for the "reactive" mode. In such a case the "proactive" mode should help, since it makes it possible to switch active flows between tunnels.

A short experiment with traffic patterns consisting of long flows with large inter-arrival times was prepared. The traffic generator was set up as presented in Table 21; the mean flow length is then exp(mu + sigma^2/2) = exp(11.5474 + 0.94975^2/2) ms, i.e., about 162.5 seconds.

Two experiment runs were launched: with the "reactive" and the "proactive" mode. Due to time limitations, those experiment runs were performed with a 1-day long billing period.
Table 21: Manageable traffic generator settings for the generation of long flows

Manageable DC-DC | Distribution | Distribution parameters
Flow inter-arrival-time | exponential | mean = 8000 ms
Flow length | lognormal | mu = 11.5474, sigma = 0.94975
Packet inter-arrival-time | exponential | mean = 35 ms
Packet payload size | normal mixture | (1358, 25) with probability 0.95; (158, 20) with probability 0.05

The results obtained for the "reactive" and "proactive" modes are presented in Figure 15 and Figure 16, respectively. When the proactive mode is used, the 5-minute sample pairs are very well condensed around the reference vector. In the case of the "reactive" mode they are more spread out, and clearly the achieved actual 95th percentile threshold does not meet the reference vector. This is because active flows are in fact not shiftable in the "reactive" mode; therefore, the amount of manageable traffic that can be effectively managed is in practice lower than the nominal amount of manageable traffic. In conclusion, the "proactive" mode performs better than the "reactive" one and is able to manage a traffic pattern consisting of long flows more effectively.
Conclusion: In the main experiment presented above, the inter-domain traffic cost reduction is about 5% or 29%, depending on which inter-domain link is considered to be the default BGP path for manageable traffic (see the KPIs in Table 20). Typical values of this KPI (observed in various experiments not reported here due to space limitations) varied between 8% and 15%. Similarly to the OFS#1.1 experiment, the actual cost saving will depend on the amount of traffic on the inter-domain links, the share of manageable traffic, the cost functions, and also on the manageable traffic patterns. An ISP deploying DTM for 95th percentile charging needs to be more careful when choosing the DTM settings to obtain higher benefits. In particular, the choice of the SDN controller mode should take into account the pattern of manageable traffic (the distribution of flow lengths and flow inter-arrival times).


Figure 15: The distribution of 5-min sample pairs - long flows, reactive mode

Figure 16: The distribution of 5-min sample pairs - long flows, proactive mode


5.1.3 OFS#1.3 (S-to-S 95% rule)

The realization of this experiment required additional effort. First of all, the configuration of the hardware testbed was complex. The proper measurement of each type of traffic (background, sensitive, and tolerant) was not trivial: it required additional network equipment, port mirroring, and a traffic analyzer to properly separate the packets of the different types and perform the measurements (see Appendix 13.3). For the above reasons, a performance evaluation as deep as in the other experiments was not made. The performed experiment proved, however, that the intended functionality of DTM++ is realized correctly, and some quantitative results were also obtained. They are assessed mostly qualitatively since, due to time limitations, not enough experiment runs were performed to allow for a quantitative assessment.

The main settings are presented in the experiment report card below. The cost functions and the background traffic generators' settings are exactly the same as for experiment OFS#1.2. The inter-DC traffic (manageable traffic) consists of sensitive traffic and tolerant traffic. Sensitive traffic is generated in the same way as the manageable traffic in OFS#1.2. For tolerant traffic, the following simplifying approach was taken: tolerant traffic is generated by four TCP sources, and the application that feeds those flows has a limited maximum packet rate. The TCP sources generate traffic with an approximately constant total rate of 2.2 Mbit/s (during the periods when tolerant traffic is not limited). When DTM++ enters the phase of limiting tolerant traffic (peak period), the throughput of these TCP sources decreases (some packets are dropped and the TCP sources slow down).
Field | Description
Scenario | Operator Focused Scenario
Use case | Bulk data transfer for cloud operators
Experiment Id | OFS#1.3
Goal | To verify the functionality of the DTM++ implementation in a testbed environment
TM mechanism | DTM++
Experiment type | Testbed
Experiment setup |
- Logical topology: Figure 6
- Testbed topology: Figure 65 (Appendix 13.3)
- Tariff: 95th percentile
- Cost functions for links L1 and L2 (the same as in OFS#1.2), where x is the amount of traffic (in bytes); at the end of the billing period it takes a value equal to the size of the 5-min sample calculated using the 95th percentile rule:
    C_L1(x) = 200                     for 0 <= x <= 160*10^9
            = 3*10^-6 * x - 280       for 160*10^9 < x <= 700*10^9
            = 9*10^-6 * x - 4480      for x > 700*10^9
    C_L2(x) = 2.8*10^-6 * x           for 0 <= x <= 180*10^9
            = 7.8*10^-6 * x - 900     for 180*10^9 < x <= 340*10^9
            = 20.3*10^-6 * x - 5150   for x > 340*10^9
- Billing period: 1 day
- Simulation time: 3 days
- SDN controller mode: proactive without reference vector
- Compensation period: 30 s
- Traffic generator settings:
    Background on L1: flow inter-arrival-time: exponential (mean 109.5 ms); flow length: Pareto (3400, 1.5); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.6 and (158, 20) with probability 0.4
    Background on L2: flow inter-arrival-time: exponential (mean 218.5 ms); flow length: Pareto (3400, 1.5); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.6 and (158, 20) with probability 0.4
    Manageable DC-DC, delay sensitive: flow inter-arrival-time: exponential (mean 118 ms); flow length: exponential (mean 2400 ms); packet inter-arrival-time: exponential (mean 35 ms); packet payload size: normal mixture, (1358, 25) with probability 0.95 and (158, 20) with probability 0.05
    Manageable DC-DC, delay tolerant: 4 TCP flows having a large amount of data to transfer (permanently active). The application feeds the TCP sockets with an approximately constant bit rate; the resulting delay tolerant traffic is approximately 2.2 Mbit/s (when not limited by the ICC filter).
Recorded data and raw data format |
- traffic samples on both links every 30 seconds
- 5-minute samples, with distinction of manageable and non-manageable traffic
- compensation vector values every 30 seconds
- reference vector values at the end of each billing period
Measured metrics and evaluation methodology | The obtained results are assessed qualitatively, based on the obtained plots of traffic characteristics. This is a functionality test, not a performance evaluation experiment. No KPIs are estimated since no quantifiable reference case exists.
KPI are estimated since no quantifiable reference case exist.
To show the difference between the results of the operation of DTM and DTM++, an experiment with similar settings of the traffic generators but with the ICC functionality switched off (the hierarchical policer inactive) was performed. In such a case, the traffic is distributed among the links by plain DTM; tolerant and sensitive traffic are not distinguished.


The results of DTM++ operation are presented in Figure 17 and Figure 18. The former shows 30 s samples of the mean traffic rate, while the latter shows the sizes of the respective 5-min samples. In both figures the patterns of the three types of traffic are distinguished. It can be noticed that manageable traffic (both sensitive and tolerant) is distributed between the two links; this is the result of plain DTM operation. Additionally, on both links a limitation of delay tolerant traffic can be observed. It is best visible on link L1 (Figure 17). When the aggregate traffic rate on link L1 reaches the threshold stemming from the reference vector component (at about 12 h), the tolerant traffic starts to be limited to prevent the mean total throughput from exceeding the threshold. This is seen by observing that the curves of the total traffic and of the background+sensitive traffic almost coincide; thus, only delay tolerant traffic is affected. This operation is realized by a hierarchical policer. The aggregate throughput of background and delay sensitive traffic is not limited. In order to avoid killing TCP flows, the filter is configured so that it allows small amounts of delay tolerant traffic to be transferred even in the peak period: the link capacity available for tolerant traffic is not limited to zero, but tolerant traffic may be transferred at a rate no greater than some preconfigured percentage of the threshold value. In the experiment this was set to 2%.
When the peak period ends, the link capacity available for tolerant traffic increases again. The TCP sources then increase their transfer rate and flush the data accumulated in buffers. As a result, the generated throughput is for some time higher than 2.2 Mbit/s (the rate is limited neither by the TCP sources nor by the access link, but by the application).
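The policing behavior described above can be pictured as a two-level token bucket. The following is a minimal sketch in Python, assuming a per-link threshold derived from the reference vector; the rate, burst sizes and class names are illustrative assumptions, not the DTM++ prototype code:

    # Two-level policer sketch: background and sensitive traffic always pass
    # (but consume the aggregate budget), so delay tolerant traffic only gets
    # what is left, plus a small 2% floor that keeps its TCP flows alive.
    AGGREGATE_RATE = 1.25e6                    # bytes/s, assumed link threshold
    TOLERANT_FLOOR = 0.02 * AGGREGATE_RATE     # the 2% floor used in the experiment

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = burst, 0.0

        def refill(self, now):
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now

        def take(self, nbytes, force=False):
            if force or self.tokens >= nbytes:
                self.tokens -= nbytes          # may go negative when forced
                return True
            return False

    aggregate = TokenBucket(AGGREGATE_RATE, burst=100_000)
    floor = TokenBucket(TOLERANT_FLOOR, burst=10_000)

    def admit(traffic_class, nbytes, now):
        aggregate.refill(now)
        floor.refill(now)
        if traffic_class in ("background", "sensitive"):
            aggregate.take(nbytes, force=True)   # never dropped, budget consumed
            return True
        return aggregate.take(nbytes) or floor.take(nbytes)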

Figure 17: Traffic patterns on link 1 and 2 during the billing period. (OFS#1.3)

Figure 18 shows the corresponding values of the 5-min samples. The effect of limiting delay tolerant traffic can also be noticed. It can also be seen that the 95th percentile threshold achieved in this billing period is below the reference vector.
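For reference, the charging rule itself is easy to state. The sketch below assumes one common convention of the 95th percentile rule (discard the top 5% of samples, charge the largest remaining one); it is illustrative, not the accounting code of the prototype:

    import math

    def percentile95(samples_mb):
        # samples_mb: sizes [MB] of the 5-min samples of one billing period
        ordered = sorted(samples_mb)                 # ascending
        index = math.ceil(0.95 * len(ordered)) - 1   # 0-based rank of the charged sample
        return ordered[index]

    # a 1-day billing period yields 288 samples; the 14 largest are discarded
    day = [100 + i for i in range(288)]              # toy sample sizes
    print(percentile95(day))                         # -> 373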

Figure 18: 5-min samples observed on inter-domain links during a single 1-day long billing
period (OFS#1.3)
The effect of limiting tolerant traffic during peak periods is clear from the above considerations. Additionally, a similar experiment with the hierarchical policer switched off was performed to visualize how the traffic would be distributed if just DTM were used (instead of DTM++). The results are shown in Figure 19. The lack of limitation of tolerant traffic during the peak period is clearly visible. Additionally, the reference vector values in the case of DTM are higher than for DTM++ (735.9, 340 [MB] and 700, 300.9 [MB], respectively). It might be expected that the total costs for inter-domain traffic achieved with DTM++ are lower than for DTM; however, due to the character of the experiments those results can be assessed only roughly.


Figure 19: 5-min samples observed on inter-domain links during a single 1-day long billing
period when hierarchical policer is inactive; only DTM operates (OFS#1.3)
Conclusion: Qualitative assessment of the presented results of these simple experiments shows that:
  - the ICC functionality implemented in the hierarchical policer and integrated with DTM is properly designed and is able to limit the throughput of tolerant traffic during the peak period;
  - DTM++ has the potential to better control an ISP's costs for inter-domain traffic and thus to achieve further cost reductions.


5.1.4 OFS#2.1 (M-to-M volume)

Scenario: Operator Focused Scenario
Use case: Bulk data transfer for cloud operators
Experiment Id: OFS#2.1
Goal: To evaluate the performance of DTM in a multi-ISP environment when the tariff used by the ISPs is based on total traffic volume.
TM mechanism: DTM
Experiment type: Testbed
Experiment setup:
Tariff: total traffic volume
Topology: Figure 7

Domain AS1
Cost functions for links LA1 and LA2, where V is the total traffic volume at the end of the billing period:

  C_LA1(V) = 200                     for 0 ≤ V ≤ 201.6·10^9
  C_LA1(V) = 2.48·10^-9·V - 300      for 201.6·10^9 < V ≤ 1049.9·10^9
  C_LA1(V) = 7.44·10^-9·V - 5508     for V > 1049.9·10^9

  C_LA2(V) = 0.992·10^-9·V           for 0 ≤ V ≤ 443.52·10^9
  C_LA2(V) = 4.96·10^-9·V - 1760     for 443.52·10^9 < V ≤ 658.02·10^9
  C_LA2(V) = 9.92·10^-9·V - 5024     for V > 658.02·10^9

Domain AS4
Cost functions for links LC1 and LC2, where V is the total traffic volume at the end of the billing period:

  C_LC1(V) = 200                     for 0 ≤ V ≤ 134.4·10^9
  C_LC1(V) = 2.976·10^-9·V - 300     for 134.4·10^9 < V ≤ 672·10^9
  C_LC1(V) = 8.929·10^-9·V - 5540    for V > 672·10^9

  C_LC2(V) = 1.49·10^-9·V            for 0 ≤ V ≤ 295.68·10^9
  C_LC2(V) = 4.84·10^-9·V - 990      for 295.68·10^9 < V ≤ 430.08·10^9
  C_LC2(V) = 18.6·10^-9·V - 6910     for V > 430.08·10^9

Billing period: 7 days
Simulation time: 22 days
SDN controller mode: reactive without reference vector
Compensation period: 30 s


Traffic generators settings:

Background on LA1:
  Flow inter-arrival-time: exponential, mean 91 ms
  Flow length: Pareto (scale 3400, shape 1.5)
  Packet inter-arrival-time: exponential, mean 35 ms
  Packet payload size: normal mixture; with probability 0.6: mean 1358, std. dev. 25; with probability 0.4: mean 158, std. dev. 20

Background on LA2: as LA1, but flow inter-arrival-time mean 182 ms
Background on LC1: as LA1, but flow inter-arrival-time mean 146 ms
Background on LC2: as LA1, but flow inter-arrival-time mean 210 ms

Manageable from DC-B to DC-A:
  Flow inter-arrival-time: exponential, mean 113.5 ms
  Flow length: exponential, mean 2400 ms
  Packet inter-arrival-time: exponential, mean 35 ms
  Packet payload size: normal mixture; with probability 0.95: mean 1358, std. dev. 25; with probability 0.05: mean 158, std. dev. 20

Manageable from DC-D to DC-A: as above, but flow inter-arrival-time mean 738 ms
Manageable from DC-B to DC-C: as above, but flow inter-arrival-time mean 282.5 ms
Manageable from DC-D to DC-C: as above, but flow inter-arrival-time mean 207.5 ms

Recorded data and raw data format:
  - traffic samples on each of the 4 inter-domain links (LA1, LA2, LC1, LC2) every 30 seconds
  - vector values in domain AS1 every 30 seconds (calculated by the SBox in domain AS1; sent to the SBoxes in domains AS3 and AS5, only when sent)
  - vector values in domain AS4 every 30 seconds (calculated by the SBox in domain AS4; sent to the SBoxes in domains AS3 and AS5, only when sent)
  - vectors' values (in domains AS1 and AS4) at the end of each billing period
Refer to D4.2 [14], Table 8, for more details.
Measured metrics and evaluation methodology:
Separately for each domain (AS1 and AS4):
  - total cost (expected, achieved, predicted for the non-DTM scenario)
  - KPIs (refer to D4.2 [14]) with respect to four static reference routings; for domain AS1:
      o manageable traffic generated by both DC-B and DC-D sent via the static BGP path crossing link LA1
      o all traffic via LA2
      o traffic from DC-B via LA2, traffic from DC-D via LA1
      o traffic from DC-B via LA1, traffic from DC-D via LA2
    and for domain AS4:
      o manageable traffic generated by both DC-B and DC-D sent via the static BGP path crossing link LC1
      o all traffic via LC2
      o traffic from DC-B via LC2, traffic from DC-D via LC1
      o traffic from DC-B via LC1, traffic from DC-D via LC2


In this experiment there were two independent autonomous systems (owned by different ISPs) running DTM, namely domains AS1 and AS4 (c.f. Figure 7). In domain AS1 datacenter DC-A is located; in AS4 there is datacenter DC-C. Both datacenters receive traffic from two sources, DC-B and DC-D, located in two remote domains, AS3 and AS5 respectively. The complexity of this scenario lies in the following:
  - ISPs managing inbound traffic (AS1 or AS4) must rely on the cooperation of two remote ISPs and have to manage manageable traffic generated by two independent and uncorrelated sources located in two distinct domains.
  - The ISP in whose domain the source of the traffic is located (AS3 or AS5) receives reference and compensation vectors from two independent domains running DTM (AS1 and AS4). The values of those vectors are not correlated. Traffic from a datacenter must be distributed among 4 tunnels based on the vectors received, and requests from both AS1 and AS4 must be served simultaneously (illustrated by the sketch below).
The goal of this experiment was to evaluate whether DTM is able to perform effectively under such conditions.
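To make the source-side task concrete, the following sketch (illustrative Python with invented weights; not DTM's actual compensation algorithm) shows how an SBox in AS3 or AS5 could keep one weight vector per receiving domain and split each domain's manageable traffic over its two tunnels independently:

    import random

    # one (assumed) weight vector per receiving domain, derived from the
    # reference/compensation vectors that the respective domain sent
    weights = {
        "AS1": {"tunnel_LA1": 0.7, "tunnel_LA2": 0.3},
        "AS4": {"tunnel_LC1": 0.4, "tunnel_LC2": 0.6},
    }

    def pick_tunnel(dest_domain):
        tunnels = weights[dest_domain]
        return random.choices(list(tunnels), weights=list(tunnels.values()))[0]

    # flows towards DC-A (AS1) and DC-C (AS4) are assigned independently,
    # so requests from both receiving domains are served simultaneously
    print(pick_tunnel("AS1"), pick_tunnel("AS4"))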
To assess the benefits of using DTM we need to consider four possible configurations of static BGP paths (separately for AS1 and AS4). For instance, AS1 has two inter-domain links (LA1 and LA2) and two remote traffic sources (DC-B and DC-D). Four static cases might be considered:
  - static paths selected by BGP are such that manageable traffic from both datacenters passes link LA1
  - static paths selected by BGP are such that manageable traffic from both datacenters passes link LA2
  - the static path from AS3 (where DC-B is located) passes link LA2 while the static path from AS5 (DC-D) passes link LA1
  - the static path from AS3 (where DC-B is located) passes link LA1 while the static path from AS5 (DC-D) passes link LA2

Similar considerations apply to AS4. Costs and KPIs achieved with DTM are compared to the costs that would be generated in each of the four static scenarios. Different color lines are used in Figure 20 and Figure 21 to show the traffic growth for each static case. Similarly, inter-domain traffic costs are calculated for each case (Table 22 and Table 23). In this experiment the traffic costs achieved with DTM were lower than for any of the static routing cases, i.e., DTM deployment results in a cost reduction. These statements are valid for both domains, AS1 and AS4 (see the more detailed analysis below).


Figure 20: Traffic growth during the billing period and a cost map. Domain AS1. (OFS#2.1)


Figure 21: Traffic growth during the billing period and a cost map. Domain AS4. (OFS#2.1)


Table 22: Inter-domain traffic costs in domains AS1 and AS4 (OFS#2.1)

Domain AS1:
  Expected (estimated in the previous billing period, reflected by the reference vector): 3297.4
  Achieved with DTM: 3243.4
  Optimal (estimated based on the traffic vector achieved in the current billing period): 3217.2
  w/o DTM, traffic from DC-B and from DC-D via link LA1: 3496.1
  w/o DTM, traffic from DC-B and from DC-D via link LA2: 3948.6
  w/o DTM, traffic from DC-B via link LA2, traffic from DC-D via link LA1: 3670.7
  w/o DTM, traffic from DC-B via link LA1, traffic from DC-D via link LA2: 3377.4

Domain AS4:
  Expected (estimated in the previous billing period, reflected by the reference vector): 3866.7
  Achieved with DTM: 3902.4
  Optimal (estimated based on the traffic vector achieved in the current billing period): 3895.3
  w/o DTM, traffic from DC-B and from DC-D via link LC1: 4223.8
  w/o DTM, traffic from DC-B and from DC-D via link LC2: 6141.6
  w/o DTM, traffic from DC-B via link LC2, traffic from DC-D via link LC1: 4234.7
  w/o DTM, traffic from DC-B via link LC1, traffic from DC-D via link LC2: 4689.3

Table 23: KPIs for AS1 and AS4 (OFS#2.1)

  KPI                                                                    Domain AS1   Domain AS4
  cost ratio DTM/static, DC-B and DC-D via LA1 (LC1)                     0.9277       0.9240
  cost ratio DTM/static, DC-B and DC-D via LA2 (LC2)                     0.8214       0.6354
  cost ratio DTM/static, DC-B via LA2 (LC2), DC-D via LA1 (LC1)          0.8836       0.9215
  cost ratio DTM/static, DC-B via LA1 (LC1), DC-D via LA2 (LC2)          0.9603       0.8322
  cost difference DTM - static, DC-B and DC-D via LA1 (LC1)              -252.6       -321.4
  cost difference DTM - static, DC-B and DC-D via LA2 (LC2)              -705.2       -2239.1
  cost difference DTM - static, DC-B via LA2 (LC2), DC-D via LA1 (LC1)   -427.3       -332.5
  cost difference DTM - static, DC-B via LA1 (LC1), DC-D via LA2 (LC2)   -133.9       -786.9
  cost ratio achieved/expected                                           0.9836       1.0093

The cost savings in domain AS1 vary from ~4% to ~18%, depending on which static routing case the results obtained with DTM are compared to. In domain AS4 the cost savings vary between ~8% and ~36% (Table 23). The highest savings are achieved if the static BGP paths from both remote domains, AS3 (where DC-B is located) and AS5 (DC-D), are assumed to pass link LC2.
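These percentages follow directly from Table 22 as 1 - cost_DTM / cost_static; a plain recomputation (illustrative, not project tooling):

    dtm_cost = {"AS1": 3243.4, "AS4": 3902.4}
    static_cost = {                      # the four static BGP cases per domain
        "AS1": [3496.1, 3948.6, 3670.7, 3377.4],
        "AS4": [4223.8, 6141.6, 4234.7, 4689.3],
    }
    for domain, costs in static_cost.items():
        print(domain, ", ".join(f"{1 - dtm_cost[domain] / c:.1%}" for c in costs))
    # AS1: 7.2%, 17.9%, 11.6%, 4.0%   (i.e., ~4% to ~18%)
    # AS4: 7.6%, 36.5%, 7.8%, 16.8%   (i.e., ~8% to ~36%)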


The accuracy of DTM traffic management was high in both domains. In the presented billing period the cost achieved in domain AS1 was lower than that stemming from the reference vector (ratio 0.9836), while in domain AS4 it was a bit higher than expected (ratio 1.0093). As explained before, such small deviations are normal, since statistically the amounts of traffic in different billing periods are never identical (the reference vector used in the current billing period reflects a prediction of traffic calculated using traffic measurements from the previous billing period).
Conclusion: To sum up, experiment OFS#2.1 proved that DTM performs well in a multi-ISP topology and offers inter-domain cost reduction. The typical potential benefits for an ISP deploying DTM are similar to those of the previous experiments: depending on the traffic volumes, the cost functions and the static routing configuration taken as reference, the cost reduction varies from 5-8% to 15-18%, but even higher savings are possible. The most important conclusion is that the DTM prototype was able to operate effectively in a multi-domain scenario. Two ASes were serving as sources of manageable traffic (the SBoxes in those domains distribute traffic over four tunnels using two reference vectors received from the two domains receiving the traffic). In turn, each of the two ISPs optimizing its inter-domain traffic costs had to rely on cooperation with two remote domains in which the DCs generating the traffic were located.
5.1.5 OFS#2.2 (M-to-M 95% rule)

Scenario: Operator Focused Scenario
Use case: Bulk data transfer for cloud operators
Experiment Id: OFS#2.2
Goal: To evaluate the performance of DTM in a multi-ISP environment when the tariff used by the ISPs is based on the 95th percentile rule.
TM mechanism: DTM
Experiment type: Testbed
Experiment setup:
Tariff: 95th percentile rule
Topology: Figure 7

Domain AS1
Cost functions for links LA1 and LA2, where V is the 95th percentile sample size determined at the end of the billing period:

  C_LA1(V) = 200                     for 0 ≤ V ≤ 420·10^6
  C_LA1(V) = 0.7·10^-6·V - 94        for 420·10^6 < V ≤ 800·10^6
  C_LA1(V) = 5·10^-6·V - 3534        for V > 800·10^6

  C_LA2(V) = 0.25·10^-6·V            for 0 ≤ V ≤ 210·10^6
  C_LA2(V) = 3·10^-6·V - 577.5       for 210·10^6 < V ≤ 440·10^6
  C_LA2(V) = 6·10^-6·V - 1897.5      for V > 440·10^6

Domain AS4
Cost functions for links LC1 and LC2, where V is the 95th percentile sample size determined at the end of the billing period:

  C_LC1(V) = 200                     for 0 ≤ V ≤ 200·10^6
  C_LC1(V) = 1.25·10^-6·V - 50       for 200·10^6 < V ≤ 480·10^6
  C_LC1(V) = 3.75·10^-6·V - 1250     for V > 480·10^6

  C_LC2(V) = 0.5·10^-6·V             for 0 ≤ V ≤ 150·10^6
  C_LC2(V) = 1.625·10^-6·V - 168.75  for 150·10^6 < V ≤ 310·10^6
  C_LC2(V) = 6.25·10^-6·V - 1602.5   for V > 310·10^6

Billing period: 7 days
Simulation time: 22 days
SDN controller mode: reactive without reference vector
Compensation period: 30 s

Traffic generators settings:

Background on LA1:
  Flow inter-arrival-time: exponential, mean 91 ms
  Flow length: Pareto (scale 3400, shape 1.5)
  Packet inter-arrival-time: exponential, mean 35 ms
  Packet payload size: normal mixture; with probability 0.6: mean 1358, std. dev. 25; with probability 0.4: mean 158, std. dev. 20

Background on LA2: as LA1, but flow inter-arrival-time mean 182 ms
Background on LC1: as LA1, but flow inter-arrival-time mean 146 ms
Background on LC2: as LA1, but flow inter-arrival-time mean 210 ms

Manageable from DC-B to DC-A:
  Flow inter-arrival-time: exponential, mean 113.5 ms
  Flow length: exponential, mean 2400 ms
  Packet inter-arrival-time: exponential, mean 35 ms
  Packet payload size: normal mixture; with probability 0.95: mean 1358, std. dev. 25; with probability 0.05: mean 158, std. dev. 20

Manageable from DC-D to DC-A: as above, but flow inter-arrival-time mean 738 ms
Manageable from DC-B to DC-C: as above, but flow inter-arrival-time mean 282.5 ms
Manageable from DC-D to DC-C: as above, but flow inter-arrival-time mean 207.5 ms

Recorded data and raw data format:
  - traffic samples on each of the 4 inter-domain links (LA1, LA2, LC1, LC2) every 30 seconds
  - 5-minute samples on each of the 4 inter-domain links (LA1, LA2, LC1, LC2), with distinction of manageable and non-manageable traffic
  - vector values in domain AS1 every 30 seconds (calculated by the SBox in domain AS1; sent to the SBoxes in domains AS3 and AS5, only when sent)
  - vector values in domain AS4 every 30 seconds (calculated by the SBox in domain AS4; sent to the SBoxes in domains AS3 and AS5, only when sent)
  - vectors' values (in domains AS1 and AS4) at the end of each billing period
Refer to D4.2 [14], Table 8, for more details.


Measured metrics and evaluation methodology:
Separately for each domain (AS1 and AS4):
  - total cost (expected, achieved, predicted for the non-DTM scenario)
  - KPIs (refer to D4.2 [14]) with respect to four static reference routings; for domain AS1:
      o manageable traffic generated by both DC-B and DC-D sent via the static BGP path crossing link LA1
      o all traffic via LA2
      o traffic from DC-B via LA2, traffic from DC-D via LA1
      o traffic from DC-B via LA1, traffic from DC-D via LA2
    and for domain AS4:
      o manageable traffic generated by both DC-B and DC-D sent via the static BGP path crossing link LC1
      o all traffic via LC2
      o traffic from DC-B via LC2, traffic from DC-D via LC1
      o traffic from DC-B via LC1, traffic from DC-D via LC2

Figure 22: The distribution of 5-min sample pairs on a cost map in domain AS1 (OFS#2.2)

Figure 23: The distribution of 5-min sample pairs on a cost map in domain AS4 (OFS#2.2)
Similarly to experiment OFS#2.1, four static BGP path configurations were considered. However, since plots presenting the sample pair distribution on a cost map would be completely illegible if all four cases were plotted together, only two cases are presented: all manageable traffic passing LA1 or LA2 in AS1 (LC1 or LC2 in AS4, respectively); see Figure 22 and Figure 23. In turn, inter-domain traffic costs and KPIs are presented for all cases (Table 24 and Table 25).
The figures showing the distribution of sample pairs are similar to the plot shown for the OFS#1.2 experiment: sample pairs are condensed around the reference vector. In both domains, the achieved 95th percentile thresholds are lower than determined by the reference vector. As a result, the achieved costs in both domains are lower than expected (Table 24). In the presented billing period the achieved/expected cost ratios in AS1 and AS4 equal 0.9616 and 0.9707, respectively, i.e., the achieved cost was ~3-4% lower than expected (the expectation stems from the reference vector calculated using traffic measurements from the previous billing period).
Looking at Table 24 one can also notice that the cost achieved with DTM is not only lower than expected but also lower than for any static routing case (using paths selected by BGP for manageable traffic). This is confirmed by the KPIs presented in Table 25: the values of all cost ratios calculated for each domain and for each possible static routing scenario are below 1. The ISP's benefit varies from ~12% to ~22% in domain AS1. In turn, the cost reduction thanks to using DTM is between 3% and 24% in domain AS4, depending on which static routing case is considered as reference. Moreover, the DTM optimization algorithm, taking the traffic statistics from the previous period, converges to an even better solution: the cost would be even lower if the traffic distribution were slightly different (Table 24). It finds a new reference vector for the consecutive billing period; using the new vector, DTM will try to further lower the traffic costs in the next billing period.
Table 24: Inter-domain traffic costs in domains AS1 and AS4 (OFS#2.2)

Domain AS1:
  Expected (estimated in the previous billing period, reflected by the reference vector): 1144.3
  Achieved with DTM: 1100.3
  Optimal (estimated based on the traffic vector achieved in the current billing period): 1068.9
  w/o DTM, traffic from DC-B and from DC-D via link LA1: 1296.9
  w/o DTM, traffic from DC-B and from DC-D via link LA2: 1402.7
  w/o DTM, traffic from DC-B via link LA2, traffic from DC-D via link LA1: 1266.3
  w/o DTM, traffic from DC-B via link LA1, traffic from DC-D via link LA2: 1244.9

Domain AS4:
  Expected (estimated in the previous billing period, reflected by the reference vector): 1149.7
  Achieved with DTM: 1116.3
  Optimal (estimated based on the traffic vector achieved in the current billing period): 1105.8
  w/o DTM, traffic from DC-B and from DC-D via link LC1: 1237.3
  w/o DTM, traffic from DC-B and from DC-D via link LC2: 1469.0
  w/o DTM, traffic from DC-B via link LC2, traffic from DC-D via link LC1: 1150.6
  w/o DTM, traffic from DC-B via link LC1, traffic from DC-D via link LC2: 1218.4

Table 25: KPIs for AS1 and AS4 (OFS#2.2)

  KPI                                                                    Domain AS1   Domain AS4
  cost ratio DTM/static, DC-B and DC-D via LA1 (LC1)                     0.8484       0.9022
  cost ratio DTM/static, DC-B and DC-D via LA2 (LC2)                     0.7844       0.7599
  cost ratio DTM/static, DC-B via LA2 (LC2), DC-D via LA1 (LC1)          0.8689       0.9701
  cost ratio DTM/static, DC-B via LA1 (LC1), DC-D via LA2 (LC2)          0.8838       0.9162
  cost difference DTM - static, DC-B and DC-D via LA1 (LC1)              -196.6       -121.1
  cost difference DTM - static, DC-B and DC-D via LA2 (LC2)              -302.4       -352.8
  cost difference DTM - static, DC-B via LA2 (LC2), DC-D via LA1 (LC1)   -165.9       -34.4
  cost difference DTM - static, DC-B via LA1 (LC1), DC-D via LA2 (LC2)   -144.6       -102.1
  cost ratio achieved/expected                                           0.9616       0.9707


Figure 24 presents an interesting phenomenon that may occur when the end of the billing period is approaching. It shows how DTM reacts to the current traffic measurements in order to distribute the traffic optimally and reach the target (reference vector) 95th percentile thresholds. First, DTM, using the statistics of the collected 5-minute samples, decides to switch off the compensation on link LA2: the distribution of the samples collected until this moment is such that, regardless of the size of the remaining samples to be collected until the end of the billing period, the target 95th percentile threshold cannot be exceeded. Therefore, the compensation on link LA2 is switched off and all manageable traffic is sent over that link to help achieve the target threshold on link LA1. Then, after some time, the same situation occurs on link LA1, and the compensation on that link can also be switched off. From that moment until the end of the billing period DTM balances the traffic among both links.
It may also happen that DTM does not manage to achieve the 95th percentile thresholds before the end of the billing period. In such a case DTM continues distributing manageable traffic in order to minimize the amount by which the target threshold (reference vector) is exceeded.
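The switch-off condition can be stated compactly. The following is a hedged sketch (an assumed formalization, not the SBox implementation) of the worst-case test implied above:

    def can_stop_compensation(samples, total_expected, target_threshold):
        # samples: 5-min sample sizes collected so far on this link
        # total_expected: number of samples in the whole billing period
        #                 (2016 for a 7-day period)
        allowed_above = int(0.05 * total_expected)    # samples that may exceed
        remaining = total_expected - len(samples)
        above_so_far = sum(1 for s in samples if s > target_threshold)
        # even if every remaining sample lands above the threshold,
        # the 95th percentile stays at or below the target
        return above_so_far + remaining <= allowed_above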
[Figure 24, two panels: 5-min sample size [MB] versus time (159 h to 168 h) on links LA1 (top) and LA2 (bottom) at the end of the billing period (28.09.2015 to 05.10.2015, 23:59:59). Curves: total traffic, background traffic, R vector threshold, actual 95th percentile threshold; annotations mark the end of compensation on link LA2 and, later, on link LA1.]
Figure 24: 5-min samples observed on inter-domain links LA1 and LA2 (AS1) at the end of
billing period (OFS#2.2)


Conclusion: To sum up, experiment OFS#2.2 proved that DTM performs well in a multi-ISP topology and offers inter-domain cost reduction when the 95th percentile rule is used for charging for inter-domain traffic. This conclusion is confirmed by the quantitative values presented and discussed above.

5.2 EFS experiments

The EFS experiments evaluate caching and energy consumption in RB-HORST. Furthermore, the large-scale study runs RB-HORST involving 4 different partners across Europe serving videos. Finally, MUCAPS evaluates the quality of service of videos.
5.2.1 EFS#1: Evaluation of Caching Functionality in RB-HORST
This experiment has been performed at UZH premises. The test card related to the experiment is reported below.

Scenario: End-user focused
Use case: Locality in UNaDas, service and content placement
Test Card: EFS#1 - Evaluation of caching functionality in RB-HORST
Goal:
  Test the basic functionality of RB-HORST:
  - user management on the home router
  - tracking the uNaDa in the DHT
  - communication with other RB-HORST devices
  - proxy functionality to intercept YouTube video requests
  - caching capability of the home router
  Performance evaluation of the cache:
  - how many content requests can the home router serve?
  - what is the performance of the RB-HORST cache?
  - compare streaming with RB-HORST to streaming from a remote cache
TM mechanism: RB-HORST
Experiment type: Testbed
Experiment setup:
  Equipment set:
  - end-user devices with the RB-HORST app
  - home router / UNaDa with RB-HORST functionality
  Topology (Figure 25):
  - end-user devices connect via the shared WiFi of the RB-HORST router
  - the RB-HORST router is connected to the Internet
  Parameters (not tested, constant):
  - home router/end device: hard disk size of the home router, up/downlink bandwidth, CPU, RAM
  - average video length: 3 min
  - content: (content_id, file_size, content_type), 10 videos
  Test parameters (underlined parameters are the reference set-up):
  - number of end devices: 1, 2, 4, 8
  - video request rate: 1/16 min, 1/8 min, 1/4 min, 1/2 min, 1/1 min, 1/0.5 min
  - request generator: same video; random video (10 videos, uniform distribution); catalogue (10 videos, Zipf-distributed probability)
  Functionality tests:
  - set up home router and end-user device
  - home router is registered in the overlay
  - content requests are sent from the end device
  - home router caches the content item
  - home router serves the content item
  Performance tests:
  - performance study on the number of end-user devices (vary the number of end devices)
  - performance study on the inter-arrival time of requests (vary the video request rate)
  - performance study on request strategies (vary the request strategy)
Recorded data and raw data format:
  - content request time (timestamp, content_id), text file
  - content cache time (timestamp, content_id), text file
  - content serve time (timestamp, content_id), text file
  - up-/downlink traffic traces, pcap file
Measured metrics and evaluation methodology:
  - cache hit rate = #(content serve time) / #(content request time)
  - requests served = #(content serve time)
  - bandwidth utilization from the traffic traces


Figure 25: Testbed setting for evaluation of caching functionality


The experiment setup is shown in Figure 25, where end devices are connected via the uNaDa access point. Up to 8 end devices have been used, connecting to one access point. Experiments testing the caching functionality have been carried out. The testbed setup contains 9 Odroids: 1 Odroid is the UNaDa, where RB-HORST is running, while 8 Odroids are end devices accessing the Internet via the UNaDa access point. The input parameters of the experiment are the number of devices (1 to 8) and the video request rate (1/16 min, 1/8 min, 1/4 min, 1/2 min, 1/1 min, and 1/0.5 min). For each run, 10 videos on Vimeo were requested using a Zipf distribution; the request generator is available on GitHub [33]. The experiments ran for more than 2 days with all the variations, producing traces of more than 1 GB of data.
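For illustration, such a request pattern can be generated as sketched below (the Zipf exponent and the timing model are assumptions; the generator actually used is the one published on GitHub [33]):

    import random

    def request_stream(video_ids, mean_interval_min, n_requests, s=1.0):
        # Zipf-like popularity over the catalogue, exponential inter-request times
        weights = [1 / (rank ** s) for rank in range(1, len(video_ids) + 1)]
        t = 0.0
        for _ in range(n_requests):
            t += random.expovariate(1 / mean_interval_min)
            yield t, random.choices(video_ids, weights=weights)[0]

    for t, vid in request_stream([f"video{i}" for i in range(10)], 4.0, 5):
        print(f"{t:6.1f} min -> {vid}")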

Figure 26: Cache hit rate with varying number of devices


Figure 27: Transferred data with varying number of devices


Figure 26 shows that the cache hit rate increases with an increasing number of devices. This expected behavior is due to the fact that more requests were cached and served from the uNaDa instead of requesting the video over the Internet. Thus, the bandwidth that can be saved (grey area) when using a UNaDa is highest with 8 devices. The same behavior can be observed in Figure 27 for the transferred data: the WiFi bandwidth used to serve up to 8 clients is much higher than the Ethernet bandwidth used to fetch content from the Internet on a cache miss.

Figure 28: Cache hit rate with varying request time interval
Figure 28 and Figure 29 show that the request interval does not have an influence on the cache hit rate. The bandwidth also remains stable when simulating with different request time intervals. That means time does not have a significant influence on the functionality of caching in RB-HORST.
Finally, the trace of the large scale study was used to evaluate the performance of caches of different sizes. For this purpose, the simulative evaluation framework was configured with one cache per user taking part in the study. The cache size in number of items was varied, and the performance of caches working in an overlay was compared to the case where caches only serve the requests of their owner, referred to as the no-overlay case. To evaluate which replacement strategy works best in the study, we compare the simple LRU policy to the gated approach k-LRU, with k=1, where one virtual cache that stores only hashes precedes the actual cache. Upon request, an item is only placed in the cache if its hash is found in the virtual cache; this prevents the cache from being polluted with rarely requested items. Figure 30 shows the average hit rate of the caches. As expected, the cache efficiency increases with the cache size in any case. An overlay as deployed in RB-HORST highly increases the efficiency of the caches. The gated cache replacement performs worse than the simple LRU replacement; this is due to the fact that the requests in the trace are highly dynamic and follow a daily pattern according to the instructions given in the study.
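A minimal sketch of the gated k-LRU idea with k=1 (illustrative Python, not the simulator's code):

    from collections import OrderedDict

    class LRU:
        def __init__(self, size):
            self.size, self.items = size, OrderedDict()

        def touch(self, key):
            # returns True on hit; inserts and possibly evicts on miss
            hit = key in self.items
            if hit:
                self.items.move_to_end(key)
            else:
                self.items[key] = True
                if len(self.items) > self.size:
                    self.items.popitem(last=False)   # evict least recently used
            return hit

    class GatedLRU:
        def __init__(self, size):
            self.virtual = LRU(size)   # holds only content hashes
            self.real = LRU(size)      # holds the items themselves

        def request(self, item):
            if item in self.real.items:
                self.real.touch(item)
                return True                  # cache hit
            if self.virtual.touch(item):     # hash seen before?
                self.real.touch(item)        # admit on the second request
            return False                     # miss; one-timers never enter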

Figure 29: Transferred data with varying request time interval

Figure 30: Cache efficiency in large scale study trace



Conclusion: The saved bandwidth with 8 devices in this experiment is around 6.5 GB: without the uNaDa, Ethernet traffic would be more than 8.5 GB, while with the uNaDa it is only a bit more than 2 GB. The average transferred data remains at around 6 GB, while the saved bandwidth on average is 4 GB. The evaluation of the large scale study trace showed that an overlay highly increases the performance of the caches.
5.2.2 EFS#2: Evaluation of RB-HORST large scale study
This experiment has been performed in a large scale study at the premises of UZH, UniWue, TUD, ICOM, and AUEB. The test card related to the experiment is reported below.

Scenario: End-user Focused Scenario
Use case: Social-aware mobile data offloading, exploiting content locality
Test Card: EFS#2 - Social dependencies
Goal: Assessing the social dependencies in content access
TM mechanism: RB-HORST
Experiment type: Testbed
Experiment setup:
  Parameters:
  - duration: 4 weeks
  - participants: project partners
  - mobile devices: various Android phones, PCs, and uNaDas
  - see also Chapter 3.2.2 in D4.3 [14]
Recorded data and raw data format:
  uNaDa:
  - list of Facebook friend IDs
  - predicted video IDs
  - overlay neighbors
  - video: request events, prefetching events, serving events
  - cache: hits, deletes
  Smartphone:
  - connected SSID
  See also Chapter 3.2.2 in D4.3 [14]
Measured metrics and evaluation methodology: Metrics are directly recorded on the devices. The recorded data is uploaded to a central data collection server for later (offline) analysis.


In the large scale study, 23 active users participated. A limited catalogue of 100 videos was provided and a central hub node was chosen. To generate a social-network-like structure with central contributors, all study participants established a friend relationship with the Facebook profile of the central hub node. During the study, 1005 videos were requested and the uNaDas interconnected via the RB-HORST overlay. Figure 31 shows the overlay, visualizing the uNaDas as nodes with their respective MAC address and their overlay connections.

Figure 31: Overlay of uNaDas after large scale study


Figure 32 depicts the videos watched in the study, ranked by the number of views; the views of each video are plotted on the y-axis. It can be seen that not only the 100 videos of the provided catalogue were watched: the users additionally watched and shared their own videos. The number of views decays exponentially with the video rank. Additionally, there are a few heavy hitters, i.e., videos that were watched very often during the course of the study.

Figure 32: Number of video views



The left plot of Figure 33 shows the cache hit rate of the uNaDas, i.e., the ratio of requests served by the uNaDas to the users' video requests. It can be seen that the median cache hit rate is around 0.4; for some uNaDas the cache hit rate reaches up to 0.75. In the right plot of Figure 33, the cache hit rate is depicted depending on the number of video requests at a given uNaDa. Especially caches with many requests have a high cache hit rate, while for caches with few requests the cache hit rate takes a broad range of values.

Figure 33: Cache efficiency


In Figure 34, more benefits of the RB-HORST approach become visible. It shows, for each uNaDa sorted by the number of requests served, how many requests were served locally by the uNaDa or by a close overlay neighbor (a uNaDa in the same AS), and how many requests were forwarded to the video server. On average, 50.45% of the user requests could be served by the uNaDas. This means that a lot of inter-domain traffic and video server load could be saved by the mechanism.

Figure 34: Location of request processing


As the RB-HORST mechanism not only reacts to user requests but also proactively prefetches relevant video content to the uNaDas, it has to be analyzed whether the prefetching actually results in traffic savings. Figure 35 shows the actual requests to the Vimeo video server divided by the user requests. The actual requests to Vimeo include both videos requested and watched by the end user, which could not be served locally, and prefetched videos. It can be seen in the left plot that nine uNaDas save requests, as their ratio is below 1. For some uNaDas the ratio is 1 or slightly above 1, but some also have a high prefetching overhead, which is better visible when zooming out in the right plot. It is not well understood what causes these high ratios, but the extreme outliers on the far right might be due to benchmarking tests and bug fixing by some participants of the study and, if this assertion is right, they will not occur when the RB-HORST mechanism is deployed.

Figure 35: Prefetching overhead


To examine the Quality of Experience of the end users when streaming a video from the uNaDa instead of the video server, we apply a simple QoE model. If the video duration is smaller than the download time, the QoE is bad, as stalling cannot be avoided. If the video duration is higher than the download time, the QoE is assumed to be good. The left plot of Figure 36 shows a scatter plot of video duration and video download time and also indicates the videos having good and bad QoE. This plot is transformed into a CDF over the download time divided by the video duration in the right plot of Figure 36. Obviously, the QoE is good if this ratio is smaller than 1; it can be seen that 80% of the videos have a good QoE. Due to network fluctuations, a ratio close to 1 does not necessarily mean that the video playback is smooth. Thus, it is useful to add a safety margin, e.g., 20%, c.f. [41], to this simple model for indirect QoE assessment, meaning that the QoE is only good if the ratio is smaller than 0.8. However, even in this case, 75% of all videos have good QoE with the RB-HORST mechanism.
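The indicator reduces to a single ratio test; a minimal sketch with the threshold values from the text (the function name is illustrative):

    def qoe_is_good(download_time_s, video_duration_s, safety_margin=0.2):
        # good QoE: the video downloads faster than it plays back, with a
        # 20% margin against network fluctuations (c.f. [41])
        return download_time_s / video_duration_s < 1.0 - safety_margin

    print(qoe_is_good(140, 180))   # ratio 0.78 < 0.8 -> True (good QoE)
    print(qoe_is_good(170, 180))   # ratio 0.94 -> False (risk of stalling)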

Figure 36: Quality of Experience for videos served by uNaDas



Conclusion: To sum up, these results show that the uNaDas form an overlay as expected. The cache hit rates of the different uNaDas reach up to 0.75 and were high for those uNaDas that received many requests. Half of all user requests could be served locally, although some users faced a high prefetching overhead. When streaming a video from the uNaDa, users had a good QoE for 75% of the videos, because the download times (plus safety margin) were shorter than the video playtime.
5.2.3 EFS#3: Energy consumption of RB-HORST
Any newly deployed system must prove its utility and induced profit versus the generated cost to justify its deployment. One of the increasingly important parameters is the consumed energy, in particular in comparison to the status quo and alternative systems. Hence, a thorough analysis of the power consumption of RB-HORST and the mobile client is conducted.
The energy assessment of RB-HORST is based on dedicated experiments assessing the network performance (i.e., RTT, throughput) over different connection options. The study is conducted on the RB-HORST testbed, extended by monitoring software that determines system and network utilization. Based on these measurements, combined with a power model of the deployed devices, the power consumption of the individual components is determined.
The influence on the energy consumption of mobile users is calculated exemplarily using Android smartphones equipped with measurement software. Besides the system monitoring, dedicated throughput and RTT tests are conducted. Similarly to the uNaDas, the power consumption of the smartphones is determined using a model-based approach.
5.2.3.1 Analysis of Mobile Content Access Energy Consumption
The energy efficiency of mobile content access is determined using dedicated measurements from the smartphones. Measurement points are placed at:
  - the uNaDa
  - a remote content server
These measurement points are selected to represent possible content locations in RB-HORST. Content may be pre-fetched/cached on the uNaDa, and hence be available with low delay and high throughput. Alternatively, the content may be located at a CDN; CDN nodes are usually located close to or within the ISP networks.
To achieve comparable measurements, the same measurement environment needs to be set up on the uNaDas and the measurement servers. Hence, a suitable location for the measurement server representing the content server is required. The best possible approximation allowing flexible measurements is using EmanicsLab [46] servers, which are distributed over Europe and are usually well connected. From these, the closest server is identified by selecting the one with the lowest connect duration; this also avoids selecting highly loaded servers. The best server selection is always executed after a connection change, assuring that the fastest server is used for all measurements.
In this experiment, the performance was measured for connections via the uNaDa as well as the 2G/3G/4G cellular network using different network providers. Thus, a wide variety of connectivity options is reflected in the results.
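The server selection by lowest connect duration can be sketched as follows (hostnames are placeholders, not the actual EmanicsLab nodes):

    import socket, time

    def connect_time(host, port=80, timeout=2.0):
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")      # unreachable hosts are never chosen

    def closest_server(hosts):
        # re-run after every connection change so the fastest server is used
        return min(hosts, key=connect_time)

    print(closest_server(["node1.example.org", "node2.example.org"]))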


The power consumption of the end-user devices is calculated based on the power model of the Nexus 5. Therefore, the system utilization of all devices is not collected relative to the respective interface; instead, absolute numbers are collected (e.g., CPU cycles, bytes transmitted/received). This way, the utilization traces of any smartphone can be mapped to a Nexus 5, and the corresponding power consumption of the Nexus 5 calculated.
Scenario: End-user Focused Scenario
Use case: Evaluation of data offloading functionality in RB-HORST
Test Card: EFS#3 - Performance Test
Goal: Assess the energy cost of RB-HORST offloading on mobile devices
TM mechanism: RB-HORST
Experiment type: Testbed
Experiment setup:
  Parameters:
  - duration: 4 weeks
  - participants: project partners
  - mobile devices: various Android phones
  - see also Chapter 3.2.3 in D2.4 [7]
Recorded data and raw data format:
  Using RB-HORST:
  - RTT between: smartphone and measurement server; smartphone and uNaDa; uNaDa and measurement server
  - throughput (bidirectional) between: smartphone and measurement server; smartphone and uNaDa; uNaDa and measurement server
  Reference (3G/4G):
  - RTT between smartphone and measurement server
  - throughput (bidirectional) between smartphone and measurement server
  Data format: structured text (JSON)
Measured metrics and evaluation methodology: Metrics are directly recorded on the device; see "Recorded data and raw data format".

Measurement Results
The activity patterns of the smartphones in Figure 37 show the active times of the different devices. Some devices were active more often than others. Out of the 20 devices, 15 recorded useful data, which is discussed in the following. Table 26 lists the activity of the network interfaces over the course of the study. On average, the network interfaces of the participating devices were active for 10% of the time. Still, the variability of the observed patterns is high: some devices didn't provide any or just a small number of measurements, while a few others were active for 80% of the time, collecting useful metrics for the analysis of the power consumption of the devices.

Figure 37: Activity patterns of the participating smartphones


Table 26: Analysis of the device activity

  Device activity    Fraction of the study
  Min                0%
  Mean               9.4%
  Median             1%
  Max                80%

From the active devices, the network delay was measured using the native ping command. Approximately 100,000 individual measurements were conducted from the mobile phones. 60% of these were end-to-end measurements using RB-HORST access points; these are indicated by the red line in Figure 38. Of the remaining 40% of the measurements, 1/3 measure the RTT to the local uNaDa (green line), 1/3 measure the RTT between uNaDa and measurement server (blue line), and the last 1/3 measure the cellular network for reference.
The measurements in Figure 38 show that in 80% of the cases a service running on the local uNaDa provides the best performance, which is consistent with the expectations. The worst performance is achieved using the different cellular technologies. Using RB-HORST for on-demand offloading, e.g., as a simple access point, the performance is improved compared to the cellular network, falling in the region of 10 ms over a large range of measurements.


The blue line shows the RTT between the uNaDa and the measurement server, which is in the range of 10 ms to 100 ms. The steps visible in the curve are caused by the different access technologies available to the study participants: measurements from well-connected office environments clearly show better performance compared to DSL or cellular uplinks.

Figure 38: RTTs as measured from the smartphones


The uplink measurements given in Figure 39 show the measured uplink throughput from the mobile devices to the uNaDa or the remote server. These measurements are representative of content upload, such as backing up locally generated content (photos, videos) or sharing it in social networks. Offloading these uploads may be possible by first accepting the content in a cloudlet on the RB-HORST device and uploading it to the remote server later.
Approximately 500 uplink throughput measurements were conducted. The distribution of the measurements is similar to the RTT measurements described before. The best performance (highest throughput) is clearly achieved by the connection between uNaDa and measurement server (blue), although 50% of these are not faster than the local WiFi connections. The upper 20% of the throughput measurements are explained by devices located in office environments connecting to the network at 1 Gbps. The WiFi connections show data rates of up to 35 Mbps (green line) and are significantly faster than cellular connections (yellow), as expected. Surprisingly, the throughput using RB-HORST access points for end-to-end connections is only slightly larger than that of the cellular uplink (red). This is caused by 4G data rates exceeding the WiFi data rates of smartphones: nominally, 802.11n provides throughput of up to 600 Mbps, but to achieve this a 4x4 antenna setup is required. Smartphones usually have only one WiFi antenna, limiting the WiFi throughput to 65 Mbps or 135 Mbps depending on the channel bandwidth [42]. Still, in 25% of the cases, increases of a few Mbps are possible.
From the uplink measurements, the straightforward conclusion is that using a service on the local uNaDa greatly increases the performance as perceived by the mobile user. The data rates are almost doubled, cutting the transmission duration of content to the local device by 50%. Uploading the content onwards from the uNaDa is often not time-critical and additionally benefits from the higher available data rates.


Figure 39: Network uplink as measured from the smartphones


The respective downlink measurements are depicted in Figure 40. Again, approximately 500 individual measurements were conducted. Similarly to the uplink measurements, the remote measurements (blue) show the best performance, and the uNaDas located in an office environment show exceptionally high data rates. Remote, local, and mobile connections show a similar performance in 75% of the cases. The end-to-end measurements in the case of on-demand offloading show the lowest performance. This is to be expected, as forwarding and NAT operations are conducted on the uNaDa, which is confirmed by the effect appearing at the higher rates only. Still, data rates of more than 7 Mbps can be achieved in more than 50% of the cases. Using the pre-fetching/caching feature of RB-HORST, the performance is identical to the cellular network performance.
The limited throughput on WiFi compared to the cellular network can be explained by the available technologies. Although 802.11n defines data rates of up to 600 Mbps, this is only possible in configurations with multiple antennas on both sides. As smartphones are currently mainly equipped with a single WiFi antenna, the maximum nominal data rate is 65 Mbps [42]. Furthermore, lower WiFi net rates are caused by overheads in channel admission and connection control; additionally, interference from neighbouring stations reduces the throughput. Conventionally, about 50% of the nominal data rate can be expected at application level, caused by the channel access control mechanism, MAC layer retransmissions, and protocol overhead. This was achieved in 15% of the cases with a rate of 30 Mbps.
The power consumption of the individual connections is analyzed by calculating the power consumed by the active interface based on the Nexus 5 power model as described in [35]. The calculated power consumption is plotted in Figure 41. The figure shows the distribution of the power states of the participating smartphones for the WiFi and 3G interfaces: ramp (connection establishment), active (transferring data), and tail (waiting for additional data before connection tear-down). Each interface is plotted independently, showing only relative power states.


Figure 40: Network downlink as measured from the smartphones

Figure 41: Measured CDF of the power consumption of the smartphones using 3G and
WiFi
The active time of each interface is listed in Table 27. It is visible that, relative to the idle states, the WiFi interface was on average active approximately 20% longer than the 3G interface. This is explained by the much shorter ramp and tail states of the interface. From the mean power and the active time, the energy consumption of the individual interfaces can be calculated by multiplying the active power by the active time. On average, each interface used 1/20 of the energy of a conventional smartphone battery. Still, these measurements exclude other power-intensive components like display, GPS, and processing.
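As an illustrative cross-check with the rounded WiFi values from Table 27: E = P_mean * t_active = 0.55 W * (0.1% of 4 weeks) = 0.55 W * 2419 s, i.e., roughly 1.3 kJ, the same order as the tabulated 1557 J (0.435 Wh); the residual difference stems from the rounding of the active-time percentage.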


Table 27: Comparison of the power measurements for 3G and WiFi while active

  Metric           3G                 WiFi
  P_min [W]        0.4                0.46
  P_mean [W]       0.68               0.55
  P_max [W]        1.05               1.23
  Percent active   0.08%              0.1%
  Mean energy      1886 J (0.536 Wh)  1557 J (0.435 Wh)

To better compare the measurements, the collected network utilization is used to calculate the power consumption of the individual interfaces based on the power model from [34]. The CDFs of the power states are given in Figure 42 and the corresponding metrics in Table 28. As is apparent from this figure, the power consumption of the device mainly depends on the used interface (horizontal axis). The fraction of time in which the interfaces are active also differs between the interfaces. Similarly to the above plots, large fractions of the active time are spent in idle modes. This is particularly visible for the 4G connection, which spends 40% of the time in ramp or tail states, while for 55% of the time the connection is only lightly used. Only a small fraction of the time is spent transferring larger amounts of traffic.

Figure 42: CDF of the power consumption using identical traffic traces and the power
model of the Nexus 5 for the individual interfaces
Table 28 shows the metrics derived from the simulations when calculating the power consumption of the different interfaces using the same traffic traces. The average power consumption of the different interfaces while active is comparable for WiFi and 3G, while 4G requires almost double the power. When calculating the active periods of the devices, large differences become visible: compared to WiFi, the 4G interface is active more than double the time, while the 3G interface is active triple the time, all using the same traces. Combining the active time with the mean power consumption of the respective interfaces, the energy cost for transmitting the data used during the study becomes apparent. Using WiFi alone, the data can be transferred using 0.63 Wh, which is less than 10% of a conventional smartphone battery. On both 3G and 4G, the power consumption of the interfaces is approximately quadrupled.
Table 28: Results of the simulation using different interfaces

  Metric           3G                4G                WiFi
  P_min [W]        0.4               0.6               0.46
  P_mean [W]       0.69              0.95              0.55
  P_max [W]        1.34              1.44              1.23
  Percent active   0.46%             0.38%             0.15%
  Mean energy      8503 J (2.36 Wh)  9610 J (2.67 Wh)  2284 J (0.63 Wh)

Considering the large power saving potential when only using the WiFi interface, the deployment of WiFi access points should be greatly increased. The measurements above were conducted for real-life traffic patterns using an empirically derived power model. Based on these results, the deployment of uNaDas, and in particular the use of WiFi sharing, should be increased. As the traffic patterns visible in the measurements show only a small number of larger file transfers and a huge fraction of small transfers, the influence of connection establishment and tear-down does not only affect the QoS experienced by the end-user, but also the energy consumed by the smartphone. This also affects the QoE, as already proposed in [36]. By making use of the extended availability of WiFi access points, up to 75% of the energy required for communication may be saved.
Conclusion: Considering the power models available on the smartphones, the cheapest connection for a given type of traffic can be determined a priori. Hence, additional energy savings are possible which are not visible in the observed traffic pattern. Knowing which type of content is accessed (e.g., real-time messaging, (live) video streaming), the power consumption of the smartphones can be reduced compared to using a fixed connection type.
5.2.3.2 Energy analysis of the uNaDas
Scenario: End-user Focused Scenario
Use case: Evaluation of data offloading functionality in RB-HORST
Test Card: EFS#3 - Performance Test
Goal: Assess the energy cost of the RB-HORST system
TM mechanism: RB-HORST
Experiment type: Testbed
Experiment setup:
  Parameters:
  - duration: 4 weeks
  - participants: project partners
  - mobile devices: various Android phones
  - see also Chapter 3.2.3 in D2.4 [7]
Recorded data and raw data format:
  System utilization:
  - CPU
  - RAM
  - disk
  - network I/O (WiFi/Ethernet/cellular)
Measured metrics and evaluation methodology: Measurements are converted to rates, from which the power consumption of the system is calculated using calibrated power models.

Measurement Results
The uNaDas used during the course of the study were all of type Odroid-C1 and connected to the router of the home gateway. As these do not include a WiFi chip, a WiFi dongle is attached (detailed in Appendix 13.1); it is run in AP mode to provide the required WiFi networks.

Figure 43: Activity patterns of the participating uNaDas


The activity pattern of the participating uNaDas is plotted in Figure 43. 40 devices reported to the server, of which 25 actively participated in the study. Of these, 15 devices were active for a considerable time.
The power consumption of the individual devices, and hence also of the full system, is
calculated based on system monitoring values and a power model of the Odroid-C1,
measured in accordance with the Raspberry Pi power model described in [7][34]. The
resulting values are listed in Table 29. Figure 44 shows the CDF of the derived power
consumption for all participating devices in grey, and the overall distribution in black. All
curves show that the power consumption is in 90% of the cases in the range of 2.7 W to
2.8 W.
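A minimal sketch of this rate-based estimation is given below. It assumes a simple affine power model; the coefficients are illustrative placeholders, not the calibrated Odroid-C1 model of [7][34].

# Sketch of the rate-based power estimation, assuming an affine power model
# P = P_idle + sum(coef * utilization). Coefficients are illustrative
# placeholders, not the calibrated Odroid-C1 values.

COEF = {"cpu": 1.2, "disk": 0.4, "net": 0.6}   # [W] at full utilization (assumed)
P_IDLE = 2.7                                   # [W] idle baseline (assumed)

def counters_to_rates(prev: dict, curr: dict, dt: float) -> dict:
    """Convert two snapshots of monotonic monitoring counters into rates."""
    return {k: (curr[k] - prev[k]) / dt for k in curr}

def power(utilization: dict) -> float:
    """Estimate instantaneous power [W] from normalized utilizations (0..1)."""
    return P_IDLE + sum(COEF[k] * min(max(u, 0.0), 1.0)
                        for k, u in utilization.items())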


Based on the average power consumption of a single uNaDa, the power consumption of
the full uNaDa deployment can be calculated. As 25 devices were active during the study,
the power consumption of the full deployment is 69.5 W, which is less than that of a single
conventional PC when idle. Considering that a considerable amount of traffic usually
transferred from CDN nodes through the ISP network to the local end-user devices may be
eliminated, and thus the load on the CDN node can be reduced, the additional power
consumption within the end-user premises is comparatively small.
Considering the potential savings in the network backbone by reducing the load on the
intermediate routers between end-user and CDN node, and the decreased load on the
CDN nodes, for a comparatively small increase of end-user energy consumption, energy
savings using RB-HORST are apparent.
Table 29: Measured power consumption of the uNaDas

Metric                                | Value
Minimum power                         | 0.072 W
Mean power                            | 2.783 W
Median power                          | 2.804 W
Maximum power                         | 3.167 W
5%-95% percentile                     | 2.740 W - 2.811 W
System power consumption (25 devices) | 69.5 W

Figure 44: CDF of the power consumption over the uNaDas participating in the study
Conclusion: The RB-HORST system in the current configuration reduces the power
consumption of the mobile devices as well as within the network backbone and CDN, at
the cost of a marginal increase of end-user energy consumption. Considering that the
RB-HORST functionality may be integrated in future home gateways, the additional energy
expense is negligible. Considering the reduced energy consumption on the smartphones
in combination with the reduced response time for locally available content, the QoE of the
end-user is also increased.
5.2.4 EFS#4: MUCAPS
This prototype validation has been performed at the ALBLF premises. The test card
related to the experiment is reported below.
Field | Description
Scenario | End-user focused
Use case | Service and content placement
Experiment Id | EFS#4, see Table 15
Goal | Evaluate the ability of MUCAPS to revise the initial AEP selection whenever it is appropriate; evaluate the impact of the MUCAPS selection on the QoE of video streaming.
TM mechanism | MUCAPS
Experiment type | Prototype validation
Experiment setup | Topology, configurable parameters, architecture, assumptions:
  Equipment set:
  - UEP: end-user devices with VLC video reader: 1 PC, 1 mobile phone
  - WiFi access point
  - MUCAPS: 1 PC for MACAO + tinyDNS
  - ALTO: 1 PC/ALTO server
  - AEPs: 3 video servers
  - VITALU: 1 PC
  Topology, see Figure 45:
  - UEP is connected via LAN, WiFi, or 3G
  - WiFi access point connected to the Internet
  - UEP, tinyDNS resolver, MACAO block and ALTO server belong to the same ISP
  - 2 AEPs are located in ISP A, 1 in remote ISP B
Test parameters |
  - UEP access type: LAN/FTTH, WiFi/ADSL, 3G
  - AEP ALTO costs: Routing Cost (RC), Bandwidth Score (BWS), assumed to be set by the ISP
  - Metric weights: depend on access type, based on assumed ISP policy
  - 3 video bitrates: 650, 1270, 2700 kbit/s
  - MUCAPS modality:
    o G1: MUCAPS OFF
    o G2: MUCAPS ON + routing cost
    o G3: MUCAPS ON + BWS
    o G4: MUCAPS ON + (routing cost, BWS)
Recorded data and raw data format | Performance tests:
  - For ISP: difference in routing cost with and without MUCAPS, all modalities
  - For UEP: difference in VQS with and without MUCAPS in modality G4
  Raw data: PCAP files captured with Wireshark or TCPDUMP on user devices.
Measured metrics and evaluation methodology | See metric definitions in Table 11.
  - Cross-Layer Utility (CLU) of selected AEP: function of RC, BWS and metric weights; computed by MUCAPS.
  - Video Quality Score (VQS): function of measured metrics (start-time delay, number of freezes, duration of freezes, media bit rate); computed by VITALU.

5.2.4.1 Prototype set-up

The MUCAPS prototype set-up is illustrated in Figure 45.
The AEPs have been grouped into 3 classes w.r.t. their RC and BWS values and location.
Classes C1 and C2 have a LAN, VPN and external address. Servers in class C3 have an
external address only, as they are located in a different ISP. The ALTO values of the
end-to-end paths from the AEPs have therefore been chosen as in Table 30. In the
evaluation topology, AEP3 is a high-performance private server that, in addition to
belonging to another ISP, is located several more hops away than AEP1; this explains its
high RC. The values in brackets next to the BWS indicate the corresponding estimated
end-to-end path bandwidth ranges in Mbit/s. These estimates are derived from yearly
performance statistics on French operators.


Figure 45: MUCAPS prototype set-up


Table 30: ISP-specific ALTO values for RC and BWS chosen for the 3 AEP classes

     | RC | BWS
AEP1 | 11 | 18 [45, 160]
AEP2 | 7  | 8 [4, 9]
AEP3 | 60 | 20 [45, 160]

The weights associated with these metrics are set depending on the UEP access type, as
presented in Table 31, with the following assumptions:
- In LAN/FTTX: RC and BW have equal importance.
- In WiFi/ADSL: BW has a higher influence, as the QoE will rapidly degrade when there
  are many active users in the zone covered by an access point.
- In 3G: seeking the highest possible path bandwidth is useless, as mobile devices can
  only support a limited bitrate; on the other hand, the routing cost for the operator is
  very important.

Table 31: Metric weights chosen for the 3 tested access types

    | LAN/FTTX | WiFi/ADSL | 3G
RC  | 2.3      |           |
BWS |          |           |


The AEPs have IP addresses as documented in Table 32. AEP3 is a private server that
only has a public address, which for confidentiality reasons is not revealed here.
Addresses of hosts other than LAN addresses are partially masked for the same reason.
Table 32: IP addresses of the 3 video servers

Host | LAN         | VPN             | Public
AEP1 | 10.133.10.1 | 172.alu.vpn.142 | sp1.pub.srv.113
AEP2 | 10.133.10.2 | 172.alu.vpn.51  | sp1.pub.srv.111
AEP3 | -           | -               | spB.prv.srv.37

5.2.4.2 Results of functional evaluation

The presented results show how the name resolution is performed by the VLC reader
when the URI of the requested video is entered, using a system library function such as
gethostbyname.
In all the test cases, when MUCAPS is OFF, the tinyDNS response provides an AEP
ordering equal to {AEP3, AEP1, AEP2}, which appears to be inverted w.r.t. the initial
appearance order in the DNS database.
Figure 46 is a screenshot of responses to the dig command in the G1 modality, that is,
with MUCAPS OFF, for all types of UEP access.
Figure 47, Figure 48, and Figure 49 provide screenshots of responses to the dig command
in the G4 modality, that is, with MUCAPS ON + (RC, BWS), when the UEP has a
LAN/FTTX, WiFi/ADSL, and 3G access, respectively.

$ dig videoserver.alto
# Without ALTO, on LAN
; <<>> DiG 9.9.5-3-Ubuntu <<>> videoserver.alto
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11339
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;videoserver.alto.              IN      A

;; ANSWER SECTION:
videoserver.alto.       10      IN      A       spB.prv.srv.37
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       sp1.pub.srv.111

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165

Figure 46: Screenshot with MUCAPS OFF for all types of UEP access


$ dig @172.alu.vpn.165 videoserver.alto
# With ALTO, on LAN
; <<>> DiG 9.9.5-3-Ubuntu <<>> @172.alu.vpn.165 videoserver.alto
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51825
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;videoserver.alto.              IN      A

;; ANSWER SECTION:
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       sp1.pub.srv.111
videoserver.alto.       10      IN      A       spB.prv.srv.37

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165

Figure 47: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a LAN/FTTX access
$ dig @192.alu.wfi.145 videoserver.alto
# With ALTO, on Wi-Fi
; <<>> DiG 9.9.5-3-Ubuntu <<>> @192.alu.wfi.145 videoserver.alto
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49285
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;videoserver.alto.              IN      A

;; ANSWER SECTION:
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       spB.prv.srv.37
videoserver.alto.       10      IN      A       sp1.pub.srv.111

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165

Figure 48: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a WiFi access


$ dig @192.alu.3g.208 videoserver.alto
# With ALTO, on cellular
; <<>> DiG 9.9.5-3-Ubuntu <<>> @192.alu.3g.208 videoserver.alto
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12508
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;videoserver.alto.              IN      A

;; ANSWER SECTION:
videoserver.alto.       10      IN      A       sp1.pub.srv.111
videoserver.alto.       10      IN      A       sp1.pub.srv.113
videoserver.alto.       10      IN      A       spB.prv.srv.37

;; AUTHORITY SECTION:
alto.                   259200  IN      NS      a.ns.alto.

;; ADDITIONAL SECTION:
a.ns.alto.              259200  IN      A       10.1.1.165

Figure 49: Screenshot for MUCAPS ON + (RC, BWS) when the UEP has a 3G access
The three tables below compare, for each access type, the AEP performances in
cross-layer utility (CLU) for all 4 MUCAPS modalities, as computed by the AEP ranking
module of MUCAPS. Table 33, Table 34, and Table 35 provide the AEP CLU for the LAN,
WiFi and 3G access, respectively. The CLU represents the proximity of the AEP
performance vector V(aep) to an ideal (RC, BWS) vector Vid, composed of the best
observed performance values among the candidate AEPs; in the present case,
Vid = (7, 20). The CLU is based on the L1 norm of a weighted distance vector between
V(aep) and Vid. In all three tables, the best choice is the AEP that maximizes the CLU.
- For LAN access: the best CLU performance is obtained in the ON+(RC, BW) mode,
  when AEP1 is selected. The CLU equals 0.765.
- For WiFi access: the best CLU performance is obtained in the ON+(RC, BW) mode,
  when AEP1 is selected. The CLU equals 0.7418.
- For 3G access: the best CLU performance is obtained in the ON+(RC, BW) mode,
  when AEP2 is selected. The CLU equals 0.76.
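The exact normalization of the CLU is not spelled out here; one formulation that approximately reproduces the LAN values in Table 33 below is CLU = 1 - sum_m w_m * |v_m - vid_m| / max(v_m, vid_m). The sketch below uses this assumed form with equal LAN weights; both the normalization and the weight values are assumptions, not the documented MUCAPS internals.

# Plausible sketch of the CLU computation: CLU = 1 - weighted, normalized L1
# distance between an AEP's (RC, BWS) vector and the ideal vector Vid. The
# normalization by max(v, vid) and the equal LAN weights are assumptions; they
# approximately reproduce Table 33 (AEP3: 0.559, AEP2: 0.700, AEP1: 0.768).

def clu(v: tuple, vid: tuple, weights: tuple) -> float:
    """Cross-Layer Utility of an AEP with performance vector v."""
    return 1.0 - sum(w * abs(x - ideal) / max(x, ideal)
                     for x, ideal, w in zip(v, vid, weights))

vid = (7, 20)                       # ideal (RC, BWS) among the candidate AEPs
weights_lan = (0.5, 0.5)            # equal importance for LAN (assumed scale)

for name, v in {"AEP1": (11, 18), "AEP2": (7, 8), "AEP3": (60, 20)}.items():
    print(name, round(clu(v, vid, weights_lan), 3))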

Table 33: AEP performances in CLU for all 4 MUCAPS modalities for LAN access

LAN         | Selected AEP | RC | BWS | CLU
OFF         | AEP3         | 60 | 20  | 0.558
ON+RC       | AEP2         | 7  | 8   | 0.7
ON+BW       | AEP3         | 60 | 20  | 0.558
ON+(RC, BW) | AEP1         | 11 | 18  | 0.765


Table 34: AEP performances in CLU for all 4 MUCAPS modalities for WiFi access

WiFi        | Selected AEP | RC | BWS | CLU
OFF         | AEP3         | 60 | 20  | 0.47
ON+RC       | AEP2         | 7  | 8   | 0.6191
ON+BW       | AEP3         | 60 | 20  | 0.47
ON+(RC, BW) | AEP1         | 11 | 18  | 0.7418

Table 35: AEP performances in CLU for all 4 MUCAPS modalities for 3G access

3G          | Selected AEP | RC | BWS | CLU
OFF         | AEP3         | 60 | 20  | 0.47
ON+RC       | AEP2         | 7  | 8   | 0.76
ON+BW       | AEP3         | 60 | 20  | 0.47
ON+(RC, BW) | AEP2         | 7  | 8   | 0.76

5.2.4.3 Results of the QoE performance evaluation by VITALU

This section provides QoE analysis results on video streaming sessions performed with
AEPs selected either with G1 (MUCAPS OFF) or G4 (MUCAPS ON + (RC, BWS)).
For both groups, for each access type, in each measurement session, six video items
were requested, each characterized by its resolution and offered bitrate (Table 36). HR,
MR and LR stand for High, Medium and Low image Resolution, respectively. The
downloaded volume for the six test videos is approximately 110 Megabytes.
Table 36: Name, resolution and bitrate of the six videos used for the prototype validation

Video Name        | Resolution | Media Bitrate
Gone Girl LR      | 1152x480   | 638 kb/s
Gone Girl MR      | 1728x720   | 1316 kb/s
Gone Girl HR      | 2592x1080  | 3047 kb/s
Big Buck Bunny HR | 1920x1080  | 2370 kb/s
Big Buck Bunny MR | 1280x720   | 1224 kb/s
Big Buck Bunny LR | 854x480    | 662 kb/s

The QoE analysis results for all three types of access are provided in Table 37. The values
are averaged over the several streaming sessions performed for each video item. For
each access type, the initial (MUCAPS OFF) choice is AEP3, while the MUCAPS-assisted
selection is AEP1 for LAN/FTTX and WiFi/ADSL and AEP2 for 3G.
- VQS max, also called the theoretical VQS, is the VQS proposed by the video encoder.
  It is the best possible VQS value, assuming the encoder provides the best possible
  MBR, so it does not make sense to try to reach a higher VQS. The VQS takes values
  in the range [1, 5].


- Start Time (s) is the time in seconds elapsed between the user clicking play and the
  start of the video playout.
- RTT (ms) is provided here to illustrate how VITALU performs, but is only conclusive in
  the 3G case, since the AEP1 and AEP2 class servers are located in the ALBLF
  premises. Indeed:
  o when the LAN/FTTX connection is used, all application paths go through a proxy
    and the RTT applies to the path between the end-user and the proxy;
  o when WiFi is used, the RTT illustrates the difference between server AEP3,
    located outside the ALBLF premises and ISP network, and AEP1 and AEP2.
- Freezes: their number, average and maximal duration are actually the metrics with the
  most impact on QoE, as the observed freeze duration can last up to 30 seconds.
Table 37: QoE analysis results when streaming the test videos, for all access types
(-: no value reported)

Server | Resolution | Mode  | RTT (ms) | Start Time (s) | VQS     | VQS (max) | Number of Freezes | Freeze avg (s) | Freeze max (s)
AEP1   | 1080       | LAN   | 0.24     | 0.35           | 4.51057 | 4.64      | -                 | -              | -
AEP1   | 480        | LAN   | 0.35     | 0.18           | 3.83629 | 3.75      | -                 | -              | -
AEP1   | 720        | LAN   | 0.28     | 0.26           | 4.2258  | 4.22      | -                 | -              | -
AEP2   | 1080       | LAN   | 0.28     | 13.33499       | 1.73389 | 4.87      | -                 | 14.70          | 20.25
AEP2   | 480        | LAN   | 0.30     | 3.69           | 3.19114 | 3.75      | -                 | -              | -
AEP2   | 720        | LAN   | 0.25     | 6.92           | 3.29814 | 4.22      | -                 | -              | -
AEP3   | 1080       | LAN   | 0.21     | 0.53           | 4.4028  | 4.64      | -                 | -              | -
AEP3   | 480        | LAN   | 0.26     | 0.26           | 3.78476 | 3.76      | -                 | -              | -
AEP3   | 720        | LAN   | 0.27     | 0.34           | 3.82556 | 4.75      | -                 | -              | -
AEP1   | 1080       | Wi-Fi | 1.96     | 2.12           | 3.96123 | 4.64      | -                 | -              | -
AEP1   | 480        | Wi-Fi | 1.98     | 0.65           | 3.61145 | 3.76      | -                 | -              | -
AEP1   | 720        | Wi-Fi | 1.93     | 1.13           | 3.87519 | 4.22      | -                 | -              | -
AEP2   | 1080       | Wi-Fi | 1.34     | 13.44          | 1.68577 | 4.88      | -                 | 15.37          | 20.48
AEP2   | 480        | Wi-Fi | 1.25     | 3.72           | 3.18884 | 3.75      | -                 | -              | -
AEP2   | 720        | Wi-Fi | 1.23     | 6.85           | 3.30156 | 4.22      | -                 | -              | -
AEP3   | 1080       | Wi-Fi | 17.66    | 2.08           | 3.96879 | 4.64      | -                 | -              | -
AEP3   | 480        | Wi-Fi | 17.84    | 0.64           | 3.61467 | 3.76      | -                 | -              | -
AEP3   | 720        | Wi-Fi | 23.10    | 1.11           | 3.88009 | 4.22      | -                 | -              | -
AEP1   | 1080       | 3G    | 1139     | 16.84          | 1.34103 | 4.95      | -                 | 24.69          | 28.25
AEP1   | 480        | 3G    | 1069     | 4.63           | 2.872   | 3.86      | -                 | -              | -
AEP1   | 720        | 3G    | 1139     | 8.99           | 2.55859 | 4.64      | -                 | 8.71           | 8.71
AEP2   | 1080       | 3G    | 1069     | 15.74          | 1.42838 | 4.95      | -                 | 23.38          | 30.14
AEP2   | 480        | 3G    | 1039     | 3.27           | 3.10624 | 3.86      | -                 | -              | -
AEP2   | 720        | 3G    | 1159     | 6.66           | 3.19339 | 4.65      | -                 | -              | -
AEP3   | 1080       | 3G    | 759      | 5.90           | 3.3512  | 4.95773   | -                 | -              | -
AEP3   | 480        | 3G    | 819      | 1.01           | 3.20611 | 3.86      | -                 | -              | -
AEP3   | 720        | 3G    | 989      | 2.13           | 3.5393  | 4.65      | -                 | -              | -


Table 38, Table 39, and Table 40 below provide the performance gains or losses on the
measured metrics, for each type of access.
Perf Gain/f gives, for each QoE metric, the average performance gain per flow for the
target resolution. The symbol = denotes equal performance.
GPerf Gain/f gives, for each metric, the average performance gain per flow, averaged in
turn over all the sustainable resolutions. The symbol = denotes equal performance.
Results for LAN:
We consider the case of a UEP streaming an HR video. The target resolution is HR; the
sustainable resolution levels are HR, MR and LR.
The initial choice without MUCAPS is AEP3. With MUCAPS/G4, the selected server is
AEP1. In both cases VQS/VQS max is the same, equal to 0.786: the move from AEP3 to
AEP1 preserves the VQS/VQS max ratio. The theoretical gain on (RC, BWS) is equal to
(49, -2), with a CLU performance gain of 37.1%.
The average gain in hops is 10, with values in [8, 12].
Table 38: Impact of MUCAPS on video streaming QoE for LAN/FTTX UEP access

LAN/FTTX       | MUCAPS OFF: AEP3  | MUCAPS G4: AEP1 | Perf Gain/f        | GPerf Gain/f
VQS/VQS max    | 3.89/4.95 = 0.786 | 3.89/4.95       | -0.09%             |
Start time (s) | 0.581             | 0.391           | +32.7% = -0.19 s   | +35.4% = -0.154 s
RTT (ms)       | 0.35              | 0.29            | +17.14% = -0.06 ms | +16.85% = -0.06 ms
NFrz           |                   |                 |                    |
NFrz avg (s)   |                   |                 |                    |
NFrz max (s)   |                   |                 |                    |
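The percentage gains appear to be computed relative to the MUCAPS OFF baseline; for example, for the LAN start time and RTT:

\[
\frac{0.581 - 0.391}{0.581} \approx +32.7\,\% \;\, (\text{i.e., } -0.19\ \mathrm{s\ per\ flow}),
\qquad
\frac{0.35 - 0.29}{0.35} \approx +17.1\,\% \;\, (\text{i.e., } -0.06\ \mathrm{ms})
\]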

Results for WiFi:
We consider the case of a UEP streaming an MR video. The target resolution is MR; the
sustainable resolution levels are MR and LR. Experiments were deliberately conducted on
a busy and heavily loaded WiFi access point.
The initial choice without MUCAPS is AEP3. With MUCAPS/G4, the selected server is
AEP1. There is a slight gain on VQS and start time, and a significant gain in RTT, despite
the bias of high locality of AEP1 and AEP2. This is probably related to the high RC of
AEP3, which actually relates to a higher number of hops and thus latency; this is all the
more perceptible as the path bandwidth for WiFi/ADSL is much lower than for LAN/FTTX.
In both cases the VQS/VQS max ratio is essentially the same, equal to 0.919 and 0.918.
The theoretical gain on (RC, BWS) is equal to (49, -2), with a CLU performance gain equal
to 57.8% with the WiFi metric weights; the move from AEP3 to AEP1 thus preserves the
VQS/VQS max ratio.
As for LAN, the average gain in hops is 10, with values in [8, 12].


Table 39: Impact of MUCAPS on video streaming QoE for WiFi/ADSL UEP access

WiFi/ADSL      | MUCAPS OFF: AEP3   | MUCAPS G4: AEP1    | Perf Gain/f         | GPerf Gain/f
VQS/VQS max    | 3.91/4.22 = 0.9265 | 3.91/4.22 = 0.9265 | =                   | =
Start time (s) | 1.11               | 1.13               | -1.8% = +0.05 s     | -0.892% = -0.03 s
RTT (ms)       | 23.10              | 1.93               | +91.65% = -21.17 ms | +90.27% = -18.52 ms
NFrz           |                    |                    |                     |
NFrz avg (s)   |                    |                    |                     |
NFrz max (s)   |                    |                    |                     |

Results for 3G:
We consider the case of a UEP streaming an LR video. The target resolution is MR; the
sustainable resolution level is LR. In this scenario, only AEP3 is directly reachable on the
Internet, whereas AEP1 and AEP2 are behind a firewall; thus the path performances of
AEP1 and AEP3 remain comparable.
The initial choice without MUCAPS is AEP3. With MUCAPS/G4, the selected server is
AEP2. There is a slight gain on VQS and start time. In both cases VQS/VQS max is the
same, equal to 0.829. The theoretical gain on (RC, BWS) is equal to (53, -12), with a CLU
performance gain equal to 61.7% with the 3G metric weights. The move from AEP3 to
AEP2 yields an equivalent VQS/VQS max ratio, equal to 0.805.
The average gain in number of hops is 3, with values in [2, 4].
Table 40: Impact of MUCAPS on QoE for video streaming for 3G UEP access

3G             | MUCAPS OFF: AEP3  | MUCAPS G4: AEP2   | Perf Gain/f     | GPerf Gain/f
VQS/VQS max    | 3.20/3.86 = 0.829 | 3.20/3.86 = 0.829 | =               | =
Start time (s) | 1.01              | 3.27              | -223% = +2.26 s |
RTT (ms)       | 819               | 1039              | -26.86%         |
NFrz           |                   |                   |                 |
NFrz avg (s)   |                   |                   |                 |
NFrz max (s)   |                   |                   |                 |

Conclusion: This section provided experiment results and an assessment for MUCAPS,
which has been tested in prototype validation mode. The goals of these experiments were
to:
- Verify the ability of MUCAPS to revise the initial AEP selection whenever it is
  appropriate, that is, when the content location selection without MUCAPS does not
  provide the optimal location;
- Evaluate the impact of the MUCAPS selection on the QoE of a video streaming
  session, that is, compare QoE results without and with the MUCAPS involvement in
  the AEP selection.


Quantitatively, we observe that:
- The average gain in number of hops is 7.6, with values in [2, 12].
- In all the access cases the MUCAPS intervention maintains the user VQS level, which
  fulfills the core requirement for the ALTO protocol to be attractive to applications. The
  highest observed VQS decrease is for the 3G access, where the average observed
  VQS decrease is equal to 2.9%. The average performance increase for LAN equals
  9.57%, whereas the VQS remains almost unchanged for WiFi access.
- Freezes occur when the video resolution is too high w.r.t. the application path
  bandwidth: the duration ranges between 8 and 30 seconds and the mean number of
  occurrences is 4. This is highly perceptible and disturbing for human users.
  o In LAN/FTTX and WiFi/ADSL, freezes occur when High Resolution videos are
    requested from AEP2, which offers a path bandwidth suited at most for Medium
    Resolution on a WiFi/ADSL connection.
  o In 3G, freezes appear when the resolution is medium or high, but never when it
    is low.

Qualitatively, the prototype results highlight:
- The need to use AEP selection metrics that are suited to the needs of an application:
  video requires path bandwidth, and routing cost should not be the only decision metric;
- The impact of the UEP connection type on the influence of the metric: experiments on
  3G connections show that application traffic optimization should not systematically
  seek the maximum-bandwidth path, as this is useless and sometimes
  counterproductive.

The next step of this evaluation, rather than adding more AEPs to the candidate list, would
be to evaluate the MUCAPS impact in a real-life situation, where it is hooked to a set of
ISP DNS resolvers. This would imply a realistic ISP-scale study of DNS responses in CDN
applications. In particular, one would need to measure the proportion of optimal AEP
selections in DNS responses. The same holds for DHT responses, when the MACAO
selection is requested by clients hooked to DHTs in end systems.
For the RTT, it is to be noted, besides its non-conclusive values in the prototype validation
set-up, that it has little influence on the VQS in Video-on-Demand applications. Its
influence is important in video-conferencing, especially when it reaches values between
150 and 300 ms, in which case the selection of metrics and their weights would be
different. The important factors that really bother the end-user are the start-time delay and
the freezes.


6 Assessment of project outcomes


6.1 Assessment of stakeholders' benefits and costs
In this section, we perform a cost-benefit analysis for the three mechanisms (including
their variations) that have been designed in WP2, implemented in WP3 and evaluated in
WP4. The cost-benefit analysis does not address SWOT, as this is the subject of another
section of this deliverable (see Section 7.3).
In order to facilitate the analysis, we developed a template that breaks the analysis into
parts, i.e.:
- Market and risk analysis, where we identify involved stakeholders and their interests,
  target markets and competition.
- Value proposition, where we describe the product/service offered due to the
  introduction of the SmartenIT mechanism, address customer value and customer
  ownership, define applicable value chains and revenue chains, and identify applicable
  business models (according to the analysis performed in D2.4 [7]).
- Considered scenario, where we describe the assumptions regarding an assessment
  scenario considered for the analysis, e.g., number of users, expected adoption,
  frequency of use, potential increase of demand due to the introduction of the
  SmartenIT mechanism, (abstracted) topology, and any other mechanism-specific
  parameters.
Then, we focus on the main stakeholder that employs the SmartenIT mechanism, and thus
is the one bearing the deployment costs as well as enjoying certain benefits due to its
operation.
For each stakeholder considered, we further break the analysis into the following parts;
costs and benefits are measured on the same timescale (e.g., week, month), depending
on the TM mechanism considered, so as to be comparable and utilized in calculations:
- Cost analysis, including CAPEX, OPEX and estimated cost modification. CAPEX
  comprises equipment cost, deployment of service, i.e., implementation and installation,
  and any other setup overhead. OPEX comprises maintenance cost, i.e., energy cost,
  cost of monitoring, troubleshooting and upgrade, and any other cost associated with
  the mechanism operation. Costs are estimated based on our experience with
  developing and running the considered traffic management mechanisms and/or taking
  into account their complexity. Cost modification refers to the impact of the introduction
  of the mechanism on all aforementioned cost categories and ultimately on the total
  cost.
- Benefit analysis, including technical benefits and performance/experience
  improvement, and business benefits and value generation. Technical benefits and
  performance improvements comprise improvements in data rate, latency, throughput
  and QoE due to the operation of the SmartenIT mechanism, which could potentially be
  translated into revenues later with appropriate pricing. Business benefits and value
  generation comprise positive externalities, competitive advantage due to the
  introduction of the SmartenIT mechanism, potential for vertical integration with other
  business activities and bundling opportunities with other offered services, as well as
  additional revenues for both current demand and for demand to be generated due to
  the new mechanism (if any).
- Break-even analysis, where we compare costs and benefits defined in previous parts
  of the analysis to derive the net profits for the stakeholder. This last part is provided
  only for DTM/DTM++ and RB-HORST/RBH++, as the derived cost modifications
  allowed us to demonstrate quantifiable gains.

6.1.1 Cost-benefit analysis of DTM/DTM++

6.1.1.1 Market and risk analysis

Stakeholders | ISP: transfers data between the ultimate (sender/receiver) DCs. DCO: operates the resources on which applications of CSP(s)/cloud systems run. CSP: manages applications deployed on the DCO's resources/cloud platform.
Major players | GÉANT (managed by DANTE), the pan-European academic network; NRENs (National Research and Education Networks) like SmartenIT partner PSNC; telecom operators and ISPs like SmartenIT partner Deutsche Telekom.
Market of interest | Internet access.
Target customers | DTM: ISPs that want to optimize the cost of inter-domain traffic. DTM++: apart from the ISP, the CSP is a target customer, as it can benefit from cooperation with the ISP.
Competing approaches | None identified.

6.1.1.2 Value proposition

Product/service delivered | Optimized (i.e., lower cost) inter-domain data transfer between multi-homed network domains.
Customer value | DTM: ISPs decrease the cost of inter-domain traffic. DTM++: the CSP may have a lower cost of Internet access, because it helps the ISP to optimize network traffic.
Customer ownership | DTM: ISPs deploy DTM software instances which communicate with each other. DTM++: the CSP must indicate delay-tolerant traffic, which can be done by specialized software.
Value chain/network organization | DTM: ISPs must communicate with each other and share monitoring information to correctly distribute traffic among multi-domain links. This approach ensures cost optimization for one or both ISPs (those which manage both end points of the data path). DTM++: with DTM++ the ISP may better optimize the network traffic due to the traffic categorization made by the CSP. Lower costs for the ISP may result in lower costs for the CSP.
Revenue model | DTM: the ISP saves money as a result of optimized multi-domain traffic management (different cost functions on inter-domain links). DTM++: the ISP may decide to establish a lower price for the CSP if DTM++ lowers the cost of inter-domain traffic due to the cooperation of the CSP.
Business model (in accordance with the business modeling analysis in [7]) | Federation (use of DTM/DTM++ when resources of federated clouds are used to run a service). Open Market and Virtual Private Cloud (cloud and connectivity services are bought and integrated according to the customer needs).

6.1.1.3 Considered scenario

Number of potential users | As an example, the NRENs (European National Research and Education Networks), the GÉANT partners, could be analyzed as an inter-domain environment for wide-scale deployment of DTM/DTM++.
Expected adoption by market | 43 European NRENs (GÉANT with the NRENs connects over 50 million users at 10,000 institutions across Europe).
Frequency of use | Heavy use, due to many international scientific projects that transfer huge amounts of data, as well as everyday information exchange among scientists and engineers located in different countries.
Increase of demand | There is the need to exchange network monitoring data of inter-domain links between NRENs (network domains).
Topology | GÉANT interconnects Europe's National Research and Education Networks (NRENs), with 100 Gbps capacity available across the core network and a network design that will support up to 8 Tbps in the future.

Figure 50: GÉANT network topology in Europe

Mechanism-specific parameters | Cost functions, type of tariff, billing period length, report period, compensation period, and SDN controller mode.

6.1.1.4 Cost analysis

CAPEX
Equipment | 1 (virtual) server for the S-Box instance and 1 (virtual) server for the SDN controller. In practice the equipment cost is marginal: usually ISPs, and CSPs using DCO resources, have a set of common servers, so additional virtual machines can be allocated to run DTM/DTM++ without new investments.
Deployment of service:
  Implementation | 4 person-months (PMs) for advanced tests in a production environment and improvements (e.g., security).
  Installation | 0.5 PM to install an S-Box instance and SDN controller.
  Other setup overhead | 0.5 PM to prepare the network configuration (tunnels).

OPEX
Maintenance:
  Monitoring | 0.5 PM annually for monitoring the active S-Box instance and SDN controller.
  Troubleshooting | 0.5 PM annually for troubleshooting efforts (hardware rebooting, network issues, etc.).
  Upgrade | 0.5 PM annually for software and network configuration updates.
  Energy | The energy consumed by the (virtual) servers where the S-Box and SDN instances are deployed is low, so it can be neglected in the analysis.
  Other running overhead | Not foreseen.

Estimated cost modification
Transit cost | The experiments on DTM showed that the mechanism usually yields 8%-15% lower costs of inter-domain traffic. Therefore, savings in transit cost for inter-domain traffic are also expected (see assessment below).
Energy cost | None
Deployment cost | None
Management cost | None
Other cost | None

6.1.1.5 Benefit analysis

Technical benefits and performance/experience improvement
Data rate | None
Latency | None
Throughput | None
QoE | None

Business benefits and value generation
Positive externalities | Reduced OPEX for managing inter-DC traffic may allow for further development and support of other (potentially more critical) services.
Competitive advantage | Lower transit costs for inter-domain traffic due to DTM; thus lower OPEX.
Vertical integration | Not foreseen.
Bundling opportunities | In the case of DTM++, a standardized interface for information exchange between ISP and CSP would simplify the categorization of network traffic.
Revenues for existing demand | Revenues could be achieved with DTM++ under an appropriate pricing scheme for offering inter-data-center communication with QoS to CSPs.
Revenues by additional demand (if any) | Not foreseen.

6.1.1.6 Costs vs. benefits

We perform our break-even analysis for a medium-sized ISP on an 8-year basis, aiming
to calculate the break-even point and overall gains without extending too far in time, as
new technological paradigms may arise that can change the picture of the SmartenIT
ecosystem severely and thus render the considered mechanism obsolete.
In order to quantify deployment, installation and maintenance costs for a medium-sized
ISP, we assume that the European average personnel cost is €30.00/hour, which
translates to €4620.00/PM. The assumed personnel cost is in accordance with DFG's
personnel rates for 2015 [47]. The total cost, including CAPEX and OPEX, on an 8-year
basis is depicted in Figure 51.

Figure 51: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for DTM/DTM++.

Next, we consider the achieved traffic savings for a medium-sized ISP with 6 inter-domain
links of 10 Gbps capacity each. We assume that inter-datacenter traffic is 40% of the
total IP traffic that crosses the links of the considered ISP, while all of this traffic (i.e.,
100%) is assumed to be manageable by the ISP. As DC/cloud traffic is expected to
increase at an annual rate of 30% within the next years until 2019 [24], we also assume
that ISPs will perform appropriate upgrades. Nevertheless, the cost of these upgrades is
driven by the explosion of IP traffic and not by the deployment of DTM/DTM++; it is thus
excluded from this analysis. According to the evaluations of DTM/DTM++, the traffic
reduction achieved on inter-domain links reaches 8-15%. Based on the outcome of the
evaluations, we calculate the traffic savings in the aforementioned setup. The calculated
traffic savings and the DC/cloud traffic evolution on an 8-year basis are depicted in
Figure 52.


Figure 52: DC traffic evolution and estimated traffic savings due to DTM/DTM++ on an 8-year basis.

According to [39], the price of transit in 2015 is $0.63/Mbps, which corresponds to
€0.56/Mbps. This price has been dropping continuously over the last years at an annual
rate of 33%, and it is forecasted to keep dropping at a similar rate. The forecasted values
of the price of transit for the next 8 years are depicted in Figure 53.

Figure 53: Forecasted values of the price of transit for the next 8 years [39].

Taking into account the derived CAPEX, OPEX, traffic savings and cost savings
(considered as benefits), we deduce a graph presenting total costs vs. total benefits due to
the introduction and operation of the DTM/DTM++ mechanism in the network of an ISP.
Figure 54 depicts the derived results. Unfortunately, due to the dramatic drop of the price


of transit, the savings for the ISP diminish over the years. Still, when 15% traffic savings
are achieved, significant cost savings accrue annually for the ISP.

Figure 54: Total costs vs. total benefits for DTM/DTM++ (benefits are calculated both for 8% and 15% traffic savings).

Figure 55: Overall economic gains assessment of DTM/DTM++.


In Figure 55, the blue and red lines present the accumulated costs and benefits over the
8-year period. We observe that the point where benefits become equal to costs and start
to continuously exceed them comes quite early, in the 2nd year of operation. Moreover,
the green and purple lines depict the annual gain, i.e., the difference between costs and
benefits, and the accumulated gains over the years, respectively. The break-even point for
15% traffic savings is reached already in the 2nd year of operation. Thus, as can be
observed, DTM/DTM++ can achieve significant cost reduction for a rather low investment
and OPEX, and is therefore considered a highly cost-efficient solution.
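For reproducibility, the following sketch re-traces the break-even computation under the stated assumptions. The assumption that savings apply to the full link capacity is made here for illustration (the deliverable does not state the link utilization used), so the absolute figures are indicative only; with this assumption the 15% case breaks even in the 2nd year, consistent with Figure 55.

# Sketch of the DTM/DTM++ break-even computation. The full-capacity traffic
# assumption is illustrative; all other figures (6 x 10 Gbps links, 40%
# inter-DC share, 30%/yr traffic growth, 8-15% savings, transit at
# EUR 0.56/Mbps falling 33%/yr, EUR 4620/PM) are taken from the text.

PM = 4620.0                       # EUR per person-month
capex = (4 + 0.5 + 0.5) * PM      # implementation + installation + setup
opex_per_year = 1.5 * PM          # monitoring + troubleshooting + upgrades

links_mbps = 6 * 10_000           # total inter-domain capacity [Mbps]
inter_dc_share = 0.4
saving_rate = 0.15                # upper bound of the 8-15% range

cum_cost, cum_benefit = capex, 0.0
transit_price = 0.56              # EUR per Mbps per month (2015)
traffic = links_mbps * inter_dc_share   # assumed: savings on full capacity

for year in range(1, 9):
    saved_mbps = traffic * saving_rate
    cum_benefit += saved_mbps * transit_price * 12
    cum_cost += opex_per_year
    print(f"year {year}: cost {cum_cost:9.0f} EUR, benefit {cum_benefit:9.0f} EUR")
    traffic *= 1.30               # DC/cloud traffic growth
    transit_price *= 0.67         # transit price decline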
6.1.2 Cost-benefit analysis of RB-HORST

6.1.2.1 Market and risk analysis

Stakeholders | Application/content provider: provides content. CSP/CDN: provides cache resources for content delivery. ISP: provides Internet connection. End users: consume videos, share home routers/uNaDas for caching and WiFi access. Home gateway manufacturers: create WiFi Access Points (APs) for Internet access from homes and small offices.
Major players | Google, YouTube, Vimeo, Netflix, HULU, Akamai, Telekom, Vodafone, O2, AT&T, etc.
Market of interest | Video streaming, video on demand, Internet access, mobile access.
Target customers | Home users who buy an RB-HORST router to get WiFi access in different cities by sharing their RB-HORST router. ISPs that want to deploy WiFi sharing to increase WiFi coverage, and caching to support content delivery close to the end-user.
Competing approaches | FON, BT-WiFi, Kabel Deutschland Hotspot, Microsoft Windows WiFi Sense.

6.1.2.2 Value proposition

Product/service delivered | Enhanced video performance and wireless connectivity by means of an RB-HORST-enabled home router / uNaDa.
Customer value | ISPs get higher cache capacities within their network, close to the end-users. ISPs get higher coverage of WiFi networks and reduced load on cellular networks. ISPs benefit from reduced load on the core network and reduced inter-domain traffic. CSPs benefit from reduced load on cache resources. End-users get free WiFi access at trusted hotspot locations. End-users benefit from high data rates due to content fetched from closer sources (caches) and from WiFi roaming.
Customer ownership | Direct or intermediated: the end-user can buy the product directly, or the ISP can buy the product and deploy it to the end-users.
Value chain/network organization | End-users get WiFi access through other end-users who share their RB-HORST router. The ISP gets a large-scale WiFi infrastructure from gateway manufacturers. ISPs/CSPs get cache resources at end-user premises. The ISP saves inter-domain traffic due to local content delivery. The end-user gets better QoE for video streaming.
Revenue model | The CSP may deploy fewer CDN resources.
  Case 1: content delivery is performed by a video platform owned by a CSP/CDN, e.g., YouTube:
  - The CSP/CDN is paid by a content provider to replicate the latter's content in multiple servers close to the end-users.
  - The CSP/CDN can buy IaaS from a DCO, e.g., buy storage capacity in the latter's data centers, while it can also pay an ISP to employ the ISP's cache/proxy server to diminish the distance to its end-users.
  - Costs may also be incurred for the CSP/CDN due to the provision of a local cache/proxy server by the ISP, and the social information potentially made available by the OSN provider.
  Case 2: peer-assisted content delivery:
  - The end-user must be provided some monetary or non-monetary incentive for sharing his bandwidth, storage, computational power and WiFi access.
  - Extra cost may be incurred for the CSP/CDN, as he may be obliged to provide incentives to the end-users for providing their resources for the delivery of non-UG content.
  - Although the transit costs will be mitigated both for the CSP/CDN and the edge ISP by the peer-assisted content delivery, OPEX might increase for the ISP due to the extra traffic in his network.
Business model (in accordance with the business modeling analysis in [7]) | Terminal + Service.

6.1.2.3 Considered scenario

Number of potential users | Up to 10^7, depending on the size of the ISP.
Expected adoption by market | ISPs with a large number of end users.
Frequency of use | Heavy use of the caching service: every video request will be served from RB-HORST nodes if available. Light use of the offloading service: offload to WiFi if a trusted AP is available.
Increase of demand | No extra demand expected.
Topology | The RB-HORST topology is formed by caches organized in three tiers. It consists of caches at (shared) home routers that have RB-HORST installed, edge caches of ISPs or CSPs, and the content provider. The cache resources form an overlay, which aims to deliver content locally. Shared RB-HORST-enabled routers further provide WiFi access to trusted users.

Figure 56: Tiered caching scenario.

Mechanism-specific parameters | The size of the cache in uNaDas may significantly affect the effectiveness of the mechanism/service; upload rate of uNaDas; sharing rate; social prediction accuracy; number of uNaDas deployed.

6.1.2.4 Cost analysis

CAPEX
Equipment | 1 item per household.
Deployment of service:
  Implementation | 10 PMs for re-shaping and commercialization of the RB-HORST prototype.
  Installation | 10 minutes for shipping; installation by the user.
  Other setup overhead | None.

OPEX
Maintenance:
  Monitoring | 0.5 PM per year for monitoring active RB-HORST routers (based on assessment during the RB-HORST study).
  Troubleshooting | 2 PMs per year for fixing bugs in / improving the firmware of RB-HORST routers (based on assessment during the RB-HORST study).
  Upgrade | 0.5 PM per year for upgrading firmware and adapting to changes of video streaming platform APIs.
  Energy | Energy is saved at the ISP cache, the cellular network, and the end-user device such as the smartphone.
  Other running overhead | Advertisements, incentives for users. The cost of content caching (tier-2 caching) is marginal if the ISP has already deployed his own data center. On the other hand, if he buys storage from a nearby CSP with whom the ISP has established a peering agreement, the cost of caching would be $1000/month per rack. If a rack has 1000 TB of storage, 1 TB of storage costs $1/month.

Estimated cost modification
Transit cost | Dependent on cache size, sharing probability and number of users in the AS. Up to 75% savings of total transit traffic if the sharing probability is 1% in an AS with 10^6 end-users; in smaller ASes with 10^5 end-users, up to 50% [20].
Energy cost | Energy is saved at the ISP cache and in the cellular network. No cost modification at end-users: routers in households are up and running anyhow. 40%-60% of total energy saved (i.e., 100 Joule saved per Gbit) [19].
Deployment cost | No cost modification.
Management cost | Additional cost for monitoring, troubleshooting and upgrades of the RB-HORST firmware, app and overlay.
Other cost | Savings in cache resources and their cost: about 30% of cache storage can be saved. If the video catalogue has 10^7 objects consuming 1 GB each (considering replication and quality layers), which equals 10^4 TB, 30% accounts for 3000 TB and $3000/month.


6.1.2.5 Benefit analysis

Technical benefits and performance/experience improvement
Data rate | Higher data rate if content is delivered from the local access point (uNaDa) in case of a cache hit: for DSL, 10 Mbps are received on average, as opposed to local WiFi, where 20 Mbps are received on average. Higher data rates of offloaded connections due to high WiFi coverage: for HSDPA, 3 Mbps are received on average, as opposed to local WiFi, where 20 Mbps are received on average.
Latency | Delay reduced from ~4.0 ms (CDN) to ~0.4 ms (LAN).
Throughput | 10 Mbps higher throughput (see data rate).
QoE | If the local cache is hit, the MOS improves by 1 on average, according to the reduced latency and increased throughput in the LAN.

Business benefits and value generation
Positive externalities | The more people deploy uNaDas, the higher the cache resources and the wider the WiFi coverage area.
Competitive advantage | WiFi coverage is higher than that of competing ISPs. QoE of video streaming is higher than that of competing ISPs.
Vertical integration | The service is integrated in RB-HORST-enabled routers and provided with a new contract. The app is provided via the Google Play Store.
Bundling opportunities | RB-HORST can be bundled with new hardware or a new contract.
Revenues for existing demand | Not foreseen.
Revenues by additional demand (if any) | Not foreseen.

6.1.2.6 Costs vs. benefits

We perform our break-even analysis for a medium-sized ISP on an 8-year basis, aiming
to calculate the break-even point and overall gains without extending too far in time, as
new technological paradigms may arise that would change the picture of the SmartenIT
ecosystem severely and thus make the considered mechanism obsolete. Before analyzing
costs and benefits, we make a rather conservative assumption regarding the adoption of
the new service and equipment by the end-users. In particular, we assume that in the case
of a medium-sized ISP with 100,000 households, adoption will be small in the beginning,
i.e., 10%, and will increase over the next 4 years to reach a (relatively low) upper bound of
40% (see Figure 57). In practice, similar solutions deployed by ISPs, such as FON, are
adopted more quickly and by a larger market segment, as enhanced home routers are
provided to end-users by the ISP itself with a new contract or any subsequent renewal.

Figure 57: Expected adoption rate of RB-HORST.

In order to quantify deployment, installation and maintenance costs for a medium-sized
ISP, we assume again, as in Section 6.1.1.6, that the European average personnel cost is
€30.00/hour, which translates to €4620.00/PM. We remind the reader that the assumed
personnel cost is in accordance with DFG's personnel rates for 2015 [47].
Moreover, we consider the case of enhanced home routers offered to end users by the
ISP. The cost of the upgrade of existing home routers is assumed to be €5.00/unit. Since
this analysis aims to assess the introduction of uNaDas in the network, we assume that
the tier-2 caching by the ISP pre-exists, and thus we do not consider costs regarding
cache deployment and installation. If caching is performed in a data center owned by the
ISP, the cost of caching would anyhow be marginal for him. Otherwise, storage would be
bought from a CSP with whom the ISP has an (open) peering agreement over an IXP. In
such a case, we ignore this cost, as it is not related to the deployment of the RB-HORST
solution and the uNaDas. The total cost, including CAPEX and OPEX, on an 8-year basis
is depicted in Figure 58.
Next, we consider the achieved traffic savings for a medium-sized ISP with 6 inter-domain
links of 10 Gbps capacity each. We assume that video traffic is 67% of the total IP traffic
that crosses the links of the ISP in 2015, and it is expected to reach 88% in 2019 [24].
Performing a conservative projection for 3 more years, we assume that video traffic will
reach 90% of total IP traffic by 2022. According to the evaluations of RB-HORST, the
traffic reduction achieved on inter-domain links depends on the cache size, the sharing
probability and the number of users in the AS. It has been measured that if the sharing
probability is 1% in an AS with 10^6 end-users, up to 75% traffic savings are achievable,
while in smaller ASes with 10^5 end-users, traffic savings reach up to 50% (cf. [7], [8],
[20]). Based on the outcome of the evaluations, we calculate the traffic savings in the
aforementioned setup. The calculated traffic savings and the video traffic evolution on an
8-year basis are depicted in Figure 59.


Figure 58: CAPEX and OPEX for a medium-sized ISP on an 8-year basis for RB-HORST.

Figure 59: Video traffic evolution and estimated traffic savings due to RB-HORST on an 8-year basis.

To calculate the traffic cost savings, we again consider the price of transit, which, as
discussed in Section 6.1.1.6, is $0.63/Mbps in 2015, corresponding to €0.56/Mbps, and is
forecasted to continue dropping over the next years at an annual rate of 33%. Moreover,
we consider savings in cache resources for the ISP. According to the evaluations of
RB-HORST, about 30% of cache storage can be saved. Therefore, if the video catalogue
has 10^7 objects consuming 1 GB each, considering also replication and quality layers,
which equals 10^4 TB, 30% cache savings account for 3000 TB and €3000/month.


Furthermore, due to caching at the end-user premises, 40%-60% of the total energy is
saved at the ISP cache and in the cellular network, i.e., 100 Joule saved per Gbit [19].
One may claim that the energy cost is transferred to the end-user. Indeed, the power
consumption of the uNaDa increases by a maximum of 150 mW compared to idle mode
when running RB-HORST. Considering a similar increase in power consumption in the
case of a deployment on home routers (being conceptually similar in architecture to the
uNaDas), the additional energy cost at the end-user premises may be neglected. In
practice, RB-HORST uses resources which otherwise remain unused while still consuming
energy. Comparing this to the QoE benefits made possible by RB-HORST justifies the
marginal increase of power consumption.
Returning to the cost analysis and taking into account the derived CAPEX, OPEX, traffic
savings and respective cost reduction (considered as benefit), we deduce a graph
presenting total costs vs. total benefits due to the introduction and operation of the
RB-HORST mechanism in the network of an ISP. Figure 60 depicts the derived results.
Unfortunately, due to the dramatic drop of the price of transit, the savings for the ISP
diminish over the years. We observe that benefits always exceed costs from the first year
of operation for 75% traffic savings due to RB-HORST, while benefits exceed costs only
after the 3rd year for 50% traffic savings.

Figure 60: Total costs vs. total benefits (benefits are calculated both for 50% and 75% traffic savings).

In Figure 61, the blue and red lines present the accumulated costs and benefits over the
8-year period. Moreover, the green and purple lines depict the annual gain, i.e., the
difference between costs and benefits, and the accumulated gains over the years,
respectively. As can be observed, the break-even point for 75% traffic savings is reached
already in the 1st year of operation. RB-HORST can achieve highly significant cost
reduction for a medium-sized investment, mainly due to the equipment upgrade, which is
considered to be borne by the ISP in this analysis, and relatively low OPEX; it is thus
considered a highly cost-efficient solution.


Figure 61: Overall economic gains assessment for RB-HORST.
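A similar sketch for RB-HORST combines the adoption ramp, the router upgrade cost, the transit-price decline and the cache savings. The 50% average link utilization and the interpolated video-traffic shares are illustrative assumptions made here, not figures stated in the text, so the absolute values are indicative only.

# Sketch of the RB-HORST break-even computation. Link utilization and the
# year-by-year video shares (interpolated 67% -> 90%) are illustrative
# assumptions; adoption (10% -> 40%), EUR 5/unit router upgrade, transit at
# EUR 0.56/Mbps falling 33%/yr and EUR 3000/month cache savings are from the text.

PM = 4620.0
HOUSEHOLDS = 100_000
adoption = [0.10, 0.20, 0.30, 0.40, 0.40, 0.40, 0.40, 0.40]    # per year

capex_sw = 10 * PM                 # prototype re-shaping / commercialization
opex_per_year = 3 * PM             # monitoring + troubleshooting + upgrades

links_mbps = 6 * 10_000
utilization = 0.5                  # assumed average link utilization
video_share = [0.67, 0.72, 0.77, 0.82, 0.88, 0.89, 0.89, 0.90]  # interpolated
saving_rate = 0.75                 # measured upper bound for a large AS

cum_cost, cum_benefit = 0.0, 0.0
transit_price = 0.56               # EUR per Mbps per month (2015)
prev_adopters = 0

for year in range(8):
    adopters = int(HOUSEHOLDS * adoption[year])
    cum_cost += (adopters - prev_adopters) * 5.0 + opex_per_year
    if year == 0:
        cum_cost += capex_sw
    video_mbps = links_mbps * utilization * video_share[year]
    cum_benefit += video_mbps * saving_rate * transit_price * 12 + 3000 * 12
    print(f"year {year + 1}: cost {cum_cost:9.0f} EUR, "
          f"benefit {cum_benefit:9.0f} EUR")
    transit_price *= 0.67
    prev_adopters = adopters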

6.1.3 Cost-benefit analysis of MUCAPS

MUCAPS is an instantiation of the deployment of the MACAO (Multi Access and Cost
ALTO) server (MARS) and client (MARC). The MACAO services may be invoked by
clients called MARCs located in the ISP network (the MUCAPS case) or in end-user
devices, for instance in peer-to-peer networks. In MUCAPS, the MARC is hooked to the
ISP name resolver, so that the application traffic optimization is performed entirely in the
ISP network and is therefore completely transparent to the applications and end users.
The next two sub-sections, Market and risk analysis and Value proposition, address the
general MACAO use case. The other sub-sections, starting from the description of the
considered scenario, address the MUCAPS use case only, unless explicitly mentioned.
6.1.3.1 Market and risk analysis

Stakeholders | ISPs: provide (edge) connectivity to end users. CDNs: select the locations from which they deliver content to end users. Users: consume application content (multi-media files) produced by CSPs and delivered by CDNs, or produced and delivered via P2P networks.
Major players | Content providers such as YouTube or Dropbox. CSPs, especially SaaS providers. Network equipment vendors.
Market of interest | Video streaming, video on demand, file sharing, online storage, distributed gaming.
Target customers | ISPs may employ MUCAPS so as to optimize their routing costs and resource usage by optimizing application traffic. Moreover, the ISP's customers, i.e., CSPs and end users, benefit indirectly from the improved resource allocation offered by the ISP thanks to MUCAPS. Therefore, end users may also agree to have their connection type unveiled to the ISP.
Competing approaches | The PADIS solution [37] complements the MUCAPS input selection of application endpoints with additional ISP-specific information on content location that is not always available via functions such as DNS.

6.1.3.2 Value proposition

Product/service delivered | In the MUCAPS case, ISPs can hook MUCAPS to their in-network application content localization function (e.g., DNS resolver) for a user-transparent application traffic optimization, and advertise this to their customers as a light-weight asset for better user experience. Moreover, the ISP can distribute MARCs to its customers and advertise them as a light-weight asset for better user experience.
Customer value | The ISP gets optimized routing costs and resource usage through direct usage of MUCAPS, or through applications and end-users accepting to use the MACAO option. The consumer of the application resource, e.g., an end user watching a video or running a virtualized application, experiences better QoE, while the ISP saves transport costs. CSPs get an increased satisfaction rating from their customers. If CSPs buy services from CDNs, the CDNs will easily keep them as customers.
Customer ownership | The ISP directly buys MUCAPS.
Value chain/network organization | The end-user enjoys higher QoE through application traffic optimization by the ISP.
Revenue model | The ISP saves inter-domain traffic costs due to content delivery from sources hosted locally or in a peering network.
  Case 1: content delivery is performed by a video platform owned by a CDN, e.g., YouTube:
  - In the MACAO case, CDNs may get financial incentives from the ISP to subscribe to the MACAO option.
  - A content provider can pay a CDN to replicate content in multiple servers close to the end-users.
  - A CDN provider can pay a data center operator to buy storage capacity in the latter's data centers, and can also pay an ISP to employ the ISP's cache/proxy server to diminish the distance to his end-users.
  Case 2: peer-assisted content delivery:
  - In the MACAO case, peer-to-peer users may get financial incentives from the ISP to subscribe to the MACAO option.
  - Without MUCAPS, excessive traffic localization may cause an OPEX increase for the ISP due to the extra traffic in his network. An ISP may avoid this by appropriately setting its traffic guidance.
Business model (in accordance with the business modeling analysis in [7]) | ISP managed service.
6.1.3.3 Considered scenario

Number of potential users: Some ISPs such as Sprint, Deutsche Telekom and Verizon show interest in research on ALTO and contribute to the IETF ALTO WG. They might be interested in MUCAPS. We call NS(I) the number of subscribers of an ISP I. The cumulated number of their subscribers is in the order of NS with a value up to k*10^7, where k is a multiplying factor denoting the size of the ISP network in subscribers.

Expected adoption by market: The number of MUCAPS systems bought and installed by an ISP would be proportional to the number ND(LNR) of Local Name Resolution (LNR) functions, e.g. local DNS resolvers. Ideally these LNRs would implement a system like PADIS, to enrich the initial selection done by the CDN with the ISP's knowledge of content server usage history. ND(LNR) depends on the size of the ISP network. To our knowledge, there are no published numbers on the deployment of DNS resolvers by ISPs.

Frequency of use: Subscribers with MUCAPS do not explicitly request the MUCAPS service. They get it as their content request is processed by the LNR function that calls MUCAPS. According to [37], at least 90% of the customers use the DNS resolver supplied by the ISP.

Increase of demand: If the efficiency of the MUCAPS service is established, the increase of installed MUCAPS systems will follow the increase of ND(LNR), the number of LNR functions.

Topology: The MUCAPS block is composed of the MACAO functional block and its interface MARS, which serves MACAO service requests issued by clients (MARCs) hooked to LNR functions. The LNR in MUCAPS is located in the ISP premises. The MUCAPS system requires an ALTO Server managed by the ISP. The EPi are video content servers located in the Internet. The user endpoint receiving the content is a subscriber of the ISP.

Figure 62: MUCAPS employed for transparent in-network optimization.

Mechanism-specific parameters: The effectiveness of MUCAPS may be significantly affected by the availability and accuracy of the following parameters:
- RC = ISP-defined Routing Cost, a unit-less metric;
- BWS = unit-less ISP score on e2e path bandwidth availability, with values in:
  - [0, 20] if a pure score,
  - [0, 10] MBpF,
  - depending on the UEP (User End Point) access type;
where MBpF = mean MB allocation per flow (reflects real path bandwidth).
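
For illustration, the short Python sketch below shows one plausible way a MACAO-like decision module could combine these two parameters to re-rank candidate endpoints, weighting RC and BWS differently per UEP access type. The endpoint values, weights and access-type table are invented here; the actual MACAO decision logic is described in Deliverable D3.3 [11].

    # Illustrative sketch (not project code): re-ranking candidate endpoints
    # by a weighted combination of the two MUCAPS parameters, RC (ISP routing
    # cost, lower is better) and BWS (e2e bandwidth score in [0, 20], higher
    # is better). Weights per access type are hypothetical.

    ACCESS_WEIGHTS = {            # hypothetical weighting per UEP access type:
        "FTTX": (0.3, 0.7),       # high-capacity access favors the BW score
        "ADSL": (0.5, 0.5),
        "3G":   (0.7, 0.3),       # constrained access favors low routing cost
    }

    def rank_endpoints(endpoints, access_type):
        """Order candidate endpoints, given as (name, rc, bws) tuples."""
        w_rc, w_bws = ACCESS_WEIGHTS[access_type]
        max_rc = max(ep[1] for ep in endpoints) or 1  # avoid division by zero
        def score(ep):
            _, rc, bws = ep
            # Normalize both metrics to [0, 1]; RC is inverted (low cost = good).
            return w_rc * (1 - rc / max_rc) + w_bws * (bws / 20.0)
        return sorted(endpoints, key=score, reverse=True)

    # Example: three candidate content servers for a 3G subscriber.
    candidates = [("EP1", 10, 18), ("EP2", 4, 9), ("EP3", 7, 14)]
    print(rank_endpoints(candidates, "3G"))  # EP2's low routing cost wins on 3G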


6.1.3.4 Cost analysis

CapEx

Equipment: The ISP needs to buy a number of MUCAPS systems lower than or equal to ND(LNR), the number of Local Name Resolution functions they want to optimize with MUCAPS, e.g. the number of DNS servers.

Deployment of service

Implementation: Implementation of the prototype version of MUCAPS: 3.5 Person Years (PY). Production of an industrial version: 3 PY. Implementation of an ALTO Server: 1.5 PY.

Installation: Installation of the MUCAPS prototype: 0.10 Person Months (PM). Integration of the MARC call code in the LNR function: 0.10 PM. Configuration of the prototype: 1-2 PM.

Other setup overhead: Connecting MUCAPS to an ALTO Server. If an ALTO Server (AOS) is not present: installation of the AOS: 0.10 PM; computing the ALTO costs (ISP dependent): 2 PM; filling the AOS database: 1-2 PM depending on the size and details of the abstracted Internet topology, plus regular updates.

OpEx

Maintenance/Monitoring: 0.5 PM per year for monitoring 1 active MUCAPS system. 1 PM per year to update the configuration of 1 active MUCAPS system. 1 PM per year to monitor and update the AOS.

Troubleshooting: 1 PM per year to fix bugs on 1 active MUCAPS system. 1 PM per year to fix bugs on the AOS.

Upgrade: 0.5 PM per year for upgrading software and adapting it to new configurations.

Energy: The basic energy costs are those of 2 servers (MARS/MUCAPS + AOS). The annual energy consumption of a server is estimated at 2,000 kWh by the French national electricity company, i.e., roughly 4,000 kWh per year for the two servers.

Other running overhead: Advertising, incentives to users and content providers.

Estimated cost modification

Transit cost: Saving on transit costs for ISPs that have established transit relationships with upper-tier providers.

Energy cost: MUCAPS allows energy saving in the ISP network by:
- moderating the end-to-end path length thanks to appropriate setting of routing costs,
- minimizing energy loss caused by congestion or frame re-sending thanks to appropriate setting of Bandwidth Scores.

Deployment cost: Not foreseen.

Management cost: Additional management costs due to MUCAPS deployment and possible AOS deployment.

Other cost: Not foreseen.

6.1.3.5 Benefit analysis

Technical benefits and performance/experience improvement

Data rate: Not addressed.

Latency: Average gain in number of hops: 7.6 hops (within [2, 12] hops). Average gain in RTT: 27%. Average processing impact on start-time delay: 1.7 s.
- On LAN/FTTX: average start-time delay performance gain per video flow = 0.55 s gained.
- On WiFi/ADSL under heavy load conditions: average start-time delay performance gain per video flow = 0.035 s lost.
- On 3G: average start-time delay performance gain = 2.26 s lost.
(For further information, cf. Section 4.2.5.)

Throughput: Not addressed.

QoE:
- On LAN/FTTX: average Video Quality Score performance gain per video flow = +9.57% (8.7%-10.44%).
- On WiFi/ADSL under heavy load conditions: average Video Quality Score performance gain per video flow = equal (-/+ 0.01%).
- On 3G: average Video Quality Score performance gain per video flow = stable (-/+ 3%).
(For further information, cf. Section 4.2.5.)

Business benefits and value generation

Positive externalities: If CDNs get better revenues through more efficient delivery, their customers, such as CSPs, may get some commercial advantages as well and be willing to cooperate with ISPs on network-aware delivery. End-users may be willing to cooperate with ISPs and reveal their access type for a delivery service better suited to their access capabilities.

Competitive advantage: The ISP offers better QoE to end users or CDNs/CSPs, gaining customer loyalty and potentially new customers.

Vertical integration: The service is integrated with the ISP LNR.

Bundling opportunities: Not foreseen.

Revenues for existing demand: Saving on transit costs for ISPs that have established transit relationships with upper-tier providers.

Revenues by additional demand (if any): Not foreseen.

6.2 SWOT analysis

In the following pages, the SWOT (Strengths, Weaknesses, Opportunities, Threats)
analyses for the three main mechanisms/prototypes of the SmartenIT project are provided,
i.e., DTM/DTM++, RB-HORST and MUCAPS. The analysis focuses on both the
conceptual and technical aspects of each solution. Conceptual aspects are highlighted
with a light green color, while technical aspects are highlighted with a light orange color.
This way, the reader gets a clear idea of the properties of each approach and the
respective mechanisms from a technical, practical and business-oriented perspective.


6.2.1 DTM/DTM++

Strengths:
- Unique combination of features that allow for temporal/spatial traffic management.
- Traffic management is supported by SDN controllers.
- Considerable cost savings related to both intra- and inter-domain traffic for ISPs.
- Easy to deploy and to tailor to different scenarios, in both single- and multi-homed ISPs.
- Not disruptive w.r.t. the organization's infrastructure; in line with interconnection contracts and best practices.
- Low capital requirements for initial deployments.
- Possesses features enabling scalability (proactive operational mode).

Weaknesses:
- Generic traffic management mechanism; inadequate focus on interaction with Cloud Management systems (although conceptual considerations were done).
- The present implementation works only for two inter-domain links, and both links must use the same charging rule (volume or 95th percentile).
- Missing high-availability features (controller redundancy, SBOX redundancy), which are requested in typical providers' domains.
- Security issues: communication channels between components are not secured.
- Scalability of the mechanism as the network topology grows has not been tested.
- Implementation requires usage of hardware routers with hierarchical policers (DTM++ only).

Opportunities:
- Solves a real need in provider organizations over the actual cloud landscape.
- DTM has been shown to be beneficial for all involved stakeholders.
- Cost functions can be replaced, for instance by energy efficiency functions (or others), so the optimization can be customized (implementation may be needed).
- The implemented tunneling could be extended for assured-quality interconnection among network and cloud operators.
- Can be applicable for 5G inter-provider content delivery over software-defined networks (SDN).

Threats:
- Unwillingness of ISPs to collaborate with other ISPs deploying DTM; incentive incompatibility issues.
- Increasing popularity of peering (in various forms such as settlement-free, paid, multi-lateral, donut peering) and extranet solutions such as IPX.
- Requires cooperation between the ISPs, DCOs and CSPs deploying the mechanism entities (distributed form of the management mechanism).
- Increasing scalability and complexity issues when more inter-domain links are used.


6.2.2 RB-HORST

Strengths:
- Offers both WiFi roaming and content prefetching.
- Reduced/balanced traffic loads on the core network.
- Offloading of cellular networks.
- Energy savings on the handset, thus providing incentives for adoption by the user.
- Social information is kept private, since it resides in the user's handset and/or in his uNaDa.
- Simple implementation.
- Reuses existing infrastructure.
- Supports encrypted video streams.
- Interfaces with Facebook to fetch social information.

Weaknesses:
- Variable (legal) framework for caching/serving content (DRM, etc.) in different countries.
- Heterogeneous hardware could make implementation complex.
- Current implementation is bound to a specific content provider (Vimeo).
- Implementation is based on third-party tools that may become unsupported.

Opportunities:
- Continuous increase of mobile video traffic.
- Convergence of mobile and fixed network operators to increase the coverage/service areas; new FON-like business models.
- Reduced CPs' CAPEX for initial deployment due to peer-assisted content distribution (agreement with the ISP is required).
- NFV for home gateways may bring more functionality than simple Internet access.
- Internet Telcos offering OSN services in the future may allow social-aware traffic management solutions to be more easily deployed intra-domain or among peering domains, providing the respective incentives.

Threats:
- Agreements between ISPs and CPs for content distribution are hard to achieve.
- Incompatible APIs from different ISPs and CPs (fragmentation).
- 5G approaches for WiFi roaming may render this approach obsolete.
- Similar products/services (WiFi roaming) with a wider customer base and more mature design.
- Constantly changing APIs from CPs.
- Current implementation not compatible with NFV practices.


6.2.3 MUCAPS

Strengths:
- Cross-layer cooperation incentive: ISPs provide guidance based on routing cost and an e2e path bandwidth performance score. In return, applications following this guidance get better QoE, or the same QoE plus possible commercial advantages.
- Reduced/balanced traffic loads on the core network.
- Implementation uses standards such as ALTO that are being implemented by research in ISP labs and by ODL (OpenDaylight).

Weaknesses:
- Current implementation is bound to a specific name resolution function (NR), currently DNS, whose behavior and quality of output are variable.
- There is a need to generalize the deployment cases of MACAO beyond DNS resolvers, to serve modern applications such as virtualization and end-user managed content sharing.
- The MARC currently needs an API for each type of NR, e.g. for DHTs and trackers. Further releases need a generic API w.r.t. the NR function.

Opportunities:
- Continuous increase of video traffic via all types of access.
- Reduced ISP OPEX and enhanced CDN/CSP customer satisfaction.
- Joint optimization of ISP costs and guidance on high-bandwidth paths is likely to increase the cooperation of P2P users.
- The MUCAPS design uses an ALTO protocol extension called Multi-Cost ALTO that was adopted as an IETF working group document in March 2015. MUCAPS is the placeholder for another ALTO extension providing time-dependent cost values. It is based on 3 published patent applications as well.

Threats:
- Lack of motivation for the CDN/CSP to utilize ISP guidance.
- Lack of adoption of the ALTO protocol by ISPs.
- Big CDNs/CPs trying to skip ISP guidance.
- Difficulty to obtain reliable information on application resources location.
- Difficulty to design ALTO topologies for complex and heterogeneous networks, e.g. multi-access or cross-administrative-domain ones, whereas 5G and virtualization will require going below the BGP level.


7 Overall assessment
This section summarizes the overall assessment of all SmartenIT solutions and
mechanisms proposed, and presents a clearly laid out picture of the various SmartenIT
perspectives, methods, evaluation techniques, and results. The overall assessment builds
on the previous sections of this deliverable but also encompasses the range of all findings of
SmartenIT during its past 3 years of operation and research.
While Section 3 of this deliverable presented explicitly the mechanisms landscape,
recalled the relevant stakeholders and their relations and perspectives, and recalled
the finalized and stable SmartenIT system architecture, here the major overall assessment
is undertaken, including parameters and metrics, the business perspective, experiment
results, a cost-benefit analysis, and a SWOT analysis.
During the three years of work, the SmartenIT project evolved taking into account its key
design goals and five main objectives defined in the description of work.
Thus, to recall, the SmartenIT key design goals were defined as:

- Incentive-compatible network management mechanisms for users, overlay service providers, and network providers, which are based on well-defined and open protocols like ALTO and support an operational framework for agile connectivity, distributed management, and a better control of heterogeneous networks;
- QoE-awareness, which is based on the awareness of user equipment and load situations (covering bursty traffic situations and congestion cases), and social awareness, especially awareness of user relations and interests; and
- Energy efficiency with respect to the networking infrastructure as well as wireless and wired end-user devices, which includes flexibility in terms of adaptability to changing application requirements.

SmartenIT generated a wide range of ideas and solutions. These ideas were compiled into
mechanisms, which took the form and style of management mechanisms, addressing
more broadly the Network and Service Management aspects of network operations performed
by the main stakeholders involved, as well as the user. During these 3 years, a set of
carefully selected mechanisms has achieved a mature state in terms of formal
specification and design, while some of them have additionally been implemented to
demonstrate, within a working prototype, that the respective control loops and mechanisms
can be cast into operational and efficient software, too. The SmartenIT approach
encompassed evaluations in terms of simulations (in case of the formal specification used)
and in terms of prototypical software performance and functional evaluations (for the
implemented mechanisms). Besides insights gained for each of those two evaluation paths
separately, the conclusive finding from the simulation results and the operational performance
results revealed that theory and implementation can get very close together, when real-life
considerations are fully integrated into both the definition of the evaluation set-ups and
the interpretation of their outcomes.
Therefore, Table 41 summarizes all traffic management mechanisms that were
considered by the SmartenIT consortium. As already explained in Section 3, they are
mapped onto two separate scenarios, namely the EFS (End-user Focused Scenario) and
the OFS (Operator Focused Scenario). For each mechanism initiated and developed within
SmartenIT, Table 41 provides the key information on (a) the stage of its development
achieved and (b) its respective general assessment with respect to the benefits or losses of
the stakeholders involved. More explicitly, each of those 11 mechanisms in total is clearly
distinguished in terms of its intended and designed impacts (cf. Deliverables D2.2 [5], D2.4
[7], and D2.5 [8]) and consequently in terms of its formal specification. This indicates that
the respective mechanism was carefully investigated with respect to how that intended
impact can be reached, either in a general case or in the dedicated use cases SmartenIT had
defined within WP1. A subset of those mechanisms (more specifically, six of them) was
evaluated by means of simulations, by (a) either determining a relevant simulation model,
implementing it, and running respective simulations or (b) utilizing known simulation
models and applying those to the dedicated situation under investigation.

Table 41: Traffic management mechanisms developed by SmartenIT

                                              General assessment of stakeholders'
                                              benefits/losses
Scenario  Traffic management     Testbed      ISP        End-user         Overlay
          mechanism              experiment                               (CSP/CDN/DCO)
EFS       RB-HORST               F+P          Benefits   Benefit          No-lose/Benefit
          SEConD                              Benefits   Benefits         Benefits
          vINCENT                             Benefits   Benefit          No-lose
          MONA                                No-lose    Benefit          No-lose
          MUCAPS                 F+P          Benefits   Benefit          Benefit
OFS       DTM                    F+P          Benefits   No-lose          Benefits
          ICC*                                Benefits   Benefits         Benefits
          MRA
          DTM++ (DTM + ICC       F            Benefits   No-lose/Benefit
          network layer
          functionality)

F  functional test
M  measurement extension
P  performance test
*  network layer functionality of ICC implemented on a hierarchical policer [8]; cloud
   layer functionality not implemented
Additionally, the mechanisms with the highest potential impact in practical operations as
well as with the highest innovation potential were chosen by SmartenIT to become
prime candidates for a full-fledged implementation. Thus, three mechanisms have reached
that stage of development within SmartenIT. Note that implementations in a prototypical
manner led to various practical experiments and evaluations. Those evaluations
addressed complex and real-life characteristics. It is also important to note that not all
mechanisms with a similar level of impact and innovation potential could be implemented,
due to limited resource availability within SmartenIT (implementations do need many more
resources than simulations). However, those prototypes and their respective show-cases
demonstrate clearly that it is neither easy to reach the exact same behavior of a simulated
versus an implemented mechanism, nor is it desirable to do that for every mechanism.
SmartenIT as a research and development project needs to provide general insights for
future design and implementation work as well as measurable results on selected
mechanisms. Thus, three mechanisms (RB-HORST, DTM/DTM++, and MUCAPS) were
implemented and became part of the SmartenIT prototype, which was first show-cased for the
Year 2 review and will also be show-cased in its advanced form for the Year 3 review, indicating
that, besides the written documentation of such implementations and evaluations, the
respective effects and impacts can be seen in real-life operation.
Finally, Table 41 also shows, at an abstract and purposely high level of assessment, the
key stakeholders' benefits and losses. Due to the detailed evaluation available within
Deliverables D2.2 [5], D2.4 [7], D2.5 [8], and D4.3 (this document), this general overview
focuses on the main stakeholders, which are considered to be the usual major players,
especially (a) the Internet Service Provider (ISP), (b) the end-user, and (c) the overlay
provider, either in terms of the Cloud Service Provider (CSP), the Content Delivery Network
(CDN) or the Data Center Operator (DCO). Thus, Table 41 provides an overview of which
stakeholders are involved in which mechanisms and indicates a general assessment of
the potential benefits that may be reached with a given solution or the possible losses that
may arise for the respective stakeholder.
Furthermore, Figure 63 shows specifically the evolution of the SmartenIT solutions, which are
based on the mechanisms designed and their evaluations performed. Starting from all 9
standalone basic mechanisms, especially RB-Tracker, HORST, MONA, vINCENT,
SEConD, and MUCAPS for the End-user Focused Scenarios under investigation and
DTM, ICC, and MRA for the Operator Focused Scenario, synergies have been
investigated on the level of formal investigations, simulations, and integrations. The
resulting synergetic mechanisms combine key functionality of the basic mechanisms and form
practically applicable solutions of SmartenIT know-how, especially RB-HORST and
DTM++, which were subsequently chosen for an integrated implementation and
prototyping. The fully integrated cross-scenario solution termed RB-HORST-DTM-ICC
demonstrates, as documented in Deliverable D2.5 [8], that the various functionalities developed
within SmartenIT form a tool box, which can serve as the basis for combinations as (a) the
dedicated field of application may require or (b) the set-up of stakeholders and their
dedicated technology in operation may demand. Thus, the adding-up and the
combination of the developed complete system addresses in full the key project objectives
and its design goals.
Within Figure 63, the blue boxes indicate those mechanisms that have been just described,
or described and specified. An important number of those have been inspected in the
closest possible detail by using theoretical models and simulation techniques. The red
boxes present mechanisms which have been described and specified, theoretically
investigated, and implemented too. RB-Tracker and HORST do not have separate
implementations, as the combined functionality offered by these mechanisms is
implemented in a fully integrated approach, termed the RB-HORST mechanism. All
arrows of Figure 63 show the parent-child relation between these mechanisms, but they
also represent the evolution of the integration process within the SmartenIT project,
especially for performing the engineering and prototyping of these mechanisms within the
SmartenIT architecture. The DTM++ mechanism integration followed a slightly different
evolution, since the parent DTM mechanism was implemented first, then was improved by
its integration with the ICC mechanism, and finally was evaluated in full in that integrated form.
Thus, the ICC mechanism was implemented in its integrated form of DTM++. Therefore,
the DTM++ mechanism possesses the functionality of both the pure DTM and the ICC
mechanisms.

Figure 63: Evolution of SmartenIT solutions


The SmartenIT project selected two core mechanisms, DTM and RB-HORST, for full
prototyping and experimentation in WP4. These mechanisms provide a full
traffic management and transport solution for the EFS and OFS scenarios. Multi-partner
specifications of these mechanisms and their implementations started in Q1 2014. In Q1
2015, it was decided to add MUCAPS to the achievements of SmartenIT. MUCAPS has
been defined and prototyped as an add-on mechanism to further optimize application
traffic management by using a cross-layer cooperation between the application and ISP
transport networks. This cooperation is achieved thanks to a protocol standardized
at the IETF as ALTO (Application Layer Traffic Optimization), specified in
RFC 7285. The MUCAPS design integrates an extension to the ALTO protocol called
Multi-Cost ALTO that enables ALTO Clients to request path costs on several metrics jointly
and also allows clients to place constraints combining several metrics and logical
operators. Multi-Cost ALTO has been an IETF ALTO WG item since 2015; see [40]. The
MUCAPS implementation evaluated in this deliverable is entirely deployed within the ISP
network and its action is transparent to application clients. However, the core MUCAPS
functionality is a decision module called MACAO that serves optimization requests issued
by clients called MARCs, which are hooked to name resolution functions deployed
either in the network or in end systems. Deliverable D3.3 [11] illustrates how MUCAPS is
implemented with the MARCs hooked to the ISP DNS Resolvers and how a MARC
can as well be hooked to a DHT in a uNaDa. The latter use case positions MUCAPS as a
potential add-on to RB-HORST, in that it can take the DHT selection of content locations
produced in RB-HORST and re-order it according to ALTO-based and user-access-
technology-aware information. As a consequence, in Figure 63, the dotted line pictures a
potential parent-child relation between MUCAPS and RB-HORST.


Obviously, the vertical grey line in Figure 63 separates the originally defined EFS and OFS
mechanisms. However, as SmartenIT progressed, clear insights were gained
that the future and possible integration of SmartenIT end-users' and operators'
views can form a fully functional mechanism, and a product in turn, that will be able to
offer integrated features and functionality related to OFS and EFS as a whole. This
integration level has already been discussed and was considered in further detail in D2.5
[8]. Therefore, the green arrows indicate the next possible integration paths, already
considered and studied within the scope of SmartenIT, but not implemented. Furthermore,
the integration of SEConD, MONA, and vINCENT with RB-HORST had been deeply
discussed in the consortium due to its innovation potential and already resulted in an
implementation specification (as the RB-HORST++ mechanism), but due to resource
constraints such an implementation has not been further pursued.
The main functionality and innovation of all mechanisms presented in Figure 63 may be
summarized as follows:

- RB-Tracker incorporates three mechanisms in a fully distributed system, which form an integrated management approach for P2P services combined with content-aware distribution demands: (1) the identification of content that is popular and of interest to the user in order to replicate it to the local cache, (2) the identification of the network status to determine a feasible time to replicate, that is, an off-peak hour, and (3) the identification of close peers to replicate from in order to avoid additional intra-domain traffic induced by replication.

- Utilizing available resources on home routers or uNaDas, HORST supports content delivery and takes load off cellular networks by prefetching and caching content and by providing WiFi access based on social network information. To use HORST, end-users need to connect to the social network using their credentials. The social information provided is used to predict the popularity of content for improved prefetching and to identify trusted users to grant WiFi access.

- RB-HORST combines the approaches of RB-Tracker and HORST by using RB-Tracker to form and manage an overlay among the uNaDas participating in HORST.

- MONA adds intelligent traffic control on the smartphone, thus reducing the energy consumption of mobile data access and increasing the QoE of the end-user by selecting the optimal network connection depending on the current network availability, network performance, and user interaction.

- vINCENT provides an incentive scheme for users to offer WiFi network access even to unknown parties. In the scheme, each user participates with his mobile phone and his uNaDa. It grants their owners an appropriate incentive by allowing them to consume a fair share of the offloading capabilities of other access points. vINCENT exploits social relationships derived from the OSN, as well as interest similarities and locality of exchange of OSN content, and addresses the asymmetries between rural and urban participants of an offloading scheme to derive fair incentives for resource sharing among all users.

- SEConD enhances the Online Social Network (OSN) users' QoE by eliminating stall events during video viewing. The means to attain this improvement are effective video prefetching and peer-assisted, QoE-oriented video delivery. SEConD reduces the contribution of the origin server to video delivery, and thus its relevant operational costs. It achieves a reduction of the inter-domain transit traffic and exploits peering links, which may lead to the reduction of transit inter-connection costs. Moreover, SEConD restricts redundant traffic, i.e., traffic generated by prefetching the same content item several times, in order to mitigate intra-domain congestion.

- MUCAPS revises the initial endpoint selection done by the application overlay by adding awareness of the underlying network topology and of transport costs, through information not available via traditional end-to-end on-line measurement mechanisms. The selection criteria used in MUCAPS are an abstraction of end-to-end performances on application paths. This abstraction relies itself on an ISP-centric abstraction of the Internet topology, provided to applications via the IETF ALTO protocol and one of its extensions supporting several network cost metrics and constraints combining several logical operators and metrics, see [40]. MUCAPS also integrates the access technology used by the device receiving the application resource in its decision making, thus possibly adapting its decisions accordingly.

- DTM reduces traffic transfer costs on inter-domain links. It works for traffic-volume and 95th-percentile charging rules. It reacts dynamically to traffic changes on inter-domain links by moving some part of the traffic (manageable traffic) from one link to another. This management mechanism works in a distributed way; it requires cooperation between operators, namely the ISPs receiving and generating traffic. It has been applied to inter-Cloud communication for Clouds located in different ISP domains; this traffic is classified as manageable traffic.

- ICC attains substantial cost savings for the ISP under both full information and partial information (traffic expectation from statistics) via smart traffic management. ICC always attains the transit charge reduction goal under full information on traffic. ICC is net neutral: it incentivizes the key ISP business customers that generate the time-shiftable traffic to allow ICC to manage the time-shiftable portion of their traffic, through a discount proportional to their fair share of the total volume of traffic managed by ICC. The pricing scheme associated with the ICC mechanism is a good compromise between simplicity and sophistication to offer the right incentives.

- DTM++ combines the so-called traffic shift in space of DTM and the traffic shift in time of ICC to achieve further optimization of the traffic distribution across multiple transit links, while delaying delay-tolerant traffic when the transit links are heavily loaded. The DTM part of the mechanism distributes traffic through tunnels traversing the inter-domain links. It tries to keep stable the proportion of traffic (background and manageable) which traverses both links. DTM++ does not control the amount of traffic allowed to pass a particular link; only the mentioned proportion is controlled. The problem of excess traffic is mitigated by the ICC functionality, which controls the amount of traffic sent by clouds. Only the time-shiftable traffic part (selected TCP traffic) generated by a cloud can be controlled. DTM++ has to mark the time-shiftable traffic, which can be recognized by a hierarchical policer that discards this part of the traffic when it exceeds a predefined limit; the TCP sliding window then regulates the traffic volume reaching the inter-domain links (a simplified sketch of this policing step is given after this list).

- MRA allocates multiple heterogeneous resources, such as CPU, RAM, disk space, and bandwidth, in a fair manner between the customers of a cloud or a cloud federation. As presented in D2.5 [8], the results show that the physical resources can have complex/unpredictable dependency patterns when consumed by VMs. Therefore, Cloud resource allocation mechanisms must operate only based on consumed resources, and must not assume specific dependencies between resources (because these dependencies vary widely from case to case). While this constraint is ignored in the literature, the MRA mechanism takes it into account by defining fairness solely based on consumed resources, and therefore achieves a wide applicability.
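
To make the DTM++ policing step concrete, the following minimal Python sketch (not the SmartenIT implementation, which relies on a hardware hierarchical policer, cf. [8]) illustrates the underlying idea: marked time-shiftable packets are the first to be dropped once the aggregate rate exceeds a configured limit, so that the senders' TCP sliding windows back off. All rates and bucket sizes below are invented, and a real hierarchical policer would additionally use per-class child buckets.

    # Simplified sketch of the hierarchical policing idea behind DTM++.
    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate, self.burst = rate_bps, burst_bytes
            self.tokens, self.last = burst_bytes, 0.0

        def conforms(self, size_bytes, now):
            # Refill proportionally to elapsed time, capped at the burst size.
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate / 8)
            self.last = now
            if self.tokens >= size_bytes:
                self.tokens -= size_bytes
                return True
            return False

    class HierarchicalPolicer:
        """Parent bucket polices the aggregate; only marked (time-shiftable)
        packets are dropped when the aggregate limit is exceeded, so dropped
        TCP segments make the sender's sliding window back off."""
        def __init__(self, aggregate_bps, burst):
            self.parent = TokenBucket(aggregate_bps, burst)

        def admit(self, size, time_shiftable, now):
            if self.parent.conforms(size, now):
                return True                  # within the aggregate limit
            return not time_shiftable       # excess: drop only marked traffic

    policer = HierarchicalPolicer(aggregate_bps=10_000_000, burst=100_000)
    print(policer.admit(1500, time_shiftable=True, now=0.001))  # True: within limit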
The SmartenIT traffic management mechanisms and the respective models can be further
strengthened beyond the lifespan of the project, so as to increase the agile enrichment of
the respective functionality provided to the stakeholders adopting the SmartenIT solutions.
Firstly, an RB-HORST++ mechanism was considered, employing features of RB-HORST,
SEConD, vINCENT and MONA in order to perform content prefetching and mobile-to-WiFi
offloading in an energy-efficient manner, while ultimately further improving the end-users'
Quality of Experience. Further capitalizing on cross-layer and EFS-OFS synergetic solutions
constitutes a highly promising area of future work. Synergies already defined are briefly
overviewed below: The modeling and evaluation of cache efficiency including dynamic
popularity can be further consolidated in order to further enhance the RB-HORST
efficiency and the caching infrastructures at ISPs, CSPs and data center providers. In
particular, the interworking of caching in different domains under different administration
remains a difficult task waiting for solution approaches that are viable for all involved
parties and enable better inter-domain optimization. Moreover, regarding MONA, the
developed power models for mobile connectivity can be used for developing an algorithm
scheduling the user-generated traffic in future versions, considering QoE. Implementing
the cloud layer of the ICC mechanism, taking into account both the network load and the
energy cost of the IT infrastructure of the datacenters, will enable further optimizations of
the management of inter-domain traffic for the specific case of cloud services and in the
context of 5G software-defined networking for content delivery. MUCAPS and ICC/DTM++
will benefit from further investigation of cross-layer cooperation between users, network
providers and cloud/application providers and from the development of cross-layer decision-
making algorithms.
Regarding EFS-OFS synergetic solutions, SmartenIT has also produced certain promising
solutions in terms of design and features.
In particular, the DTM-RB-HORST synergy (D2.5 [8], Section 5.3.1) is specified as follows:
RB-HORST contributes to the reduction of inter-domain traffic by prefetching content
based on socially-aware decisions, thus making the content that will be demanded locally
available. RB-HORST increases local availability, so that end users need to download
content directly from remote DCs less frequently. DTM is used to optimize the traffic
distribution among the inter-domain links in terms of the ISP's costs. The ICC and
SEConD synergy has also been specified for various deployment scenarios (D2.5 [8],
Section 5.3.2). The idea is that the social awareness logic of SEConD is used in this
synergy to decide on what needs to be moved, where and when, also in an ISP-friendly
way that is enforced by ICC. In particular, this input is the amount of data, the priority level
(urgent or delay-tolerant) and the candidate destinations that will be selected via the social
prediction module of the SPS component of SEConD. Apart from the social awareness,
the adoption of SPS also improves the QoE of the ISP's customers.
The following section overviews how SmartenIT addressed the key design goals listed at the
beginning of this section.


7.1 Key design goals addressed


The key design goals are recalled at the beginning of Section 7. Various functionalities
realizing these key design goals were offered by different SmartenIT solutions. The specified
traffic management mechanisms (presented in Table 41 and Figure 63) have been
designed to be impartial in terms of market power and competition, adopting design-for-
tussle and incentive compatibility principles, thus increasing the chances of their
adoption in the market and enhancing the project impact. They offer incentives of various
forms, such as direct benefits, indirect benefits, cost reduction, and efficiency increase, to the
stakeholders involved and result in tangible benefits for them. In particular, some
mechanisms address social awareness aspects (namely RB-HORST, SEConD,
vINCENT). MONA is a dedicated mechanism for energy efficiency control, while other
mechanisms also touch some aspects of the energy efficiency problem. The operation of
RB-HORST influences the DCO's energy consumption and also reduces the BTS (Base
Transceiver Station) power consumption when the offloading procedure takes
place. This mechanism has been designed without a direct relation to energy efficiency, but
its authors were aware that content caching in uNaDas frees resources in data
centers, so less energy is consumed by them. Moreover, offloading procedures result in a
lower number of mobile network users, and fewer users mean less power used by the
transceivers in BTSs. Much effort in SmartenIT was directed at inter-Cloud
communication; Clouds located in different ISP domains were considered. The focus was
on the problem of lowering the inter-domain traffic transit costs due to inter-Cloud
communication (DTM/DTM++). QoE was also considered within SmartenIT. MUCAPS
offers data transfer with a mutual incentive for cooperation between end users, ISPs and
applications. The impact on the QoE of video streaming has been explicitly analyzed and
quantified via a dedicated Video Quality Score, based on several purely QoE-specific
metrics, by the QoE analysis component called VITALU.
7.1.1 Incentive compatible network management mechanisms
SmartenIT developed solutions that offer proper incentives to all the stakeholders
involved in the Internet services value chain, as distinguished in WP1 and Section 3.1 of
this deliverable. The inclusion of both network and Cloud/over-the-top layer stakeholders
in the SmartenIT research agenda implies that the SmartenIT traffic management
mechanisms operate in a fashion that is beneficial not only on the network layer but also
for the overlay/Cloud layer (cf. D2.4 [7] and D2.5 [8]).
A major concern in terms of incentives for all the aforementioned stakeholders to adopt the
SmartenIT technology is the openness of the solutions employed, their flexibility, and their
interoperability with other solutions/legacy systems. The issue of adoption incentives has
been addressed in the scope of the project by means of using widely-used open protocol
standards and well-structured, layered, interchangeable functionality.
Different stakeholders involved in the EFS or OFS scenario may play different roles, as also
explained in Section 3.1, resulting in different structures of the resulting value and
incentive chains. The term incentive chain is used to depict the propagation of incentives,
through proper incentive schemes, along the value chain among the respective value chain
actors. In particular, in case a given stakeholder obtains a direct benefit from the deployment
of a given traffic management mechanism (thus this potential benefit is an incentive for the
stakeholder to endorse the mechanism), this may act as a trigger of incentives for other
stakeholders. In particular, the originator's initial tangible incentive may result in additional


benefits through cooperation with partner stakeholders. In order to establish this
cooperation, even with potential competitors, incentives need to be provided in turn to
other stakeholders, in the form of some discount, free service, or subsidization of the
partner stakeholder for service deployment. This constitutes a chain of incentives, which
must also be aligned with the value chain of the respective Internet service. For instance,
in ICC and, thus, in DTM++, the initial tangible benefit of the ISP is identified as transit cost
reduction. The ISP needs to convince customer Clouds/datacenters to adopt the
ICC/DTM++ traffic management solution for a portion of their traffic in order to obtain this
cost reduction. This results in providing incentives to these stakeholders by means of
increased performance of their time-critical traffic and monetary returns of a portion of the
home ISP's cost savings (additional details on the ICC payment rule are provided in
Appendix F of Deliverable D2.5 [8]).
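
As a purely numerical illustration of such a discount (the exact ICC payment rule is the one in Appendix F of D2.5 [8]; all figures below are invented), the following Python fragment splits an assumed fraction of the ISP's transit-cost savings among clouds in proportion to their share of the ICC-managed traffic volume:

    # Hypothetical numbers only: savings, returned fraction and volumes are
    # invented to illustrate a discount proportional to each cloud's fair share.
    savings = 10_000.0               # assumed monthly transit-cost saving (EUR)
    returned_fraction = 0.4          # assumed portion of savings returned to clouds
    managed_volume = {"CloudA": 60.0, "CloudB": 30.0, "CloudC": 10.0}  # TB shifted

    total = sum(managed_volume.values())
    for cloud, vol in managed_volume.items():
        discount = savings * returned_fraction * vol / total
        print(f"{cloud}: fair share {vol/total:.0%}, discount EUR {discount:.2f}")
    # CloudA: 60% -> EUR 2400.00; CloudB: 30% -> EUR 1200.00; CloudC: 10% -> EUR 400.00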
A stakeholder may also be a beneficiary without a direct incentive. In such a case, a
stakeholder benefits only indirectly from the fact that a particular mechanism or device
deployed in a network has a positive impact on its operation and costs/gains. One
can say that such stakeholders get some indirect benefit as a side effect of a particular
service deployment, with the latter comprising their main focus and incentive. Additional
incentives in the form of rewards from the originator to these stakeholders may need to be
propagated in this case across the chain, so as to ensure the sustainability of the value and
incentive chain.
Below we consider examples of chains of incentives for the EFS and OFS scenarios
separately.
EFS
In the case of RB-HORST, an ISP wants to deploy this mechanism in the end-user
premises because it will gain profit from its operation. In order to encourage end users to
install a home router with RB-HORST, it may offer lower prices for Internet access or
higher bandwidth without any extra charge. The ISP benefits from the installation because the
inter-domain traffic may be reduced.
Some end users may also decide by themselves to deploy RB-HORST without the direct
involvement of their home ISP. This type of deployment is possible because the ISP
provides Internet access to the end-users and does not restrict device deployment in the
end-user premises. By using RB-HORST, end users will get better QoE, higher throughput and
lower latency, but there is no additional benefit. There are cases where, e.g., monetary
rewards may come from other-layer stakeholders: for instance, certain on-line gaming
services offer a reduced monthly subscription if the users enable peer-to-peer
sharing of maps, characters and content, which reduces the load at the game servers and
increases the game performance. The ISP in such a case will also be a beneficiary, because
the inter-domain traffic will be reduced thanks to the deployment of this ISP-friendly
technology by a portion of its end users. This is why SmartenIT has placed emphasis on
the ISP-friendliness of caching and overlay mechanisms. Other benefits are possible. For
instance, the mobile operator may benefit from offloading: it results in lowering the BTS power
consumption and saves radio resources.
From the RB-HORST operation, there are benefits for the ISP and the end-users (a more
detailed analysis is presented in Section 6). There are no contradictory issues resulting in
the generation of new tussles, thus no additional incentive mechanisms need to be put in
place.

When we consider RB-HORST, we have three other beneficiaries without a direct monetary
incentive, which however have substantial gains in terms of efficiency that are
inevitably translated into cost reductions, and thus into indirect monetary gains. These
stakeholders are the CSP, who saves resources, the HRM (home router manufacturer),
who sells dedicated equipment, and finally the DCO, who saves energy. All these
stakeholders and their benefits are presented in Section 6.
Another important note is that the RB-HORST equipment may operate without any ISP
support, as it is a single router with RB-HORST software. If an RB-HORST-enabled home
router is sold as an appliance, the support is provided by the vendor. If the RB-HORST home
router is based on an open source implementation, the software updates are provided by the
group distributing the software. One can say that this mechanism is ISP/CSP/DCO agnostic.
Any single end user can deploy this device without any permission; the end user only
relies on the HRM who offers this solution. Looking from the perspective of the ISP and the
end user, their motivations for RB-HORST deployment are aligned and not contradictory.
The only downside of the uNaDa deployment is the increase of the end user's energy
consumption due to the RB-HORST functionality.
An ISP may easily deploy MUCAPS to optimize routing costs and resources usage while
maintaining end-user QoE. As MUCAPS is deployed within the ISP network, its customers
may benefit from improved resources availability without needing to buy anything. End
users may get refined guidance based on their connection type, to better adapt the content
location to the capabilities of their current access. To this end, users may be asked to
agree to have their connection type unveiled to the ISP. As for the value chain in terms of
network organization: the end-user gets better QoE as a result of ISP-initiated
application traffic optimization. The ISP gets optimized routing costs and resources usage
by means of direct usage of MUCAPS or through applications and end-users accepting to
use the MACAO option. Likewise, the CSPs get an increased satisfaction rating from their
customers. If CDNs use MACAO to optimize their application traffic, they can improve the
satisfaction of the CSPs buying their services and will easily keep them as customers.
Last, the ISP saves inter-domain traffic costs due to content delivery from sources hosted
locally or in a peering ISP network.
OFS
The ISP-friendliness of the TM mechanisms has been a prerequisite during the design and
development phases. Thus, the TM mechanisms are aligned with the interests of a real
ISP due to the transit cost minimization/reduction attained by means of most of the OFS
TM mechanisms, i.e. DTM or ICC and their combination, namely DTM++, and selected
EFS TM mechanisms as well, such as SEConD. The latter is discussed in the synergies
part of this section, thus it is not re-discussed here.
In the case of DTM/DTM++, three stakeholders were identified: the ISP, the DCO and the CSP.
The deployment of DTM/DTM++ requires cooperation between the site that receives traffic and
the one that generates it. The mechanism has been designed to lower the ISP's transit costs
by shaping the traffic of inter-Cloud communication. If the Clouds are located in different
domains, they operate on resources provided by the DCOs. The main incentive originator in
the case of the DTM/DTM++ solutions is the ISP aiming to decrease its transit cost. Note that
if the ICC payment rule is adopted (D2.5 [8], Appendix F), then incentives are provided to
the DCOs as well by their respective home ISPs. Several incentive chains that encourage
partners to deploy DTM or DTM++ were considered.


In the simplest case (Figure 64(a)), ISP-A receives traffic from the cloud hosted by ISP-B.
ISP-A asks ISP-B to deploy the respective DTM part, and offers the same DTM
deployment to ISP-B. Both ISPs benefit since they decrease their transit costs. In
Figure 64(b) and Figure 64(c) we present the situation where the DCOs or CSPs are
federated. In this case, an incentive originated by one stakeholder may result in a
sequence of incentives, as also presented in Figure 64. ISP-A offers DCO-A a
lower cost for connecting to its network infrastructure. This cost reduction can take place if
DCO-A convinces DCO-B to deploy DTM (Figure 64(b)). The incentive scheme used by
DCO-A to do this would be that DCO-A can offer some free services to DCO-B (which can
improve DCO-B's operation and competitiveness). The incentives of the value chain
stakeholders once more are aligned and not contradictory, thus all stakeholders attain
tangible benefits. A similar sequence of incentives may appear in the case of a CSP
federation. ISP-A initiates the whole encouragement procedure for DTM deployment
by contacting DCO-A. Next, DCO-A offers a reward, such as a discount for the
infrastructure services provided, to CSP-A, which is federated with CSP-B. CSP-B operates
in the ISP-B network, where it can deploy the DTM elements (SDN controller and OpenFlow
switch, e.g., OvSwitch [48]) in the form of virtual or physical machines (the cloud operates
on DCO resources, so with virtualization the deployment can be done without the DCO's
permission). Each mentioned player gains some benefit.

Figure 64: Sequences of incentives for DTM and DTM++ in the context of stakeholders'
relations


When ISP-A wants to use DTM++, cooperation with CSP-B is required, since CSP-B is
responsible for marking the delay-tolerant traffic. So again the initial incentive starts with ISP-A
and the final participant is CSP-B. The considered sequences of incentives for DTM++ are
presented in Figure 64(c), (d), (e). When some stakeholders are federated, this relation
allows omitting ISP-B in the sequence (Figure 64(c), (d)). The federated entities are
usually in a direct relation and have common business aims. For instance, in Figure
64(d), where DCO-A is federated with DCO-B (DCO-B delivers resources for operation
to CSP-B), ISP-A may contact CSP-B via DCO-A instead of contacting ISP-B, who hosts
DCO-B (who respectively has a business relation with CSP-B). It may be easier to
encourage DCO-A than ISP-B to contact CSP-B. All incentives appearing in this
sequence are compatible.
Last but not least, these incentive chains, though differently structured for federated and
independent operation of the stakeholders, all result in aligning the stakeholders' incentives.
This is extremely important, since federations are gaining momentum as a means of
reducing CAPEX and OPEX for ISPs and Clouds, while maintaining elastic access to
resources over large geographical regions with fine geo-spatial granularity. This trend is
expected to intensify in the coming years due to the increasing adoption of the
Software Defined Networking and Network Function Virtualization paradigms, which
render the cost-efficient, fast and dynamic provisioning of Network as a Service
sustainable. Regarding Clouds and overlays, such federations already exist, e.g., XiFi,
OnApp (details provided in D2.4 [7], Section 3).
Finally, note that the optional Cloud-ISP interface of the ICC mechanism, whose
implementation is not part of the current DTM++ prototype, can serve as a means of
communicating preferences and providing incentives between the cloud and the network
layers, thus enabling the on-demand creation of such value and incentive chains.
Incentives for resource sharing and load/VM migration among Clouds (with or without
federation) or load sharing among home routers have also been investigated by dedicated
models in D2.5 [8], Section 4. The optimization criteria served are minimization of transit
charges for the ISP, cloud metrics, QoS/QoE and energy metrics. These cross-layer
optimizations also form new incentive chains, which we refrain from presenting here for
brevity reasons. Once again, the direct and indirect benefits stemming from the operation
of the respective SmartenIT mechanism are propagated from the chain originator across
the stakeholders of the value chain by means of incentive schemes such as discounts,
monetary rewards and improved performance/richer services functionality.
Tussle Analysis
SmartenIT has performed tussle analysis within the scope of the two scenarios, namely
OFS and EFS, of their hybrid combination including synergies of SmartenIT
mechanisms, as well as of the SmartenIT models. Since an exhaustive analysis
of the entire SmartenIT solution space would not be feasible, the project opted to
provide indicative analysis over specific instances of each major SmartenIT solution
type, namely mechanisms, scenarios and models. This analysis has also been explicitly
mapped to the business models that are relevant for SmartenIT and have been presented
in Chapter 3 of SmartenIT deliverable D2.4 [7].
SmartenIT designed mechanisms that address the incentives of the various involved
stakeholders, so as to avoid potential tussles among them. We focused on two types of
incentives: monetary incentives, e.g., revenues for providing a specific service, and

performance incentives, e.g., enjoying high(er) QoE or experiencing low(er) congestion.
The incentive schemes discussed at the beginning of subsection 7.1.1 are used to
resolve tussles among the stakeholders. Thus, they are the control mechanisms by which
the alignment of incentives results in situations where there are no conflicts among the
stakeholders whose cooperation is crucial for the deployment of a mechanism or service.
The introduction of such schemes also enhances trust and facilitates the establishment of
business relationships among the stakeholders in a competitive environment. This is in
sharp contrast to the case where a powerful stakeholder may exert his market power in
order to enforce a certain solution in a non-incentive-compatible way, thus creating new
tussles and possible business or legal disputes. De-peering or brute-force throttling of
certain types of traffic by ISPs are well-known examples of the latter. Therefore, it is
important to stress that the SmartenIT tussle analysis has greatly contributed to designing
mechanisms that are attractive for all the stakeholders of the Internet services value chain,
or can be made so through the provision of proper incentives. Below we summarize the
tussle analysis conducted in SmartenIT; the reader may refer to Deliverable D2.4 [7] for a
complete analysis.
A basic tussle analysis of the Inter-Cloud Communication and Exploiting Content
Locality use cases was conducted. During the tussle analysis, DTM++ (DTM combined
with ICC) and RB-HORST++, which includes components of RB-HORST, SEConD, vINCENT
and MONA, have been considered. The conclusion from this analysis is that the SmartenIT
Traffic Management mechanisms for the Inter-Cloud Communication and Exploiting
Content Locality use cases do resolve the respective tussles by adopting the
design-for-tussle principles in the mechanism design, without favoring any kind of
stakeholder at the expense of the others. Moreover, the TM mechanisms provide proper
interfaces and incentive schemes for stakeholders to communicate their interests (network
load, energy consumption and cloud load) and to resolve or mitigate potential tussles and
conflicts in an incentive-compatible way. For instance, in the EFS scenario, in order to
avoid an increase of inter-connection charges, the use of uNaDas within the domain of an
ISP can be restricted to serve only users within that AS, thus minimizing egress traffic
and avoiding a potential increase of associated costs. In the OFS scenario, the tussles
and inefficiencies regarding inter-domain traffic management can be resolved by means
of proper pricing rules that are mutually beneficial for ISPs and DCOs/clouds, thus
aligning the incentives of Cloud and network stakeholders and attaining a win-win outcome.
Moreover, tussle analysis of the SmartenIT ecosystem as a whole has been performed (in
D2.5 [8], Section 5), exhibiting how conflicts of interest among stakeholders in SmartenIT
scenarios and use cases can be mitigated by means of the incentive-compatible
SmartenIT TM mechanisms, in accordance with the project's theoretical investigations and
analysis of models. This is particularly interesting for the SmartenIT synergetic OFS-EFS
solutions, where spillover effects, and thus new tussles, are likely. These can possibly be
mitigated by means of careful parameterization of the scope and operations of each
SmartenIT mechanism constituting the synergy, under appropriate business models that
allow fair competition throughout the respective value chain segments.
Overall, the tussle analysis performed exhibits the effectiveness and the high potential for
adoption of the SmartenIT mechanisms, as well as the close relation among SmartenIT
theoretical investigations, models, scenarios, use cases, synergies, mechanisms and
business models, and how closely correlated these are.


Technology and emerging business scope


Despite the fact that the project consciously focused on the Best Effort Internet, the impact of
beyond-Best-Effort Internet connectivity has been briefly considered, together with the
potential applicability of SmartenIT mechanisms, particularly for cases where some quality
of service assurance can be provided even at the inter-domain level.
In particular, regarding the OFS TM mechanisms:
1. Evaluating the potential of ICC to operate with multiple delay classes and its
potential adoption over inter-domain network paths of assured quality, as well as
coupling the ICC pricing model with that of DiffServ for multiple quality layers,
constitute additional promising future research steps. These also fit nicely with the
context of the Future Internet and assured-quality interconnection in order to overcome
the inefficiencies of the Best Effort Internet and make the Internet the preferred way of
delivering content and services, as opposed to competing extranet solutions that
currently fill in this gap, such as IPX and its related business initiatives. Those
issues are also relevant to the 5G initiatives regarding the service-aware
management of traffic at the network core, also integrating the cloud and
storage resources needed for the efficient and extremely fast provisioning of
services in a ubiquitous connectivity paradigm, as well as Mobile Edge Computing.
These actually constitute on-going work in the context of the P14-16 Innovation projects
of the Network Virtualization and Software Networks strand of the 5G PPP.
2. DTM can be extended to use multiple tunnel connections, each pertaining to a
different service for the same source-destination pair. The DTM rationale could be
employed to optimize the usage of these tunnels, while the tunnel features could
result in differentiated performance plus bypassing the single BGP source-destination
route. Clearly, those extensions could also be transferred to DTM++,
which is the superposition of ICC and DTM.
5G is considered a technology that enables machine-to-machine (M2M), human-to-human
(H2H) and human-to-machine (H2M) communication without limit (essentially
infinite bandwidth, infinite capacity). Many 5G services and Future Internet applications
accessed through 5G will require huge data processing in DCs/Clouds. Then, DTM and
DTM++ may help in managing traffic crossing the borders of mobile operator domains,
limiting transfer costs. In addition, as 5G networks will rely on SDN technology, DTM can be
used for managing traffic transfer in different delivery regions of the mobile network where
more capacity is present. DTM uses only very simple features of SDN technology; thus,
one of the possible extensions is the usage of MPLS tunnels managed via SDN. The whole
physical network infrastructure can be shared by a few virtual mobile operators. DTM can
also be employed in 5G, where virtualization will lead to the conception and deployment of
network slices (virtual 5G networks, each managed by a single virtual operator). DTM
can offer virtual DA routers, which can be deployed in separate slices belonging to
different virtual operators. Given that the 5G architecture is still undefined, it is impossible
to derive a solution yet, but the applicability of these SmartenIT mechanisms appears
promising and remains to be further investigated in the future.
7.1.2 QoE-awareness
One of the project's main objectives was to estimate the QoE that overlay applications
deliver to the end user and to use this metric as an important parameter to optimize for.
The QoE metric is based on objectively measurable parameters within the network. The
throughput received by the application was used as an objectively measurable parameter to
assess the QoE of video streaming, the application most relevant for SmartenIT. This
parameter was mapped to QoE by considering the buffer level of the video player to
assess whether a smooth playback of the stream is expected. Thus, the QoE of the different
mechanisms developed in SmartenIT was estimated and the application parameters were
optimized.
In the RB-HORST experiments, a simple QoE model was used. The model was based on
the measured download times of the videos played back on client devices. The large-scale
study (EFS#2) showed that in 80% of the cases the download time of a video was smaller
than its playback time, which means that the QoE is good, i.e., no stalling should occur.
However, that model might be too simplistic; this can be corrected with a safety
margin. With a margin of 20%, meaning a video has to be downloaded in 80% of its
playback time, the QoE is good in 75% of the measured cases. The bad cases can be
explained by a weak WiFi signal or similar issues, since the caching experiment (EFS#1)
has shown that RB-HORST can handle a higher load (8 clients) than occurred in the study.
Finally, it can be said that RB-HORST can improve QoE if cached content is watched, and
it does not impede QoE if content has to be fetched from the CDN.
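The stalling criterion and safety margin described above can be expressed compactly; the following Python sketch is a minimal restatement of that rule (the function name is ours, not part of the prototype).

    def qoe_is_good(download_time_s, playback_time_s, safety_margin=0.2):
        # Playback is assumed stall-free if the video downloads faster than it
        # plays back; the margin requires the download to finish within
        # (1 - margin) of the playback time to absorb throughput fluctuations.
        return download_time_s <= (1.0 - safety_margin) * playback_time_s

    # With a 20% margin, a 300 s video must be fully downloaded within 240 s:
    print(qoe_is_good(download_time_s=230, playback_time_s=300))  # True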
Both RB-HORST++ and SEConD require the support of higher-tier caches if the
estimated throughput for the video session would not be sufficient for a quality level
acceptable to the end user. A trade-off between the ISP cache contribution and the QoE of
video sessions could be identified and evaluated. In RB-HORST++, the QoE of video
sessions was evaluated based on the throughput of the available mobile access link. A
QoE-aware policy which selects the available mobile access link based on its throughput
could optimize the QoE for mobile video streaming when offloading to WiFi access points.
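A QoE-aware link-selection policy of this kind can be sketched as follows; the throughput figures and the greedy rule are illustrative assumptions, not the evaluated RB-HORST++ policy.

    # Measured throughput in Mbit/s per candidate access link (assumed values).
    candidate_links = {"cellular": 6.0, "wifi_ap_1": 3.5, "wifi_ap_2": 18.0}

    def select_link(links, video_bitrate_mbps):
        # Offload to a WiFi link only if its measured throughput sustains the
        # video bitrate; otherwise stay on cellular so QoE is not degraded.
        wifi = {name: tput for name, tput in links.items()
                if name.startswith("wifi") and tput >= video_bitrate_mbps}
        if wifi:
            return max(wifi, key=wifi.get)  # best sustaining WiFi link
        return "cellular"

    print(select_link(candidate_links, video_bitrate_mbps=4.0))  # wifi_ap_2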
Besides MUCAPS, a QoE measurement and analysis tool called VITALU was updated for
the SmartenIT use cases. VITALU measures a set of QoE-specific metric values at
devices receiving video files and derives a Video Quality Score (VQS) built in
a similar way to a Mean Opinion Score. The VQS is a function of metrics such as Media Bit
Rate, Start Time, and the number and mean Duration of Freezes. VITALU does the QoE
measurements and analysis on the fly: as the video file arrives on the user device, packet
capture files are extracted by tools such as Wireshark or tcpdump and analyzed. A
VITALU flow analysis thus lasts between 1 and 2 times the duration of the video, which allows
producing almost direct user QoE feedback to applications. This way, applications and
ISPs willing to use MUCAPS have a direct view of its impact and can, if necessary, adjust
their policy in terms of the routing costs and application path bandwidth performance
information they announce to applications. VITALU is in a pre-product stage. It has been
customized within the SmartenIT project to support measurements done on mobile phones
and further analysis functions, candidates for the QoE analysis entity pictured in the
SmartenIT architecture. VITALU was used to quantify the impact of the MUCAPS
mechanism on video streaming QoE.
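Since the exact VITALU formula is not reproduced in this document, the following Python sketch only illustrates the shape of a VQS-style score on a MOS-like 1-5 scale: it grows with the Media Bit Rate and is penalized by the Start Time and by freezes. All weights are assumptions chosen purely for illustration.

    def vqs(media_bitrate_kbps, start_time_s, n_freezes, mean_freeze_s):
        # Illustrative only: bitrate gain, start-up delay penalty and freeze
        # penalty, clamped to the MOS-like range [1, 5].
        score = 1.0 + 4.0 * min(media_bitrate_kbps / 4000.0, 1.0)
        score -= 0.3 * start_time_s
        score -= 0.5 * n_freezes * mean_freeze_s
        return max(1.0, min(5.0, score))

    print(vqs(media_bitrate_kbps=2500, start_time_s=1.2,
              n_freezes=1, mean_freeze_s=0.5))  # about 2.89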
7.1.3 Social awareness
The main objective addressing social awareness, as defined in the description of work, was
to analyze existing measurement data in terms of social network structure and information
diffusion, and further to exploit meta-information for efficient traffic management in order to
save inter-domain traffic and to reduce the operating costs for ISPs.

To this end, available OSN datasets were collected and a Facebook dataset was analyzed
to identify important parameters for predicting popularity [20]. In order to optimize the
prefetching accuracy of content, a regression model was developed. This model was used
in SmartenIT mechanisms like RB-HORST to exploit the meta-information provided in social
networks. To use the services provided by RB-HORST, a user has to connect with
Facebook and enter her or his credentials. Thus, meta-information on the user's interests
and friends is acquired. This meta-information is used to improve content delivery by
prefetching content and to enable a trusted WiFi hotspot for users that are friends on
Facebook. In a large-scale study (cf. Section 5.2.2) the socially-aware concept of RB-HORST
was proven to save more than half of the traffic by caching popular content. In addition to
the social prediction, an overlay-based prediction algorithm was investigated and
implemented. It relies on the distance between uNaDas, in terms of AS hops, and aims
to minimize the inter-domain traffic. This exploits the fact that friends in social
networks who are potentially interested in the same content are in close proximity with high
probability.
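The overlay-based source selection just described can be summarized in a few lines of Python; the uNaDa names and hop counts below are illustrative.

    # AS-hop distance to each uNaDa known to hold the requested content.
    unadas_with_content = {"unada_a": 0, "unada_b": 2, "unada_c": 5}

    def closest_source(candidates):
        # Prefer the uNaDa with the smallest AS-hop distance, so prefetching
        # stays as local as possible and inter-domain traffic is minimized.
        return min(candidates, key=candidates.get)

    print(closest_source(unadas_with_content))  # unada_a: same AS (0 hops)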
In SEConD, the meta-information provided in OSNs is exploited for category-based
prefetching. Content is prefetched if a certain threshold is exceeded; this threshold is
based on the percentage of videos watched on the media channel and on the viewer's
interests. The evaluation shows that, in the considered scenario, a reduction of the
total inter-AS traffic of 87% is achieved on average, compared to the inter-AS traffic
created by the traditional client-server architecture. The contribution of the origin server
where the video is hosted is reduced by 88%. The SEConD mechanism enables QoE-aware
content delivery for video streaming utilizing available resources in end-user
networks.
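The category-based prefetching rule can be sketched as follows; the 0.5 threshold and the function signature are assumed values for illustration, not the parameters used in the evaluation.

    def should_prefetch(videos_watched, videos_on_channel,
                        matches_viewer_interests, threshold=0.5):
        # Prefetch content from a media channel once the fraction of its
        # videos already watched exceeds the threshold and the channel
        # matches the viewer's interests.
        watched_fraction = videos_watched / max(videos_on_channel, 1)
        return matches_viewer_interests and watched_fraction > threshold

    print(should_prefetch(7, 10, matches_viewer_interests=True))  # True
    print(should_prefetch(3, 10, matches_viewer_interests=True))  # False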
7.1.4 Energy Efficiency
Energy efficiency, as one of the main objectives within SmartenIT, was addressed in the
MONA approach by analyzing the power consumption of the full deployment, which was
constructed with energy efficiency in mind.
From the measurements of MONA with respect to the RB-HORST deployment, it was
derived that the power consumption of end-user devices using caching and WiFi
offloading can increase, in particular for highly interactive applications requiring only
little traffic. Simulations have shown that on-demand WiFi offloading results in slightly
higher energy consumption on the mobile devices, but this effect can be mitigated by
caching the content on the local uNaDas, thus providing improved response times and
overall throughput compared to the on-demand case, while saving energy.
By offloading the cellular network, the load within the network is equalized. By reducing
traffic peaks due to content availability on the uNaDas, the currently installed infrastructure
may be used for a longer time, making it possible to serve a larger number of users with
the currently deployed hardware. This reduces the energy cost per user, which would
otherwise increase in the cellular network due to increasing demand.
Caching video content on the uNaDas and re-distributing it within the ISP's network leads
to similar effects on the traffic generated in the Internet backbone as in the cellular case.
An equalized traffic distribution and reduced peak demands allow deferring infrastructure
upgrades, as peaks in traffic demand are smoothed out. Hence, satisfying the demands of a
larger number of users with the currently deployed hardware becomes possible.

The content providers and CDNs also benefit in terms of energy efficiency from the
generally reduced load and in particular the reduced traffic peaks, as these networks need to
be built to fulfill the peak demand. By exchanging content within the ISP, the currently
deployed hardware may be used longer.
In the OFS case, the energy consumption, as discussed in D4.1 [13], is mainly static. Still,
energy savings can be achieved by re-configuring interfaces for a lower data rate, or by
sending traffic along the paths with the most energy-efficient intermediate hops. The ICC
Cloud layer, though not implemented and integrated in the DTM++ prototype within the
project lifespan, also considers energy efficiency as a key decision factor when selecting the
optimal destination for inter-cloud communication use cases where multiple candidate
destinations for the respective flows exist, e.g., when performing a datacenter backup.
This is an attractive feature that also contributes to the energy awareness and efficiency of the
SmartenIT traffic management mechanisms.

7.2 Assessment of SmartenIT outcomes from an industrial point of view
The SmartenIT project has produced several prototypes, two of which have been fully
integrated into the SmartenIT architecture, i.e., the DTM++ and RB-HORST prototypes.
From an industrial point of view, and especially from the perspective of a
telecommunications systems vendor and integrator, the SmartenIT architecture and
prototypes encompass two very important entities, the SBox and the uNaDa.
The SBox is considered to be the controlling entity of the network devices, enforcing the
decisions taken by the DTM++ mechanism. This layering of responsibilities and
functionality follows the general approach of Network Management Systems (NMS), as
well as the more modern approach in networking, Software Defined Networking (SDN).
In this context, the SmartenIT prototype has produced several components that
can be, or already are, adapted to the aforementioned approaches. For example, DTM++
already includes an SDN controller component (using the Floodlight technology).
Moreover, part of the functionality provided by DTM++ could be hosted (as a standalone
module) in modern SDN controllers (like OpenDaylight). It is thus obvious that DTM++ is
adequately compatible with modern NMS and (part of) its functionality can be re-used and
extended to match the needs of modern providers and network equipment vendors.
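As an illustration of this controller-based layering, the Python sketch below pushes a static flow entry to a Floodlight controller over its REST interface, similarly to how the SBox enforces DTM++ decisions. The endpoint path and field names vary between Floodlight releases, and the addresses, switch DPID and flow values are assumptions, not taken from the prototype configuration.

    import json
    import requests  # third-party HTTP client

    CONTROLLER = "http://192.0.2.10:8080"        # assumed controller address
    ENTRY = {
        "switch": "00:00:00:00:00:00:00:01",     # DPID of the edge switch
        "name": "dtm-manageable-1",
        "priority": "32768",
        "eth_type": "0x0800",
        "ipv4_dst": "198.51.100.0/24",           # manageable cloud traffic
        "active": "true",
        "actions": "output=2",                   # steer onto inter-domain link 2
    }

    resp = requests.post(CONTROLLER + "/wm/staticflowpusher/json",
                         data=json.dumps(ENTRY))
    print(resp.status_code, resp.text)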
On the other hand, the uNaDa follows the modern paradigm of edge computing that shifts
much of the functionality and services from core, centralized nodes to the logical extremes
of a network. Network optimizations with respect to the delivery of content, performed by
the uNaDa, fall under this category. The uNaDa design is also compatible with the captive
portal approach of public WiFi hotspots. Together with the social awareness, the uNaDa
can serve as the basis for any access-point-based solutions provided by an ISP. Moreover,
in an Information Centric Networking (ICN) environment, uNaDas can have a more central
role, since content identification and delivery will be inherent to the network's functionality.
Specific functionality of the uNaDa, like content prefetching based on social and
overlay criteria, can become modules of popular proxy solutions like Squid.
MUCAPS may complement the ISP functionalities that enrich the DNS-generated pool of
candidate content locations. It offloads the optimization burden from technology-agnostic
users and applications to the network. When deployed with footprints in end systems, it
can be an add-on providing ISP network awareness on top of the selection of locations
from which RB-HORST may drag content to a uNaDa. Indeed, pure topology
considerations as used by tools such as "Traceroute" can benefit from the addition of ISP
policy considerations, in exchange for the ISP providing hints on its network capabilities.
Overall, the SmartenIT integrated prototypes offer a wide variety of solutions that can be
adopted (partly or in their entirety) by vendors and solution providers, so as to meet the
modern needs of ISP subscribers and deal with the complex traffic management problems
that operators face nowadays. Being research prototypes, the SmartenIT solutions cannot
be adopted as-is to become (parts of) commercial solutions. However, the underlying
design and implementation principles allow their extension for the benefit of the involved
stakeholders.

7.3 Project objectives addressed


In the description of work of SmartenIT (Section B1.1.2.3) the project's five main objectives
were defined and then turned into a number of detailed SMART objectives (Section
B1.1.2.4). To ensure meeting these objectives, each deliverable contains a section
that reports how the relevant SMART objectives were achieved (or at least promoted) by the
work reported in the deliverable. The goal of this section is to briefly summarize how the
main objectives of the project are covered and argue that they have really been met.
The main objectives are as follows:

Objective 1
SmartenIT will lay the basis for its traffic management mechanisms for traffic
characterization by the definition of key stakeholders, which require mechanisms with an
incentive compatibility, QoE- and social-awareness, and energy efficiency for emerging
scenarios of tomorrow's applications.

Objective 2
SmartenIT will develop theory for new pricing and business models, applicable in example
use cases, combined with the design and prototyping of appropriate traffic management
mechanisms that can be combined with today's and the IETF's proposed relevant
communication protocols.

Objective 3
SmartenIT will investigate economically, QoE-, and energy-wise such models in theory by
simulations to guide the respective prototyping work in terms of a successful, viable, and
efficient architectural integration, framework and mechanisms engineering, and its
subsequent performance evaluation.

Objective 4
SmartenIT will evaluate use cases selected out of three real-world scenarios: (1) inter-cloud
communication, (2) global service mobility, and (3) exploiting social networks' information
(QoE and social awareness) by theoretical simulations and on the basis of the prototype
engineered.

Objective 5
SmartenIT will disseminate and publish project results, will support the standardization of
relevant IETF working groups, and will exploit those results with respect to the
partner-specific plans.

Objective 1
WP1 has investigated the SmartenIT environment by identifying traffic characteristics of
cloud-based applications, characterizing stakeholder relationships, classifying cloud
services and selecting applications for simulations and trials.
After identifying the key aspects of scenarios from different perspectives, the basic
scenarios were consolidated and merged to focus on two major scenarios, namely the
end-user-focused scenario (EFS) and the operator-focused scenario (OFS). The EFS
focuses on the retail market and aims to improve the QoE for end users by
considering the mobility of users, services and contents. The EFS further aims to utilize
social-awareness and to improve content delivery and placement by prefetching, caching, and
data offloading. On the other hand, the OFS focuses on the wholesale market and aims to
optimize costs and energy efficiency for operators and providers by federation of data center
and cloud operators as well as interaction between operator and ISP.
Realistic and interesting use cases were specified, and the associated parameters and metrics
were defined. Drafts for traffic management solutions were designed employing the identified
use cases.
Finally, WP2 has developed traffic management mechanisms that pursue and promote
incentive-compatibility across stakeholders, e.g., reduction of transit costs for ISPs with
simultaneous improvement of QoS for CSPs, and improvement of end-users' QoE with
simultaneous reduction of inter- and intra-domain traffic.
Objective 2
WP2 has identified eleven relevant use cases in the playfield of SmartenIT. A well-structured
template for use case description has been built to prove that the solutions
proposed in SmartenIT are applicable to solving real and concrete problems of incentive-based
traffic management for overlay networks and cloud-based applications, driven by
social awareness, QoE awareness and energy efficiency. Moreover, WP2 studied
business models extensively and performed a mapping of use cases and mechanisms to
specific business models. Tussle analysis has been employed for the OFS and EFS
frameworks, so as to demonstrate the incentive compatibility achieved by the SmartenIT
mechanisms and their synergies.
A pricing classification framework has been introduced that details where the different
layers and granularities of pricing mechanisms related to SmartenIT operate and how they
work together. A concrete pricing scheme for ICC was proposed, which meets the goals of
simplicity, incentive compatibility and design-for-tussle. Furthermore, a model of Cloud
federation including the pricing of users and the pricing of each provider by others has been
studied, considering both strong and weak federations, while profit
sharing in the context of the ICC mechanism has also been introduced. Finally, WP2 has designed
TM mechanisms that address the main aspects of the OFS and the EFS, namely the main
design pillars of SmartenIT: incentive-compatibility, inter-data center communication,
social awareness, and energy efficiency.
WP3, on the other hand, has transformed the most prominent mechanisms identified in
WP2 into research prototypes. These prototypes have been implemented following
industry standards and have adopted well-accepted communication protocols, like
OpenFlow, REST, etc. Among the implemented prototypes, DTM/DTM++ and RB-HORST
have closely followed the architectural design established in the project and hence are
integrated into the SmartenIT architecture. MONA and MUCAPS have followed a
standalone implementation, but an initial mapping to the SmartenIT architecture has
also been provided.
Objective 3
SmartenIT has investigated extensively the economic aspects of inter-cloud communication
and has formulated a pricing framework, while the pricing aspects of a cloud federation
have also been explicitly described. QoE has been considered in all use cases of the EFS as a
major KPI and has been indirectly addressed by the RB-HORST, SEConD, MONA and
RBH++ mechanisms, and directly by MUCAPS. Furthermore, theoretical models have
been developed and employed by TM mechanisms so as to quantify the energy
consumption of mobile devices in the EFS.
Theory developed in WP2 has been evaluated by simulations to guide the respective
prototyping work in terms of a successful, viable, and efficient architectural integration,
framework and mechanisms engineering in WP3, and its subsequent performance
evaluation in WP4.
During design (WP2) and development (WP3) of the prototypes, work focused on the QoE
of the end user and on reducing the energy consumption of the overall system. The QoE is
addressed by moving content as close to the end user as possible, while the energy
consumption was addressed by using the most energy-efficient data transfer mode
providing sufficient QoE to the end user.
The architecture designed and the prototypes developed within WP3 have been based
on the requirements and use cases defined in WP1 and WP2. The software engineering
methods followed during the implementation phase led to prototypes that are very close to
industry standards. The generated prototypes can be easily adapted to fit established
systems as well as extended to match specific requirements.
Objective 4
SmartenIT considered two scenarios: OFS and EFS. The OFS covers inter-cloud
communication and even extends this scenario, because it considers the relation between
underlay and overlay networks. The EFS is a compound of the last two scenarios
mentioned in Objective 4, namely global mobility and the exploitation of social network
information.
The prototypical management mechanisms DTM/DTM++ and ICC addressing the OFS, and
RB-HORST, MONA and MUCAPS addressing the EFS were inspected in various ways. The
simulations of the OFS mechanisms (DTM and ICC) gave promising results, and the
performance indicators showed the validity of these mechanisms' concepts. All mechanisms
were validated in the testbed environment. Multiple testbeds were designed, and the methods
for testing the software products evolved as well: a few software-based testbeds were
constructed in different locations at partners' premises. Finally, a hardware-based testbed
(which uses routers produced by leading vendors) was prepared. The tests in the hardware
testbed show that the SmartenIT system (and particularly DTM++) can work in real network
environments; for example, operator-class routers were used. For performance tests,
we designed and implemented dedicated software for traffic generation, online
visualization and measurement. For ICC we built a separate hardware testbed dedicated
to functional tests; the same was done for DTM++.
Almost the same approach was applied to RB-HORST. Theoretical analysis and some
simulations were done first for HORST and RB-Tracker, which constitute RB-HORST. The
obtained results motivated the SmartenIT consortium to choose both mechanisms for
implementation. The authors of these mechanisms decided to put effort into specifying and
implementing a single solution offering the functionalities of both. RB-HORST
and MONA were tested in a real network where a group of end users participated in
experiments. These EFS performance experiments also proved the maturity of their concepts.
The measured performance indicators in all experiments (DTM, RB-HORST, MONA,
MUCAPS) showed the operational quality of these mechanisms and that they are really
beneficial for stakeholders. The experiments also showed that QoE was improved and that
the social awareness used by RB-HORST properly supports the operation of this mechanism.
The MUCAPS prototype was tested with end users connected with various technologies
and servers physically deployed both in the test lab and in the Internet. Its benefits for users
were quantified in terms of an actual Video Quality Score and associated QoE-specific
metrics measured via the fully-fledged pre-industrial QoE analysis tool VITALU. ISP
benefits were evaluated in terms of the augmentation of a multi-objective utility function of
the routing and bandwidth performance. This utility is computed based on abstracted
routing costs and path bandwidth performance assumed to be entered by the ISP into an
ALTO server. The experiments illustrate the importance of the application type as well as
the user connection type. For example, the results show that maximizing user bandwidth is
not systematically the best option when the end-user capabilities cannot sustain it.
Objective 5
Since dissemination, exploitation, external liaisons and standardization have dedicated
deliverables (D5.3 [17] and D5.4 [18]) that are due by the end of the project, those
aspects are not covered here. Project activities and assessment in those areas are
covered in the respective deliverables.


8 Summary and conclusions


This deliverable concludes the activities of the SmartenIT WP4 work package, with the main
aim to present the results of the validation phase and the subsequent overall performance
assessment of the SmartenIT prototypes and, in general, of the SmartenIT mechanisms. The
main outcomes of this deliverable can be summarized as follows:
• Presentation of results gathered during the execution of suitable experiments,
• Performance evaluation and overall assessment of SmartenIT mechanisms,
• Analysis of benefits for stakeholders from a business perspective,
• Identification of potential further development of the SmartenIT prototype as well as of future work.

A proper time schedule and the use of multiple replicas of the SmartenIT testbed deployed at
partners' premises have led to a successful completion of the experiments. It is
worth observing that, in fact, a broader set of experiments than initially planned has
been executed. The extra experiments were accomplished to fully assess the
SmartenIT performance under a broader range of identified configuration parameters
(DTM case), or to provide an additional testbed for the live deployment of a SmartenIT
standalone implementation (the MUCAPS traffic management mechanism), whose deployment
had not been initially considered but was decided at a later stage.
The overall experiment plan has been carefully considered in order to cover to the full extent
all SmartenIT technical solutions, thus providing:
• Functional tests for the SmartenIT prototype,
• Performance tests on large time scales for the SmartenIT prototype,
• Distributed and large-scale tests for the SmartenIT prototype,
• Simulation tests for SmartenIT solutions not implemented in the prototype.
The SmartenIT project has produced a large variety of assets in the form of traffic management
mechanisms, which span the two identified reference scenarios, namely OFS and EFS,
providing a rich landscape of solutions able to be deployed over a wide range of practical
real-world scenarios. The traffic management mechanisms identified by SmartenIT provide
concrete and measurable enhancements to the actual cloud-Internet landscape.
Solutions for the OFS scenario (DTM/DTM++)
The main SmartenIT solution for the OFS scenario, namely DTM/DTM++, has been deeply
evaluated by means of well-focused experiments conducted over the implemented prototype,
confirming that the prototype is stable, achieves good performance and is able to withstand
large-scale, intensive, real-world traffic patterns. Moreover, the implemented DTM/DTM++
prototype is able to successfully manage traffic under different combinations of background
traffic levels and traffic profiles.
DTM/DTM++ thus proves to be an excellent solution to be deployed over an ISP domain for the
following key reasons:

• It is based on incentivized collaboration among different administrative domains and
provides a technical framework that allows traffic management between Cloud and ISP
in a win-win situation.
• It resolves (or at least mitigates) the problem of information asymmetry by providing
(partial) information exchange between the cloud layer and the network layer, thus providing
cross-layer-aware traffic management.
• It boosts the Cloud Federation business models, thus demonstrating that the SmartenIT
solutions for the OFS scenario are well deployable not only in a scenario dominated by
large-scale Cloud and Internet Service Providers, but also in a market segment
composed of smaller service providers that team together to widen their geographical
footprint.
• It optimizes inter-cloud traffic by providing an excellent tool to optimally route the
selected inter-domain traffic.
• As an effect of the previous point, this SmartenIT solution also achieves (at least
indirectly) energy efficiency in the ISP's domain by realizing a fairer infrastructure
utilization, which is appropriate for serving traffic exchanged between Clouds in an
energy-efficient way.
• The solution depicted so far is, on the one side, fully applicable over typical Internet
best-effort connections and, on the other side, by having different management schemes for
delay-sensitive and delay-tolerant traffic, realizes a sort of QoS-driven model where
traffic is managed according to its QoS classification.
• It has a high deployability potential, being a solution that can be deployed with several
combined features and applied to different scenarios, from public to hybrid cloud
deployments.
• SmartenIT OFS traffic management has proved to be a non-disruptive solution,
meaning that it can be deployed over provider domains without changing the whole
infrastructure. The SmartenIT artifacts have in fact been carefully designed to be placed
only at relevant interface points among the different providers acting in this scenario.

Solutions for the EFS scenario (RB-HORST, SEConD)

The SmartenIT solutions for the EFS scenario have been carefully evaluated in a real-world
setting by means of a variety of experiments, ranging from simulation tests to large-scale
tests with the relevant artifacts deployed at partner premises. The SmartenIT prototype for
EFS-related artifacts has proved to be a stable solution, to achieve measurable and in fact good
performance, and to be deployable even in a real-world large-scale deployment over a
wide geographical area with an accordingly high number of end users.
The selected traffic management mechanisms thus prove to be an excellent solution for the
EFS scenario for the following key reasons:

• They are based on incentivized collaboration among different administrative domains and
provide a technical framework that allows traffic management between the Cloud provider
and the end user in a win-win situation.
• They entail incentivized collaboration among end users based on a novel trust schema
that leverages social-awareness information.
• They achieve energy efficiency for end-user mobile devices.
• They can be easily deployed over end-user and ISP premises with negligible impact on
existing infrastructures.
• They provide mobile network offloading in favor of WiFi connectivity.
• They achieve a significant enhancement of the Quality of Experience as perceived by the
end user.
• They provide a novel content caching and prefetching schema for content downloading,
thus indirectly saving inter-domain traffic.
• They indirectly achieve energy efficiency in the provider's domain by keeping the traffic
generated by content downloading as local as possible.

SmartenIT evolutions and next steps


The SmartenIT solutions identified so far provide a stable technical framework with
measurable performance that ensures concrete benefits for the involved stakeholders,
high deployability over the ISP infrastructure and the end-user premises, high
configurability in terms of functionalities and capabilities, and good positioning in the actual
cloud landscape, thus fulfilling all major objectives of the project.
Moreover, the SmartenIT artifacts are compatible with recent trends of the cloud era (such as
Cloud Federation), provide innovation in terms of traffic management schemas, achieve
traffic and content management with proper attention to energy efficiency, provide cross-layer
and cross-domain interaction to achieve superior traffic management, enable a social-awareness-empowered
framework that provides a novel interfacing schema between end users
and service providers, provide concrete solutions that can potentially be evolved towards the
IoT paradigm (uNaDas deployed at end-user premises), and finally constitute a framework that,
when properly adapted, can be employed under the next generation (5G) of mobile
networks.
The natural evolution of the SmartenIT solutions showcased so far is a further development
that will unify all traffic management mechanisms in a single solution, able to provide
an end-to-end solution with tangible benefits for all stakeholders, solving or at least
mitigating the inefficiencies identified so far in the actual cloud/Internet landscape.


9 SMART Objectives
Through this document, four SmartenIT SMART objectives defined in Section B1.1.2.4 of
the SmartenIT Description of Work (DoW, [1]) have been partially addressed. Namely, one
overall (O4, see Table 42) and three practical (O1.1, O3.4, and O4.3, see Table 43)
SMART objectives were addressed.
The overall Objective 4 is defined in the DoW as follows:
Objective 4: SmartenIT will evaluate use cases selected out of three real-world scenarios:
(1) inter-cloud communication, (2) global service mobility, and (3) exploiting
social networks' information (QoE and social awareness) by theoretical
simulations and on the basis of the prototype engineered.
This document gives an overview of SmartenIT in the current Internet (Section 3).
Moreover, a description of the assessment criteria and methodology is provided (Section
4), and the testbed experiment results and assessments achieved during the SmartenIT
validation activities are documented (Section 5). Furthermore, the assessment of the main
project outcomes is given (Section 6), and an overall assessment of all SmartenIT solutions and
mechanisms is provided (Section 7).
These results constitute the outcome of tasks T4.3 and T4.4, in which the SmartenIT
prototype was evaluated and assessed in experiments. As such, deliverable D4.3
represents the final step in the evaluation of SmartenIT use cases and real-world
scenarios as part of Objective 4, which started with deliverable D4.1 (Testbed Set-up and
Configuration, end of project month 24) and was extended in deliverable D4.2 (Experiment
Definition and Set-up, end of project month 30).
Table 42: Overall SmartenIT SMART objective addressed. (Source: [1])

Objective No. | Specific | Measurable (Deliverable Number) | Achievable | Relevant | Timely (Milestone Number)
O4 | Evaluation of use cases | D4.1, D4.2, D4.3 | Implementation, evaluation | Complex | MS4.3

Table 43: Practical SmartenIT SMART objectives addressed. (Source: [11])

Objective ID | Specific | Measurable (Metric) | Achievable | Relevant | Timely (Project Month)
O1.1 | How to align real ISP networks, while optimizing overlay service/cloud requirements? | Savings in inter-domain traffic (in Mbit/s) and possible energy savings due to optimized management mechanisms | Design, simulation; T1.4, T2.1, T2.2, T2.5, T4.4 | Major output of relevance for providers and in turn users | M36
O3.4 | How to monitor energy efficiency and take appropriate coordinated actions? | Number of options identified to monitor energy consumption on networking elements and end users' mobile devices, investigation on which options perform best (yes/no) | Design, simulation, prototyping; T1.3, T2.3, T4.1, T4.2, T4.4 | Highly relevant output of relevance for users | M36
O4.3 | Which assessment schemes will have to be run by the providers before successful deployment of the developed services in their network? | Number of identified/considered schemes, number of feasibility analyses for the identified schemes | Design, simulation, prototyping; T4.4 | Extremely relevant output of relevance for providers | M36

This deliverable contributes to answering three specific practical questions:

• Objective O1.1: How to align real ISP networks, while optimizing overlay
service/cloud requirements?
The evaluation of DTM/DTM++ (cf. Section 5.1) shows savings in inter-domain traffic due to
optimized management. The DTM mechanism is based on the measurement of the network
traffic. The mechanism is cloud-agnostic: overlay services and cloud management decisions
do not spoil DTM operation, and vice versa. The mechanism separates traffic generated by
clouds (manageable traffic) from other traffic (background traffic) traversing inter-domain
links. The manageable traffic is switched between inter-domain links depending on the total
traffic conditions on the links; the more manageable traffic traverses the inter-domain links,
the more efficiently DTM may reduce costs (see the link-selection sketch after this list). The
management decisions are undertaken by the ISP, taking into account real-time measurements
and historical data from the previous billing period. DTM++ is an extension of DTM
(incorporating parts of the ICC mechanism) which may increase DTM's efficiency: during busy
hours, the transfer of delay-tolerant data from clouds may be shifted in time until link occupancy
decreases. This mechanism requires clouds to mark delay-tolerant traffic as distinct from
delay-sensitive data. This does not degrade cloud operation, because it is the cloud, not the
ISP, that decides which traffic qualifies as delay-tolerant. Generally, both pure DTM and its
extension DTM++ are cloud-friendly. The EFS experiments (cf. Sections 5.2.1 and 5.2.2)
proved that caching and prefetching improve service delivery to End Users: they perceive
better transfer quality and lower latency. Caching and prefetching confine substantial traffic
internally to the ISP domain; as a result, an ISP observes less traffic on inter-domain links.
Both the ISP and the End Users are in a win-win position.
• Objective O3.4: How to monitor energy efficiency and take appropriate coordinated
actions?
The MONA experiment (EFS#3, Section 5.2.4), which includes the Energy Efficiency
Framework (EEF), analysed the energy consumption of the overall RB-HORST mechanism.
A model-based measurement approach was chosen, based on which intermediate metrics
were collected on end-user devices, allowing to determine the power consumption of the
devices using a calibrated power model (see the energy accounting sketch after this list).
Furthermore, the energy consumption of the smartphones was compared for different
connectivity options. These include streaming via 3G/LTE, on-demand streaming/offloading
using the RB-HORST access point, and streaming content from the local cache. The network
technology is automatically selected by the end-user devices based on availability. For the
energy modeling of the network connections on the mobile device, the power model of a
widely used device (Nexus 5) was selected. Finally, the energy assessment of RB-HORST
was conducted using dedicated experiments assessing the network performance (i.e., RTT,
throughput) over different connection options (cf. Section 5.2.3).
• Objective O4.3: Which assessment schemes will have to be run by the providers
before successful deployment of the developed services in their network?
After deploying DTM for testing, an operator may perform the tests described in
Section 5.1. Depending on the charging scheme, an ISP may assess whether the mechanism
offers a satisfactory transfer cost reduction. As mentioned, this strongly depends on the cost
functions used on inter-domain links, but the ISP is always in a no-lose position. An operator
needs to observe the system behavior over a few billing periods, because the DTM system
learns by traffic measurement what the optimization reference parameters are, namely the
reference vector. This parameter is used in the consecutive billing periods. DTM++ requires
observation of the system behavior during busy hours and the hours after. The experiment
related to DTM++ (cf. Section 5.1.3) is an example of what parameters should be observed
in order to assess the efficiency of DTM++ and can be used by a particular operator. An
operator possesses records of how much traffic from DCs/clouds traverses its inter-domain
links, so it can compare the traffic with and without DTM++.
Also in the case of RB-HORST, after mechanism deployment, an ISP may perform tests in
which it observes the traffic on its inter-domain links, in particular the amount of traffic
traversing them. Special attention should be paid to traffic directed to networks where
RB-HORST has been deployed. On the long time scale, measurements should show a
decrease of the traffic related to communication between RB-HORST users and the outside
world (outside the operator domain).
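Two minimal Python sketches restate the ideas referenced in the answers above; all numbers are placeholders for illustration, not measured or calibrated values. The first sketch illustrates the DTM link-selection idea for O1.1: manageable (cloud) traffic is steered onto the inter-domain link whose measured total traffic is furthest below its target from the reference vector (the greedy rule shown is a simplification, not the DTM optimization algorithm).

    reference_vector = {"link_1": 400.0, "link_2": 250.0}  # Mbit/s targets
    measured_total   = {"link_1": 380.0, "link_2": 180.0}  # current totals

    def pick_link_for_manageable_traffic():
        # Choose the link with the largest remaining headroom below its target.
        headroom = {link: reference_vector[link] - measured_total[link]
                    for link in reference_vector}
        return max(headroom, key=headroom.get)

    print(pick_link_for_manageable_traffic())  # link_2 (70 vs. 20 Mbit/s headroom)

The second sketch illustrates the model-based energy accounting for O3.4: the time spent in each network and device state is combined with a calibrated per-state power model to estimate the device energy (the power values below are placeholders, not the calibrated Nexus 5 model).

    POWER_MODEL_W = {"screen_on": 1.0, "wifi_rx": 1.1, "lte_rx": 2.4}

    def energy_joules(time_per_state_s):
        # Energy = sum over states of (per-state power x time in that state).
        return sum(POWER_MODEL_W[state] * t
                   for state, t in time_per_state_s.items())

    # 60 s video: streamed over LTE vs. fetched from a local uNaDa over WiFi.
    print(energy_joules({"screen_on": 60, "lte_rx": 60}))   # 204.0 J
    print(energy_joules({"screen_on": 60, "wifi_rx": 25}))  # 87.5 J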
According to the SMART objectives set within the SmartenIT DoW, those of relevance
for D4.3 and the respective tasks within WP4, i.e., T4.3 and T4.4, the targeted
objectives have been met.


10 References
[1] The SmartenIT project: Grant Agreement for STREP: Annex I "Description of Work (DoW)"; 2012.
[2] The SmartenIT project: Deliverable D1.1: Report on Stakeholders Characterization and Traffic Characteristics; April 2013.
[3] The SmartenIT project: Deliverable D1.2: Report on Cloud Service Classifications and Scenarios; October 2013.
[4] The SmartenIT project: Deliverable D2.1: Report on Overview of Overlay Traffic Management Solutions; April 2013.
[5] The SmartenIT project: Deliverable D2.2: Report on Definitions of Traffic Management Mechanisms and Initial Evaluation Results; October 2013.
[6] The SmartenIT project: Deliverable D2.3: Report on Definition of Use-Cases and Parameters (Initial Version); April 2014.
[7] The SmartenIT project: Deliverable D2.4: Report on Final Specifications of Traffic Management Mechanisms and Evaluation Results; October 2014.
[8] The SmartenIT project: Deliverable D2.5: Report on Definition of Use-Cases and Parameters (Final Version); July 2015.
[9] The SmartenIT project: Deliverable D3.1: Report on Initial System Architecture; April 2013.
[10] The SmartenIT project: Deliverable D3.2: Technologies, Implementation Framework and Initial Prototype (Initial Version); April 2014.
[11] The SmartenIT project: Deliverable D3.3: Final Report on System Architecture; October 2014.
[12] The SmartenIT project: Deliverable D3.4: Prototype Implementation, Validation and Selected Application (Final Version); April 2015.
[13] The SmartenIT project: Deliverable D4.1: Testbed Set-up and Configuration; October 2014.
[14] The SmartenIT project: Deliverable D4.2: Experiment Definition and Set-up; April 2015.
[15] The SmartenIT project: Deliverable D5.1: Dissemination, External Liaisons Plan, and Initial Exploitation Plan; January 2013.
[16] The SmartenIT project: Deliverable D5.2: Standardization Survey; October 2013.
[17] The SmartenIT project: Deliverable D5.3: Exploitation Plan (Final); October 2015.
[18] The SmartenIT project: Deliverable D5.4: Final Report on Liaisons, Standardizations, and Dissemination; October 2015.
[19] V. Valancius, N. Laoutaris, L. Massoulié, C. Diot, and P. Rodriguez: Greening the Internet with Nano Data Centers. In: Proceedings of the 5th International Conference on Emerging Networking Experiments and Technologies (CoNEXT '09), ACM, New York, NY, USA, 2009, pp. 37-48.
[20] A. Lareida, G. Petropoulos, V. Burger, M. Seufert, S. Soursos, B. Stiller: Augmenting Home Routers for Socially-Aware Traffic Management. To be published at the 40th Annual IEEE Conference on Local Computer Networks (LCN), Clearwater Beach, FL, USA, October 2015.
[21] Partnering Orchestration, Monetisation and Ecosystem Enablement for CSPs and DSPs, Infonova, on-line: https://www.infonova.com/pdf/Infonova_R6__ODE_EEP_Oct_2013.pdf
[22] Windows Phone, Wi-Fi Sense FAQ, on-line: https://www.windowsphone.com/en-us/how-to/wp8/connectivity/wi-fi-sense-faq
[23] Bundesnetzagentur, Initiative on network quality (in German), https://www.initiative-netzqualitaet.de
[24] Cisco Systems, Visual Networking Index, Forecast and Methodology, 2014-2019; White paper series (2015): https://www.cisco.com
[25] G. Hasslinger: ISP Platforms under a Heavy Peer-to-Peer Workload. In: Peer-to-Peer Systems and Applications, Eds.: R. Steinmetz and K. Wehrle, Springer LNCS 3485 (2005), pp. 369-382.
[26] G. Hasslinger, G. Nunzi, C. Meirosu, C. Fan and F.-U. Andersen: Traffic Engineering Supported by Inherent Network Management: Analysis of Resource Efficiency and Cost Saving Potential. International Journal on Network Management (IJNM), Special Issue on Economic Traffic Management, Vol. 21 (2011), pp. 45-64.
[27] Internet Engineering Task Force (IETF), ALTO WG on application layer traffic optimization, <tools.ietf.org/wg/alto/charters>
[28] Internet Engineering Task Force (IETF), CDNI WG on CDN interconnection, <tools.ietf.org/wg/cdni/charters>
[29] Internet Engineering Task Force (IETF), LMAP WG on large scale measurement of broadband performance, <tools.ietf.org/wg/lmap/charters>
[30] Sandvine Inc., Global Internet Phenomena, Asia-Pacific & Europe, White paper report (2015), <www.sandvine.com>
[31] A.-J. Su, D.R. Choffnes, A. Kuzmanovic and F.E. Bustamante: Drafting behind Akamai. IEEE/ACM Transactions on Networking 17 (2009), pp. 1752-1765.
[32] DE-CIX (statistics and traffic traces), available on-line at: https://www.de-cix.net/
[33] Cachebench at GitHub, on-line: https://github.com/pettitor/cachebench
[34] F. Kaup, P. Gottschling, and D. Hausheer: PowerPi: Measuring and Modeling the Power Consumption of the Raspberry Pi. In: LCN, 2014.
[35] F. Kaup, M. Wichtlhuber, S. Rado, and D. Hausheer: Analysis and Modeling of the Multipath-TCP Power Consumption for Constant Bitrate Streaming. Darmstadt, 2015.
[36] F. Kaup and D. Hausheer: Optimizing Energy Consumption and QoE on Mobile Devices. In: International Conference on Network Protocols (ICNP), 2013, pp. 1-3.
[37] I. Poese, B. Franck, B. Ager, G. Smaragdakis, S. Uhlig and A. Feldmann: Improving Content Delivery with PaDIS. Internet Computing, June 2012.
[38] Definition of Internet exchange point, Wikipedia, http://en.wikipedia.org/wiki/Internet_exchange_point
[39] Internet Transit Prices - Historical and Projected; White Paper; DrPeering International; on-line: http://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php
[40] draft-ietf-alto-multi-cost-01: Multi-Cost ALTO (work in progress), S. Randriamasy, W. Roome and N. Schwan, October 19, 2015, https://tools.ietf.org/html/draft-ietf-alto-multi-cost-01
[41] P. Szilagyi and V. Csaba: Network Side Lightweight and Scalable YouTube QoE Estimation. In: Communications (ICC), 2015 IEEE International Conference on, IEEE, 2015.
[42] IEEE 802.11n-2009, Amendment 5: Enhancements for Higher Throughput, IEEE-SA, 29 October 2009, doi:10.1109/IEEESTD.2009.5307322.
[43] BENOCS start-up, on-line: http://www.benocs.com
[44] A. Greenberg, et al.: The Cost of a Cloud: Research Problems in Data Center Networks. ACM SIGCOMM Computer Communication Review 39.1 (2008), pp. 68-73.
[45] G. Akerlof: The Market for "Lemons": Quality Uncertainty and the Market Mechanism. The Quarterly Journal of Economics (1970), pp. 488-500.
[46] EmanicsLab Testbed, on-line: http://www.emanicslab.org, accessed 2015-10-28.
[47] DFG Personnel Rates 2015, http://www.dfg.de/formulare/60_12/60_12_en.pdf
[48] Open vSwitch, on-line: http://openvswitch.org/


11 Abbreviations
3G: Third Generation
5G PPP: 5G Infrastructure Public Private Partnership
AEP: Application EndPoint
ADSL: Asymmetric Digital Subscriber Line
AGH: Akademia Gorniczo-Hutnicza im. Stanislawa Staszica w Krakowie
ALBLF: Alcatel Lucent Bell Labs, France
ALTO: Application-Layer Traffic Optimization
AP: Access Point
API: Application Programming Interface
AS: Autonomous System
AUEB: Athens University of Economics and Business - Research Center in Greece
BGP: Border Gateway Protocol
BWS: BandWidth Score
CAPEX: Capital Expenditures
CDN: Content Delivery Network
CET: Central European Time
CLU: Cross-Layer Utility
CPU: Central Processing Unit
CSP: Cloud Service Provider
DA: Data Center Attachment Point
DC: Data Center
DCO: Data Center Operator
DFrz: Duration of Freezes
DHT: Distributed Hash Table
DNS: Domain Name System
DoW: Description of Work
DRM: Digital Rights Management
DSL: Digital Subscriber Line
DTM: Dynamic Traffic Management
EEF: Energy Efficiency Measurement Framework
EEP: Ecosystem Enablement Platform
EFS: End-user-Focused Scenario
EP: End Point
FTTH: Fiber to the Home
FTTX: Fiber to the X
GPerf Gain: Global Performance Gain
GRE: Generic Routing Encapsulation
H2H: Human-to-Human
H2M: Human-to-Machine
HD: High Definition
HTTP: Hyper Text Transfer Protocol
ICC: Inter-Cloud Communication
ICN: Information Centric Networking
ICOM: Intracom S.A. Telecom Solutions
IETF: Internet Engineering Task Force
I/O: Input/Output
IP: Internet Protocol
IRT: Interoute S.P.A
ISP: Internet Service Provider
J: Joule
JSON: JavaScript Object Notation
KPI: Key Performance Indicator
LD: Low Definition
LNR: Local Name Resolution
LRU: Least Recently Used
LTE: Long Term Evolution
M2M: Machine-to-Machine
MACAO: Multi Access and Cost AltO
MARC: MACAO Request Client
MARS: MACAO Request Server
MBR: Media Bit Rate
MONA: Mobile Network Assistant
MPLS: Multiprotocol Label Switching
MRA: Multi-Resource Allocation
M-to-M: Multiple to Multiple
MUCAPS: Multi-Criteria Application Endpoint Selection
NaaS: Network as a Service
NFrz: Number of Freezes
NREN: National Research and Educational Network
NSP: Network Service Provider
NFV: Network Function Virtualization
OFS: Operator-Focused Scenario
OPEX: Operating Expenditures
OSN: Online Social Network
OTT: Over The Top
OVS: Open vSwitch
P: Power
P2P: Peer-to-Peer
PC: Personal Computer
PM: Person Month
PoP: Point-of-Presence
PSNC: Instytut Chemii Bioorganicznej PAN
QoE: Quality of Experience
QoS: Quality of Service
RAM: Random-Access Memory
RC: Routing Cost
REST: REpresentational State Transfer
RB-HORST: Replicating Balanced tracker - HOme Router Sharing based On truST
RTT: Round Trip Time
SD: Standard Definition
SLA: Service Level Agreement
S-to-S: Single to Single
SDN: Software Defined Networking
SEConD: Socially-aware Efficient Content Delivery
SMART: Specific Measurable Achievable Realistic And Timely
SNMP: Simple Network Management Protocol
SSID: Service Set Identifier
SWOT: Strengths, Weaknesses, Opportunities, Threats
TCP: Transmission Control Protocol
TDG: Telekom Deutschland GmbH
TM: Traffic Management
ToS: Type of Service
TTL: Time To Live
TUD: Technische Universität Darmstadt, Germany
UDP: User Datagram Protocol
UEP: User EndPoint
uNaDa: User-owned NAno DAtacenter
UniWue: Julius-Maximilians-Universität Würzburg
UZH: University of Zürich
W: Watt
WiFi: Wireless Fidelity
WLAN: Wireless Local Area Network
vINCENT: Virtual Incentives
VITALU: Video Inspector Tool - Alcatel-LUcent
VM: Virtual Machine
VPC: Virtual Private Cloud
VPN: Virtual Private Network
VQS: Video Quality Score

12 Acknowledgements
Besides the authors, this deliverable was made possible by the generous and open help of the WP4 team of the SmartenIT project. Many thanks to all of them! Special thanks go to the internal reviewers Sergios Soursos (ICOM) and George Stamoulis (AUEB) for their detailed review and valuable comments.


13 Appendices
13.1 Testbed extension: Odroid-C1
13.1.1 Overview
The Odroid-C1 is a replacement for the Raspberry Pi device, which is described as a
testbed extension in Deliverable D4.1, Section 5.3.1 Extension: Raspberry Pi.
Therefore, only the setup procedure is described here; the rest of the setup is
identical to that of the Raspberry Pi.
A typical hardware setup of an Odroid-C1 is as follows:

Model: Odroid-C1
CPU: ARM Cortex A5, 4 cores, 1.5 GHz
Memory: 1 GB
Storage: 16 GB microSDHC card with SD adapter (class 6 or 10)
WLAN dongle: Wireless N USB Adapter
Operating System: Ubuntu 14.04 LTS with custom kernel

13.1.2 OS and Extra Packages Installation


To prepare the SD card OS image, follow these instructions:
Download the Ubuntu 14.04 LTS Server OS zip file from http://odroid.in/ubuntu_14.04lts/
Unzip the .img file and write it to the SD card, following the instructions for your operating system (a dd example for Linux is given below)
Boot the device and it will be ready to use.
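On a Linux host, for example, the image can be written with dd; in the following sketch, the image file name and the target device /dev/sdX are placeholders that must be adapted to the actual download and card reader:

$ unzip ubuntu-14.04lts-server-odroid-c1.zip
# Write the raw image to the SD card; double-check the target device first
$ sudo dd if=ubuntu-14.04lts-server-odroid-c1.img of=/dev/sdX bs=4M
# Flush the write buffers before removing the card
$ sync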
To begin with, log in to the Odroid-C1, add the rbhorst user, and install the
following software:
$ sudo apt-get update
$ sudo apt-get install iperf traceroute iptables-persistent hostapd \
    oracle-java7-jdk tmux jetty8 python-pip libxml2-dev libxslt1-dev \
    libffi-dev python-dev
$ sudo pip install mitmproxy

Extract the configuration file archives:

$ tar zvxf SIT-RBHorst-Config.tar.gz -C /etc
$ tar zvxf SIT-RBHorst-Image.tar.gz -C /home/rbhorst/scripts
$ tar zvxf SIT-RBHorst-Crontab.tar.gz -C /var/spool/cron

Finally, install the RB-HORST software and reboot the device with the WiFi dongle attached.
A script started automatically during boot completes the configuration process.
The device then reboots and is ready to be used.
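The script itself ships with the extracted archives; purely as an illustration of what such a first-boot script does, a minimal sketch could look as follows (the file name, service names, and jar path are assumptions, not the actual script):

#!/bin/bash
# Illustrative first-boot sketch; all names below are assumptions.
# Bring up the WiFi access point using the extracted hostapd configuration
service hostapd restart
# Load the persistent firewall rules extracted to /etc
service iptables-persistent restart
# Start the RB-HORST software in a detached tmux session
tmux new-session -d -s rbhorst 'java -jar /home/rbhorst/rbhorst.jar'
# A one-shot guard (e.g., removing the crontab entry) and the final reboot
# are omitted in this sketch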



13.2 Testbed extension: Remote Access for Admins with OpenVPN


The goal of this extension is to enable administrators to access the management network
of the testbed from remote locations in a secure way.
First, install the required software:
$ sudo apt-get install openvpn

Then, an IP network is defined to be used as the address range for remote access clients.
This network can use arbitrary IP addresses, as long as they do not overlap with other
address ranges; in this description, 172.16.1.0/24 is used. For each admin, a /30 subnet
is allocated, with the lower IP address used for the VPN server side and the higher
address for the remote client. It has to be ensured that the selected IP network is
allowed by the local iptables firewall to access the management network (see the example
rule below). Furthermore, a unique port number has to be assigned to each user: starting
with port 1196, the port number is increased until an unused port is found.
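As a concrete illustration, the first admin could be assigned the subnet 172.16.1.0/30 (172.16.1.1 for the server side, 172.16.1.2 for the client) and port 1196. A firewall rule of the following kind (a sketch; the actual chain layout depends on the local setup) then admits the VPN clients to the management network 10.200.0.0/16:

$ sudo iptables -A FORWARD -s 172.16.1.0/24 -d 10.200.0.0/16 -j ACCEPT
# Persist the rule if iptables-persistent is installed
$ sudo iptables-save | sudo tee /etc/iptables/rules.v4 > /dev/null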
The remote admin configuration needs to be named accordingly: the <config-name> should
include the partner shortcut and the admin name. Create a file named <config-name>.conf
in the folder /etc/openvpn with the following content:
dev tun
ifconfig <172.16.1.X> <172.16.1.X+1>
secret <config-name>.key
keepalive 10 120
proto udp
port <unique port number>
persist-tun
status openvpn-<config-name>.log

Then, generate the secret key:


$ openvpn --genkey --secret /etc/openvpn/<config-name>.key

The final step is to create a configuration file named <config-name>-client.ovpn for the
client as follows:
dev tun
proto udp
remote <Public IP address of VPN server> <unique port number>
ifconfig <172.16.1.X+1> <172.16.1.X>
route 10.200.0.0 255.255.0.0
keepalive 10 120
#secret dtm-agh.key
<secret>
<copy the content of the file <config-name>.key in this location>
</secret>

The administrator installs an OpenVPN client, e.g., from the OpenVPN website
(https://openvpn.net/index.php/open-source/downloads.html), and imports the
<config-name>-client.ovpn file to use the remote access.
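Alternatively, on a Linux client the tunnel can be started directly from the command line:

$ sudo openvpn --config <config-name>-client.ovpn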



13.3 Testbed extension: Hardware testbed


In this appendix, we describe a testbed dedicated to testing the implementation of the
ICC functionality and to DTM++ experiments. As described in D2.5 [8], the DTM++
implementation requires the use of hierarchical policers. It was decided that the most
natural and straightforward way to implement them is to use hardware routers that offer
this functionality. Hence, the hierarchical policer realizing the ICC functionality was
implemented on a Juniper MX240 router and integrated with the S-Box to obtain a fully
operational DTM++ implementation. Finally, the MX240 routers had to become part of the
testbed; therefore, the whole testbed concept was adapted: physical equipment is used
instead of virtualized nodes (routers and switches), and the testbed topology is slightly
modified. Figure 65 presents a general view of this modified testbed.

Figure 65: General overview of hardware testbed extension


In the experiments related to DTM++, traffic from Cloud-S is sent to Cloud-R; it
traverses AS2 or AS22 on the way towards Cloud-R in AS1. There are some substantial
changes in this testbed in comparison to the one previously used for DTM experiments:
1. the tunnels in AS1 are terminated at the border routers AS1-BG1 and AS1-BG2 (in the
pure DTM case, the tunnels were terminated at the AS1-DA router, which was a cloud
access router),
2. a new vantage point, the switch PortMirror (Figure 65), has been added; this switch
performs port mirroring, and the traffic from the mirror ports is analyzed by the
sniffer TrafficAnalyzer,
3. new routers AS1-ICC1 and AS1-ICC2 are used for traffic policing (hierarchical
policers applying the ICC operation).
Page 176 of 177

Version 1.0
Copyright 2015, the Members of the SmartenIT Consortium

Seventh Framework STREP No. 317846

D4.3
Public

For traffic generation, we used the servers (virtual machines) Cloud-S and Background
(Figure 65). The Background server generates background traffic in the same way as in
the case of pure DTM. On the Cloud-S server, an HTTP server instance is installed
together with the same traffic generator that was used for pure DTM. In the case of
DTM++, the traffic generator produces delay-sensitive UDP flows, and an iptables
firewall marks this traffic as Expedited Forwarding (see the sketch below). In AS1, we
deployed the server Cloud-R, which receives the traffic from Cloud-S. The wget
application is used to download files from the HTTP server in Cloud-S. This traffic is
not marked (Best Effort), consists of TCP flows, and is classified as delay-tolerant
from the ICC point of view.
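A minimal sketch of such a marking rule on Cloud-S is given below; the UDP destination port 5001 is an assumption standing in for the ports actually used by the generator:

# Mark the generator's UDP flows as Expedited Forwarding (DSCP 46)
$ sudo iptables -t mangle -A OUTPUT -p udp --dport 5001 -j DSCP --set-dscp-class EF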
The PortMirror switch is used for port mirroring. In particular, the links from routers
AS1-BG1, AS1-BG2, AS2 and AS22 are connected to separate ports, and the traffic from all
these ports is mirrored to another port connected to the TrafficAnalyzer. The
TrafficAnalyzer is a PC used for traffic filtering and counting; we use tcpdump for the
traffic analysis. We filter traffic using the ToS field and the GRE source and
destination addresses; this way, we can recognize how much traffic traverses each
inter-domain link. By employing the ToS field, we are able to distinguish
delay-sensitive from delay-tolerant traffic, and we measure the amount of total,
delay-sensitive and delay-tolerant traffic on the inter-domain links.
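For illustration, filters of the following kind can be used on the mirrored port (the interface name and the GRE endpoint addresses are placeholders). EF-marked packets carry the ToS byte 0xb8 (DSCP 46), which can be matched directly in the IP header:

# Delay-sensitive traffic: match the DSCP bits of the ToS byte against EF
$ sudo tcpdump -i eth1 -n 'ip[1] & 0xfc == 0xb8'
# Traffic of one GRE tunnel (IP protocol 47), identified by its endpoints
$ sudo tcpdump -i eth1 -n 'ip proto 47 and src host 192.0.2.1 and dst host 192.0.2.2'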
The S-Box-AS3 and the SDN controller are deployed in the same way as in the case of pure
DTM. S-Box-AS1 collects counter readouts for the inter-domain links via SNMP, but this
time the data is collected from routers AS1-BG1 and AS1-BG2 instead of AS1-DA. Note that
the tunnels are terminated on these routers in domain AS1; this is imposed by the
implementation of the ICC functionality in DTM++. In order to establish SNMP
communication between S-Box-AS1 and these border routers, the GUI was modified
accordingly. The traffic constraints for the hierarchical policers are sent by S-Box-AS1
to routers AS1-ICC1 and AS1-ICC2 via NETCONF; this communication must be enabled on
these routers through proper configuration, and the S-Box GUI provides a suitable
NETCONF template configuration.
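On a Junos-based router such as the MX240, enabling these management channels requires only a few configuration statements; a sketch (the SNMP community string is a placeholder):

# Junos configuration mode
set system services netconf ssh
set snmp community smartenit authorization read-only
commit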
EBGP is configured on the links between the autonomous systems. The OSPF protocol runs
in AS1, and an IBGP session is configured between AS1-BG1 and AS1-BG2. The configuration
procedures are the same as in the case of pure DTM.
In the testbed, only one physical MX240 router was used. This router offers a
virtualization feature called logical systems. The MX240 can also operate without
virtualization; in that case, it uses only its main logical system (which can be treated
as a hypervisor logical system). All routers presented in Figure 65 are logical systems.
The same router has also been used as an OpenFlow switch, but this functionality can be
enabled only in the main logical system; this means that, out of several virtual
routers, only one can operate as an OpenFlow switch.
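On the Junos CLI, a logical system is created simply by configuring under the logical-systems hierarchy; in the following sketch, the interface name and address are placeholders:

# Junos configuration mode; interface and address are placeholders
set logical-systems AS1-BG1 interfaces ge-1/0/0 unit 0 family inet address 192.0.2.1/30
set logical-systems AS1-BG1 protocols ospf area 0.0.0.0 interface ge-1/0/0.0
commit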
