Abstract—Since the introduction of software-defined networking (SDN), scalability has been a major concern. There are different approaches to address this issue, and most of them can be applied without losing the benefits of SDN. SDN provides a level of flexibility that can accommodate network programming and management at scale. In this work we present recent approaches proposed to address the scalability issue of SDN deployments. We particularly select a hierarchical approach for our performance evaluation study. A mathematical framework based on network calculus is presented, and the performance of the selected scalable SDN deployment in terms of the upper bound of event processing and the buffer sizing of the root SDN controller is reported.

Keywords—Scalable SDN, OpenFlow, Hierarchical controller, Network Calculus, Delay Bound, Buffer Sizing

I. INTRODUCTION

Decoupling the network control from the forwarding devices is the common denominator of Software-Defined Networking (SDN) proposals in the research community (to mention a few: [1], [2], [3], [4], and [5]). This separation paves the way for more flexible, programmable, vendor-agnostic, and innovative networking. While the SDN concept and OpenFlow [5] find their way into commercial deployments, the performance of the SDN concept and its scalability, delay bounds, buffer sizing, and similar performance metrics are not sufficiently investigated in recent research.

It seems that control plane scalability challenges in SDN are not inherently different from similar concerns in traditional network design. In fact, SDN encourages us to apply common software and distributed systems development practices to simplify development, verification, and debugging. In SDN, there is no need to address basic but challenging concerns like topology discovery, state distribution, and resilience: control applications can rely on the control platform that provides these basic functions, such as maintaining a cohesive view of the network in a distributed and scalable fashion [1].

Currently, the first OpenFlow implementations from hardware vendors are being deployed in networks, and a growing number of experiments over SDN-enabled networks are expected. This will create new challenges, as questions of SDN performance and scalability have not yet been properly investigated. Understanding the performance and limitations of the SDN concept is a requirement for its usage in practical applications. There are very few performance evaluation studies of OpenFlow and the SDN architecture. Besides, an initial estimate of the performance and requirements of an SDN deployment is essential for network architects and designers. Although simulation studies and experimentation are among the widely used performance evaluation techniques, analytical modeling has its own benefits. A closed-form description of a networking architecture enables the network designers to obtain a quick (and approximate) estimate of the performance of their design, without the need to spend considerable time on simulation studies or an expensive experimental setup.

In this work we utilize network calculus as a mathematical framework to analytically model the behavior of a scalable SDN deployment. To the best of our knowledge, this is the first time that a network calculus-based analytical model is investigated and presented to model a scalable SDN architecture. After this introduction, related studies addressing the scalability issue of SDN are compiled in Section II. The network calculus framework and a detailed description of our analytical models are presented in Section III. This section includes an overview of network calculus, definitions, the system model, and the analysis of an SDN local and root controller. The mathematical description of the queue length and delay bound of SDN controllers, along with the buffer requirements in a scalable deployment, is presented as well. Using the results of some recent evaluations, we present the boundary performance of packet delay and buffer sizing of SDN controllers in Section IV. Finally, we draw our conclusions and outline future research in Section V.

II. A SCALABLE SDN

The common perception that control in SDN is (logically) centralized leads to concerns about the scalability of SDN and its overall performance in a production network. Regardless of the controller capability, a (logically) central controller does not scale as the network grows (in terms of the number of SDN switches, the number of flows and their rate, bandwidth, etc.) and will fail to serve all the incoming requests within an acceptable level of service guarantees. Given the logically centralized nature of SDN deployments, one can argue that there is an inherent bottleneck to SDN scalability. A typical data center network has tens of thousands of switching elements and can grow at a fast pace. The total number of control events generated in any network at that scale is enough to overload any centralized controller. There are applications and events that stress the control plane by over-consuming the control plane resources. For instance, a big-flow detection application continuously queries the switches to detect big flows (statistics queries and replies). Upon detection of big flows, the application re-routes them. By their
Figure 2. Graphical representation of cumulative arrival process A(t) with average sustainable request arrival rate of ρ and burstiness of σ, along with a stopped sequence Aτ(t).

processes themselves but with bounding processes called arrival and service curves. A comprehensive overview and outlook of stochastic network calculus can be found in [17]. By focusing on bounds, network calculus complements the classical queuing theory. To the best of our knowledge, this is the first time that a network calculus-based analytical study is presented to model the behaviour of a scalable SDN.

A. Definitions

Network calculus is a tool to analyse flow control problems in networks, mainly from a worst-case (bounding) perspective. In other words, it is a framework to derive deterministic guarantees on delay, queue lengths, throughput, and similar performance metrics. It is mathematically based on min-plus algebra, which can also be interpreted as a system theory for deterministic and stochastic queuing systems. The use of alternative algebras such as min-plus and max-plus algebra to transform complex network systems into analytically tractable models is central to network calculus theory. Furthermore, arrival and service processes are characterized by bounds in order to simplify the analysis, and the performance evaluation is performed based on these bounds. Most network flows can be described using arrival curves, represented with leaky-bucket traffic envelopes, and most network elements provide some service to the flows, described by a rate and a slack term [18], [19]. Network calculus uses this traffic specification to model the arrival and peak characteristics of a flow (packets, events, etc.). A cumulative arrival process A is a non-decreasing, integer-valued function on the non-negative integers Z+ such that A(0) = 0. A(t) denotes the total number of arrivals (i.e., events) in time slots 1, 2, ..., t. The burstiness and the average sustainable rate of arrivals are represented by σ and ρ, respectively. The number of event arrivals at time t is denoted by a(t), with a(t) = A(t) − A(t−1). The cumulative arrival process A is said to be (σ, ρ)-upper constrained, which we denote as A ∼ (σ, ρ), if (see Fig. 2):

A(t) − A(s) ≤ σ + ρ(t − s),  0 ≤ s ≤ t.  (1)

According to the multiplexing rule [15], if constrained flows are merged, the aggregate process is also constrained:

Ai ∼ (σi, ρi)  ⟹  Σi Ai ∼ (Σi σi, Σi ρi).  (2)

For any increasing sequence A (i.e., cumulative arrival process), we define its "stopped sequence" at time τ (see Fig. 2), denoted as Aτ, by:

Aτ(t) = A(t) if t ≤ τ, and Aτ(t) = A(τ) otherwise.  (3)

According to (3), if A is an arrival process, then for the stopped sequence Aτ there are no further packet (or event) arrivals after time τ. We can simply show that a stopped sequence Aτ is (σ(τ), ρ)-upper constrained, where

σ(τ) = max_{0≤t≤τ} max_{0≤s≤t} [A(t) − A(s) − ρ(t − s)].  (4)

Note that in (4), since the sequence Aτ is stopped at time τ, σ(τ) is the maximum number of packets in the queue of a work-conserving link with a capacity of ρ and input Aτ.

B. Local SDN Controller

A local SDN controller model is depicted in Fig. 3. This controller has one input A, one control input C (from the root SDN controller), and one output F such that F = C(A(t)), where A(t) is the cumulative number of arrivals by time t, C(n) is the number of events that are flow controlled (e.g. using the "Flow Mod" operation) among the first n arrivals, and F(t) is the cumulative number of departures by time t. In other words, the cumulative number of events output by time t is the cumulative number of event arrivals to the local controller that are "flow controlled" by the root SDN controller by time t. In the case of OpenFlow, when the OpenFlow controller caches an operation inside the flow table of the switch, the consecutive packets that match the flow table entry will not be forwarded to the OpenFlow controller. The operation of the OpenFlow protocol should therefore be considered in the definition of the C(n) function. For an ideal SDN switch, if A ∼ (σ, ρ) is upper constrained and C ∼ (δ, γ) is also upper constrained, then F ∼ (γσ + δ, γρ) is also upper constrained. This lemma can be easily proved as follows:

Proof:
F(t) − F(s) = C(A(t)) − C(A(s))
            ≤ δ + γ(A(t) − A(s))
            ≤ δ + γ(σ + ρ(t − s))
            = δ + γσ + γρ(t − s)

We assume that the local SDN controller is in fact a constant server under arrival process A ∼ (σ, ρ), with a constant service rate of μ (a positive integer). We define a busy period for an SDN local controller, which starts at instance s in time and ends at t if the queue length of
Figure 3. A model of a local SDN controller with interface to a root SDN controller (i.e., C(n)).
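As a concrete illustration of the definitions in (1)–(4) and of the flow-control lemma above, the following sketch (ours, not the paper's code; the traces and the choice C(n) = n // 2 are invented for the example) checks (σ, ρ)-upper constraints on a synthetic cumulative event trace and verifies that F(t) = C(A(t)) stays within the predicted (γσ + δ, γρ) envelope.

```python
# Illustrative check of Eq. (1) and of the lemma F ~ (gamma*sigma + delta, gamma*rho).
# All traces and parameters below are synthetic examples, not measured SDN data.

def upper_constrained(X, sigma, rho):
    """Eq. (1): X(t) - X(s) <= sigma + rho*(t - s) for all 0 <= s <= t."""
    T = len(X) - 1
    return all(X[t] - X[s] <= sigma + rho * (t - s)
               for t in range(T + 1) for s in range(t + 1))

def burstiness(X, rho, tau):
    """Eq. (4): smallest sigma making the stopped sequence X^tau
    (sigma, rho)-upper constrained."""
    return max(X[t] - X[s] - rho * (t - s)
               for t in range(tau + 1) for s in range(t + 1))

# Cumulative arrivals: a burst of 5 events, then one event per slot.
A = [0, 5, 6, 7, 8, 9, 10]
sigma = burstiness(A, rho=1, tau=6)     # sigma = 4 for this trace
assert upper_constrained(A, sigma, 1)

# Control function: every second event is flow controlled, so
# C(n) - C(m) <= 1 + 0.5*(n - m), i.e. C ~ (delta=1, gamma=0.5).
C = lambda n: n // 2
F = [C(a) for a in A]                   # cumulative departures F(t) = C(A(t))
assert upper_constrained(F, 0.5 * sigma + 1, 0.5 * 1)
```

The assertions confirm that the departure trace obeys the (γσ + δ, γρ) = (3, 0.5) envelope predicted by the lemma for this particular choice of A and C.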
We are interested in finding a closed form for the queue length of the local and the root SDN controllers. Let Ã1 and Ã2 be the overall arrival processes of the controllers and FC(t), FS(t) be the respective output processes. We have

Ã1(t) = A1(t) + S21(FC(t)),  (9)
Ã2(t) = A2(t) + S12(FS(t)).  (10)

Furthermore, let FCτ and FSτ be the stopped sequences of FC and FS at time τ. It follows that for "any" α1, FSτ ∼ (σ1(τ), α1), where

σ1(τ) = max_{0≤t≤τ} max_{0≤s≤t} [FS(t) − FS(s) − α1(t − s)].  (11)
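For intuition on how (9) and (10) couple the two controllers, the following toy discrete-time simulation (our own sketch; the arrival pattern, service rates, and the linear scalings used for S12 and S21 are assumptions, not the paper's traffic specification) feeds a fraction of each controller's departures into the other's input and tracks the resulting backlogs.

```python
# Toy simulation of the coupled arrivals in Eqs. (9)-(10). S21 and S12 are
# modeled here as simple linear scalings of the peer controller's departures.
T = 20
mu1, mu2 = 5, 3          # constant service rates (events/slot): local, root
g21, g12 = 1.0, 0.2      # illustrative scaling factors for S21 and S12
q1 = q2 = 0              # backlogs of the local (1) and root (2) controllers
d1 = d2 = 0              # departures in the previous slot
peak = 0                 # worst observed backlog at the local controller
for t in range(T):
    a1 = 8 if t % 4 == 0 else 1         # bursty external arrivals A1
    a2 = 1                              # steady external arrivals A2
    q1 += a1 + int(g21 * d2)            # Eq. (9): A1 plus feedback S21(FC)
    q2 += a2 + int(g12 * d1)            # Eq. (10): A2 plus feed forward S12(FS)
    peak = max(peak, q1)
    d1 = min(q1, mu1); q1 -= d1         # local controller serves up to mu1
    d2 = min(q2, mu2); q2 -= d2         # root controller serves up to mu2

print(peak, q1, q2)   # peak local backlog and final backlogs
```

Because the effective arrival rates (external plus scaled feedback) stay below the service rates in this toy setting, both backlogs drain between bursts; the peak backlog plays the role of the burstiness term that the stopped-sequence bound (11) captures analytically.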
Figure 7. Upper bound of the root SDN controller buffer for a given feed forward from the local controllers (i.e., S12(FS(t))) and different feedback parameters (i.e., S21(FC(t))).

SDN controllers. Assuming that the local SDN controller can be integrated inside an SDN switch, we borrow the specification of these two local SDN controllers from [20]. Thus, the first local SDN controller has an average event processing performance of 0.286 million events per second, and the second one, which is a software implementation, has an average processing performance of 0.03 million events per second [20]. As shown by these results, the average sustainable arrival rate of the input events to the second controller should be kept one order of magnitude lower than the same parameter of the other (e.g. hardware) implementation (e.g. Switch 1 [20]) in order to achieve a similar performance in terms of event processing delay.

Fig. 7 shows one of the potential upper bounds, which yields the required buffer space in the root SDN controller. Using the recent reports [21], we selected the Beacon OpenFlow controller, with an average performance of 1.75 million flow operations (e.g. Flow Mod) per second [21]. Assuming an average arrival rate of 0.3 mpps for the local SDN controllers, and 0.6 mpps as the aggregated arrival rate of the other local controllers, the buffer requirement of this root controller is shown in Fig. 7. We assumed that 1/100 of the arrived events will be forwarded to the controller (i.e., δ12) with a burstiness parameter (i.e., γ12) of 0.2 mpps. Given these parameters, the required buffer size of the root SDN controller amounts to 0.83 million events in the worst-case scenario. This result helps the designers provision the required buffer space (i.e., buffer sizing) based on the operating regime of the controllers in terms of the average sustainable arrival rate, the burstiness of the input traffic, and the traffic specification of the feedback and feed forward paths.

V. CONCLUSIONS AND FUTURE RESEARCH

In spite of benchmark tools and some limited simulation models, there are very few research activities that analytically evaluate the performance of an SDN deployment. In this paper we exploited the capabilities of the network calculus framework to model the behaviour of a scalable SDN deployment. Focusing on bounds and the worst-case scenario, network calculus complements the classical queuing theory: the latter concerns the average quantities in equilibrium, while network calculus focuses on boundary conditions. Our scalable SDN deployment model (consisting of local and root SDN controllers) captured the closed form of the event delay and buffer length inside the local SDN controller. Furthermore, an analytical model of the interaction between the local and root SDN controllers was analysed. Given the parameters of the cumulative arrival processes and the flow control functionality of the SDN controller, the network architect or designer is able to compute an upper bound estimate of the delay and buffer requirements of SDN controllers. We presented the event delay of two variants of local SDN controllers along with the buffer requirement of the root SDN controller. In addition to deterministic network calculus, stochastic network calculus is another interesting branch of network calculus, which can also be utilized for analytical modelling of other aspects of SDN deployments. Comparing the performance of this approach with simulation or experimental setups is among the future works of this study.

REFERENCES

[1] T. Koponen, M. Casado, N. Gude, J. Stribling, L. Poutievski, M. Zhu, R. Ramanathan, Y. Iwata, H. Inoue, T. Hama et al., "Onix: a distributed control platform for large-scale production networks," in Proceedings of the 9th USENIX conference on Operating systems design and implementation, 2010, pp. 1–6.

[2] A. Greenberg, G. Hjalmtysson, D. A. Maltz, A. Myers, J. Rexford, G. Xie, H. Yan, J. Zhan, and H. Zhang, "A clean slate 4D approach to network control and management," SIGCOMM Comput. Commun. Rev., vol. 35, no. 5, pp. 41–54, Oct. 2005.

[3] M. Caesar, D. Caldwell, N. Feamster, J. Rexford, A. Shaikh, and J. van der Merwe, "Design and implementation of a routing control platform," in Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation - Volume 2, ser. NSDI'05, Berkeley, CA, USA, 2005, pp. 15–28.

[4] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker, "Ethane: taking control of the enterprise," in Proceedings of the 2007 conference on Applications, technologies, architectures, and protocols for computer communications, ser. SIGCOMM '07, New York, NY, USA, 2007, pp. 1–12.

[5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69–74, Mar. 2008.

[6] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, "Applying NOX to the datacenter," in Proc. of Workshop on Hot Topics in Networks (HotNets-VIII), 2009.
[7] T. Benson, A. Akella, and D. A. Maltz, "Network traffic characteristics of data centers in the wild," in Proceedings of the 10th ACM SIGCOMM conference on Internet measurement, ser. IMC '10. New York, NY, USA: ACM, 2010, pp. 267–280.

[21] OpenFlow controller performance comparison. [Online]. Available: http://www.openflow.org/wk/index.php/Controller Performance Comparisons (last accessed 24 September 2013).