
Proceedings of the 18th Mediterranean Electrotechnical Conference

MELECON 2016, Limassol, Cyprus, 18-20 April 2016

A new approach to dynamic routing in SDN networks

Slavica Tomovic, Nedjeljko Lekic, Igor Radusinovic
Faculty of Electrical Engineering, University of Montenegro, Montenegro
slavicat@ac.me, nedjo@ac.me, igorr@ac.me

Gordana Gardasevic
Faculty of Electrical Engineering, University of Banja Luka, Bosnia and Herzegovina

Abstract - Today's networks are predominantly based on Shortest Path First (SPF) routing algorithms. These algorithms assign weighting factors to links statically, so that the routing tables are recalculated only when a topology change occurs. As a result, network traffic is often unevenly distributed, causing congestion on some links even when the total traffic load is not particularly high. In this paper, we discuss how the above-mentioned problem could be alleviated by applying dynamic routing in Software-Defined Networks (SDNs). We propose three new dynamic routing algorithms and present their OpenFlow-based implementation. The efficiency of the proposed solutions has been verified on two OpenFlow network topologies in the Mininet emulator. The obtained results show that the proposed solutions could significantly improve the network throughput in comparison to the SPF routing model used in traditional networks.

Keywords - OpenFlow; routing; SDN

Recent advances in broadband technologies have led to
tremendous growth of Internet applications and huge increases
in the average amount of bandwidth consumed per user [1].
The recent explosion of the Machine-to-Machine (M2M)
communications and the Internet-of-Things (IoT) applications
additionally contributes to rapid increase in bandwidth
demand. This poses serious challenges to providing acceptable
Quality of Service (QoS). Adding more resources to the
network may temporarily relieve congestion conditions, but it
cannot be economically justified in the long term. Thus, there
is an ever-rising expectation that routing algorithms provide
efficient use of the available network resources.
However, today's networks dominantly use Shortest Path First
(SPF) algorithms to calculate routes. Since this entails that
traffic flows with the same destination IP address often use
overlapping routes, users may encounter poor quality of
experience, while the network resources are underutilized most
of the time [2]. This could be particularly problematic for
applications that require specific performance guarantees, such
as Voice over IP (VoIP), video conferencing or video-on-demand services.
This work is supported by the Montenegrin Ministry of Science under grant 01-451/2012 (FIRMONT) and EU FP7 project Fore-Mont (Grant Agreement No. 315970 FP7-REGPOT-CT-2013) http://www.foremont.ac.me.

978-1-5090-0058-6/16/$31.00 2016 IEEE

The concept of dynamic routing has been introduced with the aim of providing more efficient usage of network resources. Dynamic routing algorithms usually make routing decisions by considering the "current" load of each link [3]. Although the OSPF (Open Shortest Path First) protocol has a QoS extension that enables the distribution of such information across the network [4], dynamic routing is still not deployed in the Internet, for several reasons. Firstly, changing the cost of a link in one part of the network may cause a lot of routing updates
and in turn negatively affect traffic in a completely different
part of the network. In principle, it can be disruptive to many
(or all) traffic flows. Another problem concerns routing
loops that may occur while the routing protocol converges.
Thus, in networks with distributed control plane, changing the
link cost is considered just as disruptive as link-failures [5]. On
the other hand, without the possibility to differentiate between
traffic flows more granularly (not only based on destination IP
address), dynamic routing cannot significantly contribute to
load balancing. MPLS (Multi-Protocol Label Switching) [6]
solves the latter problem by forwarding traffic based on labels
with special meanings. However, this approach adapts slowly
to network changes and does not allow real-time network
reconfiguration [7].
Due to the above-mentioned limitations of traditional networks,
this paper proposes new solutions for dynamic routing in
Software-Defined Networks (SDNs). In SDNs, the control
plane is separated from the data plane and centralized at a
programmable controller [8]. As the routing logic is centralized
and decoupled from the state-distribution mechanism, the
implementation of dynamic routing does not jeopardize the
network stability. When link costs change, the SDN controller can be programmed not to migrate existing flows in the network, and to use new routes only for new flows [4]. The main objective of this paper is to present a solution that improves network QoS performance through effective load balancing, without any resource reservation mechanisms that would limit network scalability. While in our previous work we examined the potential of SDN for providing absolute performance guarantees [9-11], the solutions proposed in this paper are focused on improving the performance of best-effort traffic, the dominant type of traffic in the Internet today.
The rest of the paper is organized as follows. In Section II a
brief background on SDN network architecture is given.
Section III describes the implementation of dynamic routing in
OpenFlow controller. Emulation results and analysis are
presented in Section IV. Concluding remarks are given in
Section V.
In traditional networks, the control plane and the data plane are tightly integrated and embedded in the same networking devices. The control plane is responsible for making decisions on how a specific packet should be handled, while the data plane is responsible for the actual forwarding of data through the device. The new concept introduced by SDN is the decoupling of these two planes. SDN network devices perform only data forwarding, while forwarding decisions are made based on a set of rules determined by an external controller. The reference SDN architecture is illustrated in Fig. 1 [12].


Fig. 1. The SDN architecture.

In this paper, we have considered the scenario where the OpenFlow protocol [13] is used for communication between the SDN controller and the data plane devices (i.e. OpenFlow switches). OpenFlow enables flow-based routing decisions: OpenFlow switches differentiate and process traffic flows according to instructions received from the controller. A flow can be broadly defined as a sequence of packets with similar characteristics. To define a flow, the controller can use any subset of the 9 L2-L4 packet header fields shown in Fig. 2, along with the identifier of the interface on which the packet arrived. This highly granular control enables the implementation of dynamic multipath routing, which could significantly increase the network capacity [9-11].

Fig. 2. L2-L4 fields used for a flow definition in OpenFlow.

The controller's instructions are stored inside OpenFlow devices in the form of flow table rules (Fig. 1). When a packet arrives, a lookup process searches for the corresponding rule in the table. If the packet does not match any rule, it is discarded. However, the common case is to install a low-priority rule which instructs the switch to send the packet to the controller [12]. The SDN controller then defines the next steps of processing, according to the running network management application.

We implemented the dynamic routing functionality within the POX OpenFlow controller [14]. As illustrated in Fig. 3, the controller comprises three key modules: topology discovery, statistics gathering, and routing. The routing module computes link costs and routes for traffic flows, using the topology data, the TCP counters, and the list of current UDP routes as inputs; the statistics gathering module maintains these counters and a query list, and communicates with the OpenFlow network through the OpenFlow interface.

Fig. 3. Scheme of the proposed controller design.


1) Topology discovery - This module is an integral part of the POX controller. It is responsible for discovering and maintaining the network topology. The information about network connectivity is constantly available to the other service modules residing in the controller. When a link goes up or down, this module raises change events to notify registered listeners.
2) Statistics gathering - In order to perform dynamic routing, the controller needs an up-to-date view of the network state. One option is to measure the link utilization by periodically sending port statistics requests to the OpenFlow switches [13]. When an OpenFlow switch receives this message, it reports the number of bytes sent over the corresponding network interface. By comparing the values from the previous report and the last one, the SDN controller can calculate the link load in the last measurement cycle. However, as TCP flows tend to use all available bandwidth on the route, this method could lead to the conclusion that, for example, links loaded with only one long-lived TCP connection (or a few of them) should also be avoided for routing. In other words, if the same number of bytes is sent over two different links during the measurement cycle, they will be treated equally even if the difference in the number of traffic flows on them is considerable.
As a consequence, this will have a negative effect on the routing fairness and the overall network performance. For this reason, we decided instead to estimate the available bandwidth on links, and to make this information available to the other modules. The available bandwidth is estimated according to the formula:
RB = (capacity - UDPbw) / (TCPcount + 1)        (1)


where UDPbw denotes the link bandwidth taken by UDP flows and TCPcount denotes the number of TCP flows on the link. Since links are bidirectional, separate statistics are maintained for each direction. The rationale behind the above formula lies in the fact that TCP tends to distribute the available bandwidth fairly between the individual TCP flows. If we assume that the link is the bottleneck for each TCP connection
assume that the link is the bottleneck for each TCP connection
going over it, the above formula actually calculates the
amount of bandwidth that could be offered to a new TCP flow.
Certainly, this is a very rough assumption, but to determine
the actual bandwidth that a new TCP connection could get, the
controller needs detailed statistics for each traffic flow in the
network. Since the controller scalability is one of the major
SDN issues, our controller collects statistics only for UDP
flows, which account for a much smaller percentage of the
overall traffic [15].
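As a concrete illustration, the per-link estimate of Eq. (1) can be sketched in a few lines of Python. The function name, units, and the example numbers are ours, not part of the POX implementation:

```python
def residual_bandwidth(capacity_bps, udp_bw_bps, tcp_count):
    """Estimate the bandwidth a new TCP flow could get on a link (Eq. 1).

    capacity_bps - link capacity
    udp_bw_bps   - measured bandwidth consumed by UDP flows on the link
    tcp_count    - number of TCP flows currently routed over the link
    """
    # TCP shares the leftover capacity roughly fairly, so the (count+1)-th
    # flow would get an equal share of what the UDP traffic leaves free.
    return max(0.0, (capacity_bps - udp_bw_bps) / (tcp_count + 1))

# Separate statistics are kept per direction. For example, one direction of
# a 100 Mbps link carrying one 5 Mbps UDP flow and three TCP flows:
rb = residual_bandwidth(100e6, 5e6, 3)   # 23.75 Mbps
```

The `max(0.0, ...)` guard is our addition for the corner case where measured UDP traffic momentarily exceeds the nominal capacity.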
We have mentioned that Eq. (1) is based on the assumption
that the arriving flow uses TCP protocol. It has no physical
meaning for the UDP flows, which tend to unfairly occupy as
much bandwidth as the application needs. However, the estimates of available link bandwidth are just used as input arguments of the routing module. Since all the routing algorithms proposed in this paper (see Section IV) try in one way or another to protect links with low available bandwidth, we have considered it useful to keep the same formula for UDP flows as well. In this way, the controller avoids routing UDP flows over links with a large number of TCP flows,
which should have a positive effect on TCP performance.
When a route for a TCP flow is installed, the controller increments the TCP counter (TCPcount) of each link along the route. It also instructs the OpenFlow switches to send a notification when the corresponding flow rule expires. On the other hand, when a route for a UDP flow is installed, the controller adds the ingress switch of the flow to the query list (Fig. 3). We decided to collect statistics only from the ingress switches because in large SDN networks it is important to offload the controller from exhaustive computation as much as possible. The ingress switches are also suitable in terms of
accuracy, since collected statistics will not take the packet loss
within the network into account. In order to minimize the
control overhead, the controller sends only one request for all
UDP flows installed on the switch. The reply also consists of
one message with the counters for all UDP flow entries. We
set the interval between consecutive measurements to 3 seconds. This is a trade-off: both too large and too small values can negatively impact the accuracy of the obtained information. Furthermore, the OVS switches [16] used in our experiments push statistics from the kernel to the user-space at intervals of 1 second, so performing measurements more often would only unnecessarily congest the controller. From the obtained statistics, the controller derives information about the throughput of UDP flows in the last measurement cycle. As illustrated in Fig. 3, it additionally maintains the list of current UDP routes, so that it is able to compute the UDP throughput on each network link.
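A minimal sketch of this bookkeeping, with hypothetical flow identifiers and link tuples (the actual POX module works on OpenFlow flow-stats replies rather than plain dictionaries), might look like:

```python
MEASUREMENT_INTERVAL = 3.0  # seconds between consecutive flow-stats requests

def udp_throughput(prev_bytes, curr_bytes, interval=MEASUREMENT_INTERVAL):
    """Throughput of one UDP flow over the last measurement cycle (bit/s),
    derived from two consecutive byte-counter readings."""
    return max(0, curr_bytes - prev_bytes) * 8 / interval

def link_udp_load(flow_rates, udp_routes):
    """Aggregate per-flow rates into per-link UDP load.

    flow_rates - {flow_id: rate_bps} computed from ingress-switch statistics
    udp_routes - {flow_id: [link, ...]} the controller's current UDP routes
    """
    load = {}
    for flow_id, rate in flow_rates.items():
        for link in udp_routes.get(flow_id, []):
            load[link] = load.get(link, 0) + rate
    return load

# A flow that sent 1 875 000 bytes in one 3 s cycle is running at 5 Mbps:
rates = {"f1": udp_throughput(0, 1_875_000)}
load = link_udp_load(rates, {"f1": [("s1", "s2"), ("s2", "s3")]})
```

Because only ingress switches are queried, the per-flow rate is attributed to every link on the stored route, which (as noted above) ignores in-network packet loss.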


3) Routing module - This module calculates link costs and determines routes for traffic flows. It uses the topology data and the results of the statistics gathering module as inputs. More details are provided in Section IV.
We have implemented three different routing algorithms on
the controller in order to identify the most suitable one. The
algorithms are based on solutions from the literature [17-19],
which are designed for providing absolute QoS guarantees
in terms of bandwidth. As the focus of this paper is on
dynamic routing of best-effort traffic, we had to modify them
to suit that specific scenario.
The first algorithm considered is SWP (Shortest Widest Path), which computes the path with the most available bandwidth [17]. If there is more than one such path, the path with the smallest number of hops is considered optimal. In this way, the algorithm tries to load network links evenly. Note that in the QoS scenario, the available bandwidth refers to the part of link bandwidth which is not reserved for priority traffic. In our case, it refers to the estimate given by Eq. (1).
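SWP can be sketched as a modified Dijkstra search that maximizes the bottleneck bandwidth and breaks ties on hop count. This is our plain-Python illustration, with available bandwidths assumed to come from the Eq. (1) estimates; the dictionary-based graph encoding is ours:

```python
import heapq

def shortest_widest_path(adj, src, dst):
    """Shortest-Widest Path: maximise bottleneck bandwidth, then hop count.

    adj - {node: {neighbour: available_bandwidth}}
    Returns (bottleneck_bandwidth, path) or (0, None) if dst is unreachable.
    """
    # Heap ordered by (-bottleneck, hops): widest paths come out first,
    # and among equally wide paths the one with fewer hops wins.
    best = {}
    heap = [(-float("inf"), 0, src, [src])]
    while heap:
        neg_bw, hops, node, path = heapq.heappop(heap)
        if node == dst:
            return -neg_bw, path
        if node in best and best[node] <= (neg_bw, hops):
            continue  # already settled with an equal or better label
        best[node] = (neg_bw, hops)
        for nxt, bw in adj[node].items():
            if nxt not in path:  # avoid cycles
                bottleneck = min(-neg_bw, bw)
                heapq.heappush(heap, (-bottleneck, hops + 1, nxt, path + [nxt]))
    return 0, None

# Example: the two-hop 100 Mbps path beats the direct 10 Mbps link.
adj = {"s": {"a": 100, "b": 50, "d": 10},
       "a": {"s": 100, "d": 100},
       "b": {"s": 50, "d": 50},
       "d": {"a": 100, "b": 50, "s": 10}}
bw, path = shortest_widest_path(adj, "s", "d")   # 100, ['s', 'a', 'd']
```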
In the next step, we have considered two algorithms that, in addition to the information about the network topology and the available bandwidth, take into account the locations of ingress and egress nodes (i.e. OpenFlow switches): the Minimum Interference Routing Algorithm (MIRA) [18] and the Dynamic Online Routing Algorithm (DORA) [19]. These algorithms are especially suitable for backbone networks, where the number of ingress/egress nodes for a specific region is usually not too big [20]. However, even when all nodes are ingress/egress nodes, it is likely that some subset of nodes is more important and needs to be prioritized.
The key idea of MIRA is that an arriving traffic flow should follow a route that does not interfere too much with routes that may be critical to satisfy future demands. The concept of interference is related to the notion of maximum flow (maxflow) in graph theory [21]. In communication networks, the maxflow value for some source-destination (SD) pair is an upper bound on the total amount of bandwidth that can be routed between the pair. Therefore, the problem of selecting a minimum interference path can be thought of as a choice of a path between the source and the destination that maximizes the minimum open capacity between every other SD pair [18]. Since this problem is NP-hard, MIRA introduced the concept of link criticality to determine which links should be avoided during route selection. Critical links are identified by the algorithm as links with the property that whenever their capacity is reduced by 1 bandwidth-unit, the maxflow value of one or more SD pairs also reduces by 1 bandwidth-unit. The level of link criticality is proportional to the number of SD pairs the link interferes with. Thus, the goal of MIRA is to find the least critical path. Since MIRA is a QoS algorithm, we used our definition of the available link bandwidth to calculate the critical links in the scenario with best-effort traffic.
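The criticality test described above can be sketched by brute force: recompute each SD pair's maxflow after reducing a link's capacity by one unit. The original MIRA derives critical links from minimum cuts far more efficiently; this naive version, with our own graph encoding, only illustrates the definition:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a capacity dict {u: {v: capacity}}."""
    # Residual-capacity copy, with zero-capacity reverse edges added.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u, nbrs in cap.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Push the bottleneck amount along the path found.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

def critical_links(cap, sd_pairs, links):
    """A link is critical for an SD pair if cutting one capacity unit from it
    reduces that pair's maxflow; the criticality level is the number of SD
    pairs the link interferes with. Links must have capacity >= 1."""
    crit = {l: 0 for l in links}
    for s, t in sd_pairs:
        base = max_flow(cap, s, t)
        for (u, v) in links:
            reduced = {x: dict(n) for x, n in cap.items()}
            reduced[u][v] -= 1
            if max_flow(reduced, s, t) < base:
                crit[(u, v)] += 1
    return crit

# Tiny example: every link lies on some minimum cut, so each is critical.
cap = {"s": {"a": 1, "t": 1}, "a": {"t": 1}, "t": {}}
crit = critical_links(cap, [("s", "t")], [("s", "a"), ("a", "t"), ("s", "t")])
```

In the best-effort variant, the capacities fed to this computation would be the Eq. (1) estimates rather than reserved-bandwidth headroom.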
The DORA algorithm tries to avoid routing over links with low residual bandwidth and with a high potential to be included in paths of future demands. It operates in an offline and an online phase. In the offline phase, an array of path potential values (PPV) is calculated for each SD pair. The elements of the PPV array correspond to network links and reflect their importance for the other SD pairs. Initially, all PPV values are set to zero. Then, for the corresponding SD pair, the set of shortest disjoint paths is calculated, and the PPV values of links included in these paths are reduced by 1. Finally, each link is checked for occurrence in the sets of disjoint paths of the other SD pairs; upon registering an occurrence, its PPV value is increased by 1. The PPV values are determined for each SD pair separately, but these calculations are done only when the network is initialized or when some change in topology happens. In the online phase, the PPV and the link residual bandwidth are combined to form the link cost. The impact of residual bandwidth is controlled by the BWP (BandWidth Proportion) parameter:

cost = (1 - BWP) * PPV + BWP / residual_bandwidth,    0 <= BWP <= 1        (2)
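Assuming the sets of shortest disjoint paths have already been computed offline, the PPV bookkeeping, the cost formula above, and the final Dijkstra search could be sketched as follows (names and graph encoding are ours):

```python
import heapq

def ppv_array(disjoint_paths, pair, links):
    """Offline PPV for one SD pair: -1 for links on the pair's own shortest
    disjoint paths, +1 for every occurrence in another pair's path set."""
    ppv = {l: 0 for l in links}
    for other, paths in disjoint_paths.items():
        delta = -1 if other == pair else 1
        for path in paths:
            for link in path:
                ppv[link] += delta
    return ppv

def dora_link_cost(ppv, residual_bw, bwp):
    """DORA online link cost: blends the offline path potential value with
    the current residual bandwidth via the BWP knob (0 <= BWP <= 1)."""
    assert 0 <= bwp <= 1 and residual_bw > 0
    return (1 - bwp) * ppv + bwp / residual_bw

def cheapest_path(adj, costs, src, dst):
    """Plain Dijkstra over the per-link costs."""
    dist, heap = {src: 0.0}, [(0.0, src, [src])]
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt in adj[node]:
            nd = d + costs[(node, nxt)]
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt, path + [nxt]))
    return None

# With BWP = 1 the cost degenerates to 1/residual_bandwidth, so routing
# follows residual bandwidth only; with BWP = 0 it follows the PPV only.
adj = {"s": ["a", "b"], "a": ["d"], "b": ["d"], "d": []}
costs = {("s", "a"): 1.0, ("a", "d"): 1.0, ("s", "b"): 0.5, ("b", "d"): 0.4}
route = cheapest_path(adj, costs, "s", "d")   # ['s', 'b', 'd']
```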

Dijkstra's algorithm is then used to compute a cost-optimized path for a traffic flow.
In each of these three cases, the main difference between the original version of the algorithm and the one implemented within our controller is in the definition of residual/available bandwidth. Also, while the QoS algorithms consider only the links that can satisfy the bandwidth demand, our algorithms consider all the links regardless of their current load.
To evaluate the performance of the proposed SDN applications, we used the Mininet network emulator [22]. Mininet creates a simulated network that runs real software on virtual network components, so it can be used to interactively test networking software. As already mentioned, we used the POX OpenFlow controller to run our applications. The emulated data plane consists of OVS software switches [16]. Two different topologies were used in the emulations. The first is based on the real Verizon topology (Fig. 4). The second is a random topology adopted from the literature (Fig. 5) [18], [19].

Fig. 4. Verizon-like topology. Blue nodes are used as sources and destinations of traffic flows.

The thicker lines in Figures 4 and 5 denote 1 Gbps links, while the thinner lines correspond to 100 Mbps links. Due to performance limitations of software OpenFlow switches, links of higher capacity are not recommended in Mininet [22]. A subset of OpenFlow switches was selected to be sources (S) and destinations (D) of traffic flows. Each source switch was connected to one host machine running the Iperf traffic generator [23]. In order to avoid the impact of access links on throughput performance, their capacity was set to 10 Gbps. Traffic flows were generated one by one at fixed intervals of 3 s. After the last traffic flow had been generated, the emulation was left to run for 100 s. TCP traffic flows accounted for 80% of the total generated flows; the rest were constant-bit-rate (5 Mbps) UDP flows. We were unable to use real traffic traces due to limited emulation resources.

Fig. 5. Random topology with four SD pairs [18], [19].

Fig. 6 shows the results in terms of the average throughput achieved by TCP traffic flows on the Verizon topology. We used the suffix '-BE' to denote the modified versions of the SWP, MIRA and DORA algorithms. Several experiments were conducted, in which a varying number of traffic flows was generated. As expected, SPF routing resulted in the worst network performance. This is a consequence of always using the same shortest path tree for routing. Due to the high concentration of traffic flows on some links, throughput results degraded very quickly with the number of generated flows.
On the other hand, the modified version of the MIRA algorithm (MIRA-BE) increased the average throughput only marginally. Despite its complexity, MIRA-BE often causes unbalanced network utilization. The reasons for this are twofold: i) it assigns the same cost to all non-critical links; ii) the cost of critical links does not depend on the link load. The negative effect of the second factor was strongly pronounced in the emulated scenario. Due to the relatively large number of SD pairs, many links were identified as critical and their costs became close to each other. Thus, when all nodes in the network act as potential sources and destinations of traffic flows, we can expect SPF-like behaviour. The efficiency of the DORA-BE algorithm depends largely on the value of the BWP parameter. When the BWP parameter is set to a low value, DORA-BE is focused on avoiding critical links, i.e. links with high PPV. However, DORA-BE determines critical links in the offline phase, so some links may be considered more critical than they actually are. For example, under-loaded links could be identified as critical and avoided during the routing process. With large values of the BWP parameter, DORA always gives the best results. In such a configuration, DORA estimates link criticality based on the locations of SD pairs, but gives stronger protection to links with low available bandwidth. From Fig. 6, we can see that it offers an improvement of up to 54.5% over SPF routing. A significant improvement of the average flow throughput is also noticeable when the SWP-BE algorithm is used. SWP-BE performs especially well under low traffic load. When the load increases, it performs worse than DORA-BE (BWP=0.9) because it tends to allocate longer paths and consume a lot of network resources. Also, it does not differentiate links in terms of their importance for future traffic flow arrivals. Depending on the network topology, this limitation more or less diminishes the overall performance.

Fig. 6. Average TCP flow throughput for different numbers of generated traffic flows.

Fig. 7 shows the average route length as a function of the number of generated requests. Obviously, there is a great difference between the average route length of the traditional SPF algorithm and that of the others. SPF always chooses the route with the least number of hops, without consideration of the network load or traffic engineering information. On the other hand, the dynamic routing algorithms often use longer routes if those routes have more resources available and/or introduce the lowest interference to SD pairs in the network. In each of the emulated scenarios, the longest routes correspond to the algorithms which achieved the highest average throughput: DORA-BE (BWP=0.9) and SWP-BE. However, in networks where propagation delay is significant (e.g. wide-area networks), the selection of long routes may not be the optimal solution under low traffic load. Thus, further study is needed to identify the proper model of load balancing.

Fig. 7. Average route length as a function of the number of generated traffic flows.

Fig. 8 presents the average throughput achieved by TCP traffic flows on the random network topology with four SD pairs (Fig. 5). We can notice slightly better behaviour of the MIRA-BE algorithm; as discussed earlier, this is a consequence of the smaller number of SD pairs. The ranking of the algorithms in terms of throughput performance remained almost the same. DORA-BE again proved to be the most efficient when the value of the BWP parameter is high. The improvement over SPF is even more significant here, as DORA-BE achieved 3.2 times higher average flow throughput. An important advantage of this algorithm is its applicability to a wide range of network scenarios. When the BWP parameter is high enough, the DORA link cost function ensures efficient network utilization regardless of the number of SD pairs. On the other hand, in networks where link failures are a common event, the same algorithm with smaller values of the BWP parameter could be used to yield the best performance, because it tends to minimize the burden on the critical links.

Fig. 8. Average TCP flow throughput for different algorithms on the random topology.
This paper proposes the implementation of dynamic routing within an OpenFlow controller. The proposed solution estimates the amount of bandwidth that each link can offer to an arriving traffic flow, and defines the routing metric as a function of the estimates made. Different variants of the routing metric have been considered, inspired by those used by the well-known QoS algorithms SWP, MIRA, and DORA. Through a set of experiments in the Mininet emulator, we have shown that the modified version of DORA generally performs the best, while the best-effort version of the MIRA algorithm does not give adequate results for the chosen performance criteria. The results also show that routing over the paths with the highest available bandwidth (the SWP-BE algorithm) offers a visible improvement over the SPF algorithm, and yet remains simple to implement. In our future work we plan to explore and implement traffic engineering mechanisms that provide performance guarantees for priority traffic flows, but additionally tend to maximize the performance of best-effort traffic. The performance study might be augmented with an analysis of the number of rejected QoS requests and the number of flows requiring re-routing after a link failure. The scalability of dynamic routing solutions is in large part determined by the complexity and efficiency of the mechanisms for network state monitoring. Therefore, more design options for the controller's statistics gathering module should be explored.



[1] Internet Society, "Growing pains: bandwidth on the Internet," briefing paper.
[2] W. Kim, P. Sharma, J. Lee, S. Banerjee, J. Tourrilhes, S. Lee, P. Yalagandula, "Automated and scalable QoS control for network convergence," INM/WREN'10, San Jose, CA, Apr. 2010.
[3] Q. Ma and P. Steenkiste, "On path selection for traffic with bandwidth guarantees," IEEE International Conference on Network Protocols, 1997.
[4] "QoS Routing Mechanisms and OSPF Extensions," IETF, RFC 2676, Aug. 1999.
[5] S. Das, "A unified control architecture for packet and circuit network convergence," PhD Dissertation, Stanford University, Department of Electrical Engineering, June 2012.
[6] "Multiprotocol label switching architecture," IETF, RFC 3031, Jan. 2001.
[7] S. Das, Y. Yiakoumis, G. Parulkar, N. McKeown, P. Singh, D. Getachew, P. D. Desai, "Application-aware aggregation and traffic engineering in a converged packet-circuit network," National Fiber Optic Engineers Conference on Optical Fiber Communication Conference and Exposition (OFC/NFOEC), pp. 1-3, 6-10 March 2011.
[8] Open Networking Foundation, "Software defined networking: the new norm for networks," white paper, retrieved Apr. 2015.
[9] S. Tomovic, N. Prasad, I. Radusinovic, "SDN control framework for QoS provisioning," 22nd Telecommunications Forum TELFOR 2014, pp. 111-114, Belgrade, Serbia, November 2014.
[10] S. Tomovic, N. Prasad, I. Radusinovic, "Performance comparison of QoS routing algorithms applicable to large-scale SDN networks," Proc. of IEEE EUROCON 2015, pp. 172-177, Salamanca, Spain, September 2015.
[11] S. Tomovic, M. Radonjic, I. Radusinovic, "Bandwidth-delay constrained routing algorithms for backbone SDN networks," Proc. of TELSIKS 2015, pp. 227-230, Nis, Serbia, October 2015.
[12] D. Kreutz, F. M. V. Ramos, P. Esteves Verissimo, C. Esteve Rothenberg, S. Azodolmolky, S. Uhlig, "Software-defined networking: a comprehensive survey," Proceedings of the IEEE, vol. 103, no. 1, pp. 14-76, Jan. 2015.
[13] OpenFlow switch specification v1.0.0, retrieved Apr. 2014. Available at: http://archive.openflow.org/documents/openflow-spec-v1.0.0.pdf
[14] POX software. Available at: http://noxrepo.org/pox/about-pox/
[15] Center for Applied Internet Data Analysis, "Analyzing UDP usage in Internet traffic," web document.
[16] Open vSwitch. [Online]. Available at: http://openvswitch.org
[17] Z. Wang and J. Crowcroft, "Routing algorithms for supporting resource reservation," IEEE JSAC, 1996.
[18] M. Kodialam and T. V. Lakshman, "Minimum interference routing with applications to MPLS traffic engineering," IEEE INFOCOM, vol. 2, pp. 884-893, 2000.
[19] R. Boutaba, W. Szeto, and Y. Iraqi, "DORA: efficient routing for MPLS traffic engineering," Journal of Network and Systems Management, vol. 10, no. 3, pp. 309-325, 2002.
[20] Y. Yang, L. Zhang, J. K. Muppala, and S. T. Chanson, "Bandwidth-delay constrained routing algorithms," Computer Networks, vol. 42, issue 4, pp. 503-520, 2003.
[21] A. Kumar, D. Manjunath, and J. Kuri, "Communication Networking: An Analytical Approach," Elsevier, 2004.
[22] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: rapid prototyping for software-defined networks," 9th ACM SIGCOMM Workshop on Hot Topics in Networks (Hotnets-IX), 2010.
[23] Iperf traffic generator. Available at: https://iperf.fr/