
2017 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT)

Experimenting with Scalability of Floodlight Controller in Software Defined Networks

Saleh Asadollahi
Computer Science, Saurashtra University
Rajkot, India
asadollahimcitp@gmail.com

Bhargavi Goswami
Computer Science, Christ University
Bangalore, India
bhargavigoswami@gmail.com

Abstract—Software Defined Network (SDN) is a booming area of research in the domain of networking. With the growing number of devices connecting to the global village of the internet, it becomes inevitable to test the scalability of any new technology under dynamic circumstances before adopting it. While a lot of research is going on to provide solutions that overcome the limitations of the traditional network, the research community is called upon to test the applicability and fault tolerance of the solutions provided in the form of SDN controllers. Among the multiple controllers providing SDN functionality to the network, one of the stellar controllers is the Floodlight Controller. This paper is a contribution towards the performance evaluation of the scalability of the Floodlight Controller, implementing multiple scenarios with Mininet, the Floodlight Controller and iPerf. The Floodlight Controller is tested in the simulation environment by observing its throughput and latency, and its performance is checked under dynamic networking conditions over a mesh topology by exponentially increasing the number of nodes.

Keywords—Software Defined Networks (SDN), Mininet, OpenFlow, Floodlight, iPerf, gnuplot

978-1-5386-2361-9/17/$31.00 ©2017 IEEE

I. INTRODUCTION

Computer networks performed suitably well on traditional network infrastructure and equipment, but circumstances changed as more and more devices connected to the internet, with the number of users and devices embracing the internet gradually and progressively over time. Because of this, much-debated problems arose, such as: a) lack of a global view of the entire network, b) configuration of each complex piece of equipment (routers and switches) individually with pre-defined commands, c) implementation of high-level policy over the network gateways, d) configuration of equipment with low-level commands that was time and energy consuming, e) difficulty in recovering from network breakdowns, and, most considerably, f) the crisis of network programmability for upgrades [1].

Software Defined Networking emerged as a solution to these problems with its concept of separating the control plane from the data plane. By taking the brain of each device away in the form of a controller, complex equipment appears as a combination of ports and flow tables connected to the controller. An SDN controller such as POX [2], OpenDaylight [3], Beacon [4], Ryu [5], NOX [6], etc., configures and manages each switch dynamically according to requirements. The authors have provided details on these controllers, with a comparison of the differences between them, in [7]. Controllers implement desired changes in the network by installing suitable forwarding rules through a southbound interface such as OpenFlow [8], OpFlex [9], NETCONF [10], ForCES [11], POF [12], etc. There are other options for the southbound interface, but OpenFlow is the first choice of researchers and the most popular, currently at version 1.5. The northbound interface is the one that satisfies the requirement of implementing business policy over the application layer of the controllers; widely used northbound interfaces serve to deploy the service policies that define traffic behavior.

While SDN has proven suitable for home [13], data center and enterprise networks [14] such as Google, Facebook, etc., separating the control plane and bringing it to a remote system raises questions about its scaling capabilities in different scenarios. To address this issue and to throw light upon the scalability of the controller, the authors present experiments that evaluate performance in diversified scenarios addressing scalability. The rest of the paper is organized as follows. Section II provides a helicopter view of the Floodlight controller. Section III details the simulation test bed set up to perform the scalability experiments under diversified networking conditions. Section IV provides the experimental results and an evaluation of the performance statistics, followed by the conclusion and references.

II. FLOODLIGHT SDN CONTROLLER

Floodlight [15] is an open source, Apache-licensed, Java-based OpenFlow controller and one of the momentous contributions from Big Switch Networks. At the run time of the Floodlight Controller, both the southbound and northbound interfaces, among all the available set of configured module applications, are activated and available for experimentation. Applications interact with the controller to retrieve information and invoke services using HTTP REST commands.

The Floodlight Controller architecture is shown in Figure 1. It contains Core, Internal and Utility services comprising various modules, some of which are explained here. a) Topology Management is in charge of computing shortest paths using Dijkstra's algorithm, b) Link Discovery is responsible for maintaining link state information using LLDP packets, c) the Forwarding module provides flow commutation through end-to-end routing, d) the Device Manager keeps account of the nodes on the network and the Storage Source, e) Virtual Network generates layer-2 realms defined by MAC address.

Fig. 1. Floodlight SDN controller architecture

Further, f) Forwarding: the default forwarding application for packets, which supports topologies for its utilities. g) Static Flow Entry: an application, enabled by default, for installing a specific flow entry, including match and action columns, on a specific switch; through REST APIs we can add, remove and query flow entries. h) Firewall: an application that applies Access Control List rules to restrict specific traffic based on a specified match. i) Port Down Reconciliation: in the event of a port going down, it reconciles the flows across the network. j) Learning Switch: a common L2 learning switch. k) Hub: always floods any incoming packet to all other active ports; not enabled by default. l) Virtual Network Filter: a simple MAC-based network isolation application that is compatible with OpenStack Essex. Thus, the Floodlight controller, part of the Floodlight project by BSN, is a stellar controller and the choice of beginner to expert researchers in the domain of SDN.

III. SIMULATION ENVIRONMENT

As an effort to implement and test the controller's performance, we created a custom topology with three different scenarios differing in the number of nodes. Mininet [16] is used as the simulator and Floodlight as the controller. Mininet is installed in a virtual machine which connects to the remotely located Floodlight controller installed in a separate virtual machine. Python scripting is done to override the default behavior of Mininet and customize the topology specification instead of accepting its automatic decision on the number of hosts connecting to each switch. The Python script of the customized topology includes the specification of host-to-switch, switch-to-switch and switch-to-Floodlight-controller connectivity.

The switch used in this experiment is the OpenFlow kernel switch, also known as Open vSwitch or OVS switch [17], with OpenFlow protocol mode enabled. OVS is known as an open source distributed virtual multilayer switch implemented over virtual machines, providing a soft-switch environment with multiple active standardized protocols across the layers. It creates a transparent distribution of client-server architecture functionalities in a cross-layer environment, similar to real networks with Cisco Nexus 1000V switches. The open source OVS switch is accepted as the default switch by multiple virtual machines and can be ported to multiple platforms as and when required.

To evaluate the statistics related to the performance of the controller, a mesh topology is implemented over 6 switches in three different scenarios differing only in the number of nodes connected to each peripheral switch. Scenario A: 10 hosts connected to each of 5 switches (total of 50 hosts + 6 switches + 1 controller). Scenario B: 30 hosts connected to each of 5 switches (total of 150 hosts + 6 switches + 1 controller). Scenario C: 60 hosts connected to each of 5 switches (total of 300 hosts + 6 switches + 1 controller).

Mininet comes with a built-in NOX controller that supports all the basic controller functionalities and allows multiple-controller implementations as well. In this paper, we do not use the default controller of Mininet. The controller used in this experiment is Floodlight, an enterprise-grade, Java-based OpenFlow controller with an Apache license, supported by a large community of developers from Big Switch Networks. In this paper we use the master version of Floodlight 1.2. The base of this controller is the Beacon controller.

Fig. 2. Interconnectivity of six switches in mesh network

The aim of connecting the switches in a mesh topology was to develop a scenario that imposes minimum delay on the transmission of data packets, especially UDP transmission. Figure 2 shows the interconnectivity of the six switches in the mesh network for the three scenarios, where the difference lies in the number of hosts connected to each switch.
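The customized topology described above can be sketched with Mininet's Python API. The script below is a minimal sketch assuming scenario A (10 hosts on each of the five peripheral switches, a full mesh among the six switches, a host-free central switch and a remote Floodlight controller); the controller address, port and node names are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the customized mesh topology (scenario A assumed:
# 10 hosts on each of the 5 peripheral switches, no hosts on the core).
# The controller IP/port below are placeholders for the Floodlight VM.

def build_mesh_plan(hosts_per_switch=10):
    """Return (switches, hosts, links) for the 6-switch mesh."""
    core = 's1'                                   # central switch, no hosts
    edges = ['s%d' % i for i in range(2, 7)]      # 5 peripheral switches
    hosts, links = [], []
    for sw in edges:
        links.append((core, sw))                  # core connects to every edge
    for a in range(len(edges)):                   # full mesh among edge switches
        for b in range(a + 1, len(edges)):
            links.append((edges[a], edges[b]))
    for i, sw in enumerate(edges):                # attach hosts to edge switches
        for j in range(hosts_per_switch):
            h = 'h%d' % (i * hosts_per_switch + j + 1)
            hosts.append(h)
            links.append((h, sw))
    return [core] + edges, hosts, links

def run(hosts_per_switch=10):
    # Imports kept local: this function only runs inside the Mininet VM.
    from mininet.net import Mininet
    from mininet.node import RemoteController, OVSKernelSwitch
    switches, hosts, links = build_mesh_plan(hosts_per_switch)
    net = Mininet(controller=None, switch=OVSKernelSwitch)
    net.addController(RemoteController('c0', ip='192.168.56.101', port=6653))
    nodes = {s: net.addSwitch(s) for s in switches}
    nodes.update({h: net.addHost(h) for h in hosts})
    for a, b in links:
        net.addLink(nodes[a], nodes[b])
    net.start()
    net.pingAll()
    net.stop()

if __name__ == '__main__':
    switches, hosts, links = build_mesh_plan()
    print(len(switches), len(hosts), len(links))  # 6 50 65 for scenario A
```

Changing `hosts_per_switch` to 30 or 60 yields scenarios B and C; `run()` is executed with root privileges on the Mininet VM while Floodlight listens on the remote machine.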

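The remote Floodlight instance in this test bed can also be driven from the same VM through the REST interface mentioned in Section II. The sketch below uses Floodlight's documented switch-listing and Static Flow Pusher endpoints; the controller address, the DPID and the exact response fields are assumptions that may vary between Floodlight versions.

```python
# Hedged sketch of talking to the remote Floodlight controller over REST.
# Endpoint paths follow Floodlight's documented REST API; the address,
# DPID and field values below are assumptions, not values from the paper.
import json
from urllib.request import Request, urlopen

CONTROLLER = 'http://192.168.56.101:8080'     # assumed Floodlight VM address

def build_flow_entry(dpid, out_port, name='demo-flow'):
    """JSON body for the Static Flow Pusher: match IPv4, output to a port."""
    return {'switch': dpid, 'name': name, 'priority': '100',
            'eth_type': '0x0800', 'active': 'true',
            'actions': 'output=%d' % out_port}

def connected_switches():
    """List the switches currently connected to the controller."""
    with urlopen(CONTROLLER + '/wm/core/controller/switches/json') as resp:
        return json.load(resp)

def push_static_flow(dpid, out_port):
    """Install a static flow entry on one switch."""
    body = json.dumps(build_flow_entry(dpid, out_port)).encode()
    req = Request(CONTROLLER + '/wm/staticflowpusher/json',
                  data=body, method='POST')
    with urlopen(req) as resp:
        return json.load(resp)                # controller's status message
```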
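In this test bed the iPerf reports are filtered with 'grep' and 'awk' before plotting with gnuplot; the same filtering step can be sketched in Python. The parser below assumes the interval-report line format of the iPerf 2.x client, which may differ in other iPerf versions.

```python
# Hedged sketch: convert iPerf interval reports into the two-column
# series that gnuplot expects (the role the paper assigns to grep/awk).
# The sample lines assume the iPerf 2.x client output format.
import re

INTERVAL = re.compile(
    r'\[\s*\d+\]\s+([\d.]+)\s*-\s*([\d.]+)\s+sec\s+'   # interval start-end
    r'[\d.]+\s+\w?Bytes\s+([\d.]+)\s+(\w?)bits/sec')   # transfer and rate

def parse_iperf(lines):
    """Yield (interval_end_seconds, throughput_in_mbits) pairs."""
    scale = {'': 1e-6, 'K': 1e-3, 'M': 1.0, 'G': 1e3}  # normalise to Mbit/s
    for line in lines:
        m = INTERVAL.search(line)
        if m:
            _start, end, rate, unit = m.groups()
            yield float(end), float(rate) * scale[unit]

sample = [
    '[  3]  0.0- 1.0 sec   112 MBytes   941 Mbits/sec',
    '[  3]  1.0- 2.0 sec   110 MBytes   923 Mbits/sec',
]
for t, mbit in parse_iperf(sample):
    print('%.1f %.1f' % (t, mbit))    # two columns, ready for gnuplot
```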
The central switch is further connected only to the five other switches and to no host directly. The purpose was to generate a bottleneck scenario for communication, which is the most practical issue in today's ISP networks. The performance is evaluated when TCP and UDP traffic is generated together and transmitted through the central switch. In this experiment, both UDP and TCP flows are generated dynamically using iPerf. iPerf is used to actively measure parameters such as duration, queue behavior, bandwidth, protocol, packet delivery ratio, drop rate and much more. Out of all the stated parameters, we have limited the analysis in this paper to throughput and latency. Here, the authors have evaluated TCP and UDP flow measurements of bandwidth and latency for all three scenarios, analyzed in the next section on performance statistics. As mentioned before, the experiment is designed in such a manner that a bottleneck situation is generated, but only up to the optimum level at which no loss is encountered. The flooding of packets into the network was limited such that no loss occurred and the network was utilized optimally throughout the 100-second simulation run.

For the first few seconds the network is in a warm-up state, and no tracing is considered for that duration. For the experiment, with a fixed number of switches and a different number of hosts per switch in each scenario, we simulated and logged the events between the first and last hosts of the network, to observe the odds faced by the hosts located at the largest distance from each other. To utilize the bandwidth and queuing resources of the network to the maximum, in keeping with a practical approach, we transmitted UDP datagrams of 1470 bytes with a TCP window size of 85.3 KBytes at a bandwidth of 5.99 Gbits/sec. iPerf was the right tool to obtain the desired statistics of network performance. Gnuplot is used to present the statistics in graphical form; filtering is done with the 'grep' and 'awk' commands for pattern matching and isolation of the required parameters.

IV. PERFORMANCE ANALYSIS

The experimental test bed is prepared as stated in the previous section, and tests are executed in simulation to investigate the throughput and latency over bottleneck network traffic. This section exhibits the results obtained from these simulation experiments. The simulation is configured with all the points stated in the previous section. The performance metrics used in this experiment are throughput and latency, as mentioned before, for a specific reason: an accurate measure of throughput is obtained using TCP flows, while latency is best observed using UDP flows. As stated in the previous section, we executed simulations in an environment close to real-life scenarios of connectivity between ISP and internet user, client and server, wired and wireless networks, etc. The resultant throughput graph, shown in Figure 3, plots the TCP flow for the entire network in all three scenarios. The first graph shows the central switch communicating with 50 hosts by means of the 5 different intermediate switches connecting each host to the others. The statistics are logged and plotted using gnuplot for each scenario under low QoS and a high throughput rate. It can be observed from the top graph of Figure 3 that the average throughput stays in the range of 575 MBps to 775 MBps; for the majority of the simulation time, the throughput remains around 731 MBps.

Fig. 3. Throughput for all the three scenarios.

These statistics are obtained when the network load is stable and the number of hosts is fixed at fifty. In fact, this can be considered a validation of the Floodlight controller used in the designed topology in the presence of a small number of hosts connecting to the network, which may be a small-scale factory, a business unit or a small home network. It is to be noted that two-way communication happens between the clients connected to the peripheral switches and a server connected to the controller. The middle graph of Figure 3 shows the similar parameter configuration but with 150 hosts
connecting to the network of 6 switches. As depicted in the middle graph, for scenario 2, Figure 3 shows the throughput obtained after simulating for about 100 seconds. It was observed from the plotted graph that the throughput stays between 450 and 600 Mbps; for the majority of the simulation duration the throughput is stable at 575 Mbps, which indicates that the throughput is stable and acceptable in the presence of a stable network load and, again, a fixed number of nodes. The throughput graph at the bottom of Figure 3, for scenario 3, is observed to lie in the range of 475 to 725 Mbps over the same simulation duration. Stability similar to the previous two scenarios' throughput graphs is observed, but with more variation; notice once more that for the majority of the simulation the throughput stays stable at 690 Mbps. It is observed that when the number of nodes connected to the switches increases, it imposes load on the network. The graphs of Figure 3 show high throughput variations because the bandwidth is optimally utilized by fifty nodes communicating in the presence of both TCP and UDP flows, but the throughput reduces when a higher number of nodes connect and communicate through the same number of switches, because the majority of packets spend a long time in the pipeline to reach the destination under heavy network load.

In Figure 4, the authors plot the latency observed for UDP flows communicating between the first and last nodes connected to the network through the six switches, in the three different scenarios.

It was observed in the first graph of Figure 4 that the mean observed latency was between 0.01 and 0.02, except for a few instances of high latency, in the range of three to five, for a few packets. This indicates that the drop rate is not even 1 percent of all the packets traversing the network. This observation was for the first scenario, where the number of communicating nodes is limited to fifty. But when the number of nodes increases with the same resources, the second and third graphs of Figure 4 show variations in behavior. It was observed in the second graph of Figure 4 that the latency reduces drastically to less than 0.01 and the number of exceptional instances is only two. This seems to be the ideal situation in the presence of interference. But in the third and bottom-most graph of Figure 4, it was observed that the latency increases again, reaching the range of 0.01 to 0.02, the same as in the first scenario. This is due to the optimum utilization of the bandwidth when connectionless communication happens on the same network. Being a mesh topology, a large number of alternate paths exist and no path gets congested during the entire communication, with an almost zero drop rate, which results in the lowest latency. In the last graph of Figure 4, it was observed that when a large number of packets are generated by the 300 communicating nodes, the latency observed for the first 3 seconds was very high, the time required for the burst of network traffic to settle during the first few seconds.

Fig. 4. Latency for all the three scenarios

V. CONCLUSION AND FUTURE SCOPE

With this paper, the authors have made an attempt to address the scalability features of the Floodlight controller by implementing various diversified scenarios in a simulated experimental environment. The authors have provided a clear idea of how to create an experimental test bed, with analysis of the obtained statistical results keeping performance as the central focus. We conclude this paper by giving a positive sign, without any doubt, to researchers looking to implement their ideas over the Floodlight Controller in the domain of Software Defined Networks. The controller not only provides simulation experimental test bed support but also allows clear analysis of the statistics obtained after the experiments are simulated. The tools suggested, simulated and shown through figures and graphs will help the research community to
further conduct such experiments in the future, implementing their desired parameters through these experiments. This paper also addresses the programmers, developers and newcomers in the area of SDN who are looking forward to touching the practical aspects of SDN by following the implementation details provided in the paper. Further, the research team will come up with a few more papers on the implementation of other SDN controllers in the coming future. The team has also planned to compare the SDN controllers once all the stellar controllers have been implemented and experimented with.

REFERENCES

[1] Asadollahi, S., Goswami, B. (2017). Revolution in Existing Network under the Influence of Software Defined Network. Proceedings of the 11th INDIACom, Delhi, March 1-3, 2017. IEEE Conference ID: 40353.
[2] McCauley, M. (2012). POX, from http://www.noxrepo.org/
[3] OpenDaylight, Linux Foundation Collaborative Project, 2013, from http://www.opendaylight.org
[4] Erickson, D. (2013). The Beacon OpenFlow controller. Proceedings of ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking II, pp. 13-18, 2013.
[5] Nippon Telegraph and Telephone Corporation, RYU network operating system, 2012, from http://osrg.github.com/ryu
[6] Gude, N. et al. (2008). NOX: Towards an operating system for networks. ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110.
[7] Asadollahi, S., Goswami, B. (2017). Software Defined Network, Controller Comparison. Proceedings of Tec'afe 2017, Vol. 5, Special Issue 2, April 2017. ISSN: 2320-9798.
[8] McKeown, N. et al. (2008). OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74.
[9] Smith, M. et al. (2014). OpFlex control protocol. Internet Engineering Task Force, from http://tools.ietf.org/html/draft-smith-opflex-00
[10] Enns, R., Bjorklund, M., Schoenwaelder, J., Bierman, A. (2011). Network configuration protocol (NETCONF). Internet Engineering Task Force, from http://www.ietf.org/rfc/rfc6241.txt
[11] Doria, A. et al. (2010). Forwarding and control element separation (ForCES) protocol specification. Internet Engineering Task Force, from http://www.ietf.org/rfc/rfc5810.txt
[12] Song, H. (2013). Protocol-oblivious forwarding: Unleash the power of SDN through a future-proof forwarding plane. Proceedings of ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking II, pp. 127–132.
[13] Sundaresan, S., de Donato, W., Feamster, N., Teixeira, R., Crawford, S. (2011). Broadband internet performance: a view from the gateway. Proceedings of ACM SIGCOMM, Aug. 2011.
[14] Casado, M., Freedman, M. J., Pettit, J., Luo, J., McKeown, N., Shenker, S. (2007). Ethane: taking control of the enterprise. SIGCOMM CCR, pp. 1–12.
[15] Project Floodlight, Floodlight. (2012). from http://floodlight.openflowhub.org/
[16] Lantz, B., Heller, B., McKeown, N. (2010). A network in a laptop: Rapid prototyping for software-defined networks. Proceedings of ACM SIGCOMM Workshop on Hot Topics in Networks, pp. 19:1–19:6.
[17] Pfaff, B., Davie, B. (2013). The Open vSwitch database management protocol. Internet Engineering Task Force, RFC 7047, from http://www.ietf.org/rfc/rfc7047.txt