
Multi-layer Traffic Engineering Experimental System in IP Optical Network


Daisaku Shimazaki, Eiji Oki and Kohei Shiomoto
NTT Network Service Systems Laboratories, NTT Corp.
3-9-11 Midori-cho Musashino-shi, Tokyo 180-8585 Japan
Email: {shimazaki.daisaku, oki.eiji, shiomoto.kohei}@lab.ntt.co.jp
Abstract-We developed a distributed-controlled multi-layer traffic engineering system. There are no previous works that report such a system, so its feasibility and scalability have not been confirmed. In this paper, we report an experiment on our system consisting of Internet protocol (IP) routers and optical cross-connects (OXCs). In this system, control is provided by generalized multi-protocol label switching (GMPLS), and the IP network topology and IP routes are dynamically re-configured to suit traffic characteristics. These functions prevent traffic congestion as well as any decrease in link utilization rates. We conducted experiments to test the behavior of the routers and OXCs depending on traffic characteristics. The experiments showed that the distributed-controlled multi-protocol label switching (MPLS)/GMPLS traffic engineering system is feasible and scalable.
I. INTRODUCTION

Broadband access lines bring various services to the IP network. P2P software and video delivery services are causing rapid increases in traffic loads, and new broadband services may cause unforeseeable fluctuations in traffic loads. A new broadband network that can carry the traffic of these services is required. This network should be manageable because it contains many types of service.
MPLS/GMPLS [1] traffic engineering can meet the above
requirement. We can strictly set the route of an MPLS/GMPLS
label-switched path (LSP) from source node to destination
node and collect information about LSPs, such as traffic
values. In an LSP network, we can control the assigned
bandwidth or quality-of-service (QoS) of every LSP because
the network operator or network element can acquire all LSP
information.
The centralized MPLS/GMPLS network, unfortunately, has scalability and robustness problems. In a centralized network, a single node controls the network: that specific node holds all information about the network and instructs all other nodes. It is difficult for such a node to be scalable and robust.
Therefore, we focused on the distributed MPLS/GMPLS network. Many studies have examined distributed MPLS/GMPLS networks [2]-[4]. We previously proposed a distributed control mechanism for the MPLS/GMPLS network [5]. These papers proposed algorithms for distributed LSP networks and investigated their performance. However, there is no experimental validation report. We must confirm the behavior of the distributed mechanism, its stability, and its processing capacity.
This paper introduces a distributed MPLS/GMPLS network experimental system called an IP optical network. We constructed the experimental system and confirmed distributed MPLS/GMPLS network control. This system can measure traffic rates and establish and delete optical LSPs depending on traffic demand.

1-4244-1206-4/07/$25.00 2007 IEEE
The rest of the paper is organized as follows. Section II introduces a distributed virtual network topology (VNT) reconfiguration method. Section III explains the distributed MPLS/GMPLS experimental system we constructed. Section IV describes the experiments we performed on the system. Section V gives the conclusion.
II. IP OPTICAL NETWORK

A. Virtual Network Topology (VNT)


A multi-layer GMPLS network [1] applies the concept of hierarchical LSPs. In such a network, the higher-layer network refers to a lower-layer LSP as a virtual link called a forwarding adjacency (FA). The virtual network consisting of FAs is called the VNT [6], [7]. This paper considers two layers of LSPs: packet LSPs and optical LSPs. Figure 1 shows that a lower-layer optical LSP is referred to as a virtual link by a higher-layer packet LSP, and the virtual network consisting of optical LSPs is called the VNT. Packet LSPs, which carry IP packets, are established on optical LSPs.

Fig. 1. Virtual Network Topology (VNT)

B. Traffic-driven VNT re-configuration


If the number of packet LSPs set in an optical LSP is
fixed, the utilization rate of the optical LSP becomes small
when the traffic carried by the packet LSPs becomes small. If
there are many optical LSPs with low utilization rates, network
utilization decreases, which raises costs. VNT reconfiguration,

which consists of dynamically establishing and deleting optical


LSPs and rerouting packet LSPs, can reduce the number of
router interfaces and wavelengths. At the same time, VNT
reconfiguration can prevent network congestion.
In the GMPLS architecture, a link-state routing protocol is used for topology discovery. Each node advertises the link state of all links originating from the node. The residual resources on each link are part of the link state. The link state is flooded throughout the entire network. When a node sets up an LSP to a destination, it calculates a feasible route for the LSP by running the constraint-based shortest path first (CSPF) algorithm. If a feasible route is found, the node initiates the LSP setup procedure.
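The CSPF computation described here can be sketched as a bandwidth-pruned Dijkstra search. This is a minimal Python sketch, not the routers' implementation; the link representation is an assumption for illustration.

```python
import heapq

def cspf_route(links, src, dst, demand):
    """Constraint-based shortest path first (illustrative sketch).

    links: dict mapping node -> list of (neighbor, cost, residual_bw).
    Links whose residual bandwidth is below the demand are pruned,
    then a plain Dijkstra shortest-path search is run.
    """
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, residual in links.get(u, []):
            if residual < demand:        # constraint: enough residual bandwidth
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                      # no feasible route found
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

If a feasible route exists, the node would then signal the LSP along the returned path; `None` corresponds to the case where no feasible route is found.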
In this scheme, each node runs the VNT calculation algorithm, which requires the current virtual network topology and the traffic demand of the packet LSPs. The information on the traffic demand of the packet LSPs must be advertised, so the GMPLS link-state routing protocol [9] is extended to include this traffic demand information. We need to avoid service disruption when an optical LSP is released: the packet LSPs must be rerouted before the underlying optical LSP is torn down. To advertise an optical LSP as dormant, the GMPLS routing protocol is extended. Once a node receives notice of the dormant link state, it reroutes the packet LSPs on the dormant optical LSP to non-dormant optical LSPs. After all packet LSPs are rerouted, the dormant optical LSP is torn down. In this way, the graceful teardown of an LSP is implemented in a distributed manner.
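The graceful-teardown coordination can be sketched as follows. This is a simplified Python sketch; the VNT data structure and the arbitrary choice of rerouting target are my assumptions, not the paper's algorithm.

```python
def handle_dormant_advertisement(vnt, dormant_lsp):
    """On receiving a dormant-link advertisement, reroute the packet LSPs
    riding on the dormant optical LSP onto non-dormant optical LSPs.
    `vnt` maps optical LSP name -> list of packet LSP names on it."""
    rerouted = []
    for packet_lsp in list(vnt.get(dormant_lsp, [])):
        # Pick any non-dormant optical LSP as the rerouting target
        # (a real node would run CSPF here).
        target = next((ol for ol in vnt if ol != dormant_lsp), None)
        if target is None:
            break
        vnt[dormant_lsp].remove(packet_lsp)
        vnt[target].append(packet_lsp)
        rerouted.append(packet_lsp)
    return rerouted

def try_teardown(vnt, dormant_lsp):
    """Tear the dormant optical LSP down only once it carries no packet LSPs."""
    if not vnt.get(dormant_lsp):
        vnt.pop(dormant_lsp, None)
        return True
    return False
```

The teardown is deferred until the dormant LSP is empty, which mirrors the "reroute first, then tear down" ordering described above.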

Fig. 2. Traffic-driven VNT re-configuration

The VNT design problem has been extensively studied for static traffic demand [10], [11]. The VNT can be designed for a given initial traffic demand. As the network grows, the traffic demand can significantly differ from the initial demand. Reconfiguration of the VNT would be required to adapt to such traffic demand changes.
Several methods for VNT reconfiguration have been proposed [12], [13]. Those methods assume that the future traffic demand is given and aim at reducing topology change during the reconfiguration process: the new VNT is determined from the current one to fit the given traffic demand. In real networks, however, the traffic demand is hard to anticipate accurately and fluctuates frequently. Traffic measurement and VNT reconfiguration should be orchestrated to cope with unpredictable traffic demand, so a method for reconfiguring the VNT under dynamic traffic demand changes is required.
The problem of VNT reconfiguration under dynamic traffic
changes was studied in a recent work [14]. The method
uses an off-line algorithm for time-variant offered traffic. It assumes that a set of traffic matrices at different instants is known a priori. Another work proposes an on-line reconfiguration method of the VNT under dynamic traffic changes [15]. The method monitors the traffic instead of assuming any future traffic pattern. Simple adjustments are made to the current VNT to mitigate congestion and reclaim network resources from under-utilized optical LSPs if possible. The method is based on centralized control, which collects the traffic demand measurements and heuristically calculates the new VNT to suit the latest traffic demand information. The centralized


controller initiates optical LSP setup/teardown procedure. A
heuristic algorithm is used to calculate the VNT. A new optical
LSP is established between the end nodes of multi-hop traffic
with the highest load using the most congested optical LSP to
mitigate the congestion.
A distributed VNT reconfiguration mechanism that can
handle unpredictable traffic demands is studied in [5]. In the
distributed approach, each node decides whether it should
initiate optical LSP setup/teardown. Unless the coordination
mechanism is properly implemented, an inconsistent VNT
might be formed. The proposed method uses a link-state routing protocol for each node to share the same virtual topology
and the traffic demand over each optical LSP, which is
measured at the originating node. Each node calculates the new
VNT using a simple heuristic algorithm, compares it with the
old one, and initiates optical LSP setup/teardown if necessary.
C. VNT re-configuration algorithm
We define two traffic value thresholds for VNT reconfiguration: TH, the congestion threshold, and TL, the low-utilization threshold. Traffic loads on the optical LSPs are measured at the ingress nodes of the LSPs, and VNT reconfiguration is performed when a traffic load exceeds TH or falls under TL.
When traffic congestion, as determined against TH, occurs, the packet LSP with the largest traffic value in the congested optical LSP is selected to be moved. Next, the router tries to establish an optical LSP between the ingress and egress nodes of the target packet LSP. If the optical LSP is established, the target packet LSP is moved onto it. This sequence is repeated until the congestion is resolved.
The router also tries to delete any optical LSP whose traffic load is under TL. Before the deletion, all of its packet LSPs are moved to other optical LSPs without triggering congestion. This sequence is also repeated until the link's low utilization is resolved.
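The congestion-side (TH) rule can be sketched as follows. This is illustrative Python over a simplified data model; `establish_lsp` stands in for the RSVP-TE setup procedure and is a hypothetical callback, not the routers' actual interface.

```python
def reconfigure_on_congestion(optical_lsps, th, establish_lsp):
    """One round of the TH-side rule (sketch): while an optical LSP
    carries more than TH, move its largest packet LSP onto a newly
    established optical LSP between that packet LSP's end nodes.

    optical_lsps: dict name -> list of (packet_lsp_name, traffic) tuples.
    establish_lsp: callback returning the new optical LSP's name,
                   or None if setup fails (no free interfaces/wavelengths).
    """
    moved = []
    for name, packet_lsps in list(optical_lsps.items()):
        while packet_lsps and sum(t for _, t in packet_lsps) > th:
            # Select the packet LSP with the largest traffic value.
            packet_lsps.sort(key=lambda e: e[1], reverse=True)
            target = packet_lsps[0]
            new_ol = establish_lsp(target[0])
            if new_ol is None:            # setup failed: stop for this LSP
                break
            packet_lsps.pop(0)
            optical_lsps.setdefault(new_ol, []).append(target)
            moved.append((target[0], new_ol))
    return moved
```

The TL-side rule would be the mirror image: drain all packet LSPs off an under-utilized optical LSP (without pushing any target above TH) and then delete it.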

Fig. 3. VNT re-configuration algorithm

D. Advertisement of traffic loads


As mentioned above, traffic information is advertised by extended open shortest path first (OSPF). Traffic loads are measured at the ingress nodes of packet and optical LSPs. The utilized bandwidth information is advertised in the OSPF format shown in Figure 4. The sub-TLV type value is 32773 for packet LSPs and 32774 for optical LSPs. Utilized bandwidth is given in units of megabytes per second in IEEE floating-point format.

Fig. 4. OSPF extension for advertisement of bandwidth utilization (sub-TLV fields: Type, Length, Utilized bandwidth)
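Assuming the common TE sub-TLV layout for Figure 4 (16-bit type, 16-bit length, then the value), the encoding could be sketched as follows; the field widths are an assumption based on the standard OSPF-TE TLV format, not taken from the paper.

```python
import struct

PACKET_LSP_SUBTLV = 32773   # sub-TLV type for packet LSPs (from the text)
OPTICAL_LSP_SUBTLV = 32774  # sub-TLV type for optical LSPs

def pack_utilized_bandwidth(subtlv_type, bandwidth):
    """Pack the Fig. 4 sub-TLV: 16-bit type, 16-bit length, then the
    utilized bandwidth as a big-endian IEEE single-precision float."""
    value = struct.pack("!f", bandwidth)
    return struct.pack("!HH", subtlv_type, len(value)) + value

def unpack_utilized_bandwidth(blob):
    """Inverse of pack_utilized_bandwidth: return (type, bandwidth)."""
    subtlv_type, length = struct.unpack("!HH", blob[:4])
    (bandwidth,) = struct.unpack("!f", blob[4:4 + length])
    return subtlv_type, bandwidth
```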

Figure 8 shows the software specifications. We used Red Hat Linux patched with MPLS-Linux [17], open-source software, to realize label-switched routing. To forward packets, the parameter net.ipv4.ip_forward in the /etc/sysctl.conf file must be enabled.

III. EXPERIMENTAL NETWORK

The experimental network system consisted of GMPLS routers, GMPLS OXCs, traffic generators, and a VNT viewer. This section describes each piece of equipment.
A. Multi-layer network
Figure 5 shows the experimental network.
Fig. 7. Hardware specifications of PCs for GMPLS router

No. | Item | Spec
1 | CPU | Pentium 4, 3.2 GHz
2 | RAM | 2 GB DDR2-SDRAM 400 MHz ECC
3 | NIC | 100BASE-T x 3 or more (control plane IF, one or more GMPLS data plane IFs, one or more MPLS data plane IFs)

Fig. 8. Software specifications for GMPLS router

No. | Item | Spec
1 | OS | Red Hat Linux 9, kernel 2.4.20 + MPLS-Linux 1.172
2 | GMPLS controller | Self-manufactured software (OSPF-TE, RSVP-TE)

Figure 9 shows the hardware and software structure of the GMPLS router PC. The GMPLS controller gathers traffic information from the network interface cards via the simple network management protocol (SNMP).

Fig. 5. Traffic-driven VNT reconfiguration experimental network (GMPLS routers, OXCs, and traffic generators)

In this multi-layer network, optical LSPs, whose LSP type is FSC, are established between GMPLS routers. Packet LSPs are established via optical LSPs terminated in GMPLS routers. In this experimental network, packet LSPs are represented by MPLS LSPs, which are terminated in GMPLS routers, the same as optical LSPs. Therefore, routers in this network handle both GMPLS and MPLS entities.

Fig. 9. PC module structure for GMPLS router

Fig. 6. Hierarchical network

B. Router
We realized GMPLS routers on commercial PCs. Figure 7 shows the hardware specifications of the PCs. Each PC had several network interface cards. One interface card is used for the GMPLS control plane; it sends and receives OSPF advertisement packets and RSVP-TE signaling packets, which establish or release LSPs. The other interface cards are used to implement the GMPLS and MPLS data planes. Each PC had as many data plane interfaces as the number of TE-links.

Figure 10 shows the software module structure of the GMPLS controller. In this experimental system, we used self-written GMPLS controller software. If the NetBSD operating system is used as the platform of a GMPLS PC router, some open-source MPLS software is available, such as AYAME [16].
C. OXC
The GMPLS OXC consists of commercially available optical switches and the GMPLS controller mentioned above (Figure 10). We realized the GMPLS controller by adding an optical switch control function to the controller designed for GMPLS routers. In contrast, TRM and CSPF were deleted because the functions of gathering traffic information and route calculation were not needed. The controller operates as follows:
- Make a configuration file for the GMPLS controller software; SW-PORT is related to SW-TYPE, IF-ID, and Label.
- Compare the SW-TYPE, IF-ID, and Label in the RSVP message against the GMPLS controller configuration.
- Change the optical switch.
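The matching steps above can be sketched as a table lookup. This is a Python sketch; the configuration keys and message fields are illustrative assumptions modeled on Fig. 12, not the controller's actual code.

```python
def find_sw_port(config, sw_type, if_id, label, direction):
    """Look up the optical switch port for the given RSVP-TE fields.
    `config` mirrors the controller configuration file:
    (SW-TYPE, IF-ID, label, direction) -> SW-PORT."""
    return config.get((sw_type, if_id, label, direction))

def handle_rsvp_message(config, msg):
    """Compare the RSVP message fields against the configuration and
    return the port to switch, or None when nothing matches."""
    return find_sw_port(config, msg["sw_type"], msg["if_id"],
                        msg.get("label"), msg["direction"])

# Hypothetical configuration entries in the spirit of Fig. 12
# (FSC entries carry no label, hence `None`).
config = {
    ("FSC", "A_1", None, "upstream"): "X_1",
    ("FSC", "A_1", None, "downstream"): "X_1",
}
```

On a match, the controller would then drive the optical switch to connect the returned port.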

Fig. 13. Traffic generator controller unit: traffic volume f_i(x) as (1) constant, (2) exponential, and (3) Pareto distributions

Fig. 10. GMPLS controller module structure (GMPLS control plane signaling, RSVP-TE over TE-links, optical switch I/F)

Fig. 14. Traffic generator system structure

D. Traffic generator

TrfGen is a tool that enables traffic to be generated from any node to any node; it consists of a traffic transceiver unit and a controller unit. Figure 13 shows the characteristics of the controller unit of TrfGen. Traffic patterns in IP networks have been studied in [18]-[20]. We can define a traffic pattern for each pair of sender and receiver and program the patterns as a traffic schedule. We can set the probability model function of the packet size and inter-departure time of the traffic pattern to an exponential, Pareto, or constant distribution. Packets over the MTU size will be divided and sent. Figure 14 shows the preference view of the traffic profile. Figure 15 shows the schedule preference view of the traffic pattern. The transceiver unit of TrfGen is based on D-ITG version 2.4 [21], which is free software.
over

SW-TYPE IF-ID
FSC

A_i

LSC

A_i

UPSTREAM
/LABEL

D.C.

X_i (upstream)
X_i (downstream)

X_ij (upstream)

(upstream)
L
I__ I______ (downstream)

Fig. 12.

SW-PORT

X-ij
(downstream)

Configuration table of GMPLS controller for OXC

Preference view of traffic pattern profile

E. VNT Viewer
VNT viewer is the tool that enables the network status to be
visible. Figure 16 shows view of it. It receives IP and optical
layer traffic engineering information by OSPF and displays
it as topology. It also gathers route information of LSP from
ingress router. It can display the traffic value graph of LSPs.
The traffic value information is gathered from ingress router
by SNMP. We can grasp network status in real time from VNT
viewer.
IV. VNT RECONFIGURATION EXPERIMENT

A. Feasibility of VNT reconfiguration experiment with a traffic increase/decrease scenario

Packet LSPs were established between all routers. Optical LSPs were established between Router A - OXC 1 - Router B (optical LSP 1) and between Router B - OXC 2 - Router C (optical LSP 2). Packet LSP 1 between Router A and Router B was established via optical LSP 1. Packet LSP 2 between Router B and Router C was established via optical LSP 2. Packet LSP 3 between Router A and Router C was established via optical LSPs 1 and 2.

Constant traffic patterns were loaded onto packet LSPs 1 and 2, and fluctuating traffic was loaded onto packet LSP 3, with TH = 80 kbps and TL = 10 kbps. Figure 17 shows that VNT reconfiguration was performed when traffic following a Pareto distribution with shape parameter 1.9 was loaded. The parameter ε in Figure 17 is the moving-average parameter applied to the measured traffic value; see Equation (1). R_n is the statistical rate, ε is the moving-average parameter, and X_n is the rate measured at the n-th measurement.

Fig. 17. VNT reconfiguration experiment with traffic increase/decrease scenario (measured and average rates vs. time [hour] for ε = 1 and ε = 0.1, Pareto shape α = 1.9; thresholds TH and TL and the theoretical reconfiguration time are shown)

Fig. 15. Preference view of traffic pattern schedule

Fig. 18. Load of OSPF advertisement (OSPF load vs. number of nodes, 1 to 1000, log-log scale)

Fig. 16. VNT Viewer (IP-layer view: IP topology, MPLS LSP route, MPLS LSP traffic graph; optical-layer view: fiber topology, optical LSP route; status view)

R_n = εX_n + (1 - ε)R_{n-1}    (1)

Equation (1) shows that a small ε emphasizes the past statistical rate.

Figure 17 shows that VNT re-configuration was successfully performed when the measured traffic value went over TH or under TL, and that the traffic pattern with shape parameter 1.9 did not need smoothing with the moving-average parameter.
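The moving-average smoothing of Equation (1) can be sketched as follows (Python; the batch interface over a list of samples is my choice for illustration):

```python
def smooth(rates, epsilon):
    """Exponentially weighted moving average of Equation (1):
    R_n = epsilon * X_n + (1 - epsilon) * R_{n-1}.
    A small epsilon weights the past statistical rate more heavily."""
    r = rates[0]            # initialize the statistical rate
    out = [r]
    for x in rates[1:]:
        r = epsilon * x + (1 - epsilon) * r
        out.append(r)
    return out
```

With ε = 1 the smoothed rate tracks the raw measurements exactly, which matches the ε = 1 curve in Figure 17; smaller ε values damp the fluctuations.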
B. Scalability
As mentioned above, the traffic value information is advertised by OSPF. We should investigate the OSPF load because the OSPF advertisement interval will be short. Figure 18 shows the estimated OSPF load and the measured plots; each plot is the average of three measurements. We estimated the OSPF load using the OSPF packet format [22], [9] and Figure 4, under the following conditions: average nodal degree 2.2, OSPF update interval 2 seconds, and packet LSPs established between all nodes. We also measured the OSPF load in the experimental system; the measured plots fell almost on the estimated line. Figure 18 shows that the control plane of a 1000-node network should have a throughput of about 15 Mbps. Therefore, the method in which the traffic value information is advertised by extended OSPF has no scalability problem in the aspect of OSPF advertisement load.
We also measured the consumed processor power of the GMPLS router, shown in Figure 19. Each plot is the average of three measurements. Figure 19 shows that there is little correlation between the gathering period of the traffic value information and the average CPU utilization ratio. Therefore, the method in which the traffic value information is advertised by extended OSPF has few scalability problems in the aspect of OSPF processing load.
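A back-of-envelope estimator in the spirit of this estimation could look as follows. This is a Python sketch; the per-LSA size and the flooding model are my assumptions for illustration, not the paper's exact model, so it is not expected to reproduce the 15 Mbps figure precisely.

```python
def ospf_advertisement_load(num_nodes, avg_degree=2.2, interval=2.0,
                            lsa_bytes=100):
    """Rough estimate of the OSPF traffic-advertisement load in bps.

    Assumed model: each node originates one LSA of `lsa_bytes` per
    optical link plus one per packet LSP (packet LSPs between all node
    pairs), re-advertised every `interval` seconds.
    """
    lsas = num_nodes * avg_degree + num_nodes * (num_nodes - 1)
    return lsas * lsa_bytes * 8 / interval
```

Because the packet-LSP term grows quadratically with the number of nodes, the estimated load grows roughly as N^2, consistent with the steep slope of the estimated line in Figure 18.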
C. Packet loss experiment caused by make-before-break
In this experiment, make-before-break was performed for MPLS LSP rerouting since this should, in theory, prevent packet loss. Make-before-break is the procedure of rewriting the ingress and egress router label tables; see Figure 20.
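The three steps of the procedure can be sketched over a toy label table as follows (Python sketch; the data structure is an assumption, not the routers' actual tables):

```python
def make_before_break(label_table, lsp, new_label):
    """Make-before-break rerouting sketch: the new label table entry is
    installed before the old one is removed, so the LSP always has at
    least one forwarding entry during the switchover."""
    entries = label_table[lsp]            # installed labels, e.g. [old]
    entries.append(new_label)             # (1) add new label table entry
    entries[0], entries[1] = entries[1], entries[0]  # (2) switch to new entry
    old_label = entries.pop()             # (3) delete old label table entry
    return old_label
```

At no point between steps (1) and (3) is the entry list empty, which is why the procedure should, in theory, avoid packet loss.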
We measured packet loss in the network topology described in Section IV-A. The threshold traffic rates were TH = 100 kbps, 1 Mbps, and 10 Mbps, and TL = 4 kbps, 40 kbps, and 400 kbps. Figure 21 shows the result: no packet loss occurred when the traffic rate was decreasing, while some packet loss occurred when it was increasing.
The possible causes of the packet loss are:
- packet order reversal,
- processor power shortage of the hardware,
- others.
We first captured packets at the egress node interface to check whether packet order reversal occurred during VNT reconfiguration.
Fig. 19. Consumed CPU power for processing traffic value advertisement (average CPU utilization vs. gathering period [sec] for Routers 1, 2, and 3)

Fig. 20. Make-before-break procedure (ingress/egress signaling: RSVP PATH, RSVP RESV, RSVP CONF; (1) add new label table entry, (2) change label table entry, (3) delete old label table entry)

We recognized, however, that packets were not forwarded to the next hop at the ingress node for about 1 second. We then checked the next candidate cause: we assumed that the PC router did not have enough processor power to forward the packets without loss, and measured packet loss at different traffic rates. Figure 21 shows the results. In our experimental system, packet loss occurred at 100 kbps in the increasing case. In contrast, the router PC could forward packets at 400 kbps without packet loss in the decreasing case. Comparing both experiments, we conclude that a processor power shortage of the hardware was not the cause of the packet loss.
We made additional investigations, and it became clear that it takes the Linux kernel a significant time to recognize a new interface.
Fig. 21. Measurement result of packet loss in VNT reconfiguration with make-before-break

Threshold TH (VNT reconfiguration with increasing traffic): 100 kbps: occasional packet loss; 1 Mbps: occasional packet loss; 10 Mbps: occasional packet loss
Threshold TL (VNT reconfiguration with decreasing traffic): 4 kbps: no packet loss; 40 kbps: no packet loss; 400 kbps: no packet loss
(Each test was tried three times.)

V. CONCLUSION
In this paper, we developed IP optical traffic engineering technologies to address the above requirements. We reported an experiment on IP optical traffic engineering in which IP routers and OXCs are controlled by GMPLS, and the IP network topology and IP routing are dynamically re-configured depending on traffic increases and decreases. These functions prevent traffic congestion and decreases in the link utilization ratio. We also confirmed the scalability of our distributed network control system both analytically and experimentally. Additionally, it became clear that the make-before-break method prevented packet loss.
REFERENCES
[1] E. Mannie, "Generalized Multi-Protocol Label Switching (GMPLS) Architecture," IETF RFC, RFC 3945, Oct. 2004.
[2] M. Kodialam and T. V. Lakshman, "Dynamic Routing of Restorable
Bandwidth-Guaranteed Tunnels Using Aggregated Network Resource
Usage Information," IEEE/ACM Trans. Networking, vol. 11, no. 3,
pp.399-410, Jun. 2003.
[3] S. Butenweg, "Two distributed reactive MPLS traffic engineering mechanisms for throughput optimization in best effort MPLS networks," Proc.
of ISCC 2003, vol. 1, pp. 379-384, Jul. 2003.
[4] B. Dekeris and L. Narbutaite, "Traffic control mechanism within MPLS
networks," Proc. of ITI 2004, vol. 1, pp. 603-608, Jun. 2004.
[5] K. Shiomoto, E. Oki, W. Imajuku, S. Okamoto, N. Yamanaka, "Distributed Virtual Network Topology Control Mechanism in GMPLS-Based
Multiregion Networks," IEEE JSAC, Vol. 21, No. 8, pp. 1254-1262,
Oct. 2003.
[6] K. Shiomoto, D. Papadimitriou, J.L. Le Roux, M. Vigoureux, D. Brungard, "Requirements for GMPLS-based multi-region and multi-layer
networks (MRN/MLN)," IETF draft, draft-ietf-ccamp-gmpls-mln-reqs-02.txt, Oct. 2006, work in progress.
[7] J.L. Le Roux, D. Brungard, E. Oki, D. Papadimitriou, K. Shiomoto,
M. Vigoureux, "Evaluation of existing GMPLS Protocols against Multi
Layer and Multi Region Networks (MLN/MRN)," IETF draft, draft-ietf-ccamp-gmpls-mln-eval-02.txt, Oct. 2006, work in progress.
[8] K. Kompella and Y. Rekhter, "Routing Extensions in Support of Generalized Multi-Protocol Label Switching (GMPLS)," IETF RFC, RFC 4202,
Oct. 2005.
[9] K. Kompella and Y. Rekhter, "OSPF Extensions in Support of Generalized
Multi-Protocol Label Switching (GMPLS)," IETF RFC, RFC 4203, Oct.
2005.
[10] R. Ramaswami and K. N. Sivarajan, "Design of logical topologies for
wavelength-routed optical networks," IEEE J. Selected Areas Commun.,
vol. 14, pp. 840 - 851, June 1996.
[11] B. Mukherjee, D. Banerjee, S. Ramamurthy, and A. Mukherjee, "Some
principles for designing a wide-area WDM optical network," IEEE/ACM
Trans. Networking, vol. 4, pp. 684 - 696, Oct. 1996.
[12] J.-F. P. Labourdette, G. W. Hart, and A. S. Acampora, "Logically
rearrangeable multihop lightwave networks," IEEE Trans. Commun.,
vol.39, pp. 1223 - 1230, Aug. 1991.
[13] D. Banerjee and B. Mukherjee, "Wavelength-routed optical networks:
Linear formulation, resource budgeting tradeoffs, and a reconfiguration
study," IEEE/ACM Trans. Networking, vol. 8, pp. 598 - 607, Oct. 2000.
[14] F. Ricciato, S. Salsano, A. Belmonte, and M. Listanti, "Off-line configuration of a MPLS over WDM network under time-varying offered
traffic," in Proc. IEEE INFOCOM, vol. 1, June 2002, pp. 57 - 65.
[15] A. Gencata and B. Mukherjee, "Virtual-topology adaptation for WDM
mesh networks under dynamic traffic," in Proc. IEEE INFOCOM, vol. 1,
June 2002, pp. 48 - 56.
[16] http://www.ayame.org/index.php
[17] http://mpls-linux.sourceforge.net/
[18] K. Thompson, G.J. Miller, R. Wilder, "Wide-area Internet traffic patterns
and characteristics," IEEE Network, Vol. 11, Issue 6, pp. 10-23, Nov.-Dec.
1997.
[19] V. Paxson and S. Floyd, "Wide Area Traffic: The Failure of Poisson
Modeling," IEEE/ACM Trans. on Networking, pp. 236-244, June 1995.
[20] W. Leland, M. Taqqu, W. Willinger, and D. Wilson, "On the Self-Similar Nature of Ethernet Traffic (Extended Version)," IEEE/ACM Trans.
on Networking, pp. 1-15, Feb. 1994.
[21] http://www.grid.unina.it/software/ITG/download.php
[22] J. Moy, "OSPF Version 2," IETF RFC, RFC 2328, Apr. 1998.
