Design and Implementation of MPLS Network Simulator (MNS)

Gaeil Ahn and Woojik Chun
Department of Computer Engineering, Chungnam National University, Korea
{fog1, chun}@ce.cnu.ac.kr

1. Introduction
This document describes the design and implementation of the MPLS Network Simulator (MNS), which supports the establishment of CR-LSPs (Constraint-based Routing Label Switched Paths) for QoS traffic as well as basic MPLS (Multi-Protocol Label Switching) [2][3][5] functions such as LDP (Label Distribution Protocol) [1][4] and label switching. To support these, MNS consists of several components: CR-LDP, MPLS Classifier, Service Classifier, Admission Control, Resource Manager, and Packet Scheduler. To verify the accuracy and efficiency of MNS, two examples are simulated and evaluated: one simulates traffic flows with different QoS requirements, and the other simulates resource preemption.
The rest of this document is organized as follows. Section 2 gives an overview of MNS, and its detailed design and implementation are described in Section 3. Simple examples are simulated and evaluated in Section 4. Finally, a conclusion is given in Section 5.
2. Overview of MNS
This section describes the purpose, scope, architecture, and capability of MNS. The proposed MNS has been implemented by extending the Network Simulator (NS) [6][7]. NS, an IP-based simulator, began as a variant of the REAL network simulator implemented at UC Berkeley in 1989. NS version 2 is now available as a result of the VINT project.
2.1 Purpose and Scope
The primary purpose of this work is to develop a simulator that enables users to simulate various MPLS applications without constructing a real MPLS network. The MPLS simulator is designed under the following considerations:
1. Extensibility -- faithfully follow the object-oriented concept so as to support various sorts of MPLS applications.
2. Usability -- design so that users may easily learn and use it.
3. Portability -- minimize the modification of NS code so as not to be tied to a specific NS version.
4. Reusability -- design so as to aid in developing a real LSR switch.
The implementation scope of the MPLS simulator is as follows:
1. Label Switching -- label swapping/stacking operation, TTL decrement, and penultimate hop popping
2. LDP -- handling LDP messages (Request, Mapping, Withdraw, Release, and Notification)
3. CR-LDP -- handling CR-LDP messages
The capability of the MPLS simulator related to setting up LSPs is as follows:
1. In LSP Trigger Strategy -- support control-driven and data-driven trigger.
2. In Label Allocation and Distribution Scheme -- support only downstream scheme in control-driven trigger, and
both upstream and downstream-on-demand scheme in data-driven trigger.
3. In Label Distribution Control Mode -- support only independent mode in control-driven trigger, and both
independent and ordered mode in data-driven trigger.
4. In Label Retention Mode -- support only conservative mode.
5. ER-LSP based on CR-LDP -- established based on the path information pre-defined by the user.
6. CR-LSP based on CR-LDP -- established based on parameters such as traffic rate and buffer size.
7. Resource Preemption -- preempt the resources of an existing CR-LSP using setup-priority and holding-priority.
8. Flow Aggregation -- aggregate fine flows into a coarse flow.
2.2 Conceptual model of MNS supporting QoS
Figure 1: Conceptual model of MNS (an MPLS node comprising the LDP/CR-LDP, MPLS Classifier, Address Classifier, Service Classifier, Admission Control, Resource Manager, Routing Protocol, and Packet Scheduler components, attached to a link)
Figure 1 shows a conceptual model of an MPLS node supporting QoS. MNS consists of components including 'CR-LDP', 'MPLS Classifier', 'Service Classifier', 'Admission Control', 'Resource Manager', and 'Packet Scheduler'.
The functions of these components can be described as follows (a compact interface sketch follows the list):
- LDP/CR-LDP -- generates or processes LDP/CR-LDP messages
- MPLS Classifier -- executes label operations such as push, pop, and swap for MPLS packets
- Service Classifier -- classifies the service that should be applied to an incoming packet by using label and interface information or the CoS field of the MPLS shim header, and associates each packet with the appropriate reservation
- Admission Control -- examines the Traffic Parameters of CR-LDP and determines whether the MPLS node has sufficient available resources to supply the requested QoS
- Resource Manager -- creates/deletes queues on demand and manages the resource information
- Packet Scheduler -- manages the packets in the queues so that they receive the required service
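To make the component roles concrete, the compact C++ sketch below expresses them as abstract interfaces. The class and method names (e.g., serviceOf, createQueue) are illustrative assumptions and do not reproduce the actual MNS source.

// Illustrative component interfaces; names are assumptions, not the MNS code.
struct Packet;                        // MPLS/IP packet (details omitted)
struct TrafficParams;                 // Traffic Parameters carried by CR-LDP

class MPLSClassifier {                // push/pop/swap label operations
public:
    virtual void push(Packet& p) = 0;
    virtual void pop(Packet& p) = 0;
    virtual void swap(Packet& p) = 0;
    virtual ~MPLSClassifier() = default;
};

class ServiceClassifier {             // maps a packet to its reservation
public:
    virtual int serviceOf(const Packet& p) = 0;
    virtual ~ServiceClassifier() = default;
};

class AdmissionControl {              // checks requested QoS against resources
public:
    virtual bool admit(const TrafficParams& tp) = 0;
    virtual ~AdmissionControl() = default;
};

class ResourceManager {               // creates/deletes queues on demand
public:
    virtual int createQueue(const TrafficParams& tp) = 0;
    virtual void deleteQueue(int serviceId) = 0;
    virtual ~ResourceManager() = default;
};

class PacketScheduler {               // serves per-class queues (CBQ in MNS)
public:
    virtual void enqueue(int serviceId, Packet& p) = 0;
    virtual ~PacketScheduler() = default;
};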
3. Design and Implementation of MNS
3.1 Label Switching
The MPLS simulator has been implemented by extending NS, which is an IP-based simulator. In NS, a node consists of agents and classifiers. An agent is the sender/receiver object of a protocol, and a classifier is the object responsible for packet classification. To build a new MPLS node, the MPLS Classifier and an LDP agent are inserted into the IP node.
Figure 2: Architecture of MPLS Node for Label Switching (the MPLS Classifier looks up the LIB table for labeled packets and the PFT table for unlabeled packets; the Addr Classifier performs L3 forwarding and the Port Classifier delivers packets to local agents such as the CR-LDP agent)
The architecture of the MPLS node for label switching is shown in Figure 2. When an MPLS node receives a packet, it operates as follows:
1. The MPLS Classifier determines whether the received packet is labeled or unlabeled. If it is labeled, the MPLS Classifier executes label switching for the packet. If it is unlabeled but an LSP for it exists, it is handled like a labeled packet. Otherwise, the MPLS Classifier sends it to the Addr Classifier.
2. The Addr Classifier executes L3 forwarding for the packet.
3. If the next hop of the packet is the node itself, the packet is sent to the Port Classifier.
For label switching, two tables are defined as follows (a sketch of the lookup flow is given after the list):
- Partial Forwarding Table (PFT) -- a subset of the Forwarding Table, used to map IP packets onto LSPs at the ingress LSR. It consists of FEC, FlowID, and LIBptr. The LIBptr is a pointer that indicates an entry of the LIB table.
- Label Information Base (LIB) -- holds information on the established LSPs and is used to provide label switching for labeled packets. It consists of in/out-label and in/out-interface.
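The following C++ sketch illustrates the lookup flow described above, assuming simplified PFT/LIB entries. The structures and the classify function are illustrative assumptions, not the actual MNS classifier code.

// Illustrative sketch of the PFT/LIB lookup performed by the MPLS Classifier.
#include <map>
#include <vector>

struct LIBEntry {                 // one row of the Label Information Base
    int inIface, inLabel;
    int outIface, outLabel;
};

struct PFTEntry {                 // one row of the Partial Forwarding Table
    int flowId;
    int libPtr;                   // index into the LIB table, -1 if no LSP yet
};

struct Packet { int label; int fec; };   // label < 0 means "unlabeled"

// Returns the outgoing interface, or -1 if the packet must be handed to the
// Addr Classifier for ordinary L3 forwarding.
int classify(Packet& p,
             const std::map<int, PFTEntry>& pft,     // keyed by FEC
             const std::vector<LIBEntry>& lib) {
    if (p.label >= 0) {                              // labeled: LIB lookup, swap
        for (const LIBEntry& e : lib)
            if (e.inLabel == p.label) {
                p.label = e.outLabel;
                return e.outIface;
            }
        return -1;                                   // unknown label
    }
    auto it = pft.find(p.fec);                       // unlabeled: map onto an LSP
    if (it != pft.end() && it->second.libPtr >= 0) {
        const LIBEntry& e = lib[it->second.libPtr];
        p.label = e.outLabel;                        // push label at ingress LSR
        return e.outIface;
    }
    return -1;                                       // no LSP: L3 forwarding
}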
3.2 MPLS Real-time Traffic Processing
To support MPLS real-time traffic, the Service Classifier component has been designed and implemented. CBQ, which was already implemented in NS, is selected for the Packet Scheduler component. In the real world, the Packet Scheduler would be located at nodes; in NS, however, it is implemented at links.
In MNS, a table is defined to maintain information on the established ER-LSPs: the Explicit Route information Base (ERB). The ERB holds information on each established ER-LSP, such as its LSPID and ServiceID.
Figure 3: MPLS QoS traffic processing of MPLS node and Link (the MPLS Classifier looks up the LIB table for the outgoing label and interface and the ERB table for the ServiceID; the Service Classifier then hands the packet to the CBQ packet scheduler of the outgoing link)
Figure 3 shows the MPLS QoS traffic processing of an MPLS node and link. When an MPLS data packet arrives at the MPLS node, it enters the MPLS Classifier. The node indexes the LIB table for the outgoing label and outgoing interface to perform the label operation, and then the ERB table for the ServiceID to provide the queuing service. According to the class the packet belongs to, the packet is queued at the corresponding buffer in CBQ, where it is served and transmitted on the outgoing interface.
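The following C++ sketch illustrates this step under simplified assumptions: an ERB keyed by LSPID and a per-link CBQ object with one queue per ServiceID. The names (ERBEntry, CBQLink, scheduleOnLink) are hypothetical, not the actual MNS/ns-2 classes.

// Illustrative sketch of the ERB lookup and class-based queuing at the link.
#include <deque>
#include <map>

struct ERBEntry { int lspId; int serviceId; };

struct CBQLink {                               // packet scheduler of one link
    std::map<int, std::deque<int>> queues;     // ServiceID -> queue of packet ids
    void enqueue(int serviceId, int pktId) { queues[serviceId].push_back(pktId); }
};

// After the label operation, select the CBQ queue that matches the LSP's service.
void scheduleOnLink(int lspId, int pktId,
                    const std::map<int, ERBEntry>& erb,    // keyed by LSPID
                    CBQLink& link, int bestEffortServiceId) {
    auto it = erb.find(lspId);
    int sid = (it != erb.end()) ? it->second.serviceId : bestEffortServiceId;
    link.enqueue(sid, pktId);        // CBQ later serves each class by its share
}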
3.3 Resource Reservation
The Admission Control and Resource Manager components have been designed and implemented for resource management. The Resource Manager component is responsible for creating/deleting CBQ queues and for managing the resource information (i.e., the Resource Table).
Figure 4: Resource reservation process of MPLS node and link (a CR-LDP Request message is checked by Admission Control against the Resource Table; a CR-LDP Mapping message updates the LIB and ERB tables and causes the Resource Manager to create a CBQ queue on the outgoing link)
Figure 4 shows the resource reservation process of an MPLS node and link. When the CR-LDP component receives a CR-LDP Request message, it calls Admission Control to check whether the node has the requested resources. If sufficient resources are available, Admission Control reserves them by updating the Resource Table. The LDP Request message is then passed to the next MPLS node.
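A minimal C++ sketch of this admission check is shown below, assuming a per-interface Resource Table that tracks reserved bandwidth against link capacity. The structures and the admit function are illustrative, not the actual MNS code.

// Illustrative sketch of the admission check on a CR-LDP Request message.
#include <map>

struct ResourceEntry { double capacity; double reserved; };   // per outgoing link
struct TrafficParams { double rate; int bufferSize; };         // from the Request

// Returns true and reserves the bandwidth if the link can carry the CR-LSP.
bool admit(int outIface, const TrafficParams& tp,
           std::map<int, ResourceEntry>& resourceTable) {
    ResourceEntry& re = resourceTable[outIface];
    if (re.reserved + tp.rate > re.capacity)
        return false;                  // reject: Request is not forwarded
    re.reserved += tp.rate;            // reserve: update the Resource Table
    return true;                       // Request message goes to the next hop
}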
When the CR-LDP component receives a CR-LDP Mapping message, it saves the label and interface information in the LIB table and the requested CR-LSP information (e.g., LSPID) in the ERB table. Then it calls the Resource Manager to create a queue to serve the requested CR-LSP and saves the queue's ServiceID in the ERB table. Finally, the CR-LDP Mapping message is passed to the previous MPLS node.
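The following C++ sketch outlines this Mapping-message handling under the same simplifying assumptions; ResourceManager::createQueue and the table layouts are hypothetical stand-ins for the MNS internals.

// Illustrative sketch: record the new CR-LSP in the LIB and ERB tables and
// ask the Resource Manager for a dedicated queue.
#include <map>

struct LIBEntry { int inLabel, inIface, outLabel, outIface; };
struct ERBEntry { int lspId; int serviceId; };

struct ResourceManager {
    int nextServiceId = 1;
    // Creates a CBQ queue for the CR-LSP and returns its ServiceID.
    int createQueue(int outIface, double rate, int bufferSize) {
        (void)outIface; (void)rate; (void)bufferSize;   // queue setup omitted
        return nextServiceId++;
    }
};

void onMappingMessage(int lspId, const LIBEntry& lsp, double rate, int bufSize,
                      std::map<int, LIBEntry>& lib,      // keyed by incoming label
                      std::map<int, ERBEntry>& erb,      // keyed by LSPID
                      ResourceManager& rm) {
    lib[lsp.inLabel] = lsp;                              // label/interface info
    erb[lspId] = {lspId, rm.createQueue(lsp.outIface, rate, bufSize)};
    // The CR-LDP Mapping message is then forwarded to the previous MPLS node.
}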
3.4 Class Level
There are three class levels and four services (i.e., ST, RT, HBT, and SBT), as shown in Figure 5. The user can configure the CBQ parameters, i.e., traffic rate, buffer size, and priority. The queues of ST, HBT, and SBT are statically created when the simulation environment is configured, whereas RT queues are dynamically created/deleted on simulation events, e.g., when a CR-LDP message arrives at an MPLS node.
Figure 5: Class level in MNS (Level 1: LINK; Level 2: ST, RT, BT; Level 3: RT 1..RT n under RT, and HBT, SBT under BT. ST: Signaling Traffic, RT: Real-time Traffic, BT: Best-effort Traffic, HBT: High-priority BT, SBT: Simple BT)
3.5 Implementation Environment
The MPLS simulator has been implemented on a Sun Unix system by extending ns-2.1b6, NS version 2.1.
4. Simulation Example
4.1 QoS Traffic Simulation
In this section, we examine several kinds of MPLS traffic with different QoS requirements in the MPLS network shown in Figure 6. This serves to verify the accuracy and efficiency of MNS.
Figure 6: MPLS Networks (traffic sources SBT, HBT, RT1, and RT2 at Node0 and the corresponding sinks at Node4; Node0 - LSR1 at 2 Mbit/s, LSR1 - LSR2 - LSR3 - Node4 at 1 Mbit/s each)
In Figure 6, Node0 and Node4 are IP nodes, while LSR1, LSR2, and LSR3 are MPLS nodes. There are four pairs of workload: one pair of Simple Best-effort Traffic (SBT), one pair of High-priority Best-effort Traffic (HBT), and two pairs of Real-time Traffic (RT1 and RT2). Traffic is injected at Node0 and exits at Node4. The link bandwidth is 1 Mbit/s, except for the 2 Mbit/s link between Node0 and LSR1. SBT and HBT each generate constant bit rate (CBR) traffic at 250 Kbit/s, while RT1 and RT2 generate CBR traffic at 350 Kbit/s and 450 Kbit/s, respectively. Thus, the total rate of all workloads (250 + 250 + 350 + 450 = 1300 Kbit/s) is larger than the 1 Mbit/s link bandwidth of the MPLS network.
# setup-er-lsp {FEC ER(Explicit-Route) LSPID}
$ns at 0.1 "$LSR1 setup-er-lsp 3 123 1000"
$ns at 0.1 "$LSR1 setup-er-lsp 3 123 1100"
# setup-cr-lsp {FEC ER LSPID Bandwidth BufferSize PacketSize
# SetupPrio HoldingPrio}
$ns at 0.1 "$LSR1 setup-cr-lsp 3 123 1200 400 200 7 3"
$ns at 1.0 "$SBT start"
$ns at 1.0 "$HBT start"
$ns at 1.0 "$RT1 start"
$ns at 10.0 "$LSR1 setup-cr-lsp 3 123 1300 400 200 7 3"
$ns at 11.0 "$RT2 start"
$ns at 30.0 "$RT2 stop"
$ns at 31.0 "$LSR1 release-lsp-using-release 1300"
$ns at 40.0 "$SBT stop"
$ns at 40.0 "$HBT stop"
$ns at 40.0 "$RT1 stop"
Figure 7: Code for event scheduling
Figure 7 shows the event-scheduling code for the simulation. Four LSPs are established for the traffic: two ER-LSPs for SBT and HBT, and two CR-LSPs for RT1 and RT2. At 0.1 seconds, the two ER-LSPs for SBT and HBT and the CR-LSP for RT1 are established. At 1.0 seconds, SBT, HBT, and RT1 start generating traffic. At 10.0 seconds, the CR-LSP for RT2 is established, and RT2 starts generating traffic at 11.0 seconds. At 30.0 seconds, RT2 stops its traffic generation, and its CR-LSP is released at 31.0 seconds. Finally, SBT, HBT, and RT1 stop their traffic generation at 40.0 seconds.
Figure 8: Variation in traffic bandwidth
The simulation result is shown in Figure 8. As shown in Figure 8, the total bandwidth between 11 and 30 seconds is nearly equal to the link bandwidth; that is, the link efficiency is very high. Meanwhile, RT1 and RT2 obtain the bandwidth they require, while SBT and HBT utilize the remaining bandwidth of the link, with HBT served better than SBT.
4.2 Resource Preemption Simulation
In this section, we simulate the resource preemption defined in the CR-LDP standards. For this simulation, the traffic rates of RT1 and RT2 are changed: RT1 and RT2 generate CBR traffic at 600 Kbit/s and 700 Kbit/s, respectively. Thus, when RT1 and RT2 generate traffic at the same time, the total rate of all workloads (250 + 250 + 600 + 700 = 1800 Kbit/s) is larger than the link bandwidth of the MPLS network.
$ns at 0.1 "$LSR1 setup-er-lsp 3 123 1000"
$ns at 0.1 "$LSR1 setup-er-lsp 3 123 1100"
# setup-priority= 7, holding-priority= 4 for RT1 Traffic
$ns at 0.1 "$LSR1 setup-cr-lsp 3 123 1200 600K 400 200 7 4"
$ns at 1.0 "$SBT start"
$ns at 1.0 "$HBT start"
$ns at 1.0 "$RT1 start"
# setup-priority= 3, holding-priority= 2 for RT2 Traffic
$ns at 10.0 "$LSR1 setup-cr-lsp 3 123 1300 700K 2000 200 3 2"
$ns at 11.0 "$RT2 start"
$ns at 30.0 "$RT2 stop"
$ns at 31.0 "$LSR1 release-lsp-using-release 1300"
$ns at 40.0 "$SBT stop"
$ns at 40.0 "$HBT stop"
$ns at 40.0 "$RT1 stop"
Figure 9: Code for event scheduling
Figure 9 shows the event-scheduling code for the simulation. The setup-priority and holding-priority for the CR-LSP of RT1 are 7 and 4, respectively, while those for the CR-LSP of RT2 are 3 and 2. Thus, RT2 can preempt the resources of RT1, because the setup-priority of RT2 is numerically smaller (i.e., higher in priority) than the holding-priority of RT1.
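This preemption rule can be expressed compactly; the sketch below is an illustrative C++ rendering of the comparison, not the actual MNS code.

// Illustrative sketch of the CR-LDP preemption rule: a new CR-LSP may preempt
// an existing one only if its setup-priority value is numerically smaller
// (i.e., more important) than the holding-priority of the existing LSP.
struct CrLsp { int lspId; int setupPrio; int holdingPrio; double rate; };

bool canPreempt(const CrLsp& incoming, const CrLsp& existing) {
    return incoming.setupPrio < existing.holdingPrio;   // 0 = highest priority
}

// Example from Section 4.2: RT2 (setup-priority 3) preempts RT1
// (holding-priority 4), since 3 < 4.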
The simulation result is shown in Figure 10. As shown in Figure 10, the resources of RT1 are preempted by RT2 at about 11 seconds, and from that point RT1 is served as simple best-effort traffic. Meanwhile, only RT2 obtains the bandwidth it requires, while SBT, HBT, and RT1 utilize the remaining bandwidth of the link, with HBT served better than SBT and RT1.
Figure 10: Variation in traffic bandwidth
5. Conclusion
MPLS is known as one of the most important technologies for solving many problems on the Internet, and many techniques related to MPLS have been proposed. MNS provides a simulation platform on which these techniques can be examined.
Using MNS, we have simulated two examples in this document. Although they are simple, the results demonstrate the feasibility and efficiency of MNS. More simulations are needed to verify MNS in future work.
6. References
[1] Bilel Jamoussi, "Constraint-Based LSP Setup using LDP," Internet Draft, Oct. 1999.
[2] Bruce Davie, Paul Doolan, and Yakov Rekhter, "Switching in IP Networks: IP Switching, Tag Switching, and Related Technologies," Morgan Kaufmann Publishers, Inc., 1998.
[3] Eric C. Rosen, Arun Viswanathan, and Ross Callon, "Multiprotocol Label Switching Architecture," Internet Draft, April 1999.
[4] Loa Andersson et al., "LDP Specification," Internet Draft, June 1999.
[5] R. Callon et al., "A Framework for Multiprotocol Label Switching," Internet Draft, Sep. 1999.
[6] UCB/LBNL/VINT Network Simulator, "ns Notes and Documentation," URL: http://www-mash.cs.berkeley.edu/ns
[7] UCB/LBNL/VINT, "ns manual," URL: http://www-mash.cs.berkeley.edu/ns/ns-man.html
