
Multi-Protocol Label Switching

Today's next-generation networks.

Thomas Jones
[paper written for a college internet protocols class]

Background
What is MPLS?
In all its mystery, Multi-Protocol Label Switching was designed to be a data-carrying mechanism. That is, MPLS can transport Ethernet, ATM, SONET, Frame Relay, and PPP, among other protocols, through an IP network. In the years since its formal chartering as an IETF (Internet Engineering Task Force) working group in 1997, MPLS has evolved into much more than what it was intended to be. Many useful applications of MPLS have developed; some of the most popular are MPLS Virtual Private Networks, MPLS Traffic Engineering, and Any Transport over MPLS (AToM). Essentially, MPLS creates a unique layer 2 (data link layer) identifier for layer 3 (network layer) network information. That network information is often referred to as a prefix, which is nothing more than an IP network that connects to the MPLS switch or router.

What are the benefits of MPLS?


First we will start with the bogus benefit. In the mid to late 1990s, after the transition from bridging to switching and at the beginning of MPLS, people thought that switching IP packets was a slower process than just switching a label on top of an IP packet. This isn't necessarily true. Technologies like Application Specific Integrated Circuits (ASICs) brought switching to an entirely new level. ASICs were almost solely responsible for multiport bridges, which are effectively what switches are. The term "switch" came about through marketing: the marketing of these remarkable devices suggested bridges were too slow for emerging technologies, so the reformed multi-port bridge was re-branded and called a switch. On larger switch chassis these ASICs exist within the line card itself, so the switching decision no longer had to go to the main CPU (Supervisor card) of the switch. At that time (the mid to late 1990s) routers did not have line cards with processors really capable of calculating the complex algorithms (used in the routing protocols) for IPv4 addressing, at least at the high data rates found in switches. The router CPUs were mainly for calculations located in the control plane of a router. The purpose of the control plane is to set up the data or forwarding plane. The main components of the control plane are the routing protocols, the routing table, and other control or signaling protocols that aid in provisioning the data plane. The data plane is the actual packet forwarding engine (layer 2 path) of the router or switch. As technology naturally flows, ASICs were soon used on line cards found in routers. In actuality, some routing and switching platforms use the same line cards, like the Cisco 6500 switch and 7600 router. These are essentially the same chassis, and the only real definitive separation is provided in the marketing of the two, one as a switch and the other as a router.
So in reality the same technology was used in both devices at the same speeds, with performance differences dependent on the IOS used. In today's networks, justifying a move to MPLS on the notion that label switching is faster than IP switching would be considered bogus.

One common benefit is a unified network infrastructure. Due to the flexibility of the MPLS protocol, it can identify many different IP-based technologies, label them, and forward them onto a common infrastructure. Furthermore, this is not limited to data; it extends to telephony and video, like what is seen in telepresence systems. Telepresence systems provide high-definition (1080p) real-time video conferencing with spatial surround sound. Video teleconferencing has always been a high-bandwidth, low-latency application, and a good test of the performance and resiliency of a network. Another well-known benefit is a BGP (Border Gateway Protocol) free core. BGP is often used in MPLS networks for VPN implementation, but can easily and efficiently be used for standard routing as well. BGP is the king of routing protocols and can be extremely resource intensive for a router. In MPLS, BGP needs to run only on the edge routers, one hop away from the core of your network or your service provider's network. As you will see later, the edge routers perform the most important functions. This means the core is relieved of some of the heavy and complex processing burdens, which are placed closer to the source of the function that needs them.

What is its place in layered communication?


MPLS occupies a niche all its own, with no real competitors in layered communications. Often referred to as switching at layer 2.5, MPLS uses a shim header that contains information that helps move frames from hop to hop. A shim header is a 32-bit header placed between the layer 2 header and the layer 3 payload, as shown in the picture below. In certain applications like MPLS VPN and MPLS TE, the shim header can contain enough information to define a path through an entire network, not just to the next hop. The standard shim header has 4 fields, whose functions are defined below.

The 4 fields are:
Label - 20 bits. This field stores the label value, which can be between 0 and 2^20 - 1. The first 16 of these labels (0 - 15) are exempted from normal use; that is, they are reserved for specific functions known as label operations.
Experimental (EXP) - 3 bits. This field is used specifically for Quality of Service implementation.
Bottom of Stack (S) - 1 bit. This identifies whether the particular label in the stack is the bottom label. The bit is set to 0 unless the label is the bottom of a label stack, in which case it is set to 1. A stack is a collection of labels on top of the packet. The number of labels you can have (that is, the number of 32-bit fields) on top of a packet is limitless, though you should seldom see a stack of 4 labels or more.
Time to Live (TTL) - 8 bits. This field performs the same function as the TTL field found in an IP header. Its main function is to prevent a packet from being stuck in a routing loop. If a routing loop occurs and no TTL is present, the packet loops forever. When the TTL reaches 0, the packet is discarded.
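The bit layout of the four fields can be illustrated with a short Python sketch. The function names here are mine, for illustration only, and are not part of any MPLS library:

```python
def build_shim(label, exp, s, ttl):
    """Pack the four shim-header fields into one 32-bit word."""
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def parse_shim(header):
    """Unpack a 32-bit shim header into (label, exp, s, ttl)."""
    label = (header >> 12) & 0xFFFFF  # 20-bit label value
    exp = (header >> 9) & 0x7         # 3 experimental (QoS) bits
    s = (header >> 8) & 0x1           # bottom-of-stack flag
    ttl = header & 0xFF               # 8-bit time to live
    return label, exp, s, ttl

# A label of 100 at the bottom of the stack, with a TTL of 64:
print(parse_shim(build_shim(100, 0, 1, 64)))  # (100, 0, 1, 64)
```

The shifts mirror the field order in the header: the label occupies the top 20 bits, followed by EXP, the S bit, and the TTL in the low byte.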

MPLS architecture
How MPLS works depends on a device's topological location within a network. To understand this we must look at a network and define the topological reference names.

CE - Customer Edge. The last customer router before entering the provider network. This is where the customer provides its internal routes to the provider network. The provider receives the routes on the PE router.
PE - Provider Edge. This is where the customer routes are received. This is also where label imposition/disposition happens; that is, where labels are created and/or removed for customer traffic to and from the provider network.
P - Provider. This is where the MPLS traffic is label switched. Devices in this area are called LSRs (Label Switch Routers), which switch traffic through virtual circuits called LSPs (Label Switched Paths). An LSP begins at each PE, but the P routers are usually considered the core devices, not the PEs.
IP - Customer IP network. This is the customer's IP network.
To further understand exactly how MPLS works you must understand how the control plane and data plane complement each other during the forwarding process in a Label Switch Router (LSR). This process is slightly different depending on whether you are on a PE or P device. Below is a graphic to help illustrate those slight differences. *Graphic below is taken from the Cisco Press CCIP edition MPLS VPN book, Volume 1.*

As you can see from the graphic, the Label Switch Routers exchange routes with each other, usually via the routing protocols OSPF or EIGRP. This is a standard network layer function. The best routes for the respective networks, based on the routing protocols in use, are placed in the routing table; this is how standard routing works. Once the routing table is populated, CEF (Cisco Express Forwarding) uses that information to enable MPLS label switching. Simply put, CEF is required to be able to label switch in an MPLS network. CEF has two components, the Forwarding Information Base (FIB) and the adjacency table. The FIB (located in the data plane) is responsible for maintaining next-hop IP addresses for all the routes in the routing table. The adjacency table is responsible for maintaining the layer 2 information for each FIB entry. The adjacency table performs the layer 2 rewrite, and it avoids the need for an ARP request on each IP address lookup. CEF binds the next-hop address for a specific network to a physical interface and MAC address, relying on recursive updates in and from the routing table to do this. This is essentially what allows layer 3 switching. When you enable MPLS on a router, the routing table is also copied into an MPLS IP routing control table, which remains in the control plane. Adjacent to this is the Label Information Base (LIB, also referred to as the Tag Information Base as shown in the picture), which is where the MPLS labels exist. The MPLS IP routing control table is what actually binds labels from the LIB to the IP routes in the IP routing table. The MPLS IP routing and control table is also where the label distribution protocol operates. Said protocol, the Label Distribution Protocol (LDP) for example, shares the locally significant label-to-IP-route mappings with other LSRs in the network. This makes the creation of virtual circuits via label stacks possible.
Label stacks are used in MPLS applications such as Traffic Engineering and VPN implementation. Again, labels are only locally significant to a router. MPLS IP routing and control information is also copied into the FIB and the TFIB/LFIB (Tag or Label Forwarding Information Base). The difference between these two tables (FIB and TFIB/LFIB) lies in their purpose in the forwarding of data. What is not shown is a logic block that exists between the FIB and TFIB/LFIB. This block is where the label lookup occurs, along with the decision either to remove the label from the packet for forwarding, or to replace it with the locally significant label for forwarding. This applies when a labeled packet is received. Since the packet is labeled it goes to the TFIB; there is usually an arrow pointing up to the FIB (denoting the logic process), but not in this particular diagram. If the label were removed, the packet would be sent to the FIB for appropriate forwarding. The MPLS edge router has the most intricate architecture because it must be able to forward data onto and off of the MPLS network, to and from the customer. An understanding of basic MPLS architecture is paramount to understanding the data and configuration of the lab that was performed for this paper.
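The pop-or-swap decision at the TFIB/LFIB described above can be sketched in Python. The table contents, labels, and function names here are hypothetical, invented purely to illustrate the logic:

```python
# Hypothetical LFIB: local label -> (operation, outgoing label, next hop).
LFIB = {
    16: ("pop", None, "10.1.1.2"),   # label removed, hand off to the FIB
    17: ("swap", 17, "10.1.1.2"),    # replace with the neighbor's local label
}

def forward_labeled(top_label):
    """Decide what happens to an incoming labeled packet."""
    operation, out_label, next_hop = LFIB[top_label]
    if operation == "pop":
        # Label removed: the packet goes to the FIB for a normal IP lookup.
        return ("ip-lookup", next_hop)
    # Swap: forward with the downstream LSR's locally significant label.
    return (out_label, next_hop)

print(forward_labeled(17))  # (17, '10.1.1.2')
print(forward_labeled(16))  # ('ip-lookup', '10.1.1.2')
```

This is the logic block between the FIB and the TFIB/LFIB: a labeled packet either keeps riding the label-switched path with a fresh label, or drops back to ordinary IP forwarding.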

MPLS Operation
How does a labeled packet traverse an MPLS network? The previous section defined the logical architecture of an MPLS node: in essence, how a single MPLS router treats a labeled IP packet. This section explains what happens to the labeled packet as it traverses LSRs in an MPLS network. The aim of this section is to take the focus off the router and place it on the labeled packet. The Label Distribution Protocol is the protocol that allows MPLS switches and routers to communicate with each other. LDP carries the label bindings (IP network-to-label mappings) for Forwarding Equivalence Classes (FECs) through the MPLS network. FECs, put simply, are ranges or groups of network addresses that are all treated in the same manner. In effect, a FEC correlates to a Label Switched Path through a network. Think of it like subnets within a single network that would follow the same path to a destination. By default, Label Switch Routers (LSRs) learn the label-to-IP mappings of their adjacent neighbors. What this means is that the LSRs learn each other's directly connected IP-to-label mappings. Depending on how you configure the MPLS LSRs, every router can learn and retain every other router's entire label-to-IP mapping table. Although this increases resource use, it enables the most efficient operation when defining paths for labeled packets to traverse an MPLS core. The first step in finding other routers running LDP is sending out Hello messages on all links that are LDP enabled. This is part of configuring MPLS in the router or switch operating system (Cisco IOS); the configuration of the protocol will be covered in the Lab section. These Hello messages are sent to the multicast address 224.0.0.2. This address, among several others, is a reserved multicast address for communication between routers running certain routing protocols. It is not like multicast routing, where a device has to be subscribed to a multicast group to receive these messages.
When the protocol is enabled on the router, the router knows to listen for the multicast address; it is predefined in the protocol. Once the LDP Hellos are received by another router also running LDP, a TCP session is set up between them. LDP sessions are established on TCP port 646, where the information exchange ensues. Exactly what information is exchanged, and the various parameter values therein, can also be manipulated; for the purpose of simplicity it will not be discussed in this paper. These sessions are maintained between adjacent LSRs by receiving LDP packets or LDP keepalive messages. LDP being an entire topic of its own, the modes in which it can operate will only be briefly discussed. The three modes, each of which has two options, deal with advertisement of label bindings, label retention, and Label Switched Path control.

The two advertisement modes are Unsolicited Downstream (UD) and Downstream-on-Demand (DoD). Downstream refers to the direction the packet is headed (egress), whereas upstream refers to the direction toward the source the packet is sent from (ingress). As suggested by the name, UD advertisement automatically advertises a label binding to neighbors when it is learned. DoD only supplies the adjacent LSR with the label binding when it is asked for, similar to what would be called a need-to-know basis. The two label retention modes seem like a pun on politics, which makes it easy to remember how they function. They are Liberal Retention Mode and Conservative Retention Mode. In Liberal Retention Mode the LSR keeps all remote label bindings received from downstream LSRs, regardless of whether they are the next hop or not. These labels are stored in the Label Information Base (also known as the Tag Information Base, as mentioned before), but only the label bindings from adjacent next-hop neighbors are put into the actual Label Forwarding Information Base. Conservative Retention Mode discards any label binding updates except those from its adjacent next-hop neighbors. The main difference is that Liberal Retention Mode adapts to routing changes more quickly, while Conservative Retention Mode conserves memory resources more effectively. The only control mode used for IP networks is Independent Control Mode, in which each MPLS Label Switch Router assigns locally significant labels for the networks it is connected to. The other mode, Ordered Control Mode, is used in ATM networks and thus is not discussed here. As mentioned in the MPLS Architecture section, some labels are reserved for specific functions. These reserved labels and their functions correlate to actions that LSRs perform on the labeled packet. For the sake of brevity, and in an attempt not to become too granular and cause confusion, the label actions will be discussed rather than the reserved labels themselves.
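The two retention modes described above can be contrasted with a toy Python model. The data structures and the peer names are mine, not drawn from any real LIB:

```python
def retain(bindings, next_hop_peers, mode="liberal"):
    """Model label retention.

    bindings: list of (prefix, label, advertising_peer) tuples.
    Returns the resulting (LIB, LFIB) lists under the given mode.
    """
    if mode == "conservative":
        # Conservative: discard bindings not learned from a next-hop peer.
        lib = [b for b in bindings if b[2] in next_hop_peers]
    else:
        # Liberal: keep every binding heard, next hop or not.
        lib = list(bindings)
    # Either way, only next-hop bindings are installed for forwarding.
    lfib = [b for b in lib if b[2] in next_hop_peers]
    return lib, lfib

bindings = [("10.1.5.0/24", 17, "P1"), ("10.1.5.0/24", 23, "P2")]
lib, lfib = retain(bindings, next_hop_peers={"P1"}, mode="liberal")
print(len(lib), len(lfib))  # 2 1
```

In liberal mode the binding from P2 sits unused in the LIB, ready if routing reconverges toward P2; in conservative mode it is thrown away, saving memory at the cost of slower adaptation.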
There are 5 operations, of which 3 are most common (the top 3 in the list below). These operations are:
Pop - The top label is removed. The packet is forwarded with the remaining label stack, or as an unlabeled packet. A label stack is a predetermined set of labels placed sequentially on top of one another, which effectively predefines the path for each hop the packet takes. The labels in the stack are the locally significant labels of the respective LSRs the packet traverses from ingress to egress in a network.
Swap - The top label is removed and replaced with a new one.
Push - The top label is replaced with a new label (swapped), and one or more labels are added (pushed) on top of the swapped label.

Untagged/No Label - The stack or label is removed, and the packet is treated as a regular IP packet when forwarded.
Aggregate - The label stack is removed, and an IP lookup is done on the IP packet. This happens at the edge routers that connect to the customer equipment.
To add meaning to these label operations, refer to the graphic below to see how a labeled packet travels from one side of a network to the other. *Also from the CCIP edition MPLS VPN book, Volume 1 from Cisco Press.*
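Treating the label stack as a simple list with the newest label at the front, the three common operations can be sketched in Python (a toy model; the function names and label numbers are mine):

```python
def pop(stack):
    """Pop: remove the top label; the rest is forwarded as-is."""
    return stack[1:]

def swap(stack, new_label):
    """Swap: replace the top label with the next LSR's local label."""
    return [new_label] + stack[1:]

def push(stack, swapped_label, *new_labels):
    """Push: swap the top label, then add one or more labels on top."""
    return list(new_labels) + swap(stack, swapped_label)

stack = [16]                 # a single-label packet arriving at an LSR
stack = push(stack, 17, 24)  # swap 16 -> 17, then push 24 on top
print(stack)                 # [24, 17]
print(pop(stack))            # [17]
```

Untagged and Aggregate would both empty the stack entirely; they differ only in whether the packet is forwarded directly or subjected to a fresh IP lookup.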

The L1, L2, L3 notations on the right side of the packets denote each LSR's (Label Switch Router's) local label for the particular network. When a packet reaches an LSR, for example in Step #3 of the diagram, the LSR performs the swap label operation. This means that the top label is removed, the network or prefix is looked up in the Label Forwarding Information Base (LFIB), and the new label is appended to the packet, which is sent out the interface associated with the network's next hop. This operation continues until an LSR is reached that has a pop label operation for the network prefix. The pop operation removes the label from the packet, and the packet is forwarded via whatever means of routing is configured for the next hop.

The Lab
Network Diagram
The pings in this network diagram were sourced from the 7206 VXR router and sent to the CE1 router. This is why the ingress path is on the right and the egress path is on the left. A hub was purchased from Best Buy for the temporary use of this packet capture, and subsequently returned the following day. The packet-sniffing laptop attached to the hub is meant to capture packets from two fully functioning MPLS routers. Since the ingress interface of the PE1 router is connected to the egress interface of the P1 router, these two are considered fully participating MPLS Label Switch Routers (LSRs). The lab topology is elementary and very straightforward, and all connections are copper Gigabit Ethernet. A diagram is located below:

Label Insertion and removal


CE1

Label Insertion and removal


CE2

Cisco 3845

PE1

PE2

Cisco 7206

MPLS Cloud
P1 Cisco 3845 P2 Cisco 3845

Hub with laptop for packet sniffing

Cisco 3845

Cisco 3845

Label Switching in the cloud

Egress

Ingress

Basic Configuration of an MPLS network


The configuration of a basic MPLS network is actually very simple and only requires a few basic steps. The requirements of such a network are the following:
1. Enable CEF: CEF is essentially what allows the imposition and disposition of labels in an MPLS network. You must make sure it is enabled globally, as well as on the specific interfaces participating in the MPLS network. How to enable CEF globally and on interfaces will be shown later. When possible, enable CEF in distributed mode, which is largely platform dependent; unfortunately it does not pertain to the platforms used in this lab.
2. Configure an IGP routing protocol: Interior Gateway Protocols are routing protocols such as RIP, IGRP, EIGRP, and OSPF. In this case OSPF was used on all the routers; the configuration relevant to the lab will be shown later. IGP routing protocols are needed to populate the routing tables, from which CEF operation takes over and label binding ensues.
3. (Optional) Define the label distribution protocol: LDP is the label distribution protocol by default. The only other option is TDP, which in the real world is an overwhelming minority, if used at all. The command to set this manually is:
router(config)#mpls label protocol {ldp | tdp}
4. (Optional) Assign the LDP router ID: LDP uses the highest IP address on a loopback interface. A loopback interface is a logical interface, as opposed to an actual physical interface such as interface gigabit 0/1 of a router. Loopback interfaces are often used as management IPs for telnet sessions, monitoring, or other forms of maintenance or management. If there is no loopback interface defined, the highest IP address on the router becomes the LDP router ID. To force an interface to be the LDP router-ID interface, simply type:
router(config)#mpls ldp router-id [interface type] [number]
for example,
router(config)#mpls ldp router-id gigabit 0/1
The LDP router ID is important in setting up sessions between MPLS routers to exchange label information.
5. Configure MPLS or label forwarding on the interface: This part of the configuration tells the specific interfaces that they are participating in MPLS or label forwarding. The commands to configure this will be shown later.

Configuration of the MPLS Lab


Mirroring the previous section, the configuration of the actual lab devices will be shown in the order of the steps listed above. The exact same procedure had to be completed on all routers except for the two CEs.

1. Enable CEF globally on the router:
8.11.PE1(config)#ip cef
1B. Enable CEF on the MPLS-participating interfaces:
8.11.PE1(config-if)#ip route-cache cef
2. Configure an IGP protocol on the router:
8.11.PE1(config)#router ospf 1
8.11.PE1(config-router)#network 10.1.4.0 0.0.0.255 area 0
* The network statements are for the networks the loopback IPs are configured in. Directly connected networks are automatically known; loopback interface network addresses are not. *
3. Define the label distribution protocol:
8.11.PE1(config)#mpls label protocol ldp

4. Assign the LDP router ID:
*Loopback IPs are used by default, so this step was not completed.*
5. Configure MPLS label forwarding on the interfaces:
8.11.PE1(config)#interface GigabitEthernet 0/0
8.11.PE1(config-if)#mpls ip
8.11.PE1(config)#interface GigabitEthernet 0/1
8.11.PE1(config-if)#mpls ip
This concludes all the commands needed for basic MPLS operation.

Verifying MPLS Configuration and Operation


The following are basic show commands used to verify that MPLS is operational. This requires some basic knowledge of configuring Cisco routers and switches. Below is a diagram of the lab with the necessary IP address information. It is helpful to have the IP address scheme labeled on the network diagram to understand the output of the show commands.

1. Ensure basic layer 1 and 2 connectivity is up on the respective interfaces:

8.11.PE1#show ip interface brief
Interface              IP-Address    OK? Method Status Protocol
GigabitEthernet0/0     10.1.0.2      YES manual up     up
GigabitEthernet0/1     10.1.1.1      YES manual up     up
Loopback0              10.1.2.1      YES manual up     up

2. Verify layer 3 routes are propagated through the network:

8.11.PE1#show ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

     10.0.0.0/8 is variably subnetted, 9 subnets, 2 masks
O       10.1.10.1/32 [110/4] via 10.1.1.2, 1d00h, GigabitEthernet0/1
O       10.1.9.0/24 [110/4] via 10.1.1.2, 1d00h, GigabitEthernet0/1
O       10.1.3.0/24 [110/2] via 10.1.1.2, 1d00h, GigabitEthernet0/1
C       10.1.2.0/24 is directly connected, Loopback0
C       10.1.1.0/24 is directly connected, GigabitEthernet0/1
C       10.1.0.0/24 is directly connected, GigabitEthernet0/0
O       10.1.6.1/32 [110/3] via 10.1.1.2, 1d00h, GigabitEthernet0/1
O       10.1.5.0/24 [110/3] via 10.1.1.2, 1d00h, GigabitEthernet0/1
        10.1.4.0/24 is directly connected, GigabitEthernet0/1

3. Make sure the respective interfaces are participating in MPLS:

8.11.PE1#show mpls interfaces
Interface            IP         Tunnel Operational
GigabitEthernet0/0   Yes (ldp)  No     Yes
GigabitEthernet0/1   Yes (ldp)  No     Yes

* The Tunnel column is set to yes when using MPLS Traffic Engineering. The Operational column is yes if packets are labeled on the interface. *

4. Verify LDP neighbors are being discovered:

8.11.PE1#show mpls ldp discovery
 Local LDP Identifier:
    10.1.2.1:0
 Discovery Sources:
    Interfaces:
        GigabitEthernet0/1 (ldp): xmit/recv
            LDP Id: 10.1.4.1:0

5. Verify an LDP neighbor session has been established:

8.11.PE1#show mpls ldp neighbor
     Peer LDP Ident: 10.1.4.1:0; Local LDP Ident 10.1.2.1:0
        TCP connection: 10.1.1.2.646 - 10.1.1.1.11012
        State: Oper; Msgs sent/rcvd: 12/11; Downstream
        Up time: 00:10:00
        LDP discovery sources:
          GigabitEthernet0/1, Src IP addr: 10.1.1.2
        Addresses bound to peer LDP Ident:
          10.1.1.2    10.1.3.1    10.1.4.1

6. Verify label bindings to learned IP addresses (this also verifies CEF is enabled):

8.11.PE1#sh mpls forwarding-table
Local  Outgoing    Prefix           Bytes tag  Outgoing   Next Hop
tag    tag or VC   or Tunnel Id     switched   interface
16     Pop tag     10.1.0.0/24      0          Gi0/1      10.1.1.2
17     17          10.1.5.0/24      0          Gi0/1      10.1.1.2
18     20          10.1.9.0/24      0          Gi0/1      10.1.1.2
19     18          10.1.6.1/32      0          Gi0/1      10.1.1.2
20     19          10.1.10.1/32     0          Gi0/1      10.1.1.2

These are the basic troubleshooting steps to verify MPLS is working as it should. Other show commands exist that provide more detail and present data in different formats. Those commands were not recorded at the time of lab testing.

Captured Trafc
The alternative purpose of this assignment was to capture traffic and see just what Wireshark can detect, and what detail it can provide when read. Screenshots are provided from this capture session. This screenshot shows the captured packets for the session. It is difficult to read due to formatting, but there are various Hello packets captured, as well as LDP TCP packets. Here are Wireshark's decoded details of one of the LDP Hello packets:

Here are Wireshark's decoded details of one of the LDP TCP packets. Remember that LDP uses TCP port 646 to exchange label information. This is Wireshark's decoded information for this TCP packet (next page).

Trafc analysis
Obviously there are a lot of Hello packets sent by various protocols, as seen in the Protocol column. Hello packets are most often used to maintain connections; some protocols use them to exchange information, others to maintain relationships between devices. As seen in the traffic capture, LDP uses actual keepalive messages to maintain the state of its sessions. In OSPF, Hello messages, or the lack thereof, are used to detect changes in a neighboring device's connection.

The majority of basic MPLS operation occurs within the router's logical architecture, not necessarily between routers where traffic capture would occur. In advanced applications of MPLS like Virtual Private Networks or Traffic Engineering, much more communication occurs between routers than in the basic implementation herein. LDP is the actual protocol to observe, because its job is to distribute label information, and label information is what allows MPLS forwarding to occur. The LDP traffic that was captured consisted of LDP Hello and LDP TCP session packets. In MPLS TE, RSVP is the protocol you would want to observe, because RSVP is what establishes and reserves the predefined resources and tunnels. LDP Hello packets are multicast to address 224.0.0.2, the reserved all-routers multicast address, which LDP uses for neighbor discovery. Other reserved multicast addresses include 224.0.0.5 and 224.0.0.6 for OSPF, 224.0.0.9 for RIPv2, and 224.0.0.10 for EIGRP. Devices that receive these multicast addresses do not need to be joined to a multicast group; the aforementioned protocols automatically listen for these specific addresses because it is built into the protocol. This is different from actual multicast routing, in which you do need to join a multicast group, as in the case of PIM Sparse Mode. This particular Hello packet is sent on UDP port 646, the default for LDP discovery, and has a label message length of 20. I am not sure whether this is in bits or bytes, but I can hypothesize that the 20 is in bits; the message would then be exactly the length of an MPLS label, which is distributed by LDP. It would also be intuitive to have this field located within the Hello Message field of the actual LDP protocol. If this is in fact what is happening, it would demonstrate the method by which adjacent next-hop labels are learned between neighboring LSRs. Wireshark does not capture a tremendous amount of detail, and the probable lack of development in Wireshark's ability to decode an LDP packet could explain a lot of the unknowns.
Another feasible reason for the lack of more specific detail is that there is none: this may be all the information sent, and the parameters Wireshark uses simply don't have more accurate descriptors for the respective sections. The LDP TCP packet is perhaps the most interesting. In this packet the neighboring upstream LSR, P1 (10.1.4.1), initiates a TCP session with PE1 (10.1.2.1), establishing a session or data transfer as indicated by the syn/acks. However, there is no detail as to just what information is being transferred. It could simply be sending an ack to acknowledge the previous LDP Hello messages between the TCP acks shown in the first graphic of this section. There is not enough information to make much of a hypothesis beyond what was previously mentioned. At this point I would say that Wireshark is not developed enough to accurately and descriptively identify the packets, and the information therein, of MPLS LDP traffic. I must also turn that same logic on myself and allow that I may not have the technical inclination to understand the captured traffic displayed by Wireshark.
