
Choosing the Best of Today's Ethernet-over-SDH Standards

October 2003

ETHERNET OFFERS KEY INGREDIENTS FOR SUCCESS: INEXPENSIVE INTERFACES, HIGH BIT RATES, AND SUBSCRIBERS WHO ARE FAMILIAR WITH ETHERNET AND WILLING TO BUY

This paper will introduce the Ethernet standards available to network operators today. It will review each standard and conclude which standards are best deployed as building blocks in a carrier-class Ethernet service.

Executive Summary
Ethernet technology dominates the Local Area Network (LAN) and is now expanding rapidly into the Wide Area Network (WAN) space. Ethernet offers the key ingredients for success, namely: inexpensive interfaces; high bit rates; and many subscribers who are extremely familiar with it and willing to buy services that use it. For these reasons, it makes sense to use Ethernet as a subscriber interface. A judicious combination of standards is required for a network operator to successfully deploy Ethernet as a WAN technology. Therefore, network operators are faced with choosing the best standards from the many that are available. This paper assumes that the network operator already owns and operates an SDH network.


Ethernet Services that Subscribers will Buy


It is believed that subscribers buy Ethernet services because the operator's network behaves like a private Ethernet network built exclusively for them. Privacy and security must be assured. Spanning Tree Protocol (STP) may be switched on or off according to the subscriber's specification. The subscriber's network may be a simple point-to-point link or a complex mesh; the operator's network should support both. The operator's network should support IEEE 802.1Q VLAN tagging, IEEE 802.1P prioritisation levels, unicast, multicast and broadcast services. This is all implied, since the network must appear to the subscriber as their own private Ethernet switched network.

Subscribers may also choose the level of protection they wish to purchase for transport services.

The scope of an Ethernet service offering should be as simple as this: if a private Ethernet switched network can do it, then an operator's Ethernet service should do it also. This implies a flexible Ethernet service that supports the most popular Ethernet standards already deployed in most subscribers' LANs.

To assure privacy, all services may be provided on a per-subscriber basis.

Ethernet Standards Available Today


Table 1 lists the standards that are often portrayed as compulsory parts of an Ethernet-over-SDH service.

Table 1. Today's Ethernet Standards

ITU-T G.7041    Generic Framing Procedure (GFP)
ITU-T X.86      Link Access Procedure - SDH (LAPS)
ITU-T G.707     Virtual Concatenation (VC)
ITU-T G.7042    Link Capacity Adjustment Scheme (LCAS)
IEEE 802.1D     Ethernet switching
IEEE 802.1Q/P   Virtual LAN (VLAN) and prioritisation
MPLS            Multi-Protocol Label Switching
IEEE 802.17     Resilient Packet Ring (RPR)


But how do they fit together, and are they all really necessary? This paper will review each of these standards and conclude which are best deployed as building blocks in a carrier-class Ethernet service.

Choose the Best Ethernet to SDH Mapping Technique

In order to marry Ethernet and SDH, a layer of elasticity is needed between the two. SDH is like a conveyor belt, always moving, never stopping. Ethernet is more like trucks (i.e., frames), each operating on their own, at liberty to stop when there is no cargo and go when there is cargo to carry.

A layer that marries SDH and Ethernet needs to load the Ethernet trucks onto the SDH conveyor belt. If a burst of trucks arrives faster than the conveyor belt can handle, the trucks need to be held in a queue. Before the loading area fills to the danger level, an 802.3x pause message is sent back to the source of the trucks, asking the sender to stop the flow of trucks for just a moment. Conversely, if the trucks are coming too slowly for the SDH conveyor belt, then trucks filled with packing material are loaded onto the belt, because the SDH conveyor belt must always be filled.

X.86 and GFP Frame Alignment Mechanism


Imagine an Ethernet frame being sent from the subscriber network into the network operator's Ethernet PHY interface. When the Ethernet frame arrives, the 7-octet preamble and 1-octet start-of-frame delimiter are discarded, since they can easily be rebuilt at the other end of the SDH network. The next step varies depending on the use of GFP or X.86. If the chosen interface is X.86, then a hexadecimal 7E, called a flag, is added to mark the beginning of the frame (see Figure 1). Additional fields are added in the LAPS header, as shown in Figure 1. Since a 0x7E marks the start of the frame, all occurrences of a 7E must be removed from the payload, so all 7E octets are replaced with the two-octet sequence 0x7D 0x5E. This complicates things further, because all 7D octets must now be replaced with 0x7D 0x5D; a packet laden with 7E and 7D octets therefore causes this process to inflate the size of the packet in preparation for transport. This is called "packet inflation" and is the chief incentive to avoid X.86 in favor of GFP. On the other hand, X.86 came before GFP and is nearly identical to a different standard, X.85, known as Packet-over-SDH (POS). When there are no packets being received, X.86 fills the SDH channel with back-to-back 0x7Ds.
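To make the inflation mechanism concrete, here is a minimal sketch of the HDLC-style octet stuffing that X.86/LAPS applies to the payload. It shows only the escaping step, not a full X.86 encapsulation, and the function name is invented for illustration.

```python
# A minimal sketch of the octet stuffing described above, illustrating why a
# payload full of 0x7E/0x7D octets grows ("packet inflation"). This is only the
# escaping step, not a complete X.86 encapsulation.

FLAG, ESC = 0x7E, 0x7D

def stuff(payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if b == FLAG:
            out += bytes([ESC, 0x5E])   # 0x7E -> 0x7D 0x5E
        elif b == ESC:
            out += bytes([ESC, 0x5D])   # 0x7D -> 0x7D 0x5D
        else:
            out.append(b)
    return bytes(out)

worst_case = bytes([FLAG]) * 1500
print(len(stuff(worst_case)))           # 3000: the payload doubles in the worst case
```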

GFP accomplishes packet delineation in a more deterministic fashion. The limitation of GFP is that it cannot operate in cut-through mode, but the small amount of buffering delay needed to capture an entire Ethernet frame is offset by the increased throughput. Furthermore, the Ethernet packets coming from the end user arrive at 100 Mbit/s and will readily keep the input buffer to the SDH interface full, nearly eliminating the value of cut-through processing in the first place. A powerful advantage of GFP is its concise and deterministic frame delineation. The mechanism works like this. When an Ethernet frame is encapsulated by GFP, its length is indicated in the payload length indicator (PLI) field, which occupies the first two octets of the GFP header. By reading the first two octets, the receiver knows how long the frame is and therefore where it ends and another begins. The problem for the receiving end is to locate and confirm that the first two octets are the PLI field. GFP solves this problem neatly by making the third and fourth octets of the GFP header a mathematical function (CRC) of the first two octets. During frame alignment, the receiving end looks for two octets that are the CRC value of the preceding two octets. The search proceeds octet by octet until two octets are found that are a CRC value of the preceding two octets. If it truly is the header, then the first two octets must be a payload length indicator, and by counting forward the indicated number of octets we should find another PLI field, which can be confirmed because the two octets following it are the CRC value of the PLI. The process actually runs three times before frame alignment is confirmed.
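The hunt procedure described above can be sketched as follows. This is only an illustration of the logic, not a complete G.7041 implementation: the core-header scrambling defined in the standard is ignored, the CRC-16 parameters are assumed, and the function names are invented.

```python
# Illustrative sketch of the GFP frame-delineation "hunt": find a PLI whose
# following two octets match its CRC, jump forward by the indicated length and
# confirm the pattern three times before declaring frame alignment.

def crc16_gfp(data: bytes) -> int:
    """CRC-16 over the two PLI octets, generator x^16 + x^12 + x^5 + 1 (assumed parameters)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def find_frame_boundary(stream: bytes, confirmations: int = 3) -> int:
    """Slide octet by octet until a PLI/cHEC pair is found and confirmed
    'confirmations' times in a row, as described in the text."""
    pos = 0
    while pos + 4 <= len(stream):
        candidate, hits = pos, 0
        while candidate + 4 <= len(stream):
            pli = int.from_bytes(stream[candidate:candidate + 2], "big")
            chec = int.from_bytes(stream[candidate + 2:candidate + 4], "big")
            if chec != crc16_gfp(stream[candidate:candidate + 2]):
                break                      # mismatch: abandon this candidate
            hits += 1
            if hits == confirmations:
                return pos                 # frame alignment confirmed
            candidate += 4 + pli           # jump to where the next header should start
        pos += 1                           # hunt continues one octet later
    return -1
```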


When the trucks are loaded, they are welded to the trucks in front and behind, so there is no way at the receiving end to tell where one truck ends and another begins. A special marker must therefore be inserted between trucks to delineate them (i.e., frame delineation). Loading the Ethernet frames and delineating them is the task of X.86 or GFP, but which one is better?

Ethernet is asynchronous and SDH is synchronous. Usually the SDH circuit will operate at a different bit rate from the subscriber's Ethernet connection. In order to make the two compatible, dynamic rate adaptation and frame delineation (where one truck ends and another starts) must be deployed at the point where the Ethernet physical layer ends and SDH begins, the PHY/SDH interface (see Figure 1 and Figure 2).

Figure 1. An Ethernet frame may be inflated as it is encapsulated into an X.86 frame. The preamble (7 octets) and start-of-frame delimiter (1 octet) are thrown away. An opening flag (0x7E), the LAPS header (Address 0x04, Control 0x03, SAPI 0xFE 0x01), a LAPS FCS (4 octets) and a closing flag (0x7E) are added around the MAC frame, and all occurrences of 7E and 7D octets in the packet must be replaced with a two-octet sequence, inflating the size of the payload.


Both GFP and X.86 accomplish this, but in different ways. X.86 inflates the payload and is nondeterministic in its behaviour. GFP accomplishes the task with greater efficiency, and it is deterministic. For details, see the text box titled "X.86 and GFP Frame Alignment Mechanism". Based on the nondeterministic behaviour of X.86 and the deterministic behaviour of the Generic Framing Procedure (GFP), we conclude that GFP is the better choice for Ethernet to SDH rate adaptation.

Use Virtual Concatenation to Match Bandwidth More Closely to Customer Requirements


Virtual concatenation combines multiple lower bit rate SDH conveyor belts into a single high-speed conveyor belt. Each of the smaller conveyor belts must start and end at the same points, but they may follow completely different paths inside the SDH network. This permits traffic to be spread across different rings and recombined at the other end in a completely transparent fashion.

Figure 2. An Ethernet frame is mapped directly into GFP encapsulation; overhead is predictable and minimal. The preamble (7 octets) and start-of-frame delimiter (1 octet) are thrown away, and a GFP header is prepended, consisting of the Payload Length Indicator (PLI, 2 octets), the cHEC (2 octets, simply a CRC function of the PLI field), the Type field (2 octets) and the tHEC (2 octets). No inflation of the payload occurs.

Prior to virtual concatenation, the SDH conveyor belt would operate only at the contiguous bit rates shown in Table 2. Virtual concatenation can inverse multiplex any VC-12, VC-3 or VC-4 channels into a single circuit, although it will only combine similar VC types. For instance, up to 63 VC-12s can be combined to form a single channel, which permits SDH to support data transport in 2 Mbit/s increments using VC-12s (see Table 2 for the exact values). Virtual concatenation of VC-3 or VC-4 circuits is also supported, permitting these circuits to be combined into a single channel with growth increments in multiples of VC-3s or VC-4s.

As a matter of history, prior to virtual concatenation, contiguous concatenation offered only the coarse growth increments that are also illustrated in Table 2.

Table 2. Contiguous Concatenation and Virtual Concatenation Types

SDH Container   Type         Payload capacity (Mbit/s)
VC-11           Low Order    1.6
VC-12           Low Order    2.176
VC-3            High Order   48.384
VC-4            High Order   149.76

Contiguous Concatenation
VC-4-4c         High Order   599.04
VC-4-8c         High Order   1198.08
VC-4-16c        High Order   2396.16
VC-4-64c        High Order   9584.64

Virtual Concatenation
VC-12-Xv        Low Order    X x 2.176, where X = 1 to 63
VC-3-Xv         Low Order    X x 48.384, where X = 1 to 255
VC-4-Xv         High Order   X x 149.76, where X = 1 to 255


Furthermore, contiguous concatenation requires that all intermediate nodes support contiguous concatenation, while virtual concatenation does not require any special capability of the intermediate nodes.
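As a rough illustration of how the payload capacities in Table 2 translate into service sizes, the following sketch computes how many members a virtually concatenated group needs for a given Ethernet rate. The capacities are those quoted in the table; the helper name is invented for this example.

```python
# Sizing a virtually concatenated group from the Table 2 payload capacities.
import math

PAYLOAD_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}
MAX_MEMBERS  = {"VC-12": 63, "VC-3": 255, "VC-4": 255}

def vcg_members_needed(ethernet_mbps: float, vc_type: str) -> int:
    members = math.ceil(ethernet_mbps / PAYLOAD_MBPS[vc_type])
    if members > MAX_MEMBERS[vc_type]:
        raise ValueError(f"{vc_type}-{members}v exceeds the allowed member count")
    return members

# e.g. a 100 Mbit/s Ethernet service fits in a VC-12-46v or a VC-3-3v
print(vcg_members_needed(100, "VC-12"))   # 46
print(vcg_members_needed(100, "VC-3"))    # 3
```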

There is more value to virtual concatenation, but to see the big picture we need to dig deeper into SDH protection.

Use LCAS for Protection and Stop Wasting Bandwidth on Traditional SDH Protection Options

SDH Automatic Protection Switching (APS) defines three kinds of channels in ITU-T G.841.

1. Working channel (i.e., protected service). This is a working circuit. If this circuit fails, traffic is diverted to a protection channel within 50 milliseconds. The protection channel is defined next.

2. Protection channel (i.e., pre-emptible service). This is the protection circuit that carries traffic when the working channel fails. This circuit is wasted until it is needed as a redundant path when the working channel fails. Hopefully the ring performs well and the protection channel is rarely needed. If this is true, then it may make sense to put the protection channel to work rather than waste it when all systems are functioning normally. Therefore, the G.841 standard permits this channel to carry extra traffic while the working channel is operating. Any traffic that uses the protection channel is operating at some peril, because when the working channel fails, the extra traffic is pre-empted so that the protection channel can perform the job for which it was intended, which is to carry the load of the working channel during a failure. It seems foolish to use the protection channel for anything other than protection, so we conclude immediately that the option to use a pre-emptible channel for working traffic is too risky.

3. Non-pre-emptible unprotected (i.e., unprotected service). This is a working channel that does not have any protection available in case of a failure.


Figure 3. Non-pre-emptible unprotected services will not be affected by fibre cuts elsewhere in the SDH ring. (Unprotected VC-4-4v channels carried between NEs on an STM-16 ring; the failure occurs on a span not used by those channels.)
To understand the advantage of this third option for data traffic, see Figure 3. In this example a VC-4-4v is being carried using an unprotected channel. Service remains after the fibre cut because the cut did not affect the facilities being used to transport this circuit.

This type of channel has several positive characteristics. First, it has the advantage of not wasting bandwidth on protection, since protection channels are not used. Secondly, several unprotected channels can be grouped together, each operating on different paths, so that if a failure occurs, not all of the channels will be affected. Now look at Figure 4: this time the fibre cut occurs in the section carrying the VC-4-4v and, as expected, the VC-4-4v is now down, since the service is unprotected. However, an alternate path is known by the affected device and traffic is rerouted with no assistance from SDH. We conclude that unprotected service is best for technologies that already supply their own protection path(s) and do not want to pay for SDH protection bandwidth. It is this logic that drives a powerful application for virtual concatenation that specifically serves data services, especially Ethernet-over-SDH.


Figure 4. Non-pre-emptible unprotected services are desirable when protection is available from a higher layer. (A failure on the STM-16 ring takes the unprotected VC-4-4v service down, but an alternate path is known to the affected device.)
Virtual concatenation can be used to concatenate any VC-12, VC-3 or VC-4 SDH channels. The channels can traverse different rings, some protected, some unprotected, or even all of them unprotected. The requirement is that the channels be of the same VC type and that they drop to a single NE tributary card. The SDH equipment along the path from point A to Z is oblivious to the presence of virtual concatenation.

Link Capacity Adjustment Scheme


If a failure of a single unprotected VC-12 occurs within a group of n VC-12s, the entire virtually concatenated group will fail. Instead, what should happen is that when part of a virtually concatenated group fails, data is diverted to the remaining channels in the group that are still working. This is a primary role of Link Capacity Adjustment Scheme (LCAS).


LCAS is a new kind of protection for data services operating over SDH. It is a protocol that can sense a failure on a virtually concatenated member and drop it from service while keeping the working members carrying traffic. LCAS itself can only sense a problem. It then notifies virtual concatenation of the failure and virtual concatenation automatically load balances traffic across the remaining working members.

Remember, LCAS is strictly for data services and is a perfect fit for serving Ethernet-over-SDH. The technology that adds the elasticity to cope with the changing data rates is GFP. Recall that GFP performs dynamic rate adaptation at the Ethernet PHY/SDH interface. This means that the Ethernet PHY can operate at any rate it chooses and/or the SDH transport channels can be dynamically changed, and GFP will dynamically adjust to the new rate. GFP rate adjustment is instantaneous. Data will be lost, however, from the instant of failure until LCAS senses the failure. The momentary outage can range from practically instantaneous to as long as 64 milliseconds for VC-4 concatenations and 128 milliseconds for VC-12 concatenations. This is why virtual concatenation and LCAS are often mentioned together. They are complementary and ultimately a mutual requirement if network operators want to use unprotected channels for data services.

Figure 5. LCAS will remove member 1 and balance the offered load across the remaining three VC-12s. (A 100Base-T service is carried via GFP into a virtually concatenated group of four unprotected VC-12s; a failure takes member 1 out of service.)


Since we have discussed three important technologies, now is a good time to see an example of how they interoperate so far. In Figure 5 we see a simple point-to-point Ethernet service. The subscriber's 100Base-T connection terminates on an Ethernet interface on an SDH network element. GFP operates at the 100Base-T line rate on the Ethernet side and at 8.7 Mbit/s (4 x VC-12 = 8.7 Mbit/s, see Table 2) on the SDH side. GFP dynamically performs the necessary buffering and rate adaptation.

Virtual concatenation combines the four VC-12s, making them appear as a single 8.7 Mbit/s point-to-point circuit to GFP. LCAS constantly scans the four VC-12s, making sure they are all functioning, when, as shown in Figure 5, a failure on an SDH ring causes the unprotected #1 VC-12 in this VC-12-4v virtually concatenated group to fail. LCAS notices the outage within a maximum of 128 milliseconds and informs virtual concatenation that the #1 VC-12 has failed. Virtual concatenation removes the first VC-12 from service and load balances the remaining traffic across the three remaining VC-12s. GFP instantly senses the back pressure from virtual concatenation and ratchets back to 6.53 Mbit/s (3 x VC-12 = 6.53 Mbit/s). On the Ethernet side, GFP continues to operate at the line speed of 100Base-T.

An option often deployed on the Ethernet side to apply backpressure into the Ethernet network is 802.3x flow control. At some point before the GFP buffers approach capacity, 802.3x flow control messages can be sent back to the MAC addresses of senders to inform them to stop sending for n ticks (see the IEEE 802.3x standard for more information on this powerful control). Of course 802.3x flow control is always available, not just during a partial outage.
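For illustration, here is a rough sketch of the 802.3x PAUSE frame mentioned above: a MAC Control frame sent to the reserved multicast address, asking the sender to stop for a number of pause quanta (units of 512 bit times). The field values follow the commonly documented layout; the helper name is invented for this example.

```python
# Building an 802.3x PAUSE frame (FCS omitted for brevity).
PAUSE_DA = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address

def build_pause_frame(source_mac: bytes, pause_quanta: int) -> bytes:
    frame = bytearray()
    frame += PAUSE_DA                      # destination address
    frame += source_mac                    # source address (6 octets)
    frame += (0x8808).to_bytes(2, "big")   # MAC Control EtherType
    frame += (0x0001).to_bytes(2, "big")   # opcode: PAUSE
    frame += pause_quanta.to_bytes(2, "big")
    frame += bytes(42)                     # padding to minimum frame size
    return bytes(frame)

# e.g. ask the sender to pause for 0xFFFF quanta while the GFP buffer drains
pause = build_pause_frame(bytes.fromhex("0002b3aabbcc"), 0xFFFF)
```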

So far we have seen technology that can deliver a simple point-to-point Ethernet service across dedicated bandwidth, which is called a Layer 1 Ethernet Private Line (EPL). Layer 1 EPL is simple, reliable and can tap protection bandwidth, but it does not offer statistical multiplexing, which in many applications can multiply the sellable bandwidth many times over.


To accomplish statistical gain, plus deliver Virtual Private LAN Service (VPLS), we need to add additional technology to the basic Ethernet Private Line design. Several technologies may perform this function: MPLS, 802.1D, 802.1P/Q and IEEE 802.17 RPR.

Offer the Ethernet Service that the Subscribers Want: 802.1D/Q/P

The IEEE 802.1D MAC bridges standard defines Ethernet switching based on the use of source and destination MAC addresses. For the sake of this discussion, an Ethernet switch is simply defined as a multi-port Ethernet bridge. Ethernet permits two MAC clients to communicate via an Ethernet network by sending an Ethernet frame that contains both source and destination MAC addresses.

An Ethernet switch learns the MAC addresses of each connected client by looking at the source address field and recording from which direction it came. Each time an Ethernet frame is sent, the source MAC address identifies the sender, and the Ethernet switch uses this information to associate MAC addresses with the inbound switch port. In this fashion the switch quickly associates MAC addresses with each of its switch ports. Once a switch learns the port location of a MAC address, it forwards Ethernet frames for that address out the appropriate port only.

This means that a MAC client must send frames for a switch to learn its MAC address. Until a MAC client sends Ethernet frames, the switch has not learned its MAC address and this particular client's MAC address is unknown to the switch. When a switch receives a frame with an unknown destination MAC address, it must flood the frame out all of its ports. A response to the flooded frame will reveal the location of the unknown MAC address, and the switch will stop flooding frames for this particular MAC address given that its location is now known.

In this example, the discovery of the unknown MAC address occurs when the destination MAC client receives the


flooded Ethernet frame and responds back to the source, placing its own MAC address in the source field of the Ethernet header. The switch instantly learns where the previously unknown MAC address was. Now that the switch has learned the location of the MAC address, Ethernet frames to this destination are only forwarded to the appropriate Ethernet switch port rather than flooding the entire Ethernet network.
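To make the learning and flooding behaviour concrete, here is a toy sketch of an 802.1D-style learning bridge. The class and method names are invented for this illustration, and real bridges also age out entries, which is omitted here.

```python
# A toy learning bridge: record which port each source MAC was seen on, forward
# known destinations out one port, and flood unknown destinations.

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                      # MAC address -> port

    def receive(self, frame_src, frame_dst, in_port):
        self.mac_table[frame_src] = in_port      # learn/refresh the sender's port
        out = self.mac_table.get(frame_dst)
        if out is not None and out != in_port:
            return {out}                         # destination known: forward on one port
        return self.ports - {in_port}            # unknown destination: flood all other ports

bridge = LearningBridge(ports=[1, 2, 3, 4])
print(bridge.receive("aa:aa", "bb:bb", in_port=1))  # bb:bb unknown -> flooded to {2, 3, 4}
print(bridge.receive("bb:bb", "aa:aa", in_port=3))  # reply: aa:aa already learned -> {1}
print(bridge.receive("aa:aa", "bb:bb", in_port=1))  # bb:bb now learned -> {3}
```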

Ethernet switching requires that each MAC client possess a unique MAC address, but there is no guarantee that two subscribers will not, through accidental or intentional means, use the same Ethernet MAC address and cause conflicts or serious security issues.

The simplicity of the 802.1D protocol makes it compelling for LANs, but if several subscribers share the same Ethernet switch, here are a few of the many problems that could occur:

- All subscribers will receive each other's broadcasts, multicasts and flooded frames.

- One subscriber could broadcast incessantly and fill all other subscribers' networks with broadcast traffic (in addition to their own, of course).

- A hacker can learn the MAC addresses of other subscribers by sniffing broadcast and flooded traffic. Once a victim's MAC address is known, it is then possible to use that MAC address to appear as the victim and access information that is not intended to be shared. Automation exists for hackers to systematically learn and hack each learned MAC address, creating a security nightmare.

There are many more examples, but this evidence is sufficient to conclude that 802.1D switching must be partitioned so that each subscriber's private Ethernet network appears as a unique instance of an 802.1D switch. This does not mean that there need to be separate physical Ethernet switches, but that each Ethernet switch needs to appear as a separate logical Ethernet switch to each subscriber.


IEEE 802.1Q Virtual Bridged Local Area Networks


The IEEE 802.1Q standard describes how to logically partition a single physical Ethernet network into separate logical Ethernet networks called Virtual Local Area Networks (VLANs). Inbound traffic from each LAN is tagged with a unique VLAN identifier, which the Ethernet switched network uses to assure that no traffic on any particular VLAN leaks onto any other VLAN. While this works quite well, there are limitations to the IEEE 802.1Q VLAN approach. First, there are only 4096 VLAN IDs, which is woefully inadequate for carrier-class networks that could require millions of VLAN IDs in the future. Furthermore, most subscribers are already using IEEE 802.1Q VLANs in their own networks and expect to maintain complete control of them.

Therefore, VLAN tags may not be referenced by the Ethernet network operator to partition different subscribers' traffic from each other. Instead, the Ethernet network operator must process the VLAN tags on a per-subscriber basis, because the subscriber is already using the VLAN tags within their own organisation to partition their own traffic.

Table 3. Some of the More Popular IEEE Ethernet Standards

802.1Q    VLAN
802.1P    Priority
802.1D    Bridge protocol and Spanning Tree
802.1W    Rapid Spanning Tree
802.3u    10/100Base-T (auto-negotiation)
802.3z    Gigabit Ethernet physical layer: 1000Base-SX (770-860 nm optical layer), 1000Base-LX (1270-1350 nm optical layer), 1000Base-LH (not an IEEE standard, normally uses 1310 nm of the DWDM C and L bands)
802.3ae   10 Gigabit Ethernet physical layer
802.3ah   Ethernet in the First Mile
802.3ab   1000Base-T (Gigabit Ethernet over copper)
802.3x    Flow control


For instance, a certain subscriber does not want the sales department to have access to the personnel department's network, so they will put the two departments in separate VLANs. Often a subscriber uses routing, packet filtering or firewall services to control traffic flow between the two VLANs.
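The limits discussed above follow directly from the 802.1Q tag format: a 12-bit VLAN ID (hence 4096 values) sitting next to the 3-bit priority field used by 802.1P. The following sketch parses such a tag; the layout is the standard one, but the function name is invented.

```python
# Parsing an IEEE 802.1Q tag from a raw Ethernet frame.
def parse_vlan_tag(frame: bytes):
    """Return (priority, vlan_id, inner_ethertype) if the frame is 802.1Q tagged."""
    tpid = int.from_bytes(frame[12:14], "big")
    if tpid != 0x8100:                      # not VLAN-tagged
        return None
    tci = int.from_bytes(frame[14:16], "big")
    priority = tci >> 13                    # 3-bit 802.1P priority (0-7)
    vlan_id = tci & 0x0FFF                  # 12-bit VLAN ID: only 4096 values
    inner_ethertype = int.from_bytes(frame[16:18], "big")
    return priority, vlan_id, inner_ethertype

# e.g. DA + SA + TPID 0x8100 + TCI(priority 5, VLAN 100) + IPv4 EtherType
frame = bytes(12) + bytes.fromhex("8100") + ((5 << 13) | 100).to_bytes(2, "big") + bytes.fromhex("0800")
print(parse_vlan_tag(frame))                # (5, 100, 2048)
```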

IEEE 802.1P Supplement to Media Access Control (MAC) Bridges: Traffic Class Expediting and Dynamic Multicast Filtering

The IEEE 802.1P standard defines how real-time frames are tagged and forwarded in an Ethernet network. In order to offer 802.1P services, an 802.1Q header is required. Part of the 802.1Q header is a priority field that identifies the priority of the Ethernet frame. By simply reading the 802.1P priority field, the Ethernet switch knows how the frame is to be treated and forwards the frame with the indicated level of priority. The 802.1P field is only important when there is congestion in the network and frames are being queued. If there is so much bandwidth available that congestion never occurs, then obviously 802.1P is not needed. However, the likelihood of bandwidth availability always being greater than bandwidth consumption cannot be guaranteed, so support for 802.1P is a very good idea. Since voice and video over IP are growing in importance, support of the 802.1P field may become mandatory for Ethernet network operators if they wish to be competitive.

IEEE 802.1D, 802.1Q and 802.1P Conclusions

The simplicity and effectiveness of these standards has made them ubiquitous. They exist for the sake of controlling a subscriber's private LAN environment. If an Ethernet network operator tries to use the same technology without making any additional provisions to partition subscribers, a security nightmare will result. Therefore, a carrier must deploy some additional technology beyond these IEEE standards to partition each subscriber's network from that of other subscribers.


Figure 6. An MPLS Label Switched Path. (The customer starts out carrying label 70; the label switching tables at each intersection read: In = 70, Out = 22, Direction = North; In = 22, Out = 125, Direction = North; In = 125, Out = 99, Direction = East; In = 99, Out = 212, Direction = East, leading to the workplace.)

Guarantee a Secure Network By Using MPLS


Multi-Protocol Label Switching may be used as the workhorse of the Ethernet network operator's network, since it has the advantage of offering end-to-end services while still delivering statistical gain, partitioning subscriber traffic and meeting the delivery specifications spelled out in subscribers' Quality-of-Service (QoS) Service Level Agreements (SLAs).

MPLS creates virtual pathways called Label Switched Paths (LSPs). To understand the mechanism of MPLS, and in order to give a dry topic some levity, an analogy is presented here. Assume you are giving a customer directions to your workplace. You could simply give the postal address, including the street address, and tell your customer to find their own way. Instead, you decide to make it easy for your customer and give directions in a totally different fashion. First, you post a person (a smiley) to stand at every


intersection. You give each smiley a label switching table that you have already configured, as shown in Figure 6. You then tell your customer to start walking north, carrying the number 70, and to look for a bright yellow smiley face. The first smiley takes the number 70 and, in this case, gives back the number 22 and tells your customer to keep walking north. At the next intersection, the smiley takes the number 22 in exchange for the number 125 and tells your customer to keep walking north. At the next intersection, the number 125 is exchanged for the number 99 and your customer is told to walk east. The number 99 is exchanged for the number 212 at the next intersection and your customer is told to continue walking east. Standing outside your office, you see a bewildered person walking towards you carrying the number 212, and you know that this must be your customer.
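The smiley tables in Figure 6 behave like a tiny label-switching forwarding table. A minimal sketch, with the table contents taken from the figure and the node names invented:

```python
# Each node swaps the incoming label for an outgoing label and a direction.
LABEL_TABLES = {
    "smiley1": {70: (22, "north")},
    "smiley2": {22: (125, "north")},
    "smiley3": {125: (99, "east")},
    "smiley4": {99: (212, "east")},
}

def switch_label(node: str, in_label: int):
    out_label, direction = LABEL_TABLES[node][in_label]
    return out_label, direction

label = 70
for node in ["smiley1", "smiley2", "smiley3", "smiley4"]:
    label, direction = switch_label(node, label)
    print(f"{node}: walk {direction} carrying label {label}")
# The customer arrives carrying label 212, identifying this path end to end.
```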

This simple analogy points out some very important issues. First, the street is a common pathway that can be used by lots of other traffic. The differentiator from one traffic flow to another is the label number itself. There is nothing in this analogy keeping us from designing millions of Label Switched Paths by adding more information to the label switching table at each smiley face. Since the LSP is virtual, not physical, the network can be made controllable by software, so the analogy needs to be carried a bit further.

Obviously it is too tedious to compute the best LSP through the city streets and then update each smiley's LSP table manually. It is much better if a map of the city is available to you and all you have to do is click the A and Z points, so that the management system calculates the best path and updates each smiley's LSP table. Furthermore, you should be able to identify the QoS level of each LSP so that each smiley face can manage the relative importance of each packet attempting to pass through the intersection.

To see how this analogy applies to real-world networks, consider that the smiley faces are MPLS switches, which are fully contained on a single card in an SDH network element. The streets are Virtually Concatenated Groups (VCGs) that interconnect each MPLS switch.


We now have common data highways to carry multitudes of statistically multiplexed, QoS-protected subscriber traffic, and a new diagram is required to see this enhancement over EPL.

Provide True QoS Using MPLS

Multi-Protocol Label Switching offers another key feature for network operators: the ability to guarantee Quality-of-Service on a per-subscriber basis. Rather than simply providing connectivity, a network operator can offer (and charge for) different levels of service. The service levels are based on two components: a bandwidth class and a service grade.

The bandwidth class specifies the sustained and peak bandwidth guarantees. The service grade determines the delay, jitter and drop-precedence aspects. The QoS experienced by the customer is a function of both bandwidth class and service grade. Some generic examples of service classes are as follows:

- Real-time or Expedited Forwarding: This class is for applications like video and Voice-over-IP where jitter must be tightly controlled. This class may not be over-subscribed and bursted traffic is discarded. This service compares with ATM Constant Bit Rate (CBR) or leased-line service.

- Business Data With or Without Burst: Also called assured forwarding, this class is for business data traffic and is subject to minor queuing. Lower priority traffic is pushed out of the way when present. This is similar to ATM's Variable Bit Rate (VBR) or RPR's Class B-CIR (Committed Information Rate).

- Best-Effort: This is for low-priority traffic that may be able to tolerate widely varying queuing delays, such as internet traffic. This is similar to ATM's Unspecified Bit Rate (UBR).

Consider a subscriber that wishes to operate video conferencing over a network operator's Ethernet network. One way to accomplish this is to configure a new LSP in parallel with an existing best-effort LSP.


The appropriate LSP is selected based on the priority of the Ethernet frame being sent. A more elegant method, which avoids additional label switched paths for each grade of service, uses the EXP bits in the MPLS header. These three bits can be used in a fashion similar to the 802.1P field and can specify the service level within a given label switched path.
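For reference, the EXP (traffic class) bits sit inside the 32-bit MPLS label stack entry. A small sketch of that commonly documented layout, with invented helper names:

```python
# Encoding and decoding a 32-bit MPLS label stack entry: 20-bit label,
# 3-bit EXP (traffic class), 1-bit bottom-of-stack flag, 8-bit TTL.

def encode_mpls_entry(label: int, exp: int, bottom_of_stack: bool, ttl: int) -> int:
    assert 0 <= label < 2**20 and 0 <= exp < 8
    return (label << 12) | (exp << 9) | (int(bottom_of_stack) << 8) | ttl

def decode_exp(entry: int) -> int:
    return (entry >> 9) & 0b111             # the 3-bit EXP/traffic-class field

entry = encode_mpls_entry(label=212, exp=5, bottom_of_stack=True, ttl=64)
print(hex(entry), decode_exp(entry))        # EXP=5 marks a high-priority service level
```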

Using this approach, when the subscriber's Ethernet frames carrying video traffic are sent into the operator's network, the subscriber tags the Ethernet frames with an 802.1P field that indicates a high priority level. The network operator's Ethernet card will forward the frame over the appropriate label switched path and set the EXP bits to high priority. Subsequent MPLS switches read the EXP bits and forward the Ethernet frame accordingly.

Using separate label switched paths for each subscriber assures that each switching point along the way never loses track of each subscriber's traffic. The ability to individually manage each specific subscriber's traffic along the entire path of the network is the basic premise of end-to-end (E2E) service.

Increase Utilisation By an Order of Magnitude With MPLS-Based Statistical Multiplexing

Multi-Protocol Label Switching may be used to add statistical multiplexing capabilities to our EPL design. Recall the Layer 1 EPL design shown in Figure 5. This design could only support a single subscriber per virtually concatenated group. By adding MPLS, it is now possible to operate many subscribers across the same virtually concatenated group. In Figure 7, a VC-12-4v is being used to interconnect two different subscribers. Separate MPLS LSPs are assigned per subscriber in order to assure that both subscribers' networks are partitioned from each other. The Ethernet interface, MPLS switching,

GFP encapsulation and virtual concatenation are all included in a single Ethernet services card, as illustrated in Figure 7.


The Ethernet services card maps customer A1 traffic to an LSP that transports subscriber A's Ethernet frames between locations A1 and A2. Likewise, the same service is offered between subscriber B's B1 and B2 locations. In order to ensure that no customer can utilise more bandwidth than they have purchased, the Ethernet services card restricts the bit rate according to the bandwidth class and service grade that were sold to the customer. This is explained in greater detail in the MPLS QoS section.

A Layer 2 Ethernet Private Line makes sense for:

1. Low-cost, two locations only: a subscriber has only two locations and does not wish to purchase higher priced Layer 1 EPL services.

Figure 7. A Layer 2 Ethernet Private Line combines virtual concatenation, LCAS and MPLS to enable statistical multiplexing. (An Ethernet services card with 10/100/1000Base-X plugs and an embedded Ethernet switch runs MPLS, GFP and a VCG over a VC-12-4v; subscriber A's 100Base-T sites A1 and A2 and subscriber B's sites B1 and B2 are carried over separate MPLS LSPs.)


2. Hub and spoke networks: a multi-location subscriber network where all traffic must flow back to a common hub location such as a corporate headquarters.

Use Ethernet Private Networking With Distributed Switching to Support Mesh Applications, Avoid Back-Haul and Provide Resiliency

When subscribers wish to operate a mesh design, which allows information to flow directly between each of their multiple locations rather than back-hauling the data to a common point, a more advanced service is required.

The Ethernet services card may be added to an SDH network element wherever Ethernet services are needed. This card is capable of IEEE 802.1D/Q/P Ethernet switching in addition to MPLS switching and Ethernet to SDH rate adaptation, as shown in Figure 8. Furthermore, the Ethernet switching function of the Ethernet services card presents a unique instance to each subscriber, creating a virtual Ethernet switching function that acts as a private Ethernet switch per subscriber. The physical interfaces to the Ethernet services card are 10/100/1000 Mbit/s Ethernet on one side and SDH on the other. Rate adaptation and protection technologies are also included on this single card.

Interconnecting Ethernet services cards in the core of the network forms the core of an Ethernet services network, as seen in Figure 8, Central Office A. The Ethernet services cards are interconnected using low-cost Gigabit Ethernet links. Since GFP dynamically rate adapts to whatever bandwidth traverses these Gigabit links, we can use all or any portion of the Gigabit Ethernet link to support Ethernet traffic between the two SDH rings. The traffic that traverses the Gigabit Ethernet links is partitioned by MPLS LSPs, so the links are really a common, statistically multiplexed highway between SDH rings.

Furthermore, cross-connections for different subscribers' additional networks are now added using MPLS LSPs, which are virtual, not physical. The conclusion here is that MPLS not only solves the problem of partitioning different subscribers' traffic, it does so in a virtual fashion that greatly reduces port counts in the core of the network and eases the effort involved in provisioning Ethernet services using existing SDH rings.


Figure 8. Adding Ethernet services cards in the core network elements. (Each Ethernet services card provides 10/100/1000Base-X plugs, an embedded Ethernet switch, MPLS, GFP and VCGs; Gigabit Ethernet links interconnect Ethernet services cards within the same Central Office, creating the inter-ring links between Central Office A and Central Office B.)

In Figure 9, a subscriber is added to our Ethernet-over-SDH network. Since we have the core already built, Ethernet services cards F, G, H, I and J must be added. Ethernet services are then physically connected to the subscriber's sites 1-8. Next, virtual concatenation groups are added to interconnect the new Ethernet services cards.

Once the physical layer is completed, five label switched paths are configured, as seen in Figure 10. Ethernet services card C is elected as the core Ethernet switch, and LSPs one to five are built to connect this subscriber's traffic to the core Ethernet switch. As an example, look at LSP #1. It originates at switch F, passes through switch B and terminates in switch C.


Figure 9. Virtually Concatenated Groups (VCGs) are configured to interconnect the Ethernet services cards serving subscriber sites 1-8.

Since all Ethernet services cards are capable of Ethernet switching and MPLS label switching, the same cards can be used to support many subscribers. Since any Ethernet services card can act as a virtual Ethernet switch and/or MPLS switch, we choose C as the best choice for the core virtual Ethernet switch for this subscriber. Ethernet services card I will switch traffic between subscriber locations 6 and 7, passing traffic to any other destinations through LSP #3. Ethernet services card H will locally switch traffic between locations 4 and 5 and pass any other traffic through LSP #4. The core switch C examines the MAC address or VLAN tag (the subscriber's choice) and chooses the appropriate LSP, forwarding the packet directly to the destination. We conclude that the network seems versatile so far, but what happens when another subscriber is added?


Figure 10. Label switched paths are configured to create a private virtual Ethernet service offering. Additional LSPs would normally be added for redundancy (for example, configuring additional LSPs to operate B as a secondary core for resiliency), but they are not shown in the figure for the sake of clarity. Card C is the core virtual Ethernet switch; edge virtual Ethernet switches locally switch traffic between sites 2 and 3, 4 and 5, and 6 and 7.

The answer is that the same infrastructure is reused, potentially adding no additional capital expense (CapEx) to the project if the new subscriber's locations are already near an SDH NE that is equipped with an Ethernet services card.

This design methodology avoids the n-squared problem of trying to build this same network using an LSP mesh. For example, if a subscriber were to interconnect all eight locations using a standard mesh network, 28 label switched paths would be required to interconnect all sites, according to the formula n(n-1)/2: 8(8-1)/2 = 28. Rather than 28 LSPs, this architecture accomplishes the same by configuring only five LSPs, yet it delivers a virtual mesh service.
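A quick check of that arithmetic, comparing a full LSP mesh with the five-LSP design of Figure 10:

```python
# A full mesh of n sites needs n(n-1)/2 LSPs; the Figure 10 design needs one
# LSP per edge card toward the core (five for eight sites, since three pairs
# of sites are switched locally on the same card).

def full_mesh_lsps(n_sites: int) -> int:
    return n_sites * (n_sites - 1) // 2

print(full_mesh_lsps(8))   # 28 LSPs for a full mesh of 8 locations
```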

As additional subscribers are added to this network, new label switched paths must be created for each subscriber in order to partition each subscriber's traffic from the others.


When an Ethernet card receives an Ethernet frame, it references the MPLS label in order to determine to which subscriber's network the frame belongs. Since the Ethernet card maintains separate MAC address tables for each subscriber, MPLS labels serve the important function of discriminating to which subscriber a particular Ethernet frame belongs. With MPLS, complete partitioning of one subscriber's traffic from other subscribers' traffic is guaranteed.

Resilient Packet Ring


Resilient Packet Ring establishes a new MAC layer, designed to run autonomously on its own ring, providing its own protection. It is important to note that RPR does not define a new Layer 1 and normally would use Gigabit Ethernet, 10 Gigabit Ethernet or virtually concatenated SDH channels as the spans.

Figure 11. RPR ring architecture: two ringlets (Ringlet 1 and Ringlet 2) connecting RPR stations.
RPR Station: an RPR node; 255 stations maximum; each station's identity is its MAC address.
Domain: all stations belonging to this RPR.
RPR Span: the physical layer, the link between stations; all spans must use the same bit rate; an RPR span may traverse multiple SDH rings; a Gigabit or 10 Gigabit Ethernet PHY can serve as the link. RPR does not define interconnection of multiple RPR domains.


The general topology of the RPR architecture is illustrated in Figure 11. The RPR network elements, called stations, are interconnected to adjacent stations via RPR spans. Since RPR is Layer 1 agnostic, an RPR span could theoretically be anything, although practicality dictates otherwise, so RPR currently spells out either SDH or Ethernet PHY as the preferred Layer 1 technology.

There are two limiting factors regarding the spans themselves. First, they must all be the same bit rate, so if stations one and two are connected via a VC-4-5v, all stations must be connected at that rate regardless of situational necessity. This means that RPR requires a fixed amount of bandwidth between every SDH network element on the SDH ring, if we assume that RPR stations are plug-in units in each SDH network element. Second, RPR provides its own protection scheme; therefore, RPR should be assigned virtually concatenated unprotected channels to meet the subscriber requirements for bandwidth and levels of protection. When allocating RPR bandwidth for subscriber utilisation, it is up to the management software to calculate how much protection bandwidth will be available to RPR in the event of a failure and to make sure protected services are not over-committed.

RPR Needs MPLS

Similar to 802.1D switching, RPR must depend on MPLS or some other means to identify and partition each subscriber's traffic. When traffic is inserted onto an RPR ring, RPR encapsulates the frame and assigns appropriate source and destination MAC addresses. These source and destination MAC addresses are those of the RPR stations, not the subscriber.

Consider this example while viewing the RPR topology shown in Figure 11.

1. Two subscribers' Ethernet frames are inserted at station A and stripped at station D.

2. RPR station A individually encapsulates both Ethernet frames but both carry the same RPR MAC addresses as follows: From: Station A To: Station D


3. The frames pass through station B, which cannot discriminate between the two subscribers since both RPR frames contain the same source and destination RPR MAC addresses.

4. The frames pass through station C with the same limitation explained in step 3.

5. The frames arrive at station D, where the RPR header is stripped off and the contents of the RPR frame are revealed.

6. If MPLS was used, then the MPLS label can now be referenced in order to properly forward the frame.

7. If MPLS or some other function is not used to identify the ultimate source and destination of the frame, then the identity of the frame is lost.
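To illustrate the example above, here is a toy sketch of why intermediate stations cannot tell the two subscribers apart, and why a label carried inside the payload restores that identity. The data structures are invented for illustration and are not the real IEEE 802.17 frame format.

```python
# RPR-style encapsulation uses the ring stations' addresses, so two subscribers'
# frames wrapped for the same A-to-D hop look identical to stations B and C.

def rpr_encapsulate(subscriber_frame: dict, src_station: str, dst_station: str) -> dict:
    return {"rpr_src": src_station, "rpr_dst": dst_station, "payload": subscriber_frame}

sub_a = {"mpls_label": 100, "eth_dst": "aa:01", "eth_src": "aa:02"}
sub_b = {"mpls_label": 200, "eth_dst": "bb:01", "eth_src": "bb:02"}

ring_frames = [rpr_encapsulate(f, "Station A", "Station D") for f in (sub_a, sub_b)]

# At stations B and C only the outer addresses are visible: both frames look alike.
print([(f["rpr_src"], f["rpr_dst"]) for f in ring_frames])
# At station D the RPR header is stripped and the MPLS label tells the subscribers apart.
print([f["payload"]["mpls_label"] for f in ring_frames])
```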

Therefore, we conclude that RPR is not an end-to-end service and cannot be used for that particular purpose. RPR service begins when traffic enters a ring and ends immediately when the frame is stripped from the ring. A service like MPLS is still required, unless the entire RPR ring is intended for only a single subscriber.

Service Classes

When traffic enters an RPR ring, it must be assigned one of three user classes of service, some of which are divided into subclasses, so we can say there are really five distinct levels of service if the entire RPR specification is deployed. When packets are inserted onto an RPR ring, they must be marked according to the type of treatment they require when traversing the RPR ring. Each service class is now explained in detail here.

Class A0 and A1 are for applications such as video and Voice-over-IP. Class A0 traffic may not be over-subscribed, and if the allocated bandwidth for A0 is not used, it cannot be reclaimed by lower priority services; it is simply wasted. Bursted traffic is discarded.


Class B-CIR (Committed Information Rate) is for business data traffic and is subject to minor queuing, but it will push lower priority traffic out of the way when present.

Class B-EIR (Excess Information Rate) allows business traffic to take a weighted fair share of the available unallocated bandwidth, plus any bandwidth reclaimed from other subscribers that are not using their Class A1 or Class B-CIR allocation at this particular instant.

Class C is for low priority traffic that may be able to tolerate large amounts of queuing delay.

RPR Protection

RPR can provide protected services through one of three different means: wrapping, steering or pass-through. The appropriate method depends on what was deployed and the type of failure. If a station itself fails, it may take itself offline and simply pass data through as if it were a repeater. This is better than breaking the connection altogether. If RPR senses a fibre cut or catastrophic station failure, steering or wrapping may be deployed, but not both. Regardless of the protection method, the ring has just lost 50 per cent of its capacity and traffic must be routed the opposite way around the ring, approximately similar to SDH.

Spatial Reuse

Spatial reuse only has meaning to people familiar with FDDI and token ring. When a packet is placed on an FDDI or token ring, it blocks access to the entire ring while it travels the entire circumference of the ring to be stripped by the station that sent the packet in the first place. RPR is different from those LAN topologies. With RPR, a packet can be sent from station A to station B while, at the same time, a packet is being sent from station B to station C. To an SDH engineer this is standard operating procedure, but nevertheless we cannot discount spatial reuse, because the old ways like FDDI and token ring wasted lots of bandwidth. Compared with a pure MPLS network that does not include RPR, however, spatial reuse does not deliver anything especially useful or new.


Figure 12. Ethernet services card with the RPR stack included: 10/100/1000Base-X plugs and an embedded Ethernet switch feed MPLS, then RPR, then GFP and VCGs towards Ringlet 1 and Ringlet 2.

Adding RPR to the Ethernet Services Card


The same Ethernet services card we studied earlier now has a new RPR layer sandwiched between MPLS and GFP (see Figure 12). The RPR layer services the RPR ringlets and serves the MPLS client.

In comparison to the network shown in Figure 10, an RPR-enabled Ethernet card is now added to show a different implementation. In Figure 13 we see the same network deployed using RPR, with MPLS supplying the end-to-end services.

The first thing we notice is that a lot more bandwidth is used on the individual rings to create the four RPR rings required to deploy this service. We also notice that five LSPs are still required, since RPR does not support end-to-end services. Since our Ethernet services card supports Ethernet switching in addition to RPR, cards G, H and I perform local switching between 2 and 3, 4 and 5, and 6 and 7 respectively, rather than placing the packets on RPR. When packets are destined for other locations, it is MPLS that really does all the work; RPR just forms an additional link layer between Ethernet services cards, which is superfluous because that function is already adequately handled by GFP and MPLS.


In Figure 13, the network using RPR looks much like it did in Figure 10. The difference is that additional bandwidth is consumed on each ring to support the RPR ringlets. The tough question must now be asked: what is RPR doing? It appears that it is using up bandwidth and adding cost. Over an SDH network, it seems to add an additional layer without eliminating anything or providing any new services. We can accomplish the same thing at a much lower cost, and achieve better bandwidth efficiency, with virtual concatenation, LCAS, 802.1D/Q/P switching and MPLS.

So Where does RPR Make Sense?


If we eliminate SDH from the equation and use native Gigabit Ethernet as the PHY between RPR stations, then RPR could make sense. In this case, LCAS, virtual concatenation, SDH clocking and SDH itself are eliminated from the equation. All of this assumes that network operators are planning to run a data-only network with no legacy equipment that requires an SDH network.

Figure 13. Ethernet service using RPR: the same network as Figure 10, with card C as the core virtual Ethernet switch and edge virtual Ethernet switches locally switching traffic between sites 2 and 3, 4 and 5, and 6 and 7. Virtually Concatenated Groups (VCGs) and Label Switched Paths (LSPs) interconnect the cards.


For network operators who already have an SDH network, the new next-generation features make RPR a hard sell. RPR-over-SDH was a good idea when RPR development first began, because LCAS and virtual concatenation did not then exist.

Conclusion

For network operators who already operate an SDH network, adding Ethernet services to existing SDH network elements is now a reality. A powerful combination of virtual concatenation, LCAS, GFP and MPLS, all packed into a single Ethernet services card, is a compellingly simple solution.

Adding RPR to the equation offers no value unless SDH can be removed from the picture entirely, and that does not seem plausible in the near future given the state of the world economy and the fact that most organisations are looking for ways to capitalise on the investments they have already made rather than buying all new equipment and building an overlay network. Therefore, the conclusions on each of the technologies are as follows:


How to reach us:


North America: Tellabs, One Tellabs Center, 1415 West Diehl Road, Naperville, IL 60563, U.S.A. +1.630.378.8800, Fax: +1.630.798.2000
Asia Pacific: Tellabs, 9 Temasek Boulevard, #43-02 Suntec Tower Two, Singapore 038989, Republic of Singapore. +65.6336.7611, Fax: +65.6336.7622
Europe, Middle East & Africa: Tellabs, Abbey Place, 24-28 Easton Street, High Wycombe, Bucks, United Kingdom HP11 1NT. +44.870.238.4700, Fax: +44.870.238.4851
Latin America & Caribbean: Tellabs, 13800 North West 14th Street, Sunrise, FL 33323, U.S.A. +1.954.839.2800, Fax: +1.954.839.2828

- Use SDH as Layer 1, which is already deployed in most carriers' networks.
- Use standard 10/100/1000 Ethernet to interface with the subscriber.
- Use GFP rather than X.86 to interface Ethernet (asynchronous) with SDH (synchronous).
- Use virtual concatenation to provide physical connectivity growth in 2 Mbit/s increments.
- Use LCAS for protection services that direct traffic only onto the working channels and stop traffic from flowing on failed channels. LCAS also offers the benefit of not wasting bandwidth on protection channels.
- Use a hybrid card that plugs directly into the SDH network element and provides virtual 802.1D/Q/P switching services with a separate logical Ethernet switched network per subscriber.

- Use MPLS to partition subscribers' traffic, offer virtual end-to-end services and meet Service Level Agreements involving best-effort, assured forwarding and expedited forwarding over a statistically multiplexed backbone.

Therefore the recommended protocol stack is as follows: the subscriber sends Ethernet frames, with or without VLAN tagging, which are encapsulated within MPLS frames, which are encapsulated within GFP frames, which are load-balanced across multiple virtually concatenated channels using virtual concatenation. LCAS provides protection services in case a particular channel fails by forcing traffic to flow only on the remaining working channels. SDH provides the Layer 1 connectivity, similar to plumbing between water fixtures.
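As a purely illustrative summary of that stack (real virtual concatenation byte-interleaves the GFP stream across the group members; this only shows the nesting order):

```python
# The recommended encapsulation order, from the subscriber's frame down to SDH.
STACK = [
    "Ethernet frame (with or without 802.1Q VLAN tag)",
    "MPLS label (per-subscriber LSP, EXP bits for service level)",
    "GFP encapsulation (rate adaptation and frame delineation)",
    "Virtually concatenated group, e.g. VC-12-Xv (LCAS removes failed members)",
    "SDH Layer 1 transport",
]

for depth, layer in enumerate(STACK):
    print("  " * depth + layer)
```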

The following trademarks and service marks are owned by Tellabs Operations, Inc., or its affiliates in the United States and/or other countries: TELLABS, TELLABS and T symbol, and T symbol .

Any other company or product names may be trademarks of their respective companies.

2003 Tellabs. All rights reserved. 74.1412E Rev. A 10/03

