
Packet-Optical the Transmode Way

Introduction

Packet-Optical the Transmode Way has been written by Transmode to help customers, prospects, partners or anyone else who needs a better understanding of the packet-optical world. It is intended to accompany WDM the Transmode Way, which covers all aspects of Layer 1 optical networking. This book focuses on the integration of higher layer functionality into optical systems to create Packet-Optical Transport Systems (P-OTS), i.e. packet-optical networks.

Optical fiber provides almost loss-less transmission of signals at an ultra-wide range of frequencies. Packet switching, implemented according to the Ethernet family of protocols, offers one of the most efficient ways of sorting and directing streams of digital data. With packet-optical networking these two outstanding technologies are positioned to dominate the next generation of transport networks.

The term P-OTS is widely used within the industry and has been recycled from an earlier definition (Plain Old Telephone Service) to cover a range of solutions and networks with varying degrees of capabilities and functionality. Transmode defines P-OTS devices and networks using the classifications developed by Infonetics Research. We have therefore asked Andrew Schmitt, Principal Analyst at Infonetics Research, to give us a short introduction to packet-optical technology and the specific definitions they use in Chapter 1.

Packet-optical integration has some great advantages in terms of cost and service differentiation. Transmode's Native Packet Optical 2.0 architecture takes this one step further, and its benefits in terms of reduced equipment and operational cost, key capabilities such as latency and sync, and simplified operations are outlined in Chapter 2. Chapter 3 then takes the reader further into how these values are leveraged by various applications, such as Business Ethernet, Mobile Backhaul and CableTV Backhaul.

For those wanting a better understanding of the various Layer 2 Ethernet technologies, Chapter 4 includes a description of how these function and how they are leveraged in wide area networks.

Packet-Optical the Transmode Way has been written to enable readers to use the book as needed to research a particular subject or to read the complete volume from end to end. Either way, we hope you find the book informative and useful.

As with the accompanying book WDM the Transmode Way, the descriptions in this book are kept as product release independent as possible. Current product details of the TM-Series, the multi-layer management suite Enlighten™, and other parts of Transmode's product portfolio are available at:

www.transmode.com

Unique features of Transmode’s packet-optical solutions are highlighted with this marker throughout the text.

The information included is subject to change without further notice. All statements, information and recommendations are believed to be accurate but are presented without warranty of any kind.

Content

Introduction

1. An Overview of the Packet-Optical Market by Andrew Schmitt, Infonetics Research

2. Packet-Optical Networking
  2.1 Chapter summary
  2.2 The principles of packet-optical integration
    2.2.1 Why aggregate traffic at Layer 2?
    2.2.2 Ethernet transport at Layer 2 versus Layer 1
    2.2.3 Native Ethernet and ODU2e framing
    2.2.4 MPLS-TP for traffic engineering and service scalability
  2.3 A packet-optical architecture optimized for transport
  2.4 The main elements of a Transmode packet-optical transport network
    2.4.1 Ethernet Demarcation Units (EDU) and Network Interface Devices (NID)
    2.4.2 Ethernet Muxponders (EMXP)
    2.4.3 Optical add/drop multiplexers (OADM, ROADM) and other optical elements
    2.4.4 The multi-layer service management system
  2.5 Advantages of packet-optical transport
    2.5.1 Benefits of the packet-optical approach
    2.5.2 Advantages of Transmode's Native Packet Optical 2.0 architecture
  2.6 Migrating legacy TDM services to Ethernet
    2.6.1 One common infrastructure for Ethernet and legacy TDM services
    2.6.2 Using the iSFP to convert SDH/SONET services for Ethernet transport
  2.7 Multi-layer network management
    2.7.1 Transmode's multi-layer management suite Enlighten
    2.7.2 Network management principles
    2.7.3 A unified information model for multi-layer management
    2.7.4 Layer 2 Service Provisioning
    2.7.5 Layer 2 Service Assurance
  2.8 Software Defined Networking (SDN) and Network Virtualization

3. Applications of Packet-Optical Networking
  3.1 Chapter summary
  3.2 Ethernet services for enterprises – Business Ethernet
    3.2.1 Serving enterprise customers
    3.2.2 A network for Business Ethernet
  3.3 Aggregation of IP traffic – IP backhaul
    3.3.1 IP based services over a common infrastructure
    3.3.2 A lean and transport centric aggregation network
    3.3.3 The flexible optical network brings scalability
  3.4 Mobile backhaul
    3.4.1 3G and 4G/LTE place new requirements on mobile backhaul
    3.4.2 A backhaul network optimized for 3G and 4G/LTE
  3.5 Switched video transport
    3.5.1 Streaming 3D and HD video to the home
    3.5.2 Transmode's solution for switched video transport
  3.6 Data center interconnect and cloud computing

4. Ethernet and Layer 2 Technologies
  4.1 Chapter summary
  4.2 Ethernet basics
    4.2.1 Ethernet mode of operation
    4.2.2 Virtual LANs
    4.2.3 Ethernet physical media (PHY)
  4.3 Synchronization and circuit emulation services over Ethernet
    4.3.1 Synchronous and asynchronous transport
    4.3.2 Synchronization standards
  4.4 Ethernet protection
    4.4.1 Link aggregation (LAG)
    4.4.2 Ethernet ring protection switching (ERPS)
  4.5 Carrier Ethernet architecture and services
    4.5.1 Carrier Ethernet: Ethernet as a transport service
    4.5.2 The Carrier Ethernet architecture and terminology
    4.5.3 Carrier Ethernet 2.0 Services
    4.5.4 Carrier Ethernet Service Attributes
  4.6 Carrier Ethernet traffic management
    4.6.1 Bandwidth profiles
    4.6.2 Class of Service (CoS) and Service Level Agreements (SLA)
    4.6.3 Traffic shaping
  4.7 Carrier Ethernet Operations, Administration and Maintenance (Ethernet OAM)
    4.7.1 The management framework
    4.7.2 Standards for Ethernet OAM
    4.7.3 The service lifecycle
    4.7.4 Ethernet Service OAM – Performance and fault management

Summary

Index

1. An Overview of the Packet-Optical Market

By Andrew Schmitt, Infonetics Research

Optical technology has surged forward in recent years with the move to higher speed coherent optics, more optical flexibility with the wide-scale adoption of ROADM technology and the integration of Ethernet functionality from higher layers in the OSI stack into the optical layer, which we refer to as Packet-Optical integration. Of these, the move to packet-optical solutions is perhaps the most confusing, as the phrase means different things to different parties, be they vendors, operators, analysts or the media. The variation stems from individual experiences with the wide range of solutions on the market.

Infonetics formed its definition of packet-optical largely through dialogue with service providers from many regions and markets. In a nutshell, packet-optical integration encompasses a range of systems that support a combination of optical and packet/Ethernet technology. After many years of tracking this evolving market, and asking carriers specifically how they define packet-optical, we recently modified and updated our definitions of the P-OTS market to match the latest trends in the industry, and we split the industry into two distinct sub-segments: metro-edge P-OTS and metro-core P-OTS.

Metro-edge P-OTS are systems aimed at applications toward the edge of an optical network. These systems are WDM-based optical networking platforms with integrated Ethernet switching. They have varying degrees of support for Layer 1 technologies such as SDH/SONET and OTN, but must also support Layer 2 Ethernet functionality. This Ethernet functionality, which will be explained in full later in this book, includes a minimum level of functionality that must be supported to be classified within the metro-edge P-OTS category: provisioning, managing, verifying and protecting Layer 2 Ethernet services using a defined group of standardized protocols and procedures. Of course, many systems in the market will support a much wider level of functionality than this minimum requirement. There are also some systems that do not meet the required level, but are still marketed as packet-optical systems.

Metro-core P-OTS are similar systems that support all the functionality required in a metro-edge P-OTS system but also support applications within the core of a network. As such, they are typically physically larger systems that support a larger capacity, but must also support switching across the whole chassis or node from any port to any other port, rather than just between ports on a single plug-in unit, as is common in metro-edge P-OTS platforms. These metro-core P-OTS platforms must therefore support centralized switching fabrics for Ethernet traffic and a centralized SDH/SONET and/or OTN switch. They must also support fully integrated ROADM-based optical switching and a single control plane.

These definitions allow us to track the progress of the industry with a clear demarcation between the different systems that are closer to the edge of the network and those deeper in the network and closer to the core. Those at the edge often have features that are very application specific, such as Ethernet synchronization schemes for mobile backhaul networks, and can require this functionality in both compact edge nodes and larger aggregation nodes. These metro-edge P-OTS systems also need to pay special attention to the service demarcation and provisioning aspects of running a network, and therefore support of standards like those defined by the Metro Ethernet Forum (MEF) is particularly important.

Those systems at the core of a network typically deal with traffic from many applications and as such require less service awareness, but greater capacity to handle traffic and a higher degree of Layer 1 transport capability over Layer 2 Ethernet, as shown below.

Figure 1. The future of transport and switching. Source: Infonetics Research.

While Transmode's product portfolio supports some of the metro-core P-OTS functionality such as integrated ROADM technology, the functionality of the company's Native Packet Optical 2.0 architecture currently falls clearly into the metro-edge P-OTS category. The terms Packet-Optical and P-OTS in this book cover both the metro-edge and metro-core P-OTS definitions from Infonetics Research. The use of these terms implies that the strict minimum functionality levels are achieved within the packet-optical transport system.

The move to P-OTS networks allows operators to capitalize on Ethernet as a single standard service protocol and to build single networks capable of supporting many parallel applications over the same infrastructure. It has the potential to play a significant role in enabling operators to address the challenges they face as they evolve their networks, particularly as they look to add support for Software Defined Networking (SDN) features.

We at Infonetics Research see the evolution of P-OTS systems and networks as a very important aspect of the overall evolution of optical networking. This area is one of the fastest growing segments of the industry and has a large part to play in supporting the networks of today and tomorrow. Exciting times are ahead!

Andrew Schmitt
Principal Analyst, Optical
Infonetics Research

2. Packet-Optical Networking

2.1 Chapter summary

Optical fiber provides almost loss-less transmission of signals at an ultra-wide range of frequencies. Packet switching, implemented according to the Ethernet family of protocols, offers one of the most efficient ways of sorting and directing streams of digital data. Packet-optical networking fundamentally addresses how to leverage the outstanding characteristics of these two technologies to implement the next generation of telecommunication networks.

The scalability and cost effectiveness of Ethernet have made it the unifying service protocol for modern wide area networking. Increasingly, the consolidation of the optical and Ethernet/IP transport infrastructure within the same network elements has become the means to drive down both network investment costs and the associated operational costs. The additional support of label switching mechanisms (MPLS-TP¹) is an extra tool kit to complement Ethernet and to enhance the transport capabilities and scalability of the network. Supervised by a multi-layer management system that integrates the handling of OSI Layer 1 optical channels and Layer 2 Ethernet services, a flexible, cost efficient and future proof telecommunications infrastructure is here and ready to be deployed.

1 Multiprotocol Label Switching – Transport Profile.

This chapter deals with the principles of transporting Ethernet traffic over WDM networks and describes how these two key technologies are integrated by a unifying architecture – Native Packet Optical 2.0. It also highlights the advantages of the architecture, including how traffic is transported with minimal delay and without loss of synchronization. The chapter ends with a section on network management and approaches to Software Defined Networking (SDN).

2.2 The principles of packet-optical integration

2.2.1 Why aggregate traffic at Layer 2?

The introduction of Layer 2, i.e. Ethernet, aggregation brings several benefits to network operators. Traditionally, metro aggregation networks were implemented by WDM equipment attached to other equipment such as DSLAMs and mobile base stations or to enterprise networks. The traffic from these was transported to an IP core network directly, i.e. at Layer 1, using fibers in ring or star topologies. As the number of end points grew, the central IP core network had to be extended, requiring more IP routers at more sites and with more ports, as indicated on the left side of Figure 2.

Figure 2. Migrating from Layer 1 to Layer 2 aggregation of traffic.

In this situation, there are several reasons to introduce a Layer 2 aggregation network:

• The IP core network can be reduced in size to only a few central routers instead of being spread out throughout the full metro network area. Layer 2 aggregation equipment is normally less costly², consumes less power, has lower latency and requires less expertise to configure than IP routers. This centralization of the IP core network reduces the necessary investment and the operational costs.

• Layer 2 aggregation can perform statistical multiplexing of data traffic, so the WDM channels of the underlying optical network can be used much more efficiently than if only Layer 1 aggregation were used. Statistical multiplexing allows the bandwidth to be divided arbitrarily among a variable number of users, in contrast to Layer 1 aggregation (time or frequency multiplexing), where the number of users and their data rates are fixed. Statistical multiplexing makes use of the fact that the information rate from each source varies over time and that bandwidth on the optical path only needs to be consumed when there is actual information to send.

2 Analysts estimate that Layer 2 equipment costs only 30–50% as much as Layer 3 equipment.

• Since the traffic is concentrated at Layer 2 in the aggregation network, it can be handed over to the IP core routers via a few high speed interfaces rather than over many lower speed interfaces. This simplifies administration and contributes to a lower cost per handled bit.

• As an additional benefit, the aggregation network itself can be used to offer services within the metro/regional area. For example, point to point Ethernet connections can be provided between offices in a city center without loading any central router nodes. Such direct connectivity enables more efficient traffic handling and reduced forwarding delay compared to using the central IP routers.

Figure 3. The statistical multiplexing of Ethernet is leveraged to "fill the pipes", a feature which is especially useful at the edge of the network, where traffic is often more variable.
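The statistical multiplexing argument can be illustrated with a short simulation. The sketch below uses made-up numbers purely for illustration (it does not model any Transmode product): twenty bursty 1 Gbit/s sources share one 10 Gbit/s trunk. Pure Layer 1 aggregation would have to reserve the full 20 Gbit/s; statistical multiplexing only has to carry the combined average load, at the price of a small, quantifiable risk of momentary contention.

```python
import random

random.seed(7)

# Illustrative numbers only: 20 bursty gigabit sources, each active 20% of
# the time, share one 10 Gbit/s trunk. Pure Layer 1 (TDM) aggregation would
# reserve 20 x 1 Gbit/s; statistical multiplexing rides on the average load.
SOURCES, LINE_RATE, TRUNK_RATE, ACTIVITY = 20, 1.0, 10.0, 0.2
SLOTS = 100_000                     # independent time slots simulated

overflow_slots = 0
total_offered = 0.0
for _ in range(SLOTS):
    # Traffic offered in this slot: every currently active source bursts
    # at its full line rate.
    active = sum(1 for _ in range(SOURCES) if random.random() < ACTIVITY)
    offered = active * LINE_RATE
    total_offered += offered
    if offered > TRUNK_RATE:        # frames must be buffered or dropped
        overflow_slots += 1

print(f"average load: {total_offered / SLOTS:.2f} of {TRUNK_RATE} Gbit/s trunk")
print(f"slots exceeding trunk capacity: {100 * overflow_slots / SLOTS:.3f}%")
```

With these hypothetical activity figures the trunk runs at roughly 40% average load and is overrun only in a tiny fraction of slots – the momentary contention that buffering, shaping and policing exist to absorb.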

2.2.2 Ethernet transport at Layer 2 versus Layer 1

Given the benefits of a Layer 2 aggregation network, it is important to understand how such a network differs from a traditional network with Layer 1 aggregation of Ethernet traffic over WDM.

Transporting Ethernet traffic between two remote sites with WDM as the underlying bearer technology can be done in two fundamentally different ways:

• Using transparent transport of Ethernet frames over a WDM channel, i.e. Layer 1 (optical) transport.

• Using an intermediate Carrier Ethernet Network³ that in turn gets its frames transported over one or more WDM channels, i.e. Layer 2 (Carrier Ethernet) transport. This is the technology used in a Layer 2 aggregation network.

Both alternatives have advantages and disadvantages.

Figure 4. Layer 1 transport: Ethernet traffic is carried transparently at Layer 1 over a WDM wavelength.

3 See section 4.5 for more information about Carrier Ethernet as defined by Metro Ethernet Forum (MEF).

A basic Layer 1 Ethernet transport solution takes every incoming frame from the sending customer Ethernet network and puts it into a digital wrapper adapted for transmission over the WDM channel. At the receiving end, the wrapper is removed and the original frame is handed over to the customer Ethernet network. In this way, every single frame is forwarded without modification between the two customer networks.

The Layer 1 transport solution provides a transparent path between the two customer Ethernet networks, giving the highest possible Quality of Service (QoS) in terms of latency, latency variation and packet loss. A Layer 1 network is also fully deterministic and provides 100% throughput regardless of which services are carried by the Ethernet traffic. But since a Layer 1 network is totally transparent to the Ethernet traffic, it is also unaware of any Ethernet service information and can only manipulate the traffic at Layer 1.

Figure 5. Layer 2 transport: Ethernet traffic is transported via a Service VLAN in a Carrier Ethernet Network extending between operator sites.

In a Layer 2 transport network the customer Ethernet networks are interconnected via an intermediate Ethernet network, the Carrier Ethernet Network. A Service VLAN (SVLAN) or an MPLS-TP pseudowire in the Carrier Ethernet Network is used to keep the traffic from each set of customer Ethernet networks separated, i.e. the SVLAN/pseudowire establishes the connectivity between the customer Ethernet networks belonging to the same subscriber. The frames of the Carrier Ethernet SVLAN/pseudowire are transported over channels of the WDM network, just as before.
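The SVLAN mechanism itself is a small amount of header manipulation. The toy sketch below (hypothetical helper names; real equipment does this in hardware at line rate) pushes and pops an IEEE 802.1ad service tag (TPID 0x88A8) on a raw Ethernet frame, leaving any customer C-tag (TPID 0x8100) untouched inside:

```python
import struct

def add_svlan(frame: bytes, svid: int, pcp: int = 0) -> bytes:
    """Push an IEEE 802.1ad S-tag (TPID 0x88A8) after the MAC addresses.

    `frame` is a raw Ethernet frame: 6 B dst MAC + 6 B src MAC + payload
    (which may itself start with a customer C-tag, TPID 0x8100).
    """
    tci = (pcp << 13) | (svid & 0x0FFF)          # PCP(3) | DEI(1)=0 | VID(12)
    stag = struct.pack("!HH", 0x88A8, tci)
    return frame[:12] + stag + frame[12:]

def pop_svlan(frame: bytes) -> tuple[int, bytes]:
    """Strip the outer S-tag and return (S-VID, original frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == 0x88A8, "no S-tag present"
    return tci & 0x0FFF, frame[:12] + frame[16:]

# A customer frame with a C-tag (VID 10) carrying a dummy payload:
cust = bytes(6) + bytes(6) + struct.pack("!HH", 0x8100, 10) + b"payload"
tagged = add_svlan(cust, svid=100, pcp=5)   # ingress to the carrier network
svid, restored = pop_svlan(tagged)          # egress: S-tag removed
print(svid, restored == cust)               # → 100 True
```

The S-tag carries both the service identity (the 12-bit S-VID) and the priority bits (PCP) that the carrier network uses for class-of-service handling, which is why a single tag push at the ingress is enough to keep subscribers separated end to end.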

A plain Layer 1 transport solution cannot concentrate the Ethernet traffic being aggregated, which may result in low utilization of the WDM wavelengths. For example, a Layer 1 network collecting Gigabit Ethernet signals that are utilized to a very low extent will still carry them as if they were 100% loaded. This may lead to unnecessary investment in Layer 1 equipment for additional wavelengths in the transport network.

In the Layer 2 transport solution the incoming customer Ethernet frames are analyzed and acted upon by the equipment located at the ingress point of the Carrier Ethernet Network, before being forwarded. It is possible to concentrate the incoming flow of Ethernet frames by statistical multiplexing and by applying shaping and policing to the Ethernet traffic. The Layer 2 solution can be made fully "Ethernet service aware" and analyze and act upon Layer 2 traffic.

Multiple Ethernet signals are TDM-multiplexed in a Layer 1 solution. The aggregated signal is not a standard Ethernet signal and consequently de-multiplexing must be done before the original signals are handed over to a switch/router. A Layer 2 solution performs aggregation to a new native Ethernet signal that can be handed over without any need for de-multiplexing. A Layer 2 solution will normally require a lower port density on the receiving switch/router, which may have a beneficial impact on the cost of that equipment.

Since a Layer 2 network can use policing to separate the effective data rate offered to a subscriber from the actual line rate available on the access line, a Layer 2 network can offer a more flexible and granular set of transport services than a Layer 1 network. While a Layer 1 network typically only provides services at the standard Ethernet line rates, such as 100 Mbit/s or 1 Gbit/s, a Layer 2 network may offer much more flexibility, such as 25 Mbit/s, 200 Mbit/s or 400 Mbit/s transport services over a physical 1 Gbit/s port.
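Such policing is typically built from token buckets. The sketch below is a deliberately minimal single-rate, two-color policer (a full MEF bandwidth profile, covered in Chapter 4, also adds an excess rate and a yellow color): it lets a 1 Gbit/s access line burst briefly, then throttles the flow to a committed 200 Mbit/s.

```python
class TokenBucketPolicer:
    """Minimal single-rate, two-color policer sketch.

    cir_bps: Committed Information Rate in bits/s
    cbs_bits: Committed Burst Size, the bucket depth in bits
    """
    def __init__(self, cir_bps: float, cbs_bits: float):
        self.cir = cir_bps
        self.cbs = cbs_bits
        self.tokens = cbs_bits      # bucket starts full
        self.last = 0.0             # arrival time of previous frame, seconds

    def offer(self, t: float, frame_bits: int) -> bool:
        """Return True (green, forward) or False (red, drop) for a frame
        of frame_bits arriving at time t."""
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.cbs, self.tokens + (t - self.last) * self.cir)
        self.last = t
        if frame_bits <= self.tokens:
            self.tokens -= frame_bits
            return True
        return False

# Police a 1 Gbit/s access line down to a 200 Mbit/s service:
pol = TokenBucketPolicer(cir_bps=200e6, cbs_bits=100_000)
# Back-to-back 12 kbit frames at full 1 Gbit/s line rate arrive every 12 us;
# after an initial burst, only roughly one frame in five stays green.
results = [pol.offer(t=n * 12e-6, frame_bits=12_000) for n in range(50)]
print(f"forwarded {sum(results)} of {len(results)} frames")   # → 18 of 50
```

The long-run acceptance ratio converges on CIR divided by the line rate (200 M / 1 G = 1 in 5), while the bucket depth CBS sets how large a burst is tolerated before frames go red.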

The Layer 2 network is intrinsically less deterministic than a Layer 1 transport solution. The throughput of a Layer 2 network may suddenly change due to the introduction of a new service or to changed traffic conditions. However, the Layer 2 network can be made to behave in a deterministic way by the use of predefined capacity reservations, i.e. by traffic engineering.

The following table summarizes some of the differences between a Layer 1 and a Layer 2 network for wide area Ethernet transport.

Layer 1 Ethernet transport: TDM multiplexing – collects and delivers traffic at the same format and data rate.
Layer 2 Ethernet transport: Statistical multiplexing – collects traffic at one data rate and can deliver input from many sources over one interface at a higher rate.

Layer 1: Fixed data rates, typically 100 Mbit/s, 1 GbE and 10 GbE.
Layer 2: Flexible selection of data rates; allocation of bandwidth by policing and shaping of traffic.

Layer 1: Deterministic – "what goes in comes out."
Layer 2: Relies on traffic engineering to achieve deterministic behavior.

Layer 1: Lowest delay, jitter and packet loss.
Layer 2: Statistical multiplexing implies a risk of delay, jitter and lost packets.

Layer 1: Embedded management channels via overhead bytes in the line signal wrapper.
Layer 2: Embedded management channels via a separate management VLAN at Layer 2.

Figure 6. Characteristics of Layer 1 and Layer 2 Ethernet transport.

The Transmode packet-optical offering provides the user of the TM-Series with three principal alternatives for transport of Ethernet traffic:

• Plain Layer 1 transport, i.e. transponders and muxponders that provide 100% transparent Ethernet transport at OSI Layer 1.

• Ethernet-aware Layer 1 transport, i.e. muxponders that provide 100% transparent transport but have Layer 2 features, such as reporting the extent to which a Gigabit Ethernet connection is utilized. This enables the network operator to analyze wavelength utilization and avoid unnecessary investment in transponders/muxponders to launch additional wavelengths. Another unique Transmode feature is the ability to inject/extract management VLANs on a Gigabit Ethernet client port. This enables direct remote management of Layer 2 devices via the same DCN⁴ solution as for the optical transport equipment.

• Full Layer 2 transport according to the Carrier Ethernet specifications by the Metro Ethernet Forum (CE 2.0 from MEF), i.e. equipment providing aggregation and concentration of Ethernet and other traffic with a selected set of Layer 2 functions that support the transport task. Such functions are, for example, IEEE 802.3ad link aggregation, traffic shaping and policing, and bandwidth profiles with guaranteed bandwidth allocation. The Transmode Layer 2 network elements – the Ethernet Muxponders (EMXP) – are also Layer 1 aware, meaning that they can be connected directly to a WDM link and support features such as Forward Error Correction (FEC) at the optical layer.

4 Data Communications Network, used for management and control of the network equipment.

All the above mentioned units – transponders, muxponders and Ethernet Muxponders – are plug-in units that can be inserted in any combination in the different chassis options provided by the Transmode TM-Series platform. As an example, Layer 2 capable Ethernet Muxponders can be used at the edge of the network to collect and aggregate Ethernet traffic and hand it over to Layer 1 transponders or muxponders that provide continued transport with the highest cost-efficiency and Quality of Service.

2.2.3 Native Ethernet and ODU2e framing

The Carrier Ethernet Network of a Layer 2 transport network is built on the WDM optical channels, i.e. the frames of the Carrier Ethernet Network are transported by the underlying WDM optical system. The Ethernet payload data can be packaged in the transport containers of the optical system in several different ways.

The legacy optical transport systems SDH and SONET incorporate adaptation methods such as the Generic Framing Procedure (GFP) to allow for packet data transport. The Optical Transport Network (OTN) standard is a more recent digital wrapper technology that wraps any client signal, including Ethernet frames, in overhead information for operations, administration and management of the optical links.

The Transmode Native Packet Optical 2.0 architecture supports two types of encapsulation of the Carrier Ethernet traffic for WDM transport:

• Native Ethernet framing: The frames of the Carrier Ethernet Network are transported as is, i.e. using the same framing as on an ordinary LAN, when forwarded over the WDM wavelength. This is the standard encapsulation recommended by Transmode for metro/aggregation networks, since it allows each intermediate node to examine and manipulate the Ethernet control information, i.e. to perform statistical multiplexing and differentiation of services at the Ethernet level.

• ODU2e framing according to the OTN standard: The frames of the Carrier Ethernet Network are carried over the WDM wavelength by Optical Channel Data Units (ODU) according to the OTN standard. This encapsulation is especially favorable when the Carrier Ethernet Network extends over longer distances or the data is to be transported via an intermediate core OTN network. Using ODU2e framing between the Ethernet Muxponders allows use of the inherent OTN Forward Error Correction (FEC) mechanisms and optical path monitoring bits, which are important for long reach links. And since the encapsulation is in ODU2e format, these data units can also easily traverse any intermediate OTN switches transparently before reaching their final destination.

Figure 7. ODU2e and native Ethernet framing. The two-colored bars symbolize Ethernet frames with the control information in red.

Native Ethernet framing means that standard Ethernet framing is applied to the data payload at the edge of the network, instead of encapsulating it with an OTN or other digital wrapper. By treating the Ethernet packets natively, it is possible to inspect them within the intermediate network nodes and to act upon the Ethernet headers, so that the combined benefits of Layer 2 intelligence and efficient Layer 1 transport can be realized. This becomes especially important at the edge of the network, where decisions about traffic prioritization are made and where traffic is aggregated to fill the pipes. The wrapping of traffic into full OTN can then be done at the handover to the core network, after aggregated, correctly shaped pipes of traffic have been created, avoiding wasted bandwidth.

Native Ethernet framing uses the VLAN tag or MPLS-TP labels to switch the frames to ports associated with either IP services or with transport services. Each of these service domains is optimized and simplified for the particular service types. For example, frames containing data for high value and high quality IP services (IP-MPLS, IP-VPN or VPLS) can be switched to paths for transport to the necessary IP devices. By contrast, frames that are destined for transport services (Ethernet, MPLS-TP or OTN) can be kept within the optical transport network, with minimal use of expensive Ethernet switching and IP routing resources.
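That forwarding decision can be sketched in a few lines. The example below is illustrative only – the table contents, port names and label values are hypothetical: it reads the outermost tag after the MAC addresses, either a VLAN TCI or the top MPLS label, and maps it to an egress service domain.

```python
import struct

# Hypothetical forwarding tables: S-VID or MPLS-TP label -> egress domain.
SVLAN_MAP = {100: "ip-services-port", 200: "transport-port"}
MPLS_MAP = {16010: "transport-port"}

def classify(frame: bytes) -> str:
    """Pick an egress domain from the outermost tag after the MAC addresses."""
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype in (0x88A8, 0x8100):            # S-tag or C-tag
        (tci,) = struct.unpack("!H", frame[14:16])
        return SVLAN_MAP.get(tci & 0x0FFF, "drop")
    if ethertype == 0x8847:                      # MPLS unicast
        (entry,) = struct.unpack("!I", frame[14:18])
        return MPLS_MAP.get(entry >> 12, "drop") # label = top 20 bits
    return "drop"

# An S-tagged frame (S-VID 200, priority 5) lands in the transport domain:
frame = bytes(12) + struct.pack("!HH", 0x88A8, (5 << 13) | 200) + b"data"
print(classify(frame))   # → transport-port
```

The point of the sketch is that only the outermost header needs to be parsed: the payload stays opaque, so frames destined for transport services never need to touch the more expensive switching and routing resources.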

Figure 8. Service aware transport enables a differentiated service offering with multiple classes of services having different characteristics.

The Optical Transport Network (OTN) is a more recent addition to the standards for public telecommunications networks and is sometimes referred to by its ITU-T name, G.709. The standard was designed to transport both packet mode traffic, such as IP and Ethernet, and legacy SDH/SONET traffic over fiber optics with DWDM. It supports forward error correction (FEC) and management functions for monitoring a connection end-to-end over multiple optical transport segments. Today OTN has its main application in the long haul network, where error correction and interoperability between several operators' equipment are important.

OTN wraps any client signal in overhead information for operations, administration and management. The client signal to be transported is mapped into an Optical Channel Payload Unit (OPU). The OPU is then encapsulated into the basic unit of information transport by the protocol, the Optical Channel Data Unit (ODU), which is carried within an Optical Channel Transport Unit (OTU) defining the line rate of the connection.
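The rate arithmetic behind this hierarchy is simple enough to check by hand. The sketch below reproduces the nominal over-clocked ODU2e and OTU2e rates from the 10GBASE-R line rate using the standard multiplier fractions (239/237 for the ODU wrapper, 255/237 once the RS(255,239) FEC columns are added); the fractions are taken from ITU-T G.709 and G.Sup43, so treat the snippet as a cross-check rather than a normative definition.

```python
from fractions import Fraction

# Over-clocked ODU2e/OTU2e rates (G.Sup43) are fixed fractions of the
# 10GBASE-R line rate: the OTN frame is 4 rows x 4080 byte columns, with
# FEC in the last 256 columns, hence the 239/237 and 255/237 multipliers.
TEN_GE = Fraction(103125, 10000)          # 10.3125 Gbit/s, 10GE LAN PHY

odu2e = TEN_GE * Fraction(239, 237)       # client wrapped with ODU overhead
otu2e = TEN_GE * Fraction(255, 237)       # plus the RS(255,239) FEC columns

print(f"ODU2e ≈ {float(odu2e):.4f} Gbit/s")   # ≈ 10.3995, the "10.4 G" rate
print(f"OTU2e ≈ {float(otu2e):.4f} Gbit/s")   # ≈ 11.0957 on the line
```

The ODU2e figure is the approximately 10.4 Gbit/s rate cited for 10 Gigabit Ethernet transport; the OTU2e figure shows the price of the FEC columns on the line side.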

Figure 9. The OTN signal structure and terminology. The Carrier Ethernet frame is carried as the payload of an Optical Channel Payload Unit (OPU).

Transmode's Ethernet Muxponders, the EMXP family, have the necessary framing capacity for ODU2e⁵, including optional G.709 FEC on all 10G ports. The optional ODU2e framing on 10G ports allows the native 10G Ethernet frame to be mapped into an ODU2e data unit ready for transport into the OTN core and large OTN switches. This is most useful once the traffic has been aggregated as much as possible to ensure the best possible utilization of the 10G circuit.

An OTN core network can also provide a unified transport layer, where core nodes can combine traffic from OTN muxponder based Layer 1 services with EMXP based ODU2e framed Ethernet services from the Native Packet Optical 2.0 architecture. End-to-end performance monitoring is achievable even over multiple carrier networks, through OTN tandem connection monitoring and Carrier Ethernet's inherent Operations, Administration and Maintenance (OAM) capabilities.

In summary: in the access and aggregation parts of the transport network where service granularity is required, a service aware packet-optical mechanism is beneficial to support different QoS. Also, access to Ethernet OAM bytes and service tags enables end-to-end management of Ethernet services. Using native Ethernet framing offers benefits from revenue generation, investment and operational perspectives in these parts of the network. On the other hand, OTN has all the benefits of a long haul optical transport network once the traffic has been sufficiently aggregated and traverses the core part of the network. ODU2e framing has its main value in the long haul and core parts of the network.

Native Ethernet framing:

- Ethernet frames are transported natively over the WDM channels with minimum extra overhead
- The service information contained in the Ethernet frame can be accessed at every node of the network. This allows for statistical multiplexing and service differentiation at intermediate network nodes
- No Forward Error Correction without an additional Layer 1 Transponder
- Especially suited in metro and regional networks where traffic is aggregated and service differentiation is applied

ODU2e framing according to OTN:

- Ethernet frames are transported in Optical Channel Payload Units (OPU) within ODU2e frames and OTU containers according to the OTN standard
- The Ethernet frame is wrapped inside the OPU/ODU and cannot be read and acted upon without de-multiplexing
- Includes Forward Error Correction and optical path monitoring mechanisms which are important for long distance links
- Especially suited in core and long distance networks where traffic has been aggregated into larger streams

Figure 10. Some characteristics of native Ethernet framing and ODU2e framing.

5 ODU2e is an OTN Optical Channel Data Unit specifically designed for transport of 10 Gigabit Ethernet and 10 Gigabit Fibre Channel (10GFC) signals at a data rate of 10.4 Gbit/s.

2.2.4 MPLS-TP for traffic engineering and service scalability

While Carrier Ethernet Networks and the use of Service VLANs (SVLAN) bring great advantages to the packet-optical network, there are some limitations in terms of protection options, traffic engineering and service scalability. These can be addressed by the use of MPLS-TP, which is the Transport Profile of Multiprotocol Label Switching (MPLS).

MPLS-TP is a way to simplify Carrier Ethernet services by pre-defining connection oriented services over packet-based networking technologies in a way that supports traditional transport operational models. It builds on MPLS concepts, adding more flexibility and network manageability than the basic Ethernet SVLAN architecture provides.

Figure 11. Using MPLS-TP to define label switched paths within the Carrier Ethernet Network.

PRINCIPLES OF MPLS-TP

Multiprotocol Label Switching (MPLS) is a technique that forwards packets based on labels, as opposed to a standard Carrier Ethernet network where the frames are switched based on their SVLAN tags and MAC addresses.
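The difference between the two forwarding methods can be shown with a toy lookup; the table contents below are invented purely for illustration:

```python
# Toy illustration of the two forwarding methods. An SVLAN switch keys its
# forwarding decision on the SVLAN tag and learned MAC addresses, while an
# MPLS-TP node simply swaps the incoming label for a pre-configured outgoing one.

# Carrier Ethernet: forwarding keyed on (SVLAN, destination MAC).
svlan_fdb = {(100, "00:11:22:33:44:55"): "port 2",
             (200, "66:77:88:99:aa:bb"): "port 3"}

# MPLS-TP: per-(in-port, in-label) swap to (out-port, out-label),
# configured in advance by the management system.
lsp_table = {("port 1", 17): ("port 2", 42),
             ("port 1", 18): ("port 3", 43)}

def forward_ethernet(svlan: int, dst_mac: str) -> str:
    return svlan_fdb[(svlan, dst_mac)]

def forward_mpls(in_port: str, in_label: int) -> tuple:
    return lsp_table[(in_port, in_label)]

assert forward_ethernet(100, "00:11:22:33:44:55") == "port 2"
assert forward_mpls("port 1", 17) == ("port 2", 42)   # label 17 swapped to 42
```

Note that the MPLS-TP lookup never inspects customer MAC addresses, which is also the basis of the MAC scalability advantage discussed later in this section.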

Figure 12. MPLS-TP Framework.

A Label Switched Path (LSP) is defined between nodes where traffic enters and leaves the MPLS-TP network. Using MPLS-TP terminology, the entry and exit nodes are referred to as MPLS-TP Provider Edge (PE) nodes and any intermediate nodes being passed by the LSP are referred to as MPLS-TP Provider (P) nodes. Often the physical node performing the PE function is called a Label Edge Router 6 (LER) and the intermediate transit node is called a Label Switching Router (LSR). Transmode’s Ethernet Muxponders can act as both an LER and an LSR, and can also combine these roles.

6 Note that the use of the term “router” is historic and neither requires nor precludes the ability to perform IP forwarding. It is sometimes used instead of “node” in MPLS context.

Figure 13. MPLS-TP Tunnel.

An MPLS-TP tunnel is a pre-defined MPLS-TP transport path from the source LER to the destination LER. The MPLS-TP tunnel always has an active LSP that defines the primary and working path. It may also have a protect LSP which defines a recovery path.

Both the tunnel and the LSP can be envisaged as pre-defined circuits for information to follow through the network, and consequently tunnels and LSPs are configured in advance from the network management system. A key feature of MPLS-TP, which distinguishes it from classic IP MPLS, is in fact that management and protection are designed to operate without a dynamic control plane, i.e. similar to a traditional SDH/SONET network, where circuits are set up by the management system.

The actual data traffic is carried by a pseudowire (PW) inside the LSP/ tunnel. One MPLS-TP LSP may carry one or more pseudowires, i.e. the pseudowires offer a means for multiplexing of traffic.

Figure 14. Data is carried by a pseudowire defined within the MPLS-TP Tunnel and Label Switched Path (LSP).

A pseudowire is an emulation of a Layer 2, point-to-point, connection-oriented service over a packet-switching network (PSN), from Attachment Circuit (AC) to AC. The pseudowire used in MPLS-TP is a connection established between two MPLS-TP Label Edge Routers (LER) across the MPLS-TP tunnel/LSP with the Attachment Circuit frames encapsulated as MPLS data.

Figure 15. Ethernet over MPLS encapsulation.

A Layer 2 transport service is established between two Attachment Circuits and the service is carried by a pseudowire. The pseudowire travels through the network in an MPLS-TP tunnel. The MPLS-TP tunnel is in its turn mapped to at least one LSP, the active LSP.
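The containment relation just described (service carried by a pseudowire, pseudowire inside a tunnel, tunnel mapped to LSPs) can be sketched as a small data model. The class and field names here are illustrative only, not a real management API:

```python
# Illustrative data model of the MPLS-TP containment hierarchy:
# Pseudowire -> Tunnel -> LSP(s). Names are invented for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LSP:
    """A pre-configured label switched path between two LERs."""
    hops: List[str]
    role: str                 # "active" (working path) or "protect" (recovery)

@dataclass
class Tunnel:
    """An MPLS-TP tunnel: always one active LSP, optionally a protect LSP."""
    active: LSP
    protect: Optional[LSP] = None

@dataclass
class Pseudowire:
    """Emulated point-to-point Layer 2 service, AC to AC, inside a tunnel."""
    attachment_circuits: tuple
    tunnel: Tunnel

tunnel = Tunnel(active=LSP(["LER-A", "LSR-1", "LER-B"], "active"),
                protect=LSP(["LER-A", "LSR-2", "LER-B"], "protect"))
# One tunnel/LSP may carry several pseudowires, i.e. traffic multiplexing:
pw1 = Pseudowire(("AC-customer1-A", "AC-customer1-B"), tunnel)
pw2 = Pseudowire(("AC-customer2-A", "AC-customer2-B"), tunnel)
assert pw1.tunnel is pw2.tunnel
```

The key property is that both pseudowires share one tunnel object: routing and protection are decided once, at the tunnel level.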

Figure 16. Relation between pseudowire, tunnel and Label Switched Path (LSP).

The following diagram summarizes how transport in an MPLS-TP network relates to the OSI model. Note that Ethernet framing is present both at the link layer and the client service layer.

Figure 17. MPLS-TP OSI network layers. The two variants of the physical layer correspond to native Ethernet framing and ODU2e framing respectively.

Both Ethernet SVLAN and MPLS-TP forwarding techniques have their own benefits and it is often advantageous to be able to offer services based on both technologies. For example, multicast services are generally more suited for deployment directly over SVLANs on Ethernet, whereas point-to-point trunks requiring protection benefit more from the MPLS-TP features.

MPLS-TP is fully supported by Transmode’s Native Packet Optical 2.0 architecture and the EMXP family of muxponders. Any physical port on an Ethernet Muxponder can support both native Ethernet and MPLS-TP, allowing operators to deploy MPLS-TP where and when it makes sense for them. It is possible to run MPLS selectively per port, or to separate MPLS traffic based on MAC address and VLAN within the same port. This allows seamless migration and co-existence, with the two protocols running independently side by side.

MPLS can be introduced into a production network either as an overlay to the existing Ethernet or incrementally, as an evolutionary build-out in parallel on the same networking hardware. Transmode believes in a smooth evolution, provided by Ships in the Night capability with Ethernet and MPLS running in parallel as independent, non-interfering protocols in the same system.

MPLS-TP IN FLEXIBLE OPTICAL NETWORKS

Packet-optical networks are often deployed today over a ROADM based Flexible Optical Network and this brings a mesh based structure to the wavelength routing and the paths available through the physical network for any services. One previous drawback with Ethernet was that it was not well suited for protection and restoration over mesh based networks, as the available protection schemes were largely based on point to point or ring architectures. MPLS-TP is highly suited for a mesh based environment and allows network operators to design network resilience strategies that are closely aligned to the physical structure of the network, ensuring the best possible resilience and service up time.

MPLS-TP – EASY SERVICE CREATION

Another advantage of MPLS-TP is that it breaks service creation into two steps. Firstly, tunnels are created between end points within the network, defining service and protection paths for the MPLS-TP based services. Then, the network administrator simply creates new services by adding them to the tunnel end points as pseudowires, safe in the knowledge that all routing aspects of the service have already been handled. This brings two distinct advantages. First, it makes the solution more scalable and it is simpler to add a large number of services to the network. Second, it brings a very familiar look and feel to service creation, as it is similar to the processes involved in traditional transport networks. This helps operators migrate from traditional transport networks to packet-optical networks.
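The two-step workflow described above can be sketched in a few lines of Python. The function names and data shapes are invented for illustration; Transmode’s actual management interface may differ:

```python
# Step 1 (done once per node pair): create the tunnel with working and
# protection paths. Step 2 (repeated per service): attach pseudowires to the
# existing tunnel end points. Routing is only decided in step 1.

tunnels = {}
services = []

def create_tunnel(name, working_path, protect_path):
    tunnels[name] = {"working": working_path, "protect": protect_path}

def add_service(service_id, tunnel_name):
    # No routing decisions here: the tunnel already defines the paths.
    services.append({"id": service_id, "tunnel": tunnel_name})

create_tunnel("A-B", ["LER-A", "LSR-1", "LER-B"], ["LER-A", "LSR-2", "LER-B"])
for customer in range(1, 101):        # adding many services scales trivially
    add_service(f"EVC-{customer}", "A-B")
assert len(services) == 100
```

The sketch shows why the approach scales: once the tunnel exists, each additional service is a single table entry rather than a new routing exercise.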

MPLS-TP RESOLVES THE MAC SCALABILITY PROBLEM

In an SVLAN based Carrier Ethernet Network, all MAC addresses of the attached Customer Ethernet networks are visible to every switch within the Carrier Ethernet Network. Since each customer network may include an extensive number of devices and MAC addresses, this results in a need for large MAC address tables in each network node, creating various problems and extra equipment cost. Using MPLS-TP, the customer MAC addresses are encapsulated within the pseudowire payload and not seen by the intermediate switches of the Carrier Ethernet Network. The switches of the Carrier Ethernet Network therefore do not have to be designed with the number of Customer Ethernet MAC addresses in mind.

MPLS-TP ALLOWS FOR A VIRTUALLY UNLIMITED NUMBER OF CUSTOMERS

The IEEE 802.1Q standard allows for a maximum of 4094 SVLANs in a Carrier Ethernet Network and one SVLAN is normally required per subscribing customer. Since MPLS-TP uses tunnels and label switched paths to define the connectivity within the network, there is no such upper limit for the number of customers that can be handled by a network using MPLS-TP.
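The scaling difference follows directly from the header field sizes: the IEEE 802.1Q VLAN ID is a 12-bit field with two reserved values, while an MPLS label is 20 bits and, moreover, labels are swapped hop by hop rather than being network-wide identifiers:

```python
# Header field arithmetic behind the scalability figures quoted above.
vlan_id_bits = 12
usable_svlans = 2**vlan_id_bits - 2      # VID 0 and 4095 are reserved
assert usable_svlans == 4094             # the 802.1Q limit cited in the text

mpls_label_bits = 20
label_space = 2**mpls_label_bits - 16    # labels 0-15 are reserved
assert label_space == 1048560

# MPLS labels are also swapped at every hop, so the label space is scoped
# per link rather than per network, hence "virtually unlimited" customers.
```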

Of course, as the MPLS-TP services in Transmode’s Native Packet Optical 2.0 architecture are delivered over the same hardware platform as native Ethernet services, they also benefit from the same transport-like performance, with extremely low latency and almost zero jitter, and can be combined with synchronization schemes such as SyncE when required, e.g. in mobile backhaul networks.

2.3 A packet-optical architecture optimized for transport

Transmode’s Native Packet Optical 2.0 architecture is the base for Trans- mode’s packet-optical networks. The architecture builds on Transmode’s long and recognized experience in optical networking combined with the Ethernet, MPLS-TP and OTN transport capabilities outlined in section 2.2. The architecture supports the delivery of fully MEF compliant Carrier Ethernet 2.0 services and other Layer 2 services in combination with the flexibility of a wide choice of underlying transport technology alternatives.

Figure 18. Key features of Transmode’s Native Packet Optical 2.0 architecture.

A key objective in the development of the architecture has been to expand the number of services that can be provided by and over an optical infrastructure: more services over the same network means more revenues and less cost for the operator. By integrating a selected set of Layer 2 functions with the optical layer, the network becomes much more potent in terms of service offering and can be made more scalable. The tight integration between Layer 1 and Layer 2 also makes it possible to increase resilience and improve traffic management in ways not possible with less integrated approaches.

One major advantage of the Native Packet Optical 2.0 architecture is that it is agnostic to the chosen transport network technology. The traffic may flow over a ROADM-based, flexible optical network which provides the underlying connectivity and can be used for transparent Layer 1 services. In addition, a Transmode packet-optical network can seamlessly interoperate with its transport network over MPLS-TP tunnels, Ethernet SVLANs or OTN switches and any combination of these.

Figure 19. Transmode’s Native Packet Optical 2.0 architecture offers a wide range of Layer 2 services and the selection of multiple underlying transport technologies.

Furthermore, together with the management suite Enlighten, the Native Packet Optical 2.0 architecture provides multi-layer traffic and service management. Depending on traffic load or link degradation, the packet-optical network can switch between different transport alternatives, ensuring the highest possible quality of service for the subscribers. The tight integration between Layer 2 and Layer 1 functionality in the architecture also opens up advanced management of the traffic.

For example, the optical channel quality information detected at Layer 1 may be used for automated decisions on how a particular SVLAN or MPLS-TP tunnel is set up and handled at Layer 2.

Native Packet Optical is implemented through the family of optimized Ethernet Muxponders (EMXP) within the widely deployed TM-Series networking platform.

2.4 The main elements of a Transmode packet-optical transport network

Having looked at the principles of packet-optical networking, it is time to discuss how these functions are distributed in a complete network. Figure 20 illustrates the general architecture of a packet-optical network. As usual the network may be divided into an access, an aggregation, a regional/metro core and a core segment, each having their optimal technology implementation and architecture.

Figure 20. The overall architecture of the packet-optical transport network.

2.4.1 Ethernet Demarcation Units (EDU) and Network Interface Devices (NID)

Demarcation of the provided service is a key function of the packet-optical access network as it enables the service provider to extend its control over the entire service path, starting from the customer hand-off points. The customer’s equipment is connected to the Carrier Ethernet network via a provider-owned demarcation device (an Ethernet Demarcation Unit (EDU) or Network Interface Device (NID)) deployed at the customer location. The unit enables a clear separation between the user and the provider Ethernet networks.

Transmode’s Ethernet Demarcation Unit is an independent unit that supports Service Level Agreement (SLA) management capabilities, including sophisticated traffic management and hierarchical Quality of Service (QoS) mechanisms, standard end-to-end Operations, Administration and Maintenance (OAM) and performance monitoring, and extensive fault management and diagnostics, all to reduce service provider operating costs and capital expenses.

The same set of demarcation functionality is also available via Transmode’s Network Interface Device, which is a port device supported by its parent Ethernet Muxponder. The NID performs service OAM but leaves the service policing and tagging to be done by the Ethernet Muxponder, reducing the cost and complexity of the customer located equipment.

Figure 21. Transmode’s Network Interface Device (NID).

2.4.2 Ethernet Muxponders (EMXP)

In the aggregation network, traffic ingresses via an Ethernet Muxponder (EMXP) in the first node. The very same node may also include muxponders/transponders using additional WDM channels for fully transparent Layer 1 transport, and use Layer 2 transport only where inspection and OAM information is required at intermediate points, and where traffic needs to be aggregated by statistical multiplexing.


Figure 22. Two of the units in Transmode’s Ethernet Muxponder (EMXP) family.

The aggregation and the metro core networks are interconnected via a packet-optical platform that can provide switching at OSI Layers 1 and 2. In the metro core, Ethernet SVLANs and MPLS-TP enable detailed handling of the Layer 2 services while WDM keeps legacy transport services at Layer 1. A ROADM 7 enabled Layer 1 provides flexible wavelength switching capabilities, while a unified control plane provides management capabilities across the aggregation and metro core networks.

7 Reconfigurable Optical Add/Drop Multiplexor.

Finally, the core network is typically implemented by a set of IP routers using full IP-MPLS 8 for traffic engineering purposes. Normally operators prefer to have a clear demarcation point towards the aggregation network at the edge of the core network, often referred to as a provider edge (PE) router. The routers of the core network have a broad functionality and very high capacity; they are normally interconnected via a strict Layer 1 WDM network, since the main objective is to provide “fat pipes” without any need for Ethernet aggregation.

2.4.3 Optical add/drop multiplexors (OADM, ROADM) and other optical elements

The Layer 2 specific elements of the aggregation and metro core networks use Layer 1 optical WDM channels for the transport of Ethernet frames between network nodes as described in section 3.2.3. All the above Layer 2 specific network elements interwork seamlessly with the flexible optical networking elements at Layer 1, when present in a truly integrated packet-optical platform, such as Transmode’s TM-Series. A TM-Series node may include Layer 1 transponders and muxponders, Ethernet Muxponders, ROADMs and other optical network elements 9 . The optical layer is indicated by the multicolored ring and links in Figure 20.

2.4.4 The multi-layer service management system

The chosen network architecture has a profound influence on the degree of operational simplicity that is possible to achieve when it comes to network management. The real benefits of packet-optical networking can only be realized with a truly integrated Layer 1 and Layer 2 transport platform and a unified Layer 1 and Layer 2 management system.

8 MPLS used in conjunction with IP and its routing protocols.

9 For a description of how the optical elements work and are used in a flexible optical network, refer to the book “WDM the Transmode Way” from Transmode.

A network for Carrier Ethernet services may be implemented as a separate Layer 1 optical network with Layer 2 Ethernet switches attached externally over standard interfaces, as depicted in Figure 23. Although conceptually simple, such a configuration results in a complex hierarchy of management systems. These systems must be carefully integrated in order to provide a useful Ethernet service provisioning and assurance environment.

Figure 23. Ethernet services provided by separate Ethernet switches attached to an optical network results in a complex hierarchy of management systems.

Using a true packet-optical network and an integrated multi-layer management suite, such as Transmode’s Enlighten, which includes the Transmode Network Manager, improves this situation drastically.

Figure 24. Ethernet services provided by an integrated packet-optical platform and managed from a multi-layer management suite such as Enlighten to simplify operations and reduce cost.

A multi-layer management system has access to both Layer 1 and Layer 2 network status information and can manage both optical and packet mode equipment. Since Layer 1 and Layer 2 functions are handled by one single system, provisioning of Ethernet services affecting both Layer 2 and Layer 1 can be done by simple point-and-click commands from the management system. Furthermore, Layer 2 services are monitored end-to-end and adequate Layer 1 resources can be allocated directly, should optical paths be broken or changes in the traffic pattern occur.

The multi-layer management approach brings further benefits in terms of lower cost for management hardware, less training, less integration and simpler administration and maintenance of the entire network. Especially for network operators not already having an extensive Operations Support System (OSS) in place, the unified packet-optical management system offers significant advantages over the integration of multiple separate management systems.

2.5 Advantages of packet-optical transport

2.5.1 Benefits of the packet-optical approach

Implementing a packet-optical network helps the operator attain several types of valuable advantages:

REDUCED INVESTMENT AND OPERATIONAL COSTS

The network will have fewer physical units through the integration of Layer 1 and Layer 2 transport functions in the same hardware platform. The integration reduces packaging and cabling costs as well as all types of inventory. The integrated approach reduces network complexity and lowers the OPEX significantly.

Efficient aggregation of Ethernet traffic can solve the problem of underutilized WDM channels at the edge of the network. By filling the pipes, packet-optical transport increases the utilization of the available bandwidth and ensures an efficient handover to the core network at a lower cost.

If the network can distinguish between high value IP services and legacy traffic, it is possible to offload legacy traffic to the most cost effective transport at Layer 1, avoiding the consumption of more complex and costly equipment ports except where needed.

POTENTIAL FOR ADDITIONAL SERVICE REVENUES

Service awareness is critical if the differentiated QoS requirements of new multimedia applications and cloud services are to be met end-to-end throughout the network. To do that, it is important to retain service transparency, i.e. to leverage all existing information present in, for example, Ethernet VLAN tags and MPLS labels. This should be done until such a point in the network where traffic with the same QoS requirements has been fully aggregated.

Full service awareness enables the operator to differentiate and market higher value services with specific SLAs, increasing the service revenues.

BETTER CUSTOMER SATISFACTION AND THEREBY REDUCED CHURN

Massive scalability can easily be attained through the WDM functionality in an optical/Ethernet platform, where multiple services can be assigned to the same wavelength or assigned to their own specific wavelengths. New wavelengths can be added on an as-needed basis. Support for scaling Ethernet services is the most important requirement, since Ethernet traffic will constitute the great majority of future growth in bandwidth requirements. With an integrated packet-optical platform, it is very easy to upgrade transport capacity as customer demand grows.

End-to-end OAM is important to ensure the reliability and resiliency of the transport underlying the services carried over the network. It is important to ensure both the service objectives that are internal to the service operator and the SLAs that might explicitly be offered to a subscriber. Efficient tools for operations and maintenance are vital to achieve customer satisfaction.

2.5.2 Advantages of Transmode’s native Packet optical 2.0 architecture

Packet-optical transport provides many advantages as described previously. But other characteristics, more related to the actual implementation of the packet-optical nodes used in the network, are also of significant importance. Native Packet Optical 2.0 combines Transmode’s long and recognized optical network experience with outstanding Ethernet/Layer 2 capabilities, including MEF Carrier Ethernet 2.0 (CE2.0) services, MPLS-TP and OTN compatible transport. And the elements of the Native Packet Optical 2.0 architecture are truly optimized for the best and most cost efficient implementation of packet-optical transport networks.

A LEAN AND TRANSPORT CENTRIC IMPLEMENTATION OF THE ETHERNET/LAYER 2 FUNCTIONS

The packet mode functionality of a packet-optical node may include anything from a simple Ethernet bridge to a full-fledged IP router and more. When implementing a transport focused packet-optical node, it is of utmost importance to select an optimal level of functionality for the transport task. Including superfluous functions adds both to equipment cost and operational complexity, especially in the aggregation and metro/regional segments of the network, which contain the majority of the network nodes. On the other hand, the node must include sufficient features to make service differentiation, OAM and necessary resilience possible.

Transmode’s Ethernet Muxponders and other Layer 2 equipment in the Native Packet Optical 2.0 architecture have all been designed with this balance in mind. Functionality of value in a metro aggregation or metro core network, such as the capability to handle MPLS-TP, is included, while most of the complex IP handling has been omitted.

EFFICIENT TRANSPORT OF SYNCHRONIZATION INFORMATION

Accurate synchronization is essential, especially in mobile networks. As an example, 3G and LTE networks need an accurate timestamp as well as frequency synchronization, while GSM only requires frequency synchronization. Transmode’s Ethernet Muxponders fully support the ITU-T synchronous Ethernet (SyncE) standards for distribution of timing signals throughout the packet-optical network.

IEEE 1588v2 10 is a common standard to deliver both phase and frequency information over a packet network, but it is sensitive to packet delay variation (jitter). IEEE 1588v2 can be deployed over any legacy Ethernet network, but it may rapidly lead to quality problems if jitter becomes too high. With the jitter-free and transport centric Transmode implementation of network nodes, IEEE 1588v2 will converge fast and can deliver the required timestamps.
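The sensitivity to jitter follows from how IEEE 1588v2 recovers time: the standard offset and delay computation from a two-way timestamp exchange assumes symmetric path delay, so packet delay variation translates directly into timestamp error. A minimal numeric sketch (timestamp values are invented for illustration):

```python
# Basic IEEE 1588v2 (PTP) offset/delay computation from the four timestamps
# of a Sync / Delay_Req exchange. t1: master sends Sync, t2: slave receives it,
# t3: slave sends Delay_Req, t4: master receives it. All times in microseconds.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock error vs master
    delay = ((t2 - t1) + (t4 - t3)) / 2    # assumed-symmetric one-way delay
    return offset, delay

# Ideal network: slave clock 5 us ahead, one-way delay 100 us, no jitter.
offset, delay = ptp_offset_and_delay(t1=0, t2=105, t3=200, t4=295)
assert (offset, delay) == (5.0, 100.0)

# With 40 us of queueing jitter on the forward path only, the asymmetry
# shows up directly as a 20 us error in the computed offset:
offset_j, _ = ptp_offset_and_delay(t1=0, t2=145, t3=200, t4=295)
assert offset_j == 25.0   # 20 us of the result is caused purely by jitter
```

This is why a transport implementation with near-zero packet delay variation lets IEEE 1588v2 converge quickly to an accurate timestamp.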

LOW LATENCY

Transmode’s Ethernet Muxponders are designed for transport and capacity; no central switching fabric, no input queues and no network processors limiting performance. This results in minimum delay and no jitter within the Ethernet Muxponders and the network. The ultra-low latency has significant value overall, and is crucial in certain applications, e.g. between data centers and in algorithmic trading applications for the financial world. Additionally, the stability and low latency of the Ethernet Muxponders add minimal jitter and delay in mobile backhaul applications, enabling more and longer radio hops when a combined wireless and wired backhaul network is being deployed.

LOW POWER CONSUMPTION

Energy costs can be a significant item in the OPEX of any telecommunications network. The hardware elements of the Native Packet Optical 2.0 architecture have all been designed with this in mind, with typical power consumption below 7 W per 10 Gigabit Ethernet port, for example.

10 The IEEE 1588 standards describe a hierarchical master-slave architecture for clock distribution in computer networks originally known as the Precision Time Protocol (PTP).

Not only does the low power consumption save direct energy costs. In a typical telecom environment, up to an additional 5 W of cooling is required for every 10 W of heat generated by the equipment. This means that for every 10 W of consumed power saved, air-conditioning requirements can be lowered by as much as 5 W, a further 50% saving on energy.
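The cooling arithmetic above can be checked directly: with up to 0.5 W of cooling per watt of equipment heat, every watt saved at the equipment saves up to 1.5 W in total:

```python
# Cooling overhead arithmetic from the text: up to 5 W of cooling per 10 W
# of equipment heat, i.e. a cooling factor of 0.5.
cooling_factor = 5 / 10

def total_power(equipment_watts):
    """Total power draw including the associated cooling load."""
    return equipment_watts * (1 + cooling_factor)

saved_equipment = 10                       # watts saved at the equipment
saved_total = total_power(saved_equipment)
assert saved_total == 15.0                 # 10 W direct + 5 W of cooling
assert (saved_total - saved_equipment) / saved_equipment == 0.5  # +50%
```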

MULTI-LAYER MANAGEMENT

Transmode’s multi-layer management suite Enlighten enables unified Layer 1 and Layer 2 Operations, Administration and Maintenance of the packet-optical network. Transmode’s Native Packet Optical 2.0 architecture supports the full range of operations, administration and maintenance functions defined by MEF for Carrier Ethernet 2.0. Furthermore, the architecture fulfils the requirements of ITU-T Recommendation Y.1731, which additionally addresses performance management.

2.6 Migrating legacy TDM services to Ethernet

2.6.1 One common infrastructure for Ethernet and legacy TDM services

SDH and SONET optical transmission systems based on TDM technology have been the basis for many telecommunication services offered during the last few decades. However, from a bandwidth perspective, Ethernet based traffic has already surpassed the amount of legacy TDM traffic. Operators are faced with the challenging task of maintaining existing TDM services while upgrading networks for new Ethernet services, all in the most cost efficient way. Network upgrades are required to cater for the growing amount of packet mode traffic generated by Internet access, video on demand and cloud computing, while the shrinking amount of TDM traffic has to be taken care of to sustain revenues from the existing services. Building a separate new infrastructure for the Ethernet traffic while maintaining a shrinking SDH/SONET network is one option, but an integrated approach, where the new packet mode network also provides the legacy TDM services, is an attractive alternative.

Figure 25. Migration towards a single, packet-oriented, transport infrastructure for all services.

Many TDM services running over SDH/SONET systems can be replaced by Ethernet equivalents, and are well suited for the packet-optical infrastructure described in the previous sections. However, there is a significant portion of the traffic that cannot simply be migrated to Ethernet. This fact creates a dilemma for network operators as the SDH/SONET systems start to reach end of life, or if a large SDH/SONET network must be supported for a small number of services or customers.

Transmode’s packet-optical platform offers several alternatives for the handling of legacy, TDM-based services in a way that greatly facilitates a gradual shift towards one common, packet mode, Ethernet infrastructure for all traffic.

- Existing Layer 1 services carried over SDH/SONET networks can be transported totally separately from the Ethernet services using different wavelengths, while still using the same WDM platform and optical transmission network. Transmode’s TM-Series platform includes powerful ROADMs for flexible handling of the optical paths and muxponders/transponders that adapt SDH/SONET traffic to optical transport.

- Layer 1 services based on STM-1/OC-3 and STM-4/OC-12 can easily be adapted for transport over Ethernet through the use of Transmode’s Intelligent SFP (iSFP) pluggable optics that provide circuit emulation over a SyncE capable Ethernet. The iSFP modules can be fitted into any Gigabit Ethernet port of Transmode’s Ethernet Muxponders, allowing a very flexible and tactical service migration.

Figure 26. SDH/SONET transport alternatives in Transmode’s Native Packet Optical 2.0 architecture.

2.6.2 Using the iSFP to convert SDH/SONET services for Ethernet transport

The Intelligent SFP (iSFP) is used to deliver TDM services over a network built for Carrier Ethernet services. It provides circuit emulation of 155 Mbit/s or 622 Mbit/s TDM services (STM1/OC3 or STM4/OC12) over a SyncE capable Carrier Ethernet Network.

The iSFP module performs packetizing of the SDH/SONET service, converting an STM-1/OC-3 service into a 170 Mbit/s Ethernet stream or an STM-4/OC-12 service into a 680 Mbit/s stream, creating a transparent bit pipe between two locations in the packet-optical network. The existing SDH/SONET service is transparently migrated to the Carrier Ethernet Network, with service adaption at the edge of the network and standard Ethernet frames used between these locations.
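The bandwidth expansion caused by packetization can be estimated with a back-of-the-envelope calculation. The per-packet payload and header sizes below are illustrative assumptions, not Transmode’s actual framing parameters; the result lands in the same region as the 170 and 680 Mbit/s figures quoted above.

```python
# Rough estimate of the Ethernet bandwidth needed to carry an
# SDH/SONET signal with TSoP-style circuit emulation. The payload
# size (810 bytes) and per-packet overhead (46 bytes for assumed
# Ethernet + VLAN + emulation headers) are illustrative assumptions.

def emulated_rate_mbps(tdm_rate_mbps, payload_bytes=810, overhead_bytes=46):
    """Ethernet rate after adding per-packet header overhead."""
    return tdm_rate_mbps * (payload_bytes + overhead_bytes) / payload_bytes

stm1 = 155.52   # STM-1/OC-3 line rate in Mbit/s
stm4 = 622.08   # STM-4/OC-12 line rate in Mbit/s

print(round(emulated_rate_mbps(stm1), 1))   # roughly 164 Mbit/s with these assumptions
print(round(emulated_rate_mbps(stm4), 1))
```

Smaller packets lower the per-packet delay but raise the header overhead, which is why the emulated stream always needs somewhat more bandwidth than the TDM signal it carries.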

The TDM traffic is mapped into an Ethernet Virtual Connection (EVC) that can either be transported as an Ethernet Service VLAN or via an MPLS-TP service. Transmode’s Ethernet Muxponders can be used to perform Layer 2 aggregation to ensure full utilization of higher speed 10G Ethernet connections. The applied Transparent SDH/SONET over Packet (TSoP) adaption is currently outlined in an open Internet Engineering Task Force (IETF) draft. This TSoP transport ensures complete transfer of the data and payload structure, all overhead bytes, protection protocols and synchronization at both STM-1/OC-3 and STM-4/OC-12.


Figure 27. Transparent SDH/SONET over Packet (TSoP).

The two directions of the service can either operate as two independent timing domains or one direction can be frequency locked to the other direction.

Transmode’s Enlighten also supports synchronization management and monitoring. Probes can be deployed to monitor the SyncE quality in the network, which is used as a reference for both ends of the transparent SDH/SONET service. They can also be used to monitor the SDH/SONET sync quality of the connected systems to provide reassurance of the sync in the TDM service that is carried over the Ethernet network.


Figure 28. Using SyncE as a reference for SDH/SONET differential clock recovery.
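The differential clock recovery principle in Figure 28 can be sketched numerically: the sender expresses the SDH service clock as tick counts against the shared SyncE reference, and the receiver, holding the same reference, rebuilds the service frequency from those counts. Only the 19.44 MHz STM-1 byte clock is a real figure; the reference rate and packetization interval are invented for illustration.

```python
# Toy illustration of differential clock recovery (DCR): the sender
# counts SDH service-clock ticks against a common SyncE-derived
# reference; the receiver, sharing that reference, reconstructs the
# service frequency from the transported tick counts.

REF_HZ = 25_000_000        # assumed common SyncE-derived reference
SERVICE_HZ = 19_440_000    # STM-1 byte clock, 19.44 MHz

def sender_timestamps(n, service_hz=SERVICE_HZ, ref_hz=REF_HZ, ticks_per_pkt=810):
    """Reference-clock reading taken at every 810th service-clock tick."""
    return [round(k * ticks_per_pkt * ref_hz / service_hz) for k in range(1, n + 1)]

def recover_service_hz(ts, ref_hz=REF_HZ, ticks_per_pkt=810):
    """Receiver side: service ticks elapsed divided by reference time elapsed."""
    n_ticks = len(ts) * ticks_per_pkt
    return n_ticks * ref_hz / ts[-1]

ts = sender_timestamps(1000)
print(recover_service_hz(ts))   # very close to 19.44 MHz
```

Because both ends lean on the same SyncE reference, the recovered clock tracks the original service clock far more tightly than free-running adaptive recovery from packet arrival times could.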

SYNCHRONIZATION

Network synchronization is the cornerstone of the SDH/SONET network. Here the quality of the underlying Ethernet network can be critical. Transmode’s Native Packet Optical 2.0 architecture provides excellent Synchronous Ethernet (SyncE) performance due to patented innovations in circuit design. In Transmode’s iSFP solution, SDH/SONET sync transport is provided with the Differential Clock Recovery (DCR) mechanism, transferring the SDH/SONET clock to the transparent emulated service with SyncE as a reference at both ends. DCR transfers the synchronization clock to the emulated service and extracts it again at the far end of the service.

PROTECTION

To maintain the SDH/SONET protection, the existing ASON and SNCP protection schemes are replaced by MPLS-TP and Ethernet protection options. Protection is supported with MPLS-TP for topologies such as ring, mesh and partial mesh. Ethernet protection is also supported, with ring (ERPS) and point-to-point (LAG) options, as described in chapter 4.4. These mechanisms provide comparable protection with equivalent, if not better, switching speed performance than traditional SDH/SONET protection schemes.

2.7 Multi-layer network management

Many of the advantages of packet-optical networking originate from the ability to manage both the Layer 1 optical channels and the Layer 2 Ethernet services of the network in a coordinated way. Only then can services be created end-to-end, having their optical channels established at the same time as the Ethernet service attributes are assigned to the Ethernet Virtual Connections using them. Importantly, it is only then possible to efficiently monitor the performance of an Ethernet service and determine if a fault originated in the optical or in the packet switching elements of the network.


Figure 29. An Ethernet service in a packet-optical network is created over multiple layers of underlying connections that need to be managed in a coordinated way.

Coordinated handling of the optical and Ethernet layers of the network calls for a multi-layer management system. The structure of such a system must follow a well thought through and standardized architecture, while also being designed for ease of use and with the network operators in mind. Transmode’s multi-layer management suite, Enlighten, provides network operators with full control of their integrated packet-optical network and supports planning, deployment and operation of the network in the most cost efficient and rational way.

2.7.1 Transmode’s multi-layer management suite Enlighten

Enlighten is the multi-layer management suite for managing Transmode’s optical and packet-optical transport networks. It provides a full range of management tools helping operators with tasks throughout the entire service and network life cycle, such as planning, deploying and operating a packet-optical network and its services.

Furthermore, Enlighten has been designed in accordance with the principles of the Business Process Framework (eTOM), published by the TM Forum (TMF, formerly TeleManagement Forum). eTOM is a guidebook that defines the most widely used and accepted standard for business processes in the telecommunications industry. The eTOM model describes the full scope of business processes required by a service provider and defines key elements and how they interact.


Figure 30. The eTOM Business Process Framework. Source TM Forum.

The Enlighten multi-layer management suite provides support for processes in the “Operations” area of the process map in Figure 30 and comprises the following entities:

- The Transmode Network Design Tool (TNDT)

- The Transmode Planning Tool (TPT)

- The Transmode Network Manager (TNM)

- The Embedded Node Manager (ENM)

- The Enlighten Ecosystem

- The Enlighten Portal

A central element in the Enlighten management suite is the Transmode Network Manager (TNM), a cost-effective and scalable carrier class, service, network and element management system based on the ITU-T recommendation M.3010.


Figure 31. Transmode’s Network Manager, TNM.

TNM provides a centralized system for operations, administration and management of the entire packet-optical network and hides the complexity of the underlying equipment from higher order business support systems. Its integrated management capabilities also provide the foundation for service management of individual end-to-end connections. TNM increases the visibility of the network and simplifies many repetitive tasks, which increases the performance of the network while lowering operational expenses.

2.7.2 network management principles

Management of a communications network comprises many diverse actions, which classically have been grouped into five main categories, the FCAPS suite:

- Fault management (F) encompasses functions for detecting failures and isolating the failed equipment, including the restoration of connectivity.

- Configuration management (C) refers to functions for making orderly and planned changes within the network. An important part of configuration management is keeping an inventory of equipment, software releases etc. in the nodes.

- Accounting (or administration) management (A) deals with functions that make it possible to bill users for the network resources they use.

- Performance management (P) comprises functions for monitoring and fine tuning the various parameters that measure the performance of the network and forms the basis for service level agreements with the network users.

- Security management (S) refers to administrative functions for authenticating users and setting access rights and other permissions on a per-user basis.

A great deal of standardization of management procedures has taken place among operators and their vendors to facilitate network operations. Organizations such as TM Forum have developed extensive specifications of management functionality, interfaces and protocols.

Transmode’s network management philosophy fully adheres to the relevant TM Forum standards. For integration with back-office support systems, TNM provides TM Forum Frameworx compliant web-services interfaces based on the TMF608 model. These interfaces hide the complexity of the underlying optical network and reduce the time, risk and costs associated with systems integration.

TNM provides the following full suite of MTOSI 2.0 11 compliant interfaces:

- Inventory

- Alarms

- Activation/Provisioning

- Performance statistics

2.7.3 A unified information model for multi-layer management

At the center of Transmode’s multi-layer management architecture is the unified information model. There are several standards for modeling multi-layer networks: the ITU-T G.805 recommendation provides a description of circuit switched network connections through a multi-layer network, while the G.809 recommendation provides the same but for connectionless networks. G.805 and G.809 provide generic methods for modeling and describing networks from a functional and structural architecture perspective.

11 Multi-Technology Operation Systems Interface (MTOSI) is a TM Forum standard for implementing interfaces between Operations and Support Systems (OSS).

The G.805/G.809 model can be mapped into management information models using the equipment management functions specified in the TMF608 (MTOSI 2.0) model. Management information models are particularly important because they formally define and describe the reference points that the operator’s Operations and Support System (OSS) must interact with in order to manage a piece of transport equipment.

The unified information model allows the entire network to be modeled all the way from the optical fibers run in the ducts up to the services created on top of them, including all the OSI layers in between. At the bottom of the multi-layer information model used by Transmode is the TMF608 model for the Layer 1 network. Upon the Layer 1 model the higher order models for connection oriented Ethernet (q-in-q), MPLS-TP and the multi-point Ethernet models are added, as indicated in Figure 32.
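The layering idea can be illustrated with a minimal sketch, where each object points at the server layer beneath it. The class and attribute names are invented for illustration and do not reflect the actual TMF608 schema.

```python
# Minimal sketch of a unified multi-layer information model: a Layer 2
# Ethernet service rides on a wavelength, which rides on a fiber.
# Class and attribute names are illustrative only.
from dataclasses import dataclass

@dataclass
class Fiber:
    duct: str                 # physical layer: a fiber in a duct

@dataclass
class Wavelength:
    nm: float                 # optical channel wavelength
    server: Fiber             # the layer beneath it

@dataclass
class EthernetService:
    name: str
    server: Wavelength

def server_trail(obj):
    """Walk down through the layers supporting a service."""
    trail = [obj]
    while hasattr(trail[-1], "server"):
        trail.append(trail[-1].server)
    return trail

svc = EthernetService("EVC-42", Wavelength(1550.12, Fiber("duct-A")))
print([type(o).__name__ for o in server_trail(svc)])
# ['EthernetService', 'Wavelength', 'Fiber']
```

Because every layer references its server layer in one model, a fault or a provisioning action at any layer can be traced up or down the trail without integrating separate Layer 1 and Layer 2 systems.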


Figure 32. Transmode’s multi-layer management architecture is based on a unified Layer 1 and Layer 2 information model.

The fact that there is one single, unified information model from the fiber up to the Ethernet service is the enabler of unified management and of all the operational benefits from using a truly integrated packet-optical network. In a network with separate Layer 1 and Layer 2 service delivery, a unified view must be achieved in a higher order OSS through integration of multiple systems. The unified information model enables all administrative and management actions such as planning, provisioning and operations to be performed across Layer 1 and Layer 2.

2.7.4 Layer 2 Service Provisioning

For networks with Layer 2 equipment, the Transmode Network Manager (TNM) offers a provisioning module that provides point-and-click provisioning for both Ethernet SVLANs and MPLS-TP paths. The module enables point-and-click creation of tunnels, Label Switched Paths (LSPs) and pseudowires, which automates the configuration process and reduces the time and cost to provision resources and services.

To be able to deliver and provision a Layer 2 service, the optical channels must also be configured. One advantage of TNM multi-layer management is that the operator can provision an optical channel with the same management system before configuring a Layer 2 service, effectively speeding up the entire provisioning process.

2.7.5 Layer 2 Service Assurance

A Carrier Ethernet Network must provide the ability to monitor, diagnose and centrally manage the network, using standards-based, vendor independent implementations. Transmode’s TNM takes this one step further by offering management across the layers to further simplify management, increase visibility of the network and reduce operational costs.

The Layer 2 assurance module of TNM extends the management system’s operational model and well-known management processes from Layer 1 into Layer 2. The module is plug-and-play on top of the Layer 1 assurance module and not only adds Layer 2 management capability but also provides integrated Layer 1 and Layer 2 management from one unified graphical interface.

Performance management of a Layer 2 network comprises the measuring of throughput, delay, jitter, and packet loss for a particular service. To define the end points between which such measurements are taken, Maintenance Entity Groups and Maintenance End Points are defined as described in section 4.7.4.
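As a minimal illustration of such measurements, the sketch below computes frame loss ratio and delay variation from per-frame records between two hypothetical measurement end points; the sample data is invented.

```python
# Back-of-the-envelope Layer 2 performance metrics: frame loss ratio
# and delay variation (jitter) between two measurement end points.

def frame_loss_ratio(sent_ids, received_ids):
    """Fraction of sent frames that never arrived."""
    return 1 - len(set(received_ids) & set(sent_ids)) / len(sent_ids)

def delay_stats(delays_ms):
    """Average delay and peak-to-peak delay variation (jitter)."""
    avg = sum(delays_ms) / len(delays_ms)
    return avg, max(delays_ms) - min(delays_ms)

sent = range(100)
received = [i for i in sent if i % 25 != 0]     # frames 0, 25, 50, 75 lost
delays = [1.20, 1.22, 1.19, 1.31, 1.21]         # one-way delays in ms

print(round(frame_loss_ratio(sent, received), 2))   # 0.04
avg, jitter = delay_stats(delays)
print(round(avg, 3), round(jitter, 2))
```

Real OAM implementations derive the same quantities from hardware-timestamped measurement frames rather than from lists in memory, but the arithmetic behind an SLA report is this simple.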

The assurance module provides a complement to established Ethernet standards for fault management such as IEEE 802.1ag (Connectivity Fault Management, CFM). The goal of CFM is to monitor an Ethernet network and pin-point where a problem occurs, while TNM provides a graphical multi-layer user interface that helps quickly find the exact root cause of a networking problem, independent of whether it occurs on Layer 1 or Layer 2.


Figure 33. TNM multi-layer view of an Ethernet Virtual Connection.

The assurance module discovers and tracks the operational state of the Layer 2 services and the underlying Layer 1 paths supporting the services. Fault information is indicated not only on the map but also graphically for individual paths, links and ports. If the origin of a networking problem resides on Layer 1, the user can turn to the features in the Layer 1 assurance module to quickly resolve the problem without having to change system or interface. The module also presents G.826 performance statistics for Layer 1.

2.8 Software Defined Networking (SDN) and Network Virtualization

Software Defined Networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of lower OSI level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane). Instead of being executed in each network element, the functions of the control plane are centralized to one or more SDN controllers. SDN allows network administrators to have programmable and central control of network traffic flows.
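The control/data-plane split can be illustrated with a toy sketch: a central controller computes where traffic should go and pushes the result as flow entries, after which the switches only perform table lookups. All class and method names are hypothetical; this is not an OpenFlow implementation.

```python
# Toy illustration of the SDN control/data-plane split: a central
# controller installs flow entries; switches only match and forward.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}
    def install(self, dst, out_port):       # invoked by the controller
        self.flow_table[dst] = out_port
    def forward(self, dst):                 # pure data-plane lookup
        return self.flow_table.get(dst)

class Controller:
    """Holds the network-wide view and pushes decisions to switches."""
    def __init__(self, switches):
        self.switches = switches
    def provision(self, dst, route):        # route: {switch_name: out_port}
        for sw in self.switches:
            if sw.name in route:
                sw.install(dst, route[sw.name])

s1, s2 = Switch("s1"), Switch("s2")
Controller([s1, s2]).provision("10.0.0.5", {"s1": 2, "s2": 7})
print(s1.forward("10.0.0.5"), s2.forward("10.0.0.5"))   # 2 7
```

The point of the pattern is that all path intelligence lives in one place: changing routing policy means changing the controller, never the switches.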


Figure 34. Software defined networking. Control of the network is separated from the network equipment and placed in an SDN controller.

Software defined networking increases the flexibility of the physical network because the network services can be created “dynamically” by the controller and the image it has of the network, rather than by direct manipulation of the physical switches, routers etc. The SDN controller thus enables network virtualization, i.e. it integrates the status of the physical network resources and the network functionality into a single, software-based administrative entity, a virtual network. Network virtualization as defined by ITU-T Y.3011 enables the creation of such logically isolated virtual networks over shared physical networks, so that multiple virtual networks can simultaneously coexist over the same physical resources.

Network virtualization brings a higher degree of abstraction to the transport network, an abstraction which is favorable when it comes to mobility of resources, service creation and management, especially in the context of cloud computing: the flexibility of the virtual network matches the dynamic reconfiguration capabilities of computing and storage in cloud computing. Network virtualization also offers the owner of the physical network the possibility to delegate more of the network control to the service providers using its network, as indicated in Figure 35.


Figure 35. Network virtualization allows two service providers to “see” different virtual networks based on the same physical infrastructure.

Software defined networking requires some method for the control plane to communicate with the data plane. One such mechanism, OpenFlow, is often misunderstood to be equivalent to SDN, but several other mechanisms also fit into the concept. Path Computation Element (PCE) is a less disruptive way to achieve SDN than OpenFlow. Compared to the OpenFlow architecture, which places all control in the control plane, the PCE approach only moves the path-finding functions of the control plane to the controller while retaining the other control plane functionality in the elements of the physical network.

The PCE model enables network operators to customize the actual path computation algorithms that are currently integrated in, for example, router operating systems. The traditional path computation architecture gives operators little or no scope to radically change, or to routinely increment, path computation. PCE server products provide open software APIs to allow operators to customize or replace path algorithms. These interfaces allow the OSS to influence network behavior via the PCE server, instead of using direct communications with every network element. The increased flexibility and openness for customization enable operators to address the rapid pace of change set by today’s applications and traffic flows.
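The PCE idea of operator-customizable path computation can be sketched as a Dijkstra search that takes the cost function as a parameter, so swapping latency for hop count requires no change in the network elements. The topology and link weights are invented.

```python
# Sketch of a PCE-style path computation with a pluggable cost
# function: the operator supplies the metric, the algorithm stays put.
import heapq

def shortest_path(graph, src, dst, cost=lambda edge: edge["latency"]):
    """Dijkstra with an operator-supplied per-edge cost function."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return d, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, edge in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (d + cost(edge), nbr, path + [nbr]))
    return None

graph = {
    "A": {"B": {"latency": 5}, "C": {"latency": 1}},
    "B": {"D": {"latency": 1}},
    "C": {"D": {"latency": 10}},
}
print(shortest_path(graph, "A", "D"))                      # (6, ['A', 'B', 'D'])
print(shortest_path(graph, "A", "D", cost=lambda e: 1))    # minimum hop count
```

A real PCE server also accounts for constraints such as available wavelengths, protection diversity and administrative policy, but the pluggable-metric structure is the essence of the model.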

Transmode’s Native Packet Optical 2.0 architecture already allows for multi-layer management of traffic flows, including centralization of certain control functions, as indicated in the left side of Figure 36. Transmode’s approach towards SDN comprises an expansion of the architecture as shown on the right side of the same figure. An SDN controller entity, including PCE, topology database and virtual router functions, is introduced as a layer between the physical network and the management system. Furthermore, the SDN entity is given standardized, open software interfaces, which make it possible for other, higher level, management systems and other, external, SDN-compatible networks to interact with the SDN controller entity. The SDN entity as such may be located in a single central node, or distributed to multiple sites in the network.

The introduction of the SDN controller entity enables a seamless integration with other software defined networks and is an excellent foundation for the virtualization of higher order networks and services. For example, having virtual router functions, a Transmode packet-optical network can interact with the protocols of a Layer 3 IP-network, dynamically assigning transport network capacity to routes as requested by the routers of the Layer 3 network.


Figure 36. The SDN approach chosen by Transmode.

3. APPLICATIONS OF PACKET-OPTICAL NETWORKING

3.1 Chapter summary

Packet-optical networks providing Carrier Ethernet services are rapidly becoming a primary infrastructure for telecommunications network operators. The versatility of packet mode Ethernet services in combination with the capacity and scalability of optical transport has proven to be a winning combination in a variety of applications. In this chapter we take a brief look at some of the areas where packet-optical technology has already proven its superiority.

3.2 Ethernet services for enterprises – Business Ethernet

3.2.1 Serving enterprise customers

The enterprise Information and Communications Technology (ICT) landscape is constantly changing. New IP based voice and video services, cloud computing, and distributed data centers add new requirements on the enterprise wide area data network (WAN), especially in terms of increased bandwidth and enhanced performance. Legacy WAN connectivity technologies such as frame relay (FR) and ATM 12 are gradually being phased out by many network operators. And wide area connectivity based on TDM 13 leased lines does not provide the operational flexibility expected by the modern enterprise.

As ICT managers look for solutions, Ethernet services such as the MEF defined Carrier Ethernet services have emerged as an attractive alternative to provide businesses with a best of breed, cost-effective, wide area networking solution for enterprise ICT applications. Ethernet standards from the MEF, ITU and IEEE have now added features and functionalities that make Ethernet a WAN capable technology. And the potential of Ethernet services has already been discovered by industries such as finance, healthcare, education, government, IT, retail, real estate, legal, media and more.

12 Asynchronous Transfer Mode.

13 Time Division Multiplexing.

An increasing number of network operators are offering MEF-compliant Carrier Ethernet services. In many cases, these services are replacing some of the operators’ legacy technologies (such as FR and ATM) while in other cases they are co-existing alongside other established wide area networking technologies, such as Layer 3 virtual private network services – also referred to as IP-VPN services.

For the network operator, Ethernet services offer an additional business opportunity, especially if Carrier Ethernet can be implemented in a cost efficient way on the same equipment already used to provide Layer 1 transport services. The enterprise demand for Ethernet services is projected to continue its rapid growth and reach a value of almost 45 billion USD worldwide in 2015. Transmode and other packet-optical equipment vendors are actively enabling this important transition.


Figure 37. Forecasted revenues from Ethernet services purchased by enterprises.

3.2.2 A network for Business Ethernet

Figure 38 shows the principal elements of a typical packet-optical network deployed by a network operator providing Carrier Ethernet services for enterprises. The optical aggregation and metro networks use WDM as the underlying transport technology and are enhanced with Ethernet switching functions to carry Ethernet traffic between the subscribers’ LANs and other data networks as needed.


Figure 38. Transmode’s offering for operators providing Ethernet services to enterprises.

The integrated packet-optical network enables both transparent Layer 1 services and more advanced Layer 2 (Carrier Ethernet) services. Having the optical network as the base makes it possible to support a broad range of traditional Layer 1 transport services. And thanks to the integrated Layer 2 functions it is also possible to offer the enterprise a wide range of Ethernet services with different bit rates, QoS and Service Level Agreements (SLAs), using bandwidth profiles and various protection and monitoring functions. The packet-optical network is ideally suited to support a multitude of enterprise subscribers, each having their own individual requirements on the characteristics of the wide area service they want to buy.

Transmode’s portfolio for Carrier Ethernet services includes both demarcation units (EDU and NID) which can be placed at the customer site (CPE) and Ethernet Muxponders (EMXP), i.e. muxponders with integrated Layer 2 functions and located in the network nodes. All units are fully MEF certified for Carrier Ethernet 2.0 (CE 2.0) services and form an integral toolbox for the creation of a state of the art packet-optical network. Furthermore, the EMXPs are seamlessly integrated with other traffic units and optical units in Transmode’s TM-Series, forming a true packet-optical network.

The Ethernet Demarcation Unit (EDU) is designed for minimal delay and jitter, giving unprecedented QoS and SLA fulfillment. The EDU includes highly accurate and precise OAM and performance monitoring through microsecond resolution and per service visibility for all key OAM and SLA parameters, enabling individual SLA monitoring and service differentiation.

The Ethernet Muxponders are scalable up to 100G and also optimized for ultra-low latency and jitter. They can be equipped with long reach interfaces prepared for integration with OTN transport core, have fast, flexible and future-proof OAM processing and are fully integrated with the optical transport platform of the TM-Series.

All TM-Series equipment is managed by Enlighten, the multi-layer management suite, which includes advanced functions for seamless planning, provisioning and management of Carrier Ethernet 2.0 compliant services.

3.3 Aggregation of IP traffic – IP backhaul

3.3.1 IP based services over a common infrastructure

Fixed and mobile networks provide telecommunications services to end users, services generated both by the operators themselves and by external entities. The services can be Internet access, telephony with access to the PSTN, and consumer access to TV and media streams. Increasingly, all these services are built on the IP suite of protocols, and provisioning a service to an end user becomes equivalent to enabling the flow of IP traffic to and from the subscriber.

Much of the telecommunications operator’s efforts today are centered on creating seamless services working over both the fixed and the mobile networks – Fixed Mobile Convergence (FMC). One important element of the operator’s FMC strategy is to use a common transport infrastructure of packet-optical equipment for all IP services, irrespective of whether access is fixed or mobile. This simplifies the implementation of common higher layer service entities, and reduces both CAPEX and OPEX.


Figure 39. Fixed Mobile Convergence (FMC) uses a common transport infrastructure for IP traffic from mobile users and users in the home and enterprise.

The infrastructure delivering the operator’s services is typically divided into an access network, an aggregation network and a core network.

- The access network must serve millions of end-points, each requiring a finite capacity, while leveraging the available access medium in an optimal way.

- The aggregation network collects traffic from a large number of end users, coming from hundreds or thousands of access nodes, smoothing the individual peaks and troughs of traffic into a smoother average traffic flow. Packet-optical technologies can bring significant benefits to this part of the FMC common infrastructure.

- The core network routes and distributes traffic between aggregation networks and a finite number of entities implementing the higher layers of the services, i.e. data centers, servers and other networks.

3.3.2 A lean and transport centric aggregation network

The role of the aggregation network is to transport IP traffic from a large number of access nodes to a much smaller number of core nodes. Traditionally this task has been performed by SDH/SONET links and WDM wavelengths, but a more capacity efficient aggregation network is achieved by using Ethernet services to aggregate multiple traffic streams. Ethernet leverages the inherent possibilities for statistical multiplexing of much of the data traffic and fills the available bandwidth pipes more efficiently. That is why Transmode recommends the use of native Ethernet and MPLS-TP in the aggregation network.
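The statistical multiplexing gain mentioned above can be illustrated with synthetic traffic: a link sized for the peak of the aggregate needs far less capacity than one sized for the sum of the individual peaks. The traffic model below (50 sources bursting to 100 Mbit/s ten percent of the time) is invented.

```python
# Illustration of statistical multiplexing gain: compare the sum of
# the individual source peaks with the peak of the aggregated traffic.
import random

random.seed(7)
N_SOURCES, SAMPLES, PEAK = 50, 1000, 100   # 50 bursty 100 Mbit/s sources

# Each source is idle most of the time and bursts to PEAK 10% of the time.
traces = [[PEAK if random.random() < 0.10 else 0 for _ in range(SAMPLES)]
          for _ in range(N_SOURCES)]

sum_of_peaks = N_SOURCES * PEAK
peak_of_sum = max(sum(t[i] for t in traces) for i in range(SAMPLES))

print(sum_of_peaks, peak_of_sum)
print(round(sum_of_peaks / peak_of_sum, 1))   # multiplexing gain, typically 3-4x here
```

TDM circuits must reserve each source’s peak rate permanently, which is exactly the capacity the aggregation network saves by carrying the traffic as Ethernet.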


Figure 40. Aggregation of IP traffic over a packet-optical infrastructure. The aggre- gation network can be made less complex and given better performance if the IP functionality is restricted to the end nodes, i.e. the access nodes and the routers of the core network.

There is no need for advanced routing and filtering functions in the aggregation network. Instead a carefully chosen combination of Ethernet switching and optical functions should make the aggregation network act as a bundle of wires carrying the traffic. Handling of the IP protocol, including IP/MPLS, should be restricted to as few nodes as possible, i.e. to the core nodes and as needed in the access nodes. Such a consolidation of IP/MPLS reduces both cost and operational complexity. Furthermore, the simpler processing of Ethernet frames compared to IP packets in the nodes of the aggregation network reduces delay and jitter, while making all traffic flows more predictable than in an IP network.

Transmode’s Native Packet Optical 2.0 architecture, implemented with the TM-Series and Enlighten, is optimized for transport and does not introduce unnecessary complexity or extra IP functions. It is designed for Ethernet transport (Layer 2) according to the MEF CE 2.0 specification providing:

- High bandwidth utilization

- Low latency, necessary for IPTV, IP telephony, video on demand and mobile backhaul

- Support for both point-to-point and multicast applications

- Layer 2 performance management (utilization, latency, jitter, packet loss)

- Efficient synchronization (full SyncE support)

- Excellent protection and resilience (link aggregation, ERPS, MPLS-TP Linear Protection)

- Predictable performance, making the network easy to maintain and troubleshoot

- Low power consumption

3.3.3 The flexible optical network brings scalability

Given the rapid demand for more bandwidth from consumers and enterprises, the aggregation network requires flexibility to accommodate future growth. Transmode’s Native Packet Optical 2.0 architecture integrates the capacity of the optical WDM network seamlessly with the Layer 2 switching functions of a Carrier Ethernet network. It is easy to upgrade the capacity of the transmission links between the nodes and the switching capacity as needed, following a pay-as-you-grow model.

For example, assume that the demand for bandwidth has been growing extra fast at one particular remote node in the aggregation network. Thanks to the packet-optical integration of the TM-Series, an additional express wavelength can easily be opened up for the high traffic node, leading its Ethernet traffic directly to the core node, without loading any intermediate switches. This has several benefits:

- Only the high traffic node and the core node need to be upgraded – there is no requirement for a fork-lift upgrade of the whole aggregation network; you pay as you grow.

- A dedicated wavelength for the additional traffic means minimal latency.

- The express wavelength can be equipped with error correction (OTN-FEC) to cater for the longer distance now being passed.

- Protection schemes can be implemented in the optical layer to improve resilience.
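The latency benefit of the express wavelength can be put in rough numbers. The per-hop switch delay and the distance below are illustrative assumptions, not measured TM-Series figures; only the ~5 µs/km fiber propagation delay is a standard rule of thumb.

```python
# Rough comparison of end-to-end latency for traffic hopping through
# intermediate Layer 2 switches versus an express wavelength straight
# to the core node. Per-hop delay and distance are assumptions.

FIBER_US_PER_KM = 5.0      # ~5 microseconds per km of fiber (rule of thumb)
SWITCH_HOP_US = 20.0       # assumed store-and-forward delay per switch

def latency_us(distance_km, intermediate_switches):
    return distance_km * FIBER_US_PER_KM + intermediate_switches * SWITCH_HOP_US

hop_by_hop = latency_us(80, intermediate_switches=6)   # ring path via 6 switches
express    = latency_us(80, intermediate_switches=0)   # dedicated wavelength
print(hop_by_hop, express)   # 520.0 400.0
```

The propagation delay is fixed by the fiber, so every intermediate switch removed is a direct reduction in end-to-end latency, and queuing jitter disappears along with the switches.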

Thanks to the integrated packet-optical functionality of the TM-Series it is also possible to complement the EMXP with other types of muxponders, should there be a need to transport legacy TDM traffic over yet another wavelength towards the core node, for example.


Figure 41. An express wavelength is added to the aggregation network to cater for increased bandwidth demand from a remote node.

3.4 Mobile backhaul

3.4.1 3G and 4G/LTE place new requirements on mobile backhaul

The evolution of radio technologies for the air interface between user equipment and the base station has meant a revolution in terms of data access rates for mobile devices. The once single-line display of a mobile phone has become the versatile 1920 × 1080 pixel screen of the intelligent tablet. In parallel, the data rates per user have gone from a few kbit/s to more than 100 Mbit/s. Simultaneously, new radio spectrum made available in combination with capacity and coverage requirements has led to a smaller cell size, with many more radio cells than just a few years ago.

The growth in data rates and the higher number of cells place ever increasing demands for capacity and reach on the mobile backhaul networks, whether they are implemented with microwave links, copper wires or fiber. A primary challenge for the mobile operator is to dimension its backhaul network to cope with the upcoming traffic demand, avoiding making it a bottleneck for its mobile services and avoiding end user frustration over long response times and unpredictable performance.


Figure 42. The evolution of mobile devices places new requirements on the backhaul networks.

2G mobile networks have traditionally relied on PDH/SDH technologies and Layer 1 multiplexing for backhaul from the mobile base stations to the core network. With the introduction of IP based 3G and 4G/LTE mobile networks, the demand for a Layer 2, i.e. an Ethernet based, aggregation network has become evident for mobile operators. Moreover, LTE introduces new requirements on how to synchronize and maintain the network, and the mobile operators will require transport services with more stringent service level agreements (SLAs). For a regional utility carrier or local operator sitting on large assets of fiber, this constitutes an important business opportunity if a network that allows the introduction of such services can be built in a cost effective manner.

3.4.2 A backhaul network optimized for 3G and 4G/LTE

The most attractive backhaul offering for a mobile operator upgrading to IP-based 3G and 4G/LTE services builds on established standards and is optimized for packet mode transport. Using Ethernet as the packet switching technology makes it possible to benefit from the cost rationalization of Ethernet equipment, create tailor-made backhaul offerings and make maximum use of installed fiber.


Figure 43. Carrier Ethernet 2.0 (CE 2.0) services for mobile backhaul.

Implementing Ethernet services, rather than IP/MPLS based services with similar characteristics, can make the necessary investments significantly lower, especially if the Ethernet services can be implemented on the already deployed WDM platform such as the TM-Series. The Ethernet products in Transmode’s TM-Series are fully MEF CE 2.0 certified and have the functionality that makes it possible to implement all of the CE 2.0 service types shown in Figure 43.

Furthermore, a backhaul network implemented with the TM-Series has the following characteristics of immediate importance for mobile operators:

• Minimal latency and jitter. Latency and jitter have a cumulative effect as traffic passes through consecutive nodes in access rings, aggregation networks and the core. Latency must be low, predictable and stable, not varying with load, throughput or packet replication.

• Efficient mechanisms for synchronization. Synchronization is critical in mobile backhaul networks as users move from cell site to cell site and expect uninterrupted service. Proper performance of the packet-optical network is ensured through technologies such as SyncE and a low latency design philosophy.

• Resilience. The network must include protection mechanisms that guarantee carrier class resilience against outages. A TM-Series network supports protection mechanisms such as link aggregation, ERPS and MPLS-TP linear protection to ensure maximum reliability.

• Efficient tools to manage the operational complexity. To be successful in the long run it is also necessary to be in control and to operate the network in an efficient way. The Enlighten multi-layer management suite has service aware and integrated Layer 1 and Layer 2 management functions that make the network easy to operate. Service creation can otherwise be difficult for complex networks, but with Enlighten, setting up integrated Layer 1 services, MEF-based Layer 2 services and MPLS-TP services is easy. The Enlighten Portal even allows the mobile operator to monitor the performance of its subscribed services in real time.

3.5 Switched video transport

3.5.1 Streaming 3D and HD video to the home

A modern CATV operator offers a wide range of services over the transport infrastructure once built for TV distribution. Services encompass both entertainment and communications for residential users as well as connectivity and value added services to enterprises. Some services, like LAN interconnect, may be created fully in-house by the CATV operator while others rely upon external networks, data centers and media hubs.

A major trend in CATV networks is the rapid growth of traffic per end-user and in the overall network. The advent of streaming HD and 3D video services and cloud computing has made IP traffic grow some 30 – 50% per year. Network operators also feel the need to radically lower the cost of their transport networks if they are to be profitable in the future, and that requires reducing costs without compromising on scalability, efficiency, manageability, or the ability to offer differentiated services.

To meet these demands, network providers are going through a conversion from analog to digital distribution technologies and packet-optical networks. Packet-optical networks have been the industry consensus answer to meet future needs, with the convergence of legacy and next generation services onto Ethernet.

Figure 44. New access nodes (CCAP) in CATV networks require robust, high capacity aggregation networks.


Essential for the transformation of CATV networks today is the introduction of DOCSIS 3.0 14 compatible equipment and the development of a new, super-dense, power- and space-saving access node architecture. The new architecture, referred to as the Converged Cable Access Platform (CCAP), combines the Edge QAM and Cable Modem Termination System (CMTS) into one node. Such upcoming super nodes will require significant amounts of bandwidth in the aggregation/distribution network; where traditional CMTS and Edge QAM nodes could be served with 1 Gbit/s, capacities of 10 – 100 Gbit/s per CCAP will be required.

3.5.2 Transmode’s solution for switched video transport

Transmode’s Switched Video Transport solution is based on the Native Packet Optical architecture. The integration of selected functionality into the transport network equipment enables cost efficient capacity increases and the service differentiation capabilities typically found in Ethernet networks.

14 Data Over Cable Service Interface Specification. DOCSIS 3.0 has been ratified as ITU-T Recommendation J.222.

Transmode’s EMXP family of Ethernet Muxponders gives operators features purpose-built for demarcation, aggregation and transport of Ethernet services. By enabling cost efficient increases of metro network capacity, the EMXPs use packet-optical technology to deliver the best transport network economics.

Transmode’s Switched Video Transport implements IGMPv3 features to listen to the IGMP 15 network traffic between Edge QAMs and multicast routers. By participating in the IGMP conversation and switching the individual channels, the Transmode solution radically reduces the cost of the network. The IGMPv3 features, IGMP Snooping and Source-Specific Multicast, allow network traffic to be highly optimized, as a destination only receives the traffic intended for it.
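The snooping idea described above can be sketched in a few lines of Python. This is not Transmode’s implementation – just a simplified illustration, with invented names, of how a Layer 2 device that participates in the IGMP conversation forwards a multicast stream only to the ports that have joined it:

```python
# Simplified sketch of IGMPv3 snooping state in a Layer 2 switch:
# multicast traffic for a (group, source) pair is forwarded only to
# ports whose attached hosts have reported membership.

class IgmpSnooper:
    def __init__(self):
        # (group, source) -> set of member ports; source=None means "any source"
        self.members = {}

    def report(self, port, group, source=None):
        """Host on `port` joins `group` (source-specific if `source` is given)."""
        self.members.setdefault((group, source), set()).add(port)

    def leave(self, port, group, source=None):
        ports = self.members.get((group, source))
        if ports:
            ports.discard(port)
            if not ports:
                del self.members[(group, source)]

    def egress_ports(self, group, source):
        """Ports that should receive a frame for (source, group)."""
        return (self.members.get((group, source), set())
                | self.members.get((group, None), set()))

snoop = IgmpSnooper()
snoop.report(port=3, group="232.1.1.1", source="10.0.0.5")   # SSM join
snoop.report(port=7, group="232.1.1.1")                      # any-source join
print(sorted(snoop.egress_ports("232.1.1.1", "10.0.0.5")))   # -> [3, 7]
print(sorted(snoop.egress_ports("232.1.1.1", "10.0.0.9")))   # -> [7]
```

Without snooping, every port would receive every channel; with it, a destination only receives the traffic intended for it, which is what makes the bandwidth savings possible.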


Figure 45. An overview of Transmode’s Switched Video Transport solution, based on a packet-optical aggregation network.

15 Internet Group Management Protocol.

3.6 Data center interconnect and cloud computing

Data centers are the technology epicenter of many businesses today and the connectivity inside and between data centers is pivotal to business agility. Whether it is moving an application or a set of users between sites, or invoking a disaster recovery plan, high capacity flexible connectivity is crucial to businesses that run multi-tenant data center estates or that are moving towards cloud computing for their own business use.

Optical networks have long been the prime medium for interconnecting data center sites distributed over a geographical metro or regional area. A prime challenge has always been protecting the organization from physical breaks in the fiber infrastructure; hence the move to diversity, tri-versity and even quad-versity between data center locations.


Figure 46. Interconnection of distributed data centers via a wide area packet-optical network. The packet-optical network combines ultra-high capacity, transparent transport of Ethernet and SAN traffic at Layer 1 with the transport of other Ethernet traffic at Layer 2.

With Storage Area Networks (SAN) and Ethernet fabric technology inside the centers, there are demands to support lossless SAN and Ethernet wide area interconnectivity over high capacity, low jitter optical links. Transmode’s 80 channel DWDM systems support transparent 10G transport in addition to 40G and 100G signaling rates, future-proofing investments.

For distributed data center organizations, Transmode’s packet-optical technology enables a pay-as-you-grow approach: start with two sites and as little as four wavelengths, and add nodes as demand arises without service interruption. If SLA management is required for services, Transmode’s Ethernet Demarcation Units can be deployed to provide assurance on latency, jitter, delay, packet loss, uptime and throughput.

The TM-Series and Enlighten are a perfect fit for data center requirements, with an ultimate reach of up to 1500 km and a mature optical management platform that integrates Layer 1 and Layer 2 transport. Data center interconnect with TM-Series nodes ensures that cloud, business continuity and low latency applications can be delivered at the highest speed and managed under a single service oriented platform, ultimately reducing the complexity and cost of the network.

4. Ethernet and Layer 2 technologies

4.1 Chapter summary

Packet switching may come in many shapes, but in the context of packet-optical networks for access and metro/regional applications, packet switching is almost synonymous with Ethernet switching and the use of the Ethernet family of protocols. This chapter focuses on the characteristics of Ethernet, and especially on how the original connectionless LAN protocols for Ethernet have been augmented to make Ethernet a connection-oriented technology suitable for use in wide area transport networks, i.e. to make Ethernet into a Carrier Ethernet Network. Attention is also paid to how to handle synchronization and how to create resilience in Ethernet networks.

This chapter is primarily intended as a tutorial and source of reference for those interested in general Ethernet and Layer 2 technologies used for wide area networking. More detailed information about Carrier Ethernet can also be found at the Metro Ethernet Forum website.

4.2 Ethernet basics

Ethernet is a family of protocols and networking technologies originally designed for local area networks (LANs) in the 1980s but now also widely used for other topologies and distances. Standardized by the IEEE 16 in the IEEE 802.n family of standards, Ethernet has largely replaced competing wired LAN technologies and is today the dominant link layer protocol in data networks.

Ethernet can be used in bus, star and mesh topologies and over a variety of physical media, including coaxial cable, twisted pair copper cable, wireless media, and optical fiber. Typical Ethernet data rates today are 100 Mbit/s (Fast Ethernet), 1 Gbit/s (Gigabit Ethernet or GbE) and 10 Gbit/s (10-Gigabit Ethernet or 10 GbE). Standards for 40 Gbit/s and 100 Gbit/s Ethernet were approved in 2010 and standards for Terabit/s Ethernet are being developed.

4.2.1 Ethernet mode of operation

Systems communicating over Ethernet divide a stream of data into individual packets called Ethernet frames. Each frame contains a source and a destination address and an error-checking code so that damaged data can be detected and re-transmitted.


Figure 47. The basic Ethernet frame (FCS: Frame Check Sequence used for error control). 17.

16 Institute of Electrical and Electronic Engineers.

17 For Gigabit Ethernet some vendors provide equipment that support a jumbo frame option, where frames can have a data payload of up to 9 000 bytes.

18 Media Access Control.
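The frame layout of Figure 47 can be illustrated with a few lines of Python. This is only a sketch: the FCS is a CRC-32 computed over the rest of the frame (the real wire format’s bit-ordering details are glossed over), and the helper names are invented for illustration:

```python
# Illustrative construction of a basic Ethernet frame as in Figure 47:
# destination MAC, source MAC, EtherType, payload, and a CRC-32 FCS.
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    if not (46 <= len(payload) <= 1500):
        raise ValueError("standard payload is 46-1500 bytes")
    header = dst + src + struct.pack("!H", ethertype)
    # FCS covers everything from destination address to end of payload
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return header + payload + fcs

def fcs_ok(frame: bytes) -> bool:
    """Receiver-side check: recompute CRC-32 over everything but the FCS."""
    return struct.pack("<I", zlib.crc32(frame[:-4])) == frame[-4:]

frame = build_frame(b"\xff" * 6, b"\x00\x11\x22\x33\x44\x55", 0x0800,
                    b"\x00" * 46)                    # minimum-size payload
print(len(frame), fcs_ok(frame))                     # 64 True
```

A receiver that finds a mismatching FCS discards the frame, and the damaged data is re-transmitted by a higher layer protocol.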

A basic principle of the original Ethernet standard is that destination and source addresses refer to unique physical ports attached to the transmission medium, the MAC addresses which are permanently written into the hardware of every Ethernet Network Interface Card (NIC). The Ethernet protocol is concerned with the addressing, error checking and transport of frames across a physical transmission link and is referred to as a link layer or Layer 2 protocol. 19

In shared Ethernet, the earliest mode of Ethernet operation, frames were broadcast to every possible receiver in a broadcast domain and a mechanism called Carrier Sense Multiple Access with Collision Detection (CSMA/CD) was used to avoid collisions on the transmission medium. 20

In the 1990s the IEEE 802.3x standards for Ethernet defined full duplex operation between a pair of Ethernet stations, i.e. simultaneous transmission and reception of frames over a twisted copper pair or fiber pair. At the same time a flow control mechanism called the MAC control protocol was introduced. If traffic gets too heavy, the control protocol can pause the flow of frames for a brief time period.

Today practically all LAN and every WAN Ethernet are based on switched Ethernet. In switched Ethernet all Ethernet stations have their own, individual, full duplex connection to a central switch (sometimes called a multiport bridge). The switch has a forwarding table which matches Ethernet stations’ MAC addresses with a corresponding switch port, and sends the frame to the correct destination.

Switched Ethernet removes the media contention and capacity problem inherent to shared Ethernet, and reduces it to a contention problem within the switch, which needs to buffer frames from multiple users trying to access the same destination simultaneously. Another advantage of switched Ethernet is that switches can be made non-blocking, allowing for simultaneous traffic between several ports. Furthermore, each switch port now provides the full bandwidth of the Ethernet medium to the connected station.

19 The number 2 refers to the second layer in the standardized ISO Open Systems Interconnection (OSI) reference model for data communications.

20 In CSMA/CD, the devices (called stations) can broadcast data over the medium whenever it is idle. If more than one station transmits at the same time and the signals collide, the transmission is stopped by the involved stations, which will then wait for some random time and then restart transmission.
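The forwarding-table behavior of a switched Ethernet can be sketched as follows – a minimal, invented illustration of MAC learning, not any vendor’s implementation:

```python
# Minimal sketch of a learning Ethernet switch: learn the source MAC
# per ingress port, forward to the known port, flood when the
# destination is unknown or a broadcast.

class LearningSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                        # MAC address -> port

    def handle(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port          # learn (or refresh) the source
        out = self.table.get(dst_mac)
        if out is None or dst_mac == "ff:ff:ff:ff:ff:ff":
            return self.ports - {in_port}      # flood: all ports except ingress
        return {out}

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.handle(1, "aa", "bb"))   # "bb" unknown -> flood to {2, 3, 4}
print(sw.handle(2, "bb", "aa"))   # "aa" was learned on port 1 -> {1}
```

In real switches the table entries also age out, so that stations can move between ports without stale entries black-holing their traffic.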

Basic Ethernet switches do not modify the Ethernet frames as they pass through and are generally much simpler than Layer 3 IP routers because they operate at the link layer and do not run complex routing protocols. Switches may also be both cheaper and faster than IP routers because the switching function can be implemented entirely in hardware, rather than in software running on an expensive high-performance processor. Finally, switches are simpler to manage than IP routers, since configuration does not involve the same complexity as with routers.


Figure 48. A meshed and switched Ethernet.

In a meshed Ethernet, there are several paths between nodes, and frames could be forwarded in infinite loops within the network if no countermeasures were taken. There must be one and only one open route between each node of the network, and all other interconnecting ports of the switches must be blocked. Such a “one route” network topology is called a spanning tree. The Spanning Tree Protocol (STP) and the Rapid Spanning Tree Protocol (RSTP) in the Ethernet standard are distributed algorithms that can be run by the switches to form a spanning tree.
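The result that STP converges to can be illustrated centrally. The sketch below is not the distributed BPDU exchange that real STP uses; it just shows the outcome the protocol computes: elect the root bridge (lowest bridge ID) and keep only a shortest-path tree toward it, logically blocking every other inter-switch link:

```python
# Centralized illustration of what STP computes: elect the root bridge
# (lowest bridge ID) and keep only the shortest-path tree toward it;
# all other inter-switch links are logically blocked.
from collections import deque

def spanning_tree(bridges, links):
    root = min(bridges)                        # lowest bridge ID wins
    parent, seen = {}, {root}
    queue = deque([root])
    while queue:                               # BFS = shortest path in hops
        node = queue.popleft()
        for a, b in links:
            for here, there in ((a, b), (b, a)):
                if here == node and there not in seen:
                    seen.add(there)
                    parent[there] = here
                    queue.append(there)
    active = {frozenset((child, par)) for child, par in parent.items()}
    blocked = [l for l in links if frozenset(l) not in active]
    return root, blocked

links = [("B1", "B2"), ("B2", "B3"), ("B1", "B3")]   # a loop of three switches
root, blocked = spanning_tree({"B1", "B2", "B3"}, links)
print(root, blocked)    # B1 is root; the B2-B3 link is blocked to break the loop
```

Breaking the B2–B3 link leaves exactly one open route between every pair of switches, which is the “one route” property the text describes.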

In wide area applications of Ethernet, e.g. Carrier Ethernet, other mechanisms, such as Ethernet Ring Protection Switching (ERPS) and manual configuration of virtual connections, are used to ensure that there is only one path between the ingress node and egress node of the network. More information about this can be found in section 4.4.2.

4.2.2 Virtual LANs

The task of the Ethernet switch is to move the frame from one LAN segment to another based on the destination MAC address. If the address is unknown the frame is flooded to all the switch ports except the incoming one. This creates one single broadcast domain per switched network, which is a potential problem in larger networks since broadcast frames are propagated and replicated throughout the entire network. The problem of large broadcast domains as well as the security problem of having all traffic available at every Ethernet station is overcome by the introduction of virtual LANs.

A virtual LAN (VLAN) is a logical group of Ethernet stations that appear to one another as if they were on the same physical LAN segment, even though they may be spread across a large network. Each Ethernet station on a particular VLAN will only hear broadcast traffic from the other members of the same VLAN. Using MAC address based VLANs makes it possible to let the VLAN span multiple switches. Interconnection between VLANs may then be provided by Layer 3 devices such as IP routers.


Figure 49. Three virtual LANs (green, blue and red) each with their own individual broadcast domains and interconnected by a router.

All Ethernet frames in a VLAN have a distinct identifier, called the VLAN identifier (VID), located in a designated VLAN tag field, specified by the IEEE 802.1Q/p standard and inserted in the frame by the Ethernet switch. The full VLAN tag field is 4 bytes long and contains a Tag Protocol Identifier (TPID) and a Priority Code Point (PCP), which indicates the frame priority level. 12 bits of the VLAN tag are available for VLAN identification, but two values are reserved, making a maximum of 4094 VLANs possible in one single switched network using the basic standard.
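The 4-byte tag can be packed bit by bit as a short worked example. The sketch assumes the standard field layout (16-bit TPID 0x8100, 3-bit PCP, 1-bit drop-eligible indicator, 12-bit VID); the function name is invented for illustration:

```python
# The 4-byte IEEE 802.1Q tag, packed bit by bit:
# 16-bit TPID (0x8100), 3-bit PCP, 1-bit DEI, 12-bit VID.
import struct

def vlan_tag(vid: int, pcp: int = 0, dei: int = 0) -> bytes:
    if not 1 <= vid <= 4094:                 # 0 and 4095 are reserved
        raise ValueError("VID must be 1-4094")
    tci = (pcp << 13) | (dei << 12) | vid    # Tag Control Information
    return struct.pack("!HH", 0x8100, tci)

tag = vlan_tag(vid=100, pcp=5)
print(tag.hex())    # 8100a064 -> TPID 0x8100, PCP 5, VID 100
```

The two reserved VID values (0 and 4095) are what reduce the 4096 possible 12-bit values to the 4094 usable VLANs mentioned above.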


Figure 50. IEEE 802.1Q/p encapsulation of the VLAN tag.

VLANs can be used to implement virtual private networks (VPNs) and VLAN frames include priority fields that may be used to create services with different priorities, i.e. qualities. This feature is of particular interest in wide area applications, where VLANs can be used to separate and classify traffic from different sources and users and to direct it along different paths of the wide area network.

4.2.3 Ethernet physical media (PHY)

The Ethernet standard comprises a data link layer and an Ethernet physical media (PHY) part, the latter being specific for the transmission media and data rate employed. When Ethernet is transported over a WDM wide area network, the Gigabit Ethernet and the 10-Gigabit Ethernet PHY standards are of most interest since these are the types of Ethernet deployed in metropolitan and other wide area networks. The Transmode TM-Series includes transponders and muxponders that can be equipped with transceivers supporting Fast Ethernet, Gigabit Ethernet, 10-Gigabit Ethernet and 100-Gigabit Ethernet.

4.2.3.1 Fast Ethernet (FE) physical layer

Fast Ethernet, i.e. Ethernet at 100 Mbit/s, has two predominant physical formats: 100BASE-TX that runs over two wire unshielded twisted copper pairs (UTP) and 100BASE-FX that runs over optical fiber. 100BASE-FX uses 1300 nm light transmitted via two strands of optical fiber, one for receive and the other for transmit. Maximum length is 2 km over multi-mode optical fiber.

The 100 in the media type designation refers to the transmission speed of 100 Mbit/s. The “BASE” refers to baseband signaling, which means that only Ethernet signals are carried on the medium. The TX and FX refer to the physical medium that carries the signal.

The TM-Series traffic units support both the electrical 100BASE-TX and the optical 100BASE-FX client interfaces.


4.2.3.2 Gigabit Ethernet (GbE) physical layer

Gigabit Ethernet (GbE) can be transmitted over fiber cables and over shielded copper cables. It can also be transmitted over unshielded twisted pairs of copper. The transmission is set up to operate in full duplex (most common) or half duplex mode. The standard defines a physical media dependent (PMD) sublayer which specifies the transceiver for the physical medium in use. There are three types of PMDs for GbE: short range, long range and shielded copper. The short range PMD uses 850 nm light with a reach of 220 – 250 m on multimode fiber. The long range PMD uses 1310 nm light with a reach of 550 m on multimode fiber and 5 km on single mode fiber. The PMD for shielded copper reaches only 25 m. For unshielded copper, which is common in many office installations, multiple twisted pairs are used to send multilevel signals in a way that extends the reach to 100 m.

Ethernet in the First Mile later added 1000BASE-LX10 and -BX10.

Name            Medium                                    Specified distance
1000BASE-CX     Twinaxial cabling                         25 meters
1000BASE-SX     Multi-mode fiber                          220 to 550 meters, dependent
                                                          on fiber diameter and bandwidth
1000BASE-LX     Multi-mode fiber                          550 meters
1000BASE-LX10   Single-mode fiber using 1,310 nm          10 km
                wavelength
1000BASE-BX10   Single-mode fiber, over single-strand     10 km
                fiber: 1,490 nm downstream,
                1,310 nm upstream
1000BASE-TX     Twisted-pair cabling (Cat-6, Cat-7)       100 meters

Figure 51. Gigabit Ethernet physical media. Source: Wikipedia.


The TM-Series traffic units support both the electrical (1000BASE-T) and the optical (1000BASE-L) variants for single and multimode fiber in the GbE physical layer when interfacing to client systems.

4.2.3.3 10-Gigabit Ethernet (10GbE) physical layer

10-Gigabit Ethernet (10GbE) can be transmitted over fiber optics and copper cables, but copper cables are only used over short distances such as interconnections within a chassis.

For fiber optic cables, the physical layer can be implemented in two main variants: LAN PHY and WAN PHY, optimized for use in local area and wide area networks respectively. Both the LAN PHY and the WAN PHY operate over a short range, a long range, extended range or long reach PMD. Short range uses 850 nm over multimode fibers up to 300 m; long range uses 1310 nm and reaches 260 m on multimode and 10 km on single mode fiber. The extended reach PMD uses 1550 nm and has a maximum reach of 40 km on single mode fiber.

4.3 Synchronization and circuit emulation services over Ethernet

4.3.1 Synchronous and asynchronous transport

Packet switching technologies, including Ethernet, are inherently asynchronous, i.e. incoming frames are received at one data rate (one rate of bit/s), buffered and multiplexed with other frames over intermediate links with higher data rates and delivered at yet another rate to the receiver. There is no fixed relationship between the timing, phase or frequency of the incoming bit stream and the outgoing bit stream from the network.

This is quite different from the principles of time division multiplexed (TDM) transmission technologies such as PDH, SDH and SONET 21 traditionally used in wide area transport networks. In TDM each stream of information to be transferred over the network is allocated a specific timeslot in the transmission system, a procedure that requires careful frequency and phase synchronization of all intermediate network nodes handling the flow of passing bits.

Today, services such as circuit switched telephony and the storage area networks in data centers are still based on TDM technologies, but increasingly this TDM traffic needs to be transported over packet-optical networks. It becomes necessary to emulate a traditional wireline circuit over an Ethernet network and to maintain synchronization between the ingress port and egress port of the Ethernet wide area network.

Somewhere in every TDM network there is an extremely accurate frequency source, a primary reference clock (PRC), from which all other TDM clocks in the network directly or indirectly derive their timing, i.e. frequency. Clocks derived in this manner are said to be traceable to a PRC. The primary clock signal is distributed “downwards” through the network in order to synchronize all necessary devices, which are normally grouped into separate stratum clock levels, depending on how “far” from the original PRC the device is located.

21 Plesiochronous Digital Hierarchy, Synchronous Digital Hierarchy and Synchronous Optical Networking respectively.


Figure 52. Stratum clock levels in a TDM network for circuit switched telephony. Clock signals may be distributed over several paths to ensure redundancy.

The migration from such a synchronous TDM network to Ethernet-based asynchronous transport introduces new challenges. When a packet network is to support TDM-based services, it must provide correct timing at the traffic interfaces. The transport of the TDM signals through an Ethernet requires that the signals at the output of the packet network comply with the TDM timing requirements for the attached TDM equipment to interwork. Such an adaptation of TDM signals to be transported by a packet network is called a circuit emulation service and the entity performing the adaptation is referred to as an interworking function (IWF).


Figure 53. Emulation of a TDM circuit over a packet mode network. The Interworking Functions provide the traffic interfaces for the TDM circuits.
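A central piece of the egress-side interworking function is a jitter buffer: TDM payload arrives in packets with varying delay, and the IWF must re-order them and play them out at a constant rate. The sketch below is a simplified, invented illustration of that idea, not a description of any specific IWF:

```python
# Conceptual sketch of the egress side of a circuit emulation service:
# packets carrying TDM payload are re-ordered by sequence number in a
# jitter buffer and played out once per TDM frame period.
import heapq

class JitterBuffer:
    def __init__(self, depth=3):
        self.depth = depth        # packets buffered before play-out starts
        self.heap = []            # (sequence number, payload)
        self.next_seq = None

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop(self):
        """Called once per TDM frame period; returns payload or filler."""
        if self.next_seq is None:
            if len(self.heap) < self.depth:
                return None                   # still filling the buffer
            self.next_seq = self.heap[0][0]
        if self.heap and self.heap[0][0] == self.next_seq:
            _, payload = heapq.heappop(self.heap)
            self.next_seq += 1
            return payload
        self.next_seq += 1
        return b"\x00"                        # lost/late packet: insert filler

buf = JitterBuffer(depth=2)
for seq, data in [(2, b"C"), (1, b"B"), (0, b"A")]:   # out-of-order arrival
    buf.push(seq, data)
print([buf.pop() for _ in range(3)])    # [b'A', b'B', b'C']
```

The buffer depth trades latency against robustness: a deeper buffer absorbs more delay variation but adds delay to the emulated circuit.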

Circuit emulation is closely related to the concept of pseudowires. A pseudowire (PW) is an emulation of a point-to-point connection over a packet switching network. The pseudowire emulates the operation of a “transparent wire” carrying the service, but it must be realized that this emulation will rarely be perfect. The service being carried by the pseudowire may be SDH, SONET, ATM or frame relay, while the underlying packet network may be a Layer 2 network such as Ethernet, an IP network or an MPLS network. A pseudowire encapsulates incoming cells, bit streams and protocol data units (PDUs) and transports them through tunnels set up in the packet network.

In addition to the synchronization of frequencies required by TDM networks, other networks are dependent on having the exact same time in every node. Frequency is a relative entity, measured relative to a frequency standard, e.g. the PRC, with respect to jitter, wander and slip. Time represents an absolute, monotonically increasing value, generally traceable back to the rotation of the earth (day, hour, minute, second). Mechanisms to distribute absolute time in a network are significantly different from those used to distribute frequency and normally rely upon time stamps being sent between nodes. Time stamps may also in some network applications be used for the indirect generation of frequency (differential timing).

The most advanced requirements on synchronization in today’s metro networks are generated by the mobile backhaul traffic, which is dependent on frequency, phase and time synchronization. For example, for 3GPP2 base stations, including those for LTE, the following requirements are typical:

• A frequency accuracy of 0.05 ppm at the air interface

• 2.5 µs time accuracy between neighboring base stations, i.e. ± 1.25 µs difference to coordinated universal time (UTC)

4.3.2 Synchronization standards

Two main principles and related standards are available for providing synchronization across an Ethernet network:

• Synchronous Ethernet (SyncE). An ITU-T standard using the Ethernet PHY media to distribute timing (frequency). SyncE uses the PHY clock transmissions and regenerates the clock signal from the incoming bit stream, in a way similar to traditional TDM systems, by use of phase locked loop (PLL) circuitry. This approach requires SyncE support in every network node traversed by the Ethernet signal and provides only frequency and phase synchronization, not time of day synchronization.

• Precision Time Protocol (PTPv2) – IEEE 1588v2. An IEEE standard using Layer 2 embedded OAM 22 packets with the highest priority to ship clock/phase and time of day information across a packet network. Special hardware is often used to process these packets for higher accuracy, but the packets remain in standard Ethernet frames.


Figure 54. Standards for synchronization over Ethernet. SyncE uses Ethernet PHY to transfer a reference frequency to every node. IEEE 1588v2 uses Ethernet frames carrying time stamps to send current “time of day” to the nodes.

SYNCHRONOUS ETHERNET (SyncE)

With synchronous Ethernet, a master-slave architecture at the physical level is used to provide timing distribution from backbone to access nodes. A reference timing signal traceable to a PRC is injected into a backbone Ethernet switch and this signal is then distributed to the next node, which extracts timing from the incoming bit stream and synchronizes the outgoing bit stream to this rate. Timing for TDM circuits transported over the packet network can be recovered at the appropriate interworking functions (IWF) for the circuit emulation service.

The Transmode TM-Series Ethernet Muxponders, the EMXP, used as nodes in a Transmode packet-optical network, fully support the ITU-T synchronous Ethernet recommendations (G.8262/Y.1362 and others) for jitter and wander tolerances, supported frequencies, and clock specifications as specified for Synchronous Ethernet Equipment Clocks (EEC). The EMXP is also compatible with ITU-T recommendation G.823 regarding clock selection logic, possible clock quality levels, noise tolerances, noise generation and transfer limits, holdover performance etc.

22 Operations, Administration and Maintenance.

The EMXP can perform automatic synchronization source selection to improve synchronization resilience. Synchronization Status Messages (SSM) between nodes are used to provide traceability of the synchronization source. Network level SSM is defined in ITU-T recommendation G.781. The Ethernet Synchronization Messaging Channel (ESMC) is a communication channel for the SSM and is described in ITU-T recommendation G.8264.
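As an illustration of SSM-driven source selection, the sketch below ranks candidate timing sources by quality level and falls back to holdover when every source advertises "Do Not Use". The quality-level names follow G.781 conventions, but the ranking table, port names and priority tie-breaking are simplified assumptions, not the EMXP's actual selection logic.

```python
# Sketch of SSM-based synchronization source selection (a simplified
# model of the G.781 selection process; names are illustrative).

# Lower rank = better quality; QL-PRC is traceable to a primary reference.
QL_RANK = {"QL-PRC": 1, "QL-SSU-A": 2, "QL-SSU-B": 3, "QL-EEC1": 4, "QL-DNU": 99}

def select_sync_source(candidates):
    """candidates: list of (port, quality_level, priority) tuples.
    Pick the best quality level, break ties on configured priority,
    and never select a source advertising QL-DNU (Do Not Use)."""
    usable = [c for c in candidates if c[1] != "QL-DNU"]
    if not usable:
        return None  # no traceable source: enter holdover
    return min(usable, key=lambda c: (QL_RANK[c[1]], c[2]))[0]

ports = [("port1", "QL-SSU-A", 1), ("port2", "QL-PRC", 2), ("port3", "QL-DNU", 3)]
assert select_sync_source(ports) == "port2"  # best QL wins despite lower priority
```

The key property is that quality level always dominates the configured priority, and a DNU source is never selected even if it is the only candidate.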

In a Transmode packet-optical network the Synchronous Ethernet function is entirely based on the Ethernet PHY media and its circuitry in the Ethernet Muxponders. Using either of the Layer 2 traffic forwarding mechanisms described in section 3.2, MPLS-TP Label Switched Paths or Ethernet service VLANs, has no effect on timing or the ESMC. A physical Ethernet interface using MPLS-TP will forward Synchronous Ethernet in the same way as a physical Ethernet interface using an Ethernet UNI or NNI.

PRECISION TIME PROTOCOL (PTP) – IEEE 1588v2
IEEE 1588 (PTP) enables sub-microsecond synchronization of clocks by having a master clock send multicast synchronization packets containing time stamps. All IEEE 1588 receivers correct their local time on the basis of the received time stamp and an estimate of the one-way delay from transmitter to receiver.

PTP is based on IP multicasting and can be used on any network that supports multicasting. Precision is typically in the range of 100 ns to 100 µs, depending on the real-time capabilities of the end systems. The PTP standard can distribute time/phase, frequency or both. It is resilient, because a failed network node can be routed around and the time can be taken from more than one master clock. IEEE 1588v2 packets fully comply with Ethernet and IP standards and are backwards compatible with all existing Ethernet switching and IP routing equipment. There is no requirement for intermediate Ethernet switches traversed by the emulated circuit to be IEEE 1588v2 aware, as they see the timing frames as normal data.

The convergence time of the PTP protocol, i.e. the time it takes for the protocol to achieve the desired level of synchronization, depends on the quality of the underlying packet network. Although the Transmode packet-optical Ethernet Muxponders are not directly involved in the PTP/1588v2 protocol handling, the very low wander, jitter and delay within an EMXP-based packet-optical network make the PTP protocol converge extremely rapidly.
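The offset and delay arithmetic at the heart of PTP's delay request-response exchange can be sketched as follows. The four timestamps t1–t4 are the standard Sync/Delay_Req timestamps of IEEE 1588; the numeric values in the example are illustrative, and the calculation is only exact when the path delay is symmetric.

```python
# IEEE 1588 delay request-response arithmetic:
# t1: master sends Sync, t2: slave receives Sync,
# t3: slave sends Delay_Req, t4: master receives Delay_Req.
# Accuracy rests on the assumption of a symmetric path delay.
def ptp_offset_and_delay(t1, t2, t3, t4):
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    offset_from_master = ((t2 - t1) - (t4 - t3)) / 2
    return offset_from_master, mean_path_delay

# Slave clock running 5 units ahead over a symmetric 10-unit path:
offset, delay = ptp_offset_and_delay(t1=0.0, t2=15.0, t3=20.0, t4=25.0)
assert (offset, delay) == (5.0, 10.0)
```

Any asymmetry between the two directions ends up directly as an error in the computed offset, which is why the low and predictable delay of the underlying transport matters.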

DIFFERENTIAL TIMING
Timing recovery for a TDM circuit emulation service requires that the timing of the signal is similar at both ends of the packet network, i.e. at the "outside" of the IWFs. The clock of the TDM service must be preserved in such a way that the incoming service clock frequency is replicated as the outgoing service clock frequency. In network-synchronous operation the packet network operates fully synchronized using a PRC-traceable clock, but this does not necessarily preserve the timing of the external TDM service. Using differential timing, the difference between the external TDM service clock and the network reference clock is encoded and transmitted across the packet network. Differential timing makes it possible to recover the external TDM service clock at the far end of the packet network.
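A toy numeric sketch of the idea, assuming an E1 service and a shared PRC-traceable reference: real implementations encode the difference in timestamp form rather than as a raw frequency value, but the arithmetic intuition is the same.

```python
# Toy model of differential timing: both IWFs see the same PRC-traceable
# network reference, so only the service clock's deviation from that
# reference needs to cross the packet network.
REF_HZ = 2_048_000.0  # E1 nominal rate, used here as the shared reference

def encode_difference(service_hz):
    return service_hz - REF_HZ       # small deviation, sent across the network

def recover_service_clock(diff_hz):
    return REF_HZ + diff_hz          # far-end IWF re-creates the service clock

diff = encode_difference(2_048_004.3)                          # ingress IWF
assert abs(recover_service_clock(diff) - 2_048_004.3) < 1e-6   # egress IWF
```

Because only the deviation is transported, the recovered clock is as good as the common reference plus the encoded difference, independent of packet delay variation.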

More information on timing recovery and transport of legacy TDM services such as SDH and SONET over a Transmode packet-optical network can be found in section 2.6 of this book.

4.4 Ethernet protection

Wide area services, such as telephony, Internet access and video on demand, require a high level of availability; typically, unavailability is tolerated for only a few minutes per operating year. When failures occur in the network, they are not supposed to be noticed by the subscriber. The main purpose of an Automatic Protection Switching (APS) mechanism is to guarantee the availability of backup resources and ensure that switchover is achieved within milliseconds.

Protection switching can be implemented at various OSI layers of the network: the optical transmission network may include alternative fiber routes and protection switching at Layer 1, Ethernet/Layer 2 can perform protection switching, and protection mechanisms may also exist at higher OSI layers. This section deals with the Ethernet/Layer 2 protection mechanisms.

The Spanning Tree Protocol (STP) and the Rapid Spanning Tree Protocol (RSTP) of Ethernet can prevent loops and assure backup paths. However, both protocols respond too slowly to network failures and are not used in packet-optical networks 23. Instead, other mechanisms, such as Link Aggregation Groups (LAG) and Ethernet Ring Protection Switching (ERPS), are more suitable. For Layer 2 networks employing MPLS-TP label switched paths (see chapter 2), even more advanced protection schemes, such as MPLS-TP linear protection, are available.

4.4.1 Link aggregation (LAG)

Link aggregation, as defined by IEEE 802.3ad, is a method for aggregating two or more parallel physical transmission links into a Link Aggregation Group (LAG), such that a Media Access Control (MAC) client can treat the group as if it were a single link. Link aggregation is capable of increasing both the capacity and the availability of the communication channel between devices interconnected by Ethernet. Link aggregation can also provide load balancing, in which traffic is spread across several physical links in order to avoid overloading any single link.

23 Service restoration times of 30 s or more are typical for STP.


Figure 55. Link aggregation functional diagram.

Most implementations of LAG now conform to what used to be clause 43 of the IEEE 802.3-2008 Ethernet standard, informally referred to as "802.3ad". This includes the Transmode EMXP, which has been interoperability tested against several other vendors' solutions. With the Transmode EMXP it is possible to define a link aggregation group consisting of up to 8 ports. All ports in the group must have the same port speed.
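Frame distribution across LAG members is left to the implementation, as long as frames of a flow stay in order; a common approach, sketched below with illustrative inputs, is to hash address fields so that each flow consistently maps to one member link.

```python
# Flow-hash based LAG member selection. The hash inputs used here
# (MAC address strings) are an assumption for illustration; real
# devices hash various MAC/IP/port fields, but always
# deterministically, so a given flow stays on one link and in order.
import zlib

def lag_member(src_mac, dst_mac, n_links):
    flow_key = (src_mac + dst_mac).encode()
    return zlib.crc32(flow_key) % n_links   # same flow -> same member link

link = lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
assert 0 <= link < 4
assert link == lag_member("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
```

Determinism is the important property: load balancing emerges statistically across many flows, while any single flow never sees reordering.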

4.4.2 Ethernet ring protection switching (ERPS)

Ring based networks are often attractive since they can offer a simple form of redundancy in a network consisting of many nodes. A redundant path for each node is provided by just one additional link that closes two adjacent branches to form a loop. However, Ethernet as such does not allow for loops, since frames would circulate forever, so the loop has to be blocked at some point in the network. This can be accomplished by use of Ethernet Ring Protection Switching (ERPS), as specified in ITU-T recommendation G.8032.


Figure 56. How Ethernet Ring Protection Switching (ERPS) works. The solid line between nodes C and D represents a Ring Protection Link (RPL).

An Ethernet ring is made up of two or more nodes interconnected via transmission links. Each node has two links connected to its adjacent nodes. To avoid loops, one of the links in the full ring is always blocked. This link, which is blocked under normal conditions, is called the Ring Protection Link (RPL). One of the nodes connected to the RPL is assigned to be the RPL owner and is responsible for controlling the status of the link. In the example above, the link between node C and node D is the RPL and node C is the RPL owner.

When a failure is detected, the RPL owner is responsible for unblocking the RPL and opening it to traffic. A link failure can be detected by link down events or by OAM frames, e.g. 'loss of continuity'. When the failure has been detected, the ring is said to be in a 'Signal Failure' (SF) state. The node detecting the failure condition will block the traffic on the failed link and inform the other nodes of the ring.

When the nodes are informed about a ring failure, the learned MAC addresses are flushed. In the example above, the nodes of the ring will re-learn the destination MACs of the stations participating in the blue traffic, which will now flow over the unblocked RPL.
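The idle/failure sequence described above can be modeled in a few lines. The class below is a toy illustration of the state changes (block the failed span, unblock the RPL, flush MACs), not the G.8032 R-APS protocol machinery, and the link/MAC names are invented for the example.

```python
# Toy state model of the ERPS failure sequence: in the idle state only
# the RPL is blocked; on a signal failure the failed span is blocked,
# the RPL owner unblocks the RPL, and learned MAC addresses are flushed.
class Ring:
    def __init__(self, links, rpl):
        self.links = set(links)
        self.rpl = rpl
        self.blocked = {rpl}                 # idle state: only the RPL blocked
        self.mac_table = {"mac-a": "AB"}     # illustrative learned entry

    def signal_failure(self, failed_link):
        assert failed_link in self.links
        self.blocked.add(failed_link)        # detecting node blocks the span
        self.blocked.discard(self.rpl)       # RPL owner opens the RPL
        self.mac_table.clear()               # all nodes flush and re-learn

ring = Ring(links={"AB", "BC", "CD", "DA"}, rpl="CD")
ring.signal_failure("AB")
assert ring.blocked == {"AB"} and not ring.mac_table
```

Exactly one link is blocked before and after the failure, which is what keeps the ring loop-free in both states.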

The Ethernet Ring Protection Switching available with the Transmode EMXP based nodes provides sub-50ms protection for Ethernet traffic using ring topology and ensures that there are no loops formed at the Ethernet layer.

The Transmode EMXP based nodes can also support multiple rings, so that several Ethernet protection switching rings may be joined at one physical location. This makes the EMXP extremely useful for aggregation of traffic coming from multiple rings covering many sites, for example in a mobile backhaul network.

Version 2 of Ethernet Ring Protection Switching (ERPSv2) introduces several useful additions. In ERPSv1 only revertive operation was supported; ERPSv2 adds non-revertive operation to minimize unplanned traffic hits. ERPSv2 also adds 'Manual switch' and 'Force switch' operator administrative commands.

The most important addition is that ERPSv2 introduces more advanced Ethernet ring interconnection architectures (multi-ring/ladder networks) with the concept of sub-rings. This adds the ability to have different rings interconnected at two or more points, avoiding a single point of failure.

ERPSv2 also has the ability to support multiple ERP instances on a single ring. In combination with the possibility to allocate a specific set of VLANs to a ring instance, this opens a number of new deployment possibilities:

- Possible to mix with an MPLS-TP backbone VLAN

- Common spans with other rings

- Mixing protected and unprotected VLANs

4.5 Carrier Ethernet architecture and services

4.5.1 Carrier Ethernet: Ethernet as a transport service

Legacy carrier networks provide transport services with very high availability and predictable performance, using e.g. SDH or SONET multiplexing schemes. Packet switched transport networks are different, as they offer new services based on features such as asynchronous transport, statistical multiplexing and full connectivity between multiple end points. The classical "carrier grade" characteristics, such as quality of service, security and high availability, have to be realized by new mechanisms in a packet network.

On the other hand, packet switched wide area networks, especially those based on Ethernet, offer many advantages to network operators:

- Most potential users of a WAN or MAN service have an Ethernet-based LAN and want to extend that LAN to multiple sites. It makes sense for a carrier to offer Ethernet-type transport services, since customers are familiar with the protocol and their equipment already has Ethernet interfaces.

- Ethernet is the dominant Layer 2 data networking protocol, which means that economies of scale have made Ethernet switching equipment very attractive from a cost perspective. Naturally, carriers want to take advantage of the continued performance and cost evolution of Ethernet technology.

- Many non-Ethernet wide area technologies, such as TDM, ATM and frame relay, can be replaced or emulated by Ethernet alternatives, making a transport network based on Ethernet more uniform and less complex to operate than a network using multiple other technologies.

To transport Ethernet traffic efficiently in metro and regional networks, the carrier/network operator/service provider must establish an Ethernet of its own – a Carrier Ethernet Network 24 – that forwards traffic between the customer LANs "in the Ethernet way". Using Ethernet as the transport mechanism requires the addition of functions that transform the connectionless and broadcast oriented Ethernet for LAN use into a more predictable and "circuit-like" channel suitable for wide area networking.

The need for such a Carrier Ethernet had been recognized for some time. The Metro Ethernet Forum (MEF) was formed in 2001 by vendors and service providers to develop standards for ubiquitous LAN interconnect services over optical metropolitan networks. The principal concept was to bring the simplicity and cost model of Ethernet to the wide area network, while adding stability, predictability and manageability. Since then, MEF has issued a wide range of technical specifications for Carrier Ethernet equipment, specifications that are adhered to by Transmode.


Figure 57. Carrier Ethernet: A connection oriented version of Ethernet provided as a service to subscribing customers. (NID: Network Interface Device).

24 In many Metro Ethernet Forum technical specifications and some other literature the Carrier Ethernet Network is referred to as a Metro Ethernet Network (MEN), which is the same thing.

Metro Ethernet Forum defines a Carrier Ethernet Network as a set of certified network elements that are interconnected and provide Carrier Ethernet Services, locally and worldwide.

In this context it is important to remember that the term "Ethernet" is ambiguous and standardized in various ways by multiple interest groups.

- Ethernet regarded as a "point-to-point" transmission link, i.e. the physical characteristics of Ethernet framing and transmission. This is the IEEE 802.3 scope and view.

- Ethernet regarded as a packet switched network (PSN) infrastructure. This is the 802.1 (bridging) view, and also the managed Ethernet network view of ITU-T SG15/SG13.

- Ethernet regarded as a service. This is the Metro Ethernet Forum scope and view. The Carrier Ethernet Services are concerned with the user-to-user transfer of Ethernet 802.3 frames over any available physical transport layer.

A Carrier Ethernet Network is by definition a two-layer structure, consisting of a physical transport layer, which can be WDM, SDH/SONET, Ethernet physical (Ethernet PHY) or any other physical transport technology, and a pure Ethernet frame handling layer, the Ethernet MAC (ETH) layer.25 The Ethernet services offered are created "on top of" transmission technologies such as WDM optical networks. The following discussion focuses on the Carrier Ethernet Services and the Ethernet MAC layer. Details of how the Ethernet MAC layer is carried by the optical WDM layer of the Transmode TM-Series are described in chapter 2, "Packet-optical networking".

Metro Ethernet Forum has defined five main attributes of Carrier Ethernet that distinguish it from the familiar LAN-oriented Ethernet and make it suitable as a transport service offered by carriers. The five attributes are:

25 The ETH layer is sometimes referred to as the path layer.

- Standardized services. Carrier Ethernet provides four standardized service types (E-Line, E-LAN, E-Tree and E-Access) that enable transparent, private line, virtual private line and multipoint-to-multipoint connectivity over the wide area network. The services are provided independently of the underlying transport protocols and media used, with a wide choice and granularity of bandwidth and quality of service options. The services require no changes to customer LAN equipment or networks, and accommodate existing network connectivity such as time-sensitive TDM traffic and signaling, while being delivered over a single Ethernet connection between network and customer.

- Scalability. The scale of a customer LAN and the network of a service provider are fundamentally different in terms of geographical reach, number of users (end points) and bandwidth. Carrier Ethernet is scalable in all those dimensions, while allowing the service provider to use various underlying transport technologies to achieve the best total economy.

- Reliability. Carrier Ethernet is resilient and reliable. Protection mechanisms are available to provide end-to-end and individual link protection. The speed of recovery from failures is comparable to that of SDH/SONET networks, or better.

- Quality of service. Carrier Ethernet supports the delivery of critical applications that are expected to meet high performance levels. The performance parameters of Carrier Ethernet are quantifiable and measurable, so that they can be included in a Service Level Agreement (SLA) for voice, video and data over converged business and residential networks.

- Service management. Carrier Ethernet service providers are expected to manage large numbers of customers and their multiple services, spanning wide geographical areas. Carrier Ethernet includes advanced capabilities for provisioning, maintaining and upgrading the Ethernet services.


Figure 58. Carrier Ethernet attributes as defined by Metro Ethernet Forum (MEF).

4.5.2 The Carrier Ethernet architecture and terminology

A service provided by a Carrier Ethernet Network starts at one User Network Interface (UNI) and ends at another UNI. The UNI is the point where the service provider accepts and delivers Ethernet frames, i.e. a dedicated, physical demarcation point between the responsibility of the service provider and the responsibility of the subscriber. The attached Customer Equipment (CE) can be e.g. a router, a switch or a computer system, and the physical medium of the UNI can be copper, coax or fiber operating at 10 Mbit/s, 100 Mbit/s, 1 Gbit/s or 10 Gbit/s according to the IEEE 802.3 Ethernet PHY/MAC protocol.

The UNI functions are divided between the Customer Equipment (CE) and the provider edge equipment as the function sets UNI-C and UNI-N, respectively. Sometimes the customer equipment does not support all the UNI-C functions; in such cases a Network Interface Device (NID) or an Ethernet Demarcation Unit (EDU) belonging to the Carrier Ethernet Network is located at the customer site and acts as the actual physical demarcation point of the service. Carrier Ethernet demarcation is a key element in Carrier Ethernet networks, as it enables service providers to extend their control over the entire service path, starting and ending at the customer hand-off points.

The association between two or more UNIs via the Carrier Ethernet Network is referred to as an Ethernet Virtual Connection (EVC). In the Carrier Ethernet world, this association is the equivalent of a "circuit", and it is the Ethernet Virtual Connection that is assigned the various characteristics – attributes – that a customer subscribes to.


Figure 59. Carrier Ethernet basic concepts and terminology.

Sometimes an Ethernet Virtual Connection passes through the networks of more than one service provider. The interface between two such service providers is referred to as an External Network to Network Interface (ENNI).

The Ethernet Virtual Connection (EVC), i.e. the association between two or more UNIs, performs two basic functions:

- Connects two or more subscriber sites (UNIs), enabling the transfer of Ethernet service frames between them.

- Prevents data transfer between subscriber sites that are not part of the same EVC. This capability enables an EVC to provide data privacy and security similar to frame relay or ATM Permanent Virtual Circuits (PVCs).

The Carrier Ethernet specifications allow for three types of Ethernet Virtual Connections: point-to-point EVC, multipoint-to-multipoint EVC and rooted multipoint (point-to-multipoint) EVC, which are used to create the Carrier Ethernet services.


Figure 60. Ethernet Virtual Connection types.

4.5.3 Carrier Ethernet 2.0 Services

In the Carrier Ethernet Network, data is transported across point-to-point, point-to-multipoint and multipoint-to-multipoint EVCs according to the attributes and definitions of a set of well-defined Ethernet service types which provide transparent data transport between the UNIs. The four MEF-defined service types are E-Line, E-LAN, E-Tree and E-Access.

A MEF Ethernet Service consists of an Ethernet service type associated with one or more bandwidth profiles and supporting one or more Classes of Service (CoS). A service also defines the transparency to Layer 2 control protocols and how they should be handled.

Two variants of Ethernet services are defined for each of the four main Ethernet service types, differentiated by the method for service identification used at the UNIs. Services using port-based UNIs, i.e. where there is only one UNI and EVC per physical port of the provider edge device, are referred to as "Private", while services using UNIs that are VLAN-based and multiplexed over the same physical interface are referred to as "Virtual Private". For example, an E-Line service that is port based is referred to as an Ethernet Private Line (EPL).

Ethernet Service Type                | Port-based (All-to-one bundling) | VLAN-based (Service multiplexed)
E-Line (Point-to-Point EVC)          | Ethernet Private Line (EPL)      | Ethernet Virtual Private Line (EVPL)
E-LAN (Multipoint-to-Multipoint EVC) | Ethernet Private LAN (EP-LAN)    | Ethernet Virtual Private LAN (EVP-LAN)
E-Tree (Rooted Multipoint EVC)       | Ethernet Private Tree (EP-Tree)  | Ethernet Virtual Private Tree (EVP-Tree)
E-Access (Point-to-Point EVC)        | Access EPL                       | Access EVPL

Figure 61. Ethernet Services in MEF Carrier Ethernet 2.0 and their relation to Ethernet Virtual Connections.

The four Carrier Ethernet 2.0 service types E-Line, E-LAN, E-Tree and E-Access target different applications, as indicated in Figure 62. E-Line services are typically used to replace private TDM lines and frame relay VPNs, as well as for Internet access. The other three service types are used in more specialized applications, such as mobile backhaul and where several Carrier Ethernet service providers cooperate.

Figure 62. Carrier Ethernet 2.0 service types. Source: Metro Ethernet Forum.

The TM-Series’ Ethernet products are fully MEF and CE2.0 certified and have the functionality needed to implement all of the four service types above in a packet-optical network.

Implementing the CE2.0 services gives the network operator a whole new range of services to offer "on top" of the legacy Layer 1 transport service, thereby adding new revenue streams to an existing network investment. Furthermore, implementing Ethernet services, rather than IP/MPLS based services with similar characteristics, can make the necessary investments significantly lower, especially if the Ethernet services can be implemented on the already deployed WDM platform.

4.5.4 Carrier Ethernet Service Attributes

A user of the Carrier Ethernet subscribes to an Ethernet service type (E-Line, E-LAN, E-Tree, E-Access) having either a port-based or VLAN-based UNI, and having more detailed characteristics as specified by its Ethernet service attributes, which are listed in MEF specifications 6.1 and 10.2.

Each service attribute represents a service characteristic, which in turn is further defined by a set of Ethernet service attribute parameters. The attributes and parameters customize the overall performance and quality of service for each individual service and subscriber.

Figure 63. Each subscribed-to Ethernet service type has an associated set of attributes and parameters defining its behavior in detail.

The Ethernet service attributes are of three main types:

1. Per-EVC service attributes, defining the characteristics of the Ethernet Virtual Connection as such, e.g.:

- EVC ID and type (point-to-point or multipoint)

- List of connected UNIs

- Customer VLAN ID and Class of Service preservation

- EVC performance: Frame Delay (latency), Inter-Frame Delay Variation, Frame Loss Ratio and Availability

2. EVC per UNI service attributes, e.g.:

- UNI/EVC ID

- Customer VLAN ID/EVC mapping

- Ingress/egress bandwidth profiles per Class of Service

3. Per-UNI service attributes, such as:

- UNI ID and physical interface capabilities: data rate, frame format

- Ingress and egress bandwidth profiles

- Service multiplexing capability

- Layer 2 control protocol processing
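The three attribute scopes above lend themselves to a simple data model. The sketch below uses illustrative field names and default values, not the normative MEF attribute names from MEF 10.2.

```python
# Illustrative grouping of the attribute scopes into a data model.
# Field names and defaults are examples, not normative MEF names.
from dataclasses import dataclass, field

@dataclass
class UniAttrs:                       # per-UNI attributes
    uni_id: str
    data_rate_mbps: int
    service_multiplexed: bool = False

@dataclass
class EvcAttrs:                       # per-EVC attributes
    evc_id: str
    evc_type: str                     # "point-to-point" or "multipoint"
    unis: list = field(default_factory=list)  # connected UNIs
    frame_delay_ms: float = 10.0      # SLA performance targets (examples)
    frame_loss_ratio: float = 1e-4

evc = EvcAttrs("EVC-42", "point-to-point",
               unis=[UniAttrs("UNI-A", 1000), UniAttrs("UNI-B", 1000)])
assert len(evc.unis) == 2 and evc.unis[0].uni_id == "UNI-A"
```

Separating per-UNI from per-EVC attributes mirrors how the same physical UNI can participate in several EVCs, each with its own performance targets.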

4.6 Carrier Ethernet traffic management

The different applications, users and data flows in a Carrier Ethernet Network require different priorities and performance guarantees. The process of differentiating traffic in this way is referred to as traffic management, and involves mechanisms such as queuing, scheduling and policing of the Ethernet frames. With traffic management in place it is possible to guarantee a certain Quality of Service (QoS) for a given service with respect to e.g. data rate, delay, jitter and packet dropping probability.

Quality of service guarantees are important if the network capacity is insufficient, especially for real-time streaming multimedia applications such as voice over IP and IP-TV, since these often require a fixed bit rate and are delay and loss sensitive. In the absence of network congestion, QoS mechanisms are in principle not required. However, temporary changes in traffic patterns and reconfigurations, for example those caused by protection switching, make QoS mechanisms necessary in virtually all networks.

Carrier Ethernet defines several traffic management mechanisms, which are described below.

4.6.1 Bandwidth profiles

A bandwidth profile is a set of traffic parameters that define the maximum average bandwidth available for the customer's traffic. An ingress bandwidth profile limits traffic transmitted into the network, and an egress bandwidth profile can be applied anywhere to control overload problems when multiple UNIs send data to an egress UNI simultaneously. Frames that meet the profile are forwarded; frames that do not meet the profile are dropped.

Bandwidth profiles allow service providers to offer services to users in increments lower than the speed of the physical interface. They also provide a way to engineer the network and make sure that certain parts of the network are not overloaded.

MEF 10.2 specifies three levels of bandwidth profile compliance for each individual service frame 26:

- Green: The service frame is subject to Service Level Agreement (SLA) performance guarantees.

- Yellow: The service frame is not subject to SLA performance guarantees, but will be forwarded on a "best effort" basis. These frames have lower priority and are discard-eligible in the event of network congestion.

- Red: The service frame is discarded at the UNI by the traffic policer.

Bandwidth profiles can be defined per Ethernet Virtual Connection (EVC) and per Class of Service (see below for a definition) and are governed by a set of parameters, the most important being:

- Committed Information Rate (CIR), which defines the assured bandwidth, expressed in bits per second.

- Excess Information Rate (EIR), which defines "extra" bandwidth that may be used temporarily, expressed in bits per second.

- Committed Burst Size (CBS) and Excess Burst Size (EBS), which define the temporary bursts of information that can be handled.

The TM-Series EMXP also supports a simpler bandwidth profile, which uses only two traffic parameters:

- Rate, expressed in bits per second.

- Burst size, expressed in bytes.

26 A service frame is a subscriber Ethernet frame to be forwarded by a service in the Carrier Ethernet.

Figure 64. Conceptual example with three EVCs sharing the same UNI. The three CIRs can always be met; the three EIRs cannot always be met simultaneously.

Ingress bandwidth profiles can be applied per UNI (all traffic regardless of VLAN tag or EVC ID), more granularly on an EVC basis, or even based on a Class of Service marking such as a customer-applied VLAN priority tag.

Figure 65. Three types of bandwidth profiles are defined in MEF 10.1.

Compliance with the bandwidth profile is determined through two "leaky bucket" algorithms, using a principle referred to as the Two Rate Three Color Marker (TrTCM).
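A color-blind TrTCM can be sketched with two token buckets, one drained against CIR/CBS and one against EIR/EBS; the parameter values below (bytes and bytes per second) are illustrative, and real implementations also handle color-aware operation and coupling between the buckets.

```python
# Two Rate Three Color Marker (color-blind mode): one token bucket
# refilled at CIR with depth CBS, another at EIR with depth EBS.
class TrTCM:
    def __init__(self, cir, cbs, eir, ebs):
        self.cir, self.cbs = cir, cbs    # bytes/s, bytes
        self.eir, self.ebs = eir, ebs
        self.tc, self.te = cbs, ebs      # both buckets start full
        self.last = 0.0

    def mark(self, size, now):
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + self.cir * dt)  # refill committed
        self.te = min(self.ebs, self.te + self.eir * dt)  # refill excess
        if size <= self.tc:
            self.tc -= size
            return "green"               # within CIR: SLA-guaranteed
        if size <= self.te:
            self.te -= size
            return "yellow"              # within EIR: best effort
        return "red"                     # non-conforming: dropped by the policer

m = TrTCM(cir=1000, cbs=1500, eir=2000, ebs=1500)
assert m.mark(1500, 0.0) == "green"      # fits the committed bucket
assert m.mark(1500, 0.0) == "yellow"     # committed empty, excess bucket fits
assert m.mark(1500, 0.0) == "red"        # both buckets exhausted
```

The burst sizes bound how much traffic can arrive back-to-back and still be colored green or yellow, which is exactly the role of CBS and EBS in the profile.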

4.6.2 Class of Service (CoS) and Service Level Agreements (SLA)

The integration of real-time and non-real-time traffic over Ethernet requires differentiating packets from different applications and providing differentiated performance according to the needs of each application. When a network experiences congestion and delay, some packets must be dropped or delayed. In Carrier Ethernet, this differentiation is referred to as Class of Service (CoS).

CoS can be applied at the EVC level (same CoS for all frames transmitted over the EVC), or applied within the EVC using customer-defined priority values in the data, such as the priority bits of the Customer VLAN IEEE 802.1Q tag.

The Transmode EMXP currently supports eight different CoS priorities, with 0 being the lowest (best effort) and 7 the highest (priority real-time data). The EMXP has eight CoS queues per port. These eight queues are serviced by a scheduler that can use three different schemes for emptying the eight egress queues: strict priority, round robin and weighted round robin.
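One weighted-round-robin pass over eight egress queues can be sketched as follows; the weights and frame labels are illustrative, and a strict-priority scheduler would instead always drain the highest non-empty queue first.

```python
# One weighted-round-robin cycle over eight per-port CoS queues:
# queue i may send up to weights[i] frames per cycle, visited from
# CoS 7 (highest) down to CoS 0 (best effort).
from collections import deque

def wrr_cycle(queues, weights):
    served = []
    for cos in range(len(queues) - 1, -1, -1):   # 7, 6, ..., 0
        for _ in range(weights[cos]):
            if queues[cos]:
                served.append(queues[cos].popleft())
    return served

queues = [deque() for _ in range(8)]
queues[7].extend(["voice1", "voice2", "voice3"])  # CoS 7: real-time
queues[0].extend(["data1", "data2"])              # CoS 0: best effort
assert wrr_cycle(queues, weights=[1, 1, 1, 1, 1, 1, 1, 2]) == ["voice1", "voice2", "data1"]
```

Unlike strict priority, the weighted scheme guarantees the best-effort queue some service per cycle, so high-priority traffic cannot starve it completely.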

The Class of Service settings together with the bandwidth profiles are in turn used for making Service Level Agreements (SLAs) between the service provider and its customers. In addition to the various parameters of the bandwidth profiles (CIR, EIR etc.), the SLA typically also specifies maximum values for various types of frame delay and frame delay variation (jitter), as well as values for the availability of the subscribed-to service.

Figure 66. Examples of Service Level Agreements for different applications. Source: Metro Ethernet Forum.

4.6.3 Traffic shaping

Traffic shaping is a traffic management technique which delays some frames in a Carrier Ethernet network in order to bring them into compliance with a desired traffic profile. Traffic shaping is a form of rate limiting, as opposed to the policing of the bandwidth profiles, where excess frames are simply dropped. Normally, traffic shaping is not part of any subscriber SLA, but rather a network-internal mechanism, used by the operator to "even out" traffic flows and create fairness between users of the network resources.

Traffic shaping is done by imposing additional delay on some packets such that the traffic conforms to a given bandwidth profile. Traffic shaping provides a means to control the volume of traffic being sent out on an interface in a specified period (bandwidth throttling), and the maximum rate at which the traffic is sent (rate limiting).

A drawback of traffic shaping is increased latency and jitter for the Ethernet Virtual Connection, but the gain can be better throughput, since the overall flow of frames may be improved: instead of dropping traffic in a policer, it may be better to shape the traffic to make sure no frames are lost (or at least as few as possible), avoiding retransmissions at higher protocol layers.
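The difference from policing can be made concrete with a token-bucket shaper that computes a departure time for each frame instead of dropping it; the rate, burst and frame sizes below are illustrative.

```python
# Token-bucket traffic shaper: a frame exceeding the available tokens
# is delayed until enough tokens have accumulated, rather than being
# dropped as a policer would drop it.
def shape_departures(frames, rate, burst):
    """frames: list of (arrival_time_s, size_bytes), in arrival order.
    rate in bytes/s, burst = bucket depth in bytes.
    Returns the shaped departure time of each frame."""
    tokens, t_prev, departures = burst, 0.0, []
    for arrival, size in frames:
        # Refill tokens for the time elapsed since the last departure.
        tokens = min(burst, tokens + max(0.0, arrival - t_prev) * rate)
        wait = max(0.0, (size - tokens) / rate)   # extra delay to conform
        tokens = min(burst, tokens + wait * rate) - size
        t_prev = arrival + wait
        departures.append(arrival + wait)
    return departures

# Two back-to-back 1000-byte frames at t=0 through a 1000 B/s shaper:
deps = shape_departures([(0.0, 1000), (0.0, 1000)], rate=1000, burst=1000)
assert deps == [0.0, 1.0]   # second frame delayed one second, not dropped
```

The one-second delay on the second frame is exactly the latency/jitter cost mentioned above: the frame survives, but it waits in the shaping queue.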

4.7 Carrier Ethernet Operations, Administration and Maintenance (Ethernet OAM)

Using Ethernet as an end-to-end wide area network service rather than as a link layer protocol creates a need for a new set of Operations, Administration and Maintenance (OAM) mechanisms and protocols. Service providers must be able to provision and maintain large volumes of Ethernet services and subscribers in a rational and cost-efficient way.

Furthermore, an end-to-end wide area Ethernet service, i.e. an Ethernet Virtual Connection (EVC), often involves one or more carriers/network operators providing the underlying transmission capacity in addition to the Ethernet service provider. Carrier Ethernet OAM requires coordination of OAM performed by a number of administrative entities and by different technical systems.

4.7.1 The management framework

Ethernet OAM builds on an established management framework and terminology using the concept of a data model, the Management Information Base (MIB), describing the status of the individual elements in the managed network.


Figure 67. The Network Management System (NMS) uses a data model – a Management Information Base (MIB) – to keep track of the status of the individual network elements.

The Management Information Base (MIB) is a database representation of the managed objects in a telecommunications network. The database, normally located in the central network management system, keeps an updated view of network element status by sending queries to the elements and is also used for configuration and provisioning activities. A MIB together with an associated management protocol, such as SNMPv2, defines a standard network management interface for the administration and maintenance of a particular network element.
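The polling relationship between the NMS and the element agents can be sketched as a toy model — a "MIB" per element keyed by object names (here two real MIB-II objects, ifOperStatus and ifInOctets), and an NMS poll that refreshes its cached view. This is illustrative only; a real deployment would use an SNMP protocol stack and standardized MIB modules rather than in-memory dictionaries:

```python
# Toy per-element "MIBs"; the values would live in each element's agent.
AGENT_MIBS = {
    "node-a": {"ifOperStatus.1": "up",   "ifInOctets.1": 1_234_567},
    "node-b": {"ifOperStatus.1": "down", "ifInOctets.1": 89_012},
}

def snmp_get(element, oid):
    """Stand-in for an SNMP GET toward the element's agent."""
    return AGENT_MIBS[element][oid]

def poll_network(elements, oids):
    """The NMS view: a per-element snapshot of the polled objects."""
    return {el: {oid: snmp_get(el, oid) for oid in oids} for el in elements}

view = poll_network(["node-a", "node-b"], ["ifOperStatus.1"])
# An operational status of "down" in the refreshed view would
# typically raise an alarm in the management system.
```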


Figure 68. The Ethernet OAM framework and terminology.

4.7.2 Standards for ethernet oAM

From an OAM perspective there are several standards that work together in a layered fashion to provide Carrier Ethernet OAM: IEEE 802.3ah defines OAM at the link level. With more of an end-to-end focus, IEEE 802.1ag defines connectivity fault management for identifying network level faults, while ITU-T Y.1731 adds performance management which enables SLAs to be monitored. The functions of these OAM layers are implemented either in a stand-alone network demarcation device (i.e. the Transmode NID or EDU) or integrated into the node equipment (i.e. into the Transmode EMXP).


Figure 69. Standards for Carrier Ethernet OAM.

4.7.3 The service lifecycle

The life of an EVC starts with a service order initiated by a customer. The order contains various types of EVC information such as UNI locations, bandwidth profiles, Class of Service etc. After provisioning of the EVC, the service provider and the involved network operators conduct initial turn-up testing to verify that the EVC is operational and fulfills the subscribed characteristics. While the EVC is in use, all parties involved – the subscriber, the network operators/carriers, and the service provider – want to monitor the same EVC to ensure that it adheres to the specified Service Level Agreement (SLA) regarding delay, jitter, loss, throughput, availability, etc. Finally, when the EVC is no longer needed, the assigned network resources should be freed up and made available to other EVCs.

The described life cycle of an Ethernet Service is depicted in Figure 70. The involved processes fall into three main categories: provisioning, performance management and fault management. The IEEE, ITU-T and MEF OAM standards provide the means to monitor and execute the required actions on the Carrier Ethernet.


Figure 70. The life cycle of an Ethernet Service according to Metro Ethernet Forum. Some of the involved standards have been indicated.

Ethernet service provisioning comprises the processes of setting up the required Ethernet virtual connections (EVCs) for a customer and assigning attributes such as bandwidth profiles and class of service to them. The provisioning process also includes procedures for testing the service when it has been set up, but before it is turned over to the customer. The configuration of the service is checked for correctness and verified against its Service Acceptance Criteria (SAC).

The Carrier Ethernet provisioning process specified by MEF is based on the ITU-T specification Y.1564.
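The stepped-load idea behind a Y.1564-style configuration test can be sketched as follows: the test load is ramped up to the CIR in fractions, and at each step the measured key performance indicators are checked against the Service Acceptance Criteria. This is a simplification — real testers also run longer performance tests and per-CoS flows; `measure` stands in for the test instrument, and all names are ours:

```python
def y1564_configuration_test(cir_mbps, sac, measure,
                             steps=(0.25, 0.5, 0.75, 1.0)):
    """Sketch of an ITU-T Y.1564-style service configuration test.

    sac     -- Service Acceptance Criteria, e.g.
               {"flr": 0.001, "fd_ms": 5.0, "ifdv_ms": 1.0}
    measure -- callable(load_mbps) returning the observed KPIs
               as a dict with the same keys as `sac`
    Returns (passed, last_load_fraction).
    """
    for fraction in steps:
        kpi = measure(cir_mbps * fraction)
        # Fail as soon as any KPI violates the acceptance criteria.
        if (kpi["flr"] > sac["flr"] or
                kpi["fd_ms"] > sac["fd_ms"] or
                kpi["ifdv_ms"] > sac["ifdv_ms"]):
            return False, fraction
    return True, 1.0
```

For instance, a service whose measured frame loss ratio exceeds the SAC already at the first step would be reported as failing at 25 % of CIR, pointing the technician at a misconfigured bandwidth profile.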

4.7.4 ethernet Service oAM – Performance and fault management

Performance management and fault management comprise the processes of monitoring the Ethernet virtual connections (EVCs) for proper operation, discovering any problems and correcting faults that have occurred.

• Link level performance and fault management as defined by IEEE 802.3ah provides mechanisms to monitor link operation and health, and for elementary fault isolation.

• The more sophisticated end-to-end, service level performance and fault management of the Ethernet Virtual Connection is often referred to as Ethernet Service OAM (SOAM) and is addressed by MEF Specifications 17, 30 and 31, and the ITU-T Y.1731 and IEEE 802.1ag standards.

In a real wide area network, Ethernet Virtual Connections may span several networks, each with their own management needs. Since managing functionality “end-to-end” means different things to the end customer, the network operator and the Carrier Ethernet Service Provider, Ethernet SOAM must handle and interact with performance and fault management over several Ethernet OAM domains. An OAM domain is simply a network or sub-network of elements belonging to the same administrative entity managing them.


Figure 71. Hierarchical OAM domains define the OAM responsibilities and the flows of OAM data.

Recognizing the fact that Ethernet networks often encompass multiple administrative domains, IEEE 802.1, ITU-T SG 13 and MEF have adopted a common, multi-domain SOAM reference model. The Carrier Ethernet is partitioned into customer, service provider, and operator maintenance levels. Service providers have end-to-end service responsibility; operators provide service transport across a sub-network.

In this model, an entity that requires management is called a Maintenance Entity (ME). An ME is essentially an association between two maintenance end points within an OAM domain, where each end point corresponds to a provisioned reference point. For example, in the previous figure, the green arrow between the two CEs represents a subscriber ME.

A Maintenance Entity Group (MEG) consists of the MEs that belong to the same service inside a common OAM domain. The MEs exist within the same administrative boundary and belong to the same point-to-point or multipoint Ethernet Virtual Connection. For a point-to-point EVC, the MEG contains one single ME.

A MEG End Point (MEP) is a provisioned OAM reference point which can initiate and terminate proactive OAM frames. It can also initiate and react to diagnostic OAM frames. The MEPs are indicated by triangles in the figures.

A MEG Intermediate Point (MIP) is any intermediate point in a MEG that can react to some OAM frames. A MIP does not initiate OAM frames; neither does it take action on the transit Ethernet traffic flows. The MIPs are indicated by circles in the figures.

The concepts of the SOAM reference model are summarized by Figure 72, indicating the six default MEG levels considered by MEF.


Figure 72. Example of Ethernet SOAM Maintenance Entities. Source: Metro Ethernet Forum.
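The key rule that makes hierarchical MEG levels work can be sketched in a few lines, following the ideas in IEEE 802.1ag: a MEP terminates OAM frames at its own maintenance level, transparently forwards frames belonging to higher-level (outer) domains, and discards lower-level frames so that inner-domain OAM never leaks across a domain boundary. The function name is ours:

```python
def mep_action(mep_level, frame_level):
    """Sketch of MEG-level handling at a MEP (per IEEE 802.1ag ideas).

    mep_level   -- the maintenance level this MEP is provisioned at
    frame_level -- the MEG level carried in the received OAM frame
    """
    if frame_level == mep_level:
        return "process"   # the OAM frame belongs to this domain
    if frame_level > mep_level:
        return "forward"   # outer-domain OAM: pass transparently
    return "discard"       # inner-domain OAM must not escape its domain
```

An operator MEP at level 2, for example, processes operator-level CCMs, forwards the service provider's level-4 and the customer's level-6 OAM frames untouched, and blocks any stray level-0 or level-1 frames from leaving its sub-network.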

Given the above SOAM reference model, MEF specifications 30.1 and 35 define a wide set of performance and fault management activities for the EVC and its sub-components such as:

PERFORMANCE MANAGEMENT

• Frame Delay. Measurement of one-way and two-way (round-trip) delay from MEP to MEP.

• Inter-Frame Delay Variation. Differences between consecutive Frame Delay measurements.

• Frame Loss Ratio. The number of frames delivered at an egress UNI compared to the number of transmitted frames over a specified time, e.g. a month.

• Availability. Downtime is measured over e.g. a year and used to calculate the availability of the service.

FAULT MANAGEMENT

• Continuity Check. “Heartbeat” messages are issued periodically by the MEPs and used to proactively detect loss of connection between endpoints. Continuity check is also used to detect unintended connectivity between MEGs. The continuity check is used to verify basic service connectivity and health.

• Remote Defect Indication Signal. When a downstream MEP detects a fault it will signal the condition to its upstream MEP(s). The behavior is similar to the RDI function in SDH/SONET networks.

• Alarm Indication Signal. A MEP can send an alarm signal to its higher level MEs, thereby informing the higher level MEs of the disruption, immediately following the detection of a fault.

• Linktrace is an on-demand OAM function initiated in a MEP to track the path to a destination MEP. It allows the transmitting node to discover connectivity data about the path.

• Loopback is an on-demand OAM function used to verify connectivity of an MEP with its peers.
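The continuity check mechanism lends itself to a compact sketch: per Y.1731 and IEEE 802.1ag, a MEP declares loss of continuity (LOC) toward a peer when no CCM "heartbeat" has been received for 3.5 times the configured CCM interval. The function and variable names below are ours:

```python
def ccm_defects(last_rx_times, now, interval):
    """Sketch of Continuity Check (CCM) loss-of-continuity detection.

    last_rx_times -- {peer_mep_id: time of last received CCM, in seconds}
    now           -- current time in seconds
    interval      -- configured CCM transmission interval in seconds
    Returns the set of peer MEPs currently in the LOC state.
    """
    # The 3.5x multiplier is the standard threshold: it tolerates the
    # loss of up to three consecutive heartbeats before raising a defect.
    return {peer for peer, t in last_rx_times.items()
            if now - t > 3.5 * interval}
```

With a 1-second CCM interval, a peer last heard from 10 seconds ago is declared down, while a peer heard from 1 second ago is still considered alive; the resulting LOC defect would in turn trigger RDI toward the far end and AIS toward higher maintenance levels.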

Summary

Optical fiber provides almost loss-less transmission of signals at an ultra-wide range of frequencies. Packet switching, implemented according to the Ethernet family of protocols, offers one of the most efficient ways ever for sorting and directing streams of digital data. With packet- optical networking, these two outstanding technologies are positioned to dominate the next generation of trans- port networks. And the continuing evolution driven by industry groups ensures dependable and open standards for future needs.

Transmode’s Native Packet Optical 2.0 architecture, realized by the TM-Series platform and the Enlighten multi-layer management suite, represents an exceptional toolbox for deployment of these new networks. The archi- tecture encompasses elements which are fully certified for Carrier Ethernet 2.0 services, capable of both native Ether- net and OTN-adapted transport and having additional scalability and resilience enabled by MPLS-TP. Special attention has been given to efficient synchronization, ultra-low latency and minimal jitter in small as well as large networks. Furthermore, configuring and managing a Transmode packet-optical network is simple and straight- forward thanks to the carefully designed suite of multi- layer management tools.

iNDex