
INTRODUCTION

All signs point to Carrier Ethernet as the changing enterprise WAN landscape pressures service providers to migrate away from legacy infrastructure and deploy carrier-grade Ethernet services to meet demand for high-bandwidth services with guaranteed performance levels at lower price points. In this Telecom Insights guide, Nemertes Research discusses the standards and technology needed to architect next-generation Ethernet networks, and looks at how carriers can position various Carrier Ethernet services (E-Line, E-LAN, E-Tree) effectively to serve business needs. Meanwhile, IDC analyzes the complexity of existing metro Ethernet networks, how carriers can get the service velocity they need at the network edge, and why they need to keep an open mind when it comes to deploying intelligence closer to the edge without adding more complexity and cost. Finally, Ciena explains how a number of business trends, including the maturation of virtualization and cloud-based applications, are affecting service provider networks and service consumption. Service providers must adapt quickly in an environment where demand for services is evolving continuously, network traffic distribution is changing constantly, and bandwidth is growing continually. Pure cost reduction, although very important, is no longer sufficient; the emphasis must shift to growing top-line revenue through the creation and deployment of new Ethernet business services with greater velocity, automation, and customization.

In this series:
- Metro Ethernet service deployment eased by standards choices
- Carrier Ethernet meets new enterprise metro data center needs
- Metro network complexity: Time to cut the Gordian knot?
- Delivering True Carrier Ethernet business services

Some content provided by www.searchtelecom.com

Sponsored by

Metro Ethernet service deployment eased by Carrier Ethernet standards choices


By Irwin Lazar

Ethernet service deployment is skyrocketing, and metro Ethernet services for enterprises are in highest demand. Why the specific metro Ethernet interest? Two main reasons. First, bandwidth: Ethernet services typically deliver some of the highest available bandwidth in the WAN. Second, simplicity: Ethernet services are often plug-and-play. The single biggest drawback to Ethernet services? Lack of availability. "I'd use more of it if I could get it" is the common refrain.

One reason providers are slow to deploy Carrier Ethernet, relative to its popularity, is that for carriers it often represents a radical departure from their existing architectures. Service providers continue to depend on traditional SONET/SDH-based access, metro and transport technologies, even as they watch demand increase for IP and Ethernet. That means they're managing separate transport hardware and provisioning systems to handle both their legacy networks and the new generation of packet-transport protocols that include Ethernet. The current crop of Ethernet services is defined to run over Multiprotocol Label Switching (MPLS). But from the carrier perspective, turning up new customers on MPLS-based services like Ethernet requires a complex set of steps involving multiple operational support systems (OSS).

Carrier Ethernet standards designed to create a single control plane

To streamline deployment and management of these new services, carriers are seeking a way to merge their Layer 1, 2 and 3 operations and management infrastructures so they can operate a single control plane for provisioning. Two emerging specifications seek to do exactly that:

- MPLS Transport Profile (MPLS-TP) is an approach that started life as T-MPLS (Transport MPLS) within the International Telecommunication Union (ITU). Due to concerns about interoperability between the ITU's T-MPLS proposal and existing MPLS standards, the ITU turned over T-MPLS development to the Internet Engineering Task Force (IETF).
- Provider Backbone Bridging -- Traffic Engineering (PBB-TE) is the main competitor to MPLS-TP. This approach leverages existing IEEE 802.1 standards to enable carriers to deploy Ethernet services natively, using existing Ethernet technologies.

The benefits of MPLS-TP

MPLS-TP is essentially an MPLS extension, based on the concept of extending MPLS resiliency and provisioning mechanisms to Ethernet via a new transport-focused profile. With MPLS-TP, carriers can mix and match circuit- or packet-based services in the same network, using a single control plane and operational support system (OSS) for service provisioning.

Perhaps MPLS-TP's most important quality is that it applies circuit-switching-like functionality to MPLS, treating MPLS label switch paths (LSPs) as dedicated circuits. This approach enables operators to define bidirectional paths (same path forward and backward), eliminating the LSP merging capability of MPLS, whereby packets going to the same destination can be merged into a single LSP. By eliminating LSP merging, MPLS-TP enables providers to isolate customer traffic into separate end-to-end virtual circuits. In addition, MPLS-TP eliminates the need for IP at the end of the LSP by extending the label all the way out to the end device in a path. This allows service providers to eliminate the need to configure IP services on edge devices, instead allowing MPLS-based provisioning of lower-layer services such as Ethernet connections at Layer 2.

Proponents of MPLS-TP also tout the following benefits for carriers and service providers that already have MPLS cores:

- MPLS-TP leverages existing MPLS standards, providing a smooth migration path and adding only extensions for Layer 2 forwarding, provisioning and management. That means a more seamless deployment for service providers already using MPLS.
- MPLS-TP can support a mix of traditional circuit-switched Layer 1 and 2 services (like SONET/SDH or WDM) as well as packet-based services like Ethernet, enabling service providers to protect existing customer revenues while modernizing their own transport infrastructures.
- Re-use of existing MPLS services means additional services can be provisioned using existing OSS with minimal modifications.

PBB-TE advantages

A competing proposal is PBB-TE, based originally on Nortel's proprietary Provider Backbone Transport (PBT) but now undergoing standardization as IEEE 802.1Qay. Unlike MPLS-TP, PBB-TE supports only Ethernet, meaning that other Layer 2 services must be tunneled within MPLS or converted via a gateway. Proponents of PBB-TE tout the following benefits:

- Simplified infrastructure based on existing 802.1 protocols such as VLAN tunneling (also known as double-tagging or Q-in-Q), which enables a provider to tunnel customer VLANs within provider VLANs.
- Elimination of Ethernet inefficiencies by replacing MAC learning and spanning tree with a new protocol, Provider Link State Bridging (PLSB), a link-state approach that uses the common IS-IS routing protocol to calculate optimal and redundant paths.
- Re-use of existing standards for resiliency, such as resilient packet ring (RPR) and the IEEE 802.1ag OAM standard.
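The Q-in-Q double-tagging mentioned above is simple to visualize: the provider pushes its own service tag (S-tag) in front of the customer's VLAN tag (C-tag), so customer VLANs ride transparently inside provider VLANs. A minimal sketch of the tag-stacking mechanics; the frame contents and VLAN IDs are illustrative, not drawn from any particular deployment:

```python
# Sketch of 802.1ad "Q-in-Q" double tagging: a VLAN tag is a 2-byte TPID
# (0x8100 for a customer C-tag, 0x88A8 for a provider S-tag) followed by
# a 2-byte TCI carrying the 12-bit VLAN ID. Tags are inserted right
# after the destination and source MAC addresses.
import struct

def push_tag(frame: bytes, tpid: int, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert one VLAN tag after the two 6-byte MAC addresses."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)
    return frame[:12] + struct.pack("!HH", tpid, tci) + frame[12:]

# Customer frame: dst MAC, src MAC, then EtherType and payload.
frame = bytes(6) + bytes(6) + b"\x08\x00payload"
c_tagged = push_tag(frame, 0x8100, vlan_id=10)      # customer VLAN 10
s_tagged = push_tag(c_tagged, 0x88A8, vlan_id=200)  # provider VLAN 200
```

After the second push, the S-tag (provider VLAN 200) sits outermost, and provider switches forward on it alone, never inspecting the customer's C-tag.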

It's too soon to say which approach will ultimately win out, but the existence of both specs spells good news for users. Both standards offer carriers the opportunity to achieve lower operating costs while improving service delivery. And that means the Carrier Ethernet services users are growing to love will be more widely available than ever in coming years.

Irwin Lazar is the vice president for Communications Research at Nemertes Research.


Carrier Ethernet meets new enterprise metro data center needs


By Johna Till Johnson

Metro area networks have come a long way since the leased lines and SONET rings of yore. True, those are still widely deployed and extremely versatile technologies, but as user applications increasingly feature voice and data convergence and high-bandwidth/low-latency requirements, carriers are changing their metro area networks to support these applications.

To understand how metro area networks are evolving, it makes sense to examine enterprise network architectures and the applications they need to support. Enterprise WANs connect three distinct types of sites, according to Nemertes analyst Katherine Trost:

- Tier 1: Data centers
- Tier 2: Distributed offices
- Tier 3: Remote offices and users

Metro area networks are most commonly used to connect sites at the top tier of the WAN, which includes data centers, contact centers, administrative headquarters and some (but not all) distributed offices. These tier 1 sites are typically geographically close, and the applications located there generally move massive volumes of data. As a result, they need very low latency and very high reliability. Tier 1 WAN sites are perfect for metro area networks based on technologies including dedicated fiber, dense wave-division multiplexing (DWDM) and, more and more often, Carrier Ethernet.

Carrier Ethernet suited for data center replication and call centers

Data center storage replication is one of the most common applications at this tier of the WAN. Enterprises continue to consolidate multiple data centers down to a handful, then use data center replication between two or three data centers to ensure reliability and redundancy. Often, two data centers will replicate synchronously over the metro area, typically using Fibre Channel as the core communications protocol. The theoretical maximum length of a synchronous Fibre Channel connection is on the order of 120 miles (depending on the bandwidth of the link). But latency is typically the primary gating factor, and the maximum practical distance for synchronous replication is roughly 30 miles.

Options for synchronous connection include Fibre Channel over SONET, Fibre Channel over DWDM and, potentially, Fibre Channel over Ethernet (FCoE). But future deployment of FCoE across the WAN will depend largely on the degree to which FCoE achieves acceptance within the data center, and here a large question mark remains. We found near-zero adoption of FCoE within data centers, with low planned usage for the next 24 months.

Another common metro-area application is connectivity into contact centers (call centers), which may handle hundreds of thousands of phone calls simultaneously. As enterprises move toward a converged voice and data architecture, incoming calls are carried across the WAN rather than across dedicated voice private lines, as they were previously. But ordinary WAN services such as Multiprotocol Label Switching (MPLS) may suffer from route-convergence problems: If a logical link fails, it may take multiple seconds for the network to re-establish connectivity. For most traffic, this isn't a problem, but an outage of several seconds is long enough to cause callers to a contact center to hang up in frustration.
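The distance limit for synchronous replication comes down to round-trip propagation delay, which is easy to estimate. A back-of-the-envelope sketch, assuming light travels at roughly two-thirds of c in fiber (about 5 microseconds per kilometer one way) and ignoring equipment and protocol overhead, which in practice add more:

```python
# Rough propagation-latency estimate for a synchronous replication link.
# Each synchronous write must wait at least one round trip over the
# fiber before it can be acknowledged.

FIBER_US_PER_KM = 5.0  # approximate one-way delay in fiber, microseconds

def round_trip_latency_us(distance_miles: float) -> float:
    """One fiber round trip, in microseconds, for the given distance."""
    km = distance_miles * 1.609
    return 2 * km * FIBER_US_PER_KM

for miles in (30, 120):
    print(f"{miles} miles: ~{round_trip_latency_us(miles):.0f} us round trip")
```

At 30 miles the round trip alone is around half a millisecond per write, before any switching or protocol delay, which is why practical synchronous distances fall well short of the theoretical maximum.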


With Carrier Ethernet, however, the connections can be engineered at Layer 2, avoiding the route-convergence problem altogether. So service providers are increasingly turning to Carrier Ethernet and other technologies that support both voice and data and can also provide real-time redundancy.

The growth of high-bandwidth applications in the metro

Administrative headquarters often require metro-area connectivity, particularly in organizations (such as higher education institutions and state and local government) where many offices are in close proximity in a campus environment. Here, too, we see a greater-than-typical use of Carrier Ethernet (virtually all of the organizations Nemertes works with in both verticals have some degree of Ethernet in use in their metro-area networks). The use of Carrier Ethernet is likely to grow as high-bandwidth applications like video conferencing, telepresence, streaming video and distance learning increase. The use of these applications is rising steeply today, driven by several related (but not identical) trends:

- Increasing acceptance of the virtual workplace. Nearly 90% of organizations consider themselves virtual workplaces -- meaning that they actively encourage collaboration among employees or workers who are geographically separated.
- Travel restrictions (79% of organizations say travel restrictions have increased the use of video conferencing).
- Increased deployment of streaming video, particularly for training applications and distance learning.

The common theme across all these applications and WAN architectures is Ethernet. As voice and data converge, and as Ethernet becomes as widely deployed within data centers as Fibre Channel, Carrier Ethernet becomes the logical way to achieve high-bandwidth, low-latency links across the metro area. And those enterprises that deploy it love it: Seventy-nine percent say they're extremely happy with their Ethernet deployments, and the vast majority say they expect to deploy more Carrier Ethernet in the near future.

The bottom line is that we have seen the future of metro-area networks, and it's increasingly Carrier Ethernet. Low-latency, high-bandwidth, cheap and cheerful Ethernet services meet the needs of those tier 1 WAN sites that are close enough together to be served by a metro-area network.

Johna Till Johnson is the president and senior founding partner of Nemertes Research.


Metro network complexity: Time to cut the Gordian knot?


By Eve Griliches

Service providers are beginning to see success in rolling out IP services, whether they are wireline providers competing for television services or cable operators adding VoIP and streaming media to their existing high-speed Internet offerings. Building on this success, service providers now need to scale their IP services, which are often media-rich applications that are bandwidth hungry and require stringent guarantees for that bandwidth. At the same time, they must increase the speed of offering these services while reducing the cost of operating the overall network.

To achieve the service quality needed to deliver media-rich applications, service providers have had to compromise their original infrastructure goals of building simple and cheap metro Ethernet edge/aggregation networks. Instead, they have built multi-service metro networks with high-functioning equipment. Adopting this approach has gotten the job done, but metro networks are now complex, expensive to operate and don't deliver the service velocity providers need. Service providers need to consider a few alternative solutions:

Rethink how multi-service networks are built. Instead of incrementally adding service delivery features to expensive equipment, perhaps the time is right to extract the critical service management features into a purpose-designed session layer. This would have two benefits:
o First, by simplifying the requirements on routers and switches, service providers would have more options to reduce the cost of packet transport.
o Second, it would encourage innovation and discussion on how best to deliver critical service management functions. It might also give rise to a new category of product, which could deliver the dynamic, adaptive session-by-session Quality of Experience (QoE) required to support a world rich in media-heavy IP services.

Break the current paradigm. This will require a new approach, as well as a new vision of how to manage the metro edge. We believe that vendors and service providers open to new approaches will become more competitive and will be able to deliver services with velocity and quality as never before.

Examining the carrier edge

The past 10 years have seen increasing pressure on service providers to reduce the cost of their networks. Their top line has been threatened by the accelerating decline in the traditional voice market, and their bottom line has been challenged because the price of bandwidth has been declining faster than the cost to produce that bandwidth. In response, service providers came up with a two-part plan. First, increase bandwidth and reduce costs by rebuilding the metro network with Ethernet to take advantage of Ethernet's lower cost base. And second, introduce new IP services over this high-speed metro Ethernet network, generating more than enough revenue to fill the gap left by declining legacy voice revenue. Because these new services are bandwidth-hungry video services (multi-channel television, Video on Demand (VoD) and other media-rich services, such as interactive gaming and video conferencing), they require an entirely new network.


This looked like a good approach, but it turned out that the twin elements of new revenue from new services and bandwidth cost reduction are more difficult to put together. The basic problem is the nature of these new services. Media-rich services are extremely sensitive to packet loss, jitter and latency. Many of them need constant bandwidth. And, most important, user satisfaction with these services is sensitive to all of these factors. In a video network, each and every video session must be delivered with well-defined bandwidth determined at session setup, and with zero packet loss. In a gaming session, there is less need for constant bandwidth, but it is increasingly important that latency and jitter be minimized.

In addition, the way consumers use IP services has begun to change. IP services were initially source driven, offered by service providers and pushed to consumers. A rapid shift is occurring, however. As the number and variety of IP services expand, the focus is shifting from the provider delivering these services to the end user pulling them -- choosing and invoking different applications, on demand, depending on the topicality and content offered. In a world of pull-based, unicast applications, it is difficult or impossible to predict how much bandwidth users will require, much less where and when. This problem is compounded by the behavior of new software applications that measure and take every bit of bandwidth available. It takes just a few individual users running applications like Move Networks' HD Adaptive Streaming, or any peer-to-peer application, to consume all the excess bandwidth in the network, affecting everyone's performance. To date, no satisfactory solution has been implemented that can evenly provide bandwidth across concurrent users.

The solution almost every provider in the world previously deployed to deal with this problem was to over-provision the network. In today's world, this strategy simply won't work. Over-provisioning is not practical because users choose for themselves from a wide range of bandwidth-intensive applications, and many of these applications take much of the available bandwidth. Because today's applications are media rich and quality sensitive, the degree of over-provisioning would need to rise considerably to deliver QoE; otherwise, video traffic could easily get stuck behind a burst of gaming or peer-to-peer traffic.

Analyzing metro Ethernet networks today

The factors discussed above have had considerable impact on how service providers build multi-service metro networks. Instead of being able to deploy simple, inexpensive Ethernet switches, service providers have been forced to select much higher-functioning networking gear. Though the problems of multi-service metro networks are nothing like those of the Internet backbone, many service providers have felt forced to deploy gear originally designed for the Internet core simply because the sophistication is there to successfully support multi-play services. As service providers achieve penetration and success with their multi-play services, the strains between their original vision of a simple, cost-effective metro network and the network they actually built have become more apparent. Because the price of bandwidth continues to fall faster than Moore's law would suggest, the high cost of metro networks remains a critical issue for service providers.


In addition to the capital expenditure outlay, operational expenditures in today's multi-service networks are also an issue. The most commonly used architectural approach for metro networks is a combination of MPLS and DiffServ quality of service (QoS). Each service is engineered into the network via a web of MPLS tunnels, and each tunnel is sized to the maximum expected load for each service from each subnet. Each tunnel is then carefully configured onto the network, node by node and link by link, to ensure that each link has the requisite capacity, with the DiffServ bits used to prioritize services against each other.

This approach is complex to configure and provision because it requires link-by-link engineering of services and tunnels. Perhaps worse, it is also static. Simple network operations like adding access nodes and trunks are difficult because they require re-engineering of the tunnels; this also affects the service provider's ability to deliver new services quickly. Prior to rolling out a new service, there must be a careful estimate of the bandwidth needed for each subnet; tunnels need to be engineered for the maximum expected bandwidth, and then those tunnels have to be configured into the network.

This approach has worked for providers as long as each service on the network has been allocated enough bandwidth. But what happens when there is a snow day and everyone is working from home? The network suddenly cannot handle the bandwidth requested for the service, and quality of experience goes right out the window. The problem is that the bandwidth for each of these services has already been configured, and there is no protection if a particular session within that tunnel needs more bandwidth for its application. There is little opportunity to adjust manually, in real time, to optimize for this. So ultimately, a single user, time of day or event can affect the daily bandwidth of these applications, with no real ability for the provider to make adjustments for the impending congestion, which often materializes.

Looking at the network or policy management approach

Another approach to the problem of managing services is through network management or policy management. Many vendors have supported this approach, but not for large networks where network state needs to be sensed and services configured in real time. The reason is that services traverse the network in multiple directions, not just from provider to consumer anymore. Consumers are rapidly moving from accepting the push-sourced model of consuming services to a pull model. The centralized management or policy administrator simply does not scale to handle the constant pings and requests from all the network elements. If the policy manager runs the network, it still has no real-time knowledge of bandwidth changes and new requirements within the network. If you want the network to be aware of these changes, you have to be able to configure the network in real time. In fact, most network managers are understandably reluctant to have a policy manager constantly changing the state of key network elements for fear of destabilizing the entire network. Also, the policy manager does not really scale as subscribers are added and changes are reflected in the network. Large networks in general have problems with real-time data changes, and what has been optimized for today will be out of date tomorrow.
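The static, link-by-link tunnel engineering described earlier can be made concrete with a toy capacity check: each tunnel reserves its maximum expected load on every link along its path, and any link whose reservations exceed its capacity must be re-engineered. All link names, paths and numbers below are illustrative, not drawn from a real network:

```python
# Toy model of static MPLS tunnel engineering: tunnels are sized to
# their maximum expected load, and that load is reserved on every link
# in the tunnel's path. Over-committed links force re-engineering.
from collections import defaultdict

link_capacity_mbps = {"A-B": 1000, "B-C": 1000}  # illustrative links

tunnels = [  # (service, path as list of links, max expected load in Mbps)
    ("iptv", ["A-B", "B-C"], 600),
    ("voip", ["A-B"],        100),
    ("data", ["A-B", "B-C"], 500),
]

reserved = defaultdict(int)
for _service, path, load in tunnels:
    for link in path:
        reserved[link] += load  # static reservation, sized to the maximum

over_committed = [l for l, r in reserved.items()
                  if r > link_capacity_mbps[l]]
print(over_committed)
```

The point of the toy: reservations are made for the worst case whether or not the traffic materializes, so adding one tunnel can over-commit several links at once, and nothing in the model can reallocate unused capacity at run time.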


Rethinking the problem

Let's rethink the crux of the problem. Media-rich applications need session-by-session, service-specific bandwidth and QoS. Embedding service-aware functionality in switches and routers does not really solve the problem. Switches and routers are designed to forward packets, hop by hop. They don't provide full-function session management capabilities. The session management features the switches come equipped with have been added incrementally, without the whole problem really being thought through. Embedding service awareness makes them more complex and expensive, and dilutes what they do best -- forwarding packets from origin to destination.

In an ideal world, the right place to start would be by separating the delivery of services from the transport of packets. If we started like this, then:

o The transport layer would focus on packet delivery and would no longer be service aware. Service providers could flatten the transport layer and eliminate protocols and complex configurations. They would then be able to purchase hardware optimized for price/performance in a cost-effective manner. This would stimulate investment and innovation, as well as new approaches to the transport layer.
o Service creation and delivery would be extracted into a separate session layer, architected from the ground up rather than as a series of incremental band-aids forced onto current equipment. The right session layer would no longer operate link by link (which is how transport equipment functions today) but would be dynamic, with no preconfigured tunnels, and would function end to end to secure bandwidth and QoS enforcement.

Perhaps most important is the ability to create and deliver new services quickly. To do this, the network must be flexible to the needs of the actual services.
The session layer would not provide packet transport but would provide session processing of all kinds: session initiation, management of quality of service, and scaling of these services to an increasing number of subscribers. With session management, network occupancy levels can rise while quality of experience is preserved. This introduces sophisticated congestion management to the network: a way to reduce calls, ratchet back errant usage, and provide fair usage to a huge number of subscribers to ensure they get what they've paid for. If the lower layers handle transport and traffic management, then simple and cost-effective processing power can be used for session management. Interestingly, it almost sounds like a job for an off-the-shelf, general-purpose computer or server.

Session management: How it might work

A new and intriguing way to solve the problem is a session-by-session management approach. Session-by-session service delivery would consider the entire bandwidth in the network and allocate that bandwidth based on policy and individual customer and service profiles -- assuming it is prioritized. Yet it could still allocate leftover bandwidth to lower-tier customers, so that no consumer is ever cut off. In essence, there is a single point of management for each session rather than centralized management for all sessions. This decouples and extracts the service from the transport layer, allowing services to be delivered faster and cheaper, right in the data path.

Video on Demand sessions can be admitted to the network based on actual network usage and availability. To maximize utilization, dual-rate scheduling is applied so that if the Video on Demand session cannot be initiated at that time owing to lack of bandwidth, a time when it can be initiated is communicated to the customer. Also, selective suspension or suppression of individual sessions is possible so that ongoing authorized sessions are not affected. Ultimately, a session-by-session approach enables dynamic traffic patterns to avoid congestion in real time, which is exactly what every provider is looking for. This ensures that the network has a much higher utilization rate without over-provisioning. It also enables the idea of fair usage, in which everyone on the network gets equitable bandwidth based on their service profile. This involves no new protocols and no complex software, and it helps simplify and bring significant cost reduction to the carrier edge.

Applications for session management

Perhaps the most intriguing application for session management is implementation of the fair usage model. This is applicable for any MSO network, as well as any wireless network that will be required to manage increasing data traffic. As we all know, it is often only 5% of consumers who hog the bandwidth, often with P2P upstream loads, which inherently reduce downstream capacity. Attempts at throttling have succeeded but have also resulted in customer churn and FCC issues. This approach offers a way for the provider to monetize new applications and ensure their delivery, all on the same network. This is a huge issue today for MSOs and will become one for any of the wireless operators, especially with the proliferation of the iPhone, mobile maps and session-based one-to-one mobile gaming applications. IDC's Wireless Infrastructure service estimates that about 25% of all cell sites today account for well over half of the traffic.
That means the heavy concentration in urban sites is putting huge demands on cell site capacity, which, of course, is physically limited. A big benefit of this fair-usage model is that it can be deployed for upstream and downstream sessions, so that as traffic becomes less and less predictable, the fair-usage model provides even and smooth coverage at much lower cost points and keeps customers satisfied with transmission in both directions.

For mobile operators just building out their backhaul networks, the cost to deploy these new networks is huge, and it is not clear, with the flat-rate services to date, whether the incoming ARPU will cover the capital and operational expenditures and provide reasonable payback in the near term. In addition, P2P traffic has begun to increase on mobile networks, so P2P users are clearly crowding out other mobile subscribers, leaving them with little to no bandwidth and, often, dropped calls. And what level of video quality will really be possible on the mobile network when the bandwidth from each cell site has clear limitations? Here again, a network that can guarantee service quality session by session would enable providers to monetize the services. This approach would ensure that the limited bandwidth allocated to each cell site was being fully utilized for the priority customers and would still address the other customers with fair amounts of working bandwidth. This enables the mobile operator to increase revenue, as well as contain the cost of the network.
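The fair-usage idea described above is essentially what the networking literature calls max-min fairness: small demands are satisfied in full, and the remaining capacity is split evenly among the heavy users, so no one is cut off and no one can monopolize the link. A minimal sketch; the session names, demands and link capacity are illustrative:

```python
# Max-min fair bandwidth allocation: repeatedly offer every unsatisfied
# session an equal share of the remaining capacity. Sessions that want
# less than the fair share keep their full demand, and their surplus is
# redistributed to the heavier sessions on the next pass.

def max_min_fair(capacity, demands):
    """Return {session: allocation} under max-min fairness."""
    alloc = {s: 0.0 for s in demands}
    active = dict(demands)          # sessions not yet satisfied
    remaining = float(capacity)
    while active and remaining > 1e-9:
        share = remaining / len(active)
        satisfied = {s: d for s, d in active.items() if d <= share}
        if satisfied:
            for s, d in satisfied.items():  # grant full demand
                alloc[s] = d
                remaining -= d
                del active[s]
        else:                               # everyone wants more: split evenly
            for s in active:
                alloc[s] = share
            remaining = 0.0
            active = {}
    return alloc

# Three sessions share a 10 Mbps link: the light 2 Mbps session is fully
# satisfied, and the two heavy sessions split the remaining 8 Mbps.
print(max_min_fair(10, {"web": 2, "p2p": 8, "video": 6}))
```

This is the same shape of policy the article sketches for cell sites: priority or light users get what their profile entitles them to, while the bandwidth hogs are held to an even split of whatever is left.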



Outlook for metro networks

Service providers will need to rethink their approach to metro networks to simplify and speed service delivery and to cut costs. We believe traditional approaches work to some extent but ultimately do not and will not scale. Bandwidth usage has changed, becoming much more dynamic, which requires a shift in thinking about how to solve the congestion problem, as well as how to implement the fix. Separating service delivery into a separate session layer will benefit operators in several ways:

o By simplifying the functionality needed in the transport network, providers could turn to less expensive solutions.
o By flattening the network, providers reduce operational expenses.
o Providers can deliver a better experience by protecting QoS session by session, and can run their networks hotter by supporting more subscribers.
o Service creation can be simplified and rolled out faster, with less overhead and time spent on engineering studies of the network.
o A focus on service creation and delivery could enable more sophisticated services over a wider footprint, irrespective of the network elements deployed.

We are intrigued by new approaches and believe some of them represent a disruptive opportunity in the growing carrier edge market to help providers deliver services faster, with the quality customers deserve, at much lower cost. Sometimes it is hard to imagine that a major network can be deployed in a radically different way, but tough economic times do bring some of the best ideas to market. This also reflects the shift from proprietary networks to more standards-based hardware, which leverages chip enhancements and declining cost structures. We expect service provider networks to look more and more like IT networks and large data centers, with standard processing equipment in clusters or grids, and with smart software and advanced algorithms running the network.

Essential guidance

Today, we need to think differently and keep an open mind about how to deploy intelligence closer to the edge without adding complexity and cost. Life in telecom is sinusoidal -- the edge used to be somewhat simple and easy to deploy. But life has given us extensive applications and shifted power to the users, enhancing our work and home life. Change must occur in carrier networks to meet this demand, and change is always hard. We know the edge has become the problem again. This time, let's solve it quickly and painlessly with a new low-cost approach. The answer does seem clear: start by separating the service from the transport. If this can be done, a lower-cost network can be deployed, and faster, richer service creation and delivery will be possible. This also opens up the opportunity for major innovation at both the service and transport levels. As discussed, vendors that can break the paradigm of "simple and cost-effective versus intelligent and high-cost" will deliver true differentiation and innovation and will enjoy a competitive advantage. The ideas laid out in this study give some clues as to how this might be done.
Eve Griliches is a program director within IDC's Telecommunications Equipment group.


Delivering True Carrier Ethernet business services


By Malcolm Loro

Rapid increases in the breadth and sophistication of business applications are impacting enterprises' networking requirements and driving the adoption of next-generation packet-based Virtual Private Network (VPN) services. Ethernet, in particular, provides attractive benefits: simplicity, scalability, lower cost, and the ability to support multiple applications and services over a single network interface. Today, business customers can rely on robust network-based Ethernet business services to deliver a full range of critical business applications, such as:

Secure, multi-site connectivity with traffic separation
Data Center Interconnect
Access to Layer 3 services (Internet access, IP VPNs)
Transport of advanced communications applications (voice, data and video)
Access to cloud services (server virtualization, Software as a Service (SaaS))

Enterprise customers are drawn to network-based Ethernet services as a means to control costs and ensure business processes can scale effectively, while maintaining control over critical IT functions. Users pay only for the bandwidth they require, and have the flexibility to introduce new applications and additional bandwidth rapidly -- in very granular increments -- when needed. A single, familiar Ethernet interface enables convergence of all services over a common network infrastructure, simplifying operations. Layer 2 Ethernet VPN services, unlike IP VPNs, are compatible with non-IP traffic and typically are simpler to manage and less expensive to deploy. In addition, Ethernet services provide secure traffic separation and full service transparency, allowing the enterprise to maintain in-house control over routing information and security and encryption techniques. Unlike the complexity of multiple overlay networks based on legacy technologies, Ethernet has ushered in a new era of one network, multiple services.
Ethernet gives service providers the ability to customize and differentiate service offerings with tiered classes of service, performance reporting, and SLA guarantees. In addition, Ethernet's bandwidth offerings -- much more granular than legacy TDM services -- provide the flexibility and scalability end-users want. However, not all Ethernet services are created equal. A number of business trends -- including the maturation of virtualization and cloud-based applications -- are impacting service provider networks and service consumption. Service providers must adapt quickly in an environment where demand for services is evolving continuously, network traffic distribution is changing constantly, and bandwidth is growing continually. Pure cost reduction, although very important, is no longer sufficient; the emphasis must shift to growing top-line revenue through the creation and deployment of new Ethernet business services with greater velocity, automation, and customization. Ciena's Ethernet business service solutions enable the transition to service-driven networks, combining software intelligence and programmable devices to create low-touch, high-velocity networks. Only Ciena delivers True Carrier Ethernet, which offers a wide range of enhanced Ethernet capabilities and features -- above the minimum standards defined by organizations such as the Metro Ethernet Forum (MEF) -- that significantly accelerate and automate scalable Ethernet service creation and activation.


Ciena's Carrier Ethernet Service Delivery solution

Ciena's True Carrier Ethernet services architecture and Carrier Ethernet Service Delivery (CESD) portfolio are depicted in Figures 1 and 2. This solution allows service providers to realize new levels of speed, differentiation, operational scalability, and reliability in delivering revenue-generating Ethernet business services.

Figure 1. True Carrier Ethernet services architecture


ActivEdge 3000 Series (Service Delivery Switches)

| Model   | Description                                  | NNI Ports                          | UNI Ports                                 | Total Gb/s | Form Factor         | Temp Range     |
|---------|----------------------------------------------|------------------------------------|-------------------------------------------|------------|---------------------|----------------|
| 3180    | Multiservice Pseudowire Gateway (8 T1/E1)    | (2) 100M/GbE SFP                   | (8) 10/100M RJ45, (8) T1/E1               | 2          | 1RU                 | -40°C to +65°C |
| 3181    | Multiservice Pseudowire Gateway (16 T1/E1)   | (2) 100M/GbE SFP                   | (8) 10/100M RJ45, (16) T1/E1              | 2          | 1RU                 | -40°C to +65°C |
| 3190    | Multiservice Delivery and Aggregation        | (40) 100M/GbE SFP, (2+2) 10G SFP+  | (16/32) STM-1/OC-3, (4/8/32) STM-4/OC-12  | 84         | 3RU                 | 0°C to +50°C   |
| 3911    | Weather-proof Ethernet Demarcation (10-port) | (2) 100M/GbE SFP                   | (8) 10/100/1000M RJ45                     | 10         | Outdoor             | -40°C to +65°C |
| 3916    | Ethernet Demarcation (6-port)                | (2) GbE SFP                        | (2) 100M/GbE SFP, (2) 100M/GbE SFP/RJ45   | 6          | 1RU ETSI ~280mm (w) | 0°C to +50°C   |
| 3920    | Ethernet Demarcation (12-port)               | (4) 100M/GbE SFP                   | (8) 10/100/1000M RJ45                     | 12         | 1RU ETSI            | 0°C to +50°C   |
| LE-311v | Ethernet Service Delivery                    | (4) GbE SFP                        | (24) 10/100M RJ45                         | 6.4        | 1RU                 | 0°C to +50°C   |
| 3930    | Extended-temp Ethernet Service Delivery      | (2) GbE/10G SFP+                   | (4) 100M/GbE SFP, (4) 100M/GbE SFP/RJ45   | 28         | 1RU ETSI            | -40°C to +65°C |
| 3931    | Weather-proof Ethernet Service Delivery      | (2) GbE/10G SFP+                   | (4) 100M/GbE SFP, (4) 100M/GbE SFP/RJ45   | 28         | Outdoor             | -40°C to +65°C |
| 3940    | 1st Tier Ethernet Aggregation                | (4) 100M/GbE SFP/RJ45              | (20) 100M/GbE SFP/RJ45                    | 24         | 1RU                 | 0°C to +50°C   |
| 3960    | 10G Ethernet Service Delivery                | (2) 10G XFP                        | (2) 10G XFP, (8) 100M/GbE SFP/RJ45        | 48         | 1RU                 | 0°C to +50°C   |

ActivEdge 5000 Series (Service Aggregation Switches)

| Model | Description                         | NNI Ports                           | UNI Ports              | Total Gb/s | Form Factor | Temp Range     |
|-------|-------------------------------------|-------------------------------------|------------------------|------------|-------------|----------------|
| 5140  | Extended-temp 1st Tier Aggregation  | (4) 100M/GbE SFP/RJ45               | (20) 100M/GbE SFP/RJ45 | 24         | 2RU ETSI    | -40°C to +65°C |
| 5150  | Extended-temp Aggregation/MPLS Edge | (2) Dual 10G XFP Option Slots       | (48) 100M/GbE SFP      | 88         | 2RU ETSI    | -40°C to +65°C |
| 5305  | Ethernet Aggregation/MPLS Edge      | (5) Slots -> (10) 10G or (120) GbE  | NA                     | 50         | 6RU         | 0°C to +40°C   |
| 5410  | High-capacity Aggregation/MPLS Edge | (10) Slots -> (40) 10G or (480) GbE | NA                     | 1000       | 22RU        | 0°C to +40°C   |
Figure 2. CESD platform summary

The ActivEdge 3000 Series of Service Delivery Switches (SDS) is available with a range of 10/100 Ethernet, Gigabit Ethernet (GbE) and 10GbE physical port counts, to fit small, medium, and large customer sites and multi-tenant office buildings precisely, with placement in customer premises, on the sides of buildings, or on utility poles. The ActivEdge 5000 Series of Service Aggregation Switches (SAS) provides multiple tiers of FE/GbE/10GbE aggregation to better fill the transport facilities within both the metro access and aggregation tiers and ultimately minimize the number of IP/MPLS router ports with which they interwork. These switches can be deployed in a wide variety of locations, including business parks, outside plant cabinets, and central offices. The CESD portfolio incorporates the latest innovations in Ethernet switching technology, control plane protocols, encapsulation techniques, QoS capabilities, and Operations, Administration, and Maintenance (OAM) mechanisms. This combination enables the service provider to deliver carrier-grade business services backed by verifiable SLAs with rigorous performance and availability guarantees. However, the CESD portfolio is much more than a set of network devices. It is a unified portfolio that employs a common service-aware operating system and Ethernet Services Manager (ESM) system to provide exceptional operational efficiency and consistent system and service attributes across all Ethernet access and aggregation applications. The more services, customer types, and locations served with a common operating model, the greater the return on investment. With Ciena's CESD portfolio, service providers can optimize all aspects of the service lifecycle, accelerate time to revenue, and increase profitability. A common Service-Aware Operating System (SAOS) across all CESD platforms provides consistent service offerings and a common deployment and provisioning model across the network. This consistency drives operational efficiencies and cost savings by permitting rapid rollout of new services and the latest advances in Ethernet technology and standards.

The Ciena advantage: True Carrier Ethernet

While the CESD portfolio supports the complete catalog of MEF-compliant Carrier Ethernet service offerings, it goes above and beyond the minimum capabilities defined by the standards to provide True Carrier Ethernet -- a design implementation that allows service providers to differentiate their service attributes for each of the five key MEF-defined Carrier Ethernet characteristics: Standardized Services, Scalability, Quality of Service, Reliability and Service Management.

Standardized services, scalability and quality of service

Ciena provides the greatest flexibility for building and deploying Ethernet networks by abstracting the services from the access or transport network technology and supporting all MEF services across any topology and different tunnel encapsulation formats, as shown in Figure 3. The service and transport layers are coupled through comprehensive, standards-based OAM capabilities to provide visibility, manageability, and controls.

Figure 3. A common service portfolio for all markets


With no constraints imposed by the transport network, a common service portfolio can be deployed for all markets, and operators can optimize bandwidth, network paths, and reliability alternatives without sacrificing service selection or quality. In addition, all services can be provisioned on any port. In fact, one important differentiator of True Carrier Ethernet is that logically separate Ethernet Virtual Connections (EVCs) with different encapsulations can share the same physical port. The cornerstone of True Carrier Ethernet is Ciena's virtual switching architecture. Services are typically identified with tagging/labeling schemes, which can be difficult to coordinate across larger topologies with many service instances. With Ciena's virtual switching architecture, the physical switch can be partitioned into logical switch resources, known as Virtual Switches, which create separate, secure address and switching domains within a single Ethernet service switch. Virtual switches provide:

Isolated domains for repeating MAC addresses, VLANs, and MPLS labels
Simplified tagging architectures with improved security (reduced cross-talk)
Easier interworking between disparate encapsulation formats
Tremendous MAC scalability

Virtual switching expands the operator's ability to address customer connectivity and service needs while overcoming network and topology limits. In fact, Ciena's virtualized architecture scales to thousands of virtual switches to provide an exceptional level of service scalability. In addition, rich hierarchical classification and traffic management work in combination with the virtual switches to provide granular and measurable bandwidth control for predictable QoS. Superior QoS controls provide predictable service delivery and allow the creation of enforceable and reliable SLAs.
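The virtual-switch idea described above can be illustrated with a small sketch. This is not Ciena's implementation -- just a hedged model of the core property that each Virtual Switch keeps its own MAC-learning table, so the same customer MAC address can repeat across domains without colliding:

```python
class VirtualSwitch:
    """One isolated MAC-learning domain inside a physical switch."""
    def __init__(self, name):
        self.name = name
        self.mac_table = {}          # MAC address -> port, private to this domain

    def learn(self, mac, port):
        self.mac_table[mac] = port

    def lookup(self, mac):
        return self.mac_table.get(mac)   # None -> flood within this domain only


class PhysicalSwitch:
    """A physical switch partitioned into many virtual switches."""
    def __init__(self):
        self.domains = {}

    def create_vswitch(self, name):
        vs = VirtualSwitch(name)
        self.domains[name] = vs
        return vs


switch = PhysicalSwitch()
cust_a = switch.create_vswitch("customer-a")
cust_b = switch.create_vswitch("customer-b")

# The same (duplicate) MAC address can exist in both domains,
# because each virtual switch keeps its own table.
cust_a.learn("00:11:22:33:44:55", port=1)
cust_b.learn("00:11:22:33:44:55", port=7)

print(cust_a.lookup("00:11:22:33:44:55"))  # 1
print(cust_b.lookup("00:11:22:33:44:55"))  # 7
```

A production switch would add VLAN and MPLS-label scoping per domain as well; the isolation principle is the same.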
With True Carrier Ethernet, granular traffic contracts can be applied to richly defined services using a combination of Layer 1 through Layer 4 parameters to classify a service flow. Tight control is achieved by segmenting bandwidth (e.g., by service category, customer, department or user, and application) using a hierarchy of virtual ports, with traffic profiles and traffic management applied at all levels in the hierarchy. Providers can then create unique service offerings for broader customer appeal and higher revenues.

Reliability

Ciena's CESD portfolio provides Ethernet flexibility and transmission reliability with multiple resiliency options, including G.8032 Ethernet Ring Protection Switching and multi-tiered, dual-homed PBB-TE. G.8032 provides deterministic 50 ms protection switching, enabling operators to deliver carrier-grade Ethernet services and attain the resiliency capabilities of the legacy SONET infrastructure without the associated costs. Ciena's G.8032 solution enables operators to create 1GbE or 10GbE rings that are highly flexible and scalable, permitting the number of network elements on the ring to increase as needs grow and even allowing ring spans based on other service layer technologies and speeds. Ciena's proven PBB-TE solution applies a multi-tiered tunnel approach, with tiers of PBB-TE tunnels providing device and path protection such that operators can add, service, and upgrade sites without having to reconfigure all layers of network elements. This capability provides deterministic protection while simplifying the provisioning and ongoing maintenance effort.


Service management

Ciena's industry-leading service management tools, including the ESM and comprehensive OAM capabilities, empower operators to deploy Carrier Ethernet networks quickly and easily, accelerate new service introduction, and assure service quality and availability. Ciena's ESM is an automated service activation, creation, and management platform that implements a groundbreaking service provisioning technique to dramatically accelerate service roll-outs. As shown in Figure 4, network operators can create service visualization screens providing hierarchical, network, service, and inventory/events views. Each view is instrumented to provide the necessary access and control for managing the network. Service provisioning has been simplified through the use of service templates and provisioning wizards. For example, an operator can select two endpoints for a point-to-point service and run the provisioning wizard to set service-specific fields, automatically creating the service and configuring any intermediate elements. Service attributes, such as QoS parameters -- committed information rate, excess information rate and burst parameters -- can be configured and later changed automatically through the use of service templates defining those parameters.
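The QoS parameters mentioned above -- committed information rate (CIR), excess information rate (EIR), and burst sizes -- together form a MEF-style bandwidth profile, commonly enforced with a two-rate token-bucket policer. The following is a simplified, illustrative sketch (not Ciena's implementation); token refill is driven explicitly rather than by a real clock so the behavior is deterministic:

```python
class BandwidthProfile:
    """Simplified MEF-style two-rate policer: frames within CIR/CBS are
    'green' (SLA-guaranteed), frames within EIR/EBS are 'yellow'
    (best-effort), and everything beyond is 'red' (dropped)."""

    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps, cbs_bytes
        self.eir, self.ebs = eir_bps, ebs_bytes
        self.c_tokens = cbs_bytes   # committed bucket starts full
        self.e_tokens = ebs_bytes   # excess bucket starts full

    def refill(self, seconds):
        """Add tokens for `seconds` of elapsed time, capped at burst size."""
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir / 8 * seconds)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir / 8 * seconds)

    def mark(self, frame_bytes):
        if frame_bytes <= self.c_tokens:
            self.c_tokens -= frame_bytes
            return "green"
        if frame_bytes <= self.e_tokens:
            self.e_tokens -= frame_bytes
            return "yellow"
        return "red"


# 10 Mb/s committed, 5 Mb/s excess; small bursts chosen for illustration
profile = BandwidthProfile(cir_bps=10_000_000, cbs_bytes=3000,
                           eir_bps=5_000_000, ebs_bytes=3000)
colors = [profile.mark(1500) for _ in range(5)]
print(colors)            # ['green', 'green', 'yellow', 'yellow', 'red']

profile.refill(0.01)     # 10 ms of refill restores the committed bucket
restored = profile.mark(1500)
print(restored)          # green
```

A service template would bundle these four numbers (plus classification rules) under a name such as "Gold" and apply them per EVC.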

Figure 4. Service visualization and provisioning

Once services are deployed, operators require an effective OAM strategy to monitor the health and performance of the network and end-customer services. The approach to OAM can make or break the business case, as ineffective implementations will drive up costs and leave customers dissatisfied with SLA performance. Ciena's portfolio delivers an extensive OAM feature suite to monitor the status of system and network links; measure the performance of customer Ethernet services; confirm link and service throughput and quality conform to SLAs; and distribute this management information across the network. OAM features available today include:


IEEE 802.1ag Connectivity Fault Management (CFM)
IEEE 802.3ah Ethernet in the First Mile (EFM)
IEEE 802.1AB Link Layer Discovery Protocol (LLDP)
ITU-T Y.1731 Performance Monitoring: Delay, Jitter, Loss
IETF RFC 5618 TWAMP Sender & Responder for L3 SLA Monitoring
IETF RFC 2544 Performance Benchmarking Test Generation and Reflection
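Of these, Y.1731 delay measurement works by exchanging timestamped DMM/DMR frames. The arithmetic can be sketched as follows -- an illustrative simplification of the standard's procedure, with made-up timestamp values:

```python
def two_way_delay_ms(t1, t2, t3, t4):
    """Y.1731-style two-way frame delay from a DMM/DMR exchange:
    t1 = DMM sent, t2 = DMM received, t3 = DMR sent, t4 = DMR received.
    Subtracting the responder's processing time (t3 - t2) removes the
    need for clock synchronization between the two endpoints."""
    return round((t4 - t1) - (t3 - t2), 3)

def delay_variation_ms(delays):
    """Frame delay variation (jitter) as deltas between consecutive samples."""
    return [round(abs(b - a), 3) for a, b in zip(delays, delays[1:])]

# Timestamps in milliseconds (illustrative values, not real measurements)
samples = [two_way_delay_ms(0.0, 4.1, 4.3, 8.5),
           two_way_delay_ms(100.0, 104.9, 105.1, 110.2),
           two_way_delay_ms(200.0, 204.2, 204.4, 208.7)]
print(samples)                      # [8.3, 10.0, 8.5]
print(delay_variation_ms(samples))  # [1.7, 1.5]
```

One-way delay measurement (1DM) drops the reflection step but then does require synchronized clocks at both ends.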

These OAM tools pave the way to increased competitiveness and customer satisfaction. For example, built-in RFC 2544 Performance Benchmarking capabilities empower the operator to be highly responsive to service disruptions. When service impacts are detected by ongoing PM tests (Y.1731 or TWAMP) or are reported by the end-customer, performance tests can be initiated immediately by the NOC -- no technician scheduling is required, and no trucks are rolled. Testing can occur at virtually no cost to isolate and localize the issue and then focus resources on addressing the specific root cause. This responsiveness means troubles are fixed faster, minimizing service impact and creating higher customer satisfaction.

Putting it all together: a service example

Figure 5 shows an example in which a service provider -- here referred to as XYZ Communications -- delivers managed True Carrier Ethernet business services for an enterprise customer who wants to interconnect multiple sites over a single, reliable, and cost-effective network to provide guaranteed, scalable services that are compatible with a growing suite of IP and Ethernet applications.

Figure 5. L2VPN service

In this example, a multipoint-to-multipoint EVP-LAN EVC provides a L2VPN service connecting four enterprise sites, so that all sites appear to be on the same LAN and have access to shared resources such as servers. Using customer-located ActivEdge 3000 Series service delivery switches, XYZ Communications provides VLAN tagging for traffic separation and differentiated classes of service for different applications and departments, with individual traffic prioritization per flow. Customer satisfaction and loyalty are ensured by providing strong SLA guarantees, based on the rich set of QoS and traffic management techniques, combined with sophisticated OAM diagnostic tools. ActivEdge 5000 Series service aggregation switches cost-effectively interwork with XYZ Communications' existing IP/MPLS core, providing efficient aggregation over a shared Metro Ethernet Network (MEN) and minimizing the number of expensive IP/MPLS router ports required.
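The aggregation role in this example -- mapping customer VLANs into the shared MPLS core -- can be sketched as a simple binding table. The class name and label values here are hypothetical, purely to illustrate per-attachment-circuit separation:

```python
class VlanToMplsMapper:
    """Hypothetical sketch of VLAN-to-MPLS mapping at an aggregation
    switch: each (port, vlan) attachment circuit is bound to its own
    pseudowire label, keeping customers' traffic separated across the
    shared core."""

    def __init__(self):
        self.bindings = {}
        self.next_label = 100000   # arbitrary starting label for the sketch

    def bind(self, port, vlan):
        key = (port, vlan)
        if key not in self.bindings:       # one pseudowire per attachment circuit
            self.bindings[key] = self.next_label
            self.next_label += 1
        return self.bindings[key]


mapper = VlanToMplsMapper()
pw_a = mapper.bind(port=1, vlan=100)   # customer A
pw_b = mapper.bind(port=1, vlan=200)   # customer B, same port, different VLAN
print(pw_a != pw_b)                           # True: traffic stays separated
print(mapper.bind(port=1, vlan=100) == pw_a)  # True: the binding is stable
```

In a real deployment the labels are negotiated by the MPLS control plane rather than assigned locally; the point is only that separation is preserved per VLAN.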


The aggregators provide VLAN-to-MPLS mapping and Ethernet traffic separation for service transparency, security, and scalability. With Ethernet providing the access and aggregation, this configuration is simpler and less expensive than a fully routed L3VPN, and allows the customer to maintain in-house control over routing tables and security and encryption techniques. XYZ Communications takes advantage of the True Carrier Ethernet service management capabilities to achieve rapid automated service activation, simplify and scale its operations, and lower deployment costs for service activation, changes, and upgrades. When each CESD switch is connected to the network, it automatically retrieves and loads its configuration file, improving the speed and accuracy of device turn-up and eliminating the need to deploy a highly trained installation technician. Once the device has been auto-configured, it is auto-discovered by the ESM and added to the existing network topology. Using the ESM, XYZ Communications creates differentiated service templates based on the wealth of CESD traffic classification features and capabilities. To provision a new service, the operator launches a provisioning wizard and, with point-and-click simplicity, creates the desired EVC, applies the appropriate service templates, and activates the service. As the customer's Ethernet service requirements evolve over time, upgrading the traffic contracts at each CESD switch is as simple as modifying the handful of service templates. Rather than a truck roll to each site or device-by-device remote configuration, the appropriate service template is modified and pushed out automatically to update all the devices with EVCs implementing that service.
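The template-push model just described can be sketched in a few lines: services hold a reference to a shared template, so a single edit propagates to every service that uses it. The names and structure here are hypothetical, not the ESM's actual data model:

```python
class TemplateStore:
    """Illustrative model of template-driven provisioning: services
    reference a shared template by name, so editing the template once
    updates every service attached to it."""

    def __init__(self):
        self.templates = {}   # template name -> shared attribute dict
        self.services = []    # list of (evc_id, template_name)

    def define(self, name, **attrs):
        self.templates[name] = attrs

    def attach(self, evc_id, name):
        self.services.append((evc_id, name))

    def effective_rate(self, evc_id):
        name = next(t for e, t in self.services if e == evc_id)
        return self.templates[name]["cir_mbps"]


store = TemplateStore()
store.define("Silver", cir_mbps=40)
for evc in ("evc-101", "evc-102", "evc-103"):
    store.attach(evc, "Silver")

# One template edit upgrades every Silver service from 40 to 50 Mb/s
store.templates["Silver"]["cir_mbps"] = 50
print([store.effective_rate(e) for e, _ in store.services])  # [50, 50, 50]
```

The same indirection is what makes the upgrade atomic from the operator's point of view: there is no per-device edit to forget or mistype.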
For example, when XYZ Communications changed its Silver service from 40 to 50 Mb/s, every service configured as Silver was changed automatically, dramatically reducing the number of configuration and provisioning steps required and enabling a rapid, error-free service upgrade.

Realizing the service-driven network

To stay ahead of the curve, service providers must automate the network, from customer edge to service provider core, and deliver high-performance business services with plug-and-play simplicity. The realization of these capabilities requires a new level of intelligence and functionality from the network: a service-driven network that can respond rapidly to a multitude of customer requirements, deliver and guarantee a dynamic range of network services, and add new features and capabilities as customer needs and bandwidth requirements change. By developing a service-driven network, providers create the means to compete more effectively, delivering new Ethernet services with greater velocity while adapting to and capitalizing on the application innovation driving end-users' bandwidth usage and networking behavior. Ciena enables service providers to create true service-driven networks optimized for top-line revenue growth. These are software-defined, fully automated networks that can activate any service between any set of endpoints and adapt to end-users' changing needs. With True Carrier Ethernet advances delivering a wide range of capabilities and features that enhance the key Ethernet business service attributes, service providers can realize new levels of speed, agility, and performance in the deployment of revenue-generating services.

Malcolm Loro is a Director of Portfolio Solutions at Ciena Corporation.


About Ciena
Ciena is the network specialist. We collaborate with customers worldwide to unlock the strategic potential of their networks and fundamentally change the way they perform and compete. With focused innovation, Ciena brings together the reliability and capacity of optical networking with the flexibility and economics of Ethernet, unified by a software suite that delivers the industry's leading network automation. We routinely post recent news, financial results and other important announcements and information about Ciena on our website. For more information, visit www.ciena.com.
