
White Paper

IP Next-Generation Network Requirements for Scalable and Reliable Broadcast IPTV Services

The Cisco CRS-1 Carrier Routing System delivers scalable and reliable multicast for broadcast IPTV services in
next-generation core networks.
For service providers, the ability to offer broadcast IPTV services with high availability and reliability is imperative to remain competitive
in an increasingly crowded industry. Meeting customer expectations for a high-quality IPTV service requires a network architecture and
routing system that can handle the challenges of delivering broadcast video to millions of users simultaneously while also handling video
on demand (VoD) as needed. The Cisco IP Next-Generation Network (IP NGN) architecture and the Cisco CRS-1 Carrier Routing System
meet these requirements through a unique combination of outstanding platform scalability and reliability and efficient multicast for video distribution across the network. Together, these solutions from Cisco Systems allow service providers to deliver entertainment-grade video services over an IP NGN.
CHALLENGES FOR DELIVERING ENTERTAINMENT-GRADE VIDEO ON IP NETWORKS
Service providers are moving decisively to deploy a variety of video-based services over IP infrastructure to maximize average revenue per
user and to remain competitive. IP enables added flexibility and lowers cost for both broadcasters and service providers as they
increasingly move to offer bundled triple-play services (video, voice, and data) over a converged infrastructure. Video over the Internet has also evolved in many forms over the last few years, from the latest TV shows being available for download to users posting their own content on social community websites. However, this type of video, normally referred to as over-the-top video, should be distinguished from true entertainment-grade video, which can typically be displayed on large screens to provide sufficient quality of experience for the viewer, usually in real time.
Delivering entertainment-grade video over IP, often referred to as IPTV, which includes both IP-based broadcast and VoD services, poses
significant challenges as service providers scale their networks and delivery systems to manage millions of subscribers, withstand periods
of peak demand, and provide a superior quality of experience for viewers while balancing network capacity and efficient capital
investment. This paper primarily focuses on requirements and solutions for scalable broadcast IPTV services for service providers.
MEETING HIGH USER EXPECTATIONS FOR VIDEO QUALITY
Entertainment-grade broadcast video, whether standard-definition television (SDTV) or high-definition television (HDTV), carries high user expectations for service level. Similar to the dial tone of the traditional public switched telephone network (PSTN), viewers have
come to expect broadcast video on their televisions to simply work. Viewers do not want to suffer quality losses or disruptions in
programming, meaning that the provider must protect the quality of broadcast video service from outages or degradation. Continued poor
quality or repeated outages for video services, especially broadcast, will mean lost customers for the service provider.
IP-based network video is inherently very intolerant of packet loss because it is often highly compressed using encoding schemes such as MPEG-2 and MPEG-4. Because these video codecs typically cannot recover from packet loss at the network layer, losing even a single packet of IP-encapsulated video can produce a visible degradation of video quality.
Video codecs typically receive an MPEG-2 or MPEG-4 stream composed of three types of frames: I, P, and B frames.

• I frames are intra frames that contain the information needed to describe an entire frame within a video stream. They are similar to standalone images that can be used to re-create the complete picture at that point in the stream.

• P frames are predictive frames; they rely on information from a previous I or P frame to re-create themselves as a full picture.
All contents are Copyright 1992–2006 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information.

Page 1 of 13

• B frames are bidirectional, needing information from both the previous and following frames in the sequence to draw themselves fully. The previous and following frames can be only I or P frames.

The MPEG encoder or decoder uses a sequence of I frames, followed by P or B frames within a stream, to form a group of pictures (GOP).
The GOP size is the number of frames between successive I frames, as shown in Figure 1. In SDTV broadcasts, the GOP typically varies from 12 frames in a 25-frames-per-second (fps) PAL signal to 15 frames in a 30-fps NTSC signal, meaning an I frame is sent approximately every half second.
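The half-second figure can be verified with a short calculation (a sketch; the GOP sizes and frame rates are the values quoted above, and the function name is illustrative):

```python
# Approximate interval between successive I frames, assuming one I frame
# per GOP as described in the text.
def i_frame_interval_seconds(gop_size: int, fps: float) -> float:
    """Seconds between successive I frames for a given GOP size and frame rate."""
    return gop_size / fps

pal = i_frame_interval_seconds(gop_size=12, fps=25)   # PAL: 0.48 s
ntsc = i_frame_interval_seconds(gop_size=15, fps=30)  # NTSC: 0.50 s
print(f"PAL: {pal:.2f} s, NTSC: {ntsc:.2f} s")
```

Both formats place an I frame roughly every half second, which bounds how long a decoder must wait to recover after a damaged reference frame.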
Figure 1. GOP Sequence in an MPEG Video Stream

The GOP encoding sequence allows an MPEG stream to greatly reduce the bandwidth requirements of raw video to rates that are more manageable for today's networks. However, the dependency of the P and B frames on the I frame also creates a requirement for highly reliable network transmission. The loss of an I frame can introduce significant visual artifacts such as pixelation, macroblocking, or even loss of the picture frame, degrading the subscriber's viewing experience.
Most service provider networks today support some type of high-availability mechanism [1] to provide protection in the event of a node or link failure. Yet tests [2] have shown that loss of even a single IP packet containing the I frame within the GOP of a single program transport stream (SPTS) can lead to significant degradation of the viewing experience.
Figure 2. Impact of IP Packet Loss on the Viewing Experience for an MPEG Stream

As shown in Figure 2, a single lost packet can result in up to one second of viewing degradation, and a subsequent loss of up to 1000 packets in up to four seconds of degradation. Industry norms define acceptable video quality of experience as no more than one visible degradation per two-hour program. The corresponding network quality of service (QoS) for this measure is an allowed packet loss rate of approximately one packet in one million (10^-6). For service providers delivering IPTV, this 10^-6 maximum loss is considered a baseline requirement in the market.
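The loss budget can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes the 3.75-Mbps SPTS used in the Cisco tests and a common (but here assumed, not stated) packing of seven 188-byte MPEG transport-stream packets per IP packet:

```python
# Rough check of the one-visible-artifact-per-program loss budget.
# Framing assumption (not from the paper): 7 x 188-byte TS packets per
# IP packet, i.e. 1316 payload bytes.
STREAM_BPS = 3.75e6                 # SPTS rate from the Cisco tests
PAYLOAD_BITS = 7 * 188 * 8          # assumed TS-over-IP packing
PROGRAM_SECONDS = 2 * 60 * 60       # two-hour program

packets = STREAM_BPS * PROGRAM_SECONDS / PAYLOAD_BITS
max_loss_rate = 1 / packets         # at most one lost packet per program
print(f"packets per program: {packets:.3e}")
print(f"allowed loss rate:   {max_loss_rate:.1e}")
```

The result, a few packets in ten million, is within an order of magnitude of the 10^-6 figure, consistent with treating one-in-a-million loss as the baseline requirement.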

1. Multiprotocol Label Switching (MPLS) Fast Reroute (FRR) is one such mechanism, which typically provides low levels of packet loss through failover to a backup path, usually within 50 ms.
2. Tests conducted by Cisco used a single program transport stream (SPTS) at 3.75 Mbps with a GOP size of 15.


OPTIMIZING THE INFRASTRUCTURE FOR BROADCAST VIDEO


Whether a traditional telephone company, a cable company, or a new network operator, most providers are redesigning their networks to build IP-based NGNs. These providers have realized that an IP NGN is required to handle the availability, reliability, and tremendous resource consumption demands of video while still supporting voice and data services with flexibility and control. Given that IP network video has far more stringent jitter and loss requirements than typical IP voice services, this can be especially challenging when combined with the dynamic nature of best-effort Internet traffic, which may also include over-the-top video, in networks today. Furthermore, the service provider's network infrastructure must continue to support existing voice and business VPN services. Hence, an IP NGN must also provide some mechanism for service isolation and separation while still providing the ubiquity and flexibility of a common IP foundation. For a more detailed discussion of IP NGN requirements, refer to the Cisco white paper "Building the Carrier-Class IP Next Generation Network."
Telephone companies are adding infrastructure to deliver video over existing telephone lines or through new deployments of fiber links to
homes and other consumer locations. Cable operators already have a video infrastructure in place. However, this infrastructure in certain
areas lacks the full flexibility that IP can provide, for both broadcast and VoD services. As these operators move to offer more flexible
content and expanded channel line-ups, the current network infrastructure must also meet these new requirements efficiently.
The challenge for all these providers is to design their networks in a way that delivers a superior user experience and facilitates scalability at the lowest possible cost. A more detailed discussion of delivering IPTV and other video services over an IP NGN is presented in the Cisco white paper "Optimizing Video Transport in Your IP Triple Play Network."
DELIVERING BROADCAST VIDEO SERVICES EFFICIENTLY
Individual bandwidth requirements per household are expected to increase dramatically, reaching approximately 1.1 terabits (Tb) per month [3] by 2010 in the U.S. alone. This bandwidth growth will be driven by the adoption of HDTV, personal video recorders (PVRs), and other video-based applications over the network. For comparison, 20 of these homes would generate more traffic than the entire Internet backbone in 1995. As a result, estimates project that global monthly IP traffic will rise to 11 exabytes [4] over the next five years, with a compound annual growth rate (CAGR) of more than 56 percent globally, led largely by video applications.
Because broadcast video is streamed from a single source to multiple receivers (subscribers), these streams can create a tremendous traffic load on network infrastructure that has not been optimized for their efficient distribution. With customized channel line-ups in each region and with hundreds of channels or more [5] in each line-up, the ability to scale delivery of broadcast video to millions of users is a crucial requirement for service providers.
The bandwidth required for each broadcast stream can vary from 1 to 19.39 Mbps [6], depending on the picture resolution, encoding, and compression rates used. Hence, statically distributing all channels to all possible locations may not be the best use of network capacity; it would be much more efficient to have some dynamic capability that distributes the necessary channels to the various market regions as they require them. This approach would enable optimal bandwidth use in the network as viewer patterns fluctuate. Clearly, IP networks have to be architected to handle this dynamism both efficiently and cost-effectively while supporting the massive scale that is necessary end to end.
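The case for dynamic distribution can be illustrated with simple sizing arithmetic. The sketch below is illustrative only: the channel counts are assumptions, and the per-stream rates are taken from the ranges quoted above (3.75 Mbps for an SD stream, 19.39 Mbps maximum for HD):

```python
# Illustrative sizing of a regional channel line-up. Channel counts are
# hypothetical; per-stream rates fall within the ranges quoted in the text.
def lineup_bandwidth_mbps(sd_channels: int, hd_channels: int,
                          sd_mbps: float = 3.75, hd_mbps: float = 19.39) -> float:
    """Aggregate bandwidth needed to carry the given mix of channels."""
    return sd_channels * sd_mbps + hd_channels * hd_mbps

# Static distribution: every channel delivered to the region at all times.
static_all = lineup_bandwidth_mbps(sd_channels=250, hd_channels=50)
# Dynamic distribution: only the channels currently being watched there.
watched_now = lineup_bandwidth_mbps(sd_channels=120, hd_channels=30)

print(f"all channels:     {static_all:.1f} Mbps")
print(f"actually watched: {watched_now:.1f} Mbps")
```

Even with these modest assumed numbers, delivering only the channels a region is watching roughly halves the sustained load, which is the saving a dynamic, multicast-driven distribution model captures as viewer patterns fluctuate.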
To summarize, service providers need a network architecture that can deliver the following:

• Emerging large-scale IP broadcast video services with the availability and reliability that deliver an entertainment-grade experience, which will be the foundation of customer retention

3. Cisco estimates, based on each home having one HDTV television, one SDTV television, two PVRs, one VoIP phone, and high-speed data service.
4. 1 exabyte (EB) = 1 × 10^18 bytes. Source: Cisco estimates, Ovum, Bernstein, and public company data.
5. Major cable providers in the United States currently offer more than 300 channels, while some terrestrial service providers in Japan are looking to offer channel line-ups in excess of 1000 channels across the country. Most satellite providers also offer more than 500 channels today.
6. 19.39 Mbps is the maximum data rate defined by the Advanced Television Systems Committee (ATSC) for HDTV broadcasts using MPEG-2 transport.


• Capability to recover from network failures quickly and in a way that is transparent to end customers

• Multiple services across a common network infrastructure, with separation of those services for enhanced availability, reliability, and manageability

• Support for efficient and more dynamic distribution of broadcast video over IP for increased flexibility and reduced costs

THE SOLUTION: A NETWORK DESIGNED FOR HIGHLY AVAILABLE AND SCALABLE BROADCAST VIDEO SERVICES
To address the challenges of delivering entertainment-grade video programming, the service provider's network must support high availability and scalability. Video support must be deployed in the core, distribution, and access layers of the network. Networks
architected for video delivery are typically broken into three functional areas: super headend (SHE), video hub office (VHO), and video
switching office (VSO) (Figure 3). Cable operators may have a similar structure but use different terminology for these functional areas in
the network.

• Super headend (SHE): The super headend receives satellite feeds of scheduled and live programs from broadcasters. The super headend may also receive assets for on-demand services by means of satellite, store VoD content for special on-demand services, and incorporate back-end systems such as the subscriber database. New video architectures typically identify two national super headends, for redundancy against solar interference, which usually reside within the core of the transport network, with one headend serving as primary and the other as backup. Although most new video networks are built with this architecture, many current video providers also have satellite receiving stations in each region, resulting in higher overall operational costs. With an IP NGN, video providers have an opportunity to consolidate many receiving stations nationally into a primary and a backup location. The super headend then delivers channel line-ups, VoD assets, and niche programming to the regions over the IP NGN.

• Video hub office (VHO): VHOs often contain real-time encoders for local television stations and PEG (public, education, and government) channels, VoD servers, and the network routers that connect the distribution and core networks. Service providers typically maintain a few dozen regional VHOs, predominantly in metropolitan areas, each serving 100,000 to 500,000 homes. VHOs may also provide connectivity for business services such as Layer 3 and Layer 2 VPNs.

• Central office and video switching office (VSO): Central offices and VSOs house the aggregation routers and DSLAMs that aggregate traffic from subscriber homes.

Figure 3. IP NGN Architecture for the Streamlined Delivery of Video Services Across Network Layers and Locations


Cisco helps service providers achieve streamlined service delivery through the ServiceFlex design, an integral part of the Cisco IP NGN architecture. Cisco's ServiceFlex design delivers video, voice, and data services while ensuring that IP intelligence is maintained throughout the network, from the core to the distribution and aggregation layers. ServiceFlex employs flexible, proven, native Layer 3 IP and IP Multicast mechanisms that deliver proven scalability and resiliency and are used in the world's largest video distribution networks, enabling service providers to deliver scalable broadcast IPTV services while creating a robust foundation for future video services and applications.
IP MULTICAST FOR THE DELIVERY OF SCALABLE BROADCAST VIDEO SERVICES
Traditional IP communications allow a host to send packets to one receiver (unicast transmission) or to all receivers (broadcast transmission). IP Multicast provides another option by allowing a host to send packets to a defined set of receivers, known as a multicast group. A mechanism known as anycast further allows the reuse of IP addresses across multiple devices to provide enhanced redundancy. Anycast can be used in conjunction with multicast to ensure that the source of a multicast stream is not a single point of failure. This is achieved by having two multicast sources configured with identical source and destination IP addresses but residing at different locations in the network, allowing the network to determine the optimal one to use at any given point in time. For efficient delivery of video services, IP Multicast used in conjunction with anycast mechanisms thereby provides source redundancy for multicast streams in the Cisco ServiceFlex design.
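The anycast behavior described above reduces, in essence, to unicast routing preferring the closest instance of a duplicated address. The sketch below models that selection; the location names and metrics are hypothetical, and real routers make this choice through their IGP rather than application code:

```python
# Sketch of anycast source redundancy: two sources advertise the same
# address, and routing simply prefers the one with the lowest IGP metric.
# Location names and metric values are hypothetical.
def best_anycast_source(routes: dict) -> str:
    """routes maps a source location to its IGP metric toward this router."""
    return min(routes, key=routes.get)

routes = {"SHE-east": 30, "SHE-west": 10}
print(best_anycast_source(routes))   # the closer (west) source wins

routes["SHE-west"] = 999             # west source fails or is withdrawn
print(best_anycast_source(routes))   # traffic reconverges to the east source
```

Because both sources send identical streams to the same group and source address, receivers need no signaling change when the network switches between them.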
IP Multicast is a mature, proven bandwidth-conserving technology specifically designed to reduce network traffic by delivering a single
video stream to potentially millions of recipients simultaneously (Figure 4). By replacing separate streams for each recipient with a single
stream for all, IP Multicast reduces the burden on intermediate routers and reduces overall network traffic. Within the network, routers are
responsible for replicating and distributing the multicast content to all eligible recipients. A more detailed look at multicast technology is presented in the Cisco white paper "IP Multicast Technical Overview."
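The bandwidth-conserving property is easy to quantify. The sketch below compares the load one broadcast channel places on a core link under unicast versus multicast delivery; the subscriber count and stream rate are illustrative:

```python
# Back-of-the-envelope comparison of unicast vs. multicast delivery of a
# single broadcast channel. Subscriber count and rate are illustrative.
def core_link_load_mbps(subscribers: int, stream_mbps: float,
                        multicast: bool) -> float:
    # With multicast, a core link carries one copy of the stream no matter
    # how many downstream subscribers have joined the group; with unicast
    # it carries one copy per subscriber.
    return stream_mbps if multicast else subscribers * stream_mbps

print(core_link_load_mbps(1_000_000, 3.75, multicast=False))  # per-subscriber copies
print(core_link_load_mbps(1_000_000, 3.75, multicast=True))   # a single copy
```

For a million subscribers of one 3.75-Mbps channel, unicast would demand terabits of core capacity where multicast needs a single stream per link, which is why replication is pushed to the routers nearest the receivers.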
Figure 4. IP Multicast Technology Reduces Network Load by Efficiently Delivering a Single Video Stream to Many Recipients Simultaneously

THE CISCO CRS-1: MEETING SERVICE PROVIDER REQUIREMENTS FOR A BROADCAST VIDEO SOLUTION
To solve the varied challenges of delivering broadcast video over an IP network, service providers need a routing system that can form the foundation of their IP NGN and deliver the following:

• Scale to handle terabit traffic levels

• Support for robust and scalable multicast replication in hardware

• Resilient routing of broadcast video feeds, with minimal service disruption at all times across the network

• Continuous operation of the routing platform when router components or software processes fail

• Service separation, so that each service is isolated and the effects of one service on another are eliminated


In the core of an IP NGN, the Cisco CRS-1 is the platform of choice for meeting these requirements. The Cisco CRS-1 allows service
providers to build a highly scalable, available, and flexible converged network infrastructure. The platform runs Cisco IOS XR Software,
which is a unique self-healing and self-defending operating system designed for always-on operation, while scaling system capacity from
320 Gbps up to 92 Tbps.
For delivery of video services, the Cisco CRS-1 offers the advantages of network and platform scalability, multicast scalability, network
resilience, platform redundancy, and secure service separation.
NETWORK AND PLATFORM SCALABILITY
Service providers typically have deployed two single-chassis routers at each network node for resiliency, with interconnects between them.
However, as video traffic demands grow, and with single-chassis routers offering inadequate capacity, service providers have added more
routers and more interconnect links, which increases complexity and cost.
The primary advantage of a scalable multichassis design is that providers can avoid the use of expensive, high-speed interconnect links between multiple routers in a point of presence (POP), because this connectivity is provided by the multichassis fabric. As a result, overall system capacity and ports can be freed for customer-facing bandwidth and applications. Analysis has shown more than 40 percent cumulative CapEx and OpEx savings over a period of five years for multichassis systems compared with equivalent-capacity single-chassis units [7].
With the rapid increase in video traffic, providers will experience a greater need to deploy multichassis systems to achieve economical
scalability for both the number of interfaces and total bandwidth that must be supported across their core networks. To accommodate this
growth in an operationally cost-effective and non-disruptive manner, a primary requirement is a system with a switch fabric architecture
that supports in-service scaling from a single chassis to a large number of multichassis units.
The Cisco CRS-1 can be deployed as a single-shelf system or as a multishelf system built by interconnecting multiple line-card shelves using up to eight fabric shelves (92 Tbps). The Cisco CRS-1 can also be upgraded from single-shelf to multishelf while in operation, with no service disruption to customer traffic. The innovative Cisco CRS-1 system fabric, based on a three-stage Benes topology, allows multichassis scalability from 2 to 72 line-card shelves, with each line-card shelf supporting 16 slots at 40 Gbps. The Cisco CRS-1 line-card shelves can hence support up to 9216 ports of 10 Gigabit Ethernet or 1152 ports of OC-768c/STM-256 Packet over SONET/SDH (PoS) in the system's maximum configuration.
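The quoted figures can be cross-checked with simple arithmetic. Note two assumptions in the sketch below that the paper does not state explicitly: the 92-Tbps figure appears to count both directions of each 40-Gbps slot (full duplex), and the 10 Gigabit Ethernet count implies eight 10GE ports per line card:

```python
# Cross-checking the CRS-1 multishelf numbers quoted in the text.
# Assumptions (not stated in the paper): full-duplex counting for the
# 92-Tbps figure, and 8 x 10GE ports per 40-Gbps line card.
SHELVES, SLOTS_PER_SHELF, SLOT_GBPS = 72, 16, 40

line_cards = SHELVES * SLOTS_PER_SHELF               # maximum line cards
capacity_tbps = line_cards * SLOT_GBPS * 2 / 1000    # x2 for full duplex
ten_gige_ports = line_cards * 8                      # 8 x 10GE per card

print(line_cards, ten_gige_ports, round(capacity_tbps, 2))
```

Under those assumptions the arithmetic reproduces the 1152 line cards, 9216 10GE ports, and roughly 92 Tbps cited in this section.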
As the data-plane-forwarding component of a routing system is scaled to terabit capacities, the corresponding control-plane processing
must scale in a comparable manner. Multiple-terabit routing systems with single-point-of-control processing present a bottleneck to the
overall scalability of those systems and their applications, as well as a potential single point of failure.
A more distributed design, in both software and hardware, is needed to allow control-plane scaling for demanding video applications. The
processing capabilities of a single, central route processor would be inadequate or undesirable for overall service resiliency of such a large
system.
The Cisco CRS-1 supports the use of multiple distributed route processors (DRPs) in addition to the standard active and standby route
processors (RPs) across a multishelf configuration. This design eliminates the limitation of a single primary route processor per routing
system.
In the Cisco CRS-1, the DRPs are composed of two dual-processor symmetric multiprocessing (SMP) complexes per module. The DRPs
use the distributed control-plane scaling aspects of the Cisco IOS XR operating system for applications such as Border Gateway Protocol
(BGP), Label Distribution Protocol (LDP), Protocol Independent Multicast (PIM), and Internet Group Management Protocol (IGMP). Use
of such a distributed control-plane mechanism in the Cisco CRS-1 provides support for a large number of routes, peers, and hosts for any

7. Cisco estimates, based on POP bandwidth projections for a large North American service provider.


protocol, such as multicast routing, independent of other routing system applications, an important consideration for a scalable broadcast video service.
MULTICAST SCALABILITY
Multicast implementations in single-chassis and multichassis routing systems vary in design, which can affect overall performance, service
scalability, and the experience of the IPTV viewer. The two predominant designs differ at the point within the routing system architecture
where packet replication for multicast is actually performed: within the line card or within the switch fabric.
Replication Within the Line Card
In the line-card-based replication approach, when a Join (2) message (a request to participate in an IPTV multicast stream or group), as shown in Figure 5a (on the left), arrives at the platform, the receiving line card makes a request to the line card containing the inbound stream (normally through the central route processor) and obtains a copy of the data, as indicated by (3). In certain line-card-based replication designs, subsequent Join messages send requests to the last-joined neighboring line card for additional copies of the data, rather than to the initial line card that contains the source stream, once a certain limit (2, 3, and 5 in this example) has been reached (6). The line-card replication design creates a tree-like structure internally within the routing system to distribute the multicast traffic to the receivers.
This line-card replication approach has several major disadvantages. First, because replication is performed at the ingress and egress line cards, the single multicast stream [8] traverses the switch fabric numerous times as the number of neighbors increases, unnecessarily consuming fabric capacity in the routing system. Second, because the line cards themselves perform the multicast replication, the forwarding performance of unicast traffic on these line cards can be adversely affected, and the forwarding performance of multicast traffic is always a fraction of the overall unicast forwarding performance because of the suboptimal design of the replication architecture.
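The fabric-capacity penalty can be expressed with a deliberately simplified model: assume each copy delivered by a line-card tree costs roughly one fabric crossing, while fabric-based replication sends a single copy into the fabric and fans it out there. This is a sketch of the scaling trend, not an exact accounting of either design:

```python
# Simplified model of switch-fabric traversals for one multicast stream
# delivered to N egress line cards. Assumption: in a line-card tree each
# delivered copy costs about one fabric crossing.
def fabric_traversals(egress_cards: int, replicate_in_fabric: bool) -> int:
    if replicate_in_fabric:
        return 1            # ingress sends one copy; the fabric fans it out
    return egress_cards     # line-card tree: roughly one crossing per copy

for n in (4, 16, 64):
    print(n, fabric_traversals(n, replicate_in_fabric=False),
          fabric_traversals(n, replicate_in_fabric=True))
```

The line-card approach scales linearly with the number of receivers per stream, whereas fabric replication keeps the ingress cost constant, which is the efficiency argument the following section develops.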
Finally, when a Prune (2) message (a request to exit a multicast stream) arrives at one of the line cards participating in the replication, such as when the last user of a downstream IPTV session leaves, that node is removed from the multicast tree structure, halting all downstream replication from that line card to other neighbors (4 and 5), as shown on the right in Figure 5b. This action forces a reconvergence of the multicast tree within the routing system, which imposes serious degradation on the IPTV multicast sessions continuing downstream of the exiting node, and potentially outright session loss.
Figure 5. Non-optimal Line-Card Replication Design for Multicast

These intrinsic limitations of line-card-based replication designs create network-level and service-level scalability limitations for service providers wanting to deploy IPTV services over an IP- or MPLS-based [9] core network that must scale to millions of simultaneous users. To offer these services efficiently, they need the scalable and flexible core architecture provided by the Cisco IP NGN together with the Cisco CRS-1.

8. An MPEG-2 stream is typically about 19.39 Mbps for HDTV and 3.5 Mbps for SDTV.
9. Note that the same replication mechanism would typically be used regardless of whether the multicast traffic is IP- or MPLS-based transport, as with P2MP TE.


Replication Within the Switch Fabric

The service-intelligent, three-stage, self-routed Benes topology of the Cisco CRS-1 switch-fabric architecture implements line-rate multicast replication within the fabric (Figure 6). This implementation removes the complexity and burden of systemwide multicast replication from the forwarding engines (FEs) on the line cards, which is the mechanism used in many other fabric routing systems.
The fabric topology is also service-aware through the use of separate hardware sets of priority queues for multicast traffic, unicast traffic, and control messages. The separate queues help ensure that cells [10] crossing the fabric have bounded jitter and latency characteristics, which is particularly important when encoded video and VoIP services are mixed with best-effort data applications at multiple-gigabit speeds.

The self-routing, fully nonblocking fabric of the Cisco CRS-1 also allows lossless transport in the event of a component failure by supporting a 1:N redundancy mechanism across the eight separate fabric planes. This capability provides a level of resiliency and self-healing that facilitates the correct distribution of traffic flows while still maintaining the service intelligence required for efficient and scalable multicast distribution.
The Benes-topology switch fabric of the Cisco CRS-1 consists of three stages, as shown in Figure 6. Stage 1 (S1) accepts cells from the ingress line card and distributes them across all Stage 2 (S2) fabric cards. Stage 2 performs multicast replication based on fabric group identifiers [11] (FGIDs), delivering the cells to the appropriate Stage 3 (S3) fabric cards through multiple priority queues for both unicast and multicast traffic. This queuing implementation accelerates traffic as it exits Stage 2. Stage 3 replicates streams based on FGIDs to the appropriate egress line cards, which subsequently reassemble the packets and deliver them to the appropriate interface or subinterface.
Unlike line-card-based replication mechanisms, a prune or failure at any of the endpoints does not produce an undesirable systemwide effect and does not affect the experience of any other IPTV viewer across the system. Additionally, because the replication is performed by the fabric, fabric capacity is used efficiently to maximize overall system scalability, and multiple fabric traversals for the same multicast stream are eliminated. This efficiency is crucial because the Cisco CRS-1 architecture has been designed to scale to 1152 possible line cards across 72 line-card shelves, supporting a truly scalable multicast implementation at the foundation of the Cisco IP NGN.
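The FGID lookup described above can be pictured as a table mapping a group identifier to the set of egress line cards that should receive a copy. The sketch below is a toy model of that lookup only: the FGID value, card names, and data structures are illustrative, not the actual cell format or table layout:

```python
# Toy model of FGID-driven replication across the fabric stages.
# FGID values and line-card names are hypothetical.
FGID_TABLE = {
    7: {"shelf-1/lc-3", "shelf-2/lc-0", "shelf-2/lc-9"},
}

def replicate(cell_fgid: int) -> set:
    """Return the egress line cards that receive a copy of the cell.

    S2 and S3 consult the FGID to fan the cell out; the ingress card
    still injects only one copy into the fabric.
    """
    return FGID_TABLE.get(cell_fgid, set())

print(sorted(replicate(7)))   # three egress cards from one injected cell
```

A prune simply removes one card from the FGID's member set; the other members keep receiving copies, which is why no tree reconvergence inside the router is needed.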
Figure 6. Service-Intelligent Switch-Fabric Architecture of the Cisco CRS-1 Supporting Fabric-Based Multicast Replication

NETWORK-LEVEL RESILIENCY FOR BROADCAST VIDEO


Consumer expectations regarding the quality of broadcast video are increasing, and service providers need to differentiate their video
offerings based on quality and overall reliability. Broadcast video requires the fastest service restoration times and guaranteed delivery
because outages have the potential to affect wide audiences, potentially in the millions. Meeting these expectations requires a high degree
of resilience across the network infrastructure.
To meet these challenges successfully and transport video reliably, service providers must consider a network architecture that maintains the user's viewing experience, even during times of network failure or instability, by going beyond common existing resiliency

10. IP packets are divided into fixed-length cells for efficient transport within the Cisco CRS-1 routing system.
11. FGIDs determine which destinations receive copies at the Stage 2 and Stage 3 cards.


mechanisms in the industry. Service providers will specifically need to consider a network that supports path diversity to ensure that no
single point of failure exists for the delivery of reliable broadcast video.
Path Diversity
The Cisco IP NGN architecture with the Cisco ServiceFlex design supports path diversity for the delivery of dual-live video streams from either single or dual sources for maximum resiliency. Dual-live refers to the capability of a client to simultaneously receive identical content from one or two independent sources via different network paths (Figure 7). If errors occur in a video stream, the receiver can switch from one stream or source to the other. The dual-live design utilizing path diversity avoids the disruption that can be caused by network reconvergence time and the resulting loss of I frames within a broadcast MPEG stream across the network.
The end clients for dual-live transmission can be QAMs, DSLAMs, ad splicers, or other devices capable of sourcing redundant feeds.
Whether SDTV or HDTV, broadcast video requires significantly less overall bandwidth than VoD when efficient multicast is used, meaning that the additional capacity for streaming content from duplicate sources using native IP Multicast mechanisms is not excessive and can be supported in networks today.
For a dual-live architecture to be effective, the network must support path diversity. The path diversity mechanism ensures that the path
between the primary video source and the receiver share no links or nodes in common with the path between the secondary video source
and the receiver, so that a link or node outage cannot affect both paths simultaneously. For VoD services, anycast mechanisms can be used
to provide source diversity, as such services are typically limited to a smaller number of viewers. For heavily watched broadcast services,
path diversity allows the service provider to maintain the maximum subscriber viewing experience even during times of network-level
disruption.
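The path diversity requirement can be illustrated with a toy computation. The sketch below finds a path for the primary source and then searches for a secondary path that avoids every node of the first; the topology and node names are invented for illustration, and a production path-computation element would use a stronger method (for example, Suurballe's algorithm) that is guaranteed to find disjoint paths whenever they exist, which this greedy two-pass approach is not:

```python
from collections import deque

def bfs_path(graph, src, dst, banned):
    """Breadth-first search for a path from src to dst that avoids
    every node in `banned`; returns the path as a list, or None."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited and nxt not in banned:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

def diverse_paths(graph, primary_src, secondary_src, receiver):
    """Find a primary path, then a secondary path that shares no
    node (and therefore no link) with it except the receiver."""
    p1 = bfs_path(graph, primary_src, receiver, banned=set())
    if p1 is None:
        return None, None
    # Ban the primary path's nodes (except the receiver) for the
    # secondary search, enforcing node and link disjointness.
    banned = set(p1) - {receiver}
    p2 = bfs_path(graph, secondary_src, receiver, banned=banned)
    return p1, p2

# Hypothetical topology: two sources feeding one receiver over
# two fully disjoint transit planes.
topology = {
    "src-A": ["core-1"], "src-B": ["core-2"],
    "core-1": ["edge-1"], "core-2": ["edge-2"],
    "edge-1": ["rcv"], "edge-2": ["rcv"],
}
p1, p2 = diverse_paths(topology, "src-A", "src-B", "rcv")
```

Because the two returned paths share only the receiver, no single link or node failure can interrupt both live streams at once.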
Figure 7. Cisco IP NGN with the Cisco ServiceFlex Design Supports Path Diversity That Delivers Dual-Live Video Streams Without the Disruption of Recovery Time from Network Failure


PLATFORM REDUNDANCY FOR BROADCAST VIDEO


The overall reliability and redundancy of the routing platform is also an important factor for highly reliable distribution of broadcast video.
Certain platform-level characteristics are imperative for support of broadcast video services in any core network.
The Cisco CRS-1 is designed for continuous system operation, recovering from element and component failures at every level of the
system without disrupting existing user traffic and services. The Cisco CRS-1 data plane and control plane provide
independent scaling and redundancy mechanisms:

• Granular, in-service software upgrades (ISSU) for all applications, drivers, and system and subsystem components
• Hitless route processor switchover with nonstop routing (for example, Intermediate System-to-Intermediate System [IS-IS]), nonstop forwarding (for example, BGP or Open Shortest Path First [OSPF]), and fast convergence of control protocol subsystems such as distributed line-card-based Bidirectional Forwarding Detection (BFD) for PIM
• Granular process restart, distribution, and management, as well as interprocess fault tolerance
• Cisco IOS XR Software, a microkernel-based, symmetric multiprocessing (SMP), highly modular operating system that supports distributed applications and infrastructure
• Complete system redundancy at the hardware level for control, forwarding, management, power, and cooling

As service providers consolidate multiple services over an IP NGN architecture, the redundancy of both data-plane forwarding
and control-plane processing allows continuous system operation and the improved service reliability needed for broadcast video.
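Fast failure detection for the multicast control plane is typically enabled by running BFD on PIM-facing core interfaces. The Cisco IOS XR fragment below is a configuration sketch only; the interface name and timer values are placeholders, and exact keywords can vary by software release:

```
router pim
 address-family ipv4
  interface TenGigE0/1/0/0
   bfd minimum-interval 50
   bfd multiplier 3
   bfd fast-detect
```

With a 50-ms transmit interval and a multiplier of 3, loss of a PIM neighbor is detected in roughly 150 ms, far faster than waiting for PIM hello timeouts alone.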
SERVICE SEPARATION
Some form of service separation is needed to help ensure fairness and operating efficiency for multiple services across a converged IP
NGN infrastructure. This separation helps service providers manage the vast amounts of best-effort Internet traffic while protecting
jitter- and latency-sensitive video and business services.
One type of separation uses DiffServ (Differentiated Services, defined in RFC 2474), a protocol that provides QoS for multiple traffic
types and defines and classifies services based on priorities. These priorities are carried in the header of IP packets and determine the
per-hop behavior (PHB) applied by intermediate nodes in the network. DiffServ-based QoS is a stateless design that facilitates scalability,
performance, interoperability, and flexibility. However, DiffServ typically cannot provide administrative separation at a per-service level,
because only a single, end-to-end administrative domain exists for the network.
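DiffServ marking happens at the traffic source or network edge by writing a DSCP into the IP header. The sketch below marks a UDP socket with AF41, a code point commonly recommended for video (per RFC 4594); the specific values chosen are an operator policy decision, not something fixed by the protocol:

```python
import socket

# Commonly used DiffServ code points for triple-play traffic classes
# (per RFC 4594 recommendations; assignments are operator policy).
DSCP_EF   = 46  # Expedited Forwarding: voice
DSCP_AF41 = 34  # Assured Forwarding class 4: broadcast video
DSCP_BE   = 0   # Best effort: Internet traffic

def mark_socket(sock, dscp):
    """Set the DSCP on outgoing packets. The 6-bit DSCP occupies the
    upper bits of the 8-bit IP TOS byte, hence the shift by 2."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

video_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(video_sock, DSCP_AF41)
tos = video_sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
```

Every DiffServ-capable node along the path then selects a PHB (queue, drop precedence) purely from this marking, which is what makes the design stateless.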
Another, more common approach is separation by topology. Each service is allowed a separate topology that uses separate physical or
virtual interfaces and forwarding instances. In the past, service providers typically deployed multiple parallel networks for each service in
order to ensure full separation. With IP NGNs, this deployment can be accomplished over a converged network that shares common
routing hardware and software instances. Each service usually has limited knowledge about the topology of other services. Many
technologies exist at both Layer 2 and Layer 3 to achieve topology-based separation, including VLANs, Virtual Private LAN Services
(VPLS), Layer 2 Tunneling Protocol (L2TP) Version 3, MPLS traffic engineering, and Virtual Routing and Forwarding (VRF). A common
mechanism is MPLS traffic engineering, where experimental bit (EXP) class definitions determine traffic engineering tunnel
selection for each service. Topology mechanisms provide flexibility for path selection, bandwidth control, and per-service QoS that may be
sufficient for some applications. However, this level of service separation does not provide the granular control needed by some
service providers, for example, in MPLS Layer 3 VPNs.
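The EXP-based classification mentioned above relies on a mapping from the IP marking to the 3-bit MPLS EXP field; a common default is to copy the IP precedence, that is, the top three bits of the DSCP, into EXP. A minimal sketch of that default mapping (the class names are illustrative):

```python
def dscp_to_exp(dscp):
    """Default IP-to-MPLS mapping: copy the IP precedence (the top
    3 of the 6 DSCP bits) into the 3-bit MPLS EXP field."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return dscp >> 3

# EF (46) maps to EXP 5, AF41 (34) to EXP 4, best effort (0) to EXP 0.
classes = {"voice": 46, "video": 34, "internet": 0}
exp_map = {name: dscp_to_exp(d) for name, d in classes.items()}
```

A router steering traffic into per-service traffic engineering tunnels can then key tunnel selection on these EXP values.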
Although each VPN service is virtualized using VRF on a router, this virtualization is limited in terms of operational management. For
example, a software upgrade will affect all VPN instances and typically cannot be limited to a subset where needed. In addition, software
faults and events such as distributed denial-of-service (DDoS) attacks have the potential to affect all services on the system. To overcome


these limitations, service providers in some cases have resorted to building service-specific physical networks, each using IP, such as one
network for best-effort Internet traffic and a separate network for VPN services.
The introduction of broadcast video to the core network poses a similar dilemma for the service provider: how best to gain the cost
savings of a single IP NGN infrastructure while maintaining the separation requirements of individual services.
With the Cisco Service Separation Architecture (SSA) in Cisco IOS XR Software, the Cisco CRS-1 provides the capability to partition the
system into multiple secure domain routers (SDRs) through the use of additional DRP modules with dedicated processing, memory, and
management across a single-chassis or multichassis system.
SDRs provide a means to partition a single physical router into multiple, independent routers. SDRs perform all routing system functions in
the same manner as a physical router, but share resources such as the switch fabric, power, and cooling with the rest of the system to
provide overall cost savings for the service provider.
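Conceptually, an SDR is created from administration configuration mode by naming the partition and assigning it nodes such as DRP or line-card slots. The fragment below is an illustrative pseudo-configuration only; the SDR name and node location are invented, and exact keywords depend on the Cisco IOS XR release:

```
admin
 configure
  sdr VIDEO
   location 0/3/*
```

Once created, the SDR boots its own software image and is configured and managed as if it were a standalone router.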
In particular, the Cisco SSA allows complete software independence among SDRs within a single physical routing system, which allows
the SDRs to run independent versions of software and sets of applications among them. As a result, any effects on an SDR resulting from a
fault or network event are completely self-contained and do not affect other SDRs or services on the system. Cisco SSA also allows
complete management separation between SDRs, which allows different administrative groups within a service provider to independently
configure, maintain, and manage services among them securely.
Cisco SSA on the Cisco CRS-1 offers service providers a much more granular, flexible, and resilient form of service separation than other
methods available today. The use of Cisco SSA in the Cisco ServiceFlex Design provides service separation between best-effort Internet
traffic, Layer 3 or Layer 2 business VPN services, and broadcast video services, as depicted in Figure 8. This separation allows the delivery
of scalable and resilient broadcast video services over a Cisco IP NGN core infrastructure.
Figure 8. Cisco ServiceFlex Design Supporting Service Separation Across the Cisco IP NGN Core Infrastructure


CONCLUSION
Broadcast video services and transport are playing an increasingly important role in service provider IP NGNs. Cisco has developed a
service-optimized IP NGN architecture for service providers that, together with the Cisco CRS-1:

• Delivers emerging large-scale IP broadcast video services with the availability and reliability that ensure an entertainment-grade experience
• Supports efficient and scalable distribution of these broadcast video services over an IP NGN for increased flexibility and reduced costs through proven IP Multicast technology
• Enables quick recovery from network failures, transparent to end customers, through path diversity and anycast mechanisms
• Offers multiple services across a common network infrastructure while still providing separation of those services for enhanced availability, reliability, and manageability

With these solutions, service providers can offer scalable and reliable broadcast IPTV services over an IP NGN that delivers a high-quality,
entertainment-grade viewing experience for all their customers.
FOR MORE INFORMATION
Additional information about the concepts discussed in this document can be found in the following documents:
• Building the Carrier-Class IP Next Generation Network white paper, at:
http://www.cisco.com/en/US/prod/collateral/routers/ps5763/prod_white_paper0900aecd802e2a52.shtml
• Optimizing Video Transport on Your IP Triple Play Network white paper, at:
http://www.cisco.com/en/US/prod/collateral/routers/ps368/prod_white_paper0900aecd80478c12.shtml
• IP Multicast Technical Overview white paper, at:
http://www.cisco.com/en/US/products/hw/routers/ps368/products_white_paper0900aecd804d5fe6.shtml
• Integrated Video Admission Control for the Delivery of a Quality Video Experience white paper, at:
http://www.cisco.com/en/US/prod/collateral/routers/ps368/prod_white_paper0900aecd804a05bd.shtml

For more information about the Cisco CRS-1, visit: http://www.cisco.com/go/crs.


Printed in USA


C11-361577-00 09/06
