
TCOM 513

Optical Communications
Networks
Spring, 2006
Thomas B. Fowler, Sc.D.
Senior Principal Engineer
Mitretek Systems

ControlNumber

Topics for TCOM 513


Week 1: Wave Division Multiplexing
Week 2: Opto-electronic networks
Week 3: Fiber optic system design
Week 4: MPLS and Quality of Service
Week 5: Heavy tails, Optical control planes
Week 6: The business of optical networking: economics
and finance
Week 7: Future directions in optical networking


Heavy-tailed Distributions
For large x values, cumulative distribution function F(x) has the
property that its complementary distribution behaves as

    1 − F(x) ≈ c₁ x^(−α)

where

    c₁ > 0

and

    α ∈ (0, 2]

Setting α = 2 and differentiating the above equation,

    f(x) = 2c₁ x^(−3)

Heavy-tailed Distributions (continued)


Recall from calculus that

    ∫₁^∞ (1/x^p) dx  converges

only if p > 1
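A quick numeric sketch of this fact (added here, not from the slides): partial integrals of 1/x^p level off for p = 2 but keep growing for p = 1.

```python
# Numeric sketch: partial integrals of 1/x**p level off for p = 2 but
# keep growing for p = 1, illustrating that the integral from 1 to
# infinity converges only when p > 1.

def tail_integral(p, upper, steps_per_unit=100):
    """Midpoint-rule approximation of the integral of x**(-p) on [1, upper]."""
    n = int((upper - 1) * steps_per_unit)
    h = (upper - 1) / n
    return sum((1 + (i + 0.5) * h) ** (-p) * h for i in range(n))

# p = 2: integral over [1, b] is 1 - 1/b, approaching 1 as b grows
print(tail_integral(2, 100), tail_integral(2, 1000))
# p = 1: integral over [1, b] is ln(b), growing without bound
print(tail_integral(1, 100), tail_integral(1, 1000))
```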


Heavy-tailed Distributions (continued)


Since Var(X) = E(X²) − [E(X)]²

and [E(X)]² is fixed, the variance is determined by

    E(X²) = ∫₁^∞ x² f(x) dx

With f(x) = 2c₁ x^(−3) from above, the integrand behaves like x^(−1), so the
integral diverges: the variance is infinite.

If in addition α < 1, then the mean is also infinite, since

    f(x) = dF(x)/dx ≈ αc₁ x^(−α−1)

and

    E(X) = ∫₁^∞ x f(x) dx ≈ ∫₁^∞ αc₁ x^(−α) dx

diverges for α ≤ 1
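The divergence can be made concrete (an added sketch, assuming the Pareto-type density f(x) = α x^(−α−1) on [1, ∞), i.e. k = 1): truncated moments have a closed form, and watching them as the truncation point b grows shows the second moment diverging while the mean settles for α > 1.

```python
# Truncated moments of the density f(x) = alpha * x**(-alpha - 1) on [1, oo).
# The closed form shows the second moment diverging as the cutoff b grows
# whenever alpha <= 2, while the mean stays finite for alpha > 1.

def truncated_moment(alpha, m, b):
    """Integral of x**m * alpha * x**(-alpha - 1) over [1, b] (requires m != alpha)."""
    return alpha * (b ** (m - alpha) - 1.0) / (m - alpha)

# alpha = 1.5: the mean converges to 3, the second moment keeps growing
for b in (1e2, 1e4, 1e6):
    print(b, truncated_moment(1.5, 1, b), truncated_moment(1.5, 2, b))
```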


Physical Significance of Infinite Variance

Consider finite variance on expanding time scales:

[Figure: pdf of a finite-variance distribution plotted on successively
expanding horizontal scales (roughly −5 to 5, −20 to 20, and −50 to 50)]

Physical Significance of Infinite Variance (continued)

Infinite variance case

[Figure: pdf of an infinite-variance distribution plotted on the same
expanding scales (roughly −5 to 5, −20 to 20, and −50 to 50)]

Network session or connection size (length in bytes)

Empirical data from 220,000 connections at www site:

Source: Willinger & Paxson, 1998

Network session or connection size (length in bytes) (continued)

Use points to calculate F(x), then plot 1 − F(x) against
corresponding session size
Behavior agrees with Pareto distribution
Yields α ≈ 1.3
Corresponds to infinite variance
Aggregate property of traffic source
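The slide's procedure can be sketched in a few lines (illustrative code, with deterministic Pareto quantiles standing in for the measured session sizes): compute the empirical complementary CDF, and read the tail exponent off the log-log slope.

```python
# Sketch of the slide's procedure: plot 1 - F(x) against session size on
# log-log axes and estimate the Pareto tail exponent from the slope.
# Deterministic Pareto quantiles stand in for measured sessions here.
import math

ALPHA, K, N = 1.3, 1.0, 5000
# x = K * u**(-1/ALPHA) are exact Pareto(ALPHA, K) quantiles
sizes = sorted(K * ((i + 0.5) / N) ** (-1.0 / ALPHA) for i in range(N))

# Empirical complementary CDF: fraction of points strictly above each x
log_x, log_ccdf = [], []
for rank, x in enumerate(sizes[:-1]):       # drop the largest point (ccdf = 0)
    log_x.append(math.log(x))
    log_ccdf.append(math.log((N - rank - 1) / N))

# Least-squares slope of log(1 - F(x)) vs log(x) estimates -alpha
n = len(log_x)
mx, my = sum(log_x) / n, sum(log_ccdf) / n
slope = sum((a - mx) * (b - my) for a, b in zip(log_x, log_ccdf)) / \
        sum((a - mx) ** 2 for a in log_x)
print(-slope)   # close to 1.3
```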


Heavy-tailed distributions (continued)


Upper tail declines like power law with exponent α < 2
Appears as lack of convergence of sample variance as
function of sample size
Pareto distribution is simplest heavy-tailed distribution

    F(x) = 1 − (k/x)^α,   α, k > 0,  x ≥ k

    p(x) = dF(x)/dx = α k^α x^(−α−1)

Effect increases as α decreases
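For reference, the slide's Pareto CDF and pdf written out as code, with the pdf checked against a numerical derivative of the CDF (a sanity sketch, not from the original):

```python
# The Pareto CDF and pdf from the slide; the pdf is checked against a
# numerical derivative of the CDF as a consistency test.

def pareto_cdf(x, alpha, k):
    """F(x) = 1 - (k/x)**alpha for x >= k."""
    return 1.0 - (k / x) ** alpha

def pareto_pdf(x, alpha, k):
    """p(x) = dF/dx = alpha * k**alpha * x**(-alpha - 1) for x >= k."""
    return alpha * k ** alpha * x ** (-alpha - 1)

# Central-difference derivative of F should match p
x, alpha, k, h = 3.0, 1.3, 1.0, 1e-6
numeric = (pareto_cdf(x + h, alpha, k) - pareto_cdf(x - h, alpha, k)) / (2 * h)
print(numeric, pareto_pdf(x, alpha, k))   # the two agree closely
```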


Heavy-tailed Distributions (continued)

[Figure: CDF F(x) and PDF p(x) for Pareto distributions with α = 0.5, k = 1
and α = 1, k = 1, plotted for x from 0 to 20, with an exponential
distribution PDF shown for comparison]

Heavy-tailed Distributions (continued)


[Figure: Pareto distribution, α = 1, k = 1, on linear axes: CDF F(x) and
PDF p(x) versus x from 0 to 20, with an exponential distribution PDF
shown for comparison]

Heavy-tailed Distributions (continued)


[Figure: the same Pareto (α = 1, k = 1) and exponential PDFs on a
logarithmic vertical axis (1 down to 10⁻⁹); the exponential PDF falls off
linearly on the log scale while the Pareto tail declines far more slowly]

Relationship of Key Traffic Concepts


Heavy tails (files, transmission times)
→ Infinite variance (transmission times)
→ Self-similarity (packets, sessions)
→ Long-range dependence (packets, sessions)
→ Burstiness on multiple scales (packets)

Order of explanation runs down this chain; order of discovery ran in the
opposite direction.

Analyzing networks in terms of planes


Management
  Deploys and manages services
Control
  Network-level coordination (minimal in standard IP networks)
  State information management
  Decision-making
  Action invocation
Data or bearer
  Physical transmission of data through network
  Best effort, utilizing h/w and s/w in components in standard IP
  networks

Traditional approaches to integration


Problems of traditional approach


Steps in the evolution of network architectures (continued)
Dynamic IP Optical Overlay Control Plane
Ring replaced by mesh using DWDM
Dynamic wavelength provisioning
Easier, more scalable control
Enhanced restoration capabilities
Integrated IP Optical Peer Control Plane
Integrate dynamic wavelength provisioning processes
of Optical Transport Network (OTN) into IP network
routing
Make OTN visible to IP network


Old World networks


Utilize SONET for delivering reliable WAN connectivity at
layer 1
Large, interconnecting rings
Lots of expensive hardware, e.g., ADMs
Utilize ATM for provisioning data services
Connection oriented
Can assure QoS, VPN
High operational cost for high reliability
High overhead (~25%)


New World networks


Eliminate SONET, ATM
Still need to provide layer 2 functionality

Source: Tomsu & Schmutzer


IP/Optical adaptation

Adaptation options:

Packet over SONET (POS)
  PPP does L2 functions (ref. p. 91)
Dynamic Packet Transport (DPT)
  SRP = spatial reuse protocol; intended for ring architecture (ref. p. 105)
Gigabit Ethernet
ATM
Simple Data Link (SDL)

Source: Tomsu & Schmutzer


Two-layer architecture
IP edge routers aggregate
traffic and multiplex it onto
Big Fat Pipes

Source: Tomsu & Schmutzer


Two-layer architecture (continued)


Ingress traffic multiplexed onto big fat pipes (BFPs)
Provided by optical layer
Optical layer functions as cloud for interconnecting
attached devices
Key point is that configuration within optical layer
controlled at same time that service layer is configured
(common management)
May use either DWDM or dark fiber
Can be point-to-point, ring, or mesh
Similar to ATM networks because any logical
connectivity between IP nodes can be implemented


New World: Overlay and Peer Models
Main distinction in New World models is between overlay
and peer models
Overlay
OTN (Optical Transport Network) or bearer plane is
opaque to IP network
OTN merely provides connections to IP network above
Has its own control plane
Peer
Common control plane for both OTN and IP networks
Optical connections derived from IP routing knowledge
(paths)


Overlay and Peer Models (continued)


[Figure: in the overlay model, an IP network runs on top of a separate
Optical Transport Network; in the peer model, the two form a single
IP + optical network]

Source: Tomsu & Schmutzer


MPλS overlay model


Two separate control planes
Interaction minimized
IP network routing and signaling protocols independent of
corresponding optical network protocols
Edge devices see only lightpaths, not topology
Similar to IP over ATM
Client/Server model: IP ~ client, optical network ~ server
Two versions
Static
Signaled


MPλS overlay model

[Figure: PXCs connect metro optical networks (access rings, DWDM) to a
core optical network (regional ring, DWDM); links include GigE, OC12,
and OC12/OC48; client devices attach across the Optical Network UNI]

Source: Cellstream


MPλS overlay model: another view


Dynamic Optical Control Plane


Central problem: wavelength provisioning
Optical cross-connects (OXCs) combined with IP routing
intelligence to control wavelength allocation, setup, and
teardown
Done dynamically
Allows same elements to be reconfigured rapidly to
improve utilization
Other benefits
Expedited provisioning
Enhanced restoration
Any virtual topology can be provided


Implementing dynamic optical control plane: wavelength routing
IP routing protocols (e.g., OSPF) adapted to create routing
protocol used by wavelength routers (WRs) in optical layer
Connections can be dynamically provisioned to
interconnect IP routers
Wavelength routing protocol only protocol running on
WRs
IP network does not participate in wavelength routing
process
IP network interacts with OTN in a client/server relationship
Overlay model
Typical use: OTN owned by optical interexchange carrier,
other service providers buy lightpaths to establish their own
IP networks


Overlay model: wavelength routing


[Figure: IP Routers A and B in the IP network are connected by a
lightpath (IP connectivity between A and B) provisioned across a
wavelength routing network of wavelength routers]
Source: Tomsu & Schmutzer


Wavelength routing control plane


Responsible for establishing end-to-end connection or lightpath
Two methods of implementing IP-based control plane
Attach external IP routers to each OXC
Integrate IP routing functionality into OXC

Source: Tomsu & Schmutzer


Method 1: details
Routers with control interface called wavelength routing
controllers (WRCs)
WRCs provide needed functions
Resource management
Configuration and capacity management
Addressing
Routing
Traffic engineering
Topology discovery
Restoration


Method 1: details (continued)


Control interface specifies primitives used by WRC
Connect: cross connect input, output channels
Disconnect: remove connection
Switch: change incoming channel/link combination
OXC communicates with WRC
Alarm: failure condition
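The Connect/Disconnect/Switch primitives can be modeled as operations on a cross-connect table (a toy sketch; the class and method names are illustrative, not from any standard interface):

```python
# Toy model of the WRC control interface: a cross-connect table mapping
# (input port, channel) -> (output port, channel), driven by the
# Connect / Disconnect / Switch primitives from the slide.

class OpticalCrossConnect:
    def __init__(self):
        self.table = {}                     # (in_port, ch) -> (out_port, ch)

    def connect(self, in_port, in_ch, out_port, out_ch):
        """Cross-connect an input channel to an output channel."""
        self.table[(in_port, in_ch)] = (out_port, out_ch)

    def disconnect(self, in_port, in_ch):
        """Remove an existing connection."""
        self.table.pop((in_port, in_ch), None)

    def switch(self, in_port, in_ch, new_in_port, new_in_ch):
        """Change the incoming channel/link combination of a connection."""
        out = self.table.pop((in_port, in_ch))
        self.table[(new_in_port, new_in_ch)] = out

oxc = OpticalCrossConnect()
oxc.connect(1, "lambda_1", 4, "lambda_7")
oxc.switch(1, "lambda_1", 2, "lambda_3")
print(oxc.table)   # {(2, 'lambda_3'): (4, 'lambda_7')}
```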


Cross-connect tables illustration

Source: Tomsu & Schmutzer


Digital communication network


Control plane exchanges control traffic through Digital
Communications Network (DCN)
In band
Default-routed lightpath used
Out of band
Routers and leased lines used to set up completely
separate IP network interconnecting all WRs


Operation of control plane


WRs exchange info about network topology and status of
OTN across DCN
All elements have unique IP addresses
Routers
Amplifiers
Interfaces
MPλS used for lightpath routing and service provisioning in
OTN
Provisions LSPs in service layer


Route calculation for lightpaths


Centralized
Distributed


Centralized lightpath routing


Uses traffic engineering control server
Server maintains information database
Topology
Inventory of physical resources
Current allocations
WRs request lightpath to be set up
Server checks resource availability and initiates
resource allocation at each hop


Centralized lightpath routing (continued)

Source: Tomsu & Schmutzer


Distributed lightpath routing


Each WR maintains information database and set of routing
algorithms
Perform neighbor discovery after bootup
Builds topology map
Creates resource hierarchies
Constraint-based routing used to define appropriate
path through network
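Constraint-based routing can be sketched as ordinary shortest-path search over a topology pruned to links that satisfy the constraint (illustrative code; the free-capacity constraint is an assumed example):

```python
# Sketch of constraint-based routing: prune links that fail the
# constraint (here, insufficient free capacity), then run an ordinary
# shortest-path search (Dijkstra) over what remains.
import heapq

def constrained_path(links, src, dst, needed):
    """links: {(u, v): (cost, free_capacity)}; returns the lowest-cost
    path from src to dst using only links with free_capacity >= needed."""
    adj = {}
    for (u, v), (cost, cap) in links.items():
        if cap >= needed:                    # constraint: enough capacity
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    pq, seen = [(0, src, [src])], set()
    while pq:
        dist, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (dist + cost, nxt, path + [nxt]))
    return None

links = {("A", "B"): (1, 10), ("B", "D"): (1, 1),   # cheap but congested
         ("A", "C"): (2, 10), ("C", "D"): (2, 10)}
print(constrained_path(links, "A", "D", needed=5))   # ['A', 'C', 'D']
```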


Distributed lightpath routing (continued)

Source: Tomsu & Schmutzer


Distributed lightpath routing (continued)

Source: Tomsu & Schmutzer


Comparing WRs and LSRs


Similar in architecture and functionality
LSR (Label Switched Router) provides unidirectional
point-to-point connections (LSPs=Label Switched Paths)
Traffic aggregated in FECs (Forwarding Equivalence
Classes)
WR (Wavelength Router) provides unidirectional optical
point-to-point connections (lightpaths)
Used to transmit traffic aggregated by service layer
Two key differences
LSR must process packets (do label lookup)
WR does not do any packet level processing
Switching info for WR is lightpath ID, not any packet label


Comparing WRs and LSRs (continued)


Lightpaths are very similar to LSPs
Unidirectional, point-to-point virtual paths between
ingress and egress node
LSPs define virtual topology over data network, as do
lightpaths over OTN
Allocating a label ≈ allocating a channel (λ) to a lightpath

Source: Tomsu & Schmutzer


Comparing WRs and LSRs (continued)

MPLS label = fixed-length value in packet header
MPλS label = certain wavelength (λ) over fiber span
Label space is significant
In MPLS, may be thousands of FECs
Won't work in MPλS, because only 40-128 labels (λs)
available
Must aggregate traffic into traffic trunks ~ lightpath
Suitable for core use
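The label-scarcity point can be illustrated with a first-fit wavelength assignment along a path, assuming no wavelength conversion (a hypothetical sketch, not any standard algorithm): each lightpath consumes a λ on every span it crosses, so fine-grained flows must be aggregated into a few trunks first.

```python
# First-fit wavelength assignment along a path, assuming no wavelength
# conversion: the chosen lambda must be free on every span of the path,
# out of a label space of only ~40-128 wavelengths.

def first_fit_lambda(spans_in_use, path_spans, num_lambdas=40):
    """Pick the lowest wavelength index free on every span of the path.
    spans_in_use: {span: set of lambda indices already taken}."""
    for lam in range(num_lambdas):
        if all(lam not in spans_in_use.get(s, set()) for s in path_spans):
            for s in path_spans:
                spans_in_use.setdefault(s, set()).add(lam)
            return lam
    return None                 # label space exhausted on this route

in_use = {"A-B": {0, 1}, "B-C": {1, 2}}
print(first_fit_lambda(in_use, ["A-B", "B-C"]))   # 3 (0, 1, 2 all blocked)
```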


Integrated Optical Peer Control Plane


Second of the methods of integrating IP and optical
Differs from overlay model in that there is a single control
plane rather than two separate control planes
IP network sees optical network
Uses MPLS Traffic Engineering (MPLS-TE) to implement
control plane and provision lightpaths across OTN and
service layer


MPλS peer model (continued)


Single control plane spans entire network
IP, Optical networks treated as single network
OXCs treated as IP routers with assigned IP addresses
Edge devices see entire network
No distinction between NNI, UNI
Single routing protocol over both domains
Topology and link state information maintained by IP and
optical routers is same
Reuses existing MPLS framework


Control Plane Functions


Control Channels
May be on dedicated fiber(s)
Could also be Ethernet connection or IP tunnel
Bi-directional
Manage links
Restoration
Establish LSPs


MPλS peer model

[Figure: same topology as the overlay-model figure (PXCs, metro access
rings with DWDM, core regional ring with DWDM, GigE/OC12/OC48 links),
but under a single control plane, with no UNI boundary shown between the
IP and optical domains]

Source: Cellstream


MPλS peer model: another view


Peer model connectivity

Source: Tomsu & Schmutzer


Peer model OTN (continued)


Edge LSRs have two functions
Aggregate traffic flows
Request unidirectional lightpaths (LSPs) to be set up by
WRs through OTN
Dynamically switched through OTN
Terminated at Edge LSRs
Control plane requirements
Establish optical channels
Support traffic engineering functions
Protection and restoration mechanisms


Building blocks for MPλS

Source: Tomsu & Schmutzer


MPλS control plane architecture

Source: Tomsu & Schmutzer


Multiprotocol lambda switching (MPλS)


Network-to-network issues


MPλS and related technologies


Comparison of MPLS and MPλS Routers


Network architecture incorporating MPLS, MPλS

MPλS inner core
  Highest degree of aggregation
MPLS outer core
  Edge aggregation (routing, < OC48)
MAN aggregation (routing, < OC12)


Future evolution


Lightpath networking: architectures and topologies

Source: Sycamore Networks/NGN1999


Architectures and topologies (continued)

Source: Sycamore Networks/NGN1999


CISCO IP + Optical network


The overall networking problem


Metro access
Need to handle many legacy (existing) access types
First level of aggregation
Likely will move more in the Ethernet direction
Core switching
Easier to implement newer technology
MPLS, MPλS


Metro access methods

Source: Tomsu & Schmutzer


Possible IP and optical metro evolution

Source: Tomsu & Schmutzer


Optical core evolution: migration to intelligently controlled meshed core

Source: Tomsu & Schmutzer


Possible core evolution

Source: Tomsu & Schmutzer


MPLS Traffic Engineering (MPLS-TE)


Standard routing protocols compute the optimum path
from source to destination
Use routing metric
Hop count
Cost
Link bandwidth
Single least cost (according to metric) path chosen
Alternate paths ignored
Possibly longer, but faster
Do not understand nature of IP traffic
Fractal
Can lead to inefficiencies


MPLS-TE: example of inefficiencies of standard routing protocols

Source: Tomsu & Schmutzer


Solutions to traffic engineering problem
Non-scaling
Manipulate Interior Gateway Protocol (IGP) metrics
Use of policy-based routing
Define complex access lists and characterize traffic
flows
Static
MPLS-TE
Utilization of all network resources analyzed and taken
into account in path calculation
If multiple paths exist, best chosen based on current
network situation


Traffic Engineering
Objectives
Maximize network resource efficiencies
IP traffic has large variations, unpredictable
Fractal distribution
Continually tune network parameters
Adjust resource partitions between working/protection
segments
Usually deals with longer timescales (hours, days)
Shorter variations dealt with by higher layers
On-line or off-line


Internet Time Scales


[Figure: measurement time axis running from 1 ms through 1 s up to
roughly 10⁵ s; multifractal behavior (effects of network transport
protocols) dominates below about 1 s, fractal behavior with long-range
dependency holds at intermediate scales, and diurnal and other effects
appear at the longest scales]

MPLS-TE functional components

Source: Tomsu & Schmutzer


Information flow for MPλS traffic engineering

Sample queue hysteresis control
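The slide's figure is not reproduced here; as a general sketch, queue hysteresis control uses two thresholds so the congestion state does not flap (watermark values below are illustrative):

```python
# Queue hysteresis sketch: congestion is asserted only above a high
# watermark and cleared only below a lower one, so occupancy oscillating
# near a single threshold cannot toggle the state on every sample.

def hysteresis_states(occupancies, low=20, high=80):
    """Return the congestion flag after each queue-occupancy sample."""
    congested, states = False, []
    for q in occupancies:
        if not congested and q > high:
            congested = True
        elif congested and q < low:
            congested = False
        states.append(congested)
    return states

# Oscillation around the high watermark does not flap the flag
print(hysteresis_states([50, 85, 75, 85, 75, 15, 50]))
# [False, True, True, True, True, False, False]
```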


MPλS Traffic Engineering (continued)


Utilizes concepts from MPLS
Overlay model: centralized network controller
Smaller networks, larger timescales
Given traffic matrix, resolve (figure out) LSP/lightpath
topologies
Fewer resource synchronization and lockout problems
due to central control
Single point of failure
Controller needs large amount of information
Link states
Router status
Traffic patterns


MPλS Traffic Engineering (continued)


Peer model: distributed traffic engineering
Localized decisions
Better scalability
Heuristic routing algorithms
Robust
Still active research area
Multi-vendor interoperability may require standards


Network survivability principles


Definition: ability of a network to maintain an acceptable
level of service during a network or equipment failure.
(Lucent)
Multilayer survivability: possible nesting of survivability
schemes among subtending network layers, and the way
these schemes interact with each other. (Lucent)
Survivability is important because of expected growth
(scaling) of optical networks
Failure of a single link can affect tens or hundreds of
thousands of customers
QoS is dependent on an effective restoration method


Survivability concepts
Classification
End-to-end: single survivability mechanism used to
deliver end-to-end survivability
Example: single backup line which takes over if any
problem occurs with main line
Cascaded: multiple survivability mechanisms, each
functioning in a limited area or domain
Example: Main line divided into segments with
switches at each node; backup lines between nodes
Nested: multiple survivability mechanisms for a single
domain


Survivability concepts (continued)

Source: Tomsu & Schmutzer


Protection and restoration


Two closely related concepts
Difference is in scope and layer
Protection: lower layers, narrower scope
Restoration: higher layers, broader scope
Both commonly used to provide maximum degree of
survivability

[Diagram: a single end-to-end restoration mechanism covers node failures
and multiple failures, while per-segment protection mechanisms each
cover individual link failures]


Protection
Lower layer (physical layer) mechanism
First line of defense against faults such as fiber cuts
Topology and technology specific
Fast
Limited usefulness
Does not correct node faults
Cannot handle multiple faults, e.g., two cable cuts
Types
Dedicated: 50% of entire network capacity set aside
Example: Unidirectional Path Switched Rings (UPSR) used
in SONET
Shared: certain amount of total capacity set aside and shared
across all network resources
Example: Multiple LSP tunnels sharing single backup LSP
tunnel


Protection types
1+1
All traffic sent over 2 paths simultaneously
Destination receives both, selects one
In case of failure, destination switches to other path
1:1
Two parallel paths, only one used in normal operation
In unidirectional systems, source will not know if one
path is cut
Requires additional signaling
In normal operation, low priority traffic can use backup
path
1:N
Similar to 1:1, except that n working paths share one
protection path
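A toy model of 1:N protection (illustrative only): the shared protection path covers one failure at a time, so a second simultaneous failure goes unprotected until the first path is repaired.

```python
# Toy 1:N protection: N working paths share one protection path, so only
# one failure at a time can be covered; a second simultaneous failure
# finds the protection path already in use.

class OneToNProtection:
    def __init__(self, working_paths):
        self.working = set(working_paths)
        self.backup_holder = None            # which failed path holds the backup

    def fail(self, path):
        """Switch a failed working path onto the shared protection path."""
        if self.backup_holder is None:
            self.backup_holder = path
            return "switched to protection"
        return "unprotected: backup in use"

    def restore(self, path):
        """Failed path repaired; in a reverting scheme the backup frees up."""
        if self.backup_holder == path:
            self.backup_holder = None

group = OneToNProtection(["w1", "w2", "w3"])
print(group.fail("w1"))    # switched to protection
print(group.fail("w2"))    # unprotected: backup in use
group.restore("w1")
print(group.fail("w2"))    # switched to protection
```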


Protection switching characteristics


Methods of handling traffic when failed path comes back on line
Nonreverting
Backup remains as primary path
Restored path becomes backup
Reverting (common in 1:n protection)
Restored path resumes normal role
Backup path reverts to backup role
Switching methods
Path
Source, destination control
Entirely new, disjoint path used
Line
Adjacent nodes handle
Only local link affected


Path and line switching

Source: Tomsu & Schmutzer


Restoration
Overlaid mechanism
Can handle more types of failures
Links
Nodes
Multiple failures
Typically used in mesh topologies
Basic idea is to utilize alternate paths to route traffic
around failure location
Two approaches
Centralized
Distributed
Precomputed alternate paths
No single point of failure for restoration method itself
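Restoration with precomputed alternate paths can be sketched as follows (illustrative; paths are node lists and a failure is a link pair):

```python
# Sketch of restoration with precomputed alternate paths: on a link
# failure, traffic moves from the primary path to the first precomputed
# backup that avoids the failed link.

def restore_path(primary, backups, failed_link):
    """primary/backups are node lists; failed_link is an (u, v) pair."""
    def uses(path, link):
        hops = set(zip(path, path[1:])) | set(zip(path[1:], path))
        return tuple(link) in hops
    if not uses(primary, failed_link):
        return primary                       # failure elsewhere; no action
    for b in backups:
        if not uses(b, failed_link):
            return b                         # switch to precomputed backup
    return None                              # restoration failed

primary = ["A", "B", "D"]
backups = [["A", "C", "D"]]
print(restore_path(primary, backups, ("B", "D")))   # ['A', 'C', 'D']
print(restore_path(primary, backups, ("C", "D")))   # ['A', 'B', 'D'] (unaffected)
```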


Restoration (continued)
Typically done at layer 2 or layer 3
ATM
PNNI routing protocol reroutes virtual circuits
IP
Dynamic routing protocols (OSPF, IS-IS, RIP, BGP)
find new path through network
MPLS
Reroute onto backup paths (labels)
Backup paths may be preselected


ITU restoration time components

Source: Tomsu & Schmutzer


Higher-layer survivability mechanisms


                          MPLS         IP       ATM PNNI   SONET shared   SONET dedicated
Restoration time          50 ms-1 s    1-10 s   1-10 s     < 100 ms       < 100 ms
Restoration capacity
  (dedicated)             0-100%       0%       0%         < 100%         100%
Rest. capacity useable
  by low priority traffic Yes          N/A      N/A        Yes            Depends on config.
Linear topologies         Yes          Yes      Yes        Yes            Yes
Ring topologies           Yes          Yes      Yes        Yes            Yes
Mesh topologies           Yes          Yes      Yes        No             Yes
Standards                 In progress  Yes      Yes        Yes            Yes

Source: Tomsu & Schmutzer


Optical network (layer 1) survivability

OTN sublayers and corresponding survivability mechanisms:

Optical channel sublayer → optical channel (OCH) protection
Optical multiplex sublayer → optical multiplex section (OMS) protection
Optical transmission section sublayer → line protection


Optical line 1+1 protection

Source: Tomsu & Schmutzer


Optical line 1:1 protection

Source: Tomsu & Schmutzer


Optical channel protection (1:1)

Source: Tomsu & Schmutzer


Survivability mechanisms for OTN


                          OCH          OMS shared   OMS dedicated   Line
Restoration time          < 50 ms      < 200 ms     < 200 ms        < 50 ms
Restoration capacity
  (dedicated)             100%         < 100%       100%            100%
Rest. capacity useable
  by low priority traffic No           Yes          Depends         No
Linear topologies         Yes          Yes          Yes             Yes
Ring topologies           Yes          Yes          Yes             N/A
Mesh topologies           Possible     No           No              N/A
Standardization           In progress  In progress  In progress     In progress

Source: Tomsu & Schmutzer


Multilayer survivability
Typically layer 1 protection is fastest
~100 ms
If this is adequate, higher layer restoration unnecessary
Higher layer restoration slower
Kicks in if layer 1 protection cannot solve problem
1 s or more
Coordination of mechanisms may be necessary if time
constants too close
To prevent network from becoming unstable
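One common coordination device is a hold-off timer: the higher layer waits long enough for layer 1 protection to act before starting its own, slower restoration. A minimal sketch (timer values illustrative, roughly matching the ~100 ms and ~1 s figures above):

```python
# Hold-off timer sketch for multilayer survivability: if layer 1
# protection repairs the fault within the hold-off window, the higher
# layer never reacts; otherwise higher-layer restoration takes over.

def recovery_action(l1_repair_time_ms, holdoff_ms=150):
    """Decide which mechanism ends up handling the failure."""
    if l1_repair_time_ms is not None and l1_repair_time_ms <= holdoff_ms:
        return "layer 1 protection"
    return "higher-layer restoration"

print(recovery_action(100))    # layer 1 protection (repaired within hold-off)
print(recovery_action(None))   # higher-layer restoration (L1 could not repair)
```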


Survivability design trends for optical networks
Eliminating SONET and ATM
Eases problem of interaction between layers
But decreases available survivability mechanisms
DWDM system may not automatically provide optical
protection
Forces higher layers (IP or ATM) to handle
Not designed to do this
Necessary to modify routing protocols to converge
faster
~1 s about the best at present


Survivability in MPλS networks


Typically 1:1 or 1:n protection used
Optical LSPs (OLSP) for protection predefined
If failure occurs, adjacent nodes switch all aggregated
LSPs onto backup OLSP
No end-to-end signaling required
Path protection can also be used
Requires signaling between endpoints of OLSP