
White paper

Calculating the optimum time


to evolve to 40G

Contents

1.0 Executive summary: Moving to 40G and beyond
2.0 Introduction: Exploding traffic volumes demand ever higher bandwidth
2.1 The all-important question
2.2 Demand and availability
3.0 Avoid paying the overlay penalty
3.1 A note on an alternative approach
3.2 Extending the OP model to 100G
3.3 A matter of timing
4.0 Get set for evolution
5.0 Conclusion: 40G is the better option to protect existing investments
7.0 Glossary


1.0 Executive summary


Moving to 40G and beyond
As exploding traffic volumes demand higher network capacity, some Communications
Service Providers (CSPs) are already engaged in the mass deployment of 40 Gigabit per
second (40G) transmission technology in their networks. However, many others are delaying
the deployment of 40G because the capital cost of deploying a single channel using 40G
components is currently higher than providing the same capacity using four channels at
10G. The cost of 40G technology is coming down fast, but it still has some way to go
before it matches the price of 10G directly.
On the other hand, each network fiber
can only carry a limited number of
channels. Deploying 10G technology
fills four times as many channels in
order to provide the same capacity.
This could lead to the need to bring a
second fiber pair and a second DWDM
system into play, and this risk rises
as the availability of free channels
falls. Deploying a second fiber pair,
or overlay, entails substantial added
costs, defined here as the overlay
penalty (OP). This paper describes
a simple model that aims to take this
OP into account when weighing up
the choice between 10G and 40G.

The resulting calculations show that deploying 40G will, in many cases, be the most cost-effective option long before the capital cost of 40G and 10G components reaches direct parity.
While 40G deployments continue, successful trials of 100G technology have already taken place, and the first 100G products could appear as early as 2010. Although they are likely to be prohibitively expensive at first, they will find a market in router interconnections and at network bottlenecks where the urgent need for extra capacity outweighs capital cost considerations. Looking further ahead, when prices for 100G start to come down, a version of the OP-based model described in this paper could also be used to help CSPs decide the most appropriate time to begin mass deployments of 100G technology.


2.0 Introduction:
Exploding traffic volumes demand
ever higher bandwidth
CSPs need to deploy solutions that not only meet today's demanding requirements, but that
can also evolve to meet future needs. By 2015 there will be a single global communications
infrastructure carrying both fixed and mobile backhaul traffic. A 100-fold increase in traffic is
expected over the same period, and some growth estimates are even higher.
Accommodating such a dramatic increase in a relatively short time frame
represents a formidable challenge for the industry.
Today's CSPs routinely use dense
wavelength division multiplexing
(DWDM) to split individual optical
fibers into up to 80 slots, each of which
typically carries 10 Gigabits per second
(10G). But far higher capacities will be
needed in the future.

Some leading edge networks are already able to handle 40G, and trials are underway to develop 100G per DWDM slot, increasing the maximum capacity of each fiber from 0.8 Terabits per second (Tbps) today to 3.2 and 8 Tbps respectively. The first commercial 100G products are not expected to arrive until 2010/11. In contrast, 40G is already being deployed commercially by some of the world's major network CSPs.

The reason that the numbers for 40G have been slow to take off is that many CSPs are delaying their investment in 40G technology and opting instead to meet rising demand by filling more and more of their remaining DWDM slots with 10G. Their main motive in going down the 10G route is the relatively high capital cost (CAPEX) of 40G equipment, which currently makes it more expensive to deploy a single 40G channel than it is to provide the same capacity using four 10G channels.

"We are going to be butting up against the physical capacity of the Internet by 2010."
Jim Cicconi, Vice President of legislative affairs, AT&T

However, anyone looking to build a case for investment should look beyond upfront costs to the longer-term cost implications of choosing 10G rather than 40G. This paper uses a simple model to explore the cost-effectiveness of 40G as the demand for capacity continues to grow and the available DWDM slots fill up. The same model is also applied to 100G, where current costs for 100G interfaces are far higher than those of 10 x 10G.

2.1 The all-important question

Network CSPs need to consider what they will do if their existing fibers reach capacity. If it means that they need to bring a second, or overlay, system into play, then they will incur significant new costs. In addition to the expense of buying or renting the extra fiber, there is the cost of all the added hardware and systems to consider, as well as the extra operating costs associated with maintaining and managing them. So the crucial question that everyone should be asking is: when does it become cost-effective to use 40G instead of 10G in order to avoid the need to deploy an overlay system?

2.2 Demand and availability

Moving at the right time is crucial. Being too early might be disadvantageous in terms of CAPEX, but CSPs who leave it too late might end up having to deploy an overlay system. So when is the right time? The answer is influenced by demand and availability. How will capacity demand evolve? Is it predictable, or is it increasing more or less than expected? When will high-capacity interfaces be available on the router side, pushing backbone line rates to 40G and 100G respectively? And don't forget the CSP's customers: will they be willing to accept any traffic interruption when replacing 10G with 40G at a later date?


Fig. 1: 40G and 100G evolution, 2008 to 2012. A significant ramping up of 40G deployments is expected by the end of 2009. Over this period, 10G is a mature technology in a price-eroding volume market; 40G moves from a developing, high-cost, early-adopter technology through standardization and price erosion towards a volume market; and 100G progresses from lab trials, field trials and proof-of-concept work to a developing, pre-standard, early-adopter market, with the first 100G products, carrier demand for 100G and 40G/100G standardization expected around 2010 to 2011.
6 Calculating the optimum time to evolve to 40G

3.0 Avoid paying the overlay penalty


The following approach answers this question about timing in a very simple way. First, take
the cost of deploying the overlay network and divide it up between all the remaining slots on
the existing fiber. This is called the overlay penalty (OP).
The OP increases as the number of
available slots reduces. This is valid
in practical terms as well as in the
calculation, since there is a greater
risk that the CSP will end up having
to deploy an overlay if it has fewer
free slots available. The tipping point
in the decision between 40G and
10G then becomes the point at which
the cost of the 40G components
is equal to or less than that of four
10G slots plus four times the OP.
This is called the break-even load.
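The break-even rule just described can be sketched in a few lines of Python. This is a minimal illustration, not part of the paper's model as published: the function names are ours, an 80-slot DWDM system is assumed, and all costs are in arbitrary common units.

```python
def overlay_penalty(overlay_tco, loaded_slots, total_slots=80):
    """The overlay's TCO divided across the slots still free on the existing fiber."""
    free_slots = total_slots - loaded_slots
    if free_slots <= 0:
        raise ValueError("fiber is full: an overlay is unavoidable")
    return overlay_tco / free_slots

def prefer_40g(cost_40g, cost_10g, overlay_tco, loaded_slots):
    """True once one 40G channel costs no more than four 10G channels plus four OPs."""
    op = overlay_penalty(overlay_tco, loaded_slots)
    return cost_40g <= 4 * cost_10g + 4 * op
```

For example, with a 10G channel cost of 1, an overlay TCO of 12 in the same units (an assumed value) and 40 slots already loaded, the OP is 12/40 = 0.3 per channel, so 40G is preferable up to a price ratio of 4 + 4 x 0.3 = 5.2, which matches the break-even example discussed below.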


Of course, this simplified approach does not take into account some of the other important factors in the decision-making process. Some of these strengthen the case for 40G, such as the added operating costs associated with managing four times as many 10G slots, or the ongoing costs of operating an overlay network. Others work against 40G, such as the added cost associated with spending today rather than in the future when equipment becomes cheaper. These are just a few of the direct and indirect costs that all contribute to the total cost of ownership (TCO) of the various alternatives. Nevertheless, the OP-based model gives a simple, transparent approximation of the relative costs involved.

Two curves are shown in fig. 3, each representing a different TCO for the overlay. The purple curve assumes a very low cost for the overlay based on a simple point-to-point network. The orange curve assumes twice the TCO of the purple curve, representing a more complex link.

Taking the OP into account, it becomes immediately apparent that deploying 40G may be the more cost-effective option sooner than the relative component costs might suggest.

The graph shows how the break-even point works out for a given link as the number of available slots varies and the ratio of 40G to 10G component prices changes.

Fig. 2: The overlay penalty. The TCO of a second, overlay system (Terminal A2 to Terminal B2) is divided across the remaining channels of the existing system (Terminal A1 to Terminal B1).
With the lower TCO for the overlay (purple curve), if the cost ratio is 5.2, the break-even load is 40 occupied slots. This means that if 40 slots are loaded, it makes sense to use 40G in any further capacity expansions. In other words, with half the slots loaded, the capital cost of 40G might be 5.2 times that of 10G, but the business case already works out in favor of 40G. Clearly, an earlier introduction of higher line rates such as 40G makes sense for high-value connections that include line amplification or Reconfigurable Optical Add-drop Multiplexers (ROADMs).
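The curves in fig. 3 can be reproduced by solving the break-even condition for the loaded slots. The sketch below assumes an 80-slot fiber and, in units of the 10G component cost, an overlay TCO of 12 for the purple curve and 24 for the orange one; these TCO values are inferred here to match the 5.2-ratio example, not stated explicitly in the paper.

```python
def break_even_load(price_ratio, overlay_tco, factor=4, total_slots=80):
    """Loaded slots above which the higher rate pays off at a given price ratio.

    At break-even: price_ratio = factor * (1 + overlay_tco / free_slots),
    so free_slots = factor * overlay_tco / (price_ratio - factor).
    price_ratio must exceed factor (below it, the higher rate always wins).
    """
    free_at_break_even = factor * overlay_tco / (price_ratio - factor)
    return max(0, round(total_slots - free_at_break_even))

# Purple curve of fig. 3 (overlay TCO 12):
print([break_even_load(r, 12) for r in (6.4, 5.6, 5.2, 4.8)])  # [60, 50, 40, 20]
# Orange curve (overlay TCO 24, twice the purple TCO):
print([break_even_load(r, 24) for r in (6.4, 5.6, 5.2)])       # [40, 20, 0]
```

Note how a costlier overlay (orange) pushes the break-even point to fewer loaded slots at every price ratio, i.e. 40G pays off earlier.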

Fig. 3: Break-even load for 40G. Number of lambdas loaded at the break-even point versus the price ratio of 40G to 10G components, for the low-TCO point-to-point overlay (purple) and an overlay with twice that TCO (orange):

Price 40G / price 10G      6.4   6.2   6.0   5.8   5.6   5.4   5.2   5.0   4.8   4.6
Break-even load (purple)    60    58    56    53    50    46    40    32    20     0
Break-even load (orange)    40    36    32    27    20    11     0     0     0     0

This figure is based on a very simple network: a point-to-point connection between two terminals. In real deployments the networks are much bigger, with more nodes and more complex network elements, such as ROADMs or Photonic Cross-connects (PXCs). This translates into higher costs for an overlay network, both in terms of CAPEX and OPEX, which makes the OP in real networks even higher and makes 40G the most attractive option at an even higher price ratio.

The simple OP calculation used here gives a very rough estimate. Each deployment is different, and each CSP will need to customize the cost estimations used in the model if it wishes to generate its own reliable business case.

3.1 A note on an alternative approach

There is another way in which CSPs might choose to ease the crunch in capacity as their available DWDM slots fill up with 10G channels: they could begin swapping out their existing 10G channels with 40G. This white paper does not consider the swap-out alternative in any great detail, for three reasons. First, many CSPs would find the disruption to existing services during the upgrade unacceptable. Second, simply abandoning the investment already made in 10G systems is likely to be an expensive option. Finally, the aim of this paper is to offer an extremely simple model that can be applied at an early stage in the decision-making process. The swap-out option can be incorporated into more complex costings later on.

3.2 Extending the OP model to 100G

A similar, OP-based approach can also be used to look at the relative merits of 100G deployments, although we need to use analysts' predicted prices for 10G and 100G components from 2010 onwards. An added complication is that 40G will have reached a very attractive price level by then, so it will also be a significant competitor to 100G. The impact of 40G has not been considered here, however: the cost of 40G will fall much faster over the next two or three years than the cost of 10G, which makes 10G a more stable reference point for the 100G estimates.

The results of the 100G calculations are shown in fig. 4 below. Once again, two cases are represented. The purple curve is based on a simple point-to-point network. The orange curve assumes twice the TCO of the purple curve, representing a more complex link.

Taking the same example of 40 channels loaded at the lower TCO, 100G deployments could be justified once the capital cost of 100G reaches 13 times that of 10G. And again, with more complex networks, the introduction of 100G makes sense at an earlier stage.
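For 100G, the factor of four in the break-even rule simply becomes ten, since one 100G channel replaces ten 10G channels. A minimal sketch of the generalized ratio, again with costs in units of the 10G price, an 80-slot system, and an assumed overlay TCO of 12 chosen to reproduce the examples in the text:

```python
def break_even_ratio(factor, overlay_tco, loaded_slots, total_slots=80):
    """Price ratio (nG / 10G) at which the higher line rate becomes cost-effective.

    factor is how many 10G channels one higher-rate channel replaces:
    4 for 40G, 10 for 100G.
    """
    op = overlay_tco / (total_slots - loaded_slots)
    return factor * (1 + op)

print(break_even_ratio(10, 12, 40))  # 13.0: the 100G example in the text
print(break_even_ratio(4, 12, 40))   # 5.2: the earlier 40G example
```

The same overlay TCO thus yields a 100G break-even ratio of 13 where the 40G ratio was 5.2, because every avoided overlay channel now carries ten times the penalty relief.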

Fig. 4: Break-even load for 100G. Number of lambdas loaded at the break-even point versus the price ratio of 100G to 10G components, for the low-TCO point-to-point overlay (purple) and an overlay with twice that TCO (orange):

Price 100G / price 10G      16   15.5    15   14.5    14   13.5    13   12.5    12   11.5
Break-even load (purple)    60     58    56     53    50     46    40     32    20      0
Break-even load (orange)    40     36    32     27    20     11     0      0     0      0

3.3 A matter of timing

Today's rollout of 40G transport has many parallels to the transition that the DWDM industry went through in the late 90s, when new 10G transport networks began to replace 2.5G as the long haul standard. The reasons were the same as those of today: to provide additional network capacity, to reduce transmission costs by taking advantage of economies of scale, and to support the latest generation of high-end routers.

Looking ahead, 100G is already on the horizon, and it is very likely to be a similar story as the industry shifts to 100G in the future. The first commercial 100G products are expected on the market from 2010 onwards. Although we anticipate promising performance results from pre-standard 100G in 2009 and early 2010, market acceptance in this timeframe is questionable since 100G will be relatively expensive. Even so, early deployment might make sense for hotspots where overlay networks are not feasible and a higher cost is acceptable.

According to Infonetics, a small market will develop when the 100G price falls to between 15 and 20 times that of 10G. The bulk of the market will follow when the cost falls to nearer 10 x 10G. When 40G was introduced, it took three to four years before the technology ramped up to significant volumes. Even with optimistic predictions for 100G, this would still translate into 2012 and beyond before 100G is likely to be seen in higher volumes.


4.0 Get set for evolution


Nokia Siemens Networks has extensive experience in both 10G and 40G deployments. For example, in March 2009, Current Analysis described Nokia Siemens Networks as the acknowledged leader in 40G.

By 2008, Nokia Siemens Networks had become the first company to roll out mass deployments of 40G, and it currently leads the 40G market with AT&T, one of the world's biggest CSPs, as a leading customer.

Our hiT7300 multi-reach DWDM platform is the most compact and cost-efficient 40G platform in the industry, reaching from the metro to the backbone. Our complementary hiT7500 platform is designed for long haul and ultra long haul applications. Extensive automation in planning and procurement, delivery of pre-configured equipment, self-guided provisioning, and automated maintenance and fault management all contribute towards enormous reductions in OPEX and speed up the time-to-market for both platforms. 40G per wavelength on an integrated DPSK card is the most cost-efficient answer to the increased traffic aggregated from the access to the core.

Nokia Siemens Networks is also involved in ongoing 100G trials and standards development, working closely with international bodies such as the ITU-T, IEEE HSSG and OIF. In 2007, we were the first to demonstrate the OIF-accepted 100G DP-QPSK modulation format in the laboratory. In 2008, Verizon and Nokia Siemens Networks successfully achieved a record-breaking field trial of mixed 10G, 40G and 100G channels over more than 1,000 km. Now is the time to translate the results of this trial into real products.

Fig. 5: Roadmap to success: record in the lab, tested in the field, successful mass deployment.

1999: Distance record: 1200 km DWDM without regeneration
2003: First to deploy a full C band tunable transponder; first to deploy ROADM
2004: First to demonstrate 40G DWDM
2006: 1700 km transmission of 40G on an installed 10G system; first to deploy FBGs for dispersion compensation; first to provide a design robust against power transients
2007: 2500 km transmission of 10 x native 111G; first to deploy a hitless 50 GHz MEMS wavelength selective switch
2008: Field world record with mixed 100G, 40G and 10G over more than 1000 km; first to roll out 40G technology in mass deployment; 100G technology prepared for mass deployment

In short, Nokia Siemens Networks has the expertise to support CSPs at every stage of network evolution, from 10G through 40G and beyond. Moreover, Nokia Siemens Networks has a great deal of expertise in consultancy and in drawing up business cases for investment. Based on operators' actual networks and subscriber demands, we help our customers optimize existing networks and upgrade them in the most efficient way.


5.0 Conclusion:
40G is the better option to protect
existing investments
The mass roll-out of 40G technology is already happening, but the high component costs
are persuading many CSPs to delay investing in 40G until the CAPEX involved can match
that of providing equivalent capacity using 10G channels in their DWDM slots.
By calculating the possible future
costs associated with the inefficient
use of DWDM slots, the simple model
devised by Nokia Siemens Networks
demonstrates that it may in many
cases be cost effective over the life of
the network to opt for 40G technology
now, rather than waiting for 40G
components to achieve direct parity
on price.

Furthermore, successful 100G trials are already underway. As 100G components are standardized and commercialized over the next two or three years, a similar calculation could help CSPs decide the best time to start deploying 100G, when its price begins to fall in comparison with 10G and 40G components.


7.0 Glossary

CAPEX: Capital Expenditure.
CSP: Communications Service Provider.
DPSK: Differential Phase Shift Keying. A digital modulation scheme that conveys data through the difference in applied phase shifts of a carrier wave.
DWDM: Dense Wavelength Division Multiplexing. A system of optical signals multiplexed within the 1550 nm band.
FBG: Fiber Bragg Grating. A short length of optical fiber that reflects some optical wavelengths and transmits others, making it useful as an optical filter.
G: Gigabit per second, e.g. 10G, 40G.
IEEE HSSG: Higher Speed Ethernet Study Group of the Institute of Electrical and Electronics Engineers.
ITU-T: Telecommunication Standardization Sector. The body that coordinates telecommunications standards on behalf of the International Telecommunication Union.
MEMS: Micro Electro Mechanical Systems. Miniature devices that use mechanical movement to achieve short or open circuits.
OIF: Optical Internetworking Forum.
OP: Overlay penalty.
PXC: Photonic Cross-connect. An all-optical device used to switch high-speed optical signals in a fiber optic network.
ROADM: Reconfigurable Optical Add-drop Multiplexer. A form of optical add-drop multiplexer that can add and drop wavelengths without the need to convert them to electronic signals and back again to optical signals.

Nokia Siemens Networks Corporation


P.O. Box 1
FI-02022 NOKIA SIEMENS NETWORKS
Finland
Visiting address:
Karaportti 3, ESPOO, Finland
Switchboard +358 71 400 4000 (Finland)
Switchboard +49 89 5159 01 (Germany)

Copyright 2009 Nokia Siemens Networks. All rights reserved.


Nokia Siemens Networks and the wave logo are registered trademarks of Nokia Siemens Networks.
Other company and product names mentioned herein may be trademarks or trade names of their respective owners.
Products and solutions herein are subject to change without notice.

Every effort is made to ensure that our communications materials have as little impact on the environment as possible.
www.nokiasiemensnetworks.com
