
Education Services Courseware

Data Center Physical Design


Student Guide
Data Center Physical Design

NOTE: This Student Guide has been developed from an audio narration and therefore uses conversational English. The purpose of this transcript is to help you follow the online presentation, and you may need to refer back to that presentation while reading it.

Slide 1

Build the Best

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 1


Slide 2

Juniper Networks
Data Center Design
Best Practices

Data Center Physical Design

2016 Juniper Networks, Inc. All rights reserved. | www.juniper.net | Proprietary and Confidential

Welcome to Juniper Networks Data Center Physical Design eLearning module.


Slide 3

Navigation

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 3

Throughout this module, you will find slides with valuable detailed information. You can stop any slide with the Pause
button to study the details. You can also read the notes by using the Notes tab. You can click the Feedback link at any
time to submit suggestions or corrections directly to the Juniper Networks eLearning team.


Slide 4

Course Objectives

After successfully completing this course, you will be


able to:
Determine proper placement of equipment within the
data center
Describe various cabling used in the data center
Discuss the evolving requirements for optics in the data
center

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 4

After successfully completing this course, you will be able to:


Determine proper placement of equipment within the data center
Describe various cabling used in the data center; and
Discuss the evolving requirements for optics in the data center


Slide 5

Agenda: Data Center Physical Design

Physical Design Considerations


Evolving Optical Requirements

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 5

This course consists of two sections. The two main sections are as follows:
Physical Design Considerations; and
Evolving Optical Requirements.


Slide 6

Juniper Networks
Data Center Design
Best Practices

Physical Design Considerations

2016 Juniper Networks, Inc. All rights reserved. | www.juniper.net | Proprietary and Confidential

Physical Design Considerations

This section will compare the various equipment rack layouts used in a data center along with their advantages and
disadvantages. The various types of cabling options will also be described.


Slide 7

Section Objectives

After successfully completing this section, you will be


able to:
Determine proper placement of equipment within the
data center
Describe various cabling used in the data center

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 7

After successfully completing this section, you will be able to:


Determine proper placement of equipment within the data center; and
Describe various cabling used in the data center.


Slide 8

Seven Domains in the Network

(Slide graphic showing the seven network domains: Access and Aggregation, Edge, Core, Data Center, WAN, Campus and Branch, and Consumer and Business Device.)

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 8

Seven Domains in the Network

There are seven specific domains in the network: Access and Aggregation, Edge, Core, Data Center, WAN, Campus and Branch, and Consumer and Business Device.

In this course we will look at the challenges, requirements, and drivers for today's data center, as well as the Juniper Networks recommended solutions and designs for meeting those requirements and overcoming the challenges of data center deployments.


Slide 9

Physical Layout
Multiple physical divisions:
Referred to as segments,
zones, cells, or pods

Physical Considerations:
Placement of equipment
Cabling requirements and restrictions
Power and cooling requirements

Layout options:
Top of rack
Bottom of rack
Middle of row
End of row
2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 9

Physical Layout

One of the first steps in data center design is planning the physical layout of the data center. Multiple physical
divisions exist within the data center that are usually referred to as segments, zones, cells, or pods. Each segment
consists of multiple rows of racks containing equipment that provides computing resources, data storage, networking,
and other services.

Physical considerations for the data center include placement of equipment, cabling requirements and restrictions,
and power and cooling requirements. Once you determine the appropriate physical layout, you can replicate the
design across all segments within the data center or in multiple data centers. Using a modular design approach
improves the scalability of the deployment while reducing complexity and easing data center operations.

The physical layout of networking devices in the data center must balance the need for efficiency in equipment
deployment with restrictions associated with cable lengths and other physical considerations. Pros and cons must be
considered between deployments in which network devices are consolidated in a single rack versus deployments in
which devices are distributed across multiple racks. Adopting an efficient solution at the rack and row levels ensures
efficiency of the overall design because racks and rows are replicated throughout the data center.

This section discusses the following data center layout options:


Top of rack (ToR) or bottom of rack (BoR);
Middle of row (MoR); and
End of row (EoR).


Slide 10

Top of Rack and Bottom of Rack


(Slide graphic: in a top-of-rack deployment the switches sit above the computing and storage devices in each rack; in a bottom-of-rack deployment they sit below.)

Pros:
Minimizes cable length
Copper 10-Gigabit Ethernet cable lengths use less power
Can provide switching redundancy on a per rack basis
Cabling runs can be simpler

Cons:
Legacy devices must be managed separately
More complicated topology and management
Uplinks are required for connection between the servers in adjacent racks, increasing latency
Typically, more devices are needed (note that Virtual Chassis addresses these issues)

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 10

Top of Rack and Bottom of Rack

In a ToR and BoR deployment, network devices are deployed in each server rack. A single device (or pair of devices
for redundancy at the device level) provides switching for all of the servers in the same rack. To allow sufficient space
for servers, the general recommendation is that devices in the rack should be limited to a 1U or 2U form factor.

A ToR or BoR layout places high-performance devices within the server rack in a row of servers in the data center.
With devices in close proximity, cable run lengths are minimized. Cable lengths can be short enough to accommodate
1-Gigabit Ethernet, 10-Gigabit Ethernet, and future 40-Gigabit Ethernet connections. Potential also exists for
significant power savings for 10-Gigabit Ethernet connections when the cable lengths are short enough to allow the
use of copper, which operates at one-third the power of longer-run fiber cables. Note also that deploying switches in a
middle-of-rack deployment might offer the additional benefits of even shorter cable runs and smaller cable bundles.

With ToR and BoR layouts, you can easily provide switching redundancy on a per rack basis. However, each legacy device must be managed individually, which can complicate operations and add expense, because multiple discrete 24- or 48-port devices are required to meet connectivity needs. Both top of rack and bottom of rack deployments
provide the same advantages with respect to cabling and switching redundancy. Cabling run lengths are minimized in
this deployment and are simpler than MOR or EOR configurations. ToR deployments provide more convenient access
to the network devices, while BoR deployments can be more efficient from an airflow and power perspective, because
cool air from under-floor heating, ventilation, and air conditioning (HVAC) systems reaches the network devices in the rack
before continuing to flow upward.

ToR and BoR deployments do have some disadvantages, however. Having many networking devices in a single row
complicates topology and management. Because the devices serve only the servers in a single rack, uplinks are
required for connection between the servers in adjacent racks, and the resulting increase in latency can affect overall
performance. Agility is limited because modest increases in server deployment must be matched by the addition of
new network devices. Finally, because each device manages only a small number of servers, more devices are
typically required than would otherwise be needed to support the server population. Juniper has developed a solution
that delivers the significant benefits of ToR and BoR deployments while addressing the previously mentioned issues.
The solution, Virtual Chassis, is described in a later chapter.
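To put the device-count tradeoff in concrete terms, the following is a minimal sketch (not taken from the course) that compares the number of access devices needed for a ToR/BoR layout against an EoR layout; the rack count, servers per rack, and chassis port count are illustrative assumptions.

```python
import math

# Rough comparison of access-device counts for a ToR/BoR layout versus an EoR layout.
# The rack count, servers per rack, and chassis port count are illustrative assumptions,
# not figures from this course.

def tor_switch_count(racks, redundant_pair_per_rack=True):
    """ToR/BoR: one switch (or a redundant pair) is installed in every server rack."""
    return racks * (2 if redundant_pair_per_rack else 1)

def eor_chassis_count(racks, servers_per_rack, ports_per_chassis):
    """EoR/MoR: enough chassis ports to terminate every server in the row."""
    return math.ceil(racks * servers_per_rack / ports_per_chassis)

racks, servers_per_rack = 16, 30  # one example row of server racks
print("ToR/BoR switches:", tor_switch_count(racks))                     # 32
print("EoR chassis:", eor_chassis_count(racks, servers_per_rack, 384))  # 2
```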


Slide 11

End of Row
End of Row Deployment

(Slide graphic: a row of computing and storage device racks with a high-density switch rack at the end of the row.)

Pros:
A single access tier for an entire row of servers
Requires fewer uplinks
Simplifies network topology
Best for 1-Gigabit Ethernet deployments with relatively few servers

Cons:
Longer cable runs can exceed the length limits for 10-Gigabit Ethernet and 40-Gigabit Ethernet
Port utilization is not always optimal with chassis switches
Most chassis consume a great deal of power, cooling, and space, even when not fully populated

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 11

End of Row

If the physical cable layout does not support ToR or BoR deployment, or if the customer prefers a large chassis-based
solution, the other options would be an EOR or MOR deployment, where network switches are deployed in a
dedicated rack in the row.

In the EoR configuration, which is common in existing data centers with existing cabling, high-density switches are
placed at the end of a row of servers, providing a consolidated location for the networking equipment to support all of
the servers in the row. EoR configurations can support larger form factor devices than ToR and BoR rack
configurations, so you end up with a single access tier switch to manage an entire row of servers. EoR layouts also
require fewer uplinks and simplify the network topology, as inter-rack traffic is switched locally. Because EoR
deployments require cabling over longer distances than ToR and BoR configurations, they are best for deployments
that involve 1-Gigabit Ethernet connections and relatively few servers.

Disadvantages of the EoR layout include longer cable runs which can exceed the length limits for 10-Gigabit Ethernet
and 40-Gigabit Ethernet connections, so careful planning is required to accommodate high-speed network
connectivity. Device port utilization is not always optimal with traditional chassis-based devices, and most chassis-
based devices consume a great deal of power and cooling, even when not fully configured or utilized. In addition,
these large chassis-based devices can take up a great deal of valuable data center rack space.


Slide 12

Middle of Row
Middle of Row Deployment

(Slide graphic: computing and storage device racks on either side of a high-density switch rack placed in the middle of the row.)

Pros:
A single access tier for an entire row of servers
Requires fewer uplinks
Simplifies network topology
Reduced cable lengths to support 10-Gigabit and 40-Gigabit Ethernet

Cons:
Port utilization is not always optimal with chassis switches
Most chassis consume a great deal of power, cooling, and space, even when not fully populated

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 12

Middle of Row

An MoR deployment is similar to an EOR deployment, except that the devices are deployed in the middle of the row
instead of at the end. The MoR configuration provides some advantages over an EoR deployment, such as the ability
to reduce cable lengths to support 10-Gigabit Ethernet and 40-Gigabit Ethernet server connections. High-density,
large form-factor devices are supported, fewer uplinks are required in comparison with ToR and BoR deployments,
and a simplified network topology can be adopted.

You can configure an MoR layout so that devices with cabling limitations are installed in the racks that are closest to
the network device rack. While the MoR layout is not as flexible as a ToR or BoR deployment, the MoR layout
supports greater scalability and agility than the EoR deployment.

Although minimizing the cable length disadvantage associated with EoR deployments, the MoR deployment still has
the same port utilization, power, cooling, and rack space concerns associated with an EoR deployment.


Slide 13

Optic Form Factor Options

What are the different types of optic form factors?

SFP: 100BASE-X, 1000BASE-X
SFP+: 10GBASE-X, Direct Attach Copper
QSFP: 40GBASE-X, LC and MTP connectors, Direct Attach Copper
CFP: 100GBASE-X, high-speed connectivity between core devices and data centers

Juniper supports a wide range of optics in our data center platforms from the listed form factors. These form factors provide low power consumption and high density to ensure greater port density in fewer rack units.

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 13

Optic Form Factor Options

Before we discuss data center cabling, let's first take a look at the various optic form factors.

Juniper's data center platforms support a variety of form factors: SFP, SFP+, QSFP, and CFP. These form factors provide low power consumption and high density to ensure greater port density in fewer rack units. Depending on the deployment scenario, Juniper's data center platforms support different pluggable optic modules that can be selected based on distance, form factor, and wavelength.


Slide 14

Small Form Factor Pluggable (SFP)

Cost-effective 1GbE connectivity

Juniper data center platforms support a range of 1GbE SFP optics:
1GE-LX, SMF, 10 km
1GE-SX, MMF, 500 m
1GE-T, Cat5e, 100 m

GbE = Gigabit Ethernet

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 14

Small Form Factor Pluggable (SFP)

SFP transceivers provide support for 1-Gigabit Ethernet fiber-optic or copper cables. Juniper data center platforms
support a range of 1-Gigabit Ethernet SFP transceiver types: single-mode fiber-optic (SMF), multimode fiber-optic
(MMF), and category 5 enhanced (Cat5e) copper.

SFP transceivers are hot-insertable and hot-removable.


Slide 15

Small Form Factor Pluggable (SFP+)

Dense 10GbE optics

Juniper utilizes SFP+ to provide dense pluggable optic support:
10GBASE-ZR, SMF, 80 km
10GBASE-ER, SMF, 40 km
10GBASE-LR, SMF, 10 km
10GBASE-SR, MMF, 300 m
10GBASE-USR, MMF, 100 m

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 15

Small Form Factor Pluggable (SFP+)

SFP+ transceivers are enhanced SFP transceivers that provide support for data rates up to 10 Gbps over fiber-optic or copper interfaces. Juniper utilizes SFP+ to provide dense pluggable optic support for SMF and MMF 10-Gigabit Ethernet interfaces.

SFP+ transceivers are hot-insertable and hot-removable.


Slide 16

Quad SFP (QSFP)

High-speed 40GbE optics

QSFP optics provide 40GbE support, with both MTP and LC connectors depending on the optic.

MTP Connectors:
40G-SR4, MMF, 150 m
40G-ESR4, MMF, 400 m

LC Connectors:
40G-LX4, MMF, 100 m
40G-IR4, SMF, 2 km
40G-LR4, SMF, 10 km
40G-ER4, SMF, 40 km
4x10G-IR4, 2 km
4x10G-LR4, 10 km

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 16

Quad SFP (QSFP)

QSFP transceivers are quad (that is, four-channel) transceivers that provide support for fiber-optic or copper cables.
Juniper utilizes QSFP optics for 40-Gigabit Ethernet interfaces, or 10-Gigabit Ethernet interfaces when using a
breakout cable. QSFP transceivers are hot-insertable and hot-removable.

QSFP+ and QSFP28 are variations of QSFP, allowing for higher data rates.


Slide 17

C Form Factor Pluggable (CFP)

High-speed 100GbE optics

Typically found in core boxes such as the EX9200 platform, CFP brings 100GbE connectivity between core devices and between data centers:
100GBASE-SR10
100GBASE-LR4
100GBASE-ER4

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 17

C Form Factor Pluggable (CFP)

CFP transceivers are 100 Gbps transceivers that provide support for fiber-optic cables with built-in clock recovery
circuits. Juniper utilizes CFP transceivers to provide 100-Gigabit Ethernet connectivity. The C stands for the Latin centum (the Roman numeral C), used to express the number 100, since the CFP was primarily designed for 100-Gigabit Ethernet use.


Slide 18

Data Center Cabling

Cabling is a major cost in the data center
Any major change to the data center will involve the need to run new cable

Types of Cabling Used in the Data Center

Network Tier | Devices Connected | Cable Type
Access Tier | Servers to access switches, appliances, and monitoring devices; servers to storage networks | Fiber and copper
Access Aggregation Tier | Access switches to each other and to aggregation tier or core; SAN or NAS to each other and to the core | Fiber and copper
Core | Core or backbone routers and switches to each other, to access aggregation tier, and to telecom room equipment | Fiber
Network Edge (WAN Connections) | WAN router to data center network and to service provider equipment | Fiber

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 18

Data Center Cabling

Cable installation is a major cost in data centers due to the labor involved in pulling cables through conduits and cable
trays and, to varying degrees, the price of the cabling itself. Organizations install different types of cabling in different
parts of the data center based on factors such as the equipment being connected, the bandwidth required by a
particular device or link, and the distances between the connected devices. Organizations often try to install sufficient
cabling to accommodate future expansion. However, any major change to a data center, such as upgrading to higher-performance servers or moving to a higher-speed core, can result in the need to run new cabling.

Cabling runs basically everywhere in the data center, both within tiers and between them. Cabling runs within racks to
connect servers and other appliances to each other and to their networks, between racks and the access switches,
between the switches in the access tier, between access switches and switches in the aggregation or core tiers,
between devices in the core, and between core devices and edge equipment housed in the telecom room.

The table on the slide summarizes data center cabling infrastructure, showing the data center network tiers, the
devices connected within them (for example, WAN routers, storage area network [SAN] devices or network attached
storage [NAS] devices) and the type of cable typically used.


Slide 19

Planning for 40-Gigabit Ethernet and 100-Gigabit Ethernet

Higher bandwidth will be needed in the data center
The IEEE has defined standards for 40-Gigabit and 100-Gigabit Ethernet
Cabling for 40-Gigabit and 100-Gigabit Ethernet
Ethernet Type Cable Type Maximum Distance
40GBASE-KR4/CR4 Copper (Twinax) 10 meters (7 seems more likely)

40GBASE-SR4 OM3 MMF 100 meters

40GBASE-LR4 SMF 10 km

100GBASE-CR10 Copper (Twinax) 10 meters (7 seems more likely)

100GBASE-SR10 OM3 MMF 100 meters

100GBASE-LR4 SMF 10 km

100GBASE-ER4 SMF 40 km

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 19

Planning for 40-Gigabit Ethernet and 100-Gigabit Ethernet

Several trends are driving the need for higher bandwidth throughout the data center. For example, dual-port 10-
Gigabit Ethernet network interface cards (NICs) for servers are getting cheaper, so they are being deployed more
often. The use of 10-Gigabit Ethernet in the equipment distribution area is driving the need for 40-Gbps to 100-Gbps
links in the access, aggregation, and core tiers. To date, network equipment vendors have been delivering speeds of
40-Gbps and 100-Gbps by using variants of a technique called wavelength-division multiplexing (WDM). For example,
using WDM, vendors create a link that is essentially four 10-Gbps signals combined onto one optical medium.
Similarly, 100-Gbps links can be composed of four 25-Gbps or ten 10-Gbps channels.

In 2007, the Institute of Electrical and Electronics Engineers (IEEE) began the process of defining standards for 40-
Gigabit Ethernet and 100-Gigabit Ethernet communications, which were ratified in June 2010. These 40-Gigabit
Ethernet and 100-Gigabit Ethernet standards encompass a number of different physical layer specifications for
operation over single mode fiber (SMF), OM3 multi-mode fiber (MMF), copper cable assembly, and equipment
backplanes (see the table on the slide for more details). To achieve these high Ethernet speeds, the IEEE has
specified the use of ribbon cable, which means that organizations need to pull new cable in some or all parts of the
data center, depending on where 40-Gigabit Ethernet or 100-Gigabit Ethernet is needed.
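As a quick sanity check on the lane arithmetic described above, here is a small illustrative sketch (not part of the course material) showing how 40-Gbps and 100-Gbps links are built from lower-speed lanes.

```python
# 40GbE and 100GbE links are built from lower-speed lanes, as described above:
# four 10-Gbps lanes for 40GbE, and four 25-Gbps or ten 10-Gbps lanes for 100GbE.
LANE_OPTIONS = {
    40:  [(4, 10)],
    100: [(4, 25), (10, 10)],
}

for speed, options in LANE_OPTIONS.items():
    for lanes, lane_rate in options:
        assert lanes * lane_rate == speed  # the lanes must add up to the link speed
        print(f"{speed}GbE = {lanes} lanes x {lane_rate} Gbps")
```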


Slide 20

Cabling Options Compared


Fiber Type | SMF | MMF OM3 | MMF OM4
Cable Cost | Low | High | High
Transceiver Type | CWDM | Parallel Optics | Parallel Optics
Transceiver Cost | High | Low | Low
Expected Reach for 40-Gigabit and 100-Gigabit Ethernet | 10 km | 100 m | 150 m

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 20

Cabling Options Compared

The table on this slide gives you an overview of data center fiber cabling options. Coarse wavelength division
multiplexing (CWDM) and parallel optics SMF and MMF transceiver types are compared, showing the associated
costs and distance limitations.


Slide 21

Transceiver and Cable Type


Speed | Fiber | Connector | Transceiver | Fiber Count | Cable Type
10-Gigabit Ethernet | MMF | LC | SFP+ | 2 fiber | OM3/OM4
10-Gigabit Ethernet | SMF | LC | SFP+/XFP | 2 fiber | OS1
40-Gigabit Ethernet | MMF | MTP | QSFP+ | 12 fiber | OM3/OM4
40-Gigabit Ethernet | SMF | LC | CFP | 2 fiber | OS1
100-Gigabit Ethernet | MMF | MTP | CXP | 24 fiber | OM3/OM4
100-Gigabit Ethernet | SMF | LC | CFP | 2 fiber | OS1

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 21

Transceiver and Cable Type

The illustration on the slide shows the 10-Gigabit Ethernet, 40-Gigabit Ethernet, and 100-Gigabit Ethernet transceiver
and fiber cable types that are used in the data center. With advances in technology the data center is moving beyond
10-Gigabit Ethernet using MMF or SMF with small form-factor pluggable plus (SFP+) or 10-Gigabit small form-factor
pluggable (XFP) transceivers. The advent of 40-Gigabit Ethernet and 100-Gigabit Ethernet speeds has introduced
new connectivity options.

Mechanical transfer pull-off (MTP) is a special type of fiber optic connector made by a company named US Conec.
MTP is an improvement of the original multi-fiber push-on (MPO) connector designed by a company named NTT. The
MTP connector is designed to terminate several fiber strands (up to 24) in a single ferrule. MTP
connections are held in place by a push-on pull-off fastener, and can also be identified by a pair of metal guide pins
that project from the front of the connector.

Multi-mode transceiver types for 40- and 100-Gigabit Ethernet MTP connections include quad small form-factor pluggable plus (QSFP+) and CXP. CXP was designed for data centers where high-density 100-Gigabit connections will be needed in the future (the C stands for the Roman numeral for 100). 40- and 100-Gigabit Ethernet single-mode fiber connections use C form-factor pluggable (CFP) transceivers with LC connection fiber pairs.


Slide 22

Server Cabling
Technology | Cable Type | Maximum Distance (meters) | Power (per side, in watts) | Latency (microseconds) | Bit Error Rate
SFP+ DAC | Twinax Passive | 7 | ~0.1 | ~0.1 | 10^-15
SFP+ DAC | Twinax Active | 15 | ~0.5 | ~0.3 | 10^-15
SFP+ USR | OM2 | 10 | 1 | ~0 | 10^-15
SFP+ USR | OM3 | 100 | 1 | ~0 | 10^-15
SFP+ SR | 62.5 micron | 82 | 1 | ~0 | 10^-15
SFP+ SR | 50 micron | 300 | 1 | ~0 | 10^-15
10GBase-T | Cat 6 | 55 | ~8 | 2.5 | 10^-12
10GBase-T | Cat 6a and 7 | 30 | ~4 | 1.4 | 10^-12
10GBase-T | Cat 6a and 7 | 100 | ~8 | 2.5 | 10^-12

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 22

Server Cabling

The slide illustrates that 10GBase-T is not acceptable, at this time, for converged (that is, lossless Ethernet) networks. Vendors in the industry are pushing the Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) standards body to approve 10GBase-T; however, it is currently not supported.

The bit error rate (BER) of 10^-15 is important, because that is the specification for Fibre Channel SANs.
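To see why the gap between a BER of 10^-12 and 10^-15 matters at these speeds, the short calculation below (an illustration, not course material) estimates the average time between bit errors on a fully loaded 10-Gbps link.

```python
# Average time between bit errors on a fully utilized link:
#   seconds_per_error = 1 / (bit_rate * bit_error_rate)
bit_rate = 10e9  # 10 Gbps

for ber in (1e-12, 1e-15):
    seconds = 1.0 / (bit_rate * ber)
    print(f"BER {ber:.0e}: one errored bit roughly every {seconds:,.0f} seconds "
          f"(about {seconds / 3600:.1f} hours)")

# BER 1e-12 works out to an errored bit about every 100 seconds, while BER 1e-15
# is roughly one every 28 hours, which is why lossless Fibre Channel SAN traffic
# specifies 10^-15.
```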


Slide 23

Future-Proofing Data Center Cabling

Specify a minimum of OM3 fiber, with OM4 as an option for extra reach
Design data centers for 100 to 150 m maximum lengths between switches
Consider higher fiber count requirements: 2 fibers per link becomes 24 fibers
MTP (or MPO) connectors will become the standard transceiver interface, compared to LC connectors
Consider cable management and structured cabling
Limit patch panels in the path to a maximum of two

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 23

Future-Proofing Data Center Cabling

Cabling is a high cost item because it is labor intensive. To maximize their cabling dollar, the customer should try to future-proof the cable plant, which is possible because 40-Gigabit and 100-Gigabit Ethernet use the same cabling guidelines. The only difference is that 100-Gigabit requires twice as many fibers. Specify a minimum of OM3 fiber, and OM4 if extra reach is needed.

A data center should be designed with a maximum cabling distance of 100 to 150 meters between switches. The 100
to 150 meter length limit is part of the 40-Gigabit and 100-Gigabit specification for multi-mode fiber. 150 meters is the
longest length supportedassuming the customer is using OM4 cabling and no more than two patch panels are in the
path. If more than two patch panels are present, then OM4 is limited to 125 meters. If using OM3 fiber, use 100
meters as the maximum length.
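The length guidance above can be captured as a simple rule check. The sketch below models only those guidelines (OM3 versus OM4 and the patch-panel count); it is not a substitute for the full 40-Gigabit and 100-Gigabit Ethernet channel specifications.

```python
def max_mmf_run_m(fiber_type, patch_panels_in_path):
    """Maximum multimode run between switches for 40/100GbE, per the guidelines above."""
    if fiber_type == "OM3":
        return 100
    if fiber_type == "OM4":
        # 150 m with no more than two patch panels in the path, otherwise 125 m.
        return 150 if patch_panels_in_path <= 2 else 125
    raise ValueError("specify OM3 or OM4 multimode fiber")

def run_is_valid(length_m, fiber_type, patch_panels_in_path):
    return length_m <= max_mmf_run_m(fiber_type, patch_panels_in_path)

print(run_is_valid(140, "OM4", 2))  # True  -- within the 150 m OM4 limit
print(run_is_valid(140, "OM4", 3))  # False -- more than two patch panels caps OM4 at 125 m
print(run_is_valid(140, "OM3", 2))  # False -- OM3 is limited to 100 m
```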

A patch panel should be located at the Main Distribution Area (MDA), with additional patch panels in each row. Structured cabling should be used between the patch panels. To maximize cost savings we recommend running large fiber bundles between the patch panels.

It is important to obtain all cabling components from the same manufacturer and to ensure that a lifetime or multi-year guarantee comes with the installation. Using the same manufacturer and having a guarantee in place is important because fiber plants have always run the risk of having polarity issues, and with the increased speeds of 40-Gigabit Ethernet and 100-Gigabit Ethernet, polarity issues become more pronounced and troublesome. Manufacturers specifically design their components to work together to avoid these issues.


Slide 24

Upgrade Cable Plant to 100-Gigabit Ethernet

For migrating to 100-Gigabit Ethernet from 40-Gigabit Ethernet, 12-strand fiber (12f) patch cords at system ends are removed and replaced with 24-strand fiber (24f) Y-cable.

(Slide diagram: ToR to patch panel with MTP patch cords and 12-strand to 24-strand Y-cables; 16 racks per row, 8 rows, 128 server racks; storage equipment and switches SW1 and SW2; patch panels in the switch area with MTP patch cords to the switches and 12-strand to 24-strand Y-cables; end of row patch panel with MTP adapters; trunk cable pre-terminated to 12-strand MTPs.)

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 24

Upgrade Cable Plant to 100-Gigabit Ethernet

The slide illustrates an example of using structured cabling. It addresses both the LAN and the SAN. It comprises a patch panel for the MDA, bulk fiber cable connecting to patch panels in each row, and patch cables to connect the active components. Cabling is not the place to cut corners. The customer should align themselves with a good cabling vendor that they can rely on for present and future needs.


Slide 25

Example: 128 Racks in the Pod


(Slide diagram: a server area of eight rows, 16 racks per row. Each row is roughly 5 meters (16 feet) deep, with about 20 meters (64 feet) from the rows to the chassis area. Sixteen 96-strand fiber trunk cables terminate at a patch panel at the ToR area; 96-strand trunk cables with 8 x 12 MTP termination run to chassis C1 and C2; 12-strand fiber cables run from the patch panel at the chassis to the chassis in the chassis area.)

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 25

Example: 128 Racks in the Pod

The slide shows an example of structured cabling for a pod that consists of 128 racks distributed over eight rows. This
example is important because it just happens to be the exact maximum size of QFabric.


Slide 26

Hot and Cold Aisle Design


Cool air is drawn in from a common cold aisle
Hot air is exhausted out a common hot aisle
Having as much separation and containment of hot and cold
air as possible is desirable
Helps avoid hot spots within the data center
(Slide diagram, overhead view: four rows of racks with alternating cold aisles, where cold air is delivered, and hot aisles, where hot air is removed.)

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 26

Hot and Cold Aisle Design

A critical element to minimizing power consumption in the data center is the concept of hot aisles and cold aisles. The
idea is to keep the cool air supplied to the equipment separate from the hot air exhausted from the equipment. Data
center devices are racked so that cool air is drawn into the equipment on a common cold aisle where the cool air is
delivered. The other side of the rows creates a common hot aisle into which the hot air from the equipment is
exhausted. The hot air can then be drawn into the air conditioning equipment, cooled, and redistributed into the cold
aisle.

It is desirable to have as much separation as possible between the cool air supplied to the devices and the hot air
exhausted from the devices. This separation makes the cooling process more efficient and provides more uniformity of
air temperature from the top to the bottom of the racks, preventing hot spots within the data center. Physical barriers
above and around racks can be used to help achieve the desired separation. The racks can also help achieve the
desired separation and air flow.


Slide 27

Commercial Cabinets Enable Hot Aisle-Cold Aisle Data Center Design

Many racks are designed to assist air flow from cold aisle to hot aisle
Raised floors with perforated tiles, ducts, and plenums can also be used to control air flow

(Slide diagram, top and front views of a rack: the front of the rack faces the cold aisle and the back faces the hot aisle. Cool air enters the rack from the front and is guided to the right side of the switch, while baffles redirect warm exhaust air from the left side of the switch to the rear.)
2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 27

Commercial Cabinets Enable Hot Aisle-Cold Aisle Data Center Design

Products are available from several rack manufacturers that provide support for implementing hot aisle-cold aisle designs in data centers. For example, cabinets are available that take cold air in at the front of the rack, move it through the chassis with specially designed baffles, and then expel hot air at the rear of the cabinet.

Cool air is often forced through perforated tiles in raised floors as a way of delivering cool air to the cold aisle. Plenums above the racks are then used to vent the hot air for re-cooling. More recently, delivering the cold air through ducts and plenums above the rack cabinets and exhausting the hot air through separate ductwork and plenums has been used to take advantage of the natural tendency of cold air to fall and warm air to rise.


Slide 28

Section Summary

In this section, we:


Determined proper placement of equipment within the data
center
Described various cabling used in the data center

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 28

In this section, we:


Determined proper placement of equipment within the data center; and
Described various cabling used in the data center.


Slide 29

Learning Activity 1: Question 1

Which two of the following would be the best choices


for switch placement if the customer preferred a
chassis based switch? (Choose two.)

A) EoR
B) ToR
C) MoR
D) BoR

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 29

Learning Activity 1: Question 1


Slide 29

Learning Activity 1: Question 2

What is the minimum cabling you should recommend


for 40-Gigabit Ethernet or 100-Gigabit Ethernet?

A) OS1
B) Rj-45
C) Cat 6a
D) OM3

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 29

Learning Activity 1: Question 2


Slide 30

Juniper Networks
Data Center Design
Best Practices

Evolving Optical Requirements

2016 Juniper Networks, Inc. All rights reserved. | www.juniper.net | Proprietary and Confidential

Evolving Optical Requirements

This section will take a look at the evolving requirements for optics in the data center, and will describe Juniper's optical product offerings.


Slide 31

Section Objectives

After successfully completing this section, you will be


able to:
Discuss the evolving optical requirements in the data center
Discuss 40-Gigabit Ethernet connectivity and use cases
Describe Juniper's optical product portfolio

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 31

After successfully completing this section, you will be able to:


Discuss the evolving optical requirements in the data center;
Discuss 40-Gigabit Ethernet connectivity and use cases; and
Describe Juniper's optical product portfolio.


Slide 32

Data Center Evolving Requirements: Optics

Layer | Prior | 2014 | 2015 | 2018
Core/Aggregation | Nx10GbE | 40GbE | 100GbE | Nx100GbE
Access | Nx10GbE | 40GbE | Nx40GbE | 100GbE
Servers | 1GbE | 10GbE | Nx10GbE | 40/100GbE

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 32

Data Center Evolving Requirements

Let's drill down a little deeper into a concern that might sometimes be overlooked, and that's optics. On this slide, we are looking at the data center and our perception of its evolving requirements: the core/aggregation layer, the access layer, and the compute layer (servers). At the core/aggregation layer we have seen an evolution from 10-Gigabit Ethernet to 40-Gigabit Ethernet, and that is progressing into 100-Gigabit Ethernet. At the compute layer, we have seen things grow exponentially from 1-Gigabit to 10-Gigabit interconnect from the server side to the access layer, or ToR switch. Already we are seeing bonded 10-Gigabit Ethernet for the purpose of achieving greater bandwidth and bigger trunk size. Sooner, rather than later, we will see 40-Gigabit Ethernet.

But let's look more specifically at the access layer. What we see here is a lot of movement, and it seems to be where a lot of the significant development is happening: where the actual fabric touches the compute node, and more so where the leaf and spine network touches the compute node or the compute network. At the access layer you see bonded 10-Gigabit Ethernet, currently moving into 40-Gigabit Ethernet today, and in the not-so-distant future we are looking at bonded 40-Gigabit Ethernet, just as we did with 10-Gigabit Ethernet in the recent past. Looking ahead, we will see 100-Gigabit Ethernet in the data center. Again, looking at the access layer, there is quite a bit of movement. This is where we will focus for the remainder of this module.


Slide 33

Connectivity Using 40 Gbps

(Slide diagram: two designs, each with servers attached to a ToR switch at 10 Gbps and the ToR uplinked to the core.)

Nx10Gbps uplinks: 16x10G uplinks (3:1 OS); hashing inefficiencies; max 10G server throughput
40Gbps uplinks: fewer uplinks (3:1 OS); optimized hashing; 40G throughput

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 33

Connectivity Using 40 Gbps

Building on what we described on the previous slide, on this slide we are showing the use of N by 10-Gigabit
bonding to achieve greater bandwidth coming out of a ToR switch. This ToR switch in turn services compute nodes or
servers, as shown in the diagram on the left of the slide. Each server is running multiple virtual machines (VMs).

In this type of solution, there is a certain amount of oversubscription that is often acceptable for an access layer tier.
However, what needs to happen in a multi-link situation such as this, is that hashing has to occur for load balancing
and equalization of traffic across all of the links being used. As we add more links, not only do we add more cabling
and physical connections, but also we do not truly achieve the bandwidth that we had hoped for by bonding together
multiple 10-Gigabit links.

Comparing and contrasting this type of solution with where everything is going today, the oversubscription, as shown by the diagram on the right, does not change. However, by virtue of reducing the number of links, 2 as opposed to 4 in our examples on the slide, using 40-Gigabit uplinks instead of 10-Gigabit uplinks, we have optimized hashing and achieved a closer realization of true 40-Gigabit throughput coming from the ToR into the fabric, or into the core of the network. Fewer links are preferred over more links.
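Oversubscription at the access layer is simply the ratio of server-facing bandwidth to uplink bandwidth. The sketch below illustrates that ratio; the 48-server rack is an assumed example, while the 16 x 10-Gigabit and 40-Gigabit uplink cases mirror the 3:1 figure on the slide.

```python
def oversubscription(downlinks, downlink_gbps, uplinks, uplink_gbps):
    """Ratio of server-facing bandwidth to uplink bandwidth on a ToR switch."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

servers, server_speed = 48, 10  # assumed example: 48 servers attached at 10 Gbps

print(oversubscription(servers, server_speed, 16, 10))  # 3.0 -> 3:1 with 16 x 10G uplinks
print(oversubscription(servers, server_speed, 4, 40))   # 3.0 -> same 3:1 with only 4 x 40G uplinks

# The ratio is unchanged, but the 40-Gigabit design uses far fewer physical links,
# so hashing traffic across the smaller bundle is more efficient.
```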


Slide 34

40G Use Cases: Inter-Rack Connectivity

Inter-rack requires:
Low OS between racks
Reach up to <400m
High rack to rack traffic
Existing MMF
Juniper Offers Multiple Options
Model | Reach OM3/OM4 | Cable | Connector | IEEE Standard | 100G Ready
40G-SR4 | 100/150m | 12 fiber | MTP | Yes | Yes
40G-eSR4 | 300/400m | 12 fiber | MTP | Yes | Yes
40G-LX4 | 100/150m | 2 fiber | LC | No | No

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 34

40G Use Cases: Inter-Rack Connectivity

In regard to data center use cases for 40-Gigabit Ethernet, there are various options that Juniper does offer. When looking at inter-rack requirements, a lot of the requirements revolve around low oversubscription, moderate to short reach, and lots of rack-to-rack traffic. More importantly, multimode fiber (MMF) might have been utilized for the purpose of 10-Gigabit interconnect.

The table at the bottom of the slide shows three 40-Gigabit fiber options offered by Juniper Networks. With regard to the 40G-SR4 and 40G-eSR4 (that is, extended SR4) optics, as you can see, they have moderate to long reach in the data center, allowing for flexibility and choice. They use a specialized connector, but they are IEEE standardized, and most importantly, they are 100-Gigabit ready. These are offerings that Juniper has today.

However, at the bottom of the table, the newest introduction to these options is the 40G-LX4 optic. The 40G-LX4 offers short reach within the data center, but as you can see, with regard to cabling requirements, the 40G-LX4 is down to two fiber lines as opposed to the 12 fiber lines needed for the other two options. This is important because what we are effectively saying here is that it runs over MMF fiber, and by virtue of introducing these new connectors, the existing fiber plant can be retained. The ramifications are huge: there are cost savings for the customers, and there is the fact that you have interchangeability that allows for seamless upgrade to 40-Gigabit Ethernet by utilizing a simple modular connector.
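The cabling impact of a 2-fiber LX4 link versus a 12-fiber SR4 or eSR4 link is easy to see with a strand count. This is a small illustrative sketch using the fiber counts from the table above; the number of uplinks is an assumption.

```python
# Fiber strands required per 40GbE link, taken from the table above.
FIBERS_PER_LINK = {"40G-SR4": 12, "40G-eSR4": 12, "40G-LX4": 2}

uplinks = 64  # assumed number of 40GbE uplinks in a pod

for optic, fibers in FIBERS_PER_LINK.items():
    print(f"{optic}: {uplinks * fibers} fiber strands for {uplinks} uplinks")

# SR4/eSR4 would need 768 strands of new parallel fiber, while LX4 reuses
# 128 strands of the existing duplex multimode plant.
```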

The 40G-LX4 option is not currently IEEE standardized, or 100-Gigabit Ethernet ready. However, in the upcoming
slides we will discuss standardization and market adoption for this specific optic.


Slide 35

40G-LX4 Introduction

Accelerate Time to Value with 40-Gigabit Ethernet

Seamless Upgrade From 10GbE to 40GbE

JNP-QSFP-40G-LX4 Benefits:
No fiber upgrade for 40G
Use existing MMF
Low infrastructure cost

(Slide diagram: the LX4 optic presents duplex LC connectivity on both ends, carrying duplex 10 Gbps lanes over 4 different wavelengths.)

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 35

40G-LX4 Introduction

When you look at the new 40G-LX4 from Juniper, it is important to note specifically how it has been implemented. This
is a joint effort with a company called Finisar.

The new Juniper QSFP LX4 optic uses familiar duplex LC connectivity, thereby allowing you to retrofit these new 40-
Gigabit connectors to existing fiber, and unlike other solutions, we are actually using four different wavelengths and
10-Gigabit lanes to achieve 40-Gigabit throughput. Again, this is done over 2 fiber lines (multimode).


Slide 36

40G Data Center: 40G Uplinks

(Slide diagram: ToR uplinks with 40G. LX4 optics are inserted at the access switches and at the aggregation switches, with 6x40G uplinks running over the existing fiber trunks between them.)

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 36

40G Data Center: 40G Uplinks

In regard to how the LX4 optics would be implemented, it is as simple as retrofitting the LX4 optical modules where
10-Gigabit modules or transceivers are used. In the example shown on this slide, we would be doing so at the access
layer (on the fabric-facing side, in this example) and at the aggregation tier (or fabric interconnect). Again, it is as
simple as retrofitting the connectors on either side, leaving the cable in between.


Slide 37

Solution Comparison
BiDi versus LX4, SR4

Attribute | Cisco BiDi | Juniper LX4 | Juniper SR4
Rate | 40 Gbps | 40 Gbps | 40 Gbps
Connector Type | LC | LC | MPO
Fiber Count | 2 | 2 | 12
Fiber Type | MMF | MMF | MMF
Reach | 125m w/ OM4 | 150m w/ OM4 | 150m w/ OM4
Watts | 3.5 | 3.5 | 1.5
Wavelength (nm) | 832-918 | 1310 | 850

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 37

Solution Comparison

A discussion of the Juniper LX4 optics would not be complete without comparing it to one of the more prominent 10- to 40-Gigabit solutions out there today, namely the Cisco bi-directional optic, also known as BiDi. This optic has been available for a short while now, and there are some key points of comparison that we would like to point out.

The Juniper LX4 has slightly longer reach than the Cisco BiDi, and while the Cisco BiDi solution was intended to
service the same use case (10- to 40-Gigabit migration over existing fiber) the BiDi only works with Cisco specific
solutions and equipment. Therefore, it is more of a proprietary solution that does not service the greater needs of the
industry. Comparing the Cisco BiDi to the Juniper LX4, Juniper is proud of the fact that they are the first company to
work with Finisar to develop an open standard, 40-Gigabit, migrate-able optic. Looking at the anticipation around the
Juniper LX4, it is going to be adopted as an industry-wide, open standards-based optic. The LX4 is not proprietary in
nature, and while it still needs to be ratified, by working in conjunction with Finisar, we avoid vendor lock-in and allow
for third-party integration.

Lastly, not to leave out the SR4, one of the biggest features of the SR4 is that it is 100-Gigabit ready. For those
customers that are forward thinking and have the awareness today to understand that if they dictate their cabling plant
in a certain way, they will have no problem reaching that level of bandwidth as the actual switch interfaces and optics
become less expensive. It is also worth noting that the SR4 consumes less power than the other optics. In regard to
reach, it does come in multiple varieties: you can go from moderate to long reach with the SR4. Therefore, the
Juniper SR4 is another valid offering to keep in mind for the data center.


Slide 38

Juniper Networks 40-Gigabit Deployment Options

40G Optics Portfolio

Model | Supports breakout cables | Length | Internal lanes | No. of fibers | Optic mode | Connector | Use case
QFX-QSFP-40G-SR4 | Yes | 150m/OM4 | Parallel | 8 | MMF | MTP | DC spine and leaf
QFX-QSFP-40G-ESR4 | Yes | 400m/OM4 | Parallel | 8 | MMF | MTP | DC spine and leaf
JNP-QSFP-40G-IR4 | No | 2km | Serial | 2 | SMF | LC | DC spine and leaf
JNP-QSFP-40G-LR4 | No | 10km | Serial | 2 | SMF | LC | DC spine and leaf
JNP-QSFP-40G-ER4 | No | 40km | Serial | 2 | SMF | LC | DC edge
JNP-QSFP-40G-LX4 | No | 150m/2km | Serial | 2 | MMF/SMF | LC | DC spine and leaf
JNP-QSFP-4x10GE-IR (4x10GbE) | Yes | 2km | Parallel | 8 | SMF | MTP | DC spine and leaf
JNP-QSFP-4x10GE-LR (4x10GbE) | Yes | 10km | Parallel | 8 | SMF | MTP | DC spine and leaf
QFX-QSFP-DAC-1-10M | No | up to 10m | Parallel | NA | Copper | NA | Server access
QFX-QSFP-DACBO 1-10M | Yes | up to 10m | Parallel | NA | Copper | NA | Server access
JNP-QSFP-AOC-1-30M | No | up to 30m | Parallel | NA | Fiber | NA | Server access
JNP-QSFP-AOCBO-1-10M | Yes | up to 10m | Parallel | NA | Fiber | NA | Server access

There are many inherent benefits to buying Juniper optics and transceivers as opposed to trying third-party optics:
Co-engineering support with a vendor and optics manufacturer
With Juniper branded optics, you have end-to-end support

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 38

Juniper Networks 40-Gigabit Deployment Options

There are many reasons why the optics discussion is critical to the data center. Essentially, this is all money that
would be left on the table if opportunities presented themselves in a data center, or if you have any other type of
opportunities where you are introducing very dense switching. To not address the optical opportunity and realize the
lucrative aspects of selling optics, and the actual peace-of-mind behind selling Juniper branded optics, is leaving
money on the table. Let's first wrap up with what these optics actually entail.

We have already discussed the SR4s and the LX4, we also have an intermediate reach 40-Gigabit optic (up to 2
kilometers), as well as a long reach 40-Gigabit optic (up to 10 kilometers), and an extended reach 40-Gigabit optic (up
to 40 kilometers). The reason behind this, and the reason to note the IR4 specifically, is that again, this is a good opportunity for your customers to reduce their expense by virtue of the fact that they can justify a long reach optic in the data center. An LR4 is more suited for inter data center connectivity, but also for requirements greater than the 400 meters that an extended SR4 optic would offer. Again, offering flexibility is one thing, but this might save tremendous CapEx when deploying multiple switches with thousands of ports.
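Reach is often the deciding factor among these 40-Gigabit optics. The sketch below is a simple illustration that picks the shortest-reach optic from the portfolio table that still covers a required link distance; treat the reach values as the nominal figures listed above, not guaranteed engineering limits.

```python
# Nominal reach per 40GbE optic, in meters, taken from the portfolio table above.
REACH_M = {
    "QFX-QSFP-40G-SR4": 150,
    "QFX-QSFP-40G-ESR4": 400,
    "JNP-QSFP-40G-IR4": 2_000,
    "JNP-QSFP-40G-LR4": 10_000,
    "JNP-QSFP-40G-ER4": 40_000,
}

def pick_optic(distance_m):
    """Return the shortest-reach optic that still covers the link distance."""
    candidates = [(reach, name) for name, reach in REACH_M.items() if reach >= distance_m]
    if not candidates:
        raise ValueError(f"no 40GbE optic in this table reaches {distance_m} m")
    return min(candidates)[1]

print(pick_optic(120))     # QFX-QSFP-40G-SR4
print(pick_optic(900))     # JNP-QSFP-40G-IR4
print(pick_optic(25_000))  # JNP-QSFP-40G-ER4
```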

Rounding out the new optics family from Juniper are the multiple breakout optic options shown in the table on the
slide. By breakout, we mean that these are 40-Gigabit to 10-Gigabit breakout solutions which allow our customers, for
example, to take the QFX5100 switches with 40-Gigabit Ethernet interfaces, and still utilize them for direct compute or
server attachment, or simply to break them out into 10-Gigabit cables or links to attach to other network elements.

We should point out that, with respect to the LX4 specifically, and the benefits it provides our customers when moving
to 40-Gigabit, there are certain other inherent aspects in regard to using Juniper optics. There are many inherent
benefits to buying Juniper optics and transceivers as opposed to trying third-party optics. Foremost is the fact that
Juniper, having co-engineering support with a vendor and optics manufacturer such as Finisar, can often avoid
systemic problems or quality assurance issues with the optics and stop them before they get out into the field. In
contrast, if you are dealing with an optics manufacturer on your own, you have less of a relationship and less of a
chance of avoiding such issues.

Again, with Juniper branded optics, you have end-to-end support, whereas with third-party optics, if there is deemed to
be a problem at that level, it will not be supported. As you can see, this is money on the table that you do not want to
leave behind. Some of our competition is instituting very interesting strategies with regard to optics in general. You
can read about certain competition that is actually instituting licensing on optics which throttles the optics bandwidth
down after a certain period of time if those licenses are not purchased. Imagine how disruptive and how broken your

network could become if those licenses were not procured in time. While you did buy a full-rate optic interface, all of a
sudden, after a certain time, it drops down to a lesser rate. These are the strategies that are being used by our
competition, and we note these because, when it comes to optics, you should always consider the fact that you have a
true vendor like Juniper and solution support behind every optic we sell.


Slide 39

Juniper Networks 100-Gigabit Deployment Options

100G Optics Portfolio

Model | Supports breakout cables | Length | Internal speeds | Internal lanes | No. of fibers | Optic mode | Connector | Use case
JNP-QSFP-100G-LR4 | No | 10km | 4x25GbE | Duplex | 2 | SMF | LC | DC spine
JNP-QSFP-100G-CWDM4 | No | 2km | 4x25GbE | Duplex | 2 | SMF | LC | DC spine
JNP-QSFP-100G-PSM4 | Yes | 2km | 4x25GbE | Parallel | 8 | SMF | MTP | DC spine
JNP-QSFP-100G-SR4 | Yes | 100m/OM4 | 4x25GbE | Parallel | 8 | MMF | MTP | DC spine
JNP-QSFP-100G-ER4 (20km) | No | 20km | 4x25GbE | Duplex | 2 | SMF | LC | DC spine
JNP-QSFP-100G-ER4 (40km) | No | 40km | 4x25GbE | Duplex | 2 | SMF | LC | DC spine
JNP-QSFP-100G-SW4 | No | 100m/OM4 | 4x25GbE | Duplex | 2 | MMF | LC | DC spine
JNP-100G-AOC-1M-30M | No | 30m | 4x25GbE | NA | NA | NA | NA | DC spine
JNP-DAC-4X25G-1M - 3M | Yes | 3m | 4x25GbE | NA | NA | NA | NA | DC leaf
JNP-DAC-2X50G-1M - 3M | Yes | 3m | 4x25GbE | NA | NA | NA | NA | DC leaf
JNP-100G DAC-1M - 3M | Yes | 3m | 4x25GbE | NA | NA | NA | NA | DC leaf

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 39

Juniper Networks 100-Gigabit Deployment Options

Juniper Networks supports a variety of 100-Gigabit Ethernet interfaces. The use of 100-Gigabit Ethernet is expected
to increase dramatically in the next few years. The growth of 100-Gigabit Ethernet in the data center can be traced
back to cloud applications, which require higher-bandwidth interfaces. Not only will 100-Gigabit Ethernet be a leading
solution for data center spine applications in cloud-scale deployments, it will also grow in popularity for enterprise
deployments as it is highly cost-efficient and its bigger pipes provide better application performance.

Juniper offers a truly seamless path for data center upgrades that enable customers to choose 10-Gigabit Ethernet or
40-Gigabit Ethernet in the spine today and then migrate to 100-Gigabit Ethernet as bandwidth needs grow. This is
accomplished using Juniper Networks QFX10000 spine switches, which offer up to thirty QSFP28 100-Gigabit
Ethernet ports in a compact, single rack unit form factor.

Juniper offers exceptionally cost-optimized 100-Gigabit Ethernet optics and cables that complement the spine and leaf
architectures with open and flexible connectivity options. These options are backward compatible with 40-Gigabit
Ethernet speeds, which establishes a path to 100-Gigabit Ethernet deployments. Juniper solutions offer true
investment protection and the ability to easily move to higher speeds, enabling 100-Gigabit Ethernet deployments in
the most seamless fashion.


Slide 40

Section Summary

In this section, we:


Discussed the evolving optical requirements in the data
center
Discussed 40-Gigabit Ethernet connectivity and use cases
Described Juniper's optical product portfolio

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 40

In this section, we:


Discussed the evolving optical requirements in the data center;
Discussed 40-Gigabit Ethernet connectivity and use cases; and
Described Juniper's optical product portfolio.


Slide 41

Learning Activity 2: Question 1

Which three of the following are true of the Juniper


40G-LX4 optics? (Choose three.)

A. Uses standard duplex LC connectors


B. No need to change the fiber lines
C. Uses fewer fiber cables than other 40G optics
D. 100-Gigabit certified and ready

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 41

Learning Activity 2: Question 1


Slide 41

Learning Activity 2: Question 2

Which three of the following are true of Juniper's SR4


optics? (Choose three.)

A. Uses 2 fiber lines


B. Can accommodate moderate to long reach
C. 100-Gigabit ready
D. Consumes less power than other optics

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 41

Learning Activity 2: Question 2


Slide 42

Course Summary

In this course, we:


Determined proper placement of equipment within the
data center
Described various cabling used in the data center
Discussed the evolving requirements for optics in the data
center

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 42

In this course, we:


Determined proper placement of equipment within the data center;
Described various cabling used in the data center; and
Discussed the evolving requirements for optics in the data center.


Slide 43

Additional Resources

Education Services training classes


http://www.juniper.net/training/technical_education/
Juniper Networks Certification Program Web site
www.juniper.net/certification
Juniper Networks documentation and white papers
www.juniper.net/techpubs
To submit errata or for general questions
elearning@juniper.net

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 43

For additional resources or to contact the Juniper Networks eLearning team, click the links on the screen.


Slide 44

Evaluation and Survey

You have reached the end of this Juniper Networks


eLearning module
You should now return to your Juniper Learning
Center to take the assessment and the student
survey
After successfully completing the assessment, you will earn
credits that will be recognized through certificates and non-
monetary rewards
The survey will allow you to give feedback on the quality and
usefulness of the course

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 44

You have reached the end of this Juniper Networks eLearning module. You should now return to your Juniper
Learning Center to take the assessment and the student survey. After successfully completing the assessment, you
will earn credits that will be recognized through certificates and non-monetary rewards. The survey will allow you to
give feedback on the quality and usefulness of the course.


Slide 45

Copyright 2016 Juniper Networks, Inc.

All rights reserved. JUNIPER NETWORKS, the Juniper Networks logo,


JUNOS, QFABRIC, NETSCREEN, and SCREENOS are registered
trademarks of Juniper Networks, Inc. in the United States and other
countries. All other trademarks, service marks, registered
trademarks, or registered service marks are the property of their
respective owners.

2016 Juniper Networks, Inc. All rights reserved. CONFIDENTIAL SOT-DCD01G-ML5 www.juniper.net | 45

Copyright 2016 Juniper Networks, Inc.

All rights reserved. JUNIPER NETWORKS, the Juniper Networks logo, JUNOS, QFABRIC, NETSCREEN, and
SCREENOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other
trademarks, service marks, registered trademarks, or registered service marks are the property of their respective
owners. Juniper Networks reserves the right to change, modify, transfer or otherwise revise this publication without
notice.



Education Services Courseware

Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000. Fax: 408.745.2100. www.juniper.net

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King's Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636. Fax: 852.2574.7803

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600. EMEA Sales: 00800.4586.4737. Fax: 35.31.8903.601

Copyright 2010 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos, NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
