Juniper Networks Design—WAN
Student Guide
Volume 3 of 3
Revision 16.a
408-745-2000
www.juniper.net
marks are the property of their respective owners.
Juniper Networks Design—WAN Student Guide, Revision 16.a
Copyright © 2016 Juniper Networks, Inc. All rights reserved.
Printed in USA.
Revision History:
Revision 16.a—May 2016.
The information in this document is current as of the date listed above.
The information in this document has been carefully verified and is believed to be accurate. Juniper Networks assumes no responsibility for any inaccuracies that may appear in this document. In no event will Juniper Networks be liable for direct, indirect, special, exemplary, incidental, or consequential damages resulting from any defect or omission in this document, even if advised of the possibility of such damages.
Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents
Chapter 10: WAN Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1
Best Practices and Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
OoB Management Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-13
Junos Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-21
Juniper WAN Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-30
Lab: WAN Management Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-46
Chapter 11: WAN Virtualization and SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
SDN Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-3
NorthStar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
Contrail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-42
SD-WAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-58
Lab: SDN Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-64
iv • Contents www.juniper.net
Course Overview
This five-day course is designed to cover best practices, theory, and design principles for wide area network (WAN) design, including WAN interconnects, security considerations, virtualization, and management and operations. This course covers both service provider and enterprise WAN design.
Intended Audience
This course is targeted specifically at those who have a solid understanding of network operation and configuration and are looking to enhance their skill sets by learning the principles of WAN design.
Course Level
JND-WAN is an intermediate-level course.
Prerequisites
The prerequisites for this course are as follows:
• Knowledge of routing and switching architectures and protocols.
• Knowledge of Juniper Networks products and solutions.
• Understanding of infrastructure security principles.
• Completion of the Juniper Networks Design Fundamentals (JNDF) course.
Objectives
After successfully completing this course, you should be able to:
• Describe high level concepts about the different WAN architectures.
• Identify key features used to interconnect WANs.
• Describe key high level considerations about securing and monitoring a WAN deployment.
• Describe how core WAN technologies are used to solve specific problems facing network designers.
• Discuss core routing requirements.
• Explain how to design a high performance MPLS WAN core.
• Define CoS requirements for the WAN core.
• Explain how enterprise WAN technologies are used to solve specific problems facing network designers.
• Outline various solutions regarding campus and branch WANs.
• Explain how data centers are interconnected through WANs.
• Describe security concepts regarding WANs.
• Explain the differences between LAN security concepts and WAN security concepts.
• Explain VPN-related concepts regarding WANs.
• Describe methods to manage WANs.
• Discuss key concepts related to WAN management.
• Explain how virtualization and SDN can be leveraged in the WAN.
• Describe various SDN products and how they are used in the WAN.
• Describe MX, SRX, T, PTX, ACX, QFX, EX, and NFX Series devices and the basics of how they relate to WAN solutions.
Day 1
Chapter 1: Course Introduction
Chapter 2: Overview of WAN Design
Chapter 3: WAN Connectivity
Chapter 4: Network Availability and Traffic Prioritization
Lab: Network Availability and CoS Design
Day 2
Chapter 5: Service Provider Core WAN
Lab: WAN Core Design
Day 3
Chapter 6: Service Provider Edge WAN
Lab: Service Provider Edge—VPN Design
Lab: Service Provider Edge—Services Design
Day 4
Chapter 7: Enterprise WAN
Lab: Enterprise WAN Design
Chapter 8: Data Center WAN
Lab: Data Center WAN Design
Day 5
Chapter 10: WAN Management
Lab: WAN Management Design
Chapter 11: WAN Virtualization and SDN
CLI and GUI Text
Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from standard text according to the following conventions.
• Franklin Gothic: Normal text. Example: most of what you read in the Lab Guide and Student Guide.
• Courier New: Console text (screen captures and noncommand-related syntax) and GUI text elements (menu names and text field entries). Examples: commit complete; Exiting configuration mode; Select File > Open, and then click Configuration.conf in the Filename text box.
• CLI Input: Text that you must enter. Example: lab@San_Jose> show route
• GUI Input: Text that you must enter in a GUI. Example: Select File > Save, and type config.ini in the Filename field.
Finally, this course distinguishes between regular text and syntax variables, and it also distinguishes between syntax variables where the value is already assigned (defined variables) and syntax variables where you must assign the value (undefined variables). Note that these styles can be combined with the input style as well.
• CLI Undefined: Text where the variable's value is at the user's discretion or might differ from the value shown according to the lab topology. Example: Type set policy policy-name.
Education Services Offerings
You can obtain information on the latest Education Services offerings, course dates, and class locations from the World Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.
About This Publication
The Juniper Networks Design—WAN Student Guide is written and maintained by the Juniper Networks Education
Services development team. Please send questions and suggestions for improvement to training@juniper.net.
Technical Publications
You can print technical manuals and release notes directly from the Internet in a variety of formats:
• Go to http://www.juniper.net/techpubs/.
• Locate the specific software or hardware release and title you need, and choose the format in which you
want to view or print the document.
Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.
Juniper Networks Support
For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC
(within the United States) or 408-745-2121 (outside the United States).
Chapter 10: WAN Management
We Will Discuss:
• Methods to manage WANs; and
• Key concepts related to WAN management.
at the completion of a virus scan, or setting a reminder in our calendars qualifies as automation. Look for areas of your network that can benefit from the consistency automation provides. The Junos operating system (OS) provides an extensive set of automation APIs that allow many different types of on-box and off-box automation options. New WAN services can be dynamically created based on network conditions. Monitored information, such as device performance counters, can automatically generate support notifications and begin the remediation process.
Why Virtualize?
Virtualization allows WANs to be more dynamic and flexible. WAN providers that can quickly bring new services to market gain a durable competitive advantage. Virtualizing WAN routing permits the orchestration of the compute, network, and storage resources necessary to spin up new services as customer demand increases. Virtualized service offerings benefit WAN providers by reducing the number of managed devices and lowering costs.
shorter decision timeframes in rolling out new value-added services. All of these benefits then result in reduced OpEx and CapEx.
desired. Another key use for analytics is in troubleshooting and diagnosing network issues. Traditional tools such as debug, show commands, sFlow, and SNMP are limited in these new deployments. Analytics also plays a key role in telemetry by providing online diagnostics, optical measurements, and memory statistics, with dashboards and alarms. It provides information on application health with SLA monitoring.
dashboard is often displayed on large monitors distributed throughout the network operations center (NOC). Information such as device and link status, traffic load, and alarm conditions is immediately visible to all administrators. Juniper provides management applications for several WAN products, including Junos Space, Contrail, and NorthStar. These applications can be extended, customized, or integrated into custom management dashboards.
and so forth) is necessary but insufficient. To obtain an accurate picture of normalcy, you must have and maintain the following:
Only then can you construct a set of baselines for network behavior, as well as criteria that can be applied to establish threshold conditions. This context allows you to automate the assessment of whether a condition is normal or undesirable. For example, at times more than 80% interface utilization might be considered normal, and at other times it might not. Conversely, low utilization at an unexpected time could be a leading indicator of loss of network access upstream. Performance conditions like these are defined by performance history and context such as time of day, direction of flows, customer type, and so forth.
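The time-of-day dependence described above can be captured in a simple baseline structure. A minimal sketch follows; the hour windows and the 80% figure are illustrative assumptions, not values from a real deployment:

```python
# Sketch: assess interface utilization against a time-of-day baseline.
# The baseline table below is a hypothetical example, not real network data.
from datetime import time

# (start, end, max_normal_utilization) windows for one interface
BASELINE = [
    (time(8, 0), time(18, 0), 0.80),   # business hours: up to 80% is normal
    (time(18, 0), time(23, 59), 0.40), # evening: lower expected load
    (time(0, 0), time(8, 0), 0.20),    # overnight: near-idle expected
]

def is_anomalous(utilization, at):
    """Return True if utilization falls outside the window's expected range."""
    for start, end, ceiling in BASELINE:
        if start <= at <= end:
            # Unexpectedly *low* utilization can also be a leading indicator
            # (e.g., loss of upstream access), so flag near-zero load too.
            return utilization > ceiling or utilization < 0.01
    return True  # no window matched: treat as unknown/anomalous

print(is_anomalous(0.75, time(10, 30)))  # within the business-hours norm
print(is_anomalous(0.75, time(3, 0)))    # the same load overnight is anomalous
```

The same reading is normal at 10:30 and anomalous at 03:00, which is exactly the history-dependent judgment the text describes.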
devices on a regular basis. Historical data collected by these applications can provide immediately useful time-series visualizations of gross network traffic behavior, as well as individual device performance.
The Junos Telemetry Interface enables export of various low-level statistics from Juniper platforms at high scale. The Junos Telemetry Interface scales much better than SNMP because it uses a push model instead of a pull model, so it makes sense to use it for exporting data that is already available through SNMP but at higher scale; examples include interface statistics and firewall counters. The Junos Telemetry Interface infrastructure can also be used to export other low-level information that is not available through SNMP and would be hard to implement through SNMP, such as queue depth monitoring or LSP event statistics. These tools (and similar tools) can also be extended to incorporate threshold detection once baseline norms have been established.
Traffic flows can be monitored by enabling flow reporting on the ingress and egress routers or switches. Different vendors might support slightly differing implementations of flow reporting (such as NetFlow, cflowd, sFlow, and IPFIX). Choose the one that fits your needs and is supported by your network hardware. (A detailed analysis of flow monitoring tools is beyond the scope of this course.)
Traffic flow data collection and reporting allows you to see the direction and composition of IP traffic through your network. This type of data is much more useful than gross per-interface packet-rate or byte-rate data because it can also report usage broken down by source and destination ports as well as by IP address. Such a breakdown is useful because even "normal-looking" network utilization or packet rates can include unintended or anomalous traffic at the application level (for example, slow network scans, undesired services, and so forth). Deep packet inspection (DPI) tools can provide insight into application-level performance. This level of monitoring enables you to detect when otherwise normal applications on your network might be behaving in inappropriate ways. An example of this type of behavior would be undesired SSH virtual private networks (VPNs) masquerading as Hypertext Transfer Protocol over Secure Sockets Layer (HTTPS) sessions.
Enable SNMP agents on your switches and routers to provide continuous and historical instrumentation of traffic loads through the devices. You can also use SNMP instrumentation to monitor the internal "device health" of your routers, switches, and servers (such as CPU, memory, and buffer utilization, as well as environmental factors like temperature).
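As an offline illustration of the per-port, per-address breakdown described above, the sketch below aggregates a handful of flow records. The record fields loosely mimic common NetFlow/IPFIX keys, but the data is invented for this example:

```python
# Sketch: break flow records down by destination port to spot traffic
# that gross byte counts would hide. Records here are invented sample data.
from collections import defaultdict

flows = [
    {"src": "10.0.0.5", "dst": "192.0.2.10", "dport": 443, "bytes": 900_000},
    {"src": "10.0.0.6", "dst": "192.0.2.10", "dport": 443, "bytes": 850_000},
    {"src": "10.0.0.7", "dst": "192.0.2.99", "dport": 22,  "bytes": 700_000},
]

def bytes_by_dport(records):
    """Sum bytes per destination port across all flow records."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["dport"]] += rec["bytes"]
    return dict(totals)

print(bytes_by_dport(flows))  # a large share on port 22 may merit a closer look
```

The same three flows look unremarkable as a single utilization number; grouped by port, the SSH-bound share stands out.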
Standardize Configurations
Standardization, while it might seem boring and obvious, is crucial to almost every aspect of your network, especially network management. For any process to be scalable, it must be replicable, and standardization enables this replicability.
Logical organization and use of standard conventions for naming, placement, and terminology make it easier for support staff to manage the network. In the same way, consistency and standardization make it possible to automate management functions.
Standardizing the physical layout, including rack layout and device slot and module population, expedites deployment as well as diagnostics.
Standardizing software and firmware versions (that is, keeping like devices running identical software and firmware) expedites troubleshooting, simplifies sparing and replacement, and makes determining software versus hardware failures much more obvious.
pattern matching logic that might be required by network management applications, and expedites physically locating devices (for example, site and room ID, row, rack number, and so on). However, brevity is good, so resist encoding too many
Description Fields
Standardized use of the description fields available in device management agents makes it easier (or even possible) to automate the population of those fields as part of configuration creation. The ability to easily parse and extract data from device description fields enables software as well as humans to quickly derive additional context from device configurations. Using standardized description fields in device configurations can also expedite troubleshooting and diagnostics. Just as with hostnames, you should define a convention and follow it strictly. Finally, items for consideration include customer IDs, circuit IDs, site code, row and rack ID, "under test", and so on.
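One payoff of a strict convention is that description fields become machine-parseable. A sketch follows, assuming a hypothetical `CUST:CKT:SITE:RACK` layout; the field format is invented for illustration and is not a Juniper convention:

```python
# Sketch: parse a standardized interface description into its components.
# The "CUST:CKT:SITE:RACK" layout is a hypothetical convention.
import re

DESC_RE = re.compile(
    r"^(?P<customer>[A-Z0-9]+):(?P<circuit>[A-Z0-9-]+):"
    r"(?P<site>[A-Z]{3}\d{2}):(?P<rack>R\d{2})$"
)

def parse_description(desc):
    """Return a dict of fields, or None if the description is nonconforming."""
    m = DESC_RE.match(desc)
    return m.groupdict() if m else None

print(parse_description("ACME:CKT-00042:SJC01:R07")["site"])  # SJC01
print(parse_description("free-form text"))  # None: flags a convention breach
```

A `None` result doubles as an audit signal: any interface whose description does not parse is, by definition, out of convention.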
Cleanly separating management and production networks as close to Layer 1 as possible removes complexities around bandwidth management and accounting, making network management processes simpler and more scalable. Production network loads or failures then do not impact the ability to monitor and control the network infrastructure; access to device and network performance and fault information is most crucial when the network is in a failure mode. Note that separation at the physical interface level is preferable to "logical" separation.
Separating production and management networks also mitigates bandwidth contention (that is, performance) problems
ports are connected to console terminal server devices that provide an IP address for administrator SSH or TELNET connections. Some device types provide an OoB network interface for management network connectivity. This interface is hardware-isolated from other network interfaces on the platform and generally provides only management services such as SSH, TELNET, HTTP, or SNMP. To provide remote dial-in management access, some devices include an AUX port that supports modem connectivity.
Administrator workstations can be dynamically assigned to an administrative VLAN that has access to the management
network. Remote support can be provided through an authenticated and encrypted VPN connection.
Levels of Access
WAN designers determine the type of management access required by network administrators. Most network devices provide a serial console port and CLI access. Some hardware can be equipped with management modules that provide additional remote management functionality. Devices can be powered on or off, re-imaged with new software, or have a configuration restored. Lights-out capabilities permit performing administrative functions even when a device is powered off.
OoB Security
The OoB management network is another connection point to the WAN, and several documented network attacks have taken advantage of trusted management network connections. Many network devices permit unauthenticated access through the console port, so the console terminal server should provide consistent authentication capabilities. Centralize as much authentication, access control, and accounting as possible. Rely on scalable centralized services such as RADIUS or TACACS to
Passing this information across an already degraded production network can make existing problems worse. Keep as much
management traffic as possible confined to the management network.
network. Procedures for access to the management network must be clearly defined and enforced. Many network components have predefined, well-known default administrator accounts. Rename or disable these accounts and review default settings after any software upgrades. To promote accountability, each administrator with access to the management network should be authenticated using an individual set of credentials.
Junos Space
The slide highlights the topic we discuss next.
platform for deep element management and fault, configuration, accounting, performance, and security (FCAPS) management; plug-and-play management applications for reducing costs and provisioning new services quickly; and a programmable SDK for network customization. With each of these components working cohesively, Junos Space offers a unified network management and orchestration solution to help you manage the network more efficiently.
servers or in hypervisors using KVM. The JA2500 hardware appliance is purpose-built to host the Junos Space Network Management Platform and is fine-tuned to ensure high availability and high performance of Junos Space applications. The hardware appliance version of Junos Space does not require hardware and operating system configuration expertise to deploy, and it makes initial configuration and deployment quite easy by providing a simple, menu-driven console interface. Another advantage of deploying the Junos Space hardware appliance is that it simplifies ordering, maintenance, and support of your network by making Juniper Networks the single destination for all your hardware and software requirements for Junos Space, as well as your other networking devices. Multiple appliances can be clustered
A Junos Space Virtual Appliance includes the same software and all the functionality available in a Junos Space hardware appliance. However, you can deploy the virtual appliance on a VMware ESX server (version 3.5 or higher) or an ESXi server (version 4.0 or higher). The main driver for choosing Junos Space Virtual Appliances is that they allow you to utilize any existing investment in VMware virtualization infrastructure instead of purchasing new hardware. You can also scale up a Junos Space Virtual Appliance by increasing the resources assigned to it in terms of CPU, memory, and disk space.
Extending the breadth of the Junos Space Network Management Platform are multiple Junos Space management applications that optimize network management for various domains. These applications, with their easy-to-use interfaces, enable you to provision new services across thousands of devices and optimize workflow tasks for specific domains, such as core, edge, data center, campus, security, mobile, and more.
Junos Space Security Director helps organizations improve the reach, ease, and accuracy of security policy administration with a scalable, GUI-based management application. Used by both enterprises and service providers, it helps administrators more quickly and intuitively manage all phases of the security policy lifecycle, from policy creation to remediation, through one centralized web-based interface.
Junos Space Service Now is a remote, automated troubleshooting client that enables Juniper Networks to quickly identify a problem in the customer's network to achieve a 40% increase in Day 1 issue resolution. Used by both enterprises and service providers, Junos Space Service Now creates an incident detection system by performing automated diagnostic data collection on the first occurrence of an issue. The information collected from devices, by means of the Junos Space Network Management Platform, is presented in an easily accessible format that automates and speeds troubleshooting and
Junos Space Service Insight helps you reduce network downtime by delivering proactive bug notifications specific to your network configuration and thorough automated end-of-life/end-of-support (EOL/EOS) analysis, enabling complete EOL auditing across hundreds of devices in seconds. Used by both enterprises and service providers, Junos Space Service Insight works seamlessly with Junos Space Service Now to deliver targeted bug notifications, identify which network devices could potentially be impacted, and perform impact analyses for EOL/EOS notifications.
Junos Space Virtual Director is a key component of the Juniper Networks vSRX security suite. Junos Space Virtual Director provides a comprehensive and automated VM management solution that enables administrators to automate the provisioning and resource allocation of the Juniper Networks vSRX virtual firewall.
environment. This configuration might work well in a small POC or demonstration environment; however, for a medium to large production environment, we recommend that you manually change the virtual appliance package to use four processor cores.
Note that we recommend the use of at least 32 GB of RAM when you deploy a Junos Space Virtual Appliance as a fault monitoring and performance monitoring (FMPM) node; 16 GB of RAM is recommended for a regular Junos Space Virtual Appliance. This special FMPM node performs fault and performance monitoring of the managed devices and Junos Space nodes, and any events or alarms are stored in a PostgreSQL database on the FMPM node.
Once you have changed the hard disk size for the virtual appliance, you must expand the VM disk size within the console. This action can be accomplished by selecting option 6, Expand VM Drive Size, in the Junos Space console settings menu.
The distributed Junos Space Virtual Appliance files are created with 32 GB of disk space. To support Junos Space functionality, after deploying the Junos Space Virtual Appliance to the VMware ESX server, you must add disk resources for the Junos Space Virtual Appliance.
When configuring the virtual appliance as a Space node, you must add disk resources as follows to expand the partitions:
• 40 GB for /var
• 25 GB for /var/log
• 15 GB for /tmp
• 20 GB for /
When configuring the virtual appliance as a specialized or FMPM node, you must add disk resources as follows to expand the partitions:
• 120 GB for /var
• 40 GB for /var/log
• 20 GB for /tmp
• 20 GB for /
Note that you must add a disk resource, expand the drive size of one partition, and then add another disk resource to expand the drive size of the next partition. Space available in a disk resource cannot be shared among partitions. For example, you cannot share an 80 GB disk resource among the /var, /var/log, and /tmp partitions. You must add a disk resource of at least 40 GB and then expand the drive size of the /var partition; then add a disk resource of 25 GB and expand the drive size of the /var/log partition, and so on.
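Because each partition needs its own dedicated disk resource, the total extra disk you must provision is simply the sum of the per-partition figures above. A small sketch tallying them per node type:

```python
# Sketch: tally the additional disk (in GB) required per Junos Space node
# type, one dedicated disk resource per partition (resources cannot be
# shared among partitions). Figures are the per-partition values above.
SPACE_NODE = {"/var": 40, "/var/log": 25, "/tmp": 15, "/": 20}
FMPM_NODE  = {"/var": 120, "/var/log": 40, "/tmp": 20, "/": 20}

def total_extra_disk(partitions):
    """Sum the per-partition disk resources for one node."""
    return sum(partitions.values())

print(total_extra_disk(SPACE_NODE))  # 100 GB beyond the base 32 GB image
print(total_extra_disk(FMPM_NODE))   # 200 GB beyond the base 32 GB image
```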
A Junos Space fabric, also known as a Junos Space cluster, comprises one or more IP-connected nodes. The idea that a cluster can consist of just one node might seem a little strange; however, with Junos Space, when you deploy one node, it is considered a Junos Space cluster of one. The reason behind this idea is that when the first Junos Space node is deployed, it joins the cluster by itself, which makes it easy for another node to join the cluster. There is no need to set the first node in a cluster mode and then add the next node to the cluster; you simply add the additional node to the cluster.
A node is a logical object that represents a single Junos Space hardware appliance or Junos Space virtual appliance, its operating system, and the Junos Space software that runs on the operating system. Each Junos Space appliance or virtual appliance that you install and configure is represented as a single node in the fabric. You can add nodes without disrupting the services that are running on the fabric. You can also combine hardware and virtual appliances in the same fabric; there is no need to go all virtual or all physical in a Junos Space fabric.
Currently, you can deploy up to six nodes in the fabric to manage up to 15,000 devices (2,500 devices per node).
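The scaling figures above imply a simple sizing rule: one node per 2,500 managed devices, up to the six-node maximum. A sketch of that arithmetic:

```python
# Sketch: how many Junos Space fabric nodes a device count requires, at
# 2,500 devices per node with a six-node (15,000-device) fabric ceiling.
import math

DEVICES_PER_NODE = 2500
MAX_NODES = 6

def nodes_required(device_count):
    """Return the node count needed, or None if beyond fabric capacity."""
    nodes = math.ceil(device_count / DEVICES_PER_NODE)
    return nodes if nodes <= MAX_NODES else None

print(nodes_required(2500))   # 1
print(nodes_required(6000))   # 3
print(nodes_required(15000))  # 6
print(nodes_required(20000))  # None: exceeds a single fabric
```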
security devices running Junos OS. Junos automation tools for on-box automation, which leverage the native XML capabilities of the Junos OS, include commit scripts, operational scripts (op scripts), event policies and scripts, and macros.
Junos automation simplifies complex configurations and reduces potential configuration errors. It saves time by automating operational and configuration tasks. It also speeds troubleshooting and maximizes network uptime by warning of potential problems and automatically responding to system events.
Junos automation can capture the knowledge and expertise of experienced network operators and administrators and allow
a best-in-class product.
risk during these maintenance windows. A seemingly small operational mistake can lead to significant service outages. Therefore, network operators need a solution that can snapshot the system before and after any changes and compare the snapshot data.
Junos Snapshot Administrator can perform the following functions on either a single device or a list of devices running the Junos OS:
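Junos Snapshot Administrator automates exactly this pre/post comparison. As an offline illustration of the core idea only (not the tool itself), the sketch below compares two hypothetical snapshots of operational state; the keys and values are invented:

```python
# Sketch: compare pre- and post-change snapshots of operational state and
# report what changed. Snapshot contents here are invented sample data,
# not output from Junos Snapshot Administrator.
def diff_snapshots(pre, post):
    """Return {key: (old, new)} for every value that changed between runs."""
    changes = {}
    for key in pre.keys() | post.keys():
        old, new = pre.get(key), post.get(key)
        if old != new:
            changes[key] = (old, new)
    return changes

pre_change  = {"bgp_peers_established": 12, "ospf_neighbors_full": 8}
post_change = {"bgp_peers_established": 11, "ospf_neighbors_full": 8}

print(diff_snapshots(pre_change, post_change))
# one BGP peer lost after the maintenance window: investigate before closing
```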
management solutions. The Junos XML API enables these systems and allows programming in any language that supports the industry-standard Network Configuration Protocol (NETCONF).
shortly). This ensures that as new Junos features are released, they become immediately available to both on-box and off-box automation.
As we mentioned earlier, Junos automation tools include op scripts, event policies and scripts, and commit scripts for on-box automation.
In regard to op scripts, the input is a blank XML document. An op script can also call on a wide variety of Junos functions, depending on the script's purpose. Then the script sends XML output telling the Junos OS which results or information
errors, or attempting to correct the error. It then returns an XML document telling the Junos CLI what to display to the user, if anything. Because event scripts are triggered by events, not by the user, warnings and errors are often written to a log rather than returned in the XML output.
With commit scripts, the input is the XML representation of the Junos configuration. A commit script can call on Junos functions to access information needed to evaluate the configuration. It then returns an XML document that tells the Junos OS what actions, if any, it should take. The XML document can contain instructions telling the Junos infrastructure to change some of the configuration, issue a warning, issue an error and stop the commit operation, or perform some combination of these operations.
NETCONF
NETCONF is the protocol used as the communication channel with a device. NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices. It uses XML-based data encoding for the configuration data and the protocol messages. NETCONF protocol operations are realized as remote procedure calls (RPCs). The protocol messages are exchanged on top of a secure transport protocol (SSHv2).
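Because NETCONF operations are XML-encoded RPCs, a client's job largely reduces to building and parsing XML documents. A minimal offline sketch constructing a NETCONF get-config RPC (the element names and base namespace follow RFC 6241; no device connection is made here):

```python
# Sketch: build the XML for a NETCONF <get-config> RPC targeting the
# running datastore. This only constructs the message; sending it would
# require an SSHv2 session to a real device via a NETCONF client.
import xml.etree.ElementTree as ET

NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(message_id="1"):
    rpc = ET.Element(f"{{{NS}}}rpc", {"message-id": message_id})
    get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
    source = ET.SubElement(get_config, f"{{{NS}}}source")
    ET.SubElement(source, f"{{{NS}}}running")  # read the running config
    return ET.tostring(rpc, encoding="unicode")

msg = build_get_config()
print(msg)  # an <rpc> element wrapping <get-config><source><running/>
```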
Junos Python EZ
Python is a high-level programming language. Junos PyEZ is a Python library that enables you to remotely manage and
automate devices running the Junos OS. You do not need to install any client software on the remote nodes in order to use
E
Junos PyEZ to manage the devices. Also, Python is not required on the managed devices running the Junos OS, because
Junos PyEZ utilizes NETCONF and the Junos XML APIs. To use Junos PyEZ to manage devices running the Junos OS, you must
enable NETCONF over SSH on the managed device and ensure that the device meets requirements for SSHv2 connections.
Junos PyEZ enables both types of users—non-programmers and programmers—to easily interact and manage a network of
Junos OS devices.
Non-programmers can use the native Python shell on their management server (laptop, tablet, phone, and so on) as their
point-of-control for remotely managing Junos OS devices. The Python shell is an interactive environment that provides the
necessary means to perform common automation tasks—such as conditional testing, for-loops, macros, and templates.
These building blocks are similar enough to other shell environments, such as Bash, to enable the non-programmer to use
the Python shell as a power tool, instead of a programming language. From the Python shell, a user can manage Junos OS
devices using native hash tables, arrays, and so on, instead of using device-specific Junos OS XML or resorting to
screen-scraping the actual Junos OS CLI.
There is a growing interest and need to automate the network infrastructure into larger IT systems. To do so, traditional
software programmers, development and operations (DevOps), and so on, need an abstraction library of code to further
those activities. Junos PyEZ is designed for extensibility so that the programmer can quickly and easily add new widgets to
the library in support of their specific project requirements. There is no need to "wait on the vendor" to provide new
functionality. Junos PyEZ is not specifically tied to any version of the Junos OS, or any Junos OS product family.
Junos PyEZ is designed to provide the same capabilities a user would have on the Junos OS CLI, but in an environment
built for automation tasks. These capabilities include, but are not limited to:
• Provide facts about the device such as software version, serial number, and so on;
• Retrieve the operational or run-state information (think show commands) using Tables and Views;
• Retrieve the configuration using Tables and Views;
• Make unstructured configuration changes with snippets and templates;
• Make structured configuration changes with modeled abstractions; and
• Provide common utilities for tasks such as secure copying of files and software updates.
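As a sketch of the "snippets and templates" item above: Junos PyEZ normally renders Jinja2 templates through its configuration utilities, but the underlying idea can be shown with only the standard library's string.Template. The interface name and description below are placeholders:

```python
# Sketch of template-driven, unstructured configuration changes.
# Junos PyEZ itself uses Jinja2 templates; string.Template stands in
# here so the example is self-contained.
from string import Template

snippet = Template(
    "interfaces {\n"
    "    $interface {\n"
    "        description \"$description\";\n"
    "    }\n"
    "}\n"
)

def render(interface, description):
    """Fill the configuration snippet for one interface."""
    return snippet.substitute(interface=interface, description=description)

print(render("ge-0/0/0", "uplink to WAN"))
```

The rendered text is ordinary Junos configuration syntax, ready to be loaded onto a device as a snippet.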
The JUISE project (that is, the Junos User Interface Scripting Environment) allows scripts to be written, debugged, and
executed outside the normal Junos OS environment. Users can develop scripts in their desktop environment, performing the
code/test/debug cycle in a more natural way. Tools for developers are available, including a debugger, a profiler, a call-flow
tracing mechanism, and trace files. The Junos-specific extension functions are available for scripts, allowing access to Junos
XML data using the normal jcs:invoke and jcs:execute functions. Commit scripts can be tested against a Junos device's
configuration and the results of those scripts can be tested on that device. JUISE scripts are typically written in SLAX, an
alternative syntax for Extensible Stylesheet Language Transformations (XSLT). For information about SLAX, refer to the link shown on this slide.
The JUISE software can be used to execute scripts independent of a Junos OS based device, allowing the use of SLAX scripts
in the off-box world. Scripts can communicate with zero or more devices, allowing a centralized manipulation of multiple
devices.
JUISE includes a rudimentary NETCONF server, allowing Junos OS devices to invoke RPCs to Unix devices. Junos OS devices
can perform NETCONF or Junoscript RPCs to a JUISE device, and invoke scripts or programs on that device.
JUISE is available as an open-source project by way of Google code using the link shown at the bottom of this slide.
You typically deploy the Puppet software using a client-server arrangement, where the server, or Puppet master, manages
one or more agent nodes. The client daemon, or Puppet agent, runs on each of the managed compute resources.
You create Puppet manifest files to describe your desired system configuration. The Puppet master compiles the manifests
into catalogs, and the Puppet agent periodically retrieves the catalog and applies the necessary changes to the
configuration. Juniper Networks provides support for using Puppet to manage certain devices running the Junos OS.
The components of Puppet for the Junos OS are as follows:
• jpuppet package: This package is installed on the agent node running the Junos OS and contains the Puppet
agent, the Ruby programming language, and support libraries.
• netdevops/netdev_stdlib Puppet module: This module contains generic Puppet Type definitions. It does not
include any specific Provider code.
• juniper/netdev_stdlib_junos Puppet module: This module contains the Junos OS-specific Puppet Provider code
that implements the Types defined in the netdevops/netdev_stdlib module. You install this module on the
Puppet master when managing devices running Junos OS.
• Ruby gem for NETCONF (Junos XML API): This gem is installed on the Puppet master and is also bundled in the
jpuppet package.
When using Puppet to manage devices running the Junos OS, the Puppet agent makes configuration changes under
exclusive lock, and logs all commit operations with a Puppet catalog version for audit tracking. Puppet report logs include a
Junos OS source indicator for log entries specific to Junos OS processing and tags associated with the operation or error,
which enables easy report extraction.
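As an illustrative sketch, a manifest using the netdev_stdlib resource types might declare a VLAN and an access port as follows. The resource titles and attribute values are invented for the example:

```puppet
# Illustrative manifest sketch; resource names and attributes follow the
# netdev_stdlib Type definitions (device, VLAN, and Layer 2 interface).
netdev_device { $hostname: }

netdev_vlan { 'blue':
  vlan_id     => 100,
  description => 'Puppet-managed VLAN',
}

netdev_l2_interface { 'ge-0/0/0':
  untagged_vlan => 'blue',
}
```

The Puppet master compiles declarations like these into a catalog, and the jpuppet agent on the device applies them under exclusive configuration lock as described above.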
Chef is an open source configuration management framework. The Chef client is supported on some Junos OS-based devices. Chef runs in a client/server model where
predefined (recipe) templates for configurations are stored on the Chef server. Devices running the Chef client request
changes from the Chef server upon boot-up and the Chef recipes are transferred to the client device. The strength of Chef is
its ability to manage an entire IT infrastructure from a client workstation where DevOps professionals bundle multiple recipes
into Chef cookbooks to make orchestrated configuration changes across multiple types of devices. Chef can be used to
install applications as well.
The Juniper Networks Chef module provides options for configuring the following:
• Physical interfaces: Physical Ethernet interface attributes, such as administrative state, description, speed,
duplex mode, and MTU;
• Layer 2 Ethernet switching services: Logical Ethernet switching interface attributes, such as description, VLAN
membership, and port mode (access or trunk);
• Link aggregation groups (LAGs): LAG interface attributes, such as name, member links, Link Aggregation Control
Protocol (LACP) mode, and minimum up-links required.
A Chef for Junos OS deployment consists of the following major components: a Chef server, at least one workstation, and one
or more nodes.
• Chef server: The server acts as a hub for configuration data. The server stores cookbooks and the node object
metadata that describes each registered node managed by the Chef client. You can choose between two types
of servers:
• Enterprise Chef: A highly scalable server that includes premium features and support. You can install the server
behind a firewall, or you can use a cloud-based server hosted by Chef.
• Open source Chef: An open-source, free version of the server that is the basis for Enterprise Chef.
• Workstations: You perform most of your work on a workstation. You use the Chef CLI, called knife, to develop
cookbooks and recipes, which are stored in a local Chef repository. From the workstation, you can synchronize
the local repository with your version-control system, upload cookbooks to the Chef server, and perform
operations on nodes.
• Nodes: A node is any device or virtual device that is configured to be managed by the Chef client. To manage a
node, the Chef client running on the node obtains the configuration details, such as recipes, templates, and file
distributions, from the Chef server. The Chef client then does as much of the configuration work as possible on
the node itself. For a Junos OS device to be a Chef node, it must have the Chef client installed and configured
on it.
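Chef recipes express the same kind of intent in a Ruby-based DSL. The following is an illustrative sketch based on the netdev resources used with Chef for Junos OS; the resource names and attribute values are assumptions for the example:

```ruby
# Illustrative recipe sketch using netdev-style resources for Junos OS.
netdev_vlan 'blue' do
  vlan_id     100
  description 'Chef-managed VLAN'
end

netdev_l2_interface 'ge-0/0/1' do
  untagged_vlan 'blue'
  description   'access port in VLAN blue'
end
```

A recipe like this lives in a cookbook on the Chef server; the Chef client on the node pulls it down and converges the device configuration to match.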
IT frameworks like Puppet and Chef are attractive because they provide a layer of abstraction between what you want to
manage and how the resource is actually implemented.
While this tool has been designed for managing servers, the framework can be utilized for simple but common tasks for
deploying network equipment. To this end, Juniper will produce Ansible modules that will enable users to automate the basic
tasks of commissioning Junos OS based equipment. Examples of modules (such as junos_install_config and junos_shutdown) are shown on this slide. An Ansible module represents a unit of work that is performed by a managed node. You
can execute modules directly or through playbooks. The modules use console connections or NETCONF to perform the
various tasks. Ansible was selected because it does not require an on-box agent (as Puppet and Chef do).
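A hedged sketch of what such a playbook could look like, using the Juniper.junos Galaxy role; the host group, file path, and task layout are assumptions for the example:

```yaml
# Illustrative playbook sketch; hostnames and file paths are placeholders.
- name: Commission a new Junos OS device
  hosts: junos-devices
  connection: local
  roles:
    - Juniper.junos
  tasks:
    - name: Push the baseline configuration over NETCONF
      junos_install_config:
        host: "{{ inventory_hostname }}"
        file: /tmp/baseline.conf
```

Each task invokes one module, and the playbook runs the tasks in order against every device in the host group.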
We Discussed:
• Methods to manage WANs.
Review Questions
1. What is the purpose of the OOB management network?
2. What are some best practices for securing the OOB management network?
3. What on-box and off-box automation options does the Junos OS offer?
1. The purpose of the OOB management network is to isolate traffic associated with network management from production customer traffic.
2. Among the best practices for securing the OOB management network are physical isolation from the production network, limiting access to the management network, centralizing authentication and access control, eliminating shared logins, and restricting distribution of the management network addressing into the production network.
3. Junos on-box automation options include operation, commit, and event scripts. Off-box automation options include Junos PyEZ, Puppet, Chef, JUISE, and Ansible.
Chapter 11: WAN Virtualization and SDN
We Will Discuss:
• How virtualization and SDN can be leveraged in the WAN; and
• Various SDN products and how they are used in the WAN.
SDN Overview
The slide lists the topics we will discuss. We discuss the highlighted topic first.
network in order to make forwarding decisions. To centralize the control plane, the control plane must first be separated from the forwarding plane.
Centralized Management
Centralization of the control plane also provides the ability to configure the network from a central management pane.
Traditional networks have struggled for decades with the concept of central management because of the distributed nature of
the control plane. Centralized management is much simpler to accomplish in an SDN environment because of the centralized
control plane.
The northbound API on the SDN controller allows developers to write applications that can program the forwarding elements directly. This allows for reactive behavior based on the business applications running on the network, as well as the automated spin-up and tear-down of policies, virtualized business applications, networks, and services (NFV) to scale along with business requirements and demands. Thanks to the open northbound APIs that can be used in an SDN environment, such as Representational State Transfer (REST), a stateless communications style that uses the HTTP protocol, network applications can be written in any language that can interact with those APIs. Python and Ruby have become very popular in the networking and DevOps arenas, while languages like PHP can also be used for web applications and front ends.
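As a sketch of what northbound programmability can look like, the following builds (but does not send) a REST request with Python's standard library. The controller hostname, URL path, and JSON body are hypothetical:

```python
# Sketch: preparing a REST call to a controller's northbound API.
# REST is stateless and rides on HTTP, so any language with an HTTP
# client can drive such an API. The URL path and payload are invented.
import json
import urllib.request

def build_request(controller, tenant, network):
    """Build (but do not send) a POST that creates a virtual network."""
    body = json.dumps({"tenant": tenant, "network": network}).encode()
    return urllib.request.Request(
        url=f"https://{controller}/api/v1/virtual-networks",  # hypothetical path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("controller.example.net", "blue", "web-tier")
print(req.get_method(), req.full_url)
```

Sending the request with urllib.request.urlopen (plus authentication) would complete the call; the construction step alone shows why any HTTP-capable language can act as an SDN application.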
Cost Savings
Savings in capital expenditure (CapEx), the up-front cost typically encompassing the purchase of hardware and software, can be made because the hardware required for an SDN-enabled network with a centralized control plane can be much simpler, and therefore cheaper. White box switches are now being manufactured to Open Compute Project standards.
Resource Savings
Savings in operational expenditure (OpEx), the ongoing cost of managing and maintaining services, can be made because of the centralized, programmable nature of the network. The ability to automate the network using standard tools and programming languages means fewer mistakes are made and less manpower is required for deployment and maintenance.
Time Savings
SDN enables a much faster spin-up of services for enterprises and service providers, which can significantly decrease
go-to-market time for new applications and architectures.
Separating these four planes permits the optimization of each plane within the network. We discuss the separation of these four planes in detail later in this chapter.
Earlier in this content, we discussed how centralizing the control plane can be beneficial. To take the concept of
centralization a little further, we can centralize certain aspects of the management and services plane. Centralizing the
management, services, and control plane can help simplify network design and lower operating costs. We discuss
centralizing these planes in detail later in this chapter.
SDN is a huge benefit to Cloud providers in that it can provide elastic scale and provide flexible deployment of services. For
example, if a customer is using a firewall service from a Cloud provider, they might only need 1 Gbps of firewall throughput
for most of the month. However, one or two days out of the month they might need 5 Gbps of firewall throughput. Without
elastic scaling, the customer needs to purchase 5 Gbps of firewall throughput for the entire month. Whereas with elastic
scaling, the customer receives 1 Gbps of firewall throughput, until the time comes that more throughput is necessary. Then,
they are provided with 5 Gbps of firewall throughput. When the customer no longer needs the 5 Gbps of firewall throughput,
only 1 Gbps of firewall throughput is provided.
You can accomplish elastic scaling with SDN in the previous example by spinning up a virtual machine (VM)-based firewall that provides 1 Gbps of throughput. Setting a threshold that spins up additional VM-based firewalls when a specific firewall throughput is reached provides the additional throughput that the customer needs. Additionally, you would set a threshold that spins down the additional VM-based firewalls when they are no longer needed. This permits you to provide the customer with only what they need, when they need it.
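The threshold logic described above can be sketched as a small function. The 1-Gbps per-VM capacity and the 80% scale-up mark are illustrative assumptions, not values from any product:

```python
# Sketch of threshold-based elastic scaling: grow the fleet of 1-Gbps
# VM firewalls when measured demand runs the fleet hot, and shrink it
# when demand subsides. Capacities and thresholds are illustrative.
VM_CAPACITY_GBPS = 1.0   # throughput one VM firewall provides
SCALE_UP_AT = 0.8        # add a VM once the fleet passes 80% utilization

def firewalls_needed(demand_gbps, current_vms=1):
    """Return how many VM firewalls to run for the observed demand."""
    vms = max(current_vms, 1)
    # Scale up while the current fleet cannot absorb the demand.
    while demand_gbps > vms * VM_CAPACITY_GBPS * SCALE_UP_AT:
        vms += 1
    # Scale down when a smaller fleet would still have headroom.
    while vms > 1 and demand_gbps <= (vms - 1) * VM_CAPACITY_GBPS * SCALE_UP_AT:
        vms -= 1
    return vms
```

An orchestrator polling measured throughput could call a function like this each interval and spin VM firewalls up or down to match its answer.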
The fourth principle, which discusses creating a common platform, refers to the Juniper Contrail product. Contrail permits
you to develop applications in the form of software development kit (SDK) application programming interfaces (APIs) to tailor
Contrail to your SDN solution.
The fifth principle, which discusses standardizing protocols, is important because nobody wants to experience vendor lock-in
with their SDN solution. To avoid vendor lock-in, it is important to use standardized protocols that have support across heterogeneous vendors. An example of this principle is how Juniper uses standardized and mature protocols
to provide communication between the SDN controller and the managed devices—such as the Extensible Messaging and
Presence Protocol (XMPP). Adhering to this principle allows you choice and an overall reduction in the cost of your SDN
deployment.
The last principle describes how broadly applying SDN principles to all areas of networking can improve those areas. For
example, SDN can provide improvements to the security area of your network by virtualizing your firewall which allows you to
become more agile to new and existing threats. SDN can help a service provider by allowing them to dynamically implement
service chaining based on the needs of each customer they provide service to. The examples of how SDN can help your
network are almost endless; suffice it to say that SDN can optimize and streamline your network no matter which sector of networking you are in.
SDN is like a remodel—you need to do it one step at a time. Like most remodels, there is more than one way to
get to the SDN result, but here is a reasonable set of four steps to reach the goal:
Management is the best place to start as this provides the biggest bang for the buck. The key is to centralize network
management, analytics, and configuration functionality to provide a single master that configures all networking devices.
This lowers operating cost and allows customers to gain business insight from their networks.
Centralizing management does several things, each of which provides significant value. You start by creating a centralized
management system. Similar to cloud applications, this centralized management system is packaged in x86 virtual
machines running on industry standard servers. These VMs are orchestrated using one of the commonly available
orchestration systems such as VMware’s vCloud Director, Microsoft System Center, or OpenStack.
In the case of the service provider, their operational and business systems connect to the centralized management VMs
which configure the network. Similarly within a data center, that same data center orchestration system (VMware vCloud
Director, OpenStack, and so forth) can now directly manage the network.
Configuration is performed through published APIs and protocols—where possible these protocols are industry-standard. As
SDN is still a new technology, industry standard protocols are still emerging but it is very important that moving forward
these standards get created.
Networking and security devices generate huge amounts of data about what is happening across the network. Much can be
learned by analyzing this data and like other aspects of business, analytic techniques applied to networking and security
data can transform our understanding of business.
Pulling management from the network device into a centralized service provides the first step to creating an application
platform. Of greatest urgency is simplifying the connection to the operational systems used by enterprises and service
providers. However, as this platform takes shape, new applications will emerge. The analytics provides insight into what is
happening within the network, enabling better business decisions and new applications which will dynamically modify the
network based on business policy. Centralized management enables changes to be performed quickly—enabling service
providers to try out new applications, packages and plans, quickly expanding those that work and dropping those that do not.
In fact, like other new platforms we have seen over the years, the possibilities are endless and the most interesting
applications will only emerge once that platform is in place.
Extracting services from network and security devices by creating service VMs is a great next step because services are an
area that is terribly underserved by networking. This enables network and security services to independently scale using
industry-standard, x86 hardware based on the needs of the solution.
Creating a platform that enables services to be built using modern, x86 VMs opens up a whole new world of possibility. For
example, the capacity of a security firewall today is completely limited by the amount of general-purpose processing power
you put into a single networking device—the forwarding plane is faster many times over. So if you can pull the security
services out of the device and then run them on a bank of inexpensive x86 servers, you dramatically increase capacity and
agility.
As a first step, you can tether, or connect these services back to a single networking device. You can put the x86 servers in a
rack next to the networking device or they can be implemented as server blades within the same networking device. Either
way, this step opens up the possibilities for a whole new set of network applications.
The centralized controller enables multiple network and security services to connect in series across devices within the
network. This is called SDN service chaining—using software to virtually insert services into the flow of network traffic.
Service chaining functionality is physically accomplished today using separate network and security devices. Today’s physical
approach to service chaining is quite crude, separate devices are physically connected by Ethernet cables. Then, each device
must be individually configured to establish the service chain. With SDN service chaining, networks can be reconfigured on
the fly, allowing them to dynamically respond to the needs of the business. SDN service chaining will dramatically reduce the
time, cost, and risk for customers to design, test, and deliver new network and security services.
The final step of optimizing network and security hardware can proceed in parallel with the other three. As services are
disaggregated from devices and SDN service chains are established, network and security hardware can be used to optimize
performance based on the needs of the solution. Network and security hardware will continue to deliver ten times or better
forwarding performance than can be accomplished in software alone. The combination of optimized hardware together with
SDN service chaining allows customers to build the best possible networks.
The separation of the four planes helps to identify functionality that is a candidate for optimization within the forwarding
hardware. This unlocks significant potential for innovation within the ASICs and system design of networking and security
devices. While an x86 processor is general purpose, the ASICs within networking devices are optimized to forward network
traffic at extreme speeds. This hardware will evolve to become more capable—every time you move something from software
into an ASIC, you can achieve a massive performance improvement. This requires close coordination between ASIC design,
hardware systems, and the software itself. As SDN becomes pervasive, the ability to optimize the hardware will create many
opportunities for networking and security system vendors.
The bottom plane, forwarding, does the heavy lifting of sending the network packets on their way. It is optimized to move
data as fast as it can. The forwarding plane can be implemented in software but it is typically built using application-specific
integrated circuits (ASICs) that are designed for that purpose. Third-party vendors supply ASICs for some parts of the
switching, routing, and firewall markets. For high performance and high scale systems, the forwarding ASICs tend to be
specialized and each vendor provides their own, differentiated implementation. Some have speculated that SDN will
commoditize switching, routing, and firewall hardware. However, the seemingly insatiable demand for network capacity
generated by thousands of new consumer and business applications creates significant opportunity for differentiation in
forwarding hardware and networking systems. In fact by unlocking innovation, SDN will allow further differentiation from the
vendors who build these systems.
If the forwarding plane is the brawn of the network, the control plane is the brains. The control plane understands the
network topology and makes the decisions on where the flow of network traffic should go. The control plane is the traffic
officer that understands and decodes the networking protocols and ensures that the traffic flows smoothly. Importantly, the
control plane learns everything it needs to know about the network by talking to its peer in other devices. This is the magic
that makes the Internet resilient to failures, keeping traffic flowing even when a major natural event, such as a large
earthquake, brings down thousands of networking devices.
Sometimes network traffic requires more processing and for this, the services plane does the job. Not all networking devices
have a services plane—you will not find this plane in a simple switch or router. But for many routers and all firewalls, the
services plane does the deep thinking, performing the complex operations on networking data that cannot be accomplished
by the forwarding hardware. The services plane is the place where firewalls stop the bad guys and parental controls are
enforced. This plane enables your smart phone to browse the Web or stream a video, all the while ensuring you are correctly
billed for the privilege.
Like all computers, network devices need to be configured, or managed. The management plane provides the basic
instructions of how the network device should interact with the rest of the network. Where the control plane can learn
everything it needs from the network itself, the management plane must be told what to do. The networking devices of today
are often configured individually. Frequently, they are manually configured using a command-line interface (CLI), understood
by a small number of network specialists. Because the configuration is manual, mistakes are frequent and these mistakes
sometimes have serious consequences—cutting off traffic to an entire data center or stopping traffic on a cross-country
networking highway. Service providers worry about backhoes cutting fiber optic cables but more frequently, their engineers
cut the cable in a virtual way by making a simple mistake in the complex CLI used to configure their network routers or
security firewalls.
While the forwarding plane uses special purpose hardware to get its job done, the control, services, and management planes
run on one or more general purpose computers. These general purpose computers vary in sophistication and type, from very
inexpensive processors within consumer devices to what is effectively a high-end server in larger, carrier-class systems. But
in all cases today, these general purpose computers use special purpose software that is fixed in function and dedicated to
the task at hand. That inflexibility is the root of the issue that has sparked the interest in SDN.
If you crawled through the software inside a router or firewall today, you’d find all four of the networking planes. But with
today’s software that networking code is built monolithically without cleanly defined interfaces between the planes. What
you have today are individual networking devices, with monolithic software, that must be manually configured. This makes
everything harder than it needs to be.
• Storage
• Network
The SDN controller's role is to orchestrate the network and networking services, like load balancing and security, based on the needs of the application and its assigned compute and storage resources.
The orchestrator uses the northbound interface of the SDN controller to orchestrate the network at a very high level of
abstraction, for example:
• Create a virtual network for a tenant, within a data center or across data centers.
• Deploy a network service (for example, a load balancer) in a tenant’s virtual network.
The SDN controller is responsible for translating these requests at a high level of abstraction into concrete actions on the
physical and virtual network devices such as:
• Physical switches, e.g., Top of Rack (ToR) switches, aggregation switches, or single-tier switch fabrics.
• Physical routers.
• Physical service nodes such as firewalls and load balancers.
• Virtual services such as virtual firewalls in a VM.
The NorthStar Controller is used to control and optimize MPLS traffic flows across the WAN.
The Contrail Cloud controller is a Cloud Automation Controller. It is built upon the ETSI NFV specification. The Contrail
controller allows for simple centralized SDN management of the network which can contain both virtual and physical
devices. It can also enable dynamic service chains within a data center, POP, or across the WAN.
The rest of this chapter will focus on designing the WAN using NorthStar and designing network services that span WAN links
using Contrail.
NorthStar
The slide highlights the topic we discuss next.
MPLS Management
MPLS service providers must maximize the financial investment that has been made to build and operate the network.
Investment is necessary in hardware and software but also in the people that make it all work together. Maximizing this
enormous financial commitment will determine business success or failure. MPLS tunnels must be designed to use
available paths efficiently. To meet customer service level agreement requirements, a successful network design must incorporate failure recovery. Customers see the service provider network as a service that is always available.
Individual devices each have their own view of the network and can make individual decisions affecting traffic flow
that might not be optimal from a network wide perspective. Centralizing as much network information and decision making
as possible can provide benefits when making network wide MPLS design decisions. Administrator requirements for
individual tunnels can be centrally defined and signaled based on the specific needs of each individual tunnel. What
happens when links or nodes in the network fail? Will tunnels successfully be re-routed? This is enough to keep network
designers awake at night.
Using the NorthStar Controller, label switched paths can be provisioned, deployed, and managed. The NorthStar Controller administrator interface provides visibility for all routers, links, and label switched paths in a service provider or large enterprise network.
Centralized processing
A key concept of SDN is the separation of the decision making component of the network, the control plane, from the
component that actually forwards the traffic based on decisions made by the control plane. This component is known as the
forwarding plane. The NorthStar controller peers with a service provider or enterprise route reflector and learns network
destinations. A key component of the learning process is not only destinations but also traffic engineering characteristics for
the links that make up all available paths through the network. This information is stored in a traffic engineering database on
the NorthStar Controller. The NorthStar controller uses this network wide traffic engineering database and industry leading
algorithms to calculate the explicit route objects (EROs) that will determine signaling for all label switched paths in the
network.
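A minimal sketch of this kind of constrained computation: prune links that cannot satisfy a minimum-bandwidth constraint, then run a shortest-path search over the remaining topology to produce the hop-by-hop path that would seed an ERO. The topology, metrics, and bandwidth figures below are invented:

```python
# Sketch of constrained shortest-path computation over a traffic
# engineering database: drop links below the bandwidth constraint,
# then run Dijkstra on what remains.
import heapq

def cspf(links, src, dst, min_bw):
    """links: (a, b, metric, avail_bw) tuples; returns a node list or None."""
    adj = {}
    for a, b, metric, bw in links:
        if bw >= min_bw:                      # constraint: prune thin links
            adj.setdefault(a, []).append((b, metric))
            adj.setdefault(b, []).append((a, metric))
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + metric, nxt, path + [nxt]))
    return None                               # no path meets the constraint

topology = [("A", "B", 10, 5.0), ("B", "D", 10, 0.5),
            ("A", "C", 15, 5.0), ("C", "D", 15, 5.0)]
print(cspf(topology, "A", "D", min_bw=1.0))
```

With a 1-Gbps constraint, the cheaper A-B-D path is rejected because the B-D link lacks bandwidth, and the computation falls back to A-C-D; a production controller applies many more constraints, but the prune-then-search shape is the same.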
NorthStar Features
The NorthStar Controller is an example of software defined networking. It provides a centralized control plane for MPLS
domains. The NorthStar Controller is a WAN SDN controller that automates the discovery and creation of traffic-engineered
label switched paths across a service provider or large enterprise network. NorthStar Controller provides granular visibility
into, and control over, IP/MPLS flows. The NorthStar Controller provides complex inter-domain path computation and an optimum path for every LSP in the network. Network designers can use the NorthStar Controller interface to simulate the
failure of links and nodes and verify secondary paths are available for affected LSPs. The maintenance mode feature allows
LSPs to be dynamically re-routed around nodes or links that are being taken off line for maintenance reasons.
Additional NorthStar capabilities include:
• Employs sophisticated, industry-leading path computation algorithms.
• Addresses multilayer optimization with multiple user-defined constraints.
• Provides specific ordering and synchronization of paths that are signaled across routed network elements.
• Offers a global view of network state for monitoring, management, and proactive planning.
• Features predictable, deterministic network state within a margin of error for demand forecasts.
• Minimizes distributed state and increases efficiency of existing network elements by offloading control-plane processing.
• Creates a foundation for additional centralized network infrastructure services through an API for the network.
• Simplifies operations by enabling SDN programmability control points across disparate network elements.
NorthStar Components
The NorthStar Controller performs three main functions: analyze, optimize, and virtualize. The analyze component of the NorthStar Controller is used to discover the service provider or enterprise network topology. The NorthStar Controller peers with a local network router using IS-IS, OSPF, or, more commonly, a BGP route reflector with the traffic engineering address family enabled. The traffic engineering address family allows BGP peers to exchange more than just network destinations. Information about link administrative groups, maximum bandwidth, available bandwidth, and metrics is communicated and stored by the NorthStar Controller in a traffic engineering database.
After the topology has been learned, the optimize component of the NorthStar Controller uses administrator-defined constraints such as minimum bandwidth, explicit path, or path delay to create the explicit route objects (EROs) that determine the signaling path for all label switched paths in the network.
The virtualize component uses the Path Computation Element Protocol (PCEP) to peer with label edge routers and distribute the EROs necessary to properly signal all label switched paths.
PCEP
Communication between the NorthStar Controller and label edge routers is accomplished using PCEP. The NorthStar Controller maintains a stateful PCEP session with all label edge routers in the network. PCEP is used to communicate and signal information for all label switched paths in the network.
PCEP defines two roles: the path computation element (PCE) and the path computation client (PCC). The PCE role is played by the NorthStar Controller. The PCE is responsible for running algorithms against a centralized traffic engineering database and calculating paths that meet design requirements for all LSPs throughout the network. These paths are communicated as a series of explicit route objects (EROs) to the PCC to be signaled. The actual path computation is offloaded from the forwarding devices. This separation of duties allows the NorthStar Controller to run complex algorithms against the traffic engineering database and calculate paths without affecting forwarding performance on the label switching routers.
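The PCE/PCC division of labor can be modeled as a toy: the PCC asks for a path, the PCE computes an ERO against its central database, and the PCC only stores and signals what it is given. This is a sketch of the roles, not the real PCEP message formats defined in RFC 5440; the TED structure, names, and trivial next-hop "algorithm" are invented for illustration.

```python
# Toy model of the PCEP division of labor: all computation on the PCE
# side, signaling only on the PCC side.

def pce_compute(ted, request):
    """PCE role: compute a path for a request against the central TED.
    Here the TED is precomputed next hops; a real PCE runs CSPF."""
    path, node = [request["src"]], request["src"]
    while node != request["dst"]:
        node = ted[node][request["dst"]]  # next hop toward the destination
        path.append(node)
    return {"lsp": request["lsp"], "ero": path}

class PCC:
    """PCC role: a label edge router that signals whatever ERO it is given."""
    def __init__(self, name):
        self.name, self.signaled = name, {}

    def request_path(self, pce_ted, lsp, dst):
        reply = pce_compute(pce_ted, {"lsp": lsp, "src": self.name, "dst": dst})
        self.signaled[lsp] = reply["ero"]  # RSVP signaling would happen here
        return reply["ero"]

# Illustrative "TED" as precomputed next hops: ted[node][dst] -> next hop
ted = {"PE1": {"PE2": "P1"}, "P1": {"PE2": "PE2"}}
```

A PCC named PE1 requesting a path to PE2 receives the full hop list `["PE1", "P1", "PE2"]` and records it as signaled, without ever computing anything itself.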
Routing Protocols
To make complex path calculations across a multipath network, the NorthStar Controller must learn the characteristics of all the paths between every point A and every point Z. The point A's represent the ingress label switching routers (LSRs) and the point Z's represent the egress LSRs for all the destinations reachable on the network.
The preferred method of communicating this routing information is the BGP routing protocol with the traffic engineering address family enabled. In a typical configuration, the NorthStar Controller is configured to peer with a route reflector in the service provider core network. The route reflector and the NorthStar Controller exchange destination information. The traffic engineering address family also enables BGP to exchange link characteristics such as available bandwidth, administrative groups, maximum bandwidth, and delay. These link characteristics are stored on the NorthStar Controller in a traffic engineering database. The NorthStar Controller uses industry-leading algorithms to compare administrator-defined LSP path requirements against current network conditions. The results of these calculations are a set of EROs for each LSP that are dynamically signaled using PCEP to the label edge routers. The EROs are used by the label edge routers (PCCs) to determine the signaling path for the label switched paths that forward traffic across a service provider backbone.
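One of the link characteristics mentioned above, administrative groups, is commonly encoded as a 32-bit mask of link "colors", and a path computation checks each link against include/exclude constraints. The group names and the simple include-any/exclude rule below are illustrative, loosely following RSVP-TE affinity semantics.

```python
# Administrative groups (link colors) as a 32-bit mask, one bit per
# group; the names and bit assignments are made up for this sketch.
GROUPS = {"gold": 1 << 0, "silver": 1 << 1, "transatlantic": 1 << 2}

def link_admissible(link_mask, include_any=0, exclude=0):
    """A link is usable if it carries none of the excluded colors and,
    when an include set is given, at least one of the included colors."""
    if link_mask & exclude:
        return False
    if include_any and not (link_mask & include_any):
        return False
    return True

# A link colored both gold and transatlantic:
link = GROUPS["gold"] | GROUPS["transatlantic"]
```

An LSP that requires gold links accepts this link; an LSP that excludes transatlantic links rejects it, regardless of its other colors.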
The IS-IS and OSPF protocols can also be used to communicate Layer 3 addressing and traffic engineering information. The multitier nature of these routing protocols requires peering with multiple area border routers in a multilayer environment.
LSP Management
The NorthStar Controller can manage different types of LSPs. In the next few pages, we examine the terminology used to identify LSPs in a NorthStar Controller environment.
environment before the NorthStar Controller arrived. The LSP is defined in the local configuration on the router, and the
router maintains both the operational state and configuration state of the LSP. The PCC sends LSP status information to the
NorthStar Controller so its presence and properties can be included in available bandwidth calculations. The NorthStar
Controller learns these LSPs for the purpose of visualization and comprehensive path computation, but cannot change any
attribute on these LSPs.
When an LSP is externally controlled, the controller manages the following LSP attributes:
• Bandwidth
• Setup and hold priorities
• LSP metric
• ERO
Any configuration changes to these attributes performed from the local router configuration are overridden by the values configured from the NorthStar Controller. Changes made to these attributes from the PCC do not take effect as long as the LSP is externally controlled. Any configuration changes made from the PCC take effect only when the LSP becomes locally or router controlled. When an LSP is externally controlled, any attempt to change the configuration of the LSP from the PCC (except for auto-bandwidth parameters) results in a warning message from the router CLI.
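The precedence rule above can be summarized in a small sketch: while an LSP is externally controlled, controller-supplied values win for the managed attributes, and local values apply again once control returns to the router. The attribute names and dictionary shapes are illustrative, not a real configuration schema.

```python
# Sketch of attribute precedence for an externally controlled LSP.
CONTROLLER_MANAGED = {"bandwidth", "setup_priority", "hold_priority",
                      "metric", "ero"}

def effective_attrs(local_cfg, controller_cfg, externally_controlled):
    """Return the attributes actually in effect on the LSP."""
    attrs = dict(local_cfg)
    if externally_controlled:
        for key, value in controller_cfg.items():
            if key in CONTROLLER_MANAGED:
                attrs[key] = value  # a local change here is overridden
    return attrs

local = {"bandwidth": 10, "metric": 100}   # router's local configuration
ctrl = {"bandwidth": 50, "metric": 20}     # values pushed by the controller
```

While delegated, the LSP runs with the controller's 50 Mbps and metric 20; once it becomes router controlled again, the local 10 Mbps and metric 100 take effect.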
can update the LSP attributes. The LSP does not appear in the local PCC configuration file and no commit is required for the
externally controlled LSP to be signaled. PCEP is used to communicate the EROs for the LSP to the PCC edge routers and to
schedule stop time, the LSP is re-signaled and taken down. This feature accommodates dynamic customer bandwidth needs; for example, LSPs can be re-signaled to a lower bandwidth during weekends.
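The calendar idea above can be sketched as a simple time-based lookup: given a schedule of windows, pick the bandwidth the LSP should be re-signaled with at a given time. The window format, the weekday/weekend split, and the bandwidth values are all invented for illustration and do not reflect NorthStar's actual scheduling interface.

```python
import datetime

def scheduled_bandwidth(schedule, when):
    """schedule: list of (start_hour, end_hour, weekdays, bandwidth_mbps);
    weekdays uses Monday=0 .. Sunday=6. Returns 0 outside every window."""
    for start, end, days, bw in schedule:
        if when.weekday() in days and start <= when.hour < end:
            return bw
    return 0  # outside every window: signal the default/minimum

# Illustrative calendar: busy on weekday business hours, quiet on weekends.
business_hours = [
    (8, 18, {0, 1, 2, 3, 4}, 500),   # Mon-Fri daytime: high bandwidth
    (0, 24, {5, 6}, 100),            # weekends: re-signal lower
]
```

A Monday mid-morning timestamp selects 500 Mbps, a Saturday selects 100 Mbps, and a weekday evening falls outside both windows.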
to shift traffic around the offline component. This works well for IP traffic, but it does not affect LSPs crossing the router about to go offline unless their paths are re-optimized very aggressively based on metric changes. When the router is brought down, the LSPs still traversing it switch to their protection paths. In this mode of operation, a small amount of traffic loss is inevitable. The NorthStar Controller can shift the traffic away from the router or link about to be taken offline. A maintenance-coordination application can interact with an active stateful PCE to identify the affected LSPs and re-compute and re-signal them around the node about to be taken offline. The benefit of an active PCE in this case is the ability not only to evaluate all impacted LSPs concurrently, but also to move them as needed and determine when all of them have successfully switched.
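The maintenance workflow just described reduces to three steps: find every LSP whose path touches the node, recompute each path avoiding that node, and confirm all of them moved. The sketch below captures only that skeleton; `toy_cspf`, the tiny topology, and the LSP table stand in for the controller's real algorithms and state.

```python
# Sketch of the maintenance-mode workflow on invented data structures.

def lsps_through(lsps, node):
    """All LSPs whose current path crosses the node under maintenance."""
    return [name for name, path in lsps.items() if node in path]

def drain_node(lsps, node, compute_path):
    """Re-route every affected LSP around `node`; return the moved set."""
    moved = {}
    for name in lsps_through(lsps, node):
        new_path = compute_path(lsps[name][0], lsps[name][-1], avoid={node})
        if new_path is None:
            raise RuntimeError(f"{name}: no path avoiding {node}")
        lsps[name] = moved[name] = new_path
    return moved

# Stand-in for CSPF on a tiny topology with two candidate paths A..Z.
def toy_cspf(src, dst, avoid=frozenset()):
    for path in (["A", "B", "Z"], ["A", "C", "Z"]):
        if path[0] == src and path[-1] == dst and not avoid & set(path):
            return path
    return None

lsps = {"lsp-1": ["A", "B", "Z"], "lsp-2": ["A", "C", "Z"]}
```

Draining node B moves only lsp-1 (onto the A-C-Z path) and reports exactly which LSPs switched, mirroring the "evaluate all impacted LSPs, move them, confirm" sequence in the text.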
display how LSPs would be re-routed around the failed links or nodes.
WANDL IP/MPLSView
Are your LSPs using the network in the most efficient manner possible? What changes could you make to your network design to get the most from your network investment? Can you currently model and simulate these changes, analyze the effects, and create configurations that could be applied? WANDL IP/MPLSView performs offline network analysis to answer these questions.
WANDL IP/MPLSView addresses major areas of network planning including traffic and routing analysis, capacity planning, resiliency analysis/disaster planning, path design, and optimization. From a set of network configuration files and other optional data, an intelligent multivendor analyzer within IP/MPLSView constructs an accurate network topology, fully aware of multiple protocols, layers, autonomous systems (AS), areas, VPNs, and so on. Alternatively, the user can manually construct a network to any requirements.
network to perform outage simulation, capacity planning, path design, and equipment inventory. We will examine these use
cases in more detail.
information to create an offline model of the existing network. The offline model is used by IP/MPLSView to simulate network outages, perform equipment inventory, and verify compliance without affecting the production network. The more information that can be imported into IP/MPLSView, the more accurate and detailed the network model will be. IP/MPLSView supports output from multiple vendors.
I add a new 10 Mb LSP to the network, what route would it take? IP/MPLSView can answer these questions.
IP/MPLSView can simulate the failure of nodes and links. Individual nodes or links can be failed, and multiple links and nodes can be failed simultaneously. Once a failure is simulated, IP/MPLSView can generate reports demonstrating the new paths that would be signaled for the LSPs affected by the simulated failure.
IP/MPLSView can also simulate the addition of LSP demands on the network. Report output displays the available paths across the service provider network that meet the LSP constraints defined by the administrator.
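The first step of such a failure report, identifying which LSPs a set of simultaneous node and link failures would affect, can be sketched as a simple path scan. The LSP table, hop names, and report format below are invented for illustration.

```python
# Sketch of an offline failure-simulation report: fail nodes and/or
# links at once and list the LSPs whose current path is hit.

def simulate_failures(lsps, failed_nodes=(), failed_links=()):
    """lsps: {name: [hop, ...]}; a link is an unordered pair of hops."""
    failed_links = {frozenset(link) for link in failed_links}
    report = []
    for name, path in sorted(lsps.items()):
        hit_node = any(n in path for n in failed_nodes)
        hit_link = any(frozenset(path[i:i + 2]) in failed_links
                       for i in range(len(path) - 1))
        if hit_node or hit_link:
            report.append(name)
    return report

lsps = {
    "lsp-east": ["PE1", "P1", "PE2"],
    "lsp-west": ["PE1", "P2", "PE2"],
}
```

Failing node P1 affects only lsp-east; failing the PE1-P2 link affects only lsp-west; combining failures lists both, matching the "multiple links and nodes failed simultaneously" scenario.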
provides detailed visibility into LSP and IP traffic flows, allowing network designers to verify that all resources are being used to their maximum capacity.
to calculate optimum paths through the network and display the results as an optimized MPLS core. Networks can be designed to efficiently utilize the available paths and hardware, possibly delaying the need for expensive equipment upgrades and fiber installation, and to fully optimize the existing network environment.
System Requirements
IP/MPLSView is a client-server application: the user accesses the program, running on a Solaris or Linux server, via a Java client on a Windows, Solaris, or Linux platform.
• Cisco Systems
• ECI Telecom LTD
• Force10
• Foundry
• Huawei
• IBM
• Riverstone
• Sycamore
• Tellabs
Contrail
The slide highlights the topic we discuss next.
or an over-the-top (OTT) service provider, your network most likely has a mix of different environments that you want to connect. You might have a traditional network that is VLAN based and has legacy and VMware based servers. You might also have one or more next-generation CLOS-based data centers (DCs) or points of presence (POPs) that contain a mixture of VMs and containers, bare metal servers and storage, physical dedicated appliances that provide load balancing and WAN optimization, and possibly virtualized versions of the same appliances. Your network can also use resources on public clouds such as Amazon Web Services (AWS), Microsoft Azure, or Google Compute Engine (GCE). And finally, there are branch offices or customer sites that need to be tied into the network.
All of these networks need to interconnect. Given the underlay networks described in the previous paragraph and shown in the slide, there are six typical network interconnects that need to be taken into consideration. First is the legacy interconnect, which looks at how to connect the VLANs of the traditional data center to the CLOS-based data center. Second is the VM+BMS interconnect, which looks at how to connect virtual machines to bare metal servers. Third is the multi-DC distributed cloud, which looks at how to create a CLOS-based fabric that spans multiple data centers. Fourth is Phy+Virtual service chaining, which connects service chains that include both physical and virtual devices. Fifth is the hybrid cloud, which looks at how you connect your private cloud to a public cloud. And sixth is Branch, which looks at how you connect your branches and customer sites to your main network.
Contrail Cloud is the management and orchestration piece that sits on top and provides the means to connect these heterogeneous networks together. Contrail and OpenStack also give network administrators the ability to automate the creation of virtual networks. With the ability to create virtual networks comes the ability to require network data to flow through a designated string of services, creating what is often referred to as a service chain. One of the strengths of Contrail is its ability to connect virtual networks with physical networks and to create service chains that include both physical and virtual devices.
The next several pages will describe Contrail and its functionality in more detail.
when the network application programming interface (API) is called, a Layer 2 infrastructure is built. What Contrail does is
intercept that call to build the network. However, Contrail builds its own kind of network because, at the abstracted layer in
the orchestration system, the request is defined as asking for a network without describing how that network should be built.
It is up to the underlying systems to decide how that network should be built. For the most part, the orchestration system
does not care how a network is built, just that it is built.
Overlay Networking
Virtual networks are implemented using two networks: a physical underlay network and a virtual overlay network. This overlay networking technique has been widely deployed in the wireless LAN industry for more than a decade, but its application to data center networks is relatively new. It is being standardized in various forums such as the Internet Engineering Task Force (IETF) through the Network Virtualization Overlays (NVO3) working group, and it has been implemented in open source and commercial network virtualization products from a variety of vendors.
The role of the physical underlay network is to provide an IP fabric; its responsibility is to provide unicast IP connectivity from any physical device (server, storage device, router, or switch) to any other physical device. An ideal underlay network provides uniform low-latency, non-blocking, high-bandwidth connectivity from any point in the network to any other point in the network.
The vRouters running in the hypervisors of the virtualized servers create a virtual overlay network on top of the physical underlay network using a mesh of dynamic tunnels amongst themselves. In the case of Contrail, these overlay tunnels can be MPLS over generic routing encapsulation (GRE), MPLS over User Datagram Protocol (UDP), or VXLAN tunnels.
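Of the three encapsulations mentioned, VXLAN has the simplest header to show concretely: RFC 7348 defines an 8-byte header carrying a flags byte and a 24-bit virtual network identifier (VNI) that keeps each tenant's traffic separate inside the shared underlay. The sketch below builds just that header; a full VXLAN packet would wrap it in outer UDP/IP/Ethernet headers.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348: a flags byte with
    the I bit (0x08) set, 24 reserved bits, a 24-bit VNI, and 8 more
    reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI is a 24-bit field")
    # Pack as two network-order 32-bit words: flags in the top byte of
    # the first word, the VNI in the top three bytes of the second.
    return struct.pack("!II", 0x08 << 24, vni << 8)
```

For VNI 100 this yields `08 00 00 00 00 00 64 00`: the I flag set, the VNI in bytes 4-6, and every reserved bit zero.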
The underlay physical routers and switches do not contain any per-tenant state: they do not contain any media access control (MAC) addresses, IP addresses, or policies for virtual machines. The forwarding tables of the underlay physical routers and switches contain only the IP prefixes or MAC addresses of the physical servers. Gateway routers or switches that connect a virtual network to a physical network are an exception; they do need to contain tenant MAC or IP addresses.
The vRouters, on the other hand, do contain per-tenant state. They contain a separate forwarding table (a routing instance) per virtual network. That forwarding table contains the IP prefixes (in the case of Layer 3 overlays) or the MAC addresses (in the case of Layer 2 overlays) of the virtual machines. No single vRouter needs to contain all IP prefixes or all MAC addresses for all virtual machines in the entire data center. A given vRouter needs to contain only those routing instances that are locally present on the server, that is, those that have at least one virtual machine on the server.
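The per-tenant state just described can be sketched as one forwarding table per locally present virtual network, with lookups confined to the tenant's own table. The class, network names, and next-hop strings below are illustrative, not the actual vRouter data structures.

```python
# Sketch of per-tenant routing instances on a vRouter.

class VRouter:
    def __init__(self):
        self.vrfs = {}  # virtual-network name -> {destination: next hop}

    def add_instance(self, vnet, routes):
        """Create a routing instance only for a locally present network."""
        self.vrfs[vnet] = dict(routes)

    def lookup(self, vnet, dest):
        # Tenants are isolated: a miss in this VRF never falls through
        # to another tenant's table.
        return self.vrfs[vnet].get(dest)

vr = VRouter()
vr.add_instance("blue", {"10.0.1.5": "tunnel-to-host2"})
vr.add_instance("red", {"10.0.1.5": "tap-local-vm7"})  # overlapping IPs are fine
```

Note that both tenants use 10.0.1.5 without conflict; each lookup resolves inside its own routing instance, which is exactly why overlapping address spaces work in a multi-tenant overlay.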
Contrail Architecture
Contrail consists of a number of components. Here we list those components and give an overview of what they do:
• OpenStack: OpenStack is a number of small projects interacting with each other to make up the cloud orchestration system.
– Glance (image storage): Stores virtual hard disk images for deploying VM instances, such as QCOW2, IMG, and VMware VMDK images.
– Keystone (identity): Manages user roles and their mappings to various services.
– Cinder (block storage): Stores data similarly to a traditional disk drive; used for high speed data transactions.
– Swift (object storage): Horizontally scalable storage of individual objects.
– Ceilometer (not shown on the slide): Provides telemetry services and keeps a count of each user's system usage.
• Configuration Nodes: You can think of the Configuration Node as a large schema transformer. In other words, it acts much like a compiler does when programming. Typically, a compiler takes source code and transforms it into something that is machine executable. The Configuration Node does much the same thing. It takes an abstract data model you have designed using a language that is common and easy to understand, and transforms (compiles) that information into something the underlying infrastructure can understand. The Configuration Node takes information from the OpenStack GUI, the Contrail GUI, or the northbound API and transforms it into something the southbound Contrail control node can understand.
• Control Node: The Control Node subscribes to information that is provided by the Configuration Nodes. The Control Node is the SDN controller, and it also controls the flow of data between compute nodes and physical devices in the overlay network. The Control Node takes the information received from the Configuration Nodes and transcodes it into configuration information that it passes on to the individual compute nodes.
• Compute Node and vRouter: A Compute Node, which runs the vRouter, is typically a server running a hypervisor. The vRouter is a kernel module that is inserted into the Linux bridge or OVS module of the hypervisor kernel. Each VM is mapped into one or more virtual routing and forwarding (VRF) instances on the vRouter, and the vRouter Agent populates the VRFs using information learned from the Control Node. The vRouter also enforces security policies and performs network address translation (NAT) as needed. Even though it is called a vRouter and performs routing duties, the vRouter is not something you log into and configure manually; it is controlled and programmed by the Control Node.
• Analytics Engine: The analytics nodes are responsible for the collection of system state information, usage statistics, and debug information from all of the software modules across all of the nodes of the system. The analytics nodes store the data gathered from across the system in a database that is based on the Apache Cassandra open source distributed database management system. The database is queried by means of an SQL-like language and REST APIs. The Sandesh protocol, which is XML over TCP, is used by all the nodes to deposit data in the NoSQL database. All vRouters dump their data into the database. Every packet, every byte, and every flow is collected, so there is 100% visibility of all traffic going through the vRouters. At any point, it is possible to view all live flows traversing a vRouter. Furthermore, once the flow data has been dumped into the Cassandra database, you can use the aforementioned query interfaces to mine the data for historical or other analysis.
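The "schema transformer" role of the Configuration Node can be sketched as a tiny compile step: an abstract, human-friendly virtual-network request goes in, and low-level per-instance records come out for the control node to distribute. Both schemas below, and the route-target numbering, are invented purely for illustration.

```python
# Sketch of the Configuration Node's compile step on made-up schemas.

def transform(abstract_request):
    """Compile an abstract virtual-network request into the per-subnet
    records a control node could distribute to vRouters."""
    out = []
    for i, subnet in enumerate(abstract_request["subnets"]):
        out.append({
            "instance": f"{abstract_request['name']}-vrf",
            "subnet": subnet,
            # Made-up ASN/target scheme, standing in for real identifiers.
            "route_target": f"target:64512:{1000 + i}",
        })
    return out

request = {"name": "blue", "subnets": ["10.0.1.0/24", "10.0.2.0/24"]}
```

The orchestrator only said "give tenant blue these two subnets"; the transformer turned that into concrete routing-instance records, which is the division of labor the text describes.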
Multi-Tenancy in Contrail
A virtualized multi-tenant data center allows multiple tenants to be hosted in a data center. Multi-tenancy means that tenants share the same physical infrastructure (servers, network, storage) but are logically separated from each other. The meaning of a tenant depends on the environment:
• In a service provider data center providing public cloud services, it means a customer or applications belonging to a customer.
• In an enterprise data center implementing a private cloud, it could mean a department or applications belonging to a department.
The number of tenants is important because some architectural approaches (specifically native end-to-end VLANs) have a limit of 4096 tenants per data center.
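The 4096-tenant ceiling quoted above comes directly from the 12-bit VLAN ID field in IEEE 802.1Q, and it is one reason overlays scale further: VXLAN's VNI, for example, is 24 bits wide. The arithmetic is a one-liner:

```python
# Tenant-ID space per technology: 2**bits distinct identifiers.
VLAN_ID_BITS = 12    # 802.1Q VLAN ID field
VXLAN_VNI_BITS = 24  # RFC 7348 VNI field

def max_ids(bits):
    return 2 ** bits

vlan_limit = max_ids(VLAN_ID_BITS)   # 4096
vni_limit = max_ids(VXLAN_VNI_BITS)  # 16,777,216
```

(In practice a few VLAN IDs such as 0 and 4095 are reserved, so the usable count is slightly lower than 4096.)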
each tenant, which allows other physical devices within that tenant's network to remain isolated. The control nodes provide signaling information to the gateway using BGP.
by Service Nodes that can be located in various places as shown in the example. In Contrail terms, this is called service
chaining.
Contrail Versions
As we have already seen, OpenContrail is an Apache 2.0-licensed open source product that can be used for free by anyone. Juniper Networks also provides commercial versions of the OpenContrail system named Contrail Networking and Contrail Cloud Platform. The difference between the two is that Contrail Cloud Platform includes commercial support for the entire open source stack, including the Contrail system and the other open source components such as OpenStack.
nodes, and two compute nodes. Each node should have the minimum recommended hardware requirements that are listed
on the slide.
This information is very important to know when designing a Contrail deployment. Knowing the hardware requirements for
each node allows you to properly provision your Contrail deployment. Note that the requirements listed on the slide are the
same if you are running each node on a different physical server or running each node on a virtualized server.
technician to add new hardware and reconfigure existing hardware. Juniper's Contrail Cloud CPE solution uses virtual network functions (VNFs) to reduce or eliminate the need for dispatching service technicians.
Using VNFs allows enterprises and service providers to quickly expand their markets without geographic limitation. VNFs also allow for shorter delivery times and more customizability in service offerings.
The Contrail Cloud CPE solution can be implemented either centrally within a POP or distributed at the customer's location. The next two slides explain how it works.
Cloud CPE—Centralized
The centralized Cloud CPE solution adds a service orchestration component on top of the Contrail Cloud solution. The Contrail Service Orchestration (CSO) component includes service design and administration tools. It also includes a web portal that allows customers to purchase additional services. Once new services are selected and purchased, the CSO sends the information to Contrail and OpenStack through REST APIs. The Contrail system then creates the requested virtual service. A service can also be canceled as desired by the customer without a technician ever having to visit the customer premises.
This helps the service provider, because they do not need to send out trucks or technicians. It helps the customer, because they can turn up a new service within just a few minutes, not weeks or months. This same scenario can be applied to large organizations with hundreds of branch offices. A new or updated service can be implemented centrally and applied to all the branch offices.
Cloud CPE—Distributed
The distributed Cloud CPE solution is similar to the centralized solution except that the VNFs are located on the customer's premises instead of at the POP or data center. Some customers require that the VNFs be located on premises for legal reasons.
As long as there is IP connectivity to the distributed CPE, the VNFs are provisioned by the Contrail Service Orchestrator in the same way that they are in the centralized Cloud CPE.
The CPE device in the distributed model needs more storage, RAM, and processing power than a normal CPE device in order to run the VNFs.
Juniper recently released the NFX250 Network Services Platform to host VNFs and support the distributed Cloud CPE solution. The NFX250 has a Xeon processor with up to 32 GB of RAM and up to 512 GB of flash storage.
For additional information about the NFX250, see Juniper's website: http://www.juniper.net/us/en/products-services/routing/nfx250/
SD-WAN
The slide highlights the topic we discuss next.
SD-WAN
SD-WAN is a relatively new technology and is still evolving. A simple explanation of SD-WAN is that it takes all the great features of the distributed Cloud CPE and combines them with the centralized WAN visualization, control, and optimization of Juniper's NorthStar product. In addition to working on an MPLS network, it has the ability to run on top of many different WAN transports, including LTE networks and broadband networks.
As is typical for any SDN solution, SD-WAN is centrally managed; but, unlike many WAN solutions, it is managed by the customer and not the service provider. Many SD-WAN solutions can also centrally deploy VNFs to the remote sites.
And finally, SD-WAN is meant to be easy to deploy at remote locations. Some vendors in the market advertise plug-and-play or zero touch provisioning for the devices.
SD-WAN Benefits
We have discussed what SD-WAN is; now let's take a closer look at the numerous benefits SD-WAN offers:
• Centralized Control: One of the key tenets of SDN is the centralization of the control plane. Instead of each individual device computing forwarding paths on its own, a centralized SDN controller uses network-wide knowledge to compute optimum paths. The SDN controller communicates the necessary path configuration information to the forwarding devices. All path configuration constraints and requirements can be defined using a common management interface on the SDN controller. Path configuration no longer requires configuring each forwarding device manually. Forwarding devices maintain a connection to the SDN controller. This connection is used by the SDN controller to communicate new path requirements and by the forwarding devices to report
be provisioned dynamically at time of client sign-up or based on increased customer demand. SDN solutions have the ability to display network path link utilization and optimize the network path configuration to use existing hardware more efficiently.
• Competitive Advantage: Time to market is a competitive advantage. A network service provider that can offer services quicker than its competition has a huge competitive advantage. Service setup can be done in minutes instead of weeks using traditional provisioning methods. Customers can access a portal and request new services from the network provider, and instead of days or weeks the services are provisioned immediately.
This is a continuation of the list of the numerous benefits that SD-WAN offers:
• Improved Quality: Centralizing path computation and configuration on an SDN controller reduces errors and increases uptime. Distribution of a centralized path configuration promotes consistency. SDN solutions allow paths to be re-routed for maintenance or failure reasons.
• Automation: At the core of SDN is automation. Automation provides the centralized control, visibility, scalability, and optimization mentioned on previous pages. Everyone has done some form of automation in their networking career; simple tasks such as scheduling a backup of a device's configuration or receiving SNMP traps are examples of automation. SDN provides automation by abstracting device-level commands. SDN removes the necessity of manually configuring individual network components. Network paths defined using the SDN controller management interface are translated into device-level commands and distributed to the forwarding devices.
• Visibility: Communicating all path information to the SDN controller provides a network-wide view of paths throughout the network. One management interface can provide all the necessary network path information. Having one place to go is a good thing as networks have grown.
• Scalability: Removing the need to manually configure paths on each forwarding device is a huge benefit of SDN. All necessary path configuration requirements can be defined on a centralized controller and immediately distributed to thousands of forwarding devices. Scalable, consistent configuration is available to all devices and can be re-used as new devices and path requirements arise.
• Optimization: SDN allows a network to be dynamic. The conditions and requirements of forwarding paths can change over time. A forwarding path that previously had plenty of bandwidth available and very low latency can become congested and slow as new network demands come into play. Some network links are congested while others have excess bandwidth available. The centralized picture that an SDN controller provides can notify administrators of path conditions and provide optimization of traffic flows. New bandwidth, latency, hop count, and other requirements can be signaled to forwarding devices to optimize network paths. Traffic can be optimized and better distributed across all available forwarding paths.
We Discussed:
• How virtualization and SDN can be leveraged in the WAN; and
• Various SDN products and how they are used in the WAN.
Review Questions
1.
2.
3.
4.
Answers to Review Questions
1. SDN requires the separation of the control plane from the data plane.
2. Path Computation Element Protocol (PCEP) is a TCP-based protocol defined by the IETF PCE Working Group; it defines a set of messages and objects used to manage PCEP sessions and to request and send paths for multi-domain traffic engineered LSPs (TE LSPs).
3. The centralized Cloud CPE solution houses all of the VNFs in one central location. The distributed Cloud CPE solution has VNFs located on the customer premises.
4. SD-WAN lowers costs by allowing non-time-sensitive traffic to flow over inexpensive broadband links while routing time-sensitive traffic over provider MPLS links.
Chapter 12: WAN Device Portfolio
We Will Discuss:
• Juniper's product portfolio and the basics of how its products relate to WAN solutions.
The trend that is facilitating the refresh of the WAN is the overall traffic growth we are seeing across multiple areas. This trend is irrespective of whether you look at the total number of websites, the amount of video traffic, or the amount of non-video traffic. There has been exponential growth that began around 2008 and is trending up and to the right. The overall traffic growth has been extraordinary.
Very closely associated with this growth in traffic is the total number of connections that we see across the Internet. What used to be just two devices “talking” to each other has now proliferated into many millions of devices that need to connect across a wide area.
Growth in demand is good: it drives business forward, and it makes new things possible. But innovation and the associated economics are not keeping up with the growth in demand and the requirements for reduced spending in both service provider and enterprise environments. As you design your network, these are important considerations to keep in mind.
Platforms Overview
Network devices come in all shapes and sizes. Each device or family of devices has a particular set of capabilities that it was designed to accomplish efficiently. Juniper’s devices incorporate industry-leading custom silicon that is designed to meet the many network domain requirements. Different services and applications require different approaches and impose different demands. For example, leased line services have fairly high bandwidth density, while the service complexity is not extreme. At the other end of the scale are, for example, the multiservice edge, the consumer edge, or secured routers, which have lower bandwidth density but much higher service complexity. Juniper weighs these requirements to create purpose-built ASIC designs.
Proper product placement is very important and can mean the difference between a successful and a failed design proposal.
Silicon Requirements
This slide highlights some of the key network domains and offers a visual representation of key differences in silicon capabilities and design. Each domain has unique requirements, which means that the devices positioned there should be designed to match those requirements. Note that this illustration is only meant as a generalized positioning for the products shown and is not exact or definitive by any means. As usual with network design and product placement, there is always some flexibility, and in all cases placement is dictated by customer needs. While this may be a reasonably good starting point, you should take a closer look at the product’s scaling details, hardware options, and software feature set to ensure you are recommending the right product and solution for the customer’s environment.
software for your design. Once the Low-Level Design is complete, a list of hardware and software requirements can be generated and matched against feature lists in a specific version of Junos using the Pathfinder tool. This leads to a target version. A product issue impact review (PIIR) can be requested and purchased from Juniper’s Advanced Services; it checks this information against an internal database of known issues and further refines the recommendation. For example, a Pathfinder search might lead to the conclusion that version X is required, but a PIIR might find that there was a particular bug in feature Y that was resolved in version X release 4.
Once a specific release is selected, Design Validation Testing can be done against that release code to ensure that the software operates as required.
MX Series Portfolio
The slide highlights the topic we discuss next.
velocity, and also creates layers of inefficiencies. Rather, the best solutions are those that leverage service convergence at greater scale for improved economics and experience.
The Universal Edge capabilities of the MX Series portfolio bring service providers closer to achieving true convergence and unification across their service delivery networks. This is critical for efficient and fast rollout of next-generation services. Service convergence on MX Series 3D platforms with the open Session and Resource Control (SRC) controller and the content-optimized Media Flow Controller fundamentally changes provider economics. The Universal Edge helps service convergence and promotes the use of the same platform in different networks to leverage economies of scale.
As service providers move toward network convergence, the Universal Edge will function as a unified edge to leverage economies of scale and scope. By leveraging the MX Series and the Junos OS, the Universal Edge transforms the end-user experience and delivers solid business economics with unparalleled scale, unlimited application availability, and revolutionary economics.
MX5, MX10, MX40, and MX80; an aggregation-optimized router, the MX104; modular edge routers including the MX240, MX480, and MX960; and ultra-high-capacity edge and converged edge/core platforms, the MX2010 and MX2020.
Modular Port Concentrators (MPCs) leverage the Junos Trio chipset to deliver high-density 1-, 10-, 40-, and 100-Gigabit Ethernet, as well as ATM/SONET and inline services across the entire MX Series portfolio. These advanced capabilities allow customers to flexibly mix and match interfaces to create service-specific and “pay-as-you-grow” configurations. The MPC houses the Packet Forwarding Engines (PFEs) to deliver comprehensive Layer 3 routing (IPv4 and IPv6), MPLS, Layer 2
illustration is only meant as a generalized positioning for the products shown and is not exact or definitive by any means. As usual with network design and product placement, there is always some flexibility, and in all cases placement is dictated by customer needs. While this may be a reasonably good starting point, you should take a closer look at the product’s scaling details, hardware options, and software feature set to ensure you are recommending the right product and solution for the customer’s environment.
MX Offerings: Part 1
This slide provides a snapshot of some scaling numbers for the mid-range MX Series chassis. In 2008, Juniper started shipping the MX80 chassis, which provides up to 80 Gbps of bidirectional capacity and offers two modular slots plus four built-in 10-Gigabit Ethernet ports. This solution has been very successful in the enterprise edge market, and Juniper recognized the demand for lower-bandwidth options for the enterprise as well as managed services. Therefore, Juniper developed the MX5, MX10, and MX40 chassis based on the same MX80 hardware architecture. Being based on the same hardware allows an upgrade path from the MX5 all the way to the MX80 by means of licenses. In addition, these mid-range chassis have been designed from the beginning to support high-performing services, and a service Modular Interface Card (MIC) slot is available in the back of the chassis that supports the Multiservices MIC.
The MX104 router is a highly redundant, full-featured, mid-range modular MX Series platform built for space- and power-constrained service provider and enterprise facilities. It is designed to provide aggregation of enterprise, mobile, business, and residential access services, as well as to deliver edge services for metro providers. The MX104 offers 80 Gbps of capacity with four fixed 10-Gigabit Ethernet ports and four MIC slots for flexible network connectivity and virtualized network services. It is optimized for central office deployment, supports a redundant control plane for high availability, and its chassis is environmentally hardened for deployment in outside cabinets and remote terminals.
MX Offerings: Part 2
This slide provides a snapshot of some scaling numbers for the high-end MX Series chassis. The MX240, MX480, and MX960 are modular edge routers within the MX Series product family. The MX240 offers high interface density and performance in a space-efficient package that is practical for enterprise WAN, data center, and campus deployments, as well as several service provider applications in small and medium points of presence (POPs). The MX480 offers a dense, highly redundant platform primarily targeted at medium to large enterprise campuses and data centers, and at service provider edge applications in medium and large POPs. The MX960 is a high-density, high-capacity platform designed for the service provider edge and data center cores.
The MX2000 line of routers is a game-changing solution that offers investment-protecting capacity, density, and scale; increased operational excellence; and service agility to help network operators transform the economics of networking. Consisting of the MX2010 and the MX2020, the MX2000 line offers the performance and reliability at scale that enables cable operators, service providers, content providers, and cloud operators to confidently and efficiently build the best network across residential, mobile, and cloud hosting markets. The MX2010 and MX2020 chassis provide redundancy and resiliency: all major hardware components, including the power system, the cooling system, the Control Board, and the switch fabrics, are fully redundant.
support the ever-increasing traffic flows across the network. SDN and virtualization allow you to optimize that equipment, but they do not replace the requirement for physical infrastructure—you need both, and you will for as long as you build networks.
Virtual MX
Virtual MX (vMX) is a feature-rich, virtualized MX Series platform that runs on x86-based servers to support a broad set of cloud, cable, mobile, and enterprise applications. In support of emerging Network Functions Virtualization (NFV) initiatives, the vMX delivers a virtualized router solution built on over 15 years of Juniper routing investment and experience. The vMX control plane is powered by the Junos OS, the same operating system that powers the entire MX Series portfolio, while the forwarding plane is powered by vTrio, programmable Junos Trio chipset microcode that is optimized for execution in x86 environments. With the Junos OS and vTrio, the vMX offers advanced IP/MPLS routing and switching support that ensures the flexible and highly efficient delivery of the widest variety of applications and services.
vMX eliminates the cost, complexity, and delay associated with qualifying, maintaining, and sparing physical routing elements, enabling rapid service deployment and scale-out of services—critical success factors when expanding into niche markets and new geographies that are too risky to attempt with traditional network elements. These same attributes help overcome persistent issues related to equipment acquisition for lab trials and release certification.
vMX increases service agility by enabling users to quickly implement and scale services by spinning up new routing instances on demand, and by supporting nondisruptive service introductions in parallel with current services. This approach eliminates the risk, complexity, and delay associated with reconfiguring and requalifying the current infrastructure for new services. Furthermore, vMX has a granular licensing model that accommodates uncertain forecasts, enabling users to purchase only the amount of capacity they need, reducing the risk of stranded capital.
highest performance and with the capacity to grow with customers’ business or traffic.
NGFW services are available across the full line of SRX Series Services Gateways and can perform full packet inspection, applying security policies based on Layer 7 information. As a result, you can create security policies based on the applications running across your network, the user receiving or sending network traffic, or the content traveling across your network—a strategy that protects your environment against threats, manages how network bandwidth is allocated, and controls who has access to what resources.
SRX Series Services Gateways come in a broad range of models, from all-in-one security and networking appliances to highly scalable, high-performance chassis solutions. All solutions can be centrally managed using Junos Space Security Director, and other security services are easily added to existing SRX Series platforms for a cost-effective solution.
environments. Not all devices can support security intelligence, as noted on the slide. Note that this illustration is only meant as a generalized positioning for the products shown and is not exact or definitive by any means. As usual with network design and product placement, there is always some flexibility and, in all cases, placement is dictated by customer needs. While this might be a reasonably good starting point, you should take a closer look at the product’s scaling details, hardware options, and software feature set to ensure you are recommending the right product and solution for the customer’s environment.
• vSRX: Delivers a complete and integrated virtual security solution that includes core firewall, robust networking, advanced security services at Layers 4-7, and automated lifecycle management capabilities.
• cSRX: Delivers container-based SRX firewall capabilities. Scaling numbers are not yet available for this platform.
• SRX300: 1-Gbps firewall with 250-Mbps IPsec VPN capabilities in a 1 RU form factor. It consolidates security, routing, switching, and WAN connectivity into a fanless desktop device ideal for retail-type offices with 50 users.
• SRX320: 1-Gbps firewall with 250-Mbps IPsec VPN capabilities in a 1 RU form factor. It consolidates security, routing, switching, and WAN connectivity into a desktop device ideal for distributed enterprises with 50 users.
• SRX340: 3-Gbps firewall with 500-Mbps IPsec VPN capabilities in a 1 RU form factor. Delivering consolidated security, routing, switching, and WAN connectivity in a 1 RU form factor, this device meets the needs of
optimized for securing and connecting midsized or large branch locations that are geographically dispersed.
• SRX1500: 10-Gbps firewall with up to 2-Gbps IPsec VPN capabilities in a 1 RU form factor. This Services Gateway is optimized for securing the enterprise data center perimeter or distributed enterprise campus
• SRX3400: Delivers market-leading performance, scalability, and service integration in a mid-sized form factor. It supports up to 30 Gbps firewall, 8 Gbps firewall and IPS, or 8 Gbps of IPsec VPN, and up to 175,000 new connections per second. It is ideally suited for medium to large enterprise, public sector, and service provider networks.
• SRX3600: Delivers market-leading performance, scalability, and service integration in a mid-sized form factor. It supports up to 55 Gbps firewall, 15 Gbps firewall and IPS, or 15 Gbps of IPsec VPN, plus up to 175,000 new connections per second. It is ideally suited for securing medium to large-sized enterprise data centers, hosted
offers 10GbE, 40GbE, and 100GbE connectivity options for maximum flexibility. It is ideal for securing large enterprise edges and data centers, service provider infrastructures, and next-generation services.
• SRX5600: Delivers firewall performance of up to 960 Gbps, advanced anti-threat capabilities, and an unprecedented 100 million concurrent user sessions on a highly reliable system with six-nines availability. It is ideal for securing large enterprise data centers, service provider infrastructures, and next-generation services and applications, as well as enforcing unique per-zone security policies.
• SRX5800: Delivers up to 2 Tbps firewall, six-nines carrier-grade reliability, more than 100 Gbps of intrusion prevention system (IPS) throughput, and an industry record-breaking 100 million concurrent user sessions. It is ideal for securing large enterprise data centers, hosted or collocated data centers, and service provider infrastructures.
in SRX Series security policies, high threat data capacity by supporting over a million custom feed entries, and effective
protection by enforcing only the most relevant intelligence, which ensures rapid enforcement while reducing false positives.
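At feed sizes like those described above (over a million entries), the lookup data structure matters as much as the per-entry logic. As a simplified illustration of the idea (not how SRX Series devices implement it; the addresses are documentation-range placeholders), a hash set gives constant-time membership tests against a feed:

```python
# Hypothetical feed entries; a real deployment pulls these from a
# threat-intelligence service rather than hard-coding them.
feed = {"198.51.100.7", "203.0.113.42", "192.0.2.99"}

def action_for(src_ip, blocklist):
    """Return a policy action based on feed membership (O(1) average case)."""
    return "deny" if src_ip in blocklist else "permit"

print(action_for("203.0.113.42", feed))   # -> deny
print(action_for("198.51.100.8", feed))   # -> permit
```

The point of the sketch: membership lookup cost stays flat as the feed grows, which is what makes enforcement at line rate plausible against million-entry feeds.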
vSRX
vSRX was introduced in January 2014. At a very high level, vSRX is a physical SRX Series device in virtual machine (VM) format. You can think of it, for example, as taking an SRX550 or SRX240 from the SRX Series product line and stripping away the sheet metal, power cable, and all of the physical elements of the device; what remains is vSRX. You get all the flexibility associated with running the product in a VM—put it in the cloud or in various other infrastructures and have the VMs use vSRX as their default gateway for all traffic that they process.
Because it is in a VM, it is bound at about 5 Gbps of firewall throughput. You get the flexibility of the VM format, you do not have to make any kernel changes, and there is no dependency on API integrations. vSRX is for use cases where you need north-south filtering at about 5 Gbps of performance along with connectivity features like NAT, routing, and VPN. All of those pieces are inherent in Junos and all are available in the vSRX product. Think of this as the north-south and connectivity feature set in VM format.
cSRX
The cSRX solution provides Layer 4 through Layer 7 SRX security features that can run within a Linux container management system like Docker. The container consists of a set of control plane security components and a data-processing component named Flowd. These components are single-sourced from Junos and ported to run in a container environment, greatly enhancing the cSRX’s scalability due to its low footprint and minimal resource dependencies. cSRX is an important piece of Juniper’s NFV offerings.
Core routers must have stability in the control plane, robust operating systems and hardware, resilience and resistance to network failures, and tremendous scale and longevity. Many edge routers have very rich feature sets, but when edge routers enter the core there is a general compromise of all of the aforementioned attributes to some degree. This compromise is not acceptable in core networks.
Evolution
New traffic dynamics such as mobility, video, and cloud-based services are transforming traditional network patterns and topologies. Statically designed and manually operated networks must evolve to meet demands quickly and more economically. Many operators have seen profitability stagnate and TCO increase under the weight that growing traffic demands have placed upon them.
Network operators using traditional models overprovision infrastructure months in advance in anticipation of increasingly unpredictable traffic patterns and volumes. This causes providers to outlay huge capital expenses in idle anticipation of the future. This imbalance in the fundamental financial equation can no longer be sustained—operators need to become more agile in order to optimize their existing network resources, shorten planning cycles, and remove rigid network layers. Once this is done, service providers can begin leveraging a highly intelligent Supercore infrastructure to increase revenues by creating new customizable services.
redesigning the entire chassis and software. The PTX Series provided the port density and power efficiency needed for the label-switching router (LSR) market.
Looking to the present and beyond, we now have the technology to take that same value into the IP core market. Along with the introduction of new line cards, we can address IP core routing and converged IP core applications—applications such as full IP core, service-rich core, peering, data center interconnect (DCI), and so on. As shown at the bottom of the diagram on this slide, we also have IP/MPLS over DWDM (packet optical convergence) for all of the use cases.
With all of these capabilities, Juniper refers to the PTX Series as universal IP core routers. The PTX Series can address any application and any convergence needed for the IP core routing segment.
introducing the industry’s only core router that exceeds these requirements and easily fits into the service provider network, expanding the Juniper Networks Converged Supercore architecture beyond lean LSR deployment, as illustrated on this slide.
Juniper’s Converged Supercore architecture redefines the way networks are built by collapsing multiple layers into a flat, elastic fabric, including performance-optimized IP/MPLS routing and metro/long-haul dense wavelength-division multiplexing (DWDM) layers. The Converged Supercore architecture builds upon the virtual integration of IP/MPLS technology and integrated coherent interfaces for extended terrestrial optical interconnect with a centralized, near-real-time traffic optimization SDN controller. The extensible Juniper Networks NorthStar Controller automates the creation of traffic-engineering paths across the network, increasing network utilization and enabling a customized programmable networking experience.
PTX Offerings
This slide provides a snapshot of some scaling numbers for the PTX Series chassis.
• PTX1000: Fixed-configuration core router that provides up to 2.88 Tbps of forwarding capacity. The PTX1000 supports flexible interface configuration options, including 288 10GbE interfaces using a quad small form-factor pluggable plus (QSFP+) breakout module, 72 40GbE interfaces using QSFP+, and 24 100GbE interfaces using QSFP28.
• PTX3000: Ultra-compact core router that is highly optimized to meet the service provider challenges of space and power by providing 8 Tbps of forwarding capacity at less than half a watt per Gbps of power consumption. It provides dense, fully integrated photonics and coherent dense wavelength-division multiplexing (DWDM) 100GbE transport.
• PTX5000: 24-Tbps core router that provides high-density 10GbE, 40GbE, and 100GbE interfaces for large networks and network applications, addressing the needs of Internet service providers and high-volume content providers. It offers integrated 100GbE coherent DWDM interfaces with optical transport network (OTN) framing at industry-leading densities.
T Series Offering
The T4000 is a half-rack chassis that delivers 4 Tbps of throughput. The T4000 more than doubles the per-slot capacity of the legacy T1600 systems, from 100 to 260 Gbps, with only a 25% increase in power. The T4000 supports up to 208 ports of 10-Gigabit Ethernet or 16 ports of 100-Gigabit Ethernet, making it the densest line-rate IP/MPLS routing system. The T4000 offers a wide range of interface flexibility—from SONET/SDH to 100-Gigabit Ethernet—and FPC and PIC compatibility with other legacy T Series platforms, which provides investment protection. The T4000 provides high scalability with multichassis support, component redundancy to ensure reliability, and a rich set of services provided by the Junos OS.
Juniper has led the way with the Universal Edge—one common platform for business, residential, and mobile subscribers. Access networks could also benefit from consolidation, but this has remained a challenge due to the disparate technologies developed for mobile, residential, and business access. Instead of relying on separate access devices to connect customers, operators want to converge access networks to deliver both a more predictable experience to users and better economics to shareholders.
Juniper’s Universal Access solution is based on the high-performance Juniper Networks ACX Series Universal Access Routers, anchoring the first fully integrated, end-to-end network architecture that combines operational intelligence with capital cost savings. With Universal Access, operators can extend the edge network and its capabilities to the customer, whether that’s a cell tower, a multi-tenant unit, or a residential aggregation point. This creates a seamless MPLS architecture that is critical to delivering the benefits of 4G radio and packet core evolution with minimal truck rolls, paving the way for new revenue, new business models, and a more flexible and efficient network.
Universal Access also uses Junos Space to offer a single management plane. This enables an open, standards-based management system that allows for rapid provisioning, fault management, Operation, Administration, and Maintenance (OAM), and service monitoring, as well as integration into existing management systems. In addition, proven synchronization technology results in better utilization of assets—more calls added, fewer calls dropped, and more data transported with fewer retransmissions, leading to happier customers and a healthier bottom line.
• ACX500: A fanless mobile backhaul router, certified for indoor and outdoor use, delivering up to 6 Gbps of throughput and supporting an integrated, software-enabled GPS receiver. The ACX500 is ideally suited for small-cell applications.
• ACX1000: Fanless router that delivers up to 60 Gbps of throughput with 8x T1/E1 interfaces, 8x copper GbE ports, and 4x copper/fiber GbE ports.
• ACX1100: Fanless Ethernet-only router that delivers up to 60 Gbps of throughput with 8x copper GbE ports, and 4x
• ACX2100: Fanless router that delivers up to 60 Gbps of throughput using 16x T1/E1 ports, 4x copper 10/100/1000 Mbps ports, 4x combination copper/fiber GbE ports, 2x GbE SFP ports, and 2x 10GbE SFP+ ports.
• ACX2200: Fanless router that delivers up to 60 Gbps of throughput using 4x copper 10/100/1000 Mbps interfaces, 4x combination copper/fiber GbE ports, 2x GbE SFP ports, and 2x 10GbE SFP+ ports.
• ACX4000: Actively cooled router that delivers up to 60 Gbps of throughput using a number of fixed ports that include 2x 10GbE, 2x GbE SFP, or 8x copper/fiber GbE ports. The chassis also includes two Modular Interface Card (MIC) slots that can house 16x T1/E1 ports, 6x GbE copper/fiber combination ports, or 4x CHOC3/STM-1 ports, creating a mobile access platform versatile enough for any deployment.
capacity and density, ACX5000 Series routers feature 40GbE interfaces for NNI uplinks and full support of E-LINE, E-LAN, E-TREE, and E-ACCESS, as well as IP/IP VPN services. These routers support high-availability networking features such as ISSU, Virtual Chassis, and MC-LAG. With the ACX5000, you can build high-density telco infrastructures that are both efficient and scalable. To meet the needs of sites with limitations on rack space or cooling, we offer two form factors for easy deployment:
• ACX5048: Actively cooled router that delivers up to 1.44 Tbps of throughput in a 1 RU form factor using 48x
QFX Overview
The QFX10000 line can be deployed in a number of different network designs and fabrics, including Layer 3 fabric, Junos Fusion, and Juniper MC-LAG for Layer 2 and Layer 3 networks, giving customers complete architectural flexibility. Additionally, the open architecture ensures that customers can innovate on top of the Juniper Networks Junos operating system to accelerate the pace of innovation. The QFX10000 line also supports Junos Fusion, a simple, easy-to-deploy fabric that scales to support mid-to-large data center deployments while simplifying network management and configuration. The QFX10000 line includes four different models, offering flexible solutions for every data center application.
• QFX10002-72Q: A fixed-configuration platform supporting up to 24x 100GbE interfaces, 72x 40GbE interfaces
EX Series Portfolio
The slide highlights the topic we discuss next.
WAN EX Overview
The EX9200 line of switches runs the same Junos OS used by all other Juniper Networks EX Series Ethernet Switches, as well as the Juniper Networks routers that power the world’s largest and most complex networks. The EX9200 is ideal for simplifying campus, data center, and combined campus and data center network environments. In the campus, the EX9200 collapses the core and distribution layers; when used with Juniper access layer switches deployed in an MC-LAG configuration, the EX9200 helps eliminate Spanning Tree Protocol, dramatically simplifying the network architecture and network operations. The EX9200 switches will also serve as the foundation for the Junos Fusion Enterprise architecture, a new standards-based approach for a scalable switch fabric in the enterprise campus. Junos Fusion Enterprise dramatically simplifies campus deployments by collapsing the entire network into a single management point, with the EX9200 as its core. Junos Fusion Enterprise can also serve as the shared core for enterprise campus environments that have on-premises data centers. A future software release will enable the EX9200 line of modular Ethernet switches to support Junos Fusion Enterprise technology and function as an aggregation device.
The EX9200 is also a key component of Juniper’s MetaFabric architecture, which provides a simple approach to building data center networks. Additionally, the EX9200 supports data center interconnect (DCI), critical to workload mobility and application availability, by delivering leading technologies such as MPLS, VPLS, and EVPN. For networks evolving to SDN, the EX9200 integrates with VMware NSX SDN controllers and can act as a VXLAN Layer 2 and Layer 3 gateway. The EX9200 platform also interoperates with the Open vSwitch Database (OVSDB) to support detailed management capabilities and integrates with the Juniper Networks Contrail SDN controller to allow users to choose their preferred SDN systems.
The EX9200 line cards support an extensive set of Layer 2 and Layer 3 services that can be deployed in any combination of L2-L3 applications. The EX9200-40F-M line card supports IEEE 802.1AE MACsec with AES-128/256-bit encryption, providing support for link-layer data confidentiality, data integrity, and data origin authentication.
The EX9200 line includes three models, offering flexible solutions for many enterprise and data center applications.
EX WAN Offerings
This slide provides a snapshot of some scaling numbers for the EX9200 line of switches. The EX9200 line includes three modular chassis models:
• EX9204 Ethernet Switch: This 5 RU modular chassis has four slots that support up to three line cards.
• EX9208 Ethernet Switch: This 8 RU modular chassis has eight slots that support up to six line cards.
• EX9214 Ethernet Switch: This 16 RU modular chassis has fourteen slots that support up to twelve line cards.
NFX250 Platform
The NFX250 Network Services Platform eliminates the operational complexities of deploying multiple types of customer premises equipment (CPE) to meet myriad customer service needs. As part of Juniper’s Cloud CPE solution, the NFX250 gives communication service providers and enterprises the flexibility to deploy secure, high-performance services on premises in a single CPE device. This software-driven solution uses virtualization to speed and simplify service deployment. The NFX250 can simultaneously support multiple Juniper and third-party VNFs on a single device and provides built-in, dynamic, policy-based routing, making it ideal for small to midsized businesses as well as large multinational or distributed enterprises. Contrail Service Orchestration can be used to deliver these VNFs to the NFX250.
We Discussed:
• Juniper's product portfolio and the basics of how its products relate to WAN solutions.
Review Questions
1.
2.
3.

Answers:
1. Juniper refers to the PTX Series as universal IP core routers.
2. Juniper refers to the MX Series as universal edge routers.
3. The ACX500, 1000, 1100, 2000, 2100, and 2200 routers are built to be passively cooled, meaning they do not have a fan to push air through the device.
• Feature Explorer: Junos OS and ScreenOS software feature information to find the right software release and
hardware platform for your network.
• Content Explorer: Technical documentation for Junos OS-based products by product, task, and software
release, and downloadable documentation PDFs.
• Learning Bytes: Concise tips and instructions on specific features and functions of Juniper technologies.
• Installation and configuration courses: Over 60 free Web-based training courses on product installation and
configuration (just choose eLearning under Delivery Modality).
• J-Net Forum: Training, certification, and career topics to discuss with your peers.
• Juniper Networks Certification Program: Complete details on the certification program, including tracks, exam
details, promotions, and how to get started.
• Technical courses: A complete list of instructor-led, hands-on courses and self-paced, eLearning courses.
• Translation tools: Several online translation tools to help simplify migration tasks.
www.juniper.net
Juniper Networks Design—WAN
Acronym List
API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application programming interface
AS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . autonomous system
ASIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application-specific integrated circuit
AWS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Amazon Web Services
CapEx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .capital expenditures
CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .command-line interface
CPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . customer premises equipment
CSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Contrail Service Orchestration
DC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data center
DCI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data center interconnect
DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Domain Name System
DPI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . deep packet inspection
DWDM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . dense wavelength-division multiplexing
ECMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . equal-cost multipath
ECR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Energy Consumption Rating
EER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . energy efficiency ratio
EF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .exposure factor
EOL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . end-of-life
EOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . end-of-support
ERO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . explicit route object
FCAPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . fault, configuration, accounting, performance, security
GCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Google Compute Engine
GRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .generic routing encapsulation
GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .graphical user interface
HTTPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Hypertext Transfer Protocol over Secure Sockets Layer
RPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . remote procedure call
SaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Software as a Service
SDK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . software development kit
SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . software-defined network
SFP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .small form-factor pluggable
SLAX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stylesheet Language Alternative Syntax
SSHv2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Secure Shell version 2
UDP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .User Datagram Protocol
VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual LAN
VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual machine
vMX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual MX
VNF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual network function
WAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . wide area network
XMPP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Extensible Messaging and Presence Protocol
XSLT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Extensible Stylesheet Language Transformations