Juniper Networks Design
Fundamentals
15.b
Student Guide
Volume 2 of 2
408-745-2000
www.juniper.net
All other trademarks, service marks, registered trademarks, or registered service marks are the property of their respective owners.
Juniper Networks Design Fundamentals Student Guide, Revision 15.b
Copyright © 2015 Juniper Networks, Inc. All rights reserved.
Printed in USA.
Revision History:
Revision 15.a—March 2015.
Revision 15.b—June 2015.
The information in this document is current as of the date listed above.
The information in this document has been carefully verified and is believed to be accurate. Juniper Networks assumes no responsibility for any inaccuracies that may appear in this document. In no event will Juniper Networks be liable for direct, indirect, special, exemplary, incidental, or consequential damages resulting from any defect or omission in this document, even if advised of the possibility of such damages.
Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
YEAR 2000 NOTICE
Juniper Networks hardware and software products do not suffer from Year 2000 problems and hence are Year 2000 compliant. The Junos operating system has no known
time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
SOFTWARE LICENSE
The terms and conditions for using Juniper Networks software are described in the software license provided with the software, or to the extent applicable, in an agreement
executed between you and Juniper Networks, or Juniper Networks agent. By using Juniper Networks software, you indicate that you understand and agree to be bound by its
license terms and conditions. Generally speaking, the software license restricts the manner in which you are permitted to use the Juniper Networks software, may contain
prohibitions against certain uses, and may state conditions under which the license is automatically terminated. You should consult the software license for further details.
Contents
Chapter 10: Network Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1
Designing for Network Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Chapter 12: Putting Network Design into Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-1
Network Design Recap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-3
Responding to the RFP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-9
Final Lab Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-17
Lab: Final Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-18
Appendix A: Network Migration Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-1
Migration Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A-3
Migration Approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .A-6
Migration Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A-13
Course Overview
This three-day course is designed to cover introductory best practices, theory, and design principles for overall network
design and will serve as the prerequisite course for other design subject areas — data center, security, and WAN.
Objectives
After successfully completing this course, you should be able to:
• Provide an overview of network design needs and common business requirements.
• Describe key product groups related to campus, WAN, data center, and security architectures.
• Analyze and interpret common RFP requirements.
• Scope a network design by gathering data and working with key stakeholders.
• Describe ways of processing customer data and design requests.
• Identify boundaries and scope for the design proposal.
• List common considerations when creating a design proposal.
• Provide an overview of network security design and common vulnerabilities.
• List high-level design considerations and best practices for securing the network.
• List the components of the campus network design.
• Describe best practices and design considerations for the campus.
• Describe architectural design options for the campus.
• List the components of the WAN.
• Describe best practices and design considerations for the WAN.
• Create a network design proposal that satisfies customer requirements and business needs.
• Provide an overview of the steps involved in migrating a network.
• Describe best practices used in network migration.
Intended Audience
This course is targeted for Juniper Networks system engineers, partner sales engineers (including Champions), and
services partners who are interested in learning network design introductory concepts. However, the course is also
applicable to a general audience of Juniper customers with a desire to learn more about network design.
Course Level
Juniper Networks Design Fundamentals is an associate-level course.
Prerequisites
The prerequisites for this course are as follows:
• Understanding of the OSI model and TCP/IP;
• Knowledge of routing architectures and protocols;
• Knowledge of switching architectures and protocols;
• Knowledge of Juniper Networks products and solutions;
• Understanding of infrastructure security principles; and
• Basic knowledge of hypervisors and load balancers.
Course Agenda
Day 1
Chapter 1: Course Introduction
A
Chapter 2: Network Design Fundamentals
SH
Chapter 3: Understanding Customer Requirements
Lab: Understanding Customer Requirements
Chapter 4: Organizing the Data
Chapter 5: Securing the Network
T
Day 2
NO
Chapter 6: Creating the Design—Campus
Lab: Creating the Design—Campus
Chapter 7: Creating the Design—WAN
Lab: Creating the Design—WAN
DO
Chapter 8: Creating the Design—Data Center
Lab: Creating the Design—Data Center
Chapter 9: Business Continuity and Network Enhancements
Day 3
—
Chapter 10: Network Management
Chapter 11: Automation
CLI and GUI Text
Frequently throughout this course, we refer to text that appears in a command-line interface (CLI) or a graphical user
interface (GUI). To make the language of these documents easier to read, we distinguish GUI and CLI text from standard
text according to the following table.
Style            Description                      Usage Example

Franklin Gothic  Normal text.                     Most of what you read in the Lab Guide
                                                  and Student Guide.

Courier New      Console text:                    commit complete
                   • Screen captures              Exiting configuration mode
                   • Noncommand-related syntax

                 GUI text elements:               Select File > Open, and then click
                   • Menu names                   Configuration.conf in the Filename
                   • Text field entry             text box.

CLI Input        Text that you must enter.        lab@San_Jose> show route

GUI Input        Text that you must enter.        Select File > Save, and type config.ini
                                                  in the Filename field.
Finally, this course distinguishes between regular text and syntax variables, and it also distinguishes between syntax
variables where the value is already assigned (defined variables) and syntax variables where you must assign the value
(undefined variables). Note that these styles can be combined with the input style as well.
CLI Undefined    Text where the variable's value  Type set policy policy-name.
                 is at your discretion, or might
                 differ according to the lab
                 topology.
Education Services Offerings
You can obtain information on the latest Education Services offerings, course dates, and class locations from the World
Wide Web by pointing your Web browser to: http://www.juniper.net/training/education/.
About This Publication
The Juniper Networks Design Fundamentals Student Guide is written and maintained by the Juniper Networks Education
Services development team. Please send questions and suggestions for improvement to training@juniper.net.
Technical Publications
You can print technical manuals and release notes directly from the Internet in a variety of formats:
• Go to http://www.juniper.net/techpubs/.
• Locate the specific software or hardware release and title you need, and choose the format in which you
want to view or print the document.
Documentation sets and CDs are available through your local Juniper Networks sales office or account representative.
Juniper Networks Support
For technical support, contact Juniper Networks at http://www.juniper.net/customers/support/, or at 1-888-314-JTAC
(within the United States) or 408-745-2121 (outside the United States).
Chapter 10: Network Management
We Will Discuss:
• Design considerations and best practices for managing networks; and
The enterprise network of today is stretched in multiple dimensions to meet the needs of the many types of users. The slide
illustrates several managed networks, from the high-performance demands of the service provider network to the
proliferation of high-bandwidth applications in a customer enterprise network. Each network type has its own needs. The
management needs for these network types vary, just as bandwidth needs vary between these network types.
different ideal than operations management. The planning group has a different viewpoint than the provisioning group and
so on.
• Manageability: It might seem odd that manageability needs to be mentioned within the context of network
management but it is often forgotten in the entire scheme of things. Manageability refers to how easy, or hard,
it is to manage the network.
• Measurement: Defined as the analysis of collected data for performance, errors, security, etc. With this
• Fault management: Includes network monitoring, alarm management, and dealing with faults that occur in the
network involving both hardware and software failures. Combined, these allow a network operator to:
– Determine if the network is operating as expected.
– Track the current network state and, if necessary, compare it against previously collected network
baselines.
– Visualize the network using GUI-related applications.
Fault management encompasses alarm management, which is, arguably, the most important aspect of network management.
• Configuration management: Includes the initial configuration of the device to bring it online in addition to any
configuration changes thereafter. In addition, best practices include the following:
– Network auditing and discovery to keep device inventory current and accurate.
– Backup and restoration of device configurations.
– Software image management, including storage and delivery.
• Accounting management: The heart of a network’s economics and, therefore, needs to be highly reliable and
accurate. It deals with data such as total bandwidth usage, peak versus non-peak usage, and the different
types of traffic flowing on a network.
• Performance management: Involves analysis of metrics such as:
– Throughput: Usually expressed as packets per second at the network layer or requests per second at the
application layer.
– Delay/jitter: How long are packets or requests taking to reach a destination and are they consistent?
– Quality: Are packets being dropped? Are requests not making it to the applications?
These metrics are almost invariably retrieved by sampling the traffic data using instant snapshots and
data-over-time methods.
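These metric definitions can be sketched numerically. The following Python fragment is illustrative only — the sample values are invented, and jitter is computed here simply as the mean variation between consecutive delay samples:

```python
# Illustrative computation of the three performance metrics above.
# All sample values are invented for demonstration purposes.

def throughput_pps(packet_count: int, interval_s: float) -> float:
    """Throughput at the network layer, in packets per second."""
    return packet_count / interval_s

def delay_stats(delays_ms: list[float]) -> tuple[float, float]:
    """Return (average delay, jitter), where jitter is the mean
    absolute variation between consecutive delay samples."""
    avg = sum(delays_ms) / len(delays_ms)
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return avg, sum(diffs) / len(diffs)

def loss_pct(sent: int, received: int) -> float:
    """Quality: percentage of packets dropped in transit."""
    return 100.0 * (sent - received) / sent

delays = [20.0, 22.0, 21.0, 35.0, 20.0]      # hypothetical probe results, in ms
avg, jitter = delay_stats(delays)
print(throughput_pps(150_000, 10))           # 15000.0
print(round(avg, 1), round(jitter, 2))       # 23.6 8.0
print(loss_pct(1000, 990))                   # 1.0
```

Note how one delay spike (35 ms) barely moves the average but dominates the jitter figure — which is why delay and jitter are tracked as separate metrics.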
• Security management: Usually means one of two things:
– Security of the network itself. That is, protecting the network against hacking attempts, Distributed Denial
of Service (DDoS) attacks, etc.
– Security of management operations. This includes restricting device access and creating audit trails.
That is, records are kept of who (or what, in case of an NMS) has accessed a given device and what, if
any, changes were made.
The OAMP(T) model does not match up one-to-one with the FCAPS model but, for the most part, the same things are
covered:
• Operation: This is the day-to-day running of the network, i.e., the coordination between administration,
maintenance, and provisioning.
• Administration: Network support that does not include network changes. This includes the network design,
tracking usage, and keeping inventory current.
• Maintenance: Involves making changes that affect the well-being of the network. These include scheduled
hardware and software changes, routine checks, and maintaining operational management security.
• Troubleshooting: Involves exactly what you would expect – determining the cause for faults in the network and, possibly,
implementing temporary solutions until such a time as maintenance can carry out a more permanent fix.
resources. Furthermore, management traffic can take up a significant portion of the overall capacity and only rises as the
network scales. If allowed to go unchecked, this can lead to degraded network conditions which make diagnosing and fixing
Device Separation
Segregating production and management networks eliminates bandwidth contention and allows monitoring to continue
even as the production network nears its limits. Ideally, the separation between the two networks should be as close to the
Physical Layer as possible. Simple logical separation is better than nothing, but far from ideal. Some keys to separating
management and production networks include a well-thought-out IP addressing scheme, which again is part of network
design, as well as complete separation at the Physical Layer so the network is less prone to the aforementioned shared
Consistency
For network management to function effectively, standardization is crucial. This applies to almost every aspect of the
network, be it device configuration, device naming, rack placement, cabling, and so on. Talking standardization is one thing.
Following it consistently is quite another. However, this consistency is key if the network is to be managed effectively. Recall the manageability of the network mentioned previously? This is exactly what was meant. Without standardization, network management is haphazard and unscalable. Adhering to documented standards makes it much easier for the support staff to
manage the network. Furthermore, for any process to be scalable, it must be replicable and standardization enables this
replicability. Finally, standardization leads to automation, which can result in a reduction of operating costs. Standardizing
the physical layout, including rack layout and device slot and module population, expedites deployment as well as
diagnostics.
Similar Devices
Standardizing software and firmware versions (that is, keeping like devices running identical software and firmware)
expedites troubleshooting, simplifies sparing and replacement, and makes determination of software versus hardware
failures much more obvious.
decide upon. Consistency and follow-through is key as it eases automation of Domain Name System (DNS) zone edits,
simplifies pattern matching logic that might be required by network management applications, and expedites physically
locating devices by their region, site, room ID, row, rack number, and so on. Note that too much of a good thing can apply
here. That is, too many bits of information in the hostname can lead to confusion and delays. Consider using DNS
sub-domains instead.
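As an illustration of why a strict naming convention pays off, the Python sketch below parses a hypothetical hostname layout. The field layout (region-site-room-rack-role-unit) is an invented example, not a Juniper standard; the point is that a convention followed without exception lets management software extract location and role with trivial pattern matching.

```python
# Hypothetical hostname convention for illustration only.
import re

HOSTNAME_RE = re.compile(
    r"^(?P<region>[a-z]{2})-"   # region code, e.g. 'us'
    r"(?P<site>[a-z]{3})-"      # site code, e.g. 'sjc'
    r"(?P<room>r\d{2})-"        # room ID
    r"(?P<rack>k\d{2})-"        # rack number
    r"(?P<role>[a-z]+)"         # device role, e.g. 'core', 'agg'
    r"(?P<unit>\d{2})$"         # unit number within the role
)

def parse_hostname(hostname: str) -> dict:
    """Split a convention-following hostname into its fields."""
    m = HOSTNAME_RE.match(hostname)
    if not m:
        raise ValueError(f"hostname does not follow convention: {hostname}")
    return m.groupdict()

print(parse_hostname("us-sjc-r01-k12-core01"))
# {'region': 'us', 'site': 'sjc', 'room': 'r01', 'rack': 'k12',
#  'role': 'core', 'unit': '01'}
```

The ValueError branch is the other half of the argument: a hostname that cannot be parsed is immediately flagged as a standards violation rather than silently mishandled.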
Description Fields
In addition to hostnames, most devices have some sort of description field that can be used for more information about a
device. Standardizing this information allows both humans and software to easily extrapolate context from it, leading to
quicker troubleshooting and diagnostics. As with hostnames, it is imperative that a convention be defined and strictly
followed. Items for consideration in a description field include customer IDs, circuit IDs, site code, row and rack ID, “under
test”, and so on.
Creating Backups
Now that you have all these wonderful standardized configurations, what is the next step? Well, if there is one thing you can
count on, it is that a network device will fail when you least expect it to. Getting called on a Saturday at 3 a.m. is no fun at all.
Backups are absolutely essential to get a replacement device back on the network, in the same state as before, with a
minimum of downtime. This is only possible if a system is in place that gathers and maintains backups of device
configurations. Many options are available in this realm but most backups are performed using some sort of simple data
transfer protocol. The list might include:
• FTP
• TFTP
• SCP
One caveat to consider is that FTP and TFTP transfer data in an unencrypted form so, if you have to go this route, ensure that
these transfers are only made on the separate management network thereby isolating them from the production network.
For network devices, it is typical to back up the entire configuration. However, for servers, you might be able to get away with only backing up the configuration files and not the entire disk image. Most backups are "pulled," meaning that a management system queries the devices and gathers the necessary data. However, some devices can "push" a backup to a destination. Examples of this are Junos commit scripts, discussed in subsequent slides.
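The pull model can be sketched as follows. This is illustrative only: fetch_config is a stand-in for the actual SCP (or FTP/TFTP) transfer, and the device names and configuration contents are invented.

```python
# Minimal sketch of a "pull" backup system. fetch_config is a
# placeholder for a real transfer (e.g., SCP over the management
# network); device names and contents are invented.
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def fetch_config(device: str) -> str:
    """Stand-in for the SCP/FTP transfer from the device."""
    return f"## configuration for {device}\nsystem {{ host-name {device}; }}\n"

def backup_device(device: str, backup_root: Path) -> Path:
    """Pull one device's configuration and store it under a
    timestamped filename so earlier backups are never overwritten."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = backup_root / device / f"{device}-{stamp}.conf"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(fetch_config(device))
    return target

root = Path(tempfile.mkdtemp())
for device in ["core01", "agg01", "agg02"]:  # inventory from the audit system
    print(backup_device(device, root).name)
```

Timestamped filenames are a deliberate choice here: when a device fails at 3 a.m., you want to restore a known-good configuration from before the failure, not just the most recent pull.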
Therefore, it is imperative that console access be implemented and maintained. Do not kid yourself. Even though you might
have an out-of-band management network in place, or have staff on-hand 24/7, properly managed console access is key.
device swap. During a network failure, you do not want to have to waste any time searching for serial cables, adapters, and
ports before you can get access. Therefore, you should permanently connect serial consoles to their dedicated console
network normally behaves. These baselines, over time, can be compared against one another to find trends that could
indicate a current problem or be useful for future network upgrades.
Continuous network performance monitoring (for example, status polling, interface monitoring, and so forth) is necessary
but insufficient. To obtain an accurate picture of normalcy, you must have and maintain the following:
• A sufficient history of network and device performance data;
• An understanding of the applications that are supposed to be running on your network; and
• A clear statement (such as a policy) of which types of traffic are supposed to be running on your network.
Only then can you construct a set of baselines for network behavior as well as criteria that can be applied to establish
threshold conditions. This context allows you to automate the assessment of whether a condition is normal or undesirable.
For example, at times more than 80% interface utilization might be considered normal, and at other times it might not.
Conversely, low utilization (at an unexpected time) could be a leading indicator of loss of network access upstream.
Performance conditions like these are defined by performance history context and information like time of day, direction of traffic, and so on.
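Context-dependent thresholds of this kind can be sketched simply. In the fragment below, the history values are invented, and a real baseline would be built from weeks of collected performance data rather than five samples per hour bucket:

```python
# Sketch of time-of-day-aware thresholds: the same utilization value
# can be normal at one hour and anomalous at another. History values
# are invented for illustration.
from statistics import mean, stdev

def build_baseline(history: dict[int, list[float]], k: float = 3.0) -> dict:
    """Per-hour (low, high) thresholds: mean +/- k standard deviations."""
    return {hour: (mean(s) - k * stdev(s), mean(s) + k * stdev(s))
            for hour, s in history.items()}

def is_anomalous(baseline: dict, hour: int, utilization_pct: float) -> bool:
    low, high = baseline[hour]
    return not (low <= utilization_pct <= high)

history = {
    3:  [5.0, 6.0, 4.0, 5.5, 5.0],       # overnight: normally quiet
    14: [78.0, 82.0, 80.0, 85.0, 79.0],  # afternoon: normally busy
}
baseline = build_baseline(history)
print(is_anomalous(baseline, 14, 81.0))  # False: busy hour, busy link
print(is_anomalous(baseline, 3, 81.0))   # True: 81% at 3 a.m.
print(is_anomalous(baseline, 14, 10.0))  # True: suspiciously quiet
```

The third check matches the point made above: unexpectedly low utilization is flagged too, since it can be a leading indicator of an upstream failure.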
devices on a regular basis. Historical data collected by these applications can provide immediately-useful time series
visualizations of gross network traffic behavior as well as individual device performance. These tools (and similar tools)
might also be extended to incorporate threshold detection once baseline norms have been established.
Traffic flows can be monitored by enabling flow reporting on the ingress and egress routers or switches. Different vendors
might support slightly differing implementations of flow reporting (such as NetFlow, C-Flow, S-Flow, and IPFIX). Choose the
one that fits your needs and is supported by your network hardware. A fully detailed analysis of flow monitoring tools is beyond the scope of this course.
Traffic flow data collection and reporting allows you to see the direction and composition of IP traffic through your network. This
type of data is much more useful than gross per-interface, packet-rate, or byte-rate data because it can also report usage
broken down by source and destination ports as well as by IP address. Such a breakdown is useful, because even normal-looking network utilization or packet rates can be unintended or anomalous traffic at the Application level; for example, slow
network scans, undesired services, and so forth. Deep Packet Inspection (DPI) tools can provide insight into the Application level
performance. This level of monitoring enables you to detect when otherwise normal applications on your network might be
behaving in inappropriate ways. An example of this type of behavior would be undesired SSH Virtual Private Networks (VPNs)
masquerading as Hypertext Transfer Protocol over Secure Sockets Layer (HTTPS) sessions.
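The kind of breakdown flow data enables can be sketched with a few lines of Python. The records below are invented (src, dst, dst_port, bytes) tuples standing in for exported NetFlow/IPFIX records; a real collector would decode the export protocol before this aggregation step.

```python
# Aggregating invented flow records by destination port — the kind of
# breakdown that a gross per-interface byte counter cannot provide.
from collections import Counter

flows = [
    ("10.1.1.5", "10.2.0.9", 443, 52_000),
    ("10.1.1.7", "10.2.0.9", 443, 48_000),
    ("10.1.1.5", "10.3.0.4",  22, 1_200_000),  # unusually heavy SSH
    ("10.1.1.9", "10.2.0.9",  80, 15_000),
]

bytes_by_port = Counter()
for src, dst, dst_port, nbytes in flows:
    bytes_by_port[dst_port] += nbytes

# The port-level view exposes what the interface counter hides:
# most of this traffic is SSH, not web traffic.
for port, total in bytes_by_port.most_common():
    print(port, total)
# 22 1200000
# 443 100000
# 80 15000
```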
Enable SNMP agents on your switches and routers to provide continuous and historical instrumentation of traffic loads
through the devices. Also, you can utilize SNMP instrumentation to monitor the internal health of your routers, switches, and
servers (stats such as CPU, memory, and buffer utilization, as well as environmental factors like temperature).
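Turning two SNMP counter samples (such as the MIB-II ifInOctets counter) into a rate is a small calculation worth seeing once, because counters only increase until they roll over — a 32-bit counter wraps back to zero at 2^32. The sample values below are invented.

```python
# Converting two SNMP octet-counter polls into a byte rate, handling a
# single 32-bit counter rollover between polls. Sample values invented.

COUNTER32_MAX = 2**32

def byte_rate(prev: int, curr: int, interval_s: float,
              counter_max: int = COUNTER32_MAX) -> float:
    """Bytes per second between two polls of a monotonic counter."""
    delta = curr - prev
    if delta < 0:                # counter rolled over between polls
        delta += counter_max
    return delta / interval_s

print(byte_rate(1_000_000, 4_000_000, 60))    # 50000.0
print(byte_rate(4_294_960_000, 12_704, 60))   # 20,000 bytes over 60 s
```

This is also why polling intervals matter: on fast links a 32-bit counter can wrap more than once between widely spaced polls, silently corrupting the rate, which is one reason 64-bit counters exist.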
number of devices, falling into the trap of using individual local authentication is relatively easy. So, a best practice is to
leverage existing AAA servers for all device access, and deploy those servers if they are not already present.
Not only do centralized AAA systems enable a sound security practice to scale up to thousands of devices, but they also centralize control of authentication, enabling more granular accountability. Centralized AAA services can enable security
features not available for devices that authenticate locally, such as password expiration or two-factor authentication. The use
of a least privilege approach for profiles minimizes exposure. Centralized AAA makes it easy to quickly modify security
profiles.
The value of centralized AAA increases exponentially with the number of devices.
the actual network devices enables a network management solution that can scale to a very large number of devices.
Delegation of data collection and data reduction reduces the processing load on central network management systems and
RPM
Response time monitoring between points on your network can be delegated to Junos devices with real-time performance
monitoring (RPM) configured. RPM allows your network devices to monitor delay, jitter, and loss between endpoints on your
network.
Notification Reduction
Another way to achieve scalability in your network management approach is to delegate event data reduction tasks to your
distributed devices. The Junos event policy features enable event reduction capabilities such as pairing, duplicate
suppression, prioritization, and so on.
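Duplicate suppression, one of the reduction techniques listed above, can be modeled in a few lines. On Junos devices this kind of policy is expressed in the event policy configuration; the Python below just illustrates the idea, with an invented event stream: identical events inside a hold-down window are counted, not re-reported.

```python
# Model of duplicate suppression: repeats of the same (device, event)
# within a hold-down window are suppressed. Event stream is invented.

def reduce_events(events, window_s: float = 60.0):
    """Collapse repeats of the same (device, event_id) seen within
    window_s seconds of the last reported occurrence."""
    last_reported = {}   # (device, event_id) -> timestamp last reported
    reported, suppressed = [], 0
    for ts, device, event_id in events:
        key = (device, event_id)
        if key in last_reported and ts - last_reported[key] < window_s:
            suppressed += 1
            continue
        last_reported[key] = ts
        reported.append((ts, device, event_id))
    return reported, suppressed

events = [   # (seconds, device, event)
    (0,  "agg01",  "SNMP_TRAP_LINK_DOWN"),
    (5,  "agg01",  "SNMP_TRAP_LINK_DOWN"),  # duplicate: suppressed
    (8,  "core01", "SNMP_TRAP_LINK_DOWN"),  # different device: reported
    (70, "agg01",  "SNMP_TRAP_LINK_DOWN"),  # outside window: reported
]
reported, suppressed = reduce_events(events)
print(len(reported), suppressed)  # 3 1
```

Performing this reduction on the device itself means the central management system receives three events instead of four — a saving that becomes substantial at the scale of thousands of devices.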
infrastructure design, such as configuring hardware or developing automation scripts—as simple as possible.
You should not get clever with your management network design; it needs to “just work”. For management and control LANs,
use single-VLAN, flat, switched networks and avoid complicated Spanning Tree Protocol (STP) arrangements as much as
possible. The result will be fewer points of failure. More importantly, potential failure modes will be obvious and (hopefully)
quickly repairable when failures occur.
Unfortunately, failures occur when they are least convenient. Ensure that operational procedures are well-documented and
easy to follow. Automate the collection of diagnostic data, and more importantly, ensure that these tools and processes are
well-documented and that they perform as described. If the diagnostic process is too difficult to document, then it is
probably too complex and you should try to simplify it.
Being clear and obvious is much more important for automation and diagnostic script coding than attempting to be clever and efficient. For just about any automation script, the reader's ability to understand it is far more important than having super-efficient internal functions. Always follow good coding practices and insert useful comments, especially around any
logic that might not be immediately obvious. When devising a network configuration or writing automation scripts, consider
that it will not work as expected at some time in the future. Typically, these problems occur in the middle of the night (often
at 3:00 am), and if the engineer who encounters them does not understand your network design, you can expect a call at
3:15 am.
running on the Junos Space Network Management Platform. These applications enable you to automate the end-to-end
provisioning of new services across thousands of devices with a simple point-and-click graphical user interface (GUI), as well
as optimize management for specific domains such as core, edge, access and aggregation, data center, WAN, and campus
or branch.
The growing demand for more wireless network access, as well as recent trends like bring your own device (BYOD) and the
consumerization of IT, are causing a rapid increase in the volume of users, devices, and applications seeking network
access, and this is creating new challenges for network managers. Junos Space Network Director addresses these
challenges by offering a unified wired and wireless network management solution that accommodates the entire network
lifecycle. Junos Space Network Director is a next-generation network management application that holistically addresses
management of the network infrastructure that is required to meet the explosive demand for smart devices, a variety of
Junos Space Network Director delivers:
• Critical elements of advanced management applications by providing operational efficiency, expedited error-free service rollout, enhanced visibility, and fast troubleshooting.
• Operational efficiency by employing a correlated view of various network elements. Junos Space Network
Director offers a holistic view of every aspect of network operation to remove the need for disjointed
applications throughout the lifecycle of the network.
• Faster rollout and activation of services while protecting against configuration errors with profile-based
configuration and configuration pre-validation.
• A single pane of management to provide a unified view of network infrastructure, including a correlated view of overlay
services and user experience on top of that network infrastructure. Junos Space Network Director also tracks aggregated
utilization, network hotspots, failures, correlated radio frequency (RF) data, and usage to a user level, providing deep
visibility and easy troubleshooting of connectivity, equipment, and general failures.
Build mode is used to discover network devices and to create and manage device configurations. Build mode also provides a
view where devices can be organized into hierarchical groupings based on logical relationships or physical locations.
In deploy mode, device-related changes such as image management, configuration pushes, and reconciliation are applied.
Monitor mode provides detailed visibility into network status and performance by collecting information from devices and by
maintaining the information in a database. The monitor lifecycle offers graphs that are easy to understand and tables that
can be sorted and filtered, allowing you to quickly visualize the state of the network, spot trends developing over time, and
review important details. The monitor lifecycle divides monitoring activity into two categories—traffic monitoring and client
monitoring.
The fault mode provides visibility into unexpected network events and manages network faults or exceptions, which appear
as alarms in Junos Space Network Director. Junos Space Network Director collects, analyzes, and correlates these low-level
events into alarms, and it allows the network administrator to view current active alarms, summaries of categorized alarms,
and alarm details, including the individual events that are correlated to an alarm.
Report mode enables administrators to run reports on the data that’s collected and stored to a server by Junos Space
Network Director. Network Director can set up report definitions that specify the format of the report (Hypertext Markup
Language [HTML], PDF, comma-separated variable [CSV]), the historical time frame that the report covers, and the report
contents. You can choose between predefined report content for reporting on alarms, Network Director activity, device
inventory, sessions, traffic, and RF information. Scope can be selected around a device, a location, or a group of devices.
Reports can be scheduled, allowing the report to run at a single specified time in the future or on a recurring basis.
Generated reports are stored on Junos Space Network Director, where they are available for downloading. You can also
choose to have them delivered through e-mail or archived to a secure copy protocol (SCP) file server when you create the
report definition.
whether hardware or virtual, comes with all necessary software preinstalled and includes the CentOS operating system,
relational database, application server, and various other third-party components. The main driver for choosing Junos Space
Virtual Appliances is that they allow you to utilize any existing investment already made in VMware virtualization infrastructure instead of purchasing new hardware. You can also scale up a Junos Space Virtual Appliance by increasing the resources assigned to it in terms of CPU, memory, and disk space.
silos and tools is causing information overload. Industry-specific regulations exist that mandate security best practices.
Compliance mandates internal IT risk-assessment programs. Evolving internal and external threats to the network present
yet another challenge. These threats can come from insider abuse or complex integrated attacks.
As the designer, you must implement network security measures to overcome the challenges your customer faces.
describe network security management as the method of collecting events, which are logs from network devices,
applications, end points, and so forth; network monitoring, which is the active analysis of live network communications; and
the collection of asset information and vulnerability profiles. The process also includes normalization and mapping of all data to a single format for processing and storage—for example, mapping events based on category, IP address, username, and
so forth.
Requirements
The network security management process is potentially overwhelming considering the massive amount of network traffic
and log activity taking place on the network; thus, the requirements for a network security management solution are
extensive. The solution must be able to search interfaces for queries and forensics, correlate data to identify and prioritize
threats, as well as reduce the number of false alarms. The solution must also be able to produce notifications and alarms for
instances where policies have been violated or threats are detected. The solution must also generate reports that can
provide information on operation and compliance.
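The normalization step described above can be illustrated with a short sketch. The source formats and field names below are invented for illustration; real solutions map many more fields (timestamps, severity, and so on):

```python
# Hypothetical sketch of event normalization: mapping logs from different
# sources onto one common record format. All field names are invented.

def normalize(source, raw):
    """Map a source-specific event dictionary onto a single common schema."""
    if source == "firewall":
        return {"category": raw["type"], "ip": raw["src"], "user": raw.get("user", "unknown")}
    if source == "app":
        return {"category": raw["event"], "ip": raw["client_ip"], "user": raw["username"]}
    raise ValueError("unknown source: %s" % source)

events = [
    ("firewall", {"type": "deny", "src": "10.0.0.5"}),
    ("app", {"event": "login-fail", "client_ip": "10.0.0.9", "username": "jdoe"}),
]

# Every event, regardless of origin, now shares the same keys for
# correlation, searching, and reporting.
normalized = [normalize(src, raw) for src, raw in events]
for rec in normalized:
    print(rec)
```

Once events share a schema, correlation (for example, grouping by IP address or username) becomes a simple query rather than a per-source parsing exercise.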
We Discussed:
• Design considerations and best practices for managing the network; and
Review Questions
1.
A baseline reference is necessary for “good performance” or “normal behavior” in the network, thereby making failures or anomalies
easier to spot and troubleshoot.
2.
Standardizing configurations makes it easier for IT staff to manage the network, makes it possible to automate network functions,
expedites deployments and diagnostics, and simplifies sparing and replacement.
3.
The Junos Space Network Application Platform is available as a hardware appliance or as a virtual appliance.
Chapter 11: Automation
Juniper Networks Design Fundamentals
We Will Discuss:
• Design considerations and best practices for network automation; and
Scalable networks require automated processes for creating, monitoring, and updating device configurations. Otherwise, operating expenditures (OpEx) increase and reliability decreases.
Automating device configuration maintenance is crucial for creating scalable networks.
Automating Configuration
Automation requires and reinforces consistency, which as we have shown, is another critical factor to scalability—requiring
consistency in the rules within automation as well as consistency in the data, such as device configuration settings. As more
configuration data is derived from automated mechanisms, the data becomes more consistent and predictable than
hand-crafted configuration data.
Business rules, service level agreements (SLAs) and Best Current Practices (BCP) are all inputs for establishing configuration
parameters. The more specific those inputs, the more likely that automated device configuration can be implemented
effectively.
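The idea of deriving device configuration from standardized inputs can be sketched with a simple template. The template text and parameter names here are hypothetical, not a prescribed Junos configuration standard:

```python
from string import Template

# Hypothetical standard: every access interface is rendered from the same
# template, with per-site parameters supplied by a central data source.
IFACE_TEMPLATE = Template(
    "set interfaces $ifname unit 0 family inet address $address\n"
    "set interfaces $ifname description \"$descr\""
)

def render(params):
    """Produce consistent configuration lines from standardized inputs."""
    return IFACE_TEMPLATE.substitute(params)

print(render({"ifname": "ge-0/0/1", "address": "192.0.2.1/30", "descr": "uplink to core"}))
```

Because every interface is rendered from the same template, the resulting configurations are uniform by construction, which is exactly the consistency property the text describes.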
Automation Options
Configuration management automation can take place concurrently from multiple places in your network. However,
configuration management should be considered a centralized function: the provenance of configuration data
must always be identifiable and centralized control provides a single authoritative source for policy and configuration data,
enabling that accountability.
Network management system (NMS)-based device configuration management comprises the tools that automate the
creation, deployment, monitoring, and management of the device configurations from a centralized system or set of
systems.
However, delegating some configuration management functions to devices themselves is also possible. We will call the
former NMS-based device configuration management, and the latter on-device configuration management. Distributing
configuration management functions through on-device techniques enhances network scalability by enabling timely
automated detection of configuration inconsistencies, offloading your central NMS, thereby reducing network management
overhead traffic and enabling continuity of configuration management functions when centralized services are unavailable.
On-device configuration management encompasses services and software that run on network devices to help maintain
configuration, based on some form of policy.
In Junos devices, commit scripts are resident software code that is executed when configurations are changed. Commit
scripts enable instant error checking and avoidance by allowing users to codify network architectural constraints and
standards, thereby guarding against common errors and automatically enforcing best practices configurations.
Commit Scripts
Commit script logic is contained entirely in the device. In other words, constraint checking, verification functions, or both are
all performed by (and on) the Junos device with no reliance on an external NMS.
Customer Control
Junos commit scripts are scripts that enforce your customized configuration rules, so you have better control over how
devices are configured. Each time a new candidate configuration is committed, the active commit scripts are called and they
inspect the new candidate configuration. If a configuration violates your custom rules, the script can instruct the Junos
operating system to perform various actions, including generating custom error, warning, and system log (syslog) messages, or making corrective changes to the configuration.
Next, we examine some examples. Junos commit scripts can insist that each Asynchronous Transfer Mode (ATM) interface
not have more than 1000 permanent virtual connections (PVCs) configured, or that an interior gateway protocol (IGP) not use
an import policy that will import the full routing table or that all LDP-enabled interfaces are configured for an IGP. Another
example of Junos commit scripts is that they insist that the re0 and re1 configuration groups are set up correctly and that
nothing in the foreground configuration is blocking their proper inheritance. In all cases, potential configuration problems are caught before the configuration is ever activated.
Commit scripts can change a configuration, correcting errors as they are detected, or commit scripts can protect a
configuration based on implicit rules.
Examples
For example, commit scripts can automatically build a protocols ospf group containing every Ethernet interface
configured under the [interfaces] hierarchy. They can automatically configure family iso on any interface with family
mpls or apply a configuration group for any SONET interface with a description string matching a particular regular
expression. Problems are alleviated before they even occur.
Verifying and constraining device configurations is a task that could also be provided by off-device network management
tools (either commercially available or developed in-house). However, the on-device capabilities provided by commit scripts
can ensure that non-compliant configurations never get through the commit process into the running configuration.
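A minimal sketch of this kind of constraint checking follows. Real commit scripts run on the device and inspect the candidate configuration as XML; the plain-dictionary representation and the check_candidate helper here are stand-ins for illustration, mirroring the 1000-PVC rule mentioned earlier:

```python
# Illustrative stand-in for commit-script constraint checking. On a real
# device the candidate configuration arrives as XML and violations are
# reported through the commit process itself.
MAX_PVCS = 1000

def check_candidate(config):
    """Return a list of error strings; an empty list means the commit may proceed."""
    errors = []
    for ifname, props in config.get("interfaces", {}).items():
        pvcs = props.get("pvc_count", 0)
        if pvcs > MAX_PVCS:
            errors.append("%s: %d PVCs exceeds the %d-PVC limit" % (ifname, pvcs, MAX_PVCS))
    return errors

candidate = {"interfaces": {"at-0/1/0": {"pvc_count": 1200},
                            "at-0/1/1": {"pvc_count": 800}}}
print(check_candidate(candidate))
```

The key property the sketch demonstrates is that the check runs against the candidate before it becomes active, so a violation blocks the change rather than being discovered in production.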
Fault Diagnosis
Fault diagnosis follows a set of written procedures from a Network Operations Center (NOC) handbook, or similar documentation.
Many fault diagnosis procedures can be automated—either fully or partially—using free or commercial software.
Benefits of Automation
There are many benefits to automating fault diagnosis procedures, some of which are listed on the slide. In addition to
generally available automation tools, the Junos OS can provide the following on-device diagnostic functions:
• Junos OS operation (op) scripts, which can be executed manually by an operator at any time or which might be triggered automatically by an event policy.
NMS-based diagnostic tools complement these on-device functions in several ways:
• They can be automatically triggered by threshold crossings, traps, or syslog messages received from devices;
• They can incorporate other tools (such as traceroute) and trigger on-device diagnostics on remote devices;
• They have a broader scope of data sources than on-device diagnostics, and they might be able to synthesize a
more complete view of a problem; and
• They can interface with other management systems—for example, they might query a Customer Relationship
Management (CRM) system for customer contact information.
Remote management of Junos devices relies on remote procedure calls (RPCs) supported by either of the two APIs: the Junos NETCONF/XML API and Junos automation.
Op Scripts
Junos op scripts can execute any Junos command. You can capture, process, and automatically deliver the results to the
command-line interface (CLI) or a remote system.
Event Scripts
Junos event scripts can execute Junos commands or scripts in response to an event policy. Junos event scripts are very
similar to op scripts, but can also operate on data received from the Junos event subsystem. You can capture and process
diagnostics as well as transmit to remote systems for analysis.
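As a sketch of how an event script can be tied to the event subsystem through an event policy, the policy and file names below are hypothetical; the general shape follows the Junos event-options hierarchy:

```
event-options {
    policy react-to-link-down {
        events snmp_trap_link_down;
        then {
            event-script diagnose-link.slax;
        }
    }
    event-script {
        file diagnose-link.slax;
    }
}
```

With a policy like this in place, the device itself reacts to a link-down trap by running the diagnostic script, with no operator or NMS involvement.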
NMS-based diagnostic tools can correlate diagnostics from multiple devices, they support a wider variety of network
elements, and they can time-align diagnostic results from multiple points in the network.
NMS-based diagnostics can be triggered by multiple mechanisms, including device traps, syslog messages, trouble-ticket
interfaces, and others.
Tools such as Chef let you manage infrastructure with speed and scale in real time, while reducing the risk of human error.
Using Chef, you write abstract definitions of your infrastructure in Ruby and manage the definitions like you manage source
code. These abstract definitions are applied to the nodes in your infrastructure by the Chef clients running on those nodes.
When you bring a new node online, the only thing that the Chef client running on that node needs to know is which
definitions to apply.
Chef for Junos OS enables Chef support on selected Juniper Networks devices. You can use Chef for Junos OS to automate
common switching network configurations, such as physical and logical Ethernet link properties and VLANs, on these devices.
Within the Chef framework, the abstract infrastructure definitions are contained in reusable cookbooks and recipes:
• Cookbooks: Packages that contain the recipes, files, attribute definitions, and so on that describe a portion of
your infrastructure and how to deploy, configure, and manage it. For example, the apache2 cookbook
maintained by Chef contains recipes for installing and configuring an Apache HTTP Server.
• Recipes: Written in Ruby and describe the installation, configuration, and management of the infrastructure
elements.
• Resources: The major building blocks of recipes. A resource is a platform-neutral representation of an element
of the system and its desired state—for example, a service that should be started or a file that should be
written.
• Providers: The underlying platform-specific implementations that bring resources to their desired states. For
example, a resource might specify a particular software package to be installed, without describing how it is
installed. The providers associated with the resource direct the Chef client how to perform the actual
installation on specific platforms.
Junos PyEZ
Junos PyEZ is a Python micro-framework to remotely manage or automate Junos OS devices. The user is NOT required to be
a software programmer, have sophisticated knowledge of Junos OS, or have a complex understanding of the Junos OS XML API.
• Non-Programmers: Using PyEZ as a powerful shell. This means that non-programmers, for example, the network
engineer, can use the native Python shell on their management server (laptop, tablet, phone, and so on) as
their point-of-control for remotely managing Junos OS devices. The Python shell is an interactive environment
that provides the necessary means to perform common automation tasks, such as conditional testing,
for-loops, macros, and templates. These building blocks are similar enough to other shell environments, like
Bash, to enable the non-programmer to use the Python shell as a power tool, instead of a programming
language. From the Python shell, a user can manage Junos OS devices using native hash tables, arrays, and so
on, instead of using device-specific Junos OS XML or resorting to screen scraping the actual Junos OS CLI.
• Programmers: As we have discussed, there is a growing interest and need to automate the network
infrastructure into larger IT systems. To do so, traditional software programmers, DevOps, hackers, and so on,
need an abstraction library of code to further those activities. Junos PyEZ is designed for extensibility so that
the programmer can quickly and easily add new widgets to the library in support of their specific project
requirements. There is no need to wait on the vendor to provide new functionality. Junos PyEZ is not specifically
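A short sketch of this style of use follows. The Device class and facts dictionary reflect the PyEZ API, but connecting requires the junos-eznc package and a reachable device, so the fetch_facts function is shown only for shape; the offline demonstration uses a stand-in facts dictionary:

```python
def summarize_facts(facts):
    """Reduce a PyEZ-style facts dictionary to a one-line report."""
    return "{hostname} ({model}) running Junos {version}".format(**facts)

def fetch_facts(host, user, password):
    # Real PyEZ usage (requires the junos-eznc package and a reachable
    # device); included here only to show the shape of the API.
    from jnpr.junos import Device
    with Device(host=host, user=user, passwd=password) as dev:
        return dict(dev.facts)

# Offline demonstration with a stand-in facts dictionary:
sample = {"hostname": "core-sw1", "model": "EX4300", "version": "15.1R1"}
print(summarize_facts(sample))
```

This is the "power shell" pattern the text describes: device data arrives as native Python dictionaries, so a network engineer can filter, loop, and report without touching XML or screen scraping.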
What Is SDN?
Enterprises and service providers are seeking solutions to their networking challenges. They want their networks to adjust
and respond dynamically, based on their business policy. They want those policies to be automated so that they can reduce
the manual work and personnel cost of running their networks. They want to quickly deploy and run new applications within
and on top of their networks so that they can deliver business results. And they want to do this in a way that allows them to
introduce these new capabilities without disrupting their business. This list is a tall order, but SDN has the promise to deliver
solutions to these challenges. How can SDN do this? To decode and understand SDN, we must look inside networking
software. From this understanding, we can derive the principles for fixing the problems.
The SDN solution must also solve the current challenges of the network. Networks must adjust and respond dynamically to
changes in the network. This network agility can be accomplished through a decoupling of the control and forwarding planes
on individual network devices. The decoupling of the control and forwarding plane also lends to alleviating the need for
manually configuring each and every network device.
In a traditional network, the control plane resides on each network device. This is problematic because each network device at best only has a partial
understanding of the network. This partial understanding of the network leads to inefficient traffic forwarding and bandwidth
utilization.
When you move the control plane to a logically centralized position in the network, and leave the forwarding plane
distributed, you suddenly have an entity that has a complete view of the network. From a high level perspective, this
centralized device is the SDN controller. The SDN controller is able to make the control plane decisions. In other words, it
tells each network device how and where to forward the traffic. Then, each network device is able to focus on forwarding the
traffic. The end result is efficient traffic forwarding and use of network bandwidth.
Another benefit of decoupling the control plane and the forwarding plane is that the control plane compute functions, which
are largely redundant on each networking device, are moved to the SDN controller. This movement of control functions
With STP, switch ports operate in blocking or forwarding modes. There are other port modes that are beyond the scope of this use case, but suffice it to say
that STP causes certain ports to forward traffic and certain ports to not forward traffic. There are other types of STP that help
To keep the topology loop free, the SDN controller takes over as the central control plane and makes the path decisions for the switches. These path
decisions are passed on to the switches. Next, the switches, which still retain their individual forwarding planes, forward the
traffic as prescribed by the SDN controller. Having the control functions logically handled by the SDN controller results in the
optimal path being selected and the redundant path being available in case of a failure scenario in our example.
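The controller's central path decision can be sketched as a shortest-path computation over the full topology, something no single switch could do with only a partial view. The three-switch topology below is invented for illustration:

```python
import heapq

# Toy sketch of the controller's job: with a full view of the topology it
# can compute the best path centrally (Dijkstra's algorithm) and keep the
# redundant path in reserve for failure scenarios.
def shortest_path(graph, src, dst):
    """Return (cost, path) over a dict-of-dicts weighted graph."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

topology = {"sw1": {"sw2": 1, "sw3": 4},
            "sw2": {"sw1": 1, "sw3": 1},
            "sw3": {"sw1": 4, "sw2": 1}}
print(shortest_path(topology, "sw1", "sw3"))  # the sw1-sw2-sw3 path wins
```

Here the direct sw1-sw3 link (cost 4) stays available as the redundant path while the controller programs the cheaper two-hop path, which mirrors the STP use case above.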
What Is Contrail?
Juniper’s Contrail is a simple, open, and agile SDN solution that automates and orchestrates the creation of highly scalable
virtual networks. These virtual networks let you harness the power of the cloud—for new services, increased business agility,
• Simple: Creates virtual networks that integrate seamlessly with existing physical networks, and that are easy to
manage and orchestrate.
• Open: Avoids expensive vendor lock-in with an open architecture that interoperates with a wide range of hypervisors,
orchestration systems, and physical networks.
• Agile: Speeds time to market for new services by automating the creation of virtual networks that interconnect
private, hybrid, and public clouds.
Service providers can use Contrail to enable a range of innovative new services, including cloud-based offerings and
virtualized managed services. For enterprises, Contrail can increase business agility by enabling the migration of
Network programmability uses SDN as a compiler to understand and translate abstract commands into specific rules and
policies that automate provisioning of workloads, configure network parameters, and enable automatic chaining of services.
This concept hides complexities and low level details of underlying elements (ports, virtual LANs (VLANs), subnets, and
others) through abstraction to allow for effortless extensibility and simplified operational execution.
Network Functions Virtualization (NFV) provides dynamic service insertion by automatically spinning up and chaining together
Juniper and third-party service instances that dynamically scale out with load balancing. This concept reduces service
time-to-market, improving business agility and mitigating risk by simplifying operations with a more flexible and agile virtual
model.
Big data analytics queries, ingests, and interprets structured and unstructured data to expose network knowledge using
representational state transfer (REST) APIs and graphical user interfaces (GUIs). This concept enables better insight,
proactive planning and predictive diagnostics of infrastructure issues employing both near-real-time and historical
information on application usage, infrastructure utilization, system logs, network statistics like flows, latencies, jitter, and
other data.
Open system architecture supports standards-based protocols and open orchestration platforms to enable vendor-agnostic
interoperability and automation. We discuss more about open source and OpenContrail on the next page.
Visualization provides an exception-based dashboard and user interface with hierarchical (virtual networks to individual
flows) presentation of real-time and historical network data. This concept simplifies operations and decision-making by
providing a simple, yet comprehensive, view into infrastructure to help efficient correlation and orchestration across physical
and overlay network components.
Contrail is an extensible system that can be used for multiple networking use cases but there are two primary drivers of the
architecture:
• Cloud Networking—Private clouds for enterprises or service providers, Infrastructure as a Service (IaaS) and
Virtual Private Clouds (VPCs) for Cloud service providers.
• NFV in service provider network—This provides Value Added Services (VAS) for service provider edge networks
such as business edge networks, broadband subscriber management edge networks, and mobile edge
networks.
The Private Cloud, the VPC, and the IaaS use cases all involve multi-tenant virtualized data centers. In each of these use
cases multiple tenants in a data center share the same physical resources (physical servers, physical storage, physical
network). Each tenant is assigned its own logical resources (virtual machines, virtual storage, virtual networks). These logical
resources are isolated from each other, unless specifically allowed by security policies. The virtual networks in the data
center might also be interconnected to a physical IP VPN or L2 VPN.
The NFV use case involves orchestration and management of networking functions such as firewalls, intrusion detection or
prevention systems (IDS/IPS), deep packet inspection (DPI), caching, and WAN optimization in virtual machines instead of on
physical hardware appliances. The main drivers for virtualization of the networking services in this market are time to market
and cost optimization.
Within a typical environment, there are virtual machines (VMs), which could be anything from tenant VMs to a VM designed
for some network function such as firewalling or deep packet inspection. Virtual networks are an integral building block as
they connect the aforementioned VMs together. Finally, gateway devices are used to bridge virtual networks to physical
networks.
Within the orchestration system, when the network application programming interface (API) is called, a Layer 2 infrastructure is built. What Contrail does is
intercept that call to build the network. However, Contrail builds its own kind of network because, at the abstracted layer in
the orchestration system, the request is defined as asking for a network without describing how that network should be built.
It is up to the underlying systems to decide how that network should be built. For the most part, the orchestration system
does not care how a network is built, just that it is built.
We Discussed:
• Design considerations and best practices for network automation; and
Review Questions
1.
Network Automation is necessary for creating, monitoring, and updating device configurations reliably while simultaneously decreasing
operating expenses.
2.
Automation solutions such as Chef and Junos PyEZ assist in network management by streamlining automation solutions and
simplifying complicated scripting functions.
Chapter 12: Putting Network Design into Practice
We Will Discuss:
• The foundational topics that have been taught throughout the course; and
• Creating a network design proposal that satisfies customer requirements and business needs.
You can set yourself apart from the competition by paying attention to the details and responding to the customer with a design that
meets (or exceeds) the customer's goals, requirements, and expectations.
From the customer's request, you will gather requirements and project boundaries. You will need to understand how those requirements and boundaries will impact your
design and determine if you can establish a plan that works around the boundaries set forth while still meeting—or exceeding—the customer's requirements.
A pilot uses a checklist to verify that an aircraft is safe to fly before taking off. You might even make a checklist before you go shopping for groceries to make sure you have
the items that will satisfy your meal plans for the week. Certainly, network design is no different. A checklist will help you
verify that a job is not skipped and that no task is unfinished before responding to the customer.
No two network design checklists are identical, but your checklist should have steps that accomplish the main objectives
listed on the slide. Each objective might involve several sub-steps. For example, you will certainly need to perform multiple
steps when attempting to understand a customer's existing environment. The exact steps in the checklist are not as important
as making sure you accomplish all of the tasks needed to respond to the customer in a manner that places your proposal
ahead of the competition.
customer’s expectations. The next step is responding to the customer’s Request for Proposal (RFP). This step is where you
will submit, in writing, your design to the customer. As we mentioned earlier in the course, every RFP is different; thus, every
RFP response will be different. There are key sections, however, that should be in every proposal—the executive summary,
solution overview, and technical specifications. We discuss these sections, in detail, on subsequent slides.
Analyzing customer data is an iterative process—a process that you’ll repeat until you fully understand what the customer
needs. You should feel completely comfortable with your vision of the customer's request before responding to the RFP.
The customer will expect your response to address their key requirements. Use prudence when writing your proposal,
carefully using their terminology and formatting as you respond to specific requests in the RFP. Be sure to outline the
benefits of using your design so that the customer understands why they should pick your plan over others.
The executive summary presents Juniper Networks' (and your) value proposition to the customer. The executive summary will likely be read by all key decision makers.
In some cases, the executive summary might be the only section of the proposal that certain decision makers will read. The
executive summary is your opportunity to demonstrate your strengths against the requirements the customer has set and
introduce themes that illustrate the benefits of selecting your proposal over the competition.
1. Write a summary that all decision makers of the organization can understand. Use words that non-technical
individuals can decipher so that all decision makers can easily comprehend this section of your document.
2. Focus on organizational issues and keep the technical aspects brief. The technical content will be
provided in the solution overview.
3. Keep it short and simple. For example, allow one or two pages for every fifty pages of technical content in your
proposal.
4. Avoid using templates, canned responses, or other non-personal responses in your proposal.
5. Avoid using clichéd phrases and statements. For example, a phrase such as “Juniper Networks is delighted to
be given the opportunity to respond to this RFP...” makes you—and Juniper—look subservient. This type of language weakens your position.
1. Start by stating the need or problem of your customer. Catch their attention by addressing the requirements
they have. Articulate the customer’s requirements in some—but not too much—detail. State that you intend to
demonstrate throughout your proposal how you can deliver against their requirements.
2. Spell out the benefits and payoffs of meeting the need or solving the problem, as well as the possible
consequences of inaction. Point out the probable return on investment. Identifying the business benefits will
establish a common goal and measurable objectives, and will also show that you understand and empathize
with their requirements.
3. Provide an overview of your proposed solution. This is the main section of the executive summary and describes
how you will solve the customer’s problem or meet their specified requirements. Recommend your proposed
solution and discuss specifically how you will deliver it.
4. Supply relevant information that outlines why the customer should choose your design. Demonstrate that
you are a credible, experienced, and reliable designer and that Juniper Networks is a reliable provider of the products
and solutions. Address any cost concerns and highlight the return on investment (ROI).
Avoid closing with subservient phrases such as “...if we can be of further assistance...” or “thanking you in advance...”. Choose a closing statement that treats them as an equal, such as
the example on the slide.
scope of their request. Outline the key benefits of your solution, but keep it short and to the point. You can be a little more
technical in explaining your solution here, but assume that non-technical executives will be reading this section, so use
vocabulary that all readers of the document will understand. This section should be no longer than three pages.
Include your proposed network topologies, your bill of materials, and implementation road maps in this section as well.
There are obviously other sections that you would include in your RFP response, such as legal terms, service and support
details, training plans, warranty information, and so forth. These details are beyond the scope of this class and are omitted
from this content. For concerns or questions regarding these matters, see your Juniper Networks account representative.
Now it is time to put what you have learned into practice! The final lab will provide you with a scenario in the form of an RFP that you will respond to. You will use the details in
the RFP to assess the details of the customer’s business and technical goals, as well as their requirements for a new
network. You will be expected to be able to provide a campus, WAN, and data center design for the customer. Your RFP
response should include an executive summary, solution overview, and technical specifications for your design.
We Discussed:
• The foundational topics that have been taught throughout the course; and
• Creating a network design proposal that satisfies customer requirements and business needs.
• Content Explorer: Technical documentation for Junos OS-based products by product, task, and software
release, and downloadable documentation PDFs.
• Feature Explorer: Junos OS and ScreenOS software feature information to find the right software release and
hardware platform for your network.
• Learning Bytes: Concise tips and instructions on specific features and functions of Juniper technologies.
• Installation and configuration courses: Over 60 free Web-based training courses on product installation and
configuration (just choose eLearning under Delivery Modality).
• J-Net Forum: Training, certification, and career topics to discuss with your peers.
• Juniper Networks Certification Program: Complete details on the certification program, including tracks, exam
details, promotions, and how to get started.
• Technical courses: A complete list of instructor-led, hands-on courses and self-paced, eLearning courses.
• Translation tools: Several online translation tools to help simplify migration tasks.
www.juniper.net
Appendix A: Network Migration Strategies
We Will Discuss:
• An overview of the steps necessary to migrate a network; and
Migration Overview
The slide lists the topics we will discuss. We discuss the highlighted topic first.
Over time, a network ages and no longer serves the business as well as newer technology. The solution is network migration—removing the older solution and replacing it with something newer and better.
The slide provides three examples where the network is migrated from something old to something new:
1. Elements of the old network are replaced with new components to improve certain aspects of the network
whilst retaining key elements of the old network.
2. The old network is completely replaced by a new network.
3. The old network, which consisted of several components, is replaced with a newer, consolidated network
infrastructure.
A Migration Methodology
Juniper Networks follows a common methodology when migrating a network, regardless of the size or scope of the project.
The methodology requires that you not only understand the current state of the customer’s network, but that you also
understand what the customer desires when the project is complete. The concept is similar to any design project you work
on for a customer—it requires that you can provide a clear outcome, which in this case is a tested and low risk migration plan
and execution. The outcome is driven by an integration of Juniper Networks’ best practice methodologies.
The migration methodology shown on the slide follows three main steps: analysis, migration plan, and migration execution.
We discuss this methodology further in the next section.
Migration Approaches
The slide highlights the topic we discuss next.
Each phase of the migration plan is developed with the customer to maximize the desired outcome whilst simultaneously identifying and minimizing any associated risk. We
examine each phase of the plan on the next several slides.
1. Stakeholder Engagement: As a representative for Juniper, you will be responsible for determining the customer
stakeholders. This will involve interviews with stakeholders, a communication plan, and determining the
governance model. You will also need to identify user roles and responsibilities, and the handover process.
Once you have determined key stakeholders, you will then define a migration plan that involves an inventory
of the resources and skills available to you, project governance, steering committee, approval processes, escalation
procedures, and so forth. Meanwhile, the customer will need to ensure that all stakeholders are engaged and
internal commitments agreed upon. The customer should provide you with processes, technology and people to
Confirm target and intermediate network and systems involved in the migration. Outline a migration strategy
and confirm the design and components with the customer. Create an implementation plan analysis and solicit
feedback from the customer. During this sub-step, the customer should ensure that all business and technical
objectives are provided. The customer should also provide existing network systems and operational details to
you, as well as participate in workshops. By the end of this step, the customer should agree on the outlined
migration strategy.
Continued on the next page.
3. Migration Constraints and Analysis: During this sub-step, you will identify all process and operation
considerations, such as lab availability, applications and services inventory, security requirements,
maintenance window requirements and processes, and also field availability and skills. Identify network and
systems considerations, such as systems provisioning, feature parity and design impacts, and 3rd party
dependencies. Finally, identify the migration operational considerations. These will include capacity
requirements, performance requirements, and customer management requirements. The customer’s role in
this step will be to ensure that all planning operational and resource constraints are shared. The customer
should also ensure that all network and systems considerations are provided.
4. Migration Strategy: Once you have completed sub-steps 1 through 3, you are ready to formulate a migration
strategy. Identify migration testing and trial periods by agreeing on end-to-end migration testing coverage. Agree
with the customer on lab, pilot, and field migration testing. Confirm dependencies, resources, and inputs.
Develop a migration strategy and risk report. Provide a field trial strategy pilot, determine resource availability,
and author a documentation plan. At this time, the customer should ensure that all testing and trials are agreed
upon, as well as all impacted network and service issues are understood. The customer will also ensure that all
end customer service requirements have been considered.
1. Migration Planning: At this point, you are responsible for creating a project plan for the people and processes
involved in the plan. Create an execution plan for services and the network, including a rollback procedure,
should the migration run into any issues. Ensure the availability, as well as any customization requirements for
the migration tools you will be using. Provide an impact and risk description for each migration step. Offer a
detailed scheduling document, including project planning, network freeze planning, and maintenance
windowing. Describe any operation impact details. Plan for handing over the network to the customer, as well as
training. The customer will need to start the maintenance window request process. They will also need to create
a migration plan for all impacted applications, networks, people, and processes. The customer will need to
review and agree upon the stated migration plan.
2. Migration Acceptance Test Plan: During this sub-step, you will define the acceptance test plan, including
post-migration and production steps. Select a field trial site. The customer, during this sub-step, should agree
on the acceptance test plan criteria and identify or confirm the pilot site.
3. Migration Validation Testing: Your responsibility is to build the lab environment for the migration. Conduct all
planned tests, including migration testing. Validate the network and systems integration. Review test results
according to the agreed acceptance criteria. Respond to all identified test issues. The customer should provide
and ensure connectivity to 3rd party equipment. They must review test results according to the agreed
acceptance criteria and provide feedback on all identified issues.
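As a concrete example of a rollback safeguard for the execution plan, Junos devices support the `commit confirmed` command, which automatically reverts a committed change unless it is explicitly confirmed within a timer. The sketch below is illustrative only; the 10-minute timer is an arbitrary example value.

```
[edit]
user@device# commit confirmed 10

    (verify services and connectivity within the 10-minute window)

[edit]
user@device# commit
```

If the second commit is never issued (for example, because the change cut off management access), the device rolls back to the previous configuration on its own.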
1. Pre-Migration Readiness: Open one (or more) proactive Juniper Technical Assistance Center (JTAC) cases for all
products relating to the migration. Execute pre-migration checks, including configuration, functionality, and
operational state checks on a pilot site or device and report readiness to the customer. Meanwhile, the
customer should execute the same pre-migration checks on 3rd party equipment and report readiness to you.
2. Migration Cutover: Execute your migration plan and cutover for the pilot site or device to the production
environment. The customer should notify service and application owners, as well as any of their customers of
the cutover. They should also provide assistance to you during the migration plan and cutover execution.
3. Post-Migration Acceptance Testing: After the migration cutover, execute your post-migration acceptance test
plan and document the results. Troubleshoot any issues identified during acceptance testing. Fine tune the
setup where needed and offer post-migration monitoring support. Hand over any open issues to JTAC. Collect
and document lessons learned. During this step, the customer will notify service and application owners, or any
of their customers to execute their acceptance test plans and report any issues. They should review
post-migration acceptance testing results with you and confirm their acceptance of the results.
4. Migration Handover: Now that testing is complete, you will hand over the project to the customer and transfer
any knowledge gained also. This information should also be exchanged with JTAC. Review any lessons learned
with the customer and revise the migration plan, if required, before the next migration. Create a project closure
summary. The customer should also notify service and application owners, as well as any of their customers
that the migration window has ended.
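The pre-migration checks described in step 1 are often captured as a simple snapshot of operational state taken before the maintenance window. On a Junos device this might look like the following; the file names are arbitrary examples, and the exact command list should match the agreed acceptance criteria.

```
user@device> show version | save pre-migration-version.txt
user@device> show chassis hardware | save pre-migration-hardware.txt
user@device> show system alarms | save pre-migration-alarms.txt
user@device> show interfaces terse | save pre-migration-interfaces.txt
user@device> show route summary | save pre-migration-routes.txt
```

The same commands are repeated after the cutover so that the post-migration state can be compared line by line against the baseline.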
• Analysis: Automation simplifies data collection and accelerates analytical logic, such as extracting configuration data. Automation can also streamline the implementation by using scripted bulk migration and comprehensive monitoring. Finally, automation can enhance validation by automating acceptance testing and providing detailed reports.
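The validation half of this workflow can be sketched in a few lines of Python. This is an illustrative example only, not a Juniper tool: the snapshot structure (check name mapped to a numeric value collected from the device) and the check names are assumptions made for the example.

```python
# Illustrative sketch: compare a pre-migration state snapshot against a
# post-migration snapshot and report any regressions against the baseline.

def validate_migration(pre, post, tolerance=0.0):
    """Return a list of issues found when comparing two state snapshots.

    Each snapshot maps a check name (e.g. 'interfaces_up', 'bgp_peers')
    to a numeric value collected from the device. A check fails when the
    post-migration value drops below the baseline (minus any tolerance).
    """
    issues = []
    for check, baseline in pre.items():
        current = post.get(check)
        if current is None:
            issues.append(f"{check}: missing from post-migration snapshot")
        elif current < baseline * (1 - tolerance):
            issues.append(f"{check}: {current} < baseline {baseline}")
    return issues

# Example snapshots: one interface did not come back up after the cutover.
pre_snapshot = {"interfaces_up": 48, "bgp_peers": 4, "ospf_neighbors": 6}
post_snapshot = {"interfaces_up": 47, "bgp_peers": 4, "ospf_neighbors": 6}

report = validate_migration(pre_snapshot, post_snapshot)
for issue in report:
    print(issue)
```

In practice the snapshots would be parsed from saved command output, and the acceptance criteria agreed with the customer would determine which checks and tolerances belong in the comparison.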
Migration Examples
The slide highlights the topic we discuss next.
The example on the slide describes the results from a campus LAN refresh.
We Discussed:
• An overview of the steps necessary to migrate a network; and
• Common migration approaches and examples.
Review Questions
1. The three main phases to Juniper Networks’ migration approach are Analysis, Migration Plan, and Migration Execution.
2. Including automation scripts or tools in your migration plan can help drive efficiency, accelerate project delivery, and simplify migration workflows. Automation can also help enable precision and promote accuracy, as well as mitigate risk.
Appendix B: Sample Campus Designs
We Will Discuss:
• Various campus network topologies; and
• Sample campus designs for each topology.
Campus Topologies
The modern campus can be further described as falling within five topologies:
• A horizontal topology of low to medium density such as a small office building with only a few floors and perhaps
1–4 wiring closets per floor.
• A vertical topology will have a higher density of users with multiple points of access such as in a multi-floor
building.
• A metro campus is essentially a group of buildings located in close proximity to each other and connected with fiber. User density is usually lower in a metro campus.
• A widely distributed campus has multiple types of buildings (such as horizontal, vertical, and metro) spread out over a wider geographic area and connected through a LAN or WAN. User density will vary.
• Finally, a hub and satellite topology would be one or two campuses or data centers connected to several smaller
branch offices. The main site would typically have a higher user density while the branch sites would have a
lower user density.
The least complex campus topology you will encounter, in terms of geographic considerations, is a horizontal topology. It
will be a building of perhaps one to four floors and between 250 and 1000 users. It will have a minimal number of wiring
closets, most likely one WAN or Internet connection, and possibly a second for backup.
EX4300 Series switches participating in Virtual Chassis are placed in the access closets. Using Virtual Chassis technology
can greatly reduce the number of switches and uplinks you must manage.
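As an illustration, a preprovisioned Virtual Chassis on EX4300 switches can be defined with just a few statements. This is a minimal sketch; the serial numbers are placeholders, and member roles should follow your redundancy design.

```
set virtual-chassis preprovisioned
set virtual-chassis member 0 role routing-engine serial-number <serial-0>
set virtual-chassis member 1 role routing-engine serial-number <serial-1>
set virtual-chassis member 2 role line-card serial-number <serial-2>
```

Preprovisioning pins each member ID and role to a specific serial number, so a replacement switch cannot accidentally join the Virtual Chassis with the wrong role.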
the network is less complex because no links are blocked, spanning tree is not used, and no VRRP-like protocols are being
used.
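This simplification is possible because each access switch connects to the core over a single aggregated Ethernet bundle whose member links terminate on different Virtual Chassis members, so both uplinks forward traffic and there is nothing for spanning tree to block. A minimal sketch of such a LAG in Junos follows; the interface names are examples only.

```
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/1/0 ether-options 802.3ad ae0
set interfaces xe-0/1/1 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching
```

Because the Virtual Chassis presents itself as one logical switch, the bundle also removes the need for a first-hop redundancy protocol such as VRRP at the access layer.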
closets—at least two per floor—and most likely two WAN or Internet connections.
Aggregation needs to occur for each building. Depending on the height of the building, intermediate aggregation might be
required between floors if there are more than 100 meters between the wiring closet switch and the building aggregation
layer. The modular Juniper Networks EX6200 Ethernet switch is used for denser user populations within 100 meters of a
wiring closet. Again, SRX Series devices serve as the firewall and external facing routed interface for the campus. Redundant
WLAN controllers and dual connections to APs provide redundancy for the WLAN.
Chassis aggregation switches. The redundant WLAN controllers located in Building A in this example are directly connected
to the aggregation tier (in a two-tier design they might be connected in the core).
200 to 250 users per floor. A metro campus topology will have several wiring closets (at least two per floor) and most likely
two WAN or Internet connections that terminate to a core wiring closet (most likely in the main building). Dark fiber is
available between all buildings and the core wiring closet, as well as between all floors within the same building.
access layer Virtual Chassis across wiring closets, so each floor can be managed as a single switch. An EX8200 Virtual
Chassis provides a redundant Layer 2 and Layer 3 collapsed core to which WLAN controllers and WAN routers are connected.
Access switches connect directly to the collapsed core through multiple 10GbE LAG links, which eliminate the need to run
STP or VRRP-like protocols, eliminating blocked links. WLAN controllers, WAN routers, and miscellaneous building services
centers. In many cases, there is a need for traffic segmentation between certain campuses and data centers. This is
delivered by the MX Series routers using MPLS.
• In each campus, an EX8200 Virtual Chassis provides a redundant Layer 2/Layer 3 10GbE collapsed core;
• All connections from the access layer to the collapsed core are dual 10GbE LAG connections;
• Wireless LAN controllers in each campus are clustered for redundancy in their respective networks; and
• Each campus WLAN is configured as its own mobility domain.
• Building A’s EX4300 access switches connect directly to the EX8200 Virtual Chassis in Building A, using a dual
10GbE LAG;
• An EX4600 Virtual Chassis in Building B acts as the aggregation switch for Buildings B, C, and D;
• The EX4600 Virtual Chassis in Building B connects to the EX8200 Virtual Chassis through a 2-port or 4-port
10GbE LAG;
• WLCs are clustered for redundancy; and
• The campus WLAN is configured as its own mobility domain.
have different requirements depending on their size and other specific needs.
• All sites have redundant WAN connections using multiple carriers or Internet VPN;
• Redundant SRX Series devices are used in headquarters while the large and medium branches use clustering;
• The small branch could use redundant WAN connections such as a T-1 through their primary carrier and 3G
wireless Internet VPN as a backup connection;
• EX4300 or EX3300 switches provide LAN connectivity and can also provide PoE+ ports; and
• A Branch SRX Series device can provide firewalling and UTM (such as IDP, antivirus, Web filtering, and access
control).
• The medium branch office could use redundant WAN connections such as a T-1 through their primary carrier,
and DSL or cable Internet VPN as a backup connection;
• EX4300 or EX3300 switches provide LAN connectivity and can also provide PoE+ ports; and
• Branch SRX Series devices provide clustered high availability firewalling and UTM.
• Redundant WAN connections, such as DS3, using primary and secondary carriers;
• EX4300 Virtual Chassis switches provide redundant LAN connectivity and PoE+ ports; and
We Discussed:
• Various campus network topologies; and
• Sample campus designs for each topology.
Appendix C: Sample Request for Proposal
Overview
In this appendix, Juniper Networks has helped one of its regional distributors, ACME Corporation, respond to a request for proposal (RFP) put out by XYZ, Inc. ACME Corporation is a large reseller of network equipment, a professional services provider, and a strategic partner of Juniper Networks. XYZ, Inc. is a growing pharmaceutical company. XYZ, Inc. will soon be updating its data center, which houses its own network infrastructure as well as the network equipment of several of its strategic partners. As part of the update, XYZ, Inc. will add more capacity and make the data center more modern and efficient.
In this appendix, you will:
• Assess the customer’s details and request for proposal (RFP) requirements.
• Review ACME corporation’s response to the customer’s requirements.
Sally Stevens
Friday 14th March 2015
JUNIPER NETWORKS CONFIDENTIALITY NOTICE
Thank you for the opportunity to submit this non-binding (other than pricing for now-available products listed in our quotes), subject to contract, proprietary and confidential proposal for your consideration.
Trademarks
Juniper Networks, the Juniper Networks logo, Junos, NetScreen and ScreenOS are registered
trademarks of Juniper Networks, Inc. in the United States and other countries. All other
trademarks, service marks, registered trademarks or registered service marks are the property of
their respective holders.
This RFI contains information relating to Juniper Networks’ development plans and plans for future
products, features or enhancements (“SOPD”). SOPD information is subject to change at any
time, without notice. Except as may be set forth in definitive agreements for the potential
transaction, Juniper Networks provides no assurances, and assumes no responsibility, that such
future products, features or enhancements will be introduced. Therefore, XYZ, Inc. should
ensure that purchasing decisions:
a) are not being made based upon reliance of timeframes or specifics outlined in the SOPD;
and
b) would not be affected if Juniper Networks delays or never introduces the future products,
features or enhancements.
RFP Contacts
Name Sally Stevens
Title Account Director
Telephone
Email
Title Sales Engineer
Telephone
Email
Contents
RFP Contacts ........................................................................................................................... 6
Executive Summary ................................................................................................................ 8
Partnership ............................................................................................................................. 10
Technical Summary ............................................................................................................... 11
Technical Specifications....................................................................................................... 13
Platform Roadmap ................................................................................................ 13
Physical Requirements ......................................................................................... 19
Future enhancements ..................................................................................................... 37
Data Centre Interconnect conclusion .............................................................................. 38
Transition Plan ....................................................................................................................... 38
Legacy to Point-of-Proof migration........................................................................ 39
VLAN Harmonisation .................................................................................................... 47
Feature and Service Support ................................................................................ 47
Data Centre Operations’ Standards ...................................................................... 50
Virtualization, Consolidation, Expansion and Cost Innovation .............................. 51
ILNP (Identifier-Locator Network Protocol), which Juniper is considering as an alternative
standard to Cisco LISP ................................................................................. 57
Summary ............................................................................................................... 57
Sales Services ...................................................................................................... 60
Product Introduction .............................................................................................. 62
Maintenance.......................................................................................................... 75
References ............................................................................................................ 76
Supporting Material ............................................................................................... 77
Commercial Offering ............................................................................................................. 78
Legal ....................................................................................................................................... 79
Appendix One - Appendices................................................................................................. 80
Executive Summary
Juniper Networks is pleased to present to XYZ, Inc. our Data Centre solution.
We offer a proven solution such that XYZ, Inc. can deploy a cost-effective family of switches that
delivers the high availability, unified communications, integrated security and operational
excellence which you need today, whilst also providing a platform for supporting your
requirements in the future.
Working with our authorized partners, Juniper has a broad, deep and successful track record in
delivering Data Centre technology that is easy to deploy and manage which is both reliable and
cost effective, along with software and services to manage the network in a virtualized data
centre environment.
Juniper Networks has a strong footprint and track record within the Public Sector in the UK.
Several Central Government Departments utilise Juniper security solutions, as well as our
switching portfolio. We have PSN customers, for example Regional Government, as well as a
large percentage of Higher Education facilities in the UK. DFTS, Janet, and Dante all run on secure Juniper networks.
The Juniper switch solution offers an innovative alternative to the cost and complexity of
managing a legacy network.
Our solution will help lower your total cost of ownership through a flatter design, with a single
Networking OS, common management structure and will reduce the space, power and cooling
requirements.
Juniper solutions are designed to deliver scalable port density and performance, providing XYZ,
Inc. with an economical pay-as-you-grow approach to building your flexible and high performance
network.
The proposed solution for the DC Network Refresh provides the following key business benefits:
• Increased business agility and reduced time to market for new services.
• Pay As You Grow – Juniper’s solution uses the latest QFX switch technology
(QFX5100) based around 1RU and 2RU switch platforms enabling a true pay as you
grow model.
• Financial Options – Juniper proposes traditional capex pricing models as well as flexible
pay as you go and FMV leasing.
• Green Savings – Substantial quantifiable savings on power and rack space. *See
appendix one for detail.
• Cost Performance - Juniper is a pure play organisation with significant investment back
into R & D, delivering cutting edge products at a lower cost of ownership and
procurement cost.
Thank you for the opportunity to respond. Juniper has a dedicated team available to meet with
you at your earliest convenience to discuss our proposal and answer any questions which you
may have.
Partnership
Juniper Networks Partner Advantage Program
Juniper Networks’ go-to-market model puts a deliberate dependency on our Authorised Partner
Community. Juniper Partners are backed up by Juniper Partner Advantage, a partner program
designed to ensure that Partners are rewarded, trained and certified to sell, design and support
Juniper products and solutions.
For the purpose of this RFP Juniper is partnering with ACME to deliver the UK Pricing schedule,
ACME is also XYZ, Inc.’s Distribution Partner, so we trust that this is the best possible
arrangement.
Juniper also has a strong partnership with XYZ, Inc. XYZ, Inc. is a Juniper Elite Portfolio and
Services partner across the complete Juniper portfolio. XYZ, Inc. can deliver and support the
entire Juniper product suite, from High End Routing, Routing, Switching, Firewalls, Secure
Routers, and Remote Access solutions. XYZ, Inc. has many Juniper certified Pre and Post
Sales Engineers and also takes part in our Ingenious Champion Program, being one of the very
few Juniper partners globally to have multiple Ingenious champions (subject matter experts in
Juniper technologies).
In addition to the delivery of Juniper solutions to a wide range of Government and Enterprise
customers (including Home Office, Centrica, G4S and Marks and Spencer), XYZ, Inc. Services
also run the XYZ, Inc. Core Network on a Juniper platform and utilise Juniper technologies
throughout its business. XYZ, Inc. is also one of Juniper’s selected EMEA marketing partners,
and is invited to participate in the Juniper Partner Advisory Council. In support of XYZ, Inc., we
have assigned a full time Partner Manager and Technical Account Manager.
Technical Summary
The Juniper Networks technology solution offers a reliable, repeatable, secure and scalable
network that reduces the current physical footprint and meets XYZ, Inc.’s requirements for today,
whilst scaling to meet XYZ, Inc.’s requirements in the future.
The proposed solution for the DC Network provides XYZ, Inc. with the following key technical
benefits:
• Flexibility - Our solution is ultimately flexible; with virtual-chassis fabric enabling
software defined switch boundaries allowing the hardware to provide L2/L3
services today with a long roadmap through to secure full SDN implementation.
effective, compact form factor.
• Scalability - Our solution is scalable from one virtual chassis fabric of two switches
to 32 switches per Virtual Chassis Fabric (VCF) , which can be replicated into a
series of VCF’s to meet the forthcoming implementation of virtual servers, also
allowing implementation of an elastic service to meet changing requirements
without the need for large upfront capital costs.
• Reliability - Our solution leverages much of the same field-proven Juniper Carrier
technology―including high performance ASICs, system architecture and Junos
software―that powers the world’s largest service provider networks. The result is a
robust, time-tested and highly reliable infrastructure solution for high performance
networking.
• Data Centre Savings – By utilizing switches that are 1RU in size, the Juniper
solution reduces space and power consumption but increases the scale and size of
the services that can be supported and developed. * See Appendix one for data
• Interoperability – Our solution is fully standards compliant and, due to our open
APIs, can work with legacy hardware and other vendors as required to support
migration from your current estate. We will fully support and plan your migration in
an incremental fashion. We also ensure complete configuration and testing prior to
deployment.
• Longevity of spares and support – The hardware proposed is new to the market
and, as a result, XYZ, Inc. can expect the longest anticipated lifetime, as well as
support for 5 years from the announcement of any end of sale date. Juniper also
• Management – Junos Space is a comprehensive Network Management Solution
that simplifies and automates management of Juniper’s switching, routing, and
security devices. Junos Space consists of a network management platform for
deep element and FCAPS management, plug-n-play management applications for
reducing costs and provisioning new services quickly, and a programmable SDK
for network customization. With each of these components working cohesively,
Junos Space offers a unified network management and orchestration solution.
All of these elements contribute to a simple cost efficient solution that provides XYZ, Inc. with a
solid foundation for the development of faster applications running on high speed virtual
platforms, to enable you to provide an excellent service to your Customer.
Technical Specifications
Platform Roadmap
Table 13.1: Required platform roadmap detail.
Ref Requirement Weighting
R001 Please provide a platform roadmap to cover a minimum 5 year period from Mandatory
[6.2.1] the middle of 2015 till the end of 2019. The platform(s) should be available as
general release by Q2 calendar year 2015. New hardware introduced during
this period can be included if it adheres to requirement R004. It is expected
that the roadmap will include platforms capable of fulfilling scalability and
availability not less than that achievable with the current Cisco Catalyst
switches (combination of 6500-series and 3750G-series).
Juniper Juniper Networks has proposed three switches for the Server Access and
Core layer within our proposal. These include the EX4300-48T to provide
100Mbps and 1GbE RJ45 for existing server connectivity, the QFX5100-48S
to provide both 1GbE and 10GbE fibre-based connectivity, and the EX9200
to provide core 10GbE, 40GbE and 100GbE connectivity whilst also
providing EVPN and MPLS support for site-to-site connectivity.
The roadmap on the items specified in our response is listed below. At this
time, Statements of Product Directions (SOPD) are only issued for a 12-
month period for QFX based and EX based products.
More specific SOPD information can be obtained from our Product Line Managers.
Infrastructure
• QFX5100-96S
Interfaces and Chassis
• Parallel Single Mode (PSM) optic support
IPv4
• BGP ADD PATH support
IPv6
• IPv6 match criteria for L2 ACLs
Metro Ethernet
• QinQ support
• MPLS
• RSVP auto-bandwidth
Platform and Infrastructure
Routing Protocols
Switching
• Metafabric 1.1
System Management
• ZTP
Virtual Chassis
VLAN Infrastructure
• VC Local Bias
• G.8032v1 and v2
• Integration with VMware's NVP controller
Layer 2 Features
• VXLAN L2 Gateway
MPLS
Network Management and Monitoring
• SNMPv3 support
• Operation, Administration, and Maintenance (OAM)
• 802.3ah support
Software Defined Networking (SDN)
• FCoE Transit on QFX3500, 3600 VC
Switching
• Metafabric 2.0
Virtual Chassis
• 32 member VCF
• ISSU support on VCF
IPv6
• BGP
• IPv6 access security - RA guard, DHCPv6 snooping, ND
inspection, v6 source guard
• IPv6 match criteria for L2 ACLs
Management
• Nonstop software upgrade (NSSU)
Metro Ethernet
• PVLAN within switch (Private VLAN)
Multicast
• Async support for Interface statistics on Junos(ui/kernel/pfe)
• L2NG - SFLOW for EX9204/9208/9214
• sFlow
Switching
• Campus 1.0
• Metafabric 1.0
• Metafabric 1.1
System Management
• ZTP
TBD
• Puppet Agent
Virtual Chassis
• Access security 802.1x on Virtual Chassis
• Access security Captive Portal on Virtual Chassis
• Access security on Virtual Chassis
• L2NG - Access control: dot1x, captive-portal, UAC support
on VC for EX9204/9208/9214
Class of Service (CoS)
High Availability (HA) and Resiliency
• ISSU support with 32XS, 4QS and 2C-8XS line cards
IPv6
Layer 2 Features
• L2PT
Metro Ethernet
• Private VLAN
• Port Security
Switching
• Campus 1.1
VLAN Infrastructure
R002 Please indicate your End of Sale / Support lifecycle policy for device Mandatory
[6.2.2] hardware, particularly regarding the timeline and support availability between
End of Sale announcement and the end of product life.
Juniper Juniper End of Sale (EoS) policy for hardware devices is communicated
through the support website and directly via e-mail to the e-mail address or
support contract holder. When a product reaches its end of life (EOL), Juniper
is committed to communicating important milestones throughout the EOL
period, including the initial EOL notification, Last Order Date (LOD) for
product, End of Support (EOS) milestone dates, as well as other key
information pertaining to Juniper hardware and software products. Any
product being discontinued will be announced as EOL a minimum of one
hundred-eighty (180) days prior to the discontinuation and end of sale date,
also referred to as last order date. On the last order date, products are
removed from the price list and are no longer available for purchase.
Last day of support for both hardware and software is five years after the End
of Sale date. Up to this point hardware and software will be supported under
the existing contract in place prior to the End of Sale announcement.
An example of the EOL-EOS timetable can be seen below. Please note that
this example is not specific to the technology proposed within this RFP.
LY
ON
E
US
AL
RN
TE
IN
RE
Each milestone is communicated with the support contract holder of the
equipment and via the Juniper EoL web site.
A
R003 [6.2.3] (Mandatory): Please indicate your End of Sale / Support lifecycle policy for the operating system and any additional system software, particularly regarding the timeline and support availability between End of Sale announcement and the end of software support.

Juniper: End of Sale and End of Support for software follows the same guidelines as outlined in the previous question for hardware.
R004 [6.2.4] (Mandatory): Please confirm protocol and service compatibility across all platforms in the roadmap, i.e. full compatibility between hardware implementations with product certification only dependent on the operating system version. Please include supporting evidence to show where compatibility has been a feature of, or improved by, previous platform roadmaps.

Juniper: Juniper hardware runs the same version of code across all platforms in the EX and QFX series switches. Junos is designed with compatibility between platforms from day one and, as such, for our solution we look to run the same version of Junos on the EX4300s, EX9200s and QFX5100s to ensure complete compatibility between the platforms. In running the same version of code, we know that features and functions are the same and will work in the same way across all platforms.

As and when XYZ, Inc. requires a new feature in forthcoming releases, a beta version of that code would be released to XYZ, Inc. to test in their lab, with the results passed back to our development team. The development team would make any changes that XYZ, Inc. has noted and then issue either standard code or a special release. Juniper would suggest that XYZ, Inc. sign up to the beta release programme and Junos development programme.
R005 [6.2.5]: … product uptime, throughput, availability and support for new services & protocols.

Juniper: Please refer to Question R001 for a full list of roadmap or Statement of Product Direction (SOPD) information, which includes both hardware- and software-specific functions.
R006 [6.2.6] (Medium): Please provide any published benchmarks for the platforms in your roadmap.

Juniper: Please refer to Question R001 for a full list of roadmap or Statement of Product Direction (SOPD) information.
Physical Requirements

Appendix 3 details a high-level view of the current data centre networks. It also contains key requirements which should be addressed in the supplier's response.

Suppliers should be innovative in the introduction of new infrastructure in the data centres, where rack space and power are at a premium.

Table 13.2: Physical Requirements
Ref Requirement Weighting
R007 [6.3.1] (Mandatory): Please provide your solution(s) proposal to replace the current switch infrastructure with a completely new Data Centre switch topology for SDC01, SDC02 and SHP01.
Juniper: To provide XYZ, Inc. with a best-of-breed solution and a foundation upon which new and more flexible applications can be deployed, Juniper Networks proposes the latest QFX and EX switching solution, utilising the unique properties of Virtual Chassis Fabric to allow a multi-tenancy overlay architecture deployed in a series of virtual switch clusters.

In proposing this form of architecture, XYZ, Inc. can move away from large, centralised locations for server connectivity to a more flexible distributed solution that centres on a spine-and-leaf approach, but still provides the single points of management and distributed Layer 2 and Layer 3 switching and routing domains they are used to.

This architecture also provides the option of scaling up or down to suit the flexibility of newer virtual server installations, and provides the option of an SDN overlay to provide traffic separation for different security requirements whilst introducing management orchestration and automation to both the server layer and the network layer.
Solution Overview

Juniper proposes a two-tier architecture utilising the new EX4300 and QFX5100 to provide both copper and fibre connectivity for 1GbE and 10GbE server connections at the access/aggregation layer. Utilising a spine-and-leaf architecture, the QFX5100-48S provides 48 ports of 1/10GbE SFP/SFP+ connectivity with 6 x 40GbE QSFP+ ports for uplinks. The QFX5100-48S also comes with dual out-of-band management ports (both SFP and RJ45), dual 650 W power supplies and five fan trays, all of which are hot swappable.
To complement the QFX and provide connectivity for existing servers, the EX4300 provides 48 ports of 10/100/1000Mbps RJ45 copper connectivity with 4 x 1/10GbE front-facing uplink ports and a further 4 x 40GbE ports on the rear. Like the QFX5100-48S, the EX4300 has dual power supplies and fan trays, which are all hot swappable.

It's at this point that we can introduce Virtual Chassis into the solution. Juniper Networks' unique Virtual Chassis technology enables up to 20 interconnected switches to be managed and operated as a single, logical device with a single IP address and single MAC address. Virtual Chassis technology enables enterprises to separate physical topology from logical groupings of endpoints and, as a result, provides efficient resource utilization.
The advantages of connecting multiple switches into a Virtual Chassis Fabric include:
• better-managed bandwidth at a network layer;
• simplified configuration and maintenance, because multiple devices can be managed as a single device;
• increased fault tolerance and high availability (HA), because a Virtual Chassis can remain active and network traffic can be redirected to other member switches when a single member switch fails; and
• a flatter, simplified Layer 2 network topology that minimizes or eliminates the need for loop prevention protocols such as Spanning Tree Protocol (STP).

It also allows multiple links to be aggregated into single logical links. Thus the two 10GbE links from each top-of-rack switch would be aggregated into a single link providing 20GbE of uplink connectivity to the concentrators.
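As an illustration, link aggregation of this kind might be configured on a Junos device along the following lines (the interface and bundle names are assumptions for this sketch, not values taken from the proposal):

```
# Sketch only: aggregate two hypothetical 10GbE uplinks into one LACP bundle
set chassis aggregated-devices ethernet device-count 1
set interfaces xe-0/0/48 ether-options 802.3ad ae0
set interfaces xe-0/0/49 ether-options 802.3ad ae0
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk
```

Note that within a Virtual Chassis Fabric the spine-facing links become Virtual Chassis ports automatically, so explicit aggregation of this kind applies to server- or external-facing bundles.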
With the introduction of the QFX5100 series of switches, the existing Juniper Virtual Chassis technology is further scaled and enhanced to support a spine-and-leaf topology. Key attributes include:
• Integrated Director and in-band communication for the control plane
• WAN and security devices can plug into the spine or any member
• Full Layer 2 and Layer 3 support in every member of the fabric
• Enhanced In-Service Software Upgrade
As the diagram below shows, and as mentioned in the previous section, the Virtual Chassis Fabric (VCF) topology would start with two QFX5100-48S switches with a dual 10/40GbE link connecting them together, forming the spine of the VCF formation.

One of the switches is configured as the master Routing Engine (RE) and the other is configured as the hot-standby backup RE. This formation provides the single IP gateway and MAC address for the whole of the VCF whilst providing a converged control plane across the two spine switches. This converged control plane removes any convergence issues if a single spine switch were to fail. In line with every Juniper product, we maintain control plane and data plane separation, allowing traffic to flow even if the whole control plane were to fail.
It's at this point that additional nodes, or leaves, can be added to the VCF. The master RE detects the new leaf nodes and confirms that the serial numbers on the leaf nodes match the serial numbers in the master configuration. Once confirmed, the leaf switches join the VCF and are fully operational from the master RE.

Another consequence of joining the VCF is that the uplinks from leaf nodes to the spine switches are automatically aggregated and renamed Virtual Chassis Ports (VCPs). These VCPs are removed from the configuration so they cannot be renamed or re-configured.
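The serial-number matching described above could be expressed in a preprovisioned VCF configuration along these lines (the serial numbers and member counts are placeholders, not values from this proposal):

```
# Sketch: preprovisioned Virtual Chassis Fabric on the master RE
set virtual-chassis preprovisioned
set virtual-chassis member 0 role routing-engine serial-number TA0000000001
set virtual-chassis member 1 role routing-engine serial-number TA0000000002
set virtual-chassis member 2 role line-card serial-number TA0000000003
set virtual-chassis member 3 role line-card serial-number TA0000000004
```

A leaf whose serial number does not appear in this list will not be admitted to the fabric.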
As more leaf nodes are added to the VCF and registered with the master RE, the control plane and forwarding planes on each device become aware of the other switches in the VCF. The master RE creates the state information and federates the state to the other switches, enabling distributed forwarding with no Spanning Tree inside the fabric. Traffic is load balanced on all links to achieve an internal 1.8 microsecond latency. It is worth noting that inside the VCF the default is to do local switching: only traffic destined for the spine or beyond will traverse the spine, and all other traffic inside the VCF will be switched locally. As such, you can achieve 550 nanoseconds inside a VCF, and you can also do 16-way server multi-homing from your server into the VCF fabric.
Using the same process as outlined earlier, you can add up to 18 leaf nodes to a single VCF. As our roadmap information states, this will be increased via software to 32 leaf nodes in mid-2015. Once you hit this limit, you can then create a second VCF in the same way. This is how our solution scales.

As the diagram above shows, the first VCF is replicated as many times as required to support the number of ports within each Tech Hall.
To provide connectivity between the VCFs we can implement two things. The first is to introduce a Core layer which will aggregate the connections from each VCF, whilst providing ongoing connectivity to the other data centres and the wider network environment. We can also connect the VCFs together directly. This allows another route for traffic to flow on an east-west basis, as opposed to the traditional north-south route.

The Core layer would comprise EX9200 chassis. These provide dual Routing Engines, multiple power supplies and fan trays. Our initial solution would be to place a 6RU (Rack Unit) EX9204 within each Tech Hall and then connect these EX9200s together to form a Virtual Chassis. In implementing the EX9200s at this layer we introduce the following benefits:
• Distributed forwarding plane whilst enabling a single point of management for the core layer
• In the same way that the QFX5100s support multiple interface types to allow for future growth, the EX9200 series provides support for multi-10GbE, 40GbE and 100GbE. This allows 10GbE to be utilized on day one, with the option of 40GbE and 100GbE when traffic patterns and bandwidth utilization dictate higher link speeds
• … based MPLS
With the introduction of the EX9200s we come full circle to our entire solution per data centre (as shown in the diagram below), and Juniper would look to replicate this across all of the data centres in the same way as noted above.

The next section of this response covers some of the innovation of our solution and the hardware involved.
Hitless Upgrade With Single Switch ISSU
In a traditional data centre environment there are typically multiple racks, with multiple top-of-rack (TOR) switches in each rack, and the customer leverages that hardware redundancy to perform upgrades. One TOR always remains active, but while upgrading only one node is available, so applications effectively run at half bandwidth, and multiple racks end up being grouped into longer maintenance windows. Using VCF, you can perform a hitless software upgrade with a single switch: multiple racks can be upgraded at a time and applications run at full bandwidth, because no data links are taken down while the active RE is being upgraded. Racks can be upgraded in multiples with shorter maintenance windows, and no hardware redundancy is required; you do not have to maintain dual TORs in each rack purely for the purpose of upgrading.

… tech hall. The diagram below provides an overview of the QFX5100 internal architecture. It is worth noting that the EX9200 supports ISSU as standard, and the EX4300 will support it in the coming months.
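In practice, a hitless upgrade of this kind is driven from the Junos CLI on the master RE; a minimal sketch follows (the image filename is a placeholder, not a version from this proposal):

```
user@vcf> show chassis routing-engine
user@vcf> request system software in-service-upgrade /var/tmp/jinstall-qfx-5-x-signed.tgz
```

The first command confirms the master/backup RE state before the upgrade; the second performs the in-service upgrade itself.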
Juniper Networks QFX5100

The QFX5100 line of flexible, high-performance, low-latency and feature-rich L2 and L3 switches is optimized for virtualized data centre environments. QFX5100 switches are a universal building block for multiple fabric architectures, including Juniper Networks' mixed 1/10/40GbE Virtual Chassis, Virtual Chassis Fabric and QFabric architectures, and open architectures such as spine-and-leaf and L3 fabrics. All QFX5100 switches support ISSU to deliver hitless data centre operations.
QFX5100 switches also include support for virtualized network environments, including Juniper Contrail and VMware NSX L2 gateway services.

For added flexibility, the 40GbE ports on the QFX5100 switches can be used as 4 x 10GbE ports using QSFP+ to SFP+ direct attach copper (DAC) or QSFP+ to SFP+ fibre splitter cables and optics.
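Channelising a 40GbE port into four 10GbE interfaces is a one-line chassis setting in Junos; for example (the FPC/PIC/port numbers here are illustrative):

```
# Split a hypothetical QSFP+ port 48 into four 10GbE channels (xe-0/0/48:0 … :3)
set chassis fpc 0 pic 0 port 48 channel-speed 10g
```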
Juniper Networks QFX Series Switches features include:
• Up to 2.56 Tbps of throughput
• Redundant power supplies and fans, and control and data plane separation: QFX Series switches are designed with robust high-availability features
• Large media access control (MAC) address tables that enable large-scale server virtualization deployment. Select QFX Series switches include integrated support for Juniper Contrail and VMware NSX L2 Gateway Services functionality to programmatically enable connectivity to the network.
• Energy efficient, environmentally friendly solutions: QFX Series switches are environmentally conscious "green" solutions that lower operational expenses. The switches' variable-speed fans dynamically adjust their speed based on ambient temperature to optimize and reduce power consumption to just over 2 Watts per 10GbE port.

For more information, please see: http://www.juniper.net/us/en/products-services/switching/qfx-series/#overview
Juniper Networks EX4300

EX4300 Ethernet switches are compact, fixed-configuration platforms that satisfy a variety of high-performance branch, campus and data centre access deployments.

Juniper Networks Virtual Chassis technology enables up to 10 EX4300 switches to be interconnected over a 320 Gbps backplane using four back-panel 40GbE ports, or into a Virtual Chassis Fabric with the QFX5100 series, creating a single, logical device that delivers a highly scalable, cost-effective access solution.
Juniper Networks EX9200

The EX9200 line of programmable, flexible, and scalable modular Ethernet core switches simplifies the deployment of cloud applications, virtualized servers and rich media collaboration tools across campus and data centre environments. As a key element of Juniper Networks' "Simply Connected" portfolio of resilient switching, security, routing, and wireless products, the EX9200 Series enables collaboration and provides simple and secure access to mission-critical applications. In the data centre, the EX9200 simplifies network architectures and network operations to better align the network with today's dynamic business environments.
Juniper Networks EX9200 Series Ethernet Switches.

The EX9200 provides that and more in its silicon and at the system and networking levels. The EX9200 is based on Juniper One custom silicon, an ASIC designed by Juniper Networks which provides a programmable Packet Forwarding Engine (PFE) and allows for native support of networking protocols such as virtualization using MPLS over IP and overlay network protocols. ASIC microcode changes are delivered through updates to Junos.
• Programmability to address future business needs
• Carrier-grade availability
The Juniper Networks EX9200 Series is ready to handle changing networking demands for at least the next decade.
EX9200 Series: Features for Simplifying and Transforming IT Infrastructures

Feature: Programmable ASIC
Benefit: Provides the platform for future innovation (e.g., VXLAN, NVGRE, etc.) and offers investment protection to customers by delivering features that were traditionally delivered by replacing the hardware

Feature: Junos automation
Benefit: Automates operational and configuration tasks, simplifying configurations and reducing potential errors

Feature: Junos SDK
Benefit: Exposes programmable interfaces for Software Defined Networking (e.g., OpenFlow, Puppet, etc.)
Models

Three chassis options provide deployment flexibility:
• EX9204 – 4-slot, 6 RU chassis that supports up to three line cards
• EX9208 – 8-slot, 8 RU chassis that supports up to six line cards
• EX9214 – 14-slot, 16 RU chassis that supports up to 12 line cards

Fully configured, a single EX9214 chassis can support up to 320 10GbE ports (240 at wire speed for all packet sizes), delivering one of the industry's highest line-rate 10GbE port densities for this class of feature-rich and programmable switch.

The EX9200 switch fabric is capable of delivering 240 Gbps (full duplex) per slot, enabling scalable wire-rate performance on all ports for any packet size.
Juniper Networks' proposal is based on two EX9204 chassis switches, providing 40GbE or 10GbE connectivity to the aggregation layer. 1/10GbE connectivity is also proposed for WAN and inter-DC connectivity. An EX9204 will be deployed in each tech hall, but virtualized together via Virtual Chassis to provide a single Core layer/WAN gateway.
R008 [6.3.2] (Mandatory): Please provide your solution proposal to provide multi-tenancy (same customer, different domains; e.g. Live, Dev, Clone) separation within a data centre switch infrastructure, and details of any assurance for any separation technologies used.

Juniper: The Juniper solution can provide multi-tenancy in a series of ways, from EVPN at the edge of the network to VPLS, VXLAN, MPLS and our SDN approach to VPN overlay through Juniper Contrail inside the data centre. Our response first looks at separation at Layer 2 and then through the use of Juniper Contrail.
VPLS

Juniper Networks offers virtual private LAN service (VPLS) over MPLS, a standards-based technology that meets the challenges and requirements associated with data centre interconnectivity. A single physical network can be partitioned into several logical VPLS instances that are separate and secure logical L2 networks. This means that all logical instances can be overlaid on the same physical network, and the same physical network can appear as different logical VPLS networks. Each VPLS instance appears as a bridge domain that extends the L2 segments between the different data centres.

The VPLS instances are mapped to the VLANs, which contain virtualized resources. VLAN continuity can be maintained across the WAN.

Whitepaper on implementing VPLS for data centre interconnectivity:
http://www.juniper.net/us/en/local/pdf/implementation-guides/8010050-en.pdf
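A VPLS instance of the kind described might be sketched in Junos as follows (the instance name, VLAN ID, interface and community values are all illustrative assumptions):

```
# Sketch: one logical VPLS instance overlaid on the shared MPLS network
set routing-instances VPLS-LIVE instance-type vpls
set routing-instances VPLS-LIVE vlan-id 100
set routing-instances VPLS-LIVE interface ge-0/0/10.0
set routing-instances VPLS-LIVE route-distinguisher 65000:100
set routing-instances VPLS-LIVE vrf-target target:65000:100
set routing-instances VPLS-LIVE protocols vpls site-range 10
set routing-instances VPLS-LIVE protocols vpls site DC1 site-identifier 1
```

Each tenant domain (for example Live, Dev, Clone) would get its own instance with distinct route targets, keeping the logical L2 networks separate over the same physical infrastructure.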
EVPN

To provide Layer 2 stretch services between the data centres, EVPN (Ethernet VPN) becomes a logical choice. Ethernet VPN (EVPN) enables you to connect a group of dispersed customer sites, which in this case would be SDC01, SDC02 and SHP01, using a Layer 2 virtual bridge. As with other types of VPNs, an EVPN comprises customer edge (CE) devices (QFX5100s) connected to provider edge (PE) devices. The PE devices can include an MPLS edge switch (MES), such as the EX9200s proposed at the core layer, that acts at the edge of the MPLS infrastructure. To provide multi-tenancy, you can deploy multiple EVPNs within the network, with each EVPN assigned to a series of virtual routers within virtual servers, which in turn connect to customers while ensuring that the traffic sharing that network remains private.

The MPLS infrastructure allows you to take advantage of MPLS functionality including fast reroute, node and link protection, and standby secondary paths, whilst allowing for interoperability between different vendors, as MPLS/EVPN is an open standard.
For EVPNs, learning between MESs takes place in the control plane rather than in the data plane (as is the case with traditional network bridging). The control plane provides greater control over the learning process, allowing you to restrict which devices discover information about the network. You can also apply policies on the MESs, allowing you to carefully control how network information is distributed and processed. EVPNs utilize the BGP control plane infrastructure, providing greater scale and the ability to isolate groups of devices (hosts, servers, virtual machines, and so on) from each other.

The MESs attach an MPLS label to each MAC address learned from the CE devices. This label and MAC address combination is advertised to the other MESs in the control plane. Control plane learning enables load balancing and improves convergence times in the event of certain types of network failures. The learning process between the MESs and the CE devices is completed using the method best suited to each CE device (for example, data plane learning or IEEE 802.1 methods).

The policy attributes of an EVPN are similar to an IP VPN (for example, Layer 3 VPNs). Each EVPN routing instance requires that you configure a route distinguisher and one or more route targets. In this case the route reflector could be a virtual router sitting within the EX9200s that are, or would be, facing out into the WAN. A CE device attaches to an EVPN routing instance on an MES through an Ethernet interface that might be configured for one or more VLANs, or this could be a VPLS domain, as you can use the VPLS elements inside the data centre and EVPN to provide Layer 2 between data centres.
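The route distinguisher and route target configuration mentioned above might look like this in Junos (all names and values are illustrative assumptions):

```
# Sketch: an EVPN routing instance on an MPLS edge switch (MES)
set routing-instances EVPN-LIVE instance-type evpn
set routing-instances EVPN-LIVE vlan-id 200
set routing-instances EVPN-LIVE interface ge-0/0/11.0
set routing-instances EVPN-LIVE route-distinguisher 65000:200
set routing-instances EVPN-LIVE vrf-target target:65000:200
set routing-instances EVPN-LIVE protocols evpn
```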
Contrail

Another option, which leverages the open standards of MPLS and the newer functionality of Juniper's SDN approach, is the use of Contrail.

From a data centre perspective, Juniper has developed Contrail, an open-source SDN solution that automates and orchestrates a virtual network overlay. All of the networking features, such as switching, routing, security, and load balancing, are moved from the physical hardware infrastructure to software running in the hypervisor kernel that is managed from a central orchestration system.
The Contrail system has two main components: the Contrail vRouter and the Contrail SDN Controller, which is responsible for providing the management, control, and analytics functions of the virtualized network.

The Contrail vRouter is a forwarding plane (of a distributed router) that runs in the hypervisor of a virtualized server. It extends the network from the physical routers and switches in a data centre into a virtual overlay network hosted in the virtualized servers. The Contrail vRouter is conceptually similar to existing commercial and open-source vSwitches, such as Open vSwitch (OVS), but it also provides routing and higher-layer services (hence vRouter instead of vSwitch).

The Contrail SDN Controller provides the logically centralized control plane and management plane of the system and orchestrates the vRouters.
Virtual Networks

Virtual Networks (VNs) are a key concept in the Contrail system. VNs are logical constructs implemented on top of the physical network. They are used to replace VLAN-based isolation and provide multi-tenancy in a virtualized data centre. Each tenant or application can have one or more virtual networks. Each virtual network is isolated from all the other virtual networks.

Overlay Networking

Virtual networks are implemented using two networks: a physical underlay network and a virtual overlay network. This overlay networking technique has been widely deployed in the wireless LAN (WLAN) industry for more than a decade, but its application to data centre networks is relatively new. It is being standardized in various forums, such as the Internet Engineering Task Force (IETF) through the Network Virtualization Overlays (NVO3) working group. The ideal underlay network provides uniform low-latency, non-blocking, high-bandwidth connectivity from any point in the network to any other point in the network.
The vRouters create the virtual overlay network on top of the physical underlay network using a mesh of dynamic "tunnels" among themselves. In the case of Contrail, these overlay tunnels can be MPLS over GRE/UDP tunnels or VXLAN tunnels.
The underlay physical routers and switches do not contain any per-tenant state. They do not contain any Media Access Control (MAC) addresses, IP addresses, or policies for virtual machines. The forwarding tables of the underlay physical routers and switches contain only the IP prefixes or MAC addresses of the physical servers. Gateway routers or switches that connect a virtual network to a physical network are an exception: they do need to contain tenant MAC or IP addresses.

The vRouters, on the other hand, do contain per-tenant state. They contain a separate forwarding table (a routing instance) per virtual network. That forwarding table contains the IP prefixes (in the case of L3 overlays) or the MAC addresses (in the case of Layer 2 overlays) of the virtual machines. No single vRouter needs to contain all IP prefixes or all MAC addresses for all virtual machines in the entire data centre. A given vRouter only needs to contain those routing instances that are locally present on the server (that is, those which have at least one virtual machine present on the server).

In the data plane, Contrail supports MPLS over GRE, a data plane encapsulation that is widely supported by existing routers from all major vendors. Contrail also supports other data plane encapsulation standards such as MPLS over UDP (better multi-pathing and CPU utilization) and VXLAN. Additional encapsulation standards such as NVGRE can easily be added in future releases.
The control plane protocol between the control plane nodes of the Contrail system and a physical gateway router (or switch) is BGP (and NETCONF for management). This is the exact same control plane protocol that is used for MPLS L3VPNs and MPLS EVPNs.

The protocol between the Contrail SDN Controller and the Contrail vRouters is based on XMPP [ietf-xmpp-wg]; the schema of the messages exchanged over XMPP is described in an IETF draft.

The fact that the Contrail system uses control plane and data plane protocols that are very similar to the protocols used for MPLS L3VPNs and EVPNs has multiple advantages. These technologies are mature and known to scale, they are widely deployed in production networks, and they are supported in multivendor physical gear, allowing for seamless interoperability.
Contrail Architecture Details

[Diagram: Contrail system architecture]
As the diagram above shows, the Contrail system consists of two parts: a logically centralized but physically distributed Contrail SDN Controller, and a set of Contrail vRouters that serve as software forwarding elements implemented in the hypervisors of general-purpose virtualized servers.

The Contrail system provides three interfaces: a set of northbound REST APIs that are used to talk to the orchestration system and the applications; southbound interfaces that are used to communicate with virtual network elements (Contrail vRouters) or physical network elements (gateway routers and switches); and an east-west interface used to peer with other controllers. OpenStack and CloudStack are the supported orchestrators, standard BGP is the east-west interface, XMPP is the southbound interface for Contrail vRouters, and BGP and NETCONF are the southbound interfaces for gateway routers and switches. The northbound REST APIs are also used to implement the web-based GUI included in the Contrail system.
Internally, the Contrail SDN Controller consists of three main components:
• Configuration nodes are responsible for translating the high-level data model into a lower-level form suitable for interacting with network elements.
• Control nodes are responsible for propagating this low-level state to and from network elements and peer systems in an eventually consistent way.
• Analytics nodes are responsible for capturing real-time data from network elements, abstracting it, and presenting it in a form suitable for applications to consume.

The EX9200 core layer provides the full overlay entry point for traffic entering and exiting the data centre network.
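As an illustration, a gateway such as the EX9200 could peer with the Contrail SDN Controller using standard BGP carrying L3VPN and EVPN routes; a sketch follows (the addresses and AS number are assumptions):

```
# Sketch: iBGP session from a gateway router to the Contrail controller
set routing-options autonomous-system 64512
set protocols bgp group contrail type internal
set protocols bgp group contrail local-address 10.0.0.1
set protocols bgp group contrail family inet-vpn unicast
set protocols bgp group contrail family evpn signaling
set protocols bgp group contrail neighbor 10.0.0.10
```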
R009 [6.3.3] (Mandatory): Please provide your solution proposal for DMZ provision within the proposed architecture. The DMZs may be in a shared or separate architecture. The DMZs may contain Firewalls, Load Balancers and other network-related devices.

Juniper: Juniper proposes the same switching environment as proposed for the wider data centre solution, but scaled to support the size and layout of the DMZ. This means the switches can be the same, installed and configured in the same way, in either a spine-and-leaf or a daisy-chain architecture. The principles of a single point of management, a distributed forwarding table and flexible scaling would stay the same.

Juniper would look to be provided with more details on the DMZ environment and would then architect our DMZ approach to suit.
R010 [6.3.4] (Mandatory): The Nexus switch infrastructure is not being replaced; however, please provide details of how your switch solution can be merged with the current Nexus infrastructure.

Juniper: Data Centre Interconnect using VPLS

Virtual Private LAN Service (VPLS), which provides both intra- and inter-metro Ethernet connectivity over a common IP/MPLS network, is our preferred method of connecting different data centres. VPLS is a standards-based implementation that guarantees interoperability between vendors; this is preferable to proprietary implementations such as OTV, which is promoted by a single vendor and introduces no significant benefits to justify a new technology. VPLS also ensures a low-risk deployment option for XYZ, Inc. data centres with different vendor infrastructures.
VPLS/MPLS is an extension to VLANs. Some of the similarities between VLANs and MPLS are:
• VLANs have dot1q tags similar to the labels in MPLS LSPs. VLANs use priorities in the packet header (i.e. 802.1p) similar to the priority fields in MPLS (DSCP/EXP QoS).
• VLANs enable Layer 2 segmentation, whereas MPLS enables Layer 2 and Layer 3 segmentation.
• VLANs provide localized segmentation that is limited in scale, whereas MPLS enables network-wide segmentation at very large scale.
• VLANs use STP to discover and establish neighbor paths, whereas MPLS uses OSPF/BGP and LDP to do the same.
• To optimize multicast, broadcast and flooded traffic, VPLS uses P2MP LSPs, which allow data-plane-based replication to designated recipients. This enables optimal bandwidth utilization and better scalability.
Future enhancements

To meet future demands of XYZ, Inc. regarding optimized DCI, localization and controlled traffic flow, EVPN will be introduced, with the ability for an MPLS edge switch (MES) acting at the edge to advertise locally learned MAC addresses into the control plane.

Data Centre Interconnect conclusion

EVPN will replace, or run in parallel with, already deployed VPLS DCI connections. As EVPN matures and active/active EVPN DCI is supported in the environment, old VPLS links can gradually be torn down.

XYZ, Inc. data centres that have not adopted EVPN will use VPLS as the DCI protocol. This ensures connectivity with new and existing data centres independent of vendor infrastructure.
R010-1 [6.3.5] (Mandatory): To be able to be integrated with the existing Cisco topology, both physically and logically, in such a way as to allow a phased migration from old to new infrastructure without compromising the stability of the network. Consideration must be given to how the Layer 2 VLAN topology, the Layer 3 routing instances and the load-balancing functions will be dealt with during and after the migration.
Juniper: Transition Plan

This section details the recommended migration steps to migrate from XYZ, Inc.'s existing DC to the new DC network.

We will study the following transitions:
• Legacy to Point-of-Proof
• Nexus-based to Point-of-Proof
• Multi-interconnection concerns

Please note: The Transition Plan proposed in this section has been built with the information available at this time.

Please note (2): This Transition Plan covers technology aspects only. A more detailed Transition Plan will be built before the migration, as described in the 'Transition Methodology' chapter. This detailed plan will be built after a complete assessment of the XYZ, Inc. environment, and will include, for example:
• Timetable
Legacy to Point-of-Proof migration
Starting Point
Here we are using an example of the interconnection of the existing and
proposed networks during the server migration phase.
The example configuration is built with:
• Four Catalyst 65xx switches acting as access + distribution
• L2 and L3 multitenancy
• Serving multiple racks (in rack rows, single racks, or standalone)
Build new solution (with first PoD)
Here, we will start building the core of the solution and one PoD:
[Figure: EX9200 core connected to a Virtual Chassis Fabric PoD]
Interconnect Solutions
There are two options to interconnect the two solutions. One is STP-free,
using Multichassis LAG (MC-LAG) Active/Standby technology; the second
requires STP to run between the two solutions.
• MC-LAG (STP-free):
An MC-LAG in Active/Standby mode is created on the new solution core
(MX). Each link of the MC-LAG connects one MX to a different Catalyst. This
link will be used for L2 traffic.
This first approach is the preferred one, as it keeps both environments as
independent as possible during the transition, and it will be used as the
reference for the remainder of the transition.
• RSTP:
The root bridge and backup root bridge roles of the STP are assigned so as
to obtain the following STP topology:
[Figure: resulting STP topology between the two solutions, with blocked ports marked X]
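As an illustrative sketch only (device roles and priority values are assumptions, not part of the design above), the root and backup root roles could be steered on Junos devices by setting RSTP bridge priorities, where the lowest priority wins the root election:

```
# Sketch only: intended root bridge (lowest priority wins)
set protocols rstp bridge-priority 4k
# On the intended backup root bridge:
set protocols rstp bridge-priority 8k
```

Junos expresses bridge priority in multiples of 4096 using the 4k/8k notation; all other bridges keep the default (32k) so they cannot pre-empt the chosen roots.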
• Interconnection capacity:
Technologies like Q-in-Q can be used to make those gateways transparent
to any VLAN need, so they will not need any specific per-VLAN configuration
beyond the Q-in-Q used to carry all VLANs.
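As a hedged illustration of the Q-in-Q idea (the interface and S-VLAN ID are hypothetical, not taken from the design), one possible Junos shape bundles all customer VLANs under a single outer service VLAN:

```
# Sketch only: push outer S-VLAN 500 onto all customer VLANs
set interfaces ge-0/0/5 flexible-vlan-tagging
set interfaces ge-0/0/5 encapsulation extended-vlan-bridge
set interfaces ge-0/0/5 unit 0 vlan-id-range 1-4094
set interfaces ge-0/0/5 unit 0 input-vlan-map push
set interfaces ge-0/0/5 unit 0 input-vlan-map vlan-id 500
set interfaces ge-0/0/5 unit 0 output-vlan-map pop
```

With the outer tag pushed on ingress and popped on egress, the gateway carries any inner VLAN without per-VLAN configuration, as described above.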
L3 Migration
This is how L3 routing and server gateways will be handled during the
migration. The starting point is:
• HSRP used on the example PoD
• Routing (OSPF) for the VLANs of the PoDs is performed by the existing
solution
[Figure: starting point — HSRP gateway on the existing PoD, with routes to the server VLAN advertised via OSPF to the L3 core]
Please Note: The green “Routes” arrows represent how routing is performed
to reach the server VLAN.
At this stage, all servers from both the existing and the new solution are
using the example HSRP address as their default gateway:
[Figures: intermediate state — servers on both solutions still use the existing HSRP gateway, with OSPF routing via the existing solution]
Then we will move the routing from the existing solution to the new one (the
new solution routers will become the preferred path for traffic sent to the PoD
VLAN), and the default gateway will be moved as well. Please note: when
VRRP transitions to the master state, it sends a gratuitous ARP. This allows
all servers to update their ARP tables with the new MAC address of their
default gateway (VRRP and HSRP use different MAC address ranges). To get
the benefit of this feature, HSRP must be brought down before this transition
happens.
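To illustrate the VRRP side of this step (addresses, the IRB unit, and priority values are hypothetical, not taken from the design), the new gateway might be configured along these lines on Junos:

```
# Sketch only: VRRP group on the server VLAN's routed interface
set interfaces irb unit 100 family inet address 10.0.100.2/24 vrrp-group 1 virtual-address 10.0.100.1
set interfaces irb unit 100 family inet address 10.0.100.2/24 vrrp-group 1 priority 200
set interfaces irb unit 100 family inet address 10.0.100.2/24 vrrp-group 1 preempt
set interfaces irb unit 100 family inet address 10.0.100.2/24 vrrp-group 1 accept-data
```

When this group becomes master it sends the gratuitous ARP described above; `accept-data` lets the master answer traffic sent to the virtual address.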
[Figure: routing and the default gateway moved to the new solution — VRRP active on the new core, with OSPF routes preferred via the new solution]
Finally, the remaining links to the existing solution can be removed after the
pre-defined stability period, once a rollback plan is no longer needed.
[Figure: final state — all routing (OSPF) and the VRRP gateway handled entirely by the new solution]
VLAN Harmonisation
The VLAN harmonisation can be handled by the EX9200.
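As a sketch of what such harmonisation could look like (the VLAN IDs and port are hypothetical), VLAN translation on a Junos trunk port can map a legacy VLAN ID to the harmonised one at the boundary:

```
# Sketch only: translate incoming legacy VLAN 100 to harmonised VLAN 200
set interfaces xe-0/0/10 unit 0 family ethernet-switching interface-mode trunk
set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members 200
set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan-rewrite translate 100 200
```

This keeps the legacy side untouched while the new fabric uses the harmonised VLAN numbering.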
2. Support projected – in these instances please indicate the date support will be available
3. Migration required – in these instances please indicate an alternative feature/service, and
the steps required to migrate or reconfigure existing devices
4. Not supported
_____________________________________________________________________
Juniper: Read and Understood
R011 1. Spanning Tree Protocol (Rapid-PVST and PVST+ used currently) - Mandatory
[6.4.1] Fully supported open standard STP including VSTP, MSTP & RSTP
2. First Hop Redundancy (currently Cisco HSRP) - Fully supported but
Juniper use VRRP or Virtual Chassis or both.
3. Static Routes – Fully supported
4. OSPF (for IPv4) – Fully supported
5. BGP (for IPv4) – Fully supported
5.a Redistribution between routing protocols – Fully supported
6. Access Control Lists (ACLs) e.g. for routing controls SNMP controls
etc. - Fully supported but Juniper call them Firewall filters
7. SNMP v1, v2c and v3 – Fully supported
8. VLAN tagging using 802.1q (including ability for tagged/un-tagged
frames, default/native VLAN etc.) – Fully supported
9. Port-based Network Access Control (802.1x) – Fully supported
10. Port Mirroring (currently Cisco SPAN) – Fully supported
11. Neighbor discovery (currently Cisco CDP) – Fully supported but
Juniper use LLDP (Link Layer Discovery Protocol)
12. Layer 2 Tunneling Protocol (L2TP) or other equivalent protocols –
Fully supported
13. Link Aggregation Control Protocol (LACP) – Fully supported
14. Transmission Control Protocol (TCP) – Fully supported
15. User Datagram Protocol (UDP) – Fully supported
and Netflow. Both of which will work with most analytics systems on
the market with the exception of Cisco.
29. UDLD uni-directional link detection type features – Not Supported as
this is Cisco proprietary. Juniper support the 802.3ah OAM link Fault
management feature
30. Logging capabilities – internal and external syslog – Fully supported
and can push logging to an external device or file server.
31. NTP master capabilities (if upstream NTP sync fails) – Fully
Supported
32. Interface descriptions and ability to easily view applied descriptions –
Fully Supported
33. Integration with Bluecoat WAN optimisers (e.g. WCCP) – Fully
Supported via Filter based forwarding as WCCP is Cisco Proprietary
34. Interface options: 10M, 100M, 1G and 10G interface capabilities UTP
vs fibre speed, duplex settings – Fully Supported and configurable on
an interface by interface basis. The EX4300 supports
10/100/1000Mbps RJ45 connectivity and speed and duplex settings
are configured on the interfaces. The EX4300 also supports 1/10GbE
Fibre based SFP on the uplink interfaces and 40GbE QSFP on the
rear uplink interfaces. The QFX5100-48S supports 1/10Gb SFP
interfaces and speed can be configured for 1Gb or 10Gb with duplex
settings on the interfaces. The QFX5100-48S supports 40GbE QSFP
on the uplink interfaces, which can also be broken out using a
breakout cable to 4 x 10GbE SFP. Again speed and duplex settings
are on an interface basis. The EX9200 supports the same SFP and
interface options as both the EX4300 and QFX5100. Details on SFP
support can be found at the following links:
EX4300
http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/reference/specifications/optical-interface-ex4300-support.html
QFX5100
http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/reference/specifications/interface-qfx5100-support.html
EX9200
http://www.juniper.net/techpubs/en_US/release-independent/junos/topics/reference/specifications/optical-interface-ex9200-support.html
35. DHCP and other UDP port forwarding – Fully Supported
Supported
4. Non-disruptive Operating System Updates – Fully supported and
please refer to response within question R07 for a full overview.
4.a Low failover times between supervisors and between resilient
chassis – Fully supported, but relevant from a core layer point of
view.
5. Traffic Shaping/Throttling (to reduce traffic flow to use only a portion
of Available link bandwidth) – Fully supported
6. Multi device/Multi chassis stacking – Fully supported but classed as
Virtual Chassis in Juniper terms
7. Inter-switch / inter-stack cross-device link aggregation (like Cisco
VPC) – Fully supported but classed as Virtual Chassis, Virtual Chassis
Fabric, and MC-LAG
8. Inter-device connectivity options to replace dependency on spanning
tree (e.g. Cisco Fabric Path) – Fully supported but Juniper would use
Virtual Chassis Fabric as this removes STP
9. Location independent IP addressing e.g. Cisco LISP – Not supported.
LISP is Cisco proprietary and still in an "Experimental" state according
to the latest RFC. Juniper would like to discuss options around why
this would be required.
10. Routing protocol authentication & graceful restarts – Fully supported
11. Configuration checkpoints / rollbacks – Fully supported
12. Banners for pre/post logons – for SSH and HTTPS – Fully supported
Juniper Answers in line in the list above.
Data Centre Operations’ Standards
Table 13.4: Device support requirements
Ref Requirement Weighting
R013 Enterprise Management: All devices are currently monitored 24x7 using HP Mandatory
[6.5.1] Network Node Manager, Solar Winds and the BMC software suite, and node
up/down alerts are used to create automated fault tickets. Please indicate
your device support for the SNMP protocol, and any configuration changes
capability.
Please also indicate your support for RADIUS.
Juniper TACACS+ and RADIUS are both fully supported for administrator
authentication on QFX and EX platforms, as well as for authentication to the
Junos Space management platform. Configurable syslog is also supported on
both QFX and EX platforms, including support for sending differentiated event
logs to different Syslog servers.
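By way of illustration (server addresses and the shared secret are placeholders), differentiated syslog targets and RADIUS administrator authentication take roughly this shape in Junos:

```
# Sketch only: two syslog servers receiving different event classes
set system syslog host 192.0.2.10 any notice
set system syslog host 192.0.2.11 firewall any
# RADIUS-based administrator authentication, falling back to local passwords
set system radius-server 192.0.2.20 secret "PLACEHOLDER"
set system authentication-order [ radius password ]
```

The `authentication-order` statement controls the fallback behaviour if the RADIUS server is unreachable.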
R015 Remote configuration is currently achieved in-band using SSH, and out-of- Mandatory
[6.5.3] band using asynchronous connectivity to the Cisco console port. Please
indicate how the same configuration methods can be achieved using your
products/solutions.
Juniper All EX and QFX configuration and troubleshooting can be performed using in-
configuration changes. All insecure protocols can be disabled. All devices
feature a standard console port for serial console access. Juniper devices
also provide a dedicated out of band management Ethernet port, which has
direct connectivity to the route engine. This allows out of band management,
which bypasses the packet-forwarding engine and the in-band forwarding
traffic to allow complete isolated access to the switch. The EX4300 has a
10/100/1000Mbps RJ45 port and the QFX5100 has both a RJ45 port and a
SFP port.
R016 Configuration backup/restore: A backup copy of all network device Mandatory
[6.5.4] configurations are held on a CiscoWorks server so that they can be restored
in the event of device failure. Please indicate this function can be carried out
on your products, and any additional software tools which may be required.
Juniper Junos Space is able to maintain regular configuration backups of all EX and
QFX devices.
In addition to the local configuration history (by default Junos devices store
up to 50 previous configurations locally on the device), devices can be
configured to automatically upload configuration files to an FTP or SCP target
after every change or at specific (e.g. daily) intervals.
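The upload-after-commit behaviour described above corresponds to the Junos `system archival` configuration; a minimal sketch (the SCP target and password are placeholders):

```
# Sketch only: push the configuration to an SCP target after every commit
set system archival configuration transfer-on-commit
set system archival configuration archive-sites "scp://backup@192.0.2.30:/configs" password "PLACEHOLDER"
# Alternatively, transfer at a fixed interval in minutes (1440 = daily):
# set system archival configuration transfer-interval 1440
```

`transfer-on-commit` and `transfer-interval` are mutually exclusive, so one or the other is chosen depending on the backup policy.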
R017 Please indicate any additional mandatory or recommended device software High
[6.5.5] tools, which are required for configuration, management, monitoring or
diagnostic purposes.
Juniper All EX, QFX and Junos Space configuration and monitoring can be performed
using standard terminal emulation and web browser tools. No mandatory
device tools are required.
monitoring tools such as Solarwinds Orion NPM, HP node manager and other
management platforms.
Cost Innovation
Table 13.5: Virtualization requirements
[6.6.1] multiple 1GE copper RJ45 connections, although Fast Ethernet is still used in
limited circumstances. As these physical servers are assessed for hardware
technology refresh, the default assumption is that the services residing on
them will be deployed onto a virtualized platform (unless justification is given
explaining why it must remain on dedicated hardware).
Please suggest how your network solution would enhance the connectivity,
reside on the same switches with no change in the architecture or in the way
traffic traverses the fabric and the wider data centre.
From a management point of view, several options exist for the configuration
and control of the proposed data centre design and for implementing
automation into the design.
CLI
All of the proposed devices run the same operating system: Junos. The Junos
operating system is a reliable, high-performance network operating system
for routing, switching, and security. It reduces the time necessary to deploy
new services and decreases network operation costs. Junos offers secure
programming interfaces and the Junos SDK for developing applications that
can unlock more value from the network.
Running Junos in a network improves the reliability, performance, and
security of existing applications. It automates network operations on a
streamlined system, allowing more time to focus on deploying new
applications and services. It is also scalable both up and down, providing a
consistent, reliable, stable system for developers and operators, which in
turn means a more cost-effective solution for your business.
SDN
Path Computation Client (PCC): PCC is an SDN technology available on the
EX9200 Series. PCC enables network programmability, allowing IT managers
to dynamically create optimal paths, including slices, overlays, or virtual
paths, to optimize on-demand bandwidth requirements.
Junos Space
Junos Space is a Network Application Platform, which is fully capable of
fulfilling the role of a Network Management System (NMS) for Juniper
Network Devices and providing integration into the existing Operational
Support Systems (OSS) through leveraging existing capability and further
running on the Junos Space Network Management Platform. While the Junos
Space Network Management Platform offers broad fault, configuration, and
device provisioning capabilities with a task-specific user interface, the
multiple Junos Space Management Applications extend the breadth of the
services across thousands of devices with a simple point-and-click GUI, as
well as optimize management for specific domains such as core, edge,
access and aggregation. Junos Space Network Director provides a single
pane of glass view into both the wired and wireless networks, and creates a
holistic, full lifecycle management solution for the network.
Junos Space Network Director today delivers:
• Critical elements of advanced management applications by providing
operational efficiency, expedited error free service roll-out, enhanced
visibility and fast troubleshooting.
• Operational efficiency by employing a correlated view of various
networks elements. It offers a holistic view of every aspect of network
operation to remove the need for disjointed applications throughout
the lifecycle of the network.
• Faster roll-out and activation of services while protecting against
configuration errors with profile-based configuration and configuration
pre-validation.
• Single pane of glass management that provides a unified view of the
network infrastructure including a correlated view of overlay services
and user experience on top of network infrastructure. Junos Space
Network Director also tracks aggregated utilization, network hotspots,
failures, correlated RF data and usage to a user level providing deep
visibility and easy troubleshooting of connectivity, equipment and
general failures.
Summary of the features currently provided in Network Director:
More details on Network Director 1.5 features can be found in the following
link:
http://www.juniper.net/techpubs/en_US/network-director1.5/information-
products/pathway-pages/index.html
• Auto Discovery, topology, vMotion moves and view
FCAPS for new platforms
• Data Centre - QFX3000-M/G, EX9200, EX4300, QFX3500,
QFX3600, QFX5100
• Campus – EX4300, EX9200, Firescout, Altair
Topology
• QFabric Topology
• VCF Topology
• Auto discovery, topology import
• Physical, logical, Status, utilization and monitoring of links,
ports and devices
Advanced grouping features
• Custom grouping of devices in hierarchical fashion
o Auto assignment of new devices based on predefined
rules
• Grouping of ports from 1 or more device
Zero Touch provisioning
Monitoring
• Thresholding
• Utilization, Top N links, VM mobility
• IPv6 Sessions
Mobile interface
• Monitoring and Fault management
• Orchestration APIs
• Single point for integration with external Cloud/DC
orchestration tools
o OpenStack
Another element of Junos Space is the Junos Space Virtual Control
application. Virtual Control allows users to manage, monitor, and control the
virtual networks that run within virtualized servers deployed in the data
centre. Virtual Control contributes to a comprehensive solution that extends
across the routing, switching, and security infrastructure.
Rather than rebuild the virtual switch that comes as part of the hypervisor
software, Junos Space Virtual Control integrates with the hypervisor vendor’s
existing management tools, delivering a combined solution that benefits from
both vendors’ innovation and Juniper Networks’ orchestration solutions.
Virtual Control integrates with VMware vSphere, providing access to the
VMware virtual switch (vSwitch) framework (both vNetwork Distributed Switch
and vNetwork Standard Switch). With Virtual Control, users can discover, manage
and monitor the entire virtual network (vNetwork) inventory consisting of
vSphere Hosts, vSwitches, and virtual machines from multiple VMware
vCenter Server instances. Virtual Control efficiently manages vSwitch Port
Groups and Uplink Port Groups and constantly monitors, logs, and reacts to
vNetwork events to keep track of virtual machine locations in the network.
Virtual Control also allows users to group VMware’s recommended vSwitch
Port Group best practice settings into profiles; using these profiles, Port
Groups that share best practice settings but have varying VLAN requirements
can be quickly and easily deployed on different vSwitches. Virtual Control can
also be used to discover Port Groups being managed via VMware vCenter
Server. This allows for flexible operational models that define how
management responsibilities are split between network and server teams. In
addition, Virtual Control enables error-free deployment of VMware services
such as VMotion, Distributed Resource Scheduler (DRS), high availability
(HA), and fault tolerance.
Juniper also provides other options around automation, such as Puppet for
Junos, all of which can be supported on the proposed platforms.
R019 As we move towards a more virtualized service delivery method, we are High
[6.6.2] aware that automated VM expansion and workload mobility would enhance
the delivery of service to our customer.
Please suggest how your network solution would allow us to exploit some of
the VM mobility options, for example, by permitting IP-address mobility for
mobility of VM’s and the L2 environment between the tech halls and data
centre. We touched upon VPLS, MPLS and Contrail in that response, and
how those technologies can be used to provide not only separation but also
mobility of VM’s across the data Centre. Whilst not wanting to outline the
same elements again, we also leverage juniper switches as VMware NSX L2
gateways and using this functionality with VXLAN to provide the option
vMotion between tech halls and maybe between data centres, but Juniper
use the term maybe because we would suggest an element of caution in
AL
regards to the use of long distance vMotion between data centres and that
maybe limiting the L2 and L3 boundaries to individual data centres or groups
of tech halls would be a safer option as opposed to prospect of live vMotion
between data centres.
This is not to suggest that, with the right testing and large enough WAN
pipes, it would not work, and with the use of EVPN this can be done, but an
element of caution should be maintained.
Our reasoning behind this is as follows. With the federation of clouds, the
possibility has been put forward that you can connect data centres using WAN
extensions for the purpose of moving VMs without losing sessions. This is
known as Long Distance vMotion by VMware or generically as long distance
live migration. The idea is that long distance vMotion would provide
better-allocated server resources and maintain application availability
between data centres; to do so, L2 reachability must be maintained during
vMotion across the WAN.
Juniper see issues with doing long distance live migration, and beyond
debunking the protocols, we believe that you should consult with customers
on the use cases and recommend best practices when discussing server
virtualization and networking. Live migration has been demonstrated to work
well within a routing domain and within the data centre using Data Centre
Bridging (DCB), but large-scale vMotion over the WAN is mostly unproven,
and whilst Juniper have tested this to some degree, customers should be
sceptical of it.
Long distance live migration over the WAN has limitations due to latency and
bandwidth requirements and the complexity of extending Layer 2. Issues
include the potential for misrouted traffic arriving at the original data centre
when the VM has moved to the backup data centre, traffic tromboning where
traffic is looped from one router to another, large bandwidth requirements,
storage pooling and storage replication requirements, and the complexity of
implementing the bridging architecture. The main problem with moving VMs
around is connecting back to the storage. Storage vendors market replication
schemes that could cause problems for customers and need to be carefully
evaluated.
motion. They should look for implementations that are tested and that scale.
They should look for integrated solutions that ease provisioning of the
network technology to enable live motion across domains and over the WAN.
can be moved to it. If a VM needs more compute resources than are available
on a server, it can be moved to another server in the same rack or on another
the overload workloads are sent to other data centres using DCI at L2 and
live motion. As an example, Amazon deals with the capacity issue without
moving virtual machines. They simply control VM creation in a DC based on
available capacity. Many VMs are running batch jobs and have a short life
you start moving the virtual machines to the backup data centre while they
are running. In this user case there is a requirement to have high bandwidth,
low latency, and links in place to handle the traffic that would be created once
live motion starts. It is difficult to estimate how much bandwidth would be
needed and how long the migration would take, since the users are constantly
updating the VMs and live motion must keep sending the deltas. In any case
it would be expensive to have the bandwidth standing by just in case it is
needed.
One has to wonder if customers would implement live migration across data
centres to prepare for a once-in-a-lifetime event of a total DC shutdown, when
they can implement a backup plan that is not nearly as complex or costly and
only causes approximately 30 minutes of downtime using shutdown, copy,
and restart.
• Layer 3 – Routed Live Migration
If customers want to do live migration, they could consider routed L3 live
migration, which most hypervisor vendors support (VMware, Microsoft, KVM,
Citrix). This method uses dynamic DHCP/DNS updates and can use session
tunnelling to keep existing sessions alive if needed, and it does not require L2
bridging.
Since Microsoft has announced support for L3 live migration with dynamic
DNS updates from the DHCP server, and VMware, KVM, and Citrix have it in
their products, L3 live migration is a possible alternative for customers that
don't want to deal with L2 bridging.
Protocols
Listed below is a summary of the protocols that Juniper would recommend in
a data centre scenario where vMotion is an option.
• Eliminate STP
Juniper proposes Virtual Chassis Fabric on the EX and QFX Series switches.
No STP is needed.
• Scale and extend VLANs
Juniper proposes VPLS-to-VLAN stitching from the EX9200 to the QFX, or
you could use Q-in-Q or VLAN groups. Juniper also supports EVPN for L2
stretch, which could tie back to either VLANs within the DC, a VPLS domain,
or Juniper Contrail.
• Connect to virtual ports
Juniper could propose Virtual Ethernet Port Aggregator (VEPA), which is part
of IEEE 802.1Qbg and available in the EX and QFX Series, or Space Virtual
Control.
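As a hedged sketch of the VPLS element mentioned above (the instance name, IDs, and interface are illustrative only, not part of the proposal), a BGP-signalled VPLS instance on an EX9200 takes roughly this shape:

```
# Sketch only: BGP-signalled VPLS instance stitched to a local VLAN interface
set routing-instances DC-STRETCH instance-type vpls
set routing-instances DC-STRETCH interface ge-0/0/1.100
set routing-instances DC-STRETCH route-distinguisher 65000:100
set routing-instances DC-STRETCH vrf-target target:65000:100
set routing-instances DC-STRETCH protocols vpls site DC1 site-identifier 1
```

Each participating data centre would configure a matching instance with a unique site identifier, giving the stretched L2 domain described above.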
Summary
Whilst vMotion over long distances is possible, Juniper believes that at this
time such solutions are untried, untested, and to some degree rely on
unreasonable amounts of bandwidth, potentially creating routing problems
and problems reaching storage. Customers should proceed with caution and
carefully consider whether to go long-distance with vMotion or take a more
local approach to vMotion in the first instance, and test long-distance vMotion
before implementing it or relying on it as the foundation of their virtual
strategy.
R020 Currently the network infrastructure relies upon physical separation, CAPS High
[6.6.3] approved cryptography and Evaluated firewalls to provide separation
between different security domains.
Please explain how your products could enable us to consolidate physical
resources whilst still maintaining assured separation conformant to CESG
standards such as, but not limited to, GPG12 (Virtualization), IAS1 Part 2
Appendix C (Assurance) and IAS4 (Cryptography)
Juniper As outlined in response R008, several technologies exist from Juniper to aid
in the secure transportation and separation of traffic with different security
gradings over a single IP infrastructure. As discussed in response R008,
Juniper can use VPLS to provide separation between the different traffic
types by creating separate VPLS domains for traffic with different security
grades. The VPLS domains could be stretched between data centres by
either extending VPLS into the WAN or via the use of EVPN, which would be
tied to the VPLS domains at each data centre.
Juniper can also encrypt VPNs, which would allow traffic deemed restricted
(in the old scheme) or secret (in the Government's new scheme) to not only
be isolated within its own VPLS domain but also encrypted. Traffic graded as
official under the new marking scheme would also be allocated its own
VPLS domain, but would require no encryption.
Whitepaper on Implementing VPLS for Data Centre interconnectivity
http://www.juniper.net/us/en/local/pdf/implementation-guides/8010050-en.pdf
dedicated VPN with complete isolation from other VRs on the same VM
server and from other VPNs in the network. Isolation in this case would be
the same as if the whole network was a large MPLS network. Encryption
could also be employed on traffic deemed to be secret and above, with no
encryption on all other traffic, in the same way as described using VPLS.
In both of these methods, Juniper believe this would provide the same
standard of security as the PSN, and Juniper could implement a virtual
version of the CPA-approved SRX to provide a complete solution.
Juniper would like to point out that at this time we believe the following is
still true from CESG: for the moment, cross-domain solutions (where a virtual
machine is connected to more than one security domain) are specifically
excluded, and data sharing is similarly excluded, though it remains possible
where the parties have considered the risks of importing malware or leaking
data, and have agreed how they are going to handle the risks.
With this in mind, we would work with XYZ, Inc. to find a suitable solution,
which would meet CESG requirements and provide the cost saving of not
having to implement separate solutions.
R021 Please explain how your devices are able to provide increased connectivity High
[6.6.4] and throughput, and how they would enable XYZ, Inc. to:
• Increase speed at which new connections are deployed
• Upgrade to 10 Gigabit Ethernet server connectivity to provide
consolidation of existing bulk 1GE copper connections
• Aid planning for short term peak workloads
Juniper To provide this functionality we have specified mixed Virtual Chassis Fabrics
of EX4300s to provide RJ45 copper-based connectivity and QFX5100s to
provide 1/10GbE SFP-based connections. The QFX5100-48S, as described
earlier within our responses, provides 48 ports of 1/10GbE and 6 x 40GbE
uplink ports. The 48 ports of 1/10GbE are dual-speed ports with no limitation
on what can be 10GbE or 1GbE. In providing dual-speed ports as standard,
XYZ, Inc. have a low-cost migration path towards 10GbE virtual servers
when older 1GbE servers are no longer required. The only cost in this
migration would be for SFP optics or DAC twinax cables, as all of the ports on
the QFX5100 are SFP-based.
The same principle applies to uplink ports. Juniper have specified 10GbE
connectivity using the 6 x 40GbE uplinks via a 40GbE to 4 x 10GbE breakout
cable, allowing 10GbE uplinks on day one with the option, as traffic
increases, to utilise 40GbE via the introduction of 40GbE QSFPs and 40GbE
DAC cables.
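For reference, the 40GbE-to-4x10GbE breakout described above is enabled on the QFX5100 by channelizing the uplink port; a minimal sketch (the FPC/PIC/port numbers are placeholders):

```
# Sketch only: channelize 40GbE uplink port 48 into 4 x 10GbE
# (the port then appears as xe-0/0/48:0 through xe-0/0/48:3)
set chassis fpc 0 pic 0 port 48 channel-speed 10g
```

Reverting to native 40GbE later is a matter of deleting this statement and swapping the breakout cable for a QSFP or 40GbE DAC.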
In introducing this capability from day one as standard across the design,
short term peak workloads can be managed as the migration to 10GbE virtual
servers gathers pace whilst providing a simple platform in the form of VCF to
manage multiple switches from a single management instance.
If a switch is placed in a different location but needs to be connected back
to a VCF, then the only limitation would be the fibre run to connect the switch
to the spine node. Once connected, it is part of the VCF and will act as such.
This would also mean that servers in different locations that need to be
migrated to newer VM servers could be connected to the same VCF, allowing
a more flexible migration approach without the need to move servers.
R022 Regarding pricing and cost innovation: High
[6.6.5]
• Please indicate innovation in pricing models that would allow XYZ, Inc.
to move towards a ‘pay for what we use’ charging mechanism. Explain
how this could be applied to your solution offerings.
• Please describe the discount structures that would be offered e.g.
amount.
At end of lease, XYZ, Inc. would have the following End of Lease
options:
- Return the assets to IBM Global Financing
- Continue with the prime term rentals
- Renew the lease
- Pay the conversion charge (based on the Fair Market Value of the
assets) and
- Keep the assets on peppercorn rental (a minimal annual
charge)
- Sell the assets to an independent Third Party Agent
*Above subject to final pricing and credit approval
• Please see all quotes in Pricing Appendix for Discount structures and
Rebates
We have also provided a TCO Savings report, citing savings of 79% in
power and cooling and 76% in space savings.
Sales Services
Table 13.6: Sales Service Requirements
Ref Requirement Weighting
R023 Pre-sales: Please describe the services and/or resources that could be Mandatory
[6.7.1] available to XYZ, Inc. for both commercial and technical help with the design
account manager, and a resource pool of Sales engineers all based in the
UK, to provide commercial and technical support in a pre-sales environment.
R024 Post-sales: Please describe any post-sales services and/or resources which Mandatory
[6.7.2] could be available to XYZ, Inc., in order to assist with the ongoing support
and maintenance of deployed solutions including, especially regarding any
design/feature/function compatibility and interoperation.
and the complexity of your network. In order to assist XYZ, Inc. with the
ongoing support and maintenance of the deployed solutions Juniper has
included optional pricing for the Juniper Care Plus service, in addition to the
Juniper Care maintenance pricing with 4 hour hardware replacement as
requested. The full service descriptions for both Juniper Care and Juniper
• Access to Advanced Services to deliver the following prescriptive
services –
o Software Upgrade Recommendation and Review Service
(includes a Bug Scrub)
o Product Issue Impact Review Service
o Feature Rollout and Change Review Service
o Network Change Plan Review Service
The Advanced Services provided through Juniper Care Plus enable you to
effectively manage the ongoing lifecycle tasks of design changes, software
feature and functionality review, network change and associated
implementation planning, and software recommendation review: a review of
your software requirements, assessment of software upgrade risk, analysis of
potential impact on your network, and recommendations on a target software
release that can best meet your requirements.
that ensures you can take full advantage of all the Juniper Care Plus
features, capabilities, and benefits. Please see the Appendix for further
details within the AS Credits datasheet.
R024a Post-sales: Please describe the services and resources which could be Mandatory
[6.7.3] available for involvement in the design, build and operation of the new
network
Juniper: Juniper could provide a Resident Consultant to work alongside the XYZ, Inc.
team for either 6 or 12 months. The Resident Consultant works daily with
XYZ, Inc. staff at your location, becoming intimately familiar with your unique
environment. The Resident Consultant helps you avoid many network issues
before they arise, and is fully prepared to help resolve issues as quickly as
possible when they do
occur. The Resident Consultant assists with network design, deployment, and
support process definition and documentation, deployment and
implementation of Juniper Networks equipment, and post cutover activities for
your network.
• specifications
• Providing informal workshops to transfer knowledge to engineering staff
• Applying industry best practices to the design, planning, and implementation of the network
• Applying extensive industry experience to optimize network performance and proactively analyze potential enhancements
• Evaluating technical specifications for interoperability
The Resident Consultant also assists with deployment of Juniper Networks
equipment, post cutover activities, and day-to-day operations for larger
networks.
Consultancy-based project time is also available; please see the SOW in the
appendices.
Product Introduction
A new network enterprise platform will represent a significant outlay for XYZ, Inc. to develop
support capabilities, build a service model and execute the migration of services from existing
platforms. The resources and time taken to introduce a new platform to live service are a
significant risk. For each of the below requirements please state how your proposal would reduce
the impact and burden of the risks and costs to XYZ, Inc.
_____________________________________________________________________
Juniper: Read and Understood
R025 Skills gap – the difference between the existing XYZ, Mandatory
[6.8.1] Inc. design and support capability and that which is required to deploy and
manage new infrastructure. This includes, but is not limited to:
• Cost to retrain
• Number of design and support engineers required
• Access to ‘sandpit’ hardware environments for network staff
Within your response, to address this skills gap, please provide
pricing for:
o Troubleshooting
o High availability (i.e. clustering type technologies)
o Security
o Performance management
Juniper is well versed in cross-training; we can offer * to Junos courses, as
well as helpful conversion kits to assist ongoing learning.
We also have Fast Track, which is self-paced and enables staff to dip in
and out of modules as required. Self-paced study is free of charge, and can
get the team proficient to JNCIS level with no financial cost.
There are a lot of complimentary tools, training, and resources available to
reduce skills gaps quickly.
Hardware environments will be available within the POC options stated.
Publicly scheduled courses are available at a cost of approximately $700
per person per day. Alternatively, Juniper can provide private class training
for up to 16 students per class at $7,000 per day (private, onsite, with some
customization if required).
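The break-even between the two training options can be sketched as follows; this is an illustrative calculation based only on the per-day rates quoted above, and is not part of the commercial offer.

```python
# Compare public vs. private training cost for a single day, using only
# the quoted rates: $700 per person per day (public) vs. $7,000 per day
# for a private class of up to 16 students.
PUBLIC_RATE_PER_PERSON = 700
PRIVATE_RATE_PER_CLASS = 7000
PRIVATE_CLASS_CAP = 16

def cheapest_option(students: int) -> str:
    """Return the cheaper option for one training day."""
    if students > PRIVATE_CLASS_CAP:
        raise ValueError("more than one private class would be needed")
    public_cost = students * PUBLIC_RATE_PER_PERSON
    return "private" if PRIVATE_RATE_PER_CLASS < public_cost else "public"

# At 10 students the public cost equals the private day rate ($7,000);
# from 11 students upward the private class is the cheaper option.
print(cheapest_option(10))  # public (equal cost, no saving yet)
print(cheapest_option(11))  # private
```

In other words, a private onsite class becomes the more economical choice as soon as more than ten delegates need the same course on the same day.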
Suggested courses include, but are not limited to:
To enable and support the XYZ, Inc. operation team during capability
development and network transition, Juniper will provide the following new
customer services, which are complimentary.
employees to take the JNCIS-Junos certification. This training component of
the Services is designed to help train the Customer technical staff
responsible for design and operations tasks, and to help provide foundational
Junos OS skills needed to support Junos OS devices, including networking
and basic routing and switching fundamentals.
This training has been specifically calibrated for multi-tiered network teams
and includes:
Platform-specific self-guided multimedia content for post-event study.
Service Management – Provided on a remote basis for ninety (90) days, this
includes support for key operational tasks as the Customer becomes familiar
with Juniper Networks’ support infrastructure. The start date is one week after
Operational Review
- Organizational roles
- Maintenance procedures
- Upgrade procedures
R026 Service model – this includes all aspects to build a service around the Mandatory
[6.8.2] infrastructure such as:
• Integration with the XYZ, Inc. enterprise management Framework:
work required to configure and deploy management tools for XYZ,
Inc.’s needs
• Support routes: setting up correct support routes for first, second and
third line support with correct paths for escalation
• Validation of XYZ, Inc. solutions: work required by XYZ, Inc. to validate
standard configurations on new hardware/operating system
combinations – a common issue is availability of hardware on which to
do the necessary validation
Please suggest possibilities for reducing the time and associated cost to XYZ,
Inc. to develop these capabilities which would be necessary before any new
platform could be considered ready for live service. If this includes
chargeable professional services please include appropriate costs in your
proposal.
Juniper: Juniper’s Professional Services organization includes the most experienced
and knowledgeable internetworking professionals in the industry. Our team
appreciates the complexities and the subtleties inherent in large-scale design
and can assist XYZ, Inc. with respect to the proposed project.
The transformation methodology is based on four phases, with the first three
Figure 1. Four project phases in data centre network transformation
methodology
To support XYZ, Inc. during the Build and Operate phases, Professional Services
(PS) can provide a Resident Consultant (RC) and Test, Implementation and
Migration (TIM) Engineers who can complement the customer engineering
and operation teams.
Integration with the XYZ, Inc. enterprise management Framework
Juniper Networks’ Professional Services has extensive experience in
management systems integration in Data Centre environments and with
Often these APIs can be exploited by external systems directly and can be
integrated by the customer’s systems integrators/partners. Further, Juniper
Networks’ Professional Services can provide a range of supporting services
In addition to integration, Juniper Networks’ Professional Services is able to
leverage a variety of automation mechanisms in Junos Space (e.g. CLI
Configlets, scripts, templates) and other tools (such as Puppet, Chef, Ansible)
to aid in the automation of build and in-life data centre deployment and
operations functions. Examples include DC/device build automation, frequent
configuration tasks, and network/service configuration validation.
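As an illustration of the configuration-validation task mentioned above, the sketch below diffs a device's running configuration against an agreed baseline; the section names and configuration fragments are hypothetical examples, and in practice a tool such as Junos Space or Ansible would retrieve and compare the configurations.

```python
# Minimal sketch of network/service configuration validation: compare a
# device's running configuration against the agreed baseline and report
# any drift. The configuration snippets below are hypothetical examples.
from difflib import unified_diff

def config_drift(baseline: str, running: str) -> list[str]:
    """Return unified-diff lines showing where the running config
    deviates from the baseline; an empty list means no drift."""
    return list(unified_diff(
        baseline.splitlines(), running.splitlines(),
        fromfile="baseline", tofile="running", lineterm=""))

baseline = "\n".join([
    "system { host-name dc1-leaf01; }",
    "protocols { lldp { interface all; } }",
])
running = "\n".join([
    "system { host-name dc1-leaf01; }",
    "protocols { lldp { interface ge-0/0/0; } }",  # drifted line
])

for line in config_drift(baseline, running):
    print(line)
```

Run across an estate of devices, a report like this gives the operations team a quick, repeatable check that deployed configurations still match the validated standard.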
To provide a method of validating the proposed solution that is both cost and
time efficient, Juniper would look to validate it with XYZ, Inc. through a Proof
of Concept at our labs. This would allow both Juniper and XYZ, Inc. to define
the baseline architecture and a common configuration set across the
equipment, not only to test different POC scenarios but also to carry forward
to an on-site test.
Once the POC is complete, we can take these configuration examples and
implement them in an on-site testing lab such as SHP01, producing a
cut-down version of the full architecture utilising the equipment proposed for
SHP01 and a baseline configuration as we move towards a live
implementation.
Juniper would envisage that the equipment proposed for SHP01 would stay
in that location and would be used for further software testing as the live
solution evolves.
Juniper is also able to provide access to Junosphere, which would provide
XYZ, Inc. with another option for testing software and configurations in a
virtual environment. As this is a virtual system run within Juniper’s private
cloud, XYZ, Inc. and Juniper could produce an example of their network, and
XYZ, Inc. staff can then access this private cloud to test configurations
before retesting in a lab environment.
In each case, these services give both Juniper and XYZ, Inc. a process to
validate the design and configuration prior to live implementation, and a
means of testing further changes.
An “Introduction to JTAC” session will cover the best practice methods to
engage and interact with Juniper’s support organisation, covering all aspects
of case handling, technical and management escalation processes from
initial contact to resolution, and customer service centre tools and knowledge
base systems.
The comprehensive “Guide to JTAC” is also attached in the appendix for
general reference.
R027 Product build definition – represents the work required to develop a standard Mandatory
[6.8.3] build:
• Standard OS image
• Hardening of OS configuration as required to meet XYZ, Inc. security
standards
• Template containing standard base configuration (including device
remote access for config/support and management tool access) and
typical interface/routing configuration
Again, please describe the help you can offer to reduce the impact of
ensuring all the above capability is in place, as will be necessary for the
platform to be available for live service. If this includes chargeable
professional services please indicate appropriate costs in your proposal.
Juniper: Juniper prides itself on the development and stability of the Junos OS
software. This is in part due to our focus on working with customers to
provide them with the best version of software possible for their deployment,
and on working with them to introduce new features into Junos.
applications that can unlock more value from the network
• One operating system: Reduces time and effort to plan, deploy, and
operate network infrastructure and provides the same configuration
experience across all platforms
• One release train: Provides stable delivery of new functionality in a
steady, time-tested cadence.
• One modular software architecture: Provides highly available and
scalable software that keeps up with changing needs
This means that XYZ, Inc. can rely on a standard OS image across all of the
platforms proposed in our solution and, after testing, standardise on an OS
image that meets all of their requirements.
From a hardening point of view, Junos already goes through several beta
versions of testing, both within Juniper and with specific customers, before
being released to market. With this in mind, Juniper would sign XYZ, Inc. up
to the Junos Beta program. The purpose of the Beta program is to partner
with customers to test and validate new features prior to FRS release, and
to give customers an opportunity to share feedback with Juniper on product
usability, features, and performance.
In this way, XYZ, Inc. can test new software features prior to release and
decide whether these features and the new software version could be
implemented into the network. Another aspect of the beta release program is
that the Junos engineering team can test functions and features specific to
XYZ, Inc. prior to beta release. Your aligned Juniper SE, with XYZ, Inc.’s
permission, would provide the Junos team with an in-depth design and
configuration of your network. The Junos team would then test new software
against this design in their labs and provide XYZ, Inc. with the results. This
allows XYZ, Inc. to have Junos developed with their infrastructure in mind.
Your aligned SE would provide beta access and sign up to the programme
upon order of the support contract. This is a free service that Juniper provides
for the benefit of both Juniper and its customers.
To complement the above services, Juniper also has a cloud-based virtual
environment called Junosphere. Junosphere Cloud is a virtual environment
from Juniper Networks that can reduce the costs of network planning and
design, within which networks can be modelled and designed. When built,
these networks can be used for training, network modelling, planning for new
services, or examining "what-if" scenarios for the installed network.
Networks created in the Junosphere Cloud environment run the same Junos
operating system that powers Juniper routers, switches, and security
platforms, for unparalleled realism and accuracy. They can be easily and
quickly scaled up or down to hundreds or thousands of nodes, for a level of
scale and accuracy not possible with alternative network simulation
approaches.
This would provide XYZ, Inc. another option for meeting their security
hardening processes and testing new features and functions without the use
of a test lab or running in the live environment.
The web link below provides access to more details on this service:
http://www.juniper.net/us/en/products-services/software/junos-platform/junosphere/
What all of the above items provide is the ability to test and configure a
standard configuration template and OS version for devices prior to
implementation in a live environment, thus reducing risk and time to
implementation.
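The standard base configuration template described above could, for example, be maintained as a single parameterised template rendered once per device; the sketch below uses only the Python standard library, and the hostnames, interface names, and addresses shown are hypothetical illustrations rather than the actual XYZ, Inc. build.

```python
# Sketch of rendering a standard base configuration template per device.
# Uses string.Template from the standard library; the template text and
# the parameter values are hypothetical examples, not a validated build.
from string import Template

BASE_TEMPLATE = Template("""\
system {
    host-name $hostname;
    services { ssh { root-login deny; } }   /* hardening: no root SSH */
}
interfaces {
    $mgmt_if { unit 0 { family inet { address $mgmt_ip; } } }
}
""")

def render_base_config(hostname: str, mgmt_if: str, mgmt_ip: str) -> str:
    """Fill the standard template with per-device management details."""
    return BASE_TEMPLATE.substitute(
        hostname=hostname, mgmt_if=mgmt_if, mgmt_ip=mgmt_ip)

cfg = render_base_config("dc1-spine01", "em0", "192.0.2.10/24")
print(cfg)
```

Keeping the hardening lines in one template means every rendered device configuration inherits them automatically, which is exactly the risk reduction the standard build is intended to deliver.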
support contract. Details of this service have been included in our additional
information section.
R028 Migration – represents the work required to migrate Mandatory
[6.8.4] services to a new platform; this goes beyond the technical migration issues of
server/switch connectivity to include, but not limited to:
• Guarantee of live service during migration period; parallel running of
new and old infrastructure
• Capacity planning to deliver correctly sized infrastructure
above and place them into a process that Juniper Professional Services and
our solution team can use to implement the Juniper proposal.
The components are divided into four areas: Design, Assess, Plan, and
Migration.
Design
Effective design is essential to reducing risk, delays, and the total cost of
network deployments. This is the element of the process that ties the sales
and solution team to the customer’s technical team and, in turn, the Juniper
Pro-services team.
It’s at this point that these teams come together to address the following:
• Build upon the initial solution
• Work out any additional requirements not covered by the RFP
• Gain a thorough understanding of the customer’s immediate and
long-term business requirements
• How the solution will address these requirements and any future
performance and capacity issues
• Test the initial solution in a POC to validate the architecture, how the
solution will interoperate with the customer’s existing solution, and
whether they can run in parallel with each other
The output from these processes allows Juniper and yourselves to finalise an
architectural foundation capable of supporting your current needs, with the
scalability required to allow you to take advantage of future opportunities. It
also proves the solution so that Juniper and XYZ, Inc. can move to the next
stage, which is Assess.
Assess
This starts the process in which Juniper Professional Services become the
project leads from Juniper’s point of view and the sales team step back.
Juniper Professional Services use the following diagram to show their
methodology in migrations.
The Assess process defines three main scopes: project definition, project
scope, and identification of the potential threats to the implementation and
migration.
Project Scope audits the existing infrastructure, identifies any potential issues
that may affect the implementation, categorises them, and assigns the
right people to mitigate each risk.
This then folds into the threat identification process, so that all issues prior to
migration and live service implementation have been tested and signed off
with the right stakeholders, and any solution adjustments are agreed.
Plan
The Plan element of the Pro-services engagement draws up the necessary
documentation and project scope, taking into account the items already
covered, so that all aspects of the project are ready to be implemented
together with the potential risks, their dependencies, and how any
unforeseen issues can be mitigated.
Migrate
The Migrate process is the implementation of the project, taking into account
all of the items noted previously and ensuring that the implementation is
phased to mitigate any potential problems. A phased approach would also
allow both the existing and new services to run in parallel, with service
moved from the old to the new once a suitable period of time has passed
with both services running.
Essential to the success of the DC Network Fabric is clear, concise project
management that ensures the project runs smoothly and that
communication between Customer and Juniper is strong, including the use of
a shared file structure. Juniper Networks has developed the Juniper Project
Management Methodology (JPMM), based on the standardized PMI approach
and adapted to the professional services domain.
• Juniper can supply appropriately qualified project managers to run
the Juniper-specific work packages to reduce the burden on XYZ,
Inc.
• A Project Architect can oversee all design work and act as the
Juniper Technical Lead.
DC Juniper solution.
R030 Ability to validate new infrastructure proposal within a supplier or customer Mandatory
[6.8.6] proof of concept lab.
o Old to new integration and connectivity
o Trialling migration options, tools & techniques
o Integration of routing protocols
o Integration of firewalls, load balancers
o Integration into existing network fault and monitoring
systems
The supplier should indicate which of the listed requirements in this document
can be proved in their Proof of Concept environment.
Juniper
Juniper can provide proof of concept (POC) testing against all of the listed
requirements noted above in two ways: either via an off-site POC in one of
Juniper’s purpose-built POC labs, or via an on-site POC.
Juniper’s purpose-built POC labs are located in Sunnyvale US, Westford US,
Amsterdam NL, Hong Kong, and Singapore. Each POC lab can provide a
complete demonstration lab of the proposed solution with full fault and feature
testing and configuration. It can also provide interoperability with third-party
products that are part of the solution or the client’s existing network. In each
network.
• Efficient use of Juniper resources
• Cost and time savings for the client, who capitalises on Juniper’s
significant investment in their success
• Valuable hands-on experience
• Exposure to Juniper best practices regarding design, testing, and
analysis
• Close integration with the Executive Briefing Centre for achievement
of all meeting objectives
• Opportunity to see other technology and to talk to engineers within
Juniper who are developing the next generation of hardware and
software standards.
A typical Juniper POC involves:
• Customized device hardware and software revisions and
configurations, matched as closely as possible to the client’s devices
• Customized tester configuration regarding packet size/distribution,
packet rates, protocols, connection setup rates, etc.
• Validation of defined features/functionality based upon client-defined
test plan criteria
• Acceptance of performance, scale, and failover behavior based upon
client-defined test plan criteria
performance/acceptance criteria
For an on-site POC, the principles are the same, but Juniper would be more
reliant on the customer’s lab environment. Juniper would supply the
necessary equipment from Juniper loans in Amsterdam and ship it to site.
Juniper would then resource Juniper-based SEs and a Professional Services
team to run the POC with the customer, replicating the same process that
Juniper would use in one of their dedicated POC labs.
It should be noted that a cost may be involved for the professional services
element of this scenario, whereas a Juniper-based POC would be free
except for travel-related costs.
Environmental Requirements
Table 13.8: Environmental Requirements
RN
• Manufacture
IN
RE
• Power efficiency
• Disposal
A
Juniper: Juniper's greatest impact on the environment is through our products, so
we're reducing our carbon footprint with products that are environmentally
responsible in all phases of their life cycles, a complex challenge that
demonstrates our commitment to protecting the environment.
We monitor compliance with local and international laws in all of our locations
worldwide, and work with governments, industry partners, and consortia to
harmonize regulations with innovation. We collaborate with governments,
industry vendors, and customers to develop and implement energy metrics
that measure the efficiency of networks.
• Reduced CO2 footprint
• Reduced ozone-depleting materials and substances
• Compliance with legislated restrictions of hazardous substances
• Reduced fossil fuel consumption
Juniper follows EU Directive 2002/95/EC, Restriction of the use of Certain
Hazardous Substances in Electrical and Electronic Equipment. Juniper
Networks follows EU RoHS restrictions throughout all product series. We
have eliminated the use of all banned substances currently outlined in the EU
RoHS directive and reduced the usage of restricted substances to levels at or
below legal limits. And we continue to improve product designs to meet
changing environmental requirements through the implementation of the
Juniper provides recycling support for our equipment in order to meet the
European Union's WEEE Directive. All of our products introduced after
August 13, 2005, are marked or documented with an icon depicting a
crossed-out wheeled bin.
Additional Requirements
Warranty/Support
Warranty/Support must be in place to cover these units for the period of their service life and
must commence from installation on site rather than from delivery to the bonded warehouse.
Please clearly specify warranty provided with each item/product proposed.
A
_____________________________________________________________________
Juniper: Read and Understood
Juniper’s Standard Warranty applies to the QFX. Our Enhanced lifetime warranty applies to
the EX range, covering the EX9204 and the EX4300.
The Standard warranty covers 1 year and has a start date within 90 days of the original shipment to
XYZ, Inc.; please see Appendix One for complete details.
The Enhanced warranty covers 5 years and has a start date within 90 days of the shipment date;
please also see Appendix One.
Juniper would be prepared to offer some flexibility once we understand the installation schedule,
and can seek VP approval for a deferral, within reason.
The Support and maintenance start date can be postponed for up to 6 months after purchase;
however, if support is not procured within one year of the hardware purchase date, Juniper
reserves the right to impose a reinstatement fee after that time.
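The start-date rules above can be illustrated with a small date calculation. This is an informal sketch of our reading of the terms (with six months taken as 183 days), not contract language; Appendix One remains the authoritative statement.

```python
# Illustrative reading of the warranty/support start-date rules quoted
# above: warranty starts within 90 days of shipment, support start can
# be postponed up to ~6 months (taken here as 183 days), and support
# ordered more than one year after purchase may attract a fee.
from datetime import date, timedelta

def latest_warranty_start(ship_date: date) -> date:
    """Latest warranty start: 90 days after original shipment."""
    return ship_date + timedelta(days=90)

def latest_support_start(purchase_date: date) -> date:
    """Latest postponed support start: ~6 months after purchase."""
    return purchase_date + timedelta(days=183)

def reinstatement_fee_possible(purchase_date: date,
                               support_order_date: date) -> bool:
    """True if support is ordered more than one year after purchase."""
    return support_order_date > purchase_date + timedelta(days=365)

purchased = date(2015, 6, 1)
print(latest_warranty_start(purchased))   # 2015-08-30
print(latest_support_start(purchased))    # 2015-12-01
print(reinstatement_fee_possible(purchased, date(2016, 7, 1)))  # True
```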
Maintenance
During the migration XYZ, Inc. will have a number of existing support and maintenance contracts;
the vendor shall include in their response how they plan to offset these costs.
Regardless of existing XYZ, Inc. support and maintenance contracts with incumbent suppliers, any
response to this RFP MUST include full details of proposed support and maintenance contracts
(including costs). Details that must be included are as follows:
• Warranties
• Recommendations as to engineer support/location
• Cost of hardware replacement if a faulty unit cannot be returned – any additional cost
needs to be specified if this is not included in the support contract
• A 4-hour SLA for hardware support
Please describe any additional offerings that would include the support of the current switch
infrastructure including costs.
_____________________________________________________________________
Juniper: Read and Understood
All of the above queries have been answered and incorporated within the
bid document.
References
A
Please provide two reference sites to support the information within your proposal. These should
include references that demonstrate the following:
SH
• Experience in government accounts
• Interoperability with Cisco Catalyst/Nexus-based Data Centre network deployments
These references will be used to aid marking against the requirements described above.
_____________________________________________________________________
Juniper: Read and Understood
The Prime Minister’s Office, 10 Downing Street, had a requirement
to replace their existing network infrastructure with a solution that
would provide a robust and resilient architecture to support the
next generation of applications and secure communications. Their
existing LAN had reached its end of life and support, and would
not provide the capability to support newer programs and stringent
security requirements.
The Prime Minister’s Office also required an infrastructure such
that, if a crisis hit, staff could move immediately to any of the
three sites the PM uses without downtime.
The Cabinet Office also uses Juniper switches for its internal infrastructure.
The solution is divided between three sites: the Prime Minister’s Office and two secure data centres.
Within each location, a dual core layer connects to a series of spine-and-leaf Virtual Chassis.
These three core locations are connected to each other via dual 10GE dark fibre to create a
resilient MPLS ring.
The core layer at each site is dual connected to form a Virtual Chassis. Due to the security
requirements at all sites, users’ PCs connect to the network via direct fibre; as such, the access
layer of the solution consists of 1GbE fibre switches acting as leaf nodes connecting back to a
spine layer of 10GbE switches to form a Virtual Chassis. This is then replicated to form a series
of Virtual Chassis which make up the user connectivity at each site, and is duplicated in the
data centres for server connectivity.
• NYSE Euronext needed next-generation data centres, consolidating from 10 to 4, to meet its
stringent requirements for high performance, extraordinary reliability, low latency, scalability,
• Juniper deployed EX Series Ethernet Switches (800+), QFX3500 (125+) and MX Series 3D
Universal Edge Routers (40+)
• Juniper’s security presence at NYSE includes Remote Access Infrastructure, Site-to-Site
VPN, and Active Directory User Authentication
Supporting Material
Vendors can include any other supporting material within their proposal that they feel adds value
to XYZ, Inc.’s strategic aims and brings innovation whether through technical capability,
professional services or commercial offerings. This material will be used to aid marking against the
requirements already defined.
_____________________________________________________________________
Juniper: Read and Understood. Please see Appendix 1 for supporting technical datasheets.
Juniper would also like to offer XYZ, Inc. an ‘Onboarding Package’. This is a program that
eliminates potential risk when migrating to Juniper.
We offer it as a free-of-charge value add when Juniper has no current estate in an end-user
customer, such as XYZ, Inc.* The part number is listed in the Hardware Pricing at zero.
Commercial Offering
Please see Appendix Two, Pricing for Hardware, Support & Maintenance and
Professional Services, Value Add and TCO.
Legal
Please note that Juniper Networks is the manufacturer of the equipment proposed in this RFP.
Accordingly, Juniper Networks and XYZ, Inc. will not have a direct contractual relationship. Any
contract for purchase of hardware, installation and commissioning templates will therefore need
to be negotiated and agreed upon between XYZ, Inc. and ACME. Juniper Networks will not be
bound by the terms and conditions of such contract between XYZ, Inc. and ACME.
Appendix One - Appendices
• EX9204 datasheet and EX4300 datasheet
EX9200.pdf EX4300.pdf
• QFX5100 datasheet
QFX5100.pdf
• Junos Space datasheet
Junos Space.pdf
• A Methodology for Transformation of Da datasheet.pdf
• JTAC Guide
JTAC Guide.pdf
• Enhanced Limited Lifetime Warranty
• Standard Product Warranty.pdf
• Training Information
Fast Track Program - 3000051-e
• Junosphere
Junosphere.pdf
AAA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Authentication, Authorization, and Accounting
ACL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .access control list
AP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . access point
API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application programming interface
ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Address Resolution Protocol
ASIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .application-specific integrated circuit
ATM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Asynchronous Transfer Mode
BC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . business continuity
BCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Best Current Practices
BOM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .bill of materials
BoR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bottom-of-rack
BPDU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bridge protocol data unit
BYOD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . bring-your-own-device
CapEx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .capital expenditures
CE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . customer edge
CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .command-line interface
COS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . class of service
CPS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . cells per second
CRM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Customer Relationship Management
CSV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . comma-separated values
DCI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data center interconnect
DMZ. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . demilitarized zone
DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Domain Name System
DPI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Deep Packet Inspection
DR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . disaster recovery
LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . link aggregation group
LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lightweight Directory Access Protocol
LLDP-MED. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Link Layer Discovery Protocol–Media Endpoint Discovery
LSYS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . logical systems
MAC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . media access control
MC-LAG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multichassis link aggregation group
MDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Main Distribution Area
MIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Modular Interface Controller
MMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multimode fiber
MoR. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .middle-of-rack
MPC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Modular Port Concentrator
MPO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . multifiber push-on
MSS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . maximum segment size
MSTP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Multiple Spanning Tree Protocol
MTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . mechanical transfer pull-off
MTU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . maximum transmission unit
NAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . network access control
NAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . network-attached storage
NAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Address Translation
NIC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .network interface card
NMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .network management system
NSM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Network and Security Manager
NSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . nonstop active routing
NVF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Network Virtualization Function
OCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Open Compute Project Foundation
OpEx . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .operational expenditures
PCI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Payment Card Industry
PE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . provider edge
POD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . point of delivery
ToR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .top-of-rack
TTR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . time to restore
UC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . unified communications
UCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . unified communications and collaboration
UTM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Unified Threat Management
VAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Value Added Services
VC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Chassis
VIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual IP address
VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual LAN
VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .virtual machine
VoIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . voice over IP
VPC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Virtual Private Cloud
VPLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual private LAN services
VPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual private network
VR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual router
VRID . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual router identifier
VRF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . virtual routing and forwarding
VRRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Virtual Router Redundancy Protocol
VSTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . VLAN Spanning Tree Protocol
WAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Wide Area Network
WDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . wavelength-division multiplexing
XaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Everything as a Service
XFP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-gigabit small form-factor pluggable transceiver