
IMPLEMENTATION GUIDE

Branch High Availability in the


Distributed Enterprise

Copyright 2010, Juniper Networks, Inc.


IMPLEMENTATION GUIDE - Branch High Availability in the Distributed Enterprise

Table of Contents
Introduction
Scope
Target Audience
Terminology and Concepts
Design Considerations
    Juniper Networks Distributed Enterprise Connectivity Architecture
    Branch Office High Availability Considerations
        Understanding Chassis Cluster (JSRP)
        Understanding Virtual Chassis Technology
Implementation and Configuration Guidelines
    Configuring Branch Office with a Single SRX Series Device
        Configuring Redundant IPsec Tunnels on a Single SRX Series Device
        Configuring Zones on an SRX Series Device
        Configuring NAT on SRX Series
    Configuring Branch Office with Redundant SRX Series Devices
        Configuring SRX Series Chassis Cluster (JSRP) for Device-Level WAN High Availability
        Configuring Device-Level LAN High Availability with EX Series Virtual Chassis
        Enable Virtual Chassis Uplink Ports
Connectivity Use Cases and Failover Scenarios
    Internet Backhaul Only
        Normal State
        Failover Scenarios
    Private WAN as Primary and Internet as Backup
        Normal State
        Failover Scenarios
    Internet and Private WAN Split Tunneling
        Normal State
        Failover Scenarios
Summary
About Juniper Networks


Table of Figures
Figure 1: Distributed enterprise connectivity architecture
Figure 2: Distributed enterprise high availability network design
Figure 3: SOHO or retail store link-level high availability architecture
Figure 4: Remote office link-level high availability architecture
Figure 5: Medium-to-large branch office device-level high availability architecture
Figure 6: Link-level redundant WAN connectivity architecture
Figure 7: Security zones on an SRX Series device
Figure 8: Device-level redundancy high availability architecture
Figure 9: SRX Series chassis cluster (JSRP) configuration
Figure 10: Configuring EX4200 Virtual Chassis in medium-to-large branch office
Figure 11: Connecting EX Series Virtual Chassis to redundant branch SRX Series Services Gateways
Figure 12: Traffic flow example in the Internet Backhaul Only use case
Figure 13: Traffic flow example in the Internet as Backup use case
Figure 14: Traffic flow example in the Internet and PWAN Split Tunneling use case


Introduction
Because there are various levels of high availability that can be deployed in branch offices, enterprises need to
identify the level they want to achieve, and then deploy the appropriate degree of device and link redundancy that
supports those high availability requirements.
Link-level redundancy essentially requires two links operating in an active/active or active/backup setting so that if
one link fails, the other can take over (or quickly reinstate) the forwarding of traffic that had previously been forwarded
over the failed link. Failure on any given access link should not result in a loss of connectivity. This only applies to
branch offices with at least two upstream links connected either to a private network or to the Internet.
Another level of high availability is device-level redundancy, effectively doubling up on devices to ensure that a
backup device can take over in the event of a failed device. Typically, the link redundancy and device redundancy are
coupled, and this coupling effectively ties failures together. With this strategy, no single device failure should result in
a loss of connectivity from the branch office to the data centers.
A truly high availability design that provides assured connectivity to business-critical enterprises and branch offices
should employ a combination of link and device redundancy that should connect the branch to dual data centers.
Traffic from the branch office should be dual-homed to each data center so that in the event of a complete failure
in one of the data centers, traffic can be rerouted to a backup data center. Whenever failures occur (link, device, or
data center), traffic should be rerouted in less than 30 seconds. Within this period of time, packet loss might occur.
However, sessions will be maintained if the user applications can withstand these failover times. Branch offices with
redundant devices should provide session persistence so that in the event of a failure, established sessions will not
be dropped, even if the failed device was forwarding traffic.
This document provides configuration data and how-to information relevant to both link-level and device-level
high availability deployments for each of the branch office network profiles in the distributed enterprise
environment.

Scope
This paper provides configuration parameters for each branch network device (including Juniper Networks
SRX Series Services Gateways and Juniper Networks EX Series Ethernet Switches) as part of a high availability
enterprise network. No security or unified threat management (UTM) features are detailed in this document. For
details about implementing UTM features in branch offices, see the Branch Office UTM Implementation Guide. For
details about deploying EX Series Ethernet Switches in the branch office, see Deploying EX Series Switches in
Branch Offices.

Target Audience
- Security and IT engineers
- Network architects

Terminology and Concepts


Table 1 lists the terms that represent key items and concepts related to high availability implementation at
distributed enterprises. These will serve as an aid in understanding the high availability requirements as they
apply to the design considerations and implementation guidelines presented in this paper. For further details about
integrating distributed enterprise branch office and campus network designs into your enterprise, see the Branch
Office Reference Architecture, which provides the key architectural considerations and details required to do so.


Table 1: High Availability Requirements and Description

Connectivity

Link-level redundancy: Failure on any given access link should not result in a loss of connectivity. This only applies to distributed enterprise locations with at least two upstream links connected either to a private network or to the Internet.

Device-level redundancy: In this type of deployment, no single device failure should result in a loss of connectivity from the remote enterprise locations to the data centers.

Data center failure protection: If a complete failure occurs in one of the data centers, traffic should be rerouted to a backup data center without disruption. Remote enterprise locations should therefore be deployed so that they connect to at least two data centers that back each other up with active IPsec tunnels.

Fast failover times: Whenever failures occur (link, device, or data center failures), traffic should be rerouted in less than 30 seconds. Within this period of time, packet loss might occur, but sessions will be maintained if the user applications can withstand these failover times.

Session persistence: Distributed enterprise locations with redundant devices should provide session persistence. If a failure occurs, established sessions should not be dropped, even if the failed device was forwarding traffic.

Load balancing: Customer traffic should be load-balanced across dual (or possibly multiple) connections to the data center. If a link fails, all traffic is directed via the remaining link(s) to its destination at the data center or branch office.

Network Address Translation (NAT): NAT is enabled so that machines in the trust and guest zones can access the Internet. During a failure, Internet sessions might not be preserved, because the translated addresses of that traffic might have to change when different service providers are used to connect to the Internet for higher availability.

Security

Traffic symmetry: Chassis cluster (JUNOS Services Redundancy Protocol, or JSRP) features and redundant Ethernet group interfaces are enabled at the branch offices to ensure that the firewall does not drop traffic because of asymmetric routing. For this to work, the design guarantees that both ingress and egress traffic flows traverse the same services gateway cluster. A similar redundancy scenario is also implemented at the data centers.

Secure management traffic: SNMP monitoring is enabled through the IPsec tunnels. Whenever backup devices are provided (for branch offices with redundant devices), it is possible to monitor all the devices in the branch office location, even if one of them is not forwarding traffic (or terminating any IPsec tunnel).

Manageability

Complete visibility and control: Site-wide, real-time, and historical status monitoring and reporting of log files, device status, and configurations to a centralized location such as a network operations center (NOC).

Uniform network operating system: A single network operating system, the industry-leading high-performance Juniper Networks JUNOS Software, allows enterprises to simplify branch office network infrastructure, provide unified network management, and realize significant savings in total cost of ownership (TCO).


Design Considerations
There are design considerations that can be applied universally, regardless of the branch office profile. The following
paragraphs discuss these considerations as they apply to high availability of network infrastructure, services, and
applications. Next is a discussion of how Juniper Networks enterprise reference architecture applies to distributed
enterprises and all of their major locations: the branch offices, campus, and data centers.

Juniper Networks Distributed Enterprise Connectivity Architecture


Figure 1 depicts a high-level view of Juniper Networks distributed enterprise connectivity architecture. The section
that follows presents the design details that are specific to each of the redundancy levels and technologies covering
all the branch office profiles.
Network infrastructure at branch offices normally contains relatively fewer computing resources than central
facilities or headquarters. Branch offices are typically located where customer interactions occur, which means
there is increased demand for supporting applications and assuring application performance, for security, and for
availability. Because remote locations usually lack local IT staff, the equipment hosted at these facilities must not
only be cost-effective, feature-rich, and highly reliable; it must also offer centralized management capabilities.

[Figure: Branch office (SRX Series with EX2200/EX3200), remote office, and SOHO sites (SRX Series, 3G wireless) connect through the private WAN (managed services) and the public WAN (Internet) to the campus with the enterprise's own core (SRX Series, EX4200/Virtual Chassis), the NOC, and the enterprise and managed/hosted data centers.]

Figure 1: Distributed enterprise connectivity architecture

Juniper Networks offers a set of compelling solutions to meet the needs of distributed enterprise deployments. The
aforementioned connectivity architecture addresses the concerns of enterprises in each of the areas of connectivity,
security, end-user performance, visibility, and manageability. As detailed in the Branch Office Reference Architecture,
enterprises require a completely new approach to branch office networking to avoid services disruption, which
impairs productivity in the branches and in turn negatively impacts the business. These requirements define a
new product category known as services gateways, which simplifies branch networking and lowers TCO by unifying
multiple services such as security, voice, data, and access servers into one remotely manageable platform.

Juniper Networks enterprise reference architecture builds a model that addresses these needs of the enterprises
as well as the number of users at each particular branch office location. Branch office architectures are classified
into three common profiles: Small Office Home Office (SOHO), remote office, and medium-to-large branch office.
The different branch design profiles with Juniper Networks SRX Series Services Gateways, which run the industry-
leading JUNOS Software, address the requirements of today's distributed enterprise locations, while ensuring that
businesses continue to gain efficiencies in CapEx and OpEx.
The designs outlined in the following section are built to address the different levels of high availability required
for different branch profiles and the current as well as future demands of high-performance businesses. Finally,
implementing these designs with all the branch platforms, including services gateways and local switches running
a single network OS (JUNOS Software) on all the branch network elements, allows enterprises to simplify branch
office networking infrastructure, provide unified network management, and realize significant savings in operational
expenditure. For more details about how JUNOS Software helps shape networking, see Juniper Networks JUNOS
Software: Architectural Choices at the Forefront of Networking.

Branch Office High Availability Considerations


Because most enterprises employ far more users in distributed enterprise locations than at headquarters in the
same location as data centers, they need a network infrastructure in the branch offices that performs as well as the
one in the centralized headquarters with high availability and minimized disruption. Since there are multiple branch
profiles that require various levels of high availability, enterprises need to identify what level they want to achieve for
the specific branch office and then deploy the appropriate level of device and link redundancy that supports the high
availability requirements.
In common scenarios, most branch offices connect directly to data centers via either a private WAN link (provided
by service providers in the form of managed services, MPLS Layer 2 or Layer 3 VPNs, or ATM or Frame Relay links) or an
IPsec VPN over the Internet. In many cases, enterprises choose to deploy IPsec VPN over the private WAN links to
add encryption and security for sensitive traffic, because the private WAN networks that most service providers offer
are not really exclusive but are actually shared among many customers.

[Figure: SOHO, remote, and branch office sites with SRX Series devices (the branch office using an EX Series Virtual Chassis on the LAN) connect over the private WAN (managed services) and the Internet to M Series routers at the HQ/campus, the NOC, and the data centers.]

Figure 2: Distributed enterprise high availability network design


As detailed in the Branch Office Reference Architecture, there are two types of high availability designs at most
distributed enterprise locations. Each of these design profiles is discussed in detail in the following considerations:
Link-level redundancy: This design uses a single SRX Series device and either a single or dual private WAN or Internet
connection, as seen in many small branch offices. The SRX Series device provides integrated LAN switching
capability with 8 to 16 Ethernet ports.

[Figure: A single SRX Series device with a WX Series client connects the SOHO or retail store to the private WAN, with PSTN and 3G wireless links available for backup.]

Figure 3: SOHO or retail store link-level high availability architecture

For connecting more devices, LAN connectivity in link-level-only high availability branch offices can be implemented
with a single fixed-configuration Juniper Networks EX2200 Ethernet Switch or Juniper Networks EX3200
Ethernet Switch. These Ethernet switches offer cost-efficient, complete Layer 2 and Layer 3 switching capabilities;
10/100/1000BASE-T copper port connectivity with either full or partial Power over Ethernet (PoE); and the full
JUNOS Software feature set.

[Figure: A single SRX Series device connects the remote office to the Internet, with PSTN and 3G wireless backup links; an EX2200/EX3200 switch provides PoE LAN connectivity for an access point, a local printer, and a WX client.]

Figure 4: Remote office link-level high availability architecture

Device-level redundancy: This design consists of two SRX Series devices: one connects to a private WAN, or a managed
services connection, while the other connects to the Internet, as seen in many medium-to-large branch offices.
Device redundancy is achieved through a chassis cluster (JSRP) and redundant Ethernet groups. LAN redundancy is
implemented with a Juniper Networks EX4200 Ethernet Switch connected to both edge devices to provide a high
availability configuration.


[Figure: Two SRX Series devices (one connected to the private WAN, the other to the Internet with a PSTN backup) front an EX4200 Virtual Chassis that provides PoE LAN connectivity for an access point, a local printer, and a WX client.]

Figure 5: Medium-to-large branch office device-level high availability architecture


The solution profile types and the services they provide are derived from a basic reference architecture in which the
connectivity between distributed enterprise locations (branch/campus) and data centers is provided using the public
network (the Internet) and private WAN/MAN networks (either using point-to-point lines, metro Ethernet, managed
services, or MPLS Layer 2-/Layer 3-based VPNs).
It is not the purpose of this document to detail the different design decisions made at the branch or campus.
Instead, the intention is to use these designs as a starting point for building an IPsec-based VPN network.

Understanding Chassis Cluster (JSRP)


Chassis clustering, also known as JUNOS Services Redundancy Protocol (JSRP), provides network node redundancy
by grouping a pair of the same kind of supported Juniper Networks secure routers or SRX Series Services Gateways
into a cluster. The devices must be running Juniper Networks JUNOS Software with enhanced services.
The two nodes back up each other, with one node acting as the primary and the other as the secondary, ensuring
stateful failover of processes and services in the event of system or hardware failure. If the primary node fails, the
secondary takes over processing of traffic. Nodes in a cluster are interconnected over Ethernet links and synchronize
configuration, kernel, and session state across the cluster to facilitate high availability of interfaces and services.
Chassis cluster functionality includes:
- A resilient system architecture, with a single active control plane for the entire cluster and multiple Packet
Forwarding Engines (PFEs), which presents a single services gateway view of the cluster
- Synchronization of configuration and dynamic runtime state between nodes within a cluster
- Monitoring of physical interfaces, and failover if the failure parameters cross a configured threshold
Understanding Virtual Chassis Technology


Juniper Networks Virtual Chassis technology is a feature of the Juniper Networks EX4200 Ethernet Switch, allowing
the interconnection and operation of switches as a unified, single, high-bandwidth device.
Up to 10 EX4200 switches can be interconnected via dedicated Virtual Chassis ports on each device or through
optional uplink module ports that are configured as Virtual Chassis ports. All EX4200 switch models support Virtual
Chassis technology. With this technology, a single logical device that supports up to 480 10/100/1000BASE-T ports
or 240 100BASE-FX/1000BASE-X ports can be configured. Optional Gigabit Ethernet or 10-Gigabit Ethernet uplink
ports can extend the Virtual Chassis configuration over greater distances. Different switch models in the EX4200
line can be interconnected in the same Virtual Chassis configuration, providing flexibility of port and density options.
Solutions that use the EX4200 line, which includes Virtual Chassis technology, combine the scalability and
compact form factor of standalone switches with the high availability, high backplane bandwidth, and
high port densities of traditional chassis-based switches.
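As a hedged illustration (the serial numbers below are placeholders, and the guide's branch-specific Virtual Chassis configuration appears in a later section), a two-member Virtual Chassis can be preprovisioned so that member roles are deterministic:

```shell
# Preprovision the Virtual Chassis so that each member's role is fixed;
# replace the placeholder serial numbers with the actual EX4200 serials.
set virtual-chassis preprovisioned
set virtual-chassis member 0 serial-number BP0208000001 role routing-engine
set virtual-chassis member 1 serial-number BP0208000002 role routing-engine

# Enable graceful Routing Engine switchover (GRES) for hitless failover
# between the master and backup Routing Engines.
set chassis redundancy graceful-switchover
```

Preprovisioning avoids nondeterministic mastership elections when members are cabled or replaced.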


Virtual Chassis configurations enable economical deployments of switches that deliver network availability in the
branch office locations where installation might otherwise be cost prohibitive or physically impossible. In a Virtual
Chassis configuration, all member switches are managed and monitored as a single logical device. This approach
simplifies the branch office network operations, allows the separation of placement and logical groupings of physical
devices, and provides efficient use of resources.
The Virtual Chassis solution also offers the same high availability features as other Juniper Networks chassis-based
switches and routers, including graceful Routing Engine switchover (GRES) for hitless failover.
Hardware Requirements
- Distributed enterprise branch secure routers: Juniper Networks SRX Series Services Gateways
- Distributed enterprise branch secure switches: Juniper Networks EX Series Ethernet Switches
Software Requirements
- JUNOS Software Release 9.5 or later is required.

Implementation and Configuration Guidelines


This section enables solution implementation by describing the device configuration of the associated branch
network components and infrastructure, and then shows readers how to verify operation of the solution.

Configuring Branch Office with a Single SRX Series Device


To implement a link-level high availability deployment, each branch requires two WAN connections and two IPsec
VPN tunnels to each data center. Traffic is load-balanced across each pair of tunnels: whenever traffic is directed to
a given data center, sessions are load-balanced in a round-robin fashion across each IPsec tunnel going to that data
center. In turn, the tunnels are configured so that each tunnel uses a different egress link, balancing VPN traffic
across the upstream links.
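For the equal-cost tunnel routes to actually share load per session, a forwarding-table export policy is typically required as well; a minimal sketch (the policy name `load-balance` is an example, not from this guide) looks like this:

```shell
# Define a policy that enables load balancing across equal-cost next hops.
# Despite the "per-packet" keyword, branch SRX platforms apply it per flow.
set policy-options policy-statement load-balance term 1 then load-balance per-packet

# Export the policy to the forwarding table so both tunnel next hops are installed.
set routing-options forwarding-table export load-balance
```

Without such a policy, JUNOS installs only one of the equal-cost next hops in the forwarding table.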

[Figure: The branch SRX Series device on the trust network (10.16.2.0/24) terminates two tunnel interfaces across the Internet: st0.0 (10.255.1.5) to T1 (10.255.1.254) and st0.1 (10.255.2.5) to T2 (10.255.2.254). The branch advertises prefix 10.16.2.0/24 over interface Tunnel.1; the data center side advertises prefixes 172.18.8.0/21, 172.18.0.0/18, 192.168.4.0/24, and 192.168.5.0/24. The DC-A network is 172.18.8.0/21 and the NOC network is 192.168.4.0/24.]

Figure 6: Link-level redundant WAN connectivity architecture

Configuring Redundant IPsec Tunnels on a Single SRX Series Device


The previous figure shows a link-level redundancy configuration with a connection to a data center. Note that even
though multiple data centers might be used, from the branch high availability perspective the configuration is
identical; only the IPsec tunnel configurations and their routing change. For simplicity, only the IPsec configuration
to one of the data centers is shown. A sample configuration for setting up redundant IPsec VPN tunnels on an SRX
Series device follows:

[edit]
set security ipsec vpn-monitor-options interval 5
set security ipsec vpn-monitor-options threshold 5
# VPN monitoring is used to detect problems on tunnels. Because multiple
# tunnels are used, a single failure is not fatal.
set security ike policy preShared mode aggressive
set security ike policy preShared proposal-set standard


set security ike policy preShared pre-shared-key ascii-text $9$S2pyMXVb2aGiYg


set security ike policy preShared_2 mode aggressive
set security ike policy preShared_2 proposal-set standard
set security ike policy preShared_2 pre-shared-key ascii-text $9$S2pyMXVb2aGiYg
# Aggressive mode is used, so there is no need to know the local IP for the
# tunnel termination endpoint. Because of this, tunnels can only be initiated
# from the remote facility to the data center, not vice versa.
set security ike gateway DCA_1 ike-policy preShared
set security ike gateway DCA_1 address 1.2.0.6
set security ike gateway DCA_1 local-identity hostname SRX-A_1
set security ike gateway DCA_1 external-interface ge-0/0/0.0
set security ike gateway DCA_2 ike-policy preShared_2
set security ike gateway DCA_2 address 1.3.0.6
set security ike gateway DCA_2 local-identity hostname SRX-A_2
set security ike gateway DCA_2 external-interface ge-2/0/0.0
# The IPsec configuration for the tunnels follows
set security ipsec policy std proposal-set standard
set security ipsec vpn DCA_1 bind-interface st0.0
set security ipsec vpn DCA_1 vpn-monitor optimized
set security ipsec vpn DCA_1 ike gateway DCA_1
set security ipsec vpn DCA_1 ike no-anti-replay
set security ipsec vpn DCA_1 ike proxy-identity local 0.0.0.0/0
set security ipsec vpn DCA_1 ike proxy-identity remote 0.0.0.0/0
set security ipsec vpn DCA_1 ike proxy-identity service any
set security ipsec vpn DCA_1 ike ipsec-policy std

set security ipsec vpn DCA_1 establish-tunnels immediately


# Configuration for binding DCA_1 with tunnel interface st0.0
set security ipsec vpn DCA_2 bind-interface st0.1
set security ipsec vpn DCA_2 vpn-monitor optimized
set security ipsec vpn DCA_2 ike gateway DCA_2
set security ipsec vpn DCA_2 ike no-anti-replay
set security ipsec vpn DCA_2 ike proxy-identity local 0.0.0.0/0
set security ipsec vpn DCA_2 ike proxy-identity remote 0.0.0.0/0
set security ipsec vpn DCA_2 ike proxy-identity service any
set security ipsec vpn DCA_2 ike ipsec-policy std
set security ipsec vpn DCA_2 establish-tunnels immediately
# Configuration for binding DCA_2 with tunnel interface st0.1
set interfaces st0 unit 0 multipoint
set interfaces st0 unit 0 family inet mtu 1500
set interfaces st0 unit 0 family inet address 10.255.1.5/24
set interfaces st0 unit 1 multipoint
set interfaces st0 unit 1 family inet mtu 1500
set interfaces st0 unit 1 family inet address 10.255.2.5/24
# Set up tunnel interfaces st0.0 and st0.1
set routing-options static route 172.16.0.0/24 qualified-next-hop 10.255.1.254
set routing-options static route 172.16.0.0/24 qualified-next-hop 10.255.2.254
# Configure an equal-cost static route via both tunnel interfaces

The following command verifies the status of the tunnel interfaces configured above:
root@SRX-A# run show interfaces terse | match st
st0 up up
st0.0 up up inet 10.255.1.5/24
st0.1 up up inet 10.255.2.5/24
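Beyond the interface state, the IKE and IPsec security associations for both tunnels can also be checked from the CLI (standard JUNOS operational commands; both gateways, DCA_1 and DCA_2, should each show an active SA):

```shell
# List Phase 1 (IKE) security associations; one entry per remote gateway.
root@SRX-A# run show security ike security-associations

# List Phase 2 (IPsec) security associations; one pair per tunnel interface.
root@SRX-A# run show security ipsec security-associations
```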


Configuring Zones on an SRX Series Device


The following figure shows the zone configuration. VPN tunnels are part of a separate zone named the VPN zone.
This must be considered when designing security policies, because traffic going to the data centers (or other
branches) will exit through this zone.

[Figure: the SRX Series device with the trust network 10.16.2.0/24 behind it; two ISP uplinks in the untrust zone, ge-0/0/0.0 at 1.2.1.219/24 and ge-0/0/1.0 at 1.4.0.219/24; and tunnel interfaces st0.0 (10.255.1.5/24) and st0.1 (10.255.2.5/24) in the VPN zone, each carrying an IPsec tunnel to Data Center A.]

Figure 7: Security zones on an SRX Series device

A sample configuration for setting up security zones on an SRX Series device is listed in the following:

[edit]
set security zones functional-zone management interfaces fe-0/0/2.0
set security zones functional-zone management host-inbound-traffic system-services all
set security zones functional-zone management host-inbound-traffic protocols all
# Management zone configuration
set security zones security-zone trust host-inbound-traffic system-services all
set security zones security-zone trust host-inbound-traffic protocols all
set security zones security-zone trust interfaces fe-0/0/3.0 host-inbound-traffic system-services dhcp
set security zones security-zone trust interfaces fe-0/0/3.0 host-inbound-traffic system-services ping
# trust zone configuration
set security zones security-zone untrust host-inbound-traffic system-services all
set security zones security-zone untrust host-inbound-traffic protocols all
set security zones security-zone untrust interfaces lo0.0
set security zones security-zone untrust interfaces ge-0/0/0.0 host-inbound-traffic system-services dhcp
set security zones security-zone untrust interfaces ge-0/0/0.0 host-inbound-traffic system-services ping
set security zones security-zone untrust interfaces ge-0/0/1.0 host-inbound-traffic system-services dhcp
set security zones security-zone untrust interfaces ge-0/0/1.0 host-inbound-traffic system-services ping
# untrust zone configuration
set security zones security-zone VPN host-inbound-traffic system-services all
set security zones security-zone VPN host-inbound-traffic protocols all
set security zones security-zone VPN interfaces st0.0
set security zones security-zone VPN interfaces st0.1
# VPN zone configuration
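Zone membership can be confirmed from operational mode; a sketch of the check:

```
root@SRX-A# run show security zones
# st0.0 and st0.1 should be listed under zone VPN;
# ge-0/0/0.0, ge-0/0/1.0, and lo0.0 under untrust;
# fe-0/0/3.0 under trust; and fe-0/0/2.0 under the
# management functional zone.
```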


Configuring NAT on SRX Series


On SRX Series Services Gateways, NAT (Network Address Translation) is performed based on the egress interface.
Whenever traffic is routed through the interface connecting to the Internet, the source address is translated to the
interface's IP address. Similarly, whenever traffic is routed through the redundant Internet interface, that interface's
IP address is used to translate the source IP address of the traffic. As a result, there is no need to propagate
addresses between service providers.
A sample configuration for setting up NAT on an SRX Series device is listed in the following:

[edit]
set security nat source rule-set default from zone trust
set security nat source rule-set default to zone untrust
set security nat source rule-set default rule default match source-address 0.0.0.0/0
set security nat source rule-set default rule default match destination-address 0.0.0.0/0
set security nat source rule-set default rule default then source-nat interface
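Interface-based source NAT can be verified after passing traffic; a sketch using standard operational commands:

```
root@SRX-A# run show security nat source rule all
# Displays the "default" rule and its translation hit counters
root@SRX-A# run show security flow session
# Outgoing sessions should show the egress interface address
# as the translated source
```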

Configuring Branch Office with Redundant SRX Series Devices


Branch offices with device-level redundancy generally consist of two SRX Series devices that are connected to two
different networks:
Peer-to-Peer
Internet
IPsec tunnels are configured to each data center over both networks, much as in the link-level
redundancy scenario. The difference is that the tunnel interfaces terminating the IPsec
tunnels that traverse the peer-to-peer network are assigned lower metrics. Thus, whenever possible, traffic going to and
from the data centers travels through the peer-to-peer network. Figure 8 shows the architecture of the device-level
redundancy deployment.

[Figure: a medium-to-large branch office with two SRX Series devices, one connected to the private WAN and one to the Internet, both reaching the data centers; an EX Series Virtual Chassis forms the branch LAN, serving PoE access points, a local printer, PSTN connectivity, and a WX client.]

Figure 8: Device-level redundancy high availability architecture

Note: Each SRX Series device can terminate a pair of tunnels (one to each data center) because each is connected to
a different network.


Configuring SRX Series Chassis Cluster (JSRP) for Device-Level WAN High Availability
The SRX Series devices are deployed as an active/active chassis cluster in such a way that whenever a device or
tunnel fails, JSRP fails over to the other SRX Series device with an active tunnel. In this way, the peer-to-peer
network is normally preferred over the Internet, as long as the tunnels are active. Whenever a tunnel fails to any of
the data centers, traffic is rerouted to the secondary SRX Series device. Whenever the primary SRX Series device
is active, Internet traffic can be routed either by backhauling to the data centers or using the data link interface
connecting the SRX Series devices (see Figure 9 for detailed connections between SRX Series devices) to the
Internet connection on the secondary SRX Series device. Traffic, in turn, is translated to use the IP address of the
egress interface on the secondary SRX Series device as the source address. The peer-to-peer network is also used
to back up the Internet connection whenever the link between the secondary SRX Series device and the Internet fails.
One of the data centers advertises a default route over the peer-to-peer-transported IPsec tunnels. In this manner,
when the connection between the secondary SRX Series device and the Internet fails, the primary SRX Series device
selects the default route received through IPsec and consequently backhauls all of its Internet traffic to the data
center.
A sample configuration for setting up chassis cluster (JSRP) on an SRX Series device is listed in the following:
1. Prepare cluster mode on the first cluster member, SRX210-A.
Setting up the cluster and node ID is done in Operational mode and requires a reboot of the related devices:

root@SRX210-A> set chassis cluster cluster-id 1 node 0 reboot

The following message will appear after inputting the previous command:
Successfully enabled chassis cluster. Going to reboot now
It might take two to three minutes before the SRX210 device actually reboots. The following message will
appear when it reboots:
*** FINAL System shutdown message from root@srx210-a ***
System going down IMMEDIATELY
2. After the reboot, prepare the cluster configuration on SRX210-A:

{primary:node0}
root@SRX210-A> edit

The following warning will appear after entering the Configuration mode:
warning: Clustering enabled; using private edit
warning: uncommitted changes will be discarded on exit
Entering configuration mode
Configure the following on cluster node 0:

{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-1/0/0
set interfaces fab1 fabric-options member-interfaces ge-3/0/0
# set up data link interfaces between two SRXs
set chassis cluster redundancy-group 0 node 0 priority 100
# set up higher cluster node priority for primary SRX
set chassis cluster redundancy-group 0 node 1 priority 1
# set up lower cluster node priority for secondary SRX
commit
# commit the chassis cluster (JSRP) configuration


3. For SRX210 devices, you must connect devices back-to-back over a pair of Fast Ethernet connections. The
connection that serves as the control link must be the built-in controller port (fe-0/0/7 and fe-2/0/7 in this case)
on each device. The fabric link connection can be a combination of any pair of Gigabit Ethernet interfaces on the
devices, as illustrated in detail in Figure 9.
[Figure: SRX210-A and SRX210-B connected back-to-back. The control link runs between the built-in control ports fe-0/0/7 and fe-2/0/7; the fabric link connects fab0 on SRX210-A to fab1 on SRX210-B; ge-0/0/0 and ge-2/0/0 connect to WAN1 and WAN2; fe-0/0/0 serves as the management port; and ge-0/0/1 and ge-2/0/1 form the redundant Ethernet interface reth1 toward the EX Series switch.]

Figure 9: SRX Series chassis cluster (JSRP) configuration

4. Join the chassis cluster on SRX210-B by inputting the following command:

root@SRX210-B> set chassis cluster cluster-id 1 node 1 reboot

The following message displays after inputting the above command:


Successfully enabled chassis cluster. Going to reboot now
5. Verify the chassis cluster is in the correct status:

{primary:node0}
root@SRX210-A> show chassis cluster status
Cluster ID: 1
Node name Priority Status Preempt Manual failover

Redundancy group: 0 , Failover count: 1


node0 100 primary no no
node1 1 secondary no no

When you initialize an SRX Series device in chassis cluster mode, the system creates a redundancy group 0.
Redundancy group 0 manages the primacy and failover between the routing engines on each node of the cluster.
As is the case for all redundancy groups, redundancy group 0 can be primary on only one node at a time. The
node on which redundancy group 0 is primary determines which routing engine is active in the cluster. A node is
considered the primary node of the cluster if its routing engine is the active one.
The redundancy group 0 configuration specifies the priority for each node. Redundancy group 0 is primary, and
the routing engine is active on the node with the higher priority. By default, both nodes have the same priority
for redundancy group 0, but you can change the default setting to specify which node is primary for redundancy
group 0. Here is how redundancy group 0 primacy is determined:
If both nodes of a cluster are initialized at the same time and you have not changed the default setting for
redundancy group 0 node priority, node 0 takes precedence.
If you have not changed the default setting for redundancy group 0 node priority and one node of the cluster
is initialized before the other, the first node to be initialized takes precedence. In this case, the routing engine
on the first initialized node is the active one and the node is considered primary. (The primary node is not
necessarily node 0. If you boot node 1 before node 0, node 1's routing engine takes precedence.)
The other node is considered secondary. The secondary node's routing engine is synchronized with state
information from the primary node so that it is ready to take over if the primary node fails.

12 Copyright 2010, Juniper Networks, Inc.


IMPLEMENTATION GUIDE - Branch High Availability in the Distributed Enterprise

If you set the redundancy group 0 node priority, the routing engine on the node with the higher priority takes
precedence.
You cannot enable preemption for redundancy group 0. If you want to change the primary node for redundancy
group 0, you must do a manual failover.
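Because preemption cannot be enabled for redundancy group 0, changing its primary node requires a manual failover from operational mode, for example:

```
{primary:node0}
root@SRX210-A> request chassis cluster failover redundancy-group 0 node 1
# After the failover, clear the manual-failover flag so that
# automatic failovers can occur again:
root@SRX210-A> request chassis cluster failover reset redundancy-group 0
```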
6. Create a redundant Ethernet interface for connecting to EX Series Ethernet Switches:
To create redundant Ethernet interfaces, one or more redundancy groups numbered 1 through 255 need
to be set up. Up to eight redundant Ethernet interfaces can be created on an SRX Series chassis cluster. Each
redundancy group acts as an independent unit of failover and is primary on only one node at a time.
Each redundancy group can contain one or more redundant Ethernet interfaces. A redundant Ethernet interface
is a pseudo interface that contains a pair of physical Gigabit Ethernet interfaces or a pair of Fast Ethernet
interfaces, and it has two child links, one from each node. If a redundancy group is active on node 0, then the child links
of all the associated redundant Ethernet interfaces on node 0 are active. If the redundancy group fails over to
node 1, then the child links of all redundant Ethernet interfaces on node 1 become active.
When you configure a redundancy group, you must specify a priority for each node to determine the node on
which the redundancy group is primary. The node with the higher priority is selected as primary. The primacy
of a redundancy group can fail over from one node to the other. When a redundancy group fails over to the other
node, its redundant Ethernet interfaces on that node are active and their interfaces are passing traffic.
A sample configuration for setting up the redundant Ethernet interface on the SRX Series is listed as follows:
For the corresponding redundant interface configuration on EX Series switches, see the steps in the next section,
Configuring Device-Level LAN High Availability with EX Series Virtual Chassis.

{primary:node0}[edit]
set chassis cluster reth-count 2
# Set the number of redundant interfaces in the cluster
set interfaces reth1 redundant-ether-options redundancy-group 1
# Create redundant group 1 for redundant ethernet interface reth1
set interfaces reth1 description Connect-to-EX-switch
set interfaces reth1 vlan-tagging
set interfaces reth1 unit 163 vlan-id 163
set interfaces reth1 unit 163 family inet address 10.16.3.1/24
# Configure the redundant ethernet interface reth1
set security zones security-zone trust interfaces reth1.163 host-inbound-traffic system-services all
# Assign interface reth1.163 to trust zone with services enabled
set interfaces ge-0/0/1 gigether-options redundant-parent reth1
set interfaces ge-2/0/1 gigether-options redundant-parent reth1
# Join the redundant group on the physical interfaces
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/1 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-2/0/1 weight 255
# Track the physical interfaces in Redundancy Group 1
commit
# Commit the redundant Ethernet interface configuration
run show interfaces reth1 terse
run show chassis cluster interfaces
# Verify the redundant Ethernet interface is correctly defined


Configuring Device-Level LAN High Availability with EX Series Virtual Chassis


In medium-to-large branch offices, the EX4200 line can accommodate greater port densities by including
additional EX4200 switches to form a Virtual Chassis configuration. Virtual Chassis configurations can be
created either by connecting EX4200 switches with the dedicated rear-panel Virtual Chassis Ports (VCPs) or
through the optional front-panel two-port 10-Gigabit Ethernet or four-port Gigabit Ethernet uplink module.

[Figure: EX4200 switches interconnected through the dedicated rear-panel 10-Gigabit Ethernet VCPs and Gigabit Ethernet uplinks; up to 10 members, occupying slots 0 through 9, each act as a line card in the Virtual Chassis.]

Figure 10: Configuring EX4200 Virtual Chassis in a medium-to-large branch office

For more details about how to implement EX Series switches in branch offices, see the implementation guide
Deploying EX Series Switches in Branch Offices.

Enable Virtual Chassis Uplink Ports


1. To enable VCP on the uplink ports, the following command is required on both switches in JUNOS Software
operational mode.

root> request virtual-chassis vc-port set pic-slot 1 port 0 member 0

A single Virtual Chassis configuration allows up to 10 EX4200 switches to be interconnected and managed
as a single unit.

root> show virtual-chassis status


Virtual Chassis ID: 0019.e250.8240
Mastership Neighbor List
Member ID Status Serial No Model priority Role ID Interface
0 (FPC 0) Prsnt BM0207431981 ex4200-24t 128 Master* 1 vcp-0
1 vcp-255/1/0
1 (FPC 1) Prsnt BP0207452211 ex4200-48t 128 Backup 0 vcp-0
0 vcp-255/1/0
Member ID for next new member: 2 (FPC 2)

In the previous output, the Virtual Chassis configuration is formed through both the dedicated VCPs (vcp-0) and the
front-panel uplink module (vcp-255/1/0).
When EX4200 switches are deployed in a Virtual Chassis configuration, the member switches automatically elect
a master and backup routing engine. The master routing engine is responsible for managing the Virtual Chassis
configuration, while the backup is available to take over in the event a master fails. All other switches in a Virtual
Chassis configuration take on the role of a line card and are eligible as a master or backup routing engine if the
original master or backup were to fail.


2. Configure the Virtual Chassis mastership priority on EX Series Ethernet Switches:


There is a specific master election process when a Virtual Chassis configuration is formed. Upon boot up, all
members are considered eligible candidates and participate in the election.
The Master Election Decision Tree determines which switch becomes the master. The master and backup
routing engines are assigned based on the following criteria, in order:
Highest mastership priority (default 128, user configurable 1 through 255)
Master in the previous boot among eligible switches
Longest uptime among eligible members (if the uptime difference is more than 1 minute)
Lowest switch MAC address

root# set virtual-chassis member 0 mastership-priority 250
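A common practice (an assumption here, not stated above) is to assign the same highest priority to the intended backup member as well, so that a recovered master does not preempt the running one:

```
{master:0}[edit]
root# set virtual-chassis member 1 mastership-priority 250
# Matching member 0's priority of 250 avoids preemption when a
# failed master rejoins the Virtual Chassis
root# commit
```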

3. Configure the interfaces on the EX Series Ethernet Switches for connectivity to the SRX Series cluster.
The following figure shows an example connecting the EX4200 Virtual Chassis to redundant branch SRX Series
devices.

[Figure: SRX210-A and SRX210-B joined by the fabric data link (fab0 to fab1); ge-0/0/1 on SRX210-A and ge-2/0/1 on SRX210-B together form reth1, which connects to the EX Series switch.]

Figure 11: Connecting EX Series Virtual Chassis to redundant branch SRX Series Services Gateways

A sample configuration for setting up Ethernet interfaces on an EX Series device is listed in the following:

{master:0}[edit]
set interfaces ge-0/0/0 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members vlan163
set interfaces ge-1/0/0 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-1/0/0 unit 0 family ethernet-switching vlan members vlan163
# Allow interconnect vlan on EX trunk interfaces connecting to SRX cluster
set interfaces ge-0/0/12 description "UC Machines in Remote Branch"
set interfaces ge-0/0/12 unit 0 family ethernet-switching port-mode access
set interfaces ge-0/0/12 unit 0 family ethernet-switching vlan members vlan163
# Configure EX access downlink port connecting to the host
set vlans vlan163 vlan-id 163
set vlans vlan163 interface ge-0/0/12.0
# Configure the vlan and associate with the downlink access port
run show interfaces terse
run show vlans vlan163
# Verify that the vlan and interfaces are correctly configured


Connectivity Use Cases and Failover Scenarios


Because various traffic routing scenarios relate to different levels of high availability, enterprises
need to identify what level of high availability they want to achieve, and then deploy the appropriate connectivity for
internal and external traffic flow to support the branch office high availability requirements. The following are the three
main traffic flow scenarios over IPsec tunnels between branch offices and the data centers:
Internet Backhaul Only
Private WAN as Primary and Internet as Backup
Internet and Private WAN Split Tunneling
In the link-level redundancy deployment, all the traffic from the branch office typically passes through the redundant
IPsec tunnels on either a private WAN or the Internet to the data centers. Two traffic flow designs are normally
recommended for branch offices that deploy only link-level redundancy.

Internet Backhaul Only


Normal State
In the Internet backhaul only scenario, all traffic from the branch offices passes through the IPsec tunnels on either
a private WAN (including managed services) or the Internet to the data centers. The Internet traffic from the branch
offices is backhauled into the data center and then flows to the Internet from the data center. This design generally
requires only point-to-point connectivity and primarily applies to small branch offices, including
retail stores. It also applies to SOHO and remote offices where link-level redundancy is needed but saving
costs is also paramount.

[Figure: SOHO and branch office SRX Series devices connect through both the private WAN (managed services) and the Internet to M Series routers at the HQ/campus and data center; the data center and NOC include SA Series, IC Series, ISG Series with IDP Series, STRM500, and NSMXpress. All branch Internet traffic is backhauled through the IPsec tunnels to the data center.]

Figure 12: Traffic flow example in the Internet Backhaul Only use case


Failover Scenarios
The failover in the Internet Backhaul Only scenario is relatively simple and straightforward. In this use case, each
branch has two WAN connections and two IPsec VPN tunnels to each data center if there are multiple data
centers in the enterprise.
Traffic is load-balanced across each pair of tunnels through an equal-cost multipath (ECMP) routing configuration.
Whenever traffic is directed to a given data center, sessions are load-balanced in a round-robin fashion across each
IPsec tunnel going to that data center. In case a link or tunnel fails, the ECMP configuration automatically detects
the failure and withdraws the route whose next hop is the failed link or tunnel from the SRX Series device's routing
table. Consequently, all the traffic on the failed link or tunnel is forced through the remaining active tunnel
to the data center. Because no interface source NAT is deployed in this use case, sessions are normally
retained and no traffic loss occurs.
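Note that Junos load-balances across equal-cost paths only when a load-balancing policy is exported to the forwarding table; a minimal sketch (the policy name is illustrative):

```
[edit]
set policy-options policy-statement ecmp-lb then load-balance per-packet
# Despite the keyword, "per-packet" results in per-flow load
# balancing on SRX Series platforms
set routing-options forwarding-table export ecmp-lb
```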

Private WAN as Primary and Internet as Backup


Normal State
In this deployment, the primary route for all the traffic from the branch offices goes to the data centers through
IPsec VPN tunnels on top of the private WAN links. The secondary route is set up as an on-demand
circuit or dial-up Internet connection. In the normal state, the Internet traffic from the branch offices is backhauled
through the primary route to the data centers and then flows from the data center to its destination on the Internet.
When traffic fails over to the backup link, the Internet traffic can instead go directly to the
Internet, reducing the payload sent to the data center and improving performance given the bandwidth limit
of the Internet backup connection. The IPsec VPN tunnels that connect the branch offices and the data center run
primarily on T1 or MPLS L2/L3 VPN links and secondarily on 3G wireless, ISDN, xDSL, and so on. This design profile requires
security along with integrated VPN and routing capabilities in the remote facility. The most relevant use case applies
where both cost effectiveness and security are critical.

[Figure: the same branch-to-data-center topology as in Figure 12; here the IPsec tunnels over the private WAN carry the primary route and the Internet connection serves as backup.]

Figure 13: Traffic flow example in the Internet as Backup use case


Failover Scenarios
In case the primary route fails in the Internet as Backup use case, the data center routes will be advertised through
the remaining tunnel on top of the Internet link. All the internal traffic from the branch office to the data center
will go through the IPsec tunnel on the Internet link and flow into the respective data center. Given the
bandwidth limit that normally applies to the Internet backup link, it is recommended that the branch office send its
Internet traffic directly to the Internet to increase performance in the failover situation. This design can be achieved
by configuring a default static route that uses the primary tunnel to the data center as a qualified-next-hop on the
SRX Series device with a lower metric than the default route from the Internet interface. When the primary tunnel
to the data center fails, this default route will be withdrawn from the SRX Series device's routing table. All the
Internet traffic will take the secondary default route and flow directly into the Internet. It is also recommended that
a certain level of security be implemented locally at the branch office because of this failover behavior.
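A sketch of the routing described above, with illustrative next hops (10.255.1.254 as the primary tunnel peer and 1.2.1.1 as the ISP gateway):

```
[edit]
# Preferred default route through the IPsec tunnel to the data center
set routing-options static route 0.0.0.0/0 qualified-next-hop 10.255.1.254 metric 5
# Backup default route through the local Internet interface
set routing-options static route 0.0.0.0/0 qualified-next-hop 1.2.1.1 metric 10
```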

Internet and Private WAN Split Tunneling


Normal State
All the internal traffic from the branch offices goes through both the primary and backup routes to the data centers,
while the Internet traffic flows directly from the branch office to the Internet. IPsec tunnels that connect the branch
and the data center reside on T1 or MPLS L2/L3 VPN links, or on Internet connections such as 3G wireless, ISDN, xDSL, and so on.
Integrated security, VPN, and routing capabilities in the branch devices are critical in this use case, because traffic
is destined both for the data center (over the VPN tunnels) and for the Internet, and some of it travels through IPsec
tunnels over the Internet into the data center.
This is the most comprehensive scenario, in which the branch office must take advantage of best-in-class security
and connectivity technologies. All medium-to-large branch offices can appropriately utilize this scenario because of
its cost effectiveness and business functionality, for example, stock and bank transactions.

[Figure: the same branch-to-data-center topology as in Figure 12; internal traffic travels over IPsec tunnels on both the private WAN and the Internet, while branch Internet traffic flows directly to the Internet.]

Figure 14: Traffic flow example in the Internet and Private WAN Split Tunneling use case


Failover Scenarios
In the normal state, all the internal traffic is load-balanced over all the tunnels on top of both the private WAN and
Internet connections, while the Internet traffic flows directly from the branch office to the Internet.
In case the tunnel on the private WAN fails, the data center routes will be advertised through the remaining tunnel
on top of the Internet link. All the internal traffic from the branch office to the data center will go through the IPsec
tunnel on the Internet link and flow into the respective data center. The Internet traffic continues along the same
path, flowing directly from the branch office to the Internet.
In case the Internet connection fails, the internal traffic will go through the private WAN connection to the data
center, as the data center routes will only be advertised over this connection. The Internet traffic now takes the
private WAN connection and is also backhauled through the data center to the Internet. This design can be achieved
by configuring a default static route that uses the primary tunnel to the data center as a qualified-next-hop on the
SRX Series device with a higher metric than the default route from the Internet interface. When the Internet
connection fails, the default route using the Internet connection as a next hop will be withdrawn from the SRX Series
device's routing table. All the Internet traffic will take the default route to the data center and be backhauled through
the data center into the Internet.
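The corresponding routes can be sketched with illustrative next hops (1.2.1.1 as the ISP gateway and 10.255.1.254 as the tunnel peer); note the metric ordering is the reverse of the Internet as Backup case:

```
[edit]
# Preferred default route through the local Internet interface
set routing-options static route 0.0.0.0/0 qualified-next-hop 1.2.1.1 metric 5
# Backup default route through the IPsec tunnel to the data center
set routing-options static route 0.0.0.0/0 qualified-next-hop 10.255.1.254 metric 10
```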

Summary
Juniper Networks offers a set of compelling solutions to meet the needs of branch office deployments. The different
levels of high availability design with Juniper Networks SRX Series Services Gateways and EX Series Ethernet
Switches, both running industry-leading JUNOS Software, address the requirements of today's distributed
enterprises, while ensuring that businesses continue to gain efficiencies in CapEx and OpEx.
Implementing different levels of high availability at all kinds of branch offices can be achieved by using Juniper
Networks SRX Series Services Gateways as integrated security, VPN, routing, and switching devices, and by
using the EX Series with Virtual Chassis technology as branch local switches. By following the implementation
guidelines addressed here, network administrators can better understand the different levels of high availability
deployments, particularly with Juniper's innovative chassis cluster and Virtual Chassis technologies, in building
reliable branch office networks.

About Juniper Networks


Juniper Networks, Inc. is the leader in high-performance networking. Juniper offers a high-performance network
infrastructure that creates a responsive and trusted environment for accelerating the deployment of services and
applications over a single network. This fuels high-performance businesses. Additional information can be found at
www.juniper.net.

Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000, Fax: 408.745.2100, www.juniper.net
APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King's Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636, Fax: 852.2574.7803
EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600, EMEA Sales: 00800.4586.4737, Fax: 35.31.8903.601
To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.

Copyright 2010 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos,
NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other
countries. All other trademarks, service marks, registered marks, or registered service marks are the property of
their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper
Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

8010017-003-EN Sept 2010 Printed on recycled paper
