Chuck Semeria
John W. Stewart III
Marketing Engineers
List of Tables
Table 1: Traditional Router Cores vs. ATM Cores
Table 2: Label-switching Forwarding Table
[Figure: Legacy router software architecture, in which packet forwarding, routing protocols, route calculation, policy enforcement, interface management, memory management, and the user interface all share a single CPU.]
Because legacy routing systems were developed with the assumption that they would support
time-critical tasks (that is, packet forwarding), the designers had to develop an operating
environment that had elements of a real-time operating system. Even though they realized that
general-purpose operating systems provided extremely high levels of system reliability and
stability, they understood that there were a number of inherent performance inefficiencies
resulting from the overhead introduced by memory management and the duplication of effort
across multiple processes. The designers of these legacy routing systems decided that they
should enhance overall system performance by consolidating the code and eliminating the
duplication, even if this meant creating a single, monolithic code base.
From a historical perspective, this decision was probably the best compromise given the
technologies of the time. However, to achieve their efficiencies, legacy routing systems
incur significant costs, including the following:
Assume that one part of the monolithic program fails. For example, a task has a memory
leak or an error that causes it to write over another task’s code or data structures. These
types of errors can easily force other tasks to fail and eventually crash the entire operating
system. The only way to recover from this type of failure is to reboot the entire system.
The single monolithic program is required to run in a real-time mode to support
packet-forwarding requirements. Initially, these operating systems allowed packet
forwarding to have priority over all other system tasks. This meant that if the router
became extremely busy forwarding traffic, there might not be enough CPU cycles
remaining for it to complete any outstanding peer updates, hello time responses, or routing
table calculations. This introduces instability into the network, because routing and control
tasks are not completed in a timely manner, resulting in the loss of routing adjacencies and
line protocols.
The overall software architecture becomes so enormous that it loses its flexibility,
scalability, and stability. Modifications become extremely difficult to make because the act
of adding a single new feature can impact the entire code base. For example, has everything
been included that is required to provide a stable implementation? Does the build contain
code that is unnecessary for the application but has bugs that eventually cause the system
to crash? In addition, the size and complexity of the code determine how rapidly a vendor
can issue new software releases to correct acute internetworking problems or add
important new features. Finally, testing a monolithic code base is extremely difficult. No
laboratory can accurately reproduce the conditions found on the global Internet, and testing
is further complicated by the need to take the gigantic code base and then test
subsets of it in isolation. This challenge can easily overwhelm even a well-designed testing
process.
As we approach the millennium, legacy routing software architectures based on real-time,
monolithic code bases are ill-equipped to deliver the rapid feature development and stable
operation required in the Internet core. Today, forwarding traffic in real time over
high-performance optical interfaces mandates the deployment of hardware-based forwarding
engines. As a result, the next generation of routing software does not need to deal with the
competition of resources between packet forwarding and high-level system functions. The
effectiveness of hardware-based forwarding engines permits the routing software to execute in
a general-purpose operating system environment that provides greater reliability, scalability,
availability, and performance for mission-critical applications.
[Figure: The Routing Engine, a CPU-based subsystem that runs the JUNOS software.]
Routing Engine
The Routing Engine is responsible for executing the routing processes for the entire system. It
is powered by a processor that provides sufficient compute cycles to support the required
functionality and operate in any networking environment. The Routing Engine runs a version
of FreeBSD that has been customized by the Juniper Networks engineering team for robust
operation under heavily loaded conditions.
Routing protocols, interface management, chassis management, SNMP management, system
security, and the user interface all interact with the operating system as subsystems. Each
program executes as an independent process, complete with its own memory protection. This
removes many opportunities for runaway applications to corrupt each other or the kernel.
Figure 3 provides an overview of the major software design elements of JUNOS.
The JUNOS routing table contains the routes learned from neighbors and through static
configuration. The forwarding table is derived from the routing table and contains an index of
IP prefixes and Multiprotocol Label Switching (MPLS) labels that are actively used to associate
a prefix or label with an outgoing interface. The Packet Forwarding Engine uses the contents of
the forwarding table, not the routing table, to make its ultimate forwarding decision.
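The forwarding decision described above is a longest-prefix match against the forwarding table. The sketch below illustrates the idea in Python; the prefixes, interface names, and data structure are illustrative assumptions, not the actual JUNOS implementation.

```python
import ipaddress

# Hypothetical forwarding table derived from a routing table: each entry
# associates an IP prefix with an outgoing interface. Entries are invented
# for illustration.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "so-0/0/0",
    ipaddress.ip_network("10.1.0.0/16"): "so-0/0/1",
    ipaddress.ip_network("0.0.0.0/0"): "so-0/0/2",   # default route
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific covering prefix wins."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (net for net in forwarding_table if addr in net),
        key=lambda net: net.prefixlen,
    )
    return forwarding_table[best]

print(lookup("10.1.2.3"))   # matched by 10.1.0.0/16 -> so-0/0/1
print(lookup("10.9.9.9"))   # matched by 10.0.0.0/8 -> so-0/0/0
print(lookup("192.0.2.1"))  # falls through to the default -> so-0/0/2
```

The key property is that a destination covered by several prefixes is forwarded using the most specific one, which is why the Packet Forwarding Engine consults this derived table rather than the full routing table.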
Routing adjacencies never go down regardless of the load on the system. This prioritization of
control traffic prevents failures from cascading through the network, because it ensures that
links and routing adjacencies stay up no matter what else is happening on the system.
Routing Protocols
The soundness of the implementations of the routing protocols is one of the critical factors
governing the successful operation of a service provider’s network. Stability and performance
are essential features for an Interior Gateway Protocol (IGP) to permit the management of
traffic flows within a service provider’s network. The soundness and scalability of the Exterior
Gateway Protocol (EGP) are critical for providing the connectivity and control that are required
between different service provider networks.
For example, assume that a large ISP provides Internet access to a second-tier ISP that
advertises several hundred routes into the large ISP (Figure 5). The large ISP configures policy
rules so that it accepts only the routes that it expects to receive from the smaller ISP. When the
smaller ISP gets a new customer, it informs the large ISP that it will be forwarding an
additional set of prefixes. The large ISP updates the list of routes that it will accept from the
smaller ISP by modifying its routing policy filtering rules.
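The policy in this example amounts to checking each advertised route against an explicit accept list. The sketch below shows that logic in Python; the prefixes and function names are illustrative assumptions, not an actual routing-policy configuration.

```python
import ipaddress

# Hypothetical accept list maintained by the large ISP for routes advertised
# by the smaller ISP. Prefixes are documentation examples, not real routes.
accepted = {
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
}

def accept_route(prefix: str) -> bool:
    """Accept an advertised route only if it appears on the configured list."""
    return ipaddress.ip_network(prefix) in accepted

advertised = ["198.51.100.0/24", "192.0.2.0/24", "203.0.113.0/24"]
kept = [p for p in advertised if accept_route(p)]
print(kept)  # the unexpected 192.0.2.0/24 is filtered out

# When the smaller ISP signs a new customer, the large ISP updates the list:
accepted.add(ipaddress.ip_network("192.0.2.0/24"))
print(accept_route("192.0.2.0/24"))  # now accepted
```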
[Figure 5: A small ISP advertising 200 customer routes to a large ISP, which connects to the Internet.]
[Figure: Lookup time versus number of items for a tree search, growing as O(log n).]
Traffic Engineering
An ISP must provide a network that is capable of carrying its customers’ traffic. If this standard
is not met, the customers will be unhappy with the service and will find a new provider. At a
very low level, meeting this challenge requires that the ISP provision some number of circuits
of some bandwidth over some geography. In other words, the ISP must deploy a physical
topology that can meet the needs of the customers connected to the network.
Once the physical topology exists, the task of mapping traffic onto that topology must be
tackled. In the past, mapping traffic onto a physical topology was not approached in a
particularly scientific way. Instead, the mapping simply took place as a by-product of routing
configurations. The weaknesses in this haphazard mapping were often resolved by simply
overprovisioning bandwidth. As ISP networks grow larger, as the circuits supporting IP grow
faster, and as the demands of customers become greater, the mapping of traffic onto physical
topologies needs to be approached in a fundamentally different way. The offered load must be
supported in a controlled way and in an efficient manner. This mapping of traffic onto a
physical topology is called “traffic engineering” and is currently a hot topic inside of ISPs and
the IETF.
Did the ISP consider the lack of bandwidth a significant enough threat to its business to
require immediate resolution, even if the solution meant a serious overhaul of the core of
its network?
As we will see, the ISPs that chose to migrate to an ATM core thrived and continued to
experience growth. Meanwhile, the ISPs that remained with a traditional router core found it
harder to grow because of the late delivery and poor performance of OC-3 SONET
interfaces for routers. In the next several sections, we’ll discuss the advantages and
disadvantages of each of these choices.
[Figure: A traditional router core, with routers (R1, R2, R3) and switches (S) spanning San Francisco, Denver, Phoenix, Dallas, New Orleans, Atlanta, Washington D.C., New York, and Chicago.]
[Figure: An ATM-based core, with routers in San Francisco, Washington D.C., and Dallas interconnected across a core of ATM switches.]
ATM PVCs provide a tool for supporting precision traffic engineering. The ISP determines the
path for the PVCs based on measured traffic patterns so that traffic flows are distributed across
different physical links. Each PVC is provisioned so that it is able to accommodate the
anticipated load. As the network’s traffic matrix evolves over time, the underlying physical
path supporting a particular PVC can be modified to accommodate shifting traffic loads across
the physical links.
After the PVCs are installed in the switched infrastructure, they are merged into the IP
network by running the IGP across each PVC. Between any two routers, the IGP metric for the
primary PVC is set lower than that of the backup PVC so that the primary is preferred. This
guarantees that the backup PVC is used only when the primary PVC is not available.
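This metric scheme can be sketched in a few lines. The example below is a minimal illustration of the preference rule, assuming invented metric values; it is not how an IGP actually stores its topology.

```python
# Minimal sketch of the primary/backup PVC scheme described above: the
# primary PVC carries a lower IGP metric than the backup, so shortest-path
# routing chooses the backup only when the primary is down. Metrics invented.
pvcs = [
    {"name": "primary", "metric": 10, "up": True},
    {"name": "backup", "metric": 100, "up": True},
]

def best_pvc(pvcs):
    """Pick the lowest-metric PVC among those currently available."""
    candidates = [p for p in pvcs if p["up"]]
    return min(candidates, key=lambda p: p["metric"])["name"]

print(best_pvc(pvcs))   # primary wins while it is up
pvcs[0]["up"] = False   # primary PVC fails
print(best_pvc(pvcs))   # traffic shifts to the backup
```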
A full mesh of n routers requires n(n - 1) unidirectional PVCs: 5 routers need 5(4) = 20 PVCs, while 6 routers need 6(5) = 30.
The “n-squared” problem is also evidenced by the stress that a full-mesh of PVCs places on
routing protocols. Routing with a full-mesh of PVCs works over smaller networks. However,
with large networks there are significant scaling problems. The flooding of LSPs becomes
inefficient, each router has too many adjacencies with logical neighbors, and the Dijkstra
calculation becomes inefficient because of the large number of logical links.
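The quadratic growth behind the "n-squared" problem is easy to see numerically. The short sketch below simply evaluates n(n - 1) for a few network sizes; the sample sizes are arbitrary.

```python
# The "n-squared" growth described above: a full mesh of n routers needs
# n * (n - 1) unidirectional PVCs, so every added router contributes
# 2 * (n - 1) new PVCs to provision and manage.
def full_mesh_pvcs(n: int) -> int:
    return n * (n - 1)

for n in (5, 6, 10, 20):
    print(n, "routers ->", full_mesh_pvcs(n), "PVCs")
# 5 -> 20, 6 -> 30, 10 -> 90, 20 -> 380
```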
[Table 1: Traditional Router Cores vs. ATM Cores — advantages and disadvantages of each approach.]
Back in 1994, ISPs were simply looking to obtain more bandwidth so they could handle
increasing amounts of network traffic. ISPs that decided to pursue an ATM-based core quickly
discovered that the ATM core provided them with two critical features: the ability to perform
traffic engineering to equalize the traffic loads across the network, and per-PVC statistics to
count traffic. ISPs have become very comfortable with the level of control that ATM-based
cores provide when compared to traditional routed cores. Today, ISPs are not willing to give up
the control that ATM provides. Despite the numerous limitations of ATM (that is, the cell tax,
the “n-squared” problem, and the cost of managing two separate networks), ISPs understand
that they require traffic-engineering capabilities similar to ATM to successfully run their
networks.
Juniper Networks believes that Multiprotocol Label Switching (MPLS) is the emerging solution
to support traffic engineering in large service provider networks. It combines the advantages
of router-based and ATM-based cores, while eliminating the disadvantages. A few of the
benefits offered by MPLS include:
Ability to precisely control the use of valuable resources during periods of rapid growth
Stability under congestion and failure modes
A foundation for ISPs to offer value-added services
This is an area where Juniper Networks has been extremely proactive through its efforts with
the MPLS and RSVP Working Groups of the IETF. Juniper Networks has also gained
considerable practical experience with MPLS based on field trials with several major ISPs.
[Figure: An LSP routed across the nationwide topology spanning San Francisco, Denver, Phoenix, Dallas, New Orleans, Atlanta, and Washington D.C.]
An LSR that receives an IP packet can choose to forward it along an LSP by wrapping an MPLS
header around the packet and forwarding it to another LSR. The labeled packet will be
forwarded along the LSP by each LSR until it reaches the end of the LSP, at which time the
MPLS header will be removed and the packet will be forwarded based on Layer 3 information
like the IP destination address. The important point here is that the path of the LSP is not
limited to what the IGP would choose as the shortest path to reach the destination IP address.
Figure 13 depicts the operation of the label-swapping algorithm utilized by an LSR. When a
packet is received on Interface #4 of the LSR with a Label = 36, the LSR uses the outbound set
of information to forward the frame. In this example, the LSR builds a new packet with a Label
= 19 and forwards the packet over Interface #5.

Table 2: Label-switching Forwarding Table

In Interface   In Label   Out Interface   Out Label
2              25         5               185
2              256        3               735
4              36         5               19
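The label-swapping step reduces to an exact-match lookup keyed on the incoming interface and label. The sketch below mirrors the table rows above; the dictionary layout is an illustrative assumption, not the actual LSR data structure.

```python
# Hypothetical label-switching forwarding table, keyed on
# (in-interface, in-label) and yielding (out-interface, out-label).
# Rows match the forwarding-table example in the text.
label_table = {
    (2, 25): (5, 185),
    (2, 256): (3, 735),
    (4, 36): (5, 19),
}

def swap(in_if: int, in_label: int):
    """Rewrite the MPLS label and select the outbound interface."""
    out_if, out_label = label_table[(in_if, in_label)]
    return out_if, out_label

print(swap(4, 36))  # (5, 19): the packet leaves Interface #5 with Label = 19
```

Because the lookup is an exact match rather than a longest-prefix match, it can be performed in constant time regardless of table size, which is part of MPLS's original appeal.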
Recall that the essence of traffic engineering is mapping traffic onto a physical topology. This
means that the crux of the issue for doing traffic engineering with MPLS is determining the
path for LSPs. The Juniper Networks implementation of MPLS supports a number of different
ways to route an LSP:
Calculate the full path for the LSP off-line and statically configure all LSRs in the LSP with
the necessary forwarding state. This is analogous to how some ISPs are currently using
ATM.
Calculate the full path for the LSP off-line and statically configure the head-end LSR with
the full path. The head-end LSR then uses the Resource Reservation Protocol (RSVP)
as a dynamic signaling protocol to install forwarding state in each LSR. Note that RSVP is
being used only to install the forwarding state, and it does not reserve bandwidth or
provide any assurance of minimal delay or jitter. The Juniper Networks engineers were
involved in specifying the new Label Object, Explicit Route Object, and Record Route
Object for RSVP that allow it to operate as an LSP set-up protocol.
Calculate a partial path for the LSP off-line and statically configure the head-end LSR with
a subset of the LSRs in the path. The partial path that is specified can include any
combination of strict and loose routes. For example, imagine an ISP has a topology that
includes two east-west paths across the country: one in the north through Chicago and one
in the south through Dallas. Now imagine that the ISP wants to establish an LSP between a
router in New York and a router in San Francisco. The ISP could configure the partial path
for the LSP to include a single loose-routed hop of an LSR in Dallas and the result is that the
LSP will be routed along the southern path. The head-end LSR uses RSVP to install the
forwarding state along the LSP just as above.
Configure the head-end LSR with just the identification of the tail-end LSR. In this case,
normal IP routing is used to determine the path of the LSP. This configuration doesn’t
provide any value in terms of traffic engineering, though the configuration is easy and it
might be useful in situations where services like Virtual Private Networks (VPNs) are
needed.
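The partial-path option above can be pictured as a list of hops, each marked strict or loose. The sketch below is a purely illustrative model of that idea, reusing the router names from the ISP example; it is not an actual RSVP Explicit Route Object encoding.

```python
# Illustrative model of a partial explicit route: a list of (hop, mode)
# pairs, where "strict" hops must be directly adjacent and "loose" hops
# may be reached via any IGP-routed path. A single loose hop through
# Dallas pins the New York -> San Francisco LSP to the southern path.
explicit_route = [("Dallas", "loose")]

def describe(ero):
    """Render each hop constraint as a human-readable line."""
    lines = []
    for hop, mode in ero:
        if mode == "loose":
            lines.append(f"reach {hop} via any IGP path, then continue")
        else:
            lines.append(f"next hop must be {hop} directly")
    return lines

for line in describe(explicit_route):
    print(line)
```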
In all these cases, any number of LSPs can be specified as backups for the primary LSP. If a
circuit on which the primary LSP is routed fails, the head-end LSR will notice because it will
stop hearing RSVP messages from the remote end. If this happens, the head-end LSR can call
on RSVP to create forwarding state for one of the backup LSPs.
MPLS Benefits
Previously, it was suggested that any emerging solution providing traffic engineering across
the optical Internet must combine the advantages of ATM and routed cores while eliminating
the disadvantages. Let’s conclude this section by examining how well MPLS meets this
challenge.
An MPLS core fully supports traffic engineering via LSP configuration. This permits the ISP
to precisely distribute traffic across all of their links so the trunks are evenly used.
In an MPLS core, the per-LSP statistics reported by the LSRs provide exactly the type of
information required for configuring new traffic engineering paths and also for deploying
new physical topologies.
In an MPLS core, the physical topology and the logical topology are identical. This
eliminates the “n-squared” problem associated with ATM networks.
The lack of a cell tax in an MPLS core means that the provisioned bandwidth is used much
more efficiently than in an ATM core.
An MPLS core converges the Layer 2 and Layer 3 networks required in an ATM-based core.
The management of a single network reduces costs, and permits routing and traffic
engineering to occur on the same platform. This simplifies the design, configuration,
operation, and debugging of the entire network.
MPLS support for a dynamic protocol, such as RSVP, simplifies the deployment of traffic
engineered LSPs across the network.
MPLS provides the foundation for ISPs to offer value-added services.
Future MPLS support for constraint-based routing achieves the same control as
more-manual traffic engineering but with less human intervention because the network
itself participates in LSP calculation.
User Interface
The user interface allows network personnel to interact with and configure JUNOS. Juniper
Networks has taken extraordinary effort to build an efficient human interface that minimizes
the chance of configuration errors.
[Figure: Access paths to the JUNOS software — in-band via Telnet/SSH over the production interfaces, and out-of-band via the 10/100 Ethernet management interface (Telnet/SSH), the serial console interface (terminal), and the serial auxiliary interface (modem).]
The CLI is used to perform various tasks, including configuring the JUNOS system,
restarting system processes, and monitoring the operation of the routing protocols. The CLI
supports consistent command names, on-line help, and a command-completion feature.
The JUNOS software makes it possible for weeks of configuration changes to be archived, thus
providing the ability to track the history of a router’s configuration. Legacy routing systems
simply log the fact that the configuration was updated by a specific user; they do not provide
the tools to identify the specific modifications or the ability to return to a stable configuration.
[Figure: JUNOS management architecture — the CLI and scripts communicate with the Management Process, which sits alongside the configuration database, the routing process, the network interfaces process, and the kernel.]
In the Juniper Networks design, the CLI communicates with the Management Process through
an exchange of well-formatted command strings. This arrangement permits the CLI to be
relatively simple because all the intelligence resides in the Management Process. The exchange
of these messages allows the CLI to understand which commands the JUNOS software
considers valid and invalid. If the CLI attempts to do something that the Management Process
considers incorrect, the Management Process will inform the CLI. However, there is no reason
that a Web browser or a customer-written script could not replace the CLI. The only
prerequisite is that the alternative user interface “speak” the same well-formatted strings that
the Management Process expects.
With legacy routing systems, the CLI is the only interface that supports device configuration. If
a user desires to write a script, the script must be written so that it speaks to the CLI.
Experience has shown that one of the most difficult interfaces to automate is one that was
originally designed to be interactive. For example, a typical CLI presents the script with a
prompt; the script enters a command and signals the <Enter> key; the system then emits
output that must be parsed looking for a prompt before the next command can be entered. In
the JUNOS software, the output from the Management Process is ideal for parsing by a
computer, as opposed to a legacy CLI that is optimized for humans to read. Juniper Networks
provides not only a powerful CLI for humans, but also the Management Process that can be
leveraged to support other configuration models.
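The contrast above is between screen-scraping prompt-driven output and parsing output designed for machines. The sketch below shows how trivially a structured line parses; the key=value format is an invented illustration, not actual Management Process output.

```python
# Hypothetical machine-oriented output line: self-delimiting key=value
# fields need no prompt detection or column guessing. The format and
# field names are invented for illustration.
structured = "interface=so-0/0/0 status=up packets=12345"

def parse_structured(line: str) -> dict:
    """Split whitespace-separated key=value fields into a dictionary."""
    return dict(field.split("=", 1) for field in line.split())

print(parse_structured(structured))
# {'interface': 'so-0/0/0', 'status': 'up', 'packets': '12345'}
```

A script consuming human-oriented CLI output would instead have to detect prompts, strip banners, and infer field boundaries from column alignment, all of which break when the display format changes.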
System Security
Strong security is a must for any system running in the core of an ISP network. The number of
Internet threats that the average customer and ISP are exposed to has been growing
constantly. As a result, network systems must support inherent security precautions that
safeguard the operation, service, and functionality of all supported services. This section
provides a brief discussion of just some of the security features supported by the JUNOS
software.
Denial-of-Service Attacks
Denial-of-service attacks are those which prevent access by authorized users. The attacker
floods the system with traffic that blocks entry by others or consumes all available resources
such as memory and CPU. The JUNOS software is designed so that traffic that is required to go
to the Routing Engine (such as pings, Telnet, routing protocol updates, and keepalives) is
categorized and placed into different priority queues. Routing protocol updates and link-level
keepalives receive the highest priority. This enhances network stability because routing
protocol adjacencies and line protocols should never go down regardless of the load on the
system.
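The classification scheme above can be sketched as a priority queue: control traffic drains before best-effort traffic whenever the Routing Engine is busy. The priority values and traffic classes below are illustrative assumptions, not the actual JUNOS queueing configuration.

```python
import heapq

# Hypothetical priority classes for traffic destined to the Routing Engine:
# routing updates and keepalives outrank management and diagnostic traffic.
PRIORITY = {"routing-update": 0, "keepalive": 0, "telnet": 1, "ping": 2}

queue = []
arrivals = ["ping", "telnet", "routing-update", "ping", "keepalive"]
for seq, kind in enumerate(arrivals):
    # The sequence number breaks ties so equal-priority packets stay FIFO.
    heapq.heappush(queue, (PRIORITY[kind], seq, kind))

served = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(served)
# ['routing-update', 'keepalive', 'telnet', 'ping', 'ping']
```

Even if a flood of pings arrives first, the routing update and keepalive are served ahead of them, which is what keeps adjacencies alive under a denial-of-service load.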
Network Management
The JUNOS software is designed with an open-systems architecture. Management of the
device is accomplished via standard tools based on the IP protocol suite. These tools include,
but are not limited to, SNMP, secured TCP connections, Telnet, FTP, ICMP, and NTP. Our
fundamental approach is to fit into the typical model that ISPs are familiar with.
Management Console
The command-line interface (CLI) is fully accessible through the management console and the
management Ethernet interface. All information and tools needed for management of the
device are available from the management console. These tools include, but are not limited to,
protocol trace logs, system availability information, configurable debugging, Telnet,
traceroute, ping, and full CLI access.
SNMP Management
SNMP is required to integrate into standards-based network management and monitoring
systems. Juniper Networks supports industry-defined standard MIBs and provides private
MIB definitions when necessary. An example of this is the Juniper Networks chassis MIB,
which describes the physical status of the system, including inventory support. When private
MIBs are defined, industry-standard conventions are used so that the MIBs can be quickly
incorporated into standards-based SNMP management systems.
SNMPv1 and SNMPv2 have relatively poor security models. Security is typically controlled
through a “community string,” which is a clear-text identifier that is passed with every packet
across the network. Because this security scheme is so easy to subvert, configuration
operations and most “operator” class operations (for example, clear a routing protocol
adjacency) are not supported under SNMPv1 and SNMPv2. However, system professionals
can still access all of the statistics that are normally collected by the SNMP MIBs. Only when
SNMPv3 support is commercially available with reasonable cryptographic security will
Juniper Networks incorporate remote SNMP configuration into the JUNOS software.
Copyright © 2000, Juniper Networks, Inc. All rights reserved. Juniper Networks is a registered trademark of Juniper Networks, Inc. JUNOS,
M20, and M40 are trademarks of Juniper Networks, Inc. All other trademarks, service marks, registered trademarks, or registered service marks
may be the property of their respective owners. All specifications are subject to change without notice. Printed in USA.