
WHITE PAPER

Designing High-Performance Campus Intranets with Multilayer Switching

Author: Geoff Haviland
E-mail: haviland@cisco.com

Synopsis

This paper briefly compares several approaches to designing campus intranets using multilayer switching. Then it describes the hierarchical approach called multilayer campus network design in greater detail. The multilayer design approach makes optimal use of multilayer switching to build a campus intranet that is scalable, fault tolerant, and manageable.

Whether implemented with an Ethernet backbone or an Asynchronous Transfer Mode (ATM) backbone, the multilayer model has many advantages. A multilayer campus intranet is highly deterministic, which makes it easy to troubleshoot as it scales. The multilayer design is modular, so bandwidth scales as building blocks are added. Intelligent Layer 3 services keep broadcasts off the backbone. Intelligent Layer 3 routing protocols such as Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (Enhanced IGRP) handle load balancing and fast convergence across the backbone. The multilayer model makes migration easier, because it preserves existing addressing. Redundancy and fast convergence are provided by UplinkFast and Hot Standby Router Protocol (HSRP). Bandwidth scales from Fast Ethernet to Fast EtherChannel and from Gigabit Ethernet to Gigabit EtherChannel. The model supports all common campus protocols.

The ideas expressed in this paper reflect experience with many large campus intranets. Detailed configuration examples are provided in the appendix to enable readers to implement the multilayer model with either a switched Ethernet backbone or an ATM LAN Emulation (LANE) backbone.

Contents

• Campus Network Design Considerations
  – Flat Bridged Networks
  – Routing and Scalability
  – Layer 2 Switching
  – Layer 3 Switching
  – Layer 4 Switching
  – Virtual LANs and Emulated LANs
• Comparing Campus Network Design Models
  – Hub and Router Model
  – Campus-Wide VLAN Model
  – Multiprotocol over ATM
• The Multilayer Model
  – The New 80/20 Rule
  – Components of the Multilayer Model
  – Redundancy and Load Balancing
  – Scaling Bandwidth
  – Policy in the Core
  – Positioning Servers
  – ATM/LANE Backbone
  – IP Multicast
  – Scaling Considerations
  – Migration Strategies
  – Security Considerations
  – Bridging in the Multilayer Model
• Benefits of the Multilayer Model
• Appendix A: Implementing the Multilayer Model
  – Ethernet Backbone
  – Server Farm
  – ATM LANE Backbone

Copyright © 1998 Cisco Systems, Inc. All Rights Reserved.


Campus Network Design Considerations

Flat Bridged Networks
Originally campus networks consisted of a single local-area network (LAN) to which new users were added. This LAN was a logical or physical cable into which the network devices tapped. In the case of Ethernet, the half-duplex 10 Mbps available was shared by all the devices. The LAN can be considered a collision domain, because all packets are visible to all devices on the LAN and are therefore free to collide, given the carrier sense multiple access with collision detection (CSMA/CD) scheme used by Ethernet.

When the collision domain of the LAN became congested, a bridge was inserted. A LAN bridge is a store-and-forward packet switch. The bridge segments the LAN into several collision domains, and therefore increases the available network throughput per device. Bridges flood broadcasts, multicasts, and unknown unicasts to all segments. Therefore, all the bridged segments in the campus together form a single broadcast domain. The Spanning Tree Protocol (STP) was developed to prevent loops in the network and to route around failed elements.

The following are characteristics of the STP broadcast domain:
• Redundant links are blocked and carry no data traffic.
• Suboptimal paths exist between different points.
• STP convergence typically takes 40 to 50 seconds.
• Broadcast traffic within the Layer 2 domain interrupts every host.
• Broadcast storms within the Layer 2 domain affect the whole domain.
• Isolating problems can be time consuming.
• Network security within the Layer 2 domain is limited.

In theory, the amount of broadcast traffic sets a practical limit to the size of the broadcast domain. In practice, managing and troubleshooting a bridged campus becomes increasingly difficult as the number of users increases. One misconfigured or malfunctioning workstation can disable an entire broadcast domain for an extended period of time.

When designing a bridged campus, each bridged segment corresponds to a workgroup. The workgroup server is placed in the same segment as the clients, allowing most of the traffic to be contained. This design principle is referred to as the 80/20 rule and refers to the goal of keeping at least 80 percent of the traffic contained within the local segment.

Routing and Scalability
A router is a packet switch that is used to create an internetwork or internet, thereby providing connectivity between broadcast domains. Routers forward packets based on network addresses rather than Media Access Control (MAC) addresses. Internets are more scalable than flat bridged networks, because routers summarize reachability by network number. Routers use protocols such as OSPF and Enhanced IGRP to exchange network reachability information.

Compared with STP, routing protocols have the following characteristics:
• Load balancing across many equal-cost paths (in the Cisco implementation)
• Optimal or lowest-cost paths between networks
• Fast convergence when changes occur
• Summarized (and therefore scalable) reachability information

In addition to controlling broadcasts, Cisco routers provide a wide range of value-added features that improve the manageability and scalability of campus internets. These features are characteristics of the Cisco IOS™ software and are common to Cisco routers and multilayer switches. The IOS software has features specific to each protocol typically found in the campus, including the following:
• TCP/IP
• AppleTalk
• DECnet
• Novell IPX
• IBM Systems Network Architecture (SNA), data-link switching (DLSw), and Advanced Peer-to-Peer Networking (APPN)

When routers are used in a campus, the number of router hops from edge to edge is called the diameter. It is considered good practice to design for a consistent diameter within a campus. This is achieved with a hierarchical design model. Figure 1 shows a typical hierarchical model that combines routers and hubs. The diameter is always two router hops from an end station in one building to an end station in another building. The distance from an end station to a server on the backbone Fiber Distributed Data Interface (FDDI) is always one hop.

Layer 2 Switching
Layer 2 switching is hardware-based bridging. In particular, the frame forwarding is handled by specialized hardware, usually application-specific integrated circuits (ASICs). Layer 2 switches are replacing hubs at the wiring closet in campus network designs.

The performance advantage of a Layer 2 switch compared with a shared hub is dramatic. Consider a workgroup of 100 users in a subnet sharing a single half-duplex Ethernet segment. The average available throughput per user is 10 Mbps divided by 100, or just 100 kbps. Replace the hub with a full-duplex Ethernet switch, and the average available throughput per user is 10 Mbps times two, or 20 Mbps. The amount of network capacity available to the switched workgroup is 200 times greater than to the shared workgroup. The limiting factor now becomes the workgroup server, which is a 10-Mbps bottleneck. The high performance of Layer 2 switching has led to some network designs that increase the number of hosts per subnet. Increasing the hosts leads to a flatter design with fewer subnets or logical networks in the campus. However, for all its advantages, Layer 2 switching has all the same characteristics and limitations as bridging. Broadcast domains built with Layer 2 switches still experience the same scaling and performance issues as the large bridged networks of the past. The broadcast radiation increases with the number of hosts, and broadcasts interrupt all the end stations. The STP limitations of slow convergence and blocked links still apply.

Layer 3 Switching
Layer 3 switching is hardware-based routing. In particular, the packet forwarding is handled by specialized hardware, usually ASICs. Depending on the protocols, interfaces, and features supported, Layer 3 switches can be used in place of routers in a campus design. Layer 3 switches that support standards-based packet header rewrite and time-to-live (TTL) decrement are called packet-by-packet Layer 3 switches.

High-performance packet-by-packet Layer 3 switching is achieved in different ways. The Cisco 12000 Gigabit Switch Router (GSR) achieves wire-speed Layer 3 switching with a crossbar switch matrix. The Catalyst® family of multilayer switches performs Layer 3 switching with ASICs developed for the Supervisor Engine. Regardless of the underlying technology, Cisco's packet-by-packet Layer 3 switching implementations are standards-compliant and operate as a fast router to external devices.

Figure 1 Traditional Router and Hub Campus
[Figure: Buildings A, B, and C, each with a workgroup server and hubs in the access layer; routers in the distribution layer; a dual-homed FDDI dual ring in the core layer with attached enterprise servers.]

Cisco's Layer 3 switching implementation on the Catalyst family of switches combines the full multiprotocol routing support of the Cisco IOS software with hardware-based Layer 3 switching. The Route Switch Module (RSM) is an IOS-based router with the same Reduced Instruction Set Computing (RISC) processor as the RSP2 engine in the high-end Cisco 7500 router family. The hardware-based Layer 3 switching is achieved with ASICs on the NetFlow feature card. The NetFlow feature card is a daughter-card upgrade to the Supervisor Engine on a Catalyst 5000 family multilayer switch.

Layer 4 Switching
Layer 4 switching refers to hardware-based routing that considers the application. In Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) flows, the application is encoded as a port number in the packet header. Cisco routers have the ability to control traffic based on Layer 4 information using extended access lists and to provide granular Layer 4 accounting of flows using NetFlow switching.

Multilayer switching on the Catalyst family of switches can optionally be configured to operate as a Layer 3 switch or a Layer 4 switch. When operating as a Layer 3 switch, the NetFlow feature card caches flows based on destination IP address. When operating as a Layer 4 switch, the card caches flows based on source address, destination address, source port, and destination port. Because the NetFlow feature card performs Layer 3 or Layer 4 switching in hardware, there is no performance difference between the two modes of operation. Choose Layer 4 switching if your policy dictates granular control of traffic by application or if you require granular accounting of traffic by application.

Virtual LANs and Emulated LANs
One of the technologies developed to enable Layer 2 switching across the campus is Virtual LANs (VLANs). A VLAN is a way to establish an extended logical network independent of the physical network layout. Each VLAN functions as a separate broadcast domain and has characteristics similar to an extended bridged network. STP normally operates between the switches in a VLAN.
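To make this concrete, the following is a minimal sketch of how two VLANs and an ISL trunk might be defined on a Catalyst 5000 in the CatOS syntax of the period. The VLAN names, numbers, and port ranges are illustrative only, and exact syntax varies by software release.

    # Define two VLANs and assign access ports to each
    set vlan 2 name pink
    set vlan 3 name purple
    set vlan 2 3/1-12
    set vlan 3 3/13-24
    # Carry both VLANs to a router or neighboring switch over one ISL trunk
    set trunk 1/1 on isl

Each access port belongs to exactly one VLAN, while the trunk port multiplexes tagged frames for all of them.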

Figure 2 Virtual LAN (VLAN) Technologies
[Figure: clients Y and Z and workgroup servers in the pink and green VLANs; an ISL-attached enterprise server X on a Catalyst switch; Catalyst 5000 switches B and C with LANE cards acting as LANE clients (LECs) for the pink, purple, and green ELANs across an ATM switch; ATM-attached server D with LECs for the same three ELANs. Workgroups: Pink 131.108.2.0, Purple 131.108.3.0, Green 131.108.4.0.]

Figure 2 shows three VLANs labeled pink, purple, and green. Each color corresponds to a workgroup, which is also a logical subnet:
• Pink = 131.108.2.0
• Purple = 131.108.3.0
• Green = 131.108.4.0

One of the technologies developed to enable campus-wide VLANs is VLAN trunking. A VLAN trunk between two Layer 2 switches allows traffic from several logical networks to be multiplexed. A VLAN trunk between a Layer 2 switch and a router allows the router to connect to several logical networks over a single physical interface. In Figure 2, a VLAN trunk allows server X to talk to all the VLANs simultaneously. The yellow lines in Figure 2 are Inter-Switch Link (ISL) trunks that carry the pink, purple, and green VLANs.

ISL, 802.10, and 802.1Q are VLAN tagging protocols that were developed to allow VLAN trunking. The VLAN tag is an integer incorporated into the header of frames passing between two devices. The tag value allows the data from multiple VLANs to be multiplexed and demultiplexed.

ATM LANE permits multiple logical LANs to exist over a single switched ATM infrastructure. ATM Emulated LANs (ELANs) use a similar integer index as ISL, 802.10, and 802.1Q, and are compatible with Ethernet VLANs from end to end. In Figure 2, LANE cards in Catalyst switches B and C act as LANE clients (LECs) that connect the Ethernet VLANs pink, purple, and green across the ATM backbone. The server D is ATM attached and has LECs for the pink, purple, and green ELANs. Thus server D can talk directly to hosts in the pink, purple, and green VLANs.

ATM LANE emulates the Ethernet broadcast protocol over connection-oriented ATM. Not shown in Figure 2 are the LANE Configuration Server (LECS), LANE Server (LES), and Broadcast and Unknown Server (BUS) that are required to make ATM work like Ethernet. The LECS and LES/BUS functions are supported by the Cisco IOS software and can reside in a Cisco LightStream® 1010 switch, a Cisco Catalyst 5000 family switch with a LANE card, or a Cisco router with an ATM interface.

Ethernet-attached hosts and servers in one VLAN cannot talk to Ethernet-attached hosts and servers in a different VLAN. In Figure 2, client Z in the green VLAN cannot talk to server Y in the pink VLAN. That is because there is no router to connect pink to green.

Comparing Campus Network Design Models

The Hub and Router Model
Figure 1 shows a campus with the traditional router and hub design. The access-layer devices are hubs that act as Layer 1 repeaters. The distribution layer consists of routers. The core layer contains FDDI concentrators or other hubs that act as Layer 1 repeaters. Routers in the distribution layer provide broadcast control and segmentation. Each wiring closet hub corresponds to a logical network or subnet and homes to a router port. Alternatively, several hubs can be cascaded or bridged together to form one logical subnet or network.

The hub and router model is scalable because of the advantages of intelligent routing protocols such as OSPF and Enhanced IGRP. The distribution layer is the demarcation between networks in the access layer and networks in the core. Distribution-layer routers provide segmentation and terminate collision domains as well as broadcast domains. The model is consistent and deterministic, which simplifies troubleshooting and administration. This model also maps well to all the network protocols such as Novell IPX, AppleTalk, DECnet, and TCP/IP.

The hub and router model is straightforward to configure and maintain because of its modularity. Each router within the distribution layer is programmed with the same features. Common configuration elements can be cut and pasted across the layer. Because each router is programmed the same way, its behavior is predictable, which makes troubleshooting easier. Layer 3 packet switching load and middleware services are shared among all the routers in the distribution layer.

The traditional hub and router campus model can be upgraded as performance demands increase. The shared media in the access layer and core can be upgraded to Layer 2 switching, and the distribution layer can be upgraded to Layer 3 switching with multilayer switching. Upgrading shared Layer 1 media to switched Layer 2 media does not change the network addressing, the logical design, or the programming of the routers.

The Campus-Wide VLAN Model
Figure 3 shows a conventional campus-wide VLAN design. Layer 2 switching is used in the access, distribution, and core layers. Four workgroups represented by the colors blue, pink, purple, and green are distributed across several access-layer switches. Connectivity between workgroups is by Router X, which connects to all four VLANs. Layer 3 switching services are concentrated at Router X. Enterprise servers are shown behind the router on different logical networks indicated by the black lines.

The various VLAN connections to Router X could be replaced by an ISL trunk. In either case, Router X is typically referred to as a "router on a stick" or a "one-armed router." More routers can be used to distribute the load, and each router attaches to several or all VLANs. Traffic between workgroups must traverse the campus in the source VLAN to a port on the gateway router, then back out into the destination VLAN.
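As an illustration, the trunk interface of a one-armed router might carry one ISL subinterface per VLAN, along the following lines. This is a sketch only; the VLAN numbers are assumed to match the workgroup subnets of Figure 3.

    ! One physical Fast Ethernet interface carries all the VLANs
    interface FastEthernet1/0
     no ip address
    !
    ! One subinterface per VLAN; the ISL tag matches the VLAN number
    interface FastEthernet1/0.1
     encapsulation isl 1
     ip address 131.108.1.1 255.255.255.0
    !
    interface FastEthernet1/0.2
     encapsulation isl 2
     ip address 131.108.2.1 255.255.255.0

Every packet between workgroups enters and leaves through the same physical interface, which is exactly the bottleneck this model concentrates at Router X.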

Figure 3 Traditional Campus-Wide VLAN Design
[Figure: Buildings A, B, and C with Layer 2 switching in the access, distribution, and core layers; Router X in the server distribution block connects the four workgroup VLANs; enterprise servers, a green workgroup server, and an ISL-attached enterprise server sit behind it. Workgroups: Blue 131.108.1.0, Pink 131.108.2.0, Purple 131.108.3.0, Green 131.108.4.0.]

Figure 4 shows an updated version of the campus-wide VLAN model that takes advantage of multilayer switching. The switch marked X is a Catalyst 5000 family multilayer switch. The one-armed router is replaced by an RSM and the hardware-based Layer 3 switching of the NetFlow feature card. Enterprise servers in the server farm may be attached by Fast Ethernet at 100 Mbps, or by Fast EtherChannel to increase the bandwidth to 200 Mbps FDX or 400 Mbps FDX.

The campus-wide VLAN model is highly dependent upon the 80/20 rule. If 80 percent of the traffic is within a workgroup, then 80 percent of the packets are switched at Layer 2 from client to server. However, if 90 percent of the traffic goes to the enterprise servers in the server farm, then 90 percent of the packets are switched by the one-armed router. The scalability and performance of the VLAN model are limited by the characteristics of STP. Each VLAN is equivalent to a flat bridged network.

Figure 4 Campus-Wide VLANs with Multilayer Switching
[Figure: the same three-building campus as Figure 3, with Catalyst 5000 multilayer switch X replacing the one-armed router in the server distribution block; Fast EtherChannel/ISL trunks into the core; Fast EtherChannel-attached enterprise servers and a green workgroup server. Workgroups: Blue 131.108.1.0, Pink 131.108.2.0, Purple 131.108.3.0, Green 131.108.4.0.]

The campus-wide VLAN model provides the flexibility to have statically configured end stations move to a different floor or building within the campus. Cisco's VLAN Membership Policy Server (VMPS) and the VLAN Trunking Protocol (VTP) make this possible. A mobile user plugs a laptop PC into a LAN port in another building. The local Catalyst switch sends a query to the VMPS to determine the access policy and VLAN membership for the user. Then the Catalyst switch adds the user's port to the appropriate VLAN.

Multiprotocol over ATM
Multiprotocol over ATM (MPOA) adds Layer 3 cut-through switching to ATM LANE. The ATM infrastructure is the same as in ATM LANE. The LECS and the LES/BUS for each ELAN are configured the usual way. Figure 5 shows the elements of a small MPOA campus design.

Figure 5 MPOA Campus Design
[Figure: clients in the green and pink VLANs attach to access switches equipped with multiprotocol clients (MPCs); Router X hosts the multiprotocol server (MPS) and routes the first packet of each IP unicast flow; Fast EtherChannel-attached enterprise servers sit behind an MPC on the ATM core.]

With MPOA, the new elements are the multiprotocol client (MPC) hardware and software on the access switches as well as the multiprotocol server (MPS), which is implemented in software on Router X. When the client in the pink VLAN talks to an enterprise server in the server farm, the first packet goes from the MPC in the access switch to the MPS using LANE. The MPS forwards the packet to the destination MPC using LANE. Then the MPS tells the two MPCs to establish a direct switched virtual circuit (SVC) path between the client's subnet and the server farm subnet.

With MPOA, IP unicast packets take the cut-through SVC as indicated. Multicast packets, however, are sent to the BUS to be flooded in the originating ELAN. Then Router X copies the multicast to the BUS in every ELAN that needs to receive the packet as determined by multicast routing. In turn, each BUS floods the packet again within each destination ELAN.

Packets of protocols other than IP always proceed LANE to router to LANE without establishing a direct cut-through SVC. MPOA design must consider the amount of broadcast, multicast, and non-IP traffic in relation to the performance of the router. MPOA should be considered for networks with predominantly IP unicast traffic and ATM trunks to the wiring closet switch.

The Multilayer Model

The New 80/20 Rule
The conventional wisdom of the 80/20 rule underlies the traditional design models discussed in the preceding section. With the campus-wide VLAN model, the logical workgroup is dispersed across the campus, but still organized such that 80 percent of traffic is contained within the VLAN. The remaining 20 percent of traffic leaves the network or subnet through a router.

The traditional 80/20 traffic model arose because each department or workgroup had a local server on the LAN. The local server was used as file server, logon server, and application server for the workgroup. The 80/20 traffic pattern has been changing rapidly with the rise of corporate intranets and applications that rely on distributed IP services.

Many new and existing applications are moving to distributed World Wide Web (WWW)-based data storage and retrieval. The traffic pattern is moving toward what is now referred to as the 20/80 model. In the 20/80 model, only 20 percent of traffic is local to the workgroup LAN and 80 percent of the traffic leaves.

Components of the Multilayer Model
The performance of multilayer switching matches the requirements of the new 20/80 traffic model. The two components of multilayer switching on the Catalyst 5000 family are the RSM and the NetFlow feature card. The RSM is a Cisco IOS-based multiprotocol router on a card. It has performance and features similar to a Cisco 7500 router. The NetFlow feature card is a daughter-card upgrade to the Supervisor Engine of the Catalyst 5000 family switches. It performs both Layer 3 and Layer 2 switching in hardware with specialized ASICs. It is important to note that there is no performance penalty associated with Layer 3 switching versus Layer 2 switching with the NetFlow feature card.

Figure 6 illustrates a simple multilayer campus network design. The campus consists of three buildings, A, B, and C, connected by a backbone called the core. The distribution layer consists of Catalyst 5000 family multilayer switches. The multilayer design takes advantage of the Layer 2 switching performance and features of the Catalyst family switches in the access layer and backbone and uses multilayer switching in the distribution layer. The multilayer model preserves the existing logical network design and addressing as in the traditional hub and router model. Access-layer subnets terminate at the distribution layer. From the other side, backbone subnets also terminate at the distribution layer. So the multilayer model does not consist of campus-wide VLANs, but does take advantage of VLAN trunking as we shall see.

Figure 6 Multilayer Campus Design with Multilayer Switching
[Figure: Buildings A, B, and C; Catalyst 5000 Layer 2 switches in the access layer, one with a Fast EtherChannel-attached workstation; Catalyst 5000 multilayer switches in the distribution layer, one with an ISL-attached building server; Catalyst 5000 Layer 2 switches in the core with a Fast EtherChannel-attached enterprise server.]

Because Layer 3 switching is used in the distribution layer of the multilayer model, this is where many of the characteristic advantages of routing apply. The distribution layer forms a broadcast boundary so that broadcasts don't pass from a building to the backbone or vice-versa. Value-added features of the Cisco IOS software apply at the distribution layer. For example, the distribution-layer switches cache information about Novell servers and respond to Get Nearest Server queries from Novell clients in the building. Another example is forwarding Dynamic Host Configuration Protocol (DHCP) messages from mobile IP workstations to a DHCP server.

Another Cisco IOS feature that is implemented at the multilayer switches in the distribution layer is called Local Area Mobility (LAM). LAM is valuable for campus intranets that have not deployed DHCP services and permits workstations with statically configured IP addresses and gateways to move throughout the campus. LAM works by propagating the address of the mobile hosts out into the Layer 3 routing table.

There are actually hundreds of valuable Cisco IOS features that improve the stability, scalability, and manageability of enterprise networks. These features apply to all the protocols found in the campus, including DECnet, AppleTalk, IBM SNA, Novell IPX, TCP/IP, and many others. One characteristic shared by most of these features is that they are "out of the box." Out-of-the-box features apply to the functioning of the network as a whole. They are in contrast with "inside-the-box" features, such as port density or performance, that apply to a single box rather than to the network as a whole. Inside-the-box features have little to do with the stability, scalability, or manageability of enterprise networks.

The greatest strengths of the multilayer model arise from its hierarchical and modular nature. It is hierarchical because the layers are clearly defined and specialized. It is modular because every part within a layer performs the same logical function. One key advantage of modular design is that different technologies can be deployed with no impact on the logical structure of the model. For example, Token Ring can be replaced by Ethernet. FDDI can be replaced by switched Fast Ethernet. Hubs can be replaced by Layer 2 switches. Fast Ethernet can be substituted with ATM LANE. ATM LANE can be substituted with Gigabit Ethernet, and so on. So modularity makes both migration and integration of legacy technologies much easier.

Another key advantage of modular design is that each device within a layer is programmed the same way and performs the same job, making configuration much easier. Troubleshooting is also easier, because the whole design is highly deterministic in terms of performance, path determination, and failure recovery.

In the access layer a subnet corresponds to a VLAN. A VLAN may map to a single Layer 2 switch, or it may appear at several switches. Conversely, one or more VLANs may appear at a given Layer 2 switch. If Catalyst 5000 family switches are used in the access layer, VLAN trunking provides flexible allocation of networks and subnets across more than one switch. In our later examples we will show two VLANs per switch in order to illustrate how to use VLAN trunking to achieve load balancing and fast failure recovery between the access layer and the distribution layer.

In its simplest form, the core layer is a single logical network or VLAN. In our examples, we show the core layer as a simple switched Layer 2 infrastructure with no loops. It is advantageous to avoid spanning tree loops in the core. Instead we will take advantage of the load balancing and fast convergence of Layer 3 routing protocols such as OSPF and Enhanced IGRP to handle path determination and failure recovery across the backbone. So all the path determination and failure recovery is handled at the distribution layer in the multilayer model.

Redundancy and Load Balancing
A distribution-layer switch in Figure 6 represents a point of failure at the building level. One thousand users in Building A could lose their connections to the backbone in the event of a power failure. If a link from a wiring closet switch to the distribution-layer switch is disconnected, 100 users on a floor could lose their connections to the backbone. Figure 7 shows a multilayer design that addresses these issues.

Multilayer switches A and B provide redundant connectivity to domain "North." Redundant links from each access-layer switch connect to distribution-layer switches A and B. Redundancy in the backbone is achieved by installing two or more Catalyst switches in the core. Redundant links from the distribution layer provide failover as well as load balancing over multiple paths across the backbone.

Redundant links connect access-layer switches to a pair of Catalyst multilayer switches in the distribution layer. Fast failover at Layer 3 is achieved with Cisco's Hot Standby Router Protocol. The two distribution-layer switches cooperate to provide HSRP gateway routers for all the IP hosts in the building. Fast failover at Layer 2 is achieved by Cisco's UplinkFast feature. UplinkFast is a convergence algorithm that achieves link failover from the forwarding link to the backup link in about three seconds.

Load balancing across the core is achieved by intelligent Layer 3 routing protocols implemented in the Cisco IOS software. In this picture there are four equal-cost paths between any two buildings. In Figure 7, the four paths from domain North to domain West are AXC, AXYD, BYD, and BYXC. These four Layer 2 paths are considered equal by Layer 3 routing protocols. Note that all paths from domains North, West, and South to the backbone are single, logical hops. The Cisco IOS software supports load balancing over up to six equal-cost paths for IP, and over many paths for other protocols.
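As a sketch, the relevant OSPF configuration on a distribution-layer switch is small; the process number and network statement here are illustrative.

    router ospf 1
     network 131.108.0.0 0.0.255.255 area 0
     ! Install up to six equal-cost routes per destination
     ! (four exist between any two buildings in Figure 7)
     maximum-paths 6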

Figure 7 Redundant Multilayer Campus Design
[Figure: domains North, West, and South; each access-layer switch is dual-homed to a pair of distribution-layer multilayer switches (A and B for North, C and D for West); redundant core switches X and Y; ISL-attached building servers at the distribution layer; enterprise servers, one Fast EtherChannel-attached, on the backbone.]

Figure 8 shows the redundant multilayer model with an enterprise server farm. The server farm is implemented as a modular building block using multilayer switching. The Gigabit Ethernet trunk labeled A carries the server-to-server traffic. The Fast EtherChannel trunk labeled B carries the backbone traffic. All server-to-server traffic is kept off the backbone, which has both security and performance advantages. The enterprise servers have fast HSRP redundancy between the multilayer switches X and Y. Access policy to the server farm can be controlled by access lists on X and Y. In Figure 8, the Layer 2 core switches V and W are shown separate from server distribution switches X and Y for clarity. In a network of this size, V and W would collapse into X and Y.

Putting servers in a server farm also avoids problems associated with IP redirect and selecting the best gateway router when servers are directly attached to the backbone subnet as shown in Figure 7. In particular, HSRP would not be used for the enterprise servers in Figure 7; they would use proxy Address Resolution Protocol (ARP), Internet Router Discovery Protocol (IRDP), Gateway Discovery Protocol (GDP), or Routing Information Protocol (RIP) snooping to populate their routing tables.
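For reference, IRDP is enabled per interface in the Cisco IOS software. A minimal sketch for a core VLAN interface follows; the interface name, addresses, and timer are illustrative.

    interface Vlan200
     ip address 131.108.2.1 255.255.255.0
     ! Advertise this router as a gateway to directly attached servers
     ip irdp
     ip irdp maxadvertinterval 10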

Figure 8 Multilayer Model with Server Farm
[Figure: domains North, West, and South above Layer 2 core switches V and W; server distribution switches X and Y joined by Gigabit Ethernet trunk A, which carries server-to-server traffic, and connected to the core by Fast EtherChannel trunk B; Fast EtherChannel-attached enterprise servers behind X and Y; ISL-attached building servers at the distribution layer.]

Figure 9 shows HSRP operating between two distribution-layer switches. Host systems connect at a switch port in the access layer. The even-numbered subnets map to even-numbered VLANs, and the odd-numbered subnets map to odd-numbered VLANs. The HSRP primary for the even-numbered subnets is distribution-layer Switch X, and the HSRP primary for the odd-numbered subnets is Switch Y. The HSRP backup for even-numbered subnets is Switch Y, and the HSRP backup for odd-numbered subnets is Switch X. The convention followed here is that every HSRP gateway router always has host address 100, so the HSRP gateway for subnet 15.0 is 15.100. If gateway 15.100 loses power or is disconnected, Switch X assumes the address 15.100 as well as the HSRP MAC address within about two seconds as measured in the configuration shown in Appendix A.

Figure 9 Redundancy with HSRP
[Figure: hosts A, B, C, and D on subnets 10.0, 11.0, 15.0, and 17.0 with gateways 10.100, 11.100, 15.100, and 17.100; ISL trunks multiplex the VLANs over Fast Ethernet or Fast EtherChannel to switches X and Y; X is HSRP primary for the even subnets and VLANs 10, 12, 14, and 16; Y is HSRP primary for the odd subnets and VLANs 11, 13, 15, and 17.]

Figure 10 shows load balancing between the access layer and the distribution layer using Cisco's ISL VLAN trunking protocol. We have allocated VLANs 10 and 11 to access-layer Switch A, and VLANs 12 and 13 to Switch B. Each access-layer switch has two trunks to the distribution layer. The STP puts redundant links in blocking mode as shown. Load distribution is achieved by making one trunk the active forwarding path for even-numbered VLANs and the other trunk the active forwarding path for odd-numbered VLANs.

Figure 10 VLAN Trunking for Load Balancing
[Figure: access switches A, B, C, and D carry VLAN pairs 10/11, 12/13, 14/15, and 16/17; each switch has two ISL trunks to distribution switches X and Y; per-VLAN trunk states are marked F (forwarding) and B (blocking), for example F10/B11 on Switch A's left trunk and F11/B10 on its right trunk; X is the STP root for the even VLANs and Y for the odd VLANs.]

On Switch A, the left-hand trunk is labeled F10, which means it's the forwarding path for VLAN 10. The right-hand trunk is labeled F11, which means it's the forwarding path for VLAN 11. The left-hand trunk is also labeled B11, which means it's the blocking path for VLAN 11, and the right-hand trunk is B10, which means blocking for VLAN 10. This is accomplished by making X the root for even VLANs and Y the root for odd VLANs. See Appendix A for the configuration commands required.
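The flavor of those commands can be sketched as follows, with addresses assumed from the conventions of Figure 9. On the RSM in distribution Switch X, each even-numbered VLAN interface carries an HSRP group whose virtual address ends in .100:

    interface Vlan10
     ip address 131.108.10.2 255.255.255.0
     ! Switch X is primary for even subnets: higher priority, preempt on recovery
     standby 10 ip 131.108.10.100
     standby 10 priority 110
     standby 10 preempt

On the Catalyst Supervisors, Switch X is made the spanning-tree root for the even VLANs while UplinkFast is enabled on the access-layer switches (a CatOS sketch; exact syntax varies by release):

    # On distribution Switch X: become root for the even-numbered VLANs
    set spantree root 10,12,14,16
    # On each access-layer switch: fail over to the backup uplink in seconds
    set spantree uplinkfast enable

Switch Y would mirror this configuration, taking the higher HSRP priority and the spanning-tree root for the odd VLANs.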

Figure 11 shows Figure 10 after a link failure, which is indicated by the big "X". UplinkFast changes the left-hand trunk on Switch A to be the active forwarding path for VLAN 11. Traffic is switched across Fast EtherChannel trunk Z if required. Trunk Z is the Layer 2 backup path for all VLANs in the domain, and also carries some of the return traffic that is load-balanced between Switch X and Switch Y. With conventional STP, convergence would take 40 to 50 seconds. With UplinkFast, failover takes about three seconds as measured in the configuration shown in Appendix A.

Figure 11 VLAN Trunking with UplinkFast Failover
[Figure: the topology of Figure 10 with the trunk from Switch A to Switch Y failed; Switch A's remaining trunk forwards both VLANs 10 and 11; trunk Z between X and Y is the Layer 2 backup path for all VLANs.]

Scaling Bandwidth
Ethernet trunk capacity in the multilayer model can be scaled in several ways. Ethernet can be migrated to Fast Ethernet. Fast Ethernet can be migrated to Fast EtherChannel, Gigabit Ethernet, or Gigabit EtherChannel. Access-layer switches can be partitioned into multiple VLANs with multiple trunks. VLAN multiplexing with ISL can be used in combination with the different trunks.

Fast EtherChannel combines two or four Fast Ethernet links together into a single high-capacity trunk. Fast EtherChannel is supported by the Cisco 7500 family routers with IOS Release 11.1.14CA and above. It is supported on Catalyst 5000 switches with the Fast EtherChannel line card or on Supervisor II or III. Fast EtherChannel support has been announced by several partners, including Adaptec, Auspex, Compaq, Hewlett-Packard, Intel, Sun Microsystems, and Znyx. With Fast EtherChannel trunking, a high-capacity server can be connected to the core backbone at 400 Mbps FDX for 800 Mbps total throughput.

Figure 12 shows three ways to scale bandwidth between an access-layer switch and a distribution-layer switch. In the configuration labeled "A (Best)," all VLANs are combined over Fast EtherChannel with ISL. In the middle configuration, labeled "B (Good)," a combination of segmentation and ISL trunking is used. In the configuration labeled "C (OK)," simple segmentation is used.

Figure 12 Scaling Ethernet Trunk Bandwidth
[Figure: three options between an access switch and a distribution switch: A (Best) carries VLANs 1 through 6 over a 400-Mbps FDX Fast EtherChannel with ISL; B (Good) carries VLAN pairs 1-2, 3-4, and 5-6 over three Fast Ethernet ISL trunks; C (OK) segments VLANs 1, 2, and 3 over individual Fast Ethernet links.]

You should use model A if possible, because Fast EtherChannel provides more efficient bandwidth utilization by multiplexing traffic from multiple VLANs over one trunk. If a Fast EtherChannel line card is not available, use model B if possible. If neither Fast EtherChannel nor ISL trunking is possible, use model C. With simple segmentation, each VLAN uses one trunk, so one can be congested while another is unused. More ports will be required to get the same performance.
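A sketch of model A on a Catalyst 5000 equipped with a Fast EtherChannel line card follows; the port numbers are illustrative, and exact syntax varies by release.

    # Bundle two Fast Ethernet ports into one Fast EtherChannel
    set port channel 1/1-2 on
    # Run ISL trunking over the channel so all VLANs share its full capacity
    set trunk 1/1 on isl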

Scale bandwidth within ATM backbones by adding more OC-3 or OC-12 trunks as required. The intelligent routing provided by Private Network-to-Network Interface (PNNI) handles load balancing and fast failover.

Policy in the Core
With Layer 3 switching in the distribution layer, it is possible to implement the backbone as a single logical network or multiple logical networks as required. VLAN technology can be used to create separate logical networks that can be used for different purposes. One IP core VLAN could be created for management traffic and another for enterprise servers. A different policy could be implemented for each core VLAN. Policy is applied with access lists at the distribution layer. In this way, access to management traffic and management ports on network devices is carefully controlled.

Another way to logically partition the core is by protocol. Create one VLAN for enterprise IP servers and another for enterprise IPX or DECnet servers. The logical partition can be extended to become complete physical separation on multiple core switches if dictated by security policies. Figure 13 shows the core separated physically into two switches. VLAN 100 on Switch V corresponds to IP subnet 131.108.1.0 where the World Wide Web (WWW) server farm attaches. VLAN 200 on Switch W corresponds to IPX network BEEF0001 where the Novell server farm attaches.

Of course the simpler the backbone topology, the better. A small number of VLANs or ELANs is preferred. A discussion of the scaling issues related to large numbers of Layer 3 switches peered across many networks appears later in this paper in "Scaling Considerations."

Positioning Servers
It is very common for an enterprise to centralize servers. In some cases, services are consolidated into a single server. In other cases, servers are grouped at a data center for physical security or easier administration. At the same time, it is increasingly common for workgroups or individuals to publish a Web page locally and make it accessible to the enterprise.

With centralized servers directly attached to the backbone, all client/server traffic crosses one hop from a subnet in the access layer to a subnet in the core. Policy-based control of access to enterprise servers is implemented by access lists applied at the distribution layer. In Figure 14, server W is Fast Ethernet-attached to the core subnet. Server X is Fast EtherChannel-attached to the core subnet. As mentioned, servers attached directly to the core must use proxy ARP, IRDP, GDP, or RIP snooping to populate their routing tables. HSRP would not be used within core subnets, because switches in the distribution layer all connect to different parts of the campus.

Figure 13 Logical or Physical Partitioning of the Core
[Figure: domains A, B, and C above the distribution layer; core switch V carries VLAN 100, IP subnet 131.108.1.0, with a World Wide Web server farm behind server distribution switch X; core switch W carries VLAN 200, IPX network BEEF0001, with Novell file servers behind server distribution switch Y.]

Enterprise servers Y and Z are placed in a server farm by implementing multilayer switching in a server distribution building block. Server Y is Fast Ethernet-attached, and server Z is Fast EtherChannel-attached. Policy controlling access to these servers is implemented with access lists on the core switches. Another big advantage of the server distribution model is that HSRP can be used to provide redundancy with fast failover. The server distribution model also keeps all server-to-server traffic off the backbone. See Appendix A for a sample configuration that shows how to implement a server farm.

Server M is within workgroup D, which corresponds to one VLAN. Server M is Fast Ethernet-attached at a port on an access-layer switch, because most of the traffic to the server is local to the workgroup. This follows the conventional 80/20 rule. Server M could be hidden from the enterprise with an access list at the distribution layer switch H if required.

Server N attaches to the distribution layer at switch H. Server N is a building-level server that communicates with clients in VLANs A, B, C, and D. A direct Layer 2 switched path between server N and clients in VLANs A, B, C, and D can be achieved in two ways. With four network interface cards (NICs), it can be directly attached to each VLAN. With an ISL NIC, server N can talk directly to all four VLANs over a VLAN trunk. Server N can be selectively hidden from the rest of the enterprise with an access list on distribution layer switch H if required.

ATM/LANE Backbone
Figure 15 shows the multilayer campus model with ATM LANE in the backbone. For customers that require guaranteed quality of service (QoS), ATM is a good alternative. Real-time voice and video applications may mandate ATM features like per-flow queuing, which provides granular control of delay and jitter.

Figure 14 Server Attachment in the Multilayer Model
[Figure: VLANs A, B, C, and D in the access layer; distribution switch H with ISL-attached building server N (serving VLANs A through D) and workgroup server M in workgroup D; servers W (Fast Ethernet-attached) and X (Fast EtherChannel-attached) directly on the core subnet; an HSRP pair of server distribution switches fronting enterprise servers Y (Fast Ethernet-attached) and Z (Fast EtherChannel-attached).]

Each Catalyst 5000 multilayer switch in the distribution layer is equipped with a LANE card. The LANE card acts as LEC so that the distribution-layer switches can communicate across the backbone. The LANE card has a redundant ATM OC-3 physical interface called dual-PHY. In Figure 15, the solid lines represent the active links and the dotted lines represent the hot-standby links.

Two LightStream 1010 switches form the ATM core. Routers and servers with native ATM interfaces attach directly to ATM ports in the backbone. Enterprise servers in the server farm attach to multilayer Catalyst 5000 switches X and Y. Servers may be Fast Ethernet- or Fast EtherChannel-attached. These Catalyst 5000 switches are also equipped with LANE cards and act as LECs that connect Ethernet-based enterprise servers to the ATM ELAN in the core.

The trunks between the two LightStream 1010 core switches can be OC-3 or OC-12 as required. The PNNI protocol handles load balancing and intelligent routing between the ATM switches. Intelligent routing is increasingly important as the core scales up from two switches to many switches. STP is not used in the backbone. Intelligent Layer 3 routing protocols such as OSPF and Enhanced IGRP manage path determination and load balancing between distribution-layer switches.

Cisco has implemented the Simple Server Redundancy Protocol (SSRP) to provide redundancy of the LECS and the LES/BUS. SSRP is available on Cisco 7500 routers, Catalyst 5000 family switches, and LightStream 1010 switches and is compatible with all LANE 1.0 standard LECs.
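A minimal sketch of the LANE configuration on the Catalyst LANE modules follows, with the ELAN name "backbone" and VLAN 100 assumed for illustration; the dual-PHY preference command may differ by release.

    interface ATM0
     ! Dual-PHY: prefer PHY A and fail over to PHY B
     atm preferred phy A
    !
    interface ATM0.1 multipoint
     ! On Switch X only: primary LES/BUS for the backbone ELAN
     lane server-bus ethernet backbone
     ! On every LANE card: join the ELAN and bind it to Ethernet VLAN 100
     lane client ethernet 100 backbone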

Figure 15 Multilayer Model with ATM LANE Core
[Figure: Buildings A, B, and C (domains A, B, and C) with distribution-layer multilayer switch pairs uplinked by dual-PHY OC-3 or OC-12 to two LightStream 1010 ATM switches, each holding a backup LECS; a Cisco 7500 router is the primary LECS; server distribution Catalyst 5000 switches X (primary LES/BUS) and Y (backup LES/BUS) connect Fast Ethernet- and Fast EtherChannel-attached enterprise servers.]

The LANE card for the Catalyst 5000 family is an efficient BUS with broadcast performance of 120 kpps. This is enough capacity for the largest campus networks. In Figure 15 we place the primary LES/BUS on Switch X and the backup LES/BUS on Switch Y. For a small campus, SSRP LES/BUS failover takes only a few seconds. For a very large campus, LES/BUS failover can take several minutes. In large campus designs, dual ELAN backbones are frequently used to provide fast convergence in the event of a LES/BUS failure.

As an example, two ELANs, "Red" and "Blue," are created in the backbone. If the LES/BUS for ELAN Red is disconnected, traffic is quickly rerouted over ELAN Blue until ELAN Red recovers. After ELAN Red recovers, the multilayer switches in the distribution layer reestablish contact across ELAN Red and start load balancing between Red and Blue again. This process applies to routed protocols but not bridged protocols.

The primary and backup LECS database is configured on the LightStream 1010 ATM switches because of their central position. When the ELAN is operating in steady state, there is no overhead CPU utilization on the LECS. The LECS is only contacted when a new LEC joins an ELAN. For this reason, there are few performance considerations associated with placing the primary and backup LECS. A good choice for a primary LECS would be a Cisco 7500 router with direct ATM attachment to the backbone, because it would not be affected by ATM signaling traffic in the event of a LES/BUS failover.

Figure 16 shows an alternative implementation of the LANE core using the Catalyst 5500 switch. Here the Catalyst 5500 operates as an ATM switch with the addition of the ATM Switch Processor (ASP) card. It is configured as a LEC with the addition of the OC-12 LANE/MPOA card. It is configured as an Ethernet frame switch with the addition of the appropriate Ethernet or Fast Ethernet line cards. The server farm is implemented with the addition of multilayer switching. The Catalyst 5500 combines the functionality of the LightStream 1010 and the Catalyst 5000 in a single chassis. See Appendix A for an example of configuring an ATM backbone with Catalyst 5500 multilayer switches.

Figure 16 ATM LANE Core with Catalyst 5500 Switches
[Figure: domains A, B, and C with distribution-layer pairs uplinked over OC-3 or OC-12 to two Catalyst 5500 switches, one holding the primary and the other the backup LES/BUS; the Catalyst 5500s combine the ATM core and the server distribution, with enterprise servers, one Fast EtherChannel-attached, directly behind them.]

IP Multicast
Applications based on IP multicast represent a small but rapidly growing component of corporate intranets. Applications such as IPTV, Microsoft NetShow, and NetMeeting are being tried and deployed. There are several aspects to handling multicasts effectively:
• Multicast routing, Protocol Independent Multicast (PIM) dense mode and sparse mode
• Clients and servers join multicast groups with Internet Group Management Protocol (IGMP)
• Pruning multicast trees with Cisco Group Multicast Protocol (CGMP) or IGMP snooping
• Switch and router multicast performance
• Multicast policy

The preferred routing protocol for multicast is PIM. PIM sparse mode is described in RFC 2117, and PIM dense mode is on the standards track. PIM is being widely deployed in the Internet as well as in corporate intranets. As its name suggests, PIM works with various unicast routing protocols such as OSPF and Enhanced IGRP. PIM routers may also be required to interact with the Distance Vector Multicast Routing Protocol (DVMRP). DVMRP is a legacy multicast routing protocol deployed in the Internet multicast backbone (MBONE). Currently 50 percent of the MBONE has converted to PIM, and it is expected that PIM will replace DVMRP over time.

PIM can operate in dense mode or in sparse mode. Dense-mode operation is used for an application like IPTV where there is a multicast server with many clients throughout the campus. Sparse-mode operation is used for workgroup applications like NetMeeting. In either case, PIM builds efficient multicast trees that minimize the amount of traffic on the network. This is particularly important for high-bandwidth applications such as real-time video. In most environments, PIM is configured as sparse-dense and automatically uses either sparse mode or dense mode as required.

IGMP is used by multicast clients and servers to join or advertise multicast groups. The local gateway router makes a multicast available on subnets with active listeners, but blocks the traffic if no listeners are present. CGMP extends multicast pruning down to the Catalyst switch. A Cisco router sends out a CGMP message to advertise all the host MAC addresses that belong to a multicast group. Catalyst switches receive the CGMP message and forward multicast traffic only to ports with the specific MAC address in the forwarding table. This blocks multicast packets from all switch ports that don't have group members downstream.

The Catalyst 5000 family of switches has an architecture that forwards multicast streams to one port, many ports, or all ports with no performance penalty. Catalyst switches will support one or many multicast groups operating at wire speed concurrently.

One way to implement multicast policy is to place multicast servers in a server farm behind multilayer Catalyst Switch X as shown in Figure 17. Switch X acts as a multicast firewall that enforces rate limiting and controls access to multicast sessions. To further isolate multicast traffic, create a separate multicast VLAN/subnet in the core. The multicast VLAN in the core could be a logical partition of existing core switches or a dedicated switch if traffic is very high. Switch X is a logical place to implement the PIM rendezvous point. The rendezvous point is like the root of the multicast tree.

Figure 17 Multicast Firewall and Backbone
[Figure: clients for multicasts A, B, and C behind their distribution-layer switches; a multicast VLAN 100, IP subnet 131.108.1.0, and a unicast VLAN 200, IP subnet 131.108.2.0, in the core; server distribution switch X acts as the multicast firewall in front of separate multicast and unicast server farms.]
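A sketch of the corresponding multicast configuration on a distribution-layer RSM follows; the rendezvous point address is illustrative.

    ip multicast-routing
    !
    interface Vlan10
     ! Sparse-dense mode lets PIM choose the mode per group
     ip pim sparse-dense-mode
     ! Send CGMP messages so the Catalyst prunes ports per group
     ip cgmp
    !
    ! Rendezvous point for sparse-mode groups, placed at Switch X
    ip pim rp-address 131.108.1.10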

Scaling Considerations
The multilayer design model is inherently scalable. Layer 3 switching performance scales because it is distributed. Backbone performance scales as you add more links or more switches. The individual switch domains or buildings scale to over 1000 client devices with two distribution-layer switches in a typical redundant configuration. More building blocks or server blocks can be added to the campus without changing the design model. Because the multilayer design model is highly structured and deterministic, it is also scalable from a management and administration perspective.
In all the multilayer designs discussed, we have avoided STP loops in the backbone. STP takes 40 to 50 seconds to converge and does not support load balancing across multiple paths. Within Ethernet backbones, no loops are configured. For ATM backbones, PNNI handles load balancing. In all cases, intelligent Layer 3 routing protocols such as OSPF and Enhanced IGRP handle path determination and load balancing over multiple paths in the backbone.
OSPF overhead in the backbone rises linearly as the number of distribution-layer switches rises. This is because OSPF elects one designated router and one backup designated router to peer with all the other Layer 3 switches in the distribution layer. If two VLANs or ELANs are created in the backbone, a designated router and a backup are elected for each. So the OSPF routing traffic and CPU overhead increase as the number of backbone VLANs or ELANs increases. For this reason, it is recommended to keep the number of VLANs or ELANs in the backbone small. For large ATM/LANE backbones, it is recommended to create two ELANs in the backbone, as was discussed in the "ATM/LANE Backbone" section earlier in this paper.
Another important consideration for OSPF scalability is summarization. For a large campus, make each building an OSPF area and make the distribution-layer switches area border routers (ABRs). Pick all the subnets within the building from a contiguous block of addresses and summarize with a single summary advertisement at the ABRs. This reduces the amount of routing information throughout the campus and increases the stability of the routing table. Enhanced IGRP can be configured for summarization in the same way. (A configuration sketch appears at the end of this section.)
Not all routing protocols are created equal, however. AppleTalk Routing Table Maintenance Protocol (RTMP), Novell Server Advertisement Protocol (SAP), and Novell Routing Information Protocol (RIP) are protocols with overhead that increases as the square of the number of peers. For example, say there are 12 distribution-layer switches attached to the backbone and running Novell SAP. If there are 100 SAP services being advertised throughout the campus, each distribution switch injects 100/7 = 15 SAP packets into the backbone every 60 seconds (each SAP update packet carries at most seven service entries). All 12 distribution-layer switches receive and process 12 * 15 = 180 SAP packets every 60 seconds. The Cisco IOS software provides features such as SAP filtering to contain SAP advertisements from local servers where appropriate. A total of 180 packets per minute is a reasonable number, but consider what happens with 100 distribution-layer switches advertising 1000 SAP services: each switch must then process roughly 100 * (1000/7) = 14,300 SAP packets every 60 seconds.
Figure 18 shows a design for a large hierarchical, redundant ATM campus backbone. The ATM core designated B consists of eight LightStream 1010 switches with a partial mesh of OC-12 trunks. Domain C consists of three pairs of LightStream 1010 switches. Domain C can be configured with an ATM prefix address that is summarized where it connects to the core B. On this scale, manual ATM address summarization would have little benefit. The default summarization would have just 26 routing entries corresponding to the 26 switches in Figure 18. In domain A, pairs of distribution-layer switches attach to the ATM fabric with OC-3 LANE. A server farm behind Catalyst switches X and Y attaches directly to the core with OC-12 LANE/MPOA cards.
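To make the summarization recommendation above concrete, here is a minimal IOS sketch for a distribution-layer ABR. The OSPF process number matches the examples in Appendix A, but the area number and the building address block 131.108.16.0/20 are illustrative assumptions, not values from this paper.
router ospf 777
! subnets for this building are drawn from one contiguous block, placed in area 1
network 131.108.16.0 0.0.15.255 area 1
! the backbone subnet remains in area 0
network 131.108.99.0 0.0.0.255 area 0
! advertise the whole building as a single summary at the ABR
area 1 range 131.108.16.0 255.255.240.0
Enhanced IGRP achieves the same effect with the ip summary-address eigrp interface command.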

Migration Strategies
The multilayer design model describes the logical structure of the campus. The addressing and Layer 3 design are independent of the choice of media. The logical design principles are the same whether implemented with Ethernet, Token Ring, FDDI, or ATM. This is not always true in the case of bridged protocols such as NetBIOS and Systems Network Architecture (SNA), which are media dependent. In particular, Token Ring applications with frame sizes larger than the 1500 bytes allowed by Ethernet need to be considered.
Figure 19 shows a multilayer campus with a parallel FDDI backbone. The FDDI backbone could be bridged to the switched Fast Ethernet backbone with translational bridging implemented at the distribution layer. Alternatively, the FDDI backbone could be configured as a separate logical network. There are several possible reasons for keeping an existing FDDI backbone in place. FDDI supports 4500-byte frames, while Ethernet frames can be no larger than 1500 bytes. This is important for bridged protocols that originate on Token Ring end systems that generate 4500-byte frames. Another reason to maintain an FDDI backbone is for enterprise servers that have FDDI network interface cards.

Figure 18 Hierarchical Redundant ATM Campus Backbone

Data-link switching plus (DLSw+) is Cisco's implementation of standard DLSw. SNA frames from native SNA client B are encapsulated in TCP/IP by a router or a distribution-layer switch in the multilayer model. A distribution switch de-encapsulates the SNA traffic out to a Token Ring-attached front-end processor (FEP) at a data center. Multilayer switches can be attached to Token Ring with the Versatile Interface Processor (VIP) card and the Token Ring port adapter (PA). (A configuration sketch follows Figure 19.)
Security in the Multilayer Model
Access control lists are supported by multilayer switching with no performance degradation. Because all traffic passes through the distribution layer, this is the best place to implement policy with access control lists. These lists can also be used in the control plane of the network to restrict access to the switches themselves. In addition, the TACACS+ and RADIUS protocols provide centralized access control to switches. The Cisco IOS software also provides multiple levels of authorization with password encryption. Network managers can be assigned to a particular level at which a specific set of commands are enabled.
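As an illustrative sketch of such a policy (the subnets and their roles are assumptions chosen to match the Appendix A addressing, not a configuration from this paper), an inbound access list on a distribution-layer RSM might allow a client VLAN to reach the campus while keeping it out of the switch-management subnet:
! deny the client subnet 131.108.10.0/24 access to the management subnet 131.108.1.0/24
access-list 110 deny ip 131.108.10.0 0.0.0.255 131.108.1.0 0.0.0.255
! permit everything else from the client subnet
access-list 110 permit ip 131.108.10.0 0.0.0.255 any
interface vlan 10
ip access-group 110 in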

Figure 19 FDDI and Token Ring Migration
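The DLSw+ sketch promised above: a minimal peer configuration, in which the peer IP addresses and ring numbers are illustrative assumptions rather than values from this paper. The distribution-layer RSM near SNA client B peers over TCP/IP with a data-center router attached to the Token Ring FEP.
! distribution-layer RSM (campus side)
source-bridge ring-group 100
dlsw local-peer peer-id 131.108.11.151
dlsw remote-peer 0 tcp 131.108.200.1
! data-center router (Token Ring FEP side)
source-bridge ring-group 200
dlsw local-peer peer-id 131.108.200.1
dlsw remote-peer 0 tcp 131.108.11.151
interface TokenRing0
source-bridge 5 1 200
source-bridge spanning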

Implementing Layer 2 switching at the access layer and in the server farm has immediate security benefits. With shared media, all packets are visible to all users on the logical network. It is possible for a user to capture clear-text passwords or files. On a switched network, conversations are only visible to the sender and receiver. And within a server farm, all server-to-server traffic is kept off the campus backbone.
WAN security is implemented in firewalls. A firewall consists of one or more routers and bastion host systems on a special network called a demilitarized zone (DMZ). Specialized Web caching servers and other firewall devices may attach to the DMZ. The inner firewall routers connect to the campus backbone in what can be considered a WAN distribution layer. Figure 20 shows a WAN distribution building block with firewall components.
Bridging in the Multilayer Model
For nonrouted protocols, bridging is configured. Bridging between access-layer VLANs and the backbone is handled by the RSM. Because each access-layer VLAN is running IEEE spanning tree, the RSM must not be configured with an IEEE bridge group. The effect of running IEEE bridging on the RSM is to collapse all the spanning trees of all the VLANs into a single spanning tree with a single root bridge. Configure the RSM with a DEC STP bridge group to keep all the IEEE spanning trees separate.
For a redundant bridging configuration as shown in Figure 7, run IOS Release 11.2(13)P or higher on all RSMs. IOS Release 11.2(13)P has a feature that allows the DEC bridge protocol data units (BPDUs) to pass between RSMs through the Catalyst 5000 switches. With older versions of the Cisco IOS software, the DEC bridges will not see each other and will not block redundant links in the topology. If running an older version of IOS software on the RSMs, ensure that only RSM A bridges between the backbone and even-numbered VLANs, and that only RSM B bridges between the backbone and odd-numbered VLANs.
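A minimal sketch of the DEC bridge group just described; the bridge-group number and the choice of VLAN interfaces are illustrative assumptions. This bridges nonrouted protocols between an access VLAN and the backbone VLAN while leaving each VLAN's IEEE spanning tree independent.
! DEC spanning tree on the RSM keeps the per-VLAN IEEE spanning trees separate
bridge 1 protocol dec
interface vlan 10
bridge-group 1
interface vlan 99
bridge-group 1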

Figure 20 WAN Distribution to the Internet

Advantages of the Multilayer Model
We have discussed several variations of the multilayer campus design model. Whether implemented with frame-switched Ethernet backbones or cell-switched ATM backbones, all share the same basic advantages. The model is highly deterministic, which makes it easy to troubleshoot as it scales. The modular building-block approach scales easily as new buildings or server farms are added to the campus. Intelligent Layer 3 routing protocols such as OSPF and Enhanced IGRP handle load balancing and fast convergence across the backbone. The logical structure and addressing of the hub and router model are preserved, which makes migration much easier. Many value-added services of the Cisco IOS software, such as server proxy, tunneling, and summarization, are implemented in the Catalyst multilayer switches at the distribution layer. Policy is also implemented with access lists at the distribution layer or at the server distribution switches.
Redundancy and fast convergence are provided by features such as UplinkFast and HSRP. Bandwidth scales from Fast Ethernet to Fast EtherChannel to Gigabit Ethernet without changing addressing or policy configuration. With the features of the Cisco IOS software, the multilayer model supports all common campus protocols including TCP/IP, AppleTalk, Novell IPX, DECnet, IBM SNA, NetBIOS, and many more. Many of the largest and most successful campus intranets are built with the multilayer model. It avoids all the scaling problems associated with flat bridged or switched designs. And lastly, the multilayer model with multilayer switching handles Layer 3 switching in hardware with no performance penalty compared with Layer 2 switching.
Appendix A: Implementing the Multilayer Model
Ethernet Backbone
This section shows how to configure the multilayer model with an Ethernet backbone. Figure 21 shows a small campus intranet. Two buildings are represented, corresponding to VTP domains North and South. The backbone is VTP domain Backbone. Within each VTP domain, at least one switch is configured as the VTP server. The VTP server keeps track of all the VLANs configured in a domain. Switch d1a is the VTP server for domain North. Switch d2a is the VTP server for domain South. Both ca and cb are VTP servers for domain Backbone. Both core switches are VTP servers, because we are not trunking VLAN 1 in the core. In fact there are no ISL trunks in the core. An access switch such as a1a is not a good choice for the VTP server, because not all VLANs in domain North appear on this switch. Switches not configured as VTP servers are configured in VTP transparent mode. Using transparent mode on access Switch a1a allows us to restrict the set of VLANs known to the switch.
Figure 21 Implementing the Multilayer Model with Ethernet Backbone (VTP domain North carries VLANs 1, 10, 11, 12, 13; VTP domain South carries VLANs 1, 20, 21, 22, 23; the backbone is VLAN 99)
Figure 22 shows VLAN 10 in detail. The VLAN trunks that carry VLAN 10 form a triangle. Switch d1a at the lower left is the root switch for VLAN 10. On Switch a1a, trunk 2/1 is forwarding with respect to VLAN 10, and trunk 2/2 is blocking. UplinkFast is enabled on Switch a1a. In addition to the three trunks, three ports attach to VLAN 10. PC A with IP address 131.108.10.1 attaches to Port 2/11 on Switch a1a. The two RSM modules r1a and r1b are depicted as routers that attach logically to the VLAN. RSM r1a is attached to Port 3/1 of Switch d1a, and RSM r1b is attached to Port 3/1 of Switch d1b. RSM r1a has IP address 131.108.10.151 on interface VLAN 10, but also acts as primary HSRP default gateway 131.108.10.100.
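The forwarding and gateway state just described can be checked from the consoles. A brief sketch of the relevant commands (CatOS on the switches, IOS on the RSMs); the annotations describe what to expect under the design above:
show spantree 10 (on a1a: trunk 2/1 forwarding and trunk 2/2 blocking for VLAN 10)
show trunk (on a1a: the VLANs carried on trunks 2/1 and 2/2)
show standby (on r1a: active HSRP state for the even-numbered VLAN interfaces)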

Figure 22 VLAN 10 Logical Topology within Multilayer Model
Figure 23 shows VLAN 11 in detail. Note that Switch d1b is the root for odd-numbered VLANs. On Switch a1a, trunk 2/1 is in blocking mode and trunk 2/2 is in forwarding mode. RSM r1b acts as the HSRP primary gateway 131.108.11.100 for VLAN 11, and r1a is the backup gateway.
Figure 23 VLAN 11 Logical Topology within Multilayer Model
Note that we have used a simple naming convention for switches. The first letter 'a' in 'a1a' refers to access; the letter 'd' in 'd1a' refers to distribution; and the letter 'c' in 'ca' refers to core. The first RSM is r1a within Switch d1a. Low IP addresses such as 131.108.10.1 represent hosts or clients, 131.108.10.20x addresses represent servers, and 131.108.10.10x addresses represent HSRP gateway routers. RSM host addresses other than for HSRP are the same for every VLAN, as follows:

RSM     Host Address
r1a     x.x.x.151
r1b     x.x.x.152
r2a     x.x.x.153
r2b     x.x.x.154
rca     x.x.x.155
rcb     x.x.x.156
rwan    x.x.x.157 (Cisco 7500 WAN router attached to the backbone)

Domain North has four subnets: 131.108.10.0, 131.108.11.0, 131.108.12.0, and 131.108.13.0. These correspond to VLANs 10, 11, 12, and 13. In addition, a management subnet 131.108.1.0 corresponds to VLAN 1 within the domain. VLAN 1 does not extend beyond the distribution-layer switches. Within domain South, VLAN 1 is also used, but it is a different subnet, 131.108.2.0. The management port SC0 of each switch is in VLAN 1 and is configured with an address on subnet 131.108.1.0 as follows:

Device   IP Address      Gateway Address
a1a      131.108.1.1     131.108.1.100
a1b      131.108.1.2     131.108.1.101
d1a      131.108.1.3     131.108.1.100
d1b      131.108.1.4     131.108.1.101
r1a      131.108.1.151   N/A (HSRP primary for 131.108.1.100)
r1b      131.108.1.152   N/A (HSRP backup for 131.108.1.100)

Domain South has four subnets: 131.108.20.0, 131.108.21.0, 131.108.22.0, and 131.108.23.0. These correspond to VLANs 20, 21, 22, and 23. In addition, a management subnet 131.108.2.0 corresponds to VLAN 1 within the domain. The management port SC0 of each switch is configured with an address on subnet 131.108.2.0 as follows:

Device   IP Address      Gateway Address
a2a      131.108.2.1     131.108.2.100
a2b      131.108.2.2     131.108.2.101
d2a      131.108.2.3     131.108.2.100
d2b      131.108.2.4     131.108.2.101
r2a      131.108.2.153   N/A (HSRP primary for 131.108.2.100)
r2b      131.108.2.154   N/A (HSRP backup for 131.108.2.100)

Domain Backbone has the subnet 131.108.99.0, which is VLAN 99. HSRP is not configured on subnet 99.0, because configuring "standby" on a VLAN interface disables Internet Control Message Protocol (ICMP) redirects. The gateway for ca and cb is configured to their own addresses with a default class B mask, so ca and cb will use proxy ARP to route to any other networks.

Device   IP Address       Gateway Address
ca       131.108.99.1     Proxy ARP gateway 131.108.99.1, mask 255.255.0.0
cb       131.108.99.2     Proxy ARP gateway 131.108.99.2, mask 255.255.0.0
r1a      131.108.99.151   N/A
r1b      131.108.99.152   N/A
r2a      131.108.99.153   N/A
r2b      131.108.99.154   N/A

The configuration for Switch a1a is shown below. Slot 2 has a 10/100 card, and Ports 2/1 and 2/2 are used to connect to Switches d1a and d1b respectively. The last command, set spantree uplinkfast enable, enables the fast STP failover feature.
set prompt a1a
set vtp mode transparent
set vtp domain North
set interface sc0 1 131.108.1.1 255.255.255.0
set ip route default 131.108.1.100 0
set trunk 2/1 on
set trunk 2/2 on
set vlan 10
set vlan 10 2/11 (assigns one host port in VLAN 10)
set vlan 11
set vlan 11 2/12 (assigns one host port in VLAN 11)
set spantree uplinkfast enable
The configuration for Switch d1a follows. Slot 2 has a 10/100 card, and Ports 2/1, 2/2, and 2/3 are used to connect to Switches a1a, a1b, and d1b respectively. This switch is the STP root for even VLANs 10 and 12. We remove VLANs 12, 13, and 99 from the trunk 2/1 to Switch a1a. We remove VLANs 10, 11, and 99 from the trunk 2/2 to Switch a1b. We remove VLAN 99 from trunk 2/3 to Switch d1b to eliminate spanning tree loops in the core. VLAN 1 cannot be removed from the trunks within a VTP domain. Switch d1a is the VTP server for domain North.
set prompt d1a
set vtp domain North
set vtp mode server
set interface sc0 1 131.108.1.3 255.255.255.0
set ip route default 131.108.1.100 0
set trunk 2/1 on
set trunk 2/2 on
set trunk 2/3 on
set vlan 10, 11, 12, 13, 99
set spantree root 10, 12
clear trunk 2/1 12,13,99
clear trunk 2/2 10,11,99
clear trunk 2/3 99

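The paper does not list the companion Switch d1b; the following sketch mirrors d1a with the even/odd roles reversed. The port assignments and the choice of VTP client mode are assumptions consistent with the design described above.
set prompt d1b
set vtp domain North
set vtp mode client (d1a is the VTP server for domain North)
set interface sc0 1 131.108.1.4 255.255.255.0
set ip route default 131.108.1.101 0
set trunk 2/1 on
set trunk 2/2 on
set trunk 2/3 on
set spantree root 11, 13 (root for the odd-numbered VLANs)
clear trunk 2/1 12,13,99 (trunk to a1a carries VLANs 1, 10, 11)
clear trunk 2/2 10,11,99 (trunk to a1b carries VLANs 1, 12, 13)
clear trunk 2/3 99 (keep the backbone VLAN off the intra-building trunk)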
The configuration for RSM r1a follows. This switch acts as HSRP primary for 131.108.1.100. This switch is also the HSRP primary gateway for even-numbered subnets 131.108.10.0 and 131.108.12.0 and the HSRP backup gateway for odd-numbered subnets 131.108.11.0 and 131.108.13.0.
hostname r1a
interface vlan 1
ip address 131.108.1.151 255.255.255.0
standby 1 ip 131.108.1.100
standby 1 priority 100
standby 1 preempt
interface vlan 10
ip address 131.108.10.151 255.255.255.0
standby 1 ip 131.108.10.100
standby 1 priority 100
standby 1 preempt
interface vlan 11
ip address 131.108.11.151 255.255.255.0
standby 1 ip 131.108.11.100
standby 1 priority 50
interface vlan 12
ip address 131.108.12.151 255.255.255.0
standby 1 ip 131.108.12.100
standby 1 priority 100
standby 1 preempt
interface vlan 13
ip address 131.108.13.151 255.255.255.0
standby 1 ip 131.108.13.100
standby 1 priority 50
interface vlan 99
ip address 131.108.99.151 255.255.255.0
router ospf 777
network 131.108.0.0 0.0.255.255 area 0
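The paper does not show the companion RSM r1b; as a sketch, it mirrors r1a with the HSRP priorities reversed, using the r1b addresses from the tables above.
hostname r1b
interface vlan 1
ip address 131.108.1.152 255.255.255.0
standby 1 ip 131.108.1.100
standby 1 priority 50
interface vlan 10
ip address 131.108.10.152 255.255.255.0
standby 1 ip 131.108.10.100
standby 1 priority 50
interface vlan 11
ip address 131.108.11.152 255.255.255.0
standby 1 ip 131.108.11.100
standby 1 priority 100
standby 1 preempt
(interfaces vlan 12 and vlan 13 follow the same even/odd pattern)
interface vlan 99
ip address 131.108.99.152 255.255.255.0
router ospf 777
network 131.108.0.0 0.0.255.255 area 0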

Implementing a Server Farm
Figure 24 shows enterprise servers attached to VLAN 100. We have added RSMs rca and rcb to core switches ca and cb. RSM rca is HSRP primary gateway 131.108.100.100, and RSM rcb is HSRP backup gateway for 131.108.100.100. RSM rcb is HSRP primary gateway for 131.108.100.101, and RSM rca is HSRP backup gateway for 131.108.100.101. Enterprise server 131.108.100.200 uses default gateway 131.108.100.100, and enterprise server 131.108.100.201 uses default gateway 131.108.100.101. This provides for load distribution of outbound packets from the server farm to the backbone.
Figure 24 Creating a Server Farm
Figure 25 shows core Switches ca and cb in more detail. A Fast EtherChannel VLAN 100 link connects ca and cb, providing a redundant Layer 2 path from enterprise servers to the HSRP primary gateways and backup gateways. This link also carries all server-to-server traffic.
Figure 25 Server Farm Detail

The configuration for Switch ca follows. Note that sc0 is given a default class B mask and its own address as gateway, per the proxy ARP scheme described above:
set prompt ca
set vtp domain Backbone
set vtp mode server
set interface sc0 99 131.108.99.1 255.255.0.0
set ip route default 131.108.99.1
set vlan 99 name 131.108.99.0
set vlan 100 name 131.108.100.0
set port channel 2/1-2 on (Fast EtherChannel VLAN 99)
set port channel 2/3-4 on (Fast EtherChannel VLAN 100)
set vlan 99 2/1-2
set vlan 100 2/12
set trunk 1/1 off
set trunk 1/2 off
set trunk 2/1 off (FEC is not an ISL trunk)
set trunk 2/3 off (FEC is not an ISL trunk)
The configuration for RSM rca follows:
hostname rca
interface vlan 99
ip address 131.108.99.155 255.255.255.0
interface vlan 100
ip address 131.108.100.155 255.255.255.0
standby 1 ip 131.108.100.100
standby 1 priority 100
standby 1 preempt
standby 2 ip 131.108.100.101
standby 2 priority 50
router ospf 777
network 131.108.0.0 0.0.255.255 area 0
ATM LANE Backbone
Figure 26 shows the multilayer model with an ATM/LANE core. Catalyst 5500 Switches ca and cb are used to provide ATM switching in the core and Ethernet switching in the server farm distribution. On all distribution-layer and core-layer switches, a LANE card is used to connect Ethernet VLAN 98 to ATM ELAN atmbackbone. The LANE card on Switch ca is the LES/BUS primary for atmbackbone, and the LANE card on Switch cb is the LES/BUS backup for atmbackbone.
Figure 26 Implementing the Multilayer Model—ATM/LANE Core
Only one ELAN, atmbackbone (subnet 131.108.98.0), is provisioned in the core. This simplifies the core and reduces the number of virtual circuits required. In a large ATM campus backbone, two ELANs would be used for redundancy. Nine LANE clients attach to atmbackbone in Figure 27. Each LANE card on switches d1a, d1b, d2a, d2b, ca, and cb has one LEC associated with VLAN 98. Each ATM switch has a LEC associated with the management port. Router rwan has a native ATM interface, and therefore has a LEC.

The number of virtual circuits in the backbone is given by the formula:
6n ≤ x ≤ 6n + n(n-1)/2
where n is the number of LECs on atmbackbone. With no traffic, each LEC has exactly six VCs. For atmbackbone with nine LECs, we get:
54 ≤ x ≤ 90
With no traffic, ELAN atmbackbone has a total of 54 VCs. If every LEC has open connections to every other LEC, atmbackbone has 90 VCs. This number is still relatively small, and the LES/BUS for atmbackbone will not be stressed. In the event that the primary LES/BUS is disconnected, SSRP failover is just a few seconds. To observe the number of open VCs on a LEC, use the command show atm vc.
Rwan#show atm vc
Interface  VCD  VPI  VCI  Type  AAL/Encapsulation  Peak Kbps  Avg. Kbps  Burst Cells  Status
ATM0/0     1    0    5    PVC   AAL5-SAAL          155000     155000     96           ACTIVE
ATM0/0     2    0    16   PVC   AAL5-ILMI          155000     155000     96           ACTIVE
ATM0/0.2   4    0    33   SVC   LANE-LEC           155000     155000     32           ACTIVE
ATM0/0.2   5    0    34   MSVC  LANE-LEC           155000     155000     32           ACTIVE
ATM0/0.2   6    0    35   SVC   LANE-LEC           155000     155000     32           ACTIVE
ATM0/0.2   7    0    36   MSVC  LANE-LEC           155000     155000     32           ACTIVE
ATM0/0.2   79   0    127  SVC   LANE-DATA          155000     155000     32           ACTIVE
32 ACTIVE

Figure 27 Management Subnet in the ATM/LANE Core

This output shows the six default VCs and one open data VC, which is because of the current Telnet session to rwan. Permanent virtual circuit (PVC) 0/5 is SVC signaling, and PVC 0/16 is Interim Local Management Interface (ILMI) signaling. One open SVC and one open MSVC are to the LES; one open SVC and one open MSVC are to the BUS.
The configuration of access-layer switches, distribution-layer switches, and RSMs is basically the same as described earlier. When configuring the LANE components, it is good to observe conventions in order to keep everything straight. The following is the ATM subinterface convention used for LANE cards and other ATM interfaces:

Subinterface   Association
n.0            Reserved for LECS if required
n.1            Default ELAN (not used in this configuration)
n.2            atmbackbone ELAN = VLAN 98 = subnet 131.108.98.0
n.3            Not used

It is also important to keep track of the IP addresses of management interfaces on VLAN 98. Here aspca is the name of the LEC on port 13/0/0.2 of the ASP on Switch ca. Port 13/0/0 is the internal management interface of the ATM switch.

Device   IP Address
r1a      131.108.98.151
r1b      131.108.98.152
r2a      131.108.98.153
r2b      131.108.98.154
rca      131.108.98.155
rcb      131.108.98.156
rwan     131.108.98.157
aspca    131.108.98.171 (active LECS)
aspcb    131.108.98.172 (standby LECS)
ca       131.108.98.1
cb       131.108.98.2
laneca   n/a (active LES/BUS)
lanecb   n/a (standby LES/BUS)

On the Supervisor Card of Switch ca, configure Ethernet VLAN 98. We are not using VLAN trunking or VTP in the core, so VLAN 1 is not used.
set prompt ca
set vtp domain backbone
set vtp mode server
set vlan 1 name default
set vlan 98 name atmbackbone
set vlan 98 8/1-12 (attach servers and Ethernet13/0/0 from aspca)
set interface sc0 98 131.108.98.1 255.255.0.0 (default class B mask)
set ip route default 131.108.98.1 (use proxy ARP)
Perform show lane default to determine the ATM network service access point (NSAP) address of the primary LES on laneca and the backup LES on lanecb. The ATM interface of the LANE card is first physically connected to an ATM switch port to allow ILMI to determine the 20-byte NSAP address. The active physical interface PHY A is reflected in the 13-byte prefix that is derived from the ATM switch. PHY A of laneca is connected to aspca, and PHY A of lanecb is connected to aspcb.
laneca>show lane default
interface ATM0:
LANE Client: 47.0091810000000010F6737401.0010F6737020.**
LANE Server: 47.0091810000000010F6737401.0010F6737021.**
LANE Bus: 47.0091810000000010F6737401.0010F6737022.**
LANE Config Server: 47.0091810000000010F6737401.0010F6737023.00
note: ** is the subinterface number byte in hex
lanecb>show lane default
interface ATM0:
LANE Client: 47.0091810000000010F6756301.0010F6755F20.**
LANE Server: 47.0091810000000010F6756301.0010F6755F21.**
LANE Bus: 47.0091810000000010F6756301.0010F6755F22.**
LANE Config Server: 47.0091810000000010F6756301.0010F6755F23.00
note: ** is the subinterface number byte in hex

Perform show lane default on aspca and aspcb to determine the primary and backup LECS addresses.
aspca#show lane default
interface ATM13/0/0:
LANE Client: 47.0091810000000010F6737401.0010F6737402.**
LANE Server: 47.0091810000000010F6737401.0010F6737403.**
LANE Bus: 47.0091810000000010F6737401.0010F6737404.**
LANE Config Server: 47.0091810000000010F6737401.0010F6737405.00
note: ** is the subinterface number byte in hex
aspcb#show lane default
interface ATM13/0/0:
LANE Client: 47.0091810000000010F6756301.0010F6756302.**
LANE Server: 47.0091810000000010F6756301.0010F6756303.**
LANE Bus: 47.0091810000000010F6756301.0010F6756304.**
LANE Config Server: 47.0091810000000010F6756301.0010F6756305.00
note: ** is the subinterface number byte in hex
Configure aspca as primary LECS and aspcb as backup LECS. LANE card laneca on core switch ca will be the primary LES/BUS for ELAN atmbackbone. Because both laneca and lanecb are dual-PHY cards, configure the active physical interface first. Set up the LECS database in the following sequence:
1 PHY A for laneca (active physical interface)
2 PHY A for lanecb (active physical interface)
3 PHY B for lanecb (standby physical interface)
4 PHY B for laneca (standby physical interface)
In the configuration shown, we connected Ethernet 13/0/0 to a port in VLAN 98 as a secondary out-of-band management interface for testing purposes. Since this is a switch, it needs a gateway router. Since HSRP does not work into the backbone, use proxy ARP: configure the gateway router to 131.108.98.171, which is the management interface of the switch, and configure the subnet mask to 255.255.0.0.
atm lecs-address-default 47.0091.8100.0000.0010.f673.7401.0010.f673.7405.00 1
atm lecs-address-default 47.0091.8100.0000.0010.f675.6301.0010.f675.6305.00 2
!
lane database atmbackbone
name atmbackbone server-atm-address 47.0091810000000010F6737401.0010F6737021.02
name atmbackbone server-atm-address 47.0091810000000010F6756301.0010F6755F21.02
name atmbackbone server-atm-address 47.0091810000000010F6737401.0010F6755F21.02
name atmbackbone server-atm-address 47.0091810000000010F6756301.0010F6737021.02
default-name atmbackbone
!
interface ATM13/0/0
no ip address
lane config auto-config-atm-address
lane config database atmbackbone
!
interface ATM13/0/0.2 multipoint
ip address 131.108.98.171 255.255.255.0
lane client ethernet atmbackbone
!
interface Ethernet13/0/0
ip address 131.108.98.181 255.255.255.0
!
ip default-gateway 131.108.98.171
Configure LANE cards laneca and lanecb to be the LES/BUS for atmbackbone. Also configure a LEC to tie Ethernet VLAN 98 to ATM ELAN atmbackbone.
interface ATM0
atm preferred phy A
atm pvc 1 0 5 qsaal
atm pvc 2 0 16 ilmi
!
interface ATM0.2 multipoint
lane server-bus ethernet atmbackbone
lane client ethernet 98 atmbackbone
into the backbone, use proxy ARP. Configure the gateway lane client ethernet 98 atmbackbone

Configure the LANE cards lane1a, lane1b, lane2a, and lane2b as follows. This ties Ethernet VLAN 98 to ATM ELAN atmbackbone at each switch.
interface ATM0
atm preferred phy A
atm pvc 1 0 5 qsaal
atm pvc 2 0 16 ilmi
!
interface ATM0.2 multipoint
lane client ethernet 98 atmbackbone
Figure 28 shows the physical connection of the primary and backup ATM connections from each port of the dual-PHY LANE cards to a port on Switches aspca and aspcb.
Figure 28 ATM Connectivity in the ATM/LANE Core
To see if everything is working properly, enter show lane on a LEC.
Rwan#show lane
LE Client ATM0/0.2 ELAN name: atmbackbone Admin: up State: operational
Client ID: 9 LEC up for 19 hours 16 minutes 16 seconds
Join Attempt: 2
HW Address: 0010.a64e.b800 Type: ethernet Max Frame Size: 1516
ATM Address: 47.0091810000000010F6756301.0010A64EB800.02

VCD  rxFrames  txFrames  Type        ATM Address
0    0         0         configure   47.0091810000000010F6737401.0010F6737023.00
4    1         290       direct      47.0091810000000010F6737401.0010F6737021.02
5    46581     0         distribute  47.0091810000000010F6737401.0010F6737021.02
6    0         16254     send        47.0091810000000010F6737401.0010F6737022.02
7    156271    0         forward     47.0091810000000010F6737401.0010F6737022.02

If you have a parallel Ethernet backbone as a backup, use a different VLAN number and a different IP subnet. Avoid using the same VLAN for the Ethernet backbone and the LANE backbone, because this causes a loop. Thus the Ethernet backbone VLAN 99 with subnet 131.108.99.0 described earlier is compatible with ATM ELAN atmbackbone with subnet 131.108.98.0 described in this section.

