
The Cisco Three-Layered Hierarchical Model

By SemSim.com

Cisco has defined a hierarchical model known as the hierarchical internetworking

model. This model simplifies the task of building a reliable, scalable, and less
expensive hierarchical internetwork because rather than focusing on packet
construction, it focuses on the three functional areas, or layers, of your network:

Core layer: This layer is considered the backbone of the network and includes the
high-end switches and high-speed cables such as fiber cables. This layer of the
network does not route traffic at the LAN level. In addition, no packet manipulation
is done by devices in this layer. Rather, this layer is concerned with speed and
ensures reliable delivery of packets.

Distribution layer: This layer includes LAN-based routers and layer 3 switches. This
layer ensures that packets are properly routed between subnets and VLANs in your
enterprise. This layer is also called the Workgroup layer.

Access layer: This layer includes hubs and switches. It is also called the
desktop layer because it focuses on connecting client nodes, such as workstations,
to the network. This layer ensures that packets are delivered to end-user computers.

Figure INT.2.1 displays the three layers of the Cisco hierarchical model.

When you implement these layers, each layer might comprise more than one device,
or a single device might function across multiple layers. The benefits of the Cisco
hierarchical model include:
• High Performance: You can design high performance networks, where only
certain layers are susceptible to congestion.
• Efficient management & troubleshooting: Allows you to efficiently organize
network management and isolate causes of network trouble.
• Policy creation: You can easily create policies and specify filters and rules.
• Scalability: You can grow the network easily by dividing your network into
functional areas.
• Behavior prediction: When planning or managing a network, the model allows
you to determine what will happen to the network when new stresses are placed
on it.

Core Layer
The core layer is responsible for fast and reliable transportation of data across a
network. The core layer is often known as the backbone or foundation network
because all other layers rely upon it. Its purpose is to reduce the latency of
packet delivery. The factors to consider when choosing devices for the core
layer are:

• High data transfer rate: Speed is important at the core layer. One way that
core networks enable high data transfer rates is through load sharing, where
traffic can travel through multiple network connections.
• Low latency: The core layer typically uses high-speed, low-latency circuits
that only forward packets and do not enforce policy.
• High reliability: Multiple data paths ensure high network fault tolerance; if one
path experiences a problem, then the device can quickly discover a new path.
At the core layer, efficiency is the key term. Fewer and faster systems create a more
efficient backbone. Various types of equipment are available for the core layer.
Examples of core layer Cisco equipment and services include:

Cisco 7000, 7200, 7500, and 12000 series routers (for WAN use)
Catalyst 6000, 5000, and 4000 series switches (for LAN use)
T-1 and E-1 lines, Frame Relay connections, ATM networks, and Switched
Multimegabit Data Service (SMDS)

Distribution Layer
The distribution layer is responsible for routing. It also provides policy-based network
connectivity, including:

• Packet filtering (firewalling): Processes packets and regulates their
transmission based on source and destination information, creating
network borders.
• QoS: The router or layer 3 switch can read packets and prioritize delivery,
based on policies you set.
• Access Layer Aggregation Point: The layer serves as the aggregation point for
the desktop layer switches.
• Broadcast and Multicast Control: The layer serves as the boundary for
broadcast and multicast domains.
• Application Gateways: The layer allows you to create protocol gateways to
and from different network architectures.
• Queuing and Packet Manipulation: The distribution layer also performs
queuing and provides packet manipulation of the network traffic.

It is at this layer that you begin to exert control over network transmissions,
including what comes into and what goes out of the network. You will also limit and
create broadcast domains, create virtual LANs if necessary, and conduct various
management tasks, including obtaining route summaries. In a route summary,
routes to many subnets are consolidated into one summary advertisement toward
the core network. In Cisco routers, the command to view a routing summary is:

show ip route summary

You can practice viewing routing information using a free CCNA exam router
simulator available from SemSim.com. You can also determine how routers update
each other’s routing tables by choosing specific routing protocols.
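The summarization idea can be illustrated with Python's standard ipaddress module (a hypothetical sketch, not a Cisco tool): four contiguous /24 subnets collapse into a single /22 summary that a distribution router could advertise toward the core. The 10.1.x.0 addressing is made up for illustration.

```python
# Hypothetical sketch of route summarization using Python's standard
# ipaddress module: four contiguous /24 subnets collapse into a single
# /22 summary route.
import ipaddress

subnets = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('10.1.0.0/22')]
```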

Examples of Cisco-specific distribution layer equipment include the 2600, 4000,
and 4500 series routers.

Access Layer
The access layer contains devices that allow workgroups and users to use the
services provided by the distribution and core layers. In the access layer, you have
the ability to expand or contract collision domains using a repeater, hub, or standard
switch. At the access layer, a switch is not a high-powered device like those found
at the core layer; rather, it is an advanced version of a hub.

A collision domain describes a portion of an Ethernet network, at layer 1 of the OSI
model, in which any communication sent by a node can be sensed by every other node
on that segment. This is different from a broadcast domain, which describes any part
of a network, at layer 2 or 3 of the OSI model, in which a node can broadcast to every
other node.

At the access layer, you can:

• Enable MAC address filtering: A switch can be programmed to allow only
certain systems to access the connected LANs.
• Create separate collision domains: A switch can create a separate collision
domain for each connected node to improve performance.
• Share bandwidth: You can allow the same network connection to handle all
traffic.
• Handle switched bandwidth: You can move data from one network to another to
perform load balancing.
OSI Model:
The OSI model is a layered, conceptual standard used to define standards that
promote multi-vendor integration, maintain consistent interfaces, and isolate
implementation changes to a single layer. It is NOT application- or
protocol-specific. In order to pass any Cisco exam, you need to know
the OSI model inside and out.

The OSI Model consists of 7 layers.

For each layer, the table below gives a description, typical devices, and example protocols.

Layer: Application
Description: Provides network access for applications, flow control, and error recovery. Provides communications services to applications by identifying and establishing the availability of other computers, and determines whether sufficient resources exist for communication.
Device: Gateway
Protocols: SNMP, Telnet

Layer: Presentation
Description: Performs protocol conversion, encryption, and data compression.
Device: Gateway and redirectors
Protocols: NCP, AFP, TDI

Layer: Session
Description: Allows two applications to communicate over a network by opening a session and synchronizing the involved computers. Handles connection establishment, data transfer, and connection release.
Device: Gateway
Protocols: NetBIOS

Layer: Transport
Description: Repackages messages into smaller formats and provides error-free delivery and error-handling functions.
Device: Gateway
Protocols: SPX

Layer: Network
Description: Handles addressing, translates logical addresses and names to physical addresses, and performs routing and traffic management.
Device: Router and brouter
Protocols: NWLink

Layer: Data Link
Description: Packages raw bits into frames, making them transmittable across a network link, and adds a cyclic redundancy check (CRC). It consists of the LLC sublayer and the MAC sublayer. The MAC sublayer is important to remember, as it is responsible for appending the MAC address of the next hop to the frame header. The LLC sublayer, in contrast, uses Destination Service Access Points and Source Service Access Points to create links for the MAC sublayer.
Device: Bridge and switch
Protocols: None

Layer: Physical
Description: Works with the physical media for transmitting and receiving data bits via particular encoding schemes. Also includes specifications for certain mechanical connection features, such as the adapter connector.
Device: Multiplexer and repeater
Protocols: None
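The CRC mentioned for the data link layer can be illustrated in Python: zlib.crc32 computes a 32-bit check value from the same CRC-32 family used for the Ethernet frame check sequence. This is a sketch of the idea, not Ethernet's exact framing.

```python
import zlib

# The data link layer appends a CRC so the receiver can detect
# transmission errors. zlib.crc32 computes a 32-bit CRC; a sketch:
payload = b"example frame payload"
fcs = zlib.crc32(payload)              # sender computes the check value

# The receiver recomputes the CRC; a mismatch means corruption.
assert zlib.crc32(payload) == fcs
assert zlib.crc32(payload + b"!") != fcs   # a changed byte is detected
```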

Here is an easy way to memorize the order of the layers:

All People Seem To Need Data Processing. The first letter of each word
corresponds to the first letter of one of the layers. It is a little corny, but it works.

The table above mentions the term "MAC address". A MAC address is a 48-bit
address that uniquely identifies a device on the network. An address such as
00-00-12-33-FA-BC is written as 12 hexadecimal digits. The first 6 digits identify
the manufacturer, while the remaining digits identify the host itself. The ARP
protocol is used to determine the IP-to-MAC mapping, and of course MAC addresses
cannot be duplicated on the network or problems will occur. For more information
about ARP and related protocols, read Guide To ARP, IARP, RARP, and
Proxy ARP.
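The manufacturer/host split described above can be sketched with a short, hypothetical Python helper (the address is the made-up example from the text, not a real vendor assignment):

```python
# Splitting a MAC address into its manufacturer (OUI) half and host half.
# split_mac is an illustrative helper, not a standard library function.
def split_mac(mac: str):
    digits = mac.replace("-", "").replace(":", "").upper()
    assert len(digits) == 12, "a MAC address is 12 hex digits (48 bits)"
    return digits[:6], digits[6:]

oui, host = split_mac("00-00-12-33-FA-BC")
print(oui, host)  # 000012 33FABC
```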

Data encapsulation takes place as data moves down the OSI model: the information
of one protocol is wrapped in the data section of another protocol. The process can
be broken down into the following steps:

User information -> data -> segments -> packets/datagrams -> frames -> bits.

When discussing the OSI model, it is important to keep in mind the differences
between "connection-oriented" and "connectionless" communications. A
connection-oriented communication has the following characteristics:

A session is guaranteed.
Acknowledgements are issued and received at the transport layer; if the
sender does not receive an acknowledgement before the timer expires, the
packet is retransmitted.
The phases of a connection-oriented service are call setup, data transfer, and
call termination.
All traffic must travel along the same static path.
A failure along the static communication path can fail the connection.
A guaranteed rate of throughput occupies resources without the flexibility of
dynamic allocation.
Reliable = SLOW (this is generally the trade-off in networking).

In contrast, a connectionless communication has the following characteristics:

Often used for voice and video applications.

No guarantees or acknowledgements.
Dynamic path selection.
Dynamic bandwidth allocation.
Unreliable = FAST.

(Note: Connectionless communication does have some reliability, provided by
upper-layer protocols.)

LAN Design:
When we talk about a LAN, Ethernet is the most popular physical layer LAN
technology today. Its standard is defined by the Institute of Electrical and
Electronics Engineers as IEEE Standard 802.3, but it was originally created by
Digital, Intel, and Xerox (DIX). According to IEEE, the information for configuring
an Ethernet, as well as how elements in an Ethernet network interact with one
another, is clearly defined in 802.3.

For half-duplex Ethernet 10BaseT topologies, data transmission occurs in one
direction at a time, leading to frequent collisions and data retransmission. In
contrast, full-duplex devices use separate circuits for transmitting and receiving
data, and as a result collisions are largely avoided. A collision occurs when two
nodes try to send data at the same time. On an Ethernet network, a node stops
sending when it detects a collision, transmits a jam signal so other stations know
a collision occurred, and then waits a random amount of time before attempting to
resend. Also, with full-duplex transmission the available bandwidth is effectively
doubled, as both directions are used simultaneously. You MUST remember: to enjoy
full-duplex transmission, we need a switch port, not a hub, and NICs that are
capable of handling full duplex. Ethernet's media access control method is called
Carrier Sense Multiple Access with Collision Detection (CSMA/CD). Because of
Ethernet's collision habits, it is also known as a "best effort delivery system."
An Ethernet frame cannot carry more than 1518 bytes; anything over that is broken
down into smaller, "travel size" packets.
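The random wait after a collision follows truncated binary exponential backoff, the scheme standard Ethernet uses: after the n-th collision, a station waits a random number of slot times from 0 to 2^min(n, 10) - 1. A minimal Python sketch:

```python
import random

# Sketch of truncated binary exponential backoff: after the n-th
# collision a station waits a random number of slot times chosen from
# 0 .. 2^min(n, 10) - 1 before attempting to retransmit.
def backoff_slots(collision_count: int) -> int:
    k = min(collision_count, 10)
    return random.randrange(2 ** k)

# After a 3rd collision the station waits 0-7 slot times.
print(backoff_slots(3))
```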


Fast Ethernet
For networks that need higher transmission speeds, there is the Fast Ethernet
standard, IEEE 802.3u, which raises the Ethernet speed limit to 100 Mbps. Of
course, new cabling is needed to support this speed: in a 10BaseT network we use
Category 3 cable, but a 100BaseT network needs Category 5 cable. The three Fast
Ethernet standards are 100BASE-TX for use with Category 5 UTP cable, 100BASE-FX
for use with fiber-optic cable, and 100BASE-T4, which utilizes two extra wire pairs
for use with Category 3 UTP cable.

Gigabit Ethernet
Gigabit Ethernet is an emerging technology that provides transmission speeds of
1000 Mbps. It is defined by the IEEE 802.3z standard (1000BASE-X). Just like all
other 802.3 transmission types, it uses the Ethernet frame format, full-duplex
operation, and the same media access control technology.

Token Ring
Token Ring is an older standard that is not very widely used anymore, as most
networks have migrated to some form of Ethernet or another, more advanced
technology. Ring topologies can have transmission rates of either 4 or 16 Mbps.
Token passing is the access method used by token ring networks, whereby a 3-byte
frame called a token is passed around the network. A computer that wishes to
transmit must wait until it can take control of the token, allowing only one
computer to transmit at a time. This method of communication aims to prevent
collisions. Token Ring networks use multistation access units (MSAUs) in place of
the hubs found on an Ethernet network. For extensive information on Token Ring,
visit Cisco's website.

Network Devices:
In a typical LAN, there are various types of network devices, as outlined below:
• Hub: Repeats signals received on a port by broadcasting them to all other
connected ports.
• Repeater: Used to connect two or more Ethernet segments of any media type,
and to provide signal amplification so a segment can be extended. In a
network that uses repeaters, all members contend to transmit data onto a
single network; we call this single network a collision domain. Effectively,
every user enjoys only a percentage of the available bandwidth. Ethernet is
subject to the "5-4-3" rule regarding repeater placement: we can have only
five segments connected using four repeaters, with only three segments
capable of accommodating hosts.
• Bridge: A layer 2 device used to connect networks of different types or
networks of the same type. It maps the Ethernet addresses of the nodes
residing on each segment and allows only the necessary traffic to pass
through the bridge. Packets destined for the same segment are dropped. This
"store-and-forward" mechanism inspects the whole Ethernet frame before
making a decision. Unfortunately, a bridge cannot filter out broadcast
traffic, and it introduces 20 to 30 percent latency when processing a frame.
Only two networks can be linked with a bridge.
• Switch: Can link four, six, eight, or even more networks. Cut-through
switches run faster because they forward a frame right after reading only its
destination address; a store-and-forward switch inspects the entire frame
before forwarding. Most switches cannot stop broadcast traffic. Switches are
layer 2 devices.
• Router: Can also filter network traffic, but based on the protocol addresses
defined at OSI layer 3 (the network layer), not on Ethernet frame addresses.
Note that protocols must be routable in order to pass through a router. A
router can determine the most efficient path for a packet to take and can
send packets around failed segments.
• Brouter: Has the best features of both routers and bridges: it can be
configured to pass unroutable protocols by imitating a bridge, while not
passing broadcast storms by acting as a router for the other protocols.
• Gateway: Often used as a connection to a mainframe or to the Internet.
Gateways enable communications between different protocols, data types, and
environments. This is achieved via protocol conversion, whereby the gateway
strips the protocol stack off the packet and adds the appropriate stack for
the other side. Gateways can operate at all layers of the OSI model.

The goal of LAN segmentation is to reduce traffic and collisions by segmenting the
network. In a LAN segmentation plan, we do not consider the use of gateways and
hubs at all; the focus turns to devices such as bridges, switches, and routers.

Bridge - A layer 2 store-and-forward device, as described in the device list above;
it cannot filter broadcast traffic and can link only two networks.
Switch - Switches are layer 2 devices that can link four, six, eight, or even
more networks. Switches are the only devices that allow for microsegmentation.
Cut-through switches run faster because they forward a frame right after reading
only the destination address; a store-and-forward switch inspects the entire frame
before forwarding. Most switches cannot stop broadcast traffic. Switches are
considered dedicated data link devices because each port receives close to 100%
of the bandwidth. While bridges do most of their work in software, switches use
hardware switching fabric to handle most of theirs.

Store-and-forward - The entire frame is received before any forwarding takes
place. The destination and/or source addresses are read and filters are applied
before the frame is forwarded. Latency occurs while the frame is being received;
latency is greater with larger frames because the entire frame takes longer to read.
Error detection is high because of the time available for the switch to check for
errors while waiting for the entire frame to be received. This method discards
frames smaller than 64 bytes (runts) and frames larger than 1518 bytes (giants).
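The runt/giant rule can be sketched as a minimal classifier (the function name is illustrative, not part of any real switch API):

```python
# The runt/giant rule for store-and-forward switching, as a minimal
# classifier. Frames below 64 bytes or above 1518 bytes are discarded.
def classify_frame(length: int) -> str:
    if length < 64:
        return "runt (discard)"
    if length > 1518:
        return "giant (discard)"
    return "valid (forward)"

print(classify_frame(60))    # runt (discard)
print(classify_frame(512))   # valid (forward)
print(classify_frame(1600))  # giant (discard)
```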

Cut-through - The switch reads the destination address before receiving the entire
frame, and the frame is forwarded before it has fully arrived. This mode decreases
transmission latency but has poor error detection. It has two forms: fast-forward
and fragment-free.

• Fast-forward switching - Offers the lowest level of latency by forwarding a
packet immediately after receiving the destination address. Because
fast-forward switching does not check for errors, frames may occasionally be
relayed with errors; although this occurs infrequently, and the destination
network adapter discards the faulty frame upon receipt, in networks with high
collision rates it can negatively affect available bandwidth.
• Fragment-free switching - Use the fragment-free option to reduce the number
of collision frames forwarded with errors. In fast-forward mode, latency is
measured from the first bit received to the first bit transmitted, or first
in, first out (FIFO). Fragment-free switching filters out collision
fragments, which are the majority of packet errors, before forwarding begins.
In a properly functioning network, collision fragments must be smaller than
64 bytes; anything 64 bytes or greater is a valid packet and is usually
received without error. Fragment-free switching waits until the received
packet has been determined not to be a collision fragment before forwarding
it. In fragment-free mode, latency is also measured as FIFO.

Spanning-Tree Protocol - Allows duplicate switched/bridged paths without incurring
the latency effects of loops in the network.

The Spanning-Tree Algorithm, implemented by the Spanning-Tree Protocol, prevents
loops by calculating a stable spanning-tree network topology. When creating a
fault-tolerant network, a loop-free path must exist between all nodes in the
network; the Spanning-Tree Algorithm is used to calculate such a loop-free path.
Spanning-tree frames, called bridge protocol data units (BPDUs), are sent and
received by all switches in the network at regular intervals and are used to
determine the spanning-tree topology. A switch uses Spanning-Tree Protocol on all
Ethernet- and Fast Ethernet-based VLANs. Spanning-Tree Protocol detects and breaks
loops by placing some connections in standby mode; these are activated in the
event of an active connection failure. A separate instance of Spanning-Tree
Protocol runs within each configured VLAN, ensuring that topologies conform to
industry standards throughout the network. The port states are as follows:

• Blocking - No frames forwarded, BPDUs heard.
• Listening - No frames forwarded, listening for BPDUs.
• Learning - No frames forwarded, learning addresses.
• Forwarding - Frames forwarded, learning addresses.
• Disabled - No frames forwarded, no BPDUs heard.
The state for each VLAN is initially set by the configuration and later modified by
the Spanning-Tree Protocol process. You can determine the status, cost, and
priority of ports and VLANs by using the show spantree command. After the
port-to-VLAN state is set, Spanning-Tree Protocol determines whether the port
forwards or blocks frames.
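One thing the exchanged BPDUs determine is which switch becomes the root of the spanning tree: the lowest bridge ID wins, comparing priority first and MAC address as the tie-breaker. A toy Python sketch, with made-up switch names and values:

```python
# Toy sketch of spanning-tree root bridge election: the lowest bridge ID
# (priority first, then MAC address as tie-breaker) wins. The switch
# names, priorities, and MACs are made up for illustration.
bridges = [
    {"name": "SW1", "priority": 32768, "mac": "00:0a:00:00:00:01"},
    {"name": "SW2", "priority": 4096,  "mac": "00:0b:00:00:00:02"},
    {"name": "SW3", "priority": 4096,  "mac": "00:0a:00:00:00:03"},
]
root = min(bridges, key=lambda b: (b["priority"], b["mac"]))
print(root["name"])  # SW3: lowest priority, then lowest MAC
```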

A VLAN is a logical grouping of devices or users. These devices or users can be
grouped by function, department, application, and so on, regardless of their
physical segment location. VLAN configuration is done at the switch via the
switching fabric. A VLAN reduces broadcast traffic by separating broadcast domains
within the switch; in other words, VLANs create separate broadcast domains in a
switched network. Frame tagging at layer 2 does this. Frame tagging is gaining
recognition as the standard for implementing VLANs and is standardized as IEEE
802.1Q. Frame tagging assigns a unique VLAN ID to each frame. This identifier is
understood and examined by each switch prior to any broadcasts or transmissions to
other switches, routers, and end-station devices. When the frame exits the network
backbone, the switch removes the identifier before the frame is transmitted to the
target end station. This effectively creates an environment with less broadcast
traffic. The key point is that ports in a VLAN share broadcasts, while ports not in
that VLAN do not receive them; thus users in the same physical location can be
members of different VLANs. We can plug existing hubs into a switch port and assign
them a VLAN of their own to segregate the users on those hubs. Frame filtering, by
contrast, examines particular information about each frame. A filtering table is
developed for each switch; this provides a high level of administrative control
because many attributes of each frame can be examined. Frame filtering is slowly
being phased out and replaced by the frame tagging method.
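The tagging behavior described above can be sketched in Python: a 4-byte tag (the TPID value 0x8100 plus the VLAN ID) is inserted after the source MAC on the backbone and stripped at the egress switch. This is a simplified sketch of 802.1Q; the real tag also carries priority bits, which are left at zero here.

```python
import struct

# Simplified sketch of 802.1Q frame tagging: a 4-byte tag (TPID 0x8100
# plus the VLAN ID) is inserted after the 6-byte destination and 6-byte
# source MAC addresses, and stripped again at the egress switch.
def tag_frame(frame: bytes, vlan_id: int) -> bytes:
    tag = struct.pack("!HH", 0x8100, vlan_id & 0x0FFF)
    return frame[:12] + tag + frame[12:]

def untag_frame(frame: bytes) -> bytes:
    return frame[:12] + frame[16:]

frame = bytes(12) + b"\x08\x00" + b"payload"   # MACs, EtherType, data
tagged = tag_frame(frame, vlan_id=10)
assert untag_frame(tagged) == frame            # egress restores the frame
```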

VLANs can be complicated to set up. Because VLANs are built on layer 2 addressing,
routers are required for traffic to pass between separate VLANs. The advantage of
layer 2 addressing is that it is faster to process. It is also quite common for
administrators to set up multiple VLANs with multiple access lists to control
access. Layer 3 routing provides the ability for multiple VLANs to communicate with
each other, which means that users in different locations can reside on the same
VLAN. This is a flexible approach to network design.

VLANs are configured on the switch in three ways: port-centric, static, and
dynamic. In port-centric VLANs, all nodes connected to ports in the same VLAN are
assigned the same VLAN ID; packets do not "leak" into other domains, administration
is easy, and security between VLANs is good. Some say that statically configured
VLANs are the same as port-centric VLANs, because static VLANs use the port-centric
method for assignment to switch ports. Dynamic VLANs are switch ports that can
automatically determine their VLAN assignments, based on MAC addresses, logical
addressing, or the protocol type of the data packets. When a station is initially
connected to an unassigned switch port, the switch checks the MAC entry in the
management database and dynamically configures the port with the corresponding
VLAN configuration. The major advantage of this method is lower administration
overhead, once the database within the VLAN management software has been set up.

LAN Protocols:
The following sections will introduce the core LAN protocols that you will need to
know for the exam.

Every IP address can be broken down into two parts: the network ID (netid) and the
host ID (hostid). All hosts on the same network must have the same netid, and each
of these hosts must have a hostid that is unique within that netid. IP addresses
are divided into 4 octets, each with a maximum value of 255. We view IP addresses
in dotted-decimal notation, but an address is actually used as binary data, so one
must be able to convert addresses back and forth.

The following table shows the decimal value of each bit position, which is the key
to converting between binary and decimal:

Decimal   Binary
128       10000000
 64       01000000
 32       00100000
 16       00010000
  8       00001000
  4       00000100
  2       00000010
  1       00000001

When converting binary data to decimal, a "0" contributes nothing, while a "1"
contributes the value of the bit position it occupies. For example, the number 213
would be 11010101 in binary notation. This is calculated as follows:
128+64+0+16+0+4+0+1 = 213. Remember that this represents only 1 octet of 8 bits,
while a full IP address is 32 bits made up of 4 octets. This being true, the IP
address 213.128.68.130 would look like 11010101 10000000 01000100 10000010 in
binary.
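The conversions from the table can be checked programmatically; Python's int() parses binary and format() renders it:

```python
# The conversions from the table, done programmatically: int() parses
# binary and format() renders it.
assert int("11010101", 2) == 213           # 128 + 64 + 16 + 4 + 1
assert format(213, "08b") == "11010101"

# A full 32-bit address is four such octets:
octets = [213, 128, 68, 130]
print(" ".join(format(o, "08b") for o in octets))
# 11010101 10000000 01000100 10000010
```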

IP addresses are divided into classes as shown below:

Class   Range
A       1-126
B       128-191
C       192-223
D       224-239 (multicasting)
E       240-255 (reserved)

Host addresses can be class A, B, or C. Class A addresses are for networks with a
large number of hosts: the first octet is the netid and the 3 remaining octets are
the hostid. Class B addresses are used in medium to large networks, with the first
2 octets making up the netid and the remaining 2 the hostid. Class C is for smaller
networks, with the first 3 octets making up the netid and the last octet comprising
the hostid. The latter two classes (D and E) are not used for host networks.

A subnet mask blocks out a portion of an IP address and is used to differentiate
between the hostid and netid. The default subnet masks are as follows:

Class     Default Subnet Mask   # of Networks   # of Hosts Per Network
Class A   255.0.0.0             126             16,777,214
Class B   255.255.0.0           16,384          65,534
Class C   255.255.255.0         2,097,152       254

In these cases, the part of the IP address masked by 255 is the netid.

The table above shows the default subnet masks. What subnet mask do you use when
you want more than 1 subnet? Let's say, for example, that you want 8 subnets and
will be using a class C address. The first thing to do is convert the number of
subnets into binary, so our example would be 00001000. Moving from left to right,
drop all zeros until you get to the first "1"; for us that leaves 1000. It takes
4 bits to make 8 in binary, so we set the first 4 high-order bits of the 4th octet
of the subnet mask (since it is class C) to "1", as follows:
11111111.11111111.11111111.11110000 = 255.255.255.240. There is our subnet mask.
Let's try another one. Say you own a chain of stores that sell spatulas in New
York, you have stores in 20 different neighborhoods, and you want a separate subnet
on your network for each neighborhood. It will be a class B network. First, we
convert 20 to binary: 00010100. We drop all zeros before the first "1", which
leaves 10100. It takes 5 bits to make 20 in binary, so we set the first 5
high-order bits of the host portion to "1", which gives:
11111111.11111111.11111000.00000000 = 255.255.248.0.

The following table shows a comparison between the different subnet masks:
Mask   # of Subnets   Class A Hosts   Class B Hosts   Class C Hosts
192    2              4,194,302       16,382          62
224    6              2,097,150       8,190           30
240    14             1,048,574       4,094           14
248    30             524,286         2,046           6
252    62             262,142         1,022           2
254    126            131,070         510             Invalid
255    254            65,534          254             Invalid
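The bit-counting procedure from the examples above can be sketched in Python, using the classic rule that the all-zeros and all-ones subnets are excluded (which matches the subnet counts in the table):

```python
# Sketch of the subnetting arithmetic above: how many high-order host
# bits must be borrowed for a given number of subnets, under the classic
# rule that excludes the all-zeros and all-ones subnets (the "- 2").
def subnet_bits(subnets_needed: int) -> int:
    bits = 1
    while 2 ** bits - 2 < subnets_needed:
        bits += 1
    return bits

print(subnet_bits(8))   # 4 -> mask 255.255.255.240 for a class C
print(subnet_bits(20))  # 5 -> mask 255.255.248.0 for a class B
```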

Note: 127.x.x.x is reserved for loopback testing on the local system and is not used
on live systems.

TCP/IP Ports - Ports are what an application uses when communicating between a
client and server computer. Some common TCP/IP ports are:
21 FTP
110 POP3
137 NetBIOS name service
138 NetBIOS datagram service
139 NetBIOS
161 SNMP

You need to understand buffering, source quench messages, and windowing.

Buffering allows devices to temporarily store bursts of excess data in memory.
However, if data keeps arriving at high speed, the buffers can overflow. In that
case, source quench messages are used to request that the sender slow down.

Windowing is a flow-control mechanism. It requires the sending device to send a
few packets to the destination device and wait for an acknowledgement; once the
acknowledgement is received, it sends the same number of packets again. If there is
a problem on the receiving end, no acknowledgement comes back, and the sending
source then retransmits at a slower rate. This is like trial and error, and it
works. Note that the window size should never be set to 0; a zero window size means
to stop transmission completely.
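The windowing behavior described above can be sketched as a toy sender that transmits a fixed-size batch and waits for an acknowledgement before sending more. The names and the ack callback are illustrative, not a real protocol implementation.

```python
# Toy sketch of windowing: the sender transmits `window` packets, waits
# for an acknowledgement, and only then sends the next batch. The ack
# callback stands in for the receiver.
def send_with_window(packets, window, ack):
    sent = []
    for i in range(0, len(packets), window):
        batch = packets[i:i + window]
        sent.extend(batch)
        if not ack(batch):      # no acknowledgement: stop and back off
            break
    return sent

result = send_with_window(list(range(7)), window=3, ack=lambda b: True)
print(result)  # [0, 1, 2, 3, 4, 5, 6]
```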

3COM’s IP addressing tutorial is excellent. It covers basic IP addressing options
as well as subnetting and VLSM/CIDR.

IPX is also an important issue to consider in network management, given that many
companies still use NetWare servers. There are two parts to every IPX network
address: the network ID and the host ID. The first 8 hex digits represent the
network ID, while the remaining hex digits represent the host ID, which is usually
the same as the MAC address, meaning we do not need to manually assign node
addresses. Note that valid hexadecimal digits range from 0 through 9, and
hexadecimal letters range from A through F. FFFFFFFF in hexadecimal notation =
4294967295 in decimal.
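The hexadecimal arithmetic can be checked in Python (the IPX address below is made up for illustration):

```python
# Checking the hexadecimal arithmetic above: FFFFFFFF, eight hex digits,
# is the largest 32-bit IPX network ID.
assert int("FFFFFFFF", 16) == 4294967295 == 2 ** 32 - 1

# Splitting a made-up IPX address into its network and host parts:
ipx = "0000ABCD.0000.1234.5678"
network_id, host_id = ipx.split(".", 1)
print(network_id, host_id)  # 0000ABCD 0000.1234.5678
```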

Sequenced Packet Exchange (SPX) belongs to the transport layer and is
connection-oriented. It creates virtual circuits between hosts, and each host is
given a connection ID in the SPX header for identifying the connection. Service
Advertisement Protocol (SAP) is used by NetWare servers to advertise network
services via broadcast, at an interval of every 60 seconds by default.

WAN Protocols:
In general, there are three broad types of WAN access technology. With Leased
Lines, we have point-to-point dedicated connection that uses pre-established WAN
path provided by the ISP. With Circuit Switching such as ISDN, a dedicated circuit
path exist only for the duration of the call. Compare to traditional phone service,
ISDN is more reliable and is faster. With Packet Switching, all network devices share
a single point-to-point link to transport packets across the carrier network - this is
known as virtual circuits.

When we talk about customer premises equipment (CPE), we are referring to devices
physically located at the subscriber's location. The demarcation point is where the
CPE ends and the local loop begins. A central office (CO) is a switching facility
that provides the point of presence for its service. Data Terminal Equipment (DTE)
is the device where the switching application resides, and Data Circuit-terminating
Equipment (DCE) is the device that converts user data from the DTE into the
appropriate WAN protocol. A router is a DTE, while a CSU/DSU or modem is often
referred to as a DCE.

Frame Relay:
Frame Relay has the following characteristics:
Successor to X.25.
Has less overhead than X.25 because it relies on upper-layer protocols to
perform error checking.
Speeds range from 56 Kbps to 2.048 Mbps.
Uses Data Link Connection Identifiers (DLCIs) to identify virtual circuits, with
DLCI numbers between 16 and 1007.
Uses the Local Management Interface (LMI) to provide information on the DLCI
values as well as the status of virtual circuits. Cisco routers support the
Cisco (default), ANSI, and Q.933a LMI types.
To set up Frame Relay, we set the encapsulation to frame-relay in either the
Cisco (default) mode or the IETF mode; Cisco encapsulation is used to connect
two Cisco devices, while IETF is used when connecting to non-Cisco equipment.
The LMI type is configurable, but by default it is auto-sensed.
Generally transfers data over permanent virtual circuits (PVCs), although
switched virtual circuits (SVCs) can be used as well.
An SVC is for transferring data intermittently; a PVC does not have the
overhead of establishing and terminating a circuit each time communication is
needed.
The Committed Information Rate (CIR) is the guaranteed minimum transfer rate of
a virtual circuit.

Cisco has a web page that describes the configuration and troubleshooting of Frame
Relay at http://www.cisco.com/warp/public/125/13.html

ISDN has the following characteristics:
• Works at the Physical, Data Link, and Network layers.
• Often used as a backup with DDR (Dial-on-Demand Routing).
• Makes use of existing telephone wiring.
• Supports simultaneous data and voice.
• Max speed of 128 Kbps with PPP Multilink.
• Call setup and data transfer are faster than with typical modems.
• BRI has 2 x 64 Kbps B channels for data and one 16 Kbps D channel for
signaling.
• PRI has 23 B channels and one D channel in the US, or 30 B channels and
one D channel in Europe.
• E protocol specifies ISDN on the existing telephone network.
• I protocol specifies concepts, terminology, and services.
• Q protocol specifies switching and signaling.
• ISDN reference points include R (between non-ISDN equipment and a TA),
S (between user terminals and NT2), T (between NT1 and NT2 devices) and
U (between NT1 devices and line-termination equipment in North America).
• A router is always connected through the U interface into the NT1.
• A native BRI interface is considered Terminal Equipment type 1 (TE1); TE1
devices understand the ISDN standards.
• TE2 devices predate the ISDN standards and need a Terminal Adapter (TA) to
connect.
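The channel arithmetic above is easy to check. Here is a minimal Python sketch (the function name is ours, not Cisco's):

```python
# Total ISDN interface bandwidth = (B channels x B rate) + D channel rate, in Kbps.
def isdn_bandwidth(b_channels, b_rate=64, d_rate=16):
    return b_channels * b_rate + d_rate

bri = isdn_bandwidth(2)                 # BRI: 2 x 64 + 16 = 144 Kbps
pri_us = isdn_bandwidth(23, d_rate=64)  # US PRI: 23 x 64 + 64 = 1536 Kbps
pri_eu = isdn_bandwidth(30, d_rate=64)  # European PRI: 30 x 64 + 64 = 1984 Kbps
print(bri, pri_us, pri_eu)              # 144 1536 1984
```

Note that a PRI's D channel runs at 64 Kbps, unlike the 16 Kbps D channel on a BRI.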

For more information about ISDN, read our Introduction to ISDN.

Cisco has a web page with links about the configuration and troubleshooting of ISDN

ATM stands for Asynchronous Transfer Mode and is a high-speed, packet-switching
technique that uses short fixed-length packets called cells, which are 53 bytes in
length. ATM can transmit voice, video, and data over variable-speed LAN and WAN
connections at speeds ranging from 1.544 Mbps to as high as 622 Mbps, and newer
standards push speeds into the multi-gigabit range. ATM's speed is derived from the
use of short fixed-length cells, which reduce delays and the variance of delay for
delay-sensitive services such as voice and video. ATM is capable of supporting a wide
range of traffic types such as voice, video, image, and data.
As an improvement to Serial Line Internet Protocol (SLIP), Point-to-Point Protocol
(PPP) was designed mainly for the transfer of data over slower serial interfaces. It is
better than SLIP because it provides multiprotocol support, error correction, and
password protection. It is a Data Link layer protocol used to encapsulate higher-layer
protocols to pass over synchronous or asynchronous communication lines. PPP is
capable of operating across any DTE/DCE device, most commonly modems, as long
as they support duplex circuits. There are 3 components to PPP:

HDLC (High-Level Data Link Control) - Encapsulates the data during transmission.
It is a link-layer protocol and is also the default Cisco encapsulation protocol for
synchronous serial links. HDLC is supposed to be an open standard, but Cisco's
version is proprietary, meaning it can only function between Cisco routers.
LCP (Link Control Protocol) - Establishes, tests, and configures the data link.
NCPs (Network Control Protocols) - Used to configure the different
communication protocols, allowing them on the same line simultaneously. Microsoft
uses 3 NCPs for the 3 protocols at the Network layer (IP, IPX, and NetBEUI).

PPP communication occurs in the following manner: PPP sends LCP frames to test
and configure the data link. Next, authentication protocols are negotiated to
determine what sort of validation is used for security. Below are 2 common
authentication protocols:

PAP is similar to a network login, but passwords are sent as clear text, so it
offers little protection.
CHAP uses a challenge/response handshake with hashed passwords and is a more
secure way of validating a peer.
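To see why CHAP is safer than PAP, here is a minimal Python sketch of the CHAP response calculation. Per RFC 1994 the response is the MD5 hash of the packet ID, the shared secret, and the challenge; the secret value below is a hypothetical placeholder.

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    # RFC 1994: response = MD5(ID || secret || challenge).
    # The shared secret itself never crosses the wire.
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"      # configured on both peers (hypothetical value)
challenge = os.urandom(16)     # random challenge sent by the authenticator
response = chap_response(0x01, secret, challenge)

# The authenticator recomputes the hash with its own copy of the secret;
# a match authenticates the peer without ever revealing the password.
assert response == chap_response(0x01, secret, challenge)
```

Because the challenge is random each time, a captured response cannot simply be replayed, unlike a PAP clear-text password.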

Then NCP frames are used to set up the network-layer protocols to be used. Finally,
HDLC is used to encapsulate the data stream as it passes through the PPP link.
Point-to-Point Tunneling Protocol(PPTP) provides for the secure transfer of data from
a remote client to a private server by creating a multi-protocol Virtual Private
Network(VPN) by encapsulating PPP packets into IP datagrams. There are 3 steps to
setup a secure communication channel:

1. PPP connection and communication to the remote network are established.

2. PPTP creates a control connection between the client and remote PPTP server
3. PPTP creates the IP datagrams for PPP to send.

The packets are encrypted by PPP and sent through the tunnel to the PPTP server,
which decrypts the packets, disassembles the IP datagrams, and routes them to the
host. Setting up PPTP requires a PPTP client, a PPTP server, and a Network Access
Server (NAS).

There is a very helpful web site with detailed tutorials on ISDN, Frame Relay, X.25,
ATM and other serial WAN technologies located here.

Cisco IOS:
Cisco routers use the Internetwork Operating System (IOS), which stores the
configuration information in Non-Volatile RAM (NVRAM), while the IOS itself is stored
in flash. The IOS can be accessed via Telnet, a console connection (such as
HyperTerminal), or a dial-in connection. You can also configure the router as a web
server and then access a web-based configuration panel via HTTP.

There are a variety of sources for booting, including flash memory, TFTP, and ROM.
It is always recommended that a new IOS image be loaded onto a TFTP server first
and then copied from the TFTP server to flash memory as a backup mechanism. The
copy command, such as "copy tftp flash", allows us to copy the IOS image from the
TFTP server to flash memory. And of course, we can always do the reverse. Now, we
need to inform the router to boot from the correct source. The following commands
are examples of what we should type in depending on the situation. Typically, it is a
good idea to specify multiple boot options as a fallback:

boot system flash {IOS filename}
boot system tftp {IOS filename} {tftp server IP address}
boot system rom

After the boot-up process we can prepare to log in. User EXEC is the first mode
we encounter. It gives us a prompt of "Router>". To exit this mode means to log out
completely, which can be done with the logout command. If we want to proceed to
Privileged EXEC, we need to use the enable EXEC command. Once entered, the
prompt changes to "Router#". To go back to user EXEC mode, we use the disable
command. Note that all configuration work requires the administrator to be in
Privileged mode first. Put it this way: Privileged EXEC mode includes support for all
commands in user mode plus those that provide access to global and system
settings.

The setup command facility is for making major changes to the existing
configuration, such as adding a protocol suite, modifying a major addressing
scheme, or configuring a newly installed interface.

If you aren't big on reading manuals, finding out how to access help information
is a MUST. To display a list of commands available for each command mode, type a
? mark. IOS also provides a context-sensitive help feature to make life easier. In
order to pass this exam, you will need to be able to find your way around the IOS.
We will list some of the information here, but there is too much to list all of it. You
will definitely need access to a router or the software listed at the beginning of this
study guide so that you can practice.

Useful editing commands include:

Command              Purpose
Ctrl-P or up arrow   Recall commands in the history buffer, starting
                     with the most recent command.
Ctrl-N or down arrow Return to more recent commands in the history
                     buffer after recalling commands with Ctrl-P or the
                     up arrow key.
Ctrl-B               Move the cursor back one character.
Ctrl-F               Move the cursor forward one character.
Ctrl-A               Move the cursor to the beginning of the command line.
Ctrl-E               Move the cursor to the end of the command line.
Esc B                Move the cursor back one word.
Esc F                Move the cursor forward one word.
Ctrl-R or Ctrl-L     Redisplay the current command line.

You will find most of the IOS commands at the following 2 links:
Router and Switch Commands

Access Lists allow us to implement some level of security on the network by
inspecting and filtering traffic as it enters or exits an interface. Each router can have
many access lists of the same or different types. However, only one can be applied in
each direction of an interface at a time (keep in mind that inbound and outbound
traffic is determined from the router's perspective). The two major types of access
lists that deserve special attention are the IP Access Lists and the IPX Access Lists.

Standard IP access lists can be configured to permit or deny passage through a
router based on the source host's IP address. Extended IP access lists use the
destination address, IP protocol, and port number to extend the filtering capabilities.
Access can be permitted or denied based on a specific destination address or range
of addresses, on an IP protocol such as TCP or UDP, or on port information such as
http, ftp, telnet, or snmp. We use the access list number to differentiate the type of
access list. Standard IP access lists use numbers from 1 through 99, and extended
IP access lists use numbers from 100 through 199:

1-99       Standard IP
100-199    Extended IP
200-299    Protocol type-code
300-399    DECnet
600-699    AppleTalk
700-799    Standard 48-bit MAC
800-899    Standard IPX
900-999    Extended IPX
1000-1099  IPX SAP
1100-1199  Extended 48-bit MAC Address
1200-1299  IPX Summary Address
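The number ranges in the table lend themselves to a simple lookup. Here is a small Python sketch with the range values taken from the table above:

```python
# Map an IOS access-list number to its list type.
ACL_RANGES = [
    (1, 99, "Standard IP"),
    (100, 199, "Extended IP"),
    (200, 299, "Protocol type-code"),
    (300, 399, "DECnet"),
    (600, 699, "AppleTalk"),
    (700, 799, "Standard 48-bit MAC"),
    (800, 899, "Standard IPX"),
    (900, 999, "Extended IPX"),
    (1000, 1099, "IPX SAP"),
    (1100, 1199, "Extended 48-bit MAC Address"),
    (1200, 1299, "IPX Summary Address"),
]

def acl_type(number: int) -> str:
    for low, high, name in ACL_RANGES:
        if low <= number <= high:
            return name
    return "Unknown"

print(acl_type(3))    # Standard IP
print(acl_type(103))  # Extended IP
```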
When dealing with Access Control Lists or preparing for your CCNA exam, you have
to deal with a 32-bit wildcard address in dotted-decimal form, known as your
inverse mask. By Cisco's definition it is called inverse, but you can think of it as the
"reverse" of your subnet mask in most cases. When dealing with your wildcard
mask, you have two values that you are working with. As in subnetting, you have 0
as the "off" value and 1 as the "on" value. Wildcards treat the 0 value as "match"
and the 1 value as "ignore". What do I mean by ignore or match? If you have studied
ACLs, you should know that your goal is to set criteria to deny or permit, and that is
where your inverse mask comes into play. It tells the router which values to seek out
when trying to deny or permit in your definition. If you have dealt with subnetting,
you know that most of your addresses ended with an even number. With your inverse
mask you will end up with an odd number. There are several different ways to come
up with your inverse mask; the easiest is to subtract your subnet mask from the
all-ones broadcast address of

Example: You have a subnet mask of To get your wildcard mask,
all you have to do is:

 - =

Then you can apply it to the definition, whether using a standard or extended ACL.

Standard example (the network is a hypothetical value, chosen to
match the reading below):
Router(config)# access-list 3 deny

How would you read this list? With this wildcard you have told the router to "match"
the first three octets and you don't care what's going on in the last octet, so any
host on the network is denied.

Extended example (again using the hypothetical network):
Router(config)# access-list 103 permit tcp any eq 80

How would you read this list? With this wildcard you have told the router to match
the first three octets of the destination and you don't care what's going on in the
last octet; matching TCP traffic to port 80 (http) on that network is permitted.

Think of it this way: if you break the decimal form down to binary, the wildcard
mask would look like this: 00000000.00000000.00000000.11111111. As you
know, "1" means ignore and "0" means match, so the last octet could have
been any value on that subnet, ranging from 0 to 255.

For more information on IP Access Lists, read Configuring IP Access Lists

For IPX access list configuration, read Control Access to IPX Networks

There are 2 main types of routing, static and dynamic; a third type is called hybrid.
Static routing involves the cumbersome process of manually configuring and
maintaining route tables by an administrator. Dynamic routing enables routers to
"talk" to each other and automatically update their routing tables. This process
occurs through the use of broadcasts. Next is an explanation of the various routing
protocols.

Routing Information Protocol (RIP) is a distance-vector dynamic routing protocol. RIP
measures the distance from source to destination by counting the number of
hops (routers or gateways) that the packets must travel over. RIP sets a maximum of
15 hops and considers any larger number of hops unreachable. RIP's real advantage
is that if there are multiple possible paths to a particular destination and the
appropriate entries exist in the routing table, it will choose the shortest route.
Routers can talk to each other; however, in the real routing world there are so many
different routing technologies available that it is not as simple as just enabling
Routing Information Protocol (RIP).
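RIP's shortest-route selection and 15-hop limit can be sketched in a few lines of Python (a simplification of the real protocol, for illustration only):

```python
# RIP treats a metric of 16 or more hops as "infinity" (unreachable).
RIP_INFINITY = 16

def best_route(advertised_hops):
    """Pick the lowest hop count among advertised paths; None = unreachable."""
    reachable = [h for h in advertised_hops if h < RIP_INFINITY]
    return min(reachable) if reachable else None

print(best_route([3, 7, 5]))   # 3  (shortest of several possible paths)
print(best_route([16, 20]))    # None (beyond 15 hops: unreachable)
```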

For information on RIP configuration, read Configuring RIP

Open Shortest Path First (OSPF) is a link-state routing protocol that converges faster
than a distance-vector protocol such as RIP. What is convergence? It is the time
required for all routers to finish building their routing tables. RIP uses ticks and
hop counts as its measurements, while OSPF uses a cost metric that takes bandwidth
and network congestion into account when making routing decisions. RIP transmits
updates every 30 seconds, while OSPF transmits updates only when there is a
topology change. OSPF builds a complete topology of the whole network, while RIP
uses second-hand information from neighboring routers. To summarize, RIP is easier
to configure and is suitable for smaller networks. In contrast, OSPF requires more
processing power and is suitable if scalability is the main concern.
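As a concrete illustration of OSPF's bandwidth-based metric, Cisco's default cost formula divides a reference bandwidth of 10^8 bps by the interface bandwidth, rounding down with a minimum cost of 1:

```python
# Cisco's default OSPF interface cost: reference bandwidth / interface bandwidth.
def ospf_cost(bandwidth_bps: int, reference_bps: int = 100_000_000) -> int:
    return max(1, reference_bps // bandwidth_bps)

print(ospf_cost(1_544_000))    # T1 (1.544 Mbps)  -> 64
print(ospf_cost(10_000_000))   # 10 Mbps Ethernet -> 10
print(ospf_cost(100_000_000))  # Fast Ethernet    -> 1
```

Note that any link at or above the reference bandwidth collapses to a cost of 1, which is why the reference bandwidth is often raised on faster networks.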

We can tune the network by adjusting various timers. Areas that are tunable include:
the rate at which routing updates are sent, the interval of time after which a route is
declared invalid, the interval during which routing information regarding better paths
is suppressed, the amount of time that must pass before a route is removed from
the routing table, and the amount of time for which routing updates will be
postponed. Of course, different settings are needed in different situations. In any
case, we can use the "show ip route" command to display the contents of the routing
table as well as how each route was discovered.

For commands and methods to configure OSPF read Configuring OSPF on Cisco


RIP and OSPF are considered "open", while IGRP and EIGRP are Cisco proprietary.
Interior Gateway Routing Protocol(IGRP) is a distance vector routing protocol for the
interior networks, while Enhanced Interior Gateway Routing Protocol (EIGRP) is a
hybrid that combines distance-vector and link-state technologies. Do not confuse
these with NetWare Link Services Protocol (NLSP), a proprietary link-state routing
protocol used on Novell NetWare 4.x to replace SAP and RIP. For IGRP, the metric is
a function of bandwidth, reliability, delay and load. One of the characteristics of IGRP
is the deployment of hold down timers. A hold-down timer has a value of 280
seconds. It is used to prevent routing loops while router tables converge by
preventing routers from broadcasting another route to a router which is off-line
before all routing tables converge. For EIGRP, separate routing tables are maintained
for IP, IPX and AppleTalk protocols. However, routing update information is still
forwarded with a single protocol.
(Note: RIPv2, OSPF and EIGRP include the subnet mask in routing updates which
allows for VLSM (Variable Length Subnet Mask), hence VLSM is not supported by
RIP-1 or IGRP.)

For more information about IGRP, read Configuring IGRP

For a detailed guideline on configuring EIGRP, read Configuring IP Enhanced IGRP

Other Routing Info:

In the routing world, we have the concept of an autonomous system (AS), which
represents a group of networks and routers under common management that share
a common routing protocol. ASes are connected by the backbone to other ASes. For
a device to be part of an AS, it must be assigned the AS number that belongs to the
corresponding AS.

Route poisoning advertises a failed network with a metric of 16 (RIP's "infinity"), so
that neighboring routers immediately mark the route as unreachable. This way,
other routers can no longer update the originating router's routing tables with faulty
information.
Hold-downs prevent routing loops by disallowing other routers from updating their
routing tables too quickly after a route goes down. Instead, a route can be updated
only when the hold-down timer expires, if another router advertises a better metric,
or if the router that originally advertised the unreachable network advertises that the
network has become reachable again. Note that hold-down timers need to work
together with route poisoning in order to be effective.

Split horizon simply prevents a route from being advertised back out the same
router interface on which it was learned. Poison reverse overrides split horizon by
informing the sending router that the destination is inaccessible, while triggered
updates send out updates whenever a change in the routing table occurs, without
waiting for the preset update interval to expire.
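The interplay of split horizon and poison reverse can be sketched as a filter applied when building the routing update for one interface (a simplification, for illustration only):

```python
# Routes are (network, metric, interface_the_route_was_learned_on).
INFINITY = 16  # RIP's unreachable metric

def build_update(routes, out_iface, poison_reverse=False):
    """Build the list of (network, metric) pairs to advertise out one interface."""
    update = []
    for network, metric, learned_iface in routes:
        if learned_iface == out_iface:
            if poison_reverse:
                # Poison reverse: advertise the route back, but as unreachable.
                update.append((network, INFINITY))
            # Plain split horizon: do not advertise it back at all.
        else:
            update.append((network, metric))
    return update

routes = [("", 2, "s0"), ("", 1, "s1")]
print(build_update(routes, "s0"))                       # [('', 1)]
print(build_update(routes, "s0", poison_reverse=True))  # route back as metric 16
```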