
INTERNETWORKING

FUNDAMENTALS:
An Overview


© 2008 Andres Rengifo


All Rights Reserved
Version 5.0

To my wife, my eternal soul mate!

It is with great pleasure and appreciation that I dedicate this book to the great loves of
my life, my daughters Paola and Gabriela. They are the engine of my life. In addition, I
would also like to thank my parents, sisters, and entire family for their constant support
and encouragement.

Author's Acknowledgment
I would like to acknowledge the following authors and their books, which I used to
complete this Internetworking Fundamentals Overview. Without these sources, this
project would not have been possible. This overview is an attempt to present the
material compiled and written by these authors in a very compact and general way to my
students. The hope is that they can use this book as an initial source and continue to
research answers to more difficult questions with the resources listed here.

Internetworking with TCP/IP, Volume I: Principles, Protocols, and Architecture,
Douglas E. Comer, Prentice Hall, Fourth Edition

Communication Networks: A First Course, Jean Walrand, Aksen Associates

TCP/IP Illustrated, Volume I, W. Richard Stevens, Addison Wesley

Internetworking Technologies Handbook, Merilee Ford, Cisco Press

Interconnections, Radia Perlman, Addison Wesley

Routing in the Internet, Christian Huitema, Prentice Hall

CCIE Professional Development: Routing TCP/IP, Jeff Doyle, Cisco Press

IP Fundamentals, Thomas A. Maufer, Prentice Hall

IP Routing Primer, Robert Wright, Cisco Press www.cisco.com

Dictionary of Networking, Peter Dyson, Sybex Inc

Chapter 1: INTRODUCTION TO COMMUNICATIONS SYSTEMS ............................. 1
1.1 Introduction.............................................................................................................. 2
1.1.1 Communication Devices .................................................................................... 2
1.1.2 Key Functions for Designing a Network............................................................ 3
1.1.3 Evolution of Communication Networks ............................................................ 5
1.1.4 Conclusion........................................................................................................ 11
Chapter 2: LAYER NETWORK ARCHITECTURES..................................................... 12
2.1 Introduction ......................................................................................................... 13
2.1.1 Network Services and Architecture.................................................................. 13
2.1.2 Protocols........................................................................................................... 14
2.1.3 How Information is transmitted in a Network?................................................ 17
2.1.4 Classes of Communication Services ................................................................ 18
2.1.5 Switching.......................................................................................................... 18
2.1.6 Conclusion........................................................................................................ 21
Chapter 3: OSI REFERENCE MODEL ........................................................................... 22
3.1 Introduction ......................................................................................................... 23
3.1.1 Multiplexing ..................................................................................................... 23
3.1.2 The OSI Model................................................................................................. 25
3.1.3 Conclusion........................................................................................................ 32
Chapter 4: INTRODUCTION TO INTERNETWORKING ........................................... 33
4.1 Introduction ......................................................................................................... 34
4.1.1 LAN Systems ................................................................................................... 34
4.1.2 Backend Networks ........................................................................................... 34
4.1.3 Storage Area Networks .................................................................................... 35
4.1.4 Backbone LANs ............................................................................................... 37
4.1.5 Topologies........................................................................................................ 38
4.1.6 Medium Access Control................................................................................... 42
4.1.7 MAC Frame Format ......................................................................................... 43
4.1.8 MAC Protocols: The ALOHA Network ......................................................... 44
4.1.9 Token Ring....................................................................................................... 45
4.1.10 Conclusion .................................................................................................... 46
Chapter 5: INTRODUCTION TO BOOLEAN BASICS & BRIDGING ........................ 47
5.1 Introduction............................................................................................................. 48

5.1.1 Bridges ............................................................................................................. 48
5.1.2 Function of a Basic Bridge............................................................................... 49
5.1.3 Binary Numbering versus Decimal Numbering............................................... 55
5.1.4 Hexadecimal Numbering versus Decimal Numbering..................................... 58
5.1.5 Conclusion........................................................................................................ 59
Chapter 6: IP INTERNETWORKING ............................................................................. 60
6.1 Introduction ......................................................................................................... 61
6.1.1 Internet Protocol............................................................................................... 61
6.1.2 Classes of IP networks ..................................................................................... 64
6.1.3 What is a MASK? ............................................................................................ 69
6.1.4 Subnetting......................................................................................................... 71
6.1.5 Address Resolution Problem............................................................................ 76
6.1.6 Conclusion........................................................................................................ 77
Chapter 7: IP ROUTING PRIMER .................................................................................. 78
7.1 Introduction ......................................................................................................... 79
7.1.1 What is routing? ............................................................................................... 79
7.1.2 Forwarding Decisions ...................................................................................... 84
7.1.3 Routing Protocols............................................................................. 88
7.1.4 Classful and Classless Routing Protocols ........................................ 94
7.1.5 Routing Information Protocol (RIP) ................................................ 95
7.1.6 Conclusion........................................................................................ 95
ROUTING LAB SCENARIO ....................................................................................... 97
Chapter 8: OVERVIEW OF ROUTING PROTOCOLS ................................................ 110
8.1 Introduction ....................................................................................................... 111
8.1.1 Routing Information Protocol v1 (RIP) ......................................................... 111
8.1.2 Interior Gateway Routing Protocol (IGRP) ................................................... 116
8.1.3 Enhanced Interior Gateway Routing Protocol (EIGRP) ................................ 124
8.1.4 Open Shortest Path First (OSPF) ..................................................... 127
8.1.5 Border Gateway Protocol (BGP)...................................................... 130
8.1.6 Hot Standby Routing Protocol (HSRP)............................................ 136
8.1.7 Conclusion........................................................................................ 138
Chapter 9: TRANSMISSION CONTROL PROTOCOL ............................................... 139
9.1 Introduction.......................................................................................................... 140

9.1.1 Transmission Control Protocol (TCP)............................................................ 140
9.1.2 How does Three-way Handshaking work in detail?................................... 143
9.1.3 Sliding Windows ............................................................................................ 148
9.1.4 Conclusion...................................................................................................... 155
Chapter 10: INTRODUCTION TO IP MULTICAST................................................... 156
10.1 Introduction ....................................................................................................... 157
10.1.1 Multicast Addresses .................................................................................... 158
10.1.2 Multicast Distribution Trees ....................................................................... 160
10.1.3 Multicast Routing Protocols ....................................................................... 163
10.1.4 Conclusion .................................................................................................. 165
Glossary .......................................................................................................................... 166
About the Author ............................................................................................................ 182

Chapter 1: INTRODUCTION TO COMMUNICATIONS SYSTEMS
1.1 Introduction
Communication systems are composed of hardware and software that allow users to
exchange information. These communication systems were developed throughout the
years to allow users to exchange information in the form of data, such as text and
graphics as well as video and voice streams including teleconferencing meetings and
Video on Demand, among others. The telephone system served as a model for these
communication systems providing the base ground for the development of data networks.

The telephone network has been in place for a while and has been able to provide
services to multiple areas in the world. It is a network that provides a service to users,
which guarantees immediate connectivity between two parties. There is a perception by
any user that the network, in this case, the telephone system, will always have a line
ready for use at all times. Data networks are modeled using the telephone network
functionality to provide information transfer without losing any portion of the data
exchanged between users.

Fig 1.1: A basic communication system. A source generates digital information (a
stream of 1s and 0s); a transmitter converts it into an analog signal; switches and relay
nodes (R1–R6) in the transmission system carry it; and a receiver converts the analog
signal back into a form the destination can use.

1.1.1 Communication Devices

There are a few network devices that, when joined together, will create a
communication system. These devices are necessary for the exchange of information and
each has a particular functionality. These are described below in detail to understand
their importance:

1. Source: Information will be generated at a given point in the network. This
information, which can be data, video, or voice, has to be transmitted across the
network. The source creates this information. Applications such as email software,
word processors, and video cameras create information that is sent across the system.
2. Transmitter: All the information that was created with specific applications has to be
converted into a form acceptable by the system in use. The conversion mechanism
changes the way the information is sent across the network without changing its
content. The process encodes the information to facilitate signaling over the specific
physical medium attached to the transmitter. What does this mean? Consider a
computer using a modem (short for modulator-demodulator) to send information
created by an application such as email. The data is first digitized, that is, converted
into a long stream of 1s and 0s, and the modem then converts those digital streams
into analog signals suited to the network it connects to. This conversion is necessary
because information sometimes has to traverse networks, such as the telephone
system, where signaling is mostly analog and cannot accept digitized information.
Signals are also very specific to the physical medium: fiber optic cables use light
(photons) to carry information to its destination, while copper cables use electrical
signals to do the same. The same can be said about a computer using a Network
Interface Card (NIC) to connect to an Ethernet network. The NIC is the transmitter,
which converts and encodes the digital information into the proper signaling to be
put on the medium.
3. Transmission system or Network Cloud: The transmission system or network
cloud is the most important portion of the communication system. The transmission
system connects ALL networks to form a mesh to allow vast amounts of data to be
exchanged between two points. It can actually be a point-to-point connection
between end users, or can be a complex network with intermediate nodes that act as
relays for that information exchange. The transmission system is usually referred to
as the cloud where all networks meet.
4. Receiver: The receiver is the opposite of the transmitter. It accepts the encoded
signals traversing the transmission system and decodes them back into a form
acceptable to the destination device. The modem and the NIC perform both
functions, digital-to-analog conversion and vice versa.
5. Destination: The destination acts on the incoming data that has been decoded by the
receiver, in this case, the actual email that was sent by the email application. Every
communication system has to be able to deliver information as efficiently and
optimally as possible.
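The five roles above can be sketched as a toy pipeline in Python. This is purely illustrative: the "signal" here is just a string of 1s and 0s standing in for real analog or optical signaling, and all function names are invented for the example.

```python
def source(message: str) -> bytes:
    """Source: an application (e.g. email software) generates information."""
    return message.encode("utf-8")

def transmitter(data: bytes) -> str:
    """Transmitter: encode digital data into a signal for the medium.
    Here the 'signal' is simply a string of 1s and 0s."""
    return "".join(f"{byte:08b}" for byte in data)

def transmission_system(signal: str) -> str:
    """Transmission system / network cloud: carries the signal unchanged."""
    return signal

def receiver(signal: str) -> bytes:
    """Receiver: decode the signal back into digital data."""
    return bytes(int(signal[i:i + 8], 2) for i in range(0, len(signal), 8))

def destination(data: bytes) -> str:
    """Destination: the application acts on the decoded data."""
    return data.decode("utf-8")

msg = "hello"
assert destination(receiver(transmission_system(transmitter(source(msg))))) == msg
```

Each function mirrors one device in the list: the source and destination sit at the ends, while the transmitter and receiver perform the opposite conversions on either side of the cloud.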

1.1.2 Key Functions for Designing a Network

Several key functions need to be considered when designing a communication network.


The first function is maintaining proper utilization of the network. Multiple users
with multiple communication flows sending information at the same time will share the
network. This information has to arrive at the destination quickly, reliably, and with no
errors. The efficiency of the network is measured by how much of its capacity is
utilized while in an active state, that is, while exchanging multiple data flows. The more
heavily utilized the network is, the less efficient and reliable it will become: bits can be
dropped or corrupted.

It would be impossible to connect every system in the world with a point-to-point link to
form a fully meshed transmission system. Hence, sharing network links becomes a
necessity. Congestion control mechanisms are used to prevent data loss across the
network. The control mechanism throttles the data rate by decreasing the buffer space
or congestion window size after a segment is lost. When the window size is reduced, the
systems will inject fewer segments into the communication flow. On a steady-state
connection, the congestion window size is equal to the receiver's advertised window size.
These mechanisms allow multiple communication data flows to traverse shared links,
which are used when interconnecting multiple systems and other networks.
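A minimal sketch of this throttling policy, loosely modeled on TCP's congestion behavior (simplified; the halve-on-loss and grow-by-one rules here are assumptions chosen for illustration):

```python
def next_window(cwnd: int, advertised: int, segment_lost: bool) -> int:
    """One congestion-control step: shrink the congestion window after a
    lost segment so fewer segments are injected into the flow; otherwise
    grow toward the receiver's advertised window (the steady-state cap)."""
    if segment_lost:
        return max(1, cwnd // 2)          # throttle after a loss
    return min(cwnd + 1, advertised)      # steady state: cwnd == advertised

# A flow ramps up to the advertised window, then a loss cuts it back:
cwnd = 1
for lost in [False] * 10 + [True]:
    cwnd = next_window(cwnd, advertised=8, segment_lost=lost)
print(cwnd)  # 4: the window grew to the cap of 8, then halved after the loss
```

The steady state described in the text corresponds to the `min(cwnd + 1, advertised)` branch repeatedly returning the advertised window itself.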

3
Flow control is used to prevent data from being dropped. Flow control is a mechanism
that each device uses to control how much data can be sent or received without
overflowing the device's buffers. Overflowing these buffers will cause data to be
corrupted and dropped. At the same time, there must be some type of error detection
and correction mechanism introduced along the path from source to destination.
Errors have to be detected and corrected before the data is delivered up to the source or
destination's applications. This information will pass through multiple error detection
and correction layers to ensure a more accurate delivery.

In order to provide clean, optimal communication between two devices, there should
be a handshake in place prior to the exchange of data. This handshake is necessary
because it establishes the process by which each source and destination understands what
is being transported across the system. There is a mutual agreement ahead of time about
the language each device is using to communicate, how much information can be sent
without overflowing each other, how fast this information can be sent, what applications
are being used, etc. Once these parameters are known, the conversation can take place
accordingly.
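The agreement step can be sketched as follows. The parameter names and the pick-the-minimum rules are assumptions made for illustration, not any specific protocol's negotiation:

```python
def handshake(offer: dict, capabilities: dict) -> dict:
    """Toy handshake: both sides settle on a common 'language' (protocol)
    and the smaller of the two window sizes before exchanging data, so
    neither side can overflow the other."""
    common = set(offer["protocols"]) & set(capabilities["protocols"])
    if not common:
        raise ValueError("no common protocol: the conversation cannot start")
    return {
        "protocol": sorted(common)[0],
        "window": min(offer["window"], capabilities["window"]),
    }

agreed = handshake(
    {"protocols": ["v1", "v2"], "window": 8},   # initiator's offer
    {"protocols": ["v2", "v3"], "window": 4},   # responder's capabilities
)
print(agreed)  # {'protocol': 'v2', 'window': 4}
```

Once `agreed` is known to both sides, the conversation proceeds under those parameters, exactly as the paragraph above describes.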

Another key function that needs to be incorporated into a communication system is the
capability to recover the information if there is a break in the path. The recovery
mechanism should allow the process to return to the point where the break occurred and
allow the transfer to continue from that point without having to restart from the
beginning. Assume that there is a file being transferred between two end users or nodes.
During the file transmission, one of the links or intermediate nodes becomes congested or
goes down. The data flow is interrupted while the system or network recovers, using the
parameters exchanged during the handshake process to keep the connection alive, to
allow for the retransmission of the lost data segments and the continuation of the
communication flow.

Another important function that a communication network should implement is a good
security scheme. Security on a network is necessary to prevent unwanted users from
taking control of information not meant for everyone. Security mechanisms should be
able to detect and monitor the information flow, to make sure that user A's information is
truly coming from user A and not from a masked entity. For this reason, security
infrastructures are necessary on a communication network.

Finally, there has to be a way to monitor the performance and health of your
communication network. Network management is a key process required on every
network. This management allows for the day-to-day operation of the network, where
someone monitoring it can determine if there is a problem along the path or if there is
going to be an issue in the near future. This is called proactive management, which
means never having to wait for someone to reach out and inform the network group when
there are problems on the network.

Network management tools also allow for proper trending and modeling. Trending
provides the network administrator with a history of the communication system's
utilization and performance. Modeling allows network administrators to plan ahead of
time how a specific device will interact with the system prior to connecting it to a
production environment.

1.1.3 Evolution of Communication Networks

What is a computer network?


A computer network is designed to connect multiple computers so that they can share
information by means of programs or applications that provide data to multiple users in a
very efficient way. Data can be text files, graphics, or spreadsheets, as well as any type
of multimedia information such as video streams, live broadcasts, or voice
conversations. Other types of devices also connect to these networks, such as scanners,
printers, and fax machines, which are used by many users on the network. Centralizing
the processes of printing, scanning, and faxing into specific network devices that are
very fast and have great amounts of memory saves money and allows for better support
and service.

How is this information exchanged on a given network?


Data is converted into streams of 1s and 0s called bits, which are sent across physical
connections to a receiver in the form of electrical, optical, or microwave signals, to
mention a few. These signals traverse the physical network, and when they reach their
destination, the receiver converts those signals back into bits, which are reconstructed
by the application that was used to send the information. The sending of 1s and 0s
across a network is called digital transmission.

How are these networks interconnected?


There are specifications created by international organizations, such as the Electronic
Industries Association (EIA) http://www.eia.org or the Institute of Electrical and
Electronic Engineers (IEEE) www.ieee.org, which define the types of connectors or
cables that are necessary to physically attach an end device or intermediate node to a
communication system. The standards are very clear with respect to how much data can
be transferred, what speeds to use for the exchange of the data, and how the interface to
that device should be wired. It all depends on the material used for the cable or
connector, how long or wide the cable is, how much shielding it should have to protect
against signal interference, etc. Propagation speed is also relative to the material used
to connect to the network, causing signals to travel at different speeds over different
media.
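As a rough worked example of that propagation point, the delay over a cable can be estimated from the fraction of the speed of light the medium supports. The ~0.66 velocity factor below is a typical ballpark figure for copper twisted pair, assumed here for illustration rather than taken from any particular standard:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def propagation_delay(length_m: float, velocity_factor: float) -> float:
    """Time for a signal to propagate down a cable of the given length,
    where velocity_factor is the fraction of c the medium supports."""
    return length_m / (C * velocity_factor)

# A 100 m twisted-pair run at roughly 0.66c:
print(f"{propagation_delay(100, 0.66) * 1e9:.0f} ns")  # ~505 ns
```

The same formula with a different velocity factor gives the delay for fiber or coax, which is why the text says propagation is relative to the material.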

During the early stages of computer networks, protocols were developed to allow for
communication between nodes. Protocols are sets of rules that are followed in order to
perform a given task. A protocol is the language used by the nodes to exchange
information, and it is relative to the medium being used to transport the data. When
peripherals such as printers and modems were invented, the connections between them
and computers used protocols with very low transfer rates, such as 1200 or 2400 bits
per second, eventually reaching a maximum of 38,400 bits per second. The need to
increase transfer speeds between nodes and between computers and their directly
connected peripherals gave rise to the development of protocols capable of carrying
much higher amounts of data at faster speeds. These are called data link layer
protocols. These protocols had to introduce error detection and correction mechanisms,
and they used packets of data instead of continuous streams of 1s and 0s to
communicate across a system. Packets transported along a network use synchronous
communication, which relies on the notion of a clock: every packet is separated by an
equal amount of time, which allows for faster communication. In contrast,
asynchronous communication has no notion of a clock, and the transfer of data is
sporadic and very bursty.

How are all these computers (nodes) interconnected to form a network?


A network can be created in a number of ways, such as connecting every computer to
every other via a cable or some type of link that connects them directly. This is known
as a point-to-point connection. Computers can also connect to a centralized device, or
concentrator, which will then connect to other centralized devices by means of a link.
This means that all computers connected to each centralized device share the common
link between them. Trying to interconnect every device via a point-to-point link
(directly) would be an impossible task as the network grows. Between networks
(communication systems also talk to other communication systems, something called
internetworking), extending point-to-point connections between each network's
computers would require cables or links long enough to allow for this type of direct
connectivity. This creates non-scalable and impractical networks, as the number of
links required to interconnect every device via a point-to-point link grows
quadratically, as N(N-1)/2.
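The growth is easy to tabulate: a full mesh of N nodes needs one link per pair of nodes, or N(N-1)/2 links in total.

```python
def mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n nodes:
    one link per pair of nodes, n(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 10, 100, 1000):
    print(n, mesh_links(n))   # 10, 45, 4950, and 499500 links respectively
```

Five nodes need only 10 direct links, but a thousand nodes would need nearly half a million, which is why shared links are preferred.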

Fig 1.2: Point-to-point connections versus fully meshed networks. As the number of
nodes N increases, the number of links needed to mesh all nodes grows extremely fast:
N(N-1)/2 links are required. Shared links are preferred.
Since point-to-point connections are not feasible between all computers, sharing links
between networks becomes necessary. All computer networks deployed today have
common links that are used to transport the data for multiple users and multiple
networks as well. Shared links that carry the traffic between end users need to be
implemented with the proper mechanisms to organize the data transfer in a manner
that will not congest the link, overflow any device's data buffers, or slow the various
communication flows that are sharing those links. The intermediate nodes that make up
the network cloud are very fast, have vast amounts of memory, are highly reliable and
robust, and can switch and route the traffic in minimal amounts of time. These devices
are referred to as routers and switches.

Many current environments are connected using a campus design architecture, as shown
in Figure 1.3. It is highly redundant and highly resilient to any outages that may occur
in any of the routing or switching devices that interconnect a network. It does not have
to be fully meshed to be very effective. The links connecting multiple buildings are
used as shared infrastructure where packets traverse without being dropped. These are
links with very high bandwidth capacity. Most enterprise architectures (those of major
corporations in the industry) follow this approach, which is layered to provide specific
functionality at each level.

Generally, users connect to a layer called the Access Layer, or user access. PCs and
workstations have high-speed access to the network via 10 Mbps, 100 Mbps, or 1 Gbps
ports on these switches. Each Access Layer switch has two major uplinks connecting the
switch to a set of routers or gateways, which allow connectivity to the cloud. An uplink
usually refers to a point-to-point connection between two communication nodes. It is
capable of handling all or most of the traffic generated by the systems locally attached
to that specific switch.

The Distribution Layer is the middle layer that introduces the routing functionality to the
user community. This layer is designed to be very fast and optimal. It contains all the
access rules and encryption mechanisms needed on the network, and it may act as a
firewall for the access layer switches. It is also the primary place to locate any type of
server farm or application cluster, because it gives everyone on the network equal
access in terms of number of hops from anywhere in the network. The Core Layer is
meant to switch (route) as fast as possible between LANs. There should be no access
filters, encryption, firewall rules, or any type of extra processing introduced that would
slow down the transfer of packets across the network. It is the cloud.

Fig 1.3: A campus design architecture with Access, Distribution, and Core layers.

Store and Forward Mechanism


When user A sends a message to user B, this information is digitized and segmented.
These segments are divided into packets and sent across the network. Each intermediate
node between the users will take in and process the packets, then pass them along to the
next node without having to wait for the entire message to arrive. When user A sends
packet number 1 to intermediate node 1 and the packet is received by that node, packet
number 2 is already leaving user A towards intermediate node 1. At the same time,
packet number 1 is leaving intermediate node 1 for intermediate node 2 while node 1
receives packet number 2 from user A.

Every intermediate node on the network executes this process, which is called store and
forward transmission. This process is much faster than having user A send the entire
message to intermediate node 1 and having node 1 wait for the whole message to arrive.
In that alternative, data is not grouped into packets but is instead sent as one long stream
of 1s and 0s, and each node forwards to the next only once the entire message has
arrived. For example, if the stream of 1s and 0s takes one minute to travel between two
directly connected nodes and each node has to wait for the entire stream to arrive, it will
take the number of intermediate nodes in minutes, plus one more minute, to transfer the
message from user A to user B. This means that if there are three nodes between user A
and user B, it will take three minutes plus one more minute, or four minutes, for the
message to completely reach its destination.
In contrast, when the same message is broken into equal-size packets (e.g., 60 packets)
and each packet takes one second to travel between two nodes, it will take only one
minute plus the number of intermediate nodes in seconds for the message to reach its
destination. Obviously, packet switching is a more efficient way to send data between
two end points.
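The arithmetic above generalizes: the first unit (whole message or single packet) must cross nodes + 1 links in sequence, and each remaining packet adds only one more link time because the transfers are pipelined. A small sketch, using the chapter's own numbers:

```python
def store_and_forward_time(nodes: int, link_time: float, packets: int = 1) -> float:
    """Total delivery time across (nodes + 1) links when every unit
    (a whole message, or one packet) takes link_time to cross one link.
    The first unit needs (nodes + 1) * link_time; each remaining packet
    adds one more link_time, since the transfers overlap (pipelining)."""
    hops = nodes + 1
    return hops * link_time + (packets - 1) * link_time

# Whole message, 3 intermediate nodes, 1 minute (60 s) per link:
print(store_and_forward_time(3, 60.0))              # 240.0 s = 4 minutes
# Same message split into 60 packets, 1 second per packet per link:
print(store_and_forward_time(3, 1.0, packets=60))   # 63.0 s, about 1 minute
```

The 4-minute versus 63-second comparison is exactly the example in the text: pipelining packets hides almost all of the per-hop waiting.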

Fig 1.4: Routers in the network cloud. A data stream is divided into packets, and the
routers send the packets across the network.

Fig 1.5: A typical packet.
Header: usually contains the source and destination addresses, as well as sequence
numbers to identify and verify that all packets are received by a given destination.
User Data: the actual information from sender to receiver.
Trailer: contains control information specific to the detection and correction of errors.
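The header/data/trailer layout can be illustrated with a toy packet format. The field widths and the use of a CRC-32 checksum in the trailer are choices made for this example, not any real protocol:

```python
import struct
import zlib

def build_packet(src: int, dst: int, seq: int, payload: bytes) -> bytes:
    """Toy packet: header (source, destination, sequence number),
    user data, and a trailer carrying a CRC-32 for error detection."""
    header = struct.pack("!HHI", src, dst, seq)       # 2+2+4 byte header
    trailer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + trailer

def check_packet(packet: bytes) -> bool:
    """Receiver side: verify the trailer's CRC before accepting the data."""
    body, (crc,) = packet[:-4], struct.unpack("!I", packet[-4:])
    return zlib.crc32(body) == crc

pkt = build_packet(src=1, dst=2, seq=0, payload=b"hello")
print(check_packet(pkt))                       # True: packet arrived intact
corrupted = pkt[:8] + b"X" + pkt[9:]           # flip one payload byte in transit
print(check_packet(corrupted))                 # False: trailer detects the error
```

This only detects errors; correction (or retransmission) is handled by higher layers, as the chapter discusses under error detection and correction.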

Internet
One of the most interesting communication systems is the Internet. This system is made
up of millions of computers that interconnect and use common networks to transport
everyone's data. These common networks are made up of interconnections between
Internet Service Providers' (ISPs') high-speed routers and switches, which create what
is called the Internet backbone. These backbones allow for the transfer of data at very
high speeds and provide connections to other, smaller networks, which then provide
connectivity to end users. On very large communication systems like the Internet, there
are also centralized points called Network Access Points (NAPs), strategically located
around the world, that allow for the exchange of Internet traffic specific to a region. In
the United States, there are multiple NAPs that interconnect multiple ISPs, which in
turn connect to other networks.

Fig 1.6: ISPs connect to Network Access Points (NAPs) all over the world. Very
high-speed switches interconnect the ISPs' routers, and the NAPs interconnect with
each other using very high-speed backbones that allow vast amounts of data to be
transferred.

1.1.4 Conclusion

In this session, certain networking fundamentals have been introduced that allow you to
understand what is considered a network and what devices make up a communication
system. Some facts behind the evolution of the computer and the creation of standards
that allow computers to interconnect have been introduced as well, concluding with the
idea of grouping streams of digits into packets for more efficient and faster
communication.

Discussion questions
1. Why is there a need to network systems?
2. Why is data converted into packets?

Chapter 2: LAYERED NETWORK ARCHITECTURES

2.1 Introduction
Network architectures can appear both complex and simple. A network can be defined as a
combination of specific devices that make up a system to allow for transmission.
These networks use specific languages to carry information, allowing all the data to
be delivered without errors and in the most optimal way.

In this session, you will learn what functionality these languages should have, as well as
how this information is transferred across the cloud. The final section will discuss the
three different switching mechanisms that enable packets of data to be delivered to their
destinations.

2.1.1 Network Services and Architecture

What is a network service?


A network service provides end users with a complete communication solution that
allows them to use the system for their own benefit, while also allowing network
administrators to have some control over the data flow. A network service includes
information about how the data will be exchanged or transported, how signaling will
affect the communication flow, and how system usage will be audited to allow for a
charge-back.

One very important aspect of a network service relates to how each client should be
charged for using such a service. Currently, there are very few applications (if any)
developed for the sole purpose of auditing data usage and charging back via some
mechanism that "captures" every user's bit transfer. It is almost impossible to correlate
the amount of bits transferred by a single user to a corresponding charge-back scheme.
Instead, every user is "charged" a fixed monthly fee per connection to a specific network,
regardless of whether the user sends 10 packets per second or one million packets
per second.

The idea behind the charge-back is to pay for the equipment that was purchased and all
the cabling needed to connect to that specific network. Eventually, the switch will be
paid off in a few months, and the rest of the budget is allocated for future infrastructure.

The telephone system is a great example of a system that has been able to meter exact
usage of a network service. Every second spent on the phone is audited and logged
to the user's specific account. The charge-back is relative to the usage, so it is much easier
to trend and follow historically.

Certain actions executed on network resources are defined as services. These actions are
executed in order, following a script or program. A script is a set of steps that are
followed to perform some type of action. Almost everything around us, and especially
computer networks, uses a set of steps or procedures to control how a system
should behave and transport data efficiently and optimally.

A switch has a script specific to how frames should be forwarded across each port and
towards the network, and how these are sent over a physical wire or wireless
infrastructure, etc. The script, or basic set of steps to follow, is specific to each
communication device, and all computer nodes involved in the data transfer
have such scripts embedded within them.

What is a process?
A process can be defined as a set of active scripts or programs running on a system in
order to obtain some type of output or result. Users' terminal nodes or machines, such as
PCs, are known as clients; the high-end servers that serve them run active
programs, or processes, that handle these client connections.

2.1.2 Protocols

What is a protocol?
A protocol is a set of rules that are executed in a given order to complete a given task. In
other words, a protocol is a language that is used to communicate between two entities.
The language is understood by both entities, and both act on it to achieve a common task.
There are multiple protocols, which act differently depending on how the network
infrastructure is connected. There are point-to-point protocols, called direct protocols,
where there is a direct link between two nodes. Other protocols work indirectly between
two nodes: other nodes, such as a switched network, separate the two entities and
proxy the connection between the two end points.

Another important characteristic of a protocol is that it should be able to interact with any
type of device regardless of the brand or make. This means that the protocol follows a
standard that has been defined by a body such as the International Organization for
Standardization (ISO), http://www.iso.org. This group meets on a regular basis to decide how a
protocol will behave given specific functionality and requirements.

These protocols are known as standard protocols. Using non-standard protocols forces
every device to implement a process to translate to its own independent language or
script, which is neither scalable nor optimal. This is similar to a person
attempting to communicate very fast across two different languages, such as English and
Spanish, while translating every word into the person's native language. This
process takes a great deal of time, and every word converted must carry the correct
meaning. In short, if there is an agreement to speak a universal language, then
communication of ideas will flow effortlessly.

What are some of the functions that a protocol should have in order to be accepted as a
standard language? There should be at least the following:

Protocol Functionalities
All protocols have the following functionalities: 1) encapsulation, 2) segmentation and
re-assembly, 3) connection control, 4) flow control, and 5) error control.

Encapsulation
Data is usually passed along from a given application down to the wire or physical
medium. This data has to be enclosed in specific protocol data units (PDUs), which permit
the information to be sent across a network in the correct format and ensure that
it is delivered without any errors.

The encapsulation process introduces control bits, which are used for this purpose. These
control bits are not part of the data and are needed only while the PDU is passed along the
network. Once the data reaches its destination, the control bits are removed from the
protocol data unit as it traverses up to the actual application. This process is called de-
capsulation. The result is essentially the same data that was enclosed by the sender's
encapsulation process.
Fig 2.1

PDU

Header Data

Encapsulated by a given
Protocol Data Unit (PDU)
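As a sketch of encapsulation and de-capsulation, the snippet below builds a PDU by prepending a header and appending a checksum trailer, then strips both at the receiver. The field layout (16-bit addresses, 32-bit sequence number, CRC-32 trailer) is invented for illustration and does not match any real protocol.

```python
import struct
import zlib

def encapsulate(payload: bytes, src: int, dst: int, seq: int) -> bytes:
    """Prepend a header (source, destination, sequence number) and
    append a trailer (CRC-32 checksum) around the user data."""
    header = struct.pack("!HHI", src, dst, seq)        # 2 + 2 + 4 bytes
    trailer = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + trailer

def decapsulate(pdu: bytes) -> bytes:
    """Strip the header and trailer, verifying the checksum first."""
    header, payload, trailer = pdu[:8], pdu[8:-4], pdu[-4:]
    (received_crc,) = struct.unpack("!I", trailer)
    assert received_crc == zlib.crc32(header + payload), "corrupted PDU"
    return payload

pdu = encapsulate(b"Hello", src=1, dst=2, seq=0)
print(decapsulate(pdu))  # b'Hello'
```

The receiver recovers exactly the bytes the sender enclosed; the control bits exist only for the journey.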

Segmentation and Re-assembly


Segmentation is a necessary process, since the data being transferred is normally very
large with respect to the number of bits that make up a specific flow. Data is broken into
multiple segments, each containing a portion of the data flow that the user generated
through a specific application. Segmentation produces segments of a specific size that,
when put back together, regenerate the entire message or data flow.

Segmentation is required to move the flow more efficiently through the
network. Smaller segments are easier to verify, because control bits are added as
the information is transferred; the error detection and correction
mechanism therefore has less work to do given the smaller size of the segment or packet.
Additionally, segmentation allows the store-and-forward mechanism to work faster and gives
everyone an equal opportunity to send information. This guarantees that no single user
will monopolize the entire bandwidth with a segment millions of bytes long (1 byte = 8
bits). However, in spite of the many advantages of the segmentation process, there are
also disadvantages. For example, the smaller the segment, the more interruptions there
will be from the mechanism that checks for errors. This creates higher
CPU utilization, which can eventually affect the transport of data.

Fig 2.2

Data Stream

........010101010101010000011111111111000000000000110101010101010101010......

1 1010100101 2 1111111101 3 1010100101 4 1010100101

Segment IDs

Re-assembly is the process of collecting all the segments that were sent across the
network in the form of packets and putting them back together, ensuring that the end
user receives exactly the data flow that the sender transmitted.
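A minimal sketch of segmentation and re-assembly, tagging each segment with an ID as in Fig 2.2; the segment size and ID scheme are illustrative, not any real protocol's:

```python
def segment(data: bytes, size: int):
    """Break a data stream into fixed-size segments, each tagged with an ID."""
    return [(i, data[off:off + size])
            for i, off in enumerate(range(0, len(data), size))]

def reassemble(segments):
    """Sort segments by ID and rebuild the original stream."""
    return b"".join(chunk for _, chunk in sorted(segments))

stream = b"0101010101010000011111111111"
segs = segment(stream, 10)
# Segments may arrive out of order; re-assembly still recovers the stream.
assert reassemble(reversed(segs)) == stream
```

Because each segment carries its own ID, the receiver does not depend on arrival order.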

Connection Control
In data networks, managing data transfer is one of the most important processes required
to have an efficient and optimal network. Data flow between two end devices or entities
has to be managed and controlled to prevent data loss, congestion, or undesired queuing
(delay) somewhere in the network. If two end devices want to participate in a
connection-oriented process, specific parameters have to be set up before the
conversation begins, so that each side agrees to those parameters and
retransmissions, congestion, and delays are prevented.

In TCP networks, there is a mechanism called "three-way handshaking." This means
that there is some kind of connection establishment, where the sender asks the receiver,
"Can you accept my request to connect to you so that we can exchange traffic?" If the
request is accepted, the receiver sends back an acknowledgement to the sender
indicating that it is okay to send the data when ready. Once this is done, the
connection is maintained and monitored to make sure that each side is really getting
what they agreed to. Maintaining the connection is important in order to keep track of a
specific communication flow. For example, in a client-server scenario where multiple
sources ask the server to handshake with them, there are multiple processes running at
one time.

Finally, the connection has to be properly terminated at a given time. This means that the
sender or the receiver (client or server) has to initiate the termination process by asking
the other party to stop transmitting data, since enough has been received. Connection
control also uses mechanisms such as flow control and congestion control to make sure
that neither device overflows the other. These parameters are all exchanged
during connection establishment. Connection-oriented processes are usually used
when long-term connectivity is required between two end devices. There is also the
notion of ordering or sequencing the segments, which allows for proper delivery between
two end devices. Any segment or packet out of sequence has to be retransmitted by
either end to make sure the data that was sent is the same as the data received.

Fig 2.3

Sender Receiver
"Can we connect?"
"Yes, send me data"
"Here is the data"
"I want to stop"
"We can stop"

PC Server

There is also another type of connection known as connectionless. This means that the
transfer of data is done one packet or segment at a time without any connection setup.
In other words, there is no handshaking. These types of connections are mainly used
with multimedia applications where voice or video streams need to
reach multiple users. There is no congestion control or flow control mechanism that can
tell each entity to stop sending or processing so much data, because there is no
handshaking ahead of time. Therefore, delivery of data is faster, but it lacks quality
controls such as error detection and correction as well as flow control.
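The dialogue in Fig 2.3 can be modeled as a toy state machine. The message names are loosely borrowed from TCP for flavor, but this is an illustration of connection control, not a real TCP implementation:

```python
class Connection:
    """Toy model of connection control: establish, transfer, terminate."""
    def __init__(self):
        self.state = "CLOSED"
        self.received = []

    def handle(self, msg):
        if msg == "SYN" and self.state == "CLOSED":
            self.state = "ESTABLISHED"
            return "SYN-ACK"            # "Yes, send me data"
        if msg == "FIN" and self.state == "ESTABLISHED":
            self.state = "CLOSED"
            return "FIN-ACK"            # "We can stop"
        if self.state == "ESTABLISHED":
            self.received.append(msg)
            return "ACK"                # acknowledge each data unit
        return None                     # no established connection: no reply

conn = Connection()
assert conn.handle("SYN") == "SYN-ACK"
assert conn.handle("Here is the data") == "ACK"
assert conn.handle("FIN") == "FIN-ACK"
```

A connectionless exchange would skip the SYN/FIN steps entirely and simply send data units, with no acknowledgements flowing back.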

Error control
Error control is introduced at different levels of the data exchange. Every bit of the
communication flow is checked to make sure that a "1" sent is a "1" received, and the
same holds for any "0". Detecting and correcting these errors at the bit
level ensures a much faster process than waiting for the actual application to
detect or correct the issue.
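One of the simplest bit-level error-detection schemes is a parity bit. The sketch below shows detection only (no correction) and is far simpler than the CRC codes real links use:

```python
def add_parity(bits: str) -> str:
    """Append an even-parity bit so the count of 1s is always even."""
    return bits + str(bits.count("1") % 2)

def check_parity(bits: str) -> bool:
    """A single flipped bit makes the 1-count odd and is detected."""
    return bits.count("1") % 2 == 0

word = add_parity("1011001")             # four 1s -> parity bit is 0
assert check_parity(word)                # transmitted intact
corrupted = "0" + word[1:]               # flip the first bit in transit
assert not check_parity(corrupted)       # single-bit error detected
```

Parity catches any odd number of flipped bits; correcting errors (rather than just detecting them) requires richer codes or a retransmission request.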

How is information transmitted in a network?

Information generated by an end device, such as an email, a text file, or an image created
with a graphics program, has to be converted into bits in order to be transported across a
network. These bits have to be converted to electrical, optical, or wireless signals, which
is the only way the information can cross the physical medium. These bits are
actually a representation of specific voltages; oscillating between negative
and positive voltages, or vice versa, creates pulses, which in turn are interpreted as
digital 1s or 0s. For example, if a 0 is given a +3 V value and a 1 is given a -3 V value,
the oscillating voltages create the patterns of ones and zeros. This is how the bit stream is
transported on the network. A bit stream is a sequence of bits that follow one another
continuously at a given rate.
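Using the voltage convention from the example above (0 as +3 V, 1 as -3 V; real line codes such as NRZ or Manchester differ in detail), the mapping between bits and signal levels can be sketched as:

```python
def encode(bits: str):
    """Map each bit to a voltage level: 0 -> +3 V, 1 -> -3 V."""
    return [+3 if b == "0" else -3 for b in bits]

def decode(voltages):
    """Recover the bit stream: positive level -> 0, negative level -> 1."""
    return "".join("0" if v > 0 else "1" for v in voltages)

signal = encode("0001101110")
assert decode(signal) == "0001101110"
```

The receiver only needs to sample the line at the agreed rate and compare each sample against the threshold between the two levels.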

Fig 2.4

Text
(Application)
Message
"Hello"

0001101110

Digital
(bits)

Electrical Signals, Optical,


Microwave, etc.

Classes of Communication Services

End-to-end communication services can be classified as synchronous or asynchronous.
There are various explanations for synchronous and asynchronous
communication, but as far as the user is concerned, synchronous communication refers to
information transported in the form of bit streams in which the bits are separated
by a specific delay and arrive at the destination at equal intervals. This means that there
is a notion of a clock that keeps ticking, and every tick represents the delivery of a bit.

In asynchronous communication, the bits are grouped into packets. There is no
fixed delay between packets, so the notion of a clock is not present. The arrival
of each packet can be sporadic, with varying delays. One thing to remember is that
although these packets have arbitrary arrival times, the bits grouped within each
packet are still separated by a fixed delay. Using packets to transfer data is a more
efficient and optimal way to send data, hence the importance of focusing on packet
switching.

Switching

Switching refers to how bits are routed across a network. Do these bits follow a
given path that is permanently in place from beginning to end? Do they follow a path
that exists only while the communication service needs to pass data? Or do
these bits follow independent paths to reach their destination? These three scenarios are
referred to as circuit switching, virtual-circuit packet switching, and datagram packet
switching, respectively.

The introduction of packet switching is important, as the information is grouped into
packets and sent across a network using the store-and-forward mechanism. Each packet
arrives at a packet-switched node, gets stored, and a route determination is then
performed to find out where that packet needs to be sent.

Let us begin by introducing the concept of routing. Routing is based on destination
prefixes. A switch or router does not need to know where the final destination is. The
idea is to find a path to the next-hop switch or router (the closest neighbor) and let that
neighbor decide where to send the packet using the same decision process. Eventually,
the last switch or router forwards the packet directly to the destination. At this point,
we will start concentrating on how the intermediate network really connects. Previously
this was referred to as the network "cloud," which contains routers, switches, gateways,
etc. Multiple users share the links that interconnect these nodes, which make up the cloud.
As you will recall, the best solution and design for a network is not to mesh every device
with a point-to-point connection but instead to share links to pass the data.
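The per-hop, destination-prefix decision can be sketched as a longest-prefix-match lookup. The prefixes and neighbor names below are hypothetical, chosen only for illustration:

```python
import ipaddress

# Hypothetical next-hop table: destination prefix -> closest neighbor.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "router-A",
    ipaddress.ip_network("10.1.0.0/16"): "router-B",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(dst: str) -> str:
    """Pick the matching prefix with the longest mask (most specific)."""
    addr = ipaddress.ip_address(dst)
    matches = [n for n in routing_table if addr in n]
    return routing_table[max(matches, key=lambda n: n.prefixlen)]

assert next_hop("10.1.2.3") == "router-B"          # most specific match wins
assert next_hop("192.168.0.1") == "default-gateway"
```

Each node performs only this local lookup and hands the packet to a neighbor; no node needs a map of the full path to the destination.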

Circuit Switching
The telephone system is a topology based on circuit switching. Switches and links create
a mesh-like topology, which allows circuits to be established between telephones.
The switches establish these circuits when the communication is initiated. Usually the
path followed between two telephones stays the same, as the switches that route the
call have a predetermined path stored in memory, which can be loosely
referred to as a routing table.

The problem with circuit-switching mechanisms is that if the circuit is broken
or disrupted, the information cannot arrive at its destination until the circuit is repaired.
In order to reroute the information, some type of manual intervention is required.
Circuit switching is ideal for synchronous transmissions.

Fig 2.5

[Diagram: nodes N1-N4 in a mesh; a call between two telephones always
follows the same dedicated path through the switches.]

Circuit Switching - Telephone System
Virtual-Circuit Packet Switching
Another switching mechanism that is often used is virtual-circuit packet switching. A
virtual circuit is set up for the duration of the transmission. Packets of a specific
communication flow follow one another, each labeled with a virtual circuit number,
which designates the path. The routing decisions are made when the virtual circuit is set
up. Each node stores these routing decisions in a routing table that indicates, from the
virtual circuit number, the path to be followed by a packet. After the virtual circuit is set
up, a node determines the path to be followed by a packet by looking at the routing
table. This means that the packets of one specific communication flow arrive at the
destination in order, as if they were following a dedicated path.

At the same time, the links along this virtual circuit also carry other communication flows,
which follow the same packet delivery process. Since packet switching is more efficient
when store and forward is used, each intermediate node gets a chance to apply
error control mechanisms to make sure that a packet is not corrupted. If there is a
problem with a packet, the node can request a retransmission of that packet. Virtual-
circuit packet switching is ideal for long-term transmissions.
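The per-node state can be sketched as a small table installed at circuit setup; the port and VC numbers here are invented for illustration:

```python
# Each node's VC table: (input port, incoming VC number) ->
# (output port, outgoing VC number). Entries are installed once, at setup.
vc_table = {
    (1, 7): (3, 12),  # flow entering port 1 as VC 7 leaves port 3 as VC 12
    (2, 7): (3, 4),   # a different flow may reuse VC 7 on another port
}

def forward(in_port: int, vc: int):
    """Per-packet work is a single table lookup plus a label rewrite."""
    return vc_table[(in_port, vc)]

assert forward(1, 7) == (3, 12)
assert forward(2, 7) == (3, 4)
```

Because every packet of a flow carries the same VC label, each node repeats the same lookup and all packets of the flow follow the same path, in order.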

Fig 2.6

[Diagram: nodes N1-N6; data follows a virtual circuit through the network
in the form of packets.]

Virtual Circuit Packet Switching

Datagram Packet Switching


Lastly, there is datagram packet switching. In datagram packet switching, each packet
is independently routed across the network. There is no circuit setup required, and each
packet can follow any given path. This allows a quick reaction to changing conditions
on a given transmission network: the network can dynamically find a different path or
link to get to the final destination. The routing decisions are still made by every
intermediate node on the network. Datagram packet switching is ideal for short-term
transmissions. (When packets arrive slightly out of order, the sliding-window algorithm
can reorder them correctly using the sequence number. The real issue is how far
out of order packets can get, or how late a packet can arrive at a destination. TCP
assumes that each packet has a maximum lifetime, the Maximum Segment Lifetime,
which is recommended to be 120 seconds. Anything arriving after that is considered
lost.)
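The reordering behavior mentioned in the note above can be sketched as a receiver that buffers out-of-order arrivals and releases them by sequence number. This is a simplified sliding-window receiver, not TCP's actual algorithm:

```python
def deliver_in_order(packets):
    """Buffer out-of-order arrivals and release them by sequence number,
    the way a receiver's sliding window would."""
    buffer, expected, delivered = {}, 0, []
    for seq, data in packets:
        buffer[seq] = data
        while expected in buffer:          # release any contiguous run
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# Datagram switching lets packets take different paths and arrive jumbled.
arrivals = [(1, "b"), (0, "a"), (3, "d"), (2, "c")]
assert deliver_in_order(arrivals) == ["a", "b", "c", "d"]
```

A real receiver would also bound the buffer (the window) and eventually declare a missing sequence number lost, triggering retransmission.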

Fig 2.7

[Diagram: nodes N1-N6; packets are routed independently, as there is no
established path.]

Datagram Packet Switching

Conclusion

In this session, you have learned that the languages used to transmit data across the
network are called protocols. These protocols follow a set of rules, or script, that
executes the functions allowing information to be transmitted. Protocols are
often described in an industry or international standard. Additionally, we also briefly
discussed how information is transferred across the network and why information bits
should be grouped into packets to make the transmission more optimal and efficient.
Finally, we described different mechanisms for packet switching across a network.

Discussion Questions
1. Describe why protocols should be standard.
2. Describe switching methods.

Chapter 3: OSI REFERENCE MODEL

3.1 Introduction
In previous sessions, we discussed the importance of understanding how a network is
connected together to form a path for data to travel between end points, along
with the specific languages and protocols that enable end devices to interact with the
networks to which they connect.

Our focus in this session will be the Open Systems Interconnection (OSI) model. The
OSI is a standard reference model for communication between two end users in a
network. This model is a layered architecture model, which is very specific to packet
networks. The idea behind the OSI model is that every device that connects to a network
follows this specific layered architecture, which makes deployment and interoperability
possible. In this session, the seven layers of the OSI model will be discussed in detail,
along with the specific LAN devices that allow for connectivity to a network.

3.1.1 Multiplexing

What is multiplexing?
Multiplexing is the transmission of different flows of information on the same link. This
mechanism allows multiple users to share the links across networks, avoiding point-to-
point connections between every device on a given network. There are various
multiplexing mechanisms, such as statistical multiplexing, time-division multiplexing,
frequency-division multiplexing, and wave-division multiplexing, to mention a few.

Statistical Multiplexing
Statistical multiplexing refers to the storage of all packets in a specific buffer (the
multiplexer), from which every packet is transmitted over time. The packets
can then be transmitted in some order. Transmission of these packets may depend on
how early or late they arrived at the buffer and what priority each packet has (for
example, a voice or video packet should be sent out before a data packet). Statistical
multiplexing keeps the link busy as long as there are bits to transmit. It analyzes the
statistics of a communication flow: its peak loads, how often the flow uses the
link, etc. It gives every flow a better chance to use the link.
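A rough sketch of a statistical multiplexer's buffer, using a priority queue; the traffic classes and priority values are assumptions for illustration (lower number means sent first):

```python
import heapq

# A statistical multiplexer drains its buffer whenever the link is free,
# sending higher-priority traffic (e.g. voice) ahead of bulk data.
PRIORITY = {"voice": 0, "video": 1, "data": 2}

def transmit_order(arrivals):
    """arrivals: (arrival_time, kind, payload). Ties within a priority
    class are broken by arrival time (first come, first served)."""
    buffer = []
    for arrival_time, kind, payload in arrivals:
        heapq.heappush(buffer, (PRIORITY[kind], arrival_time, payload))
    return [payload for _, _, payload in
            (heapq.heappop(buffer) for _ in range(len(buffer)))]

arrivals = [(0, "data", "D1"), (1, "voice", "V1"), (2, "data", "D2")]
assert transmit_order(arrivals) == ["V1", "D1", "D2"]
```

Unlike fixed time slots, nothing is wasted: whenever any flow has a packet queued, the link carries it.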

Time Division Multiplexing (TDM)


Time-division multiplexing refers to the allocation of equal time slots on a link to
facilitate the transfer of data. Multiple users can share the link, and each communication
flow is allocated specific time slots to transport its data. Once again, good
congestion-avoidance mechanisms should be implemented so that data is not
dropped and the link does not overflow. With time-division multiplexing, time slots
are sometimes allocated to a transmission that has no data to send at that
specific time. This means that the link may not be used optimally, as the time slot passes
by empty.
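A minimal sketch of fixed slot allocation, showing how a flow with nothing queued wastes its slot:

```python
def tdm_frame(queues, slots_per_frame):
    """Give each flow one fixed slot per frame, round-robin; a flow with
    nothing to send wastes its slot (the link carries an idle filler)."""
    frame = []
    for i in range(slots_per_frame):
        flow = queues[i % len(queues)]
        frame.append(flow.pop(0) if flow else "idle")
    return frame

flows = [["A1", "A2"], [], ["C1"]]   # flow B has no data queued
assert tdm_frame(flows, 6) == ["A1", "idle", "C1", "A2", "idle", "idle"]
```

Half the frame above is idle even though flow A still had data waiting, which is exactly the inefficiency statistical multiplexing avoids.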

Frequency Division Multiplexing (FDM)
Frequency-division multiplexing refers to the allocation of specific frequencies to allow
multiple users to send their data over a common link. One main channel is broken into
multiple frequency bands to allow for multiple communication flows. An example of
frequency-division multiplexing is television.

Wave Division Multiplexing (WDM)


Wave-division multiplexing refers to the allocation of specific wavelengths, each
representing one specific light color, for the transmission of data over optical fiber cables.
Each wavelength, or channel, carries a time-division-multiplexed signal that has ample
bandwidth, and each wavelength can carry a different type of data format over the
same fiber-optic cable. Conceptually, it is the same as a frequency or time-slot allocation;
the major difference is that the bandwidth, or capacity available for data transfer, is in
the gigabits per second.

In order to gain a better understanding of the OSI model, it is important to have a firm
understanding of how the model was developed using generic layered-architecture
definitions. Multiple layers make up a network architecture. Each layer has specific
functions that perform tasks independently of any other layer of that architecture;
however, when they are put together, they act in unison to form the architecture being
discussed.

These layers can be considered "modules," which can be interchanged with other
architectures. These modules or layers can be upgraded or changed without affecting
the global functionality of the architecture. At the same time, each layer or module can
be developed using standards that allow for interoperability and functionality of
different vendor components without the need to design new modules or layers. For
example, think of the millions of lines of code needed to keep the space shuttle orbiting
the earth successfully. The programming of these lines was not done solely by one
group or person; instead, the coding was broken into many modules, where each module
controls one portion of the process, such as aerodynamics, control systems, electronics,
etc. Once these modules are put together, they interact with each other in unison to
accomplish the final task of deploying to space. Should there be a problem with the
aerodynamics portion, changes or fixes can be made to address the problem without
affecting the other modules.

The same notion holds true for layered network architectures. Each layer is responsible
for specific tasks that control a portion of the data transfer. Each layer of the architecture
has services that are specific to that layer, and when they are put together they form the
communication service as a whole. In a layered architecture, a service of layer N uses
only services of layer N-1. This means, for example, that the services specific to layer 2
use layer 1 services. A service of layer N is executed by peer protocol
entities in different nodes of a communication network. Peer protocol entities are
protocols that work at the same layer level. User A's layer 2 protocol entities are peers
of user B's layer 2 protocol entities. Peering is only done across the same layer, not to the
layer below or above. The messages exchanged by peer protocol entities of layer N are
called layer N protocol data units, or N_PDUs. Data usually enters the top layer (N+1)
of User A and navigates down the layer stack to layer N-n (where n is the lowest layer
available for that architecture) of User A, then onto the network cloud, which also follows
the layered architecture. Since the network is only interested in switching or routing the
information, fewer layers need to be traversed up the stack. Once the specific layer for
routing or switching is reached, a decision is made and the network device then sends that
information down its own stack and across multiple nodes. Once the destination has been
reached, the layered architecture of User B takes the information all the way up the stack
from layer N-n to layer N+1, at which point the information has been delivered. Keep in
mind that peer layers act on each other in a logical way even though the
information physically moves up or down the layered stack.
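The nesting of N_PDUs can be illustrated with a toy stack, where each layer's header is added on the way down and stripped by its peer on the way up. The layer names and bracketed header format are placeholders, not a real protocol:

```python
# Each layer wraps the PDU handed down from layer N+1 with its own
# header; the peer layer on the receiving node strips exactly that header.
LAYERS = ["transport", "network", "link"]

def send(data: str) -> str:
    for layer in LAYERS:                    # down the stack, N+1 to N-n
        data = f"[{layer}-hdr|{data}]"
    return data

def receive(pdu: str) -> str:
    for layer in reversed(LAYERS):          # peers strip in reverse order
        prefix = f"[{layer}-hdr|"
        assert pdu.startswith(prefix) and pdu.endswith("]")
        pdu = pdu[len(prefix):-1]
    return pdu

wire = send("hello")
# '[link-hdr|[network-hdr|[transport-hdr|hello]]]'
assert receive(wire) == "hello"
```

Note that each layer only ever inspects its own header: the link layer never looks inside the network-layer PDU it carries, which is what makes the layers independently replaceable.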

Fig 3.1

[Diagram: an N+1 PDU travels from Node 1 down through Layers N, N-1, and
N-2, across two intermediate nodes, and back up the stack at Node 2.]

Data flowing down the layered architecture stack from Layer N+1

3.1.2 The OSI Model

In the late 1970s, to promote the compatibility of network designs, the International
Organization for Standardization (ISO) proposed an architecture model called the Open
Systems Interconnection reference model (OSI model). The OSI model is a layered
architecture with seven layers. The bottom three layers deal with the transmission of bits,
frames, and packets. The upper four layers deal with the setup of communication
services as well as applications between users. The OSI model was developed for packet-
switched networks.

To demonstrate the OSI model, consider the connectivity between two systems, where
system A is sending an email to system B and the two are separated by many intermediate
nodes. The message typed in system A's email application is converted to a stream of 1s
and 0s destined for system B.

This information traverses the stack (the set of layers that make up the OSI model), where
every layer along the way introduces a header with its own information. As the data
traverses the stack down towards the physical medium, every layer introduces headers
and other important control information; this is known as encapsulation. Alternatively,
the information can move up towards the application layer; this is known as the de-
capsulation process, where all the headers and trailers of the lower layers are stripped
away.

The thousands of bits are placed into packets, each with system B's destination address,
to allow all the intermediate communication nodes to route the packets towards their
closest neighbors. These neighbors have routing tables that allow each packet to take the
correct path to the final destination. Routing is done on a per-hop basis, where the first-
hop communication node (directly connected to the source) only needs to know the
closest peer that provides a path to the final destination. The packets, which contain the
email message, are then disassembled back into the stream of 1s and 0s and fed to the
email application so that the message can be read. When system B needs to
communicate back to system A, the same procedure is followed across the packet-
switched networks.

The OSI Layers

OSI divides telecommunication into seven layers, which are the following:

Physical Layer
When system A needs to send information to system B, the bits have to traverse some
type of physical medium to reach their final destination. These bits have to be converted
into some type of electrical, optical, wireless, or microwave signal. These signals are
specific to the actual medium and have to be converted back to digital information as
they arrive at each intermediate node. Each intermediate node has to pass that
information along across the transport medium as well, so the conversion from digital to
some type of signaling has to happen repeatedly until the information arrives at the
destination, where the final conversion is made from signals back to digital information.
The physical layer deals with the physical interfaces of every device on a network. It also
deals with how the bits are converted into signals for proper transport. Repeaters and
multiplexers, for example, are devices that work at this layer.

Data Link Layer


This layer's function is to take the information provided by upper layers and encapsulate
it in a frame. The frame is made up of a header, the data encapsulated from upper layers,
and a trailer. The header portion of the frame contains the physical source and
destination addresses of the device and its closest physical peer. The data link layer also
makes sure that the physical medium is ready for information transfer. The trailer portion
of the frame carries the error detection and correction information and supervises the
retransmission of frames that arrive incorrectly.

This layer is subdivided into two sub layers:

1. The MAC, or Media Access Control, sublayer is in charge of framing. Framing is the
mechanism by which the information is encapsulated into frames specific to the
topology: a hardware source and destination address is included in the header, and a
trailer is added for error control. The MAC sublayer is also responsible for media
access. It has to use specific languages, or protocols, to communicate with other
MAC sublayers to transport the data. This sublayer also provides error detection.

2. The LLC, or Logical Link Control, sublayer is in charge of flow control and error
correction. The LLC sublayer interacts with upper-layer protocols, assigns
sequence numbers to frames, and tracks acknowledgments. Bridges and switches are
devices that work at this layer. Protocols such as the entire IEEE 802.x suite, Frame
Relay, and SONET, to mention a few, are typical of this layer.
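A hedged sketch of MAC-sublayer framing, with hardware addresses in the header and a CRC-32 frame check sequence in the trailer. This is Ethernet-like in spirit but not a byte-exact IEEE 802.3 frame:

```python
import struct
import zlib

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Header: destination MAC, source MAC, payload length.
    Trailer: CRC-32 over everything before it (the frame check sequence)."""
    header = dst_mac + src_mac + struct.pack("!H", len(payload))
    fcs = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + fcs

def frame_ok(frame: bytes) -> bool:
    """Error detection: recompute the CRC and compare with the trailer."""
    body, fcs = frame[:-4], frame[-4:]
    return struct.unpack("!I", fcs)[0] == zlib.crc32(body)

frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"payload")
assert frame_ok(frame)
assert not frame_ok(frame[:-5] + b"\x00" + frame[-4:])  # flipped byte detected
```

This shows the MAC half of the split: detecting a damaged frame. Requesting retransmission and tracking sequence numbers would fall to the LLC sublayer described above.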

Network Layer
This is the layer used for routing packets. The network layer keeps track of how
congested the network is; using this information, it can then select specific paths for
packets to reach their destination. This layer creates a logical map of the network to
determine which path packets should follow, how to switch them, and how to make the
routing decisions. The network layer also introduces headers that contain logical
addressing, such as source and destination IP addresses. Routers use the destination's
logical address, encapsulated in the header of a network-layer packet, to determine how to
route that packet. The network layer also provides special network facilities such as
prioritization of packet routing: this layer decides which packet is more
important, given a priority assignment. Routers work at this layer. Protocols such as IP
and IPX are very specific to this layer.

Transport Layer
This layer is probably the most important one. It provides a mechanism for the exchange
of data between end systems. The transport layer provides services for both
connection-oriented and connectionless transmissions. With connection-oriented services,
it ensures that the data is delivered error free, in sequence, with no losses or
duplications. It establishes, maintains, and terminates the connection, and provides
mechanisms for multiplexing upper layer applications. It maintains flow control and
other congestion avoidance mechanisms, since during the initial session setup, specific
handshaking, or agreement, takes place to allow for a reliable transfer between two
systems. This method is preferred for long-term connections between two devices that
require very reliable communication.

During connectionless transmissions, packets are delivered across the network without
having to set up a connection between the two end users. This means that the delivery is
best effort and cannot be guaranteed, as there is no handshaking ahead of time.
Connectionless modes are mostly used with multimedia applications, where skipping the
session setup speeds up the communication. The drawback is that, since there is no
handshaking ahead of time as in connection-oriented mechanisms, there is no way to
introduce flow control or congestion avoidance.
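The contrast can be illustrated with Python's standard socket API over the loopback
interface; the message text and the let-the-OS-pick-a-port choice below are arbitrary:

```python
import socket

# Connectionless (UDP-style) exchange: no handshake -- the sender just
# transmits, and delivery is best effort.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # let the OS pick a free port
recv_sock.settimeout(2)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"best-effort datagram", addr)

data, _ = recv_sock.recvfrom(1024)
print(data)

# A connection-oriented (TCP-style) socket would instead be created with
# socket.SOCK_STREAM and require a connect()/accept() handshake before any
# data flows, gaining ordering, retransmission, and flow control.
send_sock.close()
recv_sock.close()
```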

Session Layer
The session layer provides the mechanism for controlling the dialogue between
applications in an end system. It acts as the mediator or referee between upper layers.
The session layer establishes, manages, and terminates sessions between applications. It
also provides data transfer control and management between co-operating application
processes over a session connection. It is also responsible for coordinating data
communication between two presentation layer systems.

The session layer organizes this communication by offering three different modes:

1. Simplex: Communication flows only in one direction, without the possibility of
flowing back the other way. It is like watching a classic television show, where there
is no interaction from the viewer.

2. Half duplex: Communication flows can be sent or received on the same wire, but not
at the same time. For example, think of a walkie-talkie conversation: you only need
to press a button to speak to the other person. However, the person receiving the
message is not able to speak until you release the button. Once the button has been
released, the other person can start sending a message on the same channel.

3. Full duplex: The communication flow can be transmitted and received at the same
time. A common everyday example is the telephone conversation.

The session layer also acts like the transport layer in that it needs to establish,
maintain, and release a connection. Protocols that work on this layer, to mention a few,
are RPC, NFS, SQL, X Window, etc.

Presentation Layer
The presentation layer defines the format of the data to be exchanged between
applications and offers application programs a set of data transformation services.
The presentation layer acts as the translator to the application layer. It makes sure that
the data is readable by the application layer of the receiving station. This layer is
responsible for ensuring the correct formatting is completed on the data being sent across
the network. Additional responsibilities include the introduction of encryption,
decryption, data compression, and decompression, compatibility with the host operating
system, etc.

Application Layer
The application layer provides services for the application programs, thus ensuring that
communication is possible. This layer provides a means for application programs to
access the layer architecture. One service provided by the application layer is to
manage the communication between end user applications by checking the system's
resources and availability. In addition, this layer contains some error control
mechanisms to check the data integrity of both communicating entities. This is the layer
where applications reside. Typical protocols that work on this layer are the File
Transfer Protocol (FTP), the World Wide Web (WWW), the Simple Mail Transfer Protocol
(SMTP), and many others.

Fig 3.2: End-to-end communication through the OSI model. Peer layers on Node 1 and
Node 2 exchange end-to-end communication services (application), end-to-end
communication (presentation), sessions, messages (transport), packets (network), frames
(data link), and bits/signals (physical); intermediate devices participate only at the
network, data link, and physical layers.

LAN Devices

What kinds of components are commonly used in a network topology? Many LAN components
are necessary for even a minimal LAN topology. You need concentrators that allow
multiple PCs to interconnect in a common place using one specific link to the outside
world, which is usually connected to a repeater, a bridge, a router, or another switch.
There are devices that shape the specific signal and create specific patterns, such as
CSUs/DSUs and modems. There are multiplexing devices that split signals into multiple
ones, etc. Let us describe them in more detail:

Repeaters
A repeater is a physical layer device used to interconnect the media segments of an
extended network. A repeater extends the signal from one segment to another: it
amplifies it, re-clocks it, and then retransmits it along to the next segment. A
repeater will not change the signal, nor will it correct or filter it, as it is not an
intelligent device. If the signal has noise, it will carry the same noise across to the
next segment. There is also a limitation on how many repeaters can be connected across
network segments, which has to do with timing and other distance limitations.

Hub
A hub is a physical-layer device that connects multiple user stations, each via a
dedicated cable. A hub can be thought of as a concentrator where multiple devices attach
to share the bandwidth (how many bits per second can be sent over a specific
transmission link) available for transmission. Every device that attaches to this hub
via a cable converts its digital streams into electrical signals that are exchanged
within the internal bus of the hub (a bus is the backplane where all signals are
exchanged). These electrical signals are converted back into digital streams as they
reach another station on the same hub or outside of that concentrator. Hubs were
deployed during the early stages of LAN topologies, where the bandwidth had to be
shared.

Modem
A modem is a device that interprets digital and analog signals, enabling data to be
transmitted over voice-grade telephone lines. At the source, digital signals are
converted to a form suitable for transmission over analog communication facilities.
Modem is short for modulator/demodulator.

LAN Bridge
A LAN bridge is a device that interconnects two separate Local Area Network segments.
This device is a bit more intelligent than a repeater, as it can make forwarding and
filtering decisions within network topologies. The bridge is able to filter and drop
unwanted traffic by checking its internal Media Access Control tables, which are built
by learning the physical topology of the local segment. When a device needs to send
information across the bridge, the frames that leave that device are analyzed, and the
physical address of that device is copied and mapped to a specific port location on the
bridge. It does this by maintaining a cached table, which is kept in memory. These
physical addresses mapped to specific ports of the bridge allow it to exchange traffic
very fast.

A basic bridge can only interconnect segments that use the same data link layer protocols
such as IEEE 802.x. Translational bridges are able to take a specific Protocol Data Unit
(PDU) frame from one topology and convert it to another frame type such as when the
bridge connects a token ring segment and an Ethernet segment. The bridge is a layer two
device (data link layer).

LAN Switch
Currently, the LAN switch is the most used device on LAN networks. The hub gave way to
the switch, as there was a need to switch information much faster and at much higher
bandwidths without sharing them. The switch introduced new mechanisms that allowed
frames to be switched at very high speeds (100 Mbps/1000 Mbps). It also creates a
hardware table that it uses to map physical device hardware addresses (Network
Interface Card addresses, NIC addresses) to specific ports. This MAC table allows frames
to be switched very fast. The first frame is used to determine who the sender is. The
frame contains physical source and destination addresses. The source address is copied
and mapped to a port, and the frame is then flooded to all the available ports as the
switch tries to locate the destination. When the destination device responds with its
own frame, the switch takes that device's source address and maps it to that specific
port. The destination address of that frame is the sender's, which has already been
mapped in the MAC table, hence the path or flow between the two devices is complete. The
second frame and consecutive ones now have a path or flow to follow, which allows for
very high speed switching and less processing. This table is maintained dynamically as
devices join and leave the switch environment. The LAN switch is a layer two device.
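A minimal sketch of this learn-flood-forward behavior (the MAC addresses and port
numbers here are invented for illustration):

```python
class LanSwitch:
    """Maps source MAC addresses to ports; floods until a destination is learned."""
    def __init__(self):
        self.mac_table = {}                      # MAC address -> switch port

    def handle_frame(self, src, dst, in_port, all_ports):
        self.mac_table[src] = in_port            # learn (or refresh) the sender
        if dst in self.mac_table:
            return [self.mac_table[dst]]         # known: switch straight to one port
        return [p for p in all_ports if p != in_port]   # unknown: flood

sw = LanSwitch()
ports = [1, 2, 3, 4]
print(sw.handle_frame("AA", "BB", 1, ports))  # [2, 3, 4]: BB unknown, frame flooded
print(sw.handle_frame("BB", "AA", 3, ports))  # [1]: reply switched to AA's port
print(sw.handle_frame("AA", "BB", 1, ports))  # [3]: the flow is now established
```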

Routers
A router is a device that interconnects multiple segments and can interact with
multiprotocol architectures. It has the capability of taking protocol data units of one
specific protocol and converting them to other types of protocol data units. It takes in
packets from one segment, analyzes where each packet is destined, and, by looking up its
routing table, routes those packets to a next hop neighbor or peer. Routers do not
specifically have to know where every final destination is located on a network. The
routing tables on each router are built from updates that are passed along from other
routers or neighbors that know of a destination via their own neighbors or peer routers.
Eventually, every router in a specific network will have a map of all the possible
destinations available for the routing of packets. Routers make routing decisions by
using software capable of calculating how to reach a destination in the most efficient
and optimal way.
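The table lookup a router performs can be sketched as a longest-prefix match; the
prefixes and next-hop names below are invented for illustration:

```python
import ipaddress

# A toy routing table: destination prefix -> next-hop neighbor.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "neighbor-A",
    ipaddress.ip_network("10.1.0.0/16"): "neighbor-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def next_hop(dst: str) -> str:
    """Pick the most specific (longest) prefix that contains the destination."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))    # neighbor-B (the /16 beats the /8)
print(next_hop("192.0.2.1"))   # default-gateway
```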

Making these decisions in software costs the router additional CPU and memory
processing. The traditional router is now being replaced by something called layer 3
switching, which means the routing algorithms running in software are moved to the
hardware portion of the switch. This enables faster switching and routing decisions, as
the code is embedded on ASICs (Application Specific Integrated Circuits/microprocessors)
using native computer languages such as Assembler or other types of binary coding.

The term CSU/DSU stands for Channel Service Unit/Data Service Unit. This
digital-interface device adapts the physical interface on a data terminal equipment
(DTE) device (such as a terminal) to the interface of a data circuit-terminating
equipment (DCE) device (such as a switch) in a switched-carrier network. The CSU/DSU
also provides signal timing for communication between these devices (WAN
implementations).

An ISDN Terminal Adapter (ISDN: Integrated Services Digital Network) is a device used to
connect ISDN Basic Rate Interface (BRI) connections to other interfaces such as
EIA/TIA-232. A terminal adapter is essentially an ISDN modem.

3.1.3 Conclusion

In this session, the different methods available for multiplexing have been discussed.
These mechanisms are used in many of our existing network topologies. Multiplexing is
very important, as it is necessary for sharing the links that provide access to millions
of users over a topology like the Internet. The layered architecture is embodied in the
Open System Interconnection model, or OSI model. All packet-switched networks, as well
as every device that uses TCP/IP as its protocol suite, broadly follow this model. There
are seven layers in the OSI model, and each layer has its own functionality. This
modular design allows for independence between layers, where a fix in one layer will not
affect another. It also gives each layer specific responsibilities for performing a
given task. In addition, the common LAN components that make up a network were explored
in this session.

Discussion Questions
1. Describe briefly some functions and responsibilities of every layer of the OSI model.
2. Describe how a switch switches its frames.
3. Why is it better to do layer three switching than traditional routing?

Chapter 4: INTRODUCTION TO INTERNETWORKING

4.1 Introduction

This session introduces the basics of a Local Area Network. What is a LAN? How many
different kinds of LANs are there? What are the most commonly used topologies for these
infrastructures? What data link layer protocols are used for the transmission of frames
across a network? And finally, what is the difference between Ethernet and Token Ring?

4.1.1 LAN Systems

A LAN is a high-speed, fault-tolerant data network that covers a relatively small
geographic area. It typically connects workstations, personal computers, printers, and
other devices. LANs provide multiple advantages to users, as they allow shared access to
servers or applications that provide a service. For example, there is no need to have
every single user connected to a local printer for printing purposes. This would be a
costly deployment, and it would be very difficult to support that number of printers per
floor or LAN. Instead, one printer that can print very fast, with a large amount of
buffer memory, allows multiple users to print their material quickly and reliably. The
key is that the printer has to be powerful enough to handle a multitude of users.

The same setup can be applied to client-server communication. Not every client has an
individual server to access and use for data entry, programming, email, etc. That would
not be a scalable solution, as multiple servers running the same application could each
interact with only one user in such a one-on-one scenario. If the information changed on
one server, it would have to be downloaded manually to the other users' servers. This is
not acceptable, as it becomes a huge administrative overhead. Instead, very powerful
servers connect to the LAN and allow multiple user connections to the application stack,
where information can be shared and updated in real time. This type of LAN, where users
have access to applications on a server or printer, is referred to as a high-speed
office network, using 100 Megabits per second connectivity to the desktop.

There are different types of LANs used for different types of connectivity. Let us take a
closer look.

4.1.2 Backend Networks

Backend networks are very specific for the interconnection of very large and high-speed
systems such as mainframes, supercomputers, and storage devices, backup and restore,
etc. Usually, these types of LANs are not connected to an end user device. The only
traffic that is meant to be traversing the Local Area Network is server traffic. Think of
backend networks as the networks that process the majority of data in a specific
application.

For example, company A offers some type of e-commerce functionality to the public or its
employees. The front-end access to this application is probably a secured network with a
web browser compiling all the information requested by the end user. The front end is
the graphical interface that the user sees when he or she logs into the service in
question. Let us say the user now wants to perform a trading operation. A specific link
or URL is highlighted on the web page for the user to process the transaction. When the
user clicks the URL, the system sends the request to a backend network, where multiple
backend systems analyze the request and produce an output. The output is fed back to the
client via the user interface. The crunching of the data was done at the server level at
high speeds, and probably replicated across multiple servers in case one of the servers
becomes unavailable.

Fig 4.1: Backend networks. Servers running application X are dual attached: one side to
the production (user) network, and the other side to the backend network used for
server-to-server communication. Users request information from application X over the
production network.

4.1.3 Storage Area Networks

Now, think of an application that requires continual backup and storage of information in
case the application crashes or malfunctions. Storage networks provide a historical and
chronological way of saving data to servers that can store the information for long
periods.

Typical characteristics of backend and storage networks should include the following:

1. High data rate transmissions that allow these types of transactions.
2. High-speed interfaces that enable the devices to be interconnected using optical or
other interfaces that allow for the exchange of Gigabits per second. These
interfaces have to contain a lot of buffer space to store and process frames at
very high speeds.
3. Special medium access control (MAC) techniques to enable the reliable and
efficient use of the shared or switched network.
4. Confinement to a single building or group of buildings; like most LANs, they are
not intended for deployment across large distances.
5. Few connected devices, since these networks do not serve end user devices.

Usually, the storage, backup, backend, and other similar networks do not use the same
infrastructure as the user networks. The production network should be both physically
and logically separated from them at all times. Backup and storage network traffic
should never flow over the same network topologies as production, day-to-day traffic.
This avoids congestion issues and clearly separates the production environment from the
purely system-oriented one. The most common way to do this is to attach every system to
both worlds, which enables the data to traverse the servers. There are usually
high-speed connections between storage/backup networks using routers dedicated only to
this traffic. Production traffic does not take the path over the backup network, and
vice versa.

Fig 4.2: A separate infrastructure segregates backup/storage traffic from production
traffic. Switches in NY and NJ attach to a dedicated backup/storage network through
routers R1 and R2, linked by very fast fiber-optic multiplexer connections that provide
large amounts of bandwidth; the production networks connect separately through routers
R3 and R4.

4.1.4 Backbone LANs

A backbone LAN is a LAN used to interconnect multiple segments via a repeater or tap
to access the backbone and exchange information. The typical backbone LAN (Ethernet)
was deployed along a building's structure (literally forming a backbone that floors
could tap into to transfer data). The long Ethernet cables were laid from the top to the
bottom floors and could not be longer than 500 meters, with a 50-ohm terminator
(resistor) at each end. This Ethernet cable had a diameter of 0.4 in, and taps were
separated by multiples of 2.5 meters to prevent adjacent taps' signals from reflecting
and adding phase to the signal, which causes distortions.

There are also other types of Ethernet topologies covering shorter distances (about 200
meters), as well as Unshielded Twisted Pair, which has a distance limit of about 100
meters. These are referred to as 10Base5 (10 meaning 10 Megabits per second; Base
referring to baseband, which means only one signal on the wire; and 5 referring to the
distance limitation, in this case 500 meters), 10Base2, and 10BaseT respectively.

These specific parameters had to be followed if one was to call the cable in question an
Ethernet backbone cable. Every floor connected to specific hubs that had a port tapping
into the backbone, allowing every floor to participate in exchanging traffic on the
backbone. The backbone was shared and typically ran at speeds of 10 Mbps (early 1990s).
This type of backbone forced every segment to SHARE a bandwidth of only 10 Mbps.

This meant that there were a lot of collisions, retransmissions, and over-subscription
(more clients than the actual capacity of the cable with respect to available
bandwidth). If one user sharing the bandwidth with 1000 other users connected to the
backbone grabbed all of the available bandwidth, because of a large file transfer for
example, everyone else had to fight for what was left or drop communication until
bandwidth became available.

Switching mechanisms and new backbone topologies gained popularity and eventually
displaced the shared backbone LANs in favor of faster delivery. For example, the
collapsed backbone enabled multiple switches to be stacked together, creating a logical
bus across multiple switches while switching all frames at high rates of speed.
Switching is preferred because it gives every user a chance to use the entire bandwidth
of either 10 Mbps or 100 Mbps at any given time, because the switching mechanism
switches very fast across each port. Switching architectures are now used for almost
every LAN backbone deployment.

Fig 4.3: A 500-meter backbone floor segment with taps separated by multiples of 2.5
meters and a 50-ohm terminating resistor at each end.

4.1.5 Topologies

The common topologies for LANs are bus, tree, star, and ring. The bus is a special case
of the tree, with only one trunk and no branches. Most of these topologies use MAC
protocols or data link protocols, such as the IEEE 802.x protocols, to access media over
bus topologies, ring topologies, wireless environments, fiber optic channels, bridging
and switching, etc. Each protocol's development is steered by a committee, which meets
regularly and tries to improve its functionality. Some of the common committees include
the following:

1. The IEEE 802.3 committee deals with Ethernet issues:
   http://grouper.ieee.org/groups/802/3/index.html

2. The 802.5 committee deals with Token Ring issues:
   http://www.ieee802.org/5/www8025org/

3. The 802.11b committee deals with wireless issues:
   http://grouper.ieee.org/groups/802/11/index.html

The name 802 originated during the first meeting of the IEEE
(http://standards.ieee.org/getieee802/) in February 1980, when subcommittees were formed
to break down the data link layer protocols.

Bus and Tree Topologies


For a bus topology, all stations are attached through appropriate taps or repeaters that
allow each device to transmit on the bus. Because the medium is shared, transmission on
a bus is effectively half duplex: a device can send or receive, but not both at the same
time. The old type of bus topologies, like the building backbone LANs discussed
previously, had taps attached to every device, and each end of the bus had to have a
terminator or resistor, which served to dissipate the signal once it was no longer
needed.

The tree topology is a generalization of the bus topology. The transmission medium is a
branching cable with no closed loops. One or more cables start at the head-end, and each
of these may have branches. The branches in turn may have additional branches to allow
quite complex layouts.

In bus topologies, all stations can receive a transmission from any one station. This
means that there has to be a mechanism that directs the data to the correct device and a
mechanism that regulates transmission. To achieve this, data is sent across in the form
of frames (small blocks of data that contain headers and control information). Each
station on the bus has a unique address or identifier, which is included in a frame's
header portion as a destination or source. With this functionality, the transmission of
data is kept between a source and a destination, and each station waits for its turn to
send data frames across the medium. This can be considered a type of traffic regulation.
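The address check that each attached station performs can be sketched as follows
(station names are invented for illustration):

```python
def station_receive(my_address: str, frame: dict):
    """Every station sees every frame on the bus; only the addressee keeps it."""
    if frame["dst"] == my_address:
        return frame["data"]       # accept: this frame is for us
    return None                    # ignore: destined for another station

frame = {"src": "station-1", "dst": "station-3", "data": "payload"}
print(station_receive("station-2", frame))   # None -- station 2 ignores it
print(station_receive("station-3", frame))   # payload -- the addressee accepts it
```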

One of the advantages of using a bus topology is that when a node fails, the entire bus
is not affected. An individual node can be removed and replaced without affecting
service to all other nodes attached to the bus. It also has fewer cabling requirements
than a star topology, where there have to be more point-to-point connections between the
central node and each device.

There are also disadvantages, such as contention for bandwidth, which is a problem when
two stations want to send at the same time and do: the information collides and has to
be dropped and retransmitted. There is also less control with respect to security, since
there is no central node watching the network. A device can actually tap onto the bus,
causing possible problems, including security breaches.

Fig 4.4: A bus topology (a single cable with taps and a 50-ohm terminating resistor at
each end) and a tree topology (a branching cable, likewise tapped and terminated).

Ring Topology
The network consists of a set of repeaters joined by point-to-point links in a closed
loop. The links are unidirectional; that is, data is transmitted in one direction only,
so that data circulates around the ring in one direction (clockwise or
counterclockwise).

Each station attaches to the network at a repeater and can transmit data on the network
through the repeater. Data is also transmitted in frames. As a frame circulates past all
the other stations, the destination station recognizes its address and copies the frame into
a local buffer as it goes by. The frame continues to circulate until it returns to the source
station, where it is removed.

A major advantage of a ring topology is that it gives each station an equal opportunity
to send its information. One of the disadvantages is that adding or removing a device
affects the operation of the topology, as the ring has to be broken to allow for the
change.
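The circulate-copy-remove behavior can be sketched as a small simulation (node names are
invented for illustration):

```python
def circulate(ring, src, dst, data):
    """Pass a frame around a unidirectional ring; the destination copies it,
    and the source removes it when the frame comes back around."""
    delivered = None
    start = ring.index(src)
    n = len(ring)
    for step in range(1, n + 1):
        node = ring[(start + step) % n]
        if node == dst:
            delivered = data       # destination copies the frame into its buffer
        if node == src:
            break                  # frame is back at the source: remove it
    return delivered

ring = ["A", "B", "C", "D"]
print(circulate(ring, "A", "D", "hello"))   # 'hello' -- D copied it before removal
```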

Fig 4.5: Ring topology. Node A sends a frame destined for Node D around the closed loop
of repeaters; Node D copies the frame into a local buffer while the original continues
on the ring, and Node A discards the frame when it returns to itself.
Star Topology
Each station is directly connected to a common central node. Typically, each station
attaches to the central node via two point-to-point links, one for transmission and one
for reception. The central node of the physical star topology has two different
approaches for the delivery of data. One is a purely broadcast mechanism, where one
frame from a given station is sent to every device on the topology via the outgoing
links; this topology is a physical star but behaves like a logical bus. The other
approach is to send frames only to the device in question via one outgoing interface.

This means that the central node acts more like a frame switch and does not have to
flood every outgoing link with copies of frames intended for only one of the attached
devices. Each frame is stored or buffered and then forwarded to the destination,
following the store-and-forward mechanism discussed previously. An advantage of this
topology is that the central node has better control of data transmission, as it can
monitor all the traffic flowing across different stations. It can also give specific
priorities to given ports. There are also disadvantages, including the central node
becoming the single point of failure in the topology: if the central node is affected or
cannot control data forwarding across its ports, none of the network devices attached to
it will be able to communicate.

Fig 4.6: Star topology. Frames arriving at the central node can be sent to every device
or switched to individual ports; it is a logical bus topology but a physical star
topology.

4.1.6 Medium Access Control

Medium Access Control is a mechanism that allows each node to access the medium in an
orderly and efficient way in order to use the medium's capacity. A central node can
exercise this control, where the controller allows or grants each node access to the
network. If there is no central node controlling access to the medium, there is some
type of collective determination of how every node attached to such a non-centralized
topology should access the medium. Access to the medium can be granted in an orderly
fashion, specific slots of capacity can be allocated for future transmissions, or nodes
can literally fight for the allocation of capacity. These methods used by MAC protocols
are known as round robin, reservation, and contention, respectively.

Round Robin
Each station, in turn, is given the opportunity to transmit. During that opportunity,
the station may decline to transmit or may transmit its data. If the stations on a
topology are numbered n through N (n being the first node and N the last), the process
starts at node n and continues in order as n, n+1, n+2, n+3, ..., N, then wraps around
again to n. It allocates an equal amount of time and capacity for each node to send its
data, providing equal opportunity to all nodes.
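A round-robin turn sequence can be sketched as follows (station names and queued frames
are invented for illustration):

```python
from itertools import cycle, islice

stations = ["n1", "n2", "n3"]

# Each station gets a turn in fixed order; a station with nothing queued
# simply declines, and the turn passes on to the next station.
queues = {"n1": ["frame-a"], "n2": [], "n3": ["frame-b", "frame-c"]}

sent = []
for station in islice(cycle(stations), 6):   # two full rounds of turns
    if queues[station]:                      # transmit only if data is queued
        sent.append((station, queues[station].pop(0)))

print(sent)   # [('n1', 'frame-a'), ('n3', 'frame-b'), ('n3', 'frame-c')]
```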

Reservation
For streaming traffic such as video or voice, reservation techniques are well suited. In
general, for these techniques, time on the medium is divided into slots, much as with
synchronous TDM. A station wishing to transmit reserves future slots for an extended or
even indefinite period, which guarantees the node capacity on the link.
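A simple sketch of slot reservation (slot numbers and station names are invented for
illustration):

```python
# Time on the medium divided into numbered slots, as in synchronous TDM.
# A station reserves future slots ahead of time and then owns them.
slots = {}   # slot number -> reserving station

def reserve(station, wanted):
    """Grant only the requested slots that nobody has claimed yet."""
    granted = [s for s in wanted if s not in slots]
    for s in granted:
        slots[s] = station
    return granted

print(reserve("video-station", [0, 1, 2]))  # [0, 1, 2] -- all free, all granted
print(reserve("voice-station", [2, 3]))     # [3] -- slot 2 was already taken
```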

Contention
For bursty traffic, contention techniques are usually appropriate. No control is
exercised to determine whose turn it is; all stations contend for time on the medium in
a somewhat disorderly fashion. Contention is mostly used in shared environments, where
every node actually fights for available bandwidth. Contending for bandwidth can cause
already transmitting nodes to be dropped or bumped by another data flow from a different
source node. The competition for bandwidth is akin to survival of the fittest, with the
most powerful and fastest device winning over the others.

4.1.7 MAC Frame Format

The MAC layer receives a block of data from the LLC layer and is responsible for
performing functions related to medium access and for transmitting the data. It makes
use of a protocol data unit at its layer. The PDU is referred to as the MAC frame.

In general, all MAC frames have the following format and are made up of the following
fields:

MAC Control: This field contains any protocol control information needed for the
functioning of the MAC protocol.
Destination MAC Address: The destination physical attachment point on the LAN for this
frame.
Source MAC Address: The source physical attachment point on the LAN for this frame.
LLC: The LLC data from the next higher layer.
CRC: This field is also known as the frame check sequence (FCS) field. It is an
error-detecting code.
Fig 4.7

MAC Control | Destination MAC Address | Source MAC Address | LLC (actual data) | CRC
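The generic layout above can be sketched as a parser; the field widths used here (a
1-byte control field, 6-byte addresses, and a 4-byte CRC-32 FCS) are a simplified,
assumed layout for illustration, not a real MAC format:

```python
import zlib

def parse_frame(frame: bytes):
    """Split a frame into its fields and verify the frame check sequence."""
    mac_control = frame[0]
    dst, src = frame[1:7], frame[7:13]
    llc_data, fcs = frame[13:-4], frame[-4:]
    ok = zlib.crc32(frame[:-4]) == int.from_bytes(fcs, "big")
    return {"control": mac_control, "dst": dst, "src": src,
            "data": llc_data, "fcs_ok": ok}

body = bytes([0]) + b"\xaa" * 6 + b"\xbb" * 6 + b"payload"
frame = body + zlib.crc32(body).to_bytes(4, "big")
print(parse_frame(frame)["fcs_ok"])   # True -- the CRC matches the FCS field
```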
4.1.8 MAC Protocols: The ALOHA Network

The first development of multiple access technology was explored at the University of
Hawaii. This new multiple access technology was called ALOHANET, which can be
referred to as the first modern data network. Dr. Norman Abramson,
(http://winwww.rutgers.edu/focus/Focus1998/Abramson%20Bio.html) developed the
idea, which gave way to what we now know as modern Ethernet.

The ALOHA method refers to a mechanism where each source connected to the network
sends a data frame whenever it needs to. If the frame arrives at the destination, an
acknowledgement is sent back to the source. If no acknowledgement arrives within a
period specifically selected for that purpose, the sending source retransmits the frame.
If the source fails to receive an acknowledgement after a few retransmissions, it gives
up. There can be multiple sources sending at the same time, and these multiple frames
can interfere with each other at the receiver so that neither gets through. This is known
as a collision. The problem with this type of multiple access was that multiple sources
and destinations shared a common data path, so frames collided, causing congestion and
degrading the system's efficiency. Once a collision occurs, the data contained in each
frame is lost.

The ALOHA method worked well for the transfer of data across a few devices; however,
as the network grew in size, it became very inefficient. There was a need to develop new
protocols that kept the ALOHA approach but introduced collision and congestion control
mechanisms to make it more efficient. Dr. Robert Metcalfe, working for Xerox
Corporation, set about inventing what is now known as Ethernet. He took the principles
of ALOHA and created a mechanism called Carrier Sense Multiple Access with Collision
Detection (CSMA/CD), more typically known as Ethernet. To learn more about this work
you can visit
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/ethernet.htm .

How does CSMA/CD Work?


A station wishing to transmit will first listen to the medium to determine whether another
transmission is in progress (Carrier Sense). If the medium is idle, the station will
transmit; if the medium is in use, the station must wait. While the station is transmitting,
another station may also be transmitting, which will cause the frames to collide, and the
data encapsulated in the frames will be lost. To detect this, a station waits a reasonable
period after transmitting for an acknowledgement.

Here is how the algorithm determines access to the medium:

1. If the medium is idle, transmit; otherwise, go to step 2.


2. If the medium is busy, continue to listen until the channel is idle, and then transmit
immediately.
3. If a collision is detected during transmission, transmit a brief jamming signal to
assure that all stations know that there has been a collision and then cease
transmission.

4. After transmitting the jamming signal, wait a random period of time, then attempt to
transmit again and repeat step 1.

This gives stations a more efficient way to access the medium. The algorithm does not
guarantee a collision-free environment: any two stations following the CSMA/CD method
may wait the same random amount of time, which will cause their transmitted frames to
collide again, but at least it is a more manageable solution.
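The four steps above can be sketched in code. The following Python model is illustrative only: the medium_idle and collision_detected callables, the slot time, and the helper names are stand-ins invented for this sketch, though the 16-attempt limit and the truncated binary exponential backoff follow the classic Ethernet scheme.

```python
import random

SLOT_TIME = 1       # one contention slot (arbitrary units in this sketch)
MAX_ATTEMPTS = 16   # classic Ethernet gives up after 16 attempts

def backoff_slots(attempt):
    """Truncated binary exponential backoff: pick a random slot count
    in [0, 2^k - 1], where k is the attempt number capped at 10."""
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1)

def try_transmit(medium_idle, collision_detected, attempt=0):
    """One pass through the four steps. medium_idle and collision_detected
    are callables standing in for the hardware carrier-sense and
    collision-detect signals."""
    while not medium_idle():          # steps 1-2: listen until idle
        pass
    if collision_detected():          # step 3: jam signal would be sent here
        if attempt + 1 >= MAX_ATTEMPTS:
            return "aborted"
        wait = backoff_slots(attempt + 1) * SLOT_TIME
        return ("retry", wait)        # step 4: back off, then try again
    return "sent"

print(try_transmit(lambda: True, lambda: False))  # -> sent
```

Note how the backoff range doubles with each failed attempt, which is what makes repeated collisions between the same two stations progressively less likely.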

For additional information on how CSMA/CD works, you can visit this web site
http://www.erg.abdn.ac.uk/users/gorry/course/lan-pages/csma-cd.html.

4.1.9 Token Ring

The token technique is based on the use of a small frame, called a token, that circulates
when all stations are idle. IBM popularized this topology with its Token Ring products.
A station wishing to transmit must wait until it detects a token passing by. It takes the
token at a specific moment, converts the token frame into a start of frame sequence for a
data frame. A start of frame sequence tells the destination where the actual frame begins
with a special bit pattern. The station then appends the remainder of the data frame to get
it out to the network. When one station is sending data, there is no token on the ring, so
other stations wishing to transmit need to wait until that specific source is done sending.
This data frame will make a round trip around the ring until it reaches the destination.
The destination station copies the frame into its buffer as the data frame passes through it.
The frame then gets back to the source where it is absorbed and taken off the ring at
which time, the source inserts a new token back onto the ring to allow other stations to
seize it and begin communication.

This type of access is very reliable as it gives every node an equal opportunity to send
data as long as the node has control of the token. The token is released after a very short
period (in the milliseconds range) to make sure that everyone has a chance to send.
During very heavy loads, the process works like a round robin process, which is both
efficient and fair. A disadvantage of a token ring is the requirement for token
maintenance, since the loss of the token will prevent further utilization of the ring. There
can also be duplication of the token, which can disrupt ring operation. One station must
be selected as the ring monitor to ensure that exactly one token is on the ring and to
reinsert a free token if necessary.

For more information on Token Rings, you can visit the Cisco System documentation
page at http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/tokenrng.htm .

4.1.10 Conclusion

In this session, you learned that there are many different types of LANs that can be
configured on a network, such as high-speed LANs, storage and backup LANs, and
backbone LANs. Each type of LAN has advantages and disadvantages that are specific to
its purpose and the type of environment it is employed in. The Local Area Network can
be physically configured using different topologies such as bus, ring, star, and tree. The
final topic discussed in this session was specific Media Access Control mechanisms.
These mechanisms govern access to the shared medium; two of the most important
implementations, for bus and ring topologies respectively, are CSMA/CD and Token Ring.

Now that a basic understanding of the principles of internetworking is in place, you will
be able to use this information as a building block for future sessions on routing and
switching.

Discussion Questions
1. How many different Ethernet MAC frames are there?
2. Is Ethernet topology more efficient than a token ring topology? Explain your answer.

Chapter 5: INTRODUCTION TO BOOLEAN BASICS &
BRIDGING

5.1 Introduction
In this session, our focus will be placed on the basics of bridging. The different types of
bridges available and how these bridges interface with a network will be discussed. A
basic understanding of bridges is necessary for a future understanding of how packets are
routed by a router and frames are switched using switches. The Spanning Tree
Algorithm will be presented as a method used to provide a loop-free network. In
preparation for future sessions on IP addressing and subnetting, the fundamental concepts
of binary math will be introduced. Lastly, our attention will turn to the binary system and
the hexadecimal system. The session will conclude with a discussion on the importance
of logical tables.

Bridges

As you will recall from earlier sessions, LAN infrastructures and the devices that connect
to these topologies are used to exchange traffic. Bridges were developed to allow
separate LANs to interconnect across local topologies or over WAN infrastructures. A
repeater is not effective enough: it is not a very smart device, since it takes the signals
that are generated on one end and repeats them on the other side, including errors and
noise. It does not really separate LANs, since a repeater merely extends one LAN between
two different locations. There was a need for smarter devices that could switch a frame
and determine the best path between two points by analyzing source and destination and
checking the bridging paths available within the device. At the same time, the device has
to be able to filter frames not intended for a specific destination in the bridge's
routing/bridging table and drop them.

A bridge is a much simpler device than a router. It works on the data link layer or layer
2. There are two types of bridges: the basic bridge and the translational bridge. The
basic bridge is designed for use between local area networks (LANs) that use identical
physical and data link layer protocols, such as the IEEE 802.x suite of protocols, over
the same physical media, such as Ethernet, Token Ring, or high-speed serial. Using
identical data link layer protocols allows the bridge to process information faster
through its CPU, creating a very fast bridging scenario. MAC frames from one
LAN/WAN topology do not have to be converted to another MAC frame type when
basic bridges are used.

Bridge Protocol Architecture


The IEEE 802.1D specification defines the protocol architecture for MAC bridges.
Within the 802 architecture, the endpoint, or station address is designated at the MAC
level. Thus, it is at the MAC level that a bridge can function. A frame destined for a final
destination is captured by the MAC bridge, stored temporarily, and then sent over the next
segment. The LLC sublayer is not involved because the bridge is simply relaying MAC
frames.

Fig 5.1

  User A               Bridge               User B

  Data                                      Data
  LLC                                       LLC
  MAC              MAC   |   MAC            MAC
  Physical         PHY   |   PHY            Physical

There are many reasons why multiple LANs should be separated by bridges or routers,
such as:

1. Reliability: If there are issues on a device connected to LAN 1, the other LANs
   (LAN 2, LAN 3, etc.) connected to the bridge will not be affected, as the
   separation creates individual domains.
2. Performance: The more devices you connect to a LAN, the less efficient its
   performance will be. If multiple networks are attached through a bridge,
   each segment can grow to its optimal capacity without affecting every other LAN
   attached to the bridge.
3. Security: Different LAN segments have different needs. A Research and
   Development LAN has different traffic patterns than a Human Resources LAN.
   Security is important to protect data that is restricted from the public. A bridge
   allows separate physical interfaces to deal with each LAN's own data patterns.
4. Geography: The obvious reason to use a bridge is to interconnect two separate
   topologies that could be located in different buildings, regions, etc.

Function of a Basic Bridge

The simplest implementation of a transparent bridge is the basic bridge. The functions of
a basic bridge are simple and few:

1. Read all frames transmitted on A and accept those addressed to any station on B.
2. Using the medium access control protocol for B, retransmit each frame on B.
3. Do the same for B-to-A traffic.
4. The basic bridge makes no modification to the content or format of the frames it
receives, nor does it encapsulate them with an additional header. Each frame to be
transferred is simply copied from one LAN and repeated, with exactly the same bit
pattern, on the other LAN.

5. The bridge should contain enough buffer space to meet peak demands. Sometimes
frames will arrive at a faster pace than the bridge can process immediately;
therefore, buffer space should be sufficient.
6. A bridge may connect more than two LANs.

Bridging Mechanisms
A learning bridge will analyze incoming frames, make forwarding decisions based on
information contained in the frames, and forward the frames toward the destination.
There are two types of bridging mechanisms: Transparent Bridging and Source Route
Bridging. Transparent bridges were developed by the Digital Equipment Corporation
and were adopted by the 802.1 committee. Source Route Bridging was developed by
IBM and proposed to the 802.5 committee.

For additional information on Learning Bridges, you can visit the Cisco documentation
page at http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/bridging.htm .

In Transparent Bridging (learning bridge), frames are sent one hop at a time towards
the destination. This means that there is no pre-determined path between source and
destination. Each bridge along the way processes the frame according to its learning
table, which is built by mapping physical device addresses to the ports of that bridge.
source does not need to know the final destination. Each hop makes its own
determination to switch the frame. The bridge must contain addressing and routing
intelligence. At a minimum, the bridge must know which addresses are on each network
to know which frames to pass. Further, there may be more than two LANs
interconnected by a number of bridges. In that case, a frame may have to be routed
through several bridges in its journey from source to destination.

For additional information on transparent bridging, you can visit the Cisco documentation
page at http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/bridging.htm .

In Source Route Bridging, the path between source and destination is predetermined and
included on the frame as it traverses the network. Each frame has a map or topology of
bridges that need to be followed. The addition of this route bridge map is done via a
discovery frame (explorer frame) that goes out prior to sending data between source and
destination and maps the path ahead of time. This is a more deterministic way of
delivering data. If one of the bridges along the path fails, a new discovery frame
has to be sent to update the path information for future data frames exchanged between
source and destination.

For additional information on source route bridging you can visit the Cisco
documentation page at http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/srb.htm .

How does the Learning Bridge work?


A learning bridge examines the source field of every frame it sees on each port and
creates a picture of which addresses are connected to which ports. It will not retransmit a
frame if it knows the destination address is connected to the same port that initiated that
frame.

If the destination is not in the bridge address table, the bridge has to retransmit the frame
on every port except the one it was received on. This is known as flooding the
ports. It is a mechanism that allows the frame to discover where the destination is
located. The idea is that only the first frame of a data flow should be flooded
to every port when there is no entry in the learning table. Once the response from the
destination is mapped, the remaining frames of the same flow will not be flooded to
every port; a flow between source and destination will be formed. If there were no table
caching or memory to store this hardware address map, the bridge would have to process
every frame in order to determine where to send the data, causing very high CPU
utilization.

Now suppose stations A and B start up and A attempts to communicate with D through
two directly attached bridges (creating a loop-free or tree topology). A frame from station
A addressed to D will reach the local bridge on port 1. B1 can now learn that station A is
connected to port 1, but it knows nothing about the final destination D, so it has to
retransmit the frame destined for D to all available ports except the port where the
frame was first heard. This means that bridge B2 (directly connected to a port of
B1) will receive the frame, and since it does not yet know where station D is, it retransmits
the frame to all available ports (except the port where B1 connects to B2). B2 also
examines the incoming frames and learns that A is reachable via the directly connected
port that attaches to B1.

When D responds, its frame reaches B2 via its own local port, at which time B2 learns
that D is directly connected to port 1. B2 already knows how to get to A and sends the
frame to the port where station A has been mapped. When the frame arrives at B1, it
examines the incoming frame and determines that D is reachable via the directly
connected port where the two bridges attach, so it maps it to its table. At this point, B1
and B2 know how to send frames from A to D and back. They can now create a flow
between source and destination. Should B need to communicate with D, the same
process begins. A frame leaves B destined for D. In this case, there is no need to flood
every port to determine where D is, because B1 already has a port assigned to a
physical address representing a path to D from the previous flooding between A
and D.

All that B1 needs to do is look at the source portion of the frame and map the
source address (B's) to the specific port, at which point a new flow can take place
between B and D. To keep the learning table current, entries mapped to a port that have
not been refreshed by any data frames for a set period are aged out and removed from
the table.
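The learn/filter/flood behavior described above can be condensed into a short sketch. This is a simplified model, not a real bridge implementation: the LearningBridge class, its port names, and the handle_frame method are invented for illustration, and a real bridge would also age entries out and run spanning tree.

```python
class LearningBridge:
    """Minimal sketch of transparent-bridge learning and forwarding."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src, dst, in_port):
        """Return the list of ports the frame goes out on."""
        # Learn: map the source address to the arrival port.
        self.table[src] = in_port
        out = self.table.get(dst)
        if out == in_port:
            return []            # destination on the same segment: filter
        if out is not None:
            return [out]         # known destination: forward on one port
        # Unknown destination: flood every port except the arrival port.
        return sorted(self.ports - {in_port})

bridge = LearningBridge(ports=["p1", "p2", "p3"])
print(bridge.handle_frame("A", "D", "p1"))  # D unknown: flood -> ['p2', 'p3']
print(bridge.handle_frame("D", "A", "p2"))  # A already learned -> ['p1']
```

The second call shows why only the first frame of a flow is flooded: D's reply teaches the bridge where D lives, so later frames between A and D take a single known port.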

This kind of bridge topology is only effective in loop-free or tree topologies where there
are no closed loops. Therefore, to truly have a full implementation of a complete bridge
(as it is presently known), the concept of the Spanning Tree Algorithm must be
introduced. Such a bridge must not only listen and learn but must also implement a
spanning tree algorithm to conform to the 802.1D implementation and specifications.

Fig 5.2
B1 B2
3 6 3 6
A 1 2 2 1

B
4 5 4 5

B1 knows that A is connected directly to Port 1

B1 B2
3 6 3 6
A 1 2 2 1

B
4 5 4 5

B1 still does not know where D is so it "floods" all ports except port 1.
B2 also examines incoming frames and knows that A is reachable via
the directly connected port that attaches to B1 (port 2)

B1 B2
3 6 3 6
A 1 2 2 1

B
4 5 4 5

When D responds, B2 knows that D is directly connected to port 1. B2


already knows how to get to A and sends the frame. When the frame
gets to B1, it then examines the incoming frame and determines that
D is reachable via ports 2 of B1 and B2

Spanning Tree Algorithm


The purpose of the spanning tree algorithm is to have bridges dynamically create loop-
free topologies while providing a path between every pair of LANs in the network. What
is a loop? A loop is created when there are alternate routes between two hosts. In a
looped topology, bridges can forward the same traffic indefinitely, which can degrade a
network. Bridges on a network exchange special messages with each other that allow
them to calculate a spanning tree, a subset of the topology which is loop free. These
special messages are called BPDUs, or Bridge Protocol Data Units.

These BPDUs contain the following information to allow the bridges to decide how to
create the spanning tree:

Find one Root Bridge among all the bridges exchanging the BPDUs.
Determine the shortest path distance between the Root Bridge and themselves.
Elect a Designated Bridge for each LAN. If there are multiple bridges on the same
LAN, one of them has to become the designated one. The one selected is the closest
to the Root Bridge and will only forward frames from that LAN to the Root Bridge.

Choose which interface or port, known as the root port, gives them the best path from
themselves to the Root Bridge.
Determine and select ports that should be included in the spanning tree. Only
forward traffic to and from these ports.

BPDUs are sent every 2 seconds on every port in order to ensure a stable, loop-free
topology. How is a root bridge selected? When a bridge first comes up, it assumes that it
is the root bridge and sets its own ID as the root ID in the BPDUs it sends. The
bridge ID is actually made up of two components, as follows:

1. A two byte priority. The switch sets this number which, by default, is the same for all
switches. The default priority on Cisco switches is 32,768 or 0x8000.

2. A 6 byte Media Access Control (MAC) address. This is the MAC address of the
switch or the bridge. The combination of these two numbers determines which switch
will become the root bridge. The lower the number the more likely this switch will
become the root. By exchanging BPDUs, the switches determine which one is the root
bridge. Below is an example of the bridge ID:

80.00.00.00.0c.12.34.56

The first two bytes represent the priority and the next 6 bytes represent the MAC address
of the switch. The structure of a configuration BPDU is shown below:

Bytes   Field
  2     Protocol ID
  1     Version
  1     Message Type
  1     Flags
  8     Root ID
  4     Cost of Path
  2     Port ID
  2     Message Age
  2     Maximum Age
  2     Hello Time
  2     Forward Delay
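The root election reduces to building the 8-byte bridge ID and taking the minimum. The function names and the sample MAC addresses below are hypothetical; the comparison rule, lower priority first and then lower MAC, is the one described above.

```python
def bridge_id(priority, mac):
    """Build the 8-byte bridge ID: a 2-byte priority followed by the
    6-byte MAC address. Lower IDs win the root election."""
    return priority.to_bytes(2, "big") + bytes.fromhex(mac.replace(".", ""))

def elect_root(bridges):
    """Given (priority, mac) pairs, return the pair with the lowest
    bridge ID, as the BPDU exchange would converge on."""
    return min(bridges, key=lambda b: bridge_id(*b))

# All three use the default priority 0x8000, so the lowest MAC wins.
candidates = [
    (0x8000, "00.00.0c.12.34.56"),
    (0x8000, "00.00.0c.12.34.55"),
    (0x8000, "00.e0.1e.aa.bb.cc"),
]
print(elect_root(candidates))  # -> (32768, '00.00.0c.12.34.55')
```

Because the 2-byte priority sits in front of the MAC, any bridge given a lower configured priority will beat every default-priority bridge regardless of its MAC address.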

In Spanning Tree Algorithm, there are 5 Spanning Tree Protocol Port States:

1) Blocking, 2) Listening, 3) Learning, 4) Forwarding and 5) Disabled

Blocking - All ports start in this mode to prevent the bridge from creating a bridging
loop. (20 seconds to Listening mode)
Listening - All ports attempt to learn if there are any other paths to the root bridge.
(15 seconds to Learning mode)
Learning - Similar to the Listening state, except that the port can add the information it
learns to its address table. (15 seconds to Forwarding mode)
Forwarding - The port is capable of sending and receiving data.
Disabled - The port is administratively down and does not take part in frame forwarding
or the spanning tree.
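Adding up the default timers quoted above shows why a newly enabled port can take up to 50 seconds before it forwards traffic. The constants and names in this tiny sketch are invented for the illustration.

```python
# Default 802.1D port-state timers as quoted above.
TRANSITIONS = [
    ("Blocking", 20),    # seconds before moving to Listening
    ("Listening", 15),   # seconds before moving to Learning
    ("Learning", 15),    # seconds before moving to Forwarding
    ("Forwarding", 0),   # port now sends and receives data
]

def time_to_forwarding():
    """Worst-case delay from port-up until the port passes traffic."""
    return sum(delay for _, delay in TRANSITIONS)

print(time_to_forwarding(), "seconds")  # -> 50 seconds
```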

Spanning Tree Algorithm: An Example


To illustrate the need for the Spanning Tree Algorithm, look at the following example, as
explained in Radia Perlman's Interconnections book. Assume that there are two LANs
interconnected by three bridges. Host A is sending from Subnet A to Subnet B. Since
there is no notion of where the destination resides, all three bridges store the frame, look
up their tables to see where the destination is, and forward the frame to Subnet B.
All three bridges learn about host A and add that device to their tables.

By nature, one of the devices will be the first to forward the frame across to Subnet
B. Assume that bridge #3 forwards the frame first. Since each bridge is transparent to
the others, it will look as if host A is directly connected to Subnet B from the point of
view of B1 and B2. Bridge #1 and Bridge #2 will take in the frame, re-compute their
learning tables by re-defining host A as residing on Subnet B, and the frame is then
forwarded to Subnet A.

This has created a loop. To see how this topology begins to degrade, now assume that
Bridge #1 succeeds in forwarding the frame back to Subnet B. Bridge #2 will note that A
is still on Subnet B, but Bridge #3 realizes that host A has now moved to Subnet A. It
then prepares to forward the frame towards Subnet A.

Now assume that Bridge #1 sends a frame onto Subnet A. Bridges #2 and #3 will take
notice that host A has now moved to Subnet A, will re-compute their learning tables,
and will begin forwarding towards Subnet B. Not only has a loop been created, but
frames have also been duplicated out of proportion, causing the network to break. The
introduction of the Spanning Tree Algorithm prevents these loops by blocking specific
ports and only allowing the proper ports to forward the given traffic.

Fig 5.3

Host A

Subnet A

B1 B2 B3

Subnet B

Host B

Binary Numbering versus Decimal Numbering

Binary numbering uses only 0s and 1s. The base unit is called a bit (short for binary
digit). It is a base-2 numbering system in which 1 is the largest digit that can be used
in any position, just as 9 is the largest digit that can be used in any position in a
decimal number. A decimal number is any number that uses the base-10 numbering
system. Each digit in a binary number is multiplied by 2 to the power of the digit's
position in the binary number, with the first position being the power of 0. Any
number to the power of 0 is 1; therefore, 1*(2^0) = 1.

Consider the binary number 101010. It can be written in the more explicit form of
1*(2^5)+0*(2^4)+1*(2^3)+0*(2^2)+1*(2^1)+0*(2^0). This is one way to convert a binary
number to a decimal number. In this case, 1*(2^5) equals 32, 0*(2^4) equals 0, 1*(2^3)
equals 8, 0*(2^2) equals 0, 1*(2^1) equals 2 and 0*(2^0) equals 0, resulting in the
following formula: 32+0+8+0+2+0=42. Therefore, 101010 in binary is 42 in decimal
format.

Compare the binary (base-2) number 101010 with the decimal (base-10) number 101010.
You write out the base-10 number using the same scheme as the base-2 number system
explained above, resulting in 1*(10^5)+0*(10^4)+1*(10^3)+0*(10^2)+1*(10^1)+0*(10^0),
which gives you the formula
(1*100000)+(0*10000)+(1*1000)+(0*100)+(1*10)+(0*1)=101010.

The most common use of binary numbers is in groups of 8 bits, referred to as an
octet or byte. Computer addressing schemes use groups of 4 bytes to represent an
address or a character.

Assume the following binary number: 10101010. You can write it out using the same
system as before: 1*(2^7)+0*(2^6)+1*(2^5)+0*(2^4)+1*(2^3)+0*(2^2)+1*(2^1)+0*(2^0) = 170 in
decimal format. How do you arrive at this answer? You can think about these octets in
terms of the values represented by each position. The positions are 7, 6, 5, 4, 3, 2, 1 and 0.
These are the powers that the base is raised to for each position, and the resulting values in
decimal format are 128, 64, 32, 16, 8, 4, 2 and 1.

You can think of a binary system as a logic system. It can be either True or False or ON
or OFF. To make this conversion much simpler and not have to go through the long
representation of base 2 computation, let us put together a simple scenario that will
always provide the user a decimal value given a binary number.

Example
Convert binary number 10111100 to decimal. This number is 8 bits long.

The best way to show this is using a graphical method. It is much easier to see it this
way. Therefore, draw a box with 8 spaces around the binary number in question. On top
of each box, place the decimal value specific to that position. This comes from the
previous explanation about each bit being multiplied by 2 to the power of the bit's
position in the binary system. Once this is done, take the decimal values above the boxes
whose binary digit is 1 and add them together. This will give you the result you are
looking for.
Fig 5.4

2^7   2^6   2^5   2^4   2^3   2^2   2^1   2^0

128    64    32    16     8     4     2     1

  1     0     1     1     1     1     0     0

128 + 32 + 16 + 8 + 4 = 188
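The box method translates directly into a few lines of Python. This sketch is for illustration only; note that Python's built-in int(bits, 2) performs the same conversion.

```python
def binary_to_decimal(bits):
    """Sum each 1-bit's positional value (128, 64, 32, ... for an
    octet), exactly as in the box method above."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            total += 2 ** position
    return total

print(binary_to_decimal("10111100"))  # 128 + 32 + 16 + 8 + 4 = 188
print(binary_to_decimal("101010"))    # 32 + 8 + 2 = 42
print(int("10111100", 2))             # the built-in agrees: 188
```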

In order to convert decimal numbers into binary, the same principle can be used the
other way around. To convert, first find the highest power of two that does not exceed
the given number and place a 1 in the corresponding position in the binary number. For
example, the highest power of two in the decimal number 36 is 2^5 = 32. Insert a 1 as the
6th digit counted from the right: 100000. In the remainder, 36-32 = 4, the highest power
of 2 is 2^2 = 4, so counting from the right the third zero can be replaced by a 1, giving
the following binary number: 00100100.

The simplest way to display this information graphically is to return to the drawing
boxes. The boxes carry the decimal values computed above: position 0's value is 1,
position 1's value is 2, position 2's value is 4, and the values continue 8, 16, 32, 64,
128, 256, 512, 1024, etc. (take note of this doubling pattern).

Once the first position value is obtained, you can double that value to obtain the next
position. With this information, you can put a 1 on the values that will add up to the
decimal number in question. If the requirement is to convert 36 into binary, you can
simply turn the switches to ON or True where the value is 32 and where the value is 4
as shown in the diagram.

Fig 5.5

128    64    32    16     8     4     2     1

  0     0     1     0     0     1     0     0

To convert 36 to binary, turn ON the bits whose values add up to 36
(32 + 4), resulting in 00100100.
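The highest-power-first procedure can be sketched the same way (the function name and fixed 8-bit width are choices made for this example):

```python
def decimal_to_binary(value, width=8):
    """Test each positional value from the highest box down: turn the
    bit ON if the value still fits, then subtract that value."""
    bits = ""
    for position in range(width - 1, -1, -1):
        if value >= 2 ** position:
            bits += "1"
            value -= 2 ** position
        else:
            bits += "0"
    return bits

print(decimal_to_binary(36))   # -> 00100100
print(decimal_to_binary(188))  # -> 10111100
```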
Logic Tables
When dealing with binary numbers, the rules of addition, subtraction, division and
multiplication are not followed in the same manner as you normally would with the
base-10 numbering scheme. Two very important operations that should be remembered
are the OR operation and the AND operation. The OR operation can be considered to be
like the addition operation in the base-10 numbering scheme, although the only values to
add are 1 and 0.

An OR operation will be performed in the following way:

Assume there are two binary bits available. The combinations that these two bits can
provide are OFF/OFF, OFF/ON, ON/OFF or ON/ON. This is the same as saying 00, 01,
10, and 11. If two bits are ORed together, the answer will always be 1 provided that at
least one of the bits is equal to 1; it will always be 0 when both bits are 0.

B1 B2 OR
0 0 0
0 1 1
1 0 1
1 1 1

The AND operation can be considered to be like the multiplication operation in the
base-10 numbering scheme, although again, the only values to AND are 1 and 0. An
AND operation is performed in the following way. Assume there are two binary bits
available. The combinations that these two bits provide have already been described
above. If two bits are ANDed together, the answer will always be 1 provided that both
bits are 1; otherwise the answer will always be 0, as shown below.

B1 B0 AND
0 0 0
0 1 0
1 0 0
1 1 1
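The two truth tables map directly onto Python's bitwise | and & operators, which is worth verifying because the AND operation reappears in later sessions when subnet masks are applied an octet at a time:

```python
# Reproduce the OR and AND truth tables above, bit by bit.
for b1 in (0, 1):
    for b0 in (0, 1):
        print(b1, b0, "OR:", b1 | b0, "AND:", b1 & b0)

# The same operators work on a whole octet at once. ANDing an octet
# with 11110000 keeps only its top four bits:
print(0b10111100 & 0b11110000)  # -> 176, which is 10110000 in binary
```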

Should there be three bits instead of two, there will be a total of 8 different combinations
of 0s and 1s as shown below.

B2 B1 B0
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1

As you can see, a pattern emerges. If there is 1 bit available, the number of possible
combinations for that bit is only 2: it can be 1 or 0. For 2 bits, the number of combinations
is 4; for three bits, 8; for four bits, 16; for 5 bits, 32; and so on. The pattern can be
written in the following way:

Number of Bits Possible Combinations


1 2
2 4
3 8
4 16
5 32
6 64
7 128
8 256
9 512
10 1024
The number of possible combinations doubles every time you add a bit to the binary
number.

Hexadecimal Numbering versus Decimal Numbering

Hexadecimal (Hex) numbering uses 0-F (A=10, B=11, C=12, D=13, E=14 and F=15). It
is a base-16 numbering system. F is the largest digit that can be used in any position.
This is the same as having 9 be the largest number that can be used in any position in a
decimal number.

Each digit in a hex number is multiplied by 16 to the power of the digit's position in the
hex number, with the first position being the power of 0. Consider the number AA. It
can be written in the more explicit form of A*(16^1)+A*(16^0), which is one way to convert
a hex number into a decimal number. In this case, A*(16^1) equals 160, and A*(16^0)
equals 10, resulting in the formula 160+10=170. AA in hex equals 170 in decimal.
Notice that it takes eight binary digits, 10101010, to write the binary equivalent of 170
(decimal), but it takes only two hex digits to do the same. This is why hex is such a
popular numbering system. The primary use of hex numbering in addressing schemes is
to provide a shorthand method of writing binary octets. You seldom have to deal with hex
numbers greater than two digits. Layer two MAC addresses are typically written in hex.
For Ethernet and Token Ring, these addresses are 48 bits or six octets. A typical MAC
address will be represented in the following way:
For Ethernet and Token Ring, these addresses are 48 bits or six octets. A typical MAC
address will be represented in the following way:

10005ABABCBB

The first six hex digits represent the MAC identifier of the company that manufactures
the Network Interface Card. It is a value assigned by the IEEE to identify the company
that owns and manufactures the interface. In this example, 10005A is an identifier for
IBM. If you follow the rules described for hex conversion, each digit takes a value from
0-15. Given this information, you can write the MAC address above in binary in the
following way:

0001 0000 0000 0000 0101 1010 1011 1010 1011 1100 1011 1011

It is obviously much easier to remember the hex address than trying to remember the 48
bits corresponding to your MAC address.
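The expansion is mechanical because each hex digit is exactly four bits. A short sketch using the MAC address from the text (Python's int() and format() built-ins do the base conversion):

```python
mac = "10005ABABCBB"

# Expand each hex digit to its 4-bit pattern, giving the 48-bit
# address shown above.
as_bits = " ".join(format(int(digit, 16), "04b") for digit in mac)
print(as_bits)
# 0001 0000 0000 0000 0101 1010 1011 1010 1011 1100 1011 1011

# The hex/decimal relationship from the AA example:
print(int("AA", 16))     # -> 170
print(format(170, "X"))  # -> AA
```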

5.1.5 Conclusion

Bridges are devices that allow multiple LANs to be interconnected. These devices are
much more complex than repeaters and require many more parameters to be
manipulated. Some network topologies still use bridges to exchange information,
although the bridge has almost been completely replaced by the router. The Spanning
Tree Algorithm is still in use today, as switching environments need to provide loop-free
topologies to prevent the degradation and collapse of a network. Bridges opened the
doors for routers, as you will see in future sessions.

During this session, the topic of Boolean math or binary math was discussed briefly.
This topic of course, could not be discussed just in a few simple paragraphs as I have
done since Boolean math is a field of study in the mathematics arena, which is complex
and much more detailed than this explanation.

The final section of this session dealt with the basic conversion mechanisms between
decimal and binary numbers; as well as hexadecimal conversion to binary and decimal
numbers. Additionally, logic tables were introduced as the necessary mechanisms
required for performing the routing and switching functions on a network. In future
sessions you will have an opportunity to learn more about these functions.

Discussion Questions

1. Why is using a bridge a better choice than a repeater?


2. How does the learning bridge create a map of the local topology?
3. How can a bridging and switching loop be prevented?
4. What function in Boolean or binary math is similar to multiplication?

59
Chapter 6: IP INTERNETWORKING

60
6.1 Introduction

The information covered in this section is one of most important topics covered in this
course. The ability to understand the IP addressing scheme and the different types of
networks available is necessary for a clear understanding of how the Internet works. As
well, this information is essential for understanding the principles of subnetting.

Our focus in this session is to examine how IP addresses originated and how the Internet
makes sense of each address. Our attention will then turn to the topic subnetting which is
a process that enables a network to expand. The final section will deal with the
importance of the Address Resolution Protocol (ARP). This session is meant to be a
mere introduction to the topic and provides the groundwork for future sessions. You are
encouraged to practice the techniques taught in this session, given that without practice,
future sessions will be more difficult to understand.

6.1.1 Internet Protocol

The Internet Protocol (IP) is a network protocol used on the Internet to communicate and
transport information across nodes. Intranets, Extranets and Internets use this protocol as
the de-facto protocol for routing. It is one of the most important protocols of the TCP/IP
suite of communication protocols. The TCP/IP suite of protocols uses IP as the network
layer protocol and TCP as the transport layer protocol. IP protocol is a connectionless
protocol that uses best-effort delivery of datagrams as well as provides fragmentation and
re-assembly of datagrams, whereas TCP is a connection-oriented protocol that requires
connection setup, maintenance, and termination via a mutual handshake between source
and destination.

A very common comparison is done with the IP protocol and the post office system,
which says the IP protocol is like the actual postal truck that carries the letter written to
the first postal center near the sender. At the postal center (first hop router), destinations
are sorted according to the postal code of the receiver. Intermediate postal centers will
receive the letter and continue sorting until the closest postal center to the receivers
postal code is reached. At, which time, another truck will deliver the letter to its final
destination. The TCP portion will make certain that there is a guaranteed delivery of this
letter, which can be thought of as certified mail. When the other party has received
your certified letter, you are notified by the post office that the party has signed for it.

Many other protocols that pertain to the TCP/IP suite reside on different OSI layers like
in the application layer such as:
File Transfer Protocol (FTP): http://www.ietf.org/rfc/rfc959
Simple Mail Transfer Protocol (SMTP): http://www.ietf.org/rfc/rfc821
Hyper Text Transfer Protocol (HTTP): http://www.ietf.org/rfc/rfc2616

These protocols were invented during the early days of the Internet, which was developed
by the Defense Advanced Research Projects Agency (DARPA) http://www.darpa.mil/, and
other scientific research centers, to allow intercommunication between dissimilar

61
computer systems. IP addresses logically identify a specific host or device on a network.
You can literally see what network the IP address belongs to and how it has been
assigned with respect to that network. It can also help you identify who owns the IP
address and the source that generated the address.

Designers of the IP protocol decided to use 32-bit integer address (4 bytes or 4 octets)
space since at the time common computer processor handled 32-bit words. It was logical
to define an IP address as just another word handled by the processor on each
computer. Routing was also considered when designing the addressing scheme as
routing needs to be as optimal and efficient as possible. Routers only route network
prefixes and not individual hosts. This allows the network to perform better in terms of
finding destinations across networks.

IP Packet Format
The IP datagram or packet contains specific fields that provide certain functionalities,
which are version, IP header length, type of service, total length, identification, flags,
fragment offset, and time to live, protocol, header checksum, source address, destination
address, options, and data.

Version: This field currently determines the IP version being used on the network. The
current version is IPv4. There were other IP versions but version 4 became the standard.
The new proposed standard for IP is called IP next generation of IPv6. This version uses
128-bit integers and includes authentication, QoS, and other features not in used on IPv4.

IP Header Length (IHL): This field indicates how long the datagram header is in 32- bit
words. You will recall that the header contains control information. In an IP packet,
there are 24 bytes of header information.

Type of Service: Type of service is very important when there is a need to prioritize the
packet. It specifies how an upper-layer protocol would like to handle the current packet
by manipulating the bits assigned to this field to give it different levels of importance.
This field is used often with multimedia applications. Video and voice packets have to be
preferred over data packets in a network. Quality of Service is also provided via this
field.

Total length: This field specifies the entire IP packet length in bytes including data and
header.

Identification: This field contains an ID number which identifies the packet should there
be any fragmentation along the way. This ID helps put together the IP packet fragments.

Flags: This field consists of three bits only. The low order bits controls the
fragmentation process. The low order bit specifies whether the packet can be
fragmented. The middle bit specifies whether the packet is the last fragment in a series of
fragmented packets. The third high order bit is not used.

62
Fragment Offset: This indicates the position of the fragments data relative to the
beginning of the data in the original packet, which allows the destination IP process to
properly re-construct the original datagram.

Time-to-Live: This is a counter that gradually decrements, usually from 255 to 0 as the
packet traverses the network. Every hop decreases the counter by one and it is used to
prevent a packet from looping endlessly on a network.

Protocol: This field indicates which upper- layer protocol is expected to process the
information passed onto the IP packet.

Header Checksum: This control field checks for errors on the header field to check for
proper integrity.

Source Address: This is the IP address of the sending node.

Destination Address: This is the IP address of the receiving node.

Options: This is not used to often but allows IP to use various options such as security.
Padding is also important as it adds 0s to the datagram should it need to have the right
byte size without being discarded as a malformed or too short of a packet.

Data: This is the actual users information, which is generated by upper-layer protocols.

Fig 6.1

32 bits

Type of Total
Version IHL Lengh
Service

Fragment
Identification Flags
Offset

Time to Header
Protocol
Live Checksum

Source
Address

Destination
Address

Options
(Padding)

DATA

63
6.1.2 Classes of IP networks

There are very specific ranges that were selected by the designers of the IP protocol that
represent different classes of networks. These were assigned with respect to how large or
small an entity could be and how these networks could be distributed around the world to
make sure routing was optimal. The IP address has been defined in such a way that it is
possible to extract the host ID or net ID portions quickly. Routers, which use the net ID
portion of an address when deciding where to send a packet, depend on efficient
extraction to achieve high speed.

Classification of networks are divided into three main groups, Class A, Class B and Class
C. Class A networks, were allocated for very large corporations or entities which had
millions of hosts available to addressed. Class B networks were intended for middle size
entities which have hosts in the thousands to be addressed. Class C networks were
allocated for small entities or individual users for allocation of addresses to a few
hundred hosts. There are also Class D and E networks. Class D networks are specific for
multicast applications and are assigned to entities, as they need it to develop their
applications. Class E networks are experimental and not in use right now.

In the section below, you will have an opportunity to examine the notion of Classical
networks from A through C in great detail. Classical networks are those networks
considered without any subnetting done to them. They are fully utilized without breaking
them into subnet ranges.

Class A networks
First octet ranges from 0.0.0.0 through 127.0.0.0 (written in decimal). The network 0 is
never assigned. The network 127 (loop back) is used specifically for some local
applications that run internally such as email servers or other web servers on a system
that acts both as client and server hence it is never assigned. All Class A networks have
been assigned and there are no more available networks to hand out at this point.

The usable class range is between 1.0.0.0-126.0.0.0. The highest order bit in the 32-bit
address is always set to zero. For example, a class a network of 122.0.0.0 will be written
this way in binary, for the network field and all zeros for the host field

128 64 32 16 8 4 2 1
0 1 1 1 1 0 1 0. 00000000.00000000.00000000

The numbers on top of the first octet represent the decimal value of each bit integer as
described on session 5.

Why is the highest order bit of the highest octet always 0? Because if that bit was turned
to ON or equal to binary 1, then it will no longer be in a class A network range (1-126). It
would jump to the next class.

64
In a class A network, the highest order 8 bits represent the network and the other 24 bits
left represent the hosts as shown below:

122 . 0. 0. 0
network. host. host .host

Fig 6.2
Decimal Value of first
Octet between 0 -127

First bit is
8 bits 24 bits
always = 0
representing representing
Network Hosts

Class A Network

Another rule implemented is that there should always be (2n-2) hosts available for use
where n represents the number of bits dedicated for hosts. In this case, there are 24 bits
representing hosts, which will provide (224-2) or 16,777,214 devices. This rule is in place
because you need to define a network address as well as a broadcast address for that
specific network. You should not assigned the first address of the range, in this case,
12.0.0.0 or the last address on that network, 12.255.255.255, as the 0's represent the
network (12) and the 255's represent the broadcast address. This is the reason for
subtracting the first and the last host addresses from the range.

Now as an example, Classical IP address 12.12.12.1 represents a host (device, router, PC,
printer, etc) in network 12.0.0.0. Host addresses 12.0.0.1, 12.0.0.255, 12.12.12.0,
12.255.255.254, 12.255.254.255 among others, represent valid hosts on this range but it
is sometimes customary not to assign these to avoid confusions.

In practical worlds, LAN administrators sometimes will not use the hosts with 0s or
255s. To avoid confusion they will start with 12.1.1.1 up to 12.254.254.255 However, it
can be proven that hosts with 0s and 255s will work. Take a look at the following
example (Figure 3) where the machine was configured for 10.10.10 (10 being Class A
network) and it responds to ICMP (Pings). This holds true for Class B and C networks.

65
Fig 6.3

Class B networks
First 2 octet ranges from 128.0.0.0 through 191.254.0.0 (written in decimal). The first
octet is always used to determine the network class. If it is between 128 and 191, the
network is a class B.

The definition for class B says that there are 16 bits for the network and 16 bits for the
host as such

network. network. host .host

Network 138.93.0.0 written in binary,

128 64 32 16 8 4 2 1
1 0 0 0 1 0 1 0 =128+8+2=138

&

128 64 32 16 8 4 2 1
0 1 0 1 1 1 0 1 = 64+16+8+4+1=93

Hence, you need to consider not only the first octet but also the second octet as well when
reading back the network.

The definition also says that you can determine a class B network if the highest 2 bits of
the highest octet in that 32-bit string is 10 regardless of how the next bits are defined as

66
shown below:

10001010.01011101.00000000.00000000 = 138.93.0.0 is the actual string.

If the second highest order bit on the highest octet were turned to 1 or ON, then it would
no longer be a class B network but instead jump to the next class range, which is C.

Fig 6.4

Decimal Value of first Decimal Value of second


Octet between 128 -191 Octet between 0 -255

10

First 2 bits 16 bits 16 bits


always = 10 representing representing
Network Hosts
Class B Networks
Decimal value for Class B
Networks determined by
highest Octet from decimal
ranges 128-191

Again, the host rule is implemented which says that there should always be (2n-2) hosts
available for use where n represents the number of bits dedicated for hosts. In this case,
there are 16 bits representing hosts, which will provide (216-2) or 65,543 devices. This
rule is in place because we need to define a network address as well as a broadcast
address for that specific network.

You should not assigned the first address of the range, in this case, 138.93.0.0 or the last
address on that network, 138.93.255.255, as the 0's represent the network (138.93) and
the 255's represent the broadcast address. This is the reason for subtracting the first and
the last host address from the range.

Now as an example, Classical IP address 138.93.125.12 represents a host (device, router,


PC, printer, etc) in network 138.93.0.0. Host addresses 138.93.0.1, 138.93.0.255,
138.93.12.0, 138.93.255.254, 138.93.254.255 among others, represent valid hosts on this
range but it is sometimes customary not to assign these to avoid confusions as shown
above.

67
Class C networks
First 3 octet ranges from 192.0.1.0 through 223.255.254.0 (written in decimal). The first
octet is always used to determine the network class. If it is between 192 and 223, the
network is a class C.

The definition for class C says that there are 24 bits for the network and 8 bits for the host
as such

network. network. network .host

Network 192.168.30.0 is written in binary,

128 64 32 16 8 4 2 1
1 1 0 0 0 0 0 0 =128+64=192

&

128 64 32 16 8 4 2 1
1 0 1 0 1 0 0 0 = 128+32+8=168

&

128 64 32 16 8 4 2 1
0 0 0 1 1 1 1 0 = 16+8+4+2=30

192.168.30.0 is a class C network and if you had a binary string representing this number
then the highest three bits in the string will always be 110.

You still need to take into consideration the next 21 bits as this make up the network
field.

11000000.10101000.00011110.00000000 = 192.168.30.0 is the actual string

Again, the host rule is implemented which says that there should always be (2n-2) hosts
available for use where n represents the number of bits dedicated for hosts. In this case,
there are 8 bits representing hosts, which will provide (28-2) or 254 devices. This rule is
in place because you need to define a network address as well as a broadcast address for
that specific network.

You should not assigned the first address of the range, in this case, 192.168.30.0 or the
last address on that network, 192.168.30.255, as the 0's represent the network
(192.168.30) and the 255's represent the broadcast address. This is the reason for
subtracting the first and the last host address from the range.

68
Fig 6.5

Decimal Value of
Decimal Value of first
second and third
Octet between 192 -223
Octets between 0 -255

110

First 3 bits 24 bits 8 bits


always = 110 representing representing
Network Hosts
Class C Networks
Decimal value for Class C
Networks determined by
highest Octet from decimal
ranges 192-223

Class D networks are networks assigned to multicast groups. The ranges are from
224.0.0.0 through 239.255.255.255. Class D networks will be discussed in the last
session of this course.

6.1.3 What is a MASK?

A MASK represents the network and how IP addresses are interpreted locally on that IP
network segment. The router uses it, for example, to determine where the network begins
and ends (it uses the logical AND operation to determine the network). There are
classical masks and there are subnet masks. For now, assume the following for classical
networks and masks:

The standard (classical) network mask says that all the network bits in an address are set
to 1 and all the host bits are set to 0. Every single IP address assigned to a node
anywhere has a mask defined otherwise it is not going to work. IP addressing needs a
mask. These need to co-exist for proper IP processing to work. Standard network masks
for the three major classes of networks are:

Class A network mask 255.0.0.0 = 11111111.00000000.00000000.00000000


Class B network mask 255.255.0.0 = 11111111.11111111.00000000.00000000
Class C network mask 255.255.255.0 = 11111111.11111111.11111111.00000000

This is exactly what the definition says. Place a 1 on all the bits that represent the
network and a 0 on the bits that make up the hosts. Therefore, for a class A network, a
classical or standard mask will only have 8 bits, a class B standard mask will only have
16 bits, and a class C standard mask will only have 24 bits.

69
The network mask is not an IP address it is used to modify how local IP numbers are
interpreted locally in that subnet. Routers use this information to decide where that IP
address has come from. When an IP address is provided or configured somewhere in a
device, there HAS to be a mask included. This is the only way that you can tell if the IP
address given is a classical IP address or a subnet address.

Using the explanation above it can be said that:

138.93.0.0 has a classical or standard mask of 255.255.0.0 or


11111111.11111111.00000000.00000000
172.25.0.0 has a classical or standard mask of 255.255.0.0 or
11111111.11111111.00000000.00000000
10.0.0.0 has a classical or standard mask of 255.0.0.0 or
11111111.00000000.00000000.00000000
125.0.0.0 has a classical or standard mask of 255.0.0.0 or
11111111.00000000.00000000.00000000
192.24.1.0 has a classical or standard mask of 255.255.255.0 or
11111111.11111111.11111111.00000000
192.168.30.0 has a classical or standard mask of 255.255.255.0 or
1111111.11111111.11111111.00000000

Do you see the pattern here? It is not an IP address or number. It represents the network,
as you know it. Given a classical network IP, you can easily provide the mask by writing
all 1 where the network field is and all 0 where the host field is.

Finally yet importantly, there are two different ranges of networks. The public domain
or registered networks, which are assigned by the Internet group to any company,
university, or individual requesting, address space for connectivity to the Internet. These
networks are strictly owned by the company, which requests them by paying a yearly fee
to guaranty this network allocation.

A number of different entities allocate address space. The American Registry for Internet
Numbers (http://www.arin.net) manages the Internet numbering resources for North &
South America, the Caribbean, and sub-Saharan Africa. The Asia Pacific Network
Information Center (http://www.apnic.net) manages the Asia Pacific region. The
Reseaux IP Europeens (RIPE) (http://www.ripe.net) manages Internet numbering
resources for Europe, The Middle East, The North of Africa, and sections of Asia

Within the class A, B and C range, there are networks referred to as private networks.
These networks are not assigned to a specific corporation, University or individual as
they are reserved for internal use within any given network, anyone can use them. These
networks cannot route out to the Internet and are meant to be local since company A can
be using the same network as company B.

70
The following are private ranges that can be use to expand your local segments:

Class A: Only 10.0.0.0 with a mask of 255.0.0.0


Class B: Only ranges from 172.16.0.0 through 172.31.255.255
Class C: Only ranges from 192.168.0.0 through 192.168.255.255

6.1.4 Subnetting

The reason for subnetting is very simple. There is a need to create more networks to
utilize the address space more efficiently. This allows for expansion and growth of your
companys infrastructure allowing multiple networks to be deployed in different places
without having to request more address space from the Internet administrating body as
this is very complex and tedious.

Subnetting is implemented by borrowing bits from the host field. For example, a class
A network such as 125.0.0.0 has a classical mask of 11111111.00000000.00000000.00000000.
If you borrow bits from the host field that would mean starting from bit number 9
counting from left to right. Bit 9 is the first bit that begins representing hosts on a
classical A network. Therefore, you can borrow from 1 bit to up to 22 bits to create
subnets. The reason you need to leave the last two bits available for hosts, is because of
the (2n-2) rule, which says, you need to have one network and one broadcast address.
Two bits will give you four combinations (24-2), where the first and the last address are
removed and all you have left are two usable host addresses.

When bits are borrowed from the host field, the number of bits borrowed creates a
number of combinations that will provide the actual number of subnets after all the
combinations have been written. For example, take three bits borrowed from the host
field. This means that there will be 8 different combinations that you can have which
will represent the number of subnets you can obtained from these combinations as
follows:

128 64 32
B2 B1 B0
0 0 0 = 1st subnet
0 0 1= 2nd subnet
0 1 0 = 3rd subnet
0 1 1 = 4th subnet
1 0 0 = 5th subnet
1 0 1 = 6th subnet
1 1 0 = 7th subnet
1 1 1 = 8th subnet

In this example, you had used a class A network, 125.0.0.0 with mask 255.0.0.0. Once
you borrowed bits from the host field, you no longer have a classical or standard
network so you need to apply a new mask. Remember that a mask writes all 1s where

71
the network or sub network is and all 0s where the hosts are. Using this definition, you
obtain the following sub networks:

125.0.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000


125.32.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000
125.64.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000
125.96.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000
125.128.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000
125.160.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000
125.192.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000
125.224.0.0 with a mask of 255.224.0.0 or 11111111.11100000.00000000.00000000

Network subnet host host host


11111111 111 00000 00000000 00000000

The 224 in the new mask is obtained by adding the decimal value of each bits
position.

By subnetting this network, you have now incremented the number of possible networks
that can be used for deployment around your network globally but at the same time you
have reduced the amount of hosts by n number of bits borrowed. In this case, the
number of hosts is (221-2).

Keeping up with the idea of having (2n-2) usable hosts, there are also (2n-2) usable
networks. This means that the first and last subnet is not considered when allocating
network address space. This is not to say that they cannot be used. It is just another
customary deployment to make things simpler.

Subnet Addressing Plan


Thomas A. Maufer in the textbook IP Fundamentals describes another method for
subnetting. When creating a design Maufer states there are four key questions that must
be answered which are the following:

1. How many total subnets does the organization need today?


2. How many total subnets will the organization need in the future?
3. How many hosts are there on the organizations largest subnet today?
4. How many hosts will there be on the organizations largest subnet in the future?

The first step in the planning process is to take the maximum number or subnets required
and round up that number to the nearest power of two. For example, if an organization
needs nine subnets, 23 (or 8) will not provide enough subnet addressing space, therefore
the network administrator will need to round up to 24 (or 16). It is imperative that the
administrator or designer allows for future growth. For example, if 14 subnets are
required today, then 16 subnets might not be enough in two years when the 17th subnet
needs to be deployed. It might be better to allow for more growth and select 25 (or 32) as
the maximum number of subnets. However, as you expand the number of subnets, you

72
simultaneously reduce the amount of hosts as the subnetting process steals away bits
from the host field.

The second step is to make sure that there are enough hosts addresses for the
organizations largest subnet. If the largest subnet needs to support 50 host addresses
today, 25 (or 32) will not provide enough host address space so the network administrator
will need to round up to 26 (or 64).

Defining the subnet mask / extended prefix length


Extended Prefix length is another way of writing a mask. It is a number that tells the
user the number of bits used in the mask. For example, a network can be written as
14.0.0.0/8 or 130.15.0.0/16 or 193.15.1.0/24. The extended prefix length helps the user
determine what type of networks these are, classical or sub netted. In this example, only
8, 16, and 24-bit masks are being used for classical class A, B and C networks
respectively. Any other number reflected on the prefix will tell the user that the network
in question has been sub netted. For example, a network 140.125.114.0/24 tells the user
that 8 more bits have been added to the mask which means that 8 bits were borrowed
from the host field providing (2n-2) more networks or (28-2)=254 subnets and
140.125.114.0 is one of these subnets. It is a short way of writing 140.125.114.0 with a
mask of 255.255.255.0 or 11111111.11111111.11111111.00000000.

Subnet Example # 1
An organization has been assigned the network number 193.1.1.0/24 (255.255.255.0) and
it needs to define six subnets. The largest subnet is required to support 12 hosts. The
first step is to determine the number of bits required to define six subnets keeping in mind
the idea about growth. Since a network address can only be sub netted along binary
boundaries, subnets must be created in blocks of powers of two, i.e., 2, 4, 8, 16, 32, 64,
etc. Thus, it is impossible to define an IP address block such that it contains exactly 6
subnets. Here, the subnet block should be defined as 16 (24) and have 8 unused subnets
that can be reserved for future growth. Selecting only three bits from the host field will
provide 23 or 8 subnets but if you use the notion of (2n-2) usable networks, you would
have left only 6 without any room for growth.

Since 16 = 24, 4 bits are required to enumerate the sixteen subnets in the block. In this
example, the organization is subnetting a /24 so it will need 4 more bits, or /28, as the
extended-network prefix. A 28 bit extended network prefix can be expressed in dotted-
decimal notation as 255.255.255.240. A 28-bit network prefix leaves only 4 bits to
define host addresses on each subnet. 24 = (16) 2 =14 hosts.

Defining each of the subnet numbers


The sixteen subnets will be numbered zero through fifteen. The 4-bit binary
representation of the decimal values zero through fifteen are: 0 (0000), 1 (0001), 2
(0010), 3 (0011), 4 (0100), 5 (0101), 6 (0110) ,7 (0111), 8 (1000), 9 (1001), 10 (1010), 11
(1011), 12 (1100), 13 (1101), 14 (1110) and 15 (1111).

73
In general, to define subnet #n, the network administrator places the binary representation
of n into the bits of the subnet-number field. For example, to define subnet #6, the
network administrator simply places the binary representation of 6 (0110) into the four
bits of the subnet-number first.

Subnet

193.1.1.0 /24 = 11000001.00000001.00000001.|0000|0000

255.255.255.240 = 11111111.11111111.11111111.|1111|0000

Here are the sixteen subnets for this example.

Base net: 11000001.00000001.00000001.00000000 = 193.1.1.0/24

Subnet#0: 11000001.00000001.00000001.00000000 = 193.1.1.0/28


Subnet#1: 11000001.00000001.00000001.00010000 = 193.1.1.16/28
Subnet#2: 11000001.00000001.00000001.00100000 = 193.1.1.32/28
Subnet#3: 11000001.00000001.00000001.00110000 = 193.1.1.48/28
Subnet#4: 11000001.00000001.00000001.01000000 = 193.1.1.64/28
Subnet#5: 11000001.00000001.00000001.01010000 = 193.1.1.80/28
Subnet#6: 11000001.00000001.00000001.01100000 = 193.1.1.96/28
Subnet#7: 11000001.00000001.00000001.01110000 = 193.1.1.112/28
Subnet#8: 11000001.00000001.00000001.10000000 = 193.1.1.128/28
Subnet#9: 11000001.00000001.00000001.10010000 = 193.1.1.144/28
Subnet#10: 11000001.00000001.00000001.10100000 = 193.1.1.160/28
Subnet#11: 11000001.00000001.00000001.10110000 = 193.1.1.176/28
Subnet#12: 11000001.00000001.00000001.11000000 = 193.1.1.192/28
Subnet#13: 11000001.00000001.00000001.11010000 = 193.1.1.208/28
Subnet#14: 11000001.00000001.00000001.11100000 = 193.1.1.224/28
Subnet#15: 11000001.00000001.00000001.11110000 = 193.1.1.240/28

Determining Host addresses for each subnet


According to Internet practices, the host-number field of an IP address cannot contain all
0-bits or all 1 bits. The all 0-bits number represents the base network (or sub network)
number, while the all-1s hosts represents the broadcast address for the network (or sub
network).

Broadcast to Subnet: 193.1.1.240/28  11000001.00000001.00000001.1111|1111


Broadcast to Subnet: 193.1.1.0/24  11000001.00000001.00000001|.11111111

Now assuming subnet #2, we can define the following hosts for that sub network:

11000001.00000001.00000001.00100000 = 193.1.1.32/28

host #1: 11000001.00000001.00000001.00100001 = 193.1.1.33/28

74
host #2: 11000001.00000001.00000001.00100010 = 193.1.1.34/28
host #3: 11000001.00000001.00000001.00100011 = 193.1.1.35/28
host #4: 11000001.00000001.00000001.00100100 = 193.1.1.36/28
host #5: 11000001.00000001.00000001.00100101 = 193.1.1.37/28
host #11: 11000001.00000001.00000001.00101011 = 193.1.1.43/28
host #12: 11000001.00000001.00000001.00101100 = 193.1.1.44/28
host #13: 11000001.00000001.00000001.00101101 = 193.1.1.45/28
host #14: 11000001.00000001.00000001.00101110 = 193.1.1.46/28

Determining the Broadcast address for each subnet


The broadcast address for subnet #2 is the all-1s host address, or:

11000001.00000001.00000001.00101111 = 193.1.1.47

The broadcast address for subnet #2 is also one less than the base address for subnet #3 (193.1.1.48). This is always the case: the broadcast address for subnet #n is numerically one less than the base address for subnet #(n+1).
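These calculations can be checked programmatically. The sketch below uses Python's standard ipaddress module to enumerate the /28 subnets of 193.1.1.0/24 and confirm the base, host range, and broadcast address of subnet #2:

```python
import ipaddress

# Enumerate the /28 subnets of 193.1.1.0/24 and verify that each
# broadcast address is one less than the next subnet's base address.
parent = ipaddress.ip_network("193.1.1.0/24")
subnets = list(parent.subnets(new_prefix=28))   # 16 subnets: .0, .16, .32, ...

for current, nxt in zip(subnets, subnets[1:]):
    # broadcast of subnet #n == base of subnet #(n+1) minus one
    assert int(current.broadcast_address) == int(nxt.network_address) - 1

# Subnet #2 (193.1.1.32/28): base address, usable host range, broadcast
s2 = subnets[2]
hosts = list(s2.hosts())                        # excludes base and broadcast
print(s2.network_address)        # 193.1.1.32
print(hosts[0], hosts[-1])       # 193.1.1.33 193.1.1.46
print(s2.broadcast_address)      # 193.1.1.47
```

The hosts() iterator automatically skips the all-0s and all-1s host numbers, matching the Internet practice described above.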

Fig 6.6: Eight /28 subnets (193.1.1.0/28, 193.1.1.16/28, 193.1.1.32/28, 193.1.1.48/28, 193.1.1.64/28, 193.1.1.80/28, 193.1.1.96/28, and 193.1.1.112/28) attached to a central network (NET). Subnets are useful to separate different LAN domains.
Address Resolution Protocol
The idea behind the Address Resolution Protocol (ARP) http://www.ietf.org/rfc/rfc826 is very simple. Assume one of the hosts on the local segment is a router that is trying to determine which machines are connected to its local interfaces. Packets arriving at that router and delivered directly to connected devices need to be framed using a PDU specific to that topology. Having just an IP address is not enough, because the frame uses physical MAC addresses to determine where each machine physically resides. Therefore, ahead of time, the router broadcasts an ARP request asking the machines for their physical MAC addresses.

The mapping occurs when host A says to the local segment, "I am looking for IP address IPB; respond to me with your MAC (physical) address PB only if IPB is the IP address configured on you." The router or host uses this information to build an ARP table, which is usually refreshed every four hours. The table can also change if the host's ARP cache is cleared; clearing it restarts the process, querying everyone locally attached to respond with their physical addresses.

To summarize:

The Address Resolution Protocol, ARP, allows a host to find the physical address of a target host on the same physical network, given only the target's IP address.

Fig 6.7: The ARP exchange on a local segment with hosts A, D, H, B and a router. Host A broadcasts, asking "Who has host B's IP address? Give me its physical address PB." Only host B replies, sending its MAC address information back to A.
ARP also allows an optimization. If A is sending to B and must first discover B's address by means of the ARP process, the probability that B will soon need to find out where A is in order to send data back is very high. Therefore, to avoid extra traffic, A includes its own IP-to-physical-address binding in the request it sends to B, which allows B to learn about A immediately. At the same time, since A uses a broadcast packet, everyone on the wire sees that binding and learns where A is.
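The request/reply exchange and the opportunistic learning described above can be sketched as a toy ARP cache. All host names, IP addresses, and MAC addresses below are hypothetical; real ARP operates on raw Ethernet frames as specified in RFC 826:

```python
# Toy model of ARP learning on one segment (hypothetical addresses).
class Host:
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.arp_cache = {}          # IP -> MAC bindings learned so far

    def arp_request(self, target_ip, segment):
        """Broadcast 'who has target_ip?' to every host on the segment."""
        for other in segment:
            if other is self:
                continue
            # Every listener learns the sender's binding from the broadcast.
            other.arp_cache[self.ip] = self.mac
            if other.ip == target_ip:
                # Only the owner of target_ip replies with its MAC.
                self.arp_cache[other.ip] = other.mac

a = Host("10.0.0.1", "aa:aa:aa:aa:aa:aa")
b = Host("10.0.0.2", "bb:bb:bb:bb:bb:bb")
c = Host("10.0.0.3", "cc:cc:cc:cc:cc:cc")
segment = [a, b, c]

a.arp_request("10.0.0.2", segment)
print(a.arp_cache)   # A learned B's MAC from the reply
print(b.arp_cache)   # B learned A's MAC without sending its own request
print(c.arp_cache)   # even bystander C learned A's binding from the broadcast
```

Note how the single broadcast populates every listener's cache with A's binding, which is exactly the traffic-saving behavior the paragraph above describes.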

6.1.6 Conclusion

In this session, you have examined the functions of an IP address, how these addresses are represented on a network, the different classes of networks, and how and why to subnet. The session concluded with a brief look at the ARP protocol.

This session requires a great deal of practice, and extra examples have been provided in the assignment section to allow you to practice further.

Discussion Questions

1. Why are there different Classes of networks?


2. Why do we need to subnet?
3. How do we know whether a network is classical or subnetted?
4. Please complete these examples:

Subnet

1. 140.50.0.0/16 into 7 sub networks. (Remember powers of 2)

2. 25.0.0.0/8 into 16 sub networks.

3. 220.15.30.0/24 into 32 sub networks.

4. Given subnet 8.1.1.0/24, how many total subnets are there with how many
hosts?

5. What is the host range of sub network 192.169.32.16/28?

6. Write the mask in decimal for 10.4.5.6/25?

7. Given classical network 128.129.0.0/16, subnet it to provide 32 subnets.

8. What is the maximum number of bits you can borrow from the host
field if you want to subnet a classical A network? B? C?

9. Subnet 128.1.0.0/16 by borrowing 7 bits from the host field.

10. How many hosts are there if the subnet is 192.168.10.0/30?

Suggested reading: IP Fundamentals by Thomas A. Maufer. Chapters 2 and 3.

Chapter 7: IP ROUTING PRIMER

7.1 Introduction
Up to this point, references have been made to what packets are and what they actually contain and carry across a network. Routing is the function that allows these packets to reach their final destination. Routing processes, performed by nodes called routers, usually run in software: routing decisions are made in the operating system of the router. Sometimes the process is instead done at the hardware level, where much of the routing-decision functionality is moved onto an ASIC or microprocessor, allowing the algorithm to work at chip level and make routing decisions extremely fast.

The focus of this session is on how a router makes the decision to route a packet. Additionally, one type of dynamic routing algorithm, the distance vector algorithm, will be discussed in detail. The session will conclude by examining the differences between classful and classless routing protocols. Finally, there is a practical lab scenario created specifically to show some of the characteristics and parameters discussed in this session.

7.1.1 What is routing?

Routing is the process of moving packets across a network, using a router, from a source to a destination. Routers are multi-protocol devices used to exchange these packets and hence are layer three devices (the Network layer of the OSI model). Multi-protocol means that these devices can handle several network-layer protocols, such as Novell IPX alongside IP, and can also translate between dissimilar frame PDUs, for example converting Token Ring frames to Ethernet PDUs.

Routing is necessary in order to move traffic flows from one network to another. Multiple routers can be traversed when sending packets across a network. Each router's function is to determine the best path to get the specific packet flow from source to destination. Routers know the entire topology by exchanging routing updates that are kept in routing tables. These tables are analyzed every time there is a need to determine the best path to the destination. The topology is refreshed through continuous updates via hello packets spaced every 30, 60, or 90 seconds, depending on the routing protocol in use. These hello packets carry the routing information for the entire topology. Routers then use these updates to keep their tables in a steady state and only recalculate the tables when there are changes in the topology.

Changes in the topology can be caused by network outages, by additional networks being introduced to the topology, by routers failing due to hardware issues, and so on. Routing is based on destination prefixes. The main goal of a router is to find a way to deliver packets to the closest router advertising the destination network. Routers do not know about specific hosts on any network, because keeping a table on a router with every host on the network would require incredible amounts of memory and very powerful hardware to maintain such enormous routing tables. This approach would not be scalable or even feasible.

Processing takes much less time when all that needs to be looked up is a network destination. The Internet right now has over 110,000 network prefixes listed in routing tables. Imagine if every host on the Internet were cached (stored) on every router connected to the Internet; it would be impossible to keep track of them all.

Traditional routing, which is done in software, uses specific algorithms to determine the most optimal and efficient path to a destination. Routing infrastructures are now being deployed with Layer 3 switching architectures. This means that the routing mechanics have been moved down from the software level to the hardware level, onto the ASIC (Application Specific Integrated Circuit), increasing processing power and allowing for faster routing calculations with less CPU load.

Table Driven IP Routing


The usual IP routing algorithm employs an Internet routing table (IP routing table) on each machine that stores information about possible destinations and how to reach them. Each router acts as a gateway for either the source or the destination hosts. When a packet needs to be routed from a specific host A on network N, the host's gateway G (in this case, the router) looks up its routing table and decides the best path on which to send that packet towards the destination.

When host A sends its packet destined for host B on network X, gateway G intercepts that packet. It extracts the destination prefix from the destination address and consults its routing table, built from updates that arrived at gateway G from its peers, to determine which path to follow. The gateway does not know exactly where the actual destination is, but it does have a next-hop peer or neighbor available, the one that provided the updates for that specific destination network and many others. This means that routers use their neighbors to get updates for their tables.

Peers or neighbors are routers that are directly connected to the same LAN or network as the host's gateway router. These peers also have their own peers and neighbors, and they pass on information about their local networks through updates provided by those peers. This makes the process much easier: a packet traveling through, say, five different routers only needs to be completely analyzed by the last-hop router, where the actual destination host is directly connected. The three routers in between are not required to perform any analysis other than determining the next-hop path for the packet. This makes routing very simple and efficient.

Fig 7.1: Six routers (R1 through R6) connected by unit-cost links between source A and destination B. The best route is always calculated between source A and destination B; cost of link and number of hops are some of the parameters taken into consideration.

Typically, a routing table contains pairs (N, G), where N is the IP address of the destination network, and G is the IP address of the next gateway along the path to network N. Thus, the routing table in gateway G only specifies one step along the path from G to a destination network; the gateway does not know the complete path to the destination.
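The (N, G) pairing can be sketched as a simple table keyed by destination network. The networks and next-hop addresses below are hypothetical, chosen only to show that the gateway stores one step of the path, never the whole route:

```python
# Hypothetical routing table for a gateway: destination network N -> next hop G.
routing_table = {
    "10.0.0.0/8": "direct",       # directly connected network
    "20.0.0.0/8": "direct",
    "40.0.0.0/8": "20.0.0.3",     # reachable via a peer on the 20.0.0.0 segment
}

def next_hop(dest_network):
    # The gateway only knows the next step along the path, not the full path.
    return routing_table.get(dest_network)

print(next_hop("40.0.0.0/8"))     # 20.0.0.3 -- one step toward the destination
print(next_hop("10.0.0.0/8"))     # direct -- deliver on the local segment
```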

In short, routers (gateways) maintain routing tables in order to route packets, and hosts use these gateways to transport their information across the network. Hosts should not need routing tables beyond a default gateway entry, unless these hosts are systems that also participate in routing, such as Unix systems that run RIP to dynamically determine which gateway is the best one available.

To illustrate this information, let us look at a simple network topology showing a couple of routers interconnecting a set of networks. The routing tables will reflect this topology. The size of the routing table increases as the number of networks in the topology increases. Each router has a table that reflects its knowledge of every network in the topology. However, the router does not need to know how many hosts are connected to each network listed in its routing table. Even if the router is not directly connected to network Y, it will know of that network because its peer passes on the update for network Y via a specific interface. This process should be dynamic, as you want the network to determine another path to a destination in case the most efficient or optimal path disappears due to hardware failure, link outages, etc.

Fig 7.2: Routers RA, RB, RC, and RD interconnect networks 10.0.0.0, 20.0.0.0, 30.0.0.0, 40.0.0.0, and 50.0.0.0, with RB and RC both attached to the 20.0.0.0 segment.

RA's Table:

Network     Via Gateway
10.0.0.0    Directly Connected
20.0.0.0    Directly Connected
30.0.0.0    Directly Connected
40.0.0.0    Via 20.0.0.3 or 20.0.0.2
50.0.0.0    Via 20.0.0.3 or 20.0.0.2

Choosing routes based on the destination network ID alone has several consequences. First, in most implementations, all traffic headed for a given network prefix follows the same path. Even if there are multiple paths with equal costs or metrics advertising that network prefix, the other paths might not be used concurrently. In addition, all types of traffic follow the same path without regard to the delay or throughput of the physical networks.

Second, because only the final gateway along the path attempts to communicate with the destination host, only it can determine whether the host exists or is operational. Thus, you need to arrange a way for that gateway to send reports of delivery problems back to the original source.

Third, since each gateway routes traffic independently, packets traveling from host A to host B may follow an entirely different path than packets traveling from host B back to host A. This is a very important point: each packet finds its own way to be routed, and the higher OSI layers worry about the reassembly process that recomposes the actual data encapsulated in the packets.

Default Routes
Default routes are often used to route packets via a default gateway when there is no entry in the routing table for the specific destination network. Therefore, if a host is sending a packet to network Y, but router G does not have an entry in its routing table for that network, it will use the default route entry to forward the packet. It basically says, "let the next-hop peer take care of that packet."
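The fallback behavior can be sketched in a few lines. The addresses are hypothetical, and the "0.0.0.0/0" key follows the common convention of representing the default route as the zero-length prefix:

```python
# Hypothetical table with a default route, keyed "0.0.0.0/0" by convention.
routing_table = {
    "10.0.0.0/8": "20.0.0.2",
    "0.0.0.0/0":  "20.0.0.254",   # default gateway: "let the next hop deal with it"
}

def route(dest_network):
    # Use the specific entry when present; otherwise fall back to the default.
    return routing_table.get(dest_network, routing_table["0.0.0.0/0"])

print(route("10.0.0.0/8"))    # 20.0.0.2   (specific entry found)
print(route("99.0.0.0/8"))    # 20.0.0.254 (no entry, default route used)
```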

Host-Specific Routes
Although it can be stated that routing is based on networks and not on individual hosts,
most IP routing software allows per-host routes to be specified as a special case.

Routing with IP Addresses


It is important to understand that IP routing does not alter the original packet. The source IP address and the destination IP address never change. When the source host executes its IP routing algorithm, it computes a new address: the IP address of the first-hop gateway router where the packet should be sent next. This address is known as the next-hop address because it tells where the packet should be sent next. It is not stored in the packet by the IP process; it is only used to let the source host deliver the packet to its gateway.

After executing the routing algorithm, IP passes the packet and the next hop address to
the network interface software responsible for the physical network over which the
packet must be sent. The network interface software binds the next hop address to a
physical address; it forms a frame using that physical MAC address, encapsulates the
packet in the data portion of the frame, and sends the frame across. This is only a
temporary process used to determine the physical address of the next hop gateway. Once
this is done, the network interface software discards the next hop address and the process
starts again as it traverses the network over each router hop.

For example, PC1 on network 10.10.1.0 with IP address 10.10.1.1 creates a packet destined for PC2 on network 20.20.1.0 with IP address 20.20.1.1. The PC takes the packet and encapsulates it into a frame. PC1 knows from the mask that the traffic is not destined for its local network, so it sends the information in a frame using R1's MAC address as the destination, since R1 is the local gateway for PC1. If that MAC address is not known, PC1 will ARP to find R1's physical address. Router R1 receives the frame and, in order to route it, punts it up to the network layer to determine whether the packet is destined for itself. The router sees that the IP destination is not local, hence it must route the packet towards the destination. R1 then looks at its routing table to determine which next-hop gateway is advertising the packet's destination, and via which interface. R1 re-frames the original packet with a new frame header using R2's MAC address as the next-hop physical destination, moving the packet closer to PC2. R2 then picks up this new frame (with the original IP information, which has not changed) and sends it up to the network layer again, stripping away the headers until it finds the IP destination. At this point, the router determines that it knows about that specific network, and even about the local device (PC2), because it is directly connected. It once again frames the information, now with the destination MAC address of PC2, which allows the data to be delivered as originally intended by PC1.
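The PC1-to-PC2 walk-through can be reduced to a small sketch: the IP header travels unchanged while each hop builds a fresh frame with a new destination MAC. The device names and MAC strings are placeholders, not real addresses:

```python
# Sketch of per-hop forwarding: the packet's IP source/destination never
# change, while the frame's MAC addresses are rewritten at every hop.
packet = {"src_ip": "10.10.1.1", "dst_ip": "20.20.1.1"}   # fixed for the trip

# Hypothetical next-hop MAC for each sender along PC1 -> R1 -> R2 -> PC2.
next_hop_mac = {"PC1": "mac-R1", "R1": "mac-R2", "R2": "mac-PC2"}

frame_log = []
for device in ["PC1", "R1", "R2"]:
    # Each sender builds a *new* frame around the *same* packet.
    frame = {"dst_mac": next_hop_mac[device], "payload": packet}
    frame_log.append(frame)

for f in frame_log:
    print(f["dst_mac"], f["payload"]["dst_ip"])
# mac-R1 20.20.1.1
# mac-R2 20.20.1.1
# mac-PC2 20.20.1.1
```

Three different frames are created, yet the IP destination printed on every line is identical, which is exactly the point of the example above.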

Fig 7.3: The next-hop IP address mechanism. A host sends a frame to its first-hop gateway (Router A); Router A de-encapsulates the frame, runs the routing algorithm (AND operation and longest match lookup) against its routing table, consults its ARP table for the next-hop gateway (Router B), and builds a new frame towards Router B. The original source and destination IP addresses never change.

1. The host sends data onto the network. The first-hop router/gateway for that host "intercepts" the data and processes it.

2. The data enters the NIC of Router A as a frame with a MAC header and trailer.

3-4. Router A needs to know where to route the data. It takes the frame and strips away control information until it reaches the destination IP address (DA).

5. Router A's routing mechanism (AND operation and longest match lookup) determines where to send the packet next (the next-hop gateway).

6. The ARP table is used to determine the MAC address of the next-hop gateway (Router B).

7. A new frame is created with the "temporary" next-hop IP address and the MAC address of the next-hop gateway (Router B).

8. The next-hop IP address is discarded, and the process begins again from the point of view of Router B.

7.1.2 Forwarding Decisions

Cisco routers determine the path of a packet by doing a couple of things:

1. Routing is based on the existing routing tables, learned through a discovery process where all available routes are determined, providing all possible paths for every packet that needs to be routed.
2. In addition, the routing process uses the technique of the longest match lookup.

Before the longest match lookup, the router has to do the following. Routing tables are made up of networks and subnetworks, not specific hosts, to lessen the amount of cache and memory that would be required to store every single host on the network. Every time a packet is sent from a source, the first-hop router for that source extracts the destination IP address and retrieves its internal subnet mask. This retrieval is necessary because the host does not include its mask in the IP datagram with routing protocols such as RIP v1 and IGRP; the router has to rely on its local mask configuration to determine how that network is divided and which hosts belong to it.

The router takes the destination IP address and executes an AND operation with the mask obtained from the router's interface. This is done in order to obtain the final destination network or subnetwork, which the router looks up in its routing table to send the packet to the closest router or next-hop peer that provided that network update. It then takes the result of the AND operation, in our case the destination subnetwork, and matches it against the existing routing table entries using the longest match lookup. When it finds the entry that matches the destination route, it sends the packet out the corresponding interface. The next router that receives the packet executes the same exact process, until the final destination is reached.

Longest match lookup


The first-hop router takes the destination address obtained from the AND operation, looks at its routing table starting from the first entry, and begins matching bits from left to right. Every bit is checked to see how far the match extends. Route entries are usually kept in descending order, with the most specific entries at the top of the routing table; the process matches every bit from left to right until a mismatch is encountered. The best match found so far is kept in memory.

As the router makes comparisons, it looks at its cached information to determine whether the next candidate matched more bits than the previous one. If the next entry has fewer bits in common than the previous match, the router realizes that nothing below that comparison can be better than the match it already holds in memory. It then stops the process, saving CPU cycles. Next, it looks at the interface through which that path was learned and forwards the packet out that interface towards the next router, which performs the same process. Once the unique match has been located at the last hop, the packet will have reached its destination.

For example, destination IP address 172.16.10.1 is encapsulated in a packet intercepted by RA, the first-hop router (running RIP or IGRP) for a source looking to transfer information to that end device. The router is going to do the AND operation. The router's interface (Ethernet 1) on the source's local segment has a mask defined as /24 in extended-prefix-length format, or 255.255.255.0. The first-hop router has routing table entries for the following networks: 172.16.10.0, 172.31.0.0, 172.16.0.0, and 172.15.0.0.

Network       Via Interface
172.16.10.0   Ethernet 0
172.31.0.0    Ethernet 0
172.16.0.0    Ethernet 0
172.15.0.0    Ethernet 2

The first step that is implemented by the router is the AND operation:

It takes the incoming packet and strips away the destination IP address:

10101100.00010000.00001010.00000001

Then it ANDs it with the router's local mask of 255.255.255.0, or

11111111.11111111.11111111.00000000

Or

10101100.00010000.00001010.00000001
11111111.11111111.11111111.00000000
---------------------------------------------------

10101100.00010000.00001010.00000000

This yields network 172.16.10.0. The router now uses this result to query its internal
routing table by executing the longest match lookup. This process will tell the routing
process what interface of that router should be used to send this specific packet.
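The AND operation from this example can be reproduced with simple bit arithmetic. The sketch below converts the dotted-decimal addresses to 32-bit integers, applies the bitwise AND exactly as the binary worked example does, and recovers 172.16.10.0:

```python
# The AND operation: destination IP 172.16.10.1 masked with the
# interface's /24 mask (255.255.255.0) yields the destination network.

def to_int(dotted):
    """Convert a dotted-decimal IPv4 address to a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    """Convert a 32-bit integer back to dotted-decimal notation."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

dest = to_int("172.16.10.1")
mask = to_int("255.255.255.0")

network = dest & mask            # the bitwise AND, bit for bit as in the text
print(to_dotted(network))        # 172.16.10.0
```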

Routing table for router RA in binary will look like this:

Network (binary)                       Via Interface
10101100.00010000.00001010.00000000    Ethernet 0
10101100.00011111.00000000.00000000    Ethernet 0
10101100.00010000.00000000.00000000    Ethernet 0
10101100.00001111.00000000.00000000    Ethernet 2

Now every bit obtained from the AND operation is matched against this table, starting
from the left.

First entry: 10101100.00010000.00001010.00000000


AND result: 10101100.00010000.00001010.00000000

The first entry in the routing table is a perfect match on all the bits. The router caches this information and continues to the second entry; it does not yet know whether the first is the best match. The second entry is matched from left to right, and on the 13th bit there is a mismatch.

10101100.00011111.00000000.00000000
10101100.00010000.00001010.00000000

Fig 7.4: Two views of the AND process and routing lookup. Top: router RA (internal mask /24) receives a packet with destination 172.16.10.1 along with route updates for 172.16.10.0, 172.31.0.0, 172.16.0.0, and 172.15.0.0; the AND process between the destination address and mask 255.255.255.0, performed in binary, determines which interface learned of network 172.16.10.0 and should be used. Bottom: Host A (5.1.1.1/24) reaches Host B (5.1.7.1/24) through routers RA and RB, which interconnect networks 5.1.1.0/24 through 5.1.7.0/24. Each router's table (shown in the figure in both binary and dotted-decimal form) is:

Routing Table - RA:
5.1.7.0/24 via E3
5.1.6.0/24 via E3
5.1.5.0/24 via E3
5.1.4.0/24 Directly Connected
5.1.3.0/24 Directly Connected
5.1.2.0/24 Directly Connected
5.1.1.0/24 Directly Connected

Routing Table - RB:
5.1.7.0/24 Directly Connected
5.1.6.0/24 Directly Connected
5.1.5.0/24 Directly Connected
5.1.4.0/24 Directly Connected
5.1.3.0/24 via E0
5.1.2.0/24 via E0
5.1.1.0/24 via E0

At this point the router knows there is no need to continue matching any further, as it has a better match cached in memory. It then uses that entry and routes the packet out interface Ethernet 0. This procedure is performed every single time a packet arrives at the routing process.
7.1.3 Routing Protocols

Routing protocols fall into two categories: interior routing protocols, or IGPs, and
exterior routing protocols, or EGPs. IGPs are used to route IP traffic within a single
autonomous system. An Autonomous System (AS) is, generally speaking, a group of
networks under the same administrative authority. EGPs are used to route traffic
between autonomous systems that are connected to the Internet. Again, the Internet is a
great example of this scenario.

Every entity that has a presence on the Internet, such as a company, university, or organization, identifies itself with a specific autonomous system. AS numbers are identifiers allocated to an entity, much like IP address ranges, and are registered through ARIN, RIPE, or whichever registry serves the region. EGP protocols such as BGP are used to exchange information between different ASes.

There is also a notion of internal AS numbers. A company can have multiple ASes running on its internal network. For example, company A has been migrating an old legacy infrastructure to a new infrastructure created to improve service, efficiency, and so on. The legacy routers will probably be running one routing process using a specific AS, say AS1, and the new topology's routers a different AS number, say AS2. This keeps the two internal topologies separate and more controlled. Routers in AS1 will not see updates or participate in any routing exchange with routers in AS2, even if they are connected to the same local backbone. If the two topologies need to interface with each other, there has to be a router in between that understands both routing processes; in other words, a gateway between AS1 and AS2. This is very common in the industry.

Fig 7.5: Autonomous systems on the Internet. Company A, Company B, Company X, ISP A, and ISP B each form their own autonomous system (AS2, AS3, AS20, AS30, AS40). The ASes exchange routes with one another across the Internet, while each company runs IGPs within its own internal network.

It takes routers a certain amount of time to communicate information about network


changes with each other. The process of making the routing tables of all routers
consistent is called convergence. The convergence time depends on the employed
dynamic routing protocol.

Dynamic routing protocols can be further categorized by the type of algorithm they use: distance vector or link state.

Distance Vector Algorithm


The distance vector routing approach,
http://www.fact-index.com/d/di/distance_vector_routing_protocol.html, is based on
algorithms developed by Bellman in 1957 and by Ford and Fulkerson in 1962. Routers
that use the same distance vector routing protocol can only exchange routing updates
if they are separated by a single physical network. This means that routing updates
are themselves not routed but only exchanged locally between peer routers, which in
turn do the same with their own peer routers, and so on.

Routers running the same routing process use specific multicast groups allocated to that routing protocol, on which routers listen for and exchange their routing updates. Multicast is often used to distribute updates between systems or simply to maintain keepalives. There are defined multicast groups for each routing protocol, which allows routers running different routing protocols to coexist on the same wire without having to process packets not meant for them. Routers that exchange routing updates within the same routing process are called peers or neighbors.

Routing updates using distance vector algorithms are periodic: every route entry is sent during each update period. Less CPU processing is required, since routing updates are exchanged only between peers; however, convergence time is much longer. Link-state protocols differ in that they send incremental updates, making them more efficient and faster to converge. In addition, link-state protocols keep a database of all the routers in the topology, allowing for very fast convergence, as there is no need to wait for a peer to pass on route failure information. One disadvantage of link-state protocols is that the CPU and memory requirements are large, since the entire routing topology must be kept cached in the router's database.

Fig 7.6

Another important component of routing is the metric. Metrics generally reflect how far away from the advertised network prefix the router perceives itself to be. Metrics are used to find the best route to a specific destination prefix listed in a routing table. A router may learn multiple routes, with different metrics, to the same network.

The idea behind the metric is to have the router select the best metric available: the shortest path in terms of hops, or the best path in terms of bandwidth, delay, etc., to the final destination prefix, and to disregard other updates that show a less optimal path. Even though the less optimal paths are not selected by the routing process, they remain available by means of updates in case there are issues with the best path (Cisco, for example, installs all RIP routes). If there is a problem with the optimal path, the less optimal path becomes the best and only path to the destination. This less optimal path is then installed in the routing table, allowing the routing process to select it and use it to send packets towards that destination prefix. This is the advantage of having dynamic routing updates.

Fig 7.7: Routers RA through RG connect source A to destination B over unit-cost links. The best metric from RA to destination B is RB-RC-RD with a cost of only 3. RA-RE-RF-RG-RD has a cost of 4 and is less optimal; RA-RB-RF-RG-RD also has a cost of 4 and is likewise less optimal. The route installed in RA's table to get to B is RB-RC-RD with a cost of 3.

All distance vector routing protocols follow these considerations when calculating the metrics:

- Each interface on a router serviced by a certain routing protocol has a cost. The cost is protocol specific.
- When a router receives a routing update, it recalculates the metric of the advertised network prefix using the cost of the interface that will be used to forward packets destined for that network prefix. In most cases, this interface coincides with the interface on which the routing update was received.
- The new metric that the router calculates must be greater than the one it received in the original routing update, because it must incorporate the cost of the output interface.
- The router advertises the remote network prefixes using the recalculated metrics.
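The considerations above can be sketched as a minimal distance-vector update rule. The prefix and neighbor names are hypothetical, and a RIP-style hop count of 1 is assumed for the interface cost:

```python
# Minimal distance-vector metric rule: a received metric is increased by
# the cost of the local interface before being installed or re-advertised.
INTERFACE_COST = 1          # RIP-style: each hop adds a cost of 1

routing_table = {}          # prefix -> (metric, next_hop)

def receive_update(prefix, advertised_metric, neighbor):
    new_metric = advertised_metric + INTERFACE_COST   # must grow at each hop
    current = routing_table.get(prefix)
    if current is None or new_metric < current[0]:
        routing_table[prefix] = (new_metric, neighbor)  # better or first route

receive_update("10.0.0.0/8", 3, "R2")   # installed with metric 4
receive_update("10.0.0.0/8", 1, "R3")   # better path: metric 2 via R3
receive_update("10.0.0.0/8", 5, "R4")   # worse (metric 6), ignored

print(routing_table["10.0.0.0/8"])      # (2, 'R3')
```

The recalculated metric is always strictly greater than the advertised one, matching the third consideration above, and only the best metric per prefix survives in the table.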

Initially, before a router has received any routing updates, it only advertises the networks to which it is directly connected. As it receives routing updates from other routers on directly connected segments, it recalculates the metrics for the learned network prefixes and starts advertising them with the new metrics. For example, when router R1 hears updates from peer router R2, R1 uses those updates to build its routing table, allocating the correct metric to each network prefix. R1 in turn updates R2 with its own routes, and vice versa. Eventually, every router on every segment learns all the network prefixes used throughout the network.
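The propagation just described can be simulated as rounds of update exchange over a hypothetical three-router chain (a simplified Bellman-Ford iteration; the router and prefix names are invented):

```python
# Chain topology: R1 -- R2 -- R3, every link cost 1.
# Each router initially knows only its own directly connected prefix.
neighbors = {"R1": ["R2"], "R2": ["R1", "R3"], "R3": ["R2"]}
tables = {"R1": {"net1": 0}, "R2": {"net2": 0}, "R3": {"net3": 0}}

changed = True
while changed:            # repeat until no table changes: convergence
    changed = False
    for rtr, peers in neighbors.items():
        for peer in peers:
            for prefix, metric in list(tables[peer].items()):
                cand = metric + 1   # add the cost of the link to the peer
                if cand < tables[rtr].get(prefix, float("inf")):
                    tables[rtr][prefix] = cand
                    changed = True

print(tables["R1"])  # {'net1': 0, 'net2': 1, 'net3': 2}
```

After convergence, R1 reaches net3 at metric 2 even though it never exchanges updates with R3 directly; the route was relayed by R2.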

Routing instability can also occur throughout a network: routing tables are no longer stable, and a new convergence process has to take place. Instability on a routed network can be caused by the following:

1. When new segments are added to a network, the router for the segment being
connected starts advertising the network prefix assigned to that segment. Other
routers participating in the same routing process listen to the new update and
forward it to their own peers until the update reaches every router in the
topology. At the same time, those routers send their own updates back toward the
first router that advertised the additional segment.

2. Network component removals and failures can also cause instability on a routing
network.

Routers exchange routing updates on a regular basis: every 30 seconds in the RIP protocol (http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/rip.htm; see also the RFC standard at http://www.ietf.org/rfc/rfc1058), and every 90 seconds in the IGRP protocol (http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/igrp.htm).

These periodic update packets contain the routing information for every network prefix learned via neighbor and peer routers. Every time a route is installed in a routing table, a timer is kept for that specific routing entry. This timer is reset every time an update for that route is received. Therefore, if the update interval is 30 seconds and network prefix A is included in the periodic update, the timer is reset back to 0 seconds until the next update for prefix A. Should the update for network prefix A not arrive in the next periodic update, a new timer begins that will mark network prefix A as possibly down, and a further set of timers allows network prefix A either to recover or, if it is hard down, to be removed from the routing tables completely.

The complete removal of the routing entry is loosely referred to as flushing the entry out of the routing table. Every router on the network has to flush the entry before convergence is reached once again. Should network prefix A come back into operation, the updates will include that prefix again, causing another convergence throughout the network.
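The timer lifecycle can be sketched as a small state classifier; the 180/240-second thresholds match the RIP defaults shown in the lab captures later in this chapter (`Invalid after 180 seconds, hold down 180, flushed after 240`), while the function itself is just an illustration:

```python
INVALID_AFTER = 180   # seconds without an update before "possibly down"
FLUSH_AFTER = 240     # seconds without an update before removal

def route_state(seconds_since_last_update):
    """Classify a routing-table entry by the age of its last update."""
    if seconds_since_last_update < INVALID_AFTER:
        return "valid"           # updates still arriving; timer keeps resetting
    if seconds_since_last_update < FLUSH_AFTER:
        return "possibly down"   # invalid / in hold-down, still in the table
    return "flushed"             # removed from the routing table entirely

print(route_state(25))    # valid
print(route_state(200))   # possibly down
print(route_state(300))   # flushed
```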

Split Horizon Rule, Hold-downs, and Triggered Updates


All routing protocols based on the distance vector algorithm suffer from the same problem, known as counting to infinity. Consider the three routers in Fig 7.8, and suppose they all use the same distance vector routing protocol and that convergence has been achieved. Router R2 advertises segment S1 via both of its interfaces. Router R3, not having a better route for segment S1, picks up router R2's update; R2 is the only peer of router R3. Because router R1 is directly connected to segment S1 and therefore has a better metric for it, R1 ignores router R2's updates for segment S1. This is the normal update condition: routers use their peers to obtain updates about network prefixes across a network.
Fig 7.8  Four segments joined in a chain by three routers: S1 - R1 - S2 - R2 - S3 - R3 - S4. After convergence, each router's routing table lists all four segments, S1 through S4. R1 advertises its directly connected segments plus others; R2 is a peer of R1 and of R3; R1 is not a peer of R3.

Now suppose that segment S2 fails. Router R2 starts a timer that marks segment S1 as invalid, or possibly down. Eventually the timer expires, and router R2 stops considering segment S1 as reachable via router R1. Until this happens, router R2 continues to advertise segment S1, which means that router R3 does not even know that router R2 stopped receiving updates from router R1. After router R2's timer for segment S1 expires, router R2 considers an alternative route for that segment. Eventually, router R2 gets the update from router R3, with R3's metric. Router R2 recalculates its own metric for this new route and sends it back to router R3 in the next regular update. Router R3 then sees that the routing updates for segment S1 from router R2 have a worse metric, so router R3 recalculates its own metric, which it sends out with the next regular update.

Each iteration of this process makes the route to segment S1 worse, yet it never disappears; this becomes a ping-pong game between the two routers, each announcing what it thinks is the best metric to S1. Since both routers point to each other, this creates a routing loop. The easiest way to solve the problem is to consider any network prefix unreachable if the metric of the route exceeds a certain value: RIP uses a value of 16 hops to declare a network unreachable (16 hops is considered infinity). A few other techniques have also been developed to eliminate or lessen this counting-to-infinity problem.
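The runaway exchange can be played out directly; the cap of 16 is RIP's value for infinity, while the starting metrics are hypothetical:

```python
INFINITY = 16  # RIP treats a metric of 16 hops as unreachable

# After S1 fails, R2 and R3 keep re-learning the dead route from each
# other, each time adding one hop, until the metric reaches infinity.
r2, r3 = 1, 2          # pre-failure metrics to S1 (hypothetical)
exchanges = 0
while r2 < INFINITY or r3 < INFINITY:
    r2 = min(r3 + 1, INFINITY)   # R2 re-learns S1 from R3
    r3 = min(r2 + 1, INFINITY)   # R3 re-learns S1 from R2
    exchanges += 1

print(exchanges, r2, r3)  # 8 16 16 -- the loop ends only because of the cap
```

Without the cap at 16, the while loop would never terminate, which is exactly why counting to infinity needs an explicit bound.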

Split Horizon
The split horizon rule (http://www.freesoft.org/CIE/RFC/1058/9.htm) forbids a router from advertising a network prefix via the interface on which it learned of the prefix. A more aggressive version is called split horizon with poisoned reverse: this rule instructs the router to advertise the network prefix via the interface on which it learned of the prefix, but with the metric set to infinity.
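Both variants can be sketched with a per-interface filter (the prefixes and interface names are hypothetical): plain split horizon simply omits the route from updates sent out its learning interface, while poisoned reverse advertises it there with a metric of infinity.

```python
INFINITY = 16

# Routing table: prefix -> (metric, interface the route was learned on).
table = {"10.2.2.0/24": (1, "Serial0"), "10.3.3.0/24": (2, "Ethernet0")}

def build_update(table, out_interface, poison_reverse=False):
    """Build the set of routes advertised out one interface."""
    update = {}
    for prefix, (metric, learned_on) in table.items():
        if learned_on == out_interface:
            if poison_reverse:
                update[prefix] = INFINITY  # advertise back as unreachable
            # plain split horizon: omit the route entirely
        else:
            update[prefix] = metric
    return update

print(build_update(table, "Serial0"))                       # {'10.3.3.0/24': 2}
print(build_update(table, "Serial0", poison_reverse=True))  # adds '10.2.2.0/24': 16
```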

Hold-downs
Another technique is called hold-downs. The router, in addition to the regular route timer, also maintains a garbage collection timer, which it starts when the regular timer expires. The router also declares the corresponding network prefix unreachable by setting the route's metric to infinity. Until the garbage collection timer expires, the route cannot be removed from the routing table or modified, even if another routing update for that network prefix arrives.

Administrative Distance
A single router may run several IP routing protocols, each of which can be advertising the same network prefixes. These separate processes will try to install routes for the same destination, which causes a conflict. The routing process needs to know which routing protocol to trust when resolving or installing a route in its routing table. For this reason, a parameter was created to give one routing process priority over another. This priority is called administrative distance. A numeric value is associated with each routing protocol (RIP, IGRP, OSPF, EIGRP, BGP, and so on). The administrative distance reflects the level of trust that a router has in the routing information supplied by that protocol's updates.

The higher the administrative distance, the less preferred that routing process is. For instance, suppose router R1 sees that network 10.0.0.0 is being advertised via a routing protocol with an administrative distance of 120, and an update then arrives for the same network prefix from a protocol with an administrative distance of 100. The route updates from the protocol with the higher administrative distance will be disregarded and not used to build the routing table (see the lab at the end of the session for more details).
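The preference rule reduces to picking, per prefix, the source with the lowest administrative distance. The AD values below are the common Cisco defaults; the competing updates are hypothetical:

```python
# Common default administrative distances (lower is more trusted).
AD = {"connected": 0, "static": 1, "eigrp": 90, "igrp": 100, "ospf": 110, "rip": 120}

# Two routing processes advertise the same prefix.
candidates = [("rip", "10.0.0.0/8"), ("igrp", "10.0.0.0/8")]

winner = min(candidates, key=lambda c: AD[c[0]])
print(winner)  # ('igrp', '10.0.0.0/8') -- AD 100 beats RIP's 120
```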

7.1.3 Classful and Classless Routing Protocols

Classful routing protocols do not include the subnet mask in their routing advertisements; devices connected to a local segment are assumed to have the same mask as the local router interface. Routing updates between major networks are summarized at the routers connecting those networks. For example, if subnetworks 172.16.10.0/24 and 10.2.0.0/16 are advertised across a router running a classful routing protocol, the updates will be summarized to the classful network boundaries of 172.16.0.0/16 and 10.0.0.0/8.
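The summarization step can be checked with Python's standard `ipaddress` module; the classful prefix length is derived from the first octet (class A below 128 is /8, class B below 192 is /16, class C is /24):

```python
import ipaddress

def classful_summary(ip):
    """Collapse an address to its classful network boundary."""
    first_octet = int(ip.split(".")[0])
    if first_octet < 128:
        prefix_len = 8    # class A
    elif first_octet < 192:
        prefix_len = 16   # class B
    else:
        prefix_len = 24   # class C
    return ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)

print(classful_summary("172.16.10.0"))  # 172.16.0.0/16
print(classful_summary("10.2.0.0"))     # 10.0.0.0/8
```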

Another important issue to know before implementing classful routing protocols is that the group in charge of addressing has to agree ahead of time on a single mask for every segment connected to the topology supported by these routers. Since no notion of a subnet mask is included in the routing updates, routers cannot handle a mask other than the one agreed to ahead of time. These protocols lead to a less efficient use of network address space: a point-to-point link, which requires only two IP addresses, must use the same mask as a user segment with 60 users and 60 IP addresses. The point-to-point link, carrying a subnet mask identical to the user segment's, now wastes 58 IP addresses, since the rest of that subnet cannot be used anywhere else on the network. RIPv1 and IGRP are classful routing protocols.

Classless routing protocols, by contrast, exchange the subnet mask in their routing updates. This allows a network to be non-homogeneous with respect to the subnet masks assigned to each segment. If an entity requires a segment to support only two hosts, such as a point-to-point link, a subnet mask of 255.255.255.252 (/30) can be assigned to that network. For example, in 10.10.10.0/30 the usable hosts are 10.10.10.1 and 10.10.10.2, while 10.10.10.0 is the subnetwork address and 10.10.10.3 is the broadcast address.
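The /30 arithmetic in the example can be verified with the standard `ipaddress` module:

```python
import ipaddress

net = ipaddress.ip_network("10.10.10.0/30")
print(net.network_address)    # 10.10.10.0 -- the subnetwork address
print(list(net.hosts()))      # the two usable hosts, 10.10.10.1 and 10.10.10.2
print(net.broadcast_address)  # 10.10.10.3 -- the broadcast address
print(net.netmask)            # 255.255.255.252
```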

Being able to create subnets with different masks allows a better and more efficient use of network space; for instance, you can allocate /30 subnets to all your point-to-point links. Creating subnets by varying the mask as needed is called Variable Length Subnet Masking (VLSM). EIGRP, OSPF, BGP, and RIPv2 are all classless routing protocols.

7.1.4 Routing Information Protocol (RIP)

The Routing Information Protocol (RIP) is the de facto standard protocol, available on virtually any IP host and router regardless of vendor, whereas IGRP is a Cisco proprietary protocol. RIP uses a very simple metric, represented by the positive integers 1 through 16, where a metric of 16 is considered infinity; it calculates the metric of a route as the sum of the segments composing the route. The IGRP metric is rather complex, calculated from five characteristics of the path: bandwidth, delay, load, reliability, and MTU size. As a result, the network diameter (the maximum number of router hops that a routing protocol can handle) of IGRP is much bigger than that of RIP.
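For comparison, IGRP's composite metric with the default K-values (K1 = K3 = 1, K2 = K4 = K5 = 0) reduces to scaled bandwidth plus scaled delay; the formula below is the commonly documented Cisco default, and with the usual IOS interface defaults it reproduces the 8976 that appears in the lab captures for loopback networks reached over the serial link:

```python
def igrp_metric(min_bandwidth_kbps, total_delay_usec):
    """IGRP composite metric with default K-values:
    10^7 / (slowest link bandwidth in kbps) + (total path delay in usec) / 10."""
    return (10**7 // min_bandwidth_kbps) + (total_delay_usec // 10)

# Assumed IOS defaults: serial link 1544 kbps with 20000 usec delay,
# plus a loopback interface delay of 5000 usec.
print(igrp_metric(1544, 20000 + 5000))  # 8976
```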

There can be only one RIP routing process on a router. On the other hand, there can be
multiple IGRP routing processes running on a single router, each servicing a separate
autonomous system. RIP can advertise individual host routes, such as 10.1.0.1/32, whereas IGRP cannot.

7.1.5 Conclusion

In this session, the basics of routing were introduced. It is very difficult to explain all the
characteristics and functionalities of every routing algorithm in this session. The
introduction serves as a guide for future understanding of how a network communicates

and exchanges traffic in the form of packets. The goal of this session is to give you the notion of routing without focusing on a particular vendor, so that when the time comes you can actually work with a router and know what functionality to expect. The idea is to make you think about the routing algorithm and its process rather than the actual commands, as anyone can pick up a manual and configure a Cisco, Nortel, or 3Com router.

Discussion Questions
1. What is the advantage of routing over switching?
2. How does a packet know what path to follow towards its final destination?
3. Why is there a need for EGPs?
4. What is one of the drawbacks of a distance vector algorithm?
5. What does it mean when a router places a route into hold-down?

ROUTING LAB SCENARIO

The following lab has been set up to illustrate the basics of routing. Two routers are connected via a serial cable simulating a 56K circuit; one router is NY and the other is London. The NY router has one Ethernet segment and a couple of loopback interfaces; the London router has a couple of loopback interfaces as well. Loopback interfaces are logical interfaces that never go down; they are used here to provide keep-alive networks for the routers to advertise.

Fig 7.9  NY and London are connected back to back via their Serial0 interfaces on network 10.1.1.0/24 (NY is .2, London is .1). NY has Ethernet0 on 10.2.2.0/24 (.1), Loopback0 on 10.3.3.0/24, and Loopback1 on 10.4.4.0/24. London has Loopback0 on 10.5.5.0/24 and Loopback1 on 10.6.6.0/24. L0 and L1 are loopback interfaces: virtual interfaces created on a router that are always up.

Both routers were first set up to run RIP between them using network 10.0.0.0/8. The capture below shows the normal operation of the routers and their routing tables, and then how their timers are set up (update timers, hold-down timers, flush timers, etc.). The debug command is then used, which lets the student see detailed information about IP routing. Debug is not used casually, because a CPU-intensive debug process can crash a router; it has to be used with care.

While the updates are being sent every 30 seconds (for RIP), the Ethernet cable on router NY is disconnected. Notice how the routing updates then display network 10.2.2.0 as inaccessible, with a metric of 16. The route becomes invalid after 180 seconds, is then put in hold-down for another 180 seconds, and is flushed after 240 seconds; you can see how long this process takes to converge. Keep in mind that those timers can be tuned. Also, notice the administrative distance of RIP, which is 120, in the show ip route output below:

97
sh ip int brief ## this command shows which interfaces are up and configured ##
Interface IP-Address OK? Method Status Protocol
Ethernet0 10.2.2.1 YES manual up up
Loopback0 10.3.3.1 YES manual up up
Loopback1 10.4.4.1 YES manual up up
Serial0 10.1.1.2 YES manual up up
Serial1 unassigned YES unset administratively down down
TokenRing0 unassigned YES unset administratively down down
NY#sh ip route ## this command shows the routing table ##
Codes: C connected, S static, I IGRP, R RIP, M mobile, B BGP
D EIGRP, EX EIGRP external, O OSPF, IA OSPF inter area
E1 OSPF external type 1, E2 OSPF external type 2, E EGP
I IS-IS, L1 IS-IS level-1, L2 IS-IS level-2, * - candidate default
U per-user static route

Gateway of last resort is not set


10.0.0.0/8 is subnetted, 6 subnets
C 10.2.2.0 is directly connected, Ethernet0
R 10.5.5.0 [120/1] via 10.1.1.1, 00:00:01, Serial0 ## 120 is the administrative distance of RIP ##
R 10.6.6.0 [120/1] via 10.1.1.1, 00:00:02, Serial0
C 10.4.4.0 is directly connected, Loopback1
C 10.3.3.0 is directly connected, Loopback0
C 10.1.1.0 is directly connected, Serial0
NY#sh ip prot ## this command shows how the IP routing protocol is configured ##
Routing Protocol is "rip"
Sending updates every 30 seconds, next due in 21 seconds
Invalid after 180 seconds, hold down 180, flushed after 240
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Redistributing: rip
Default version control: send version 1, receive any version
Interface Send Recv Key-chain
Ethernet0 1 1 2
Loopback0 1 1 2
Loopback1 1 1 2
Serial0 1 1 2
Routing for Networks:
10.0.0.0
Routing Information Sources:
Gateway Distance Last Update
10.1.1.1 120 00:00:12
Distance: (default is 120)

98
NY#debug ip rip ## allows us to see more detailed information about packet processing in the router ##
RIP protocol debugging is on
NY# ## This is a normal routing table update ##
RIP: received v1 update from 10.1.1.1 on Serial0
10.6.6.0 in 1 hops
10.5.5.0 in 1 hops
RIP: sending v1 update to 255.255.255.255 via Ethernet0 (10.2.2.1)
subnet 10.5.5.0, metric 2 ## this tells us how far the network is from NY ##
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.2.2.0, metric 1
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.2.2.0, metric 1
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.2.2.0, metric 1
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
RIP: received v1 update from 10.1.1.1 on Serial0
10.6.6.0 in 1 hops
10.5.5.0 in 1 hops
RIP: sending v1 update to 255.255.255.255 via Ethernet0 (10.2.2.1)
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.2.2.0, metric 1
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)

subnet 10.2.2.0, metric 1
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.2.2.0, metric 1
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
NY#u all
All possible debugging has been turned off
NY#debug ip rip
RIP protocol debugging is on
NY#
RIP: received v1 update from 10.1.1.1 on Serial0
10.6.6.0 in 1 hops
10.5.5.0 in 1 hops
RIP: sending v1 update to 255.255.255.255 via Ethernet0 (10.2.2.1)
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.2.2.0, metric 16 ## I disconnected the Ethernet segment from router NY ##
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.2.2.0, metric 16
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.2.2.0, metric 16
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
RIP: received v1 update from 10.1.1.1 on Serial0
10.2.2.0 in 16 hops (inaccessible)
10.6.6.0 in 1 hops
10.5.5.0 in 1 hops
%LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0, changed state to
down
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)

subnet 10.2.2.0, metric 16
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.2.2.0, metric 16
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.2.2.0, metric 16
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
RIP: sending general request on Loopback0 to 255.255.255.255
RIP: sending general request on Loopback1 to 255.255.255.255
RIP: sending general request on Serial0 to 255.255.255.255
RIP: received v1 update from 10.1.1.1 on Serial0
10.2.2.0 in 16 hops (inaccessible)
10.6.6.0 in 1 hops
10.5.5.0 in 1 hops
10.1.1.0 in 1 hops
RIP: received v1 update from 10.1.1.1 on Serial0
10.2.2.0 in 16 hops (inaccessible)
10.6.6.0 in 1 hops
10.5.5.0 in 1 hops
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
NY#sh ip route ## NY's routing table when 10.2.2.0/24 is not available. The route disappears from NY's table right away because it was a directly connected network, but as you will see, it does not go away immediately on router London ##
Codes: C connected, S static, I IGRP, R RIP, M mobile, B BGP
D EIGRP, EX EIGRP external, O OSPF, IA OSPF inter area
E1 OSPF external type 1, E2 OSPF external type 2, E EGP

I IS-IS, L1 IS-IS level-1, L2 IS-IS level-2, * - candidate default
U per-user static route

Gateway of last resort is not set


10.0.0.0/8 is subnetted, 5 subnets
R 10.5.5.0 [120/1] via 10.1.1.1, 00:00:14, Serial0
R 10.6.6.0 [120/1] via 10.1.1.1, 00:00:14, Serial0
C 10.4.4.0 is directly connected, Loopback1
C 10.3.3.0 is directly connected, Loopback0
C 10.1.1.0 is directly connected, Serial0
NY#un all
All possible debugging has been turned off
NY#london ## I created a host name on the NY router so that I can telnet using a name, not an IP address ##
Trying London (10.1.1.1)... Open

User Access Verification

Password:
London>sh ip route
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/24 is subnetted, 6 subnets


R 10.2.2.0/24 is possibly down, routing via 10.1.1.2, Serial0 ## you can see how London marks it as possibly down; no convergence happens until all the timers expire ##
R 10.3.3.0 [120/1] via 10.1.1.2, 00:00:26, Serial0
R 10.4.4.0 [120/1] via 10.1.1.2, 00:00:26, Serial0
C 10.6.6.0 is directly connected, Loopback1
C 10.5.5.0 is directly connected, Loopback0
C 10.1.1.0 is directly connected, Serial0

London>en
Password:
London#
London#debug ip rip
RIP protocol debugging is on
London#
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.5.5.1)
subnet 10.2.2.0, metric 16

subnet 10.3.3.0, metric 2
subnet 10.4.4.0, metric 2
subnet 10.6.6.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.6.6.1)
subnet 10.2.2.0, metric 16
subnet 10.3.3.0, metric 2
subnet 10.4.4.0, metric 2
subnet 10.5.5.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.1)
subnet 10.2.2.0, metric 16
subnet 10.6.6.0, metric 1
subnet 10.5.5.0, metric 1
RIP: received v1 update from 10.1.1.2 on Serial0
10.4.4.0 in 1 hops
10.3.3.0 in 1 hops
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.5.5.1)
subnet 10.2.2.0, metric 16
subnet 10.3.3.0, metric 2
subnet 10.4.4.0, metric 2
subnet 10.6.6.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.6.6.1)
subnet 10.2.2.0, metric 16
subnet 10.3.3.0, metric 2
subnet 10.4.4.0, metric 2
subnet 10.5.5.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.1)
subnet 10.2.2.0, metric 16
subnet 10.6.6.0, metric 1
subnet 10.5.5.0, metric 1
London#

London#exit

[Connection to London closed by foreign host]


NY#sh ip prot
Routing Protocol is "rip"
Sending updates every 30 seconds, next due in 6 seconds
Invalid after 180 seconds, hold down 180, flushed after 240
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Redistributing: rip
Default version control: send version 1, receive any version

Interface Send Recv Key-chain
Ethernet0 1 1 2
Loopback0 1 1 2
Loopback1 1 1 2
Serial0 1 1 2
Routing for Networks:
10.0.0.0
Routing Information Sources:
Gateway Distance Last Update
10.1.1.1 120 00:00:25
Distance: (default is 120)

NY#
%LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet0, changed state to up ## I reconnected the Ethernet interface; routing updates then get back to normal ##
NY#debug ip rip
RIP protocol debugging is on
NY#
RIP: sending v1 update to 255.255.255.255 via Ethernet0 (10.2.2.1)
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.2.2.0, metric 1
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.4.4.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.2.2.0, metric 1
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.3.3.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.2.2.0, metric 1
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
NY#u all
All possible debugging has been turned off
NY#london
Trying London (10.1.1.1)... Open

User Access Verification

Password:
London>en
Password:
London#debug ip rip
RIP protocol debugging is on
London#term mon
RIP protocol debugging is on
London#
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.5.5.1)
subnet 10.2.2.0, metric 2
subnet 10.3.3.0, metric 2
subnet 10.4.4.0, metric 2
subnet 10.6.6.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.6.6.1)
subnet 10.2.2.0, metric 2
subnet 10.3.3.0, metric 2
subnet 10.4.4.0, metric 2
subnet 10.5.5.0, metric 1
subnet 10.1.1.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.1)
subnet 10.6.6.0, metric 1
subnet 10.5.5.0, metric 1
London#
RIP: received v1 update from 10.1.1.2 on Serial0
10.2.2.0 in 1 hops
10.4.4.0 in 1 hops
10.3.3.0 in 1 hops
All possible debugging has been turned off
London#sh ip rout
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/24 is sub netted, 6 subnets


R 10.2.2.0 [120/1] via 10.1.1.2, 00:00:07, Serial0
R 10.3.3.0 [120/1] via 10.1.1.2, 00:00:07, Serial0
R 10.4.4.0 [120/1] via 10.1.1.2, 00:00:07, Serial0
C 10.6.6.0 is directly connected, Loopback1
C 10.5.5.0 is directly connected, Loopback0

C 10.1.1.0 is directly connected, Serial0
London#

Below is each router's configuration. (This is the most simplistic configuration you can have, but it proves what was said in class.)

NY#sh run ## this command shows the running configuration of the router ##
Building configuration...

Current configuration:
!
version 11.1
service udp-small-servers
service tcp-small-servers
!
hostname NY
!
enable password cisco
!
!
interface Loopback0
ip address 10.3.3.1 255.255.255.0
!
interface Loopback1
ip address 10.4.4.1 255.255.255.0
!
interface Ethernet0
ip address 10.2.2.1 255.255.255.0
!
interface Serial0
ip address 10.1.1.2 255.255.255.0
no fair-queue
!
interface Serial1
no ip address
shutdown
!
interface TokenRing0
no ip address
shutdown
!
router rip
network 10.0.0.0
!
router igrp 1000

network 10.0.0.0
!
ip host London 10.1.1.1 ## we use this to telnet using names, not addresses ##
no ip classless
!
!
line con 0
line aux 0
line vty 0 4
password cisco
login
!
end

London#sh run
Building configuration...

Current configuration:
!
version 11.1
service udp-small-servers
service tcp-small-servers
!
hostname London
!
enable password cisco
!
no ip domain-lookup
!
interface Loopback0
ip address 10.5.5.1 255.255.255.0
!
interface Loopback1
ip address 10.6.6.1 255.255.255.0
!
interface Serial0
ip address 10.1.1.1 255.255.255.0
clock rate 56000 ## a device has to provide the clock. Since there is no CSU/DSU on either side of the routers, we simulate the clocking by entering this command and using a crossover cable between the serial ports ##

!
interface Serial1
no ip address
shutdown
!

interface TokenRing0
no ip address
shutdown
!
You can have multiple routing processes running on a router. Nevertheless, since IGRP has a more trusted (lower) administrative distance of 100, notice how the new routing table for the same segments prefers the IGRP routes to the RIP ones.

router rip
network 10.0.0.0
!
router igrp 1000
network 10.0.0.0
!
ip host NY 10.1.1.2
no ip classless
!
line con 0
line aux 0
line vty 0 4
password cisco
login
!
end

Here are the routing tables from both routers as they prefer the IGRP protocol to RIP due
to the administrative distance value.

NY#sh ip rout
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is subnetted, 6 subnets


I 10.5.5.0 [100/8976] via 10.1.1.1, 00:00:14, Serial0 ## 100 is the administrative distance of IGRP, hence it is trusted more than RIP, and the routes are installed from IGRP, not RIP, as seen below with the debug command ##
I 10.6.6.0 [100/8976] via 10.1.1.1, 00:00:14, Serial0
C 10.1.1.0 is directly connected, Serial0
C 10.4.4.0 is directly connected, Loopback1
C 10.3.3.0 is directly connected, Loopback0
C 10.2.2.0 is directly connected, Ethernet0

London#sh ip rout
Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route
Gateway of last resort is not set

10.0.0.0/24 is subnetted, 6 subnets


I 10.2.2.0 [100/8576] via 10.1.1.2, 00:01:02, Serial0
I 10.3.3.0 [100/8976] via 10.1.1.2, 00:01:02, Serial0
I 10.4.4.0 [100/8976] via 10.1.1.2, 00:01:02, Serial0
C 10.6.6.0 is directly connected, Loopback1
C 10.5.5.0 is directly connected, Loopback0
C 10.1.1.0 is directly connected, Serial0

These are the routing updates via IGRP.

NY#
IGRP: received update from 10.1.1.1 on Serial0
subnet 10.6.6.0, metric 8976 (neighbor 501)
subnet 10.5.5.0, metric 8976 (neighbor 501)
IGRP: sending update to 255.255.255.255 via Ethernet0 (10.2.2.1)
subnet 10.5.5.0, metric=8976
subnet 10.6.6.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.4.4.0, metric=501
subnet 10.3.3.0, metric=501
IGRP: sending update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.5.5.0, metric=8976
subnet 10.6.6.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.4.4.0, metric=501
subnet 10.2.2.0, metric=1100
IGRP: sending update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.5.5.0, metric=8976
subnet 10.6.6.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.3.3.0, metric=501
subnet 10.2.2.0, metric=1100
IGRP: sending update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.4.4.0, metric=501
subnet 10.3.3.0, metric=501
subnet 10.2.2.0, metric=1100

Chapter 8: OVERVIEW OF ROUTING PROTOCOLS

8.1 Introduction
In this session, the goal is to put into practice the theory learned in session seven. This will be accomplished by creating various lab scenarios using different routing protocols. The session is intended to provide a practical overview of the theory learned thus far; the student will understand the concepts much faster when actual commands and outputs are executed to correlate the theories and other concepts.

8.1.1 Routing Information Protocol v1 (RIP)

The Routing Information Protocol v1 (RIP v1) was one of the first distance vector protocols developed to work on the Internet during its early stages, and it served as the basis for many other routing protocols that enhanced and optimized RIP's functionality. It is not scalable and not very efficient in large enterprise environments: its convergence time is very long and can leave a network out of reach for five minutes or more, which is unacceptable in a production environment.

For more information on the RIP v1 protocol, you can view the following document:
http://www.ietf.org/rfc/rfc1058

Protocol Operation
How does a router learn what networks or segments are located around its infrastructure?
The only way is for peer routers to send each other updates advertising the segments
and networks directly connected to them. These updates are passed along to peer
(neighbor) routers, which in turn pass them to their own neighbors until the entire
topology is covered. RIP v1 sends updates periodically (at regular intervals) as well
as when the topology changes, and it uses hop count as its only metric.

When a RIP router receives an update about a network that has been added to the
topology, the router updates its routing table and increases the path metric by one.
The neighbor router closest to that network is chosen as the next hop. RIP v1 keeps
only the single best route to each destination on the network.

For example, suppose router RA connects to RB and RC directly and both routers know of
network X: RB advertises a hop count of 2 to network X, while RC advertises a hop
count of 3. Only RB's update is installed, as the one best path (it has the lowest
metric). RIP v1 does not understand equal metrics, so by nature it cannot load balance
between two links equidistant from the same destination. Changes in the topology cause
updates to be sent across the board, but these are independent of the periodic updates
sent every 30 seconds between peer RIP routers.
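The selection logic just described can be sketched in a few lines. This is a simplified, hypothetical model of a RIP v1 table update (the function and table layout are ours, not actual router code):

```python
# Simplified model of RIP v1 route selection (illustrative only, not
# router code). table maps network -> (hop_count, next_hop).
INFINITY = 16  # RIP's hop-count infinity

def process_update(table, neighbor, advertised):
    for network, hops in advertised.items():
        new_hops = min(hops + 1, INFINITY)  # count the hop to the neighbor
        current = table.get(network)
        # Install if the route is new, strictly better, or a refresh
        # from the route's current next hop.
        if (current is None or new_hops < current[0]
                or current[1] == neighbor):
            table[network] = (new_hops, neighbor)

# RA hears about network X from RB (2 hops away) and RC (3 hops away).
table = {}
process_update(table, "RB", {"X": 2})
process_update(table, "RC", {"X": 3})
print(table["X"])  # -> (3, 'RB'): only RB's lower-metric path is kept
```

Note that RC's worse path is simply discarded; a real router would also age out RB's entry if its updates stopped arriving, which the timers below handle.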

RIP v1 updates are not incremental. This means that when there is a change, such as
additional subnets being advertised or some problem on the network, entire routing
tables are exchanged instead of just the routes that changed. Link State protocols use
incremental updates, which makes them more efficient routing protocols.

Since RIP v1 uses a Distance Vector algorithm as its main process, it suffers from
counting-to-infinity issues. To prevent the routing loops that counting to infinity
creates, RIP v1 defines a hop count of 16 as infinity. This limits the network
diameter: a packet can traverse at most 15 routers (hops) before the destination is
considered unreachable. As a result, RIP is a non-scalable routing protocol suited
only to small networks. RIP also uses split horizon and hold down mechanisms to
prevent routing information from creating loops on the network. RIP v1 uses a set of
timers (supporting the hold down mechanism), which are the following:

1. Routing update timer: This timer is usually set to 30 seconds and measures the
interval between periodic routing updates.
2. Route-timeout timer: This timer runs while no update for a route is heard. When it
expires, the route is marked invalid but is retained in the table until the route
is removed.
3. Route-flush timer: This timer removes the invalid entry from the routing table,
flushing out the invalid route.
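Taken together, these timers stage a dead route's removal. Below is a minimal sketch of that staging, using the default values discussed in this section and treating the stages as sequential, the way this chapter measures convergence (an illustrative model only, not IOS code):

```python
# Staged aging of a RIP route, using the default timer values discussed
# above (illustrative sketch only, not IOS code).
TIMERS = {
    "update interval": 30,   # periodic full-table broadcast
    "invalid after": 180,    # no update heard: route marked invalid
    "hold down": 180,        # route kept and advertised with metric 16
    "flushed after": 240,    # route finally removed from the table
}

# Worst-case removal time for a dead route, treating the stages as
# sequential (as this chapter does when measuring convergence):
worst_case = (TIMERS["invalid after"] + TIMERS["hold down"]
              + TIMERS["flushed after"])
print(worst_case)  # -> 600 seconds
```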

RIP v1 does not carry subnet masks in its updates; hence it is considered a classful
routing protocol. RIP v2 is an enhancement to RIP v1. For more information on this
topic, you can visit the following website http://www.ietf.org/rfc/rfc2453. The
following are some characteristics of the protocol as they work in a real-life
scenario. Use Figure 8.1 as a reference for all the routing protocol scenarios
described here.

Fig 8.1

              10.1.1.0/24
  RA  S0 .1 --------------- .2 S0  RB

  RA: L0 10.5.5.0/24           RB: L0 10.3.3.0/24
      L1 10.6.6.0/24               L1 10.4.4.0/24

  L0 and L1 are loopback interfaces. These are virtual
  interfaces created on a router that are always up.

Routing timers as viewed on router RA: the command show ip protocol displays the
routing protocols currently running on a router. In this case, RIP is the only routing
protocol running. Notice the timers discussed above. The update timer is 30 seconds;
the route-timeout timer, which marks an entry invalid, is 6 times the update timer, or
180 seconds; the hold down timer is also 180 seconds; and the route gets flushed after
240 seconds.

RA# show ip protocol


Routing Protocol is "rip"
Sending updates every 30 seconds, next due in 14 seconds
Invalid after 180 seconds; hold down 180, flushed after 240
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Redistributing: rip
Default version control: send version 1, receive any version
Interface        Send  Recv   Key-chain
Loopback0        1     1 2
Loopback1        1     1 2
Serial0          1     1 2
Routing for Networks:
10.0.0.0
Routing Information Sources:
Gateway Distance Last Update
10.1.1.2 120 00:00:25
Distance: (default is 120)

RIP updates are periodic, as shown below from the point of view of both routers.
Updates are sent to the broadcast address 255.255.255.255.

RA#
RIP: received v1 update from 10.1.1.2 on Serial0 this is the IP address of RB interface
10.4.4.0 in 1 hop
10.3.3.0 in 1 hop
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.5.5.1)
subnet 10.3.3.0, metric 2 this metric tells RA that there are two hops to this net
subnet 10.4.4.0, metric 2
subnet 10.1.1.0, metric 1
subnet 10.6.6.0, metric 1 10.5.5.0 is not advertised out its own interface (split horizon)
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.6.6.1)
subnet 10.3.3.0, metric 2
subnet 10.4.4.0, metric 2
subnet 10.1.1.0, metric 1
subnet 10.5.5.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.1)
subnet 10.6.6.0, metric 1

subnet 10.5.5.0, metric 1

RB#
RIP: received v1 update from 10.1.1.1 on Serial0 this is the IP address of RA interface
10.6.6.0 in 1 hop
10.5.5.0 in 1 hop
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.1.1.0, metric 1
subnet 10.4.4.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.5.5.0, metric 2
subnet 10.6.6.0, metric 2
subnet 10.1.1.0, metric 1
subnet 10.3.3.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1

If you look at the routing tables built after these updates are cached, you will see
that every network, whether directly connected or not, is listed in the table. The
process also tells the router the next-hop interface to use to reach each particular
network. The command show ip route displays the table. The codes are important
because they let the administrator know which routing protocol is in use.

RA# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/24 is subnetted, 5 subnets


R 10.3.3.0 [120/1] via 10.1.1.2, 00:00:04, Serial0 [120/1] = admin distance/1 hop
R 10.4.4.0 [120/1] via 10.1.1.2, 00:00:04, Serial0
C 10.1.1.0 is directly connected, Serial0 admin distance = 0
C 10.6.6.0 is directly connected, Loopback1
C 10.5.5.0 is directly connected, Loopback0

RB# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP

i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is subnetted, 5 subnets


R 10.5.5.0 [120/1] via 10.1.1.1, 00:00:00, Serial0
R 10.6.6.0 [120/1] via 10.1.1.1, 00:00:00, Serial0
C 10.1.1.0 is directly connected, Serial0
C 10.4.4.0 is directly connected, Loopback1
C 10.3.3.0 is directly connected, Loopback0

Now assume that one network goes away (network 10.5.5.0/24). How do the updates and
the routing table change?

RB# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is subnetted, 5 subnets


R 10.6.6.0 [120/1] via 10.1.1.1, 00:00:13, Serial0
R 10.5.5.0/24 is possibly down, route is put into a hold down state
routing via 10.1.1.1, Serial0
C 10.1.1.0 is directly connected, Serial0
C 10.4.4.0 is directly connected, Loopback1
C 10.3.3.0 is directly connected, Loopback0
RB#
RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.6.6.0, metric 2
subnet 10.5.5.0, metric 16 metric of 16 means infinity, not reachable
subnet 10.1.1.0, metric 1
subnet 10.4.4.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.6.6.0, metric 2
subnet 10.5.5.0, metric 16
subnet 10.1.1.0, metric 1
subnet 10.3.3.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.5.5.0, metric 16
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1

RIP: received v1 update from 10.1.1.1 on Serial0
10.6.6.0 in 1 hop

This continues for a while, until the flush timer fires and the route is no longer
advertised, as shown below:

RIP: sending v1 update to 255.255.255.255 via Loopback0 (10.3.3.1)


subnet 10.6.6.0, metric 2
subnet 10.1.1.0, metric 1
subnet 10.4.4.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.6.6.0, metric 2
subnet 10.1.1.0, metric 1
subnet 10.3.3.0, metric 1
RIP: sending v1 update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.4.4.0, metric 1
subnet 10.3.3.0, metric 1
RIP: received v1 update from 10.1.1.1 on Serial0
10.6.6.0 in 1 hops

The whole process, from the network being marked as possibly down to the route for
10.5.5.0/24 no longer being available, took 600 seconds (180 + 180 + 240) as specified
by the protocol. That is about 10 minutes of convergence for a network that has only
two routers connected via a high-speed line and six networks. Imagine how long it
would take in a network with many more routers; it would be unacceptable in a
production environment. There are ways to manipulate the timers to reduce the
convergence time, but this has to be done with a great deal of care.

8.1.2 Interior Gateway Routing Protocol (IGRP)

The Interior Gateway Routing Protocol (IGRP) is another protocol based on Distance
Vector algorithms. It is a Cisco-proprietary, classful routing protocol and can be
thought of as RIP on steroids. IGRP is a more robust routing protocol meant to be used
within an Autonomous System (AS); it is considered an Interior Gateway Protocol. IGRP
takes more parameters into account when calculating its metric. The five parameters in
question are bandwidth, delay, reliability, load, and MTU size.

For more information about the IGRP protocol, you can view the following document at
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/igrp.htm.

Protocol Operation
The metric that IGRP uses is called a composite metric because it combines all five
parameters in its calculation. For example, reliability and load can take on values
from 1 to 255; bandwidth can take on speed values from very low bandwidths like 1200
bits per second (bps) up to 10 Gbps; and delay can take on values from 1 to 2^24,
measured in units of tens of microseconds. The network administrator can actually
manipulate the metric by changing the values of these five parameters. Such changes
are sometimes necessary to influence which routing paths are selected, but the general
rule is that the only values that should be manipulated are the bandwidth and delay
statements. Cisco suggests leaving reliability, load, and MTU size alone.

Below is the actual formula that is used to calculate IGRP metrics on a network. Routers
that run this routing protocol will always use this formula to calculate a metric to a
specific network destination.

Metric_IGRP = K1*B_IGRP + (K2*B_IGRP)/(256-L) + K3*D_IGRP + K5/(R+K4)

where B_IGRP is 10,000,000 divided by the minimum bandwidth (in kbps) on the outgoing
interfaces towards the final destination (10000000/Bmin), and D_IGRP is the sum of the
delays (in microseconds) of the outgoing interfaces, divided by 10. By default the
constants are K1 = K3 = 1 and K2 = K4 = K5 = 0, which reduces the formula to:

Metric_IGRP = K1*B_IGRP + K3*D_IGRP

Returning to our network example, let's now apply this formula to verify that the
information displayed by the router in fact matches the mathematical explanation.

Fig 8.2

              10.1.1.0/24
              Bandwidth = 1544 Kbps
              Delay = 20000 usecs
  RA  S0 .1 --------------------- .2 S0  RB

  RA: L0 10.5.5.0/24           RB: L0 10.3.3.0/24
      L1 10.6.6.0/24               L1 10.4.4.0/24

  Loopback interfaces: Bandwidth = 8000000 kbps, Delay = 5000 usecs

  To calculate the metric, take the outbound interfaces only,
  from RA to segment 10.3.3.0/24.

In order to calculate the metric from RA to a network behind RB (10.3.3.0/24), let us
look at the path a packet will take to get from RA to segment 10.3.3.0/24. A packet
destined for 10.3.3.0 routes out interface Serial 0 of RA, enters RB on interface
Serial 0, and routes out interface Loopback 0 onto the 10.3.3.0/24 network. Taking
into consideration only the outbound parameters, as specified by the protocol, you have:

RA --(out Serial 0)--> (in Serial 0) RB --(out Loopback 0)--> 10.3.3.0/24

Bandwidth of Serial 0 on RA, as shown below, is 1544 Kbits.
Delay of Serial 0 on RA, as shown below, is 20000 usecs.
Bandwidth of Loopback 0 on RB, as shown below, is 8000000 Kbits.
Delay of Loopback 0 on RB, as shown below, is 5000 usecs.

Having this information, you can now calculate the metric from RA to destination
network 10.3.3.0/24 as follows:

Metric_IGRP = K1*B_IGRP + K3*D_IGRP

Metric_IGRP = 1*(10000000/1544 kbps) + 25000 usec/10 = 6476 + 2500 = 8976
(the bandwidth term truncates to an integer).
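The same arithmetic can be checked with a short script. This is an illustrative sketch assuming IOS-style truncating integer division; the function name is ours:

```python
# Reproducing the reduced IGRP metric calculation (K1 = K3 = 1,
# K2 = K4 = K5 = 0). Illustrative sketch, not router code.
def igrp_metric(min_bandwidth_kbps, total_delay_usec):
    b = 10_000_000 // min_bandwidth_kbps  # bandwidth term (truncated)
    d = total_delay_usec // 10            # delay term, in tens of usec
    return b + d

# RA -> 10.3.3.0/24: out Serial0 (1544 kbps, 20000 usec) and out RB's
# Loopback0 (8,000,000 kbps, 5000 usec); minimum bandwidth is the T1.
print(igrp_metric(1544, 20000 + 5000))  # -> 8976, matching show ip route
```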

RA# show interface serial0


Serial0 is up, line protocol is up
Hardware is HD64570
Internet address is 10.1.1.1/24
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec, rely 255/255, load 1/255
(This command's output has been clipped to show only the pertinent information)

RB# show interface loopback 0


Loopback0 is up, line protocol is up
Hardware is Loopback
Internet address is 10.3.3.1/24
MTU 1500 bytes, BW 8000000 Kbit, DLY 5000 usec, rely 255/255, load 1/255
(This command's output has been clipped to show only the pertinent information)

Now, from RA, issue the command show ip route 10.3.3.0 and compare, to verify that the
mathematics makes sense:

RA# show ip route 10.3.3.0


Routing entry for 10.3.3.0/24
Known via "igrp 1000", distance 100, metric 8976
Redistributing via igrp 1000
Advertised by igrp 1000 (self originated)
Last update from 10.1.1.2 on Serial0, 00:00:38 ago
Routing Descriptor Blocks:
* 10.1.1.2, from 10.1.1.2, 00:00:38 ago, via Serial0
Route metric is 8976, traffic share count is 1

Total delay is 25000 microseconds, minimum bandwidth is 1544 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 0

As illustrated by the example, the router's output is confirmed by the mathematical
explanation. This is how every IGRP router calculates a metric to a network, and it is
done for every network listed in its routing table.

In the following example, the actual routing process can be viewed, as well as how
each parameter is used. Additionally, the routing table of RA is displayed, showing
the code used for each route entry along with the administrative distance and its
specific metric.

RA# show ip protocol


Routing Protocol is "igrp 1000" 1000 is the Autonomous System in use at the time
Sending updates every 90 seconds, next due in 17 seconds
Invalid after 270 seconds; hold down 280, flushed after 630
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
Default networks flagged in outgoing updates
Default networks accepted from incoming updates
IGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
IGRP maximum hopcount 100
IGRP maximum metric variance 1
Redistributing: igrp 1000
Routing for Networks:
10.0.0.0
Routing Information Sources:
Gateway Distance Last Update
10.1.1.2 100 00:00:37
Distance: (default is 100)

RA# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set


10.0.0.0/24 is subnetted, 5 subnets
C 10.5.5.0 is directly connected, Loopback0
I 10.3.3.0 [100/8976] via 10.1.1.2, 00:00:56, Serial0 admin dist=100, metric=8976
I 10.4.4.0 [100/8976] via 10.1.1.2, 00:00:56, Serial0
C 10.1.1.0 is directly connected, Serial0
C 10.6.6.0 is directly connected, Loopback1

IGRP gives the network much more flexibility and reliability by permitting multipath
routing. If router RA reaches a destination via both RB and RC and both connections
have identical metrics, the routing mechanism can forward packets on a per-destination
basis or on a per-packet (round robin) basis. The routing table will actually
show two equal metrics to the same destination.
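Per-packet balancing over two equal-metric paths amounts to cycling through the next hops. A toy sketch (the gateway addresses here are purely illustrative):

```python
# Toy model of per-packet (round robin) forwarding over two
# equal-metric next hops; not actual switching-path code.
from itertools import cycle

next_hops = cycle(["10.1.1.1", "10.2.2.1"])  # two equal-cost gateways

def forward(packet):
    """Each packet takes the next gateway in turn."""
    return next(next_hops)

chosen = [forward(p) for p in range(4)]
print(chosen)  # -> alternates: 10.1.1.1, 10.2.2.1, 10.1.1.1, 10.2.2.1
```

Per-destination balancing would instead hash or cache on the destination address, so all packets for one host take the same path.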

Take a look at these two routing entries for network 10.3.4.0 that appear in the
routing table of MyRouterTEST (a test router). IGRP and other advanced distance vector
protocols allow you to load balance across multiple paths to the same destination;
with RIP, there would be only one advertised route entry in MyRouterTEST's table for
network 10.3.4.0. This is the big difference between RIP and IGRP (see the five
parameters discussed above: bandwidth, delay, reliability, load, and MTU size). RIP
uses hop count as its metric and NOTHING else.

What you see here is that there are two possible ways to reach network 10.3.4.0, using
the two neighbors 10.2.2.1 and 10.1.1.1. When packets are routed, they are forwarded
using either fast switching (per destination) or process switching (per packet). The
* (asterisk) marks the path in use at the time the query is made on MyRouterTEST; in
this case, neighbor 10.1.1.1 is being used as the next-hop gateway to reach the final
destination, 10.3.4.10.

               ____ R1 ------> |
MyRouterTEST  |                |------ 10.3.4.10
               \___ *R2 -----> |

MyRouterTEST# show ip route 10.3.4.10


Routing entry for 10.3.0.0/16
Known via "igrp 1000", distance 90, metric 3328, type internal
Redistributing via igrp 1000
Last update from 10.1.1.1 on Vlan10, 16:51:30 ago
Routing Descriptor Blocks:
10.2.2.1, from 10.2.2.1, 16:51:30 ago, via Vlan11
Route metric is 3328, traffic share count is 1
Total delay is 30 microseconds, minimum bandwidth is 1000000 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 2

* 10.1.1.1, from 10.1.1.1, 16:51:30 ago, via Vlan10


Route metric is 3328, traffic share count is 1
Total delay is 30 microseconds, minimum bandwidth is 1000000 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 2

The asterisk represents the router or first hop gateway used at the time to reach the
destination 10.3.4.10.

IGRP also uses features such as split horizon and hold down timers to guard against
counting to infinity and other types of instability. There are also specific timers
for this routing protocol, modeled on much of the functionality that RIP provided.
IGRP has been a very successful IGP. Of course, its major disadvantages are that it
cannot support Variable Length Subnet Masks (VLSM) and that it takes a few minutes to
converge.

Below you can see the updates that are sent between routers in a periodic manner.

RA#
IGRP: sending update to 255.255.255.255 via Loopback0 (10.5.5.1)
subnet 10.3.3.0, metric=8976
subnet 10.4.4.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.6.6.0, metric=501
IGRP: sending update to 255.255.255.255 via Loopback1 (10.6.6.1)
subnet 10.5.5.0, metric=501
subnet 10.3.3.0, metric=8976
subnet 10.4.4.0, metric=8976
subnet 10.1.1.0, metric=8476
IGRP: sending update to 255.255.255.255 via Serial0 (10.1.1.1)
subnet 10.5.5.0, metric=501
subnet 10.6.6.0, metric=501
IGRP: received update from 10.1.1.2 on Serial0
subnet 10.4.4.0, metric 8976 (neighbor 501)
subnet 10.3.3.0, metric 8976 (neighbor 501)

RB#
IGRP: received update from 10.1.1.1 on Serial0
subnet 10.5.5.0, metric 8976 (neighbor 501)
subnet 10.6.6.0, metric 8976 (neighbor 501)
IGRP: sending update to 255.255.255.255 via Loopback0 (10.3.3.1)
subnet 10.5.5.0, metric=8976
subnet 10.6.6.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.4.4.0, metric=501
IGRP: sending update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.5.5.0, metric=8976
subnet 10.6.6.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.3.3.0, metric=501
IGRP: sending update to 255.255.255.255 via Serial0 (10.1.1.2)
subnet 10.4.4.0, metric=501

subnet 10.3.3.0, metric=501

RA# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/24 is subnetted, 5 subnets


C 10.5.5.0 is directly connected, Loopback0
I 10.3.3.0 [100/8976] via 10.1.1.2, 00:00:36, Serial0
I 10.4.4.0 [100/8976] via 10.1.1.2, 00:00:36, Serial0
C 10.1.1.0 is directly connected, Serial0
C 10.6.6.0 is directly connected, Loopback1

Now assume that network 10.5.5.0/24 goes down. This is what happens to the routing
updates:

RB# show ip route 10.5.5.0


Routing entry for 10.5.5.0/24
Known via "igrp 1000", distance 100, metric 4294967295 (inaccessible)
Redistributing via igrp 1000
Advertised by igrp 1000 (self originated)
Last update from 10.1.1.1 on Serial0, 00:02:26 ago
Hold down timer expires in 135 secs

RB# show ip route 10.5.5.0


Routing entry for 10.5.5.0/24
Known via "igrp 1000", distance 100, metric 4294967295 (inaccessible)
Redistributing via igrp 1000
Advertised by igrp 1000 (self originated)
Last update from 10.1.1.1 on Serial0, 00:02:50 ago
Hold down timer expires in 111 secs

After some time,

RB#sh ip route 10.5.5.0


Routing entry for 10.5.5.0/24
Known via "igrp 1000", distance 100, metric 4294967295 (inaccessible)
Redistributing via igrp 1000
Advertised by igrp 1000 (self originated)
Last update from 10.1.1.1 on Serial0, 00:04:31 ago
Hold down timer expires in 9 secs

RB#sh ip route 10.5.5.0
Routing entry for 10.5.5.0/24
Known via "igrp 1000", distance 100, metric 4294967295 (inaccessible)
Redistributing via igrp 1000
Advertised by igrp 1000 (self originated)
Last update from 10.1.1.1 on Serial0, 00:04:36 ago
Hold down timer expires in 5 secs

RB#sh ip route 10.5.5.0


Routing entry for 10.5.5.0/24
Known via "igrp 1000", distance 100, metric 4294967295 (inaccessible)
Redistributing via igrp 1000
Advertised by igrp 1000 (self originated)
Last update from 10.1.1.1 on Serial0, 00:04:39 ago
Hold down timer expires in 2 secs

RB#sh ip route 10.5.5.0


Routing entry for 10.5.5.0/24
Known via "igrp 1000", distance 100, metric 4294967295 (inaccessible)
Redistributing via igrp 1000
Advertised by igrp 1000 (self originated)
Last update from 10.1.1.1 on Serial0, 00:04:41 ago
Hold down timer expires in 0 secs

RB# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is subnetted, 5 subnets


I 10.5.5.0/24 is possibly down,
routing via 10.1.1.1, Serial0
I 10.6.6.0 [100/8976] via 10.1.1.1, 00:00:34, Serial0
C 10.1.1.0 is directly connected, Serial0
C 10.4.4.0 is directly connected, Loopback1
C 10.3.3.0 is directly connected, Loopback0

After 630 seconds, the route is flushed out of the routing tables. Convergence takes
very long here as well.

IGRP: sending update to 255.255.255.255 via Loopback0 (10.3.3.1)

subnet 10.6.6.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.4.4.0, metric=501
IGRP: sending update to 255.255.255.255 via Loopback1 (10.4.4.1)
subnet 10.6.6.0, metric=8976
subnet 10.1.1.0, metric=8476
subnet 10.3.3.0, metric=501
subnet 10.3.3.0, metric=501
subnet 10.4.4.0, metric=501

RB# show ip route 10.5.5.0


% Subnet not in table

If you check the router's routing table, network 10.5.5.0, which was possibly down, is
no longer listed.

RB# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is subnetted, 4 subnets


I 10.6.6.0 [100/8976] via 10.1.1.1, 00:00:29, Serial0
C 10.1.1.0 is directly connected, Serial0
C 10.4.4.0 is directly connected, Loopback1
C 10.3.3.0 is directly connected, Loopback0

8.1.3 Enhanced Interior Gateway Routing Protocol (EIGRP)

The Enhanced Interior Gateway Routing Protocol (EIGRP) exchanges information more
efficiently than earlier routing protocols. Its updates are incremental and are only
sent across the network when there is a need for them. Below, one of the segments in
our lab scenario (10.5.5.0/24) is shut down; see how the metric is immediately turned
into infinity (4294967295). Every router in the topology keeps a database describing
every router in the network, so there is no notion of waiting for a peer to send its
periodic update. Every router has a map of the network, and as soon as a segment goes
down the peers exchange queries announcing that the network is no longer available,
and it is removed from their routing tables.

For more information on this protocol, you can view the EIGRP document at
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/en_igrp.htm.

Protocol Operation
EIGRP establishes neighbor relationships through hello packets, which are exchanged
using multicast. A specific multicast address is dedicated to updating EIGRP routers,
and routers join this multicast group to obtain the routing information: EIGRP uses
the multicast address 224.0.0.10. Updates and queries are always sent as multicast
packets, while acknowledgements to these queries and updates are sent as unicast
packets. The foundation of EIGRP is the Diffusing Update Algorithm (DUAL), a method of
finding loop-free paths through a network proposed by J.J. Garcia-Luna-Aceves. The
actual theoretical work by Dr. Garcia-Luna is available at
http://www.cse.ucsc.edu/research/ccrg/publications/shree.infocom95.pdf .

The concept behind this algorithm is that it is mathematically possible to determine
whether any route is loop-free, based on the information provided in standard distance
vector routing. With DUAL, there is the notion of a feasible successor: a neighboring
router used to forward packets that offers a least-cost path to a destination and is
guaranteed not to be part of a routing loop. Should there be any change in the
topology, DUAL checks its tables for a feasible successor. If one is found, there is
no need to re-compute a new path to the destination. If there is no feasible successor
and neighboring routers still advertise the destination route, a re-computation has to
be run to find a new successor. Having to do a re-computation adds to the convergence
time.
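The feasibility test at the heart of DUAL can be sketched as follows. This is a simplified model (names and numbers are ours): a neighbor qualifies as a feasible successor when the distance it reports is strictly less than our current feasible distance, which guarantees its path cannot loop back through us.

```python
# Simplified DUAL feasibility check. neighbors maps a neighbor name to
# (reported_distance, cost_to_reach_that_neighbor). A neighbor whose
# reported distance is below our feasible distance cannot be routing
# through us, so its path is guaranteed loop-free.
def feasible_successors(feasible_distance, neighbors):
    return [name for name, (reported, _cost) in neighbors.items()
            if reported < feasible_distance]

# Our current best (successor) path to the destination costs 100.
neighbors = {"N1": (90, 30),    # reports 90 < 100: feasible successor
             "N2": (120, 10)}   # reports 120: might loop back through us
print(feasible_successors(100, neighbors))  # -> ['N1']
```

If the successor fails, N1 can be promoted immediately with no re-computation; losing both would force the route into the active state described below.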

EIGRP relies on four fundamental routing concepts: neighbor tables, topology tables,
route states, and route tagging.

Neighbor Tables
Routers add each neighbor's IP address and interface to this table when they discover
the neighbor as a peer. Each neighbor advertises a hold time within the hello packets
exchanged among neighbor routers. This hold time is the amount of time for which a
router considers a neighbor reachable; if no hello packet is received within the hold
time, the DUAL process is informed of the topology change.

Topology Tables
This is what was referred to earlier as the router having the entire topology map
stored as a database. The topology table contains all the destinations advertised by
neighboring routers, and DUAL uses this table to create the loop-free paths.

Route States
A topology table entry for a destination can exist in one of two states, active or
passive. It is very simple: if an entry in the table is in the passive state, no
re-computation is going on, but if the entry is in the active state, a re-computation
is taking place.

Route Tagging
Route tagging refers to EIGRP's ability to tag routes as external or internal.
External routes are learned from another protocol, or they can be static routes
entered on one of the routers. EIGRP tags these routes with identifying information
such as the ID of the external protocol, the metric from the external protocol, and
the AS number of the destination, among others.

EIGRP uses the same parameters as IGRP to compute its metrics: bandwidth, delay,
reliability, load, and MTU size. The metric calculation follows the IGRP calculation
exactly, except that

Metric_IGRP = K1*B_IGRP + (K2*B_IGRP)/(256-L) + K3*D_IGRP + K5/(R+K4)

is multiplied by 256, or

Metric_EIGRP = [K1*B_IGRP + K3*D_IGRP] * 256
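As a quick check of the scaling, a loopback network such as 10.5.5.0/24 (bandwidth 8,000,000 kbps, delay 5000 usec, per the interface output shown earlier) works out to a metric of 128256. This sketch assumes IOS-style truncating integer math and default K values:

```python
# EIGRP metric = reduced IGRP metric * 256 (K1 = K3 = 1, others zero).
# Illustrative sketch, not router code.
def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
    b = 10_000_000 // min_bandwidth_kbps  # bandwidth term
    d = total_delay_usec // 10            # delay term, in tens of usec
    return (b + d) * 256

print(eigrp_metric(8_000_000, 5000))  # -> 128256
```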

Below you can see the incremental updates, driven by DUAL, when a network such as
10.5.5.0/24 is down hard or is having problems:

RA#
IP-EIGRP: Callback: route_adjust Loopback0
IP-EIGRP: 10.5.5.0/24, - do advertise out Serial0
IP-EIGRP: Int 10.5.5.0/24 metric 4294967295 - 0 4294967295
IP-EIGRP: Processing incoming REPLY packet
IP-EIGRP: Int 10.5.5.0/24 M 4294967295 - 1657856 4294967295 SM 4294967295 -
1657
856 4294967295
IP-EIGRP: 10.5.5.0/24, - do advertise out Serial0
IP-EIGRP: Int 10.5.5.0/24 metric 4294967295 - 0 4294967295
IP-EIGRP: Callback: redist connected 10.5.5.0/24. Event: 2
IP-EIGRP: Callback: reload_iptable Loopback0
IP-EIGRP: Processing incoming UPDATE packet
IP-EIGRP: Int 10.5.5.0/24 M 4294967295 - 1657856 4294967295 SM 4294967295 -
1657
856 4294967295

Now you can see the incremental update sent across the network once segment
10.5.5.0/24 comes back online.

RA#
IP-EIGRP: 10.5.5.0/24, - do advertise out Serial0
IP-EIGRP: Int 10.5.5.0/24 metric 128256 - 256 128000
IP-EIGRP: Processing incoming UPDATE packet
IP-EIGRP: Int 10.5.5.0/24 M 4294967295 - 1657856 4294967295 SM 4294967295 -
1657856 4294967295

8.1.4 Open Shortest Path First (OSPF)

The Open Shortest Path First (OSPF) protocol is considered another Interior Gateway
Protocol. In the mid-80s it became necessary to come up with a successor to RIP, as
networks were growing larger and more complex. RIP and OSPF are two of the few routing
protocols that are open standards, which means that any router, regardless of vendor,
can implement them.

For more information about the OSPF protocol, you can view the following document at
http://www.ietf.org/rfc/rfc2328

Protocol Operation
OSPF is based on the SPF algorithm, which is often referred to as Dijkstra's
algorithm. OSPF is a Link State Protocol that floods Link State Advertisements (LSAs)
to all routers within the same area. Metric information as well as other parameters are
exchanged among OSPF routers by means of these LSAs. From the LSAs, routers running
OSPF build identical maps of the topology and compute a shortest path to each node, so
every router has prior knowledge of the routing topology. OSPF can operate within a
hierarchy where the largest unit is the Autonomous System (AS). An AS can be divided
into multiple areas, with routers acting as gateways between those areas; these are
called Area Border Routers (ABRs). Each ABR maintains a topological database for each
area it attaches to, which reduces the amount of routing information exchanged between
networks. Routers within the same area exchange LSAs that allow them to maintain
identical topological databases. Routing can occur within the same area (intra-area)
or between areas (inter-area); the ABRs provide the access between multiple areas.
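The shortest-path computation described above can be sketched in a few lines. The
three-router topology below is hypothetical (a real OSPF router builds its graph from
the link-state database, not a hand-written map), but the algorithm is the same Dijkstra
SPF the protocol runs:

```python
import heapq

def spf(graph, root):
    """Dijkstra shortest-path-first over a cost-weighted adjacency map.
    Returns the distance to every node and the first hop used to reach it."""
    dist = {root: 0}
    next_hop = {}
    pq = [(0, root, None)]          # (cost, node, first hop after root)
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue                # stale entry, a cheaper path was found
        if hop is not None:
            next_hop[node] = hop
        for nbr, link_cost in graph[node].items():
            new = cost + link_cost
            if new < dist.get(nbr, float("inf")):
                dist[nbr] = new
                # when leaving the root, the first hop is the neighbor itself
                heapq.heappush(pq, (new, nbr, nbr if node == root else hop))
    return dist, next_hop

# Hypothetical area: RA--RB over a serial link (cost 64), RB--RC (cost 1)
graph = {
    "RA": {"RB": 64},
    "RB": {"RA": 64, "RC": 1},
    "RC": {"RB": 1},
}
dist, nh = spf(graph, "RA")
print(dist["RC"], nh["RC"])   # 65 RB — RC is 65 away, reached via first hop RB
```

The cost 64 mirrors the serial-link metrics visible in the route outputs below, where
loopbacks one hop past the serial line show up with a metric of 65.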

There is a notion of an OSPF backbone area, which is the area formed among the Area
Border Routers. If host A, connected to router RA in area 1, needs to send information
to host B, connected to router RB in area 2, packets from host A go to RA and then to
RC, which forwards them to RB and on to host B. RC is an Area Border Router, acting as
the gateway between the two areas. OSPF is a classless routing protocol: it understands
Variable Length Subnet Masks because the subnet masks are included in the
advertisements.

Below you will see output from two OSPF routers in the same area 1, connected as
shown in Figure 1. RA and RB are directly connected.

When the command show ip route is issued, you obtain the following:

RA# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default

U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is variably subnetted, 5 subnets, 2 masks


C 10.5.5.0/24 is directly connected, Loopback0
C 10.1.1.0/24 is directly connected, Serial0
C 10.6.6.0/24 is directly connected, Loopback1
O 10.4.4.1/32 [110/65] via 10.1.1.2, 00:00:33, Serial0 see admin distance of 110
O 10.3.3.1/32 [110/65] via 10.1.1.2, 00:00:33, Serial0

RB> show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is variably subnetted, 5 subnets, 2 masks


C 10.1.1.0/24 is directly connected, Serial0
C 10.4.4.0/24 is directly connected, Loopback1
C 10.3.3.0/24 is directly connected, Loopback0
O 10.5.5.1/32 [110/65] via 10.1.1.1, 00:00:49, Serial0
O 10.6.6.1/32 [110/65] via 10.1.1.1, 00:00:49, Serial0 See O for OSPF

If you want to see which neighbors are connected in the same area 1, you can issue the
following command: show ip ospf neighbor detail

RA# show ip ospf neighbor detail


Neighbor 10.4.4.1, interface address 10.1.1.2 (a loopback address is used as the neighbor's router ID)
In the area 1 via interface Serial0
Neighbor priority is 1, State is FULL
Options 2
Dead timer due in 00:00:34

RB> show ip ospf neighbor detail


Neighbor 10.6.6.1, interface address 10.1.1.1
In the area 1 via interface Serial0
Neighbor priority is 1, State is FULL
Options 2
Dead timer due in 00:00:30

When a segment goes down, an SPF (Shortest Path First) recalculation has to be done to
make sure that every segment is reachable via the shortest path.

RA#
OSPF: Interface Loopback0 going Down
OSPF: neighbor 10.6.6.1 is dead, state DOWN
OSPF: Build router LSA, router ID 10.6.6.1
OSPF: Build router LSA, router ID 10.6.6.1
OSPF: running SPF for area 1
OSPF: Initializing to run spf
It is a router LSA 10.6.6.1. Link Count 3
Processing link 0, id 10.4.4.1, link data 10.1.1.1, type 1
Add better path to LSA ID 10.4.4.1, gateway 10.1.1.2, dist 64
Add path: next-hop 10.1.1.2, interface Serial0
Processing link 1, id 10.1.1.0, link data 255.255.255.0, type 3
Add better path to LSA ID 10.1.1.255, gateway 10.1.1.0, dist 64
Add path: next-hop 10.1.1.1, interface Serial0
Processing link 2, id 10.6.6.1, link data 255.255.255.255, type 3
Add better path to LSA ID 10.6.6.1, gateway 10.6.6.1, dist 1
Add path: next-hop 10.6.6.1, interface Loopback1
It is a router LSA 10.4.4.1. Link Count 4
Processing link 0, id 10.6.6.1, link data 10.1.1.2, type 1
Ignore newdist 128 olddist 0
Processing link 1, id 10.1.1.0, link data 255.255.255.0, type 3
Add better path to LSA ID 10.1.1.255, gateway 10.1.1.0, dist 128
Add path: next-hop 10.1.1.2, interface Serial0
Processing link 2, id 10.4.4.1, link data 255.255.255.255, type 3
Add better path to LSA ID 10.4.4.1, gateway 10.4.4.1, dist 65
Add path: next-hop 10.1.1.2, interface Serial0
Processing link 3, id 10.3.3.1, link data 255.255.255.255, type 3
Add better path to LSA ID 10.3.3.1, gateway 10.3.3.1, dist 65
Add path: next-hop 10.1.1.2, interface Serial0
OSPF: Adding Stub nets
OSPF: delete lsa id 10.1.1.255, type 0, adv rtr 10.6.6.1 from delete list
OSPF: insert route list LS ID 10.1.1.255, type 0, adv rtr 10.6.6.1
OSPF: delete lsa id 10.3.3.1, type 0, adv rtr 10.4.4.1 from delete list
OSPF: Add Network Route to 10.3.3.1 Mask /32. Metric: 65, Next Hop: 10.1.1.2
OSPF: insert route list LS ID 10.3.3.1, type 0, adv rtr 10.4.4.1
OSPF: delete lsa id 10.4.4.1, type 0, adv rtr 10.4.4.1 from delete list
OSPF: Add Network Route to 10.4.4.1 Mask /32. Metric: 65, Next Hop: 10.1.1.2
OSPF: insert route list LS ID 10.4.4.1, type 0, adv rtr 10.4.4.1
OSPF: delete lsa id 10.6.6.1, type 0, adv rtr 10.6.6.1 from delete list
OSPF: insert route list LS ID 10.6.6.1, type 0, adv rtr 10.6.6.1
OSPF: Entered old delete routine
OSPF: No ndb for STUB NET old route 10.5.5.1, mask /32, next hop 10.5.5.1
OSPF: delete lsa id 10.5.5.1, type 0, adv rtr 10.6.6.1 from delete list
OSPF: running spf for summaries area 1
OSPF: sum_delete_old_routes area 1

OSPF: Started Building External Routes
OSPF: ex_delete_old_routes

OSPF, like any other routing protocol, has many other characteristics, but because this
session is meant to be an introduction, time and the complexity of the topic do not
allow more detail here. Use the URLs provided to obtain additional information.

Fig 8.3: Host A attaches to RA in Area 1 and host B attaches to RB in Area 2; RC, the
ABR, connects the two areas.
8.1.5 Border Gateway Protocol (BGP)

The Border Gateway Protocol (BGP) is the most important Exterior Gateway Protocol in
existence right now. It is the routing protocol that Internet routers use to communicate
across different Autonomous Systems or domains, and it is the de facto inter-domain
protocol of the Internet. Every Internet Service Provider (ISP), corporation,
university, or even small entity has to use BGP if it wants to route packets over the
cloud called the Internet.

BGP is a path-vector routing protocol, a refinement of the distance vector approach.
BGP uses route parameters called attributes that are taken into consideration when
calculating a route's metric, and that are also used to create routing policies and
ensure stability. It is very robust and scalable: there are over 110,000 BGP routes
advertised at the time of this writing. Keep in mind the idea about routing: only
network and subnetwork prefixes are kept in routing tables, not individual hosts.

BGP is capable of handling many routes because it uses Classless Inter-Domain Routing
(CIDR) http://www.ietf.org/rfc/rfc1519 . This means that an ISP's router can advertise
to its peers or neighbors a single supernet representing a conglomeration of multiple
contiguous class C networks, advertised to the world under one class B-sized mask.

For example, assume that an ISP owns a specific block of class C networks, 202.15.x.x.
This block consists of 202.15.0.x through 202.15.255.x, so 256 clients of that ISP
could each be assigned one of these class C networks. If no CIDR or supernetting were
implemented, the ISP would have to advertise 256 class C networks to its BGP peers by
individually sending routes with a mask of /24. Instead, CIDR allows the ISP to send
one route (the supernet) 202.15.0.0/16. It uses a class B-sized mask, but it makes the
routing incredibly efficient and optimal.
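The aggregation just described can be reproduced with Python's standard ipaddress
module, using the same 202.15.x.x block from the example: 256 contiguous /24 routes
collapse into a single /16 supernet advertisement.

```python
import ipaddress

# The ISP's 256 contiguous class C networks: 202.15.0.0/24 .. 202.15.255.0/24
class_c_blocks = [ipaddress.ip_network(f"202.15.{i}.0/24") for i in range(256)]

# CIDR lets all 256 routes collapse into one supernet advertisement
supernet = list(ipaddress.collapse_addresses(class_c_blocks))
print(supernet)   # [IPv4Network('202.15.0.0/16')]
```

One route instead of 256 is exactly the saving the paragraph above describes.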

For additional information on the Border Gateway Protocol, you can visit the following
website: http://www.ietf.org/rfc/rfc1771 .

Fig 8.4: Companies A, B, and C (202.15.1.0/24, 202.15.100.0/24, and 202.15.254.0/24)
connect to ISP A, which sends ISP B only the 202.15.0.0/16 route, or CIDR block, over
BGP.

Protocol Operation
BGP neighbors exchange their information over a TCP connection (port 179). When the
peering session comes up, full routing updates are exchanged, but there are no periodic
updates as with other routing protocols: only when there are changes in the network do
BGP routers send those changes to their neighbors. As always, BGP routing looks for the
best path available to the final destination.

BGP uses the following attributes to compute route selection and metrics:

1. Weight
2. Local Preference
3. Multi-exit discriminator
4. Origin
5. AS_Path
6. Next hop
7. Community

Weight: This is a Cisco-defined attribute local to a router. If a router has multiple
routes to the same destination, the route with the highest weight will be preferred and
installed in the routing table. The weight attribute is not advertised to neighboring
routers.

Local Preference: Local preference is an attribute used to select the best exit point
out of a local Autonomous System. The attribute is advertised to neighboring routers
within the same AS. If there are multiple exit points from the local AS, the exit point
advertised with the highest local preference will be used toward a given network
destination. Routers in the local AS exchange this attribute, and that is how they
determine which router will be used to exit.

Multi-Exit Discriminator (MED): This is a metric attribute. It influences the preferred
path into an AS that has multiple entry points. A lower MED value is preferred over a
higher MED value. MEDs are advertised to the neighboring AS.

Origin: This attribute tells the BGP router how the route was learned. There are three
possibilities: IGP, EGP, and Incomplete. IGP means that the route is interior to the
originating AS. EGP means the route was learned via an external routing process, and
Incomplete means that the origin is unknown, which usually indicates that the route has
been redistributed into BGP.

AS Path: Every time a route advertisement passes through an Autonomous System (AS),
that AS number is added to a list carried with the route as it traverses from source to
destination. The shorter the AS list, the more preferred that path will be between
source and destination. The AS_Path attribute can also tell you how many ASes or
domains were crossed, which helps in troubleshooting should there be issues with
optimal routing.

Next-Hop: The Next-Hop attribute is the IP address that is used to reach the
advertising router of a specific route. In order to exchange BGP routing updates, all
participating routers within an AS have to be peers or neighbors of each other, which
creates a fully meshed network. The next hop does not necessarily have to be directly
connected to the BGP router using the next-hop address.

Community: The community attribute is a way of grouping destinations, called
communities, to which routing decisions can be applied.

Now, using all these attributes or parameters, how does BGP select the right path to a
destination? BGP runs an algorithm over these parameters called the BGP Path
Selection.

The following are the criteria used to select a path. This information was obtained
from the Cisco website.

1. If the path specifies a next hop that is inaccessible, drop the update.
2. Prefer the path with the largest weight.
3. If the weights are the same, prefer the path with the largest local preference.
4. If the local preferences are the same, prefer the path that was originated by BGP
running on this router.
5. If no route was originated, prefer the route that has the shortest AS_path.
6. If all paths have the same AS_path length, prefer the path with the lowest origin
type (where IGP is lower than EGP and EGP is lower than incomplete).
7. If the origin codes are the same, prefer the path with the lowest MED attribute.
8. If the paths have the same MED, prefer the external path to the internal path.
9. If the paths are still the same, prefer the path through the closest IGP neighbor.
10. Prefer the path with the lowest IP address, as specified by the BGP router ID.
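The heart of this decision list can be sketched as a tuple comparison. This is a
simplified illustration covering only steps 2, 3, 5, 6, and 7 (weight, local
preference, AS_path length, origin, MED); a real BGP implementation also applies the
reachability, external/internal, and router-ID tie-breakers, which are omitted here.

```python
ORIGIN_RANK = {"igp": 0, "egp": 1, "incomplete": 2}   # step 6: IGP < EGP < incomplete

def best_path(paths):
    """Pick the best of several candidate paths to the same prefix,
    following a simplified form of the selection order listed above."""
    return min(
        paths,
        key=lambda p: (
            -p["weight"],               # step 2: largest weight wins
            -p["local_pref"],           # step 3: largest local preference wins
            len(p["as_path"]),          # step 5: shortest AS_path wins
            ORIGIN_RANK[p["origin"]],   # step 6: lowest origin type wins
            p["med"],                   # step 7: lowest MED wins
        ),
    )

# Two hypothetical paths to the same prefix; only the AS_path lengths differ
paths = [
    {"weight": 0, "local_pref": 100, "as_path": [200, 300], "origin": "igp", "med": 0},
    {"weight": 0, "local_pref": 100, "as_path": [400],      "origin": "igp", "med": 0},
]
print(best_path(paths)["as_path"])   # [400] — the shorter AS_path is preferred
```

Because Python compares tuples element by element, the attribute order in the key is
exactly the precedence order of the selection steps.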

Using Figure 1 as the example, some BGP outputs have been collected. On router RB, the
networks on interfaces L0 and L1 were changed from 10.x.x.x to 20.x.x.x networks to
simulate the connection between one AS (RA, with AS 100) and another AS (RB, with
AS 200).

The following is the output from router RA on AS 100.

RA# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/24 is subnetted, 3 subnets


C 10.5.5.0 is directly connected, Loopback0
C 10.1.1.0 is directly connected, Serial0
C 10.6.6.0 is directly connected, Loopback1
B 20.0.0.0/8 [20/0] via 10.1.1.2, 00:01:40

RA# show ip bgp summary
BGP table version is 4, main routing table version 4
2 network entries (3/4 paths) using 372 bytes of memory
2 BGP path attribute entries using 176 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State


10.1.1.2 4 200 6 6 4 0 0 00:02:03

RA# show ip protocol this command displays the routing protocol in use
Routing Protocol is "bgp 100"
Sending updates every 60 seconds, next due in 0 seconds
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
IGP synchronization is enabled
Automatic route summarization is enabled
Neighbor(s):
Address FiltIn FiltOut DistIn DistOut Weight RouteMap
10.1.1.2
Routing for Networks:
10.0.0.0
Routing Information Sources:
Gateway Distance Last Update
10.1.1.2 20 00:02:28
Distance: external 20 internal 200 local 200

RA# show ip bgp


BGP table version is 7, local router ID is 10.6.6.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal
Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path


*> 10.0.0.0 0.0.0.0 0 32768 i
*> 20.0.0.0 10.1.1.2 0 0 200 i

This is the output from router RB on AS 200.

RB> show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is subnetted, 1 subnets
C 10.1.1.0 is directly connected, Serial0
20.0.0.0/8 is subnetted, 2 subnets
C 20.4.4.0 is directly connected, Loopback1
C 20.3.3.0 is directly connected, Loopback0

RB# show ip route


Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, * - candidate default
U - per-user static route

Gateway of last resort is not set

10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks


B 10.0.0.0/8 [20/0] via 10.1.1.1, 00:00:09
C 10.1.1.0/24 is directly connected, Serial0
20.0.0.0/8 is subnetted, 2 subnets
C 20.4.4.0 is directly connected, Loopback1
C 20.3.3.0 is directly connected, Loopback0

RB# show ip bgp summary


BGP table version is 4, main routing table version 4
2 network entries (2/4 paths) using 344 bytes of memory
2 BGP path attribute entries using 208 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory

Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State


10.1.1.1 4 100 8 9 4 0 0 00:04:45

RB# show ip protocol


Routing Protocol is "bgp 200"
Sending updates every 60 seconds, next due in 0 seconds
Outgoing update filter list for all interfaces is not set
Incoming update filter list for all interfaces is not set
IGP synchronization is enabled
Automatic route summarization is enabled
Neighbor(s):
Address FiltIn FiltOut DistIn DistOut Weight RouteMap
10.1.1.1
Routing for Networks:
20.0.0.0
Routing Information Sources:

Gateway Distance Last Update
10.1.1.1 20 00:01:08
Distance: external 20 internal 200 local 200

RB> show ip bgp


BGP table version is 7, local router ID is 10.4.4.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal
Origin codes: i - IGP, e - EGP, ? - incomplete

Network Next Hop Metric LocPrf Weight Path


*> 10.0.0.0 10.1.1.1 0 0 100 i
*> 20.0.0.0 0.0.0.0 0 32768 i

When one of the connections between the two ASes goes down and recovers, you can see
how the protocol recalculates its routing tables:

RA#
BGP: no valid path for 20.0.0.0/8
BGP: nettable_walker 20.0.0.0/255.0.0.0 no best path selected
BGP: 10.1.1.2 computing updates, neighbor version 0, table version 6, starting at 0.0.0.0
BGP: 10.1.1.2 sends UPDATE 10.0.0.0/8, next 10.1.1.1, metric 0, path 100
BGP: 10.1.1.2 1 updates enqueued (average=50, maximum=50)
BGP: 10.1.1.2 update run completed, ran for 4ms, neighbor version 0, start version 6,
throttled to 6, check point net 0.0.0.0
BGP: 10.1.1.2 rcv UPDATE about 20.0.0.0/8, next hop 10.1.1.2, path 200 metric 0
BGP: nettable_walker 20.0.0.0/255.0.0.0 calling revise_route
BGP: revise route installing 20.0.0.0/255.0.0.0 ->
BGP: 10.1.1.2 computing updates, neighbor version 6, table version 7, starting at 0.0.0.0
BGP: 10.1.1.2 update run completed, ran for 0ms, neighbor version 6, start version 7,
throttled to 7, check point net 0.0.0.0

Again, BGP has so many intricacies that it is impossible to cover all the
characteristics of this routing protocol here. Please refer to the links provided for
more about BGP.

8.1.6 Hot Standby Router Protocol (HSRP)

The Hot Standby Router Protocol (HSRP) is a Cisco proprietary protocol. It is a
redundancy protocol that uses an active and a standby router to provide users with
transparent connectivity to a network should the active router have any issues.

For more information on the Hot Standby Protocol, you can visit the following website
http://www.ietf.org/rfc/rfc2281 .

Protocol Operation
Two (or more) routers connected to the same user segment share one IP address (the
HSRP IP address) and one MAC address, which hosts use as their default gateway or
next-hop router. The virtual IP address and MAC address are tied to the active router,
which acts as the next-hop router for the local users. The active and standby HSRP
routers exchange hello packets carrying parameters such as group number, priority, and
virtual IP address. These hello packets are sent to a multicast group specific to HSRP.

A group number is necessary because there can be multiple routers on the same segment
that belong to different HSRP processes; it determines which routers participate in a
specific HSRP process. Priority determines which of the two or more routers under the
same group number serving a specific segment is selected as the active router.
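The shared virtual MAC address is derived mechanically from the group number: HSRP
version 1 uses the well-known Cisco prefix 0000.0c07.ac with the group number as the
last octet (RFC 2281). A tiny sketch of that derivation:

```python
def hsrp_virtual_mac(group):
    """HSRPv1 virtual MAC: well-known prefix 0000.0c07.ac followed by
    the group number in hex as the final octet (RFC 2281)."""
    if not 0 <= group <= 255:
        raise ValueError("HSRPv1 group numbers fit in one octet")
    return f"0000.0c07.ac{group:02x}"

print(hsrp_virtual_mac(1))    # 0000.0c07.ac01 — the MAC the hosts ARP for
```

Whichever router is active answers ARP requests for the virtual IP with this MAC, so a
failover does not invalidate the hosts' ARP caches.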

The router with the highest priority under a group number is elected the active router.
The virtual IP address creates the illusion for the users that there is only one device
acting as the next-hop router. Hello (keepalive) packets are exchanged every three
seconds, carrying all the parameters mentioned above. Should a problem with the active
router cause hellos to be missed, the HSRP process waits out a hold time of roughly
three times the hello timer, about 10 seconds, before the standby router becomes the
active one. This means that for up to 10 seconds there is no next-hop router available
to the users. However, since most connections between end user and server are TCP based
and TCP is connection-oriented, TCP will try to maintain the session by sending
retransmissions until the network path is once again available. After waiting the 10
seconds, the HSRP process begins the switchover: with the active router silent, the
standby router now holds the highest priority in the group.
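The timer logic above can be sketched from the standby router's point of view. The
hello and hold values are the defaults just described; the state strings are
illustrative, not actual HSRP state names.

```python
def hsrp_view(last_hello_age, hello=3, hold=10):
    """A standby router's view of the active router, given how many
    seconds have passed since the last hello was heard. Defaults match
    the timers described above: hellos every 3 s, hold time of 10 s."""
    if last_hello_age < hold:
        return "active router considered alive"
    return "hold timer expired - standby takes over"

print(hsrp_view(4))    # a hello was missed, but still within the 10 s hold time
print(hsrp_view(11))   # hold timer expired - standby takes over
```

The gap between the 3-second hello interval and the 10-second hold time is what
tolerates a lost hello or two without a needless failover.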

When this happens, the standby router begins to route traffic, since it now recognizes
itself as having the highest priority. It starts advertising to the other HSRP routers
under the same group number that it is the new active router, the virtual IP and MAC
addresses shift over to it, and connectivity continues. The outage is so brief that
users may not even notice the problem. This is a very important redundancy protocol
that should be implemented wherever Cisco networks are deployed, as it gives users the
benefit of rarely losing their default gateway or next-hop router.

HSRP also has a way to detect problems on other, locally connected uplinks that could
prevent packets from leaving toward the network. If there is no issue on the local
segment, packets are always routed through the active router. But if the active router
has an uplink (another interface connecting to other networks) that goes down or has
issues, packets will not be able to route out toward the destination; those packets
need to be re-routed without imposing a long downtime on the user.

Therefore, the local interfaces where user segments attach can be configured to track
how the uplinks of those routers toward the rest of the network are behaving. If the
active router detects that its uplink to the next network is having problems, tracking
triggers an update between active and standby and causes the HSRP process to fail over
to the standby router. Traffic is re-routed through the standby router until the failed
uplink on the active router is fixed. Tracking is important because it provides the
extra redundancy required for multiple user segments.

Fig 8.5: RA (priority 105, group 1, address .1) is the active router and RB (priority
100, group 1, address .2) the standby router on segment 10.1.1.0/24; they share HSRP
address .3 and track their uplinks toward the network. Under normal conditions the PC
routes through RA; when RA fails, RB takes over.

8.1.7 Conclusion

This session was meant to provide the student with some insight into the most commonly
used routing protocols. Entire books have been written on every routing protocol
described here, so it would be very difficult to cover all their functionality and
properties without extending the session. The student should be able to recognize the
most important facts about these protocols and use the URLs provided for additional
reading.

Chapter 9: TRANSMISSION CONTROL PROTOCOL

9.1 Introduction
This session is dedicated to introducing how a portion of the Transmission Control
Protocol (TCP) operates and explaining how it has been designed to provide reliable
connectivity between source and destination. The student will gain a clear
understanding of connection establishment between client and server, or source and
destination. The session also covers how TCP performs flow control and how windowing
plays an important role in any TCP connection.

A lab scenario is set up that provides details of data captured from a TCP connection
between a client and a web server. All the specific parameters negotiated during the
three-way handshake are described. The session concludes with a few application
protocols that use TCP as the underlying transport protocol.

9.1.1 Transmission Control Protocol (TCP)

The Transmission Control Protocol (TCP) http://www.ietf.org/rfc/rfc793 is a protocol
that exists at the Transport Layer of the OSI model. TCP is used to create
connection-oriented, reliable connections between client and server, where the session
is established, maintained, and terminated by TCP. Session establishment is known as
the Three-way Handshake, during which TCP negotiates a variety of parameters related
to the conversation between client and server, such as window size, flow control,
buffer space, TCP ports, source and destination addresses, and sequence numbers.

TCP uses specific ports to represent different applications. There are well-known TCP
ports that identify the application in use: ports less than 1024 are considered
well-known server/application ports, such as port 23 for Telnet, 25 for SMTP (Simple
Mail Transfer Protocol), and port 80 for HTTP. One of the most important functions of
TCP is its use of sequence numbers to keep a session in order.

The sequence number is a randomly generated number that is exchanged during the
initial handshake and is used to keep the data exchange flowing in order and in
sequence. Once the session is established between two TCP devices, the sequence
numbers have been exchanged and agreed upon. TCP works together with the IP protocol,
and together they are known as the TCP/IP stack. Most of the applications that run on
the Internet are TCP oriented: the protocol underlies applications such as web
browsers, databases, and other programs commonly used in industry.

TCP segments encapsulate the data passed down from the layers above, such as the
Application, Presentation, and Session layers. Each of these layers adds its own
header (encapsulation). When TCP gets this information, it creates a segment. The IP
protocol encapsulates this segment into a packet and then routes the packets across the
network. Eventually the Data Link Layer encapsulates the packet and creates a frame.

The frame is then moved across physical networks until the information reaches its
final destination. (Review this process in the OSI model chapter.) We discussed the
datagram layout of an IP packet and how it was divided into fields, each with a
specific function. The TCP segment is also divided into fields that are interpreted by
the network accordingly.

How does a TCP segment break down? TCP segments are laid out in 32-bit words, just as
IP packets are. A segment includes a 20-byte header, and its fields are the following:
Source Port number, Destination Port number, Sequence number, Acknowledgement number,
Header length, Flag bits, Window size, Checksum, Urgent pointer, Options & Data.

Source port number: TCP uses this port to determine which upper layer application the
delivered data belongs to. There can be multiple TCP data streams between client and
server; TCP source and destination ports make it possible to identify a specific
conversation. This is part of the multiplexing and de-multiplexing that the Transport
Layer uses to maintain sessions. The field uses 16 bits, which allows for 65,536 port
values.

Destination port number: This is the target port on the receiving end. This field also
uses 16 bits.

Sequence number: This 32-bit number identifies the first byte of the data in the segment.

Acknowledgement number: This 32-bit number identifies the next byte that the sender of
the acknowledgement expects to receive.

Header Length: This 4-bit number identifies how many 32-bit words make up the header.

Flag bits: These bits are important because they identify special states in the
protocol. Some segments carry only an acknowledgement, others carry data, others reset
a malfunctioning connection, and others are specific to the starting and closing of a
session.

There are 6 bits that make up this field.

1. URG: Urgent Pointer field is valid. Urgent data included.
2. ACK: The acknowledgement field is valid. This bit is usually set.
3. PSH: This segment requests a push. Data passed to the application as soon as
possible.
4. RST: Reset the connection. The sequence number becomes invalid.
5. SYN: Synchronize sequence numbers. Agreement of sequence numbers between
source and destination
6. FIN: Sender has finished sending data. The session is closed.

141
Window size: This advertises the receiver's available buffer space. The field is 16
bits and is measured in bytes.

Checksum: A 16-bit checksum used to verify the integrity of the data as well as the
TCP header.

Urgent Pointer: This is how TCP signals that there is urgent data to be sent ahead of
the normal data stream. The URG bit is set when this is the case.

Options: This field is used to negotiate options with the TCP software at the other
end of the connection, such as the maximum segment size (MSS). It is one form of flow
control.

Fig 9.1

TCP Segment
32 bits

Source Port Destination Port

Sequence number

Acknowledge number

U A P R S F
HLEN R C S S Y I Window Size
G K H T N N

Checksum Urgent Pointer

Options
(Padding)

DATA
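The 20-byte fixed header laid out in Fig 9.1 can be built and taken apart with
Python's struct module. The field values below are taken from the handshake example in
the next section (client port 1829, server port 80, initial sequence number 20223088,
window 8192); the checksum is left at zero for simplicity, so this is a sketch of the
layout, not a transmittable segment.

```python
import struct

src_port, dst_port = 1829, 80
seq, ack = 20223088, 0
data_offset = 5                    # header length in 32-bit words (5 * 4 = 20 bytes)
flags = 0x02                       # only the SYN bit set, as in the first segment
window, checksum, urgent = 8192, 0, 0

# !HHLLBBHHH: network byte order; 2+2+4+4+1+1+2+2+2 = 20 bytes, matching Fig 9.1
header = struct.pack(
    "!HHLLBBHHH",
    src_port, dst_port, seq, ack,
    data_offset << 4,              # the upper 4 bits of this byte hold the offset
    flags, window, checksum, urgent,
)
print(len(header))                 # 20

# Unpacking recovers the same fields
fields = struct.unpack("!HHLLBBHHH", header)
print(fields[0], fields[1], fields[2], fields[6])   # 1829 80 20223088 8192
```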

9.1.2 How does Three-way Handshaking work in detail?

Assume there are two devices that wish to communicate with each other using TCP. This
means that the connectivity is going to use a connection-oriented mechanism, and
specific parameters need to be exchanged for this to work.

To illustrate this concept, an example (see below) has been set up between a browser on
a client machine and a web server. A sniffer is a device used in industry to analyze
data flows on networks. It decodes the data into a readable form (as shown below) that
allows anyone troubleshooting to follow the protocol in detail. The trace below shows
exactly how the three-way handshake works.

Let us follow each line systematically. The first shaded region shows a client with IP
10.2.2.1 setting up a TCP session with the web server, 10.1.1.1. How do we know this
is the case? Look at the source and destination ports. Port 80 is the destination port
(80 is HTTP and falls within the range below 1024 that represents well-known TCP
ports). Source port 1829 is the client's TCP port in use to set up the connection.
There are other sessions from the same machine, 10.2.2.1, using source port 1830,
which clearly illustrates the multiplexing, discussed before, that the Transport Layer
uses. Now follow the connection using TCP ports 1829 and 80 in the trace below,
highlighted by the gray shade. Start with line three in the trace.

The third line of the sniffer trace shows the first step of the three-way handshake,
done via SYN. Client 10.2.2.1 sends a request to connect to the server, which is
interpreted as a SYN segment. A sequence number is randomly created, which reads
20223088. Length is equal to 0 because this is not a data segment; it is a connection
segment. The client also advertises its window size, WIN=8192.

Destination Source Protocol Summary


[10.1.1.1] [10.2.2.1] TCP D=80 S=1829 SYN SEQ=20223088 LEN=0 WIN=8192

The second step of the three-way handshake begins the moment the server replies to the
client. Notice on line 6 how the server replies to the SYN from client 10.2.2.1. It
accepts the segment and acknowledges the client's sequence number plus one, 20223089;
the source port is now S=80 and the destination is the client's TCP port, D=1829.
Along with the acknowledgement, the server generates its own random sequence number,
4187161162. It too has no data to send, so its length is 0, and its window is
advertised back to the client as WIN=8760.

Destination Source Protocol Summary


[10.2.2.1] [10.1.1.1] TCP D=1829 S=80 SYN ACK=20223089 SEQ=4187161162 LEN=0 WIN=8760"

For the third and final step of the three-way handshaking, look at line 9. This TCP
segment shows the client, with IP address 10.2.2.1 and TCP source port 1829,
acknowledging the sequence number generated by the web server plus one,
ACK=4187161163. There is no data, hence no length field is present, and once again
the window size is 8760. Line 21 (the next shaded region) continues the exchange: the
client is now ready to send data with the parameters negotiated during the handshake.
It uses the acknowledgement number obtained on line 9, 4187161163, and its own next
sequence number, 20223089 (acknowledged on line 6), and begins to send data. The
length is now LEN=468 (in bytes), with the same window size of 8760.

Destination Source Protocol Summary


[10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187161163 SEQ=20223089 LEN=468 WIN=8760"

The client will continue sending data, maintaining the connection, until someone closes it.
The next acknowledgement from the server for the data sent by the client will carry the
number 20223089 plus 468, as seen on line 29.

Destination Source Protocol Summary


[10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 WIN=8760"

ACK=20223557 is equal to 20223089 + 468
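This acknowledgement arithmetic can be verified directly; a quick check in Python using the numbers taken from the trace:

```python
# Values taken from the trace (lines 21 and 29).
client_seq = 20223089   # sequence number of the client's 468-byte data segment
data_len = 468          # LEN of that segment, in bytes

# The server's cumulative ACK names the next octet it expects.
next_ack = client_seq + data_len
print(next_ack)         # 20223557, matching ACK=20223557 on line 29
```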

On line 35, the server is now ready to send data back to the client. The server uses the
sequence number it established during the handshake, 4187161163, and sends data of
LEN=132 with a constant window equal to WIN=8760.

Destination Source Protocol Summary


[10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187161163 LEN=132 WIN=8760"

On line 43, the client acknowledges the server's data by advancing its acknowledgement
number to 4187161295 (4187161163 + 132) and advertising a window equal to WIN=8628.
This window is smaller because data is being buffered on the client side, reducing the
memory available to store new data. If this number dropped to zero and stayed there, it
would mean no buffer space remained, and the session would have to be reset (abnormal
termination).

Destination Source Protocol Summary


[10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187161295 WIN=8628"

This is how the sequencing is maintained. The most important process in a TCP session is
keeping segments in order. Continue looking through the shaded regions of this trace to
see the sequencing and data flow.

On line 109, the server sends a FIN. This means the session is going to close in a
normal way.

Destination Source Protocol Summary


[10.2.2.1] [10.1.1.1] TCP D=1829 S=80 FIN ACK=20223557 SEQ=4187171153 LEN=0 WIN=8760"

On line 112, the client acknowledges the FIN segment by acknowledging the server's
sequence number plus one, as shown below.

Destination Source Protocol Summary


[10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187171154 WIN=8101"

The client then sends its own FIN segment to terminate the session, as requested by the
server, as shown on line 115.

Destination Source Protocol Summary


[10.1.1.1] [10.2.2.1] TCP D=80 S=1829 FIN ACK=4187171154 SEQ=20223557 LEN=0 WIN=8101"

This is a clear example of a TCP connection.

Destination Source Protocol Summary

1 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"


2 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=24 ID=57496"
3 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 SYN SEQ=20223088 LEN=0 WIN=8192"
4 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=60 bytes"
5 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=24 ID=7411"
6 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 SYN ACK=20223089 SEQ=4187161162 LEN=0 WIN=8760"
7 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
8 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=57752"
9 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187161163 WIN=8760"
10 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
11 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=24 ID=58008"
12 [10.1.1.1] [10.2.2.1] TCP D=80 S=1830 SYN SEQ=20223090 LEN=0 WIN=8192"
13 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=60 bytes"
14 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=24 ID=7667"
15 [10.2.2.1] [10.1.1.1] TCP D=1830 S=80 SYN ACK=20223091 SEQ=4187283539 LEN=0 WIN=8760"
16 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
17 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=58264"
18 [10.1.1.1] [10.2.2.1] TCP D=80 S=1830 ACK=4187283540 WIN=8760"
19 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=522 bytes"
20 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=488 ID=58520"
21 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187161163 SEQ=20223089 LEN=468 WIN=8760"
22 [10.1.1.1] [10.2.2.1] HTTP C Port=1829 GET http://xms/Reports/XmsNavigator.asp HTTP/1.0"
23 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=520 bytes"
24 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=486 ID=58776"
25 [10.1.1.1] [10.2.2.1] TCP D=80 S=1830 ACK=4187283540 SEQ=20223091 LEN=466 WIN=8760"
26 [10.1.1.1] [10.2.2.1] HTTP C Port=1830 GET http://xms/Reports/XmsReports.asp HTTP/1.0"
27 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=60 bytes"
28 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=20 ID=7923"
29 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 WIN=8760"
30 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=60 bytes"
31 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=20 ID=8179"

32 [10.2.2.1] [10.1.1.1] TCP D=1830 S=80 ACK=20223557 WIN=8760"
33 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=186 bytes"
34 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=152 ID=8691"
35 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187161163 LEN=132 WIN=8760"
36 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
37 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=186 bytes"
38 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=152 ID=8947"
39 [10.2.2.1] [10.1.1.1] TCP D=1830 S=80 ACK=20223557 SEQ=4187283540 LEN=132 WIN=8760"
40 [10.2.2.1] [10.1.1.1] HTTP R Port=1830 HTML Data"
41 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
42 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=59288"
43 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187161295 WIN=8628"
44 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
45 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=59544"
46 [10.1.1.1] [10.2.2.1] TCP D=80 S=1830 ACK=4187283672 WIN=8628"
47 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=1514 bytes"
48 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=1480 ID=9203"
49 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187161295 LEN=1460 WIN=8760"
50 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
51 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=777 bytes"
52 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=743 ID=9459"
53 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187162755 LEN=723 WIN=8760"
54 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
55 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=656 bytes"
56 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=622 ID=9715"
57 [10.2.2.1] [10.1.1.1] TCP D=1830 S=80 FIN ACK=20223557 SEQ=4187283672 LEN=602 WIN=8760"
58 [10.2.2.1] [10.1.1.1] HTTP R Port=1830 HTML Data"
59 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
60 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=59800"
61 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187163478 WIN=8760"
62 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
63 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=60056"
64 [10.1.1.1] [10.2.2.1] TCP D=80 S=1830 ACK=4187284275 WIN=8026"
65 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=1514 bytes"
66 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=1480 ID=9971"
67 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187163478 LEN=1460 WIN=8760"
68 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
69 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
70 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=61848"
71 [10.1.1.1] [10.2.2.1] TCP D=80 S=1830 FIN ACK=4187284275 SEQ=20223557 LEN=0 WIN=8026"
72 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=60 bytes"
73 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=20 ID=11507"
74 [10.2.2.1] [10.1.1.1] TCP D=1830 S=80 ACK=20223558 WIN=8760"
75 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
76 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=62104"
77 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187164938 WIN=8760"
78 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=1514 bytes"

79 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=1480 ID=14067"
80 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187164938 LEN=1460 WIN=8760"
81 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
82 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=1230 bytes"
83 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=1196 ID=14323"
84 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187166398 LEN=1176 WIN=8760"
85 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
86 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=1514 bytes"
87 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=1480 ID=14579"
88 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187167574 LEN=1460 WIN=8760"
89 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
90 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
91 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=63896"
92 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187167574 WIN=8760"
93 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=1514 bytes"
94 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=1480 ID=14835"
95 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187169034 LEN=1460 WIN=8760"
96 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
97 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
98 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=64152"
99 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187170494 WIN=8760"
100 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=713 bytes"
101 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=679 ID=15091"
102 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223557 SEQ=4187170494 LEN=659 WIN=8760"
103 [10.2.2.1] [10.1.1.1] HTTP R Port=1829 HTML Data"
104 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
105 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=64408"
106 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187171153 WIN=8101"
107 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=60 bytes"
108 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=20 ID=15347"
109 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 FIN ACK=20223557 SEQ=4187171153 LEN=0 WIN=8760"
110 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
111 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=64664"
112 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 ACK=4187171154 WIN=8101"
113 [10.1.1.1] [10.2.2.1] DLC Ethertype=0800, size=60 bytes"
114 [10.1.1.1] [10.2.2.1] IP D=[10.1.1.1] S=[10.2.2.1] LEN=20 ID=64920"
115 [10.1.1.1] [10.2.2.1] TCP D=80 S=1829 FIN ACK=4187171154 SEQ=20223557 LEN=0 WIN=8101"
116 [10.2.2.1] [10.1.1.1] DLC Ethertype=0800, size=60 bytes"
117 [10.2.2.1] [10.1.1.1] IP D=[10.2.2.1] S=[10.1.1.1] LEN=20 ID=15603"
118 [10.2.2.1] [10.1.1.1] TCP D=1829 S=80 ACK=20223558 WIN=8760"
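The entire exchange in the trace is what the operating system performs automatically whenever an application opens a TCP connection. A minimal Python sketch over the loopback interface (the addresses, port choice, and payload here are illustrative):

```python
import socket

# The three-way handshake happens inside connect() and accept(); the
# application never sees the SYN, SYN-ACK, or ACK segments themselves.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]       # plays the role of the well-known port

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN -> SYN/ACK -> ACK occurs here
conn, addr = server.accept()         # handshake completed on the server side

conn.sendall(b"hello")               # data flows only once established
received = client.recv(5)
print(received)                      # b'hello'

client.close()                       # closing triggers the FIN exchange
conn.close()
server.close()
```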

Fig 9.2
TCP "Three-Way Handshaking" between client and server:
(1) the client sends a connection request (SYN);
(2) the server replies, SYN acknowledged;
(3) the client answers, established (Ack + 1), and the server completes the setup
(Ack + 2); data then flows in both directions.

9.1.3 Sliding Windows

What is the idea behind sliding windows? Sliding window protocols are used because
they use network bandwidth more efficiently: they allow the sender to transmit multiple
packets before waiting for an acknowledgement. Protocols that do not use sliding
windows are less efficient. A sender using a non-sliding-window protocol sends one
packet and waits for an acknowledgement before sending the next, which makes the
communication flow far from optimal.

A sliding window can be thought of as a sequence of packets that are transmitted
together. The protocol places a fixed window size on the sequence and transmits all
packets that lie inside the window. Assume there are 8 packets in the window. All 8
packets can be sent while expecting only one acknowledgement. If the last packet inside
the window is acknowledged, there was no packet loss. If only packet 5 is acknowledged
after the sender sent all 8 packets, the sender assumes that packets 6, 7, and 8 were lost
along the way; it retransmits packets 6, 7, and 8 and expects an acknowledgement for
packet 8. Only packets 1 through 8 will be sent, as these are all inside the window. The
window slides to include the 9th packet only when packet 1 has been acknowledged. The
window moves from left to right very quickly. A receiving host advertises the maximum
window size it is prepared to accept.
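The packet-window behavior described above can be modeled in a few lines of Python (a toy sketch, not a real TCP implementation):

```python
def sendable(packets, window_size, acked):
    """Packets the sender may have outstanding right now.

    Toy model of the sliding window: the window starts at the first
    unacknowledged packet and covers at most window_size packets.
    """
    return packets[acked:acked + window_size]

packets = list(range(1, 13))             # packets numbered 1..12

# Nothing acknowledged yet: only packets 1-8 are inside the window.
print(sendable(packets, 8, acked=0))     # [1, 2, 3, 4, 5, 6, 7, 8]

# A cumulative acknowledgement for packet 5 slides the window forward.
print(sendable(packets, 8, acked=5))     # [6, 7, 8, 9, 10, 11, 12]
```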

The window size varies constantly but should never reach zero; if the window reaches a
value of zero, data can no longer be sent. The window size is not negotiated up front, so
it is up to the sender not to overrun the receiver's buffer space. This is known as flow
control. Flow control is a mechanism that allows two devices to manage and control the
data flow between each other: either the sender or the receiver can tell the other to slow
down or speed up the transfer. Flow control is necessary so as not to overrun each
other's buffers. This is the basic idea behind congestion control.

How does TCP use the sliding window mechanism?


TCP views the data stream as a sequence of octets or bytes that it divides into segments
for transmission. It uses a specialized sliding window mechanism to address two issues:
efficient transmission and flow control. The TCP window mechanism operates on the
octet level. Octets of the data stream are numbered sequentially within the window.

The sender keeps three pointers while the connection is up. The first pointer marks the
octets that have been sent and acknowledged; these octets lie outside the window, to its
left. The second pointer marks octets that have been sent but not yet acknowledged;
these are still inside the window and leave it only once they are acknowledged. The third
pointer marks octets that have not been sent but can be sent without any delay. Any
other octet beyond the window cannot be sent until the window moves. The receiver
keeps the same kind of window mechanism to properly send its own data stream back to
the sender. The two mechanisms operate in full-duplex mode, which means that they act
simultaneously in opposite directions.
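The three pointers can be sketched as a small classifier; the pointer values below are illustrative, not taken from a real connection:

```python
def classify(octet, acked, next_to_send, window_right):
    """Classify an octet number against the sender's three pointers."""
    if octet < acked:
        return "sent and acknowledged (left of the window)"
    if octet < next_to_send:
        return "sent, awaiting acknowledgement"
    if octet < window_right:
        return "not yet sent, may be sent without delay"
    return "outside the window, must wait for it to move"

# Illustrative pointer positions: octets 0-3 acknowledged, 4-8 in flight,
# 9-11 sendable immediately, 12 and beyond blocked.
for n in (2, 5, 10, 15):
    print(n, classify(n, acked=4, next_to_send=9, window_right=12))
```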

To learn more about the sliding window protocol you can visit this website
http://authors.phptr.com/tanenbaumcn4/samples/section03_04.pdf or you can read the
following paper at http://www.ietf.org/rfc/rfc2001 .

TCP differs from other sliding window implementations by allowing the window size to
vary over time. Each acknowledgement contains a window advertisement that specifies
how many additional octets of data the receiver is prepared to accept. This creates a more
reliable transfer and more efficient use of the network bandwidth. As the buffer space or
window starts to become smaller, the transfer becomes proportional to that window. In
other words, the receiver will advertise a smaller window size, allowing the sender to
honor that advertisement without overflowing the receiver's buffer. This is how
congestion control works.

Fig 9.3
TCP Sliding Window: the octets of the stream are numbered sequentially
(1, 2, 3, ...); a window of 7 octets covers the portion that may be sent, and the
window moves from left to right very quickly as acknowledgements arrive.

What upper level protocols are encapsulated on a TCP segment?


A multitude of protocols residing at the application layer rely on the TCP/IP set of
protocols to get data across the network. As you will recall, the transport layer takes
care of the connection across the network. The following are some of the more common
application-layer protocols that TCP encapsulates.

File Transfer Protocol (FTP)


The File Transfer Protocol http://www.ietf.org/rfc/rfc959 is used to access remote nodes
or machines on a network in order to obtain files stored on the remote system under a
specific directory. It is also used to deposit files on the remote system so that others may
access them. It allows you to transfer a file, either binary or text, across a network using
TCP ports 20 and 21. TCP port 21 on the server side is used to set up the connection,
which is also known as the control connection.

This is the port the client uses to issue the commands that get a file ready for download.
TCP port 20 is used strictly to transfer the actual data. It is the data port.
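Python's standard ftplib module follows this split between control and data connections; a hedged sketch (the host name is a placeholder, and the listing call only works against a reachable server):

```python
from ftplib import FTP, FTP_PORT

# ftplib's default control port is the well-known FTP port.
print(FTP_PORT)   # 21

def fetch_listing(host="ftp.example.com"):
    """Sketch only: 'ftp.example.com' is a hypothetical host name."""
    ftp = FTP(host)        # control connection to TCP port 21
    ftp.login()            # anonymous login over the control connection
    names = ftp.nlst()     # file names travel over a separate data connection
    ftp.quit()
    return names
```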

Telnet
Telnet http://www.ietf.org/rfc/rfc137 is a terminal emulation protocol used on TCP/IP
networks. A Telnet program runs on a computer and is used to connect to servers or
other communication nodes. Commands can be entered through the Telnet session just as
if the user were typing them directly on the server or communication node's console
port. Telnet sessions allow the user to control and communicate with the end device and
to execute commands that perform specific functions. Telnet usually uses some type of
authentication in which the user is prompted for a username and password. The server
or communication node uses this mechanism to restrict access to the device.

Let us return to the previous example of our simple network of interconnected routers
RA and RB.

Fig 9.4
Routers RA and RB connected by a serial link on network 10.1.1.0/24
(RA S0 = .1, RB S0 = .2). RB has loopback interfaces L0 (10.3.3.0/24) and
L1 (10.4.4.0/24); RA has loopback interface L1 (10.6.6.0/24). L0 and L1 are
loopback interfaces: virtual interfaces created on a router that are always up.

The following router debug will show how a session is created using Telnet from RA to
RB.

RA# debug ip tcp packet


TCP Packet debugging is on
RA# telnet 10.3.3.1
Trying 10.3.3.1 ... Open

User Access Verification

Password:
tcp0: O CLOSED 10.3.3.1:23 10.1.1.1:11000 seq 2841117597
OPTS 4 SYN WIN 2144
tcp0: I SYNSENT 10.3.3.1:23 10.1.1.1:11000 seq 2827296498
OPTS 4 ACK 2841117598 SYN WIN 2144
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117598
ACK 2827296499 WIN 2144
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117598
DATA 12 ACK 2827296499 PSH WIN 2144
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117610
ACK 2827296499 WIN 2144
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296499
DATA 9 ACK 2841117598 PSH WIN 2144
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117610
DATA 3 ACK 2827296508 PSH WIN 2135
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117613
DATA 3 ACK 2827296508 PSH WIN 2135
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296508
DATA 42 ACK 2841117598 PSH WIN 2144
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296550
DATA 3 ACK 2841117610 PSH WIN 2132
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296553
DATA 3 ACK 2841117610 PSH WIN 2132
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117616
DATA 3 ACK 2827296556 PSH WIN 2087
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117619
DATA 3 ACK 2827296556 PSH WIN 2087
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296556
DATA 3 ACK 2841117610 PSH WIN 2132
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296559
DATA 6 ACK 2841117613 PSH WIN 2129
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296565
DATA 3 ACK 2841117619 PSH WIN 2123
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117622
ACK 2827296568 WIN 2075
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296568
ACK 2841117622 WIN 2120
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117622
DATA 1 ACK 2827296568 PSH WIN 2075
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117623
DATA 1 ACK 2827296568 PSH WIN 2075
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117624
DATA 1 ACK 2827296568 PSH WIN 2075
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117625
DATA 1 ACK 2827296568 PSH WIN 2075
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296568
ACK 2841117625 WIN 2117
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117626
DATA 1 ACK 2827296568 PSH WIN 2075
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296568
ACK 2841117627 WIN 2115
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117627
DATA 2 ACK 2827296568 PSH WIN 2075
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296568
DATA 5 ACK 2841117629 PSH WIN 2113
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117629
ACK 2827296573 WIN 2070
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117629
DATA 2 ACK 2827296573 PSH WIN 2070
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296573
DATA 2 ACK 2841117631 PSH WIN 2111
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296575
DATA 3 ACK 2841117631 PSH WIN 2111
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117631
ACK 2827296578 WIN 2065
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117631
DATA 2 ACK 2827296578 PSH WIN 2065
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296578
DATA 2 ACK 2841117633 PSH WIN 2109
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296580
DATA 3 ACK 2841117633 PSH WIN 2109
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117633
DATA 1 ACK 2827296583 PSH WIN 2060
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296583
DATA 1 ACK 2841117634 PSH WIN 2108
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117634
DATA 1 ACK 2827296584 PSH WIN 2059
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296584
DATA 1 ACK 2841117635 PSH WIN 2107
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117635
DATA 1 ACK 2827296585 PSH WIN 2058
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296585
DATA 1 ACK 2841117636 PSH WIN 2106
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117636
DATA 1 ACK 2827296586 PSH WIN 2057
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296586
DATA 1 ACK 2841117637 PSH WIN 2105
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117637
DATA 2 ACK 2827296587 PSH WIN 2056
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296587
DATA 2 ACK 2841117639 PSH WIN 2103
tcp0: I ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2827296589
ACK 2841117639 FIN PSH WIN 2103
tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117639
ACK 2827296590 WIN 2054
tcp0: O LASTACK 10.3.3.1:23 10.1.1.1:11000 seq 2841117639
ACK 2827296590 FIN PSH WIN 2054
tcp0: I LASTACK 10.3.3.1:23 10.1.1.1:11000 seq 2827296590
ACK 2841117640 WIN 2103

(See all the parameters discussed during the analysis of previous data trace explained in
the handshaking session).
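Debug output like the above is easier to follow when reduced to its fields. A small Python parser for the first line of each entry (the field layout and ordering are inferred from the trace above):

```python
import re

# tcp0: <I/O> <state> <foreign ip:port> <local ip:port> seq <number>
LINE = re.compile(
    r"tcp0:\s+(?P<dir>[IO])\s+(?P<state>\S+)\s+"
    r"(?P<fip>[\d.]+):(?P<fport>\d+)\s+(?P<lip>[\d.]+):(?P<lport>\d+)\s+"
    r"seq\s+(?P<seq>\d+)"
)

def parse(line):
    """Return the fields of one debug line, or None if it does not match."""
    m = LINE.match(line)
    if not m:
        return None
    fields = m.groupdict()
    for key in ("fport", "lport", "seq"):
        fields[key] = int(fields[key])
    return fields

info = parse("tcp0: O ESTAB 10.3.3.1:23 10.1.1.1:11000 seq 2841117598")
print(info["state"], info["fport"], info["seq"])  # ESTAB 23 2841117598
```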

Simple Mail Transfer Protocol (SMTP)


The Simple Mail Transfer Protocol http://www.ietf.org/rfc/rfc2821 is a protocol used to
transfer mail between servers; servers on the Internet use this method to move messages
from one to another. Messages are then retrieved by email clients using protocols such as
POP (Post Office Protocol) http://www.ietf.org/rfc/rfc1939 or IMAP (Internet Message
Access Protocol) http://www.ietf.org/rfc/rfc2062 . SMTP servers also act as relays that
take mail from inside an organization to an outside SMTP relay, which in turn forwards
the email message to other servers on the Internet.

HyperText Transfer Protocol (WWW- HTTP)


The HyperText Transfer Protocol http://www.ietf.org/rfc/rfc2616 is the protocol used by
the World Wide Web. It sends specific commands to web servers to be executed when
the client clicks on a link or URL (Uniform Resource Locator
http://www.ietf.org/rfc/rfc1738 ). Uniform Resource Locators are universal addresses of
documents and other information located on the World Wide Web. A URL has two
parts: the first part indicates what protocol is used to access the information, and the
second part is the actual location where the information resides. In any of the links
included in these sessions the breakdown is visible: ftp:// indicates the protocol in use,
and the rest is the actual address.
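This two-part split can be seen with Python's urllib.parse, here applied to one of the URLs from the earlier HTTP trace:

```python
from urllib.parse import urlparse

url = "http://xms/Reports/XmsNavigator.asp"
parts = urlparse(url)

print(parts.scheme)   # 'http' -- which protocol to use
print(parts.netloc)   # 'xms'  -- where the resource lives
print(parts.path)     # '/Reports/XmsNavigator.asp' -- the document itself
```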

9.1.4 Conclusion

Due to the complex nature of the TCP protocol, only a brief introduction was provided in
this session. As mentioned in previous sessions, a large number of books have been
written on this subject, far more than can be covered during this course. However, this
session has provided a basic understanding of the TCP connectivity between client and
server.

An important concept covered in this session was three-way handshaking. This concept
requires a three-way exchange of messages to set up a connection. Understanding three-
way handshaking will allow the student to continue studying all the characteristics and
intricacies that TCP uses. The idea of flow control and windowing was introduced which
led to the mentioning of congestion control. Congestion control is a topic that will
require an entire semester to discuss and it is beyond the scope of this course.

Finally, a few application protocols that use TCP as their underlying transport
mechanism were introduced. These are just a few of the many protocols using TCP
today.

Discussion Questions

1. Describe the three-way handshaking process.


2. What role do TCP sequence numbers play in the data flow process?
3. Why is there a need for a sliding window protocol?

Chapter 10: INTRODUCTION TO IP MULTICAST

10.1 Introduction
IP Multicast is the transmission of data to a selected group of hosts in a network.
Broadcast is the transmission of data to ALL the hosts in a network. Multicast is a more
efficient way to deliver data, since hosts that have not joined the group do not need to
process or manipulate unwanted data.

How was IP Multicast developed?


Steve Deering was a doctoral student at Stanford University in the early 1980s. He was
working on a project where multiple computers connected on an Ethernet segment had to
communicate using special messages sent across the wire. He studied routing protocols
such as OSPF and RIP to see if link-state mechanisms as well as distance vector
algorithms could be extended to support multicasting.

His thesis http://www.cs.duke.edu/~vahdat/ps/deering.pdf described how hosts signaled
to their local routers that they wanted to join a multicast group. He referred to this as the
Host Membership Protocol. This was the basis for what is now known as the Internet
Group Management Protocol (IGMP), and his thesis became the groundwork for what is
now known as IP Multicast.

To learn more about the multicast routing that grew out of Deering's work, you can visit
http://www.ietf.org/rfc/rfc1075 .

Unicast Traffic
In order to understand why multicast works better in some environments, it is important
to discuss the difference between unicast traffic and multicast traffic. With unicast
traffic, an application sends one copy of each packet to every client's unicast address.
This creates restrictions in scaling the network: if a large set of hosts requests the same
information, that information has to be carried multiple times, even on shared links. If
the request is for a video feed, every client that needs the feed will request it directly
from the server, which then must provide n video feeds to the n hosts requesting that
file. As a result, saturation issues can occur due to multiple requests for the same
information, and the server can crash because the network is over-utilized.

Multicast Traffic
IP multicasting is the transmission of an IP data frame to a host group. Multicast
addresses may be dynamically or statically allocated. Dynamic multicast addressing
provides applications with a group address on demand. Statically allocated addresses are
reserved for specific protocols that require well-known ports. This is similar in concept to
the well-known TCP and UDP ports.

Addresses from 224.0.0.0 through 224.0.0.255 are reserved for local purposes, such as
administrative and maintenance tasks, and traffic sent to them has a TTL of only 1.

Addresses ranging from 239.0.0.0 to 239.255.255.255 are reserved for administrative
scoping. Scoping allows network administrators to keep multicast traffic from leaking
across regions or other domains that might not have enough bandwidth to support the
traffic. Routers can deny multicast traffic in a particular address range from entering or
leaving a zone. This technique prevents high-bandwidth streams from entering zones
defined for lower stream rates. This address range is considered private, very similar to
the 10.x.x.x, 172.16.x.x through 172.31.x.x, and 192.168.0.x through 192.168.255.x
private unicast ranges.
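These address ranges can be checked with Python's standard ipaddress module; a small sketch classifying a few example groups:

```python
import ipaddress

def scope(addr):
    """Classify an address against the multicast ranges described above."""
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:
        return "not multicast"
    if ip in ipaddress.ip_network("224.0.0.0/24"):
        return "link-local (TTL 1)"
    if ip in ipaddress.ip_network("239.0.0.0/8"):
        return "administratively scoped"
    return "globally scoped"

print(scope("224.0.0.1"))   # link-local (TTL 1)
print(scope("239.1.2.3"))   # administratively scoped
print(scope("230.1.1.1"))   # globally scoped
print(scope("10.1.1.1"))    # not multicast
```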

10.1.1 Multicast Addresses

Ethernet frames have a 48-bit destination address field. IANA (http://www.iana.org/)


designated a range of Ethernet addresses for multicast to be 0100.5e00.0000 through
0100.5e7f.ffff.

01-00-5E identifies the frame as multicast; the next bit is always 0, leaving only 23 bits
for the multicast addresses. Because IP multicast groups are 28 bits long, the mapping
cannot be one-to-one. Only the 23 least-significant bits of the IP multicast group are
placed in the frame. The remaining five high order bits are ignored, resulting in 32
different multicast groups being mapped to the same Ethernet address. Since 32
multicast addresses can be mapped to a single Ethernet address, the address is not unique.

When a host wants to receive multicast group 224.1.1.1, it programs the hardware
registers in the network interface card to interrupt the CPU when a frame with a
destination multicast address of 0x0100.5e01.0101 is received. Unfortunately, this same
multicast MAC address is also used for 31 other IP multicast groups. If any of these 31
groups is also active on the local LAN, the host's CPU will receive an interrupt any time
a frame arrives for any of these other groups, and it will have to examine the IP portion
of each received frame to determine whether it belongs to the desired group.
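The 23-bit mapping can be reproduced in a few lines of Python, confirming that several different groups share one MAC address:

```python
import ipaddress

def multicast_mac(group):
    """Map an IP multicast group to its Ethernet multicast MAC address."""
    # Keep only the 23 low-order bits of the group address and place
    # them behind the fixed 01-00-5e multicast prefix.
    low23 = int(ipaddress.ip_address(group)) & 0x7FFFFF
    mac = 0x01005E000000 | low23
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))

print(multicast_mac("224.1.1.1"))     # 01:00:5e:01:01:01
# 32 groups share each MAC: 224.1.1.1, 224.129.1.1, 225.1.1.1, ...
print(multicast_mac("225.1.1.1"))     # 01:00:5e:01:01:01
print(multicast_mac("224.129.1.1"))   # 01:00:5e:01:01:01
```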

Advantages of IP Multicasting

1. One IP address defines a specific group, which allows any host capable of joining that
group to participate in the exchanging of data.
2. Hosts can join multiple groups.
3. Servers do not need to provide multiple feeds for every host on the network as in
Unicast. Only one feed is necessary to let everyone interested in that group join.
4. Bandwidth utilization is reduced.

Disadvantages of IP Multicasting

1. It uses unreliable delivery mechanisms, such as UDP.


2. No connection state is created, which allows for packet loss. (State here refers to the
kind of state created when TCP sessions are established.)
3. There is packet duplication: sometimes multiple copies arrive at a receiver until the
multicast routing protocol converges and eliminates the redundant path.

4. Network congestion is a problem, since there is no established connection between
source and destination and no built-in congestion avoidance mechanism to prevent a
multicast stream from over-utilizing a link's bandwidth.

Subscribing and Maintaining Groups


Internet Group Management Protocol (IGMP) http://www.ietf.org/rfc/rfc1112 is the
protocol used to join specific groups. It is used to establish host membership in a
particular multicast group. A host lets the local router know that it wants to join a
specific group by means of a Host Membership Report. The local router periodically
sends Membership Queries and listens to the Reports, which lets it track which multicast
groups have members on each of its interfaces.

The process works in this manner:

Router A (the IGMP querier) periodically (default is 60 seconds) multicasts an
IGMPv1 Membership Query to the all-hosts multicast group (224.0.0.1) on the local
subnet. All hosts that have multicast enabled must listen to this group so that these
queries can be received.

All hosts receive the IGMPv1 Membership Query, and one host responds first by
multicasting an IGMPv1 Membership report to the multicast group 224.1.1.1 of
which the host is a member. This report informs the routers on the subnet that a host
is interested in receiving multicast traffic for group 224.1.1.1.

Since other hosts are listening to multicast group 224.1.1.1, they hear the IGMPv1
Membership Report that was multicast by the first host. Other hosts, therefore,
suppress the sending of their report for that group. This mechanism helps reduce the
amount of traffic on the local network.
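The report-suppression behavior just described can be sketched as a toy simulation: each member host schedules its report after a random delay, and whoever fires first reports for the whole group while the rest cancel on hearing it. Host names and the delay range are illustrative, not taken from the standard.

```python
import random

def respond_to_query(members, seed=None):
    """Simulate IGMPv1 report suppression for one Membership Query.

    Each member picks a random delay; the host whose timer expires first
    sends the report, and every other member suppresses its own.
    """
    rng = random.Random(seed)
    delays = {host: rng.uniform(0, 10) for host in members}   # seconds
    reporter = min(delays, key=delays.get)                    # first timer to fire
    suppressed = [h for h in members if h != reporter]
    return reporter, suppressed

reporter, suppressed = respond_to_query(["H1", "H2", "H3"], seed=1)
print(reporter, suppressed)   # exactly one report is sent; the others stay quiet
```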

IGMPv1 has no way for a host to announce that it is leaving a group, so the router relies
on timers that start when there is no activity for a specific group. If no Membership
Reports arrive from any host for a group, the router assumes that no group members are
present on that interface and removes the group after a specific amount of time.

IGMPv2 improves the mechanism by providing a Leave Message and reducing the
amount of time the router needs to wait to remove the non-active multicast group from its
cache.
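For illustration, an IGMPv2 message is only eight bytes: a type, a maximum response time, an Internet checksum, and the group address. The sketch below builds a Membership Report (type 0x16) for a hypothetical group; the field layout follows RFC 2236, and the checksum is the standard Internet checksum of RFC 1071.

```python
import socket
import struct

def internet_checksum(data: bytes) -> int:
    """Internet checksum (RFC 1071): one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)   # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_igmpv2_report(group: str) -> bytes:
    """Build an 8-byte IGMPv2 Membership Report (type 0x16) for a group."""
    gaddr = socket.inet_aton(group)
    header = struct.pack("!BBH4s", 0x16, 0, 0, gaddr)   # checksum field zeroed
    csum = internet_checksum(header)
    return struct.pack("!BBH4s", 0x16, 0, csum, gaddr)

pkt = build_igmpv2_report("224.2.2.2")
assert internet_checksum(pkt) == 0   # a valid IGMP checksum verifies to zero
```

A Leave Group message uses the same layout with type 0x17, which is exactly the addition IGMPv2 made over IGMPv1.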

Fig 10.1 IGMP v1 Example: joining a group. The multicast router RA sends a General
Query to 224.0.0.1 to learn which hosts are interested in which groups. H3 answers with
an IGMP Report to join group 224.2.2.2; H2, which wants the same group, hears H3's
report and suppresses its own.

10.1.2 Multicast Distribution Trees

Designated routers construct a tree that connects all members of an IP multicast group. A
distribution tree specifies a unique forwarding path between the subnet of the source and
each subnet containing members of the multicast group. A distribution tree is a loop-free
path created between every source and receiver. Since multicast groups are dynamic,
with members joining or leaving a group at any time, the distribution tree must be
dynamically updated, which means that branches have to be grafted or pruned
accordingly. Grafting means that a branch is added to the multicast tree; this happens
when a receiver requests a join and the tree has not yet reached the receiver's first-hop
router. Pruning means that a branch is no longer needed: in simple terms, the receivers
on that branch no longer need multicast traffic or have left the tree. There are two basic
types of trees: Source Trees and Shared Trees.
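The grafting and pruning just described can be sketched as operations on a small adjacency structure. The router names are illustrative, and a real implementation would of course key branches by group as well.

```python
# Toy distribution tree: parent router -> set of downstream branches.
tree = {"RA": {"RB"}, "RB": {"RE"}}

def graft(parent, child):
    """Add a branch toward a newly joined receiver."""
    tree.setdefault(parent, set()).add(child)

def prune(parent, child):
    """Remove a branch whose receivers have left; drop empty parents."""
    tree.get(parent, set()).discard(child)
    if not tree.get(parent):
        tree.pop(parent, None)

graft("RB", "RF")          # a receiver behind RF joined the group
prune("RB", "RE")          # the receivers behind RE left
print(sorted(tree["RB"]))  # ['RF']
```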

Source Trees
A source tree is the simplest form of multicast distribution tree: the root of the tree is the
actual source of the multicast traffic, and a spanning tree (a loop-free path through the
network) is built for each potential source. This kind of distribution tree is also known as
a Shortest Path Tree (SPT). There is a special notation to keep in mind when checking
multicast routes on a router: (S1, G1) identifies an SPT where S1 is the IP address of the
source and G1 is the multicast group address. A separate SPT exists for every multicast
source on the network, which increases CPU processing on a multicast router. If there
are 1000 sources sending to multicast groups over source trees, there will be 1000 (S, G)
entries cached in the multicast routing table, and the more routing entries in the table, the
more processing the router needs to do to direct the data stream.

Fig 10.2 Source Trees (Figs 2.a–2.c): source S1 at 10.1.1.1 is the root of the tree; traffic
to group 224.2.2.2 follows the shortest path through routers RA–RF to the receivers at
20.1.1.1 and 30.1.1.1.
Shared Trees
Shared trees use a single common root placed at some chosen point in the network, called
a Rendezvous Point, or RP. The RP is the central point in the network where all multicast
traffic using shared trees must meet: every source in the multicast network sends its
traffic to the root so that the traffic can reach all receivers. The RP thus maintains one
shared path serving many sources instead of installing an individual path per source as
source trees do. The special notation here is (*, G1), which identifies a common shared
tree (*) for multicast group G1. Since there is one common tree per multicast group,
there is less CPU processing on a multicast router: even if there are 1000 hosts in group
G1, there will be only one entry in the multicast table.
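The difference in routing-table state between the two tree types can be shown with a small sketch: (S, G) entries multiply with the number of sources, while a single (*, G) entry serves them all. The addresses are illustrative.

```python
# Sketch: multicast routing-table growth for source trees vs. shared trees.
sources = [f"10.1.1.{i}" for i in range(1, 6)]   # five hypothetical senders
group = "224.2.2.2"

# Source trees (SPT): one routing entry per (source, group) pair.
spt_table = {(s, group): "via shortest path from source" for s in sources}

# Shared tree: a single (*, G) entry toward the RP serves every source.
shared_table = {("*", group): "via RP"}

print(len(spt_table))     # 5 -- grows with the number of sources
print(len(shared_table))  # 1 -- constant per group
```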

Fig 10.3 Shared Trees (Figs 3.a–3.c): sources S1 (10.1.1.1) and S2 send their traffic for
group 224.2.2.2 to the shared root (RP); the single shared tree then delivers it through
routers RA–RF to the receivers at 20.1.1.1 and 30.1.1.1.
10.1.3 Multicast Routing Protocols

Multicast routing protocols are responsible for constructing multicast delivery trees and
enabling multicast packet forwarding. Different IP multicast routing protocols use
different techniques for constructing multicast spanning trees and forwarding packets.

These routing protocols fall into two groups:

Routing protocols that flood the entire network regardless of the hosts' needs
are referred to as Dense mode protocols
Routing protocols that sparsely distribute multicast information throughout the
network are referred to as Sparse mode protocols

Dense Mode Routing Protocols

There are many types of dense mode routing protocols available such as the following:

DVMRP
The Distance Vector Multicast Routing Protocol (DVMRP)
http://www.ietf.org/rfc/rfc1075 is widely used on the MBone (Multicast Backbone)
http://www-mice.cs.ucl.ac.uk/multimedia/projects/mice/mbone_review.html, which is
used regularly to provide multicast connectivity across the Internet; multiple entities can
exchange videoconferencing, training, and distance-learning material at very high
speeds. DVMRP uses reverse path flooding: when a router receives a packet, it floods
the packet out all paths except the one that leads back to the packet's source. DVMRP
periodically floods packets in order to reach any new hosts that want to receive a
particular group. To determine which interface leads back to the source of the data
stream, DVMRP implements its own unicast routing protocol, which is similar to RIP
and is based on hop count.
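The reverse-path check that this flooding relies on can be sketched in a few lines: a router forwards a multicast packet only if it arrived on the interface the unicast table would use to reach the packet's source, and then floods it out every other interface. Prefixes and interface names are illustrative.

```python
# Sketch of a reverse-path forwarding (RPF) check, DVMRP-style.
unicast_routes = {            # source prefix -> upstream (RPF) interface
    "10.1.1.0/24": "eth0",
    "20.1.1.0/24": "eth1",
}

def rpf_check(source_prefix: str, arrival_iface: str) -> bool:
    """Pass only if the packet came in on the interface leading to its source."""
    return unicast_routes.get(source_prefix) == arrival_iface

def flood(source_prefix: str, arrival_iface: str, all_ifaces):
    """Flood out every interface except the one leading back to the source."""
    if not rpf_check(source_prefix, arrival_iface):
        return []   # fails RPF: drop it, since it is a looped or duplicate copy
    return [i for i in all_ifaces if i != arrival_iface]

print(flood("10.1.1.0/24", "eth0", ["eth0", "eth1", "eth2"]))  # ['eth1', 'eth2']
print(flood("10.1.1.0/24", "eth1", ["eth0", "eth1", "eth2"]))  # []
```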

MOSPF
Multicast Open Shortest Path First (MOSPF) http://www.ietf.org/rfc/rfc1585 is intended
for use within a single routing domain, such as a network controlled by a single
organization. It depends on the OSPF unicast routing protocol: each router
maintains an up-to-date image of the entire topology and includes multicast information
in the link state advertisements. MOSPF is best suited for environments that have
relatively few (source, group) pairs active at any given time. Cisco routers do not support
MOSPF.

PIM-DM
Protocol Independent Multicast Dense Mode (PIM-DM) http://netweb.usc.edu/pim/internet-
drafts/draft-ietf-pim-dm-new-v2-01.txt is similar to DVMRP. This protocol works best
when there are numerous members belonging to each multicast group. PIM floods the
multicast packets out to all routers in the network and then prunes the branches that do
not lead to members of that particular multicast group.

It is most useful when:
senders and receivers are in close proximity to one another
there are few senders and many receivers
the volume of multicast traffic is high
the stream of multicast traffic is constant
Note, however, that PIM-DM creates a lot of state on the routers, since it caches many
(S, G) entries.

Sparse Mode Routing Protocols

PIM-SM
Protocol Independent Multicast Sparse Mode (PIM-SM) http://www.ietf.org/rfc/rfc2362
is based on the assumption that multicast group members are sparsely distributed
throughout the network and that bandwidth is not necessarily widely available. Sparse
mode is optimized for environments where there are many multipoint data streams.

It is most useful when:
there are few receivers in a group
the type of traffic is intermittent

Instead of flooding the network to determine the status of multicast members, PIM sparse
mode defines a rendezvous point, or RP. The RP is configured with multicast addresses
specific to different applications. The RP then announces to the entire multicast network
that it is a candidate RP for some given multicast addresses, and it also discovers other
candidate RPs in the network, which advertise their own multicast groups. Cisco
implements a feature called Auto-RP, a dynamic way for a router to announce itself as a
candidate RP and to discover other group-to-RP mappings around the topology from
other RPs.

Auto-RP uses dense mode to flood every Cisco router running IP multicast, using the
Cisco-registered multicast addresses 224.0.1.39 for announcements and 224.0.1.40 for
discoveries. If a router wants to learn about possible candidate RPs, it joins group
224.0.1.39; if it wants to learn all the active group-to-RP mappings in the network, it
joins group 224.0.1.40. Multicast addresses advertised by RPs represent an application,
a video feed, a market data feed, etc. When a sender wants to send data, it first sends to
the RP, which is, once again, the root of the tree: the sender's local router, called the
first-hop router, registers with the RP using a unicast packet that encapsulates the
multicast data. The RP receives the registration packet and strips away the unicast
information, leaving the multicast portion intact. When a receiver wants to receive data,
it signals its local router that it wants to join a specific group advertised by one of the
RPs in the network.

The RP then starts to create a tree down towards that receiver. Once the receiver's
first-hop router starts receiving multicast traffic for the group the receiver signaled to
join, the registration process stops and a tree is now built between receiver and RP.
Once the data stream begins to flow from sender to RP to receiver, the routers in the path
automatically optimize the path to remove any unnecessary hops. PIM sparse mode
assumes that no hosts want the multicast traffic unless they specifically ask for it.
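The register step described above can be sketched as a simple wrap-and-unwrap: the router near the sender unicast-encapsulates the multicast packet toward the RP, which strips the wrapper and leaves the multicast portion intact. The RP address and packet structures here are purely illustrative.

```python
RP_ADDR = "192.0.2.1"   # hypothetical rendezvous point address

def register(mcast_packet: dict) -> dict:
    """Router near the sender: unicast-encapsulate the multicast packet
    inside a Register message addressed to the RP."""
    return {"unicast_dst": RP_ADDR, "type": "PIM-Register", "inner": mcast_packet}

def rp_decapsulate(register_msg: dict) -> dict:
    """RP: strip the unicast wrapper, leaving the multicast packet intact."""
    return register_msg["inner"]

original = {"src": "10.1.1.1", "group": "224.2.2.2", "data": b"feed"}
assert rp_decapsulate(register(original)) == original   # payload survives intact
```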

In Sparse mode, a router assumes that other routers do not want to forward multicast
packets for a group unless there is an explicit request for the traffic. Rendezvous points
(RPs) are used by senders to a multicast group to announce their existence and by
receivers of multicast packets to learn about new senders. The RP keeps track of
multicast groups. Hosts that send multicast packets are registered with the RP. The RP
sends join messages toward the source. Packets are then forwarded on a shared
distribution tree. When no RP is known, the packet is flooded in a dense mode fashion.
Sparse mode interfaces are added to the table only when periodic join messages are
received from downstream routers, or when there is a directly connected member on the
interface.

10.1.4 Conclusion

This section was meant to introduce the idea of multicasting on a network and how a
network benefits from having this protocol deployed. A very important topic was
described, summarizing the differences between source and shared trees; understanding
these two tree-building processes is crucial to understanding multicast.

Glossary
ABR: Available Bit Rate A type 3 or 4 Asynchronous Transfer Mode Adaptation Layer
(AAL) service designed for non-time-critical applications such as LAN emulation and
LAN internetworking.

Address Resolution Protocol: ARP A protocol within TCP/IP (Transmission Control
Protocol/Internet Protocol) and AppleTalk networks that allows a host to find the
physical address of a node on the same network when it knows only the target's logical
IP address.

Analog: Describes any device that represents changing values by a continuously variable
physical property, such as voltage in a circuit. Analog often refers to transmission
methods developed to transmit voice signals rather than high-speed digital signals.

Application layer: The seventh, or highest, layer in the OSI Reference Model for
computer-to-computer communications. This layer uses services provided by the lower
layers but is completely insulated from the details of the network hardware. It describes
how applications interact with the network operating system, including database
management, electronic mail, and terminal emulation programs.

ASIC: Application-specific integrated circuit A computer chip developed for a specific
purpose, designed by incorporating standard cells from a library rather than created from
scratch. Also known as gate arrays, ASICs are found in all sorts of appliances, including
modems, security systems, digital cameras, and even microwave ovens and automobiles.

Asynchronous Transmission: A method of data transmission that uses start bits and stop
bits to coordinate the flow of data so that the time intervals between individual characters
do not need to be equal. Parity also may be used to check the accuracy of the data
received.

Autonomous System: On the Internet, an autonomous system (AS) is the unit of router
policy, either a single network or a group of networks that is controlled by a common
network administrator (or group of administrators) on behalf of a single administrative
entity (such as a university, a business enterprise, or a business division). An autonomous
system is also sometimes referred to as a routing domain. An autonomous system is
assigned a globally unique number, sometimes called an Autonomous System Number
(ASN).

Bandwidth: The transmission capacity of a computer or a communications channel,
stated in megabits per second (Mbps).

Baseband network: A technique for transmitting signals as direct-current pulses rather
than as modulated signals. The entire bandwidth of the transmission medium is used by a
single digital signal, so computers in a baseband network can transmit only when the
channel is not busy. However, the network can use techniques such as multiplexing to
allow channel sharing. A baseband network can operate over relatively short distances
(up to 2 miles if network traffic is light) at speeds from 50Kbps to 100Mbps. Ethernet,
AppleTalk, and most PC local-area networks (LANs) use baseband techniques.

BGP: Border Gateway Protocol A routing protocol designed to replace EGP (Exterior
Gateway Protocol) and interconnect organizational networks. BGP, unlike EGP,
evaluates each of the possible routes for the best one.

Bit: A bit (short for binary digit) is the smallest unit of data in a computer. A bit has a
single value, either 0 or 1.

BPDU: Acronym for bridge protocol data unit. BPDUs are data messages that are
exchanged across the switches within an extended LAN that uses a spanning tree protocol
topology. BPDU packets contain information on ports, addresses, priorities and costs and
ensure that the data ends up where it was intended to go. BPDU messages are exchanged
across bridges to detect loops in a network topology. The loops are then removed by
shutting down selected bridge interfaces and placing redundant switch ports in a backup,
or blocked, state.

Bridge: It is a device that connects a local area network (LAN) with another local area
network that uses the same protocol (for example Ethernet or token ring). Bridges learn
which addresses are on which network and develop a learning table so that subsequent
messages can be forwarded to the right network.

Broadband network: A network in which a wide band of frequencies is available to
transmit information. Because a wide band of frequencies is available, information can be
multiplexed and sent on many different frequencies or channels within the band
concurrently, allowing more information to be transmitted in a given amount of time.

Broadcast: To send a message to all users currently logged in to the network.

Buffer: A buffer is a data area shared by hardware devices or program processes that
operate at different speeds or with different sets of priorities. The buffer allows each
device or process to operate without being held up by the other. In order for a buffer to be
effective, the size of the buffer and algorithms for moving data into and out of the buffer
need to be considered by the buffer designer. Like a cache, a buffer is a midpoint holding
place, but it exists not so much to accelerate the speed of an activity as to support the
coordination of separate activities.

Bus: A bus is a transmission path on which signals are dropped off or picked up at every
device attached to that line. Only devices addressed by the signals pay attention to them;
the others discard the signals.

CIDR: Classless Inter-Domain Routing, also known as supernetting, is a way to
allocate and specify the Internet addresses used in inter-domain routing more flexibly
than with the original system of Internet Protocol address classes. As a result, the number
of available Internet addresses has been greatly increased. CIDR is now the routing
system used by virtually all gateway hosts on the Internet's backbone network. The
Internet's regulating authorities now expect every Internet service provider (ISP) to use it
for routing.

Circuit Switching: Circuit switching is a process that establishes connections on demand
and permits exclusive use of those connections until they are released.

Connectionless: Communication between two network end points in which a message
can be sent from one end point to another without prior arrangement. The device at one
end of the communication transmits data to the other, without first ensuring that the
recipient is available and ready to receive the data. The device sending a message simply
sends it addressed to the intended recipient. If there are problems with the transmission, it
may be necessary to resend the data several times.

Connection-Oriented: Communication between two network end points in which the
devices use a preliminary protocol to set up an end-to-end connection before any data
can be sent. Connection-oriented protocol service is sometimes called a reliable
network service, because it guarantees that data will arrive in the proper sequence.

Contention: A type of network protocol that allows nodes to contend for network access.
That is, two or more nodes may try to send messages across the network simultaneously.
The contention protocol defines what happens when this occurs. The most widely used
contention protocol is CSMA/CD, used by Ethernet.

Convergence: The time it takes for the routers in a network to update their routing tables
and reach a consistent view of the topology after a change.

Counting to Infinity: In distance vector routing (such as RIP), a condition in which
destination networks that have become unreachable are advertised with ever-increasing
distance until that distance reaches infinity. To bound this process, RIP defines infinity
as 16.

CPU: Central Processing Unit is an older term for processor and microprocessor, the
central unit in a computer containing the logic circuitry that performs the instructions of a
computer's programs.

CSMA/CD: Carrier Sense Multiple Access/Collision Detect is the protocol for carrier
transmission access in Ethernet networks. On Ethernet, any device can try to send a frame
at any time. Each device senses whether the line is idle and therefore available to be used.
If it is, the device begins to transmit its first frame. If another device has tried to send at
the same time, a collision is said to occur and the frames are discarded. Each device then
waits a random amount of time and retries until successful in getting its transmission
sent.

CSU/DSU: Channel Service Unit/Data Service Unit is a hardware device about the size
of an external modem that converts a digital data frame from the communications
technology used on a local area network (LAN) into a frame appropriate to a wide-area
network (WAN) and vice versa.

Dark Fiber: Optical fiber infrastructure (cabling and repeaters) that is currently in place
but is not being used. Dark fiber service is service provided by local exchange carriers
(LECs) for the maintenance of optical fiber transmission capacity between customer
locations in which the light for the fiber is provided by the customer rather than the LEC.

Data link layer: The second of seven layers of the OSI Reference Model for computer-
to-computer communications. The data-link layer validates the integrity of the flow of
data from one node to another by synchronizing blocks of data and controlling the flow
of data. The Institute of Electrical and Electronics Engineers (IEEE) has divided the
data-link layer into two sublayers: the logical link control (LLC) layer, which sits above
the media access control (MAC) layer.

Datagram: A self-contained, independent entity of data carrying sufficient information
to be routed from the source to the destination computer without reliance on earlier
exchanges between the source and destination computers and the transporting network.

Datagram Packet switching: Refers to protocols in which messages are divided into
packets before they are sent. Each packet is then transmitted individually and can even
follow different routes to its destination. Once all the packets forming a message arrive at
the destination, they are recompiled into the original message.

Debug: To find and remove errors (bugs) from a program or design.

Diffusing Update Algorithm: This is a method of finding loop-free paths through a
network, proposed by J.J. Garcia-Luna. The concept behind this algorithm is that it is
mathematically possible to determine whether a route is loop-free, based on the
information provided in standard distance vector routing. DUAL introduces the notion
of a feasible successor: a neighboring router, used to forward packets, that offers a
least-cost path to a destination and is guaranteed not to be part of a routing loop.

Digital: Electronic technology that generates, stores, and processes data in terms of two
states, positive and non-positive. Positive is expressed or represented by the number 1 and
non-positive by the number 0. Thus, data transmitted or stored with digital technology is
expressed as a string of 0s and 1s. Each of these state digits is referred to as a bit.

Distance Vector Algorithm: The distance vector routing algorithm was devised by
Bellman in 1957 and later by Ford and Fulkerson in 1962. Routers that use the same
distance vector routing protocol can exchange routing updates only if they are separated
by a single physical network. Routing updates using distance vector algorithms are
periodic, and every route entry is sent during the update period. Less CPU processing is
required because routing updates occur only between peers; however, convergence time
is much longer.

DVMRP: Distance Vector Multicast Routing Protocol is the oldest routing protocol that
has been used to support multicast data transmission over networks. The protocol sends
multicast data in the form of Unicast packets that are reassembled into multicast data at
the destination.

EGP: Exterior Gateway Protocol is a protocol for exchanging routing information
between two neighbor gateway hosts (routers) in a network of autonomous systems. EGP
is commonly used between hosts on the Internet to exchange routing table information.

EIGRP: Enhanced Interior Gateway Routing Protocol is a network protocol that lets
routers exchange information more efficiently than the earlier network protocols. EIGRP
evolved from IGRP (Interior Gateway Routing Protocol) and routers using either EIGRP
or IGRP can interoperate because the metric used with one protocol can be translated into
the metrics of the other protocol.

Encapsulation: Encapsulation is the inclusion of one data structure within another
structure so that the first data structure is hidden for the time being.

Encryption: Encryption is the conversion of data into a form, called ciphertext, that
cannot be easily understood by unauthorized people.

Error Correction: In communications, errors are handled by first detecting garbled
messages and then discarding or retransmitting them. Two of the simplest and most
common detection techniques are called checksum and CRC.

Ethernet: Specified in a standard, IEEE 802.3, Ethernet was originally developed by
Xerox and then developed further by Xerox, DEC and Intel. An Ethernet LAN typically
uses coaxial cable or special grades of twisted pair wires. The most commonly installed
Ethernet systems are called 10Base-T and provide transmission speeds up to 10 Mbps.
Devices are connected to the cable and compete for access using the CSMA/CD protocol.

Extranet: An extranet is a private network that uses the Internet Protocol and the public
telecommunication system to securely share part of a business's information or
operations with suppliers, vendors, partners, customers, or other businesses.

File Transfer Protocol: Abbreviated FTP. The TCP/IP Internet protocol used when
transferring single or multiple files from one computer system to another.

FTP uses a client/server model, in which a small client program runs on your computer
and accesses a larger FTP server running on an Internet host. FTP provides all the tools
needed to look at directories and files, change to other directories, and transfer text and
binary files from one system to another.

Firewall: A barrier established in hardware or in software, or sometimes in both, that
monitors and controls the flow of traffic between two networks, usually a private LAN
and the Internet. A firewall provides a single point of entry where security can be
concentrated. It allows access to the Internet from within the organization and provides
tightly controlled access from the Internet to resources on the organization's internal
network.

Flow Control: In communications, control of the rate at which information is exchanged
between two computers over a transmission channel. Flow control is needed when one of
the devices cannot receive the information at the same rate as it can be sent, usually
because some processing is required on the receiving end before the next transmission
unit can be accepted. Flow control can be implemented either in hardware or in software.

Frame: A block of data suitable for transmission as a single unit; also referred to as a
packet or a block. Some media can support multiple frame formats.

Frame Relay: A CCITT standard for a packet-switching protocol, running at speeds of
up to 2Mbps, that also provides for bandwidth on demand. Frame relay is less robust than
X.25 but provides better efficiency and higher throughput.

Frequency Division Multiplexing: Abbreviated FDM. A method of sharing a
transmission channel by dividing the bandwidth into several parallel paths, defined and
separated by guard bands of different frequencies designed to minimize interference. All
signals are carried simultaneously. FDM is used in analog transmissions, such as in
communications over a telephone line.

Full-duplex: Abbreviated FDX. The capability for simultaneous transmission in two
directions so that devices can be sending and receiving data at the same time.

Gateway: A shared connection between a LAN and a larger system, such as a
mainframe computer or a large packet-switching network, whose communications
protocols are different. Usually slower than a bridge or router, a gateway is a combination
of hardware and software with its own processor and memory used to perform protocol
conversions.

Half-duplex: Abbreviated HDX. The ability to transmit on the same channel in two
directions, but in only one direction at a time.

Handshaking: The exchange of control codes or particular characters to maintain and
coordinate data flow between two devices so that data is only transmitted when the
receiving device is ready to accept the data. Handshaking can be implemented in either
hardware or software, and it occurs between a variety of devices. For example, the data
flow might be from one computer to another computer or from a computer to a peripheral
device, such as a modem or a printer.

Hardware Address: The address assigned to a network interface card (NIC) by the
original manufacturer or by the network administrator if the interface card is
configurable. This address identifies the local device address to the rest of the network
and allows messages to find the correct destination. Also known as the physical address,
media access control (MAC) address, or Ethernet address.

High-level Data Link Control: Abbreviated HDLC. An international protocol defined by
the ISO (International Organization for Standardization), included in CCITT X.25 packet-
switching networks. HDLC is a bit-oriented, synchronous protocol that provides error
correction at the data-link layer. In HDLC, messages are transmitted in variable-length
units known as frames.

Header: In a data transmission, the header may contain source and destination address
information, as well as other control data.

Hexadecimal System: Abbreviated hex. The base-16 numbering system that uses the
digits 0 through 9, followed by the letters A through F, which are equivalent to the
decimal numbers 10 through 15. Hex is a convenient way to represent the binary numbers
that computers use internally because it fits neatly into the 8-bit byte. Any of the 16 hex
digits 0 through F can be represented in 4 bits, so 2 hex digits (1 digit for each set of 4
bits) can be stored in a single byte. This means that 1 byte can contain any one of 256
different hex values, from 0 through FF.

Hot Standby Router Protocol: Abbreviated HSRP. A proprietary protocol from Cisco
that provides backup to a router in the event of failure. Using HSRP, several
routers are connected to the same segment of an Ethernet, FDDI or token-ring network
and work together to present the appearance of a single virtual router on the LAN. The
routers share the same IP and MAC addresses, therefore in the event of failure of one
router, the hosts on the LAN are able to continue forwarding packets to a consistent IP
and MAC address. The process of transferring the routing responsibilities from one
device to another is transparent to the user.

Hypertext Transfer Protocol: Abbreviated HTTP. The command and control protocol used to
manage communications between a Web browser and a Web server. When you access a
Web page, you see a mixture of text, graphics, and links to other documents or other
Internet resources. HTTP is the mechanism that opens the related document when you
select a link, no matter where that document is located.

Hub: A device used to extend a network so that additional workstations can be attached.
There are two main types of hubs:

Active hubs amplify transmission signals to extend cable length and ports.
Passive hubs split the transmission signal, allowing additional workstations to be
added, usually at a loss of distance.

In some star networks, a hub is the central controlling device.

Internet Group Management Protocol: Abbreviated IGMP. An Internet protocol used in
multicasting. IGMP allows hosts to add or remove themselves from a multicast group. A
multicast group is a collection of computers receiving packets from a host that is
transmitting multicast packets with IP Class D addresses. Group members can join the
group and leave the group; when there are no more members, the group simply ceases to
exist.

Interior Gateway Protocol: Abbreviated IGP. The protocol used on the Internet to
exchange routing information between routers within the same domain.

Interior Gateway Routing Protocol: Abbreviated IGRP. A distance-vector routing
protocol from Cisco Systems for use in large heterogeneous networks.

Internet: The world's largest computer network, consisting of millions of computers
supporting tens of millions of users in hundreds of countries. The Internet is growing at
such a phenomenal rate that any size estimates are quickly out of date.

Internet Protocol: Abbreviated IP, IP version 4, and IPv4. The network-layer protocol
that regulates packet forwarding by tracking addresses, routing outgoing messages, and
recognizing incoming messages in TCP/IP networks and the Internet.

Integrated Services Digital Network: Abbreviated ISDN. A standard for a worldwide
digital communications network originally designed to replace all current systems with a
completely digital, synchronous, full-duplex transmission system. Computers and other
devices connect to ISDN via simple, standardized interfaces. They can transmit voice,
video, and data, all on the same line.

Internet Service Provider: Abbreviated ISP. A company that provides commercial or
residential customers access to the Internet via dedicated or dial-up connections. An ISP
will normally have several servers and a high-speed connection to an Internet backbone.
Some ISPs also offer Web site hosting services and free e-mail to their subscribers.

IP Multicast: An Internet standard that allows a single host to distribute data to multiple
recipients. IP multicasting can deliver audio and video content in real time so that the
person using the system can interact with the data stream. A multicast group is created,
and every member of the group receives every datagram. Membership is dynamic; when
you join a group, you start to receive the data stream, and when you leave the group, you
no longer receive the data stream.

IP Switching: A switch, developed by Ipsilon Networks, that combines intelligent
Internet Protocol (IP) routing with high-speed Asynchronous Transfer Mode (ATM)
switching hardware. The IP protocol stack is implemented on ATM hardware, allowing
the system to adapt dynamically to the flow requirements of the network traffic as
defined in the packet header. A technique that uses network-layer protocols which
provide routing services to add capabilities to layer 2 switching. IP switching locates
paths in a network by using routing protocols and then forwards packets along that route
at layer 2. IP switching is designed for networks that use switches rather than networks
built around repeater hubs and routers.

Internetwork Packet eXchange: Abbreviated IPX. Part of Novell NetWare's native
protocol stack, used to transfer data between the server and workstations on the network.
IPX packets are encapsulated and carried by the packets used in Ethernet and the frames
used in Token Ring networks. IPX packets consist of a 30-byte header which includes the
network, node, and socket addresses for the source and the destination, followed by the
data area, which can be from 30 bytes (only the header) to 65,535 bytes in length. Most
networks impose a more realistic maximum packet size of about 1500 bytes.

Intranet: A private corporate network that uses Internet software and TCP/IP networking
protocol standards. Many companies use intranets for tasks as simple as distributing a
company newsletter and for tasks as complex as posting and updating technical support
bulletins to service personnel worldwide. An intranet does not always include a
permanent connection to the Internet.

Link State Algorithm: A routing algorithm in which each router broadcasts information
about the state of the links to all other nodes on the internetwork. This algorithm reduces
routing loops but has greater memory requirements than the distance vector algorithm.

Logical Link Control: Abbreviated LLC. The upper component of the data-link layer
that provides data repackaging functions for operations between different network types.
The media access control is the lower component that gives access to the transmission
medium itself.

Local Area Network: Abbreviated LAN. A group of computers and associated
peripheral devices connected by a communications channel, capable of sharing files and
other resources among several users.

Media Access Control: Abbreviated MAC. The lower component of the data-link layer
that governs access to the transmission medium. The logical link control layer is the
upper component of the data-link layer. MAC is used in CSMA/CD and token-ring LANs
as well as in other types of networks.

Mask: A binary number that is used to remove bits from another binary number by use of
one of the logical operators (AND, OR, NOT, XOR) to combine the binary number and
the mask. Masks are used in IP addresses and file permissions.
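The AND operation described above can be sketched in Python; the addresses, mask, and helper functions below are illustrative examples chosen for this book, not part of any standard library or protocol:

```python
# A sketch of how a mask extracts the network portion of an IP address
# with a bitwise AND. The addresses and helper names are illustrative.
def to_int(dotted):
    """Convert a dotted-quad string such as '192.168.1.10' to a 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n):
    """Convert a 32-bit integer back to dotted-quad notation."""
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

address = to_int("192.168.1.10")
mask = to_int("255.255.255.0")

# AND-ing the address with the mask keeps only the network bits;
# the host bits are zeroed out.
network = address & mask
print(to_dotted(network))  # 192.168.1.0
```

The same idea, with different masks, underlies subnetting and file-permission masks.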

Modem: Contraction of modulator/demodulator; a device that allows a computer to
transmit information over a telephone line. The modem translates between the digital
signals that the computer uses and analog signals suitable for transmission over telephone
lines. When transmitting, the modem modulates the digital data onto a carrier signal on
the telephone line. When receiving, the modem performs the reverse process to
demodulate the data from the carrier signal. Modems usually operate at speeds up to
56Kbps over standard telephone lines and at higher rates over leased lines.

Multicast: A special form of broadcast in which copies of a message are delivered to
multiple stations but not to all possible stations. A data stream from a server from which
multiple viewers can simultaneously watch a video.

Multicast Backbone: Abbreviated MBONE. A method of transmitting digital video over
the Internet in real time. The TCP/IP protocols used for Internet transmissions are
unsuitable for real-time audio or video; they were designed to deliver text and other files
reliably, but with some delay. MBONE requires the creation of another backbone service
with special hardware and software to accommodate video and audio transmissions; the
existing Internet hardware cannot manage time-critical transmissions.

Multiplexing: A technique that transmits several signals over a single communications
channel. Frequency-division multiplexing separates the signals by modulating the data
into different carrier frequencies. Time-division multiplexing divides the available time
among the various signals. Statistical multiplexing uses statistical techniques to
dynamically allocate transmission space depending on the traffic pattern.
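The time-division variant can be sketched in Python: the channel is divided into equal slots visited round-robin, and the receiver demultiplexes by slot position. The stream contents and function names below are illustrative assumptions:

```python
# A minimal sketch of time-division multiplexing (TDM): one unit from
# each stream is placed into the channel per time slot, round-robin.
def tdm_mux(streams):
    """Interleave equal-length streams, one unit per time slot."""
    channel = []
    for slot in zip(*streams):
        channel.extend(slot)
    return channel

def tdm_demux(channel, n_streams):
    """Recover each stream by taking every n-th slot from the channel."""
    return [channel[i::n_streams] for i in range(n_streams)]

voice = ["v1", "v2", "v3"]
data = ["d1", "d2", "d3"]
channel = tdm_mux([voice, data])
print(channel)  # ['v1', 'd1', 'v2', 'd2', 'v3', 'd3']

# At the receiving end, the signals are merged back into their streams.
assert tdm_demux(channel, 2) == [voice, data]
```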

Network layer: The third of seven layers of the OSI Reference Model for computer-to-
computer communications. The network layer defines protocols for data routing to ensure
that the information arrives at the correct destination node and manages communications
errors.

Network Management: Refers to the broad subject of managing computer networks.
There exists a wide variety of software and hardware products that help network system
administrators manage a network. Network management covers a wide area, including:
Security: Ensuring that the network is protected from unauthorized users.
Performance: Eliminating bottlenecks in the network.
Reliability: Making sure the network is available to users and responding to hardware
and software malfunctions.

Octet: The Internet's own term for a unit of data containing exactly eight bits. Some of
the computer systems attached to the Internet use bytes with more than eight bits; hence,
the need for this term.

Open Systems Interconnection: Abbreviated OSI. A networking reference model defined
by the ISO (International Organization for Standardization) that divides computer-to-computer
communications into seven connected layers. Such layers are known as a protocol stack.

Open Shortest Path First: Abbreviated OSPF. A routing protocol used on TCP/IP
networks that takes into account network loading and bandwidth when routing
information over the network. Routers maintain a map of the network and swap
information on the current status of each network link. OSPF incorporates least-cost
routing, equal-cost routing, and load balancing.

Packet: Any block of data sent over a network or communications link. Each packet may
contain sender, receiver, and error-control information, in addition to the actual message,
which may be data, connection management controls, or a request for a service. Packets
may be fixed- or variable-length, and they will be reassembled if necessary when they
reach their destination. The actual format of a packet depends on the protocol that creates
the packet; some protocols use special packets to control communications functions in
addition to data packets.

Physical layer: The first and lowest of the seven layers in the OSI Reference Model for
computer-to-computer communications. The physical layer defines the physical,
electrical, mechanical, and functional procedures used to connect the equipment.

Protocol Independent Multicast-Dense Mode: Abbreviated PIM-DM. A multicast
routing protocol that runs over an existing unicast infrastructure; dense mode is used
when the targeted recipients are in a concentrated area.

Protocol Independent Multicast-Sparse Mode: Abbreviated PIM-SM. A multicast
routing protocol that runs over an existing unicast infrastructure; sparse mode is used
when the recipients are scattered over a large area.

Presentation layer: The sixth of seven layers of the OSI Reference Model for computer-
to-computer communications. The presentation layer defines the way in which data is
formatted, presented, converted, and encoded.

Process: In a multitasking operating system, a program or a part of a program. All EXE
and COM files execute as processes, and one process can run one or more other
processes.

Protocol: In networking and communications, the formal specification that defines the
procedures to follow when transmitting and receiving data. Protocols define the format,
timing, sequence, and error checking used on the network.

Repeater: A simple hardware device that moves all packets from one local-area network
segment to another by regenerating, retiming, and amplifying the electrical signals. The
main purpose of a repeater is to extend the length of the network transmission medium
beyond the normal maximum cable lengths.

Request for Comments: Abbreviated RFC. A document or a set of documents in which
proposed Internet standards are described or defined. Well over a thousand RFCs are in
existence, and they represent a major method of online publication for Internet technical
standards.

Routing Information Protocol: Abbreviated RIP. A routing protocol used on TCP/IP
(Transmission Control Protocol/Internet Protocol) networks that maintains a list of
reachable networks and calculates the degree of difficulty involved in reaching a specific
network from a particular location by determining the lowest hop count. The Internet
standard routing protocol Open Shortest Path First (OSPF) is the successor to RIP.

Router: An intelligent connecting device that can send packets to the correct LAN
segment to take them to their destination. Routers link LAN segments at the network
layer of the OSI Reference Model for computer-to-computer communications. The
networks connected by routers can use similar or different networking protocols.

Storage Area Network: Abbreviated SAN. A method used to physically separate the
storage function of the network from the data-processing function. SAN provides a
separate network devoted to storage and so helps to reduce network traffic by isolating
large data transfers such as backups. Most of the SAN vendors, including StorageTek and
Compaq, use a Fibre Channel-based SAN system, although IBM has proposed a
proprietary architecture.

Script: A small program or macro invoked at a particular time. For example, a login
script may execute the same specific set of instructions every time a user logs in to a
network. A communications script may send user-identification information to an
Internet Service Provider (ISP) each time a subscriber dials up the service.

Segment: In networks, a section of a network that is bounded by bridges, routers, or
switches. Dividing an Ethernet into multiple segments is one of the most common ways
of increasing available bandwidth on the LAN. If segmented correctly, most network
traffic will remain within a single segment, enjoying the full 10 Mbps bandwidth. Hubs
and switches are used to connect each segment to the rest of the LAN.

Server: Any computer that makes access to files, printing, communications, and other
services available to users of the network. In large networks, a dedicated server runs a
special network operating system; in smaller installations, a non-dedicated server may
run a personal computer operating system with peer-to-peer networking software running
on top. A server typically has a more advanced processor, more memory, a larger cache,
and more disk storage than a single-user workstation. A server may also have several
processors rather than just one and may be dedicated to a specific support function such
as printing, e-mail, or communications. Many servers also have large power supplies,
UPS (uninterruptible power supply) support, and fault-tolerant features, such as RAID
technology. On the Internet, a server responds to requests from a client, usually a Web
browser.

Session layer: The fifth of seven layers of the OSI Reference Model for computer-to-
computer communications. The session layer coordinates communications and maintains
the session for as long as it is needed, performing security, logging, and administrative
functions.

Simple Mail Transfer Protocol: Abbreviated SMTP. The TCP/IP (Transmission
Control Protocol/Internet Protocol) protocol that provides a simple e-mail service and is
responsible for moving e-mail messages from one e-mail server to another. SMTP
provides a direct end-to-end mail delivery, which is rather unusual; most mail systems
use store-and-forward protocols. The e-mail servers run either Post Office Protocol (POP)
or Internet Mail Access Protocol (IMAP) to distribute e-mail messages to users.

Simplex: Refers to transmission in only one direction: one party is the transmitter and
the other is the receiver. An example of simplex communications is a simple radio, which
can receive data from stations but cannot transmit data.

Sniffer: A small program loaded onto a system by an intruder, designed to monitor
specific traffic on the network. The sniffer program watches for the first part of any
remote login session that includes the user name, password, and host name of a person
logging in to another machine. Once this information is in the hands of the intruder, he or
she can log on to that system at will. One weakly secured network can therefore expose
not only the local systems, but also any remote systems to which the local users connect.

Sniffer is also the name of a network analyzer product from Network General.

Synchronous Optical Network: Abbreviated SONET. A set of fiber-optic-based
communications standards with transmission rates from 51.84Mbps to 13.22Gbps. First
proposed by Bellcore in the mid-1980s, SONET was standardized by ANSI, and the ITU
adapted SONET in creating the worldwide Synchronous Digital Hierarchy (SDH)
standard. SONET uses synchronous transmissions in which individual channels (called
tributaries) are merged into higher-level channels using time-division multiplexing
techniques. Data is carried in frames of 810 bytes, which also includes control
information known as the overhead.

Source address: The portion of a packet or datagram that identifies the sender.

Spanning Tree Algorithm: A technique based on the IEEE 802.1 standard that finds the
most desirable path between segments of a multilooped, bridged network. If multiple
paths exist in the network, the spanning tree algorithm finds the most efficient path and
limits the link between the two networks to this single active path. If this path fails
because of a cable failure or other problem, the algorithm reconfigures the network to
activate another path, thus keeping the network running.

Split Horizon: The split horizon rule forbids a route to advertise a network prefix via the
interface from which it learned of the prefix.
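The rule can be sketched in Python: when building the update sent out of an interface, omit prefixes learned on that interface. The routing table and function name below are illustrative assumptions:

```python
# A sketch of the split horizon rule. Each prefix records the interface
# on which it was learned; advertisements out of an interface exclude
# the prefixes learned on that same interface.
routing_table = {
    "10.0.0.0/8":     "eth0",  # prefix -> interface it was learned on
    "172.16.0.0/16":  "eth1",
    "192.168.0.0/24": "eth0",
}

def advertisement(out_interface):
    """Prefixes that split horizon allows out of out_interface."""
    return sorted(prefix for prefix, learned_on in routing_table.items()
                  if learned_on != out_interface)

print(advertisement("eth0"))  # ['172.16.0.0/16']
print(advertisement("eth1"))  # ['10.0.0.0/8', '192.168.0.0/24']
```

This prevents a neighbor from being offered a route that points back through itself, one of the simplest defenses against routing loops.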

Stack: A set of network protocol layers that work together. The OSI Reference Model
that defines seven protocol layers is often called a stack, as is the set of TCP/IP protocols
that define communication over the internet.

Star topology: A network topology in the form of a star. At the center of the star is a
wiring hub or concentrator, and the nodes or workstations are arranged around the central
point representing the points of the star. Wiring costs tend to be higher for star networks
than for other configurations, because each node requires its own individual cable. Star
networks do not follow any of the IEEE standards.

Statistical Multiplexing: Abbreviated stat mux. In communications, a method of
sharing a transmission channel by using statistical techniques to allocate resources. A
statistical multiplexer can analyze traffic density and dynamically switch to a different
channel pattern to speed up the transmission. At the receiving end, the different signals
are merged back into individual streams.

Store and Forward: A method that temporarily stores messages at intermediate nodes
before forwarding them to the next destination. This technique allows routing over
networks that are not available at all times and lets users take advantage of off-peak rates
when traffic and costs might be lower.

Subnet: A logical network created from a single IP address. A mask is used to identify
bits from the host portion of the address to be used for subnet addresses.
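A quick sketch of borrowing host bits, using Python's standard ipaddress module; the 192.168.0.0/24 prefix chosen here is an illustrative assumption:

```python
# A sketch of subnetting with Python's standard ipaddress module.
# Borrowing 2 bits from the host portion of a /24 network
# (mask 255.255.255.192) yields four /26 subnets.
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")
subnets = list(net.subnets(prefixlen_diff=2))

for subnet in subnets:
    print(subnet)
# 192.168.0.0/26
# 192.168.0.64/26
# 192.168.0.128/26
# 192.168.0.192/26
```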

Switch: A device that filters and forwards packets between LAN segments. Switches
operate at the data link layer (layer 2) and sometimes the network layer (layer 3) of the
OSI Reference Model and therefore support any packet protocol. LANs that use switches
to join segments are called switched LANs or, in the case of Ethernet networks, switched
Ethernet LANs.

Synchronous Transmission: A transmission method that uses a clock signal to regulate
data flow. In synchronous transmissions, frames are separated by equal-sized time
intervals. Timing must be controlled precisely on the sending and the receiving
computers. Special characters are embedded in the data stream to begin synchronization
and to maintain synchronization during the transmission, allowing both computers to
check for and correct any variations in timing.

Telnet: A terminal emulation protocol, part of the TCP/IP suite of protocols and
common in the Unix world, that provides remote terminal-connection services. The most
common terminal emulations are for Digital Equipment Corporation (DEC) VT-52, VT-
100, and VT-220 terminals, although many companies offer additional add-in emulations.

Time Division Multiplexing: Abbreviated TDM. A method of sharing a transmission


channel by dividing the available time equally between competing stations. At the
receiving end, the different signals are merged back into their individual streams.

Token Ring network: A LAN with a ring structure that uses token passing to regulate
traffic on the network and avoid collisions. On a token-ring network, the controlling
network interface card generates a token that controls the right to transmit. This token is
continuously passed from one node to the next around the network. When a node has
information to transmit, it captures the token, sets its status to busy, and adds the message
and the destination address. All other nodes continuously read the token to determine if
they are the recipient of a message. If they are, they collect the token, extract the
message, and return the token to the sender. The sender then removes the message and
sets the token status to free, indicating that it can be used by the next node in sequence.

Transmission Control Protocol: Abbreviated TCP. The transport-level protocol used in
the TCP/IP suite of protocols. It works above IP in the protocol stack and provides
reliable data delivery over connection-oriented links. TCP adds a header to the datagram
that contains the information needed to get the datagram to its destination. The source
port number and the destination port number allow data to be sent back and forth to the
correct processes running on each computer. A sequence number allows the datagrams to
be rebuilt in the correct order in the receiving computer, and a checksum verifies that the
data received is the same as the data sent.
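The role of sequence numbers can be sketched in Python; the segment sizes and numbers below are illustrative (real TCP sequence numbers count bytes and wrap modulo 2**32):

```python
# A sketch of how sequence numbers let a receiver rebuild data that
# arrives out of order: sorting segments by sequence number restores
# the original byte order.
segments = [
    (2000, b"world"),   # (sequence number, payload) -- arrived first
    (1000, b"hello "),  # arrived second, but comes first in sequence
]

data = b"".join(payload for seq, payload in sorted(segments))
print(data)  # b'hello world'
```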

Transmission Medium: The physical cabling used to carry network information, such
as fiber-optic, coaxial, shielded twisted-pair (STP), and unshielded twisted-pair (UTP)
cabling.

Transport layer: The fourth of seven layers of the OSI Reference Model for computer-
to-computer communications. The transport layer defines protocols for message structure
and supervises the validity of the transmission by performing some error checking.

Tree topology: A tree topology combines characteristics of linear bus and star
topologies. It consists of groups of star-configured workstations connected to a linear bus
backbone cable.

Unicast: Transmission from a single sender to a single receiver, in contrast to broadcast
or multicast. For example, a server can unicast individual audio or video streams to
individual clients to provide an on-demand video service.

Uniform Resource Locator: Abbreviated URL. An address for a resource on the
Internet. URLs are used as a linking mechanism between Web pages and as a method for
Web browsers to access Web pages. A URL specifies the protocol to be used to access
the resource (such as HTTP or FTP), the name of the server where the resource is located
(as in www.sybex.com), the path to that resource (as in /catalog), and the name of the
document to open (/index.html).
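The parts named above map directly onto the fields returned by Python's standard urllib.parse module; the example URL follows the text:

```python
# A sketch of splitting a URL into the parts described above.
from urllib.parse import urlparse

parts = urlparse("http://www.sybex.com/catalog/index.html")

print(parts.scheme)  # http                -- the protocol to use
print(parts.netloc)  # www.sybex.com       -- the server name
print(parts.path)    # /catalog/index.html -- the path and document
```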

Wavelength Division Multiplexing: Abbreviated WDM. A frequency-division multiplexing
(FDM) technique that allows a single fiber-optic cable to carry multiple light signals
rather than a single light signal. WDM places each signal on a different frequency.

About the Author
Andres Rengifo obtained his undergraduate degree in Electrical
Engineering from the City College of New York in 1992. He holds a
Master's degree in Educational Computing from Stony Brook University
and is currently enrolled in an Instructional Technology and
Distance Education Ph.D. program at Nova Southeastern University. He
has over 15 years of experience in data networking, ranging from operation
support, implementations, network engineering, and architecture.
Andres has spent his entire professional career in the financial
industry providing solutions to customers and clients using multiple
technologies such as multicast, TCP/IP and various other network
protocols. Currently, he is a vice president at Barclays Capital under
the Infrastructure Engineering and Support group. His main interest
revolves around routing protocols and data center architectures. In
addition to his busy career he is also an Adjunct Assistant Professor
at New York University's School of Continuing and Professional Studies,
Information Technologies Institute, where he teaches Internetworking
Fundamentals.

