
SUBMITTED BY: Hemant Agarwal

College: IIIT-A
SUBMITTED TO: ONGC PROJECT ICE
Topic: Networking and Security
Contents
1. Introduction to Networking and Security
2. OSI AND TCP/IP model
3. Routing
4. Switching
5. Spanning tree protocol
6. VLANS
7. VTP
8. Security
9. Access list
10. NAT & PAT
A Word of Thanks
This page is dedicated to appreciating the very generous
help and support that I have received from the whole
Networking Team of O.N.G.C. Project ICE. The whole team has
contributed a lot to my learning process at Project ICE. Without
them my training programme here would not have been
successful. In particular, Mr. Dharamraj, Mr. Rakesh Arora, Mr.
Dushyant and, above all, Mr. A.S. Dobre shared some very
important facts about the networking industry, which I am sure
I would never have received by studying in a classroom. I
thank all the people mentioned above for being very patient
and generous with their time even when they had so many
commitments towards their work.
OVERVIEW OF THE REPORT

This report is a study of the basic networking fundamentals
covered during the training. As I was a beginner in this field,
I started from the basics and then developed as much as I could
during this training period. The report mainly focuses on
VLANs: their requirement, their implementation, and their
security issues.

Today's industries also demand security in a network. At
O.N.G.C. Project ICE, security is given the highest priority,
so learning about networking by default means having some
knowledge of security issues. The report therefore also places
considerable emphasis on network security.

My main objective while preparing this report has been to
include as much useful information as possible, so that I can
later use this report as a reference.
INTRODUCTION TO NETWORKING
AND SECURITY
Computer networking is the engineering discipline concerned
with the communication between computer systems or devices.
A computer network is any set of computers or devices
connected to each other with the ability to exchange data.
Computer networking is sometimes considered a sub-discipline
of telecommunications, computer science, information
technology and/or computer engineering since it relies heavily
upon the theoretical and practical application of these scientific
and engineering disciplines. The three types of networks are:
the Internet, the intranet, and the extranet. Examples of different
network methods are:
 Local area network (LAN), which is usually a small network
constrained to a small geographic area. An example of a LAN
would be a computer network within a building.
 Metropolitan area network (MAN), which is used for a medium-
sized area, for example a city or a state.
 Wide area network (WAN), which is usually a larger network
that covers a large geographic area.
 Wireless LANs and WANs (WLAN & WWAN) are the wireless
equivalents of the LAN and WAN.
All networks are interconnected to allow communication with a
variety of different kinds of media, including twisted-
pair copper wire cable, coaxial cable, optical fiber, power
lines and various wireless technologies. The devices can be
separated by a few meters (e.g. via Bluetooth) or nearly
unlimited distances (e.g. via the interconnections of
the Internet). Networking, routers, routing protocols, and
networking over the public Internet have their specifications
defined in documents called RFCs.
Users and network administrators often have different views of
their networks. Often, users who share printers and some servers
form a workgroup, which usually means they are in the same
geographic location and are on the same LAN. A community of
interest has less of a connection of being in a local area, and
should be thought of as a set of arbitrarily located users who
share a set of servers, and possibly also communicate via peer-
to-peer technologies.
Network administrators see networks from both physical and
logical perspectives. The physical perspective involves
geographic locations, physical cabling, and the network
elements (e.g., routers, bridges and application layer
gateways) that interconnect the physical media. Logical
networks, called, in the TCP/IP architecture, subnets, map onto
one or more physical media. For example, a common practice in
a campus of buildings is to make a set of LAN cables in each
building appear to be a common subnet, using virtual LAN
(VLAN) technology.
Both users and administrators will be aware, to varying extents,
of the trust and scope characteristics of a network. Again using
TCP/IP architectural terminology, an intranet is a community of
interest under private administration usually by an enterprise,
and is only accessible by authorized users (e.g.
employees).[5] Intranets do not have to be connected to the
Internet, but generally have a limited connection. An extranet is
an extension of an intranet that allows secure communications to
users outside of the intranet (e.g. business partners, customers).
Informally, the Internet is the set of users, enterprises, and
content providers that are interconnected by Internet Service
Providers (ISP). From an engineering standpoint, the Internet is
the set of subnets, and aggregates of subnets, which share the
registered IP address space and exchange information about the
reachability of those IP addresses using the Border Gateway
Protocol. Typically, the human-readable names of servers are
translated to IP addresses, transparently to users, via the
directory function of the Domain Name System (DNS).
Over the Internet, there can be business-to-business
(B2B), business-to-consumer (B2C) and consumer-to-consumer
(C2C) communications. Especially when money or sensitive
information is exchanged, the communications are apt to
be secured by some form of communications
security mechanism. Intranets and extranets can be securely
superimposed onto the Internet, without any access by general
Internet users, using secure Virtual Private Network (VPN)
technology.
When used for gaming, one computer may have to act as the server
while the others play through it.
OSI and TCP/IP models

The International Organization for Standardization (ISO) Open
Systems Interconnection (OSI) Reference Model defines seven
layers of communications types, and the interfaces among them.
(See Figure 1.) Each layer depends on the services provided by
the layer below it, all the way down to the physical network
hardware, such as the computer's network interface card, and the
wires that connect the cards together.

An easy way to look at this is to compare this model with
something we use daily: the telephone. In order for you and me to
talk when we're out of earshot, we need a device like a
telephone. (In the ISO/OSI model, this is at the application
layer.) The telephones, of course, are useless unless they have
the ability to translate the sound into electronic pulses that can
be transferred over wire and back again. (These functions are
provided in layers below the application layer.) Finally, we get
down to the physical connection: both must be plugged into an
outlet that is connected to a switch that's part of the telephone
system's network of switches.
If I place a call to you, I pick up the receiver, and dial your
number. This number specifies the central office to which to
send my request, and then which phone at that central office
to ring. Once you answer the phone, we begin talking, and our
session has begun. Conceptually, computer networks function
exactly the same way.
It isn't important for you to memorize the ISO/OSI Reference
Model's layers; but it's useful to know that they exist, and that
each layer cannot work without the services provided by the
layer below it.

The TCP/IP model is a description framework for computer
network protocols created in the 1970s by DARPA, an agency
of the United States Department of Defense. It evolved from
ARPANET, which was the world's first wide area network and a
predecessor of the Internet. The TCP/IP Model is sometimes
called the Internet Model or the DoD Model.
The TCP/IP model, or Internet Protocol Suite, describes a set of
general design guidelines and implementations of specific
networking protocols to enable computers to communicate over
a network. TCP/IP provides end-to-end connectivity specifying
how data should be formatted, addressed,
transmitted, routed and received at the destination. Protocols
exist for a variety of different types of communication services
between computers.
TCP/IP is generally described as having four abstraction
layers (RFC 1122). This layer architecture is often compared
with the seven-layer OSI Reference Model; using terms such as
Internet Reference Model in analogy is however incorrect as the
Internet Model is descriptive while the OSI Reference Model
was intended to be prescriptive, hence Reference Model.
The TCP/IP model and related protocols are maintained by
the Internet Engineering Task Force (IETF).
Layer 1: Physical Layer
The Physical Layer defines the electrical and physical
specifications for devices. In particular, it defines the
relationship between a device and a physical medium. This
includes the layout of pins, voltages, cable specifications, hubs,
repeaters, network adapters, host bus adapters (HBAs used
in storage area networks) and more.
To understand the function of the Physical Layer, contrast it
with the functions of the Data Link Layer. Think of the Physical
Layer as concerned primarily with the interaction of a single
device with a medium, whereas the Data Link Layer is
concerned more with the interactions of multiple devices (i.e., at
least two) with a shared medium. Standards such as RS-232 do
use physical wires to control access to the medium.
The major functions and services performed by the Physical
Layer are:
 Establishment and termination of a connection to
a communications medium.
 Participation in the process whereby the communication
resources are effectively shared among multiple users. For
example, contention resolution and flow control.
 Modulation, or conversion between the representation
of digital data in user equipment and the corresponding
signals transmitted over a communications channel. These are
signals operating over the physical cabling (such as copper
and optical fiber) or over a radio link.
Parallel SCSI buses operate in this layer, although it must be
remembered that the logical SCSI protocol is a Transport Layer
protocol that runs over this bus. Various Physical Layer Ethernet
standards are also in this layer; Ethernet incorporates both this
layer and the Data Link Layer. The same applies to other local-
area networks, such as token ring, FDDI, ITU-T G.hn and IEEE
802.11, as well as personal area networks such
as Bluetooth and IEEE 802.15.4.

Layer 2: Data Link Layer


The Data Link Layer of the OSI model is responsible for
communications between adjacent network nodes. Switches
operate at the Data Link Layer. It is furthermore responsible for
monitoring and correcting the flow of data, as well as errors that
creep in during the transmission of data. It employs block and
convolutional coding to manage flow and error control in the
transmission of data. The data link layer consists of two sub-layers:
1. Logical Link Control (LLC) sub layer
2. Medium Access Control (MAC) sub layer.
The LLC sub layer provides an interface between the media access
methods and network layer protocols such as the Internet Protocol,
which is a part of the TCP/IP protocol suite. The LLC sub layer
determines whether the communication is going to be
connectionless or connection-oriented at the data link layer.
The MAC sub layer is responsible for connection to physical media.
At the MAC sub layer of data link layer, the actual physical
address of the device, called the MAC address, is added to the
frame (which contains the packets inside). The frame contains
all the information necessary to travel from source device to
destination device. Each time a frame is created while it travels
the path, it gets stamped with the MAC address of the last
sending device in the "source" address, whereas the
"destination" address gets the MAC of the adjacent receiving
device. In simple words, a frame is needed to carry packets
between two adjacent devices where they get discarded and
recreated each time they are received/sent. A MAC address is a
12-digit hexadecimal number unique to every device in the world.
A device's MAC address is located on its Network Interface Card
(NIC). Of these 12 digits, the first six identify the NIC
manufacturer and the last six are unique to the card. For example,
32-14-a6-42-17-0c is a 12-digit hexadecimal MAC address. Thus the
MAC address represents the physical address of a device in the
network.
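
To make the manufacturer/device split concrete, here is a minimal Python sketch; the MAC address is the one quoted above, and the helper name is purely illustrative.

# Minimal sketch: splitting a MAC address into its manufacturer (OUI)
# prefix and its device-specific half. The helper name is illustrative.
def split_mac(mac):
    digits = mac.replace("-", "").replace(":", "").lower()
    if len(digits) != 12:
        raise ValueError("a MAC address has 12 hexadecimal digits")
    return digits[:6], digits[6:]

oui, device = split_mac("32-14-a6-42-17-0c")
print(oui)     # '3214a6' -> identifies the NIC manufacturer
print(device)  # '42170c' -> unique to this card within that manufacturer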
Layer 3: Network Layer
The Network Layer provides the functional and procedural
means of transferring variable length data sequences from a
source to a destination via one or more networks, while
maintaining the quality of service requested by the Transport
Layer. The Network Layer performs network routing functions,
and might also perform fragmentation and reassembly, and
report delivery errors. Routers operate at this layer—sending
data throughout the extended network and making the Internet
possible. This is a logical addressing scheme – values are chosen
by the network engineer. The addressing scheme is hierarchical.
Careful analysis of the Network Layer indicated that it could
have at least three sublayers:
1. Subnetwork Access - considers protocols that deal with the
interface to networks, such as X.25;
2. Subnetwork Dependent Convergence - used when it is necessary
to bring the level of a transit network up to the level of the
networks on either side;
3. Subnetwork Independent Convergence - handles transfer across
multiple networks. The best example of this latter case is
CLNP (ISO 8473).
It manages the connectionless transfer of data one hop at a time,
from end system to ingress router, router to router, and
from egress router to destination end system. It is not
responsible for reliable delivery to a next hop, but only for the
detection of error packets so they may be discarded. In this
scheme, IPv4 and IPv6 would have to be classed with X.25 as
Subnet Access protocols because they carry interface addresses
rather than node addresses.
A number of layer management protocols, a function defined in
the Management Annex, ISO 7498/4, belong to the Network
Layer. These include routing protocols, multicast group
management, Network Layer information and error, and
Network Layer address assignment. It is the function of the
payload that makes these belong to the Network Layer, not the
protocol that carries them.
Layer 4: Transport Layer
The Transport Layer provides end-to-end communication and
connections between two systems on the network. It is
responsible for transmitting the complete message from one
process on the host machine to another process on the
destination machine. It divides the message into numbered
packets which, after transmission are reassembled in the correct
order by the Transport Layer on the destination machine.
It is also responsible for End-to-End Flow Control and Error
Control.
Layer 6: Presentation Layer
The Presentation Layer is Layer 6 of the seven-layer OSI
model of computer networking.
The Presentation Layer is responsible for the delivery and
formatting of information to the application layer for further
processing or display. It relieves the application layer of concern
regarding syntactical differences in data representation within
the end-user systems. Note: An example of a presentation
service would be the conversion of an EBCDIC-coded
text file to an ASCII-coded file.
The Presentation Layer is the lowest layer at which application
programmers consider data structure and presentation, instead of
simply sending data in the form of datagrams or packets between
hosts. This layer deals with issues of string representation -
whether they use the Pascal method (an integer length field
followed by the specified amount of bytes) or the C/C++ method
(null-terminated strings, i.e. "thisisastring\0"). The idea is that
the application layer should be able to point at the data to be
moved, and the Presentation Layer will deal with the rest.
Serialization of complex data structures into flat byte-strings
(using mechanisms such as TLV or XML) can be thought of as
the key functionality of the Presentation Layer.
Encryption is typically done at this level too, although it can be
done on the Application, Session, Transport, or Network Layers;
each having its own advantages and disadvantages. Another
example is representing structure, which is normally
standardized at this level, often by using XML. As well as
simple pieces of data, like strings, more complicated things are
standardized in this layer. Two common examples are 'objects'
in object-oriented programming, and the exact way that
streaming video is transmitted.
In many widely used applications and protocols, no distinction is
made between the presentation and application layers. For
example, HTTP, generally regarded as an application layer
protocol, has Presentation Layer aspects such as the ability to
identify character encoding for proper conversion, which is then
done in the Application Layer.
Within the service layering semantics of the OSI network
architecture, the Presentation Layer responds to service requests
from the Application Layer and issues service requests to
the Session Layer.
Comparison with TCP/IP
In the TCP/IP model of the Internet, protocols are deliberately
not as rigidly designed into strict layers as the OSI model. RFC
3439 contains a section entitled "Layering considered harmful."
However, TCP/IP does recognize four broad layers of
functionality which are derived from the operating scope of their
contained protocols, namely the scope of the software
application, the end-to-end transport connection, the
internetworking range, and lastly the scope of the direct links to
other nodes on the local network.
Even though the concept is different from the OSI model, these
layers are nevertheless often compared with the OSI layering
scheme in the following way: The Internet Application
Layer includes the OSI Application Layer, Presentation Layer,
and most of the Session Layer. Its end-to-end Transport
Layer includes the graceful close function of the OSI Session
Layer as well as the OSI Transport Layer. The internetworking
layer (Internet Layer) is a subset of the OSI Network Layer (see
above), while the Link Layer includes the OSI Data Link and
Physical Layers, as well as parts of OSI's Network Layer. These
comparisons are based on the original seven-layer protocol
model as defined in ISO 7498, rather than refinements in such
things as the internal organization of the Network Layer
document.
The presumably strict peer layering of the OSI model as it is
usually described does not present contradictions in TCP/IP, as
it is permissible that protocol usage does not follow the
hierarchy implied in a layered model. Such examples exist in
some routing protocols (e.g., OSPF), or in the description
of tunneling protocols, which provide a Link Layer for an
application, although the tunnel host protocol may well be a
Transport or even an Application Layer protocol in its own
right.
Routing
Routing (or routeing) is the process of selecting paths in a
network along which to send network traffic. Routing is
performed for many kinds of networks, including the telephone
network, electronic data networks (such as the Internet),
and transportation networks. This section is concerned primarily
with routing in electronic data networks using packet-switching
technology.
In packet switching networks, routing directs packet forwarding,
the transit of logically addressed packets from their source
toward their ultimate destination through intermediate nodes;
typically hardware devices
called routers, bridges, gateways, firewalls, or switches.
General-purpose computers with multiple network cards can
also forward packets and perform routing, though they are not
specialized hardware and may suffer from limited performance.
The routing process usually directs forwarding on the basis
of routing tables which maintain a record of the routes to various
network destinations. Thus, constructing routing tables, which
are held in the routers' memory, is very important for efficient
routing. Most routing algorithms use only one network path at a
time, but multipath routing techniques enable the use of multiple
alternative paths.
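
As an illustration of how such a table is consulted, the sketch below performs a longest-prefix-match lookup with Python's standard ipaddress module; the prefixes and next-hop names are invented for the example.

# Illustrative routing-table lookup: choose the most specific (longest)
# prefix that contains the destination. Prefixes and next hops are made up.
import ipaddress

routing_table = {
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
    ipaddress.ip_network("10.0.0.0/8"): "core-router",
    ipaddress.ip_network("10.1.2.0/24"): "edge-switch",
}

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routing_table[best]

print(lookup("10.1.2.55"))  # 'edge-switch'
print(lookup("8.8.8.8"))    # 'default-gateway'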
Routing, in a more narrow sense of the term, is often contrasted
with bridging in its assumption that network addresses are
structured and that similar addresses imply proximity within the
network. Because structured addresses allow a single routing
table entry to represent the route to a group of devices,
structured addressing (routing, in the narrow sense) outperforms
unstructured addressing (bridging) in large networks, and has
become the dominant form of addressing on the Internet, though
bridging is still widely used within localized environments.
Delivery semantics

Routing schemes differ in their delivery semantics:


 unicast delivers a message to a single specified node;
 broadcast delivers a message to all nodes in the network;
 multicast delivers a message to a group of nodes that have
expressed interest in receiving the message;
 anycast delivers a message to any one out of a group of nodes,
typically the one nearest to the source.


Unicast is the dominant form of message delivery on the
Internet, and this section focuses on unicast routing algorithms.
Small networks may involve manually configured routing tables
(static routing) or non-adaptive routing, while larger networks
involve complex topologies and may change rapidly, making the
manual construction of routing tables unfeasible. Nevertheless,
most of the public switched telephone network (PSTN) uses pre-
computed routing tables, with fallback routes if the most direct
route becomes blocked (see routing in the PSTN). Adaptive
routing or dynamic routing attempts to solve this problem by
constructing routing tables automatically, based on information
carried by routing protocols, and allowing the network to act
nearly autonomously in avoiding network failures and
blockages.
With static (non-adaptive) routing there is no algorithm; routes
are manually engineered. The advantage of this routing type is
that computing resources are saved, but it is constrained:
networks have to be prepared for failures through additional
planning. For larger networks, static routing is avoided.
Examples of dynamic (adaptive) routing protocols are the Routing
Information Protocol (RIP) and Open Shortest Path First (OSPF).
Dynamic routing dominates the Internet. However, the
configuration of the routing protocols often requires a skilled
touch; one should not suppose that networking technology has
developed to the point of the complete automation of routing.
Distance vector algorithms
Distance vector algorithms use the Bellman-
Ford algorithm. This approach assigns a number, the cost,
to each of the links between each node in the network.
Nodes will send information from point A to point B via
the path that results in the lowest total cost (i.e. the sum of
the costs of the links between the nodes used).
The algorithm operates in a very simple manner. When a
node first starts, it only knows of its immediate
neighbours, and the direct cost involved in reaching them.
(This information, the list of destinations, the total cost to
each, and the next hop to send data to get there, makes up
the routing table, or distance table.) Each node, on a
regular basis, sends to each neighbour its own current idea
of the total cost to get to all the destinations it knows of.
The neighbouring node(s) examine this information, and
compare it to what they already 'know'; anything which
represents an improvement on what they already have,
they insert in their own routing table(s). Over time, all the
nodes in the network will discover the best next hop for
all destinations, and the best total cost.
When one of the nodes involved goes down, those nodes
which used it as their next hop for certain destinations
discard those entries, and create new routing-table
information. They then pass this information to all
adjacent nodes, which then repeat the process. Eventually
all the nodes in the network receive the updated
information, and will then discover new paths to all the
destinations which they can still "reach".
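
A toy version of this exchange, assuming a fixed, symmetric link-cost map and synchronous update rounds, might look like the sketch below; real protocols such as RIP add timers, split horizon and hop-count limits.

# Toy distance-vector computation: every node repeatedly merges its
# neighbours' advertised costs (Bellman-Ford style) until nothing changes.
# The topology and costs below are invented.
INF = float("inf")
links = {("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5, ("C", "D"): 1}
nodes = {n for pair in links for n in pair}

def cost(u, v):
    return links.get((u, v), links.get((v, u), INF))

# distance[u][dest] = best known cost from u to dest (the "distance table")
distance = {u: {d: (0 if u == d else cost(u, d)) for d in nodes} for u in nodes}

changed = True
while changed:              # keep exchanging vectors until convergence
    changed = False
    for u in nodes:
        for v in nodes:
            if cost(u, v) == INF:
                continue    # v is not a direct neighbour of u
            for dest in nodes:
                via_v = cost(u, v) + distance[v][dest]
                if via_v < distance[u][dest]:
                    distance[u][dest] = via_v
                    changed = True

print(distance["A"]["D"])   # 4, via the path A-B-C-D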
Link-state algorithms
When applying link-state algorithms, each node uses as
its fundamental data a map of the network in the form of
a graph. To produce this, each node floods the entire
network with information about what other nodes it can
connect to, and each node then independently assembles
this information into a map. Using this map, each router
then independently determines the least-cost path from
itself to every other node using a standard shortest
paths algorithm such as Dijkstra's algorithm. The result is
a tree rooted at the current node such that the path through
the tree from the root to any other node is the least-cost
path to that node. This tree then serves to construct the
routing table, which specifies the best next hop to get
from the current node to any other node.
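
For comparison, a minimal link-state computation over the same kind of illustrative topology can run Dijkstra's algorithm directly on the full map; the graph values below are assumptions.

# Minimal link-state sketch: each node holds the whole graph and runs
# Dijkstra's algorithm itself. Topology values are invented.
import heapq

INF = float("inf")
graph = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 2},
    "C": {"A": 5, "B": 2, "D": 1},
    "D": {"C": 1},
}

def dijkstra(source):
    dist = {source: 0}
    prev = {}                              # predecessor on the best path
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, INF):
            continue                       # stale queue entry
        for neigh, w in graph[node].items():
            nd = d + w
            if nd < dist.get(neigh, INF):
                dist[neigh] = nd
                prev[neigh] = node
                heapq.heappush(heap, (nd, neigh))
    return dist, prev

def first_hop(prev, source, dest):         # walk back to find the next hop
    node = dest
    while prev[node] != source:
        node = prev[node]
    return node

dist, prev = dijkstra("A")
print(dist["D"], first_hop(prev, "A", "D"))  # 4 B  (best path A-B-C-D)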
Path vector protocol
Distance vector and link state routing are both intra-
domain routing protocols. They are used inside
an autonomous system, but not between autonomous
systems. Both of these routing protocols become
intractable in large networks and cannot be used in Inter-
domain routing. Distance vector routing is subject to
instability if there are more than a few hops in the
domain. Link state routing needs huge amount of
resources to calculate routing tables. It also creates heavy
traffic because of flooding.
Path vector routing is used for inter-domain routing. It is
similar to distance vector routing. In path vector routing
we assume there is one node (there can be many) in each
autonomous system which acts on behalf of the entire
autonomous system. This node is called the speaker node.
The speaker node creates a routing table and advertises it
to neighboring speaker nodes in neighboring autonomous
systems. The idea is the same as distance vector routing
except that only speaker nodes in each autonomous
system can communicate with each other. The speaker
node advertises the path, not the metric of the nodes, in its
autonomous system or other autonomous systems. Path
vector routing is discussed in RFC 1322; the path vector
routing algorithm is somewhat similar to the distance
vector algorithm in the sense that each border router
advertises the destinations it can reach to its neighboring
router. However, instead of advertising networks in terms
of a destination and the distance to that destination,
networks are advertised as destination addresses and path
descriptions to reach those destinations. A route is defined
as a pairing between a destination and the attributes of the
path to that destination, thus the name, path vector
routing, where the routers receive a vector that contains
paths to a set of destinations. The path, expressed in terms
of the domains (or confederations) traversed so far, is
carried in a special path attribute that records the
sequence of routing domains through which the
reachability information has passed.
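
A toy sketch of that speaker-node exchange, including the loop check that carrying full paths makes possible, is shown below; the autonomous-system names and peerings are invented.

# Toy path-vector exchange between speaker nodes: each speaker advertises
# full AS paths, and a receiver rejects any path already containing itself,
# which is how path-vector routing avoids loops. Names are invented.
speakers = ["AS1", "AS2", "AS3"]
peers = {"AS1": ["AS2"], "AS2": ["AS1", "AS3"], "AS3": ["AS2"]}

# table[speaker][destination] = list of ASes on the currently chosen path
table = {s: {s: [s]} for s in speakers}

changed = True
while changed:
    changed = False
    for s in speakers:
        for peer in peers[s]:
            for dest, path in list(table[peer].items()):
                if s in path:
                    continue               # loop detected: discard the path
                candidate = [s] + path
                best = table[s].get(dest)
                if best is None or len(candidate) < len(best):
                    table[s][dest] = candidate
                    changed = True

print(table["AS1"]["AS3"])  # ['AS1', 'AS2', 'AS3']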
Comparison of routing algorithms
Distance-vector routing protocols are simple and efficient
in small networks, and require little, if any, management.
However, distance-vector algorithms do not scale well
(due to the count-to-infinity problem), have
poor convergence properties and are based on a 'hop
count' metric rather than a 'link-state' metric; thus they
ignore bandwidth (a major drawback) when calculating
the best path.
This has led to the development of more complex but
more scalable algorithms for use in large networks.
Interior routing mostly uses link-state routing
protocols such as OSPF and IS-IS.
A more recent development is that of loop-free distance-
vector protocols (e.g. EIGRP). Loop-free distance-vector
protocols are as robust and manageable as distance-vector
protocols, while avoiding counting to infinity and hence
having good worst-case convergence times.
Path selection
Path selection involves applying a routing metric to multiple
routes, in order to select (or predict) the best route.
In the case of computer networking, the metric is computed by a
routing algorithm, and can cover such information
as bandwidth, network delay, hop count, path cost, load, MTU,
reliability, and communication cost. The routing table stores only the
best possible routes, while link-state or topological databases
may store all other information as well.
Because a routing metric is specific to a given routing protocol,
multi-protocol routers must use some external heuristic in order
to select between routes learned from different routing
protocols. Cisco's routers, for example, attribute a value known
as the administrative distance to each route, where smaller
administrative distances indicate routes learned from a
supposedly more reliable protocol.
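
A hedged sketch of that selection rule is shown below; the administrative-distance values mirror commonly cited Cisco defaults but are included here only as an illustration.

# Illustrative selection between routes for the same prefix learned from
# different protocols: the lowest administrative distance wins. The values
# mirror commonly cited Cisco defaults but are only an example here.
ADMIN_DISTANCE = {"connected": 0, "static": 1, "eigrp": 90, "ospf": 110, "rip": 120}

candidate_routes = [          # (prefix, protocol the route was learned from)
    ("10.1.0.0/16", "rip"),
    ("10.1.0.0/16", "ospf"),
    ("10.1.0.0/16", "static"),
]

best = min(candidate_routes, key=lambda route: ADMIN_DISTANCE[route[1]])
print(best)                   # ('10.1.0.0/16', 'static')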
A local network administrator, in special cases, can set up host-
specific routes to a particular machine which provides more
control over network usage, permits testing and better overall
security. This can come in handy when required to debug
network connections or routing tables.
Multiple agents
In some networks, routing is complicated by the fact that no
single entity is responsible for selecting paths: instead, multiple
entities are involved in selecting paths or even parts of a single
path. Complications or inefficiency can result if these entities
choose paths to selfishly optimize their own objectives, which
may conflict with the objectives of other participants.
A classic example involves traffic in a road system, in which
each driver selfishly picks a path which minimizes their own
travel time. With such selfish routing, the equilibrium routes can
be longer than optimal for all drivers. In particular, Braess's
paradox shows that adding a new road can lengthen travel times
for all drivers.
The Internet is partitioned into autonomous systems (ASs) such
as internet service providers (ISPs), each of which has control
over routes involving its network, at multiple levels. First, AS-
level paths are selected via the BGP protocol, which produces a
sequence of ASs through which packets will flow. Each AS may
have multiple paths, offered by neighboring ASs, from which to
choose. Its decision often involves business relationships with
these neighboring ASs, which may be unrelated to path quality or
latency. Second, once an AS-level path has been selected, there
are often multiple corresponding router-level paths, in part
because two ISPs may be connected in multiple locations. In
choosing the single router-level path, it is common practice for
each ISP to employ hot-potato routing: sending traffic along the
path that minimizes the distance through the ISP's own
network—even if that path lengthens the total distance to the
destination.
Consider two ISPs, A and B, which each have a presence in New
York, connected by a fast link with latency 5 ms; and which
each have a presence in London connected by a 5 ms link.
Suppose both ISPs have trans-Atlantic links connecting their
two networks, but A's link has latency 100 ms and B's has
latency 120 ms. When routing a message from a source in A's
London network to a destination in B's New York
network, A may choose to immediately send the message to B in
London. This saves A the work of sending it along an expensive
trans-Atlantic link, but causes the message to experience latency
125 ms when the other route would have been 20 ms faster.
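
The arithmetic behind those figures can be checked directly (a quick illustrative calculation using the numbers above):

# Quick check of the latencies in the example above (all values in ms).
city_link = 5                      # A-B link inside London and inside New York
a_transatlantic, b_transatlantic = 100, 120

hot_potato = city_link + b_transatlantic       # hand off to B in London
cold_potato = a_transatlantic + city_link      # carry the traffic on A's link
print(hot_potato, cold_potato, hot_potato - cold_potato)   # 125 105 20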
A 2003 measurement study of Internet routes found that,
between pairs of neighboring ISPs, more than 30% of paths have
inflated latency due to hot potato routing, with 5% of paths
being delayed by at least 12 ms. Inflation due to AS-level path
selection, while substantial, was attributed primarily to BGP's
lack of a mechanism to directly optimize for latency, rather than
to selfish routing policies. It was also suggested that, were an
appropriate mechanism in place, ISPs would be willing to
cooperate to reduce latency rather than use hot-potato routing.
Route Analytics
As the Internet and IP networks become mission critical
business tools, there has been increased interest in techniques
and methods to monitor the routing posture of networks.
Incorrect routing or routing issues cause undesirable
performance degradation, flapping and/or downtime. Monitoring
routing in a network is achieved using route analytics tools and
techniques.
Routing algorithms and techniques
 Adaptive routing
 Alternative-path routing
 Deflection routing
 Edge Disjoint Shortest Pair Algorithm
 Dijkstra's algorithm
 Fuzzy routing
 Geographic routing
 Heuristic routing
 Hierarchical routing
 IP Forwarding Algorithm
 Multipath routing
 Overlay network routing schemes
 Key-based routing (KBR)
 Decentralized object location and routing (DOLR)
 Group anycast and multicast (CAST)
 Distributed hash table (DHT)
 Path computation element (PCE)
 Policy-based routing
 Quality of Service in routing
 Static routing
 Backward learning routing

Routing in specific networks
 Route assignment in transportation networks
 National Routeing Guide: passenger routing in the UK rail
network
 Routing in the PSTN
 Small world routing - the Internet is approximately a small-
world network
Routing protocols
Classless inter-domain routing (CIDR): Classless Inter-Domain
Routing (CIDR) is a methodology of allocating IP
addresses and routing Internet Protocol packets. It was
introduced in 1993 to replace the prior addressing architecture
of classful network design in the Internet with the goal to slow
the growth of routing tables on routers across the Internet, and to
help slow the rapid exhaustion of IPv4 addresses.
IP addresses are described as consisting of two groups of bits in
the address: the most significant part is the network
address which identifies a whole network or subnet and the least
significant portion is the host identifier, which specifies a
particular host interface on that network. This division is used as
the basis of traffic routing between IP networks and for address
allocation policies. Classful network design for IPv4 sized the
network address as one or more 8-bit groups, resulting in the
blocks of Class A, B, or C addresses. Classless Inter-Domain
Routing allocates address space to Internet service providers and
end users on any address bit boundary, instead of on 8-bit
segments. In IPv6, however, the host identifier has a fixed size
of 64 bits by convention, and smaller subnets are never allocated
to end users.
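
Python's standard ipaddress module makes this network/host split easy to see; the prefix below is an arbitrary example.

# Example of the CIDR network/host split using Python's ipaddress module.
# The prefix chosen here is arbitrary.
import ipaddress

net = ipaddress.ip_network("192.0.2.0/26")   # 26 network bits, 6 host bits
print(net.netmask)           # 255.255.255.192
print(net.num_addresses)     # 64 addresses in the block

iface = ipaddress.ip_interface("192.0.2.37/26")
print(iface.network)         # 192.0.2.0/26 -> the network part
print(iface.ip)              # 192.0.2.37   -> the host on that network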
 MPLS routing: Multiprotocol Label Switching (MPLS) is
a mechanism in high-performance telecommunications
networks which directs and carries data from one network
node to the next. MPLS makes it easy to create "virtual links"
between distant nodes. It can encapsulate packets of
various network protocols.
MPLS is a highly scalable, protocol agnostic, data-carrying
mechanism. In an MPLS network, data packets are assigned
labels. Packet-forwarding decisions are made solely on the
contents of this label, without the need to examine the packet
itself. This allows one to create end-to-end circuits across any
type of transport medium, using any protocol. The primary
benefit is to eliminate dependence on a particular Data Link
Layer technology, such as ATM, frame
relay, SONET or Ethernet, and eliminate the need for multiple
Layer 2 networks to satisfy different types of traffic. MPLS
belongs to the family of packet-switched networks.
 ATM routing: Asynchronous Transfer Mode is a cell-based
switching technique that uses asynchronous time
division multiplexing. It encodes data into small fixed-sized
cells (cell relay) and provides data link layer services that run
over OSI Layer 1 physical links. This differs from other
technologies based on packet-switched networks (such as
the Internet Protocol or Ethernet), in which variable
sized packets (known as frames when referencing Layer 2)
are used. ATM exposes properties from both circuit switched
and small packet switched networking, making it suitable for
wide area data networking as well as real-time media
transport. ATM uses a connection-oriented model and
establishes a virtual circuit between two endpoints before the
actual data exchange begins.
 RPSL: The Routing Policy Specification Language
(RPSL) is a language commonly used by ISPs to describe
their routing policies.
The routing policies are stored at various whois databases
including RIPE, RADB and APNIC. ISPs (using automated
tools) then generate router configuration files that match their
business and technical policies.
RFC 2622 describes RPSL, and replaced RIPE-181.
RFC 2650 provides a reference tutorial to using RPSL in the
real-world.
RPSL has been extended with RPSL-NG (RPSL-Next
Generation) effort to support IPv6 routing policies
and multicast routing policies. RPSL-NG is defined in RFC
4012.
RSMLT: Routed-SMLT (R-SMLT) is a computer
networking protocol designed by Nortel (now acquired
by Avaya) as an enhancement to SMLT enabling the exchange
of Layer 3 information between peer nodes in a Switch Cluster
for unparalleled resiliency and simplicity for both L3 and L2.
In many cases, core network convergence times after a failure are
dependent on the length of time a routing protocol requires to
successfully converge (change or re-route traffic around the
fault). Depending on the specific routing protocol, this
convergence time can cause network interruptions ranging from
seconds to minutes. The Nortel R-SMLT feature works
with SMLT and DSMLT technologies to provide sub-second
failover (normally less than 100 milliseconds[1]) so no outage is
noticed by end users. This high-speed recovery is required by
many critical networks where outages can cause loss of life or
very large monetary losses.
RSMLT routing topologies provide an active-active router
concept to core SMLT networks. The protocol supports
networks designed with SMLT or DSMLT triangles, squares,
and SMLT or DSMLT full-mesh topologies, with routing
enabled on the core VLANs. R-SMLT takes care of packet
forwarding during core router failures and works with any of the
following protocol types: IP Unicast Static Routes, RIP1, RIP2,
OSPF, BGP and IPX RIP.
Alternative methods for network data flow
 Peer-to-peer
 Network coding
Switches
A network switch or switching hub is a computer networking
device that connects network segments.
The term commonly refers to a network bridge that processes
and routes data at the data link layer (layer 2) of the OSI model.
Switches that additionally process data at the network
layer (layer 3 and above) are often referred to as Layer 3
switches or multilayer switches.
The term network switch does not generally encompass
unintelligent or passive network devices such
as hubs and repeaters.
The first Ethernet switch was introduced by Kalpana in 1990.
Function
The network switch, packet switch (or just switch) plays an
integral part in most Ethernet local area networks or LANs. Mid-
to-large sized LANs contain a number of linked managed
switches. Small office/home office (SOHO) applications
typically use a single switch, or an all-purpose converged
device such as a gateway access to small office/home broadband
services such as DSL router or cable Wi-Fi router. In most of
these cases, the end-user device contains a router and
components that interface to the particular physical broadband
technology, as in Linksys 8-port and 48-port devices. User
devices may also include a telephone interface for VoIP.
A standard 10/100 Ethernet switch operates at the data-link
layer of the OSI model to create a different collision domain for
each switch port. If you have 4 computers (e.g., A, B, C, and D)
on 4 switch ports, then A and B can transfer data back and forth,
while C and D also do so simultaneously, and the two
"conversations" will not interfere with one another. In the case
of a "hub," they would all share the bandwidth and run in Half
duplex, resulting in collisions, which would then necessitate
retransmissions. Using a switch is called micro-segmentation.
This allows you to have dedicated bandwidth on point-to-point
connections with every computer and to therefore run in Full
duplex with no collisions.
Role of switches in networks

Switches may operate at one or more OSI layers,
including physical, data link, network, or transport (i.e., end-to-
end). A device that operates simultaneously at more than one of
these layers is known as a multilayer switch.
In switches intended for commercial use, built-in or modular
interfaces make it possible to connect different types of
networks, including Ethernet, Fibre Channel, ATM, ITU-
T G.hn and 802.11. This connectivity can be at any of the layers
mentioned. While Layer 2 functionality is adequate for speed-
shifting within one technology, interconnecting technologies
such as Ethernet and token ring are easier at Layer 3.
Interconnection of different Layer 3 networks is done by routers.
If there are any features that characterize "Layer-3 switches" as
opposed to general-purpose routers, it tends to be that they are
optimized, in larger switches, for high-density Ethernet
connectivity.
In some service provider and other environments where there is
a need for a great deal of analysis of network performance and
security, switches may be connected between WAN routers as
places for analytic modules. Some vendors provide firewall,
network intrusion detection, and performance analysis modules
that can plug into switch ports. Some of these functions may be
on combined modules.
In other cases, the switch is used to create a mirror image of data
that can go to an external device. Since most switch port
mirroring provides only one mirrored stream, network hubs can
be useful for fanning out data to several read-only analyzers,
such as intrusion detection systems and packet sniffers.
Configuration options
 Unmanaged switches — These switches have no
configuration interface or options. They are plug and play.
They are typically the least expensive switches, found in
home, SOHO, or small businesses. They can be desktop or
rack mounted.
 Managed switches — These switches have one or more
methods to modify the operation of the switch. Common
management methods include: a serial console or command
line interface (CLI for short) accessed via telnet or Secure
Shell, an embedded Simple Network Management
Protocol (SNMP) agent allowing management from a remote
console or management station, or a web interface for
management from a web browser. Examples of configuration
changes that one can do from a managed switch include:
enable features such as Spanning Tree Protocol, set port
speed, create or modify Virtual LANs (VLANs), etc. Two
sub-classes of managed switches are marketed today:
 Smart (or intelligent) switches — These are managed
switches with a limited set of management features.
Likewise "web-managed" switches are switches which fall
in a market niche between unmanaged and managed. For a
price much lower than a fully managed switch they provide
a web interface (and usually no CLI access) and allow
configuration of basic settings, such as VLANs, port-speed
and duplex.[10]
 Enterprise Managed (or fully managed) switches — These
have a full set of management features, including
Command Line Interface, SNMP agent, and web interface.
They may have additional features to manipulate
configurations, such as the ability to display, modify,
backup and restore configurations. Compared with smart
switches, enterprise switches have more features that can
be customized or optimized, and are generally more
expensive than "smart" switches. Enterprise switches are
typically found in networks with larger number of switches
and connections, where centralized management is a
significant savings in administrative time and effort.
A stackable switch is a version of enterprise-managed
switch.
Layer 2 Switching
A network bridge, operating at the Media Access
Control (MAC) sublayer of the data link layer, may interconnect
a small number of devices in a home or office. This is a trivial
case of bridging, in which the bridge learns the MAC address of
each connected device. Single bridges also can provide
extremely high performance in specialized applications such
as storage area networks.
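
A toy model of that learning behaviour, in which the bridge records the port each source MAC address was last seen on and floods frames with unknown destinations, might look like this (port numbers and addresses are invented):

# Toy MAC-learning bridge: remember the port each source address arrived on,
# forward to the learned port, and flood when the destination is unknown.
class LearningBridge:
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}                    # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn where the source lives
        out = self.mac_table.get(dst_mac)
        if out is None:                        # unknown destination: flood
            return [p for p in self.ports if p != in_port]
        return [] if out == in_port else [out]

bridge = LearningBridge(num_ports=4)
print(bridge.receive(0, "aa:aa", "bb:bb"))  # [1, 2, 3] -> flooded
print(bridge.receive(1, "bb:bb", "aa:aa"))  # [0]       -> learned earlier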
Classic bridges may also interconnect using a spanning tree
protocol that disables links so that the resulting local area
network is a tree without loops. In contrast to routers, spanning
tree bridges must have topologies with only one active path
between two points. The older IEEE 802.1D spanning tree
protocol could be quite slow, with forwarding stopping for 30
seconds while the spanning tree would reconverge. A Rapid
Spanning Tree Protocol was introduced as IEEE 802.1w, but the
newest edition of IEEE 802.1D-2004, adopts the 802.1w
extensions as the base standard. The IETF is specifying
the TRILL protocol, which is the application of link-state
routing technology to the layer-2 bridging problem. Devices
which implement TRILL, called RBridges, combine the best
features of both routers and bridges.
While "layer 2 switch" remains more of a marketing term than a
technical term,[citation needed] the products that were introduced as
"switches" tended to use microsegmentation andFull duplex to
prevent collisions among devices connected to Ethernets. By
using an internal forwarding plane much faster than any
interface, they give the impression of simultaneous paths among
multiple devices.
Once a bridge learns the topology through a spanning tree
protocol, it forwards data link layer frames using a layer 2
forwarding method. There are four forwarding methods a bridge
can use, of which the second through fourth methods were
performance-increasing methods when used on "switch"
products with the same input and output port speeds:
1. Store and forward: The switch buffers and, typically,
performs a checksum on each frame before forwarding it.
2. Cut through: The switch reads only up to the frame's
hardware address before starting to forward it. There is no
error checking with this method.
3. Fragment free: A method that attempts to retain the benefits
of both "store and forward" and "cut through". Fragment
free checks the first 64 bytes of the frame,
where addressing information is stored. According to
Ethernet specifications, collisions should be detected
during the first 64 bytes of the frame, so frames that are in
error because of a collision will not be forwarded. This
way the frame will always reach its intended destination.
Error checking of the actual data in the packet is left for the
end device in Layer 3 or Layer 4 (OSI), typically a router.
4. Adaptive switching: A method of automatically switching
between the other three modes.
Cut-through switches have to fall back to store and forward if
the outgoing port is busy at the time the packet arrives. While
there are specialized applications, such as storage area networks,
where the input and output interfaces are the same speed, this is
rarely the case in general LAN applications. In LANs, a switch
used for end user access typically concentrates lower speed (e.g.,
10/100 Mbit/s) into a higher speed (at least 1 Gbit/s).
Alternatively, a switch that provides access to server ports
usually connects to them at a much higher speed than is used by
end user devices. Cypress Semiconductor, a design and
manufacturing company, along with TPACK offers the
flexibility to cope with various system architectures for Ethernet
switches through a reference design. The reference design
involves the TPX4004 and CY7C15632KV18 72-Mbit SRAMs.
Spanning tree Protocol

The Spanning tree protocol (STP) is a link layer network
protocol that ensures a loop-free topology for any bridged LAN.
Thus, the basic function of STP is to prevent bridge loops and
ensuing broadcast radiation.
In the OSI model for computer networking, STP falls under
the OSI layer-2. It is standardized as 802.1D. As the name
suggests, it creates a spanning tree within a mesh network of
connected layer-2 bridges (typically Ethernet switches), and
disables those links that are not part of the spanning tree, leaving
a single active path between any two network nodes.
Spanning tree allows a network design to include spare
(redundant) links to provide automatic backup paths if an active
link fails, without the danger of bridge loops, or the need for
manual enabling/disabling of these backup links. Bridge loops
must be avoided because they result in flooding the
network.
STP is based on an algorithm invented by Radia Perlman while
working for Digital Equipment Corporation.

Protocol operation
The collection of bridges in a LAN can be considered
a graph whose nodes are the bridges and the LAN segments (or
cables), and whose edges are the interfaces connecting the
bridges to the segments.
To break loops in the LAN while maintaining access to all LAN
segments, the bridges collectively compute a spanning tree. The
spanning tree is not necessarily a minimum cost spanning tree.
A network administrator can reduce the cost of a spanning tree,
if necessary, by altering some of the configuration parameters in
such a way as to affect the choice of the root of the spanning
tree.
The spanning tree that the bridges compute using the Spanning
Tree Protocol can be determined using the following rules. The
example network at the right, below, will be used to illustrate
the rules.

1. An example network. The numbered boxes represent
bridges (the number represents the bridge ID). The
lettered clouds represent network segments.
2. The smallest bridge ID is 3. Therefore, bridge 3 is the
root bridge.
3. Assuming that the cost of traversing any network
segment is 1, the least cost path from bridge 4 to the root
bridge goes through network segment c. Therefore, the
root port for bridge 4 is the one on network segment c.
4. The least cost path to the root from network segment e
goes through bridge 92. Therefore the designated port for
network segment e is the port that connects bridge 92 to
network segment e.
5. This diagram illustrates all port states as computed by
the spanning tree algorithm. Any active port that is not a
root port or a designated port is a blocked port.
6. After link failure the spanning tree algorithm computes
and spans a new least-cost tree.
Select a root bridge. The root bridge of the spanning tree is the
bridge with the smallest (lowest) bridge ID. Each bridge has a
unique identifier (ID) and a configurable priority number; the
bridge ID contains both numbers. To compare two bridge IDs,
the priority is compared first. If two bridges have equal priority,
then the MAC addresses are compared. For example, if switches
A (MAC=0200.0000.1111) and B (MAC=0200.0000.2222) both
have a priority of 10, then switch A will be selected as the root
bridge. If the network administrators would like switch B to
become the root bridge, they must set its priority to be less than
10.
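
A compact sketch of this election step, comparing bridge IDs as (priority, MAC address) pairs exactly as described above, is given below; the bridge list simply restates the example from the text.

# Root-bridge election sketch: bridge IDs are compared as the pair
# (priority, MAC address), and the lowest pair wins. The two bridges
# restate the example from the text.
bridges = [
    {"name": "A", "priority": 10, "mac": "0200.0000.1111"},
    {"name": "B", "priority": 10, "mac": "0200.0000.2222"},
]

root = min(bridges, key=lambda b: (b["priority"], b["mac"]))
print(root["name"])   # 'A' -- equal priorities, so the lower MAC address wins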
Determine the least cost paths to the root bridge. The
computed spanning tree has the property that messages from any
connected device to the root bridge traverse a least cost path,
i.e., a path from the device to the root that has minimum cost
among all paths from the device to the root. The cost of
traversing a path is the sum of the costs of the segments on the
path. Different technologies have different default costs for
network segments. An administrator can configure the cost of
traversing a particular network segment.
The property that messages always traverse least-cost paths to
the root is guaranteed by the following two rules.
Least cost path from each bridge. After the root bridge has been
chosen, each bridge determines the cost of each possible path
from itself to the root. From these, it picks the one with the
smallest cost (the least-cost path). The port connecting to that
path becomes the root port (RP) of the bridge.
Least cost path from each network segment. The bridges on a
network segment collectively determine which bridge has the
least-cost path from the network segment to the root. The port
connecting this bridge to the network segment is then
the designated port (DP) for the segment.
Disable all other root paths. Any active port that is not a root
port or a designated port is a blocked port (BP).
Modifications in case of ties. The above rules over-simplify the
situation slightly, because it is possible that there are ties, for
example, two or more ports on a single bridge are attached to
least-cost paths to the root or two or more bridges on the same
network segment have equal least-cost paths to the root. To
break such ties:
Breaking ties for root ports. When multiple paths from a bridge
are least-cost paths, the chosen path uses the neighbor bridge
with the lower bridge ID. The root port is thus the one
connecting to the bridge with the lowest bridge ID. For example,
in figure 3, if switch 4 were connected to network segment d,
there would be two paths of length 2 to the root, one path going
through bridge 24 and the other through bridge 92. Because
there are two least cost paths, the lower bridge ID (24) would be
used as the tie-breaker in choosing which path to use.
Breaking ties for designated ports. When more than one bridge
on a segment leads to a least-cost path to the root, the bridge
with the lower bridge ID is used to forward messages to the root.
The port attaching that bridge to the network segment is
the designated port for the segment. In figure 4, there are two
least cost paths from network segment d to the root, one going
through bridge 24 and the other through bridge 92. The lower
bridge ID is 24, so the tie breaker dictates that the designated
port is the port through which network segment d is connected
to bridge 24. If bridge IDs were equal, then the bridge with the
lowest MAC address would have the designated port. In either
case, the loser sets the port as being blocked.
The final tie-breaker. In some cases, there may still be a tie, as
when two bridges are connected by multiple cables. In this case,
multiple ports on a single bridge are candidates for root port. In
this case, the path which passes through the port on the neighbor
bridge that has the lowest port priority is used.
Bridge Protocol Data Units (BPDUs)
The above rules describe one way of determining what spanning
tree will be computed by the algorithm, but the rules as written
require knowledge of the entire network. The bridges have to
determine the root bridge and compute the port roles (root,
designated, or blocked) with only the information that they have.
To ensure that each bridge has enough information, the bridges
use special data frames called Bridge Protocol Data
Units (BPDUs) to exchange information about bridge IDs and
root path costs.
A bridge sends a BPDU frame using the unique MAC address of
the port itself as a source address, and a destination address of
the STP multicast address 01:80:C2:00:00:00.
There are three types of BPDUs:
 Configuration BPDU (CBPDU), used for Spanning Tree
computation
 Topology Change Notification (TCN) BPDU, used to
announce changes in the network topology
 Topology Change Notification Acknowledgment (TCA)

BPDUs are exchanged regularly (every 2 seconds by default)
and enable switches to keep track of network changes and to
start and stop forwarding at ports as required.
When a device is first attached to a switch port, it will not
immediately start to forward data. It will instead go through a
number of states while it processes BPDUs and determines the
topology of the network. When a host such as a computer,
printer or server is attached, the port will always go into the
forwarding state, albeit after a delay of about 30 seconds while it
goes through the listening and learning states (see below). The
time spent in the listening and learning states is determined by a
value known as the forward delay (default 15 seconds and set by
the root bridge). However, if instead another switch is
connected, the port may remain in blocking mode if it is
determined that it would cause a loop in the network. Topology
Change Notification (TCN) BPDUs are used to inform other
switches of port changes. TCNs are injected into the network by
a non-root switch and propagated to the root. Upon receipt of the
TCN, the root switch will set a Topology Change flag in its
normal BPDUs. This flag is propagated to all other switches to
instruct them to rapidly age out their forwarding table entries.
STP switch port states:
 Blocking - A port that would cause a switching loop; no user
data is sent or received, but it may go into forwarding mode if
the other links in use were to fail and the spanning tree
algorithm determines the port may transition to the
forwarding state. BPDU data is still received in blocking
state.
 Listening - The switch processes BPDUs and awaits possible
new information that would cause it to return to the blocking
state.
 Learning - While the port does not yet forward frames
(packets), it does learn source addresses from frames received
and adds them to the filtering database (switching database).
 Forwarding - A port receiving and sending data; normal
operation. STP still monitors incoming BPDUs that would
indicate it should return to the blocking state to prevent a
loop.
 Disabled - Not strictly part of STP; a network administrator
can manually disable a port.
To prevent the delay when connecting hosts to a switch and
during some topology changes, Rapid STP was developed and
standardized by IEEE 802.1w, which allows a switch port to
rapidly transition into the forwarding state during these
situations.
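For reference, a minimal Cisco IOS sketch of these ideas is shown below; the switch name, interface number and mode are only illustrative, and exact commands vary by platform and software release. The first command selects the rapid (802.1w-based) spanning tree variant, and PortFast lets a host-facing access port skip the listening and learning delay:

Switch(config)# spanning-tree mode rapid-pvst
Switch(config)# interface FastEthernet0/1
Switch(config-if)# spanning-tree portfast
! PortFast should only be enabled on ports that connect to end hosts,
! never on ports that connect to other switches, or a loop may form.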
VLANS
A virtual LAN, commonly known as a VLAN, is a group of
hosts with a common set of requirements that communicate as if
they were attached to the same broadcast domain, regardless of
their physical location. A VLAN has the same attributes as a
physical LAN, but it allows for end stations to be grouped
together even if they are not located on the same network switch.
Network reconfiguration can be done through software instead
of physically relocating devices.
To physically replicate the functions of a VLAN, it would be
necessary to install a separate, parallel collection of network
cables and switches/hubs which are kept separate from the
primary network. However, unlike a physically separate network,
VLANs must share bandwidth; two separate one-gigabit VLANs
using a single one-gigabit interconnection can both suffer
reduced throughput and congestion.
Uses
VLANs are created to provide the segmentation services
traditionally provided by routers in LAN configurations. VLANs
address issues such as scalability, security, and network
management. Routers in VLAN topologies provide broadcast
filtering, security, address summarization, and traffic flow
management. By definition, switches may not bridge IP traffic
between VLANs as it would violate the integrity of the VLAN
broadcast domain.
This is also useful if someone wants to create multiple Layer
3 networks on the same Layer 2 switch. For example, if
a DHCP server (which will broadcast its presence) is plugged
into a switch it will serve any host on that switch that is
configured to get its IP from a DHCP server. By using VLANs
you can easily split the network up so some hosts won't use that
DHCP server and will obtain link-local addresses, or obtain an
address from a different DHCP server.
Virtual LANs are essentially Layer 2 constructs, compared with
IP subnets which are Layer 3 constructs. In an environment
employing VLANs, a one-to-one relationship often exists
between VLANs and IP subnets, although it is possible to have
multiple subnets on one VLAN or have one subnet spread across
multiple VLANs. Virtual LANs and IP subnets provide
independent Layer 2 and Layer 3 constructs that map to one
another and this correspondence is useful during the network
design process.
By using VLANs, one can control traffic patterns and react
quickly to relocations. VLANs provide the flexibility to adapt to
changes in network requirements and allow for simplified
administration.
Motivation
In a legacy network, users were assigned to networks based on
geography and were limited by physical topologies and
distances. VLANs can logically group networks so that the
network location of users is no longer so tightly coupled to their
physical location. Technologies able to implement VLANs are:
 Asynchronous Transfer Mode (ATM)
 Fiber Distributed Data Interface (FDDI)
 Ethernet
 Fast Ethernet
 Gigabit Ethernet
 10 Gigabit Ethernet
 HiperSockets
Protocols and design
The protocol most commonly used today in configuring virtual
LANs is IEEE 802.1Q. The IEEE committee defined this
method of multiplexing VLANs in an effort to provide
multivendor VLAN support. Prior to the introduction of the
802.1Q standard, several proprietary protocols existed, such
as Cisco's ISL (Inter-Switch Link) and 3Com's VLT (Virtual
LAN Trunk). Cisco also implemented VLANs over FDDI by
carrying VLAN information in an IEEE 802.10 frame header,
contrary to the purpose of the IEEE 802.10 standard.
Both ISL and IEEE 802.1Q tagging perform "explicit tagging" -
the frame itself is tagged with VLAN information. ISL uses an
external tagging process that does not modify the existing
Ethernet frame, while 802.1Q uses a frame-internal field for
tagging, and so does modify the Ethernet frame. This internal
tagging is what allows IEEE 802.1Q to work on both access and
trunk links: frames are standard Ethernet, and so can be handled
by commodity hardware.
The IEEE 802.1Q header contains a 4-byte tag header
containing a 2-byte tag protocol identifier (TPID) and a 2-byte
tag control information (TCI). The TPID has a fixed value of
0x8100 that indicates that the frame carries the 802.1Q/802.1p
tag information. The TCI contains the following elements:
 Three-bit user priority
 One-bit canonical format indicator (CFI)
 Twelve-bit VLAN identifier (VID)-Uniquely identifies the
VLAN to which the frame belongs
The 802.1Q standard can create an interesting scenario on the
network. Recalling that the maximum size for an Ethernet frame
as specified by IEEE 802.3 is 1518 bytes, this means that if a
maximum-sized Ethernet frame gets tagged, the frame size will
be 1522 bytes, a number that violates the IEEE 802.3 standard.
To resolve this issue, the 802.3 committee created a subgroup
called 802.3ac to extend the maximum Ethernet size to 1522
bytes. Some network devices that do not support a larger frame
size will process the frame successfully but may report these
anomalies as a "baby giant."[1]
Inter-Switch Link (ISL) is a Cisco proprietary protocol used to
interconnect multiple switches and maintain VLAN information
as traffic travels between switches on trunk links. This
technology provides one method for multiplexing bridge groups
(VLANs) over a high-speed backbone. It is defined for Fast
Ethernet and Gigabit Ethernet, as is IEEE 802.1Q. ISL has been
available on Cisco routers since Cisco IOS Software Release
11.1.
With ISL, an Ethernet frame is encapsulated with a header that
transports VLAN IDs between switches and routers. ISL does
add overhead to the packet as a 26-byte header containing a 10-
bit VLAN ID. In addition, a 4-byte CRC is appended to the end
of each frame. This CRC is in addition to any frame checking
that the Ethernet frame requires. The fields in an ISL header
identify the frame as belonging to a particular VLAN.
A VLAN ID is added only if the frame is forwarded out a port
configured as a trunk link. If the frame is to be forwarded out a
port configured as an access link, the ISL encapsulation is
removed.
Early network designers often configured VLANs with the aim
of reducing the size of the collision domain in a large
single Ethernet segment and thus improving performance. When
Ethernet switches made this a non-issue (because each switch
port is a collision domain), attention turned to reducing the size
of the broadcast domain at the MAC layer. Virtual networks can
also serve to restrict access to network resources without regard
to physical topology of the network, although the strength of this
method remains debatable as VLAN Hopping [2] is a common
means of bypassing such security measures.
Virtual LANs operate at Layer 2 (the data link layer) of the OSI
model. Administrators often configure a VLAN to map directly
to an IP network, or subnet, which gives the appearance of
involving Layer 3 (the network layer). In the context of VLANs,
the term "trunk" denotes a network link carrying multiple
VLANs, which are identified by labels (or "tags") inserted into
their packets. Such trunks must run between "tagged ports" of
VLAN-aware devices, so they are often switch-to-switch or
switch-to-router links rather than links to hosts. (Note that the
term 'trunk' is also used for what Cisco calls "channels": Link
Aggregation or Port Trunking). A router (Layer 3 device) serves
as the backbone for network traffic going across different
VLANs.
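As an illustration, the following Cisco IOS sketch configures an 802.1Q trunk port between two switches; the interface name and VLAN numbers are assumptions for the example, and the encapsulation command is only needed on platforms that also support ISL:

Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
! optionally limit the trunk to the VLANs that actually need to cross it
Switch(config-if)# switchport trunk allowed vlan 10,20,30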
Establishing VLAN memberships
The two common approaches to assigning VLAN membership
are as follows:
 Static VLANs
 Dynamic VLANs
Static VLANs are also referred to as port-based VLANs. Static
VLAN assignments are created by assigning ports to a VLAN.
As a device enters the network, the device automatically
assumes the VLAN of the port. If the user changes ports and
needs access to the same VLAN, the network administrator must
manually make a port-to-VLAN assignment for the new
connection.
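A minimal Cisco IOS sketch of such a port-based (static) assignment is shown below; the VLAN number, VLAN name and interface are hypothetical:

Switch(config)# vlan 10
Switch(config-vlan)# name USERS
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/5
Switch(config-if)# switchport mode access
! any device plugged into Fa0/5 now belongs to VLAN 10
Switch(config-if)# switchport access vlan 10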
Dynamic VLANs are created through the use of software. With
a VLAN Management Policy Server (VMPS), an administrator
can assign switch ports to VLANs dynamically based on
information such as the source MAC address of the device
connected to the port or the username used to log onto that
device. As a device enters the network, the device queries a
database for VLAN membership. See also FreeNAC which
implements a VMPS server.
Protocol Based VLANs
In a protocol based VLAN enabled switch, traffic is forwarded
through ports based on protocol. Essentially, the user tries to
segregate or forward a particular protocol traffic from a port
using the protocol based VLANs; traffic from any other protocol
is not forwarded on the port. For example, suppose a host
sending ARP traffic is connected to port 10 of the switch, a LAN
carrying IPX traffic is connected to port 20, and a router carrying
IP traffic is connected to port 30. If you define a protocol-based
VLAN that supports IP and includes all three ports (10, 20 and
30), then IP packets can be forwarded to ports 10 and 20 as well,
but ARP traffic will not be forwarded to ports 20 and 30, and
IPX traffic will not be forwarded to ports 10 and 30.
VTP
VLAN Trunking Protocol (VTP) is a Cisco proprietary Layer
2 messaging protocol that manages the addition, deletion, and
renaming of Virtual Local Area Networks (VLAN) on a
network-wide basis. Cisco's VLAN Trunk Protocol reduces
administration in a switched network. When a new VLAN is
configured on one VTP server, the VLAN is distributed through
all switches in the domain. This reduces the need to configure
the same VLAN everywhere. To do this, VTP carries VLAN
information to all the switches in a VTP domain. VTP
advertisements can be sent over ISL, 802.1q, IEEE
802.10 and LANE trunks. VTP is available on most of the
Cisco Catalyst Family products.[1]
The comparable IEEE standard in use by other manufacturers
is GVRP or the more recent MVRP.
VTP Modes
VTP operates in one of three modes:
 Server – In this VTP mode you can create, remove, and
modify VLANs. You can also set other configuration
options like the VTP version and also turn on/off VTP
pruning for the entire VTP domain. VTP servers
advertise their VLAN configuration to other switches in
the same VTP domain and synchronize their VLAN
configuration with other switches based on messages
received over trunk links. VTP server is the default
mode. The VLAN information is stored in NVRAM and
is not lost after a reboot.
 Client – VTP clients behave the same way as VTP
servers, but you cannot create, change, or delete
VLANs on the local device. Remember that even in VTP
client mode, a switch will store the last known VTP
information—including the configuration revision
number. Don’t assume that a VTP client will start with a
clean slate when it powers up.
 Transparent – When you set the VTP mode to
transparent, the switch does not participate in VTP.
A VTP transparent switch will not advertise its VLAN
configuration and does not synchronize its VLAN
configuration based on received messages. VLANs can
be created, changed or deleted when in transparent
mode. In VTP version 2, transparent switches do
forward VTP messages that they receive out their trunk
ports.
VTP sends messages between trunked switches to keep
the VLAN configuration on those switches consistent.
VTP messages are exchanged only between switches
within a common VTP domain. If the domain name is
different, the switch simply ignores the packet. If the name
is the same, the switch then compares the configuration
revision number. If the revision number of an update
received on a client or server VTP switch is higher than
the previous revision, the new configuration is applied.
Otherwise, the configuration is ignored.
When new devices are added to a VTP domain, revision
numbers should be reset on the entire domain to prevent
conflicts. Utmost caution is advised when dealing with
VTP topology changes, logical or physical. Exchanges of
VTP information can be controlled by passwords. You
need to put the same password on every switch for it to
work.
VTP Versions
VTP version 2 supports the following features that are not supported in version 1:
 Token Ring support – Token Ring Bridge Relay Function (TrBRF) and Token Ring Concentrator Relay Function (TrCRF) VLANs are supported.
 Unrecognized Type-Length-Value (TLV) support – In version 2, a VTP server propagates TLVs even when it does not understand them, and saves them in NVRAM when the switch is in VTP server mode. This can be useful if not all devices are at the same version or release level.
 Version-dependent transparent mode – Because version 2 supports only one domain, a version 2 transparent switch forwards VTP messages without first checking the domain name and version; version 1 forwards a message only if these match.
 Consistency checks – VTP version 1 does more consistency checking on messages, which can add overhead. As long as the MD5 digest on a message is correct, version 2 forwards it; version 2 consistency-checks only new configuration information added through the configuration editor, Cluster Management Software or SNMP.
VTP version 3 is a protocol that is only responsible for
distributing a list of opaque databases over an administrative
domain. When enabled, VTP version 3 provides the following
enhancements to previous VTP versions:
 Support for extended VLANs.
 Support for the creation and advertising of private
VLANs.
 Improved server authentication.
 Protection from the "wrong" database accidentally being
inserted into a VTP domain.
 Interaction with VTP version 1 and VTP version 2.
 Provides the ability to be configured on a per-port basis.
 Provides the ability to propagate the VLAN database and
other databases.
VTP Version 1 and 2 Configuration Guidelines
This section describes the guidelines for implementing VTP in
your network:
 All switches in a VTP domain must run the same VTP
version.
 You must configure a password on each switch in the
management domain when you are in secure mode.
Caution: If you configure VTP in secure mode, the
management domain will not function properly if you do not
assign a management domain password to each switch in the
domain.
 A VTP version 2-capable switch can operate in the same
VTP domain as a switch running VTP version 1 if VTP
version 2 is disabled on the VTP version 2-capable
switch (VTP version 2 is disabled by default).
 Do not enable VTP version 2 on a switch unless all of
the switches in the same VTP domain are version 2
capable. When you enable VTP version 2 on a switch, all
of the version 2-capable switches in the domain enable
VTP version 2.
 In a Token Ring environment, you must enable VTP
version 2 for Token Ring VLAN switching to function
properly.
 Enabling or disabling VTP pruning on a VTP server
enables or disables VTP pruning for the entire
management domain.
 Making VLANs pruning-eligible or pruning-ineligible on
a switch affects pruning eligibility for those VLANs on
that device only (not on all switches in the VTP domain).
Configuration Commands
 Define the VTP mode (server, client or transparent): vtp mode mode
 Define the VTP domain name (case sensitive): vtp domain name
 Set which VTP version to run: vtp version #
 (Optional) Set a password for the VTP domain: vtp password password
 Verify the VTP configuration: show vtp status
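Putting the steps above together, a VTP server could be prepared roughly as follows; the domain name and password are placeholders, and the syntax assumes an IOS-based Catalyst switch:

SwitchA(config)# vtp mode server
SwitchA(config)# vtp domain LAB-DOMAIN
SwitchA(config)# vtp version 2
SwitchA(config)# vtp password MySecret
SwitchA(config)# end
SwitchA# show vtp status
! clients configured with "vtp mode client" in the same domain will then
! learn the VLANs advertised by this server over their trunk links.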
VLAN Pruning
VTP can prune unneeded VLANs from trunk links. VTP
maintains a map of VLANs and switches, enabling traffic to be
directed only to those switches known to have ports on the
intended VLAN. This enables more efficient use of trunk
bandwidth.
Each switch will advertise which VLANs it has active to
neighboring switches. The neighboring switches will then
"prune" VLANs that are not active across that trunk, thus saving
bandwidth. If a VLAN is then added to one of the switches, the
switch will then re-advertise its active VLANs so that pruning
can be updated by its neighbors. For this to work, VLAN
pruning must be enabled on both ends of the trunk. It is easiest
to enable VLAN pruning for an entire VTP management domain
by simply enabling it on one of the VTP servers for that domain.
To enable VLAN pruning for a VTP domain, enter the following
command on a VTP server for that domain:
VTP_Server_Sw1(config)# vtp pruning
Configure VLAN Pruning
Step 1. Enable VTP pruning in the management domain: set vtp pruning enable
Step 2. (Optional) Make specific VLANs pruning-ineligible on the device (by default, VLANs 2-1000 are pruning-eligible): clear vtp pruneeligible vlan_range
Step 3. (Optional) Make specific VLANs pruning-eligible on the device: set vtp pruneeligible vlan_range
Step 4. Verify the VTP pruning configuration: show vtp status
Step 5. Verify that the appropriate VLANs are being pruned on trunk ports: show interface trunk
VTP security
VTP may operate unauthenticated, in which case an attacker can
easily inject spoofed VTP packets in order to add/delete VLAN
information. Tools such as Yersinia are freely available to do
that. A password can be set for the VTP domain: it is used in
conjunction with the MD5 hash function to provide
authentication of VTP packets. However, this optional password
authentication should not conceal the fact that it is very risky to
use VTP in sensitive environments.
VTP Problems
When a VTP server with a higher configuration revision
number is inserted, the other switches will delete their
configuration information and take the VLAN information from the inserted
switch. The only way to get the deleted information back is to
add the missing VLANs and delete the unwanted VLANs. To
avoid this you should set the switch you're inserting into the
network to transparent mode because that resets the
configuration number, then switch it back to client or server
mode. Another way of resetting the configuration number is to
change the domain name to something else, like "test", then
change it back.
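For example, on an IOS-based switch the revision number can be reset before the switch is connected to the production network; the sequence below is a sketch, not an exhaustive procedure, and the switch name is hypothetical:

NewSwitch(config)# vtp mode transparent
! entering transparent mode resets the configuration revision number to 0
NewSwitch(config)# vtp mode client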
Network security
In the field of networking, the specialist area of network
security consists of the provisions made in an
underlying computer network infrastructure, policies adopted by
the network administrator to protect the network and the
network-accessible resources from unauthorized access, and
consistent and continuous monitoring and measurement of its
effectiveness (or lack thereof).
The first step to information security
The terms network security and information security are often
used interchangeably. Network security is generally taken as
providing protection at the boundaries of an organization by
keeping out intruders (hackers). Information security, however,
explicitly focuses on protecting data resources
from malware attack or simple mistakes by people within an
organization by use of data loss prevention (DLP) techniques.
One of these techniques is to compartmentalize large networks
with internal boundaries.
Network security concepts
Network security starts from authenticating the user, commonly
with a username and a password. Since this requires just one
thing besides the username, i.e. the password, which is
something you 'know', this is sometimes termed one-factor
authentication. With two-factor authentication, something you
'have' is also used (e.g. a security token or 'dongle', an ATM
card, or your mobile phone); with three-factor authentication,
something you 'are' is also used (e.g. a fingerprint or retinal
scan).
Once authenticated, a firewall enforces access policies such as
what services the network users are allowed to access.
Though effective in preventing unauthorized access, this
component may fail to check potentially harmful content such
as computer worms or Trojans being transmitted over the
network. Anti-virus software or an intrusion prevention
system (IPS) help detect and inhibit the action of such malware.
An anomaly-based intrusion detection system may also monitor
the network and traffic for unexpected (i.e. suspicious) content
or behaviour and other anomalies to protect resources, e.g.
from denial of service attacks or an employee accessing files at
strange times. Individual events occurring on the network may
be logged for audit purposes and for later high level analysis.
Communication between two hosts using a network could be
encrypted to maintain privacy.
Honeypots, essentially decoy network-accessible resources,
could be deployed in a network as surveillance and early-
warning tools. Techniques used by the attackers that attempt to
compromise these decoy resources are studied during and after
an attack to keep an eye on new exploitation techniques. Such
analysis could be used to further tighten security of the actual
network being protected by the honeypot.
Security management
Security Management for networks is different for all kinds of
situations. A small home or an office would only require basic
security while large businesses will require high maintenance
and advanced software and hardware to prevent malicious
attacks from hacking and spamming.
Small homes
 A basic firewall like COMODO Internet Security or a unified
threat management system.
 For Windows users, basic Antivirus software like AVG
Antivirus, ESET NOD32
Antivirus, Kaspersky, McAfee, Avast!, Zone Alarm Security
Suite or Norton AntiVirus. An anti-spyware program such
as Windows Defender or Spybot – Search & Destroy would
also be a good idea. There are many other types of antivirus
or anti-spyware programs out there to be considered.
 When using a wireless connection, use a robust password.
Also try to use the strongest security supported by your
wireless devices, such as WPA2 with AES encryption.
 If using Wireless: Change the default SSID network name,
also disable SSID Broadcast; as this function is unnecessary
for home use. (However, many security experts consider this
to be relatively
useless; see http://blogs.zdnet.com/Ou/index.php?p=43 .)
 Enable MAC Address filtering to keep track of all home
network MAC devices connecting to your router.
 Assign STATIC IP addresses to network devices.
 Disable ICMP ping on router.
 Review router or firewall logs to help identify abnormal
network connections or traffic to the Internet.
 Use passwords for all accounts.
 Have multiple accounts per family member, using non-
administrative accounts for day-to-day activities. Disable the
guest account (Control Panel> Administrative Tools>
Computer Management> Users).
 Raise awareness about information security to children.
Medium businesses
 A fairly strong firewall or Unified Threat
Management System
 Strong Antivirus software and Internet Security Software.
 For authentication, use strong passwords and change them
on a bi-weekly/monthly basis.
 When using a wireless connection, use a robust password.
 Raise awareness about physical security to employees.
 Use an optional network analyzer or network monitor.
 An enlightened administrator or manager.
Large businesses
 A strong firewall and proxy to keep unwanted people out.
 A strong Antivirus software package and Internet Security
Software package.
 For authentication, use strong passwords and change them
on a weekly/bi-weekly basis.
 When using a wireless connection, use a robust password.
 Exercise physical security precautions to employees.
 Prepare a network analyzer or network monitor and use it
when needed.
 Implement physical security management like closed circuit
television for entry areas and restricted zones.
 Security fencing to mark the company's perimeter.
 Fire extinguishers for fire-sensitive areas like server rooms
and security rooms.
 Security guards can help to maximize security.
School
 An adjustable firewall and proxy to allow authorized users
access from the outside and inside.
 Strong Antivirus software and Internet Security Software
packages.
 Wireless connections that lead to firewalls.
 Children's Internet Protection Act compliance.
 Supervision of network to guarantee updates and changes
based on popular site usage.
 Constant supervision by teachers, librarians, and
administrators to guarantee protection against attacks by
both internet and sneakernet sources.
Large government
 A strong firewall and proxy to keep unwanted people
out.
 Strong Antivirus software and Internet Security Software
suites.
 Strong encryption.
 Whitelist authorized wireless connections; block all else.
 All network hardware is in secure zones.
 All hosts should be on a private network that is invisible
from the outside.
 Put web servers in a DMZ, or behind a firewall from the
outside and another from the inside.
 Security fencing to mark perimeter and set wireless
range to this.
ACCESS CONTROL LIST
An access control list (ACL), with respect to a computer
file system, is a list of permissions attached to an object.
An ACL specifies which users or system processes are
granted access to objects, as well as what operations are
allowed on given objects. Each entry in a typical ACL
specifies a subject and an operation. For instance, if a file
has an ACL that contains (Alice, delete), this would
give Alice permission to delete the file.
ACL-based security models
When a subject requests an operation on an object in an
ACL-based security model the operating system first
checks the ACL for an applicable entry to decide whether
the requested operation is authorized. A key issue in the
definition of any ACL-based security model is determining
how access control lists are edited, namely which users
and processes are granted ACL-modification access. ACL
models may be applied to collections of objects as well as
to individual entities within the system hierarchy.
Filesystem ACLs
A Filesystem ACL is a data structure (usually a table)
containing entries that specify individual user or group
rights to specific system objects such as programs,
processes, or files. These entries are known as access
control entries (ACEs) in the Microsoft Windows
NT, OpenVMS, Unix-like, and Mac OS X operating
systems. Each accessible object contains an identifier to
its ACL. The privileges or permissions determine specific
access rights, such as whether a user
can read from, write to, or execute an object. In some
implementations an ACE can control whether or not a
user, or group of users, may alter the ACL on an object.
Most of the Unix and Unix-like operating systems
(e.g. Linux,[1] BSD, or Solaris) support so called POSIX.1e
ACLs, based on an early POSIX draft that was
abandoned. Many of them, for example AIX, Mac OS
X beginning with version 10.4 ("Tiger"),
or Solaris with ZFS filesystem[2], support NFSv4 ACLs,
which are part of the NFSv4 standard. FreeBSD 9-
CURRENT supports NFSv4 ACLs on
both UFS and ZFS file systems; full support is expected to
be backported to version 8.1[3]. There is an experimental
implementation of NFSv4 ACLs for Linux.[4]
Networking ACLs
On some types of proprietary computer hardware,
an Access Control List refers to rules that are applied
to port numbers or network daemon names that are
available on a host or other layer 3 device, each with a list of
hosts and/or networks permitted to use the service. Both
individual servers as well as routers can have network
ACLs. Access control lists can generally be configured to
control both inbound and outbound traffic, and in this
context they are similar to firewalls.
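As an illustration of a networking ACL in Cisco IOS (the addresses, ACL number and interface are made up for the example), the following extended access list permits web traffic to one server and drops everything else arriving on an interface:

Router(config)# access-list 101 permit tcp any host 192.168.10.5 eq 80
Router(config)# access-list 101 deny ip any any
! the final deny is implicit in any IOS ACL; writing it out makes the intent visible
Router(config)# interface FastEthernet0/0
Router(config-if)# ip access-group 101 in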
NAT AND PAT
In computer networking, network address
translation (NAT) is the process of modifying network
address information in datagram (IP) packet headers while
in transit across a traffic routing device for the purpose of
remapping one IP address space into another.
Most often today, NAT is used in conjunction with network
masquerading (or IP masquerading) which is a technique
that hides an entire IP address space, usually consisting
of private network IP addresses (RFC 1918), behind a
single IP address in another, often public address space.
This mechanism is implemented in a routing device that
uses stateful translation tables to map the "hidden"
addresses into a single IP address and then readdresses
the outgoing Internet Protocol (IP) packets on exit so that
they appear to originate from the router. In the reverse
communications path, responses are mapped back to the
originating IP address using the rules ("state") stored in
the translation tables. The translation table rules
established in this fashion are flushed after a short period
without new traffic refreshing their state.
As described, the method enables communication through
the router only when the conversation originates in the
masqueraded network, since this establishes the
translation tables. For example, a web browser in the
masqueraded network can browse a website outside, but
a web browser outside could not browse a web site in the
masqueraded network. However, most NAT devices today
allow the network administrator to configure translation
table entries for permanent use. This feature is often
referred to as "static NAT" or port forwarding and allows
traffic originating in the 'outside' network to reach
designated hosts in the masqueraded network.
Because of the popularity of this technique (see below),
the term NAT has become virtually synonymous with the
method of IP masquerading.
Network address translation has serious consequences,
both drawbacks and benefits, on the quality of Internet
connectivity and requires careful attention to the details of
its implementation. As a result, many methods have been
devised to alleviate the issues encountered. See article
on NAT traversal.
Overview
In the mid-1990s NAT became a popular tool for
alleviating the IPv4 address exhaustion. It has become a
standard, indispensable feature in routers for home and
small-office Internet connections.
Most systems using NAT do so in order to enable
multiple hosts on a private network to access
the Internet using a single public IP address
(see gateway). However, NAT breaks the originally
envisioned model of IP end-to-end connectivity across the
Internet, introduces complications in communication
between hosts, and affects performance.
NAT obscures an internal network's structure: all traffic
appears to outside parties as if it originated from the
gateway machine.
Network address translation involves over-writing the
source or destination IP address and usually also
the TCP/UDP port numbers of IP packets as they pass
through the router. Checksums (both IP and TCP/UDP)
must also be rewritten to take account of the changes.
In a typical configuration, a local network uses one of the
designated "private" IP address subnets (the RFC 1918).
Private Network Addresses are 192.168.x.x, 172.16.x.x
through 172.31.x.x, and 10.x.x.x (or using CIDR notation,
192.168/16, 172.16/12, and 10/8), and a router on that
network has a private address (such as 192.168.0.1) in
that address space. The router is also connected to the
Internet with a single "public" address (known as
"overloaded" NAT) or multiple "public" addresses assigned
by an ISP. As traffic passes from the local network to the
Internet, the source address in each packet is translated
on the fly from the private addresses to the public
address(es). The router tracks basic data about each
active connection (particularly the destination address and
port). When a reply returns to the router, it uses the
connection tracking data it stored during the outbound
phase to determine where on the internal network to
forward the reply; the TCP or UDP client port numbers are
used to demultiplex the packets in the case of overloaded
NAT, or IP address and port number when multiple public
addresses are available, on packet return. To a host on
the Internet, the router itself appears to be the
source/destination for this traffic.
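The typical configuration described above might look roughly as follows on a Cisco IOS router; the interface names and addresses are illustrative only, and the overload keyword enables PAT so that many inside hosts share the single outside address:

Router(config)# interface FastEthernet0/0
Router(config-if)# ip address 192.168.0.1 255.255.255.0
Router(config-if)# ip nat inside
Router(config-if)# exit
Router(config)# interface Serial0/0
Router(config-if)# ip nat outside
Router(config-if)# exit
! access-list 1 selects the private addresses that may be translated
Router(config)# access-list 1 permit 192.168.0.0 0.0.0.255
Router(config)# ip nat inside source list 1 interface Serial0/0 overload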
Basic NAT and PAT
There are two levels of network address translation.
 Basic NAT. This involves IP address translation only,
not port mapping.
 PAT (Port Address Translation). Also called simply
"NAT" or "Network Address Port Translation" (NAPT).
This involves the translation of both IP addresses and
port numbers.
All Internet packets have a source IP address and a
destination IP address. Both or either of the source and
destination addresses may be translated.
Some Internet packets do not have port numbers. For
example, ICMP packets have no port numbers. However,
the vast bulk of Internet traffic is TCP and UDP packets,
which do have port numbers. Packets which do have port
numbers have both a source port number and a
destination port number. Both or either of the source and
destination ports may be translated.
NAT which involves translation of the source IP address
and/or source port is called source NAT or SNAT. This
re-writes the IP address and/or port number of the
computer which originated the packet.
NAT which involves translation of the destination IP
address and/or destination port number is
called destination NAT or DNAT. This re-writes the IP
address and/or port number corresponding to the
destination computer.
SNAT and DNAT may be applied simultaneously to
Internet packets.
Types of NAT
Network address translation is implemented in a variety of
schemes of translating addresses and port numbers, each
affecting application communication protocols differently.
In some application protocols that use IP address
information, the application running on a node in the
masqueraded network needs to determine the external
address of the NAT, i.e., the address that its
communication peers detect, and, furthermore, often
needs to examine and categorize the type of mapping in
use. For this purpose, the Simple traversal of UDP over
NATs (STUN) protocol was developed (RFC 3489, March
2003). It classified NAT implementation as full cone
NAT, (address) restricted cone NAT, port restricted
cone NAT, or symmetric NAT[1] and proposed a
methodology for testing a device accordingly. However,
these procedures have since been deprecated from
standards status, as the methods have proven faulty and
inadequate to correctly assess many devices. New
methods have been standardized in RFC 5389 (October
2008) and the STUN acronym now represents the new
title of the specification: Session Traversal Utilities for
NAT.
Full cone NAT, also known as one-to-one NAT
 Once an internal address (iAddr:iPort) is mapped to an
external address (eAddr:ePort), any packets from
iAddr:iPort will be sent through eAddr:ePort.
 Any external host can send packets to iAddr:iPort by
sending packets to eAddr:ePort.
(Address) Restricted cone NAT
 Once an internal address (iAddr:iPort) is mapped to an
external address (eAddr:ePort), any packets from
iAddr:iPort will be sent through eAddr:ePort.
 An external host (hAddr:any) can send packets to
iAddr:iPort by sending packets to eAddr:ePort only if
iAddr:iPort had previously sent a packet to hAddr:any.
"any" means the port number doesn't matter.
Port-Restricted cone NAT
Like an (Address) Restricted cone NAT, but the restriction
includes port numbers.
 Once an internal address (iAddr:iPort) is mapped to an
external address (eAddr:ePort), any packets from
iAddr:iPort will be sent through eAddr:ePort.
 An external host (hAddr:hPort) can send packets to
iAddr:iPort by sending packets to eAddr:ePort only if
iAddr:iPort had previously sent a packet to hAddr:hPort.
Symmetric NAT
 Each request from the same internal IP address and port
to a specific destination IP address and port is mapped to
a unique external source IP address and port.
 If the same internal host sends a packet with the same
source address and port but to a different destination, a
different mapping is used.
 Only an external host that receives a packet from an
internal host can send a packet back.
This terminology has been the source of much confusion,
as it has proven inadequate at describing real-life NAT
behavior.[2] Many NAT implementations combine these
types, and it is therefore better to refer to specific
individual NAT behaviors instead of using the
Cone/Symmetric terminology. In particular, most NAT
translators combine symmetric NAT for outgoing
connections with static port mapping, where incoming
packets to the external address and port are redirected to
a specific internal address and port. Some products can
redirect packets to several internal hosts, e.g. to divide the
load between a few servers. However, this introduces
problems with more sophisticated communications that
have many interconnected packets, and thus is rarely used.
Many NAT implementations follow the port
preservation design. For most communications, they use
the same values as internal and external port numbers.
However, if two internal hosts attempt to communicate
with the same external host using the same port number,
the external port number used by the second host will be
chosen at random. Such NAT will sometimes be perceived
as (address) restricted cone NAT and other times
as symmetric NAT.
NAT and TCP/UDP
"Pure NAT", operating on IP alone, may or may not
correctly parse protocols that are totally concerned with IP
information, such as ICMP, depending on whether the
payload is interpreted by a host on the "inside" or "outside"
of translation. As soon as the protocol stack is climbed,
even with such basic protocols as TCP and UDP, the
protocols will break unless NAT takes action beyond the
network layer.
IP has a checksum in each packet header, which provides
error detection only for the header. IP datagrams may
become fragmented and it is necessary for a NAT to
reassemble these fragments to allow correct recalculation
of higher level checksums and correct tracking of which
packets belong to which connection.
The major transport layer protocols, TCP and UDP, have a
checksum that covers all the data they carry, as well as
the TCP/UDP header, plus a "pseudo-header" that
contains the source and destination IP addresses of the
packet carrying the TCP/UDP header. For an originating
NAT to successfully pass TCP or UDP, it must recompute
the TCP/UDP header checksum based on the translated
IP addresses, not the original ones, and put that
checksum into the TCP/UDP header of the first packet of
the fragmented set of packets. The receiving NAT must
recompute the IP checksum on every packet it passes to
the destination host, and also recognize and recompute
the TCP/UDP header using the retranslated addresses
and pseudo-header. This is not a completely solved
problem. One solution is for the receiving NAT to
reassemble the entire segment and then recompute a
checksum calculated across all packets.
The originating host may perform path Maximum
Transmission Unit (MTU) discovery (RFC 1191) to determine the
packet size that can be transmitted without fragmentation,
and then set the "don't fragment" bit in the appropriate
packet header field.
Destination network address translation (DNAT)
DNAT is a technique for transparently changing the
destination IP address of an en-route packet and
performing the inverse function for any replies.
Any router situated between two endpoints can perform
this transformation of the packet.
DNAT is commonly used to publish a service located in a
private network on a publicly accessible IP address. This
use of DNAT is also called port forwarding.
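On a Cisco IOS router this kind of port forwarding is usually expressed as a static NAT entry; the addresses and ports below are hypothetical, and the example publishes an internal web server on the router's public address:

Router(config)# ip nat inside source static tcp 192.168.0.10 80 203.0.113.5 80
! TCP traffic arriving at 203.0.113.5 port 80 is now redirected to 192.168.0.10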
SNAT
The usage of the term SNAT varies by vendor. Many
vendors have proprietary definitions for SNAT. A common
definition is Source NAT, the counterpart of Destination
NAT (DNAT).
Microsoft uses the term for Secure NAT, in regard to the
ISA Server extension discussed below. For Cisco
Systems, SNAT means Stateful NAT.
The Internet Engineering Task Force (IETF) defines SNAT
as Softwires Network Address Translation. This type of
NAT is named after the Softwires working group that is
charged with the standardization of discovery, control and
encapsulation methods for connecting IPv4 networks
across IPv6 networks and IPv6 networks across IPv4
networks.
Dynamic network address translation
Dynamic NAT, just like static NAT, is not common in
smaller networks but is found within larger corporations
with complex networks. The way dynamic NAT differs from
static NAT is that where static NAT provides a one-to-one
internal to public static IP address mapping, dynamic NAT
doesn't make the mapping to the public IP address static
and usually uses a group of available public IP addresses.
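A sketch of dynamic NAT on a Cisco IOS router is shown below; the pool name, address range and access list are assumptions made for the example. Inside hosts matched by the access list are translated to whichever pool address is free at the time:

Router(config)# ip nat pool PUBLIC_POOL 203.0.113.20 203.0.113.30 netmask 255.255.255.224
Router(config)# access-list 1 permit 10.0.0.0 0.0.0.255
Router(config)# ip nat inside source list 1 pool PUBLIC_POOL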
Applications affected by NAT
Some Application Layer protocols (such as FTP and SIP)
send explicit network addresses within their application
data. FTP in active mode, for example, uses separate
connections for control traffic (commands) and for data
traffic (file contents). When requesting a file transfer, the
host making the request identifies the corresponding data
connection by its network layer and transport
layer addresses. If the host making the request lies behind
a simple NAT firewall, the translation of the IP address
and/or TCP port number makes the information received
by the server invalid. The Session Initiation Protocol (SIP)
controls Voice over IP (VoIP) communications and suffers
the same problem. SIP may use multiple ports to set up a
connection and transmit voice stream via RTP. IP
addresses and port numbers are encoded in the payload
data and must be known prior to the traversal of NATs.
Without special techniques, such as STUN, NAT behavior
is unpredictable and communications may fail.
Application Layer Gateway (ALG) software or hardware
may correct these problems. An ALG software module
running on a NAT firewall device updates any payload
data made invalid by address translation. ALGs obviously
need to understand the higher-layer protocol that they
need to fix, and so each protocol with this problem
requires a separate ALG.
Another possible solution to this problem is to use NAT
traversal techniques using protocols such
as STUN or ICE or proprietary approaches in a session
border controller. NAT traversal is possible in both TCP-
and UDP-based applications, but the UDP-based
technique is simpler, more widely understood, and more
compatible with legacy NATs. In either case, the high level
protocol must be designed with NAT traversal in mind, and
it does not work reliably across symmetric NATs or other
poorly-behaved legacy NATs.
Other possibilities are UPnP (Universal Plug and Play)
or Bonjour (NAT-PMP), but these require the cooperation
of the NAT device.
Most traditional client-server protocols (FTP being the
main exception), however, do not send layer 3 contact
information and therefore do not require any special
treatment by NATs. In fact, avoiding NAT complications is
practically a requirement when designing new higher-layer
protocols today.
NATs can also cause problems where IPsec encryption is
applied and in cases where multiple devices such
as SIP phones are located behind a NAT. Phones which
encrypt their signaling with IPsec encapsulate the port
information within the IPsec packet meaning that NA(P)T
devices cannot access and translate the port. In these
cases the NA(P)T devices revert to simple NAT operation.
This means that all traffic returning to the NAT will be
mapped onto one client causing the service to fail. There
are a couple of solutions to this problem, one is to
use TLS which operates at level 4 in the OSI Reference
Model and therefore does not mask the port number, or to
Encapsulate the IPsec within UDP - the latter being the
solution chosen by TISPAN to achieve secure NAT
traversal.
The DNS protocol vulnerability announced by Dan
Kaminsky on 2008 July 8 is indirectly affected by NAT port
mapping. To avoid DNS server cache poisoning, it is
highly desirable to not translate UDP source port numbers
of outgoing DNS requests from any DNS server which is
behind a firewall which implements NAT. The
recommended work-around for the DNS vulnerability is to
make all caching DNS servers use randomized UDP
source ports. If the NAT function de-randomizes the UDP
source ports, the DNS server will be made vulnerable.
Drawbacks
Hosts behind NAT-enabled routers do not have end-to-
end connectivity and cannot participate in some Internet
protocols. Services that require the initiation
of TCP connections from the outside network, or stateless
protocols such as those using UDP, can be disrupted.
Unless the NAT router makes a specific effort to support
such protocols, incoming packets cannot reach their
destination. Some protocols can accommodate one
instance of NAT between participating hosts ("passive
mode" FTP, for example), sometimes with the assistance
of an application-level gateway (see below), but fail when
both systems are separated from the Internet by NAT. Use
of NAT also complicates tunneling protocols such
as IPsec because NAT modifies values in the headers
which interfere with the integrity checks done by IPsec and
other tunneling protocols.
End-to-end connectivity has been a core principle of the
Internet, supported for example by the Internet
Architecture Board. Current Internet architectural
documents observe that NAT is a violation of the End-to-
End Principle, but that NAT does have a valid role in
careful design.[3] There is considerably more concern with
the use of IPv6 NAT, and many IPv6 architects believe
IPv6 was intended to remove the need for NAT.[4]
Because of the short-lived nature of the stateful translation
tables in NAT routers, devices on the internal network lose
IP connectivity typically within a very short period of time
unless they implement NAT keep-alive mechanisms by
frequently accessing outside hosts. This dramatically
shortens the power reserves on battery-operated hand-
held devices and has thwarted more widespread
deployment of such IP-native Internet-enabled devices.
Some Internet service providers (ISPs), especially in
Russia, Asia and other "developing" regions provide their
customers only with "local" IP addresses, due to limited
number of external IP addresses allocated to those
entities. Thus, these customers must access
services external to the ISP's network through NAT. As a
result, the customers cannot achieve true end-to-end
connectivity, in violation of the core principles of the
Internet as laid out by the Internet Architecture Board.
Benefits
The primary benefit of IP-masquerading NAT is that it has
been a practical solution to the impending exhaustion of
IPv4 address space. Even large networks can be
connected to the Internet with as little as a single IP
address. The more common arrangement is having
machines that require end-to-end connectivity supplied
with a routable IP address, while having machines that do
not provide services to outside users behind NAT with only
a few IP addresses used to enable Internet access.
Some[5] have also called this exact benefit a major
drawback, since it delays the need for the implementation
of IPv6, quote:
"... it is possible that its [NAT] widespread use will
significantly delay the need to deploy IPv6. ... It is probably
safe to say that networks would be better off without NAT,
..."