
453 Telecommunication for Business
A Managerial Orientation, ver 1.0

To: Prof. Rachna Gupta
By: Bhisham Padha, 45 MBA 07
History
• The very first device that had fundamentally the same functionality as a router does today, i.e., a packet switch, was the Interface Message Processor (IMP); IMPs were the devices that made up the ARPANET, the first packet-switching network.
• The idea for a router (although they were called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, later that year it became a subcommittee of the International Federation for Information Processing (IFIP).
"Basically, what
I did for my PhD
research in
1961–1962 was
to establish a
mathematical
theory of packet
networks."

Leonard Kleinrock,
Ph.D. (born June 13,
1934 in New York) is a
computer scientist, and
a professor of computer
science at UCLA
He made fundamental contributions to the mathematical theory of modern data networks and to the functional specification of packet switching, which is the foundation of the Internet. He made several important contributions to the field of computer networking, in particular to its theoretical side, and played an important role in the development of the ARPANET at UCLA.
His most well-known and significant work is his early work on queueing theory, which has applications in many fields, among them as a key mathematical background to packet switching, the basic technology behind the Internet. His initial contribution to this field was his doctoral thesis in 1962, published in book form in 1964; he later published several of the standard works on the subject.
Packet Switching & ARPA
• In 1969, ARPANET, the world's first packet switched
computer network, was established on October 29 between
nodes at Kleinrock's lab at UCLA and Douglas Engelbart's
lab at SRI. Interface Message Processors (IMP) at both sites
served as the backbone of the first Internet.
• In addition to SRI and UCLA, the University of California at
Santa Barbara, and the University of Utah were part of the
original four network nodes. By December 5, 1969, the
initial 4-node network was connected.
• In 1988, Kleinrock was the chairman of a group that presented the report Toward a National Research Network to the U.S. Congress. This report was highly influential upon then-Senator Al Gore, who used it to develop the Gore Bill, or the High Performance Computing Act of 1991, which was influential in the development of the Internet as it is known today. In particular, it led indirectly to the development of the 1993 web browser Mosaic, which was created at the National Center for Supercomputing Applications (NCSA).
Packet Switching
Packet switching is a network communications
method that groups all transmitted data,
irrespective of content, type, or structure into
suitably-sized blocks, called packets. The
network over which packets are transmitted is a
shared network which routes each packet
independently from all others and allocates
transmission resources as needed. The principal
goals of packet switching are to optimize
utilization of available link capacity and to
increase the robustness of communication.
When traversing network adapters, switches and other network nodes, packets are buffered and queued, resulting in variable delay and throughput depending on the traffic load in the network.
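To make the idea concrete, here is a minimal, hypothetical Python sketch (not part of the original slides): a message is split into small numbered packets that can travel and arrive independently, then be reassembled in order. The 4-byte payload size and the tuple "header" are arbitrary illustrative choices.

    # Hypothetical sketch: split a message into fixed-size packets and reassemble it.
    # The 4-byte payload size and the (sequence, payload) header are illustrative only.

    def packetize(data: bytes, payload_size: int = 4):
        """Split data into (sequence_number, payload) packets."""
        return [(seq, data[i:i + payload_size])
                for seq, i in enumerate(range(0, len(data), payload_size))]

    def reassemble(packets):
        """Rebuild the original data; packets may arrive in any order."""
        return b"".join(payload for _, payload in sorted(packets))

    packets = packetize(b"hello, packet switching")
    assert reassemble(reversed(packets)) == b"hello, packet switching"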
Repeater
• As signals travel along a network cable (or any other transmission medium), they degrade and become distorted in a process that is called attenuation. If a cable is long enough, the attenuation will finally make a signal unrecognizable by the receiver.
• A Repeater enables signals to travel longer
distances over a network. Repeaters work at
the OSI's Physical layer. A repeater
regenerates the received signals and then
retransmits the regenerated (or
conditioned) signals on other segments.
To pass data through the repeater in a usable fashion from one segment to the next, the packets and the Logical Link Control (LLC) protocols must be the same on each segment. This means that a repeater will not enable communication, for example, between an 802.3 segment (Ethernet) and an 802.5 segment (Token Ring); that is, it cannot translate an Ethernet packet into a Token Ring packet.
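As a rough illustration of regeneration (a hypothetical sketch, not from the slides), the repeater below re-decides each bit from a noisy, attenuated sample and retransmits it at full strength; the threshold and sample values are invented for the example.

    # Hypothetical sketch of signal regeneration: a repeater reads an attenuated
    # (noisy) sample stream and retransmits clean logic levels on the next segment.

    def regenerate(samples, threshold=0.5):
        """Re-decide each bit from a noisy sample and re-emit it at full strength."""
        return [1.0 if s >= threshold else 0.0 for s in samples]

    attenuated = [0.72, 0.10, 0.65, 0.05]   # degraded copies of 1, 0, 1, 0
    print(regenerate(attenuated))            # [1.0, 0.0, 1.0, 0.0]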
Bridges
Like a repeater, a bridge can
join segments or workgroup
LANs. However, a bridge can
also divide a network to isolate
traffic or problems. For example, if the volume of traffic from one or two computers or a single department is flooding the network with data and slowing down the entire operation, a bridge can isolate those computers or that department.
Bridges can be used to:
• Expand the distance of a segment.
• Provide for an increased number of
computers on the network.
• Reduce traffic bottlenecks resulting
from an excessive number of
attached computers.
• Bridges work at the Data Link Layer of the OSI model.
Because they work at this layer, all information
contained in the higher levels of the OSI model is
unavailable to them. Therefore, they do not
distinguish between one protocol and another.
• Bridges simply pass all protocols along the network.
Because all protocols pass across the bridges, it is up
to the individual computers to determine which
protocols they can recognize.
• A bridge works on the principle that each network
node has its own address. A bridge forwards the
packets based on the address of the particular
destination node.
• As traffic passes through the bridge, information about the computer addresses is stored in the bridge's RAM. The bridge then uses this RAM to build a table of node addresses, which it uses to decide where to forward each frame (see the sketch below).
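A minimal, hypothetical Python sketch of this learning behaviour (port numbers and addresses are invented): the bridge records each frame's source address against the port it arrived on, forwards to the learned port when it knows the destination, and floods to all other ports when it does not.

    # Hypothetical sketch of a learning bridge: learn source addresses per port,
    # forward to the known port, flood to all other ports when the destination is unknown.

    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports
            self.table = {}                      # address -> port (the "RAM" table)

        def handle_frame(self, src, dst, in_port):
            self.table[src] = in_port            # learn where src lives
            if dst in self.table:
                return [self.table[dst]]         # forward to the one known port
            return [p for p in self.ports if p != in_port]   # flood

    bridge = LearningBridge(ports=[1, 2, 3])
    print(bridge.handle_frame("AA", "BB", in_port=1))   # BB unknown -> flood to [2, 3]
    print(bridge.handle_frame("BB", "AA", in_port=2))   # AA learned -> forward to [1]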
Routers

In an environment consisting of several network segments with different protocols and architectures, a bridge may not be adequate for ensuring fast communication among all of the segments. A complex network needs a device which not only knows the address of each segment, but also can determine the best path for sending data and filter broadcast traffic to the local segment.
A router is a networking device whose software and hardware are usually tailored to the tasks of routing and forwarding information.
Routing
Routing is the process of selecting paths in a network
along which to send network traffic. Routing is
performed for many kinds of networks, including the
telephone network, electronic data networks (such as
the Internet), and transportation networks.

Here we are concerned primarily with routing in electronic data networks using packet-switching technology.
In packet-switching networks, routing directs packet forwarding: the transit of logically addressed packets from their source toward their ultimate destination through intermediate nodes.
Forwarding is the relaying of packets from
one network segment to another by
nodes in a computer network.
The simplest forwarding model - unicasting -
involves a packet being relayed from link
to link along a chain leading from the
packet's source to its destination.
However, other forwarding strategies are
commonly used. Broadcasting requires a
packet to be duplicated and copies sent
on multiple links with the goal of
delivering a copy to every device on the
network. In practice, broadcast packets
are not forwarded everywhere on a
network, but only to devices within a
broadcast domain, making broadcast a
relative term. Less common than broadcasting, but perhaps of greater utility and theoretical significance, is multicasting, where a packet is selectively duplicated and copies delivered to each of a set of recipients.
Networking technologies tend to naturally support
certain forwarding models. For example, fiber optics
and copper cables run directly from one machine to
another form natural unicast media - data transmitted
at one end is received by only one machine at the
other end. However, as illustrated in the diagrams,
nodes can forward packets to create multicast or
broadcast distributions from naturally unicast media.
Likewise, traditional Ethernet (10BASE5 and 10BASE2,
but not the more modern 10BASE-T) are natural
broadcast media - all the nodes are attached to a
single, long cable and a packet transmitted by one
device is seen by every other device attached to the
cable. Ethernet nodes implement unicast by ignoring
packets not directly addressed to them. A wireless network is naturally multicast - all devices within reception radius of a transmitter can receive its transmissions.
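A hypothetical sketch of the three forwarding models on a single node (link and group names are invented): unicast picks the one link toward the destination, broadcast copies the packet onto every link, and multicast copies it only onto the links with subscribed receivers.

    # Hypothetical sketch of the three forwarding models on a node with several links.

    def forward(packet_dst, links, groups=None):
        """Return the list of outgoing links a copy of the packet should be sent on."""
        if packet_dst == "broadcast":
            return list(links.values())               # broadcast: one copy per link
        if groups and packet_dst in groups:
            return groups[packet_dst]                 # multicast: only subscribed links
        return [links[packet_dst]] if packet_dst in links else []   # unicast

    links = {"host-a": "eth0", "host-b": "eth1", "host-c": "eth2"}
    groups = {"video-feed": ["eth1", "eth2"]}
    print(forward("host-a", links))                   # ['eth0']
    print(forward("broadcast", links))                # ['eth0', 'eth1', 'eth2']
    print(forward("video-feed", links, groups))       # ['eth1', 'eth2']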
The forwarding decision is generally made using
one of two processes: routing, which uses
information encoded in a device's address to
infer its location on the network, or bridging,
which makes no assumptions about where
addresses are located and depends heavily on
broadcasting to locate unknown addresses. The
heavy overhead of broadcasting has led to the
dominance of routing in large networks,
particularly the Internet; bridging is largely
relegated to small networks where the
overhead of broadcasting is tolerable. However,
since large networks are usually composed of many smaller networks linked together, it would be inaccurate to state that bridging has no use at all.
At nodes where multiple
outgoing links are available,
the choice of which, all, or
any to use for forwarding a
given packet requires a
decision making process
that, while simple in
concept, is of sometimes
bewildering complexity.
Since a forwarding decision
must be made for every
packet handled by a node,
the total time required for
this can become a major
limiting factor in overall
network performance. Much of the design effort of high-speed routers and switches has been focused on making this decision as quickly as possible.
Routers work at the Network layer of the OSI model, meaning that they can switch and route packets across multiple networks. They do this by
exchanging protocol-specific
information between separate
networks. Routers have access to more
information in packets than bridges,
and use this information to improve
packet deliveries. Routers are usually used in a complex network situation because they provide better traffic management than bridges and do not pass broadcast traffic.
Routers operate in two different planes
[2]:
Control plane, in which the router
learns the outgoing interface that is
most appropriate for forwarding
specific packets to specific
destinations,
Forwarding plane, which is responsible for the actual process of sending a packet received on a logical interface out of the appropriate outbound interface.
Control plane
Control plane processing leads to the
construction of what is variously called a
routing table or routing information base
(RIB). The RIB may be used by the
Forwarding Plane to look up the outbound
interface for a given packet, or, depending
on the router implementation, the Control
Plane may populate a separate forwarding
information base (FIB) with destination
information. RIBs are optimized for efficient updating with control mechanisms such as routing protocols, while FIBs are optimized for the fastest possible lookup of the information needed to select the outbound interface.
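As a rough illustration (hypothetical Python, with made-up routes and interface names), a FIB lookup can be sketched as a longest-prefix match: the most specific route containing the destination wins, and the default route catches everything else.

    # Hypothetical sketch: a FIB searched by longest-prefix match.
    # Routes and interface names are illustrative, not taken from the slides.
    import ipaddress

    fib = {
        ipaddress.ip_network("10.0.0.0/8"):  "eth0",
        ipaddress.ip_network("10.1.0.0/16"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"):   "eth2",   # default route
    }

    def lookup(destination: str) -> str:
        """Pick the outbound interface with the most specific (longest) matching prefix."""
        dst = ipaddress.ip_address(destination)
        matches = [net for net in fib if dst in net]
        return fib[max(matches, key=lambda net: net.prefixlen)]

    print(lookup("10.1.2.3"))    # eth1 (the /16 beats the /8 and the default)
    print(lookup("192.0.2.1"))   # eth2 (only the default route matches)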
Forwarding plane (a.k.a. data plane)
For the pure Internet Protocol (IP) forwarding
function, router design tries to minimize the
state information kept on individual packets.
Once a packet is forwarded, the router should
no longer retain statistical information about it.
It is the sending and receiving endpoints that keep information about such things as errored or missing packets.
Forwarding decisions can involve decisions at
layers other than the IP internetwork layer or
OSI layer 3. Again, the marketing term switch
can be applied to devices that have these
capabilities. A function that forwards based on data link layer, or OSI layer 2, information is properly called a bridge; marketing literature nevertheless often labels such devices as switches.
• Routers can share status and routing
information with one another and use
this information to bypass slow or
malfunctioning connections.
• Routers do not look at the destination
node address; they only look at the
network address. Routers will only pass
the information if the network address
is known. This ability to control the data passing through the router reduces the amount of traffic between networks and allows routers to use these links more efficiently.
Types
• Routers for Internet connectivity and
internal use
• Small Office Home Office (SOHO)
connectivity
• Enterprise routers
Gateways
Gateways make communication possible between different architectures and environments. They repackage and convert data going from one environment to another so that each environment can understand the other environment's data.
A gateway repackages information to
match the requirements of the
destination system. Gateways can
change the format of a message so that
it will conform to the application
program at the receiving end of the
transfer.
A gateway links two systems that do not use the same:
• Communication protocols
• Data formatting structures
• Languages
• Architecture
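A hypothetical sketch of this repackaging (the CSV and JSON formats and field names are invented for the example): the gateway reads a record in the sending environment's format and re-emits it in the structure the receiving application expects.

    # Hypothetical sketch: a gateway repackaging an order record from a CSV-based
    # legacy environment into the JSON structure a receiving application expects.
    import json

    def gateway_translate(csv_record: str) -> str:
        order_id, customer, amount = csv_record.strip().split(",")
        repackaged = {                      # reformat to the destination's conventions
            "orderId": int(order_id),
            "customer": customer,
            "amount": float(amount),
        }
        return json.dumps(repackaged)

    print(gateway_translate("42,ACME Ltd,199.99"))
    # {"orderId": 42, "customer": "ACME Ltd", "amount": 199.99}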
Firewall
A firewall is a part of a computer
system or network that is designed to
block unauthorized access while
permitting outward communication. It
is also a device or set of devices
configured to permit, deny, encrypt,
decrypt, or proxy all computer traffic
between different security domains
based upon a set of rules and other
criteria.
Firewalls can be implemented in both
hardware and software, or a
combination of both. Firewalls are
frequently used to prevent
unauthorized Internet users from
accessing private networks
connected to the Internet, especially
intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.
Firewall techniques
Packet filter: Looks at each packet entering or leaving
the network and accepts or rejects it based on user-
defined rules. Packet filtering is fairly effective and
transparent to users, but it is difficult to configure. In
addition, it is susceptible to IP spoofing.
Application gateway: Applies security mechanisms to
specific applications, such as FTP and Telnet servers.
This is very effective, but can impose a performance
degradation.
Circuit-level gateway: Applies security mechanisms when
a TCP or UDP connection is established. Once the
connection has been made, packets can flow between
the hosts without further checking.
Proxy server: Intercepts all messages entering and
leaving the network. The proxy server effectively hides
the true network addresses.
A firewall's basic task is to regulate
some of the flow of traffic between
computer networks of different trust
levels. Typical examples are the
Internet which is a zone with no trust
and an internal network which is a
zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a perimeter network or demilitarized zone (DMZ).
Without proper configuration, a firewall can
often become worthless. Standard security
practices dictate a "default-deny" firewall
ruleset, in which the only network
connections which are allowed are the ones
that have been explicitly allowed.
Unfortunately, such a configuration requires
detailed understanding of the network
applications and endpoints required for the
organization's day-to-day operation. Many businesses lack such understanding, and therefore implement a "default-allow" ruleset, in which all traffic is allowed unless it has been specifically blocked. This configuration makes inadvertent network connections and system compromise much more likely.
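A minimal, hypothetical sketch of the difference (rules and ports are invented): the same two allow rules behave very differently depending on whether traffic that matches nothing is denied or allowed by default.

    # Hypothetical sketch: the same rule list evaluated with a default-deny vs a
    # default-allow policy. Rules are (protocol, destination port, verdict).
    RULES = [
        ("tcp", 80, "allow"),    # web
        ("tcp", 25, "allow"),    # mail
    ]

    def decide(protocol, port, default="deny"):
        for rule_proto, rule_port, verdict in RULES:
            if (protocol, port) == (rule_proto, rule_port):
                return verdict
        return default           # what happens to everything not explicitly listed

    print(decide("tcp", 80))                      # allow (explicit rule)
    print(decide("tcp", 23))                      # deny  (default-deny ruleset)
    print(decide("tcp", 23, default="allow"))     # allow (riskier default-allow ruleset)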
Firewall History
Firewall technology emerged in
the late 1980s when the Internet
was a fairly new technology in
terms of its global use and
connectivity. The predecessors to
firewalls for network security were
the routers used in the late 1980s
to separate networks from one
another. The view of the Internet as a relatively small community of compatible users who valued openness for sharing and collaboration was ended by a number of major Internet security breaches that occurred in the late 1980s.
I. First generation - packet filters
The first paper published on firewall technology was in 1988, when engineers from
Digital Equipment Corporation (DEC) developed filter systems known as packet
filter firewalls. This fairly basic system was the first generation of what would
become a highly evolved and technical internet security feature. At AT&T Bell
Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet
filtering and developed a working model for their own company based upon
their original first generation architecture.
Packet filters act by inspecting the "packets" which represent the basic unit of data
transfer between computers on the Internet. If a packet matches the packet
filter's set of rules, the packet filter will drop (silently discard) the packet, or
reject it (discard it, and send "error responses" to the source).
This type of packet filtering pays no attention to whether a packet is part of an
existing stream of traffic (it stores no information on connection "state").
Instead, it filters each packet based only on information contained in the packet
itself (most commonly using a combination of the packet's source and
destination address, its protocol, and, for TCP and UDP traffic, the port number).
TCP and UDP protocols comprise most communication over the Internet, and
because TCP and UDP traffic by convention uses well known ports for particular
types of traffic, a "stateless" packet filter can distinguish between, and thus
control, those types of traffic (such as web browsing, remote printing, email
transmission, file transfer), unless the machines on each side of the packet filter
are both using the same non-standard ports.
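A hypothetical sketch of such a stateless filter (addresses, ports, and actions are invented): each packet is matched, in isolation, against an ordered rule list keyed on source, destination, protocol, and port.

    # Hypothetical sketch of a first-generation (stateless) packet filter: each packet
    # is judged in isolation against source/destination address, protocol and port.
    RULES = [
        # (src, dst, protocol, dst_port, action) - "*" matches anything
        ("*", "10.0.0.5", "tcp", 25,  "accept"),   # allow mail to the mail server
        ("*", "*",        "tcp", 23,  "reject"),   # reject telnet, notify the sender
        ("*", "*",        "*",   "*", "drop"),     # silently discard everything else
    ]

    def filter_packet(src, dst, protocol, dst_port):
        for r_src, r_dst, r_proto, r_port, action in RULES:
            if all(r in ("*", v) for r, v in
                   [(r_src, src), (r_dst, dst), (r_proto, protocol), (r_port, dst_port)]):
                return action

    print(filter_packet("192.0.2.9", "10.0.0.5", "tcp", 25))   # accept
    print(filter_packet("192.0.2.9", "10.0.0.7", "tcp", 23))   # reject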
II. Second generation - "stateful" filters
From 1989 to 1990, three colleagues from AT&T Bell Laboratories (Dave Presotto, Janardan Sharma, and Kshitij Nigam) developed the second generation of firewalls, calling them circuit-level firewalls.
Second-generation firewalls additionally consider the placement of each individual packet within the packet series. This technology is generally referred to as stateful packet inspection, as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection, part of an existing connection, or an invalid packet. Though there is still a set of static rules in such a firewall, the state of a connection can in itself be one of the criteria which trigger specific rules.
This type of firewall can help prevent attacks that exploit existing connections, or certain denial-of-service attacks.
III. Third generation - application layer
Publications by Gene Spafford of Purdue University, Bill Cheswick at AT&T
Laboratories, and Marcus Ranum described a third generation firewall known as
an application layer firewall, also known as a proxy-based firewall. Marcus
Ranum's work on the technology spearheaded the creation of the first
commercial product, which was released by DEC under the name DEC SEAL. DEC's first major sale was on June 13, 1991 to a chemical company based on the East Coast of the USA.
Trusted Information Systems (TIS), under a broader DARPA contract, developed the Firewall Toolkit (FWTK), and
made it freely available under license on October 1, 1993. The purposes for
releasing the freely-available, not for commercial use, FWTK were: to
demonstrate, via the software, documentation, and methods used, how a
company with (at the time) 11 years' experience in formal security methods,
and individuals with firewall experience, developed firewall software; to create a
common base of very good firewall software for others to build on (so people
did not have to continue to "roll their own" from scratch); and to "raise the bar"
of firewall software being used.
The key benefit of application layer filtering is that it can "understand" certain
applications and protocols (such as File Transfer Protocol, DNS, or web
browsing), and it can detect whether an unwanted protocol is being sneaked
through on a non-standard port or whether a protocol is being abused in any
harmful way.
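A hypothetical sketch of that idea (the "signatures" are deliberately simplistic and invented): the payload arriving on a well-known port is checked against what the expected protocol should look like, so tunnelled traffic on port 80 that is not HTTP can be blocked.

    # Hypothetical sketch of application-layer inspection: check whether the payload
    # actually looks like the protocol expected on that port. The "signatures" here
    # are deliberately simplistic and only illustrate the idea.
    EXPECTED = {
        80: lambda payload: payload.startswith((b"GET ", b"POST ", b"HTTP/")),
        21: lambda payload: payload.startswith((b"USER ", b"PASS ", b"220")),
    }

    def inspect(dst_port: int, payload: bytes) -> str:
        check = EXPECTED.get(dst_port)
        if check is None:
            return "no application rule"
        return "pass" if check(payload) else "block: unexpected protocol on this port"

    print(inspect(80, b"GET /index.html HTTP/1.1"))        # pass
    print(inspect(80, b"\x00\x01binary tunnel traffic"))   # block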
Types
There are several classifications of firewalls depending on where the communication is taking place, where the communication is intercepted, and the state that is being tracked.
Network layer and packet
filters
Network layer firewalls, also called packet filters, operate at a relatively low level of
the TCP/IP protocol stack, not allowing packets to pass through the firewall
unless they match the established rule set. The firewall administrator may
define the rules; or default rules may apply. The term "packet filter" originated
in the context of BSD operating systems.
Network layer firewalls generally fall into two sub-categories, stateful and stateless.
Stateful firewalls maintain context about active sessions, and use that "state
information" to speed packet processing. Any existing network connection can
be described by several properties, including source and destination IP address,
UDP or TCP ports, and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or completion of the connection). If a
packet does not match an existing connection, it will be evaluated according to
the ruleset for new connections. If a packet matches an existing connection
based on comparison with the firewall's state table, it will be allowed to pass
without further processing.
Stateless firewalls require less memory, and can be faster for simple filters that
require less time to filter than to look up a session. They may also be necessary
for filtering stateless network protocols that have no concept of a session.
However, they cannot make more complex decisions based on what stage
communications between hosts have reached.
Modern firewalls can filter traffic based on many packet attributes like source IP
address, source port, destination IP address or port, destination service like
WWW or FTP. They can filter based on protocols, TTL values, netblock of
originator, domain name of the source, and many other attributes.
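A minimal, hypothetical sketch of stateful filtering (the ruleset stand-in and addresses are invented): new connections are checked against the rules and recorded in a state table; later packets that match an existing entry pass without re-evaluating the rules.

    # Hypothetical sketch of stateful filtering: new connections are checked against
    # the ruleset, packets matching an existing entry in the state table pass directly.
    state_table = set()      # established connections: (src, sport, dst, dport)

    def allow_new_connection(src, dst, dport):
        return dport in (80, 443)            # stand-in for the real ruleset

    def stateful_filter(src, sport, dst, dport):
        key = (src, sport, dst, dport)
        if key in state_table:
            return "pass (existing connection)"
        if allow_new_connection(src, dst, dport):
            state_table.add(key)
            return "pass (new connection recorded)"
        return "drop"

    print(stateful_filter("10.0.0.2", 51000, "93.184.216.34", 443))  # new, recorded
    print(stateful_filter("10.0.0.2", 51000, "93.184.216.34", 443))  # existing
    print(stateful_filter("10.0.0.2", 51001, "93.184.216.34", 23))   # drop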
Application layer
Application-layer firewalls work on the application level
of the TCP/IP stack (i.e., all browser traffic, or all telnet
or ftp traffic), and may intercept all packets traveling
to or from an application. They block other packets
(usually dropping them without acknowledgment to
the sender). In principle, application firewalls can
prevent all unwanted outside traffic from reaching
protected machines.
By inspecting all packets for improper content, firewalls can restrict or prevent outright the spread of networked computer worms and trojans. In practice,
however, this becomes so complex and so difficult to
attempt (given the variety of applications and the
diversity of content each may allow in its packet
traffic) that comprehensive firewall design does not
generally attempt this approach.
The XML firewall exemplifies a more recent kind of application-layer firewall.
Proxies
A proxy device (running either on dedicated hardware or
as software on a general-purpose machine) may act as
a firewall by responding to input packets (connection
requests, for example) in the manner of an
application, whilst blocking other packets.
Proxies make tampering with an internal system from the
external network more difficult and misuse of one
internal system would not necessarily cause a security
breach exploitable from outside the firewall (as long as
the application proxy remains intact and properly
configured). Conversely, intruders may hijack a
publicly-reachable system and use it as a proxy for
their own purposes; the proxy then masquerades as
that system to other internal machines. While use of
internal address spaces enhances security, crackers
may still employ methods such as IP spoofing to
attempt to pass packets to a target network.
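A hypothetical sketch of that behaviour (addresses and the fetch stand-in are invented): the proxy answers the client and issues its own outbound request, so external hosts only ever see the proxy's address, never the internal one.

    # Hypothetical sketch of proxying: the proxy answers the client and makes its own
    # outbound request, so external hosts only ever see the proxy's address.
    PROXY_ADDR = "203.0.113.10"

    def proxy_request(client_addr, url, fetch):
        """Relay a request on behalf of client_addr; 'fetch' stands in for the real
        outbound connection and only ever sees the proxy's own address."""
        response = fetch(url, source=PROXY_ADDR)      # internal address is hidden
        return {"to": client_addr, "body": response}

    def fake_fetch(url, source):
        return f"content of {url}, requested by {source}"

    print(proxy_request("192.168.1.20", "http://example.com/", fake_fetch))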
