
BIT 2108: COMPUTER NETWORKS

The Concept of Networking


The idea of networking has been around for a long time and has taken on many
meanings. Consider the following definitions

An openwork fabric; netting


A system of interlacing lines, tracks, or channels
Any interconnected system; for example, a television-broadcasting network
A system in which a number of independent computers are linked together to
share data and peripherals, such as hard disks and printers

The last definition is the most relevant to this course. The key word in the definition
is "share." Sharing is the purpose of computer networking. The ability to share
information efficiently is what gives computer networking its power and its appeal; that is,
networking is the concept of sharing resources and services.

Computer Networking
At its simplest, a computer network consists of two computers connected to each other by a
cable that allows them to share data; that is, a network of computers is a group of
interconnected systems sharing resources and interacting using a shared communications link.
A network, therefore, is a set of interconnected systems with something to share, such as
data, a printer, a fax modem, or a service such as a database or an email system. Computer
networking arose as an answer to the need to share data.
The individual systems must be connected through a pathway (called the transmission
medium) that is used to transmit the resource or service between the computers. All
systems on the pathway must follow a set of common communication rules for data to
arrive at its intended destination and for the sending and receiving systems to understand
each other. The rules governing computer communication are called protocols.

Connecting computers and other devices together produces a network, and the concept
of connected computers sharing resources is called networking.
N.B.
Sneakernet is an early form of "networking" that many of us have used and perhaps still
use today: it involves copying files onto floppy disks and handing them to others to copy
onto their computers. Think about its advantages and disadvantages.
Reasons for using computer networking
Why Use a Computer Network?
The reasons for using computer networking are to provide services and to reduce equipment
costs. Networks enable computers to share their resources by offering services to other
computers and users on a network, for example:
Sharing files (data)
Sharing printers and other devices (hardware)
Enabling centralized administration and security of the resources within the system
Supporting network applications such as electronic mail and database services
Basic components of a network
In general, all networks have certain components, functions, and features in common

Servers: Computers that provide shared resources to network users.
Clients: Computers that access shared network resources provided by a server.
Media: The wires that make the physical connections.
Shared data: Files provided to clients by servers across the network.
Shared printers and other peripherals: Additional resources provided by servers.
Resources: Any service or device, such as files, printers, or other items, made
available for use by members of the network.

More specifically, a network has the following components:


Hosts (Servers, PCs, laptops, handhelds)
Routers & switches (IP router, Ethernet switch)
Links (wired, wireless)
Protocols (IP, TCP, CSMA/CD, CSMA/CA)
Applications (network services)
Hosts, routers & links form the hardware side. Protocols & applications form the
software side. Protocols can be viewed as the glue that binds everything else together.
Models of Network Computing
Three methods of organization, or models, generally are recognized. The following are
the three models for network computing:
Centralized computing
Distributed computing
Collaborative or cooperative computing
Centralized Computing
The first computers were large, expensive, and difficult to manage. These large
mainframe computers were not networked as you are familiar with today. Terminals,
which came later, provided the user with a new mechanism to interact with the
centralized computer. These terminals, however, were merely input/output devices that
had no independent processing power. All processing still took place on the central
mainframe, hence the name centralized computing.
Large mainframe systems are still being operated around the world, most often by
governments and large corporations. An example of centralized computing to which
everyone can relate is using an ATM machine. ATMs function as terminals. All
processing is done on the mainframe computer to which the ATMs are connected.

Distributed Computing
Distributed computing emerged as PCs were introduced into organizations. Instead of
centralized processing, PCs provided multiple computers capable of independent
processing. Each PC could receive input and could process information locally, without
the aid of another computer. These PCs, however, did not have the computing power of a
mainframe. Thus, in most instances, a company's mainframe could not be replaced by a
PC.
Distributed computing was a major step forward in how businesses leveraged their
hardware resources. It provided smaller businesses with their own computational
capabilities, enabling them to perform less-complex computing tasks on the smaller,
relatively inexpensive machines.

Collaborative Computing
Collaborative computing enables computers in a distributed computing environment to
share processing power in addition to data, resources, and services. One computer might
borrow processing power by running a program on another computer on the network. Or,
processes might be designed so they can run on two or more computers. Collaborative
computing cannot take place without a network to enable the various computers to
communicate.

Types of Networking
Networks generally fall into one of two broad network categories:
Client/server (server-based) networks
Peer-to-peer networks
Peer-to-Peer Networking
In a peer-to-peer network, there are no dedicated servers, and there is no hierarchy among
the computers. All the computers are equal and therefore are known as peers. Each
computer functions as both a client and a server, and there is no administrator responsible
for the entire network. The user at each computer determines what data on that computer
is shared on the network. Small networks, usually with fewer than 10 machines, can
work well in this configuration.

Size
Peer-to-peer networks are also called workgroups. The term "workgroup" implies a small
group of people. There are typically 10 or fewer computers in a peer-to-peer network.
Cost
Peer-to-peer networks are relatively simple. Because each computer functions as a client
and a server, there is no need for a powerful central server or for the other components
required for a high-capacity network. Peer-to-peer networks can be less expensive than
server-based networks.
Operating Systems
In a peer-to-peer network, the networking software does not require the same standard of
performance and level of security as the networking software designed for dedicated
servers. Dedicated servers function only as servers and not as clients or workstations.
Peer-to-peer networking is built into many operating systems. In those cases, no
additional software is required to set up a peer-to-peer network.
Where a Peer-to-Peer Network Is Appropriate
Peer-to-peer networks are good choices for environments where:
There are 10 users or fewer.
Users share resources, such as files and printers, but no specialized servers exist.
Security is not an issue.
The organization and the network will experience only limited growth within the
foreseeable future.
Advantages of Peer-To-Peer Network
Easy to install and configure.
Individual machines do not depend on the presence of a dedicated server.
Individual users control their own shared resources.
It is inexpensive to purchase and operate.
No additional software or hardware beyond a suitable operating system is needed.
No dedicated administrators are needed to run the network.
It works best for networks with 10 or fewer users.

Disadvantages of Peer-To-Peer Network
Network security applies only to a single resource at a time.
Users may be forced to use as many passwords as there are shared resources.
Each machine must be backed up individually to protect all shared data.
There is no centralized organizational scheme to locate or control access to data.
Not suitable for more than 10 users.

Suitability of Peer-To-Peer Network
Peer-to-peer networking is appropriate in the following situations:
There are fewer than ten people in your organization.
The people in your organization are sophisticated computer users.
Security is not an issue, or the users can be trusted to maintain good security.
There is no one central administrator who sets network policies.
It would be costly to dedicate an additional computer just to serving files.
Users can be relied upon to back up their own data.
Users are physically close together and there are no plans for expansion of the network.

Client/Server-Based Networking
A client/server network consists of a group of user-oriented PCs (called clients) that issue
requests to a server. The client PC is responsible for issuing requests for services to be
rendered. The server's function on the network is to service these requests. Servers
generally are higher-performance systems that are optimized to provide network services
to other PCs. The server machine often has a faster
CPU, more memory, and more disk space than a typical client machine. The client/server
model is a network in which the role of the client is to issue requests and the role of the
server is to service requests.
As networks increase in size, more than one server is usually needed. Spreading the
networking tasks among several servers ensures that each task will be performed as
efficiently as possible.
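To make the request/response roles concrete, here is a minimal sketch (an illustrative example, not from the course text) using Python's standard socket library: the server listens and services a request, and the client issues one. The port number and the "TIME" request are arbitrary choices.

import socket, threading, time

def server():
    # The server's role: accept a connection and service the client's request.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", 5000))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)
            if request == b"TIME":
                conn.sendall(time.ctime().encode())  # the service rendered

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# The client's role: issue a request and wait for the server to service it.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5000))
    cli.sendall(b"TIME")
    print("Server replied:", cli.recv(1024).decode())

In practice the server would loop and handle many clients, which is exactly why server machines are provisioned with faster CPUs, more memory, and more disk than clients.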

Specialized Servers
Servers for large networks have become specialized to accommodate the expanding
needs of users. Following are examples of different types of servers included on many
large networks.
File and Print Servers
File and print servers manage user access and use of file and printer resources. In other
words, file and print servers are used for file and data storage.
Application Servers
Application servers make the server side of client/server applications, as well as the data,
available to clients. For example, servers store vast amounts of data that is organized to
make it easy to retrieve. An application server differs from a file and print server. With a
file and print server, the data or file is downloaded to the computer making the request.
With an application server, the database stays on the server and only the results of a
request are downloaded to the computer making the request.
Mail Servers
Mail servers operate like application servers in that there are separate server and client
applications, with data selectively downloaded from the server to the client.
Fax Servers
Fax servers manage fax traffic into and out of the network by sharing one or more fax
modem boards.
Communications Servers
Communications servers handle data flow and e-mail messages between the servers' own
networks and other networks, mainframe computers, or remote users who dial in to the
servers over modems and telephone lines.
Directory Services Servers
Directory services servers enable users to locate, store, and secure information on the
network. For example, some server software combines computers into logical groupings
(called domains) that allow any user on the network to be given access to any resource on
the network.
Planning for specialized servers becomes important with an expanded network. The
planner must take into account any anticipated network growth so that network use will
not be disrupted if the role of a specific server needs to be changed.
The Role of Software in a Server-Based Environment
No matter how powerful or advanced a server might be, it is useless without an operating
system that can take advantage of its physical resources. Advanced server operating
systems, such as Novell NetWare, Microsoft Windows NT Server, and Banyan VINES, are
designed to take advantage of the most advanced server hardware.

Server-Based Network Advantages


Although it is more complex to install, configure, and manage, a server-based network
has many advantages over a simple peer-to-peer network.
Sharing Resources
Server-based data sharing can be centrally administered and controlled. Because these
shared resources are centrally located, they are easier to find and support than resources
on individual computers.
Security
In a server-based environment, one administrator who sets the policy and applies it to
every user on the network can manage security.
Backup
Backups can be scheduled several times a day or once a week depending on the
importance and value of the data. Server backups can be scheduled to occur
automatically, according to a predetermined schedule, even if the servers are located on
different parts of the network.
Redundancy
Through the use of backup methods known as redundancy systems, the data on any server
can be duplicated and kept online. Even if harm comes to the primary data storage area, a
backup copy of the data can be used to restore the data.
Number of Users
A server-based network can support thousands of users. This type of network would be
impossible to manage as a peer-to-peer network, but current monitoring and network-management utilities make it possible to operate a server-based network for large
numbers of users.
Hardware Considerations
Client computer hardware can be limited to the needs of the user because clients do not
need the additional random access memory (RAM) and disk storage needed to provide
server services. A typical client computer often has no more than a Pentium processor
and 32 megabytes (MB) of RAM.
Advantages of Server-Based Network
Centralized user accounts, security, and access controls simplify network administration.
More powerful equipment means more efficient access to network resources.
A single password for network logon delivers access to all shared resources.
Server-based networking makes the most sense for networks with 10 or more
users or any network where resources are used heavily.

Disadvantages of Server-Based Network
At worst, server failure leads to whole-network failure.
Complex, special-purpose server software requires allocation of expert staff,
which increases expenses.
Dedicated hardware (the server) and special software (the NOS) add to the cost.

Suitability of Server-Based Network
Server-based networking is appropriate in the following situations:
There are more than ten people in your organization.
Many of the people are not sophisticated computer users.
Your organisation maintains information that must be centrally controlled.
A central administrator will be assigned for network setup and maintenance.

Categories of the Networks


Networks come in all shapes and sizes. Networks may be categorized into three distinct
groups depending upon the physical or geographical (size) area that they cover.
These groups are:
Local Area Network (LAN)
Metropolitan Area Network (MAN)
Wide Area Network (WAN)

Local Area Networks (LANs)
A local area network (LAN) is a group of computers and network communication devices
interconnected within a geographically limited area, such as a building or a campus. A
local area network is usually privately owned and links the devices in a single office,
building or campus of up to a few kilometers in size. A LAN can be as simple as two PCs
and a printer in someone's home office, or it can extend throughout a company and
include voice, sound, and video peripherals. LANs are distinguished from other types of
networks by their transmission media and topology. In general, a given LAN will use
only one type of transmission medium. The most common LAN topologies are bus, ring,
and star. Traditionally, LANs have data rates in the 4 to 16 Mbps range. Today, however,
speeds are increasing and can reach 100 Mbps.
LANs are characterized by the following
They transfer data at high speeds (higher bandwidth).
They exist in a limited geographical area.
Connectivity and resources, especially the transmission media, usually are
managed by the company running the LAN.
Wide Area Networks (WANs)
A wide area network (WAN) interconnects LANs. A WAN can be located entirely within
a state or a country, or it can be interconnected around the world. A wide area network
provides long-distance transmission of data, voice, image, and video information over a
large geographical area that may comprise a country, a continent, or even the whole
world.
WANs are characterized by the following:
They exist in an unlimited geographical area.
They usually interconnect multiple LANs.
They often transfer data at lower speeds (lower bandwidth).
Connectivity and resources, especially the transmission media, usually are
managed by a third-party carrier such as a telephone or cable company.

WANs can be further classified into two categories:


Enterprise WANs
An enterprise WAN connects the widely separated computer resources of a single
organization. An organization with computer operations at several distant sites can
employ an enterprise
WAN to interconnect the sites. An enterprise WAN can combine private and commercial
network services, but it is dedicated to the needs of a particular organization.
Global WANs
A global WAN interconnects networks of several corporations or organizations.
Metropolitan Area Network
A metropolitan area network is designed to extend over an entire city. It may be a single
network such as a cable television network, or it may be a means of connecting a number
of LANs into a larger network so that resources may be shared LAN-to-LAN as well as
device-to-device. A MAN may be wholly owned and operated by a private company, or it
may be a service provided by a public company, such as a local telephone company.
Many telephone companies provide a popular MAN service called Switched Multimegabit Data Services (SMDS).
Internetworks (Intranets and Internets)
In recent years, new terms have been introduced: internet, intranet, and extranet.
A company that has a LAN has a network of computers. As a LAN grows, it develops into
an internetwork of computers, referred to as an internet; that is, when two or more networks
are connected, they become an internetwork, or internet (lowercase i).
In the 1990s, graphical utilities (browsers) were developed to view information on a
server. These browsers are used to navigate the Internet (note the capital I). The term
internet (lowercase i) should not be confused with the Internet (uppercase I). The first is a
generic term used to mean an interconnection of networks. The second is the name of a
specific worldwide network.
This terminology initially led to much confusion in the industry, because an internet is a
connection of LANs, and the Internet is the connection of servers on various LANs that
is available to various browser utilities. To avoid this confusion, the term intranet was
coined. This term describes an internetwork of computers on a LAN for a single
organization; the term Internet describes the network of computers you can connect to
using a browser; essentially, an internetwork of LANs available to the public.
An intranet is accessed only by authorized persons, especially members or employees of the
organization. An extranet is an intranet extended to authorized outside users using the same
Internet technology; it is an inter-organizational information system that enables outsiders
to work together with a company's employees and is open to selected suppliers, customers,
and other business partners.

Communication Media
Communication is the activity or process of exchanging information in a mutually
understood form. A computer system can be a vast resource of information. Once this
system is connected to a network, this information can be shared among all other users. A
communication media is required to connect different computer systems to facilitate the
information exchange.
Two main categories:
Guided: wires, cables
Unguided: wireless transmission, e.g. radio, microwave, infrared, sound, sonar
Guided Media
a) Twisted-pair cables:
i. Unshielded Twisted-Pair (UTP) cables
ii. Shielded Twisted-Pair (STP) cables
b) Coaxial cables
c) Fiber-optic cables
Unguided Media
Infrared and microwave (terrestrial and satellite) transmission, covered later in this section.

Guided Transmission Media


Guided/physical/non-wireless/bounded media have a physical link between sender and
receiver (wires, cables). Mainly there are three categories of guided media: twisted-Pair,
coaxial, and fiber-optic
Twisted-Pair Cable
A twisted pair consists of two conductors, usually two strands of copper wire twisted together,
each with its own colored plastic insulation. In the past, two parallel wires were used for
communication. However, electromagnetic interference from devices such as a motor can
create noise on those wires; if the two wires are parallel, the wire closer to the source
of the noise gets more interference than the wire farther away, which results in an uneven
load and a damaged signal. In other words, if the pair of wires is not twisted, electromagnetic
noise from, for example, motors will affect the closer wire more than the farther one, thereby
causing errors in communication. The twisting also reduces the tendency of the cable to
radiate radio-frequency noise that interferes with nearby cables and electronic
components, because the radiated signals from the twisted wires tend to cancel each other
out.

If, however, the two wires are twisted around each other at regular intervals (between 2
and 12 twists per foot), each wire is closer to the noise source for half the time and
farther away for the other half. With twisting, interference is equalized for both
wires. Twisting does not always eliminate the impact of noise, but it does significantly
reduce it.
Twisted-pair cable has become the dominant cable type for all new network designs that
employ copper cable. Among the several reasons for the popularity of twisted-pair cable,
the most significant is its low cost. Your telephone cable is an example of a twisted-pair
type cable.
Twisted cable comes in two forms: unshielded and shielded.
1. Unshielded Twisted-Pair (UTP) cable
UTP consists of a number of twisted pairs with a simple plastic casing, usually wrapped
inside a plastic cover (for mechanical protection). Unshielded twisted-pair cable does not
incorporate a braided shield into its structure. UTP is commonly used in telephone
systems.

The Electronic Industries Association (EIA) divides UTP into different categories by
quality grade. The rating for each category refers to conductor size, electrical
characteristics, and twists per foot.
Categories of UTP Cables
Category 1: Traditional UTP telephone cabling, designed to carry voice but not data. It is
the lowest quality, good only for voice, mainly found in very old buildings, and not
recommended for new installations.
Category 2: Certifies UTP cabling for bandwidths up to 4 Mbps and consists of four pairs
of wires. Because 4 Mbps is slower than most networking technologies in use today,
Category 2 is rarely encountered in networking environments; however, it is adequate for
voice and low data rates (up to 4 Mbps for low-speed token ring networks).
Category 3: Certifies UTP cabling for bandwidths up to 10 Mbps. This covers most
conventional networking technologies, such as 10BASE-T Ethernet and 4 Mbps token ring.
Category 3 consists of four pairs, each having a minimum of three twists per foot. It is
common in phone networks in residential buildings.
Category 4: Certifies UTP cabling for bandwidths up to 16 Mbps. This includes 10BASE-T
Ethernet and 16 Mbps token ring. Category 4 consists of four pairs and is used mainly for
token ring.
Category 5 (and 5e): Used for data transmission up to 100 Mbps. Category 5 also consists
of four pairs and is common in networks targeted at high-speed data communications.
Category 6: Has more twists than Category 5 and supports up to 1 Gbps.
N.B. The price of cable increases as you move from Category 1 to Category 6.
2. Shielded Twisted-Pair (STP) cable
UTP is particularly prone to crosstalk, and the shielding included with STP is designed
specifically to reduce this problem. STP includes shielding to reduce crosstalk as well as
to limit the effects of external interference. For most STP cables, this means that the
wiring includes a wire braid inside the cladding or sheath material as well as a foil wrap
around each individual wire. Shielded twisted-pair cabling consists of one or more
twisted pairs of cables enclosed in a foil wrap and woven copper shielding. This shield
improves the cable's transmission and interference characteristics, which, in turn, support
higher bandwidth over longer distances than UTP.
STP cables are similar to UTP cables, except there is a metal foil or braided-metal-mesh
cover that encases each pair of insulated wires.

Coaxial Cable
Coaxial cables were the first cable types used in LANs. Coaxial cable gets its name
because two conductors share a common axis; the cable is most frequently referred to as
a coax. In general, coaxial cables, or coax, carry signals of higher frequencies (100 kHz to
500 MHz) than UTP cables. The outer metallic wrapping serves both as a shield against noise
and as the second conductor that completes the circuit. Coaxial cable, commonly called
coax, has two conductors that share the same axis. A solid copper wire runs down the
center of the cable, and this wire is surrounded by plastic foam insulation. The foam is
surrounded by a second conductor, wire mesh tube, metallic foil, or both. The wire mesh
protects the wire from EMI. It is often called the shield. A tough plastic jacket forms the
cover of the cable, providing protection and insulation.
The components of a coaxial cable are as follows:
A center conductor, usually solid copper wire, although sometimes made of stranded wire.
An outer conductor that forms a tube surrounding the center conductor. This conductor can
consist of braided wires, metallic foil, or both. The outer conductor, frequently called the
shield, serves as a ground and also protects the inner conductor from EMI.
An insulation layer that keeps the outer conductor spaced evenly from the inner conductor.
A plastic encasement (jacket) that protects the cable from damage.

A type of coaxial cable that you may be familiar with is your television cable.
Types of Coaxial Cable (Thinnet, Thicknet)
Where Ethernet is concerned, there are two types of coaxial cable: thin Ethernet
(also known as thinnet or thinwire) and thick Ethernet (also known as thicknet or
thickwire). The Institute of Electrical and Electronics Engineers (IEEE) designates these
cable types as 10Base2 and 10Base5, respectively, where these notations indicate:
Total bandwidth for the technology: in this case, 10 means 10 Mbps.
Base: indicates that the network uses baseband signaling; this applies to both types of
cable.
2 or 5: a rough indicator of maximum segment length, measured in hundreds of meters;
thinwire supports a maximum segment length of 185 meters, which rounds up to 200;
thickwire supports a maximum segment length of 500 meters.
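As a small illustration of this naming convention, the hypothetical helper below (an assumption for this text, not a standard library routine) splits an IEEE designation into the speed, signaling method, and segment/medium code described above.

def decode_ethernet_name(name):
    # e.g. "10Base2" -> speed 10 Mbps, baseband signaling, ~185 m coax segments
    speed, code = name.upper().split("BASE")
    meaning = {"2": "coax, 185 m segments (rounded up to 200)",
               "5": "coax, 500 m segments",
               "-T": "unshielded twisted pair",
               "-FL": "fiber-optic cable"}.get(code, code)
    return {"speed_mbps": int(speed), "signaling": "baseband", "medium": meaning}

print(decode_ethernet_name("10Base2"))
print(decode_ethernet_name("10Base5"))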

Thinnet
Thinnet is a light and flexible cabling medium that is inexpensive and easy to install.
Thinnet cable can reliably transmit a signal for 185 meters (about 610 feet).

Thicknet
Thicknet (big surprise) is thicker than Thinnet. Thicknet coaxial cable is approximately
0.5 inches (13 mm) in diameter. Because it is thicker and does not bend as readily as
Thinnet, Thicknet cable is harder to work with. A thicker center core, however, means that
Thicknet can carry signals a longer distance than Thinnet. Thicknet can transmit a
signal approximately 500 meters (1,650 feet). Thicknet cable is sometimes called
Standard Ethernet. Thicknet can be used to connect two or more small Thinnet LANs
into a larger network. Because of its greater size, Thicknet is also more expensive than
Thinnet. However, Thicknet can be installed relatively safely outside, running from
building to building.

Fiber Optic Cable


Fibre optic cable transmits light signals rather than electrical signals. It is enormously
more efficient than the other network transmission media. As soon as it comes down in
price (both in terms of the cable and installation cost), fiber optic will be the choice for
network cabling.
Light travels at 3 x 10^8 m/s in free space, the fastest possible speed in the universe.
Light slows down in denser media, e.g. glass. Refraction occurs at the interface between
two media, with light bending away from the normal when it enters a less dense medium.

A light pulse can be used to signal a 1 bit; the absence of a pulse signals a 0 bit.
Visible light has a frequency of about 10^8 MHz (about 10^14 Hz), so the bandwidth of an
optical transmission system is potentially enormous.
An optical transmission system has three components: the transmission medium, the light
source and the detector. The transmission medium is an ultra-thin fiber of glass or fused
silica. The light source is either an LED (light-emitting diode) or a laser diode, both of
which emit light pulses when an electrical current is applied. The detector is a photodiode,
which generates an electrical pulse when light falls on it.
An optical fiber consists of a core (denser material) and a cladding (less dense material).
The simplest type is a multimode step-index optical fiber. Multimode means there are
multiple light paths, whereas step-index means the refractive index follows a step-function
profile (i.e. there is an abrupt change of refractive index between the core and the
cladding). Light bounces back and forth along the core.
Common light sources are LEDs and lasers.
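The bouncing relies on total internal reflection at the core/cladding boundary. A short worked sketch follows; the refractive indices 1.48 (core) and 1.46 (cladding) are typical illustrative values, not figures from the text.

import math

n_core, n_cladding = 1.48, 1.46          # assumed illustrative values
# Light hitting the boundary at more than the critical angle (measured from
# the normal) is reflected back into the core rather than refracted out.
theta_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle ~ {theta_c:.1f} degrees from the normal")   # ~80.6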

Advantages
Noise resistance: external light is blocked by the outer jacket.
Less signal attenuation: a signal can run for miles without regeneration
(currently, the lowest measured loss is about 4%, or 0.16 dB, per km).
Higher bandwidth: fiber-optic cable can support dramatically higher bandwidths
(and hence data rates) than all other cables. Currently, data rates and bandwidth
utilization over fiber-optic cable are limited not by the medium but by the signal
generation and reception technology available. A typical bandwidth for fiber optic
is 100 Mbps to 1 Gbps.
Disadvantages
Cost: optical fibers are expensive.
Installation/maintenance: any crack in the core will degrade the signal, and all
connections must be perfectly aligned.
Fragility: glass fiber is more easily broken than wire.

Unguided Transmission Media


Unguided/non-physical/wireless/unbounded media have no physical link between sender
and receiver.
The most common wireless transmission methods are as follows:
Infrared Transmission
Infrared media uses infrared light to transmit signals. LEDs transmit the signals, and
photodiodes receive the signals. The remote controls we use for televisions, VCRs, and CD
players use infrared technology to send and receive signals. This technology is also used
for network communication. Infrared transmissions are commonly used for LAN
transmissions, yet they can also be employed for WAN transmissions.
Because infrared signals are in a high frequency range, they have good throughput. Infrared
signals do have a downside: the signals cannot penetrate walls or other objects, and they
are diluted by strong light sources.

Microwave
Microwave technology has applications in all three of the wireless networking scenarios:
LAN, extended LAN, and mobile networking. Microwave communication can take two
forms: terrestrial (ground) links and satellite links. The frequencies and technologies
employed by these two forms are similar, but distinct differences exist between them.

Terrestrial Microwave
Microwaves do not follow the curvature of the earth and therefore require line-of-sight
transmission and reception equipment. The distance coverable by line-of-sight signals
depends to a large extent on the height of the antenna: the taller the antenna, the longer
the sight distance. Height allows the signals to travel farther without being stopped by the
curvature of the earth and raises the signals above many surface obstacles, such as low
hills and tall buildings, that would otherwise block transmission.
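A standard line-of-sight approximation (not given in the text, but consistent with it) makes the effect of antenna height concrete: the radio horizon is roughly the square root of twice the Earth's radius times the antenna height.

import math

EARTH_RADIUS_KM = 6371

def radio_horizon_km(antenna_height_m):
    # d = sqrt(2 * R * h); height in metres, result in kilometres
    return math.sqrt(2 * EARTH_RADIUS_KM * antenna_height_m / 1000)

for h in (10, 50, 200):
    print(f"{h:>4} m antenna -> ~{radio_horizon_km(h):.0f} km line of sight")

Taller antennas see noticeably farther, which is why microwave towers are placed on hills and rooftops.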
Microwave signals propagate in one direction at a time, which means that two
frequencies are necessary for two-way communication such as telephone
communication. One frequency is reserved for transmission in one direction and the other
for transmission in the other direction. Each frequency requires its own transmitter and
receiver. Today, both pieces of equipment usually are combined in a single device called a
transceiver, which allows a single antenna to serve both frequencies and functions.
A microwave link is frequently used to transmit signals in instances in which it would be
impractical to run cables.

Satellite communication
Satellite transmission is much like line of sight microwave transmission in which one of
the stations is a satellite orbiting the earth. Satellite microwave systems relay
transmissions through communication satellites that operate in geosynchronous orbits
22,300 miles above the earth. Satellites orbiting at this distance remain located above a
fixed point on earth. The principle is the same as terrestrial microwave, with a satellite
acting as a super-tall antenna and repeater. Earth stations use parabolic antennas (satellite
dishes) to communicate with satellites. These satellites then can retransmit signals in
broad or narrow beams, depending on the locations set to receive the signals. When the
destination is on the opposite side of the earth, for example, the first satellite cannot
transmit directly to the receiver and thus must relay the signal through another satellite

Although in satellite transmission signals must still travel in straight lines, the limitations
imposed on distance by the curvature of the earth are reduced. In this way, satellite relays
allow microwave signals to span continents and oceans with a single bounce.
Satellite microwave can provide transmission capability to and from any location on
earth, no matter how remote. This advantage makes high-quality communication
available to undeveloped parts of the world without requiring a huge investment in
ground-based infrastructure. Satellites themselves are extremely expensive, of course, but
leasing time or frequencies on one can be relatively cheap.
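A quick back-of-the-envelope sketch shows the price of that altitude: even at the speed of light, a single ground-satellite-ground hop to a geosynchronous satellite takes roughly a quarter of a second. The figures below are approximate.

ALTITUDE_KM = 35_786          # ~22,300 miles above the equator
SPEED_OF_LIGHT_KM_S = 300_000

one_hop_s = 2 * ALTITUDE_KM / SPEED_OF_LIGHT_KM_S   # up to the satellite and back down
print(f"single-hop propagation delay ~ {one_hop_s * 1000:.0f} ms")   # ~239 ms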

Transmission Impairments:
With any communication system, there is a high possibility that the signal that is received
will differ from the signal that is transmitted as a result of various transmission
impairments. For analog signals, these impairments introduce various random
modifications that degrade the signal quality. For digital signals, bit errors are introduced:
A binary 1 is transformed into a binary 0, and vice versa.
The most significant impairments are the following:
Attenuation
Noise
EMI
Crosstalk
Attenuation
When an electromagnetic signal is transmitted along any medium, it gradually becomes
weaker at greater distances; this is referred to as attenuation. To solve this problem,
an amplifier is used. The amplifier boosts the signal and extends the transmission distance.
Attenuation is a measure of how much a signal weakens as it travels through a medium.
Attenuation is a contributing factor to why cable designs must specify limits on the
lengths of cable runs. When signal strength falls below certain limits, the electronic
equipment that receives the signal can experience difficulty isolating the original signal
from the noise present in all electronic transmissions.
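Attenuation is normally quoted in decibels, dB = 10 log10(P_in / P_out). The sketch below is an illustrative calculation using the ~0.16 dB/km fiber figure quoted earlier; it shows how the surviving fraction of signal power drops with distance.

def remaining_power(db_per_km, km):
    # fraction of the input power left after a run of the given length
    return 10 ** (-(db_per_km * km) / 10)

print(f"{remaining_power(0.16, 1) * 100:.1f}% of the power left after 1 km")     # ~96.4%
print(f"{remaining_power(0.16, 100) * 100:.1f}% of the power left after 100 km")  # ~2.5%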

Noise

Random electrical signals that can be picked up by the transmission medium and result in
degradation of the data.
Electromagnetic Interference (EMI)
Electromagnetic interference (EMI) consists of outside electromagnetic noise that
distorts the signal in a medium. When you listen to an
AM radio, for example, you often hear EMI in the form of noise caused by nearby motors
or lightning. Some network media are more susceptible to EMI than others.
Cross talk
Crosstalk is a special kind of interference caused by adjacent wires. Crosstalk occurs
when the signal from one wire is picked up by another wire. You may have experienced
this when talking on a telephone and hearing another conversation going on in the
background. Crosstalk is a particularly significant problem with computer networks
because large numbers of cables often are located close together, with minimal attention
to exact placement.

NETWORK TOPOLOGIES AND ARCHITECTURES


Networks come in a few standard forms or architectures, and each form is a complete
system of compatible hardware, protocols, transmission media, and topologies. A
topology is a map of the network. It is a plan for how the cabling will interconnect the
nodes, or devices, and how the nodes will function in relation to one another.
Several factors shape the various network topologies, and one of the most important is
the choice of an access method. An access method is a set of rules for sharing the
transmission medium.
Media access methods (Access Methods)
An access method is a set of rules governing how the network nodes share the
transmission medium. Just as humans follow sharing rules (philosophies), computers sharing
a medium are guided by two fundamental philosophies: 1) first come, first served, and 2)
take turns.
These philosophies are the principles defining the most important types of media
access methods:
a) Contention (Contention-based access control)
Contention means that the computers are contending for use of the transmission medium.
In pure contention-based access control, any computer can transmit at any time (first
come, first served). This system breaks down when two computers attempt to transmit at
the same time, in which case a collision occurs.

Mechanisms exist to minimize the number of collisions. They include:
Carrier sensing, whereby each computer listens to the network before attempting
to transmit; if the network is busy, the computer refrains from transmitting until
the network quiets down. This simple "listen before talking" strategy can
significantly reduce collisions.
Collision detection, whereby computers continue to listen to the network as they
transmit. If a computer detects another signal that interferes with the signal it is
sending, it stops transmitting. Both computers then wait a random amount of time
and attempt to retransmit.
Unless the network is extremely busy, collision detection along with carrier sensing can
manage a large volume of transmissions. Carrier sensing and collision detection used
together form the protocol used in all types of Ethernet, which is Carrier Sense Multiple
Access with Collision Detection (CSMA/CD).
Contention-based networks are called probabilistic because a computer's chance of being
permitted to transmit cannot be precisely predicted.
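The following toy simulation (an illustrative sketch, not a real network driver) captures the "listen before talking, back off after a collision" behaviour described above. The slot model, inter-frame gap, and back-off limits are simplifying assumptions.

import random

def csma_cd(stations, slots=200):
    backoff = {s: 0 for s in stations}     # slots each station must still wait
    attempts = {s: 0 for s in stations}    # consecutive collisions per station
    delivered = []
    for t in range(slots):
        ready = [s for s in stations if backoff[s] == 0]
        if len(ready) == 1:                # only one talker: the frame goes through
            delivered.append((t, ready[0]))
            attempts[ready[0]] = 0
            backoff[ready[0]] = random.randint(1, 5)        # pause before the next frame
        elif len(ready) > 1:               # simultaneous transmissions: collision
            for s in ready:
                attempts[s] += 1
                # binary exponential back-off: wait 0..2^n - 1 extra slots
                backoff[s] = 1 + random.randint(0, 2 ** min(attempts[s], 10) - 1)
        backoff = {s: max(0, b - 1) for s, b in backoff.items()}
    return delivered

frames = csma_cd(["A", "B", "C"])
print(f"{len(frames)} frames delivered collision-free in 200 slots")

Because the back-off delays are random, the number of frames each station delivers varies from run to run, which is exactly why contention access is called probabilistic.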
b) Token Passing (token-passing access control)
Token passing utilizes a frame called a token, which circulates around the network. A
computer that needs to transmit must wait until it receives the token, at which time the
computer is permitted to transmit. When the computer is done transmitting, it passes the
token frame to the next station on the network. Token ring uses a token-passing
architecture that adheres to the IEEE 802.5 standard. The topology is physically a star,
but token ring uses a logical ring to pass the token from station to station. Each node must
be attached to a concentrator called a multistation access unit (MSAU or MAU).
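A minimal sketch of the circulating-token idea follows (station names and frames are made up for illustration): only the current token holder may transmit, and the token is then passed to the next station in the logical ring.

from collections import deque

def token_ring(stations, queued_frames, laps=3):
    ring = deque(stations)                 # logical ring order
    log = []
    for _ in range(laps * len(stations)):
        holder = ring[0]                   # station currently holding the token
        if queued_frames.get(holder):
            log.append((holder, queued_frames[holder].pop(0)))   # transmit one frame
        ring.rotate(-1)                    # pass the token downstream
    return log

pending = {"A": ["frame-1"], "C": ["frame-2", "frame-3"]}
for holder, frame in token_ring(["A", "B", "C", "D"], pending):
    print(f"{holder} transmitted {frame}")

Because every station waits at most one full circulation of the token, the worst-case access delay is bounded; this is the "deterministic" property mentioned below.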

Several network standards employ token passing access control:


Token ring. The most common token-passing standard, embodied in IEEE
standard 802.5.
IEEE standard 802.4. Implemented infrequently; defines a bus network that also
employs token passing. ARCNet can deploy this standard
FDDI. A 100Mbps fiber-optic network standard that uses token passing and rings
in much the same manner as 802.5 token ring.
Token passing is more appropriate than contention under the following conditions:
When the network is carrying time-critical data. Because token passing results in
more predictable delivery, token passing is called deterministic.
When the network experiences heavy utilization. Performance typically falls off
more gracefully with a token-passing network than with a contention-based
network. Token-passing networks cannot become gridlocked due to excessive
numbers of collisions.
When some stations should have higher priority than others. Some token-passing
schemes support priority assignments
c) Demand Priority
Demand priority is an access method used with the new 100Mbps 100VG-AnyLAN
standard. Although demand priority is officially considered a contention-based access
method, demand priority is considerably different from the basic Carrier Sense Multiple
Access with Collision Detection (CSMA/CD) ethernet. In demand priority, network
nodes are connected to hubs, and those hubs are connected to other hubs. Contention,
therefore, occurs at the hub. (100VG-AnyLAN cables can actually send and receive data
at the same time.) Demand priority provides a mechanism for prioritizing data types. If
contention occurs, data with a higher priority takes precedence.

d) Polling (polling-based access control)


Polling-based systems require a device (called a controller, or master device) to poll other
devices on the network to see whether they are ready to either transmit or receive data.
This access method is not widely used on networks because the polling itself can cause a

fair amount of network traffic. A common example of polling is when your computer
polls its printer to receive a print job.

Network Topologies
A topology defines the arrangement of nodes, cables, and connectivity devices that make
up the network. Two categories form the basis for all discussions of topologies:
Physical topology: describes the actual layout of the network transmission media.
Logical topology: describes the logical pathway a signal follows as it passes
among the network nodes.
Physical and logical topologies can take several forms. The most common topologies,
and the most important for understanding Ethernet and token ring, are:
Bus topologies
Ring topologies
Star topologies
Mesh topology
Each topology has its own strengths and weaknesses.
a) Bus Topologies
A bus physical topology is one in which all devices connect to a common, shared cable
(sometimes called the backbone). Bus topology is suited for the networks that use
contention-based access methods such as Carrier Sense Multiple Access with Collision
Detection (CSMA/CD) Ethernet which is the most common contention-based network
architecture, typically uses bus as a physical topology. Even 10BASE-T ethernet
networks use the bus as a logical topology but are configured in a star physical topology.

It is important to note that the bus topology is a passive topology. This means that
computers on the bus only listen for data being sent; they are not responsible for moving
the data from one computer to the next. If one computer fails, it has no effect on the rest
of the network. In an active topology network, by contrast, the computers regenerate signals
and are responsible for moving the data through the network.
For example, if computers 1, 2, 3, 4, and 5, in that order, are part of a logical ring,
computer 1 passes the token to computer 2, which passes it to computer 3, which passes it
to computer 4, which passes the token to computer 5, and finally computer 1 gets the token
back (the starting point).

How a Bus Network Works


On a typical bus network, all the computers are connected to a single cable. When one
computer sends a signal along the cable, all the computers on the network receive the
information, but only one (the one whose address matches the address encoded in the
message) accepts the information. The rest disregard the message. Only one computer at
a time can send a message; therefore, the number of computers attached to a bus network
can significantly affect the speed of the network. A computer must wait until the bus is
free before it can transmit.
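A simplified sketch of that behaviour (the addresses and frame format are invented for illustration): every station on the shared cable "hears" the frame, but only the one whose address matches the destination accepts it.

def broadcast_on_bus(frame, stations):
    for station in stations:               # the shared cable reaches every station
        if station["addr"] == frame["dst"]:
            station["inbox"].append(frame["data"])   # matching address: accept
        # any other station simply disregards the frame

stations = [{"addr": a, "inbox": []} for a in ("AA", "BB", "CC")]
broadcast_on_bus({"dst": "BB", "data": "hello"}, stations)
print([(s["addr"], s["inbox"]) for s in stations])
# [('AA', []), ('BB', ['hello']), ('CC', [])]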
Another important issue in a bus network is termination. Without termination, when the
signal reaches the end of the wire, it bounces back and travels back up the wire. When a
signal echoes back and forth along an unterminated bus, it is called ringing. To stop the
signals from ringing, terminators are attached at either end of the cable. The terminator
absorbs the signals and stops the ringing; terminators must be placed at each end of the
backbone cable to prevent signals from reflecting back on the cable and causing
interference.
Advantages of Bus topology
The bus is simple, reliable in very small network, and easy to use.

The bus requires the least amount of cable to connect the computers together and
is therefore less expensive than other cabling arrangements.

It is easy to extend a bus. Two cables can be joined into one longer cable with a
BNC barrel connector, making a longer cable and allowing more computers to
join the network.

Disadvantages of Bus topology

Heavy network traffic can slow a bus considerably.

A break in the cable or lack of proper termination can bring the network down.

It is difficult to troubleshoot a bus.

Bus topology is appropriate in the following situations:


The network is small
The network will not be frequently reconfigured.
The least expensive solution is required.
The network is not expected to grow much
b) Ring Topologies
Ring topologies are wired in a circle. Each node is connected to its neighbors on either
side, and data passes around the ring in one direction only. Each device incorporates a
receiver and a transmitter and serves as a repeater that passes the signal on to the next
device in the ring. Because the signal is regenerated at each device, signal degeneration is
low. Ring topologies are ideally suited for token-passing access methods. The token
passes around the ring, and only the node that holds the token can transmit data. Ring
physical topologies are quite rare.

The ring topology is almost always implemented as a logical topology. Token ring, for
example, the most widespread token-passing network, always arranges the nodes in a
physical star (with all nodes connecting to a central hub), but passes data in a logical ring.
How a Ring Network Works
Every computer is connected to the next computer in the ring, and each retransmits what it
receives from the previous computer. Messages flow around the ring in one direction.
Since each computer retransmits what it receives, a ring is an active network and is not
subject to the signal-loss problem a bus experiences. There is no termination because there
is no end to the ring.
Token passing is a method of sending data in a ring. A small packet, called the token, is
passed around the ring to each computer in turn. If a computer has information to send, it
modifies the token, adds address information and the data, and sends it down the ring. The
information travels around the ring until it either reaches its destination or returns to the
sender. When the intended destination computer receives the packet, it returns a message
to the sender acknowledging its arrival. A new token is then created by the sender and sent
down the ring, allowing another station to capture the token and begin transmission.
A token can circle a ring 200 meters in diameter at about 10,000 times a second.
Advantages of Ring topology
All the computers have equal access to the network.
Even with many users, network performance is even
Allows error checking, and acknowledgement.
Disadvantages of Ring topology
Failure of one computer can affect the whole network.
It is difficult to troubleshoot the ring network.
Adding or removing computers disturbs the network.
Ring topology is appropriate in the following situations:

The network must operate reasonably under a heavy load


A higher-speed network is required.
The network will not be frequently reconfigured.

c) Star Topologies
Star topologies require that all devices connect to a central hub. The hub receives signals
from other network devices and routes the signals to the proper destinations. Star hubs
can be interconnected to form tree, or hierarchical, network topologies. A star physical
topology is often used to implement a bus or ring logical topology.

How a Star Network Works


Each computer on a star network communicates with a central hub that resends the
message either to all the computers (in a broadcast star network) or only to the
destination computer (in a switched star network). The hub can be active or passive. An
active hub regenerates the electrical signal and sends it to all the computers connected to

it. This type of hub is often called a multiport repeater. An active hub requires electrical
power to run. A passive hub, such as a wiring panel, merely acts as a connection point and
does not amplify or regenerate the signal. Passive hubs do not require electrical power to
run. Using a hybrid hub, several types of cable can be used to implement a star network. A
hybrid hub is used to connect different types of cables and to maximise the network's
efficiency by utilising the benefits of the different cables.
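The contrast between a broadcast star and a switched star can be sketched in a few lines (the port names and frame format are illustrative assumptions): a hub repeats a frame out of every other port, while a switch forwards it only to the destination port.

def hub_forward(frame, ports):
    # broadcast star: repeat the frame to every port except the one it came in on
    return {p: frame["data"] for p in ports if p != frame["src"]}

def switch_forward(frame, ports):
    # switched star: deliver only to the destination port
    return {frame["dst"]: frame["data"]} if frame["dst"] in ports else {}

frame = {"src": "port1", "dst": "port3", "data": "hi"}
print(hub_forward(frame, ["port1", "port2", "port3"]))     # port2 and port3 both receive it
print(switch_forward(frame, ["port1", "port2", "port3"]))  # only port3 receives it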
Advantages of the Star
It is easy to modify and add new computers to a star network without disturbing
the rest of the network. You simply run a new line from the computer to the
central location and plug it into the hub. When the capacity of the central hub is
exceeded, it can be replaced with one that has a larger number of ports to plug
lines into (or multiple hubs can be connected together to extend the number of
ports)
The centre of a star network is a good place to diagnose network faults. Intelligent
hubs (hubs with microprocessors that implement features in addition to repeating
network signals) also provide for centralised monitoring and management of the
network.
Single computer failure does not necessarily bring down the whole star network.
Several types of cable can be used in the same network with a hybrid hub.
Disadvantages of Star
If the central hub fails, the whole network fails to operate.
It costs more to cable a star network.
Star topology is appropriate in the following situations:
It must be easy to add or remove client computers.
It must be easy to troubleshoot.
The network is large.
The network is expected to grow in the future.
Mesh Topology
A mesh topology is really a hybrid model representing an all-channel sort of physical
topology. It is a hybrid because a mesh topology can incorporate all the topologies
covered to this point. It is an all-channel topology in that every device is directly
connected to every other device on the network. When a new device is added, a
connection to all existing devices must be made. This provides for a great deal of fault
tolerance, but it involves extra work on the part of the network administrator. That is, if
any transmission media breaks, the data transfer can take alternative routes. However,
cabling becomes much more extensive and complicated
Most mesh topology networks are not true mesh networks. Rather, they are hybrid mesh
networks, which contain some redundant links but not all.
Advantages of Mesh
Because of the dedicated links, there are no traffic problems between computers.
Failure of one node does not affect the rest of the network.
Because of the dedicated links, privacy and security are guaranteed.
Point-to-point links make fault identification and fault isolation easy.
Disadvantages of Mesh
Due to the amount of cabling and the number of input/output ports, it is expensive.
A large amount of space is required to run the cables.
Installation and reconfiguration are difficult.
When a Mesh Topology Is Appropriate
Direct transmission is required for privacy reasons.
A dedicated link is needed for fast transmission.
Variations of the Major Topologies
Hybrid Star
A star network can be extended by placing another star hub where a computer might
otherwise go, allowing several more computers or hubs to be connected to that hub
Star Bus
The star bus topology combines the bus and the star, linking several star hubs together
with bus trunks. If one computer fails, the hub can detect the fault and isolate the
computer. If a hub fails, computers connected to it will not be able to communicate, and
the bus network will be broken into two segments that cannot reach each other.

Hybrid Topologies
Often a network combines several topologies, with subnetworks linked together into a larger
topology. For instance, one department of a business may have decided to use a bus
topology while another department uses a ring. The two can be connected to each other via
a central controller in a star topology.

NETWORK ARCHITECTURES
A network architecture is the design specification of the physical layout of connected
devices. This includes the cable being used (or the wireless media being deployed), the
types of network cards being deployed, and the mechanism through which data is sent on
to the network and passed to each device. Network architecture encompasses the total
design and layout of the network.
Ethernet
Ethernet is a very popular local area network architecture based on the CSMA/CD access
method.
The original ethernet specification was the basis for the IEEE 802.3 specifications
(Networking Standards to be covered later)
In present usage, the term ethernet refers to original ethernet (or Ethernet II, the latest
version) as well as the IEEE 802.3 standards
The different varieties of Ethernet networks are commonly referred to as ethernet
topologies. Typically, ethernet networks can use a bus physical topology, although, as
mentioned earlier, many
varieties of ethernet such as 10BASE-T use a star physical topology and a bus logical
topology.
Ethernet topologies:
10BASE2
10BASE5
10BASE-T
10BASE-FL
100VG-AnyLAN
100BASE-X
Note that the name of each ethernet topology begins with a number (10 or 100). That
number specifies the transmission speed for the network. For instance, 10BASE5 is
designed to operate at 10Mbps, and 100BASE-X operates at 100Mbps. BASE specifies
that baseband transmissions are being used. The "T" is for unshielded twisted-pair
wiring, "FL" is for fiber-optic cable, "VG-AnyLAN" implies voice grade, and "X"
implies multiple media types.
Ethernet networks transmit data in small units called frames. The size of an ethernet
frame can be anywhere between 64 and 1,518 bytes. Eighteen bytes of the total frame
size are taken up by frame overhead, such as the source and destination addresses,
protocol information, and error-checking information.
There are many different types of ethernet frames, such as the Ethernet II, 802.2, and
802.3 frames to name a few. It is important to remember that 802.2 and 802.3 are IEEE
specifications on how information is transferred onto the transmission media (Data Link
layer) as well as the specification on how the data should be packaged.
A typical Ethernet II frame has the following sections:
Preamble: A field that signifies the beginning of the frame.
Addresses: Fields that identify the source and destination addresses for the frame.
Type: A field that designates the Network layer protocol.
Data: The data being transmitted.
CRC: A cyclical redundancy check for error checking.
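To make the layout concrete, the hedged sketch below packs a minimal Ethernet II frame in the field order just listed, using Python's struct and zlib modules. The MAC addresses, the EtherType 0x0800 (IPv4), and the payload are illustrative values; real adapters generate the preamble and frame check sequence in hardware, and the exact bit ordering of the CRC on the wire is glossed over here.

import struct, zlib

dst = bytes.fromhex("aabbccddeeff")             # destination address (6 bytes)
src = bytes.fromhex("112233445566")             # source address (6 bytes)
eth_type = struct.pack("!H", 0x0800)            # Type field: 0x0800 = IPv4
payload = b"hello"
payload += b"\x00" * max(0, 46 - len(payload))  # pad the data to the 46-byte minimum

frame_wo_crc = dst + src + eth_type + payload
fcs = struct.pack("<I", zlib.crc32(frame_wo_crc))   # 4-byte CRC-32 check value
frame = frame_wo_crc + fcs

print(len(frame), "bytes")   # 64 bytes: the minimum frame size; 18 of them are overhead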

Ethernet generally is used on light-to-medium traffic networks and performs best when a
network's data traffic is transmitted in short bursts. Ethernet is the most commonly used
network standard.
Ethernet Cabling
You can use a variety of cables to implement ethernet networks. Many of these cable
types, such as Thinnet, Thicknet, UTP, and STP, were described earlier.
Ethernet networks traditionally used coaxial cables of several different types. Fiber-optic
cables now are frequently employed to extend the geographic range of Ethernet networks.
Token Ring
Token ring uses a token-passing architecture that adheres to the IEEE 802.5 standard
(described later).
The topology is physically a star, but token ring uses a logical ring to pass the token from
station to station. Each node must be attached to a concentrator called a multistation
access unit (MSAU or MAU).
In the earlier discussion of token passing, it may have occurred to you that if one
computer crashes, the others will be left waiting forever for the token. MSAUs add fault
tolerance to the network, so that a single failure doesn't stop the whole network. The
MSAU can determine when the network adapter of a PC fails to transmit and can bypass
it.

Token-ring network interface cards can run at 4Mbps or 16Mbps. Although 4Mbps cards can run at that data rate only, 16Mbps cards can be configured to run at 4 or 16Mbps. All cards on a given network ring must run at the same rate. If they are not configured this way, either the machine connected to the mismatched card cannot get network access, or the entire network can grind to a halt.
Each node acts as a repeater that receives tokens and data frames from its nearest active upstream neighbor (NAUN). After the node processes a frame, the frame is transmitted downstream to the next attached node. Each token makes at least one trip around the entire ring and then returns to the originating node. Workstations that detect problems send a beacon to identify the address of the potential failure.

Passing Data on Token Rings

A frame called a token perpetually circulates around a token ring. The computer that
holds the token has control of the transmission medium. The actual process is as follows:
1. A computer in the ring captures the token.
2. If the computer has data to transmit, it holds the token and transmits a data frame.
3. Each computer in the ring checks to see whether it is the intended recipient of the frame.
4. When the frame reaches the destination address, the destination PC copies the frame to a receive buffer, updates the frame status field of the data frame, and puts the frame back on the ring.
In 16Mbps token-ring networks, the sending device can utilize an optional enhancement,
known as early token release. This is where the sending device issues a token
immediately after sending a frame, not waiting for its own header to return. This speeds
up the data transfers on the network.

5. When the computer that originally sent the frame receives it from the ring, it
acknowledges a successful transmission, takes the frame off the ring, and places the
token back on the ring.
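The toy Python simulation below walks through these steps. The station names, the single pending frame, and the print statements are purely illustrative; it does not model the real 802.5 frame format or timing.

# A toy simulation of the token-passing steps above: a station transmits only while
# it holds the token, and its frame circles the ring back to it before the token is released.
stations = ["A", "B", "C", "D"]           # logical ring order
pending = {"B": ("D", "hello")}           # station B wants to send "hello" to D

def circulate_once(token_holder: str) -> None:
    if token_holder in pending:
        dest, data = pending.pop(token_holder)
        idx = stations.index(token_holder)
        for step in range(1, len(stations) + 1):      # frame travels downstream
            node = stations[(idx + step) % len(stations)]
            if node == dest:
                print(f"{node} copies frame '{data}' and marks it as received")
        print(f"{token_holder} sees its frame return, removes it, releases the token")
    nxt = stations[(stations.index(token_holder) + 1) % len(stations)]
    print(f"token passed from {token_holder} to {nxt}")

circulate_once("A")   # A has nothing to send, so it just passes the token
circulate_once("B")   # B transmits, D copies the frame, B strips it from the ring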

ARCNet
ARCNet is an older architecture that is not found too often in the business world.
ARCNet utilizes a token-passing protocol that can have a star or bus physical topology.
These segments can be connected with either active or passive hubs.
ARCNet, when connected in a star topology, can use either twisted pair or coaxial cable
(RG-62).
If coaxial cable is used to create a star topology, the ends of the cable can be attached
directly to a BNC connector, without a terminator. When in a bus topology, ARCNet uses
a 93-ohm terminator, which is attached to each end of the bus in a similar fashion to an
Ethernet bus.
Some important facts about ARCNet are as follows:
ARCNet uses a 93-ohm terminator. (Ethernet uses a 50-ohm terminator.)
ARCNet uses a token-passing architecture, but does not require an MSAU (multistation access unit).
The maximum length between a node and an active hub is 610 meters.
The maximum length between a node and a passive hub is 30.5 meters.
The maximum network segment cable distance ARCNet supports is 6,100 meters.
ARCNet can have a total of only 255 stations per network segment.
FDDI
FDDI is very similar to token ring in that it relies on a node having the token before it can use the network. It differs from token ring in that it uses fiber-optic cable as its transmission medium, allowing a total ring length of up to 100 km. The standard permits up to 500 devices on the network with a maximum distance between stations of up to 2 kilometers.
FDDI has two different station configurations: Class A and Class B. Class A stations attach to two counter-rotating rings; if one of these rings develops a fault, the other ring can still be used to transmit data. Class B stations attach to a single ring.

NETWORKING STANDARDS
Communication between computers requires cabling to connect the communicating devices, but beyond the cabling numerous processes operate behind the scenes to keep things running smoothly. For these processes to operate smoothly in a diverse and complex computing environment, the computing community has established several standards and specifications that define the interaction and interrelation of the various components of network architecture.
The network industry uses two types of standards: de facto standards and de jure
standards. To understand the concept of open systems architecture, you must be familiar
with the concepts of de facto and de jure standards. As a society, people have
mechanisms in place to get the attention of others, to let them know that someone is
talking to them, and to establish when they finish talking. They also have methods for
verifying that the information passed along to a person was received and understood by
that person.
Network communication is very similar to human communication. People follow sets of rules when they talk to one another. Like human communication, computer communication is an extremely complex process, one that is often too complex to solve all at once using just one set of rules. As a result, the industry has chosen to solve different parts of the problem with compatible standards so that the solutions can be put together like pieces of a puzzle: a puzzle that comes together differently each time to build a complete communication approach for any given situation.
Rules and the Communication Process
Networks rely on many rules to manage information interchange. Some of the procedures governed by network standards are as follows:
a) Procedures used to establish and end communication
b) Signals used to represent data on the transmission media
c) Types of signals to be used
d) Access methods for relaying a signal across the media
e) Methods used to direct a message to the intended destination
f) Procedures used to control the rate of data flow
g) Methods used to enable different computer types to communicate
h) Ways to ensure that messages are received correctly

THE OSI REFERENCE MODEL


Having a model in mind helps you understand how the pieces of the networking puzzle
fit together. The most commonly used model is the Open Systems Interconnection (OSI)
reference model. In essence the OSI model is a framework that describes how a function
from one computer is transmitted to another computer on the network. OSI model is a
framework in which various networking components can be placed into context.

OSI (Open Systems Interconnection) is the most widely accepted model for understanding network communication. It was developed by the ISO (International Standards Organization), which began work on it in 1977. ISO is a multinational body dedicated to worldwide agreement on international standards, and the OSI reference model covers all aspects of network communications. An open system is a set of protocols that allows any two different systems to communicate regardless of the underlying architecture; vendor-specific protocols close off communication between unrelated systems.
The purpose of the OSI model is to open communication between different systems without requiring changes to the logic of the underlying hardware and software. The OSI model is not a protocol; it is a model for understanding and designing a network architecture that is flexible, robust, and open for communication with other systems.
The OSI model, first released in 1984 by the International Standards Organization (ISO),
provides a useful structure for defining and describing the various processes underlying
networking communications.
The OSI model organizes communication protocols into seven levels, with each level addressing a narrow portion of the communication process.

Layer 1, the Physical layer (hardware layer), consists of protocols that control communication on the network media. Essentially, this layer deals with how data is transferred across the transmission media.
Layer 7, the Application layer, interfaces the network services with the applications in use on the computer. These services include file and print services.
The five layers in between (Data Link, Network, Transport, Session, and Presentation) perform intermediate communication tasks.

How Peer OSI Layers Communicate

Communication between OSI layers is both vertical, within the layers of one computer, and horizontal, between peer layers on another computer.
When information is passed within the OSI model on a computer, each protocol layer adds its own information to the message being sent. This information takes the form of a header added to the beginning of the original message. The sending of a message always goes down the OSI stack, and hence headers are added from the top layer to the bottom. Each protocol layer, except the Physical layer, adds a header to the frame as it moves down the OSI layers of the sending computer; the Physical layer does not append a header, because it deals with providing a transmission route between computers.

When the message is received by the destination computer, each layer removes (strips) the header added by its peer layer, after the information in the header has been utilized. Headers are stripped in the reverse order in which they were added.

In summary, the information between the layers is passed along vertically. The
information between computers is essentially horizontal, though, because each layer in
one computer talks to its respective layer in the other computer.
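A minimal Python sketch of this vertical encapsulation is shown below. The layer header strings are invented for illustration; a real stack would add binary headers carrying addresses, sequence numbers, and so on.

# Each layer on the sender prepends its own header; the receiver strips them in reverse order.
LAYERS = ["Application", "Presentation", "Session", "Transport", "Network", "Data Link"]

def send(message: str) -> str:
    pdu = message
    for layer in LAYERS:                      # going DOWN the stack
        pdu = f"[{layer}-hdr]" + pdu          # each layer adds its header
    return pdu                                # the Physical layer just transmits the bits

def receive(pdu: str) -> str:
    for layer in reversed(LAYERS):            # going UP the stack
        header = f"[{layer}-hdr]"
        assert pdu.startswith(header)         # peer layer recognises its own header...
        pdu = pdu[len(header):]               # ...and strips it
    return pdu

wire = send("GET /index.html")
print(wire)                                   # Data Link header ends up outermost
print(receive(wire))                          # original message restored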


Protocol Stacks
The OSI model (and other non-OSI protocol standards) break the complex process of
network communication into layers. Each layer represents a category of related tasks. A
protocol stack is an implementation of this layered protocol architecture. The protocols
and services associated with the protocol stack interact to prepare, transmit, and receive
network data.
Two computers must run compatible protocol stacks before they can communicate, because each layer in one computer's protocol stack must interact with a corresponding layer in the other computer's protocol stack. The message travels down the protocol stack of the sender, through the network medium, and up the protocol stack of the receiving computer.
OSI Physical Layer Concepts
Physical layer is concerned with transmitting and receiving bits.
This layer defines several key characteristics of the Physical network, including the
following:
Physical structure of the network (physical topology)
Mechanical and electrical specifications for using the medium (not the medium
itself )
Bit transmission, encoding, and timing
Although the Physical layer does not define the physical medium, it defines clear
requirements that the medium must meet. These specifications differ depending on the
physical medium. Ethernet for UTP, for example, has different specifications from
coaxial Ethernet.

It also handles:
Line configuration: how two or more devices can be linked physically, and whether transmission lines are to be shared or limited to use between two devices.
Data transmission mode: whether the transmission mode is simplex or duplex.
Topology: how the networking devices are arranged.
Bit synchronization: synchronization between sender and receiver.
Repeaters are used at this level as communication components.
OSI Data Link Layer Concepts
The OSI Physical layer is concerned with moving bits between two machines, but real messages consist not of single bits but of meaningful groups of bits. The Data Link layer receives messages, called frames, from the upper layers. A primary function of the Data Link layer is to disassemble these frames into bits for transmission and then to reconstruct the frames from the bits received. The Data Link layer has other functions as well, such as addressing, error control, and flow control for a single link between network devices.
Flow control and error control are defined as follows:
Flow control. Flow control determines the amount of data that can be transmitted in a given time period. Flow control prevents the transmitting device from overwhelming the receiver.
Error control. Error control detects errors in received frames and requests retransmission of frames.
The Data Link layer is responsible for the following:
Node-to-node delivery: the Data Link layer is responsible for node-to-node delivery. It maintains physical device addresses (unique addresses for the networking hardware) that are used to address data frames, and each device is responsible for monitoring the network and receiving frames addressed to that device.
Framing: adding a header and trailer to the data packet.
Flow control: regulating the amount of data that can be transmitted at one time.
Error control: detecting errors in received frames and requesting retransmission of frames.
The IEEE 802 standard divides the Data Link layer into two sublayers:
Media Access Control (MAC). The MAC sublayer controls the means by which multiple devices share the same media channel for the transmission of information.
Logical Link Control (LLC). The LLC sublayer establishes and maintains links between communicating devices.
Addressing: The Data Link layer maintains device addresses that enable messages to be sent to a particular device. The addresses are called physical device addresses. Physical device addresses are unique addresses associated with the networking hardware in the computer. Physical device addresses are used to address data frames, and each device is responsible for monitoring the network and receiving frames addressed to that device.
A bridge is a connectivity device that operates at the OSI Data Link layer.
OSI Network Layer Concepts
The Network layer handles communication with devices on logically separate networks
that are connected to form internetworks. Because internetworks can be large and can be
constructed of different types of networks, the Network layer utilizes routing algorithms
that guide packets from their source to their destination networks.
Within the Network layer, each network in the internetwork is assigned a network
address that is used to route packets. The Network layer manages the process of
addressing and delivering packets on internetworks.
Whereas the Data Link layer oversees station-to-station (node-to-node) delivery, the Network layer ensures that each packet gets from its point of origin to its destination successfully and efficiently.
For this purpose the Network layer provides two services: switching and routing.
Switching refers to the temporary connection of physical links, resulting in longer links for network transmission; long-distance telephone service is an example.
Routing means selecting the best path for sending a packet from one point to another when more than one path is available. In this case, each packet may take a different route to the destination, where the packets are collected and reassembled into their original order.

The Network layer is responsible for the following:
Source-to-destination delivery: moving the packet from its point of origin to its intended destination across multiple network links, i.e. selecting the best path for sending a packet from one point to another when more than one path is available.
Routing: deciding which of the multiple paths a packet should take. Routing considerations include speed and cost.
Multiplexing: using a single physical line to carry data between many devices at the same time.
Switching: creating temporary connections between physical links, resulting in longer links for network transmission.
A router is a connectivity device that operates at the OSI Network layer.
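The following sketch illustrates the routing decision in Python, assuming a tiny hand-written routing table. Real routers hold much larger tables and update them with routing protocols, but the longest-prefix lookup idea is the same; the network prefixes and next hops shown here are invented.

# A simplified sketch of the Network layer routing decision: find the matching
# destination network with the longest prefix and return its next hop.
import ipaddress

ROUTING_TABLE = {
    "192.168.1.0/24": "deliver locally",
    "192.168.2.0/24": "next hop 10.0.0.2",
    "0.0.0.0/0":      "next hop 10.0.0.1 (default route)",
}

def route(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    best = max((ipaddress.ip_network(net) for net in ROUTING_TABLE
                if dest in ipaddress.ip_network(net)),
               key=lambda n: n.prefixlen)          # longest-prefix match wins
    return ROUTING_TABLE[str(best)]

print(route("192.168.2.7"))    # next hop 10.0.0.2
print(route("8.8.8.8"))        # falls back to the default route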
OSI Transport Layer Concepts
The Transport layer implements procedures to ensure the reliable delivery of messages to
their destination devices. The Transport layer enables upper-layer protocols to interface

with the network but hides the complexities of network operation from them. One of the
functions of the Transport layer is to break large messages into segments suitable for
network delivery
The Transport layer is responsible for source-to-destination (end-to-end) delivery of the entire message. The Network layer oversees end-to-end delivery of individual packets, but it does not recognize any relationship between those packets.
The Transport layer is responsible for the following:
End-to-end message delivery: confirming the transmission and arrival of all packets of a message at the destination point.
Segmentation and reassembly: the Transport layer header contains a sequence, or segmentation, number. These numbers enable the Transport layer to reassemble the message correctly at the destination and to identify and replace packets lost in transmission.
Repackaging: when large messages are divided into segments for transport, the Transport layer must repackage the segments when they are received before reassembling the original message.
Error control: when segments are lost during transmission or when segments have duplicate segment IDs, the Transport layer must initiate error recovery. The Transport layer also detects corrupted segments by managing end-to-end error control using techniques such as checksums.
End-to-end flow control: the Transport layer uses acknowledgments to manage end-to-end flow control between two connected devices. Besides negative acknowledgments, some Transport layer protocols can request the retransmission of the most recent segments.
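A minimal sketch of segmentation and reassembly is shown below. The segment size, the use of simple sequence numbers, and the missing-segment check are illustrative assumptions rather than the behaviour of any particular Transport protocol.

# Split a long message into numbered segments, then rebuild it even if the
# segments arrive out of order; missing sequence numbers trigger a retransmission request.
import random

SEGMENT_SIZE = 8

def segment(message: bytes):
    return [(seq, message[i:i + SEGMENT_SIZE])
            for seq, i in enumerate(range(0, len(message), SEGMENT_SIZE))]

def reassemble(segments, total: int) -> bytes:
    received = dict(segments)                       # seq -> data
    missing = [seq for seq in range(total) if seq not in received]
    if missing:
        raise ValueError(f"request retransmission of segments {missing}")
    return b"".join(received[seq] for seq in range(total))

segs = segment(b"this message is longer than one segment")
random.shuffle(segs)                                # segments arrive out of order
print(reassemble(segs, total=len(segs)))            # original message restored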

OSI Session Layer Concepts

The Session layer is the network dialog controller. It manages dialogs between two computers by establishing, managing, and terminating communications. It establishes, maintains, and synchronizes the link between communicating devices. It also ensures that each session closes appropriately rather than shutting down abruptly and leaving the user hanging.
The Session layer is responsible for the following:
Session management: dividing a session into subsessions by the introduction of checkpoints, and separating long messages into shorter units, called dialog units, appropriate for transmission.
Synchronization: deciding in what order to pass the dialog units to the Transport layer, and where in the transmission to require confirmation from the receiver.
Dialog control: deciding who sends, and when.
Graceful close: ensuring that the exchange has been completed appropriately before the session closes.
Dialogs can take three forms:
Simplex dialogs. These dialogs are responsible for one-way data transfers only.
Half-duplex dialogs. These dialogs handle two-way data transfers in which the data flows in only one direction at a time. When one device completes a transmission, it must turn over the medium to the other device so that the second device has a turn to transmit.
Full-duplex dialogs. This third type of dialog permits two-way simultaneous data transfers by providing each device with a separate communication channel. Voice telephones are full-duplex devices, and either party to a conversation can talk at any time. Most computer modems can operate in full-duplex mode.
Presentation Layer Concepts


The Presentation layer deals with the syntax, or grammatical rules, needed for
communication between two computers. The Presentation layer converts system-specific
data from the Application layer into a common, machine-independent format that
supports a more standardized design for lower protocol layers.
On the receiving end, the Presentation layer converts the machine independent data from
the network into the format required for the local system.
The presentation layer ensures interoperability among communicating devices. It is
responsible for code conversion (e.g. from ASCII to EBCDIC and vice versa), if
required.
The presentation layer is also responsible for the encryption and decryption of data for
security purposes. It also handles the compression and expansion of data when necessary
for transmission efficiency.
The Presentation layer is responsible for the following:
Translation: changing the format of a message (e.g. from ASCII to EBCDIC and vice versa).
Encryption/Decryption: handling the encryption and decryption of data for security purposes.
Compression: handling the compression and expansion of data when necessary for transmission efficiency.
Security: validating passwords and log-in codes.
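As a small illustration of two of these tasks, the sketch below translates text to a common encoding and compresses it before it is handed to the lower layers. The choice of UTF-8 and zlib is an assumption for the example only.

# Presentation-style processing: encode to a machine-independent format, then compress.
import zlib

def to_wire(text: str) -> bytes:
    encoded = text.encode("utf-8")       # translate to a common on-the-wire encoding
    return zlib.compress(encoded)        # compress before handing down to lower layers

def from_wire(data: bytes) -> str:
    return zlib.decompress(data).decode("utf-8")   # expand and translate back

payload = to_wire("The same report, readable on any receiving system. " * 10)
print(len(payload))                      # considerably smaller than the original text
print(from_wire(payload)[:30])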

Application Layer Concepts


The Application layer is concerned with providing services on the network, including file services, print services, application services such as database services, messaging services, and directory services, among others. The Application layer does not provide the applications themselves; rather, it provides an interface whereby applications can communicate with the network.
The Application layer enables the user, whether human or software, to access the network. It provides the user interface and support for services such as electronic mail and remote file access and transfer.
The Application layer is responsible for the following:
Mail services: providing the basis for electronic mail forwarding and storage.
Directory services: providing distributed database sources and access for global information about various objects and services.
File access, transfer, and management: allowing a user at a remote computer to access files in another host (to make changes or read data) and to retrieve files from a remote computer for use in the local computer.

Delivering Packets
Many internetworks often include redundant data paths that you can use to route
messages. Typically, a packet passes from the local LAN segment of the source PC
through a series of other LAN segments, until it reaches the LAN segment of the
destination PC. The OSI Network layer oversees the process of determining paths and
delivering packets across the internetwork.
Switching Techniques
The main objective of networking is to connect all the devices so that resources and information can be shared efficiently. Whenever we have multiple devices, we have the problem of connecting them so that one-to-one communication is possible. One solution is to install a point-to-point link between each pair of devices, as in a mesh topology, or between a central device and every other device, as in a star topology. These methods, however, are impractical and wasteful when applied to very large networks: the number and length of the links require too much infrastructure to be cost efficient, and the majority of those links would be idle most of the time.
A better solution is to use switching. A switched network consists of a series of interlinked nodes, called switches. Switches are hardware and/or software devices capable of creating temporary connections between two or more devices linked to the switch but not to each other.
Switching techniques are mechanisms for moving data from one network segment to
another. These techniques are as follows:
Circuit switching
Message switching
Packet switching
Circuit Switching
Switching networks establish a path through the internetwork when the devices initiate a
conversation. Circuit switching provides devices with a dedicated path and a well-defined
bandwidth. These paths tend to be reliable and fast in performance.

Communication via circuit switching implies that there is a dedicated communication path between two stations. The path is a connected sequence of links between network nodes, and on each physical link a channel is dedicated to the connection. A common example of circuit switching is the telephone network.
Communication via circuit switching involves three phases:
Circuit establishment
Information transfer
Circuit disconnection
Disadvantages:
Establishing a connection between devices can be time consuming.
Because other traffic cannot share the dedicated media path, bandwidth might be inefficiently utilized. This can be compared to having a telephone conversation, yet not speaking: you are using the line, thus not allowing others to use it, but you are not transmitting any data.
Circuit-switching networks must have a surplus of bandwidth, so these types of switches tend to be expensive to construct.

Message Switching
Message switching treats each message as an independent entity. Each message carries address information that describes the message's destination, and this information is used at each switch to transfer the message to the next switch in the route. Message switches are programmed with information concerning other switches in the network that can be used to forward messages to their destinations. Message switches also may be programmed with information about the most efficient routes.
Message switching transfers the complete message from one switch to the next, where the message is stored before being forwarded again. Because each message is stored before being sent on to the next switch, this type of network frequently is called a store-and-forward network. The message switches often are general-purpose computers and must be equipped with sufficient storage (usually hard drives) to enable them to store messages until forwarding is possible.

Message switching is commonly used in email because some delay is permissible in the
delivery of email. Other applications for message switching include group applications
such as workflow, calendaring, and groupware. The primary uses of message switching
have been to provide high-level network service (e.g. delayed delivery, broadcast) for
unintelligent devices. Since such devices have been replaced, message switching has
virtually disappeared. Also delays inherent in the process, as well as the requirement for
large capacity storage media at each node, make it unpopular for direct communication.

Message switching offers the following advantages:
Data channels are shared among communicating devices, improving the efficiency of available bandwidth.
Message switches can store messages until a channel becomes available, reducing sensitivity to network congestion.
Message priorities can be used to manage network traffic.
Broadcast addressing uses network bandwidth more efficiently by delivering messages to multiple destinations.
Disadvantage
The chief disadvantage of message switching is that it is not suited for real-time applications, including interactive data communication, video, and audio.
Packet Switching
In packet switching, messages are divided into smaller pieces called packets. A typical packet length is 1,000 bytes. If a source has a longer message to send, the message is broken up into a series of packets. Each packet contains a portion (or all, for a short message) of the user's data plus some control information. These packets are routed to the destination via the available nodes. Each packet includes source and destination address information so that individual packets can be routed through the internetwork independently.
Packet switching looks considerably like message switching, but the distinguishing characteristic is that packets are restricted to a size that enables the switching devices to manage the packet data entirely in memory. This eliminates the need for switching devices to store the data temporarily on disk. Packet switching, therefore, routes packets through the network much more rapidly and efficiently than is possible with message switching.
Packet Switching Networks
A transmitting computer or other device sends a message as a sequence of packets. Each packet includes control information, including the destination station. The packets are initially sent to the node to which the sending station attaches. As each packet arrives at a node, the node stores the packet briefly and determines the next available link. When the link is available, the packet is transmitted to the next node. The entire packet sequence is eventually delivered to the intended destination.
Datagram
In the datagram approach to packet switching, each packet is treated independently from all others, and each packet can be sent via any available path, with no reference to packets that have gone before. In the datagram approach, packets with the same destination address do not all follow the same route, and they may arrive out of sequence at the exit point.
Virtual Circuit
In this approach, a pre-planned route is established before any packets are sent. Once the route is established, all the packets between a pair of communicating parties follow this same route through the network. Each packet now contains a virtual circuit identifier as well as the data. Each node on the pre-established route knows where to direct such packets; no routing decisions are required. At any time, each station can have more than one virtual circuit to any other station and can have virtual circuits to more than one station.
Network Communication and Data Packets
Network communication usually involves long messages. However, networks do not
handle large chunks of data well, so they reformat the data into smaller, more manageable
pieces, called packets or frames.
Networks split data into small pieces for two reasons:
Large units of data sent across a network hamper effective communications by saturating the network. If a sender and receiver are using all possible bandwidth, other computers cannot communicate.
Networks can be unreliable. If errors occur during transmission of a packet, the entire packet must be re-sent. If the data is split into many smaller packets, only the packet in which the error occurred must be re-sent. This is more efficient and makes it easier to recover from errors.
With data split into packets, individual communications are faster and more efficient, which allows more computers to use the network. When the packets reach their destination, the receiving computer collects and reassembles them in their proper order to re-create the original data.

Packet Structure
All packets have three basic parts:
1. Packet header: The packet header usually contains the source address and destination address of the packet.
2. Data section: The data section consists of the actual data being sent. The size of this section can vary depending on the network type, from 512 bytes to 4K.
3. Packet trailer: The packet trailer contains information to verify the validity of the packet, usually by means of a cyclic redundancy check (CRC). The CRC is a number calculated over the packet by the sending computer and added to the trailer. When the receiving computer gets the packet, it recalculates the CRC and compares it to the one in the trailer. If the CRCs match, it accepts the packet as undamaged. If the CRCs don't match, the receiving computer requests that the packet be re-sent.
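The sketch below builds and checks such a packet in Python. The 4-byte addresses and the use of CRC-32 are assumptions for illustration, not the format of any particular network type.

# Header (source and destination) + data + CRC trailer; the receiver recomputes the CRC
# and asks for a re-send when it does not match.
import struct
import zlib

def build_packet(src: int, dst: int, data: bytes) -> bytes:
    header = struct.pack("!II", src, dst)                  # packet header
    crc = zlib.crc32(header + data) & 0xFFFFFFFF
    return header + data + struct.pack("!I", crc)          # data + trailer

def check_packet(packet: bytes) -> bool:
    body, trailer = packet[:-4], packet[-4:]
    (crc,) = struct.unpack("!I", trailer)
    return crc == (zlib.crc32(body) & 0xFFFFFFFF)          # mismatch => request re-send

pkt = build_packet(1, 2, b"some application data")
print(check_packet(pkt))                                   # True: accepted as undamaged
corrupted = pkt[:10] + b"X" + pkt[11:]                     # simulate a transmission error
print(check_packet(corrupted))                             # False: receiver asks for a re-send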

PROTOCOLS AND PROTOCOL LAYERS


Many of the addressing, error-checking, retransmission, and acknowledgment services most commonly associated with networking take place at the Network and Transport OSI layers.
In TCP/IP, for instance, TCP is a Transport layer protocol and IP is a Network layer protocol.
IPX/SPX is another protocol suite known by its Transport and Network layer protocols: IPX is the Network layer protocol and SPX is the Transport layer protocol.
The lower Data Link and Physical layers of the OSI model provide a hardware-specific foundation, addressing items such as the network adapter driver, the media access method, and the transmission medium.
Transport and Network layer protocols such as TCP/IP and IPX/SPX rest on that Physical and Data Link layer foundation.

Upper-level protocols, those from the Network layer and higher, allow for the connection
of services and the services themselves. This can imply routing programs, addressing
schemes, and File and Print services.

In addition to TCP/IP and IPX/SPX, some of the common Transport and Network layer protocols are the following:
NWLink. Microsoft's version of the IPX/SPX protocol; it essentially spans the Transport and Network layers.
NetBEUI. Designed for Microsoft networks, NetBEUI includes functions at the Network and Transport layers. NetBEUI isn't routable and therefore doesn't make full use of Network layer capabilities.
AppleTalk Transaction Protocol (ATP) and Name Binding Protocol (NBP). ATP and NBP are AppleTalk Transport layer protocols.
Data Link Control (DLC). This is used to connect to IBM mainframes and Hewlett-Packard JetDirect printers.

TCP/IPInternet Protocols
One reason for the popularity of TCP/IP is that no one vendor owns it, unlike the
IPX/SPX, DNA, SNA, or AppleTalk protocol suites, all of which are controlled by
specific companies. TCP/IP evolved in response to input from a wide variety of industry
sources.
The TCP/IP protocol suite (also commonly called the Internet protocol suite) was
originally developed by the United States Department of Defense (DoD) to provide
robust service on large internetworks that incorporate a variety of computer types.
TCP/IP is the most open of the protocol suites and is supported by the widest variety of vendors. Virtually every brand of computing equipment now supports TCP/IP. TCP/IP was designed to be hardware-independent and thus is able to work over established standards such as Ethernet, token ring, and ARCNet, to name but a few lower OSI layer standards. Over time, TCP/IP has been interfaced to the majority of Data Link and Physical layer technologies.
The Internet protocols do not map cleanly to the OSI reference model. The DoD model
was, after all, developed long before the OSI model was defined. The model for the
Internet protocol suite has four layers.
The DoD model's layers function as follows:
The Network Access layer corresponds to the bottom two layers of the OSI model.
This correspondence enables the DoD protocols to coexist with existing Data
Link and Physical layer standards.
The Internet layer corresponds roughly to the OSI Network layer. Protocols at this
layer move data between devices on networks.
The Host-to-Host layer can be compared to the OSI Transport layer. Host-to-Host
protocols enable peer communication between hosts on the internetwork.
The Process/Application layer embraces functions of the OSI Session,
Presentation, and Application layers. Protocols at this layer provide network
services.
A large number of protocols are associated with TCP/IP. These different protocols can be
grouped into the following categories:
General TCP/IP Transport Protocols
TCP/IP Services
TCP/IP Routing
1. General TCP/IP Transport Protocols
Addressing in TCP/IP
TCP/IP uses a unique numbering scheme that encapsulates the network and node address
into a set of numbers. This number is what is known as an IP address. All devices on a
network that runs the TCP/IP protocol suite need a unique IP address.
An IP address is a set of four numbers, or octets, each of which can range in value between 0 and 255. Each octet is separated by a period. Some examples are shown here:
34.120.66.79
200.200.20.2
2.5.67.123
107.219.2.34
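A small sketch of this rule is shown below; the helper function is hypothetical and simply checks that an address has four octets, each in the 0-255 range.

# Validate a dotted-decimal IPv4 address (the standard ipaddress module does this more strictly).
def is_valid_ipv4(address: str) -> bool:
    octets = address.split(".")
    if len(octets) != 4:
        return False
    return all(part.isdigit() and 0 <= int(part) <= 255 for part in octets)

for addr in ["34.120.66.79", "200.200.20.2", "2.5.67.300", "10.0.0"]:
    print(addr, is_valid_ipv4(addr))   # the last two fail: octet > 255, too few octets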

Internet Protocol (IP)


The Internet Protocol (IP) is a connectionless protocol that provides datagram service,
and IP packets are most commonly referred to as IP datagrams.
IP is a packet-switching protocol that performs the addressing and route selection. An IP
header is appended to packets, which are transmitted as frames by lower-level protocols.
IP routes packets through internetworks by utilizing routing tables that are
referenced at each hop.
Routing determinations are made by consulting logical and physical network device
information, as provided by the Address Resolution Protocol (ARP).
IP performs packet disassembly and reassembly as required by packet size limitations
defined for the Data Link and Physical layers being implemented. IP also performs error
checking on the header data using a checksum, although data from upper layers is not
error checked.
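For illustration, the sketch below computes a 16-bit one's-complement checksum in the style IP uses over its header (RFC 1071). The sample header bytes are made up; in a real datagram the checksum field is set to zero before the calculation and then filled in with the result.

# Sum the header as 16-bit words, fold any carries back in, and take the one's complement.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                    # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                     # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

header = bytes.fromhex("450000280000400040060000c0a80001c0a800c7")  # illustrative 20-byte header
print(hex(internet_checksum(header)))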
Transmission Control Protocol (TCP)
The Transmission Control Protocol (TCP) is an internetwork connection-oriented protocol that corresponds to the OSI Transport layer. TCP provides full-duplex, end-to-end connections. When the overhead of end-to-end communication acknowledgment isn't required, the User Datagram Protocol (UDP) can be substituted for TCP at the Transport (host-to-host) level. TCP and UDP operate at the same layer. TCP corresponds to SPX in the NetWare environment.
TCP also provides message fragmentation and reassembly and can accept messages of any length from upper-layer protocols. TCP fragments message streams into segments that can be handled by IP. This relieves the application being used of the need to break the data into smaller blocks.
IP can still perform fragmentation for UDP packets and further fragmentation for TCP packets.
When used with IP, TCP adds connection-oriented service and performs segment synchronization, adding sequence numbers at the byte level.

In addition to message fragmentation, TCP can multiplex conversations with upper-layer protocols and can improve use of network bandwidth by combining multiple messages into the same segment.

User Datagram Protocol (UDP)


The User Datagram Protocol (UDP) is a connectionless Transport (host-to-host) layer
protocol. UDP does not provide message acknowledgments; rather, it simply transports
datagrams.
Like TCP, UDP uses port addresses to deliver datagrams. UDP is preferred over TCP when high performance or low network overhead is more critical than reliable delivery. Because UDP doesn't need to establish, maintain, and close connections, or control data flow, it generally outperforms TCP.
The downside of UDP is that it does not deliver data as reliably as TCP; thus, UDP is often used for transmitting smaller amounts of data.
UDP is the Transport layer protocol used with the Simple Network Management Protocol
(SNMP), the standard network management protocol used with TCP/IP networks. UDP
enables SNMP to provide network management with a minimum of network overhead.
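The contrast between the two protocols can also be seen in how they are used from a program. The sketch below is illustrative only: the host names, addresses, and ports are placeholders, and the snippet is not intended to be run against real services.

# TCP is connection-oriented: connect first, then send on the established stream.
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))
tcp.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(100))          # delivery is acknowledged and ordered by TCP itself
tcp.close()

# UDP is connectionless: just address each datagram and send it.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"status query", ("192.0.2.10", 9999))   # no handshake, no delivery guarantee
udp.close()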
Address Resolution Protocol (ARP)
Three types of address information are used on TCP/IP internetworks:
Physical addresses. Used by the Data Link and Physical layers.
IP addresses. Provide logical network and host IDs. IP addresses consist of four numbers typically expressed in dotted-decimal form.
Logical node names. Identify specific hosts with alphanumeric identifiers, which are easier for users to recall than the numeric IP addresses. An example of a logical node name is MYHOST.COM.
Given an IP address, the Address Resolution Protocol (ARP) can determine the physical
address used by the device containing the IP address. ARP maintains tables of address
resolution data and can broadcast packets to discover addresses on the network segment
or use previously cached entries. The physical addresses discovered by
ARP can be provided to Data Link layer protocols. All addresses in the ARP table are
only local addresses. Any non-local address contains the hardware address of the local
port on the router that is used to access that non-local segment.

Internet Control Message Protocol (ICMP)


The Internet Control Message Protocol (ICMP) enhances the error control provided by
IP. Connectionless protocols, such as IP, cannot detect internetwork errors, such as
congestion or path failures.
ICMP can detect such errors and notify IP and upper-layer protocols. A network card that
is generating an error often delivers a message to other network cards, via an ICMP
packet.

2. TCP/IP Services
Dynamic Host Configuration Protocol (DHCP)
When dealing with IP addressing, it can be very management-intensive to manually assign IP addresses and subnet masks to every computer on the network. The Dynamic Host Configuration Protocol (DHCP) enables automatic assignment of IP addresses. This is usually performed by one or more computers (DHCP servers) that assign IP addresses and subnet masks, along with other configuration information, to a computer as it initializes on the network.
Domain Name System (DNS)
The Domain Name System (DNS) protocol provides host name and IP address resolution as a service to client applications. DNS servers enable humans to use logical node names, in a fully qualified domain name structure, to access network resources. A fully qualified host name can be up to 255 characters long.
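From a client's point of view, resolution is a single call; the sketch below is illustrative and simply asks the local resolver (and hence DNS) for the address of an example host name.

# Resolve a logical node name to an IP address using the system resolver.
import socket

name = "www.example.com"
address = socket.gethostbyname(name)     # DNS (or another resolver) supplies the mapping
print(f"{name} -> {address}")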

Windows Internet Naming Services (WINS)


Windows Internet Naming Service (WINS) provides a function similar to that of DNS, with the exception that it resolves NetBIOS names to IP addresses. This is important, because all of Microsoft's networking requires the ability to reference NetBIOS names. Normally NetBIOS names are resolved by issuing broadcasts, but because routers normally do not forward broadcasts, a WINS server is one alternative that can be used to answer NetBIOS name requests with IP addresses.
File Transfer Protocol (FTP)
The File Transfer Protocol (FTP) is a protocol for sharing files between networked hosts.
FTP enables users to log on to remote hosts. Logged-on users can inspect directories,
manipulate files, execute commands, and perform other commands on the host. FTP also
has the capability of transferring files between dissimilar hosts by supporting a file
request structure that is independent of specific operating systems.
Simple Mail Transfer Protocol (SMTP)
The Simple Mail Transfer Protocol (SMTP) is a protocol for routing mail through internetworks. SMTP uses the TCP and IP protocols. SMTP doesn't provide a mail interface for the user; creation, management, and delivery of messages to end users must be performed by an email application.
Remote Terminal Emulation (TELNET)
TELNET is a terminal emulation protocol. TELNET enables PCs and workstations to
function as dumb terminals in sessions with hosts on internetworks. TELNET

implementations are available for most end-user platforms, including UNIX, DOS,
Windows, and Macintosh OS.

Network File System (NFS)


Network File System (NFS), developed by Sun Microsystems, is a family of file-access
protocols that are a considerable advancement over FTP and TELNET. Because Sun
made the NFS specifications available for public use, NFS has achieved a high level of
popularity.
3. TCP/IP Routing Protocols
Routing Information Protocol (RIP)
RIP performs route discovery by using a distance-vector method, calculating the number of hops that must be crossed to route a packet by a particular path.
Although it works well in localized networks, RIP presents many weaknesses that limit its utility on wide-area internetworks. RIP's distance-vector route discovery method, for example, requires more broadcasts and thus causes more network traffic than some other methods. The entire route table is also sent out in the broadcast, causing large amounts of traffic as route tables become large. The Open Shortest Path First (OSPF) protocol, which uses the link-state route discovery method, is gradually replacing RIP (read more on OSPF).
Open Shortest Path First (OSPF)
The Open Shortest Path First (OSPF) protocol is a link-state route discovery protocol
that is designed to overcome the limitations of RIP. On large internetworks, OSPF can
identify the internetwork topology and improve performance by implementing load
balancing and class-of-service routing.
READ ON NetWare IPX/SPX

NETWORK SECURITY
Security
Security does not prevent unauthorized access to a system; it only makes such access more difficult. Organizations cannot afford to be without their IT capability for any length of time, or to allow data to become corrupted or passed to unauthorized persons. The basic dangers are:
1. Loss of confidentiality, where secret information is made available to the wrong people. This can reveal the organization's future plans, or give details of customer lists or product specifications, to a rival.
2. Loss of integrity, where the data or the software are corrupted, either deliberately or accidentally. When these are corrupted, the reliability of the whole IT system is put into question.

3. Loss of availability, where any part of the system is unavailable to the user.
This means that for the period of unavailability, the expensively provided IT
system is of no use.
IT security is intended to preserve the confidentiality, integrity and availability of the
system.
Threat Classification
Threats are things that could go wrong and may be classified as:
1. Environmental
2. Logical
3. Procedural
All of these threats can arise from the activities of people both inside and outside the organization. Furthermore, they may also be classed as deliberate or accidental. In fact, most research shows that organizations' own staff cause over 70% of all security problems.
Environmental Threats
These involve physical damage to buildings, hardware, software, data, documentation, or personnel. Accidental threats include fire, flood, building collapse and failure of essential services. Deliberate threats include sabotage and vandalism.
a) Fire
Fire is one of the more common causes of serious environmental computer disaster. A frequent cause of damage is fire in the air conditioning system. It is important to understand that most damage is caused by the corrosive effect of smoke, rather than by the fire itself.
b) Flood
Floods are more likely to be caused by dripping taps and burst pipes than by rivers breaking their banks. Fire in another part of the building may also lead to damage to vital equipment.
c) Building Collapse
Earthquakes and subsidence of land can make a building unsafe. Partial or total collapse can be caused by impact from falling aircraft or road accidents. Also, the threat of terrorist bomb attacks cannot be ignored!
d) Essential Services
All computers require a power supply, and mainframe computers usually require air conditioning. Often communication links are essential. If any of these fail, computer systems may be inaccessible or access to them restricted.
Prevention from Environmental threats:
Use smoke detectors
Use non-liquid fire extinguishers
Avoid locating the resources/offices near a main road
Use anti-virus software
Avoid having water pipelines in the computer rooms of the office

Logical Threats
Logical threats are those that affect access to, and the integrity of, data and software. Accidental threats include software faults (bugs), communication errors and inaccurate input. Deliberate threats include unauthorized access to computer programs and data (hacking) and malicious or fraudulent alteration of software and data (including the introduction of computer viruses).
a) Unauthorized Access
Unauthorized access to a computer system by outsiders will usually be achieved by hacking into dial-up communication links or by tapping into private leased lines. Members of staff may also use these means to access unauthorized information. Such activities can cause a loss of confidentiality of data.
b) Virus
A virus is a program or piece of code which is deliberately introduced into a computer system with the intention of corrupting software and/or data. The effect of a virus can vary from the mere annoyance of a message appearing on the screen to the destruction of software and data files. The deliberate introduction of a virus is a serious criminal offence under the Computer Misuse Act 1990.
c) Theft
The theft of large pieces of computer equipment is unusual, but personal and laptop computers, floppy disks and software are often stolen. The theft of hardware, software or data not only causes unavailability but may also result in loss of confidentiality.
Prevention from Logical Threats
Use security guards to stop/check unauthorized personnel
Use video cameras
Lock the room after office hours
Use passwords
Use anti-virus software
Procedural Failure

Procedural threats arise from personnel failing to obey the rules. Accidental threats can arise from ignorance of the correct procedures. Deliberate threats arise from personnel failing to follow known procedures because they find them too troublesome.
Physical Access
Only authorised personnel should be allowed into restricted areas.
Prevention from Procedural Failure
Provide regular training to the staff about security threats and prevention methods
Do periodic checking of equipment
Have extra UPS units


Network Security Techniques
Several techniques can be applied to secure a network environment.
Access Control Techniques
It is important to understand the primary function of access control techniques. First and foremost, their purpose is to permit only authorized parties to use particular facilities. Some access control techniques are discussed below.
Password
A password is familiar to anyone who works with or uses IT. In general a password is a string of characters. The primary advantage of a password-based access control system is that it is easily understood by those having to use it. Furthermore, passwords are easy to implement. The following points can help to improve password security:
1. All users must have a password.
2. Passwords must be at least six characters long.
3. Passwords must be changed at least monthly.
4. Passwords must be changed immediately if there is suspicion that someone has figured out your password.
5. Passwords must not be easy to guess.
6. Passwords must not be written down.
7. There must be a limited number of attempts to enter the password (no more than three).
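A small sketch of how the first of these rules might be checked in software is shown below. The list of common guesses is an invented example, and rules such as monthly changes and limited attempts would be enforced by the login system rather than by this check.

# Reject passwords that are too short or that appear in a list of easily guessed values.
COMMON_GUESSES = {"password", "123456", "qwerty", "letmein"}

def acceptable_password(candidate: str) -> bool:
    if len(candidate) < 6:                       # rule 2: at least six characters
        return False
    if candidate.lower() in COMMON_GUESSES:      # rule 5: not easy to guess
        return False
    return True

for pw in ["abc", "password", "Tr4in-2108"]:
    print(pw, acceptable_password(pw))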
Dial Back
In remote access applications the dial-back modem has become a familiar tool. The principle behind this technique is straightforward. A user wishing to gain access must first leave a message which indicates who they are, some corroboration of this (a password), and a telephone number. The target system may then examine the details and decide whether or not to re-establish the connection. A decision in the affirmative requires the target system to contact the user on the telephone number given.
Biometric Techniques
Biometric characteristics are unique features of an individual, such as fingerprints, voice patterns, retinal images and DNA typing. Biometrics are an extremely accurate way of providing access control, and a biometric system is also very easily managed. The difficulty arises because there has to be some way in which these biological or subliminally habitual characteristics can be represented in electronic form. As can be imagined, the equipment required to produce a quantitative version of, say, a retinal scan is quite complex and expensive. This means that, in general, the use of biometric techniques is well beyond the reach of most security practitioners.
Encryption Techniques
To carry sensitive information, such as military or financial data, a system must be able to assure privacy. Microwave, satellite, and other wireless media, however, cannot be protected from the unauthorized reception (or interception) of transmissions. Even cable systems cannot always prevent unauthorized access: cables pass through out-of-the-way areas (such as basements) that provide opportunities for malicious access to the cable and illegal reception of information.
It is unlikely that any system can completely prevent unauthorized access to transmission media. A more practical way to protect information is to alter it so that only an authorized receiver can understand it. This can be achieved by encryption and decryption of information. Encryption means that the sender transforms the original information into another form and sends the resulting unintelligible message out over the network. Decryption reverses the encryption process in order to transform the message back to its original form.
The following model can be used to explain the whole process.
[Figure 8.1 Encryption Model: plaintext P is transformed by the encryption method, using encryption key K, into ciphertext C = EK(P); the decryption method, using the decryption key, recovers the plaintext P. A passive intruder can only listen to the ciphertext; an active intruder can also change it.]

Terms used in the diagram:
1. The message to be encrypted is known as the plaintext.
2. The output of the encryption process is known as the ciphertext.
3. The unauthorized person or process is known as the intruder.
4. A passive intruder can only listen to the ciphertext and is unable to make changes.
5. An active intruder can listen to the ciphertext and is able to alter it.
6. The art of devising ciphers is called cryptography.
7. The art of breaking ciphers is called cryptanalysis.
Notation:
C = EK(P) denotes encryption of the plaintext P with key K.
P = DK(C) denotes decryption of C to recover the plaintext P.
DK(EK(P)) = P

Encryption Methods
1. Substitution ciphers: In a substitution cipher, each letter or group of letters is replaced by another letter or group of letters to disguise it.
Example (Caesar cipher): ATTACK => DWWDFN
(Key: substitute each letter by its third successor in the alphabet.)
2. Transposition ciphers: Substitution ciphers preserve the order of the plaintext symbols but disguise them. Transposition ciphers, in contrast, reorder the letters but do not disguise them. For example, the plaintext can be written in rows under the key and the columns read out in the alphabetical order of the key letters:
Plaintext: HOW ARE YOU MAN
Key: FINE
Ciphertext: AON HRU OEM WYA
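Both examples can be reproduced with the short Python sketch below. The functions are written only to illustrate the two cipher families and are in no way secure for real use.

# Caesar substitution (shift of 3) and a simple columnar transposition.
def caesar(plaintext: str, shift: int = 3) -> str:
    return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
                   for c in plaintext.upper() if c.isalpha())

def columnar_transposition(plaintext: str, key: str) -> str:
    letters = [c for c in plaintext.upper() if c.isalpha()]
    cols = len(key)
    rows = [letters[i:i + cols] for i in range(0, len(letters), cols)]
    order = sorted(range(cols), key=lambda i: key[i])     # read columns in key order
    return "".join("".join(row[i] for row in rows if i < len(row)) for i in order)

print(caesar("ATTACK"))                                     # DWWDFN
print(columnar_transposition("HOW ARE YOU MAN", "FINE"))    # AONHRUOEMWYA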

Techniques for protecting message content integrity:
Error detection techniques
Error correction methods
