The Open Systems Interconnection (OSI) model is an ISO standard that covers all aspects of
network communications. It was first introduced in the late 1970s. An open system
is a set of protocols that allows any two different systems to communicate regardless of their
underlying architecture.
The purpose of the OSI model is to show how to facilitate communication between
different systems without requiring changes to the logic of the underlying hardware and
software. The OSI model is not a protocol; it is a model for understanding and designing a
network architecture that is flexible, robust, and interoperable.
The OSI model is a layered framework for the design of network systems that allows
communication between all types of computer systems. It consists of seven separate but
related layers, each of which defines a part of the process of moving information across a
network.
Physical Layer
Line configuration: The physical layer is concerned with the connection of devices to
the media. In a point-to-point configuration, two devices are connected through a
dedicated link. In a multipoint configuration, a link is shared among several
devices.
Physical topology: The physical topology defines how devices are connected to form
a network. Examples include mesh, star, ring, bus, and hybrid topologies.
Transmission mode: The physical layer also defines the direction of transmission
between two devices: simplex, half-duplex, or full-duplex.
Data Link Layer
The data link layer transforms the physical layer, a raw transmission facility, to a
reliable link. It makes the physical layer appear error-free to the upper layer (network layer).
The following figure shows the relationship of the data link layer to the network and physical
layers.
Framing: The data link layer divides the stream of bits received from the network layer
into manageable data units called frames.
Physical addressing: If frames are to be distributed to different systems on the
network, the data link layer adds a header to the frame to define the sender and/or
receiver of the frame. If the frame is intended for a system outside the sender's
network, the receiver address is the address of the device that connects the network to
the next one.
Flow control: If the rate at which the data are absorbed by the receiver is less than
the rate at which data are produced in the sender, the data link layer imposes a flow
control mechanism to avoid overwhelming the receiver.
Error control: The data link layer adds reliability to the physical layer by adding
mechanisms to detect and retransmit damaged or lost frames. It also uses a
mechanism to recognize duplicate frames. Error control is normally achieved through a
trailer added to the end of the frame.
Access control: When two or more devices are connected to the same link, data
link layer protocols are necessary to determine which device has control over the link at any
given time.
Hop-to-hop delivery
As the figure shows, communication at the data link layer occurs between two adjacent
nodes.
To send data from A to F, three partial deliveries are made.
First, the data link layer at A sends a frame to the data link layer at B (a router).
Second, the data link layer at B sends a new frame to the data link layer at E.
Finally, the data link layer at E sends a new frame to the data link layer at F.
Network Layer:
The network layer is responsible for the source-to-destination delivery of a packet, possibly
across multiple networks (links). Whereas the data link layer oversees the delivery of the
packet between two systems on the same network (links), the network layer ensures that
each packet gets from its point of origin to its final destination. If two systems are connected
to the same link, there is usually no need for a network layer. However, if the two systems are
attached to different networks (links) with connecting devices between the networks (links),
there is often a need for the network layer to accomplish source-to-destination delivery.
The following figure shows the relationship of the network layer to the data link and transport
layers.
Transport layer
The transport layer is responsible for process-to-process delivery of the entire message.
Other responsibilities of the transport layer include the following:
Service-point addressing: Computers often run several programs at the same time.
For this reason, source-to-destination delivery means delivery not only from one
computer to the next but also from a specific process (running program) on one
computer to a specific process (running program) on the other. The transport layer
header must therefore include a type of address called a service-point address (or port
address). The network layer gets each packet to the correct computer; the transport
layer gets the entire message to the correct process on that computer.
Segmentation and reassembly: A message is divided into transmittable segments,
with each segment containing a sequence number. These numbers enable the
transport layer to reassemble the message correctly upon arriving at the destination
and to identify and replace packets that were lost in transmission.
Connection control: The transport layer can be either connectionless or connection
oriented. A connectionless transport layer treats each segment as an
independent packet and delivers it to the transport layer at the destination machine.
A connection oriented transport layer makes a connection with the transport
layer at the destination machine first before delivering the packets. After all
the data are transferred, the connection is terminated.
Flow control: Like the data link layer, the transport layer is responsible for flow
control. However, flow control at this layer is performed end to end rather than across a
single link.
Error control: Like the data link layer, the transport layer is responsible for error
control. However, error control at this layer is performed process to process rather than
across a single link. The sending transport layer makes sure that the entire message
arrives at the receiving transport layer without error (damage, loss, or duplication).
Error correction is usually achieved through retransmission.
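The segmentation, sequence-numbering, and reassembly steps described above can be sketched in a few lines of Python. This is a simplified illustration only, not a real transport protocol; real protocols carry sequence numbers in packet headers, and the function names here are our own.

```python
import random

def segment(message, size):
    """Split a message into fixed-size pieces, each tagged with a sequence number."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    """Reorder segments by sequence number and rebuild the original message."""
    return b"".join(data for _, data in sorted(segments))

segs = segment(b"HELLO, TRANSPORT LAYER", 5)
random.shuffle(segs)                      # segments may arrive out of order
assert reassemble(segs) == b"HELLO, TRANSPORT LAYER"
```

Because each segment carries its sequence number, the receiver can also notice a gap in the numbering and request retransmission of a lost segment, which is the basis of end-to-end error control.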
Session layer
The session layer is the network dialog controller: it establishes, maintains, and
synchronizes the interaction among communicating systems.
Presentation Layer:
The presentation layer is concerned with the syntax and semantics of the information
exchanged between two systems. The following figure shows the relationship between the
presentation layer and the application and session layers.
The presentation layer is responsible for translation, compression, and encryption.
The data link layer uses error control mechanisms to ensure that frames (data bit streams)
are transmitted with a certain level of accuracy. But to understand how errors are controlled,
it is essential to know what types of errors may occur.
Types of Errors
There may be three types of errors: single-bit errors, multiple-bit errors, and burst errors.
Single bit error
Only one bit in the data unit has changed. A simple parity check can detect this: the
receiver counts the number of 1s in a frame. If even parity is used and the count of 1s is
even, the frame is considered uncorrupted and is accepted; likewise, with odd parity, a frame
whose count of 1s is odd is considered uncorrupted.
If a single bit flips in transit, the receiver can detect it by counting the number of 1s. But
when more than one bit is erroneous, it is very hard for the receiver to detect the
error.
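The parity scheme described above is easy to demonstrate. This is a minimal sketch using even parity; the function names are our own.

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s in the frame is even."""
    return bits + [sum(bits) % 2]

def check_even_parity(frame):
    """A frame passes the check if its count of 1s is even."""
    return sum(frame) % 2 == 0

frame = add_even_parity([1, 0, 1, 1, 0, 1, 0])
assert check_even_parity(frame)          # no error: accepted

corrupted = list(frame)
corrupted[2] ^= 1                        # a single bit flips in transit
assert not check_even_parity(corrupted)  # single-bit error is detected

double = list(frame)
double[0] ^= 1
double[1] ^= 1
assert check_even_parity(double)         # two flipped bits go unnoticed
```

The last assertion shows exactly the weakness noted above: an even number of flipped bits cancels out, so parity alone cannot catch multi-bit errors.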
Cyclic Redundancy Check (CRC)
CRC is a different approach to detect if the received frame contains valid data. This technique
involves binary division of the data bits being sent. The divisor is generated using
polynomials. The sender performs a division operation on the bits being sent and calculates
the remainder. Before sending the actual bits, the sender adds the remainder at the end of
the actual bits. Actual data bits plus the remainder is called a codeword. The sender
transmits data bits as codewords.
At the other end, the receiver performs the same division operation on the codeword using the
same CRC divisor. If the remainder is all zeros, the data bits are accepted; otherwise, it is
assumed that some data corruption occurred in transit.
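The sender/receiver division described above can be sketched as modulo-2 (XOR) long division in Python. This is a textbook illustration using the small generator x^3 + x + 1, not a production CRC implementation.

```python
def mod2_div(bits, divisor):
    """Modulo-2 (XOR) long division; returns the remainder (len(divisor)-1 bits)."""
    bits = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == 1:
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]

def make_codeword(data, divisor):
    """Sender: append zeros, compute the remainder, and attach it to the data."""
    remainder = mod2_div(data + [0] * (len(divisor) - 1), divisor)
    return data + remainder

divisor = [1, 0, 1, 1]                      # generator polynomial x^3 + x + 1
codeword = make_codeword([1, 0, 0, 1, 1, 0, 1], divisor)

# Receiver: divide the whole codeword; an all-zero remainder means accept.
assert mod2_div(codeword, divisor) == [0, 0, 0]

corrupted = list(codeword)
corrupted[4] ^= 1                           # one bit damaged in transit
assert mod2_div(corrupted, divisor) != [0, 0, 0]
```

Real protocols use standardized generators (for example, Ethernet uses a 32-bit polynomial), but the arithmetic is exactly this XOR division.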
Error Correction
In the digital world, error correction can be done in two ways:
Backward Error Correction: When the receiver detects an error in the data received,
it requests that the sender retransmit the data unit.
Forward Error Correction: When the receiver detects an error in the data received,
it executes an error-correcting code, which helps it to auto-recover and correct some
kinds of errors.
The first, backward error correction, is simple and can be used efficiently only where
retransmission is not expensive, for example over fiber optics. In wireless transmission,
however, retransmission may cost too much, so forward error correction is used instead.
To correct an error in a data frame, the receiver must know exactly which bit in the frame is
corrupted. To locate the bit in error, redundant bits are used as parity bits for error
detection. For example, if we take ASCII words (7 data bits), there are eight kinds of
information we need to convey: seven to tell us which of the seven bits is in error, and one
more to tell us that there is no error.
For m data bits, r redundant bits are used; r bits can provide 2^r combinations of
information. In an m+r bit codeword, there is a possibility that the r bits themselves may get
corrupted. So the r bits must be able to indicate all m+r bit locations plus the no-error
state, that is, 2^r >= m + r + 1.
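The inequality 2^r >= m + r + 1 can be checked directly with a small helper. This is a sketch; the function name is our own.

```python
def redundant_bits_needed(m):
    """Smallest r such that 2**r >= m + r + 1 (Hamming's bound for single-error location)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

# A 7-bit ASCII character needs 4 redundant bits: 2**4 = 16 >= 7 + 4 + 1 = 12.
assert redundant_bits_needed(7) == 4
# With m = 4 data bits, r = 3 suffices: this is the classic Hamming(7,4) code.
assert redundant_bits_needed(4) == 3
```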
2)b) ATM Networks
ATM uses optical fibre similar to that used for FDDI networks
ATM typically runs over SONET transmission hardware
ATM cells (packets) are 53 octets long
5 bytes of header information
48 bytes of data
ATM networks are packet-switched, but still create a (virtual) circuit through the
network
Before transfer can occur, the network must create a path (called a virtual
circuit) between the two machines
Once the virtual circuit (VC) has been established, packets can be transferred
between the machines
Internet
A means of connecting a computer to any other computer anywhere in the
world via dedicated routers and servers. When two computers are connected over the
Internet, they can send and receive all kinds of information such as text, graphics,
voice, video, and computer programs.
No one owns the Internet, although several organizations around the world collaborate in its
functioning and development. The high-speed, fiber-optic cables (called backbones) through
which the bulk of Internet data travels are owned by telephone companies in their
respective countries.
The Internet grew out of the Advanced Research Projects Agency's wide-area network (then
called ARPANET), established by the US Department of Defense in the 1960s for collaboration in
military research among business and government laboratories. Later, universities and other
US institutions connected to it. As a result, ARPANET grew beyond everyone's expectations
and acquired the name 'Internet.'
The development of hypertext-based technology (called the World Wide Web, WWW, or just
the Web) provided a means of displaying text, graphics, and animations, along with
easy search and navigation tools, that triggered the Internet's explosive worldwide growth.
4)a)
IP Multicasting
The great bulk of TCP/IP communications uses the Internet Protocol to send messages from
one source device to one recipient device; this is called unicast communications. This is the
type of messaging we normally use TCP/IP for; when you use the Internet you are using
unicast for pretty much everything. For this reason, most of my discussion of IP has been
oriented around describing unicast messaging.
IP does, however, also support the ability to have one device send a message to a set of
recipients. This is called multicasting. IP multicasting has been officially supported since
IPv4 was first defined, but has not seen widespread use over the years, due largely to lack of
support for multicasting in many hardware devices. Interest in multicasting has increased in
recent years, and support for multicasting was made a standard part of the next generation IP
version 6 protocol. Therefore, I felt it worthwhile to provide a brief overview of IP multicasting.
It's a large and very complex subject, so I will not be getting into it in detail; you'll have
to look elsewhere for a full description of IP multicasting. (Sorry, it was either a brief
summary or nothing; maybe I'll write more on multicasting in the future.)
The idea behind IP multicasting is to allow a device on an IP internetwork to send datagrams
not to just one recipient but to an arbitrary collection of other devices. IP multicasting is
modeled after the similar function used in the data link layer to allow a single hardware device
to send to various members of a group. Multicasting is relatively easy at the data link layer,
however, because all the devices can communicate directly. In contrast, at the network layer,
we are connecting together devices that may be quite far away from each other, and must
route datagrams between these different networks. This necessarily complicates multicasting
when done using IP (except in the special case where we use IP multicasting only between
devices on the same data link layer network.)
There are three primary functions that must be performed to implement IP multicasting:
addressing, group management, and datagram processing / routing.
Multicast Addressing
Special addressing must be used for multicasting. These multicast addresses identify not
single devices but rather multicast groups of devices that listen for certain datagrams sent to
them. In IPv4, 1/16th of the entire address space was set aside for multicast addresses: the
Class D block of the original classful addressing scheme.
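The Class D block has 1110 as its first four bits, which corresponds to the range 224.0.0.0 through 239.255.255.255. A minimal Python sketch (the helper name is our own) can classify an address by its first octet:

```python
import ipaddress

def is_multicast(addr):
    """Class D: first four bits are 1110, i.e. first octet 224 through 239."""
    first_octet = int(addr.split(".")[0])
    return 224 <= first_octet <= 239

assert is_multicast("224.0.0.1")          # the well-known all-hosts group
assert is_multicast("239.255.255.250")
assert not is_multicast("192.168.1.1")    # an ordinary unicast address

# The standard library's own classification agrees:
assert ipaddress.ip_address("224.0.0.1").is_multicast
```

Since the first octet ranges over 16 of its 256 possible values, this block is exactly 1/16th of the IPv4 address space, as stated above.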
Multicast Group Management
Group management encompasses all of the activities required to set up groups of devices.
Devices must be able to dynamically join and leave groups, and information about groups
must be propagated around the IP internetwork. To support these activities, additional
techniques are required. The Internet Group Management Protocol (IGMP) is the chief tool
used for this purpose. It defines a message format for exchanging information about groups
and group membership between devices and routers on the internet.
Multicast Datagram Processing and Routing
This is probably the most complicated: handling and routing datagrams in a multicast
environment. There are several issues here:
o Since we are sending from one device to many devices, we need to actually create
multiple copies of the datagram for delivery, in contrast to the single datagram used in
the unicast case. Routers must be able to tell when they need to create these copies.
o Routers must use special algorithms to determine how to forward multicast datagrams.
Since each one can lead to many copies being sent various places, efficiency is
important to avoid creating unnecessary volumes of traffic.
o Routers must be able to handle datagrams sent to a multicast group even if the source
is not a group member.
4)b) Tunneling is a technique that allows for the secure movement of data from one network to
another. Tunneling involves allowing private network communications to be sent across a
public network, such as the Internet, through a process called encapsulation. The
encapsulation process allows for data packets to appear as though they are of a public nature
to a public network when they are actually private data packets, allowing them to pass
through unnoticed.
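The encapsulation idea can be illustrated with a toy sketch. The header format and addresses below are invented for illustration only; a real tunnel wraps the inner packet in a genuine outer IP header, usually with encryption, but the wrap-and-unwrap structure is the same.

```python
def encapsulate(inner_packet, outer_src, outer_dst):
    """Wrap a private packet in an outer header for transit over a public network."""
    outer_header = f"{outer_src}>{outer_dst}|".encode()
    return outer_header + inner_packet

def decapsulate(outer_packet):
    """At the tunnel endpoint, strip the outer header to recover the inner packet."""
    _, _, inner = outer_packet.partition(b"|")
    return inner

# The inner packet uses private addresses that are never seen in transit.
private = b"10.0.0.5>10.0.0.9|payload"
tunneled = encapsulate(private, "203.0.113.1", "198.51.100.7")
assert decapsulate(tunneled) == private
```

In transit, routers on the public network see only the outer (public) addresses, which is why the private packet passes through unnoticed.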
Point-to-Point Tunneling Protocol (PPTP): PPTP keeps proprietary data secure even when
it is being communicated over public networks. Authorized users can access a private
network called a virtual private network, which is provided by an Internet service
provider. This is a private network in the virtual sense because it is actually being
created in a tunneled environment.
Layer Two Tunneling Protocol (L2TP): This type of tunneling protocol involves a
combination of using PPTP and Layer 2 Forwarding.
Tunneling allows private-network communication to be carried through a public network. This
is particularly useful in a corporate setting, and tunneling protocols also offer security
features such as encryption options.
We have already encountered the problem of packet sizes when we looked at Ethernets. The
data link layer had no ability to deal with the problem, and so bridges were unable to cope.
The Ethernet problem was due to differing definitions of the maximum packet size. Other
causes include: the maximum size a router can handle, the maximum length of the transmission
slot available, errors that necessitate reducing the packet length, and the requirements of
standards.
There are two forms of packet fragmentation, transparent and non-transparent. Each of these
happens on a network by network basis. In other words, there is no end to end agreement
about which process to use.
Transparent fragmentation occurs when a router splits the packet into smaller packets, sends
them to the next router which reconstructs the original packet. The next network is not aware
that any fragmentation has happened.
Non-transparent fragmentation occurs when a router splits up a packet and it then remains
split until the destination is reached.
Irrespective of which form of fragmentation is used, it is clear that we need to be able to
reconstruct the original packet from the fragments. This means that some form of labelling
will have to be used.
Fragmentation is also known as segmentation.
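The labelling needed for reconstruction can be sketched with a per-packet identifier, a byte offset, and a more-fragments flag, much as IP does. This is a toy illustration; the field names are our own.

```python
def fragment(packet_id, data, mtu):
    """Split data into fragments labelled with (id, offset, more-fragments flag)."""
    frags = []
    for offset in range(0, len(data), mtu):
        chunk = data[offset:offset + mtu]
        more = offset + mtu < len(data)   # clear on the final fragment
        frags.append({"id": packet_id, "offset": offset, "more": more, "data": chunk})
    return frags

def reassemble_fragments(fragments):
    """Rebuild the original packet from possibly reordered fragments."""
    ordered = sorted(fragments, key=lambda f: f["offset"])
    assert not ordered[-1]["more"]        # the final fragment must be present
    return b"".join(f["data"] for f in ordered)

frags = fragment(42, b"A PACKET TOO BIG FOR ONE LINK", 8)
assert reassemble_fragments(reversed(frags)) == b"A PACKET TOO BIG FOR ONE LINK"
```

The identifier groups fragments of the same original packet, the offset orders them, and the cleared more-fragments flag tells the reassembler it has reached the end; this works whether reassembly happens at the next router (transparent) or only at the destination (non-transparent).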