
Q1(a)

As you can see, TCP/IP has four layers. Programs talk to the Application layer. On the Application layer
you will find Application protocols such as SMTP (for e-mail), FTP (for file transfer) and HTTP (for web
browsing). Each kind of program talks to a different Application protocol, depending on the program's
purpose.

After processing the program request, the protocol on the Application layer will talk to another protocol
from the Transport layer, usually TCP. This layer is in charge of getting data sent by the upper layer,
dividing them into packets and sending them to the layer below, Internet. Also, during data reception, this
layer is in charge of putting the packets received from the network in order (because they can be received
out-of-order) and also checking if the contents of the packets are intact.

On the Internet layer we have IP (the Internet Protocol), which takes the packets received from the Transport
layer and adds virtual address information, i.e. the address of the computer that is sending the data and the
address of the computer that will receive it. These virtual addresses are called IP addresses. The packets
handled by the Internet layer are called datagrams; each datagram is then passed to the lower layer, Network Interface.

The Network Interface will get the packets sent by the Internet layer and send them over the network (or
receive them from the network, if the computer is receiving data). What is inside this layer will depend on
the type of network your computer is using. Nowadays almost all computers use a type of network called
Ethernet (which is available in several different speed grades; wireless LANs, though technically IEEE
802.11 rather than Ethernet, follow the same layering) and thus inside the Network Interface layer you will
find the Ethernet sublayers, which are Logical Link Control (LLC), Media Access Control (MAC) and
Physical, listed from top to bottom. Packets
transmitted over the network are called frames.
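The layer-by-layer wrapping described above can be sketched in a few lines of Python. This is purely an illustration of encapsulation order; the header formats below are invented for readability and are not real protocol encodings.

```python
# Toy illustration of TCP/IP encapsulation: each layer wraps the data
# from the layer above with its own header. Header formats are invented
# for illustration, not real protocol encodings.

def transport_segment(data: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer: prepend source and destination ports.
    return f"TCP|{src_port}>{dst_port}|".encode() + data

def ip_packet(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Internet layer: prepend the IP addresses (the "virtual addresses").
    return f"IP|{src_ip}>{dst_ip}|".encode() + segment

def ethernet_frame(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
    # Network Interface layer: prepend MAC addresses; the result is a frame.
    return f"ETH|{src_mac}>{dst_mac}|".encode() + packet

message = b"GET / HTTP/1.1"                          # Application-layer data
segment = transport_segment(message, 49152, 80)      # Transport layer
packet = ip_packet(segment, "10.0.0.1", "10.0.0.2")  # Internet layer (datagram)
frame = ethernet_frame(packet, "aa:aa", "bb:bb")     # link-layer frame
print(frame)
```

Reading the resulting frame from left to right reproduces the layering from bottom to top: link header, then IP header, then TCP header, then the original application data.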


Q1(iii)

For a high data rate (data transfer speed), the bandwidth of the signal-carrying system should be high,
because a high data rate means a high-frequency (square) wave. Such a wave actually consists of a large
number of component waves spanning a range of frequencies (Fourier analysis), so to achieve distortion-free
transmission the bandwidth should be high.

Bandwidth may refer to bandwidth capacity or available bandwidth in bit/s, which typically means the net
bit rate, channel capacity or the maximum throughput of a logical or physical communication path in a
digital communication system. For example, a bandwidth test measures the maximum throughput of
a computer network. The reason for this usage is that, according to Hartley's law, the maximum data rate of
a physical communication link is proportional to its bandwidth in hertz, which is sometimes called
frequency bandwidth, radio bandwidth or analog bandwidth, the latter especially in computer networking
literature.
Bandwidth may also refer to consumed bandwidth (bandwidth consumption), corresponding to achieved
throughput or goodput, i.e. average data rate of successful data transfer through a communication path. This
meaning is for example used in expressions such as bandwidth shaping, bandwidth management,
bandwidth throttling, bandwidth cap, bandwidth allocation (for example bandwidth allocation protocol and
dynamic bandwidth allocation), etc. An explanation for this usage is that the digital bandwidth of a bit stream is
proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the
analog signal representing the bit stream) during a studied time interval.
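The proportionality between bandwidth in hertz and maximum data rate can be made concrete with the Nyquist and Shannon-Hartley formulas. A small sketch; the 3 kHz channel and 30 dB SNR below are just example figures, not taken from the text:

```python
import math

def nyquist_rate(bandwidth_hz: float, levels: int) -> float:
    # Nyquist: noiseless channel, max rate = 2 * B * log2(M),
    # where M is the number of signal levels.
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    # Shannon-Hartley: C = B * log2(1 + S/N), with the SNR given in dB.
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 3 kHz telephone-grade channel
print(nyquist_rate(3000, 2))              # binary signalling: 6000 b/s
print(shannon_capacity(3000, 30))         # roughly 30 kb/s at 30 dB SNR
```

Both formulas scale linearly with the bandwidth B in hertz, which is exactly the proportionality the paragraph above appeals to.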

Q3(i)

The Network Layer is Layer 3 of the seven-layer OSI model of computer networking.

The Network Layer is responsible for end-to-end (source to destination) packet delivery including routing
through intermediate hosts, whereas the Data Link Layer is responsible for node-to-node (hop-to-hop)
frame delivery on the same link.
The Network Layer provides the functional and procedural means of transferring variable length data
sequences from a source to a destination host via one or more networks while maintaining the quality of
service and error control functions.

Functions of the Network Layer include:

• Connection model: connection-oriented and connectionless communication

For example, snail mail is connectionless, in that a letter can travel from a sender to a recipient
without the recipient having to do anything. On the other hand, the telephone system is
connection-oriented, because the other party is required to pick up the phone before
communication can be established. The OSI Network Layer protocol can be either connection-
oriented, or connectionless. In contrast, the TCP/IP Internet Layer supports only the
connectionless Internet Protocol (IP); but connection-oriented protocols exist higher at other layers
of that model.

• Host addressing

Every host in the network needs to have a unique address which determines where it is. This
address will normally be assigned from a hierarchical system, so you can be "Fred Murphy" to
people in your house, "Fred Murphy, Main Street 1" to Dubliners, or "Fred Murphy, Main Street
1, Dublin" to people in Ireland, or "Fred Murphy, Main Street 1, Dublin, Ireland" to people
anywhere in the world. On the Internet, addresses are known as Internet Protocol (IP) addresses.

• Message forwarding

Since many networks are partitioned into subnetworks and connect to other networks for wide-
area communications, networks use specialized hosts, called gateways or routers, to forward
packets between networks. This is also of interest to mobile applications, where a user may move
from one location to another, and it must be arranged that his messages follow him. Version 4 of
the Internet Protocol (IPv4) was not designed with this feature in mind, although mobility
extensions exist. IPv6 has a better-designed solution.

Transport Layer services

There is a long list of services that can be optionally provided by the Transport Layer. None of them are
compulsory, because not all applications require all available services.

• Connection-oriented: This is normally easier to deal with than connectionless models, so
where the Network layer only provides a connectionless service, a connection-oriented
service is often built on top of it in the Transport Layer.

• Same Order Delivery: The Network layer doesn't generally guarantee that packets of data
will arrive in the same order that they were sent, but often this is a desirable feature, so the
Transport Layer provides it. The simplest way of doing this is to give each packet a number, and
allow the receiver to reorder the packets.

• Reliable data: Packets may be lost in routers, switches, bridges and hosts due to network
congestion, when the packet queues are filled and the network nodes have to delete packets.
Packets may be lost or corrupted in Ethernet due to interference and noise, since Ethernet does not
retransmit corrupted packets. Packets may be delivered in the wrong order by an underlying
network. Some Transport Layer protocols, for example TCP, can fix this. By means of an error
detection code, for example a checksum, the transport protocol may check that the data is not
corrupted, and verify that by sending an ACK message to the sender. Automatic repeat request
schemes may be used to retransmit lost or corrupted data. By introducing segment numbering in
the Transport Layer packet headers, the packets can be sorted into order. Of course, error-free transmission is
impossible, but it is possible to substantially reduce the number of undetected errors.

• Flow control: The amount of memory on a computer is limited, and without flow control a
faster sender might flood a receiver with so much information that it can't hold it all before
dealing with it. Nowadays, this is not a big issue, as memory is cheap while bandwidth is
comparatively expensive, but in earlier times it was more important. Flow control allows the
receiver to respond before it is overwhelmed. Sometimes this is already provided by the network,
but where it is not, the Transport Layer may add it on.

• Congestion avoidance: Network congestion occurs when a queue buffer of a network node
is full and starts to drop packets. Automatic repeat request may keep the network in a congested
state. This situation can be avoided by adding congestion avoidance to the flow control, including
slow-start. This keeps the bandwidth consumption at a low level in the beginning of the
transmission, or after packet retransmission.

• Byte orientation: Rather than dealing with things on a packet-by-packet basis, the Transport
Layer may add the ability to view communication just as a stream of bytes. This is nicer to deal
with than arbitrary packet sizes; however, it rarely matches the communication model, which will
normally be a sequence of messages of user defined sizes.

• Ports: (Part of the Transport Layer in the TCP/IP model, but of the Session Layer in the
OSI model) Ports are essentially ways to address multiple entities in the same location. For
example, the first line of a postal address is a kind of port, and distinguishes between different
occupants of the same house. Computer applications will each listen for information on their own
ports, which is why you can use more than one network-based application at the same time.
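The Same Order Delivery service above amounts to numbering each packet and letting the receiver sort and de-duplicate. A minimal receiver-side sketch, assuming each packet arrives as a (sequence number, payload) pair:

```python
# Sketch of receiver-side reordering by sequence number: each packet
# carries a number; the receiver sorts on it and drops duplicates.
def reorder(packets):
    # packets: list of (sequence_number, payload) tuples, possibly
    # arriving out of order and possibly containing duplicates.
    seen = {}
    for seq, payload in packets:
        seen.setdefault(seq, payload)        # keep first copy, drop repeats
    return [seen[seq] for seq in sorted(seen)]

# "world" arrives before "hello ", and once more as a duplicate.
arrived = [(2, b"world"), (1, b"hello "), (2, b"world")]
print(reorder(arrived))                       # payloads back in send order
```

A real transport protocol would also bound the sequence-number space and buffer size, which is exactly what the sliding-window machinery discussed later provides.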

The Application Layer is a term used in categorizing protocols and methods in architectural models of
computer networking. Both the OSI model and the Internet Protocol Suite (TCP/IP) contain an application
layer.

In TCP/IP, the Application Layer contains all protocols and methods that fall into the realm of process-to-
process communications via an Internet Protocol (IP) network using the Transport Layer protocols to
establish underlying host-to-host connections.

In the OSI model, the definition of its Application Layer is narrower in scope, distinguishing explicitly
additional functionality above the Transport Layer at two additional levels: Session Layer and Presentation
Layer. OSI specifies strict modular separation of functionality at these layers and provides protocol
implementations for each layer.

The common application layer services provide semantic conversion between associated application
processes. Note: examples of common application services of general interest include the virtual file,
virtual terminal, and job transfer and manipulation protocols.

Q3(ii)
In a datagram network, there is no network-layer connection between the two hosts, so there is no
guaranteed bandwidth and the packets may take different paths.

In a virtual-circuit network, the opposite holds: a connection is established before data is transferred.

————————————————————————————————————

•There are a number of important differences between virtual circuit and datagram networks.
•The choice strongly impacts complexity of the different types of node.
•Use of datagrams between intermediate nodes allows relatively simple protocols at this level,
-but at the expense of making the end (user) nodes more complex when end-to-end virtual circuit service is
desired.
•The Internet transmits datagrams between intermediate nodes using IP.
•Most Internet users need additional functions such as end-to-end error and sequence control to give a
reliable service (equivalent to that provided by virtual circuits).
•This reliability may be provided by
-the Transmission Control Protocol (TCP), which is used end-to-end across the Internet.

Q3(iii)

Flow Control:
In communications, the process of adjusting the flow of data from one device to another to ensure that the
receiving device can handle all of the incoming data. This is particularly important where the sending
device is capable of sending data much faster than the receiving device can receive it.

Error Control:
Error control is a method that can be used to recover corrupted data whenever possible. There are two
basic types of error control: backward error control and forward error control. In backward error
control, the data is encoded so that the encoded data contains additional redundant information which is
used to detect the corrupted blocks of data that must be resent. In contrast, in forward error control
(FEC), the data is encoded so that it contains enough redundant information to recover from some
communications errors.
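The distinction can be illustrated with a 3-repetition code, one of the simplest forward-error-control schemes: instead of detecting damage and requesting a resend, the receiver repairs a single flipped bit locally by majority vote. A sketch:

```python
# Minimal forward-error-control sketch: a 3-repetition code.
# Each bit is transmitted three times; a majority vote over each
# group of three repairs any single flipped bit without a resend.

def fec_encode(bits):
    return [b for b in bits for _ in range(3)]    # repeat each bit 3 times

def fec_decode(coded):
    return [1 if sum(coded[i:i + 3]) >= 2 else 0  # majority vote per group
            for i in range(0, len(coded), 3)]

sent = fec_encode([1, 0, 1])          # -> [1,1,1, 0,0,0, 1,1,1]
sent[4] = 1                           # the channel flips one bit
print(fec_decode(sent))               # receiver recovers [1, 0, 1] locally
```

Backward error control would instead only detect the damage (e.g. via a checksum) and ask the sender to retransmit the block; FEC trades extra redundancy for avoiding that round trip.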

Q5

Circuit Switching

In telecommunications, a circuit switching network is one that establishes a circuit (or channel) between
nodes and terminals before the users may communicate, as if the nodes were physically connected with an
electrical circuit.

The bit delay is constant during a connection, as opposed to packet switching, where packet queues may
cause varying packet transfer delay. Each circuit cannot be used by other callers until the circuit is released
and a new connection is set up. Even if no actual communication is taking place in a dedicated circuit, that
channel remains unavailable to other users. Channels that are available for new calls to be set up are said to
be idle.
In circuit-switching, this path is decided upon before the data transmission starts. The system decides on
which route to follow, based on a resource-optimizing algorithm, and transmission goes according to the
path. For the whole length of the communication session between the two communicating bodies, the route
is dedicated and exclusive, and released only when the session terminates.

Packet Switching

In packet-switching, the packets are sent towards the destination irrespective of each other. Each packet
has to find its own route to the destination. There is no predetermined path; the decision as to which node to
hop to in the next step is taken only when a node is reached. Each packet finds its way using the
information it carries, such as the source and destination IP addresses.

FRAME RELAY

Frame Relay is a standardized wide area networking technology that specifies the physical and logical link
layers of digital telecommunications channels using a packet switching methodology. Originally designed
for transport across Integrated Services Digital Network (ISDN) infrastructure, it may be used today in the
context of many other network interfaces. Network providers commonly implement Frame Relay for voice
(VoFR) and data as an encapsulation technique, used between local area networks (LANs) over a wide area
network (WAN). Each end-user gets a private line (or leased line) to a frame-relay node. The frame-relay
network handles the transmission over a frequently-changing path transparent to all end-users.

Cell Relay

A data transmission technology based on transmitting data in relatively small, fixed-size packets or cells.
Each cell contains only basic path information that allows switching devices to route the cell quickly. Cell
relay systems can reliably carry live video and audio because cells of fixed size arrive in a more predictable
way than systems with packets or frames of varying size.

ISDN

ISDN, which stands for Integrated Services Digital Network, is a system of digital phone connections
which has been available for over a decade. This system allows voice and data to be transmitted
simultaneously across the world using end-to-end digital connectivity.

With ISDN, voice and data are carried by bearer channels (B channels) occupying a bandwidth of 64 kb/s
(bits per second). Some switches limit B channels to a capacity of 56 kb/s. A data channel (D channel)
handles signaling at 16 kb/s or 64 kb/s, depending on the service type. Note that, in ISDN terminology, "k"
means 1000 (10^3), not 1024 (2^10) as in many computer applications (the designator "K" is sometimes used to
represent this value); therefore, a 64 kb/s channel carries data at a rate of 64000 b/s. A new set of standard
prefixes has been created to handle this. Under this scheme, "k" (kilo-) means 1000 (10^3), "M"
(mega-) means 1000000 (10^6), and so on, and "Ki" (kibi-) means 1024 (2^10), "Mi" (mebi-) means
1048576 (2^20), and so on.
There are two basic types of ISDN service: Basic Rate Interface (BRI) and Primary Rate Interface
(PRI). BRI consists of two 64 kb/s B channels and one 16 kb/s D channel for a total of 144 kb/s. This basic
service is intended to meet the needs of most individual users.

PRI is intended for users with greater capacity requirements. Typically the channel structure is 23 B
channels plus one 64 kb/s D channel for a total of 1536 kb/s. In Europe, PRI consists of 30 B channels plus
one 64 kb/s D channel for a total of 1984 kb/s. It is also possible to support multiple PRI lines with one 64
kb/s D channel using Non-Facility Associated Signaling (NFAS).

H channels provide a way to aggregate B channels. They are implemented as:

• H0=384 kb/s (6 B channels)

• H10=1472 kb/s (23 B channels)

• H11=1536 kb/s (24 B channels)

• H12=1920 kb/s (30 B channels) – International (E1) only

To access BRI service, it is necessary to subscribe to an ISDN phone line. Customers must be within 18000
feet (about 3.4 miles or 5.5 km) of the telephone company central office for BRI service; beyond that,
expensive repeater devices are required, or ISDN service may not be available at all. Customers will also
need special equipment to communicate with the phone company switch and with other ISDN devices.
These devices include ISDN Terminal Adapters (sometimes called, incorrectly, "ISDN Modems") and
ISDN Routers.
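The BRI and PRI totals quoted above are simple sums of the B and D channel rates (recalling that in ISDN "k" means 1000, so all rates are multiples of 1000 b/s):

```python
# ISDN channel arithmetic: totals are sums of B and D channel rates.
B, D16, D64 = 64_000, 16_000, 64_000   # rates in b/s; ISDN "k" = 1000

bri = 2 * B + D16        # Basic Rate Interface: 2B + D = 144 kb/s
pri_us = 23 * B + D64    # North American PRI:  23B + D = 1536 kb/s
pri_eu = 30 * B + D64    # European PRI:        30B + D = 1984 kb/s

print(bri, pri_us, pri_eu)
```

The H-channel figures in the list above follow the same pattern, e.g. H0 = 6 × 64 kb/s = 384 kb/s.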

Q4(i)

Bridging is a forwarding technique used in packet-switched computer networks. Unlike routing, bridging
makes no assumptions about where in a network a particular address is located. Instead, it depends on
flooding and examination of source addresses in received packet headers to locate unknown devices. Once
a device has been located, its location is recorded in a table where the MAC address is stored so as to
preclude the need for further broadcasting. The utility of bridging is limited by its dependence on flooding,
and is thus only used in local area networks.

Bridging generally refers to Transparent bridging which predominates in Ethernet. Another form of
bridging, Source route bridging, was developed for token ring networks.

A Network bridge connects multiple network segments at the data link layer (Layer 2) of the OSI model.
In Ethernet networks, the term Bridge formally means a device that behaves according to the IEEE 802.1D
standard. A bridge and switch are very much alike; a switch being a bridge with numerous ports. Switch or
Layer 2 switch is often used interchangeably with Bridge.

Bridges are similar to repeaters or network hubs, devices that connect network segments at the physical
layer; however, with bridging, traffic from one network is managed rather than simply rebroadcast to
adjacent network segments. Bridges are more complex than hubs or repeaters. Bridges can analyze
incoming data packets to determine whether the bridge is able to send a given packet to another segment of the
network.

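The learn-then-forward behaviour described above can be sketched as a table mapping MAC addresses to ports, with flooding as the fallback for unknown destinations. This is a toy model of the idea, not an implementation of IEEE 802.1D:

```python
# Sketch of a transparent learning bridge: record each source MAC's
# port, forward to the learned port when the destination is known,
# otherwise flood every other port.

class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                    # MAC address -> port

    def handle(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port      # learn the sender's location
        if dst_mac in self.table:
            return {self.table[dst_mac]}   # forward on the learned port
        return self.ports - {in_port}      # unknown destination: flood

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle(1, "aa", "bb"))  # "bb" unknown: flood ports 2 and 3
print(bridge.handle(2, "bb", "aa"))  # "aa" was learned on port 1: {1}
```

Once both hosts have sent a frame, the table is populated and flooding stops, which is exactly why the text says the recorded MAC addresses "preclude the need for further broadcasting".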
Q4(ii)

CASCADED HUB NETWORK

A network hub or repeater hub is a device for connecting multiple twisted pair or fiber optic Ethernet
devices together and making them act as a single network segment. Hubs work at the physical layer (layer
1) of the OSI model. The device is a form of multiport repeater. Repeater hubs also participate in collision
detection, forwarding a jam signal to all ports if it detects a collision.

Half duplex

A duplex communication system is a system composed of two connected parties or devices that can
communicate with one another in both directions.

A half-duplex system provides for communication in both directions, but only one direction at a time (not
simultaneously). Typically, once a party begins receiving a signal, it must wait for the transmitter to stop
transmitting, before replying.

An example of a half-duplex system is a two-party system such as a "walkie-talkie" style two-way radio,
wherein one must use "Over" or another previously-designated command to indicate the end of
transmission, and ensure that only one party transmits at a time, because both parties transmit on the same
frequency.

A full-duplex, or sometimes double-duplex, system allows communication in both directions, and, unlike half-
duplex, allows this to happen simultaneously. Land-line telephone networks are full-duplex, since they
allow both callers to speak and be heard at the same time. A good analogy for a full-duplex system would
be a two-lane road with one lane for each direction.

Examples: Telephone, Mobile Phone, etc.

Ethernet Hub

Definition: In computer networking, a hub is a small, simple, inexpensive device that joins multiple
computers together. Many network hubs available today support the Ethernet standard. Other types,
including USB hubs, also exist, but Ethernet is the type traditionally used in home networking.

Working With Ethernet Hubs

To network a group of computers using an Ethernet hub, first connect an Ethernet cable to the unit, then
connect the other end of the cable to each computer's network interface card (NIC). All Ethernet hubs
accept the RJ-45 connectors of standard Ethernet cables.

Switching Hub

Short for port-switching hub, a special type of hub that forwards packets to the appropriate port based on
the packet's address. Conventional hubs simply rebroadcast every packet to every port. Since switching
hubs forward each packet only to the required port, they provide much better performance. Most switching
hubs also support load balancing, so that ports are dynamically reassigned to different LAN segments based
on traffic patterns.

Q4(iii)
OSPF

Open Shortest Path First (OSPF) is a dynamic routing protocol for use in Internet Protocol (IP)
networks. Specifically, it is a link-state routing protocol and falls into the group of interior gateway
protocols, operating within a single autonomous system (AS).

OSPF is perhaps the most widely used interior gateway protocol (IGP) in large enterprise networks.

OSPF is an interior gateway protocol that routes Internet Protocol (IP) packets solely within a single
routing domain (autonomous system). It gathers link state information from available routers and constructs
a topology map of the network. The topology determines the routing table presented to the Internet Layer
which makes routing decisions based solely on the destination IP address found in IP datagrams. OSPF was
designed to support variable-length subnet masking (VLSM) or Classless Inter-Domain Routing (CIDR)
addressing models.
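The shortest-path computation OSPF performs over its topology map is classically Dijkstra's algorithm. A generic sketch over a made-up three-router topology with link costs; this illustrates the idea, not the actual OSPF SPF implementation:

```python
import heapq

# Dijkstra's shortest-path algorithm over a link-cost graph, the kind
# of SPF computation a link-state protocol like OSPF performs.
def shortest_paths(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                          # stale queue entry, skip
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd           # found a cheaper path
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical router topology with link costs
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2},
    "C": {"A": 4, "B": 2},
}
print(shortest_paths(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3}
```

Note that router A reaches C via B at cost 3 rather than over the direct cost-4 link; the per-destination results become the routing table handed to the Internet Layer.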

Q2(i)

The data link layer actually has two sublayers: Media Access Control (MAC) and Logical Link Control (LLC).
The MAC address is the hardware address of the LAN card; it is hard-coded, cannot be changed, and is
unique to every LAN card.
Logical Link Control (LLC) protocol data units (PDUs) contain addressing information. This
addressing information consists of two fields: the Destination Service Access Point (DSAP) address field
and the Source Service Access Point (SSAP) address field. Each of these is an 8-bit field and each is made
up of two components.

Data Link layer:

The data link layer has several functions to perform: providing a well-defined service to the network
layer, determining how the bits of the physical layer are grouped into frames, dealing with transmission
errors, regulating the flow of frames so that slow receivers are not swamped by fast senders, and general
link management.

Services provided to the network layer:

The principal service provided by the data link layer to the network layer is the transmission of data from
the source network layer to the destination network layer. This can be accomplished in three ways:

1. Unacknowledged connectionless service.

2. Acknowledged connectionless service.

3. Connection-oriented service.

Unacknowledged connectionless service consists of having the source machine send independent frames to
the destination machine without having the destination machine acknowledge them. This is used where
there is a very low chance of transmission errors.
In acknowledged connectionless service, the source machine sends frames independently to the
destination machine, but the destination machine acknowledges each and every frame.

MAC PROTOCOLS

(i) RARP
(ii) ICMP
(iii) RIP

Q2(ii)

Stop-and-wait ARQ is the simplest kind of automatic repeat-request (ARQ) method. A stop-and-wait
ARQ sender sends one frame at a time. After sending each frame, the sender doesn't send any further
frames until it receives an ACK (acknowledgement) signal. After receiving a good frame, the receiver
sends an ACK. If the ACK does not reach the sender before a certain time, known as the timeout, the
sender sends the same frame again.

The above behavior is the simplest Stop-and-Wait implementation. However, in a real life implementation
there are problems to be addressed.

Typically the transmitter adds a redundancy check number to the end of each frame. The receiver uses the
redundancy check number to check for possible damage. If the receiver sees that the frame is good, it sends
an ACK. If the receiver sees that the frame is damaged, the receiver discards it and does not send an ACK
— pretending that the frame was completely lost, not merely damaged.
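The "redundancy check number" can be, for example, the 16-bit one's-complement checksum that TCP uses (the RFC 1071 algorithm). A sketch:

```python
def internet_checksum(data: bytes) -> int:
    # 16-bit one's-complement sum with end-around carry (RFC 1071
    # style), the error-detection code used by TCP and IP.
    if len(data) % 2:
        data += b"\x00"                    # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

segment = b"some payload"
cksum = internet_checksum(segment)
# A receiver verifies by checksumming the data plus the transmitted
# checksum: the result is 0 when nothing was corrupted in transit.
print(internet_checksum(segment + cksum.to_bytes(2, "big")))  # 0
```

If the recomputed value is non-zero, the receiver discards the frame and stays silent, exactly the "pretend it was lost" behaviour described above.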

One problem is where the ACK sent by the receiver is damaged or lost. In this case, the sender doesn't
receive the ACK, times out, and sends the frame again. Now the receiver has two copies of the same frame,
and doesn't know if the second one is a duplicate frame or the next frame of the sequence carrying identical
data.

Another problem is when the transmission medium has such a long latency that the sender's timeout runs
out before the frame reaches the receiver. In this case the sender resends the same packet. Eventually the
receiver gets two copies of the same frame, and sends an ACK for each one. The sender, waiting for a
single ACK, receives two ACKs, which may cause problems if it assumes that the second ACK is for the
next frame in the sequence.

To avoid these problems, the most common solution is to define a 1 bit sequence number in the header of
the frame. This sequence number alternates (from 0 to 1) in subsequent frames. When the receiver sends an
ACK, it includes the sequence number of the next packet it expects. This way, the receiver can detect
duplicated frames by checking if the frame sequence numbers alternate. If two subsequent frames have the
same sequence number, they are duplicates, and the second frame is discarded. Similarly, if two subsequent
ACKs reference the same sequence number, they are acknowledging the same frame.
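The receiver side of this alternating-bit scheme can be sketched directly: deliver a frame only when its sequence bit matches the expected bit, drop duplicates, and always acknowledge with the next expected number:

```python
# Receiver side of the alternating-bit (1-bit sequence number) scheme:
# duplicated frames caused by lost ACKs or premature timeouts are
# detected and dropped, but still re-acknowledged.

def receive_frames(frames):
    expected = 0              # next sequence bit we expect (0 or 1)
    delivered = []
    acks = []
    for seq, payload in frames:
        if seq == expected:
            delivered.append(payload)   # new frame: deliver it
            expected ^= 1               # flip the expected bit
        # a duplicate (seq != expected) is dropped but still ACKed,
        # so the sender can stop retransmitting
        acks.append(expected)           # ACK carries next expected seq
    return delivered, acks

# Frame "a" arrives twice because its ACK was lost on the way back.
frames = [(0, "a"), (0, "a"), (1, "b")]
print(receive_frames(frames))   # (['a', 'b'], [1, 1, 0])
```

Only one copy of "a" is delivered, and the repeated ACK for sequence 1 tells the sender its retransmission was redundant rather than a new frame.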

Sliding Window Protocols are a feature of packet-based data transmission protocols. They are used
anywhere reliable in-order delivery of packets is required, such as in the data link layer (OSI model) as well
as in TCP (transport layer of the OSI model).
Conceptually, each portion of the transmission (packets in most data link layers, but bytes in TCP) is
assigned a unique consecutive sequence number, and the receiver uses the numbers to place received
packets in the correct order, discarding duplicate packets and identifying missing ones. The problem with
this is that there is no limit on the size of the sequence numbers that can be required.

By placing limits on the number of packets that can be transmitted or received at any given time, a sliding
window protocol allows an unlimited number of packets to be communicated using fixed-size sequence
numbers.

For the highest possible throughput, it is important that the transmitter is not forced to stop sending by the
sliding window protocol earlier than one round-trip delay time (RTT). The limit on the amount of data that
it can send before stopping to wait for an acknowledgment should be larger than the bandwidth-delay
product of the communications link. If it is not, the protocol will limit the effective bandwidth of the link.
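This window-size rule is the bandwidth-delay product expressed in packets. A quick calculation, using hypothetical link figures rather than anything from the text:

```python
import math

# Minimum window (in packets) needed to keep a link fully utilized:
# the window must cover at least one bandwidth-delay product.
def min_window_packets(bandwidth_bps: float, rtt_s: float,
                       packet_bits: int) -> int:
    bdp_bits = bandwidth_bps * rtt_s       # bits "in flight" during one RTT
    return math.ceil(bdp_bits / packet_bits)

# Example: 10 Mb/s link, 50 ms round-trip time, 12000-bit (1500-byte)
# packets -> the sender needs a window of at least 42 packets.
print(min_window_packets(10_000_000, 0.05, 12_000))
```

With a smaller window the sender stalls waiting for acknowledgements each round trip, which is the effective-bandwidth limit the paragraph above warns about.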

Stop and Wait ARQ

Stop and Wait transmission is the simplest reliability technique and is adequate for a very simple
communications protocol. A stop and wait protocol transmits a Protocol Data Unit (PDU) of information
and then waits for a response. The receiver receives each PDU and sends an Acknowledgement (ACK)
PDU if a data PDU is received correctly, and a Negative Acknowledgement (NACK) PDU if the data was
not received. In practice, the receiver may not be able to reliably identify whether a PDU has been received,
and the transmitter will usually also need to implement a timer to recover from the condition where the
receiver does not respond.

Under normal transmission the sender will receive an ACK for the data and then commence transmission of
the next data block. For a long delay link, the sender may have to wait an appreciable time for this
response. While it is waiting the sender is said to be in the "idle" state and is unable to send further data.

Stop and Wait ARQ – Waiting for Acknowledgment (ACK) from the remote node.

The blue arrows show the sequence of data PDUs being sent across the link from the sender (top) to the
receiver (bottom). A Stop and Wait protocol relies on two-way transmission (full duplex or half duplex) to
allow the receiver at the remote node to return PDUs acknowledging the successful transmission. The
acknowledgements are shown in green in the diagram, and flow back to the original sender. A small
processing delay may be introduced between reception of the last byte of a Data PDU and generation of the
corresponding ACK.
When PDUs are lost, the receiver will not normally be able to identify the loss (most receivers will not
receive anything, not even an indication that something has been corrupted). The transmitter must then rely
upon a timer to detect the lack of a response.

Stop and Wait ARQ – Retransmission due to timer expiry

In the diagram, the second PDU of Data is corrupted during transmission. The receiver discards the
corrupted data (by noting that it is followed by an invalid data checksum). The sender is unaware of this
loss, but starts a timer after sending each PDU. Normally an ACK PDU is received before this timer
expires. In this case no ACK is received, and the timer counts down to zero and triggers retransmission of
the same PDU by the sender. The sender always starts a timer following transmission, but in the second
transmission receives an ACK PDU before the timer expires, finally indicating that the data has now been
received by the remote node.
