
Chapter 3 The Data Link Layer

[Figure: communication between user A and user B]
3.1 Data Link Layer Design Issues
Providing a well-defined service interface to the network layer

Determining how the bits of the physical layer are grouped into
frames

Dealing with transmission errors

Regulating the flow of frames so that slow receivers are not
swamped by fast senders
Functions of Data Link Layer
3.1.1 Services Provided to the Network layer
[Figure: model used]
3.1.1 Services Provided to the Network layer
Three reasonable possibilities for data link services:

1. Unacknowledged connectionless service

2. Acknowledged connectionless service

3. Acknowledged connection-oriented service
3.1.1 Services Provided to the Network layer
Unacknowledged connectionless service
1. The source sends independent frames to the destination without
the destination acknowledging them.

2. No connection is established beforehand or released afterward

3. If a frame is lost due to noise on the line, no attempt is made to
recover it in the data link layer

4. Appropriate when the error rate is very low

5. Appropriate for real-time traffic, such as speech, where late data are worse than bad data


3.1.1 Services Provided to the Network layer
Acknowledged connectionless service
Each frame sent is individually acknowledged. In this way, the
sender knows whether or not a frame has arrived safely. If it has
not arrived within a specified time interval, it can be sent again.
It is perhaps worth emphasizing that providing
acknowledgements in the data link layer is just an optimization,
never a requirement. The transport layer can always send a
message and wait for it to be acknowledged.
3.1.1 Services Provided to the Network layer
Acknowledged connectionless service
The trouble with this strategy is that if the average message is
broken up into, say, 10 frames, and 20 percent of all frames are lost,
it may take a very long time for the message to get through.

If individual frames are acknowledged and retransmitted, entire
messages get through much faster.
3.1.1 Services Provided to the Network layer
Acknowledged connection-oriented service
1. The source and the destination establish a connection before any
data are transferred.

2. Each frame sent over the connection is numbered

3. The data link layer guarantees that each frame sent is indeed
received.

4. Furthermore, it guarantees that each frame is received exactly once
and that all frames are received in the right order.

5. Connection-oriented service provides the network layer with the
equivalent of a reliable bit stream.
3.1.1 Services Provided to the Network layer
Role of the Data Link Protocol
The routing code frequently wants the job of reliable delivery
done right, i.e., handled by the data link protocol.
3.1 Data Link Layer Design Issues
3.1.2 Framing
The data link layer uses the services provided by the physical layer.
The physical layer is not perfect, so it is up to the data link
layer to detect and, if necessary, correct errors.
The usual approach is for the data link layer to break the bit
stream up into discrete frames and compute the checksum for
each frame.

When a frame arrives at the destination, the checksum is
recomputed.

If the newly computed checksum is different from the one
contained in the frame, the data link layer knows that an error
has occurred and takes steps to deal with it.
Chapter 3 The Data Link Layer
3.1 Data Link Layer Design Issues
3.1.2 Framing
Breaking the bit stream up into frames is more difficult than
it first appears.
Four methods:

1. Character count.

2. Starting and ending characters, with character stuffing.

3. Starting and ending flags, with bit stuffing.

4. Physical layer coding violations.
Chapter 3 The Data Link Layer
3.1 Data Link Layer Design Issues
3.1.2 Framing
Character count
[Figure: a character stream framed by character counts, (a) without errors, (b) with one error.]
3.1.2 Framing
Character count
Even if the checksum is incorrect so the destination knows that
the frame is bad, it still has no way of telling where the next
frame starts.

Sending a frame back to the source asking for a retransmission
does not help either, since the destination does not know how
many characters to skip over to get to the start of the
retransmission.
The second framing method gets around the problem of
resynchronization after an error by having each frame start and end
with special bytes.

In the past, the starting and ending bytes were different, but in recent
years most protocols have used the same byte, called a flag byte, as
both the starting and ending delimiter, as shown in Fig.(a) as FLAG.
3.1.2 Framing
Starting and ending characters
A major disadvantage of using this framing method is that it is
closely tied to 8-bit characters in general and the ASCII
character code in particular.
We need a technique that allows characters of arbitrary size.
Chapter 3 The Data Link Layer
3.1 Data Link Layer Design Issues
3.1.2 Framing
Flags (01111110) at the beginning and end
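To make the third method concrete, here is a minimal sketch of standard bit stuffing (illustrative Python, not any particular implementation): whenever five consecutive 1s appear in the outgoing data, the sender inserts a 0, and the receiver deletes it, so the flag pattern 01111110 can never occur inside the frame body.

FLAG = "01111110"                      # frame delimiter

def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:                   # five consecutive 1s: insert a 0
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:                   # the next bit is the stuffed 0: skip it
            i += 1                     # (a 1 here would mean a flag or an error in a real receiver)
            run = 0
        i += 1
    return "".join(out)

payload = "0110111111111100"
stuffed_frame = FLAG + bit_stuff(payload) + FLAG
assert bit_unstuff(bit_stuff(payload)) == payload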
3.1.2 Framing
Physical layer coding violations
This method of framing is only applicable to networks in which the
encoding on the physical medium contains some redundancy.

For example, some LANs encode 1 bit of data by using 2 physical
bits. Normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair.

The combinations high-high and low-low are not used for data and
they can be used as frame boundaries.
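A small sketch of this idea, using the 1 = high-low, 0 = low-high encoding described above (the symbols H/L and the particular boundary patterns are illustrative assumptions):

HI, LO = "H", "L"
ENCODE = {"1": HI + LO, "0": LO + HI}       # 1 = high-low pair, 0 = low-high pair
START, END = HI + HI, LO + LO               # coding violations used only as frame boundaries

def frame(bits: str) -> str:
    return START + "".join(ENCODE[b] for b in bits) + END

def deframe(signal: str) -> str:
    assert signal.startswith(START) and signal.endswith(END)
    body = signal[len(START):-len(END)]
    decode = {v: k for k, v in ENCODE.items()}
    return "".join(decode[body[i:i + 2]] for i in range(0, len(body), 2))

assert deframe(frame("1011")) == "1011"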
3.1.3 Error Control
The usual way to ensure reliable delivery is to provide the sender
with some feedback about what is happening at the other end of the
line.
[Diagram: the sender transmits a data frame; the receiver returns an acknowledgement (ACK) or a negative acknowledgement (NAK).]
3.1.3 Error Control
An additional complication comes from the possibility that
hardware troubles may cause a frame to vanish completely. In
this case, the receiver will not react at all.
This possibility is dealt with by introducing timers into the
data link layer.
3.1.3 Error Control
However, when frames may be transmitted multiple times there is
a danger that the receiver will accept the same frame two or more
times, and pass it to the network layer more than once. (When will
this happen?)

To prevent this from happening, it is generally necessary to assign
sequence numbers to outgoing frames, so that the receiver can
distinguish retransmissions from originals.
3.1.4 Flow Control
What to do with a sender that systematically wants to transmit
frames faster than the receiver can accept them?
The usual solution is to introduce flow control to throttle the
sender into sending no faster than the receiver can handle the
traffic.

This throttling generally requires some kind of a feedback
mechanism, so the sender can be made aware of whether or not
the receiver is able to keep up.
3.1.4 Flow Control
Various flow control schemes are known, but most of them use
the same basic principle. The protocol contains well-defined
rules about when a sender may transmit the next frame.

These rules often prohibit frames from being sent until the
receiver has granted permission, either implicitly or explicitly.
Chapter 3 The Data Link Layer
3.2 Error Detection and Correction
Transmission errors are going to be a fact of life for many years
to come (e.g., in the local loop and in wireless communication).
As a result of the physical processes that generate them, errors
on some media (e.g., radio) tend to come in bursts rather than
singly.
Disadvantage: They are much harder to detect and correct than
isolated errors.
[Figures: a single-bit error; a burst error of length 8; XORing of two single bits or two words.]
3.2.1 Error-Correcting Codes
A codeword consists of m data bits and r check bits, with total length n = m + r.
There are 2^n possible codewords and 2^m possible data messages.

The Hamming distance between two codewords is the number of bit positions in which
they differ; for example, d(10010010, 00010001) = 3. The Hamming distance of a code
is min d(C1, C2) over all pairs of distinct codewords.

If two codewords are a Hamming distance d apart, it will require d
single-bit errors to convert one into the other.
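These definitions can be written down directly (an illustrative sketch in Python):

from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    # Number of bit positions in which the two codewords differ.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def code_distance(codewords) -> int:
    # Hamming distance of a complete code: the minimum over all pairs of codewords.
    return min(hamming_distance(c1, c2) for c1, c2 in combinations(codewords, 2))

assert hamming_distance("10010010", "00010001") == 3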
[Figure: datawords and codewords in block coding.]
[Table: a code for error detection (example).]
3.2.1 Error-Correcting Codes
In most data transmission applications, all 2^m possible data
messages are legal, but due to the way the check bits are
computed, not all 2^n possible codewords are used.

Given the algorithm for computing the check bits, it is
possible to construct a complete list of codewords, and from
this list find the two codewords whose Hamming distance is
minimum. This distance is the Hamming distance of the
complete code.
3.2.1 Error-Correcting Codes
To detect d errors, you need a distance d+1 code.
[Diagram: valid codewords at distance d+1; changing up to d bits of a codeword (a radius of d bits) cannot produce another valid codeword, so the error is detected.]
3.2.1 Error-Correcting Codes
To correct d errors, you need a distance 2d+1 code.
[Diagram: valid codewords at distance 2d+1; spheres of radius d bits around distinct codewords do not overlap, so a word with up to d errors is still closest to its original codeword.]
Note: an error-detecting code can detect only the types of errors for which it is designed; other types of errors may remain undetected.
3.2.1 Error-Correcting Codes
A parity check code has distance 2. Therefore, it can detect
single-bit errors.
Consider a code with only 4 valid codewords:
0000000000, 0000011111, 1111100000, 1111111111

This code has a distance of 5, which means that it can correct
double errors.
If the codeword 0000000111 arrives and we know at most two
bits are in error, the receiver knows that the original must have
been 0000011111. If, however, a triple error changes
0000000000 into 0000000111, the error will not be corrected
properly.
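The decoding rule used in this example is simply nearest-codeword decoding over the four valid codewords (an illustrative sketch):

CODE = ["0000000000", "0000011111", "1111100000", "1111111111"]   # a distance-5 code

def correct(received: str) -> str:
    # Decode to the closest valid codeword (reliable only if at most 2 bits are in error).
    return min(CODE, key=lambda c: sum(x != y for x, y in zip(c, received)))

assert correct("0000000111") == "0000011111"   # a double error in 0000011111 is corrected
# but the same word produced by a triple error in 0000000000 decodes to the wrong codeword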
Chapter 3 The Data Link Layer
3.2 Error Detection and Correction
3.2.1 Error-Correcting Codes
A 1-bit error correcting code (Hamming code)
A codeword has m data bits and r check bits: n = m + r.

Each of the 2^m legal messages has n illegal codewords at a
distance 1 from it.
Thus each of the 2^m legal messages requires n + 1 bit patterns
dedicated to it.

Therefore,
  (n + 1) 2^m ≤ 2^n
  (m + r + 1) 2^m ≤ 2^(m+r)
  (m + r + 1) ≤ 2^r
The bits of the codeword are numbered consecutively, starting with bit
1 at the left end, bit 2 to its immediate right, and so on.

The bits that are powers of 2 (1, 2, 4, 8, 16, etc.) are check bits. The
rest (3, 5, 6, 7, 9, etc.) are filled up with the m data bits.

Each check bit forces the parity of some collection of bits, including
itself, to be even (or odd).

A bit may be included in several parity computations. To see which
check bits the data bit in position k contributes to, rewrite k as a sum
of powers of 2.

For example, 11 = 1 + 2 + 8 and 29 = 1 + 4 + 8 + 16. A bit is checked
by just those check bits occurring in its expansion (e.g., bit 11 is
checked by bits 1, 2, and 8).
3.2.1 Error-Correcting Codes
A 1-bit error correcting code (Hamming code)
Given m, this puts a lower limit on the number of check bits needed
to correct single errors.

This theoretical lower limit can, in fact, be achieved using a method
due to Hamming.

The bits of codewords are numbered consecutively, starting with bit
1 at the left end. The bits that are powers of 2 (1, 2, 4, 8, ...) are
check bits. The rest are filled up with the data bits. Each check bit
forces the parity of some collection of bits, including itself, to be
even (or odd).
3.2.1 Error-Correcting Codes
When a codeword arrives, the receiver initializes a counter to zero.

It then examines each check bit, k, to see if it has the correct parity.
If not, it adds k to the counter.

If the counter is zero afterwards, the codeword is accepted as valid.
If the counter is nonzero, it contains the number of the incorrect bit.
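A sketch of this receiver-side check in Python (illustrative; bit positions start at 1, positions that are powers of 2 hold the check bits, and even parity is assumed; the 7-bit codeword used in the test is an assumed example of a valid Hamming codeword):

def hamming_syndrome(codeword: str) -> int:
    # codeword[0] is bit 1, codeword[1] is bit 2, and so on.
    n, counter, k = len(codeword), 0, 1
    while k <= n:                       # examine each check bit k = 1, 2, 4, 8, ...
        # parity of all bit positions whose binary expansion contains k (including k itself)
        parity = sum(int(codeword[pos - 1]) for pos in range(1, n + 1) if pos & k) % 2
        if parity != 0:                 # wrong parity: add k to the counter
            counter += k
        k *= 2
    return counter                      # 0 means the codeword is accepted as valid

def hamming_correct(codeword: str) -> str:
    bad = hamming_syndrome(codeword)    # nonzero: the number of the incorrect bit
    if bad == 0:
        return codeword
    bits = list(codeword)
    bits[bad - 1] = "1" if bits[bad - 1] == "0" else "0"   # flip the incorrect bit
    return "".join(bits)

assert hamming_syndrome("0110011") == 0          # a valid (7,4) even-parity codeword
assert hamming_correct("0110001") == "0110011"   # a single error in bit 6 is located and fixed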
3.2.2 Error-detecting Codes
error control: error detection or error correction?
Assume an error rate of 1/10^6 and data blocks of 1000 bits.
Error-detecting check bits: 1 bit per block (e.g., a parity check).
Error-correcting check bits: 10 bits per block (e.g., a Hamming code).

In 10^3 blocks (one megabit of data):
  error-detecting overhead = 10^3 + 1001 = 2001 bits
  (1000 parity bits plus one 1001-bit retransmission of the single block expected to be in error)
  error-correcting overhead = 10 × 10^3 = 10000 bits
Therefore, error-correcting codes are used only in certain critical situations,
for example, real-time memory systems (as in a space shuttle).
3.2.2 Error-detecting Codes
CRC (Cyclic Redundancy Code)
Treat bit strings as polynomials with coefficients 0 and 1.
(E.g., 10001001 = x^7 + x^3 + 1, and 01010111 = x^6 + x^4 + x^2 + x + 1.)
1. Sender and receiver agree upon a polynomial of degree r.
(the generating polynomial, G(x))
2. Append r zero bits to the message, giving M(x)·x^r, and divide by G(x):
      M(x)·x^r / G(x) = A(x) with remainder R(x).
   Then transmit T(x) = M(x)·x^r + R(x).
3. When receiver receives T(x), it divides it by G(x). If there
is a remainder, then an error has occurred.
(All the above operations are done in modulo 2.)
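A minimal sketch of this procedure, using bit strings and modulo-2 (XOR) long division (illustrative Python; the generator G below is a toy example, not one of the standard CRC polynomials):

def mod2_remainder(bits: str, generator: str) -> str:
    # Long division over GF(2): subtraction is XOR; returns the remainder (r bits).
    r = len(generator) - 1
    work = list(bits)
    for i in range(len(bits) - r):
        if work[i] == "1":
            for j, gbit in enumerate(generator):
                work[i + j] = "0" if work[i + j] == gbit else "1"
    return "".join(work[-r:])

def crc_transmit(message: str, generator: str) -> str:
    r = len(generator) - 1
    rem = mod2_remainder(message + "0" * r, generator)   # divide M(x)·x^r by G(x)
    return message + rem                                 # T(x) = M(x)·x^r + R(x)

def crc_check(received: str, generator: str) -> bool:
    # The receiver divides by G(x); a nonzero remainder means an error has occurred.
    return "1" not in mod2_remainder(received, generator)

G = "1011"                          # toy generator G(x) = x^3 + x + 1 (assumed for illustration)
frame = crc_transmit("1101", G)     # "1101" + remainder "001"
assert crc_check(frame, G)
assert not crc_check("1101011", G)  # a corrupted frame leaves a nonzero remainder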
3.2.2 Error-detecting Codes
error detecting capabilities of CRC
E(x): the error polynomial (e.g., E(x) = x^5 + x + 1 means the bits corresponding
to x^5, x^1, and x^0 are in error)
Errors go undetected if the received frame T(x) + E(x) is divisible by G(x) with
no remainder; since T(x) is divisible by G(x), this happens exactly when E(x)
itself is divisible by G(x).
If there are k 1 bits in E(x), k single-bit errors have occurred. A
single burst error is characterized by an initial 1, a mixture of 0s
and 1s, and a final 1, with all other bits being 0.
3.2.2 Error-detecting Codes
error detecting capabilities of CRC
1. Single-bit error: E(x) = x^i.
If G(x) contains a constant (x^0) term as well as a higher-order term, then x^i is
never divisible by G(x), so E(x)/G(x) always leaves a remainder and all single-bit
errors are detected.
2. An odd number of bits in error: E(x) has an odd number of terms. Such errors are
detected whenever G(x) has (x + 1) as a factor, because no polynomial with an odd
number of terms is divisible by x + 1.

To see this, assume that E(x) has an odd number of terms and is divisible by x + 1.

Factor E(x) into (x + 1) Q(x). Now evaluate E(1) = (1 + 1)Q(1).
Since 1 + 1 = 0 (modulo 2), E(1) must be zero.

But if E(x) has an odd number of terms, substituting 1 for x everywhere always
yields 1, not 0. This contradiction shows that E(x) cannot be divisible by x + 1.
3.2.2 Error-detecting Codes
error detecting capabilities of CRC
3. If there have been two isolated single-bit errors, E(x) = x^i + x^j, where i > j.
   E(x) = x^i + x^j = x^j (x^(i-j) + 1)

If we assume that G(x) is not divisible by x, a sufficient condition
for all double errors to be detected is that G(x) does not divide
x^k + 1 for any k up to the maximum value of i - j (i.e., up to the
maximum frame length). Simple, low-degree polynomials that
give protection to long frames are known.
Chapter 3 The Data Link Layer
3.3 Elementary Data Link Protocols
As an initial example we will consider a protocol that is as simple as
it can be.

Data are transmitted in one direction only. Both the transmitting and
receiving network layers are always ready.

Processing time can be ignored. Infinite buffer space is available.
And best of all, the communication channel between the data link
layers never damages or loses frames.

This thoroughly unrealistic protocol, which we will nickname
''utopia,'' is shown in Fig.
3.3.1 An Unrestricted Simplex Protocol
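A sketch of the idea in Python (an illustrative pseudo-implementation, not the textbook's listing; a queue stands in for the perfect physical channel, and the two loops play the roles of the sending and receiving data link layers):

import queue

channel = queue.Queue()                    # a perfect channel: never damages or loses frames

def utopia_sender(network_layer_packets):
    for packet in network_layer_packets:
        channel.put({"info": packet})      # build a frame and send it; no acks, no timers

def utopia_receiver(deliver):
    while True:
        frame = channel.get()              # wait for a frame to arrive
        deliver(frame["info"])             # pass the packet up to the network layer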
3.3.2 A Simplex Stop-and-Wait Protocol
Chapter 3 The Data Link Layer
3.3 Elementary Data Link Protocols
3.3.3 A Simplex Protocol for Noisy Channel
sequence numbers (to number packets and ACKs)
[Timeline 1: the sender transmits P(1), receives an ACK, and then transmits P(2), a new packet.]
[Timeline 2: the sender transmits P(1); the ACK is lost or damaged (error); the sender times out and retransmits P(1), an old packet.]
If packets carry no sequence numbers, the receiver cannot distinguish between these two cases: from its point of view, the second frame could be either new data or a duplicate.
Chapter 3 The Data Link Layer
3.3 Elementary Data Link Protocols
3.3.3 A Simplex Protocol for Noisy Channel
ACKs must also be numbered.
[Timeline: P(1) is sent and an error occurs; the sender times out and retransmits P(1); the sender then sends P(2); two ACKs come back.]
A potential failure: if ACKs are not numbered, the sender cannot tell whether a given ACK is for P(1) or for P(2).
Chapter 3 The Data Link Layer
3.3 Elementary Data Link Protocols
3.3.3 A Simplex Protocol for Noisy Channel
Protocols in which the sender waits for a positive
acknowledgement before advancing to the next data item are
often called PAR (Positive Acknowledgement with
Retransmission) or ARQ (Automatic Repeat reQuest).

Although it can handle lost frames by timing out, it requires
the timeout interval to be long enough to prevent premature
timeouts. If the sender times out early, while the
acknowledgement is still on the way, it will send a duplicate.
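A hedged sketch of a PAR/ARQ sender and receiver with a 1-bit sequence number (illustrative Python; queues stand in for the channel, the timeout on the queue plays the role of the retransmission timer, and the two functions are meant to run in separate threads):

import queue

data_channel = queue.Queue()    # sender -> receiver (a lossy channel could drop puts)
ack_channel = queue.Queue()     # receiver -> sender

def par_sender(packets, timeout_interval=1.0):
    seq = 0
    for packet in packets:
        while True:
            data_channel.put({"seq": seq, "info": packet})   # (re)transmit the frame
            try:
                ack = ack_channel.get(timeout=timeout_interval)
            except queue.Empty:
                continue                 # timer expired: frame or ack was lost, retransmit
            if ack == seq:
                break                    # correct ack received: fetch the next packet
        seq = 1 - seq                    # 1-bit sequence number alternates 0, 1, 0, 1, ...

def par_receiver(deliver):
    expected = 0
    while True:
        frame = data_channel.get()
        if frame["seq"] == expected:
            deliver(frame["info"])       # new frame: pass its packet to the network layer
            expected = 1 - expected
        ack_channel.put(frame["seq"])    # also ack duplicates (their earlier ack was lost)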
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
Piggybacking: The technique of temporarily delaying outgoing
acknowledgements so that they can be hooked onto the next
outgoing data frame.
Full-duplex operation: data and acknowledgements are bi-
directional and intermixed
The principal advantage of using piggybacking over having
distinct acknowledgement frames is a better use of the available
channel bandwidth (because it saves the overhead of a distinct
frame).
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
However, piggybacking introduces a complication not present
with separate acknowledgements. How long should the data link
layer wait for a packet onto which to piggyback the
acknowledgements?
If the data link layer waits longer than the sender's timeout
period, the frame will be retransmitted, defeating the whole
purpose of having acknowledgements.
The data link layer therefore resorts to ad hoc schemes, such as waiting
a fixed number of milliseconds and then sending a separate acknowledgement frame.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
The ultimate purpose: To design a protocol that remained
synchronized in the face of any combination of garbled frames,
lost frames, and premature timeouts.
Sliding window protocols: The essence of all sliding window
protocols is that at any instant of time, the sender maintains a set of
sequence numbers corresponding to frames it is permitted to send.
These frames are said to fall within the sending window.

Similarly, the receiver also maintains a receiving window
corresponding to the set of frames it is permitted to accept.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
The sender's window and the receiver's window need not have
the same lower and upper limits, or even have the same size. In
some protocols they are fixed in size, but in others they can
grow or shrink as frames are sent and received.
The sequence numbers within the sender's window represent
frames sent but not yet acknowledged. Whenever a new packet
arrives from the network layer, it is given the next higher sequence
number, and the upper edge of the window is advanced by one.
When an acknowledgement comes in, the lower edge is advanced
by one. In this way the window continuously maintains a list of
unacknowledged frames.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
A sliding window of size 1, with a 3-bit sequence number.
(a) Initially.
(b) After the first frame has been sent.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
A sliding window of size 1, with a 3-bit sequence number.
(c) After the first frame has been received.
(d) After the first acknowledgement has been received.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
Since frames currently within the sender's window may
ultimately be lost or damaged in transit, the sender must keep
all these frames in its memory for possible retransmission.

Thus, if the maximum window size is n, the sender needs n
buffers to hold the unacknowledged frames.

If the window ever grows to its maximum size, the sending data
link layer must forcibly shut off the network layer until another
buffer becomes free.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
The receiving data link layer's window corresponds to the frames
it may accept. Any frame falling outside the window is discarded
without comment.
When a frame whose sequence number is equal to the lower edge
of the window is received, it is passed to the network layer, an
acknowledgement is generated, and the window is rotated by one.

Unlike the sender's window, the receiver's window always
remains at its initial size. Note that a window size of 1 means that
the data link layer only accepts frames in order, but for larger
windows this is not so.

The network layer, in contrast, is always fed data in the proper
order, regardless of the data link layer's window size.
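A small sketch of the sender-side window bookkeeping described above (illustrative Python; MAX_SEQ and the class layout are assumptions, not the book's code; a receiver window of size 1 would simply track the next expected sequence number):

MAX_SEQ = 7                        # 3-bit sequence numbers: 0..7

def inc(seq):
    return (seq + 1) % (MAX_SEQ + 1)   # advance a sequence number circularly

class SenderWindow:
    def __init__(self, size):
        self.size = size
        self.lower = 0             # oldest unacknowledged frame
        self.next_seq = 0          # sequence number of the next new frame
        self.buffer = {}           # unacknowledged frames kept for retransmission

    def can_send(self):
        return len(self.buffer) < self.size   # otherwise shut off the network layer

    def send(self, packet):
        seq = self.next_seq
        self.buffer[seq] = packet             # upper edge advances by one
        self.next_seq = inc(seq)
        return seq

    def ack(self, seq):
        if seq in self.buffer:                # lower edge advances on acknowledgement
            del self.buffer[seq]
            while self.lower not in self.buffer and self.lower != self.next_seq:
                self.lower = inc(self.lower)

w = SenderWindow(size=4)
s0 = w.send("packet 0")            # frame 0 buffered, upper edge advances
w.ack(s0)                          # lower edge advances, buffer slot freed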
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.1 A One Bit Sliding Window Protocol
A one-bit sliding window protocol is essentially a stop-and-wait protocol.
[Timeline: normal scenario. The sender alternately sends frames 0, 1, 0, 1, ...; the receiver answers with ACK0, ACK1, ACK0, ACK1, ...]
[Timeline: sender time-out. An acknowledgement is slow to arrive; the sender times out and retransmits the frame; the receiver recognizes the duplicate by its sequence number, ignores it, and acknowledges it again.]
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.1 A One Bit Sliding Window Protocol
[Timeline: ACK lost or corrupted. The sender times out and retransmits the frame; the receiver discards the duplicate and re-sends the acknowledgement.]
[Timeline: packet lost or corrupted. The receiver never acknowledges the bad frame; the sender times out and retransmits it. A NAK can be used here to speed up processing, so the sender retransmits without waiting for its timer to expire.]
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.1 A One Bit Sliding Window Protocol
[Figure: a normal scenario. Each frame is labeled (seq, ack, packet number).]
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.1 A One Bit Sliding Window Protocol
[Figure: a peculiar scenario in which half of the frames are duplicates. Each frame is labeled (seq, ack, packet number).]
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
Efficiency of a sliding window of size 1 (stop-and-wait) protocol:
[Timing diagram: a frame takes TRANSP (transmission time) plus PROP (propagation delay) plus PROC (processing time) to reach and be handled by the receiver; the acknowledgement takes TRANSA plus PROP plus PROC to come back.]
S = minimum time between successive packet transmissions
S = TRANSP + TRANSA + 2(PROC + PROP)
efficiency (without errors) = TRANSP / S
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
Example 1: a transmission line of 500 km at 56 kbps; packet and ack are each 700 bits.
TRANSA = TRANSP = 700 bits / 56000 bps = 0.0125 s
PROP = 500000 m / (2.3 × 10^8 m/s) = 0.00217 s
Consequently,
S = 0.0125 s + 2(0.00217 s) + 2·PROC + 0.0125 s = 0.02934 s + 2·PROC
efficiency = 0.0125 s / (0.02934 s + 2·PROC) ≈ 0.0125 / 0.02934 ≈ 43%
Example 2: 500 km of optical fiber at 100 Mbps; packet and ack are each 10000 bits.
TRANSA = TRANSP = 10000 bits / 10^8 bps = 10^-4 s
PROP = 500000 m / (2 × 10^8 m/s) = 0.0025 s
Assume PROC = 1 ms. Consequently,
S = 10^-4 s + 2(0.0025 s) + 2(0.001 s) + 10^-4 s = 7.2 × 10^-3 s
efficiency = 10^-4 s / (7.2 × 10^-3 s) ≈ 1.4%
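Both calculations can be reproduced in a few lines (the propagation speeds 2.3 × 10^8 m/s and 2 × 10^8 m/s are the ones used in the examples above):

def stop_and_wait_efficiency(frame_bits, ack_bits, bps, distance_m, speed_mps, proc_s=0.0):
    trans_p = frame_bits / bps            # TRANSP
    trans_a = ack_bits / bps              # TRANSA
    prop = distance_m / speed_mps         # PROP
    s = trans_p + trans_a + 2 * (proc_s + prop)
    return trans_p / s

# Example 1: 500 km line at 56 kbps, 700-bit packet and ack, PROC ignored -> about 43%
print(stop_and_wait_efficiency(700, 700, 56_000, 500_000, 2.3e8))              # ~0.426

# Example 2: 500 km fiber at 100 Mbps, 10000-bit packet and ack, PROC = 1 ms -> about 1.4%
print(stop_and_wait_efficiency(10_000, 10_000, 100e6, 500_000, 2e8, 0.001))    # ~0.0139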
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
Example 3: Using satellite link
50-kbps satellite channel with a 500 msec round-trip delay
1000-bit frame, TRANSP=20 msec,
Assume all other times are negligible.

Efficiency=20/520=4%
Clearly, the combination of a long transit time, high
bandwidth, and short frame length is disastrous in terms of
efficiency.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
The technique of pipelining: Filling the pipe with frames
For example, in the last satellite example, if the sender's
window size is 26, then after sending the 26th frame (at time 520
msec) the acknowledgement of the first frame will just have arrived.
The 27th frame can be sent immediately, with no idle time at the
sender.
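This window size can be checked directly: the sender needs enough outstanding frames to cover one frame time plus the round-trip delay (a back-of-the-envelope sketch):

import math

frame_time_ms = 20      # 1000-bit frame at 50 kbps
rtt_ms = 500            # satellite round-trip delay

# The ack of frame 1 returns at frame_time + rtt = 520 ms; to transmit continuously
# until then, the sender needs a window of at least:
window = math.ceil((frame_time_ms + rtt_ms) / frame_time_ms)
print(window)           # 26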
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
An important design parameter:
How many unacknowledged outstanding packets are allowed?
It affects:
the size of the sequence number space
buffer sizes in the sender and receiver:
the sender has to buffer frames that were sent but not yet acknowledged
(for possible retransmission)
the receiver (if it accepts frames out of order) has to buffer frames received
correctly but out of sequence, since it must deliver packets to the network
layer in order
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
Pipelining frames over an unreliable channel raises some
serious issues. First, what happens if a frame in the middle of a
long stream is damaged or lost? What should the receiver do
with all the correct frames following it?
Two approaches:
go back n: the receiver simply discards all subsequent frames (the receiver has a
window of size 1)
selective repeat: the receiver buffers all the correct frames following the bad one
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
Go Back n
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
Selective Repeat
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
k-bit sequence numbers: packets are numbered 0, 1, 2, ..., 2^k - 1.
Maximum window size: 2^k - 1 (with 3 bits: sequence numbers 0-7 and at most 7 buffers at the sender).
The window size cannot be 2^k; otherwise consider the following two scenarios (k = 3, window size 8), which look identical to the sender:
  Scenario 1: packets 0~7 received correctly; ACK of 7 received correctly; new batch of 0~7 received correctly; ACK of 7 received correctly.
  Scenario 2: packets 0~7 received correctly; ACK of 7 received correctly; new batch of 0~7 entirely lost; ACK of 7 received correctly.
The sender cannot tell whether the second ACK of 7 refers to the old batch or the new one, so it cannot tell whether the new batch arrived.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.2 A Protocol Using Go Back n
Because this protocol has multiple outstanding frames, it
logically needs multiple timers, one per outstanding frame. All
of these timers can easily be simulated in software using a
single hardware clock that causes interrupts periodically.
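One common way to simulate many timers with one clock is to keep the pending timeouts in a heap and expire everything that is due on each clock tick (an illustrative sketch, not the book's code):

import heapq

class TimerManager:
    def __init__(self):
        self.heap = []                      # (expiry_time, seq) pairs, earliest first
        self.cancelled = set()

    def start_timer(self, seq, now, interval):
        heapq.heappush(self.heap, (now + interval, seq))
        self.cancelled.discard(seq)

    def stop_timer(self, seq):
        self.cancelled.add(seq)             # lazily discarded when it reaches the top

    def tick(self, now):
        expired = []                        # called from the periodic clock interrupt
        while self.heap and self.heap[0][0] <= now:
            _, seq = heapq.heappop(self.heap)
            if seq in self.cancelled:
                self.cancelled.discard(seq)
            else:
                expired.append(seq)         # these frames should be retransmitted
        return expired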
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.3 A Protocol Using Selective Repeat
How many outstanding and unacknowledged frames are allowed?
[Diagram: the sender's window and the receiver's window, with 3-bit sequence numbers; all seven frames are received correctly. Consider the cases in which the acks arrive correctly and in which the acks are lost.]
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.3 A Protocol Using Selective Repeat
[Diagram: the sender's window and the receiver's window after the receiver advances its window.]
Make sure there is no overlap between the receiver's old window and its new window when the receiver advances the window; otherwise retransmitted old frames could be accepted as new ones.
Chapter 3 The Data Link Layer
3.4 Sliding Window Protocols
3.4.3 A Protocol Using Selective Repeat
k-bit sequence numbers: packets are numbered 0, 1, 2, ..., 2^k - 1.
Maximum window size for selective repeat: 2^(k-1) (with 3 bits: sequence numbers 0-7 and 4 buffers).
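The limit can be checked by testing whether the receiver's old and new windows overlap once all outstanding frames have been acknowledged (illustrative sketch; with 3-bit sequence numbers, a window of 4 is safe and a window of 5 is not):

def windows_overlap(k, window):
    seq_space = 2 ** k
    old_window = set(range(window))                                  # receiver expects 0..window-1
    new_window = {(window + i) % seq_space for i in range(window)}   # after acking all of them
    # If the old and new windows share sequence numbers, a retransmitted old frame
    # (sent because all the acks were lost) is indistinguishable from a new one.
    return bool(old_window & new_window)

assert not windows_overlap(3, 4)   # maximum window 2^(k-1) = 4 is safe
assert windows_overlap(3, 5)       # a window of 5 can be fooled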
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.1 HDLC (High-level Data Link Control)
[Diagram: protocol lineage. SDLC (Synchronous Data Link Control), from IBM's SNA (Systems Network Architecture), led to ADCCP (Advanced Data Communication Control Procedure) from ANSI and to HDLC (High-level Data Link Control) from ISO, and in turn to LAP (Link Access Procedure) and LAPB (Link Access Procedure Balanced).]
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.1 HDLC (High-level Data Link Control)
Frame format (bit-oriented)
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.1 HDLC (High-level Data Link Control)
The P/F bit stands for Poll/Final. It is used when a computer is
polling a group of terminals. When used as P, the computer is
inviting the terminal to send data. All the frames sent by the
terminal, except the final one, have the P/F bit set to P. The final
one is set to F.
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.1 HDLC (High-level Data Link Control)
A supervisory frame
type=0 receiver ready (an acknowledgement frame)
type=1 reject (negative acknowledgement frame)
type=2 receiver not ready (stop sending data)
type=3 selective reject (retransmission of only the frame specified)
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.2 The Data Link Layer in the Internet
SLIP (Serial Line IP) (RFC 1055)
It just sends raw IP packets over the line, with a special flag byte
(0xC0) at the end for framing. If the flag byte occurs inside the
IP packet, a form of character stuffing is used and the two-byte
sequence (0xDB, 0xDC) is sent in its place.
More recent versions of SLIP do some TCP and IP header
compression. (RFC 1144)
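The character stuffing just described can be sketched as follows (0xC0 is the END flag and 0xDB 0xDC its replacement, per RFC 1055; full SLIP also escapes the 0xDB byte itself as 0xDB 0xDD, included here for completeness):

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD   # RFC 1055 special bytes

def slip_frame(packet: bytes) -> bytes:
    out = bytearray()
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])   # flag byte inside the packet: send 0xDB 0xDC instead
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])   # escape byte itself: send 0xDB 0xDD
        else:
            out.append(b)
    out.append(END)                        # flag byte at the end for framing
    return bytes(out)

assert slip_frame(b"\x01\xc0\x02") == b"\x01\xdb\xdc\x02\xc0"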
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.2 The Data Link Layer in the Internet
PPP (Point-to-Point Protocol) (RFCs 1661, 1662, 1663)
PPP handles error detection, supports multiple protocols (not
just IP), allows IP addresses to be negotiated at connection
time, permits authentication, and has many other improvements
over SLIP.
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.2 The Data Link Layer in the Internet
PPP (Point-to-Point Protocol) (RFCs 1661, 1662, 1663)
PPP provides three things:

1. A framing method that unambiguously delineates the end of
one frame and the start of the next one. The frame format also
handles error detection.
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.2 The Data Link Layer in the Internet
PPP (Point-to-Point Protocol) (RFCs 1661, 1662, 1663)
PPP provides three things:

2. A link control protocol for bringing lines up, testing them,
negotiating options, and bringing them down again gracefully
when they are no longer needed. This protocol is called LCP
(Link Control Protocol).
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.2 The Data Link Layer in the Internet
PPP (Point-to-Point Protocol) (RFCs 1661, 1662, 1663)
PPP provides three things:

3. A way to negotiate network-layer options in a way that is
independent of the network layer protocol to be used. The
method chosen is to have a different NCP (Network Control
Protocol) for each network layer supported.
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.2 The Data Link Layer in the Internet
PPP (Point-to-Point Protocol) (RFCs 1661, 1662, 1663)
Making a connection:
1. The PC calls the provider's router via a modem
2. Use LCP packets to select the PPP parameters
3. Use NCP to obtain an IP address
4. Send and receive IP packets
5. Use NCP to tear down the network layer connection and free the IP address
6. Use LCP to shut down the data link connection
7. The computer tells the modem to hang up
Chapter 3 The Data Link Layer
3.6 Example data Link Protocols
3.6.2 The Data Link Layer in the Internet
PPP (Point-to-Point Protocol) (RFCs 1661, 1662, 1663)
The PPP full frame format for unnumbered mode operation
