
Issues

Two sides of the same coin
  pre-allocate resources so as to avoid congestion (reservation-based approach)
  control congestion if (and when) it occurs (feedback-based approach)

Congestion Control

[Figure: Source 1 on a 10-Mbps Ethernet and Source 2 on a 100-Mbps FDDI ring
both feed a router whose outgoing 1.5-Mbps T1 link leads to the destination.]

Two points of implementation


hosts at the edges of the network (transport
protocol)
routers inside the network (queuing discipline)

TCP Congestion Control

Idea
  assumes a best-effort network (FIFO or FQ routers)
  each source determines network capacity for itself
  uses implicit feedback
  ACKs pace transmission (self-clocking)

Challenge
  determining the available capacity in the first place
  adjusting to changes in the available capacity

Additive Increase/Multiplicative Decrease

Objective: adjust to changes in the available capacity

New state variable per connection: CongestionWindow
  limits how much data the source has in transit

MaxWin = MIN(CongestionWindow, AdvertisedWindow)
EffWin = MaxWin - (LastByteSent - LastByteAcked)
  (see the sketch below)

Idea:
  increase CongestionWindow when congestion goes down
  decrease CongestionWindow when congestion goes up
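The two window formulas above can be combined into a single bookkeeping step. A minimal sketch, assuming byte units and a plain function wrapper (both assumptions, not part of the slide):

def effective_window(congestion_window, advertised_window,
                     last_byte_sent, last_byte_acked):
    # MaxWin: limited by both the network estimate (CongestionWindow)
    # and the receiver's buffer (AdvertisedWindow).
    max_win = min(congestion_window, advertised_window)
    # EffWin: subtract the data already in transit.
    outstanding = last_byte_sent - last_byte_acked
    return max(0, max_win - outstanding)

# Example: cwnd = 8000 B, advertised = 16000 B, 3000 B still unacknowledged
# -> the source may send up to 5000 more bytes.
print(effective_window(8000, 16000, 10000, 7000))  # 5000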

AIMD (cont)

Question: how does the source determine whether or not the network is
congested?

Answer: a timeout occurs
  a timeout signals that a packet was lost
  packets are seldom lost due to transmission error
  a lost packet implies congestion

AIMD (cont)

[Figure: Source and Destination; a full window of packets flows toward the
destination, and each returning ACK allows the source to send one more.]

Algorithm
  increment CongestionWindow by one packet per RTT (linear increase)
  divide CongestionWindow by two whenever a timeout occurs (multiplicative
  decrease)

In practice: increment a little for each ACK
  Increment = (MSS * MSS)/CongestionWindow
  CongestionWindow += Increment
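A sketch of this per-ACK bookkeeping, with the per-ACK increment and the halving on timeout taken from the slide; the class shape, byte units, and MSS value are assumptions:

MSS = 1460  # assumed segment size in bytes

class AIMDSender:
    def __init__(self):
        self.cwnd = MSS  # CongestionWindow, in bytes

    def on_ack(self):
        # Additive increase, spread across a window's worth of ACKs:
        # Increment = (MSS * MSS) / CongestionWindow, i.e. roughly one
        # MSS of growth per RTT.
        self.cwnd += (MSS * MSS) / self.cwnd

    def on_timeout(self):
        # Multiplicative decrease: a timeout implies a lost packet,
        # which implies congestion, so halve the window.
        self.cwnd = max(MSS, self.cwnd / 2)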

AIMD (cont)

Trace: sawtooth behavior
[Figure: CongestionWindow (KB) versus Time (seconds): the window climbs
linearly and is halved at each timeout, producing the sawtooth.]

Slow Start

[Figure: Source and Destination; the source sends one packet, then two, then
four, as each returning ACK releases two more packets.]

Objective: determine the available capacity in the first place

Idea:
  begin with CongestionWindow = 1 packet
  double CongestionWindow each RTT (increment by 1 packet for each ACK)
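A sketch of how the doubling emerges from the per-ACK rule, using packet-based units for simplicity (an assumption):

cwnd = 1  # CongestionWindow, in packets
for rtt in range(5):
    acks = cwnd      # each packet in the window produces one ACK
    cwnd += acks     # +1 packet per ACK => the window doubles every RTT
    print(f"after RTT {rtt + 1}: cwnd = {cwnd} packets")
# prints 2, 4, 8, 16, 32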

Slow Start (cont)

Exponential growth, but slower than all at once

Used
  when first starting a connection
  when a connection goes dead waiting for a timeout

Trace
[Figure: CongestionWindow (KB) versus Time (seconds): the window grows rapidly
during slow start, then collapses and rebuilds after each loss.]

Problem: can lose up to half a CongestionWindow's worth of data

Fast Retransmit and Fast Recovery

Problem: coarse-grained TCP timeouts lead to idle periods

Fast retransmit: use duplicate ACKs to trigger retransmission

[Figure: the sender transmits packets 1 through 6; packet 3 is lost, so
packets 4, 5, and 6 each elicit another ACK 2. After the third duplicate
ACK 2 the sender retransmits packet 3, and the receiver then acknowledges
everything with ACK 6.]
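A sketch of the duplicate-ACK trigger illustrated above: retransmit after the third duplicate ACK instead of waiting for a coarse-grained timeout. The threshold of three duplicates and the helper names are conventional choices assumed here, not taken from the slide:

DUP_ACK_THRESHOLD = 3  # assumed; the usual fast-retransmit threshold

class FastRetransmit:
    def __init__(self):
        self.last_ack = 0
        self.dup_count = 0

    def on_ack(self, ack_no, retransmit):
        if ack_no == self.last_ack:
            # Same cumulative ACK again: a later packet arrived while an
            # earlier one is still missing.
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                retransmit(ack_no + 1)  # resend the first unacknowledged packet
        else:
            self.last_ack = ack_no
            self.dup_count = 0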

Results

[Figure: CongestionWindow (KB) versus Time (seconds): with fast retransmit,
most losses are repaired without waiting for a coarse-grained timeout.]

Fast recovery
  skip the slow start phase
  go directly to half the last successful CongestionWindow (ssthresh)
  (see the sketch below)

Congestion Avoidance

TCP's strategy
  control congestion once it happens
  repeatedly increase load in an effort to find the point at which congestion
  occurs, and then back off

Alternative strategy
  predict when congestion is about to happen
  reduce rate before packets start being discarded
  call this congestion avoidance, instead of congestion control

Two possibilities
  router-centric: DECbit and RED Gateways
  host-centric: TCP Vegas
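A sketch contrasting the two loss reactions described above: a timeout falls all the way back to slow start, while a loss detected by fast retransmit uses fast recovery and continues from ssthresh. Packet units and the class shape are assumptions:

class LossResponse:
    def __init__(self):
        self.cwnd = 1       # CongestionWindow, in packets
        self.ssthresh = 64  # assumed initial threshold, in packets

    def on_timeout(self):
        # Coarse-grained timeout: remember half the last window,
        # then restart with slow start.
        self.ssthresh = max(2, self.cwnd // 2)
        self.cwnd = 1

    def on_fast_retransmit(self):
        # Fast recovery: skip slow start and go directly to half the
        # last successful CongestionWindow (ssthresh).
        self.ssthresh = max(2, self.cwnd // 2)
        self.cwnd = self.ssthresh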

DECbit

Add a binary congestion bit to each packet header

Router
  monitors the average queue length over the last busy+idle cycle
  [Figure: queue length versus time, with the averaging interval spanning the
  previous cycle and the current cycle up to the current time.]
  sets the congestion bit if the average queue length > 1
  attempts to balance throughput against delay

End Hosts

Destination echoes the bit back to the source
Source records how many packets resulted in a set bit
  If less than 50% of the last window's worth had the bit set
    increase CongestionWindow by 1 packet
  If 50% or more of the last window's worth had the bit set
    decrease CongestionWindow to 0.875 times its current value
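A sketch of the end-host half of DECbit, applying the 50% rule once per window of ACKs; packet units and the function shape are assumptions:

def decbit_adjust(cwnd_packets, bits_set, window_size):
    # bits_set: how many packets of the last window came back with the
    # congestion bit set (echoed by the destination).
    if bits_set < 0.5 * window_size:
        return cwnd_packets + 1        # additive increase by 1 packet
    return cwnd_packets * 0.875        # decrease to 0.875 times the window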

Random Early Detection (RED)

Notification is implicit
  just drop the packet (TCP will time out)
  could make it explicit by marking the packet

Early random drop
  rather than wait for the queue to become full, drop each arriving packet
  with some drop probability whenever the queue length exceeds some drop level
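A deliberately simplified sketch of early random drop: once the queue length exceeds a drop level, each arriving packet is dropped with some probability. Real RED computes an average queue length and ramps the probability between two thresholds; the constants below are placeholders, not recommended values:

import random

DROP_LEVEL = 50          # queue length (packets) above which early drops begin
DROP_PROBABILITY = 0.02  # placeholder per-packet drop probability
MAX_QUEUE = 100          # hard limit of the queue

def red_enqueue(queue, packet):
    if len(queue) >= MAX_QUEUE:
        return False  # queue full: forced (tail) drop
    if len(queue) > DROP_LEVEL and random.random() < DROP_PROBABILITY:
        return False  # early random drop: implicit congestion signal
    queue.append(packet)
    return True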

TCP Vegas

Idea: the source watches for some sign that a router's queue is building up
and congestion will happen soon; e.g.,
  RTT grows
  sending rate flattens

[Figure: three traces of one connection plotted against Time (seconds): its
congestion window (KB), its sending rate (KBps), and the queue length at the
bottleneck router.]

Algorithm

Let BaseRTT be the minimum of all measured RTTs
  (commonly the RTT of the first packet)
If not overflowing the connection, then
  ExpectedRate = CongestionWindow/BaseRTT
Source calculates sending rate (ActualRate) once per RTT
Source compares ActualRate with ExpectedRate

Algorithm (cont)

Diff = ExpectedRate - ActualRate
if Diff < α
  increase CongestionWindow linearly
else if Diff > β
  decrease CongestionWindow linearly
else
  leave CongestionWindow unchanged

Parameters
  α = 1 packet
  β = 3 packets
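A sketch of one per-RTT Vegas adjustment based on the comparison above. Rates are in bytes per second; converting α and β from packets to a rate by dividing by BaseRTT is an assumption made here to keep the units consistent:

MSS = 1460                     # assumed segment size in bytes
ALPHA_PKTS, BETA_PKTS = 1, 3   # thresholds from the slide, in packets

def vegas_adjust(cwnd, base_rtt, actual_rate):
    expected_rate = cwnd / base_rtt        # ExpectedRate
    diff = expected_rate - actual_rate     # Diff
    alpha = ALPHA_PKTS * MSS / base_rtt
    beta = BETA_PKTS * MSS / base_rtt
    if diff < alpha:
        return cwnd + MSS   # too little extra data in the network: increase
    if diff > beta:
        return cwnd - MSS   # the queue is building up: decrease
    return cwnd             # leave CongestionWindow unchanged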

TCP throughput

What's the average throughput of TCP as a function of window size and RTT?
  Ignore slow start
Let W be the window size when a loss occurs.
When the window is W, the throughput is W/RTT
Just after a loss, the window drops to W/2, and the throughput to W/(2 RTT)
Average throughput: 0.75 W/RTT
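The 0.75 factor is just the average of the sawtooth, which ramps from W/2 back up to W between losses; a quick numeric check with assumed example values:

W, RTT = 60.0, 0.1          # assumed: 60 KB window, 100 ms RTT
peak = W / RTT              # throughput just before a loss
floor = (W / 2) / RTT       # throughput just after a loss
average = (peak + floor) / 2
print(average, 0.75 * W / RTT)   # both 450.0 KB per second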


Even faster retransmit

keep fine-grained timestamps for each packet
check for timeout on first duplicate ACK

TCP Futures

Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
Requires a window size of W = 83,333 in-flight segments
Throughput in terms of loss rate L:
  Throughput = (1.22 * MSS) / (RTT * sqrt(L))
To sustain 10 Gbps this requires L = 2 * 10^-10
New versions of TCP for high-speed needed!
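Both numbers on this slide can be reproduced from the figures given; a sketch of the arithmetic:

target = 10e9         # desired throughput: 10 Gbps
mss_bits = 1500 * 8   # 1500-byte segments
rtt = 0.100           # 100 ms

# Window needed to fill the pipe: bandwidth-delay product / segment size.
print(round(target * rtt / mss_bits))             # ~83333 segments

# Throughput = 1.22 * MSS / (RTT * sqrt(L)); solve for the loss rate L.
print((1.22 * mss_bits / (rtt * target)) ** 2)    # ~2e-10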

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R,
each should have an average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of
capacity R.]

Why is TCP fair?

Two competing sessions:
  additive increase gives a slope of 1 as throughput increases
  multiplicative decrease decreases throughput proportionally

[Figure: Connection 2 throughput plotted against Connection 1 throughput, both
bounded by the bottleneck router capacity R. Congestion avoidance moves the
pair along a slope-1 line (additive increase); each loss halves both windows
(multiplicative decrease), pulling the trajectory toward the equal bandwidth
share line.]
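A sketch of why the trajectory converges to the equal-share line: two flows additively increase until their sum exceeds the bottleneck capacity R, then both halve. Synchronized losses and unit-less rates are simplifying assumptions:

R = 100.0            # bottleneck capacity (arbitrary rate units)
x1, x2 = 80.0, 10.0  # very unequal starting throughputs

for _ in range(200):
    if x1 + x2 > R:                 # both flows see a loss at the bottleneck
        x1, x2 = x1 / 2, x2 / 2     # multiplicative decrease
    else:
        x1, x2 = x1 + 1, x2 + 1     # additive increase (slope-1 direction)

# The gap between the two rates is halved at every loss event, so the flows
# end up nearly equal, i.e. close to the equal bandwidth share line.
print(round(x1, 2), round(x2, 2))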

Delay modeling

Q: How long does it take to receive an object from a Web server after sending
a request?

Ignoring congestion, delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start

Fixed congestion window (1)

Notation, assumptions:
  assume one link between client and server of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)

Window size:
  first assume: fixed congestion window, W segments
  then dynamic window, modeling slow start

First case:
  WS/R > RTT + S/R: the ACK for the first segment in the window returns before
  a window's worth of data has been sent
  delay = 2 RTT + O/R

Fixed congestion window (2)

Second case:
  WS/R < RTT + S/R: wait for the ACK after sending a window's worth of data
  delay = 2 RTT + O/R + (K-1)[S/R + RTT - WS/R]
  where K = O/(WS) is the number of windows needed to cover the object
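A sketch that evaluates both cases of the fixed-window formula; the example numbers are assumptions chosen only to exercise the two branches:

from math import ceil

def fixed_window_delay(O, S, R, RTT, W):
    K = ceil(O / (W * S))             # number of windows covering the object
    stall = S / R + RTT - W * S / R   # idle time per window, if positive
    if stall <= 0:
        # First case: W S/R > RTT + S/R, ACKs return before the window drains.
        return 2 * RTT + O / R
    # Second case: the sender stalls after each of the first K-1 windows.
    return 2 * RTT + O / R + (K - 1) * stall

# Assumed example: O = 100 kbit object, S = 1 kbit segments,
# R = 1 Mbps, RTT = 100 ms.
print(fixed_window_delay(O=100e3, S=1e3, R=1e6, RTT=0.1, W=2))    # stalls: ~5.15 s
print(fixed_window_delay(O=100e3, S=1e3, R=1e6, RTT=0.1, W=200))  # no stall: 0.3 s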
