
Congestion Control

Adolfo Rodriguez
CPS 214
February 19, 2004

Router Congestion

[Figure: router fed by several 10 Mbps links, with a 1.5 Mbps outgoing link]

• What if rate of packets arriving > rate of packets departing?

Congestion Control Overview

• Challenge: how do we efficiently share network resources among billions of hosts?
• Today: TCP
  - Hosts adjust rate based on packet losses
• Alternative solutions
  - Fair queuing, RED (router support)
  - Vegas, packet pair (add functionality to TCP)
  - Rate control, credits

Congestion Control Taxonomy

• Router-centric versus host-centric
• Reservation-based versus feedback-based
• Window-based versus rate-based

Queuing Disciplines

• How to distribute buffers among users/flows
  - When a buffer overflows, which packet to drop?
• Simple solution: FIFO
  - First in, first out
  - If a packet comes along with no available buffer space, drop it
Fair Queuing

• Goals:
  - Allocate resources equally among all users/flows
  - Low delay for interactive users
  - Protection against misbehaving users
• Approach: simulate general processor sharing (from the OS world), sketched in code after the next slide
  - Bitwise round robin
  - Need to compute the number of competing flows at each instant

Scheduling Background

• How do you minimize avg. response time?
  - By being unfair: shortest job first
• Example: equal-size jobs, starting at t=0
  - Round robin → all finish at the same time
  - FIFO → minimizes avg. response time
• Unequal-size jobs
  - Round robin → bad if lots of jobs
    - Analogy: OS thrashing, spending all its time context switching
  - FIFO → small jobs delayed behind big ones
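A minimal sketch of the bitwise round-robin approximation behind fair queuing, assuming the standard finish-tag formulation (serve the packet whose bit-by-bit transmission would finish first). The class, the simplified virtual clock, and the flow names are illustrative, not from the lecture:

```python
# Each packet gets a virtual finish tag F = max(flow's last finish, now) + length;
# the scheduler always transmits the packet with the smallest tag, which is
# what bit-by-bit round robin would finish first.
import heapq

class FairQueue:
    def __init__(self):
        self.last_finish = {}    # flow id -> finish tag of its last queued packet
        self.queue = []          # heap of (finish_tag, seq, flow, length)
        self.virtual_time = 0.0  # simplified virtual clock (not true GPS time)
        self.seq = 0             # tie-breaker for equal tags

    def enqueue(self, flow, length):
        # Start where the flow's previous packet finished, or "now" if idle.
        start = max(self.last_finish.get(flow, 0.0), self.virtual_time)
        finish = start + length
        self.last_finish[flow] = finish
        heapq.heappush(self.queue, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        # Transmit the packet bitwise round robin would complete first.
        finish, _, flow, length = heapq.heappop(self.queue)
        self.virtual_time = finish
        return flow, length

fq = FairQueue()
fq.enqueue("A", 1000)  # flow A sends one large packet
fq.enqueue("B", 200)   # flow B's small packets finish first under bitwise RR
fq.enqueue("B", 200)
print([fq.dequeue()[0] for _ in range(3)])  # ['B', 'B', 'A']
```

Flow B's small packets go out ahead of flow A's large one, which is the low-delay-for-interactive-users property the goals list above.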

TCP Congestion Problems

• Original TCP sent a full window of data
• When links become loaded, queues fill up, leading to:
  - Congestion collapse: when the round-trip time exceeds the retransmit interval, this can create a stable condition in which every packet is being retransmitted many times
  - Synchronized behavior: the network oscillates between loaded and unloaded

TCP Congestion Control

• Adjust transmission rate to match network bandwidth
  - Additive increase/multiplicative decrease
    - Oscillate around bottleneck capacity
  - Slow start
    - Quickly identify bottleneck capacity
  - Fast retransmit
  - Fast recovery
• Feedback loop

Jacobson Solution

• Transport protocols should obey conservation of packets
  - Use ACKs to clock injection of new packets
• Modify the retransmission timer to adapt to variations in delay
• Infer network bandwidth from packet loss
  - Drops → congestion → reduce rate
  - No drops → no congestion → increase rate
• Limit send rate based on the minimum of the congestion window and the advertised window

Tracking the Bottleneck Bandwidth

• Throughput = window size / RTT
• Multiplicative decrease
  - Timeout → dropped packet → cut window size in half
• Additive increase
  - ACK arrives → no drop → increase window size by one packet per window (see the sketch below)
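A minimal sketch of the AIMD rule above, with the window in packets. The function, the loop driving it once per RTT, and the assumed bottleneck of 16 packets are illustrative:

```python
# Additive increase / multiplicative decrease, as described above.
def aimd_update(cwnd, acked_whole_window, loss):
    if loss:
        # Multiplicative decrease: timeout/drop -> halve the window.
        return max(1.0, cwnd / 2)
    if acked_whole_window:
        # Additive increase: one extra packet per window's worth of ACKs.
        return cwnd + 1.0
    return cwnd

# Trace the classic sawtooth against an assumed bottleneck of 16 packets.
cwnd, bottleneck, trace = 1.0, 16, []
for rtt in range(30):
    loss = cwnd > bottleneck          # pretend a drop occurs past capacity
    cwnd = aimd_update(cwnd, acked_whole_window=True, loss=loss)
    trace.append(int(cwnd))
print(trace)  # climbs to 17, halves to ~8, climbs again: the sawtooth
```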
TCP “Sawtooth”

• Oscillates around bottleneck bandwidth
  - Adjusts to changes in competing traffic

[Figure: additive increase/multiplicative decrease, window (in segs) vs. round-trip times]

Slow Start

• How do we find bottleneck bandwidth?
  - Cannot use ACKs to clock without reaching equilibrium
• Start by sending a single packet
  - Start slow to avoid overwhelming the network
• Multiplicative increase until we get a packet loss
  - Quickly find the bottleneck
  - Cut rate by half
• Shift into linear increase/multiplicative decrease (see the sketch below)
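A minimal sketch of slow start handing off to linear increase, as the slide describes. The bottleneck value and the loss model (a drop whenever the window exceeds capacity) are simplifying assumptions:

```python
# Double the window every RTT until the first loss, halve it, then grow
# additively, per the slide above.
def simulate_slow_start(bottleneck=64, rtts=20):
    cwnd, in_slow_start, trace = 1.0, True, []
    for _ in range(rtts):
        if cwnd > bottleneck:            # loss: we overshot the bottleneck
            cwnd /= 2                    # cut rate by half
            in_slow_start = False        # shift into linear increase
        elif in_slow_start:
            cwnd *= 2                    # multiplicative increase per RTT
        else:
            cwnd += 1                    # additive increase per RTT
        trace.append(int(cwnd))
    return trace

print(simulate_slow_start())
# [2, 4, 8, 16, 32, 64, 128, 64, 65, 32, ...]: the overshoot to 128 shows
# how slow start can lose up to half a window's worth of packets.
```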

Slow Start

• Quickly find the bottleneck bandwidth

[Figure: slow start, window (in segs) vs. round-trip times; the window reaches roughly 300 segments within 8 RTTs]

Slow Start Problems

• Slow start usually overshoots the bottleneck
  - Leading to many lost packets in a window
  - Can lose up to half of the window size
• Bursty traffic source
  - Will cause bursty losses for other flows
• Short flows
  - Can spend their entire time in slow start
    - Especially for large bottleneck bandwidth
• Consider repeated connections to the same server
  - E.g., for web connections

ACK Pacing in TCP

• ACKs open up slots in the congestion/advertised window
  - The bottleneck link determines the rate to send
  - An ACK indicates one packet has left the network
Problems with ACK Pacing

• ACK compression
  - Variations in queuing delays on the return path change the spacing between ACKs
  - Example: an ACK waits behind a single long packet
  - Worse with bursty cross-traffic
• What happens after a timeout?
  - Potentially, no ACKs to time packet transmissions
• Congestion avoidance
  - Slow start back to the last successful rate
  - Back to linear increase/multiplicative decrease at this point

Timeouts Dominate Performance

[Figure: window (in segs) vs. round-trip times over repeated timeouts]

Fast Retransmit

• Can we detect packet loss without a timeout?
• Duplicate ACKs imply either
  - Packet reordering (route change)
  - Packet loss
• TCP Tahoe
  - Resend if we see three dup ACKs
  - Eliminates timeout delay

[Figure: sender/receiver timeline for packets 1-6; packet 2 is lost, so each later packet triggers a duplicate ACK for 2]

Fast Retransmit Caveats

• Requires in-order packet delivery
  - Dynamically adjust the number of dup ACKs needed for retransmit?
• Does not work with small windows
  - Why not? E.g., modems
• Does not work if packets are lost in a burst
  - Why not? E.g., at the peak of slow start
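A minimal sketch of the triple-duplicate-ACK trigger described above; the sequence numbering and the resend callback are illustrative placeholders:

```python
# Count duplicate ACKs and retransmit after the third, without waiting
# for a timeout.
DUP_ACK_THRESHOLD = 3

class FastRetransmit:
    def __init__(self, resend):
        self.resend = resend        # callback: retransmit segment at seq
        self.last_ack = -1
        self.dup_count = 0

    def on_ack(self, ack):
        if ack == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                # Three duplicates: assume the next segment was lost.
                self.resend(ack)
        else:                       # new data ACKed: reset the counter
            self.last_ack = ack
            self.dup_count = 0

fr = FastRetransmit(resend=lambda seq: print("fast retransmit seq", seq))
for ack in [2, 2, 2, 2]:            # packet 2 lost: dup ACKs arrive
    fr.on_ack(ack)                  # fires after the third duplicate
```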

Fast Retransmit

[Figure: slow start + congestion avoidance + fast retransmit, window (in segs) vs. round-trip times]

Fast Recovery

• Use duplicate ACKs to maintain ACK pacing
  - Dup ACK → a packet left the network
  - Every other ACK → send a packet
• Fast recovery allows TCP to fall to half the previous bottleneck bandwidth
  - Rather than all the way back to 1 packet/reinitiating slow start
  - Slow start only at the beginning/on timeout
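A minimal sketch of the fast-recovery window adjustment above: halve on a triple dup ACK, full slow-start reset only on timeout. This omits the window-inflation bookkeeping real stacks perform:

```python
# Halving instead of collapsing to one segment keeps the pipe partly full.
class FastRecovery:
    def __init__(self, cwnd=16.0):
        self.cwnd = cwnd
        self.ssthresh = cwnd

    def on_triple_dup_ack(self):
        # Fall back to half the previous rate, not to cwnd = 1.
        self.ssthresh = max(self.cwnd / 2, 2.0)
        self.cwnd = self.ssthresh

    def on_timeout(self):
        # Timeouts still force a full slow-start restart.
        self.ssthresh = max(self.cwnd / 2, 2.0)
        self.cwnd = 1.0

tcp = FastRecovery()
tcp.on_triple_dup_ack(); print(tcp.cwnd)  # 8.0: half, keep pipelining
tcp.on_timeout();        print(tcp.cwnd)  # 1.0: slow start from scratch
```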
Fast Recovery

[Figure: slow start + congestion avoidance + fast retransmit + fast recovery, window (in segs) vs. round-trip times]

Delayed ACKs

• Problem:
  - In request/response programs, you send separate ACK and data packets for each transaction
• Goal: piggyback the ACK on a subsequent data packet
• Solution (sketched below):
  - Do not ACK data immediately
  - Wait 200 ms (must be less than 500 ms)
  - Must ACK every other packet
  - Must not delay duplicate ACKs
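A minimal sketch of the delayed-ACK rules in the slide; the timer and ACK-sending callbacks are stand-ins, and a real implementation would hang this off the TCP receive path:

```python
ACK_DELAY_MS = 200  # must stay under the 500 ms limit

class DelayedAck:
    def __init__(self, send_ack, start_timer):
        self.send_ack = send_ack        # transmit a standalone ACK
        self.start_timer = start_timer  # schedule a callback after ms
        self.unacked_segments = 0
        self.next_expected = 0

    def on_data(self, seq):
        if seq != self.next_expected:
            self.send_ack(self.next_expected)   # dup ACK: never delayed
            return
        self.next_expected += 1
        self.unacked_segments += 1
        if self.unacked_segments >= 2:
            self.flush()                        # must ACK every other packet
        else:
            self.start_timer(ACK_DELAY_MS, self.flush)

    def on_send_data(self):
        self.flush()  # piggyback the pending ACK on outgoing data

    def flush(self):
        if self.unacked_segments:
            self.send_ack(self.next_expected)
            self.unacked_segments = 0

da = DelayedAck(send_ack=lambda n: print("ACK", n),
                start_timer=lambda ms, cb: None)  # timer stubbed out
da.on_data(0)   # first segment: ACK is delayed
da.on_data(1)   # second segment: "ACK 2" goes out immediately
```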

What if Two TCP Connections Share Link?

• Reach equilibrium independent of initial bandwidth (assuming equal RTTs)

[Figure: two TCP windows (in segs) vs. round-trip times, converging to equal shares]

What if TCP and UDP Share Link?

• Independent of initial rates, UDP will get priority! TCP will take what’s left.

[Figure: UDP and TCP windows (in segs) vs. round-trip times; UDP’s rate stays constant while TCP backs off]

What if Two Different TCP Implementations Share Link?

• Problem: many different TCP implementations
• If you cut back more slowly after drops → grab a bigger share
• If you add more quickly after ACKs → grab a bigger share
• Incentive to cause congestion collapse
  - Many TCP “accelerators”
  - Easy to improve perf at the expense of the network
• Solutions?
  - Per-flow fair queuing at the router
TCP Congestion Control Summary

• Slow start
• Adaptive retransmission
  - Account for average and variance (sketched below)
• Fast retransmission
  - Triple duplicate ACKs
• Fast recovery
  - Use ACKs in the pipeline to avoid shrinking the congestion window to one
  - Cuts out going back to slow start when detecting congestion with fast retransmission
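The summary’s “account for average and variance” refers to Jacobson’s adaptive retransmission timer; a minimal sketch assuming the commonly published gains (1/8 and 1/4) and RTO = srtt + 4 × rttvar:

```python
# EWMA of the RTT mean and mean deviation; a delay spike inflates the RTO.
def make_rto_estimator(alpha=1/8, beta=1/4):
    srtt, rttvar = None, None

    def update(sample_ms):
        nonlocal srtt, rttvar
        if srtt is None:                          # first sample seeds both
            srtt, rttvar = sample_ms, sample_ms / 2
        else:
            err = sample_ms - srtt
            srtt += alpha * err                   # smoothed mean RTT
            rttvar += beta * (abs(err) - rttvar)  # smoothed deviation
        return srtt + 4 * rttvar                  # retransmission timeout

    return update

rto = make_rto_estimator()
for s in [100, 120, 80, 300]:    # RTT samples in ms; the spike raises RTO
    print(round(rto(s)))
```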

TCP Vegas

Overview/Goals

• Goals:
  - Increase useful throughput of TCP
    - Vegas increases throughput by 37-71%
  - Decrease retransmissions
    - Vegas retransmits 1/5 to 1/2 the data of Reno
• Note: it is easy to increase throughput at the expense of other connections
• TCP Reno controls congestion by causing it
  - Vegas aims to avoid congestion using only host-based measurements

Implementation

• Retrofitted the x-kernel with BSD implementations of TCP Reno and Vegas
  - Ran both simulations and real wide-area experiments
• Simulated cross traffic (e.g., FTP/NNTP/Telnet) using tcplib

Vegas: New Retransmission Mechanism

• Reno uses coarse-grained timeouts and triple dup ACKs
  - If bursty losses, or a small window → no triple dup ACK
• Vegas reads the system clock for every packet sent
  - On ACK arrival, Vegas calculates RTT on a per-packet basis
• Vegas retransmits in two situations (see the sketch below):
  - On a duplicate ACK, check whether the elapsed time for the “missing” packet exceeds the RTT estimate
    - If so, retransmit without waiting for a triple dup ACK
  - On the first or second ACK after a retransmission, also check whether any additional packets have exceeded the RTT
• Why not just retransmit on a single/double dup ACK?

Congestion Avoidance Mechanism

• Reno creates losses to determine available bandwidth
  - Each connection can create losses for other connections
  - No problem if advertised window < congestion window
• Use understanding of network behavior as it approaches congestion (not once it gets there)
  - Increased queue size → increased per-packet RTT
  - Decreased throughput → more congestion
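A minimal sketch of the fine-grained check above: timestamp each transmission, and on the first duplicate ACK retransmit if the missing segment is already older than the RTT estimate. Names and the fixed RTT estimate are illustrative:

```python
import time

class VegasRetransmit:
    def __init__(self, resend, rtt_estimate_s=0.1):
        self.resend = resend
        self.rtt_estimate = rtt_estimate_s
        self.sent_at = {}            # seq -> system-clock send time

    def on_send(self, seq):
        self.sent_at[seq] = time.monotonic()   # read the clock per packet

    def on_dup_ack(self, missing_seq):
        elapsed = time.monotonic() - self.sent_at[missing_seq]
        if elapsed > self.rtt_estimate:
            # Outstanding longer than one RTT: almost certainly lost,
            # so resend now instead of waiting for a third duplicate.
            self.resend(missing_seq)
            self.sent_at[missing_seq] = time.monotonic()
```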
TCP Vegas Congestion Avoidance

• Compare expected to actual throughput
  - Expected = window size / base RTT
    - How to measure base RTT?
  - Actual = ACKs / round-trip time
    - Pick a distinguished packet once every RTT for the calculation
• If actual << expected, queues are increasing → decrease rate before a packet drop
• If actual ~= expected, queues are decreasing → increase rate
• What if base RTT changes (route changes)?

TCP Vegas Congestion Avoidance

• Define two parameters α < β
• Let Diff = Expected - Actual
  - Always a positive value
• If Diff < α, linearly increase the congestion window (see the sketch below)
• If Diff > β, linearly decrease the congestion window
• If α < Diff < β, do nothing
• Why can we get away with linear decrease instead of multiplicative decrease?
  - We are avoiding congestion, not reacting to it
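A minimal sketch of the α/β rule, with Diff converted from a throughput difference into buffers’ worth of queued data (multiplying by the base RTT). Approximating Actual as window/measured RTT is an assumption for illustration:

```python
# Vegas congestion avoidance: compare expected and actual rates, and
# adjust the window linearly based on the alpha/beta thresholds above.
def vegas_update(cwnd, base_rtt, measured_rtt, alpha=1.0, beta=3.0):
    expected = cwnd / base_rtt              # rate with no queuing
    actual = cwnd / measured_rtt            # rate achieved this RTT
    diff = (expected - actual) * base_rtt   # extra buffers occupied
    if diff < alpha:
        return cwnd + 1                     # room in network: increase
    if diff > beta:
        return cwnd - 1                     # queues building: decrease
    return cwnd                             # between alpha and beta: hold

print(vegas_update(16, base_rtt=0.100, measured_rtt=0.101))  # 17: increase
print(vegas_update(16, base_rtt=0.100, measured_rtt=0.130))  # 15: decrease
```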

TCP Vegas Congestion Avoidance

• α and β are measured in terms of throughput (e.g., KB/s); however, they really represent extra buffers in the network
• Intuitively, we want each connection to occupy one extra buffer in the network
  - If extra capacity becomes available, Vegas flows will capture it (since they sit in one buffer in steady state)
  - In times of congestion, Vegas flows occupy too many buffers, so Vegas backs off
• Typical values are α = 1 and β = 3
  - Goal: have Vegas flows occupy between 1 and 3 router buffers

TCP Vegas Slow Start

• Reno doubles its congestion window every RTT in slow start
  - Can overshoot capacity, causing many losses
• Vegas doubles its congestion window every other RTT (sketched below)
  - Only double the window if the actual rate is within the equivalent of one router buffer of the expected rate
    - Note: 1 KB buffers with a 100 ms RTT equal 10 KB/s
• Vegas* uses a packet-pair mechanism to estimate available bandwidth
  - Slow start to the available bandwidth, then back to linear increase
  - Why not go straight to bottleneck bandwidth?
  - Vegas* did not result in significant perf/loss improvements
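A minimal sketch of Vegas’s gated slow start: double only every other RTT, and stop doubling once the actual rate falls more than one buffer’s worth (γ) below the expected rate. The function signature and γ constant are illustrative:

```python
# Returns the new window and whether the flow stays in slow start.
def vegas_slow_start(cwnd, rtt_count, base_rtt, measured_rtt, gamma=1.0):
    expected = cwnd / base_rtt
    actual = cwnd / measured_rtt
    extra_buffers = (expected - actual) * base_rtt
    if extra_buffers >= gamma:
        return cwnd / 2, False      # queue building: leave slow start
    if rtt_count % 2 == 1:
        return cwnd * 2, True       # double only every other RTT
    return cwnd, True               # in-between RTT: hold and measure

print(vegas_slow_start(8, 1, 0.100, 0.101))   # (16, True): still doubling
print(vegas_slow_start(64, 3, 0.100, 0.125))  # (32.0, False): exit slow start
```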

Vegas Discussion

• Does not involve a modification to the TCP spec
  - Can be deployed incrementally
• Does not steal bandwidth from other implementations
• Uses additional information available at hosts to better estimate congestion
  - Congestion avoidance vs. control
• Additional processor overhead
  - Increases throughput/reduces wasted transmissions
• Should congestion control be in hosts/routers/both?
