
Announcement

Project 2 finally ready on Tlab

Homework 2 due next Monday night


Will be graded and sent back before Tu. class
Midterm next Th. in class
Review session next time
Closed book
One 8.5 by 11 sheet of paper permitted
Recitation tomorrow on project 2
Review of Previous Lecture
Reliable transfer protocols
Pipelined protocols
Selective repeat
Connection-oriented transport: TCP
Overview and segment structure
Reliable data transfer

Some slides are courtesy of J. Kurose and K. Ross


TCP: retransmission scenarios
[Figure: Host A / Host B timelines for two scenarios]
Lost ACK scenario: the ACK is lost (X); the Seq=92 timer expires and the segment is retransmitted; SendBase = 100
Premature timeout scenario: the Seq=92 timer expires before the ACK arrives, triggering an unneeded retransmission; SendBase = 100, then SendBase = 120
Outline
Flow control
Connection management
Congestion control
TCP Flow Control
flow control: sender won't overflow receiver's buffer by transmitting too much, too fast
receive side of TCP connection has a receive buffer
app process may be slow at reading from buffer
speed-matching service: matching the send rate to the receiving app's drain rate
TCP Flow control: how it works
(Suppose TCP receiver discards out-of-order segments)
spare room in buffer
  = RcvWindow
  = RcvBuffer - [LastByteRcvd - LastByteRead]
Rcvr advertises spare room by including value of RcvWindow in segments
Sender limits unACKed data to RcvWindow
  guarantees receive buffer doesn't overflow
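The bookkeeping above can be sketched in a few lines. This is a toy model, not a real TCP stack; the variable names follow the slide, and the byte counts are made-up example values.

```python
# Receiver-side spare room and the sender-side check, per the slide's
# formulas. All quantities are byte counts; values are illustrative.
def rcv_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    # spare room = RcvBuffer - [LastByteRcvd - LastByteRead]
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

def sender_may_send(last_byte_sent, last_byte_acked, rcv_win):
    # sender keeps unACKed data <= RcvWindow, so the buffer can't overflow
    return (last_byte_sent - last_byte_acked) <= rcv_win

win = rcv_window(rcv_buffer=4096, last_byte_rcvd=3000, last_byte_read=1000)
print(win)                               # 2096 bytes of spare room
print(sender_may_send(5000, 3500, win))  # 1500 unACKed bytes <= 2096 -> True
```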
TCP Connection Management
Recall: TCP sender, receiver establish connection before exchanging data segments
  initialize TCP variables: seq. #s, buffers, flow control info (e.g. RcvWindow)
  client: connection initiator
  server: contacted by client
Three way handshake:
Step 1: client host sends TCP SYN segment to server
  specifies initial seq #
  no data
Step 2: server host receives SYN, replies with SYNACK segment
  server allocates buffers
  specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment, which may contain data
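From an application's point of view, the handshake happens inside the kernel when a connection is opened. A minimal loopback sketch (port 0 asks the OS for any free port; the payload is illustrative):

```python
import socket
import threading

# connect() triggers the kernel's three-way handshake (SYN, SYNACK, ACK);
# accept() returns once the handshake completes.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()     # handshake done: connection established
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve)
t.start()
cli = socket.create_connection(("127.0.0.1", port))  # SYN -> SYNACK -> ACK
data = cli.recv(5)
t.join()
cli.close()
srv.close()
print(data)   # b'hello'
```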
TCP Connection Management: Closing
Step 1: client end system sends TCP FIN control segment to server
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.
Step 3: client receives FIN, replies with ACK.
  Enters timed wait - will respond with ACK to received FINs
Step 4: server receives ACK. Connection closed.
Note: with small modification, can handle simultaneous FINs
[Figure: client/server timeline - client closing, server closing, client timed wait, both closed]
TCP Connection Management (cont)

[Figure: TCP server lifecycle and TCP client lifecycle state diagrams]
Outline
Flow control
Connection management
Congestion control
Principles of Congestion Control
Congestion:
informally: too many sources sending too much data
too fast for network to handle
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
Reasons:
  limited bandwidth, queues
  unneeded retransmissions of data and ACKs
Approaches towards congestion control
Two broad approaches towards congestion control:

End-end congestion control:
  no explicit feedback from network
  congestion inferred from end-system observed loss, delay
  approach taken by TCP
Network-assisted congestion control:
  routers provide feedback to end systems
  single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  explicit rate sender should send at
TCP Congestion Control
end-end control (no network assistance)
sender limits transmission:
  LastByteSent - LastByteAcked <= CongWin
Roughly,
  rate = CongWin / RTT  Bytes/sec
CongWin is dynamic, function of perceived network congestion
How does sender perceive congestion?
  loss event = timeout or 3 duplicate ACKs
  TCP sender reduces rate (CongWin) after loss event
three mechanisms:
  AIMD
  slow start
  conservative after timeout events
TCP AIMD
multiplicative decrease: cut CongWin in half after loss event
additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing
[Figure: congestion window of a long-lived TCP connection - sawtooth oscillating between 8, 16, and 24 Kbytes over time]
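The sawtooth can be reproduced with a toy AIMD loop. This is a sketch, not real TCP: the window grows by 1 MSS per RTT and is halved on a loss, and the loss rounds are made-up example values.

```python
# Toy AIMD: additive increase each RTT, multiplicative decrease on loss.
MSS = 1                      # work in units of MSS
congwin = 8
loss_rounds = {8, 16}        # hypothetical RTT rounds where loss occurs

trace = []
for rtt in range(1, 21):
    if rtt in loss_rounds:
        congwin = max(congwin // 2, 1)   # multiplicative decrease: halve
    else:
        congwin += MSS                   # additive increase: +1 MSS/RTT
    trace.append(congwin)

print(trace)   # sawtooth: ramps linearly, halves at rounds 8 and 16
```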
TCP Slow Start
When connection begins, CongWin = 1 MSS
  Example: MSS = 500 bytes & RTT = 200 msec
  initial rate = 20 kbps
available bandwidth may be >> MSS/RTT
  desirable to quickly ramp up to respectable rate
When connection begins, increase rate exponentially fast until first loss event
TCP Slow Start (more)
When connection begins, increase rate exponentially until first loss event:
  double CongWin every RTT
  done by incrementing CongWin for every ACK received
Summary: initial rate is slow but ramps up exponentially fast
[Figure: Host A / Host B timeline - one segment sent in the first RTT, two in the second, four in the third]
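The per-ACK increment is what produces the doubling. A minimal sketch (real TCP counts bytes and handles delayed ACKs; the MSS value is the slide's example):

```python
# Adding 1 MSS for every ACK doubles the window each RTT.
MSS = 500                 # bytes
congwin = 1 * MSS         # slow start begins at 1 MSS
windows = []
for _ in range(4):        # four RTT rounds
    acks = congwin // MSS          # one ACK per segment in the window
    for _ in range(acks):
        congwin += MSS             # +1 MSS per ACK received
    windows.append(congwin // MSS)
print(windows)   # [2, 4, 8, 16] - doubles every RTT
```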
Refinement (more)
Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.
Implementation:
  variable Threshold
  at loss event, Threshold is set to 1/2 of CongWin just before loss event
[Figure: congestion window size (segments) vs. transmission round (1-15), with the Threshold line marked]
Refinement
After 3 dup ACKs:
  CongWin is cut in half
  window then grows linearly
But after timeout event:
  enter slow start
  CongWin instead set to 1 MSS;
  window then grows exponentially to a threshold, then grows linearly
Philosophy:
  3 dup ACKs indicates network capable of delivering some segments
  timeout before 3 dup ACKs is more alarming
Summary: TCP Congestion Control

When CongWin is below Threshold, sender in slow-start phase, window grows exponentially.
When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.
When a triple duplicate ACK occurs, Threshold set to CongWin/2 and CongWin set to Threshold.
When timeout occurs, Threshold set to CongWin/2 and CongWin is set to 1 MSS.
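The four rules above fit in a small state machine. This is a simplified sketch in units of MSS (the initial threshold and the event sequence are made-up example values, not a real trace):

```python
# Simplified TCP congestion control per the summary rules above.
class TcpSender:
    def __init__(self):
        self.congwin = 1       # slow start begins at 1 MSS
        self.threshold = 64

    def on_rtt_of_acks(self):
        if self.congwin < self.threshold:
            self.congwin *= 2          # slow start: exponential growth
        else:
            self.congwin += 1          # congestion avoidance: linear growth

    def on_triple_dup_ack(self):
        self.threshold = self.congwin // 2
        self.congwin = self.threshold  # halve, skip slow start

    def on_timeout(self):
        self.threshold = self.congwin // 2
        self.congwin = 1               # back to slow start

s = TcpSender()
for _ in range(7):
    s.on_rtt_of_acks()
print(s.congwin, s.threshold)   # 65 64: doubled 1->64, then grew linearly
```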
TCP Fairness
Fairness goal: if K TCP sessions share same
bottleneck link of bandwidth R, each should have
average rate of R/K

[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
Why is TCP fair?
Two competing sessions:
  additive increase gives slope of 1, as throughput increases
  multiplicative decrease decreases throughput proportionally
[Figure: Connection 1 throughput vs. Connection 2 throughput (both 0 to R); the trajectory alternates between congestion avoidance (additive increase) and loss (decrease window by factor of 2), converging toward the equal bandwidth share line]
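The convergence argument can be checked numerically. A toy sketch, not a real trace: two flows start unequal; whenever their sum exceeds capacity R both halve (synchronized loss), otherwise both add 1. Halving shrinks the gap between the flows while adding leaves it unchanged, so the gap decays toward zero.

```python
# Two AIMD flows sharing a link of capacity R (illustrative numbers).
R = 100
x1, x2 = 80, 10          # unequal starting throughputs

for _ in range(200):
    if x1 + x2 > R:      # link overloaded -> both see loss
        x1, x2 = x1 / 2, x2 / 2   # multiplicative decrease halves the gap
    else:
        x1, x2 = x1 + 1, x2 + 1   # additive increase keeps the gap fixed

print(abs(x1 - x2))      # gap has shrunk from 70 toward 0
```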
Fairness (more)
Fairness and UDP:
  multimedia apps often do not use TCP
    do not want rate throttled by congestion control
  instead use UDP: pump audio/video at constant rate, tolerate packet loss
  research area: TCP friendly
Fairness and parallel TCP connections:
  nothing prevents app from opening parallel connections between 2 hosts
  Web browsers do this
  Example: link of rate R supporting 9 connections;
    new app asks for 1 TCP, gets rate R/10
    new app asks for 11 TCPs, gets R/2 !
Delay modeling
Q: How long does it take to receive an object from a Web server after sending a request?
Ignoring congestion, delay is influenced by:
  TCP connection establishment
  data transmission delay
  slow start
Notation, assumptions:
  assume one link between client and server of rate R
  S: MSS (bits)
  O: object size (bits)
  no retransmissions (no loss, no corruption)
Window size:
  first assume: fixed congestion window, W segments
  then dynamic window, modeling slow start
Fixed congestion window (1)

First case:
  WS/R > RTT + S/R: ACK for first segment in window returns before window's worth of data sent
  delay = 2RTT + O/R
Fixed congestion window (2)

Second case:
  WS/R < RTT + S/R: wait for ACK after sending window's worth of data
  delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R]
  (K = number of windows that cover the object)
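Both cases can be wrapped in one function. A sketch of the formulas above; all quantities in consistent units (bits, bits/sec, seconds), and the example values are made up.

```python
from math import ceil

def fixed_window_delay(O, S, R, W, RTT):
    K = ceil(O / (W * S))            # number of windows covering the object
    if W * S / R > RTT + S / R:      # first case: pipe stays full
        return 2 * RTT + O / R
    # second case: sender stalls (K-1) times waiting for ACKs
    return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

# Example: O = 15 segments of S = 1000 bits, R = 10000 bps, W = 4, RTT = 1 s
print(fixed_window_delay(O=15_000, S=1000, R=10_000, W=4, RTT=1.0))   # 5.6
```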
TCP Delay Modeling: Slow Start (1)
Now suppose window grows according to slow start

Will show that the delay for one object is:

  Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

where P is the number of times TCP idles at server:

  P = min {Q, K - 1}

- where Q is the number of times the server would idle if the object were of infinite size.
- and K is the number of windows that cover the object.
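The definitions of K, Q, and P translate directly into code. A sketch of the latency formula above (window k carries 2^(k-1) segments); the units are bits, bits/sec, and seconds, and the example values are chosen so that S/R = 1 and RTT = 2, matching the O/S = 15 example on the next slide.

```python
def slow_start_latency(O, S, R, RTT):
    # K: number of windows that cover the object
    K, covered = 0, 0
    while covered < O:
        K += 1
        covered += (2 ** (K - 1)) * S
    # Q: idles for an infinite object; server idles after window k
    # while S/R + RTT - 2^(k-1) * S/R > 0
    Q = 0
    while S / R + RTT - (2 ** Q) * S / R > 0:
        Q += 1
    P = min(Q, K - 1)
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R

# O/S = 15 segments, S/R = 1, RTT = 2 -> K = 4, Q = 2, P = 2
print(slow_start_latency(O=15, S=1, R=1, RTT=2))   # 22.0
```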
TCP Delay Modeling: Slow Start (2)
Delay components:
  2 RTT for connection estab and request
  O/R to transmit object
  time server idles due to slow start
Server idles: P = min{K-1, Q} times
Example:
  O/S = 15 segments
  K = 4 windows
  Q = 2
  P = min{K-1, Q} = 2
  Server idles P = 2 times
[Figure: client/server timeline - initiate TCP connection, request object; first window = S/R, second window = 2S/R, third window = 4S/R, fourth window = 8S/R; object delivered; time measured at client and at server]
HTTP Modeling
Assume Web page consists of:
1 base HTML page (of size O bits)
M images (each of size O bits)
Non-persistent HTTP:
M+1 TCP connections in series
Response time = (M+1)O/R + (M+1)2RTT + sum of idle times
Persistent HTTP:
2 RTT to request and receive base HTML file
1 RTT to request and receive M images
Response time = (M+1)O/R + 3RTT + sum of idle times
Non-persistent HTTP with X parallel connections
Suppose M/X is an integer.
1 TCP connection for base file
M/X sets of parallel connections for images.
Response time = (M+1)O/R + (M/X + 1)2RTT + sum of idle times
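The three response-time formulas can be compared directly, ignoring the idle-time terms. A sketch using the slide's example parameters (RTT = 100 msec, O = 5 Kbytes, M = 10, X = 5; taking 1 Kbyte = 8000 bits):

```python
# Response times without the slow-start idle terms (bits, bits/sec, sec).
def nonpersistent(O, R, RTT, M):
    return (M + 1) * O / R + (M + 1) * 2 * RTT

def persistent(O, R, RTT, M):
    return (M + 1) * O / R + 3 * RTT

def parallel_nonpersistent(O, R, RTT, M, X):
    return (M + 1) * O / R + (M / X + 1) * 2 * RTT

O, RTT, M, X = 5 * 8000, 0.1, 10, 5     # 5 Kbytes = 40000 bits
for R in (28_000, 100_000, 1_000_000, 10_000_000):
    print(R, nonpersistent(O, R, RTT, M),
          persistent(O, R, RTT, M),
          parallel_nonpersistent(O, R, RTT, M, X))
```

At high rates the (M+1)O/R term vanishes and the RTT terms dominate, which is what the plots on the next slides show.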
HTTP Response time (in seconds)
RTT = 100 msec, O = 5 Kbytes, M=10 and X=5
[Figure: response time (0-20 s) vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP]
For low bandwidth, connection & response time dominated by
transmission time.
Persistent connections only give minor improvement over parallel
connections for small RTT.
HTTP Response time (in seconds)
RTT =1 sec, O = 5 Kbytes, M=10 and X=5
[Figure: response time (0-70 s) vs. link rate (28 Kbps, 100 Kbps, 1 Mbps, 10 Mbps) for non-persistent, persistent, and parallel non-persistent HTTP]
For larger RTT, response time dominated by TCP establishment
& slow start delays. Persistent connections now give important
improvement: particularly in high delay-bandwidth networks.
