Costin Raiciu (University Politehnica of Bucharest), Christoph Paasch
TCP
Used by most applications, TCP offers byte-oriented reliable delivery and adjusts its load to network conditions.
But there is a mismatch between multipath networks and single-path transport, and it creates problems:
- offload to WiFi on mobile devices
- collisions in datacenters
Multipath TCP
MPTCP Connection Management

The first subflow is set up by a regular TCP handshake carrying new options:
- Host A sends SYN + MP_CAPABLE (X)
- Host B replies SYN/ACK + MP_CAPABLE (Y)

Subflow 1 now has its own state: CWND, Snd.SEQNO, Rcv.SEQNO.

Additional subflows join the existing connection with another handshake on the new path:
- SYN + JOIN (Y)
- SYN/ACK + JOIN (X)

Subflow 2 gets its own CWND, Snd.SEQNO and Rcv.SEQNO as well.
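The X and Y exchanged above are keys, and the token that a JOIN presents to identify the existing connection is derived from the peer's key. A minimal sketch of that derivation (RFC 6824 defines the token as the most-significant 32 bits of the SHA-1 of the key; the example key below is a made-up placeholder):

```python
import hashlib

def mptcp_token(key: bytes) -> int:
    # RFC 6824: token = most-significant 32 bits of SHA-1(key).
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big")

# A JOIN on a new path carries the token derived from the peer's key,
# letting the server map the new subflow onto the existing connection.
key_b = bytes(8)          # placeholder 64-bit key; real keys are random
token = mptcp_token(key_b)
```

Because the token is a deterministic function of the key, both ends compute the same value without extra signaling.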
TCP header (bits 0-15 | bits 16-31):
  Source Port | Destination Port
  Sequence Number
  Acknowledgment Number
  Header Length | Reserved | Code bits | Receive Window
  Checksum | Urgent Pointer
  Options (0-40 bytes)
  Data
The fixed header is 20 bytes; options add 0-40 bytes.
Sequence Numbers
Packets go over multiple paths, so we need sequence numbers to put them back in order, and to infer loss on a single path.
Options:
- one sequence space shared across all paths, or
- one sequence space per path, plus an extra one to put data back in the correct order at the receiver.
Sequence Numbers
One sequence space per path is preferable.
Loss inference is more reliable.
Some firewalls/proxies expect to see all the
sequence numbers on a path.
MPTCP keeps the 20-byte TCP header unchanged; the data sequence number and Data ACK are carried in the Options field (0-40 bytes), ahead of the data.
MPTCP Operation

Every segment carries two sequence numbers: the subflow sequence number (SEQ) in the TCP header, and the data sequence number (DSEQ) in the options.

- Subflow 1 sends DATA with SEQ 1000, DSEQ 10000.
- Subflow 2 sends DATA with SEQ 5000, DSEQ 11000.
- The receiver acknowledges subflow 1 at the subflow level (ACK 2000) and at the connection level (Data ACK 11000).
- Data lost on one subflow can be resent on another: the data with DSEQ 11000 is retransmitted on subflow 1 with a fresh subflow sequence number, SEQ 2000.

Each subflow keeps its own CWND, Snd.SEQNO and Rcv.SEQNO.
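The receiver uses the DSEQ values above to put data arriving on different subflows back into connection order. A toy sketch of that reassembly (my illustration, not the kernel code), reusing the slide's numbers:

```python
def reassemble(segments, first_dseq):
    """Deliver payloads in data-sequence order; segments may arrive
    out of order across subflows. segments: iterable of (dseq, bytes)."""
    pending = {dseq: data for dseq, data in segments}
    out, nxt = bytearray(), first_dseq
    while nxt in pending:          # deliver only contiguous data
        data = pending.pop(nxt)
        out += data
        nxt += len(data)
    return bytes(out)

# Subflow 2's segment (DSEQ 11000) arrives before subflow 1's (DSEQ 10000):
stream = reassemble([(11000, b"b" * 1000), (10000, b"a" * 1000)], 10000)
```

A segment whose DSEQ leaves a gap stays buffered until the gap is filled, which is exactly why a retransmission on a different subflow can unblock delivery.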
Multipath TCP Congestion Control

Resource pooling: just as two circuits can act as one link, two separate links can be made to behave like a single pool of links.
Design goal 1:
To be fair, Multipath TCP should take as much capacity as TCP at a bottleneck link, no matter how many paths it is using.
Design goal 2:
Multipath TCP should use efficient paths.
Example: three flows sharing 12 Mb/s links, each flow able to split its traffic over two paths.
- If each flow splits its traffic 1:1, each gets 8 Mb/s.
- If each flow splits its traffic 2:1, each gets 9 Mb/s.
- If each flow splits its traffic 4:1, each gets 10 Mb/s.
- If each flow keeps all its traffic on its best path, each gets 12 Mb/s.
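The progression 8, 9, 10, 12 Mb/s is consistent with a simple model in which each flow's second path crosses two of the 12 Mb/s links (an assumption of mine; the slides only give the resulting rates): with a split ratio r:1, each link then carries r/(r+1) of one flow plus 1/(r+1) of two others.

```python
def per_flow_rate(r, capacity=12.0):
    # Link load: T*r/(r+1) (direct share) + 2*T/(r+1) (two indirect
    # shares) = capacity, which solves to T = capacity*(r+1)/(r+2).
    return capacity * (r + 1) / (r + 2)
```

As r grows the flows converge onto the efficient single-link paths and the per-flow rate approaches the full 12 Mb/s.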
Design goal 3:
Multipath TCP should perform at least as well as TCP on the best of its paths.
Example: a 3G path with low loss but high RTT.
[Figures: performance with an algorithm meeting Goal 2 alone vs. one meeting Goals 1 & 3.]
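The controller that balances these goals in MPTCP is the linked-increase algorithm of RFC 6356 (the slides do not show it; this is a simplified sketch of the published algorithm, with windows in packets and RTTs in arbitrary units):

```python
def lia_alpha(subflows):
    """subflows: list of (cwnd, rtt). RFC 6356 aggressiveness factor."""
    total = sum(w for w, _ in subflows)
    best = max(w / (rtt * rtt) for w, rtt in subflows)
    return total * best / sum(w / rtt for w, rtt in subflows) ** 2

def increase_per_ack(i, subflows):
    # Per ACK on subflow i: the minimum of the coupled increase and what
    # plain TCP would do, so MPTCP never outcompetes TCP at a bottleneck.
    w_i = subflows[i][0]
    total = sum(w for w, _ in subflows)
    return min(lia_alpha(subflows) / total, 1.0 / w_i)
```

With a single subflow, alpha is 1 and the increase reduces to TCP's 1/w, which is what makes goal 1 hold in the degenerate case.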
Applications of Multipath TCP

Pooling two 100 Mb/s access links:
- Start: 2 TCPs @ 50 Mb/s on one link, 4 TCPs @ 25 Mb/s on the other.
- Add 1 MPTCP over both links: 2 TCPs @ 33 Mb/s, 1 MPTCP @ 33 Mb/s, 4 TCPs @ 25 Mb/s.
- 2 MPTCPs: 2 TCPs @ 25 Mb/s, 2 MPTCPs @ 25 Mb/s, 4 TCPs @ 25 Mb/s.
- 3 MPTCPs: 2 TCPs @ 22 Mb/s, 3 MPTCPs @ 22 Mb/s, 4 TCPs @ 22 Mb/s.
- 4 MPTCPs: 2 TCPs @ 20 Mb/s, 4 MPTCPs @ 20 Mb/s, 4 TCPs @ 20 Mb/s.
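The rates above match a toy equilibrium model (my reconstruction, not from the slides): the MPTCP flows stay on the lighter link while they still get at least as much as the other link's TCPs; once they cannot, the two links pool into a single 200 Mb/s resource shared equally.

```python
def equilibrium(k, t1=2, t2=4, cap=100.0):
    """Rates with t1 TCPs on link 1, t2 TCPs on link 2, k MPTCPs.
    Returns (rate of link-1 flows and MPTCPs, rate of link-2 TCPs)."""
    if cap / (t1 + k) >= cap / t2:
        # The MPTCPs fit on link 1 without dropping below link 2's rate.
        return cap / (t1 + k), cap / t2
    rate = 2 * cap / (t1 + t2 + k)   # full pooling: everyone equal
    return rate, rate
```

For k = 1 this gives 33/33/25 as in the slides; from k = 2 onward everyone gets the pooled share 200/(6+k).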
Setup: two 100 Mb/s access links, with 5 TCPs on one and 15 TCPs on the other; first 0, then 10 MPTCPs are added.
We confirmed in experiments that MPTCP nearly manages to pool the capacity of the two access links. [Plot: rates over time in minutes.]
MPTCP on EC2
Amazon EC2: infrastructure as a service.
- We can rent virtual machines by the hour.
- These run in Amazon data centers worldwide.
- We can boot our own kernel.
[Figure: instances in the same rack.]
Implementing Multipath TCP in the Linux Kernel
Christoph Paasch, Fabien Duchêne, Gregory Detal
Freely available at http://mptcp.info.ucl.ac.be
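This implementation made MPTCP transparent to unmodified applications; mainline Linux (5.6 and later) instead lets an application opt in via IPPROTO_MPTCP. A hedged sketch that falls back to plain TCP when the running kernel has no MPTCP support:

```python
import socket

# Linux uapi value; socket.IPPROTO_MPTCP only exists on newer Pythons.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_stream():
    """Prefer an MPTCP socket; fall back to TCP if unsupported."""
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             IPPROTO_MPTCP)
    except OSError:
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM)
```

Everything after socket creation (connect, send, recv) is unchanged, which is the point: the byte-stream API is preserved.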
MPTCP-session creation

MPTCP on multicore architectures
Problem: the subflows of one MPTCP session may be steered to different CPU cores, forcing cross-core sharing of the session state.
Solution: send all packets from the same MPTCP session to the same CPU core, based on the Receive Flow Steering implementation in Linux (author: Tom Herbert, Google).
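A toy illustration of the steering idea (not the kernel code; the field names are made up): hashing the packet 4-tuple spreads the subflows of one session across cores, while hashing the shared session token pins them all to one core.

```python
import zlib

NUM_CORES = 4

def core_by_tuple(pkt):
    # Classic flow steering: hash of the flow 4-tuple.
    key = "{saddr}:{sport}:{daddr}:{dport}".format(**pkt)
    return zlib.crc32(key.encode()) % NUM_CORES

def core_by_session(pkt):
    # MPTCP-aware steering: all subflows share the session token.
    return pkt["token"] % NUM_CORES

# Two subflows of the same MPTCP session, different addresses and ports:
sub1 = dict(saddr="10.0.0.1", sport=4242, daddr="10.0.1.1", dport=80, token=7)
sub2 = dict(saddr="10.0.0.2", sport=9999, daddr="10.0.1.1", dport=80, token=7)
```

Per-tuple hashing may or may not collide the two subflows onto one core; per-token hashing guarantees it.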
Multipath TCP
on
Mobile Devices
Related Work
Multipath TCP has been proposed many times before: first by Huitema (1995), then CMT, pTCP, M-TCP, ...
Multipath topologies
need multipath
transport
Multipath TCP can be used
by unchanged applications
over today's networks
MPTCP moves traffic away from
congestion,
making a collection of links behave like a
single pooled resource
Backup Slides
Packet-level ECMP in
datacenters