
Efficient Point to Multipoint Transfers Across Datacenters

M. Noormohammadpour, C. S. Raghavendra, S. Rao, S. Kandula


Global Placement of Datacenters and Inter-Datacenter Networks

[Figures: datacenters spread globally (Microsoft Azure) and inter-datacenter networks.
Sources: https://azure.microsoft.com/en-us/overview/datacenters/how-to-choose/ (Jun 2017);
C. Hong et al., "Achieving High Utilization with Software-Driven WAN", ACM SIGCOMM 2013;
S. Jain et al., "B4: Experience with a Globally-Deployed Software Defined WAN", ACM SIGCOMM 2013.]

Benefits
✓ Better communication across datacenters
✓ Increased reliability
✓ Load balancing
✓ Easier to get data closer to users
  ➢ Lower RTT to users (higher average throughput)
  ➢ Fewer hops to users (bandwidth savings)

Point to Multipoint (P2MP) Transfers: An Abstraction Model

Many applications cast (copy) objects to multiple locations:

Application                       Reason for delivery to multiple datacenters
CDN, Web                          Getting closer to users
Data Recovery                     Making backup copies
Search, Recommendation, Ads       Synchronization of state
Databases                         Global load balancing
Geo-Distributed Data Analytics    Input for next processing stage

P2MP Abstraction Model (see the sketch below)
✓ Single source
✓ Known and fixed receiver set

[Figure: source datacenter A delivers one object across the inter-datacenter
network to receivers B, C, and D.]
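
The abstraction boils down to a small record per request. A minimal sketch, assuming illustrative names (P2MPRequest and its fields are hypothetical, not from the DCCast code):

```python
# Hypothetical sketch of the P2MP request abstraction: one source, a known
# and fixed receiver set, and a transfer volume.
from dataclasses import dataclass

@dataclass(frozen=True)
class P2MPRequest:
    source: str           # source datacenter, e.g. "A"
    receivers: frozenset  # known, fixed receiver set, e.g. {"B", "C", "D"}
    volume: float         # total volume to deliver to every receiver

request = P2MPRequest(source="A",
                      receivers=frozenset({"B", "C", "D"}),
                      volume=10e9)  # e.g. 10 GB to each of B, C, D
```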

Current Solutions and Their Shortcomings

Separate Unicast Transfers
❖ Wastes bandwidth
❖ May increase completion times

Multicasting (Network-Driven)
❖ Trees far from optimal
  ➢ No attention to resource utilization
  ➢ Trees built gradually with joins/leaves
❖ Complex session management
❖ Example: IP Multicast

Multicasting (Client-Driven)
❖ Limited visibility into network status
❖ Limited control over routing
❖ Example: Overlay Networks

Store-and-Forward (SnF)
❖ Storage and bandwidth costs on intermediate datacenters
❖ Can lead to excessive delays
❖ Complexity (running SnF agents, chunking and reassembly, etc.)

Our Solution: DCCast

✓ For every P2MP transfer, send traffic to all receivers over a single Forwarding Tree
  ➢ Reduced bandwidth usage
✓ Forwarding Tree Selection (at controller)
  ➢ Chosen by a controller with a global view of network status
  ➢ Simple weight assignment to edges
  ➢ Minimum-weight Steiner tree selection
  ➢ Load balancing / reducing completion times
✓ Rate-Allocation (controller) and Rate-Limiting (senders)
  ➢ Slotted timeline with fixed rates during timeslots (see the sketch below)

[Figure: TE Server architecture. A sender calls Allocate(R); the TE Server,
backed by a database, performs Forwarding Tree Selection and Rate Calculation,
installs forwarding rules (Group Tables) on switches along the tree, and
returns rates that senders use for rate-limiting.]
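
To make the slotted timeline concrete, here is a minimal FCFS rate-allocation sketch. The slot length, per-edge capacity, and all function and variable names are assumptions for illustration, not DCCast's actual controller code:

```python
# A sketch of slotted-timeline rate-allocation: each edge of the chosen
# forwarding tree reserves capacity in fixed-size timeslots, filled
# earliest-first (FCFS). Constants and names are illustrative assumptions.
from collections import defaultdict

SLOT_SECONDS = 1.0    # timeslot length (assumed)
EDGE_CAPACITY = 10.0  # per-edge capacity usable in one slot (assumed units)

# reserved[edge][slot] = bandwidth already promised to earlier requests
reserved = defaultdict(lambda: defaultdict(float))

def allocate_fcfs(tree_edges, volume):
    """Fill the earliest timeslots on all tree edges until the request
    volume is covered; returns {slot: rate} for sender rate-limiting."""
    schedule, slot = {}, 0
    while volume > 1e-9:
        # the whole tree sends at one rate, capped by its busiest edge
        free = min(EDGE_CAPACITY - reserved[e][slot] for e in tree_edges)
        rate = min(free, volume / SLOT_SECONDS)
        if rate > 0:
            for e in tree_edges:
                reserved[e][slot] += rate
            schedule[slot] = rate
            volume -= rate * SLOT_SECONDS
        slot += 1
    return schedule

# Example: a 25-unit transfer over a two-edge tree fills slots 0, 1 and 2.
print(allocate_fcfs([("A", "B"), ("B", "C")], volume=25.0))
```

Because earlier allocations are never recomputed under FCFS, a request's completion slot is known at arrival time, which is the basis for the guaranteed completion times discussed next.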

Tree Selection and Rate-Allocation: Upon Request Arrival

Tree Selection
✓ Input: L_e (outstanding load on every edge e) and V_R (request volume)
✓ Every edge e gets a weight of W_e = V_R + L_e
✓ Minimum-weight Steiner tree → Forwarding Tree of R
✓ Many heuristics available for Steiner tree selection (see the sketch below)

Rate-Allocation
✓ According to FCFS: simple, no rate recalculations
✓ Guaranteed completion times assuming no failures
✓ Investigation of more policies (e.g. Fair Sharing) in future
  ➢ Any scheduling policy can be used on top of DCCast's forwarding tree selection
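
A sketch of the tree-selection step under these definitions, using the Steiner-tree approximation shipped with networkx (the topology and load values are illustrative; DCCast can use any Steiner-tree heuristic):

```python
# Weight every edge e with W_e = V_R + L_e, then pick a minimum-weight
# Steiner tree spanning the source and all receivers. networkx provides an
# approximation algorithm; graph contents below are illustrative.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

def select_forwarding_tree(G, load, source, receivers, V_R):
    """G: inter-datacenter topology; load maps an edge to its outstanding
    load L_e; V_R is the volume of the arriving request R."""
    for u, v in G.edges():
        # W_e = V_R + L_e (undirected edges may be keyed in either order)
        G[u][v]["weight"] = V_R + load.get((u, v), load.get((v, u), 0.0))
    return steiner_tree(G, [source, *receivers], weight="weight")

G = nx.Graph([("A", "B"), ("B", "C"), ("A", "D"), ("C", "D"), ("B", "D")])
tree = select_forwarding_tree(G, load={("A", "B"): 5.0},
                              source="A", receivers=["B", "C", "D"], V_R=10.0)
print(sorted(tree.edges()))  # edges of the chosen forwarding tree
```

Adding V_R to every edge penalizes large trees (each extra edge costs at least V_R), while the L_e term steers requests away from loaded edges; that is the load-balancing intuition behind the weight.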

Evaluations

[Figures: comparison of tree selection techniques, and DCCast vs. separate
unicast transfers.]

Future Work

❖ Improving Mean TCT
  ➢ Applying batching techniques for bursty arrival patterns (e.g. apply SJF policy to batches)
  ➢ Applying the Fair Sharing policy (rather than FCFS)
  ➢ Multiple trees, each connected to a subset of receivers (addressing slow receivers)
  ➢ Parallel trees to the same subsets of receivers (increasing throughput)
  ➢ Applying SRPT with only bandwidth preemption (trees selected upon request arrival)
  ➢ Combining forwarding trees with store-and-forward
❖ Handling Failures
  ➢ Proactive approaches (leaving spare capacity, backup trees)
  ➢ Reactive approaches (rescheduling affected transfers, local activation)

Source Code Available on GitHub: https://github.com/noormoha/DCCast