
Architectures for Networks and Services (ARS)

Multicast Communications

Management of Services and Networks (MSR)


Multicast communications
 Multicast communications
 Data delivered to a group of receivers. Typical examples:
- One-to-many (1:N), one-way
- One-to-many (1:N), two-way
- Many-to-many (N:M)
[Figure: sender and receiver roles in each pattern. M = Multicast; U = Unicast; S = Sender; R = Receiver]

 Chapter outline
 What applications use multicast? What are the requirements
and design challenges of multicast communications?
 What multicast support does IP provide (network layer)?
 After an overview of multicast applications, we'll focus on IP
multicast: service model, addressing, group management, and
routing protocols.
© Octavian Catrina 2 
Multicast applications (examples)
 One-to-many
 Real-time audio-video distribution: lectures, presentations, meetings, movies, etc. Internet TV. Time sensitive. High bandwidth.
 Push media: news headlines, stock quotes, weather updates, sports scores. Low bandwidth. Limited delay.
 File distribution: Web content replication (mirror sites, content network), software distribution. Bulk transfer. Reliable delivery.
 Announcements: alarms, service advertisements, network time, etc. Low bandwidth. Short delay.
 Many-to-many
 Multimedia conferencing: multiple synchronized video/audio streams, whiteboard, etc. Time sensitive. High bandwidth, but typically only one sender active at a time.
 Distance learning: presentation from lecturer to students, questions from students to everybody.
 Synchronized replicated resources, e.g., distributed databases. Time sensitive.
 Multi-player games: possibly high bandwidth, all senders active in parallel. Time sensitive.
 Collaboration, distributed simulations, etc.
Multicast requirements (1)
 Efficient and scalable delivery
 Multi-unicast repeats each data item. Wastes sender and network resources. Cannot scale up for many receivers and/or large amounts of data.
 Timely and synchronized delivery
 Multi-unicast uses sequential transmission. Results in long, variable delay for large groups and/or for large amounts of data. In particular, a critical issue for real-time communications (e.g., videoconferencing).
 We need a different delivery paradigm.
[Figure: multi-unicast delivery from one sender to receivers 1-7.]
Multi-unicast vs. Multicast tree
 Multi-unicast delivery
 1:N transmission handled as N unicast transmissions.
 Inefficient and slow for N >> 1: multiple packet copies per link (up to N).
 Multicast tree delivery
 Transmission follows the edges of a tree, rooted at the sender, and reaching all the receivers.
 A single packet copy per link.
[Figure: the same sender and receivers served by multi-unicast delivery (duplicate copies on shared links) vs. a multicast tree (one copy per link).]
Multicast requirements (2)

 Multicast group identification


 Applications need special identifiers for multicast groups.
(Could they use lists of host IP addresses or DNS names?)
 Groups have limited lifetime.
 We need mechanisms for dynamic allocation of unique
multicast group identifiers (addresses).

 Group management
 Group membership changes dynamically.
 We need join and leave mechanisms (latency may be critical).
 For many applications, a sender must be able to send without
knowing the group members or having to join (e.g., scalability).
 A receiver might need to select the senders it receives from.

Multicast requirements (3)
 Session management
 Receivers must learn when a multicast session starts and
what the group id is (so that they can "tune in").
 We need session description & announcement mechanisms.

 Reliable delivery
 Applications need a certain level of reliable data delivery.
Some tolerate limited data loss. Others do not tolerate any loss
(e.g., all data to all group members - hard problem).
 We need mechanisms that can provide the desired reliability.

 Heterogeneous receivers
 Receivers within a group may have very different capabilities
and network connectivity: processing and memory resources,
network bandwidth and delay, etc.
 We need special delivery mechanisms.
Requirements: Some conclusions
 Multi-unicast delivery is not suitable
 Multi-unicast does not scale up for large groups and/or large
amounts of data: it becomes either very inefficient, or does not
fulfill the application requirements.

 Specific functional requirements
 Specific multicast functions, which are not needed for unicast: group management, heterogeneous receivers.
 General functions, which are also needed for unicast, but become much more complex for multicast: addressing, routing, reliable delivery, flow & congestion control.
 We need new mechanisms and protocols, specially designed for multicast.

Which layers should handle multicast?

 Data link layer
 Efficient delivery within a multi-access network.
 Multicast extensions for LAN and WAN protocols.
 Network layer
 Multicast routing for efficient & timely delivery.
 IP multicast extensions. Multicast routing protocols.
 Transport layer
 End-to-end error control, flow control, and congestion control
over unreliable IP multicast.
 Multicast transport protocols.
 Application layer multicast
 Overlay network created at application layer using existing
unicast transport protocols. Easier deployment, less efficient.
Still an open research topic.
IP multicast model (1)
 "Transmission of an IP datagram to a group of hosts"
 Extension of the IP unicast datagram service.
 IP multicast model specification: RFC 1112, 1989.

 Multicast address
 Unique (destination) address for a group of hosts.
 Different datagram delivery semantics → a distinct range of
addresses is reserved in the IP address space.
 Who receives? Explicit receiver join
 IP delivers datagrams with a destination address G only to
applications that have explicitly notified IP that they are
members of group G (i.e., requested to join group G).
 Who sends? Any host can send to any group
 Multicast senders need not be members of the groups they
send to.
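The receiver-side join described above maps directly onto the standard sockets API. A minimal sketch in Python (group address, port, and interface are arbitrary examples, not values from the slides):

```python
import socket
import struct
import ipaddress

def is_ipv4_multicast(addr: str) -> bool:
    """True if addr falls in the class D range 224.0.0.0/4."""
    return ipaddress.IPv4Address(addr).is_multicast

def make_membership_request(group: str) -> bytes:
    """Pack the ip_mreq structure passed to IP_ADD_MEMBERSHIP:
    4-byte group address + 4-byte local interface (INADDR_ANY here)."""
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))

# A receiver explicitly joins group 224.1.2.3 (example address):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.bind(("", 5007))
# sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
#                 make_membership_request("224.1.2.3"))
# The OS then emits the IGMP membership report. A sender needs no join:
# it simply sends UDP datagrams to ("224.1.2.3", 5007).
```

Note how the asymmetry of the model shows up in the API: only the receiver issues the join; the sender uses a plain `sendto`.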
IP multicast model (2)
 No restrictions for group size and member location
 Groups can be of any size.
 Group members can be located anywhere in an internetwork.

 Dynamic group membership


 Receivers can join and leave a group at will.
 The IP network must adapt the multicast tree accordingly.

 Anonymous groups
 Senders need not know the identity of the receivers.
 Receivers need not know each other.
 Analogy: A multicast address is like a radio frequency, on which
anyone can transmit, and anyone can tune in.

 Best-effort datagram delivery


 No guarantees that: (1) all datagrams are delivered (2) to all
group members (3) in the order they have been transmitted.
IP multicast model: brief analysis
 Applications viewpoint
 Simple, convenient service interface. Same send/receive ops
as for unicast, plus join/leave ops.
 Anybody can send/listen to a group. Security, billing?
 Extension to reliable multicast service? Difficult problem.

 IP network viewpoint
 Scales up well with the group size.
 Single destination address, no need to monitor membership.
 Does not scale up with the number of groups. Conflicts with
the original IP model (per session state in routers).
 Routers must discover the existence/location of receivers and
senders. They must maintain dynamic multicast tree state per-
group and even per-source and group.
 Dynamic multicast address allocation. How to avoid allocation
conflicts (globally)? Very difficult problem.
IPv4 multicast addresses
 IPv4 multicast addresses
 Class D: 224.0.0.0 to 239.255.255.255. Leading bits 1110, followed by a 28-bit multicast address field (2^28 addresses).
 IP multicast in LANs
 Relies on the MAC layer's native multicast.
 Mapping of IP multicast addresses to MAC multicast addresses: the low-order 23 bits of the IPv4 multicast address are appended to a fixed 25-bit Ethernet multicast prefix (01-00-5E with the group bit set).
Remark: 32 IP multicast addresses map to the same MAC multicast address.
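The 32-to-1 mapping is a simple bit operation; a sketch:

```python
import socket
import struct

def ipv4_mcast_to_mac(group: str) -> str:
    """Map an IPv4 multicast address to its Ethernet multicast address:
    fixed prefix 01-00-5E, then the low-order 23 bits of the group
    address. The top 5 of the 28 group bits are dropped, so 2^5 = 32
    IP groups collide on one MAC address."""
    (ip,) = struct.unpack("!I", socket.inet_aton(group))
    low23 = ip & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

# 224.1.2.3 and 239.129.2.3 differ only in the 5 dropped bits,
# so both map to 01:00:5e:01:02:03.
```

Because of the collisions, the IP layer must still filter received frames by the full destination IP address.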

Multicast scope
 Multicast scope
 Limited network region where multicast packets are forwarded.
 Motivation: address allocation, efficiency, application-specific delivery.
 TTL-based scopes or administrative scopes (RFC 2365).
 Administrative scopes
 Delimited by configuring boundary routers: do not forward some ranges of multicast addresses on some interfaces.
 Connected, convex regions. Nested and/or overlapping.
 IPv4 administrative scopes:
- Organization-local (239.192.0.0/14).
- Local scope (239.255.0.0/16).
- Link-local scope (224.0.0.0/24).
- Global scope: no boundary, all remaining multicast addresses.
[Figure: two local scopes (A and B) nested inside an organization-local scope, within the global Internet.]
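The scope ranges above can be checked with ordinary prefix matching; a sketch using Python's ipaddress module (function name and the decision to return "global" as the default are my own):

```python
import ipaddress

# IPv4 administrative scope ranges (RFC 2365); the three prefixes
# are disjoint, so match order does not matter here.
SCOPES = [
    (ipaddress.ip_network("224.0.0.0/24"), "link-local"),
    (ipaddress.ip_network("239.255.0.0/16"), "local"),
    (ipaddress.ip_network("239.192.0.0/14"), "organization-local"),
]

def multicast_scope(addr: str) -> str:
    """Classify an IPv4 multicast address into an administrative scope."""
    a = ipaddress.ip_address(addr)
    if not a.is_multicast:
        raise ValueError(f"{addr} is not a multicast address")
    for net, name in SCOPES:
        if a in net:
            return name
    return "global"  # no administrative boundary for this range
```

A boundary router for the organization-local scope would apply such a check and refuse to forward matching groups on its external interfaces.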

Group management: local
 Multicast service requirements
 Multicast routers have to discover the locations of the members of any multicast group & maintain a multicast tree reaching them all.
 Dynamic group membership.
 Local (link) level
 Multicast applications must notify IP when they join or leave a multicast group (API available).
 The Internet Group Management Protocol (IGMP) allows multicast routers to learn which groups have members, at each interface.
 Dialog between hosts and a (link-)local multicast router.
[Figure: multicast tree from the sender to receivers 1-4; IGMP runs on the leaf links.]
Group management: internetwork
 Global (internetwork) level
 Multicast routing protocols propagate information about group membership and allow routers to build the tree.
 Implicit vs. explicit join
 Implicit: multicast tree obtained by pruning a default broadcast tree. Nodes must ask to be removed.
 Explicit: nodes must ask to join.
 Data-driven vs. control-driven multicast tree setup
 Data-driven: tree built/maintained when/while data is sent.
 Control-driven: tree set up & maintained by control messages (join/leave), independently of the senders' activity.
[Figure: a multicast routing protocol runs between the routers; IGMP runs on the leaf links.]
Group Management: IGMP (1)
 Internet Group Management Protocol
 Enables a multicast router to learn, for each of its directly
attached networks, which multicast addresses are of interest to
the systems attached to these networks.
 IGMPv1: join + refresh + implicit leave (timeout). IGMPv2: adds
explicit leave (fast). IGMPv3 (2002): adds source selection.
 IGMPv3 presented in the following, IGMPv1/v2 in the annex.

 Periodic General Queries: Refresh/update group list
 Reports are randomly delayed to avoid bursts.
(Duplicate reports are completely suppressed in IGMPv1 & v2.)
[Example: (1) The router sends an IP packet to 224.0.0.1 (all systems on this subnet) carrying an IGMPv3 General Query: "Anybody interested in any group?". (2) A member host answers with an IP packet to 224.0.0.22 carrying an IGMPv3 Current State Report: "member of group 224.1.2.3". The router records group 224.1.2.3 at interface i1.]
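The randomized delay and the IGMPv1/v2 report suppression can be sketched as follows (function names and the time values are illustrative, not from the specification):

```python
import random

def schedule_report(max_resp_time: float) -> float:
    """Pick a random delay in [0, max_resp_time) before answering a
    Query, so that members of a large group do not all report at once."""
    return random.uniform(0.0, max_resp_time)

def should_send_report(pending_delay: float, heard_report_at=None) -> bool:
    """IGMPv1/v2 suppression: cancel our pending report if another
    member's report for the same group was heard before our timer
    fired (heard_report_at is None when no report was heard)."""
    return heard_report_at is None or heard_report_at >= pending_delay
```

With this scheme a single report per group and link usually suffices, keeping IGMP traffic independent of the group size.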

IGMP (2)
 Host joins a group
[Example: the host sends an IP packet to 224.0.0.22 (all IGMPv3 routers) carrying an IGMPv3 State Change Report: "joined group 224.1.2.3". The router adds 224.1.2.3 to the local groups at interface i1.]
 Host leaves a group
 Router must check if there are other members of that group.
[Example: (1) The leaving host sends to 224.0.0.22 an IGMPv3 State Change Report: "not member of 224.1.2.3". (2) The router sends to 224.0.0.1 (all systems on this subnet) an IGMPv3 Group-Specific Query: "Anybody interested in 224.1.2.3?". (3) A remaining member answers to 224.0.0.22 with an IGMPv3 Current State Report: "member of 224.1.2.3", so the group state is maintained.]
Multicast trees
 What kind of multicast tree?
 Minimize tree diameter (path length, delivery delay) or tree cost (network resources)?
 Special, difficult case of minimum cost spanning tree (Steiner tree). No good distributed algorithm!
 Practical solution
 Take advantage of existing unicast routing: shortest-path tree based on routing info from the unicast routing protocol.
 Multicast extension of a unicast routing protocol, or separate multicast routing protocol.
[Figure: shortest-path tree (from unicast routing) vs. minimum-cost tree for the same sender and receivers.]
Source-based vs. shared trees
 Source-based trees
 One tree per sender.
 Tree rooted at the
sender. Typically
shortest-path tree.

 Shared trees
 One tree for all senders.
 Examples: Minimum diameter tree or minimum cost tree, etc.

Source-based trees (1)
 Source-based tree
 Tree rooted at a sender which spans all receivers.
 Typically, shortest-path tree.
 In general: M ≥ 1 senders per group
 M sources transmit to a group.
 Session participants may be senders, receivers, or both.
 A separate source-based tree has to be set up for each sender.
[Figure: two source-based trees, for the senders on 172.16.5.0/24 and 172.20.2.0/24.]

Router 2: Multicast forwarding table
Source prefix    Multicast group   In IF   Out IF
172.16.5.0/24    224.1.1.1         N       S, E, SE
172.20.2.0/24    224.1.1.1         E       N

Interface notation: N = North (up); S = South (down); W = West (left); E = East (right); NW = North-West (up-left). Etc.
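The forwarding decision encoded by such a table can be sketched as a lookup keyed by (source prefix, group); the table contents mirror Router 2's entries, but the function and variable names are illustrative:

```python
import ipaddress

# Router 2's table: (source prefix, group) -> (in_if, out_ifs)
FWD_TABLE = {
    ("172.16.5.0/24", "224.1.1.1"): ("N", ["S", "E", "SE"]),
    ("172.20.2.0/24", "224.1.1.1"): ("E", ["N"]),
}

def forward(src: str, group: str, in_if: str):
    """Return the interfaces a multicast packet is replicated to, or []
    if there is no matching (source, group) tree state or the packet
    arrived on an interface that is not part of that source's tree."""
    for (prefix, g), (expected_in, out_ifs) in FWD_TABLE.items():
        if g == group and ipaddress.ip_address(src) in ipaddress.ip_network(prefix):
            return out_ifs if in_if == expected_in else []
    return []
```

Note that the state is per source prefix and group, which is exactly why the number of entries grows with the number of active senders.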
Source-based trees (2)
 Pros
 Per-source tree optimization.
 Shortest network path & transfer delay.
 Tree created/maintained only when/while a source is active.
 Cons
 Does not scale for multicast sessions with M>>1 sources.
 The network must create and maintain M separate trees:
 per-source & group state in routers, higher control traffic and
processing overhead.
 Examples
 PIM-DM, DVMRP, MOSPF: data-driven tree setup.
 PIM-SM (mixed solution): explicit join, control-driven tree setup.


Shared trees (1)
 Core-based shared tree
 The multicast session uses a single distribution tree, with the root at a "core" node, and spanning all the receivers ("core-based" tree).
 Each sender transmits its packets to the core node, which delivers them to the group of receivers.
 Typically, shortest-path tree, with the central root node.
[Figure: shared tree with the core at router 5; both senders reach the receivers through the core.]

Router 5: Multicast forwarding table
Multicast group (any sender)   In IF   Out IF
224.1.1.1                      W       N, E

Interface notation: N = North (up); S = South (down); W = West (left); E = East (right); NW = North-West (up-left). Etc.

Shared trees (2)
 Pros
 More efficient for multicast sessions with M>>1 sources.
 The network creates a single delivery tree shared by all senders:
 only per-group state in routers, less control overhead.
 Tree (core to receivers) created and maintained independently
of the presence and activity of the senders.
 Cons
 Less optimal/efficient trees.
 Possible long paths and delays, depending on the relative
location of the source, core, and receiver nodes.
 Traffic concentrates near the core node. Danger of congestion.
 Issue: (optimal) core selection.
 Examples
 PIM-SM, CBT.
 Explicit join, control-driven (soft state, implicit leave/prune).

DVMRP
 DVMRP: Distance Vector Multicast Routing Protocol
 First IP multicast routing protocol (RFC 1075, 1988).
 DVMRP at a glance
 Source-based multicast trees, data-driven tree setup.
 Distance vector unicast routing (DVR).
 Reverse path multicast (RPM).
 Support for multicast overlays: tunnels between multicast
enabled routers through networks not supporting multicast.
 Used to create the Internet MBone (Multicast Backbone).
 Routing info base
 DVMRP incorporates its own unicast DVR protocol.
 Separated routing for unicast service and multicast service.
 DVR protocol derived from RIP and adapted for RPM.
 E.g., routers learn the downstream neighbors on the multicast
tree for any source address prefix.

Reverse Path Broadcast
 Broadcast tree for source s (172.16.5.0/24)
 The unicast route matching s indicates a router's parent in the broadcast tree for source s (child-to-parent pointer).
 Reverse Path Forwarding (RPF)
 Broadcast/multicast packet from source s received on interface i:
- If i is the interface used to forward a unicast packet to s, then forward the packet on all interfaces except i.
- Otherwise discard the packet.
 Reverse Path Broadcast (RPB)
 RPF still allows unnecessary copies.
 Add parent-child pointers: a router learns which neighbors use it as next hop for each route, and forwards a packet only to these neighbors.
[Figure: route entries matching the broadcast sender's address, and the unnecessary packet copies sent by plain RPF.]
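The RPF acceptance test reuses the unicast routing table; a sketch with a longest-prefix-match lookup (the table contents and interface names are illustrative):

```python
import ipaddress

# Unicast routing table: prefix -> interface used to reach that prefix
UNICAST_ROUTES = {
    "172.16.5.0/24": "N",
    "172.20.0.0/16": "E",
    "0.0.0.0/0": "W",
}

def rpf_interface(source: str) -> str:
    """Interface this router would use to send a unicast packet back to
    the source (longest-prefix match over the routing table)."""
    addr = ipaddress.ip_address(source)
    matches = [ipaddress.ip_network(p) for p in UNICAST_ROUTES
               if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda n: n.prefixlen)
    return UNICAST_ROUTES[str(best)]

def rpf_accept(source: str, in_if: str) -> bool:
    """RPF check: accept (and flood on the other interfaces) only
    packets arriving on the interface that leads back to the source."""
    return in_if == rpf_interface(source)
```

Packets failing the check are duplicates or loops and are simply dropped, which is what keeps flooding loop-free.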
Reverse Path Multicast (1)
 Truncated RPB
 Uses IGMP to avoid unnecessary broadcast in leaf multi-access networks.
 Reverse Path Multicast (RPM)
 Creates a multicast tree by pruning unnecessary tree branches from the (truncated) RPB broadcast tree.
 Prune mechanism
 A router sends a Prune message to its upstream (parent) router if:
- its connected networks do not contain group members, and
- its neighbor routers either are not downstream (child) routers, or have sent Prune messages.
 Both routers maintain Prune state.
[Figure: Prune messages sent upstream by routers whose subtrees contain no members of the group (sender on 172.16.5.0/24).]
Reverse Path Multicast (2)
 Adapting the multicast tree to group membership changes
 Pruning can remove branches when members leave.
 A mechanism is necessary to add branches when members join.
 Periodic broadcast & prune
 The multicast tree can be updated by repeating periodically the broadcast & prune process (a parent removes the prune state after some time).
 Graft mechanism
 Faster tree extension.
 A router sends a Graft message, which cancels a previously sent Prune message.
[Figure: Graft messages sent upstream when new members (receivers 3 and 4) join.]
DVMRP operation
 Data-driven: multicast tree setup when the source starts sending to the group.
 Initially, RPB: all routers receive the packets, learn about the session (source-group), & record state for it.
 Next, RPM: unnecessary branches are pruned from the data paths (but the routers still maintain state).
 Tree update by periodic broadcast & prune, and graft.
[Figure: the sender on 172.16.5.0/24 sends to groups 224.1.1.1 and 224.5.6.7; route entries match the sender's address.]

Router 2: Multicast forwarding cache
Source prefix    Multicast group   In IF   Out IF
172.16.5.0/24    224.1.1.1         N       E, SE
                 224.5.6.7                 S(Prune)

Router 4: Multicast forwarding cache
Source prefix    Multicast group   In IF      Out IF
172.16.5.0/24    224.1.1.1         N(Prune)   S(Prune)
                 224.5.6.7
DVMRP conclusions
 DVMRP & RPM shortcomings
 Several design solutions limit DVMRP scalability & efficiency.
 Tree setup and maintenance by periodic broadcast & prune.
 Can waste a lot of bandwidth, especially for a sparse group spread
over a large internetwork (OK for dense groups).
 Per-group & source state in all routers, both on-tree & off-tree.
 Due to source-based trees and to enable fast grafts.
 Controversial feature: Embedded DVR protocol.
 New generation RPM-based protocol: PIM-DM
 Protocol Independent Multicast: Uses existing unicast routing
table, from any routing protocol. No embedded unicast routing.
 Dense Mode: Intended for "dense groups" - concentrated in a
network region (rather than thinly spread in a large network).
 Uses RPM as described on previous slides (similar to DVMRP).
No parent-to-child pointers, hence redundant transmissions in broadcast phase.
PIM-SM
 PIM: Protocol Independent Multicast
 Uses existing unicast routing table, from any routing protocol.
No embedded unicast routing.
 No solution matches well different application contexts. Two
protocols, different algorithms.
 PIM-DM: Dense Mode
 Efficient multicast for "dense" (concentrated) groups.
 RPM, source-based trees, implicit-join, data-driven setup.
PIM-DM is similar to DVMRP, except it relies on existing unicast routing,
hence it does not avoid redundant transmissions in the broadcast phase.
 PIM-SM: Sparse Mode
 Efficient multicast for sparsely distributed groups.
 Shared trees, explicit join, control-driven setup.
 After initially using the group's shared tree, members can set
up source-based trees. Improved efficiency and scalability.
Rendezvous Points
 Rendezvous Point (RP) router
 Core of the multicast shared tree. Meeting point for the group's receivers & senders.
 At any moment, any router must be able to uniquely map a multicast address to an RP.
 Resilience & load balancing → a set of RPs.
 RP discovery and mapping
 Several routers are configured as RP-candidate routers for a PIM-SM domain.
 They elect a Bootstrap Router (BSR).
 The BSR monitors the RP candidates and distributes a list of RP routers (RP-Set) to all the other routers in the domain.
 A hash function allows any router to uniquely map a multicast address to an RP-Set router.
[Figure: PIM-SM domain with routers R1-R7; the RP is at R5; senders and receivers attach at the edges.]
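The essential property of the group-to-RP mapping is that every router holding the same RP-Set computes the same answer. A simplified stand-in (not the actual PIM-SM hash function from the specification) that has this property:

```python
import hashlib

def select_rp(group: str, rp_set: list) -> str:
    """Deterministically map a multicast group address to one router in
    the RP-Set. Because the result depends only on the group address
    and the (sorted) RP-Set, all routers pick the same RP, so all Joins
    for a group converge on a single shared-tree root.
    (Simplified stand-in for the real PIM-SM hash.)"""
    digest = hashlib.sha256(group.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(rp_set)
    return sorted(rp_set)[index]  # sort: independent of list order
```

Different groups hash to different RPs, which is how the mechanism also spreads load across the RP-Set.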
Shared tree (RP-tree) setup
 Designated Router (DR)
 Unique PIM-SM router responsible for multicast routing in a subnet.
 Receiver join
 To join a group G, a receiver informs the local DR using IGMP.
 DR join
 The DR adds (*,G) multicast tree state (group G, any source).
 The DR determines the group's RP, and sends a PIM-SM Join(*,G) packet towards the RP.
 At each router on the path, if (*,G) state does not exist, it is created, & the Join(*,G) is forwarded.
 Multicast tree state is soft state: refreshed by periodic Join messages.
[Figure: Join(*,G) messages propagate from the receivers' DRs towards RP(G), creating (*,G) state along the RP-tree.]
Sending on the shared tree
 Register encapsulation
 The sender's local DR encapsulates each multicast data packet in a PIM-SM Register packet, and unicasts it to the RP.
 The RP decapsulates the data packet and forwards it onto the RP-tree.
 Allows the RP to discover a source, but data delivery is inefficient.
 Register-Stop
 The RP reacts to a Register packet by issuing a Join(S,G) towards S.
 At each router on the path, if (S,G) state does not exist, it is created, & the Join(S,G) is forwarded.
 When the (S,G) path is complete, the RP stops the encapsulation by sending (unicast) a PIM-SM Register-Stop packet to the sender's DR.
[Figure: Register-encapsulated data packets travel from the sender's DR to RP(G); Join(S,G) builds the native (S,G) path; Register-Stop ends the encapsulation.]
Source-specific trees
 Shared vs. source-specific tree
 Routers may continue to receive data on the shared RP-tree.
 Often inefficient: e.g., long detour from sender to receiver 1.
 PIM-SM allows routers to create a source-specific shortest-path tree.
 Transfer to source-specific tree
 A receiver's DR sends a Join(S,G) towards S → creates (S,G) multicast tree state at each hop.
 After receiving data on the (S,G) path, the DR sends a Prune(S,G) towards the RP → removes S from G's shared tree at each hop.
[Figure: transfer to the shortest-path tree for receiver 1.]
PIM-SM conclusions
 Advantages
 Independence of unicast routing protocol.
 Better scalability, especially for sparsely distributed groups:
- Explicit join, control-driven tree setup → no data broadcast,
no flooding of group membership information. Per-session
state maintained only by on-tree routers.
- Shared trees → routers maintain per-group state, instead of
per-source-group state.
 Flexibility and performance: optional, selective transfer to
source-specific trees (e.g., triggered by data rate).
 Weaknesses
 Much more complex than PIM-DM.
 Control traffic overhead (periodic Joins) to maintain soft
multicast tree state.

MOSPF
 MOSPF
 Natural multicast extension of the OSPF (Open Shortest Path First) link-state unicast routing protocol.
[Figure: OSPF hierarchical network structure: a backbone area connected by ABRs to areas 1-3.]
 MOSPF at a glance
 Source-based shortest-path multicast trees, data-driven setup.
 Multicast extensions for both intra-area and inter-area routing.
 Extends the OSPF topology database (per-area) with info about the location of the groups' members.
 Extends the OSPF shortest path computation (Dijkstra) to determine multicast forwarding. For each pair of source & destination group, each router:
- computes the same shortest path tree rooted at the source,
- finds its own position in the tree,
- and determines if and where to forward a multicast datagram.
OSPF review (single area)
 Link state advertisements
 Each router maintains a link state table describing its links (attached networks & routers). It sends Link State Advertisements (LSAs) to all other routers (hop-by-hop flooding).
 Topology database
 All routers build from the LSAs the same network topology database (directed graph labeled with link costs).
 Routing table computation
 Each router independently runs the same algorithm (Dijkstra) on the topology, to compute a shortest-path tree rooted at itself, to all destinations.
 A destination-based unicast routing table is derived from the tree.
[Example figure: OSPF topology (link state) database for one area (routers R1-R7, networks N1-N7), and the shortest-path tree computed by router R1 using the Dijkstra algorithm.]
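The per-router computation can be sketched with Dijkstra's algorithm over the shared topology database (the small graph below is illustrative, not the topology from the figure):

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra: returns {node: parent} describing the shortest-path
    tree rooted at `root`. `graph` maps node -> {neighbor: link cost}.
    Every OSPF router runs this on the same topology database."""
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry
        for v, cost in graph[u].items():
            nd = d + cost
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent

# Illustrative topology with symmetric link costs:
graph = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R1": 1, "R3": 1, "R4": 2},
    "R3": {"R1": 4, "R2": 1},
    "R4": {"R2": 2},
}
tree = shortest_path_tree(graph, "R1")
# R3 is reached via R2 (total cost 2) rather than directly (cost 4).
```

MOSPF reuses exactly this computation, but roots the tree at the multicast source instead of at the computing router.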


MOSPF: topology database
 Local group database
 Records group membership in a router's directly attached networks.
 Created using IGMP.
 Group-membership LSA
 Sent by a router to communicate local group members to all other routers (local transit vertices that should remain on a group's tree).
 Topology database extension for multicast
 A router or a transit network is labeled with the multicast groups announced in Group-membership LSAs.
[Figure: routers R3, R5, and R7 learn via IGMP of local members of group m1 and flood Group-Membership LSAs (R3, m1), (R5, m1), (R7, m1).]
MOSPF: multicast tree (intra-area)
 Source-based multicast tree
 Shortest-path tree from the source to the group members (receivers).
 Data-driven tree setup
 A router computes the tree and the multicast forwarding state when it receives the first multicast datagram (i.e., learns about the new session).
 Multicast tree & state
 Routers determine independently the same shortest path tree rooted at the source, using Dijkstra.
 The tree is pruned according to the group membership labels.
 The router finds its position in the pruned tree, and derives the forwarding cache entry.
[Figure: MOSPF link state database (one area); shortest-path tree for (N1, m1); the source (172.16.5.1 on 172.16.5.0/24) sends to m1 = 224.1.1.1; members of m1 at N3, N5, N7.]

Router 2: Multicast forwarding cache
Source       Multicast group   In IF   Out IF
172.16.5.1   224.1.1.1         N       E, SE
MOSPF conclusions
 Advantages
 OSPF is the interior routing protocol recommended by IETF.
 MOSPF is the natural choice of multicast routing protocol in
networks using OSPF.
 More efficient than DVMRP/RPM: no data broadcast.

 Weaknesses
 Various features limit scalability and efficiency:
 Dynamic (!) group membership advertised by flooding.
 Multicast state per-group & per-source, maintained in on-
tree, as well as off-tree routers.
 Relatively complex computations to determine multicast
forwarding: for each new multicast transmission (source-
group), repeated when the group/topology change.
 Few implementations?

Annex


IGMP v1/v2 - Group Management

IGMP v.2 - Group Management

 IGMP v2 enhancements:
 Election of a querier router (lowest IP address).
 Explicit leave (reduce leave latency).

