
Introduction
In this blog, I aim to go a little deeper into how the different DMVPN phases work and how to
properly configure the routing protocol to operate in each phase. The routing protocol of choice
for this blog is EIGRP, configured in classic mode.

At the end of this blog, the reader should have a solid understanding and justification for each
configuration command used in the different phases and their impact on the overall DMVPN
routing design. The focus is purely on the basic configurations without IPsec for simplicity.

Nature of DMVPN clouds

DMVPN clouds create nonbroadcast multiaccess (NBMA) networks. Given the nature of NBMA
networks, all traffic (multicast, broadcast, and unicast) must be sent across the network as
unicast packets. This simply means multicast traffic destined for an IGP neighbor (hello
messages) will always be encapsulated in a unicast packet for delivery. For this reason, it is
crucial that the hub router always knows the identities of all the spokes for which it is the Next-
Hop Server (NHS). For this purpose, the ip nhrp map multicast dynamic command on the hub is
used to dynamically create mappings in the NHRP multicast table for each spoke that registers
with it.

On the DMVPN spokes, which are next-hop clients (NHCs), a static multicast mapping is created
for each hub. This can be achieved in one of two ways:

• ip nhrp map multicast [nbma address of hub]
• ip nhrp nhs [tunnel address of hub] nbma [nbma address of hub] multicast

With this setup, routing adjacencies are formed only between the hub and the spokes; spokes do not form
routing adjacencies with each other. All that needs to be done is to match the tunnel interface's subnet with a
network statement under the routing process on each router, as sketched below. The IGP will then activate and run just as it would over a physical interface.
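
For reference, here is a minimal sketch of what the EIGRP classic-mode configuration could look like on Spoke-R2 (the AS number 123 and the addressing match the examples later in this blog; the exact network statements on each router are deployment-specific):

router eigrp 123
network 172.16.1.0 0.0.0.255
network 192.168.20.0 0.0.0.255

The first network statement enables EIGRP on the tunnel interface's subnet (172.16.1.0/24), and the second advertises the LAN behind the spoke.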

Phase 1

DMVPN phase 1 only provides hub-and-spoke tunnel deployment. This means GRE tunnels are only built
between the hub and the spokes. Traffic destined to networks behind spokes is forced to first traverse the
hub.

The topology below shows two spokes connected to the hub router. The hub is configured with an mGRE
tunnel and the spokes with a P2P GRE tunnel.

There are two critical configurations that make this a Phase 1 implementation:
1. The spoke's tunnel interface is configured as a P2P GRE tunnel (in all phases, the hub is always configured with an mGRE tunnel).
2. The next hop on the spokes always points toward the hub.

Configuring the spokes with P2P GRE tunnels prevents them from building dynamic spoke-to-spoke tunnels.
This way, each time Spoke-R2 needs to reach Spoke-R3, only the single tunnel toward Hub-R1 is used.
Traffic sourced from the device 192.168.20.1 on Spoke-R2 and destined for 192.168.30.1 behind Spoke-R3
always hits Hub-R1 first. The following happens on Hub-R1:

1. Hub-R1 receives the traffic from Spoke-R2.
2. Hub-R1 removes the GRE header, exposing the original IP packet. The original packet has the destination of Spoke-R3's remote network.
3. Hub-R1 encapsulates the original IP packet with a new GRE header and forwards it to Spoke-R3.

The next hop plays a key role here. The route for 192.168.30.0/24 on Spoke-R2 lists Hub-R1's
tunnel IP (172.16.1.1) as the next hop:

Spoke-R2#show ip route eigrp


--- Omitted ---
Gateway of last resort is 20.1.1.2 to network 0.0.0.0

D 192.168.30.0/24 [90/28288000] via 172.16.1.1, 00:02:40, Tunnel0

This is important to understand because it prevents Spoke-R2 from ever attempting to build a
direct tunnel toward the remote Spoke-R3. In simple terms, Spoke-R2 will always use Hub-R1
as its next hop to reach Spoke-R3's subnets.

Because all spoke-to-spoke traffic in DMVPN Phase 1 always traverses the hub, it is actually
inefficient to even send the entire routing table from the hub to the spokes. This means we can
summarize all of the routing information from the hub down to the spokes. This can be achieved
in one of two ways:

1. Flood a default summary route to the spokes for all traffic. This is achieved in EIGRP using the ip summary-address eigrp [asn] 0.0.0.0 0.0.0.0 command under the tunnel interface.
2. Flood a summary route only for the remote spoke networks (192.168.20.0 and 192.168.30.0). This is achieved in EIGRP using the ip summary-address eigrp [asn] 192.168.0.0 255.255.224.0 command under the tunnel interface (see the quick check below).
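
As a quick check on the second option: 255.255.224.0 is a /19 mask, so the summary 192.168.0.0/19 covers 192.168.0.0 through 192.168.31.255, which includes both spoke LANs (192.168.20.0/24 and 192.168.30.0/24).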

For this example, a default summary route will be sent to the spokes from Hub-R1. After making
this change, the routing table on Spoke-R2 looks like this:

Spoke-R2#sh ip route eigrp


--- omitted ---
Gateway of last resort is 172.16.1.1 to network 0.0.0.0
D* 0.0.0.0/0 [90/28160000] via 172.16.1.1, 00:00:13, Tunnel0

Notice the next hop for the default route is still Hub-R1’s Tunnel IP. The following is the
configuration on Hub-R1’s tunnel interface:

Hub-R1#show run int tun0

interface Tunnel0
ip address 172.16.1.1 255.255.255.0
no ip redirects
ip nhrp authentication cisco
ip nhrp map multicast dynamic
ip nhrp network-id 123
ip summary-address eigrp 123 0.0.0.0 0.0.0.0
tunnel source Ethernet0/0
tunnel mode gre multipoint
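
For completeness, here is a sketch of what Spoke-R2's Phase 1 tunnel interface could look like; the P2P GRE tunnel is anchored to Hub-R1's NBMA address (10.1.1.1), and the NHRP settings mirror the hub's:

interface Tunnel0
ip address 172.16.1.2 255.255.255.0
ip nhrp authentication cisco
ip nhrp network-id 123
ip nhrp nhs 172.16.1.1 nbma 10.1.1.1 multicast
tunnel source Ethernet0/0
tunnel destination 10.1.1.1

The tunnel destination command is what makes this a point-to-point GRE tunnel; there is no tunnel mode gre multipoint on the spoke in Phase 1.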

A traceroute to 192.168.30.1 (Spoke-R3's remote network) on Spoke-R2 yields the following
result. Notice Hub-R1 (172.16.1.1) is always traversed:

Spoke-R2#traceroute 192.168.30.1
Type escape sequence to abort.
Tracing the route to 192.168.30.1
VRF info: (vrf in name/id, vrf out name/id)
1 172.16.1.1 1 msec 0 msec 1 msec
2 172.16.1.3 0 msec 1 msec 0 msec

Spoke-R2#traceroute 192.168.30.1
Type escape sequence to abort.
Tracing the route to 192.168.30.1
VRF info: (vrf in name/id, vrf out name/id)
1 172.16.1.1 1 msec 5 msec 5 msec
2 172.16.1.3 0 msec 1 msec 0 msec

Phase 2

In Phase 1, traffic between the spokes always hits the hub. This was a shortcoming of
DMVPN because, in a larger deployment, the hub is burdened with the
encapsulation/decapsulation overhead for all spoke-to-spoke traffic. In addition to this increased
overhead on the hub, spoke-to-spoke traffic takes a suboptimal path, detouring through the
hub before reaching the remote spoke. Phase 2 improved on Phase 1 by allowing spokes to
build spoke-to-spoke tunnels on demand, subject to these requirements:

• Spokes must use multipoint GRE tunnels
• The spokes must receive specific routes for all remote spoke subnets
• The routing table entry for each remote spoke subnet must list the remote spoke as the next hop

Here is the same topology diagram from Phase 1 redesigned for Phase 2:

First, it must be ensured that the spokes use multipoint GRE tunnels. Configuring mGRE on the
spokes allows multiple GRE tunnels to be formed over a single tunnel interface. This is
achieved by removing the static tunnel destination command and replacing it with the tunnel
mode gre multipoint command, as sketched below.
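
As a sketch, converting Spoke-R2's Phase 1 tunnel to mGRE could look something like this (assuming the Phase 1 spoke pointed its P2P tunnel at Hub-R1's NBMA address 10.1.1.1):

interface Tunnel0
no tunnel destination 10.1.1.1
tunnel mode gre multipoint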

Second, the spokes must receive specific routes for all remote spoke subnets. For EIGRP, this is
accomplished by disabling split horizon on the tunnel interface. The split-horizon rule states: “Do
not advertise a route out of an interface if the router uses that interface to reach that network.”

In DMVPN, the hub uses its Tunnel0 interface to reach the networks behind the spokes. Split
horizon will prevent the hub from advertising those networks to remote spokes. Thus, in order for
DMVPN to work in Phase 2 with EIGRP, split horizon must be disabled on the tunnel interface
using the no ip split-horizon eigrp [asn] command.

Finally, the next hop for all of the routes must point to the remote spoke. This is the key to
triggering the creation of a spoke-to-spoke tunnel. Let's examine this from Spoke-R2's
perspective. If we look at the routing table in a properly configured Phase 2 implementation, we
see the following:

Spoke-R2#show ip route eigrp


--- omitted ---
Gateway of last resort is not set
D 192.168.30.0/24 [90/28288000] via 172.16.1.3, 00:01:08, Tunnel0

Notice the next hop for 192.168.30.0/24 is 172.16.1.3, Spoke-R3 itself. The importance of this
becomes clear when we examine the CEF entries for 172.16.1.3, starting with the adjacency table:

Spoke-R2#show adjacency 172.16.1.3


Protocol Interface Address
IP Tunnel0 172.16.1.3(5) (incomplete)

Note the adjacency is incomplete. In order for Spoke-R2 to build the GRE header, it needs to
know the mapping of Spoke-R3’s NBMA address (30.1.1.1) to Tunnel IP (172.16.1.3). The
incomplete adjacency triggers a CEF punt to the CPU for further processing (to resolve the
address):

Spoke-R2#sh ip cef 172.16.1.3 internal


172.16.1.0/24, epoch 0, flags attached, connected, cover dependents, need deagg, RIB[C], refcount 5,
--- omitted ---
path F2EE8BD8, path list F3476CDC, share 1/1, type connected prefix, for IPv4
connected to Tunnel0, adjacency punt
output chain: punt

This causes Spoke-R2 to send a resolution request to Hub-R1 for Spoke-R3's NBMA address.
The request is forwarded from Hub-R1 to Spoke-R3, and Spoke-R3 replies directly to Spoke-R2
with its mapping information. During this process, Spoke-R2 continues to send the actual data
packets to Hub-R1 for delivery to Spoke-R3 so that traffic is not dropped while the resolution
completes. The first traceroute will look like this:

Spoke-R2#traceroute 192.168.30.1
Type escape sequence to abort.
Tracing the route to 192.168.30.1
VRF info: (vrf in name/id, vrf out name/id)
1 172.16.1.1 1 msec 5 msec 5 msec
2 172.16.1.3 1 msec 6 msec 0 msec

After the NHRP resolution is complete, Spoke-R2 can build a dynamic tunnel to Spoke-R3, and
traffic will not pass through Hub-R1 anymore. Spoke-R2 also caches the mapping information for
Spoke-R3 in its DMVPN table. Subsequent traceroutes look like this:

Spoke-R2#traceroute 192.168.30.1
Type escape sequence to abort.
Tracing the route to 192.168.30.1
VRF info: (vrf in name/id, vrf out name/id)
1 172.16.1.3 6 msec 5 msec 4 msec

Spoke-R2’s DMVPN table:

Spoke-R2#show dmvpn
--- omitted ---
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:2,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.1.1.1 172.16.1.1 UP 00:15:36 S
1 30.1.1.1 172.16.1.3 UP 00:05:25 D

This resolution process is all possible because Hub-R1 passes the routing update with the next
hop unmodified to the spokes. This is achieved in EIGRP using the no ip next-hop-self eigrp
123 command under the tunnel interface.

Below is the Phase 2 tunnel interface configuration on Hub-R1, followed by Spoke-R2's configuration:



Hub-R1#show run int tun0


interface Tunnel0
ip address 172.16.1.1 255.255.255.0
no ip redirects
no ip next-hop-self eigrp 123
no ip split-horizon eigrp 123
ip nhrp authentication cisco
ip nhrp map multicast dynamic
ip nhrp network-id 123
tunnel source Ethernet0/0
tunnel mode gre multipoint

Spoke-R2#show run int tun0


interface Tunnel0
ip address 172.16.1.2 255.255.255.0
no ip redirects
ip nhrp authentication cisco
ip nhrp network-id 123
ip nhrp nhs 172.16.1.1 nbma 10.1.1.1 multicast
tunnel source Ethernet0/0
tunnel mode gre multipoint

Because the next hop for each prefix must be preserved, in Phase 2 it is not possible to
summarize from the hub to the spokes. Doing so would recreate Phase 1 behavior, where the
spokes send all traffic to the hub (the next hop for the summary route), eliminating the
advantages of Phase 2's spoke-to-spoke tunnels.

Phase 3

Though DMVPN Phase 2 provides direct spoke-to-spoke tunnels, one of its limitations is that
the spokes must maintain full routing tables. Each route for a remote spoke network
needs to be a specific route with the next hop pointing to the remote spoke's tunnel address. This
prevents the hub from sending a summarized route down to the spokes for a more
concise routing table.

Phase 3 overcomes this restriction using NHRP traffic indication messages from the hub to
signal to the spokes that a better path exists to reach the target network. This functionality is
enabled by configuring ip nhrp redirect on the hub and ip nhrp shortcut on the spokes. The
redirect command tells the hub to send the NHRP traffic indication message while the shortcut
command tells the spokes to accept the redirect and install the shortcut route.

Here is the resultant topology diagram modified for Phase 3 implementation:



As was the case in Phase 1, the hub router is configured to send summarized routing information
down to the spokes.

Spoke-R2#sh ip route eigrp


--- omitted ---
Gateway of last resort is 172.16.1.1 to network 0.0.0.0
D* 0.0.0.0/0 [90/28160000] via 172.16.1.1, 00:01:03, Tunnel0

When Spoke-R2 sends traffic to the network behind Spoke-R3 (192.168.30.1):

1. The first packet is routed to Hub-R1 (following the summarized route).
2. Hub-R1 “hairpins” this traffic back onto the DMVPN network, which triggers the NHRP process on Hub-R1 to generate a traffic indication telling Spoke-R2 to resolve a better next hop for the remote network 192.168.30.1.
3. Spoke-R2 receives this traffic indication message and processes the redirect to Spoke-R3.

The output below is taken from debug nhrp packet on Spoke-R2:

*Feb 4 14:09:50.535: NHRP: Receive Traffic Indication via Tunnel0 vrf 0, packet size: 97
--- omitted ---
*Feb 4 14:09:50.543: (M) traffic code: redirect(0)

Spoke-R2 must now send a resolution request to Hub-R1 for the destination 192.168.30.1. This
message contains Spoke-R2's own NBMA-to-tunnel address mapping:

*Feb 4 14:25:54.672: NHRP: Send Resolution Request via Tunnel0 vrf 0, packet size:
85
*Feb 4 14:25:54.672: src: 172.16.1.2, dst: 192.168.30.1
--- omitted ---
*Feb 4 14:25:54.672: src NBMA: 20.1.1.1
*Feb 4 14:25:54.672: src protocol: 172.16.1.2, dst protocol: 192.168.30.1

Hub-R1 forwards this packet to Spoke-R3. Spoke-R3, using the above mapping information,
responds directly to Spoke-R2 with its own mapping information:

*Feb 4 14:25:54.677: NHRP: Receive Resolution Reply via Tunnel0 vrf 0, packet
size: 133
--- omitted ---
*Feb 4 14:25:54.677: client NBMA: 30.1.1.1
*Feb 4 14:25:54.677: client protocol: 172.16.1.3

NOTE: The above is only half of the process. Spoke-R3 will also send a resolution request to Hub-R1 for
Spoke-R2’s NBMA address. This gets forwarded to Spoke-R2. Spoke-R2 responds directly to Spoke-R3
with a resolution reply completing the process.

At this point, the spokes can now modify their routing table entries to reflect the NHRP shortcut
route and use it to reach the remote spoke.

NOTE: The behavior of modifying the routing table was implemented in IOS 15.2(1)T. This change
was made to allow the router to forward the traffic using CEF.

For example, on Spoke-R2 an (H) route for 192.168.30.0/24 is created in the routing table. Another
(H) route is installed for Spoke-R3's tunnel IP address (172.16.1.3/32); it is inserted under the
connected route for the tunnel interface because it is more specific:

Spoke-R2#sh ip route
--- omitted ---
Gateway of last resort is 172.16.1.1 to network 0.0.0.0
D* 0.0.0.0/0 [90/28160000] via 172.16.1.1, 00:16:35, Tunnel0
R 10.0.0.0/8 [120/1] via 20.1.1.2, 00:00:09, Ethernet0/0
20.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
C 20.1.1.0/24 is directly connected, Ethernet0/0
L 20.1.1.1/32 is directly connected, Ethernet0/0
R 30.0.0.0/8 [120/1] via 20.1.1.2, 00:00:09, Ethernet0/0
172.16.0.0/16 is variably subnetted, 3 subnets, 2 masks
C 172.16.1.0/24 is directly connected, Tunnel0
L 172.16.1.2/32 is directly connected, Tunnel0
H 172.16.1.3/32 is directly connected, 00:14:33, Tunnel0
192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C 192.168.20.0/24 is directly connected, Loopback1
L 192.168.20.1/32 is directly connected, Loopback1
H 192.168.30.0/24 [250/1] via 172.16.1.3, 00:14:33, Tunnel0

Similar to Phase 2, the first data packet will traverse the hub while the spokes resolve the proper
addresses. After the resolution is completed, data will be sent to the remote spokes directly,
bypassing the hub.

The DMVPN and EIGRP configuration in Phase 3 mirrors the Phase 1 configuration, with the
addition of the ip nhrp redirect/shortcut commands on the hub and spoke routers:

Hub-R1#show run int tun0


interface Tunnel0
ip address 172.16.1.1 255.255.255.0
no ip redirects
ip nhrp authentication cisco
ip nhrp map multicast dynamic
ip nhrp network-id 123
ip nhrp redirect
ip summary-address eigrp 123 0.0.0.0 0.0.0.0
tunnel source Ethernet0/0
tunnel mode gre multipoint

Spoke-R2#show run int tun0


interface Tunnel0
ip address 172.16.1.2 255.255.255.0
no ip redirects
ip nhrp authentication cisco
ip nhrp network-id 123
ip nhrp nhs 172.16.1.1 nbma 10.1.1.1 multicast
ip nhrp shortcut
tunnel source Ethernet0/0
tunnel mode gre multipoint

Summary

This blog examined the basic configuration of DMVPN with routing while highlighting the
shortcomings of each initial DMVPN implementation. Phase 1 provided an optimized control
plane on the spokes through summarization of routing information; however, its major limitation
was the inability to create direct spoke-to-spoke tunnels and optimize the data plane. Phase 2
improved on this by allowing spokes to dynamically form spoke-to-spoke tunnels, but at the cost
of burdening the spokes with specific routes for all remote destinations. Phase 3 eliminated
these limitations using the NHRP shortcut switching enhancements, optimizing both the data
and control planes.

The configurations in this blog reflect the minimum configuration required to create a working
solution and are intended for basic academic understanding. In a real implementation, the
configuration needs to be fine-tuned on a per-case basis for the most optimized performance.
