Table of Contents
• OSPF over DMVPN
• Considerations for the DMVPN Design
• Dual Hub Single Cloud using OSPF
• DMVPN Dual Hub Single Cloud with route redistribution, failover and symmetrical routing
• Dual Hub Dual Cloud using OSPF
• Dual Hub Single Cloud with EIGRP, using failover and load balancing
Introduction
Date: 30-9-2016
In today's network environment, redundancy is one of the most important aspects, whether
it’s on the LAN side or on the WAN side.
DMVPN combines three main components:
1. Dynamic Routing.
2. mGRE Tunnels.
3. Tunnel Protection – IPSec encryption that protects the GRE tunnel and data.
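As a minimal illustration of the third component, the GRE tunnel can be protected with an IPSec profile. The policy numbers, key, and profile name below are placeholders of my own, not values from this lab (which, as noted later, omits IPSec for simplicity):

crypto isakmp policy 10
 encryption aes
 hash sha
 authentication pre-share
 group 2
crypto isakmp key DMVPN_KEY address 0.0.0.0 0.0.0.0
crypto ipsec transform-set TSET esp-aes esp-sha-hmac
 mode transport
crypto ipsec profile DMVPN_PROFILE
 set transform-set TSET
interface Tunnel0
 tunnel protection ipsec profile DMVPN_PROFILE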
The disadvantage of a single hub router is that it’s a single point of failure. Once your hub
router fails, the entire DMVPN network is gone.
In the dual hub single cloud option, both hubs share a single DMVPN network:
- Spokes are configured with multiple NHSs (and mappings) on their mGRE tunnel interface.
The dual cloud option also has two hubs but we will use two DMVPN networks, which means
that all spoke routers will get a second multipoint GRE interface.
- Two tunnel interfaces on the Spokes but one tunnel interface on each Hub.
This second option is generally the better choice. Its major advantage over the first option is that spokes can be load-balanced between the hubs.
NOTE In the real world there will be many spokes and hubs, but if you understand how to implement a basic scenario with two hubs and two spokes, you will be able to split the different portions of the topology and choose the best design for your network.
DMVPN requires a single subnet, so all OSPF routers would have to be in the same area.
Summarization is only available on area border routers (ABRs) and autonomous system
boundary routers (ASBRs), which means that the hub must be an ABR for it to summarize
routes. Misconfiguring the designated router (DR) or backup designated router (BDR) role
would also break the connectivity. Any form of traffic engineering is very difficult in a link
state protocol such as OSPF.
For small scale DMVPN deployments, running OSPF may be acceptable. Large scale
implementations will either run EIGRP or BGP.
2) In Phase 1 the OSPF network type is not important (just do not configure point-to-point); the hub is always the next hop, and we can filter specific routes from the RIB of each spoke (provided the spokes have only one direct path to the hub and no other paths connecting them across the DMVPN cloud).
NOTE Make sure the MTU values match between tunnel interfaces (“ip mtu 1400 / ip tcp adjust-mss 1360”).
Consider the OSPF scalability limitation (roughly 50 routers per area). OSPF requires much more tweaking for large-scale deployments.
When using OSPF on a DMVPN a choice has to be made about where to place area 0. There
are three options:
• Area 0 behind the hub; a non-zero area across the DMVPN and at the sites.
• Area 0 on the DMVPN; a unique non-zero area at each spoke site.
• Area 0 everywhere.
The third option has the worst scaling properties and the highest chance of control plane instability. It’s not recommended.
NOTE Because OSPF is link state, there is no way to use a concept like an offset list to selectively modify the cost of a few intra-area routes: the link-state database must be identical on all routers that belong to the same area, so any change to the cost between routers would impact all of them.
In the following design, you can isolate the Main Site (Headquarters) from the branches (SOHO, etc.)
• Internal LAN on R1=8.8.8.8/32
• Internal LAN on SpokeA=1.1.1.0/24
• Internal LAN on SpokeB=2.2.2.0/24
In this case, regardless of the tunnel interface bandwidth, R1 will have two equal-metric routes to reach both hubs. So, I’ll use the same bandwidth for the tunnel interface on both hubs. Later we can tune the metric for symmetrical routing.
NOTE Remember, I’ll use the command bandwidth 1000 because the guaranteed bandwidth
specified by the ISP is 1 Mbps.
To guarantee that HubA is the DR for OSPF, I’ll give it the highest priority: 2 is the highest priority in this topology. It is also important to boot up HubA first, then HubB, and finally the other routers.
NOTE Priority 1 is the default and priority 0 keeps the router from becoming eligible to be
elected as a DR/BDR.
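As a hedged sketch, the priorities translate into the following interface commands (the interface numbers are assumed):

HubA
interface Tunnel0
 ip ospf priority 2
HubB
interface Tunnel0
 ip ospf priority 1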
For the sake of simplicity, I won’t use IPSec. This design is Naked DMVPN Phase 2.
I would like you to implement this topology, but I’ll give you some advice.
R1
Use a loopback interface for the internal LAN (a /24 is better) and advertise it along with the FastEthernet interface using EIGRP.
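A possible R1 configuration, as a hedged sketch (the EIGRP AS number and the 192.168.0.3 address are my assumptions, not taken from the lab):

interface Loopback0
 ip address 8.8.8.8 255.255.255.0
interface FastEthernet0/0
 ip address 192.168.0.3 255.255.255.0
 no shutdown
router eigrp 1
 network 8.8.8.0 0.0.0.255
 network 192.168.0.0 0.0.0.255
 no auto-summary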
HubA
no auto-summary
router ospf 1
log-adjacency-changes
passive-interface default
no passive-interface Tunnel0
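The tunnel interface itself is not shown above. For reference, a hub mGRE tunnel in this single-cloud design typically looks like the following hedged sketch (the 10.0.0.0/24 tunnel subnet, NHRP ID, and source interface are assumptions):

interface Tunnel0
 bandwidth 1000
 ip address 10.0.0.1 255.255.255.0
 ip mtu 1400
 ip tcp adjust-mss 1360
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip ospf network broadcast
 ip ospf priority 2
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1

HubB would mirror this on the same tunnel subnet with its own address and a lower priority.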
HubB
For the DR election process, add the following commands to the tunnel interface:
NOTE For the spokes, use a single tunnel interface pointing to both hubs. Use priority 0 to prevent them from becoming the DR/BDR.
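A hedged sketch of such a spoke tunnel, with both hubs registered as next-hop servers (the tunnel and NBMA addresses are placeholders):

interface Tunnel0
 bandwidth 1000
 ip address 10.0.0.3 255.255.255.0
 ip mtu 1400
 ip tcp adjust-mss 1360
 ip nhrp map 10.0.0.1 192.0.0.1
 ip nhrp map multicast 192.0.0.1
 ip nhrp map 10.0.0.2 192.0.0.2
 ip nhrp map multicast 192.0.0.2
 ip nhrp nhs 10.0.0.1
 ip nhrp nhs 10.0.0.2
 ip nhrp network-id 1
 ip ospf network broadcast
 ip ospf priority 0
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1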
Verifying DMVPN
HubA#sh dmvpn
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
--output omitted--
So, there are two equal-metric routes to 1.1.1.1 and 2.2.2.2 (asymmetrical routing):
R1#traceroute 1.1.1.1
1 192.168.0.2 80 msec
192.168.0.1 48 msec
192.168.0.2 36 msec
--output omitted--
Tuning the metric on HubB for packets from 8.8.8.8/32 to 1.1.1.0/24 and 2.2.2.0/24:
--output omitted--
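The exact commands are omitted above. As a hedged illustration only (the EIGRP AS number, access-list number, and interface are assumptions), one way to make R1 prefer HubA for the spoke LANs is an outbound offset list on HubB toward R1:

access-list 10 permit 1.1.1.0 0.0.0.255
access-list 10 permit 2.2.2.0 0.0.0.255
router eigrp 1
 offset-list 10 out 100000 FastEthernet0/0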
R1#traceroute 1.1.1.1
Tuning the metric on HubB for packets from the spokes’ LANs (the loopback interface simulates the internal LAN) to 8.8.8.8/32:
router ospf 1
--output omitted--
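Again, the exact commands are omitted. A hedged example of what could sit under router ospf 1 on HubB (the EIGRP AS number and metric value are assumptions) is a higher seed metric when redistributing the EIGRP-learned route into OSPF, so the spokes prefer HubA, which redistributes with the default metric:

router ospf 1
 redistribute eigrp 1 metric 100 subnets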
SpokeA#traceroute 8.8.8.8
Type escape sequence to abort.
Testing failover
HubA#conf t
HubA(config)#int fa0/0
HubA(config-if)#shut
--output omitted--
R1#sh ip route
--output omitted--
NOTE There may be scenarios where you need to tune the default timers to speed up network convergence during a hardware failure. By default, the timers on broadcast and point-to-point networks (which include Ethernet) are 10 seconds hello and 40 seconds dead. The timers on non-broadcast and point-to-multipoint networks are 30 seconds hello and 120 seconds dead.
In this topology, use the following timers on every tunnel interface to speed up network
convergence:
ip ospf hello-interval 1
ip ospf dead-interval 4
NOTE Even so, when shutting down interfaces in GNS3, adjacencies can remain in the FULL state (FULL/..) for several minutes, unless you shut/no shut the tunnel interfaces on the spokes after shutting down or bringing back up the physical interfaces on the hubs.
1. Each Hub connects to only a single cloud. The spokes connect to both clouds instead of
just one.
2. Because we now have two tunnels that the spokes can choose from, one to each hub, we can start modifying routing metrics on each tunnel to influence which path is taken (before, we only had one choice).
As the documentation states, this setup is a little trickier to configure, but it allows more control over where we want our routes to go.
• There are two DMVPN clouds – 10.0.0.0/24 (Primary DMVPN Cloud) and 20.0.0.0/24
(Secondary cloud).
• Only one tunnel interface for each hub, two tunnel interfaces for each spoke.
• The NHRP network IDs and tunnel keys on the hubs should be different.
• Each hub will be the DR for its own cloud.
NOTE You can use point-to-point GRE (ppGRE) or mGRE on the spokes. I’ll use mGRE.
HubA
interface Tunnel0
description PRIMARY CLOUD
bandwidth 1000
ip nhrp network-id 1
tunnel key 1
ip ospf hello-interval 1
ip ospf priority 1
ip ospf 1 area 1
router ospf 1
log-adjacency-changes
passive-interface default
no passive-interface FastEthernet0/1
no passive-interface Tunnel0
HubB
interface Tunnel0
bandwidth 1000
ip address 20.0.0.1 255.255.255.0
ip nhrp network-id 2
tunnel key 2
ip ospf hello-interval 1
ip ospf priority 1
ip ospf 1 area 1
router ospf 1
log-adjacency-changes
passive-interface default
no passive-interface FastEthernet0/1
no passive-interface Tunnel0
SpokeA
interface Tunnel0
bandwidth 1000
ip nhrp network-id 1
tunnel key 1
ip ospf hello-interval 1
ip ospf priority 0
ip ospf 1 area 1
interface Tunnel1
bandwidth 1000
ip nhrp network-id 2
tunnel key 2
ip ospf hello-interval 1
ip ospf priority 0
ip ospf 1 area 1
router ospf 1
log-adjacency-changes
passive-interface default
no passive-interface Tunnel0
no passive-interface Tunnel1
SpokeB
interface Tunnel0
bandwidth 1000
ip nhrp network-id 1
tunnel key 1
ip ospf hello-interval 1
ip ospf priority 0
ip ospf 1 area 1
tunnel source FastEthernet0/0
interface Tunnel1
bandwidth 1000
ip nhrp network-id 2
tunnel key 2
ip ospf hello-interval 1
ip ospf priority 0
ip ospf 1 area 1
router ospf 1
log-adjacency-changes
passive-interface default
no passive-interface Tunnel0
no passive-interface Tunnel1
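The spoke excerpts above omit the NHRP mappings. In this dual-cloud design each spoke tunnel points to only one hub as its NHS; a hedged sketch for SpokeA (the tunnel and NBMA addresses are placeholders):

interface Tunnel0
 ip address 10.0.0.3 255.255.255.0
 ip nhrp map 10.0.0.1 192.0.0.1
 ip nhrp map multicast 192.0.0.1
 ip nhrp nhs 10.0.0.1
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint
interface Tunnel1
 ip address 20.0.0.3 255.255.255.0
 ip nhrp map 20.0.0.1 192.0.0.2
 ip nhrp map multicast 192.0.0.2
 ip nhrp nhs 20.0.0.1
 tunnel source FastEthernet0/0
 tunnel mode gre multipoint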
DMVPN Verification
HubA#sh ip ospf neighbor
--output omitted--
So, asymmetrical routing and failover are working (see the outputs above), and load balancing is in place.
Tuning the Metric
Lowering the tunnel bandwidth on the secondary cloud raises its OSPF cost, so both directions prefer the primary cloud through HubA and routing becomes symmetrical:
HubB
int tu0
bandwidth 900
Spokes
int tu1
bandwidth 900
Final note
This article concludes my contribution on this broad technology. Nonetheless, I propose that you implement the following topology (using EIGRP):
Dual Hub Single Cloud with EIGRP, using failover and load
balancing
This is one of my favourite GNS3 topologies. I’ve seen many articles about DMVPN, but none of them mention how to provide redundancy at the headquarters site. I focus not only on the DMVPN configuration but also on configuring redundancy using HSRP in the internal network.
A single DMVPN network is configured for this design. The spoke routers will use only one
multipoint GRE interface and the second hub is configured as a next hop server.
This is a real lab, except for the INTERNET portion of the topology.
NOTE Normally you would connect the two hub routers and two spokes to different
ISPs and you would use different public IP addresses. For the sake of simplicity, I
connected all routers to the 192.0.0.0/24 subnet using a simple switch.
Prerequisites
• Two different internal networks 10.1.2.0/24 and 10.1.3.0/24 for each hub.
• 10.1.2.0/24 clients use HUB1 for Internet access and 10.1.3.0/24 clients use HUB2, but both networks need Internet access in case of a failure on the ISP.
• No primary link for the Internet. So, we can achieve load balancing with this design.
• HUB1 and HUB2 coexist in the same site.
• Only one tunnel interface on each hub and spoke (single DMVPN cloud).
• Tune the EIGRP routing protocol to avoid asymmetrical routing.
For failover, I configured HSRP with interface tracking on both hubs. Also, each hub acts as
a DHCP server for both internal networks.
HUB1
ip dhcp excluded-address 10.1.2.248 10.1.2.250
ip dhcp excluded-address 10.1.3.248 10.1.3.250
default-router 10.1.2.250
default-router 10.1.3.250
interface FastEthernet0/0
no ip address
no shut
speed 100
full-duplex
interface FastEthernet0/0.2
encapsulation dot1Q 2
standby 1 ip 10.1.2.250
standby 1 preempt
interface FastEthernet0/0.3
encapsulation dot1Q 3
standby 1 ip 10.1.3.250
standby 1 preempt
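The HUB1 excerpt leaves out the DHCP pools and the HSRP priority and tracking lines. A hedged sketch of those missing pieces, complementing the subinterface configuration above (pool names, interface addresses, priorities, and the tracked interface are assumptions):

ip dhcp pool VLAN2
 network 10.1.2.0 255.255.255.0
 default-router 10.1.2.250
ip dhcp pool VLAN3
 network 10.1.3.0 255.255.255.0
 default-router 10.1.3.250
interface FastEthernet0/0.2
 ip address 10.1.2.248 255.255.255.0
 standby 1 priority 110
 standby 1 track FastEthernet0/1 20
interface FastEthernet0/0.3
 ip address 10.1.3.248 255.255.255.0
 standby 1 priority 90
 standby 1 track FastEthernet0/1 20

With the priorities reversed on HUB2, each hub is active for one VLAN and backs up the other.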
In the real world, high availability is needed, so you can use two similar switches with three interfaces forming an EtherChannel connection. Remember, only the “on” mode is available in GNS3.
For load balancing I also used HSRP, distributing the access ports as evenly as possible between the two VLANs. I used EIGRP as the routing protocol, so you can tune this protocol to achieve load balancing (use offset lists for this).
Thanks to Adeolu Owokade for his great articles about these topics.