DMVPN

Overlay Tunnels

An overlay network is a logical or virtual network built over a physical transport network referred to as an underlay network.

Examples of overlay tunneling technologies include the following:

  • GRE
  • IPsec
  • LISP
  • VXLAN
  • MPLS

A virtual private network (VPN) is an overlay network that allows private networks to communicate with each other across an untrusted network such as the Internet. VPN data sent across an unsecure network needs to be encrypted to ensure that it is not viewed or tampered with by an attacker. The most commonly used VPN encryption framework is IP Security (IPsec).

Private networks typically use RFC 1918 address space (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16), which is not routable across the Internet. To be able to create VPNs between private networks, a tunneling overlay technology is necessary, and the most commonly used one is GRE.

MPLS tunneling is not supported across the Internet unless it is tunneled within another tunneling protocol, such as GRE, which can then be encrypted with IPsec (MPLS over GRE over IPsec).

GRE Tunnels

GRE is a tunneling protocol that provides connectivity to a wide variety of network-layer protocols by encapsulating and forwarding packets over an IP-based network. Its most important application is the creation of VPNs.

  • When a router encapsulates a packet for a GRE tunnel, it adds new header information (known as encapsulation) to the packet, which contains the remote endpoint IP address as the destination.
  • The new IP header information allows the packet to be routed between the two tunnel endpoints without inspection of the packet’s payload.
  • After the packet reaches the remote endpoint, the GRE headers are removed (known as de-encapsulation), and the original packet is forwarded out the remote router.

GRE tunnels support IPv4 or IPv6 addresses as an underlay or overlay network.

IP Packet before and after GRE Headers

GRE Tunnel Configuration

  1. Create the tunnel interface. interface tunnel tunnel-number
  2. Identify the local source of the tunnel. tunnel source {ip-address | interface-id}
    • The tunnel source interface indicates the interface that will be used for encapsulation and de-encapsulation of the GRE tunnel.
    • The tunnel source can be a physical interface or a loopback interface.
      • A loopback interface can provide reachability if one of the transport interfaces fails.
  3. Identify the remote destination IP address. tunnel destination ip-address
    • The tunnel destination is the remote router’s underlay IP address toward which the local router sends GRE packets.
  4. Allocate an IP address to the tunnel interface.
  5. (Optional) Define the tunnel bandwidth. bandwidth [1-10000000], in kilobits per second
    • Virtual interfaces do not have the concept of latency and need to have a reference bandwidth configured so that routing protocols that use bandwidth for best-path calculation can make an intelligent decision.
  6. (Optional) Specify a GRE tunnel keepalive. keepalive [seconds [retries]]
    • Tunnel interfaces are GRE point-to-point (P2P) by default, and the line protocol enters an up state when the router detects that a route to the tunnel destination exists in the routing table. If the tunnel destination is not in the routing table, the tunnel interface (line protocol) enters a down state.
    • Tunnel keepalives ensure that bidirectional communication exists between tunnel endpoints to keep the line protocol up. Otherwise, the router must rely on routing protocol timers to detect a dead remote endpoint.
  7. (Optional) Define the IP MTU for the tunnel interface. ip mtu mtu
    • The GRE tunnel adds a minimum of 24 bytes to the packet size to accommodate the headers that are added to the packet.
    • Specifying the IP MTU on the tunnel interface has the router perform fragmentation in advance, rather than relying on the host to detect and adjust the packet MTU.
[ R1 ]
interface Tunnel100
 bandwidth 4000
 ip address 192.168.100.1 255.255.255.0
 ip mtu 1400
 keepalive 5 3
 tunnel source GigabitEthernet0/1
 tunnel destination 100.64.2.2
!
router ospf 1
 router-id 1.1.1.1
 network 10.1.1.1 0.0.0.0 area 1
 network 192.168.100.1 0.0.0.0 area 0
!
ip route 0.0.0.0 0.0.0.0 100.64.1.2

[ R2 ]
interface Tunnel100
 bandwidth 4000
 ip address 192.168.100.2 255.255.255.0
 ip mtu 1400
 keepalive 5 3
 tunnel source GigabitEthernet0/1
 tunnel destination 100.64.1.1
!
router ospf 1
 router-id 2.2.2.2
 network 10.2.2.0 0.0.0.255 area 2
 network 192.168.100.2 0.0.0.0 area 0
!
ip route 0.0.0.0 0.0.0.0 100.64.2.1

Notice that from R1’s perspective, the network is only one hop away. The traceroute does not display all the hops in the underlay.
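As an illustrative sketch (the host address 10.2.2.2 is an assumed host on R2's 10.2.2.0/24 LAN, not part of the configurations above), the traceroute from R1 might look like this:

```
R1# traceroute 10.2.2.2 source 10.1.1.1
Tracing the route to 10.2.2.2
  1 192.168.100.2 1 msec * 0 msec   <- R2's tunnel IP; the ISP hops in the underlay are hidden
```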

In the same fashion, the packet’s time-to-live (TTL) is encapsulated as part of the payload. The original TTL decreases by only one for the GRE tunnel, regardless of the number of hops in the transport network.

During GRE encapsulation, the default GRE TTL value is 255. The interface command tunnel ttl <1-255> is used to change the GRE TTL value.

Encapsulation Overhead for Tunnels
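As a rough sketch of the arithmetic (assuming a standard 1500-byte Ethernet MTU and plain GRE over IPv4):

```
Outer IPv4 header:        20 bytes
GRE header:                4 bytes   (+4 bytes more if a tunnel key is configured)
--------------------------------
Minimum GRE overhead:     24 bytes   -> largest unfragmented payload: 1500 - 24 = 1476 bytes

The conservative values used elsewhere in these notes leave headroom for IPsec:
ip mtu 1400              (reserves 100 bytes for GRE + IPsec headers)
ip tcp adjust-mss 1360   (1400 - 20-byte IP header - 20-byte TCP header)
```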

Recursive Routing

Explicit care must be taken when using a routing protocol on a network tunnel. If a router tries to reach the remote router’s encapsulating interface (transport IP address) via the tunnel (overlay network), problems will occur. This is a common issue when the transport network is advertised into the same routing protocol that runs on the overlay network.

For example, say that a network administrator accidentally adds the 100.64.0.0/16 Internet-facing interfaces to OSPF on R1 and R2. The ISP routers are not running OSPF, so an adjacency does not form, but R1 and R2 advertise the Internet-facing IP addresses to each other over the GRE tunnel via OSPF, and since they would be more specific than the configured default static routes, they would be preferred and installed on the routing table. The routers would then try to use the tunnel to reach the tunnel endpoint address, which is not possible. This scenario is known as recursive routing.
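As a hypothetical sketch of that mistake on R1 (the network statement below is the error, not part of the working configuration), IOS detects the recursion and flaps the tunnel, typically logging messages similar to the following:

```
router ospf 1
 network 100.64.1.1 0.0.0.0 area 0   ! Mistake: transport (underlay) address advertised into the overlay IGP
!
%TUN-5-RECURDOWN: Tunnel100 temporarily disabled due to recursive routing
%LINEPROTO-5-UPDOWN: Line protocol on Interface Tunnel100, changed state to down
```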

DMVPN Overview

DMVPN (Dynamic Multipoint VPN) is a technology we can use to build a VPN network with multiple sites without having to statically configure all devices. It is a hub-and-spoke network in which the spokes can communicate with each other directly without having to go through the hub.

It is considered a zero-touch technology because no configuration is needed on the DMVPN hub routers as new spokes are added to the DMVPN network.

Pieces of a DMVPN:

  • Multipoint GRE (mGRE)
  • NHRP
  • Routing
  • IPsec (not required, but recommended)

Summary

  • DMVPN is a combination of mGRE, the Next-Hop Resolution Protocol (NHRP), and IPsec (optional).
  • DMVPN can be implemented as Phase 1, Phase 2, or Phase 3.
  • There are two GRE flavors:
    • GRE
    • mGRE
  • GRE
    • It’s a point-to-point logical link configured with a tunnel source and a tunnel destination.
    • The tunnel source can either be an IP address or an interface.
    • When a tunnel destination is configured, it ties the tunnel to a specific endpoint. This makes a GRE tunnel a P2P tunnel. If there are 200 endpoints, each endpoint would need to configure 199 GRE tunnels.
  • mGRE
    • With Multipoint GRE, the configuration includes the tunnel source and tunnel mode.
    • Tunnel destination is not configured. Therefore, the tunnel can have many endpoints, and only a single tunnel interface is utilized.
    • The remote endpoints can be configured as either GRE or mGRE tunnels.

But what if the spokes need to communicate with each other, especially given the NBMA nature of mGRE? How would we accomplish that? NHRP.

  • NHRP is used by the spokes connected to an NBMA network to determine the NBMA IP addresses of other spoke routers.
  • NHRP allows next-hop clients (NHCs) to dynamically register their NBMA-IP-address to tunnel-IP-address mappings with the next-hop server (NHS).
  • This allows the NHCs to join the NBMA network without the NHS needing to be reconfigured.
    • This means that when a new NHC is added to the NBMA network, none of the NHCs or the NHSs need to be reconfigured.

Multipoint GRE

  • Our “regular” GRE tunnels are point-to-point and don’t scale well.
    • When we configure P2P GRE tunnels, we have to set a SRC and DST IP address that are used to build the GRE tunnel.
    • We have to create multiple tunnel interfaces, set the source/destination IP addresses etc. It will work but it’s not a very scalable solution.
    • Multipoint GRE, as the name implies, allows us to have multiple destinations.
  • When we use Multipoint GRE, there will be only one tunnel interface on each router.
    • We use mGRE so we can have multiple destinations.
    • When we need to tunnel something between spokes, new tunnels are built automatically, so traffic can be tunneled directly instead of being sent through the hub router.

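A minimal mGRE interface sketch (the addresses and interface names are illustrative) highlights the key difference from P2P GRE: no tunnel destination is configured:

```
interface Tunnel0
 ip address 192.168.0.1 255.255.255.0
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint   ! no "tunnel destination" -> one interface, many endpoints
```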
When we configure a regular point-to-point GRE tunnel, we configure the source and destination IP addresses that are used to build the GRE tunnel. With a Multipoint GRE tunnel, when two spoke routers want to tunnel some traffic, how do they know which IP addresses to use?

Example:

Let’s say we want to ping from Branch 1’s tunnel interface to Branch 2’s tunnel interface. The GRE-encapsulated IP packet will look like the one below.

The inner source and destination IP addresses are known to us; these are the IPs of the tunnel interfaces. We encapsulate this IP packet, add a GRE header in front of it, and then we have to fill in the outer source and destination IP addresses so that this packet can be routed on the Internet. The Branch 1 router knows its own public IP address but has no clue what the public IP address of the Branch 2 router is. The solution: NHRP.

NHRP

  • One router is the NHRP server.
  • All other routers are NHRP clients.
  • NHRP clients register themselves with the NHRP server and report their public IP address (NBMA address).
  • The NHRP server keeps track of all public IP addresses (NBMA) in its cache.
  • When one spoke router wants to tunnel something to another spoke, it asks the NHRP server for the public IP address of the other spoke router.

Things to remember:

  • The IP address of the hub router is statically configured on the spoke routers.
  • The hub router dynamically accepts spoke routers.
  • Spoke routers use NHRP Registration Request messages to register their public IP addresses with the hub.
  • The hub (NHRP server) creates a mapping between the Public IP addresses and the IP address of the tunnel interfaces.
    • Example: when spoke1 wants to send something to spoke2, spoke1 needs to figure out the destination public IP address of spoke2, so it sends an NHRP Resolution Request, asking the hub router to provide the public IP of spoke2.
    • The hub router checks its cache, finds an entry for spoke2, and sends an NHRP Resolution Reply to spoke1 with the public IP address of spoke2.
    • Spoke1 now knows the DST Public IP of spoke2 and is able to tunnel traffic directly.

Summary

  • DMVPN uses mGRE tunnels and therefore requires a method of mapping tunnel IP addresses to the transport (underlay) IP address. NHRP provides the technology for mapping those IP addresses.
  • DMVPN spokes (NHCs) are statically configured with the IP addresses of the hubs (NHSs) so that they can register their tunnel and NBMA (transport) IP addresses with the hubs.
  • When a spoke-to-spoke tunnel is established, NHRP messages provide the necessary information for the spokes to locate each other so that they can build a spoke-to-spoke DMVPN tunnel.
  • The NHRP messages also allow a spoke to locate a remote network.
  • NHRP message types: Registration, Resolution, Redirect, Purge and Error.

NHRP Message Types

  • Registration: Registration messages are sent by the NHC (DMVPN spoke) toward the NHS (DMVPN hub). Registration allows the hubs to know a spoke’s NBMA information. The NHC also specifies the amount of time that the registration should be maintained by the NHS, along with other attributes.
  • Resolution: Resolution messages are NHRP messages used to locate and provide the address resolution information of the egress router toward the destination. A Resolution Request is sent during the actual query, and a Resolution Reply provides the tunnel IP address and the NBMA IP address of the remote spoke.
  • Redirect: Redirect messages are an essential component of DMVPN Phase 3. They allow an intermediate router to notify the encapsulator (a router) that a specific network can be reached by using a more optimal path (spoke-to-spoke tunnel).
  • Purge: Purge messages are sent to remove a cached NHRP entry and to notify routers of the loss of a route used by NHRP. A purge is typically sent by an NHS to an NHC to indicate that a mapping for an address/network that it answered is no longer valid (for example, if the network is unreachable from the original station or has moved).
  • Error: Error messages are used to notify the sender of an NHRP packet that an error has occurred.

DMVPN Configuration

There are two types of DMVPN configurations, hub and spoke, used depending on a router’s role. The DMVPN hub is the NHRP NHS, and the DMVPN spoke is the NHRP NHC. The spokes should be preconfigured with the hub’s static IP address, but a spoke’s NBMA IP address can be static or assigned from DHCP.

Hub Configuration

  1. Create the tunnel interface. interface tunnel tunnel-number
  2. Identify the local source of the tunnel. tunnel source {ip-address | interface-id}
    • The tunnel source depends on the transport type. The encapsulating interface can be a logical interface such as a loopback or a subinterface.
  3. Configure the DMVPN tunnel as an mGRE tunnel. tunnel mode gre multipoint
  4. Allocate an IP address for the DMVPN network (tunnel).
    • The subnet mask or size of the network should accommodate the total number of routers that are participating in the DMVPN tunnel. All the DMVPN tunnels in this book use /24, which accommodates 254 routers.
  5. Enable NHRP on the tunnel interface and uniquely identify the DMVPN tunnel. ip nhrp network-id 1-4294967295
    • The NHRP network ID is locally significant and is used to identify a DMVPN cloud on a router because multiple tunnel interfaces can belong to the same DMVPN cloud.
    • It is recommended that the NHRP network ID match on all routers participating in the same DMVPN network.
  6. Define the tunnel key. tunnel key 0-4294967295
    • Helps identify the DMVPN virtual tunnel interface if multiple tunnel interfaces use the same tunnel source interface as defined in step 2.
    • Tunnel keys, if configured, must match for a DMVPN tunnel to be established between two routers.
    • The tunnel key adds 4 bytes to the DMVPN header.
  7. Enable multicast support for NHRP. ip nhrp map multicast dynamic
    • To support multicast or routing protocols that use multicast, enable this on DMVPN hub routers.
    • NHRP provides a mapping service of the protocol (tunnel IP) address to the NBMA (transport) address for multicast packets, too.
  8. (Phase 3) Enable NHRP redirect functions. ip nhrp redirect
  9. Define the tunnel bandwidth. bandwidth [1-10000000]
    • Virtual interfaces do not have the concept of latency and need to have a reference bandwidth configured so that routing protocols that use bandwidth for best-path calculation can make intelligent decisions. Bandwidth is also used for QoS configuration on the interface.
  10. Configure the IP MTU for the tunnel interface. ip mtu mtu
    • Typically an MTU of 1400 is used for DMVPN tunnels to account for the additional encapsulation overhead.
  11. Define the TCP maximum segment size (MSS). ip tcp adjust-mss mss-size
    • The TCP Adjust MSS feature ensures that the router will edit the payload of a TCP three-way handshake if the MSS exceeds the configured value.
    • Typically DMVPN interfaces use a value of 1360 to accommodate IP, GRE, and IPsec headers.

Spoke Configuration

  1. Create the tunnel interface. interface tunnel tunnel-number
  2. Identify the local source of the tunnel. tunnel source {ip-address | interface-id}
  3. Identify the tunnel destination or configure the DMVPN tunnel as an mGRE tunnel.
    • (Phase 1) tunnel destination ip-address
      • The tunnel destination is the DMVPN hub IP (NBMA) address that the local router uses to establish the DMVPN tunnel.
    • (Phase 2 & 3) tunnel mode gre multipoint
  4. Allocate an IP address for the DMVPN network (tunnel).
  5. Enable NHRP on the tunnel interface and uniquely identify the DMVPN tunnel for the virtual interface. ip nhrp network-id 1-4294967295
  6. Define the NHRP tunnel key. tunnel key 0-4294967295
  7. Specify the address of one or more NHRP NHSs. ip nhrp nhs nhs-address nbma nbma-address [multicast]
    • The multicast keyword provides multicast mapping functions in NHRP and is required to support the following routing protocols: RIP, EIGRP, and Open Shortest Path First (OSPF).
  8. Define the tunnel bandwidth. bandwidth [1-10000000]
  9. Define the tunnel IP MTU. ip mtu mtu
  10. Define the tunnel TCP MSS. ip tcp adjust-mss mss-size
  11. (Phase 3) Enable the NHRP shortcut function. ip nhrp shortcut

Note: Remember that the NBMA address is the transport IP address, and the NHS address is the tunnel address for the DMVPN hub. This is the hardest concept for most network engineers to remember.

/* R11-Hub */
interface Tunnel100
 bandwidth 4000
 ip address 192.168.100.11 255.255.255.0
 ip mtu 1400
 ip nhrp map multicast dynamic
 ip nhrp network-id 100
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 100

/* R31-Spoke (Single Command NHRP Configuration) */
interface Tunnel100
 bandwidth 4000
 ip address 192.168.100.31 255.255.255.0
 ip mtu 1400
 ip nhrp network-id 100
 ip nhrp nhs 192.168.100.11 nbma 172.16.11.1 multicast
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 100

/* R41-Spoke (Multi-Command NHRP Configuration) */
interface Tunnel100
 bandwidth 4000
 ip address 192.168.100.41 255.255.255.0
 ip mtu 1400
 ip nhrp network-id 100
 ip nhrp map 192.168.100.11 172.16.11.1
 ip nhrp map multicast 172.16.11.1
 ip nhrp nhs 192.168.100.11
 ip tcp adjust-mss 1360
 tunnel source GigabitEthernet0/1
 tunnel mode gre multipoint
 tunnel key 100

Viewing DMVPN Tunnel Status

The command show dmvpn [detail] provides the tunnel interface, tunnel role, tunnel state, and tunnel peers with uptime.

When the DMVPN tunnel interface is administratively shut down, there are no entries associated with that tunnel interface.

These are the tunnel states, in order of establishment:

  • INTF: The line protocol of the DMVPN tunnel is down.
  • IKE: DMVPN tunnels configured with IPsec have not yet successfully established an Internet Key Exchange (IKE) session.
  • IPsec: An IKE session has been established, but an IPsec security association (SA) has not yet been established.
  • NHRP: The DMVPN spoke router has not yet successfully registered.
  • Up: The DMVPN spoke router has registered with the DMVPN hub and received an ACK (positive registration reply) from the hub.
R11-Hub# show dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
         N - NATed, L - Local, X - No Socket
         T1 - Route Installed, T2 - Nexthop-override
         C - CTS Capable
         # Ent --> Number of NHRP entries with same NBMA peer
         NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
         UpDn Time --> Up or Down Time for a Tunnel
==========================================================================

Interface: Tunnel100, IPv4 NHRP Details
Type:Hub, NHRP Peers:2,

 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 172.16.31.1      192.168.100.31    UP 00:05:26     D
     1 172.16.41.1      192.168.100.41    UP 00:05:26     D
R31-Spoke# show dmvpn
! Output omitted for brevity
Interface: Tunnel100, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,

 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 172.16.11.1      192.168.100.11    UP 00:05:26     S

Viewing the NHRP Cache

The information that NHRP provides is a vital component of the operation of DMVPN. Every router maintains a cache of requests that it receives or is processing. The command show ip nhrp [brief] displays the local NHRP cache on a router.

The NHRP cache contains the following fields:

  1. The network entry, for a host (IPv4 /32 or IPv6 /128) or for a network/prefix length, and its tunnel IP address to NBMA (transport) IP address mapping.
  2. The interface number, duration of existence, and when it will expire (hours:minutes:seconds). Only dynamic entries expire.
  3. The NHRP mapping entry type.

NHRP Mapping Entry Types

  • static: An entry created statically on a DMVPN interface.
  • dynamic: An entry created dynamically. In DMVPN Phase 1, it is an entry created from a spoke that registered with an NHS via an NHRP Registration Request.
  • incomplete: A temporary entry placed locally while an NHRP Resolution Request is processing. An incomplete entry prevents repetitive NHRP requests for the same entry, avoiding unnecessary consumption of router resources. Eventually this entry times out and permits another NHRP Resolution Request for the same network.
  • local: Displays local mapping information. One typical entry represents a local network that was advertised in an NHRP Resolution Reply. This entry records which nodes received this local network mapping through an NHRP Resolution Reply.
  • (no socket): Mapping entries that do not have associated IPsec sockets and for which encryption is not triggered.
  • NBMA address: Nonbroadcast multi-access address, that is, the transport IP address from which the entry was received.

NHRP Flags Types

  • Used: Indicates that this NHRP mapping entry was used to forward data packets within the past 60 seconds.
  • Implicit: Indicates that the NHRP mapping entry was learned implicitly. Examples of such entries are the source mapping information gleaned from an NHRP Resolution Request received by the local router, or from an NHRP resolution packet forwarded through the router.
  • Unique: Indicates that this NHRP mapping entry must be unique and cannot be overwritten with a mapping entry that has the same tunnel IP address but a different NBMA address.
  • Router: Indicates that this NHRP mapping entry is from a remote router that provides access to a network or host behind the remote router.
  • Rib: Indicates that this NHRP mapping entry has a corresponding routing entry in the routing table. This entry has an associated H route.
  • Nho: Indicates that this NHRP mapping entry has a corresponding path that overrides the next hop for a remote network, as installed by another routing protocol.
  • Nhop: Indicates an NHRP mapping entry for a remote next-hop address (for example, a remote tunnel interface) and its associated NBMA address.

The following output shows the local NHRP cache for the various routers in the sample topology.

  • R11 contains only dynamic registrations for R31 and R41.
  • In the event that R31 and R41 cannot maintain connectivity to R11’s transport IP address, eventually the tunnel mapping is removed on R11.
  • The NHRP message flags on R11 indicate that R31 and R41 successfully registered with the unique registration to R11 and that traffic has recently been forwarded to both routers.
R11-Hub# show ip nhrp
192.168.100.31/32 via 192.168.100.31
  Tunnel100 created 23:04:04, expire 01:37:26
  Type: dynamic, Flags: unique registered used nhop
  NBMA address: 172.16.31.1
192.168.100.41/32 via 192.168.100.41
  Tunnel100 created 23:04:00, expire 01:37:42
  Type: dynamic, Flags: unique registered used nhop
  NBMA address: 172.16.41.1
R31-Spoke# show ip nhrp
192.168.100.11/32 via 192.168.100.11
   Tunnel100 created 23:02:53, never expire
   Type: static, Flags:
   NBMA address: 172.16.11.1

DMVPN Phases

DMVPN Phases Comparison

Phase 1 Spoke-to-Hub

Tunnel

  • mGRE on the hub and P2P GRE on the spokes
    • NHRP still required for spoke registration to hub
    • No spoke-to-spoke tunnels

Routing

  • Summarization/default route at hub is allowed.
  • Next-hop on spokes is always changed by the hub.

DMVPN Phase 1, the first DMVPN implementation, provides a zero-touch deployment for VPN sites. VPN tunnels are created only between spoke and hub sites. Traffic between spokes must traverse the hub to reach any other spoke.

Phase 2 Spoke-to-Spoke

Tunnel

  • mGRE on hub and spokes
    • NHRP required for spoke registration to hub
    • NHRP required for spoke-to-spoke resolution
    • Spoke-to-spoke tunnel triggered by spoke

Routing

  • Summarization/default routing at hub is NOT allowed
  • Next-hop on spokes is always preserved by the hub
  • Multi-level hierarchy requires hub daisy-chaining

DMVPN Phase 2 provides additional capability beyond DMVPN Phase 1 and allows spoke-to-spoke communication on a dynamic basis by creating an on-demand VPN tunnel between the spoke devices. DMVPN Phase 2 does not allow summarization (next-hop preservation). As a result, it also does not support spoke-to-spoke communication between different DMVPN networks (multilevel hierarchical DMVPN).

Phase 3 Spoke-to-Spoke

Tunnel

  • mGRE on hub and spokes
    • NHRP required for spoke registration to hub
    • NHRP required for spoke-to-spoke resolution
  • When a hub receives and forwards a packet out of the same interface:
    • Send NHRP redirect message back to packet source
    • Forward original packet down to spoke via RIB

Routing

  • Summarization/default routing at hub is allowed.
    • Results in NHRP routes for spoke-to-spoke tunnel
    • With no-summary, NHO is performed for spoke-to-spoke tunnel
      • Next-hop is changed from hub IP to spoke IP
  • Next-hop on spokes is always changed by the hub
    • Because of this, NHRP resolution is triggered by hub
  • Multilevel hierarchy works without daisy-chaining

DMVPN Phase 3 refines spoke-to-spoke connectivity by enhancing the NHRP messaging and interacting with the routing table. With DMVPN Phase 3, the hub sends an NHRP Redirect message to the spoke that originated the packet flow. The NHRP Redirect message provides the necessary information so that the originator spoke can initiate a resolution of the destination host/network.

In DMVPN Phase 3, NHRP installs paths in the routing table for the shortcuts it creates. NHRP shortcuts modify the next-hop entry for existing routes or add a more explicit route entry to the routing table. Because NHRP shortcuts install more explicit routes in the routing table, DMVPN Phase 3 supports summarization of networks at the hub while providing optimal routing between spoke routers.
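As a sketch, the Phase 3 behavior described above boils down to the two interface commands from the configuration steps earlier in this section:

```
! Hub - sends an NHRP Redirect when a packet hairpins out the same tunnel interface
interface Tunnel100
 ip nhrp redirect
!
! Spokes - act on the Redirect and install NHRP shortcut routes
interface Tunnel100
 ip nhrp shortcut
```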

Spoke-to-Spoke Communication

Underlying mechanisms used to establish spoke-to-spoke communication

In DMVPN Phase 1, the spoke devices rely on the configured tunnel destination to identify where to send the encapsulated packets. Phase 3 DMVPN uses mGRE tunnels and thereby relies on NHRP Redirect and Resolution Request messages to identify the NBMA addresses for any destination networks.

Packets flow through the hub in a traditional hub-and-spoke manner until the spoke-to-spoke tunnel has been established in both directions. As packets flow across the hub, the hub engages NHRP Redirection to start the process of finding a more optimal path with spoke-to-spoke tunnels.

Example: The first packet travels across the hub, but by the time a second stream of packets is sent, the spoke-to-spoke tunnel has been initialized so that traffic flows directly between spoke routers on the transport and overlay networks.

! Initial Packet Flow
R31-Spoke# traceroute 10.4.4.1 source 10.3.3.1
Tracing the route to 10.4.4.1
  1 192.168.100.11 5 msec 1 msec 0 msec <- This is the Hub Router (R11-Hub)
  2 192.168.100.41 5 msec * 1 msec
! Packetflow after Spoke-to-Spoke Tunnel is Established
R31-Spoke# traceroute 10.4.4.1 source 10.3.3.1
Tracing the route to 10.4.4.1
 1 192.168.100.41 1 msec * 0 msec

Forming Spoke-to-Spoke Tunnels

Step 1

  • R31 performs a route lookup for 10.4.4.1 and finds the entry 10.4.4.0/24 with the next-hop IP address 192.168.100.11.
  • R31 encapsulates the packet destined for 10.4.4.1 and forwards it to R11 out the tunnel 100 interface.

Step 2

  • R11 receives the packet from R31 and performs a route lookup for the packet destined for 10.4.4.1.
  • R11 locates the 10.4.4.0/24 network with the next-hop IP address 192.168.100.41.
  • R11 checks the NHRP cache and locates the entry for the 192.168.100.41/32 address.
  • R11 forwards the packet to R41, using the NBMA IP address 172.16.41.1, found in the NHRP cache. The packet is then forwarded out the same tunnel interface.
  • R11 has ip nhrp redirect configured on the tunnel interface and recognizes that the packet received from R31 hairpinned out of the tunnel interface.
  • R11 sends an NHRP Redirect to R31, indicating the packet source 10.3.3.1 and destination 10.4.4.1. The NHRP Redirect indicates to R31 that the traffic is using a suboptimal path.

Step 3

  • R31 receives the NHRP Redirect and sends an NHRP Resolution Request to R11 for the 10.4.4.1 address.
  • Inside the NHRP Resolution Request, R31 provides its protocol (tunnel IP) address, 192.168.100.31, and source NBMA address, 172.16.31.1.
  • R41 performs a route lookup for 10.3.3.1 and finds the entry 10.3.3.0/24 with the next-hop IP address 192.168.100.11.
  • R41 encapsulates the packet destined for 10.3.3.1 and forwards it to R11 out the tunnel 100 interface.

Step 4

  • R11 receives the packet from R41 and performs a route lookup for the packet destined for 10.3.3.1.
  • R11 locates the 10.3.3.0/24 network with the next-hop IP address 192.168.100.31.
  • R11 checks the NHRP cache and locates an entry for 192.168.100.31/32.
  • R11 forwards the packet to R31, using the NBMA IP address 172.16.31.1, found in the NHRP cache. The packet is then forwarded out the same tunnel interface.
  • R11 has ip nhrp redirect configured on the tunnel interface and recognizes that the packet received from R41 hairpinned out the tunnel interface.
  • R11 sends an NHRP Redirect to R41, indicating the packet source 10.4.4.1 and destination 10.3.3.1. The NHRP Redirect indicates to R41 that the traffic is using a suboptimal path.
  • R11 forwards R31’s NHRP Resolution Requests for the 10.4.4.1 address.

Step 5

  • R41 sends an NHRP Resolution Request to R11 for the 10.3.3.1 address and provides its protocol (tunnel IP) address, 192.168.100.41, and source NBMA address, 172.16.41.1.
  • R41 sends an NHRP Resolution Reply directly to R31, using the source information from R31’s NHRP Resolution Request.
    • The NHRP Resolution Reply contains the original source information in R31’s NHRP Resolution Request as a method of verification and contains the client protocol address of 192.168.100.41 and the client NBMA address 172.16.41.1. (If IPsec protection is configured, the IPsec tunnel is set up before the NHRP reply is sent.)

Note: The NHRP reply is for the entire subnet rather than the specified host address.

Step 6

  • R11 forwards R41’s NHRP Resolution Request for the 10.3.3.1 address.

Step 7

  • R31 sends an NHRP Resolution Reply directly to R41, using the source information from R41’s NHRP Resolution Request.
    • The NHRP Resolution Reply contains the original source information in R41’s NHRP Resolution Request as a method of verification and contains the client protocol address 192.168.100.31 and the client NBMA address 172.16.31.1. (Again, if IPsec protection is configured, the tunnel is set up before the NHRP reply is sent back in the other direction.)

A spoke-to-spoke DMVPN tunnel is established in both directions after step 7 is complete. This allows traffic to flow across the spoke-to-spoke tunnel instead of traversing the hub router.

The following output displays the DMVPN tunnel status on R31. Two new spoke-to-spoke tunnel entries are present; the DLX entry represents the local (no-socket) route, and the original tunnel to R11 remains a static tunnel.
R31-Spoke# show dmvpn detail
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        T1 - Route Installed, T2 - Nexthop-override
        C - CTS Capable
        # Ent --> Number of NHRP entries with same NBMA peer
        NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
        UpDn Time --> Up or Down Time for a Tunnel
============================================================================

Interface Tunnel100 is up/up, Addr. is 192.168.100.31, VRF ""
   Tunnel Src./Dest. addr: 172.16.31.1/MGRE, Tunnel VRF ""
   Protocol/Transport: "multi-GRE/IP", Protect ""
   Interface State Control: Disabled
   nhrp event-publisher : Disabled
IPv4 NHS:
192.168.100.11 RE NBMA Address: 172.16.11.1 priority = 0 cluster = 0
Type:Spoke, Total NBMA Peers (v4/v6): 3

# Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb    Target Network
----- --------------- --------------- ----- -------- ----- -----------------
    1 172.16.31.1      192.168.100.31    UP 00:00:10   DLX        10.3.3.0/24
    2 172.16.41.1      192.168.100.41    UP 00:00:10   DT2        10.4.4.0/24
      172.16.41.1      192.168.100.41    UP 00:00:10   DT1  192.168.100.41/32
    1 172.16.11.1      192.168.100.11    UP 00:00:51     S  192.168.100.11/32

Notice the NHRP mapping flags in the output that follows: router, rib, nho, and nhop.

  • The flag rib nho indicates that the router has found an identical route in the routing table that belongs to a different protocol. NHRP has overridden the other protocol’s next-hop entry for the network by installing a next-hop shortcut in the routing table.
  • The flag rib nhop indicates that the router has an explicit method to reach the tunnel IP address using an NBMA address and has an associated route installed in the routing table.
R31-Spoke# show ip nhrp detail
10.3.3.0/24 via 192.168.100.31
   Tunnel100 created 00:01:44, expire 01:58:15
   Type: dynamic, Flags: router unique local
   NBMA address: 172.16.31.1
   Preference: 255
    (no-socket)
   Requester: 192.168.100.41 Request ID: 3
10.4.4.0/24 via 192.168.100.41
   Tunnel100 created 00:01:44, expire 01:58:15
   Type: dynamic, Flags: router rib nho
   NBMA address: 172.16.41.1
   Preference: 255
192.168.100.11/32 via 192.168.100.11
   Tunnel100 created 10:43:18, never expire
   Type: static, Flags: used
   NBMA address: 172.16.11.1
   Preference: 255
192.168.100.41/32 via 192.168.100.41
   Tunnel100 created 00:01:45, expire 01:58:15
   Type: dynamic, Flags: router used nhop rib
   NBMA address: 172.16.41.1
   Preference: 255

NHRP Routing Table Manipulation

NHRP interacts closely with the routing and forwarding tables and installs or modifies routes in the RIB. If an entry already exists with an exact match for the network and prefix length, NHRP overrides the existing next hop with a shortcut. The original protocol remains responsible for the prefix, but an overridden next-hop address is indicated in the routing table by the percent sign (%).

The next-hop IP address for the EIGRP remote network still shows 192.168.100.11 as the next-hop address but includes a percent sign (%) to indicate a next-hop override. Notice that R31 installs the NHRP route to 192.168.100.41/32 and that R41 installs the NHRP route to 192.168.100.31/32 into the routing table as well.

R31-Spoke# show ip route
! Output omitted for brevity
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       + - replicated route, % - next hop override, p - overrides from PfR

Gateway of last resort is 172.16.31.2 to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 172.16.31.2
      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
D        10.1.1.0/24 [90/26885120] via 192.168.100.11, 10:44:45, Tunnel100
C        10.3.3.0/24 is directly connected, GigabitEthernet0/2
D %      10.4.4.0/24 [90/52992000] via 192.168.100.11, 10:44:45, Tunnel100
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.31.0/30 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
H        192.168.100.41/32 is directly connected, 00:03:21, Tunnel100
R41-Spoke# show ip route
! Output omitted for brevity
Gateway of last resort is 172.16.41.2 to network 0.0.0.0
S*    0.0.0.0/0 [1/0] via 172.16.41.2
      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
D        10.1.1.0/24 [90/26885120] via 192.168.100.11, 10:44:34, Tunnel100
D %      10.3.3.0/24 [90/52992000] via 192.168.100.11, 10:44:34, Tunnel100
C        10.4.4.0/24 is directly connected, GigabitEthernet0/2
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.41.0/24 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
H        192.168.100.31/32 is directly connected, 00:03:10, Tunnel100

The command show ip route next-hop-override displays the routing table with the explicit NHRP shortcuts that were added.

Notice that the NHRP shortcut is indicated by the NHO marking and shown underneath the original entry with the correct next-hop IP address.

R31-Spoke# show ip route next-hop-override
! Output omitted for brevity
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       + - replicated route, % - next hop override

Gateway of last resort is 172.16.31.2 to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 172.16.31.2
      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
D        10.1.1.0/24 [90/26885120] via 192.168.100.11, 10:46:38, Tunnel100
C        10.3.3.0/24 is directly connected, GigabitEthernet0/2
D %      10.4.4.0/24 [90/52992000] via 192.168.100.11, 10:46:38, Tunnel100
                     [NHO][90/255] via 192.168.100.41, 00:05:14, Tunnel100
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.31.0/30 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
H        192.168.100.41/32 is directly connected, 00:05:14, Tunnel100
R41-Spoke# show ip route next-hop-override
! Output omitted for brevity
Gateway of last resort is 172.16.41.2 to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 172.16.41.2
      10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks
D        10.1.1.0/24 [90/26885120] via 192.168.100.11, 10:45:44, Tunnel100
D %      10.3.3.0/24 [90/52992000] via 192.168.100.11, 10:45:44, Tunnel100
                     [NHO][90/255] via 192.168.100.31, 00:04:20, Tunnel100
C        10.4.4.0/24 is directly connected, GigabitEthernet0/2
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.41.0/24 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
H        192.168.100.31/32 is directly connected, 00:04:20, Tunnel100

NHRP Routing Table Manipulation with Summarization

Summarizing routes on WAN links provides stability by hiding route churn from remote sites, thereby improving scalability. The following example demonstrates how NHRP interacts with the routing table when an exact-match route does not exist there.

R11’s EIGRP configuration now advertises the 10.0.0.0/8 summary prefix out tunnel 100. The spoke routers use the summary route for forwarding traffic until NHRP establishes the spoke-to-spoke tunnel. The more explicit entries from NHRP are installed into the routing table after the spoke-to-spoke tunnels have been initialized.

				
R11-Hub
router eigrp GRE-OVERLAY
 address-family ipv4 unicast autonomous-system 100
  af-interface Tunnel100
   summary-address 10.0.0.0 255.0.0.0
   hello-interval 20
   hold-time 60
   no split-horizon
  exit-af-interface
  !
  topology base
  exit-af-topology
  network 10.0.0.0
  network 192.168.100.0
 exit-address-family

You can clear the NHRP cache on all routers by using the command clear ip nhrp, which removes the dynamic NHRP entries.
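After clearing, only the static mapping to the NHS remains in the spoke's cache, as this quick check sketches (output abbreviated; it mirrors the static entry shown earlier and may vary by IOS version):

R31-Spoke# clear ip nhrp
R31-Spoke# show ip nhrp
192.168.100.11/32 via 192.168.100.11
   Tunnel100 created 10:43:18, never expire
   Type: static, Flags: used
   NBMA address: 172.16.11.1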

R31-Spoke# show ip route
! Output omitted for brevity
Gateway of last resort is 172.16.31.2 to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 172.16.31.2
      10.0.0.0/8 is variably subnetted, 3 subnets, 3 masks
D        10.0.0.0/8 [90/26885120] via 192.168.100.11, 00:29:28, Tunnel100
C        10.3.3.0/24 is directly connected, GigabitEthernet0/2
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.31.0/30 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
R41-Spoke# show ip route
! Output omitted for brevity
Gateway of last resort is 172.16.41.2 to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 172.16.41.2
      10.0.0.0/8 is variably subnetted, 3 subnets, 3 masks
D        10.0.0.0/8 [90/26885120] via 192.168.100.11, 00:29:54, Tunnel100
C        10.4.4.0/24 is directly connected, GigabitEthernet0/2
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.41.0/24 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
  • Traffic was re-initiated from 10.3.3.1 to 10.4.4.1 to initialize the spoke-to-spoke tunnels.
  • R11 still sends the NHRP Redirect for hairpinned traffic, and the pattern would complete as shown earlier except that NHRP would install a more specific route into the routing table on R31 (10.4.4.0/24) and R41 (10.3.3.0/24).
  • The NHRP-injected routes are indicated by the H entries.
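The traffic re-initiation described above can be performed with a simple ping sourced from the spoke's LAN address; the first packets hairpin through R11 while NHRP resolves the shortcut:

R31-Spoke# ping 10.4.4.1 source 10.3.3.1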
R31-Spoke# show ip route
! Output omitted for brevity
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP

Gateway of last resort is 172.16.31.2 to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 172.16.31.2
      10.0.0.0/8 is variably subnetted, 4 subnets, 3 masks
D        10.0.0.0/8 [90/26885120] via 192.168.100.11, 00:31:06, Tunnel100
C        10.3.3.0/24 is directly connected, GigabitEthernet0/2
H        10.4.4.0/24 [250/255] via 192.168.100.41, 00:00:22, Tunnel100
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.31.0/30 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
H        192.168.100.41/32 is directly connected, 00:00:22, Tunnel100
R41-Spoke# show ip route
! Output omitted for brevity

Gateway of last resort is 172.16.41.2 to network 0.0.0.0

S*    0.0.0.0/0 [1/0] via 172.16.41.2
      10.0.0.0/8 is variably subnetted, 4 subnets, 3 masks
D        10.0.0.0/8 [90/26885120] via 192.168.100.11, 00:31:24, Tunnel100
H        10.3.3.0/24 [250/255] via 192.168.100.31, 00:00:40, Tunnel100
C        10.4.4.0/24 is directly connected, GigabitEthernet0/2
      172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks
C        172.16.41.0/24 is directly connected, GigabitEthernet0/1
      192.168.100.0/24 is variably subnetted, 3 subnets, 2 masks
C        192.168.100.0/24 is directly connected, Tunnel100
H        192.168.100.31/32 is directly connected, 00:00:40, Tunnel100

The following output shows the DMVPN tunnels after R31 and R41 have initialized the spoke-to-spoke tunnel with summarization on R11. Notice that both of the new spoke-to-spoke tunnel entries are DT1 because they are new routes in the RIB. If more explicit routes had existed, NHRP would have overridden the next-hop address and used a DT2 entry.

R31-Spoke# show dmvpn detail
! Output omitted for brevity
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        T1 - Route Installed, T2 - Nexthop-override
        C - CTS Capable
        # Ent --> Number of NHRP entries with same NBMA peer
        NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
        UpDn Time --> Up or Down Time for a Tunnel
==========================================================================
IPv4 NHS:
192.168.100.11 RE NBMA Address: 172.16.11.1 priority = 0 cluster = 0
Type:Spoke, Total NBMA Peers (v4/v6): 3

# Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb    Target Network
----- --------------- --------------- ----- -------- ----- -----------------
    1 172.16.31.1      192.168.100.31    UP 00:01:17   DLX        10.3.3.0/24
    2 172.16.41.1      192.168.100.41    UP 00:01:17   DT1        10.4.4.0/24
      172.16.41.1      192.168.100.41    UP 00:01:17   DT1  192.168.100.41/32
    1 172.16.11.1      192.168.100.11    UP  11:21:33    S  192.168.100.11/32
R41-Spoke# show dmvpn detail
! Output omitted for brevity
IPv4 NHS:
192.168.100.11 RE NBMA Address: 172.16.11.1 priority = 0 cluster = 0
Type:Spoke, Total NBMA Peers (v4/v6): 3

# Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb    Target Network
----- --------------- --------------- ----- -------- ----- -----------------
    2 172.16.31.1      192.168.100.31    UP 00:01:56   DT1        10.3.3.0/24
      172.16.31.1      192.168.100.31    UP 00:01:56   DT1  192.168.100.31/32
    1 172.16.41.1      192.168.100.41    UP 00:01:56   DLX        10.4.4.0/24
    1 172.16.11.1      192.168.100.11    UP 11:22:09     S  192.168.100.11/32

Phase 3 DMVPN fully supports summarization, which should be used to minimize the number of prefixes advertised across the WAN.
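Because Phase 3 spokes rely on NHRP shortcuts rather than specific next-hop information from the routing protocol, summarization can be taken to its extreme: the hub can advertise only a default summary toward the spokes. As a sketch, assuming the same EIGRP named-mode process used earlier:

R11-Hub
router eigrp GRE-OVERLAY
 address-family ipv4 unicast autonomous-system 100
  af-interface Tunnel100
   summary-address 0.0.0.0 0.0.0.0
  exit-af-interface
 exit-address-family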
