Chapter 2

Multicast Scalability and Transport Diversification

Public cloud services are very commonly used in enterprise networks, and it is therefore important to understand how multicast messages are carried to and from a cloud service provider. Transportation of multicast messages requires consideration of several factors, especially when the cloud service providers do not support native multicast. This chapter introduces the key concepts of cloud services and explains the elements required to support multicast services.

Why Is Multicast Not Enabled Natively in a Public Cloud Environment?

Currently, public cloud environments do not have multicast enabled. Replicating a packet to every host in a customer-owned cloud segment has data plane and control plane implications for the scalability of the cloud fabric. As enterprise customers add more multicast-dependent resources to the cloud, the cloud platform needs increased capacity and must account for other considerations related to dynamic multicast demand. Such changes can be expensive, and capacity planning based on speculative estimates can affect the user experience of cloud consumers.

Estimating the required changes becomes even more complex when multitenancy is added to the equation, and allowing multicast protocols in the cloud fabric can affect its performance. As a result, enterprise customers that require multicast communication for business-relevant applications find it difficult to adopt cloud services.

Enterprise Adoption of Cloud Services

Enterprise customers tend to adopt cloud services for a number of reasons, including agile provisioning, lower cost of investment, reduced operational and capital expenses, proximity to geographic user populations, and faster product development. Figure 2-1 illustrates the most common types of cloud models.

Figure 2-1 Data Center and Cloud Types

Many cloud offerings allow consumers to utilize storage, network, and compute resources. The consumer can manage these three components to deploy and run specific applications. This is known as infrastructure as a service (IaaS). Organizations that consume resources from the cloud can outsource the support operations for maintaining the IT infrastructure, offload the hardware cost, and move toward a “pay-as-you-use” model. Platform as a service (PaaS) abstracts many of the standard application stack–level functions and offers those functions as a service. Developers can leverage tool sets that are readily available in the cloud and can focus on the application logic rather than worrying about the underlying infrastructure. Software as a service (SaaS) gives consumers the ability to use a hosted software licensing and delivery model, delivering an application as a service for an organization.

IaaS is commonly used to host application resources in the cloud, and it can accommodate multicast requirements. Many enterprise customers are moving toward a hybrid approach to application deployment in the cloud. One example of a hybrid deployment is having the web tier hosted in the cloud while the application and database tiers remain in the customer’s on-premises data center. Hosting the web tier in the cloud is a cost-effective way to offer a global service because web applications can be positioned closer to the geographic regions they serve. Cloud applications still have ties to the application stack and internal data and require enterprise features such as security, preferred routing, and multicast. Therefore, it is essential to have a thorough understanding of how all these elements interact with one another during the planning, implementation, and migration stages. Many cloud service providers (CSP) do not provide support for multicast. Multicast may not be needed by all enterprise applications, but it is critical for the few that depend on it to function appropriately.

The following sections are dedicated to explaining how multicast is supported in a cloud environment.

Cloud Connectivity to an Enterprise

Connectivity for an enterprise to a cloud service provider can be achieved in three different ways:

Internet-based connectivity

Direct connectivity to the cloud provider

Cloud broker–based connectivity to the cloud provider

With Internet-based connectivity, the enterprise is connected to the cloud service provider via an Internet connection, as shown in Figure 2-2. The enterprise normally uses an encrypted virtual private network (VPN) service to the virtual data center located at the cloud provider. The enterprise customer may leverage the VPN-based service managed by the cloud service provider or a self-managed service.

Figure 2-2 Internet-Based Connectivity: Hybrid Model

Direct connectivity to the cloud provider requires the service provider to terminate the virtual circuit (VC) from the enterprise to the service provider (that is, the network service provider [NSP]). The NSP provides network connectivity to access the cloud asset. There is a one-to-one mapping of circuits from each enterprise to the service provider. The direct connectivity concept is illustrated in Figure 2-3.

Figure 2-3 Direct Connectivity to the Cloud Provider

A new variant of direct connectivity offered by network service providers merges cloud service provider access with an SP-managed Layer 3 VPN solution that connects branch offices and enterprise data centers. This implementation strategy allows the service provider to tie a specific customer's VC to connections to multiple cloud providers; this model is known as a cloud broker. Many cloud brokers also provide colocation services to enterprises and cloud service providers. Figure 2-4 illustrates the direct connectivity concept using a cloud broker.

Figure 2-4 Cloud Broker–Based Connectivity to a CSP

The cloud broker type of access provides the flexibility of being able to access multiple CSPs over the same connection from the enterprise. The connectivity from the enterprise to the cloud broker is provided by the NSP. Most cloud brokers also provide carrier-neutral facility space (such as colocation services) and Internet exchange points. Using a cloud broker allows an enterprise to consolidate its pipe toward the cloud while also providing access to multiple cloud service providers. Figure 2-5 illustrates a functional description of a cloud broker.

Figure 2-5 Cloud Broker Connectivity with NSP L3 VPN Service

Network service providers also offer Layer 3 VPN services to access the cloud broker (for example, AT&T NetBond, Verizon Secure Cloud Interconnect). Without the Layer 3 VPN service, each enterprise data center requires a point-to-point connection with the NSP to the regional cloud broker or must use the IPsec VPN over Internet connectivity model. The Layer 3 VPN service allows an enterprise campus or remote sites to have access to the cloud service provider directly, without transiting the enterprise data center. Localization of enterprise security services for cloud usage is accomplished in the cloud broker’s colocation facility.

Virtual Services in a Cloud

Many enterprise architects prefer having control of the services in their tenant space within the cloud infrastructure. The concept of network functions virtualization (NFV) becomes a feasible solution here, especially when using an IaaS cloud service.

NFV involves implementing network service elements, such as routing, load balancing, VPN services, WAN optimization, and firewalls, in software. Each of these services is referred to as a virtual network function (VNF). The NFV framework stitches VNFs together by using service chaining. In this way, provisioning of the network services can be aligned with the service elements. The NFV elements can be automated using the same workflow as the related application services, which makes them easier to manage. These cloud services can then provide a solution for making enterprise features available in an IaaS infrastructure.

Using a CSR 1000v device as a VNF element provides a customer with a rich set of features in the cloud. The CSR 1000v is an IOS-XE software router based on the ASR 1001. The CSR 1000v runs within a virtual machine, which can be deployed on x86 server hardware. There are a lot of commonalities between the system architecture for the CSR 1000v and the ASR 1000. As shown in Table 2-1, a CSR 1000v provides an enterprise feature footprint in the cloud.

Table 2-1 Enterprise Feature Footprint Covered by a CSR 1000v

Technology Package

CSR 1000v Features (Software 3.17 Release)

IP Base (Routing)

Basic networking: BGP, OSPF, EIGRP, RIP, IS-IS, IPv6, GRE, VRF-LITE, NTP, QoS

Multicast: IGMP, PIM

High availability: HSRP, VRRP, GLBP

Addressing: 802.1Q VLAN, EVC, NAT, DHCP, DNS

Basic security: ACL, AAA, RADIUS, TACACS+

Management: IOS-XE CLI, SSH, Flexible NetFlow, SNMP, EEM, NETCONF

Security (Routing + Security)

IP Base features plus the following:

Advanced security: ZBFW, IPsec, VPN, EZVPN, DMVPN, FlexVPN, SSLVPN

APPX/APP

IP Base features plus the following:

Advanced networking: L2TPv3, BFD, MPLS, VRF, VXLAN

Application experience: WCCPv2, APPXNAV, NBAR2, AVC, IP SLA

Hybrid cloud connectivity: LISP, OTV, VPLS, EoMPLS

Subscriber management: PTA, LNS, ISG

AX (Routing + Security + APPX + Hybrid Cloud)

Security features plus the following:

Advanced networking: L2TPv3, BFD, MPLS, VRF, VXLAN

Application experience: WCCPv2, APPXNAV, NBAR2, AVC, IP SLA

Hybrid cloud connectivity: LISP, OTV, VPLS, EoMPLS

Subscriber management: PTA, LNS, ISG

Note: Use the Cisco feature navigator tool, at www.cisco.com/go/fn, to see the latest features available with the CSR 1000v.

Service Reflection Feature

Service reflection is a multicast feature that provides a translation service for multicast traffic. It allows you to translate externally received multicast or unicast destination addresses to multicast or unicast addresses. This feature offers the advantage of completely isolating the external multicast source information. The end receivers in the destination network can receive identical feeds from two ingress points in the network, and an end host can then subscribe to two different multicast feeds that carry identical information. The ability to select a particular multicast stream depends on the capability of the host, but this approach provides a solution for highly available multicast.

Use Case 1: Multicast-to-Multicast Destination Conversion

Cisco multicast service reflection is a Cisco IOS software feature that processes packets forwarded by the router to the Vif1 interface. The Vif1 interface is similar to a loopback interface; it is a logical IP interface that is always up when the router is active. The Vif1 interface has its own unique subnet, which must be advertised in the routing protocol. The Vif1 interface provides the private-to-public multicast group mapping and sources the translated packets, and it is key to the functionality of service reflection. Unlike IP Multicast Network Address Translation (NAT), which translates only the source IP address, service reflection translates both source and destination addresses. Figure 2-6 illustrates a multicast-to-multicast destination conversion use case.

Figure 2-6 Multicast-to-Multicast Destination Conversion

Figure 2-6 shows the lab setup for a multicast-to-multicast destination conversion use case. In Figure 2-6, R2 converts the multicast stream from 224.1.1.1 to 239.1.1.1. The virtual interface (VIF) configuration at R2 (for service reflection) has a static Internet Group Management Protocol (IGMP) join for 224.1.1.1 to attract the multicast flow to the VIF (Example 2-1). In converting the stream from 224.1.1.1 to 239.1.1.1, the source for the multicast stream is changed to 10.5.1.2.

Example 2-1 VIF Configuration at R2

R2# show run interface vif1
Building configuration...
 
Current configuration : 203 bytes
!
interface Vif1
 ip address 10.5.1.1 255.255.255.0
 ip service reflect Ethernet0/0 destination 224.1.1.1 to 239.1.1.1 mask-len 32 source 10.5.1.2
 ip pim sparse-mode
 ip igmp static-group 224.1.1.1
end
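
The outputs that follow also assume a receiver-side IGMP join at R4. That configuration is not shown in the original lab; a minimal sketch of what it might look like (the interface choice is an assumption based on the Loopback0 entries in the R4 outputs) is:

! Hypothetical receiver-side configuration at R4: Loopback0 joins
! 239.1.1.1 so that R4 pulls the translated stream
interface Loopback0
 ip pim sparse-mode
 ip igmp join-group 239.1.1.1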

Prior to enabling the multicast source, the output at R4 (with the IGMP join group configuration for 239.1.1.1) is as shown in Example 2-2.

Example 2-2 show ip mroute Output at R4

R4# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.1.1.1), 03:04:11/00:02:53, RP 192.168.2.2, flags: SJCL
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 03:04:10/00:02:53
 
(*, 224.0.1.40), 03:04:11/00:02:53, RP 192.168.2.2, flags: SJCL
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 03:04:10/00:02:53

There is no (S, G) entry for the 239.1.1.1 multicast flow. After the multicast source has been enabled, traffic is captured between the source and R2 to verify the packet flow, as shown in Example 2-3.

Example 2-3 Packet Capture of the Multicast Stream Before Conversion

=============================================================================
15:52:55.622 PST Thu Jan 5 2017                  Relative Time: 1.697999
Packet 17 of 148                                 In: Ethernet0/0
 
Ethernet Packet:  298 bytes
      Dest Addr: 0100.5E01.0101,   Source Addr: AABB.CC00.0100
      Protocol: 0x0800
 
IP    Version: 0x4,  HdrLen: 0x5,  TOS: 0x00
      Length: 284,   ID: 0x0000,   Flags-Offset: 0x0000
      TTL: 60,   Protocol: 17 (UDP),   Checksum: 0xE4A8 (OK)
      Source: 10.1.1.1,     Dest: 224.1.1.1
 
UDP   Src Port: 0 (Reserved),   Dest Port: 0 (Reserved)
      Length: 264,   Checksum: 0x64B5 (OK)
 
Data:
Output removed for brevity…

The show ip mroute command output at R2 shows that the 224.1.1.1 stream is converted by service reflection to 239.1.1.1 and that the incoming interface (IIF) for 239.1.1.1 is the VIF, as shown in Example 2-4.

Example 2-4 show ip mroute at R2

R2# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
 
(*, 239.1.1.1), 02:53:55/stopped, RP 192.168.1.1, flags: SF
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Ethernet1/0, Forward/Sparse, 02:53:36/00:03:05
 
(10.5.1.2, 239.1.1.1), 00:00:07/00:02:52, flags: FT
  Incoming interface: Vif1, RPF nbr 0.0.0.0, flags: FT
  Outgoing interface list:
    Ethernet1/0, Forward/Sparse, 00:00:07/00:03:22
 
(*, 224.1.1.1), 02:47:27/stopped, RP 192.168.2.2, flags: SJCF
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Vif1, Forward/Sparse, 02:47:27/00:02:47
 
 
(10.1.1.1, 224.1.1.1), 00:00:47/00:02:12, flags: FT
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Vif1, Forward/Sparse, 00:00:47/00:02:12
 
(*, 224.0.1.40), 02:54:26/00:03:09, RP 192.168.2.2, flags: SJCL
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Ethernet1/0, Forward/Sparse, 02:53:41/00:03:09
    Loopback0, Forward/Sparse, 02:54:25/00:02:39

The output clearly shows R2 receiving the flow 224.1.1.1, and the new stream for 239.1.1.1 is sourced from 10.5.1.2, the VIF interface (that is, the service reflection interface address local to R2).

At R4, the (S, G) route value for 239.1.1.1 is seen with source IP address 10.5.1.2, as shown in Example 2-5.

Example 2-5 show ip mroute at R4

R4# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
 
(*, 239.1.1.1), 03:08:06/stopped, RP 192.168.2.2, flags: SJCL
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 03:08:05/00:01:59
 
(10.5.1.2, 239.1.1.1), 00:00:17/00:02:42, flags: LJT
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:17/00:02:42
Use Case 2: Unicast-to-Multicast Destination Conversion

Figure 2-7 shows an example of a unicast-to-multicast conversion use case.

Figure 2-7 Unicast-to-Multicast Destination Conversion

Example 2-6 shows the configuration at R2 that converts the unicast flow destined to 10.5.1.3 into the multicast flow 239.1.1.1. The VIF interface configuration at R2 (for service reflection), with 10.5.1.3 falling in the same subnet as the Vif1 interface, redirects the packet to the VIF interface. At the VIF interface, traffic destined to 10.5.1.3 is converted to 239.1.1.1 and sourced from 10.5.1.2.

Example 2-6 show interface vif1 at R2

R2# show run int vif1
Building configuration...
 
Current configuration : 203 bytes
!
interface Vif1
 ip address 10.5.1.1 255.255.255.0
 ip service reflect Ethernet0/0 destination 10.5.1.3 to 239.1.1.1 mask-len 32  source 10.5.1.2
 ip pim sparse-mode
end

Prior to enabling the unicast source, with the IGMP join group configuration for 239.1.1.1 in place at R4, verify that R4 has no (S, G) entry for 239.1.1.1 by using show ip mroute (see Example 2-7).

Example 2-7 show ip mroute at R4

R4# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
 
(*, 239.1.1.1), 00:00:02/00:02:57, RP 192.168.2.2, flags: SJCL
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:02/00:02:57
 
(*, 224.0.1.40), 00:00:02/00:02:57, RP 192.168.2.2, flags: SJCL
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:02/00:02:57

There is no (S, G) entry for the multicast flow of 239.1.1.1 at R4.

Next, you enable the unicast stream from 10.1.1.1 and check the sniffer between the source (10.1.1.1) and R2, as shown in Example 2-8. The unicast stream is generated via an Internet Control Message Protocol (ICMP) ping.

Example 2-8 Sniffer Capture of the Unicast Stream Before Conversion

=============================================================================
22:38:22.425 PST Thu Jan 5 2017                  Relative Time: 27.684999
Packet 31 of 76                                  In: Ethernet0/0
 
Ethernet Packet:  114 bytes
      Dest Addr: AABB.CC00.0200,   Source Addr: AABB.CC00.0100
      Protocol: 0x0800
 
IP    Version: 0x4,  HdrLen: 0x5,  TOS: 0x00
      Length: 100,   ID: 0x0024,   Flags-Offset: 0x0000
      TTL: 255,   Protocol: 1 (ICMP),   Checksum: 0xA56B (OK)
      Source: 10.1.1.1,     Dest: 10.5.1.3
ICMP  Type: 8,   Code: 0  (Echo Request)
      Checksum: 0x6D42 (OK)
      Identifier: 000C,  Sequence: 0003
Echo Data:
    0 : 0000 0000 0177 0F82 ABCD ABCD ABCD ABCD ABCD ABCD  ....................
   20 : ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD  ....................
   40 : ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD ABCD  ....................
   60 : ABCD ABCD ABCD ABCD ABCD ABCD                      ............

The show ip mroute command output at R2 shows that the (S, G) entry for 239.1.1.1 has Vif1 as the incoming interface (IIF). The VIF interface converts the unicast stream from 10.1.1.1 into a multicast stream with group address 239.1.1.1, as shown in Example 2-9.

Example 2-9 show ip mroute at R2

R2# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
 
(*, 239.1.1.1), 06:51:31/00:03:26, RP 192.168.2.2, flags: SF
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Ethernet1/0, Forward/Sparse, 06:51:12/00:03:26
 
 
(10.5.1.2, 239.1.1.1), 00:02:35/00:00:58, flags: FT
  Incoming interface: Vif1, RPF nbr 0.0.0.0
  Outgoing interface list:
    Ethernet1/0, Forward/Sparse, 00:02:35/00:03:26

The unicast stream destined to 10.5.1.3 is converted into multicast 239.1.1.1 and sourced from 10.5.1.2, the VIF address (that is, the service reflection interface address local to R2). The incoming interface is Vif1, which performs this translation, and the outgoing interface, Ethernet1/0, points toward the downstream receiver.

At R4, now the (S, G) route value for 239.1.1.1 has source IP address 10.5.1.2, as shown in Example 2-10.

Example 2-10 show ip mroute at R4

R4# show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
 
(*, 239.1.1.1), 00:01:54/stopped, RP 192.168.2.2, flags: SJCL
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:01:54/00:02:09
 
(10.5.1.2, 239.1.1.1), 00:00:52/00:02:07, flags: LJT
  Incoming interface: Ethernet0/0, RPF nbr 10.1.3.1
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:52/00:02:09
Use Case 3: Multicast-to-Unicast Destination Conversion

Figure 2-8 illustrates the topology for a multicast-to-unicast conversion.

Figure 2-8 Multicast-to-Unicast Destination Conversion

Example 2-11 shows the configuration at R2 that converts the multicast flow 224.1.1.1 to a unicast flow destined to 10.1.3.2. The VIF interface configuration at R2 (for service reflection) has a static join for 224.1.1.1 on the VIF interface, and the multicast packets for 224.1.1.1 are converted into a unicast UDP stream sourced from 10.5.1.2, with destination 10.1.3.2 (the Ethernet0/0 interface of R4).

Example 2-11 show int vif1 at R2

R2# show run int vif1
Building configuration...
 
Current configuration : 203 bytes
!
interface Vif1
 ip address 10.5.1.1 255.255.255.0
 ip service reflect Ethernet0/0 destination 224.1.1.1 to 10.1.3.2 mask-len 32 source 10.5.1.2
 ip pim sparse-mode
 ip igmp static-group 224.1.1.1

In Example 2-12, the packet capture between source and R2 shows the multicast flow for 224.1.1.1 sourced from 10.1.1.1.

Example 2-12 Sniffer Capture Before the Conversion

=============================================================================
22:49:43.470 PST Thu Jan 5 2017                  Relative Time: 11:48.729999
Packet 1534 of 4447                              In: Ethernet0/0
 
Ethernet Packet:  298 bytes
      Dest Addr: 0100.5E01.0101,   Source Addr: AABB.CC00.0100
      Protocol: 0x0800
 
IP    Version: 0x4,  HdrLen: 0x5,  TOS: 0x00
      Length: 284,   ID: 0x0000,   Flags-Offset: 0x0000
      TTL: 60,   Protocol: 17 (UDP),   Checksum: 0xE4A8 (OK)
      Source: 10.1.1.1,     Dest: 224.1.1.1
 
UDP   Src Port: 0 (Reserved),   Dest Port: 0 (Reserved)
      Length: 264,   Checksum: 0x64B5 (OK)
 
Data:
    0 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   20 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   40 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   60 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   80 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  100 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  120 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  140 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  160 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  180 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  200 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  220 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  240 : 0000 0000 0000 0000 0000 0000 0000 0000            ................

In Example 2-13, the current multicast state at R2 shows the (*, G) and (S, G) entries for the 224.1.1.1 multicast group.

Example 2-13 show ip mroute at R2

R2# show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
 
(*, 224.1.1.1), 07:02:08/stopped, RP 192.168.1.1, flags: SJCF
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Vif1, Forward/Sparse, 07:02:08/00:00:06
 
(10.1.1.1, 224.1.1.1), 00:10:15/00:02:44, flags: FTJ
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.1
  Outgoing interface list:
    Vif1, Forward/Sparse, 00:10:15/00:01:44

In Example 2-14, a packet capture between R3 and R4 shows the unicast stream UDP sourced from 10.5.1.2 and destination 10.1.3.2 (R4 Ethernet0/0 interface).

Example 2-14 Sniffer Output After the Conversion

=============================================================================
22:55:11.710 PST Thu Jan 5 2017                  Relative Time: 1.945999
Packet 23 of 262                                 In: Ethernet0/0
 
Ethernet Packet:  298 bytes
      Dest Addr: AABB.CC00.0301,   Source Addr: AABB.CC00.0201
      Protocol: 0x0800
 
IP    Version: 0x4,  HdrLen: 0x5,  TOS: 0x00
      Length: 284,   ID: 0x0000,   Flags-Offset: 0x0000
      TTL: 58,   Protocol: 17 (UDP),   Checksum: 0x67C8 (OK)
      Source: 10.5.1.2,     Dest: 10.1.3.2
 
UDP   Src Port: 0 (Reserved),   Dest Port: 0 (Reserved)
      Length: 264,   Checksum: 0xE5D4 (OK)
 
Data:
    0 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   20 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   40 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   60 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
   80 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  100 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  120 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  140 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  160 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  180 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  200 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  220 : 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  ....................
  240 : 0000 0000 0000 0000 0000 0000 0000 0000            ................

Multicast Traffic Engineering

IP Multicast, Volume 1, Chapter 5, “IP Multicast Design Considerations and Implementation,” provides a good overview of various methods of multicast traffic engineering, including the following:

Multipath feature

ip mroute statements or MBGP for multicast path selections

Multicast via tunnels

Table 2-2 presents a summary of these features.

Table 2-2 Multicast Features for Traffic Engineering

Feature

Usage

Multipath feature

Say that multiple unicast paths exist between two routers and the administrator wants to load-split the multicast traffic. (The default RPF behavior is to choose the highest IP address as the next hop for all (S, G) flow entries.) Configuring load splitting with the ip multicast multipath command causes the system to load-split multicast traffic across multiple equal-cost paths based on the source address, using the S-hash algorithm. This feature load-splits the traffic; it does not load-balance it. Based on the S-hash algorithm, the multicast stream from a given source uses only one path. PIM joins are distributed over the different equal-cost multipath (ECMP) links based on a hash of the source address, which enables streams to be divided across different network paths. The S-hash method can also provide path diversity when the same real-time data is split between two multicast groups for redundancy; the redundant flows are produced by an intelligent application that encapsulates the same data in two separate multicast streams. (A configuration sketch of the options in this table appears after Table 2-2.)

ip mroute statements or MBGP for multicast path selections

Say that two equal-cost paths exist between two routers, and the administrator wishes to force the multicast traffic through one path. The administrator can use a static ip mroute statement to force the Reverse Path Forwarding (RPF) check through an interface of choice.

In a large network with redundant links, a dynamic method of separating the multicast traffic from the unicast traffic is more desirable. This is achieved by using the Border Gateway Protocol (BGP) multicast address family. With the BGP multicast address family, the multicast source network is advertised, and the next hop is resolved via a recursive lookup in the Interior Gateway Protocol (IGP) to find the upstream RPF interface.

Multicast via tunnels

A tunnel overlay infrastructure can be used to carry multicast across a network segment that is not controlled by the enterprise and does not support multicast.

Multicast within a tunnel infrastructure is a key feature in current cloud deployments for supporting multicast over non-enterprise segments. Multicast across point-to-point GRE tunnels is simple; the main consideration is the RPF interface, which should resolve to the tunnel interface in the multicast routing information base (MRIB). The other overlay solution is Dynamic Multipoint VPN (DMVPN).
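
Before moving on to DMVPN, the following minimal configuration sketch illustrates the options summarized in Table 2-2. The prefixes, next-hop address, and interface names are placeholders for illustration only and are not taken from the lab topology used later in this chapter:

! Option 1: load-split (S, G) joins across equal-cost paths
! using the source-based S-hash algorithm
ip multicast multipath
!
! Option 2: force the RPF check for a source prefix through a chosen
! next hop with a static mroute (or advertise the prefix under the BGP
! ipv4 multicast address family for a dynamic approach)
ip mroute 10.10.10.0 255.255.255.0 172.16.1.1
!
! Option 3: carry multicast over a point-to-point GRE overlay and make
! the tunnel the RPF interface toward a different source prefix
interface Tunnel0
 ip address 172.16.100.1 255.255.255.252
 ip pim sparse-mode
 tunnel source Loopback0
 tunnel destination 192.0.2.1
!
ip mroute 10.20.20.0 255.255.255.0 Tunnel0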

DMVPN is a Cisco IOS software solution for building scalable IPsec VPNs. DMVPN uses a centralized architecture to provide easier implementation and management for deployments that require granular access controls for diverse user communities, including mobile workers, telecommuters, and extranet users. The use of multipoint generic routing encapsulation (mGRE) tunnels with Next Hop Resolution Protocol (NHRP) creates an overlay solution that can be used to add features such as encryption, policy-based routing (PBR), and quality of service (QoS). The use of NHRP with mGRE allows direct spoke-to-spoke communication, which is useful for voice over IP traffic or any other communication except multicast. With the DMVPN overlay solution, multicast communication must pass through the hub, which manages replication for the spokes. Replication at the hub is a design consideration that must be contemplated for multicast designs on an overlay network. For example, if a 200 Mbps multicast stream needs to be transmitted to 100 spokes, the collective multicast load after replication is 200 Mbps × 100 spokes, or 20 Gbps. In this case, the outbound WAN link needs to accommodate 20 Gbps of multicast traffic, and the hub platform must be able to replicate those multicast flows; both factors should be considered during the capacity planning stage. Figure 2-9 illustrates the considerations.

Figure 2-9 Overview of DMVPN Multicast Considerations

To spread the load of multicast replication, the design could involve multiple tunnel headend routers, each responsible for a specific set of spoke routers sized to the replication capacity of the platform. This model can also be used for dual hub sites.

Figure 2-10 shows a global deployment of an overlay solution where the customer wants to leverage local carrier-neutral facilities and host multicast services at the regional locations. The customer also wants the flexibility of hosting spoke-to-spoke tunnels across the different regions, which requires a regional solution with multicast tied to the global multicast solution.

Figure 2-10 Design to Facilitate Regional Multicast Replication and Unicast Interregional Direct Communication

The expected communication in this scenario includes the following:

Unicast (applicable to all branches in the region):

Branch A can communicate directly to Branch B (direct spoke-to-spoke communication within the region)

Branch A can communicate directly to Branch C without going through the hub (direct spoke-to-spoke interregional communication)

Branch A can directly communicate with the hub (regional spoke communication with the central data center connected to the hub)

Multicast (applicable to all branches in the region):

Branch C can send or receive multicast (region-specific multicast) to Branch D (localized replication at a regional hub)

Branches A and C can send or receive multicast but must traverse through the central hub and can be part of the global multicast (interregional multicast communication via the DMVPN hub)

Branches A, B, C, and D can receive global multicast from the hub (with a data center connected to the global hub then to a regional hub for multicast communication)

Figure 2-11 shows the lab topology used to demonstrate this DMVPN overlay design.

Figure 2-11 Lab Topology Design to Facilitate Regional Multicast Replication and Unicast Spoke-to-Spoke Communication

Highlights of the design include the following:

R2 is the hub for the single NHRP domain (Domain 1). High availability (HA) to R2 is not shown in the topology.

R3 and R4 are part of the NHRP domain (Domain 1) for Region A and Region B, respectively. HA for the regional hub is not shown.

The regional hubs, which are in the same NHRP domain, each have two tunnel connections: one toward the central hub (R2) and the other toward their spokes. Each tunnel should be sourced from a separate loopback. This is important because it provides the regional capability of replicating multicast streams.

The rendezvous point (RP) at the central hub serves as the global RP, and the RPs at the regional hubs serve as regional RPs.

The routing protocol and RP design should be aligned to general best practices not covered in these configurations.

Example 2-15 illustrates the configuration of DMVPN with multicast at hub R2.

Example 2-15 Configuration Snapshot from Hub R2

interface Loopback0
 ip address 192.168.2.2 255.255.255.255
!
interface Loopback100
 ip address 10.0.100.100 255.255.255.255
 ip pim sparse-mode
!
interface Tunnel1
 bandwidth 1000
 ip address 10.0.0.1 255.255.255.0
 no ip redirects
 ip mtu 1400
 no ip split-horizon eigrp 1
 ip pim nbma-mode
 ip pim sparse-mode
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp holdtime 360
 ip nhrp shortcut
 ip nhrp redirect
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel key 1
 hold-queue 4096 in
 hold-queue 4096 out
!
ip pim rp-address 10.0.100.100 2
!
!
!
access-list 2 permit 239.1.1.1

Note: The configuration does not include crypto or routing configurations.

Example 2-16 gives the configuration for DMVPN with multicast at the regional hub R3.

Example 2-16 Configuration Snapshot of Regional Hub R3

interface Loopback0
 ip address 192.168.3.3 255.255.255.255
!
interface Loopback1
 ip address 192.168.3.33 255.255.255.255
 ip pim sparse-mode
!
interface Loopback100
 ip address 10.0.100.103 255.255.255.255
 ip pim sparse-mode
!
interface Tunnel1
 bandwidth 1000
 ip address 10.0.0.3 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip pim sparse-mode
 ip nhrp map 10.0.0.1 192.168.2.2
 ip nhrp map multicast 192.168.2.2
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1
 ip nhrp shortcut
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel key 1
 hold-queue 4096 in
 hold-queue 4096 out
!
interface Tunnel3
 bandwidth 1000
 ip address 10.0.1.3 255.255.255.0
 no ip redirects
 ip mtu 1400
 no ip split-horizon eigrp 1
 ip pim nbma-mode
 ip pim sparse-mode
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp holdtime 360
 ip nhrp shortcut
 ip nhrp redirect
 tunnel source Loopback1
 tunnel mode gre multipoint
 tunnel key 1
 hold-queue 4096 in
 hold-queue 4096 out
!
ip pim rp-address 10.0.100.103 1
ip pim rp-address 10.0.100.100 2
!
!
!
access-list 1 permit 239.192.0.0 0.0.255.255
access-list 2 permit 239.1.1.1

As highlighted in Example 2-16, the regional hub has two DMVPN tunnels sourced from two separate loopbacks (Loopback0 and Loopback1). The NHRP network ID and tunnel key are the same for both DMVPN tunnels. This is an important configuration detail: it enables local replication of regional multicast traffic while keeping unicast traffic spoke to spoke (interregional or intra-regional).

The DMVPN with multicast configuration at the regional spoke R6 is shown in Example 2-17.

Example 2-17 Configuration Snapshot of Regional Spoke R6

interface Loopback0
 ip address 192.168.6.6 255.255.255.255
!
interface Loopback100
 ip address 10.0.100.102 255.255.255.255
 ip pim sparse-mode
 ip igmp join-group 239.192.1.1
!
interface Tunnel3
 bandwidth 1000
 ip address 10.0.1.6 255.255.255.0
 no ip redirects
 ip mtu 1400
 ip pim sparse-mode
 ip nhrp network-id 1
 ip nhrp nhs 10.0.1.3 nbma 192.168.3.33 multicast
 ip nhrp shortcut
 tunnel source Loopback0
 tunnel mode gre multipoint
 tunnel key 1
 hold-queue 4096 in
 hold-queue 4096 out
!
ip pim rp-address 10.0.100.103 1
ip pim rp-address 10.0.100.100 2
!
!
!
access-list 1 permit 239.192.0.0 0.0.255.255
access-list 2 permit 239.1.1.1
Unicast Spoke-to-Spoke Intra-regional Communication

To show unicast spoke-to-spoke intra-regional communication, this section reviews the NHRP mappings at a regional spoke. The NHRP mapping on R5 shows the regional hub 10.0.1.3. (10.0.1.3 is the tunnel IP address at R3 and is the next-hop server.) Example 2-18 uses the show ip nhrp command on R5 to display the current NHRP entries.

Example 2-18 show ip nhrp on R5

R5# show ip nhrp
10.0.1.3/32 via 10.0.1.3
   Tunnel3 created 3d09h, never expire
   Type: static, Flags: used
   NBMA address: 192.168.3.33

To trigger the creation of a dynamic tunnel, ping R6 (loopback 100, 10.0.100.102) from R5, as shown in Example 2-19, and then verify that a dynamic tunnel is created for the unicast flow between R5 and R6.

Example 2-19 ping from R5 to R6

R5# ping 10.0.100.102 source lo100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.100.102, timeout is 2 seconds:
Packet sent with a source address of 10.0.100.101
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

The highlighted part of Example 2-20 shows the creation of the dynamic tunnel between the R5 and R6 spokes for unicast communication within the region. The IP address 10.0.1.6 is the tunnel interface address of R6, and the route entry 10.0.100.102/32 points to this address. The show dmvpn output shows the creation of dynamic tunnel DT1 to 192.168.6.6 (the loopback address of R6) through tunnel address 10.0.1.6 at R6. This dynamic tunnel creation is a result of spoke-to-spoke communication within the region. Issuing the show ip nhrp command again reveals the change in the next-hop routes, as shown on R5 in Example 2-20.

Example 2-20 Issuing show ip nhrp on R5 Again

R5# show ip nhrp
10.0.1.3/32 via 10.0.1.3
   Tunnel3 created 3d09h, never expire
   Type: static, Flags: used
   NBMA address: 192.168.3.33
10.0.1.6/32 via 10.0.1.6
   Tunnel3 created 00:00:04, expire 01:59:55
   Type: dynamic, Flags: router used nhop rib
   NBMA address: 192.168.6.6
10.0.100.101/32 via 10.0.1.5
   Tunnel3 created 00:00:04, expire 01:59:55
   Type: dynamic, Flags: router unique local
   NBMA address: 192.168.5.5
    (no-socket)
10.0.100.102/32 via 10.0.1.6
   Tunnel3 created 00:00:04, expire 01:59:55
   Type: dynamic, Flags: router rib nho
   NBMA address: 192.168.6.6
R5# show dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        T1 - Route Installed, T2 - Nexthop-override
        C - CTS Capable
        # Ent --> Number of NHRP entries with same NBMA peer
        NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
        UpDn Time --> Up or Down Time for a Tunnel
==========================================================================
Interface: Tunnel3, IPv4 NHRP Details
Type:Spoke, NHRP Peers:2,
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 192.168.3.33           10.0.1.3    UP    3d07h     S
     2 192.168.6.6            10.0.1.6    UP 00:00:09   DT1
                              10.0.1.6    UP 00:00:09   DT2
Unicast Spoke-to-Spoke Interregional Communication

In Example 2-21, R7 (located in another region) initiates a ping to R5 (where loopback 100 at R5 is 10.0.100.101). Observe the dynamic tunnel created for spoke-to-spoke communication.

Example 2-21 Dynamic Tunnel Creation on R7

R7# ping 10.0.100.101 source lo100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.100.101, timeout is 2 seconds:
Packet sent with a source address of 10.2.100.102
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
 
R7# show dmvpn
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
        N - NATed, L - Local, X - No Socket
        T1 - Route Installed, T2 - Nexthop-override
        C - CTS Capable
        # Ent --> Number of NHRP entries with same NBMA peer
        NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
        UpDn Time --> Up or Down Time for a Tunnel
==========================================================================
 
Interface: Tunnel4, IPv4 NHRP Details
Type:Spoke, NHRP Peers:2,
 
# Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
     1 192.168.5.5            10.0.1.5    UP 00:00:22     D
     1 192.168.4.44           10.0.2.4    UP 08:41:12     S

After the ping is initiated, a dynamic tunnel is created from R7 to R5, as shown in the highlighted output. In the show dmvpn output, 192.168.5.5 (Loopback0 at R5, the NBMA address) appears as a dynamic tunnel entry (D) associated with the tunnel address of R5 (10.0.1.5). The ping test from R7 created a need for a data path and hence created this dynamic spoke-to-spoke tunnel relationship.

Unicast Spoke-to-Central Hub Communication

From R5, a ping to 10.0.100.100 (loopback 100 at R2) shows a regional spoke communicating with the central hub through a dynamic tunnel created within the same NHRP domain. The highlighted output in Example 2-22 shows the creation of the dynamic tunnel for the unicast flow. 10.0.0.1 is the tunnel IP address at R2, 10.0.1.3 is the tunnel IP address at R3, 10.0.1.5 is the tunnel IP address at R5, and 10.0.1.6 is the tunnel IP address at R6.

Example 2-22 Dynamic Tunnel Usage

R5# ping 10.0.100.100 source lo 100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.100.100, timeout is 2 seconds:
Packet sent with a source address of 10.0.100.101
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
R5# show ip nhrp
10.0.0.1/32 via 10.0.0.1
   Tunnel3 created 00:00:02, expire 00:05:57
   Type: dynamic, Flags: router nhop rib
   NBMA address: 192.168.2.2
10.0.1.3/32 via 10.0.1.3
   Tunnel3 created 3d09h, never expire
   Type: static, Flags: used
   NBMA address: 192.168.3.33
10.0.1.6/32 via 10.0.1.6
   Tunnel3 created 00:04:20, expire 01:55:39
   Type: dynamic, Flags: router used nhop rib
   NBMA address: 192.168.6.6
10.0.100.100/32 via 10.0.0.1
   Tunnel3 created 00:00:02, expire 00:05:57
   Type: dynamic, Flags: router rib nho
   NBMA address: 192.168.2.2
10.0.100.101/32 via 10.0.1.5
   Tunnel3 created 00:04:20, expire 01:55:39
   Type: dynamic, Flags: router unique local
   NBMA address: 192.168.5.5
    (no-socket)
10.0.100.102/32 via 10.0.1.6
   Tunnel3 created 00:04:20, expire 01:55:39
   Type: dynamic, Flags: router rib nho
   NBMA address: 192.168.6.6
Traffic Path of a Multicast Stream Sourced at the Central Hub

The source from the central hub transmits a multicast stream (239.1.1.1), and the receiver for this flow is at R5 (a regional spoke router). In Example 2-23, the show ip mroute command shows R5 receiving the multicast flow from the hub.

Example 2-23 R5 Receiving the Flow

R5# show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
 
(*, 239.1.1.1), 00:01:23/stopped, RP 10.0.100.100, flags: SJCL
  Incoming interface: Tunnel3, RPF nbr 10.0.1.3
  Outgoing interface list:
    Loopback100, Forward/Sparse, 00:01:23/00:02:17
 
(10.0.100.100, 239.1.1.1), 00:01:21/00:01:38, flags: LJT
  Incoming interface: Tunnel3, RPF nbr 10.0.1.3
  Outgoing interface list:
    Loopback100, Forward/Sparse, 00:01:21/00:02:17
Intra-regional Multicast Flow Between Two Spokes

Looking at the multicast stream from the source at R5 (239.192.1.1) to the receiver at R6, the show ip mroute command at R6 shows the formation of (*, G) and (S, G) states. Example 2-24 shows the output of R6.

Example 2-24 R6 mroute Table

R6# show ip mroute 239.192.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.192.1.1), 4d03h/stopped, RP 10.0.100.103, flags: SJCLF
  Incoming interface: Tunnel3, RPF nbr 10.0.1.3
  Outgoing interface list:
    Loopback100, Forward/Sparse, 4d03h/00:02:44
 
(10.0.100.101, 239.192.1.1), 00:00:10/00:02:49, flags: LJT
  Incoming interface: Tunnel3, RPF nbr 10.0.1.3
  Outgoing interface list:
    Loopback100, Forward/Sparse, 00:00:10/00:02:49
 
(10.0.1.5, 239.192.1.1), 00:00:10/00:02:49, flags: LFT
  Incoming interface: Tunnel3, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback100, Forward/Sparse, 00:00:10/00:02:49

The output at R3 shows that replication of the multicast stream 239.192.1.1 takes place at the regional hub (R3), as shown in Example 2-25. The design localizes the regional multicast stream and its replication. The show ip mroute command provides the state information; note that both the incoming interface (IIF) and the outgoing interface list (OIL) for the (S, G) entry use Tunnel3.

Example 2-25 Stream Replication at R3

R3# show ip mroute 239.192.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector, p - PIM Joins on route,
       x - VxLAN group
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode
(*, 239.192.1.1), 3d07h/00:03:26, RP 10.0.100.103, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel3, 10.0.1.6, Forward/Sparse, 14:05:23/00:03:26
 
(10.0.100.101, 239.192.1.1), 00:00:03/00:02:56, flags:
  Incoming interface: Tunnel3, RPF nbr 10.0.1.5
  Outgoing interface list:
    Tunnel3, 10.0.1.6, Forward/Sparse, 00:00:03/00:03:26

At the central hub (R2), there is no state for the regional stream. As shown in Example 2-26, the show ip mroute command returns no multicast state for 239.192.1.1.

Example 2-26 show ip mroute at R2

R2-CH# show ip mroute  239.192.1.1
Group 239.192.1.1 not found
R2-CH#

R2 does not have the regional stream 239.192.1.1; rather, replication takes place at R3, the regional hub.

One of the best ways to implement QoS in an overlay infrastructure is with DMVPN. With DMVPN, you can apply traffic shaping at the hub tunnel interface on a per-spoke or per-spoke-group basis. Using dynamic QoS policies, you can cover multiple branch locations with a single QoS template defined at the hub, and the policy is applied to a spoke dynamically when the spoke comes up.
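
The following is a minimal sketch of this per-spoke-group approach using the Per-Tunnel QoS for DMVPN feature. The NHRP group name, policy name, and shaper rate are placeholders and are not part of the lab configurations shown earlier; verify the exact syntax for your software release.

! Hub: define a shaping policy and bind it to an NHRP group
policy-map SPOKE-10MB
 class class-default
  shape average 10000000
!
interface Tunnel1
 ip nhrp map group SPOKE-10MB service-policy output SPOKE-10MB
!
! Spoke: advertise its NHRP group at registration so the hub applies
! the matching policy to the tunnel built toward this spoke
interface Tunnel3
 ip nhrp group SPOKE-10MB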

Note: This book does not aim to review the details of the current or future DMVPN implementations. Instead, it provides an introduction to mGRE, which simplifies administration on the hub or spoke tunnels.

Enabling Multicast to the CSP Use Case 1

This section reviews how to enable multicast in a cloud service with a cloud broker. Figure 2-12 shows the first use case.

Figure 2-12 Enabling Multicast to the CSP Use Case 1

To review the design options available to deploy multicast, let’s divide the enterprise access to the cloud service provider into two segments, A and B, as shown in Figure 2-12.

Segment A covers the data center connectivity to the cloud broker. Multiple choices are available for this, as discussed earlier in this chapter: point-to-point virtual circuit (VC) provided by the network service provider (NSP) and MPLS Layer 3 VPN services for the NSP.

Point-to-Point VC Provided by the NSP

With a direct VC (a dedicated point-to-point circuit provided by the NSP), enabling multicast is simple. Segment A terminates in the colocation (COLO) space at the carrier-neutral facility, where the enterprise hosts network devices that offer enterprise-class features.

Segment B provides the transport of multicast from the COLO facility to the CSP. Multicast feature support is not available in the CSP network. In this case, the design engineer uses the service reflection feature and converts the multicast stream to a unicast stream.
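
As a minimal sketch, the COLO edge router can reuse the multicast-to-unicast service reflection approach from Use Case 3. The group, the VIF subnet, and the unicast destination inside the CSP shown below are placeholders:

! Hypothetical COLO edge router: translate the enterprise feed 239.1.1.1
! into a unicast stream toward a receiver instance in the CSP (192.0.2.10)
interface Vif1
 ip address 10.99.1.1 255.255.255.0
 ip service reflect GigabitEthernet0/0 destination 239.1.1.1 to 192.0.2.10 mask-len 32 source 10.99.1.2
 ip pim sparse-mode
 ip igmp static-group 239.1.1.1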

MPLS Layer 3 VPN Services for the NSP

Instead of using a direct VC connection, the customer may leverage the SP-provided MPLS Layer 3 VPN service. Multicast support from the MPLS VPN SP needs to be verified; if it is supported, the customer can use the MPLS VPN service to transport multicast traffic. Segment A again terminates in the COLO space at the carrier-neutral facility, where the enterprise hosts network devices that provide enterprise-class features. The termination extends the connection to the MPLS VPN cloud with control plane features such as QoS and routing protocols.

Segment B provides the transport of multicast traffic from the COLO facility to the CSP. Multicast feature support is generally not available in the CSP. The same design principle of using the service reflection feature to convert multicast to a unicast stream can be used here.

Enabling Multicast to the CSP Use Case 2

In this use case, the enterprise data center does not have connectivity to a cloud broker. The connectivity to the CSP is direct via an NSP. Connectivity for Segment A has two options: direct VC access and Internet access. Figure 2-13 illustrates this use case.

Figure 2-13 Enabling Multicast to the CSP Use Case 2

With direct VC access, the enterprise requires network services hosted in the virtual private cloud (VPC). This is accomplished by using a VNF routing instance (such as a CSR 1000v). The enterprise can either buy these instances or rent them from the CSP. Here are two options for the transport of multicast across the NSP:

Conversion of multicast to unicast can be done at the enterprise data center, using the service reflection feature. In this case, VNF in the cloud is not necessary for multicast.

GRE tunnels can be used to transport multicast from the enterprise data center to the CSR 1000v hosted in the VPC. At the CSR 1000v, the administrator can use the service reflection feature to convert multicast to unicast. Using the CSR 1000v provides visibility into and support for enterprise-centric features.

Note: There’s a third, non-Cisco-specific option, which is not scalable. It is possible to terminate GRE on the compute instance at both ends. For example, Linux supports GRE tunnels directly in the AWS VPC. However, with this solution, you lose visibility of and support for enterprise-centric features.

When the connectivity for Segment A from the enterprise to the CSP is via the Internet, multicast support is possible through two options:

Create an overlay GRE network from the data center to the enterprise VPC located at the CSP. The tunnel endpoints are between a data center router and a CSR 1000v hosted at the enterprise VPC. Once the traffic hits the VPC, the administrator can use the multicast service reflection feature to convert the multicast feed to unicast. (A configuration sketch of the CSR 1000v end follows these options.)

If the enterprise does not host a CSR 1000v instance at the VPC, the conversion of multicast to unicast is done at the enterprise data center router.
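
The following minimal sketch shows what the CSR 1000v end of the GRE overlay option described above might look like. The tunnel addressing, the underlay peer address, and the data center source prefix are hypothetical:

! Hypothetical CSR 1000v in the VPC: terminate the GRE overlay from the
! data center router and run PIM over it
interface Tunnel10
 ip address 172.16.255.2 255.255.255.252
 ip pim sparse-mode
 tunnel source GigabitEthernet1
 tunnel destination 198.51.100.1
!
! Force the RPF check for data center sources through the overlay
ip mroute 10.1.0.0 255.255.0.0 Tunnel10
!
! The multicast-to-unicast conversion on the CSR 1000v then follows the
! same Vif1 service reflection pattern shown in Example 2-11.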

Summary

Enterprises are increasingly adopting public cloud services. It is therefore important to understand the techniques and considerations involved in transporting multicast to and from a cloud service provider, especially when the cloud service provider does not support native multicast. This chapter reviewed the different cloud models an enterprise may use (including SaaS, IaaS, and PaaS) and the various network access types to the cloud. Service reflection is an important feature used to convert multicast to a unicast stream, or vice versa. Using NFV with a CSR 1000v is a powerful way for an enterprise to provide enterprise network features in an IaaS public cloud environment. Because multicast functionality is currently not supported in public clouds, it is important to understand cloud network access types and how to use NFV or physical hardware with enterprise features such as service reflection and DMVPN to help provide multicast service in the public IaaS environment.
