© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
L. E. Hughes, Third Generation Internet Revealed, https://doi.org/10.1007/978-1-4842-8603-6_8

8. Transition Mechanisms

Lawrence E. Hughes
Frisco, TX, USA

This chapter covers a variety of protocols and mechanisms that were created to simplify the introduction of IPv6 into the Internet. The goal is not to make an abrupt transition from all-IPv4 to all-IPv6 on some kind of “flag day” (as happened in the transition from the First Internet to the Second Internet). That would be unbelievably disruptive and unlikely to succeed. The goal is to gradually add new capabilities that take advantage of IPv6, or work far better over it (e.g., IPsec VPN, SIP, IPTV,1 and most other multicast), while continuing to use IPv4 for those things that work tolerably well over IPv4 with NAT (e.g., web, email, FTP, SSH,2 and most client-server with intermediary servers). This allows immediate alleviation of the most grievous problems caused by widespread deployment of NAT and other shortcomings of IPv4 while allowing a longer, more controlled migration of those protocols that do not benefit as much from IPv6. Eventually, all protocols and applications will be migrated (with a few exceptions – likely Skype can never be ported to IPv6, being heavily based on NAT traversal), and IPv4 can quietly be dropped from operating systems and hardware. However, this will probably be 5–10 years from now. As more and more applications are transitioned to IPv6, that will take the pressure off the remaining stock of IPv4 addresses.

Most of these transition mechanisms are defined in RFCs as part of the IPv6 standards. There are many mechanisms, some with confusingly similar names, such as “6in4,” “6to4,” and “6over4,” which are all quite different. Most deployments of IPv6 will use one or more of these transition mechanisms; none will use all of them. Some of the transition mechanisms are designed for use in the early phases of the transition, where there is an “ocean” of IPv4 with small (but growing) islands of IPv6 (e.g., 6in4 tunneling). Some are for use in the later stages of the transition, where the Internet has flipped into an “ocean” of IPv6, with small (and shrinking) islands of IPv4 (e.g., 4in6 tunneling, Dual-Stack Lite). Some are for use in the end stages of the transition where some networks are “IPv6-only” with no IPv4 present (e.g., NAT64/DNS64 to allow reaching legacy external IPv4-only servers from an IPv6-only node).

Since 2010, Teredo, ISATAP, and 6over4 have fallen out of favor, while 6in4, 6rd, and NAT64/DNS64 have become more widely used. 6in4 has the disadvantage that the user must have at least one public IPv4 address in their network to serve as one endpoint of the tunnel. These are becoming extremely difficult to obtain. No phones have them, few residential accounts have any, and even business accounts are getting fewer and fewer of them over time. Again, the transition was supposed to be complete by 2010, before IPv4 public addresses were totally depleted. 6rd works relatively well even without a public IPv4 address at the customer site.

A new standard, 464XLAT, has emerged for mobile devices, which allows telcos to deploy IPv6-only service to customer phones while allowing legacy (IPv4-only) apps to still work. All recent Android phones include support for 464XLAT. This approach is being widely deployed in the United States today.

Relevant Standards for Transition Mechanisms

RFCs related to transition mechanisms are listed in the following. RFCs from the Softwires working group (Dual-Stack Lite, MAP-E, MAP-T, 4in6) are maintained separately, under Softwires.3
  • RFC 2473, “Generic Packet Tunneling in IPv6 Specification,” December 1998 (Standards Track) [4in6]

  • RFC 2529, “Transmission of IPv6 over IPv4 Domains Without Explicit Tunnels,” March 1999 (Standards Track) [6over4]

  • RFC 3053, “IPv6 Tunnel Broker,” January 2001 (Informational)

  • RFC 3056, “Connection of IPv6 Domains via IPv4 Clouds,” February 2001 (Standards Track) [6to4]

  • RFC 3089, “A SOCKS-Based IPv6/IPv4 Gateway Mechanism,” April 2001 (Informational)

  • RFC 3142, “An IPv6-to-IPv4 Transport Relay Translator,” June 2001 (Informational)

  • RFC 3964, “Security Considerations for 6to4,” December 2004 (Informational) [6to4]

  • RFC 4038, “Application Aspects of IPv6 Transition,” March 2005 (Informational)

  • RFC 4213, “Basic Transition Mechanisms for IPv6 Hosts and Routers,” October 2005 (Standards Track) [Dual Stack, 6in4]

  • RFC 4241, “A Model of IPv6/IPv4 Dual Stack Internet Access Service,” December 2005 (Informational)

  • RFC 4380, “Teredo: Tunneling IPv6 over UDP Through Network Address Translations (NATs),” February 2006 (Standards Track) [Teredo]

  • RFC 4798, “Connecting IPv6 Islands over IPv4 MPLS Using IPv6 Provider Edge Routers (6PE),” February 2007 (Standards Track)

  • RFC 4942, “IPv6 Transition/Co-existence Security Considerations,” September 2007 (Informational)

  • RFC 5158, “6to4 Reverse DNS Delegation Specification,” March 2008 (Informational) [6to4]

  • RFC 5214, “Intra-Site Automatic Tunnel Addressing Protocol (ISATAP),” March 2008 (Informational) [ISATAP]

  • RFC 5569, “IPv6 Rapid Deployment on IPv4 Infrastructures (6rd),” January 2010 (Informational) [6rd]

  • RFC 5572, “IPv6 Tunnel Broker with the Tunnel Setup Protocol (TSP),” February 2010 (Experimental) [TSP]

  • RFC 5579, “Transmission of IPv4 Packets over Intra-Site Automatic Tunnel Addressing Protocol (ISATAP) Interfaces,” February 2010 (Informational)

  • RFC 5902, “IAB Thoughts on IPv6 Network Address Translation,” July 2010 (Informational)

  • RFC 6052, “IPv6 Addressing of IPv4/IPv6 Translators,” October 2010 (Proposed Standard)

  • RFC 6127, “IPv4 Run-Out and IPv4-IPv6 Co-Existence Scenarios,” May 2011 (Informational)

  • RFC 6146, “Stateful NAT64: Network Address and Protocol Translation from IPv6 Clients to IPv4 Servers,” April 2011 (Proposed Standard)

  • RFC 6147, “DNS64: DNS Extensions for Network Address Translation from IPv6 Clients to IPv4 Servers,” April 2011 (Proposed Standard)

  • RFC 6180, “Guidelines for Using IPv6 Transition Mechanisms During IPv6 Deployment,” May 2011 (Informational)

  • RFC 6219, “The China Education and Research Network (CERNET) IVI Translation Design and Deployment for the IPv4/IPv6 Coexistence and Transition,” May 2011 (Informational)

  • RFC 6324, “Routing Loop Attack Using IPv6 Automatic Tunnels: Problem Statement and Proposed Mitigations,” August 2011 (Informational)

  • RFC 6343, “Advisory Guidelines for 6to4 Deployment,” August 2011 (Informational)

  • RFC 6384, “An FTP Application Layer Gateway (ALG) for IPv6-to-IPv4 Translation,” October 2011 (Proposed Standard)

  • RFC 6535, “Dual-Stack Hosts Using “Bump-in-the-Host” (BIH),” February 2012 (Proposed Standard)

  • RFC 6586, “Experiences from an IPv6-Only Network,” April 2012 (Informational)

  • RFC 6654, “Gateway-Initiated IPv6 Rapid Deployment on IPv4 Infrastructures (GI 6rd),” July 2012 (Informational)

  • RFC 6889, “Analysis of Stateful 64 Translation,” April 2013 (Informational)

  • RFC 7021, “Assessing the Impact of Carrier-Grade NAT on Network Applications,” September 2013 (Informational)

  • RFC 7050, “Discovery of the IPv6 Prefix Used for IPv6 Address Synthesis,” November 2013 (Standards Track)

  • RFC 7051, “Analysis of Solution Proposals for Hosts to Learn NAT64 Prefix,” November 2013 (Informational)

  • RFC 7084, “Basic Requirements for IPv6 Customer Edge Routers,” November 2013 (Informational)

  • RFC 7225, “Discovering NAT64 IPv6 Prefixes Using the Port Control Protocol (PCP),” May 2014 (Proposed Standard)

  • RFC 7269, “NAT64 Deployment Options and Experience,” June 2014 (Informational)

  • RFC 7648, “Port Control Protocol (PCP) Proxy Function,” September 2015 (Proposed Standard)

  • RFC 7857, “Updates to Network Address Translation (NAT) Behavioral Requirements,” April 2016 (Best Current Practice)

  • RFC 7915, “IP/ICMP Translation Algorithm,” June 2016 (Standards Track)

  • RFC 8215, “Local-Use IPv4/IPv6 Translation Prefix,” August 2017 (Proposed Standard)

  • RFC 8219, “Benchmarking Methodology for IPv6 Transition Technologies,” August 2017 (Informational)

Transition Mechanisms

There are four general classes of transition mechanisms to help us get from all-IPv4 through a mixture of IPv4 and IPv6 (“dual stack”) to eventually all-IPv6.

Co-existence (Dual Stack and Dual-Stack Lite)

Co-existence involves all client and server nodes supporting both IPv4 and IPv6 in their network stacks. The only mechanisms in this group are dual stack and Dual-Stack Lite. This is the most general solution but also involves running essentially two complete networks that share the same infrastructure. It does not double network traffic, as some administrators fear: any new connection over IPv6 is typically one less connection over IPv4. Over time, an increasing percentage of the traffic on any network will be IPv6, but the only increase in overall traffic will be from the usual suspects (increasing numbers of applications, users, and/or customers), not from supporting dual stack. In fact, at some point you will see the total amount of IPv4 traffic begin to decrease. You may see an increase in incoming customer connections (from devices that support IPv6) because every IPv6 node can now also accept connections. When YouTube started accepting connections over IPv6, there was an enormous and almost instant jump in IPv6 traffic on the backbone. Many nodes are ready to begin using IPv6 as soon as content is available, because of automated tunneling. In many cases, the end users might not even have been aware that they were now connecting over IPv6.

As an example, Facebook reports that over 90% of connections from US mobile phones are now over IPv6. Few of these users are even aware that they have IPv6 service.

There is a recent variant of the dual-stack concept called Dual-Stack Lite that uses the basic dual-stack design but adds in IP-in-IP tunneling and ISP-based Network Address Translation to allow an ISP to share precious IPv4 addresses among multiple customers. It is defined in RFC 6333,4 “Dual-Stack Lite Broadband Deployments Following IPv4 Exhaustion,” August 2011. There is additional information in RFC 6908,5 “Deployment Considerations for Dual-Stack Lite,” March 2013, and in RFC 7870,6 “Dual-Stack Lite (DS-Lite) Management Information Base (MIB) for Address Family Transition Routers (AFTRs),” June 2016. In my previous book (The Second Internet), there was only an Internet Draft on Dual-Stack Lite.

Figure 8-1

Example dual-stack network

Tunneling

Tunneling involves creating IP-in-IP tunnels with a variety of mechanisms to allow sending IPv6 traffic over existing IPv4 infrastructure by adding an IPv4 packet header to the front of an entire IPv6 packet. This treats the entire IPv6 packet, including IPv6 packet header(s), TCP/UDP header, and payload fields, as a “black box” payload of an IPv4 packet. In the later phases of the transition, this is reversed: an entire IPv4 packet, including IPv4 packet header and options, TCP/UDP header, and payload fields, is treated as a “black box” payload of an IPv6 packet. Some of these tunnel mechanisms are “automatic” (no setup required); others require manual setup. Some require authentication, while others do not. The benefit is to leverage the existing IPv4 infrastructure as a transport for IPv6 traffic, without having to wait for ISPs and equipment vendors to support IPv6 everywhere before anyone can start using it. This allows early adopters to deploy nodes and entire networks today, regardless of whether or not their ISP supports IPv6. In some cases (e.g., tunnels to a gateway router or firewall), when the ISP does provide dual-stack service, it is a simple process to change from tunneled service to direct service, and the change is largely transparent to inside users. Several organizations provide free tunneled IPv6 service (using various tunnel mechanisms) during the transition, to help with the adoption of IPv6. Tunneling mechanisms include 6in4, 4in6, 6to4, 6over4, and Teredo. TSP has fallen by the wayside. Many operating system features and installable clients are available to make use of these tunneling mechanisms.

Figure 8-2

Typical 6in4 tunnel

Translation

This is basically Network Address Translation (with all its attendant problems), this time between IPv4 and IPv6 (as opposed to the more traditional NAT, which is IPv4 to IPv4). An IPv6-to-IPv4 translation gateway allows an IPv6-only internal node to access external IPv4-only nodes and allows replies from those legacy IPv4 nodes to be returned to the originating internal IPv6 node. Connections from an internal IPv6-only node to external IPv6-only or dual-stack nodes would be done as usual over IPv6 (without going through the translation gateway). This would be useful for deploying IPv6-only nodes in a predominantly IPv4 world. An IPv4-to-IPv6 gateway would allow an IPv4-only internal node to access external IPv6-only nodes and allow replies from those external IPv6 nodes to be returned to the internal IPv4-only node. Connections from an internal IPv4-only node to external IPv4-only nodes, or to dual-stack nodes, would be done as usual over IPv4 (without going through the translation gateway). This would be useful for deploying IPv4-only nodes in a predominantly IPv6 world. Some of these mechanisms require considerable modification to (and interaction with) DNS, such as NAT-PT and NAT64 + DNS64.

There are two broad classes of Network Address Translation between IPv4 and IPv6 – those that work at the IP Layer and are transparent to upper layers and protocols and those that work at the Application Layer (i.e., Application Layer gateways, also called proxies). The IP Layer mechanisms need only be implemented once, for all possible Application Layer protocols. Unfortunately, they also have the most technical issues.

Figure 8-3

Typical NAT64/DNS64 translation

There has been a lot of work since 2010 on NAT64/DNS64, to provide access to legacy IPv4 nodes from otherwise IPv6-only networks via a NAT64 gateway on the network border. This was experimental and not very useful at the time of my previous book. NAT64 is specified in RFC 6146,7 “Stateful NAT64: Network Address and Protocol Translation from IPv6 Clients to IPv4 Servers,” April 2011. There is more information available in RFC 7269,8 “NAT64 Deployment Options and Experience,” June 2014. There are several commercial and open source implementations of NAT64 gateways. NAT64 requires use of DNS64 by all clients using the gateway. DNS64 is a variant of DNS, specified in RFC 6147,9 “DNS64: DNS Extensions for Network Address Translation from IPv6 Clients to IPv4 Servers,” April 2011.
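The address synthesis that DNS64 performs can be illustrated with a short sketch using the RFC 6052 well-known prefix 64:ff9b::/96 (an operator may configure a different prefix): the 32-bit IPv4 address is simply embedded in the low-order bits of the 96-bit prefix.

```python
import ipaddress

# RFC 6052 well-known prefix that DNS64 uses to synthesize AAAA records
# for IPv4-only destinations: 96-bit prefix + embedded 32-bit IPv4 address.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(ipv4_literal: str) -> str:
    """Embed an IPv4 address in the NAT64 prefix, as DNS64 does when a
    queried name has no real AAAA record."""
    v4 = ipaddress.IPv4Address(ipv4_literal)
    return str(ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4)))

print(synthesize_aaaa("192.0.2.33"))  # → 64:ff9b::c000:221
```

The IPv6-only client then connects to this synthesized address; the NAT64 gateway recognizes the prefix, extracts the embedded IPv4 address, and translates the session.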

464XLAT is specified in RFC 6877,10 “464XLAT: Combination of Stateful and Stateless Translation,” April 2013.

As pointed out in the 2014 OECD report, the big benefits from IPv6 deployment will come when you can phase out IPv4 (at least in the main network). There will be legacy (IPv4-only) nodes for some time to come that you might want to connect to, but that can be handled by a NAT64/DNS64 gateway at the border of an IPv6-only network. Even though there are problems with NAT64 11 (as with any NAT), where there are problems (e.g., VoIP, IPsec), people can switch to IPv6 for those protocols, while the easy stuff will work via NAT64. Over time, as more and more external sites support IPv6, there will be less and less need for NAT64. Meanwhile, we can get IPv4 out of our production networks, which will make network management and security much better and cheaper.

The home and corporate networks of the near future will be IPv6-only with access to legacy nodes via NAT64/DNS64.

Proxies (Application Layer Gateways)

The other kind of translation mechanism takes place at the Application Layer. They are called proxies, because they do things “on behalf of” other servers, much like a stock proxy voter will vote your stock on your behalf. They are also called Application Layer gateways (ALGs) because they are gateways (they do forwarding of traffic from one interface to another), and they work at the Application Layer of the TCP/IP four-layer model. They don’t have the serious problems found in IP Layer translation mechanisms, such as dealing with IP addresses embedded in protocols (like SIP or FTP). However, there are some problems unique to proxies.

A proxy must be written for every protocol to be translated, and often even different proxies for incoming and outgoing traffic, even for a given protocol (e.g., “SMTP in” and “SMTP out”). Typically, each proxy is a considerable amount of work. Often only a handful of the most important protocols will be handled by proxies, while all other protocols are handled by packet filtering.

Writing a proxy involves implementing most or all of the network protocol, although sometimes in a simplified manner (e.g., there is no need to store incoming email messages in a way suitable for retrieval by POP3 or IMAP; they just need to be queued by destination domain for retransmission by SMTP).

Proxies can support SSL/TLS, but the secure connection extends only from client to proxy and/or from proxy to server (not directly from client to server). This includes both encryption (the traffic will be in plain text on the proxy) and authentication (authentication is only from server to proxy and/or proxy to client, not from server to client). Typically, another digital certificate is required for the proxy server if it supports SSL/TLS (in addition to the one for the server).

Proxies can’t work with traffic secured in the IP Layer (IPsec ESP), without access to the keys necessary to decrypt the packets.

Throughput is typically lower than with a packet filtering firewall, due to the need to process the protocol. Of course, the security is much better – it won’t let through traffic that is not a valid implementation of the specific protocol, while packet filtering might let through almost anything so long as it uses the right port. There is typically no problem dealing with IP addresses embedded in a protocol.

In many cases, the proxies are not transparent, which means the client must know that it is talking not directly to a server, but via an intermediate proxy. Many protocols support this kind of operation, for example, HTTP provides good support for an HTTP proxy. Basically, there must be a way for a client to specify not only the nodename of the final server but also the address or nodename of the proxy server. In a browser (HTTP client), the nodename of the final server is specified as usual, and the address of the proxy server is specified during the browser configuration (“use a proxy, which is at address w.x.y.z”). When configured for proxy operation, the browser actually connects to the proxy address and relays the address of the final server to the proxy. The proxy then makes an ongoing connection to the final web server. Some protocols have no support for proxy-type operation (e.g., FTP). It is possible for a firewall to recognize outgoing traffic over a given port and automatically redirect it to a local proxy.
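The browser configuration described above can also be done programmatically. A minimal sketch using Python's standard library, with a hypothetical proxy at a documentation address (192.0.2.10:3128):

```python
import urllib.request

# Hypothetical outbound proxy at 192.0.2.10:3128 (documentation address).
# The client connects to the proxy and passes the final server's name in
# the request; the proxy makes the ongoing connection, possibly over a
# different IP version than the client used.
proxy = urllib.request.ProxyHandler({"http": "http://192.0.2.10:3128"})
opener = urllib.request.build_opener(proxy)
# opener.open("http://www.example.com/")  # would traverse the proxy
```

This is exactly the “use a proxy, which is at address w.x.y.z” setting in a browser, expressed in code.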

Application Layer gateways (e.g., for SIP, HTTP, and SMTP) work quite well. Basically, they accept a connection on one interface of a gateway and make a second “ongoing” connection (on behalf of the original node) via another interface of the same gateway. It is easy for the two connections to use different IP versions (e.g., translate IPv4 traffic to IPv6 traffic or vice versa). In some ALGs an entire message might be spooled onto temporary storage (e.g., email messages) and then retransmitted later. In other cases, the ongoing connection would be simultaneous with the incoming connection and bidirectional (e.g., with HTTP). This would correspond to a human “simultaneous translator” who hears one language (e.g., Chinese), translates, and simultaneously speaks another language (e.g., English).

Another example of this is an outgoing web proxy, which could accept connections from either IPv4-only or IPv6-only browsers and then make an ongoing connection to external servers using whatever version of IP those servers support (based on DNS queries). Again, this is a traditional (forward) web proxy, with the addition of IP version translation. This would allow IPv4-only or IPv6-only clients to access any external web server, regardless of IP version they support. Such a proxy could of course also provide any services normally done by an outgoing web proxy, such as caching and URL filtering.

Another example of this is a dual-stack façade that would accept incoming connections from outside over either IPv4 or IPv6 and make an ongoing connection over IPv4 to an internal IPv4-only (or over IPv6 to an IPv6-only) web server. It would relay the web server’s responses using whatever version of IP was used in the original incoming connection to the client. This is a typical “reverse” web proxy, with the addition of IP version translation. This kind of translation can help you provide dual-stack versions of your web services quickly and easily, without having to dual-stack the actual servers themselves. The same technique could allow you to make your email services dual stack without having to modify your existing mail server.

Dual Stack

Dual stack is defined in RFC 4213,12 “Basic Transition Mechanisms for IPv6 Hosts and Routers,” October 2005. A dual-stack node should include code in the Internet Layer of its network stack to process both IPv4 and IPv6 packets. Typically, there is a single Link Layer that can send and receive either IPv4 or IPv6 packets. The Link Layer also contains both the IPv4 Address Resolution Protocol (ARP) and the IPv6 Neighbor Discovery (ND) protocol. The Transport Layer has only minor differences in the way IPv4 and IPv6 packets are handled, primarily concerning the way the TCP or UDP checksum is calculated (the checksum also covers the source and destination IP addresses from the IP header, which of course is different in the two IP versions). The Application Layer code can make calls to routines in the IPv4 socket API, the IPv6 basic socket API, and the IPv6 advanced socket API. IPv4 socket functions will access the IPv4 side of the IP Layer, and IPv6 socket functions will access the IPv6 side of the IP Layer.

Figure 8-4

Four-layer network model for dual stack

The node should include the ability to do conventional IPv4 network configuration (including a node address, default gateway, subnet mask, and addresses of DNS servers, all as 32-bit IPv4 addresses). This configuration can be done manually, via DHCPv4, or by some combination thereof. The node should also include the ability to do conventional IPv6 network configuration (including a link-local IP address, one or more global unicast addresses, a default gateway, the prefix length, and the addresses of DNS servers, all 128-bit IPv6 addresses). This configuration can be done manually, automatically via Stateless Address Autoconfiguration, automatically by DHCPv6, or by some combination thereof. There is usually a way to disable either the IPv6 functionality (in which case the node behaves as an IPv4-only node) or the IPv4 functionality (in which case the node behaves as an IPv6-only node). There may or may not also be some tunneling mechanism involved. If the node is in a native dual-stack network, no tunnel mechanism is seen by the user (any tunnel involved will be between the user’s Customer Premises Equipment and the IPv6 service provider, not inside the network). If the node is in an IPv4-only or an IPv6-only network, there will need to be a tunnel mechanism to bring in traffic of the other IP version (typically 6in4, 4in6, or 6rd).

IPv4-only and IPv6-only applications (client, server, and peer-to-peer) will work just fine on a dual-stack node. They will make calls to system functions on only one side of the network stack. They will not gain any new ability to accept or make connections over the other IP version just because they are running on a dual-stack node.

A dual-stack client can connect to IPv4-only servers, IPv6-only servers, or dual-stack servers. A dual-stack server can accept connections from IPv4-only clients, IPv6-only clients, or dual-stack clients. Dual stack is the most complete and flexible solution. The only issues are the additional complexity of implementation and deployment and the additional memory requirements. For very small devices (typically clients), dual stack may not be an option. Some critics of IPv6 claim that dual stack is not viable because we are running out of IPv4 addresses. What they are missing is that there are plenty of private IPv4 addresses for use behind NAT, and the IPv4 side of dual-stack systems can be used only for protocols where this is not a problem while using their IPv6 side for those protocols that are incompatible with NAT (IPsec VPN, SIP, P2P, etc.) or can benefit from other IPv6 features, which are superior to their IPv4 equivalents, such as multicast and QoS (for SIP, IPTV, conferencing, P2P Direct, etc.). Also, any application running on that node that needs to accept a connection from external nodes (e.g., your own web server) can use a global unicast IPv6 address (for IPv6-capable clients). If you want to accept connections from IPv4 clients, you would have needed a globally routable IPv4 address for that anyway or would need to deploy NAT traversal (with or without dual stack). Dual stack cannot create more globally routable IPv4 addresses. It can, however, allow you to easily make use of an almost unlimited number of globally routable IPv6 addresses (both unicast and multicast). It is common for only a few nodes in a dual-stack network to have IPv4 public addresses (or forwarding via NAT from a border node with a public IPv4 address), but every node can have a public (global) IPv6 address. If incoming connections are not blocked at a firewall, those nodes are accessible over IPv6 from anywhere on the global IPv6 Internet.
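On most modern stacks, a server can get dual-stack behavior from a single IPv6 listening socket. A minimal sketch (the default for the V6ONLY option varies by operating system, so this is illustrative rather than portable guidance):

```python
import socket

def make_dual_stack_listener(port: int) -> socket.socket:
    """One AF_INET6 listening socket that accepts native IPv6 connections
    and, with IPV6_V6ONLY cleared, also IPv4 connections (which appear as
    IPv4-mapped addresses, ::ffff:a.b.c.d)."""
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # allow IPv4 too
    s.bind(("::", port))  # IPv6 wildcard address covers both families
    s.listen(5)
    return s
```

The alternative is the split model discussed later: one IPv4-only listener and one IPv6-only listener, each serving its own family.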

A key part of a dual-stack network is a correctly configured dual-stack DNS service. It should not only be able to handle both A and AAAA records (as well as reverse PTR records for IPv4 and IPv6); it should also be able to accept queries and do zone transfers over both IPv4 and IPv6. A dual-stack network typically uses DHCPv4 to assign IPv4 addresses to each node and either Stateless Address Autoconfiguration and/or DHCPv6 to assign IPv6 addresses to each node. A dual-stack firewall can bring in either direct dual-stack service (both IPv4 and IPv6 traffic) from an ISP (if available), routing both to the inside network; or it can bring in direct IPv4 traffic from an ISP and terminate tunneled IPv6 traffic (from a “virtual” ISP usually different from the IPv4 ISP) and route both IPv4 and IPv6 into the inside network. In either case (direct dual-stack service or tunneled IPv6 with endpoint in the gateway), inside nodes appear to have native dual-stack service and require no support for tunneling.

The DNS support does not require any modifications to a standard DNS server (e.g., BIND). Virtually all current DNS servers and appliances have (at least some) support for IPv6. DNS just needs to be able to perform its normal forward and reverse lookups with either IPv4 (A/PTR) or IPv6 (AAAA/PTR) resource records. There is no need for the DNS server to do nonstandard mappings between IPv4 and IPv6 addresses as is required with most IP Layer translation schemes (e.g., NAT64 + DNS64).
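From the application's point of view, the dual-family lookup is a single call: getaddrinfo() merges A and AAAA results, each tagged by address family. A small sketch (resolving "localhost", which any resolver can answer locally):

```python
import socket

def lookup(host: str, port: int = 80):
    """Return (family, address) pairs for every A/AAAA record the
    resolver returns for the given name."""
    return [(fam, sockaddr[0]) for fam, _, _, _, sockaddr in
            socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)]

for fam, addr in lookup("localhost"):
    print("IPv6" if fam == socket.AF_INET6 else "IPv4", addr)
```

A dual-stack client never needs to know in advance which record types exist; it just walks the returned list.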

Migrating IPv4-only client or server applications to IPv6-only is quite simple. There is essentially a one-to-one mapping of function calls from the IPv4 socket API to similar ones in the IPv6 basic socket API. Of course, more storage is required for each IP address in data structures (4 bytes for IPv4 addresses, 16 bytes for IPv6 addresses).

Modifying either IPv4-only clients or IPv4-only servers to dual-stack operation is somewhat more complicated. A dual-stack client must be modified to retrieve multiple addresses from a forward lookup (IPv4 and/or IPv6) and try connections sequentially through the returned address list until a connection is accepted. The default (assuming IPv6 connectivity is available) is to attempt connections over IPv6 first. If DNS advertises an IPv6 address and the node supports IPv6, but for some reason the client is unable to connect over IPv6 (e.g., the tunnel is down), there will be a 30-second timeout and then a fallback to IPv4. A dual-stack server must listen for connections on both IPv4 and IPv6 and process connections from either. It is also possible to deploy two copies of each server, one being IPv4-only and the other IPv6-only. This might involve cross-process file locking on any shared resource, such as a message store. Either approach to providing dual-stack servers will work fine, and the user experience will be the same. Conditional compilation could be used to have a single source code tree create both an IPv4-only and an IPv6-only executable (depending on settings of system variables at compilation time). For most server designs (process per connection or thread per connection), the split model (an IPv4-only server and an IPv6-only server) would roughly double the memory footprint compared with a single dual-stack server.
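The pre-Happy-Eyeballs client behavior just described (IPv6 preferred, sequential fallback to IPv4) can be sketched as follows:

```python
import socket

def connect_prefer_ipv6(host: str, port: int,
                        timeout: float = 5.0) -> socket.socket:
    """Resolve the name, sort IPv6 results ahead of IPv4, and try each
    address in turn until one connection attempt succeeds."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    infos.sort(key=lambda info: info[0] == socket.AF_INET6, reverse=True)
    last_err = None
    for family, _, _, _, sockaddr in infos:
        try:
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_err = err  # e.g., a dead tunnel: fall through to IPv4
    raise last_err if last_err else OSError("no addresses for " + host)
```

The per-attempt timeout here bounds the stall; without it, a dead IPv6 path can hold the user up for tens of seconds before falling back, which is exactly the problem Happy Eyeballs was designed to eliminate.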

There has been an improvement on this scheme since 2010 called “Happy Eyeballs.” The first version was specified in RFC 6555,13 “Happy Eyeballs: Success with Dual-Stack Hosts,” April 2012. The second version was specified in RFC 8305,14 “Happy Eyeballs Version 2: Better Connectivity Using Concurrency,” December 2017. This mechanism is implemented in clients, especially web browsers. It usually attempts connections over both IPv4 and IPv6 and uses whichever one responds first (with some allowance for a slightly slow IPv6 response). The result of this measurement is stored, and that IP version is used for future connections to that server for some time. I have found that sometimes even if I have IPv6 and the server is IPv6, Happy Eyeballs will choose to connect over IPv4 (which violates the prior standard that IPv6 is preferred). This also impacts the statistics on server access over IPv6 (as seen in Google IPv6 stats). It would be nice if there were some way to disable Happy Eyeballs on a browser for people who know what they are doing, but no browser offers that option. It is permanently ON, on every browser I’ve tested. There is an implicit assumption that there is no difference between IPv6 and IPv4 (at least for web), which may not always be true. I may want to provide a better or more complete experience to people who connect over IPv6, but with Happy Eyeballs, the user has no control over this, unless they disable IPv4 on their node, which may cause other issues.
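Python's asyncio implements RFC 8305-style behavior directly: asyncio.open_connection() accepts a happy_eyeballs_delay argument that staggers concurrent IPv6/IPv4 connection attempts. A sketch (the host name and HTTP request are purely illustrative):

```python
import asyncio

async def fetch_status_line(host: str, port: int = 80) -> bytes:
    """Open a connection using staggered (Happy Eyeballs) attempts and
    return the first line of an HTTP response."""
    reader, writer = await asyncio.open_connection(
        host, port, happy_eyeballs_delay=0.25)  # RFC 8305 suggests 250 ms
    writer.write(b"HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n" % host.encode())
    await writer.drain()
    status = await reader.readline()
    writer.close()
    await writer.wait_closed()
    return status

# asyncio.run(fetch_status_line("www.example.com"))
```

Whichever family's connection completes first wins; the loser is cancelled, so a dead IPv6 path costs roughly the stagger delay instead of a full timeout.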

Most open source servers today have good support for dual-stack operation. These include the Apache web server, Postfix SMTP server, Dovecot IMAP/POP3 mail access servers, etc. If you are a developer and want to see examples of how to deploy dual-stack servers, there are numerous examples available in open source. Most open source client software also has good support for IPv6 and dual stack. These include the Firefox web browser, Thunderbird email client, etc. The open source community has done an excellent job of supporting the migration to IPv6. Both the original IPv4-only socket API and the newer IPv6 socket APIs are readily available on all UNIX and UNIX-like platforms. The documentation for the newer IPv6 socket APIs is in RFC 3493,15 “Basic Socket Interface Extensions for IPv6,” and RFC 3542,16 “Advanced Sockets Application Program Interface (API) for IPv6.” There is also RFC 5014,17 “IPv6 Socket API for Source Address Selection,” and RFC 4584,18 “Extension to Sockets API for Mobile IPv6.”

Virtually all Microsoft server products (since 2007) have had good support for dual-stack operation. The Azure Cloud service has been promising IPv6 support from the beginning, with very little progress. You can use load balancers to map an IPv6 address to an IPv4 address on the Azure VM, but you cannot configure an IPv6 address or make connections to IPv6 nodes, from an Azure node. AWS has provided at least some support for IPv6 19 for some time. Microsoft products that support IPv6 well include Windows Server 2008 R2 and later (and all its components, such as DNS, file and printer sharing, etc.), Exchange Server 2007 or later, and many others. Their client operating systems have had good support for IPv6 since Vista. For Microsoft developers, both the original IPv4-only socket API (Winsock) and the new IPv6 socket APIs (basic and advanced) are available as part of the standard Microsoft developer libraries.

Tunneling

Tunneling is very different from translation – the packets from the foreign IP are sent, complete with packet headers, as the Data field of packets of the other IP. For example, 6in4 packets have an IPv4 header, followed by an IPv6 header and IPv6 body. 4in6 packets have an IPv6 header, followed by an IPv4 header and IPv4 body. Once they reach the end of the tunnel, the extra header is stripped off, and the inside packet is routed on its way.

A flowchart depicts a typical 6in4 tunnel. Here, a dual-stack LAN connects to a dual-stack service provider across the IPv4 Internet.

Figure 8-5

How 6in4 tunnels work

A screenshot of a packet capture depicts the nested structure of an IPv6 Echo Request.

Figure 8-6

6in4 tunneling – capture of IPv6 Echo Request showing nested structure

If tunneled service is brought into the network by a gateway device (typically the gateway router or firewall) that contains the tunnel endpoint, the internal network is a native dual-stack network from the viewpoint of all internal nodes. No internal node needs to support any tunneling mechanism. If at some point the tunneled service is replaced with direct service (both IPv4 and IPv6 service direct from your ISP), a minor reconfiguration at the gateway is all that is required. Internal nodes will probably not require any reconfiguration at all. They will typically get a new IPv6 prefix (unless you were already getting the tunneled service from your ISP), so you will likely have to update all forward and reverse address references in your DNS server (only for IPv6 addresses) to reflect the new prefix. If your DNS server supports instant prefix renumbering, like Sixscape DNS, this is a quick, painless process. If you are using DHCPv6 in stateful mode (where it assigns IP addresses) in conjunction with dynamic DNS registration, even the DNS changes due to a change of IPv6 prefix may happen automatically.

A tunnel mechanism has both a server side and a client side. The server side typically can accept one or more connections from tunnel clients; it is also commonly called a Tunnel Broker. A tunnel client typically makes connections to a single tunnel server. Some such connections (e.g., with 6in4) are not authenticated (although the server can typically be restricted to accepting connections only from specific IP addresses or address ranges). Others authenticate the client to the server before the tunnel will begin operation. Some connections (e.g., 6in4) require a globally routable IPv4 address on the client (although this can be the same address as the hide-mode NAT address). Other tunnel clients (e.g., Teredo) will work behind NAT, even with a private address; these include a NAT traversal mechanism in the client, and typically all tunneled packets are carried over UDP. Once a tunnel is created, it is bidirectional. Packets can be sent either upstream or downstream. From a hop count perspective, the tunnel counts as one hop, no matter how many hops the tunneled packets traverse.

Typical Product Support for Tunneling: pfSense Open Source Dual-Stack Firewall

As an example of a typical product that includes support for tunneling, pfSense 20 is an open source dual-stack firewall. On the IPv4 side, it includes typical firewall capabilities including routing, filtering by port and address, stateful packet inspection, and various forms of NAPT (hide mode, BINAT or 1:1, and port forwarding). On the IPv6 side, it includes all that (except for NAPT), plus a Router Advertisement Daemon (to enable Stateless Address Autoconfiguration) and 6in4 server and client modes. You could use the 6in4 client mode to bring in IPv6 tunneled service from any 6in4 virtual ISP (e.g., Hurricane Electric 21). You could create your own IPv6 virtual ISP using pfSense’s 6in4 tunnel server mode. For example, you could provide tunneled IPv6 service from your HQ or collocation facility to various branches, using the 6in4 tunnel server at HQ and the 6in4 tunnel clients at each branch. You can carve off any number of “/64” subnets into each branch office. For example, you could split a “/48” block into 16 “/52” blocks and route one “/52” block into each branch office.
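The subnet arithmetic in this example is easy to verify with Python's ipaddress module (the 2001:db8::/32 documentation range stands in for a real allocation):

```python
import ipaddress

# Split a /48 into 16 /52 blocks, one per branch office
block = ipaddress.ip_network("2001:db8:1234::/48")
branches = list(block.subnets(new_prefix=52))

# Each /52 still contains 2**(64-52) = 4096 "/64" subnets,
# plenty for the LANs within one branch
per_branch_64s = branches[0].subnets(new_prefix=64)
```

The second branch's block, for example, is 2001:db8:1234:1000::/52 -- the four subnet bits sit at the top of the fourth hextet.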

Because the client-mode tunnel endpoint is located inside a firewall node, incoming IPv6 packets from the tunnel can be filtered and routed into any inside network(s). Outgoing IPv6 packets from any internal network can be filtered and routed out the tunnel to the outside world (via the same 6in4 tunnel).

The server-mode tunnel endpoint is also located inside a firewall node , so the firewall’s routing capabilities allow you to easily route any block of addresses from the outside world into any tunnel (and hence to branch offices) and outgoing packets (from tunnels coming from branch offices) to the outside world. Currently there is no support for OSPFv3 or BGP4+, so you would need to relay outgoing IPv6 traffic onward via an ISP (or virtual ISP) that can do further routing.

Because the tunnel mechanism used (6in4) is an IETF standard,22 pfSense’s tunnels will interoperate with server- or client-mode 6in4 tunnel endpoints on any other vendor’s products or even on other open source routers or firewalls.

6in4 Tunneling

RFC 4213 23 (in addition to specifying dual stack) specifies 6in4 tunneling (unfortunately, the term “6over4” is sometimes used for what you might recognize as “6in4,” which is very confusing). Technically, 6in4 is a tunneling mechanism. 6over4 is a transition mechanism that uses 6in4 tunneling to create a virtual IPv6 link over an IPv4 multicast infrastructure (see RFC 2529,24 “Transmission of IPv6 over IPv4 Domains Without Explicit Tunnels,” March 1999). This book will use the term 6in4 unless we are specifically talking about 6in4 tunnels over IPv4 multicast. 6in4 is also sometimes referred to as “Protocol 41” tunneling. 6in4 tunneling requires both ends of the tunnel to have globally routable IPv4 addresses (neither tunnel endpoint can be behind NAT). It is possible for a firewall that is using a globally routable IPv4 address for hide-mode NAT (with multiple internal nodes hidden behind it) to use that same address as one endpoint of a 6in4 tunnel.

6in4 Encapsulation

This process is done to “push packets into the tunnel” for packets going from either end of the tunnel to the other. The basic idea is to prepend a new IPv4 packet header to a complete IPv6 packet (which itself consists of the basic IPv6 header, zero or more extension headers, a TCP or UDP header, and a payload) and treat the entire IPv6 packet as a “black box” payload for the IPv4 packet.

The encapsulation of an IPv6 datagram in IPv4 for 6in4 tunneling is shown in the following.

A block diagram depicts 6in4 encapsulation. Here, an IPv6 packet is encapsulated by adding an IPv4 header.

Figure 8-7

Example of 6in4 encapsulation

The new IPv4 packet header is constructed as follows (from the RFC):

IP Version
  • 4 (the encapsulating packet is IPv4)

IP Header Length
  • 5 (in 32-bit words, so 20 bytes, and no IPv4 options are used in the encapsulating header)

Type of Service
  • 0 unless otherwise specified (see RFC 2983 and RFC 3168 for details)

Total Length
  • IPv6 payload length plus IPv6 header length (40) plus IPv4 header length (20), so IPv6 payload length + 60

Identification
  • Generated uniquely as for any IPv4 packet

Flags
  • DF (Don’t Fragment) flag set as specified in section 3.2 of RFC 4213

  • MF (More Fragments) flag set as necessary if fragmenting

Fragment Offset
  • Set as necessary if fragmenting

Time To Live (TTL)
  • Set as described in section 3.3 of RFC 4213

Protocol
  • 41: This is the defined payload type for IPv6 tunneled over IPv4 and is used regardless of whether the IPv6 transport is UDP or TCP.

Header Checksum
  • Calculated as usual for an IPv4 packet header

Source Address
  • An IPv4 address of the encapsulator: either configured by the administrator or an address of the outgoing interface

Destination Address
  • IPv4 address of the remote tunnel endpoint (the other end of the tunnel)
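Putting these fields together, the encapsulating header can be sketched with Python's standard struct module. This is a simplified illustration only: the Identification and TTL values below are placeholders, and fragmentation is not handled.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate_6in4(ipv6_packet: bytes, src: bytes, dst: bytes) -> bytes:
    """Prepend an encapsulating IPv4 header (per RFC 4213) to a complete
    IPv6 packet. src/dst are 4-byte packed IPv4 addresses."""
    ver_ihl = (4 << 4) | 5           # IPv4, 5 x 32-bit words (no options)
    tos = 0
    total_len = 20 + len(ipv6_packet)  # IPv4 header + entire IPv6 packet
    ident = 0                        # would be generated uniquely in practice
    flags_frag = 0                   # DF/MF per RFC 4213 section 3.2
    ttl = 64                         # placeholder; see RFC 4213 section 3.3
    proto = 41                       # defined payload type: IPv6
    header = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len,
                         ident, flags_frag, ttl, proto, 0, src, dst)
    cksum = ipv4_checksum(header)
    header = header[:10] + struct.pack("!H", cksum) + header[12:]
    return header + ipv6_packet
```

Note how the Total Length works out: for an IPv6 packet with no payload beyond its 40-byte basic header, the encapsulated packet is 40 + 20 = 60 bytes, matching the "payload + 60" rule above.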

6in4 Decapsulation

This is done for all packets received over the tunnel from the other end. The basic idea is to strip the outer (IPv4) packet header off (and discard it) and then handle what is left (the original IPv6 packet) as native IPv6 traffic.

From the RFC: When a dual-stack node receives an IPv4 datagram that is addressed to one of its own IPv4 addresses (or a joined multicast group address), which has a Protocol field of 41 (tunneled IPv6), the packet must be verified to belong to a configured tunnel interface (according to source/destination addresses), be reassembled (if it was fragmented), and have the IPv4 header removed, and then the resulting IPv6 datagram is submitted to the IPv6 layer on the node for further processing.
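As a sketch, the receive side is the mirror image. This illustration reduces the RFC's validation to the Protocol-41 check, omitting the tunnel source/destination verification and reassembly that the RFC requires:

```python
def decapsulate_6in4(ipv4_packet: bytes) -> bytes:
    """Strip the outer IPv4 header from a protocol-41 packet and return
    the inner IPv6 packet for normal IPv6 processing. Real endpoints
    must also verify the packet belongs to a configured tunnel and
    reassemble any IPv4 fragments first."""
    ihl = (ipv4_packet[0] & 0x0F) * 4   # IPv4 header length in bytes
    if ipv4_packet[9] != 41:
        raise ValueError("not a 6in4 (protocol 41) packet")
    return ipv4_packet[ihl:]
```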

The decapsulation process for 6in4 tunneling is shown in the following.

A block diagram depicts 6in4 decapsulation. Here, an IPv6 packet is decapsulated by removing the IPv4 header.

Figure 8-8

Sample 6in4 decapsulation

According to RFC 4213, section 3.2, the MTU of the tunnel must be between 1280 and 1480 bytes (inclusive) but should be 1280 bytes. Section 3.3 specifies that the tunnel counts as a single hop to IPv6, regardless of how many hops the underlying IPv4 packet traverses. The actual TTL value in the outer IP header should be set as for any IPv4 packet (see RFC 3232 and RFC 4087).

RFC 4213 section 3.4 specifies how to handle errors that happen while the encapsulated packet is inside the tunnel. Unfortunately, older routers may not return enough of the packet to include both source and destination IPv6 addresses of the encapsulated packet, so it may not be possible to construct a correct ICMPv6 error message. Newer routers typically include enough of the failed packet for correct ICMPv6 error message creation.

6over4 Tunneling

6over4 tunneling is defined in RFC 2529,25 “Transmission of IPv6 over IPv4 Domains Without Explicit Tunnels,” March 1999. It is a transition mechanism that uses 6in4 tunneling over an IPv4 multicast–capable network. The term 6over4 is sometimes confusingly used for 6in4 tunneling. Due to the requirement for IPv4 multicast, which is very difficult to deploy, 6over4 is not commonly used. You can deploy a basic 6in4 tunnel without IPv4 multicast.

6to4 Tunneling

6to4 tunneling is described in the following RFCs:
  • RFC 3056 , “Connection of IPv6 Domains via IPv4 Clouds,” February 2001

  • RFC 3068, “An Anycast Prefix for 6to4 Relay Routers,” June 2001 (deprecated by RFC 7526, in May 2015)

  • RFC 3964 , “Security Considerations for 6to4,” December 2004 (Informational)

  • RFC 5158, “6to4 Reverse DNS Delegation Specification,” March 2008

6to4 is a transition mechanism that provides tunneled IPv6 over IPv4 without explicitly configured tunnels. With the original 6to4 mechanism, the IPv4 addresses involved must be valid globally routable IPv4 addresses (not behind NAT). Teredo is a variant of 6to4 tunneling that will work even behind NAT.

6to4 does not provide general translation to IPv4 addresses for interoperation between IPv6 hosts and IPv4 hosts (it is not a translator – it is a tunneling scheme). It uses automatically created tunnels over IPv4 to facilitate communication between any number of IPv6 hosts.

A “6to4 host” is a regular IPv6 host that also has at least one 6to4 address assigned to it.

A “6to4 router” is a regular IPv6 router that includes a 6to4 pseudo interface. It is normally a border router between an IPv6 site and a wide-area IPv4 network.

A “6to4 relay router” is a 6to4-capable router, which is also configured to support transit routing between 6to4 addresses and native IPv6 addresses.

Without 6to4 relay routers, you can communicate with other nodes that use 6to4 tunneling over IPv6 (even though your ISP does not yet support IPv6). To communicate with IPv6 users who are not using 6to4, you need to relay your traffic through a 6to4 relay router. You can create your own relay router. It must have both a 6to4 pseudo interface and native (not 6to4) IPv6 connectivity to the IPv6 Internet.

A 6to4 router will send an encapsulated packet directly over IPv4 if the first 16 bits of an IPv6 destination address are 2002, using the next 32 bits as the IPv4 destination (which must be another 6to4 node that will unpack the IPv6 packet being sent and use it or relay it to other IPv6 hosts). For all other IPv6 destination addresses, a 6to4 router will forward the packet to the IPv6 address of a well-known relay router that has access to native IPv6 (or simply send it to the IPv6 anycast address 2002:c058:6301::/128, which will send it to the nearest available 6to4 relay router).

For details on how to configure a FreeBSD node with 6to4 tunneling, see www.kfu.com/~nsayer/6to4.

An IPv6 address for use with 6to4 tunneling looks like the following:
     | 3 |  13  |    32     |   16   |          64 bits               |
     +---+------+-----------+--------+--------------------------------+
     |FP | TLA  | V4ADDR    | SLA ID |         Interface ID           |
     |001|0x0002|           |        |                                |
     +---+------+-----------+--------+--------------------------------+

Essentially the IPv6 prefix for all 6to4 addresses is 2002:(ipv4addr)::/48.
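This mapping is mechanical in both directions, as the following Python sketch shows (the function names are my own):

```python
import ipaddress

def ipv4_to_6to4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Build the 6to4 /48 prefix for a site: 2002::/16 followed by
    the site's 32-bit globally routable IPv4 address (bits 16..47)."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    prefix_int = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

def sixto4_to_ipv4(addr: str) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address a 6to4 router would use as
    the tunnel destination for a 2002::/16 destination address."""
    v6 = int(ipaddress.IPv6Address(addr))
    return ipaddress.IPv4Address((v6 >> 80) & 0xFFFFFFFF)
```

For example, the (documentation) IPv4 address 192.0.2.1 yields the 6to4 prefix 2002:c000:201::/48, and any address under that prefix maps back to 192.0.2.1 as the IPv4 tunnel destination.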

RFC 2374 defines SLA ID as follows:
  • The SLA ID field is for a Site Level Aggregator Identifier. This can be used by individual organizations to create their own local addressing hierarchy and to identify subnets. It is analogous to subnets in IPv4, except that each organization has a much greater number of subnets.

RFC 3056 defines a 6to4 pseudo interface as follows:
  • 6to4 encapsulation of IPv6 packets inside IPv4 packets occurs at a point that is locally equivalent to an IPv6 interface, with the link layer being the IPv4 unicast network. This point is referred to as the pseudo-interface. Some implementers may treat it exactly like any other interface, and others may treat it like a tunnel endpoint.

Teredo

Teredo is one extension of basic 6to4 tunneling. It adds encapsulation over UDP datagrams and uses a simplified version of STUN NAT traversal, allowing a Teredo client to be behind NAT. It is defined in RFC 4380,26 “Teredo: Tunneling IPv6 over UDP Through Network Address Translations (NATs),” February 2006. The name “Teredo” is part of the Latin name for a little worm that bores holes through wooden ship hulls. This gives you a pretty good idea of what the Teredo protocol does to your firewall. Teredo is installed and enabled by default in Windows Vista and Windows 7. It is possible to disable it, which everyone should do!

There is an open source Teredo client for Linux, BSD, and Mac OS X called Miredo.27 It can act as a client, relay, and server.

There are publicly available Teredo “relay routers” that allow any node with Teredo to access the IPv6 Internet. Microsoft makes several very large ones available for use from Windows nodes, and Windows nodes are preconfigured to use them. Unlike 6to4 and some other tunnel mechanisms, Teredo provides only a single “/128” IPv6 address per tunnel endpoint: it lets one node connect to the IPv6 Internet, not an entire network.

Teredo uses a different IPv6 address block than basic 6to4 tunneling. The rest of the Teredo address is defined differently as well:
  • Bits 0–31 contain the Teredo prefix, which is 2001:0000::/32. You might want to block this range for both incoming and outgoing connections on your border firewall.

  • Bits 32–63 contain the IPv4 address of the Teredo server used.

  • Bits 64–79 contain some flags. Currently only bit 64 is used. If set to 1, the client is behind a cone NAT; otherwise, it is 0. More of these flag bits are used in Vista, Windows 7, and Windows Server 2008.

  • Bits 80–95 contain the obfuscated UDP port number (port number that is mapped by NAT, with all bits inverted).

  • Bits 96–127 contain the obfuscated IPv4 address of the node (public IPv4 address of the NAT with all bits inverted).

As an example, a Teredo address might be 2001::4136:e378:8000:63bf:3fff:fdd2, which broken into fields is as follows:
  • Bits 0–31: 2001:0000 – the Teredo prefix

  • Bits 32–63: 4136:e378 – IPv4 address 65.54.227.120 in hexadecimal

  • Bits 64–79: 8000 – cone-mode NAT

  • Bits 80–95: 63bf – obfuscated port number 40000

  • Bits 96–127: 3fff:fdd2 – obfuscated public IPv4 address of the node (192.0.2.45)
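Decoding such an address is mechanical. A Python sketch of the field extraction (offsets as described above, function name my own):

```python
import ipaddress

def decode_teredo(addr: str):
    """Decode a Teredo address (RFC 4380) into its components.
    Field layout: prefix (32 bits) | server IPv4 (32) | flags (16) |
    obfuscated UDP port (16) | obfuscated client IPv4 (32)."""
    b = ipaddress.IPv6Address(addr).packed
    server = ipaddress.IPv4Address(b[4:8])
    flags = int.from_bytes(b[8:10], "big")
    # "Obfuscated" fields are simply stored with all bits inverted
    port = int.from_bytes(b[10:12], "big") ^ 0xFFFF
    client = ipaddress.IPv4Address(bytes(x ^ 0xFF for x in b[12:16]))
    return server, flags, port, client
```

Running this on the example address above recovers the Teredo server 65.54.227.120, the cone-NAT flag 0x8000, UDP port 40000, and the public client address 192.0.2.45.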

Hurricane Electric, as of Q1 2009, had deployed 14 public Teredo relays (via anycast), in Seattle, Washington; Fremont, California; Los Angeles, California; Chicago, Illinois; Dallas, Texas; Toronto, Ontario; New York, New York; Ashburn, Virginia; Miami, Florida; London, England; Paris, France; Amsterdam, Netherlands; Frankfurt, Germany; and Hong Kong SAR.

Usage of Teredo has dropped off to virtually zero as native IPv6 and 6in4 tunnels have become more common.

6rd: IPv6 Rapid Deployment

6rd is another extension of 6to4 tunneling that adds reliable routing. Normal 6to4 tunnels use the standard 2002::/16 prefix and in theory scale to the entire world. Unfortunately, there is no way to control who can connect to public 6to4 servers, and no one has any incentive to provide quality service. There is also no guarantee that any given 6to4 node will be reachable. The same is true of Teredo.

6rd instead works only within the confines of a single ISP. Instead of the 2002::/16 prefix, each ISP uses a prefix that it owns and controls and runs its own relay router. The ISP can thus ensure quality of service and reachability of all nodes within its network.

6rd was deployed by a French ISP called “Free” (in spite of the name, this is a commercial ISP), in five weeks starting in December 2007. This gave France the second highest IPv6 penetration in the world, 95% of which was via Free’s 6rd. RFC 5569 discusses Free’s 6rd deployment. The protocol itself was subsequently standardized in RFC 5969, “IPv6 Rapid Deployment on IPv4 Infrastructures (6rd) – Protocol Specification,” August 2010.

In January 2010, Comcast (a large US ISP) announced plans to do a trial deployment of IPv6 using 6rd. SoftBank (a large Japanese ISP) also has announced that they will roll out IPv6 using 6rd.

Intra-site Automatic Tunnel Addressing Protocol (ISATAP)

ISATAP is a transition mechanism that allows transmission of IPv6 packets between dual-stack nodes on top of an IPv4 network. It is similar to 6over4, but it uses IPv4 as a virtual non-broadcast multiple-access (NBMA) Link Layer and does not require IPv4 multicast (which 6over4 does require). It is discussed in RFC 5214, “Intra-Site Automatic Tunnel Addressing Protocol (ISATAP).”

ISATAP specifies a way to generate a link-local IPv6 address from an IPv4 address, plus a mechanism for performing Neighbor Discovery on top of IPv4.

The generated link-local address is created by appending the 32-bit IPv4 address onto the 96-bit prefix fe80:0:0:0:0:5efe::. For example, the IPv4 address 192.0.2.143 in hexadecimal is c000028f. Therefore, the corresponding ISATAP link-local address is fe80::5efe:c000:28f.
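This construction is easy to reproduce with Python's ipaddress module (a sketch; the fixed 5efe identifier comes from the fe80::5efe:0:0/96 prefix described above):

```python
import ipaddress

def isatap_link_local(ipv4: str) -> ipaddress.IPv6Address:
    """Form an ISATAP link-local address by appending the 32-bit IPv4
    address to the 96-bit prefix fe80::5efe:0:0/96."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Address((0xFE80 << 112) | (0x5EFE << 32) | v4)
```

For the example in the text, 192.0.2.143 (hexadecimal c000028f) yields fe80::5efe:c000:28f.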

The Link Layer address for ISATAP is not a MAC address, but an IPv4 address (remember IPv4 is used as a virtual Link Layer). Since the IPv4 address is just the low 32 bits of the ISATAP address, mapping onto the “Link Layer” address simply involves extracting the low 32 bits (ND is not required). However, router discovery is more difficult without multicast. ISATAP hosts are configured with a potential routers list (PRL) . Each of the routers on this list is probed by an ICMPv6 Router Discovery message, to determine which of them are functioning and to then obtain the list of on-link IPv6 prefixes that can be used to create global unicast IPv6 addresses.

Current implementations create their PRL by querying the DNS. DHCPv4 is used to determine the local domain. Then a DNS query is done for isatap.<localdomainame>. For example, if the local domain is demo.com, it would do a DNS query for isatap.demo.com.

ISATAP avoids circular references by only querying DNS over IPv4, but it is still a lower-layer protocol that is using a higher-layer function (DNS). This is a violation of network design principles.

ISATAP is implemented in Windows XP, Windows Vista, Windows 7, Windows Mobile, and Linux (since Kernel 2.6.25). It is not currently implemented in *BSD28 due to a potential patent issue.

Softwires (Includes Dual-Stack Lite, MAP-E, MAP-T, and 4in6)

The IETF has a very active Softwires working group. Essentially, they are trying to create standards for tunneling IPv6 over IPv4 networks and for tunneling IPv4 over IPv6 networks. There are two basic models for this; one is called hub and spoke. This is similar to the way that airlines have a few large hub airports and many spokes or local flights radiating from those hubs to smaller communities nearby. For example, Atlanta International Airport is a hub for the entire Southeastern United States. If you fly in or out of that region, you will likely interchange in Atlanta. There are several schemes that vary in exactly which part of the network path the softwire covers:

  • From ISP to customer modem/router

  • From ISP via customer modem/router to an inside softwire router

  • From ISP via customer modem/router to an end-user node

All the components necessary to deploy the various schemes are widely available, including
  • LNS: Large ISP-based L2TP Network Server

  • Dual AF CPE: Customer Premises Equipment modem/router with support for L2TPv2 softwires

  • Dual AF router: Customer premise dual-stack router with support for L2TPv2 softwires

  • Dual AF host: Client software for end-user nodes with support for L2TPv2 softwires

In the preceding, “Dual AF” means Dual Address Family, in other words, IPv4 + IPv6, or dual stack.

The other softwire architecture is called mesh . This involves several peer nodes, with multiple connections between them. If all nodes are connected to all other nodes, that would be a fully meshed network.

The term softwire refers to a tunneled link between two or more nodes. In early RFCs related to this technology, sometimes the term pseudowire is used instead. Softwires are assumed to be long-lived, and the setup time is expected to be a very small fraction of the total time required for the startup of the Customer Premises Equipment/Address Family border router. The goal is to make cost-effective use of existing facilities and equipment where possible.

Current softwire solutions are mostly based on L2TPv2, which is defined in RFC 2661,29 “Layer Two Tunneling Protocol ‘L2TP,’” August 1999. L2TP evolved in part from Cisco’s earlier L2F protocol, defined in RFC 2341,30 “Cisco Layer Two Forwarding (Protocol) ‘L2F,’” May 1998. L2TPv2 carries PPP, which is defined in RFC 1661,31 “The Point-to-Point Protocol (PPP),” July 1994. All L2TPv2 connections use UDP encapsulation. There are already some very large deployments of softwires on L2TPv2 in ISPs today. L2TPv2 meets all IPv6-over-IPv4 softwire requirements today and is 99% ready for IPv4-over-IPv6 softwires.

Future softwire solutions will be based on L2TPv3, which is defined in RFC 3931,32 “Layer Two Tunneling Protocol – Version 3 (L2TPv3),” March 2005. L2TPv3 can still carry PPP, but in v3 this is optional (it can carry other Layer 2 payloads, and it can run directly over IP rather than over UDP). UDP encapsulation is also optional in v3. UDP encapsulation is useful for NAT traversal, but it increases overhead and lowers throughput and reliability; if no NAT needs to be traversed, turning off the UDP encapsulation lowers overhead. Session IDs and Control Connection IDs are 32 bits (vs. 16 in L2TPv2). L2TPv3 also provides better user authentication and data channel security through use of optional cookies. An L2TPv3 cookie is a cryptographically generated random value of up to 64 bits, included in every packet. L2TPv3 is close to meeting all softwire requirements.

Relevant Standards for Softwires

  • RFC 4925 , “Softwire Problem Statement,” July 2007 (Informational)

  • RFC 5512, “The BGP Encapsulation Subsequent Address Family Indicator (SAFI) and the BGP Tunnel Encapsulation Attribute,” April 2009 (Standards Track)

  • RFC 5543, “BGP Traffic Engineering Attribute,” May 2009 (Standards Track)

  • RFC 5549, “Advertising IPv4 Network Layer Reachability Information with an IPv6 Next Hop,” May 2009 (Standards Track)

  • RFC 5565 , “Softwire Mesh Framework,” June 2009 (Standards Track)

  • RFC 5566, “BGP IPsec Tunnel Encapsulation Attribute,” June 2009 (Standards Track)

  • RFC 5571 , “Softwire Hub and Spoke Deployment Framework with Layer Two Tunneling Protocol Version 2 (L2TPv2),” June 2009 (Standards Track)

  • RFC 5619, “Softwire Security Analysis and Requirements,” August 2009 (Standards Track)

  • RFC 5640, “Load-Balancing for Mesh Softwires,” August 2009 (Standards Track)

  • RFC 5969 , “IPv6 Rapid Deployment on IPv4 Infrastructures (6rd) – Protocol Specification,” August 2010 (Standards Track)

  • RFC 6333 , “Dual-Stack Lite Broadband Deployments Following IPv4 Exhaustion,” August 2011 (Standards Track)

  • RFC 6334, “Dynamic Host Configuration Protocol for IPv6 (DHCPv6) Option for Dual-Stack Lite,” August 2011 (Standards Track)

  • RFC 6519, “RADIUS Extensions for Dual-Stack Lite,” February 2012 (Standards Track)

  • RFC 6674 , “Gateway-Initiated Dual-Stack Lite Deployment,” July 2012 (Standards Track)

  • RFC 6908 , “Deployment Considerations for Dual-Stack Lite,” March 2013 (Informational)

  • RFC 7040 , “Public IPv4-over-IPv6 Access Network,” November 2013 (Informational)

  • RFC 7596 , “Lightweight 4over6: An Extension to the Dual-Stack Lite Architecture,” July 2015 (Standards Track)

  • RFC 7597 , “Mapping of Address and Port with Encapsulation (MAP-E),” July 2015 (Standards Track)

  • RFC 7598, “DHCPv6 Options for Configuration of Softwire Address and Port-Mapped Clients,” July 2015 (Standards Track)

  • RFC 7599 , “Mapping of Address and Port Using Translation (MAP-T),” July 2015 (Standards Track)

  • RFC 7600 , “IPv4 Residual Deployment via IPv6 – A Stateless Solution (4rd),” July 2015 (Experimental)

  • RFC 7785, “Recommendations for Prefix Binding in the Context of Softwire Dual-Stack Lite,” February 2016 (Informational)

  • RFC 7856, “Softwire Mesh Management Information Base (MIB),” May 2016 (Standards Track)

  • RFC 7870, “Dual-Stack Lite (DS-Lite) Management Information Base (MIB) for Address Family Transition Routers (AFTRs),” June 2016 (Standards Track)

  • RFC 8026 , “Unified IPv4-in-IPv6 Softwire Customer Premises Equipment (CPE): A DHCPv6-Based Prioritization Mechanism,” November 2016 (Standards Track)

  • RFC 8114 , “Delivery of IPv4 Multicast Services to IPv4 Clients over an IPv6 Multicast Network,” March 2017 (Standards Track)

  • RFC 8115, “DHCPv6 Option for IPv4-Embedded Multicast and Unicast IPv6 Prefixes,” March 2017 (Standards Track)

  • RFC 8389, “Definitions of Managed Objects for Mapping of Address and Port with Encapsulation (MAP-E),” December 2018 (Standards Track)

  • RFC 8513, “A YANG Data Model for Dual-Stack Lite (DS-Lite),” January 2019 (Standards Track)

Dual-Stack Lite

The IETF Softwires working group has come up with a variant on the basic dual-stack network design, which is described in RFC 6333,33 “Dual-Stack Lite Broadband Deployments Following IPv4 Exhaustion,” August 2011. There is additional information on Dual-Stack Lite in RFC 6908,34 “Deployment Considerations for Dual-Stack Lite,” March 2013.

Clients using Dual-Stack Lite will still need to support both IPv4 and IPv6, but the service from the ISP to the customer will be IPv6-only, with IPv4 service tunneled over IPv6 in both directions. If you examine the traffic between the CPE and the ISP, there will be only IPv6 packets, but some of them will contain IPv4 packets as the Data field. The IPv4 addresses provided to the customer will be RFC 1918 private addresses, provided by a giant Carrier-Grade NAT (CGN, also called a Large-Scale NAT or LSN) at the ISP. The NAT actually uses the customer’s IPv6 address to tag the private IPv4 addresses used by the client, which allows multiple ISP customers to use the same private address range (e.g., all of them could use 10.0.0.0/8, and the CGN would keep each organization’s addresses separate based on their unique IPv6 address). There is a special address range (100.64.0.0/10) that is used in the carrier-based mapping. So, if the address assigned to the WAN interface of your CPE is in 100.64.0.0/10, you are behind CGN. This is becoming more and more common as unallocated public IPv4 addresses vanish. According to the CGN RFC (6598 35), no one should deploy CGN without also deploying IPv6, but many telcos and ISPs ignore this and deploy CGN without any IPv6, in order to keep providing their customers with IPv4 service.
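Checking whether a CPE's WAN address falls in the 100.64.0.0/10 shared range is a one-liner with Python's ipaddress module (the helper name is my own):

```python
import ipaddress

# Shared address space reserved for CGN "inside" addresses (RFC 6598)
SHARED = ipaddress.ip_network("100.64.0.0/10")

def behind_cgn(wan_addr: str) -> bool:
    """True if the address a CPE received on its WAN interface falls in
    100.64.0.0/10, the usual sign that the ISP has placed it behind CGN."""
    return ipaddress.ip_address(wan_addr) in SHARED
```

The /10 covers 100.64.0.0 through 100.127.255.255; anything just outside it (100.128.0.0 and up) is ordinary public space.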

IPv6-only or dual-stack nodes at the client site can connect to any IPv6 node in the world directly, via the ISP’s IPv6 service. IPv4-only or dual-stack nodes at the client site can connect to any IPv4 node in the outside world via IPv4 tunneled over IPv6, with addresses from the ISP’s Carrier-Grade NAT. There is no IPv6-to-IPv4 translation that would allow an IPv6-only node to connect to external IPv4 nodes, or IPv4-to-IPv6 translation that would allow an IPv4-only node to connect to external IPv6 nodes. Any internal node that needs to connect to external IPv4 nodes should be configured to support dual stack. The tunneling of IPv4 packets inside the outgoing IPv6 packets takes place inside the CPE, as does the de-tunneling of IPv4 packets from the incoming IPv6 packets. It’s basically 6in4 upside down. This scheme can be deployed for a very long time, compared with basic dual stack.

The way this differs from basic dual-stack operation is that there is no direct IPv4 service provided, and the IPv4 addresses used at the client are private and managed by infrastructure at the ISP. This allows the ISP to share a relatively small number of precious real IPv4 addresses among a large number of customers and also allows the ISP to run IPv6 only to the customer. A major advantage of DS Lite is that no 6to4 or 4to6 translation is required. The downside is that all nodes on the internal network are still dual stack – you must still manage two sets of IP addresses (IPv6 and IPv4). It is much cleaner and less expensive to eliminate IPv4 altogether in the internal network, other than via NAT64 border gateways.

This will require a firmware upgrade (or replacement) of the Customer Premises Equipment (CPE) , which is typically a DSL or cable modem, with embedded router and NAT.

The Internet Systems Consortium (ISC, which also supplies the BIND DNS server and the dhcpd DHCPv4 server) has created a freeware implementation of the ISP-side facilities to support DS Lite, called AFTR 36 (Address Family Transition Router). This includes IPv4-over-IPv6 tunneling, DHCPv4, DHCPv6, and some other pieces.

The CPE device for DS-Lite 37 is called B4 (Basic Bridging BroadBand Element). There is an open source implementation of this for the Linksys WRT-54GL. Some network vendors are beginning to produce DS Lite–compatible CPE now.
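The encapsulation step that the B4 element performs can be illustrated with a short sketch. This is a hypothetical helper, not taken from any DS-Lite implementation: it simply prepends a 40-byte IPv6 header whose Next Header field is 4, the protocol number for encapsulated IPv4 (per RFC 2473 generic packet tunneling).

```python
import struct

def encapsulate_4in6(ipv4_packet: bytes, src6: bytes, dst6: bytes) -> bytes:
    """Wrap an IPv4 packet in an IPv6 header, as a B4 element does
    for traffic headed to the AFTR. src6/dst6 are raw 16-byte addresses."""
    assert len(src6) == 16 and len(dst6) == 16
    version_tc_flow = 6 << 28                  # version 6, traffic class and flow label 0
    header = struct.pack("!IHBB",
                         version_tc_flow,
                         len(ipv4_packet),     # payload length
                         4,                    # Next Header: 4 = encapsulated IPv4
                         64)                   # hop limit
    return header + src6 + dst6 + ipv4_packet
```

De-tunneling at the AFTR is the mirror image: strip the 40-byte IPv6 header and forward the inner IPv4 packet through the Carrier-Grade NAT.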

PET (Prefixing, Encapsulation, and Translation)

PET is one of the emerging softwire standards, an effort to work out the optimal combination of tunneling and translation mechanisms to provide a workable framework for IPv4/IPv6 co-existence. The types of tunnels discussed are
  • IP-in-IP tunnels (RFC 2893, RFC 4213)

  • GRE tunnel (RFC 1702)

  • 6to4 tunnel (RFC 3056)

  • 6over4 tunnel (RFC 2529)

  • Softwire transition technique (RFC 5565)

The translation mechanisms discussed include
  • SIIT (RFC 2765)

  • NAT-PT (RFC 2766 – deprecated)

  • BIS (RFC 2767)

  • SOCKS64 (RFC 3089)

  • BIA (RFC 3338)

  • IVI (RFC 6219)

These standards discuss various combinations of the preceding tunneling and translation mechanisms to accomplish different kinds of co-existence. The recommended tunneling scheme is the softwire transition technique (RFC 5565). PET also notes that DNS may have to interact with the co-existence solution using a DNS Application Layer Gateway, such as DNS64.

Translation

Translation between IPv4 and IPv6 is by far the most complex transition mechanism. It has all the issues of IPv4-to-IPv4 Network Address Translation, plus new issues that complicate it even further. There is a great deal of activity in the IETF trying to create standards that will be implementable and deployable.

Since IPv4 addresses are running out, many ISPs would like to deploy IPv6-only service to their customers (as opposed to dual stack with both IPv4 and IPv6 services). Without translation, an IPv6-only node cannot access legacy IPv4-only nodes on the Second Internet (which currently includes most online sites). Over time, more and more sites and services will be dual stack, which will make IPv6-only nodes more useful. Until that time, translation gateways will be needed for IPv6-only nodes. Deploying both IPv4 and IPv6, even with heavily NATted IPv4, is far simpler and cheaper and results in a superior user experience. However, ISPs seem to be obsessed with deploying translation. There are a variety of ways that this can be accomplished, but most are quite complex and likely to be major sources of problems.

Tunneling cannot achieve IPv4-to-IPv6 interworking, but it’s highly transparent and lightweight, can be implemented by hardware, and can keep IPv4 routing and IPv6 routing separated. It allows existing infrastructure (whether IPv4 or IPv6) to be used as a transport to link two nodes (or networks) using the other version of IP.

Translation achieves direct intercommunication between IPv4 and IPv6 nodes or networks by means of converting the semantics between IPv4 and IPv6. However, it has limitations in operational complexity and scalability. Like any NAT, it may have serious issues with transparency (some protocols may not work through it). Correct translation requires
  • Address or (address, port) tuple substitution

  • MTU discovery

  • Fragmentation when necessary

  • Translation of both IP and ICMP fields

  • IP address substitution in payloads (e.g., with SIP)

  • IP/TCP/UDP checksum recomputation

  • Application Layer translation when necessary
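Some of these steps are purely mechanical. For instance, the checksum recomputation in the list above uses the ones'-complement Internet checksum (RFC 1071) shared by IPv4, ICMP, TCP, and UDP. A minimal sketch:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Ones'-complement Internet checksum (RFC 1071). A translator must
    recompute this for IP/ICMP headers and TCP/UDP pseudo-headers after
    rewriting addresses."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length input with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold any carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF
```

Verifying a received header is the same operation: summing a header that already contains a correct checksum yields 0xFFFF, so the function returns zero.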

Stateless translation consumes IPv4 addresses to satisfy IPv6 hosts, which does not scale (for one thing, we are running out of IPv4 addresses; for another, there are far more IPv6 addresses than IPv4 addresses). It can be implemented in hardware, but any ALG translation is too complex for hardware.

Stateful translation requires maintaining complex state for dynamic mapping of (address, port) tuples and cannot be implemented in hardware.

NAT64/DNS64

This transition mechanism requires both a NAT64 gateway and either a DNS server that supports DNS64 mapping or a DNS ALG that supports DNS64. What follows is a highly simplified description of operation. The full details are covered in the RFCs (there is quite a bit of complexity involved in the real operation).

The NAT64 gateway should have two interfaces, one connected to the IPv4 network (with a valid IPv4 address on that network) and the other connected to the IPv6 network (with a valid IPv6 address on that network). IPv6 traffic from a node on the IPv6 network going to an IPv4 node is sent in IPv6 and routed to the NAT64 gateway. The gateway does address translation and forwards the translated packets to the IPv4 interface, from which they are routed to the destination node. Reply packets from the IPv4 node are sent in IPv4 to the gateway and are translated into IPv6 and forwarded to the IPv6 interface, from which they are routed back to the original sender. This process requires state, binding an IPv6 address and TCP/UDP port (referred to as an IPv6 transport address) to an IPv4 address and TCP/UDP port (referred to as an IPv4 transport address).
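The binding state described above can be sketched as a simple two-way table. This is a toy illustration (class and method names are ours, not from any NAT64 implementation); it assumes a single shared public IPv4 address and a naive linear port allocator, where a real gateway manages address pools, timeouts, and per-protocol state.

```python
import itertools
from typing import Dict, Optional, Tuple

Transport = Tuple[str, int]  # (address, port) tuple, i.e., a "transport address"

class Nat64Bindings:
    """Toy stateful NAT64 binding table:
    (IPv6 addr, port) <-> (IPv4 addr, port)."""

    def __init__(self, public_v4: str):
        self.public_v4 = public_v4
        self.ports = itertools.count(40000)      # naive port allocator
        self.v6_to_v4: Dict[Transport, Transport] = {}
        self.v4_to_v6: Dict[Transport, Transport] = {}

    def outbound(self, v6_transport: Transport) -> Transport:
        """IPv6-initiated flow: create (or reuse) a binding."""
        if v6_transport not in self.v6_to_v4:
            v4_transport = (self.public_v4, next(self.ports))
            self.v6_to_v4[v6_transport] = v4_transport
            self.v4_to_v6[v4_transport] = v6_transport
        return self.v6_to_v4[v6_transport]

    def inbound(self, v4_transport: Transport) -> Optional[Transport]:
        """IPv4 reply: succeeds only if an outbound flow created state."""
        return self.v4_to_v6.get(v4_transport)
```

Note that `inbound` returns nothing for a transport address with no prior binding, which is exactly why IPv4-originated traffic cannot be translated without extra mechanisms.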

Packets that originate on the IPv4 side cannot be correctly translated, because there would be no state from the packets coming through the gateway in the v6->v4 direction. NAT64 is not symmetric. For traffic initiated by an IPv6 node, everything works right. Once the binding is created, that traffic flow can continue (from the IPv6 node to the IPv4 node and back).

For the traffic originating on the IPv4 side to be translated to IPv6, it requires some additional mechanism, such as ICE or a static binding configuration.

This mechanism depends on constructing IPv4-converted IPv6 addresses. Each IPv4 address is mapped into a different IPv6 address by embedding it in a special IPv6 prefix assigned to the NAT64 device (Pref64::/n).
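The embedding itself is trivial. As a sketch, assuming the well-known /96 prefix 64:ff9b:: that RFC 6052 reserves for this purpose (a network-specific Pref64::/n works the same way), the IPv4 address simply occupies the low 32 bits:

```python
import ipaddress

# Well-known NAT64 prefix from RFC 6052; a site-specific Pref64::/96
# assigned to the NAT64 device could be substituted here.
PREF64 = ipaddress.IPv6Network("64:ff9b::/96")

def ipv4_converted(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the NAT64 prefix,
    producing an 'IPv4-converted IPv6 address'."""
    return ipaddress.IPv6Address(
        int(PREF64.network_address) | int(ipaddress.IPv4Address(ipv4)))
```

This is the same mapping a DNS64 server applies when it synthesizes a AAAA record from an A record, which is why packets sent to the synthesized address route to the NAT64 device.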

It also uses a small pool of IPv4 addresses, from which mappings will be created and released dynamically, as needed (as opposed to permanently binding specific IPv4 addresses to specific IPv6 addresses). This implies that NAT64 does both address and port translation.

When an IPv6 initiator does a DNS lookup to learn the address of the responder, DNS64 is used to synthesize AAAA resource records from A resource records. The synthesized AAAA resource records are passed back to the IPv6 initiator, which then initiates an IPv6 connection to the IPv6 address that is associated with the IPv4 responder. The packet will be routed to the NAT64 device, which will create the IPv6-to-IPv4 address mapping as described before.

In general, dual-stack nodes should not use DNS64. If they get a synthesized IPv6 address and a native IPv4 address, the rule to prefer IPv6 will cause the dual-stack host to do the access via the NAT64 gateway instead of directly using IPv4. If you deploy DNS64, it should be used only by IPv6-only nodes, and there should be a regular DNS for use by any dual-stack nodes.

IVI

This address translation scheme is being used on a large scale between CERNET (IPv4-only) and CERNET2 (IPv6-only) for nodes on either side to connect to nodes on the other side, as well as allowing IPv6-only nodes to connect to IPv4 nodes out on the public Internet.

The pros of using IVI are as follows:
  • It is stateless, so it scales to a large number of nodes better than NAT64/DNS64.

  • The translation is decoupled from DNS.

  • It is symmetric, so can be used for connections initiated on either side of the gateway (IPv4 to IPv6 side or IPv6 to IPv4 side).

  • There is an open source implementation of the IVI gateway and DNS64 ALG available on Linux.

The cons of using IVI are as follows:
  • An ALG is still required for any protocol that embeds IP addresses in the protocol, such as SIP.

  • It restricts IPv6 hosts to a subset of the addresses inside the ISP's IPv6 block. Therefore, IPv6 Stateless Address Autoconfiguration cannot be used to assign IPv6 addresses to nodes; you must either manually assign addresses or use stateful DHCPv6.

  • There are still some issues with end-to-end transparency, address referrals, and incompatible semantics between protocol versions.

  • You still need a DNS64 ALG for DNS.
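Because IVI is stateless, the mapping in each direction is a pure function of the address. A sketch, assuming the RFC 6219 address format (the ISP's /32 prefix, then an all-ones octet 0xFF, then the embedded 32-bit IPv4 address, with the remaining bits zero); the function names are ours:

```python
import ipaddress

def ivi_map(isp_prefix: str, ipv4: str) -> ipaddress.IPv6Address:
    """Stateless IVI-style mapping of an IPv4 address into an ISP's /32
    IPv6 block: prefix(32) | 0xFF(8) | IPv4(32) | zeros(56)."""
    net = ipaddress.IPv6Network(isp_prefix)
    assert net.prefixlen == 32, "IVI format assumes a /32 ISP prefix"
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Address(
        int(net.network_address) | (0xFF << 88) | (v4 << 56))

def ivi_unmap(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the embedded IPv4 address (the reverse direction).
    No per-flow state is needed, which is why IVI is symmetric."""
    return ipaddress.IPv4Address((int(v6) >> 56) & 0xFFFFFFFF)
```

Since both directions are computed rather than looked up in a binding table, a gateway on either side can translate a packet initiated from either side, unlike NAT64.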

Preferred Network Implementation Going Forward: IPv6-Only

As the 2014 OECD report points out, the real benefits of IPv6 come only once you remove IPv4, except for gateway access to legacy IPv4-only nodes outside your network.

One very interesting discussion of this approach can be found in “Microsoft Works Toward IPv6-Only Single Stack Network,”38 by Veronika McKillop (Microsoft CSEO), April 3, 2019.

This is a large-scale, real-world deployment of IPv6-only and will be fully realized globally over time. It is already far enough along to provide some very good insights into doing this from actual experience.

Here are key points from this writeup:
  • IPv4 address depletion is already a serious problem, and not just for public addresses. Microsoft is now having problems allocating even private addresses company-wide and predicts that it will exhaust the 10/8 private address block within around 2–3 years. They have explored reclaiming unused IPv4 addresses, with little success.

  • There are big benefits to a single stack network in troubleshooting, security, and QoS policies. Dual stack still involves having to work with NAT44, which has many problems.

  • Since all companies today use private IPv4, there are always problems of address conflict in acquisitions, requiring even more NAT44 and address renumbering. These problems are not present in IPv6, even with ULA.

  • Industry pressure is growing, such as Apple’s decision to require IPv6 in all apps submitted to the App Store. It is critical that app developers have an IPv6-only environment to test apps. Microsoft currently has 12 locations for this kind of work.

  • A good IPv6 address plan is critical. The one they created in 2006 has required very minor changes (one in 2015 and another in 2018). They started with one /32 from ARIN and then in 2013 added /32s from RIPE and APNIC.

  • It is important that both DHCPv6 and RDNSS (IPv6 addresses for DNS via RA messages) be implemented everywhere, since some nodes support only one or the other.

  • Extensive training in IPv6 for engineering staff is critical.

  • Working with outside vendors often requires forcing them to support IPv6 well.

  • Clouds are still mostly IPv4-only, which causes major problems for cloud-based security.

  • Global routing works better with IPv6.

  • Currently, 20–30% of their internal traffic is IPv6.

  • NAT64/DNS64 is essential but still problematic.

  • They have a “scream test” that involves removing all IPv4 from a network temporarily and seeing who screams and what about.

  • “My advice is to take your deployment bit by bit. Focus on things that give you the biggest benefit, the biggest learning, the biggest impact on the largest group of users.”

  • “Dual stack is only a temporary solution. The ultimate solution is IPv6-only.”

Supporting IPv6 for Developers at Sixscape

We develop products for Windows, MacOS, Android, and iOS. All must fully support IPv6 and even work in IPv6-only environments (where possible). This means our developers must have access to three different network architectures: IPv4-only, dual stack, and IPv6-only.

Most of our developers use notebooks, and even a desktop can be provided with a Wi-Fi network adapter, so we chose to implement multiple Wi-Fi networks (and for both 2.4 GHz and 5.0 GHz). So we have six SSIDs in our office:
  • V4-2.4: IPv4-only, 2.4 GHz, 172.18/16

  • V4-5.0: IPv4-only, 5.0 GHz, 172.18/16

  • DS-2.4: IPv4 + IPv6, 2.4 GHz, 172.17/16 and 2001:470:xxxx:1000::/64

  • DS-5.0: IPv4 + IPv6, 5.0 GHz, 172.17/16 and 2001:470:xxxx:1000::/64

  • V6-2.4: IPv6-only, 2.4 GHz, 2001:470:xxxx:2000::/64

  • V6-5.0: IPv6-only, 5.0 GHz, 2001:470:xxxx:2000::/64

Depending on what Wi-Fi adapter your computer has, you may see only the 2.4 GHz or both 2.4 GHz and 5.0 GHz SSIDs. From the visible SSIDs, choose the subnet you want to test with. 5.0 GHz has higher speeds (up to 867 Mbps internally, although our ISP connection is only 500 Mbps).

DHCPv4, DHCPv6, and RDNSS are all implemented. Static routes and firewall rules allow the 172.18/16 and 2001:470:xxxx:2000::/64 subnets to access anything in the 172.17/16 and 2001:470:xxxx:1000::/64 subnets (and vice versa). The public IPv4 Internet is accessible from the V4-only and DS subnets, while the public IPv6 Internet is accessible from the V6-only and DS subnets. All internal nodes can configure appropriate internal IP addresses, DNS server addresses, and default gateways. We can open incoming ports to any nodes on the V6-only or DS subnets and allow limited incoming connections via our six public IPv4 addresses (either BINAT or port mapped). I have IPv6 at home as well and often access the node at my desk via RDP from home – it is just like being in the office. If needed, we can make a wired Ethernet connection from any subnet to any internal node, but typically the wired connections are only to the DS subnet.

Our firewall (pfSense based) supports four NICs – one for WAN, one for the V4-only LAN, one for the DS LAN, and one for the V6-only LAN. The three LAN NICs are connected to three Wi-Fi routers (via their LAN ports, not their WAN ports), which bridges the Wi-Fi networks to the wired networks. I have not yet implemented NAT64 on the V6-only subnet but will do that soon. In the meantime (without NAT64), it is interesting to see how much outside stuff works on the V6-only subnet. Most things do, amazingly (Google, FB, DynDNS, etc.).
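A quick way to survey what works from the IPv6-only subnet is to attempt TCP connections restricted to a single address family. This is a small probe of our own devising (not part of any product), which reports failure both when a host has no AAAA record and when the connection itself fails:

```python
import socket

def reachable(host: str, port: int = 443,
              family: int = socket.AF_INET6, timeout: float = 3.0) -> bool:
    """Try a TCP connect over one address family only. With the default
    AF_INET6, this checks whether a service is usable from an
    IPv6-only subnet (without NAT64)."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False                      # no address record for this family
    for fam, typ, proto, _canon, sockaddr in infos:
        s = socket.socket(fam, typ, proto)
        s.settimeout(timeout)
        try:
            s.connect(sockaddr)
            return True                   # at least one address accepted the connection
        except OSError:
            continue
        finally:
            s.close()
    return False
```

Running it with `family=socket.AF_INET` on the same host list from the DS subnet gives a direct comparison of IPv4 vs. IPv6 reachability.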

As an example, our DNSSEC appliance is fully functional in a V6-only network – most such appliances have some IPv4 dependency (e.g., NTP, SNMP, etc.). I believe we need to extend IPv6-ready certification to include working in an IPv6-only subnet (without NAT64), as a higher-level certification.

Our ISP does not currently offer native IPv6, so we use 6in4 to bring a /48 block in from Hurricane Electric (there is a tap here in Singapore, so performance is quite good). Even if we only had a single IPv4 public address, we would be able to use that for both cone NAT and our endpoint of the 6in4 tunnel. We route one /64 block to the DS subnet and another to the V6-only subnet. The fact that we obtain IPv6 via a tunnel does not impact this setup at all. However, if we had an ISP that only provided one /64, we would not be able to do this.

Summary

In this chapter, we covered the many transition mechanisms intended to help during the transition from all-IPv4 to all-IPv6. Some of these (dual stack, 6in4 tunneling, etc.) have been successful and are still in use. Some of these (6over4, Teredo, ISATAP) were used in the early days but due to various problems have fallen out of use. We covered those here in case you run into an old implementation of them.

Most of the translation mechanisms have not worked very well. The only one still in use had to be severely restricted in terms of how it was deployed for it to actually work (NAT64/DNS64). It now only supports connections from IPv6-only nodes in an IPv6-only subnet to external IPv4 servers. This can help during the deployment of IPv6-only subnets.

One of the hot topics today is finally doing away with IPv4 in entire subnets (IPv6-only). The US DoD has now mandated that new equipment must work in dual-stack and IPv6-only subnets. This means there can be no IPv4 dependencies (e.g., using IPv4 versions of ancillary protocols such as NTP or SNMP).
