Chapter 3. Integration and Transition Technologies

Transition mechanisms facilitate IPv6 integration by enabling IPv4 and IPv6 to coexist in situations where full native IPv6 is not yet possible. Fundamentally, transition mechanisms let you remove or temporarily avoid the constraints often placed on IPv6 deployment by the existing infrastructure. They allow you to decouple your IPv6 rollout schedule from the IPv6 readiness of the environment.

As the term transition implies, these mechanisms are not to be used in the long term. Your main goal is to run an IPv6-only network, meaning that the end-state network would have neither IPv4 enabled nor any transition mechanisms left in the network. An IPv6-only network will be much easier to manage and secure, and much more efficient in routing and administration. But since in most cases you cannot deploy native IPv6 in one big step, transition mechanisms support a phased integration to reach that end goal—they are only to be used temporarily until you can turn on native IPv6. And because IPv4 will coexist with IPv6 for a long time, I prefer to instead call them integration mechanisms, as they help you get IPv6 deployed while you’re still running IPv4.

Overview of the Integration and Transition Technologies

There is a large and growing number of integration mechanisms. They can be classified as follows:

  • Dual-stack

  • Tunneling

  • Translation

From the 50,000-foot view, this is also the preferred order of leveraging these mechanisms. Since nobody will have a “flag day” transition from IPv4 to IPv6, your network will need to support both protocols simultaneously, and the easiest way to do this is with the dual-stack mechanism. If dual-stack is not possible (for example, because the ISP does not currently support IPv6, or some expensive devices don’t support it and can’t be replaced right now), your next choice should be a tunneling solution. Finally, and only if dual-stack and tunneling solutions don’t work, you may use a translation mechanism. The translation mechanism is the most complex mechanism with the biggest cost and performance hit, and it often comes with a serious loss of functionality.

The next sections describe the mechanisms available in each category to give you an idea of what is available and some guidance on what’s best for your particular environment and deployment plan. On the path from an IPv4 network to your future IPv6 network, you will go through different stages of IPv6 deployment, each stage possibly requiring different integration mechanisms. This is explained in more detail in the section High-Level Concept, in Chapter 2.

Available Mechanisms

The following sections provide an overview of the currently available integration mechanisms, sorted by category. Some of these mechanisms continue to be refined by the engineering community, but most of them have already been used in many production deployments worldwide.

Dual-Stack

Dual-stack is nothing new. If you were a Novell NetWare shop, you ran IPX (Internetwork Packet Exchange) alongside IPv4 on the same interface; if you ran Macs, you ran AppleTalk alongside IPv4. In the dual-stack approach, we similarly run two versions of IP on the same interface.

Note

When we talk about dual-stack, we talk about using IPv6 natively (without transition mechanisms such as tunneling or translation). In the dual-stack approach we use native IPv6 together with IPv4. A network that no longer needs IPv4 is called an IPv6-only network.

A dual-stacked node can support both IPv4 and IPv6 on the same interface. The interface has one (and possibly a secondary) IPv4 address and, normally, multiple IPv6 addresses. An application can use either IP version as a transport, depending on availability, configuration, and operational policies. In most cases, the choice between using IPv4 or IPv6 is based on DNS configuration (see the section DNS Design, in Chapter 2, for more details).

The dual-stack approach is the most flexible integration mechanism, as it supports legacy IPv4 applications and at the same time allows you to deploy new, possibly IPv6-only, applications. It also makes it easy to turn off IPv4 once it isn’t required anymore. You can configure an IPv6 infrastructure that is independent of IPv4, even if you run it on the same physical network. Keep in mind, though, that you can choose this approach only if you have sufficient IPv4 address space. As soon as a lack of IPv4 addresses drives your project, you will probably have to look for other solutions.

Note

Whenever you can, choose native IPv6. Only if that doesn’t work for some reason should you start looking into other integration mechanisms.

Obviously, in most cases the dual-stack mechanism is combined with another mechanism, such as tunneling. Dual-stack does not mean everything in the network is dual-stack; otherwise, you wouldn’t need it in the first place: if everything were dual-stack, you could turn off IPv4. So, to clarify, dual-stack means that parts of your network and groups of hosts are dual-stacked, while others are mostly IPv4-only and some may be IPv6-only. Your clients may be dual-stacked—able to use IPv6 for new applications while simultaneously supporting legacy applications that are IPv4-only. Clients may also need IPv4 in order to reach IPv4 resources outside your corporate domain.

The reason you may need to support legacy IPv4 applications is that some of them have the layer 3 stack incorporated into the compiled application code and cannot support IPv6 without a complete overhaul—which may be impossible, especially with orphaned applications. This means that there are applications out there that can never be migrated to support IPv6 but may still be needed for many years to come. At the same time, the number of IPv6-enabled applications is rising, so there will be a gradual shift of IPv4 traffic to IPv6 traffic. At some point in the future, though, you will be able to turn off IPv4 completely.

Dual-stack requires more resources than a single protocol approach for the following reasons:

  • The network bandwidth is shared and its use prioritized between the two protocols. In most cases total traffic does not increase substantially, but some capacity planning might be a good idea, especially if you are adding new IPv6 services to your network.

  • Hosts need resources for two independent TCP/IP stacks.

  • You may need two IGP routing protocols.

  • Routers share resources for holding routing information and performing SP (shortest path) calculations and lookups for each protocol.

  • You need two security concepts. Security requirements and administration will be different for IPv4 and IPv6.

  • Security devices need more resources for handling two protocol versions. Hardware forwarding devices, for example, support a limited number of ACLs, which could be too limited to handle ACLs for IPv6 in addition to the IPv4 ACLs.

  • Operational and management tools must support two protocols in provisioning, monitoring, and troubleshooting.

In spite of these complications, the advantages you get from the flexibility a dual-stack approach offers probably outweigh the disadvantages.

MPLS

MPLS (multiprotocol label switching) is not an IPv6 transition mechanism, but it can be used nicely in such a scenario. MPLS sits between layer 2 and layer 3 in the protocol stack and tunnels network layer traffic using a set of labels. If you have an MPLS backbone, you can simply label IPv6 traffic.

There are two flavors of MPLS as an IPv6 transition mechanism. The first, called 6PE (RFC 4798), tunnels native IPv6 over an IPv4 MPLS core. In this variation, IPv6 packets are encapsulated in MPLS frames and switched across the MPLS core based on two labels. No changes are required in the operation of the MPLS core. Dual-stack edge routers (provider edge, or PE, and customer edge, or CE) use multiprotocol BGP to exchange IPv6 routing information and the corresponding labels, with next hops that map to the routers’ IPv4 addresses.

The other flavor, called 6VPE (RFC 4659), is an IPv6 layer 3 VPN over MPLS. In this case, IPv6 packet labeling is IPv6 VPN–specific. A second IPv4 label is appended for forwarding over the MPLS core, which allows for isolation of IPv6 transport for specific customers or services. By comparison, 6PE is similar to a single, global VPN.

Whenever you have MPLS available, choose one of these flavors depending on your requirements. MPLS is your best choice. It is widely implemented and deployed, and has a proven successful track record.

Tunneling

Generally, tunnels allow packets of a certain protocol to be encapsulated in a header of another protocol in order to traverse certain parts of a network. So, in our example, you can transport IPv6 packets over an IPv4 backbone by encapsulating the IPv6 packet in an IPv4 header. The tunnel endpoints manage the encapsulation and decapsulation. Tunnels are mostly used in the edge-to-core approach, usually combined with dual-stack or to connect IPv6 islands. For instance, you could have a tunnel from the edge of your network to the edge of your core, the core being native IPv6, or you could have a tunnel from edge to edge across an IPv4 core.

You should implement tunnels only if native IPv6 is not possible. Right now, the IP protocol 41 tunneling (41 is the protocol value for IPv6 encapsulated in IPv4) is the most used version, because the Internet and corporate networks are IPv4 networks. Over time, as the number of IPv6 networks exceeds the number of IPv4 networks, we can expect the reverse to happen: IPv4 packets will begin to be tunneled in IPv6 packets. IPv6 is expected to enhance the performance of the Internet, but we will not see that improvement until most forwarding takes place in native IPv6.
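To make the encapsulation concrete, the sketch below (Python, illustrative only; the header checksum is left at zero, whereas a real stack computes it) prepends a minimal IPv4 header whose protocol field is 41 to an IPv6 packet:

```python
import socket
import struct

def encap_proto41(src_v4: str, dst_v4: str, v6_packet: bytes) -> bytes:
    """Prepend a minimal 20-byte IPv4 header with protocol 41 (IPv6-in-IPv4)."""
    ver_ihl = (4 << 4) | 5                     # IPv4, 5 * 4 = 20-byte header
    total_len = 20 + len(v6_packet)
    header = struct.pack("!BBHHHBBH4s4s",
                         ver_ihl, 0, total_len,
                         0, 0,                 # identification, flags/fragment
                         64, 41, 0,            # TTL, protocol 41, checksum (0 here)
                         socket.inet_aton(src_v4),
                         socket.inet_aton(dst_v4))
    return header + v6_packet

# 40 zero-ish bytes standing in for a minimal IPv6 header
pkt = encap_proto41("62.2.84.115", "198.51.100.1", b"\x60" + b"\x00" * 39)
print(pkt[9])  # 41: the protocol field the tunnel endpoint checks for
```

The receiving tunnel endpoint does the reverse: it sees protocol 41 in the IPv4 header, strips the outer 20 bytes, and hands the inner IPv6 packet to the IPv6 stack.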

You can configure tunnels either manually or automatically. Manually configured tunnels work well in less dynamic environments, where you connect two or more networks that are in a controlled environment. If there are too many networks, or if the networks are very dynamic, the administrative burden of managing the tunnels may become too big. Automatic tunnels ease administration. Usually, the IPv4 destination address for the tunnel endpoint is embedded as part of the end node’s IPv6 address. This will be discussed in the upcoming sections on 6to4, Teredo, and ISATAP tunneling. In other cases, you can find the destination tunnel address by querying DNS.

6to4

The 6to4 mechanism defined in RFC 3056 (and illustrated in Figure 3-1) has been the recommended automatic tunneling mechanism for a long time. 6to4 requires one globally routable IPv4 address to operate. A 6to4 router connects a domain configured with the 6to4 prefix (we call this a 6to4 network) with another 6to4 network or with the IPv6 Internet. 6to4 uses a reserved prefix—2002::/16.

Figure 3-1. Format of the 6to4 address

Embedding the global IPv4 address in the prefix makes automatic tunneling possible. The 6to4 prefix of 2002::/16, combined with the 32-bit IPv4 address, creates a /48 prefix. This leaves 16 bits for intrasite subnets. A public IPv4 address of 62.2.84.115 would create the 6to4 prefix of 2002:3e02:5473::/48 (the IPv4 address represented in hexadecimal).
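The prefix arithmetic is easy to reproduce. A short Python sketch (the helper name is mine) that derives the 6to4 /48 prefix from a public IPv4 address:

```python
import ipaddress

def to_6to4_prefix(public_v4: str) -> ipaddress.IPv6Network:
    """2002::/16 followed by the 32 IPv4 bits yields the site's /48 prefix."""
    v4 = int(ipaddress.IPv4Address(public_v4))
    prefix_int = (0x2002 << 112) | (v4 << 80)   # IPv4 bits land in bits 80..111
    return ipaddress.IPv6Network((prefix_int, 48))

print(to_6to4_prefix("62.2.84.115"))  # 2002:3e02:5473::/48
```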

Figure 3-2 shows the 6to4 communication.

Figure 3-2. 6to4 communication

Figure 3-2 shows two 6to4 routers, R1 and R2, at the border of two different 6to4 sites. The 2002::/16 prefix used in each site is the 6to4 prefix, and the 32 bits following the 2002::/16 represent the IPv4 address of the public interface of each of the 6to4 routers. A 6to4 router requires at least one public IPv4 address. All hosts within each 6to4 site are configured for addresses within their 6to4 prefix, which is:

R1: 2002:IPv4addressR1::/48
R2: 2002:IPv4addressR2::/48

The nodes on the network are provisioned either by router advertisements or through DHCPv6.

For connections between the two 6to4 sites, the tunneling is automatic. Let’s say R1 gets a packet from a host in its network with a destination for a host in 6to4 network 2. R1 can identify the destination network as a 6to4 network based on the 2002::/16 prefix, and being a 6to4 router, it knows that the IPv4 address of the tunnel endpoint is part of the IPv6 prefix in the destination address field. R1 extracts the 32 bits following the 2002::/16 prefix from the destination address field in the IPv6 header to build an IPv4 header with its own IPv4 source address and the extracted IPv4 address of R2 in the destination address field; then, it sends the IPv4 encapsulated IPv6 packet over the IPv4 infrastructure. R2 receives the IPv4 packet, decapsulates it by stripping off the IPv4 header, performs an IPv6 routing lookup, and forwards the IPv6 packet to its final destination.
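The extraction step R1 performs can be sketched in a few lines (an illustrative helper, not production code):

```python
import ipaddress

def tunnel_endpoint(dst_v6: str) -> ipaddress.IPv4Address:
    """Pull the embedded IPv4 tunnel endpoint out of a 6to4 destination."""
    addr = int(ipaddress.IPv6Address(dst_v6))
    if (addr >> 112) != 0x2002:
        raise ValueError("not a 6to4 address")
    return ipaddress.IPv4Address((addr >> 80) & 0xFFFFFFFF)

# R1 tunneling toward a host in R2's site prefix 2002:3e02:5473::/48:
print(tunnel_endpoint("2002:3e02:5473::1"))  # 62.2.84.115
```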

For connections with the native IPv6 Internet, the situation is a little more complex and a 6to4 relay is needed. Automatic tunneling doesn’t work in this scenario, since there is no IPv4 address embedded in the IPv6 destination address. So, for example, how can R1 find the relay router R3? There are two possible mechanisms. RFC 3068 defines an anycast address to find 6to4 relays, or the relay could be preconfigured on R1. The way back from the native IPv6 network to the 6to4 network is found by R3 advertising the 6to4 prefix of 2002::/16 into the native IPv6 network.

The 6to4 mechanism has some limitations:

  • There is no guarantee that every native IPv6 device in the Internet will find a working route to reach 6to4 hosts.

  • There are issues with delegating reverse DNS records (translating IPv6 addresses into hostnames). The conventional address delegation procedures do not work for 6to4, because in this case the DNS records would have to reflect the IPv4 address delegation (RFC 5158). This causes some issues that have not been fully resolved.

  • For connections with native IPv6 Internet, routing is not under your own administrative control, so you must depend on public deployment of 6to4 relays.

The last bullet point is one of the main problems with 6to4. ISPs were expected to deploy 6to4 relays in large numbers, but this has not been the case. So 6to4 performance and routing are not optimal, which is one reason why 6to4 has not really taken off and will now probably be replaced by 6rd (described in the next section).

You can use 6to4 if you have no global IPv6 prefix (yet) but want to communicate with the IPv6 Internet. It doesn’t make sense to use 6to4 if you already have a global IPv6 prefix, since you’d probably prefer a mechanism that uses your own IPv6 address. You can also use 6to4 in a network with NAT if the NAT box has a full implementation of an IPv6 router, and the public IPv4 address of the outermost NAT box is used to build the 6to4 prefix.

As one of the older tunnel mechanisms, 6to4 is widely supported in code. You can find it in almost any equipment and operating system. There is an ongoing discussion in the IETF working group about deprecating the 6to4 anycast address and moving 6to4 to historic. RFC 6343, “Advisory Guidelines for 6to4 Deployment” offers guidelines for using the protocol. If you are deploying now or will in the near future, you may want to use 6rd, a variant of 6to4.

IPv6 sites can somewhat improve the experience of clients that still use the anycast 6to4 relay address by installing their own 6to4 relays, which encapsulate any IPv6 traffic destined for the 2002::/16 prefix and send it directly to the destination router over IPv4. This decreases the latency on the reverse path. The forward path is not under the server’s control and usually only under limited client control; those who deploy the client side can likewise improve forward-path latency by installing IPv6-connected 6to4 relays.

6rd

6rd was developed by Free.fr, an ISP in France. Within only five weeks in 2007, Free enabled IPv6 Internet access for 1.5 million residential customers. The company managed to do this by basing its mechanism on 6to4, which (as mentioned previously) is code that you find in all devices and operating systems. Free modified 6to4 slightly to make it more suitable for ISP requirements. 6rd does not use the 6to4 prefix of 2002::/16, but instead uses the ISP’s global unicast prefix. If you are interested in how Free accomplished this integration, you can read the description in RFC 5569. Because the mechanism was easy to deploy and seemed useful, it was adopted by the IETF working group and is standardized today in RFC 5969, “IPv6 Rapid Deployment on IPv4 Infrastructures (6rd).”

6rd is a dynamic tunneling protocol that provides IPv6 access to native IPv6 islands connected over an IPv4 backbone (usually, the ISP IPv4 network). 6rd builds on the principles and experiences of 6to4, and is similar to it at a very high level. The mapping between the IPv6 and IPv4 addresses within the ISP network enables the automatic determination of the IPv4 tunnel endpoints from the IPv6 prefix. However, there are some important differences:

  • 6to4 uses a predefined prefix (2002::/16), while 6rd can be tailored to use the provider-allocated prefixes. This eases the later transition to native IPv6 within the backbone and solves the routing issues in the Internet.

  • The concept of a relay is present in 6rd as well; however, 6rd relays have IPv6 addresses within the allocation of the organization and IPv4 anycast addresses selected by the organization. With 6rd, the organization providing the service is in full control of the relay resources and can manage the asymmetrical routing problems plaguing 6to4.

Figure 3-3 shows the construction of the prefix for the end nodes connected using 6rd.

Figure 3-3. Format of the 6rd address

The 6rd prefix is taken from the public IPv6 unicast space, so each provider can choose the prefix from its own allocation. This way, 6rd operates fully under the ISP’s administrative control, and all 6rd hosts are reachable from anywhere through regular routing mechanisms. If the provider has a /32 prefix and 24 bits of the IPv4 address are used (the bits that vary across the IPv4 addresses within a given 6rd domain; see the RFC), the customer gets a /56 (32 + 24).
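RFC 5969 specifies the exact encoding; the Python sketch below only illustrates the /32 + 24 = /56 arithmetic (the function name, example prefix, and example address are mine, chosen for illustration):

```python
import ipaddress

def sixrd_prefix(sp_prefix: str, customer_v4: str, common_v4_bits: int = 8):
    """Append the variable IPv4 bits of a 6rd domain to the ISP's 6rd prefix."""
    net = ipaddress.IPv6Network(sp_prefix)
    v4 = int(ipaddress.IPv4Address(customer_v4))
    v4_bits = 32 - common_v4_bits                 # bits that differ per customer
    variable = v4 & ((1 << v4_bits) - 1)          # drop the shared high-order bits
    plen = net.prefixlen + v4_bits
    base = int(net.network_address) | (variable << (128 - plen))
    return ipaddress.IPv6Network((base, plen))

# ISP /32 plus 24 variable IPv4 bits -> the customer gets a /56
print(sixrd_prefix("2001:db8::/32", "10.1.2.3"))  # 2001:db8:102:300::/56
```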

The client part of 6rd runs on the CPE (customer premises equipment) devices connecting the customer network with the Internet. These CPE devices serve native IPv6 addresses (according to the prefix) to the end hosts in the customer network. To send IPv6 packets to the Internet, the CPE acts like the tunnel server, encapsulating them into protocol 41 and forwarding them over IPv4. The tunnel destination in the IPv4 tunnel header depends on the destination IPv6 address. If the destination IPv6 address is within the 6rd domain (i.e., the SP prefix is the same), then the CPE will construct the tunnel endpoint address based on the configured IPv4 prefix for the domain and the IPv4 address extracted from the destination IPv6 address. If the destination IPv6 address is outside the domain, the CPE will send the protocol 41 packets to an explicitly configured IPv4 address of a border relay router (BR). The border relays are routers on the “Internet” side of the local backbone and have native IPv6 connectivity to the Internet. They will decapsulate the tunneled packets and forward the IPv6 packets to the destination.

For the return packets, the routing for the 6rd-enabled IPv6 prefixes needs to direct the IPv6 packets from the Internet to a BR router. The BR router will deduce the IPv4 tunnel destination address based on the IPv4 address taken from the IPv6 destination address.

The 6rd mechanism allows for stateless and scalable deployment with predictable performance characteristics. There are many large-scale deployments based on 6rd, so, if your deployment involves IPv6 islands over an IPv4 backbone, this would be a proven and scalable way to go.

ISATAP

ISATAP stands for Intrasite Automatic Tunnel Addressing Protocol and is defined in RFC 5214. As its name implies, ISATAP was created to connect IPv6 hosts in an IPv4-only routed network, where you can’t deploy IPv6 routers for the moment.

The ISATAP tunneling mechanism works just like 6to4, except with ISATAP the tunnel endpoint is created within the host. So, whereas 6to4 is a router-to-router tunnel, ISATAP is a host-to-host or host-to-router tunnel. The ISATAP driver on the interface is the tunnel endpoint. The application sends an IPv6 packet to some IPv6 destination. Before the packet leaves the interface to get onto the IPv4 network, the ISATAP driver encapsulates the IPv6 packet in an IPv4 header and sends it over the IPv4 infrastructure.

How does the ISATAP driver know the IPv4 destination address for the tunnel header? The IPv4 address of the interface is contained in the last 32 bits of the ISATAP IPv6 address. In other words, the ISATAP driver takes the last 32 bits of the IPv6 destination address, knowing that they hold the IPv4 address of the destination interface. The packet arrives at the destination as an IPv4 packet, and the ISATAP driver strips off the IPv4 header and forwards the IPv6 packet internally to the application.

Figure 3-4 shows how the IPv4 address is embedded in the last 32 bits of the IPv6 address. ISATAP works with both private and public IPv4 addresses, and the address contains an identifier to distinguish them (0000 for private addresses and 0200 for public addresses). 5efe is the IANA identifier indicating that this is an IPv6 address with an embedded IPv4 address.

Figure 3-4. Format of the ISATAP address

Let’s assume the network prefix is 2001:db8:10:3a::/64. If the node’s IPv4 address is 62.2.84.115, the ISATAP address is 2001:db8:10:3a:200:5efe:62.2.84.115. Alternatively, you can convert the IPv4 address to hexadecimal format and write 2001:db8:10:3a:200:5efe:3e02:5473.
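The construction can be sketched as follows (a hypothetical helper assuming a /64 prefix and the 0200:5efe / 0000:5efe identifiers described above):

```python
import ipaddress

def isatap_address(prefix: str, v4: str, public: bool = True) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with the ISATAP interface identifier."""
    net = ipaddress.IPv6Network(prefix)
    flag = 0x0200 if public else 0x0000           # public vs. private IPv4 address
    iid = (flag << 48) | (0x5EFE << 32) | int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(isatap_address("2001:db8:10:3a::/64", "62.2.84.115"))
# 2001:db8:10:3a:200:5efe:3e02:5473
```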

So the ISATAP mechanism allows the routing of IPv6 packets within an IPv4-only network. Each host queries an ISATAP router within the site to obtain address and routing information. Packets sent to the IPv6 Internet are routed via the ISATAP router, and packets destined for other hosts within the same site are tunneled directly to the destination.

The ISATAP router within the enterprise has direct connectivity to the IPv6 Internet. It decapsulates the packet and forwards it to the IPv6 Internet. On the way back, the IPv6 packets from the Internet will hit the ISATAP router, and the router will encapsulate each IPv6 packet into protocol 41 and send it directly to the destination host, using the IPv4 address embedded within the IPv6 destination address.

This approach offers a relatively simple way to enable tunneling in an IPv4-only infrastructure. All you need to do is configure the DNS name isatap.<yourdomainname> to resolve to the IPv4 address of your ISATAP router. Hosts that support the protocol will query the DNS name during bootup and enable ISATAP automatically.

You might use ISATAP if you are bound to an IPv4-only routed network but want to open access to the IPv6 Internet for your internal IPv6 users, who are typically dispersed across different subnets. ISATAP makes this access possible without requiring an IPv6 routing infrastructure.

ISATAP is implemented and enabled by default on Windows operating systems. The Linux implementation must be installed separately (isatapd), and FreeBSD has ISATAP support by means of the KAME protocol stack. For Mac OS X users, there is a pre-alpha version of the ISATAP client.

Teredo

The 6to4 tunneling mechanism can be used only if you have a public IPv4 address, and ISATAP does not work through NAT either. But the reality is that most residential users sit behind a NAT, so Teredo was developed to allow clients behind a NAT to tunnel packets over an IPv4 infrastructure (RFC 4380, updated by RFC 5991). The way Teredo does this is by tunneling the packets in UDP so they can traverse the NAT gateway. UDP messages can be translated by most NATs and can traverse multiple layers of NATs. The packets are encapsulated at the client and sent to a Teredo server, which is responsible for the initial configuration of the Teredo tunnel. The clients must be preconfigured with the address of the Teredo server. Teredo communication will then go through a Teredo relay. The Teredo server is used only to provide an IPv6 address to the client and to discover the best Teredo relay for that communication. A Teredo client learns about other Teredo clients on its own IPv4 network by using the IPv4 multicast address 224.0.0.253.

Figure 3-5 shows the structure of a Teredo address. The first 32 bits are the prefix assigned to the Teredo service, 2001::/32. The next 32 bits carry the IPv4 address of the Teredo server. The 16-bit flags field defines address types and the NAT type used. The port field contains the client’s external UDP port, and the last 32 bits contain the client’s public IPv4 address; RFC 4380 stores both the port and the address in obfuscated (bit-inverted) form so that NAT devices do not rewrite them. Teredo servers listen on UDP port 3544 (configurable).

Format of the Teredo address
Figure 3-5. Format of the Teredo address
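A Teredo address can be taken apart as shown below; remember that the client’s external port and IPv4 address are stored bit-inverted per RFC 4380. The helper and the sample address are illustrative:

```python
import ipaddress

def parse_teredo(addr: str):
    """Split a Teredo (2001::/32) address into server, flags, port, client."""
    a = int(ipaddress.IPv6Address(addr))
    if (a >> 96) != 0x20010000:
        raise ValueError("not a Teredo address")
    server = ipaddress.IPv4Address((a >> 64) & 0xFFFFFFFF)
    flags = (a >> 48) & 0xFFFF
    port = ((a >> 32) & 0xFFFF) ^ 0xFFFF               # undo the obfuscation
    client = ipaddress.IPv4Address((a & 0xFFFFFFFF) ^ 0xFFFFFFFF)
    return server, flags, port, client

print(parse_teredo("2001:0:4136:e378:8000:63bf:3fff:fdd2"))
# (IPv4Address('65.54.227.120'), 32768, 40000, IPv4Address('192.0.2.45'))
```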

Figure 3-6 shows how the Teredo client connects to the Teredo server, which configures the tunnel setup and chooses the best Teredo relay. The NAT gateway is traversed with an IPv4 UDP tunnel. Teredo works with different types of cone NATs but usually doesn’t work with symmetric NATs (for more information, see RFC 5389, “Session Traversal Utilities for NAT”).

Figure 3-6. Teredo communication

Teredo has more overhead than other tunneling mechanisms and favors robustness over performance. So, if you can choose native IPv6, 6rd, or ISATAP, don’t choose Teredo. Over time, more NAT environments will support 6to4 or 6rd (or run on boxes with IPv6 router functionality) and IPv6 connectivity will become more common, so Teredo will be used less and less.

Note

If you want to see the relation of native traffic versus tunneled traffic in the Internet, check the Google statistics at http://www.google.com/intl/en/ipv6/statistics. It shows clearly that tunneled traffic is decreasing while native IPv6 traffic makes up the largest part of total IPv6 traffic.

Apart from current Windows operating systems, Teredo is not widely implemented. There is a Linux implementation (miredo), but in most distributions it needs to be installed separately.

Teredo suffers from an issue similar to 6to4 anycast: the unpredictable user experience. To help improve the Teredo user experience, sites can install Teredo relays. Teredo is considered to lower the overall security of a network because it bypasses security controls, allows for unsolicited traffic, and is difficult to track. If your design does not rely on Teredo, you might consider blocking it at the perimeter.

Tunnel broker

The tunnel broker could be considered a sort of virtual IPv6 ISP. It sets up and manages IPv6 connectivity on behalf of IPv6 clients, so it has less impact on internal administration than the other mechanisms described.

When a client (a host or a router) connects to the tunnel broker, the tunnel broker sets up a connection to tunnel servers and assigns the tunnel endpoint and DNS records for the endpoints. Tunnel servers are dual-stack routers, connected to the global Internet.

The tunnel broker specification leaves room for individual implementation. There are many public tunnel broker services available. In many cases, when you register for a tunnel broker service, you can download the scripts that you need to configure your environment. Tunnel brokers are designed to be used in smaller sites or for single users. Using them is a good way to get started with your own private lab, as long as your ISP doesn’t support IPv6.

NAT and Translation

NAT and translation are the most debated areas of IPv6 deployment scenarios. This section discusses the different forms of NAT available for the IPv4 address depletion problem and the integration of IPv6.

NAT terminology

To cover these NAT-related mechanisms, we have to extend our NAT terminology. The type of NAT we have been using to date translates private addresses in the customer’s site or home into one or more public IPv4 addresses to go out to the Internet: the NAT replaces the packet’s private IPv4 source address with its public IPv4 address. To provide always-on connectivity to many devices, NAT does more than that; it maps not only devices and addresses but also ports. There are 65,536 available port numbers each for UDP and TCP, many of which are unused. So NAT actually maps the internal private address and port number to the outside public address and port number. This way, it can map a large number of sessions to each public address. This type of NAT usually sits at the customer edge and runs on the CPE. We call this legacy form of NAT NAT44.
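The address-and-port mapping can be pictured as a simple table. The toy model below (with a naive sequential port allocator, nothing like the state management of a real NAT implementation) shows why one public address can carry many inside sessions:

```python
import itertools

class Nat44:
    """Toy NAT44: (inside IP, inside port) -> (one public IP, unique port)."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._ports = itertools.count(1024)      # naive allocator, no port reuse
        self.table = {}                          # (in_ip, in_port) -> out_port

    def translate_out(self, in_ip: str, in_port: int):
        """Rewrite an outbound source; reuse the mapping for a known flow."""
        if (in_ip, in_port) not in self.table:
            self.table[(in_ip, in_port)] = next(self._ports)
        return self.public_ip, self.table[(in_ip, in_port)]

nat = Nat44("198.51.100.7")
print(nat.translate_out("192.168.1.10", 51000))  # ('198.51.100.7', 1024)
print(nat.translate_out("192.168.1.11", 51000))  # ('198.51.100.7', 1025)
```

Two inside hosts using the same source port end up behind the same public address but distinct public ports, which is how the reverse mapping stays unambiguous.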

NAT44 has been used for many years. Its main purpose has always been to extend the lifespan of the limited IPv4 address space. IPv6 was designed to overcome the address limitations and therefore is supposed to work without NATs. Even though the goal is to run IPv6 networks without any type of NAT, the fact that we waited too long for IPv6 deployment and are now running out of IPv4 address space will force us to continue using this type of transition technology to deal with the Internet’s exponential growth. The problem is that if new Internet users simply get IPv6 addresses, they will not be able to access the still predominantly IPv4-accessible web content. So the IETF working groups decided to define standard translation methods to prevent the industry from developing an ungovernable variety of nonstandard methods.

Some of the new NAT scenarios used these days are necessary to overcome the IPv4 address problem, especially on the ISP side. Although they can’t be considered IPv6 transition mechanisms, I describe them here to help you work with your ISP to identify what type and quality of Internet connectivity it offers.

Large-Scale NAT

As providers run out of IPv4 addresses and cannot cover Internet growth with IPv4, they have to deploy IPv6. But users still want to be dual-stacked so they can access IPv4 content on the Internet. So why not NAT the IPv4 part of the Internet connection? This lets users access IPv6 content over IPv6 but still get to IPv4 content over their NAT-ed IPv4 connection.

You can achieve this by using Large-Scale NAT, or LSN (sometimes also called Carrier-Grade NAT, or CGN), or NAT444. LSN adds another layer of NAT by placing a second NAT44 inside the ISP’s network. Traditional NAT44 sits between the customer network and the ISP network; the LSN sits within the ISP network and allows the ISP to assign customers a private IPv4 address instead of a public one. In other words, the traditional customer-side NAT now translates from private IPv4 inside to private IPv4 outside. Figure 3-7 shows this relationship in a diagram.

Figure 3-7. Large-Scale NAT

At the bottom of Figure 3-7, we see the traditional NAT connecting a customer’s privately addressed networks through NAT with the provider network. In this NAT444 scenario, the translation is from private IPv4 to private IPv4. This allows the ISP to connect many customers through a single public IPv4 address on the outside.

So let’s follow a packet. It originates inside the customer site. Its source address is first translated to a private address from within the LSN’s range. When leaving the ISP network, it gets the public address assigned to the outside interface of the LSN. The packet thus goes through address translation and port mapping twice. This mechanism is called NAT444 because the packet crosses three IPv4 address realms: customer private, ISP private, and public. The advantage here is that you can achieve this mostly with current equipment and implementations.

Time will tell how well this scales with large numbers of users. The processing required for all the translations and mappings for a large number of users will have its limits, which are yet to be determined. There may also be issues with overlapping private space, if an organization uses the same range internally as the provider within the LSN. Another problem might occur if customers connected to the same LSN want to send traffic to each other. Their packets may have to be routed to the outside and come back with a public IPv4 source address; otherwise, they may be filtered by traditional ACLs based on their private source address.

One potentially major pitfall: if somebody successfully attacks the public ISP IPv4 address, not just one customer but all customers behind that LSN address are affected. Sharing one address may also prevent the use of IP-based access control lists to, for instance, block addresses that are known to send spam. Another consideration is that all these customers share a fixed pool of ports, so fewer and fewer ports are available as the number of customers increases. Applications that open a large number of simultaneous sessions (Google Maps and iTunes, to name just two examples) will fail. Finally, this approach limits customized marketing and personalized website content, because all the users behind one ISP NAT appear to come from a single IP address, so statistical and analytical tools cannot provide more specific information on user behavior.

NAT464

Another option is to deploy IPv6-only between the customer edge and the provider network. Known as NAT464, this method requires translation from IPv4 to IPv6 at the customer edge and, again, translation from IPv6 to IPv4 at the LSN. This obviously reduces the need for IPv4 addresses on the provider side. Translation becomes more difficult, as translation across protocols (from IPv4 to IPv6) is more complex than address translation within one protocol family. Furthermore, NAT444 is widely implemented and available, while implementations for NAT464 are not so widespread.

This type of translation has two main disadvantages. First, a NAT device is always a bottleneck. Second, when you translate IPv6 to IPv4, you lose the advanced features of IPv6 because they cannot be expressed in IPv4; for instance, if a packet carries extension headers, not all of that information can be translated into IPv4 options.

DS-Lite

DS-Lite is a mechanism that allows for an IPv6-only connection between the customer site and the LSN, but instead of translating from IPv4 to IPv6 and back as in NAT464, the IPv4 packets are tunneled over IPv6 to the LSN, so no protocol translation needs to be performed. The trick is to keep source addresses unique: if many customers use the same private RFC 1918 addresses, their source addresses are no longer distinguishable. DS-Lite solves this problem by combining the IPv4 source address with the unique IPv6 address used for the tunnel.

DS-Lite performs a little better than NAT444 or NAT464 because it has only one layer of NAT. The disadvantage is that individual users can no longer be identified by their IP address.

Figure 3-8 shows the private customer network connected to the ISP private network through an IPv6 tunnel. The NAT maps the combination of the IPv6 source address, IPv4 source address, and port to the outside IPv4 address and port.

DS-Lite
Figure 3-8. DS-Lite
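The mapping just described can be sketched as a toy model of the DS-Lite NAT state. The addresses below are illustrative, and the function name is hypothetical; the point is only that the key includes the customer's unique IPv6 tunnel address:

```python
import ipaddress

# Toy model of the DS-Lite NAT state: the mapping is keyed by the customer's
# unique IPv6 tunnel address *plus* the inner IPv4 address and port, so two
# customers reusing the same RFC 1918 address stay distinguishable.

mappings = {}
next_port = [60000]
PUBLIC_V4 = "203.0.113.1"  # the LSN's shared outside address (example)

def dslite_translate(tunnel_v6, inner_v4, inner_port):
    key = (ipaddress.IPv6Address(tunnel_v6),
           ipaddress.IPv4Address(inner_v4), inner_port)
    if key not in mappings:
        mappings[key] = (PUBLIC_V4, next_port[0])
        next_port[0] += 1
    return mappings[key]

# Two customers using the identical private source tuple:
a = dslite_translate("2001:db8::a", "192.168.1.10", 1234)
b = dslite_translate("2001:db8::b", "192.168.1.10", 1234)
print(a, b)  # same public address, different ports
```

Without the IPv6 component in the key, the two lookups above would collide; with it, each customer gets its own external port binding.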

Stateless NAT64

Stateless NAT64 provides a mechanism that translates IPv6 packets into IPv4 packets and vice versa. It is based on RFC 6144, which defines a framework for IPv4/IPv6 translation and provides an overview and discussion of all possible scenarios.

For the stateless mechanism, the translation information is carried in the address itself combined with configuration information in the translator. The translator does not need to maintain state. So this mechanism supports end-to-end transparency and has better scalability than stateful translation.
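The stateless idea can be shown in a few lines: the IPv4 address is derived algorithmically from the low-order 32 bits of the IPv6 address (the address format standardized in RFC 6052), so the translator keeps no per-session table. The prefix used here is illustrative, not an assigned one:

```python
import ipaddress

# Stateless translation: the IPv4 address lives in the low-order 32 bits of
# a configured /96 prefix, so no mapping state is needed in the translator.
PREFIX = ipaddress.IPv6Network("2001:db8:64::/96")  # illustrative prefix

def v4_to_v6(v4):
    """Embed an IPv4 address in the translator's /96 prefix."""
    return ipaddress.IPv6Address(int(PREFIX.network_address) |
                                 int(ipaddress.IPv4Address(v4)))

def v6_to_v4(v6):
    """Recover the embedded IPv4 address from the low-order 32 bits."""
    return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

mapped = v4_to_v6("192.0.2.33")
print(mapped)            # 2001:db8:64::c000:221
print(v6_to_v4(mapped))  # 192.0.2.33
```

Because the mapping runs in both directions with no stored state, any translator configured with the same prefix produces the same result, which is what gives the mechanism its transparency and scalability.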

The disadvantage of this mechanism is that it cannot be used to connect IPv4 networks to the IPv6 Internet. Because the translation is a 1:1 address mapping, there are (and will be) far too many IPv6 addresses on the IPv6 Internet to be mapped to IPv4 addresses. In this case, you will have to use a stateful translation mechanism, discussed next.

Stateful NAT64 and DNS64

This approach is for scenarios in which users have IPv6 addresses but need to connect to IPv4 networks and the IPv4 Internet. One or more public IPv4 addresses are assigned to the translator to be shared among the IPv6 clients. When stateful NAT64 is used with DNS64, usually no changes are required on the IPv6 client or the IPv4 server. BIND supports DNS64 as of version 9.8.0. Stateful NAT64 is specified in RFC 6146, and DNS64 in RFC 6147.
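As a minimal sketch (not a complete configuration), enabling DNS64 in BIND 9.8.0 or later can be as simple as the following named.conf fragment, which synthesizes AAAA records using the well-known prefix for all clients:

```
options {
    // Synthesize AAAA records from A records using the
    // well-known NAT64 prefix, for any querying client.
    dns64 64:ff9b::/96 {
        clients { any; };
    };
};
```

In practice you would scope the `clients` ACL to the IPv6-only networks that sit behind your NAT64 gateway.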

Figure 3-9 shows how stateful NAT64 with DNS64 works.

Stateful NAT64 with DNS64
Figure 3-9. Stateful NAT64 with DNS64

The IPv6 client sends a DNS AAAA request to the DNS64 server for a certain service name. If the name server has a AAAA record, it passes the information back and the client connects over IPv6. If the DNS64 server does not find a AAAA record because the service is IPv4-only, it looks up the corresponding A record and creates a synthetic AAAA record. The name server takes the well-known prefix 64:ff9b::/96 and inserts the IPv4 address it learned from the A record into the 32 low-order bits of the IPv6 address. So, if the A record was 203.10.100.2, the synthetic IPv6 address would be 64:ff9b::203.10.100.2 (in pure hexadecimal notation, 64:ff9b::cb0a:6402).
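The synthesis step can be reproduced with Python's standard `ipaddress` module (the function name is hypothetical; the prefix and arithmetic follow the well-known-prefix rule above):

```python
import ipaddress

# DNS64 synthesis: embed the IPv4 address from the A record into the
# low-order 32 bits of the well-known prefix 64:ff9b::/96, producing the
# synthetic AAAA record handed back to the IPv6-only client.

WELL_KNOWN_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_aaaa(a_record):
    v4 = ipaddress.IPv4Address(a_record)
    return ipaddress.IPv6Address(int(WELL_KNOWN_PREFIX.network_address) | int(v4))

addr = synthesize_aaaa("203.10.100.2")
print(addr)  # 64:ff9b::cb0a:6402, i.e., 64:ff9b::203.10.100.2
```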

When the client initializes a connection to this address, it will be routed through the NAT64 gateway. This gateway uses an IPv4 address from its pool with an associated port number, creates a mapping entry for the two addresses, translates the IPv6 header into an IPv4 header (using the translation mechanisms described in RFC 6145), and sends the packet to the destination IPv4 address learned from the IPv6 address.
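The gateway's side of that exchange can also be sketched as a toy model. The pool address, ports, and function name below are illustrative; a real implementation performs the full RFC 6145 header translation rather than returning a tuple:

```python
import ipaddress
import itertools

# Sketch of the stateful NAT64 forwarding step: extract the IPv4 destination
# embedded in the 64:ff9b::/96 address, and bind the IPv6 source to an
# address and port from the gateway's IPv4 pool.

pool_ports = itertools.count(40000)
bindings = {}                # (IPv6 source, port) -> (pool IPv4, pool port)
POOL_V4 = "192.0.2.1"        # a public address from the gateway's pool (example)

def nat64_forward(src_v6, src_port, dst_v6):
    # Low-order 32 bits of the destination carry the real IPv4 address:
    dst_v4 = ipaddress.IPv4Address(int(ipaddress.IPv6Address(dst_v6)) & 0xFFFFFFFF)
    key = (ipaddress.IPv6Address(src_v6), src_port)
    if key not in bindings:                  # create the mapping entry once
        bindings[key] = (POOL_V4, next(pool_ports))
    new_ip, new_port = bindings[key]
    return new_ip, new_port, str(dst_v4)

print(nat64_forward("2001:db8::1", 5000, "64:ff9b::cb0a:6402"))
# -> ('192.0.2.1', 40000, '203.10.100.2')
```

Unlike the stateless case, the `bindings` table must be kept for the lifetime of each session so that return traffic can be mapped back to the right IPv6 client.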

This has been tested and works for most applications. Problems arise when IPv4 addresses are embedded in applications or when IPv4 literals are used. The best solution is for application developers to stick to using fully qualified domain names (FQDNs) in applications instead of IP addresses. A variety of vendors have implemented stateful NAT64 with DNS64, and large mobile providers are doing trials. In the mobile world, this mechanism may be preferable because it uses less power on the mobile client (battery) than a dual-stack client.

NAT-PT

An early translation mechanism, NAT-PT (Network Address Translation/Protocol Translation) was defined in RFC 2766 and later deprecated in RFC 4966. It used the Stateless IP/ICMP Translation algorithm (SIIT), defined in RFC 6145: the translator replaces the IPv6 header with an IPv4 header (or vice versa), copying and translating all the information from one protocol to the other. NAT-PT was designed to be used with DNS application layer gateways.

NAT-PT tried to solve all aspects of translation in one solution, by defining the translation from v4 to v6 (NAT46) and vice versa, from v6 to v4 (NAT64). This created significant complexity and many serious issues. The simpler, more scalable solution used today drops the NAT46 functionality and uses the NAT64 functionality.

And again, no matter what is defined in the future, be aware that translating across protocol families is very limiting. Translating from IPv4 to IPv6 is not that difficult, but when you translate an IPv6 header with extension headers to IPv4, you can lose a lot of information.

How Can IPv6 Clients Reach IPv4 Content?

With the IPv4 address pool depleted, more and more clients and content providers may find themselves setting up IPv6-only systems and having to enable mechanisms that provide access to IPv4-only systems.

So here is a summary of the aforementioned mechanisms that can help in such a situation. The options are:

  • LSN, two layers of NAT44, which is what we call NAT444

  • DS-Lite, a single NAT44 at the ISP, with customer IPv4 traffic tunneled over IPv6 from the CPE

  • NAT64/DNS64

Whether we like NAT or not, there seems to be no way around it. But which option is the best? If a single layer of NAT can already be very difficult, degrade performance, and break applications, it’s logical to assume that two layers of NAT, as in NAT444, are even worse. And security is a big concern, because many customers will share one single IPv4 address, which makes it an attractive attack target. So NAT444 is probably not the best choice.

If we choose NAT44 with DS-Lite, we only have one layer of NAT, so this is definitely preferable. And DS-Lite allows you to use traditional NAT44, which is widely implemented. To access the IPv6 Internet, clients need to be dual-stacked, with an IPv4 stack to access the IPv4 Internet and an IPv6 stack to get to the IPv6 Internet. The gradual move of the client traffic to IPv6 content is a natural one, and requires managing both protocols on the client side.

NAT64 is also a useful option for supporting transition and legacy applications. There are fewer implementations right now and people don’t have much experience with it, but this may change soon. This is definitely an option to watch, and maybe even one we should be pushing more vendors and operating systems to support. The advantage of NAT64 is that the client is IPv6-only from the beginning, so the transition to native IPv6 for services that are already reachable over IPv6 is a natural one. There is only one protocol in the access network from the beginning, and translation is used only to access IPv4 content.

Load Balancing

The lack of globally routable IPv4 addresses presents an opposite challenge on the service provider side. For example, if you want to build a new data center, you may not be able to get public IPv4 address space. This means that to ensure you have enough public IP address space, you may want to choose IPv6 as the protocol inside the data center. But can you sell data center space with IPv6-only access? This is probably not an economically viable business model right now!

There are two potential options to cover this scenario:

  • Use NAT46 (the opposite of NAT64), the part that was dropped from the NAT-PT specification

  • Use load balancers

For the NAT46 choice, there are currently no specifications. There are some proposals, but the general feeling is that mapping the large IPv6 space to the smaller IPv4 space for sessions initiated by IPv4 hosts is too complex. It is possible that such a mechanism will be developed once the requirement becomes more obvious. The load balancer choice, on the other hand, is probably doable right now and a good short-term solution, since load balancers are essentially mandatory frontend equipment in data centers anyway. Several vendors offer high-performance load balancers that support many different mechanisms and can be used in most scenarios. Whether the performance is sufficient to support the IPv6-only data center scenario remains to be tested.

Summary

To summarize this chapter, here are the rules for using integration mechanisms in your high-level plan and deployment strategy:

  • Go native IPv6 wherever you can.

  • Use tunneling mechanisms only where they are really needed.

  • Use translation only if there is no other way.

  • Remove all transition mechanisms as soon as you no longer need them.
