By Himanshu Desai
This chapter discusses the methods of transferring legacy protocols over ATM.
Protocols such as IP, IPX, SNA, and so on have long been ported over WAN media such as Frame Relay and SMDS. ATM allows these protocols to be carried over campuses, as well as over WAN connections. However, design considerations must be weighed before choosing one method over another. Usually, Multiprotocol Encapsulation over ATM AAL5 (RFC 1483) is used over WAN connections, and LANE is used over a campus ATM backbone.
This chapter covers these methods separately, using configuration examples. Specific design considerations are noted for each method. The chapter does not discuss the exact implementation of any particular method. The Design Considerations sections compare and contrast each method. The Configuration and Troubleshooting sections support these considerations.
The chapter begins with a discussion of RFC 1483 on PVCs and SVCs, and highlights their merits. RFC 1577 is also covered because it simplifies the operational issues encountered with RFC 1483. However, the section on RFC 1577 does not go into detail regarding deployment of Layer 3 routing protocols. While reading this chapter, you should also consider the issues associated with deploying routing protocols over any one of these methods.
The chapter's primary focus is on how these methods of deploying legacy protocols over ATM differ from each other and in what kind of network it makes sense to use one over another.
The third method of deploying legacy protocols over ATM is LAN Emulation (LANE). This method is used most often in campus backbone networks. The section begins with pointers to design considerations. Before deploying LANE in a campus backbone, you are strongly encouraged to look closely at these design points for scaling the backbone and for ease of troubleshooting. Understanding the topology layout and distributing LANE services across different components is critical. Cisco has published an excellent paper on designing LANE networks, called "Campus ATM LANE Design."
The last section covers Multiprotocol over ATM (MPOA), which works in conjunction with LANE. The section briefly describes how MPOA can create cut-through switching over a LANE domain, enhancing the performance of LANE networks and reducing the Layer 3 routing load when traffic crosses from one LANE cloud to another.
The chapter briefly describes each method, including design considerations, configuration examples, and troubleshooting of basic functionality. To get a basic idea of all the methods quickly, read the opening and "Design Considerations" sections for each method. After a method has been chosen, the Configuration and Troubleshooting sections provide additional implementation advice.
It is highly recommended that you be familiar with the fundamentals of ATM before reading this chapter because it does not discuss basic theory.
Multiprotocol encapsulation over ATM AAL5 configuration can be accomplished in two ways.
The first method uses PVCs to configure point-to-point connections between nodes in an ATM cloud. This method requires individual PVCs for every node in a fully meshed ATM cloud.
The second method uses SVCs to connect to every node in the fully meshed ATM cloud.
This section covers PVCs and SVCs. PVCs are permanent virtual circuits, or virtual circuits that are permanently established. PVCs save the bandwidth associated with circuit establishment and teardown in situations where certain virtual circuits must exist all the time.
SVCs are switched virtual circuits, or virtual circuits dynamically established on demand and torn down when transmission is complete. SVCs are used in situations where data transmission is sporadic. In ATM terminology, they are also called switched virtual connections.
RFC 1483 describes two methods of carrying connectionless network traffic over an ATM cloud:
AAL5SNAP—Allows multiple protocols over a single ATM virtual circuit
AAL5MUX—Allows one protocol per ATM virtual circuit
Protocols supported using these ATM encapsulation methods include IP, IPX, AppleTalk, CLNS, DECnet, VINES, and bridging. This section discusses design considerations, configuration, and troubleshooting of ATM networks using RFC 1483 with Cisco products and AAL5MUX or AAL5SNAP. This includes configuration using both PVCs and SVCs.
RFC 1483 networks are usually deployed on a small scale. This type of network is ideal for campus or WAN backbones consisting of 5–10 end nodes with a few intermediate switches. In the three-node network used as the example in this chapter, you need eight VPI/VCI pairs configured across the network and two map statements on each router (one per remote peer) to form a fully meshed ATM cloud. As the number of end nodes and protocols to support increases, RFC 1483 does not scale, and it becomes increasingly difficult to manage and troubleshoot. RFC 1483 networks provide an easy transition if you are replacing an existing FDDI or other media backbone with an ATM backbone. An RFC 1483 network starting with two router nodes and a couple of intermediate switches can be grown by simply moving an end node from the old backbone to the ATM backbone and adding it to the newly formed cloud with a map statement. Although this eases the transition, as the ATM backbone becomes larger, it requires substantial maintenance.
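The scaling arithmetic behind this observation can be sketched in a few lines. This is purely illustrative (not Cisco tooling): a full mesh of n routers needs one virtual circuit per router pair and one map entry per remote peer on every router, so the work grows quadratically.

```python
# Illustrative arithmetic (not Cisco tooling): configuration items needed
# for a fully meshed RFC 1483 cloud of n routers.

def full_mesh_cost(n_routers: int) -> dict:
    vcs = n_routers * (n_routers - 1) // 2   # one virtual circuit per router pair
    maps_per_router = n_routers - 1          # one map entry per remote peer
    return {"vcs": vcs, "maps_per_router": maps_per_router}

for n in (3, 10, 50):
    c = full_mesh_cost(n)
    print(f"{n} routers: {c['vcs']} VCs, {c['maps_per_router']} map entries per router")
```

At 3 routers the mesh is 3 VCs; at 50 routers it is 1,225 VCs, which is why RFC 1483 is best kept to small backbones.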
VPI stands for virtual path identifier. It is an 8-bit field in the header of an ATM cell. The VPI, together with the VCI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. ATM switches use the VPI/VCI fields to identify the next VCL that a cell needs to transit on its way to its final destination.
VCI stands for virtual channel identifier. It is a 16-bit field in the header of an ATM cell. The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. ATM switches use the VPI/VCI fields to identify the next network VCL that a cell needs to transit on its way to its final destination.
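To make the field widths concrete, the following sketch packs and unpacks the first four bytes of a UNI cell header, where the 8-bit VPI and 16-bit VCI sit between the 4-bit GFC and the PT/CLP bits. This is a minimal model for illustration, not Cisco code; the function names are hypothetical.

```python
# Sketch: the first 32 bits of an ATM UNI cell header are laid out as
# GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1); the fifth byte is the HEC.

def pack_uni_header(gfc, vpi, vci, pt, clp):
    if not (0 <= vpi < 2**8 and 0 <= vci < 2**16):
        raise ValueError("VPI is 8 bits and VCI is 16 bits at the UNI")
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pt << 1) | clp
    return word.to_bytes(4, "big")

def unpack_uni_header(data):
    word = int.from_bytes(data, "big")
    return {
        "gfc": (word >> 28) & 0xF,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pt":  (word >> 1) & 0x7,
        "clp": word & 0x1,
    }

# The example PVC in this section rides on VPI/VCI 0/40.
fields = unpack_uni_header(pack_uni_header(gfc=0, vpi=0, vci=40, pt=0, clp=0))
print(fields["vpi"], fields["vci"])
```

A switch rewrites exactly these VPI/VCI bits on every hop, which is what the cross-connect configurations later in this section express.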
RFC 1483 is a simple concept, is easy to configure, and requires little protocol overhead. It is also a stable, approved solution. However, it does not scale to large networks, requires a lot of manual configuration, and does not support ATM to the desktop.
Figure 18-1 shows an example of ATM topology. The ATM cloud in this topology could easily be one or more ATM switches co-located with routers in a LAN environment or multiple switches in a carrier cloud.
Configuring PVCs requires manual mapping on all switches to each end node. Although it is cumbersome and difficult to troubleshoot in larger topologies, PVC configuration is generally simpler in smaller topologies.
In the topology described earlier, there are three ATM-enabled routers—San Jose, Chicago, and New York. They are interconnected physically via two ATM switches, Denver and Iowa. You want a fully meshed ATM cloud between the three routers.
Two ATM PVCs are configured on the San Jose router: one for connectivity to Chicago, and one for New York.
The statement atm pvc 1 0 40 aal5snap enables you to configure the PVC, where 1 is the virtual circuit descriptor (VCD), 0 is the virtual path identifier (VPI), and 40 is the virtual channel identifier (VCI). Valid VPI values when configuring a PVC on Cisco devices are 0 to 7; valid VCI values are 32 to 1023. The ATM Forum reserves VCI values 0 to 31.
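These ranges can be captured in a small validation sketch. The helper below is hypothetical (not an IOS feature) and simply encodes the limits just stated.

```python
# Hypothetical helper (not an IOS feature): check a VPI/VCI pair against the
# ranges stated above -- VPI 0-7 and VCI 32-1023 on these interfaces, with
# VCIs 0-31 reserved by the ATM Forum (e.g. 0/5 = signaling, 0/16 = ILMI).

def pvc_values_ok(vpi: int, vci: int) -> bool:
    if not 0 <= vpi <= 7:          # valid VPI range on these Cisco interfaces
        return False
    if 0 <= vci <= 31:             # reserved by the ATM Forum
        return False
    return 32 <= vci <= 1023       # valid VCI range

print(pvc_values_ok(0, 40))   # the example PVC "atm pvc 1 0 40 aal5snap"
```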
The statement map-group 1483pvc applies map-list 1483pvc to the ATM interface, which in turn maps remote router IP addresses to the local VPI/VCI pair through the VCD number. The other two routers are configured similarly. The San Jose router configuration is as follows:
interface ATM0
 ip address 172.10.10.1 255.255.255.0
 atm pvc 1 0 40 aal5snap
 atm pvc 2 0 50 aal5snap
 map-group 1483pvc
map-list 1483pvc
 ip 172.10.10.2 atm-vc 1 broadcast
 ip 172.10.10.3 atm-vc 2 broadcast
The configuration of the Chicago router is as follows:
interface ATM2/0
 ip address 172.10.10.2 255.255.255.0
 map-group 1483pvc
 atm pvc 1 0 40 aal5snap
 atm pvc 2 0 60 aal5snap
map-list 1483pvc
 ip 172.10.10.1 atm-vc 1 broadcast
 ip 172.10.10.3 atm-vc 2 broadcast
The configuration of the New York router is as follows:
interface ATM0
 ip address 172.10.10.3 255.255.255.0
 atm pvc 1 0 60 aal5snap
 atm pvc 2 0 50 aal5snap
 map-group 1483pvc
map-list 1483pvc
 ip 172.10.10.1 atm-vc 2 broadcast
 ip 172.10.10.2 atm-vc 1 broadcast
The Denver switch configuration shows incoming 0/40 VPI/VCI pairs on interface 1/1/1 coming from the San Jose router and outgoing on interface 1/1/2 with 1/40 as VPI/VCI pairs to the Iowa switch. The configuration is shown from interface 1/1/2's point of view. It also shows another incoming 0/50 VPI/VCI pair on interface 1/1/1 coming from the San Jose router and going out on interface 1/1/2 with 1/50.
The configuration of the Denver LS1010 switch is as follows:
interface ATM1/1/2
 no keepalive
 atm pvc 1 40 interface ATM1/1/1 0 40
 atm pvc 1 50 interface ATM1/1/1 0 50
interface ATM1/1/1
The Iowa switch configuration shows incoming 1/40 VPI/VCI pairs from the Denver switch and outgoing to interface 3/0/2 with 0/40 VPI/VCI pairs to the Chicago router. This creates end-to-end PVC between the San Jose and Chicago routers. The Iowa switch has another incoming 1/50 VPI/VCI pair from the Denver switch going out to interface 3/0/1 with 0/50 VPI/VCI pairs to the New York router. Also, 0/60 VPI/VCI pairs are coming in from the Chicago router on interface 3/0/2, which is being switched out on interface 3/0/1 with VPI/VCI pairs of 0/60 to the New York router. This forms a fully meshed ATM cloud with all three routers directly connected to each other.
The configuration of the Iowa LS1010 switch is as follows:
interface ATM3/0/0
 no keepalive
interface ATM3/0/1
 no keepalive
 atm pvc 0 50 interface ATM3/0/0 1 50
interface ATM3/0/2
 no keepalive
 atm pvc 0 40 interface ATM3/0/0 1 40
 atm pvc 0 60 interface ATM3/0/1 0 60
Planning is the key to successful deployment and stability of RFC 1483 networks.
First, create a VPI/VCI pair table for each and every device you want to connect in the cloud. After that is done, make a configuration template, and start configuring individual routers and switches. Then, execute the subsequent commands to see whether the configuration and the design deployed are working accordingly.
The following command shows that two PVCs are active on the ATM0 interface. These VCs have local significance and show an active connection to the nearest switch only; they do not indicate a working router-to-router ATM connection. To verify that, you must go to each device between the two end routers and check the interface status and the incoming VPI/VCI pair. The outgoing VPI/VCI pair of the San Jose router should match the incoming VPI/VCI pair of the Denver switch. If it doesn't match, the router continues sending ATM cells, but the switch drops them because they arrive on an unknown VPI/VCI pair:
SanJose#show atm vc
Interface VCD VPI VCI Type AAL/ Peak Avg. Burst Status
Encapsulation KBPS KBPS Cells
ATM0 1 0 40 PVC AAL5-SNAP 155000 155000 94 Active
ATM0 2 0 50 PVC AAL5-SNAP 155000 155000 94 Active
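The hop-by-hop check just described can be modeled as a simple table walk. This is an illustrative sketch: the cross-connect entries below are transcribed by hand from the example network, not pulled from the switches.

```python
# Sketch: verify end-to-end PVC continuity by walking each device's
# cross-connect table, checking that the outgoing VPI/VCI of one hop matches
# the incoming VPI/VCI expected by the next hop.

# (device, in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci)
CROSS_CONNECTS = {
    ("Denver", "1/1/1", 0, 40): ("1/1/2", 1, 40),   # from San Jose toward Iowa
    ("Iowa",   "3/0/0", 1, 40): ("3/0/2", 0, 40),   # from Denver toward Chicago
}

# Ordered (device, ingress-port) hops on the San Jose -> Chicago path.
PATH = [("Denver", "1/1/1"), ("Iowa", "3/0/0")]

def pvc_continuous(path, first_vpi, first_vci):
    vpi, vci = first_vpi, first_vci
    for device, in_port in path:
        key = (device, in_port, vpi, vci)
        if key not in CROSS_CONNECTS:
            return False            # cells would be dropped at this hop
        _, vpi, vci = CROSS_CONNECTS[key]
    return True

print(pvc_continuous(PATH, 0, 40))   # San Jose transmits on 0/40
```

If San Jose were misconfigured to send on, say, 0/41, the walk fails at the first hop, which mirrors the "unknown VPI/VCI pair" drop behavior described above.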
A VC (virtual circuit) is a logical circuit created to ensure reliable communication between two network devices. A virtual circuit is defined by a VPI/VCI pair, and can be either permanent (a PVC) or switched (an SVC).
A VCC (virtual channel connection) is a logical end-to-end connection between two ATM-enabled edge devices. An edge device can be an ATM-enabled host, router, or switch. A VCC comprises the series of VCs concatenated along the path that completes the connection.
The following command displays the mapping of the Layer 3 IP address to the ATM VC address, and it also indicates that broadcasts are enabled to go out on the same VC:
SanJose#show atm map
Map list 1483pvc : PERMANENT
ip 172.10.10.2 maps to VC 1, broadcast
ip 172.10.10.3 maps to VC 2, broadcast
On the Denver switch, you can see that the interface status is up:
Denver#show atm statistics
NUMBER OF INSTALLED CONNECTIONS: (P2P=Point to Point, P2MP=Point to MultiPoint)
Type      PVCs  SoftPVCs  SVCs  PVPs  SoftPVPs  SVPs  Total
P2P         11         0     0     0         0     0     11
P2MP         0         0     0     0         0     0      0
TOTAL INSTALLED CONNECTIONS = 11
PER-INTERFACE STATUS SUMMARY AT 10:11:00 UTC Fri Jan 16 1998:
Interface  IF      Admin   Auto-Cfg   ILMI Addr    SSCOP   Hello
Name       Status  Status  Reg State  State        State   State
ATM1/0/0   DOWN    down    waiting    n/a          Idle    n/a
ATM1/0/1   DOWN    down    waiting    n/a          Idle    n/a
ATM1/0/2   DOWN    down    waiting    n/a          Idle    n/a
ATM1/0/3   DOWN    down    waiting    n/a          Idle    n/a
ATM1/1/0   UP      up      waiting    WaitDevType  Idle    n/a
ATM1/1/1   UP      up      done       UpAndNormal  Idle    n/a
ATM1/1/2   UP      up      done       UpAndNormal  Active  2way_in
ATM1/1/3   DOWN    down    waiting    n/a          Idle    n/a
ATM2/0/0   UP      up      n/a        UpAndNormal  Idle    n/a
ATM3/0/0   DOWN    down    waiting    n/a          Idle    n/a
ATM3/0/1   DOWN    down    waiting    n/a          Idle    n/a
The following command displays the incoming VPI/VCI pair 0/40 from the San Jose router on interface ATM 1/1/1, which is switched out on interface ATM 1/1/2 to the Iowa switch:
Denver#show atm vc int atm 1/1/1
Interface  VPI  VCI  Type  X-Interface  X-VPI  X-VCI  Status
ATM1/1/1     0    5  PVC   ATM2/0/0         0     47  UP
ATM1/1/1     0   16  PVC   ATM2/0/0         0     48  UP
ATM1/1/1     0   18  PVC   ATM2/0/0         0     49  UP
ATM1/1/1     0   40  PVC   ATM1/1/2         1     40  UP
ATM1/1/1     0   50  PVC   ATM1/1/2         1     50  UP
The following command displays the incoming VPI/VCI pair 1/40 from the Denver switch on interface ATM 3/0/0, which is switched out on interface ATM 3/0/2 to the Chicago router:
Iowa#show atm vc int atm 3/0/0
Interface  VPI  VCI  Type  X-Interface  X-VPI  X-VCI  Status
ATM3/0/0     0    5  PVC   ATM2/0/0         0     32  UP
ATM3/0/0     0   16  PVC   ATM2/0/0         0     33  UP
ATM3/0/0     0   18  PVC   ATM2/0/0         0     34  UP
ATM3/0/0     1   40  PVC   ATM3/0/2         0     40  UP
ATM3/0/0     1   50  PVC   ATM3/0/1         0     50  UP
After checking VPI/VCI pair and mapping statements for each device, you should be able to ping from the San Jose router to the Chicago router:
SanJose#ping 172.10.10.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echoes to 172.10.10.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
SanJose#ping 172.10.10.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echoes to 172.10.10.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
This section discusses RFC 1483 using SVCs. It covers configuring routers and switches for SVCs, and some troubleshooting techniques.
Multiprotocol encapsulation over ATM AAL5 (RFC 1483) configuration using SVCs is semi-dynamic. It still requires mapping each remote node's ATM NSAP address to its protocol address. The advantage of this configuration is that it does not require any mapping in the ATM switches that interconnect two or more routers; the circuits are established dynamically using the PNNI protocol.
NSAP stands for network service access point. It is a network address, as specified by ISO. An NSAP is the point at which OSI network service is made available to a transport layer (Layer 4) entity.
PNNI has two definitions:
Private Network-Network Interface. This is an ATM Forum specification for distributing topology information between switches and clusters of switches; the information is used to compute paths through the network. The specification is based on well-known link-state routing techniques, and includes a mechanism for automatic configuration in networks in which the address structure reflects the topology.
Private Network Node Interface. This is an ATM Forum specification for signaling to establish point-to-point and point-to-multipoint connections across an ATM network. The protocol is based on the ATM Forum's UNI specification with additional mechanisms for source routing, crank back, and alternate routing of call setup requests.
In the topology illustrated in Figure 18-2, there are three ATM-enabled routers—San Jose, Chicago, and New York. They are interconnected physically via two ATM switches, Denver and Iowa. You want a fully meshed ATM cloud between the three routers.
The statement atm pvc 10 0 5 qsaal enables you to configure the PVC, providing a channel for sending signaling messages for SVC call setup. The VPI and VCI values must also be configured consistently with the local switch. The standard value of VPI is 0 and VCI is 5. It uses a special kind of ATM adaptation encapsulation called qsaal.
The statement atm pvc 20 0 16 ilmi enables you to configure the PVC, providing a channel to send Interim Local Management Interface (ILMI) messages to the ATM switch. The standard value of VPI is 0 and VCI is 16 for ILMI. ILMI has many functions; here, it enables you to register the prefix for the ATM interface address. Upon ATM interface restart, the router sends a trap to the switch, and the switch registers its 13-byte prefix with the router. This 13-byte prefix forms the first 13 bytes of the 20-byte ATM interface address.
ILMI stands for Interim Local Management Interface. It is the specification developed by the ATM Forum for incorporating network-management capabilities into the ATM UNI.
The statement atm esi-address 100000000000.00 configures the last seven bytes of the ATM interface address. Using the 13-byte prefix learned via ILMI and the 7-byte end system identifier (ESI) and selector value, the router forms a 20-byte ATM interface address. This address must be unique for each device in the ATM cloud, so configure the ESI so that it produces a unique NSAP address.
ESI stands for end system identifier. It is an identifier that distinguishes multiple nodes at the same level when the lower-level peer group is partitioned.
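The address assembly just described is plain concatenation, and can be sketched as follows. The values are taken from the San Jose example configuration; the helper name is illustrative.

```python
# Sketch: a router's 20-byte ATM NSAP address = 13-byte prefix learned from the
# switch via ILMI + 6-byte ESI + 1-byte selector from "atm esi-address".

def form_nsap(prefix: str, esi: str, selector: str) -> str:
    raw = (prefix + esi + selector).replace(".", "")
    assert len(raw) == 40, "20 bytes = 40 hex digits"
    # Render in the dotted prefix.esi.selector style used in the map-lists.
    return f"{prefix}.{esi}.{selector}"

nsap = form_nsap("47.009181000000006170598A01",  # prefix from the Denver switch
                 "100000000000",                 # ESI from "atm esi-address"
                 "00")                           # selector byte
print(nsap)   # 47.009181000000006170598A01.100000000000.00
```

The result is exactly the San Jose address that the other routers place in their map-lists.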
The statement map-group 1483svc applies map-list 1483svc to the ATM interface, which in turn maps remote router IP addresses to their respective NSAP addresses for call setup. You can get remote router NSAP addresses by executing the show interface ATM x/x command. The map statement maps the Chicago router IP address to the NSAP address.
The configuration of the San Jose router is as follows:
interface ATM0
 ip address 172.10.10.1 255.255.255.0
 atm esi-address 100000000000.00
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
 map-group 1483svc
map-list 1483svc
 ip 172.10.10.2 atm-nsap 47.009181000000006170595C01.200000000000.00 broadcast
 ip 172.10.10.3 atm-nsap 47.009181000000006170595C01.300000000000.00 broadcast
The other two routers are configured similarly with appropriate protocols to ATM NSAP addresses. The configuration of the Chicago router is as follows:
interface ATM2/0
 ip address 172.10.10.2 255.255.255.0
 map-group 1483svc
 atm esi-address 200000000000.00
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
map-list 1483svc
 ip 172.10.10.1 atm-nsap 47.009181000000006170598A01.100000000000.00 broadcast
 ip 172.10.10.3 atm-nsap 47.009181000000006170595C01.300000000000.00 broadcast
The configuration of the New York router is as follows:
interface ATM0
 ip address 172.10.10.3 255.255.255.0
 atm esi-address 300000000000.00
 atm pvc 1 0 5 qsaal
 atm pvc 2 0 16 ilmi
 map-group 1483svc
map-list 1483svc
 ip 172.10.10.1 atm-nsap 47.009181000000006170598A01.100000000000.00 broadcast
 ip 172.10.10.2 atm-nsap 47.009181000000006170595C01.200000000000.00 broadcast
The statement atm address 47.0091.8100.0000.0061.7059.8a01.0061.7059.8a01.00 represents the Denver switch ATM address. This is generated automatically, although LS1010 switches allow user-defined addresses. Cisco uses the following mechanism to generate unique ATM addresses for the ATM switches:
AFI     Cisco ICD   Assigned by Cisco   Switch MAC    ESI Field     Selector Byte
47      00 91       81 00 00 00         MAC address   MAC address   00
1 byte  2 bytes     4 bytes             6 bytes       6 bytes       1 byte
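Under this layout, the autogenerated switch address can be derived from the switch MAC address alone: the fixed Cisco-assigned prefix, then the MAC twice (once to finish the 13-byte prefix, once as the ESI), then a 00 selector. The sketch below checks this against the Denver address shown above; the helper names are illustrative.

```python
# Sketch: Cisco LS1010 autogenerated ATM address =
# 47.0091.8100.0000 (AFI + ICD + Cisco-assigned) + switch MAC + switch MAC + 00.

CISCO_CONSTANT = "47.0091.8100.0000"

def dot4(hexstr: str) -> str:
    """Group a hex string into 4-digit clusters separated by dots."""
    return ".".join(hexstr[i:i + 4] for i in range(0, len(hexstr), 4))

def auto_switch_address(mac: str) -> str:
    mac_hex = mac.replace(".", "").lower()
    assert len(mac_hex) == 12, "MAC address is 6 bytes"
    return f"{CISCO_CONSTANT}.{dot4(mac_hex)}.{dot4(mac_hex)}.00"

addr = auto_switch_address("0061.7059.8a01")   # Denver switch MAC
print(addr)   # 47.0091.8100.0000.0061.7059.8a01.0061.7059.8a01.00
```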
The statement atm router pnni enables PNNI on all NNI interfaces after ILMI has determined the interface type. The statement node 1 level 56 lowest configures the switch for a PNNI node with node-index 1 at the lowest level of 56.
NNI stands for Network-to-Network Interface. It is an ATM Forum standard that defines the interface between two ATM switches that are both located in a private network or are both located in a public network. The interface between a public switch and a private one is defined by the UNI standard.
The San Jose router ATM interface is directly connected to the Denver switch ATM interface 1/1/1. As the Denver configuration shows, no PVC configuration is required on ATM interface 1/1/1, or on ATM interface 1/1/2, which connects to the Iowa switch. The link between the San Jose router and the Denver switch ATM interface 1/1/1 is called the User-Network Interface (UNI), and the link between the Denver switch ATM interface 1/1/2 and the Iowa switch ATM interface 3/0/0 is called the Network-to-Network Interface (NNI). PNNI runs on the NNI links.
UNI stands for User-Network Interface. It is an ATM Forum specification that defines an interoperability standard for the interface between ATM-based products (a router or an ATM switch) located in a private network and the ATM switches located within public carrier networks.
The configuration of the Denver switch is as follows:
atm address 47.0091.8100.0000.0061.7059.8a01.0061.7059.8a01.00
atm router pnni
 node 1 level 56 lowest
  redistribute atm-static
interface ATM1/1/1
 no keepalive
interface ATM1/1/2
 no keepalive
The configuration of the Iowa switch is similar to the Denver switch:
atm address 47.0091.8100.0000.0061.7059.5c01.0061.7059.5c01.00
atm router pnni
 node 1 level 56 lowest
  redistribute atm-static
interface ATM3/0/0
 no keepalive
interface ATM3/0/1
 no keepalive
interface ATM3/0/2
 no keepalive
The configuration of SVCs requires mapping the protocol address to the remote router NSAP address. Routers form this NSAP address by combining the prefix obtained via ILMI from the ATM switch and the preconfigured ESI address. This creates a complete 20-byte ATM NSAP address for the router ATM interface. So, you need to make sure that ILMI is working properly; the following analysis helps verify this.
The following command output indicates that the router received the prefix 47.009181000000006170598A01 from the ATM switch. The router forms the ATM interface NSAP address by appending its ESI address to the prefix, and it registers this address in the switch table for PNNI to propagate. When the peer is a Cisco device, the output also shows the peer interface name and IP address:
SanJose#show atm ilmi
Interface ATM0 ILMI VCC: (0, 16)
ILMI Keepalive: Disabled
Address Registration: Enabled
Addr Reg State: UpAndNormal
Peer IP Addr: 0.0.0.0
Peer IF Name: ATM1/1/1
Prefix(s): 47.009181000000006170598A01
Addresses Registered:
Local Table  : 47.009181000000006170598A01.100000000000.00
Remote Table : 47.009181000000006170598A01.100000000000.00
The following command confirms that ILMI between the router and switch is working because you can see that the ATM interface has its associated NSAP address. ILMI also exchanges information regarding the UNI version and whether the router is user side or network side. In this example, the router is running UNI Version 3, and it is user side:
SanJose#show int atm 0
ATM0 is up, line protocol is up
Hardware is ATMizer BX-50
Internet address is 172.10.10.1/24
MTU 4470 bytes, sub MTU 4470, BW 156250 Kbit, DLY 100 usec,
rely 210/255, load 1/255
NSAP address: 47.009181000000006170598A01.100000000000.00
Encapsulation ATM, loopback not set, keepalive set (10 sec)
Encapsulation(s): AAL5 AAL3/4, PVC mode
1024 maximum active VCs, 1024 VCs per VP, 4 current VCCs
VC idle disconnect time: 300 seconds
Signalling vc = 10, vpi = 0, vci = 5
UNI Version = 3.0, Link Side = user
Last input 00:00:20, output 00:00:01, output hang never
Last clearing of "show interface" counters never
The following command displays the ILMI message exchanges between the router and switch. ILMI uses standard SNMP messages. The output shows the NSAP address passed by the switch. It is then registered by the router in the local table and sent for registration in the peer switch table. Some of the parameters, such as UNI version and peer interface name, are also exchanged:
SanJose#debug atm ilmi
ILMI Transition : Intf := 1 From Restarting To AwaitRestartAck <ilmi_initiate_addreg>
ILMI: REQ_PROCESSING Reqtype = GETNEXT Reqid = 12 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: Trap Received (ATM0)
ILMI Transition : Intf := 1 From AwaitRestartAck To UpAndNormal <ilmi_snmp_callback>
ILMI: REQ_PROCESSING Reqtype = GET Reqid = 13 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: REQ_PROCESSING Reqtype = GET Reqid = 14 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: REQ_PROCESSING Reqtype = GET Reqid = 15 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: VALID_RESP_RCVD Reqtype = GET Reqid = 13 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: VALID_RESP_RCVD Reqtype = GET Reqid = 14 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: VALID_RESP_RCVD Reqtype = GET Reqid = 15 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: Peer UNI Version on 1 = 3
ILMI: TERMINATE Reqtype = GET Reqid = 13 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: TERMINATE Reqtype = GET Reqid = 14 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: Peer IfName on 1 = ATM1/1/1
ILMI: TERMINATE Reqtype = GET Reqid = 15 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: REQ_TIMEOUT Reqtype = GETNEXT Reqid = 12 Requestor = ILMI, Transid = 1 (ATM0)
ILMI Retry count (before decrement) = 3
ILMI: REQ_PROCESSING Reqtype = GETNEXT Reqid = 12 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: ERROR_RESP_RCVD (No Such Name) Reqtype = GETNEXT Reqid = 12 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: TERMINATE Reqtype = GETNEXT Reqid = 12 Requestor = ILMI, Transid = 1 (ATM0)
ILMI: No request associated with Expired Timer Reqid = 13
ILMI: No request associated with Expired Timer Reqid = 14
ILMI: No request associated with Expired Timer Reqid = 15
ILMI: Trap sent. Waiting for Prefix ATM0
ILMI: No request associated with Expired Timer Reqid = 12
ILMI: Prefix will be Added (If currently not registered): 470918100006170598A1
ILMI: Notifying Address Addition 470918100006170598A1 (ATM0)
ILMI: REQRCVD Reqtype = SET Reqid = 0 Requestor = atmSmap, Transid = 1621657796 (ATM0)
ILMI: Notifying Address Addition 470918100006170598A1 (ATM0)
ILMI: Notifying Address Addition 470918100006170598A1 (ATM0)
ILMI: REQ_PROCESSING Reqtype = SET Reqid = 16 Requestor = atmSmap, Transid = 1621657796 (ATM0)
ILMI: (Local) Reg. validation attempt for 470918100006170598A110000000
ILMI: Address added to local table.
ILMI: Register request sent to peer
ILMI: VALID_RESP_RCVD Reqtype = SET Reqid = 16 Requestor = atmSmap, Transid = 1621657796 (ATM0)
ILMI: Set confirmed. Updating peer address table
ILMI: TERMINATE Reqtype = SET Reqid = 16 Requestor = atmSmap, Transid = 1621657796 (ATM0)
ILMI: No request associated with Expired Timer Reqid = 16
The following debug output indicates the switch side ILMI messages. You can see the switch sending its prefix upon receiving the trap. It also validates the address for the end station to be registered in the end system remote table:
Denver#debug atm ilmi
ATM 1/1/1 ILMI: Querying peer device type. (ATM1/1/1)
ILMI : (ATM1/1/1) From ilmiIntfDeviceTypeComplete To ilmiIntfAwaitPortType <ilmi_initiate_portquery>
ILMI: The Maximum # of VPI Bits (ATM1/1/1) is 3
ILMI: The Maximum # of VCI Bits (ATM1/1/1) is 10
ILMI: Response Received and Matched (ATM1/1/1)
The peer UNI Type on (ATM1/1/1) is 2
The Peer UNI Version on (ATM1/1/1) is 2
ILMI: Assigning default device type (ATM1/1/1)
ILMI: My Device type is set to Node (ATM1/1/1)
ILMI: Auto Port determination enabled
ILMI: For Interface (ATM1/1/1)
ILMI: Port Information Complete :
ILMI: Local Information :Device Type = ilmiDeviceTypeNode Port Type = ilmiPrivateUNINetworkSide
ILMI: Peer Information :Device Type = ilmiDeviceTypeUser Port Type = ilmiUniTypePrivate MaxVpiBits = 3 MaxVciBits = 10
ILMI: KeepAlive disabled
ILMI : (ATM1/1/1) From ilmiIntfAwaitPortType To ilmiIntfPortTypeComplete <ilmi_find_peerPort>
Restarting Interface (ATM1/1/1)
ILMI : (ATM1/1/1) From ilmiIntfPortTypeComplete To AwaitRestartAck <ilmi_process_intfRestart>
ILMI: Response Received and Matched (ATM1/1/1)
ILMI: Errored response <No Such Name> Intf (ATM1/1/1) Function Type = ilmiAddressTableCheck
ILMI : (ATM1/1/1) From AwaitRestartAck To UpAndNormal <ilmi_process_response>
ILMI: Response Received and Matched (ATM1/1/1)
ILMI: The Neighbor's IfName on Intf (ATM1/1/1) is ATM0
ILMI: The Neighbor's IP on Intf (ATM1/1/1) is 2886339073
ILMI: Trap Received (ATM1/1/1)
ILMI: Sending Per-Switch prefix
ILMI: Registering prefix with end-system 47.0091.8100.0000.0061.7059.8a01
ILMI: Response Received and Matched (ATM1/1/1)
ILMI: Validating address 47.0091.8100.0000.0061.7059.8a01.1000.0000.0000.00
ILMI: Address considered validated (ATM1/1/1)
ILMI: Address added : 47.0091.8100.0000.0061.7059.8a01.1000.0000.0000.00 (ATM1/1/1)
ILMI: Sending Per-Switch prefix
ILMI: Registering prefix with end-system 47.0091.8100.0000.0061.7059.8a01
ILMI: Response Received and Matched (ATM1/1/1)
The following debug command displays output for the signaling events occurring on the San Jose router. If the VC to the remote router is not present when you try to ping it, the router opens a call using the signaling protocol and, once connected, starts sending data packets. This process is very fast, but the time depends on the number of pending call requests on the device and the number of ATM switches in the path:
SanJose#debug atm sig-events
ATMAPI: SETUP
ATMSIG: Called len 20
ATMSIG: Calling len 20
ATMSIG(0/-1 0,0 - 0031/00): (vcnum:0) build Setup msg, Null(U0) state
ATMSIG(0/-1 0,0 - 0031/00): (vcnum:0) API - from sig-client ATM_OWNER_SMAP
ATMSIG(0/-1 0,0 - 0031/00): (vcnum:0) Input event : Req Setup in Null(U0)
ATMSIG(0/-1 0,0 - 0031/00): (vcnum:0) Output Setup msg(XferAndTx), Null(U0) state
ATMSIG: Output XferSetup
ATMSIG: Called Party Addr: 47.009181000000006170595C01.200000000000.00
ATMSIG: Calling Party Addr: 47.009181000000006170598A01.100000000000.00
ATMSIG(0/-1 0,0 - 0031/00): (vcnum:0) Null(U0) -> Call Initiated(U1)
ATMSIG(0/-1 0,0 - 0031/00): (vcnum:0) Input event : Rcvd Call Proceeding in Call Initiated(U1)
ATMSIG(0/-1 0,153 - 0031/00): (vcnum:0) Call Initiated(U1) -> Outgoing Call Proceeding(U3)
ATMSIG(0/-1 0,153 - 0031/00): (vcnum:0) Input event : Rcvd Connect in Outgoing Call Proceeding(U3)
ATMSIG(0/-1 0,153 - 0031/00): (vcnum:114) API - notifying Connect event to client ATM0
ATMSIG(0/-1 0,153 - 0031/00): (vcnum:114) Input event : Req Connect Ack in Active(U10)
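The user-side state progression visible in this debug (Null → Call Initiated → Outgoing Call Proceeding → Active) can be summarized as a small state machine. This is a simplified illustrative model of UNI call setup, not the IOS implementation.

```python
# Simplified model of the user-side UNI call-setup states seen in the debug:
# U0 Null -> U1 Call Initiated -> U3 Outgoing Call Proceeding -> U10 Active.

TRANSITIONS = {
    ("Null(U0)",                     "send SETUP"):           "Call Initiated(U1)",
    ("Call Initiated(U1)",           "rcvd CALL PROCEEDING"): "Outgoing Call Proceeding(U3)",
    ("Outgoing Call Proceeding(U3)", "rcvd CONNECT"):         "Active(U10)",
}

def run_call(events):
    state = "Null(U0)"
    for event in events:
        state = TRANSITIONS[(state, event)]   # KeyError = unexpected message
    return state

final = run_call(["send SETUP", "rcvd CALL PROCEEDING", "rcvd CONNECT"])
print(final)   # Active(U10)
```

Once the VC reaches Active(U10), data packets flow on the newly assigned VPI/VCI, as the debug's Connect Ack event shows.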
The following debug command displays output on the signaling events occurring on the Denver ATM switch when the New York router is trying to call the San Jose router. The Denver switch acts as a transit node for this call:
Denver#debug atm sig-events
ATMSIG(1/1/2:0 0 - 161222): Input Event : Rcvd Setup in Null(N0)
ATMSIG(1/1/2:0 0 - 161222): Call Control Rcvd Setup in state : Call Initiated(N1)
ATMSIG: Called Party Addr: 47.009181000000006170598A01.100000000000.00
ATMSIG: Calling Party Addr: 47.009181000000006170595C01.300000000000.00
ATMSIG(1/1/2:0 215 - 161222): Input Event : Req Call Proceeding in Call Initiated(N1)
ATMSIG(1/1/2:0 215 - 161222): Output Call Proc msg, Call Initiated(N1) state
ATMSIG(1/1/2:0 215 - 161222): Call Initiated(N1) -> Call Proceeding sent (NNI) (N3)
ATMSIG: 1/1/1:0 findSvcBlockByCr, Svc not found, callref = 219
ATMSIG: 1/1/1:0 findSvcBlockByCr, Svc not found, callref = 220
ATMSIG(1/1/1:0 36 - 0220): Input Event : Req Setup in Null(N0)
ATMSIG(1/1/1:0 36 - 0220): Output Setup msg(XferAndTx), Null(N0) state
ATMSIG(1/1/1:0 36 - 0220): Null(N0) -> Call Present(N6)
ATMSIG: openTransitConnection, svc 0x60685D68, partnerSvc 0x606863A0
ATMSIG(1/1/2:0 215 - 161222): Null(N0) -> Call Proceeding sent (NNI) (N3)
ATMSIG(1/1/1:0 36 - 0220): Input Event : Rcvd Call Proceeding in Call Present(N6)
ATMSIG(1/1/1:0 36 - 0220): Call Present(N6) -> Incoming Call Proceeding(N9)
ATMSIG(1/1/1:0 36 - 0220): Input Event : Rcvd Connect in Incoming Call Proceeding(N9)
ATMSIG(1/1/1:0 36 - 0220): Call Control Rcvd Connect in state : Incoming Call Proceeding(N9)
ATMSIG(1/1/1:0 36 - 0220): Input Event : Req Connect Ack in Incoming Call Proceeding(N9)
ATMSIG(1/1/1:0 36 - 0220): Output Connect Ack msg, Incoming Call Proceeding(N9) state
ATMSIG(1/1/1:0 36 - 0220): Incoming Call Proceeding(N9) -> Active(N10)
ATMSIG(1/1/2:0 215 - 161222): Input Event : Req Connect in Call Proceeding sent (NNI) (N3)
ATMSIG(1/1/2:0 215 - 161222): Output Connect msg(XferAndTx), Call Proceeding sent (NNI) (N3) state
ATMSIG(1/1/2:0 215 - 161222): Call Proceeding sent (NNI) (N3) -> Active(N10)
ATMSIG: connectTransitPath, svc 0x60685D68, partnerSvc 0x606863A0
ATMSIG(1/1/1:0 36 - 0220): Incoming Call Proceeding(N9) -> Active(N10)
The following CLI command, show atm map, shows the mapping of protocol (IP) addresses to ATM (NSAP) addresses. It also indicates that the connections to the remote routers are up, and which VCD value each connection uses through the ATM switch:
SanJose#show atm map
Map list 1483svc : PERMANENT
ip 172.10.10.2 maps to NSAP
47.009181000000006170595C01.200000000000.00, broadcast,
connection up, VC 2, ATM0
ip 172.10.10.3 maps to NSAP
47.009181000000006170595C01.300000000000.00, broadcast,
connection up, VC 1, ATM0
The following CLI command, show atm vc, displays the status of the VCD values the router uses to connect to the remote routers, as shown by the preceding command. It also provides the VPI/VCI values associated with each VCD:
SanJose#show atm vc
Interface VCD VPI VCI Type AAL/ Peak Avg Burst Status
Encapsulation Kbps Kbps Cells
ATM0 1 0 32 SVC AAL5-SNAP 155000 155000 94 ACTIVE
ATM0 2 0 33 SVC AAL5-SNAP 155000 155000 94 ACTIVE
The following CLI command, show atm vc vcd, provides detailed output for a specific VCD on the local router. This assists in troubleshooting connectivity to the remote router because the output can tell you, for example, that the local router is transmitting cells but not receiving anything in return:
SanJose#show atm vc 1
ATM0: VCD: 1, VPI: 0, VCI: 32, etype:0x0, AAL5 - LLC/SNAP, Flags: 0x50
PeakRate: 155000, Average Rate: 155000, Burst Cells: 94, VCmode: 0x1
OAM DISABLED, InARP DISABLED
InPkts: 42, OutPkts: 46, InBytes: 3796, OutBytes: 4172
InPRoc: 42, OutPRoc: 12, Broadcasts: 34
InFast: 0, OutFast: 0, InAS: 0, OutAS: 0
OAM F5 cells sent: 0, OAM cells received: 0
Status: ACTIVE , TTL: 4
interface = ATM0, call remotely initiated, call reference = 2
vcnum = 1, vpi = 0, vci = 32, state = Active
aal5snap vc, point-to-point call
Retry count: Current = 0, Max = 10
timer currently inactive, timer value = 00:00:00
Remote ATM Nsap address: 47.009181000000006170595C01.300000000000.00

SanJose#show atm vc 2
ATM0: VCD: 2, VPI: 0, VCI: 33, etype:0x0, AAL5 - LLC/SNAP, Flags: 0x50
PeakRate: 155000, Average Rate: 155000, Burst Cells: 94, VCmode: 0x1
OAM DISABLED, InARP DISABLED
InPkts: 45, OutPkts: 46, InBytes: 4148, OutBytes: 4220
InPRoc: 45, OutPRoc: 12, Broadcasts: 34
InFast: 0, OutFast: 0, InAS: 0, OutAS: 0
OAM F5 cells sent: 0, OAM cells received: 0
Status: ACTIVE , TTL: 4
interface = ATM0, call locally initiated, call reference = 1
vcnum = 2, vpi = 0, vci = 33, state = Active
aal5snap vc, point-to-point call
Retry count: Current = 0, Max = 10
timer currently inactive, timer value = 00:00:00
Remote ATM Nsap address: 47.009181000000006170595C01.200000000000.00
RFC 1577, "Classical IP and ARP over ATM," provides a mechanism for IP-enabled routers to talk to each other dynamically across the ATM cloud. No static IP address–to–ATM address mappings are required to get from one router to another; instead, an ARP mechanism similar to Ethernet ARP resolves the addresses dynamically.
In the following topology, you have three routers connected to the ATM cloud formed by two ATM switches. These three routers form a LIS, 172.10.x.x. Classical IP over ATM allows them to talk to each other dynamically with minimum configuration.
In the following example, the Denver switch acts as the ARP server and all three routers act as clients. Each client connects to the ARP server by using a preconfigured ARP server ATM NSAP address, and the server learns each client's IP address by using InARP. The ARP server thus builds an ARP table of IP-to-ATM NSAP address pairs. When one router wants to talk to another, it sends an ARP request to the server, asking for the other router's ATM NSAP address. After the requesting router receives the NSAP address, it communicates with that router directly over the ATM cloud.
The preceding scenario specifies one LIS group. You can have another set of ATM-enabled routers connected to the same ATM switches, but at Layer 3 they might be in a different LIS. This second LIS is independent of the first LIS. Both LIS groups have their own ARP server and corresponding clients. If the router in one LIS wants to talk to a router in another LIS, it has to go through an IP router that is configured as a member of both LIS groups; that is, it has to do Layer 3 routing even though it might be possible to open a direct VC between the two over the ATM cloud.
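The ARP-server bookkeeping described above (learn each client's IP address via InARP, answer ARP requests with NSAP addresses, and NAK lookups for unregistered clients) can be modeled in a few lines of Python. This is a toy sketch, not Cisco code; the class and method names are invented for illustration:

```python
class AtmArpServer:
    """Toy model of an RFC 1577 ATMARP server for one LIS."""

    def __init__(self):
        self.table = {}  # IP address -> ATM NSAP address, learned via InARP

    def inarp_reply(self, ip, nsap):
        # A client answered our InARP request on its VC: record the pair.
        self.table[ip] = nsap

    def arp_request(self, target_ip):
        # A client asks for the NSAP of target_ip; NAK if unregistered.
        nsap = self.table.get(target_ip)
        return ("ARP_REPLY", nsap) if nsap else ("ARP_NAK", None)


server = AtmArpServer()
server.inarp_reply("172.10.10.2", "47.009181000000006170595C01.200000000000.00")
print(server.arp_request("172.10.10.2"))   # registered client: ARP_REPLY
print(server.arp_request("172.10.10.10"))  # unregistered client: ARP_NAK
```

The ARP_NAK branch corresponds to what the troubleshooting output later in this section shows when a client asks for a nonexistent LIS member.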
RFC 1577 is simple and straightforward to implement; its appeal lies in ease of configuration and troubleshooting. This kind of network is well suited for 10–15 nodes in one logical IP subnet. It does not scale, however, because routing protocols have trouble finding neighbors when the VCs are not already established. A multivendor RFC 1577 environment also usually has a single point of failure: the centralized ARP server. Cisco supports multiple ARP servers for the same LIS, but that is a proprietary solution.
Figure 18-3 presents the topology for this example.
The configuration of RFC 1577 requires an ATM ARP server. A Cisco router with an ATM interface or a LightStream 1010 ATM switch can act as the ATM ARP server for RFC 1577. In the following example, the Denver switch is the ARP server.
The CLI command atm arp-server self enables the Denver switch CPU card to act as an ARP server for LIS 172.10.x.x.
The configuration of the Denver switch is as follows:
interface ATM2/0/0
 ip address 172.10.10.4 255.255.255.0
 no keepalive
 atm esi-address 123456789000.00
 atm arp-server self
The CLI command atm arp-server nsap 47.009181000000006170598A01.123456789000.00 enables the San Jose router ATM interface to become an ARP client for LIS 172.10.x.x, and also provides it with an NSAP address of the ARP server for that LIS. It uses this NSAP address to establish a connection to the ARP server when the ATM interface is administratively enabled. The configuration of other routers in the same LIS is similar:
The configuration of the San Jose router is as follows:
interface ATM0
 ip address 172.10.10.1 255.255.255.0
 atm esi-address 100000000000.00
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
 atm arp-server nsap 47.009181000000006170598A01.123456789000.00
The configuration of the Chicago router is as follows:
interface ATM2/0
 ip address 172.10.10.2 255.255.255.0
 atm esi-address 200000000000.00
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
 atm arp-server nsap 47.009181000000006170598A01.123456789000.00
The configuration of the New York router is as follows:
interface ATM0
 ip address 172.10.10.3 255.255.255.0
 atm esi-address 300000000000.00
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
 atm arp-server nsap 47.009181000000006170598A01.123456789000.00
Classical IP over ATM requires the same MTU size on all ATM clients and ARP servers in the same LIS. Understanding the client-server interaction eases the problem of troubleshooting. The show and debug commands provide insight into the client/server interaction.
The following CLI command, show atm map, reveals that as soon as the client interface is enabled, it connects to the ARP server (via VC 159 in this example). The CLI command debug atm arp on the same client reveals that the client receives an InARP request from the ARP server (172.10.10.4), which resolves the client's ATM NSAP address to an IP address so that it can update its table.
The client responds back with its IP address and the server builds its ARP table. Similarly, any client that is enabled is registered into the ARP table using this process.
SanJose#show atm map
Map list ATM0_ATM_ARP : DYNAMIC
arp maps to NSAP 47.009181000000006170598A01.123456789000.00, connection up, VC 159, ATM0

SanJose#debug atm arp
ATMARP(ATM0)I: INARP Request VCD#159 from 172.10.10.4
ATMARP(ATM0)O: INARP Response VCD#159 to 172.10.10.4
ATMSM(ATM0): Attaching to VC #159 for type 1 traffic

SanJose#show atm vc
Interface VCD VPI VCI Type AAL/          Peak   Avg    Burst Status
                           Encapsulation Kbps   Kbps   Cells
ATM0      10  0   5   PVC  AAL5-SAAL     155000 155000 94    ACTIVE
ATM0      20  0   16  PVC  AAL5-ILMI     155000 155000 94    ACTIVE
ATM0      159 0   62  PVC  AAL5-SNAP     155000 155000 94    ACTIVE
The following CLI command displays the corresponding VC to the ATM ARP server and reveals that the call was locally initiated:
SanJose#show atm vc 159
ATM0: VCD: 159, VPI: 0, VCI: 62, etype:0x0, AAL5 - LLC/SNAP,
Flags: 0xD0
PeakRate: 155000, Average Rate: 155000, Burst Cells: 94,
VCmode: 0x1
OAM DISABLED, InARP DISABLED
InPkts: 1, OutPkts: 5, InBytes: 52, OutBytes: 376
InPRoc: 1, OutPRoc: 0, Broadcasts: 4
InFast: 0, OutFast: 0, InAS: 0, OutAS: 0
OAM F5 cells sent: 0, OAM cells received: 0
Status: ACTIVE , TTL: 0
interface = ATM0, call locally initiated, call reference = 112
vcnum = 159, vpi = 0, vci = 62, state = Active
aal5snap vc, point-to-point call
Retry count: Current = 0, Max = 10
timer currently inactive, timer value = 00:00:00
Remote ATM Nsap address:
47.009181000000006170598A01.123456789000.00
The following CLI command, show atm map, now has one entry with a corresponding IP-NSAP address pair from the ARP server:
SanJose#show atm map
Map list ATM0_ATM_ARP : DYNAMIC
arp maps to NSAP
47.009181000000006170598A01.123456789000.00, connection
up, VC 159, ATM0
ip 172.10.10.4 maps to NSAP
47.009181000000006170598A01.123456789000.00, broadcast,
connection up, VC 159, ATM0
The following CLI command, debug atm arp, on the ARP server reveals that as each client comes up, it establishes an ATM connection to the ARP server using the predefined ARP server ATM address. The ARP server sends an InARP request to get the IP address of each client, and the output shows exactly that. You can see that the server transmitted the InARP request for client 172.10.10.2, received a reply, and updated its ARP table:
Denver#debug atm arp
ARPSERVER (ATM2/0/0): tx InARP REQ on vc 254
ATMARP(ATM2/0/0)O: INARP_REQ to VCD#254 for link 7(IP)
ARPSERVER (ATM2/0/0): tx InARP REQ on vc 255
ATMARP(ATM2/0/0)O: INARP_REQ to VCD#255 for link 7(IP)
ATMARP(ATM2/0/0)I: INARP Reply VCD#254 from 172.10.10.2
ARPSERVER (ATM2/0/0): rx InARP REPLY from 172.10.10.2 (vc 254)
ARPSERVER (ATM2/0/0): New IP address for vcd 254 -- was 0.0.0.0, now 172.10.10.2
ATMARP(ATM2/0/0)I: INARP Reply VCD#255 from 172.10.10.3
ARPSERVER (ATM2/0/0): rx InARP REPLY from 172.10.10.3 (vc 255)
ARPSERVER (ATM2/0/0): New IP address for vcd 255 -- was 0.0.0.0, now 172.10.10.3
ARPSERVER (ATM2/0/0): tx InARP REQ on vc 256
ATMARP(ATM2/0/0)O: INARP_REQ to VCD#256 for link 7(IP)
ARPSERVER (ATM2/0/0): vc 256 wait timer expiry. Retransmitting.
ARPSERVER (ATM2/0/0): tx InARP REQ on vc 256
ATMARP(ATM2/0/0)O: INARP_REQ to VCD#256 for link 7(IP)
ATMARP(ATM2/0/0)I: INARP Reply VCD#256 from 172.10.10.5
ARPSERVER (ATM2/0/0): rx InARP REPLY from 172.10.10.5 (vc 256)
ARPSERVER (ATM2/0/0): New IP address for vcd 256 -- was 0.0.0.0, now 172.10.10.5
ARPSERVER (ATM2/0/0): tx InARP REQ on vc 257
ATMARP(ATM2/0/0)O: INARP_REQ to VCD#257 for link 7(IP)
ARPSERVER (ATM2/0/0): vc 257 wait timer expiry. Retransmitting.
ARPSERVER (ATM2/0/0): tx InARP REQ on vc 257
ATMARP(ATM2/0/0)O: INARP_REQ to VCD#257 for link 7(IP)
ATMARP(ATM2/0/0)I: INARP Reply VCD#257 from 172.10.10.1
ARPSERVER (ATM2/0/0): rx InARP REPLY from 172.10.10.1 (vc 257)
ARPSERVER (ATM2/0/0): New IP address for vcd 257 -- was 0.0.0.0, now 172.10.10.1
Until now, you have seen how each client is registered in the ARP table. The following analysis describes how client-to-client interaction occurs for existing clients, and what happens when a client does not exist.
You are pinging from the San Jose router to the Chicago router (172.10.10.2), but the San Jose router does not know the Chicago router's ATM address. So, the San Jose router sends an ARP request to the server and receives a response with the corresponding NSAP address:
SanJose#ping 172.10.10.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echoes to 172.10.10.2, timeout is 2 seconds:
Success rate is 60 percent (3/5), round-trip min/avg/max = 1/1/1 ms

SanJose#debug atm arp
ATMARP(ATM0): Sending ARP to 172.10.10.2
ATMARP:(ATM0): ARP reply from 172.10.10.2 -> 47.009181000000006170595C01.200000000000.00
ATMARP(ATM0): Opening VCC to 47.009181000000006170595C01.200000000000.00..!!!
The following output from the ARP server shows that it received the ARP request from 172.10.10.1 for the ATM NSAP address of 172.10.10.2 and replied with the appropriate ATM NSAP address:
Denver#debug atm arp
ATMARP:(ATM2/0/0): ARP Request from 172.10.10.1 -> 47.009181000000006170598A01.100000000000.00
ATMARP(ATM2/0/0): ARP VCD#0 172.10.10.1 replacing NSAP
ARPSERVER (ATM2/0/0): rx ARP REQ from 172.10.10.1 to 172.10.10.2 (vc 257)
ARPSERVER (ATM2/0/0): tx ARP REPLY from 172.10.10.2 to 172.10.10.1 (vc 257)
ATMARP:(ATM2/0/0): ARP Request from 172.10.10.2 -> 47.009181000000006170595C01.200000000000.00
ATMARP(ATM2/0/0): ARP VCD#0 172.10.10.2 replacing NSAP
ARPSERVER (ATM2/0/0): rx ARP REQ from 172.10.10.2 to 172.10.10.1 (vc 254)
ARPSERVER (ATM2/0/0): tx ARP REPLY from 172.10.10.1 to 172.10.10.2 (vc 254)
Now, see what happens if you try to ping a nonexistent client in the LIS:
SanJose#ping 172.10.10.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echoes to 172.10.10.10, timeout is 2
seconds:
Success rate is 0 percent (0/5)
The ping to the nonexistent client 172.10.10.10 failed. If you did not know that the client does not exist, you might think the ARP server is down. But the debug atm arp command shows that the server is alive and is sending an ARP_NAK response, indicating that the client does not exist, or at least is not registered with that server:
SanJose#debug atm arp
ATMARP(ATM0): Sending ARP to 172.10.10.10
ATMARP(ATM0): ARP_NAK received on VCD#159.
ATMARP(ATM0): Sending ARP to 172.10.10.10
ATMARP(ATM0): ARP_NAK received on VCD#159.
ATMARP(ATM0): Sending ARP to 172.10.10.10
ATMARP(ATM0): ARP_NAK received on VCD#159.
ATMARP(ATM0): Sending ARP to 172.10.10.10
ATMARP(ATM0): ARP_NAK received on VCD#159.
ATMARP(ATM0): Sending ARP to 172.10.10.10
ATMARP(ATM0): ARP_NAK received on VCD#159.
The following CLI command, debug atm arp, on the ARP server displays the response from the server to the client requesting a nonexistent IP-NSAP ARP resolution:
Denver#debug atm arp
ATMARP:(ATM2/0/0): ARP Request from 172.10.10.1 ->
47.009181000000006170598A01.100000000000.00
ATMARP(ATM2/0/0): ARP Update from VCD#257 172.10.10.1 MAP
VCD#0
ARPSERVER (ATM2/0/0): rx ARP REQ from 172.10.10.1 to
172.10.10.10 (vc 257)
ARPSERVER (ATM2/0/0): tx ARP NAK to 172.10.10.1 for
172.10.10.10 (vc 257)
ATMARP:(ATM2/0/0): ARP Request from 172.10.10.1 ->
47.009181000000006170598A01.100000000000.00
ATMARP(ATM2/0/0): ARP Update from VCD#257 172.10.10.1 MAP
VCD#0
ARPSERVER (ATM2/0/0): rx ARP REQ from 172.10.10.1 to
172.10.10.10 (vc 257)
ARPSERVER (ATM2/0/0): tx ARP NAK to 172.10.10.1 for
172.10.10.10 (vc 257)
ATMARP:(ATM2/0/0): ARP Request from 172.10.10.1 ->
47.009181000000006170598A01.100000000000.00
ATMARP(ATM2/0/0): ARP Update from VCD#257 172.10.10.1 MAP
VCD#0
ARPSERVER (ATM2/0/0): rx ARP REQ from 172.10.10.1 to
172.10.10.10 (vc 257)
ARPSERVER (ATM2/0/0): tx ARP NAK to 172.10.10.1 for
172.10.10.10 (vc 257)
ATMARP:(ATM2/0/0): ARP Request from 172.10.10.1 ->
47.009181000000006170598A01.100000000000.00
The following CLI command, show atm map, shows that the San Jose router can now talk directly to all the other routers in the same LIS over ATM SVCs; it no longer needs the server to provide the end routers' NSAP addresses. These map entries remain in effect as long as the two routers continue to exchange packets/cells. In this case study, OSPF Hellos are exchanged at regular intervals, keeping the VCs up.
Also, it is important to note that a broadcast packet cannot initiate a VC in the ATM cloud, because ATM is an NBMA medium. The router therefore needs some way to find all its neighbors in the ATM cloud, or at least in the same LIS: it can either ping each LIS router or have its neighbors configured manually:
SanJose#show atm map
Map list ATM0_ATM_ARP : DYNAMIC
arp maps to NSAP
47.009181000000006170598A01.123456789000.00, connection
up, VC 159, ATM0
ip 172.10.10.1 maps to NSAP
47.009181000000006170598A01.100000000000.00, broadcast,
connection up, VC 162, ATM0
ip 172.10.10.2 maps to NSAP
47.009181000000006170595C01.200000000000.00, broadcast,
connection up, VC 160, ATM0
ip 172.10.10.3 maps to NSAP
47.009181000000006170595C01.300000000000.00, broadcast,
connection up, VC 163, ATM0
ip 172.10.10.4 maps to NSAP
47.009181000000006170598A01.123456789000.00, broadcast,
connection up, VC 159, ATM0
The following CLI command, show atm arp, shows the ARP table on the server with all the active-client entries:
Denver#show atm arp
Note that a '*' next to an IP address indicates an active call
IP Address ATM2/0/0: TTL ATM Address
* 172.10.10.1 19:29 47009181000000006170598a0110000000000000
* 172.10.10.2 12:56 47009181000000006170595c0120000000000000
* 172.10.10.3 19:31 47009181000000006170595c0130000000000000
* 172.10.10.4 9:23 47009181000000006170598a0112345678900000
* 172.10.10.5 16:02 47009181000000006170595c0150000000000000
LAN Emulation (LANE) is a method of emulating a LAN over an ATM infrastructure; standards exist for emulating both Ethernet (802.3) and Token Ring (802.5). Because ATM is connection-oriented, it is difficult to support popular protocols, such as IP and IPX, which are connectionless. By having ATM emulate Ethernet, multiprotocol traffic can be carried over ATM without creating new protocols. It is also possible to create multiple emulated LANs (ELANs) over the same ATM infrastructure. These ELANs cannot talk to each other directly at Layer 2; traffic between them must be routed. Therefore, such a setup always requires a router with a presence in each of the ELANs.
Designing LANE in a campus environment requires planning and careful allocation of devices to enable LANE services. There is lots of documentation written on this topic. This discussion highlights some of the most common issues associated with LANE design in a campus environment. Finally, LANE design depends on the particular network traffic pattern and how ATM resources are allocated to accommodate that pattern.
One of the most important and most heavily utilized components in LANE is the BUS, because all broadcast packets go to the BUS and are forwarded back to all the LECs in an ELAN. The Cisco Catalyst 5000 LANE card's BUS-processing capability is around 120 kpps, and the router AIP card's capability is around 60 kpps.
BUS stands for broadcast-and-unknown server. It is a multicast server used in ELANs to flood traffic addressed to unknown destinations, and to forward multicast and broadcast traffic to the appropriate clients.
The other important factor in LANE design is VC consumption, both in the edge devices and in the ATM switch cloud itself. Equation 18-1 provides some insight.
In this equation, the values are as follows:
W Total number of wiring-closet ATM-LAN switches and routers
E Total number of ELANs (VLANs)
N Typical number of wiring closets per ELAN
Using this equation, a LANE cloud with four edge devices running a single ELAN requires 20 VCs. As the number of ELANs per cloud increases, so does the VCC requirement. This places a burden on the devices running the LANE services, particularly during an ELAN failure. It is therefore recommended practice to spread the LANE services across different devices.
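As a rough cross-check of these numbers, each operational LEC holds five control VCCs per ELAN (Configure Direct, Control Direct, Control Distribute, Multicast Send, and Multicast Forward), which matches the 20 VCs quoted for four edge devices and one ELAN. The following back-of-the-envelope Python sketch ignores Data Direct VCs between clients; the function name and the five-per-LEC simplification are mine, not Equation 18-1 itself:

```python
def lane_control_vcs(edge_devices, elans, closets_per_elan=None):
    """Approximate LANE control-VC count: about 5 VCs per LEC per ELAN
    (Configure Direct, Control Direct, Control Distribute,
    Multicast Send, Multicast Forward). Data Direct VCs are excluded."""
    per_lec = 5
    # If the typical number of wiring closets per ELAN is not given,
    # assume every edge device joins every ELAN.
    members = closets_per_elan if closets_per_elan else edge_devices
    return per_lec * members * elans

print(lane_control_vcs(edge_devices=4, elans=1))  # 20, matching the text
print(lane_control_vcs(edge_devices=4, elans=3))  # grows with the ELAN count
```

The quick growth of this count as ELANs are added is why the text recommends spreading LANE services across different devices.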
The last important factor to consider when designing LANE is the call-setup capability of ATM switches that form the ATM cloud. This capability is especially important during a failure scenario—suddenly, for example, all the LECs try to connect to the LANE services through these switches. In such a case, the ATM switches might experience many simultaneous call-setup requests. If they are not configured to successfully handle the load, a negative ripple effect could result.
The call-handling capability of an LS1010 is about 110 calls per second.
Figure 18-4 presents the topology for this example.
LAN Emulation is commonly used in a campus environment. The routers here are named for cities, but think of them as Building A, Building B, and Building C on a single campus.
LAN Emulation configuration requires the LECS/LES/BUS to be configured first for full functionality. These LANE components can be configured on ATM-enabled routers and Catalyst switches; the difference is performance, and a Catalyst switch runs these services faster than a router does.
LEC stands for LAN Emulation Client. It is an entity in an end system that performs data forwarding, address resolution, and other control functions for a single ES within a single ELAN. An LEC also provides a standard LAN service interface to any higher-layer entity that interfaces to the LEC. Each LEC is identified by a unique ATM address, and is associated with one or more MAC addresses reachable through that ATM address.
LECS stands for LAN Emulation Configuration Server. It is an entity that assigns individual LANE clients to particular ELANs by directing them to the LES that corresponds to the ELAN. There is logically one LECS per administrative domain, and this serves all ELANs within that domain.
LES stands for LAN Emulation Server. It is an entity that implements the control function for a particular ELAN. There is only one logical LES per ELAN, and it is identified by a unique ATM address.
The following configuration on an ATM-enabled Catalyst 5000 enables the LECS/LES/BUS.
The command lane database ABC creates a named database for the LANE configuration server. The database maps each ELAN name to the ATM address of its LANE server and records each ELAN's characteristics. When a device asks to join a particular ELAN, the LECS compares the request against the database and, if it matches, responds with the LANE server's ATM address so that the LEC joining process can continue.
The command name red server-atm-address 47.009181000000006170598A01.00602FBCC511.01 binds the ELAN red to the ATM address of its LANE server; the blue and green ELANs are bound similarly. Refer to your configuration manual regarding the assignment of ATM addresses to the various LANE services on Cisco devices.
The command lane config database ABC links the configuration server's database name to the specified major interface and enables the configuration server.
The command lane auto-config-atm-address specifies that the configuration server's ATM address be computed by Cisco's automatic method of assigning the addresses to the various LANE services.
The command lane server-bus ethernet red enables a LANE server and a LANE BUS for the first emulated LAN. The emulated LANs green and blue are enabled the same way on different subinterfaces; each creates a separate LANE cloud with its own IP subnet.
The command lane client ethernet 1 green enables a green LANE client and binds VLAN 1 on the Catalyst 5000 to the ELAN green. With this configuration, VLAN 1 and ELAN green comprise one large IP subnet; an ELAN is effectively an extension of a VLAN from the Ethernet/Token Ring switched network into the ATM-switched network.
In short, the Catalyst 5000 in this example acts as the LECS for one large LANE domain and as the LES/BUS for the ELANs red, green, and blue. It also acts as the LANE client for ELAN green.
The configuration of Catalyst 5000 is as follows:
lane database ABC
 name red server-atm-address 47.009181000000006170598A01.00602FBCC511.01
 name blue server-atm-address 47.009181000000006170598A01.00602FBCC511.03
 name green server-atm-address 47.009181000000006170598A01.00602FBCC511.02
!
interface ATM0
 atm pvc 1 0 5 qsaal
 atm pvc 2 0 16 ilmi
 lane config database ABC
 lane auto-config-atm-address
!
interface ATM0.1 multipoint
 lane server-bus ethernet red
!
interface ATM0.2 multipoint
 lane server-bus ethernet green
 lane client ethernet 1 green
!
interface ATM0.3 multipoint
 lane server-bus ethernet blue
In the following configuration of the San Jose router, the ATM interface acts as a LANE client in three different ELANs, creating three different IP subnets. San Jose is therefore the router providing inter-ELAN routing and connectivity. An LEC in ELAN red cannot talk at the ATM layer directly to an LEC in ELAN green; the traffic has to go through the San Jose router and be routed at Layer 3:
interface ATM0
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
!
interface ATM0.1 multipoint
 ip address 192.10.10.1 255.255.255.0
 lane client ethernet red
!
interface ATM0.2 multipoint
 ip address 195.10.10.1 255.255.255.0
 lane client ethernet green
!
interface ATM0.3 multipoint
 ip address 198.10.10.1 255.255.255.0
 lane client ethernet blue
In the following configuration of the Chicago router, the ATM interface is acting as an LEC for the ELAN red:
interface ATM2/0
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
!
interface ATM2/0.1 multipoint
 ip address 192.10.10.2 255.255.255.0
 lane client ethernet red
In the following configuration of the New York router, the ATM interface is acting as an LEC for the ELAN blue:
interface ATM0
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
!
interface ATM0.1 multipoint
 ip address 198.10.10.2 255.255.255.0
 lane client ethernet blue
In the following configuration of the Denver and Iowa ATM switches, no per-interface configuration is needed if you are running PNNI between them. The command atm lecs-address-default 47.0091.8100.0000.0061.7059.8a01.0060.2fbc.c513.00 1 provides the LECS address to any directly connected LEC upon initialization. If you are using the automatic configuration option for assigning the LECS address, this command is an absolute requirement on every edge ATM switch directly connected to the routers or to the Catalyst 5000 running an LEC.
The configuration of the Denver switch is as follows:
atm lecs-address-default 47.0091.8100.0000.0061.7059.8a01.0060.2fbc.c513.00 1
atm address 47.0091.8100.0000.0061.7059.8a01.0061.7059.8a01.00
atm router pnni
 node 1 level 56 lowest
  redistribute atm-static
!
interface ATM1/1/1
 no keepalive
!
interface ATM1/1/2
 no keepalive
!
interface ATM1/1/3
 no keepalive
The configuration of the Iowa switch is as follows:
atm lecs-address-default 47.0091.8100.0000.0061.7059.8a01.0060.2fbc.c513.00 1
atm address 47.0091.8100.0000.0061.7059.5c01.0061.7059.5c01.00
atm router pnni
 node 1 level 56 lowest
  redistribute atm-static
!
interface ATM3/0/0
 no keepalive
!
interface ATM3/0/1
 no keepalive
!
interface ATM3/0/2
 no keepalive
Troubleshooting LANE is the most complex of these methods. Usually, the problem is either LES/BUS performance or connectivity to the ELAN. LES/BUS performance is a design issue and involves many factors, but connectivity problems mostly arise from an LEC being unable to join a particular ELAN. Inter-ELAN connectivity depends more on IP routing than on LANE itself, so this section looks at the LEC's operation and its connection phases. After an LEC is operational in an ELAN, it should be able to talk directly to the other LECs.
To become operational, an LEC must establish all the following VCCs (Data Direct comes later, on demand):
Configure Direct — LEC-to-LECS connect phase
Control Direct and Control Distribute — LEC-to-LES control VC connection and join phase
Multicast Send and Multicast Forward — LEC-to-BUS connect phase
Data Direct — LEC-to-LEC connect phase
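The join sequence implied by these VCCs can be modeled as a simple ordered checklist. The following Python sketch is schematic only; the names JOIN_PHASES and lec_is_operational are invented for illustration, not Cisco code:

```python
# Ordered LEC join phases and the VCCs each phase establishes.
JOIN_PHASES = [
    ("LECS connect", ["Configure Direct"]),
    ("LES join",     ["Control Direct", "Control Distribute"]),
    ("BUS connect",  ["Multicast Send", "Multicast Forward"]),
]

def lec_is_operational(established):
    """An LEC is operational once every control VCC is up.
    Data Direct VCCs are opened later, on demand, per destination LEC."""
    required = [vcc for _, vccs in JOIN_PHASES for vcc in vccs]
    return all(vcc in established for vcc in required)

up = {"Configure Direct", "Control Direct", "Control Distribute",
      "Multicast Send", "Multicast Forward"}
print(lec_is_operational(up))                          # True
print(lec_is_operational(up - {"Multicast Forward"}))  # False
```

If an LEC is stuck in a join phase, the failing VCC in this list tells you which component (LECS, LES, or BUS) to investigate, which is the approach the debug output below follows.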
Figure 18-5 illustrates these connections. This section discusses troubleshooting these connections.
The following analysis examines how the LEC on blue ELAN on the New York router joins the LANE and becomes operational, and how it talks with other LECs on the same ELAN.
The colors are not actual colors, but names of logical LANs created with ATM LANE technology. A logical LAN exists between the New York router and the San Jose router.
Even though the routers are named San Jose, New York, Chicago, and so on, they are all on the same campus, like Building A, Building B, and Building C. The names do not indicate actual geographical distances.
For any LEC to join the ELAN, there must be an operational LECS/LES/BUS before it attempts to join the LANE.
The command show lane brief on the Catalyst reveals the ELAN blue LES and BUS address, and confirms that they are in operational mode. This is required before any LEC can join the ELAN blue:
Catalyst#show lane brief
LE Server ATM0.3 ELAN name: blue Admin: up State: operational
type: ethernet Max Frame Size: 1516
ATM address: 47.009181000000006170598A01.00602FBCC511.03
LECS used: 47.009181000000006170598A01.00602FBCC513.00
connected, vcd 261
control distribute: vcd 159, 2 members, 4022 packets
LE BUS ATM0.3 ELAN name: blue Admin: up State: operational
type: ethernet Max Frame Size: 1516
ATM address: 47.009181000000006170598A01.00602FBCC512.03
data forward: vcd 163, 2 members, 6713 packets, 0 unicasts
The command show lane config reveals that the LECS configured on Catalyst 5000 is operational and active with its corresponding LECS address. Also, it indicates that it serves three ELANs and that they are all active:
Catalyst#show lane config
LE Config Server ATM0 config table: ABC
Admin: up State: operational
LECS Mastership State: active master
list of global LECS addresses (12 seconds to update):
47.009181000000006170598A01.00602FBCC513.00 <-------- me
ATM Address of this LECS:
47.009181000000006170598A01.00602FBCC513.00
vcd rxCnt txCnt callingParty
252 1 1 47.009181000000006170598A01.00602FBCC511.01 LES red 0 active
256 2 2 47.009181000000006170598A01.00602FBCC511.02 LES green 0 active
260 6 6 47.009181000000006170598A01.00602FBCC511.03 LES blue 0 active
cumulative total number of unrecognized packets received so far: 0
cumulative total number of config requests received so far: 100
cumulative total number of config failures so far: 29
cause of last failure: no configuration
culprit for the last failure:
47.009181000000006170595C01.00000C7A5660.01
This section covers getting the LECS address via ILMI. A Cisco LEC can find the LECS by using one of three methods:
Hard-coded ATM address
Getting the LECS address via ILMI (VPI=0, VCI=16)
The fixed address defined by the ATM Forum (47007900000000000000000000.00A03E000001.00)
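A minimal Python sketch of that preference order follows. It is illustrative only: the function name is invented, and the ILMI-learned address shown is the LECS address used elsewhere in this example:

```python
# ATM Forum well-known LECS address (quoted in the list above).
WELL_KNOWN_LECS = "47007900000000000000000000.00A03E000001.00"

def find_lecs(hardcoded=None, ilmi_supplied=None):
    """Return the LECS address an LEC would use, in preference order:
    1) a locally configured (hard-coded) address,
    2) the address learned over ILMI (VPI 0 / VCI 16),
    3) the ATM Forum well-known address."""
    if hardcoded:
        return hardcoded
    if ilmi_supplied:
        return ilmi_supplied
    return WELL_KNOWN_LECS

# With no static configuration, the ILMI-learned address wins:
print(find_lecs(ilmi_supplied="47.0091.8100.0000.0061.7059.8a01.0060.2fbc.c513.00"))
# With nothing available, fall back to the well-known address:
print(find_lecs())
```

In the debug output that follows, the LEC is exercising the second method: it asks its directly connected switch for the LECS address over ILMI.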
The command debug lane client all reveals that the LEC on ATM0.1 is trying to get the LECS address from its directly connected switch:
NewYork#debug lane client all
LEC ATM0.1: predicate PRED_LEC_NSAP TRUE
LEC ATM0.1: state IDLE event LEC_TIMER_IDLE => REGISTER_ADDR
LEC ATM0.1: action A_POST_LISTEN
LEC ATM0.1: sending LISTEN
LEC ATM0.1: listen on 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: state REGISTER_ADDR event LEC_CTL_ILMI_SET_RSP_POS => POSTING_LISTEN
LEC ATM0.1: received LISTEN
LEC ATM0.1: action A_ACTIVATE_LEC
LEC ATM0.1: predicate PRED_CTL_DIRECT_NSAP FALSE
LEC ATM0.1: predicate PRED_CTL_DIRECT_PVC FALSE
LEC ATM0.1: predicate PRED_LECS_PVC FALSE
LEC ATM0.1: predicate PRED_LECS_NSAP FALSE
LEC ATM0.1: state POSTING_LISTEN event LEC_SIG_LISTEN_POS => GET_LECS_ADDR
LEC ATM0.1: action A_ALLOC_LECS_ADDR
LEC ATM0.1: state GET_LECS_ADDR event LEC_CTL_ILMI_SET_RSP_POS => GET_LECS_ADDR
LEC ATM0.1: action A_REGISTER_ADDR
Configure Direct:
A bidirectional VCC set up by the LEC as part of the LECS connect phase
Used to obtain the LES ATM address
Figure 18-6 illustrates the LEC-to-LECS connect phase.
The following output reveals that upon finding the LECS address, the LEC establishes a call to the LECS. This VCC is called the Configure Direct VCC. The LEC then sends a configuration request to the LECS on this VCC with its own information, asking for the LES address of the ELAN. The LECS responds, confirming that the requested ELAN blue is defined, and supplies the LEC with the LES address of ELAN blue:
NewYork#debug lane client all
LEC ATM0.1: action A_SEND_LECS_SETUP
LEC ATM0.1: sending SETUP
LEC ATM0.1: callid 0x60AC611C
LEC ATM0.1: called party 47.009181000000006170598A01.00602FBCC513.00
LEC ATM0.1: calling_party 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: state GET_LECS_ADDR event LEC_CTL_ILMI_SET_RSP_NEG => LECS_CONNECT
LEC ATM0.1: received CONNECT
LEC ATM0.1: callid 0x60AC611C
LEC ATM0.1: vcd 28
LEC ATM0.1: action A_SEND_CFG_REQ
LEC ATM0.1: sending LANE_CONFIG_REQ on VCD 28
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: LAN Type 1
LEC ATM0.1: Frame size 1
LEC ATM0.1: LAN Name blue
LEC ATM0.1: LAN Name size 4
LEC ATM0.1: state LECS_CONNECT event LEC_SIG_CONNECT => GET_LES_ADDR
LEC ATM0.1: received LANE_CONFIG_RSP on VCD 28
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: LAN Type 1
LEC ATM0.1: Frame size 1
LEC ATM0.1: LAN Name blue
LEC ATM0.1: LAN Name size 4
After the LEC gets the LES address of its ELAN, it establishes the following:
Control Direct:
Bidirectional point-to-point VCC to the LES for sending control traffic
Set up by the LEC as part of the initialization process
Control Distribute:
Unidirectional point-to-multipoint control VCC from the LES to the LEC for distributing control traffic
Figure 18-7 illustrates LEC-to-LES control VCs.
The following debug output reveals that the LEC sets up the Control Direct VCC to the LES. On this VCC, it sends the LANE_JOIN_REQ, and the LES responds with the LECID on the same VCC. At this point, the LES opens the Control Distribute VCC to the LEC, and the LEC must accept this VCC so that the LES can distribute control traffic:
NewYork#debug lane client all
LEC ATM0.1: action A_SEND_LES_SETUP
LEC ATM0.1: sending SETUP
LEC ATM0.1: callid 0x60ABEDF4
LEC ATM0.1: called party 47.009181000000006170598A01.00602FBCC511.03
LEC ATM0.1: calling_party 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: received CONNECT
LEC ATM0.1: callid 0x60ABEDF4
LEC ATM0.1: vcd 97
LEC ATM0.1: action A_SEND_JOIN_REQ
LEC ATM0.1: sending LANE_JOIN_REQ on VCD 97
LEC ATM0.1: Status 0
LEC ATM0.1: LECID 0
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: LAN Type 1
LEC ATM0.1: Frame size 1
LEC ATM0.1: LAN Name blue
LEC ATM0.1: LAN Name size 4
LEC ATM0.1: received SETUP
LEC ATM0.1: callid 0x60AC726C
LEC ATM0.1: called party 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: calling_party 47.009181000000006170598A01.00602FBCC511.03
LEC ATM0.1: sending CONNECT
LEC ATM0.1: callid 0x60AC726C
LEC ATM0.1: vcd 98
LEC ATM0.1: received CONNECT_ACK
LEC ATM0.1: received LANE_JOIN_RSP on VCD 97
LEC ATM0.1: Status 0
LEC ATM0.1: LECID 1
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: LAN Type 1
LEC ATM0.1: Frame size 1
LEC ATM0.1: LAN Name blue
LEC ATM0.1: LAN Name size 4
After the LEC connects to the LES, it ARPs for the BUS ATM address, and the LES responds. Upon receiving the BUS ATM address, the LEC and the BUS establish the following VCCs to each other:
Multicast Send:
Bidirectional point-to-point VCC set up by the LEC to the BUS
Used for sending broadcast/multicast data to the BUS
Multicast Forward:
Point-to-multipoint VCC set up by the BUS to the LECs
Used for forwarding multicast/broadcast traffic to all the LECs
Figure 18-8 illustrates LEC-to-BUS VCs.
The following debug output reveals that the LEC sends a LANE_ARP_REQ to the LES on the Control Direct VCC to resolve the BUS ATM address. The LES responds on the Control Distribute VCC with the BUS ATM address. The LEC then sets up a connection directly to the BUS. This VCC is called the Multicast Send and is used for forwarding broadcast traffic to other LECs. The BUS, in turn, sets up a point-to-multipoint VCC to the LEC, called the Multicast Forward. Every time a new client comes up, the BUS adds it as a leaf to this point-to-multipoint VCC.
This VCC is used by the BUS for forwarding broadcast and multicast traffic to all the LECs in that ELAN.
At this point, the LEC on the New York router in ELAN blue changes its state to up and becomes operational, ready to talk with other LECs in the same ELAN:
NewYork#debug lane client all
LEC ATM0.1: action A_SEND_BUS_ARP
LEC ATM0.1: sending LANE_ARP_REQ on VCD 97
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: TARGET MAC address ffff.ffff.ffff
LEC ATM0.1: TARGET ATM address 00.000000000000000000000000.000000000000.00
LEC ATM0.1: received LANE_ARP_RSP on VCD 98
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: TARGET MAC address ffff.ffff.ffff
LEC ATM0.1: TARGET ATM address 47.009181000000006170598A01.00602FBCC512.03
LEC ATM0.1: action A_SEND_BUS_SETUP
LEC ATM0.1: predicate PRED_MCAST_SEND_NSAP FALSE
LEC ATM0.1: predicate PRED_MCAST_SEND_PVC FALSE
LEC ATM0.1: sending SETUP
LEC ATM0.1: callid 0x60AC7418
LEC ATM0.1: called party 47.009181000000006170598A01.00602FBCC512.03
LEC ATM0.1: calling_party 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: received CONNECT
LEC ATM0.1: callid 0x60AC7418
LEC ATM0.1: vcd 99
LEC ATM0.1: action A_PROCESS_BUS_CONNECT
LEC ATM0.1: received SETUP
LEC ATM0.1: callid 0x60AC6CA4
LEC ATM0.1: called party 47.009181000000006170595C01.00000C5CA980.01
LEC ATM0.1: calling_party 47.009181000000006170598A01.00602FBCC512.03
LEC ATM0.1: action A_SEND_BUS_CONNECT
LEC ATM0.1: sending CONNECT
LEC ATM0.1: callid 0x60AC6CA4
LEC ATM0.1: vcd 100
%LANE-5-UPDOWN: ATM0.1 elan blue: LE Client changed state to up
LEC ATM0.1: state MCAST_FORWARD_CONN event LEC_SIG_SETUP => ACTIVE
LEC ATM0.1: received CONNECT_ACK
LEC ATM0.1: action A_PROCESS_CONNECT_ACK
LEC ATM0.1: state ACTIVE event LEC_SIG_CONNECT_ACK => ACTIVE
The command show lane client on the New York router shows the LEC in an operational state, along with all the VCCs it has established to the different LANE services.
The important thing to note here is that if the LEC fails to establish any one of these VCCs, it starts the join process from the beginning and keeps trying until it succeeds. Therefore, by looking at the state in which the client is stuck, you can determine where the problem lies and perform additional debugging accordingly:
NewYork#show lane client
LE Client ATM0.1 ELAN name: blue Admin: up State: operational
Client ID: 1 LEC up for 1 hour 35 minutes 35 seconds
Join Attempt: 1
HW Address: 0000.0c5c.a980 Type: ethernet Max Frame Size: 1516
ATM Address: 47.009181000000006170595C01.00000C5CA980.01
 VCD  rxFrames  txFrames  Type        ATM Address
   0         0         0  configure   47.009181000000006170598A01.00602FBCC513.00
  97         1         2  direct      47.009181000000006170598A01.00602FBCC511.03
  98         1         0  distribute  47.009181000000006170598A01.00602FBCC511.03
  99         0        95  send        47.009181000000006170598A01.00602FBCC512.03
 100       190         0  forward     47.009181000000006170598A01.00602FBCC512.03
After the LEC is operational, it can connect to other LECs in the same ELAN; the following analysis shows exactly that.
The following debug output reveals that the LEC sends a LANE_ARP_REQ on the Control Direct VCC for the target LEC it wants to talk to. It receives a LANE_ARP_RSP on the Control Distribute VCC with the corresponding ATM address. It then registers the ATM address in its cache table and sets up a VCC directly to the other LEC; this VCC is called the Data Direct VCC:
NewYork#debug lane client all
LEC ATM0.1: state ACTIVE event LEC_CTL_READY_IND => ACTIVE
LEC ATM0.1: sending LANE_ARP_REQ on VCD 97
LEC ATM0.1: LECID 2
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000c5ca980.01
LEC ATM0.1: TARGET MAC address 00e0.1eae.fa38
LEC ATM0.1: TARGET ATM address 00.000000000000000000000000.000000000000.00
LEC ATM0.1: num of TLVs 0
LEC ATM0.1: received LANE_ARP_REQ on VCD 98
LEC ATM0.1: LECID 2
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000c5ca980.01
LEC ATM0.1: TARGET MAC address 00e0.1eae.fa38
LEC ATM0.1: TARGET ATM address 00.000000000000000000000000.000000000000.00
LEC ATM0.1: num of TLVs 0
LEC ATM0.1: action A_SEND_ARP_RSP
LEC ATM0.1: state ACTIVE event LEC_CTL_ARP_REQ => ACTIVE
LEC ATM0.1: received LANE_ARP_RSP on VCD 98
LEC ATM0.1: LECID 2
LEC ATM0.1: SRC MAC address 0000.0c5c.a980
LEC ATM0.1: SRC ATM address 47.009181000000006170595C01.00000c5ca980.01
LEC ATM0.1: TARGET MAC address 00e0.1eae.fa38
LEC ATM0.1: TARGET ATM address 47.009181000000006170598A01.00E01EAEFA38.03
LEC ATM0.1: num of TLVs 1
LEC ATM0.1: TLV id 0x00A03E2A, len 28, 01 01 47 00 91 81 00 00 00 00 61 70 59 8A 01 00 E0 1E AE FA 3C 00 00 E0 1E AE FA 38
LEC ATM0.1: action A_PROCESS_ARP_RSP
LEC ATM0.1: lec_process_lane_tlv: msg LANE_ARP_RSP, num_tlvs 1
LEC ATM0.1: process_dev_type_tlv: lec 47.009181000000006170598A01.00E01EAEFA38.03, tlv 0x60C90C70
Figure 18-9 illustrates LEC-to-LEC Data Direct VC.
The command show lane client now shows the Data Direct VCC established to the remote LEC. Because this LEC creates individual connections to remote LECs, additional Data Direct VCCs appear in this output. This VCC is removed if there is no activity between the two LECs for a certain amount of time:
NewYork#show lane client
LE Client ATM0.1 ELAN name: blue Admin: up State: operational
Client ID: 1 LEC up for 1 hour 35 minutes 35 seconds
Join Attempt: 1
HW Address: 0000.0c5c.a980 Type: ethernet Max Frame Size: 1516
ATM Address: 47.009181000000006170595C01.00000C5CA980.01
 VCD  rxFrames  txFrames  Type        ATM Address
   0         0         0  configure   47.009181000000006170598A01.00602FBCC513.00
  97         1         2  direct      47.009181000000006170598A01.00602FBCC511.03
  98         1         0  distribute  47.009181000000006170598A01.00602FBCC511.03
  99         0        95  send        47.009181000000006170598A01.00602FBCC512.03
 100       190         0  forward     47.009181000000006170598A01.00602FBCC512.03
 101         6         4  data        47.009181000000006170598A01.00E01EAEFA38.03
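The LE_ARP bindings behind these Data Direct VCCs can also be inspected directly with the show lane le-arp command. The following is a sketch of how to use it; the annotation lines describe what to look for rather than showing literal output, which varies by IOS version:

```
NewYork#show lane le-arp
! Lists the LE_ARP cache: each entry maps a remote MAC address to the
! ATM address of its LEC and shows the VCD of the Data Direct VCC
! carrying the traffic. When an idle Data Direct VCC is torn down,
! its LE_ARP entry ages out with it.
```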
Multiprotocol over ATM (MPOA) works in conjunction with LANE. Intersubnet transfer of data in LANE requires a router, even when two devices in different subnets are connected by a common ATM infrastructure. This degrades the performance of an ATM-based network. At the same time, it is absolutely necessary to divide the ATM infrastructure into small Layer 3 subnets to keep broadcast traffic to a minimum and confined to where it is required. MPOA provides the facility to create small subnets for the devices and, at the same time, provides a cut-through connection between intersubnet devices at the ATM layer when required.
MPOA has two main logical components: the MPOA Client (MPC) and the MPOA Server (MPS). The MPC usually resides in an edge device, such as a Catalyst switch or an ATM-enabled host. Its main function is to act as the point of entry and exit for traffic using shortcuts, and it caches the shortcut information it obtains from its interaction with the MPS. The MPS usually resides in a router running multiple LECs. Its main function is to provide Layer 3 forwarding information to the MPCs.
MPOA is well suited to a large enterprise campus environment with a common ATM backbone connecting different campuses. This common ATM infrastructure can be divided into many logical Layer 3 subnets to keep broadcast traffic to a minimum, yet allow direct intersubnet connections as needed, enhancing overall performance. You can view MPOA as enlarging the scale of LANE in the campus environment without creating a bottleneck at the router.
The following topology has two ELANs: one between the Chicago and San Jose routers, and another between the New York and San Jose routers. Therefore, if a device behind the Chicago router needs to talk to a device behind the New York router, the traffic goes through the San Jose router, which runs an LEC in each ELAN and performs the Layer 3 routing. Even though the Chicago and New York routers are connected to the same ATM switch, all intersubnet traffic must still pass through the San Jose router. This is an inefficient use of the ATM infrastructure and also degrades performance.
MPOA is commonly used in a campus environment. The routers here are named for cities, but in a typical campus deployment you can think of them as "Building A," "Building B," and "Building C."
Using MPOA in this scenario allows a direct connection from the Chicago router to the New York router, even though they are in different subnets. This can be achieved by enabling MPCs on the Chicago and New York routers, and MPS on the San Jose router.
Figure 18-10 shows the topology for this example.
MPOA configuration works in conjunction with LANE. The following configuration example reveals MPCs and MPS configuration on various ATM-enabled devices.
In the following configuration example, the Chicago router acts as the MPOA client (MPC). The command mpoa client config name CHI defines an MPC with the specified name, but the MPC is not functional until it is attached to a specific interface. The command mpoa client name CHI starts the MPC process on a specific interface and makes the MPC fully operational; the MPC acquires an ATM address using a specific algorithm and is ready to accept calls. The command lane client mpoa client name CHI associates the LANE client for ELAN red with the specified MPC, CHI. The configuration of the Chicago router is as follows:
mpoa client config name CHI
!
interface Loopback1
 ip address 40.1.1.1 255.255.255.0
!
interface ATM2/0
 no ip address
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
 mpoa client name CHI
!
interface ATM2/0.1 multipoint
 ip address 192.10.10.2 255.255.255.0
 lane client mpoa client name CHI
 lane client ethernet red
In the following configuration example, the New York router is acting as an MPOA client for the devices behind it. The configuration of the New York router is as follows:
mpoa client config name NY
!
interface Loopback0
 ip address 50.1.1.1 255.255.255.0
!
interface ATM0
 no ip address
 no ip mroute-cache
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
 mpoa client name NY
!
interface ATM0.1 multipoint
 ip address 198.10.10.2 255.255.255.0
 lane client mpoa client name NY
 lane client ethernet blue
In the following configuration example, the San Jose router is acting as the MPOA server for ELAN red and blue.
The command mpoa server config name SJ defines an MPS with the specified name, but the MPS is not functional until it is attached to a specific interface.
The command mpoa server name SJ binds the MPS to a specific major interface. At this point, the MPS can obtain its auto-generated ATM address and an interface through which it can communicate with neighboring MPOA devices. Only when an MPS is both defined globally and attached to an interface is it considered operational.
The command lane client mpoa server name SJ associates an LEC with the named MPS. The specified MPS must exist before this command is accepted.
The configuration of the San Jose router is as follows:
lane database ABC
 name red server-ATM-address 47.009181000000006170598A01.00E01EAEFA39.01
 name red elan-id 10
 name blue server-ATM-address 47.009181000000006170598A01.00E01EAEFA39.03
 name blue elan-id 30
!
mpoa server config name SJ
!
interface Loopback0
 ip address 60.1.1.1 255.255.255.0
!
interface ATM0
 atm pvc 10 0 5 qsaal
 atm pvc 20 0 16 ilmi
 lane config database ABC
 lane auto-config-ATM-address
 mpoa server name SJ
!
interface ATM0.1 multipoint
 ip address 192.10.10.1 255.255.255.0
 lane server-bus ethernet red
 lane client mpoa server name SJ
 lane client ethernet red
!
interface ATM0.3 multipoint
 ip address 198.10.10.1 255.255.255.0
 lane server-bus ethernet blue
 lane client mpoa server name SJ
 lane client ethernet blue
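Before troubleshooting, it helps to confirm that each MPOA component actually came up. The following is a minimal verification pass; the annotation lines describe what to look for rather than showing literal output, which varies by IOS version:

```
SanJose#show mpoa server
! Verify the MPS state and its auto-generated ATM address, and that
! the LECs for ELANs red and blue are bound to it.

Chicago#show mpoa client
! Verify the MPC state; after discovery completes, the MPS should
! appear under the "MPS Neighbors" section.
```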
MPOA troubleshooting becomes easier when you understand the interaction of its logical components, such as MPCs and MPSs. Figure 18-11 shows the operation of MPOA.
MPOA operation is as follows:
MPS discovery: MPCs must discover and know their MPSs.
MPOA resolution request: A request from an MPC to resolve a destination protocol address to an ATM address, in order to establish a shortcut SVC with the egress device.
MPOA cache imposition request: A request from an MPS to an egress MPC, providing the MAC rewrite information for a destination protocol address.
MPOA cache imposition reply: A reply from the egress MPC, matching a previous egress MPS request.
MPOA resolution reply: A reply from the MPS, resolving a protocol address to an ATM address.
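Each of these steps leaves a trace that can be checked from the CLI. The following is a rough step-to-command mapping, given as annotations only; field names and output layout vary by IOS version:

```
! MPS discovery              -> show mpoa client        (MPS Neighbors section)
! Resolution request/reply   -> show mpoa client cache  (ingress cache entries)
! Cache imposition req/reply -> show mpoa server cache  (egress cache entries)
! Shortcut SVC               -> show atm vc             (the new cut-through VC)
```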
The following troubleshooting analysis uses different show and debug commands to illustrate the MPC and MPS interaction used to create a shortcut VCC.
The command show mpoa client on the Chicago router initially shows that it does not know of any MPS or remote MPC.
The first output that follows shows that no MPS has been discovered. Then, after the LE_ARP exchange shown in the debug output, the MPC learns the MPS address. The final output shows the discovered MPS:
Chicago#show mpoa client
MPC Name: CHI Interface: ATM2/0 State: Up
MPC ATM Address: 47.009181000000006170595C01.006070CA9045.00
Shortcut-Setup Count: 1 Shortcut-Setup Time: 1
LECs bound to CHI: ATM2/0.1
MPS Neighbors of CHI:
 ATM Address  MPS-ID  VCD  rxPkts  txPkts
Remote Devices known to CHI:
 ATM Address  VCD  rxPkts  txPkts
The output of debug lane client mpoa indicates that the local MPC is learning the MPS ATM address. Every time an LE_ARP_REQ or LE_ARP_RSP is sent by an LEC, a TLV (type, length, value) is included, specifying the ATM address of the MPC associated with that LEC:
Chicago#debug lane client mpoa
LEC ATM2/0.1: received lec_process_lane_tlv: msg LANE_ARP_REQ, num_tlvs 1
LEC ATM2/0.1: process_dev_type_tlv: lec 47.009181000000006170598A01.00E01EAEFA38.01, tlv 0x61039220
LEC ATM2/0.1: type MPOA_MPS, mpc 00.000000000000000000000000.000000000000.00 mps 47.009181000000006170598A01.00E01EAEFA3C.00 mac 00e0.1eae.fa38
LEC ATM2/0.1: process_dev_type_tlv: create le_arp for le_mac 00e0.1eae.fa38
LEC ATM2/0.1: create mpoa_lec
LEC ATM2/0.1: new mpoa_lec 0x611401D4
LEC ATM2/0.1: process_dev_type_tlv: type MPS, tlv->num_mps_mac 1
LEC ATM2/0.1: lec_add_mps: remote lec 47.009181000000006170598A01.00E01EAEFA38.01 mps 47.009181000000006170598A01.00E01EAEFA3C.00 num_mps_mac 1, mac 00e0.1eae.fa38
LEC ATM2/0.1: lec_add_mps: add mac 00e0.1eae.fa38, mps_mac 0x611407C0
LEC ATM2/0.1: lec_append_mpoa_dev_tlv:
LE_ARP stands for LAN Emulation Address Resolution Protocol. It is the protocol that provides the ATM address corresponding to a given MAC address.
The output of show mpoa client now includes the neighboring MPS and remote MPC ATM addresses, with their associated VCs:
Chicago#show mpoa client
MPC Name: CHI Interface: ATM2/0 State: Up
MPC ATM Address: 47.009181000000006170595C01.006070CA9045.00
Shortcut-Setup Count: 1 Shortcut-Setup Time: 1
LECs bound to CHI: ATM2/0.1
MPS Neighbors of CHI:
 ATM Address                                  MPS-ID  VCD  rxPkts  txPkts
 47.009181000000006170598A01.00E01EAEFA3C.00  1       20   1256    836
Remote Devices known to CHI:
 ATM Address                                  VCD
 47.009181000000006170595C01.00E01EAEFA6D.00  257  ------ MPC on New York router
In the following troubleshooting example, a trace to 50.1.1.1 from Chicago would normally require the data packets to flow through the San Jose router, because San Jose participates in both ELAN red and ELAN blue. With MPOA enabled, however, and a common ATM infrastructure to allow a cut-through VCC, the CHI MPC sends an MPOA resolution request to resolve the IP address 50.1.1.1 to the ATM address through which that network can be reached.
MPS San Jose responds to the request with the IP-to-ATM address resolution. Upon learning the ATM address through which 50.1.1.1 can be reached, the Chicago router sets up a direct VCC through the ATM cloud. This can be confirmed by another trace to 50.1.1.1, which shows that it is now reached in one hop, via 198.10.10.2, rather than two:
Chicago#trace 50.1.1.1
Tracing the route to 50.1.1.1
 1 192.10.10.1 0 msec 198.10.10.2 0 msec 0 msec
Chicago#debug mpoa client all
MPOA CLIENT: mpc_trigger_from_lane: mac 00e0.1eae.fa38 on out ATM2/0.1
MPOA CLIENT: Is MAC 00e0.1eae.fa38 interesting on i/f: ATM2/0.1
MPOA CLIENT: MAC 00e0.1eae.fa38 interesting
MPOA CLIENT CHI: Ingress Cache entry created for 50.1.1.1
MPOA CLIENT CHI: manage_hw_ingress_cache: msgtype QUERY_DATA_FLOW_ACTIVE for ip 50.1.1.1
MPOA CLIENT CHI: ipcache not exist
MPOA CLIENT CHI: mpc_manage_ingress_cache(): called with MPC_IN_CACHE_UPDATE_ADD for destIp=50.1.1.1
MPOA CLIENT CHI: Ingress Cache- curr state= MPC_IN_CACHE_INITIALIZED, event= MPC_ELIGIBLE_PACKET_RECEIVED, dest IP= 50.1.1.1
MPOA CLIENT CHI: Flow detected for IP=50.1.1.1
MPOA CLIENT CHI: MPOA Resolution process started for 50.1.1.1
MPOA CLIENT CHI: Sending MPOA Resolution req for 50.1.1.1
MPOA CLIENT CHI: Ingress Cache state changed- old=0, new=1, IP addr=50.1.1.1
MPOA CLIENT: mpc_count_and_trigger: cache state TRIGGER
MPOA DEBUG: nhrp_parse_packet finished, found 1 CIE's and 2 TLV's
MPOA CLIENT: received a MPOA_RESOLUTION_REPLY (135) packet of size 127 bytes on ATM2/0 vcd 1
MPOA CLIENT CHI: Resol Reply-IP addr 50.1.1.1, mpxp addr=47.009181000000006170595C01.00E01EAEFA6D.00, TAG=2217672716
MPOA CLIENT CHI: Ingress Cache- curr state= MPC_IN_CACHE_TRIGGER, event= MPC_VALID_RESOL_REPLY_RECVD, dest IP= 50.1.1.1
MPOA CLIENT CHI: No Active VC-connect to remote MPC 47.009181000000006170595C01.00E01EAEFA6D.00
MPOA CLIENT CHI: connect to remote MPC 47.009181000000006170595C01.00E01EAEFA6D.00 called
MPOA CLIENT CHI: SETUP sent to remote MPC 47.009181000000006170595C01.00E01EAEFA6D.00
MPOA CLIENT CHI: Ingress Cache state changed- old=1, new=4, IP addr=50.1.1.1
MPOA DEBUG: nhrp_parse_packet finished, found 1 CIE's and 2 TLV's
Chicago#trace 50.1.1.1
Tracing the route to 50.1.1.1
 1 198.10.10.2 0 msec
Upon receiving the MPOA resolution request, MPS San Jose sends an MPOA cache imposition request, as revealed in the following debug output, to the egress MPC. It gets back a response from the egress MPC containing DLL (data link layer) information, in the form of an MPOA cache imposition reply, and the MPS converts this reply into an MPOA resolution reply back to the originating ingress MPC.
The show output reveals that the originating ingress MPC now has a cache entry for the destination IP address and can perform cut-through Layer 2 switching to the destination egress MPC, avoiding Layer 3 switching in the router:
SanJose#debug mpoa server
MPOA SERVER: received a MPOA_RESOLUTION_REQUEST (134) packet of size 64 bytes on ATM0 vcd 342
MPOA SERVER SJ: packet came from remote MPC 47.009181000000006170595C01.006070CA9045.00
MPOA SERVER SJ: process_mpoa_res_req called
MPOA SERVER SJ: mps_next_hop_info activated
MPOA SERVER SJ: next hop interface and next hop ip address are NOT known, trying to find them
MPOA SERVER SJ: mps_next_hop_info: next hop interface: ATM0.3, next hop ip address: 198.10.10.2
MPOA SERVER SJ: ingress cache entry created for: 47.009181000000006170595C01.006070CA9045.00, 50.1.1.1
MPOA SERVER SJ: ingress cache entry is not yet valid, started the giveup timer (40 secs) on it
MPOA SERVER: next_hop_mpoa_device: returning type MPC for 198.10.10.2
MPOA SERVER SJ: I am the egress router: starting mpoa cache impo req procedures
MPOA SERVER SJ: egress cache entry created for: 47.009181000000006170595C01.006070CA9045.00, 50.1.1.1 198.10.10.1 (src)
MPOA SERVER SJ: a NEW cache id (28) is assigned
MPOA SERVER SJ: egress cache entry is not yet valid, started the giveup timer (40 secs) on it
MPOA SERVER SJ: MPOA_CACHE_IMPOSITION_REQUEST packet sent to remote MPC 47.009181000000006170595C01.00E01EAEFA6D.00
MPOA SERVER: received a MPOA_CACHE_IMPOSITION_REPLY (129) packet of size 127 bytes on ATM0 vcd 327
MPOA SERVER SJ: packet came from remote MPC 47.009181000000006170595C01.00E01EAEFA6D.00
MPOA SERVER SJ: process_mpoa_cache_imp_reply called
MPOA SERVER: searching cache entry by new req id 58
MPOA SERVER SJ: egress MPS received a 'proper' mpoa cache impo REPLY: validating and starting the holding timer on the egress cache entry
MPOA SERVER SJ: snooping on the mpoa cache imposition reply packet CIE: cli_addr_tl = 20, cli_nbma_addr = 47.009181000000006170595C01.00E01EAEFA6D.00
MPOA SERVER SJ: tag value 2217672716 extracted
MPOA SERVER SJ: mps_next_hop_info activated
MPOA SERVER SJ: next hop interface and next hop ip address are NOT known, trying to find them
MPOA SERVER SJ: mps_next_hop_info: next hop interface: ATM0.3, next hop ip address: 198.10.10.1
MPOA SERVER SJ: converting the packet to a nhrp res reply
MPOA SERVER SJ: process_nhrp_res_reply called
MPOA SERVER: searching cache entry by new req id 57
MPOA SERVER SJ: success: ingress MPS picked up holding time of 1200 seconds from the 1st CIE
MPOA SERVER SJ: validated and started the holding timer on the ingress cache entry
MPOA SERVER SJ: converting the packet to an mpoa res reply
MPOA SERVER SJ: MPOA_RESOLUTION_REPLY packet sent to remote MPC 47.009181000000006170595C01.006070CA9045.00

Chicago#show mpoa client cache
MPC Name: CHI Interface: ATM2/0 State: Up
MPC ATM Address: 47.009181000000006170595C01.006070CA9045.00
Shortcut-Setup Count: 1 Shortcut-Setup Time: 1
Number of Ingress cache entries: 1
MPC Ingress Cache Information:
 Dst IP addr  State    MPSid  VCD  Time-left  Egress MPC ATM addr
              RESOLVE  1      57   19:27      47.009181000000006170595C01.00E01EAEFA6D.00
Number of Egress cache entries: 1
MPC Egress Cache Information:
 Dst IP addr  Dst MAC         Src MAC         MPSid  Elan  Time-left  CacheId
 192.10.10.2  0060.70ca.9040  00e0.1eae.fa38  1      10    19:26      29
This chapter covered various methods of deploying multiprotocol traffic over ATM. You can select the appropriate method based on your network requirements and environment, and then use these configuration examples to familiarize yourself with it. Of course, you can also try some troubleshooting in the lab before deploying a method. After your method is deployed on the production network, keep the following in mind:
Turning on debugging is not as harmless as it seems. It can cause CPU performance problems and should be approached with caution.
Try to use show commands as much as possible before turning on debugging.
For RFC 1483, ILMI debugging does not impose much stress on the CPU, but debugging signaling events can.
For RFC 1577, debugging on the ARP client side is an excellent way to identify problems. Avoid debugging on the ARP server side.
LANE debugging is very sensitive on both the LEC and the LES/BUS side. Try to use show commands to see where the client is stuck in joining the ELAN. If you are running multiple LECs on the router/switch, debug only the client with the problem.
Avoid turning on debugging on the LES/BUS because they are heavily utilized and can output large amounts of data, stressing the CPU.
For MPOA, debugging on the MPC with keepalive debugging turned off gives the most useful information.
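When a debug session does overwhelm a box, the fastest recovery is to disable everything at once. A common safety habit, using standard IOS commands, is to check the CPU before enabling any debug and to know the one command that stops them all:

```
Router#show processes cpu
! Check current CPU utilization before enabling any debug.
Router#undebug all
! Immediately turns off all active debugging.
```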