Chapter 2: Choosing between Protocols: TCP and UDP

In This Chapter

Comparing TCP versus UDP for network communication

Examining the header structures

Figuring out when to use TCP and UDP

Getting familiar with three-way handshaking

Understanding how TCP sliding windows operate

As I mention in earlier chapters when discussing the Open Systems Interconnection (OSI) model and the Internet Protocol (IP) network model, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are network protocols that operate at the transport or host-to-host layer, depending on the model you are using. They are responsible for the delivery control mechanisms in use with the TCP/IP suite of protocols.

In this chapter, I show you how TCP and UDP allow you to send data over a large IP-based network and the significance of choosing one transport layer protocol over another. In showing you how these two protocols operate, you see where they fit into the IP packet (which I also discuss in Chapter 1 of this minibook) and you find out about the structure of both TCP and UDP by examining the information found in their headers. When an IP host makes a connection to another IP host, they establish the connection through an access path, or port, so I discuss ports and how they relate to a socket in this chapter. I wrap up this chapter by looking at the mechanisms that TCP uses to guarantee delivery of data and why you would want to use UDP instead of TCP in certain cases.

Understanding the UDP and TCP Structure

TCP and UDP are host-to-host layer protocols in the IP network model. Both manage the data that is sent to or from the application layer protocols and delivered by an Internet layer protocol to another host on the network. Although both have a role in managing the data flow between the application layer protocols on the sending and receiving hosts, here are some differences between how TCP and UDP operate (a minimal socket sketch follows the list):

TCP manages the process of transferring data from the application layer protocol on one host to the application layer protocol on another host. Not only does TCP facilitate the delivery of data to the application layer protocol, but it also confirms delivery with the sending host, verifies that data was not changed during transmission, and ensures that data is received in order. This full-service data management is referred to as guaranteed or reliable delivery.

UDP also manages the process of transferring data from the application layer protocol on one host to the application layer protocol on another host, but this is where the similarity to TCP ends. UDP accepts data from the Internet layer protocol, verifies by way of a checksum (see Book I, Chapter 4) that it has not been modified in transmission, and delivers it to the application layer protocol. Although UDP verifies that the data that arrived matches what was sent, it does not notify the sending host that the data arrived, nor does it verify that the data arrived in order. The lack of these last two functions (notification and ordering) makes UDP a best-effort transport protocol, offering non-guaranteed, or unreliable, delivery.
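In everyday network programming, this choice between the two protocols shows up simply as the type of socket you request from the operating system. Here is a minimal Python sketch, using only the standard socket module, that shows the two socket types; nothing is actually transmitted:

import socket

# TCP: connection-oriented, guaranteed (reliable) delivery
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP: connectionless, best-effort (unreliable) delivery
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tcp_sock.close()
udp_sock.close()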

Examining packet structure

TCP and UDP headers and data reside in the data section of the IP packet. In Book I, Chapter 4, I discuss the data link layer structure, or frame. When using IP at the network layer, the contents of the frame data field are called a packet. The packet is the network layer IP structure, which also has a data section, but in this case, the data section of the IP packet contains a structure dubbed a segment, which is either a TCP or UDP segment and is defined by a set of headers.

Technical Stuff: The official name for TCP and UDP data is a segment.

In this section, I show you what makes up the IP packet — or, more specifically, what makes up the header of an IP packet. Figure 2-1 shows an IP packet; the sections of the header are as follows (a short parsing sketch follows the list).

Figure 2-1: The structure of an IP packet.

Version: This number identifies the version of IP being used for this packet. Because this packet uses IPv4, you will find a value of 4 in this field. IPv4 is the current standard, but the world is on its way to converting to IPv6. To find out about IPv6 and its header structure, read through Chapter 4 of this minibook.

Internet Header Length (IHL): This value denotes the overall size of the IP header for this packet, measured in 32-bit words (so a minimal 20-byte header has an IHL of 5). Because the options (covered later in this bulleted list) may vary in length, this value tells the packet receiver where to expect the data to start.

Differentiated Services Code Point (DSCP): RFC2474 defines this field as Differentiated Services, or DiffServ; it used to be known as the Type of Service (ToS) field. This field identifies the type of data found in the IP packet and may be used to classify the data for network Quality of Service (QoS). More and more systems want their traffic given priority, and this field allows for that classification. Your network switches and routers then use the QoS data to allocate packets to higher or lower priority internal queues, processing data at a higher rate from the higher priority queues.

Explicit Congestion Notification (ECN): If both ends of an IP connection support this option, network congestion can be signaled and the flow of traffic adjusted without dropping any data packets. This technology is defined in RFC3168.

Total Length: Total Length identifies the total size of the IP packet or datagram, including both the header and the data. The smallest this can be is 20 bytes (an IP header with no data), and it can grow up to 65,535 bytes. Every IP host is required to handle datagrams or packets of at least 576 bytes, but most will support larger blocks of data.

Technical Stuff: If the datagram — the data found in the data portion of the IP packet — is larger than the size of the network frame, the IP data in the network frame will be fragmented. Fragmenting occurs at the IP layer: the data is divided into multiple network frames and sent to the destination, where it is reassembled. Fragmentation happens as the IP packet crosses from one interface on a router to another and occurs only if the maximum transmission unit (MTU) on the network segment the frame is being transferred to is smaller than the MTU on the original network segment.

Remember: Even if the network frame size increases along the way, reassembly happens only at the final destination. This makes sense because there may be network segments with smaller MTUs farther along the path, which would cause the data to be fragmented again.

Identification: The Identification field is used to identify the fragments that belong to the same IP datagram.

Flags: This three-bit field is used to manage or identify IP datagrams that were fragmented. The bits (in order) are

Reserved: This flag is reserved for future use, so it will be 0 (zero).

Don’t Fragment (DF): If the DF flag is set and the packet needs to cross a network segment that has a smaller MTU, which would require the datagram in the packet to be fragmented, the IP packet is discarded rather than fragmented. When the packet is discarded, the router sends a delivery failure message to the sending host through Internet Control Message Protocol (ICMP).

More Fragments (MF): If the datagram was fragmented, the MF flag identifies that there are still more portions of the original datagram to follow. If the MF flag is 0 and the fragment offset (see the next bullet) is 0, the datagram was never fragmented; the packet containing the last portion of a fragmented datagram has MF set to 0 with a non-0 offset.

Fragment Offset: The offset is used with fragmented datagrams and identifies where this portion of the datagram fits into the whole. This field stores an integer representing the offset, in 8-byte blocks, of the data in the current packet from the beginning of the data in the original, unfragmented datagram.

Time to Live (TTL): TTL is an 8-bit field that can hold a value up to 255. This is the number of hops, or routers, that the packet can cross, or traverse, before being discarded from the network. Rather than a straight hop count, the router is supposed to reduce the count by the number of seconds it takes to handle and forward the packet toward its destination, always rounding up to the next second; in most cases, this decrements the count by 1 per router. If the TTL reaches 0, the router handling the packet discards it and sends an ICMP Time Exceeded message to the sender, informing it of the packet’s ill-fated destiny.

Protocol: The host-to-host layer protocol that will receive the contents of the data portion of the packet. RFC1700 identifies the possible protocol values that can be used in an IP packet. The most common protocols are 1 (ICMP), 6 (TCP), and 17 (UDP).

Header Checksum: This checksum (which I introduce in Book I, Chapter 4) covers only the header data. If the header fails the checksum, the entire packet is discarded because it would be expected to contain errors as well. The TCP or UDP segment carried in the data field of the IP packet calculates its own checksum over the data it contains, so the IP packet is concerned only with errors in the header; the upper-level protocols take care of errors in the data. To calculate the checksum, the header is broken into 16-bit words, the words are summed using one's complement arithmetic, and the complement of that sum is stored in this field.

Source IP Address: This holds the IPv4 address of the host that sent this IP packet.

Destination IP Address: This holds the IPv4 address of the host to which the IP packet is being sent.

Options: This field allows the inclusion of options in the IP packet, such as a copy setting (requiring the options to be copied to all packet fragments if the IP packet needs to be split) or source route settings.

Data: This is the payload of the IP packet. It is not included in the header checksum calculation; the transport layer protocol or higher layer protocols take care of that function. The length of the data in the IP packet can be from 0 bytes up to the maximum amount of data the network frame can hold. For an Ethernet network frame, this is 1,500 bytes minus the size of the IP header, which leaves 1,480 bytes for data with a minimal 20-byte header.
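To make these fields more concrete, here is a minimal Python sketch (my own illustration, using only the standard struct module) that unpacks the fixed 20-byte portion of an IPv4 header and verifies the header checksum described above:

import struct

def internet_checksum(data: bytes) -> int:
    # One's complement sum of 16-bit words, as used by the IP header checksum.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def parse_ipv4_header(packet: bytes) -> dict:
    # Unpack the fixed 20-byte IPv4 header fields described in the list above.
    (ver_ihl, dscp_ecn, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_bytes": (ver_ihl & 0x0F) * 4,             # IHL is stored in 32-bit words
        "total_length": total_len,
        "identification": ident,
        "dont_fragment": bool(flags_frag & 0x4000),
        "more_fragments": bool(flags_frag & 0x2000),
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # stored in 8-byte blocks
        "ttl": ttl,
        "protocol": proto,                             # 1 = ICMP, 6 = TCP, 17 = UDP
        "checksum_ok": internet_checksum(packet[:20]) == 0,
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }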

TCP and UDP header structures

Just as the data portion of the network frame contains a structure that is an IP packet, the data portion of the IP packet contains a structure that is a segment. The segment can be a TCP or UDP segment, depending on which host-to-host service was used. Like frames and packets, the segment has a header structure and data. The header is used by TCP or UDP, whereas the data is forwarded to the application layer protocol associated with a port number (which I discuss in the section “Sockets and ports,” later in this chapter). The following sections show you the structure and contents of the TCP and UDP headers, respectively.

TCP headers

The RFC793-compliant TCP header is shown in Figure 2-2 and includes the following fields; a short parsing sketch follows the list.

Figure 2-2: The structure of the TCP header.

Source Port: This is the TCP port used on the sending computer.

Remember: The connection between two hosts is made between two TCP ports: one for the source host, and one for the destination host.

Destination Port: This is the TCP port being connected to on the target host.

Sequence Number: All TCP data is sent sequentially. The sequence number is used to ensure that all data is received and is in order.

Acknowledgement Number: When data is received, an acknowledgement is sent back to the sender, letting the sender know that the last complete piece of data was received.

Data Offset: This value specifies the size of the TCP header, measured in 32-bit words, and therefore tells the receiver where in the segment the data begins.

Reserved: This space has not been designated to hold any data, but is reserved for future use.

Flags: Eight flags can be set on a TCP segment, which deal with session control with the remote host:

CWR (Congestion Window Reduced): Acknowledges a congestion notification received from a TCP segment with the ECE flag enabled.

ECE (ECN Echo): Echoes an Explicit Congestion Notification (ECN), alerting the sending host that there is congestion on the network between the sending and receiving hosts.

URG (Urgent): Indicates that information in the Urgent Pointer field should be used.

ACK (Acknowledge): Acknowledges received data or, during session setup, the receipt of a TCP segment with the SYN flag set. The latter is part of the three-way handshake process covered in the “Three-way handshaking” section of this chapter.

PSH (Push): Requests that any data the receiving host has received be pushed to the receiving application.

RST (Reset): Notifies the host receiving this TCP segment that the connection has been reset (or unexpectedly terminated) and needs to be reestablished with another three-way handshake.

SYN (Synchronize Sequence Numbers): TCP guarantees that data is received in order by numbering segments. This flag is used to synchronize the initial sequence numbers for both the sending and receiving hosts.

FIN (Finished): Indicates that the TCP communication session is complete and is going to be cleanly terminated, unlike the reset, which is an unexpected termination.

Window Size: Window Size tells the sending device how much data the receiver is willing to accept in a single transmission burst. Read more in the “Sliding windows” section, later in this chapter.

Checksum: A checksum is generated over the TCP segment and stored in this field, allowing the receiver to verify that the data has not been corrupted.

Urgent Pointer: This field holds an integer indicating the offset from the current sequence number to the end of the urgent data. In other words, it is the number of bytes remaining in the segment that contain urgent data.

Options: Any additional options are specified by this field. They may be of varying length, which is specified by the value in the second byte of this field. The options specified here would control the flow of segment data, such as specifying the maximum size of the TCP segments.

Padding: The specifications for TCP state that the total header size needs to end at a 32-bit boundary, so the padding field is used to round off the header to a proper size after the variable length options have been specified. Because this is a padding field, it is filled with zeros.
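As a companion to Figure 2-2, here is a minimal Python sketch that unpacks the fixed 20-byte portion of a TCP header; the flag names match the bullets above (options and padding are not decoded):

import struct

TCP_FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR"]

def parse_tcp_header(segment: bytes) -> dict:
    (src_port, dst_port, seq, ack, offset_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) * 4   # header length in bytes (32-bit words * 4)
    flag_bits = offset_flags & 0x00FF        # the low 8 bits carry the flags listed above
    flags = [name for i, name in enumerate(TCP_FLAG_NAMES) if flag_bits & (1 << i)]
    return {
        "source_port": src_port,
        "destination_port": dst_port,
        "sequence_number": seq,
        "acknowledgement_number": ack,
        "data_offset_bytes": data_offset,
        "flags": flags,
        "window_size": window,
        "checksum": checksum,
        "urgent_pointer": urgent,
    }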

UDP header

In Figure 2-3, you can see that the UDP header is compliant with RFC768 and contains only four fields, taking up a total of 64 bits; a short packing sketch follows the list:

Figure 2-3: The structure of the UDP header.

Source Port: This is the UDP port used on the sending computer.

Destination Port: This is the UDP port being connected to on the target host.

Length: The length of the UDP header and its data is stored here. This is used to verify that the full datagram has arrived intact.

Checksum: A checksum is generated on the UDP data and stored in this field, ensuring that data has not been corrupted during transit. UDP verifies only that the data that arrives has arrived intact, whereas TCP ensures that the data arrived, arrived intact, and arrived in order.
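Because the entire UDP header is only 8 bytes, packing one takes little more than a single line. Here is a minimal sketch (the checksum is shown as 0, which over IPv4 means that no checksum was computed):

import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)   # the Length field covers the 8-byte header plus the data
    checksum = 0                # 0 = "no checksum computed" (permitted over IPv4)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(49175, 53, b"example query")
print(len(header))  # 8 bytes: the entire UDP header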

Sockets and ports

TCP and UDP operate at the host-to-host layer in the IP communication model and provide host-to-host communication services for the application layer protocol. This means an application layer protocol is on one IP host connecting to an application layer protocol on another IP host. In most situations, these host-to-host connections have a server process running on one host and a client process running on the other host. Examples of this host-to-host connection include a web browser (such as Mozilla Firefox) connecting to a web server; an e-mail client (such as Outlook Express) connecting to a Simple Mail Transfer Protocol (SMTP) mail server; or a Secure Copy Protocol (SCP) client (such as WinSCP) connecting to an SCP server. To manage the connection between these application layer protocols, TCP and UDP use ports and sockets, which I look at in this section.

A port is a TCP or UDP connection point. Think of ports as receptacles on an old-fashioned telephone switchboard. There are 65,536 (or 2^16) ports available for a host to manage connections, numbered from 0 to 65,535, for each of TCP and UDP. When you set up an application server on an IP host, you configure that server to use (or bind to) a specific TCP or UDP port. By associating the application layer server with a specific port, you have created a destination to which a remote IP host can connect.

When the remote IP host connects to an application layer server, the connection the host makes is to a port operating on a specific IP host (identified by an IP address). This pairing of an IP address and a port as a connection endpoint is a socket. In the old-fashioned switchboard analogy, each client’s phone has two connectors at the switchboard: a receptacle and a plug. Think of these connectors as the ports; because each connector is associated with a particular phone, together they make a socket, just as a TCP or UDP port paired with an IP address makes a socket. To make a phone connection for a client, the “operator” takes the plug for one client and connects it to the receptacle for the other client. With IP, the client application also operates on a port, so on the client host there is an IP address and a port for the client side of the connection: a socket. On the server side of the connection, an IP address for the server and a port make a socket on the server host. The connection between the client application layer and the server application layer is a virtual connection between these two sockets.

As an example of this connection process, I walk you through the process of connecting to a website, such as www.wiley.com. You would open your web browser (like Mozilla Firefox) and type www.wiley.com into the address bar. I do not go into the details here, but your web browser uses a Domain Name System (DNS) server to look up the name www.wiley.com and find out what its IP address is. For this example, the address is 192.0.2.100.

Firefox makes a connection to the 192.0.2.100 address and to the port where the application layer web server is operating. Firefox knows which port to expect because it is a well-known port (which I discuss later in this section); the well-known port for a web server is TCP port 80. The destination socket that Firefox attempts to connect to is written as address:port, or in this example, 192.0.2.100:80. This is the server side of the connection, but the server needs to know where to send the web page you want to view in Mozilla Firefox, so there is a socket for the client side of the connection as well.

The client-side connection is made up of your IP address, such as 192.168.1.25, and a randomly chosen dynamic port number (keep reading this section to find out more about dynamic ports). The socket associated with Firefox looks like 192.168.1.25:49175. Because web servers operate on TCP port 80, both of these sockets are TCP sockets; if you were connecting to a server operating on a UDP port, both the server and client sockets would be UDP sockets.
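Here is a minimal Python sketch of the same connection; the hostname is the example from the text, and the local port is chosen dynamically by the operating system, just as described:

import socket

# Connect to the web server's well-known TCP port 80.
with socket.create_connection(("www.wiley.com", 80), timeout=5) as conn:
    local_ip, local_port = conn.getsockname()      # the local (client) socket
    remote_ip, remote_port = conn.getpeername()    # the foreign (server) socket
    print(f"local socket:   {local_ip}:{local_port}")    # for example, 192.168.1.25:49175
    print(f"foreign socket: {remote_ip}:{remote_port}")  # for example, 192.0.2.100:80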

When you refer to sockets, you always base the names on your current point of view. The two types of sockets are

Local sockets: Sockets or connection endpoints on your current host.

Foreign, or remote, sockets: Sockets or connection endpoints on another host.

If you use a command, such as netstat -n on Microsoft Windows or Linux, you see a listing of the local addresses (and ports) and the foreign addresses (and ports) to which they are connected.

The three categories of TCP and UDP ports are

Well-known ports: When IP was being implemented, services that needed specific ports were assigned them slowly, starting from the lowest port numbers and working upward. Ports 0–1023 are considered well-known ports because they were used by many of the core services on Unix servers, and most required privileged permissions on the server to implement. Telnet (23) and Simple Mail Transfer Protocol (SMTP) (25) are two examples of these services.

Registered ports: The Internet Assigned Numbers Authority (IANA) keeps the list of all services that run on both the well-known ports and on all registered ports. The registration process puts a permanent association in place between the port number and the service. These services are all long-running services and are assigned to ports between 1,024 and 49,151. The Microsoft Remote Desktop Protocol (RDP) (3389) and Network File System (NFS) (2049) are two examples of registered ports.

Dynamic and/or private ports: All other ports, from 49,152 to 65,535, are referred to as dynamic, or private, ports. These ports are not permanently associated with any service. If I write my own service, I can configure it to use any dynamic port that I want, but someone else may write a service that uses the same port. This causes no issue until you install both services on the same IP host, because they will both want to use the same port, and that is just not possible. It would be like two people having their phones hooked up to the same plug and receptacle at the operator’s office. This problem should not happen with a registered port, though, because a registered port is permanently associated with a single service, so another developer cannot register the same port.

Checking out which services use which ports

So what services use what ports in the world of TCP and UDP? Here is a short list (from the services file on my computer) of some of the major services in the well-known port range. You may notice that some of the ports are used by services on both UDP and TCP, but this is not true in all cases. As you scan through the list, you may notice that most of these service names are application layer protocols that either have come up in this book or will come up later. Some protocols, such as FTP, use two ports, and Kerberos operates on either TCP or UDP (which typically means that one transport functions as a backup for the other). A short lookup sketch follows the list.

ftp-data           20/tcp                           #FTP, data

ftp                21/tcp                           #FTP, control

telnet             23/tcp

domain             53/tcp                           #Domain Name Server

domain             53/udp                           #Domain Name Server

bootps             67/udp    dhcps                  #Bootstrap Protocol Server

bootpc             68/udp    dhcpc                  #Bootstrap Protocol Client

tftp               69/udp                           #Trivial File Transfer

http               80/tcp    www www-http           #World Wide Web

kerberos           88/tcp    krb5 kerberos-sec      #Kerberos

kerberos           88/udp    krb5 kerberos-sec      #Kerberos

ntp               123/udp                           #Network Time Protocol

netbios-ns        137/tcp    nbname                 #NETBIOS Name Service

netbios-ns        137/udp    nbname                 #NETBIOS Name Service

netbios-dgm       138/udp    nbdatagram             #NETBIOS Datagram Service

netbios-ssn       139/tcp    nbsession              #NETBIOS Session Service
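The same name-to-port mappings can be looked up programmatically from this file; here is a minimal Python sketch using the standard library:

import socket

# Forward lookups: service name and transport to port number.
for name, proto in [("ftp", "tcp"), ("domain", "udp"), ("http", "tcp"), ("ntp", "udp")]:
    print(name, proto, socket.getservbyname(name, proto))

# Reverse lookup: which service is registered on TCP port 23?
print(socket.getservbyport(23, "tcp"))  # telnet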

Knowing When to Use TCP

When you have a large amount of data to send over a network connection, use TCP because the data is guaranteed to arrive in order and intact. Yes, you could send that data over UDP, but you need to build into your application a process for checking and verifying that all the data has arrived or to deal with the fact that some data may not show up. With TCP, all this checking for order and data accuracy is built into the TCP transport layer protocol. By having these checks built into the transport protocol, less work is required by the developer of the application protocols because the receiver is already guaranteed to receive consistent data from the network.

Remember: TCP is at its strongest when you are sending large blocks of data over an unreliable network that may be experiencing a high number of dropped packets. With TCP, you are guaranteed that all the data will arrive and will be in order. TCP guarantees this level of data accuracy by sending a series of acknowledgements for each piece of data that the client device receives.

TCP is a good choice for large data blocks that must be broken into several packets and reassembled into the original data block at their destination, because it guarantees that this reassembly happens correctly and in order. The following sections describe some services that work with TCP.

Services that use TCP

TCP is used by such protocols as

File Transfer Protocol (20, 21)

Telnet (23)

Simple Mail Transfer Protocol (25)

Post Office Protocol 3 (110)

Network News Transfer Protocol retrieval of newsgroup messages (119)

HyperText Transfer Protocol over SSL/TLS (443)

Microsoft-DS Server Message Block file sharing (445)

SolarWinds Kiwi Log Server (1470)

Citrix XenApp Independent Computing Architecture thin client protocol (1494)

In addition to the preceding are several services that operate on, or at least have been registered for, both TCP and UDP connections. These include

Secure Shell (22)

Domain Name System (53)

HyperText Transfer Protocol (80)

Lightweight Directory Access Protocol (389)

Timbuktu service ports (1417–1420)

Microsoft Windows Internet Name Service (1512)

Cisco Skinny Call Control Protocol (2000)

Three-way handshaking

To send data over TCP, the two hosts must first complete a session establishment process known as handshaking, or more specifically a three-way handshake, because it involves completing three IP packets. The three-way handshake is illustrated in Figure 2-4 and involves these three frames.

SYN: This is the synchronization phase. This TCP segment sets the sequence number to be used for the upcoming data transfer.

SYN-ACK: The reply from the remote host does two things:

• Verifies the sequence number that will be used.

• Acknowledges the original request.

ACK: This data is sent from the originating host, and acknowledges the sequence number and the acknowledgement from the targeted host.

Figure 2-4: The TCP three-way handshake.


After the session is established through the handshaking process, the TCP sequence numbers are used in sequential order until the session is terminated. The sequence numbers allow all the data to arrive in order (or in the correct sequence).

There is a process to start a session, and there is also a process to terminate the TCP session. To terminate the session, a Finish frame is sent from one host to the other:

FIN: The Finish frame is a request that the session be terminated.

FIN-ACK: The response to a finish request is an agreement to finish plus an acknowledgement. Unlike session setup, there is no follow-up acknowledgement: the responding host closes its end of the session when it sends this frame, and the host that requested the finish closes its end when it receives the FIN-ACK.
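You never code the handshake yourself; the operating system's TCP implementation performs it whenever an application opens a connection. In this minimal Python sketch (the address is the example address used earlier in the chapter), the three-way handshake happens inside connect() and the finish exchange happens inside close():

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("192.0.2.100", 80))  # SYN, SYN-ACK, ACK are exchanged here
# ... data would be sent and received here, tracked by sequence numbers ...
sock.close()                       # the finish (FIN) exchange happens here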

Sliding windows

Sliding windows are part of the data flow control and are implemented through TCP. If you refer to the TCP header in Figure 2-2, you can see that one of the fields in the header is the Window Size, in addition to a Sequence Number and an Acknowledgement Number.

To send data across the network, there is a happy medium between sending 100MB in one go and sending 1 byte at a time. The sliding window is how TCP manages the amount of data that is out on the network for delivery at any given time. Think of a delivery courier who can carry a set number of packages in his truck. When he leaves to deliver them, you do not know what is going on (prior to couriers using wireless handheld computers that constantly update your package status). When he returns to the sorting station, you know what was delivered, and you can put more packages on his truck for delivery. Sliding windows set the size of his truck.

The benefit of using sliding windows is that they control the overall speed at which data can be delivered. The window size is represented as the amount of data that can be out for delivery on the network at any one time. In Figure 2-5, I set up a sliding window size of six frames, which equates to six frames' worth of data waiting on my computer to be sent. The size of the window is the smaller of the window size announced by the receiving host and the size of the send buffer on the sending host.

This process then starts with the sending host sending a complete buffer of data — or in this case, six frames. Those frames take a period of time to cross the network to the destination. When they arrive at the destination, the recipient host accepts all the frames and sends back an acknowledgement for the last frame that it receives. If something went wrong with some of the packets, or they suffered an abnormal delay, the acknowledgement may not be for the entire size of the window. Although each frame could have a separate acknowledgement, more typically, the frames will arrive in close enough succession that only one acknowledgement will be sent.

Figure 2-5: Sending a window full of data with sliding windows.


In this case, Figure 2-6 shows the acknowledgement being returned to the sender. Also, note that the window has moved along to the next block of data that it expects to receive from the sender. If the acknowledgement had been for only part of the window — say, that only three frames were received — the acknowledgement would have been for frame 4 as the next frame. The sender would have moved its window to send — or resend — frame 4 with five other frames.

Figure 2-6: Acknowledging a full window with a single acknowledgement.

Figure 2-7 shows the follow-up set of data being sent to the receiving host. This process continues until all the data has been received by the receiving host.

Based on the reliability of the underlying network, the speed of the network, and the Round Trip Time (RTT) for data, you may want to adjust the window size to achieve an optimal data transmission rate. Here are a few examples of how the window size affects data transmission time (a small calculation sketch follows these examples):

On a reliable network with an RTT of 10 milliseconds (ms), sending 32 frames of data with a window size of 1 means waiting 10 ms for an acknowledgement after sending each frame. To send all the data, you would be looking at a total time of 320 ms, with the sender transmitting 32 frames.

On the same reliable network with the same RTT, set the window size to 10. If all the data gets through in each transmission cycle, the sending host would send its 32 frames in no less than 40 ms: three blocks of 10 frames and one block of 2 frames, each followed by a 10 ms wait for the acknowledgement to be received.

Finally, on an unreliable network that allows only two of every ten packets to make it through, with the same RTT, making a valid estimate is very hard. With a sliding window of 10 frames, and assuming that one of the frames that makes it through in each cycle is the first outstanding frame, the best time in which you would likely be able to transmit the data is about 320 ms, and probably much longer. During that 320 ms, you would have transmitted approximately 275 frames of data. In this scenario, a smaller window size would have gotten the data through just as quickly but with the transmission of far fewer frames.
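A back-of-the-envelope calculation for the first two scenarios, assuming a perfectly reliable network and a single acknowledgement per window:

import math

def total_time_ms(frames: int, window_size: int, rtt_ms: int) -> int:
    # Each cycle sends up to one window of frames, then waits one RTT
    # for the acknowledgement before sliding the window forward.
    cycles = math.ceil(frames / window_size)
    return cycles * rtt_ms

print(total_time_ms(32, 1, 10))   # 320 ms, 32 frames transmitted
print(total_time_ms(32, 10, 10))  # 40 ms, 32 frames transmitted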

Figure 2-7: Sending a second window worth of information.


Knowing When to Use UDP

So why would you ever want to use a best-effort delivery transport protocol when you have a guaranteed option available to you? The guarantee of delivery that TCP provides comes with extra overhead, just as guaranteed delivery with Express Post requires completing an extra form and costs you more money. This overhead can be overwhelming when the data that you are delivering is small or cannot tolerate delays. So UDP is useful when you care less about the data arriving in a particular sequence because it all fits in one data packet, or when you can deal with missing data, as in the case of streaming video, where losing a frame or two is easier to tolerate than the long delays and stuttered playback that retransmission would cause.

And what uses UDP? UDP is used by such protocols as

Dynamic Host Configuration Protocol (67, 68)

Trivial File Transfer Protocol (69)

Network Time Protocol (123)

Simple Network Management Protocol (161)

Internet Security Association and Key Management Protocol (500)

Doom, popular 3D online first-person shooter (666)

Layer 2 Tunneling Protocol (1701)

Microsoft Simple Service Discovery Protocol (1900)

Cisco Hot Standby Router Protocol (1985)

Cisco Media Gateway Control Protocol (2427)

Remember: Some services can run with TCP and UDP, as listed in the “Knowing When to Use TCP” section, earlier in this chapter.

So is UDP really all that unreliable? Perhaps in the past it was, when you were sending data over all kinds of links on which bits regularly ran the risk of being lost or delayed. In most modern networks, though, this tends not to be the case. With highly reliable underlying networks, the role of UDP is now mostly to reduce overhead, although it still offers utility in the cases where unreliable connections persist.

UDP offers a great advantage in situations where you are sending only a small amount of data. As mentioned, between the handshaking process of TCP and its larger headers, TCP carries more overhead and sends less data per packet. If you have a small amount of data — less than about 1,400 bytes — UDP is for you. The data will either get there or not, and because it is small enough to fit in a single packet, it will arrive in order.
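A minimal Python sketch of that fire-and-forget style: one small datagram, no handshake, and no acknowledgement (the address and port are placeholders for illustration):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connection setup: the datagram is simply handed to the network,
# and it either arrives or it does not.
sock.sendto(b"small status update", ("192.0.2.100", 5005))
sock.close()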

Some systems using UDP include Trivial File Transfer Protocol (TFTP), a UDP version of FTP that allows you to use UDP for large pieces of data. If data does not arrive intact or in order, it is not the job of the transport protocol to deal with it; the application has to deal with the issue. In the case of TFTP, the application builds that checking into its own protocol, placing block-order information into what UDP sees as data but TFTP sees as its protocol information. Because the order of a big file matters, the data in the file needs to be kept in sequence; in TFTP, though, this job is done outside the transport layer.

Error recovery is another function that must be handled by the application protocol when using UDP as the transport layer protocol. If needed, the application using UDP must retransmit entire data blocks to get complete data. Again, this approach suits fairly reliable networks on which retransmissions will be rare.
