© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
V. Jain, Wireshark Fundamentals, https://doi.org/10.1007/978-1-4842-8002-7_4

4. Analyzing Layer 4 Traffic

Vinit Jain, San Jose, CA, USA
 
This chapter covers the following topics:
  • Understanding the TCP/IP model

  • Transmission Control Protocol

  • User Datagram Protocol

Understanding the TCP/IP Model

We have already covered the OSI model, in which we learned about Layer 2 frames and Layer 3 packets and their importance when exchanging packets between two endpoints. This chapter focuses on the Transport layer (Layer 4) of the OSI model, which is responsible for transporting the data between the source and destination either via a connection-oriented or a connectionless mechanism. There are various Transport layer protocols that are used to transmit the data, including these:
  • Transmission Control Protocol (TCP)

  • User Datagram Protocol (UDP)

  • Stream Control Transmission Protocol (SCTP)

  • Reliable User Datagram Protocol (RUDP)

In this chapter, we focus primarily on the TCP and UDP modes of transmission.

In the 1970s, two Defense Advanced Research Projects Agency (DARPA) scientists, Vint Cerf and Bob Kahn, often known as the fathers of the Internet, started researching reliable data communications across packet radio networks. Building on lessons learned from the Network Control Program (NCP), the ARPANET's original host-to-host protocol, Cerf and Kahn created the Transmission Control Program. The Transmission Control Program was a huge success, and it was first standardized in 1974 in RFC 675, Specification of Internet Transmission Control Program.

Initially, the Transmission Control Program managed both routing and datagram transmission, but over time, collaborators suggested dividing the functionality into layers. In 1978, the Transmission Control Program was split into two distinct protocols, the Internet Protocol (IP) and the Transmission Control Protocol (TCP). Together, the two protocols form the Internet Protocol suite, commonly known as TCP/IP. The TCP/IP model has only four layers:
  • Application layer: This layer allows for process-to-process communication on the same host or different hosts. This layer leverages the lower layer protocols to transmit the information. The Application layer introduces different communication models such as the client/server model or peer-to-peer networking model. Some examples of Application layer protocols are HTTP, FTP, and SSH.

  • Transport layer: This layer takes care of performing host-to-host communication that is either in a local LAN segment or remote network segments separated by routers. Two of the primary protocols in the Transport layer are TCP and UDP.

  • Internet layer: This layer defines the addressing and routing structures used by the TCP/IP protocols. IP defines the addressing that will be used by the hosts and network elements in the same or different segments and provides a function for hop-by-hop routing by sending datagrams to the next hop that holds the information to the next network segment.

  • Link layer: This layer is a combination of the Data Link layer and Physical layer of the OSI model. This layer provides information about hardware addresses (MAC address) and ensures physical transmission of data.

When distributed applications or client/server applications that are separated across network segments communicate using a router, they leverage the TCP/IP model to establish the communication and exchange information. As mentioned before, the Application layer of the TCP/IP model focuses only on process-to-process communication; it leverages the underlying layers to transmit the data. Figure 4-1 illustrates how a client/server application communicates using the TCP/IP model. When the client wants to send a request to the server, it creates a protocol data unit (PDU) for the data that it wants to send to the remote server. The first level of encapsulation is provided by the Transport layer, where it is decided, based on the application requirements, whether the communication is to be established using a connectionless architecture (via UDP) or a connection-oriented architecture (via TCP). The packet is then encapsulated with an IP header and then with the protocols at the Link layer. Once the final encapsulation is completed, the packet is sent across the network segments, processed, and routed accordingly toward the destination host, where the decapsulation process runs from the Link layer all the way up to the Transport layer, after which the final PDU is received by the remote server and processed accordingly.
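As a rough illustration of this encapsulation order, the following sketch wraps a payload in minimal hand-built TCP, IPv4, and Ethernet headers. The addresses, ports, MAC values, and zeroed checksums are made up for illustration only; the resulting frame is not transmittable as-is.

```python
import struct

def encapsulate(payload: bytes, src_port: int, dst_port: int) -> bytes:
    """Illustrate the TCP/IP-model encapsulation order:
    payload -> TCP header -> IP header -> Ethernet header."""
    # Transport layer: minimal 20-byte TCP header (ports, seq/ack,
    # offset, flags, window, checksum, urgent pointer); SYN flag set,
    # checksum left at 0 for illustration.
    tcp = struct.pack("!HHIIBBHHH", src_port, dst_port, 0, 0,
                      5 << 4, 0x02, 65535, 0, 0)
    # Internet layer: minimal 20-byte IPv4 header, protocol 6 (TCP).
    total_len = 20 + len(tcp) + len(payload)
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total_len, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
    # Link layer: 14-byte Ethernet header, EtherType 0x0800 (IPv4).
    eth = struct.pack("!6s6sH", b"\xaa" * 6, b"\xbb" * 6, 0x0800)
    return eth + ip + tcp + payload

frame = encapsulate(b"hello", 49152, 22)
```

Decapsulation at the receiving host simply strips the same headers in the reverse order.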
Figure 4-1

Data flow across the TCP/IP model

The TCP/IP model is so widely used that it can be safely said that the Internet today depends on it. The TCP/IP model contains a suite of protocols that allows for host-to-host communication across multiple network segments. Figure 4-2 displays the TCP/IP models and the protocols being used at each layer.
Figure 4-2

TCP/IP protocol suite

If we take a closer look at the TCP/IP model, it is not very different from the OSI model, but it was developed to solve a different problem than the OSI model. The Application layer, Presentation layer, and Session layer of the OSI model are categorized all under the Application layer in the TCP/IP model. The Transport layer and Network layer remain the same in the TCP/IP model. The Data Link layer and Physical layer of the OSI model are categorized under the Link layer in the TCP/IP model. Apart from the number of layers and layer mapping, there are some other key differences between the two models, listed in Table 4-1.
Table 4-1

OSI Model vs. TCP/IP Model

OSI Model | TCP/IP Model
Transport layer is only connection-oriented. | Transport layer is both connection-oriented and connectionless.
Allows users to standardize routers, switches, motherboards, and other hardware. | Focuses only on establishing connections between different types of computers.
Provides a clear distinction between interfaces, services, and protocols. | Doesn't provide clear distinction points.

Problem of Ownership

Today, almost every IT-enabled organization depends on mission-critical applications to successfully run its business. Sometimes these applications are an important aspect of a company’s revenue generation model (e.g., an ecommerce portal). When those mission-critical applications stop working and the software developers are sure that it is not because of their code, they escalate the problem to network engineers or network administrators. The software developers usually own the Application layer, Presentation layer, and Session layer of the OSI model or just the Application layer of the TCP/IP model.

In 90 percent of cases, if not more, network engineers verify the routing and reachability, which fall under the Link layer and Internet layer of the TCP/IP model (Physical layer, Data Link layer, and Network layer of the OSI model), and escalate the issue back to the application team, saying it is not their problem. This is where the finger-pointing game starts between the application developers and network engineers. If you take a close look at both the OSI model and the TCP/IP model, you will realize that neither the application developers nor the network engineers demonstrate ownership for an important layer: the Transport layer. Nobody wants to go the extra mile and check the information presented at the Transport layer. Instead of posing this as a challenge, we should see it as an opportunity. Knowing how the data are transmitted and how host-to-host communication happens can quickly help isolate the problem and help both the application developers and network engineers solve it quickly.

Transmission Control Protocol

When the initial research was being done by DARPA in 1973, the focus was on developing a protocol that would ensure reliable transmission between two hosts while maintaining the integrity of the data, regardless of the amount of data being sent. DARPA and the University of Southern California collaborated and standardized the protocol specification of TCP, a protocol that provides a connection-oriented data transmission mechanism while ensuring data integrity, in RFC 793. RFC 793 was later updated by RFC 1122, RFC 3168, RFC 6093, and RFC 6528.

The current version of TCP allows two nodes or endpoints to establish a connection that enables two-way transmission of data; that is, a device can send and receive data at the same time. Each connection in TCP works in a client/server model, irrespective of which node assumes the role of server or client, and each endpoint of a connection is uniquely identified using an ordered pair of IP address and port number. This ordered pair is known as a tuple or a socket; thus, a TCP connection is often referred to as a socket connection. Now, before moving on to understanding how a TCP connection is established, let's take a closer look at the TCP header, displayed in Figure 4-3.
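The (IP address, port) pairs that identify a connection's endpoints can be inspected directly with Python's standard socket library. In this sketch, a throwaway listener on an ephemeral loopback port stands in for a real server:

```python
import socket
import threading

# "Server": a throwaway listener on an ephemeral loopback port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
srv_addr = server.getsockname()          # the server's (IP, port) tuple

accepted = []
t = threading.Thread(target=lambda: accepted.append(server.accept()[0]))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(srv_addr)
t.join()

# Each endpoint is an (IP address, port) pair; together the two pairs
# uniquely identify this socket connection.
local, remote = client.getsockname(), client.getpeername()
print(local, remote)                     # two (IP, port) tuples on loopback

client.close()
accepted[0].close()
server.close()
```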

A TCP header is at least 20 bytes long (up to 60 bytes when options are present) and consists of the following fields:
  • Source port (16-bit): Specifies the port number of the sender.

  • Destination port (16-bit): Specifies the port number of the receiver.

  • Sequence number (32-bit): Used to keep track of the data (in bytes) sent out by the host during a TCP session. For a new TCP connection, the initial sequence number sent is a random 32-bit value. The receiver uses the sequence number to reply with an acknowledgment. When it comes to troubleshooting TCP issues, protocol analyzers such as Wireshark often display relative sequence numbers starting at 0, as these are easier to follow than high-value random numbers. The sequence number is also used for validating the segments after transmission.

  • Acknowledgment number (32-bit): Used to keep track of every byte received by the receiver. An acknowledgment is sent in response to every packet that is received by the host.

  • Offset (4-bit): Specifies the length of the TCP header in 32-bit words, which tells the receiver where the actual data begins.

  • Reserved (6-bit): Reserved for future use as per RFC 793.

  • Flags (6-bit): Enables various TCP actions for data processing and communication. The TCP software will perform specific actions when one or more flags are set in the TCP header. The following are the various flags that are set in TCP:
    • Urgent Pointer (URG)

    • Acknowledgment (ACK)

    • Push (PSH)

    • Reset (RST)

    • Synchronization (SYN)

    • Finish (FIN)

  • Window size (16-bit): Specifies the number of bytes the receiver is willing to receive. Using this field, the receiver tells the sender the amount of data that it is willing to receive.

  • Checksum (16-bit): Used to verify the integrity of the TCP header and payload (computed together with an IP pseudo-header). Using this field, TCP is able to reliably detect corruption introduced during transmission.

  • Urgent pointer (16-bit): Indicates how many bytes of the segment's data, starting from the first byte, the sender considers to be urgent data. This field is only read when the URG flag is set in the Flags field.

  • Options (0–320 bits): The TCP Options field is at the end of the header and is padded with zeros so that the header ends on a 32-bit boundary. This field is used to include various TCP functions that do not belong to the general TCP header. The following is a list of various TCP functions available as part of the TCP Options field:
    • Maximum Segment Size (MSS)

    • Window Scaling

    • Selective Acknowledgments (SACK)

    • Timestamps

    • No-Operation (NOP)

Figure 4-3

TCP header
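The fixed 20-byte portion of the header can be unpacked with Python's struct module. This is a minimal parsing sketch; the sample SYN segment is hand-built, reusing the chapter's example sequence number with hypothetical port values.

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte portion of a TCP header (RFC 793 layout)."""
    (src, dst, seq, ack, off_res, flags,
     window, checksum, urgent) = struct.unpack("!HHIIBBHHH", segment[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": (off_res >> 4) * 4,   # header length in bytes
        "flags": flags & 0x3F,               # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# A hand-built SYN segment: hypothetical client port 49152 -> SSH port 22.
syn = struct.pack("!HHIIBBHHH", 49152, 22, 926587467, 0,
                  5 << 4, 0x02, 65535, 0, 0)
hdr = parse_tcp_header(syn)
```

Any options present would follow these 20 bytes, up to the length given by the Offset field.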

Figure 4-4 shows the Wireshark capture of the first packet of an SSH session. Notice that this packet displays both the raw sequence number, which is a randomly generated number, and a relative sequence number that can be used for troubleshooting purposes. This packet also has multiple TCP options such as MSS and Timestamps as part of the TCP Options field.
Figure 4-4

Wireshark capture of TCP packet

When talking about port numbers in the TCP header, the Source and Destination Port fields are 16 bits each, allowing 65,536 port numbers (0 through 65535) per transport protocol on a system. There are some well-known port numbers, too, that we use knowingly or unknowingly on a daily basis through various applications:
  • HTTP: Port 80

  • HTTPS: Port 443

  • Telnet: Port 23

  • SSH: Port 22

  • FTP: Port 21

  • DNS: Port 53

  • IMAP: Port 143

  • POP3: Port 110

Port numbers can be categorized into three types:
  • Well-known ports: These port numbers range from 0 to 1023.

  • Registered ports: These port numbers range from 1024 to 49151. They are not tightly controlled like the well-known ports, but they can be registered with IANA to avoid any duplication.

  • Dynamic ports: These ports range from 49152 to 65535 and cannot be assigned, controlled, or registered.
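The three ranges above can be captured in a small helper; a sketch:

```python
def port_category(port: int) -> str:
    """Classify a TCP/UDP port into the three IANA ranges described above."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be 0-65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic"
```

For example, `port_category(443)` returns `"well-known"`, while a typical client-side ephemeral port such as 50000 falls in the dynamic range.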

When troubleshooting TCP-related issues, such as HTTP/TCP-based IP Service Level Agreement (SLA) probes deployed on network devices that have stopped working, as a network engineer you might focus on validating the configuration or checking why the remote end has stopped responding, but it is equally important to keep an eye on the ports that are locally open on the box. It could be that even though TCP sessions get established, connections fail to terminate from time to time, and the device might run out of available ports to establish further TCP sessions.

TCP Flags

In the previous section we learned about the six flags in the TCP header. Each flag plays an important role during various phases of a TCP session. Figure 4-5 shows all the different flags in the TCP header seen in a Wireshark capture.
Figure 4-5

TCP flags in Wireshark capture

Each TCP flag is set to perform the following actions:
  • URG: The Urgent flag is set to signal the TCP application that the payload data must be processed urgently up to the set pointer in the Urgent Pointer field. Note that the Urgent Pointer field is relevant only when the URG flag is set in the TCP header.

  • ACK: The Acknowledgment flag is set in combination with the Acknowledgment Number field. This flag indicates the acknowledgment by the receiver that it has received the TCP packets that were previously sent.

  • PSH: The Transport layer, by default, stores the application data in a buffer for some time so that it can transmit data equal to the MSS size to ensure faster convergence and better performance in the network for TCP applications. Such behavior is not desirable, though, for certain applications, such as chat applications. Similar issues apply on the receiving end as well. The PSH flag in the TCP header solves this problem by telling the TCP software to immediately send the payload to the Network layer as soon as it receives the payload from the Application layer. In simple words, it tells the receiver and sender to immediately process the packets instead of buffering them.

  • RST: If the TCP software identifies an error during transmission, it sends an RST flag to reset the connection.

  • SYN: The SYN flag is the first step to initiate a TCP connection via the three-way handshake process.

  • FIN: The FIN flag signals the receiver that the sender has finished sending data and is ending the transmission.
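The six flags occupy individual bits of the header's 6-bit flags field. A small sketch of decoding them, using the bit values from RFC 793:

```python
# Bit positions of the six original TCP flags, low bit first (FIN .. URG).
TCP_FLAGS = [(0x01, "FIN"), (0x02, "SYN"), (0x04, "RST"),
             (0x08, "PSH"), (0x10, "ACK"), (0x20, "URG")]

def decode_flags(flags: int) -> list:
    """Return the names of the flags set in a TCP header's flags field."""
    return [name for bit, name in TCP_FLAGS if flags & bit]
```

For instance, a SYN-ACK segment carries the flags value 0x12, which decodes to SYN and ACK.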

TCP Three-Way Handshake

The TCP three-way handshake is a three-step process that is required to establish a reliable TCP connection between a client and a server.
  1. SYN: In the first step, the client initiates a TCP connection toward the remote server. When it does that, the SYN flag is set to 1 in the TCP header and a random initial sequence number is chosen for this TCP connection. In this case, it is 926587467, as shown in Figure 4-6. Because this is the first packet, the ACK flag is set to 0. Other fields are also set in the TCP header, such as the Window Size field and the MSS TCP option.
Figure 4-6

TCP SYN Wireshark capture

  2. SYN-ACK: When the server receives the SYN packet for the TCP connection, it responds with an acknowledgment by setting the ACK flag bit to 1. When the ACK flag is set, the Acknowledgment Number field is set to one more than the sequence number received in the SYN packet; in this case, the Acknowledgment Number field will have the value 926587468. Also, because TCP allows for two-way communication, the server sets the SYN flag to 1 as well and chooses its own random sequence number for the TCP header. Note that the sequence number used in the SYN-ACK packet will be different from the one received from the client. In this case, it is set to 4227540456. Figure 4-7 displays the TCP SYN-ACK packet from server to client.
Figure 4-7

TCP SYN-ACK Wireshark capture

  3. ACK: On receiving the SYN-ACK from the server, the client now must respond with an acknowledgment. For that, the client sends another TCP packet to the server with the ACK flag set and the Acknowledgment Number field set to the server's sequence number plus 1. In this packet the SYN flag is set to 0. Figure 4-8 displays the ACK from the client to the server with the Acknowledgment Number field set to 4227540457. Note that after the ACK is received by the server, the minimum of the client's and server's MSS values is used for data transmission.
Figure 4-8

TCP ACK Wireshark capture
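The sequence and acknowledgment arithmetic of the three steps above can be modeled in a few lines. This sketch takes the two initial sequence numbers as inputs (here, the chapter's example values) and produces the three packets of the handshake:

```python
def three_way_handshake(client_isn: int, server_isn: int) -> list:
    """Model the seq/ack numbers exchanged in a TCP three-way handshake."""
    M = 2 ** 32                                   # sequence space wraps mod 2^32
    syn = {"flags": "SYN", "seq": client_isn, "ack": 0}
    syn_ack = {"flags": "SYN-ACK", "seq": server_isn,
               "ack": (client_isn + 1) % M}       # acknowledge the client's SYN
    ack = {"flags": "ACK", "seq": (client_isn + 1) % M,
           "ack": (server_isn + 1) % M}           # acknowledge the server's SYN
    return [syn, syn_ack, ack]

# The initial sequence numbers from the chapter's SSH capture example.
pkts = three_way_handshake(926587467, 4227540456)
```

Note how the acknowledgment numbers 926587468 and 4227540457 match the values seen in Figures 4-7 and 4-8.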

Because we are looking at an example of an SSH session between the server and client, after a TCP session has been established, all the SSH protocol exchanges are performed. If you look at the packets that are exchanged by the SSH protocol, you will notice that they mostly have the PSH flag set, which indicates that the SSH protocol is telling the TCP software not to buffer the data and transmit the packet to the remote end. Figure 4-9 displays the Wireshark capture of an SSH control packet with the PSH flag set in the TCP header.
Figure 4-9

SSH packet with PSH flag

Like the connection initiation process, there is a handshake process for connection termination; it is a four-way exchange (FIN, ACK, FIN, ACK) that is commonly collapsed into three packets when the server combines its ACK and FIN. The client and the server exchange the following TCP packets when they wish to close the connection:
  • FIN: When the client wishes to terminate the connection, it sends a TCP packet with the FIN flag set to 1, using its current sequence number. Note that at this point, the SYN flag is set to 0. If the client also needs to acknowledge a previously received TCP packet, the ACK flag can be set to 1 along with the acknowledgment number, but that ACK has no relation to the FIN flag set on the packet; TCP does this to reduce the number of packets being exchanged. Figure 4-10 shows the Wireshark capture of the TCP FIN packet sent when the SSH connection is terminated by the client typing the exit command on the terminal. Notice that in this packet the ACK flag is set as well, but this acknowledgment is for another sequence number that the client received.

Figure 4-10

TCP FIN Wireshark capture

  • FIN-ACK: On receiving the client's termination request with TCP FIN, the server acknowledges the request by replying to the client with an ACK. The server also sets the FIN flag to 1 and sends the packet with its own current sequence number, which is different from that of the received FIN. Once this step is completed, the connection is terminated from the client to the server side. Figure 4-11 shows the Wireshark capture of a TCP FIN-ACK packet sent by the server toward the client in response to the FIN request received from the client.

Figure 4-11

TCP FIN-ACK Wireshark capture

  • ACK: This is the last step, where the client sends the acknowledgment back to the server for the FIN received from the server. It sets the ACK flag to 1 and sets the Acknowledgment Number value to the sequence number of the server's FIN plus 1. After this step is completed, the connection is terminated from the server to the client side. Figure 4-12 displays the Wireshark capture of the TCP ACK packet sent by the client to the server in response to the FIN packet it received from the server in the previous step. Note that the Sequence Number and Acknowledgment Number fields work in the same manner as they did during the session initiation process.

Figure 4-12

TCP ACK Wireshark capture

Every TCP connection goes through different connection states during its session lifetime. The following are the possible TCP connection states:
  • LISTEN: A TCP application is awaiting an inbound connection request.

  • SYN-SENT: A connection request has been sent but no acknowledgment has been received from the remote end.

  • SYN-RECEIVED: A connection request has been received and a SYN-ACK has been sent in response, but the host is still awaiting the final acknowledgment of its own connection request.

  • ESTABLISHED: All SYN and ACK have been received and the connection has now been established. Both end hosts can start sharing the data.

  • FIN_WAIT_1: A session or connection termination request has been sent, but no acknowledgment has been received.

  • FIN_WAIT_2: An acknowledgment has been received from the remote host, but no corresponding termination request has been received from the remote host.

  • CLOSING: A session termination request has been sent and a corresponding session termination request has been received and acknowledged but no acknowledgment has been received from the remote host for the original session termination request.

  • CLOSE_WAIT: A session termination request was received and acknowledged but no corresponding session termination request has been sent out yet.

  • TIME_WAIT: The host waits for a reasonable amount of time to ensure the remote host receives the final acknowledgment of a session termination request.

  • LAST_ACK: Host awaits a final acknowledgment after sending an end of connection message in response to having received a session termination request.
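The states above can be sketched as a small transition table. This is a partial, illustrative subset of the full TCP finite state machine, covering an active open followed by an active close; the event labels are informal, not taken from any specification:

```python
# Partial TCP state machine: (current state, event) -> next state.
TRANSITIONS = {
    ("CLOSED",      "active_open/send SYN"):  "SYN-SENT",
    ("SYN-SENT",    "recv SYN-ACK/send ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close/send FIN"):        "FIN_WAIT_1",
    ("FIN_WAIT_1",  "recv ACK"):              "FIN_WAIT_2",
    ("FIN_WAIT_2",  "recv FIN/send ACK"):     "TIME_WAIT",
    ("TIME_WAIT",   "timeout"):               "CLOSED",
}

def run(state: str, events: list) -> str:
    """Walk the state machine from a starting state through a list of events."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

# A client performing an active open, then an active close.
final = run("CLOSED", ["active_open/send SYN", "recv SYN-ACK/send ACK",
                       "close/send FIN", "recv ACK", "recv FIN/send ACK",
                       "timeout"])
```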

At times it gets hard to remember the state transitions and the corresponding TCP flags set during those state transitions. To remember this, you can simply follow the TCP finite state machine as shown in Figure 4-13.
Figure 4-13

TCP finite state machine

Port Scanning

Many network and security analysts perform port scanning to find out about network and host vulnerabilities and services running on the network that can be exploited. Port scanning is a technique of determining which ports in a network or host are open to send or receive traffic. An open port indicates that a service such as HTTP/HTTPS or FTP is offered on the destination network or host. If attackers know what services are offered, they might be able to use other tools to identify security vulnerabilities to exploit those services.

Nmap is a freely available scanner that runs on most operating systems (including Linux, UNIX-like systems, and Windows) and has options for various port scanning techniques. It also has options to detect any scans that might be running on the network. Some of the port scanning techniques are listed here:
  • Connect requests: In this technique an active connection is attempted using the three-way handshake. If the port is open, the three-way handshake is completed, and the scanner gracefully closes the connection by sending an active close request. If the port is closed, the destination responds back with an RST flag set. Note that this is not a safe scanning method, as these connection attempts are logged on the target host.

  • Half-open scan: In this technique, the three-way handshake is not completed and thus the name half-open scan. A SYN is sent by the scanner and it waits for a response. If the target port is open, it returns a SYN-ACK and the connection will be immediately torn down by the scanning host because it did not issue the connection request. Because the handshake never completed, the target host might not log these TCP SYN packet scans.

  • Non-SYN-ACK-RST scans: As per RFC 793, segments containing an RST flag are always discarded, and segments containing an ACK that do not belong to an existing connection always generate an RST in response. So, non-SYN packets that contain neither an RST nor an ACK (e.g., NULL or FIN probes) can be used for port scanning: a closed port replies with an RST, whereas an open port stays silent. Note that this method of port scanning is only useful if the target host or network follows the RFC specifications. OSs that do not follow the RFC send RSTs from both open and closed ports, making it difficult for scanners to return accurate results.
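A minimal connect-request scan of a single port can be sketched with Python's standard socket module. Here it is demonstrated against a throwaway loopback listener rather than a real target; never scan hosts you are not authorized to test.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Connect-request scan of one port: attempt a full TCP handshake.
    connect_ex returns 0 when the handshake succeeds (port open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demonstrate against a throwaway listener on loopback.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(is_port_open("127.0.0.1", open_port))   # True: listener completes the handshake
listener.close()
print(is_port_open("127.0.0.1", open_port))   # False: closed port answers with RST
```

As the section notes, such full-handshake probes are logged by the target, which is why half-open (SYN) scans are preferred for stealth.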

As part of network security best practices, it is equally important to detect any impossible packet types that might have the following TCP flag combinations:
  • SYN RST

  • SYN FIN

  • RST FIN

  • FIN

  • No flags

Network operators can perform filtering of various types of flags in Wireshark using the following filters:
  • SYN Flag set: tcp && tcp.flags == 0x02

  • ACK flag set: tcp && tcp.flags == 0x10

  • RST flag set: tcp && tcp.flags == 0x04

  • FIN flag set: tcp && tcp.flags == 0x01

  • No flags set (null scan): tcp && tcp.flags == 0x00
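Because tcp.flags is just a bit field, these filter strings can be generated from flag names. A small sketch:

```python
# Bit values of the six classic TCP flags in Wireshark's tcp.flags field.
FLAG_BITS = {"FIN": 0x01, "SYN": 0x02, "RST": 0x04,
             "PSH": 0x08, "ACK": 0x10, "URG": 0x20}

def flags_filter(flag_names):
    """Build a Wireshark display filter matching exactly the given flags
    (an empty list yields the null-scan filter, tcp.flags == 0x00)."""
    value = 0
    for name in flag_names:
        value |= FLAG_BITS[name]
    return "tcp && tcp.flags == 0x%02x" % value
```

For example, `flags_filter(["SYN", "FIN"])` produces a filter for one of the impossible flag combinations listed above.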

Investigating Packet Loss

Packet loss in a network can happen for two main reasons:
  • Link errors/Layer 2 errors

  • Network congestion

Most of the time, once a network is set up, it runs smoothly. It tends to demonstrate transient or complete packet loss only when hardware fails or a link has issues. Detecting hardware failures is not very complex, as multiple links and protocols running on the network hardware or host will start showing symptoms, and the failure can be fixed by replacing the device or the particular part that is causing the symptoms. When it comes to link issues, there could be several things to troubleshoot, some within our control and some outside it. With a link, the issue could be unidirectional failures, Small Form-Factor Pluggables (SFPs), fiber or Ethernet cables, duplex mismatches, a telco provider in the middle, and so on. The challenge with link issues is that even though the link might have errors, it will still forward some traffic and drop the rest, so network operators might not even know unless there is a notification of an event or a complaint from an end customer. With link issues, the transmitted data may also get corrupted and eventually be dropped. In most cases, an error counter on the network or host interface will increment to indicate an issue with the link, which then helps to identify and resolve the problem.

Traffic congestion, on the other hand, can cause a great deal of service disruption and is seen especially when transitioning between link speeds within the network (e.g., from 10 Gbps to 1 Gbps). If the higher speed link sends traffic at a rate the egress interface cannot keep up with, the device will start dropping packets; such drops are reported as discards. In such cases, with TCP, the sender determines that the loss occurred in transit and retransmits the packets. Because TCP is a reliable, connection-oriented protocol, it provides a mechanism to track the data that have been sent and to receive an acknowledgment of what has been received. If the matching ACK for any given segment is not received, the TCP software resends the data, assuming the segment has gone missing, thereby ensuring reliable transmission. You might wonder why, after so much progress and innovation in the field of networking and the development of 100 Gbps fiber links, we still have to deal with issues such as network congestion.

TCP Retransmission

As we already know, for every byte of data sent across a TCP connection, there is an associated sequence number. When a sender sends a TCP segment, it starts a retransmission timer of variable length. Let's assume that the TCP segment gets lost in transit before reaching the receiver. Because the packet was lost in transit, the receiver never sends the ACK back to the sender. After the retransmission timer expires, the sender assumes that the segment has been lost and retransmits it. Figure 4-14 demonstrates segment loss and data retransmission between server and client. So, if a Wireshark capture shows a lot of retransmitted TCP segments, it simply means that there is packet loss in the network.
Figure 4-14

TCP retransmission

To analyze retransmissions in a network, network operators might have to place multiple taps in the network. For instance, examine the simple topology shown in Figure 4-15. In this topology host H1 (IP 10.1.2.1) sitting behind router R1 is trying to send traffic to host H3 (IP 3.3.3.3), which is sitting behind router R3.
Figure 4-15

Topology

If there is packet loss happening in the network segment between R2 and R3, you will notice in the Wireshark capture that there are multiple retransmissions between the source and destination. In Figure 4-16, the sender retransmitted the segment for which it did not receive an acknowledgment. In this case the sequence number of that segment was 3546854380 (relative sequence number 213) and the acknowledgment number was 597044005 (relative acknowledgment number 501).
Figure 4-16

Wireshark capture with TCP retransmissions

TCP Out-of-Order Packets

In networks, users might also encounter TCP out-of-order (OOO) packets. TCP OOO packets simply mean that the packets arrive at the destination in a different order from the one in which they were sent. This can happen for several reasons:
  • Multiple paths: If the TCP segments follow multiple paths (ECMP paths to the destination) or parallel processing paths within a router or other network equipment (e.g., per-packet load balancing), and the systems are not designed to preserve packet ordering, this can lead to OOO packets in the network. Note that it is then the receiving TCP software's job to restore the correct order.

  • QoS: Poorly configured QoS, especially a queueing mechanism, can cause OOO packets in the network. If the QoS settings do not forward the packet in a first in, first out (FIFO) manner or if the QoS settings drop the TCP packets along the path, this could lead to retransmission of those dropped TCP segments and eventually to OOO packets.

  • Oversubscription: Oversubscribed links in the network can cause OOO packets. The traffic will end up getting dropped, causing retransmission, slowdowns, and OOO packets.

  • Microbursts: A microburst is a short, rapid burst of packets that briefly drives a link at full line rate. Such bursts can overflow interface buffers, causing packets to be dropped, which in turn leads to retransmissions, slowness, and OOO packets.

When OOO segments are received by the TCP software, one of the main functions that it performs is reassembling packets in order or requesting retransmission of OOO packets. If the Wireshark capture shows that there are OOO packets, then as part of the troubleshooting process you might want to look at the possible causes listed earlier.
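The reassembly step can be sketched as a buffer keyed by sequence number that releases bytes only once they are contiguous. The segment tuples below are hypothetical (sequence number, payload) pairs arriving out of order:

```python
def reassemble(segments, initial_seq):
    """Buffer out-of-order (seq, payload) segments and release the bytes
    that are contiguous from initial_seq onward. Returns the in-order
    stream and the next expected sequence number."""
    buffer = {seq: data for seq, data in segments}
    next_seq, stream = initial_seq, b""
    while next_seq in buffer:
        data = buffer.pop(next_seq)
        stream += data
        next_seq += len(data)       # each byte consumes one sequence number
    return stream, next_seq

# Segments arriving out of order (e.g., due to ECMP reordering).
arrived = [(103, b"DEF"), (100, b"ABC"), (109, b"JKL"), (106, b"GHI")]
data, nxt = reassemble(arrived, 100)
```

If a segment were missing (a gap in the sequence space), reassembly would stall at the gap; real TCP stacks then rely on duplicate ACKs or SACK to trigger retransmission of the missing bytes.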

Tip

If you see a lot of TCP OOO packets, there is packet loss between the capture point and the sender. If you see a lot of TCP retransmissions, though, there is packet loss happening between the capture point and the receiver.

Troubleshooting with Wireshark Graphs

When troubleshooting TCP or any network issues in a large-scale environment, where there is a large amount of data to be analyzed, it becomes challenging and time consuming to identify the root cause. In such a scenario, a quick peek at graphical data can give us a better understanding of what is happening in the network. Wireshark provides numerous graph options that can be used for investigating various types of issues. Some graphs in Wireshark are specific to TCP and can come in handy for day-to-day network analysis and troubleshooting tasks. This section focuses on some of the graphs that can be used.

TCP Stream Graphs

TCP Stream Graphs provide visual insights into TCP streams. The Wireshark tool gives users the option to select between all packets and TCP-only packets. The graphs are part of the Wireshark profile and can also be imported from another profile. Within TCP Stream Graphs, there are different graph and analysis options that network analysts can use:
  • Time Sequence (Stevens)

  • Time Sequence (tcptrace)

  • Throughput

  • Round Trip Time

  • Window Scaling

All these options can be accessed in Wireshark on the Statistics menu by selecting TCP Stream Graphs, as shown in Figure 4-17.
Figure 4-17

TCP Stream Graphs options

Time Sequence (Stevens)

This TCP stream graph shows TCP sequence numbers plotted against time in a single direction. You can switch between the directions of the TCP stream, but only one direction can be analyzed at any given point in time. If the captured traffic is only TCP traffic, you can simply select Time Sequence (Stevens) from the menu. This displays the graph of sequence numbers vs. time in seconds, as shown in Figure 4-18.
Figure 4-18

TCP Stream Graphs: Time Sequence (Stevens)

In an ideal situation you want to see a smooth line from the bottom left corner to the top right corner of the graph. Notice that the graph in Figure 4-18 is mostly incremental and smooth, but it contains flat periods. The flat periods are bad: they indicate that the sequence number in that direction is not increasing. When you click on these flat periods, you will notice TCP errors during those periods in the Wireshark capture. In this case, clicking one of the flat periods reveals the TCP ZeroWindow error shown in Figure 4-19, which indicates that the window size in the TCP header is 0. A TCP window size of 0 usually means one endpoint (most often the client, but it can be the server) has advertised a window of 0, indicating that its TCP receive buffer is full and it cannot receive any more data.
Figure 4-19

TCP ZeroWindow Error seen in Stevens Time Sequence graphs

If there are dips in the graph, it would usually indicate TCP retransmissions or OOO packets. Figure 4-20 displays the dip in graphs indicating TCP retransmissions as well as OOO packets in the network.
Figure 4-20

TCP retransmissions and OOO packets in Stevens graph

Time Sequence (tcptrace)

The tcptrace Time Sequence graph is similar to the Stevens graph, but on steroids. It shows the bytes in flight as well as the receive window information, which is highlighted. This graph also shows other information such as acknowledgments and selective acknowledgments (SACKs) received. Figure 4-21 displays the tcptrace Time Sequence graph. Notice the green line in the graph; this indicates the receive window (rwnd) received from the destination host. The blue sections or blue dots in the graph indicate the packets in transit. The red lines in the graph indicate SACKs.
Figure 4-21

Tcptrace Time Sequence graph

When we further expand the graph as seen in Figure 4-22, notice the brown lines in the graph. These brown lines indicate acknowledgments received from the receiver end. The red lines in the graph indicate SACKs.
Figure 4-22

ACKs and SACKs in tcptrace Time Sequence graphs

When looking at these graphs, there are two things that we do not want to see:
  1. The bytes in flight (blue lines or dots) touching the receive window (the green line).

  2. Steps, which denote that the sender is not sending data fast enough, or that there is a receive window size issue.
Figure 4-23 shows a fairly big step in the graph. When a user clicks on that step, they can see TCP ZeroWindow errors in the Wireshark capture, indicating that the receiver's TCP receive buffer is full and it cannot process any further packets at the moment.
Figure 4-23

Steps in tcptrace Time Sequence graphs

Throughput Graph

The Throughput graph is very useful during throughput testing in a greenfield deployment or during migration testing in the network. This graph shows the segment length (packet size) and the average throughput in bits per second over time. It also has options to show both the throughput and the goodput in the graph. Figure 4-24 shows the Throughput graph. Notice that in this graph the segment length is stable during the capture, but there is also a gap in the segment length section, which indicates that the sender is not sending anything.
Figure 4-24

Throughput graph

Note

In computer networks, goodput is the application-level throughput of a communication: the rate of useful data delivered to the application, excluding protocol overhead and retransmitted packets.

If the graph shows sporadic segments (dotted lines), the device is sending sporadically, as shown in Figure 4-25, and this usually indicates that there is packet loss in the path. If users click those sporadic segments, they might see TCP retransmissions or OOO packets.
Figure 4-25

Sporadic segments

Window Scaling Graph

The Window Scaling graph can be very useful when troubleshooting TCP window issues. These issues usually occur when one end is sending more traffic than the other end can handle, or the receiving end has no buffer left in the TCP receive window (as seen in some previous examples). This graph, displayed in Figure 4-26, shows the TCP receive window (in green) vs. bytes in flight (in blue). Note that in an ideal situation, the bytes in flight should never be more than the receive window size.
Figure 4-26

Window Scaling graph
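The rule that bytes in flight should stay within the receive window is easy to sketch numerically. The helper names below are illustrative, not a real TCP implementation:

```python
def bytes_in_flight(next_seq, last_ack):
    """Bytes sent but not yet acknowledged: the difference between the
    highest sequence number transmitted and the highest cumulative ACK
    received from the peer."""
    return next_seq - last_ack

def can_send(next_seq, last_ack, rwnd, payload):
    """A well-behaved sender keeps bytes in flight within the
    receiver's advertised window (rwnd)."""
    return bytes_in_flight(next_seq, last_ack) + payload <= rwnd

print(bytes_in_flight(10_000, 7_000))         # -> 3000
print(can_send(10_000, 7_000, 4_000, 1_460))  # -> False (3000 + 1460 > 4000)
```

In the Window Scaling graph, the moment `can_send` would return False is exactly when the blue line presses against the green one.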

Zooming further into the graph, flat lines (steps) usually represent the round trip time (RTT). The RTT is the difference between the time a packet was sent and the time an ACK was received for that packet. Figure 4-27 displays the flat lines in the Window Scaling graph indicating the RTT.
Figure 4-27

RTT in Window Scaling graph

Note that if the bytes in flight (the blue dots and line) start from the bottom (i.e., at 0), it indicates that all the previous segments that were transmitted have been acknowledged and there are no packets in flight. If the bytes in flight start above the 0 value (baseline) it indicates that there are segments and bytes that have not yet been acknowledged. Figure 4-28 shows the bytes in flight starting above the baseline.
Figure 4-28

Unacknowledged bytes in flight in Window Scaling graph

RTT Graph

If there is jitter in the network, you might want to leverage the TCP stream RTT graph. The RTT graph measures the RTT of all TCP packets. If the graph shows big spikes, it usually indicates either packet loss or congestion in the network. Figure 4-29 shows the RTT graph for all the captured packets in Wireshark. Notice that the graph initially shows very large spikes, but the later part shows consistent RTT. This indicates that initially there was either congestion or packet loss that increased the RTT, but after it was resolved the RTT was fairly stable.
Figure 4-29

RTT graph
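The RTT measurement behind this graph can be sketched as follows. This is a deliberate simplification (real stacks use Karn's algorithm and the TCP timestamp option to discard ambiguous samples from retransmitted segments); it just pairs each cumulative ACK with the send time of the last segment it fully acknowledges:

```python
def rtt_samples(sent, acks):
    """Return RTT samples in seconds.
    sent: list of (send_time, last_seq_of_segment), in time order
    acks: list of (ack_time, ack_number)"""
    samples = []
    for ack_time, ack_no in acks:
        # the most recent segment fully covered by this cumulative ACK
        covered = [t for t, end in sent if end <= ack_no]
        if covered:
            samples.append(ack_time - covered[-1])
    return samples

sent = [(0.000, 1460), (0.001, 2920)]
acks = [(0.030, 1460), (0.032, 2920)]
print([round(s, 3) for s in rtt_samples(sent, acks)])  # -> [0.03, 0.031]
```

Large spikes in the graph correspond to samples where the ACK arrived much later than the matching send time.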

I/O Graphs

The I/O graphs provide a customizable set of graphs that let users compare different types of traffic and quickly correlate network events with application traffic based on errors seen in the network. Users can customize which graphs they wish to see simultaneously, which makes it easier to correlate the data with network events. For instance, using the I/O graphs you might observe a dip in the HTTP and HTTPS requests coming in to a web server while comparing it with any TCP errors seen at that moment. Figure 4-30 displays the traffic pattern of both HTTP and HTTPS traffic from the Wireshark capture. The green line highlights the HTTP traffic, whereas the red line indicates the HTTPS traffic. Note that the HTTP and HTTPS graph filters are not present by default. They can be added by clicking the + icon, assigning the graph a name, and, under Display Filters, setting the filter to tcp.port == 80 for HTTP or tcp.port == 443 for HTTPS. Once the filters are set, users can customize these graphs with the colors of their choice.
Figure 4-30

HTTP vs. HTTPS traffic in I/O graphs

If we look at another Wireshark capture where we only have HTTP traffic, but a lot of TCP errors, we can easily correlate the dip in the traffic with a high number of TCP errors. In Figure 4-31, the traffic pattern of HTTP traffic is shown along with TCP errors. When there are major dips seen in the HTTP traffic, we can also see the spike in the TCP errors. When looking at the Wireshark capture around that time, we will be able to determine that there was packet loss happening around that time.
Figure 4-31

HTTP traffic and TCP errors in I/O graphs

The I/O graphs can also be used to analyze microbursts happening in the network. The I/O graphs give options to plot the graph not just at a 1-s time interval, but down to the millisecond level (performance could be affected based on the amount of data being analyzed in the Wireshark capture). Figure 4-32 shows the I/O graphs adjusted to a 100-ms time interval. The graph in this scenario shows spikes from time to time. These spikes might not be a concern here because not many packets are being sent within each 100-ms interval, but if many more packets were sent during a 100-ms interval, it would be a concern.
Figure 4-32

Microbursts in I/O graphs
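The same microburst view can be reproduced from raw packet timestamps by bucketing them into 100-ms intervals, which is roughly what Wireshark's I/O graph does when you lower the interval. The threshold here is an arbitrary illustration; a real threshold depends on link speed and buffer depth:

```python
from collections import Counter

def microbursts(timestamps, interval=0.1, threshold=100):
    """Count packets per time bucket (default 100 ms) and return the
    buckets whose count exceeds the chosen threshold."""
    counts = Counter(int(t / interval) for t in timestamps)
    return {bucket: n for bucket, n in counts.items() if n > threshold}

# 150 packets crammed into the first 10 ms, then two stray packets
ts = [i * 0.0000667 for i in range(150)] + [1.5, 1.6]
print(microbursts(ts))  # -> {0: 150}
```

A bucket with a count far above its neighbors is the numerical signature of the spikes visible in the graph.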

Flow Graphs

When troubleshooting TCP-related network problems, it is often necessary to track the flow: the three-way handshake, data and acknowledgments, and so on. Just looking at the Wireshark packet list, it might be difficult to identify the flow unless you use the option to follow the TCP stream, which gives you the complete flow for that packet. However, it might still be difficult to understand the direction of each packet, as you will have to keep track of the source and destination IP addresses. This challenge is solved by another Wireshark feature known as Flow graphs. Flow graphs provide a graphical representation of all the TCP flows from the Wireshark capture and help you visualize each TCP flow along with its direction. Figure 4-33 shows the Flow graph of the TCP packets from the Wireshark capture.
Figure 4-33

Flow graph

One of the benefits of using Flow graphs is that they preserve the colors from the Wireshark profile and allow you to apply filters. The Flow graph comes in very handy when troubleshooting VoIP-related issues. It shows all the conversations related to DNS, TCP, HTTP, and so on, for the specified traffic. These are a few of the most common use cases of Flow graphs:
  • Tracking any malicious host or application that is trying to access multiple servers on the network.

  • Tracking TCP retransmissions

  • Tracking connection resets

To apply filters in Flow graphs, simply apply a display filter in the Wireshark tool using filter expressions and then select the Limit To Display Filter check box. When this check box is selected, the Flow graph automatically changes to show only the flow targeted by the display filter. Figure 4-34 shows the Flow graph of an HTTP flow that has been filtered with a Wireshark display filter. Next to the Limit To Display Filter check box, there is also a drop-down list that allows users to further trim the visual Flow graph down to particular types of flows, such as ICMP, TCP, and so on.
Figure 4-34

Flow graph filter

TCP Expert

When working on a complex problem, you must know the right filters, use the right options, and have your own profile in Wireshark for different protocols to be able to analyze and identify the problem as quickly as possible. Knowing and using various display filters for troubleshooting different types of TCP issues can save you a lot of time. Table 4-2 displays a list of common TCP-based display filters and what they do.
Table 4-2

TCP Display Filters and Their Functions

Display Filter

Function

tcp.flags == 0x2
tcp.flags.syn == 1

Capture all TCP SYN packets

tcp.flags.reset == 1

Capture TCP Resets

(tcp.flags == 0x10) && (tcp.seq == 1) && (tcp.ack == 1)

Capture only third packet of the TCP three-way handshake

tcp.time_delta > 1

Filter TCP delays greater than t seconds; in this example, t = 1

tcp.time_delta > 1 && tcp.flags.fin == 0 && tcp.flags.reset == 0

Identifying TCP delays but ignoring delays from the TCP connection termination process (during the connection termination process, TCP FIN is sent to the remote end, or the TCP reset flag is set)

tcp.window_size >= 0 && tcp.window_size < 500

Identifying small TCP window sizes

tcp.analysis.out_of_order

Filtering TCP OOO packets

(tcp.flags.syn == 1) && (tcp.len > 0)

Filtering TCP SYN or SYN-ACK packets that contain data
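The hexadecimal values in the flag-based filters above come directly from the bit positions of the TCP flags field. A quick sketch of how they combine:

```python
# TCP flag bit positions in the low byte of the flags field
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

print(hex(SYN))        # -> 0x2   pure SYN: matches tcp.flags == 0x2
print(hex(SYN | ACK))  # -> 0x12  SYN-ACK (second handshake packet)
print(hex(ACK))        # -> 0x10  pure ACK: matches tcp.flags == 0x10
```

This is why tcp.flags == 0x12 is another common filter for catching the second packet of the handshake.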

Wireshark Profile for TCP

Wireshark allows users to create custom profiles that can come in very handy based on the type of issue being investigated. Every Wireshark installation comes with a Default profile that has the following columns:
  • No.

  • Time

  • Source

  • Destination

  • Protocol

  • Length

  • Info

The Default profile is good for beginners and yields a lot of useful information, but troubleshooting TCP issues is a complex process and requires more specific fields related to TCP to quickly analyze TCP packets. To create a new profile, follow these steps:
  1. Right-click Profile: Default at the bottom right corner of the Wireshark application (Figure 4-35).

  2. Select the New option. This opens the profile management dialog.

  3. Create a profile named TCP and click OK.

  4. Right-click again at the bottom right corner of the Wireshark application and select the TCP profile from the Switch To submenu.

  5. Once selected, the new TCP profile becomes your active profile. Note that at this point, the new profile has the same columns and settings as the Default profile.
To change the settings of the new TCP profile, select Preferences from the Wireshark menu, go to Appearance | Columns, and add the following columns with the types and settings shown in Figure 4-36.
  • No.

  • Time

  • Delta

  • Source

  • Destination

  • TCP Delta

  • SEQ

  • ACK

  • Window

  • Bytes in flight

  • Info

Figure 4-35

Selecting the Default profile

Once the columns are added, the TCP profile UI yields more granular information about TCP, as shown in Figure 4-36. You can see how easy it becomes to spot packets with a window size of 0. Not just for TCP, but in general, network and security analysts should always create custom profiles with custom fields in their UI based on their style of troubleshooting.
Figure 4-36

Custom columns for TCP profile

Most of the fields in the Columns list are self-explanatory. The only field that needs some explanation is the TCP Delta field. The TCP Delta is simply the time since the previous frame in the same TCP stream. This field helps identify whether there have been delays in the network that are, in turn, delaying the TCP stream. The value appears in the [Timestamps] section that Wireshark generates under the TCP header details; it is not a field carried on the wire. Don't be alarmed by every delta you see, though. Some delays are normal, such as these:
  • SYN packets: There might be a delay before the initial SYN packet. For instance, once the Wireshark capture is started, you might ask the user to connect to a web server. There will be a delay in such a case before the first packet is seen on the wire.

  • Connection termination packets: TCP connection termination packets are basically FIN, FIN-ACK, RST, and RST-ACK. These packets are explicitly sent to close or terminate a connection. These packets could be sent when a user opens a new tab on the browser, or the session gets automatically closed after a page is loaded.

  • GET requests: GET requests can be generated in HTTP when a user clicks a link to open a new page or to request new data from the back end of the web application. Some GET requests are instant, but there might be GET requests initiated by background processes that might not have any priority, for instance, a GET request for .ico files.

  • DNS queries: DNS queries during a web browsing session are common and could lead to unexpected delays in response.

  • Image files: When a browser application requests image files or .ico files, there might be delays for such requests based on the web server settings or file size of the image.
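The TCP Delta computation itself is straightforward to sketch: for each packet, subtract the timestamp of the previous packet in the same stream. A minimal version, assuming packets are already tagged with a stream ID:

```python
def tcp_deltas(packets):
    """Time since the previous frame in the same TCP stream -- the
    value Wireshark shows as tcp.time_delta.
    packets: list of (stream_id, timestamp) in capture order."""
    last_seen = {}
    deltas = []
    for stream, ts in packets:
        # the first frame of a stream has no predecessor, so delta is 0
        deltas.append(ts - last_seen.get(stream, ts))
        last_seen[stream] = ts
    return deltas

pkts = [(0, 0.00), (1, 0.01), (0, 0.25), (1, 0.02), (0, 0.26)]
print([round(d, 2) for d in tcp_deltas(pkts)])  # -> [0.0, 0.0, 0.25, 0.01, 0.01]
```

Note how stream 0's large delta (0.25 s) stands out even though the packets of the two streams are interleaved in capture order.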

User Datagram Protocol

Unlike TCP, UDP is a lightweight, connectionless protocol used to transfer data across the network. UDP differs from TCP in several ways:
  • No handshake mechanism

  • No session teardown

  • Smaller header size

  • Unreliable data delivery

  • No mechanism to manage OOO packets

  • No protection from data duplication

UDP as a transport protocol is thus useful in scenarios where error checking and correction mechanisms are either unnecessary or are performed by the end applications. UDP was designed by David P. Reed in 1980 and was standardized in RFC 768. It is a simple, message-oriented Transport layer protocol whose header consists of four fields of 2 bytes each, as shown in Figure 4-37. The UDP header is always 8 bytes in length, as it does not have an Options field.
  • Source port: Identifies the sender’s port number.

  • Destination port: Identifies the receiver’s port number.

  • Length: Specifies the length in bytes of the UDP header and payload; minimum length is 8 bytes.

  • Checksum: Ensures the integrity of the header and data. It is computed by summing 16-bit words using 1's complement arithmetic over a pseudo-header derived from the IP header, the UDP header, and the payload.

Figure 4-37

UDP header
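The checksum procedure just described can be sketched in Python. This is a minimal illustration of the RFC 768 algorithm for IPv4; the addresses, ports, and payload are made-up values, not from a real capture:

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit words using 1's complement arithmetic (end-around carry)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes,
                 src_port: int, dst_port: int, payload: bytes) -> int:
    """UDP checksum per RFC 768: computed over an IPv4 pseudo-header
    (source IP, destination IP, zero, protocol 17, UDP length), the
    UDP header with its checksum field set to 0, and the payload."""
    length = 8 + len(payload)
    pseudo = struct.pack("!4s4sBBH", src_ip, dst_ip, 0, 17, length)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return ~ones_complement_sum(pseudo + header + payload) & 0xFFFF

# Illustrative values only
csum = udp_checksum(socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2"),
                    51053, 53, b"example-dns-payload")
print("0x%04x" % csum)
```

A handy property for verification: once the computed checksum is placed in the header, the 1's complement sum of the whole buffer comes out as all ones (0xFFFF), which is exactly how a receiver validates it.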

Knowingly or unknowingly, you use UDP in various applications on your computer. Applications such as DHCP, DNS, Trivial File Transfer Protocol (TFTP), and more all use UDP as their transport protocol. If you are interested in checking which UDP ports are in use on your system, use the command netstat -anp udp. Example 4-1 displays the output of this command on macOS and Windows.
genie@VinJ ~ % netstat -anp udp
udp4       0      0  10.65.55.185.*         8.8.8.8.53
udp4       0      0  10.65.55.185.*         8.8.8.8.53
udp4       0      0  10.65.55.185.*         8.8.8.8.53
udp4       0      0  10.65.55.185.*         8.8.8.8.53
! Output omitted for brevity
C:\Users\Administrator>netstat -anp udp
Active Connections
  Proto  Local Address          Foreign Address        State
  UDP    127.0.0.1:1900         *:*
  UDP    127.0.0.1:56629        *:*
  UDP    127.0.0.1:57233        *:*
  UDP    127.0.0.1:65272        *:*
  UDP    192.168.0.3:137        *:*
  UDP    192.168.0.3:138        *:*
  UDP    192.168.0.3:1900       *:*
  UDP    192.168.0.3:5353       *:*
  UDP    192.168.0.3:56527      *:*
  UDP    192.168.0.3:57232      *:*
  UDP    192.168.0.3:65271      *:*
Example 4-1

Netstat Command for Verifying UDP

You can follow these simple steps to capture UDP traffic on your computer system:
  1. Start the Wireshark application and start a capture on your computer's NIC.

  2. Open a command prompt.

  3. Clear your DNS cache using the ipconfig /flushdns command (on Windows).

  4. Initiate a ping to a remote server or website from the command prompt.

  5. Close the command prompt.

  6. Stop the Wireshark capture.
Figure 4-38 shows the Wireshark capture of a DNS query for www.apple.com. The destination UDP port of 53 indicates that this is a DNS packet. If there are too many packets in the Wireshark capture file, you can simply filter the DNS packets using the display filter udp.port == 53. Looking at the UDP packet, the source port is 51053, the destination port is 53 (used for DNS), the length of the packet is 39 bytes, and the checksum value is set to 0x22e5. Note that at the end of the UDP header details, you can see that the UDP payload is 31 bytes; adding the 8-byte UDP header equates to 39 bytes.
Figure 4-38

Wireshark capture of DNS query
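Because the UDP header is a fixed 8 bytes, it is easy to decode by hand. The sketch below packs and unpacks a header with the same field values described for this capture:

```python
import struct

# Pack a UDP header with the values described above:
# source port 51053, destination port 53, length 39, checksum 0x22e5
raw = struct.pack("!HHHH", 51053, 53, 39, 0x22E5)

src, dst, length, checksum = struct.unpack("!HHHH", raw)
print(src, dst, length, hex(checksum))  # -> 51053 53 39 0x22e5
print(length - 8)                       # payload size: 39 - 8-byte header = 31
```

The `!` in the struct format string selects network (big-endian) byte order, which is how all these fields appear on the wire.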

Once the Wireshark capture has been performed, users can also follow a UDP stream by selecting one of the flows in Wireshark, right-clicking, and selecting UDP Stream from the Follow menu. This shows both the DNS query and the DNS response in the Wireshark window, with the filter set to udp.stream eq stream-number. Figure 4-39 displays the complete UDP stream for a DNS query to www.apple.com.
Figure 4-39

Filtered UDP stream in Wireshark

There isn't much a user can do when it comes to troubleshooting UDP. If there is packet loss in the network, the application can simply request the data again, but the UDP software itself does not track any sequence numbers: a lost UDP datagram is data lost. Some analysis can still be done in Wireshark using I/O graphs, as these are not specific to TCP but apply to any kind of stream captured in Wireshark. Users can filter a UDP stream in Wireshark and then select the I/O Graph option from the Statistics menu. The I/O graph will display options such as All Packets and TCP Errors, as well as Filtered Packets using the UDP display filter that was applied in Wireshark. Figure 4-40 displays the I/O graph of filtered UDP packets in Wireshark.
Figure 4-40

I/O graph for filtered UDP packets
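UDP's connectionless nature is easy to see in code. The sketch below sends a single datagram over loopback with no handshake and no teardown; loopback delivery is effectively reliable in practice, but nothing in UDP itself guarantees it:

```python
import socket

# Receiver: bind a datagram socket and let the OS pick a free port
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect() required -- just fire the datagram
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data)  # -> b'hello'
sender.close()
receiver.close()
```

Capturing on the loopback interface while running this shows exactly two things: the datagram itself, and nothing else -- no SYN, no ACK, no FIN.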

Summary

Any network engineer or security analyst should have a deep and solid understanding of the protocols working at different layers, but the Transport layer is one of the layers most engineers focus on least. The Transport layer protocols are crucial for ensuring end-to-end communication and transporting data between sender and receiver. To transport the data, the Transport layer offers protocols that follow either a connectionless or a connection-oriented architecture, each with its respective use cases. In this chapter, we focused on the two key Transport layer protocols, TCP and UDP.

In this chapter, we explored the workings of TCP and how it helps solve several problems, such as data reliability and data integrity. We learned about the TCP connection process using the three-way handshake, how attackers perform port scanning, and how network engineers can investigate packet loss issues in the network. We learned that packet loss in the network can lead to issues such as TCP retransmissions and TCP OOO packets. We then saw how quick analysis and troubleshooting can be performed on network traffic using Wireshark graphs, including TCP Stream Graphs, I/O graphs, and Flow graphs. As a network engineer or security analyst, it is important to have custom profiles in Wireshark to analyze different types of traffic. We covered how users can create a custom profile for TCP and quickly identify issues such as ZeroWindow simply by looking at the capture.

At the end of the chapter, we looked at UDP and in which scenarios UDP is used by different applications. We then learned how to filter UDP traffic and how we can leverage I/O graphs to learn about the UDP traffic pattern.

References in This Chapter

  • RFC 793: Transmission Control Protocol, DARPA, Information Sciences Institute University of Southern California, IETF, September 1981. http://tools.ietf.org/html/rfc793.

RFC 768: User Datagram Protocol, J. Postel, IETF, August 1980. http://tools.ietf.org/html/rfc768.
