Chapter 11. TKIP

Chapter 11 reviews one of the new security protocols that was developed specifically for use with existing Wi-Fi equipment. We will see that the TKIP security protocol provides a huge improvement over WEP and yet is able to operate on the same type of hardware and can even be applied to many older Wi-Fi systems through firmware upgrades. We start off with an overview of what TKIP is intended to accomplish and then work through each of the functions of TKIP in detail.

What Is TKIP and Why Was It Created?

TKIP stands for Temporal Key Integrity Protocol, but that's not important right now (or probably ever). TKIP exists for one reason: to allow WEP systems to be upgraded to be secure. This is the reason TKIP was created and this requirement guided the design throughout the standardization process. TKIP has now been adopted as part of the WPA certification and also is included as part of RSN in IEEE 802.11i. In 2001, when WEP was blown apart, there were millions of installed Wi-Fi systems, all suddenly without a viable link layer security solution. Originally, the new security measures of the IEEE 802.11i standard were expected to come into effect gradually, starting with applications where very high security was needed. However, when WEP was broken, in an instant, all these millions of systems were rendered insecure. A solution was needed that would allow them to be upgraded and become secure again.

The requirement that TKIP should run on legacy hardware (that is, hardware already installed using WEP) was a severe restriction on the approach to be taken. In the case of AES (see Chapter 12), the solution was designed from scratch; the designers could focus on the best solution possible within the general constraints that it should be practical and cost-effective to implement. In some ways, this ability to start from scratch made the AES approach a simpler problem to solve. But TKIP had the requirement to be secure and available as an upgrade to WEP systems.

To help understand why upgrading existing WEP systems is a significant problem, we need to take a look at the internals of Wi-Fi LAN systems and how they are built. We start with a Wi-Fi adapter card. There are not too many manufacturers of silicon chips for Wi-Fi LAN. In fact, the majority of existing WEP-based Wi-Fi LAN systems are based on the chips of only two or three companies. There are essentially four parts to a Wi-Fi LAN card:

  • Radio Frequency (RF) section

  • Modem section

  • MAC (Medium Access Control) section

  • Host interface to connect to your computer—PC card or USB, for example

Roughly speaking, the RF section deals with receiving and transmitting through the antenna; the modem deals with extracting data from the received signals; and the MAC deals with protocol issues, including WEP encryption. The four components are shown in Figure 11.1.

Figure 11.1. Components of a Wi-Fi LAN Adapter

The RF section requires very special design and the use of exotic semiconductor materials. However, the remaining parts can be implemented in standard run-of-the-mill integrated circuit (IC) technology. The key to reducing cost in electronics is to cram everything you can into a minimum number of integrated circuits and then produce a huge quantity of them. Therefore, successive generations of Wi-Fi LAN designs used fewer and fewer components. In the latest systems, Host Interface, MAC, and modem blocks are combined into a single IC. Eventually, we might expect that the RF section will also be included, to produce a single IC solution.

The part we want to look at is the MAC section of the IC. This is the part that implements most of the IEEE 802.11 protocol. On one side (the Host Interface side), it receives, from the computer, packets of data for transmission and instructions for activities such as “look for a new AP” or “request a connection to that AP.” It also delivers to the computer packets of data that have been received. On the other side (by the modem), it delivers a stream of bits containing all the various IEEE 802.11 control and data frames, including special functions like sleep modes, data acknowledgment, and retransmission of lost data. Importantly (for us), it also encrypts and decrypts the data frames.

Because the MAC operations are rather complex, all the implementations are built around a small microprocessor embedded inside the IC. The microprocessor is programmed to handle all the formatting and timing operations to control the protocol. Typically, however, this processor is not very powerful and certain operations are just too fast for it to handle. Therefore, the MAC is implemented as a combination of firmware and hardware, as shown in Figure 11.2.

Figure 11.2. Inside the MAC Chip

Figure 11.2 shows a block called Hardware Assist. If you want to go to the store for a loaf of bread, you can walk. But if you want to go at 70 mph, you need hardware assistance (in other words, your automobile). So it is with the MAC. The small microprocessors in the Wi-Fi cards shipped from 1997 to 2003 need help to go at 11Mbps, and that comes in the form of custom hardware in the IC.

If all the MAC functions were done only by the microprocessor, it would be possible to change the security system just by downloading new firmware. However, because encryption and decryption require a fair bit of computation, the implementation of WEP almost always depends on the hardware assist functions and, of course, these functions cannot be changed after manufacture.

You can see now why TKIP is necessary. The hardware assist functions in these earlier systems cannot support AES–CCMP. They can support only RC4 (WEP). Therefore, the designers set out to find a way to implement real security using the existing RC4 implementation, and in a way that can be done through firmware upgrades. This is TKIP.

TKIP Overview

WEP has some serious shortcomings, as listed in Table 11.1.

TKIP introduces a set of measures that address each one of these weaknesses. In some ways, it's like the way in which the Hubble telescope problem was repaired. The Hubble telescope is a huge optical telescope placed in space and orbiting the earth. Because it is in orbit, where there is just a clear vacuum to look through and no gravity to distort the lenses, the results were expected to be spectacular. Excitement turned to despair as the telescope was tested and it was found that the main mirror had been manufactured with a defect and was not perfectly shaped. The solution was not to replace the main mirror, which was not possible in space; it was to add some corrective lenses in the light path to the optical receiver after the main mirror.

TKIP solves the problems of WEP in a similar way. It cannot change the major items such as the way RC4 is implemented in hardware, but it can add a series of corrective tools around the existing hardware. The changes applied to WEP to make TKIP are summarized in Table 11.2. The numbers in parentheses indicate which weaknesses in Table 11.1 are addressed by each change.

Table 11.1. Weaknesses of WEP

  1. The IV value is too short and not protected from reuse.

  2. The way keys are constructed from the IV makes it susceptible to weak key attacks (FMS attack).

  3. There is no effective detection of message tampering (message integrity).

  4. It directly uses the master key and has no built-in provision to update the keys.

  5. There is no protection against message replay.

Table 11.2. Changes from WEP to TKIP

  • Message Integrity: Add a message integrity protocol to prevent tampering that can be implemented in software on a low-power microprocessor. Addresses weakness (3).

  • IV Selection and Use: Change the rules for how IV values are selected and reuse the IV as a replay counter. Addresses weaknesses (1) and (5).

  • Per-Packet Key Mixing: Change the encryption key for every frame. Addresses weaknesses (1), (2), and (4).

  • IV Size: Increase the size of the IV to avoid ever reusing the same IV. Addresses weaknesses (1) and (4).

  • Key Management: Add a mechanism to distribute and change the broadcast keys (see Chapter 10). Addresses weakness (4).

Message Integrity

Message integrity is an essential part of security. If an attacker is able to modify a message, there are many ways in which your system can be compromised. WEP had a method for detecting modification called the integrity check value (ICV), but the ICV offers no real protection at all. The ICV is not considered part of TKIP security (although its value is still checked).

If you want to detect tampering, one simple method would be to add together all the bytes in the message to create a “checksum” value and then send this along with the message. The receiving party can perform the same computation and verify that it gets the same result. If a bit changes, the two checksums won't match.
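
For illustration only, such a byte-sum check is trivial to compute, as this C sketch shows (the function name is ours):

#include <stdint.h>
#include <stddef.h>

/* Naive "checksum": just add all the message bytes together.
   Anyone can recompute this, which is why it provides no security. */
uint32_t byte_checksum(const uint8_t *msg, size_t len)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += msg[i];
    return sum;
}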

Such a simple approach is of no use for security because the attacker can simply recompute the checksum to match any changes he makes to the message. However, the general idea is similar: Combine together all the bytes in the message to produce a check value called the MIC (message integrity code) and then send the MIC along with the message. The MIC is computed using a special nonreversible process that incorporates a secret key. As a result, an attacker cannot recompute the MIC value unless he knows the secret key. Only the intended receiver can recompute and check the result.

There are several well-known secure methods for computing the MIC. Such methods have been well tried and tested in other protocols and security applications. However, for TKIP there was a problem. All the well-known methods require introduction of a new cryptographic algorithm or require computations using fast multiply operations. The ones that use multiply operations were considered, but they generally need at least one multiply operation for each group of four bytes. That would mean, for a typical 1514-byte data packet, at least 379 multiply operations. The microprocessor inside the MAC chip of most Wi-Fi cards is not very powerful; typically, it doesn't have any sort of fast multiplication hardware. In a small microprocessor a 32-bit multiply might take 50µs to complete. That would mean that it would take nearly 20ms to compute the MIC, reducing the data throughput from 11Mbps down to less than 1Mbps.

One proposal was to move the computation of the MIC up to the software driver level. After all, if you are using a laptop with a modern PC processor, such computations can be done in almost no time. But what about the poor old access point? Most access points do not have such a high-power processor and are already hard pressed to keep up with a multitude of connected stations.

What was needed was a method that was as secure as the well-known approaches but that could be done without either multiplication or new cryptographic algorithms. This was a tough goal. It was like saying, “I want a car as fast as a Porsche but with a 500cc engine”—not practical. However, a good compromise solution came from cryptographer Niels Ferguson using a method that he called Michael. Michael is a method of computing the MIC that uses no multiplications, just shift and add operations, and is limited to a fairly short checkword. Michael can be implemented on existing access points without ruining their performance. However, the cost of this simplicity is that Michael is vulnerable to brute force attacks in which an attacker is able to make many attempts rapidly, one after the other. Michael makes up for this vulnerability by introducing the concept of countermeasures.

The concept of countermeasures is straightforward: Have a reliable method to detect when an attack is being made and take action to slam the door in the attacker's face. The simplest countermeasure is just to shut the whole network down when an attack is detected, thus preventing the attacker from making repeated attempts.

We will look at the details of Michael later; but for now, it is enough to know that the method allows the computation of a MIC value that is added to the message prior to encryption and checked by the receiver after decryption. This value provides the message integrity missing in WEP.

Michael operates on MSDUs; the computation to produce the MIC is done on the MSDU rather than on each MPDU. This has two advantages. First, for the mobile device, it allows the implementer to perform the computation in the device driver running on the host computer prior to forwarding the MSDU to the Wi-Fi LAN adapter card. Second, it reduces the overhead because it is not necessary to add a MIC value to every fragment (MPDU) of the message. By contrast, TKIP encryption is done at the MPDU level.

Michael requires its own secret key, which must be different from the secret key used for encryption. Creating such a key is easily accomplished when you are generating temporal keys from a master key, as described in Chapter 10.

IV Selection and Use

Chapter 6 explained the purpose of the IV (initialization vector) and how it is prepended to the secret key to generate the encryption key for each packet. We also noted that there are three fundamental weaknesses with the way IVs are used in WEP:

  • The IV is too short so IV values are reused regularly in a busy network.

  • The IV is not specific to the station so the same IV can be used with the same secret key on multiple wireless devices.

  • The way the IV is prepended to the key makes it susceptible to the Fluhrer-Mantin-Shamir (FMS) attack (this is discussed in more detail later).

For a further explanation of why IV reuse is a problem, see Chapter 6.

In the WEP standard, there were no requirements to avoid IV reuse. In fact, it was not mandatory to change the value of IV at all! Many vendors picked a random value for IV for each packet, which seems intuitively to be a good idea. However, at the end of the day, if you want to avoid IV reuse for the longest time possible, the simple approach is to start with a value of one and count up by one for each packet. In TKIP, this behavior has been mandated as part of the standard.

Incrementing the IV delays IV reuse, but it doesn't get you out of the problem of “too short” IV. Even counting up, all the values will be used after about 16 million frames, which happens surprisingly fast in a busy network. Like the odometer on a car, when a binary counter reaches its maximum value, the next increment returns it all to 0—this is called rollover. TKIP introduces several new rules for using the IV. Essentially, there are three differences in how IVs are used compared to WEP:

  • IV size is increased from 24 bits to 48 bits.

  • IV has a secondary role as a sequence counter to avoid replay attacks.

  • IV is constructed to avoid certain “weak keys.”

IV Length

The WEP IV, at 24 bits, allowed only 16,777,216 values before a duplicate IV would be used. This is unacceptable in practice when this number of frames could be sent in a few hours. Chapter 6 discussed the FMS attack, the most effective attack against WEP, and showed how WEP is very susceptible because its IV appears first in the frame and advertises when a weak key is being used. Some vendors try to reduce the potency of this attack by skipping IV values that would produce weak keys. However, this strategy just reduces the number of possible IV values still further, making one problem better but another one worse!

In designing TKIP, the security experts recommended that the IV must be increased in size to provide a robust solution. There was some debate about how to do this; but in the end, the IEEE 802.11 task group (i) decided to insert 32 extra bits in between the existing WEP IV and the start of the encrypted data. This was a contentious decision because not all vendors can upgrade their legacy systems to meet this requirement. However, most can.

Potentially, the extra 32 bits added to the original 24 gives a new IV of 56 bits; however, in practice only 48 bits are used because 1 byte must be “thrown away” to avoid weak keys. The advantages of going to a 48-bit IV are startling. Suppose you have a device sending 10,000 packets per second. This is feasible using 64-byte packets at 11Mbps, for example. The 24-bit IV would roll over in less than half an hour, while the 48-bit IV would not roll over for nearly 900 years! Moving to a 48-bit IV effectively eliminates the IV rollover problem, although you still have to be careful to avoid two devices separately using the same IV value with the same key.
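
The rollover arithmetic is easy to check. The short C program below (assuming the 10,000 packets-per-second rate used above) prints roughly 28 minutes for the 24-bit IV and close to 900 years for the 48-bit IV:

#include <stdio.h>

int main(void)
{
    const double pkts_per_sec = 10000.0;     /* assumed traffic rate */
    const double iv24 = 16777216.0;          /* 2^24 possible IV values */
    const double iv48 = 281474976710656.0;   /* 2^48 possible IV values */

    printf("24-bit IV rolls over after %.0f minutes\n", iv24 / pkts_per_sec / 60.0);
    printf("48-bit IV rolls over after %.0f years\n",
           iv48 / pkts_per_sec / (3600.0 * 24.0 * 365.0));
    return 0;
}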

Increasing the IV length sounds straightforward, but it introduces some real problems for practical implementation. Remember that the original WEP IV is joined on the front of the secret key to create the RC4 encryption key. Thus, a 40-bit[1] secret key is joined to the 24-bit IV to produce a 64-bit RC4 key. The hardware in legacy systems assumes this type of key structure and can't be upgraded to suddenly deal with an 88-bit RC4 key created by joining the new 48-bit IV to the old 40-bit secret key. In TKIP this problem is solved in an interesting way. Instead of simply forming a new RC4 key by joining the secret key and the IV, the IV is split into two pieces. The first 16 bits of the new IV are padded out to 24 bits in a way that avoids known weak keys. This 24-bit value is used in the same way as for WEP systems. However, rather than joining this value to the ordinary secret key, a new mixed key is generated by merging together the secret key and the remaining 32 bits of the IV (and some other stuff, too). The way in which the long IV is incorporated into the key, called per-packet key mixing, is described in more detail later in this chapter and is shown in Figure 11.3. The important thing to note here is that this approach allows us to achieve two objectives:

  • The value of the key used for RC4 encryption is different for every IV value.

  • The structure of the RC4 key is a 24-bit “old IV” field and a 104-bit secret key field.

Figure 11.3. Creating the RC4 Encryption Key

These objectives have been achieved with the advantage of a 48-bit IV value.

IV as a Sequence Counter—the TSC

WEP had no protection against replay attack. An enemy could record a valid packet and play it back again later, in the expectation that, if it decrypted correctly the first time, it probably would do so again. In a replay attack, the enemy doesn't attempt to decode your messages, but she does try to guess what you are doing. In an extreme example, by recording messages while you delete a file and replaying them later, she could cause a file of the same name to be deleted without ever breaking the encryption. Replay prevention is designed to block old messages used in this way. TKIP has a mechanism to enforce this called the TKIP sequence counter (TSC).

In reality, the TSC and the IV value are the same thing. This value always starts at 0 and increments by 1 for each packet sent. Because the IV is guaranteed not to repeat for a given key, we can prevent replay by ignoring any messages with a TSC value that we have already received. These rules mean that it is not possible to mount a replay attack by recording earlier messages and sending them again.

The simplest way to prevent replay attacks is to throw out any received messages in which the TSC has not increased by 1 compared to the last message. However, there are a number of reasons why this simple approach would not work in practice. First, it is possible for frames to be lost in transmission due to interference and noise. If a frame with TSC 1234 is received and then the next frame (1235) is lost, the subsequent arriving frame would have a TSC of 1236, a value that is 2 greater than the last TSC seen. Because of the lost frame, all subsequent frames would get rejected on the basis that the TSC did not increase by 1.

So let's revise the rule: throw out any messages that have a TSC less than or equal to the last message. But what about retransmission? According to the standard, frames must be acknowledged by the receiver with a short ACK message. If they are not acknowledged, the message should be retransmitted with a bit set to indicate that it is a duplicate. Being a repeat message, these will have the same TSC as the original attempt. In practice, this works okay because only one valid copy is needed at the receiving end so it is no problem to throw out the duplicates when checking the TSC at the receiver. The possibility of retransmissions illustrates that duplicate TSCs should not always be treated as evidence of an attack.

A more difficult problem arises due to a new concept called burst-ack. In the original IEEE 802.11 standard, each data frame sent must be acknowledged separately. While this requirement is effective, it is somewhat inefficient because the transmitter has to keep stopping and waiting for the ACK message to be received before proceeding. The idea of burst-ack is to send up to 16 frames in quick succession and then allow the receiver to acknowledge all 16 in one message. If some of the messages were not received successfully, the receiver can specify which ones need to be resent. Burst-ack has not yet been added to the standard, but it is likely to be included in the future.

Can you see the problem from the perspective of the TSC? Suppose 16 frames are sent, each with a TSC greater than the last, and the receiver fails to get the first one. It then requests that the first frame be resent, which would happen with its original TSC value. This value would be 15 less than the last frame received and would thus be rejected, according to the rule of “TSC must be greater.”

To accommodate this burst-ack, TKIP uses the concept of a replay window. The receiver keeps track of the highest TSC received and also the last 16 TSC values received. When a new frame is received, it categorizes the frame as one of three types:

  • ACCEPT: TSC is larger than the largest seen so far.

  • REJECT: TSC is less than the lower limit (the largest value minus 16).

  • WINDOW: TSC is less than the largest, but more than the lower limit (the largest value minus 16).

For the WINDOW category, it checks to see whether a frame with that TSC has been received before. If so, it rejects it. The receiver must keep a record of the last 16 TSC values and check them off as they are received.

This set of rules is more complicated than the simple test we started with, but it effectively prevents replay attacks while allowing the protocol to run efficiently.
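
The following C sketch shows one way the replay window could be implemented. The structure and names are ours, and borderline cases (such as a TSC exactly 16 below the largest) follow the simplest reading of the rules above:

#include <stdint.h>
#include <stdbool.h>

#define REPLAY_WINDOW 16

/* seen[i] records whether the frame with TSC (largest - i) has been received. */
struct replay_state {
    uint64_t largest;              /* highest TSC accepted so far */
    bool     seen[REPLAY_WINDOW];
};

/* Returns true if the frame should be accepted, false if it is discarded as a replay. */
bool tsc_accept(struct replay_state *st, uint64_t tsc)
{
    if (tsc > st->largest) {                            /* ACCEPT: new highest TSC */
        uint64_t shift = tsc - st->largest;
        for (int i = REPLAY_WINDOW - 1; i > 0; i--)     /* slide the window forward */
            st->seen[i] = ((uint64_t)i >= shift) ? st->seen[i - shift] : false;
        st->seen[0] = true;
        st->largest = tsc;
        return true;
    }
    if (st->largest - tsc >= REPLAY_WINDOW)             /* REJECT: below the window */
        return false;
    uint64_t idx = st->largest - tsc;                   /* WINDOW: duplicate check */
    if (st->seen[idx])
        return false;
    st->seen[idx] = true;
    return true;
}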

Countering the FMS Attack

The most devastating attack against WEP was described in a paper by Scott Fluhrer, Itsik Mantin, and Adi Shamir and is usually referred to as the FMS attack. This weakness allows scripted tools to deduce the secret key by monitoring the link. Let's take a (very) quick look at the attack.

The RC4 cipher generates a pseudorandom stream of bytes that can be combined with the data to be encrypted so the whole stream of data looks like random noise. To generate the random sequence, RC4 uses a 256-byte array of values that is reorganized into a different pattern between each byte of random output. The 256-byte array is initialized using the secret encryption key.

Certain key values, called weak keys, create a situation in which the first few bytes of pseudorandom output are not all that random. In other words, if a weak key were being used, there would be a higher probability of certain values coming out in the first few bytes. This fact is exploitable: If you know a weak key is being used, you can work backwards to guess the entire encryption key value just by looking at the first few bytes from a collection of packets. WEP has this particular weakness because it uses the IV value as the first bytes of the key, and the IV value is visible to everyone. As a result, a weak key will eventually be used, thereby giving an enemy the basis for an attack.

Ron Rivest, the designer of RC4, recommended a simple solution to this problem. You simply generate, and throw away, 256 bytes of the random stream before you start encrypting. This approach denies the attacker a chance to get hold of the dangerous first few bytes and ensures that the whole 256-byte array is fully churned up and no longer contains any accessible key information. Well, this would be a simple answer except that, for most of the Wi-Fi cards shipped with WEP, the hardware assist in the MAC chip does not support this solution. It is programmed to start using the first random bytes as soon as they are ready.
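
For reference, Rivest's fix is easy to express in software. The following sketch shows standard RC4 key scheduling followed by discarding the first 256 keystream bytes; the whole point, of course, is that the legacy WEP hardware performs these steps itself and cannot be told to do the discarding:

#include <stdint.h>
#include <stddef.h>

struct rc4 { uint8_t s[256]; uint8_t i, j; };

/* Standard RC4 key scheduling. */
void rc4_init(struct rc4 *c, const uint8_t *key, size_t keylen)
{
    for (int k = 0; k < 256; k++)
        c->s[k] = (uint8_t)k;
    uint8_t j = 0;
    for (int k = 0; k < 256; k++) {
        j = (uint8_t)(j + c->s[k] + key[k % keylen]);
        uint8_t t = c->s[k]; c->s[k] = c->s[j]; c->s[j] = t;
    }
    c->i = c->j = 0;
}

/* Generate one keystream byte. */
uint8_t rc4_byte(struct rc4 *c)
{
    c->i++;
    c->j = (uint8_t)(c->j + c->s[c->i]);
    uint8_t t = c->s[c->i]; c->s[c->i] = c->s[c->j]; c->s[c->j] = t;
    return c->s[(uint8_t)(c->s[c->i] + c->s[c->j])];
}

/* Rivest's suggestion: initialize, then throw away the first 256 bytes so the
   weak-key information in the early output never reaches an attacker. */
void rc4_init_with_discard(struct rc4 *c, const uint8_t *key, size_t keylen)
{
    rc4_init(c, key, keylen);
    for (int k = 0; k < 256; k++)
        (void)rc4_byte(c);
}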

Given this severe known weakness, and denied the opportunity to resolve it in the recommended way, the designers of TKIP decided on a two-pronged defense:

  • Try to avoid weak keys.

  • Try to further obscure the secret key.

The FMS attack depends on the ability of the attacker to collect multiple samples of frames with weak keys. Only about 60 frames are needed before the first bits of information emerge and complete decoding of the key can occur after a few million packets. So how to defeat the attack? The approach adopted in TKIP is to change the secret key with every packet.[2] If you take this approach, the attacker can never get enough samples to attack any given key. This sounds impractical, but in the next section you will see how you can make this change even on older adapter cards.

Another defense against FMS attack is to avoid using weak keys at all. The problem is that no one knows for sure what all the weak keys are. Cryptographers can say that a certain type of key is weak—but they can't say that all the others are strong. However, the following section on key mixing shows that two bits in the IV are always fixed to a specific value to avoid a well-known class of weak keys.

Some vendors have modified their WEP implementations to avoid IVs that produce weak keys. However, there is another problem with this approach. We already know that there are not enough IV values available when the IV is 24 bits long. If we now reduce the IV space still further, we reduce one problem but make another one worse! This problem is removed in TKIP simply by doubling the length of the IV.

This section focuses on the changes to the use of the IV in TKIP. There are three significant changes: The length is increased to 48 bits, the IV doubles up as a sequence counter (the TSC), and the IV is combined with the secret key in a more complicated way than WEP. The last feature achieves two results: It enables the 48-bit IV to be incorporated without changing the implementation hardware and it avoids the use of a known class of weak keys. The changes to the IV provide a significant amount of extra security over WEP.

Per-Packet Key Mixing

The previous section explains that the basis of an FMS attack is an enemy trying to guess the key based on observing the first few bytes of the encrypted data. The attacker needs to analyze quite a few frames to make a reasonable guess. To defeat this attack (in its current known form), the encryption key is changed for every packet sent. After all, an enemy can't attack the key if it keeps changing.

This issue of “keys” is a little confusing. In the WEP scenario, it was simple. There was a WEP key and it was used for everything. If it was compromised, then all was lost. Such a simple approach is not good enough for TKIP, so there are multiple levels of keys derived from a single master key. Session keys are derived from the master key. These keys are then split into pieces for various uses, one of which is encryption (for more detail, see Chapter 10). What per-packet key mixing does is to further derive a key specifically for each and every packet sent. In other words, at the level of RC4, every packet uses a different, and apparently unrelated, key. The session keys and master key do not, of course, change every packet! In addition to making it harder to mount attacks, the generation of a key per packet allows the extended-length IV value to be incorporated.

The process of key derivation involves mixing together various bits of information in a hash function. The result produced bears no obvious relationship to the values you start with; however, if both ends of a link start off with the same information and use the same hash method, they will produce the same result—in other words, matching keys.

The problem is that the computation to derive the key can be processing intensive. There is not a lot of computing power in the MAC chip of most WEP-based Wi-Fi cards. So, on the face of it, deriving a new key for every packet might seem infeasible. But there was another trick up the sleeve of Doug Whiting and Russ Housley, the inventors of the key-mixing scheme. The calculation was divided into two phases. Phase 1 involves all the data that is relatively static, such as the secret session key, the high order 32 bits of the IV, and the MAC address. Phase 2 is a quicker computation and includes the only item that changes every packet—the low order 16 bits of the IV. Even in this case, the next IV value is known so the processor can go off and compute one or more mixed keys in advance, anticipating that a frame will arrive shortly and need decrypting.

We briefly mentioned that the MAC address is included in the computation of the mixed key. There is an excellent reason for this inclusion. Two devices are communicating using a shared session key, which means that the same session key is used for messages in both directions. A uses the same session key to send messages to B as B does when sending to A. But if both A and B start with an IV of 0 and then increment the IV by 1 for each packet sent, you immediately get IV collisions. They both are using the same IV value with the same key. One way to avoid this problem is for A to use only even IV values and B to use only odd values, for example. However, this further reduces our IV number space and doesn't help for broadcasts and multicasts.

We do know that for A and B to work in the LAN, they must have different MAC addresses. So by mixing the MAC address into the per-packet key, we guarantee that even if both devices use the same IV and the same session key, the mixed key used by A in encrypting the packet will be different from that used by B. Problem solved.

The process of combining the MAC address, session key, and IV is shown in Figure 11.3. Notice how the lower bits of the IV are only incorporated into the phase 2 computation so the phase 1 computation only needs to be redone every 2^16 (that is, 65,536) packets. Only 16 bits of the new, long IV go into the old WEP IV position. The middle byte of the old IV is computed by copying the first byte and setting certain bits to fixed values to avoid creating a class of weak keys.

Details of the mixing algorithm for the two phases of the key mixing are given later. However, at this point we have seen the essentials of all the mechanisms that have been added to TKIP to make it both secure and compatible with old WEP systems.

All the problems with WEP have been solved in TKIP. The list of weaknesses is completely covered. The solutions used allow backwards implementation on WEP hardware—an example of excellent engineering: not just finding a good solution, but finding one within severe and sometimes perverse constraints. The following section revisits the concepts described so far in this chapter and looks at the implementation details.

TKIP Implementation Details

This section gets into the details of how the TKIP algorithm is implemented. This may be of interest only if you yourself are a designer. If you don't need to know these details but still want to read on, we admire your dedication!

The first assumption we make is that master keys have been distributed and session keys derived on both sides of the communications link. The master keys could have been obtained using an upper layer authentication method based on EAP, or preshared master keys could be in use. The latter case is analogous to the WEP approach, in which keys are distributed out of band and programmed, or simply typed, into the devices. This approach could be used for smaller networks or ad-hoc mode (IBSS) operation.

Chapter 10 describes the way in which the keys are derived. As noted in that chapter, three types of key are derived for TKIP:

  • Key to protect the EAPOL-Key exchange messages

  • Pairwise key to be used for actual message protection using TKIP

  • Group key to protect broadcasts using TKIP

It is the second two types of key information that we are interested in here. From the pairwise key information are derived temporal keys:

  • Temporal Encryption Key (128 bits): This is used as an input to the key mixing stage prior to actual RC4 encryption.

  • Temporal Authenticator TX MIC Key: This is used with the Michael authentication method to create the MIC on frames transmitted by the authenticator (access point in an ESS network).

  • Temporal Authenticator RX MIC Key: This is used with Michael to create the MIC on frames transmitted by the supplicant (typically the mobile device).

For the group keys, only the first two types of key need be derived because broadcasts (and multicasts) are sent out only by the authenticator and never by the supplicant.

TKIP's task is to provide a security service for validating the integrity of received data and to obscure transmitted data beyond recognition. To accomplish this, TKIP employs a set of tools:

  • IV generation and checking

  • MIC generation and checking

  • Encryption and decryption

For the transmit side, the position of these components relative to other MAC activities is shown in Figure 11.4. The four processes of TKIP are shown as follows:

  • Michael

  • Key derivation

  • IV/TSC

  • RC4

Figure 11.4. TKIP Role in Transmission

Note that the Michael check value (the MIC) is computed over, and appended to, the MSDU prior to fragmentation. As a result, the MIC bytes are present only in the last MPDU and are within the encrypted payload. The original (WEP) checkword, the ICV, is still computed and appended to each MPDU, although it is not included as part of TKIP packet integrity checking.

Because the MIC is computed at the MSDU level, it is not possible to include the IV value in the MIC computation, for two reasons. First, because the MSDU might be fragmented, there might be multiple values of IV used to send the fragments of the MSDU, so which value of IV would you choose? Second, the value for the IV must not be selected until after the fragment is removed from the transmission queues. In the future, to support multimedia, IEEE 802.11e could have up to eight priority queues for outbound frames, and the order in which fragments are selected for transmission depends on many factors related to priority and real-time constraints. Therefore, MSDUs of higher priority can overtake a previous MSDU or even interrupt it between fragments. In TKIP, there is only one IV counter for the whole link (not one per queue), so the assignment of the IV value has to wait until the last moment, when a fragment is selected for transmission. As a result, the value cannot be known when the MIC is calculated.

Computing the MIC at the MSDU level, and not protecting the IV, allows an attacker to block a station by replaying old frames with a new IV value. The problem arises because the IV doubles up as the TKIP sequence counter (TSC) used to avoid replay attacks. Such forged frames will, of course, fail to decrypt correctly and be discarded. They do not threaten the integrity of the protocol. However, the effect is to make subsequent valid frames look like a replay attack. When a valid frame arrives, it might be rejected because the TSC value has been “used up” by the attacking station. This is a class of denial-of-service attack. In wireless, there are many simple ways to deny service that cannot be prevented and do present a potential nuisance.

The Encryption box shown in Figure 11.4 is assumed to be the same RC4 encryption used in WEP. Most manufacturers implemented this box in a way that can't be changed by firmware upgrades. Existing WEP equipment often includes hardware initialization of the RC4 S-Box. The fact that this unit could not be changed was the biggest problem for the design of TKIP.

Before looking at each of the TKIP elements in detail, let's look at the corresponding receive chain, as shown in Figure 11.5.

Figure 11.5. TKIP Role in Reception

The receive chain is not quite the reverse of the transmit chain. For one thing, decryption is not the first operation. Instead, the TSC (derived from the IV) is checked first as a replay defense. Note that the ICV value is checked and, if it is incorrect, used to reject the packet. This is not technically an integrity check, but it is a quick indication of whether the decryption has been successful: Decrypting a packet with the wrong key or IV values is almost certain to produce a bad ICV value.

The MIC is checked after all the fragments have been received and reassembled into an MSDU. Note that if the MIC fails, not only will the MSDU be discarded but countermeasures may be invoked. Although possible, it is extremely unlikely that random errors in transmission would be such that the frame would pass the CRC check and then decrypt to produce an acceptable ICV. If we receive a MIC failure, we can be very sure that it is due to intentional tampering and not just random interference or transmission errors.

Message Integrity—Michael

Michael works by computing an 8-byte check value called the message integrity code (MIC) and appending this to the MSDU prior to transmission. The MIC is computed over the entire (unencrypted) data in the frame and also the source and destination MAC addresses. Michael was invented by Niels Ferguson (2002) and was designed specifically to address the special needs of TKIP, in particular the need to be implemented using a relatively low power processor and without high-speed hardware multiply.

The adoption of Michael by the standards group was somewhat controversial. The algorithm is new, and “new” is considered a bad word by cryptographers. Cryptographers like well-studied algorithms. Furthermore, the security level, measured by the equivalent key bit size, is low—the design goal is only 20 bits of security, so a randomly chosen MIC value has about a one in a million chance of being accepted as valid. One in a million is not considered to be very rare by cryptographic standards. As a result of this weakness, mandatory countermeasures were added to stop an attacker from making rapidly repeated attempts with random MIC values.

There are several well-known ways to implement a MIC with very high levels of security; the problem is that these methods are just too processing intensive to be run by older equipment already in the field. Because the whole point of TKIP was to allow upgrade of that equipment, these attractive methods were simply not viable. The weaknesses of Michael are in no way criticisms. The IEEE 802.11 task group (i) could have chosen to design an approach as strong as any known. The weaknesses come from the need to design an approach that could be used with existing WEP hardware. In the end, the standards group felt that this was the best solution within these constraints and that the countermeasures overcome any risk from attacks on the basic method.

Countermeasures

Let's elaborate on what is meant by countermeasures. Suppose you are an attacker and you want to get a forged message past the MIC check. The most likely scenario is that you have captured a previous message and modified it in some way. You want the modification to go unnoticed. It is already going to be hard to get the message accepted to the point at which the MIC value is even checked—you have to get past the IV replay protection and the ICV decryption check first. But let's suppose you have figured out how to do that. The 8-byte MIC value is encrypted in the frame along with the data so you won't know what the original value is. However, you do know where it is because RC4 encrypts each byte separately and therefore you are able to substitute random values into the field where you know the MIC bytes will be. When the message is decrypted, your inserted random bytes will certainly be changed to another value. However, because they are random, you don't really care about that. You just think about that one in a million chance that the inserted bytes will happen to be right.

In most cases your replacement MIC bytes will not match the message contents and the message will be thrown away. However, there is a small chance that the bytes will match the message and, bingo, you've succeeded in your attack. The message is delivered as valid despite having been altered.

A chance of one in a million doesn't sound too good; but remember, people do actually win the lottery, although it will never (statistically) be you. If an attacker is able to try this trick a million times, all the odds change. Sooner or later, the attack will succeed if it is not detected. The purpose of countermeasures is to detect such an attack and stop the attacker having too many plays at the game.

The purpose of countermeasures with Michael is to reliably detect an attack and close down communication to the attacked station for a period of one minute. This simple action limits the attacker to one try per minute, meaning that on average it would take one year of continuous trying to get a random packet through. Unless you have a particularly bad network administration department, someone will notice the network going up and down once per minute all year long (“ours wouldn't,” I hear some cynics say).

The actual countermeasures used with Michael are a little more sophisticated. The object is to prevent the attacker from making repeated attacks, but also to try to keep the network going as long as possible. We don't want to tear everything down on the first detected problem.

The approach is to disable the keys for a link as soon as the attack is detected. The compromised devices are then unable to communicate until new keys are generated. Typically the new keys are generated immediately and the network can recover quickly. However, Michael has a 60-second “blackout” rule that says that, if there has been any MIC attack within the last 60 seconds, generation of new keys must be delayed until the 60-second period has expired. This limits the attacker to one try per minute for the entire network.
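
In software terms, the blackout rule is little more than a timestamp check, as the following sketch suggests (the names and the seconds-based time source are ours):

#include <stdint.h>
#include <stdbool.h>

/* Time of the most recent MIC failure, in seconds; 0 means "no failure yet". */
struct mic_state {
    uint32_t last_failure;
};

void record_mic_failure(struct mic_state *s, uint32_t now)
{
    s->last_failure = now;
}

/* New keys may be generated only if no MIC failure has occurred in the last 60 seconds. */
bool rekey_allowed(const struct mic_state *s, uint32_t now)
{
    return s->last_failure == 0 || (now - s->last_failure) >= 60;
}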

MIC Failure at Mobile Device

There are two cases in which the supplicant (in the mobile device) can detect a MIC failure. The first is received multicasts (broadcasts) that indicate an attack on the group keys. The other is received unicast messages in which a failure indicates an attack on the pairwise keys. The required behavior is similar in both cases:

For a MIC failure on a multicast message:

  1. Delete the local copy of the group key.

  2. Request a new copy of the group key from the authenticator using an EAPOL message (indicates MIC failure).

  3. Log the event and inform system manager if possible.

For a MIC failure on a unicast message:

  1. Drop any received frames and block any transmitted frames except for IEEE 802.1X messages (to allow new key exchange).

  2. Request new pairwise keys by sending EAPOL message to the authenticator.

  3. Log the event and notify the operator if possible.

MIC Failure at Access Point

Although the access point does not receive multicast messages (and hence can't discover a MIC failure for the group key), there are still two cases to consider. The first is a MIC failure detected in a received unicast message, and the second is notification of a group key MIC failure due to receiving an EAPOL key message from a mobile device. The actions to take in each case are as follows:

For a MIC failure related to the group key:

  1. Delete the existing group key and stop sending multicast messages.

  2. Log the event and notify the operator.

  3. If there has been another MIC failure within the last 60 seconds, wait until the 60-second blackout period expires.

  4. Create a new group key and distribute to all stations.

For a MIC failure related to pairwise keys:

  1. Log the event and notify the system operator.

  2. Drop any received frames and block any transmitted frames except IEEE 802.1X messages (to allow new key exchange).

  3. If there has been another MIC failure within the last 60 seconds, wait until the 60-second blackout period expires.

  4. Initiate a four-way key exchange to establish new pairwise keys.

On first encountering these countermeasures, many people express a concern that an attacker could cause untold disruption to the network. On the face of it, if an attacker sends a forged multicast message every 59 seconds, the network would be permanently in blackout period and unable to operate. In principle, this is true and is another class of denial-of-service attack. However, it is important to note that, in practice, it is extremely difficult to forge a frame to do this. The frame must first pass the TSC check and must decrypt correctly before the MIC is even looked at. Consider these issues:

  1. You have to forge a frame where the TSC (TKIP sequence counter) is correct so the frame is not immediately dropped as “out of sequence.”

  2. The TSC is also part of the IV, and the IV is mixed into the per-packet encryption key. So if you change the TSC, the frame will not decrypt correctly; the ICV will not give a good value and the frame will be deleted.

So to mount the denial-of-service attack, you have to capture a valid frame during transmission, prevent it being delivered to its intended destination, modify the MIC (to make it invalid), recompute the ICV so that it matches the changed MIC value, and finally deliver the message so as to trigger the MIC failure.

Frankly, there are many ways to mount denial-of-service attacks, and most of them are much simpler than trying to trigger the Michael countermeasures. The simplest way to mount a DoS attack is just to send disassociate messages for each of the connected stations. By its very nature, wireless communications is subject to DoS attack. Look at the way the Soviet Union successfully denied its population access to western TV stations by jamming. DoS is an attack on service availability rather than on secrecy; and while the countermeasures give the attacker one more mechanism, triggering them is by no means the easiest approach for the enemy.

Computation of the MIC

The computation of the MIC in Michael uses only swaps, additions, rotations, and exclusive OR operations. There are no multiplies. The units of data that are handled are based on 32-bit words. These characteristics make the method suitable for implementation on lower-power processors. Many of the access points shipped between 1998 and 2002 were based on low-cost 32-bit CISC processors such as the Intel i486. Consistent with this, operations that are endian dependent are defined to be little endian (as for Intel processors).

The data to be protected by the MIC includes the actual payload and the source and destination address fields. These are ordered as shown in Figure 11.6.

Figure 11.6. Data for MIC Computation

The first part of the algorithm is to organize the data into 32-bit words. This is done for both the key and the data. The 64-bit key is divided into two 32-bit words called K0 and K1. This task must be done in the right order and is designed to be easy for Intel x86 processors. The conversion to two 32-bit words is shown in Figure 11.7 where the 64-bit key is treated as 8 bytes, stored sequentially in memory.

Figure 11.7. Little Endian Key Splitting

The least significant byte of the information is always stored in the lowest memory address and K0 is stored lower than K1. After splitting the key, the data must also be split into 32-bit words. To guarantee this, the data must be padded to a multiple of 4 bytes. First, a single byte value of 0x5a is added to the end of the data. Finally, extra pad bytes with a value of 0 are added. At least four 0 bytes must be added and the total length of the data must be a multiple of 4 bytes. Thus, between four and seven zeros are always added.

As an example, if the original user data was 1 byte:

  • MSDU data is 13 bytes (two 6-byte addresses plus 1 user data byte).

  • Value 0x5a is added to give 14 bytes.

  • Six 0 bytes are added, giving a total of 20 bytes (5 x 32-bit words) to be processed by Michael.

Note that these extra bytes are not really added to the data or ever sent over the link. They are just added for the purpose of computing the MIC and are then discarded. Now we have two 32-bit key words K0, K1, and a set of data words M0, M1… Mn. The last word Mn is always 0 and the second to last word is always non zero (because of the 0x5a that was added). Our object is to compute a 64-bit MIC value comprising two 32-bit words, V0 and V1, which will be appended to the data prior to encryption.
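
A small C sketch of the padding step (the function name and buffer convention are ours); applied to the 13-byte example above it appends 0x5a and six zero bytes to give 20 bytes:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Michael padding as described above: append the single byte 0x5a, then
   between four and seven zero bytes so the total length is a multiple of 4.
   The caller's buffer must have at least len + 8 bytes of space.
   Returns the padded length; the padding is used only for the MIC
   computation and is never transmitted. */
size_t michael_pad(uint8_t *buf, size_t len)
{
    buf[len++] = 0x5a;
    size_t zeros = 4 + ((4 - (len + 4) % 4) % 4);
    memset(buf + len, 0, zeros);
    return len + zeros;
}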

The algorithm works as follows:

  1. Make a copy of the key: l = K0 and r = K1.

  2. Exclusive OR the first data word M0 with l.

  3. Call the Michael block function with the values of l and r; new values are returned.

  4. Repeat steps 2 and 3 for all the remaining data blocks.

The final values of l and r form the MIC result V0 and V1, respectively. In programming language style, this sequence can be represented as follows:

(l, r) ← (K0, K1)
for i = 0 to N-1 do
   l ← l ⊕ Mi
   (l, r) ← FnMichael(l, r)
V0 = l
V1 = r

The Michael block function FnMichael takes two words (l, r), processes them together, and produces two new values of (l, r). The details of the computations are provided in programming language style here:

(l, r) ← FnMichael (l, r)
r ← r ⊕ (l <<< 17)
l ← (l + r) mod 2^32
r ← r ⊕ XSWAP(l)
l ← (l + r) mod 2^32
r ← r ⊕ (l <<< 3)
l ← (l + r) mod 2^32
r ← r ⊕ (l >>> 2)
l ← (l + r) mod 2^32
return (l, r)

The key to the operations here is:

  • ⊕: exclusive OR

  • +: two's complement addition

  • <<<: rotate left

  • >>>: rotate right

  • mod 2^n: discard any bits of higher significance than n

  • XSWAP: given a 32-bit word, swap the lower 16 bits and the upper 16 bits (hexadecimal 12345678 becomes 56781234)

The resulting two words can now be appended onto the actual MSDU data (nonpadded) as 8 bytes of extra data. Notice that, because this process occurs at the MSDU level, it is completely transparent to the rest of the 802.11 MAC and encryption.
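
Putting the pieces together, a compact C rendering of the block function and the MIC loop might look like this (a sketch only: it assumes the padded message has already been converted into little-endian 32-bit words, and the mod 2^32 additions fall out of using uint32_t arithmetic):

#include <stdint.h>
#include <stddef.h>

static uint32_t rotl32(uint32_t x, int n) { return (x << n) | (x >> (32 - n)); }
static uint32_t rotr32(uint32_t x, int n) { return (x >> n) | (x << (32 - n)); }
static uint32_t xswap(uint32_t x)         { return (x << 16) | (x >> 16); }

/* The Michael block function FnMichael: mixes l and r in place. */
static void michael_block(uint32_t *l, uint32_t *r)
{
    *r ^= rotl32(*l, 17);  *l += *r;
    *r ^= xswap(*l);       *l += *r;
    *r ^= rotl32(*l, 3);   *l += *r;
    *r ^= rotr32(*l, 2);   *l += *r;
}

/* m[] holds the padded message (DA | SA | data | padding) as n little-endian
   32-bit words; k0/k1 are the two halves of the 64-bit Michael key. */
void michael_mic(uint32_t k0, uint32_t k1, const uint32_t *m, size_t n,
                 uint32_t *v0, uint32_t *v1)
{
    uint32_t l = k0, r = k1;
    for (size_t i = 0; i < n; i++) {
        l ^= m[i];
        michael_block(&l, &r);
    }
    *v0 = l;
    *v1 = r;
}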

Per-Packet Key Mixing

The key-mixing function creates a new key for every packet transmitted. It was introduced for two reasons:

  • As a way to protect against RC4 weak key attacks. The recommended defense (discard the first 256 bytes of key stream) was not possible in existing deployed hardware.

  • As a way to incorporate the extra bits of the extended IV.

The approach is to combine the session key, the IV and the source MAC address together with a hash function to produce a mixed key. Including the source MAC address provides added protection against forgery and also separates out the key space of the two communicating devices that share the session key.

Performing a hash operation for every packet to be encrypted or decrypted is a major overhead and hard work for the low-power MAC processor typical in earlier Wi-Fi LAN systems. To ease the burden, the mixing has been divided into two phases. During phase 1, the session key and source address are hashed. The result remains constant during the session. In phase 2, performed for every packet, the IV is mixed with the result of the first phase to produce an encryption key. This key is then used to initialize the RC4 encryption hardware.

Note that even the second part of the key-mixing computation can be performed in advance because the IV increases monotonically. Therefore, a station knows which values of IV will be coming up shortly and could precompute a number of keys in advance. This step avoids the need for a real-time computation when a packet is received. The two phases of key mixing are shown in Figure 11.3. As you can see in this figure, 104 bits of mixed key material are needed to form the total RC4 key of 128 bits when the IV is added.

There is nothing very glamorous about the computations for the key mixing but let's quickly look at the algorithm.

The inputs to the computation are abbreviated:

  • TSC = TKIP Sequence Counter (48 bits)

  • TSCU = Upper 32 bits of TSC (32 bits)

  • TSCL = Lower 16 bits of TSC (16 bits)

  • TA = Transmitter address, the MAC address of the encrypting station (48 bits)

  • TK = The temporal session key (128 bits)

  • P1K = Output of the first phase (80 bits)

  • P2K = Output of the second phase (128 bits); this becomes the RC4Key.

The two phases can be written as the following functions:

  • P1K ← Phase1(TA, TSCU, TK)

  • P2K ← Phase2(P1K, TSCL, TK)

When the system starts up or a new key exchange occurs, the TSC is reset to 0. The system typically computes the value of P1K right away and stores it for use in generating P2K. P1K needs to be recomputed every time TSCU changes, which is every 65,536 packets. There is no reason why the next value of P1K can't be computed in advance so there will be no delay when TSCU actually changes.
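
One way to organize that caching is sketched below; phase1_mix() stands for the Phase1(TA, TSCU, TK) function described earlier and is only declared here, not defined:

#include <stdint.h>
#include <string.h>

/* Declared only: stands for the Phase1(TA, TSCU, TK) mixing function described in the text. */
void phase1_mix(uint16_t p1k[5], const uint8_t tk[16], const uint8_t ta[6], uint32_t tscu);

struct p1k_cache {
    uint32_t tscu;        /* upper 32 bits of the TSC this cache entry was built for */
    uint16_t p1k[5];      /* cached 80-bit phase 1 output */
    int      valid;
};

/* Return the phase 1 output for the given TSC, recomputing it only when
   the upper 32 bits of the TSC change (once every 65,536 packets). */
void get_p1k(struct p1k_cache *c, const uint8_t ta[6], const uint8_t tk[16],
             uint64_t tsc, uint16_t p1k_out[5])
{
    uint32_t tscu = (uint32_t)(tsc >> 16);   /* phase 1 depends only on these bits */
    if (!c->valid || c->tscu != tscu) {
        phase1_mix(c->p1k, tk, ta, tscu);
        c->tscu = tscu;
        c->valid = 1;
    }
    memcpy(p1k_out, c->p1k, sizeof c->p1k);
}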

Substitution Table or S-Box

Both phase 1 and phase 2 require a byte substitution table or S-box. The substitution table is to the computer what logarithm tables used to be to schoolchildren before calculators.[3] You take a value and look up a corresponding value in a table. The calculations have been done in advance to determine the correct values in the table. A typical substitution table for a byte value is 256 bytes long so there is one entry for each value of the byte.

However, key mixing uses 16-bit word values. A full substitution table for a 16-bit value would be 2^16 or 65,536 words long—a total of 128Kbytes! However, this full table is needed only if you want to be able to have a reversible function and create a second table that will “undo” the substitution and get you back where you started. You do not need such a table for hashing functions; in fact, this type of reversal is something you want to prevent. Therefore, the key-mixing algorithm uses a partial substitution table with 512 word entries, which you can think of as two tables, each with 256 words. To make the substitution for a 16-bit word X, we use the upper byte of X as an index into the first table and the lower byte of X as an index into the second table. Then we exclusive OR the two words from the tables to produce a final 16-bit word substitution. This is denoted in the algorithm by the function:

i = S[j]

where i is the result of substituting for j.

The 512 values for the substitution table are listed in the standard. The same substitution tables must be used by everyone for this approach to work.
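
In C terms, the lookup just described might be written as follows (a sketch; the two halves of the table are only declared because the actual 512 values are those listed in the standard):

#include <stdint.h>

/* Two 256-entry halves of the 512-word table; the actual values are those
   listed in the standard and are not reproduced here. */
extern const uint16_t sbox_hi[256];   /* indexed by the upper byte of the input */
extern const uint16_t sbox_lo[256];   /* indexed by the lower byte of the input */

/* 16-bit substitution i = S[j]: look up each byte in its half of the table
   and exclusive OR the two results together. */
uint16_t S(uint16_t j)
{
    return sbox_hi[j >> 8] ^ sbox_lo[j & 0xff];
}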

Phase 1 Computation

The output of phase 1 is only 80 bits long (not 128), but it uses all 128 bits of the temporal key in the computation. The result is computed in an array of five 16-bit words called P1K0, P1K1, P1K2, P1K3, and P1K4. The following terminology is used in the algorithm:

  • TSC1 = bits 16–31 of the TSC (the middle 16 bits)

  • TSC2 = bits 32–47 of the TSC (the upper 16 bits)

  • TAn = nth byte of the encrypting station's MAC address

  • (TA0 = lowest byte, TA5 = highest byte)

  • TKn = nth byte of the temporal key

  • (TK0 = lowest byte, TK15 = highest byte)

The expression X∩Y is used to denote combining two bytes into a 16-bit word so that:

  • X∩Y = 256*X + Y

S[ ] denotes the result from the 16-bit substitution table:

PHASE1_STEP1:
  P1K0 = TSC1
  P1K1 = TSC2
  P1K2 = TA1 ∩ TA0
  P1K3 = TA3 ∩ TA2
  P1K4 = TA5 ∩ TA4

PHASE1_STEP2:
  FOR i = 0 to 3
  BEGIN
     P1K0 = P1K0 + S[ P1K4 ⊕  (TK1 ∩ TK0) ]
     P1K1 = P1K1 + S[ P1K0 ⊕  (TK5 ∩ TK4) ]
     P1K2 = P1K2 + S[ P1K1 ⊕  (TK9 ∩ TK8) ]
     P1K3 = P1K3 + S[ P1K2 ⊕  (TK13 ∩ TK12) ]
     P1K4 = P1K4 + S[ P1K3 ⊕  (TK1 ∩ TK0) ]  +  2*i
     P1K0 = P1K0 + S[ P1K4 ⊕  (TK3 ∩ TK2) ]
     P1K1 = P1K1 + S[ P1K0 ⊕  (TK7 ∩ TK6) ]
     P1K2 = P1K2 + S[ P1K1 ⊕  (TK11 ∩ TK10) ]
     P1K3 = P1K3 + S[ P1K2 ⊕  (TK15 ∩ TK14) ]
     P1K4 = P1K4 + S[ P1K3 ⊕  (TK3 ∩ TK2) ]  +  2*i  +  1
  END

Although this is quite a bit of computation—certainly more than in phase 2—the arithmetic comprises entirely shifts, adds, and exclusive OR operations.

Phase 2 Computation

On the face of it, phase 2 looks more complicated than phase 1. However, although there are more steps, there is no repeating loop in the computation. The result is computed in an array of six 16-bit words called PPK0, PPK1, PPK2, PPK3, PPK4, and PPK5. The following terminology is used in the algorithm:

  • P1Kn = output words from phase 1

  • TSC0 = bits 0 - 15 of the TSC (the lower 16 bits)

  • TKn = nth byte of the temporal key

  • (TK0 = lowest byte, TK15 = highest byte)

The expression X∩Y is used to denote combining two bytes into a 16-bit word so that:

  • X∩Y = 256*X + Y

The expression >>>(word) means that the 16-bit word is rotated one place right.

The expression → (word) means that the 16-bit word is shifted one place right.

S[ ] denotes the result from the 16-bit substitution table.

RC4Keyn means the nth byte of the RC4 key used for encryption.

PHASE2_STEP1:
     PPK0 = P1K0
     PPK1 = P1K1
     PPK2 = P1K2
     PPK3 = P1K3
     PPK4 = P1K4
     PPK5 = P1K4 + TSC0

PHASE2_STEP2:
     PPK0 = PPK0 +  S[ PPK5 ⊕  (TK1 ∩ TK0) ]
     PPK1 = PPK1 +  S[ PPK0 ⊕  (TK3 ∩ TK2) ]
     PPK2 = PPK2 +  S[ PPK1 ⊕  (TK5 ∩ TK4) ]
     PPK3 = PPK3 +  S[ PPK2 ⊕  (TK7 ∩ TK6) ]
     PPK4 = PPK4 +  S[ PPK3 ⊕  (TK9 ∩ TK8) ]
     PPK5 = PPK5 +  S[ PPK4 ⊕  (TK11 ∩ TK10) ]
     PPK0 = PPK0 +  >>>(PPK5 ⊕  (TK13 ∩ TK12))
     PPK1 = PPK1 +  >>>(PPK0 ⊕  (TK15 ∩ TK14))
     PPK2 = PPK2 +  >>>(PPK1)
     PPK3 = PPK3 +  >>>(PPK2)
     PPK4 = PPK4 +  >>>(PPK3)
     PPK5 = PPK5 +  >>>(PPK4)
PHASE2_STEP3:
     RC4Key0 =  UpperByte(TSC0)
     RC4Key1 =  (UpperByte (TSC0) | 0x20) & 0x7F
     RC4Key2 =  LowerByte(TSC0)
     RC4Key3 =  LowerByte( → (PPK5 ⊕  (TK1 ∩ TK0)) )
     RC4Key4 =  LowerByte (PPK0)
     RC4Key5 =  UpperByte (PPK0)
     RC4Key6 =  LowerByte (PPK1)
     RC4Key7 =  UpperByte (PPK1)
     RC4Key8 =  LowerByte (PPK2)
     RC4Key9 =  UpperByte (PPK2)
     RC4Key10 = LowerByte (PPK3)
     RC4Key11 = UpperByte (PPK3)
     RC4Key12 = LowerByte (PPK4)
     RC4Key13 = UpperByte (PPK4)
     RC4Key14 = LowerByte (PPK5)
     RC4Key15 = UpperByte (PPK5)

The final output of phase 2 is an array of 16 bytes containing the RC4 key to be used in encryption. This can be loaded into the legacy WEP encryption engine prior to processing the MPDU for transmission. The first 3 bytes of this key are transmitted as the WEP IV field (24 bits) and contain the lower 16 bits of the TKIP IV value (the TSC). The second byte of the WEP IV is a repeat of the first byte, except that bit 5 is forced to 1 and bit 7 is forced to 0 (see the computation of RC4Key1 above). Forcing these bits prevents generation of the major class of weak keys. This byte is ignored on receipt.

Summary

This chapter describes how the designers started with the limitations of existing WEP systems and devised a whole new security protocol to fit. TKIP is a masterpiece of retro-engineering and provides real security in a way that WEP never could. All the major weaknesses of WEP have been addressed, including weak key attacks, lack of tamper detection, lack of replay protection, and others. Furthermore, TKIP has been designed by some of the most eminent experts in the field and confidence in the integrity of the solution is high.

Still, there is no doubt that TKIP is a compromise. The necessary simplicity of the Michael integrity protection means that network disruptive countermeasures are necessary. Also, although the weak key vulnerability has been mitigated by the key-mixing approach, the fundamental weakness in the first bytes of the RC4 key stream is still there and might in future be compromised in some way. It seems unlikely now, but it could happen.

Assuming no cracks show up, it seems likely that TKIP will be around for a long time and that new systems will also provide support, not just old WEP systems. However, there are a number of reasons why completely new users might want to consider the use of AES-based security, as described in the next chapter.



[1] Or 104 bits when using the Wi-Fi 128-bit WEP mode.

[2] With WEP the encryption key did change with each packet because it contained the IV. However, the secret part of the key (excluding the IV) was constant.

[3] This is true for the S-box of RC-4, which is generated by computation. However, unlike log-tables, S-boxes for other algorithms may contain values that are not mathematically generated.
