9.2 Block Ciphers

The strongest and most respected modern ciphers are block ciphers. While stream ciphers encrypt bit by bit, block ciphers encrypt fixed-sized blocks of bits: the cipher takes a block of data and encrypts it into an equal-sized block of data. The old Data Encryption Standard worked with 64-bit blocks of data. The newer Advanced Encryption Standard works on 128-bit blocks. To apply a block cipher, we must break the data into block-sized chunks and add padding as needed to match the data to the block size (FIGURE 9.2).

FIGURE 9.2 A block cipher encrypts data in fixed-sized blocks.

In a sense, block ciphers are like the Caesar cipher examined in Section 7.2. We substitute one letter for another, no more and no less, based on the key. We could in theory create a “codebook” for a given key that lists all plaintext values and the corresponding ciphertext values. The book would be impractically huge. For example, if we have a 64-bit block size, the book would have 2⁶⁴, or about 1.8 × 10¹⁹, entries. We also would need a different book for each key.
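
To make the codebook idea concrete, here is a short Python sketch. The toy_encrypt function and the key are invented for illustration; a trivial 8-bit “cipher” stands in for a real one so the complete codebook stays small enough to build:

    def toy_encrypt(key: int, pt: int) -> int:
        # A deliberately trivial 8-bit "cipher" (xor with the key),
        # just to make the codebook idea concrete.
        return pt ^ key

    key = 0x5A
    # The complete "codebook" for this key: all 256 plaintext values
    # and their corresponding ciphertext values.
    codebook = {pt: toy_encrypt(key, pt) for pt in range(256)}

    # A real 64-bit block cipher would need 2**64 rows (about 1.8e19),
    # and a separate codebook for every possible key.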

Building a Block Cipher

Encryption is a systematic scrambling of data. Most block ciphers encrypt by repeatedly applying a scrambling function to the plaintext. Each repetition is called a round.

Before the cipher scrambles the data, it takes the key and produces a key schedule. In some cases, the schedule consists of small blocks of the key that the cipher applies to successive rounds. Often, however, the algorithm performs key expansion to yield a schedule that is larger than the original key.

Typically, each round uses a successive part of the key schedule. The rounds repeat until the key schedule is exhausted. Here is a general description of the block encryption procedure:

  • ■   Generate the key schedule.

  • ■   Divide the key schedule into subsections, one per round.

  • ■   For each subsection, perform a round:

    • For the first round, take the plaintext as the input text; for the remaining rounds, take the output of the previous round as the input text.

    • Take the next unused subsection of the key schedule as the key input.

    • Scramble the input text using permutations and substitutions as specified in the encryption algorithm.

The round scrambles the data by applying arithmetic and logical operations, particularly the xor operation. A round often uses both permutations and substitutions to scramble the data. The substitutions may be controlled by special data structures called S-boxes. These serve as lookup tables to provide the bits to substitute for particular input bits, given a particular key.

The decryption procedure takes the key and the ciphertext and unscrambles the data. Each round tries to undo the work of a corresponding round in the encryption process. In some algorithms, decryption simply applies the key schedule, the permutations, and the S-boxes in the reverse order, as the sketch below illustrates.
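
To make this concrete, here is a minimal, purely illustrative Python sketch of a Feistel-style block cipher, the structure DES used. Everything in it is an assumption chosen for illustration: the 8-byte block, the eight rounds, and the use of a hash to stand in for a real cipher’s permutations and S-boxes. It is not a secure cipher; it only shows how a key schedule, rounds, and reversed-schedule decryption fit together.

    import hashlib

    HALF = 4      # half of an illustrative 64-bit (8-byte) block
    ROUNDS = 8    # illustrative round count

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def key_schedule(key: bytes):
        # Key expansion (illustrative): derive one subkey per round
        # by hashing the key together with the round number.
        return [hashlib.sha256(key + bytes([r])).digest()[:HALF]
                for r in range(ROUNDS)]

    def round_function(half: bytes, subkey: bytes) -> bytes:
        # xor plus a hash, standing in for a real cipher's
        # permutations and S-boxes.
        return hashlib.sha256(xor(half, subkey)).digest()[:HALF]

    def encrypt_block(block: bytes, subkeys) -> bytes:
        left, right = block[:HALF], block[HALF:]
        for sk in subkeys:                   # one round per subkey
            left, right = right, xor(left, round_function(right, sk))
        return left + right

    def decrypt_block(block: bytes, subkeys) -> bytes:
        left, right = block[:HALF], block[HALF:]
        for sk in reversed(subkeys):         # same rounds, reversed schedule
            left, right = xor(right, round_function(left, sk)), left
        return left + right

    sched = key_schedule(b"my secret key")
    ciphertext = encrypt_block(b"8 bytes!", sched)
    assert decrypt_block(ciphertext, sched) == b"8 bytes!"

Notice that decryption reuses the same round function; only the order of the subkeys is reversed. That symmetry is the main attraction of the Feistel structure.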

This procedure highlights three important features of block ciphers:

  1. It takes time to change keys, because the procedure must generate a new key schedule for each key.

  2. The number of rounds reflects a trade-off between speed and security. More rounds may scramble the data more thoroughly, but each round takes time to execute.

  3. Key sizes, block sizes, and the number of rounds performed are built into the procedure. We can’t arbitrarily change any of these without redesigning the procedure.

To highlight the third point, compare the flexibility of a stream cipher like RC4 to that of the Advanced Encryption Standard. In RC4, the key may contain as few—or as many—bytes as desired, although fewer key bytes provide dramatically inferior encryption. AES supports exactly three key sizes: 128, 192, and 256 bits. The AES procedure performs 10 rounds for a 128-bit key, 12 rounds for a 192-bit key, and 14 rounds for a 256-bit key.

With RC4, we can encrypt a single bit, gigabytes, or more. AES encrypts data in fixed-sized blocks of exactly 128 bits. If we encrypt less than 128 bits, we must pad the data out to 128 bits, and AES ciphertext is always a multiple of 128 bits. If we omit part of a ciphertext block, we won’t be able to decrypt it correctly.
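
Padding schemes vary from product to product; one widely used choice is PKCS#7-style padding, in which every padding byte records how many padding bytes were added. A minimal Python sketch, assuming AES’s 16-byte block:

    BLOCK = 16  # AES block size in bytes

    def pad(data: bytes) -> bytes:
        # Append n copies of the byte value n, where 1 <= n <= BLOCK.
        n = BLOCK - (len(data) % BLOCK)
        return data + bytes([n]) * n

    def unpad(data: bytes) -> bytes:
        # Strip the padding recorded in the final byte, checking that
        # all padding bytes are consistent.
        n = data[-1]
        if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
            raise ValueError("bad padding")
        return data[:-n]

Note that pad() always adds at least one byte, and adds a full block of padding when the data is already block-aligned; otherwise unpad() could not tell padding from data.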

Despite the inflexibility, AES provides far better encryption than any known stream cipher. Researchers have found numerous shortcomings in RC4. Researchers have studied AES at least as thoroughly, though over a shorter time period, and have found no significant weaknesses.

The Effect of Ciphertext Errors

We noted how ciphertext errors affect stream ciphers. (See Section 7.3.3.) If we change a single bit of ciphertext, then the corresponding bit will change in the plaintext.

Block ciphers behave differently. There is no one-to-one relationship between individual bits in the plaintext and the ciphertext. The block cipher behaves more like a one-way hash; if we change a single bit in the plaintext, the resulting ciphertext will change dramatically.

Thus, if we encrypt with a block cipher, and an error or attack changes a single bit in some ciphertext, it scrambles the entire plaintext data block. In a well-designed block cipher, a change to one input bit will affect bits throughout the output block. This is true when encrypting or decrypting.

Consider, on the other hand, what happens if we rearrange entire blocks of ciphertext. If we rearrange ciphertext bits in a stream cipher, every bit we change will decrypt incorrectly. Stream cipher decryption is sensitive to the order of the data; each bit in the ciphertext must line up with its corresponding bit in the key stream.

In block ciphers, however, we encrypt and decrypt each block independently. Thus, if we change the order of blocks in the ciphertext, we will recover the reordered plaintext as long as we move whole blocks of ciphertext. If we rearrange ciphertext within a block, we lose that entire block when decrypted; as long as whole blocks remain intact, each decrypts cleanly into plaintext. The sketch below demonstrates both behaviors.
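
We can demonstrate both behaviors in a few lines of Python, assuming the third-party pyca/cryptography package and applying AES to each block independently (electronic codebook, or ECB, mode); the key and messages are invented for the example:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    cipher = Cipher(algorithms.AES(key), modes.ECB())

    pt = b"FIRST 16 BYTES!!" + b"SECOND 16 BYTES!"   # two 128-bit blocks
    enc = cipher.encryptor()
    ct = enc.update(pt) + enc.finalize()

    # Flip one ciphertext bit: the damaged block decrypts to garbage,
    # but the other block survives.
    damaged = bytes([ct[0] ^ 0x01]) + ct[1:]
    dec = cipher.decryptor()
    print(dec.update(damaged) + dec.finalize())

    # Swap whole ciphertext blocks: both decrypt cleanly, but in the
    # wrong order.
    swapped = ct[16:] + ct[:16]
    dec = cipher.decryptor()
    print(dec.update(swapped) + dec.finalize())      # b'SECOND 16 BYTES!FIRST 16 BYTES!!'

Real systems rarely apply a block cipher this way to bulk data; the block-handling “tricks” mentioned later in the chapter exist partly to prevent this kind of block-swapping attack.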

9.2.1 Evolution of DES and AES

We introduced the Data Encryption Standard and the controversies surrounding its key size. (See Section 7.3.) DES was a block cipher that worked on 64-bit blocks using a 56-bit key. The algorithm evolved from “Lucifer,” an encryption algorithm developed by IBM. Lucifer implemented rounds, permutations, and S-boxes in a form called a “Feistel cipher,” and it used a 128-bit key.

DES and Lucifer

In the 1970s, the NSA was the only organization in the U.S. government with cryptographic expertise, so it was naturally given the job of assessing the algorithm. NSA recommended two changes to Lucifer before making it a national standard. One change was to the S-boxes. A second change was to reduce the key size to 56 bits. The changes were made before the algorithm was approved as a U.S. standard.

As noted earlier, critics immediately decried the 56-bit key as too small. Researchers started cracking DES keys successfully in the 1990s.

The changes to the S-boxes also caused concern. The NSA instructed IBM not to explain why the changes were made. Some observers feared that the changes would give the NSA a secret shortcut to crack DES encryption: a way to recover keys without performing a brute force attack that tried every possible DES key. Any such shortcut would render DES ineffective.

These worries never came true. No shortcut was ever uncovered. Although cryptanalysts have found attacks that work against DES, none of these attacks are significantly better than a traditional brute force attack.

Looking back, the principal weakness in DES has been its key size. The design concepts embodied in DES have stood the test of time. DES has served as a good, working example for future cryptographic designers.

DES was designed to be efficient in hardware; aside from the small key size, this hardware orientation may be its only significant shortcoming. The algorithm required so much low-level bit manipulation that some experts believed—or at least hoped—that DES software implementations would be too slow to be practical. This proved untrue. Software implementations flourished despite the inefficiencies.

Triple DES

To reduce the risk of the 56-bit encryption key, some organizations tried applying DES several times with different keys. The American National Standards Institute (ANSI) developed a series of standards for using DES. These standards were adopted by the U.S. banking industry. One of these standards included a technique that encrypted data with two or three 56-bit keys instead of a single key (FIGURE 9.3). This was called Triple DES.

FIGURE 9.3 Triple DES encryption.

In Triple DES, each DES operation may use a separate 56-bit key. The technique does not simply apply encryption or decryption three times in a row. Instead, the middle step performs the opposite function of the outer steps. The swapped step allows users to operate Triple DES with one, two, or three different 56-bit DES keys.

Here is how we use different numbers of keys with Triple DES (a code sketch follows the list):

  • ■   One key: Use the same key in all steps. The first two steps invert each other, and the final step performs the encryption.

  • ■   Two keys: Use the first key for the first and last steps, and use the second key in the middle step.

  • ■   Three keys: Use a different key for each step.
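
The encrypt-decrypt-encrypt (EDE) arrangement is easy to express in code. In this sketch, des_encrypt_block and des_decrypt_block are hypothetical stand-ins for a single-DES implementation; the point is the key arrangement, not the DES internals:

    def triple_des_encrypt(block, k1, k2, k3):
        # EDE: encrypt with k1, decrypt with k2, encrypt with k3.
        #   One key:    k1 == k2 == k3 (reduces to single DES)
        #   Two keys:   k1 == k3, with k2 different
        #   Three keys: all different
        return des_encrypt_block(k3,
                   des_decrypt_block(k2,
                       des_encrypt_block(k1, block)))

    def triple_des_decrypt(block, k1, k2, k3):
        # Inverse: decrypt with k3, encrypt with k2, decrypt with k1.
        return des_decrypt_block(k1,
                   des_encrypt_block(k2,
                       des_decrypt_block(k3, block)))

With all three keys equal, the first two operations cancel and the result is ordinary single DES; this let Triple DES hardware interoperate with older single-DES equipment.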

When using three keys, Triple DES requires a 168-bit random key. This may sound significantly stronger than AES with its 128-bit key. In fact, there are “meet-in-the-middle” attacks on Triple DES that reduce the strength of the 168-bit key to closer to 112 bits. If Triple DES really worked at the full strength of its key, cracking a ciphertext would take 2¹⁶⁸ trials; the more efficient attack takes only about 2¹¹² trials. When using two keys, Triple DES requires a 112-bit random key. Clever attacks, however, can recover the key in about 2⁸⁰ trials.

Triple DES is far less efficient than single DES because it performs the DES operation three times. The extra cycles improve security over the original 56-bit key, but the improvement costs a great deal of computing time.

The Development of AES

As DES began to show its age in the 1990s, the National Institute of Standards and Technology (NIST) began the process of replacing it. Unlike the closed, secretive process that produced DES, NIST announced an open competition to produce its replacement. The process for selecting the replacement, called the Advanced Encryption Standard, or AES, had several important features:

  • ■   Anyone, anywhere, could submit an AES candidate algorithm. Authorship was not limited to U.S. citizens.

  • ■   AES candidates had to be free of patents or other intellectual property restrictions.

  • ■   The design rationale for every AES candidate would be made public, including the rationale for choosing built-in constants, for designing S-boxes, and so on.

  • ■   All “unclassified” evaluations of AES candidates would be made public.

  • ■   The assessments and relative rankings of AES candidates would be made public.

  • ■   The rationale for choosing one candidate over the others to be the new standard would be made public.

Twenty-one candidates were submitted by the June 1998 deadline. Six candidates were eliminated for not meeting requirements. Ten more were eliminated during the first round of evaluation.

AES Finalists

The five finalists were announced in 1999:

  1. Rijndael, submitted by Joan Daemen of Proton World International and Vincent Rijmen of the Katholieke Universiteit Leuven, in Belgium. NIST ultimately chose Rijndael to become the new AES.

  2. MARS, submitted by IBM.

  3. RC6, submitted by RSA Laboratories.

  4. Serpent, submitted by Ross Anderson of the University of Cambridge, United Kingdom, Eli Biham of Technion, Israel, and Lars Knudsen of the University of California, San Diego.

  5. Twofish, submitted by Bruce Schneier, John Kelsey, and Niels Ferguson of Counterpane Internet Security; David Wagner of University of California, Berkeley; Doug Whiting of Hi/Fn; and Chris Hall of Princeton University.

All five finalists were block ciphers designed around the execution of multiple rounds. The final review involved three categories: flexibility, implementation efficiency, and security. Flexibility and efficiency can be compared fairly objectively. Each algorithm was assessed on how easily it could adapt to larger block and key sizes. Although DES was explicitly designed to work efficiently in hardware, the AES candidates were assessed for their efficiency in both hardware and software implementations.

Security assessment posed more of a challenge. It is hard to come up with an objective and appropriate comparison for encryption algorithms. The challenge was to determine which differences in the algorithms might significantly affect their security. One approach was to assess “reduced round” versions of each algorithm. All finalists implemented rounds, and this assessment sought weaknesses in variants that had fewer rounds than were really required.

For example, Rijndael normally performs 10 rounds when encrypting with a 128-bit key. To search for weaknesses, analysts reduced the number of rounds and looked at the result. With Rijndael, the algorithm did not show weaknesses until at least three rounds were omitted. This was not the widest safety margin in terms of rounds, but was judged to provide a good trade-off between security and efficiency.

All five AES finalists have been judged to be excellent block ciphers. Some products and applications use one or another AES finalist and probably provide good security. However, the only algorithm recognized as a national standard by the U.S. government is AES itself, which is a variant of Rijndael.

9.2.2 The RC4 Story

Ron Rivest developed the RC4 algorithm in the late 1980s. (See Section 7.3.2.) It is a stream cipher that has been used in numerous products. It is still used occasionally by older web servers and older wireless networks.

Rivest was a cofounder of RSA Data Security, and the company marketed the algorithm. When RC4 was introduced in 1987, the company offered it as a trade secret. A vendor could pay a license fee to RSA and use the algorithm in a product but was forbidden to share the RC4 source code with anyone.

In the late 1700s, the U.S. Founding Fathers favored the notion of patent protection for inventions. A patent requires inventors to publish details of their invention. In exchange, the inventor gets a monopoly on its use for roughly 20 years. This promotes science, education, and knowledge, because the invention’s principles are made public. It also produces an Open Design so that others can analyze the invention and verify its behavior. The temporary monopoly gives the inventor some time to commercialize the discovery and pay for the costs of development.

A trade secret, on the other hand, is never published. In theory, an inventor could keep a permanent monopoly on an invention by keeping it a trade secret. The inventor can sue anyone who shares and publishes the secret for breach of contract. On the other hand, if someone else makes the same discovery independently, the original inventor has no protection.

Export Restrictions

The trade secret approach proved to be a smart move at the time. Network software vendors realized that their products needed security and that encryption was the only effective mechanism. However, the U.S. International Traffic in Arms Regulations (ITAR) classified encryption as a weapon of war. Encryption systems, whether implemented in hardware or software, required an export license from the State Department. Naturally, the State Department turned to the NSA for advice on such licenses. In practice, the State Department only allowed exports for use by U.S. companies overseas.

In the early 1990s, software vendors started negotiating with the NSA to establish some type of exportable encryption. NSA eventually adopted an unwritten rule allowing the use of RC4 and an earlier block cipher, Rivest’s Cipher 2 (RC2).

The unwritten rule included two additional restrictions. First, if the product used secret encryption keys, the secret part was limited to 40 bits or less. Second, the algorithms had to remain secret or at least unpublished.

RC4 Leaking, Then Cracking

The RC4 algorithm was leaked to the internet in 1994. The person responsible was never identified. This produced an odd situation. RSA did not officially acknowledge that the leaked algorithm was RC4, so vendors still could tell the NSA that the algorithm was “secret.” Today, however, the crypto community accepts the leaked algorithm as being RC4.

The U.S. government still requires licensing of crypto products for export, but the 40-bit key requirement was effectively dropped in 1999. This was fortunate; by 1997, researchers like Ian Goldberg were cracking 40-bit keys in university computing labs. The revised regulations also permitted other crypto algorithms, just as RC4’s weaknesses fully emerged.

Before RC4 was leaked, RSA Data Security claimed that its own analyses confirmed the algorithm’s cryptographic strength. However, cracks appeared soon after the algorithm was leaked. Within a year, researchers had found sets of weak keys, all of which generated similar key streams.

In an ideal stream cipher, the entropy in the secret key is spread out among the bits in the stream. For example, we might use a 128-bit key to generate a million-bit key stream. Ideally, we should need to see about 8000 bits of the key stream (1,000,000 bits spread across 128 bits of key entropy) before we start to detect any bias. In RC4, biases appear in the first bytes of the key stream.

In 2001, several researchers published strategies to attack RC4. When RC4 became the centerpiece of wireless encryption, researchers soon developed practical procedures to crack the encryption. Today, product developers avoid RC4.

Lessons Learned

This story illustrates some important points.

  • ■   We can’t depend on the algorithm’s owner to find its flaws. RSA Security claimed to have performed some analyses that provided favorable results. It was not until third parties analyzed RC4 that problems were found. Notably, RSA Security acted responsibly in the face of these results. Although some vendors intentionally have suppressed information about weaknesses in proprietary crypto algorithms and even threatened legal action against published critiques, there is no evidence that RSA Security ever tried to suppress information about RC4’s weaknesses.

  • ■   It is harder to find flaws if we limit the number of people investigating the algorithm. Cryptology is a relatively new field in the public sphere. We can’t rely on any individual expert—or handful of experts—to find the flaws in a particular algorithm. The best way to find flaws is to open the search to as many researchers as possible.

  • ■   It is hard to keep an algorithm secret. It is not clear how RC4 leaked. Someone may have transcribed a description published by the owner, or someone may have reverse engineered a program containing RC4. In 1994, RC4 was being used in Netscape’s web browsers. Someone might have picked apart the instructions in the browser that encrypted data and derived the RC4 algorithm from that listing.

It took a few years, but the crypto community found and exploited vulnerabilities in this “secret” algorithm. It is hard to keep a crypto algorithm secret, especially if it resides in software that most people have installed on their computer.

9.2.3 Qualities of Good Encryption Algorithms

Since the development of DES in the 1970s, designers have discussed and reviewed numerous cipher designs. Some have stood the test of time in one form or another. Others, like DES and RC4, have been dropped from newer developments because researchers found problems with them. This is the benefit of Open Design; a larger community of analysts may study algorithms and, we hope, identify the weak ones before attackers take advantage of them.

Large, complex products always pose a challenge to buyers, whether they are automobiles or crypto algorithms. In the case of crypto algorithms, however, we can look for certain qualities that greatly reduce our risks. Here are six qualities of good encryption algorithms:

  1. Explicitly designed for encryption

  2. Security not dependent on its secrecy

  3. Available for analysis

  4. Subjected to analysis

  5. No practical weaknesses

  6. Completed formal cryptographic evaluation

Several of these qualities reflect the principle of Open Design.

Explicitly Designed for Encryption

We constructed a key stream algorithm out of a one-way hash. (See Figure 7.11 in Section 7.3.2.) In one sense, this seems like a clever thing to do; the one-way hash output seems random. However, this isn’t really what the one-way hash is designed to do. A series of hashes might have unexpected statistical properties. Moreover, the hash feedback is itself vulnerable to a known plaintext attack.
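
Here is a minimal sketch of that hash-feedback key stream, assuming SHA-256 as the one-way hash (the particular hash is an illustrative choice, not the book’s). It shows why the construction is vulnerable: each key stream block is also the feedback state, so known plaintext that reveals one block lets an attacker generate every later block without the key.

    import hashlib

    def hash_feedback_keystream(key: bytes, nblocks: int) -> bytes:
        # Key stream built by repeatedly hashing the previous output.
        # Insecure: the emitted blocks ARE the internal state.
        stream = b""
        state = key
        for _ in range(nblocks):
            state = hashlib.sha256(state).digest()  # next state = hash(previous)
            stream += state                         # the state itself is emitted
        return stream

    # Known-plaintext attack: ciphertext XOR plaintext reveals one key
    # stream block; hashing it forward reproduces all subsequent blocks.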

We do not build a strong encryption algorithm by reusing an existing algorithm for a different purpose. Encryption puts its own requirements on how the algorithm works and what types of patterns are acceptable. Occasionally, we can use an encryption algorithm for a different purpose, such as producing a one-way hash. Even in those cases, we must be careful not to use the algorithm in a way that undercuts its effectiveness.

Security Not Dependent on Its Secrecy

Open Design, Kerckhoffs’s law, and Shannon’s maxim all argue against keeping a crypto algorithm secret. We build secrecy systems to rely on an independent piece of secret information: the key. If we keep the key secret, then attackers can’t efficiently recover the protected information.

If the algorithm’s security relies on its secrecy, then everyone who looks at the algorithm becomes a member of its cryptonet. This is impractical in countless ways: Not only does it restrict security analysis, but it also makes the algorithm hard to implement and deploy securely.

Available for Analysis

It is hard to assess the strength of a cryptographic algorithm. Some weak algorithms fall quickly to simple attacks, while others like RC4 survive for a period of time before problems arise. In any case, we can’t analyze an algorithm unless we can examine it and see what it does. The first step is to make the algorithm available for analysis. If we prevent or delay that analysis, we risk building products with vulnerable encryption.

We can’t rely on the algorithm’s owners to analyze the algorithm for us. The owners have to pay for the analysis, and they will only pay as much as necessary for some good news. They don’t need to be thorough and, in fact, it’s not clear what a thorough analysis would entail. If we make the algorithm available for public examination, then a broader community of experts has the opportunity to pass judgment. Potential customers who are interested in the algorithm could arrange their own analyses, which are much more likely to be unbiased than those arranged by the owners.

Subjected to Analysis

It’s not enough to publish an algorithm. Most experts are busy people who aren’t going to pick up arbitrary algorithms and analyze them for pleasure. If experts have analyzed the algorithm and published positive results, that gives us a basis for confidence in its behavior.

If recognized experts have been publishing analyses about an algorithm, we have a better reason to trust that algorithm. These analyses should be in peer-reviewed publications where the analyses themselves are studied by experts. If the consensus of these experts supports the algorithm’s use, then we can trust it.

No Practical Weaknesses

Cryptographic experts are going to find weaknesses in just about any cryptographic algorithm. This is a natural result: As we use these algorithms in new ways, they become vulnerable to new attacks.

Ideally, the attacks are not relevant to the way we intend to use the algorithm. For example, there are chosen plaintext attacks against DES, but they require enormous amounts of data to be effective. The attack doesn’t really apply to typical DES implementations. On the other hand, a theoretical weakness in RC4 was cleverly converted by researchers in Darmstadt, Germany, into a real-time attack on commercial wireless network products.

Cryptographic Evaluation

Our checklist for choosing a file encryption program noted the importance of cryptographic product evaluation. (See Section 7.4.4.) This remains true when using a block cipher. There are “tricks” to using block ciphers that we will discuss further, and the evaluation helps ensure that the developer uses the right tricks to keep the encryption secure.

Unfortunately, no evaluation process is 100 percent foolproof. In fact, different evaluation regimens focus on different issues, and one process may tolerate security flaws that another might reject. For example, the FIPS-140 process focused primarily on correct implementation of cryptographic mechanisms. It did not always detect and reject even egregious security flaws, such as the authentication weakness found in several encrypted USB products in late 2009.

Choosing an Encryption Algorithm

Here are the general recommendations:

  • ■   Use AES if at all possible. AES was subjected to incredible public scrutiny before its selection, which suggests that it will stand the test of time. It is also a U.S. national standard, which provides some legal protection if flaws arise in the algorithm itself.

  • ■   If using a certified algorithm, use evaluated cryptographic products. The evaluation process ensures that a trained third party has reviewed the implementation to confirm that the cryptography works correctly.

  • ■   Use a well-known, respected algorithm if AES is not available. The best alternatives today are the AES candidates. An older alternative is Triple DES.

  • ■   Check recent news and research results in the crypto community regarding the algorithm you want to use. Emerging problems with crypto algorithms have become significant news items in the technical press.

  • ■   Do not use “private label” algorithms that have not been published and reviewed by the cryptographic community. There is no way to ensure the effectiveness of such algorithms. History is riddled with examples of weak proprietary algorithms.
