Domain 5
Cryptography

Cryptography is one of the most common and effective tools the security practitioner has to meet the security objectives of an organization. This chapter explains the key foundation concepts the security practitioner will need to understand and apply the fundamental concepts of cryptography.

Topics

  • The following topics are addressed in this chapter:
    • Fundamental concepts of cryptography
      • Evaluation of algorithms
      • Hashing
      • Salting
      • Symmetric/asymmetric cryptography
      • Digital signatures
      • Non-repudiation
    • Requirements for cryptography
    • Secure protocols
    • Cryptographic systems
      • Fundamental key management concepts
      • Public key infrastructure
      • Administration and validation
      • Web of Trust
      • Implementation of secure protocols

Objectives

The security practitioner is expected to participate in the following areas related to cryptography:

  • Understand and apply fundamental concepts of cryptography
  • Understand requirements for cryptography
  • Understand and support secure protocols
  • Operate and implement cryptographic systems

Encryption Concepts

The whole of the cryptographic universe revolves around a few key concepts and definitions. Mastering these is fundamental to gaining the understanding necessary to obtain the SSCP certification. A successful candidate must understand how cryptography plugs into the overall framework of confidentiality, integrity, and availability. Confidentiality is the most obvious use for cryptography. A message or data stream that is encrypted with even the most basic of techniques is certainly more confidential than one left alone in plaintext. Integrity is the next great area of contribution for cryptography. Hashes and cryptographic hashes are often used to verify the integrity of a message, as we will learn shortly.

However, if cryptography provides so many benefits, why is it that everything is not encrypted at all times? The answer, unfortunately, is availability. Availability is adversely impacted by cryptography through the introduction of extra risk from the loss, distribution, or mismanagement of cryptographic keys. Data that are encrypted must at some point be decrypted, so losing a decryption key becomes a real problem with the passage of time. In addition, key distribution (the method of getting a key from where it was generated to where it needs to be used) adds another layer of risk and complexity should a key not be transported in time for its use. On top of all of this, cryptography can add a measure of processing overhead and lag time to a data stream or message decryption that may make the data obsolete or unusable. A successful implementer and operator of cryptographic solutions must keep the balance of these three critical aspects in mind at all times to effectively exercise the strengths and minimize the weaknesses involved with this domain.

Key Concepts and Definitions

Before exploring encryption in more depth, it is important to understand the key concepts and definitions:

  • Key clustering is when different encryption keys generate the same ciphertext from the same plaintext message.
  • Synchronous is a term used to refer to when each encryption or decryption request is performed immediately.
  • Asynchronous refers to when encrypt/decrypt requests are processed in queues. A key benefit of asynchronous cryptography is utilization of hardware devices and multiprocessor systems for cryptographic acceleration.
  • A hash function is a one-way mathematical operation that reduces a message or data file into a smaller fixed length output, or hash value. By comparing the hash value computed by the sender with the hash value computed by the receiver over the original file, unauthorized changes to the file can be detected, assuming they both used the same hash function. Ideally, a given input should always produce the same hash value, and no two different inputs should ever produce the same hash value.
  • Digital signatures provide authentication of a sender and integrity of a sender’s message. A message is input into a hash function. Then the hash value is encrypted using the private key of the sender. The result of these two steps yields a digital signature. The receiver can verify the digital signature by decrypting the hash value using the signer’s public key, then perform the same hash computation over the message, and then compare the hash values for an exact match. If the hash values are the same then the signature is valid.
  • Asymmetric is a term used in cryptography in which two different but mathematically related keys are used where one key is used to encrypt and another is used to decrypt. This term is most commonly used in reference to public key infrastructure (PKI).
  • A digital certificate is an electronic document that contains the name of an organization or individual, the business address, the digital signature of the certificate authority issuing the certificate, the certificate holder’s public key, a serial number, and the expiration date. The certificate is used to identify the certificate holder when conducting electronic transactions.
  • Certificate authority (CA) is an entity trusted by one or more users as an authority in a network that issues, revokes, and manages digital certificates.
  • Registration authority (RA) performs certificate registration services on behalf of a CA. The RA, a single purpose server, is responsible for the accuracy of the information contained in a certificate request. The RA is also expected to perform user validation before issuing a certificate request.
  • Plaintext or cleartext is the message in its natural format. Plaintext is readable to anyone and is extremely vulnerable from a confidentiality perspective.
  • Ciphertext or cryptogram is the altered form of a plaintext message, so as to be unreadable for anyone except the intended recipients. An attacker seeing ciphertext would be unable to easily read the message or to determine its content.
  • The cryptosystem represents the entire cryptographic operation. This includes the algorithm, the key, and key management functions.
  • Encryption is the process of converting the message from its plaintext to ciphertext. It is also referred to as enciphering. The two terms are used interchangeably in the literature and have similar meanings.
  • Decryption is the reverse process from encryption. It is the process of converting a ciphertext message into plaintext through the use of the cryptographic algorithm and key that was used to do the original encryption. This term is also used interchangeably with the term decipher.
  • The key or cryptovariable is the input that controls the operation of the cryptographic algorithm. It determines the behavior of the algorithm and permits the reliable encryption and decryption of the message. There are both secret and public keys used in cryptographic algorithms.
  • Nonrepudiation is a security service by which evidence is maintained so that the sender and the recipient of data cannot deny having participated in the communication. Individually, these services are referred to as “nonrepudiation of origin” and “nonrepudiation of receipt.”
  • An algorithm is a mathematical function that is used in the encryption and decryption processes. It may be quite simple or extremely complex.
  • Cryptanalysis is the study of techniques for attempting to defeat cryptographic techniques and, more generally, information security services.
  • Cryptology is the science that deals with hidden, disguised, or encrypted communications. It embraces communications security and communications intelligence.
  • Collision occurs when a hash function generates the same output for different inputs.
  • Key space represents the total number of possible values of keys in a cryptographic algorithm or other security measure, such as a password. For example, a 20-bit key would have a key space of 1,048,576.
  • Work factor represents the time and effort required to break a protective measure.
  • An initialization vector (IV) is a non-secret binary vector used as the initializing input to an algorithm for the encryption of a plaintext block sequence. It increases security by introducing additional cryptographic variance and serves to synchronize cryptographic equipment.
  • Encoding is the action of changing a message into another format through the use of a code. This is often done by taking a plaintext message and converting it into a format that can be transmitted via radio or some other medium, and is usually used for message integrity instead of secrecy. An example would be to convert a message to Morse code.
  • Decoding is the reverse process from encoding—converting the encoded message back into its plaintext format.
  • Transposition or permutation is the process of reordering the plaintext to hide the message.
  • Substitution is the process of exchanging one letter or byte for another.
  • The SP-network is the process described by Claude Shannon used in most block ciphers to increase their strength. SP stands for substitution and permutation (transposition), and most block ciphers do a series of repeated substitutions and permutations to add confusion and diffusion to the encryption process. An SP-network uses a series of S-boxes to handle the substitutions of the blocks of data. Breaking a plaintext block into a subset of smaller S-boxes makes it easier to handle the computations.
  • Confusion is provided by mixing (changing) the key values used during the repeated rounds of encryption. When the key is modified for each round, it provides added complexity that the attacker would encounter.
  • Diffusion is provided by mixing up the location of the plaintext throughout the ciphertext. Through transposition, the location of the first character of the plaintext may change several times during the encryption process, and this makes the cryptanalysis process much more difficult.
  • The avalanche effect is an important design property of cryptographic algorithms: a minor change in either the key or the plaintext should produce a significant change in the resulting ciphertext. This is also a feature of a strong hashing algorithm.
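The key space definition above is easy to check numerically. A minimal Python sketch (the function name is just for illustration):

```python
# Key space: the total number of possible keys for a given key length in bits.
def key_space(bits: int) -> int:
    return 2 ** bits

print(key_space(20))    # 1048576, the 20-bit example from the definition above
print(key_space(40))    # about 1.1 trillion possible keys
print(key_space(128))   # astronomically large
```

Each additional bit doubles the key space, which is why small increases in key length make exhaustive search dramatically harder.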

Foundational Concepts

The information security practitioner must also be familiar with the fundamental concepts and methods related to cryptography. Methods and concepts range from different ways of using cryptographic technologies to encrypt information to different standard encryption systems used in industry.

High Work Factor

The average amount of effort or work required to break an encryption system, that is, to decrypt a message without having the entire encryption key or to find a secret key given all or part of a ciphertext, is referred to as the work factor of the cryptographic system. The work factor is measured in units such as hours of computing time on one or more given computer systems, or the cost in dollars of breaking the encryption.

If the work factor is sufficiently high, the encryption system is considered to be practically or economically unbreakable, and is sometimes referred to as “economically infeasible” to break. Communication systems using encryption schemes which are economically infeasible to break are generally considered secure.

The work factor required to break a given cryptographic system can vary over time due to advancements in technology, such as improvements in the speed and capacity of computers. For example, while a 40-bit secret key encryption scheme can currently be broken by a fast personal computer in less than a year or by a room full of personal computers in a short amount of time, future advances in computer technology will likely substantially reduce this work factor.
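To make the work factor concrete, here is a rough back-of-the-envelope calculation. The keys-per-second rate below is an invented assumption for illustration, not a benchmark of any real machine:

```python
def avg_brute_force_seconds(key_bits: int, keys_per_second: float) -> float:
    """Average time for exhaustive key search (half the key space, on average)."""
    return (2 ** key_bits / 2) / keys_per_second

# Hypothetical machine testing 100 million keys per second.
rate = 100_000_000
print(f"40-bit key:  {avg_brute_force_seconds(40, rate) / 3600:.1f} hours on average")
print(f"128-bit key: {avg_brute_force_seconds(128, rate) / (3600 * 24 * 365.25):.2e} years on average")
```

The same arithmetic shows why advances in hardware erode the work factor over time: doubling the attacker's speed halves the expected search time, but adding a single bit to the key restores it.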

Stream-Based Ciphers

There are two primary methods of encrypting data: the stream and block methods. When a cryptosystem performs its encryption on a bit-by-bit basis, it is called a stream-based cipher. This is the method most commonly associated with streaming applications, such as voice or video transmission. Wired Equivalent Privacy (WEP) uses a stream cipher, RC4, but is not considered secure due to a number of weaknesses that expose the encryption key to an attacker, a weak key size, and other well-known vulnerabilities in the WEP implementation. Newer wireless cryptography implements block ciphers such as the Advanced Encryption Standard (AES), which provides stronger security. The cryptographic operation for a stream-based cipher is to mix the plaintext with a keystream that is generated by the cryptosystem. The mixing operation is usually an exclusive-or (XOR) operation—a very fast mathematical operation.

As seen in Table 5-1, the plaintext is XORed with a seemingly random keystream to generate ciphertext. It is seemingly random because the generation of the keystream is usually controlled by the key. If the key could not produce the same keystream for the purposes of decryption of the ciphertext, then it would be impossible to ever decrypt the message.

Table 5-1: Plaintext being XORed to generate ciphertext

Plaintext | Encryption Keystream | Ciphertext
A | XOR randomly generated keystream | $
0101 0001 | 0111 0011 | 0010 0010

The exclusive-or process is a key part of many cryptographic algorithms. It is a simple binary operation that adds two values together. If the two values are the same, 0 + 0 or 1 + 1, then the output is always a 0; however, if the two values are different, 1 + 0 or 0 + 1, then the output is a 1.

From the previous example, the following operation is the result:

Input plaintext + keystream = output of XOR

or

0101 0001 + 0111 0011 = 0010 0010
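The arithmetic above can be checked directly in Python. Note that XORing the ciphertext with the same keystream recovers the plaintext, which is why the same operation serves for both encryption and decryption:

```python
plaintext = 0b0101_0001
keystream = 0b0111_0011

ciphertext = plaintext ^ keystream   # encrypt: XOR plaintext with keystream
recovered  = ciphertext ^ keystream  # decrypt: XOR ciphertext with same keystream

print(f"{ciphertext:08b}")  # 00100010, matching the result above
print(f"{recovered:08b}")   # 01010001, the original plaintext
```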

A stream-based cipher relies primarily on substitution—the substitution of one character or bit for another in a manner governed by the cryptosystem and controlled by the cipher key. For a stream-based cipher to operate securely, it is necessary to follow certain rules for the operation and implementation of the cipher:

  1. The keystream should not be linearly related to the cryptovariable. Knowledge of the keystream output value does not disclose the cryptovariable (encryption/decryption key).
  2. It should be statistically unpredictable. Given n successive bits from the keystream, it is not possible to predict the n + 1st bit with a probability different from 1/2.
  3. It should be statistically unbiased. There should be as many 0s as 1s, as many 00s as 01s, 10s, 11s, etc.
  4. There should be long periods without repetition.
  5. It should have functional complexity. Each keystream bit should depend on most or all of the cryptovariable bits.

The keystream must be strong enough to not be easily guessed or predictable. In time, the keystream will repeat, and that period (or length of the repeating segment of the keystream) must be long enough to be difficult to calculate. If a keystream is too short, then it is susceptible to frequency analysis or other language-specific attacks.

The implementation of the stream-based cipher is probably the most important factor in the strength of the cipher—this applies to nearly every crypto product and, in fact, to security overall. Some important factors in the implementation are to ensure that the key management processes are secure and cannot be readily compromised or intercepted by an attacker.

Block Ciphers

A block cipher operates on blocks or chunks of text. As plaintext is fed into the cryptosystem, it is divided into blocks of a preset size—often a multiple of the ASCII character size—64, 128, 192 bits, etc. Most block ciphers use a combination of substitution and transposition to perform their operations. This makes a block cipher relatively stronger than most stream-based ciphers, but more computationally intensive and usually more expensive to implement. This is also why many stream-based ciphers are implemented in hardware, whereas a block-based cipher is implemented in software.

Initialization Vectors (IV)

Because messages may be of any length, and because encrypting the same plaintext using the same key always produces the same ciphertext as described below, several “modes of operation” have been invented that allow block ciphers to provide confidentiality for messages of arbitrary length. (See Table 5-2 for block cipher mode descriptions.) The use of these modes introduces unpredictability into the keystream, so that even if the same key is used to encrypt the same message, the ciphertext will be different each time.

Table 5-2: Basic block cipher modes

Mode | How It Works | Usage
Electronic Code Book (ECB) | In ECB mode, each block is encrypted independently, allowing randomly accessed files to be encrypted and still accessed without having to process the file in a linear encryption fashion. | Any file with non-repeating blocks (less than 64 bits in length), such as transmission of a DES key or short executables.
Cipher Block Chaining (CBC) | In CBC mode, the result of encrypting one block of data is fed back into the process to encrypt the next block of data. | Data at rest, such as stand-alone encrypted files on users’ hard drives.
Cipher Feedback (CFB) | In CFB mode, the cipher is used as a keystream generator rather than for confidentiality. Each block of keystream comes from encrypting the previous block of ciphertext. | Retired due to the delay imposed by encrypting each block of keystream before proceeding.
Output Feedback (OFB) | In OFB mode, the keystream is generated independently of the message. | Retired due to avalanche problems. Was used in pay-per-view applications.
Counter (CTR) | Uses the formula Encrypt(Base + N) as a keystream generator, where Base is a starting 64-bit number and N is a simple incrementing function. | Used where high-speed or random-access encryption is needed. Examples are WPA2 and the Content Scrambling System.

Source: Tiller, J.S., “Message authentication,” Information Security Management Handbook, 5th ed., Tipton, H.F. and Krause, M., Eds., Auerbach Publications, New York, 2004. With permission.

To illustrate why an initialization vector (IV) is needed when using block ciphers, consider how block ciphers behave in their various modes of operation. The simplest mode is the Electronic Code Book (ECB) mode, where the plaintext is divided into blocks and each block is encrypted separately. In the Cipher-Block Chaining (CBC) mode, by contrast, each block of plaintext is XORed with the previous ciphertext block before being encrypted. In the ECB mode, the same plaintext will encrypt to the same ciphertext for the same key. This reveals patterns in the code.

In the CBC mode, each block is XORed with the result of the encryption of the previous block. This hides patterns. However, two similar plaintexts that have been encrypted using the same key will yield the same ciphertext up to the block containing the first difference. This problem can be avoided by adding an IV block, which starts the keystream randomization process for the first real block, to the plaintext. This will make each ciphertext unique, even when similar plaintext is encrypted with the same key in the CBC mode. There is no need for the IV to be secret, in most cases, but it is important that it is never reused with the same key. Reusing an IV leaks some information about the first block of plaintext, and about any common prefix shared by the two messages. Therefore, the IV must be randomly generated at encryption time.
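The pattern-hiding behavior described above can be sketched with a deliberately toy "block cipher" (a plain XOR with the key stands in for a real cipher such as AES, purely so the mode structure is visible; do not use this for actual encryption):

```python
import os

BLOCK = 8  # toy 8-byte blocks

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    # NOT a real cipher; a stand-in so the mode structure is visible.
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # ECB: every block is encrypted independently.
    return b"".join(toy_encrypt_block(plaintext[i:i + BLOCK], key)
                    for i in range(0, len(plaintext), BLOCK))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    # CBC: each plaintext block is XORed with the previous ciphertext block
    # (the IV for the first block) before encryption.
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(b ^ p for b, p in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_encrypt_block(mixed, key)
        out.append(prev)
    return b"".join(out)

key = b"K" * BLOCK
iv = os.urandom(BLOCK)        # random, non-secret, never reused with the same key
msg = b"SAMEDATA" * 2         # two identical plaintext blocks

ecb = ecb_encrypt(msg, key)
cbc = cbc_encrypt(msg, key, iv)
print(ecb[:BLOCK] == ecb[BLOCK:])  # True:  ECB leaks the repetition
print(cbc[:BLOCK] == cbc[BLOCK:])  # False: CBC chaining hides it
```

Running the sketch shows the two identical plaintext blocks producing identical ciphertext blocks under ECB, but different ciphertext blocks under CBC, exactly the pattern leak the IV and chaining are designed to prevent.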

Key Length

Key length is another important aspect of key management to consider when generating cryptographic keys. Key length is the size of a key, usually measured in bits or bytes, that a cryptographic algorithm uses in ciphering or deciphering protected information. As discussed earlier, keys are used to control how an algorithm operates so that only the correct key can decipher the information. The resistance to successful attack against the key and the algorithm, aspects of their cryptographic security, is of concern when choosing key lengths. An algorithm’s key length is distinct from its cryptographic security.

Cryptographic security is a logarithmic measure of the fastest known computational attack on the algorithm, also measured in bits. The security of an algorithm cannot exceed its key length. Therefore, it is possible to have a very long key that still provides low security. As an example, three-key (56 bits per key) Triple DES (i.e., Triple Data Encryption Algorithm, aka TDEA) can have a key length of 168 bits but, due to the meet-in-the-middle attack, the effective security that it provides is at most 112 bits. However, most symmetric algorithms are designed to have security equal to their key length.

A natural inclination is to use the longest key possible, which may make the key more difficult to break. However, the longer the key, the more computationally expensive the encrypting and decrypting process can be. The goal is to make breaking the key cost more (in terms of effort, time, and resources) than the worth of the information or mission being protected and, if possible, not a penny more (to do more would not be economically sound).

Block Size

The block size of a block cipher, like key length, has a direct bearing on the security of the key. Block ciphers produce a fixed-length block of ciphertext. However, since the data being encrypted are an arbitrary number of bytes, the final portion of plaintext may not fill a complete block. This is solved by padding the plaintext up to the block size before encryption and unpadding after decryption. The padding algorithm is to calculate the smallest nonzero number of bytes, say n, which must be suffixed to the plaintext to bring it up to a multiple of the block size.1

Evaluation of Algorithms

Many encryption algorithms are available for information security. The two main categories of encryption algorithms are symmetric (private) and asymmetric (public) key encryption. Symmetric key encryption (also known as secret key encryption) uses only one key to both encrypt and decrypt data. The key is distributed before transmission between entities begins. The size of the key will determine how strong it is.

Asymmetric key encryption (also known as public key encryption) is used to solve the problem of key distribution. Asymmetric encryption uses two keys: a public key, which is known to the public and is used for encryption, and a private key, which is known only to the user and is used for decryption.

Some common encryption techniques include the following:

  • DES (Data Encryption Standard) was the first encryption standard recommended by NIST. DES has a 64-bit key size (of which 56 bits are effective, the remainder being parity bits) and a 64-bit block size. Due to a history of successful attacks, DES is now considered to be an insecure block cipher.
  • 3DES is an enhancement of DES with a 64-bit block size and a 192-bit key size. This uses an encryption method similar to that of DES but applies it three times to increase the encryption level and the average safe time. 3DES is slower than other block cipher methods.
  • RC2 uses a 64-bit block cipher and a variable key size that ranges from 8 to 128 bits. Unfortunately, RC2 can be exploited using a related-key attack.
  • Blowfish is another block cipher with a 64-bit block. It uses a variable-length key that ranges from 32 bits to 448 bits. The default is 128 bits. Blowfish is unpatented, license-free, and free.
  • AES (Advanced Encryption Standard) is a block cipher with a variable key length of 128, 192, or 256 bits. The default is 256 bits. AES encrypts 128-bit data blocks in 10, 12, or 14 rounds, depending on the key size. AES is fast and flexible and has been tested for many security applications.
  • RC6 is a block cipher derived from RC5. RC6 has a block size of 128 bits and key sizes of 128, 192, or 256 bits.

Encryption algorithm characteristics that could be considered for the development of metrics include:

  • Type
  • Functions
  • Key size
  • Rounds
  • Complexity
  • Attack
  • Strength

The security practitioner can use some or all of the items mentioned above to engage in the evaluation of algorithms for deployment and use within the organization. Best known methods of attack include brute force, factoring, linear, and differential cryptanalysis (qualified with whether known or chosen plaintext is provided). Strength is an assessment of the algorithm, based on key length, algorithm complexity, and the best methods of attack.

As a first step towards any evaluation process, the security practitioner would need to understand and document the business requirements that will be used to create the baseline functionality profile for the deployment and use of the algorithm to be evaluated. Once this has been done, then the evaluation process can proceed.

Hashing

A cryptographic hash function is a hash function which is considered practically impossible to invert, that is, to recreate the input data from its hash value alone. The input data is often called the message, and the hash value is often called the message digest or simply the digest.

The ideal cryptographic hash function has four main properties:

  • It is easy to compute the hash value for any given message.
  • It is infeasible to generate a message that has a given hash.
  • It is infeasible to modify a message without changing the hash.
  • It is infeasible to find two different messages with the same hash.

Cryptographic hash functions have many information security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption.

The general idea of a cryptographic hash function can be summarized with the following formula:

Variable data input + hashing algorithm = fixed bit size data output (the digest)

The security practitioner needs to understand the use of a cryptographic hash with regards to the formula, and how it allows for the validation of data integrity. Figure 5-1 illustrates this point.


Figure 5-1: Cryptographic hashing function

Source: https://en.wikipedia.org/wiki/File:Cryptographic_Hash_Function.svg
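The formula above, variable-length input in and fixed-size digest out, can be demonstrated with Python's standard hashlib module. The last two lines also show the avalanche effect discussed earlier, where a small change in input produces an entirely different digest:

```python
import hashlib

# Inputs of very different lengths always yield a 256-bit (64 hex char) digest.
for message in (b"short", b"a much longer message " * 100):
    digest = hashlib.sha256(message).hexdigest()
    print(len(digest), digest[:16] + "...")

# A tiny change in input produces an unrelated-looking digest.
print(hashlib.sha256(b"message A").hexdigest())
print(hashlib.sha256(b"message B").hexdigest())
```

The same input always yields the same digest, which is what lets a receiver recompute the hash over a received file and compare it with the sender's value to validate integrity.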

Specific Hashes

Up until now we have been discussing the concepts of hashing at a high level, focusing on how hashing works theoretically. In the following sections, we will turn our attention to some specific examples of hashing algorithms, their use, and implementation details.

Message Digest 2, 4, and 5

Message Digest (MD) 2, 4, and 5 are hash functions used to create message digests for digital signatures. MD2 was first created in 1989 and forms a 128-bit message digest using a 128-bit block through 18 rounds of operation. Although it is considered to be older than is ideal, MD2 is still used in certain PKI environments where it is used in the generation of digital certificates. MD4 was created in 1990 and also generates a 128-bit message digest using a 512-bit block, but does so through only three rounds of operation. MD4 is a popular choice among file sharing and synchronization applications, although there are several well-published compromises that severely limit the nonrepudiation qualities of an MD4 hash. MD5 uses a 512-bit block and generates a 128-bit message digest as well, but does so over four rounds of operation along with several mathematical tweaks including a unique additive constant that is used during each round to provide an extra level of nonrepudiation assurance. That being said, there are numerous easy and well-published exploits available for creating hash collisions in an MD5-enabled environment.

Secure Hash Algorithm 0, 1, and 2

Secure Hash Algorithm (SHA) is a collection of hash functions created by the U.S. government starting in the early mid-1990s. SHA-0 was the first of these hash standards, but was quickly removed and replaced by SHA-1 in 1995 due to some rather fundamental flaws in the original SHA-0. SHA-1 uses a block size of 512 bits to create a message digest of 160 bits through 80 rounds of operation. SHA-1 does not provide a great deal of protection against attacks such as a birthday attack, and so the SHA-2 family of hash functions was created. With SHA-2, the often employed naming convention is to use the size of the created message digest to describe the particular SHA-2 implementation. In SHA-2, the possible message digests are 224, 256, 384, and 512 bits in length. SHA-224 and SHA-256 use a block length of 512 bits while SHA-384 and SHA-512 use a block length of 1024 bits.

HAVAL

HAVAL was created in the mid-1990s as a highly flexible and configurable hash function. With HAVAL, the implementer can create hashes of 128, 160, 192, 224, and 256 bits in length, using a fixed block size of 128 bits and 3, 4, or 5 rounds of operation.

RIPEMD-160

Research and Development in Advanced Communications Technologies in Europe (RACE) Integrity Primitives Evaluation Message Digest (RIPEMD) is a hash function that produces 160-bit message digests using a 512-bit block size. RIPEMD was produced by a collaborative effort of European cryptographers and is not subject to any patent restrictions.

Attacks on Hashing Algorithms and Message Authentication Codes

There are two primary ways to attack hash functions: through brute-force attacks and cryptanalysis. Over the past few years, research has been done on attacks on various hashing algorithms, such as MD5 and SHA-1. Both are susceptible to cryptographic attacks. A brute-force attack does not exploit any weakness in the hashing algorithm itself; instead, the attacker tries inputs until he can reconstruct the original message from the hash value (defeating the one-way property of a hash function), find another message with the same hash value (defeating second preimage resistance), or find any pair of messages with the same hash value (defeating collision resistance). Van Oorschot and Wiener developed a machine design that could find a collision on a 128-bit hash in about 24 days.2

Cryptanalysis is the art and science of defeating cryptographic systems and gaining access to encrypted messages even when the keys are unknown. Side-channel attacks are examples of cryptanalysis; these attacks target not the algorithms themselves but the implementation of the algorithms. Cryptanalysis is also responsible for the development of rainbow tables, which are used to greatly reduce the computational time and power needed to break a cipher at the expense of storage. A freely available password cracking program called Cain and Abel comes with rainbow tables preloaded.3

Rainbow tables are pre-computed tables or lists used in cracking password hashes. Tables are designed for specific algorithms such as MD5 and SHA-1 and can be purchased on the open market. Salted hashes provide a defense against rainbow tables. In cryptographic terms, “salt” is made of random bits and is an input to the one-way hash function with target plaintext as the only other input. The salt is stored with the resulting hash so hashing will use the same salt and get the same results. As the rainbow table did not include the salt when it was created, its values will never match the salted values.

The Birthday Paradox

The birthday paradox has been described in textbooks on probability for several years. It is a surprising mathematical condition that indicates the ease of finding two people with the same birthday in a group of people. If one considers that there are 365 possible birthdays (not including leap years and assuming that birthdays are spread evenly across all possible dates), then one would expect to need to have roughly 183 people together to have a 50% probability that two of those people share the same birthday. In fact, once there are more than 23 people together, there is a greater than 50% probability that two of them share the same birthday. Consider that in a group of 23 people, there are 253 different pairings (n(n − 1)/2). Once 100 people are together, the chance of two of them having the same birthday is greater than 99.99%.
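The figures above are easy to verify by computing the probability that all n birthdays are distinct (assuming, as the text does, 365 equally likely birthdays and no leap years):

```python
def p_shared_birthday(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (days - i) / days  # each new person must avoid all prior birthdays
    return 1.0 - p_all_distinct

print(f"{p_shared_birthday(23):.4f}")   # just over 0.5, as stated above
print(f"{p_shared_birthday(100):.6f}")  # greater than 0.9999
```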

So why is a discussion about birthdays important in the middle of hashing attacks? Because finding a collision between two messages’ hash values may be much easier than intuition suggests.

It is very similar to the statistics of finding two people with the same birthday. One of the considerations for evaluating the strength of a hash algorithm must be its resistance to collisions. For a 160-bit hash, finding a message that matches one specific hash value requires on the order of 2^160 attempts, but finding any two messages that share a hash value requires only about 2^(160/2) = 2^80 attempts; which estimate applies depends on the level of collision resistance needed.

This approach is relevant because a hash is a representation of the message and not the message itself. Obviously, the attacker does not want to find an identical message; he wants to find out how to (1) change the message contents to what he wants it to read or (2) cast some doubt on the authenticity of the original message by demonstrating that another message has the same value as the original. The hashing algorithm must be resistant to a birthday-type attack that would allow the attacker to feasibly accomplish his goals.

Salting

In cryptography, a salt is random data that is used as an additional input to a one-way function that hashes a password or passphrase. The primary function of salts is to defend against dictionary attacks and against pre-computed rainbow table attacks. A new salt is randomly generated for each password. In a typical setting, the salt and the password are concatenated and processed with a cryptographic hash function, and the resulting output (but not the original password) is stored with the salt in a database. Hashing allows for later authentication while defending against exposure of the plaintext password in the event that the database is somehow compromised.

Salts also combat the use of rainbow tables for cracking passwords. A rainbow table is a large list of pre-computed hashes for commonly used passwords. For a password file without salts, an attacker can go through each entry and look up the hashed password in the rainbow table. Salts protect against the use of rainbow tables as they, in effect, extend the length and potentially the complexity of the password. If the rainbow tables do not have passwords matching the length (e.g., an 8-byte password and 2-byte salt is effectively a 10-byte password) and complexity (a non-alphanumeric salt increases the complexity of strictly alphanumeric passwords) of the salted password, then the password will not be found. Even if a matching value is found, the salt must be stripped from it before the underlying password can be used.
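The scheme just described can be sketched with only the Python standard library. SHA-256 is used here for brevity; a production system should prefer a deliberately slow scheme such as PBKDF2, bcrypt, or scrypt:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Hash a password with a fresh random 16-byte salt.
    The salt is stored alongside the hash; it is not kept secret."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("correct horse")
print(verify_password("correct horse", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))    # False
```

Because each user receives a different salt, two users with the same password store different hashes, and a single pre-computed table cannot be amortized across the whole password file.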

Encryption and Decryption

Encryption is the process of transforming information so it is unintelligible to anyone but the intended recipient. Decryption is the process of transforming encrypted information so that it is intelligible again. A cryptographic algorithm, also called a cipher, is a mathematical function used for encryption or decryption. In most cases, two related functions are employed, one for encryption and the other for decryption.

With most modern cryptography, the ability to keep encrypted information secret is based not on the cryptographic algorithm, which is widely known, but on a number called a key that must be used with the algorithm to produce an encrypted result or to decrypt previously encrypted information. Decryption with the correct key is simple. Decryption without the correct key is very difficult, and in some cases impossible for all practical purposes.

Symmetric Cryptography

There are two primary forms of cryptography in use today: symmetric and asymmetric. Symmetric algorithms operate with a single cryptographic key that is used for both encryption and decryption of the message. For this reason, symmetric cryptography is often called single-, same-, or shared-key encryption. It can also be called secret- or private-key encryption because the key factor in secure use of a symmetric algorithm is keeping the cryptographic key secret.

Some of the most difficult challenges of symmetric key ciphers are the problems of key management. Because the encryption and decryption processes both require the same key, the secure distribution of the key to both the sender (or encryptor) of the message and the receiver (or decryptor) is a key factor in the secure implementation of a symmetric key system. The cryptographic key cannot be sent in the same channel (or transmission medium) as the data, so out-of-band distribution must be considered. Out of band means using a different channel to transmit the keys, such as courier, fax, phone, or some other method (Figure 5-2).

c05f002.tif

Figure 5-2: Out-of-band key distribution.

The advantages of symmetric key algorithms are that they are usually very fast, secure, and cheap. Several products that use symmetric algorithms are available on the Internet at no cost to the user.

The disadvantages include the problems of key management, as mentioned earlier, but also the limitation that a symmetric algorithm does not provide many benefits beyond confidentiality, unlike most asymmetric algorithms, which also provide the ability to establish nonrepudiation, message integrity, and access control. Symmetric algorithms can provide a form of message integrity—the message will not decrypt if changed. Symmetric algorithms also can provide a measure of access control—without the key, the file cannot be decrypted.

This limitation is best described by using a physical security example. If 10 people have a copy of the key to the server room, it can be difficult to know who entered that room at 10 p.m. yesterday. There is limited access control in that only those people with a key are able to enter; however, it is unknown which one of those 10 actually entered. The same is true of a symmetric algorithm: if the key to a secret file is shared among two or more people, then there is no way of knowing who last accessed the encrypted file. It would also be possible for a person to change the file and allege that it was changed by someone else. This is most critical when the cryptosystem is used for important documents such as electronic contracts. If a person who receives a file can change the document and allege that the altered version was the true copy he received, repudiation problems arise.

Algorithms and systems such as the Caesar cipher, the Spartan Scytale, and the Enigma machine are all examples of symmetric algorithms. The receiver needed to use the same key to perform the decryption process as he had used during the encryption process. The following sections cover many of the modern symmetric algorithms.

The Data Encryption Standard (DES)

A Data Encryption Standard (DES) key is 64 bits in length; however, every eighth bit (used for parity) is ignored. Therefore, the effective length of the DES key is 56 bits. Because every bit has a possible value of either 1 or 0, the effective key space for the DES key is 2^56, giving a total of about 7.2 × 10^16 possible keys. The modes of operation discussed next are used by a variety of other block ciphers, not just DES. Originally there were four modes of DES accepted for use by the U.S. federal government (NIST); in later years, the CTR mode was also accepted.

Basic Block Cipher Modes

The following basic block cipher modes operate in a block structure.4

  • Electronic Codebook Mode—The electronic codebook (ECB) mode is the most basic block cipher mode (Figure 5-3). It is called codebook because it is similar to having a large codebook containing every piece of 64-bit plaintext input and all possible 64-bit ciphertext outputs. In a manual sense, it would be the same as looking up the input in a book and finding what the output would be depending on which key was used. When a plaintext input is received by ECB, it operates on that block independently and produces the ciphertext output. If the input is longer than 64 bits and contains identical 64-bit blocks, the corresponding output blocks will also be identical. Such regularity would make cryptanalysis simple. For that reason, ECB is only used for very short messages (less than 64 bits in length), such as transmission of a key. As with all Feistel ciphers, the decryption process is the reverse of the encryption process.
c05f003.tif

Figure 5-3: Electronic codebook is a basic mode used by block ciphers.
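ECB's pattern leakage is easy to demonstrate. In the sketch below, a keyed XOR stands in for the real block cipher; it is not a real cipher, merely an invertible per-block function that makes the mode's behavior visible:

```python
BLOCK = 8  # DES-sized 64-bit blocks

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Stand-in for DES: XOR with the key. NOT a real cipher.
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Each block is encrypted independently: no chaining, no IV.
    return b"".join(
        toy_block_encrypt(plaintext[i:i + BLOCK], key)
        for i in range(0, len(plaintext), BLOCK)
    )

key = bytes(range(BLOCK))
ct = ecb_encrypt(b"SAMEDATA" * 2, key)  # two identical plaintext blocks
print(ct[:BLOCK] == ct[BLOCK:])         # True: identical ciphertext blocks
```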

  • Cipher Block Chaining Mode—The cipher block chaining (CBC) mode is stronger than ECB in that each input block will produce a different output—even if the input blocks are identical. This is accomplished by introducing two new factors in the encryption process—an IV and a chaining function that XORs each input with the previous ciphertext. (Note: Without the IV, the chaining process applied to the same messages would create the same ciphertext.) The IV is a randomly chosen value that is mixed with the first block of plaintext. This acts just like a seed in a stream-based cipher. The sender and the receiver must know the IV so that the message can be decrypted later. The function of CBC can be seen in Figure 5-4.

The initial input block is XORed with the IV, and the result of that process is encrypted to produce the first block of ciphertext. This first ciphertext block is then XORed with the next input plaintext block. This is the chaining process, which ensures that even if the input blocks are the same, the resulting outputs will be different.

c05f004.tif

Figure 5-4: Cipher block chaining mode.
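The chaining described above can be sketched with the same toy block cipher (a keyed XOR standing in for DES); the point is the IV and the ciphertext feedback, not the cipher itself:

```python
import os

BLOCK = 8

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, key))  # stand-in, not a real cipher

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        block = toy_block_encrypt(xor(plaintext[i:i + BLOCK], prev), key)
        out.append(block)
        prev = block  # chaining: this ciphertext feeds the next block
    return b"".join(out)

def cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        # The XOR stand-in is its own inverse, so "decrypt" reuses it.
        out.append(xor(toy_block_encrypt(block, key), prev))
        prev = block
    return b"".join(out)

key, iv = bytes(range(BLOCK)), os.urandom(BLOCK)
ct = cbc_encrypt(b"SAMEDATA" * 2, key, iv)
print(ct[:BLOCK] != ct[BLOCK:])                      # True: repeats are hidden
print(cbc_decrypt(ct, key, iv) == b"SAMEDATA" * 2)   # True
```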

The Stream Modes of DES

The following modes of DES operate as a stream; even though DES is a block mode cipher, these modes attempt to make DES operate as if it were a stream mode algorithm. A block-based cipher is subject to the problems of latency or delay in processing. This makes them unsuitable for many applications where simultaneous transmission of the data is desired. In these modes, DES tries to simulate a stream to be more versatile and provide support for stream-based applications.

  • Cipher Feedback Mode—In cipher feedback (CFB) mode, the input is separated into individual segments, the size of which can be 1 bit, 8 bits, 64 bits, or 128 bits (the four sub-modes of CFB)—usually 8 bits, because that is the size of one character (Figure 5-5). When the encryption process starts, the IV is chosen and loaded into a shift register. It is then run through the encryption algorithm. The first 8 bits that come from the algorithm are then XORed with the first 8 bits of the plaintext (the first segment). Each 8-bit segment is then transmitted to the receiver and also fed back into the shift register. The shift register contents are then encrypted again to generate the keystream to be XORed with the next plaintext segment. This process continues until the end of the input. One of the drawbacks, however, is that if a bit is corrupted or altered, all of the data from that point onward will be damaged. It is interesting to note that because of the nature of the operation in CFB, the decryption process uses the encryption operation rather than operating in reverse as CBC does.
c05f005.tif

Figure 5-5: Cipher feedback mode of DES.
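A sketch of the 8-bit-segment CFB loop. Because CFB only ever runs the block cipher in its encryption direction, a keyed one-way function (here SHA-256 truncated to the block size, a purely hypothetical stand-in) is enough to show the mechanics:

```python
import hashlib

BLOCK = 8  # 64-bit shift register
SEG = 1    # 8-bit segments, one character at a time

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Keyed pseudorandom stand-in for DES encryption; illustration only.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cfb_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    register, out = iv, bytearray()
    for i in range(0, len(plaintext), SEG):
        keystream = toy_block_encrypt(register, key)[:SEG]
        ct = bytes(p ^ k for p, k in zip(plaintext[i:i + SEG], keystream))
        out += ct
        register = register[SEG:] + ct  # ciphertext fed back into the register
    return bytes(out)

def cfb_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    register, out = iv, bytearray()
    for i in range(0, len(ciphertext), SEG):
        # Decryption uses the ENCRYPTION operation, as noted in the text.
        keystream = toy_block_encrypt(register, key)[:SEG]
        out += bytes(c ^ k for c, k in zip(ciphertext[i:i + SEG], keystream))
        register = register[SEG:] + ciphertext[i:i + SEG]
    return bytes(out)

key, iv = b"secret k", b"\x01" * BLOCK
ct = cfb_encrypt(b"streamed text", key, iv)
print(cfb_decrypt(ct, key, iv))  # b'streamed text'
```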

  • Output Feedback Mode—The output feedback (OFB) mode is very similar in operation to CFB except that instead of using the ciphertext result of the XOR operation to feed back into the shift register for the ongoing keystream, it feeds the encrypted keystream itself back into the shift register to create the next portion of the keystream (Figure 5-6).
c05f006.tif

Figure 5-6: Output feedback mode of DES.

Because the keystream and message data are completely independent (the keystream itself is chained, but there is no chaining of the ciphertext), it is now possible to generate the entire keystream in advance and store it for later use. However, this does pose some storage complications, especially if it were to be used in a high-speed link.
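Because the OFB keystream depends only on the key and IV, it can indeed be generated ahead of time. A sketch with the same hypothetical keyed one-way function standing in for the block cipher:

```python
import hashlib

BLOCK = 8

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Keyed pseudorandom stand-in for the block cipher; illustration only.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ofb_keystream(key: bytes, iv: bytes, nbytes: int) -> bytes:
    """Generate the keystream in advance: the encrypted output itself
    is fed back into the register, independent of any message."""
    register, stream = iv, bytearray()
    while len(stream) < nbytes:
        register = toy_block_encrypt(register, key)
        stream += register
    return bytes(stream[:nbytes])

key, iv = b"secret k", b"\x02" * BLOCK
msg = b"precomputed keystream"
ks = ofb_keystream(key, iv, len(msg))        # can be stored before msg exists
ct = bytes(m ^ k for m, k in zip(msg, ks))
pt = bytes(c ^ k for c, k in zip(ct, ks))    # the same XOR decrypts
print(pt)  # b'precomputed keystream'
```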

  • Counter Mode—The counter (CTR) mode is used in high-speed applications such as IPSec and ATM (Figure 5-7). In this mode, a counter—a 64-bit random data block—is used as the first IV. A requirement of CTR is that the counter must be different for every block of plaintext, so for each subsequent block, the counter is incremented by 1. The counter is then encrypted just as in OFB, and the result is used as a keystream and XORed with the plaintext. Because the keystream is independent from the message, it is possible to process several blocks of data at the same time, thus speeding up the throughput of the algorithm. Again, because of the characteristics of the algorithm, the encryption process is used at both ends of the process—there is no need to implement the decryption process.
    c05f007.tif

    Figure 5-7: Counter mode is used in high-speed applications such as IPSec and ATM.
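A CTR sketch with the same hypothetical stand-in function: each block's keystream depends only on the counter value, which is why blocks can be processed in parallel and why one function serves both directions:

```python
import hashlib

BLOCK = 8

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Keyed pseudorandom stand-in for the block cipher; illustration only.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ctr_crypt(data: bytes, key: bytes, counter_start: int) -> bytes:
    """Encryption and decryption are the same operation in CTR mode."""
    out = bytearray()
    for n, i in enumerate(range(0, len(data), BLOCK)):
        counter = (counter_start + n).to_bytes(BLOCK, "big")  # unique per block
        keystream = toy_block_encrypt(counter, key)
        out += bytes(d ^ k for d, k in zip(data[i:i + BLOCK], keystream))
    return bytes(out)

key = b"secret k"
ct = ctr_crypt(b"high-speed traffic", key, counter_start=42)
print(ctr_crypt(ct, key, counter_start=42))  # b'high-speed traffic'
```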

Advantages and Disadvantages of DES

DES is a strong, fast algorithm that has endured the test of time; however, it is not suitable for use for very confidential data due to the increase in computing power over the years. Initially, DES was considered unbreakable, and early attempts to break a DES message were unrealistic. (A computer running at one attempt per millisecond would still take more than 1000 years to try all possible keys.) However, DES is susceptible to a brute-force attack. Because the key is only 56 bits long, the key may be determined by trying all possible keys against the ciphertext until the true plaintext is recovered. The Electronic Frontier Foundation (www.eff.org) demonstrated this several years ago. However, it should be noted that they did the simplest form of attack—a known plaintext attack; they tried all possible keys against a ciphertext knowing what they were looking for (they knew the plaintext). If they did not know the plaintext (if they did not know what they were looking for), the attack would have been significantly more difficult.

Regardless, DES can be deciphered using today’s computing power and enough stubborn persistence. There have also been criticisms of the structure of the DES algorithm: the design criteria of the S-boxes used in the encryption and decryption operations were kept secret, which led to claims that they might contain hidden trapdoors.

Double DES

The primary complaint about DES was that the key was too short. This made a known plaintext brute-force attack possible. One of the first alternatives considered to create a stronger version of DES was to double the encryption process, as shown in Figure 5-8. The first DES operation created an intermediate ciphertext, which will be referred to as “m” for discussion purposes.

c05f008.tif

Figure 5-8: Operations within double DES cryptosystems.

This intermediate ciphertext, m, was then re-encrypted using a second 56-bit DES key for greater cryptographic strength. Initially there was a lot of discussion as to whether the ciphertext created by the second DES operation would be the same as the ciphertext that would have been created by a third DES key.

Ciphertext created by double DES is the result of the plaintext encrypted with the first 56-bit DES key and then re-encrypted with the second 56-bit DES key.

Would the result of two operations be the same as the result of one operation using a different key? It was eventually shown that it would not (DES encryption does not form a mathematical group), but a more serious vulnerability in double DES emerged. The intention of double DES was to create an algorithm equivalent in strength to a 112-bit key (two 56-bit keys). Unfortunately, this was not the case because of the “meet in the middle” attack, which is why the lifespan of double DES was very short.

Meet in the Middle

The most effective attack against double DES was, just like the successful attacks on single DES, based on doing a brute-force attack against known plaintext5 (Figure 5-9). The attacker would encrypt the plaintext using all possible keys and create a table containing all possible results. This intermediate cipher is referred to as “m” for this discussion. This would mean encrypting using all 2^56 possible keys. The table would then be sorted according to the values of m. The attacker would then decrypt the ciphertext using all possible keys until he found a match with the value of m. This results in a true strength of double DES of approximately 2^57 (twice the strength of single DES, but not strong enough to be considered effective), instead of the 2^112 originally hoped for.6

c05f009.tif

Figure 5-9: Meet-in-the-middle attack on 2DES.
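The table-building attack can be shown end to end on a deliberately tiny cipher. The 8-bit keyed function below is purely hypothetical; with 8-bit keys the attacker does roughly 2 × 2^8 operations instead of the 2^16 a naive double-key brute force would need:

```python
def toy_encrypt(p: int, k: int) -> int:
    # Tiny invertible 8-bit "cipher"; illustration only, not a real cipher.
    return (p * 7 + k) % 256

def toy_decrypt(c: int, k: int) -> int:
    return ((c - k) * 183) % 256  # 183 is 7's inverse mod 256

# One known plaintext/ciphertext pair under double encryption:
k1, k2 = 0x17, 0xA9
plain = 0x42
cipher = toy_encrypt(toy_encrypt(plain, k1), k2)

# Step 1: encrypt the plaintext under every possible first key -> table of m.
forward = {toy_encrypt(plain, k): k for k in range(256)}

# Step 2: decrypt the ciphertext under every possible second key and look
# for a match in the table: the two searches "meet in the middle" at m.
candidates = [
    (forward[m], k) for k in range(256)
    if (m := toy_decrypt(cipher, k)) in forward
]
print((k1, k2) in candidates)  # True: the real key pair is among the matches
```

With this toy cipher every second key produces some match, so a real attack uses a second known plaintext/ciphertext pair to filter the candidate list down to the true key pair; the total cost remains on the order of two single-key searches, not their product.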

Triple DES (3DES)

The defeat of double DES resulted in the adoption of triple DES as the next solution to overcome the weaknesses of single DES. Triple DES was designed to operate at a relative strength of 2^112 using two different keys to perform the encryption.

The ciphertext is created by encrypting the plaintext with key 1, re-encrypting with key 2, and then encrypting again with key 1.

This would have a relative strength of 2^112 and be infeasible to attack using either known plaintext or differential cryptanalysis. This mode of 3DES is referred to as EEE2 (encrypt, encrypt, encrypt using two keys).

The plaintext was encrypted using key 1, then decrypted using key 2, and then encrypted using key 1.

Doing the decrypt operation for the intermediate step does not make a difference in the strength of the cryptographic operation, but it does allow backward compatibility by permitting a user of triple DES to also access files encrypted with single DES. This mode of triple DES is referred to as EDE2. Originally, the use of triple DES was primarily done using two keys as shown above, and this was compliant with ISO 8732 and ANSI X9.17; however, some applications, such as Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME), are moving toward the adoption of triple DES using three separate keys.
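The backward-compatibility property of EDE2 can be seen with a toy stand-in cipher (XOR, purely illustrative): when both keys are set to the same value, the middle decryption cancels the first encryption and the result is ordinary single-key encryption:

```python
def toy_e(block: int, key: int) -> int:
    return block ^ key  # stand-in "encrypt"; not a real cipher

def toy_d(block: int, key: int) -> int:
    return block ^ key  # XOR is its own inverse

def ede2(plain: int, k1: int, k2: int) -> int:
    # EDE2: encrypt with key 1, decrypt with key 2, encrypt with key 1.
    return toy_e(toy_d(toy_e(plain, k1), k2), k1)

plain, key = 0x42, 0x5A
print(ede2(plain, key, key) == toy_e(plain, key))  # True: acts like single DES
```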

The Advanced Encryption Standard (AES)

In 1997, the National Institute of Standards and Technology (NIST) in the United States issued a call for a product to replace DES and 3DES. The requirements were that the new algorithm would be at least as strong as DES, have a larger block size (because a larger block size would be more efficient and more secure), and overcome the problems of performance with DES. DES was developed for hardware implementations and is too slow in software. 3DES is even slower, and thus creates a serious latency in encryption as well as significant processing overhead.

After considerable research, the product chosen to be the new advanced encryption standard (AES) was the Rijndael algorithm, created by Dr. Joan Daemen and Dr. Vincent Rijmen of Belgium. The name Rijndael is merely a contraction of their surnames. Rijndael beat out the other finalists: Serpent, of which Ross Anderson was an author; MARS, an IBM product; RC6, from Ron Rivest and RSA; and Twofish, developed by Bruce Schneier. The AES algorithm was obliged to meet many criteria, including the need to be flexible, implementable on many types of platforms, and free of royalties.

Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)

Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) is an encryption protocol that forms part of the 802.11i standard for wireless local area networks. The CCMP protocol is based on AES encryption using the CTR with CBC-MAC (CCM) mode of operation. CCMP is defined in the IETF RFC 3610 and is included as a component of the 802.11i IEEE standard.7

AES processing in CCMP must use a 128-bit key and a 128-bit block size. Per the United States’ Federal Information Processing Standard (FIPS) 197, the AES algorithm (a block cipher) uses blocks of 128 bits and cipher keys with lengths of 128, 192, and 256 bits, with 10, 12, and 14 rounds respectively. CCMP’s use of 128-bit keys and a 48-bit IV minimizes vulnerability to replay attacks. The CTR component provides data privacy. The cipher block chaining message authentication code component produces a message integrity code (MIC) that provides data origin authentication and data integrity for the packet payload data.

The 802.11i standard includes CCMP. AES is often referred to as the encryption protocol used by 802.11i; however, AES itself is simply a block cipher. The actual encryption protocol is CCMP. It is important to note here that, although the 802.11i standard allows for TKIP encryption, Robust Security Network (RSN) is part of the 802.11i IEEE standard and negotiates authentication and encryption algorithms between access points and wireless clients. This flexibility allows new algorithms to be added at any time and supported alongside previous algorithms. The use of AES-CCMP is mandated for RSNs. AES-CCMP introduces a higher level of security than past protocols by providing protection for the MAC protocol data unit (MPDU) and parts of the 802.11 MAC headers. This protects even more of the data packet from eavesdropping and tampering.

Rijndael

The Rijndael algorithm can be used with block sizes of 128, 192, or 256 bits. The key can also be 128, 192, or 256 bits, with a variable number of rounds of operation depending on the key size. Using AES with a 128-bit key would do 10 rounds, whereas a 192-bit key would do 12, and a 256-bit key would do 14. Although Rijndael supports multiple block sizes, AES supports only one block size (AES is a subset of Rijndael). AES is reviewed below in the 128-bit block format. The AES operation works on the entire 128-bit block of input data by first copying it into a square table (or array) that it calls state. The inputs are placed into the array by column so that the first four bytes of the input would fill the first column of the array.

Following is the input plaintext when placed into a 128-bit state array (the bytes fill the array column by column):

    1st byte    5th byte    9th byte     13th byte
    2nd byte    6th byte    10th byte    14th byte
    3rd byte    7th byte    11th byte    15th byte
    4th byte    8th byte    12th byte    16th byte

The key is also placed into a similar square table or matrix. The Rijndael operation consists of four major operations.

  1. Substitute Bytes—Use of an S-box to do a byte-by-byte substitution of the entire block.
  2. Shift Rows—Transposition or permutation through offsetting each row in the table.
  3. Mix Columns—A substitution of each value in a column based on a function of the values of the data in the column.
  4. Add Round Key—XOR each byte with the key for that round; the key is modified for each round of operation.

These four major operations of the Rijndael operation deserve more examination:

  1. Substitute Bytes The substitute bytes operation uses an S-box that looks up the value of each byte in the input and substitutes it with the value in the table. The S-box table contains all 256 possible 8-bit values, and a simple cross-reference is done to find the substitute value using the first half of the byte (a 4-bit word) on the x-axis and the second half of the byte on the y-axis. Hexadecimal values are used in both the input and S-box tables.
  2. Shift Row Transformation The shift row transformation step provides blockwide transposition of the input data by shifting the rows of data as follows. Starting with the input table described earlier, the effect of the shift row operation can be observed. (By this point the table would already have been subjected to the substitute bytes operation, so it would no longer look like this, but this table is used for the sake of clarity.)

         1st byte    5th byte    9th byte     13th byte
         2nd byte    6th byte    10th byte    14th byte
         3rd byte    7th byte    11th byte    15th byte
         4th byte    8th byte    12th byte    16th byte

     The first row is not shifted:

         1st byte    5th byte    9th byte     13th byte

     The second row of the table is shifted one place to the left:

         6th byte    10th byte   14th byte    2nd byte

     The third row of the table is shifted two places to the left:

         11th byte   15th byte   3rd byte     7th byte

     The fourth row of the table is shifted three places to the left:

         16th byte   4th byte    8th byte     12th byte

     The final result of the shift rows step:

         1     5     9     13
         6     10    14    2
         11    15    3     7
         16    4     8     12

  3. Mix Column Transformation The mix column transformation is performed by multiplying and XORing each byte in a column together, according to the table in Figure 5-10.
    c05f010.tif

    Figure 5-10: Mix column transformation

    The state table in Figure 5-10 is the result of the previous step. Working the first column of the state table (shaded) against the first row of the mix column matrix (shaded) using multiplication and XOR, the computation of the mix columns step for the first byte of the first column would be

    (1×02) ⊕ (6×03) ⊕ (11×01) ⊕ (16×01)

    The second byte in the column would be calculated using the second row of the mix columns table as

    (1×01) ⊕ (6×02) ⊕ (11×03) ⊕ (16×01)

  4. Add Round Key The key is modified for each round by first dividing the 128-bit key into four 32-bit words and then expanding it into 44 32-bit words (176 bytes). The key is arrayed into a square matrix, and each column is subjected to rotation (shifting the first byte to the last, so 1, 2, 3, 4 becomes 2, 3, 4, 1) and then substitution of each byte using an S-box. The result of these first two operations is then XORed with a round constant to create the key to be used for that round. The round constant changes for each round, and its values are predefined. Each of the above steps (except for mix columns, which is only done for nine rounds) is done for 10 rounds to produce the ciphertext. AES is a strong algorithm that is not considered breakable at any time in the near future and is easy to deploy on many platforms with excellent throughput.
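The shift rows step described above is simple to express directly. The sketch below applies it to the state array from the text (byte positions 1 through 16, stored column by column):

```python
def shift_rows(state):
    """Rotate row r of the 4x4 state r places to the left."""
    return [row[r:] + row[:r] for r, row in enumerate(state)]

# Input byte positions 1..16, placed into the state column by column:
state = [[1, 5, 9, 13],
         [2, 6, 10, 14],
         [3, 7, 11, 15],
         [4, 8, 12, 16]]

for row in shift_rows(state):
    print(row)
# [1, 5, 9, 13]
# [6, 10, 14, 2]
# [11, 15, 3, 7]
# [16, 4, 8, 12]
```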

Other Symmetric Algorithm Approaches

Besides DES and AES, a number of other approaches to encryption algorithms have been developed over the years.

International Data Encryption Algorithm (IDEA)

International Data Encryption Algorithm (IDEA) was developed as a replacement for DES by Xuejia Lai and James Massey in 1991. IDEA uses a 128-bit key and operates on 64-bit blocks. IDEA does eight rounds of transposition and substitution using modular addition and multiplication, and bitwise exclusive-or (XOR). The patents on IDEA expired in 2011.

CAST

CAST was developed in 1996 by Carlisle Adams and Stafford Tavares. CAST-128 can use keys between 40 and 128 bits in length and will do between 12 and 16 rounds of operation, depending on key length. CAST-128 is a Feistel-type block cipher with 64-bit blocks. CAST-256 was submitted as an unsuccessful candidate for the new AES. CAST-256 operates on 128-bit blocks with keys of 128, 160, 192, 224, and 256 bits. It performs 48 rounds and is described in RFC 2612.

Secure and Fast Encryption Routine (SAFER)

All of the algorithms in Secure and Fast Encryption Routine (SAFER) are patent-free. The algorithms were developed by James Massey and work on either 64-bit input blocks (SAFER-SK64) or 128-bit blocks (SAFER-SK128). A variation of SAFER is used as a block cipher in Bluetooth.

Blowfish

Blowfish is a symmetric algorithm developed by Bruce Schneier. It is an extremely fast cipher and can be implemented in as little as 5K of memory. It is a Feistel-type cipher in that it divides the input blocks into two halves and then uses them in XORs against each other. However, it varies from the traditional Feistel cipher in that Blowfish works against both halves, not just one. The Blowfish algorithm operates with variable key sizes, from 32 up to 448 bits, on 64-bit input and output blocks. One of the characteristics of Blowfish is that its S-boxes are created from the key and stored for later use. Because of the processing time taken to change keys and recompute the S-boxes, Blowfish is unsuitable for applications where the key changes frequently or for applications on smart cards or other devices with limited processing power. Blowfish is currently considered unbreakable (using today’s technology); in fact, because the key is used to generate the S-boxes, it takes over 500 rounds of the Blowfish algorithm to test any single key.

Twofish

Twofish was one of the finalists for the AES. It is an adapted version of Blowfish developed by a team of cryptographers led by Bruce Schneier. It can operate with keys of 128, 192, or 256 bits on blocks of 128 bits. It performs 16 rounds during the encryption/decryption process.

RC5

RC5 was developed by Ron Rivest of RSA and is deployed in many of RSA’s products. It is a very adaptable product useful for many applications, ranging from software to hardware implementations. The key for RC5 can vary from 0 to 2040 bits, the number of rounds it executes can be adjusted from 0 to 255, and the length of the input words can also be chosen from 16-, 32-, and 64-bit lengths. The algorithm operates on two words at a time in a fast and secure manner.

RC5 is defined in RFC 2040 for four different modes of operation:

  • RC5 block cipher is similar to DES ECB producing a ciphertext block of the same length as the input.
  • RC5-CBC is a cipher block chaining form of RC5 using chaining to ensure that repeated input blocks would not generate the same output.
  • RC5-CBC-Pad combines chaining with the ability to handle input plaintext of any length. The ciphertext will be longer than the plaintext by at most one block.
  • RC5-CTS is called ciphertext stealing and will generate a ciphertext equal in length to a plaintext of any length.

RC4

RC4, a stream-based cipher, was developed in 1987 by Ron Rivest for RSA Data Security and has become the most widely used stream cipher, being deployed, for example, in WEP and SSL/TLS. RC4 uses a variable-length key ranging from 8 to 2048 bits (1 to 256 bytes) and has a period of greater than 10^100; in other words, the keystream should not repeat for at least that length.

The key is used to initialize a state vector that is 256 bytes in length and contains all possible values of 8-bit numbers from 0 through 255. This state is used to generate the keystream that is XORed with the plaintext. The key is only used to initialize the state and is not used thereafter. Because no transposition is done, RC4 is considered by some cryptographers to be theoretically weaker. The U.S. federal government, through NIST, bans its use for protecting sensitive data by federal agencies and their contractors. If RC4 is used with a key length of at least 128 bits, there are currently no practical ways to attack it; the published successful attacks against the use of RC4 in WEP applications are related to problems with the implementation of the algorithm, not the algorithm itself.
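RC4 itself is short enough to sketch in full: a key-scheduling pass mixes the key into the 256-byte state, after which the key is never used again and the state alone generates the keystream:

```python
def rc4_keystream(key: bytes):
    # Key-scheduling algorithm: mix the key into the 256-byte state.
    state = list(range(256))
    j = 0
    for i in range(256):
        j = (j + state[i] + key[i % len(key)]) % 256
        state[i], state[j] = state[j], state[i]
    # Keystream generation: only the state is used from here on.
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + state[i]) % 256
        state[i], state[j] = state[j], state[i]
        yield state[(state[i] + state[j]) % 256]

def rc4(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(d ^ k for d, k in zip(data, rc4_keystream(key)))

ct = rc4(b"Key", b"Plaintext")
print(ct.hex().upper())  # BBF316E8D940AF0AD3 (well-known RC4 test vector)
print(rc4(b"Key", ct))   # b'Plaintext'
```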

Advantages and Disadvantages of Symmetric Algorithms

Symmetric algorithms are very fast and secure methods of providing confidentiality and some integrity and authentication for messages being stored or transmitted. Many algorithms can be implemented in either hardware or software and are available at no cost to the user.

However, there are serious disadvantages to symmetric algorithms—key management is very difficult, especially in large organizations. The number of keys needed grows rapidly with every new user according to the formula n(n − 1)/2, where n is the number of users. An organization with only 10 users, all wanting to communicate securely with one another, requires 45 keys (10×9/2). If the organization grows to 1000 employees, the need for key management expands to nearly a half million keys.
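The growth described by the formula is easy to verify:

```python
def symmetric_keys_needed(users: int) -> int:
    """Pairwise secret keys needed so every pair can communicate: n(n-1)/2."""
    return users * (users - 1) // 2

print(symmetric_keys_needed(10))    # 45
print(symmetric_keys_needed(1000))  # 499500
```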

Symmetric algorithms also are not able to provide nonrepudiation of origin, access control, and digital signatures, except in a very limited way. If two or more people share a symmetric key, then it is impossible to prove who altered a file protected with a symmetric key. Selecting keys is an important part of key management. There needs to be a process in place that ensures that a key is selected randomly from the entire key space and that there is some way to recover a lost or forgotten key.

Because symmetric algorithms require both users (the sender and the receiver) to share the same key, there can be challenges with secure key distribution. Often the users must use an out-of-band channel such as mail, fax, telephone, or courier to exchange secret keys. The use of an out-of-band channel should make it difficult for an attacker to seize both the encrypted data and the key. The other method of exchanging the symmetric key is to use an asymmetric algorithm.

Asymmetric Cryptography

Due to the practical limitations of symmetric cryptography, asymmetric cryptography attempts to provide the best of all worlds. While initially more key management is required, the fundamentals of asymmetric cryptography provide an extensible and elastic framework in which to deploy cryptographic functions for integrity, confidentiality, authentication, and nonrepudiation.

Asymmetric Algorithms

Unlike symmetric algorithms, which have been in existence for several millennia, the use of asymmetric (or public key) algorithms is relatively new. These algorithms became commonly known when Drs. Whit Diffie and Martin Hellman released a paper in 1976 called “New Directions in Cryptography.”8 The Diffie–Hellman paper described the concept of using two different keys (a key pair) to perform the cryptographic operations. The two keys would be linked mathematically, but would be mutually exclusive. For most asymmetric algorithms, if one half of this key pair was used for encryption, then the other key half would be required to decrypt the message.

When a person wishes to communicate using an asymmetric algorithm, she would first generate a key pair. Usually this is done by the cryptographic application or the PKI without user involvement to ensure the strength of the key generation process. One half of the key pair is kept secret, and only the key holder knows that key. For this reason, it is often called the private key. The other half of the key pair can be given freely to anyone that wants a copy. In many companies, it may be available through the corporate website or access to a key server. That is why this half of the key pair is often referred to as the public key. Asymmetric algorithms are one-way functions, that is, a process that is much simpler to go in one direction (forward) than to go in the other direction (backward or reverse engineering). The process to generate the public key (forward) is fairly simple, and providing the public key to anyone who wants it does not compromise the private key because the process to go from the public key to the private key is computationally infeasible.

Confidential Messages

Because the keys are mutually exclusive, any message that is encrypted with a public key can only be decrypted with the corresponding other half of the key pair, the private key. Therefore, as long as the key holder keeps her private key secure, there exists a method of transmitting a message confidentially. The sender would encrypt the message with the public key of the receiver. Only the receiver with the private key would be able to open or read the message, providing confidentiality. See Figure 5-11.

Open Messages

Conversely, when a message is encrypted with the private key of a sender, it can be opened or read by anyone who possesses the corresponding public key. When a person needs to send a message and provide proof of origin (nonrepudiation), he can do so by encrypting it with his own private key. The recipient then has some guarantee that, because she opened it with the public key from the sender, the message did, in fact, originate with the sender. See Figure 5-12.

c05f011.tif

Figure 5-11: Using public key cryptography to send a confidential message.

c05f012.tif

Figure 5-12: Using public key cryptography to send a message with proof of origin.

Confidential Messages with Proof of Origin

By encrypting a message with the private key of the sender and the public key of the receiver, the ability exists to send a message that is confidential and has proof of origin. See Figure 5-13.

c05f013.tif

Figure 5-13: Using public key cryptography to send a message that is confidential and has a proof of origin.

RSA

RSA was developed in 1978 by Ron Rivest, Adi Shamir, and Len Adleman while they were at MIT. RSA is based on the mathematical challenge of factoring the product of two large prime numbers. A prime number can be divided only by 1 and itself; examples include 2, 3, 5, 7, 11, 13, and so on. Factoring is taking a number and finding the numbers that can be multiplied together to produce it. For example, if a×b = c, then c can be factored into a and b; because 3×4 = 12, the number 12 can be factored into 3 and 4, 6 and 2, or 12 and 1. The RSA algorithm uses large prime numbers whose product would be extremely difficult to factor. Successful factoring attacks have been executed against 512-bit numbers (at a cost of approximately 8000 MIPS-years), and because successful attacks against 1024-bit numbers appeared increasingly possible in the near term, the U.S. National Institute of Standards and Technology (NIST) recommended moving away from 1024-bit RSA keys by the end of 2010.9

The three primary approaches to attack the RSA algorithm are to use brute force, trying all possible private keys; mathematical attacks, factoring the product of two prime numbers; and timing attacks, measuring the running time of the decryption algorithm.
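
The mathematical attack can be illustrated at toy scale. The sketch below factors the classic textbook modulus 3233 = 53 × 61 by trial division and then recovers the private exponent; real moduli of 2048+ bits make this approach utterly infeasible, which is the entire basis of RSA's security:

```python
def factor(n: int):
    """Trial division -- feasible only for tiny moduli, shown for illustration."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None

# Toy textbook modulus (real RSA moduli are 2048+ bits)
p, q = factor(3233)
phi = (p - 1) * (q - 1)     # Euler's totient of the modulus
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: modular inverse (Python 3.8+)

m = 42
assert pow(pow(m, e, 3233), d, 3233) == m   # encrypt then decrypt round-trips
```

Once the attacker factors the modulus, computing the private key is trivial, as the last two lines show.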

Diffie–Hellman Algorithm

Diffie–Hellman is a key exchange algorithm. It enables two users to exchange or negotiate, over an open channel, a secret symmetric key that will be used subsequently for message encryption. The Diffie–Hellman algorithm does not itself provide message confidentiality, but it is extremely useful for applications such as public key infrastructure. Diffie–Hellman is based on discrete logarithms, a mathematical function built on finding a primitive root of a prime number.
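
The exchange can be sketched with deliberately tiny parameters (real deployments use large, well-vetted primes; the variable names are ours):

```python
import secrets

# Toy public parameters: a small prime p and generator g, known to everyone
p, g = 23, 5

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent, kept secret
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent, kept secret

A = pow(g, a, p)                   # Alice sends A over the open channel
B = pow(g, b, p)                   # Bob sends B over the open channel

alice_secret = pow(B, a, p)        # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)          # Bob computes (g^a)^b mod p
assert alice_secret == bob_secret  # both now hold the same shared secret
```

An eavesdropper sees p, g, A, and B but must solve a discrete logarithm to recover either private exponent, which is infeasible at realistic parameter sizes.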

El Gamal

The El Gamal cryptographic algorithm is based on the work of Diffie–Hellman but adds message confidentiality and digital signature services, not just session key exchange. The El Gamal algorithm is based on the same mathematical function of discrete logarithms.

Elliptic Curve Cryptography (ECC)

One branch of discrete logarithmic algorithms is based on the complex mathematics of elliptic curves. These algorithms, which are too complex to explain in this context, are advantageous for their speed and strength. Elliptic curve algorithms have the highest strength per bit of key length of any of the asymmetric algorithms. The ability to use much shorter keys in Elliptic Curve Cryptography (ECC) implementations provides savings on computational power and bandwidth, which makes ECC especially beneficial for smart cards, wireless devices, and similar application areas. Elliptic curve algorithms provide confidentiality, digital signatures, and message authentication services.

Advantages and Disadvantages of Asymmetric Key Algorithms

The development of asymmetric key cryptography revolutionized the cryptographic community. Now it was possible to send a message across an untrusted medium in a secure manner without the overhead of prior key exchange or key material distribution. It allowed several other features not readily available in symmetric cryptography, such as the nonrepudiation of origin, access control, data integrity, and the nonrepudiation of delivery.

The problem is that asymmetric cryptography is extremely slow compared to its symmetric counterpart, making it impractical for everyday encryption of large amounts of data and frequent transactions. This is because asymmetric algorithms handle much larger keys and more complex computations, making even a fast computer work harder than it would with the small keys and simpler operations of a symmetric cipher. The ciphertext output from asymmetric algorithms may also be much larger than the plaintext. For large messages, therefore, asymmetric algorithms are not effective for secrecy; however, they are effective for message integrity, authentication, and nonrepudiation.

Hybrid Cryptography

The solution to many of these problems lies in a hybrid technique of cryptography that combines the strengths of both symmetric cryptography, with its great speed and secure algorithms, and asymmetric cryptography, with its ability to securely exchange session keys and provide message authentication and nonrepudiation. Symmetric cryptography is best for encrypting large files and can handle the encryption and decryption processes with little impact on delivery times or computational performance.

Asymmetric cryptography can handle the initial setup of the communications session through the exchange or negotiation of the symmetric keys to be used for this session. In many cases, the symmetric key is only needed for the length of this communication and can be discarded following the completion of the transaction, so the symmetric key in this case will be referred to as a session key. A hybrid system operates as shown in Figure 5-14. The message itself is encrypted with a symmetric key, SK, and is sent to the recipient. The symmetric key is encrypted with the public key of the recipient and sent to the recipient. The symmetric key is decrypted with the private key of the recipient. This discloses the symmetric key to the recipient. The symmetric key can then be used to decrypt the message.

c05f014.tif

Figure 5-14: Hybrid system using a symmetric algorithm for bulk data encryption and an asymmetric algorithm for distribution of the symmetric key.
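
The hybrid flow can be sketched end to end with toy stand-ins. Everything here is deliberately insecure and for illustration only: the "symmetric cipher" is a SHA-256 counter-mode keystream, the "asymmetric leg" is per-byte textbook RSA with a tiny modulus, and all function names are ours:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stand-in symmetric cipher: SHA-256 counter-mode keystream."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        out.extend(c ^ k for c, k in zip(data[offset:offset + 32], ks))
    return bytes(out)

# Toy textbook RSA pair for the asymmetric leg (real keys are 2048+ bits)
n, e, d = 3233, 17, 2753

def rsa_wrap(key: bytes) -> list:
    """Encrypt the session key per byte with the recipient's PUBLIC key (toy)."""
    return [pow(b, e, n) for b in key]

def rsa_unwrap(blocks: list) -> bytes:
    """Recover the session key with the recipient's PRIVATE key."""
    return bytes(pow(c, d, n) for c in blocks)

# Sender: fresh session key, symmetric bulk encryption, asymmetric key wrap
session_key = secrets.token_bytes(16)
ciphertext = keystream_xor(session_key, b"the actual message")
wrapped_key = rsa_wrap(session_key)

# Recipient: unwrap the session key, then decrypt the bulk data
recovered_key = rsa_unwrap(wrapped_key)
plaintext = keystream_xor(recovered_key, ciphertext)
assert plaintext == b"the actual message"
```

The structure mirrors Figure 5-14: the slow asymmetric operation touches only the 16-byte session key, while the fast symmetric operation handles the bulk data.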

Message Digests

A message digest is a small representation of a larger message. Message digests are used to ensure the authentication and integrity of information, not the confidentiality.

Message Authentication Code

A message authentication code (MAC, also known as a cryptographic checksum) is a small block of data that is generated using a secret key and appended to the message. When the message is received, the recipient generates her own MAC using the same secret key and compares it to the one received; if they match, she knows the message has not changed, either accidentally or intentionally, in transit. Of course, this assurance is only as strong as the trust that the two parties have that no one else has access to the secret key. A MAC is a small representation of a message and has the following characteristics:

  • A MAC is much smaller than the message generating it.
  • Given a MAC, it is impractical to compute the message that generated it.
  • Given a MAC and the message that generated it, it is impractical to find another message generating the same MAC.

In the case of DES-CBC, a MAC is generated using the DES algorithm in CBC mode, and the secret DES key is shared by the sender and the receiver. The MAC is actually just the last block of ciphertext generated by the algorithm. This block of data (64 bits) is attached to the unencrypted message and transmitted to the far end. All previous blocks of encrypted data are discarded to prevent any attack on the MAC itself. The receiver can just generate his own MAC using the secret DES key he shares to ensure message integrity and authentication. He knows that the message has not changed because the chaining function of CBC would significantly alter the last block of data if any bit had changed anywhere in the message. He knows the source of the message (authentication) because only one other person holds the secret key. If the message contains a sequence number (such as a TCP header or X.25 packet), he knows that all messages have been received and not duplicated or missed.

HMAC

A MAC based on DES is one of the most common methods of creating a MAC; however, it is slow in operation compared to a hash function. A hash function such as MD5 does not use a secret key, so on its own it cannot serve as a MAC. RFC 2104 was therefore issued to define a hashed MAC (HMAC) construction that is now used in IPsec and many other secure Internet protocols, such as SSL/TLS. HMAC implements a freely available hash algorithm as a component (black box) within the HMAC implementation, which allows easy replacement of the hashing module if a new hash function becomes necessary. The use of proven cryptographic hash algorithms also provides assurance of the security of HMAC implementations. HMAC works by mixing a secret key value into the hash input along with the source message. The HMAC operation provides cryptographic strength similar to a hashing algorithm, except that it now has the additional protection of a secret key, and it still operates nearly as rapidly as a standard hash operation.
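
As a concrete sketch, Python's standard library implements the RFC 2104 construction directly (the key, message text, and amounts below are invented):

```python
import hashlib
import hmac

secret = b"shared-secret-key"
message = b"transfer $100 to account 42"

# Sender computes the tag and appends it to the message
tag = hmac.new(secret, message, hashlib.sha256).hexdigest()

# Receiver recomputes the tag and compares in constant time
check = hmac.new(secret, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, check)

# Any change to the message yields a completely different tag
tampered = hmac.new(secret, b"transfer $900 to account 42",
                    hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, tampered)
```

The constant-time comparison (`hmac.compare_digest`) matters in practice: a naive `==` can leak timing information that helps an attacker forge tags byte by byte.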

Digital Signatures

Tamper detection and related authentication techniques rely on a mathematical function called a one-way hash (also called a message digest). A one-way hash is a number with fixed length that has the following characteristics:

  • The value of the hash is unique for the hashed data. Any change in the data, even deleting or altering a single character, results in a different value.
  • The content of the hashed data cannot, for all practical purposes, be deduced from the hash—which is why it is called “one-way.”
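
Both properties are easy to observe with a standard hash function (the messages here are invented):

```python
import hashlib

h1 = hashlib.sha256(b"Pay Bob $100").hexdigest()
h2 = hashlib.sha256(b"Pay Bob $900").hexdigest()

print(len(h1))   # 64 hex characters: fixed length regardless of input size
print(h1 != h2)  # True: a one-character change produces an unrelated digest
```

This avalanche behavior is what makes a hash useful for tamper detection: even the smallest alteration is unmistakable.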

It is possible to use your private key for encryption and your public key for decryption. Although this is not desirable when you are encrypting sensitive information, it is a crucial part of digitally signing any data. Instead of encrypting the data itself, the signing software creates a one-way hash of the data, then uses your private key to encrypt the hash. The encrypted hash, along with other information, such as the hashing algorithm, is known as a digital signature.

The recipient of some signed data will get two items: the original data and the digital signature, which is a one-way hash (of the original data) that has been encrypted with the signer’s private key. To validate the integrity of the data, the receiving software first uses the signer’s public key to decrypt the hash. It then uses the same hashing algorithm that generated the original hash to generate a new one-way hash of the same data. Information about the hashing algorithm used is sent with the digital signature. Finally, the receiving software compares the new hash against the original hash. If the two hashes match, the data has not changed since it was signed. If they do not match, the data may have been tampered with since it was signed, or the signature may have been created with a private key that does not correspond to the public key presented by the signer.

If the two hashes match, the recipient can be certain that the public key used to decrypt the digital signature corresponds to the private key used to create the digital signature. Confirming the identity of the signer, however, also requires some way of confirming that the public key really belongs to a particular person or other entity.

The significance of a digital signature is comparable to the significance of a handwritten signature. Once you have signed some data, it is difficult to deny doing so later, assuming that the private key has not been compromised or out of the owner's control. This quality of digital signatures provides a high degree of nonrepudiation; that is, digital signatures make it difficult for the signer to deny having signed the data. In some situations, a digital signature may be as legally binding as a handwritten signature.
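
The hash-then-sign flow described above can be sketched with toy numbers. The per-byte textbook RSA and the function names are ours, and the tiny key is wildly insecure; the point is only the structure: the private key signs the digest, and the public key verifies it:

```python
import hashlib

# Toy textbook RSA pair (real signing keys are 2048+ bits)
n, e, d = 3233, 17, 2753

def toy_sign(message: bytes) -> list:
    """Hash the message, then transform the digest per byte with the PRIVATE key."""
    digest = hashlib.sha256(message).digest()
    return [pow(b, d, n) for b in digest]

def toy_verify(message: bytes, signature: list) -> bool:
    """Recover the digest with the PUBLIC key and compare to a fresh hash."""
    digest = hashlib.sha256(message).digest()
    return bytes(pow(s, e, n) for s in signature) == digest

sig = toy_sign(b"contract v1")
assert toy_verify(b"contract v1", sig)      # untampered data verifies
assert not toy_verify(b"contract v2", sig)  # altered data fails
```

Note that only the short digest is signed, never the whole document, which is why signing stays fast even for large files.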

Non-Repudiation

Non-repudiation is a service that ensures the sender cannot deny that a message was sent and that the integrity of the message is intact. NIST SP 800-57, “Recommendation for Key Management—Part 1: General” (Revision 3), defines non-repudiation as:

A service that is used to provide assurance of the integrity and origin of data in such a way that the integrity and origin can be verified by a third party as having originated from a specific entity in possession of the private key of the claimed signatory. In a general information security context, assurance that the sender of information is provided with proof of delivery and the recipient is provided with proof of the sender’s identity, so neither can later deny having processed the information.

Non-repudiation can be accomplished with digital signatures and PKI. The message is signed using the sender's private key, and when the recipient receives the message, she may use the sender's public key to validate the signature. While this proves the integrity of the message, it does not by itself establish ownership of the key pair. A certificate authority must attest to the binding between the public key and the sender's identity, and only the sender may hold the corresponding private key, for the non-repudiation to be valid.

Methods of Cryptanalytic Attack

Any security system or product is subject to compromise or attack. The following explains common attacks against cryptography systems that the security practitioner needs to be aware of.

Chosen Plaintext

To execute a chosen plaintext attack, the attacker knows the algorithm used for the encryption or, even better, has access to the machine used to do the encryption and is trying to determine the key. This may happen if a workstation used for encrypting messages is left unattended. The attacker can then run chosen pieces of plaintext through the algorithm and observe the results, which may assist in a known plaintext attack. In an adaptive chosen plaintext attack, the attacker can modify the chosen input files based on earlier results to see what effect that has on the resulting ciphertext.

Social Engineering for Key Discovery

This is the most common type of attack and usually the most successful. All cryptography relies to some extent on humans to implement and operate. Unfortunately, this is one of the greatest vulnerabilities and has led to some of the greatest compromises of a nation’s or organization’s secrets or intellectual property. Through coercion, bribery, or befriending people in positions of responsibility, spies or competitors are able to gain access to systems without having any technical expertise.

Brute Force

Brute force is trying all possible keys until one is found that decrypts the ciphertext. This is why key length is such an important factor in determining the strength of a cryptosystem. Because DES only had a 56-bit key, in time the attackers were able to discover the key and decrypt a DES message. This is also why SHA-256 is considered stronger than MD5, because the output hash is longer, and, therefore, more resistant to a brute-force attack. Graphical processor units (GPUs) have revolutionized brute force hacking methods. Where a standard CPU might take 48 hours to crack an eight-character mixed password, a modern GPU can crack it in less than ten minutes. GPUs have a large number of Arithmetic/Logic Units (ALUs) and are designed to perform repetitive tasks continuously. These characteristics make them ideal for performing brute-force attack processes. Due to the introduction of GPU-based brute-force attacks, many security professionals are evaluating password length, complexity, and multifactor considerations.
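
A miniature version of the attack against a hashed password illustrates why length and alphabet size matter (the leaked hash and three-letter "password" below are invented for the demo):

```python
import hashlib
import itertools
import string

def brute_force(target_hex: str, alphabet=string.ascii_lowercase, max_len=3):
    """Try every candidate up to max_len characters until one hashes to target."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            word = "".join(combo)
            if hashlib.sha256(word.encode()).hexdigest() == target_hex:
                return word
    return None

leaked = hashlib.sha256(b"cat").hexdigest()   # pretend only the hash leaked
print(brute_force(leaked))                    # "cat"
```

With 26 lowercase letters and length 3, the search space is only 26 + 26² + 26³ ≈ 18,000 candidates; every added character and every expansion of the alphabet multiplies that space, which is exactly the property GPU attacks erode.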

Differential Cryptanalysis

Differential cryptanalysis is a chosen plaintext attack that studies how differences in plaintext inputs propagate through the cipher to differences in the resulting ciphertext; with enough chosen pairs, the analysis can reveal information about the key. A related but distinct technique, differential power analysis, is a side-channel attack executed by measuring the exact execution times and power required by the crypto device to perform the encryption or decryption. By measuring these physical characteristics, it may be possible to determine the value of the key and the algorithm used.

Linear Cryptanalysis

This is a known plaintext attack that uses linear approximations to describe the behavior of the block cipher. Given sufficient pairs of plaintext and corresponding ciphertext, bits of information about the key can be obtained, and increased amounts of data will usually give a higher probability of success.

There have been a variety of enhancements and improvements to the basic attack. For example, there is an attack called differential-linear cryptanalysis, which combines elements of differential cryptanalysis with those of linear cryptanalysis.

Algebraic

Algebraic attacks are a class of techniques that rely for their success on block ciphers exhibiting a high degree of mathematical structure. For instance, it is conceivable that a block cipher might exhibit a group structure. If this were the case, it would then mean that encrypting a plaintext under one key and then encrypting the result under another key would always be equivalent to single encryption under some other single key. If so, then the block cipher would be considerably weaker, and the use of multiple encryption cycles would offer no additional security over single encryption.

Rainbow Table

Hash functions map plaintext into a hash. Because hashing is a one-way process, one should not be able to determine the plaintext from the hash itself. There are two ways to recover a given plaintext from its hash:

  • Hash each candidate plaintext until a matching hash is found.
  • Hash each candidate plaintext once, storing each generated hash in a lookup table so hashes do not need to be generated again.

A rainbow table is a precomputed lookup table of hash outputs. The idea is that storing precomputed hash values in a table that one can later consult saves time and computing resources when attempting to recover a plaintext from its hash value.
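
The simplest form of the idea is a plain dictionary lookup, sketched below with an invented wordlist. (A true rainbow table compresses this table using hash chains to trade time for space, but the precompute-once, look-up-many principle is the same.)

```python
import hashlib

# Precompute once; reuse against any number of stolen hashes
wordlist = ["password", "letmein", "123456", "dragon"]
table = {hashlib.sha256(w.encode()).hexdigest(): w for w in wordlist}

stolen = hashlib.sha256(b"letmein").hexdigest()
print(table.get(stolen))   # "letmein" -- recovered without re-hashing the list
```

This is also why salting defeats precomputed tables: a unique salt per password means the attacker would need a separate table for every salt value.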

Ciphertext-Only Attack

The ciphertext-only attack is one of the most difficult because the attacker has so little information to start with. All the attacker starts with is some unintelligible data that he suspects may be an important encrypted message. The attack becomes simpler when the attacker is able to gather several pieces of ciphertext and thereby look for trends or statistical data that would help in the attack. Adequate encryption is defined as encryption that is strong enough to make brute-force attacks impractical because there is a higher work factor than the attacker wants to invest into the attack. Moore’s law states that available computing power doubles every 18 months.11 Experts suggest this advance may be slowing; however, encryption strength considered adequate today will probably not be sufficient a few years from now due to advances in CPU and GPU technology and new attack techniques.12 Security professionals should consider this when defining encryption requirements.

Known Plaintext

For a known plaintext attack, the attacker has access to both the ciphertext and the plaintext versions of the same message. The goal of this type of attack is to find the link—the cryptographic key that was used to encrypt the message. Once the key has been found, the attacker would then be able to decrypt all messages that had been encrypted using that key. In some cases, the attacker may not have an exact copy of the message—if the message was known to be an e-commerce transaction, the attacker knows the format of such transactions even though he does not know the actual values in the transaction.

Frequency Analysis13

This attack works closely with several other types of attacks. It is especially useful when attacking a substitution cipher where the statistics of the plaintext language are known. In English, for example, some letters appear much more often than others, allowing an attacker to assume that the most frequent ciphertext letters likely represent common plaintext letters such as E or T.
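
A small sketch against a Caesar cipher (an invented message, shifted by 3) shows the technique: letter statistics survive simple substitution, so the most frequent ciphertext letter is a good guess for plaintext E.

```python
from collections import Counter

# Caesar cipher with shift 3; letter frequencies survive the substitution
ciphertext = "PHHW PH DW WKH VHFUHW HQWUDQFH DW VHYHQ"

counts = Counter(c for c in ciphertext if c.isalpha())
top = counts.most_common(1)[0][0]       # most frequent ciphertext letter: 'H'
shift = (ord(top) - ord("E")) % 26      # guess it stands for plaintext 'E'

plaintext = "".join(
    chr((ord(c) - ord("A") - shift) % 26 + ord("A")) if c.isalpha() else c
    for c in ciphertext
)
print(plaintext)   # MEET ME AT THE SECRET ENTRANCE AT SEVEN
```

Against a full substitution cipher the same idea applies letter by letter, aided by digram and trigram statistics, though more ciphertext is needed.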

Chosen Ciphertext

This is similar to the chosen plaintext attack in that the attacker has access to the decryption device or software and attempts to defeat the cryptographic protection by decrypting chosen pieces of ciphertext to discover the key. An adaptive chosen ciphertext attack is the same, except that the attacker can modify the ciphertext prior to putting it through the algorithm based on earlier results. Asymmetric cryptosystems are vulnerable to chosen ciphertext attacks; for example, the RSA algorithm is vulnerable to this type of attack. The attacker selects a section of plaintext, encrypts it with the victim's public key, and then has the ciphertext decrypted to recover the plaintext. Although this by itself yields no new information, the attacker can exploit mathematical properties of RSA by selecting blocks of data that, when processed using the victim's private key, yield information useful in cryptanalysis. This weakness can be mitigated by including random padding in the plaintext before encrypting the data. Security vendor RSA Security recommends modifying the plaintext using a process called optimal asymmetric encryption padding (OAEP); RSA encryption with OAEP is defined in PKCS #1 v2.1.14

Birthday Attack

Because a hash is a short representation of a message, given enough time and resources, another message can be found that produces the same hash value. However, hashing algorithms have been developed with this in mind so that they can resist a simple birthday attack. (This is described in more detail in the “The Birthday Paradox” section earlier in this domain.) The point of the birthday attack is that it is easier to find any two messages that hash to the same message digest than to match a specific message and its specific message digest. The usual countermeasure is to use a hash algorithm with twice the message digest length as the desired work factor (e.g., use 160-bit SHA-1 to make it resistant to a 2^80 work factor).
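
The square-root effect is easy to demonstrate by truncating a real hash to a deliberately short digest (the 24-bit truncation and the numeric messages are our invention for the demo):

```python
import hashlib
from itertools import count

def truncated_hash(data: bytes, bits: int = 24) -> int:
    """Top `bits` bits of SHA-256: a deliberately short digest for the demo."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - bits)

seen = {}
for i in count():
    h = truncated_hash(str(i).encode())
    if h in seen:
        a, b = seen[h], i   # two different messages, same short digest
        break
    seen[h] = i

assert a != b
assert truncated_hash(str(a).encode()) == truncated_hash(str(b).encode())
print(len(seen))   # on the order of 2**12 trials for a 24-bit digest, not 2**24
```

A collision on a 24-bit digest arrives after roughly 2^12 attempts rather than 2^24, which is exactly why digest length must be twice the desired work factor.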

Dictionary Attack

The dictionary attack is used most commonly against password files. It exploits the poor habits of users who choose simple passwords based on natural words. The dictionary attack merely hashes all of the words in a dictionary and then checks whether each resulting hash matches a hashed password stored in the SAM file or other password file.

Replay Attack

This attack is meant to disrupt and damage processing by having the attacker resend captured files repeatedly to the host. If there are no checks or sequence verification codes in the receiving software, the system might process duplicate files.

Factoring Attacks

This attack is aimed at the RSA algorithm. Because that algorithm uses the product of large prime numbers to generate the public and private keys, this attack attempts to find the keys through solving the factoring of these numbers.

Reverse Engineering

This attack is one of the most common. A competing firm buys a crypto product from another firm and then tries to reverse engineer the product. Through reverse engineering, it may be able to find weaknesses in the system or gain crucial information about the operations of the algorithm.

Attacking the Random Number Generators

This attack was successful against the SSL installed in Netscape several years ago. Because the random number generator was too predictable, it gave the attackers the ability to guess the random numbers so critical in setting up initialization vectors or a nonce. With this information in hand, the attacker is much more likely to run a successful attack.

Temporary Files

Most cryptosystems will use temporary files to perform their calculations. If these files are not deleted and overwritten, they may be compromised and lead an attacker to the message in plaintext.

Implementation Attacks

Implementation attacks are some of the most common and popular attacks against cryptographic systems due to their ease and reliance on system elements outside of the algorithm. The main types of implementation attacks include:

  • Side-channel analysis
  • Fault analysis
  • Probing attacks

Side-channel attacks are passive attacks that rely on a physical attribute of the implementation such as power consumption/emanation. These attributes are studied to determine the secret key and the algorithm function. Some examples of popular side-channels include timing analysis and electromagnetic differential analysis.

Fault analysis attempts to force the system into an error state to gain erroneous results. By forcing an error, gaining the results, and comparing them with known good results, an attacker may learn about the secret key and the algorithm.

Probing attacks attempt to watch the circuitry surrounding the cryptographic module in hopes that the complementary components will disclose information about the key or the algorithm. Additionally, new hardware may be added to the cryptographic module to observe and inject information.

Data Sensitivity and Regulatory Requirements

Some data is subject to various laws and regulations and requires notification in the event of a disclosure. Beyond that, some data requires special handling, particularly to protect against penalties, identity theft, financial loss, invasion of privacy, or unauthorized access. Data should be assigned a level of sensitivity based on who has access to it and the risk of potential harm that is involved. This assignment of sensitivity is sometimes referred to as “data classification.” The data classification process is often context-sensitive, and incidents involving data in the organization's custody should be judged on a case-by-case basis. Some common examples of data classifications include the following:15

  • Regulated data such as credit card numbers, bank accounts, medical information, and employee data are all protected by laws and regulations. This data should only be accessible by users who are granted specific authorization. If this data is compromised, the harm to finances and reputation can be extreme. This should have the highest data classification.
  • Confidential data such as contracts, NDA-protected data, financial data, personnel data, and sensitive research are generally protected by legally binding contractual obligations. This information should only be accessible by individuals who have a designated, business-based need to know. If this data is compromised, the harm to reputation and finances can be serious, but generally not as bad as with regulated data. This should have a high data classification.
  • Public data such as public web sites, navigational maps, and press releases are generally at the discretion of the content provider. This information should be accessible by a large number of people (possibly everyone). This data presents little risk of harm to privacy, finances, and reputation and generally has a low data classification.

Legislative and Regulatory Compliance

Organizations operate in environments where laws, regulations and compliance requirements must be met. Security professionals must understand the laws and regulations of the country and industry they are working in. An organization’s governance and risk management processes must take into account these requirements from an implementation and risk perspective. These laws and regulations often offer specific actions that must be met for compliance, or in some cases, that must be met for a safe harbor provision. A safe harbor provision is typically a set of “good faith” conditions which, if met, may temporarily or indefinitely protect the organization from the penalties of a new law or regulation.

For example, in the United States, federal executive agencies are required to adhere to the Federal Information Security Management Act (FISMA).16 FISMA mandates the use of specific actions, standards and requirements for agencies to ensure sensitive information and vital mission services are not disrupted, distorted or disclosed to improper individuals. Agencies often take the requirements from FISMA and use them as the baseline for their information security policy and adopt the standards required by FISMA as their own. In doing so they not only meet the requirements of the law but can also provide proof to external parties that they are making a good faith effort to comply with the requirements of the law.

Compliance stemming from legal or regulatory requirements is best addressed by ensuring an organization’s policies, procedures, standards, and guidance are consistent with any laws or regulations that may govern it. Furthermore, it is advisable that specific laws and their requirements are cited in an organization’s governance program and information security training programs. As a general rule, laws and regulations represent a “moral minimum” that must be adhered to and should never be considered wholly adequate for an organization without a thorough review. Additional requirements and specificity can be added to complement the requirements of law and regulation, but they should never conflict with them. For example, a law may require sensitive financial information to be encrypted, and an organization’s policy could state that in accordance with the law all financial information will be encrypted. Furthermore, the agency may specify a standard strength and brand of encryption software to be used in order to achieve the required level of compliance with the law, while also providing for the additional layers of protection that the organization wants in place.

Privacy Requirements Compliance

Privacy laws and regulations pose confidentiality challenges for the security practitioner. Personally identifiable information is becoming an extremely valuable commodity for marketers, as demonstrated by the tremendous growth of social networking sites based on demography, and the targeted marketing activities that come with them. While valuable, this information can also become a liability for an organization that runs afoul of information privacy regulations and laws.

For example, the European Data Protection Directive only allows for the processing of personal data under specific circumstances such as:

  • When processing is necessary for compliance with a legal obligation
  • When processing is required to protect the life of the subject
  • When the subject of the personal data has provided consent
  • When the processing is performed within the law and scope of “public interest.”

The four requirements listed above reflect only a small portion of the directive. The directive further states what rights the subject has, such as objecting at any time to the processing of their personal data if the use is for direct marketing purposes. Recently, several Internet search companies and social media companies have been cited for not complying with this law. These organizations have been accused of using the personal data of the subject for direct marketing efforts without the subject’s permission. The information security professional working in a marketing firm in the European Union must understand the impact of these requirements on how information will be processed, stored, and transmitted in their organization.

The “Directive 95/46 of the European Parliament and the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data” (Data Protection Directive 95/46/EC) was established to provide a regulatory framework to guarantee secure and free movement of personal data across the national borders of the EU member countries, in addition to setting a baseline of security around personal information wherever it is stored, transmitted or processed.17 The Directive contains 33 articles in 8 chapters. The Directive went into effect in October, 1998. This general Data Protection Directive has been complemented by other legal instruments, such as the e-Privacy Directive (Directive 2002/58/EC) for the communications sector.18 There are also specific rules for the protection of personal data in police and judicial cooperation in criminal matters (Framework Decision 2008/977/JHA).19

The Data Protection Directive 95/46/EC clarifies the data protection elements that EU states must transpose into law. Each EU state regulates data protection and enforcement within its own jurisdiction. Additionally, data protection commissioners from the EU states participate in a working group together.

The Data Protection Directive 95/46/EC defines personal data as any information related to an “identified or identifiable natural person.” The data controller must ensure compliance with the principles relating to data quality and legitimate reasons for data processing. Whenever personal data is collected, the data controller has information duties toward the data subject. The data controller also must implement appropriate technical and organizational measures against unlawful destruction, accidental loss, or unauthorized access, alteration, or disclosure.

The Directive establishes data subjects’ rights as the following: the right to know who the data controller is, who the recipient of the data is, and the purpose of the processing; the right to have inaccurate data rectified; a right of recourse in the event of unlawful processing; and the right to withhold permission to use data in some circumstances. The EU Data Protection Directive also strengthens protections regarding the use of sensitive personal data such as health, sex life, and religious beliefs.

The regulatory framework can be enforced either through judicial remedies or administrative proceedings of the supervisory authority. EU member states’ supervisory authorities are given investigative and intervening powers. These include the power to issue a ban on processing or to order blocking, erasure, and destruction of data. Any individual who has suffered damage resulting from an unlawful processing operation is entitled to receive compensation from the liable controller. The Data Protection Directive provides a mechanism by which transfers of personal data outside of the EU have to meet a processing level adequate to the degree prescribed by the directive’s provisions.

In January 2012, after the Lisbon Treaty gave the EU the explicit competence to legislate on the protection of individuals with regard to the processing of their personal data, the Commission proposed a reform package comprising a general data protection regulation to replace Directive 95/46/EC and a directive to replace Framework Decision 2008/977/JHA.

The Parliament’s committee on Civil Liberties, Justice and Home Affairs adopted its reports on the basis of 4,000 amendments (to the Regulation) and 768 amendments (to the Directive). The Parliament adopted a position at first reading in March 2014. The key points of the Parliament’s position as regards the Regulation are directly reproduced below:19

  • A comprehensive approach to data protection, with a clear, single set of rules, which applies within and outside the Union
  • A clarification of the concepts used (personal data, informed consent, data protection by design and default) and a strengthening of individuals’ rights (e.g., as regards inter alia the right of access or the right to object to data processing)
  • A more precise definition of the rules concerning the processing of personal data relating to some sectors (health, employment and social security) or for some specific purposes (historical, statistical, scientific research or archives-related purposes)
  • A clarification and a strengthening of the regime of sanctions
  • A better and consistent enforcement of data protection rules (strengthened role of the corporate data protection officers, setting up of a European Data Protection Board, unified framework for all Data Protection Authorities, creation of a one-stop shop mechanism)
  • A strengthening of the criteria for assessing the adequacy of protection offered in a third country

The proposed directive deals with the processing of personal data in the context of prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties. Parliament’s position on the directive contains the key elements reproduced below:

  • A clear definition of the data protection principles (the exceptions which have to be duly justified)
  • The conditions to be complied with as regards the processing (e.g., lawful, fair, transparent and legitimate processing, and explicit purposes) and the transmission of personal data
  • The setting up of an evaluation mechanism and of a data protection impact assessment
  • A clear definition of profiling
  • A strengthening of the regime for transferring personal data to third countries
  • A clarification of the monitoring and enforcement powers of the Data Protection Authorities
  • A new article on genetic data

The security practitioner needs to stay up to date on the latest developments, such as those being pursued by the Parliament with regard to the processing and handling of personal information, in order to ensure that their compliance activities support the laws and regulations in force within the geographies where the enterprise operates and for which they are responsible.

End-User Training

Security awareness is the knowledge and attitude members of an organization possess regarding the protection of the physical and, especially, information assets of that organization. Many organizations require formal security awareness training for all workers when they join the organization and periodically thereafter, usually annually.

SANS, through their Securing The Human project, offers free on-line security awareness training. It comprises approximately 43 modules averaging 3 minutes in length each, and is available in 28 languages. The content can be accessed at http://www.securingthehuman.org/enduser.

Topics covered in security awareness training traditionally include:

  • The nature of sensitive material and physical assets such as trade secrets, privacy concerns and classified government information
  • The responsibilities of contractors and employees when handling sensitive information, which includes NDAs
  • The requirements for properly handling sensitive material in physical form, including marking, transmission, storage and destruction
  • The proper techniques for protecting sensitive computer data, including password policy and two-factor authentication
  • Understanding computer-based security concerns such as phishing, malware, and social engineering
  • Workplace security, including building access, wearing of security badges, reporting of incidents, forbidden articles, etc.
  • Knowing the repercussions of failing to properly protect information, including potential loss of employment, economic harm to the company, damage to individuals whose private records are divulged, and possible civil and criminal penalties

Public Key Infrastructure (PKI)

A public key infrastructure (PKI) is the set of systems, software, and communication protocols required to use, manage, and control public key cryptography. It has three primary purposes: publish public keys/certificates, certify that a key is tied to an individual or entity, and provide verification of the validity of a public key.

The certificate authority (CA) “signs” an entity’s digital certificate to certify that the certificate content accurately represents the certificate owner. There can be different levels of assurance implied by the CA signing the certificate, just as different forms of physical identification of an individual imply differing levels of trust. In the physical world, a credit card with a name on it has a different authentication value than, say, a government-issued ID card. Any entity can claim to be anything it wants, but if the entity wants to provide a high level of assurance, it should provide identifying content in its certificate that is easily confirmed by third parties that are trusted by all parties involved. In the digital world, a Dun and Bradstreet number, credit report, or perhaps another form of trusted third party reference would be provided to the CA before it certifies, by signing the entity’s certificate, with a marking indicating that the CA asserts a high level of trust in the entity. All entities that trust the CA can then trust that the identity provided by a certificate is trustworthy.

The functions of a CA may be distributed among several specialized servers in a PKI. For example, registration authority (RA) servers may be used to provide scalability and reliability of the PKI. RA servers provide the facility for entities to submit requests for certificate generation. The RA service is also responsible for ensuring the accuracy of certificate request content.

The CA can revoke certificates and provide an update service to the other members of the PKI via a certificate revocation list (CRL), which is a list of non-valid certificates that should not be accepted by any member of the PKI. The use of public key (asymmetric) cryptography has enabled more effective use of symmetric cryptography as well as several other important features, such as greater access control, nonrepudiation, and digital signatures.
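The revocation check described above can be sketched in a few lines. The sketch below is illustrative only, with invented serial numbers and dates; real PKI validation also verifies the CA's signature, the trust chain, and certificate extensions.

```python
# Minimal sketch of certificate acceptance against a CRL.
# Serial numbers and dates are hypothetical examples.
from datetime import date

revoked_serials = {"1001", "1004"}  # hypothetical CRL contents

def is_acceptable(serial, not_before, not_after, today):
    """Accept a certificate only if it is inside its validity period
    and its serial number is not on the revocation list."""
    if serial in revoked_serials:
        return False
    return not_before <= today <= not_after

print(is_acceptable("1002", date(2024, 1, 1), date(2025, 1, 1), date(2024, 6, 1)))  # True
print(is_acceptable("1001", date(2024, 1, 1), date(2025, 1, 1), date(2024, 6, 1)))  # revoked: False
```

In practice the relying party fetches the CRL from a distribution point named in the certificate and also checks that the CRL itself is current and signed by the CA.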

So often, the biggest question is “who can be trusted?” How does one know that the public key being used to verify Terry’s digital signature truly belongs to Terry, or that the public key being used to send a confidential message to Pat is truly Pat’s and not that of an attacker who has set himself up in the middle of the communications channel?

Public keys are by their very nature public. Many people include them on signature lines in e-mails, or organizations have them on their web servers so that customers can establish confidential communications with the employees of the organization, who they may never even meet. How does one know an imposter or attacker has not set up a rogue web server and is attracting communications that should have been confidential to his site instead of the real account, as in a phishing attack?

Setting up a trusted public directory of keys is one option. Each user must register with the directory service, and a secure manner of communications between the user and the directory would be set up. This would allow the user to change keys—or the directory to force the change of keys. The directory would publish and maintain the list of all active keys and also delete or revoke keys that are no longer trusted. This may happen if a person believes that her private key has been compromised, or she leaves the employ of the organization. Any person wanting to communicate with a registered user of the directory could request the public key of the registered user from the directory.

An even higher level of trust is provided through the use of public key certificates. This can be done directly, with Pat sending a certificate to Terry, or through a CA, which acts as a trusted third party and issues a certificate to both Pat and Terry containing the public key of the other party. This certificate is signed with the digital signature of the CA and can be verified by the recipients. The certification process binds identity information and a public key to an identity. The resultant document of this process is the public key certificate. A CA will adhere to the X.509 standards, part of the overall X.500 family of standards applying to directories. Version 3 of the X.509 standard is the most common. Table 5-3 shows an example of an X.509 certificate issued by Verisign.

Table 5-3: An X.509 certificate issued by Verisign

Field: Description of contents

  • Algorithm used for the signature: Algorithm used to sign the certificate
  • Issuer name: X.500 name of the CA
  • Period of validity: Start date/end date
  • Subject’s name: Owner of the public key
  • Subject’s public key information (algorithm, parameters, key): Public key and algorithm used to create it
  • Issuer unique identifier: Optional field used in case the CA used more than one X.500 name
  • Subject’s unique identifier: Optional field used in case the public key owner has more than one X.500 name
  • Extensions: Optional (version 3) extension fields
  • Digital signature of CA: Hash of the certificate encrypted with the private key of the CA

Fundamental Key Management Concepts

Perhaps the most important part of any cryptographic implementation is key management. Control over the issuance, revocation, recovery, distribution, and history of cryptographic keys is of utmost importance to any organization relying on cryptography for secure communications and data protection. The information security practitioner should know the importance of Kerckhoffs’s principle. Auguste Kerckhoffs wrote: “a cryptosystem should be secure even if everything about the system, except the key, is public knowledge.” 21 The key, therefore, is the true strength of the cryptosystem. The size of the key and the secrecy of the key are perhaps the two most important elements in a crypto implementation.

Advances in Key Management

Key management has become increasingly important due to critical business requirements for secure information sharing and collaboration in high risk environments. As a result, developers are seeing the need to embed security, particularly cryptography, directly into the application or network device. However, the complexity and specialized nature of cryptography means increased risk if not implemented properly. To meet this challenge, a number of standardized key management specifications are being developed and implemented for use as a sort of key management “plug-in” for such products.

XML (Extensible Markup Language), the flexible data framework that allows applications to communicate on the Internet, has become the preferred infrastructure for e-commerce applications. All of those transactions require trust and security, making it mission-critical to devise common XML mechanisms for authenticating merchants, buyers, and suppliers to each other, and for digitally signing and encrypting XML documents such as contracts and payment transactions. XML-based standards and specifications have been in development for use in the field of key management systems. Such specifications and standards are then implemented within web services libraries, provided by vendors or by open source collaborative efforts.

One such specification is the XML Key Management Specification 2.0 (XKMS).22 This specification defines protocols for distributing and registering public keys, suitable for use in conjunction with XML Digital Signatures23 and XML Encryption.24 XKMS, while very focused on key management, works in conjunction with other specifications that define protocols and services necessary to establish and maintain the trust needed for secure Web transactions.25 These basic mechanisms can be combined in various ways to accommodate building a wide variety of security models using a variety of cryptographic technologies. A design goal of XKMS is simplicity, based on the assumption that simplicity helps developers avoid mistakes and, as such, increases the security of applications. The XKMS protocol consists of pairs of requests and responses. XKMS protocol messages share a common format that may be carried within a variety of protocols; however, transporting XKMS messages via SOAP over HTTP is recommended for interoperability.

The two parts of the XML Key Management Specification 2.0 are the XML Key Information Service Specification (X-KISS) and the XML Key Registration Service Specification (X-KRSS). First, X-KISS describes a syntax that allows a client (i.e., application) to delegate part or all of the tasks required to process XML Signature <ds:KeyInfo> elements to a trust service. A key objective of the protocol design is to minimize the complexity of applications that use XML Digital Signatures. By becoming a client of the trust service, the application is relieved of the complexity and syntax of the underlying PKI used to establish trust relationships, which may be based upon a different specification such as X.509/PKIX, SPKI, PGP, Diffie–Hellman or Elliptic Curve, and can be extended for other algorithms. The <ds:KeyInfo> element in an XML Digital Signature is an optional element that enables the recipient to obtain cryptographic key-related data needed to validate the signature. The <ds:KeyInfo> element may contain the key itself, a key name, an X.509 certificate, a PGP key identifier, chain of trust, revocation list info, in-band key distribution or key agreement data, and so on. As an option, a link to the location where the full <ds:KeyInfo> data set can be found can also be provided.

For example, if using certificates, DSA, RSA, X.509, PGP, and SPKI are values that can be used in the <ds:KeyInfo> element of an XML Digital Signature. An application (client of the XKMS) would learn what public key cryptographic algorithm is being used for the transaction by reading, from a directory server, the <ds:KeyInfo> element of an XML Digital Signature using the X-KISS protocol of XKMS 2.0.
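As a concrete illustration, a minimal <ds:KeyInfo> element can be built with Python's standard XML tooling. The element names follow the XML Digital Signature namespace; the key name "TerrySigningKey" is invented for this sketch, and a real signature would typically carry certificate or key data rather than just a name.

```python
# Illustrative construction of a <ds:KeyInfo> element with a <ds:KeyName>
# child; "TerrySigningKey" is a made-up example value.
import xml.etree.ElementTree as ET

DS_NS = "http://www.w3.org/2000/09/xmldsig#"  # XML Digital Signature namespace
ET.register_namespace("ds", DS_NS)

key_info = ET.Element(f"{{{DS_NS}}}KeyInfo")
key_name = ET.SubElement(key_info, f"{{{DS_NS}}}KeyName")
key_name.text = "TerrySigningKey"

print(ET.tostring(key_info, encoding="unicode"))
```

The serialized output names the key, which an X-KISS Locate service could then resolve to the full key data on the application's behalf.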

Secondly, X-KRSS describes a protocol for registration of public key information. The key material can be generated by the X-KRSS, on request to support easier key recovery, or manually. The registration service can also be used to subsequently recover a private key. An application may request that the registration service (X-KRSS) bind information to a public key. The information bound may include a name, an identifier or other attributes defined by the implementation. After first registering a key pair, the key pair is then usable along with the X-KISS or a PKI such as X.509v3.

The XKMS service shields the client application from the complexities of the underlying PKI such as:

  • Handling of complex syntax and semantics (e.g., X.509v3)
  • Retrieval of information from directory/data repository infrastructure
  • Revocation status verification
  • Construction and processing of trust chains

Additional information about the signer’s public signing key (<ds:KeyInfo>) can be included inside the signature block, which can be used to help the verifier determine which public key certificate to select.

Information contained in the <ds:KeyInfo> element may or may not be cryptographically bound to the signature itself. Therefore, <ds:KeyInfo> element data can be replaced or extended without invalidating the digital signature. For example, Valerie signs a document and sends it to Jim with a <ds:KeyInfo> element that specifies only the signing key data. On receiving the message, Jim retrieves additional information required to validate the signature and adds this information into the <ds:KeyInfo> element when he passes the document on to Yolanda (see Figure 5-15).

The X-KISS Locate service resolves a <ds:KeyInfo> element but does not require the service to make an assertion concerning the validity of the binding between the data in the <ds:KeyInfo> element. The XKMS service can resolve the <ds:KeyInfo> element using a local information store or may relay the request to other directory servers. For example, the XKMS service might resolve a <ds:RetrievalMethod> element (Figure 5-16) or act as a gateway to an underlying PKI based on a non-XML syntax (e.g., X.509v3).

Figure 5-15: The XKMS service shields the client application from the complexities of the underlying PKI.

Figure 5-16: The XKMS service might resolve a <ds:RetrievalMethod> element.

Standards for Financial Institutions

ANSI X9.17 was developed to address the need of financial institutions to transmit securities and funds securely using an electronic medium. Specifically, it describes the means to ensure the secrecy of keys. The ANSI X9.17 approach is based on a hierarchy of keys. At the bottom of the hierarchy are data keys (DKs). Data keys are used to encrypt and decrypt messages. They are given short lifespans, such as one message or one connection. At the top of the hierarchy are master key-encrypting keys (KKMs).

KKMs, which must be distributed manually, are afforded longer lifespans than data keys. Using the two-tier model, the KKMs are used to encrypt the data keys. The data keys are then distributed electronically to encrypt and decrypt messages. The two-tier model may be enhanced by adding another layer to the hierarchy. In the three-tier model, the KKMs are not used to encrypt data keys directly, but to encrypt other key-encrypting keys (KKs). The KKs, which are exchanged electronically, are used to encrypt the data keys.
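The wrapping relationship in the two-tier model can be sketched as follows. XOR stands in for a real block cipher purely to show the structure of key wrapping; it would never be used this way in practice.

```python
# Toy sketch of the ANSI X9.17 two-tier hierarchy: a manually distributed
# master key-encrypting key (KKM) wraps a short-lived data key (DK).
# XOR is a stand-in for a real cipher, for illustration only.
import secrets

def xor_wrap(key, data):
    """Toy 'encryption' by XOR of equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(key, data))

kkm = secrets.token_bytes(16)             # master key, distributed manually
dk = secrets.token_bytes(16)              # per-message data key

wrapped_dk = xor_wrap(kkm, dk)            # DK travels encrypted under the KKM
recovered_dk = xor_wrap(kkm, wrapped_dk)  # receiver unwraps with its KKM
assert recovered_dk == dk
```

In the three-tier model, the same wrapping step is simply repeated: the KKM wraps a KK, and the KK wraps the DK.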

Segregation of Duties

Another aspect of key management is maintaining control over sensitive cryptographic keys in a way that enforces the need-to-know principle as part of a business process. For example, in many business environments, employees are required to maintain separation or segregation of duties. In other words, in such environments no one person is allowed to have full control over all phases of an entire transaction without some level of accountability enforcement. The more negotiable the asset under protection, the greater the need for proper segregation of duties. Especially in the area of cryptography, this is a business concern. Imagine the damage that could be done by a single dishonest person if allowed unchecked access to cryptographic keys that, for example, unlock high risk, high value, or high liquidity information such as customer financial accounts.

The segregation of duties is used as a cross-check to ensure that misuse and abuse of assets, due to innocent mistake or malicious intent, can be efficiently detected and prevented. This is an important confidentiality and integrity principle that is often misunderstood, judging by news reports of embezzlement schemes, primarily by employee insiders, that go undetected for long periods of time. The segregation of duties is primarily a business policy and access control issue. However, it may not be possible for smaller organizations, due to personnel constraints, to perform the segregation of all duties, so other compensating controls may have to be used to achieve the same control objective. Such compensating controls include monitoring of activities, audit trails, and management supervision. Two mechanisms necessary to implement high integrity cryptographic operations environments where separation of duties is paramount are dual control and split knowledge.

  1. Dual Control: Dual control is implemented as a security procedure that requires two or more persons to come together and collude to complete a process. In a cryptographic system the two (or more) persons would each supply a unique key that, when taken together, performs a cryptographic process. Split knowledge is the other complementary access control principle to dual control.
  2. Split Knowledge: Split knowledge is the unique “what each must bring” that is joined together when implementing dual control. To illustrate, a box containing petty cash is secured by one combination lock and one keyed lock. One employee is given the combination to the combo lock, and another employee has possession of the correct key to the keyed lock. In order to get the cash out of the box, both employees must be present at the cash box at the same time. One cannot open the box without the other. This is the aspect of dual control.

On the other hand, split knowledge is exemplified here by the different objects (the combination to the combo lock and the correct physical key), both of which are unique and necessary, that each brings to the meeting. Split knowledge focuses on the uniqueness of separate objects that must be joined together. Dual control has to do with forcing the collusion of at least two or more persons to combine their split knowledge to gain access to an asset. Both split knowledge and dual control complement each other and are necessary functions that implement the segregation of duties in high integrity cryptographic environments (see Table 5-4).
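The simplest cryptographic form of split knowledge is splitting a key into two XOR shares, which avoids the "key halves" pitfall shown in Table 5-4. The sketch below is illustrative; a production system would hold shares on separate authenticated tokens.

```python
# Minimal sketch of split knowledge via XOR secret splitting.
# Neither share alone reveals anything about the key (each share is
# uniformly random); both custodians must combine their shares
# (dual control) to reconstruct it.
import secrets

def split_key(key):
    """Split `key` into two shares whose XOR equals the key."""
    share1 = secrets.token_bytes(len(key))              # one-time random pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def join_shares(share1, share2):
    """Recombine two shares into the original key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

key = secrets.token_bytes(16)
s1, s2 = split_key(key)
assert join_shares(s1, s2) == key
```

Unlike splitting a key "in half," each XOR share is a full-length random value, so neither custodian can brute force the other's contribution.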

In cryptographic terms, one could say dual control and split knowledge are properly implemented if no one person has access to or knowledge of the content of the complete cryptographic key being protected by the two processes. The sound implementation of dual control and split knowledge in a cryptographic environment necessarily means that the quickest way to break the key would be through the best attack known for the algorithm of that key. The principles of dual control and split knowledge primarily apply to access to plaintext keys. Access to cryptographic keys used for encrypting and decrypting data, or access to keys that are encrypted under a master key (which may or may not be maintained under dual control and split knowledge), does not require dual control and split knowledge.

Table 5-4: Split knowledge and dual control complement each other and are necessary functions that implement segregation of duties in high-integrity cryptographic environments.

  • Bad example: Splitting a key “in half” to form two parts.
    Problem: Dual control but no split knowledge (assuming two people, each with a unique key half); one person could determine the key by brute forcing the other key half’s space.
    Compliant approach: Each person maintains control of his or her half of the key; protect each half with a unique PIN or passphrase.
  • Bad example: Storing key components on two cryptographic tokens with no further user authentication.
    Problem: No enforcement of split knowledge (i.e., no unique authentication method for individual accountability).
    Compliant approach: Each person maintains control of his or her individual token/smartcard; protect each smartcard with a unique PIN/passphrase.
  • Bad example: Storing a key on a single smartcard (or cryptographic token) that requires one or more passphrases to access.
    Problem: No dual control enforcement; a single card cannot be maintained by two or more persons.
    Compliant approach: Distribute a cryptographic token to each person; protect each token with a unique PIN/passphrase.

Dual control and split knowledge can be summed up as follows: determining any part of a protected key must require collusion between two or more persons, each supplying unique cryptographic material that must be joined together to access the protected key. If there is any feasible method to violate this axiom, the principles of dual control and split knowledge are not being upheld.

There are a number of applications that implement aspects of dual control and split knowledge in a scalable manner. For example, a commercial PGP product based on the OpenPGP standard has features for splitting private keys that are not part of the OpenPGP standard.25 These features use Blakely–Shamir secret sharing, an algorithm that allows the user to take a piece of data and break it into n shares, of which k are needed to retrieve the original data. Using a simple version of this approach, the user could break the data into three shares, two of which are needed to get the data back. In a more complex version, the user could require 3 of 6 or even 5 of 12 shares to retrieve the original data, with each key share protected with a unique passphrase known only to the key holder.
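A toy 2-of-n threshold scheme in the spirit of Blakely–Shamir sharing can be sketched over a small prime field. Real implementations share full-length keys and support arbitrary k-of-n thresholds; the modulus and secret here are deliberately tiny.

```python
# Toy 2-of-n threshold sharing: the secret is the constant term of a
# random degree-1 polynomial over GF(257); any two points recover it.
import random

P = 257  # small prime modulus, for illustration only

def make_shares(secret, n):
    """Split `secret` (an integer < P) into n shares; any 2 recover it."""
    a = random.randrange(1, P)   # random slope of the polynomial
    return [(x, (secret + a * x) % P) for x in range(1, n + 1)]

def recover(share1, share2):
    """Interpolate the degree-1 polynomial at x = 0 from two points."""
    (x1, y1), (x2, y2) = share1, share2
    slope = ((y2 - y1) * pow(x2 - x1, -1, P)) % P
    return (y1 - slope * x1) % P

shares = make_shares(123, 5)
assert recover(shares[0], shares[3]) == 123   # any two shares suffice
```

A single share reveals nothing: for any candidate secret there is a slope consistent with that one point, which is the property that makes each custodian's knowledge genuinely partial.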

Such a solution uses the basic form of secret sharing and shares the private key. This process permits a key pair to be controlled by a group of people, with some subgroup required to reconstitute and use the key. Other systems are based on key holders answering a series of questions in order to recover passwords needed to unlock a protected plaintext key.

To recreate the key under protection, a user can create a set of questions that contain some information only the user would know. The key is split to those questions, with some set of them being required to synthesize the key. Not only does the user provide individualized security questions that are unique to each key holder, but the user also decides how many of the questions need to be answered correctly to retrieve the key under protection, by having it reconstructed from the split parts.

Management and Distribution of Keys

The details of key creation using various algorithms were discussed earlier in this domain in the “Foundational Concepts” section. However, from a key management perspective there are a number of issues that pertain to scalability and cryptographic key integrity.

Automated Key Generation

Mechanisms used to automatically generate strong cryptographic keys can be used to deploy keys as part of key lifecycle management. Effective automated key generation systems are designed for user transparency as well as complete cryptographic key policy enforcement.

Truly Random

For a key to be truly effective, it must have an appropriately high work factor. That is to say, the amount of time and effort (work by an attacker) needed to break the key must be sufficient to at least delay its discovery for as long as the information being protected needs to be kept confidential. One factor that contributes to strong keys, which have a high work factor, is the level of randomness of the bits that make up the key.

Random

As discussed earlier, cryptographic keys are essentially strings of numbers. The numbers used in making up the key need to be unpredictable so that an attacker cannot easily guess the key and then expose the protected information. Thus, the randomness of the numbers that comprise a key plays an important role in the lifecycle of a cryptographic key. In the context of cryptography, randomness is the quality of lacking predictability. Randomness generated intrinsically by a computer system is also called pseudo randomness: the quality of an algorithm for generating a sequence of numbers that approximates the properties of random numbers. Computer circuits and software libraries perform the actual generation of pseudo random key values, and both are well known as weak sources of randomness.

Computers are inherently designed for predictability, not randomness; they are so thoroughly deterministic that they have a hard time generating high-quality randomness. Therefore, special purpose-built hardware and software, called random number generators (RNGs), are needed for cryptographic applications. The U.S. federal government provides recommendations on deterministic random number generators through NIST.26 An international standard for random number generation suitable for cryptographic systems is sponsored by the International Organization for Standardization as ISO 18031.27 A rigorous statistical analysis of the output is often needed to have confidence in such RNG algorithms. A random number generator based solely on deterministic computation cannot be regarded as a true random number generator sufficiently lacking in predictability for cryptographic applications, since its output is inherently predictable.

There are various methods for ensuring the appropriate level of randomness in pseudo random keys. The approach found in most business-level cryptographic products uses computational algorithms that produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. The use of initialization vectors (IVs) and seed values that are concatenated onto computer-generated keys increases the strength of keys by adding uniqueness to the random key material. The seed value or initialization vector is the number input as a starting point for an algorithm. The seed or IV can be created either manually or by an external source of randomness, such as radio frequency noise, randomly sampled values from a switched circuit, or other atomic and subatomic physical phenomena. To provide a degree of randomness intermediate between specialized hardware on the one hand and algorithmic generation on the other, some security-related computer software requires the user to input a lengthy string of mouse movements or keyboard input.
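The difference between a deterministic PRNG and an operating-system CSPRNG can be seen directly in Python's standard library: `random` is a seeded, fully reproducible generator, while `secrets` draws from the OS entropy source and is intended for key material.

```python
# `random` is a deterministic PRNG (Mersenne Twister): given the seed,
# every "key" it produces is predictable. `secrets` pulls from the
# operating system's CSPRNG and is meant for keys, nonces, and IVs.
import random
import secrets

# Predictable: the same seed always yields the same value.
random.seed(1234)
weak_key = random.getrandbits(128)
random.seed(1234)
assert weak_key == random.getrandbits(128)  # fully reproducible

# Unpredictable: suitable for cryptographic key material.
strong_key = secrets.token_bytes(16)   # 128-bit key
iv = secrets.token_bytes(16)           # per-message initialization vector
assert len(strong_key) == 16
```

This is exactly why the text warns that software libraries are weak sources of randomness: using the seeded generator for keys hands the attacker the whole keystream once the seed is known.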

Regarding manually created seed or initialization values, many may be familiar with this process if they have ever set up a wireless network with encryption using a WEP/WPA key. In most cases, when configuring wireless encryption on a wireless adapter or router, the user is asked to enter a password or variable-length “key” that is used by the wireless device to create cryptographic keys for encrypting data across the wireless network. This “key” is really a seed or initialization value that is concatenated to the computer-generated key portion; together they comprise the keying material used to generate a key with enough pseudo randomness to make it hard for an attacker to guess and thus “break” the key.

The important role randomness plays in key creation is illustrated by the following example. One method of generating a two-key encryption key set, consisting of a private component and a public component, comprises the following steps:

  1. Generate a first pseudo random prime number.
  2. Generate a second pseudo random prime number.
  3. Produce a modulus by multiplying the first pseudo random prime number by the second pseudo random prime number.
  4. Generate a first exponent by solving a first modular arithmetic equation.
  5. Generate a second exponent that is a modular inverse to the first exponent by solving a second modular arithmetic equation, and securely store either the first exponent or the second exponent in at least one memory location.
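The five steps above are the textbook RSA key-generation procedure. A minimal sketch with tiny fixed primes (real keys use randomly generated primes of 1024 bits or more, which is where the randomness requirement bites):

```python
# Textbook RSA key generation with deliberately tiny primes for
# readability -- a teaching sketch, not a usable key.
p, q = 61, 53                      # steps 1-2: two (pseudo random) primes
n = p * q                          # step 3: the modulus, n = 3233
phi = (p - 1) * (q - 1)            # value used by the modular equations
e = 17                             # step 4: public exponent, gcd(e, phi) == 1
d = pow(e, -1, phi)                # step 5: modular inverse of e mod phi

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
assert pow(ciphertext, d, n) == message  # decrypt with the private key (d, n)
```

If the primes in steps 1 and 2 were predictable, an attacker could reproduce them, factor n, and recompute d, which is why the preceding discussion of randomness matters so much here.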

Key Length

Key length is another important aspect of key management to consider when generating cryptographic keys. Key length is the size of a key, usually measured in bits or bytes, which a cryptographic algorithm uses in ciphering or deciphering protected information. Keys are used to control how an algorithm operates so that only the correct key can decipher the information. The resistance to successful attack against the key and the algorithm, an aspect of their cryptographic security, is of concern when choosing key lengths. An algorithm’s key length is distinct from its cryptographic security, which is a logarithmic measure of the fastest known computational attack on the algorithm, also measured in bits.

The security of an algorithm cannot exceed its key length. Therefore, it is possible to have a very long key that still provides low security. As an example, three-key (56 bits per key) Triple DES can have a key length of 168 bits but, due to the meet-in-the-middle attack, the effective security that it provides is at most 112 bits. However, most symmetric algorithms are designed to have security equal to their key length. A natural inclination is to use the longest key possible, which may make the key more difficult to break. However, the longer the key, the more computationally expensive the encrypting and decrypting process can be. The goal is to make breaking the key cost more (in terms of effort, time, and resources) than the worth of the information being protected and, if possible, not a penny more (to do more would not be economically sound).

The effectiveness of asymmetric cryptographic systems depends on the hard-to-solve nature of certain mathematical problems such as prime integer factorization. These problems are time consuming to solve but usually faster than trying all possible keys by brute force. Thus, asymmetric algorithm keys must be longer than symmetric algorithm keys for equivalent resistance to attack.

RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, and 3072-bit RSA keys to 128-bit symmetric keys. RSA claims that 2048-bit keys are sufficient until 2030. An RSA key length of 3072 bits should be used if security is required beyond 2030.28 NIST key management guidelines further suggest that 15,360-bit RSA keys are equivalent in strength to 256-bit symmetric keys.29

ECC can secure with shorter keys than those needed by other asymmetric key algorithms. NIST guidelines state that elliptic curve keys should be twice the length of equivalent-strength symmetric keys. For example, a 224-bit elliptic curve key would have roughly the same strength as a 112-bit symmetric key. These estimates assume no major breakthroughs in solving the underlying mathematical problems that ECC is based on.

Key Wrapping and Key Encrypting Keys

One role of key management is to ensure that the key a sender uses to encrypt a message is the same key the intended receiver uses to decrypt it. Thus, if Terry and Pat wish to exchange encrypted messages, each must be equipped to decrypt received messages and to encrypt sent messages. If they use a cipher, they will need appropriate keys. The problem is how to exchange whatever keys or other information are needed so that no one else can obtain a copy.

One solution is to protect the session key with a special purpose, long-term use key called a key encrypting key (KEK). KEKs are used as part of key distribution or key exchange. The process of using a KEK to protect session keys is called key wrapping. Key wrapping uses symmetric ciphers to securely encrypt (thus encapsulating) a plaintext key along with any associated integrity information and data. One application for key wrapping is protecting session keys in untrusted storage or when sending them over an untrusted transport. Key wrapping or encapsulation using a KEK can be accomplished using either symmetric or asymmetric ciphers. If the cipher is a symmetric KEK, both the sender and the receiver will need a copy of the same key. If using an asymmetric cipher, with public/private key properties, to encapsulate a session key, both the sender and the receiver will need the other’s public key.
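The wrap/unwrap idea can be sketched with only the standard library. Real systems use a standardized construction such as AES Key Wrap (RFC 3394); the toy below derives a keystream from the KEK to conceal the session key and attaches an HMAC tag as the integrity information the text mentions. All names here are illustrative.

```python
# Conceptual key-wrapping sketch (NOT RFC 3394): encrypt a session key
# under a long-term KEK and bind an integrity tag to the result.
import hashlib
import hmac
import secrets

def wrap_key(kek: bytes, session_key: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(kek + nonce).digest()[:len(session_key)]
    ct = bytes(a ^ b for a, b in zip(session_key, stream))
    tag = hmac.new(kek, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag              # blob: nonce || ciphertext || tag

def unwrap_key(kek: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(kek, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrapped key failed its integrity check")
    stream = hashlib.sha256(kek + nonce).digest()[:len(ct)]
    return bytes(a ^ b for a, b in zip(ct, stream))

kek = secrets.token_bytes(32)            # long-term key encrypting key
session_key = secrets.token_bytes(16)    # short-lived session key
assert unwrap_key(kek, wrap_key(kek, session_key)) == session_key
```

The blob can sit in untrusted storage or cross an untrusted transport; anyone tampering with it trips the integrity check, and only a holder of the KEK can recover the session key.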

Protocols such as SSL, PGP, and S/MIME use the services of KEKs to provide session key confidentiality and integrity, and sometimes to authenticate the binding of the session key originator and the session key itself to make sure the session key came from the real sender and not an attacker.

Key Distribution

Keys can be distributed in a number of ways. For example, two people who wish to perform key exchange can use a medium other than that through which secure messages will be sent; this is called out-of-band key exchange. If two or more parties will send secure messages via e-mail, they may choose to meet in person or send the keys via courier. The concept of out-of-band key exchange is not very scalable beyond a few people. A more scalable method of exchanging keys is through the use of a PKI key server. A key server is a central repository of the public keys of members of a group of users interested in exchanging keys to facilitate electronic transactions. Public key encryption provides a means to allow members of a group to conduct secure transactions spontaneously. The receiver’s public key certificate, which contains the receiver’s public key, is retrieved from the key server by the sender and used as part of a public key encryption scheme, such as S/MIME, PGP, or even SSL, to encrypt a message and send it. The digital certificate is the medium that contains the public key of each member of the group and makes the key portable, scalable, and easier to manage than an out-of-band method of key exchange.

Key Distribution Centers

Recall the formula used before to calculate the number of symmetric keys needed for n users: n(n − 1)/2. This necessitates the setup of directories, public key infrastructures, or key distribution centers.
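A quick calculation shows why pairwise symmetric keys stop scaling and these infrastructures become necessary:

```python
# Each pair of users needs its own symmetric key: n(n - 1) / 2 keys
# in total, versus one key pair per user under a PKI.
def pairwise_keys(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} users: {pairwise_keys(n):>7} symmetric keys vs {n} key pairs")

assert pairwise_keys(10) == 45
assert pairwise_keys(1000) == 499500
```

At a thousand users the symmetric approach already requires nearly half a million keys to generate, distribute, and protect.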

The use of a key distribution center (KDC) for key management requires the creation of two types of keys. The first are master keys, which are secret keys shared by each user and the KDC. Each user has his or her own master key, and it is used to encrypt the traffic between the user and the KDC. The second type of key is a session key, created when needed, used for the duration of the communications session, and then discarded once the session is complete. When a user wants to communicate with another user or an application, the KDC sets up the session key and distributes it to each user. An implementation of this solution is found in Kerberos. A large organization may even have several KDCs, and they can be arranged so that global KDCs coordinate the traffic between the local KDCs.

Because master keys are integral to the trust and security relationship between the users and hosts, such keys should never be used in compromised situations or where they may become exposed. For encrypting files or communications, separate non-master keys should be used. Ideally, a master key is never visible in the clear; it is buried within the equipment itself and is not accessible to the user.

Key Storage and Destruction

The proper storing and changing of cipher keys are important aspects of key management and are essential to the effective use of cryptography for security. Ultimately, the security of information protected by cryptography directly depends on the protection afforded to the keys. All keys need to be protected against modification, and secret and private keys need to be protected against unauthorized disclosure. Methods for protecting stored keying material include trusted, tamperproof hardware security modules; passphrase-protected smart cards; key wrapping the session keys using long-term storage KEKs; splitting cipher keys and storing them in physically separate locations; protecting keys with strong passwords/passphrases; key expiry; and the like.

In order to guard against a long-term cryptanalytic attack, every key must have an expiration date after which it is no longer valid. The key length must be long enough to make the chances of cryptanalysis before key expiration extremely small. The validity period for a key pair may also depend on the circumstances in which the key is used. A signature verification program should check for expiration and should not accept a message signed with an expired key. The fact that computer hardware continues to improve makes it prudent to replace expired keys with newer, longer keys every few years. Key replacement enables one to take advantage of hardware improvements to increase the security of the cryptosystem. According to NIST’s “Guideline for Implementing Cryptography in the Federal Government,” additional guidance for the storage of cipher keys includes:30

  • All centrally stored data related to user keys should be signed or have a MAC applied (MACed) for integrity, and encrypted if confidentiality is required (all user secret keys and CA private keys should be encrypted). Individual key records in a database, as well as the entire database, should be signed or MACed and encrypted. To enable tamper detection, each individual key record should be signed or MACed so that its integrity can be checked before allowing that key to be used in a cryptographic function.
  • Backup copies should be made of central/root keys, since the compromise or loss of those components could prevent access to keys in the central database and possibly deny system users the ability to decrypt data or perform signature verifications.
  • Provide key recovery capabilities. There must be safeguards to ensure that sensitive records are neither irretrievably lost by the rightful owners nor accessed by unauthorized individuals. Key recovery capabilities provide these functions.
  • Archive user keys for a sufficiently long crypto period. A crypto period is the time during which a key can be used to protect information; it may extend well beyond the lifetime of a key that is used to apply cryptographic protection (where the lifetime is the time during which a key can be used to generate a signature or perform encryption). Keys may be archived for a lengthy period (on the order of decades) so that they can be used to verify signatures and decrypt ciphertext.
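The expiration check a signature verifier performs before any cryptographic use can be sketched in a few lines. The field names below are illustrative, not from any particular standard.

```python
# Minimal key-validity check: a key record carries a validity window,
# and keys outside it are rejected before any cryptographic use.
from datetime import datetime, timedelta, timezone

def key_is_valid(not_before, not_after, at=None) -> bool:
    """True only when `at` falls inside the key's validity window."""
    at = at or datetime.now(timezone.utc)
    return not_before <= at <= not_after

issued = datetime(2023, 1, 1, tzinfo=timezone.utc)
expires = issued + timedelta(days=365)     # a one-year crypto period

assert key_is_valid(issued, expires, at=issued + timedelta(days=30))
assert not key_is_valid(issued, expires, at=expires + timedelta(days=1))
```

A verifier would run this check, then fall back to the archived key material described above when validating signatures made before the key expired.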

Among the factors affecting the risk of exposure are:31

  • The strength of the cryptographic mechanisms (e.g., the algorithm, key length, block size, and mode of operation)
  • The embodiment of the mechanisms (e.g., a FIPS 140-2 Level 4 implementation, or a software implementation on a personal computer)
  • The operating environment (e.g., a secure limited-access facility, an open office environment, or a publicly accessible terminal)
  • The volume of information flow or the number of transactions
  • The security life of the data
  • The security function (e.g., data encryption, digital signature, key production or derivation, key protection)
  • The re-keying method (e.g., keyboard entry, re-keying using a key loading device where humans have no direct access to key information, remote re-keying within a PKI)
  • The key update or key derivation process
  • The number of nodes in a network that share a common key
  • The number of copies of a key and the distribution of those copies
  • The threat to the information (e.g., whom the information is protected from, and what are their perceived technical capabilities and financial resources to mount an attack)

In general, short crypto periods enhance security. For example, some cryptographic algorithms might be less vulnerable to cryptanalysis if the adversary has only a limited amount of information encrypted under a single key. Caution should be used when deleting keys that are no longer needed. A simple deletion of the keying material might not completely obliterate the information; erasing it might require overwriting the storage multiple times with other non-related information, such as random bits, or all zero or one bits. Keys stored in memory for a long time can become “burned in.” This can be mitigated by splitting the key into components that are frequently updated, as shown in Figure 5-17.
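The multi-pass overwrite idea can be sketched as a best-effort zeroization routine. This is only conceptual: a managed runtime like Python cannot guarantee no copies survive (swap, garbage-collected temporaries), which is one reason hardware security modules handle destruction in practice.

```python
# Best-effort key zeroization sketch: keep the key in a mutable
# bytearray (immutable bytes cannot be wiped in place) and overwrite
# it several times with random bits, then zeros, before release.
import secrets

def wipe(buf: bytearray, passes: int = 3) -> None:
    for _ in range(passes):
        buf[:] = secrets.token_bytes(len(buf))  # overwrite with random bits
    buf[:] = bytes(len(buf))                    # final pass: all zeros

key = bytearray(secrets.token_bytes(32))
wipe(key)
assert key == bytearray(32)  # buffer now holds only zero bytes
```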

c05f017.tif

Figure 5-17: Recommended crypto periods for key types

On the other hand, where manual key distribution methods are subject to human error and frailty, more frequent key changes might actually increase the risk of exposure. In these cases, especially when very strong cryptography is employed, it may be more prudent to have fewer, well-controlled manual key distributions rather than more frequent, poorly controlled ones. Secure automated key distribution, where key generation and exchange are protected by appropriate authentication, access, and integrity controls, may be a compensating control in such environments.

Users with different roles should have keys with lifetimes that take into account those roles and responsibilities, the applications for which the keys are used, and the security services that the keys provide (user/data authentication, confidentiality, data integrity, etc.). Reissuing keys should not be done so often that it becomes excessively burdensome; however, it should be performed often enough to minimize the loss caused by a possible key compromise.

Handle the deactivation/revocation of keys so that data signed prior to a compromise date (or date of loss) can be verified. When a signing key is designated as “lost” or “compromised,” signatures generated prior to the specified date may still need to be verified in the future. Therefore, a signature verification capability may need to be maintained for lost or compromised keys. Otherwise, all data previously signed with a lost or compromised key would have to be re-signed.

Cost of Certificate Replacement/Revocation

In some cases, the costs associated with changing digital certificates and cryptographic keys are painfully high. Examples include decryption and subsequent re-encryption of very large databases, decryption and re-encryption of distributed databases, and revocation and replacement of a very large number of keys (e.g., where there are very large numbers of geographically and organizationally distributed key holders). In such cases, the expense of the security measures necessary to support longer crypto periods may be justified (e.g., costly and inconvenient physical, procedural, and logical access security, and use of cryptography strong enough to support longer crypto periods even where this may result in significant additional processing overhead).

In other cases, the crypto period may be shorter than would otherwise be necessary; for example, keys may be changed frequently in order to limit the period of time the key management system maintains status information. On the other hand, a user losing a private key would require that the lost key be revoked so that an unauthorized user cannot use it. It is good practice to use a master decryption key (the additional decryption key in PGP) or another key recovery mechanism to guard against losing access to the data encrypted under the lost key. Another reason to revoke a certificate is when an employee leaves the company or, in some cases, changes job roles, as when someone moves to a more trusted job role that may require a different level of accountability, access to higher-risk data, and so on.

Key Recovery

A lost key may mean a crisis for an organization. The loss of critical data or backups may cause widespread damage to operations and even financial ruin or penalties. There are several methods of key recovery, such as common trusted directories or a policy that requires all cryptographic keys to be registered with the security department. Some people have even used steganography to bury their passwords in pictures or other locations on their machines to prevent someone from finding their password file. Others use password wallets or other tools to hold all of their passwords.

One method is multiparty key recovery. A user would write her private key on a piece of paper and then divide the key into two or more parts. Each part would be sealed in an envelope. The user would give one envelope each to trusted people with instructions that the envelope was only to be opened in an emergency where the organization needed access to the user’s system or files (such as the disability or death of the user). In case of an emergency, the holders of the envelopes would report to human resources, where the envelopes could be opened and the key reconstructed. The user would usually give the envelopes to trusted people at different management levels and in different parts of the company to reduce the risk of collusion.

Key recovery should also be conducted with the privacy of the individual in mind. If a private individual used encryption to protect the confidentiality of some information, it may be legally protected according to local laws. In some situations, a legal order may be required to retrieve the key and decrypt the information.

Key Escrow

Key escrow is the process of ensuring that a third party maintains a copy of a private key or of a key needed to decrypt information. Key escrow should be considered mandatory for most organizations’ use of cryptography, because encrypted information belongs to the organization and not to the individual, even though an individual’s key is often used to encrypt the information. There must be explicit trust between the key escrow provider and the parties involved, as the escrow provider now holds a copy of the private key and could use it to reveal information. Conditions of key release must be explicitly defined and agreed upon by all parties.

Web of Trust

In cryptography, a web of trust is a concept used in PGP, GnuPG, and other OpenPGP-compatible systems to establish the authenticity of the binding between a public key and its owner. Its decentralized trust model is an alternative to the centralized trust model of a public key infrastructure (PKI), which relies exclusively on a certificate authority (or a hierarchy of such). As with computer networks, there are many independent webs of trust, and any user (through their identity certificate) can be a part of, and a link between, multiple webs. The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0:

As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.

Secure Protocols

IP Security (IPsec) is a suite of protocols for communicating securely over IP by providing mechanisms for authentication and encryption. Implementation of IPsec is mandatory in IPv6, and many organizations also use it over IPv4. Further, IPsec can be implemented in two modes, one that is appropriate for end-to-end protection and one that safeguards traffic between networks. Standard IPsec only authenticates hosts with each other. If an organization requires users to authenticate, it must employ a nonstandard proprietary IPsec implementation or use IPsec over L2TP (Layer 2 Tunneling Protocol). The latter approach uses L2TP to authenticate the users and encapsulates IPsec packets within an L2TP tunnel. Because IPsec interprets the change of IP address within packet headers as an attack, NAT does not work well with IPsec. To resolve the incompatibility of the two protocols, NAT traversal (aka NAT-T) encapsulates IPsec within UDP port 4500 (see RFC 3948 for details).32

Authentication Header (AH)

The Authentication Header (AH) is used to prove the identity of the sender and ensure that the transmitted data has not been tampered with. Before each packet (headers + data) is transmitted, a hash value of the packet’s contents (except for the fields that are expected to change when the packet is routed), based on a shared secret, is inserted in the last field of the AH. The endpoints negotiate which hashing algorithm to use and the shared secret when they establish their security association. To help thwart replay attacks (in which a legitimate session is retransmitted to gain unauthorized access), each packet transmitted during a security association has a sequence number, which is stored in the AH. In transport mode, the AH is shimmed between the packet’s IP and TCP headers. The AH helps ensure integrity, not confidentiality; encryption is implemented through the use of the Encapsulating Security Payload (ESP).
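The two AH mechanisms just described, a keyed hash over the packet contents and a monotonically increasing sequence number, can be sketched as follows. The field layout is illustrative, not the actual AH wire format.

```python
# Sketch of the AH idea: an HMAC integrity check value (ICV) computed
# over the packet plus an anti-replay sequence number. Toy layout only.
import hashlib
import hmac

shared_secret = b"negotiated-during-SA-setup"   # placeholder value

def make_ah(seq: int, payload: bytes) -> bytes:
    header = seq.to_bytes(4, "big")
    icv = hmac.new(shared_secret, header + payload, hashlib.sha256).digest()
    return header + icv + payload               # seq || ICV || payload

def verify_ah(packet: bytes, last_seq: int) -> bytes:
    seq = int.from_bytes(packet[:4], "big")
    icv, payload = packet[4:36], packet[36:]
    if seq <= last_seq:
        raise ValueError("replayed or out-of-order sequence number")
    expected = hmac.new(shared_secret, packet[:4] + payload,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(icv, expected):
        raise ValueError("integrity check value mismatch")
    return payload

pkt = make_ah(seq=1, payload=b"hello")
assert verify_ah(pkt, last_seq=0) == b"hello"
try:
    verify_ah(pkt, last_seq=1)    # same sequence number again: replay
except ValueError:
    pass
```

Note that the payload travels in the clear; as the text says, AH gives integrity and replay protection, while confidentiality is ESP's job.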

Encapsulating Security Payload (ESP)

The Encapsulating Security Payload (ESP) encrypts IP packets and ensures their integrity. ESP contains four sections:

  • ESP header—Contains information showing which security association to use and the packet sequence number. Like the AH, the ESP sequences every packet to thwart replay attacks.
  • ESP payload—The payload contains the encrypted part of the packet. If the encryption algorithm requires an initialization vector (IV), it is included with the payload. The endpoints negotiate which encryption to use when the security association is established. Because packets must be encrypted with as little overhead as possible, ESP typically uses a symmetric encryption algorithm.
  • ESP trailer—May include padding (filler bytes) if required by the encryption algorithm or to align fields.
  • Authentication—If authentication is used, this field contains the integrity check value (hash) of the ESP packet. As with the AH, the authentication algorithm is negotiated when the endpoints establish their security association.

Security Associations

A security association (SA) defines the mechanisms that an endpoint will use to communicate with its partner. All SAs cover transmissions in one direction only, so a second SA must be defined for two-way communication. Mechanisms defined in the SA include the encryption and authentication algorithms and whether to use the AH or ESP protocol. Deferring the mechanisms to the SA, as opposed to specifying them in the protocol, allows the communicating partners to use the appropriate mechanisms based on situational risk.
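An SA is essentially a record of negotiated parameters. A minimal sketch, with illustrative field names (a real SA also carries keys, lifetimes, and mode):

```python
# A security association as a record of negotiated mechanisms; one SA
# per direction, so two-way traffic needs a pair.
from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityAssociation:
    spi: int                 # security parameter index identifying this SA
    protocol: str            # "AH" or "ESP"
    encryption_alg: str      # e.g., "AES-256-CBC" (ESP only)
    auth_alg: str            # e.g., "HMAC-SHA256"
    direction: str           # "outbound" or "inbound"

outbound = SecurityAssociation(0x1001, "ESP", "AES-256-CBC",
                               "HMAC-SHA256", "outbound")
inbound = SecurityAssociation(0x2002, "ESP", "AES-256-CBC",
                              "HMAC-SHA256", "inbound")
assert outbound.direction != inbound.direction   # one SA per direction
```

Keeping the mechanisms in this record, rather than hard-coding them in the protocol, is what lets the peers pick algorithms appropriate to the situational risk.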

Transport Mode and Tunnel Mode

Endpoints communicate with IPsec using either transport or tunnel mode. In transport mode, the IP payload is protected. This mode is mostly used for end-to-end protection, for example, between client and server. In tunnel mode, the IP payload and its IP header are protected. The entire protected IP packet becomes the payload of a new IP packet and header. Tunnel mode is often used between networks, such as with firewall-to-firewall VPNs.

Internet Key Exchange (IKE)

Internet Key Exchange (IKE) allows communicating partners to prove their identity to each other and establish a secure communication channel, and is applied as an authentication component of IPsec. IKE uses two phases:

  1. Phase 1: In this phase, the partners authenticate with each other, using one of the following:
  • Shared secret—A key that is exchanged by humans via telephone, fax, encrypted e-mail, etc.
  • Public key encryption—Digital certificates are exchanged.
  • Revised mode of public key encryption—To reduce the overhead of public key encryption, a nonce (a number or bit string used only once) is encrypted with the communicating partner’s public key, and the peer’s identity is encrypted with symmetric encryption using the nonce as the key.
  Next, IKE establishes a temporary security association and secure tunnel to protect the rest of the key exchange.
  2. Phase 2: The peers’ security associations are established, using the secure tunnel and temporary SA created at the end of Phase 1.

High Assurance Internet Protocol Encryptor (HAIPE)

Based on IPsec, the High Assurance Internet Protocol Encryptor (HAIPE) adds restrictions and enhancements, for instance, the ability to encrypt multicast data using high-assurance hardware encryption, which requires that the same key be manually loaded on all communicating devices. HAIPE is an extension of IPsec used for highly secure communications such as those employed by military applications.

Secure Socket Layer/Transport Layer Security (SSL/TLS)

Secure Socket Layer/Transport Layer Security (SSL/TLS) is primarily used to encrypt confidential data sent over an insecure network such as the Internet. In the HTTPS protocol, the types of data encrypted include the URL, the HTTP header, cookies, and data submitted through forms. A web page secured with SSL/TLS has a URL that begins with https://.

According to the Microsoft TechNet site, “The SSL/TLS security protocol is layered between the application protocol layer and TCP/IP layer, where it can secure and then send application data to the transport layer. Because it works between the application layer and the TCP/IP layer, SSL/TLS can support multiple application layer protocols.

“The SSL/TLS protocol can be divided into two layers. The first layer is the handshake protocol layer, which consists of three sub-protocols: the handshake protocol, the change cipher spec protocol, and the alert protocol. The second layer is the record protocol layer. Figure 5-18 illustrates the various layers and their components.

c05f018.tif

Figure 5-18: SSL/TLS protocol layers

“SSL/TLS uses both symmetric key and asymmetric key encryption. SSL/TLS uses public key encryption to authenticate the server to the client, and optionally the client to the server. Public key cryptography is also used to establish a session key. The session key is used in symmetric algorithms to encrypt the bulk of the data. This combines the benefit of asymmetric encryption for authentication with the faster, less processor-intensive symmetric key encryption for the bulk data.”33

Secure/Multipurpose Internet Mail Extensions (S/MIME)34

According to the Microsoft TechNet site, “Secure/Multipurpose Internet Mail Extensions (S/MIME) is a widely accepted method, or more precisely a protocol, for sending digitally signed and encrypted messages. S/MIME allows you to encrypt emails and digitally sign them. When you use S/MIME with an email message, it helps the people who receive that message to be certain that what they see in their inbox is the exact message that started with the sender. It will also help people who receive messages to be certain that the message came from the specific sender and not from someone pretending to be the sender. To do this, S/MIME provides for cryptographic security services such as authentication, message integrity, and non-repudiation of origin (using digital signatures). It also helps enhance privacy and data security (using encryption) for electronic messaging.”35

Going Hands-On with Cryptography: Cryptography Exercise

Cryptography is a modern cornerstone of secure information storage and transmission. This exercise is designed to provide the security practitioner with an overview of public key infrastructure using GNU Privacy Guard (GPG).

Requirements

The following items are necessary to perform this exercise:

Be sure to use different random account names than those listed in the exercise.

Setup

For this exercise two simulated users will be present. Andy will be the first user and Rae will be the second. They both have a need to communicate securely through email with each other. They have never met in person and therefore have never had a chance to exchange a symmetric encryption key with each other. They suspect PKI might be able to help them.

Set up the first user account by setting up a free email account. (Andy will use Gmail [] and Rae will use Hotmail [] for this example.) You’ll notice they used addresses that do not directly relate to their real names. They have a need for discretion and confidentiality in their work!

Once that is completed, download each software package (Claws Mail and Thunderbird) from the download sites.

First User Email Setup

For this example the fictitious Andy account () is going to use the Claws Mail email client. Since Andy is using Claws, he will need to ensure he has installed the GnuPG core package first. (See Figure 5-19.) Download the system-appropriate GnuPG package from https://www.gnupg.org/download/index.html.

c05f019.tif

Figure 5-19: Choosing components during GnuPG setup

  1. During the first launch of Claws, it will ask for account information. Choose IMAP and auto-configure. See Figure 5-20.
  2. Next configure the SMTP settings. See Figure 5-21.

    c05f020.tif

    Figure 5-20: Setting up Claws account information

    c05f021.tif

    Figure 5-21: SMTP server setup in Claws

  3. Click Forward and then Save. Depending on the email provider, an SSL certificate error may be present. Since this is simply an exercise, click Accept and Save to continue to the main mailbox.
  4. Once opened, a message warning about keys should pop up as shown in Figure 5-22.

c05f022.tif

Figure 5-22: No PGP key warning

  5. Click Yes to create a new key pair. On the screen that follows, create a new passphrase for the encryption keys. See Figure 5-23. This should be different from the passphrase developed for the email account.
  6. Once the password is confirmed, a key creation dialog box will prompt the user to randomly move the mouse to help create entropy for the encryption key pair. Once complete, the user is given the option to publish the public key to a keyserver. Click Yes and upload Andy’s public key to a keyserver.

    In Windows you may get an error stating the key cannot be exported. If this error comes up, perform steps 7 through 9 to publish the public key. Otherwise skip to step 10.

    c05f023.tif

    Figure 5-23: Entering a passphrase for new key creation

  7. Click the Start menu and click GPA. See Figure 5-24.

c05f024.tif

Figure 5-24: Accessing GPA from the Windows Start menu

  8. Andy’s key should be listed in the key manager. Right-click the entry for Andy and choose Send Keys. See Figure 5-25.
  9. A warning will be provided regarding sending keys to a server. See Figure 5-26. Choose Yes and a confirmation should appear. This concludes the additional information necessary should the automatic key upload for Windows fail.

    c05f025.tif

    Figure 5-25: Finding a key in the key manager

    c05f026.tif

    Figure 5-26: Exporting a key to the keyserver

  10. Next, the proper plugins should be installed for Claws. Click Configuration and then Plugins. See Figure 5-27.
  11. Ensure the three plugins shown in Figure 5-28 are selected, and if not, browse to them through the Load button and select them.

    c05f027.tif

    Figure 5-27: Accessing the plug-ins menu

    c05f028.tif

    Figure 5-28: Selecting and loading plug-ins

  12. Once loaded, click Close.

The first email user is now configured with a public and private key along with an email account and an email client.

Second User Email Setup

The second user, Rae (), uses Hotmail and Mozilla Thunderbird for her email. To set up her account, install Thunderbird from the download file. When Thunderbird first launches, it will prompt for account creation.

  1. Thunderbird may prompt for the creation of an email address. Choose Skip this and use my existing email. See Figure 5-29.

    c05f029.tif

    Figure 5-29: Setting up a user in Thunderbird

  2. Next enter the email information for the second user. See Figure 5-30.
  3. Click Continue. Thunderbird automatically detects the majority of major email providers. If it detects the settings for the selected account, simply click Done. If it doesn’t detect the settings, review the email service provider’s self-help for information regarding server settings and manually configure the client. See Figure 5-31.

    c05f030.tif

    Figure 5-30: Entering account information

    c05f031.tif

    Figure 5-31: Mail account setup

  4. Once completed, the next step is to set up the public and private key for the second user. Doing this in Thunderbird requires an additional plugin called Enigmail. To enable Enigmail, click the Thunderbird menu and choose Add-ons. See Figure 5-32.
  5. In the Add-ons Manager, type Enigmail in the search box and hit the search button. See Figure 5-33.

    c05f032.tif

    Figure 5-32: Accessing the Add-ons menu

    c05f033.tif

    Figure 5-33: Searching for a specific add-on

  6. When the search results return, click Install for Enigmail. See Figure 5-34.

    c05f034.tif

    Figure 5-34: Installing an add-on

  7. Once the install is complete, choose Restart Now. See Figure 5-35.

    c05f035.tif

    Figure 5-35: Restarting Thunderbird after add-on installation is complete

  8. When Thunderbird restarts, the Enigmail wizard will start as well. See Figure 5-36.

    c05f036.tif

    Figure 5-36: Enigmail setup wizard

  9. Choose Yes, I would like the wizard to get me started and Next.
  10. On the next screen choose Convenient auto encryption and Next. See Figure 5-37.

    c05f037.tif

    Figure 5-37: Configuring encryption settings

  11. On the next screen choose to sign by default and click Next. See Figure 5-38.

c05f038.tif

Figure 5-38: Configuring your digital signature

  12. On the next screen choose Yes and choose Next. See Figure 5-39.

c05f039.tif

Figure 5-39: Preference settings

  13. On the next screen there may be a prompt to use the first user’s certificates. For this exercise do not choose the first user (as they are going to use the Claws client). Instead choose to create a new key pair and click Next. See Figure 5-40.

c05f040.tif

Figure 5-40: Creating a key to sign and encrypt email

  14. Next create a passphrase for the second user (Rae) key pair. See Figure 5-41.

    c05f041.tif

    Figure 5-41: Configuring a passphrase to be used to protect the private key

  15. Click Next two more times and the key creation process will start for the second user (Rae). See Figure 5-42.

    c05f042.tif

    Figure 5-42: Key generation

  16. When completed, an option to generate a revocation certificate is offered. Generate the certificate and store it in a known location. See Figure 5-43.

    c05f043.tif

    Figure 5-43: Revocation certificate creation

  17. Next there will be a prompt to enter the key pair password for the second user (Rae). Enter the password created when generating the keys. See Figure 5-44.

    c05f044.tif

    Figure 5-44: Passphrase entry to unlock the secret key

  18. A notice will appear regarding the use of the revocation certificate. Click OK and then Finish. If the Add-ons Manager is still open, close it and return to the main mail window.

Both users (Andy and Rae) now have secure email clients and key pairs for PKI.

Key Exchange and Sending Secure Email

This section explains how Andy and Rae exchange keys and how they securely send each other email. If the email clients and GPA are still open, close them before starting this section.

Since Andy and Rae have never seen each other and don’t trust many people, they would not want to use a “shared secret” like a single password that they could both use to encrypt emails to each other. With PKI they don’t need a shared secret key. All they need is each other’s public key!

First User (Andy) Gets the Public Key of the Second User (Rae)

Now that the PKI is set up, Andy and Rae are excited to start sharing secure emails. To look up Rae’s public key, Andy needs to pull it off the public key server. To do that he will need Rae’s email address. He asks Rae what email address she is using and she replies with .

  1. Now that Andy knows Rae’s email address, he can look up her public key automatically using his email client (Claws). He opens his email client and clicks the Compose button. See Figure 5-45.

    c05f045.tif

    Figure 5-45: Composing a message

  2. In the blank message window, he clicks Privacy System and switches it to PGP Inline. He then checks the encryption option on the menu for the message. This means the body of the message will be encrypted. See Figure 5-46.
  3. In the message he types Rae’s address () and a test message, and when he is done he clicks Send. See Figure 5-47.
  4. A warning message will come up and explain that only the body of the message is encrypted. Click Continue.

c05f046.tif

Figure 5-46: Accessing the privacy system and encryption options

c05f047.tif

Figure 5-47: Sending a message

The message will be sent to Rae and it will be encrypted. The plugins automatically search for the email address provided () and use the appropriate public key if it is available on the server.

Rae Receives the Message and Responds

Once the message is sent, Rae can check her email and see if Andy’s message is indeed encrypted.

  1. Close Claws and open Thunderbird.
  2. When Thunderbird opens, check the inbox. There should be a message there from Andy. Clicking the message will provide a preview below. Notice the preview is ciphertext! It looks like Andy successfully encrypted the message! Clicking the message will prompt for Rae’s password. Rae must enter her password to unlock her private key (which is the only key that can decrypt the message).
  3. Enter Rae’s private key password. (It is the same password created when the key pair was generated.) See Figure 5-48.

    c05f048.tif

    Figure 5-48: Entering a private key passphrase to decrypt a message

    Once the password is entered, the message decrypts into its original form. See Figure 5-49.

    Rae is worried that the message might actually be unencrypted on the server as well, so she logs into the webmail site to see what the message looks like on the server. To her relief it is encrypted on the server because the encryption and decryption take place locally! See Figure 5-50.

    c05f049.tif

    Figure 5-49: Decrypted message

    c05f050.tif

    Figure 5-50: Checking on the encryption of the mail message on the email server

    Rae decides to send Andy an encrypted response.

  4. First she needs to look up his public key. In Thunderbird she clicks the Thunderbird menu, then Enigmail, then Key Management. See Figure 5-51.

c05f051.tif

Figure 5-51: Accessing the Thunderbird Enigmail Key Management menu

  5. In the Key Management menu, she picks Keyserver and then Search. She puts Andy’s email address into the search box and hits the search key. See Figure 5-52.

    c05f052.tif

    Figure 5-52: Searching for a public key

  6. Andy’s public key is found through the search. Rae clicks OK to import the public key and OK once more on the confirmation screen. Now that Rae has Andy’s public key, she can send him an encrypted response.
  7. She clicks Reply on the message in Thunderbird and creates a quick response. She then clicks the Enigmail button and ensures inline PGP is used to encrypt the message back to Andy, while digital signatures are turned off. See Figure 5-53.

    c05f053.tif

    Figure 5-53: Enigmail encryption and signing settings

  8. Rae clicks OK on the Enigmail options, composes a short reply, and hits Send. Rae closes Thunderbird.
  9. Andy opens Claws and sees he has a new message from Rae. When he clicks the message, he is prompted for his private key passphrase to complete the decryption. See Figure 5-54.

    The message decrypts and Andy is able to read Rae’s response. See Figure 5-55.

    c05f054.tif

    Figure 5-54: Entering the passphrase to unlock the secret key for the OpenPGP certificate

    c05f055.tif

    Figure 5-55: Reading the decrypted message

    Andy is impressed but wants to ensure the message is encrypted on the server. He logs into the webmail interface of his email provider and views the message there. See Figure 5-56.

    c05f056.tif

    Figure 5-56: Viewing the encrypted message on the email server

Andy is pleased to see the messages are indeed encrypted on the server. Andy and Rae continue to send encrypted messages back and forth. Each has fewer worries about an email server breach since they each control their own encryption.

Conclusion

The email encryption tools illustrated as part of this exercise can be easily deployed by anyone. The exercise provided two different email clients and an open source PGP encryption suite that is suitable for a variety of platforms and systems. While digital signatures were not explicitly covered as part of this exercise, they may also be performed using the tools demonstrated.

Summary

The areas touched on in the Cryptography domain include the fundamental concepts of cryptography such as hashing, salting, digital signatures, and symmetric/asymmetric cryptographic systems. In addition, the need to understand and support secure protocols to help operate and implement cryptographic systems was also discussed. The SSCP needs to be aware of the importance of cryptography and its impact on the ability to ensure both confidentiality and integrity within the organization.

Sample Questions

  1. Applied against a given block of data, a hash function creates:
    1. A chunk of the original block used to ensure its confidentiality
    2. A block of new data used to ensure the original block’s confidentiality
    3. A chunk of the original block used to ensure its integrity
    4. A block of new data used to ensure the original block’s integrity
  2. In symmetric key cryptography, each party should use:
    1. A publicly available key
    2. A previously exchanged secret key
    3. A randomly generated value unknown to everyone
    4. A secret key exchanged with the message
  3. Nonrepudiation of a message ensures that the message:
    1. Can be attributed to a particular author.
    2. Is always sent to the intended recipient.
    3. Can be attributed to a particular recipient.
    4. Is always received by the intended recipient.
  4. In Electronic Code Book (ECB) mode, data are encrypted using:
    1. A cipher based on the previous block of a message
    2. A user-generated variable-length cipher for every block of a message
    3. A different cipher for every block of a message
    4. The same cipher for every block of a message
  5. In Cipher Block Chaining (CBC) mode, the key is constructed by:
    1. Generating new key material completely at random
    2. Cycling through a list of user-defined choices
    3. Modifying the previous block of ciphertext
    4. Reusing the previous key in the chain of message blocks
  6. Stream ciphers are normally selected over block ciphers because of:
    1. The high degree of strength behind the encryption algorithms
    2. The high degree of speed behind the encryption algorithms
    3. Their ability to use large amounts of padding in encryption functions
    4. Their ability to encrypt large chunks of data at a time
  7. A key escrow service is intended to allow for the reliable:
    1. Recovery of inaccessible private keys
    2. Recovery of compromised public keys
    3. Transfer of inaccessible private keys between users
    4. Transfer of compromised public keys between users
  8. The correct choice for encrypting the entire original data packet in a tunneled mode for an IPsec solution is:
    1. Generic Routing Encapsulation (GRE)
    2. Authentication Header (AH)
    3. Encapsulating Security Payload (ESP)
    4. Point-to-Point Tunneling Protocol (PPTP)
  9. When implementing an MD5 solution, what randomizing cryptographic function should be used to help avoid collisions?
    1. Multistring concatenation
    2. Modular addition
    3. Message pad
    4. Salt
  10. Key clustering represents the significant failure of an algorithm because:
    1. A single key should not generate different ciphertext from the same plaintext, using the same cipher algorithm.
    2. Two different keys should not generate the same ciphertext from the same plaintext, using the same cipher algorithm.
    3. Two different keys should not generate different ciphertext from the same plaintext, using the same cipher algorithm.
    4. A single key should not generate the same ciphertext from the same plaintext, using the same cipher algorithm.
  11. Asymmetric key cryptography is used for the following:
    1. Asymmetric key cryptography is used for the following:
    2. Encryption of data, nonrepudiation, access control
    3. Nonrepudiation, steganography, encryption of data
    4. Encryption of data, access control, steganography
  12. Which of the following algorithms supports asymmetric key cryptography?
    1. Diffie-Hellman
    2. Blowfish
    3. SHA-256
    4. Rijndael
  13. A certificate authority (CA) provides which benefit to a user?
    1. Protection of public keys of all users
    2. History of symmetric keys
    3. Proof of nonrepudiation of origin
    4. Validation that a public key is associated with a particular user
  14. What is the output length of a RIPEMD-160 hash?
    1. 150 bits
    2. 128 bits
    3. 160 bits
    4. 104 bits
  15. ANSI X9.17 is concerned primarily with:
    1. Financial records and retention of encrypted data
    2. The lifespan of master key-encrypting keys (KKMs)
    3. Formalizing a key hierarchy
    4. Protection and secrecy of keys
  16. What is the input that controls the operation of the cryptographic algorithm?
    1. Decoder wheel
    2. Encoder
    3. Cryptovariable
    4. Cryptographic routine
  17. AES is a block cipher with variable key lengths of:
    1. 128, 192, or 256 bits
    2. 32, 128, or 448 bits
    3. 8, 64, or 128 bits
    4. 128, 256, or 448 bits
  18. A hashed message authentication code (HMAC) works by:
    1. Adding a non-secret key value to the input function along with the source message.
    2. Adding a secret key value to the output function along with the source message.
    3. Adding a secret key value to the input function along with the source message.
    4. Adding a non-secret key value to the output function along with the source message.
  19. The main types of implementation attacks include: (Choose all that apply.)
    1. Linear
    2. Side-channel analysis
    3. Fault analysis
    4. Probing
  20. What is the process of using a key encrypting key (KEK) to protect session keys called?
    1. Key distribution
    2. Key escrow
    3. Key generation
    4. Key wrapping

End Notes

    …the XKMS service can resolve a <ds:RetrievalMethod> element (Figure 5-16) or act as a gateway to an underlying PKI based on a non-XML syntax (e.g., X.509v3).

    c05f015.tif

    Figure 5-15: The XKMS service shields the client application from the complexities of the underlying PKI.

    c05f016.tif

    Figure 5-16: The XKMS service might resolve a <ds:RetrievalMethod> element.

    Standards for Financial Institutions

    ANSI X9.17 was developed to address the need of financial institutions to transmit securities and funds securely using an electronic medium. Specifically, it describes the means to ensure the secrecy of keys. The ANSI X9.17 approach is based on a hierarchy of keys. At the bottom of the hierarchy are data keys (DKs). Data keys are used to encrypt and decrypt messages. They are given short lifespans, such as one message or one connection. At the top of the hierarchy are master key-encrypting keys (KKMs).

    KKMs, which must be distributed manually, are afforded longer lifespans than data keys. Using the two-tier model, the KKMs are used to encrypt the data keys. The data keys are then distributed electronically to encrypt and decrypt messages. The two-tier model may be enhanced by adding another layer to the hierarchy. In the three-tier model, the KKMs are not used to encrypt data keys directly but to encrypt other key-encrypting keys (KKs). The KKs, which are exchanged electronically, are used to encrypt the data keys.
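The two-tier hierarchy can be sketched in a few lines of Python. This is a toy illustration of the *structure* only: the wrap function below XORs the data key with an HMAC-SHA256-derived keystream, which is not the X9.17 mechanism or a standard key-wrap algorithm (real systems use approved constructions such as AES Key Wrap); the function names are this sketch's own.

```python
import hashlib
import hmac
import secrets

def toy_wrap(kek: bytes, data_key: bytes) -> bytes:
    """Illustrative key wrap: XOR a 32-byte data key with an
    HMAC-SHA256-derived keystream. For demonstrating the key
    hierarchy only -- NOT a standardized key-wrap algorithm."""
    stream = hmac.new(kek, b"wrap", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(data_key, stream))

# XORing with the same keystream reverses the operation.
toy_unwrap = toy_wrap

# Top of the hierarchy: a manually distributed master key-encrypting key (KKM).
kkm = secrets.token_bytes(32)
# Bottom of the hierarchy: a short-lived data key (DK), e.g. one per message.
dk = secrets.token_bytes(32)

wrapped = toy_wrap(kkm, dk)            # what travels electronically
assert toy_unwrap(kkm, wrapped) == dk  # the receiver recovers the DK
```

The three-tier model simply inserts another level: the KKM wraps a key-encrypting key (KK), and the KK wraps the data keys.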

    Segregation of Duties

    Another aspect of key management is maintaining control over sensitive cryptographic keys that enforce the need-to-know principle as part of a business process. For example, in many business environments, employees are required to maintain separation or segregation of duties. In other words, in such environments no one person is allowed to have full control over all phases of an entire transaction without some level of accountability enforcement. The more negotiable the asset under protection, the greater the need for proper segregation of duties. Especially in the area of cryptography, this is a business concern. Imagine the damage that could be done by a single dishonest person if allowed unchecked access to cryptographic keys that, for example, unlock high-risk, high-value, or high-liquidity information such as customer financial accounts.

    Segregation of duties is used as a cross-check to ensure that misuse and abuse of assets, due to innocent mistake or malicious intent, can be efficiently detected and prevented. This is an important confidentiality and integrity principle that is often misunderstood, judging by news reports of embezzlement schemes, primarily by employee insiders, that go undetected for long periods of time. Segregation of duties is primarily a business policy and access control issue. However, it may not be possible for smaller organizations, due to personnel constraints, to segregate all duties, so other compensating controls may have to be used to achieve the same control objective. Such compensating controls include monitoring of activities, audit trails, and management supervision. Two mechanisms necessary to implement high-integrity cryptographic operations environments where separation of duties is paramount are dual control and split knowledge.

    1. Dual Control: Dual control is implemented as a security procedure that requires two or more persons to come together and collude to complete a process. In a cryptographic system the two (or more) persons would each supply a unique key that, when taken together, performs a cryptographic process. Split knowledge is the complementary access control principle to dual control.
    2. Split Knowledge: Split knowledge is the unique “what each must bring” that is joined together when implementing dual control. To illustrate, a box containing petty cash is secured by one combination lock and one keyed lock. One employee is given the combination to the combination lock and another employee has possession of the correct key to the keyed lock. In order to get the cash out of the box, both employees must be present at the cash box at the same time. One cannot open the box without the other. This is the aspect of dual control.

    On the other hand, split knowledge is exemplified here by the different objects (the combination to the combination lock and the correct physical key), both of which are unique and necessary, that each brings to the meeting. Split knowledge focuses on the uniqueness of separate objects that must be joined together. Dual control has to do with forcing the collusion of at least two or more persons to combine their split knowledge to gain access to an asset. Split knowledge and dual control complement each other and are necessary functions that implement segregation of duties in high-integrity cryptographic environments (see Table 5-4).

    In cryptographic terms, one could say dual control and split knowledge are properly implemented if no one person has access to or knowledge of the content of the complete cryptographic key being protected by the two processes. The sound implementation of dual control and split knowledge in a cryptographic environment necessarily means that the quickest way to break the key would be through the best attack known for the algorithm of that key. The principles of dual control and split knowledge primarily apply to access to plaintext keys. Access to cryptographic keys used for encrypting and decrypting data, or access to keys that are encrypted under a master key (which may or may not be maintained under dual control and split knowledge), does not require dual control and split knowledge.

    Table 5-4: Split knowledge and dual control complement each other and are necessary functions that implement segregation of duties in high-integrity cryptographic environments.

    Bad Example: Splitting a key “in half” to form two parts.
    Problem: Dual control but no split knowledge (assuming two people, each with a unique key half). One person could determine the key by brute forcing the other key half’s space.
    How to Make Dual Control/Split Knowledge Compliant: Each person maintains control of his or her half of the key. Protect each half with a unique PIN or passphrase.

    Bad Example: Storing key components on two cryptographic tokens with no further user authentication.
    Problem: No enforcement of split knowledge (i.e., no unique authentication method for individual accountability).
    How to Make Dual Control/Split Knowledge Compliant: Each person maintains control of his or her individual token/smartcard. Protect each smartcard with a unique PIN/passphrase.

    Bad Example: Storing a key on a single smartcard (or cryptographic token) that requires one or more passphrases to access.
    Problem: No dual control enforcement; a single card cannot be maintained by two or more persons.
    How to Make Dual Control/Split Knowledge Compliant: Distribute a cryptographic token to each person. Protect each token with a unique PIN/passphrase.

    Dual control and split knowledge can be summed up as follows: determining any part of a key being protected must require collusion between two or more persons, each supplying unique cryptographic material that must be joined together to access the protected key. Any feasible method of violating this axiom means that the principles of dual control and split knowledge are not being upheld.

    A number of applications implement aspects of dual control and split knowledge in a scalable manner. For example, a commercial PGP product based on the OpenPGP standard has features for splitting keys that are not part of the OpenPGP standard.25 These features use Blakley–Shamir secret sharing, an algorithm that allows the user to take a piece of data and break it into n shares, of which k are needed to retrieve the original data. Using a simple version of this approach, the user could break the data into three shares, two of which are needed to get the data back. In a more complex version, the user could require 3 of 6 or even 5 of 12 shares to retrieve the original data, with each key share protected by a unique passphrase known only to the key holder.

    Such a solution uses the basic form of secret sharing and shares the private key. This process permits a key pair to be controlled by a group of people, with some subgroup required to reconstitute and use the key. Other systems are based on key holders answering a series of questions in order to recover passwords needed to unlock a protected plaintext key.

    To recreate the key under protection, a user can create a set of questions whose answers contain information only the user would know. The key is split across those questions, with some set of them being required to synthesize the key. Not only does the user provide individualized security questions that are unique to each key holder, but the user also decides how many of the questions need to be answered correctly to retrieve the key under protection by having it reconstructed from the split parts.
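The k-of-n sharing described above can be sketched with Shamir's scheme: the secret becomes the constant term of a random polynomial of degree k-1, each share is a point on that polynomial, and any k points reconstruct the constant term by Lagrange interpolation. This is a minimal educational sketch over a small prime field, not the commercial product's implementation.

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime large enough for a toy secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares, any k of which reconstruct it:
    shares are points on a random degree k-1 polynomial whose
    constant term is the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]

    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

secret = 123456789
shares = make_shares(secret, k=2, n=3)    # the simple 2-of-3 split
assert reconstruct(shares[:2]) == secret  # any two shares suffice
assert reconstruct(shares[1:]) == secret
```

Fewer than k shares reveal nothing about the secret, which is what distinguishes this from naively cutting a key "in half" (the bad example in Table 5-4).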

    Management and Distribution of Keys

    The details of key creation using various algorithms were discussed earlier in this domain in the “Foundational Concepts” section. However, from a key management perspective there are a number of issues that pertain to scalability and cryptographic key integrity.

    Automated Key Generation

    Mechanisms used to automatically generate strong cryptographic keys can be used to deploy keys as part of key lifecycle management. Effective automated key generation systems are designed for user transparency as well as complete cryptographic key policy enforcement.

    Truly Random

    For a key to be truly effective, it must have an appropriately high work factor. That is to say, the amount of time and effort (work by an attacker) needed to break the key must be sufficient that it at least delays discovery for as long as the information being protected needs to be kept confidential. One factor that contributes to strong keys, which have a high work factor, is the level of randomness of the bits that make up the key.
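The exponential relationship between key length and work factor is easy to make concrete. The attacker speed below is an assumed figure chosen for illustration, not a claim from the text:

```python
# Work factor grows exponentially with key length: each added bit
# doubles the keyspace a brute-force attacker must search.
GUESSES_PER_SECOND = 10**12  # hypothetical attacker speed (assumption)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(bits: int) -> float:
    """Years needed to try every key of the given length at the assumed rate."""
    return 2**bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (40, 56, 128):
    print(f"{bits}-bit key: {years_to_exhaust(bits):.2e} years to exhaust")

# Each extra bit exactly doubles the work:
assert years_to_exhaust(57) == 2 * years_to_exhaust(56)
```

At this assumed rate a 56-bit (DES-sized) keyspace falls in under a day, while a 128-bit keyspace takes on the order of 10^19 years, comfortably exceeding any realistic confidentiality lifetime.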

    Random

    As discussed earlier, cryptographic keys are essentially strings of numbers. The numbers used in making up the key need to be unpredictable so that an attacker cannot easily guess the key and then expose the protected information. Thus, the randomness of the numbers that comprise a key plays an important role in the lifecycle of a cryptographic key. In the context of cryptography, randomness is the quality of lacking predictability. Randomness intrinsically generated by a computer system is also called pseudo randomness: the quality of an algorithm for generating a sequence of numbers that approximates the properties of random numbers. Computer circuits and software libraries are used to perform the actual generation of pseudo random key values, and computers and software libraries are well known as weak sources of randomness.

    Computers are inherently designed for predictability, not randomness; they are so thoroughly deterministic that they have a hard time generating high-quality randomness. Therefore, special purpose-built hardware and software called random number generators, or RNGs, are needed for cryptography applications. The U.S. federal government provides recommendations on deterministic random number generators through NIST.26 An international standard for random number generation suitable for cryptographic systems is sponsored by the International Organization for Standardization as ISO 18031.27 A rigorous statistical analysis of the output is often needed to have confidence in such RNG algorithms. A random number generator based solely on deterministic computation cannot be regarded as a true random number generator sufficiently unpredictable for cryptographic applications, since its output is inherently predictable.

    There are various methods for ensuring the appropriate level of randomness in pseudo random keys. The approach found in most business-level cryptographic products uses computational algorithms that produce long sequences of apparently random results, which are in fact completely determined by a shorter initial value, known as a seed or key. The use of initialization vectors and seed values that are concatenated onto computer-generated keys increases the strength of keys by adding additional uniqueness to random key material. The seed value or initialization vector is the number input as a starting point for an algorithm. The seed or IV can be created either manually or by an external source of randomness, such as radio frequency noise, randomly sampled values from a switched circuit, or other atomic and subatomic physical phenomena. To provide a degree of randomness intermediate between specialized hardware on the one hand and algorithmic generation on the other, some security-related computer software requires the user to input a lengthy string of mouse movements or keyboard input.
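The contrast between a deterministic, seed-driven PRNG and an operating-system CSPRNG can be demonstrated with Python's standard library, as a brief sketch:

```python
import random
import secrets

# The `random` module is a deterministic PRNG (Mersenne Twister). Seeded
# with the same value, it reproduces the same "random" sequence -- exactly
# the predictability that makes it unsuitable for key generation.
random.seed(42)
first = random.getrandbits(128)
random.seed(42)
assert random.getrandbits(128) == first  # fully reproducible

# The `secrets` module draws from the OS cryptographically strong RNG
# (e.g. /dev/urandom), which mixes in entropy from unpredictable system
# events, and is the appropriate source for key material.
key = secrets.token_bytes(32)  # 256 bits of key material
iv = secrets.token_bytes(16)   # a fresh initialization vector
print(key.hex(), iv.hex())
```

The first assertion is the whole point: a generator whose output can be replayed from a known seed offers an attacker a shortcut far cheaper than brute force.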

    Regarding manually created seed or initialization values, many may be familiar with this process if they have ever set up a wireless network with encryption using a WEP/WPA key. In most cases, when configuring wireless encryption on a wireless adapter or router, the user is asked to enter a password or variable-length “key” that is used by the wireless device to create cryptographic keys for encrypting data across the wireless network. This “key” is really a seed or initialization value that will be concatenated to the computer-generated key portion; together they comprise the keying material used to generate a key with an appropriate amount of pseudo randomness, making it hard for an attacker to guess and thus “break” the key.

    The important role randomness plays in key creation is illustrated by the following example. One method of generating a two-key encryption key set, comprising a private component and a public component, consists of the following steps:

    1. Generate a first pseudo random prime number.
    2. Generate a second pseudo random prime number.
    3. Produce a modulus by multiplying the first pseudo random prime number by the second pseudo random prime number.
    4. Generate a first exponent by solving a first modular arithmetic equation.
    5. Generate a second exponent that is a modular inverse to the first exponent by solving a second modular arithmetic equation, and securely store either the first exponent or the second exponent in at least one memory location.
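    The five steps above can be walked through with deliberately tiny, fixed primes. This is a toy sketch only; real key generation uses large, randomly generated primes and hardened big-number libraries:

```python
# Toy walk-through of the two-key generation steps, with tiny fixed "primes"
# standing in for the pseudo randomly generated ones.
p = 61               # step 1: first prime
q = 53               # step 2: second prime
n = p * q            # step 3: modulus (3233)
phi = (p - 1) * (q - 1)
e = 17               # step 4: first exponent, chosen coprime to phi
d = pow(e, -1, phi)  # step 5: second exponent, the modular inverse of e mod phi

# (e, n) is the public component; (d, n) is the private component.
message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message   # decryption recovers the plaintext
```

    The `pow(e, -1, phi)` form requires Python 3.8 or later; earlier versions would need an explicit extended-Euclid implementation.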

    Key Length

    Key length is another important aspect of key management to consider when generating cryptographic keys. Key length is the size of a key, usually measured in bits or bytes, which a cryptographic algorithm uses in ciphering or deciphering protected information. Keys are used to control how an algorithm operates so that only the correct key can decipher the information. The resistance to successful attack against the key and the algorithm, an aspect of their cryptographic security, is of concern when choosing key lengths. An algorithm’s key length is distinct from its cryptographic security. Cryptographic security is a logarithmic measure of the fastest known computational attack on the algorithm, also measured in bits.

    The security of an algorithm cannot exceed its key length. Therefore, it is possible to have a very long key that still provides low security. As an example, three-key (56 bits per key) Triple DES can have a key length of 168 bits but, due to the meet-in-the-middle attack, the effective security that it provides is at most 112 bits. However, most symmetric algorithms are designed to have security equal to their key length. A natural inclination is to use the longest key possible, which may make the key more difficult to break. However, the longer the key, the more computationally expensive the encrypting and decrypting process can be. The goal is to make breaking the key cost more (in terms of effort, time, and resources) than the worth of the information being protected and, if possible, not a penny more (to do more would not be economically sound).
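    The relationship between key bits and brute-force cost is exponential: each added bit doubles the search space. A minimal sketch (the function name is ours, for illustration):

```python
# Each additional key bit doubles the number of candidate keys an
# exhaustive (brute-force) search must consider.
def brute_force_keys(bits: int) -> int:
    """Size of the key space for a key of the given length in bits."""
    return 2 ** bits

assert brute_force_keys(57) == 2 * brute_force_keys(56)

# Triple DES from the text: 168 key bits, but meet-in-the-middle reduces the
# effective work to roughly a 112-bit search, a factor of 2**56 less.
assert brute_force_keys(168) // brute_force_keys(112) == 2 ** 56
```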

    The effectiveness of asymmetric cryptographic systems depends on the hard-to-solve nature of certain mathematical problems such as prime integer factorization. These problems are time consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric algorithm keys must be longer for equivalent resistance to attack than symmetric algorithm keys.

    RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, and 3072-bit RSA keys to 128-bit symmetric keys. RSA claims that 2048-bit keys are sufficient until 2030. An RSA key length of 3072 bits should be used if security is required beyond 2030.28 NIST key management guidelines further suggest that 15,360-bit RSA keys are equivalent in strength to 256-bit symmetric keys.29

    ECC can provide equivalent security with shorter keys than those needed by other asymmetric key algorithms. NIST guidelines state that elliptic curve keys should be twice the length of equivalent-strength symmetric keys. For example, a 224-bit elliptic curve key would have roughly the same strength as a 112-bit symmetric key. These estimates assume no major breakthroughs in solving the underlying mathematical problems that ECC is based on.

    Key Wrapping and Key Encrypting Keys

    One role of key management is to ensure that the key used in encrypting a message by a sender is the same key used to decrypt the message by the intended receiver. Thus, if Terry and Pat wish to exchange encrypted messages, each must be equipped to decrypt received messages and to encrypt sent messages. If they use a cipher, they will need appropriate keys. The problem is how to exchange whatever keys or other information are needed so that no one else can obtain a copy.

    One solution is to protect the session key with a special purpose, long-term use key called a key encrypting key (KEK). KEKs are used as part of key distribution or key exchange. The process of using a KEK to protect session keys is called key wrapping. Key wrapping uses symmetric ciphers to securely encrypt (thus encapsulating) a plaintext key along with any associated integrity information and data. One application for key wrapping is protecting session keys in untrusted storage or when sending them over an untrusted transport. Key wrapping or encapsulation using a KEK can be accomplished using either symmetric or asymmetric ciphers. If the cipher is a symmetric KEK, both the sender and the receiver will need a copy of the same key. If an asymmetric cipher with public/private key properties is used to encapsulate a session key, both the sender and the receiver will need the other’s public key.
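    The idea of wrapping a session key under a KEK can be sketched with Python’s standard library. This is a conceptual illustration only, not the standardized AES Key Wrap of RFC 3394: here a keystream derived from the KEK hides the session key, and an HMAC tag supplies the associated integrity information.

```python
import hmac
import hashlib
import secrets

def wrap(kek: bytes, session_key: bytes) -> bytes:
    """Encapsulate a session key under a KEK (illustrative sketch only)."""
    stream = hmac.new(kek, b"wrap-stream", hashlib.sha256).digest()
    ct = bytes(a ^ b for a, b in zip(session_key, stream))   # hide the key
    tag = hmac.new(kek, ct, hashlib.sha256).digest()          # integrity info
    return ct + tag

def unwrap(kek: bytes, blob: bytes) -> bytes:
    """Recover the session key, rejecting any tampered wrapped blob."""
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(kek, ct, hashlib.sha256).digest()):
        raise ValueError("wrapped key failed integrity check")
    stream = hmac.new(kek, b"wrap-stream", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(ct, stream))

kek = secrets.token_bytes(32)          # long-term key encrypting key
session_key = secrets.token_bytes(16)  # short-lived session key
assert unwrap(kek, wrap(kek, session_key)) == session_key
```

    A production system would use a vetted construction (AES Key Wrap or an AEAD cipher) rather than this hand-rolled sketch; the point is only the shape of the operation: encrypt the key, bind integrity data to it, and require the KEK to reverse both.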

    Protocols such as SSL, PGP, and S/MIME use the services of KEKs to provide session key confidentiality and integrity, and sometimes to authenticate the binding of the session key originator to the session key itself, ensuring the session key came from the real sender and not an attacker.

    Key Distribution

    Keys can be distributed in a number of ways. For example, two people who wish to perform key exchange can use a medium other than that through which secure messages will be sent; this is called out-of-band key exchange. If the two or more parties will send secure messages via email, they may choose to meet in person or send the key via courier. The concept of out-of-band key exchange is not very scalable beyond a few people. A more scalable method of exchanging keys is through the use of a PKI key server. A key server is a central repository of the public keys of members of a group of users interested in exchanging keys to facilitate electronic transactions. Public key encryption provides a means to allow members of a group to conduct secure transactions spontaneously. The receiver’s public key certificate, which contains the receiver’s public key, is retrieved by the sender from the key server and is used as part of a public key encryption scheme, such as S/MIME, PGP, or even SSL, to encrypt a message and send it. The digital certificate is the medium that contains the public key of each member of the group and makes the key portable, scalable, and easier to manage than an out-of-band method of key exchange.

    Key Distribution Centers

    Recall the formula used before to calculate the number of symmetric keys needed for n users: n(n − 1)/2. This quadratic growth necessitates the setup of directories, public key infrastructures, or key distribution centers.
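    The formula is easy to evaluate, and doing so shows why per-pair symmetric keys do not scale (the function name here is ours, for illustration):

```python
# Pairwise symmetric keys needed so every pair of n users shares a unique key.
def symmetric_keys_needed(n: int) -> int:
    return n * (n - 1) // 2

assert symmetric_keys_needed(2) == 1        # one pair, one key
assert symmetric_keys_needed(10) == 45      # manageable
assert symmetric_keys_needed(1000) == 499500  # why a KDC or PKI is needed
```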

    The use of a key distribution center (KDC) for key management requires the creation of two types of keys. The first are master keys, which are secret keys shared by each user and the KDC. Each user has their own master key, and it is used to encrypt the traffic between the user and the KDC. The second type of key is a session key, created when needed, used for the duration of the communications session, and then discarded once the session is complete. When a user wants to communicate with another user or an application, the KDC sets up the session key and distributes it to each user for use. An implementation of this solution is found in Kerberos. A large organization may even have several KDCs, and they can be arranged so that there are global KDCs that coordinate the traffic between the local KDCs.

    Because master keys are integral to the trust and security relationship between the users and hosts, such keys should never be used in compromised situations or where they may become exposed. For encrypting files or communications, separate non-master keys should be used. Ideally, a master key is never visible in the clear; it is buried within the equipment itself, and it is not accessible to the user.

    Key Storage and Destruction

    The proper storing and changing of cipher keys are important aspects of key management and are essential to the effective use of cryptography for security. Ultimately, the security of information protected by cryptography directly depends on the protection afforded by the keys. All keys need to be protected against modification, and secret and private keys need to be protected against unauthorized disclosure. Methods for protecting stored keying material include trusted, tamperproof hardware security modules, passphrase-protected smart cards, key wrapping the session keys using long-term storage KEKs, splitting cipher keys and storing them in physically separate storage locations, protecting keys using strong passwords/passphrases, key expiry, and the like.

    In order to guard against a long-term cryptanalytic attack, every key must have an expiration date after which it is no longer valid. The key length must be long enough to make the chances of cryptanalysis before key expiration extremely small. The validity period for a key pair may also depend on the circumstances in which the key is used. A signature verification program should check for expiration and should not accept a message signed with an expired key. The fact that computer hardware continues to improve makes it prudent to replace expired keys with newer, longer keys every few years. Key replacement enables one to take advantage of any hardware improvements to increase the security of the cryptosystem. According to NIST’s “Guideline for Implementing Cryptography in the Federal Government,” additional guidance for storage of cipher keys includes:30

    Among the factors affecting the risk of exposure are:31

    In general, short crypto periods enhance security. For example, some cryptographic algorithms might be less vulnerable to cryptanalysis if the adversary has only a limited amount of information encrypted under a single key. Caution should be used when deleting keys that are no longer needed. A simple deletion of the keying material might not completely obliterate the information. Truly erasing the information might require overwriting it multiple times with other non-related information, such as random bits, or all zero or one bits. Keys stored in memory for a long time can become “burned in.” This can be mitigated by splitting the key into components that are frequently updated, as shown in Figure 5-17.
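    The overwrite-before-discard idea can be sketched in Python by keeping key material in a mutable buffer. This is conceptual only: Python gives no guarantees about interpreter-made copies, and real products rely on hardware modules or OS-level zeroization.

```python
import secrets

# Keep key material in a mutable bytearray so it can be overwritten in place.
key = bytearray(secrets.token_bytes(32))

# ... key is used here ...

# Overwrite several times with unrelated random bits, as the text suggests,
# then finish with an all-zero pass.
for _ in range(3):
    for i in range(len(key)):
        key[i] = secrets.randbits(8)
key[:] = bytes(len(key))

assert all(b == 0 for b in key)   # buffer no longer holds the key
```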

    c05f017.tif

    Figure 5-17: Recommended crypto periods for key types

    On the other hand, where manual key distribution methods are subject to human error and frailty, more frequent key changes might actually increase the risk of exposure. In these cases, especially when very strong cryptography is employed, it may be more prudent to have fewer, well-controlled manual key distributions rather than more frequent, poorly controlled manual key distributions. Secure automated key distribution, where key generation and exchange are protected by appropriate authentication, access, and integrity controls, may be a compensating control in such environments.

    Users with different roles should have keys with lifetimes that take into account the different roles and responsibilities, the applications for which the keys are used, and the security services that are provided by the keys (user/data authentication, confidentiality, data integrity, etc.). Reissuing keys should not be done so often that it becomes excessively burdensome; however, it should be performed often enough to minimize the loss caused by a possible key compromise.

    Handle the deactivation/revocation of keys so that data signed prior to a compromise date (or date of loss) can be verified. When a signing key is designated as “lost” or “compromised,” signatures generated prior to the specified date may still need to be verified in the future. Therefore, a signature verification capability may need to be maintained for lost or compromised keys. Otherwise, all data previously signed with a lost or compromised key would have to be re-signed.

    Cost of Certificate Replacement/Revocation

    In some cases, the costs associated with changing digital certificates and cryptographic keys are painfully high. Examples include decryption and subsequent re-encryption of very large databases, decryption and re-encryption of distributed databases, and revocation and replacement of a very large number of keys, e.g., where there are very large numbers of geographically and organizationally distributed key holders. In such cases, the expense of the security measures necessary to support longer crypto periods may be justified, e.g., costly and inconvenient physical, procedural, and logical access security, and use of cryptography strong enough to support longer crypto periods even where this may result in significant additional processing overhead.

    In other cases, the crypto period may be shorter than would otherwise be necessary; for example, keys may be changed frequently in order to limit the period of time the key management system maintains status information. On the other hand, a user losing their private key would require that the lost key be revoked so that an unauthorized user cannot use it. It would be a good practice to use a master decryption key (an additional decryption key in PGP) or another key recovery mechanism to guard against losing access to the data encrypted under the lost key. Another reason to revoke a certificate is when an employee leaves the company or, in some cases, when changing job roles, as in the case of someone moving to a more trusted job role, which may require a different level of accountability, access to higher-risk data, and so on.

    Key Recovery

    A lost key may mean a crisis to an organization. The loss of critical data or backups may cause widespread damage to operations and even financial ruin or penalties. There are several methods of key recovery, such as common trusted directories or a policy that requires all cryptographic keys to be registered with the security department. Some people have even been using steganography to bury their passwords in pictures or other locations on their machines to prevent someone from finding their password file. Others use password wallets or other tools to hold all of their passwords.

    One method is multiparty key recovery. A user would write her private key on a piece of paper and then divide the key into two or more parts. Each part would be sealed in an envelope. The user would give one envelope each to trusted people with instructions that the envelope was only to be opened in an emergency where the organization needed access to the user’s system or files (disability or death of the user). In case of an emergency, the holders of the envelopes would report to human resources, where the envelopes could be opened and the key reconstructed. The user would usually give the envelopes to trusted people at different management levels and different parts of the company to reduce the risk of collusion.
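    The envelope scheme above can be sketched digitally as an n-of-n XOR split: every share is needed to rebuild the key, and any single share reveals nothing about it. (This is an illustrative sketch; production systems that need a k-of-n threshold would use a scheme such as Shamir secret sharing.)

```python
import secrets

def split_key(key: bytes, holders: int) -> list[bytes]:
    """Split a key into 'holders' shares; all shares are needed to recover it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(holders - 1)]
    last = key
    for s in shares:                      # last share = key XOR all random shares
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all shares back together to reconstruct the key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

private_key = secrets.token_bytes(16)
envelopes = split_key(private_key, 3)     # three trusted holders
assert recover_key(envelopes) == private_key
```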

    Key recovery should also be conducted with the privacy of the individual in mind. If a private individual used encryption to protect the confidentiality of some information, it may be legally protected according to local laws. In some situations, a legal order may be required to retrieve the key and decrypt the information.

    Key Escrow

    Key escrow is the process of ensuring that a third party maintains a copy of a private key or a key needed to decrypt information. Key escrow should also be considered mandatory for most organizations’ use of cryptography, because encrypted information belongs to the organization and not the individual, even though an individual’s key is often used to encrypt the information. There must be explicit trust between the key escrow provider and the parties involved, as the escrow provider now holds a copy of the private key and could use it to reveal information. Conditions of key release must be explicitly defined and agreed upon by all parties.

    Web of Trust

    In cryptography, a web of trust is a concept used in PGP, GnuPG, and other OpenPGP-compatible systems to establish the authenticity of the binding between a public key and its owner. Its decentralized trust model is an alternative to the centralized trust model of a public key infrastructure (PKI), which relies exclusively on a certificate authority (or a hierarchy of such). As with computer networks, there are many independent webs of trust, and any user (through their identity certificate) can be a part of, and a link between, multiple webs. The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0:

    As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.

    Secure Protocols

    IP Security (IPsec) is a suite of protocols for communicating securely with IP by providing mechanisms for authentication and encryption. Implementation of IPsec is mandatory in IPv6, and many organizations are also using it over IPv4. Further, IPsec can be implemented in two modes, one that is appropriate for end-to-end protection and one that safeguards traffic between networks. Standard IPsec only authenticates hosts with each other. If an organization requires users to authenticate, it must employ a nonstandard proprietary IPsec implementation or use IPsec over L2TP (Layer 2 Tunneling Protocol). The latter approach uses L2TP to authenticate the users and encapsulates IPsec packets within an L2TP tunnel. Because IPsec interprets the change of IP address within packet headers as an attack, NAT does not work well with IPsec. To resolve the incompatibility of the two protocols, NAT traversal (aka NAT-T) encapsulates IPsec within UDP port 4500 (see RFC 3948 for details).32

    Authentication Header (AH)

    The Authentication Header (AH) is used to prove the identity of the sender and ensure that the transmitted data has not been tampered with. Before each packet (headers + data) is transmitted, a hash value of the packet’s contents (except for the fields that are expected to change when the packet is routed), based on a shared secret, is inserted in the last field of the AH. The endpoints negotiate which hashing algorithm to use and the shared secret when they establish their security association. To help thwart replay attacks (when a legitimate session is retransmitted to gain unauthorized access), each packet that is transmitted during a security association has a sequence number, which is stored in the AH. In transport mode, the AH is shimmed between the packet’s IP and TCP headers. The AH helps ensure integrity, not confidentiality. Encryption is implemented through the use of the Encapsulating Security Payload (ESP).
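    The two AH ideas described above, a keyed hash over the stable packet contents and a monotonically increasing sequence number, can be sketched in Python. This is a conceptual model, not the actual AH wire format; the shared secret and function names are ours.

```python
import hmac
import hashlib

shared_secret = b"negotiated-during-sa-setup"   # hypothetical SA shared secret

def auth_tag(payload: bytes, seq: int) -> bytes:
    """Keyed hash over the sequence number and the stable packet contents."""
    return hmac.new(shared_secret, seq.to_bytes(4, "big") + payload,
                    hashlib.sha256).digest()

def verify(payload: bytes, seq: int, tag: bytes, highest_seen: int) -> bool:
    """Accept only fresh packets with a valid tag: integrity + anti-replay."""
    if seq <= highest_seen:                     # replayed or stale sequence number
        return False
    return hmac.compare_digest(tag, auth_tag(payload, seq))

pkt = b"ip-header-and-data"
tag = auth_tag(pkt, seq=1)
assert verify(pkt, 1, tag, highest_seen=0)      # fresh packet accepted
assert not verify(pkt, 1, tag, highest_seen=1)  # replayed packet rejected
```

    Note what the sketch does not do: the payload travels in the clear, mirroring the point that AH provides integrity and authenticity but no confidentiality.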

    Encapsulating Security Payload (ESP)

    The Encapsulating Security Payload (ESP) encrypts IP packets and ensures their integrity. ESP contains four sections:

    Security Associations

    A security association (SA) defines the mechanisms that an endpoint will use to communicate with its partner. All SAs cover transmissions in one direction only. A second SA must be defined for two-way communication. Mechanisms that are defined in the SA include the encryption and authentication algorithms and whether to use the AH or ESP protocol. Deferring the mechanisms to the SA, as opposed to specifying them in the protocol, allows the communicating partners to use the appropriate mechanisms based on situational risk.

    Transport Mode and Tunnel Mode

    Endpoints communicate with IPsec using either transport or tunnel mode. In transport mode, the IP payload is protected. This mode is mostly used for end-to-end protection, for example, between client and server. In tunnel mode, the IP payload and its IP header are protected. The entire protected IP packet becomes the payload of a new IP packet and header. Tunnel mode is often used between networks, such as with firewall-to-firewall VPNs.

    Internet Key Exchange (IKE)

    Internet Key Exchange allows communicating partners to prove their identity to each other and establish a secure communication channel, and it is applied as an authentication component of IPsec. IKE uses two phases:

    1. Phase 1: In this phase, the partners authenticate with each other, using one of the following:
      Next, IKE establishes a temporary security association and secure tunnel to protect the rest of the key exchange.
    2. Phase 2: The peers’ security associations are established, using the secure tunnel and temporary SA created at the end of Phase 1.

    High Assurance Internet Protocol Encryptor (HAIPE)

    Based on IPsec, the High Assurance Internet Protocol Encryptor (HAIPE) adds further restrictions and enhancements, for instance, the ability to encrypt multicast data using high-assurance hardware encryption, which requires that the same key be manually loaded on all communicating devices. HAIPE is an extension of IPsec used for highly secure communications such as those employed by military applications.

    Secure Sockets Layer/Transport Layer Security (SSL/TLS)

    Secure Sockets Layer/Transport Layer Security (SSL/TLS) is primarily used to encrypt confidential data sent over an insecure network such as the Internet. In the HTTPS protocol, the types of data encrypted include the URL, the HTTP header, cookies, and data submitted through forms. A web page secured with SSL/TLS has a URL that begins with https://.

    According to the Microsoft TechNet site, “The SSL/TLS security protocol is layered between the application protocol layer and TCP/IP layer, where it can secure and then send application data to the transport layer. Because it works between the application layer and the TCP/IP layer, SSL/TLS can support multiple application layer protocols.

    The SSL/TLS protocol can be divided into two layers. The first layer is the handshake protocol layer, which consists of three sub-protocols: the handshake protocol, the change cipher spec protocol, and the alert protocol. The second layer is the record protocol layer. Figure 5-18 illustrates the various layers and their components.

    c05f018.tif

    Figure 5-18: SSL/TLS protocol layers

    “SSL/TLS uses both symmetric key and asymmetric key encryption. SSL/TLS uses public key encryption to authenticate the server to the client, and optionally the client to the server. Public key cryptography is also used to establish a session key. The session key is used in symmetric algorithms to encrypt the bulk of the data. This combines the benefit of asymmetric encryption for authentication with the faster, less processor-intensive symmetric key encryption for the bulk data.”33

    Secure/Multipurpose Internet Mail Extensions (S/MIME)34

    According to the Microsoft TechNet site, “Secure/Multipurpose Internet Mail Extensions (S/MIME) is a widely accepted method, or more precisely a protocol, for sending digitally signed and encrypted messages. S/MIME allows you to encrypt emails and digitally sign them. When you use S/MIME with an email message, it helps the people who receive that message to be certain that what they see in their inbox is the exact message that started with the sender. It will also help people who receive messages to be certain that the message came from the specific sender and not from someone pretending to be the sender. To do this, S/MIME provides for cryptographic security services such as authentication, message integrity, and non-repudiation of origin (using digital signatures). It also helps enhance privacy and data security (using encryption) for electronic messaging.”35

    Going Hands-On with Cryptography—Cryptography Exercise

    Cryptography is a modern cornerstone of secure information storage and transmission. This exercise is designed to provide the security practitioner with an overview of public key infrastructure using GNU Privacy Guard (GPG).

    Requirements

    The following items are necessary to perform this exercise:

    Be sure to use different random account names than those listed in the exercise.

    Setup

    For this exercise two simulated users will be present. Andy will be the first user and Rae will be the second. They both have a need to communicate securely through email with each other. They have never met in person and therefore have never had a chance to exchange a symmetric encryption key with each other. They suspect PKI might be able to help them.

    Set up the first user account by setting up a free email account. (Andy will use Gmail [] and Rae will use Hotmail [] in this example.) You’ll notice they used addresses that do not directly relate to their real names. They have a need for discretion and confidentiality in their work!

    Once that is completed, download each software package (Claws and Thunderbird) from the download sites.

    First User Email Setup

    For this example the fictitious Andy account () is going to use the Claws email client. Since Andy is using Claws, he will need to ensure he has installed the GnuPG core package first. (See Figure 5-19.) Download the system-appropriate GnuPG package from https://www.gnupg.org/download/index.html.

    c05f019.tif

    Figure 5-19: Choosing components during GnuPG setup

    1. During the first launch of Claws, it will ask for account information. Choose IMAP and auto-configure. See Figure 5-20.
    2. Next, configure the SMTP settings. See Figure 5-21.
      c05f020.tif

      Figure 5-20: Setting up Claws account information

      c05f021.tif

      Figure 5-21: SMTP server setup in Claws

    3. Click Forward and then Save. Depending on the email provider, an SSL certificate error may be present. Since this is simply an exercise, click Accept and Save to continue to the main mailbox.
    4. Once opened, a message warning about keys should pop up, as shown in Figure 5-22.
    c05f022.tif

    Figure 5-22: No PGP key warning

    5. Click Yes to create a new key pair. On the screen that follows, create a new passphrase for the encryption keys. See Figure 5-23. This should be different from the passphrase created for the email account.
    6. Once the passphrase is confirmed, a key creation dialog box will prompt the user to randomly move the mouse to help create entropy for the encryption key pair. Once complete, the user is given the option to publish the public key to a keyserver. Click Yes and upload Andy’s public key to a keyserver.

      In Windows you may get an error stating the key cannot be exported. If this error comes up, perform steps 7 through 9 to publish the public key. Otherwise skip to step 10.

      c05f023.tif

      Figure 5-23: Entering a passphrase for new key creation

    7. Click on the Start menu and click GPA. See Figure 5-24.
    c05f024.tif

    Figure 5-24: Accessing GPA from the Windows Start menu

    8. Andy’s key should be listed in the Key Manager. Right-click on the entry for Andy and choose Send Keys. See Figure 5-25.
    9. A warning will be provided regarding sending the key to a keyserver. See Figure 5-26. Choose Yes and a confirmation should appear. This concludes the additional steps necessary should the automatic key upload for Windows fail.
      c05f025.tif

      Figure 5-25: Finding a key in the Key Manager

      c05f026.tif

      Figure 5-26: Exporting a key to the keyserver

    10. Next, the proper plugins should be installed for Claws. Click Configuration and then Plugins. See Figure 5-27.
    11. Ensure the three plugins shown in Figure 5-28 are selected; if not, browse to them through the Load button and select them.
      c05f027.tif

      Figure 5-27: Accessing the plug-ins menu

      c05f028.tif

      Figure 5-28: Selecting and loading plug-ins

    12. Once loaded, click Close.

    The first email user is now configured with a public and private key, along with an email account and an email client.

    Second User Email Setup

    The second user, Rae (), uses Hotmail and Mozilla Thunderbird for her email. To set up her account, install Thunderbird from the download file. When Thunderbird first launches, it will prompt for account creation.

    1. Thunderbird may prompt for the creation of an email address. Choose Skip this and use my existing email. See Figure 5-29.
      c05f029.tif

      Figure 5-29: Setting up a user in Thunderbird

    2. Next, enter the email information for the second user. See Figure 5-30.
    3. Click Continue. Thunderbird automatically detects the majority of major email providers. If it detects the settings for the selected account, simply click Done. If it doesn’t detect the settings, review the email service provider’s self-help information regarding server settings and manually configure the client. See Figure 5-31.
      c05f030.tif

      Figure 5-30: Entering account information

      c05f031.tif

      Figure 5-31: Mail account setup

    4. Once completed, the next step is to set up the public and private key for the second user. Doing this in Thunderbird requires an additional plugin called Enigmail. To enable Enigmail, click the Thunderbird menu and choose Add-ons. See Figure 5-32.
    5. In the Add-ons Manager, type Enigmail in the search box and hit the search button. See Figure 5-33.
      c05f032.tif

      Figure 5-32: Accessing the Add-ons menu

      c05f033.tif

      Figure 5-33: Searching for a specific add-on

    6. When the search results return, click Install for Enigmail. See Figure 5-34.
      c05f034.tif

      Figure 5-34: Installing an add-on

    7. Once the install is complete, choose Restart Now. See Figure 5-35.
      c05f035.tif

      Figure 5-35: Restarting Thunderbird after add-on installation is complete

    8. When Thunderbird restarts, the Enigmail wizard will start as well. See Figure 5-36.
      c05f036.tif

      Figure 5-36: Enigmail setup wizard

    9. Choose Yes, I would like the wizard to get me started, and click Next.
    10. On the next screen choose Convenient auto encryption and click Next. See Figure 5-37.
      c05f037.tif

      Figure 5-37: Configuring encryption settings

    11. on the next screen choose to sign by default and click next. see figure 5-38.
    c05f038.tif

    figure 5-38: configuring your digital signature

    1. on the next screen choose yes and choose next. see figure 5-39.
    c05f039.tif

    figure 5-39: preference settings

    1. on the next screen there may be a prompt to use the first user’s certificates. for this exercise do not choose the first user (as they are going to use the claws client). instead choose to create a new key pair and click next. see figure 5-40.
    Figure 5-40: Creating a key to sign and encrypt email

    14. Next, create a passphrase for the second user's (Rae's) key pair. See Figure 5-41.
      Figure 5-41: Configuring a passphrase to be used to protect the private key

    15. Click Next two more times, and the key creation process will start for the second user (Rae). See Figure 5-42.
      Figure 5-42: Key generation

    16. When key creation is completed, an option to generate a revocation certificate is offered. Generate the certificate and store it in a known location. See Figure 5-43.
      Figure 5-43: Revocation certificate creation

    17. Next, there will be a prompt to enter the key pair password for the second user (Rae). Enter the password created when generating the keys. See Figure 5-44.
      Figure 5-44: Passphrase entry to unlock the secret key

    18. A notice will appear regarding the use of the revocation certificate. Click OK and then Finish. If the Add-ons Manager is still open, close it and return to the main mail window.

    Both users (Andy and Rae) now have secure email clients and key pairs for PKI.

    Key Exchange and Sending Secure Email

    This section explains how Andy and Rae exchange keys and how they send each other email securely. If the email clients and GPA are still open, close them before starting this section.

    Since Andy and Rae have never met and do not trust many people, they would not want to use a "shared secret" such as a single password that both of them could use to encrypt emails to each other. With PKI they do not need a shared secret key; all each of them needs is the other's public key!
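    Why no shared secret is needed can be sketched with textbook RSA. The following is a toy illustration only (tiny primes, no padding, illustrative variable names), not what the email plugins in this exercise actually run:

```python
# Toy illustration of why Andy and Rae need no shared secret: textbook RSA
# with tiny primes. Real keys are 2048+ bits and use padding; everything
# below is for intuition only and is NOT secure.

p, q = 61, 53                # two small primes (kept secret)
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

rae_public_key = (e, n)      # Rae publishes this to the key server
rae_private_key = (d, n)     # Rae keeps this secret

def apply_key(message_int, key):
    """RSA is just modular exponentiation with whichever key you hold."""
    exponent, modulus = key
    return pow(message_int, exponent, modulus)

m = 42                                              # a "message" smaller than n
ciphertext = apply_key(m, rae_public_key)           # Andy encrypts with the public key
recovered = apply_key(ciphertext, rae_private_key)  # only the private key undoes it
assert recovered == m and ciphertext != m
```

    Because only d reverses exponentiation with e, Andy can encrypt using a key that is completely public, and only Rae can read the result; that is the property the rest of this exercise relies on.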

    First User (Andy) Gets the Public Key of the Second User (Rae)

    Now that the PKI is set up, Andy and Rae are excited to start sharing secure emails. To look up Rae's public key, Andy needs to pull it off the public key server. To do that, he will need Rae's email address, so he asks Rae which address she is using and she replies with it.

    1. Now that Andy knows Rae's email address, he can look up her public key automatically using his email client (Claws). He opens the client and clicks the Compose button. See Figure 5-45.
      Figure 5-45: Composing a message

    2. In the blank message window, he clicks Privacy System and switches it to PGP Inline. He then checks the encryption option on the menu for the message, which means the body of the message will be encrypted. See Figure 5-46.
    3. In the message he enters Rae's address and a test message, and when he is done he clicks Send. See Figure 5-47.
    4. A warning message will come up and explain that only the body of the message is encrypted. Click Continue.
    Figure 5-46: Accessing the Privacy System and encryption options

    Figure 5-47: Sending a message

    The message will be sent to Rae, and it will be encrypted. The plugin automatically searches the key server for the email address provided and uses the appropriate public key if one is available.

    Rae Receives the Message and Responds

    Once the message is sent, Rae can check her email and see whether Andy's message is indeed encrypted.

    1. Close Claws and open Thunderbird.
    2. When Thunderbird opens, check the inbox. There should be a message there from Andy. Clicking the message shows a preview below; notice that the preview is ciphertext! It looks like Andy successfully encrypted the message. Clicking the message will prompt for Rae's password, which she must enter to unlock her private key (the only key that can decrypt the message).
    3. Enter Rae's private key password. (It is the same password created when the key pair was generated.) See Figure 5-48.
      Figure 5-48: Entering a private key to decrypt a message

      Once the password is entered, the message decrypts into its original form. See Figure 5-49.

      Rae is worried that the message might also be sitting unencrypted on the server, so she logs into the webmail site to see what the message looks like there. To her relief, it is encrypted on the server, because the encryption and decryption take place locally! See Figure 5-50.

      Figure 5-49: Decrypted message

      Figure 5-50: Checking on the encryption of the mail message on the email server

      Rae decides to send Andy an encrypted response.

    4. First, she needs to look up his public key. In Thunderbird she clicks the Thunderbird menu, then Enigmail, then Key Management. See Figure 5-51.
    Figure 5-51: Accessing the Thunderbird Enigmail Key Management menu

    5. In the Key Management menu, she picks Keyserver and then Search. She puts Andy's email address into the search box and clicks the search button. See Figure 5-52.
      Figure 5-52: Searching for a public key

    6. Andy's public key is found through the search. Rae clicks OK to import the public key, and OK once more on the confirmation screen. Now that Rae has Andy's public key, she can send him an encrypted response.
    7. She clicks Reply on the message in Thunderbird and creates a quick response. She then clicks the Enigmail button and ensures that Inline PGP is used to encrypt the message back to Andy, while digital signatures are turned off. See Figure 5-53.
      Figure 5-53: Enigmail encryption and signing settings

    8. Rae clicks OK on the Enigmail options, composes a short reply, and clicks Send. She then closes Thunderbird.
    9. Andy opens Claws and sees he has a new message from Rae. When he clicks the message, he is prompted for his private key passphrase to complete the decryption. See Figure 5-54.

      The message decrypts, and Andy is able to read Rae's response. See Figure 5-55.

      Figure 5-54: Entering the passphrase to unlock the secret key for the OpenPGP certificate

      Figure 5-55: Reading the decrypted message

      Andy is impressed but wants to ensure the message is encrypted on the server. He logs into the webmail interface of his email provider and views the message there. See Figure 5-56.

      Figure 5-56: Viewing the encrypted message on the email server

    Andy is pleased to see that the messages are indeed encrypted on the server. Andy and Rae continue to send encrypted messages back and forth; each has fewer worries about an email server breach, since each controls their own encryption.
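    Under the hood, PGP-style tools like the ones Andy and Rae used are hybrid systems: a fresh random session key encrypts the message body symmetrically, and only that small session key is encrypted with the recipient's public key. A rough sketch of that flow, using toy stand-ins (textbook RSA and a SHA-256 keystream) rather than the real OpenPGP algorithms:

```python
import hashlib
import os

# Rae's toy RSA key pair (tiny primes; real OpenPGP keys are far larger).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)

def xor_stream(key, data):
    """Toy symmetric cipher: XOR the data with a SHA-256-derived keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Andy's side: a random session key encrypts the body (fast, symmetric),
# and the session key itself is wrapped with Rae's PUBLIC key.
session_key = os.urandom(1)          # one byte only, so it fits under n = 3233
body = xor_stream(session_key, b"meet at noon")
wrapped_key = pow(int.from_bytes(session_key, "big"), e, n)

# Rae's side: her PRIVATE key unwraps the session key, which decrypts the body.
unwrapped = pow(wrapped_key, d, n).to_bytes(1, "big")
plaintext = xor_stream(unwrapped, body)
assert plaintext == b"meet at noon"
```

    The server only ever stores the ciphertext body and the wrapped key, which is why Rae saw ciphertext in her webmail: decryption happens on her machine, where the private key lives.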

    Conclusion

    The email encryption tools illustrated in this exercise can be deployed easily by anyone. The exercise used two different email clients and an open source PGP encryption suite suitable for a variety of platforms and systems. While digital signatures were not explicitly covered in this exercise, they can also be performed using the tools demonstrated.

    Summary

    The areas touched on in the cryptography domain include the fundamental concepts of cryptography, such as hashing, salting, digital signatures, and symmetric/asymmetric cryptographic systems. In addition, the chapter discussed the need to understand and support secure protocols in order to operate and implement cryptographic systems. The SSCP needs to be aware of the importance of cryptography and of its impact on the ability to ensure both confidentiality and integrity within the organization.
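    Two of those fundamentals, hashing and salting, can be sketched with Python's standard library. This is an illustrative example only; the key-derivation function and iteration count below are assumptions, not values from the chapter:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 from the stdlib."""
    if salt is None:
        salt = os.urandom(16)            # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

salt1, digest1 = hash_password("correct horse")
salt2, digest2 = hash_password("correct horse")

# Same password, different salts -> different digests, which defeats
# precomputed (rainbow-table) attacks.
assert digest1 != digest2

# Verification reuses the stored salt and compares digests.
assert hash_password("correct horse", salt1)[1] == digest1
```

    Storing the salt alongside the digest is safe: the salt is not a secret, it merely forces an attacker to attack each hash individually.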

    Sample Questions

    1. Applied against a given block of data, a hash function creates:
      1. A chunk of the original block used to ensure its confidentiality
      2. A block of new data used to ensure the original block's confidentiality
      3. A chunk of the original block used to ensure its integrity
      4. A block of new data used to ensure the original block's integrity
    2. In symmetric key cryptography, each party should use:
      1. A publicly available key
      2. A previously exchanged secret key
      3. A randomly generated value unknown to everyone
      4. A secret key exchanged with the message
    3. Nonrepudiation of a message ensures that the message:
      1. Can be attributed to a particular author.
      2. Is always sent to the intended recipient.
      3. Can be attributed to a particular recipient.
      4. Is always received by the intended recipient.
    4. In Electronic Code Book (ECB) mode, data are encrypted using:
      1. A cipher based on the previous block of a message
      2. A user-generated variable-length cipher for every block of a message
      3. A different cipher for every block of a message
      4. The same cipher for every block of a message
    5. In Cipher Block Chaining (CBC) mode, the key is constructed by:
      1. Generating new key material completely at random
      2. Cycling through a list of user-defined choices
      3. Modifying the previous block of ciphertext
      4. Reusing the previous key in the chain of message blocks
    6. Stream ciphers are normally selected over block ciphers because of:
      1. The high degree of strength behind the encryption algorithms
      2. The high degree of speed behind the encryption algorithms
      3. Their ability to use large amounts of padding in encryption functions
      4. Their ability to encrypt large chunks of data at a time
    7. A key escrow service is intended to allow for the reliable:
      1. Recovery of inaccessible private keys
      2. Recovery of compromised public keys
      3. Transfer of inaccessible private keys between users
      4. Transfer of compromised public keys between users
    8. The correct choice for encrypting the entire original data packet in a tunneled mode for an IPsec solution is:
      1. Generic Routing Encapsulation (GRE)
      2. Authentication Header (AH)
      3. Encapsulating Security Payload (ESP)
      4. Point-to-Point Tunneling Protocol (PPTP)
    9. When implementing an MD5 solution, what randomizing cryptographic function should be used to help avoid collisions?
      1. Multistring concatenation
      2. Modular addition
      3. Message pad
      4. Salt
    10. Key clustering represents a significant failure of an algorithm because:
      1. A single key should not generate different ciphertext from the same plaintext, using the same cipher algorithm.
      2. Two different keys should not generate the same ciphertext from the same plaintext, using the same cipher algorithm.
      3. Two different keys should not generate different ciphertext from the same plaintext, using the same cipher algorithm.
      4. A single key should not generate the same ciphertext from the same plaintext, using the same cipher algorithm.
    11. Asymmetric key cryptography is used for the following:
      1. Encryption of data, nonrepudiation, access control
      2. Nonrepudiation, steganography, encryption of data
      3. Encryption of data, access control, steganography
    12. Which of the following algorithms supports asymmetric key cryptography?
      1. Diffie-Hellman
      2. Blowfish
      3. SHA-256
      4. Rijndael
    13. A certificate authority (CA) provides which benefit to a user?
      1. Protection of the public keys of all users
      2. History of symmetric keys
      3. Proof of nonrepudiation of origin
      4. Validation that a public key is associated with a particular user
    14. What is the output length of a RIPEMD-160 hash?
      1. 150 bits
      2. 128 bits
      3. 160 bits
      4. 104 bits
    15. ANSI X9.17 is concerned primarily with:
      1. Financial records and retention of encrypted data
      2. The lifespan of master key-encrypting keys (KKMs)
      3. Formalizing a key hierarchy
      4. Protection and secrecy of keys
    16. What is the input that controls the operation of the cryptographic algorithm?
      1. Decoder wheel
      2. Encoder
      3. Cryptovariable
      4. Cryptographic routine
    17. AES is a block cipher with variable key lengths of:
      1. 128, 192, or 256 bits
      2. 32, 128, or 448 bits
      3. 8, 64, or 128 bits
      4. 128, 256, or 448 bits
    18. A hashed message authentication code (HMAC) works by:
      1. Adding a non-secret key value to the input function along with the source message.
      2. Adding a secret key value to the output function along with the source message.
      3. Adding a secret key value to the input function along with the source message.
      4. Adding a non-secret key value to the output function along with the source message.
    19. The main types of implementation attacks include: (Choose all that apply.)
      1. Linear
      2. Side-channel analysis
      3. Fault analysis
      4. Probing
    20. What is the process of using a key-encrypting key (KEK) to protect session keys called?
      1. Key distribution
      2. Key escrow
      3. Key generation
      4. Key wrapping
