Chapter 15. Cryptographic Techniques


This chapter covers CAS-003 objective 4.4.

Cryptography is one of the most complicated domains of the security knowledge base. Cryptography is a crucial factor in protecting data at rest and in transit. It is a science that involves either hiding data or making data unreadable by transforming it. In addition, cryptography provides message author assurance, source authentication, and delivery proof.

Cryptography concerns confidentiality, integrity, and authentication but not availability. The CIA triad is a main security tenet that covers confidentiality, integrity, and availability, so cryptography covers two of the main tenets of the CIA triad. It helps prevent or detect the fraudulent insertion, deletion, and modification of data. Cryptography also provides non-repudiation by providing proof of origin. All these concepts are discussed in more detail in this chapter.

Most organizations use multiple hardware devices to protect confidential data. These devices protect data by keeping external threats out of the network. If an attacker's methods succeed and the organization's first line of defense is penetrated, data encryption ensures that confidential or private data cannot be viewed.

The key benefits of encryption include:

  • Power: Encryption relies on global standards and scales to deployments of any size, helping an organization remain fully compliant with its security policies. Data encryption solutions are affordable and can provide military-grade security for any organization.

  • Transparency: Efficient encryption allows normal business flow while crucial data is secured in the background, and it does so without the user being aware of what is going on.

  • Flexibility: Encryption saves and protects any important data, whether it is stored on a computer, a removable drive, an email server, or a storage network. Moreover, it allows you to securely access your files from anyplace.

In this chapter, you will learn about cryptography techniques, concepts, and implementations that are used to secure data in the enterprise.


Different cryptographic techniques are employed based on the needs of the enterprise. Choosing the correct cryptographic technique involves examining the context of the data and determining which technique to use. When determining which technique to use, security professionals should consider the data type, data sensitivity, data value, and the threats to the data.

The techniques you need to understand include key stretching, hashing, digital signatures, message authentication, code signing, pseudo-random number generation, perfect forward secrecy, data-in-transit encryption, data-at-rest encryption, and data-in-memory/processing encryption.

Key Stretching

Key stretching, also referred to as key strengthening, is a cryptographic technique that involves making a weak key stronger by increasing the time it takes to test each possible key. In key stretching, the original key is fed into an algorithm to produce an enhanced key, which should be at least 128 bits for effectiveness.

If key stretching is used, an attacker would need to either try every possible combination of the enhanced key or try likely combinations of the initial key. Key stretching slows down the attacker because the attacker must compute the stretching function for every guess in the attack.

Systems that use key stretching include Pretty Good Privacy (PGP), GNU Privacy Guard (GPG), Wi-Fi Protected Access (WPA), and WPA2. Widely used password key stretching algorithms include Password-Based Key Derivation Function 2 (PBKDF2), bcrypt, and scrypt.
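As a minimal sketch of key stretching, Python's standard library exposes PBKDF2 directly through `hashlib.pbkdf2_hmac`; the password, salt, and iteration count below are illustrative values only, not recommendations:

```python
import hashlib

password = b"correct horse battery staple"  # example weak input key
salt = b"16-byte-salt-val"                  # in practice, use a random per-user salt
iterations = 100_000                        # higher counts slow down each guess

# Derive a 32-byte (256-bit) enhanced key from the weak password
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())

# An attacker must repeat all 100,000 iterations for every password guess,
# which is the slowdown that key stretching provides.
```

The same inputs always produce the same derived key, so the stretched key can be recomputed for verification without ever storing the password itself.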


Hashing

Hashing involves running data through a cryptographic function to produce a one-way message digest. The size of the message digest is determined by the algorithm used. The message digest represents the data but cannot be reversed in order to determine the original data. Because the message digest is unique, it can be used to check data integrity.

A one-way hash function reduces a message to a hash value. A comparison of the sender’s hash value to the receiver’s hash value determines message integrity. If both the sender and receiver used the same hash function but the resultant hash values are different, then the message has been altered in some way. Hash functions do not prevent data alteration but provide a means to determine whether data alteration has occurred.

Hash functions do have limitations. If an attacker intercepts a message that contains a hash value, the attacker can alter the original message to create a second invalid message with a new hash value. If the attacker then sends the second invalid message to the intended recipient, the intended recipient will have no way of knowing that he received an incorrect message. When the receiver performs a hash value calculation, the invalid message will look valid because the invalid message was appended with the attacker’s new hash value, not the original message’s hash value. To prevent this from occurring, the sender should use a message authentication code (MAC).

Encrypting the hash function with a symmetric key algorithm generates a keyed MAC. The symmetric key does not encrypt the original message. It is used only to protect the hash value.


Symmetric and asymmetric algorithms are discussed in more detail later in this chapter.


Figure 15-1 illustrates the basic steps in a hash function.


Figure 15-1 Hash Function Process

Two major hash function vulnerabilities can occur: collisions and rainbow table attacks. A collision occurs when a hash function produces the same hash value for different messages. A rainbow table attack uses precomputed tables of hash values to reverse a hash by looking up the input that produces the matching value.

Because a message digest is determined by the original data, message digests can be used to compare different files to see if they are identical down to the bit level. If a computed message digest does not match the original message digest value, data integrity has been compromised.

Password hash values are often stored instead of actual passwords to ensure that the actual passwords are not compromised.

When choosing a hashing function, it is generally better to choose one that produces a larger hash value. For example, suppose that you have a document named contract.doc that you need to ensure is not modified in any way. To determine the hash value for the file using the MD5 hash function, you would enter the following command:

md5sum contract.doc

This command would return a hash value that you should record. Later, when users need access to the file, they should always issue the md5sum command listed to recalculate the hash value. If the value is the same as the originally recorded value, the file is unchanged. If it is different, then the file has been changed.
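The same integrity check can be scripted. This sketch computes a file's MD5 digest the way md5sum does, using Python's standard hashlib module (the filename is the example from the text):

```python
import hashlib

def file_md5(path: str) -> str:
    """Return the MD5 hex digest of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Record the digest once, then recompute it later; any change to the
# file produces a completely different digest.
# original = file_md5("contract.doc")
```

Reading in chunks keeps memory use constant even for very large files, which matters when hashing archives or disk images.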

The hash functions that you should be familiar with include MD2/MD4/MD5/MD6, SHA/SHA-2/SHA-3, HAVAL, RIPEMD-160, and Tiger.


MD2/MD4/MD5/MD6

The MD2 message digest algorithm produces a 128-bit hash value. It performs 18 rounds of computations. Although MD2 is still in use today, it is much slower than MD4, MD5, and MD6.

The MD4 algorithm also produces a 128-bit hash value. However, it performs only three rounds of computations. Although MD4 is faster than MD2, its use has significantly declined because attacks against it have been very successful.

Like the other MD algorithms, the MD5 algorithm produces a 128-bit hash value. It performs four rounds of computations. It was originally created because of the issues with MD4, and it is more complex than MD4. However, MD5 is not collision free. For this reason, it should not be used for SSL certificates or digital signatures. The U.S. government requires the use of SHA-2 instead of MD5. However, in commercial use, many software vendors publish the MD5 hash value when they release software patches so customers can verify the software’s integrity after download.

The MD6 algorithm produces a variable hash value, performing a variable number of computations. Although it was originally introduced as a candidate for SHA-3, it was withdrawn because of early issues the algorithm had with differential attacks. MD6 has since been rereleased with this issue fixed. However, that release was too late to be accepted as the National Institute of Standards and Technology (NIST) SHA-3 standard.


SHA

Secure Hash Algorithm (SHA) is a family of four algorithms published by the U.S. NIST. SHA-0, originally referred to as simply SHA because there were no other “family members,” produces a 160-bit hash value after performing 80 rounds of computations on 512-bit blocks. SHA-0 was never very popular because collisions were discovered.

Like SHA-0, SHA-1 produces a 160-bit hash value after performing 80 rounds of computations on 512-bit blocks. SHA-1 corrected the flaw in SHA-0 that made it susceptible to attacks.


SHA-2 is actually a family of hash functions, each of which provides different functional limits. The SHA-2 family is as follows:

  • SHA-224: Produces a 224-bit hash value after performing 64 rounds of computations on 512-bit blocks.

  • SHA-256: Produces a 256-bit hash value after performing 64 rounds of computations on 512-bit blocks.

  • SHA-384: Produces a 384-bit hash value after performing 80 rounds of computations on 1,024-bit blocks.

  • SHA-512: Produces a 512-bit hash value after performing 80 rounds of computations on 1,024-bit blocks.

  • SHA-512/224: Produces a 224-bit hash value after performing 80 rounds of computations on 1,024-bit blocks. The 512 designation here indicates the internal state size.

  • SHA-512/256: Produces a 256-bit hash value after performing 80 rounds of computations on 1,024-bit blocks. Once again, the 512 designation indicates the internal state size.
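The output sizes listed above can be confirmed with Python's hashlib, which implements the SHA-2 family:

```python
import hashlib

# digest_size is in bytes; multiply by 8 for the bit lengths listed above
for name in ("sha224", "sha256", "sha384", "sha512"):
    h = hashlib.new(name, b"example data")
    print(f"{name.upper()}: {h.digest_size * 8}-bit hash -> {h.hexdigest()[:16]}...")
```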

SHA-3, like SHA-2, is a family of hash functions. The draft standard was released in 2014, and it was formally adopted as FIPS 202 in August 2015. The hash value sizes range from 224 to 512 bits. SHA-3 is based on the Keccak function, which performs 24 rounds of computations.

Keep in mind that SHA-1 and SHA-2 are still widely used today. SHA-3 was not developed because of some security flaw with the two previous standards but was instead proposed as an alternative hash function to the others.

Often hashing algorithms are implemented with other cryptographic algorithms for increased security. But enterprise administrators should ensure that the algorithms that are implemented together can provide strong security with the best performance. For example, implementing 3DES with SHA would provide strong security but worse performance than implementing RC4 with MD5.

Let’s look at an example of using SHA for hashing. If an administrator attempts to install a package named 5.9.4-8-x86_64.rpm on a server, the administrator needs to ensure that the package has not been modified even if the package was downloaded from an official repository. On a Linux machine, the administrator should run sha1sum and verify the hash of the package before installing the package.


HAVAL

HAVAL is a one-way function that produces variable-length hash values, including 128 bits, 160 bits, 192 bits, 224 bits, and 256 bits, and uses 1,024-bit blocks. The number of rounds of computations can be three, four, or five. Collision issues have been discovered while producing a 128-bit hash value with three rounds of computations. All other variations do not have any discovered issues as of this printing.


RIPEMD-160

Although several variations of the RIPEMD hash function exist, security professionals need to be familiar only with RIPEMD-160. RIPEMD-160 produces a 160-bit hash value after performing 160 rounds of computations on 512-bit blocks.

Digital Signature

A digital signature is a hash value encrypted with the sender’s private key. A digital signature provides authentication, non-repudiation, and integrity. A blind signature is a form of digital signature where the contents of the message are masked before it is signed.


The process for creating a digital signature is as follows:

Step 1. The signer obtains a hash value for the data to be signed.

Step 2. The signer encrypts the hash value using her private key.

Step 3. The signer attaches the encrypted hash and a copy of her public key in a certificate to the data and sends the message to the receiver.


The process for verifying the digital signature is as follows:

Step 1. The receiver separates the data, encrypted hash, and certificate.

Step 2. The receiver obtains the hash value of the data.

Step 3. The receiver verifies that the public key is still valid by using the PKI.

Step 4. The receiver decrypts the encrypted hash value using the sender’s public key.

Step 5. The receiver compares the two hash values. If the values are the same, the message has not been changed.
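The sign-and-verify steps above can be sketched with textbook RSA on deliberately tiny numbers. This illustrates only the underlying math (encrypting a hash with the private key, decrypting it with the public key); real signatures require large keys and padding schemes:

```python
import hashlib

# Toy RSA key pair (illustrative values only): n = p*q, e public, d private
p, q = 61, 53
n = p * q                 # 3233
e = 17                    # public exponent
d = 2753                  # private exponent: (e * d) mod lcm(p-1, q-1) == 1

message = b"Pay Alice $100"

# Step 1: the signer hashes the data (reduced mod n for this toy example)
h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n

# Step 2: the signer "encrypts" the hash with the private key
signature = pow(h, d, n)

# Verification: decrypt the signature with the public key and compare hashes
recovered = pow(signature, e, n)
assert recovered == int.from_bytes(hashlib.sha256(message).digest(), "big") % n
print("signature verified")
```

If the message is altered in transit, the recomputed hash no longer matches the value recovered from the signature, and verification fails.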

Public key cryptography, which is discussed later in this chapter, is used to create digital signatures. Users register their public keys with a certification authority (CA), which distributes a certificate containing the user’s public key and the CA’s digital signature. The CA computes its digital signature over the combination of the user’s public key, the validity period, the certificate issuer, and the digital signature algorithm identifier.

The Digital Signature Standard (DSS) is a federal digital signature standard that governs the Digital Signature Algorithm (DSA). DSA generates a message digest of 160 bits. The U.S. federal government requires the use of DSA, RSA, or Elliptic Curve DSA (ECDSA) along with SHA for digital signatures.

DSA is slower than RSA and provides only digital signatures. RSA provides digital signatures, encryption, and secure symmetric key distribution.

When considering cryptography, keep the following facts in mind:

  • Encryption provides confidentiality.

  • Hashing provides integrity.

  • Digital signatures provide authentication, non-repudiation, and integrity.

Message Authentication

A message authentication code (MAC) plays a role similar to code signing in that it can provide message integrity and authenticity. You should be familiar with three types of MACs: HMAC, CBC-MAC, and CMAC.

A hash MAC (HMAC) is a keyed-hash MAC that involves a hash function with a symmetric key. HMAC provides data integrity and authentication. Any of the previously listed hash functions can be used with HMAC, with HMAC being prepended to the hash function name (for example, HMAC-SHA-1). The strength of HMAC depends on the strength of the hash function, including the hash value size and the key size. HMAC’s hash value output size is the same as that of the underlying hash function. HMAC can help reduce the collision rate of the hash function.
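Python's standard library implements HMAC as described. This sketch uses HMAC-SHA-256 with an illustrative shared key:

```python
import hmac
import hashlib

key = b"shared-secret-key"          # symmetric key known to both parties
message = b"Transfer complete"

# Sender computes the MAC and sends it along with the message
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC over the received message;
# compare_digest performs a constant-time comparison
valid = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
print(valid)  # True: message is intact and came from a key holder
```

An attacker who alters the message cannot produce a matching tag without the key, which is exactly the integrity-plus-authenticity property the text describes.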

Cipher block chaining MAC (CBC-MAC) is a block-cipher MAC that operates in CBC mode. CBC-MAC provides data integrity and authentication.

Cipher-based MAC (CMAC) operates in the same manner as CBC-MAC but with much better mathematical functions. CMAC addresses some security issues with CBC-MAC and is approved to work with AES and 3DES.

Code Signing

Code signing occurs when code creators digitally sign executables and scripts so that the user installing the code can be assured that it comes from the verified author. The code is signed using a cryptographic hash, which in turn ensures that the code has not been altered or corrupted. Java applets, ActiveX controls, and other active web and browser scripts often use code signing for security. In most cases, the signature is verified by a third party, such as VeriSign.

Pseudo-Random Number Generation

A pseudo-random number generator (PRNG) generates a sequence of numbers that approximates the properties of random numbers using an algorithm. In actuality, the sequence is not random because it is derived from a relatively small set of initial values.

Security professionals should be able to recognize issues that could be resolved using a PRNG. If an enterprise needs a system that produces a series of numbers with no discernible mathematical progression for a Java-based, customer-facing website, a pseudo-random number should be generated by Java at invocation.
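The distinction matters in code: Python's random module is a seeded PRNG (deterministic, suitable for simulations), while the secrets module draws from the operating system's cryptographic generator and should be used for security-sensitive values:

```python
import random
import secrets

# A PRNG: the same seed always reproduces the same "random" sequence
rng = random.Random(42)
a = [rng.randint(0, 99) for _ in range(5)]
rng2 = random.Random(42)
b = [rng2.randint(0, 99) for _ in range(5)]
print(a == b)  # True: the sequence is derived entirely from the seed

# For keys, tokens, and nonces, use a cryptographically strong source instead
token = secrets.token_hex(16)  # 128 bits of unpredictable output
print(token)
```

Because the PRNG output is derived entirely from its small initial state, an attacker who learns or guesses the seed can reproduce every "random" value, which is why PRNGs alone are unsuitable for key generation.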

Perfect Forward Secrecy

Perfect forward secrecy (PFS) ensures that a session key derived from a set of long-term keys cannot be compromised if one of the long-term keys is compromised in the future. The key must not be used to derive any additional keys. If the key is derived from some other keying material, then the keying material must not be used to derive any more keys. Compromise of a single key permits access only to data protected by that single key.

To work properly, PFS requires two conditions:

  • Keys are not reused.

  • New keys are not derived from previously used keys.

Understanding when to implement PFS is vital to any enterprise. If a security audit has uncovered that some encryption keys used to secure the financial transactions with an organization’s partners may be too weak, the security administrator should implement PFS on all VPN tunnels to ensure that financial transactions will not be compromised if a weak encryption key is found.

PFS is primarily used in VPNs but can also be used by web browsers, services, and applications.

Data-in-Transit Encryption

Transport encryption ensures that data is protected when it is transmitted over a network or the Internet. Transport encryption can protect against network sniffing attacks.

Security professionals should ensure that their data is protected in transit in addition to protecting data at rest. As an example, think of an enterprise that implements token and biometric authentication for all users, protected administrator accounts, transaction logging, full-disk encryption, server virtualization, port security, firewalls with ACLs, a NIPS, and secured access points. None of these solutions provides any protection for data in transport. Transport encryption would be necessary in this environment to protect data.

To provide this encryption, secure communication mechanisms should be used, including SSL/TLS, HTTP/HTTPS/SHTTP, SET, SSH, and IPsec.


SSL/TLS

Secure Sockets Layer (SSL) is a protocol that provides encryption, server and client authentication, and message integrity. It interfaces with the application and transport layers but does not really operate within these layers. SSL was developed by Netscape to transmit private documents over the Internet. SSL implements either 40-bit (SSL 2.0) or 128-bit (SSL 3.0) encryption, but the 40-bit version is susceptible to attacks because of its limited key size. SSL allows an application to have encrypted, authenticated communication across a network.

Transport Layer Security (TLS) 1.0 is based on SSL 3.0 but is more extensible. The main goal of TLS is privacy and data integrity between two communicating applications.

SSL and TLS are most commonly used when data needs to be encrypted while it is being transmitted (in transit) over a medium from one system to another.


HTTP/HTTPS/SHTTP

Hypertext Transfer Protocol (HTTP) is the protocol used on the Web to transmit website data between a web server and a web client. With each new address that is entered into the web browser, whether from initial user entry or by clicking a link on the page displayed, a new connection is established because HTTP is a stateless protocol.

HTTP Secure (HTTPS) is the implementation of HTTP running over the SSL/TLS protocol, which establishes a secure session using the server’s digital certificate. SSL/TLS keeps the session open using a secure channel. HTTPS website addresses always begin with the https:// designation.

Although it sounds very similar, Secure HTTP (SHTTP) protects HTTP communication in a different manner. SHTTP encrypts only a single communication message, not an entire session (or conversation). SHTTP is not as common as HTTPS.

SET and 3-D Secure

Secure Electronic Transaction (SET), proposed by Visa and MasterCard, was intended to secure credit card transaction information over the Internet. It was based on X.509 certificates and asymmetric keys. It used an electronic wallet on a user’s computer to send encrypted credit card information. But to be fully implemented, SET would have required the full cooperation of financial institutions, credit card users, wholesale and retail establishments, and payment gateways. It was never fully adopted.

Visa now promotes the 3-D Secure protocol instead of SET. 3-D Secure is an XML-based protocol designed to provide an additional security layer for online credit and debit card transactions. It is offered to customers under the name Verified by Visa. The implementation of 3-D Secure by MasterCard is called SecureCode.


IPsec

Internet Protocol Security (IPsec) is a suite of protocols that establishes a secure channel between two devices. IPsec is commonly implemented over VPNs. IPsec negotiates the algorithms to use, implements any cryptographic keys required, and can provide limited protection against traffic analysis.

IPsec includes Authentication Header (AH), Encapsulating Security Payload (ESP), and Security Associations (SAs). AH provides authentication and integrity, whereas ESP provides authentication, integrity, and encryption (confidentiality). An SA is a record of a device’s configuration that needs to participate in IPsec communication. A Security Parameter Index (SPI) is a type of table that tracks the different SAs used and ensures that a device uses the appropriate SA to communicate with another device. Each device has its own SPI.

IPsec runs in one of two modes: transport mode or tunnel mode. Transport mode protects only the message payload, whereas tunnel mode protects the payload, routing, and header information. Both of these modes can be used for gateway-to-gateway or host-to-gateway IPsec communication.

IPsec does not determine which hashing or encryption algorithm is used. Internet Key Exchange (IKE), which is a combination of OAKLEY and Internet Security Association and Key Management Protocol (ISAKMP), is the key exchange method that is most commonly used by IPsec. OAKLEY is a key establishment protocol based on Diffie-Hellman that was superseded by IKE. ISAKMP was established to set up and manage SAs. IKE with IPsec provides authentication and key exchange.

The authentication method used by IKE with IPsec includes preshared keys, certificates, and public key authentication. The most secure implementations of preshared keys require a PKI. But a PKI is not necessary if a preshared key is based on simple passwords.


Data-in-Memory/Processing

In-memory processing is an approach in which all data in a set is processed from memory rather than from the hard drive. It assumes that all the data will be available in memory rather than just the most recently used data, as is usually done using RAM or cache memory. This results in faster reporting and decision making in business.

Securing data in memory requires encrypting the data in RAM. Windows offers the Data Protection API (DPAPI), which lets you encrypt data using the user’s login credentials. A key question is where to store the encryption key, because it is typically a bad idea to store it in the same location as the data.

Intel’s Software Guard Extensions (SGX), shipping with Skylake and newer CPUs, allows you to load a program into your processor, verify that its state is correct (remotely), and protect its execution. The CPU automatically encrypts everything leaving the processor (that is, everything that is offloaded to RAM) and thereby ensures security.

Data-at-Rest Encryption

Data at rest refers to data that is stored physically in any digital form that is not active. This data can be stored in databases, data warehouses, files, archives, tapes, offsite backups, mobile devices, or any other storage medium. Data at rest is most often protected using data encryption algorithms.

Algorithms that are used in computer systems implement complex mathematical formulas when converting plaintext to ciphertext. The two main components of any encryption system are the key and the algorithm. In some encryption systems, the two communicating parties use the same key. In other encryption systems, the two communicating parties use different keys in the process, but the keys are related. The encryption systems you need to understand include symmetric algorithms, asymmetric algorithms, and hybrid ciphers.

Symmetric Algorithms

Symmetric algorithms use a private, or secret, key that must remain secret between the two parties. Each party pair requires a separate private key. Therefore, a single user would need a unique secret key for every user with whom she communicates.

Consider an example in which there are 10 unique users. Each user needs a separate private key to communicate with the other users. To calculate the number of keys that would be needed in this example, you would use the following formula:

# of users × (# of users – 1) / 2

In this example, you would calculate 10 × (10 – 1) / 2, or 45 needed keys.
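The formula can be checked quickly in code:

```python
def symmetric_keys_needed(users: int) -> int:
    """Number of unique pairwise secret keys for `users` participants."""
    return users * (users - 1) // 2

print(symmetric_keys_needed(10))   # 45, matching the example above
print(symmetric_keys_needed(100))  # 4950: pairwise keys scale quadratically
```

This quadratic growth in key count is the core scalability problem of symmetric cryptography and one motivation for asymmetric key exchange, discussed later in the chapter.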

With symmetric algorithms, the encryption key must remain secure. To obtain the secret key, the users must find a secure out-of-band method for communicating the secret key, including courier or direct physical contact between the users.

A special type of symmetric key called a session key encrypts messages between two users during a communication session.

Symmetric algorithms can be referred to as single-key, secret-key, private-key, or shared-key cryptography.

Symmetric systems provide confidentiality but not authentication or non-repudiation. If both users use the same key, determining where the message originated is impossible.

Symmetric algorithms include DES, AES, IDEA, Skipjack, Blowfish, Twofish, RC4/RC5/RC6, and CAST.

Data Encryption Standard (DES) and Triple DES (3DES)

Data Encryption Standard (DES) uses a 64-bit key, 8 bits of which are used for parity. Therefore, the effective key length for DES is 56 bits. DES divides a message into 64-bit blocks. Sixteen rounds of transposition and substitution are performed on each block, resulting in a 64-bit block of ciphertext.

DES has mostly been replaced by 3DES and AES, both of which are discussed shortly.

DES-X is a variant of DES that uses two additional 64-bit keys along with the 56-bit DES key. The first 64-bit key is XORed with the plaintext, which is then encrypted with DES. The second 64-bit key is XORed with the resulting ciphertext.

Double-DES, a DES version that used a 112-bit key length, is no longer used. After it was released, the meet-in-the-middle attack was shown to reduce Double-DES security to roughly the same level as DES.

Because of the need to quickly replace DES, Triple DES (3DES), a version of DES that increases security by using three 56-bit keys, was developed. Although 3DES is resistant to attacks, it is up to three times slower than DES. 3DES did serve as a temporary replacement to DES. However, the NIST has actually designated AES as the replacement for DES, even though 3DES is still in use today.

Advanced Encryption Standard (AES)

Advanced Encryption Standard (AES) is the replacement algorithm for DES. Although AES is considered the standard, the algorithm that is used in the AES standard is the Rijndael algorithm. The terms AES and Rijndael are often used interchangeably.

The three block sizes that are used in the Rijndael algorithm are 128, 192, and 256 bits. A 128-bit key with a 128-bit block size undergoes 10 transformation rounds. A 192-bit key with a 192-bit block size undergoes 12 transformation rounds. Finally, a 256-bit key with a 256-bit block size undergoes 14 transformation rounds.

Rijndael employs transformations composed of three layers: the nonlinear layer, key addition layer, and linear-mixing layer. The Rijndael design is very simple, and its code is compact, which allows it to be used on a variety of platforms. It is the required algorithm for sensitive but unclassified U.S. government data.


IDEA

International Data Encryption Algorithm (IDEA) is a block cipher that uses 64-bit blocks. Each 64-bit block is divided into 16 smaller blocks. IDEA uses a 128-bit key and performs eight rounds of transformations on each of the 16 smaller blocks.

IDEA is faster and harder to break than DES. However, IDEA is not as widely used as DES or AES because it was patented and licensing fees had to be paid to IDEA’s owner, a Swiss company named Ascom. However, the patent expired in 2012. IDEA is used in PGP.


Skipjack

Skipjack is a block-cipher, symmetric algorithm developed by the U.S. National Security Agency (NSA). It uses an 80-bit key to encrypt 64-bit blocks over 32 rounds of computations. This is the algorithm that is used in the Clipper chip. Details of the algorithm were originally classified but were declassified in 1998.


Blowfish

Blowfish is a block cipher that uses 64-bit data blocks with anywhere from 32- to 448-bit encryption keys. Blowfish performs 16 rounds of transformation. Initially developed with the intention of serving as a replacement for DES, Blowfish is one of the few algorithms that is not patented.


Twofish

Twofish is a version of Blowfish that uses 128-bit data blocks with 128-, 192-, or 256-bit keys. It uses 16 rounds of transformation. Like Blowfish, Twofish is not patented.


RC4/RC5/RC6

A total of six RC algorithms have been developed by Ron Rivest. RC1 was never published, RC2 was a 64-bit block cipher, and RC3 was broken before release. So the main RC implementations that a security professional needs to understand are RC4, RC5, and RC6.

RC4, also called ARC4, is one of the most popular stream ciphers. It is used in SSL and WEP. RC4 uses a variable key size of 40 to 2,048 bits and up to 256 rounds of transformation.

RC5 is a block cipher that uses a key size of up to 2,048 bits and up to 255 rounds of transformation. Block sizes supported are 32, 64, and 128 bits. Because of all the possible variables in RC5, the industry often uses an RC5-w/r/b designation, where w is the block size, r is the number of rounds, and b is the number of 8-bit bytes in the key. For example, RC5-64/16/16 denotes a 64-bit word (or 128-bit data blocks), 16 rounds of transformation, and a 16-byte (128-bit) key.

RC6 is a block cipher based on RC5, and it uses the same key size, rounds, and block size. RC6 was originally developed as an AES solution but lost the contest to Rijndael. RC6 is faster than RC5.


CAST

CAST, invented by and named for Carlisle Adams and Stafford Tavares, has two versions: CAST-128 and CAST-256. CAST-128 is a block cipher that uses a 40- to 128-bit key that performs 12 or 16 rounds of transformation on 64-bit blocks. CAST-256 is a block cipher that uses a 128-, 160-, 192-, 224-, or 256-bit key that performs 48 rounds of transformation on 128-bit blocks.

Table 15-1 lists the key facts about each symmetric algorithm.


Table 15-1 Symmetric Algorithm Key Facts

Algorithm Name   Block or Stream Cipher?   Key Size                        Number of Rounds   Block Size

DES              Block                     64 bits (effective length 56)   16                 64 bits

3DES             Block                     56, 112, or 168 bits            48                 64 bits

AES              Block                     128, 192, or 256 bits           10, 12, or 14      128 bits

IDEA             Block                     128 bits                        8                  64 bits

Skipjack         Block                     80 bits                         32                 64 bits

Blowfish         Block                     32 to 448 bits                  16                 64 bits

Twofish          Block                     128, 192, or 256 bits           16                 128 bits

RC4              Stream                    40 to 2,048 bits                Up to 256          N/A (stream cipher)

RC5              Block                     Up to 2,048 bits                Up to 255          32, 64, or 128 bits

RC6              Block                     Up to 2,048 bits                Up to 255          32, 64, or 128 bits

Asymmetric Algorithms

Asymmetric algorithms, often referred to as dual-key cryptography or public key cryptography, use both a public key and a private, or secret, key. The public key is known by all parties, and the private key is known only by its owner. One of these keys encrypts the message, and the other decrypts the message.

In asymmetric cryptography, determining a user’s private key is virtually impossible even if the public key is known, although both keys are mathematically related. However, if a user’s private key is discovered, the system can be compromised.

Asymmetric systems provide confidentiality, integrity, authentication, and non-repudiation. Because both users have one unique key that is part of the process, determining where the message originated is possible.

If confidentiality is the primary concern for an organization, a message should be encrypted with the receiver’s public key, which is referred to as secure message format. If authentication is the primary concern for an organization, a message should be encrypted with the sender’s private key, which is referred to as open message format. When using open message format, the message can be decrypted by anyone who has the public key.

Asymmetric algorithms include Diffie-Hellman, RSA, El Gamal, ECC, Knapsack, and Zero Knowledge Proof.


Diffie-Hellman

Diffie-Hellman is responsible for the key agreement process, which works like this:

  1. John and Sally need to communicate over an encrypted channel and decide to use Diffie-Hellman.

  2. John generates a private key and a public key, and Sally generates a private key and a public key.

  3. John and Sally share their public keys with each other.

  4. An application on John’s computer takes John’s private key and Sally’s public key and applies the Diffie-Hellman algorithm, and an application on Sally’s computer takes Sally’s private key and John’s public key and applies the Diffie-Hellman algorithm.

  5. Through this application, the same shared value is created for John and Sally, which in turn creates the same symmetric key on each system, using the asymmetric key agreement algorithm.
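The steps above can be sketched with tiny illustrative numbers. A real exchange uses primes of 2,048 bits or more; the prime, generator, and private values here are arbitrary choices for demonstration only:

```python
# Toy Diffie-Hellman key agreement with tiny parameters (illustration only).

p = 23   # shared prime (public)
g = 5    # shared generator (public)

john_private = 6      # never transmitted
sally_private = 15    # never transmitted

# Each side shares only its public value.
john_public = pow(g, john_private, p)
sally_public = pow(g, sally_private, p)

# Each side combines its own private key with the other's public key.
john_shared = pow(sally_public, john_private, p)
sally_shared = pow(john_public, sally_private, p)

# Both sides derive the same shared secret without ever sending it.
assert john_shared == sally_shared
```

Note that an eavesdropper sees only p, g, and the two public values; recovering either private value from them is the discrete logarithm problem.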

Through this process, Diffie-Hellman provides secure key distribution but not confidentiality, authentication, or non-repudiation. This algorithm deals with discrete logarithms. Diffie-Hellman is susceptible to man-in-the-middle attacks unless an organization implements digital signatures or digital certificates for authentication at the beginning of the Diffie-Hellman process.


RSA

The most popular asymmetric algorithm, RSA, was invented by Ron Rivest, Adi Shamir, and Leonard Adleman. RSA can provide key exchange, encryption, and digital signatures. The strength of the RSA algorithm lies in the difficulty of finding the prime factors of very large numbers. RSA uses a 1,024- to 4,096-bit key and performs one round of transformation.

RSA-768 and RSA-704 have been factored. If the prime factors used by an RSA implementation are recovered, the implementation is considered breakable and should not be used. RSA-2048 is the largest of the RSA challenge numbers and has not been factored; the now-retired RSA Factoring Challenge once offered a US$200,000 cash prize for its successful factorization.

As a key exchange protocol, RSA encrypts a DES or AES symmetric key for secure distribution. RSA uses a one-way function to provide encryption/decryption and digital signature verification/generation. The public key works with the one-way function to perform encryption and digital signature verification. The private key works with the one-way function to perform decryption and signature generation.

In RSA, the one-way function is a trapdoor. The private key knows the one-way function. The private key is capable of determining the original prime numbers. Finally, the private key knows how to use the one-way function to decrypt the encrypted message.
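A toy example with tiny primes (61 and 53, chosen only for illustration; real RSA primes are hundreds of digits long) shows the trapdoor at work: anyone who knows p and q can derive the private exponent, while outsiders see only n and e:

```python
# Toy RSA keypair illustrating the trapdoor one-way function.
# Real RSA depends on n being infeasible to factor; these values are not secure.

p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120, computable only with the trapdoor (p and q)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent, derived via modular inverse

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key
plaintext = pow(ciphertext, d, n)   # decrypt with the private key
assert plaintext == message
```

Without the factors of n, an attacker cannot compute phi and therefore cannot derive d, which is exactly the trapdoor property described above.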

Attackers can use Number Field Sieve (NFS), a factoring algorithm, to attack RSA.

El Gamal

El Gamal is an asymmetric key algorithm based on the Diffie-Hellman algorithm. Like Diffie-Hellman, El Gamal deals with discrete logarithms. However, whereas Diffie-Hellman can only be used for key agreement, El Gamal can provide key exchange, encryption, and digital signatures.

With El Gamal, any key size can be used. However, a larger key size negatively affects performance. Because El Gamal is the slowest asymmetric algorithm, using a key size of 1,024 bits or less would be wise.


Elliptic Curve Cryptography (ECC)

Elliptic curve cryptography (ECC) provides secure key distribution, encryption, and digital signatures. The size of the elliptic curve defines the difficulty of the underlying problem.

Although ECC can use a key of any size, it can use a much smaller key than RSA or any other asymmetric algorithm and still provide comparable security. Therefore, the primary benefit promised by ECC is a smaller key size, which means reduced storage and transmission requirements. ECC is more efficient and provides better security than RSA keys of the same size.


Knapsack

Knapsack is a series of asymmetric algorithms that provide encryption and digital signatures. This algorithm family is no longer used due to security issues.

Zero Knowledge Proof

Zero Knowledge Proof is a technique used to ensure that only the minimum needed information is disclosed, without giving all the details. An example of this technique occurs when one user encrypts data with his private key and the receiver decrypts with the originator’s public key. The originator has not given his private key to the receiver. But the originator is proving that he has his private key simply because the receiver can read the message.

Hybrid Ciphers

Because both symmetric and asymmetric algorithms have weaknesses, solutions have been developed that use both types of algorithms in a hybrid cipher. By using both algorithm types, the cipher provides confidentiality, authentication, and non-repudiation.


The process for hybrid encryption is as follows:

  1. The symmetric algorithm generates the key used to encrypt the message.

  2. The symmetric key is passed to the asymmetric algorithm, which encrypts the key for secure distribution.

  3. The message is encrypted with the symmetric key.

  4. Both the encrypted message and the encrypted key are sent to the receiver.

  5. The receiver decrypts the symmetric key and uses it to decrypt the message.
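The process for hybrid encryption can be sketched as follows. The asymmetric keypair is a toy RSA pair built from the tiny primes 61 and 53, and the "symmetric cipher" is a simple SHA-256-based XOR keystream; both are illustrative stand-ins, not production algorithms:

```python
# Sketch of hybrid encryption: a symmetric key encrypts the message,
# and an asymmetric (toy RSA) public key encrypts the symmetric key.
import hashlib
import secrets

n, e, d = 3233, 17, 2753   # toy RSA keypair from primes 61 and 53 (illustration only)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Derive a keystream from the key with SHA-256 (toy symmetric cipher).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Sender: encrypt the message symmetrically, wrap the key asymmetrically.
sym_key = secrets.randbelow(n - 2) + 2
ciphertext = xor_stream(sym_key.to_bytes(2, "big"), b"attack at dawn")
wrapped_key = pow(sym_key, e, n)          # encrypted with the receiver's public key

# Receiver: unwrap the symmetric key, then decrypt the message.
recovered_key = pow(wrapped_key, d, n)
message = xor_stream(recovered_key.to_bytes(2, "big"), ciphertext)
assert message == b"attack at dawn"
```

The design choice is the one the section describes: the fast symmetric cipher handles the bulk data, while the slow asymmetric cipher handles only the short key.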

Disk-Level Encryption

Disk-level encryption is used to encrypt an entire volume or an entire disk and may use the same key for the entire disk or, in some cases, a different key for each partition or volume. It may also use a Trusted Platform Module (TPM) chip. This chip is located on the motherboard of the system and provides password protection, digital rights management (DRM), and full disk encryption. It protects the keys used to encrypt the computer's hard disks and provides integrity authentication for a trusted boot pathway. This can help prevent data exposure through theft of the computer or the hard drive: because the key stored in the TPM chip is required to access the hard drive, the data cannot be decrypted if the drive is separated from its TPM. Full disk encryption is an effective measure for protecting sensitive data on laptops or other mobile devices that could be stolen.


Keep in mind the following characteristics of disk encryption when considering its deployment:

  • It encrypts an entire volume or an entire disk.

  • It uses a single encryption key per drive.

  • It slows the boot and logon process.

  • It provides no encryption for data in transit.

Block-Level Encryption

Sometimes the term block-level encryption is used as a synonym for disk-level encryption, but block-level encryption can also mean encryption of a disk partition or a file that is acting as a virtual partition. This term is also used when discussing types of encryption algorithms. A block cipher encrypts blocks of data at a time, in contrast to a stream cipher, which encrypts one bit at a time.

File-Level Encryption

File-level encryption is just what it sounds like: The encryption and decryption process is performed per file, and each file owner has a key. Figure 15-2 depicts the encryption and decryption process.


Figure 15-2 File Encryption and Decryption

Record-Level Encryption

Storage encryption can also be performed at the record level. In this case, choices can be made about which records to encrypt, and this has a significant positive effect on both performance and security. This type of encryption allows more granularity in who possesses the keys since a single key does not decrypt the entire disk or volume.

In high-security environments such as those holding credit card information, records should be encrypted. For example, the following record in a database should raise a red flag. Can you tell what the problem is?

UserID    | Address     | Credit Card        | Password
jdoe123   | 62nd street | 55XX-XXX-XXXX-1397 | Password100
ssmith234 | main street | 42XX-XXX-XXXX-2027 | 17DEC12

That’s right! The passwords are stored in cleartext!
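A minimal sketch of the fix for the password column: store a salted, iterated hash instead of the cleartext value. The salt size and iteration count below are illustrative choices:

```python
# Sketch: store a salted PBKDF2 hash of a password rather than cleartext.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A random per-user salt defeats precomputed (rainbow table) attacks.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("Password100")
assert verify_password("Password100", salt, stored)
assert not verify_password("wrong", salt, stored)
```

With this scheme, the database row holds only the salt and the hash, so a leaked record no longer reveals the password itself.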

Keep in mind the following characteristics of file and record encryption when considering its deployment:

  • It provides no encryption while the data is in transit.

  • It encrypts a single file.

  • It uses a single key per file.

  • It slows opening of a file.

Port-Level Encryption

You can encrypt network data on specific ports to prevent network eavesdropping with a network protocol analyzer. Network encryption occurs at the network layer of a selected protocol. Network data is encrypted only while it is in transit. Once the data has been received, network encryption is no longer in effect. You must consider the impact on performance when using this encryption.

Table 15-2 compares the forms of encryption covered in this section. Keep in mind these characteristics of encryption when considering deploying these methods.


Table 15-2 Forms of Encryption

Encryption Type | What It Encrypts | Key Usage | Performance Impact | Limitation
Disk | Encrypts an entire volume or an entire disk | Single key per drive | Slows the boot and logon process | No encryption while data is in transit
File and record | Encrypts a single file | Single key per file | Slows opening of a file | No encryption while data is in transit
Port | Encrypts data in transit | Single key per packet | Slows network performance | No encryption while data is at rest


Steganography

Steganography occurs when a message is hidden inside another object, such as a picture or document. In steganography, it is crucial that only those who are expecting the message know that the message exists.

Using a concealment cipher is one method of steganography. Another method of steganography is digital watermarking. Digital watermarking involves having a logo or trademark embedded in documents, pictures, or other objects. The watermarks deter people from using the materials in an unauthorized manner.

The most common technique is to alter the least significant bit of each pixel in a picture. The pixels are changed in a way so small that the human eye cannot detect it.
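The least-significant-bit technique can be sketched as follows, operating on a stand-in byte buffer rather than a real image file:

```python
# Sketch of LSB steganography: hide message bits in the lowest bit
# of each "pixel" byte. The cover is a fake buffer, not a real image.

def embed(pixels, message):
    # Expand the message into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit    # replace only the lowest bit
    return bytes(out)

def extract(pixels, length):
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8 : b * 8 + 8]))
        for b in range(length)
    )

cover = bytes(range(64))          # stand-in for image pixel data
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"
# Each pixel byte changes by at most 1 - imperceptible to the eye.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

The final assertion is the whole point of the technique: the carrier is altered so slightly that a casual observer has no reason to suspect a hidden message.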


Enterprises employ cryptography in many different implementations, depending on the needs of the organization. Some of the implementations that security professionals must be familiar with include crypto modules, crypto processors, cryptographic service providers, DRM, watermarking, GPG, SSL/TLS, SSH, and S/MIME.

Crypto Modules

Crypto module is a term used to describe the hardware, software, and/or firmware that implements cryptographic logic or cryptographic processes. Several standards bodies can assess and rate these modules, among them NIST, using Federal Information Processing Standard (FIPS) Publication 140-2. FIPS 140-2 defines four levels of security that such a module can receive and says the following about crypto modules:

Security Levels 1 and 2

For Security Levels 1 and 2, the physical port(s) and logical interface(s) used for the input and output of plaintext cryptographic keys, cryptographic key components, authentication data, and CSPs may be shared physically and logically with other ports and interfaces of the cryptographic module.

Security Levels 3 and 4

For Security Levels 3 and 4, the physical port(s) used for the input and output of plaintext cryptographic key components, authentication data, and CSPs shall be physically separated from all other ports of the cryptographic module or the logical interfaces used for the input and output of plaintext cryptographic key components, authentication data, and CSPs shall be logically separated from all other interfaces using a trusted path, and plaintext cryptographic key components, authentication data, and other CSPs shall be directly entered into the cryptographic module (e.g., via a trusted path or directly attached cable).

Crypto Processors

Crypto processors are dedicated to performing encryption. They typically include multiple physical security measures to prevent tampering. There are a number of implementations of this concept. One example is the processor that resides on a smart card. The processor takes in program instructions in encrypted form, decrypts them into plain instructions, and then executes them within the same chip, where the decrypted instructions are stored out of reach of the outside world.

Another example is the Trusted Platform Module (TPM) on an endpoint device, which stores RSA encryption keys specific to the host system for hardware authentication. A final example is the set of processors contained in hardware security modules (HSMs).

Cryptographic Service Providers

A cryptographic service provider (CSP) is a software library that implements the Microsoft CryptoAPI (CAPI) in Windows. CSPs are independent modules that can be used by different applications for cryptographic services. CSPs are implemented as a type of DLL with special restrictions on loading and use.

All CSPs must be digitally signed by Microsoft, and the signature is verified when Windows loads the CSP. After a CSP is loaded, Windows periodically rescans it to detect tampering, whether by malicious software such as computer viruses or by a user trying to circumvent restrictions (for example, on cryptographic key length) that might be built into the CSP's code. For more information on the available CSPs, see Microsoft's documentation.


Digital Rights Management (DRM)

Digital rights management (DRM) is used by hardware manufacturers, publishers, copyright holders, and individuals to control the use of digital content. This often also involves device controls. First-generation DRM software controls copying. Second-generation DRM controls executing, viewing, copying, printing, and altering works or devices. The U.S. Digital Millennium Copyright Act (DMCA) of 1998 imposes criminal penalties on those who make available technologies whose primary purpose is to circumvent content protection technologies. DRM includes restrictive license agreements and encryption. DRM protects computer games and other software, documents, ebooks, films, music, and television.

In most enterprise implementations, the primary concern is the DRM control of documents by using open, edit, print, or copy access restrictions that are granted on a permanent or temporary basis. Solutions can be deployed that store the protected data in a central or decentralized model. Encryption is used in the DRM implementation to protect the data both at rest and in transit.


Digital Watermarking

Digital watermarking is a method used in steganography. Digital watermarking involves embedding a logo or trademark in documents, pictures, or other objects. The watermark deters people from using the materials in an unauthorized manner.

GNU Privacy Guard (GPG)

GNU Privacy Guard (GPG) is closely related to Pretty Good Privacy (PGP). Both programs were developed to protect electronic communications.

PGP provides email encryption over the Internet and uses different encryption technologies based on the needs of the organization. PGP can provide confidentiality, integrity, and authenticity based on the encryption methods used.

PGP provides key management using RSA. PGP uses a web of trust to manage the keys. By sharing public keys, users create this web of trust instead of relying on a CA. The public keys of all the users are stored on each user’s computer in a key ring file. Within that file, each user is assigned a level of trust. The users within the web vouch for each other. So if User 1 and User 2 have a trust relationship and User 1 and User 3 have a trust relationship, User 1 can recommend the other two users to each other. Users can choose the level of trust initially assigned to a user but can change that level later if circumstances warrant a change. But compromise of a user’s private key in the PGP system means that the user must contact everyone with whom she has shared her key to ensure that this key is removed from the key ring file.

PGP provides data encryption for confidentiality using IDEA. However, other encryption algorithms can be used. Implementing PGP with MD5 provides data integrity. Public certificates with PGP provide authentication.

GPG is a rewrite, or upgrade, of PGP and uses AES. It does not use the IDEA encryption algorithm because the goal was to make it completely free. All the algorithm data is stored and documented publicly by the OpenPGP Alliance. GPG is often a better choice than PGP because AES is free of the patent restrictions that applied to IDEA and is considered more secure. Moreover, GPG is royalty free because it is not patented.

Although the basic GPG program has a command-line interface, some vendors have implemented front ends that provide GPG with a graphical user interface, including KDE and Gnome for Linux and Aqua for macOS. Gpg4win is a software suite that includes GPG for Windows, Gnu Privacy Assistant, and GPG plug-ins for Windows Explorer and Outlook.


SSL/TLS

SSL and TLS were discussed earlier in this chapter, in the section “Data-in-Transit Encryption.” As mentioned there, Secure Sockets Layer (SSL)/Transport Layer Security (TLS) is another option for creating secure connections to servers. It interfaces with the application and transport layers but does not really operate within these layers. It is mainly used to protect HTTP traffic or web servers. Its functionality is embedded in most browsers, and its use typically requires no action on the part of the user. It is widely used to secure Internet transactions and can be implemented in two ways:

  • In an SSL portal VPN, a user can have a single SSL connection to access multiple services on the web server. After being authenticated, the user is provided a page that acts as a portal to other services.

  • An SSL tunnel VPN uses an SSL tunnel to access services on a server that is not a web server. It uses custom programming to provide access to non-web services through a web browser.

TLS and SSL are very similar but not the same. TLS 1.0 is based on the SSL 3.0 specification, but the two are not operationally compatible. Both implement confidentiality, authentication, and integrity above the transport layer. The server is always authenticated, and optionally the client can also be authenticated.

SSL 3.0 or later must be used for client-side authentication. When configuring SSL, a session key length must be designated; historically, the two options were 40 bit and 128 bit. SSL 3.0 helps prevent man-in-the-middle attacks by authenticating the server's public key with a certificate.

Keep in mind that SSL traffic cannot be monitored using a traditional IDS or IPS deployment. If an enterprise needs to monitor SSL traffic, a proxy server that can monitor this traffic must be deployed.

Secure Shell (SSH)

Secure Shell (SSH) is an application and protocol that is used to remotely log in to another computer using a secure tunnel. After a session key is exchanged and the secure channel is established, all communication between the two computers is encrypted over the secure channel.

SSH is a solution that could be used to remotely access devices, including switches, routers, and servers. SSH is preferred over Telnet because Telnet does not secure the communication.


S/MIME

Multipurpose Internet Mail Extensions (MIME) is an Internet standard that allows email to include non-text attachments, non-ASCII character sets, multiple-part message bodies, and non-ASCII header information. Today, a majority of email is transmitted by SMTP in MIME format.

MIME allows an email client to send an attachment with a header describing the file type. The receiving system uses this header and the file extension listed in it to identify the attachment type and open the associated application. This allows the computer to automatically launch the appropriate application when the user double-clicks the attachment. If no application is associated with that file type, the user is able to choose the application using the Open With option, or a website might offer the necessary application.

Secure MIME (S/MIME) allows MIME to encrypt and digitally sign email messages and encrypt attachments. It adheres to the Public Key Cryptography Standards (PKCS), which is a set of public key cryptography standards designed by the owners of the RSA algorithm.

S/MIME uses encryption to provide confidentiality, hashing to provide integrity, public key certificates to provide authentication, and message digests to provide non-repudiation.

Cryptographic Applications and Proper/Improper Implementations

Cryptographic applications provide many functions for an enterprise. It is usually best to implement cryptography that is implemented within an operating system or an application. This allows the cryptography to be implemented seamlessly, usually with little or no user intervention. Always ensure that you fully read and understand any vendor documentation when implementing the cryptographic features of any operating system or application. It is also important that you keep the operating system or application up-to-date with the latest service packs, security patches, and hot fixes.

Improperly implementing any cryptographic application can result in security issues for your enterprise. This is especially true in financial or ecommerce applications. Avoid designing your own cryptographic algorithms, using older cryptographic methods, or partially implementing standards.

Strength Versus Performance Versus Feasibility to Implement Versus Interoperability

While implementing cryptographic algorithms can increase the security of your enterprise, it is not the solution to all the problems encountered. Security professionals must understand the confidentiality and integrity issues of the data to be protected. Any algorithm that is deployed on an enterprise must be properly carried out from key exchange and implementation to retirement. When implementing any algorithm, you need to consider four aspects: strength, performance, feasibility to implement, and interoperability.


Strength

The strength of an algorithm is usually determined by the size of the key used. The longer the key, the stronger the encryption for the algorithm. But while using longer keys can increase the strength of the algorithm, it often results in slower performance.
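The effect of key length on strength is exponential, which a quick calculation illustrates:

```python
# Keyspace grows exponentially with key length: each added bit
# doubles the number of keys an attacker must try by brute force.

des_keys = 2 ** 56     # effective DES keyspace
aes_keys = 2 ** 128    # smallest AES keyspace

# One extra bit doubles the keyspace.
assert 2 ** 57 == 2 * des_keys

# AES-128 has 2^72 times as many keys as DES.
assert aes_keys // des_keys == 2 ** 72
```

This is why a modest increase in key length (56 to 128 bits) moved symmetric encryption from brute-forceable to computationally infeasible, at the cost of somewhat slower processing.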


Performance

The performance of an algorithm depends on the key length and the algorithm used. As mentioned earlier, symmetric algorithms are faster than asymmetric algorithms.

Feasibility to Implement

For security professionals and the enterprises they protect, proper planning and design determine whether an algorithm can feasibly be implemented, given the organization's hardware, budget, and expertise.


Interoperability

The interoperability of an algorithm is its ability to work with the other systems and applications in the enterprise. Security professionals should research any known limitations of algorithms before attempting to integrate them into their enterprise.

Stream vs. Block

If you incorporate cryptography into your enterprise, you must consider the implications of the implementation. The following sections explain stream ciphers and block ciphers in more detail.

Stream Ciphers

Stream-based ciphers perform encryption on a bit-by-bit basis and use keystream generators. A keystream generator creates a bit stream that is XORed with the plaintext bits. The result of this XOR operation is the ciphertext. Stream ciphers are used to secure streaming video and audio.

A synchronous stream-based cipher generates its keystream from the key alone, whereas an asynchronous (self-synchronizing) stream cipher generates it from the key and previous ciphertext. The key ensures that the bit stream XORed with the plaintext is effectively random.


Advantages of stream-based ciphers include the following:

  • They generally have lower error propagation because encryption occurs on each bit.

  • They are generally used more in hardware implementations.

  • They use the same key for encryption and decryption.

  • They are generally cheaper to implement than block ciphers.

  • They employ only confusion, not diffusion.
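A toy keystream cipher (SHA-256 used here as an illustrative keystream generator, not a standardized stream cipher) shows both the XOR mechanics and the low error propagation noted above:

```python
# Toy stream cipher: XOR the plaintext with a hash-derived keystream.
import hashlib

def keystream(key, length):
    # Generate keystream bytes by hashing the key with a counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key, data):
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"toy key"
ciphertext = crypt(key, b"hello stream")
assert crypt(key, ciphertext) == b"hello stream"

# Low error propagation: flipping one ciphertext bit corrupts only
# the corresponding plaintext bit; the rest decrypts intact.
damaged = bytearray(ciphertext)
damaged[0] ^= 0x01
assert crypt(key, bytes(damaged))[1:] == b"ello stream"
```

The final assertion demonstrates the error-propagation advantage: a single-bit transmission error damages only a single bit of the recovered plaintext.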

Block Ciphers

Block ciphers perform encryption by breaking a message into fixed-length units, called blocks. A message of 1,024 bits could be divided into 16 blocks of 64 bits each. Each of those 16 blocks is processed in turn by the algorithm, producing a corresponding block of ciphertext. If the final portion of data is less than a complete block, it is padded.

Examples of block ciphers include IDEA, Blowfish, RC5, and RC6.


Advantages of block ciphers include the following:

  • Implementation of block ciphers is easier than implementation of stream-based ciphers.

  • Block ciphers are generally less susceptible to security issues.

  • They are generally used more in software implementations.

  • Block ciphers employ both confusion and diffusion.

Block ciphers often use different modes: ECB, CBC, CFB, OFB, and CTR.


DES and 3DES use modes in their implementations. In this section we discuss those modes.

DES Modes

DES comes in the following five modes: Electronic Code Book (ECB), Cipher Block Chaining (CBC), Cipher Feedback (CFB), Output Feedback (OFB), and Counter (CTR) mode.

In ECB, 64-bit blocks of data are processed by the algorithm using the key. The ciphertext produced can be padded to ensure that the result is a 64-bit block. If an encryption error occurs, only one block of the message is affected. ECB operations run in parallel, making ECB a fast method.

Although ECB is the easiest and fastest mode to use, it has security issues because every 64-bit block is encrypted with the same key. If an attacker discovers the key, all the blocks of data can be read. If an attacker discovers both versions of the 64-bit block (plaintext and ciphertext), the key can be determined. For these reasons, the mode should not be used when encrypting a large amount of data because patterns would emerge. ECB is a good choice if an organization needs encryption for its databases because ECB works well with the encryption of short messages.

Figure 15-3 shows the ECB encryption process.


Figure 15-3 The ECB Encryption Process

In CBC, the 64-bit blocks are chained together because each resultant 64-bit ciphertext block is applied to the next block. Plaintext message block 1 is XORed with an initialization vector (IV) and then encrypted by the algorithm. The resultant ciphertext block 1 is XORed with plaintext message block 2, and the result is encrypted to produce ciphertext block 2. This process continues until the message is complete.

Unlike ECB, CBC encrypts large files without having any patterns within the resulting ciphertext. If a unique IV is used with each message encryption, the resultant ciphertext will be different every time, even in cases where the same plaintext message is used.

Figure 15-4 shows the CBC encryption process.


Figure 15-4 The CBC Encryption Process
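The difference between ECB's pattern leakage and CBC's chaining can be demonstrated with a toy block "cipher" (a hash-based stand-in used only to show the mode mechanics; unlike a real block cipher, it is not invertible):

```python
# Toy demonstration: ECB leaks patterns, CBC hides them.
import hashlib

KEY = b"toy key!"
BLOCK = 8

def encrypt_block(block):
    # Stand-in for a keyed block cipher (one-way here; real ciphers are invertible).
    return hashlib.sha256(KEY + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"SAMEBLOK" * 3   # three identical 8-byte blocks
blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]

# ECB: each block is encrypted independently, so identical plaintext
# blocks produce identical ciphertext blocks - a visible pattern.
ecb = [encrypt_block(b) for b in blocks]
assert ecb[0] == ecb[1] == ecb[2]

# CBC: each block is XORed with the previous ciphertext block (the IV
# for the first block) before encryption, so the pattern disappears.
iv = b"\x01\x02\x03\x04\x05\x06\x07\x08"
cbc, prev = [], iv
for b in blocks:
    prev = encrypt_block(xor(b, prev))
    cbc.append(prev)
assert len(set(cbc)) == 3   # all three ciphertext blocks differ
```

This is exactly the weakness described for ECB above: repeated plaintext produces repeated ciphertext, while CBC's chaining (and a unique IV per message) removes the pattern.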

Whereas CBC and ECB require 64-bit blocks, CFB works with 8-bit (or smaller) blocks and uses a combination of stream ciphering and block ciphering. The first 8-bit block of the plaintext is XORed with a keystream, which the algorithm produces by encrypting the IV with the key. The resultant ciphertext is then fed back to produce the keystream for the next plaintext block.

Figure 15-5 shows the CFB encryption process.


Figure 15-5 The CFB Encryption Process

The ciphertext block must be the same size as the plaintext block. The method that CFB uses can have issues if any ciphertext result has errors because those errors will affect any future block encryption. For this reason, CFB should not be used to encrypt data that can be affected by this problem, particularly video or voice signals. This problem led to the need for DES OFB mode.

Like CFB, OFB works with 8-bit (or smaller) blocks and uses a combination of stream ciphering and block ciphering. However, OFB uses the previous keystream block with the key to create the next keystream block. Figure 15-6 shows the OFB encryption process.


Figure 15-6 The OFB Encryption Process

With OFB, the keystream value must be the same size as the plaintext block. Because of the way OFB is implemented, OFB is less susceptible to the error type that CFB has.

CTR mode is similar to OFB mode. The main difference is that CTR mode uses an incrementing IV counter to ensure that each block is encrypted with a unique keystream. Also, the ciphertext is not chained into the encryption process. Because this chaining does not occur, blocks can be processed in parallel, so CTR performance is much better than that of the other modes.

Figure 15-7 shows the CTR encryption process.


Figure 15-7 The CTR Encryption Process

3DES Modes

3DES comes in the following four modes:

  • 3DES-EEE3: Each block of data is encrypted three times, each time with a different key.

  • 3DES-EDE3: Each block of data is encrypted with the first key, decrypted with the second key, and encrypted with the third key.

  • 3DES-EEE2: Each block of data is encrypted with the first key, encrypted with the second key, and finally encrypted again with the first key.

  • 3DES-EDE2: Each block of data is encrypted with the first key, decrypted with the second key, and finally encrypted again with the first key.
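The EDE keying pattern can be sketched with a toy per-byte cipher (add or subtract a key value mod 256, purely illustrative). The sketch also shows why encrypt-decrypt-encrypt was chosen: when all three keys are equal, 3DES-EDE collapses to a single encryption, preserving backward compatibility with single DES:

```python
# Toy illustration of the 3DES-EDE3 keying pattern. E and D here are a
# trivial invertible pair (add/subtract mod 256), standing in for DES.

def E(key, data):
    return bytes((b + key) % 256 for b in data)

def D(key, data):
    return bytes((b - key) % 256 for b in data)

def ede3_encrypt(k1, k2, k3, data):
    return E(k3, D(k2, E(k1, data)))      # encrypt, decrypt, encrypt

def ede3_decrypt(k1, k2, k3, data):
    return D(k1, E(k2, D(k3, data)))      # reverse the three steps

msg = b"block"
ct = ede3_encrypt(7, 42, 99, msg)
assert ede3_decrypt(7, 42, 99, ct) == msg

# With all three keys equal, EDE reduces to one single encryption,
# which is why EDE interoperates with legacy single-DES systems.
assert ede3_encrypt(7, 7, 7, msg) == E(7, msg)
```

The same reduction holds for real 3DES: E then D with the same key cancel out, leaving one DES encryption.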

Known Flaws/Weaknesses

When implementing cryptographic algorithms, security professionals must understand the flaws or weaknesses of those algorithms. In this section, we first discuss both the strengths and weaknesses of symmetric and asymmetric algorithms. Then we discuss some of the attacks that can occur against cryptographic algorithms and which algorithms can be affected by these attacks. However, keep in mind that cryptanalysis changes daily. Even the best cryptographic algorithms in the past have eventually been broken. For this reason, security professionals should ensure that the algorithms used by their enterprise are kept up-to-date and retired once compromise has occurred.

Table 15-3 lists the strengths and weaknesses of symmetric algorithms.


Table 15-3 Symmetric Algorithm Strengths and Weaknesses

Strengths | Weaknesses
1,000 to 10,000 times faster than asymmetric algorithms | Number of unique keys needed can cause key management issues
Hard to break | Secure key distribution critical
Cheaper to implement | Key compromise occurs if one party is compromised, thereby allowing impersonation

Table 15-4 lists the strengths and weaknesses of asymmetric algorithms.


Table 15-4 Asymmetric Algorithm Strengths and Weaknesses

Strengths | Weaknesses
Key distribution is easier and more manageable than with symmetric algorithms | More expensive to implement
Key management is easier because the same public key is used by all parties | 1,000 to 10,000 times slower than symmetric algorithms


While the basics of a PKI have been discussed, an enterprise should also consider several advanced PKI concepts, including wildcard, OCSP versus CRL, issuance to entities, and key escrow.



Wildcard

A wildcard certificate is a public key certificate that can be used with multiple subdomains of a domain. The advantages of using a wildcard certificate include:

  • The wildcard certificate can secure unlimited subdomains.

  • While wildcard certificates cost more than single certificates, buying a wildcard certificate is often much cheaper than buying separate certificates for each subdomain. In some cases, it is possible to purchase an unlimited server license, so you only buy one wildcard certificate to use on as many web servers as necessary.

  • A wildcard certificate is much easier to manage, deploy, and renew than separate certificates for each subdomain.

There are, however, some important disadvantages to using wildcard certificates:

  • If one server in one subdomain is compromised, all the servers in all the subdomains that used the same wildcard certificate are compromised.

  • Some popular mobile device operating systems do not recognize the wildcard character (*) and cannot use a wildcard certificate.

Wildcard certificates can cause issues within enterprises. For example, if an administrator revokes an SSL certificate after a security breach for a web server and the certificate is a wildcard certificate, all the other servers that use that certificate will start generating certificate errors.

Let’s take a moment to look at a deployment scenario for a wildcard certificate. While connecting to a secure payment server, a security auditor notices that the SSL certificate was issued with a wildcard (*) in its subject name, meaning a wildcard certificate was used. The auditor also notices that many of the internal development servers use the same certificate. If it is later discovered that the USB thumb drive on which the SSL certificate was stored is missing, then all the servers on which this wildcard certificate was deployed need new certificates. In this scenario, security professionals should first deploy a new certificate on the server that is most susceptible to attacks, which here is the Internet-facing payment server.
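To make the scoping rule concrete, here is a minimal sketch of how a one-label wildcard name matches a hostname. The function name and domains are hypothetical, and this is not a substitute for a real TLS library's validation logic (RFC 6125 defines the full rules):

```python
# Illustrative sketch: how a single-label wildcard certificate name is
# typically matched against a hostname. The wildcard (*) covers exactly
# one DNS label, so *.example.com matches shop.example.com but not
# example.com or a.b.example.com.

def matches_wildcard(pattern: str, hostname: str) -> bool:
    """Return True if hostname is covered by the certificate pattern."""
    pattern_labels = pattern.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(pattern_labels) != len(host_labels):
        return False
    for p, h in zip(pattern_labels, host_labels):
        if p != "*" and p != h:
            return False
    return True

# The wildcard covers any single-level subdomain...
print(matches_wildcard("*.example.com", "payments.example.com"))  # True
# ...but not the bare domain or deeper subdomains.
print(matches_wildcard("*.example.com", "example.com"))           # False
print(matches_wildcard("*.example.com", "a.b.example.com"))       # False
```

This one-label limit is why a compromise of the single shared key affects every subdomain at once, as described above.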


The Online Certificate Status Protocol (OCSP) is an Internet protocol that obtains the revocation status of an X.509 digital certificate using the serial number. OCSP is an alternative to the standard certificate revocation list (CRL) that is used by many PKIs. OCSP automatically validates the certificates and reports back the status of the digital certificate by accessing the CRL on the CA. OCSP allows a certificate to be validated by a single server that returns the validity of that certificate.

A CRL is a list of digital certificates that a CA has revoked. To find out whether a digital certificate has been revoked, either the browser must check the CRL or the CA must push out the CRL values to clients. This can become quite daunting when you consider that the CRL contains every certificate that has ever been revoked.
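Conceptually, a CRL check is a membership test against the CA's published list of revoked serial numbers. The following toy sketch illustrates only that idea; the serial numbers are invented, and real CRLs are signed X.509 structures defined in RFC 5280:

```python
# Toy illustration of a CRL lookup: a CA publishes a list of revoked
# certificate serial numbers, and a client rejects a certificate whose
# serial number appears on the list. Data here is hypothetical.

revoked_serials = {0x1A2B, 0x3C4D, 0x5E6F}  # stand-in CRL contents

def is_revoked(serial, crl):
    """A certificate is rejected if its serial number is on the CRL."""
    return serial in crl

print(is_revoked(0x3C4D, revoked_serials))  # True: certificate revoked
print(is_revoked(0x7777, revoked_serials))  # False: not on the list
```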

One concept to keep in mind is the revocation request grace period. This period is the maximum amount of time between when the revocation request is received by the CA and when the revocation actually occurs. A shorter grace period provides better security but often results in a higher implementation cost.

Issuance to Entities

The issuance of certificates to entities is the most common function performed by any PKI. However, a PKI also handles other functions, including certificate usage, certificate verification, certificate retirement, key recovery, and key escrow.


The steps involved in requesting a digital certificate are as follows:

  1. A user requests a digital certificate, and the RA receives the request.

  2. The RA requests identifying information from the requestor.

  3. After the required information is received, the RA forwards the certificate request to the CA.

  4. The CA creates a digital certificate for the requestor. The requestor’s public key and identity information are included as part of the certificate.

  5. The user receives the certificate.
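The CA's work in step 4 can be sketched in a few lines: it builds a certificate record from the requestor's identity and public key, then signs it. In this illustration an HMAC with a CA-held secret stands in for a real asymmetric signature, purely to keep the example self-contained; all names and values are hypothetical:

```python
# Minimal sketch of certificate issuance and verification. An HMAC over
# the certificate body substitutes for a real digital signature here.
import hashlib
import hmac
import json

CA_SECRET = b"hypothetical-ca-signing-secret"

def issue_certificate(subject, public_key):
    """CA builds the certificate and attaches a signature over its body."""
    body = {"subject": subject, "public_key": public_key, "issuer": "Example CA"}
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(CA_SECRET, payload, hashlib.sha256).hexdigest()
    return body

def verify_certificate(cert):
    """Recompute the signature over everything except the signature field."""
    body = {k: v for k, v in cert.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(CA_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["signature"], expected)

cert = issue_certificate("alice@example.com", "alice-public-key-pem")
print(verify_certificate(cert))   # True: signature checks out

cert["subject"] = "mallory@example.com"   # tamper with the certificate
print(verify_certificate(cert))   # False: verification now fails
```

The tampering check at the end shows why the certificate binds identity to key: any change to the body invalidates the CA's signature.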


After the user has a certificate, she is ready to communicate with other trusted entities. The process for communication between entities is as follows:

  1. User 1 requests User 2’s public key from the certificate repository.

  2. The repository sends User 2’s digital certificate to User 1.

  3. User 1 verifies the certificate and extracts User 2’s public key.

  4. User 1 encrypts the session key with User 2’s public key and sends the encrypted session key and User 1’s certificate to User 2.

  5. User 2 receives User 1’s certificate and verifies the certificate with a trusted CA.

After this certificate exchange and verification process occurs, the two entities are able to communicate using encryption.
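Step 4 of the exchange above — encrypting a session key with the other party's public key — can be shown with "textbook" RSA and deliberately tiny primes so the arithmetic is visible. This is a teaching sketch only; it is not secure, and real systems use padded RSA or Diffie-Hellman variants:

```python
# Sketch of session-key exchange: User 1 encrypts a symmetric session
# key with User 2's public key, and only User 2's private key recovers
# it. Textbook RSA with toy primes (p=61, q=53); NOT secure.

n = 61 * 53          # public modulus, 3233
e = 17               # public exponent
d = 2753             # private exponent (inverse of e mod 3120)

session_key = 42     # symmetric session key, shown as a small integer

# User 1 encrypts with User 2's PUBLIC key (e, n)
ciphertext = pow(session_key, e, n)

# User 2 decrypts with the PRIVATE key (d, n)
recovered = pow(ciphertext, d, n)
print(recovered == session_key)  # True: both sides now share the key
```

From this point on, both parties use the faster symmetric session key for bulk encryption, which is the hybrid approach discussed earlier in the chapter.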


A PKI must validate that an entity claiming to have the key is a valid entity, using the certificate information. Certificates can be issued to users; a user can be a person, a hardware device, a department, or a company.

A digital certificate provides an entity, usually a user, with the credentials to prove its identity and associates that identity with a public key. At minimum, a digital certificate must provide the serial number, the issuer, the subject (owner), and the public key.


Any participant that requests a certificate must first go through the registration authority (RA), which verifies the requestor’s identity and registers the requestor. After the identity is verified, the RA passes the request to the CA.

A CA is the entity that creates and signs digital certificates, maintains the certificates, and revokes them when necessary. Every entity that wants to participate in the PKI must contact the CA and request a digital certificate. The CA is the ultimate authority for the authenticity of every participant in the PKI because it signs each digital certificate. The certificate binds the identity of the participant to the public key.

There are different types of CAs. Some organizations provide PKIs as a paid service to companies that need them. An example is VeriSign. Some organizations implement their own private CAs so that the organization can control all aspects of the PKI process. If an organization is large enough, it might need to provide a structure of CAs, with the root CA being the highest in the hierarchy.

Because more than one entity is often involved in the PKI certification process, certification path validation allows the participants to check the legitimacy of the certificates in the certification path.


When an application needs to use a digital certificate, vendors use a PKI standard to exchange keys via certificates. The browser uses the required keys and checks the trust paths and revocation status before allowing the certificate to be used by the application.

Key Escrow

Key escrow is the process of storing keys with a third party to ensure that decryption can occur. This is most often used to collect evidence during investigations. Key recovery is the process whereby a key is archived in a safe place by the administrator.
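One common way escrowed keys are protected is to split the key between two escrow agents so that neither share alone reveals anything and recovery requires both. The XOR split below illustrates only that idea; real escrow systems layer policy, auditing, and access controls on top:

```python
# Illustrative two-party key split for escrow: share1 is a random pad,
# share2 is the key XORed with that pad. Each share alone is random
# noise; XORing the shares back together recovers the key.
import secrets

def split_key(key):
    share1 = secrets.token_bytes(len(key))               # random pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))   # key XOR pad
    return share1, share2

def recover_key(share1, share2):
    return bytes(a ^ b for a, b in zip(share1, share2))

key = b"example-session-key"
s1, s2 = split_key(key)
print(recover_key(s1, s2) == key)  # True: both shares recover the key
```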


An X.509 certificate complies with the X.509 standard. An X.509 certificate contains the following fields:

  • Version

  • Serial Number

  • Algorithm ID

  • Issuer

  • Validity

  • Subject

  • Subject Public Key Info

    • Public Key Algorithm

    • Subject Public Key

  • Issuer Unique Identifier (optional)

  • Subject Unique Identifier (optional)

  • Extensions (optional)

VeriSign first introduced the following digital certificate classes:

  • Class 1: For individuals and intended for email. These certificates are saved by web browsers. No real proof of identity is required.

  • Class 2: For organizations that must provide proof of identity.

  • Class 3: For servers and software signing, in which independent verification and checking of identity and authority are done by the issuing CA.

  • Class 4: For online business transactions between companies.

  • Class 5: For private organizations or governmental security.


Tokens are hardware devices that store digital certificates and private keys. Implementations include USB devices and smart cards. An example of a USB token is shown in Figure 15-8.


Figure 15-8 USB Token

As you can see in Figure 15-8, these tokens can be used in a variety of scenarios.


Formally known as the TLS Certificate Status Request extension, OCSP stapling is an alternative to using OCSP. In a stapling scenario, the certificate holder queries the OCSP server at regular intervals and obtains a signed time-stamped OCSP response for each query. When the site’s visitors attempt to connect to the site, this response is included (“stapled”) with the SSL/TLS handshake via the Certificate Status Request extension. Figure 15-9 compares the regular OCSP process and OCSP stapling.


Figure 15-9 OCSP Versus OCSP Stapling


Public key pinning is a security mechanism delivered via an HTTP header that allows HTTPS websites to resist impersonation by attackers using mis-issued or otherwise fraudulent certificates. It delivers a set of public keys to the client (browser), which should be the only ones trusted for connections to this domain. This process is depicted in Figure 15-10.
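An HPKP-style pin value is derived as the base64 encoding of the SHA-256 digest of the certificate's SubjectPublicKeyInfo (SPKI) bytes. The sketch below shows the derivation; the SPKI bytes are placeholder data, not a real key:

```python
# Sketch of deriving a pin-sha256 value as carried in a
# Public-Key-Pins HTTP header: base64(SHA-256(SPKI DER bytes)).
import base64
import hashlib

def spki_pin(spki_der):
    """Compute the pin-sha256 value for the given SPKI bytes."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Placeholder bytes standing in for a certificate's real SPKI structure.
pin = spki_pin(b"placeholder SubjectPublicKeyInfo DER bytes")
print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000')
# A SHA-256 pin is always 44 base64 characters.
```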


Figure 15-10 Public Key Pinning


Another implementation of cryptography is the implementation of cryptocurrency, such as bitcoin. Cryptocurrencies make use of a process called blockchain. A blockchain is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks. The blockchain process is depicted in Figure 15-11.
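The cryptographic linking described above can be sketched in a few lines: each block stores the hash of the previous block, so altering any block breaks every hash after it. Real systems add proof-of-work, timestamps, and peer-to-peer validation on top of this minimal, illustrative structure:

```python
# Minimal sketch of a hash-linked chain of blocks.
import hashlib

def block_hash(index, prev_hash, data):
    return hashlib.sha256(f"{index}|{prev_hash}|{data}".encode()).hexdigest()

def add_block(chain, data):
    index = len(chain)
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"index": index, "prev_hash": prev_hash, "data": data,
                  "hash": block_hash(index, prev_hash, data)})

def chain_valid(chain):
    """Every block must hash correctly and link to its predecessor."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["index"], block["prev_hash"],
                                       block["data"]):
            return False
    return True

chain = []
for record in ["genesis", "pay 5 BTC to A", "pay 2 BTC to B"]:
    add_block(chain, record)
print(chain_valid(chain))   # True: untampered chain verifies

chain[1]["data"] = "pay 500 BTC to A"   # tamper with a middle block
print(chain_valid(chain))   # False: hashes no longer link up
```

The final check shows why tampering with any historical block is detectable, which is the property cryptocurrencies rely on.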


Figure 15-11 Blockchain Process

Mobile Device Encryption Considerations

Mobile devices present a unique challenge to the process of securing data. These devices have much less processing power than desktop devices and laptops. For this reason, a special form of encryption is used that is uniquely suited to this scenario. Let’s look at this algorithm.

Elliptic Curve Cryptography

Elliptic curve cryptography (ECC) is an approach to public key cryptography based on the algebraic structure of elliptic curves over finite fields. The key characteristic that makes it suitable for mobile devices of all types is that it can provide the same level of security provided by other algorithms by using smaller keys. Smaller keys require less processing power when the encryption and decryption process occur. For example, a 256-bit elliptic curve public key should provide comparable security to a 3,072-bit RSA public key.
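The key-size advantage can be summarized with the comparable-strength figures from NIST SP 800-57 guidance, which include the RSA-3072 versus ECC-256 comparison just mentioned:

```python
# Comparable key sizes (in bits) at equal security levels, per NIST
# SP 800-57 guidance. The much smaller ECC keys are why ECC suits
# processing-constrained mobile hardware.

comparable_strengths = {
    # security bits: (RSA modulus bits, ECC key bits)
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}

for security, (rsa_bits, ecc_bits) in sorted(comparable_strengths.items()):
    print(f"{security}-bit security: RSA-{rsa_bits} vs ECC-{ecc_bits}")
```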

P256 vs. P384 vs. P512

ECC can use several key sizes; the most common NIST curves are P-256, P-384, and P-521 (the last is often loosely written as P-512). P-256 and P-384 are the curves specified in NSA Suite B, a set of cryptographic algorithms promulgated by the NSA as part of its Cryptographic Modernization Program. Suite B was established as the cryptographic base for both unclassified information and most classified information. It includes AES for symmetric encryption, the Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signatures, Elliptic Curve Diffie-Hellman (ECDH) for key agreement, and SHA-256 and SHA-384 for message digests.

Exam Preparation Tasks

You have a couple of choices for exam preparation: the exercises here and the practice exams in the Pearson IT Certification test engine.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 15-5 lists these key topics.


Table 15-5 Key Topics for Chapter 15

  • Benefits of encryption

  • Figure 15-1: Hash function process

  • The SHA-2 family

  • Creating a digital signature

  • Verifying the digital signature

  • Table 15-1: Symmetric algorithm key facts

  • Diffie-Hellman steps

  • Hybrid encryption

  • Characteristics of disk encryption

  • Characteristics of file and record encryption

  • Table 15-2: Forms of encryption

  • Cryptographic algorithm considerations

  • Advantages of stream-based ciphers

  • Advantages of block ciphers

  • 3DES modes

  • Table 15-3: Strengths and weaknesses of symmetric algorithms

  • Table 15-4: Asymmetric algorithm strengths and weaknesses

  • Advantages of using a wildcard certificate

  • Disadvantages of using wildcard certificates

  • Requesting a digital certificate

  • Using a certificate

  • Digital certificate classes

  • Figure 15-9: OCSP versus OCSP stapling

  • Figure 15-10: Public key pinning

  • Figure 15-11: Blockchain process


Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

3-D Secure





Advanced Encryption Standard (AES)

asymmetric algorithm

Authentication Header (AH)

block cipher


block-level encryption




certificate revocation list (CRL)

cipher block chaining (CBC)

cipher feedback (CFB)

code signing


counter mode (CTR)

crypto module

crypto processor


cryptographic service provider (CSP)

digital rights management (DRM)



Data Encryption Standard (DES)

digital signature

digital watermarking

disk-level encryption

El Gamal

electronic code book (ECB)

elliptic curve cryptography (ECC)

Encapsulating Security Payload (ESP)

file-level encryption

GNU Privacy Guard (GPG)

hash MAC (HMAC)




hybrid cipher

Hypertext Transfer Protocol (HTTP)

in-memory processing

International Data Encryption Algorithm (IDEA)

Internet Key Exchange (IKE)

Internet Protocol Security (IPsec)

Internet Security Association and Key Management Protocol (ISAKMP)

key escrow

key recovery

key stretching

MD5 algorithm

OCSP stapling

Online Certificate Status Protocol (OCSP)

output feedback (OFB)

perfect forward secrecy (PFS)

Pretty Good Privacy (PGP)

public key cryptography

public key pinning

rainbow table attack





Secure Electronic Transaction (SET)

Secure Hash Algorithm (SHA)

Secure MIME (S/MIME)

Secure Shell (SSH)

Secure Sockets Layer (SSL)

Security Association (SA)




stream-based cipher

symmetric algorithm


transport encryption

Transport Layer Security (TLS)

transport mode

Triple DES (3DES)

tunnel mode


wildcard certificate

zero-knowledge proof

Review Questions

1. Your organization has decided that it needs to protect all confidential data that is residing on a file server. All confidential data is located within a folder named Confidential. You need to ensure that this data is protected. What should you do?

  • Implement hashing for all files in the Confidential folder.

  • Decrypt the Confidential folder and all its contents.

  • Encrypt the Confidential folder and all its contents.

  • Implement a digital signature for all the users that should have access to the Confidential folder.

2. Your organization has recently decided to implement encryption on the network. Management requests that you implement a system that uses a private, or secret, key that must remain secret between the two parties. Which system should you implement?

  • running key cipher

  • concealment cipher

  • asymmetric algorithm

  • symmetric algorithm

3. You have recently been hired by a company to analyze its security mechanisms to determine any weaknesses in the current security mechanisms. During this analysis, you detect that an application is using a 3DES implementation that encrypts each block of data three times, each time with a different key. Which 3DES implementation does the application use?

  • 3DES-EDE3

  • 3DES-EEE3

  • 3DES-EDE2

  • 3DES-EEE2

4. Management at your organization has decided that it no longer wants to implement asymmetric algorithms because they are much more expensive to implement. You have determined that several algorithms are being used across the enterprise. Which of the following should you discontinue using, based on management’s request?

  • IDEA

  • Twofish

  • RC6

  • RSA

5. Users on your organization’s network need to be able to access several confidential files located on a file server. Currently, the files are encrypted. Recently, it was discovered that attackers were able to change the contents of the file. You need to use a hash function to calculate the hash values of the correct files. Which of the following should you not use?

  • ECC

  • MD6

  • SHA-2

  • RIPEMD-160

6. Your organization implements a public key infrastructure (PKI) to issue digital certificates to users. Management has requested that you ensure that all the digital certificates that were issued to contractors have been revoked. Which PKI component should you consult?

  • CA

  • RA

  • CRL

  • OCSP

7. Your organization has implemented a virtual private network (VPN) that allows branch offices to connect to the main office. Recently, you have discovered that the key used on the VPN has been compromised. You need to ensure that the key is not compromised in the future. What should you do?

  • Enable PFS on the main office end of the VPN.

  • Implement IPsec on the main office end of the VPN.

  • Enable PFS on the main office and branch office ends of the VPN.

  • Implement IPsec on the main office and branch office ends of the VPN.

8. Which of the following is a term used to describe the hardware, software, and/or firmware that implements cryptographic logic or cryptographic processes?

  • crypto module

  • crypto processor

  • token

  • CSP

9. Which of the following is an example of a crypto processor?

  • Microsoft CryptoAPI (CAPI)

  • TPM chip

  • token

  • CSP

10. Which of the following is an application and protocol that is used to remotely log in to another computer using a secure tunnel?

  • Microsoft CryptoAPI (CAPI)

  • S/MIME

  • SSH

  • CSP
