Chapter 9

Understanding the Importance of Security Concepts in an Enterprise Environment

This chapter covers the following topics related to Objective 2.1 (Explain the importance of security concepts in an enterprise environment) of the CompTIA Security+ SY0-601 certification exam:

  • Configuration management

    • Diagrams

    • Baseline configuration

    • Standard naming conventions

    • Internet protocol (IP) schema

  • Data sovereignty

  • Data protection

    • Data loss prevention (DLP)

    • Masking

    • Encryption at rest, in transit/motion, and in processing

    • Tokenization

    • Rights management

  • Geographical considerations

  • Response and recovery controls

  • Secure Sockets Layer (SSL)/Transport Layer Security (TLS) inspection

  • Hashing

  • API considerations

  • Site resiliency

    • Hot site

    • Cold site

    • Warm site

  • Deception and disruption

    • Honeypots

    • Honeyfiles

    • Honeynets

    • Fake telemetry

    • DNS sinkhole

This chapter starts with an overview of best practices for system configuration management and then details different approaches for data sovereignty and protection. You learn about different geographical considerations for data protection, as well as response and recovery controls. This chapter also covers the principles of Secure Sockets Layer (SSL)/Transport Layer Security (TLS) inspection, hashing, and considerations to protect APIs. You also learn about the different methods for site resiliency and techniques for deception and disruption (including honeypots, honeyfiles, honeynets, fake telemetry, DNS sinkholes, and others).

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Chapter Review Activities” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 9-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”

Table 9-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section | Questions
Configuration Management | 1–3
Data Sovereignty and Data Protection | 4–7
Site Resiliency | 8–9
Deception and Disruption | 10–12

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of the following is a primary goal of configuration management?

  1. Maintaining computer systems, servers, network infrastructure, and software in a desired, consistent state

  2. Reducing the cost of acquiring computer systems, servers, network infrastructure, and software used for information security

  3. Ensuring that any changes done to the infrastructure do not affect the underlying organization’s IT budget

  4. All of these answers are correct.

2. After a minimum desired state of security is defined, ________ should be taken to assess the current security state of computers, servers, network devices, and the network in general.

  1. network diagrams

  2. IPv4 schemas

  3. baselines

  4. None of these answers are correct.

3. Which of the following is a benefit of standard naming conventions in an IT infrastructure?

  1. Appropriate naming conventions are used to avoid conflicts and to be able to correlate data among disparate systems.

  2. Appropriate naming conventions are used to reduce unnecessary spending of IT infrastructure.

  3. Appropriate naming conventions are used to better create IPv6 network schemas and for data sovereignty.

  4. None of these answers are correct.

4. Which of the following are privacy laws or regulations? (Choose two.)

  1. PCI-DSS

  2. CCPA

  3. GDPR

  4. FedRAMP

5. Which of the following is a type of software or hardware-based data loss prevention solution?

  1. Endpoint DLP systems

  2. Network DLP systems

  3. Storage DLP systems

  4. All of these answers are correct.

6. You were hired to deploy a system to prevent unauthorized use and transmission of confidential information. What should you prioritize to protect and encrypt?

  1. Data at rest

  2. Data in use

  3. Data in motion

  4. All of the answers are correct.

7. Which of the following are used in digital signatures, in file and message authentication, and as a way to protect and verify the integrity of sensitive data?

  1. Data masking

  2. Tokenization

  3. Hashes

  4. Redaction

8. Which term is used when you have a near duplicate of the original site of the organization that can be up and running within minutes?

  1. Hot site

  2. Warm site

  3. Cluster site

  4. Cold site

9. What do you call a redundant site that has tables, chairs, bathrooms, and possibly some technical setup, but a lot of configuration of computers and data restoration is necessary before the site can be properly utilized?

  1. Hot site

  2. Warm site

  3. Cluster site

  4. Cold site

10. Which term is used to categorize a group of computers used to attract and trap potential adversaries to counteract and analyze an attack?

  1. Honeypot

  2. Honeynet

  3. Honeyfile

  4. None of these answers are correct.

11. A security analyst creates a file called passwords.txt to lure attackers to access it. Which term is used for this technique?

  1. Honeynet

  2. Honeypot

  3. Honeyfarm

  4. Honeyfile

12. In a _________ you configure one or more DNS servers to provide false results to attackers and redirect them to areas in the network where you can observe their tactics and techniques.

  1. DNS sinkhole

  2. DNS tunnel

  3. DNS Zone transfer

  4. None of these answers are correct.

Foundation Topics

Configuration Management

Configuration management is an ongoing process created with the goal of maintaining computer systems, servers, network infrastructure, and software in a desired, consistent state. One of the primary goals of configuration management is to ensure that your infrastructure performs as it’s expected to as changes are made over time.

Several key elements are used in the process of configuration management:

  • Diagrams and other documentation: A good configuration management process helps to avoid small or large changes going undocumented. These undocumented changes can lead to poor performance, inconsistencies, or noncompliance and negatively impact business operations and security. When poorly documented changes are made within your infrastructure, they add to instability and downtime. Having good network diagrams and well-written and up-to-date documentation is crucial and allows you to not only troubleshoot problems but also respond quickly to security incidents.

  • Baseline configuration: After a minimum desired state of security is defined, baselines should be taken to assess the current security state of computers, servers, network devices, and the network in general. Baseline configurations should be properly documented and reviewed to include a set of specifications for information systems or configuration items within those systems. Baseline configurations are used by security professionals, along with network and system administrators, as a basis for future deployments, releases, or changes to information systems and applications. A baseline configuration could include information about standard software packages installed on endpoint systems, servers, network infrastructure devices, mobile devices, or applications and infrastructure hosted in the cloud. These baseline configurations should also include current version numbers and patch information on operating systems and applications, and configuration parameters, network topology, and the logical placement of those components within the system architecture. You should always review configuration baselines to make sure that they are still relevant. New baselines should be created as organizational information systems change over time.

  • Standard naming conventions: You should make sure that your organization has appropriate naming conventions for describing IT infrastructure, applications, and users. Appropriate naming conventions are used to avoid conflicts and to be able to correlate data among disparate systems.

  • Internet protocol (IP) schema: Similar to standard naming conventions, having a proper IPv4 or IPv6 schema helps you avoid conflicts within your on-premises network or cloud deployments and correlate data among disparate systems. For example, when using RFC 1918 private IP addresses, you should perform proper planning so that the IP network scheme for user and network services is more organized, easier to set up, and easier to troubleshoot. Identify which subnets are used for wired and wireless users, as well as virtual private network (VPN) users. Wireless access may require additional subnets for guest access, quarantine of nonsecure devices, IoT devices, and so on. In addition, you may have IP address subnets dedicated to voice over IP (VoIP) devices, separate from printers and IoT devices, as the sketch following this list illustrates. Configuration management systems let you consistently define system settings, as well as build and maintain those systems according to those baseline settings.
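
Because IP schema planning lends itself to automation, the following is a minimal sketch (using Python's standard ipaddress module) of how you might carve an RFC 1918 block into documented subnets for different device classes. The address block and the subnet roles are illustrative assumptions, not a prescribed layout.

```python
import ipaddress

# Hypothetical RFC 1918 block reserved for a branch office (assumption).
branch_block = ipaddress.ip_network("10.20.0.0/16")

# Carve the block into /24 subnets and assign illustrative roles.
subnets = branch_block.subnets(new_prefix=24)
roles = ["wired-users", "wireless-users", "guest-wifi", "vpn-users",
         "voip-phones", "printers", "iot-devices", "quarantine"]

ip_plan = {role: next(subnets) for role in roles}

for role, subnet in ip_plan.items():
    # Record each subnet and its usable host count in the IP schema documentation.
    print(f"{role:<15} {subnet}  ({subnet.num_addresses - 2} usable hosts)")
```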

Data Sovereignty and Data Protection

Data sovereignty is the concept that information (data) that has been converted and stored in a digital form is subject to the laws and governance structures of the country in which it is collected or stored. Many laws and regulations govern how organizations must handle their customer and employee data, and one of the main concerns around data sovereignty is privacy. For example, the General Data Protection Regulation (GDPR) is a regulation in the European Union (EU) and the European Economic Area (EEA) focused on data protection and privacy. Another example is the California Consumer Privacy Act (CCPA). These regulations give consumers the right to know what personal information is being collected by companies, governments, and other organizations. Consumers can also access the personal information that is collected and request that it be deleted, as well as know whether their personal information is being shared (and, if so, with whom). In addition, consumers can opt out of the sale of their personal information.

Another concept that is crucial for data protection is data loss prevention (DLP) systems. DLP is a concept that refers to the monitoring of data in processing, data in transit/motion, and data at rest. A DLP system performs content inspection and is designed to prevent unauthorized use of data as well as prevent the leakage of data outside the computer (or network) where it resides. DLP systems can be software- or hardware-based solutions and come in three varieties:

  • Endpoint DLP systems: These systems run on an individual computer and are usually software-based. They monitor data in processing, such as email communications, and can control what information flows between various users. These systems can also be used to inspect the content of USB-based mass-storage devices or block those devices from being accessed altogether by creating rules within the software.

  • Network DLP systems: These software- or hardware-based solutions are often installed on the perimeter of the network. They inspect data that is in transit/motion.

  • Storage DLP systems: These systems are typically installed in data centers or server rooms as software that inspects data at rest.
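
To make the idea of content inspection concrete, the following is a minimal, hypothetical sketch of the kind of pattern matching a DLP engine performs on outbound content. Real DLP products use far more sophisticated fingerprinting, classification, and policy engines; the regular expressions and the blocking behavior here are simplified assumptions.

```python
import re

# Simplified patterns for sensitive data (illustrative only; real DLP
# engines use validated fingerprints, checksums, and contextual analysis).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def inspect_content(text: str) -> list[str]:
    """Return the list of sensitive-data types detected in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

outbound_message = "Please process card 4111 1111 1111 1111 for this order."
findings = inspect_content(outbound_message)
if findings:
    # A real DLP system would block, quarantine, or alert according to policy.
    print(f"Potential data leakage detected: {findings}")
```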

Most cloud providers offer cloud-based DLP solutions to protect against data breaches and misuse of data. These solutions often integrate with software, infrastructure, and platform services and can include any of the systems mentioned previously. Cloud-based DLP is necessary for companies that have increased bring-your-own-device (BYOD) usage, and that store data and operate infrastructure within the cloud.

As with host intrusion detection systems (HIDS) and network intrusion detection systems (NIDS), DLP solutions must be accurate and updated to reduce the number of false positives and false negatives. Most systems alert the security administrator if there is a possibility of data leakage. However, it is up to the administrator to determine whether the threat is real.

Secure Sockets Layer (SSL)/Transport Layer Security (TLS) Inspection

Transport Layer Security Inspection (TLSI) is a security process that allows organizations to decrypt traffic, inspect the decrypted content for threats, and then reencrypt the traffic before it enters or leaves the network. TLSI is also known as “TLS break and inspect.” TLS replaces the Secure Sockets Layer (SSL) protocol. In the past, TLSI was referred to as SSL Inspection (SSLI). Because newer implementations use TLS instead of SSL, this chapter uses the term TLSI to refer to this type of inspection. Using TLSI enhances visibility within some security products (such as IDS and DLP systems at the network edge) but also introduces new threats.

One of the main threats introduced when using TLSI is the potential abuse of the certificate authority (CA) used to re-sign the inspected traffic. Attackers who compromise this CA can use it to issue unauthorized certificates trusted by the TLS clients. Abuse of a trusted CA could allow an attacker to sign malicious code in order to bypass security controls and monitoring capabilities or to deploy malicious services that impersonate legitimate services to endpoints.

Organizations can configure a policy that enforces traffic to be decrypted and inspected only as authorized and ensures that decrypted traffic is contained in an out-of-band, isolated segment of the network. This technique can be used to prevent unauthorized access to the decrypted traffic.

Tip

As a mitigation, you should break and inspect TLS traffic only once within the organization’s network. Redundant TLSI (where traffic is decrypted and inspected more than once) should not be performed. Inspecting multiple times can greatly complicate the ability to diagnose network issues with TLS traffic.

API Considerations

Application programming interfaces (APIs) are used in most modern applications. Organizations should evaluate different API considerations to make sure that APIs and their underlying implementations are secure. If APIs are not configured or managed correctly, they can expose sensitive data to attackers, and several major incidents in the past have compromised the privacy of individuals in exactly this way. For example, several years ago Cambridge Analytica used Facebook's APIs to obtain information about millions of Facebook users. The misuse of similar APIs has sparked many privacy concerns from consumers and lawmakers. Nowadays, APIs are used in mobile apps, games, social networking platforms, dating apps, news websites, e-commerce sites, video and music streaming services, mobile payment systems, and many other implementations. When a third-party organization (such as an app developer or advertiser) obtains access to data through APIs, it may also gain access to very sensitive personal information. Some of these past events also sparked the creation of enhanced privacy laws in the United States, Europe, and worldwide.

Data Masking and Obfuscation

Data masking (otherwise known as data obfuscation) is the act of hiding sensitive information/data with specific characters or other data in order to protect it. For example, data masking (and obfuscation) has been used to protect personally identifiable information (PII) or commercially sensitive data from unauthorized users and attackers.

The following techniques have been used for data masking and obfuscation:

  • Substitution: In this technique you substitute the original data with another authentic-looking value. For instance, you may replace a Social Security number or credit card number with a fake number. Figure 9-1 illustrates a basic example of substitution.

FIGURE 9-1 Data Masking Using Substitution

Figure 9-2 shows another example of masking by revealing only the last four digits of the credit card.

FIGURE 9-2 Additional Data Masking Example

  • Tokenization: This technique is used mostly when protecting data at rest. It is the process of generating a random token value for plaintext data and storing the mapping in a database (often called a token vault). Tokenization is difficult to scale securely because of the performance and size of the underlying database, and tokenized data is difficult to exchange, since using it requires direct access to the token vault. Encryption provides a better way to scale and exchange sensitive data securely. A minimal sketch of both masking and tokenization follows this list.
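
The following is a minimal sketch of both techniques under simplified assumptions: masking reveals only the last four digits of a card number, and tokenization replaces the value with a random token whose mapping is kept in an in-memory dictionary standing in for the token vault.

```python
import secrets

def mask_card_number(card_number: str) -> str:
    """Mask all but the last four digits (simple substitution-style masking)."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

# An in-memory dict stands in for the token vault/database (assumption).
token_vault: dict[str, str] = {}

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token and store the mapping."""
    token = secrets.token_hex(8)
    token_vault[token] = value
    return token

card = "4111 1111 1111 1111"
print(mask_card_number(card))   # ************1111
print(tokenize(card))           # a random 16-character hex token
```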

Encryption at Rest, in Transit/Motion, and in Processing

What do you need to encrypt? Without a doubt, you need to encrypt data, but more specifically three types of data: data in use, data at rest, and data in transit.

Figure 9-3 illustrates the concepts of encrypting data at rest, in transit/motion, and in use/processing.

FIGURE 9-3 Encrypting Data at Rest, in Transit/Motion, and in Use/Processing

Data in use/processing is data that is being actively worked on and undergoing constant change; for example, data being processed by an application while a database or spreadsheet is open, or data residing in computer memory. Data at rest is inactive data that is stored or archived, such as data on disk, in backups, or in cloud storage services. Data in transit (also known as data in motion) is data that is crossing the network between systems.
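
As a simple illustration of encrypting data at rest, the following sketch uses the third-party cryptography package's Fernet recipe (an assumption about tooling; any vetted symmetric encryption library would serve) to encrypt a record before it is written to storage.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate the key once and persist it securely (for example, in a key management system).
key = Fernet.generate_key()
fernet = Fernet(key)

plaintext = b"Customer record: Jane Doe, account 00123"

# Encrypt before writing to disk so the stored data is protected at rest.
ciphertext = fernet.encrypt(plaintext)
with open("record.enc", "wb") as f:
    f.write(ciphertext)

# Decrypt only when the data needs to move back into use/processing.
with open("record.enc", "rb") as f:
    recovered = fernet.decrypt(f.read())
assert recovered == plaintext
```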

Hashing

A hash is a summary of a file or message, often in numeric format. Hashes are used in digital signatures, in file and message authentication, and as a way to protect the integrity of sensitive data—for example, data entered into databases or perhaps entire hard drives. A hash is generated through the use of a hash function to verify the integrity of the file or message, most commonly after transit over a network. A hash function is a mathematical procedure that converts a variable-sized amount of data into a smaller block of data. The hash function is designed to take an arbitrary data block from the file or message, use that as an input, and from that block produce a fixed-length hash value. Basically, the hash is created at the source and is recalculated and compared with the original hash at the destination.

Because the hash is a condensed version of the file/message, or a portion of it, it is also known as a message digest. It provides integrity to data so that a user knows that the message is intact, hasn't been modified during transit, and comes from the source the user expects. A hash falls into the category of a one-way function: it is easy to compute when generated but difficult (or impossible) to reverse. The initial computation of a hash is relatively easy (compared to other cryptographic operations), but the original message cannot feasibly be re-created from the hash. Contrast this concept with encryption methods, which can indeed be reversed. Simple, noncryptographic hashes (such as basic checksums) exist, but the hashes used for security purposes are produced by cryptographic hash algorithms.
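
A minimal sketch of this source-and-destination comparison, using Python's standard hashlib module with SHA-256 (the specific algorithm choice here is illustrative; Chapter 16 covers which algorithms are recommended), follows.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex-encoded SHA-256 message digest of the data."""
    return hashlib.sha256(data).hexdigest()

message = b"Wire $10,000 to account 12345"

# The sender computes the hash at the source...
digest_at_source = sha256_digest(message)

# ...and the receiver recomputes it at the destination and compares.
digest_at_destination = sha256_digest(message)
if digest_at_source == digest_at_destination:
    print("Integrity verified: the message was not modified in transit.")
else:
    print("Integrity check failed: the message was altered.")
```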

Note

In Chapter 16, “Summarizing the Basics of Cryptographic Concepts,” you will learn the details about different hashing algorithms such as MD5 and different versions of SHA. You will also learn which flavors of those algorithms are recommended and which should be avoided (such as MD5).

Rights Management

Digital rights management (DRM) is the name given to a set of access control technologies that are used to control the use of proprietary hardware, software, and copyrighted works. DRM solutions are used to restrict the use, modification, and distribution of copyrighted works, as well as to protect the underlying systems that enforce such policies.

Traditional DRM policies include restrictive licensing agreements and solutions that encrypt expressive material or embed a digital tag designed to control access and reproduction of information. Software companies and content publishers typically enforce their own access policies on content (that is, restrictions on copying or viewing such content or software). Hardware manufacturers have also expanded the usage of DRM to more traditional hardware products.

Digital rights management policies and processes are not universally accepted. Some argue that there is no substantial evidence that DRM helps to completely prevent copyright infringement. The following section covers some of the geographical considerations for data protection and DRM.

Geographical Considerations

Many geographical considerations (depending on where you reside) affect the laws and regulations that have been created to address data privacy and DRM:

  • Data privacy: Earlier in this chapter, you learned about GDPR (a regulation in the European Union [EU] and the European Economic Area [EEA] focused on data protection and privacy). You also learned about the California Consumer Privacy Act (CCPA). Many other laws and regulations in other countries give consumers the right to know what personal information is being collected by companies, governments, and other organizations. Additional examples include the Australian Privacy Principles (APPs), Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), and Japan's Personal Information Protection Act (JPIPA).

  • DRM: Many laws around the world criminalize the circumvention of DRM, communication about such circumvention, and the creation and distribution of tools used for circumvention. Examples of these DRM-related laws are the United States Digital Millennium Copyright Act (US-DMCA) and the European Union's Information Society Directive.

Tip

Different countries might have laws that dictate how personal data is stored and transferred between different geographical locations and countries. In addition, some regulations (such as GDPR) define policies for “third countries” and what personal data can or cannot be transferred between countries. Additional information about GDPR “third countries” definitions and information can be obtained from https://gdpr-info.eu/issues/third-countries/.

Data Breach Response and Recovery Controls

You must have the proper response and recovery controls in place in the unfortunate event of a data breach. Your data breach response and recovery plan must include the assembly of a team of experts within your organization, as well as legal counsel. You also have to identify a data forensics team (in most cases hired from a third-party provider). The forensics team helps determine the source and scope of the breach and collects and analyzes evidence to outline remediation steps.

In the case of a data breach, you should notify law enforcement (when applicable) and also notify the affected businesses and individuals, based on local and federal laws.

Tip

The United States Federal Trade Commission (FTC) provides detailed guidance and recommendations for post-breach response and recovery at www.ftc.gov/tips-advice/business-center/guidance/data-breach-response-guide-business.

Site Resiliency

Within the confidentiality, integrity, availability (CIA) triad, redundant sites fall into the category of availability. In the case of a disaster, a redundant site can act as a safe haven for your data and users. Redundant sites are sort of a gray area between redundancy and a disaster recovery method. If you have one and need to use it, a “disaster” has probably occurred. But the better the redundant site, the less time the organization loses, and the less it seems like a disaster and more like a failure that you have prepared for. Of course, this outcome all depends on the type of redundant site your organization decides on.

Regarding the types of redundant sites, I like to refer to the story of Goldilocks and the three bears’ three bowls of porridge. One was too hot, one too cold, and one just right. Most organizations opt for the warm redundant site as opposed to the hot or cold. Let’s look at these three now.

  • Hot site: This site is a near duplicate of the original site of the organization that can be up and running within minutes (maybe longer). Computers and phones are installed and ready to go, a simulated version of the server room stands ready, and the vast majority of the data is replicated to the site on a regular basis in the event that the original site is not accessible to users for whatever reason. Hot sites are used by companies that would face financial ruin in the case that a disaster makes their main site inaccessible for a few days or even a few hours. This is the only type of redundant site that can facilitate a full recovery.

  • Warm site: This site has computers, phones, and servers, but they might require some configuration before users can start working on them. The warm site will have backups of data that might need to be restored; they will probably be several days old. This type of site is chosen the most often by organizations because it has a good amount of configuration yet remains less expensive than a hot site.

  • Cold site: This site has tables, chairs, bathrooms, and possibly some technical setup—for example, basic phone, data, and electric lines. Otherwise, a lot of configuration of computers and data restoration is necessary before the site can be properly utilized. This type of site is used only if a company can handle the stress of being nonproductive for a week or more.

Although they are redundant, these types of sites are generally known as backup sites because if they are required, a disaster has probably occurred. A good network security administrator tries to plan for, and rely on, redundancy and fault tolerance as much as possible before having to resort to disaster recovery methods.

Deception and Disruption

Honeypots and honeynets attract and trap potential attackers to counteract any attempts at unauthorized access of the network. These solutions isolate the potential attacker in a monitored area and contain dummy resources that look to be of value to the perpetrator. While an attacker is trapped in one of these, the attacker’s methods can be studied and analyzed, and the results of those analyses can be applied to the general security of the functional network.

A honeypot is generally a single computer but could also be a file, a group of files, or an area of unused IP address space, whereas a honeynet is a group of computers, servers, or an area of a network; a honeynet is used when a single honeypot is not sufficient. Either way, the individual computer or group of servers usually does not house any important company information. Various analysis tools are implemented to study the attacker; these tools, along with a centralized group of honeypots (or a honeynet), are known collectively as a honeyfarm.

One example of a honeypot in action is the spam honeypot. Spam email is one of the worst banes known to network administrators; a spam honeypot can lure spammers in, enabling network administrators to study the spammers’ techniques and habits, thus allowing the network admins to better protect their actual email servers, SMTP relays, SMTP proxies, and so on, over the long term. This solution might ultimately keep the spammers away from the real email addresses because the spammers are occupied elsewhere. Some of the information gained by studying spammers is shared with other network admins or organizations’ websites dedicated to reducing spam. A spam honeypot could be as simple as a single email address or as complex as an entire email domain with multiple SMTP servers.

Of course, as with any technology that studies attackers, honeypots also bear risks to the legitimate network. The honeypot or honeynet should be carefully firewalled off from the legitimate network to ensure that the attacker can’t break through.
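
As a concrete (and deliberately minimal) illustration, the following sketch implements a low-interaction honeypot: a TCP listener that offers no real service and simply logs every connection attempt. The port and log format are assumptions for illustration; production honeypots (and honeynets built from them) emulate real services and feed their logs into analysis tooling.

```python
import socket
from datetime import datetime, timezone

# Listen on an otherwise unused port (assumption: 2222, imitating SSH on a decoy host).
HOST, PORT = "0.0.0.0", 2222

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    print(f"Honeypot listening on port {PORT}")
    while True:
        conn, addr = server.accept()
        with conn:
            # Log the attempt; a real honeypot would also capture the attacker's input.
            timestamp = datetime.now(timezone.utc).isoformat()
            print(f"{timestamp} connection attempt from {addr[0]}:{addr[1]}")
```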

Honeyfiles are bait files intended to lure adversaries into accessing them; when a honeyfile is accessed, an alarm is sent to security analysts for detection. For instance, you can create a honeyfile named credentials.txt to lure attackers into opening it. Honeyfiles can be used to learn attackers' tactics, techniques, and behavior without adversely affecting normal operations.

Fake Telemetry

Some organizations have used additional deception techniques, such as fake telemetry, as decoys and breadcrumbs to lure and trick attackers. Similarly, attackers have compromised systems to generate fake telemetry and reporting data in order to fool security monitoring systems and analysts in a security operations center (SOC) and to evade other security controls that may be in place.

Tip

The MITRE ATT&CK framework includes examples of these deception and evasion techniques. Examples include the T0856 technique described at https://collaborate.mitre.org/attackics/index.php/Technique/T0856.

DNS Sinkhole

Another deception and disruption technique is the use of DNS sinkholes, or “blackhole DNS servers.” In a DNS sinkhole, you configure one or more DNS servers to provide false results to attackers and redirect them to areas in the network where you can observe their tactics and techniques. DNS sinkholes have been used to contain different types of malware such as the infamous WannaCry ransomware and to disrupt certain malicious DNS operations in denial-of-service (DoS) and other attacks. For example, these DNS sinkholes have been used to interrupt DNS resolution to malicious command and control (C2) servers and botnet coordination.
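
The following is a minimal sketch of the decision logic a sinkholing resolver applies: names on a blocklist are answered with the sinkhole address instead of their real records, while everything else is resolved normally. The blocklisted domains and the sinkhole address (taken from the 192.0.2.0/24 documentation range) are assumptions for illustration; a production sinkhole would be implemented in the DNS server or security platform itself.

```python
import socket

# Sinkhole address in a monitored network segment (192.0.2.0/24 is a documentation range; assumption).
SINKHOLE_IP = "192.0.2.1"

# Domains believed to be malicious C2/botnet infrastructure (illustrative entries).
BLOCKLIST = {"malicious-c2.example", "botnet-controller.example"}

def sinkhole_resolve(qname: str) -> str:
    """Return the sinkhole address for blocklisted names; otherwise resolve normally."""
    if qname.rstrip(".").lower() in BLOCKLIST:
        return SINKHOLE_IP  # attacker traffic is redirected to the monitored segment
    return socket.gethostbyname(qname)

print(sinkhole_resolve("malicious-c2.example"))  # 192.0.2.1
```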

Tip

Adversaries have also used similar techniques to perform DNS poisoning attacks to redirect systems and users to malicious destinations.

Chapter Review Activities

Use the features in this section to study and review the topics in this chapter.

Review Key Topics

Review the most important topics in the chapter, noted with the Key Topic icon in the outer margin of the page. Table 9-2 lists a reference of these key topics and the page number on which each is found.

Table 9-2 Key Topics for Chapter 9

Key Topic Element | Description | Page Number
List | Listing the key elements used in the process of configuration management | 213
Paragraph | Defining data sovereignty | 214
Paragraph | Defining data loss prevention (DLP) systems | 214
Paragraph | Understanding Transport Layer Security Inspection | 215
Paragraph | Defining data masking and obfuscation | 216
List | Listing the different techniques used for data masking | 216
Figure 9-2 | Additional data masking example | 217
Figure 9-3 | Encrypting data at rest, in transit/motion, and in use/processing | 218
Paragraph | Understanding encryption of data in processing, data at rest, and data in transit/motion | 218
Paragraph | Defining hashing and hashing functions | 218
Paragraph | Defining digital rights management | 219
List | Defining hot site, warm site, and cold site in the context of site resiliency | 221
Paragraph | Defining honeypots and honeynets | 222
Paragraph | Describing honeyfiles | 223
Paragraph | Understanding the use of fake telemetry | 223
Paragraph | Understanding the use of DNS sinkholes | 223

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

configuration management

diagrams

baseline configuration

standard naming conventions

Internet Protocol (IP) schema

data sovereignty

data protection

data loss prevention (DLP)

Transport Layer Security Inspection (TLSI)

SSL Inspection (SSLI)

data masking

tokenization

data in use/processing

data at rest

data in transit

hash

digital rights management (DRM)

geographical considerations

response and recovery controls

hot site

warm site

cold site

honeypot

honeynet

honeyfiles

fake telemetry

DNS sinkholes

Review Questions

Answer the following review questions. Check your answers with the answer key in Appendix A.

1. In the context of site resiliency, a ________ will have backups of data that might need to be restored; they will probably be several days old. This type of site is chosen most often by organizations because it has a good amount of configuration yet remains less expensive than a hot site.

2. What can be used as bait files intended to lure adversaries to access and then send alarms to security analysts for detection?

3. What is the name given to a set of access control technologies that are used to control the use of proprietary hardware, software, and copyrighted works?

4. What term is used when data is actively used and undergoing constant change? For instance, data could be stored in databases or spreadsheets and be processed by running applications.

5. What is the process of generating a random value for plaintext data and storing the mapping in a database?
