Chapter 2
Asset Security

This chapter covers the following topics:

  • Asset Security Concepts: Concepts discussed include data policy, roles and responsibilities, data quality, and data documentation and organization.

  • Identify and Classify Information and Assets: Classification topics discussed include data and asset classification, sensitivity and criticality, private sector classifications, military and government classifications, the information life cycle, databases, and data audit.

  • Information and Asset Ownership: Discusses determining and documenting information and asset ownership.

  • Protect Privacy: Components include owners, data processors, data remanence, and collection limitation.

  • Asset Retention: Retention concepts discussed include media, hardware, and personnel.

  • Data Security Controls: Topics include data security, data states, data access and sharing, data storage and archiving, baselines, scoping and tailoring, standards selections, and data protection methods.

  • Information and Asset Handling Requirements: Topics include marking, labeling, storing, and destruction.

Assets are any entities that are valuable to an organization and include tangible and intangible assets. As mentioned in Chapter 1, “Security and Risk Management,” tangible assets include computers, facilities, supplies, and personnel. Intangible assets include intellectual property, data, and organizational reputation. All assets in an organization must be protected to ensure the organization’s future success. While securing some assets is as easy as locking them in a safe, other assets require more advanced security measures.

The Asset Security domain addresses a broad array of topics, including information and asset identification and classification, information and asset ownership, privacy protection, asset retention, security controls, and information and asset handling. This domain carries an average weight of 10% of the exam, the lowest weight of any domain.

A security professional must be concerned with all aspects of asset security. The most important factor in determining the controls used to ensure asset security is an asset’s value. While some assets in the organization may be considered more important because they have greater value, you should ensure that no assets are forgotten. This chapter covers all the aspects of asset security that you as an IT security professional must understand.

Foundation Topics

Asset Security Concepts

Asset security concepts that you must understand include

  • Data policy

  • Roles and responsibilities

  • Data quality

  • Data documentation and organization

Data Policy

As a security professional, you should ensure that your organization implements a data policy that defines long-term goals for data management. It will most likely be necessary for each individual business unit within the organization to define its own data policy, based on the organization’s overall data policy. Within the data policy, individual roles and responsibilities should be defined to ensure that personnel understand their job tasks as related to the data policy.

Once the overall data policy is created, data management practices and procedures should be documented to ensure that the day-to-day tasks related to data are completed. In addition, the appropriate quality assurance and quality control procedures must be put into place for data quality to be ensured. Data storage and backup procedures must be defined to ensure that data can be restored.

As part of the data policy, any databases implemented within an organization should be carefully designed based on user requirements and the type of data to be stored. All databases should comply with the data policies that are implemented.

Prior to establishing a data policy, you should consider several issues that can affect it. These issues include cost, liability, legal and regulatory requirements, privacy, sensitivity, and ownership.

The cost of any data management mechanism is usually the primary consideration of any organization. Often organizations do not implement a data policy because they think it is easier to allow data to be stored in whatever way each business unit or user desires. However, if an organization does not adopt formal data policies and procedures, data security issues can arise because of the different storage methods used. For example, suppose an organization’s research department decides to implement a Microsoft SQL Server database to store all research data, but the organization does not have a data policy. If the database is implemented without a thorough understanding of the types of data that will be stored and the user needs, the research department may end up with a database that is difficult to navigate and manage.

Liability involves protecting the organization from legal issues. Liability is directly affected by legal and regulatory requirements that apply to the organization. Data issues that can cause liability issues include data misuse, data inaccuracy, data breach, and data loss.

Data privacy is determined as part of data analysis. Data classifications must be based on the value of the data to the organization. Once data classifications are determined, the appropriate security controls should be implemented for each classification level. Privacy laws and regulations must also be considered.

Sensitive data is any data that could adversely affect an organization or individual if it were released to the public or obtained by attackers. When determining sensitivity, you should understand the type of threats that can occur, the vulnerability of the data, and the data type. For example, Social Security numbers are more sensitive than physical address data.

Data ownership is the final issue that you must consider as part of data policy design. This is particularly important if multiple organizations store their data within the same database. One organization may want completely different security controls in place to protect its data. Understanding legal ownership of data is important to ensure that you design a data policy that takes into consideration the different requirements of multiple data owners. While this is most commonly a consideration when multiple organizations are involved, it can also be an issue with different business units in the same organization. For example, human resources department data has different owners and therefore different requirements than research department data.

Roles and Responsibilities

The roles that are usually tied to asset security are data owners and data custodians. Data owners are the personnel who own a given set of data and determine the level of access that any user is given to it. Data custodians are the personnel who manage access to a given set of data. While data owners determine the level of access given, it is the data custodians who configure the appropriate controls to grant or deny a user's access, based on the data owner's approval.

Note

Both of these roles are introduced in the “Security Roles and Responsibilities” section of Chapter 1.

Data Owner

Data owners must understand the way in which the data they are responsible for is used and when that data should be released. They must also determine the data’s value to and impact on the organization. A data owner should understand what it will take to restore or replace data and the cost that will be incurred during this process. Finally, data owners must understand when data is inaccurate or no longer needed by the organization.

In most cases, each business unit within an organization designates a data owner, who must be given the appropriate level of authority for the data for which he or she is responsible. Data owners must understand any intellectual property rights and copyright issues for the data. Data owners are responsible for ensuring that the appropriate agreements are in place if third parties are granted access to the data.

Data Custodian

Data custodians must understand the levels of data access that can be given to users. Data custodians work with data owners to determine the level of access that should be given. This is an excellent example of separation of duties. By having separate roles such as data owners and data custodians, an organization can ensure that no single role is responsible for data access. This separation prevents fraudulent creation of user accounts and assignment of rights.

Data custodians should understand data policies and guidelines. They should document the data structures in the organization and the levels of access given. They are also responsible for data storage, archiving, and backups. Finally, they should be concerned with data quality and should therefore implement the appropriate audit controls.

Centralized data custodians are common. Data owners give the data custodians the permission level that users and groups should be given. Data custodians actually implement the access control lists (ACLs) for the devices, databases, folders, and files.
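The owner-approves, custodian-implements workflow described above can be sketched in code. This is a hypothetical illustration: the names (`AccessLevel`, `Resource`, `grant`) are invented for this example, not taken from any particular product.

```python
# Hypothetical sketch: a data custodian applying an access level that a
# data owner has approved, recording the approval for later audit.
from enum import Enum

class AccessLevel(Enum):
    NONE = 0
    READ = 1
    WRITE = 2
    FULL_CONTROL = 3

class Resource:
    """A device, database, folder, or file whose ACL the custodian maintains."""
    def __init__(self, name):
        self.name = name
        self.acl = {}  # principal -> AccessLevel

    def grant(self, principal, level, approved_by):
        # Recording who approved the grant (the data owner) preserves
        # separation of duties: the custodian configures, but cannot
        # legitimately grant access without an owner's approval on record.
        self.acl[principal] = level
        return f"{principal}: {level.name} on {self.name} (approved by {approved_by})"

payroll = Resource("payroll-db")
print(payroll.grant("hr_clerks", AccessLevel.READ, approved_by="hr_data_owner"))
```

A centralized custodian team would apply the same pattern across all resources, with each entry traceable back to an owner's approval.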

Data Quality

Data quality is defined as data’s fitness for use. Data quality must be maintained throughout the data life cycle, including during data capture, data modification, data storage, data distribution, data usage, and data archiving. Security professionals must ensure that their organization adopts the appropriate quality control and quality assurance measures so that data quality does not suffer. Data quality is most often maintained by ensuring data integrity, which protects data from unintentional, unauthorized, or accidental changes. With data integrity, data is known to be good, and information can be trusted as being complete, consistent, and accurate. System integrity ensures that a system will work as intended.

Security professionals should work to document data standards, processes, and procedures to monitor and control data quality. In addition, internal processes should be designed to periodically assess data quality. When data is stored in databases, quality control and assurance are easier to achieve using the internal data controls in the database. For example, you can configure a number field to allow only the input of specific currency amounts. Doing so ensures that only values that use two decimal places can be entered into the data field. This is an example of input validation.
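The currency-field check described above can be expressed as a minimal input validation routine. This is a sketch, not a database feature: the function name is illustrative, and a real database would enforce the constraint declaratively in the field definition.

```python
# A minimal sketch of currency-field input validation: accept only
# non-negative amounts with at most two decimal places.
from decimal import Decimal, InvalidOperation

def valid_currency(text):
    """Return True if text is a currency amount with <= 2 decimal places."""
    try:
        amount = Decimal(text)
    except InvalidOperation:
        return False  # non-numeric input is rejected outright
    # An exponent of -2 or greater means at most two digits after the point.
    return amount >= 0 and amount.as_tuple().exponent >= -2

assert valid_currency("19.99")
assert not valid_currency("19.999")   # three decimal places rejected
assert not valid_currency("abc")      # non-numeric rejected
```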

Data contamination occurs when data errors are introduced. Data errors can be reduced through implementation of the appropriate quality control and assurance mechanisms. Data verification, an important part of the process, evaluates how complete and correct the data is and whether it complies with standards. Data verification can be carried out by personnel who have the responsibility of entering the data. Data validation evaluates data after data verification has occurred and tests data to ensure that data quality standards have been met. Data validation must be carried out by personnel who have the most familiarity with the data.

Organizations should develop procedures and processes that keep two key data issues in the forefront: error prevention and correction. Error prevention is provided at data entry, while error correction usually occurs during data verification and validation.

Data Documentation and Organization

Data documentation ensures that data is understood at its most basic level and can be properly organized into data sets. Data sets ensure that data is arranged and stored in a relational way so that data can be used for multiple purposes. Data sets should be given unique, descriptive names that indicate their contents.

By documenting the data and organizing data sets, organizations can also ensure that duplicate data is not retained in multiple locations. For example, the sales department may capture all demographic information for all customers. However, the shipping department may also need access to this same demographic information to ensure that products are shipped to the correct address. In addition, the accounts receivable department will need access to the customer demographic information for billing purposes. There is no need for each business unit to have separate data sets for this information. Identifying the customer demographic data set as being needed by multiple business units prevents duplication of efforts across business units.

Within each data set, documentation must be created for each type of data. In the customer demographic data set example, customer name, address, and phone number are all collected. For each of the data types, the individual parameters for each data type must be created. While an address may allow a mixture of numerals and characters, a phone number should allow only numerals. In addition, each data type may have a maximum length. Finally, it is important to document which data is required—meaning that it must be collected and entered. For example, an organization may decide that fax numbers are not required but phone numbers are required. Remember that each of these decisions is best made by the personnel working most closely with the data.
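The per-field documentation just described (allowed characters, maximum length, required versus optional) can be captured as a machine-checkable rule set. The field names and limits below are illustrative assumptions based on the customer demographic example, not a prescribed schema.

```python
# Hypothetical field rules for the customer demographic data set:
# allowed-character pattern, maximum length, and required flag per field.
import re

FIELD_RULES = {
    # field: (pattern, max_length, required)
    "name":  (re.compile(r"^[A-Za-z .'-]+$"), 100, True),
    "phone": (re.compile(r"^[0-9]+$"),         15, True),   # numerals only
    "fax":   (re.compile(r"^[0-9]+$"),         15, False),  # not required
}

def check_record(record):
    """Return a list of rule violations for one data record."""
    errors = []
    for field, (pattern, max_len, required) in FIELD_RULES.items():
        value = record.get(field, "")
        if not value:
            if required:
                errors.append(f"{field}: required")
            continue
        if len(value) > max_len:
            errors.append(f"{field}: exceeds {max_len} characters")
        elif not pattern.match(value):
            errors.append(f"{field}: invalid characters")
    return errors

assert check_record({"name": "Ada Lovelace", "phone": "5551234567"}) == []
assert check_record({"name": "Ada", "phone": "555-1234"}) == ["phone: invalid characters"]
```

Because the personnel closest to the data make these decisions, a rule set like this is best treated as documentation they own, with the code merely enforcing it.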

Once all the documentation has occurred, the data organization must be mapped out. This organization will include all interrelationships between the data sets. It should also include information on which business units will need access to data sets or subsets of a data set.

Note

Big data is a term for data sets so large or complex that they cannot be analyzed by traditional data processing applications. Specialized applications have been designed to help organizations with their big data. Big data challenges include data analysis, data capture, data search, data sharing, data storage, and data privacy.

Identify and Classify Information and Assets

Security professionals should ensure that the organizations they work for properly identify and classify all organizational information and assets. The first step in this process is to identify all information and assets the organization owns and uses. To perform information and asset identification, security professionals should work with the representatives from each department or functional area. Once the information and assets are identified, security professionals should perform data and asset classification and document sensitivity and criticality of data.

Security professionals must understand private sector classifications, military and government classifications, the information life cycle, databases, and data audit.

Data and Asset Classification

Data and assets should be classified based on their value to the organization and their sensitivity to disclosure. Assigning a value to data and assets allows an organization to determine the resources that should be used to protect them. Resources that are used to protect data include personnel resources, monetary resources, access control resources, and so on. Classifying data and assets allows you to apply different protective measures. Data classification is critical to all systems to protect the confidentiality, integrity, and availability (CIA) of data.

After data is classified, it can be segmented based on the level of protection it needs. The classification levels ensure that data is handled and protected in the most cost-effective manner possible. The assets can then be configured to ensure that data is isolated or protected based on these classification levels. An organization should determine the classification levels it uses based on its own needs. A number of private sector classifications and military and government information classifications are commonly used.

Note

The common private sector classifications and military and government classifications are discussed later in this section.

The information life cycle, covered in more detail later in this chapter, should also be based on the classification of the data. Organizations are required to retain certain information, particularly financial data, based on local, state, or federal laws and regulations.

Sensitivity and Criticality

Sensitivity is a measure of how freely data can be handled. Some data requires special care and handling, especially when inappropriate handling could result in penalties, identity theft, financial loss, invasion of privacy, or unauthorized access by an individual or many individuals. Some data is also subject to regulation by state or federal laws and requires notification in the event of a disclosure.

Data is assigned a level of sensitivity based on who should have access to it and how much harm would be done if it were disclosed. This assignment of sensitivity is called data classification.

Criticality is a measure of the importance of the data. Data that is considered sensitive may not necessarily be considered critical. Assigning a level of criticality to a particular data set requires considering the answers to a few questions:

  • Will you be able to recover the data in case of disaster?

  • How long will it take to recover the data?

  • What is the effect of this downtime, including loss of public standing?

Data is considered essential when it is critical to the organization’s business. When essential data is not available, even for a brief period of time, or when its integrity is questionable, the organization is unable to function. Data is considered required when it is important to the organization but organizational operations would continue for a predetermined period of time even if the data were not available. Data is nonessential if the organization is able to operate without it during extended periods of time.
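The essential/required/nonessential distinction above can be sketched as a simple classification by tolerable downtime. The threshold values here are illustrative assumptions only; each organization sets its own predetermined periods.

```python
# Hedged sketch: deriving a criticality category from how long the
# organization can operate without the data. Thresholds are illustrative.
def criticality(tolerable_downtime_hours):
    if tolerable_downtime_hours < 1:
        return "essential"     # organization cannot function without it
    if tolerable_downtime_hours <= 72:
        return "required"      # operations continue for a predetermined period
    return "nonessential"      # organization can operate for extended periods

assert criticality(0) == "essential"
assert criticality(24) == "required"
assert criticality(720) == "nonessential"
```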

Once the sensitivity and criticality of data are understood and documented, the organization should then work to create a data classification system. Most organizations will either use a private sector classification system or a military and government classification system.

PII

Personally identifiable information (PII) was defined and explained in Chapter 1. PII is considered information that should be classified and protected. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-122 gives guidelines on protecting the confidentiality of PII.


According to SP 800-122, organizations should implement the following recommendations to effectively protect PII:

  • Organizations should identify all PII residing in their environment.

  • Organizations should minimize the use, collection, and retention of PII to what is strictly necessary to accomplish their business purpose and mission.

  • Organizations should categorize their PII by the PII confidentiality impact level.

  • Organizations should apply the appropriate safeguards for PII based on the PII confidentiality impact level.

  • Organizations should develop an incident response plan to handle breaches involving PII.

  • Organizations should encourage close coordination among their chief privacy officers, senior agency officials for privacy, chief information officers, chief information security officers, and legal counsel when addressing issues related to PII.

SP 800-122 defines PII as “any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as name, social security number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.” To distinguish an individual is to identify an individual. To trace an individual is to process sufficient information to make a determination about a specific aspect of an individual’s activities or status. Linked information is information about or related to an individual that is logically associated with other information about the individual. In contrast, linkable information is information about or related to an individual for which there is a possibility of logical association with other information about the individual.

All PII should be assigned confidentiality impact levels based on the FIPS 199 designations. Those designations are

  • LOW if the loss of confidentiality, integrity, or availability could be expected to have a limited adverse effect on organizational operations, organizational assets, or individuals.

  • MODERATE if the loss of confidentiality, integrity, or availability could be expected to have a serious adverse effect on organizational operations, organizational assets, or individuals.

  • HIGH if the loss of confidentiality, integrity, or availability could be expected to have a severe or catastrophic adverse effect on organizational operations, organizational assets, or individuals.

Determining the impact from a loss of confidentiality of PII should take into account relevant factors. Several important factors that organizations should consider are as follows:

  • Identifiability: How easily PII can be used to identify specific individuals

  • Quantity of PII: How many individuals are identified in the information

  • Data field sensitivity: The sensitivity of each individual PII data field, as well as the sensitivity of the PII data fields together

  • Context of use: The purpose for which PII is collected, stored, used, processed, disclosed, or disseminated

  • Obligation to protect confidentiality: The laws, regulations, standards, and operating practices that dictate an organization’s responsibility for protecting PII

  • Access to and location of PII: The nature of authorized access to PII
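One way to operationalize these factors is to rate each one and combine the ratings into an overall confidentiality impact level. SP 800-122 does not prescribe a formula, so the "take the highest rating" rule below is a conservative assumption for illustration only.

```python
# Illustrative sketch only: combining per-factor ratings into a PII
# confidentiality impact level by taking the highest rating present.
LEVELS = ["LOW", "MODERATE", "HIGH"]  # FIPS 199 designations, low to high

def pii_impact_level(factor_ratings):
    """factor_ratings: dict mapping factor name to a FIPS 199 level string."""
    return max(factor_ratings.values(), key=LEVELS.index)

ratings = {
    "identifiability": "HIGH",          # e.g., SSNs identify individuals directly
    "quantity_of_pii": "MODERATE",
    "data_field_sensitivity": "HIGH",
    "context_of_use": "LOW",
}
assert pii_impact_level(ratings) == "HIGH"
```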

PII should be protected through a combination of measures, including operational safeguards, privacy-specific safeguards, and security controls. Operational safeguards should include policy and procedure creation and awareness, training, and education programs. Privacy-specific safeguards help organizations collect, maintain, use, and disseminate data in ways that protect the confidentiality of the data and include minimizing the use, collection, and retention of PII; conducting privacy impact assessments; de-identifying information; and anonymizing information. Security controls include separation of duties, least privilege, auditing, identification and authorization, and others from NIST SP 800-53.

Note

NIST SP 800-53 is covered in more detail in Chapter 1.

Organizations that collect, use, and retain PII should use NIST SP 800-122 to help guide the organization’s efforts to protect the confidentiality of PII.

PHI

Protected health information (PHI), also referred to as electronic protected health information (EPHI or ePHI), is any individually identifiable health information. NIST SP 800-66 provides guidelines for implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule. The Security Rule applies to the following covered entities:

  • Covered healthcare providers: Any provider of medical or other health services, or supplies, who transmits any health information in electronic form in connection with a transaction for which HHS (U.S. Department of Health and Human Services) has adopted a standard.

  • Health plans: Any individual or group plan that provides or pays the cost of medical care (e.g., a health insurance issuer and the Medicare and Medicaid programs).

  • Healthcare clearinghouses: A public or private entity that processes another entity’s healthcare transactions from a standard format to a nonstandard format, or vice versa.

  • Medicare prescription drug card sponsors: A nongovernmental entity that offers an endorsed discount drug program under the Medicare Modernization Act.

Each covered entity must ensure the confidentiality, integrity, and availability of PHI that it creates, receives, maintains, or transmits; protect against any reasonably anticipated threats and hazards to the security or integrity of EPHI; and protect against reasonably anticipated uses or disclosures of such information that are not permitted by the Privacy Rule.

The Security Rule is separated into six main sections as follows:

  • Security Standards General Rules: Includes the general requirements all covered entities must meet; establishes flexibility of approach; identifies standards and implementation specifications (both required and addressable); outlines decisions a covered entity must make regarding addressable implementation specifications; and requires maintenance of security measures to continue reasonable and appropriate protection of PHI.

  • Administrative Safeguards: Defined in the Security Rule as the “administrative actions, and policies and procedures, to manage the selection, development, implementation, and maintenance of security measures to protect electronic protected health information and to manage the conduct of the covered entity’s workforce in relation to the protection of that information.”

  • Physical Safeguards: Defined as the “physical measures, policies, and procedures to protect a covered entity’s electronic information systems and related buildings and equipment, from natural and environmental hazards, and unauthorized intrusion.”

  • Technical Safeguards: Defined as “the technology and the policy and procedures for its use that protect electronic protected health information and control access to it.”

  • Organizational Requirements: Includes standards for business associate contracts and other arrangements, including memoranda of understanding between a covered entity and a business associate when both entities are government organizations; and requirements for group health plans.

  • Policies and Procedures and Documentation Requirements: Requires implementation of reasonable and appropriate policies and procedures to comply with the standards, implementation specifications and other requirements of the Security Rule; maintenance of written (which may be electronic) documentation and/or records that includes policies, procedures, actions, activities, or assessments required by the Security Rule; and retention, availability, and update requirements related to the documentation.

NIST SP 800-66 includes a mapping between the NIST Risk Management Framework (RMF) and the Security Rule. It also includes key activities that should be carried out for each of the six main sections of the Security Rule listed above. Organizations that collect, use, and retain PHI should use NIST SP 800-66 to help guide their efforts to provide confidentiality, integrity, and availability for PHI.

Proprietary Data

Proprietary data is defined as internally generated data or documents that contain technical or other types of information controlled by an organization to safeguard its competitive edge. Proprietary data may be protected under copyright, patent, or trade secret laws. While there are no standards to govern the protection of proprietary data, organizations must ensure that the confidentiality, integrity, and availability of proprietary data are protected. Because of this, many organizations protect proprietary data with the same types of controls that are used for PII and PHI.

Security professionals should ensure that proprietary data is identified and properly categorized to ensure that the appropriate controls are put into place.

Private Sector Classifications


Organizations in the private sector can classify data using four main classification levels, listed from highest sensitivity level to lowest:

  1. Confidential

  2. Private

  3. Sensitive

  4. Public

Note

It is up to each organization to determine the number and type of classifications. Other options include “protected” to indicate legally protected data, and “proprietary” to indicate company-owned data (in a legal sense).

Data that is confidential includes trade secrets, intellectual property, application programming code, and other data that could seriously affect the organization if unauthorized disclosure occurred. Data at this level would only be available to personnel in the organization whose work relates to the data’s subject. Access to confidential data usually requires authorization for each access. In most cases, the only ways for external entities to have authorized access to confidential data are as follows:

  • After signing a confidentiality agreement

  • When complying with a court order

  • As part of a government project or contract procurement agreement

Data that is private includes any information related to personnel, including human resources records, medical records, and salary information, that is only used within the organization. Data that is sensitive includes organizational financial information and requires extra measures to ensure its CIA and accuracy. Public data is data that would not cause a negative impact on the organization.
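A classification scheme like the one above is typically paired with a handling matrix that maps each level to its required controls. The specific controls below are illustrative assumptions; each organization defines its own matrix.

```python
# Sketch of mapping the four private sector classification levels
# (highest to lowest sensitivity) to illustrative handling controls.
HANDLING = {
    "confidential": {"encrypt_at_rest": True,  "nda_required": True,  "public_release": False},
    "private":      {"encrypt_at_rest": True,  "nda_required": False, "public_release": False},
    "sensitive":    {"encrypt_at_rest": True,  "nda_required": False, "public_release": False},
    "public":       {"encrypt_at_rest": False, "nda_required": False, "public_release": True},
}

def controls_for(label):
    """Look up handling controls for a classification label, case-insensitively."""
    return HANDLING[label.lower()]

assert controls_for("Confidential")["nda_required"] is True
assert controls_for("public")["public_release"] is True
```

Capturing the matrix in one place keeps the custodian's configuration work consistent with the levels the organization has defined.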

Military and Government Classifications


Military and governmental entities usually classify data using five main classification levels, listed from highest sensitivity level to lowest:

  1. Top Secret: Disclosure would cause exceptionally grave danger to national security.

  2. Secret: Disclosure would cause serious damage to national security.

  3. Confidential: Disclosure would cause damage to national security.

  4. Sensitive but unclassified: Disclosure might harm national security.

  5. Unclassified: Any information that can generally be distributed to the public without any threat to national interest.

U.S. federal agencies use the Sensitive but Unclassified (SBU) designation when information is not classified but still needs to be protected and requires strict controls over its distribution. There are over 100 different labels for SBU, including:

  • For official use only

  • Limited official use

  • Sensitive security information

  • Critical infrastructure information

Executive Order 13556 created a standard designation, Controlled Unclassified Information (CUI), to replace the various SBU labels. Implementation is in progress.

Data that is top secret includes weapon blueprints, technology specifications, spy satellite information, and other military information that could gravely damage national security if disclosed. Data that is secret includes deployment plans, missile placement, and other information that could seriously damage national security if disclosed. Data that is confidential includes strength of forces in the United States and overseas, technical information used for training and maintenance, and other information that could damage national security if unauthorized disclosure occurred. Data that is sensitive but unclassified includes medical or other personal data that might not cause serious damage to national security if disclosed but could cause citizens to question the reputation of the government. Military and government information that does not fall into any of the other four categories is considered unclassified and usually must be released to the public when requested under the Freedom of Information Act.

Note

Enacted on July 4, 1966, and taking effect one year later, the Freedom of Information Act (FOIA) provides a powerful tool to advocates for access to information. Under the FOIA, anyone may request and receive any records from federal agencies unless the documents are officially declared exempt based upon specific categories, such as top secret, secret, and confidential. To learn more about how to explore for FOIA data or make a FOIA request, visit https://www.foia.gov.

Information Life Cycle

Organizations should ensure that any information they collect and store is managed throughout the life cycle of that information. If no information life cycle is followed, the storage required for the information will grow over time until more storage resources are needed. Security professionals must therefore ensure that data owners and custodians understand the information life cycle.

For most organizations, the five phases of the information life cycle are as follows:

  1. Create/receive

  2. Distribute

  3. Use

  4. Maintain

  5. Dispose/store

During the create/receive phase, data is either created by organizational personnel or received by the organization from an outside source, such as a data entry portal. If the data is created by organizational personnel, it is usually placed in the location from which it will be distributed, used, and maintained. However, if the data is received via some other mechanism, it may be necessary to copy or import the data to an appropriate location. In this case, the data is not available for distribution, usage, and maintenance until after the copy or import.

After the create/receive phase, organizational personnel must ensure that the data is properly distributed. In most cases, this involves placing the data in the appropriate location and possibly configuring the access permissions as defined by the data owner. Keep in mind, however, that in many cases the storage location and appropriate user and group permissions may already be configured. In such a case, it is just a matter of ensuring that the data is in the correct distribution location. Distribution locations include databases, shared folders, network-attached storage (NAS), storage area networks (SANs), and data libraries.

Once data has been distributed, personnel within the organization can use the data in their day-to-day operations. While some personnel will have only read access to data, others may have write or full control permissions. Remember that the permissions allowed or denied are designated by the data owner but configured by the data custodian.

Now that data is being used in day-to-day operations, data maintenance is key to ensuring that data remains accessible and secure. Maintenance includes auditing, performing backups, monitoring performance, and managing data.

Once data has reached the end of the life cycle, you should either properly dispose of it or ensure that it is securely stored. Some organizations must maintain data records for a certain number of years per local, state, or federal laws or regulations. This type of data should be archived for the required period. In addition, any data that is part of litigation should be retained as requested by the court of law, and organizations should follow appropriate chain of custody and evidence documentation processes. Data archival and destruction procedures should be clearly defined by the organization.

All organizations need procedures in place for the retention and destruction of data. Data retention and destruction must follow all local, state, and government regulations and laws. Documenting proper procedures ensures that information is maintained for the required time to prevent financial fines and possible incarceration of high-level organizational officers. These procedures must include both retention period and destruction process.

Figure 2-1 shows the information life cycle.

A flow diagram depicts the information life cycle. The steps of the life cycle are as follows: Create/Receive, Distribute, Use, Maintain, and Dispose/Store.
Figure 2-1 Information Life Cycle

Databases

Databases have become the technology of choice for storing, organizing, and analyzing large sets of data. Users generally access a database through a client interface. As the need arises to provide access to entities outside the enterprise, the opportunities for misuse increase. This section covers the concepts necessary to discuss database security, as well as the security concerns surrounding database management and maintenance.

DBMS Architecture and Models

All databases contain data; the main difference among database models is how that information is stored and organized. The model describes the relationships among the data elements, how the data is accessed, how integrity is ensured, and which operations are acceptable. The five models or architectures we discuss are

  • Relational

  • Hierarchical

  • Network

  • Object-oriented

  • Object-relational

The relational model uses attributes (columns) and tuples (rows) to organize the data in two-dimensional tables. Each cell in the table is the intersection of an attribute and a tuple and holds a single value; each tuple constitutes a record.

When working with relational database management systems (RDBMSs), you should understand the following terms:

  • Relation: A fundamental entity in a relational database in the form of a table.

  • Tuple: A row in a table.

  • Attribute: A column in a table.

  • Schema: Description of a relational database.

  • Record: A collection of related data items.

  • Base relation: In SQL, a relation that is actually existent in the database.

  • View: The set of data available to a given user. Security is enforced through the use of views.

  • Degree: The number of columns in a table.

  • Cardinality: The number of rows in a relation.

  • Domain: The set of allowable values that an attribute can take.

  • Primary key: A column or combination of columns whose values uniquely identify each row.

  • Foreign key: An attribute in one relation that has values matching the primary key in another relation. Matches between the foreign key and the primary key are important because they represent references from one relation to another and establish the connection among these relations.

  • Candidate key: An attribute or set of attributes that uniquely identifies each tuple in a relation and could therefore serve as the relation's primary key.

  • Referential integrity: Requires that for any foreign key attribute, the referenced relation must have a tuple with the same value for its primary key.
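
The key and integrity concepts above can be sketched with Python's built-in sqlite3 module; the table and column names here are illustrative, not drawn from the text:

```python
import sqlite3

# In-memory sketch of primary/foreign keys and referential integrity.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

conn.execute("""CREATE TABLE department (
    dept_id INTEGER PRIMARY KEY,   -- primary key: uniquely identifies each tuple
    name    TEXT NOT NULL)""")
conn.execute("""CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    dept_id INTEGER REFERENCES department(dept_id))""")  # foreign key

conn.execute("INSERT INTO department VALUES (10, 'Engineering')")
conn.execute("INSERT INTO employee VALUES (1, 'Ada', 10)")   # matching key: allowed

# Referential integrity: a foreign key value with no matching primary key
# in the referenced relation is rejected by the database itself.
try:
    conn.execute("INSERT INTO employee VALUES (2, 'Bob', 99)")
    violation_caught = False
except sqlite3.IntegrityError:
    violation_caught = True
```

With enforcement enabled, the database guarantees referential integrity rather than relying on application code to check it.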

An important element of database design that ensures that the attributes in a table depend only on the primary key is a process called normalization. Normalization includes

  • Eliminating repeating groups by putting them into separate tables

  • Eliminating redundant data (occurring in more than one table)

  • Eliminating attributes in a table that are not dependent on the primary key of that table
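
As a minimal illustration of the first step, the sketch below moves a repeating group of phone numbers out of the customer row into its own table, so every attribute depends only on its table's primary key (the schema names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Unnormalized design would embed phone1, phone2, ... in the customer row.
# Normalized design: the repeating group gets its own table keyed by customer_id.
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL);
CREATE TABLE customer_phone (
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    phone       TEXT NOT NULL,
    PRIMARY KEY (customer_id, phone));
INSERT INTO customer VALUES (1, 'Acme');
INSERT INTO customer_phone VALUES (1, '555-0100'), (1, '555-0101');
""")
# A customer can now have any number of phones with no empty or repeated columns.
phones = [row[0] for row in conn.execute(
    "SELECT phone FROM customer_phone WHERE customer_id = 1 ORDER BY phone")]
```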

In the hierarchical model, data is organized into a hierarchy. An object can have one child (an object that is a subset of the parent object), multiple children, or no children. To navigate this hierarchy, you must know the branch in which the object is located. Examples of this model include the Windows registry and a Lightweight Directory Access Protocol (LDAP) directory.

In the network model, as in the hierarchical model, data is organized into a hierarchy but, unlike the hierarchical model, objects can have multiple parents. Because of this, knowing which branch to find a data element in is not necessary because there will typically be multiple paths to it.

The object-oriented model has the ability to handle a variety of data types and is more dynamic than a relational database. Object-oriented database (OODB) systems are useful in storing and manipulating complex data, such as images and graphics. Consequently, complex applications involving multimedia, computer-aided design (CAD), video, graphics, and expert systems are more suited to the object-oriented model. It also offers ease of code reuse and analysis and reduced maintenance.

Objects can be created as needed, and the data and the procedures (or methods) go with the object when it is requested. A method is the code defining the actions that the object performs in response to a message. This model uses some of the same concepts of a relational model. In the object-oriented model, a relation, column, and tuple (relational terms) are referred to as class, attribute, and instance objects.

The object-relational model is the marriage of object-oriented and relational technologies, combining the attributes of both. This is a relational database with a software interface that is written in an object-oriented programming (OOP) language. The logic and procedures are derived from the front-end software rather than the database. This means each front-end application can have its own specific procedures.

Database Interface Languages

Access to information in a database is facilitated by an application that allows you to obtain and interact with data. These interfaces can be written in several different languages. This section discusses some of the more important data programming languages:

  • ODBC: Open Database Connectivity (ODBC) is an application programming interface (API) that allows communication with databases either locally or remotely. An API on the client sends requests to the ODBC API. The ODBC API locates the database, and a specific driver converts the request into a database command that the specific database will understand.

  • JDBC: As one might expect from the name, Java Database Connectivity (JDBC) makes it possible for Java applications to communicate with a database. A Java API is what allows Java programs to execute SQL statements. It is database agnostic and allows communication with various types of databases. It provides the same functionality as ODBC.

  • XML: Data can now be created in Extensible Markup Language (XML) format, but the XML:DB API allows XML applications to interact with more traditional databases, such as relational databases. It requires that the database have a database-specific driver that encapsulates all the database access logic.

  • OLE DB: Object Linking and Embedding Database (OLE DB) is a replacement for ODBC, extending its functionality to non-relational databases. Although it is COM-based and limited to Microsoft Windows–based tools, it provides applications with uniform access to a variety of data sources, including service through ActiveX objects.
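
Python's DB-API fills a role analogous to ODBC and JDBC: the application issues SQL through a uniform interface while a database-specific driver module translates the calls. A brief sketch using the built-in sqlite3 driver (the table and values are hypothetical):

```python
import sqlite3  # driver module; other databases supply their own DB-API drivers

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE asset (id INTEGER PRIMARY KEY, label TEXT)")
# Parameterized queries let the API bind values instead of string
# concatenation, which also defends against SQL injection.
conn.execute("INSERT INTO asset (id, label) VALUES (?, ?)", (1, "confidential"))
row = conn.execute("SELECT label FROM asset WHERE id = ?", (1,)).fetchone()
```

Swapping the driver module (for example, to a PostgreSQL or MySQL driver) leaves the application code largely unchanged, which is the same portability argument made for ODBC and JDBC.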

Data Warehouses and Data Mining

Data warehousing is the process of combining data from multiple databases or data sources in a central location called a warehouse. The warehouse is used to carry out analysis. The data is not simply combined but is processed and presented in a more useful and understandable way. Data warehouses require stringent security because the data is not dispersed but located in a central location.

Data mining is the process of using special tools to organize the data into a format that makes it easier to make business decisions based on the content. It analyzes large data sets in a data warehouse to find non-obvious patterns. These tools locate associations between data and correlate these associations into metadata. It allows for more sophisticated inferences (sometimes called business intelligence [BI]) to be made about the data. Three measures should be taken when using data warehousing applications:

  • Control metadata, preventing it from being used interactively.

  • Monitor the data purging plan.

  • Reconcile data moved between the operations environment and data warehouse.

Database Maintenance

Database administrators must regularly conduct database maintenance. Databases must be backed up regularly. All security patches and updates for the hardware and software, including the database software, must be kept up to date. Hardware and software upgrades are necessary as organizational needs increase and as technology advances.

Security professionals should work with database administrators to ensure that threat analysis for databases is performed at least annually. They should also work to develop the appropriate mitigations and controls to protect against the identified threats.

Database Threats

Security threats to databases usually revolve around unwanted access to data. Two security threats that exist in managing databases involve the processes of aggregation and inference. Aggregation is the act of combining information from various sources. This becomes a security issue when a user does not have access to a collection of data objects as a whole but does have access to them individually, or at least to some of them, and is able to piece together information to which he should not have access. The process of piecing the information together is called inference. Two types of access measures can be put in place to help prevent access to inferable information:

  • Content-dependent access control bases access on the sensitivity of the data. For example, a department manager might have access to the salaries of the employees in his/her department but not to the salaries of employees in other departments. The cost of this measure is an increased processing overhead.

  • Context-dependent access control bases the access to data on multiple factors to help prevent inference. Access control can be a function of factors such as location, time of day, and previous access history.
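
A context-dependent check might be sketched as follows; the business-hours window and view limit are illustrative values only, not prescribed thresholds:

```python
from datetime import time

# Context-dependent access control: the decision considers factors beyond the
# data itself -- here, time of day and how many records the user has already
# viewed (previous access history), to frustrate inference by aggregation.
def allow_access(now: time, records_already_viewed: int,
                 business_start: time = time(8, 0),
                 business_end: time = time(18, 0),
                 view_limit: int = 100) -> bool:
    if not (business_start <= now <= business_end):
        return False                      # outside business hours
    if records_already_viewed >= view_limit:
        return False                      # history suggests aggregation
    return True

in_hours = allow_access(time(10, 30), records_already_viewed=5)
after_hours = allow_access(time(23, 0), records_already_viewed=5)
```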

Database Views

Access to the information in a database is usually controlled through the use of database views. A view is the given set of data that a user or group of users can see when they access the database. Before a user is able to use a view, she must have permission on both the view and all dependent objects. Views enforce the concept of least privilege.
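
A view that supports least privilege can be as simple as projecting away a sensitive column; this sketch uses sqlite3 with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (
    emp_id INTEGER PRIMARY KEY,
    name   TEXT,
    salary REAL);          -- sensitive attribute
INSERT INTO employee VALUES (1, 'Ada', 120000), (2, 'Bob', 95000);
-- The view exposes only non-sensitive columns; users granted access to the
-- view, but not the base table, never see the salary attribute.
CREATE VIEW employee_directory AS
    SELECT emp_id, name FROM employee;
""")
cols = [d[0] for d in
        conn.execute("SELECT * FROM employee_directory").description]
```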

Database Locks

A database lock is applied when one user is accessing a record so that another user cannot access, and in particular edit, the same record until the first user is finished. Locking not only provides exclusivity to writes but also controls reading of unfinished modifications or uncommitted data.

Polyinstantiation

Polyinstantiation is a process used to prevent data inference violations like the database threats previously covered. It does this by enabling a relation to contain multiple tuples with the same primary keys, with each instance distinguished by a security level. It prevents low-level database users from inferring the existence of higher-level data.
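
One way to model polyinstantiation is a composite primary key of the logical key plus the security level, so the same logical record can exist once per level; the ship/cargo scenario below is a common textbook illustration, not drawn from this chapter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The same logical primary key (ship_id) may exist once per security level,
# so a low-level query returns a cover record rather than an empty result
# that would let the user infer the existence of classified data.
conn.executescript("""
CREATE TABLE cargo (
    ship_id  INTEGER NOT NULL,
    level    TEXT NOT NULL,            -- security level of this instance
    manifest TEXT NOT NULL,
    PRIMARY KEY (ship_id, level));
INSERT INTO cargo VALUES (7, 'unclassified', 'food supplies');
INSERT INTO cargo VALUES (7, 'secret', 'missile components');
""")

def manifest_for(clearance: str):
    row = conn.execute(
        "SELECT manifest FROM cargo WHERE ship_id = 7 AND level = ?",
        (clearance,)).fetchone()
    return row[0] if row else None

low_view = manifest_for("unclassified")   # cover story
high_view = manifest_for("secret")        # real manifest
```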

OLTP ACID Test

An online transaction processing (OLTP) system is used to monitor for problems such as processes that stop functioning. Its main goal is to prevent transactions that don’t happen properly or are not complete from taking effect. An ACID test ensures that each transaction has the following properties before it is committed:

  • Atomicity: Either all operations are complete, or the database changes are rolled back.

  • Consistency: The transaction follows an integrity process that ensures that data is consistent in all places where it exists.

  • Isolation: A transaction does not interact with other transactions until completion.

  • Durability: After it’s verified, the transaction is committed and cannot be rolled back.
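
Atomicity and consistency can be demonstrated with a transaction that is rolled back when an integrity check fails; the account table and amounts here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE account (
    id INTEGER PRIMARY KEY,
    balance INTEGER NOT NULL CHECK (balance >= 0))""")  # consistency rule
conn.execute("INSERT INTO account VALUES (1, 100), (2, 50)")
conn.commit()

# Atomicity: the credit and the debit succeed or fail together. The debit
# overdraws account 1, violating the CHECK constraint, so the rollback
# also undoes the credit already applied to account 2.
try:
    with conn:  # sqlite3 context manager: commit on success, rollback on error
        conn.execute("UPDATE account SET balance = balance + 500 WHERE id = 2")
        conn.execute("UPDATE account SET balance = balance - 500 WHERE id = 1")
except sqlite3.IntegrityError:
    pass  # transaction rolled back; neither update took effect

balances = dict(conn.execute("SELECT id, balance FROM account"))
```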

Data Audit

While an organization may have the most up-to-date data management plan in place, data management alone is not enough to fully protect data. Organizations must also put into place a data auditing mechanism that will help administrators identify vulnerabilities before attacks occur. Auditing mechanisms can be configured to monitor almost any level of access to data. However, auditing mechanisms affect the performance of the systems being audited. Always carefully consider any performance impact that may occur as a result of the auditing mechanism. While auditing is necessary, it is important not to audit so many events that the auditing logs are littered with useless or unused information.

Confidential or sensitive data should be more carefully audited than public information. As a matter of fact, it may not even be necessary to audit access to public information. But when considering auditing for confidential data, an organization may decide to audit all access to that data or just attempts to change the data. Only the organization and its personnel are able to develop the best auditing plan.

Finally, auditing is valuable only if the logs it produces are regularly reviewed. Administrators or security professionals should obtain appropriate training on reviewing audit logs. In addition, appropriate alerts should be configured for critical events. For example, if multiple user accounts are locked out due to invalid login attempts over a short period of time, this may be an indication that systems are experiencing a dictionary or other password attack. If an alert is configured to notify administrators when a certain number of lockouts occur over a period of time, administrators may be able to curtail the issue before the attacker achieves successful access.
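
A lockout alert of the kind described might be sketched as a sliding-window count; the five-lockouts-in-five-minutes threshold is an illustrative choice, not a standard:

```python
from collections import deque

# Raise a flag when too many account lockouts occur within a sliding time
# window. Timestamps are in seconds; window and threshold are hypothetical.
def lockout_alert(event_times, window: int = 300, threshold: int = 5) -> bool:
    recent = deque()
    for t in sorted(event_times):
        recent.append(t)
        while recent and t - recent[0] > window:
            recent.popleft()              # drop events outside the window
        if len(recent) >= threshold:
            return True                   # possible password-guessing attack
    return False

quiet = lockout_alert([0, 1000, 2000, 3000])   # isolated lockouts
attack = lockout_alert([0, 10, 20, 30, 40])    # five lockouts in 40 seconds
```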

Information and Asset Ownership

While information and assets within an organization are ultimately owned by the organization, it is usually understood that information and assets within the organization are owned and managed by different business units. These business units must work together to ensure that the organizational mission is achieved and that the information and assets are protected.

For this reason, security professionals must understand where the different information and assets are located and work with the various owners to ensure that the information and assets are protected. The owners that security professionals need to work with include data owners, system owners, and business/mission owners. As part of asset ownership, security professionals should ensure that appropriate asset management procedures are developed and followed, as described in Chapter 7, “Security Operations.”

Protect Privacy

Asset privacy involves ensuring that all organizational assets have the level of privacy that is needed. Privacy is the right of an individual to control his own information. Privacy is discussed in detail in Chapter 1, but when it comes to asset security, you need to understand how to protect asset privacy. This section discusses the privacy protection responsibilities of owners and data processors, data remanence, and collection limitation.

Owners

Security professionals must work with the owners of information and assets to determine who should have access to the information and assets, the value of the information and assets, and the controls that should be implemented to protect the privacy of information and assets. As a result, security professionals must understand the role of data owners, system owners, and business/mission owners.

Data Owners

As stated earlier, data owners actually own the data. Unfortunately, in most cases, data owners do not own the systems on which their data resides. Therefore, it is important that the data owner work closely with the system owner. Even if the appropriate ACLs are configured for the data, the data can still be compromised if the system on which the data resides is not properly secured.

System Owners

System owners are responsible for the systems on which data resides. While the data owner owns the data and the data custodian configures the appropriate permissions for user access to the data, the system owner must determine the parameters that govern the system, such as what types of data and applications can be stored on the system, who owns the data and applications, and who determined the users that can access the data and applications.

System Custodians

System custodians are responsible for administering the systems on which data resides based on the parameters set forth by the system owner.

Business/Mission Owners

Business or mission owners must ensure that all operations fit within the business goals and mission. This includes ensuring that collected data is necessary for the business to function. Collecting unnecessary data wastes time and resources. Because the business/mission owner is primarily concerned with the overall business, conflicts between data owners, data custodians, and system owners may need to be resolved by the business/mission owner, who will need to make the best decision for the organization. For example, say that a data owner requests more room on a system for the storage of data. The data owner strongly believes that the new data being collected will help the sales team be more efficient. However, storage on the system owner’s asset is at a premium. The system owner is unwilling to allow the data owner to use the amount of space he has requested. In this case, the business/mission owner would need to review both sides and decide whether collecting and storing the new data would result in enough increased revenue to justify the cost of allowing the data owner more storage space. If so, it may also be necessary to invest in more storage media for the system or to move the data to another system that has more resources available. But keep in mind that moving the data would possibly involve another system owner.

Security professionals should always be part of these decisions because they understand the security controls in place for any systems involved and the security controls needed to protect the data. Moving the data to a system that does not have the appropriate controls may cause more issues than just simply upgrading the system on which the data currently resides. Only a security professional is able to objectively assess the security needs of the data and ensure that they are met.

Data Processors

Data processors are any personnel within an organization who process the data that has been collected throughout the entire life cycle of the data. If any individual accesses the data in any way, that individual can be considered a data processor. However, in some organizations, data processors are only those individuals who can enter or change data.

No matter which definition an organization uses, it is important that security professionals work to provide training to all data processors on the importance of asset privacy, especially data privacy. This is usually included as part of the security awareness training. It is also important to include any privacy standards or policies that are based on laws and regulations. Once personnel have received the appropriate training, they should sign a statement saying that they will abide by the organization’s privacy policy.

Data Remanence

Whenever data is erased or removed from a storage media, residual data can be left behind. This can allow data to be reconstructed when the organization disposes of the media, resulting in unauthorized individuals or groups gaining access to private data. Media that security professionals must consider include magnetic hard disk drives, solid-state drives, magnetic tapes, and optical media, such as CDs and DVDs. When considering data remanence, security professionals must understand three countermeasures:

  • Clearing: This includes removing data from the media so that data cannot be reconstructed using normal file recovery techniques and tools. With this method, the data is only recoverable using special forensic techniques. Overwriting is a clearing technique that writes data patterns over the entire media, thereby eliminating any trace data. Another clearing technique is disk wiping.

  • Purging: Also referred to as sanitization, purging makes the data unreadable even with advanced forensic techniques. With this technique, data should be unrecoverable. Degaussing, a purging technique, exposes the media to a powerful, alternating magnetic field, removing any previously written data and leaving the media in a magnetically randomized (blank) state.

  • Destruction: Destruction is the physical act of destroying media in such a way that it cannot be reconstructed. Shredding involves physically breaking media to pieces. Pulverizing involves reducing media to dust. Pulping chemically alters the media. Finally, burning incinerates the media. (By contrast, encryption merely scrambles the data on the media, rendering it unreadable without the encryption key while leaving the media intact.)
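
A single-pass overwrite can be sketched as below. This is illustrative only: purpose-built clearing tools make guarantees this does not (sector coverage, slack space, journaling copies), and the file path here is a temporary file created for the example:

```python
import os
import secrets
import tempfile

# Minimal overwrite sketch for a file on magnetic media. Not a substitute for
# a real wiping tool, and unreliable on SSDs because of wear leveling.
def overwrite_and_delete(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))  # replace contents with random bytes
        f.flush()
        os.fsync(f.fileno())                # push the new bytes to the device
    os.remove(path)

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"PII to be cleared")
overwrite_and_delete(path)
file_gone = not os.path.exists(path)
```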

The majority of these countermeasures work for magnetic media. However, solid-state drives present unique challenges because wear leveling means an overwrite cannot be guaranteed to reach every storage cell. Most solid-state drive vendors provide sanitization commands that can be used to erase the data on the drive. Security professionals should research these commands to ensure that they are effective. Another option for these drives is cryptographic erasure: encrypting the drive and then destroying the encryption key. Often a combination of these methods must be used to fully ensure that the data is removed.

Data remanence is also a consideration when using any cloud-based solution for an organization. Security professionals should be involved in negotiating any contract with a cloud-based provider to ensure that the contract covers data remanence issues, although it is difficult to determine that the data is properly removed. Using data encryption is a great way to ensure that data remanence is not a concern when dealing with the cloud.

Collection Limitation

For any organization, a data collection limitation exists based on what is needed. System owners and data custodians should monitor the amount of free storage space so that they understand trends and can anticipate future needs before space becomes critical. Without appropriate monitoring, data can grow to the point where system performance is affected. No organization wants to have a vital data storage system shut down because there is no available free space. Disk quotas allow administrators to set disk space limits for users and then automatically monitor disk space usage. In most cases, the quotas can be configured to notify users when they are nearing space limits.
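
Free-space monitoring of the sort described can be sketched with the standard library; the 10 percent default threshold is an illustrative value, not a recommendation:

```python
import shutil

# Warn before storage becomes critical: flag any volume whose free space has
# fallen below the given fraction of total capacity.
def storage_warning(path: str = ".", min_free_fraction: float = 0.10) -> bool:
    usage = shutil.disk_usage(path)          # total, used, free in bytes
    return (usage.free / usage.total) < min_free_fraction

# With a threshold of 0.0 the warning can never trigger, since free space
# is never negative.
needs_attention = storage_warning(".", min_free_fraction=0.0)
```

In practice a check like this would run on a schedule and feed the alerting mechanism discussed above, so system owners and data custodians can act before space runs out.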

Collection of data is also limited based on laws and regulations and, in some cases, on gaining the consent of the subject of the data. Organizations should ensure that they fully document any laws and regulations that affect the collection of private data and adjust any private data collection policies accordingly. Organizations should document and archive the consent of the data subject. In addition, this consent should be renewed periodically, especially if the collection policy changes in any way.

Security professionals should work with system owners and data custodians to ensure that the appropriate monitoring and alert mechanisms are configured. System owners and data custodians can then be proactive when it comes to data storage needs.

Asset Retention

Asset and data retention requirements vary based on several factors, including asset or data type, asset or data age, and legal and regulatory requirements. Security professionals must understand where data is stored and the type of data stored. In addition, security professionals should provide guidance on managing and archiving data. Therefore, data retention policies must be established with the help of organizational personnel. The assets that store data will use the data retention policies to help guide the asset retention guidelines. If a storage asset needs to be replaced, a thorough understanding of the data that resides on the asset is essential to ensure that data is still retained for the required period.

A retention policy usually contains the purpose of the policy, the portion of the organization affected by the policy, any exclusions to the policy, the personnel responsible for overseeing the policy, the personnel responsible for data, the data types covered by the policy, and the retention schedule. Security professionals should work with data owners to develop the appropriate data retention policy for each type of data the organization owns. Examples of data types include, but are not limited to, human resources data, accounts payable/receivable data, sales data, customer data, and email.

Security professionals should ensure that asset retention policies are written as well. While asset retention policies are often governed by the data retention policies, organizations may find it necessary to replace physical assets while needing to retain the data stored on the asset. Security professionals should ensure that the data residing on an asset that will be retired is fully documented and properly retained as detailed by the data retention policy. This will usually require that the data is moved to another asset. For example, suppose an organization stores all the PII data it retains on a SQL server located on the organization’s demilitarized zone (DMZ). If the organization decides to replace the SQL server with a new Windows Server 2016 computer, it will be necessary to back up the PII from the old server and restore it to the new server. In addition, the organization may want to retain the backup of the PII and store it in a safe or other secured location, in case the organization should ever need it. Then the organization must ensure that the PII cannot be retrieved from the hard drive on the old server. This may require physical destruction of the hard drive.

To design asset and data retention policies, the organization should answer the following questions:

  • What are the legal/regulatory requirements and business needs for the asset/data?

  • What are the types of assets/data?

  • What are the retention periods and destruction needs for the assets/data?

The personnel who are most familiar with each asset and data type should work with security professionals to determine the asset and data retention policies. For example, human resources personnel should help design the data retention policies for all human resources assets and data. While designing asset and data retention policies, an organization must consider the media and hardware that will be used to retain the data. Then, with this information in hand, the organization and/or business unit should draft and formally adopt the asset and data retention policies.

Once the asset and data retention policies have been created, personnel must be trained to comply with these policies. Auditing and monitoring should be configured to ensure data retention policy compliance. Periodically, data owners and processors should review the data retention policies to determine whether any changes need to be made. All data retention policies, implementation plans, training, and auditing should be fully documented. In addition, IT support staff should work to ensure that the assets on which the data is stored are kept up to date with the latest security patches and updates.

Remember that within most organizations, it is not possible to find a one-size-fits-all solution because of the different types of assets or data. Only those most familiar with each asset or data type can determine the best retention policy for that asset or data. While a security professional should be involved in the design of the retention policies, the security professional is there to ensure that security is always considered and that retention policies satisfy organizational needs. The security professional should act only in an advisory role and should provide expertise when needed.

Data Security Controls

Now it is time to discuss the data security controls that organizations must consider as part of a comprehensive security plan. Security professionals must understand the following as part of data security controls: data security, data states (data at rest, data in transit, and data in use), data access and sharing, data storage and archiving, baselines, scoping and tailoring, standards selection, and cryptography.

Data Security

Data security includes the procedures, processes, and systems that protect data from unauthorized access. Unauthorized access includes unauthorized digital and physical access. Data security also protects data against any threats that can affect data confidentiality, integrity, or availability.

To provide data security, security should be implemented using a defense-in-depth strategy, as discussed in Chapter 1. If any single layer of defense is overlooked, data security is at risk. For example, you can implement authentication mechanisms to ensure that users must authenticate before accessing the network. But if you do not have the appropriate physical security controls in place to prevent unauthorized access to your facility, an attacker can gain access to your network simply by connecting an unauthorized device to it.

Security professionals should make sure their organization implements measures and safeguards for any threats that have been identified. In addition, security professionals must remain vigilant and constantly be on the lookout for new threats.

Data States

Three basic data states must be considered as part of asset security. These three states are data at rest, data in transit, and data in use. Security professionals must ensure that controls are implemented to protect data in all three of these states.

Data at Rest

Data at rest is data that is being stored and not being actively used at a certain point in time. While data is at rest, security professionals must ensure that its confidentiality, integrity, and availability are maintained. Confidentiality can be provided by implementing data encryption. Integrity can be provided by implementing the appropriate authentication mechanisms and ACLs so that only authenticated, authorized users can edit data. Availability can be provided by implementing a fault-tolerant storage solution, such as RAID.
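The integrity protection described above can be illustrated with a keyed hash from Python's standard library. This is a minimal sketch: the key and record shown are hypothetical, and a production system would also encrypt the data itself to protect confidentiality.

```python
import hashlib
import hmac

def seal(data: bytes, key: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over a stored record."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, key: bytes, tag: bytes) -> bool:
    """Return True only if the stored data is unmodified."""
    return hmac.compare_digest(seal(data, key), tag)

key = b"key-held-only-by-authorized-users"   # hypothetical key
record = b"customer balance: 1000"
tag = seal(record, key)                      # stored alongside the record

assert verify(record, key, tag)                          # untouched data passes
assert not verify(b"customer balance: 9000", key, tag)   # tampering is detected
```

Because only holders of the key can produce a valid tag, an unauthorized edit to the stored record is detectable the next time the record is read.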

Data in Transit

Data in transit is data that is being transmitted over a network. While data is being transmitted, security professionals must ensure that its confidentiality, integrity, and availability are maintained. Confidentiality can be provided by implementing link encryption or end-to-end encryption. As with data at rest, authentication and ACLs can help with the integrity of data in transit. Availability can be provided by implementing server farms and dual backbones.
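As a sketch of confidentiality for data in transit, Python's standard ssl module can wrap a TCP socket in TLS. The host name and port below are placeholders, and the exact context settings are illustrative assumptions rather than a complete hardening guide.

```python
import socket
import ssl

# Client-side TLS context with certificate and host-name verification enabled.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

def open_secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Wrap a TCP connection so that data in transit is encrypted
    between this client and the server."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Usage (requires network access):
# with open_secure_channel("example.com") as s:
#     s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```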

Data in Use

Data in use is data that is being accessed or manipulated in some way. Data manipulation includes editing the data and compiling the data into reports. The main issues with data in use are to ensure that only authorized individuals have access to or can read the data and that only authorized changes are allowed to the data. Confidentiality can be provided by using privacy or screen filters to prevent unauthorized individuals from reading data on a screen. It can also be provided by implementing a document shredding policy for all reports that contain PII, PHI, proprietary data, or other confidential information. Data integrity can be provided by implementing the appropriate controls on the data. Data locks can prevent data from being changed, and data rules can ensure that changes only occur within defined parameters. For certain data types, organizations may decide to implement two-person controls to ensure that data changes are entered and verified. Availability can be provided by using the same strategies as used for data at rest and data in transit. In addition, organizations may wish to implement locks and views to ensure that users needing access to data obtain the most up-to-date version of that data.
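The two-person control mentioned above can be sketched as a small workflow in which a change entered by one person takes effect only after a different person verifies it. The account record and user names here are hypothetical.

```python
class TwoPersonChange:
    """A data change takes effect only after a second, different person
    verifies it (dual control for sensitive records)."""

    def __init__(self, record: dict):
        self.record = record
        self.pending = None  # (field, new_value, entered_by)

    def propose(self, field: str, value, entered_by: str) -> None:
        self.pending = (field, value, entered_by)

    def verify(self, verified_by: str) -> bool:
        if self.pending is None:
            return False
        field, value, entered_by = self.pending
        if verified_by == entered_by:   # the same person may not self-approve
            return False
        self.record[field] = value
        self.pending = None
        return True

account = TwoPersonChange({"limit": 1000})
account.propose("limit", 5000, entered_by="alice")
assert not account.verify("alice")   # rejected: entry and verification must differ
assert account.verify("bob")         # applied by a second person
```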

Data Access and Sharing

Personnel must be able to access and share data in their day-to-day duties. This access starts when the data owner approves access for a user. The data custodian then gives the user the appropriate permissions for the data. But these two steps are an oversimplification of the process. Security professionals must ensure that the organization understands issues such as the following:

  • Are the appropriate data policies in place to control the access and use of data?

  • Do the data owners understand the access needs of the users?

  • What are the different levels of access needed by the users?

  • Which data formats do the users need?

  • Are there subsets of data that should have only restricted access for users?

  • Of the data being collected, is there clearly identified private versus public data?

  • Is data being protected both when it is at rest and when it is in transit?

  • Are there any legal or jurisdictional issues related to data storage location, data transmission, or data processing?

While the data owners and data custodians work together to answer many of these questions, security professionals should be involved in guiding them through this process. If a decision is made to withhold data, the decision must be made based on privacy, confidentiality, security, or legal/regulatory restrictions. The criteria by which these decisions are made must be recorded as part of an official policy.
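The two-step flow described above, in which the data owner approves access and the data custodian then grants the concrete permissions, can be sketched as follows. The user name and permission levels are illustrative assumptions.

```python
class DataAccessProcess:
    """The data owner approves access; the data custodian then grants
    the corresponding permissions and nothing more."""

    def __init__(self):
        self.approved = {}     # user -> level approved by the data owner
        self.permissions = {}  # user -> level actually granted

    def owner_approve(self, user: str, level: str) -> None:
        self.approved[user] = level

    def custodian_grant(self, user: str, level: str) -> bool:
        # The custodian may grant only what the owner approved.
        if self.approved.get(user) != level:
            return False
        self.permissions[user] = level
        return True

flow = DataAccessProcess()
flow.owner_approve("jsmith", "read")
assert not flow.custodian_grant("jsmith", "write")  # owner never approved write
assert flow.custodian_grant("jsmith", "read")
```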

Data Storage and Archiving

Data storage and archiving are related to how an organization stores data—both digital data and physical data in the form of hard copies. Data can easily become outdated, and once it is, it is usually no longer useful to the organization.

While data storage used to be quite expensive, it has become cheaper in recent years. Security professionals should work with data owners and data custodians to help establish a data review policy to ensure that data is periodically reviewed to determine whether it is needed and useful for the organization. Data should be archived in accordance with retention policies and schedules. Data that is no longer needed or useful for the organization should be destroyed. The exception is archived data that must be kept for a set retention period, especially data that may be on legal hold.
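A periodic retention review of the kind described above can be sketched as a simple decision function. The retention period, dates, and legal-hold flag are illustrative; real schedules vary by data type and jurisdiction.

```python
from datetime import date, timedelta

def retention_action(created: date, retention_years: int,
                     legal_hold: bool, today: date) -> str:
    """Decide whether a record must be retained or is eligible for
    destruction. Records on legal hold are always retained, regardless
    of age."""
    if legal_hold:
        return "retain (legal hold)"
    expiry = created + timedelta(days=365 * retention_years)  # approximate years
    return "destroy" if today >= expiry else "retain"

today = date(2024, 1, 1)
assert retention_action(date(2015, 1, 1), 7, False, today) == "destroy"
assert retention_action(date(2020, 1, 1), 7, False, today) == "retain"
assert retention_action(date(2010, 1, 1), 7, True, today) == "retain (legal hold)"
```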

Note

Retention is a set of rules within an organization that dictates types of unaltered data that must be kept and for how long. Archiving is the process of securely storing unaltered data for later potential retrieval. Data should be retained in accordance with a documented schedule, stored securely in accordance with its classification, and securely disposed of at the end of the retention period.

When considering data storage and archiving, security professionals need to ensure that the different aspects of storage are properly analyzed to ensure appropriate deployment. This includes analyzing server hardware and software, database maintenance, data backups, and network infrastructure. Each part of the digital trail that the data will travel must be understood so that the appropriate policies and procedures can be put into place to ensure asset privacy.

Data that is still needed and useful to the organization should remain in primary storage for easy access by users. Data marked for archiving must be moved to some sort of backup media or secondary storage. Organizations must determine the form of data archive storage that will best suit their needs. For some business units in the organization, it may be adequate to archive the data to magnetic tape or optical media, such as DVDs. With these forms of storage, restoring the data from the archive can be a laborious process. For business units that need an easier way to access the archived data, some sort of solid-state or hot-pluggable drive technology may be a better way to go.

No matter which media your organization chooses for archival purposes, security professionals must consider the costs of the mechanisms used and the security of the archive. Storing archived data that has been backed up to DVD in an unlocked file cabinet may be more convenient for a business unit, but it does not provide any protection of the data on the DVD. In this case, the security professional may need to work with the business unit to come up with a more secure storage mechanism for data archives. When data is managed centrally by the IT or data center staff, personnel usually better understand security issues related to data storage and may therefore not need as much guidance from security professionals.

Baselines

One practice that can make maintaining security simpler is to create and deploy standard images that have been secured with security baselines. A baseline is a set of configuration settings that provides a floor of minimum security in the image being deployed. Organizations should capture baselines for all devices, including network devices, host computers, and virtual machines.

Baselines can be controlled through the use of Group Policy in Windows. These policy settings can be made in the image and applied to both users and computers. The settings are refreshed periodically through a connection to a domain controller and cannot be altered by the user. It is also quite common for the deployment image to include all of the most current operating system updates and patches.

When a network makes use of these types of technologies, the administrators have created a standard operating environment. The advantages of such an environment are more consistent behavior of the network and simpler support issues. System scans should be performed weekly to detect changes from the baseline.
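The weekly scan for deviations from the baseline can be sketched as a file-fingerprinting comparison. The configuration file name and contents below are illustrative stand-ins for whatever files the baseline image actually controls.

```python
import hashlib
import os
import tempfile
from pathlib import Path

def fingerprint(paths):
    """Hash each configuration file so deviations from the approved
    baseline image can be detected by a scheduled scan."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def drift(baseline, current):
    """Return the files whose contents no longer match the baseline."""
    return [path for path, digest in baseline.items()
            if current.get(path) != digest]

# Demonstration with a temporary file standing in for a real config file.
cfg = os.path.join(tempfile.mkdtemp(), "sshd_config")
Path(cfg).write_bytes(b"PermitRootLogin no\n")

baseline = fingerprint([cfg])                    # captured at deployment
Path(cfg).write_bytes(b"PermitRootLogin yes\n")  # unauthorized change
assert drift(baseline, fingerprint([cfg])) == [cfg]
```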

Security professionals should help guide their organization through the process of establishing baselines. Very strict baselines provide a higher level of security but may be too restrictive for users to perform their duties. Very lax baselines provide a lower level of security and will likely result in security breaches. Security professionals should understand the balance between protecting organizational assets and allowing users access, and they should work to ensure that both ends of this spectrum are understood.

Scoping and Tailoring

Scoping and tailoring are closely tied to baselines. They allow an organization to narrow its focus to identify and address the appropriate risks.

Scoping determines which baseline security controls apply to a particular system or organization and removes those that do not. Baseline security controls are the minimums that are acceptable to the organization. When security controls are selected based on scoping, documentation should be created that includes the security controls that were considered, whether the security controls were adopted, and how the considerations were made.

Tailoring allows an organization to more closely match security controls to the needs of the organization. When security controls are selected based on tailoring, documentation should be created that includes the security controls that were considered, whether the security controls were adopted, and how the considerations were made.

NIST SP 800-53, which is covered extensively in Chapter 1, provides some guidance on tailoring.

Standards Selection

Because organizations need guidance on protecting their assets, security professionals must be familiar with the standards that have been established. Many standards organizations have been formed, including NIST, the U.S. Department of Defense (DoD), and the International Organization for Standardization (ISO).

Note

Standards are covered extensively in Chapter 1. To locate information on a particular NIST or ISO standard, refer to the Index.

The U.S. DoD Instruction 8510.01 establishes the Risk Management Framework (RMF) for DoD information technology, which replaced the earlier certification and accreditation process for DoD information systems. It can be found at http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodi/851001_2014.pdf.

ISO works with the International Electrotechnical Commission (IEC) to establish many standards regarding information security. The ISO/IEC standards that security professionals need to understand are covered in Chapter 1.

Security professionals may also need to research other standards, including standards from the European Network and Information Security Agency (ENISA), the European Union (EU), and the U.S. National Security Agency (NSA). It is important that an organization research the many available standards and apply the most beneficial guidelines based on its needs.

Data Protection Methods

Data is protected in a variety of ways. Security professionals must understand the different data protection methods and know how to implement them. Data protection methods should include administrative (managerial), logical (technical), and physical controls. All types of controls are covered extensively in Chapter 1.

The most popular method of protecting data and ensuring data integrity is cryptography.

Cryptography

Cryptography, most commonly applied in the form of encryption, can provide different protection depending on which level of communication is being used. The two types of encryption communication levels are link encryption and end-to-end encryption.

Note

Cryptography is discussed in greater detail in Chapter 3, “Security Architecture and Engineering.”

Link Encryption

Link encryption encrypts all the data that is transmitted over a link. In this type of communication, the only portion of the packet that is not encrypted is the data-link control information, which is needed to ensure that devices transmit the data properly. All the information is encrypted, with each router or other device decrypting its header information so that routing can occur and then re-encrypting before sending the information to the next device.

If the sending party needs to ensure that data security and privacy are maintained over a public communication link, then link encryption should be used. This is often the method used to protect email communication or when banks or other institutions that have confidential data must send that data over the Internet.

Link encryption protects against packet sniffers and other forms of eavesdropping and occurs at the data link and physical layers of the OSI model. Advantages of link encryption include the following:

  • All the data, including headers, is encrypted.

  • No user interaction is needed for it to be used.

Disadvantages of link encryption include the following:

  • Each device that the data must pass through must receive the key.

  • Key changes must be transmitted to each device on the route.

  • Packets are decrypted at each device.
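The hop-by-hop decrypt and re-encrypt behavior described above can be simulated in a short sketch. The XOR keystream below is a toy stand-in for a real cipher and must never be used for actual protection; the link keys are illustrative.

```python
import hashlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream 'cipher' for illustration only -- NOT secure."""
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def link_hop(ciphertext: bytes, in_key: bytes, out_key: bytes) -> bytes:
    """A router on the path: decrypt the incoming link (the packet is
    briefly in plaintext here) and re-encrypt for the outgoing link."""
    plaintext = toy_cipher(ciphertext, in_key)
    return toy_cipher(plaintext, out_key)

packet = b"header+payload"
k1, k2 = b"link-key-1", b"link-key-2"     # one key per physical link

on_wire = toy_cipher(packet, k1)          # sender -> first router
on_wire = link_hop(on_wire, k1, k2)       # router decrypts, re-encrypts
assert toy_cipher(on_wire, k2) == packet  # receiver recovers the packet
```

Note how the entire packet, header included, is encrypted on each link, but every intermediate device momentarily holds the plaintext, which is exactly the disadvantage listed above.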

End-to-End Encryption

End-to-end encryption encrypts less of the packet than link encryption does. In end-to-end encryption, packet routing information, headers, and addresses are not encrypted, which allows potential attackers to obtain more information if a packet is captured through packet sniffing or eavesdropping.

End-to-end encryption has several advantages. A user usually initiates end-to-end encryption, which allows the user to select exactly what gets encrypted and how. It also affects the performance of each device along the route less than link encryption because the intermediate devices do not have to decrypt and re-encrypt the packet to determine how to route it.
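By contrast, end-to-end encryption can be sketched as a payload encrypted once under a key shared only by the endpoints, while the header that routers need remains readable. The XOR keystream below is a toy stand-in for a real cipher; the addresses and payload are illustrative.

```python
import hashlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream 'cipher' for illustration only -- NOT secure."""
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

session_key = b"shared-only-by-the-two-endpoints"
packet = {
    "dst": "10.0.0.7",                                     # header stays readable
    "payload": toy_cipher(b"card=4111 1111", session_key)  # payload does not
}

def route(pkt: dict) -> str:
    """A router reads only the cleartext header; it never holds the
    session key and never sees the payload in the clear."""
    return pkt["dst"]

assert route(packet) == "10.0.0.7"
assert toy_cipher(packet["payload"], session_key) == b"card=4111 1111"
```

The routers forward the packet without performing any cryptographic work, which is why end-to-end encryption burdens intermediate devices less than link encryption.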

Information and Asset Handling Requirements

Organizations should establish the appropriate information and asset handling requirements to protect their assets. As part of these handling requirements, personnel should be instructed on how to mark, label, store, and destroy or dispose of media.

Handling standards inform custodians and users how to protect the information they use and the systems with which they interact. They dictate, by classification level, how information must be stored, transmitted, communicated, accessed, retained, and destroyed, and they can extend to incident management, breach notification, and automated tools such as data loss prevention (DLP) solutions. Handling standards should be succinctly documented in a usable format, and compliance with them should be referenced in the acceptable use policy (AUP). Users should be introduced to handling standards during the onboarding process, and the standards should be reinforced throughout the user life cycle.

Marking, Labeling, and Storing

Plainly label all forms of storage media (tapes, optical drives, and so on) and store them safely. Some guidelines in the area of media control are to

  • Accurately and promptly mark all data storage media.

  • Ensure proper environmental storage of the media.

  • Ensure the safe and clean handling of the media.

  • Log data media to provide a physical inventory control.

The environment where the media will be stored is also important. For example, damage to magnetic media starts occurring at temperatures above 100 degrees Fahrenheit.

Labeling is the vehicle for communicating the assigned classification to custodians, users, and applications (for example, access control and DLP). Labels make it easy to identify the data classification and can take many forms: electronic, print, audio, or visual. Labels should be appropriate for the intended audience. They transcend institutional knowledge and provide stability in environments that experience personnel turnover. Labeling recommendations are tied to media type. In electronic form, the classification label should be part of the document name (for example, Customer Transaction History_Protected). On written or printed documents, the classification label should appear as a watermark and in either the document header or footer. For physical media, the classification label should be clearly marked on the case using words or symbols.
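The document-name labeling convention described above can be checked programmatically, for example by a DLP tool. The label set below is a hypothetical one modeled on the private sector classifications discussed earlier in this chapter.

```python
# Hypothetical label set, following the private sector classifications
# discussed earlier in this chapter.
LABELS = ("Public", "Sensitive", "Private", "Confidential")

def classification_from_name(document_name: str) -> str:
    """Read the classification label embedded in a document name,
    for example 'Customer Transaction History_Confidential'."""
    suffix = document_name.rsplit("_", 1)[-1]
    return suffix if suffix in LABELS else "Unlabeled"

assert classification_from_name("Customer Transaction History_Confidential") == "Confidential"
assert classification_from_name("meeting_notes") == "Unlabeled"
```

A tool using this check could, for instance, block unlabeled documents from leaving the organization until a classification is assigned.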

Destruction

During media disposal, you must ensure no data remains on the media. The most reliable, secure means of removing data from magnetic storage media, such as a magnetic tape cassette, is through degaussing, which exposes the media to a powerful, alternating magnetic field. It removes any previously written data, leaving the media in a magnetically randomized (blank) state. More information on the destruction of media is given earlier in this chapter, in the “Data Remanence” section and in Chapter 7.
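Degaussing sanitizes magnetic media as a whole. At the file level, one related technique from the earlier "Data Remanence" discussion is overwriting contents before deletion. The sketch below is a simplified illustration under that assumption; overwriting may not defeat wear leveling on solid-state drives, and it is not a substitute for degaussing or physical destruction of the media itself.

```python
import os
import tempfile

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes before deleting it so the old
    contents are not simply left on disk when the file is disposed of."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite to stable storage
    os.remove(path)

# Demonstration against a throwaway file.
fd, path = tempfile.mkstemp()
os.write(fd, b"account numbers")
os.close(fd)
overwrite_and_delete(path)
assert not os.path.exists(path)
```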

Exam Preparation Tasks

As mentioned in the section “About the CISSP Cert Guide, Third Edition” in the Introduction, you have a couple of choices for exam preparation: the exercises here, Chapter 9, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 2-1 lists a reference of these key topics and the page numbers on which each is found.


Table 2-1 Key Topics for Chapter 2

Key Topic Element   Description                                                  Page Number

List                NIST SP 800-122 recommendations to effectively protect PII   147

List                Private sector classifications                               151

List                Military and government classifications                      152

List                Information life cycle                                       153

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

access control list (ACL)

aggregation

atomicity

authentication

availability

base relation

baseline

candidate key

cardinality

certification

column or attribute

confidentiality

consistency

contamination

criticality

cryptography

data criticality

data custodian

data mining

data owner

data processors

data purging

data quality

data sensitivity

data structure

data warehouse

data warehousing

database locks

database views

defense in depth

degree

domain

durability

EPHI

foreign key

guideline

hierarchical database

inference

information assets

intangible assets

integrity

International Electrotechnical Commission (IEC)

International Organization for Standardization (ISO)

ISO/IEC 27000

isolation

Java Database Connectivity (JDBC)

liability

network-attached storage (NAS)

Object Linking and Embedding (OLE)

Object Linking and Embedding Database (OLE DB)

object-oriented programming (OOP)

object-oriented database (OODB)

object-relational database

OLTP ACID test

online transaction processing (OLTP) system

Open Database Connectivity (ODBC)

personally identifiable information (PII)

policy

polyinstantiation

protected health information (PHI)

record

referential integrity

relation

relational database

remanence

row

schema

sensitivity

standard

system owner

tangible assets

view

Answer Review Questions

1. What is the highest military security level?

  a. Confidential

  b. Top Secret

  c. Private

  d. Sensitive

2. Who is responsible for deciding which users have access to data?

  a. Business owner

  b. System owner

  c. Data owner

  d. Data custodian

3. Which term is used for the fitness of data for use?

  a. Data sensitivity

  b. Data criticality

  c. Data quality

  d. Data classification

4. What is the highest level of classification for private sector systems?

  a. Public

  b. Sensitive

  c. Private

  d. Confidential

5. What is the first phase of the information life cycle?

  a. Maintain

  b. Use

  c. Distribute

  d. Create/receive

6. Which organizational role owns a system and must work with other users to ensure that data is secure?

  a. Business owner

  b. Data custodian

  c. Data owner

  d. System owner

7. What is the last phase of the information life cycle?

  a. Distribute

  b. Maintain

  c. Dispose/store

  d. Use

Answers and Explanations

1. b. Military and governmental entities classify data using five main classification levels, listed from highest sensitivity level to lowest:

  1. Top Secret

  2. Secret

  3. Confidential

  4. Sensitive but unclassified

  5. Unclassified

2. c. The data owner is responsible for deciding which users have access to data.

3. c. Data quality is the fitness of data for use.

4. d. Private sector systems usually use the following classifications, from highest to lowest:

  1. Confidential

  2. Private

  3. Sensitive

  4. Public

5. d. The phases of the information life cycle are as follows:

  1. Create/receive

  2. Distribute

  3. Use

  4. Maintain

  5. Dispose/store

6. d. The system owner owns a system and must work with other users to ensure that data is secure.

7. c. The phases of the information life cycle are as follows:

  1. Create/receive

  2. Distribute

  3. Use

  4. Maintain

  5. Dispose/store
