Chapter 3

Domain 2: Asset Security (Protecting Security of Assets)

Abstract

The Asset Security domain focuses on controls such as data classification, clearances, labels, retention, and ownership of data. Data remanence is discussed, including newly testable material such as the remanence properties of Solid State Drives (SSDs), which combine flash memory (EEPROM) and DRAM, and have quite different remanence properties compared to magnetic drives. The domain concludes with a discussion of controls determination, including scoping and tailoring, and of well-known standards, including PCI-DSS and the ISO 27000 series.

Keywords

Random Access Memory
Remanence
Reference Monitor
Read Only Memory
Solid State Drive
Tailoring

Exam objectives in this chapter

Classifying Data
Ownership
Memory and Remanence
Data Destruction
Determining Data Security Controls

Unique Terms and Definitions

RAM—Random Access Memory, volatile hardware memory that loses integrity after loss of power
Remanence—Data that persists beyond noninvasive means to delete it
Reference Monitor—Mediates all access between subjects and objects
ROM—Read Only Memory, nonvolatile memory that maintains integrity after loss of power
Scoping—The process of determining which portions of a standard will be employed by an organization
SSD—Solid State Drive, a combination of flash memory (EEPROM) and DRAM
Tailoring—The process of customizing a standard for an organization

Introduction

The Asset Security (Protecting Security of Assets) domain focuses on controls such as data classification, clearances, labels, retention, and ownership of data. We will discuss data remanence, including newly testable material such as the remanence properties of Solid State Drives (SSDs), which combine flash memory (EEPROM) and DRAM, and have quite different remanence properties compared to magnetic drives. The domain wraps up with a discussion of controls determination, including standards, scoping, and tailoring.

Classifying Data

Data classification has existed for millennia. In 678 AD the defenders of Constantinople first used Greek fire to defend the city against invading ships. The liquid was launched from the city walls and could burn on water. “The composition and use of Greek fire was a state secret that died with the Byzantium empire, in fact disappeared long before Byzantium had run its course. To this day historians have been unable to agree on the composition and use of Greek fire, in spite of repeated attempts by chemists and historians to discern its nature from a fragmented historical record.” [1] Note that data classification is testable, but this historical example is not.
The day-to-day management of access control requires management of labels, clearances, formal access approval, and need to know. These formal mechanisms are typically used to protect highly sensitive data, such as government or military data.

Labels

Objects have labels, and as we will see in the next section, subjects have clearances. A critical security step is the process of locating sensitive information, and labeling or marking it as sensitive. How the data is labeled should correspond to the organizational data classification scheme.
The object labels used by many world governments are confidential, secret and top secret. According to Executive Order 12356—National Security Information:
“Top Secret” shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause exceptionally grave damage to the national security.
“Secret” shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause serious damage to the national security.
“Confidential” shall be applied to information, the unauthorized disclosure of which reasonably could be expected to cause damage to the national security. [2]
This describes the classification criteria. A security administrator who applies a label to an object must follow these criteria. Additional labels exist, such as unclassified (data that is not sensitive), SBU (Sensitive but Unclassified), and For Official Use Only (FOUO). SBU describes sensitive data that is not a matter of national security, such as the healthcare records of enlisted personnel. This data must be protected, even though its release would not normally cause national security issues.
Private sector companies use labels such as “Internal Use Only” and “Company Proprietary.”

Security Compartments

Compartments allow additional control over highly sensitive information. This is called Sensitive Compartmented Information (SCI). Compartments used by the United States include HCS, COMINT (SI), GAMMA (G), TALENT KEYHOLE (TK), and others (these are listed as examples to illustrate the concept of compartments; the specific names are not testable). These compartments require a documented and approved need to know in addition to a normal clearance such as top secret.

Clearance

A clearance is a formal determination of whether or not a user can be trusted with a specific level of information. Clearances must determine the subject’s current and potential future trustworthiness; the latter is harder (and more expensive) to assess. For example: are there any issues, such as debt or drug or alcohol abuse, which could lead an otherwise ethical person to violate their ethics? Is there a personal secret that could be used to blackmail this person? A clearance attempts to make these determinations.
In many world governments, these clearances mirror the respective object labels of confidential, secret, and top secret. Each clearance requires a myriad of investigations and collection of personal data. Once all data has been gathered (including a person’s credit score, arrest record, interviews with neighbors and friends, and more), an administrative judge makes a determination on whether this person can be trusted with U.S. national security information.

Note

A great resource to see what is required to obtain a U.S. government security clearance can be found at http://www.dod.mil/dodgc/doha/industrial/. This Web site, maintained by the United States Department of Defense Office of Hearings and Appeals (known as DOHA), posts U.S. government security clearance decisions for contractors who have appealed their initial decision (one does not appeal a favorable decision, so these have all been denied). It is fascinating to read the circumstances behind why people have either been granted or lost their U.S. government security clearance. The applicant’s name and any identifying information have been removed from the content, but the circumstances of each case are left for all to read. Typically, drug use and foreign influence are the two most common reasons why people are not granted a U.S. Government clearance.

Formal Access Approval

Formal access approval is documented approval from the data owner for a subject to access certain objects, requiring the subject to understand all of the rules and requirements for accessing data, and consequences should the data become lost, destroyed, or compromised.

Note

When accessing North Atlantic Treaty Organization (NATO) information, the compartmented information is called “NATO Cosmic.” Not only would a user be required to have the clearance to view NATO classified information, they would also require formal access approval from the NATO security official (data owner) to view the Cosmic compartmented information. Note that compartments are a testable concept, but the name of the Cosmic compartment itself is not testable.

Need to Know

Need to know refers to answering the question: does the user “need to know” the specific data they may attempt to access? It is a difficult question, especially when dealing with large populations across large IT infrastructures. Most systems rely on least privilege and require the users to police themselves by following policy and only attempt to obtain access to information that they have a need to know. Need to know is more granular than least privilege: unlike least privilege, which typically groups objects together, need to know access decisions are based on each individual object.
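The interplay of labels, clearances, compartments, and need to know described above can be sketched as a simple access-decision function. This is an illustrative model only; the names, levels, and data structures here are assumptions for the sketch, not part of any real system.

```python
# Illustrative sketch of a mandatory access decision combining
# clearance level, compartments, and need to know.
# All names and levels are hypothetical examples.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def may_access(subject, obj):
    """Grant access only if the subject's clearance dominates the
    object's label, the subject holds every required compartment,
    and a documented need to know exists for this specific object."""
    if LEVELS[subject["clearance"]] < LEVELS[obj["label"]]:
        return False
    if not set(obj["compartments"]) <= set(subject["compartments"]):
        return False
    # Need to know is per-object: more granular than least privilege.
    return obj["id"] in subject["need_to_know"]

analyst = {"clearance": "top secret", "compartments": {"TK"},
           "need_to_know": {"sat-imagery-report"}}
report = {"id": "sat-imagery-report", "label": "secret", "compartments": ["TK"]}

print(may_access(analyst, report))  # True: clearance dominates, compartment held
```

Note that all three checks must pass: a top secret clearance alone grants nothing without the compartment and the documented need to know.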

Sensitive Information/Media Security

Though security and controls related to the people within an enterprise are vitally important, so is having a regimented process for handling sensitive information, including media security. This section discusses concepts that are an important component of a strong overall information security posture.

Sensitive Information

All organizations have sensitive information that requires protection, and that sensitive information physically resides on some form of media. In addition to primary storage, backup storage must also be considered. It is also likely that sensitive information is transferred, whether internally or externally, for use. Wherever the data exists, there must be processes that ensure the data is not destroyed or inaccessible (a breach of availability), disclosed (a breach of confidentiality), or altered (a breach of integrity).

Handling

People handling sensitive media should be trusted individuals who have been vetted by the organization. They must understand their role in the organization’s information security posture. Sensitive media should have strict policies regarding its handling. Policies should require the inclusion of written logs detailing the person responsible for the media. Historically, backup media has posed a significant problem for organizations.

Storage

When storing sensitive information, it is preferable to encrypt the data. Encryption of data at rest greatly reduces the likelihood of the data being disclosed in an unauthorized fashion due to media security issues. Physical storage of the media containing sensitive information should not be performed in a haphazard fashion, whether the data is encrypted or not. Care should be taken to ensure that there are strong physical security controls wherever media containing sensitive information is accessible.

Retention

Media and information have a limited useful life. Retention of sensitive information should not persist beyond the period of usefulness or legal requirement (whichever is greater), as it needlessly exposes the data to threats of disclosure when the data is no longer needed by the organization. Keep in mind there may be regulatory or other legal reasons that may compel the organization to maintain such data beyond its time of utility.
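The retention rule above (keep data for the greater of its period of usefulness or its legal requirement, and no longer) can be sketched in a few lines. The retention periods below are hypothetical examples, not recommendations; real values come from organizational policy and applicable law.

```python
from datetime import date, timedelta

# Hypothetical retention periods; real values come from policy and law.
BUSINESS_USEFULNESS = timedelta(days=365)      # 1 year of operational use
LEGAL_REQUIREMENT = timedelta(days=7 * 365)    # e.g., a 7-year regulatory mandate

def past_retention(created: date, today: date) -> bool:
    """Data is eligible for destruction only after the LONGER of the
    business-usefulness and legal-requirement periods has elapsed."""
    retention = max(BUSINESS_USEFULNESS, LEGAL_REQUIREMENT)
    return today - created > retention

print(past_retention(date(2010, 1, 1), date(2020, 1, 1)))  # True
print(past_retention(date(2018, 1, 1), date(2020, 1, 1)))  # False
```

Taking the maximum of the two periods captures both halves of the rule: data is not kept past its usefulness, but neither is it destroyed while a legal obligation to retain it remains.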

Ownership

Primary information security roles include business or mission owners, data owners, system owners, custodians, and users. Each plays a different role in securing an organization’s assets.

Business or Mission Owners

Business Owners and Mission Owners (senior management) create the information security program and ensure that it is properly staffed, funded, and has organizational priority. They are responsible for ensuring that all organizational assets are protected.

Data Owners

The Data Owner (also called information owner) is a management employee responsible for ensuring that specific data is protected. Data owners determine data sensitivity labels and the frequency of data backup. They focus on the data itself, whether in electronic or paper form. A company with multiple lines of business may have multiple data owners. The data owner performs management duties; Custodians perform the hands-on protection of data.

Exam Warning

Do not confuse the Data Owner with a user who “owns” his/her data on a discretionary access control system (see Chapter 6, Domain 5: Identity and Access Management, for more information on DAC, or discretionary access control systems).
The Data Owner (capital “O”) is responsible for ensuring that data is protected. A user who “owns” data (lower case “o”) has read/write access to objects.

System Owner

The System Owner is a manager responsible for the actual computers that house data. This includes the hardware and software configuration, including updates, patching, etc. They ensure the hardware is physically secure, operating systems are patched and up to date, the system is hardened, etc. Technical hands-on responsibilities are delegated to Custodians, discussed next.

Note

The difference between a System Owner and a Data Owner is straightforward. The System Owner is responsible for securing the computer hardware and software. The Data Owner is responsible for protecting the data contained within the computer.
For example: for a database server, the system owner would secure the hardware and software, including patching the Database Management System (such as MySQL or Oracle). The data owner would secure the data itself: sensitive data contained within database tables, such as Personally Identifiable Information (PII).

Custodian

A Custodian provides hands-on protection of assets such as data. They perform data backups and restoration, patch systems, configure antivirus software, etc. The Custodians follow detailed orders; they do not make critical decisions on how data is protected. The Data Owner may dictate, “All data must be backed up every 24 hours.” The Custodians would then deploy and operate a backup solution that meets the Data Owner’s requirements.

Users

Users must follow the rules: they must comply with mandatory policies, procedures, standards, etc. They must not write their passwords down or share accounts, for example. Users must be made aware of these risks and requirements. You cannot assume they will know what to do, nor assume they are already doing the right thing: they must be told, via information security awareness. They must also be made aware of the penalty for failing to comply with mandatory directives such as policies.

Data Controllers and Data Processors

Data controllers create and manage sensitive data within an organization. Human resources employees are often data controllers: they create and manage sensitive data, such as salary and benefit data, reports from employee sanctions, etc.
Data processors manage data on behalf of data controllers. An outsourced payroll company is an example of a data processor. They manage payroll data (used to determine the amount to pay individual employees) on behalf of a data controller, such as an HR department.

Data Collection Limitation

Organizations should collect the minimum amount of sensitive information that is required.
The Organisation (sic) for Economic Co-operation and Development (OECD, discussed in Chapter 2, Domain 1: Security and Risk Management) Collection Limitation Principle discusses data limitation: “There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject” [3]
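One way to enforce the Collection Limitation Principle in practice is to collect only an explicit allow-list of fields and discard everything else. A minimal sketch, with hypothetical field names:

```python
# Hypothetical allow-list: the minimum personal data this process requires.
REQUIRED_FIELDS = {"employee_id", "name", "salary"}

def collect(submitted: dict) -> dict:
    """Retain only fields we have a documented need for,
    dropping anything extra that was submitted."""
    return {k: v for k, v in submitted.items() if k in REQUIRED_FIELDS}

form = {"employee_id": "E123", "name": "A. Jones", "salary": 50000,
        "religion": "n/a", "mother_maiden_name": "Smith"}  # over-collection
print(collect(form))  # only employee_id, name, and salary survive
```

Data that is never collected cannot be breached, so an allow-list at the point of collection is cheaper than protecting unnecessary data later.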

Memory and Remanence

The 2015 exam update added timely topics such as remanence properties of Solid State Drives (SSDs), discussed shortly. We will begin by discussing computer memory itself, followed by remanence properties of volatile and nonvolatile memory. Note that related concepts such as memory protection and CPU design are described in Chapter 4, Domain 3: Security Engineering.

Data Remanence

The term data remanence is important to understand when discussing media sanitization and data destruction. Data remanence is data that persists beyond noninvasive means to delete it. Though data remanence is sometimes used specifically to refer to residual data that persists on magnetic storage, remanence concerns go beyond just that of magnetic storage media. Security professionals must understand the remanence properties of various types of memory and storage, and appreciate the steps to make data unrecoverable.

Memory

Memory is a series of on-off switches representing bits: 0s (off) and 1s (on). Memory may be chip-based, disk-based, or use other media such as tape. RAM is Random Access Memory: “random” means the CPU may randomly access (jump to) any location in memory. Sequential memory (such as tape) must be read sequentially, beginning at offset zero, until the desired portion of memory is reached. Volatile memory (such as RAM) loses integrity after a power loss; nonvolatile memory (such as ROM, disk, or tape) maintains integrity without power.
Real (or primary) memory, such as RAM, is directly accessible by the CPU and is used to hold instructions and data for currently executing processes. Secondary memory, such as disk-based memory, is not directly accessible.

Cache Memory

Cache memory is the fastest memory on the system, required to keep up with the CPU as it fetches and executes instructions. The data most frequently used by the CPU is stored in cache memory. The fastest portion of the CPU cache is the register file, which contains multiple registers. Registers are small storage locations used by the CPU to store instructions and data.
The next fastest form of cache memory is Level 1 cache, located on the CPU itself. Finally, Level 2 cache is connected to (but outside) the CPU. SRAM (Static Random Access Memory) is used for cache memory.

Note

As a general rule, the memory closest to the CPU (cache memory) is the fastest and most expensive memory in a computer. As you move away from the CPU, from SRAM, to DRAM to disk, to tape, etc., the memory becomes slower and less expensive.

RAM and ROM

RAM is volatile memory used to hold instructions and data of currently running programs. It loses integrity after loss of power. RAM memory modules are installed into slots on the computer motherboard. RAM is also becoming increasingly embedded in computer motherboards, making upgrading difficult, if not impossible.
ROM (Read Only Memory) is nonvolatile: data stored in ROM maintains integrity after loss of power. A computer’s Basic Input/Output System (BIOS) firmware is stored in ROM. While ROM is “read only,” some types of ROM may be written to via flashing, as we will see shortly in the “Flash Memory” section.

Note

The volatility of RAM is a subject of ongoing research. Historically, it was believed that DRAM lost integrity after loss of power. The “cold boot” attack has shown that RAM has remanence: it may maintain integrity seconds or even minutes after power loss. This has security ramifications: encryption keys usually exist in plaintext in RAM, and may be recovered by “cold booting” a computer off a small OS installed on DVD or USB key, and then quickly dumping the contents of memory. A video on the implications of cold boot called “Lest We Remember: Cold Boot Attacks on Encryption Keys” is available at http://citp.princeton.edu/memory/
Remember that the exam sometimes simplifies complex matters. For the exam, simply remember that RAM is volatile (though not as volatile as we once believed).

DRAM and SRAM

Static Random Access Memory (SRAM) is fast, expensive memory that uses small latches called “flip-flops” to store bits. Dynamic Random Access Memory (DRAM) stores bits in small capacitors (like small batteries), and is slower and cheaper than SRAM. The capacitors used by DRAM leak charge, and must be continually refreshed to maintain integrity, typically every few to a few hundred milliseconds, depending on the type of DRAM. Refreshing reads and writes the bits back to memory. SRAM does not require refreshing, and maintains integrity as long as power is supplied.

Firmware

Firmware stores small programs that do not change frequently, such as a computer’s BIOS (discussed below), or a router’s operating system and saved configuration. Various types of ROM chips may store firmware, including PROM, EPROM, and EEPROM.
PROM (Programmable Read Only Memory) can be written to once, typically at the factory. EPROMs (Erasable Programmable Read Only Memory) and EEPROMs (Electrically Erasable Programmable Read Only Memory) may be “flashed,” or erased and written to multiple times. The term “flashing” derives from the use of EPROMs: flashing ultraviolet light on a small window on the chip erased the EPROM. The window was usually covered with foil to avoid accidental erasure due to exposure to light. EEPROMs are the modern type of ROM, electrically erasable via the use of flashing programs.
A Programmable Logic Device (PLD) is a field-programmable device, which means it is programmed after it leaves the factory. EPROMs, EEPROMS, and Flash Memory are examples of PLDs.
Flash Memory
Flash memory (such as USB thumb drives) is a specific type of EEPROM, used for small portable disk drives. The difference is that any byte of an EEPROM may be written individually, while flash memory is written in (larger) sectors. This makes flash memory faster than EEPROMs, but still slower than magnetic disks.
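The byte-versus-sector distinction can be shown with a toy model: changing a single byte of flash requires erasing and reprogramming the entire sector containing it. This is a simplified sketch; real flash controllers add wear leveling and much more, and the sector size below is deliberately tiny.

```python
SECTOR_SIZE = 8  # toy sector size; real flash sectors are KB-sized

class ToyFlash:
    """Toy flash model: writing a single byte requires erasing and
    rewriting the entire sector that contains it."""
    def __init__(self, sectors: int):
        self.mem = bytearray(sectors * SECTOR_SIZE)
        self.sector_erases = 0  # erase cycles wear flash out over time

    def write_byte(self, addr: int, value: int):
        start = (addr // SECTOR_SIZE) * SECTOR_SIZE
        sector = bytearray(self.mem[start:start + SECTOR_SIZE])  # read
        sector[addr - start] = value                             # modify
        self.sector_erases += 1                                  # erase
        self.mem[start:start + SECTOR_SIZE] = sector             # rewrite

flash = ToyFlash(sectors=4)
flash.write_byte(3, 0xAB)   # one byte changed...
print(flash.sector_erases)  # ...but a whole-sector erase was needed: 1
```

An EEPROM, by contrast, could update that one byte directly; the sector-at-a-time erase is why flash wear and write amplification are concerns.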

Note

Firmware is chip-based, unlike magnetic disks. The term “flash drive” may lead some to think that flash memory drives are “disk drives.” They are physically quite different, and have different remanence properties.
A simple magnetic field will not erase flash memory. Secure destruction methods used for magnetic drives, such as degaussing (which we will discuss shortly) will not work with flash drives.

Solid State Drives (SSDs)

A Solid State Drive (SSD) is a combination of flash memory (EEPROM) and DRAM. Degaussing has no effect on SSDs. While physical disks have physical blocks (“block 1” resides at a specific physical location on a magnetic disk), blocks on SSDs are logical, and are mapped to physical blocks. Additionally, SSDs do not overwrite blocks that contain data: the device instead writes data to an unused block and marks the previous block unallocated.
A process called garbage collection later takes care of these old blocks: “Unused and unerased blocks are moved out of the way and erased in the background. This is called the ‘garbage collection’ process. Working in the background, garbage collection systematically identifies which memory cells contain unneeded data and clears the blocks of unneeded data during off-peak times to maintain optimal write speeds during normal operations.” [4]
The TRIM command improves garbage collection. “TRIM is an attribute of the ATA Data Set Management Command. The TRIM function improves compatibility, endurance, and performance by allowing the drive to do garbage collection in the background. This collection eliminates blocks of data, such as deleted files.” [5] While the TRIM command improves performance, it does not reliably destroy data.
A sector-by-sector overwrite behaves very differently on an SSD than on a magnetic drive, and does not reliably destroy all data. Electronically shredding a file (overwriting the file’s data before deleting it, which we will discuss shortly) is also ineffective on SSDs.
Tests performed by the Department of Computer Science and Engineering, University of California, San Diego found: “Overall, the results for overwriting are poor: while overwriting appears to be effective in some cases across a wide range of drives, it is clearly not universally reliable. It seems unlikely that an individual or organization expending the effort to sanitize a device would be satisfied with this level of performance.” [6]
Data on SSD drives that are not physically damaged may be securely removed via ATA Secure Erase. SanDisk provides the following details: “When the relevant secure erase command is executed on the SanDisk SSD, all blocks in the physical address space, regardless of whether they are currently or were previously allocated to the logical space, are completely erased (the “logical to physical mapping table” is also erased). Additionally, a new encryption key is generated and the old key is discarded.
This erase operation does not overwrite the blocks like an HDD write or format command would. Data is written to flash on a page-level and a page must be completely erased before it can be written to again. Unlike HDDs, which may leave remnants of data in regions between tracks, an erased flash cell is restored to the same content it contained at the time it was manufactured. As in the case with an HDD, physical blocks that have been marked “bad” may still contain remnant user data. There is no way to access these blocks to overwrite them, and secure erase makes no attempt to do so. Because the secure erase operation also regenerates the internal encryption key, it is not possible to decrypt the data, even if it were accessible.” [7]
The two valid options for destroying data on SSD drives are ATA secure erase and destruction. Destruction is the best method for SSD drives that are physically damaged.
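Why overwriting fails on SSDs can be seen in a toy model of the logical-to-physical block mapping: an “overwrite” of a logical block actually writes to a fresh physical block, leaving the old data intact until garbage collection. A deliberately simplified sketch (real SSD firmware is far more complex):

```python
class ToySSD:
    """Toy SSD: logical blocks are remapped on every write, so an
    'overwrite' leaves the old data in an unallocated physical block."""
    def __init__(self, physical_blocks: int):
        self.physical = [None] * physical_blocks
        self.mapping = {}         # logical block -> physical block
        self.next_free = 0

    def write(self, logical: int, data: str):
        self.physical[self.next_free] = data    # write to a fresh block
        self.mapping[logical] = self.next_free  # remap the logical block
        self.next_free += 1                     # old block is merely unallocated

ssd = ToySSD(physical_blocks=8)
ssd.write(0, "SECRET")
ssd.write(0, "000000")               # attempted overwrite of logical block 0
print(ssd.physical[ssd.mapping[0]])  # "000000": the logical view looks clean
print(ssd.physical[0])               # "SECRET": remanence in the old physical block
```

The overwriting tool sees only the logical view and believes it succeeded, which is why ATA Secure Erase (which erases all physical blocks and discards the mapping) or physical destruction are the valid options.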

Data Destruction

All forms of media should be securely cleaned or destroyed before disposal to prevent object reuse, which is the act of recovering information from previously-used objects, such as computer files. Objects may be physical (such as paper files in manila folders) or electronic (data on a hard drive).
Object reuse attacks range from nontechnical attacks such as dumpster diving (searching for information by rummaging through unsecured trash) to technical attacks such as recovering information from unallocated blocks on a disk drive. Dumpster diving was first popularized in the 1960s by “phone phreaks” (in “hacker speak” a phreak is a hacker who hacks the phone system). An early famous dumpster diver was Jerry Schneider, who scavenged parts and documents from Pacific Telephone and Telegraph’s dumpsters. Schneider was so familiar with the phone company’s practices that he was able to leverage dumpster diving and social engineering attacks to order and receive telephone equipment without paying. He was later arrested for this crime in 1972. Read more about Jerry’s attacks at http://www.bookrags.com/research/jerry-schneider-omc/.
All cleaning and destruction actions should follow a formal policy, and all such activity should be documented, including the serial numbers of any hard disks, type of data they contained, date of cleaning or destruction, and personnel performing these actions.

Overwriting

Simply “deleting” a file removes the entry from the File Allocation Table (FAT) and marks the data blocks as “unallocated.” Reformatting a disk destroys the old FAT and replaces it with a new one. In both cases, data itself usually remains and can be recovered through the use of forensic tools. This issue is called data remanence (there are “remnants” of data left behind).
Overwriting writes over every character of a file or entire disk drive and is far more secure than deleting or formatting a disk drive. Common methods include writing all zeroes or writing random characters. Electronic “shredding” or “wiping” overwrites the file’s data before removing the FAT entry.
Many tools perform multiple rounds of overwrites of the same data, though the usefulness of the additional passes is questionable: no commercial tools are currently known that can recover data overwritten with a single pass.
One limitation of overwriting is that you cannot tell whether a drive has been securely overwritten simply by looking at it, so errors made during overwriting can lead to data exposure. It may also be impossible to overwrite damaged media.
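Electronic shredding as described above can be sketched in a few lines: overwrite the file’s bytes in place, then delete it. This is an illustration of the concept, not a vetted sanitization tool; as discussed in the SSD section, in-place overwrites do not reliably reach the underlying blocks on SSDs or on journaling and copy-on-write file systems.

```python
import os

def shred(path: str, passes: int = 1):
    """Overwrite a file's contents with zeroes before deleting it.
    Illustrative only: does NOT reliably sanitize SSDs or files on
    journaling/copy-on-write file systems."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)   # overwrite every byte
            f.flush()
            os.fsync(f.fileno())      # push the overwrite to the device
    os.remove(path)                   # then remove the directory entry

with open("doomed.txt", "w") as f:
    f.write("sensitive data")
shred("doomed.txt")
print(os.path.exists("doomed.txt"))  # False
```

The overwrite happens before the deletion: deleting first would only remove the directory entry, leaving the data blocks marked unallocated but intact.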

Note

For many years, security professionals and other technologists accepted that data could theoretically be recovered even after it had been overwritten. Though the suggested means of recovery involved both a clean room and an electron microscope, which is likely beyond the means of most would-be attackers, organizations typically employed what has been referred to as the DoD (Department of Defense) short method, the DoD standard method, or the Gutmann approach [8] to wiping, which involved 3, 7, or 35 successive passes, respectively. For (undamaged) magnetic media, a single successful pass is now commonly considered acceptable in industry to render data unrecoverable. This has saved organizations many hours that were wasted on unnecessary repeat wipes.

Degaussing

Degaussing destroys the integrity of magnetic media such as tapes and disk drives by exposing them to a strong magnetic field, destroying both the media and the data it contains. The damage is typically so severe that a degaussed disk drive usually can no longer be formatted.

Destruction

Destruction physically destroys the integrity of media by damaging or destroying the media itself, such as the platters of a disk drive. Destructive measures include incineration, pulverizing, shredding, and bathing metal components in acid.
Destruction of objects is more secure than overwriting. It may not be possible to overwrite damaged media (though its data may still be recoverable). As previously discussed, data on media such as Solid State Drives cannot be reliably removed via overwriting. Also, some media, such as WORM (Write Once Read Many) drives and CD-Rs (Compact Disc–Recordable), can only be written once and cannot be subsequently overwritten. Highly sensitive data should be degaussed or destroyed (perhaps in addition to overwriting). Destruction enhances defense-in-depth, allowing confirmation of data destruction via physical inspection.

Shredding

A simple form of media sanitization is shredding, a type of physical destruction. Though this term is sometimes used in relation to overwriting of data, here shredding refers to the process of making data printed on hard copy, or on smaller objects such as floppy or optical disks, unrecoverable. Sensitive information such as printed information needs to be shredded prior to disposal in order to thwart a dumpster diving attack.
Paper shredders cut paper to prevent object reuse. Strip-cut shredders cut the paper into vertical strips. Cross-cut shredders are more secure than strip-cut, and cut both vertically and horizontally, creating small paper “confetti”. Given enough time and access to all of the shredded materials, attackers can recover shredded documents, though it is more difficult with cross-cut shredders.
Dumpster diving is a physical attack in which a person recovers trash in hopes of finding sensitive information that has been merely discarded in whole rather than being run through a shredder, incinerated, or otherwise destroyed. Figure 3.1 shows locked shred bins that contain material that is intended for shredding. The locks are intended to ensure that dumpster diving is not possible during the period prior to shredding.
Figure 3.1 Locked Shred Bins
Source: http://commons.wikimedia.org/wiki/File:Confidential_shred_bins.JPG Photograph by: © BrokenSphere / Wikimedia Commons. Image under permission of Creative Commons Attribution ShareAlike 3.0

Determining Data Security Controls

Determining which data security controls to employ is a critical skill. Baselines, standards, scoping, and tailoring are used to choose and customize the controls employed. Controls determination is also dictated by whether the data is at rest or in motion.

Certification and Accreditation

Let’s begin the discussion of standards by describing certification and accreditation. Certification means a system has been certified to meet the security requirements of the data owner. Certification considers the system, the security measures taken to protect the system, and the residual risk represented by the system. Accreditation is the data owner’s acceptance of the certification, and of the residual risk, which is required before the system is put into production.

Standards and Control Frameworks

A number of standards are available for determining security controls. Some, such as PCI-DSS (the Payment Card Industry Data Security Standard), are industry-specific (for example, vendors that process credit cards). Others, such as OCTAVE®, ISO 17799/27002, and COBIT, are more general.

PCI-DSS

The Payment Card Industry Data Security Standard (PCI-DSS) is a security standard created by the Payment Card Industry Security Standards Council (PCI-SSC). The council is composed of American Express, Discover, MasterCard, Visa, and others. PCI-DSS seeks to protect credit cards by requiring vendors that use them to take specific security precautions: “PCI-DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures. This comprehensive standard is intended to help organizations proactively protect customer account data.” [9]
The core principles of PCI-DSS (available at https://www.pcisecuritystandards.org/security_standards/index.php) are:
Build and Maintain a Secure Network and Systems
Protect Cardholder Data
Maintain a Vulnerability Management Program
Implement Strong Access Control Measures
Regularly Monitor and Test Networks
Maintain an Information Security Policy [10]

OCTAVE®

OCTAVE® stands for Operationally Critical Threat, Asset, and Vulnerability Evaluation℠, a risk management framework from Carnegie Mellon University. OCTAVE® describes a three-phase process for managing risk. Phase 1 identifies staff knowledge, assets, and threats. Phase 2 identifies vulnerabilities and evaluates safeguards. Phase 3 conducts the Risk Analysis and develops the risk mitigation strategy.
OCTAVE® is a high-quality free resource that may be downloaded from: http://www.cert.org/octave/

ISO 17799 and the ISO 27000 Series

ISO 17799 was a broad-based information security code of practice from the International Organization for Standardization (based in Geneva, Switzerland). The full title is “ISO/IEC 17799:2005 Information technology—Security Techniques—Code of Practice for Information Security Management.” ISO 17799:2005 signifies the 2005 version of the standard. It was based on BS (British Standard) 7799 Part 1.
ISO 17799 had 11 areas, focusing on specific information security controls:
1. Policy
2. Organization of information security
3. Asset management
4. Human resources security
5. Physical and environmental security
6. Communications and operations management
7. Access control
8. Information systems acquisition, development, and maintenance
9. Information security incident management
10. Business continuity management
11. Compliance [11]
ISO 17799 was renumbered to ISO 27002 in 2007, to make it consistent with the 27000 series of ISO security standards. ISO 27001 is a related standard, formally called “ISO/IEC 27001:2005 Information technology—Security techniques—Information Security Management Systems—Requirements.” ISO 27001 was based on BS 7799 Part 2.
Note that the title of ISO 27002 includes the word “techniques,” while the title of ISO 27001 includes the word “requirements.” Simply put, ISO 27002 describes information security best practices (techniques), and ISO 27001 describes a process for auditing those best practices (requirements).

COBIT

COBIT (Control Objectives for Information and related Technology) is a control framework for employing information security governance best practices within an organization. COBIT was developed by ISACA (Information Systems Audit and Control Association, see http://www.isaca.org).
According to ISACA, “the purpose of COBIT is to provide management and business process owners with an information technology (IT) governance model that helps in delivering value from IT and understanding and managing the risks associated with IT. COBIT helps bridge the gaps amongst business requirements, control needs and technical issues. It is a control model to meet the needs of IT governance and ensure the integrity of information and information systems.” [12]
COBIT has four domains: Plan and Organize, Acquire and Implement, Deliver and Support, and Monitor and Evaluate. There are 34 Information Technology processes across the four domains. More information about COBIT is available at: http://www.isaca.org/Knowledge-Center/COBIT/Pages/Overview.aspx. Version 4.1 was released in 2007; Version 5 was released in April 2012.

ITIL®

ITIL® (Information Technology Infrastructure Library) is a framework of best practices for IT Service Management (ITSM). More information about ITIL® is available at: http://www.itil-officialsite.com.
ITIL® contains five “Service Management Practices—Core Guidance” publications:
Service Strategy
Service Design
Service Transition
Service Operation
Continual Service Improvement
Service Strategy helps IT provide services. Service Design details the infrastructure and architecture required to deliver IT services. Service Transition describes taking new projects and making them operational. Service Operation covers IT operations controls. Finally, Continual Service Improvement describes ways to improve existing IT services.

Scoping and Tailoring

Scoping is the process of determining which portions of a standard will be employed by an organization. For example, an organization that does not employ wireless equipment may declare that the wireless provisions of a standard are out of scope and therefore do not apply.
Tailoring is the process of customizing a standard for an organization. It begins with controls selection, continues with scoping, and finishes with the application of compensating controls. NIST Special Publication 800-53 (Security and Privacy Controls for Federal Information Systems and Organizations) describes the tailoring process:
“Identifying and designating common controls in initial security control baselines;
Applying scoping considerations to the remaining baseline security controls;
Selecting compensating security controls, if needed;
Assigning specific values to organization-defined security control parameters via explicit assignment and selection statements;
Supplementing baselines with additional security controls and control enhancements, if needed; and
Providing additional specification information for control implementation, if needed.” [13]
The “parameters” mentioned include items such as password complexity policies.
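The tailoring steps above can be sketched in code. This is a hypothetical illustration only: the control identifiers, baseline contents, and parameter values below are invented for the example and do not come from NIST SP 800-53.

```python
# Hypothetical tailoring sketch: start from a baseline, scope out
# inapplicable controls, assign organization-defined parameters, and
# supplement with a compensating control. All control IDs are invented.

baseline = {
    "AC-7":  {"name": "Unsuccessful Logon Attempts", "params": {}},
    "AC-18": {"name": "Wireless Access", "params": {}},
    "IA-5":  {"name": "Authenticator Management", "params": {}},
}

# Scoping: the organization deploys no wireless equipment, so
# wireless controls are declared out of scope.
out_of_scope = {"AC-18"}
tailored = {cid: c for cid, c in baseline.items() if cid not in out_of_scope}

# Assign organization-defined parameter values (e.g., password policy).
tailored["IA-5"]["params"] = {"min_length": 14, "complexity": True}

# Supplement the baseline with a compensating control.
tailored["CMP-1"] = {"name": "Compensating: application allow-listing",
                     "params": {}}

print(sorted(tailored))
```

The resulting `tailored` set is what the organization actually implements and audits against, rather than the raw baseline.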

Protecting Data in Motion and Data at Rest

Data at rest is stored data, such as data residing on a disk or in a file. Data in motion is data that is being transferred across a network. Each form of data requires different controls for protection, which we will discuss next.

Drive and Tape Encryption

Drive and tape encryption protect data at rest, and are one of the few controls that will protect data after physical security has been breached. These controls are recommended for all mobile devices and media containing sensitive information that may physically leave a site or security zone. Encryption may also be used for static systems that are not typically moved (such as file servers).
Whole-disk encryption of mobile device hard drives is recommended. Partially encrypted solutions, such as encrypted file folders or partitions, often risk exposing sensitive data stored in temporary files, unallocated space, swap space, etc.
Disk encryption/decryption may occur in software or hardware. Software-based solutions may tax the computer’s performance, while hardware-based solutions offload the cryptographic work onto another CPU, such as the hardware disk controller.
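Software-based disk encryption typically derives the data-encryption key from a user passphrase using a key-derivation function, so that the key never needs to be stored in the clear. The following is a minimal sketch using PBKDF2 from the Python standard library; the iteration count and salt size are illustrative assumptions, and real products (such as BitLocker or LUKS) use their own vetted schemes and parameters.

```python
import hashlib
import secrets

# Illustrative sketch only: derive a 256-bit disk-encryption key from a
# passphrase via PBKDF2-HMAC-SHA256. Parameters here are assumptions.

def derive_disk_key(passphrase: str, salt: bytes,
                    iterations: int = 600_000) -> bytes:
    """Return a 32-byte (256-bit) key derived from the passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations)

salt = secrets.token_bytes(16)   # random per-disk salt, stored with the disk
key = derive_disk_key("correct horse battery staple", salt)
print(len(key))  # 32 bytes = 256 bits
```

Because the same passphrase and salt always yield the same key, the drive can be unlocked at boot without the key ever being written to disk.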
Many breach notification laws concerning Personally Identifiable Information (PII) contain exclusions for lost data that is encrypted. An example is the 2009 update to the U.S. Health Insurance Portability and Accountability Act (HIPAA) concerning breaches of electronic Protected Healthcare Information (ePHI).
Breach of unencrypted ePHI requires notification to the affected individuals; breaches of more than 500 individuals’ data require additional notification to the press and the U.S. Department of Health and Human Services. Encrypted data is excluded from these rules: “secure health information as specified by the guidance through encryption or destruction are relieved from having to notify in the event of a breach of such information.” [5]

Exam Warning

Note that while HIPAA is in the Common Body of Knowledge (CBK), these specific details are not. This point is raised to highlight the criticality of encrypting PII on mobile devices, regardless of industry.

Media Storage and Transportation

All sensitive backup data should be stored offsite, whether transmitted offsite via networks, or physically moved as backup media. Sites using backup media should follow strict procedures for rotating media offsite.
Always use a bonded and insured company for offsite media storage. The company should employ secure vehicles and store media at a secure site. Ensure that the storage site is unlikely to be impacted by the same disaster that may strike the primary site, such as a flood, earthquake, or fire. Never use informal practices, such as storing backup media at employees’ houses.

Learn By Example

Offsite Backup Storage

The importance of strong policy and procedures regarding offsite backup media storage is illustrated by the massive loss of PII by the State of Ohio in June 2007. The breach was initially announced as affecting 64,000 State of Ohio employees; it was later discovered that over 800,000 records (most were not state employees) were lost.
Ohio’s electronic data standards required offsite storage of one set of backup tapes. The Ohio Administrative Knowledge System met the standard via an informal arrangement: an employee would take a set of tapes and store them at home. This ill-advised practice had been in use for over two years when an intern’s car was broken into and the tapes were stolen, resulting in the loss of PII. See http://www.technewsworld.com/story/57968.html for more information.
While offsite storage of backup data is recommended, always use a professional bonded service. Encrypting backup data adds an extra layer of protection.

Protecting Data in Motion

Data in motion is best protected via standards-based end-to-end encryption, such as an IPsec VPN. This includes data sent over untrusted networks such as the Internet, but VPNs may also be used as an additional defense-in-depth measure on internal networks such as a private corporate WAN, or private circuits such as T1s leased from a service provider. We will discuss VPNs and various types of circuits in more detail in Chapter 5, Domain 4: Communications and Network Security.
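TLS is another widely used standards-based control for data in motion, operating at the transport layer where IPsec operates at the network layer. As a hedged illustration, the sketch below configures a TLS client context with Python's standard library; it shows configuration only and does not open a network connection.

```python
import ssl

# Sketch: a hardened TLS client context using only the standard library.
# create_default_context() enables certificate verification and hostname
# checking by default; we additionally refuse legacy protocol versions.

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSLv3/TLS 1.0/1.1

print(context.verify_mode == ssl.CERT_REQUIRED)  # server cert is verified
print(context.check_hostname)                    # hostname must match cert
```

A socket wrapped with this context (via `context.wrap_socket()`) encrypts all application data in transit, protecting it from eavesdropping on untrusted networks.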

Summary of Exam Objectives

In this domain we discussed the concept of data classification, in use for millennia. We discussed the roles required to protect data, including business or mission owners, data owners, system owners, custodians and users.
An understanding of the remanence properties of volatile and nonvolatile memory and storage media is a critical security concept to master. We discussed RAM, ROM, types of PROMs, flash memory, and Solid State Drives (SSDs), including remanence properties and secure destruction methods. Finally, we discussed well-known standards, including PCI-DSS and the ISO 27000 series, as well as standards processes including scoping and tailoring.

Self Test

Note

Please see the Self Test Appendix for explanations of all correct and incorrect answers.
1. What type of memory is used often for CPU registers?
A. DRAM
B. Firmware
C. ROM
D. SRAM
2. What type of firmware is erased via ultraviolet light?
A. EPROM
B. EEPROM
C. Flash memory
D. PROM
3. What describes the process of determining which portions of a standard will be employed by an organization?
A. Baselines
B. Policies
C. Scoping
D. Tailoring
4. What nonvolatile memory normally stores the operating system kernel on an IBM PC-compatible system?
A. Disk
B. Firmware
C. RAM
D. ROM
5. What was ISO 17799 renamed as?
A. BS 7799-1
B. ISO 27000
C. ISO 27001
D. ISO 27002
6. Which of the following describes a duty of the Data Owner?
A. Patch systems
B. Report suspicious activity
C. Ensure their files are backed up
D. Ensure data has proper security labels
7. Which control framework has 34 processes across four domains?
A. COSO
B. COBIT
C. ITIL®
D. OCTAVE®
8. Which phase of OCTAVE® identifies vulnerabilities and evaluates safeguards?
A. Phase 1
B. Phase 2
C. Phase 3
D. Phase 4
9. Which of the following is the best method for securely removing data from a Solid State Drive that is not physically damaged?
A. ATA secure erase
B. Bit-level overwrite
C. Degaussing
D. File shredding
10. The release of what type of classified data could lead to “exceptionally grave damage to the national security”?
A. Confidential
B. Secret
C. Sensitive but Unclassified (SBU)
D. Top Secret
11. A company outsources payroll services to a 3rd party company. Which of the following roles most likely applies to the 3rd party payroll company?
A. Data controller
B. Data handler
C. Data owner
D. Data processor
12. Which managerial role is responsible for the actual computers that house data, including the security of hardware and software configurations?
A. Custodian
B. Data owner
C. Mission owner
D. System owner
13. What method destroys the integrity of magnetic media such as tapes or disk drives by exposing them to a strong magnetic field, destroying the integrity of the media and the data it contains?
A. Bit-level overwrite
B. Degaussing
C. Destruction
D. Shredding
14. What type of relatively expensive and fast memory uses small latches called “flip-flops” to store bits?
A. DRAM
B. EPROM
C. SRAM
D. SSD
15. What type of memory stores bits in small capacitors (like small batteries)?
A. DRAM
B. EPROM
C. SRAM
D. SSD

Self Test Quick Answer Key

1. D
2. A
3. C
4. A
5. D
6. D
7. B
8. B
9. A
10. D
11. D
12. D
13. B
14. C
15. A