Policies and Procedures
This chapter covers the following subjects:
Legislative and Organizational Policies: In this section, you learn about ways to classify data, laws that protect individual privacy, personnel security policies and how to implement them, service-level agreements, and the safe disposal of computers.
Incident Response Procedures: Here we discuss the processes and procedures involved in computer security incident management. Proper planning for incident response is key, as is the ability to document the lessons learned.
IT Security Frameworks: This short section gets into the basics of security frameworks and how they can help to organize your IT processes and procedures.
This is the last chapter of actual objective content for the Security+ exam, but you will no doubt see several questions on the exam about these topics. This chapter is all about procedures and policies. Conceptually, this chapter is a bit more “high level.” It deals with larger concepts concerning the organization and its employees. Like the last chapter, it is less tech-oriented, and more people-oriented. When going through this chapter, try to keep an open mind as to the different roles a security person might be placed in. Imagine branching out beyond computers, servers, and networks and developing security for the entire organization and its personnel.
Some smaller companies don’t have much in the way of policies. Arguably, that is why a percentage of them fail. You will see many companies of all sizes create their own policies or embrace ones that other organizations are using, or perhaps apply for a company-wide standards certification from an organization such as the International Organization for Standardization (ISO). Some organizations are bound by legislative policy and organized protocols. In general, policies are designed to protect employees and make the organization more productive and efficient.
It’s important to distinguish between policies and procedures. A policy is something that an individual employee, or entire organization, should adhere to, but is usually expressed in broad terms. A procedure is usually much more specific. Although it is often stated in detail, it can potentially be interpreted more loosely. Standard operating procedures used by corporations, government, and the military are usually pretty tight. But other procedures might be a bit more relaxed, and as long as the employee gets to the final goal efficiently, procedures can often be overlooked to a certain degree. However, incident response procedures—once developed by an organization—are usually followed to the letter. Otherwise there can be legal repercussions. Keep in mind that a procedure could be a part of an overall policy.
To help organize the many procedures and policies, we need a plan. An IT security framework is just that—it’s like the blueprint for your organization’s security goals. It defines, organizes, and interconnects the organization’s many policies and procedures.
The concepts in this chapter are meant to oversee everything else in the book from a more managed perspective. By using a well-planned IT security framework (or frameworks), our procedures and policies, and technology in general, all start to flow together.
There are myriad laws and legislative policies. For the Security+ exam, we are concerned only with a few that affect, and protect, the privacy of individuals. In this section, we cover those and some associated security standards.
More important for the Security+ exam are organizational policies. Organizations usually define policies that concern how data is classified, expected employee behavior, and how to dispose of IT equipment that is no longer needed. These policies begin with a statement or goal that is usually short, to the point, and open-ended. They are normally written in clear language that can be understood by most everyone. They are followed by procedures (or guidelines) that detail how the policy will be implemented.
Table 18-1 shows an example of a basic policy and corresponding procedure.
Policy: Employees will identify themselves in a minimum of two ways when entering the complex.

Procedure:
1. When employees enter the complex, they will first enter a guard room. This will begin the authentication process.
2. In the guard room, they must prove their identification in two ways:
By showing their ID badge to the on-duty guard.
By being visible to the guard so that the guard can compare their likeness to the ID badge’s photo. The head of the employee should not be obstructed by hats, sunglasses, and so on. In essence, the employee should look similar to the ID photo. If the employee’s appearance changes for any reason, that person should contact human resources for a new ID badge.
* If guards cannot identify the “employee,” they will contact the employee’s supervisor, human resources, or security in an attempt to confirm the person’s identity. If the employee is not confirmed, they will be escorted out of the building by security.
3. After the guard has acknowledged the identification, employees will swipe their ID badge against the door scanner to complete the authentication process and gain access to the complex.
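The procedure above can be sketched as a simple two-factor check. This is only an illustration of the logic; the class and function names are hypothetical, and real badge systems use vendor-specific hardware and APIs.

```python
from dataclasses import dataclass

@dataclass
class BadgeCheck:
    badge_id: str
    photo_matches: bool   # result of the guard's visual comparison

def authenticate_entry(check: BadgeCheck, registered_badges: set) -> str:
    # Factor 1: the ID badge must be a valid, registered badge.
    if check.badge_id not in registered_badges:
        return "refer to supervisor/HR/security"
    # Factor 2: the guard must confirm the person's likeness to the photo.
    if not check.photo_matches:
        return "refer to supervisor/HR/security"
    # Both factors confirmed: the badge swipe completes authentication.
    return "access granted"
```

Notice that both factors must pass independently before access is granted, mirroring steps 2 and 3 of the procedure.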
Keep in mind that this is just a basic example; technical documentation specialists will tailor the wording to fit the feel of the organization. Plus, the procedure will be different depending on the size and resources of the organization and the type of authentication scheme used, which could be more or less complex. However, the policy (which is fairly common) is written in such a way as to be open-ended, allowing for the procedure to change over time. We talk about many different policies as they relate to the Security+ exam in this section.
Sensitive data is information that can result in a loss of security, or loss of advantage to a company, if accessed by unauthorized persons. Often, information is broken down into two groups: classified (which requires some level of security clearance) and nonclassified.
ISO/IEC 27002:2013 (which revises the older ISO/IEC 27002:2005) is a security standard that among other things can aid companies in classifying their data. Although you don’t need to know the contents of that document for the Security+ exam, you should have a basic idea of how to classify information. For example, classification of data can be broken down, as shown in Table 18-2.
Public information: Information available to anyone. Also referred to as unclassified or nonclassified.

Internal information: Used internally by a company, but if it becomes public, no critical consequences result. This, and the next three levels, is known as private information. It might also be classified as proprietary information.

Confidential information: Information that can cause financial and operational loss to the company.

Secret information: Data that should never become public and is critical to the company.

Top secret information: The highest sensitivity of data; few people should have access, and security clearance may be necessary. Information is broken into sections on a need-to-know basis.
In this example, loss of public and internal information probably won’t affect the company very much. However, unauthorized access, misuse, modification, or loss of confidential, secret, or top secret data can affect users’ privacy, trade secrets, financials, and the general security of the company. By setting data roles such as owner, custodian, and privacy officer, and by classifying data and enforcing policies that govern who has access to what information, a company can limit its exposure to security threats. Different organizations classify data in various ways, but usually along lines similar to Table 18-2. For example, you might also see high, medium, and low classifications. Or, for instance, Red Hat Linux uses the Top Secret, Secret, and Confidential classifications (just as in Table 18-2) but considers everything else simply unclassified. All of these interpretations of data classification are implementations of mandatory access control (MAC), discussed in Chapter 11, “Access Control Methods and Models.” The incorporation of these classifications is a key element in the multilevel security of Trusted Operating Systems (TOSs) such as Red Hat, OS X 10.6 and higher, and HP-UX, which utilize multilevel security concepts to meet government requirements.
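The core MAC rule can be expressed in a few lines: a subject may read data only if its clearance level is at least the data’s classification level. This is a minimal sketch using the levels from Table 18-2; the numeric ordering and function name are illustrative assumptions, not from any particular operating system.

```python
# Classification levels from Table 18-2, ordered from least to most sensitive.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3, "top secret": 4}

def can_read(subject_clearance: str, object_label: str) -> bool:
    # In MAC, a subject may read an object only if the subject's
    # clearance dominates (is at least) the object's classification.
    return LEVELS[subject_clearance] >= LEVELS[object_label]
```

For example, a user with secret clearance can read confidential data, but a user with internal clearance cannot read secret data.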
Regardless of the format used, if data exists that is considered to be secret or confidential, then the security admin should strongly consider using data-handling electronics (DHE) devices. Data handling is the process of ensuring that research data is stored, archived, or disposed of in a safe and secure manner during and after the conclusion of a research project.
Moving beyond government classification requirements, many companies need to be in compliance with specific laws when it comes to the disclosure of information. In the United States there are a few acts you should know about, as shown in Table 18-3. In addition, there are several bills in process that could be passed in the near future regarding data breach notification.
Privacy Act of 1974: Establishes a code of fair information practice. Governs the collection, use, and dissemination of personally identifiable information in records maintained by federal agencies.

Sarbanes-Oxley Act (SOX): Governs the disclosure of financial and accounting information. Enacted in 2002.

Health Insurance Portability and Accountability Act (HIPAA): Governs the disclosure and protection of health information. Enacted in 1996.

Gramm-Leach-Bliley Act (GLBA): Enables commercial banks, investment banks, securities firms, and insurance companies to consolidate. Protects against pretexting: individuals need proper authority to gain access to nonpublic information such as Social Security numbers. Enacted in 1999.

Help America Vote Act of 2002: Main goal was to replace punchcard and lever-based voting systems. Governs the security, confidentiality, and integrity of personal information collected, stored, or otherwise used by various electronic and computer-based voting systems.

California SB 1386: Requires California businesses that store computerized personal information to immediately disclose breaches of security. Enacted in 2003.
Many computer technicians have to deal with SOX and HIPAA at some point in their careers, and although these types of acts create a lot of paperwork and protocol, the expected result is that, in the long run, they will help companies protect their data and keep sensitive information private.
SOX sparked another concept known as governance, risk, and compliance (GRC), which deals with the continuous security monitoring of: overall management of information systems and control structures; risk management processes; and compliance with stated requirements, be they government related or otherwise.
Most organizations have policies governing employees. The breadth and scope of these policies vary from organization to organization. For example, a small company might have a few pages defining how employees should behave (a code of ethics) and what to do in an emergency. Larger organizations might go so far as to certify to a particular standard such as ISO 9001:2015 (which supersedes ISO 9001:2008). This means that the organization will comply with a set of quality standards that is all-encompassing, covering all facets of the business. An organization would have to be examined and finally accredited by an accrediting certification body to state that it is ISO certified. This is a rigorous process and is not for the average organization. For many companies, this would create too much documentation and would bog the company down in details and minutiae.
We as IT people are more interested in policies that deal with the security of the infrastructure and its employees. As a security administrator, you might deal with procedural documentation specialists, technical documentation specialists, and even outside consultants. You should become familiar with policies and as many procedures as possible, focusing on policies that take security into account, but remember that actual work must take precedence!
Let’s define a few types of policies that are common to organizations. We focus on the security aspect of these policies.
Acceptable use policies (AUPs) define the rules that restrict how a computer, network, or other system may be used. They state what users are, and are not, allowed to do when it comes to the technology infrastructure of an organization. Often, an AUP must be signed by employees before they begin working on any systems. This protects the organization, but it also defines to employees exactly what they should, and should not, be working on. If a director asks an employee to repair a system that is outside the AUP parameters, the employee would know to refuse. If employees are found working on a system that is outside the scope of their work, and they signed an AUP, it can be grounds for termination. As part of an AUP, employees enter into an agreement acknowledging that they understand the unauthorized sharing of data is prohibited. Employees should also understand that they are not to take any information or equipment home without express permission from the parties listed in the policy. This can sometimes conflict with a BYOD policy, where users are permitted to bring their own devices into work and use them for work purposes. At that point, strong policies for data ownership need to be developed, identifying what portion of the data on a mobile device is owned by the organization and what portion is owned by the employee. Any organizational data on a mobile device should be backed up.
Change management is a structured way of changing the state of a computer system, network, or IT procedure. The idea behind this is that change is necessary, but that an organization should adapt with change, and be knowledgeable of it. Any change that a person wants to make must be introduced to each of the heads of various departments that it might affect. They must approve the change before it goes into effect. Before this happens, department managers will most likely make recommendations and/or give stipulations. When the necessary people have signed off on the change, it should be tested and then implemented. During implementation, it should be monitored and documented carefully.
Because there are so many interrelated parts and people in an IT infrastructure, it is sometimes difficult for the left hand to know what the right hand is doing, or has done in the past. For example, after a network analysis, a network engineer might think that an unused interface on a firewall doesn’t necessarily need to exist anymore. But does he know this for sure? Who installed and configured the interface? When was it enabled? Was it ever used? Perhaps it is used only rarely by special customers making a connection to a DMZ; perhaps it is used with a honeynet; or maybe it is for future use or for testing purposes. It would be negligent for the network engineer to simply modify the firewall without at least asking around to find out whether the interface is necessary. More likely, there will be forms involved that require the network engineer to state the reason for change and have it signed by several other people before making the change. In general, this will slow down progress, but in the long run it will help to cover the network engineer. People were warned, and as long as the correct people involved have signed off on the procedure or technical change, the network engineer shouldn’t have to worry. In a larger organization that complies with various certifications such as ISO 9001:2015, it can be a complex task. IT people should have charts of personnel and department heads. There should also be current procedures in place that show who needs to be contacted in the case of a proposed change.
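The sign-off step in this change-management flow can be sketched in a few lines: a change proceeds only when every affected department head has approved it. This is a hedged illustration; the function names and department names are hypothetical, not from any real change-management product.

```python
def missing_signoffs(required_approvers: set, signatures: set) -> set:
    # Department heads who still need to approve the change.
    return required_approvers - signatures

def process_change(required_approvers: set, signatures: set) -> str:
    missing = missing_signoffs(required_approvers, signatures)
    if missing:
        return "blocked: awaiting " + ", ".join(sorted(missing))
    # After full sign-off: test, implement, then monitor and document.
    return "approved: proceed to testing and implementation"
```

For example, if the network and security department heads must both approve a firewall change and only one has signed, the change stays blocked.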
Separation of duties is when more than one person is required to complete a particular task or operation. This distributes control over a system, infrastructure, or particular task. Job rotation is one of the checks and balances that might be employed to enforce the proper separation of duties. It is when two or more employees switch roles at regular intervals. It is used to increase user insight and skill level, and to decrease the risk of fraud and other illegal activities. Both of these policies are enforced to increase the security of an organization by limiting the amount of control a person has over a situation and by increasing employees’ knowledge of what other employees are doing. For more information on these and similar concepts, see Chapter 11.
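A common way to enforce separation of duties in software is a dual-control (“two-person”) check: a sensitive operation requires a requester and a separate approver. This is an illustrative sketch under that assumption; the function name is hypothetical.

```python
def dual_control_ok(requester: str, approver: str) -> bool:
    # The approver must be a different person than the requester,
    # so no single individual controls the entire operation.
    return bool(requester) and bool(approver) and requester != approver
```

A payment system, for instance, might call such a check before releasing funds, rejecting any transaction where one person both initiated and approved it.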
Some organizations require employees to take X number of consecutive days of vacation over the course of a year as part of their annual leave. For example, a company might require an IT director to take five consecutive days’ vacation at least once per year to force another person into his role for that time period. Although a company might state that this helps the person to rest and focus on his job, and incorporate job rotation, the underlying security concept is that mandatory vacations can help to stop any possible malicious activity that might occur such as fraud, sabotage, embezzlement, and so on. Because IT people are smart, and often access the network remotely in a somewhat unobserved fashion, auditing becomes very important.
Onboarding is when a new employee is added to an organization, and to its identity and access management system. It incorporates training, formal meetings, lectures, and human resources employee handbooks and videos. It can also be implemented when a person changes roles within an organization. It is known as a socialization technique used to ultimately provide better job performance and higher job satisfaction. Onboarding is associated with federated identity management discussed in Chapter 11. It is also sometimes connected to an employee’s role in the company, and therefore role-based access control (RBAC).
Offboarding is the converse, and correlates to procedurally removing an employee from a federated identity management system, restricting rights and permissions, and possibly debriefing the person or conducting an exit interview. This happens when a person changes roles within an organization, or departs the organization altogether.
An organization will commonly work with business partners, but no business relationship lasts forever, and new ones are often developed. So, onboarding and offboarding can apply to business partners as well. The main concerns are access to data. In Chapter 6, “Network Design Elements,” we discussed extranets and the community cloud, which are both commonly used technologies with business partners. These technologies allow an organization to carefully select which data the business partner has access to. As relationships with business partners are severed, a systematic audit of all shared data should be made, including the various types of connectivity, permissions, policies, and even physical access to data.
When it comes to information security, due diligence is ensuring that IT infrastructure risks are known and managed. An organization needs to spend time assessing risk and vulnerabilities and might state in a policy how it will give due diligence to certain areas of its infrastructure. It can help to study the history of the company, particularly the failures, errors, and user issues that have been documented—essentially, lessons learned.
Due care is the mitigation action that an organization takes to defend against the risks that have been uncovered during due diligence.
Due process is the principle that an organization must respect and safeguard personnel’s rights. This is to protect the employee from the state and from frivolous lawsuits.
With so many possible organizational policies, employees need to be trained to at least get a basic understanding of them. Certain departments of an organization require more training than others. For example, Human Resources personnel need to understand many facets of the business and their corresponding policies, especially policies that affect personnel. HR people should be thoroughly trained in guidelines and enforcement. Sometimes the HR people train management and other employees on the various policies that those trainees are expected to enforce. In other cases, the trainer would be an executive assistant or outside consultant.
Security awareness training is an ongoing process. Different organizations have varying types of security awareness training, and employees with different roles in the organization receive different types of training. This type of training is often coupled with the signing of a user agreement. The user, when signing this, accepts and acknowledges specific rules of conduct, rules of behavior, and possibly the nondisclosure of any training (known as a nondisclosure agreement, or NDA).
Historically, the CompTIA exams themselves require that you sign an NDA. This means that you agree not to share any of the contents of the exam with anyone else.
All employees should be trained on personally identifiable information (PII). This is information used to uniquely identify, contact, or locate a person. This type of information could be a name, birthday, Social Security number, biometric information, and so on. Employees should know what identifies them to the organization and how to keep that information secret and safe from outsiders. Another key element of user education is the dissemination of the password policy. They should understand that passwords should be complex, and know the complexity requirements. They should also understand never to give out their password or ask for another person’s password to any resource.
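A password-complexity policy like the one described above is easy to express in code. The specific requirements here (ten or more characters, mixed case, a digit, and a symbol) are example values only; real policies vary by organization.

```python
import re

def meets_complexity(password: str, min_length: int = 10) -> bool:
    checks = [
        len(password) >= min_length,
        re.search(r"[a-z]", password),         # at least one lowercase letter
        re.search(r"[A-Z]", password),         # at least one uppercase letter
        re.search(r"[0-9]", password),         # at least one digit
        re.search(r"[^a-zA-Z0-9]", password),  # at least one symbol
    ]
    return all(bool(c) for c in checks)
```

A user-facing system would typically also report which requirement failed, so employees can learn the policy rather than guess at it.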
IT personnel should be trained on what to do in the case of account changes—for example, temporarily disabling the account of employees when they take a leave of absence or disabling the account (or deleting it, less common) of an employee who has been terminated. All IT personnel should be fluent in the organization’s password policy, lockout policy, and other user-related policies so that they can explain them to any other employees.
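The account-change cases above (disable, don’t delete, and keep an audit trail) can be sketched as follows. Real directory services such as LDAP or Active Directory provide their own APIs for this; the Account class here is only an illustrative stand-in.

```python
from datetime import date

class Account:
    def __init__(self, username: str):
        self.username = username
        self.enabled = True
        self.audit_log = []

    def disable(self, reason: str) -> None:
        # Disable rather than delete, so the account (and its history)
        # can be restored or reviewed later.
        self.enabled = False
        self.audit_log.append(f"{date.today().isoformat()}: disabled ({reason})")

acct = Account("jsmith")
acct.disable("leave of absence")   # temporarily disabled; re-enable on return
```

The audit entry matters as much as the flag: it records who was disabled, when, and why, which supports the user-related policies IT personnel must be able to explain.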
Some users might need to take additional privacy training, HIPAA training, or other types of security awareness training depending on the type of organization they work for. This user training might take the form of role-based training, where the instructors and trainees act out the roles they might play, such as network administrator, security analyst, and so on. Instructors will often devise their training to take advantage of learning management systems and training metrics so that they can gauge the effectiveness of the training, validate compliance with policies, and analyze the security posture of the trainees in general.
Table 18-4 breaks down and summarizes the various policy types mentioned in this section.
Acceptable use policy (AUP): Policy that defines the rules that restrict how a computer, network, or other system may be used.

Change management: A structured way of changing the state of a computer system, network, or IT procedure.

Separation of duties: When more than one person is required to complete a task.

Job rotation: When a particular task is rotated among a group of employees.

Mandatory vacations: When an organization requires employees to take X number of consecutive days’ vacation over the course of a year as part of their annual leave.

Onboarding: When a new employee is added to an organization, and to its identity and access management system. It is associated with user training, federated identity management, and RBAC. Offboarding correlates to removing an employee from a federated identity management system.

Due diligence: Ensuring that IT infrastructure risks are known and managed.

Due care: The mitigation action that an organization takes to defend against the risks that have been uncovered during due diligence.

Due process: The principle that an organization must respect and safeguard personnel’s rights.
Before we begin, I should mention that the following information is not intended as legal advice. Before signing any contracts, an organization should strongly consider consulting with an attorney.
An organization often has in-depth policies concerning vendors. I can’t tell you how many times I’ve seen issues occur because the level of agreement between the organization and the vendor was not clearly defined. A proper service-level agreement (SLA) that is analyzed by the organization carefully before signing can be helpful. A basic service contract is usually not enough; a service contract with an SLA will have a section within it that formally and clearly defines exactly what a vendor is responsible for and what the organization is responsible for—a demarcation point so to speak. It might also define performance expectations and what the vendor will do if a failure of service occurs, timeframes for repair, backup plans, and so on. To benefit the organization, these will usually be legally binding and not informal. Due to this, it would benefit the organization to scrutinize the SLA before signing, and an organization’s attorney should be involved in that process.
For instance, a company might use an ISP for its T3 connection. The customer will want to know what kind of fault-tolerant methods are on hand at the ISP and what kind of uptime they should expect, which should be monitored by a network admin. The SLA might have some sort of guarantee of measurable service that can be clearly defined; perhaps a minimum level of service and a target level of service. Before signing an SLA such as this, it is recommended that an attorney, the IT director, and other organizational management review the document carefully and make sure that it covers all the points required by the organization.
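When reviewing an SLA’s uptime guarantee, it helps to translate the percentage into the downtime it actually permits. The 99.9 percent figure below is only an example.

```python
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes

def allowed_downtime_minutes(uptime_percent: float) -> float:
    # Downtime the SLA still permits over a full year.
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)
```

A “three nines” (99.9 percent) guarantee still permits roughly 525.6 minutes, almost nine hours, of downtime per year, which is exactly the kind of detail an organization should weigh before signing.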
On a separate note: A business partners agreement (BPA) is a type of contract that can establish the profits each partner will get, what responsibilities each partner will have, and exit strategies for partners.
You will also see BPA stand for something else. An agreement covering repeated purchases of products and services over time is known as a blanket purchase agreement (BPA), similar to a blanket order. These are common in government contracts, but some organizations use them also. One thing to make sure of is that the contract has some type of defined end date. Some less-than-reputable cloud providers will design open-ended BPAs; try to avoid these.
Sometimes, multiple government agencies will enter into a memorandum of understanding (MoU), or a letter of intent, in regard to a BPA; it could be that two agencies have a sort of convergence when it comes to ordering services.
Another type of agreement is the interconnection security agreement (ISA). It is an agreement established between two (or more) organizations that own and operate connected IT systems and data sets. Its purpose is to specifically document the technical and security requirements of the interconnection between the organizations. This is the type of agreement to use when the data involved is sensitive and the organizations require a clear, mutually agreed-upon understanding of the security controls to be implemented. As far as governing the security of data and systems, it is a more precise agreement than an SLA.
The ISA differs from the SLA, BPA, and MoU in the following ways:
An SLA is a contract between a service provider and a customer that specifies the nature of the service to be provided and the level of service that the provider will offer to the customer. It can be a very basic agreement, or it could also state the technical and performance parameters, but it will probably not include any specific security controls.
A BPA does not have any inherent security planning in the way an ISA does.
An MoU is not an agreement at all, but an understanding between two organizations or government agencies. It does not specify any security controls either. However, a memorandum of agreement (MoA) will constitute a legal agreement between two parties wishing to work together on a project, but still will not detail any security controls.
Now, I don’t expect you to go out and get a postgraduate degree in business law, but it’s a good idea to know these terms in case you need to interface with the business people at your organization, and to better understand the special contractual relationships between your organization and other organizations. That said, let’s get back to some tech talk!
Organizations might opt to recycle computers and other equipment or donate them. Rarely do organizations throw away equipment. It might be illegal to do so depending on your location and depending on what IT equipment is to be thrown away. The first thing an IT person should do is consult the organization’s policy regarding computer disposal, and if necessary, consult local municipal guidelines.
A basic example of a policy and procedure that an organization enforces might look like the following:
Policy: Recycle or donate IT equipment that has been determined to be outdated and nonproductive to the company.
Step 1. Define the equipment to be disposed of.
Step 2. Obtain temporary storage for the equipment.
Step 3. Have the appropriate personnel analyze the equipment.
Verify whether the equipment is outdated and whether it can be used somewhere else in the organization.
If a device can be used in another area of the organization, it should be formatted, flashed, or otherwise reset to the original default, and then transported to its new location.
If a device cannot be reused in the organization, move to Step 4.
Step 4. Sanitize the devices or computers.
Check for any removable media inside, or connected to, the computer. These should be analyzed and recycled within the organization if possible.
Remove any RAM, label it, and store it.
Remove the hard drive, sanitize it, and store it. If necessary based on organizational policies, pulverize or otherwise destroy the device.
Reset any UEFI/BIOS or other passwords to the default setting.
Step 5. Recycle or donate items as necessary.
Again, this is just an example of a basic recycle policy and procedure, but it gives you an idea of the type of method an organization might employ to best make use of its IT equipment and to organize the entire recycling/donating process.
In Step 4, the policy specifies to sanitize the hard drive; sanitizing the hard drive is a common way of removing data, but not the only one. The way data is removed might vary depending on its proposed final destination. Data removal is the most important element of computer recycling. Proper data removal goes far beyond file deletion or the formatting of digital media. The problem with file deletion/formatting is data remanence, or the residue, that is left behind, from which re-creation of files can be accomplished with the use of software such as SpinRite or other data recovery applications. Companies typically employ one of three options when met with the prospect of data removal:
Clearing: This is the removal of data with a certain amount of assurance that it cannot be reconstructed. The data is actually recoverable with special techniques; in this case, the media is recycled and used within the company again. Data wiping is used to clear data from media by overwriting it with new data or by performing low-level formats. A regular format within the operating system is not enough, because it leaves data remanence behind. Remanence can exist within cluster tips: a cluster tip is the last portion of a cluster that is not used by a file and typically is not erased during standard formatting or deleting. Remnants of data in cluster tips can be removed with specific third-party data-wiping software, purging, or low-level formats. A low-level format is initiated through third-party software (or, in some cases, in the BIOS) and formats the drive in a way similar to how it came from the manufacturer; in some cases, patterns of ones and zeros are written to the entire drive. Several software programs are available to accomplish this. Low-level formats are often frowned upon because they can reduce the lifespan of a drive, so they might not be acceptable if the drive is to be recycled within the company.
Purging: Also known as sanitizing, this is once again the removal of data, but this time it is done in such a way that it cannot be reconstructed by any known technique; in this case the media is released outside the company. Special bit-level erasure software (or other means) is employed to completely destroy all data on the media. This type of software typically complies with the U.S. Department of Defense (DoD) 5220.22-M standard, which specifies three full overwrite passes (the extended 5220.22-M ECE variant specifies seven). It is also possible to degauss the disk, which renders the data unreadable but might also cause physical damage to the drive. Tools such as electromagnetic degaussers and permanent magnet degaussers can be used to permanently purge information from a disk.
Destruction: This is when the storage media is physically destroyed through pulverizing, shredding, pulping, incineration, and so on. At this point, the media can be disposed of in accordance with municipal guidelines. Some organizations require a certificate of destruction to show that a drive has indeed been destroyed. This is obtained from the third-party vendor that performs the drive destruction.
The type of data removal used will be dictated by the data stored on the drive. If there is no personally identifiable information, or other sensitive information, it might simply be cleared and released outside the company. But in many cases, organizations will specify purging of data if the drive is to leave the building. In cases where a drive previously contained confidential or top secret data, the drive will usually be destroyed.
Incident response is a set of procedures that an investigator follows when examining a computer security incident. Incident response procedures are a part of computer security incident management, which can be defined as the monitoring and detection of security events on a computer network and the execution of proper responses to those security events.
Often, however, it is the organization's own IT employees who discover the incident, and sometimes they act as the investigators as well, depending on the resources and budget of the organization. It is therefore important for IT personnel to be well briefed on policies regarding the reporting and disclosure of incidents.
Don’t confuse an incident with an event. An example of a single event might be a single stop error on a Windows computer. In many cases, the blue screen of death (BSOD) won’t occur again, and regardless, it has been logged in case it does. The event should be monitored, but that is about all. An example of an incident would be several DDoS attacks launched at an organization’s web servers over the course of a workday. This would require an incident response team that might include the security administrator, IT or senior management, and possibly a liaison to the public and local municipality.
You will find that organizations might use varying incident response processes. The National Institute of Standards and Technology (NIST) breaks the process down into four main phases. The CompTIA Security+ objectives include six phases. In the past, I have worked with organizations that break the process down into as many as ten phases. The key is to know the policies/procedures of whatever organization you work for. As far as the CompTIA Security+ objectives’ incident response (IR) plan, the process can be summed up as follows:
Phase 1. Preparation: It all comes down to preparation. Consider a data breach, for example. An organization with no planning will take much longer to repair the problem and will have a hard time controlling the damage and loss. But an organization with a well-planned incident response procedure (in advance), a strong security posture, and a knowledgeable chief information security officer (CISO) will be able to limit the damage (to data and to the company reputation) by: quickly discovering the breach; having an internal response team ready to take action; obtaining forensics data quickly; and beginning a seamless notification process and inquiry response plan.
Phase 2. Identification: The recognition of whether an event that occurs should be classified as an incident. Once identified, you might be required to make contact with other groups or escalate the problem if necessary.
Phase 3. Containment: Isolating the problem. For example, if it is a network attack, the attacker should be redirected to a padded cell where the attacker can be analyzed and monitored. Or if only one server has been affected so far by a worm or virus, it should be physically disconnected from the network. The same goes for devices—they should be removed from the network or from a connected computer if the incident concerns them. This phase might also include evidence gathering (in a way that preserves the evidence’s integrity) and further investigation so that you can ascertain exactly what happened and why.
Phase 4. Eradication: Removal of the attack or threat, quarantine of the computer(s), device removal if necessary, and other mitigation techniques covered previously in this book.
Phase 5. Recovery: Retrieve data, repair systems, re-enable servers and networks, reconstitute server rooms and/or the IT environment, and so on. Damage and loss control comes into play here; it can be a very slow process to make sure that as much data is recovered as possible.
Phase 6. Lessons learned: The scenario should be reviewed to define what went wrong and why, ultimately defining the lessons to be learned—how the organization can improve. Document the process and make any changes to procedures and processes that are necessary for the future. Damage and loss should be calculated and that information should be shared with the accounting department of the organization. The affected systems should be monitored for any repercussions.
At any time during these steps, you might be required to notify your superior and/or escalate the problem to someone with more experience than you. That will depend on your organization’s rules and whether you encounter something that you don’t understand. It happens, and you need to be able to swallow your pride and escalate if necessary.
Of course, an incident response policy can be much more in depth, specify exact procedures, and vary in content from organization to organization. To find out more about common practices and standards for incident response, see the ISO/IEC 27002:2013 (or 27002:2005) standard, or NIST Special Publication 800-61 Revision 2. Due to the length and breadth of the information, there is far too much to cover in this book. (I supplied some links to these resources in the “View Recommended Resources” document online, or you can search the Internet for one of several documents that whittles down the content to a more manageable size—but still pretty hefty reading material!) The Security+ exam expects you to know only the basics of incident response.
The six-phase process listed previously is a typical example; however, an organization might have more or fewer phases, and its procedures might vary. An organization’s typical incident response policy and procedures generally detail the following:
Initial incident management process: This includes who first found the problem, tracking tickets, and various levels of change controls. It also defines first responders who perform preliminary analysis of the incident data and determine whether the incident is actually an incident or just an event, and the criticality of the incident.
Emergency response detail: If the incident is deemed to be an emergency, this details how the event is escalated to an emergency incident. It also specifies a coordinator of the incident, how and when the cyber incident response team will meet, lock-down procedures, containment of the incident, repair and test of systems, and further investigation procedures to find the culprit (if there is one).
Collection and preservation of evidence: Sherlock Holmes based his investigations on traditional clues such as footprints, fingerprints, and cigar ash. Analogous to this, a security investigator needs to collect log files, alerts, captured packets, and so on, and preserve the integrity of this information by retaining forensic images of data. Modification of any information or image files during the investigative process will most likely void its validity in a court of law. One way to preserve evidence properly is to establish a chain of custody—the chronological documentation or paper trail of evidence. This is something that should be set up immediately at the start of an investigation; it documents who had custody of evidence all the way up to litigation or a court trial (if necessary) and verifies that the evidence has not been modified. Your work might also be affected by a legal hold, which is a notification that the normal disposition of data, media, and documents is suspended. An incident response policy lists proper procedures when it comes to the procurement of evidence.
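One common way to make a custody log tamper-evident is to hash-chain its entries, so each entry's hash covers the one before it. The sketch below is a hypothetical illustration of that idea (the names, custodians, and timestamps are invented), not a legally sufficient chain-of-custody system.

```python
import hashlib
import json

def add_entry(log, custodian, action, timestamp):
    """Append a custody entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"custodian": custodian, "action": action,
             "time": timestamp, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
add_entry(log, "J. Smith", "Seized laptop SN12345", "2020-06-01T09:14")
add_entry(log, "K. Jones", "Transferred to evidence locker", "2020-06-01T11:02")
print(verify(log))                    # True
log[0]["action"] = "Seized phone"     # tampering with any entry
print(verify(log))                    # False
```

The same chaining principle underlies audit logs in many real evidence-management systems: the documentation itself becomes evidence that the documentation was not altered.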
Damage and loss control: The incident response policy also covers how to stop the spread of damage to other IT systems and how to minimize or completely curtail loss of data.
But a lot of this is really just posturing. The toughest part of the job is figuring out what happened during an incident, and how it happened. That means hardcore forensics. This might be taken care of internally, but more often than not it will be a job for third-party vendors—forensics consultants and specialists.
The incident response policy might define how computer forensics (or digital forensics) should be carried out. It might detail how information is to be deciphered from a hard disk or other device. Often, it dictates the use of hard drive hashing so that computer forensics experts can identify tampering by outside entities. It might also specify a list of rules to follow when investigating what an attacker did. For example, forensics investigators verify the integrity of data to ensure that it has not been tampered with. It is important that computer forensics investigations are carried out properly in case legal action is taken. Policies detailing the proper collection and preservation of evidence can be of assistance when this is the case.
There are some basic forensic procedures concerning data acquisition that can be utilized within the incident response process. Most commonly, these are applied during the containment phase, but could be performed during other phases as well. Some of these include:
Capture and hash system images: If a computer’s data is to be used as evidence, the entire drive should be imaged (copied) before it is investigated. The imaging process should be secured and logged, and the image itself should be hashed; the hashing process should take place before and after the image is created. This will protect the image from tampering and prove the integrity of the image. Generally, imaging is done to the hard drive of the computer, but if the computer is on, memory and other components/media can also be imaged. It is important to consider order of volatility (OOV) when imaging any media, as discussed further down in this list. Live media (LiveCDs, LiveDVDs, and bootable flash drives) are commonly used to take an image of a computer. These run an operating system directly off the removable media; because they work outside of the computer’s regular OS environment, they are excellent options if you don’t want to disturb the system. Examples include Knoppix and BackTrack (the latter since superseded by Kali Linux).
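The before-and-after hashing step can be sketched in a few lines. This is a simplified stand-in (a small random file takes the place of a drive, and a plain file copy takes the place of a forensic imager), but the integrity check is the same one real tools perform with SHA-256 or similar digests.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash a file in chunks so arbitrarily large images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

src = tempfile.NamedTemporaryFile(delete=False)
src.write(os.urandom(4096))          # stand-in for the source drive's contents
src.close()
image = src.name + ".img"

before = sha256_of(src.name)         # hash taken before imaging
shutil.copyfile(src.name, image)     # stands in for the imaging step
after = sha256_of(image)             # hash taken after imaging

print(before == after)               # True: the image faithfully matches the source
os.unlink(src.name)
os.unlink(image)
```

If the two digests ever differ, either the imaging failed or something modified the data in between, and the image cannot be trusted as evidence.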
Analyze data with software tools: The data files may have to be analyzed carefully. Forensic toolkits (FTKs) can be invaluable for this. Examples include Guidance Software’s EnCase, AccessData’s Forensic Toolkit, The Sleuth Kit (open source), Disk Investigator (freeware), and Defiant Technologies’ DiskDigger, to name a few.
Capture screenshots: A computer that is being investigated might be compromised. Therefore, it is usually not wise to use screen-capturing software that is installed on the affected computer. Instead, take actual photos of the various screens you wish to capture using a camera.
Review network traffic captures and logs: As part of an investigation, an analyst will review network captures made with a network sniffing program such as Wireshark (covered in Chapters 12, “Vulnerability and Risk Assessment,” and 13, “Monitoring and Auditing”). Logs should also be preserved, hashed, and stored, including firewall logs, server logs, and router/switch logs. Various network device logs are discussed in Chapters 6 through 9.
Capture video: Any video surveillance equipment that recorded an incident will need to be analyzed. Before doing so, recorded video should be captured to a computer or to an external media device. Once again, the process should be secured and logged so that a person cannot claim that the evidence has been tampered with. Different municipalities, governments, and organizations will have varying policies on how this is to be accomplished. A forensic analyst should be well versed on these policies before responding to an incident. Keep in mind that the time stamp for video might be incorrect. When this happens, the investigator should establish what “real” time is, using a legitimate time server. The “real” time should be compared to the time stamp of the video. The difference between the two is known as the record time offset.
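The record time offset is simple arithmetic between the trusted reference time and the recorder's displayed time. A quick sketch, with hypothetical timestamps:

```python
from datetime import datetime

real_time = datetime(2020, 6, 1, 14, 30, 0)    # obtained from a legitimate time server
video_time = datetime(2020, 6, 1, 14, 22, 45)  # time stamp shown on the recording

record_time_offset = real_time - video_time
print(record_time_offset)  # 0:07:15

# Applying the offset converts any stamp on the video to real time:
corrected = video_time + record_time_offset
print(corrected == real_time)  # True
```

In a report, the investigator would document both clocks, the computed offset, and the time source used, so corrected timestamps can be defended later.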
Consider the order of volatility (OOV): OOV can be summarized as the life expectancy of various types of captured data during forensic analysis. For example, optical discs can be preserved for tens of years, and USB flash drives and tape backup can usually be preserved for years. Hard drives can be expected to last from 1 to 5 years. However, information stored in memory, cache, or CPU registers, and any running processes, only last for seconds (or even milliseconds or nanoseconds). The OOV of media and captured data should be considered when gathering evidence that will be used in a court of law; in practice, this means collecting the most volatile data first, before it disappears.
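The volatility rankings above can be written down as a simple collection order. In this sketch, the first four rankings follow the text; the swap/temporary-files entry and its placement are my own assumption, included only to show where short-lived disk artifacts typically fall.

```python
# Rank 1 = most volatile = collect first.
sources = {
    "CPU registers and cache": 1,    # seconds or less
    "RAM and running processes": 2,  # seconds
    "swap and temporary files": 3,   # until reboot (assumed ranking)
    "hard drive": 4,                 # years
    "optical discs and backups": 5,  # many years
}

collection_order = sorted(sources, key=sources.get)
print(collection_order[0])   # collect 'CPU registers and cache' first
print(collection_order[-1])  # 'optical discs and backups' can wait
```

Encoding the order in a runbook or script like this keeps first responders from burning minutes on stable media while RAM contents evaporate.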
Take statements from witnesses: Witnesses are people who were present during an event and were cognizant of what happened during the event. They are used during court cases and investigations to describe what they saw, heard, smelled, felt, and so on. A witness can corroborate evidence that was gathered from video, computer logs, captures, and other technical evidence.
Review licensing: Depending on the situation, you might need to locate licenses (or lack thereof) for software, client connections, and hardware; for example, the client access licenses (CALs) being used to access a Windows Server. License compliance violation can have legal ramifications, not to mention availability and integrity repercussions.
Track man hours and expenses: Every action taken by the investigators of an incident response team should be logged and documented so as to create a proper audit trail. Investigators normally need to sign in before being allowed access to an affected area or computer. The total man hours, sign-in and sign-out times, and any expenses incurred should be thoroughly documented. Man hours might be tracked through a computer system. For more information on the login of users, and policies governing how and when they can log in, see Chapter 11.
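Totaling man hours from sign-in/sign-out records is straightforward; here is a small sketch (the shift times are hypothetical):

```python
from datetime import datetime

def total_man_hours(shifts):
    """Sum hours worked from (sign-in, sign-out) timestamp string pairs."""
    fmt = "%Y-%m-%d %H:%M"
    return sum(
        (datetime.strptime(out, fmt) - datetime.strptime(inn, fmt)).total_seconds() / 3600
        for inn, out in shifts
    )

shifts = [("2020-06-01 09:00", "2020-06-01 12:30"),   # 3.5 hours
          ("2020-06-01 13:15", "2020-06-01 17:45")]   # 4.5 hours
print(total_man_hours(shifts))  # 8.0
```

The same sign-in records double as part of the audit trail: they establish who had access to the affected systems and when.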
When an examiner collects digital evidence, he or she should abide by best practices. One best practice is to document everything (a fairly simple concept that we have mentioned so many times that you should now have documentation on the brain). But best practices can be more encompassing. For example, the following example procedure defines a best practice for preserving evidence (including live, volatile data in memory):
1. Photograph the computer and scene.
2. If the computer is off, do not turn it on. (Skip to #7.)
3. If the computer is on, photograph the screen.
4. Collect live data from the RAM image.
5. Collect other live data such as logged-on users, the network connection state, and so on.
6. If the drive is encrypted, collect a logical image of the drive while the system is still running; special software will be required. (Skip this step if the drive is not encrypted.)
7. Unplug the power cord from the computer. If the computer is a laptop or mobile device and it does not shut down properly, then remove the battery.
8. Diagram and label all cords.
9. Document all device model numbers and serial numbers that are visible.
10. Disconnect all cords and devices.
11. Collect an image of the hard drive using a hardware imager. Or, if a hardware imager is not available, use one of the software tools mentioned previously in this section. However, if that is the case, this step should be moved to earlier in the sequence (before all cables were disconnected). Next, hash the image.
12. Package all components using antistatic evidence bags.
13. Collect additional storage media and store it using antistatic evidence bags.
14. Keep all media away from magnets, radio transmitters, and so on.
15. Collect instruction manuals, documentation, and notes.
16. Document all steps performed during the seizure.
That is a general procedure. But it will vary depending on the scene, the tools you have at your disposal, and whether or not the computer was on (or sleeping) when you arrived.
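The one branch in that checklist (if the machine is off, the volatile-data steps are skipped) is easy to get wrong under pressure, so some teams encode it. A minimal sketch of the step ordering above, using the step numbers from the list:

```python
def seizure_steps(computer_is_on: bool):
    """Return the checklist step numbers in the order they apply."""
    steps = [1]                                        # photograph computer and scene
    steps += [3, 4, 5, 6] if computer_is_on else [2]   # volatile data only if it's on
    steps += list(range(7, 17))                        # power-down through final documentation
    return steps

print(seizure_steps(False))      # [1, 2, 7, 8, ..., 16]: steps 3-6 skipped
print(4 in seizure_steps(True))  # True: RAM collection happens when the system is on
```

A printed or scripted checklist like this also feeds step 16 directly: the sequence actually followed becomes part of the seizure documentation.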
Now, in general, I know what you are thinking: With all these policies and procedures in place, how does anything ever get done?! And how do incidents get analyzed quickly enough so as not to become a disaster? Well, training is important. Personnel need to be trained quickly and efficiently without getting too much into the minutiae. They also need to be trained to take action quickly. By narrowing down an organization’s policies to just what an employee needs to know, you can create a short but sweet list of key points for the employee to remember. Need-to-know is in itself an important security concept in companies. It is designed as much to hide information from people as it is to prevent information overload. For example, if a person were choking, the information you want to know is how to perform the Heimlich maneuver; you don’t care why a person chokes, what the person ate for breakfast, or how specifically the maneuver works. This concept helps when there is an event or incident; the employees don’t need to sift through wads of policies to find the right action to take, because they are on a need-to-know basis and will quickly execute what they have been trained to do. Need-to-know also comes into play when confidential or top secret information is involved. In classified environments, top secret information is divided into pieces, only some of which particular people have access to. This compartmentalizing of information not only helps to secure data, it also increases productivity and efficiency in the workforce.
We have discussed a lot in this book so far. It can leave some people’s heads spinning. One way to reduce the chaos is to implement an IT security framework. This could be something that your organization devises or it could be a widely accepted set of standards. The goal of an IT security framework is to provide an implementable set of security controls for the IT environment and document the processes, procedures, and policies used to perform the implementation.
While an organization might opt to create its own framework, it makes sense for organizations—especially larger ones—to use standards that have already been thoroughly planned out, or at least base their framework on those standards; for example, the ISO/IEC 27000 family of information security standards. We mentioned ISO/IEC 27002:2013 already but you will find that there are several others. You can find more information at this link: https://www.iso.org/isoiec-27001-information-security.html. Then there is the NIST, which defines all kinds of guidelines and recommendations within the SP 800 and SP 1800 publication groups. See this link for more information: http://csrc.nist.gov/publications/PubsSPs.html. Next, there is ISACA’s Control Objectives for Information and Related Technologies (COBIT) framework, which divides IT into four sections: 1) plan and organize; 2) acquire and implement; 3) deliver and support; and 4) monitor and evaluate. That pretty much sums up everything we’ve talked about in this book! Also, you might be interested in the Information Technology Infrastructure Library (ITIL), Business Information Services Library (BiSL), and Project Management Body of Knowledge (PMBOK). A good NIST document that combines the usage of several of these can be found at this link: https://www.nist.gov/sites/default/files/documents/cyberframework/cybersecurity-framework-021214.pdf. You will also find that the U.S. government and military have their own resources on the subject, or depending on the scenario, will use one of the aforementioned standards.
So, some of these are regulatory, and you as an employee must abide by any of them that are applicable to your organization or profession. Some are nonregulatory, but usually the organization strongly urges its employees to accept them. Most of what I detailed so far is used in the United States, but there are other specific standards and guidelines used by other countries. In some cases, for example in the European Union, guidelines are international.
Reference frameworks can also be industry-specific, or could define how precise tasks and problems within an organization are to be approached. For example, the company you work for might repair mobile devices for corporations. This company would require a specific secure configuration guide detailing how the mobile devices are repaired, stored, handled, and so on. Or, you might be interested in benchmarking your servers. A detailed list of procedures is vital so that you obtain reliable results in a controlled environment. Then there is software development: When building software, you might embrace the concept of use case analysis, which is a requirement analysis technique practiced in software engineering. The use case analysis can benefit from well-written procedures within an IT security framework. Let’s not forget about software-defined networking (SDN), which is an approach to computer networking that allows admins to programmatically control and manage network behavior via open interfaces such as OpenFlow and Cisco’s Open Network Environment. SDN can benefit greatly from a well-thought-out framework.
Your IT security framework might include risk analysis and vulnerability assessment tools and how to use them; for example, the Security Content Automation Protocol (SCAP) can be used to automate vulnerability management. The framework might also incorporate how to properly utilize enterprise resource planning (ERP) software, which is used to manage and automate many back-office functions of technology in a larger organization. The examples are endless—really, just about anything we talked about in this book can be incorporated into your IT security framework.
So you see, the IT security framework could be large or small. It might deal with a specific task, or many tasks within an organization. But often, the content in the framework can be applied to many different solutions and implementations. The goal is to organize a group of processes, procedures, and policies of your organization into a single cohesive agenda that all employees can easily understand and work within.
From a security perspective, what this means is that the IT security framework—if designed properly—can help an organization to provide for defense in depth of systems and networks, and increase the confidentiality, integrity, and availability of data.
For an organization to realize a high level of security, the implementation of policies and procedures is highly recommended, and in some cases may be mandatory. Data sensitivity can be classified to better define which users are allowed to access specific data. Personnel policies such as privacy policies, acceptable use, change management, separation of duties, job rotation, succession planning, and onboarding are all very useful to an organization in that they help to identify exactly what a user is supposed to be doing—and not doing—and how the user will be trained and brought into the mold, so to speak.
Policies are also used to describe what happens to data when it is no longer needed, and what should be done with the media that holds the data; whether it is the clearing of data, the purging of data and other methods of sanitizing data, or even the destruction of the media. This might be necessary at the end of a particular device’s lifespan.
The terms policy and procedure are sometimes used interchangeably—it will depend on the organization. However, for many companies, the policy is often a broader concept, whereas the procedure is a very specific step-by-step instruction. Perhaps the most important procedure is the one that defines what an organization will do during an incident. One thing that we can easily forget to do is to try to learn from incidents—because they will happen at some point. Proper documentation can really drive home the idea of the lesson learned. It can help us to recall what the specific problem was and why it occurred, ultimately allowing us to define ways to prevent it from happening again. It’s all of these policies and procedures, and the people who implement them, that contribute to the overall security plan of an organization. All of the technical know-how and the assessments and analysis that we discussed throughout the book can be leveraged by the power of well-defined organizational policies and procedures. And those can collectively be planned and organized through the use of an IT security framework.
Use the features in this section to study and review the topics in this chapter.
Review the most important topics in the chapter, noted with the Key Topic icon in the outer margin of the page. Table 18-5 lists a reference of these key topics and the page number on which each is found.
Key topic elements:
Acts passed concerning the disclosure of data and PII
Summary of policy types
Six phases of the incident response process
Define the following key terms from this chapter, and check your answers in the glossary:
change management, separation of duties, acceptable use policy (AUP), mandatory vacations, onboarding, due diligence, due care, due process, personally identifiable information (PII), service-level agreement (SLA), memorandum of understanding (MoU), interconnection security agreement (ISA), incident response, incident management, first responders, chain of custody
Complete the Real-World Scenarios found on the companion website (www.pearsonitcertification.com/title/9780789758996). You will find a PDF containing the scenario and questions, and also supporting videos and simulations.
Answer the following review questions. Check your answers with the correct answers that follow.
1. Which method would you use if you were disposing of hard drives as part of a company computer sale?
2. Which of these governs the disclosure of financial data?
D. Top secret
3. You are told by your manager to keep evidence for later use at a court proceeding. Which of the following should you document?
A. Disaster recovery plan
B. Chain of custody
C. Key distribution center
4. Which law protects your Social Security number and other pertinent information?
C. The National Security Agency
D. The Gramm-Leach-Bliley Act
5. Which of the following is not one of the steps of the incident response process?
6. Your company expects its employees to behave in a certain way. How could a description of this behavior be documented?
A. Chain of custody
B. Separation of duties
C. Code of ethics
D. Acceptable use policy
7. You are a forensics investigator. What is the most important reason for you to verify the integrity of acquired data?
A. To ensure that the data has not been tampered with
B. To ensure that a virus cannot be copied to the target media
C. To ensure that the acquired data is up to date
D. To ensure that the source data will fit on the target media
8. You are the security administrator for your organization. You have just identified a malware incident. Of the following, what should be your first response?
9. Employees are asked to sign a document that describes the methods of accessing a company’s servers. Which of the following best describes this document?
A. Acceptable use policy
B. Chain of custody
C. Incident response
D. Privacy Act of 1974
10. One of the developers for your company asks you what he should do before making a change to the code of a program’s authentication. Which of the following processes should you instruct him to follow?
A. Chain of custody
B. Incident response
C. Disclosure reporting
D. Change management
11. As a network administrator, one of your jobs is to deal with Internet service providers. You want to ensure that a provider guarantees end-to-end traffic performance. What is this known as?
12. When it comes to security policies, what should HR personnel be trained in?
C. Guidelines and enforcement
D. Vulnerability assessment
13. In a classified environment, clearance to top secret information that enables access to only certain pieces of information is known as what?
A. Separation of duties
B. Chain of custody
D. Need to know
14. What is documentation that describes minimum expected behavior known as?
A. Need to know
B. Acceptable usage
C. Separation of duties
D. Code of ethics
15. You are the security administrator for your company. You have been informed by human resources that one of the employees in accounting has been terminated. What should you do?
A. Delete the user account.
B. Speak to the employee’s supervisor about the person’s data.
C. Disable the user account.
D. Change the user’s password.
16. Your organization already has a policy in place that bans flash drives. What other policy could you enact to reduce the possibility of data leakage?
A. Disallow the saving of data to a network share
B. Enforce that all work files have to be password protected
C. Disallow personal music devices
D. Allow unencrypted HSMs
17. Which of the following requires special handling and policies for data retention and distribution? (Select the two best answers.)
B. Personal electronic devices
18. One of the accounting people is forced to change roles with another accounting person every three months. What is this an example of?
A. Least privilege
B. Job rotation
C. Mandatory vacation
D. Separation of duties
19. Your organization uses a third-party service provider for some of its systems and IT infrastructure. Your IT director wants to implement a governance, risk, and compliance (GRC) system that will oversee the third party and promises to provide overall security posture coverage. Which of the following is the most important activity that should be considered?
A. Baseline configuration
B. SLA monitoring
C. Security alerting and trending
D. Continuous security monitoring
20. Which of the following is the least volatile when performing incident response procedures?
C. Hard drive
D. RAID cache
21. Which of the following is a best practice when a mistake is made during a forensic examination?
A. The examiner should document the mistake and work around the problem.
B. The examiner should attempt to hide the mistake during the examination.
C. The examiner should disclose the mistake and assess another area of the disc.
D. The examiner should verify the tools before, during, and after an examination.
1. B. Purging (or sanitizing) removes all the data from a hard drive so that it cannot be reconstructed by any known technique. If a hard drive were destroyed, it wouldn’t be of much value at a company computer sale. Clearing is the removal of data with a certain amount of assurance that it cannot be reconstructed; this method is usually used when recycling the drive within the organization. Formatting is not nearly enough to actually remove data because it leaves data residue, which can be used to reconstruct data.
2. A. SOX, or Sarbanes-Oxley, governs the disclosure of financial and accounting data. HIPAA governs the disclosure and protection of health information. GLB, or the Gramm-Leach-Bliley Act of 1999, enables commercial banks, investment banks, securities firms, and insurance companies to consolidate. Top secret is a government data classification level, not a piece of legislation.
3. B. A chain of custody is the chronological documentation or paper trail of evidence. A disaster recovery plan details how a company will recover from a disaster with such methods as backup data and sites. A key distribution center is used with the Kerberos protocol. Auditing is the verification of logs and other information to find out who did what action and when and where.
4. D. The Gramm-Leach-Bliley Act protects private information such as Social Security numbers. HIPAA deals with health information privacy. SOX, or the Sarbanes-Oxley Act of 2002, applies to publicly held companies and accounting firms and protects shareholders in the case of fraudulent practices.
5. D. Non-repudiation, although an important part of security, is not part of the incident response process. Eradication, containment, and recovery are all parts of the incident response process.
6. C. The code of ethics describes how a company wants its employees to behave. A chain of custody is a legal and chronological paper trail. Separation of duties means that more than one person is required to complete a job. Acceptable use policy is a set of rules that restricts how a network or a computer system may be used.
7. A. Before analyzing any acquired data, you need to make sure that the data has not been tampered with, so you should verify the integrity of the acquired data before analysis.
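In practice, integrity verification is done by hashing the acquired image and comparing the result to the digest recorded at acquisition time. A minimal sketch using Python's standard `hashlib` (function names are hypothetical):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_acquisition(image_path: str, known_digest: str) -> bool:
    """Compare the image's current hash against the digest
    recorded when the evidence was acquired."""
    return sha256_of(image_path) == known_digest
```

If the digests do not match, the copy has been altered (or corrupted) and should not be used for analysis.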
8. A. Most organizations’ incident response procedures will specify that containment of the malware incident should come first. Next is removal, then recovery of any damaged systems, and finally monitoring, which should actually be going on at all times.
9. A. Acceptable use (or usage) policies set forth the principles for using IT equipment such as computers, servers, and network devices. Employees are commonly asked to sign such a document as a binding agreement that they will adhere to the policy.
10. D. He should follow the change management process as dictated by your company’s policies and procedures. This might include filing forms in paper format and electronically, and notifying certain departments of the proposed changes before they are made.
11. A. An SLA, or service-level agreement, is the agreement between the Internet service provider and you, defining how much traffic you are allowed and what type of performance you can expect. A VPN is a virtual private network. A DRP is a disaster recovery plan. And WPA is Wi-Fi Protected Access.
12. C. Human resources personnel should be trained in guidelines and enforcement. A company’s standard operating procedures will usually have more information about this. However, a security administrator might need to train these employees in some areas of guidelines and enforcement.
13. D. In classified environments, especially when accessing top secret information, a person can get access to only what he needs to know.
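The need-to-know principle can be sketched as a two-part access check: clearance level alone is not enough; the subject must also have a demonstrated need to know the document's subject area. A minimal Python illustration (the level constants and function name are hypothetical):

```python
# Hypothetical numeric classification levels, highest first.
TOP_SECRET, SECRET, CONFIDENTIAL = 3, 2, 1

def can_access(clearance: int, need_to_know: set,
               doc_level: int, doc_topic: str) -> bool:
    # Both conditions must hold: sufficient clearance AND a
    # need to know the document's topic.
    return clearance >= doc_level and doc_topic in need_to_know
```

A top secret clearance thus grants nothing by itself if the topic falls outside the person's assigned need to know.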
14. D. A code of ethics is documentation that describes the minimum expected behavior of employees of a company or organization. Need to know deals with the categorizing of data and how much an individual can access. Acceptable usage defines how a user or group of users may use a server or other IT equipment. Separation of duties refers to a task that requires multiple people to complete.
15. C. When an employee has been terminated, the employee’s account should be disabled, and the employee’s data should be stored for a certain amount of time, which should be dictated by the company’s policies and procedures. There is no need to speak to the employee’s supervisor. It is important not to delete the user account because the company may need information relating to that account later on. Changing the user’s password is not enough; the account should be disabled.
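The disable-don't-delete rule can be modeled simply: the account object survives, a disabled flag is set, and the data is flagged for retention per policy. A minimal sketch, assuming a hypothetical `UserAccount` record and a policy-driven retention window:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class UserAccount:
    username: str
    disabled: bool = False
    retain_data_until: Optional[date] = None

def offboard(account: UserAccount, retention_days: int = 365) -> UserAccount:
    # Disable rather than delete: the account and its data may be
    # needed later, so the account is locked and its data flagged
    # for retention (the window should come from company policy).
    account.disabled = True
    account.retain_data_until = date.today() + timedelta(days=retention_days)
    return account
```

The key point is that `offboard` never removes the record, so information tied to the account remains recoverable.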
16. C. By creating a policy that disallows personal music devices, you reduce the possibility of data leakage. This is because many personal music devices can store data files, not just music files. This could be a difficult policy to enforce since smartphones can play music and store data. That’s when you need to configure your systems so that those devices cannot connect to the organization’s network. DLP devices would also help to prevent data leakage. Network shares are part of the soul of a network; without them, there would be chaos as far as stored data is concerned. If network shares are configured properly, there shouldn’t be much of a risk of data leakage. Password protecting files would be hard to enforce, and the encryption used could easily be subpar and easily cracked. Hardware security modules (HSMs) are inherently encrypted; that is their purpose. Allowing an HSM would be a good thing, but there are no unencrypted HSMs.
17. B and D. PII (personally identifiable information) must be handled and distributed carefully to prevent ID theft and fraud. In a BYOD environment, personal electronic devices should also be protected and secured and require special policies as well because the devices are being used for personal and business purposes. Phishing is the fraudulent attempt to obtain information. SOX (Sarbanes-Oxley) is an act governing the disclosure of financial and accounting information.
18. B. Job rotation is when people switch jobs, usually within the same department. This is done to decrease the risk of fraud. It is closely linked with separation of duties, which is when multiple people work together to complete a task; each person is given only a piece of the task to accomplish. Least privilege is when a process (or a person) is given only the bare minimum needed to complete its function. Mandatory vacations are when an employee is forced to take X number of consecutive days of vacation away from the office.
19. D. The most important activity when implementing a GRC system in this scenario is continuous security monitoring. It will provide for a secure posture while overseeing the work of the third-party vendor. Baselining is important as well as part of vulnerability management, but the answer “baseline configuration” refers more to the building of a baseline, and not the constant monitoring of that baseline. An SLA is a service-level agreement, which, once agreed to, isn’t something you normally monitor per se; it is a contract. Security alerting and trending is a part of continuous security monitoring.
20. C. Of the listed answers, a hard drive would be considered the least volatile when performing incident response procedures. The order of volatility defines any type of registers as the most volatile, and cache and RAM as slightly less volatile. On the other hand, backup tapes are less volatile than hard drives, and optical discs are less volatile as well. Those last two make good choices if forensic data needs to be stored over the long term.
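The order of volatility described above also dictates collection order during incident response: capture the most volatile sources first. A small Python sketch of that ordering (the list and function names are illustrative):

```python
# Most volatile first, following the typical order of volatility.
ORDER_OF_VOLATILITY = [
    "registers",
    "cache",
    "RAM",
    "hard drive",
    "optical disc",
    "backup tape",
]

def collection_order(sources):
    # Evidence should be collected most-volatile-first, so sort the
    # given sources by their position in the volatility list.
    return sorted(sources, key=ORDER_OF_VOLATILITY.index)
```

For example, given RAM, cache, and a hard drive, the cache and RAM are captured before the drive is imaged.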
21. A. The best practice in this scenario is to document. In fact, you should always document. Document everything to be on the safe side. Work around the problem as best you can. Never try to hide anything; it could be costly to the investigation, and to your livelihood. You shouldn’t have to assess another area of the disc, because you have made a copy (or more than one) and should still be able to access the portion of the disc where the mistake occurred. You should always verify the tools and software used, but this is more of a standard procedure and less of a best practice; besides, it doesn’t necessarily have to do with the mistake.