Appendix A

Answers to the “Do I Know This Already?” Quizzes and Review Questions

Chapter 1

Do I Know This Already?

1. A. Spear phishing is one of the most common social engineering attacks where the attacker searches for public information about the victim to send a targeted email to steal information. Typo squatting (or typosquatting) is a technique used by adversaries that leverages human error when typing a URL in their web browser. Pharming is the term used to describe a threat actor redirecting a victim from a valid website or resource to a malicious one that could be made to appear as a valid site to the user. From there, an attempt is made to extract confidential information from the user or to install malware in the victim’s system.

2. B. The Social Engineering Toolkit (SET) is an example of a tool that can be used specifically to perform social engineering attacks.

3. A. Vishing is a social engineering attack in which the attacker calls the user over the phone and then persuades the user to reveal sensitive information or perform a given action. Smishing is a type of phishing campaign that uses SMS text messages instead of email.

4. D. An access control vestibule is a small space that can usually fit only one person; it is used to combat tailgating. Tunnel-gap and tunnel-trap are not legitimate social engineering terms. Piggybacking is the act of following someone through a door they have opened to enter a building or a room.

5. A. Pretexting is the act of impersonating someone else.

6. B. Malvertising is a social engineering technique where an attacker incorporates malicious ads on trusted websites, which results in users’ browsers being inadvertently redirected to sites hosting malware.

7. B. Spear phishing refers to phishing attempts that are constructed in a very specific way and directly targeted at specific individuals or companies.

8. C. Whaling is a social engineering attack similar to phishing and spear phishing. However, in whaling attacks the attacker targets executives and key personnel of an organization (aka the “big fish”).

9. B. Attackers use the social engineering scarcity technique to create a feeling of urgency in a decision-making context. It is possible to use specific language in an interaction to present a sense of urgency and manipulate the victim.

10. D. All of the available answers can be used as recommendations for user security awareness training and education.

Review Questions

1. Dumpster diving

2. Social engineering

3. Prepending

4. Lack of user awareness

5. A public building with shared office space

6. Typo squatting

7. Social engineering

8. Tailgating

9. Pharming

10. To deter shoulder surfing

Chapter 2

Do I Know This Already?

1. A. Ransomware is a type of malware that restricts access to a computer system and demands that a ransom be paid. It informs the user that in order to decrypt the files or unlock the computer to regain access to the files, a payment must be made through one of several payment services, typically using cryptocurrencies such as Bitcoin.

2. A. Trojans appear to perform desirable functions but are actually performing malicious functions behind the scenes.

3. B. A rootkit is a type of malware designed to gain administrator-level control over a computer system without being detected.

4. C. Fileless malware works differently from traditional malware that puts malicious executables within the file system; instead, it works in a memory-based environment.

5. A. A group of compromised computers (bots), known as a botnet, is typically controlled by a command-and-control (C2) server/system.

6. B. A dictionary password attack pulls words from the dictionary or word lists to attempt to discover a user’s password. A dictionary attack uses a predefined dictionary to look for a match between the encrypted password and the encrypted dictionary word.
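
The core of a dictionary attack is easy to illustrate. The following is a minimal sketch, assuming unsalted SHA-256 password hashes and a tiny inline word list (real attacks draw from large word-list files such as rockyou.txt):

```python
import hashlib

# Hash of the password being attacked (unsalted SHA-256 for illustration).
target_hash = hashlib.sha256(b"sunshine").hexdigest()

# A real attack would read a large word-list file; this list is illustrative.
wordlist = ["password", "letmein", "sunshine", "dragon"]

for word in wordlist:
    # Hash each candidate word and compare it against the captured hash.
    if hashlib.sha256(word.encode()).hexdigest() == target_hash:
        print(f"Match found: {word}")
        break
else:
    print("No match in word list")
```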

7. A. In password spraying, an attacker attempts to compromise a system by using a large number of usernames with a few commonly used passwords.
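
The pattern can be sketched as a pair of loops that invert a classic brute-force attack: a few passwords tried across many accounts, which helps the attacker stay under per-account lockout thresholds. The try_login function below is a hypothetical stub, not a real API:

```python
def try_login(username: str, password: str) -> bool:
    # Hypothetical stub standing in for a network authentication attempt.
    return False

usernames = ["asmith", "bjones", "cdavis"]       # many accounts...
common_passwords = ["Winter2024!", "Password1"]  # ...only a few passwords

for password in common_passwords:    # outer loop over passwords, not users,
    for username in usernames:       # so each account sees very few attempts
        if try_login(username, password):
            print(f"Compromised account: {username}")
```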

8. A. Skimming is a type of attack in which an attacker captures credit card information or information from other similar cards (gift cards, loyalty cards, identification cards, and so on) from a cardholder surreptitiously. Attackers use a device called a skimmer that can be installed at strategic locations such as ATMs and gas pumps to collect card data.

9. D. Tainting, overfitting, and transfer attacks are types of adversarial techniques against machine learning (ML) implementations.

10. A. A supply-chain attack occurs when attackers target security weaknesses in the supply network and install malicious software or hardware implants to perform different nefarious activities.

11. D. Attackers can perform virtual machine (VM) escape, API, and DNS attacks to compromise cloud-hosted applications and services.

12. B. A downgrade attack is a type of cryptographic attack that forces the rollback of a strong algorithm in favor of an older, lower-quality algorithm or mode of operation.

Review Questions

1. Botnet

2. Bot

3. Pop-up windows with advertisements

4. You have been infected with a worm.

5. Ransomware

6. Logic bomb

7. Through email

8. Trojan

Chapter 3

Do I Know This Already?

1. A. The two types of privilege escalation attacks are vertical and horizontal. A horizontal privilege escalation attack occurs when a user accesses functions or content reserved for other users at the same privilege level. Vertical privilege escalation, also known as privilege elevation, occurs when a lower-privileged user accesses functions reserved for higher-privileged users—for example, if a standard user can access functions of an administrator. To protect against this type of situation, you should update the network device firmware. In the case of an operating system, it should likewise be kept updated, and use of some type of access control system, such as User Account Control (UAC), is also advisable.

2. B. Stored, or persistent, XSS attacks occur when the malicious code or script is permanently stored on a vulnerable or malicious server, using a database. These attacks are typically carried out on websites hosting blog posts (comment forms), web forums, and other permanent storage methods. An example of a stored XSS attack is a user requesting the stored information from the vulnerable or malicious server, which causes the injection of the requested malicious script into the victim’s browser. In this type of attack, the vulnerable server is usually a known or trusted site.

3. A. DLL injection occurs when code is run within the address space of another process by forcing it to load a dynamic link library (DLL). Ultimately, this type of attack can cause a program to behave in a way that was not originally intended. This attack can be uncovered through penetration testing.

4. D. A null pointer dereference occurs when a program dereferences a pointer that it expects to be valid but is null, which can cause the application to exit or the system to crash. From a programmatic standpoint, the main way to prevent this situation is to use meticulous coding. Programmers can use special memory error analysis tools to enable error detection for a null pointer dereference.

5. D. Directory traversal, path traversal, and the ../ (“dot-dot-slash”) attack are methods of accessing unauthorized parent (or worse, root) directories. They are often used on web servers that have PHP files and are Linux or UNIX-based but can also be perpetrated on Microsoft operating systems (in which case, it would be ..\ or the “dot-dot-backslash” attack).
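
On the defensive side, the standard countermeasure is to canonicalize the requested path and verify that it stays inside the intended directory. A minimal sketch in Python, with an arbitrary example base directory:

```python
import os

BASE_DIR = os.path.realpath("/var/www/files")  # arbitrary example base directory

def safe_path(requested: str) -> str:
    # Resolve any ../ sequences and symlinks to an absolute path.
    resolved = os.path.realpath(os.path.join(BASE_DIR, requested))
    # Reject anything that escapes the base directory.
    if os.path.commonpath([resolved, BASE_DIR]) != BASE_DIR:
        raise ValueError("directory traversal attempt blocked")
    return resolved

print(safe_path("reports/q1.txt"))   # stays inside the base directory
try:
    safe_path("../../etc/passwd")    # classic dot-dot-slash payload
except ValueError as err:
    print(err)
```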

6. B. Integer overflows occur when arithmetic operations attempt to create a numeric value that is too big for the available memory space. This creates a wrap and can cause resets and undefined behavior in programming languages such as C and C++. The security ramification is that the integer overflow can violate the program’s default behavior and possibly lead to a buffer overflow.
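
Python integers do not overflow, but the C-style wraparound described here can be simulated by masking results to a fixed width. A minimal sketch of unsigned 32-bit behavior:

```python
MASK_32 = 0xFFFFFFFF  # keeps only the low 32 bits, like a C uint32_t

def add_u32(a: int, b: int) -> int:
    # Simulate C-style unsigned 32-bit addition, which wraps on overflow.
    return (a + b) & MASK_32

print(add_u32(0xFFFFFFFF, 1))  # 0 -- the maximum value wraps around to zero
# If this result sized a buffer, a zero-byte allocation could later be
# overrun by the "larger" amount of data, leading to a buffer overflow.
```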

7. B. Race conditions are also known as time-of-check to time-of-use (TOCTOU or TOC/TOU) attacks.

8. C. Error handling or error exception handling code should be checked thoroughly so that a malicious user can’t find out any additional information about the system, such as internal details revealed through verbose error messages or stack traces.

9. B. A fuzzer is a program that can send crafted messages to a vulnerable application or system to find input validation vulnerabilities.
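
A minimal sketch of the concept follows, using a self-contained, deliberately buggy parse_record function as the target; production fuzzers (AFL, libFuzzer, and the like) add coverage feedback and far smarter input mutation:

```python
import random

def parse_record(data: bytes) -> None:
    # Deliberately buggy target: it trusts the length byte at the start
    # of the record instead of validating it against the actual payload.
    declared = data[0]
    payload = data[1:1 + declared]
    if len(payload) != declared:
        raise IndexError("declared length exceeds actual payload")

random.seed(1)  # reproducible run
for attempt in range(1000):
    # Crude mutation engine: random bytes of random length.
    fuzz_input = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
    try:
        parse_record(fuzz_input)
    except IndexError:
        print(f"Input validation failure on attempt {attempt}: {fuzz_input.hex()}")
        break
```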

10. D. A replay attack is a network attack in which a valid data transmission is maliciously or fraudulently repeated or delayed. It differs from session hijacking in that the original session is simply intercepted and analyzed for later use. In a replay attack, an attacker might use a packet sniffer to intercept data and retransmit it later. In this way, the attacker can impersonate the entity that originally sent the data.

11. A. Session replay attacks occur when an attacker steals a user’s valid session ID and reuses that ID to perform malicious transactions and activities with a web application.

12. D. Cross-site request forgery (XSRF) attacks leverage the trust that the application has in the targeted user. For example, the attacker could inherit the privileges of the user to perform an undesired action, such as stealing sensitive information, creating users, or downloading malware.
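
A common defense is the anti-CSRF token: the server stores a random value in the user's session and embeds the same value in each form, then rejects any request whose submitted token does not match. A minimal framework-agnostic sketch, where the session and form dictionaries stand in for real request objects:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generate a random token and keep a server-side copy in the session.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # embedded in the HTML form as a hidden field

def verify_csrf_token(session: dict, form: dict) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(session.get("csrf_token", ""),
                               form.get("csrf_token", ""))

session = {}
form = {"csrf_token": issue_csrf_token(session)}
print(verify_csrf_token(session, form))                   # True: legitimate
print(verify_csrf_token(session, {"csrf_token": "bad"}))  # False: forged request
```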

13. D. All of the available answers are best practices to help protect application programming interfaces (APIs).

14. C. Resource exhaustion is an attack against availability that is designed to bring the network, or access to a particular TCP/IP host/server, to its knees by flooding it with useless traffic. Resource exhaustion attacks are a form of denial-of-service (DoS) attacks. They can also leverage software vulnerabilities such as memory leaks and file descriptor leaks.

15. D. A memory leak is a type of resource leak caused when a program does not release memory properly. The lack of freed-up memory can reduce the performance of a computer, especially in systems with shared memory or limited memory. A kernel-level leak can lead to serious system stability issues. The memory leak might happen on its own due to poor programming, or it could be that code resides in the application that is vulnerable and is later exploited by an attacker who sends specific packets to the system over the network.

16. A. An attacker can launch an SSL strip attack in different ways. One of the most common ways is to create a wireless hotspot and lure the victims to connect to it.

17. D. Attackers can potentially modify drivers through the use of driver shimming (the adding of a small library that intercepts API calls) and driver refactoring (the restructuring of driver code).

18. D. Pass-the-hash attacks leverage deficiencies in Windows NTLM implementations.

Review Questions

1. Mimikatz

2. Driver refactoring

3. SSL stripping attack

4. Swagger

5. Clickjacking

6. Race condition

7. Address space layout randomization (ASLR)

8. XML External Entity (XXE)

Chapter 4

Do I Know This Already?

1. A. An initialization vector (IV) attack is a type of related-key attack, which occurs when an attacker observes the operation of a cipher using several different keys and finds a mathematical relationship between those keys, allowing the attacker to ultimately decipher data.

2. B. ARP cache poisoning (also known as ARP spoofing) is an example of an attack that leads to a man-in-the-middle scenario. An ARP spoofing attack can target hosts, switches, and routers connected to a Layer 2 network by poisoning the ARP caches of systems connected to the subnet and by intercepting traffic intended for other hosts on the subnet.

3. A. With an on-path browser attack (formerly known as man-in-the-browser), the attacker must first infect the victim’s computer with a Trojan. The attacker usually gets the malware onto the victim’s computer through some form of trickery or deceit.

4. D. Attackers launch MAC flooding attacks by sending numerous unknown MAC addresses to a network switch to cause a DoS condition. In addition, when the Layer 2 forwarding table limit is exceeded, packets are flooded to all ports in a virtual LAN (VLAN). This, in turn, enables the attacker to sniff network connections over a switched network while disrupting network performance.

5. D. Domain hijacking is a type of hijacking attack in which the attacker changes the registration of a domain name without the permission of the original owner/registrant. One of the most common methods to perform a domain hijacking is using social engineering.

6. B. One specific type of DDoS attack is the DNS amplification attack. Amplification attacks generate a high volume of packets ultimately intended to flood a target website.

7. B. Visual Basic for Applications (VBA) is an event-driven programming capability in Microsoft operating systems and applications. Attackers have used VBA to create malicious macros in applications such as Excel or Word.

Review Questions

1. PowerShell

2. DMARC

3. Cross-site scripting (XSS)

4. DNS poisoning

5. MAC cloning or spoofing

Chapter 5

Do I Know This Already?

1. A. The name hacktivist is often applied to different kinds of activities—from hacking for social change, to hacking to promote political agendas, to full-blown cyberterrorism. Due to the ambiguity of the term, a hacktivist could be inside a company or attack from the outside and will have varying amounts of resources and funding. However, a hacktivist is usually far more competent than a script kiddie.

2. B. Cybercriminals might work on their own, or they might be part of criminal syndicates and organized crime—a centralized enterprise run by people motivated mainly by money.

3. D. The level of sophistication/capability, resources, and funding are all attributes of threat actors that put them into different categories (that is, state-sponsored actors, script kiddies, hacktivists, criminals).

4. C. The Diamond Model is designed to represent a cybersecurity incident and is made up of four parts. Active intrusions start with an adversary who targets a victim. The adversary will use various capabilities along some form of infrastructure to launch an attack against the victim. Capabilities can be various forms of tools, techniques, and procedures, while the infrastructure is what connects the adversary and victim. The lines connecting each part of the model depict a mapping of how one point reached another. This mapping helps you understand the motives, intent, sophistication, capabilities, and resources that a threat actor may have.

5. A. One of the most effective attacks for mass compromise is to attack the supply chain of a vendor to tamper with hardware and/or software. This tampering might occur in-house or earlier, while in transit through the manufacturing supply chain.

6. D. Attackers can leverage misconfigured and insecure cloud deployments including unpatched applications, operating systems, and storage buckets.

7. A. Threat intelligence refers to knowledge about an existing or emerging threat to assets, including networks and systems. Threat intelligence includes context, mechanisms, indicators of compromise (IoCs), implications, and actionable advice.

8. A. Open-source intelligence (OSINT) applies to offensive security (ethical hacking/penetration testing) and defensive security. In offensive security, OSINT enables you to leverage public information from DNS records, social media sites, websites, search engines, and other sources for reconnaissance—in other words, to obtain information about a targeted individual or an organization. When it comes to threat intelligence, OSINT refers to public and free sources of threat intelligence.

9. D. Vendor websites, threat feeds, and vulnerability feeds could all be used for threat and vulnerability research.

10. C. The MITRE ATT&CK framework (https://attack.mitre.org) is a collection of different matrices of tactics and techniques. InfraGard is a collaborative effort between the FBI and the private sector. Common Vulnerabilities and Exposures (CVE) is a standard, created and maintained by MITRE, for identifying vulnerabilities. The Common Weakness Enumeration (CWE), also created by MITRE, is a standard for identifying the weaknesses (root causes) of security vulnerabilities.

Review Questions

1. MITRE PRE-ATT&CK

2. STIX

3. TAXII

4. National Vulnerability Database (NVD)

5. Wireless

6. Authorized or ethical

7. Indicators of compromise (IoCs)

8. Advanced persistent threat (APT)

9. Shadow IT

Chapter 6

Do I Know This Already?

1. A. Infrastructure as a service (IaaS) is a type of cloud service that offers computer networking, storage, load balancing, routing, and VM hosting. Platform as a service (PaaS) provides various software solutions to organizations, especially the capability to develop applications in a virtual environment without the cost or administration of a physical platform. Software as a service (SaaS) is a cloud service model where the cloud provider offers the complete infrastructure and the application. Examples of SaaS include Gmail, Office 365, Webex, Zoom, Dropbox, Google Drive, and many other applications you use every day.

2. D. Encryption, authentication methods, and identity management are all security concerns in cloud deployments and environments.

3. B. A zero-day vulnerability is a type of vulnerability that is disclosed by an individual or exploited by an attacker before the creator of the software can create a patch to fix the underlying issue. Attacks leveraging zero-day vulnerabilities can cause damage even after the creator knows of the vulnerability because it may take time to release a patch to prevent the attacks and fix damage caused by them.

4. D. Default settings and passwords, weak encryption, and open permissions are examples of the most prevalent types of weak configurations that can be leveraged by an attacker to perform malicious activities and compromise systems.

5. D. Protocols such as Telnet, FTP (without encryption), and HTTP without encryption should be avoided at all times because they are considered insecure.

6. D. Vendor management, system integration, lack of vendor support, supply chain, and outsourced code development should all be assessed when performing an analysis of third-party risks.

7. A. A security update is a broadly released fix for a product-specific security-related vulnerability or group of vulnerabilities. Security vulnerabilities are rated based on their severity, which is indicated in the Microsoft Security Bulletin as critical, important, moderate, or low.

8. D. Legacy platforms and products that have passed the end of support date are often affected by unfixed security vulnerabilities and do not have modern security features. When a device is past the last day of support, vendors will not investigate or patch security vulnerabilities in those devices.

9. D. A security breach could have direct financial impact to a corporation (such as fines and lawsuits). The brand and reputation of a company can also be damaged by major cybersecurity incidents and breaches. Cybersecurity incidents can also lead to outages and availability loss.

10. D. Attackers can leverage different types of obfuscation and evasion techniques to go undetected (including encoding of data, tunneling, and encryption).

Review Questions

1. Cloud access security broker (CASB)

2. Zero-day vulnerability

3. Community cloud

4. Remote code execution (RCE)

5. SMTP with TLS encryption

Chapter 7

Do I Know This Already?

1. B. Threat hunting is the act of proactively and iteratively looking for threats in your organization.

2. A. The MITRE ATT&CK framework is a collection of matrices that outline the adversary tactics, techniques, and procedures (TTPs) that modern attackers use.

3. A. Most of the vulnerabilities disclosed to the public are assigned Common Vulnerabilities and Exposures (CVE) identifiers. CVE is a standard created by MITRE (www.mitre.org) that provides a mechanism to assign an identifier to vulnerabilities so that you can correlate the reports of those vulnerabilities among sites, tools, and feeds.

4. C. A false positive is a broad term that describes a situation in which a security device triggers an alarm, but no malicious activity or actual attack is taking place. In other words, false positives are false alarms, and they are also called benign triggers.

5. A. A true positive is a successful identification of a security attack or a malicious event. A true negative occurs when the intrusion detection device identifies an activity as acceptable behavior and the activity is actually acceptable. False positives are false alarms, and false negative is the term used to describe a network intrusion device’s inability to detect true security events under certain circumstances—in other words, a malicious activity that is not detected by the security device.

6. A. Vulnerability scanners can often log in to the targeted system to perform deep analysis of the operating system, running applications, and security misconfigurations. This technique is called a credentialed scan.

7. D. SIEMs can provide log collection, normalization, and correlation.

8. A. NetFlow is a technology invented by Cisco to collect network metadata about all the different “flows” of traffic on your network.

9. A. Security Orchestration, Automation, and Response (SOAR) systems extend beyond traditional SIEMs to allow organizations to collect security threat data and alerts from multiple sources and to perform many different automated response capabilities.

10. D. Unlike traditional SIEM platforms, SOAR solutions can also be used for threat and vulnerability management, security incident response, and security operations automation (including playbook and runbook automation, as well as orchestration of multiple SOC tools).

Review Questions

1. A web application vulnerability scanner

2. Security advisories and bulletins

3. CVE Numbering Authorities (CNAs)

4. National Vulnerability Database (NVD)

5. Three

6. Medium

7. Threat hunting

Chapter 8

Do I Know This Already?

1. D. Ethical hacking, pen testing, and penetration testing are all terms used to define the process of finding vulnerabilities and mimicking what an attacker could do against your systems and networks. Penetration testing is done after obtaining permission from the system or network owner.

2. A. In the known environment pen testing type, the pen tester starts out with a significant amount of information about the organization and its infrastructure. The tester would normally be provided network diagrams, IP addresses, configurations, and a set of user credentials, for example. If the scope includes an application assessment, the tester might also be provided the source code of the target application. The idea of this type of test is to identify as many security holes as possible.

3. D. The pre-engagement tasks include items such as contract negotiations, the statement of work (SOW), scoping, and the rules of engagement.

4. D. The rules of engagement document typically includes the testing timeline, location of the testing, time window of the testing, preferred method of communication, the security controls that could potentially detect or prevent testing, IP addresses or networks from which testing will originate, and the scope of the engagement.

5. A. Open-source intelligence (OSINT) gathering is the term used when a penetration tester uses public records to perform passive reconnaissance.

6. D. Active reconnaissance is carried out mostly by using network and vulnerability scanners. Nmap is an open-source network and port scanner. Nessus is a vulnerability scanner sold by Tenable. Nikto is an open-source web application vulnerability scanner.

7. B. The term war driving is used because the attacker can just drive around and get a huge amount of information over a very short period of time. A similar concept is war flying. In war flying an attacker can fly drones or similar devices to obtain information about a wireless network or even collect pictures or videos of facilities in some cases.

8. A. The blue team comprises the defenders of the organization. Blue teams typically include the computer security incident response team (CSIRT) and information security (InfoSec) teams.

9. C. Purple teams integrate the defensive capabilities of a blue team with the adversarial techniques used by the red team. Often the purple team is not a separate team, but a solid dynamic between the blue and red teams.

10. A. White teams are individuals who are focused on governance, management, risk assessment, and compliance.

Review Questions

1. Persistence

2. Privilege escalation

3. Partially known environment

4. Bug bounties

5. Passive

Chapter 9

Do I Know This Already?

1. A. Configuration management is an ongoing process created with the goal of maintaining computer systems, servers, network infrastructure, and software in a desired, consistent state. One of the primary goals of configuration management is to ensure that your infrastructure performs as it’s expected to as changes are made over time.

2. C. After a minimum desired state of security is defined, baselines should be taken to assess the current security state of computers, servers, network devices, and the network in general. Baseline configurations should be properly documented and reviewed to include a set of specifications for information systems or configuration items within those systems. Baseline configurations are used by security professionals, along with network and system administrators, as a basis for future deployments, releases, or changes to information systems and applications.

3. A. You should make sure that your organization has appropriate naming conventions for describing IT infrastructure, applications, and users. Appropriate naming conventions are used to avoid conflicts and to be able to correlate data among disparate systems.

4. B and C. The General Data Protection Regulation (GDPR) is a regulation in the European Union and the European Economic Area focused on data protection and privacy. Another example is the California Consumer Privacy Act (CCPA). These regulations give consumers the right to know what personal information is being collected by companies, government, and any other organizations.

5. D. Data loss prevention (DLP) systems can be software or hardware-based solutions and are categorized in three general types: endpoint, network, and storage DLP systems.

6. D. You should always encrypt data at rest, in use, and in motion in order to protect sensitive data.

7. C. Hashes are used in digital signatures, in file and message authentication, and as a way to protect the integrity of sensitive data—for example, data entered into databases or perhaps entire hard drives. A hash is generated through the use of a hash function to verify the integrity of the file or message, most commonly after transit over a network.
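
A minimal sketch of file integrity verification using Python's standard hashlib (the file name and expected digest are illustrative):

```python
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the computed digest with the value the sender published;
# any change to the file, however small, changes the digest completely.
expected = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
actual = sha256_of_file("download.iso")  # illustrative file name
print("Integrity verified" if actual == expected else "File was altered")
```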

8. A. A hot site is a near duplicate of the original site of the organization that can be up and running within minutes (in some cases longer). Computers and phones are installed and ready to go, a simulated version of the server room stands ready, and the vast majority of the data is replicated to the site on a regular basis in the event that the original site is not accessible to users for whatever reason.

9. D. A cold site has tables, chairs, bathrooms, and possibly some technical setup—for example, basic phone, data, and electric lines. Otherwise, configuration of computers and data restoration are necessary before the site can be properly utilized. This type of site is used only if a company can handle the stress of being nonproductive for a week or more.

10. B. A honeypot is generally a single computer but could also be a file, group of files, or an area of unused IP address space, whereas a honeynet is a group of computers, servers, or an area of a network; a honeynet is used when a single honeypot is not sufficient. Either way, the individual computers, or group of servers, will usually not house any important company information. Various analysis tools are implemented to study the attacker; these tools, along with a centralized group of honeypots (or a honeynet), are known collectively as a honeyfarm.

11. D. Honeyfiles are used as bait files intended to lure adversaries to access them and then send alarms to a security analyst to potentially learn the tactics and techniques used by the attacker.

12. A. In a DNS sinkhole you configure one or more DNS servers to provide false results to attackers and redirect them to areas in the network where you can observe their tactics and techniques. DNS sinkholes have been used to contain different types of malware such as the infamous WannaCry ransomware and to disrupt certain malicious DNS operations in denial-of-service (DoS) and other attacks.

Review Questions

1. Warm site

2. Honeyfiles

3. Digital rights management (DRM)

4. Data in use/processing

5. Tokenization

Chapter 10

Do I Know This Already?

1. A. IaaS is a service that offers computer networking, storage, load balancing, routing, and VM hosting. More and more organizations are seeing the benefits of offloading some of their networking infrastructure to the cloud.

2. A. A community cloud is a mix of public and private cloud deployments where multiple organizations can share the public portion.

3. A. Google Drive, Office 365, and Dropbox are examples of the software as a service (SaaS) cloud service model.

4. C. A managed security service provider (MSSP) provides services to manage your security devices and can also help monitor and respond to security incidents.

5. D. Managed service providers (MSPs) can deliver network, application, system, and management services using a pay-as-you-go model. An MSP is an organization that can manage your network infrastructure, servers, and in some cases your security devices. Companies that provide services to manage your security devices and can also help monitor and respond to security incidents are called managed security service providers (MSSPs).

6. C. The term edge computing describes an ecosystem of resources and applications in new network services (including 5G and IoT). One of the main benefits is to provide greater network speeds, low latency, and computational power near the user.

7. A. Thin clients are computer systems that run from resources stored on a central server or from the cloud instead of a local (on-premises) system. When you use a thin client, you connect remotely to a server-based computing environment where the applications, sensitive data, and memory are stored.

8. D. Docker Swarm, Apache Mesos, and Kubernetes are technologies and solutions to manage, deploy, and orchestrate containers.

9. A. VM sprawl (otherwise known as virtualization sprawl) occurs when an organization can no longer effectively control and manage all the VMs on a network or in the cloud.

10. A. In a VM escape attack, the guest VM breaks out of its isolated environment and attacks the hypervisor or compromises other VMs hosted and controlled by the hypervisor.

Review Questions

1. Transit gateway

2. Resource policies

3. Cloud services integration

4. Serverless architecture

5. SDN

Chapter 11

Do I Know This Already?

1. A. The traditional software development methodology is the waterfall model, which is a software and hardware development and project management methodology that has at least five to seven phases that follow in strict linear order. Each phase cannot start until the previous phase has been completed.

2. B. You can integrate with log aggregation tools to maintain and analyze logs of every element that goes into provisioning. This allows you to respond quickly and deprovision the application in the event that something goes wrong. When you go back, you can check the logs and accurately find and fix the root cause of the error.

3. D. Unit testing, integration testing, and identifying a code integrity manager can all help ensure software (code) integrity.

4. C. Threat modeling enables you to prioritize threats to an application based on their potential impact. This modeling process includes identifying assets to the system or application, uncovering vulnerabilities, identifying threats, documenting threats, and rating those threats according to their potential impact. The more risk, the higher the rating. Threat modeling is often incorporated into the software development lifecycle (SDLC) during the design, testing, and deployment phases.

5. D. Input validation, principle of least privilege, and failing securely are all important security principles that should be incorporated into the SDLC.

6. D. Compile time refers to the duration of time during which the statements written in any programming language are checked for errors. Compile-time errors might include syntax errors in the code and type-checking errors. A programmer can check these errors without actually running the program and instead check it in the compile stage when it is converted into machine code.

7. D. One of the most popular OWASP projects is the Top 10 Web Application Security Risks. You can find the latest Top 10 Web Application Security Risks at https://owasp.org/www-project-top-ten. All of these answers are top web application security risks.

8. C. In an example of software diversity, a compiler is modified to generate variants of a binary (target application) that operates in the same way when processing benign input; however, it may operate in a different manner when given malicious input. This new aspect of software diversity is handled by generating variants of a program by building a binary with a diversifying compiler that can randomize the code layout, stack variables, and random allocations of heap objects at different locations in each variant.

9. A. Continuous integration (CI) is a software development practice in which programmers merge code changes in a central repository multiple times a day. Continuous delivery (CD) sits on top of CI and provides a way for automating the entire software release process. When you adopt CI/CD methodologies, each change in code should trigger an automated build-and-test sequence. This automation should also provide feedback to the programmers who made the change.

10. C. Elasticity is the ability of an underlying infrastructure to react to a sudden increase in demand by provisioning more resources in an automated way. Elasticity and scalability are often achieved by deploying technologies such as load balancers and by deploying applications and resources in multiple geographical locations (data centers around the world). Other technologies such as enabling concurrent processing (parallel processing) and automated container deployments (that is, using Kubernetes) allow organizations to auto-scale.

Review Questions

1. Static analysis

2. Software integrity measurement

3. Development

4. Staging

5. Agile

Chapter 12

Do I Know This Already?

1. D. Microsoft Active Directory (AD) allows administrators to organize elements of a network, such as users, computers, and devices, into a hierarchical containment structure.

2. D. Because users authenticate against a directory service, it must be highly available at all times. A best practice requires that it be distributed across multiple locations.

3. C. In biometrics, facial recognition is the most common and least accurate way to identify a user; it has higher false rejection and higher false acceptance rates than other biometric security methods.

4. B. Speaker verification is a 1:1 match where one speaker’s voice is matched to one template, also called a voice print or voice model.

5. B. Something you have is a physical item that must be with you, such as a crypto card, token, or key fob, that is used as a method to authenticate you as a user.

6. A. Physical movements such as the way you walk, typing patterns, and mouse movements are examples of behavioral (observable) biometrics that can be used to authenticate you as a person.

7. A. Authentication, authorization, and accounting (AAA) is a framework for intelligently controlling access to computer resources, enforcing policies, and auditing usage. These processes working in concert are important for effective network management and security.

8. A. Authentication provides a method of identifying a user, typically by having the user enter a valid username and password before access to the network is granted. Authentication is based on each user having a unique set of login credentials for gaining network access.

9. C. Enterprises that elect to use a cloud computing model need to pay only for the resources that they use, with none of the maintenance and upkeep costs. The price adjusts up or down depending on how much is consumed.

10. B. On-premises authentication is preferred. Many companies these days operate under some form of regulatory control, regardless of the industry. The most common one is the Health Insurance Portability and Accountability Act (HIPAA) for private health information, but there are other government and industry regulations. For companies that are subject to such regulations, it is imperative that they remain compliant and know where their data is at all times.

Review Questions

1. Directory services

2. Lookup

3. Federation

4. Vein or vein authentication

5. Gait analysis

6. The crossover error rate (CER) describes the point where the false rejection rate (FRR) and false acceptance rate (FAR) are equal. The crossover error rate describes the overall accuracy of a biometric system.

Chapter 13

Do I Know This Already?

1. A. Geographic dispersal of computing and data assets ensures that the company can continue to function if a disaster, natural or human-made, occurs. In such cases, the company relies on infrastructure in another city, state, or country to be available.

2. A. RAID 5 distributes parity across all disks, making it possible to continue running even if one disk fails. Recovery is done by removing the failed drive and executing the recovery process, during which the drive is rebuilt and added back to the array.

3. A. RAID 0 is known as disk striping; it is the process of dividing data into blocks and spreading the data blocks across multiple storage devices, such as hard disks or solid-state drives (SSDs). In a RAID 0 group, there is no parity. The more hard drives in a RAID 0 array, the higher the probability of array failure.

4. B. By deploying NIC teaming on your server, you can maintain a connection to multiple physical switches and use a single IP address. Load balancing becomes readily available, fault tolerance becomes instant instead of waiting for DNS records to time out or update, and management becomes simpler.

5. B. Redundant power is a critical component in high-availability systems. In the simplest solution, two power supplies can drive a load through either a bused configuration (a vertical power line shared among equipment) or an N+1 configuration, where the two power supply outputs are load-shared or where one is active and one or more are in standby mode.

6. B. SAN-connected servers contain special fiber interface cards called host bus adapters (HBAs). They are configured as pairs, typically called HBA1 and HBA2. The fiber is then connected to a pair of SAN network switches.

7. A. Azure Active Directory is a fully managed multi-tenant cloud-based offering from Microsoft that offers identity and access capabilities for applications running in an on-premises environment. It is not a replacement for on-premises Active Directory services, but it can be used alongside them to extend on-prem directory services and sync the directories to cloud applications.

8. D. Cloud-based backups are a model of data storage in which the data can be accessed, managed, and stored in a remote cloud server via the Internet. Cloud backups are maintained and supported by a cloud storage provider responsible for keeping the user’s data available and accessible at any time.

9. C. Backups to a cloud backup services (IaaS) provider can be slower than on-premises backups, which are connected at gigabit speeds. Depending on the amount of data being backed up, restoring data from the cloud could take many hours or even days to complete.

10. D. A highly available system should be able to quickly recover from any sort of failure state to minimize interruptions for the end user. Best practice for achieving high availability is to eliminate single points of failure or any node that would impact the system as a whole if it becomes dysfunctional. The highest level of uptime is considered “five nines,” or 99.999 percent, which refers to a standard of reliability. Five nines is equivalent to downtime of only 5 minutes and 15 seconds per year (1 minute and 18 seconds in a quarter, or 26 seconds monthly). These are very high standards to meet.
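
The downtime figures follow directly from the availability percentage; a quick arithmetic check:

```python
# Allowed downtime implied by "five nines" (99.999 percent) availability.
availability = 0.99999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes

downtime = (1 - availability) * minutes_per_year
print(f"Yearly:    {downtime:.2f} minutes")            # ~5.26 (5 min 15 s)
print(f"Quarterly: {downtime / 4 * 60:.0f} seconds")   # ~79 s
print(f"Monthly:   {downtime / 12 * 60:.0f} seconds")  # ~26 s
```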

11. D. Without the network, systems will be unable to communicate with each other. When building the restore order for your organization, do not forget this critical step, and make sure you add it to your desktop exercises.

12. A, B, C. These are all steps that can be taken to enhance an organization’s resilience and provide fault tolerance and diversity.

Review Questions

1. Geographical dispersal is the practice of placing valuable data assets around the city, state, country, or world to provide an extra level of protection from attacks, mistakes, and disasters.

2. In the simplest of terms, disk redundancy is a system’s ability to write data to two or more disks at the same time. Having the same data stored on separate disks enables you to recover the data in the event of a disk failure.

3. An uninterruptible power supply or uninterruptible power source (UPS) is an electrical device that provides emergency power to a load when the input power source or mains power fails. Generally, UPSs are battery based—a bank of batteries and circuits that provide power during main power failure.

4. Data replication via a SAN is the most common method of replication. Replicating data from one data center to another via dual SANs allows you to replicate large volumes of data quickly using SAN technology.

5. Reverting to a known state means returning the system to the state it was in at a specific prior moment in time.

Chapter 14

Do I Know This Already?

1. A. A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or designer after manufacturing—hence the term field-programmable.

2. A. SCADA systems are capable of managing parts inventories for just-in-time manufacturing, regulating industrial automation and robots, and monitoring processes and quality control.

3. A. A SCADA programmable logic controller (PLC) is an industrial computer control system that continuously monitors the state of input devices and makes decisions based on a custom program to control the state of output devices.

4. B. Smart watches can monitor your pulse, heart rate, blood pressure, exercise, calories, and sleep patterns to help you become more fit.

5. A. Lighting and air conditioning controls are part of building automation and can be used to reduce excess energy expense by properly implementing sensors that detect when people are in the building.

6. D. Users should utilize the strongest encryption method available on devices, always create and rotate very complex passwords, set up a continuous method to log and audit access to the devices, and finally, ensure use of the latest manufacturer updates.

7. D. Today’s modern surveillance systems are capable of performing facial recognition, monitoring data center access, and if exploited, pivoting to and compromising other systems in the network.

8. D. Zigbee creates flexibility for developers and end users while delivering stellar interoperability. It was created on the IEEE’s 802.15.4 standard, using the 2.4-GHz band and a self-healing true mesh network, and has a defined rate of 250 kbps. It is best suited for intermittent data transmissions from a sensor or input device.

9. C. Cryptography constraints for building secure embedded systems hardware and software have to do with the amount of code required to implement a secure algorithm and the processing power required to crunch the numbers.

10. C and D. A system on a chip (SoC) is essentially an integrated circuit (IC) that takes a single platform and integrates an entire electronic or computer system onto it.

11. A. Nucleus RTOS from Mentor is a real-time operating system (RTOS) software component that rapidly switches between tasks, giving the impression that multiple programs are being executed at the same time on a single processing core. In fact, the processing core can execute only one program at any one time.

12. B. Embedded computers used in unmanned system applications are often characterized by their low SWaP-C (size, weight, power, and cost) profiles, small form factor (SFF), and rugged operating ranges, which are vital characteristics for unmanned aerial vehicles (UAVs).

Review Questions

1. Arduino devices are hardware and software combined into an extremely flexible platform; they can read inputs such as a light on a sensor or a button press. The Arduino software is easy for beginners to use yet flexible enough for advanced users. It runs on Mac, Windows, and Linux.

2. Field-programmable gate arrays (FPGAs) are integrated circuits designed to be configured by a customer or designer after manufacturing—hence the term field programmable.

3. In the manufacturing process, control systems can help with the reduction of product errors and discards, due to earlier problem detection and remedies. These systems improve productivity, maximizing the effectiveness of machine uptime.

4. Cybersecurity and attacks on the platform are the biggest problem with IoT devices being developed and sold. Cybersecurity must be designed into IoT devices from the ground up and at all points in the ecosystem to prevent vulnerabilities in one part from jeopardizing the security of the entire system.

5. The CAN bus system enables each ECU to communicate with all other ECUs, without complex dedicated wiring. An ECU can prepare and broadcast information (that is, sensor data) via the CAN bus, consisting of two wires—CAN low and CAN high. The broadcasted data is accepted by all other ECUs on the CAN network.

Chapter 15

Do I Know This Already?

1. A. An access control vestibule, formerly known as a mantrap, allows security guards to see visitors before they are allowed through the second door. Guards can use cameras and voice communication to ascertain identity.

2. C. Vestibules are an excellent access control addition. Entries with panels built from prefabricated composite or metal are used as a way for companies to control the heat and airflow in their facilities.

3. B. Optical detectors convert incoming optical energy into electrical signals. The two main types of optical detectors are photon detectors and thermal detectors. Photon detectors produce one electron for each incoming photon of optical energy. The electron is then detected by the electronic circuitry.

4. D. Robot sentries act as 24/7/365 guards, continuously monitoring and alerting on differentials. Robot sentries report anything out of the ordinary to the appropriate personnel, who then can take additional action.

5. D. The reception desk plays a front-line role in the physical security program of a company. Receptionists do this by creating a buffer between the corporate offices, employees, and contractors. Visitors are unable to pass until the employees meeting them come and pick them up. This reduces loitering and inquisitive visitors.

6. D. Biometric locks provide a unique way of making sure people are who they say they are by monitoring or matching human characteristics such as a fingerprint, retina, or voice prior to unlocking and granting access.

7. A. A skeleton key normally works with warded or lever locks. With a warded lock, a skeleton key lacks the interior notches that would interfere with the wards (obstructions), thereby allowing it to open the lock.

8. C. A proximity reader is capable of reading a prox card that is within a few millimeters of the reader/pad; it does this through induction.

9. C. A vault can consist of an entire room or even multiple rooms. Vaults allow everything inside to be protected through multiple layers of security measures, including guards, alarms, cameras, locks, gates, and secure doors.

10. D. Pulverizing grinds devices down to bare-metal scraps. There is nothing left that would allow recovery, unlike shredding and degaussing.

Review Questions

1. An access badge is a credential used to gain entry to an area having automated readers for access control entry points.

2. Appropriately placed signage provides direction and guidance for staff and visitors; it also provides clear expectations and the repercussions for failure to abide by those rules.

3. It enables corporate, industrial, and data centers to blend into their environment. When you surround the premises with trees, bushes, and vegetation and implement low-profile security measures around the perimeter, the building becomes one with the area. This ensures it does not stick out and become a highly visible target.

4. One of the two people is there as an observer; this person monitors the person performing the work and ensures that person is performing work exactly as described in the change request and can also question any variance. The monitor typically reports any unusual or suspicious activity immediately to security or the guards’ office.

5. A proximity reader or prox reader, typically an RFID reader, reads a card by placing it near (within proximity of) the reader. The reader sends energy in the form of a field to the card, powering up the card, which enables the reader to read the information stored on the prox card. Prox cards are used as part of an access control system.

Chapter 16

Do I Know This Already?

1. A. A key generation algorithm that selects a private key uniformly at random from a set of possible private keys is one of three digital signature scheme algorithms. This algorithm outputs the private key and a corresponding public key.

2. D. Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication.

3. D. Quantum cryptography, or quantum key distribution (QKD), uses a series of photons (light particles) to transmit data from one location to another over a fiber-optic cable. By comparing measurements of the properties of a fraction of these photons, the two endpoints can determine what the key is and if it is safe to use.

4. B. Photons travel to a receiver, which uses two beam splitters (horizontal/vertical and diagonal) to “read” the polarization of each photon.

5. A. A blockchain is essentially a digital ledger of transactions that is duplicated and distributed across the entire network of computer systems on the blockchain.

6. B. Cipher suites usually contain a set of algorithms that include a key exchange algorithm, bulk encryption algorithm, and message authentication code (MAC) algorithm. The key exchange algorithm is used to exchange a key between two devices.

7. D. Steghide, Foremost, Xiao, Stegais, and Concealment are tools that can be used to conceal data in images and audio files.

8. A. The most common use cases are mobile devices and portable systems. Because of the low-power draw requirements, you may use smaller symmetric key sizes and elliptic-curve asymmetric encryption.

9. C. Time values have historically been used to seed the generation of cryptographic keys; they have mostly been replaced with stronger sources of randomness.

10. D. Key stretching runs a password through an algorithm to produce an enhanced key.

11. C and D. Bcrypt and Password-Based Key Derivation Function 2 (PBKDF2) are key derivation functions (KDFs) that are primarily used for key stretching, which provides a means to “stretch” a key or password, making an existing key or password stronger and protecting against brute-force attacks.
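
Because PBKDF2 is in Python's standard library, the stretching is easy to demonstrate; the iteration count below is illustrative (current guidance favors hundreds of thousands of iterations or more):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)  # unique random salt per password

# PBKDF2 applies HMAC-SHA-256 repeatedly; the iteration count is the
# "stretching" factor that slows every brute-force guess by the same amount.
stretched_key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
print(stretched_key.hex())
```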

Review Questions

1. Digital signatures employ asymmetric cryptography. In many instances, they provide a layer of validation and security to messages sent through a nonsecure channel. Properly implemented, a digital signature gives the receiver reason to believe the message was sent by the claimed sender.

2. Key stretching techniques are used to make a possibly weak key, typically a password or passphrase, more secure against brute-force attacks by increasing the resources (time and possibly space) needed to test each possible key.

3. Salts defend against a precomputed hash attack. Because salts are different in each case, they also protect commonly used passwords, or those users who use the same password on several sites, by making all salted hash instances for the same password different from each other.
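
The effect is easy to see in a minimal sketch: hashing the same password with two different random salts yields unrelated digests, so a precomputed (rainbow) table of unsalted hashes is useless:

```python
import hashlib
import os

def salted_hash(password: str, salt: bytes) -> str:
    # The salt is stored alongside the hash so logins can still be verified;
    # in practice, pair the salt with a slow KDF such as bcrypt or PBKDF2.
    return hashlib.sha256(salt + password.encode()).hexdigest()

password = "Password1"  # the same password for two different users
hash_a = salted_hash(password, os.urandom(16))
hash_b = salted_hash(password, os.urandom(16))

print(hash_a)
print(hash_b)
print(hash_a == hash_b)  # False -- identical passwords, distinct stored hashes
```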

4. Block ciphers are an encryption method that applies a deterministic algorithm along with a symmetric key to encrypt a block of text instead of encrypting one bit at a time as in stream ciphers.

5. Asymmetric encryption is also known as public key cryptography; it uses two mathematically related keys, a public key and a private key, to encrypt and decrypt plaintext. Because anyone who obtains a shared secret key can decrypt messages, exchanging a single secret key over the network is risky; asymmetric encryption avoids this by using two related keys, only one of which (the public key) is ever shared, to boost security.

6. In an asymmetric key system, each user has a pair of keys: a private key and a public key. To send an encrypted message, you must encrypt the message with the recipient’s public key. The recipient then decrypts the message with his or her private key. The easiest thing to remember is that public keys encrypt and private keys decrypt.
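
This encrypt-with-the-public-key, decrypt-with-the-private-key flow can be sketched with the third-party cryptography package (pip install cryptography); the message, key size, and OAEP padding parameters below follow common practice but are illustrative:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates a key pair and shares only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# The sender encrypts with the recipient's public key...
ciphertext = public_key.encrypt(b"meet at noon", oaep)
# ...and only the matching private key can decrypt the result.
print(private_key.decrypt(ciphertext, oaep))  # b'meet at noon'
```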

7. Ephemeral describes something of a temporary or short duration. Ephemeral keys are designed to be used for a single transaction or session. The term ephemeral is increasingly used in computer technology.

Chapter 17

Do I Know This Already?

1. D. DNSSEC is used in securing the chain of trust that exists between the Domain Name System (DNS) records that are stored at each domain level.

2. B. You can enable LDAPS by installing a properly formatted certificate from a certificate authority (CA) according to the guidelines.

3. B. SSH uses asymmetric (public key) RSA cryptography for both connection and authentication.

4. B. S/MIME is based on asymmetric cryptography that uses a pair of mathematically related keys to operate: a public key and a private key.

5. C. The Authentication Header (AH) is an optional packet header used to guarantee connectionless integrity and data origin authentication for IP packets.

6. C. IMAPS downloads a message only when you click on it, and attachments aren’t automatically downloaded. IMAPS operates on port 993 (SSL/TLS).

7. D. HTTPS uses an encryption protocol to encrypt communications. The protocol is called Transport Layer Security (TLS).

8. D. Enterprises looking to deploy time synchronization should utilize three public servers, set up a local internal NTP server that is used for all internal hosts as a reference timekeeper, and only have the internal NTP server make requests to the public servers. You should ensure that you are standardizing on UTC time across all systems; it will make researching attacks and issues more relevant.

9. A. A virtual private network (VPN) provides privacy and security to users by creating a private network connection across a public network connection. VPNs are used to access company networks protected by firewalls that deny inbound and outbound access to/from systems on the network.

10. A, B, C, D. All the responses are correct. You should use caching-only DNS servers, use DNS forwarders, use DNS advertisers and resolvers, protect DNS from cache pollution, enable DDNS for secure connections only, disable zone transfers, and use firewalls to control communication to and from the DNS servers.

Review Questions

1. A cryptographic protocol or encryption protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives.

2. SSH uses encryption to ensure secure transfer of information between the host and the client. Host refers to the remote server you are trying to access, and the client is the computer you are using to access the host.

3. S/MIME is based on asymmetric cryptography, which uses a pair of mathematically related keys to operate: a public key and a private key.

4. SRTP and SRTCP use the Advanced Encryption Standard (AES) as the default cipher.

5. You can enable LDAPS by installing a properly formatted certificate from a certificate authority (CA) according to the guidelines. LDAPS over SSL/TLS uses TCP port 636.
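
For illustration, a query over LDAPS with the OpenLDAP client tools might look like the following (the directory names are hypothetical):

    ldapsearch -H ldaps://ldap.example.com:636 \
      -D "cn=admin,dc=example,dc=com" -W \
      -b "dc=example,dc=com" "(uid=jdoe)"    # -W prompts for the bind password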

Chapter 18

Do I Know This Already?

1. A. According to a report published by Cybersecurity Ventures in May 2019, damages from ransomware cost businesses an astonishing $11 billion in lost revenue, productivity, and remediation.

2. B. Antimalware software that uses behavior-based malware detection can detect previously unknown threats by identifying malware based on characteristics and behaviors.

3. C. One of the primary functions of endpoint detection and response (EDR) is providing forensics and analysis tools to research identified threats and search for suspicious activities (similar to threat hunting).

4. A. Data loss prevention (DLP) software and tools monitor and control endpoint activities, filter data streams on corporate networks, and monitor data in the cloud to protect data at rest and in motion.

5. B. The newer UEFI specification addresses several limitations of the BIOS, including restrictions on hard disk partition size and the amount of time the BIOS takes to perform its tasks.

6. D. Database tokenization is the process of turning sensitive data into nonsensitive data called tokens that can be used in a database or internal system without bringing it into scope.

7. D. The secure session cookies store information about a user session after the user logs in to an application. This information is very sensitive because an attacker can use a session cookie to impersonate the victim.

8. A. Patch management is a process that helps acquire, test, and install multiple patches (code changes) on existing applications and software tools on a computer, enabling systems to stay up to date with existing patches and helping determine which patches are the appropriate ones.

9. C. When a self-encrypting drive (SED) is installed into a mixed-disk configuration or a configuration containing unencrypted drives, it operates as an unencrypted disk. Likewise, a pool consisting of all SEDs might replicate to a pool with only a few SEDs or no SEDs at all.

10. C. Code from the outside needs to be validated prior to running it on a secure CPU. This tamper resistance can be implemented in many ways—for example, using a dedicated ROM that can only be accessed by the hardware root of trust.

11. D. The TPM uses a unique key to digitally sign the log recorded by the UEFI.

12. A. Sandboxing is a strategy that isolates a test environment for applications. It provides an extra layer of security that prevents malware or harmful applications from negatively affecting your system.

Review Questions

1. Antimalware software uses signature-based detection, behavior-based detection, and sandboxing.

2. Boot integrity refers to using a secure method to boot a system and verify the integrity of the operating system and loading mechanism. Boot integrity represents the first step toward achieving a trusted infrastructure.

3. In boot attestation, software integrity measurements are immediately committed to during boot, thus relaxing the traditional requirement for secure storage.

4. Full-disk encryption (FDE) is a cryptographic method that applies encryption to the entire hard drive, including data, files, operating system, and software programs. FDE encryption places an exterior guard on the internal contents of the device.

5. Self-encrypting drives (SEDs) are disk drives that use an encryption key to secure the data stored on the disk. This encryption protects the data and array from data theft when a drive is removed from the array. Because SED encryption operates across all disks in an array at once, a drive must be configured as an SED when it is introduced to the array.

Chapter 19

Do I Know This Already?

1. A. If you migrate some of these low-resource servers to a virtual environment, you could end up spending more on licensing but less on hardware, due to the nature of virtualization. In fact, the goal is to have the gains of hardware savings outweigh the losses of licensing. Load balancing and clustering deal with an operating system utilizing the hardware of multiple servers. This will not be the case when you go virtual, nor would it have been the case anyway, because clustering and load balancing are used in environments where the server is very resource-intensive. Baselining, unfortunately, will remain the same; you should analyze all of your servers regularly, whether they are physical or virtual. These particular servers should not encounter latency or lowered throughput because they are low-resource servers in the first place. If, however, you considered placing a Windows Server that supports 5000 users into a virtual environment, you should definitely expect latency.

2. B. You can defend against pivoting by providing proper access control, network segmentation, DNS security, reputation security, and proper patch management.

3. C. A remote-access VPN is typically used for client access to a headend device, which connects them to the corporate network. Most remote access VPNs use IPsec or SSL/TLS connections.

4. D. One specific type of DDoS is the DNS amplification attack. Amplification attacks generate a high volume of packets ultimately intended to flood a target website. In the case of a DNS amplification attack, the attacker initiates DNS requests with a spoofed source IP address. The attacker relies on reflection; responses are not sent back to the attacker but are instead sent “back” to the victim server. Because the DNS response is larger than the DNS request (usually), it amplifies the amount of data being passed to the victim. An attacker can use a small number of systems with little bandwidth to create a sizable attack. However, a DNS amplification attack can also be accomplished with the aid of a botnet, which has proven devastating to sections of the Internet while such attacks were under way.

5. D. Domain hijacking is a type of hijacking attack in which the attacker changes the registration of a domain name without the permission of the original owner/registrant. One of the most common methods used to perform a domain hijacking is using social engineering.

6. B. Some companies (such as Cisco) offer hardware-based NAC solutions, whereas other organizations offer paid software-based NAC solutions and free ones such as PacketFence (https://packetfence.org), which is open source. The IEEE 802.1X standard, known as port-based network access control, or PNAC, is a basic form of NAC that enables the establishment of authenticated point-to-point connections, but NAC has grown to include software; 802.1X is now considered a subset of NAC.

7. B. As administrator, you might need to take an alternate path to manage network devices. In this case, you might require out-of-band management. This is common for devices that do not have a direct network connection, such as UPSs, PBX systems, and environmental controls.

8. D. Port security is a security feature present in most routers and switches, and it is used to provide access control by restricting the Media Access Control (MAC) addresses that can be connected to a given port. This differs from a MAC access list because it works only on the source MAC address without matching the MAC destination.

9. D. IP proxy secures a network by keeping machines behind it anonymous; it does this through the use of NAT. For example, a basic four-port router can act as an IP proxy for the clients on the LAN it protects. An IP proxy can be the victim of many network attacks, especially DoS attacks. Regardless of whether the IP proxy is an appliance or a computer, it should be updated regularly, and its log files should be monitored periodically and audited according to organization policies.

10. D. A Layer 2 access control list (ACL) operates at the data link layer of the OSI model and implements filters based on Layer 2 information. An example of this type of access list is a MAC access list, which uses information about MAC addresses to create the filter.

11. D. There are different route manipulation attacks, but one of the most common is the Border Gateway Protocol (BGP) hijacking attack. BGP is a dynamic routing protocol used to route Internet traffic. An attacker can launch a BGP hijacking by configuring or compromising an edge router to announce prefixes that have not been assigned to his or her organization. If the malicious announcement contains a route that is more specific than the legitimate advertisement or presents a shorter path, the victim’s traffic may be redirected to the attacker. In the past, threat actors have leveraged unused prefixes for BGP hijacking to avoid attention from the legitimate user or organization.

12. D. As administrator, you can use QoS capabilities to control application prioritization. Protocol discovery features in Cisco AVC show the mix of applications currently running on the network. This information helps you define QoS classes and policies, such as how much bandwidth to provide to mission-critical applications and how to determine which protocols should be policed. Per protocol bidirectional statistics are available, such as packet and byte counts, as well as bit rates. After you classify the network traffic, you can apply class-based weighted fair queuing (CBWFQ) for guaranteed bandwidth.

13. D. An anycast address is assigned to a group of interfaces on multiple nodes. Packets are delivered to the “first” (nearest) interface only. Anycast addresses are structured like unicast addresses.

14. A. To capture packets and measure throughput, you need a tap on the network before you can start monitoring. Most tools that collect throughput leverage a single point configured to provide raw data, such as pulling traffic from a switch or router. If the access point for the traffic is a switch, typically a network port is configured as a Switched Port Analyzer (SPAN) port, sometimes also called port mirroring or port spanning. The probe capturing data from a SPAN port can be either a local probe or data from a SPAN port that is routed to a remote monitoring tool.

15. A. Baselining is the process of measuring changes in networking, hardware, software, applications, and so on. The process of documenting and accounting for changes in a baseline is known as baseline reporting. Baseline reporting enables you to identify the security posture of an application, system, or network. The security posture can be defined as the risk level to which a system, or other technology element, is exposed. Security posture assessments (SPAs) use baseline reporting and other analyses to discover vulnerabilities and weaknesses in systems.

16. C. File integrity is important when securing log files, and File Integrity Monitoring (FIM) helps you maintain this integrity. Hashing the log files is a good way to verify their integrity if they are moved and/or copied; note that hashing is an integrity check, not encryption. You could also encrypt the entire contents of the file so that other users cannot view it. Integrity means that data has not been tampered with. Authorization is necessary before data can be modified in any way; this is done to protect the data’s integrity.
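
As a quick sketch of hash-based integrity checking (the file names are examples):

    sha256sum /var/log/auth.log > auth.log.sha256   # record the hash before the file is moved or copied
    sha256sum -c auth.log.sha256                    # later, verify the file still matches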

Review Questions

1. Cost

2. Network segmentation

3. Clientless

4. DNS amplification

5. Network access control (NAC)

6. Out-of-Band

7. Port security

8. Proxy server

9. Layer 2

10. Border Gateway Protocol (BGP)

11. Class-based weighted fair queuing (CBWFQ)

12. Anycast

Chapter 20

Do I Know This Already?

1. A. WPA3 includes a more robust authentication mechanism than WPA2. It also provides a higher level of encryption capabilities. It enables a very robust authentication based on passwords by utilizing a technology called Simultaneous Authentication of Equals (SAE). This innovation in Wi-Fi security replaces the preshared key (PSK). SAE helps protect against brute-force password attacks and offline dictionary attacks.

2. B. Counter-mode/CBC-MAC protocol or CCMP is based on the Advanced Encryption Standard (AES). It provides a stronger mechanism for securing privacy and integrity over Temporal Key Integrity Protocol (TKIP), which was previously used with WPA. An advantage to CCMP is that it utilizes 128-bit keys as well as a 48-bit initialization vector. This enhancement greatly reduces the possibility of replay attacks. One drawback to using CCMP over TKIP is that it requires additional processing power. That is why you will typically see it supported on newer hardware.

3. A. 802.1X is an IEEE standard that defines port-based network access control (PNAC). Not to be confused with 802.11X WLAN standards, 802.1X is a data link layer authentication technology used to connect hosts to a LAN or WLAN. 802.1X allows you to apply a security control that ties physical ports to end-device MAC addresses, and prevents additional devices from being connected to the network. It is a good way of implementing port security, much better than simply setting up MAC filtering.

4. D. Following are the three components to an 802.1X connection:

  • Supplicant: A software client running on a workstation. This is also known as an authentication agent.

  • Authenticator: A wireless access point or switch.

  • Authentication server: An authentication database, most likely a RADIUS server.

5. D. The supplicant is a software client running on a workstation. It is also known as an authentication agent.

6. D. The Protected Extensible Authentication Protocol (PEAP) uses MS-CHAPv2, which supports authentication via Microsoft Active Directory databases. It competes with EAP-TTLS and includes legacy password-based protocols. It creates a TLS tunnel by acquiring a public key infrastructure (PKI) certificate from a server known as a certificate authority (CA). The TLS tunnel protects user authentication much like EAP-TTLS.

7. B. EAP-FAST uses a protected access credential instead of a certificate to achieve mutual authentication. FAST stands for Flexible Authentication via Secure Tunneling.

8. C. The preshared key (PSK) used to enable connectivity between wireless clients and the WAP is a complex passphrase. PSK is automatically used when you select WPA-Personal in the Security Mode section. The other option is WPA-Enterprise, which uses a RADIUS server. So, if you ever see the term WPA2-PSK, this means that the WAP is set up to use the WPA2 protocol with a preshared key, and not an external authentication method such as RADIUS.

9. B. Wi-Fi Protected Setup (WPS) is a security vulnerability. Although it was originally created to give users easy connectivity to a wireless access point, all major manufacturers later suggested that it be disabled (if possible). In a nutshell, the problem with WPS was the eight-digit code. It effectively worked as two separate smaller codes that collectively could be broken by a brute-force attack within hours.

10. D. Strategic wireless access point (WAP) placement is vital. That is why it is essential to perform a site survey before deploying wireless equipment. A site survey is typically performed using Wi-Fi Analyzer tools to produce a heat map of all wireless activity in the area.

Review Questions

1. SAE

2. EAP-FAST

3. CCMP

4. Supplicant

5. Authenticator

6. Wi-Fi Analyzer

7. AES

Chapter 21

Do I Know This Already?

1. A. Near-field communication (NFC) has obvious benefits for contactless payment systems or any other non-contact-oriented communications between devices. However, for optimal security, you should use contact-oriented readers and cards.

2. B. Geofencing is an excellent way to be alerted to users entering and exiting an organization’s physical premises. It can provide security for wireless networks by defining the physical borders and allowing or disallowing access based on the physical location of the user, or more accurately, the user’s computer or mobile device.

3. A. You should encrypt data communication between a device and the organization and enable full device encryption of data stored on the device or in removable storage.

4. D. One of the characteristics of an MDM solution is the use of over-the-air (OTA) device management updates. OTA historically refers to the deployment and configuration performed via a messaging service, such as Short Message Service (SMS), Multimedia Messaging Service (MMS), or Wireless Application Protocol (WAP). Now it’s used to indicate remote configuration and deployment of mobile devices.

5. D. Insecure user configurations such as rooting and jailbreaking can be blocked from MDM, as can sideloading, the practice of loading third-party apps from a location outside the official application store for that device. Note that sideloading can occur in several ways: by direct Internet connection (usually disabled by default), by connecting to a second mobile device via USB OTG (USB On-The-Go) or Bluetooth, by copying apps directly from a microSD card, or by tethering to a PC or Mac.

6. B. Bluejacking is the sending of unsolicited messages to Bluetooth-enabled devices such as mobile phones. You can stop bluejacking by setting the affected Bluetooth device to undiscoverable or by turning off Bluetooth altogether.

7. B. Encryption is one of the best ways to ensure that data is secured and that applications work properly without interference from potential attackers. However, you should consider whole device encryption, which encrypts the internal memory and any removable (SD) cards.

8. B. Companies may implement strategies such as choose your own device (CYOD), where employees select a device from a company-approved list, or corporate-owned, personally enabled (COPE), where the company supplies employees with phones that can also be used for personal activities.

Review Questions

1. Virtual desktop infrastructure (VDI)

2. SEAndroid

3. Geotagging

4. Application deny/block list

5. Bluejacking

6. USB On-The-Go (USB OTG)

7. Bluesnarfing

8. Sideloading

Chapter 22

Do I Know This Already?

1. A. Resource policies in cloud computing environments are meant to control access to a set of resources. A policy is deployed to manage the access to the resource itself.

2. B. In cloud computing environments, the management of things like API keys, passwords, and certificates is typically handled by some kind of secrets management tool. It provides a mechanism for managing the access and auditing the secrets used in the cloud environment.
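
As one hedged example, retrieving a secret with the AWS Secrets Manager CLI (the secret name is hypothetical) might look like this:

    aws secretsmanager get-secret-value \
      --secret-id prod/db/password \
      --query SecretString --output text   # access is logged and governed by IAM policy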

3. A. In cloud computing environments, storage is referred to as buckets. The access to these buckets is controlled by an Identity and Access Management (IAM) policy.

4. D. In cloud computing environments, the concept of a public subnet is one that has a route to the Internet. A private subnet would be a subnet in the cloud environment that does not have a route to the Internet.

5. D. A cloud access security broker (CASB) is a tool that organizations utilize to control access to and use of cloud-based computing environments and resources. Many cloud-based tools are available for corporate and personal use. The flexible access nature of these tools makes them a threat to data leak prevention and the like. It is very easy to make the mistake of copying a file that contains sensitive data into the wrong folder and making it available to the world. This is the type of scenario that CASB solutions help mitigate.

6. B. The concept of a Secure Web Gateway (SWG) is top of mind these days. At the time of this writing, we are currently in a global pandemic that has forced millions to work from home. For many years, the solution for providing employees a way to work from home was simply a remote-access VPN back into the office. This solution allowed all of the traffic from the employees’ computers to flow back through the corporate network, which would in turn traverse the same security controls that were in place if the employees were plugged into the corporate network. With the advent of cloud-based access to applications and storage, this solution is no longer the most efficient way of securing remote workers. This is where the SWG comes into play. A Secure Web Gateway enables you to secure your remote workers’ Internet access while not overloading the corporate Internet pipe. This approach is sometimes thought of as a cloud firewall. However, an SWG typically has many other protection mechanisms in place, including things like CASB. One example of an SWG is Cisco Umbrella.

7. B. Cloud-based firewalls can work at all layers of the OSI model. However, in many cloud computing environments, the firewall is used at the application layer to control access and mitigate threats to applications being hosted by the cloud environment.

8. B. A cloud native control is typically provided by the actual cloud-computing environment vendor. A non-cloud native control is provided by a third-party vendor. For instance, each cloud computing environment has security controls built into its platform. However, these controls might not be sufficient for all use cases. That is where third-party solutions come into play. Many companies out there today provide these third-party solutions to supplement those areas where cloud native controls are lacking. Of course, most virtual machine–based controls can be deployed in any cloud computing environment. However, for it to be native, it must go through various integration, testing, and certification efforts. Because each cloud computing environment is built differently and on different platforms, the controls must be adapted to work as efficiently as possible with the specifications of the environment. Many times, the decision between utilizing cloud native versus third-party solutions comes down to the actual requirements of the use case and the availability of the solution that fits those requirements.

Review Questions

1. Resource policies

2. Secrets management

3. Buckets

4. Public subnet

5. CASB

6. SWG

7. Application

8. Cloud native

Chapter 23

Do I Know This Already?

1. B. An identity provider (IdP) is a service provider that also manages the authentication and authorization process on behalf of other systems in the federation.

2. B. Authentication is the process of proving the identity of a subject or user. Once a subject has identified itself in the identification step, the enforcer has to validate the identity—that is, be sure that the subject (or user) is the one it is claiming to be. This is done by requesting that the subject (or user) provide something that is unique to the requestor. This could be something known only by the user, usually referred to as authentication by knowledge, or owned only by the user, usually referred to as authentication by ownership, or it could be something specific to the user, usually referred to as authentication by characteristic.

3. A. Most certificates are based on the X.509 standard, which is a common PKI standard developed by the ITU-T that often incorporates the single sign-on (SSO) authentication method.

4. D. SAML is an open standard for exchanging authentication and authorization data between identity providers and service providers. SAML is used in many single sign-on (SSO) implementations.

5. D. A service account is typically used on a server to provide a separate set of credentials and permissions to an application or service that is running. For instance, a server that is running Apache web server might have an apache_user account. That account would then be provided with only the access it needs to be able to perform the functions that services provide.

6. B. A tool you can use to validate who is logged in to a Windows system is the PsLoggedOn application. For this application to work, it has to be downloaded and placed somewhere on your local computer that will be remotely checking hosts. After it’s installed, simply open a command prompt and execute the command C:\PsTools\psloggedon.exe \\HOST_TO_CONNECT.

7. C. For Linux machines, various commands can show who is logged in to a system, such as w, who, users, whoami, and the last command followed by a username.
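
A brief illustration of these commands (the username is hypothetical):

    who          # list logged-in users and their terminals
    w            # logged-in users plus what they are currently running
    whoami       # the account you are running as
    last jdoe    # login history for the user jdoe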

8. B. Geolocation is the actual method of determining the physical location of the user trying to authenticate.

9. C. A time-based attribute can be used when authenticating and authorizing a user. When a user logs in and provides his or her identity, that user can be given specific access based on the time he or she connects. For instance, if you do not expect that someone should be connecting to your wireless network at 3 a.m., then you can set a policy that blocks access between specific hours of operation.

10. B. Geotagging is the process of attaching location information in the metadata of files, such as pictures taken from a smartphone.

Review Questions

1. whoami

2. Biometrics

3. Two

4. One-time password (OTP)

5. Attribute-based access control (ABAC)

6. Geolocation

Chapter 24

Do I Know This Already?

1. A. Password keys are a technology typically deployed by corporations when implementing two-factor authentication. The primary use case is remote access to the organization’s environment. However, many organizations also use them internally. These keys are especially important to use for access to highly sensitive data or applications that serve that data—for instance, financial applications or anything involving intellectual property, such as source code. Many types of password keys are on the market these days. They come in various form factors and are utilized in different ways. For instance, some are used by inserting into a computer USB port. Some are simply one-time password tokens used as a second factor when authenticating, whereas others are a combination of different functions.

2. B. A password vault is also often referred to as a password manager. It is simply a piece of software that is utilized to store and manage credentials. Typically, the credentials are stored in an encrypted database. Having an encrypted database protects the credentials from being compromised if the database is obtained by a threat actor through the compromise of a system holding the database.

3. A. NIST defines knowledge-based authentication (KBA) as authentication of an individual based on knowledge of information associated with his or her claimed identity in public databases. Knowledge of such information is considered to be private rather than secret, because it may be used in contexts other than authentication to a verifier, thereby reducing the overall assurance associated with the authentication process. A popular use case for this type of authentication is to recover a username or reset a password. Typically, a set of predetermined questions is asked of the user. These questions must have already been provided by the end user at the time of account setup or provided as an authenticated user at a later time. The idea is that the information that was provided is something that only the user would know. That is why it is important to utilize a set of questions that cannot be easily guessed or are not public knowledge.

4. D. Using single sign-on (SSO), a user can log in once but gain access to multiple systems without being asked to log in again. This system is complemented by single sign-off, which is basically the reverse; logging off signs off a person from multiple systems. SSO is meant to reduce password fatigue or password chaos, which occurs when a person can become confused and possibly even disoriented when having to log in with several different usernames and passwords. This system is also meant to reduce IT help desk calls and password resets.

5. D. Single sign-on (SSO) is a derivative of federated identity management (also called FIM or FIdM). In this system, a user’s identity and attributes are shared across multiple identity management systems. These various systems can be owned by one organization; for example, Microsoft offers the Forefront Identity Manager software, which can control user accounts across local and cloud environments.

6. B. Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between identity providers and service providers. SAML is used in many single sign-on (SSO) implementations.

The OASIS Security Assertion Markup Language standard is currently the most-used standard for implementing federated identity processes. SAML is an XML-based framework that describes the use and exchange of SAML assertions in a secure way between business entities. The standard describes the syntax and rules to request, create, use, and exchange these assertions.

7. B. Discretionary access control (DAC) is an access control policy generally determined by the owner. Objects such as files and printers can be created and accessed by the owner. Also, the owner decides which users are allowed to have access to the objects and what level of access they may have. The levels of access, or permissions, are stored in access control lists (ACLs).

8. B. Mandatory access control (MAC) is an access control policy determined by a computer system, not by a user or owner, as it is in DAC. Permissions are predefined in the MAC model. Historically, it has been used in highly classified government and multilevel military systems, but you can find lesser implementations of it in today’s more common operating systems as well. The MAC model defines sensitivity labels that are assigned to subjects (users) and objects (files, folders, hardware devices, network connections, and so on).

9. B. Least privilege is a security principle in which users are given only the privileges needed to do their job and not one iota more. A basic example would be the Guest account in a Windows computer. This account (when enabled) can surf the web and use other basic applications but cannot make any modifications to the computer system. However, least privilege as a principle goes much further. One of the ideas behind the principle is to run the user session with only the processes necessary, thus reducing the amount of CPU power needed.

10. B. Role-based access control (RBAC) is an access model that, like MAC, is controlled by the system, and, unlike DAC, not by the owner of a resource. However, RBAC is different from MAC in the way that permissions are configured. RBAC works with sets of permissions instead of individual permissions that are label-based. A set of permissions constitutes a role. When users are assigned to roles, they can then gain access to resources.

Review Questions

1. Least privilege

2. Implicit deny

3. Linux

4. Role-based access control (RBAC)

5. Discretionary access control (DAC)

6. Privileged access management (PAM)

7. Mandatory access control (MAC)

8. Kerberos

9. Attribute-based access control (ABAC)

10. Hardware security module (HSM)

11. Knowledge-based authentication (KBA)

Chapter 25

Do I Know This Already?

1. A. Users need a private key to encrypt the digital signature of a private email. The difference in type of key is the level of confidentiality. A public key certificate obtained by a web browser is public and might be obtained by thousands of individuals. The private key used to encrypt the email is not to be shared with anyone.

2. B. A public key certificate obtained by a web browser is public and might be obtained by thousands of individuals. A private key used to encrypt email is not to be shared with anyone.

3. A. Most certificates are based on the X.509 standard, which is a common PKI standard developed by the ITU-T that often incorporates the single sign-on (SSO) authentication method.

4. D. Components of an X.509 certificate include the following:

  • Owner (user) information, including public key

  • Certificate authority information, including name, digital signature, serial number, issue and expiration dates, and version

5. D. Many companies have subdomains for their websites. Generally, if you connect to a secure website that uses subdomains, a single certificate allows for connections to the main website and the subdomains. This is known as a wildcard certificate; for example, *.h4cker.org, meaning all subdomains of h4cker.org.

6. B. By modifying the Subject Alternative Name (SAN) field, an organization can specify additional hostnames, domain names, IP addresses, and so on.

7. B. Canonical Encoding Rules (CER) is a restricted variant of BER: it allows the use of only one encoding type, and all others are disallowed.

8. B. If a root CA is compromised, all of its certificates are then also compromised, which could affect an entire organization and beyond. The entire certificate chain of trust can be affected. One way to add a layer of security to avoid root CA compromise is to set up an offline root CA. Because it is offline, it cannot communicate over the network with the subordinate CAs, or any other computers for that matter. Certificates are transported to the subordinate CAs physically using USB flash drives or other removable media. Of course, you would need to have secure policies regarding the use and transport of media, and would need to incorporate data loss prevention (DLP), among other things. But the offline root CA has some obvious security advantages compared to an online root CA. Consider this offline mindset when dealing with critical data and encryption methods.

9. B. One way to add security to a certificate validation process is to use certificate pinning, also known as SSL pinning or public key pinning. It can help detect and block many types of on-path attacks by adding an extra step beyond normal X.509 certificate validation. Essentially, a client obtains a certificate from a CA in the normal way but also checks the public key in the server’s certificate against a hashed public key used for the server name. This functionality must be incorporated into the client side, so it is important to use a secure and up-to-date web browser on each client in order to take advantage of certificate pinning.
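
For illustration, the SHA-256 hash of a server’s public key (the value a client would pin) can be derived with OpenSSL; the hostname is an example:

    openssl s_client -servername example.com -connect example.com:443 </dev/null \
      | openssl x509 -pubkey -noout \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -binary | base64   # base64-encoded pin of the public key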

Review Questions

1. In X.509, the owner does not use a symmetric key.

2. A digital certificate includes the certificate authority’s digital signature and the user’s public key. A user’s private key should be kept private and should not be within the digital certificate.

3. Decentralized. When creating key pairs, PKI has two methods: centralized and decentralized. In centralized, keys are generated at a central server and are transmitted to hosts. In decentralized, keys are generated and stored on a local computer system for use by that system.

4. Certificate revocation lists are digitally signed by the certificate authority for security purposes. If a certificate is compromised, it will be revoked and placed on the CRL. Updated CRLs are generated and published periodically.

5. The public key infrastructure is based on the asymmetric encryption concept.

6. You should implement a certificate revocation list so that stolen certificates, or otherwise revoked or held certificates, cannot be used.

7. A compromised certificate should be published to the certificate revocation list.

8. Public key encryption to authenticate users and private keys to encrypt the database. PKI uses public keys to authenticate users. If you are looking for a cryptographic process that allows for decreased latency, then symmetric (private/secret) keys would be the way to go. So, the PKI system uses public keys to authenticate the users, and the database uses a symmetric (private) key to encrypt the data.

9. A key escrow is implemented to secure a copy of the user’s private key (not the public key) in case it is lost.

10. The browser obtains the server’s certificate, which contains the public key; the certificate’s digital signature, created with the CA’s private key, is verified by the browser using the CA’s public key.

Chapter 26

Do I Know This Already?

1. A. Using the Linux traceroute command enables you to document hosts’ locations on your local network and map out the current location/configuration and connected devices. A baseline network diagram should be used and continuously updated to document systems.

2. B. nslookup is a simple but practical command-line tool. It is principally used to find the IP address that corresponds to a host or the domain name that corresponds to an IP address (a process called reverse DNS lookup).
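
Both directions can be seen in a quick sketch (the name and address are documentation examples):

    nslookup www.example.com    # forward lookup: name to IP address
    nslookup 192.0.2.10         # reverse DNS lookup: IP address to name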

3. C. The Linux-centric head command reads the first 10 lines of a given filename.

4. B. The cat command is a widely used and universal tool. It copies standard input to standard output. Because cat does not paginate, you can pipe its output to a pager such as less if the text file doesn’t fit the current screen.
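
A short illustration of both commands (the file names are examples):

    head /var/log/syslog        # first 10 lines by default
    head -n 25 /var/log/syslog  # or a specific number of lines
    cat notes.txt               # copy the file to standard output
    cat notes.txt | less        # page through output that exceeds the screen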

5. B. A shell can have two types of variables: environment variables that are exported to all processes spawned by the shell and shell (local) variables.

6. D. The environment is an area that the shell builds every time it starts a session. This area contains variables that define system properties.
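
A minimal demonstration of the difference in Bash:

    FOO=local_value                       # shell (local) variable: not inherited
    export BAR=exported_value             # environment variable: passed to child processes
    bash -c 'echo "FOO=$FOO BAR=$BAR"'    # child prints: FOO= BAR=exported_value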

7. C. Wireshark is available on Windows, Linux, and macOS. There are both command-line and GUI versions.

8. D. The --pps-multi option sets the number of packets to send for each time interval. This option must appear in combination with the --pps option. It takes an integer number as its argument, and the value is constrained to being greater than or equal to one. The default number for this option is one; therefore, you should use --pps-multi=# together with --pps to set the rate.

9. B. You can use the dd tool to create a complete image of the hard disk /dev/hda by using # dd if=/dev/hda of=~/hdadisk.img. This image can be used to preserve forensic evidence of a computer system that was attacked or used to exploit other systems.
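
To preserve evidentiary value, the image is typically hashed at acquisition time; a sketch using the device and file names from the answer above:

    dd if=/dev/hda of=~/hdadisk.img bs=4M conv=sync,noerror   # bit-for-bit copy, continuing past read errors
    sha256sum ~/hdadisk.img > hdadisk.img.sha256              # record the hash for later verification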

10. D. Three of the most notable exploitation frameworks are Metasploit, Core Impact, and Immunity Canvas, although there are a number of less famous frameworks.

11. C. Hashcat is one of the most popular and widely used password crackers in existence. It is available on every operating system and supports more than 300 different types of hashes. It enables highly parallelized password-cracking capabilities, enabling you to crack multiple different passwords on multiple different devices at the same time and to support a distributed hash-cracking system via overlays. Cracking is optimized with integrated performance tuning and temperature monitoring.
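
Typical invocations look like the following sketch (the hash and wordlist files are hypothetical):

    hashcat -m 0 -a 0 hashes.txt wordlist.txt      # -m 0: MD5 hashes; -a 0: dictionary attack
    hashcat -m 1000 -a 3 ntlm.txt '?a?a?a?a?a?a'   # -m 1000: NTLM; -a 3: brute-force mask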

12. C. There are three methods to achieve data sanitization: physical destruction, cryptographic erasure, and data erasure. The downside of physical destruction is that it damages the storage media and does not allow it to be sold or reused; however, shredding the hard drive is the only method that truly ensures the data is unrecoverable.

Review Questions

1. In computing, traceroute is a computer network diagnostic command for displaying possible routes and measuring transit delays of packets across a network.

2. Cuckoo Sandbox is open-source software for automating analysis of suspicious files. To do so, it makes use of custom components that monitor the behavior of the malicious processes while running in an isolated environment. You can throw any suspicious file at it, and in a matter of minutes, Cuckoo will provide a detailed report outlining the behavior of the file when executed inside a realistic but isolated environment.

3. ping verifies IP-level connectivity to another TCP/IP computer by sending Internet Control Message Protocol (ICMP) echo request messages.

4. hping supports the TCP, UDP, ICMP, and RAW-IP protocols; has a traceroute mode; can send files over a covert channel; and provides many other features. It has a wide range of additional uses, including firewall testing, manual path MTU discovery, advanced traceroute, remote OS fingerprinting, advanced port scanning, remote uptime guessing, and TCP/IP stack auditing.
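
A couple of representative hping3 invocations (the target is hypothetical):

    hping3 -S -p 80 -c 3 target.example.com        # three TCP SYN probes to port 80
    hping3 --traceroute -V -1 target.example.com   # traceroute mode over ICMP (-1)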

5. The curl command-line tool can transfer data to or from a server, using any of the supported protocols: HTTP, FTP, IMAP, POP3, SCP, SFTP, SMTP, TFTP, TELNET, LDAP, or FILE. Curl is powered by Libcurl.
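
A few representative transfers (the hosts and files are examples):

    curl -I https://example.com                                # fetch response headers only
    curl -O https://example.com/file.txt                       # download, keeping the remote filename
    curl -T report.pdf --user name:passwd ftp://ftp.example.com/   # upload over FTP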

Chapter 27

Do I Know This Already?

1. A. Playbook documents outline step-by-step procedures; they should remain high level and focused on specific areas such as malware, insider threats, unauthorized access, ransomware, and phishing.

2. D. Data breach notification laws are becoming more common: the European Union’s General Data Protection Regulation (GDPR), for instance, requires that companies report data security incidents within 72 hours of discovery.

3. C. A triage matrix provides an understanding of the severity of an incident so that it can be prioritized quickly and correctly.

4. C. The lessons learned phase of the incident response process should be performed no later than two weeks from the end of the incident. A Post Incident Response (PIR) meeting ensures the information is still fresh in the team’s mind.

5. A. The tabletop exercise is often used to validate and/or improve an organization’s incident response (IR) plan. Real-life scenarios are used to put the response plan to the test, highlighting areas where your team excels and areas to be addressed.

6. C and D. The Diamond Model places the basic components of malicious activity at one of the four points on a diamond shape: adversary, infrastructure, capability, and victim. The model provides for analysis of threats related to intrusion.

7. B. Procedures describe the way adversaries implement a technique. A procedure concerns the particular instance of use; it is helpful for understanding exactly how the technique is used, for replicating an incident with adversary emulation, and for the specifics of how to detect that instance in use.

8. D. There are five key stakeholders for any IR team; they are IT Services, Security Management, Legal, Human Resources, and Public Relations.

9. C. When your team is engaged with an incident, you should have them set up proactive alerting. They don’t need to call everyone every time, but your handlers need to plan ahead. Your incident response team needs to keep key contacts up to date so that when they have to notify contacts, it doesn’t come as a surprise. Notifying key contacts with only incident-relevant data as it becomes available reduces overcommunication of unimportant data.

10. C. A disaster recovery (DR) plan is a formal document created by organizations that contains detailed instructions on how to respond to unplanned incidents such as natural disasters, power outages, cyber attacks, or other disruptive events.

11. C. Disruptions lead to lost revenue, brand damage, and dissatisfied customers. The longer the recovery time, the greater the adverse impact to the business.

12. C. Many people think a disaster recovery (DR) plan is the same as a business continuity plan (BCP), but a DR plan focuses mainly on restoring an IT infrastructure and operations after a crisis. It’s actually just one part of a complete business continuity plan, because a BCP looks at the continuity of the entire organization.

13. A. A continuity of operations plan (COOP) ensures the restoration of organizational functions in the shortest possible time, even if services resume at a reduced level of effectiveness or availability.

14. A, B, D. An incident response team is a group of people who prepare for and respond to any emergency incident, such as a natural disaster or an interruption of business operations.

15. B. NIST SP 800-53 outlines the requirements that contractors and federal agencies need to take to meet the Federal Information Security Management Act (FISMA). It requires data retention for a minimum of three years.

Review Questions

1. You should regularly test and update your incident response plan. Everyone who is part of the plan should understand their role and the role of others to help reduce confusion during a real event.

2. Regardless of how you choose to eradicate an infection, you need to have a plan for increased monitoring of any affected systems for some period of time, typically 30 days, after the eradication process.

3. Incident response simulations are internal events that provide a structured opportunity to practice your incident response plan and procedures during a realistic scenario. These security incident response simulation (SIRS) events are fundamentally about being prepared and iteratively improving your response capabilities.

4. The Diamond Model of Intrusion Analysis emphasizes the relationships and characteristics of four basic components: the adversary, capabilities, infrastructure, and victims.

5. Privilege escalation: Attackers often need more privileges on a system to get access to more data and permissions. To do this, they need to escalate their privileges, often to the administrator (or root) level.

Chapter 28

Do I Know This Already?

1. A. Historical vulnerability scans can provide significant insight after an incident. By comparing the previous scans with the most recent, you can also look for variances in devices and systems that may have been changed.

2. D. Network vulnerability scans should include all devices with an IP address (workstations, laptops, printers and multifunction printers, IoT devices, routers, switches, hubs, IDS/IPS, servers, wireless networks, and firewalls) and all the software running on them.

3. B. Sensors should be deployed before an incident. Sensor placement around the network allows for greater visibility and can aid in forensic investigation by quickly identifying the depths of the spread.

4. C. SIEMs can tune sensitivity to what is considered suspect behavior or suspicious files (risk-based prioritization) to help reduce or increase the amount of data/matches during an investigation.

5. D. The log data you collect from your systems and devices may seem pretty routine. These logs could contain the precise evidence needed to investigate and successfully eradicate incidents from your network.

6. D. Session Initiation Protocol (SIP) is a signaling protocol used to establish, maintain, and tear down a call when it is terminated. SIP allows the calling parties’ user agents to locate one another using the network.

7. A, B, C. Syslog is available on most network devices (for example, routers, switches, and firewalls), as well as printers and Linux-based systems. A syslog server listens for and then logs data messages coming from syslog clients. Rsyslog and syslog-ng build on syslog capabilities by adding support for advanced filtering and configuration. Event Viewer is a Windows-based tool that enables users or administrators to view event logs on Windows-based remote or local systems.

8. C. If you need to access logs for a specific time window, use the --since option; for instance, journalctl --since "2021-03-15 15:05:00", where the time format is YYYY-MM-DD HH:MM:SS. journalctl is a valuable tool used to collect logs and sort through mountains of data to help you find a needle in a haystack.
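
A sketch of windowed and service-scoped queries (the unit name varies by distribution):

    journalctl --since "2021-03-15 15:05:00" --until "2021-03-15 16:00:00"
    journalctl -u sshd --since today    # limit output to one service's log entries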

9. A. NXLog can process high volumes of event logs from many different sources. Log processing includes rewriting, correlating, alerting, filtering, pattern matching, log file rotation buffering, and prioritized processing. Application, system, and security logs are Windows-specific Event Viewer logs.

10. C. Bandwidth monitors track bandwidth use over all areas of the network, including devices, applications, servers, WAN, and Internet links. One benefit of deploying bandwidth monitors is that they map out historical trends for capacity planning. With bandwidth monitors, you can quickly identify abnormal bandwidth usage, top talkers, and unique communications, all useful in finding infected systems that may be exfiltrating data or scanning the network looking to spread to other hosts.

11. A. Metadata is created from every activity you perform, whether it’s on your personal computer or online: every email, web search, and social or public application. Metadata is defined as “data that provides information about other data.” On its own, it might not seem like much, but when combined with additional context, it can lead to a break in a case.

12. C. A flow is a unidirectional set of packets sharing common session attributes, such as source and destination, IP, TCP/UDP ports, and type of service. NetFlow statefully tracks flows or sessions, aggregating packets associated with each flow into flow records, which are bidirectional flows.

13. D. Protocol analyzers allow network engineers and security teams to capture network traffic and perform analysis of the captured data to identify potential malicious activity or problems with network traffic. The network traffic data can be observed in real time for troubleshooting purposes, monitored by an alerting tool such as a SIEM to identify active network threats, and/or retained to perform forensic analysis.
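
For example, captures are often taken with tcpdump and analyzed later (the interface and file names are examples):

    tcpdump -i eth0 -w capture.pcap 'port 443'   # write HTTPS traffic to a capture file
    tcpdump -nn -r capture.pcap | head           # review the capture offline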

Review Questions

1. Indicators can be anything from additional TCP/UDP ports being shown as open to detection of unauthorized software, or scheduled host system events and even unrecognized outbound communications.

2. Data correlation allows you to take data and logs from disparate systems, like Windows server events, firewall logs, VPN connections, and RAS devices, and bring them all together to see exactly what took place during that event.

3. Logging for critical process information about user, system, and web application behavior can help incident responders build a better understanding of what normal looks like when an application is running and being used.

4. The DNS protocol has two message types: queries and replies. Both use the same format. These messages are used to transfer resource records (RRs). An RR contains a name, a time-to-live (TTL), a class (normally IN), a type, and a value. There are nearly a dozen different types of logs that are of particular interest; obtaining and including them in your investigation can help build a full picture.

5. VoIP technology is an attractive platform to criminals. The reason is that call managers and VoIP systems are global telephony services, in which it is difficult to verify the user’s location and identification.

Chapter 29

Do I Know This Already?

1. A. You spend time up front building an approved list (whitelist) of applications. Then, with your central management or endpoint security solution, you roll out the approved list enterprisewide to all endpoints.

2. B. An application block list or deny list is a basic access control mechanism that denies specific applications or code on the list from being installed or run.

3. D. The main action of the quarantine function is to safely store reported objects such as malware, infected files, or potentially unwanted applications. During an investigation, this storage log should be one of the places investigators check for evidence and history of suspect files.

4. D. With mobile device management (MDM), it is important to establish a baseline understanding of the changes that are common to specific types of mobile devices. Mobile platforms have several attack vectors that you need to consider: hardware, firmware, the mobile OS, applications, or a combination of these.

5. D. Data loss prevention (DLP) is an end-to-end goal that ensures users do not send sensitive or critical information outside the corporate network. The term routinely describes software products that help a network administrator control the data that users can view or transfer. Intellectual property, corporate data, and customer data are some of the types of data you would use DLP to help protect against exfiltration.

6. A. Once an endpoint has been compromised, whether because malware, a virus, or a Trojan was detected on it or because it is part of a much wider attack, it should be quarantined and isolated.

7. C. When you know where the issue is and what systems have been affected, you should contain those systems so that threats cannot spread through the network. To do this, you disable network access for the affected computers and devices, or you place them in a sandbox network. You also should change passwords so intruders no longer have access.

8. D. Network segmentation is an architectural approach that divides a network into multiple networks, subnets, or segments. It allows network administrators to control the flow of traffic between these networks based on a granular policy.

9. A. A SOAR system alerts for suspected phishing emails, endpoint attacks, failed user logins, malware, and other threat information that come from a variety of detection sources, such as SIEMs, systems, switches, and logging services. IT operations use a SOAR runbook for reference for routine procedures that administrators perform. A SOAR playbook provides manual orchestration of incident response. Web content filtering is the practice of blocking access to web content that may be deemed offensive or inappropriate, or even contain dangerous items.

Review Questions

1. The purpose is to specify an index of approved software applications or executable files that are permitted to be present and active on a computer system.

2. When a file is quarantined, the file is moved to a location on disk where it cannot be executed.

3. A set of tools and processes is used to ensure that sensitive data is not lost, misused, or accessed by unauthorized users. These tools allow only authorized persons to have access and to run copy/move commands on those specific files.

4. Certificate revocation is the act of invalidating a certificate before its scheduled expiration date. A certificate should be revoked immediately when its private key shows signs of being compromised.

5. A runbook consists of a series of conditional steps to perform actions such as enriching data, containing threats, and sending notifications automatically as part of the incident response or security operations process.

Chapter 30

Do I Know This Already?

1. D. Data preservation is the first step when litigation has been filed or soon will be filed, with a focus on preserving data in its current state, such as emails, SMS, MMS, and deleted messages (still on disk and not destroyed) on all devices, including cell phones, PCs, and mobile devices.

2. B. File formats can vary, as well as settings for recording video, such as frames per second (FPS) and video resolution. These features can all factor into how and what video information is stored.

3. B. It has become imperative that evidence collection standards and procedures are consistent, documented, and coherent in the chain of custody. Strict guidelines are to be followed, and accurate documentation must be kept.

4. B. Evidence tagging helps identify collected items. A tag can consist of something as little as a sticker with the date, time, control number, and name or initials of the investigator. Using a control number is an easy way to identify a piece of evidence in documentation, such as a chain of custody. The combination of a tag and photographs can provide the exact location and condition in which the item was collected.

5. A. The collection of evidence should start with the most volatile item and end with the least volatile. The order of volatility is the order in which the digital evidence is collected.

6. D. Forensic artifacts are objects that have forensic value. They are not behaviorally driven; they do not necessarily reflect the behavior or intent of a threat actor or adversary. Some of the artifacts that can be extracted from suspect hosts are logs, the registry, the browser history, the RDP cache, and Windows Error Reporting (WER).

7. B. RAM is considered volatile memory. It is perceived to be more trusted than nonvolatile memory, like ROM, disk, magnetic, or optical storage. Investigations using live forensic techniques require special handling because the volatile data in RAM can contain code used by attackers.

8. D. Checksums may also be called hashes. Small changes in a file produce different-looking checksums. You can use checksums to check files and other data for errors that occur during transmission or storage, as well as for evidence in a forensic investigation to ensure it hasn’t been tampered with.

9. B. Cloud-based and on-premises are simply terms that describe where systems store data. Many of the same vulnerabilities that affect on-premises systems also affect cloud-based systems.

10. D. A copy of digital evidence must be properly preserved and collected in accordance with forensic best practices. Otherwise, the digital evidence may be inadmissible in court, or spoliation sanctions may be imposed.

11. A. Organizations should have a legal hold process to perform e-discovery to preserve and gather such information.

12. C. Forensic data recovery is the extraction of data from damaged, deleted, or purposely destroyed evidence sources in a forensically sound manner. This method of recovering data means that any evidence resulting from it can later be relied on in a court of law.

13. D. Nonrepudiation makes it difficult to successfully deny who and where a message came from as well as the authenticity and integrity of that message. Digital signatures can offer nonrepudiation when it comes to online transactions.

14. A. Counterintelligence is information gathered and activities conducted to protect against espionage, other intelligence activities, or sabotage conducted by or on behalf of foreign governments, organizations, or persons. The intelligence is designed to quickly direct resources to the most significant problems first and address them head on.

Review Questions

1. Whether evidence is admissible is determined by following three rules: (1) Best evidence means that courts prefer original evidence rather than copies to avoid alteration of evidence. (2) The exclusionary rule means that data collected in violation of the Fourth Amendment (no unreasonable searches or seizures) is not admissible. (3) Hearsay is second-hand evidence and is often not admissible, but some exceptions apply.

2. It must be (1) sufficient, which is to say convincing without question; (2) competent, which means it is legally qualified; and (3) relevant, which means it must matter to the case at hand.

3. Computers use checksum-style techniques to check data for problems in the background. You could also use checksums to verify the integrity of any other type of file, from applications to documents and media. Forensic investigators use checksums to ensure data is not tampered with after it has been collected from an incident.

4. By definition, forensic copies are exact, bit-for-bit duplicates of the original. To verify this, you can use a hash function to produce a unique checksum of the source data and compare it with the checksum of the copy. Hash functions have four defining properties that make them useful: they are deterministic, collision resistant, preimage resistant, and computationally efficient.
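
A minimal sketch of that verification in Python, using hashlib from the standard library; the device and image paths are hypothetical. Hashing in chunks keeps memory use constant even for large disk images.

    import hashlib

    def file_sha256(path: str) -> str:
        # Read in 1 MB chunks so large disk images fit in constant memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical paths: the original evidence drive and its forensic image.
    if file_sha256("/dev/sdb") == file_sha256("/cases/001/disk.img"):
        print("Image verified: bit-for-bit duplicate of the source")
    else:
        print("Verification FAILED: image does not match the source")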

5. Network forensic analysis tools (NFATs) typically provide the same functionality as packet sniffers, protocol analyzers, and SIEM software in a single product. NFAT software focuses primarily on collecting, examining, and analyzing network traffic.

Chapter 31

Do I Know This Already?

1. A. Managerial controls are techniques and concerns addressed by an organization’s management (managers and executives). Generally, these controls focus on decisions and the management of risk. They also concentrate on procedures, legal and regulatory policies, the software development lifecycle (SDLC), the computer security lifecycle, information assurance, and vulnerability management/scanning. In short, these controls focus on how the security of your data and systems is managed.

2. B. Operational controls are the controls executed by people. They are designed to increase individual and group system security. They include user awareness and training, fault tolerance and disaster recovery plans, incident handling, computer support, baseline configuration development, and environmental security. The people who carry out the specific requirements of these controls must have technical expertise and understand how to implement what management desires of them.

3. A. Technical controls are the logical controls executed by the computer system. Technical controls include authentication, access control, auditing, and cryptography. The configuration and workings of firewalls, session locks, RADIUS servers, or RAID 5 arrays would be within this category, as well as concepts such as least privilege implementation.

4. D. Preventative controls are employed before an event and are designed to prevent an incident; they enforce security policy and are usually not optional. The only way to bypass a preventative control is to find a flaw in its implementation or logic. They are also sometimes referred to as deterrent controls. Examples include biometric systems designed to keep unauthorized persons out, network intrusion prevention systems (NIPSs) that block malicious activity, RAID 1 mirroring to prevent loss of data, access lists, passwords, and fences.

5. D. Detective controls monitor for and detect unauthorized behavior or hazards. These types of controls are generally used to alert on a failure in other types of controls, such as preventative, deterrent, and compensating controls. Detective controls are very powerful while an attack is taking place, and they are useful in post-mortem analysis to understand what happened. Audit logs, intrusion detection systems, motion detection, and Security Information and Event Management (SIEM) systems are examples of detective controls.

6. B. Corrective controls are used after an event. They limit the extent of damage and help the company recover from damage quickly. Tape backup, hot sites, and other fault tolerance and disaster recovery methods are also included here. They are sometimes referred to as compensating controls. Corrective controls include all the controls used during an incident to correct the problem. Quarantining an infected computer, sending a guard to block an intruder, and terminating an employee for not having followed the security policy are all examples of corrective controls.

7. B. Compensating controls, also known as alternative controls, are mechanisms put in place to satisfy security requirements that are either impractical or too difficult to implement. For example, instead of using expensive hardware-based encryption modules, an organization might opt to use network access control (NAC), data loss prevention (DLP), and other security methods. Or, on the personnel side, instead of implementing separation of duties, an organization might opt to do additional logging and auditing. You should approach compensating controls with great caution, because they do not provide the same level of security as the controls they replace.

Review Questions

1. Managerial

2. Operational

3. Technical

4. Preventative

5. Deterrent

6. Detective

7. Corrective

8. Compensating

9. Physical

10. Physical

Chapter 32

Do I Know This Already?

1. A. The General Data Protection Regulation is a European Union (EU) law that was enacted in 2018. Its overall focus is on data protection and privacy for individuals. Although it is a law enacted in the EU, it applies to any organization collecting information about people in the EU. This means that if your organization collects and handles the personal data of people in the EU, then this regulation applies to you. For instance, if you run a business that offers goods or services in the EU and that business requires you to collect information about your customers, then you must follow the GDPR requirements. Not following them could result in large fines, which is one of the factors that makes GDPR a larger concern to organizations than many other laws that have been in place for many years. For additional information on GDPR, refer to https://gdpr.eu/.

2. B. The Sarbanes–Oxley Act (SOX), enacted in 2002, governs the disclosure of financial and accounting information.

3. A. The Health Insurance Portability and Accountability Act (HIPAA), enacted in 1996, governs the disclosure and protection of health information.

4. D. The Center for Internet Security (CIS) is a nonprofit organization that was established in 2000. Its overall goal is to provide security best practice guidance for enhancing the security of cyberspace.

5. C. The National Institute of Standards and Technology (NIST) developed the Risk Management Framework (RMF) in 2017 as a result of an executive order from the president, which required all federal agencies to comply with it. For more information about the NIST RMF, visit www.nist.gov/cyberframework/risk-management-framework.

6. C. The National Institute of Standards and Technology (NIST) developed the Cybersecurity Framework (CSF) in 2014 as a result of Executive Order 13636 from the president. The NIST CSF is made up of five core functions: Identify, Protect, Detect, Respond, and Recover. For more information about the NIST CSF, visit www.nist.gov/cyberframework.

7. A. The Cloud Security Alliance (CSA) is a nonprofit organization established in 2008 with the goal of promoting security best practices in cloud computing environments. The Cloud Controls Matrix is a framework established by the CSA for cloud computing. This organization also developed the reference architecture to help cloud providers with guidance on developing secure interoperability best practices. For more information related to CSA, visit https://cloudsecurityalliance.org/.

8. C. Security Content Automation Protocol (SCAP) was created to provide a standardized solution for security automation. The SCAP mission is to maintain system security by ensuring security configuration best practices are implemented in the enterprise network, verifying the presence of patches, and maintaining complete visibility of the security posture of systems and the organization at all times.

Review Questions

1. General Data Protection Regulation (GDPR)

2. Sarbanes-Oxley Act (SOX)

3. Health Insurance Portability and Accountability Act (HIPAA)

4. Center for Internet Security

5. National Institute of Standards and Technology

6. Cloud Security Alliance

7. Security Content Automation Protocol (SCAP)

Chapter 33

Do I Know This Already?

1. A. The Privacy Act of 1974 sets many standards when it comes to the security of personally identifiable information (PII). However, most organizations will go further and define their own privacy policy, which explains how users’ identities and other similar information will be secured. For example, if an organization has an Internet-based application that internal and external users access, the application will probably retain some of their information—possibly details of their identity. Not only should this information be secured, but the privacy policy should state in clear terms what data is allowed to be accessed, and by whom, as well as how the data will be retained and distributed (if at all). An organization might also enact a policy that governs the labeling of data to ensure that all employees understand what data they are handling and to prevent the mishandling of confidential information. Before any system administrators or other personnel gather information about these users, they should consult the privacy policy.

2. B. Acceptable use policies (AUPs) define the rules that restrict how a computer, network, or other system may be used. They state what users are and are not allowed to do when it comes to the technology infrastructure of an organization. Often, employees must sign an AUP before they begin working on any systems. This policy protects the organization but also defines to employees exactly what they should and should not be working on.

3. A. Separation of duties defines when more than one person is required to complete a particular task or operation. This distributes control over a system, infrastructure, or particular task.

4. D. When it comes to information security, due diligence means ensuring that IT infrastructure risks are known and managed. An organization needs to spend time assessing risk and vulnerabilities and might state in a policy how it will give due diligence to certain areas of its infrastructure.

5. D. Due care is the mitigation action that an organization takes to defend against the risks that have been uncovered during due diligence.

6. B. Due process is the principle that an organization must respect and safeguard personnel’s rights. The purpose is to protect the employee from the state and from frivolous lawsuits.

7. A. All employees should be trained on personally identifiable information (PII). This information is used to uniquely identify, contact, or locate a person. This type of information could be a name, birthday, Social Security number, biometric information, and so on. Employees should know what identifies them to the organization and how to keep that information secret and safe from outsiders. Another key element of user education is the dissemination of the password policy. Employees should understand that passwords need to be complex and know the complexity requirements. They should also understand that they should never give out their password or ask for another person’s password to any resource.

8. C. A memorandum of understanding (MOU) is not an agreement at all, but an understanding between two organizations or government agencies. It does not specify any security controls either. However, a memorandum of agreement (MOA) does constitute a legal agreement between two parties wishing to work together on a project but still does not detail any security controls.

9. B. A business partnership agreement (BPA) is a type of contract that can establish the profits each partner will get, what responsibilities each partner will have, and exit strategies for partners. Often this type of agreement applies to supply chain and business partners.

10. B. Top secret means the highest sensitivity of data; few people should have access, and security clearance may be necessary. Information is broken into sections on a need-to-know basis.

11. B. When you're discussing policies for systems internal to your organization or devices that are owned by your organization, such as servers and laptops, it is also important to detail policies regarding how they should be deployed. For instance, service accounts should follow a specific credential policy. Service accounts often have higher-level permissions to enable the specific service, which means these accounts can cause more damage if they are compromised. They should always be configured using the least privilege approach, and there should be a dedicated service account for each service.

12. B. Change management is a structured way of changing the state of a computer system, network, or IT procedure. The idea is that change is necessary, but an organization should adapt to change and be knowledgeable of it. Any change that a person wants to make must be introduced to the heads of each department it might affect, and they must approve the change before it goes into effect. Before this happens, department managers will most likely make recommendations and/or give stipulations. When the necessary people have signed off on the change, it should be tested and then implemented. During implementation, it should be monitored and documented carefully.

Review Questions

1. Acceptable use policy (AUP)

2. Separation of duties

3. Code of ethics

4. Acceptable use policy (AUP)

5. Change management

6. Service-level agreement (SLA)

7. Public information

8. Job rotation

9. Guidelines and enforcement

Chapter 34

Do I Know This Already?

1. A. Risks to your organization or environment can come in many shapes and forms. The primary concern of most organizations is external risk: risk that comes from an external entity, with any of many possible motivations, targeting the inside of your organization. Many imagine an external "hacker" as someone in a black hoodie sitting in a basement, hammering away at the keyboard in front of 10 different monitors. That, of course, is usually not the case. External risk most likely comes from an organized threat actor, or an organization of threat actors, with various motivations and many different attack methods. The primary goal of external attackers is to gain access to your organization's computing environment, gain a foothold, and keep it as long as possible to carry out their objectives, whatever they may be.

2. B. Many organizations tend to overlook internal risks. The majority of internal risk stems from employees or those internal to the organization such as contractors. While the motivation of external cybercriminals may be to gain and keep access, the internal threat actor already has access to the organization’s environment. This person’s motivations are usually different from those of external threat actors—although the goal may be the same in the end. Most internal attacks result in the exfiltration or destruction of sensitive data that belongs to the organization. The theft of intellectual property (IP) is a primary goal of internal and external threat actors.

3. A. Risk management can be defined as the identification, assessment, and prioritization of risks, along with the mitigation and monitoring of those risks. Specifically, when we talk about computer hardware and software, risk management is also known as information assurance (IA). The two common models of IA are the well-known CIA triad and the DoD's "Five Pillars of IA," which comprise the CIA triad concepts (confidentiality, integrity, and availability) plus authentication and nonrepudiation.

4. D. Some organizations opt to avoid risk. Risk avoidance usually entails not carrying out a proposed plan because the risk factor is too great. An example of risk avoidance would be a high-profile organization deciding not to implement a new and controversial website based on its belief that too many attackers would attempt to exploit it.

5. C. An example of risk transference (also known as risk sharing) would be an organization that purchases cybersecurity insurance for a group of servers in a data center. The organization still takes on the risk of losing data in the case of server failure, theft, or disaster, but it transfers the financial risk of what those servers are worth to the insurer in case they are lost.

6. B. Risk assessment is the attempt to determine the number of threats or hazards that could possibly occur in a given amount of time to your computers and networks.

7. B. Qualitative risk assessment is an assessment that assigns subjective numeric ratings (for example, on a scale of 1 to 10) to the probability of a risk and the impact it can have on the system or network, in contrast to quantitative assessment, which uses hard data such as monetary values.
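
A minimal sketch of qualitative scoring in Python; the risks and the 1-to-5 rating scale are invented for illustration, since real scales and categories vary by organization.

    # Hypothetical risks rated on invented 1-5 probability and impact scales.
    risks = {
        "Phishing campaign":    {"probability": 4, "impact": 3},
        "Data center flood":    {"probability": 1, "impact": 5},
        "Unpatched web server": {"probability": 3, "impact": 4},
    }

    # A common qualitative approach: score = probability x impact, then rank
    # so the highest-scoring risks are addressed first.
    ranked = sorted(risks.items(),
                    key=lambda item: item[1]["probability"] * item[1]["impact"],
                    reverse=True)
    for name, r in ranked:
        print(f"{name}: score {r['probability'] * r['impact']}")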

8. C. NIST defines risk mitigation as “prioritizing, evaluating, and implementing the appropriate risk-reducing controls/countermeasures recommended from the risk management process.”

9. A. Person-made disasters are disasters caused by human action or negligence, whether intentional or accidental.

10. A. Mean time between failures (MTBF) is the predicted average time that elapses between failures of the product in question; it is commonly derived from the failure rate, such as the number of failures per million hours of operation.

11. A. Although it's impossible to predict exactly when a product will fail, failure can be quantified on an average basis using concepts such as mean time between failures (MTBF).
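
The arithmetic behind MTBF is straightforward, as in this small Python example with invented numbers.

    # Invented example: 1,000 identical drives each run for 1,000 hours
    # (1,000,000 unit-hours of operation), during which 4 drives fail.
    total_hours = 1_000 * 1_000
    failures = 4

    mtbf = total_hours / failures          # average hours between failures
    rate_per_million = failures / total_hours * 1_000_000

    print(f"MTBF: {mtbf:,.0f} hours")                                 # 250,000 hours
    print(f"Failure rate: {rate_per_million:.1f} per million hours")  # 4.0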

Review Questions

1. Recovery plan

2. Recovery time objective (RTO)

3. Impact determination

4. Residual risk

5. Single point of failure

6. Mean time between failures (MTBF)

7. Quantitative risk assessment

8. Risk mitigation

9. Qualitative risk assessment

10. Risk assessment

Chapter 35

Do I Know This Already?

1. A. Regardless of what industry you are responsible for protecting, the ultimate goal is to protect your intellectual property. This is, of course, the organization's crown jewels, which can take many different forms. For a company that develops software, the crown jewels are the source code of the product it is selling. Even a company that produces food products has intellectual property that it is trying to protect, such as a secret recipe.

2. B. The US Securities and Exchange Commission (SEC) requires that any publicly traded company provide a public notification and disclosure of a data breach.

3. A. Unauthorized access to top secret information would cause grave damage to national security.

4. D. Unauthorized access to unclassified information would cause no damage to national security.

5. D. Unauthorized access to private information could cause severe damage to the organization. Examples of information or assets that could receive this type of classification are human resources (HR) information (for example, employee salaries) and medical records.

6. B. According to the Executive Office of the President, Office of Management and Budget (OMB) and the U.S. Department of Commerce, Office of the Chief Information Officer, personally identifiable information, or PII, refers to “information which can be used to distinguish or trace an individual’s identity.”

7. B. Data minimization is a concept or approach to privacy design. The overall idea is simply to minimize the amount of your personal information that is consumed by online entities. Data minimization is a privacy tool that is used in many different ways. For instance, a website may choose not to store your personal information if it is not needed, as opposed to many that store it and even resell it for a profit. Additionally, it can be used to develop policies regarding how long collected data is retained before being permanently deleted. Individuals can also use tools that clear information from applications such as web browsers. As you know, web browsers collect a large amount of data, which could in turn be compromised; minimizing this data reduces the risk of such a compromise.

8. C. Another privacy enhancing technology concept is data masking. The goal of data masking is to protect or obfuscate sensitive data. This goal must be achieved while not rendering the data unusable in any way. An example of data masking being used in a real-world environment would be data that is displayed on terminal screens in banks or doctors’ offices. Social Security numbers can be masked to show only the last four digits so that they can be used for verification purposes while not exposing the full Social Security number.
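
A minimal sketch of that kind of masking in Python; the function name and output formatting are illustrative only.

    def mask_ssn(ssn: str) -> str:
        """Show only the last four digits of a Social Security number."""
        digits = [c for c in ssn if c.isdigit()]
        if len(digits) != 9:
            raise ValueError("expected a 9-digit Social Security number")
        return "***-**-" + "".join(digits[-4:])

    print(mask_ssn("123-45-6789"))  # ***-**-6789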

9. B. The data owner, also called the information owner, is usually part of the management team and maintains ownership of and responsibility over a specific piece or subset of data. Part of the responsibility of this role is to determine the appropriate classification of the information, to ensure that the information is protected with controls, to periodically review classification and access rights, and to understand the risk associated with the information.

10. D. The General Data Protection Regulation (GDPR) in the European Union (EU) defines the information lifecycle in four different phases. The phases are sometimes named differently depending on the source, but they remain the same. The lifecycle starts with the collection of data, the phase in which the data is gathered by the data processor. GDPR states that when collecting data there must be defined consent from the data subject as well as a clear definition of how the data will be used. The overall intent is to follow the principle of collecting only data that is necessary and not overcollecting.

11. A. Before data is collected, stored, secured, and disposed of throughout the information lifecycle, it is important to understand how that data, if compromised, could impact the privacy of the individuals it describes. To accomplish this, an organization should complete an impact assessment on any new project in which data will be collected, as well as any time the scope of the data use changes. These impact assessments are sometimes called Privacy Impact Assessments (PIAs) or Data Privacy Impact Assessments (DPIAs). An impact assessment should produce a report that identifies specific high risks to the data subjects and provides recommendations on how that risk can be minimized. A PIA or DPIA is also required under the GDPR.

12. D. Another privacy concept that has been adopted by the General Data Protection Regulation (GDPR) in the European Union (EU) is the terms of agreement, in many cases called the data processing agreement. The overall purpose of the data processing agreement is to protect the personal information and the individuals the data is about. The agreement is a legal contract agreed upon by any entities that fill the role of data processor in the information lifecycle. This is one of the basic requirements of GDPR and will lead to fines if not followed by an organization collecting data.

13. D. Along with the agreement of how data will be collected, utilized, and processed by an organization, the organization must also provide notification to the individuals it is collecting data from or about. Again, this is a requirement for the General Data Protection Regulation (GDPR) in the European Union (EU). GDPR ensures that individuals are notified about how their data is being used. This is done via a privacy notice. The notice itself is a document sent from the collecting organization stating how it is conforming to data privacy principles.

Review Questions

1. Private

2. Top secret

3. PII

4. PHI

5. Data minimization

6. Tokenization

7. Data controller

8. Data protection officer (DPO)
