Chapter 2
Configure and Implement Endpoint Security Controls

Did you ever think about how many layers are in an onion? As you peel back one layer, there seems to be another after another. A comprehensive security solution should be designed the same way, that is, as a series of layers. This is often called defense in depth. This chapter discusses comprehensive security solutions including hardening techniques, using trusted operating systems, and implementing compensating controls. The defense-in-depth approach looks at more than just basic security concepts. This methodology is the sum of the methods, techniques, tools, people, and controls used to protect critical assets and information.

Hardening Techniques

Hardening techniques include a variety of steps carried out to remove unwanted services and features for the purpose of making it harder for an attacker to access a computer successfully by reducing the attack surface. Because it's easy to overlook something in the hardening process, companies should adopt a standard methodology to harden computers and devices. Different OSs such as macOS, Linux, and Windows will require different security baselines. Some administrators refer to a golden image as a master image that can be used to clone and deploy other devices consistently. System cloning is an effective method of establishing a baseline configuration for your organization. It requires effort and expertise to establish and maintain images for deployment. Also, hardening techniques for workstations will be different from hardening techniques for servers.

Although this may seem like a simple concept, good security practices start with physical security. If an attacker can physically access a system, it becomes a trivial task to take control of it. Systems should be physically secured. Training users to turn off systems when not in use is a basic control, along with the implementation of password-protected screensavers and automatic logoffs.

Physical equipment and software have life cycles and will not last forever. When physical equipment reaches its final stages of use in an organization, plans should be made around end of life (EOL), and that equipment should be removed from the network. Software has a similar cycle, and once it has reached the end of support by the manufacturer, plans should be in place for a replacement.

Hosts should be hardened so that they are secure before the OS even fully boots. Several items can be used as boot loader protections, including the following:

  • Secure Boot This is a security standard developed by members of the PC industry to help make sure your PC boots using only software that is trusted by the device manufacturer. Secure Boot uses self-signed 2048-bit RSA keys in X.509 certificate format and is enabled in the UEFI/BIOS.
  • Measured Launch This method works with the Trusted Platform Module (TPM) and the Secure Boot process to determine whether an OS is allowed to load and which portions can execute. With Secure Boot, PCs with UEFI firmware and a TPM can be configured to load only trusted operating system boot loaders. With Measured Boot (also called Measured Launch), the PC's firmware logs the boot process, and Windows can send that log to a trusted server that will objectively assess the PC's health. A simplified sketch of the hash-extend chain behind this measurement log follows this list.
  • IMA Integrity Measurement Architecture (IMA) was developed by IBM to verify the integrity and trust of Linux OSs.
  • BIOS/UEFI Unified Extensible Firmware Interface (UEFI) first became a requirement with Windows 8. UEFI is a replacement for, or an add-on to, the BIOS that works somewhat like a small OS running before your final OS starts up. It was designed to block rootkits and other malware that could take control of BIOS-based systems.
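
At the heart of Measured Launch is a running hash chain: each boot component is measured (hashed) before it runs, and that measurement is folded into a register the way a TPM extends a PCR. The following minimal sketch shows that extend operation in Python; the component strings are placeholders, not the real UEFI/TPM measurement log.

  import hashlib

  def extend(pcr, measurement):
      # TPM-style extend: new register value = SHA-256(old value || measurement)
      return hashlib.sha256(pcr + measurement).digest()

  # Placeholder "components"; in a real measured boot these would be the
  # firmware, boot loader, kernel, and so on.
  boot_components = [b"firmware image", b"boot loader", b"kernel", b"initrd"]

  pcr = bytes(32)                               # the register starts at all zeros
  for component in boot_components:
      pcr = extend(pcr, hashlib.sha256(component).digest())

  # An attestation server compares this final value against a known-good value
  # to decide whether the platform booted only trusted code.
  print(pcr.hex())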

Securing the network equipment and host computers represents the multilayer security approach that is sometimes called defense in depth. Here are some of the general areas that you should examine when hardening host systems:

  • Using Application Approved List and Application Block/Deny List An application approved list can be defined as a list of entities that are granted access. An application block/deny list is just the opposite; it lists what cannot be accessed. As an example, you might place YouTube on an application block/deny list so that employees cannot access that website. Think of an application approved list as an implicit “allow none” unless an entry is added to the list and an application block/deny list as an implicit “allow all” unless an entry is added to the list. A minimal sketch contrasting the two models appears after this list.
  • Implementing Security/Group Policy Microsoft created Group Policy with the introduction of Windows 2000. You can think of group policies as groupings of user configuration settings and computer configuration settings that can be linked to objects in Active Directory (AD). These are applied to users and computers. Group Policy allows the security administrator to maintain a consistent security configuration across hundreds of computers. When setting up security options in Group Policy, the initial security settings relate specifically to the Account Policies and Local Policies nodes. Account policies contain a password policy, account lockout policy, and Kerberos policy. Local policies apply to audit policies, user rights, and security options.
  • Attestation Services Attestation means that you are validating something as true. Attestation services can be designed as hardware-based, software-based, or hybrid. The Trusted Platform Module (TPM) is a specialized form of hardware security module that creates and stores cryptographic keys. TPM enables tamper-resistant full-disk encryption for a local hard drive.
  • NX/XN Bit Use NX (No-eXecute) is a bit in CPUs that greatly enhances the security of that CPU as it operates. The purpose of NX is to segregate memory areas used for processor instruction and data storage. This feature used to be found only on CPUs with the Harvard architecture (with storage and instruction memory areas separated). Now, thanks to growing concerns for security, processors of the von Neumann architecture (shared storage and instruction memories) are also adopting the NX bit feature.

Today, given various processor manufacturers, these are different terms for essentially the same bit. Obviously, when a CPU manufacturer brands a No-eXecute bit as their own, they can promote its unique security features. Be aware that Intel markets its technology as the XD (execute disable) bit, while AMD brands its technology as EVP (enhanced virus protection). Finally, for CPUs built on the ARM architecture, the feature is called XN (execute never).
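
The contrast between the two list models in the first bullet above is easy to see in code. The following is a minimal sketch only; real application control products (AppLocker or similar) match on publishers, paths, and file hashes, and the application names here are illustrative placeholders.

  # Approved list: implicit "allow none" unless an entry is present.
  APPROVED = {"winword.exe", "excel.exe", "msedge.exe"}

  # Block/deny list: implicit "allow all" unless an entry is present.
  BLOCKED = {"torrent_client.exe", "coin_miner.exe"}

  def allowed_by_approved_list(app):
      return app in APPROVED

  def allowed_by_block_list(app):
      return app not in BLOCKED

  for app in ("winword.exe", "unknown_tool.exe"):
      print(app, allowed_by_approved_list(app), allowed_by_block_list(app))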

Another approach to protecting data is using secure and encrypted enclaves. A secure enclave allows an application to run securely at the hardware level. All data is encrypted in memory and is decrypted only inside the enclave at the hardware level. The data is secure even if the OS or root user is compromised.

Address Space Layout Randomization Use

Address space layout randomization (ASLR) is a technique designed to protect against buffer overflow attacks, initially implemented in 2003. Presently, all major operating systems—server, desktop, and mobile—incorporate ASLR.

How does ASLR work? In a buffer overflow attack, an attacker needs to know the location in the code where a given function accepts input. The attacker will feed just the right amount of garbage to that code location, including a malicious payload. Ideally, the attacker also includes an instruction to go to another point in the code, and the malicious payload and instruction will run with the privileges of the application.

To say that making a buffer overflow work “properly” is difficult is an understatement. Rarely does an attacker have the actual source code to know the precise location in the code where the targeted function accepts input. Even if the location is available, buffer overflow development requires a large number of “hit-and-miss” trials. However, overflow attacks do happen, and worse, they are repeatable, given that the code location doesn't change.

How does ASLR protect against this? ASLR randomizes the location of different portions of the code. Therefore, even if an attacker managed to make a buffer overflow work once, it may never work again on the same code.
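
The effect is simple to observe. The following minimal sketch, which assumes a Linux host with ASLR enabled and Python 3 available, launches several child processes and prints the address where libc's printf happens to be mapped in each one; with ASLR on, the address changes from process to process, which is exactly what breaks a "repeatable" overflow.

  import ctypes
  import ctypes.util
  import subprocess
  import sys

  def libc_printf_address():
      # Load the C library in this process and report where printf landed.
      libc = ctypes.CDLL(ctypes.util.find_library("c"))
      return ctypes.cast(libc.printf, ctypes.c_void_p).value

  if __name__ == "__main__":
      if "--child" in sys.argv:
          print(hex(libc_printf_address()))
      else:
          for _ in range(3):
              out = subprocess.run([sys.executable, __file__, "--child"],
                                   capture_output=True, text=True)
              print("libc printf mapped at", out.stdout.strip())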

The challenge to software developers is that their code must be compiled to support ASLR from the start. Many years ago this posed a difficult hurdle, but ASLR support is now the default.

Even so, the implementation of ASLR is not infallible with regard to application compatibility. In late November 2017, it was suspected that Microsoft's ASLR was broken in Windows versions 8 through 10. Microsoft explained that the problem was a configuration issue when working with applications that don't opt in to ASLR. More can be learned about mandatory ASLR here:

blogs.technet.microsoft.com/srd/2017/11/21/clarifying-the-behavior-of-mandatory-aslr

Hardware Security Module and Trusted Platform Module

A wide variety of products are available to encrypt data in existing disk and media drive products. Data-at-rest encryption options include software encryption, such as the Encrypting File System (EFS) and VeraCrypt.
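
As a point of comparison with the hardware options discussed next, software encryption of data at rest can be illustrated in a few lines. The sketch below is not EFS or VeraCrypt; it simply shows the same idea in miniature, and it assumes the third-party Python cryptography package is installed.

  from cryptography.fernet import Fernet

  key = Fernet.generate_key()            # in practice, protect this key carefully
  cipher = Fernet(key)

  plaintext = b"example payroll record"  # stands in for a file's contents
  ciphertext = cipher.encrypt(plaintext)

  with open("record.enc", "wb") as f:    # the data at rest is now unreadable
      f.write(ciphertext)

  # Only a holder of the key can recover the original data.
  assert cipher.decrypt(ciphertext) == plaintext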

There are two well-known hardware encryption options to better protect data. Those hardware encryption options are the Hardware Security Module (HSM) and the Trusted Platform Module (TPM).

An HSM is a type of secure cryptoprocessor used for managing cryptographic keys. While connected to an HSM, a system can make keys, sign objects, and validate signatures.

A TPM is a specialized chip that can be installed on the motherboard of a computer, and it is used for hardware authentication. The TPM authenticates the computer in question rather than the user. It uses the boot sequence of the computer to determine the trusted status of a platform. The TPM places the cryptographic processes at the hardware level. If someone removes the drive and attempts to boot it from another computer, the drive will deny all access. This provides a greater level of security than a software encryption option that may have been used to encrypt only a few folders on the hard drive. TPM was designed as an inexpensive way to report securely on the environment that booted and to identify the system.

Both HSM and TPM work well for hard drives and fixed storage devices, but portable devices must also be protected against damage, unauthorized access, and exposure. One good approach is to require all employees who use portable devices, USB thumb drives, handheld devices, or any removable storage media devices to be held responsible for their safekeeping and proper security. This starts with policy and extends to user training. For example, policy might be configured to require laptop and tablet computer users to connect to the corporate intranet at least once a week to receive the latest software patches and security updates. Policy can also be established that requires the use of encryption on portable devices. Depending on the company and the level of security needed, the security professional might also restrict the use of personal devices at work and block the ability of these devices to be plugged into company equipment.

Another option for drive encryption is a self-encrypting drive (SED). A SED is a hard disk drive (HDD) or solid-state drive (SSD) designed to automatically encrypt and decrypt drive data without the need for user input or disk encryption software. When the SED is powered on in the host system, data being written to and read from the drive is being encrypted and decrypted instantly; no other steps or software are needed to encrypt and decrypt the drive's data.

As you have learned so far, security professionals need to know about many common types of security tools, techniques, and procedures, as well as when and how to use them. Hardening techniques focus on reducing the attack surface of an endpoint system by disabling unnecessary or unwanted services and changing security options from defaults to more secure settings that match the device's risk profile and security needs.

Patching and updating systems also help. Having a fully patched system image is part of a hardening process. System configuration standards, naming standards, hardening scripts, programs, and procedures help to ensure that systems are correctly inventoried and protected. Drive encryption keeps data secure if drives are stolen or lost. At the end of their life cycle, when devices are retired or fail, sanitization procedures are used to ensure that remnant data doesn't leak. Wiping drives and physical destruction are both common options.
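
As a deliberately simplistic illustration of the sanitization idea, the sketch below overwrites a single file with random data before deleting it. Real drive sanitization happens at the device level or through physical destruction, and overwriting alone is not reliable on SSDs because of wear leveling; the file name here is a placeholder created just for the example.

  import os

  def overwrite_and_delete(path, passes=3):
      # Overwrite the file's contents with random bytes several times,
      # flush each pass to disk, then remove the file.
      size = os.path.getsize(path)
      with open(path, "r+b") as f:
          for _ in range(passes):
              f.seek(0)
              f.write(os.urandom(size))
              f.flush()
              os.fsync(f.fileno())
      os.remove(path)

  # Create a placeholder file so the sketch is self-contained, then wipe it.
  with open("old_export.csv", "w") as f:
      f.write("name,id\nexample,0000\n")
  overwrite_and_delete("old_export.csv")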

Other controls and techniques can include the following:

  • Using a Standard Operating Environment A standard operating environment is a standard build of a host system. The idea is that a standard build is used throughout the organization. One advantage is the reduction in the total cost of ownership (TCO). However, the real advantage is that the configuration is consistent. A standardized image is also easier to test, because updates and security patches are applied against a uniform environment.
  • Fixing Known Vulnerabilities Building a secure baseline is a good start to host security, but one big area of concern is fixing known vulnerabilities. To stay on top of this process, you should periodically run vulnerability assessment tools. Vulnerability assessment tools such as Nessus, SAINT, and Retina are designed to run on a weekly or monthly basis to look for known vulnerabilities and problems. Identifying these problems and patching them in an expedient manner helps to reduce overall risk of attack.

    Exercise 2.1 shows you how to run a security scanner to identify vulnerabilities.

  • Hardening and Removing Unnecessary Services Another important component of securing systems is the process of hardening the system. The most direct way of beginning this process is by removing unwanted services. Think of it as a variation of the principle of least privilege. This process involves removing unnecessary applications, disabling unneeded services, closing unnecessary ports, and setting restrictive permissions on files. It reduces the attack surface and is intended to make the system more resistant to attack. Although you should apply the process to all systems for which you are responsible, you must handle each OS uniquely and take different steps to secure it. A minimal sketch of checking a host for open ports that fall outside an approved baseline appears after this list.
  • Applying Command Shell Restrictions Restricting the user's access to the command prompt is another way to tighten security. Many commands that a user can run from the command prompt can weaken security or allow a malicious individual to escalate privilege on a host system. Consider the default configuration of a Windows Server 2019 computer: Telnet, TFTP, and a host of other command-line executables are turned off by default. This is a basic example of command-line restrictions. In another example, say that you have a kiosk in the lobby of your business where customers can learn more about your products and services and even fill out a job application. Additional capability should be disabled to implement the principle of least privilege. Although it is important to provide users with what they need to do the job or task at hand, it's good security practice to disable access to nonessential programs. In some situations, this may include command shell restrictions. Allowing a user to run commands from the command line can offer a hacker an avenue for attack. Command-line access should be restricted unless needed.

    Exercise 2.2 shows you how to bypass command shell restrictions.

  • Using Warning Banners Warning banners are brief messages that inform users of specific policies and procedures regarding the use of applications and services. A warning banner can be a splash screen, pop-up, or message box that informs the user of specific rules. Warning banners are crucial in that they inform the user about specific behavior or activities that may or may not be allowed. As the warning banner states the result of specific behavior, any excuses are removed from the user so that a violation can be logged. Warning banners should contain what is considered proper usage, expectations of privacy, and penalties for noncompliance.
  • Using Restricted Interfaces A restricted interface is a profile that dictates what programs, menus, applications, commands, or functions are available within an environment. This technique allows a security administrator to control the user's environment and dictate the objects to which they have access. The environment is considered a restricted interface because the user can use it only to interface with the operating system, installed applications, and resources. In modern operating systems, an individual profile can follow the user to any mobile device under the administrator's control.
  • Configuring Dedicated Interfaces A dedicated interface is a port that is devoted to specific traffic. As an example, many companies place their wireless LAN on a dedicated interface and keep it separate from other internal network traffic.
  • Using Out-of-Band Management Out-of-band management is the concept of employing a dedicated management channel, separate from the network channel or cabling used by servers.
  • Configuring a Management Interface A management interface is designed to be used as a way to manage a computer or server that may be powered off or otherwise unresponsive. A management interface makes use of a network connection to the hardware rather than to an operating system or login shell. Management interfaces often use an out-of-band NIC.
  • Managing a Data Interface A data interface is used with databases to generate process templates. Process templates are reusable collections of activity types. They allow system integrators and others who work with different clients to manipulate similar types of data.
  • Scripting and Replication One of the great things about PowerShell is its ability to easily script Windows commands and SQL Server objects. You can also use it to script replication objects. This can be used as part of a disaster recovery plan so that you always have a script available to re-create replications.

    Scripting and replication are also an approach for automating patch management.
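
To make the port-baseline idea from the hardening bullet concrete, the following minimal sketch checks a host for listening ports that fall outside an approved baseline. It is in no way a substitute for a vulnerability scanner such as Nessus; the host address (a TEST-NET example) and the port lists are placeholders.

  import socket

  HOST = "192.0.2.10"                 # placeholder address (TEST-NET)
  APPROVED_PORTS = {22, 443}          # what the hardening standard allows
  PORTS_TO_CHECK = [21, 22, 23, 80, 443, 3389]

  def is_open(host, port, timeout=1.0):
      # A TCP connect that succeeds means something is listening on the port.
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.settimeout(timeout)
          return s.connect_ex((host, port)) == 0

  for port in PORTS_TO_CHECK:
      if is_open(HOST, port) and port not in APPROVED_PORTS:
          print(f"Port {port} is open on {HOST} but is not in the baseline")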

Trusted Operating Systems

A trusted operating system (trusted OS) can be defined as one that has implemented sufficient controls to support multilevel security. Multilevel security provides the OS with the ability to process and handle information at different security levels. At the very least, this granularity may mean that you can process data as a user or as root or administrator. Trusted OSs must be tested and validated against specific standards to demonstrate evidence of correctness. Testing gives the OS vendor a way to promote the features of the system, and it allows the buyer to verify that the OS performs in the manner the vendor claims.

Trusted operating systems extend beyond software and have to take into consideration the hardware on which they reside. This is the purpose of the trusted computer base. The trusted computer base (TCB) is the sum of all of the protection mechanisms within a computer, and it is responsible for enforcing the security policy. This includes hardware, software, controls, and processes.

The following documents are some of the guidelines used to validate a trusted OS:

  • Trusted Computer System Evaluation Criteria (TCSEC)

    One of the original trusted OS testing standards was the Trusted Computer System Evaluation Criteria (TCSEC). TCSEC, also known as the Orange Book, was developed to evaluate stand-alone systems. It has since been deprecated and was long ago replaced by the Common Criteria, but it deserves mention because it was one of the first trusted OS testing standards. Its basis of measurement is confidentiality. It was designed to rate systems and place them into one of four categories:

    • A: Verified Protection An A-rated system is the highest security division.
    • B: Mandatory Security A B-rated system has mandatory protection of the TCB.
    • C: Discretionary Protection A C-rated system provides discretionary protection of the TCB.
    • D: Minimal Protection A D-rated system fails to meet any of the standards of A, B, or C, and basically it has no security controls.
  • Information Technology Security Evaluation Criteria Information Technology Security Evaluation Criteria (ITSEC) was another early standard developed in the 1980s and first published in May 1990. It was designed to meet the needs of the European market. ITSEC examines the confidentiality, integrity, and availability of an entire system. It was unique in that it was the first standard to unify markets and bring all of Europe under one set of guidelines. The evaluation is actually divided into two parts: one part evaluates functionality, and the other part evaluates assurance. There are 10 functionality (F) classes and 7 assurance (E) classes. Assurance classes rate the effectiveness and correctness of a system.
  • Common Criteria The International Organization for Standardization (ISO) created the Common Criteria (ISO 15408) to be a global standard that built on TCSEC, ITSEC, and others. The Common Criteria essentially replaced ITSEC. It examines different areas of the trusted OS, including physical and logical controls, startup and recovery, reference mediation, and privileged states. The Common Criteria categorizes assurance into one of seven increasingly strict levels, referred to as evaluation assurance levels (EALs). EALs provide a specific level of confidence in the security functions of the system being analyzed. The seven levels of assurance are as follows:
    • EAL 1: Functionality tested
    • EAL 2: Structurally tested
    • EAL 3: Methodically checked and tested
    • EAL 4: Methodically designed, tested, and reviewed
    • EAL 5: Semi-formally designed and tested
    • EAL 6: Semi-formally verified, designed, and tested
    • EAL 7: Formally verified, designed, and tested

Regardless of how it is tested or which specific set of criteria is used, a trusted OS includes the following basic attributes:

  • Hardware Protection A trusted OS must be designed from the ground up. Secure hardware is the beginning.
  • Long-Term Protected Storage A trusted OS must have the ability to offer protected storage that lasts across power cycles and other events.
  • Isolation A trusted OS must be able to isolate programs. It must be able to keep program A from accessing information from program B.
  • Separation of User Processes from Supervisor Processes User and supervisor functions must be separated.

The TCB is responsible for confidentiality and integrity. It is the only portion of a system that operates at a high level of trust. This level of trust is where the security kernel resides. The security kernel handles all user and application requests for access to system resources. A small security kernel is easy to verify, test, and validate as secure.

So, while the trusted OS is built on the TCB, both of these concepts are based on theory. Much of the work on these models started in the early 1970s. During this period, the U.S. government funded a series of papers focused on computer security. These papers form the basic building blocks for trusted computing security models. Security models determine how security will be implemented, what subjects can access the system, and to what objects they will have access. Simply stated, they are a way to formalize the design of a trusted OS. Security models build on controls designed to enforce integrity and confidentiality.

Mandatory access control (MAC) has been used by the government for many years. All files controlled by MAC policies are labeled with categorized security levels such as confidential, secret, or top secret, and subjects may access objects only at the same or a lower level than their own clearance. Overriding MAC requires authorization from senior management.
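
The "same or lower level" rule is easy to express as a comparison of labels. The sketch below reduces the classification lattice to a simple ordered list and shows only the mandatory "no read up" decision; real MAC implementations such as SELinux and Trusted Solaris also use categories and type enforcement.

  # Traditional U.S. government levels, lowest to highest.
  LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

  def may_read(subject_clearance, object_label):
      # A subject may read an object at the same or a lower level only.
      return LEVELS[subject_clearance] >= LEVELS[object_label]

  print(may_read("secret", "confidential"))   # True  - reading down is allowed
  print(may_read("secret", "top secret"))     # False - reading up is denied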

Examples of trusted OSs include SELinux, SEAndroid, and Trusted Solaris. SELinux (Security-Enhanced Linux), available now for just over 20 years, started as a collaborative effort between the National Security Agency (NSA) and Red Hat, and it continues to be improved. SELinux brings MAC to the Linux kernel, allowing for much stricter access control. For the CASP+ exam, remember that SELinux enforces MAC in the kernel itself, which distinguishes kernel-level security from the middleware discussed next.

Middleware is a type of computer software that provides services to software applications beyond those available from the operating system. It can be described as “software glue.” Middleware makes it easier for software developers to implement communication and input/output, so they can focus on the specific purposes of their applications. While core kernel functionality can be provided only by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems.

The Android operating system uses the Linux kernel at its core and also provides an application framework that developers incorporate into their applications. In addition, Android provides a middleware layer, including libraries that provide services such as data storage, screen display, multimedia, and web browsing. Because the middleware libraries are compiled to machine language, services execute quickly. Middleware libraries also implement device-specific functions, so applications and the application framework need not concern themselves with variations between various Android devices.

SEAndroid brings the same MAC benefit to the Android kernel. Android uses the concept of application sandboxing, or isolating and restricting its applications in their own respective memory and drive space. Starting with version 4.3, Android adopted SELinux to extend that isolation even further. Between versions 4.3 and 5.0, Android enforced SELinux only partially, for a subset of domains (in Android terms, a domain is akin to a running process). With Android 5.0 and later, SELinux is fully enforced in the kernel.

Trusted Solaris also provides MAC as well as features like read-only protection for host or guest environments, which Solaris dubs “immutable zones.” The immutability is applied via a zone configuration property that sets any exemptions to the file system; those exemptions define where writes are still permitted. At the time of this writing, the property can be set to one of five possible settings, ranging from “strict” (absolutely no writes) to “none” (full read-write access), with intermediate variants that allow access to the /etc and /var directories.

Compensating Controls

Security is hardly a new concern for most organizations. In many companies, security is relegated to the technology agenda and gets only marginal attention and budget consideration. In today's economy, many chief security officers (CSOs) are being asked to provide better security than was provided yesterday, with more modest budgets. For companies to survive in today's world, a paradigm shift is needed: the real threat is no longer a stranger lurking outside the company's main gate. Over the last decade, information-related crime and cyberattacks have become the crime of choice for a growing cadre of criminals.

Effective security requires the CASP+ to work with others throughout the organization to integrate the needs of the company into holistic security solutions using compensating controls. Given a scenario, a CASP+ should be able to facilitate collaboration across diverse business units to achieve the related security goals. A comprehensive security solution is essential to the enterprise's continuity of business operations and maintaining the confidentiality and integrity of data. The integration of enterprise tools is needed to protect information and systems from unauthorized access, use, disclosure, disruption, modification, or destruction and sometimes requires thinking outside the box.

  • Antivirus This is a no-brainer, right? There is no such thing as a 100 percent trusted network, and endpoints are vulnerable to the network they connect to. Give your endpoints the added protection of antivirus unless, for some specialized reason, it would cause interruptions.
  • Application Controls If changes to an application can reduce risk while business needs remain satisfied, then why not make use of application controls that further harden the system? Application control includes completeness and validity checks, identification, authentication, authorization, input controls, and forensic controls, among others. An example of an application control is the validity check, which reviews the data entered into a data entry screen to ensure that it meets a set of predetermined range criteria (a minimal sketch of such a check appears after this list).
  • Host-Based Intrusion Detection System (HIDS)/Host-Based Intrusion Prevention System (HIPS) HIDSs and HIPSs can be useful as detective and preventive controls. They provide more information for your security operations center (SOC) personnel. During incident handling, HIDSs and HIPSs can also help contain the incident by letting you know whether other hosts were affected.
  • Host-Based Firewall This is another endpoint layer of defense and another reminder that no network is to be trusted 100 percent. A host-based firewall may not stop a host from launching an attack, but it can help keep that host from becoming another victim.
  • Endpoint Detection and Response (EDR) EDR is a relatively new term, but in the age of advanced persistent threats (APTs), your endpoints will benefit from endpoint detection and response. EDR is far more comprehensive and capable than a HIDS/HIPS. An EDR solution offers multiple capabilities and tools. EDR software is used to help companies identify and remediate threats related to network-connected endpoints. These tools inform security professionals of vulnerable or infected endpoints and guide them through the remediation process. After incidents have been resolved, EDR tools help teams investigate issues and the vulnerable components that allowed an endpoint to become compromised. Here are some examples:
    • MVISION Endpoint Security
    • VMware Carbon Black EDR
    • Palo Alto Networks Traps
    • Microsoft Defender for Endpoint
  • Redundant Hardware Consider the scenario where you know the mean time between failures (MTBF) of a particular technology is unacceptable. It could be because that business need is particularly critical and cannot be serviced easily. But one way to reduce downtime is to inject some high availability (HA) in there, utilizing redundant hardware. Where there is one, make it two. With redundant hardware, the MTBF hasn't changed, but the risk of a failure causing an outage is much lower.
  • Self-Healing Hardware Another new term and concept, self-healing hardware is pretty self-explanatory. In the event of a failure or security incident, your hardware detects, responds to, and fixes the failure's impact. Personally, I find this concept a bit baffling, but CompTIA would like to make sure you're aware of it. You should understand that self-healing is not limited to hardware. Consider the scenario where a system responds to a software issue by rolling back the change to resume a known-good state.
  • Self-Encrypting Drives Similar to autonomous self-healing hardware, a self-encrypting drive automatically initiates encryption of newly written data without user intervention.
  • User and Entity Behavior Analytics (UEBA) This is a fascinating expansion of the older field of user behavior analytics, in which only employee behavior is monitored. User and entity behavior analytics extends that monitoring to entities such as devices and applications as well as users. This technology helps mitigate a variety of risks by detecting odd behavior, such as an unauthorized user or a Trojaned device.
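
The validity check mentioned under application controls above is simple to picture in code. The sketch below rejects input from a data entry screen unless each field falls within a predetermined range; the field names and ranges are purely illustrative.

  # Predetermined range criteria for each data entry field (illustrative).
  RULES = {
      "age": (18, 120),
      "hours_worked": (0, 80),
      "discount_pct": (0, 25),
  }

  def validity_check(record):
      # Return a list of validation errors; an empty list means the input passes.
      errors = []
      for field, (low, high) in RULES.items():
          value = record.get(field)
          if value is None or not (low <= value <= high):
              errors.append(f"{field}={value!r} is outside {low}-{high}")
      return errors

  print(validity_check({"age": 34, "hours_worked": 95, "discount_pct": 10}))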

Summary

A CASP+ must understand the need to harden and secure endpoint devices. Securing the environment must include the endpoint rather than ignore this last line of defense. Various technologies and techniques were discussed that leave the endpoint more secure. Be familiar with all the listed compensating and mitigating controls, and understand how each control may or may not reduce a particular risk.

The CASP+ should understand how trusted operating systems can provide a far smaller attack surface, providing security from the kernel outward. Lastly, the CASP+ should be able to name and explain the purpose of various hardware and software-based controls.

Exam Essentials

Understand how specific endpoints face different risks. Consider scenarios where risks may affect endpoints differently. What sort of hardening or controls would apply? Controls may or may not be applied for a variety of reasons.

Know why and when to apply hardening techniques. Consider scenarios where certain hardening techniques would or would not be effective.

Know that techniques are not exclusive or one-size-fits-all. Of course, as you read through techniques or technologies discussed in the chapter, think about how you can (and perhaps should) apply multiple controls to maximize risk mitigation.

Understand how a compensating control might mitigate a risk. The exam might throw a risk at you and then offer several compensating controls. Will you be able to spot which control will have the best (or least) effect on that risk? Be familiar with compensating controls such as antivirus, application controls, HIDSs/HIPSs, host-based firewalls, endpoint detection and response (EDR), redundant hardware, self-healing hardware, and user and entity behavior analytics (UEBA).

Review Questions

You can find the answers in the Appendix.

  1. What term describes removing unwanted services and features for the purpose of making it more difficult for an attacker to attack a computer successfully?
    1. Locking down
    2. Reducing the attack surface
    3. Hardening
    4. Mitigating risk
  2. Which of the following areas are included as part of the Trusted Computer Base?
    1. Hardware
    2. Hardware and firmware
    3. Processes and controls
    4. All of the above
  3. The Hardware Security Module (HSM) and the Trusted Platform Module (TPM) provide what hardening technique?
    1. Hard drive encryption
    2. Trusted user authentication
    3. Portable drive encryption
    4. Protection against buffer overflow
  4. Which trusted OS started as a collaborative effort between the NSA and Red Hat?
    1. SEAndroid
    2. SELinux
    3. Trusted Solaris
    4. TrustedARM
  5. Which of the following will have the least effect in reducing the threat of personal portable drives being used in the organization?
    1. Policy
    2. User training
    3. Host-based HSM and TPM
    4. Prohibiting personal portable drives in the organization
  6. Which is not a trusted operating system?
    1. SEAndroid
    2. SELinux
    3. Trusted Solaris
    4. TrustedARM
  7. What cryptoprocessor is used to manage cryptographic keys?
    1. Trusted Platform Module (TPM)
    2. Hardware Security Module (HSM)
    3. Self-encrypting drive (SED)
    4. Unified Extensible Firmware Interface (UEFI)
  8. What is the primary purpose of attestation services?
    1. Authenticating processes
    2. Attesting false positives
    3. Validating something as true
    4. Isolating a process from attack
  9. Which of the following is NOT a basic attribute of a trusted OS?
    1. Long-term protected storage
    2. Separation of user processes from supervisor processes
    3. Isolation
    4. Air gap
  10. What is a primary benefit of using a standard build or standard operating systems throughout the organization?
    1. Reduced cost of ownership
    2. Patch management diversity
    3. Increased logging
    4. Smaller network footprint
  11. Which of the following is used with databases to generate process templates?
    1. Management interface
    2. Dedicated interface
    3. Data interface
    4. Restricted interface
  12. What standard replaced the Trusted Computer System Evaluation Criteria (TCSEC), developed to evaluate stand-alone systems?
    1. Rainbow tables
    2. Red teaming
    3. Orange U-hardening
    4. Common Criteria
  13. What compensating control is a form of high availability (HA)?
    1. Endpoint detection and response (EDR)
    2. Host-based firewall
    3. Host-based intrusion detection system (HIDS)
    4. Redundant hardware
  14. How many evaluation assurance levels (EALs) are referenced in the Common Criteria?
    1. Five
    2. Six
    3. Seven
    4. Eight
  15. What term describes a hard drive that automatically initiates encryption of newly written data?
    1. Self-healing drive
    2. TBD encryption
    3. Self-encrypting drive
    4. TPM-based encryption
  16. What hardening technique was designed to block rootkits and other malware that could take control of BIOS-based systems and was first required in Windows 8?
    1. BIOS/UEFI
    2. NX/XN
    3. ASLR
    4. SEDs
  17. What is the purpose of the NX (No-eXecute) bit?
    1. Monitor for buffer overflow attempts
    2. Perform hardware encryption during processing
    3. Segregate the processor's memory areas
    4. Allow the BIOS to be protected
  18. What technology helps mitigate a variety of risks by detecting odd behavior, such as detecting an unauthorized user or a Trojaned device?
    1. SED
    2. TPM
    3. UEBA
    4. UA
  19. How does ASLR protect against buffer overflow attacks?
    1. Relocating the process in memory
    2. Encrypting executable code
    3. Randomizing portions of the code
    4. Encrypting code while in memory during processing
  20. What is the term that describes the isolation and restriction of applications in their own respective memory and drive space in the trusted OS SEAndroid?
    1. Security enhanced applications
    2. Out-of-band applications
    3. Application sandboxing
    4. Application isolation