Did you ever think about how many layers are in an onion? As you peel back one layer, there always seems to be another beneath it. A comprehensive security solution should be designed the same way, that is, as a series of layers. This is often called defense in depth. This chapter discusses comprehensive security solutions including hardening techniques, using trusted operating systems, and implementing compensating controls. The defense-in-depth approach looks at more than just basic security concepts. This methodology is the sum of the methods, techniques, tools, people, and controls used to protect critical assets and information.
Hardening techniques include a variety of steps carried out to remove unwanted services and features for the purpose of making it harder for an attacker to access a computer successfully by reducing the attack surface. Because it's easy to overlook something in the hardening process, companies should adopt a standard methodology to harden computers and devices. Different OSs such as macOS, Linux, and Windows will require different security baselines. Some administrators refer to a golden image as a master image that can be used to clone and deploy other devices consistently. System cloning is an effective method of establishing a baseline configuration for your organization. It requires effort and expertise to establish and maintain images for deployment. Also, hardening techniques for workstations will be different from hardening techniques for servers.
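As a minimal sketch of how a golden-image baseline might be audited in practice, the following Python fragment compares a host's reported settings against a hypothetical baseline. The setting names and values are illustrative only, not drawn from any real hardening guide or tool:

```python
# Sketch of a golden-baseline audit. The setting names and the baseline
# values below are illustrative, not drawn from any real hardening guide.
GOLDEN_BASELINE = {
    "telnet_enabled": False,       # unwanted legacy service should be off
    "firewall_enabled": True,      # host firewall should be on
    "password_min_length": 14,     # example password-policy setting
}

def audit(host_settings: dict) -> list[str]:
    """Return the names of settings that deviate from the golden baseline."""
    return [name for name, wanted in GOLDEN_BASELINE.items()
            if host_settings.get(name) != wanted]

# A host that still runs telnet and allows short passwords:
findings = audit({"telnet_enabled": True,
                  "firewall_enabled": True,
                  "password_min_length": 8})
print(findings)  # → ['telnet_enabled', 'password_min_length']
```

Real baseline tools work the same way at much larger scale: a declared desired state, a measured actual state, and a report of the deviations that need remediation.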
Although this may seem like a simple concept, good security practices start with physical security. If an attacker can physically access a system, it becomes a trivial task to take control of it. Systems should be physically secured. Training users to turn off systems when not in use is a basic control, along with the implementation of password-protected screensavers and automatic logoffs.
Physical equipment and software have life cycles and will not last forever. When physical equipment reaches its final stages of use in an organization, plans should be made around end of life (EOL), and that equipment should be removed from the network. Software has a similar cycle, and once it has reached the end of support by the manufacturer, plans should be in place for a replacement.
Hosts should be hardened so that they are secure before the OS even fully boots. Several items can be used as boot loader protections, including the following:
Securing the network equipment and host computers represents the multilayer security approach that is sometimes called defense in depth. Here are some of the general areas that you should examine when hardening host systems:
Today, different processor manufacturers use different names for what is essentially the same no-execute (NX) bit. Obviously, when a CPU manufacturer brands the bit as its own, it can promote its unique security features. Be aware that Intel markets its technology as the XD (execute disable) bit, while AMD brands its technology as EVP (enhanced virus protection). Finally, for CPUs built on the ARM architecture, the feature is called XN (execute never).
Another approach to protecting data is using secure and encrypted enclaves. A secure enclave allows an application to run securely at the hardware level. All data is encrypted in memory and is decrypted only while at the hardware level. The data is secure even if the OS or root user is compromised.
Address space layout randomization (ASLR) is a technique designed to protect against buffer overflow attacks, initially implemented in 2003. Presently, all major operating systems—server, desktop, and mobile—incorporate ASLR.
How does ASLR work? In a buffer overflow attack, an attacker needs to know the location in the code where a given function accepts input. The attacker will feed just the right amount of garbage to that code location, including a malicious payload. Ideally, the attacker also includes an instruction to go to another point in the code, and the malicious payload and instruction will run with the privileges of the application.
To say that making a buffer overflow work “properly” is difficult is an understatement. Rarely does an attacker have the actual source code to know the precise location in the code where the targeted function accepts input. Even if the location is available, buffer overflow development requires a large number of “hit-and-miss” trials. However, overflow attacks do happen, and worse, they are repeatable, given that the code location doesn't change.
How does ASLR protect against this? ASLR randomizes the location of different portions of the code. Therefore, even if an attacker managed to make a buffer overflow work once, it may never work again on the same code.
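To see this randomization firsthand, the following Python sketch spawns two fresh interpreter processes and reports the address at which each one finds libc's printf. This assumes a Unix-like platform where `ctypes.CDLL(None)` exposes the C library; on Windows the lookup would differ. With ASLR enabled, the two fresh processes normally map libc at different base addresses:

```python
import subprocess
import sys

# Each child process asks ctypes for the load address of libc's printf.
# (Assumes a Unix-like platform where CDLL(None) exposes the C library.)
SNIPPET = ("import ctypes;"
           "print(ctypes.cast(ctypes.CDLL(None).printf, ctypes.c_void_p).value)")

def libc_symbol_address() -> int:
    """Spawn a fresh interpreter and report where it finds printf."""
    out = subprocess.run([sys.executable, "-c", SNIPPET],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

addr_a = libc_symbol_address()
addr_b = libc_symbol_address()
# With ASLR enabled, the two processes normally map libc at different
# bases, so these addresses usually differ between runs.
print(hex(addr_a), hex(addr_b))
```

An attacker who hardcoded `addr_a` into an exploit would find it stale by the time the next process launched, which is exactly the protection ASLR provides.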
The challenge to software developers is that their code must be compiled to support ASLR from the start. Many years ago this posed a difficult hurdle, but ASLR support is now the default.
Even so, the implementation of ASLR is not infallible with regard to application compatibility. In late November 2017, it was suspected that Microsoft's ASLR was broken in Windows versions 8 through 10. Microsoft explained that the problem was a configuration issue when working with applications that don't opt in to ASLR. More can be learned about mandatory ASLR here:
blogs.technet.microsoft.com/srd/2017/11/21/clarifying-the-behavior-of-mandatory-aslr
A wide variety of products are available to encrypt data on existing disk and media drives. Data-at-rest encryption options include software encryption, such as the Encrypting File System (EFS) and VeraCrypt.
There are two well-known hardware encryption options to better protect data. Those hardware encryption options are the Hardware Security Module (HSM) and the Trusted Platform Module (TPM).
An HSM is a type of secure cryptoprocessor used for managing cryptographic keys. While connected to an HSM, a system can make keys, sign objects, and validate signatures.
A TPM is a specialized chip installed on the motherboard of a computer and used for hardware authentication. The TPM authenticates the computer in question rather than the user. It uses the boot sequence of the computer to determine the trusted status of a platform, placing the cryptographic processes at the hardware level. If someone removes the drive and attempts to boot it from another computer, the drive will deny all access. This provides a greater level of security than a software encryption option that may have been used to encrypt only a few folders on the hard drive. TPM was designed as an inexpensive way to report securely on the environment that booted and to identify the system.
Both HSM and TPM work well for hard drives and fixed storage devices, but portable devices must also be protected against damage, unauthorized access, and exposure. One good approach is to hold all employees who use portable devices, USB thumb drives, handheld devices, or any removable storage media responsible for their safekeeping and proper security. This starts with policy and extends to user training. For example, policy might require laptop and tablet computer users to connect to the corporate intranet at least once a week to receive the latest software patches and security updates. Policy can also require the use of encryption on portable devices. Depending on the company and the level of security needed, the security professional might also restrict the use of personal devices at work and block the ability of these devices to be plugged into company equipment.
Another option for drive encryption is a self-encrypting drive (SED). A SED is a hard disk drive (HDD) or solid-state drive (SSD) designed to automatically encrypt and decrypt drive data without the need for user input or disk encryption software. When the SED is powered on in the host system, data being written to and read from the drive is being encrypted and decrypted instantly; no other steps or software are needed to encrypt and decrypt the drive's data.
As you have learned so far, security professionals need to know about many common types of security tools, techniques, and procedures, as well as when and how to use them. Hardening techniques focus on reducing the attack surface of an endpoint system by disabling unnecessary or unwanted services and changing security options from defaults to more secure settings that match the device's risk profile and security needs.
Patching and updating systems also help. Having a fully patched system image is part of a hardening process. System configuration standards, naming standards, hardening scripts, programs, and procedures help to ensure that systems are correctly inventoried and protected. Drive encryption keeps data secure if drives are stolen or lost. At the end of their life cycle, when devices are retired or fail, sanitization procedures are used to ensure that remnant data doesn't leak. Wiping drives and physical destruction are both common options.
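The overwrite-based wiping mentioned above can be sketched in a few lines of Python. This is an illustration only: overwriting is not dependable on SSDs, where wear leveling can leave old blocks intact, so crypto-erase or physical destruction is preferred there.

```python
import os
import secrets

def wipe_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with random bytes, then delete it.

    Sketch only: overwriting is not dependable on SSDs, where wear
    leveling may leave old physical blocks intact; crypto-erase or
    physical destruction is preferred for such media.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random data, not zeros
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to reach the disk
    os.remove(path)
```

Production sanitization tools apply the same idea at the whole-device level and verify each pass, following guidance such as NIST SP 800-88.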
Other controls and techniques can include the following:
Exercise 2.1 shows you how to run a security scanner to identify vulnerabilities.
Exercise 2.2 shows you how to bypass command shell restrictions.
Scripting and replication also offer an approach to automating patch management.
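A hedged sketch of what such scripting might look like follows. The fragment only builds, and deliberately does not execute, per-host patch commands; the host names and the apt-based update command are assumptions for illustration, not part of any real deployment:

```python
import shlex

# Hypothetical inventory and an apt-based update command; both are
# assumptions for illustration, not part of any real deployment.
HOSTS = ["web01", "web02"]

def build_patch_command(host: str) -> list[str]:
    """Build, but do not run, the ssh command that would patch one host."""
    remote = "sudo apt-get update && sudo apt-get -y upgrade"
    return ["ssh", host, remote]

# Print the commands an operator (or scheduler) could review and run:
for host in HOSTS:
    print(shlex.join(build_patch_command(host)))
```

A real automation pipeline would add error handling, reporting, and staged rollout, but the core pattern of iterating an inventory and applying a standard action is the same.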
A trusted operating system (trusted OS) can be defined as one that has implemented sufficient controls to support multilevel security. Multilevel security provides the OS with the ability to process and handle information at different security levels. At the very least, this granularity may mean that you can process data as a user or as root or administrator. Trusted OSs must be tested to demonstrate evidence of correctness to meet specific standards. These standards require the trusted OS to have undergone testing and validation. Testing offers the OS vendor a way to promote the features of the system. Testing allows the buyer to verify the system and to check that the OS performs in the manner the vendor claims.
Trusted operating systems extend beyond software and have to take into consideration the hardware on which they reside. This is the purpose of the trusted computer base. The trusted computer base (TCB) is the sum of all of the protection mechanisms within a computer, and it is responsible for enforcing the security policy. This includes hardware, software, controls, and processes.
The following documents are some of the guidelines used to validate a trusted OS:
One of the original trusted OS testing standards was the Trusted Computer System Evaluation Criteria (TCSEC). TCSEC, also known as the Orange Book, was developed to evaluate stand-alone systems. It actually has been deprecated and has long ago been replaced by the Common Criteria, but it deserves mention as it was one of the first trusted OS testing standards. Its basis of measurement is confidentiality. It was designed to rate systems and place them into one of four categories:
Regardless of how it is tested or which specific set of criteria is used, a trusted OS includes the following basic attributes:
The TCB is responsible for confidentiality and integrity. It is the only portion of a system that operates at a high level of trust. This level of trust is where the security kernel resides. The security kernel handles all user and application requests for access to system resources. A small security kernel is easy to verify, test, and validate as secure.
So, while the trusted OS is built on the TCB, both of these concepts are based on theory. Much of the work on these models started in the early 1970s. During this period, the U.S. government funded a series of papers focused on computer security. These papers form the basic building blocks for trusted computing security models. Security models determine how security will be implemented, what subjects can access the system, and to what objects they will have access. Simply stated, they are a way to formalize the design of a trusted OS. Security models build on controls designed to enforce integrity and confidentiality.
Mandatory access control (MAC) has been used by the government for many years. Files under MAC policies are assigned categorized security levels such as confidential, secret, and top secret. MAC allows subjects to access objects at the same or lower security levels. Overriding MAC requires authorization from senior management.
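The "no read up" idea behind MAC levels can be modeled in a few lines. This is a toy Bell-LaPadula-style check using standard U.S. classification level names, with "unclassified" added as a floor for illustration:

```python
# Toy Bell-LaPadula-style "no read up" check. The numeric ordering of
# the classification levels is the whole policy.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """A subject may read objects at its own level or below, never above."""
    return LEVELS[subject_level] >= LEVELS[object_level]

print(can_read("secret", "confidential"))      # → True  (reading down is allowed)
print(can_read("confidential", "top secret"))  # → False (no read up)
```

A real MAC implementation also enforces write rules and category compartments, but the lattice comparison shown here is the heart of the model.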
Examples of trusted OSs include SELinux, SEAndroid, and Trusted Solaris. SELinux (Security-Enhanced Linux), available now for just over 20 years, started as a collaborative effort between the National Security Agency (NSA) and Red Hat, and it continues to be improved. SELinux brings MAC to the Linux kernel, allowing for much stricter access control. For the CASP+ exam, remember this point as a way to distinguish kernel from middleware.
Middleware is a type of computer software that provides services to software applications beyond those available from the operating system. It can be described as “software glue.” Middleware makes it easier for software developers to implement communication and input/output, so they can focus on the specific purposes of their applications. While core kernel functionality can be provided only by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems.
The Android operating system uses the Linux kernel at its core and also provides an application framework that developers incorporate into their applications. In addition, Android provides a middleware layer, including libraries that provide services such as data storage, screen display, multimedia, and web browsing. Because the middleware libraries are compiled to machine language, services execute quickly. Middleware libraries also implement device-specific functions, so applications and the application framework need not concern themselves with variations between various Android devices.
SEAndroid brings the same MAC benefit to the Android kernel. Android uses the concept of application sandboxing, or isolating and restricting its applications in their own respective memory and drive space. Starting with version 4.3, Android took on SELinux to extend that isolation even further. Between versions 4.3 and 5.0, Android partially enforced the restriction to a subset of domains. In Android speak, a domain is akin to a running process. With Android kernel 5.0 and later, Android fully enforces SELinux in the kernel.
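The layered Android model described above (per-app UID sandboxing plus SELinux-style domains) can be sketched as a toy check. All app names, UIDs, and domain labels here are illustrative, not real Android policy:

```python
# Toy model of Android's layered isolation: DAC sandboxing (per-app UIDs)
# plus SELinux-style MAC (per-process domains). All names and UIDs here
# are illustrative, not real Android policy.
APPS = {
    "mail":  {"uid": 10001, "domain": "untrusted_app"},
    "sysui": {"uid": 1000,  "domain": "platform_app"},
}

def dac_allows(app: str, file_owner_uid: int) -> bool:
    """DAC sandbox: an app may touch only files owned by its own UID."""
    return APPS[app]["uid"] == file_owner_uid

def mac_allows(app: str, required_domain: str) -> bool:
    """MAC layer: even a UID match is denied unless the domain permits it."""
    return APPS[app]["domain"] == required_domain

# "mail" passes both layers for its own file and domain:
print(dac_allows("mail", 10001) and mac_allows("mail", "untrusted_app"))  # → True
# ...but the MAC layer blocks it from a platform-only domain even as owner:
print(dac_allows("mail", 10001) and mac_allows("mail", "platform_app"))   # → False
```

The point of the second check is exactly what SEAndroid adds: an access that discretionary permissions would allow can still be denied by mandatory policy.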
Trusted Solaris also provides MAC as well as features like read-only protection for host or guest environments that Solaris dubs “immutable zones.” The immunity provided is applied via a zone configuration property file that is used to set any exemptions to the file system; those exemptions allow writes to be permitted. At the time of this writing, the property file is set to one of five possible settings, ranging from “strict” (absolutely no writes) to “none” (full read-write access), with intermediate variants of access to the /etc and /var directories.
Security is hardly a new concern for most organizations. In many companies, security is relegated to the technology agenda and gets only marginal attention and budget consideration. In today's economy, many chief security officers (CSOs) are being asked to provide better security than yesterday on ever more modest budgets. For companies to survive in today's world, a paradigm shift is needed: the real threat is no longer a stranger lurking outside the company's main gate. Over the last decade, information-related crime and cyberattacks have become the crime of choice for a growing cadre of criminals.
Effective security requires the CASP+ to work with others throughout the organization to integrate the needs of the company into holistic security solutions using compensating controls. Given a scenario, a CASP+ should be able to facilitate collaboration across diverse business units to achieve the related security goals. A comprehensive security solution is essential to the enterprise's continuity of business operations and maintaining the confidentiality and integrity of data. The integration of enterprise tools is needed to protect information and systems from unauthorized access, use, disclosure, disruption, modification, or destruction and sometimes requires thinking outside the box.
A CASP+ must understand the need to harden and secure endpoint devices. Securing the environment must include the endpoint, the last line of defense, rather than ignore it. Various technologies and techniques were discussed, including how each leaves the endpoint more secure. Be familiar with all the listed compensating and mitigating controls, and understand how each control may or may not reduce a particular risk.
The CASP+ should understand how trusted operating systems can provide a far smaller attack surface, providing security from the kernel outward. Lastly, the CASP+ should be able to name and explain the purpose of various hardware and software-based controls.
Understand how specific endpoints face different risks. Consider scenarios where risks may affect endpoints differently. What sort of hardening or controls would apply? Controls may or may not be applied for a variety of reasons.
Know why and when to apply hardening techniques. Consider scenarios where certain hardening techniques would or would not be effective.
Know that techniques are not exclusive or one-size-fits-all. Of course, as you read through techniques or technologies discussed in the chapter, think about how you can (and perhaps should) apply multiple controls to maximize risk mitigation.
Understand how a compensating control might mitigate a risk. The exam might throw a risk at you and then offer several compensating controls. Will you be able to spot which control will have the best (or least) effect on that risk? Be familiar with compensating controls such as antivirus, application controls, HIDSs/HIPSs, host-based firewalls, endpoint detection and response (EDR), redundant hardware, self-healing hardware, and user and entity behavior analytics (UEBA).
You can find the answers in the Appendix.