Chapter 18

Implementing Host or Application Security Solutions

This chapter covers the following topics related to Objective 3.2 (Given a scenario, implement host or application security solutions) of the CompTIA Security+ SY0-601 certification exam:

  • Endpoint Protection

    • Antivirus

    • Anti-malware

    • Endpoint detection and response (EDR)

    • DLP

    • Next-generation firewall (NGFW)

    • Host-based intrusion prevention system (HIPS)

    • Host-based intrusion detection system (HIDS)

    • Host-based firewall

  • Boot Integrity

    • Boot security/Unified Extensible Firmware Interface (UEFI)

    • Measured boot

    • Boot attestation

  • Database

    • Tokenization

    • Salting

    • Hashing

  • Application Security

    • Input validations

    • Secure cookies

    • Hypertext Transfer Protocol (HTTP) headers

    • Code signing

    • Allow list

    • Block list/deny list

    • Secure coding practices

    • Static code analysis

      • Manual code review

      • Dynamic code analysis

      • Fuzzing

    • Hardening

      • Open ports and services

      • Registry

      • Disk encryption

      • OS

      • Patch management

        • Third-party updates

        • Auto-update

      • Self-encrypting drive (SED)/full-disk encryption (FDE)

        • Opal

      • Hardware root of trust

      • Trusted Platform Module (TPM)

      • Sandboxing

When applications are not programmed correctly, bad things happen: the application can be exploited, letting attackers in through your own creation, and no amount of network hardening, user training, or auditing will help. Application security is critical to the long-term operations and survival of any business. It begins with a secure design and secure coding, from start to finish, and must be maintained over the lifecycle of the application through consistent patching and testing.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz enables you to assess whether you should read this entire chapter thoroughly or jump to the “Chapter Review Activities” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 18-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”

Table 18-1 “Do I Know This Already?” Section-to-Question Mapping

Foundation Topics Section: Questions

Endpoint Protection: 1–4

Boot Integrity: 5

Database: 6

Application Security: 7

Hardening: 8

Self-Encrypting Drive/Full-Disk Encryption: 9

Hardware Root of Trust: 10

Trusted Platform Module: 11

Sandboxing: 12

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Just how destructive are viruses?

  1. Viruses have cost companies billions.

  2. Viruses have cost companies hundreds of dollars.

  3. Viruses have cost companies millions of dollars.

  4. Viruses are mostly an annoyance and cost very little.

2. What is one of the ways that antimalware software detects malware and hostile code?

  1. Scanning ports of the malware to determine if it is hostile

  2. Making behavior-based observations

  3. Copying the suspect file and compressing it

  4. Realizing that malware is mostly an annoyance

3. What is one of the primary functions of endpoint detection and response (EDR)?

  1. Securing the endpoint by disabling the EDR function

  2. Scanning the ports of the system to determine if the open ports are listening

  3. Providing forensics and analysis tools to research identified threats

  4. Providing endpoint integrity by encrypting the hard disk

4. What is one of the ways that DLP protects from data loss and misuse?

  1. Monitoring and controlling endpoint activities

  2. Locking data in a bitwise vault

  3. Observing behavior-based activities

  4. Stopping malware from executing by placing it in a sandbox

5. What new specification does Unified Extensible Firmware Interface (UEFI) bring that wasn’t in the standard BIOS?

  1. Lowers the threshold for encrypting hard disks

  2. Increases hard disk partition size

  3. Provides extra fields to annotate the type of drive

  4. Stops viruses from executing during shutdown

6. What is the tokenization process?

  1. Using a single token to encrypt all hard disks

  2. Turning nonsensitive data into encrypted data

  3. Turning insensitive data into sensitive data

  4. Turning sensitive data into nonsensitive data

7. What do secure cookies store?

  1. Information about all the sites a user visited over 24 hours

  2. All of the cookies that were collected over two days

  3. Information about the application session after users log out

  4. Information about a user session after the user logs in to an application

8. What process does patch management help users accomplish?

  1. Helps acquire, test, and install multiple patches (code changes) on existing applications

  2. Manages the deployment of security codes to systems

  3. Helps remove patches that do not match the operating system

  4. Manages the Key repository store in the patch management system

9. A self-encrypting drive (SED) installed into a mixed-disk configuration or a configuration containing unencrypted drives operates in which manner?

  1. Operates as an SE disk

  2. Operates as an encrypted disk

  3. Operates as an unencrypted disk

  4. Rejects the drive and powers down

10. As part of the root of trust, any code from outside a system that is intended to run on a secure CPU requires which component?

  1. A signed certificate from a CA that is applied to the CPU

  2. Dedicated RAM that can be accessed by any root authority

  3. Dedicated ROM that can be accessed by the hardware root of trust

  4. Validated code that is secure and signed by a CSR/root

11. UEFI is a replacement for the standard BIOS and works together with the Trusted Platform Module (TPM). What does the TPM use to sign the log recorded by UEFI?

  1. Nothing; it’s automatically signed each time it is booted.

  2. It uses an enforced signature code and entry.

  3. It uses four elements that include the log, the UEFI, the system time, and a binary signature.

  4. It uses a unique key to digitally sign the log.

12. Sandboxing is a strategy that isolates a test environment for applications to protect them from what?

  1. Malware and viruses

  2. Application information spillage and unauthorized disclosure

  3. Enforcement of access to system and startup log files

  4. Sandthrow attack elements that facilitate hacker access

Foundation Topics

Endpoint Protection

Endpoint protection is a term often used interchangeably with endpoint security. It describes the security solutions that secure and protect endpoint devices against zero-day exploits, attacks, and inadvertent data leakage resulting from human error.

Targeted attacks and advanced persistent threats (APTs) can’t be prevented through antivirus solutions alone, making endpoint protection a necessary component of the full spectrum of security solutions capable of securing data for the world’s leading enterprises. Endpoint protection solutions provide centrally managed security solutions that protect endpoints such as servers, workstations, and mobile devices that are used to connect to enterprise networks.

Note

The idea of data-centric intelligence is to protect data on the go. For instance, data can be accessed from remote locations, but storing it locally on the remote device is not allowed. You must keep a check on the data that comes into and goes out of the organization.

Tip

Security experts must place greater emphasis on protecting data. Technology now allows it to go everywhere.

Antivirus

Antivirus software, also known as antimalware, is a computer program used to prevent, detect, and remove malware. Antivirus software was originally developed to detect and remove computer viruses, hence the name. Just how destructive are viruses? According to a report published by Cybersecurity Ventures in May 2019, ransomware alone cost businesses an astonishing $11 billion in lost productivity and remediation, and there are many other examples of viruses and Trojans damaging networks and costing billions.

Antivirus software is now an integral part of most corporate computing environments and policies. The process of selecting antivirus software varies greatly from organization to organization, but it's important to implement and monitor the solution after it's in place. Every endpoint device, workstation, tablet, and phone should have antivirus software installed. Newer firewalls, routers, and switches also integrate antivirus capabilities, and these features should be enabled where possible. Your entire server environment should have antivirus protection installed.

Antimalware

In the past, antivirus software typically dealt with older, more well-known threats, such as Trojans, viruses, keyloggers, and worms. Antimalware, on the other hand, emerged to focus on newer, increasingly dangerous threats and infections spread via malvertising and zero-day exploits. Today, however, antivirus and antimalware products are generally the same. Some security vendors continue to refer to their products as antivirus software even though their technology is more similar to antimalware and covers a wide variety of newer threats.

Antimalware software uses three strategies to protect systems from malicious software: signature-based malware detection, behavior-based malware detection, and sandboxing. These techniques protect against malware threats in different ways.

Many antivirus and antimalware tools depend on signature-based malware detection. Malicious software is generally identified by comparing a hash of the suspicious code with a database of known malware hashes. Signature-based detection uses a database of known malware definitions to scan for malware.

When antimalware software detects a file that matches the malware signature, it flags that file as potential malware. The limitation of malware detection based on signatures is that it can only identify known malware.
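To make the signature idea concrete, the following Python sketch hashes a file with SHA-256 and looks the digest up in a set of known-bad hashes. The digest and file name are hypothetical placeholders; real products ship signature databases with millions of entries and combine them with heuristics.

import hashlib

# Hypothetical signature database: SHA-256 digests of known malware samples.
KNOWN_BAD_HASHES = {
    "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",  # placeholder digest
}

def sha256_of(path):
    # Hash the file in chunks so large files do not exhaust memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_malware(path):
    # Flag the file if its digest matches a known malware signature.
    return sha256_of(path) in KNOWN_BAD_HASHES

print(is_known_malware("suspicious_download.exe"))   # hypothetical file name

Note how this sketch also illustrates the limitation just described: only files whose hashes already appear in the database are ever detected.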

Antimalware software that uses behavior-based malware detection can detect previously unknown malware and threats by identifying malware based on characteristics and behaviors. This type of malware detection evaluates an object based on its intended actions before it can execute that specific behavior. An object is considered malicious if it attempts to perform an abnormal or unauthorized action. Behavior-based detection in newer antimalware products is sometimes powered by machine learning (ML) algorithms.

Sandboxing offers another way for antimalware software to detect malware (see the “Sandboxing” section later in the chapter). A sandbox is an isolated computing environment developed to run unknown applications and prevent them from affecting the underlying system. Antimalware programs that use sandboxing run suspicious or previously unknown programs in a sandbox and monitor the results. If a program demonstrates malicious or suspicious behavior after it is detonated, the antimalware software terminates it.

Endpoint Detection and Response

Endpoint detection and response, also known as endpoint threat detection and response, is an integrated endpoint security solution that combines real-time continuous monitoring and collection of endpoint data with rules-based automated response and analysis capabilities. EDR is often used to describe emerging security systems that detect and investigate suspicious activities on hosts and endpoints, employing a high degree of automation (machine learning) to enable security teams to quickly identify and respond to threats.

The primary functions of an EDR system are to

  1. Monitor and collect activity data from endpoints that could indicate a threat.

  2. Analyze this data to identify threat patterns.

  3. Automatically respond to identified threats to remove or contain them and notify security personnel.

  4. Provide forensics and analysis tools to research identified threats and search for suspicious activities (similar to threat hunting).

Data Loss Prevention

Data loss prevention, or DLP, is a data protection strategy. It is a set of tools that detect potential data breaches and data exfiltration transmissions and prevent them by monitoring, detecting, and blocking sensitive data while in use/processing, in motion/transit, and at rest.

DLP software classifies regulated, confidential, and business-critical data and identifies violations of policies defined by organizations or within a predefined policy pack, typically driven by regulatory compliance such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA), Payment Card Industry Data Security Standard (PCI DSS), or General Data Protection Regulation (GDPR). When those violations are identified, DLP enforces remediation with alerts, encryption, and other protective actions to prevent end users from accidentally or maliciously sharing data that could put the organization at risk.

Data loss prevention software and tools monitor and control endpoint activities, filter data streams on corporate networks, and monitor data in the cloud to protect data at rest, in motion/transit, and in use/processing. DLP also provides reporting to meet compliance and auditing requirements and identify areas of weakness and anomalies for forensics and incident response.

Next-Generation Firewall

The term next-generation firewall (NGFW) refers to the third generation of firewall technology, combining the traditional firewall with other functionality such as filtering, application awareness, deep packet inspection, and intrusion prevention. Next-generation firewalls filter network traffic to protect an organization from internal and external threats while maintaining the stateful session features of traditional firewalls. NGFWs provide organizations with Secure Sockets Layer (SSL)/Transport Layer Security (TLS) inspection, application control, intrusion prevention, and advanced visibility across the entire attack surface. As the threat landscape expands due to co-location and multicloud adoption, and businesses grow to satisfy complex customer needs, traditional firewalls fall further behind, unable to offer protection at scale, which leads to a poor user experience and a weak security posture. NGFWs not only block malware but also include paths for future updates, giving them the flexibility to evolve with the threat landscape and keep the network secure as new threats arise. Figure 18-1 shows the process of an NGFW:

  1. Data is sent to the firewall.

  2. The packet is inspected and sent to the web server.

  3. The return traffic is inspected and sent back to the requester.


FIGURE 18-1 NGFW Mechanics

Host-based Intrusion Prevention System

A host-based intrusion prevention system (HIPS) monitors and analyzes the internals of a computing system as well as the network packets on its network interfaces, and it can block the malicious activity it detects.

Starting from the network layer all the way up to the application layer, a HIPS protects against known and unknown malicious attacks. It regularly checks the characteristics of a single host and the various events that occur within the host for suspicious activities. HIPS can be implemented on various types of machines, including servers, workstations, and laptop computers.

A HIPS typically uses a database of monitored system objects to identify intrusions by analyzing system calls, application logs, and file-system modifications of binaries, password files, capability databases, and access control lists (ACLs). For every object in question, the HIPS remembers the object's attributes and creates a checksum of its contents. This information is stored in a secure database for later comparison.

The system also verifies that appropriate regions of memory have not been modified. Generally, it keeps a list of trusted programs. A program that oversteps its permissions is blocked from carrying out unapproved actions.

A HIPS has numerous advantages. First, enterprise users require increased protection from unknown malicious attacks, and HIPS uses a specific prevention method and system that has a better chance of stopping such attacks than traditional protective measures. Another major benefit is that a HIPS can reduce the need to run and manage multiple separate security applications to protect PCs, such as antivirus, antispyware, and firewall products. Figure 18-2 illustrates the placement of a HIPS in a basic network topology, where the sensor software is typically loaded on servers and PCs to help provide protection and alerting. Every host on a network can become a HIPS sensor; however, a sound strategy is to deploy sensors where critical assets need to be monitored and alerted on.


FIGURE 18-2 HIPS and Its Sensor Function in Action

Host-based Intrusion Detection System

A host-based intrusion detection system (HIDS) is an application that operates on information collected from individual computer systems. This vantage point allows a HIDS to analyze activities on the host it monitors at a high level of detail; it can often determine which processes and/or users are involved in malicious activities. HIDSs contextually understand the outcome of an attempted attack because they can directly access and monitor the data files and system processes targeted by these attacks. HIDSs utilize two types of information sources: operating system audit trails and system logs. Operating system audit trails are usually generated at the innermost (kernel) level of the operating system and typically are more detailed and better protected than system logs. System logs are far less detailed and much smaller than audit trails, and they are normally much easier to comprehend.

Most HIDS software establishes a “digital inventory” of files and their attributes in a known state, and it uses that inventory as a baseline for monitoring any system changes. The “inventory” is usually a file containing all of the SHA-2 checksums for individual files and directories. This must be stored offline on a secured, read-only medium that is not available to an attacker. On a server with no read-only media (a blade server, for example), one method to accomplish this is to store the statically compiled intrusion detection application and its data files on a remote computer. When you wish to run a HIDS report, you can secure copy (SCP) the remote files to /tmp (or its equivalent) on the target server and run them from there. When you modify any files on the server, you should rerun the application and then make a new data set, which should be stored on the remote computer.
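The following Python sketch illustrates the "digital inventory" idea under simplified assumptions: it walks a directory tree, records a SHA-256 checksum per file as the baseline, and later reports added, removed, or modified files. A real HIDS also records ownership, permissions, and other attributes, and stores the baseline on read-only media as described above; the monitored path here is only an example.

import hashlib
import os

def file_checksum(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_inventory(root):
    # Baseline: map every readable file under root to its checksum.
    inventory = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                inventory[path] = file_checksum(path)
            except OSError:
                continue     # skip files we cannot read (sockets, permissions)
    return inventory

def compare_to_baseline(baseline, root):
    # Return tripwire-style alerts for anything that changed since the baseline.
    current = build_inventory(root)
    alerts = []
    for path, digest in baseline.items():
        if path not in current:
            alerts.append("DELETED: " + path)
        elif current[path] != digest:
            alerts.append("MODIFIED: " + path)
    alerts += ["ADDED: " + p for p in current if p not in baseline]
    return alerts

baseline = build_inventory("/etc")               # hypothetical monitored directory
# ... later, after retrieving the baseline from read-only storage ...
print(compare_to_baseline(baseline, "/etc"))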

The benefit of running a HIDS is that it acts like a tripwire: if someone enters the system and modifies a file, this activity "trips" the HIDS, which alerts the security team. While a HIPS can prevent an intrusion, a HIDS is only capable of alerting. HIPSs are necessary in any enterprise environment; they protect hosts against known and unknown malicious attacks from the network layer up through the application layer. HIDSs can detect malicious activity and send alert messages, but they do not prevent attacks. One problem with host-based intrusion detection systems is that any information they gather needs to be communicated outside of the machine if a central monitoring system is to be used. If the machine is being actively attacked, particularly in the case of a denial-of-service (DoS) attack, this may not be possible.

Figure 18-3 illustrates how a HIDS works.


FIGURE 18-3 HIDS in Action

Host-based Firewall

A host-based firewall is a firewall installed on each individual desktop, laptop computer, or server that controls incoming and outgoing network traffic and determines whether to allow it into a particular device (for example, the Microsoft Windows Defender Firewall that comes with a Windows-based computer). These types of firewalls provide a granular way to protect the individual hosts from viruses and malware and to control the spread of harmful infections throughout the network.

The vast majority of companies use host-based firewalls in addition to perimeter-based firewalls to enhance internal security. For example, some malware attacks that get past a perimeter firewall can be stopped at an individual device or workstation by a host-based firewall. A host-based firewall can also be simpler for some users to implement and manage, and it can be tailored to a particular computer, where customization makes the firewall more effective.

Windows Defender Firewall with Advanced Security provides host-based, two-way network traffic filtering and blocks unauthorized network traffic flowing into or out of the local device. You should configure your Windows Defender Firewall utilizing the Microsoft best practice guides specific to your Windows version. To open Windows Defender Firewall with Advanced Security, go to the Start menu, select Run, type wf.msc, and then select OK. As you can see from Figure 18-4, Windows Defender Firewall has quite a few configuration options, providing flexibility and functionality.


FIGURE 18-4 Windows Defender Firewall

Windows Defender Firewall can be configured to allow specific applications access to the Internet via inbound and outbound rules:

  • Rules can be inbound, outbound, or both.

  • Rules can include services, which specify the type of traffic or port number.

  • Rules can be set to allow or deny.

  • Most firewall configurations start by blocking all traffic by default and then allowing only specific traffic in and out.

Boot Integrity

Boot integrity refers to using a secure method to boot a system and verify the integrity of the operating system and loading mechanism. There is actually a boot integrity usage model, created by MITRE, that goes beyond just computers. Boot integrity represents the first step toward achieving a trusted infrastructure, and the model applies equally well to the compute, network, and storage domains. Every network switch, router, or firewall includes a compute layer running a specialized operating system to provide networking and security functions. This model enables a service provider to make claims about the boot integrity of the network and compute platforms, as well as the operating system and hypervisor instances running on them. Boot integrity supported in the hardware makes the system robust and less vulnerable to tampering and targeted attacks. It enables an infrastructure service provider to make quantifiable claims about the boot-time integrity of the prelaunch and launch components, which in turn provides a means to observe and measure the integrity of the infrastructure. In a cloud infrastructure, these security features refer to the virtualization technology in use, which comprises two layers:

  • The boot integrity of the BIOS, firmware, and hypervisor. This capability can be referred to as a trusted platform boot.

  • The boot integrity of the virtual machines that host the workloads and applications. You want these applications to run on trusted virtual machines.

Boot Security/Unified Extensible Firmware Interface

Unified Extensible Firmware Interface (UEFI) is a specification for a software program that connects a computer’s firmware to its operating system. UEFI is expected to eventually replace the BIOS. If you purchase a new computer today, it likely has UEFI.

Like the BIOS, UEFI is installed at the time of manufacturing and is the first program that runs when a computer is turned on. It checks to see what hardware components the computing device has, wakes up the components, and then hands them over to the operating system. This newer specification addresses several limitations of the BIOS, including restrictions on hard disk partition size and the amount of time the BIOS takes to perform its tasks.

Because UEFI is programmable, original equipment manufacturer (OEM) developers can add applications and drivers, allowing UEFI to function as a lightweight operating system.

Unified Extensible Firmware Interface is managed by a group of chipset, hardware, system, firmware, and operating system vendors called the UEFI Forum. The specification is most often pronounced by naming the letters U-E-F-I.

Measured Boot

Measured boot is a feature that was introduced in Windows 8; it was created to help better protect your machine from rootkits and other malware. Measured boot checks each startup component, including the firmware all the way to the boot drivers, and it stores this information in what is called a Trusted Platform Module (TPM). The PC’s firmware logs the boot process, and Windows can send it to a trusted server that can objectively assess the PC’s health.

In Windows 10, measured boot uses the following process:

  1. The PC’s UEFI firmware stores a hash of the firmware, bootloader, boot drivers, and everything that will be loaded before the antimalware app in the TPM.

  2. At the end of the startup process, Windows starts the non-Microsoft remote attestation client.

  3. The TPM uses a unique key to digitally sign the log recorded by UEFI.

  4. The client sends the log to the server, possibly with other security information.

Figure 18-5 illustrates the measured boot and remote attestation process.


FIGURE 18-5 Measured Boot

Boot Attestation

Boot attestation enables a remote platform to measure and report its system state in a secure way to a third party. In boot attestation, software integrity measurements are immediately committed to during boot, thus relaxing the traditional requirement for secure storage and reporting, as shown previously in Figure 18-5. In the post-boot process, the platform then attests to a verifying third party about the trustworthiness of the software running on the host platform.

Database

Securing databases and applications that utilize these databases starts with five best practices:

  • Separate the database and web servers.

  • Encrypt stored data, files, and backups of the database.

  • Use a web application (or database) firewall (WAF). Remember to keep patches current and enable security controls.

  • Ensure physical security (this is usually a given, but don’t leave anything to chance). Ensure your databases are in locked cabinets, utilize the hardening best practices for your specific database, manage access to the database, and tightly guard secrets.

  • Make sure you have audit procedures and active monitoring of the database activity enabled. Also, ensure alerts generate tickets and receive proper attention.

Tokenization

Database tokenization is the process of turning sensitive data into nonsensitive data called tokens that can be used in a database or internal system without bringing it into scope. The tokens are sent to an organization's internal systems for use, while the original data is stored in a secure token vault. There is no key or algorithm that can be used to derive the original data from a token; instead, the mapping between each token and its original value lives only in the token vault. Figure 18-6 illustrates the tokenization process.

The token value can be used in applications as a substitute for the real data. If and when the real data needs to be retrieved, the token is submitted to the vault, and the index is used to cross reference and fetch the real value for use in the authorization process. To the end user, this operation is performed seamlessly by the browser or application nearly instantaneously. Users are likely not even aware that the data is stored in the cloud in a different format.


FIGURE 18-6 Tokenization

The advantage of tokens is that they have no mathematical relationship to the real data they represent. If they are breached, they have no meaning, and no key can reverse them back to the real data values. Consideration can also be given to the design of a token to make it more useful. Apple Pay uses a similar technology; for example, the last four digits of a payment card number can be preserved in the token so that the tokenized number (or a portion of it) can be printed on the customer's receipt as a reference to the actual credit card number. The printed characters might be all asterisks plus those last four digits. In this case, the merchant holds only a token, not a real card number, for security purposes.

Note

Remember that tokenization assigns a random surrogate value with no mathematical relationship to the original data; a token can be mapped back to the original value only through the token vault. Outside of that system, a token has no value; it is just meaningless data.
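A minimal sketch of the vault concept follows, assuming a card-number use case: the token is a random surrogate that preserves the last four digits, and only the vault's lookup table can map it back. The class and method names are illustrative, not part of any real tokenization product.

import secrets

class TokenVault:
    # Toy token vault: stores the mapping from token to original value.
    def __init__(self):
        self._store = {}   # token -> real value, held inside the secure vault

    def tokenize(self, pan):
        # Random surrogate with no mathematical relationship to the card number;
        # the last four digits are preserved so receipts remain readable.
        surrogate = "".join(secrets.choice("0123456789") for _ in range(len(pan) - 4))
        token = surrogate + pan[-4:]
        self._store[token] = pan
        return token

    def detokenize(self, token):
        # Only the vault can cross-reference a token back to the real value.
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")    # test card number
print(token)                                  # random digits plus the real last four
print(vault.detokenize(token))                # original value, fetched from the vault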

Salting

In cryptography, a salt is random data that is used as an additional input to a one-way function that hashes data, a password, or a passphrase. Salts are used to safeguard passwords in storage. Historically, a password was stored in plaintext on a system, but over time additional safeguards were developed to protect a user’s password against being read from the system. A salt is one of those methods.

Salts can help defend against a precomputed hash attack (for example, rainbow tables). Because salts do not have to be memorized by humans, they can make the size of the hash table required for a successful attack prohibitively large without placing a burden on the users. Because salts are different in each case, they also protect commonly used passwords, or those users who use the same password on several sites, by making all salted hash instances for the same password different from each other.
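The sketch below shows one common way to salt and hash passwords in Python, using a per-user random salt with PBKDF2 from the standard library. The iteration count is an illustrative value; choose a work factor appropriate for your hardware and policy.

import hashlib
import hmac
import os

ITERATIONS = 600_000   # illustrative work factor; tune for your environment

def hash_password(password, salt=None):
    # Return (salt, derived_key); a fresh random salt is generated per user.
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, key

def verify_password(password, salt, expected_key):
    # Recompute the hash with the stored salt and compare in constant time.
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, expected_key)

salt_a, key_a = hash_password("correct horse battery staple")
salt_b, key_b = hash_password("correct horse battery staple")
print(key_a != key_b)    # True: same password, different salts, different hashes
print(verify_password("correct horse battery staple", salt_a, key_a))   # True

Because each user gets a different salt, identical passwords no longer produce identical stored hashes, which is exactly what defeats precomputed rainbow tables.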

Hashing

Hashing is a one-way function where data is mapped to a fixed-length value. Hashing is primarily used for authentication. Salting is an additional step that can be added during hashing (typically seen in association with hashed passwords); it adds an additional value to the end of the password to change the hash value produced. Whereas encryption is meant to protect data in transit, hashing is meant to verify that a file or piece of data hasn’t been altered—that it is authentic. In other words, it serves as a checksum.

Here's how hashing works. Each hashing algorithm produces output of a fixed length. For instance, SHA-256 always outputs a hash value of 256 bits, usually represented as a 64-character hexadecimal string. The size of the input data block differs from one algorithm to another, but for a particular algorithm it remains the same. For example, SHA-1 takes in the message in blocks of 512 bits. If the message is exactly 512 bits long, the hash function runs only once (80 rounds in the case of SHA-1). If the message is 1,024 bits, it's divided into two blocks of 512 bits, and the hash function runs twice. Because a message is rarely an exact multiple of the block size, a technique called padding is used to extend the message so that it divides evenly into fixed-size data blocks. The hash function is repeated as many times as there are data blocks, and the output of each block is fed as input along with the next block. If you change one bit anywhere in the message, the entire hash value changes; this is called the avalanche effect.
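The fixed-length output and the avalanche effect are easy to see with Python's hashlib: changing a single letter of the input produces a completely different 64-character digest.

import hashlib

message_1 = b"The quick brown fox jumps over the lazy dog"
message_2 = b"The quick brown fox jumps over the lazy cog"   # one character changed

digest_1 = hashlib.sha256(message_1).hexdigest()
digest_2 = hashlib.sha256(message_2).hexdigest()

print(digest_1)                        # 64 hex characters (256 bits), regardless of input size
print(digest_2)                        # entirely different value: the avalanche effect
print(len(digest_1), len(digest_2))    # 64 64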

Ideally, every hash value is unique. If two different inputs produce the same hash value, this is called a collision, and it makes the algorithm essentially useless for security purposes. Let's say you want to digitally sign a piece of software and make it available for download on your website. To do this, you create a hash of the script or executable you're signing and then encrypt (sign) that hash with your private key; the signature is attached to the software that is made available for download.

Application Security

Application security describes security measures at the application level, the goal of which is to prevent data or code within the app from being stolen, intercepted, or hijacked. It encompasses the security considerations that happen during application development and design, and it also involves systems and approaches to protect apps after they get deployed.

Application security can include hardware, software, and procedures that identify or minimize security vulnerabilities. A router that prevents anyone from viewing a computer’s IP address from the Internet is a form of hardware application security. But security measures at the application level are also typically built into the software, such as an application firewall, that strictly defines what activities are allowed and prohibited. Procedures can entail things like an application security routine that includes protocols such as regular testing.

Input Validations

Input validation is the first step in checking the type and content of data supplied by a user or application. Improper input validation is a major factor in many web security vulnerabilities, including cross-site scripting (XSS) and SQL injection.

What is input validation? Any system or application that processes input data needs to ensure that it is valid. This applies both to information provided directly by the user and data received from other systems. Validation can be done on many levels, from simply checking the input types and lengths (syntactic validation) to ensuring that supplied values are valid in the application context (semantic validation).

In web applications, input validation typically means checking the values of web form input fields to ensure that a date field contains a valid date, an email field contains a valid email address, and so on. This initial client-side validation is performed directly in the browser, but submitted values also need to be checked on the server side.

Note

While we generally talk about user input or user-controlled input, a good practice is to check all inputs to an application and treat them as untrusted until validated.

Let’s see how to ensure proper input validation in web applications. Traditionally, form fields and other inputs were validated in JavaScript, either manually or using a dedicated library. Implementing validation is a tedious and error-prone process, so it’s a good idea to check for existing validation features before you go the DIY route. Many languages and frameworks come with built-in validators that make form validation much easier and more reliable. For input data that should match a specific JSON or XML schema, you should validate input against that schema.
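As a simple server-side illustration, the Python sketch below performs syntactic checks (type, length, format) and a semantic check (the value makes sense in context) on two hypothetical form fields. The field names and the regular expression are deliberately simplistic assumptions; in practice, prefer your framework's built-in validators.

import re
from datetime import date, datetime

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")   # simplistic, for illustration only

def validate_signup(form):
    # Return a list of validation errors for a hypothetical signup form.
    errors = []

    # Syntactic validation: type, length, and format of each field.
    email = form.get("email", "")
    if not (3 <= len(email) <= 254 and EMAIL_RE.match(email)):
        errors.append("email is not a valid address")

    raw_date = form.get("birthdate", "")
    try:
        birthdate = datetime.strptime(raw_date, "%Y-%m-%d").date()
    except ValueError:
        errors.append("birthdate must be in YYYY-MM-DD format")
        return errors

    # Semantic validation: the value must make sense in the application context.
    if not (date(1900, 1, 1) <= birthdate <= date.today()):
        errors.append("birthdate is outside the accepted range")
    return errors

print(validate_signup({"email": "user@example.com", "birthdate": "1985-07-14"}))   # []
print(validate_signup({"email": "not-an-email", "birthdate": "2999-01-01"}))       # two errors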

Secure Cookies

The secure cookie attribute is an option that can be set by the application server when sending a new cookie to the user within an HTTP response. The purpose of the secure attribute is to prevent cookies from being observed by unauthorized parties due to the transmission of the cookies in clear text.

Cookies may contain sensitive information that shouldn’t be accessible to an attacker eavesdropping on a channel. To ensure that cookies aren’t transmitted in clear text, you can send them with a secure flag.

Web browsers that support the secure flag send a cookie carrying that flag only when the request uses HTTPS. This means that setting the secure flag on a cookie prevents browsers from sending it over an unencrypted channel. Missing secure cookie attributes are commonly raised as findings in penetration test reports when the tested application sets session cookies without the secure flag.

Secure session cookies store information about a user session after the user logs in to an application. This information can be highly sensitive because an attacker can use a session cookie to impersonate the victim in what's called session hijacking or cookie hijacking. The goal of session hijacking is to steal a valid, authorized cookie from a real user to gain unauthorized access to that user's account or to an entire system.
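In most web frameworks, the secure attribute is simply a parameter set when the cookie is created. The following minimal sketch uses Flask (an assumption; any framework exposes equivalent options) to mark a session cookie Secure, HttpOnly, and SameSite; the route and cookie value are hypothetical.

from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    # ... authenticate the user here ...
    response = make_response("Logged in")
    response.set_cookie(
        "session_id", "hypothetical-session-token",
        secure=True,         # only transmitted over HTTPS
        httponly=True,       # not readable by client-side JavaScript
        samesite="Strict",   # not sent on cross-site requests
        max_age=3600,        # expire after one hour
    )
    return response

if __name__ == "__main__":
    app.run()   # in production, serve over HTTPS so browsers will return the secure cookie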

Hypertext Transfer Protocol Headers

Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia documents, such as HTML. It was designed for communication between web browsers and web servers, but it can also be used for other purposes. HTTP (port 80) has mostly been replaced with HTTPS (port 443). As a matter of fact, Chrome and Firefox now alert you when you browse to an HTTP-only page, for example by marking the site "Not Secure" in the address bar.

HTTP headers can be grouped according to their contexts, and there are hundreds of headers and header fields. Here, we touch on only the most common and relevant:

  • General headers apply to both requests and responses, but with no relation to the data transmitted in the body.

  • Request headers contain more information about the resource to be fetched or about the client requesting the resource.

  • Response headers hold additional information about the response, like its location or about the server providing it.

  • Entity headers contain information about the body of the resource, like its content length or MIME type.

End-to-End Headers

End-to-end headers must be transmitted to the final recipient of the message: the server for a request or the client for a response. Intermediate proxies must retransmit these headers unmodified.

Hop-by-Hop Headers

Hop-by-hop headers are meaningful only for a single transport-level connection and must not be retransmitted by proxies or cached. Note that only hop-by-hop headers may be set using the Connection general header.

One of the more common headers is User-Agent, which contains a characteristic string that enables network protocol peers to identify the application type, operating system, software vendor, or software version of the requesting software user agent; for example, it reveals whether the browser is running on a Mac or Linux host or on a cell phone, which is how a server knows to render a page properly for your phone. Because HTTP traffic is transmitted in plaintext (clear text), you need to protect your server with firewalls or a HIDS to alert on and mitigate potential threats.
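For a quick look at request and response headers in practice, here is a short sketch using Python's requests library (an assumption; a tool such as curl -v shows the same information). It sets a custom User-Agent on the request and prints a few common response headers; the URL and agent string are placeholders.

import requests

response = requests.get(
    "https://example.com",
    headers={"User-Agent": "SecPlus-Demo/1.0 (study lab)"},   # request header we control
    timeout=10,
)

print(response.status_code)
print(response.request.headers["User-Agent"])    # what the client actually sent
print(response.headers.get("Content-Type"))      # entity header describing the body
print(response.headers.get("Server"))            # response header from the web server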

The latest version of HTTP is HTTP/3, the third major version. HTTP/3 uses QUIC, a transport-layer protocol that runs over UDP and implements congestion control in user space.

Code Signing

Code signing is the process of digitally signing executables and scripts to confirm that the software author can guarantee the code has not been altered or corrupted since it was signed. The process employs a cryptographic hash to validate authenticity and integrity. Digitally signing code provides both data integrity, to prove that the code was not modified, and source authentication, to identify who signed the code. Understanding code signing use cases also helps identify the security problems that can arise when applying code signing solutions to those use cases. Code signing is what allows you to be sure you are downloading the right file from the right author/publisher instead of from an attacker who wants to steal your data.

Before developers can sign their work, they need to generate a public/private key pair. This is often done locally through software tools such as OpenSSL. Developers then give the public key and the organization’s identity information to a trustworthy certificate authority (CA). The CA verifies the authenticity of identity information and then issues the certificate to the developer. This is the code signing certificate that was signed by the CA’s private key and contains the developer organization’s identity and the developer’s public key.

When developers are ready to "sign" their work to establish authorship, they take all the code they wrote and hash it. The resulting hash value is then encrypted (signed) with the developer's private key and bundled with the code signing certificate, which contains the developer's public key and identity (proving authorship). The output of this process is then added to the software to be shipped out.

This process constitutes a code signing operation. The public key of the CA is already preinstalled in most browsers and operating system trust stores. When a user tries to download the software, that user uses the CA’s public key to verify the authenticity of the code signing certificate embedded in the software to confirm that it’s from a trustworthy CA. The developer’s public key is then extracted from the certificate and used to decrypt the encrypted hash. Then the software is hashed again, and the new value is compared to the decrypted one. If the user’s hash value and developer’s hash value match, the software hasn’t been corrupted or tampered with during transmission. The result is your assurance that you can run the code safely.
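The following sketch, using the third-party Python cryptography package, mirrors the hash-and-sign flow on a much smaller scale: sign with a private key, then verify with the matching public key. It deliberately omits the CA-issued certificate and trust-store steps, and real-world code signing is normally done with platform tools such as signtool or codesign; the code bytes here are a stand-in for a shipped binary.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Developer side: generate a key pair (in practice the public key goes to a CA
# in a certificate request, and the CA returns a code signing certificate).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"print('hello from a signed release')"        # stand-in for the shipped binary

# Sign: the library hashes the code with SHA-256 and signs the digest.
signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

# User side: verify the signature before trusting the download.
try:
    public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: code is authentic and unmodified")
except InvalidSignature:
    print("Signature check failed: do not run this code")

# A single changed byte breaks verification.
tampered = b"print('hello from a TAMPERED release')"
try:
    public_key.verify(signature, tampered, padding.PKCS1v15(), hashes.SHA256())
except InvalidSignature:
    print("Tampered copy rejected")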

Allow List

The allow list, or whitelist, is much easier and safer to use for well-defined inputs such as numbers, dates, or postcodes. That way, you can clearly specify permitted values and reject everything else. With HTML5 form validation, you get predefined allow-list logic in the built-in data type definitions, so if you indicate that a field contains an email address, you get ready-made email validation. If only a handful of values are expected, you can use regular expressions to explicitly allow-list them.

Allow lists get tricky with free-form text fields, where you need some way to allow the vast majority of available characters, potentially in many different alphabets. Unicode character categories can be useful to allow, for example, only letters and numbers in a variety of international scripts. You should also apply normalization to ensure that all input uses the same encoding and no invalid characters are present. Allow lists require a lot of resource time to maintain and need to be continuously updated as the company adds new applications and removes old ones.
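The sketch below contrasts the two allow-list situations just described, under simple assumptions: a regular expression for a well-defined field (a US ZIP code) and Unicode category filtering plus normalization for free-form text.

import re
import unicodedata

ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")      # well-defined input: US ZIP or ZIP+4

def valid_zip(value):
    return bool(ZIP_RE.fullmatch(value))

def clean_free_text(value):
    # Keep only letters, digits, and spaces from any script, after normalizing.
    value = unicodedata.normalize("NFKC", value)       # consistent encoding first
    allowed = []
    for ch in value:
        category = unicodedata.category(ch)            # e.g., 'Lu', 'Ll', 'Nd', 'Zs'
        if category.startswith(("L", "N")) or category == "Zs":
            allowed.append(ch)
    return "".join(allowed)

print(valid_zip("02134"))         # True
print(valid_zip("02134-9999"))    # True
print(valid_zip("2134"))          # False: not on the allow list, rejected
print(clean_free_text("Héllo <script>alert(1)</script> 世界"))   # markup characters stripped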

Block List/Deny List

When approaching input validation from a security perspective, you might be tempted to implement it by simply disallowing elements that might be used in an injection attack. For example, you might try to ban apostrophes and semicolons to prevent SQL injection, parentheses to stop malicious users from inserting a JavaScript function, or angle brackets to eliminate the risk of someone entering HTML tags. This is called block listing or deny listing, and it’s usually a bad idea because the developer can’t possibly know or anticipate all possible inputs and attack vectors. Blocklist-based validation is hard to implement and maintain and very easy for an attacker to bypass.

If you decide to use these lists anyway, understand that you have added another maintenance point, that block lists can potentially break legitimate functionality, and that your upper-layer code should not depend on them to stop attacks.

Secure Coding Practices

Secure coding practices must be incorporated into all lifecycle stages of an application development process. The software development lifecycle (SDLC) incorporates the major components of a development process: requirements, architecture and design, implementation, testing, deployment, and maintenance. You can integrate secure coding principles into SDLC components by providing a general description of how the secure coding principles are addressed in architecture and design documents. If a secure coding principle is not applicable to your project, this should be explicitly documented along with a brief explanation. You can perform automated and static application security testing as part of the overall application testing process to ensure checkpoints are met throughout the entire process. Development and testing environments should redact all sensitive data or use deidentified data. Figure 18-7 illustrates security processes in the SDLC.


FIGURE 18-7 Security Processes in the SDLC

Static Code Analysis

Static application security testing (SAST), or static code analysis, is a testing methodology that analyzes source code to find security vulnerabilities that make your organization’s applications susceptible to attack. SAST scans an application before the code is compiled. It’s also known as known environment/white box testing. Static code analysis takes place very early in the software development lifecycle because it does not require a working application and can take place without code being executed. It helps developers identify vulnerabilities in the initial stages of development and quickly resolve issues without breaking builds or passing on vulnerabilities to the final release of the application.

Static code analysis tools give developers real-time feedback while they code and can help them fix issues before they pass the code to the next phase of the SDLC. This prevents costly security issues from becoming an afterthought. SAST tools also provide graphical representations of the issues found, from source to sink, which helps you navigate the code more easily. Some tools point out the exact location of vulnerabilities and highlight the risky code. Tools can also provide in-depth guidance on how to fix issues and the best place in the code to fix them, without requiring deep security domain expertise.

Developers can also create the customized reports they need with SAST tools; these reports can be exported offline and tracked using dashboards. Tracking all the security issues reported by the tool in an organized way can help developers remediate these issues promptly and release applications with minimal problems. This process contributes to the creation of a secure SDLC.

It’s important to note that SAST tools must be run on the application on a regular basis, such as during daily/monthly builds, every time code is checked in, or during a code release.
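As a small illustration, the hypothetical module below contains two patterns that most static analyzers flag immediately: a hardcoded credential and a shell command built from untrusted input. Running an open-source Python SAST tool such as Bandit over the source tree (for example, bandit -r src/) reports both without ever executing the code.

# src/report_job.py -- deliberately insecure example for a SAST scan
import subprocess

DB_PASSWORD = "SuperSecret123!"          # finding: hardcoded credential in source code

def ping_host(hostname_from_user):
    # finding: untrusted input concatenated into a shell command (command injection risk)
    return subprocess.call("ping -c 1 " + hostname_from_user, shell=True)

Typical remediation guidance from the tool or a reviewer would be to move the credential into a secrets store and to pass the hostname as a list argument without shell=True.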

There are six steps to running a static code analysis effectively:

Step 1. Finalize the tool; select a static analysis tool that can perform the specific code review. The tool should be able to understand the underlying framework of the software.

Step 2. Create the scanning infrastructure and deploy the tool.

Step 3. Customize the tool to suit the needs of the analysis.

Step 4. Prioritize and onboard applications. Scan high-risk applications first.

Step 5. Analyze scan results, remove all false positives, and track each result. The team should be apprised of each defect and schedule appropriate and timely remediation.

Step 6. Provide governance and training. Proper governance ensures that your development team employs the tools properly and consistently.

Manual Code Review

Manual code review is the process of reading source code line by line in an attempt to identify potential vulnerabilities. This is a tedious process that requires skill, experience, persistence, and patience. A manual secure code review has three primary phases: the interview, the code review, and reporting the results.

During the interview with the developers, the review team has a chance to understand the intent of the application before reviewing the code. After first gaining a basic understanding of the application's intended business function or capability, the interview focuses on key security touch points, such as the developers' approach to authentication, data validation, and logging.

After the interview, the code review begins, with each member of the review team working individually to review the application as a whole. Rather than handing off individual code files to specific team members, each member reviews the entire application. This approach plays to the strengths of each individual reviewer, often resulting in the identification of different, yet relevant, findings. The other advantage, of course, is that having multiple eyes on the same set of code serves as a quality check to ensure findings are valid.

After the individual code reviews are completed, the team meets to share results. Each reviewer has the chance to review the others’ findings, providing an opportunity to discuss why certain findings may appear in one team member’s list but not in another’s. This reporting results phase also helps ensure that the findings being reported are relevant. The final list of findings, along with descriptions and potential mitigations, is then presented to the developers using a standard report format.

Dynamic Code Analysis

Dynamic code analysis takes the opposite approach from static code analysis: it is performed while a program is running. Dynamic application security testing (DAST) looks at the application from the outside in, examining it in its running state and trying to manipulate it to discover security vulnerabilities. The dynamic test simulates attacks against a web application and analyzes the application's reactions to determine whether it is vulnerable. Dynamic analysis is capable of exposing subtle flaws or vulnerabilities too complicated for static analysis alone to reveal. Used wisely, automated tools can dramatically improve the return on testing investment; automated testing tools are an ideal option in certain situations, and automating tests that run on a regular basis during the SDLC is also helpful. As the enterprise strives to secure the SDLC, it must be noted that there is no panacea: neither static nor dynamic testing alone can offer blanket protection. Ideally, an enterprise performs both static and dynamic analyses and benefits from the synergistic relationship between them.

Fuzzing

Fuzzing is the art of automated bug detection: it involves supplying invalid, unexpected, or random data as input to a program. The goal of fuzzing is to stress the application and cause unexpected behavior, resource leaks, or crashes. This capability makes a fuzzer a great asset in assessing the security and stability of applications.

Fuzzing works by first generating test cases. Each security test case can be generated as a random or semirandom data set, and then is sent as input to the application. The data set can be generated either in conformance to the format requirements of the system’s input or as a completely malformed chunk of data the system was not meant to understand or process.

What do you think would happen to an application if negative numbers, null characters, or even special characters were sent to some input fields? Do you know how your application would behave? Answering these questions helps you figure out what triggers unexpected behavior. Depending on your needs, you would select the best fuzzer for the job. Mutation and generation fuzzers are defined by the way they handle test case generation: generation fuzzers create new test cases from a supplied model, whereas mutation fuzzers mutate a supplied seed input. Some fuzzers can do both.
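Here is a minimal mutation-fuzzing harness in Python, built around a toy parser that stands in for the real target; the record format and parser are hypothetical. The harness flips random bytes in a seed input and reports any input that raises an exception the parser was not designed to raise.

import random

def parse_record(data):
    # Toy target: parses b"NAME=alice;AGE=30;" style records; rejects bad input with ValueError.
    text = data.decode("utf-8")                 # may raise UnicodeDecodeError
    record = {}
    for pair in text.strip().strip(";").split(";"):
        key, value = pair.split("=")            # may raise ValueError on malformed pairs
        record[key] = value
    return record

def mutate(seed):
    # Flip a few random bytes in the seed to create a new test case.
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = b"NAME=alice;AGE=30;"
for i in range(10_000):
    case = mutate(seed)
    try:
        parse_record(case)
    except (ValueError, UnicodeDecodeError):
        pass                                    # expected rejection of invalid input
    except Exception as exc:                    # anything else is a potential bug
        print(f"Test case {i} crashed the parser with {type(exc).__name__}: {case!r}")

This is a mutation fuzzer in the sense described above; a generation fuzzer would instead build test cases from a model of the record format.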

Hardening

Application hardening, also known as application shielding, is the act of applying levels of security to protect applications from IP theft, misuse, vulnerability exploitation, tampering, or even repackaging by people with ill intentions. Application hardening is usually performed via security solutions or tools with specialized hardening capabilities that greatly increase the effort required by attackers to modify the application, making it no longer viable or worthwhile to target. The most robust tools shield applications from both static and dynamic threats.

Open Ports and Services

All communication that happens over the Internet is exchanged via ports. Every IP address contains two kinds of ports, TCP and UDP, and there can be up to 65,535 of each for any given IP address. Services that connect to the Internet (web browsers, email clients, and file transfer services, for example) use specific ports to receive information.

Any Internet-connected service requires specific ports to be open in order to function. A problem arises when legitimate services are exploited through code vulnerabilities or malicious services are introduced to a system via malware. Cybercriminals can use these services in conjunction with open ports to gain access to sensitive data.

Closing unused ports is like shutting the door on those cybercriminals. That’s why it’s considered best practice to close any ports that aren’t associated with a known legitimate service.

As a system administrator, you can scan for and close open ports that are active and exchanging information on your networks. Closing open ports requires knowing which ports are actually required by the applications and services running on a network. Some of them are universal—for example, port 80 is the port for web traffic (HTTP). Others are reserved by specific services.

Once you know which ports must remain open, you can conduct a scan to identify open ports that might be exposing your systems to cyber attacks. Many free online tools make this scanning process easier. If a port is (1) open and (2) not associated with any known service on the network, you should investigate which application or service is using the port and should remove it immediately. You should also have continuous monitoring in place to ensure an attacker hasn’t found a port you missed and has gained a foothold.
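A basic TCP connect scan of a handful of well-known ports can be written in a few lines of Python with the standard socket module, as sketched below; scan only hosts you are authorized to test, and note that dedicated tools such as Nmap are far more capable.

import socket

COMMON_PORTS = [21, 22, 23, 25, 53, 80, 110, 135, 139, 143, 443, 445, 3389]

def scan_host(host, ports=COMMON_PORTS, timeout=0.5):
    # Return the list of ports that accept a TCP connection on the host.
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:     # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

print(scan_host("127.0.0.1"))    # scan the local machine only

Any port this reports that you cannot tie to a known, approved service is a candidate for investigation and closure, as described above.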

Registry

A user registry is a data source that contains account information, such as usernames and passwords. An application server can retrieve this information to authenticate and authorize users. Securing and hardening the registry varies with the operating system and should, if possible, be done right after the system is built and before it is put into production. Be warned, however, that performing any work on the registry could completely destroy your system, making it irrecoverable. In most cases with the Windows registry, you use an MMC snap-in to add a security template. Enter regedit from the Start > Run dialog box and you are presented with the registry.

There are several expert guides you can use to harden your specific version of Windows, and it is highly recommended that you use one of them. Best practices for the registry are as follows: do not allow remote access, do not store passwords using reversible encryption methods, and restrict the ability to access the computer to only network administrators. You also should deny guest accounts the ability to log on as a service, as a batch job, locally, or via Remote Desktop Protocol (RDP). Then configure the Microsoft network client to always digitally sign communications.
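Auditing individual hardening settings can also be scripted. The sketch below (Windows only, using the standard winreg module) checks whether Remote Desktop connections are denied; fDenyTSConnections is the usual Terminal Server setting for this, but verify the path and expected value against your own hardening guide before relying on it.

import winreg   # Windows-only standard library module

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Terminal Server"

def rdp_disabled():
    # Return True if the registry says Remote Desktop connections are denied.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _value_type = winreg.QueryValueEx(key, "fDenyTSConnections")
    return value == 1

print("RDP disabled (hardened)" if rdp_disabled() else "RDP enabled: review your policy")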

Disk Encryption

Disk encryption is a technology that protects information by converting it into unreadable code that cannot be deciphered easily by unauthorized people. Disk encryption uses disk encryption software or hardware to encrypt every bit of data that goes on a disk or disk volume.

Full-disk encryption is a cryptographic method that applies encryption to the entire hard drive, including data, files, the operating system, and software programs. Full-disk encryption places an exterior guard on the internal contents of the device. Unlike past iterations of full-disk encryption, the process to encrypt hard drives has become quite simple and is supported by all the major vendors.

Under no circumstance should you store confidential files unencrypted (at rest) on your hard disks; use disk encryption when storing confidential or sensitive data. Disk encryption can mitigate the risk of data exposure from loss or theft of stored data. Full-disk encryption can provide "blanket" protection so users do not have to protect individually stored files, and it ensures that any remnants of data are secure. Such remnants can be temporary files, browser cache files, and application-specific automatic backups. All major operating systems now provide a disk encryption capability, so there should be no reason not to use it.

Operating System

Hardening of the operating system is the act of configuring an OS securely, updating it, creating rules and policies to help govern the system in a secure manner, and removing unnecessary applications and services. This is done to minimize a computer OS’s exposure and attack surface to threats and to mitigate possible risk. Although every OS is different, you can do some common things to protect your operating system:

  • Remove unnecessary and unused programs. Every program installed on a device is a potential entry point for a bad actor, so be sure to clean them up regularly. If a program has not been okayed or vetted by the company, it should not be allowed. Hackers look for security holes when attempting to compromise networks, so this is your chance to minimize their opportunities.

  • Use updates, keeping your programs up to date and installing the latest versions of updates. There’s no single action that ensures protection, especially from zero-day attacks, but using updates is an easy and effective step to take.

  • Make patch management a part of any regular security regimen. Doing so involves planning, testing, implementing, and auditing consistently to ensure the OS is patched, as well as individual programs on the client’s computer.

  • Establish and use group policies. Sometimes user error can lead to a successful cyber attack. One way to prevent such attacks is to define the groups that have access and stick to those rules. Update user policies and make sure all users are aware of and compliant with these procedures. For instance, enforce the use of strong passwords. Also, consider using security templates in the group policy objects (GPOs); these are often used in corporate environments and are essentially text files that represent a security configuration. A security template helps you manage group policy and ensure consistency across your entire organization.

  • Configure baselines. This is how you measure changes in networking, hardware, software, and so on. Baselines are created by selecting something to measure and doing so consistently for a period of time. After you establish a baseline, measure against it on a schedule that meets your security maintenance standards and needs (a minimal baseline-comparison sketch appears after this list).

  • Do not allow users to open web pages from inside applications like Word, Excel, and Outlook. Also, disable macros.
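
The following minimal sketch illustrates the baseline idea from the list above: record which local TCP ports are listening, then compare later snapshots against that baseline. It assumes the third-party psutil package, and the baseline file name is arbitrary; a production tool would also baseline services, installed software, and configuration state.

```python
# Baseline sketch: record listening TCP ports, then diff later snapshots.
# Assumes the third-party psutil package (pip install psutil); listing all
# connections may require elevated privileges on some platforms.
import json
import pathlib
import psutil

BASELINE_FILE = pathlib.Path("port_baseline.json")  # hypothetical location

def listening_ports() -> set[int]:
    return {
        conn.laddr.port
        for conn in psutil.net_connections(kind="inet")
        if conn.status == psutil.CONN_LISTEN
    }

def check_against_baseline() -> None:
    current = listening_ports()
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(json.dumps(sorted(current)))
        print("Baseline recorded:", sorted(current))
        return
    baseline = set(json.loads(BASELINE_FILE.read_text()))
    print("New listening ports:", sorted(current - baseline))
    print("Ports no longer listening:", sorted(baseline - current))

if __name__ == "__main__":
    check_against_baseline()
```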

Patch Management

Patch management is the process of acquiring, testing, and installing patches (code changes) for the operating system and the applications and tools running on a computer, keeping systems up to date and determining which patches are appropriate. One of the simplest, most common, and most overlooked ways to protect systems is to keep them patched. Consider serving auto-updates from onsite servers. Third-party applications are rarely checked for patches, updates, or discontinued vendor support; therefore, you must stay vigilant.

  • Third-party updates: Nearly every application, tool, or widget provides updates, usually on a regular basis, so use group policies to make sure your users do not disable this capability. Set up systems to perform updates during off hours, and keep an eye on updates to make sure they do not cause more harm than good. Generally, it’s a good idea to test all third-party patches on a handful of test machines; provided they perform well, you can roll them out en masse. Some third-party applications do not automatically notify users of updates or download them. In that case, you should maintain a good inventory of every software package running in the organization and monitor those packages for updates and known exploits (a minimal inventory sketch appears after this list).

  • Auto-updates: Most operating systems enable auto-updates by default, and if you do not force a reboot every evening through group policies, users tend to hit the “Snooze” button on rebooting for patch updates. Allow the automatic download of critical patches where possible, and schedule the reboot, rollout, or installation during off hours to have as small an impact on the organization as possible. Before rolling out patches, make sure your backups are working and that a recent backup completed successfully. You can also configure an onsite patching server and ensure all your systems connect only to that server to receive their updates. In addition to saving Internet bandwidth, this approach allows you to test updates on lab systems to make sure they do not have adverse effects.
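
The inventory sketch referenced above might look like the following. It covers only Python packages, via the standard library’s importlib.metadata, and the APPROVED version pins are hypothetical; a real patch-management inventory spans the operating system and every third-party application in the organization.

```python
# Inventory sketch: list installed Python packages and flag any that differ
# from an approved-versions list. The APPROVED dictionary is hypothetical.
from importlib import metadata

APPROVED = {"requests": "2.31.0", "urllib3": "2.2.1"}  # example pins

def installed_packages() -> dict[str, str]:
    return {
        (dist.metadata["Name"] or "").lower(): dist.version
        for dist in metadata.distributions()
    }

def audit() -> None:
    installed = installed_packages()
    for name, wanted in APPROVED.items():
        have = installed.get(name)
        if have is None:
            print(f"{name}: not installed")
        elif have != wanted:
            print(f"{name}: installed {have}, approved {wanted}")

if __name__ == "__main__":
    audit()
```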

Self-Encrypting Drive/Full-Disk Encryption

A self-encrypting drive (SED) is a disk drive that uses an encryption key to secure the data stored on the disk. This encryption protects the data and array from data theft when a drive is removed from the array.

SED operates across all disks in an array at once. If one drive in a RAID set is removed from the array, a new set of encryption key shares is generated automatically and shared among the remaining disks. If a second drive is removed from the same RAID set, another set of encryption key shares is generated. SEDs are configured at the factory. When the drives are installed into an array, the array automatically detects the new SEDs and locks them. This process is automatic; there are no GUI user controls for SED.

All of the drives in an array, including spares, must be of the same type and model and must be running specific firmware. A self-encrypting drive installed into a mixed-disk configuration or a configuration containing unencrypted drives operates as an unencrypted disk. Likewise, a pool consisting of all SEDs might replicate to a pool with only a few SEDs or no SEDs at all. Hardware-based full-disk encryption is implemented by drive vendors using the Opal and Enterprise standards developed by the Trusted Computing Group. Key management takes place within the hard disk controller, and encryption keys are 128- or 256-bit Advanced Encryption Standard (AES) keys. Full-disk encryption (FDE) and self-encrypting drives encrypt data as it is written to the disk and decrypt data as it is read off the disk. FDE makes sense for laptops, which are highly susceptible to loss or theft, but not so much in a data center.

Note

Self-encrypting drives are identified in the GUI with a gold key icon.

Note

The term self-encrypting drive (SED) is often used when referring to full-disk encryption (FDE) on hard disks. The Trusted Computing Group (TCG) Opal security subsystem storage standard provides industry-accepted standardized SEDs. SEDs automatically encrypt all data in the drive, preventing attackers from accessing the data through the operating system. SED vendors include Seagate Technology, Hitachi, Western Digital, Samsung, and Toshiba.

Opal

Opal is a standard developed by the Trusted Computing Group (TCG). Hitachi, Western Digital, Seagate, Samsung, and Toshiba are among the disk drive manufacturers offering TCG Opal SATA drives. The Opal Security Subsystem Class (SSC) is an implementation profile for storage devices built to protect the confidentiality of stored user data against unauthorized access once the device leaves the owner’s control (involving a power cycle and subsequent deauthentication). Opal also enables interoperability between multiple SED vendors within a system.

The Opal SSC encompasses these functions: Security Provider Support Interface Communication Protocol, cryptographic features, authentication, table management, access control and personalization issuance, and SSC discovery. SSC discovery is the process by which the host examines the storage device’s configurations, capabilities, and class.

Hardware Root of Trust

The hardware root of trust is the foundation on which all secure operations of a computing system depend. It contains the keys used for cryptographic functions and enables a secure boot process. It is inherently trusted and therefore must be secure by design. A hardware root of trust can be defined by four basic building blocks:

  • The protective hardware provides a trusted execution environment (TEE) for the privileged software to run.

  • At a minimum, it must perform one or more proven cryptographic functions, such as AES-based encryption.

  • A form of tamper protection must be present and available for the entire runtime.

  • It must have a flexible yet simple user interface that the host can interact with, through the host CPU and/or a host controller toggling general-purpose I/Os (GPIOs).

To meet the root of trust criteria, a hardware root of trust needs to include a variety of components, starting with a security perimeter. The security perimeter defines what needs to be protected on the system on a chip (SoC). It can be implemented in various ways, including via a private bus that connects to the main bus through a gateway.

Next, a root of trust is required to have a secure CPU that runs secure code software/firmware. The security features supported in a hardware root of trust policy are enabled and used by the software running on that CPU. The resources around the CPU help facilitate the security and performance of these functions.

The third element of a root of trust is the runtime memory. When running software on the CPU, developers and designers need to protect the runtime data required by the software, specifically the stack, heap, and global data. This data can contain keys in plaintext and other sensitive information, so it is critical to ensure tight security around this block.

Tamper resistance is the fourth element of a root of trust and is essential for a hardware root of trust: code from the outside must be validated before it is run on the secure CPU. Tamper resistance can be implemented in many ways, for example by using a dedicated ROM that can be accessed only by the hardware root of trust.

The fifth element of a root of trust is a true random number generator (TRNG). This capability produces the high-quality entropy required for the various security functions, and secure, untampered access to this module is critical.
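
As a small illustration of how applications consume that entropy, the following sketch uses Python’s secrets module, which draws from the operating system’s cryptographically secure random number generator; on many platforms that generator is seeded in part from hardware entropy sources, but this code does not interact with a hardware root of trust directly.

```python
# Consuming OS-provided entropy for security functions. The secrets module
# reads from the platform CSPRNG, which may in turn be seeded by a hardware
# TRNG when one is available; this is only an application-level sketch.
import secrets

aes_key = secrets.token_bytes(32)   # 256 bits of key material
nonce = secrets.token_hex(12)       # unpredictable per-message value
print("nonce:", nonce)
```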

A secure clock or secure counter is the sixth element of a root of trust. It is important for applications that require a reliable time measurement. Using a secure clock is effective only if the hardware root of trust has access to a clock source that cannot be tampered with. Time is essential in a root of trust.

The seventh element of a root of trust is secure storage. Secure access to persistent storage is essential for applications requiring state knowledge. It’s critical that the information cannot be tampered with, nor can the access to the information be tampered with.

The general-purpose I/O GPIO framework extension (GpioClx) simplifies the task of writing a driver for a GPIO controller device. GpioClx provides driver support for peripheral devices that connect to GPIO pins on systems.

Trusted Platform Module

Trusted Platform Module (TPM) technology is designed to provide hardware-based, security-related functions. A TPM chip is a secure crypto-processor that is designed to carry out cryptographic operations. The chip includes multiple physical security mechanisms to make it tamper resistant, and malicious software is unable to tamper with the security functions of the TPM. Some of the key advantages of using TPM technology are that you can do the following:

  • Generate, store, and limit the use of cryptographic keys.

  • Use TPM technology for platform device authentication by using the TPM’s unique RSA key, which is burned into the chip.

  • Help ensure platform integrity by taking and storing security measurements.

TPM is used for system integrity measurements and for key creation and use. During a system’s boot process, the boot code that is loaded can be measured and recorded in the TPM. The integrity measurements can be used as evidence for how a system started and to make sure that a TPM-based key was used only when the correct software was used to boot the system.
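
The following sketch illustrates, in software only, the chaining idea behind those boot-time measurements: each new measurement is hashed together with the previous register value, in the style of a TPM PCR extend operation. A real TPM performs this in hardware, and the component names here are placeholders.

```python
# Simplified illustration of how a TPM "extends" a Platform Configuration
# Register (PCR): each measurement is hashed together with the previous
# register value, so the final value depends on every measured component
# and on the order in which they were measured. Software sketch only.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start at all zeros on reset
for component in [b"firmware image", b"bootloader", b"os kernel"]:
    pcr = extend(pcr, component)

print("Final PCR value:", pcr.hex())
# Changing any measured component (or its order) yields a different value,
# which is how boot integrity can later be verified or attested.
```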

Starting with Windows 10, the operating system automatically initializes and takes ownership of the TPM. As a result, in most cases you should avoid configuring the TPM through the TPM management console (tpm.msc).

Note

Trusted Platform Modules (TPMs) and Hardware Security Modules (HSMs) provide strong hardware-based cryptographic solutions across a number of use cases, including password protection and device identification and authentication.

Sandboxing

Sandboxing is a strategy that isolates a test environment for applications away from critical system resources and other programs. It provides an extra layer of security that prevents malware or harmful applications from negatively affecting your system. Antivirus vendors often detonate unknown code in sandboxes to see how it behaves; because the sandbox is isolated, it cannot affect the overall system. The sandbox provides a safe environment for opening suspicious files, running untrusted programs, or visiting untrusted URLs without affecting the devices they are on. It can be used anytime, in any situation, to safely examine a file or code that could be malicious before serving it up to devices, all the while keeping it isolated from the PC and the company network.

Sandboxing is used as a resource to test software that could end up being categorized as “safe” or “unsafe.” As malware becomes more prevalent and dangerous, malicious applications, links, and downloads could gain nearly unrestricted access to a network’s data if they are not tested by sandbox software first. Sandboxing can therefore be used as a tool to detect malware attacks and block them before they enter a network. It allows your IT team to test code and understand exactly how it works before it can infect an endpoint device with malware or viruses, giving the team insight into what to look for in other scenarios.

As a key measure in network and web security strategies, sandboxing provides an additional layer of security to analyze threats, separating them from the network to ensure online threats do not compromise operations. The application or file can be run if needed, with all changes being discarded after the sandbox is closed to eliminate risk of corrupted devices.

Sandbox software is available as a cloud-based or appliance-based solution and offers different advantages depending on your business needs.
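
The following toy sketch shows only the isolate-observe-discard workflow of a sandbox: run an untrusted command in a throwaway directory with a strict timeout, then delete everything it wrote. Production sandboxes rely on much stronger isolation, such as virtual machines, containers, or syscall filtering; the function name and command here are illustrative.

```python
# Toy sandbox-pattern sketch: confine an untrusted command to a scratch
# directory, enforce a time limit, observe its output, then discard the
# directory. Real sandboxes add VM/container/OS-level isolation on top.
import subprocess
import tempfile

def run_in_scratch_dir(command: list[str], timeout: int = 10) -> None:
    with tempfile.TemporaryDirectory() as scratch:
        try:
            result = subprocess.run(
                command,
                cwd=scratch,           # confine file output to the scratch dir
                capture_output=True,
                text=True,
                timeout=timeout,       # terminate anything that runs too long
            )
            print("exit code:", result.returncode)
            print("stdout:", result.stdout[:500])
        except subprocess.TimeoutExpired:
            print("Command exceeded the time limit and was terminated.")
    # The scratch directory and anything written to it are deleted here.

if __name__ == "__main__":
    # Adjust the interpreter name for your platform if needed.
    run_in_scratch_dir(["python", "-c", "print('hello from the sandbox')"])
```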

Chapter Review Activities

Use the features in this section to study and review the topics in this chapter.

Review Key Topics

Review the most important topics in the chapter, noted with the Key Topic icon in the outer margin of the page. Table 18-2 lists a reference of these key topics and the page number on which each is found.

Table 18-2 Key Topics for Chapter 18

Key Topic Element   Description                                                        Page Number
Section             Antivirus                                                          451
Section             Antimalware                                                        452
List                Primary functions of an EDR system                                 453
Section             Data Loss Prevention                                               453
Section             Next-Generation Firewalls                                          453
Section             Host-based Intrusion Prevention System                             454
Section             Host-based Intrusion Detection System                              456
Section             Host-based Firewall                                                457
Section             Boot Integrity                                                     458
Section             Boot Security/Unified Extensible Firmware Interface                459
List                Measured boot process                                              460
Section             Boot Attestation                                                   460
Section             Database                                                           461
Figure 18-6         Tokenization process                                               462
Section             Salting                                                            462
Section             Hashing                                                            463
Section             Application Security                                               463
Section             Input Validations                                                  464
Section             Secure Cookies                                                     465
Section             Hypertext Transfer Protocol Headers                                465
Section             Code Signing                                                       466
Section             Allow List                                                         467
Section             Block List/Deny List                                               467
Figure 18-7         Security processes in the software development lifecycle (SDLC)    468
Section             Static Code Analysis                                               468
List                Running a static code analysis                                     469
Section             Manual Code Review                                                 470
Section             Dynamic Code Analysis                                              470
Section             Fuzzing                                                            471
Section             Hardening                                                          471
Section             Open Ports and Services                                            471
Section             Registry                                                           472
Section             Disk Encryption                                                    473
List                Protection methods to harden the operating system                  473
Section             Patch Management                                                   474
Section             Self-Encrypting Drive/Full-Disk Encryption                         475
Section             Hardware Root of Trust                                             476
List                Advantages of Trusted Platform Module (TPM) technology             478
Section             Sandboxing                                                         478

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

endpoint protection

antivirus software

antimalware

endpoint detection and response

DLP

next-generation firewall (NGFW)

host-based intrusion prevention system (HIPS)

host-based intrusion detection system (HIDS)

host-based firewall

boot integrity

Unified Extensible Firmware Interface (UEFI)

measured boot

boot attestation

tokenization

salt

hashing

input validation

secure cookie

code signing

allow list

block listing

deny lists

static code analysis

manual code review

dynamic code analysis

fuzzing

hardening

registry

disk encryption

patch management

self-encrypting drive (SED)

full-disk encryption (FDE)

Opal

hardware root of trust

Trusted Platform Module (TPM)

sandboxing

Review Questions

Answer the following review questions. Check your answers with the answer key in Appendix A.

1. What are the three strategies that antimalware software uses to protect systems from malicious software?

2. What is the first step toward achieving a trusted infrastructure on computers and networking devices?

3. In boot attestation, what is measured and committed during the boot process?

4. What places an exterior guard on the internal contents of a device?

5. What aspect of a disk array requires that replacement drives be configured to match the encryption protection at installation?
