CHAPTER 6

Security Controls for Host Devices

This chapter presents the following topics:

•   Trusted operating system

•   Endpoint security software

•   Host hardening

•   Boot loader protections

•   Vulnerabilities associated with hardware

•   Terminal Services/application delivery services

Users don’t directly interface each day with inline network encryptors, proxy servers, and load balancers; rather, they work on host devices such as desktops and laptops. Given the users’ laser focus on these device types, hackers will be equally focused on attacking them. Naturally, we must match the attackers’ effort with a myriad of security controls that specifically secure host devices.

In this chapter, we take a look at trusted operating systems to serve as a starting point for a secure computer. Next, we dive into endpoint security software, which is composed of a variety of security tools designed to secure the local computer. We follow this up with host-hardening techniques, which involve various configurations and changes to default settings to lock down a host device. After that we look at boot loader protections to ensure that a computer boots up securely. The last two sections tackle hardware vulnerabilities as well as Terminal Services and application delivery services. By locking down the host devices, users will be assured of a secure and productive working environment to help achieve company objectives.

Trusted Operating System

The concept of a trusted operating system has been around since even before the days of the DoD Trusted Computer System Evaluation Criteria (TCSEC), known as the “Orange Book.” In the early days of computer security, it was believed that if a trusted computing base (TCB) could be built, it would be able to prevent all security issues from occurring. In other words, if we could just build a truly secure computer system, we could eliminate security problems. Although this is a laudable goal, the reality of the situation quickly asserted itself—that is, no matter how secure we think we have made the system, something always seems to happen that allows a security event to occur. The discussion turned from attempting to create a completely secure computer system to creating a system in which we can place a certain level of trust. Thus, the Orange Book was developed, defining evaluation levels that corresponded to the varying degrees of trust that could be placed in systems certified at each level.

The Orange Book, although containing many interesting concepts that are as valid today as they were when the document was created, was replaced by the Common Criteria (CC), which is a multinational program in which evaluations conducted in one country are accepted by others that also subscribe to the tenets of the CC. At the core of both the Orange Book and the CC is this concept of building a computer system or device in which we can place a certain amount of trust and thus feel more secure about the protection of our systems and the data they process. The problem with this concept is that common operating systems have evolved over the years to maximize ease of use, performance, and reliability. The desire for a general-purpose platform on which to install and run any number of other applications does not lend itself to a trusted environment in which high assurance often equates to a more restrictive environment. This leads to a generalization that if you have an environment requiring maximum flexibility, a trusted platform is not the way to go.

In general, somebody wanting to utilize a trusted operating system probably has a requirement for a multilevel environment. Multilevel security is just what its name implies. On the same system you might, for example, have users who have Secret clearance as well as others who have Top-Secret clearance. You will also have information that is labeled as Secret and other information that is Top-Secret stored on the system. The operating system must provide assurances that individuals who have only a Secret clearance are never exposed to information classified as Top-Secret, and so forth.

Implementation of such a system requires a method to provide a label on all files (and a similar mechanism for all users) that declares the security level of the data. The trusted operating system will have to make sure that information is never copied from a document labeled Top-Secret to a document labeled Secret because of the potential for disclosing information. In the Common Criteria, the requirements for implementing such a system are described in the Labeled Security Protection Profile. In the older Orange Book, this level of security was enabled through the implementation of mandatory access control (MAC). Some vendors have gone through the process of obtaining a certification verifying compliance with the requirements for multilevel security, resulting in products such as Trusted Solaris. Other products have not gone through the certification, but may still provide an environment of trust allowing for this level of separation. An example of this is Security-Enhanced Linux (SELinux).

Microsoft, which has seen many vulnerabilities discovered in its Windows operating systems, attempted to address this issue of trust with its Next-Generation Secure Computing Base (NGSCB) effort. This effort highlighted what has been stated about trusted platforms because it offered users the option of a more secure computing environment, but this came at the expense of giving up a level of control as to what applications and files could be run on an NGSCB PC. The Microsoft initiative was announced in 2002 and given the code name Palladium. It would have resulted in a secure “fortress” designed to provide enhanced data protection while ensuring digital rights management and content control. Two years later, Microsoft announced it was shelving the project because of waning interest from consumers, but then turned around and said the project wasn’t dead completely but was just not going to take as prominent a role.

A little clarification is important at this point. Another term that is often heard related to the subject of trusted computing is “trustworthy computing.” The two are not the same. A trusted system is one in which a failure of the system will break the security policy upon which the system was designed. A trustworthy system, on the other hand, is one that will not fail. Trustworthy computing is not a new concept, but it has taken on a larger presence due to the Microsoft initiative by the same name. This initiative is designed to help instill more public trust in the company’s software by focusing on the four key pillars of security, privacy, reliability, and business integrity.

EXAM TIP    Make sure you understand the concept of multilevel security; it is not simply implemented by the normal access control mechanisms seen in most Windows- and Unix-based operating systems. Multilevel security implements multiple classification levels, and the operating system has to maintain separation of all data and users between those levels.

The CC implemented Evaluation Assurance Levels (EALs) to rate operating systems according to their level of security testing and design. Although the CC has deprecated EALs, they still might appear on the exam due to historical relevance. Here is a breakdown of the different EALs:

•   EAL1: Functionally Tested

•   EAL2: Structurally Tested

•   EAL3: Methodically Tested and Checked

•   EAL4: Methodically Designed, Tested, and Reviewed

•   EAL5: Semi-formally Designed and Tested

•   EAL6: Semi-formally Verified Design and Tested

•   EAL7: Formally Verified Design and Tested

EXAM TIP    The CC has replaced EALs with Protection Profiles. The reason for this is the widespread belief that many operating system vendors were doctoring their EAL scores by manipulating the evaluation process. As a result, EALs assigned to operating systems should be taken with a grain of salt. Protection Profiles promise to provide greater consistency, repeatability, and objectivity to all evaluation testing to counter such tactics.

Although most users won’t have a need for a highly trusted operating system, you’ll find these systems in various high-security government and military environments. What they lack in functionality and ease of use they make up for with security. Such an operating system will have a steeper learning curve, but this is a necessary sacrifice for the furtherance of national security. The next few topics present examples of trusted operating systems.

SELinux

A project of the National Security Agency (NSA) and the Security-Enhanced Linux (SELinux) community, SELinux is a group of security extensions that can be added to Linux to provide additional security enhancements to the kernel. SELinux provides a mandatory access control (MAC) system that restricts users to policies and rules set by the administrator. It also defines access and rights for all users, applications, processes, and files on the OS. Unlike many OSs, SELinux operates on the principle of default denial, where anything not explicitly allowed is implicitly denied. SELinux is commonly implemented on Android distributions, Red Hat Enterprise Linux, CentOS, Debian, and Ubuntu, among many others. SELinux can operate in one of three modes:

•   Disabled   SELinux does not load a security policy.

•   Permissive   SELinux displays warnings but does not enforce security policy.

•   Enforcing   SELinux enforces security policy.
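
On distributions that ship with SELinux, the current mode can be checked and changed from the shell. Here is a minimal sketch; note that setenforce changes last only until reboot, and persistent changes are made in /etc/selinux/config:

getenforce         # prints Disabled, Permissive, or Enforcing
sudo setenforce 0  # switch to Permissive until the next reboot
sudo setenforce 1  # switch back to Enforcing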

SEAndroid

As stated previously, SELinux commonly runs on Android, hence the adapted version called SEAndroid. As of Android version 4.4 (KitKat), Android supports SEAndroid with the “enforcing” mode, which means that permission denials are not only logged but also enforced by a security policy. This helps limit malicious or corrupt applications from causing damage to the OS. The benefits described previously for SELinux have been grafted onto the Android OS.

Trusted Solaris

Although deprecated now in favor of Solaris Trusted Extensions, Trusted Solaris was a group of security-evaluated OSs based on earlier versions of Solaris. Solaris Trusted Extensions carried the enhancements of Trusted Solaris, including accounting, auditing, device allocation, mandatory access control labeling, and role-based access control, into the standard Solaris OS.

Least Functionality

The principle of least privilege (or least functionality) is a requirement that only the necessary privileges are granted to users to access resources—nothing more and nothing less. If a task is not explicitly part of a user’s job description, the user should not be able to perform that task. This helps limit the permissions and rights of users to prevent unauthorized behaviors, not to mention accidental damage to their own systems. It also prevents any malware running on their systems from easily escalating privileges. As a result, least functionality helps achieve the goals of a trusted operating system.
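
On a Linux host, for instance, least functionality can be enforced in part by reviewing the services that start at boot and disabling those the system’s role doesn’t require (the service name below is purely illustrative):

systemctl list-unit-files --state=enabled       # review which services start at boot
sudo systemctl disable --now bluetooth.service  # stop and disable an unneeded service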

Endpoint Security Software

Endpoint security refers to a security approach in which each device (that is, endpoint) is responsible for its own security. That is not to say that other layers of security aren’t required, but rather that the endpoint must also directly contribute to its own security. We have already discussed an example of endpoint security: the host-based firewall. Instead of relying solely on physical network firewalls to filter traffic for all hosts on the network, the host-based firewall implements filtering specifically at the host endpoint. Often, other security mechanisms such as virtual private networks (VPNs) will make it harder for network security devices (such as intrusion detection systems) to do their job because the contents of packets will be encrypted. Therefore, if examination of the contents of a packet is important, it will need to be done at the endpoint. A number of very common software packages are also designed to push protection to the devices, including antimalware, antivirus, anti-spyware, and spam-filtering software, to name a few. We will discuss each of these, and more, in this section.

Antimalware

Antimalware software is a general-purpose security tool designed to prevent, detect, and eradicate multiple forms of malware such as viruses, worms, Trojan horses, spyware, and more. The term “malware” is short for malicious software and encompasses a number of different pieces of programming designed to inflict damage on a computer system or its data, to deny use of the system or its resources to authorized users, or to steal sensitive information that may be stored on the computer.

NOTE    Malware is often mischaracterized as just one specific type of malicious software akin to viruses or worms. To be clear, viruses and worms are types of malware; malware is the general category to which viruses, worms, spyware, and so on belong.

For malware to be effective, its malicious intent generally must be concealed from the user. This can be done by attaching the malware to another program, or making it part of the program itself—while still remaining hidden. One of the things that malware will often do is to attempt to replicate itself (in the case of worms and viruses, for example), and often the nefarious purpose may not immediately manifest itself so that the malware can accomplish maximum penetration before it performs its destructive purpose.

Although most of us are guilty of calling malware a “virus,” we need to be much more specific for the exam, as shown in the following list:

•   Virus   Malicious code that replicates after attaching itself to files on a victim’s device. When the victim’s files run, the virus is able to execute its payload. In other words, viruses cannot replicate on their own.

•   Worm   Self-replicating malicious code that can execute and spread independently of the victim’s applications or files. Unlike viruses, worms replicate on their own with zero human intervention.

•   Trojan horse   Malicious code disguised as seemingly harmless or friendly code.

•   Spyware   Malware that collects sensitive information about infected victims.

•   Rootkit   A stealth-like group of files that seek administrator or root privileges for total and near-invisible control of a device. Some rootkits can obtain kernel-level privileges on the device, which makes them difficult to detect and eradicate.

•   Ransomware   Malicious software that encrypts the victim’s files or threatens to publish them unless the victim pays a timely ransom—usually in the form of cryptocurrency for untraceability.

•   Keylogger   Software (or hardware) that captures a victim’s physical keystrokes on the keyboard. Although not necessarily illegal, many keyloggers are used for capturing passwords and other sensitive information.

•   Grayware   Software that behaves in an irritating or abnormal way, but isn’t classified with the more destructive forms of malware like viruses, worms, and Trojan horses. For example, grayware might change your home page, rearrange your desktop icons, or perform other annoying actions.

•   Adware   Applications that generate unwanted pop-ups or advertisements. Like grayware, adware generally isn’t considered a “major league” form of malware, but that doesn’t mean it can’t be.

•   Logic bomb   A form of malware that only runs after certain conditions are met, such as a specific date/time or when the Calculator application has been launched 15 times.

Many security vendors create antimalware applications designed to prevent, detect, and remove malware infections. The preferred route is to prevent infection in the first place. Antimalware packages that are designed to prevent installation of malware on a system provide what is known as real-time protection, because in order to prevent infection, the antimalware package must spot an attempt to infect a system and stop it from occurring. Antimalware packages designed to detect and remove malware will perform scheduled or manual scans of the computer system, which cover all files, programs, and the operating system. Real-time protection requires that the antimalware package run continuously, whereas detect-and-remove antimalware packages can be run on an occasional basis.

EXAM TIP    Know the different types of malware and the different software applications that will defend against them. Not all products will protect against all types of malware. Some may be designed to protect against spyware, others against viruses, and still others may do both.

Antivirus

Although most of today’s antimalware tools target multiple forms of malware, many businesses still use tools that are more limited in scope, such as antivirus software. Antivirus software is designed specifically to remediate viruses, worms, and Trojan horses—and that’s about it. Such tools can accomplish their goal in a couple of ways:

•   Signature-based detection   A detection method that looks for patterns of data known to be part of a virus. This works well for viruses that are already known and do not evolve or change—as opposed to polymorphic viruses, which can modify themselves in order to avoid signature-based detection. Another way to detect such self-modifying viruses is to analyze the code in a manner that allows for slight variations of known viruses. This will generally allow the antivirus software to detect viruses that are variants of an older virus, which is quite common.

•   Heuristic-based detection   A detection method based on analysis of code in order to detect previously unknown or new variants of existing viruses. The new viruses that have not been seen before are often referred to as zero-day (or 0-day) viruses or threats.
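
The update-signatures-then-scan cycle that signature-based detection depends on is easy to see with open-source tooling. Here is a minimal sketch using ClamAV, with the scan path chosen arbitrarily:

sudo freshclam                # download the latest virus signature database
clamscan -r --infected /home  # recursively scan, listing only infected files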

EXAM TIP    Sandboxing suspected malicious content is another method of discovering and eradicating malware. A sandbox is a tightly controlled environment that grants the program only limited access in order to control the potential for undesirable activity. If the code attempts to do anything that appears to be malicious, an alarm is generated and the program is either deleted or quarantined. If no malicious activity is detected, the file is allowed.

Because virus detection is an inexact science, two possible types of errors can occur. A false-positive error is one in which the antivirus program decides that a certain file is malicious when, in fact, it is not. A false-negative error occurs when the antivirus software decides that a file is safe when it actually does contain malicious content. Obviously, the desire is to limit both of these errors. The challenge is to “tighten” the system to a point so that it catches all (or most) viruses while rejecting as few benign programs as possible.

When you’re selecting an antivirus vendor, one important factor to consider is how frequently the database of virus signatures is updated. With variations of viruses and new viruses occurring on a daily basis, it is important that your antivirus software uses the most current list of virus signatures in order to stand a chance of protecting your systems. Most antivirus vendors offer signature updating for some specified initial period of time—for example, one or two years. At the conclusion of this period, a subscription renewal will be required in order to continue to obtain information on new threats. Although it is tempting to let this ongoing expense lapse, this is not generally a good idea because your system would then only be protected against viruses that are known up to the point when you quit receiving updates.

Anti-Spyware

Spyware is a special breed of malicious software. It is designed to collect information about the users of the system without their knowledge. The type of information that may be targeted by spyware includes Internet surfing habits (sites that have been visited or queries that have been made), user IDs and passwords for other systems, and personal information that might be useable in an identity theft attempt.

EXAM TIP    Keyloggers are a special type of spyware. A keylogger will record all keystrokes an individual makes, thus providing an exact image of the activity of the user. Although keyloggers are often part of malicious software installed on a system as a result of an individual running a program or clicking a link that they should not have, keyloggers can also be installed by the owner of a computer in order to monitor employees or other individuals who use the system.

Anti-spyware is designed to perform a similar function to antivirus software, except its purpose is to prevent, detect, and remove spyware infections. Windows Defender was originally just an anti-spyware tool but has since incorporated multiple forms of malware eradication into its purview. Anti-spyware software can be employed in a real-time mode to prevent infection by scanning all incoming data and files, attempting to identify spyware before it can be activated on the system. Alternatively, anti-spyware software can be run periodically to scan all files on your system in order to determine whether spyware has already been installed. It will concentrate on operating system files and installed programs. Similar to antivirus software, anti-spyware software looks for known patterns of existing spyware. As a result, anti-spyware software also relies on a database of known threats and requires frequent updates to this database in order to be most effective. Some anti-spyware software does not rely on a database of signatures but instead scans certain areas of an operating system where spyware often resides.

Writers of spyware have gotten clever in their attempts to evade anti-spyware detection. Some now have a pair of programs that run (if you were not able to prevent the initial infection) and monitor each other so that if one of the programs is killed, the other part of the pair will immediately respawn it. Some spyware also watches special operating system files (such as the Windows registry), and if the user attempts to restore certain keys the spyware has modified, or attempts to remove registry items it has added, it will quickly set them back again. One trick that may help in removing persistent spyware is to reboot the computer in safe mode and then run the anti-spyware package to allow it to remove the installed spyware.

Spam Filters

Spam is the term used to describe unsolicited bulk e-mail messages. It is also often used to refer to unsolicited bulk messages sent via instant messaging, newsgroups, blogs, or any other method that can be used to send a large number of messages. E-mail spam is also sometimes referred to as unsolicited bulk e-mail (UBE). It frequently contains commercial content, and this is the reason for sending it in the first place—for quick, easy mass marketing. Increasingly today, e-mail spam is sent using botnets, which are networks of compromised computers referred to as bots or zombies. The bots on the compromised systems will stay inactive until they receive a message that activates them, at which time they can be used for mass mailing of spam or other nefarious purposes such as a denial-of-service attack on a system or network. Botnets usually number in the thousands of systems but can grow, without exaggeration, to tens of millions, as with the Conficker, Bredolab, and other botnets.

Spam is generally not malicious but rather is simply annoying, especially when numerous spam e-mail messages are received daily. Preventing spam from making it to your inbox so that you don’t have to deal with it is the goal of spam filters. Spam filters are basically special versions of the more generic e-mail filters. Spam filtering can be accomplished in several different ways. One simple way is to look at the content of the e-mail and search for keywords that are often found in spam (such as the names of drugs, like Cialis, commonly pushed in mass-mailing advertising). The problem with keyword searches is the issue of false positives discussed earlier. Filtering on the characters “cialis” would also cause an e-mail with the word “specialist” to be filtered because those letters are found within it, as the short demonstration below shows. Users are generally much more forgiving of an occasional spam message slipping through the filter than of having valid e-mail filtered, so this is a critical issue. Usually, when an e-mail has been identified as spam, it will be sent to a special “quarantine” folder. The user can then periodically check the folder to ensure that legitimate e-mail has not been inadvertently filtered.
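
A quick shell demonstration of the “cialis”/“specialist” problem, along with the usual word-boundary fix:

echo "our specialist will contact you" | grep -iE 'cialis'      # matches: a false positive
echo "our specialist will contact you" | grep -iE '\bcialis\b'  # no match: word boundaries help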

Another method for filtering spam is to keep a “blacklist” of sites that are known to be friendly to spammers. If e-mail is received from an IP address for one of these sites, it will be filtered. The lists may also contain known addresses for botnets that have been active in the past. An interesting way to populate these blacklists is through the use of spamtraps. These are e-mail addresses that are not real in the sense that they are not assigned to a real person or entity. They are seeded on the Internet so that when spammers attempt to collect lists of e-mail addresses by searching through websites and other locations on the Internet for e-mail addresses, these bogus e-mail addresses are picked up. Any time somebody sends an e-mail to them, because they are not legitimate addresses, it is highly likely that the e-mail is coming from a spammer and the IP address from which the e-mail was generated can be placed on the blacklist.

CAUTION    It’s highly recommended that you never respond to a spam e-mail. Responding to such an e-mail provides confirmation to the spammer that the e-mail address is legitimate and is being used by a legitimate user who is reading the e-mail. Also, if your e-mail application asks you to display pictures and links, say no. Pictures may get downloaded from malicious sites, which acts like a fish “tugging” the hacker’s “line.” Plus, the links may contain malicious code that hijacks your connections or redirects you to an attacker’s site. Disabling HTML altogether would protect you from many of these e-mail threats.

Individuals who want to avoid having their e-mail harvested from websites can use a method such as modifying their address in such a way that human users will quickly recognize it as an e-mail but automated programs may not. An example of this for a user with the e-mail of [email protected] might be john(at)abcxyzcorp(dot)com.

The more generic e-mail filters can be used to block spam but also other incoming or outgoing e-mails. They may block e-mail from sites known to send malicious content, may block based on keywords that might indicate the system is being used for other-than-official purposes, or could filter outgoing traffic based on an analysis of the content to ensure that sensitive company data is not sent (or at least not sent in an unencrypted manner).

NOTE    One method spammers use to slip by keyword filters is to not include text in the body of the e-mail but rather take a screen capture of the advertisement and include it as an image. Doing this means that the filter simply sees the body as including an image. Some organizations address this by not allowing pictures in the body of incoming e-mail messages, but filtering based on this alone may result in false-positive errors.

Patch Management

Managing an organization’s software updates is a classic case of picking your poison. If you patch systems too quickly, you risk breaking your stuff. If you patch systems too slowly due to testing, you risk others breaking your stuff. Although patch management cannot completely solve these challenges, it helps balance the competing desires of testing patches while not waiting too long to deploy them. Shown here are the common types of software updates:

•   Security patch   A software update that fixes application vulnerabilities

•   Hotfix   A critical update addressing a specific software issue that should not be delayed

•   Service pack   A large collection of updates for a particular product released as one installable package

•   Rollup   A smaller collection of updates for a particular product

Patching is necessary because software that is actively supported by vendors, or internal developer teams, is never truly “finished.” There’s always room for improvement, whether the goals are to enhance the software’s reliability, functionality, performance, or, most commonly, security. Software patches are developed by the application vendor, or in-house developer, due to bugs discovered with the software during in-house code testing, public beta testing, or by white hat and black hat hackers alike. Unless software is developed in-house, updates typically stem from the software vendor’s website. Since updates are published for various operating systems, applications, and even firmware, organizations will sometimes be overwhelmed. The larger the organization, the more unique products they have that require patching. Certain vendors release updates all the time due to increased product popularity (and the resulting attention it receives from hackers).

Although most products try to help by automatically downloading updates, this effectively kills your patch management solution due to lack of testing, bandwidth control, compliance monitoring, and so forth. Testing updates is key because, as with medications in the pharmaceutical industry, updates should be thoroughly tested after development to ensure no adverse side effects are experienced on production systems. A proper patch management solution involves detecting, assessing, acquiring, testing, deploying, and maintaining software updates. This will help ensure that all operating systems, applications, and firmware continue to receive the latest updates with minimal security risks. Given the strong security focus of software updates, it is critical that organizations take patch management seriously. The patch management steps are as follows:

•   Detect   Discover missing updates.

•   Assess   Determine issues and resulting mitigations expected from the patch.

•   Acquire   Download the patch.

•   Test   Install and assess the patch on quality assurance systems or virtual machines.

•   Deploy   Distribute the patch to production systems.

•   Maintain   Manage systems by observing any negative effects from updates and determining whether other security patches are needed.
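
On a Linux host, for example, several of these steps map directly onto the package manager. Here is a minimal sketch using apt (yum/dnf offer equivalents):

sudo apt update        # Detect/Acquire: refresh the list of available updates
apt list --upgradable  # Assess: review which packages have pending updates
sudo apt upgrade       # Test/Deploy: install the updates (on test systems first)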

TIP    Some good examples of products that offer patch management include Microsoft System Center Configuration Manager, Kaseya Security Patch Management, SolarWinds Patch Manager, and Quest KACE.

HIPS/HIDS

Earlier we discussed the use of firewalls to block or filter network traffic to a system. Although this is an important security step to take, it is not sufficient to cover all situations. Some traffic may be totally legitimate based on the firewall rules you set, but may result in an individual being able to exploit a vulnerability within the operating system or an application program. Firewalls are prevention technology; they are designed to prevent a security incident from occurring. Intrusion detection systems (IDSs) were initially designed to detect when your prevention technologies failed, allowing an attacker to gain unauthorized access to your system. Later, as IDS technology evolved, these systems became more sophisticated and were placed in-line so that they did not simply detect when an intrusion occurred but rather could prevent it as well. This led to the development of intrusion prevention systems (IPSs).

Early IDS implementations were designed to monitor network or system activity, looking for signs that an intrusion had occurred. If one was detected, the IDS would notify administrators of the intrusion, who could then respond to it. Two basic methods were used to detect intrusive activity. The first, anomaly-based detection, is based on statistical analysis of current network or system activity versus historical norms. Anomaly-based systems build a profile of what is normal for a user, system, or network, and any time current activity falls outside of the norm, an alert is generated. The type of things that might be monitored include the times and specific days a user may log into the system (for example, do they ever access the system on weekends or late at night?), the type of programs a user normally runs, the amount of network traffic that occurs, specific protocols that are frequently (or never) used, and what connections generally occur. If, for example, a session is attempted at midnight on a Saturday, when the user has never accessed the system on a weekend or after 6:00 P.M., this might very well indicate that somebody else is attempting to penetrate the system.

The second method to accomplish intrusion detection is based on attack signatures. A signature-based system relies on known attack patterns and monitors for them. Certain commands or sequences of commands may have been identified as methods to gain unauthorized access to a system. If these are ever spotted, it indicates that somebody is attempting to gain unauthorized access to the system or network. These attack patterns are known as signatures, and signature-based systems have long lists of known attack signatures they monitor for. The list will occasionally need to be updated to ensure that the system is using the most current and complete set of signatures.

Advantages and disadvantages are associated with both types of systems, and some implementations actually combine both methods in order to try and cover all possible avenues of attack. Signature-based systems suffer from the tremendous disadvantage that they, by definition, must rely on a list of known attack signatures in order to work. If a new vulnerability is discovered (a zero-day exploit), there will not be a signature for it and therefore signature-based systems will not be able to spot it. There will always be a lag between the time a new vulnerability is discovered and the time when vendors are able to create a signature for it and push it out to their customers. During this period of time, the system will be at risk to an exploit taking advantage of this new vulnerability. This is one of the key points to consider when evaluating different vendor IDS and IPS products—how long does it take them to push out new signatures?

Because anomaly-based systems do not rely on a signature, they have a better chance of detecting previously unknown attacks—as long as the activity falls outside of the norm for the network, system, or user. One of the problems with systems that strictly use anomalous activity detection is that they need to constantly adapt the profiles used because user, system, and network activity changes over time. What may be normal for you today may no longer be normal if you suddenly change work assignments. Another issue with strictly profile-based systems is that a number of attacks may not appear to be abnormal in terms of the type of traffic they generate and therefore may not be noticed. As a result, many systems combine both types so that all the aforementioned advantages can be used to create the best-possible monitoring situation.

IDS and IPS also have the same issues with false-positive and false-negative errors as was discussed before. Tightening an IDS/IPS to spot all intrusive activity so that no false negatives occur (that is, so no intrusion attempts go unnoticed) means that the number of false positives (that is, activity identified as intrusive that in actuality is not) will more than likely increase dramatically. Because an IDS generates an alert when intrusive activity is suspected and because an IPS will block the activity, falsely identifying valid traffic as intrusive will cause either legitimate work to be blocked or an inordinate number of alert notifications that administrators will have to respond to. Frequently, when the number of alerts generated is high, and most turn out to be false positives, administrators will get into the extremely poor habit of simply ignoring the alerts. Tuning your IDS/IPS for your specific environment is therefore an extremely important activity.

Host-Based Intrusion Detection and Prevention Systems

Just as we discussed in the case of firewalls, an IDS or IPS can be placed in various locations. One of them is on the host itself. When it is installed at this level, it is known as a host-based intrusion detection system (HIDS) or host-based intrusion prevention system (HIPS). Some of the original IDSs were HIDSs because they were run on large mainframe computers before the use of PCs became widespread. In addition to monitoring network traffic to and from the system, an HIDS/HIPS may also monitor the programs running on the host and the files that are being accessed by them. It may also monitor regions of memory to ensure only appropriate areas have been modified, and may also keep track of specific information on files, including generating a checksum or hash for them to determine if they have been modified. It is interesting to note that due to its function, an HIDS/HIPS may itself become the object of an intruder who wants to go unnoticed and therefore may attempt to modify the HIDS/HIPS and its data. In addition to human intruders, an HIDS/HIPS may also be useful in detecting and preventing certain types of malware from adversely impacting the host.
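
The checksum technique just mentioned can be approximated by hand with standard hashing tools (the file choices below are illustrative); real HIDS/HIPS products automate this and protect the baseline itself from tampering:

sha256sum /etc/passwd /etc/shadow > baseline.sha256  # record known-good hashes
sha256sum -c baseline.sha256                         # later: verify the files are unmodified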

Data Loss Prevention

Data loss prevention (DLP) is, in a way, the opposite of intrusion prevention systems. Think about what intrusion prevention systems do—they detect, notify, and mitigate inbound attacks. They help stop the bad stuff from being brought into the company. As for opposites, what solution could we use to detect, report, and stop the good stuff from getting out of the company? In other words, how do we prevent data from being leaked or otherwise falling into unauthorized hands? That’s where data loss prevention comes in.

DLP involves the technology, processes, and procedures designed to detect when unauthorized removal of data from a system occurs. Like host-based firewalls, DLP is often implemented at the endpoint (host level); therefore, the host can determine if unauthorized attempts at destroying, moving, or copying data are taking place. DLP solutions will respond by blocking the transfer or dropping the connection entirely. DLP policies identify sensitive content based on classification and then define the actions to take when unauthorized operations are performed on the content in question. This guards against malicious attacks and accidents alike.
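
As a crude sketch of the content-matching idea at the heart of DLP, the following command flags files on a share that contain U.S. SSN-formatted strings (the path and pattern are illustrative; commercial products use far more robust detection and enforcement):

grep -rElI '[0-9]{3}-[0-9]{2}-[0-9]{4}' /srv/shared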

NOTE    Microsoft Office 365 has a DLP tool that can prevent the accidental sharing of sensitive information such as credit card numbers, driver’s license numbers, and Social Security numbers.

Host-Based Firewalls

In your car, the firewall sits between the passenger compartment and the engine. It is a fireproof barrier that protects the passengers within the car from the dangerous environment under the hood. A computer firewall serves a similar purpose—it is a protective barrier that is designed to shield the computer (the system, user, and data) from the “dangerous” environment it is connected to. This dangerous and hostile environment is the network, which in turn is most likely connected to the Internet. A firewall can reside in different locations. A network firewall will normally sit between the Internet connection and the network, monitoring all traffic that is attempting to flow from one side to the other. A host-based firewall serves a similar purpose, but instead of protecting the entire network, and instead of sitting on the network, it resides on the host itself and only protects the host. Whereas a network firewall will generally be a hardware device running very specific software, a host-based firewall is a piece of software running on the host.

Firewalls examine each packet sent or received to determine whether or not to allow it to pass. The decision is based on the rules the administrator of the firewall has set. These rules, in turn, should be based on the security policy for the organization. For example, if certain websites are prohibited based on the organization’s Internet Usage Policy (or the desires of the individual who owns the system), sites such as those containing adult materials or online gambling can be blocked. Typical rules for a firewall will specify any of a combination of things, including the source and/or destination IP address, the source and/or destination port (which in turn often identifies a specific service such as e-mail), a specific protocol (such as TCP or UDP), and the action the firewall is to take if a packet matches the criteria laid out in the rule.

EXAM TIP    Typical actions include allow, deny, and alert. The rules in a firewall are examined in order, and rules will continue to be checked until a match is found or until no more rules are left. Because of this, the very last rule in the set of rules will generally be the “default” rule, which will specify the activity to take if no other rule was matched. The two extremes for this last rule are to deny all packets that didn’t match another rule and to allow all packets. The first is safer from a security standpoint; the second is a bit friendlier because it means that if there isn’t some rule specifically denying this access, then it will be allowed.

A screen capture of the simple firewall supplied by Microsoft for its Windows 10 operating system is shown in Figure 6-1. As can be seen, the program provides some simple options to choose from in order to establish the level of filtering desired. A finer level of detail can be obtained by going into the Advanced option, but most users never worry about anything beyond this initial screen. In the newer Windows operating systems, the firewall is based on the Windows Filtering Platform. This service allows applications to tie into the operating system’s packet processing to provide the ability to filter packets that match specific criteria. It can be controlled through a management console found in the Control Panel under Windows Firewall. It allows the user to select from a series of basic settings, but will also allow advanced control, giving the user the option to identify actions to take for specific services, protocols, ports, users, and addresses.

Figure 6-1    Windows Defender Firewall with Advanced Security (Windows 10)
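
The same filtering can also be scripted, which is useful for standardizing hosts. The following sketch, run from an elevated prompt, uses the built-in netsh tool (the rule name and port are illustrative):

netsh advfirewall firewall add rule name="Block Inbound Telnet" dir=in action=block protocol=TCP localport=23
netsh advfirewall firewall show rule name="Block Inbound Telnet"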

For Linux-based systems, a number of firewalls are available, with the most commonly used one being iptables, which replaced the previous most commonly used package, called ipchains. Iptables provides the same basic functions as those found in commercial network-based firewalls. Common functionality includes the ability to accept or drop packets, to log packets for future examination, and to reject a packet while returning an error message to the host sending the packet. As an example of the format and construction of firewall rules used by iptables, the rules that would allow WWW (port 80) and SSH (port 22) traffic would look like the following:

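A sketch of such rules, consistent with the explanation that follows (the eth0 interface name and the 1024:65535 ephemeral source-port range are assumptions):

iptables -A INPUT -i eth0 -p tcp --sport 1024:65535 --dport 80 -m state --state NEW -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 1024:65535 --dport 22 -m state --state NEW -j ACCEPT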

The specifics of these commands are as follows: -A tells iptables to append the rule to the end of the chain; -p identifies the protocol to match (in this case, TCP); -i identifies the input interface; --dport and --sport specify the destination and source ports, respectively; -m state --state NEW states that the packet should be the start of a new connection; and -j ACCEPT tells iptables to stop further processing of the rules and hand the packet over to the application.

An important point to note that is not always considered is that a firewall can filter packets that are either entering or leaving the host or network. This allows an individual or organization the opportunity to monitor and control what leaves the host or network, not just what enters it. This is related to, but is not entirely the same as, data exfiltration, which we will discuss later in this chapter.

Another consideration from the organization’s point of view is the ability to centrally manage the organization’s host firewalls if they are used. The issue here is whether you really want your users to have the ability to set their own rules on their hosts or whether it is better to have one policy that governs all user machines. If you let the users set their own rules, not only can they allow access to sites that might be prohibited based on the organization’s usage policy, but they might also inadvertently block important sites that could impact the functionality of the system.

EXAM TIP    Understanding that firewall rules are checked in a specific order is critical for the correct implementation of those rules. You would not want to have the default rule, allowing (or denying) all other traffic, be placed first in your rule set because none of the other rules would ever be checked. Watch carefully the creation and placement of rules that include “any ip address,” “any port,” or “any protocol.”

Special-purpose host-based firewalls, such as host-based application firewalls, are also available for use. The purpose of an application firewall is to monitor traffic originating from or traveling to a specific application. Application firewalls can be more discriminating, going beyond simply looking at source/destination IP addresses, ports, and protocols. Application firewalls understand something about the type of information that is generated by or sent to the specific application and can, in fact, make decisions on whether to allow or deny information based on the contents of the connection.

EXAM TIP    A host-based application firewall will generally be used in conjunction with a packet-filtering host-based firewall instead of replacing it completely. This provides an additional level of filtering to better protect the host because they act on the application layer, which means they can inspect the contents of the traffic, allowing them to block specified content such as certain websites, malicious logic, and attempts to exploit known logical flaws in client software.

Firewall Scenarios and Solutions

Here are a few firewall scenarios to help immerse you a little deeper into strategizing appropriate firewall solutions:

Scenario: You are installing a network firewall that will examine all incoming and outgoing network traffic. Because you are installing a network firewall, does this eliminate the need to conduct any type of monitoring and filtering at the individual host level?

Solution: No. The network firewall can do a lot for your organization’s security, but there are some things it will miss that the hosts can catch. In particular, traffic that is encrypted will not be able to be analyzed by the network firewall, but monitoring and filtering conducted on the host can be done at a level that is post-decryption. Host-based application firewalls in particular are applicable in this context.

Scenario: Is it better to centrally manage host-based firewalls and filters or to provide the users the opportunity, and responsibility, to maintain their own systems?

Solution: It would be nice if we could trust users to maintain their own firewalls, and to filter traffic appropriately, but the reality of the situation is that because security is not their primary responsibility (nor is it their primary concern), users should not be expected to maintain their own filters and firewalls. In addition, assigning them the responsibility means that you will need to ensure they understand how to do it and know what needs to be done. In most organizations, this is not the norm for users.

Log Monitoring

Today’s security professionals should operate under the assumption that host devices have already been compromised. Regardless of how probable such a compromise might be at the moment, operating from the assumption of “implicit compromise” makes sense. A certain amount of nervousness is actually healthy for us as security practitioners because it sharpens our senses to perform at their highest level. A heightened sense of urgency is needed when we’re competing against a faceless enemy of indeterminate skill, size, motivation, and location.

The unavoidable fact is that we cannot prevent all attacks; therefore, we turn to detection, which is epitomized by the discovery and analysis of malicious activities through log monitoring. We have a ton of logs to help us discover potential abuses, yet having so many logs also complicates our ability to detect, analyze, and respond to security breaches in a timely fashion. The Windows Event Viewer alone may hold over 100,000 records.

Types of Logs

If organizations are going to maximize their log-monitoring capabilities, they must first be aware of the types of logs they have. Shown here are the most common log types:

•   Operating system logs

•   Web server logs

•   E-mail server logs

•   Database server logs

•   Host-firewall logs

•   Application logs

•   Packet sniffer logs

•   Antimalware logs

To tame this beast, security professionals should implement a log-monitoring tool that can automate the collection and analysis of various log types. With all the logs under the same roof, malicious event detection becomes much easier; plus, it’ll help us verify the effectiveness of our security controls.
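
Getting all the logs under the same roof is often bootstrapped with syslog forwarding. A minimal rsyslog sketch, assuming a hypothetical collector named logserver.example.com:

# /etc/rsyslog.d/forward.conf: send a copy of every message to a central collector over TCP
*.* @@logserver.example.com:514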

EXAM TIP    Be aware of the popular log formats. For example, the World Wide Web Consortium (W3C), Extended Log Format (ELF), and NCSA log formats are popular with web servers. Syslog is widely used for device and operating system logging purposes.

Since Windows is the most popular desktop operating system, it’s important to understand both the Event Viewer and the Windows Audit Policies.

Windows Event Viewer

The Windows Event Viewer is a logging tool that records various operating system, security, and application events using descriptions such as “information,” “warning,” “error,” and “critical.” These events are categorized into separate logs, as shown here:

•   Application   Contains events generated by applications. Useful for troubleshooting application issues.

•   Security   Contains audited events for account logins, resource access, and so on. Useful for auditing and determining accountability of human activities.

•   Setup   Contains setup events such as Windows Update installations. Useful for troubleshooting setup failures.

•   System   Contains events for operating system and hardware activities. Useful for troubleshooting driver, service, operating system, and hardware issues. This is arguably the most important log in Event Viewer.

•   Forwarded Events   Contains events forwarded from other systems. Useful for aggregating events from servers onto your IT workstations for centralized log monitoring.
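
These logs can also be queried from the command line, which helps when scripting log monitoring. For example, the built-in wevtutil tool can show the ten most recent System events in text form (run it from an elevated prompt to query the Security log):

wevtutil qe System /c:10 /rd:true /f:text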

NOTE    Event Viewer’s security log is particularly important for security professionals due to its abundance of “auditing” events, which can show traces of system or information misuse. More to follow on this in the next section.

Windows Audit Policies

As with most operating systems, Windows has built-in auditing capabilities to help us determine accountability of outcomes—as in who committed the desirable or undesirable actions. Such outcomes may be in reference to successful or failed login attempts, file access, password changes, and so forth. However, many individuals think that auditing is just another word for logging. Is there a difference between the two? In certain contexts, no—but in light of information security, an important distinction does exist. Think of auditing as a specialized type of logging. Logging, in itself, is just an automated collection of records, whereas auditing is more fact-finding in nature.

Let’s take a look at three sequential log entries for a Sales employee named John Smith:

1.   John Smith successfully logged into the Sales-1 workstation at 7:30 A.M.

2.   John Smith successfully used the “Read” permission on the Sales shared folder located on the file server at 7:35 A.M.

3.   John Smith failed to access the Human Resources shared folder located on FileServer1 at 7:45 A.M.

Logging simply records these three activities into a log file. Auditing, however, digs deeper. Auditing is a more analytical, security- and human-focused form of logging in that it helps us piece together a trail of evidence to determine whether authorized or unauthorized actions are being conducted by users. In other words, auditing involves not only the generation but also the examination of logs to identify signs of security breaches. In the preceding example, John Smith failed to access the Human Resources share. This raises a few questions:

•   Why would John Smith, a Sales user, attempt to access a Human Resources directory?

•   Was this a deliberate malicious act or an accident?

•   Is the individual logged in as John Smith actually John Smith or someone else?

Through additional generation and review of such records, we’ll be able to reasonably determine if this was an attempted security breach or a false alarm.

Let’s take a look at Windows Group Policy auditing from the perspective of Windows Server Domain Controllers using the Group Policy Management tool. You would then navigate to Computer Configuration\Windows Settings\Security Settings\Local Policies\Audit Policy for configuration. After configuration, you would visit the Security log under the Event Viewer. Here are some examples of auditing policies (a command-line equivalent using the auditpol tool is sketched after the list):

•   Audit account logon events   Audits all attempts to log on with a domain user account, regardless of which domain computer the login attempt originated from. This policy is preferred over the “Audit logon events” policy below due to its increased scope.

•   Audit account management   Audits account activities such as the creation, modification, and deletion of user accounts, group accounts, and passwords.

•   Audit directory service access   Audits access to Active Directory objects such as OUs and Group Policies, in addition to users, groups, and computers. Think of this as a deeper version of “Audit account management.”

•   Audit logon events   Tracks all attempts to log onto the local computer (say, a Domain Controller), regardless of whether a domain account or a local account was used.

•   Audit object access   Audits access to non–Active Directory objects such as files, folders, registry keys, printers, and services. This is a big one for determining if users are trying to access files/folders from which they are prohibited.

•   Audit policy change   Audits attempts to change user rights assignment policies, audit policies, account policies, or trust policies (in the case of Domain Controllers).

•   Audit privilege use   Audits the exercise of user rights, such as adding workstations to a domain, changing the system time, backing up files and directories, and so on. Often considered a messy and “too much information” policy and therefore not generally recommended.

•   Audit process tracking   Audits the execution and termination of programs, also known as processes.

•   Audit system events   Audits events such as a user restarting or shutting down the computer, or when activities affect the system or security logs in Event Viewer.
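
As referenced above, the effective audit policy can be inspected and adjusted from an elevated prompt with the built-in auditpol tool. The first command lists the current settings; the second enables success and failure auditing for the Logon subcategory, chosen here purely as an example:

auditpol /get /category:*
auditpol /set /subcategory:"Logon" /success:enable /failure:enable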

Endpoint Detection and Response

Traditional antimalware, HIDS/HIPS, and DLP solutions are known for taking immediate eradication and recovery actions upon discovery of malicious code or activities. Although this is a good thing, such quick reactions may deprive us of a full understanding of the threat’s scope. In other words, we don’t want to win the battle at the expense of losing the war. Greater threat intelligence must be ascertained, including a determination of the threat’s level of sophistication and whether the threat is capable of using infected endpoints to attack other endpoints.

Endpoint detection and response (EDR) solutions will attempt to answer these concerns by initially monitoring the threat—collecting event information from memory, processes, the registry, users, files, and networking—and then uploading this data to a local or centralized database. The EDR solution will then correlate the uploaded information with other information already present in the database in order to re-analyze and, potentially, mitigate the previously detected threat from a position of increased strength. Other endpoints should be examined by EDR solutions to ensure similar threats are understood and eradicated in a timely fashion.

NOTE    Examples of EDR solutions include Symantec’s Endpoint Detection and Response, FireEye’s Endpoint Security, and Guidance Software’s EnCase Endpoint Security.

Host Hardening

Implementing a series of endpoint security mechanisms as described in the previous section is one approach to securing a computer system. Another, more basic approach is to conduct host-hardening tasks designed to make it harder for attackers to successfully penetrate the system. Often this starts with the basic patching of software, but before attempting to harden the host, the first step should be to identify the purpose of the system—what function does this system provide? Whether the system is a PC for an employee or a network server of some sort, before you can adequately harden the system, you need to know what its intended purpose is. There is a constant struggle between usability and security. In order to determine what steps to take, you have to know what the system will be used for—and possibly of equal importance, what it is not intended to be used for.

Defining the standard operating environment for your organization’s systems is your first step in host hardening. This allows you to determine which services and applications are unnecessary and can thus be removed. In addition to removing unnecessary services and applications, similar efforts should be made to identify unnecessary accounts and to change the names and passwords of default accounts. Shared accounts should be discouraged, and two-factor authentication should be used where possible. An important point to remember is to always use encrypted authentication mechanisms. Access to resources should also be carefully considered in order to protect confidentiality and integrity. Deciding who needs what permissions is an important part of system hardening. This extends to the limiting of privileges, including restricting who has root or administrator privileges and, more simply, who has write permissions to various files and directories.

Standard Operating Environment/Configuration Baselining

It is generally true that the more secure a system is, the less usable it becomes. This is true if for no other reason than that hardening your system should include removing applications that are not needed for the system’s intended purpose—which, by definition, makes it less usable (because you will have removed an application).

If you’ve done a good job in determining the purpose for the system, you should be able to identify what applications and which users need access to the system. Your first hardening step will then be to remove all users and services (programs/applications) not needed for this system. An important aspect of this is identifying the standard environment for employees’ systems. If an organization does not have an identified standard operating environment (SOE), administrators will be hard pressed to maintain the security of systems because there will be so many different existing configurations. If a problem occurs requiring massive reimaging of systems (as sometimes occurs in larger security incidents), organizations without an identified SOE will spend an inordinate amount of time trying to restore the organization’s systems and will most likely not totally succeed. This highlights another advantage of having an SOE—it greatly facilitates restoration or reimaging procedures.

A standard operating environment will generally be implemented as a disk image consisting of the operating system (including appropriate service packs and patches) and required applications (also including appropriate patches). The operating system and applications should include their desired configuration. For Windows-based operating systems, the Microsoft Deployment Toolkit (MDT) can be used to create deployment packages that can be used for this purpose.
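As a hedged illustration of building such an image, the built-in DISM tool can capture a prepared reference installation into a deployable WIM file (the paths and image name here are hypothetical):

# Capture the configured reference volume into a deployable image
DISM /Capture-Image /ImageFile:D:\images\soe-win10.wim /CaptureDir:C:\ /Name:"SOE baseline v1"

# Later, apply the image to a target volume during deployment
DISM /Apply-Image /ImageFile:D:\images\soe-win10.wim /Index:1 /ApplyDir:W:\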


EXAM TIP    A key concept to remember is to limit the services available on a system. This is true no matter what the operating system is. The more services that are available (and the more applications that are running), the more vulnerabilities you will need to be concerned with because each application may have one or more vulnerabilities that can be exploited. If you don’t need a specific service, don’t keep it around; otherwise, you may be needlessly exposing your system to possible exploitation.

Application Whitelisting and Blacklisting

An important part of host hardening is ensuring that only authorized applications are allowed to be installed and run. There are two basic approaches to achieving this goal:

•   Application whitelisting   This is a list of applications that are permitted to be installed and executed. Any applications not on the list are implicitly denied. Firewalls typically adopt this approach by implicitly denying all traffic while creating exceptions for the traffic you wish to allow. The downside to this method is that if you forget to put certain desired applications on the list, they will be prohibited.

•   Application blacklisting   This is a list of applications that should be denied installation and execution. Any applications not on the list are implicitly allowed. This method is frequently used by antimalware tools via definition databases. The advantage of blacklisting is that it’s less likely to block desirable software than whitelisting.

Prior to Windows 7, we implemented Software Restriction Policies via Group Policy to identify software for whitelisting or blacklisting purposes. This feature was superseded by a Group Policy tool introduced in Windows 7 Enterprise/Ultimate called AppLocker. AppLocker provides additional whitelisting and blacklisting capabilities for the following software scenarios (a PowerShell sketch follows the list):

•   Software that can be executed

•   Software that can be installed

•   Scripts that can run

•   Microsoft Store apps that can be executed
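If the AppLocker PowerShell cmdlets are available, a starter whitelist can be generated from a directory of known-good software. A hedged sketch, assuming the scanned path and output file are your own choices:

# Build publisher/hash allow rules from executables under Program Files
Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse -FileType Exe |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
    Out-File .\AppLockerPolicy.xml

# Merge the generated rules into the local AppLocker policy
Set-AppLockerPolicy -XmlPolicy .\AppLockerPolicy.xml -Merge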


NOTE    Most experts and industry standards suggest that whitelisting is superior to blacklisting because all the bad stuff is banned by default, with only the chosen few permitted. If you need one example to prove this, imagine making a wedding list of the 7+ billion people who aren’t invited, as opposed to the 100 who are invited!

Security/Group Policy Implementation

Group Policy is a feature of Windows-based operating systems dating back to Windows 2000. It is a set of rules that provides for centralized management and configuration of the operating system, user configurations, and applications in an Active Directory environment. The result of a Group Policy is to control what users are allowed to do on the system. From a security standpoint, Group Policy can be used to restrict activities that could pose possible security risks, limit access to certain folders, and disable the ability for users to download executable files, thus protecting the system from one avenue through which malware can attack. The Windows 10 operating system has several thousand Group Policy settings, including settings for User Experience Virtualization, Windows Update for Business, and Microsoft’s Edge browser.

Depending on the Windows OS version, the security settings include several important areas, such as Account Policies, Local Policies, Windows Defender Firewall with Advanced Security, Public Key Policies, Application Control Policies, and Advanced Audit Policy Configuration.


EXAM TIP    On Windows Server systems, the Group Policy Management Console (GPMC) provides a method to manage all aspects of Group Policy for an entire organization, and is in fact the primary access point to Group Policy. The GPMC provides the capability to perform functions such as importing and exporting Group Policy Objects (GPOs), copying and modifying GPOs, and backing up and restoring GPOs.
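Where the GroupPolicy PowerShell module is installed (it ships with the GPMC), routine GPO maintenance can be scripted. A brief sketch, with the backup path and GPO name as hypothetical examples:

# Back up every GPO in the domain
Backup-GPO -All -Path 'C:\GPOBackups'

# Restore a single GPO from that backup set
Restore-GPO -Name 'Workstation Hardening' -Path 'C:\GPOBackups'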

Command Shell Restrictions

Restricting the ability of users to perform certain functions can help ensure that they don’t deliberately or inadvertently cause a breach in system security. This is especially true for operating systems that are more complex and provide greater opportunities for users to make a mistake. One very simple example of restrictions placed on users is those associated with files in Unix-based operating systems. Users can be restricted so that they can only perform certain operations on files, thus preventing them from modifying or deleting files that they should not be tampering with. A more robust mechanism used to restrict the activities of users is to place them in a restricted command shell.

A command shell is nothing more than an interface between the user and the operating system providing access to the resources of the kernel. A command-line shell provides a command-line interface to the operating system, requiring users to type the commands they want to execute. A graphical shell will provide a graphical user interface for users to interact with the system. Common Unix command-line shells include the Bourne shell (sh), Bourne-Again shell (bash), C shell (csh), and Korn shell (ksh).

A restricted command shell will have a more limited functionality than a regular command-line shell. For example, the restricted shell might prevent users from running commands with absolute pathnames, keep them from changing environment variables, and not allow them to redirect output. If the bash shell is started with the name rbash or if it is supplied with the --restricted or -r option when run, the shell will become restricted—specifically restricting the user’s ability to set or reset certain path and environment variables, to redirect output using “>” and similar operators, to specify command names containing slashes, and to supply filenames with slashes to various commands.
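Windows offers an analogous mechanism to restricted Unix shells: a constrained PowerShell session configuration (the basis of Just Enough Administration), which exposes only an approved set of cmdlets. A minimal sketch, with the endpoint name and visible cmdlets chosen purely for illustration:

# Describe a restricted endpoint that exposes only two cmdlets
New-PSSessionConfigurationFile -Path .\Restricted.pssc `
    -SessionType RestrictedRemoteServer `
    -VisibleCmdlets 'Get-Process', 'Restart-Service'

# Register the endpoint; users connecting to it can run nothing else
Register-PSSessionConfiguration -Name 'RestrictedOps' -Path .\Restricted.pssc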

Patch Management

Many of the fundamentals of patch management were discussed earlier in this chapter. What we haven’t quite looked at yet are the methods of patch deployment. This section covers manual and automated methods of deploying patches to an organization’s infrastructure.

Manual

Sacrificing speed for control, organizations sometimes manually deploy patches to their host devices. Manual patching benefits us in a few different ways:

•   It places greater emphasis on patch testing in quality assurance labs or virtual machines.

•   Patches can be staggered to individual groups or departments as opposed to wide-scale rollouts, which helps contain any issues.

•   Rollbacks from failed patch deployments are easier as a result of staggered rollouts. This speeds up the recovery from issues.

The downside to manual deployment methods is the increased administrative effort involved in manual approval processes.

Automated

Manual patching is fine for smaller environments, but it doesn’t scale to large ones. Imagine manually approving patches for thousands or tens of thousands of devices. Automated patching provides a centralized solution in which local or cloud-based servers automatically deploy patches to devices. As you would expect, automated patching is considerably faster than the manual approach.

Frequently used automated patching solutions include Microsoft System Center Configuration Manager and Windows Server Update Services (WSUS). WSUS is popular, effective, and free, since it is an included role in most Windows Server operating systems. Organizations set up a local WSUS server—or a parent-child series of WSUS servers at headquarters and various branch offices—and then configure the Windows devices to connect to the WSUS server via Group Policy configurations. For non-domain-joined devices, consider using Microsoft Intune for a cloud-based solution that offers over-the-Internet patching and a variety of other exciting Mobile Device Management features.
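Under the hood, the WSUS-related Group Policy settings populate well-known registry values on each client. The following sketch shows the equivalent manual configuration (the server URL is hypothetical, and in a domain you would set these via GPO rather than by hand):

# Create the Windows Update policy keys if they don't exist
$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
New-Item -Path "$wu\AU" -Force | Out-Null

# Point the Windows Update agent at the internal WSUS server
Set-ItemProperty -Path $wu -Name WUServer -Value 'http://wsus.corp.example:8530'
Set-ItemProperty -Path $wu -Name WUStatusServer -Value 'http://wsus.corp.example:8530'
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1 -Type DWord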

The downside to automated patching stems from the fact that you’re simultaneously increasing both the number of systems receiving patches and the speed at which they receive them. As a result, we may not notice a problem with a patch until all systems have already received it, which leads to a nasty rollback process afterward.

Scripting and Replication   Scripting is becoming increasingly common for automating administrative tasks. It combines the speed benefits of automation with some of the control benefits of manual patching. Scripting also gives the ability to automate tasks in a way that a centralized patching solution could not achieve on its own. That is because we design the administrative code ourselves as opposed to relying solely on the third-party tool’s feature set.
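As a hedged sketch of the idea, the following PowerShell pushes a single downloaded update package (.msu) to one staggered rollout group; the host list, share path, and package name are all hypothetical:

# Read the first rollout group (staggered deployment)
$hosts = Get-Content .\patch-group-1.txt

foreach ($h in $hosts) {
    # Stage the package locally to avoid remoting double-hop issues
    Copy-Item '\\fileserver\patches\windows10-kb-example.msu' "\\$h\c$\Temp\"

    # Install silently; reboots are handled in a separate maintenance window
    Invoke-Command -ComputerName $h -ScriptBlock {
        Start-Process -Wait -FilePath wusa.exe `
            -ArgumentList 'C:\Temp\windows10-kb-example.msu', '/quiet', '/norestart'
    }
}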


NOTE    Microsoft PowerShell scripting has matured to a point where Windows Servers no longer need GUIs. Not to mention, Linux has a loyal and longstanding scripting community that has developed innumerable scripts over the decades for every administrative task imaginable, including the deployment of patches.

Regardless of the nature of automated patching or scripting, if you use multiple patching servers to source your patches, be sure the servers converge their patches through an effective replication topology. Patching is too urgent a security control to delay through lack of server synchronization.

Configuring Dedicated Interfaces

Certain hosts, such as a server or a technician’s computer, are likely to have multiple network interface cards. One interface is likely provisioned for everyday LAN communications like Internet, e-mail, instant messaging, and the like. Meanwhile, the other interface is used to isolate the critical, behind-the-scenes management and monitoring traffic from the rest of the network. This second interface is referred to as a dedicated interface since it is dedicated to several key administrative functions. The details of these functions will be outlined throughout the next few topics, including out-of-band management, ACLs, and management and data interfaces.

Out-of-Band Management

An out-of-band interface is an example of a dedicated NIC interface through which network traffic is isolated from both the LAN and Internet channels. This is because the out-of-band NIC is designed to carry critical (and sometimes sensitive) administrative traffic that needs a dedicated link for maximum performance, reliability, and security. Shown here are some features of out-of-band management:

•   Reboot hosts.

•   Turn on hosts via Wake on LAN.

•   Install an OS or reimage a host.

•   Mount optical media.

•   Access a host’s firmware.

•   Monitor the host and network status.

There’s not much point to out-of-band management if it is bottlenecked by other areas of the network; therefore, be sure to provide it with adequate performance and reliability via quality-of-service policies. Ensure that traffic is sufficiently isolated from the regular network through subnetting or virtual local area network (VLAN) configurations. Also, when purchasing host devices, consider those with motherboards and NICs that have native support for out-of-band management to enhance your administrative flexibility.

ACLs

The exact context of access control lists (ACLs) can vary, whether discussing things like file permissions or rules on a router, switch, or firewall. From a file system context, an ACL is a list of privileges that a subject (user) has to an object (resource). From a networking perspective, ACLs are a list of rules regarding traffic flows through an interface based on certain IP addresses and port numbers.


NOTE    A well-known flaw with network ACLs is the relative ease of circumvention through IP spoofing. However, all is not lost. Hosts may be able to examine and, eventually, drop the traffic sent by a suspected spoofing device. This can be achieved through two different techniques: time-to-live (TTL) and IP identification (IPID) probes. Whenever a host sends IP traffic, the IP packet header contains values for the TTL and IPID fields. Careful examination and solicitation of the TTL and IPID values sent by both the suspected spoofing device and the victim device will reveal differences between these fields, thus exposing the spoofing device.

For dedicated interfaces, ACLs will need to be carefully configured to ensure that only approved traffic flows to and from the interface, to the exclusion of all others. This will help secure the source and destination nodes in the network communications.

Management Interface

Management interfaces are designed to remotely manage devices through a dedicated physical port on a router, switch, or firewall, or a port logically defined via a switch’s VLAN. In contrast to out-of-band management, management interfaces connect to an internal in-band network to facilitate monitoring and management of devices via a normal communications channel. Typically, these management interfaces are controlled through a command-line interface (CLI); therefore, you’ll likely use a terminal emulation protocol such as Telnet (insecure) or SSH (secure). It is common practice to use SSH for all management interface communications given its reliance on cryptographic security, including public and private keys, session keys, certificates, and hashing.


TIP    A great tool for Telnet and SSH terminal emulation is PuTTY, which supports various protocols, including rlogin, SCP, SSH, and Telnet.

Data Interface

Unlike the out-of-band and in-band management topics just discussed, data interfaces carry everyday network traffic. A traditional switch forwards this traffic based on Ethernet frame headers, whereas a router operates on IP packet headers. However, let’s not mistake everyday hosts and network traffic as being unimportant. A bevy of attack vectors exists on switches, and we’re going to focus our security considerations on switch ports since those are what host devices typically connect to. Here are some examples of security techniques that can be implemented on switch ports:

•   Port security   Permits traffic to switch ports from predefined MAC addresses only. This guards against unauthorized devices but is easily circumvented by MAC spoofing.

•   DHCP snooping   Restricts DHCP traffic to trusted switch ports only. This guards against rogue DHCP servers.

•   Dynamic ARP inspection   Drops ARP packets that arrive on untrusted switch ports with incorrect IP-to-MAC mappings. Guards against ARP spoofing.

•   IP source guard   Drops IP packets if the packet’s source IP doesn’t match the switch’s IP-to-MAC bindings. Guards against IP spoofing.

External I/O Restrictions

To the untrained eye, it appears that organizations are unnecessarily paranoid about workers bringing external devices or peripherals to work. After all, how much harm can a tiny little flash drive, smartphone, or Bluetooth headset possibly cause? Answer: a lot. Although these devices can potentially carry many threats, they all originate from two primary directions:

•   Ingress   Bad stuff coming in (malware, password crackers, sniffers, keyloggers)

•   Egress   Good stuff going out (company, medical, and personal data)

Security professionals need to be fully aware of the different external devices that people may bring in, plus the risk factors and threats presented by each. In addition, we may need to look into outright preventing such devices from entering the workplace, and denying the devices that slip through from initializing upon attachment to a host device. This section takes a look at a variety of external devices, including USB, wireless, and audio and video components, plus the mitigations for the threats they introduce.

USB

Since 1996, the Universal Serial Bus (USB) data transfer and power capabilities have permitted the connectivity of virtually every device imaginable to a computer. The ubiquity of USB gives it the rare distinction of being not only the most popular standard for external device connections but also the source of the most device-based threats. Plugging in external devices containing storage such as flash drives, external hard drives, flash cards, smartphones, and tablets makes it easy for both innocent and not-so-innocent users to install malicious code onto a host. This includes malware, keyloggers, password crackers, and packet sniffers. Other USB attacks may seek to steal sensitive materials from the organization. Since most of these devices are small, they can easily be concealed in a pocket, backpack, purse, or box—and thus escape the notice of security staff or surveillance cameras.

To combat the USB threats, organizations often use technological means to block or strictly limit the use of USB devices. Figure 6-2 shows an example of restricting removable devices via Windows Group Policy.


Figure 6-2    Group Policy restricting removable devices

USB Restrictions

There are several ways to disable USB devices (a registry sketch follows this list):

•   Disable USB storage in the registry via careful modification of the following key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR.

•   Set the Group Policy option “Prevent installation of removable devices” to Enabled.

•   Set the Group Policy option “All Removable Storage classes: Deny all access” to Enabled.

•   Disable USB ports in Device Manager by right-clicking them and selecting Disable Device.

•   Disable USB ports via the BIOS setup screen.

•   Disable USB ports on the top or front of the computer case by detaching the internal cable from the motherboard’s USB header.

•   Uninstall USB storage drivers from Device Manager.
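As a minimal sketch of the first method, the USB mass-storage driver can be disabled from an elevated PowerShell prompt by setting its Start value to 4 (setting it back to 3 re-enables it):

# Disable the USB storage driver (4 = disabled, 3 = load on demand)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\USBSTOR' -Name Start -Value 4 -Type DWord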

Wireless

Like USB, wireless technologies bring convenience, practicality, and numerous attack vectors to an organization. Unlike their cabled brethren, wireless devices are susceptible to various over-the-air communication attacks, which may result in malware infection, device hijacking, denial-of-service (DoS) attacks, data leakage, and unauthorized network access.

The nature of the threats can vary based on whether the device uses radio frequencies—as with Bluetooth, near field communication (NFC), radio-frequency identification (RFID), and 802.11 Wi-Fi—or the infrared signals used by the Infrared Data Association (IrDA) protocols. This section will cover the threats introduced by these wireless technologies and their mitigations.

Bluetooth   Bluetooth is a wireless technology standard designed for exchanging information between devices such as mice, keyboards, headsets, smartphones, smart watches, and gaming controllers—at relatively short distances and slow speeds. With various Bluetooth versions out there, devices may range widely in terms of signal range and bandwidth speeds. You may see ranges between 10 and 1,000 feet with speeds between 768 Kbps and 24 Mbps. Keep in mind that 1,000-foot Bluetooth distances are rare and typically achieved by Bluetooth hackers using amplifiers to perform their exploits from well out of sight.

Like any wireless technology, Bluetooth is subject to various attack vectors:

•   Bluesmacking   DoS attack against Bluetooth devices

•   Bluejacking   Delivery of unsolicited messages over Bluetooth to devices in order to install contact information on the victim’s device

•   Bluesnarfing   Theft of information from a Bluetooth device

•   Bluesniffing   Seeking out Bluetooth devices

•   Bluebugging   Remotely controlling other Bluetooth devices

•   Blueprinting   Collecting device information such as vendor, model, and firmware info

Despite the litany of attack vectors, several countermeasures exist to reduce the threats against Bluetooth devices. One of the best things you can do is keep devices in a non-discoverable mode to minimize their visibility. Another idea is to change the default PIN code used when pairing Bluetooth devices. Also, disregard any pairing requests from unauthorized devices. If one is available, you should also consider installing a Bluetooth firewall application on your device. Enabling Bluetooth encryption between your device and the computer will help prevent eavesdropping. If possible, ensure the device has antimalware software to guard against various Bluetooth hacking tools.

NFC   Near field communication (NFC) is a group of communication protocols that permit devices such as smartphones to communicate when they are within about 1.6 inches (4 centimeters) of each other. If you’re ever at a Starbucks drive-thru, you’ll frequently see customers paying for products by holding up their smartphone to the NFC payment reader. NFC payments are catching on due to their convenience, versatility, and several security enhancements over credit cards. Security benefits include an extremely small signal range (which makes interception more difficult), PIN/password protection, remote wiping of a lost smartphone to guard against credit card number theft, contactless or “bump” payment, keeping the credit card number invisible to outsiders, and no reliance on a credit card magnetic strip. Plus, the owner of the NFC card reader does not have access to the customer’s credit card information.

Although NFC is generally considered to be more secure than typical credit card payments, there are some downsides, including the following:

•   Cost prohibitive, particularly for small businesses, which reduces their competitive edge

•   Lack of support from many businesses

•   Hidden security vulnerabilities subjecting NFC to radio frequency interception and DoS attacks

There are various mitigations for NFC, including the following:

•   Encrypting the channel between the NFC device and the payment machine

•   Implementing data validation controls to guard against integrity-based attacks

•   End-user awareness training for NFC risks and best practices

•   Disabling NFC permanently, or at least when not in use

•   Only tapping tags that are physically secured, such as being located behind glass

•   Use of NFC-supported software with password protection

IrDA   The Infrared Data Association (IrDA) created a set of protocols permitting communications between devices using infrared wireless signals. Unlike most wireless communications, which use radio waves, infrared is a near-visible form of light. IrDA is generally considered to be accurate, relatively secure (primarily due to its line-of-sight requirements), and resilient to interference, and it can serve as a limited alternative to Bluetooth/Wi-Fi in environments that have challenges with radio frequency devices or radio interference.

Although sometimes used as a communications method between laptops and printers, IrDA doesn’t see much use due to its limited speed (16 Mbps), range (2 meters), and line-of-sight requirements. IrDA doesn’t implement authentication, authorization, or cryptographic support. Plus, it is possible (although not easy) to eavesdrop on IrDA communication. The best mitigation is to be mindful of device position in relation to other untrusted users or devices to prevent eavesdropping, or switch to another wireless technology such as Bluetooth or Wi-Fi if possible.

802.11   Dating back to 1997, the 802.11 specification has been managed by the Institute of Electrical and Electronics Engineers (IEEE), which helps globally standardize wireless local area network communications. The frequencies used in the various 802.11 standards are commonly 2.4 GHz and 5 GHz; meanwhile, a newer 60 GHz frequency band has emerged. Although 802.11 forms the foundation for Wi-Fi technologies, they are not interchangeable terms. Given the large scope of 802.11 topics and standards, we will flesh these out over the next several sections.

Wireless Access Point (WAP)   Wireless access points (WAPs) are devices that connect a wireless network to a wired network—which creates a type of wireless network called “infrastructure mode.” If you build a wireless network without a WAP, this is known as an ad-hoc network. In most cases, wireless access points are incorporated into wireless broadband routers.


EXAM TIP    More on some of these topics later, but be sure to implement wireless encryption such as WPA/WPA2 and MAC filtering, update the firmware, change the default username/password used to manage the WAP, and rename the SSID and disable its broadcast.

Hotspots   Hotspots are wireless networks that are available for public use. These are frequently found at bookstores, coffee shops, hotels, and airports. They are notorious for having little to no wireless security. It is recommended that you establish a VPN connection to secure yourself on hotspots.

SSID   The service set identifier (SSID) is an identifier of up to 32 characters used to name a wireless network. This name is broadcast via a special type of frame called a “beacon frame” that announces the presence of the wireless network. The SSID should be renamed, and its broadcast disabled, to decrease the visibility of the wireless network.

802.11a   The first revision to the 802.11 standard, 802.11a was released in 1999. It uses the orthogonal frequency-division multiplexing (OFDM) modulation method via the 5 GHz frequency with a maximum speed of 54 Mbps. This standard didn’t see a lot of action due to limited indoor range.

802.11b   Also released in 1999, 802.11b uses the direct sequence spread spectrum (DSSS) method via the 2.4 GHz frequency band at a top speed of 11 Mbps. Despite its slower speeds, it has excellent indoor range. This standard became the baseline for which technologies would eventually be called “Wi-Fi certified.”

802.11g   Released in 2003, 802.11g uses the 2.4 GHz band of 802.11b, but has the 54 Mbps speed of 802.11a. Like 802.11a, it uses the OFDM modulation technique. Given its excellent indoor range, this was a huge hit for many years and is still in use today.

802.11i   Released in 2004, 802.11i is a security standard calling for wireless security networks to incorporate the security recommendations included in what is now known as Wi-Fi Protected Access II (WPA2). More to follow on WPA2 later in this section.

802.11n   Although sold on the market since the mid-2000s as a draft standard, 802.11n was formally released in 2009 and supports OFDM via both the 2.4 GHz and 5 GHz frequencies. Having support for both frequencies is good because if the 2.4 GHz band has too much interference, we can switch to the less-crowded 5 GHz band. This standard’s speed can scale up to 600 Mbps if all four multiple-input, multiple-output (MIMO) spatial streams are in use. Plus, it has nearly double the indoor range of the previous standards.

802.11ac   Released in 2013, 802.11ac uses the 5 GHz band and the OFDM modulation technique. Its reliance on 5 GHz helps it to avoid interference in the “chatty” 2.4 GHz band. It supports some of the fastest speeds on the market at 3+ Gbps and has good indoor range.

WEP   Wired Equivalent Privacy (WEP) was the original pre-shared key security method for 802.11 networks in the late 1990s and early 2000s. Prior to the pre-shared key method, wireless networks used open system authentication, in which no password was needed to connect. As its name suggests, WEP’s goal was to give wireless traffic privacy “equivalent” to that of an unencrypted wired connection. WEP uses RC4, a fairly strong and fast symmetric cipher; however, WEP manages RC4 poorly by forcing it to use static encryption keys. In addition, WEP uses computationally small 24-bit initialization vectors (IVs), which are input values used to add more randomization to encrypted data. As a result, WEP cracking can easily be performed by capturing roughly 50,000 IVs to recover the WEP key. WEP should be avoided on wireless networks unless no alternatives exist.

WPA   Wi-Fi Protected Access (WPA) was an interim upgrade over WEP in that it did away with static RC4 encryption keys, in addition to upgrading the IVs to 48 bits. Although WPA still uses RC4, it adds the Temporal Key Integrity Protocol (TKIP) to provide frequently changing encryption keys, message integrity, and larger IVs. Despite its vast improvements over WEP, WPA can be exploited via de-authentication attacks, offline attacks, or brute-force attacks.

WPA2   Wi-Fi Protected Access II (WPA2) is the complete implementation of the 802.11i wireless security recommendations. Unlike WPA, it replaced RC4 with the globally renowned Advanced Encryption Standard (AES) cipher, used with the Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP). As with WPA, WPA2 supports either Personal or Enterprise mode implementations. WPA2 Personal mode uses pre-shared keys, which are shared across all devices in the entire wireless LAN, whereas WPA2 Enterprise mode uses the Extensible Authentication Protocol (EAP) with a Remote Authentication Dial-in User Service (RADIUS) server for centralized client authentication, including Kerberos, token cards, and so on. The Enterprise mode method is designed for larger organizations with AAA (authentication, authorization, and accounting) servers managing the wireless network, and Personal mode is more common with small office/home office (SOHO) environments in which the WAP controls the wireless security for the network.

MAC Filter   This is a simple feature on WAPs where we whitelist the “good” wireless MAC addresses or blacklist the “bad” MAC addresses on the wireless network. Although this provides a basic level of protection, hackers can fairly easily circumvent it with MAC spoofing. It is best to supplement this security feature with others.

RFID   Radio frequency identification (RFID) uses antennas, radio frequencies, and chips (tags) to keep track of an object or person’s location. RFID has many applications, including inventory tracking, access control, asset tracking (such as laptops and smartphones), pet tracking, and people tracking (such as patients in hospitals and inmates in jails).

RFID uses a scanning antenna (also known as the reader or interrogator) and a transponder (RFID tag) to store information. For example, a warehouse may require RFID tags to be placed on staff smartphones, as per a mobile device security policy. A warehouse manager can then use an RFID scanner device to remotely monitor the smartphones.


EXAM TIP    There are two types of RFID tags: active and passive. Active tags are more expensive, yet their built-in battery enables them to broadcast their signal potentially hundreds of meters. Passive tags are cheaper and can only get their power via the nearby “interrogation” of a reader device. This limits a passive tag’s broadcasting capability to, typically, just a few feet.

RFID does introduce some interference and eavesdropping risks because other readers are capable of picking up a tag’s transmission. Plus, a tag can only broadcast so far, which limits its usefulness against device theft. Some mitigations for RFID security threats include blocker tags, which seek to DoS unauthorized readers with erroneous tags, and kill switches, which disable tags once they are no longer required. Also, RFID encryption and authentication are supported on some newer RFID devices.

Drive Mounting

Before users can access any kind of storage device, it must first be mounted. Drive mounting is the process of an OS making files/folders available to users through its file system. File systems display mounted drives as either a disk icon (Windows or macOS) or a directory (Unix/Linux). Although internal storage devices are handled by IT, issues arise from end users freely connecting external hard drives and flash drives to their machines. Since OSs typically “auto-play” connected storage devices, any malware on the drive can automatically run—while simultaneously extracting sensitive data from the company.

The easiest countermeasure would be to prevent connectivity of external storage devices entirely; however, that isn’t always an option. Organizations may choose to limit such connectivity to company-owned devices that are already configured with drive encryption and password protection. Another consideration would be to disable auto-play of removable devices so that any malicious code on the drive cannot run automatically. Also, Windows Group Policies can be configured to limit the permissions users have to the drive.

Drive Mapping

Drive mappings treat remote storage like local storage. Rather than having users manually browse to a remote computer to access its storage and content, a user simply clicks the local drive letter that is “mapped” to that remote storage. This makes accessing the remote drive as easy as accessing a local drive. Drive mappings are meant to be convenient and productive, yet such convenience can be a double-edged sword. Some users mistakenly believe the mapped drive is local; therefore, they may store inappropriate or personal content on it. Imagine their shock when other team members come across such content. End-user training can help prevent this from happening. Also, such convenient access to remote storage is extended to attackers. If attackers can just get to the local computer, they can easily catapult themselves to the mapped drive and plunder its content.
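Drive mappings are usually created through File Explorer or net use, but they can also be scripted. A small sketch from an interactive PowerShell prompt, with the drive letter and share path as hypothetical examples:

# Map S: to a remote share and make the mapping survive reboots
New-PSDrive -Name 'S' -PSProvider FileSystem -Root '\\fileserver\team' -Persist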

Mitigations for mapped drives are few but worthwhile. Users need to be reminded to always lock their computer when they step away as well as employ strong password practices. The key is to prevent the attacker from accessing the machine in the first place.

Webcams and Recording Microphones

You’ve often been told to “smile for the camera.” That is now more applicable than ever considering that cameras are built into today’s smartphones, tablets, laptops, portable and nonportable gaming consoles, and IoT devices. As you might expect, some privacy issues have arisen from this:

•   Applications often request permissions to a device’s camera/microphone. Once granted, this subjects nearby users to unexpected (and often perfectly legal) audio/video capture, with content potentially delivered to third parties.

•   Device owners can invade the privacy of others by secretly capturing audio/video of them.

•   Device owners may take pictures of a company’s sensitive material.

•   Malware may infect the device and perform surveillance on the user.


EXAM TIP    To counter the malware, be sure to use antivirus software, follow Internet and e-mail security best practices to minimize malware acquisition, and don’t “root” or “jailbreak” your device. In the latter case, you’d be weakening the overall security of the device, which would further subject it to malware infections. As for general applications that require permissions to your camera/microphone, you may be able to suppress some permission requirements through device-hardening techniques. Also, apps may be available on the app store that can perform these hardening techniques for you.

Mobile device management tools typically have configuration policies for disabling cameras and microphones. If the organization/users can stomach this outcome, it is the best solution to implement. Also, be sure to include all camera and microphone security requirements in a security policy to make it clear to users what devices are allowed and not allowed—and what the acceptable uses are.

SD Port

Secure Digital (SD) ports support the connection of small portable memory flash cards frequently attached to laptops, mobile devices, and some desktops. As with USB flash media, SD card connections are immediately rewarded with a mounted drive letter. If the OS is set to auto-play the flash card, malware may be able to run immediately—and potentially extract confidential materials from the system. Mitigations for SD port risks include the following:

•   Implement removable device Group Policies to suppress SD card connections.

•   Disable auto-play to prevent malicious code from automatically running.

•   Prohibit the use of SD card media via a security policy.

•   Prohibit the connection of external SD readers on PCs.

HDMI and Audio Output

High-Definition Multimedia Interface (HDMI) has blossomed over the past 15 years due to its ability to simultaneously support high-definition video (and now 4K) resolutions and surround sound audio systems—all on one cable. With desktops, laptops, monitors, and TVs frequently having HDMI ports, hackers have found a way to use them against us. Elite hackers are able to use available HDMI ports to hack into monitors to spy on users, steal data, and even manipulate what users see onscreen. Plus, with HDMI supporting Ethernet, attackers may use HDMI-compromised systems to attack other systems on Ethernet networks. The most practical solution at this time is to prohibit access to HDMI ports, or use DVI or DisplayPort technology as a workaround.

File and Disk Encryption

It’s tempting to think that file permissions should be enough to secure files. After all, if someone doesn’t have permission to a file, why would it also need to be encrypted? Answer: permissions aren’t persistent. They effectively disappear when files or drives are moved to another machine, when attackers boot up machines with an alternate OS or boot media, and when attackers launch horizontal privilege escalation attacks.

File encryption typically addresses these concerns by encrypting files with a symmetric encryption key and then protecting that symmetric key with the user’s asymmetric key pair. With a few exceptions, only the user holding the corresponding private key can decrypt the file. Those exceptions are when an encrypted file is deliberately shared with another user and when a data recovery agent (DRA)—usually the administrator or root account—accesses the file. If anyone else attempts to open the encrypted file, the attempt simply fails with an error.


NOTE    A popular file encryption tool is Microsoft’s Encrypting File System (EFS). Unless proactive steps are taken, file encryption carries some negatives. For example, access to an encrypted file is prevented if the private key is corrupted or lost, performance may decrease slightly, and sharing encrypted files can occasionally be tedious.
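As a brief illustration of EFS from the command line (the folder path is hypothetical):

# Encrypt a folder so files created in it are encrypted as well
cipher /e C:\Users\alice\Documents\Sensitive

# Display the encryption status of the folder's contents
cipher /c C:\Users\alice\Documents\Sensitive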

Disk encryption goes deeper by encrypting an entire internal drive, volume, or external drive. This simplifies the process of bulk file/folder encryption; plus, the drive may gain protection from Trusted Platform Module (TPM) chips, secure and measured booting methods, and UEFI firmware. As a result, disk encryption provides stronger protection against online and offline attacks than file encryption alone. As with file encryption, loss of private keys or recovery keys can complicate access to the drive, performance will drop slightly, and moving drives between systems requires some added steps. Also, disk encryption may be vulnerable to what’s known as a cold boot attack, in which the attacker obtains encryption keys from RAM because the RAM was not cleared in a timely manner during a recent reboot or shutdown. Additional steps to ensure memory is cleared, plus increased physical security to prevent access to the machine, help thwart such attacks.
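On Windows, BitLocker is the built-in disk encryption feature. A hedged sketch of checking status and enabling TPM-backed encryption on the OS drive (requires elevation and a TPM; the drive letter is illustrative):

# Show the current BitLocker status for the C: volume
manage-bde -status C:

# Encrypt C: with XTS-AES-256, unlocking automatically via the TPM
Enable-BitLocker -MountPoint 'C:' -EncryptionMethod XtsAes256 -TpmProtector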

Careful planning and implementation of file and drive encryption should help mitigate all these challenges.


NOTE    For additional information about file and disk encryption, refer to Chapter 15.

Firmware Updates

Firmware is a combination of low-level instructions and the nonvolatile memory it’s stored on such as electrically erasable programmable read-only memory (EEPROM). Your computer’s BIOS is a good example of a firmware chip. When devices are powered on, firmware guides the device’s startup, self-test, and diagnostic sequences before handing off control to an operating system. In addition to computer BIOSs, other devices have firmware, including network equipment, mobile devices, gaming consoles, smart TVs, appliances, IoT devices, and more.

Although organizations are generally keen on installing OS and application patches, firmware updates are often neglected. Whereas patches address software issues, firmware updates resolve issues in the low-level code that drives the hardware. Firmware can have vulnerabilities like any other software, yet attacks against firmware can have ramifications for the hardware itself, such as permanently disabling (“bricking”) it—that is, the firmware is irrecoverably damaged and the device is no longer usable.

Although firmware updates are critical to an organization’s overall update strategy, you should first do some research with the firmware vendor before you update. Installing updated firmware carries a slight risk of permanently “bricking” your firmware and, subsequently, the device due to a failed update installation. As a result, you must be sure the update is needed and that you’re comfortable with the installation process. Some firmware updates are optional due to negligible benefits, or limited scope and applicability to customers; meanwhile, firmware updates marked as “critical” or “recommended” should be mandatory. Updates may resolve bugs, add new functions or security features, or increase resilience against various exploits, including rootkits. Rootkits are particularly dangerous since they can take over your machine while concealing themselves in your firmware. Plus, many diagnostic tools wouldn’t think to look in your firmware for threats.

You will probably have to manually download the firmware from the vendor; however, some devices can automate this process. Be sure to review the vendor’s website for instructions on backing up the current firmware, if possible, and recovering it in the event of a failed firmware update process. If no such disaster recovery options are available, you’ll want to be especially sure such a firmware upgrade is truly necessary before proceeding.

Boot Loader Protections

Boot loader protections provide assurances that only a trusted boot loader—the program that loads an OS—is permitted to run during a computer’s startup routine. This is important because many of the security controls, such as authentication, permissions, antimalware, and host-based firewalls, are only operational after an OS finishes loading. Boot loaders aren’t protected by those security controls. Looking elsewhere, Basic Input/Output Systems (BIOSs) offer limited security benefits; plus, many security practitioners are not accustomed to the security features offered by the more recent Unified Extensible Firmware Interface (UEFI) and Trusted Platform Module (TPM) chips.

Today’s hackers are attacking OS launches with rootkits, bootkits, alternate OSs, and unapproved storage devices; therefore, we need to provide assurances that only our boot loaders are approved for execution. This section takes a look at various boot loader protections offered by Secure Boot, Measured Launch, Integrity Measurement Architecture, BIOS/UEFI, attestation services, and TPM.

Secure Boot

For the better part of the past 40 years, the startup of a computer has been controlled by the BIOS firmware chip. Besides the easily circumvented BIOS password, there’s little else the BIOS offers in the way of startup security. After the BIOS completes its internal startup routines, it’ll blindly load whatever boot loader it encounters first. To prevent unauthorized boot loaders from starting, we should consider implementing a fairly new security feature called Secure Boot. Secure Boot is made available through UEFI firmware that will only load trusted, digitally signed boot files, as per the original equipment manufacturer (OEM).

Secure Boot Process

Here are the basic steps involved in the Secure Boot process:

1.   The computer is turned on.

2.   The firmware’s digital signature is validated to assure the host no rootkits are present.

3.   The firmware verifies that the boot loader on the storage device has a valid, trusted, and tamper-free digital signature.

4.   The firmware starts the trusted boot loader.

Since a hacker’s OS or malware shouldn’t have an approved digital signature, Secure Boot will not load the code. These signatures are stored in firmware memory and must be updated through an OEM-supported database if you want to add unsupported boot/OS code of your own at launch. Windows 8 and later, as well as various Linux distributions, support Secure Boot. If your UEFI computer also offers a legacy BIOS compatibility mode, make sure Secure Boot is enabled in the firmware setup screen if you want to use it.
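On Windows, you can quickly verify whether Secure Boot is active from an elevated PowerShell prompt:

# Returns True when Secure Boot is enabled; errors out on legacy BIOS systems
Confirm-SecureBootUEFI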

Measured Launch

With malware increasingly infecting devices early in the boot cycle, we require a means to verify the trustworthiness of the boot environment. To the rescue comes Measured Launch (also known as Measured Boot), in which TPM chips measure the cryptographic integrity of several boot components through the use of digital signatures. These TPM-driven Measured Launches are implemented by specific technologies such as Intel’s Trusted Execution Technology (TXT) and the specifications from the Trusted Computing Group (TCG). Upon startup, the OS performs a chain of measurements on each boot component’s digital signature and then compares the measurements to those stored in the TPM chip in order to validate the boot process and prevent malware infections. These measurements may include the host’s firmware, OS boot files, certain applications, registry configurations, and even drivers. Upon validation of the boot components, the measurements are stored in the TPM, where they serve as the baseline values for subsequent bootups. Should a measurement fail, the OS will not load due to its untrusted status. Like Secure Boot, Windows 8 and later and various Linux distributions support Measured Launch. Just be mindful that these early boot-cycle tests will slow down the bootup a bit.


EXAM TIP    Whereas Secure Boot focuses on allowing only authentic OSs to run, Measured Launch scrutinizes the integrity of all boot components. Measured Launch goes even deeper to provide assurances of a trusted OS platform.

Integrity Measurement Architecture

Similar to Measured Launch, Integrity Measurement Architecture (IMA) is an open source method frequently used on Linux systems. It helps provide assurances that the Linux OS has a trusted boot environment. IMA works with the Linux kernel to validate OS file integrity prior to loading. After each critical boot file is hashed, its hash measurement is compared to the measurement stored on the TPM chip. If the two don’t match, the file is considered untrusted and does not load—thus the OS will not load.

BIOS/UEFI

The BIOS is a crucial firmware chip stored on a device’s motherboard that performs the hardware initialization and the subsequent OS startup. The BIOS code is stored on a special ROM chip that contains a small amount of code that can be updated (flashed) whenever the vendor releases an update. The two most important things the BIOS does are performing the Power-On Self-Test (POST) and launching an operating system. In just a second or two, the POST will check the CPU, BIOS, RAM, motherboard, and other hardware to ensure they are functional. Immediately afterward, the BIOS looks for a boot loader to transfer the startup to an operating system. The BIOS doesn’t have much in terms of security, but here are a few features:

•   User password   The user must supply this password the moment the machine turns on.

•   Supervisor password   The user must supply this password to enter the BIOS setup screen.

•   LoJack   Allows the user to track a lost laptop.

It’s pretty well known at this point that the user and supervisor passwords can easily be erased by pulling the CMOS battery, or by using the CLR_CMOS button or jumper on the device’s motherboard.

Addressing the lack of security features, among many others, UEFI firmware is the heir apparent to the aging BIOS. The UEFI can perform the same functions as the BIOS, in addition to the following:

•   Faster bootup

•   Utilization of a GUID Partition Table (GPT) to access larger hard drives (2TB+) and more partitions

•   Support for Secure Boot and Measured Boot/Launch

•   Support for a mouse via the setup utility menu

•   CPU independence

•   Ability to use more memory

Security professionals will remark about UEFI’s ability to secure the “handoff” between the hardware initialization and the OS startup, whereas BIOS isn’t concerned about this.

Attestation Services

TPM chips provide attestation services to authenticate the identity and integrity of software. Such identification is initially tested by secure OS startup procedures such as Secure Boot and Measured Launch. However, the TPM gets the final say as to the overall trustworthiness of the computing platform. The TPM generates hashes for all critical bootup components, compares the hashes to a list of known hashes, and attests that no tampering has occurred. This information can then be shared with a third party who can independently verify the attested information in a process known as remote attestation. Attestation is also used for verifying that an entity requesting a certificate from a Certificate Authority (CA) is using a private key generated by a valid TPM chip.

TPM

Designed by the Trusted Computing Group (TCG), a Trusted Platform Module (TPM) is a secure chip that contains a cryptoprocessor built into modern computer motherboards for the purpose of performing various security functions relating to certificates, symmetric and asymmetric keys, and hashing. Central to TPMs are the built-in public/private key pair known collectively as the endorsement key (EK). This key is signed by a trusted Certificate Authority. In addition, the TPM has another built-in key known as the storage root key (SRK). This key is used to secure the other keys stored in the TPM.
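On a Windows machine, you can confirm the presence and readiness of a TPM from an elevated PowerShell prompt:

# Summarize TPM state; TpmPresent and TpmReady indicate availability
Get-Tpm | Select-Object TpmPresent, TpmReady, ManufacturerVersion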

TPM Features

Due to the tamper-resistant cryptographic keys built into the TPM, TPMs provide root of trust benefits—in other words, the TPM is the entity on which all other trust is based. Basically, if the TPM says something is trustworthy, who are we to argue? However, TPMs provide specific forms of root of trust—chiefly the following:

•   Root of trust for reporting   Assures entities that the system state is trustworthy

•   Root of trust for storage   Assures entities that secrets remain secret

In essence, TPM chips provide low-level functions that allow more complex features to be supported, including the following:

•   Attestation services

•   Hash computation

•   Random number generation

•   Integrity validation

•   Key generation and management

•   Public key cryptography functions

•   Secure storage of keys

•   Binding storage devices to a particular computer (e.g., Microsoft BitLocker)

•   Sealing the system’s state or configuration to a specific hardware and software configuration to prevent unauthorized changes

Vulnerabilities Associated with Hardware

With all the attention given to software vulnerabilities, attackers often go unnoticed when they exploit hardware vulnerabilities instead. Although far more software vulnerabilities are known and exploited, hardware vulnerabilities are, in some cases, even bigger than software vulnerabilities. Perhaps you heard of the recent and catastrophic CPU vulnerabilities called Meltdown and Spectre? They collectively affected nearly every CPU manufactured in the past two decades! Hardware of various types has vulnerabilities that you should keep an eye out for. Use the following list to guide your efforts:

•   Older PCs, laptops, and mobile devices are less likely to be vendor-supported and are more subject to DoS attacks due to slow performance.

•   Devices without UEFI chips won’t support Secure Boot or Measured Launch features.

•   Devices without TPM chips won’t support Measured Launch or the strongest drive encryption features.

•   Devices might have outdated firmware that the vendor isn’t updating anymore.

•   IoT devices generally have little to no security features configured, or even available.

•   Jailbroken iOS devices or rooted Android devices reduce security while voiding the warranty.

•   Manufacturer backdoors allow the vendor easy access to the device—in many cases without your knowledge.

•   Counterfeit hardware might be sold as a name-brand device unbeknownst to the device owner.

To reduce your risk of hardware vulnerabilities, you should always buy name-brand hardware from trusted distributors. If possible, you should also migrate away from unsupported legacy hardware, which is particularly susceptible to zero-day vulnerabilities. It’s also important that you follow the hardware vendor’s recommendations on proper installation, configuration, and maintenance. Also, don’t forget to update the firmware. Hackers like to target devices with outdated firmware for a reason.

Terminal Services/Application Delivery Services

This topic will be discussed at length in Chapter 13. For a brief summary, Microsoft has renamed Terminal Services as Remote Desktop Services (RDS), which provides desktop and application virtualization services via the Remote Desktop Protocol (RDP). The basic premise is that the client offloads most or all resource responsibilities onto the server, thereby defining the client’s role as a thin client. The degree of resource delegation will vary based on whether the RDS solution is hosting a remote desktop environment (which includes an OS and applications) or a RemoteApp (which includes applications only) for a client’s remote consumption.

RDS solutions have many roles, as summarized here (a client connection example follows the list):

•   Remote Desktop Connection Broker   Manages load balancing across RDS session host servers, in addition to reconnections to virtual desktops

•   Remote Desktop Gateway   Manages authorization to virtual desktops

•   Remote Desktop Licensing   Manages RDS client access licenses (CALs) to permit clients access to the RDS solution

•   Remote Desktop Session Host   Allows the server to host RemoteApp connections

•   Remote Desktop Virtualization Host   Permits users to access RemoteApp and virtual desktops

•   Remote Desktop Web Access   Permits users to access RDS through a web browser or the Windows Start menu
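
While the server-side roles are configured with Windows tooling, the client side is simply an RDP connection. As a hedged sketch, the open source FreeRDP client (2.x syntax shown) can connect a thin client to a Remote Desktop Session Host; the hostname, user, and application name below are placeholders:

    # Connect to a full remote desktop on a hypothetical RDS session host
    xfreerdp /v:rds01.example.com /u:alice /cert:ignore

    # Launch only a published RemoteApp (published names are prefixed with ||)
    xfreerdp /v:rds01.example.com /u:alice /app:"||Notepad"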

Chapter Review

In this chapter, we covered the analysis of scenarios for integrating security controls for host devices to meet security requirements. The first section was on trusted operating systems, which covered the Orange Book, Common Criteria, Evaluation Assurance Levels, Protection Profiles, and a couple of Linux-based security modules called SELinux and SEAndroid. We also talked about the Trusted Solaris OS and its replacement, Solaris Trusted Extensions. We wrapped up this section with coverage of least functionality and how it locks down trusted OSs by providing only the minimal functionality and permissions required for job/task completion.

The next topic for host device security focused on types of endpoint security software. The most fundamental of all types is antimalware software, which covers all malware forms. Although less common, tools such as antivirus and anti-spyware focus more specifically on certain types of malware. Spam filters are needed on e-mail servers and clients to keep organizational spam under control. Patch management is a useful strategy for installing patches across host devices throughout the organization. Devices also need host-based intrusion prevention and intrusion detection systems to stop threats or alert the organization about threats in progress. A fairly new security control called data loss prevention (DLP) is necessary for ensuring sensitive organizational data doesn’t leave the corporate boundary. Critical to any host is the implementation of a host-based firewall such as the Windows Defender Firewall built into Windows 10. Log monitoring helps detect security breaches that are ongoing or have occurred previously. Lastly, we have endpoint detection and response (EDR), which focuses on threat intelligence research to ensure we learn more about threats rather than removing them prematurely.

Host hardening is a large topic in itself, yet all of its components have one thing in common—they improve upon the default settings of the device. Fundamental to host hardening is the implementation of standard operating environments, which is typically achieved with OS disk images. This helps ensure consistency across desktop and server builds. Configuration baselining extends that approach by locking down the settings on desktop and server operating systems and applications. Part of these configuration baselines is the whitelisting and/or blacklisting of applications to ensure only appropriate applications are installed on systems. Group Policy plays a strong role in host hardening, given the thousands of user- and computer-level options available for system lockdown. Command shell restrictions also harden the system by restricting what commands are available to the end user and IT personnel. Patch management appears again in this section, but only to highlight manual versus automated patch deployment methods.

The next several topics touched on requirements for configuring dedicated network interfaces, starting with out-of-band management, which keeps management traffic separate from everyday network traffic. ACLs help to filter packets sent to/from these interfaces. A management interface is an in-band interface for communication with host devices via either a dedicated port or a logical port defined via a VLAN. Data interfaces support communications for everyday host devices, yet still must be locked down with a variety of switch security options. The topic of external I/O restrictions covers USB, a variety of wireless connectivity methods such as Bluetooth, NFC, IrDA, RF, 802.11 (and its various standards and security requirements), and RFID. Storage devices also come in external form, which may result in drive mounting, whereby a user automatically gains access to the drive’s contents. Meanwhile, drive mapping allows a user to map a remote or removable storage device so it appears as a locally connected drive on the user’s computer. Multimedia devices such as webcams, recording mics, and audio outputs also connect to a system externally and come with various security vulnerabilities and mitigations accordingly. SD ports accommodate small flash cards, which bring ingress and egress threats just as USB flash drives do, and HDMI ports are subject to monitor-based hijacking attacks. File encryption incorporates encryption techniques on files and folders, whereas disk encryption focuses on encrypting entire drives, volumes on drives, and external drives for complete protection against various online and offline attack vectors. The last topic of this section covers the importance of firmware updates on all supported device types.

The next section of the chapter covered boot loader protections—the first of these being Secure Boot, which focuses on loading only trusted OSs. Next was Measured Launch, which deepens the scope of Secure Boot by performing independent integrity checks on all critical boot components. Integrity Measurement Architecture is similar to Measured Launch but is open source and specific to Linux-based systems. We talked about the legacy BIOS firmware and its replacement, UEFI, as well as the various security features the latter offers. We talked about TPM chips and how they provide root of trust and attestation services for the host device’s integrity.

The second-to-last section covered vulnerabilities associated with hardware. This included coverage of old hardware not supported by vendors as well as hardware lacking UEFI chips and TPMs. It also included coverage of jailbroken or rooted mobile devices as well as counterfeit devices.

The last section of the chapter covered Terminal Services and application delivery services. Although certain fundamentals are covered in Chapter 13, we added a little extra to this topic by focusing on Microsoft Remote Desktop Services and its RemoteApp feature.

Quick Tips

The following tips should serve as a brief review of the topics covered in more detail throughout the chapter.

Trusted Operating System

•   A trusted OS is one we can place a certain level of trust in based on the various levels established by the Orange Book.

•   The Orange Book was replaced by the Common Criteria (CC), which is a multinational program in which evaluations conducted in one country are accepted by others that also subscribe to the tenets of the CC.

•   Multilevel security implements multiple classification levels, and the operating system has to maintain separation between these levels of all data and users.

•   Evaluation Assurance Levels (EALs) rate operating systems according to their level of security testing and design.

•   CC has replaced EALs with Protection Profiles, which define more accurate and trustworthy assurance levels for operating systems.

•   SELinux is a group of security extensions that can be added to Linux to provide additional security enhancements to the kernel.

•   SEAndroid is the SELinux extensions adapted to the Android OS.

•   Deprecated now in favor of Solaris Trusted Extensions, Trusted Solaris was a group of security-evaluated OSs based on earlier versions of Solaris.

•   The principle of least functionality is a requirement that systems and users are given only the functionality and privileges necessary to complete their jobs/tasks.

Endpoint Security Software

•   Endpoint security refers to a security approach in which each device is responsible for its own security.

•   Antimalware software is a general-purpose security tool designed to prevent, detect, and eradicate multiple forms of malware, such as viruses, worms, Trojan horses, spyware, and more.

•   Antivirus software is designed specifically to remediate viruses, worms, and Trojan horses.

•   Anti-spyware software specifically targets the removal of spyware.

•   Spam filters identify malicious or undesirable e-mails and prevent them from reaching users’ mailboxes.

•   Patch management is the process of acquiring, testing, deploying, and maintaining a patching solution for an organization’s devices.

•   HIPS is a host-based program that prevents threats from attacking the system.

•   HIDS is a host-based program that generates alerts when the system is being attacked.

•   DLP prevents valuable and sensitive materials from leaving the corporate boundary unless the policy permits it.

•   Host-based firewalls control which traffic is allowed or denied from entering and exiting the computer (see the sketch after this list).

•   Log monitoring is the process of examining host logs in order to detect signs of malicious activity on the device.

•   Endpoint detection and response (EDR) solutions will initially monitor a threat by collecting event information from memory, processes, the registry, users, files, and networking, and then upload this data to a local or centralized database.
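
To make the host-based firewall tip concrete, here is a minimal default-deny rule set using iptables on Linux. This is a sketch for a lab host, not a production policy; note that the specific allow rules are evaluated top-down before the catch-all policy applies:

    # Allow replies to connections the host initiated
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # Allow inbound SSH and HTTPS
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT

    # Default-deny everything that no earlier rule matched
    iptables -P INPUT DROP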

Host Hardening

•   Hardening is designed to make it harder for attackers to successfully penetrate a system.

•   Standard operating environments include a pre-defined disk image of an operating system, applications, and configurations to provide consistent host device experiences across the organization.

•   Configuration baselines focus on standardizing configurations across applications or operating systems. Standard operating environments build the machine whereas configuration baselines configure the machine.

•   Application whitelisting focuses on explicitly allowing only certain applications, to the exclusion of all others.

•   Application blacklisting focuses on explicitly denying certain applications while permitting all others.

•   Group Policy is a set of rules that provides for centralized management and configuration of the operating system, user configurations, and applications.

•   Command shell restrictions limit what commands are available to users and IT personnel (see the sketch after this list).

•   Manual patch management improves controls, whereas automated patch management improves speed.

•   Configuring dedicated interfaces ensures that an interface is isolated from all other interfaces and traffic flow patterns, which is especially important for management traffic.

•   Out-of-band management is an example of a dedicated interface in that it requires a separate communications channel.

•   Network ACLs use packet filters to lock down network interfaces.

•   A management interface is a dedicated physical port, or VLAN logical port, that permits in-band management of host devices. This doesn’t require an isolated and private communications link.

•   Data interfaces are the everyday communications channels that exist between hosts and network appliances such as switches. The majority of security features are switch-related to protect the hosts and network from attackers.

•   External I/O restrictions focus on disabling USB devices to guard against data exfiltration or malware propagation, as well as on the various wireless technologies—from Bluetooth and NFC to 802.11, IrDA, and RFID.

•   Drive mounting permits users to access the files and folders on a drive’s file system.

•   Drive mapping permits a user to map a drive on another system to a local drive letter on their computer.

•   Webcams and recording mics should be disabled or used sparingly to prevent spyware or other attacks from hijacking these devices and stealing your data.

•   SD port restrictions should be little to no different from those of USB external drive connections. Ingress and egress threats are equally bad.

•   HDMI and audio output should be restricted due to the possibility of attackers using these cables to hijack the audio and video output of your devices.

•   File encryption is necessary for providing independent encryption capabilities to files and folders on a file system, whereas disk encryption encrypts the entire disk, volume, or external drive from various online and offline attacks.

•   Firmware updates are critical to securing devices from attacks that focus on outdated firmware. Some attacks can brick a device permanently, so this is an important update requirement.
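
As a brief example of the command shell restrictions tip, bash offers a restricted mode (rbash) that blocks directory changes, PATH modification, and absolute-path execution. A minimal sketch follows; the username is illustrative:

    # Start a restricted session; inside it, the following are refused:
    #   cd /tmp            (changing directories)
    #   PATH=/usr/local    (modifying PATH)
    #   /bin/ls            (running commands via absolute paths)
    rbash

    # Assign the restricted shell as a user's login shell
    sudo usermod --shell /bin/rbash alice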

Boot Loader Protections

•   Boot loader protections provide assurances that only trusted boot loaders—the program that loads an OS—are permitted to run during a computer’s startup routine.

•   Secure Boot is a feature made available through UEFI firmware that will only load trusted, digitally signed boot files, as per the original equipment manufacturer (OEM). A verification example follows this list.

•   Measured Launch uses TPM chips to measure the cryptographic integrity of several boot components through the use of digital signatures.

•   Integrity Measurement Architecture (IMA) is an open source method frequently used on Linux systems.

•   BIOS is crucial firmware stored on a chip on device motherboards that performs hardware initialization and the subsequent OS startup.

•   UEFI firmware chips add various features missing from BIOS, such as faster bootup, larger partition sizes, Secure Boot and Measured Launch, a mouse-driven setup utility, and the ability to utilize more memory.

•   TPM chips provide attestation services to authenticate the identity and integrity of software.

•   TPM is a secure chip that contains a cryptoprocessor built into modern computer motherboards for the purpose of performing various security functions relating to certificates, symmetric and asymmetric keys, and hashing.
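
To verify some of these boot loader protections on a running Linux system, a couple of commonly available commands help. This is a sketch assuming a UEFI platform with the mokutil package installed:

    # This directory exists only when the system booted via UEFI
    ls /sys/firmware/efi

    # Report whether Secure Boot is currently enabled
    mokutil --sb-state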

Vulnerabilities Associated with Hardware

•   Hardware vulnerabilities are equally significant as, if not more significant than, software vulnerabilities, even though there are comparatively fewer of them.

•   Hardware vulnerabilities include older PCs, devices lacking UEFI and TPMs, outdated firmware, jailbroken or rooted devices, manufacturer backdoors, and counterfeit components.

Terminal Services/Application Delivery Services

•   Microsoft has renamed Terminal Services as Remote Desktop Services (RDS), which provides desktop and application virtualization services via the Remote Desktop Protocol (RDP).

•   The client offloads most or all resource responsibilities onto the server, thereby defining the client’s role as a thin client.

•   RemoteApp is an RDS solution that permits applications to be hosted on the RDS server while accessed remotely by users.

Questions

The following questions will help you measure your understanding of the material presented in this chapter. Read all the choices carefully because there might be more than one correct answer. Choose all correct answers for each question.

1.   In a firewall, where should you place a “default” rule stating that any packet with any source/destination IP address and any source/destination port should be denied?

A.   It should be the first rule so that it will always be checked.

B.   It doesn’t matter where it is placed as long as you have it in the rules somewhere.

C.   You should never have a rule like this in your rule set.

D.   It should be the last rule checked.

2.   Which of the following is a common firewall found and used on Linux-based machines?

A.   iptables

B.   Snort

C.   Defender

D.   Check Point

3.   You need to generate a rule that allows web-destined traffic to pass through your firewall. Which of the following rules will do that?

A.–D.   [The four answer choices are firewall rule screenshots that could not be reproduced in this text.]

4.   An operating system is said to implement multilevel security if it:

A.   Introduces multiple levels of authorization such that users must authenticate themselves every time they wish to access a file

B.   Includes multiple layers of security, such as having both firewalls and intrusion detection/prevention systems built into it

C.   Implements a system where information and users may have multiple levels of security and the system is trusted to prevent users from accessing information they are not authorized to see

D.   Can be said to be both trustworthy and reliable

5.   If you require a trusted operating system environment, as described in this chapter, which of the following operating systems might you consider deploying?

A.   Windows 2008 Server

B.   SELinux

C.   Red Hat Linux

D.   Windows 7

6.   Which of the following is a program that replicates itself by attaching to other programs?

A.   Spyware

B.   Trojan horse

C.   Virus

D.   Worm

7.   As a parent, you may be interested in monitoring the activities of your child on your computer system. If you are interested in determining what activities your child is involved in on the computer, which of the following pieces of software might you be tempted to install?

A.   Trojan horse

B.   Phishing software

C.   Firewall

D.   Keylogger

8.   What is one of the major issues with spam filters that rely solely on keyword searches to determine what to filter?

A.   Keyword searches are too labor intensive and therefore take too long to accomplish (thus slowing the system response time down).

B.   Keyword searches may filter e-mail you don’t want to filter because the keyword may be found as part of legitimate text.

C.   It is hard to define the keywords.

D.   Keyword searches generally do not work.

9.   From a security standpoint, why is having a standard operating environment (SOE) important?

A.   Without an SOE, administrators will be hard pressed to maintain the security of systems because there could easily be so many different existing configurations that they would not be able to ensure all are patched and secured correctly.

B.   Having an SOE has nothing to do with security and is purely an administrative tool.

C.   Having an SOE allows administrators to take advantage of large-scale, bulk ordering of software, thus saving funds.

D.   Having an SOE is essential in order to implement Active Directory correctly.

10.   If you want to implement a restricted shell in a Unix environment, which of the following would you use?

A.   ksh

B.   csh

C.   rbash

D.   sh

11.   Which of the following technologies would be most appropriate in your inventory control efforts?

A.   RFID

B.   NFC

C.   IrDA

D.   802.11i

12.   From a security standpoint, why might you want to protect your database of inventory items?

A.   Regenerating it if it is lost can be costly.

B.   Losing something like this would be an indication of a lack of general security procedures and processes.

C.   Because it would contain information on the hardware and software platforms your organization uses and thus would provide an attacker with information that could be used to determine vulnerabilities you might be susceptible to.

D.   If a software or hardware vendor obtained a copy of it, you might find yourself inundated with sales calls trying to sell you any number of products.

13.   If you are in a banking environment, what type of information might you look for in traffic that is leaving your organization in order to protect against data exfiltration by somebody who may have gotten unauthorized access to your system? (Choose all that apply.)

A.   Files containing strings of 9-digit numbers (which might be social security numbers) or numbers that might represent bank accounts

B.   Large data files being sent out of your organization in an unencrypted manner

C.   Files, or even e-mail, that contain numerous occurrences of numbers that could be phone numbers or ZIP codes

D.   Files or e-mail that contain sequences of digits that could be credit or debit card numbers

14.   Which of the following is a common use for Trusted Platform Modules?

A.   To authenticate and decrypt external storage devices

B.   To authenticate and decrypt internal storage devices

C.   To perform antivirus scans

D.   All of the above

15.   Which type of intrusion detection/prevention system is based on statistical analysis of current network or system activity versus historical norms?

A.   Signature based

B.   Abnormal behavior based

C.   Pattern deviation based

D.   Anomaly based

16.   When a computer turns on, the UEFI checks to make sure that the operating system is on the supported list of digitally signed operating systems. Which of the following features provides this capability?

A.   BitLocker

B.   Group Policy

C.   Measured Launch

D.   Secure Boot

Answers

1.   D. You should have this as the last rule so that if none of the other rules is invoked, the system will fall through to this one and know what to do.

2.   A. Iptables is the specific firewall we discussed in the chapter, and it is found in most releases of Linux.

3.   D. This is the sample rule we showed in the chapter that allows web traffic to pass.

4.   C. This is a description of multilevel security. Generally, when somebody wants to utilize trusted operating systems, it is because they want to implement multiple levels of security on the system.

5.   B. SELinux is the only one of the operating systems listed that implements mandatory access controls, which allow for multiple levels of security.

6.   C. This is the definition of a virus.

7.   D. Although you should be careful where you obtain it from, a keylogger will record all keystrokes that your child makes, allowing you to determine what they are doing on the computer.

8.   B. Filtering based solely on keywords could mean you filter e-mail that contains legitimate occurrences of the string you are searching for. The chapter used the example of filtering on “cialis,” which is often found in spam related to the sale of drugs; yet this pattern is also found in the word “specialist.” Thus, you might filter a perfectly legitimate e-mail.

9.   A. This is the key. If your organization has a large number of systems, without having a standard operating environment, configuration control could quickly get out of hand, and maintaining the security of numerous, disparate systems would become untenable.

10.   C. The rbash command invokes the bash shell in restricted mode.

11.   A. RFID was mentioned in this chapter as a technology that can be useful in tracking individual inventory items.

12.   C. Knowing what hardware and software you have provides an attacker a tremendous boost in terms of determining what attacks to try against you.

13.   A, B, C, D. All of these might very well be indicators of information being sent out of your organization that shouldn’t be. Even e-mails that are not encrypted and that contain more than one account or credit card number could indicate a problem.

14.   B. Trusted Platform Modules (TPMs) have many purposes, including authenticating internal storage devices and then decrypting them.

15.   D. This is the definition of anomaly-based detection.

16.   D. Secure Boot is a UEFI feature that only boots up operating systems that are digitally signed and supported by the vendor.
