Chapter 6. Security Controls for Host Devices

This chapter covers the following topics:

  • Trusted OS (e.g., How and When to Use It): This section defines the concept of trusted OS and describes how it has been used to improve system security. Topics include SELinux, SEAndroid, TrustedSolaris, and least functionality.

  • Endpoint Security Software: Topics covered include anti-malware, antivirus, anti-spyware, spam filters, patch management, HIPS/HIDS, data loss prevention, host-based firewalls, log monitoring, and endpoint detection response.

  • Host Hardening: Methods covered include standard operating environment/configuration baselining, security/group policy implementation, command shell restrictions, patch management, configuration of dedicated interfaces, peripheral restrictions, external I/O restrictions, file and disk encryption, and firmware updates.

  • Boot Loader Protections: Topics covered include the use of secure boot, measured launch, the Integrity Measurement Architecture, BIOS/UEFI, attestation services, and TPM.

  • Vulnerabilities Associated with Hardware: Concepts include standard operating environments and security/group policy implementation.

  • Terminal Services/Application Delivery Services: This section covers recommended security measures when using terminal services and application delivery services.

This chapter covers CAS-003 objective 2.2.

Securing a network cannot stop at controlling and monitoring network traffic. Network attacks are created with the end goal of attacking individual hosts. This chapter covers options available to protect hosts and the issues these options are designed to address.

Trusted OS (e.g., How and When to Use It)

A trusted operating system (OS) is an operating system that provides sufficient support for multilevel security and evidence of meeting a particular set of government requirements. The goal of designating operating systems as trusted was first brought forward by the Trusted Computer System Evaluation Criteria (TCSEC).

The National Computer Security Center (NCSC) developed the TCSEC for the U.S. Department of Defense (DoD) to evaluate products. TCSEC issued a series of books, called the Rainbow Series, that focuses on computer systems and the networks in which they operate.

TCSEC’s Orange Book is a collection of criteria based on the Bell-LaPadula model that is used to grade or rate the security offered by a computer system product. The Orange Book discusses topics such as covert channel analysis, trusted facility management, and trusted recovery.

TCSEC was replaced by the Common Criteria (CC) international standard, which was the result of a cooperative effort. The CC uses Evaluation Assurance Levels (EALs) to rate systems, with different EALs representing different levels of security testing and design in a system. The resulting rating represents the potential the system has to provide security. It assumes that the customer will properly configure all available security solutions, so the vendor must provide proper documentation to allow the customer to fully achieve the rating. ISO/IEC 15408-1:2009 is the International Organization for Standardization version of CC.

CC has seven assurance levels, which range from EAL1 (lowest), where functionality testing takes place, through EAL7 (highest), where thorough testing is performed and the system design is verified:

  • EAL1: Functionally tested

  • EAL2: Structurally tested

  • EAL3: Methodically tested and checked

  • EAL4: Methodically designed, tested, and reviewed

  • EAL5: Semi-formally designed and tested

  • EAL6: Semi-formally verified design and tested

  • EAL7: Formally verified design and tested

Here are some examples of trusted operating systems and the EAL levels they provide:

  • Mac OS X 10.6 (rated EAL 3+)

  • HP-UX 11i v3 (rated EAL 4+)

  • Some Linux distributions (rated up to EAL 4+)

  • Microsoft Windows 7 (rated EAL 4+)


Although Common Criteria is moving away from the use of EALs and toward the use of protection profiles, for the exam you should know all about EALs!

Common Criteria is moving away from the use of EALs and toward the use of protection profiles, as shown in Table 6-1. Products can qualify for multiple profiles.


Table 6-1 Protection Profile Categories

  • Access control devices and systems

  • Biometric systems and devices

  • Boundary protection devices and systems

  • Data protection

  • ICs, smart cards, and smart card–related devices and systems

  • Key management systems

  • Multi-function devices

  • Network and network-related devices and systems

  • Operating systems

  • Other devices and systems

  • Products for digital signatures

  • Trusted computing

Trusted operating systems should be used in any situation where security is paramount, such as in government agencies, when operating as a contractor for the DoD, or when setting up a web server that will be linked to sensitive systems or contain sensitive data. Note, however, that there may be a learning curve when using these operating systems as they are typically harder to learn and administer. The following sections discuss three trusted operating systems: SELinux, SEAndroid, and TrustedSolaris.

SELinux

Security-Enhanced Linux (SELinux) is a Linux kernel security module that, when added to the Linux kernel, separates enforcement of security decisions from the security policy itself and streamlines the amount of software involved with security policy enforcement.

SELinux also enforces mandatory access control policies that confine user programs and system servers, and it limits access to files and network resources. It has no concept of a “root” superuser and does not share the well-known shortcomings of the traditional Linux security mechanisms. In high-security scenarios, where the sandboxing of the root account is beneficial, the SELinux system should be chosen over regular versions of Linux.
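On an SELinux-enabled distribution, the enforcement state can be inspected from the shell. The commands below are the standard SELinux utilities; exact output varies by distribution, and switching modes requires root:

```
getenforce      # prints Enforcing, Permissive, or Disabled
sestatus        # detailed status, including the loaded policy name
setenforce 0    # temporarily switch to permissive mode for troubleshooting
```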

SEAndroid

SEAndroid is an SELinux version that runs on Android devices. Android 5.0 moved to full enforcement of SELinux, building on the permissive release in Android 4.3 and the partial enforcement in Android 4.4.

Software runs on SEAndroid with only the minimum privileges needed to work correctly (which helps lessen the damage that malware can do), and it can sometimes block applications or functions that employees need. To manage this default SEAndroid behavior, you need shell and root access to the Android devices.

SSHDroid is an app that allows you to access Android devices from a computer using Secure Shell (SSH). You can gain root by using the Android Debug Bridge (adb) command, which is part of the Android software development kit (SDK), or you can root the device to get full access. Taking this approach isn’t for everyone because device vendors don’t support rooting.
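As a sketch of that workflow, the following commands (run from a computer with the Android SDK platform tools installed) open a shell on an attached device; whether the su command succeeds depends on the device being rooted:

```
adb devices     # list attached Android devices
adb shell       # open a shell on the device
getenforce      # inside the device shell: check SEAndroid enforcement
su              # attempt to elevate to root (works only on rooted devices)
```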

TrustedSolaris

TrustedSolaris is a set of security extensions incorporated in the Solaris 10 trusted OS. Solaris 10 5/09 is Common Criteria certified at EAL4. Enhancements include:

  • Accounting

  • Role-based access control

  • Auditing

  • Device allocation

  • Mandatory access control labeling

The TrustedSolaris environment allows the security administrator role to extend the list of trusted directories. The method is different in the TrustedSolaris 8 environment than in previous releases. For more information, see “Procedure for the Trusted Solaris 8 Operating Environment.”

Least Functionality

The principle of least functionality calls for an organization to configure information systems to provide only essential capabilities and specifically prohibits and/or restricts the use of other functions.
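The idea can be sketched in shell: compare what is running against an approved baseline and flag everything else for removal. The service names here are purely illustrative:

```shell
# approved service baseline for this host (hypothetical)
approved="cron
sshd"
# services actually found running on the host (hypothetical)
running="cron
cupsd
sshd
telnetd"
echo "$approved" | sort > /tmp/approved.txt
echo "$running" | sort > /tmp/running.txt
# least functionality: anything running but not approved should be disabled
extras=$(comm -13 /tmp/approved.txt /tmp/running.txt)
echo "$extras"
```

On a systemd host, each flagged service could then be stopped and disabled (for example, systemctl disable --now telnetd), though the exact mechanism depends on the platform.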

Endpoint Security Software

Endpoint security is accomplished by ensuring that every computing device on a network meets security standards. The following sections discuss software and devices used to provide endpoint security, including antivirus software and other types of software and devices that enhance security.

Anti-malware

We are not helpless in the fight against malware. There are both programs and practices that help mitigate the damage malware can cause. Anti-malware software addresses problematic software such as adware and spyware, viruses, worms, and other forms of destructive software. Most commercial applications today combine anti-malware, antivirus, and anti-spyware into a single tool. An anti-malware tool usually includes protection against malware, viruses, and spyware. An antivirus tool just protects against viruses. An anti-spyware tool just protects against spyware. Security professionals should review the documentation of any tool they consider to understand the protection it provides.

User education in safe Internet use practices is a necessary part of preventing malware. This education should be a part of security policies and should include topics such as:

  • Keeping anti-malware applications current

  • Performing daily or weekly scans

  • Disabling autorun/autoplay

  • Disabling image previews in Outlook

  • Avoiding clicking on email links or attachments

  • Surfing smart

  • Hardening the browser with content phishing filters and security zones

Antivirus

Antivirus software is designed to identify viruses, Trojans, and worms. It deletes them or at least quarantines them until they can be removed. This identification process requires that you frequently update the software’s definition files, the files that make it possible for the software to identify the latest viruses. If a new virus is created that has not yet been identified in the list, you will not be protected until the virus definition is added and the new definition file is downloaded.

Anti-spyware

Spyware tracks a user’s activities and can gather personal information that could lead to identity theft. In some cases, spyware can even direct the computer to install software and change settings. Most antivirus or anti-malware packages also address spyware, so ensuring that definitions for both programs are up to date is the key to addressing this issue. The avoidance of spyware can also be enhanced by adopting the safe browsing guidelines listed in the “Anti-malware” section, earlier in this chapter.

An example of a program that is often installed through the unwitting participation of the user (by clicking where he or she shouldn’t have clicked) is a key logger. These programs record all keystrokes, which can include usernames and passwords. One approach that has been effective in removing spyware on Windows 7 is to reboot the machine in safe mode, run the anti-spyware tool, and allow it to remove the spyware. In safe mode, it is more difficult for the malware to avoid the removal process.

Spam Filters

Spam is both an annoyance to users and an aggravation to email administrators who must deal with the extra space the spam takes up on the servers. Above and beyond these concerns, however, is the possibility that a spammer can be routing spam through your email server, making it appear as though your company is the spammer.

Sending spam is illegal, so many spammers try to hide the source of the spam by relaying it through corporate email servers. Not only does this hide its true source, but it can cause the relaying company to get in trouble.

Today’s email servers have the ability to deny relaying to any email servers that a security professional does not specify. This type of relaying should be disallowed on your email servers to prevent your email system from being used as a spamming mechanism.
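As one hedged example, on a Postfix server the relay policy can be restricted so that only local or authenticated clients may relay; the exact parameters available depend on the Postfix version:

```
# /etc/postfix/main.cf (fragment)
smtpd_relay_restrictions =
    permit_mynetworks,          # allow hosts on the trusted local networks
    permit_sasl_authenticated,  # allow clients that authenticated via SASL
    reject_unauth_destination   # refuse to relay for anyone else
```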

Spam filters are designed to prevent spam from being delivered to mailboxes. The issue with spam filters is that often legitimate email is marked as spam. Finding the right setting can be challenging. Users should be advised that no filter is perfect, and they should regularly check quarantined email for legitimate emails.

Patch Management

Software patches are updates released by vendors that either fix functional issues with or close security loopholes in operating systems, applications, and versions of firmware that run on network devices.

To ensure that all devices have the latest patches installed, a formal system should be deployed to ensure that all systems receive the latest updates after thorough testing in a non-production environment. It is impossible for the vendor to anticipate every possible impact a change may have on business-critical systems in the network. It is the responsibility of the enterprise to ensure that patches do not adversely impact operations.

Vendors generally make several types of patches available:

  • Hot fixes: A hot fix is an update that solves a security issue and should be applied immediately if the issue it resolves is relevant to the system.

  • Updates: An update solves a functionality issue rather than a security issue.

  • Service packs: A service pack includes all updates and hot fixes since the release of the operating system.

HIPS/HIDS

Intrusion detection systems (IDSs) are used to identify intrusions, and intrusion prevention systems (IPSs) are used to prevent them. For more information on specific deployment models, see Chapter 5, “Network and Security Components, Concepts, and Architectures.”


A host-based IDS (HIDS) is a system responsible for detecting unauthorized access or attacks against systems and networks. A host-based IPS (HIPS) reacts and takes an action in response to a threat. HIDS and HIPS implementations are covered more completely in Chapter 5.

The use of these devices is indicated when threats must be identified automatically for a single device. For example, in a scenario where a small number of security professionals are required to effectively monitor a large network for intrusions, HIDS and HIPS systems can allow them to continue their normal duties rather than manually monitor a dashboard, waiting for such intrusions. Alerts can be designed to inform them in a timely fashion of any intrusions and, in the case of a HIPS, to react to them.

Data Loss Prevention

Data leakage occurs when sensitive data is disclosed to unauthorized personnel either intentionally or inadvertently. The value of a data loss prevention (DLP) system lies in the level of precision with which it can locate and prevent the leakage of sensitive data. DLP software resides in endpoints and thus is considered another example of endpoint security software.

When data exfiltration is a concern, DLP can be used both to prevent sensitive data from leaving the premises and to alert security professionals when attempts occur. By electronically labeling data with its proper classification, a DLP system can take action in real time when such attempts occur, whether they are intentional or unintentional.

Host-Based Firewalls

A host-based firewall resides on a single host and is designed to protect that host only. Many operating systems today come with host-based (or personal) firewalls. Many commercial host-based firewalls are designed to focus attention on a particular type of traffic or to protect a certain application.

On Linux-based systems, a common host-based firewall is iptables, which replaces a previous package called ipchains. It has the ability to accept or drop packets. You create firewall rules much as you create an access list on a router. The following is an example of a rule set:

iptables -A INPUT -i eth1 -s 10.0.0.0/8 -j DROP
iptables -A INPUT -i eth1 -s 192.168.0.0/16 -j DROP
iptables -A INPUT -i eth1 -s 172.16.0.0/12 -j DROP

This rule set blocks all incoming traffic sourced from the 10.0.0.0/8, 192.168.0.0/16, or 172.16.0.0/12 network. All three of these are private IP address ranges. It is quite common to block incoming traffic from the Internet that has a private IP address as its source as this usually indicates that IP spoofing is occurring. In general, the following IP address ranges should be blocked as traffic sourced from these ranges is highly likely to be spoofed:

  • 10.0.0.0/8

  • 172.16.0.0/12

  • 192.168.0.0/16

  • 224.0.0.0/4

  • 127.0.0.0/8

The 224.0.0.0/4 range covers multicast traffic, and the 127.0.0.0/8 range covers traffic from a loopback IP address. You may also want to include the APIPA range (169.254.0.0/16) as well, as it is the range in which some computers give themselves IP addresses when the DHCP server cannot be reached.
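Following the same pattern as the rule set above, the multicast, loopback, and APIPA ranges can also be dropped (the interface name eth1 is carried over from that example):

```
iptables -A INPUT -i eth1 -s 224.0.0.0/4 -j DROP      # multicast
iptables -A INPUT -i eth1 -s 127.0.0.0/8 -j DROP      # loopback
iptables -A INPUT -i eth1 -s 169.254.0.0/16 -j DROP   # APIPA
```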

On a Microsoft computer, you can use Windows Firewall with Advanced Security to block these ranges. The rule shown in Figure 6-1 blocks any incoming traffic sourced from one of these private ranges.


Figure 6-1 Using the Windows Firewall

Log Monitoring

Computers, their operating systems, and the firewalls that may be present on them generate system information that is stored in log files. You should monitor network events, system events, application events, and user events. Keep in mind that any auditing activity will impact the performance of the system being monitored. Organizations must find a balance between auditing important events and activities and ensuring that device performance is maintained at an acceptable level.


When designing an auditing mechanism, security professionals should remember the following guidelines:

  • Develop an audit log management plan that includes mechanisms to control the log size, backup processes, and periodic review plans.

  • Ensure that the ability to delete an audit log is a two-person control that must be completed by administrators.

  • Monitor all high-privilege accounts (including all root users and administrative-level accounts).

  • Ensure that the audit trail includes who processed a transaction, when the transaction occurred (date and time), where the transaction occurred (which system), and whether the transaction was successful.

  • Ensure that unauthorized deletion of the log itself, or of the data within it, cannot occur.


Scrubbing is the act of deleting incriminating data from an audit log.

Audit trails detect computer penetrations and reveal actions that identify misuse. As a security professional, you should use audit trails to review patterns of access to individual objects. To identify abnormal patterns of behavior, you should first identify normal patterns of behavior. Also, you should establish the clipping level, which is a baseline of user errors above which violations will be recorded. For example, your organization may choose to ignore the first invalid login attempt, knowing that initial invalid login attempts are often due to user error. Any invalid login after the first one, however, would be recorded because it could be a sign of an attack.
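The clipping-level idea can be sketched in a few lines of shell against sample log data; the two-column log format here is invented for illustration:

```shell
# sample authentication log: user and result (hypothetical format)
log="alice FAIL
bob FAIL
alice FAIL
alice OK"
# clipping level of 1: record a violation only after more than one failure
violations=$(echo "$log" | awk '$2 == "FAIL" { fails[$1]++ }
    END { for (u in fails) if (fails[u] > 1) print u }')
echo "$violations"
```

Here bob’s single failure is ignored as probable user error, while alice’s repeated failures cross the clipping level and would be recorded as a violation.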

Audit trails deter attackers’ attempts to bypass the protection mechanisms that are configured on a system or device. As a security professional, you should specifically configure the audit trails to track system/device rights or privileges being granted to a user and data additions, deletions, or modifications. You can use Group Policy in a Windows environment to create and apply audit policies to computers. Figure 6-2 shows the Group Policy Management Console.


Figure 6-2 The Group Policy Management Console

Finally, audit trails must be monitored, and automatic notifications should be configured. If no one monitors the audit trail, the data recorded in the audit trail is useless. Certain actions should be configured to trigger automatic notifications. For example, you may want to configure an email alert to occur after a certain number of invalid login attempts because invalid login attempts may be a sign that a password attack is occurring.

Table 6-2 displays selected Windows audit policies and the threats to which they are directed.


Table 6-2 Windows Audit Policies

Audit Event

Potential Threat

Success and failure audit for object-access events on the printers, or success and failure audit in print management of print access by suspect users or groups

Improper access to printers

Failure audit for logon/logoff

Random password hack

Success audit for user rights, user and group management, security change policies, restart, shutdown, and system events

Misuse of privileges

Success audit for logon/logoff

Stolen password break-in

Success and failure write access auditing for program files (.EXE and .DLL extensions) or success and failure auditing for process tracking

Virus outbreak

Success and failure audit for file-access and object-access events or File Explorer success and failure audit of read/write access by suspect users or groups for the sensitive files

Improper access to sensitive files

Endpoint Detection Response

Endpoint detection and response (EDR) is a proactive endpoint security approach designed to supplement existing defenses. It shifts endpoint security from a reactive posture to one that can detect and prevent threats before they reach the organization. It focuses on three essential elements for effective threat prevention: automation, adaptability, and continuous monitoring.

Some examples of EDR products are:

  • FireEye Endpoint Security

  • Carbon Black Cb Response

  • Guidance Software EnCase Endpoint Security

  • Cybereason Total Enterprise Protection

  • Symantec Endpoint Protection

  • RSA NetWitness Endpoint

The advantage of EDR systems is that they provide continuous monitoring. The disadvantage is that the software’s use of resources could impact performance of the device.

Host Hardening

Another of the ongoing goals of operations security is to ensure that all systems have been hardened to the extent that is possible while still providing functionality. The hardening can be accomplished both on physical and logical bases. From a logical perspective:

  • Unnecessary applications should be removed.

  • Unnecessary services should be disabled.

  • Unrequired ports should be blocked.

  • The connecting of external storage devices and media should be tightly controlled, if allowed at all.

  • Unnecessary accounts should be disabled.

  • Default accounts should be renamed, if possible.

  • Default passwords for default accounts should be changed.
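On a Linux host, several of these steps map to one-line commands. These are illustrative sketches (the service and account names are assumptions) and require root:

```
systemctl disable --now telnet.socket   # disable an unnecessary service
userdel -r olduser                      # remove an unnecessary account
passwd -l guest                         # lock a default account rather than deleting it
```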

Standard Operating Environment/Configuration Baselining

One practice that can make maintaining security simpler is to create and deploy standard images that have been secured with security baselines. A security baseline is a set of configuration settings that provide a floor of minimum security in the image being deployed.

Security baselines can be controlled through the use of Group Policy in Windows. These policy settings can be made in the image and applied to both users and computers. These settings are refreshed periodically through a connection to a domain controller and cannot be altered by the user. It is also quite common for the deployment image to include all of the most current operating system updates and patches as well.

When a network makes use of these types of technologies, the administrators have created a standard operating environment. The advantages of such an environment are more consistent behavior of the network and simpler support issues. Weekly scans of the systems should be performed to detect changes from the baseline. Virtual machine images can also be used for this purpose. Virtualization is covered in more detail in Chapter 13, “Cloud and Virtualization Technology Integration.”
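Detecting changes from a baseline can be as simple as comparing file hashes against those recorded when the image was deployed. This sketch uses a scratch directory and an invented config file to simulate drift:

```shell
# record a hash baseline for a freshly deployed file
mkdir -p /tmp/soe_demo
echo "config-v1" > /tmp/soe_demo/app.conf
sha256sum /tmp/soe_demo/app.conf > /tmp/soe_demo/baseline.txt

# later: simulate an unauthorized change, then re-verify against the baseline
echo "config-v2" > /tmp/soe_demo/app.conf
if sha256sum -c --quiet /tmp/soe_demo/baseline.txt 2>/dev/null; then
    echo "baseline intact"
else
    echo "baseline drift detected"
fi
```

Because the file was modified after the baseline was recorded, the verification fails and the script reports drift.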

Application Whitelisting and Blacklisting

Application whitelists are lists of allowed applications (with all others excluded), and blacklists are lists of prohibited applications (with all others allowed).

It is important to control the types of applications that users can install on their computers. Some application types can create support issues, and others can introduce malware. It is possible to use Windows Group Policy to restrict the installation of software on network computers, as illustrated in Figure 6-3. Using Windows Group Policy is only one option, and each organization should select a technology to control application installation and usage in the network.


Figure 6-3 Software Restriction

Security/Group Policy Implementation

One of the most widely used methods of enforcing a standard operating environment is using Group Policy in Windows. In an Active Directory environment, any users and computers that are members of a domain can be provided a collection of settings that comprise a security baseline. (It is also possible to use Local Security Policy settings on non-domain members, but this requires more administrative effort.)

Group Policy leverages the hierarchical structure of Active Directory to provide a common group of settings, called Group Policy Objects (GPOs), to all systems in the domain while adding or subtracting specific settings to certain subgroups of users or computers, called containers. Figure 6-3 illustrates how this works.

An additional benefit of using Group Policy is that an administrator can make changes to the existing policies by using the Group Policy Management Console (GPMC). Affected users and computers will download and implement any changes when they refresh the policy—which occurs at startup, shutdown, logon, and logoff. It is also possible for the administrator to force a refresh when time is of the essence.
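When a change cannot wait for the normal refresh interval, the refresh can be forced from a command prompt on the affected machine; gpupdate and gpresult are standard Windows tools:

```
REM reapply all policies, not just the ones that changed
gpupdate /force
REM summarize which GPOs were actually applied to the user and computer
gpresult /r
```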

The following are some of the advantages provided by the granular control available in the GPMC:

  • Ability to allow or disallow the inheritance of a policy from one container in Active Directory to one of its child containers

  • Ability to filter out specific users or computers from a policy’s effect

  • Ability to delegate administration of any part of the Active Directory namespace to an administrator

  • Ability to use Windows Management Instrumentation (WMI) filters to exempt computers of a certain hardware type from a policy

The following are some of the notable policies that relate to security:

  • Account Policies: These policies include password policies, account lockout policies, and Kerberos authentication policies.

  • Local Policies: These policies include audit, security, and user rights policies that affect the local computer.

  • Event Log: This policy controls the behavior of the event log.

  • Restricted Groups: This is used to control the membership of sensitive groups.

  • Systems Services: This is used to control the access to and behavior of system services.

  • Registry: This is used to control access to the registry.

  • File System: This includes security for files and folders and controls security auditing of files and folders.

  • Public Key Policies: This is used to control behavior of a PKI.

  • Internet Protocol Security Policies on Active Directory: This is used to create IPsec policies for servers.

Command Shell Restrictions

While Windows is known for its graphical user interface (GUI), it is possible to perform anything that can be done in the GUI at the command line. Moreover, many administrative tasks can be done only at the command line, and some of those tasks can be harmful and destructive to the system when their impact is not well understood.

Administrators of other operating systems, such as Linux or UNIX, make even more use of a command line in day-to-day operations. Administrators of routers and switches make almost exclusive use of a command line when managing those devices.

With the risk of mistakes, coupled with the possibility of those with malicious intent playing havoc at the command line, it is advisable in some cases to implement command shell restrictions. A restricted command shell is a command-line interface where only certain commands are available. In Linux and UNIX, a number of command-line shells are available, and they differ in terms of the power of the commands they allow. Table 6-3 lists some of the most common UNIX/Linux-based shells. Other popular shells include Windows PowerShell, used to interact with Windows systems, and the Linux terminal shell.


Table 6-3 Common UNIX/Linux-Based Shells

Shell Name

Description

Tcsh

Similar to the C shell

Bourne shell

The most basic shell, available on all UNIX systems

C shell

Similar to the C programming language in syntax

Korn shell

Based on the Bourne shell, with enhancements

Bash shell

Combines the advantages of the Korn shell and the C shell; the default on most Linux distributions

In Cisco IOS, the commands that are available depend on the mode in which the command-line interface is operating. You start out at user mode, where very few things can be done (and none of them very significant), and then progress to privileged mode, where more commands are available. However, you can place a password on the device for which the user will be prompted when moving from user mode to privileged mode. For more granular control of administrative access, user accounts can be created on the device, and privilege levels can be assigned to control what technicians can do, based on their account.
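The progression of modes, and the commands that guard and restrict privileged access, look like this on a Cisco device (the password and username shown are placeholders):

```
Router> enable                  ! user mode; enable moves to privileged mode
Password: ********
Router# configure terminal      ! privileged mode; enter configuration mode
Router(config)# enable secret S3cr3t!
Router(config)# username tech1 privilege 5 secret T3ch!
```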

Patch Management

Basic patch management is covered earlier in this chapter. Let’s look at two ways to accomplish it.

Manual Patch Management

While manual patch management requires more administrative effort than an automated system (discussed in the next section), it can be done, using the following steps:

Step 1. Determine the priority of the patches.

Step 2. Test the patches prior to deployment to ensure that they work properly and do not cause system or security issues.

Step 3. Install the patches in the live environment.

Step 4. After patches are deployed, ensure that they work properly.

Automated Patch Management

Most organizations manage patches through a centralized update solution such as Windows Server Update Services (WSUS). With such services, organizations can deploy updates in a controlled yet automatic fashion. The WSUS server downloads the updates, and they are applied locally from the WSUS server. Group Policy is also used in this scenario to configure the location of the server holding the updates.

Scripts can also be used to automate the patch process. This may offer more flexibility and control of the process than using the automated tools. A deep knowledge of scripting might be required, however.

In some cases, geographically dispersed servers may be used to provide the patches referenced in the scripts. In that case, proper replication must be set up to ensure that all patches are available on all patch servers. Windows PowerShell commands are increasingly being used to automate Windows functions. In the Linux environment, Linux shell scripting is used for this.
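A minimal Linux patch script, assuming a Debian-style system with apt-get and a writable log location, might look like the following; a production script would add error handling and reporting:

```
#!/bin/sh
# apply pending updates and record the result (Debian/Ubuntu assumption)
LOG=/var/log/patch-run.log
{
    date
    apt-get update
    apt-get -y upgrade
} >> "$LOG" 2>&1
```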

Configuring Dedicated Interfaces

Not all interfaces are created equal. Some, especially those connected to infrastructure devices and servers, need to be more tightly controlled and monitored due to the information assets to which they lead. The following sections look at some of the ways sensitive interfaces and devices can be monitored and controlled.

Out-of-Band Management

An interface that is out-of-band (OOB) is connected to a separate and isolated network that is not accessible from the local area network or the outside world. These interfaces are also typically live even when the device is off. OOB interfaces can be Ethernet or serial. Guidelines to follow when configuring OOB interfaces include the following:

  • Place all OOB interfaces in a separate subnet from the data network.

  • Create a separate virtual LAN (VLAN) on the switches for this subnet.

  • When crossing wide area network (WAN) connections, use an Internet connection that is separate from the production network’s connection.

  • Use Quality of Service (QoS) to ensure that the management traffic does not affect production performance.

  • To help get more bang for the investment in additional technology, consider using the same management network for backups.

  • If the network interface cards (NICs) support it, use Wake on LAN to make systems available even when they are shut down.
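Wake on LAN works by sending a “magic packet”: 6 bytes of 0xFF followed by the target NIC’s MAC address repeated 16 times (102 bytes in total). The construction can be sketched in shell; the MAC address is a placeholder, and actually transmitting the bytes would require a separate tool:

```shell
# hypothetical target MAC address, without separators
mac="001122334455"
# magic packet, hex-encoded: ff x 6, then the MAC repeated 16 times
packet=$(printf 'ff%.0s' 1 2 3 4 5 6; printf "${mac}%.0s" $(seq 1 16))
# 6 + 16*6 = 102 bytes, i.e., 204 hex characters
echo "${#packet}"    # 204
```

A utility such as etherwake, or a short UDP sender, would then transmit these bytes to the subnet’s broadcast address (commonly UDP port 9).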

Some newer computers that have the Intel vPro chipset and a version of Intel Active Management Technology (Intel AMT) can be managed out-of-band even when the system is off. When this functionality is coupled with the out-of-band management feature in System Center 2012 R2 Configuration Manager, you can perform the following tasks:

  • Power on one or many computers (for example, for maintenance on computers outside business hours).

  • Power off one or many computers (for example, if the operating system stops responding).

  • Restart a nonfunctioning computer or boot from a locally connected device or known good boot image file.

  • Re-image a computer by booting from a boot image file that is located on the network or by using a Preboot Execution Environment (PXE) server.

  • Reconfigure the BIOS settings on a selected computer (and bypass the BIOS password if this is supported by the BIOS manufacturer).

  • Boot to a command-based operating system to run commands, repair tools, or diagnostic applications (for example, upgrading the firmware or running a disk repair tool).

  • Configure scheduled software deployments to wake up computers before the deployment is due to run.


ACLs

The inherent limitation of access control lists (ACLs) is their inability to detect IP spoofing. IP address spoofing is a technique hackers use to hide their trail or to masquerade as another computer. A hacker alters the IP address as it appears in a packet, which can sometimes allow the packet to get through an ACL that is based on IP addresses. IP address spoofing can also be used to make a connection to a system that trusts only certain IP addresses or ranges of IP addresses.

ACLs can also be used to control access to resources on servers and workstations. These ACLs are of a different type and are typically constructed as an access matrix: a table with subjects on one axis and objects on the other. At the intersection of the axes is the permission granted to a subject for an object.
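The access matrix just described can be sketched as a nested lookup. This is only an illustration; the subjects, objects, and permissions are hypothetical:

```python
# Sketch of an access matrix: subjects on one axis, objects on the other,
# with the granted permissions at each intersection. All names hypothetical.

access_matrix = {
    "alice": {"payroll.xlsx": {"read", "write"}, "handbook.pdf": {"read"}},
    "bob":   {"handbook.pdf": {"read"}},
}

def is_permitted(subject: str, obj: str, permission: str) -> bool:
    """Default-deny check: the permission must appear at the intersection."""
    return permission in access_matrix.get(subject, {}).get(obj, set())

print(is_permitted("alice", "payroll.xlsx", "write"))
print(is_permitted("bob", "payroll.xlsx", "read"))
```

Note the default-deny behavior: a subject or object that does not appear in the matrix at all is simply refused, which mirrors how operating systems evaluate resource ACLs.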

For more on ACLs, see Chapter 5.

Management Interface

Management interfaces are used for accessing devices remotely. Typically, a management interface is disconnected from the in-band network and is connected to the device’s internal network. Through a management interface, you can access the device over the network by using utilities such as SSH and Telnet. Simple Network Management Protocol (SNMP) can use a management interface to gather statistics from a device.

In some cases, the interface is an actual physical port labeled as a management port; in other cases, it is a port that is logically separated from the network (for example, in a private VLAN). The point is to keep these interfaces used for remotely managing the device separate from the regular network traffic the device may encounter.

While there is little downside to using a management interface, the interface itself must be secured. Cisco devices have dedicated terminal lines for remote management, called VTY lines, and each should be configured with a password. To secure the 16 VTY lines that exist on some Cisco switches, use the following command set to set the password to Ci$co:

Switch#configure terminal
Switch(config)#line vty 0 15
Switch(config-line)#password Ci$co
Switch(config-line)#login

Data Interface

Data interfaces are used to pass regular data traffic and are not used for either local or remote management. The interfaces may operate at either layer 2 or layer 3, depending on the type of device (router or switch). These interfaces can also have ACLs defined at either layer: on routers they are called access lists, and on switches the analogous concept is port security.

Some networking devices, such as routers and switches, can also have logical, or software, interfaces as well. An example is a loopback interface. This is an interface on a Cisco device that can be given an IP address and that will function the same as a hardware interface. Why would you use such an interface? Well, unlike hardware interfaces, loopback interfaces never go down. This means that as long as any of the hardware interfaces are functioning on the device, you will be able to reach the loopback interface. This makes a loopback interface a good candidate for making the VTY connection, which can be targeted at any IP address on the device.

Creating a loopback interface is simple. The commands are as follows (using an example address):

Switch#configure terminal
Switch(config)#interface Loopback0
Switch(config-if)#ip address 192.168.100.1 255.255.255.255

External I/O Restrictions

One of the many ways malware and other problems can be introduced to a network (right around all your fancy firewalls and security devices) is through the peripheral devices that users bring in and connect to their computers. Moreover, sensitive data can also leave your network this way. To address this, you should implement controls over the types of peripherals users can bring and connect (if any). The following sections look at the biggest culprits.


USB Devices

The use of any type of USB device (thumb drives, external hard drives, network interfaces, and so on) should be strictly controlled, and in some cases prohibited altogether. Granular control of this issue is possible thanks to Windows Group Policy (discussed earlier).

Some organizations choose to allow certain types of USB storage devices while requiring that the devices be encrypted before they can be used. It is also possible to allow some but not all users to use these devices, and it is even possible to combine digital rights management features with the policy to prohibit certain types of information from being copied to these devices.

For example, with Group Policy in Windows, you can use a number of policies to control the use of USB devices. Figure 6-4 shows a default domain policy to disallow the use of all removable storage. As you see, there are many other less drastic settings as well.


Figure 6-4 Controlling the Use of USB Devices


Wireless Devices

Wireless technologies also provide openings for malware and other problems. In some cases, they allow unauthenticated access to the network, and in others they simply put information on personal devices at risk. Let's look at some of these vulnerabilities.


Bluetooth

Bluetooth is a wireless technology that is used to create personal area networks (PANs): short-range connections between devices and peripherals, such as headphones. It operates in the 2.4 GHz frequency band at speeds of 1 to 3 Mbps and over a distance of up to 10 meters.

Several attacks can take advantage of Bluetooth technology. With Bluejacking, an unsolicited message is sent to a Bluetooth-enabled device, often for the purpose of adding a business card to the victim’s contact list. This type of attack can be prevented by placing the device in non-discoverable mode.

Bluesnarfing involves unauthorized access to a device using the Bluetooth connection. In this case, the attacker is trying to access information on the device rather than send messages to the device.

Use of Bluetooth can be controlled, and such control should be considered in high-security environments.

Increasingly, organizations are being pushed to allow corporate network access to personal mobile devices. This creates a nightmare for security administrators. Mobile device management (MDM) solutions attempt to secure these devices. These solutions include a server component, which sends management commands to the devices. There are a number of open specifications, such as Open Mobile Alliance (OMA) Device Management, but there is no real standard as yet. Among the technologies these solutions may control are Bluetooth settings and wireless settings.


NFC

Near field communication (NFC) is a set of communication protocols that allow two electronic devices, one of which is usually a mobile device, to establish communication when they are within 2 inches of each other. NFC-enabled devices can be provided with apps to read electronic tags or make payments when connected to an NFC-compliant apparatus. NFC capability is available in mobile devices such as smartphones.

NFC presents many security vulnerabilities, among them eavesdropping, data corruption and manipulation, and interception attacks. Physical theft of a device makes purchases from the phone possible. Therefore, organizations may want to forbid this functionality in company-owned smartphones or those that are allowed access to the company network through a BYOD (bring your own device) initiative.


IrDA

The Infrared Data Association (IrDA) provides specifications for infrared (IR) communications. Infrared is a short-distance wireless process that uses light (in this case, infrared light) rather than radio waves. It is used for short connections between devices that both have infrared ports. IR, which operates at speeds up to 4 Mbps and over distances of up to 5 meters, requires a direct line of sight between the devices.

There is one infrared mode or protocol that can introduce security issues. The IrTran-P (image transfer) protocol is used in digital cameras and other digital image capture devices. All incoming files sent over IrTran-P are automatically accepted. Because incoming files might contain harmful programs, users should ensure that the files originate from a trustworthy source.


RF

Radio frequency (RF) technologies differ in the frequency used and in the range over which they can broadcast. From an enterprise perspective, the technologies of most concern are 802.11 and radio frequency identification (RFID). These two widely used technologies are discussed in the following sections.


802.11

Before we can discuss 802.11 wireless, which has come to be known as wireless LAN (WLAN) technology, we need to discuss the components and the structure of a WLAN. The following sections cover basic terms and concepts.

Access Point

An access point (AP) is a wireless transmitter and receiver that hooks into the wired portion of the network and provides an access point to this network for wireless devices. In some cases APs are simply wireless switches, and in other cases they are also routers. Early APs were devices with all the functionality built into each device. These “fat,” or intelligent, APs are increasingly being replaced with “thin” APs that are really only antennas that hook back into a central system called a controller.


SSID

A service set identifier (SSID) is a name or value assigned to distinguish one WLAN from other WLANs. The SSID can either be broadcast by the AP, as is done with a free hot spot, or it can be hidden.

Infrastructure Mode Versus Ad Hoc Mode

In most cases a WLAN includes at least one AP. When an AP is present, the WLAN is operating in Infrastructure mode. In this mode, all transmissions between stations go through the AP, and no direct communication between stations occurs. In Ad Hoc mode, there is no AP, and the stations communicate directly with one another.


WLAN Standards

The original 802.11 wireless standard has been amended a number of times to add features and functionality. This section discusses these amendments, which are sometimes referred to as standards, although they really are amendments to the original standard. The original 802.11 standard specified the use of either frequency-hopping spread spectrum (FHSS) or direct-sequence spread spectrum (DSSS) and supported operations in the 2.4 GHz frequency range at speeds of 1 Mbps and 2 Mbps.


802.11a

The first amendment to the standard was 802.11a, which called for the use of orthogonal frequency-division multiplexing (OFDM). Because OFDM required hardware upgrades to existing equipment, this amendment saw limited adoption for some time. It operates in the 5 GHz frequency band and, by using OFDM, supports speeds up to 54 Mbps.


802.11b

The 802.11b amendment dropped support for FHSS and increased the maximum speed to 11 Mbps. It was widely adopted because it operates in the same frequency band as the original 802.11, is backward compatible with it, and can coexist with it in the same WLAN.


802.11f

The 802.11f amendment addressed problems introduced when wireless clients roam from one AP to another. With such roaming, the station must reauthenticate with the new AP, which in some cases introduced a delay that would break the application connection. This amendment improves the sharing of authentication information between APs.


802.11g

The 802.11g amendment added support for OFDM, which made it capable of 54 Mbps. 802.11g also operates in the 2.4 GHz frequency band, so it is backward compatible with both 802.11 and 802.11b. Although 802.11g is just as fast as 802.11a, many people switched to 802.11a because the 5 GHz band (used by 802.11a) is much less crowded than the 2.4 GHz band (used by 802.11g).


802.11n

The 802.11n amendment uses several newer concepts to achieve rates up to 600 Mbps. It does this using channels that are 40 MHz wide and multiple antennas that allow for up to four spatial streams at a time (a feature called multiple input, multiple output [MIMO]). It can be used in both the 2.4 GHz and 5.0 GHz bands. However, it performs best in a pure 5.0 GHz network because it then does not need to implement the mechanisms that allow it to coexist with 802.11b and 802.11g devices but slow performance.


802.11ac

Operating in the 5 GHz band, 802.11ac provides multi-station throughput of at least 1 Gbps and single-link throughput of at least 500 Mbps. This is accomplished by extending the air-interface concepts embraced by 802.11n: a wider RF bandwidth (up to 160 MHz), more MIMO spatial streams (up to eight), downlink multi-user MIMO (MU-MIMO, serving up to four clients), and high-density modulation (up to 256-QAM).

WLAN Security

To safely implement 802.11 wireless technologies, you must understand all the methods used to secure a WLAN. The following sections discuss the most important measures, including some measures that, although they are often referred to as security measures, provide no real security.


WEP

Wired Equivalent Privacy (WEP) was the first security measure used with 802.11; it was specified as the algorithm in the original specification. WEP can be used both to authenticate a device and to encrypt the information between the AP and the device. The problem with WEP is that it implements the RC4 encryption algorithm in a way that allows a hacker to crack the encryption. It was also found that the mechanism designed to guarantee the integrity of the data (that is, that the data has not changed) was inadequate: data could be changed and the change go undetected.

WEP is implemented with a secret key or password that is configured on the AP, and any station needs that password in order to connect. Above and beyond the problem with the implementation of the RC4 algorithm, it is never good security for all devices to share the same password in this way.
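The shared-key problem is compounded by RC4 being a stream cipher: ciphertext is simply plaintext XORed with a keystream, so whenever the same key and initialization vector are reused (and WEP's short 24-bit IV space makes reuse inevitable), an eavesdropper can cancel the keystream entirely. The following sketch uses stand-in keystream bytes rather than real RC4 output to show the effect:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Any stream cipher reduces to XOR with a keystream; these example bytes
# stand in for RC4 output under one particular key + IV combination.
keystream = bytes(range(16))

p1 = b"TRANSFER $100   "
p2 = b"METER READING 42"
c1 = xor(p1, keystream)   # two packets encrypted
c2 = xor(p2, keystream)   # under the SAME key and IV

# The keystream cancels: an eavesdropper learns p1 XOR p2 without the key,
# and knowing either plaintext then reveals the other outright.
assert xor(c1, c2) == xor(p1, p2)
print(xor(xor(c1, c2), p1) == p2)
```

This is the structural flaw behind many practical WEP attacks: no amount of password secrecy helps once keystream reuse lets plaintext relationships leak.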


WPA

To address the widespread concern about the inadequacy of WEP, the Wi-Fi Alliance, a group of manufacturers that promotes interoperability, created an alternative mechanism called Wi-Fi Protected Access (WPA), designed to improve on WEP. There are four types of WPA, but before we look at them, let's first talk about how the original version improves over WEP.

First, WPA uses Temporal Key Integrity Protocol (TKIP) for encryption, which generates a new key for each packet. Second, unlike the integrity check used with WEP, the one used with WPA is able to detect changes to the data: WPA uses a message integrity check algorithm called Michael to verify the integrity of the packets.

There are two versions of WPA, as discussed in the following sections. Some legacy devices might support only WPA. You should always check with a device’s manufacturer to find out whether a security patch has been released that allows for WPA2 support.


WPA2

Wi-Fi Protected Access 2 (WPA2) is an improvement over WPA. WPA2 uses Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP), based on the Advanced Encryption Standard (AES), rather than TKIP. AES is a much stronger method and is required for Federal Information Processing Standards (FIPS)–compliant transmissions. There are also two versions of WPA2 (covered in the next section).

Personal Versus Enterprise

Both WPA and WPA2 come in Enterprise and Personal versions. The Enterprise versions require the use of an authentication server, typically a RADIUS server. The Personal versions do not and use passwords configured on the AP and the stations. Table 6-4 provides a quick overview of WPA and WPA2.


Table 6-4 WPA and WPA2

Version            Access Control
WPA Personal       Preshared key
WPA Enterprise     802.1X (RADIUS)
WPA2 Personal      Preshared key
WPA2 Enterprise    802.1X (RADIUS)



SSID Broadcast

SSID broadcast is automatically turned on for most wireless APs. This feature can be disabled. When the SSID is hidden, a wireless station has to be configured with a profile that includes the SSID in order for users to connect. Although some view hiding the SSID as a security measure, it is not effective: hiding the SSID removes it only from one frame type, the beacon frame, while it still appears in other frame types and can easily be learned by sniffing the wireless network.

MAC Filter

Another commonly discussed security measure is to create a MAC address filter list of allowed MAC addresses on the AP. When this is done, only the devices with MAC addresses on the list can make a connection to the AP. Although on the surface this might seem like a good security measure, in fact a hacker can easily use a sniffer to learn the MAC addresses of devices that have successfully authenticated. Then, by changing the MAC address on her device to one that is on the list, the hacker can gain entry.
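The weakness is easy to see when the filter's logic is written out: it checks only the address a frame claims to come from, so a sniffed-and-cloned address passes. A short sketch (addresses hypothetical):

```python
# A MAC allow list only checks the address a frame claims to come from.
# All addresses below are hypothetical.

allowed_macs = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def filter_permits(claimed_mac: str) -> bool:
    """Permit a connection only if the claimed source MAC is on the list."""
    return claimed_mac.lower() in allowed_macs

print(filter_permits("aa:bb:cc:dd:ee:ff"))  # attacker's real MAC: denied
print(filter_permits("00:1a:2b:3c:4d:5e")) # same attacker after cloning a sniffed MAC
```

Because the claimed address is attacker-controlled and trivially changed in software, MAC filtering raises the bar only against casual connections, not deliberate attacks.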

MAC filters can also be configured to deny access to certain devices. The limiting factor in this method is that only the devices with the denied MAC addresses are specifically denied access. All other connections are allowed.

Open System Authentication

Open System Authentication (OSA) is the default authentication method used in 802.11 networks that use WEP. The authentication exchange consists only of the station's ID in the request and the AP's authentication response. Even when OSA is used with WEP, authentication management frames are sent in cleartext because WEP encrypts only data. Therefore, OSA is not secure.

Shared Key Authentication

Shared Key Authentication (SKA) uses WEP and a shared secret key for authentication. The AP sends challenge text to the client, the client encrypts the challenge text with WEP using the shared secret key, and the client then returns the encrypted challenge text to the wireless AP.

Another implementation of SKA is WPA-PSK. While it uses a shared key (as in WEP), it is more secure in that it uses TKIP to continually change the key automatically.
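For reference, WPA/WPA2 Personal derives its 256-bit pairwise master key from the passphrase and the SSID using PBKDF2-HMAC-SHA1 with 4,096 iterations. This can be sketched with Python's standard library (the passphrase and SSIDs are hypothetical):

```python
import hashlib

def wpa_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit pairwise master key used by WPA/WPA2 Personal:
    PBKDF2-HMAC-SHA1 over the passphrase, salted with the SSID,
    4096 iterations, 32-byte output."""
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32
    )

# The same passphrase yields a different key per SSID (the SSID is the
# salt), but every station on one SSID still shares the same key.
key = wpa_psk("correct horse battery", "OfficeWLAN")
print(len(key), wpa_psk("correct horse battery", "GuestWLAN") != key)
```

Salting with the SSID blocks generic precomputed dictionaries, but a weak passphrase is still subject to offline guessing once a handshake is captured, which is why the Enterprise versions (per-user 802.1X credentials) are preferred in high-security environments.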


RFID

An increasingly popular method of tracking physical assets is to tag them with radio frequency identification (RFID) chips, which allows the location of an asset to be tracked at any time. An RFID tag embeds information that can be read wirelessly from some distance. RFID involves two main components:

  • RFID reader: This device has an antenna and an interface to a computer.

  • Transponder: This is the tag on the device that transmits its presence wirelessly.

The reader receives instructions from the user through the software on the computer attached to the reader. The reader then transmits signals that wake up, or energize, the transponder on the device. The device responds wirelessly, allowing the reader to determine the location of the device and display that location to the user on the computer.

The tags can be one of two types: passive or active. Active tags have batteries, whereas passive tags receive their energy from the reader when the reader interrogates the device. As you would expect, passive tags are less expensive but have a range of only a few meters, whereas active tags are more expensive but can transmit up to 100 meters.

RFID has some drawbacks: The tag signal can be read by any reader in range, multiple readers in an area can interfere with one another, and multiple devices can interfere with one another when responding. In addition, given the distance limitations, when a stolen item is a certain distance away, you lose the ability to track it. Therefore, RFID technology should only be a part of a larger program that includes strong physical security.

Drive Mounting

Drive mounting makes a drive available to the operating system and requires the operating system to recognize the media format. Drive mounting occurs automatically in some systems as soon as the drive is connected. The dangers in allowing the connection or mounting of external drives are the same dangers presented by allowing USB drives: data leaks and the introduction of malware.

Drive mounting can be prevented by disabling and/or preventing the use of the ports to which the external devices are connected. While automatic drive mounting has the advantage of making life easier for the user, it has the disadvantage of making the introduction of malware possible.

Drive Mapping

Drive mapping is a process in which an external storage location is mapped, or connected, to a drive letter on the local computer, making the remote drive appear to be a local drive. Drive mapping is convenient in that the drive can be reconnected automatically every time the computer joins a network from which the remote share is reachable.

Drive mapping is another operation that makes life easier for users but creates opportunities for those with ill intent. These mappings could be used to access drives with sensitive information. The decision to use drive mapping must include a conversation that addresses this trade-off.


Webcam

Some malware can take control of a webcam and spy on the user. Unfortunately, this also extends to IP cameras, which are often deployed as security cameras. Prohibiting the use of webcams is, therefore, a consideration. Webcams also present the danger of insiders photographing sensitive information. In scenarios where prohibiting the use of these devices is not possible, they should be physically secured using covers when not in use.

Recording Mic

There is also malware that can enable a device's recording mic, which could allow eavesdropping on meetings and other sensitive conversations. This malware is especially common on Android devices. Prohibiting the use of these devices might be advised. In scenarios where prohibiting their use is not possible, they should be physically disabled or secured when not in use.

Audio Output

Malicious individuals can also interfere with audio output. For example, by using a software-defined radio (SDR) capable of monitoring wireless transmissions, it is possible to intercept a home security system's unencrypted wireless communication with the sensors around the home. A hacker can take advantage of this capability and send his own signals to the main controls, suppressing the audio output so that the alarm never sounds.

SD Port

Just as USB devices can be used to introduce malware or exfiltrate data from a network, so can SD memory cards. As many laptops (and some desktops as well) now come with these ports, organizations may want to use the same approach to this issue as with USB devices: Prevent their use through the application of a Group Policy that is refreshed at regular intervals. In scenarios where prohibiting the use of these devices is not possible, they should be physically secured.


HDMI Port

High-Definition Multimedia Interface (HDMI) supports Ethernet, so someone who hacks into a smart TV can gain control of other devices via the network interface it supports. For example, Universal Plug and Play (UPnP) is known to be especially vulnerable to attack. Unneeded HDMI ports should be disabled.

File and Disk Encryption

While largely the same in concept, file encryption and disk encryption differ from one another. Disk encryption typically occurs at the hardware level, whereas file encryption is a software process. Another difference is that disk encryption protects data when the device is off, while file encryption provides security while the device is on. The following sections look at both types.


Full Disk Encryption

While it can be helpful to control network access to devices, in many cases devices such as laptops, tablets, and smartphones leave your network, leaving behind all the measures you have taken to protect the network. There is also a risk of these devices being stolen or lost. For these situations, the best measure to take is full disk encryption.

The best implementation of full disk encryption requires and makes use of a Trusted Platform Module (TPM) chip. A TPM chip is a security chip installed on a computer’s motherboard that is responsible for protecting symmetric and asymmetric keys, hashes, and digital certificates. This chip provides services to protect passwords and encrypt drives and digital rights, making it much harder for attackers to gain access to the computers that have TPM chips enabled.

Firmware Updates

Firmware includes any type of instruction stored in non-volatile memory devices such as read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), or flash memory. BIOS and UEFI code are the most common examples of firmware. Computer BIOS doesn't go bad; however, it can become out of date or contain bugs. In the case of a bug, an upgrade will correct the problem. An upgrade may also be indicated when the BIOS doesn't support a component you would like to install, such as a larger hard drive or a different type of processor.

Today’s BIOS is typically written to an EEPROM chip and can be updated through the use of software. Each manufacturer has its own method for accomplishing this. Check out the manufacturer’s documentation for complete details. Regardless of the exact procedure used, the update process is referred to as flashing the BIOS. It means the old instructions are erased from the EEPROM chip, and the new instructions are written to the chip.

Firmware can be updated by using an update utility from the motherboard vendor. In many cases, the steps are as follows:

Step 1. Download the update file to a flash drive.

Step 2. Insert the flash drive and reboot the machine.

Step 3. Use the specified key sequence to enter the UEFI/BIOS setup.

Step 4. If necessary, disable secure boot.

Step 5. Save the changes and reboot again.

Step 6. Enter the UEFI/BIOS setup again.

Step 7. Choose the boot options and boot from the flash drive.

Step 8. Follow the specific directions with the update to locate the upgrade file on the flash drive.

Step 9. Execute the file (usually by typing flash).

Step 10. While the update is completing, ensure that you maintain power to the device.

Boot Loader Protections


When a system is booting up, there is a window of opportunity for breaking into the system. For example, when physical access is possible, you could set a system to boot to other boot media and then access the hard drive. For this reason, boot loader protection mechanisms should be utilized, as discussed in the following sections.

Secure Boot

Secure boot is a term that applies to several technologies that verify the integrity of boot components before allowing them to run. Its implementations include Windows Secure Boot, measured launch, and Integrity Measurement Architecture (IMA).

Figure 6-5 shows the three main actions related to Secure Boot in Windows:


Figure 6-5 Secure Boot

  1. The firmware verifies all UEFI executable files and the OS loader to be sure they are trusted.

  2. Windows Boot Components verifies the signature on each component to be loaded. Any non-trusted components will not be loaded and will trigger remediation.

  3. The signatures on all boot-critical drivers are checked as part of secure boot verification in Winload (Windows Boot Loader) and by the Early Launch Anti-Malware driver.

The disadvantage is that a system that ships with UEFI Secure Boot enabled will refuse to load any operating system that is not signed with a trusted key. Unless Secure Boot is disabled, this prevents installing many other operating systems or running live Linux media.

Measured Launch

A measured launch is a launch in which the software and platform components have been identified, or “measured,” using cryptographic techniques. The resulting values are used at each boot to verify trust in those components. A measured launch is designed to prevent attacks on these components (system and BIOS code) or at least to identify when these components have been compromised. It is part of the Intel Trusted Execution Technology (Intel TXT). TXT functionality is leveraged by software vendors including HyTrust, PrivateCore, Citrix, and VMware.

An application of measured launch is Microsoft's Measured Boot, found in Windows 10 and Windows Server 2016. It creates a detailed log of all components that loaded before the anti-malware driver. This log can be used both to identify malware on the computer and to preserve evidence of boot component tampering.

One possible disadvantage of measured launch is potential slowing of the boot process.

Integrity Measurement Architecture

Another approach that attempts to create and measure the runtime environment is an open source trusted computing component called Integrity Measurement Architecture (IMA). IMA creates a list of components and anchors the list to the TPM chip. It can use the list to attest to the system’s runtime integrity. Anchoring the list to the TPM chip in hardware prevents its compromise.
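The measurement mechanism that IMA and measured launch rely on can be sketched as a TPM-style "extend" operation: a platform configuration register (PCR) is never overwritten, only folded forward as the hash of its old value concatenated with each new measurement. A simplified sketch, with stand-in byte strings for the component images:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the register becomes H(old_value || measurement),
    so the final value depends on every component and on their order."""
    return hashlib.sha256(pcr + measurement).digest()

# Measure a boot chain: hash each component and fold it into the register.
components = [b"firmware", b"bootloader", b"kernel"]  # stand-in images
pcr = bytes(32)  # registers start at zero
for image in components:
    pcr = pcr_extend(pcr, hashlib.sha256(image).digest())

# Changing, removing, or reordering any component changes the final value,
# which is what lets IMA and measured launch detect tampering at boot.
tampered = pcr_extend(bytes(32), hashlib.sha256(b"bootloader").digest())
print(pcr != tampered)
```

Because the register can only be extended, not set, malware that loads after measurement cannot rewrite the record of what booted before it; it can only add further measurements.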


BIOS/UEFI

Unified Extensible Firmware Interface (UEFI) is an alternative to BIOS for interfacing between the software and the firmware of a system. Most images that support UEFI also support legacy BIOS services. Some of its advantages are:

  • Ability to boot from large disks (over 2 TB) with a GUID partition table

  • CPU-independent architecture

  • CPU-independent drivers

  • Flexible pre-OS environment, including network capability

  • Modular design

UEFI operates between the OS layer and the firmware layer, as shown in Figure 6-6.


Figure 6-6 UEFI

Attestation Services

Attestation services allow an authorized party to detect changes to an operating system. Attestation services involve generating a certificate for the hardware that states what software is currently running. The computer can use this certificate to attest that unaltered software is currently executing. Windows operating systems have been capable of remote attestation since Windows 8.


TPM

TPM chips are discussed earlier in this chapter. Two particularly popular uses of TPM are binding and sealing. Binding "binds" the hard drive through encryption to a particular computer. Because the decryption key is stored in the TPM chip, the hard drive's contents are available only when the drive is connected to the original computer. Keep in mind, though, that all the contents are at risk if the TPM chip fails and no backup of the key exists.

Sealing, on the other hand, “seals” the system state to a particular hardware and software configuration. This prevents attackers from making any changes to the system. However, it can also make installing a new piece of hardware or a new operating system much harder. The system can only boot after the TPM chip verifies system integrity by comparing the original computed hash value of the system’s configuration to the hash value of its configuration at boot time.
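The comparison at the heart of sealing can be sketched as a hash check over the measured platform state; the configuration fields below are hypothetical stand-ins for the values a real TPM records:

```python
import hashlib

def config_hash(components: dict) -> bytes:
    """Hash a hardware/software configuration snapshot in a stable order.
    The component names and values are hypothetical stand-ins."""
    snapshot = "|".join(f"{k}={v}" for k, v in sorted(components.items()))
    return hashlib.sha256(snapshot.encode()).digest()

# Value recorded at sealing time, when the system state was trusted.
sealed_to = config_hash({"bios": "1.07", "disk": "SN123", "os_loader": "10.0"})

def unseal_allowed(current: dict) -> bool:
    """Sealing: release secrets (or allow boot) only if the configuration
    hash computed now matches the one recorded when the state was sealed."""
    return config_hash(current) == sealed_to

print(unseal_allowed({"bios": "1.07", "disk": "SN123", "os_loader": "10.0"}))
print(unseal_allowed({"bios": "1.08", "disk": "SN123", "os_loader": "10.0"}))
```

This also illustrates why sealing makes legitimate hardware or OS changes harder: even an authorized upgrade changes the hash, so the sealed state must be deliberately re-established afterward.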

A TPM chip contains both persistent (static) memory, which retains important information when the computer is turned off, and versatile memory:

  • Persistent memory holds the endorsement key (EK) and the storage root key (SRK).

  • Versatile memory holds the attestation identity keys (AIKs), the platform configuration registers (PCRs), and storage keys.


BitLocker and BitLocker to Go by Microsoft are well-known full disk encryption products. The former is used to encrypt hard drives, including operating system drives, and the latter is used to encrypt information on portable devices such as USB devices. However, there are other options. Additional whole disk encryption products include:

  • PGP Whole Disk Encryption

  • SecurStar DriveCrypt

  • Sophos SafeGuard

  • MobileArmor Data Armor

Virtual TPM

A virtual TPM (VTPM) chip is a software object that performs the functions of a TPM chip. It is a system that enables trusted computing for an unlimited number of virtual machines on a single hardware platform. A VTPM makes secure storage and cryptographic functions available to operating systems and applications running in virtual machines.

Figure 6-7 shows one possible implementation of VTPM by IBM. The TPM chip in the host system is replaced by a more powerful VTPM (PCIXCC-vTPM). The virtual machine (VM) named Dom-TPM is a VM whose only purpose is to proxy for the PCIXCC-vTPM and make TPM instances available to all other VMs running on the system.

Figure 6-7 vTPM Possible Solution 1

Another possible approach suggested by IBM is to run VTPMs on each VM, as shown in Figure 6-8. In this case, the VM named Dom-TPM talks to the physical TPM chip in the host and maintains separate TPM instances for each VM.

Figure 6-8 vTPM Possible Solution 2

Vulnerabilities Associated with Hardware

While security professionals devote a lot of time to chasing software vulnerabilities, they often forget about hardware vulnerabilities. Remember that one of the most well-known hacks—the Target hack—took advantage of a hardware encryption flaw. Another example of a hardware vulnerability is the hacking of a car system and the subsequent takeover of the control system. Hackers have embraced hardware attacks because of the difficulty in detecting them, but the compromising of hardware goes beyond backdoors. Vulnerabilities also include the following:

  • Backdoors that affect embedded RFID chips and memory

  • Eavesdropping by gaining access to protected memory without opening other hardware

  • Faults induced to interrupt normal behavior

  • Hardware modification, such as tampering with hardware or installing jailbroken software

  • Backdoors or hidden methods for bypassing normal computer authentication systems

  • Counterfeit products made to gain malicious access to systems

The only assured way of preventing such vulnerabilities is to tightly control the manufacturing process for all products. The DoD uses the Trusted Foundry program to validate all vendors in this regard. No longer can organizations simply purchase the cheapest devices from Asia; they must now begin to grapple with the creation of their own programs that emulate the Trusted Foundry program.

Terminal Services/Application Delivery Services

Just as operating systems can be provided on demand with technologies like virtual desktop infrastructure (VDI), applications can also be provided to users from a central location. Two models can be used to implement this:

  • Server-based application virtualization (terminal services): In server-based application virtualization, an application runs on servers. Users receive the application environment display through a remote client protocol, such as Microsoft Remote Desktop Protocol (RDP) or Citrix Independent Computing Architecture (ICA). Examples of terminal services include Remote Desktop Services and Citrix Presentation Server.

  • Client-based application virtualization (application streaming): In client-based application virtualization, the target application is packaged and streamed to the client PC. It has its own application computing environment that is isolated from the client OS and other applications. A representative example is Microsoft Application Virtualization (App-V).

Figure 6-9 compares these two approaches.

Figure 6-9 Application Streaming and Terminal Services

When using either of these technologies, you should force the use of encryption, set limits on connection lifetime, and strictly control access to the server. These measures help prevent eavesdropping on any sensitive information, especially during the authentication process.

Exam Preparation Tasks

As mentioned in the section “How to Use This Book” in the Introduction, you have a couple of choices for exam preparation: the exercises here and the practice exams in the Pearson IT Certification test engine.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 6-5 lists these key topics.


Table 6-5 Key Topics for Chapter 6

  • Table 6-1: Protection profiles

  • Guidelines for auditing

  • Table 6-2: Windows audit policies

  • Figure 6-3: Software restriction

  • Table 6-3: Common UNIX/Linux-based shells

  • Manual patch management

  • WLAN standards

  • Table 6-4: WPA and WPA2

  • Boot loader protections

  • TPM components

  • VDI models

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

access control lists (ACLs)

attestation identity key (AIK)




client-based application virtualization

data interfaces

data leakage

data loss prevention (DLP) software

definition files

endorsement key (EK)

host-based firewalls

host-based IDS

imprecise methods

Integrity Measurement Architecture (IMA)

intrusion detection system (IDS)

management interface

measured boot (launch)

Orange Book


platform configuration register (PCR)

precise methods


Secure Boot

server-based application virtualization

software patches

storage keys

storage root key (SRK)

trusted operating system

Trusted Platform Module (TPM) chip

Unified Extensible Firmware Interface (UEFI)

virtual desktop infrastructure (VDI)

virtual Trusted Platform Module (VTPM)

Review Questions

1. Which organization first brought forward the idea of a trusted operating system?

  • IEEE



  • IANA

2. Which of the following is not a safe computing practice?

  • Perform daily scans.

  • Enable autorun.

  • Don’t click on email links or attachments.

  • Keep anti-malware applications current.

3. Which implementation of DLP is installed at network egress points?

  • imprecise

  • precise

  • network

  • endpoint

4. The following is an example of what type of rule set?

iptables -A INPUT -i eth1 -s -j DROP
iptables -A INPUT -i eth1 -s -j DROP
iptables -A INPUT -i eth1 -s 172. -j DROP

  • iptables

  • ipchains

  • ipconfig

  • ipcmp

5. Which of the following is not a part of hardening an OS?

  • Unnecessary applications should be removed.

  • Unnecessary services should be disabled.

  • Unrequired ports should be opened.

  • External storage devices and media should be tightly controlled.

6. ACLs are susceptible to what type of attack?

  • MAC spoofing

  • IP spoofing

  • whaling

  • DNS poisoning

7. Which of the following is used to manage a device using Telnet?

  • data interface

  • management interface

  • USB

  • Bluetooth

8. Which attack involves unauthorized access to a device using a Bluetooth connection?

  • Bluesnarfing

  • Bluejacking

  • Bluefishing

  • Bluefilling

9. What type of chip makes full drive encryption possible?

  • out-of-band

  • TPM

  • clipper

  • sealed

10. What services allow for changes to an operating system to be detected by an authorized party?

  • sealing

  • attestation

  • verification

  • bonding
