Images

System Hardening and Baselines

People can have the Model T in any color—so long as it’s black.

—HENRY FORD

Images

In this chapter, you will learn how to

Images   Harden operating systems and network operating systems

Images   Implement host-level security

Images   Harden applications

Images   Establish group policies

Images   Secure alternative environments (SCADA, real-time, and so on)

The many uses for systems and operating systems require flexible components that allow users to design, configure, and implement the systems they need. Yet it is this very flexibility that causes some of the biggest weaknesses in computer systems. Computer and operating system developers often build and deliver systems in “default” modes that do little to secure the systems from external attacks. From the view of the developer, this is the most efficient mode of delivery, as there is no way they can anticipate what every user in every situation will need. From the user’s view, however, this means a good deal of effort must be put into protecting and securing the system before it is ever placed into service. The process of securing and preparing a system for the production environment is called hardening. Unfortunately, many users don’t understand the steps necessary to secure their systems effectively, resulting in hundreds of compromised systems every day.

Hardening systems, servers, workstations, networks, and applications is a process of defining the required uses and needs and then aligning security controls so that the system permits only its desired functionality. Once this is determined, you have a system baseline against which you can compare changes over the course of the system’s lifecycle.

Images Overview of Baselines

The process of establishing a system’s operational state is called baselining, and the resulting product is a system baseline that describes the capabilities of a software system. Once the process has been completed for a particular hardware and software combination, any similar systems can be configured with the same baseline to achieve the same level of protection. Uniform baselines are critical in large-scale operations, because maintaining separate configurations and security levels for hundreds or thousands of systems is far too costly.

Constructing a baseline or hardened system is similar for servers, workstations, and network operating systems (NOSs). The specifics may vary, but the objectives are the same.

Images Hardware/Firmware Security

Hardware, in the form of servers, workstations, and even mobile devices, can represent a weakness or vulnerability in the security system associated with an enterprise. While hardware can be easily replaced if lost or stolen, the information that is contained by the devices complicates the security picture. Data or information can be safeguarded from loss by backups, but this does little in the way of protecting it from disclosure to an unauthorized party. There are software measures that can assist in the form of encryption, but these also have drawbacks in the form of scalability and key distribution.

FDE/SED

Full drive encryption (FDE) and self-encrypting drives (SED) are methods of implementing cryptographic protection on hard drives and other similar storage media with the express purpose of protecting the data even if the drive is removed from the machine. Portable machines, such as laptops, have a physical security weakness in that they are relatively easy to steal and then can be attacked offline at the attacker’s leisure. The use of modern cryptography, coupled with hardware protection of the keys, makes this vector of attack much more difficult. In essence, both of these methods offer a transparent, seamless manner of encrypting the entire hard drive using keys that are only available to someone who can properly log into the machine.

TPM

The Trusted Platform Module (TPM) is a hardware solution on the motherboard, one that assists with key generation and storage as well as random number generation. When the encryption keys are stored in the TPM, they are not accessible via normal software channels and are physically separated from the hard drive or other encrypted data locations. This makes the TPM a more secure solution than storing the keys on the machine’s normal storage.

Hardware Root of Trust

A hardware root of trust is the concept that if one has trust in a source’s specific security functions, this layer can be used to promote security to higher layers of a system. Because roots of trust are inherently trusted, they must be secure by design. This is usually accomplished by keeping them small and limiting their functionality to a few specific tasks. Many roots of trust are implemented in hardware that is isolated from the OS and the rest of the system so that malware cannot tamper with the functions they provide. Examples of roots of trust include TPM chips in computers and Apple’s Secure Enclave coprocessor in its iPhones and iPads. Apple also uses a signed Boot ROM mechanism for all software loading.

HSM

A hardware security module (HSM) is a device used to manage or store encryption keys. It can also assist in cryptographic operations such as encryption, hashing, and the application of digital signatures. HSMs are typically peripheral devices, connected via USB or a network connection. HSMs have tamper-protection mechanisms to prevent physical access to the secrets they guard. Because of their dedicated design, they can offer significant performance advantages over general-purpose computers when it comes to cryptographic operations. When an enterprise has significant levels of cryptographic operations, HSMs can provide throughput efficiencies.

Images

Storing private keys anywhere on a networked system is a recipe for loss. HSMs are designed to allow the use of a key without exposing it to the wide range of host-based threats.

UEFI/BIOS

Basic Input/Output System (BIOS) is the firmware that a computer system uses as a connection between the actual hardware and the operating system. BIOS is typically stored on nonvolatile flash memory, which allows for updates, yet persists when the machine is powered off. The purpose behind BIOS is to initialize and test the interfaces to the actual hardware in a system. Once the system is running, the BIOS handles low-level access to the CPU, memory, and hardware devices, presenting a common interface for the OS to connect to. This allows a single OS installation to work with hardware from multiple manufacturers and in differing configurations.

Unified Extensible Firmware Interface (UEFI) is the current replacement for BIOS. UEFI offers a significant modernization over the decades-old BIOS, including dealing with modern peripherals such as high-capacity storage and high-bandwidth communications. UEFI also has more security designed into it, including provisions for secure booting.

Secure Boot and Attestation

One of the challenges in securing an OS is the myriad of drivers and other add-ons that hook into the OS and provide specific added functionality. If these additional programs are not properly vetted before installation, this pathway can provide a means by which malicious software can attack a machine. And because these attacks can occur at boot time, at a level below security applications such as antivirus software, they can be very difficult to detect and defeat. UEFI offers a solution to this problem, called Secure Boot, which is a mode that, when enabled, only allows signed drivers and OS loaders to be invoked. Secure Boot requires specific setup steps, but once enabled, it blocks malware that attempts to alter the boot process. Secure Boot enables the attestation that the drivers and OS loaders being used have not changed since they were approved for use. Secure Boot is supported by Microsoft Windows and all major versions of Linux.

Integrity Measurement

Integrity measurement is the measurement and identification of changes to a specific system relative to an expected value. Whether it’s the simple changing of data as measured by a hash value or the TPM-based integrity measurement of the system boot process and attestation of trust, the concept is the same: take a known value, store a hash or other keyed value, and then, at the time of concern, recalculate and compare values.

In the case of TPM-mediated systems, where the TPM chip provides a hardware-based root of trust anchor, the TPM system is specifically designed to calculate hashes of a system and store them in a Platform Configuration Register (PCR). This register can be read later and compared to a known, or expected, value, and if they differ, there is a trust violation. Certain BIOSs, UEFIs, and boot loaders can all work with the TPM chip in this manner, providing a means of establishing a trust chain during system boot.
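
To make the extend-and-compare idea concrete, the following minimal Python sketch emulates the PCR-style behavior described above: each measured component is hashed and folded into a register, and the final value is compared against a stored reference. The boot component paths and the expected value are hypothetical placeholders, not an actual TPM interface.

import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """Emulate a PCR-style extend: new value = SHA-256(old value || measurement)."""
    return hashlib.sha256(register + measurement).digest()

def measure_boot_chain(components):
    """Hash each boot component in order, folding each measurement into the register."""
    register = b"\x00" * 32  # registers start zeroed at power-on
    for path in components:
        with open(path, "rb") as f:
            measurement = hashlib.sha256(f.read()).digest()
        register = extend(register, measurement)
    return register

# Hypothetical boot components and a previously recorded "known good" value.
BOOT_CHAIN = ["/boot/firmware.bin", "/boot/loader.efi", "/boot/kernel.img"]
EXPECTED = bytes.fromhex("00" * 32)  # placeholder for the stored reference value

if measure_boot_chain(BOOT_CHAIN) != EXPECTED:
    print("Trust violation: measured boot chain differs from the expected value.")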

Images

Understand how TPM, UEFI, Secure Boot, hardware root of trust, and integrity measurement work together to solve a specific security issue.

Firmware Version Control

Firmware is present in virtually every system, but in many embedded systems it plays an even more critical role because it may also contain the OS and application. Maintaining strict control measures over the changing of firmware is essential to ensuring the authenticity of the software on a system. Firmware updates require extreme quality measures to ensure that errors are not introduced as part of an update process. Updating firmware, although only occasionally necessary, is a very sensitive event, because failure can lead to system malfunction. If an unauthorized party is able to change the firmware of a system, as demonstrated in an attack against ATMs, an adversary can gain complete functional control over a system.

EMI/EMP

Electromagnetic interference (EMI) is an electrical disturbance that affects an electrical circuit. This is due to either electromagnetic induction or radiation emitted from an external source, either of which can induce currents into the small circuits that make up computer systems and cause logic upsets. An electromagnetic pulse (EMP) is a burst of current in an electronic device resulting from a pulse of electromagnetic radiation. EMP can produce damaging current and voltage surges in today’s sensitive electronics. The main sources of EMP are industrial equipment on the same circuit, solar flares, and nuclear bursts high in the atmosphere.

It is important to shield computer systems from circuits with large industrial loads, such as motors. These power sources can have significant noise, including EMI and EMPs that will potentially damage computer equipment. Another source of EMI is fluorescent lights. Be sure any cabling that goes near fluorescent light fixtures is well shielded and grounded.

Supply Chain

Hardware and firmware security is ultimately dependent on the manufacturer for the root of trust. In today’s world of global manufacturing with global outsourcing, fully understanding what your manufacturing supply chain is and how it changes from device to device, and even between lots, is difficult because many details can be unknown. Who manufactured all the components of the device you are ordering? If you’re buying a new PC, where did the hard drive come from? Can the new PC come preloaded with malware? Yes, it has happened.

The supply chain for assembled equipment can be very tricky, because you have to worry not only about where you get the computer, but also about where the manufacturer gets the parts and the software, including who wrote the software and with what libraries. These can be very difficult issues to negotiate if you have strict rules concerning country of origin.

Images Operating System and Network Operating System Hardening

The operating system (OS) of a computer is the basic software that handles things such as input, output, display, memory management, and all the other highly detailed tasks required to support the user environment and associated applications. Most users are familiar with the Microsoft family of desktop operating systems: Windows Vista, Windows 7, Windows 8, and Windows 10. Indeed, the vast majority of home and business PCs run some version of a Microsoft operating system. Other users may be familiar with macOS, Solaris, or one of the many varieties of the UNIX/Linux operating system.

The Term Operating System

Operating system is the commonly accepted term for the software that provides the interface between computer hardware and the user. It is responsible for the management, coordination, and sharing of limited computer resources such as memory and disk space.

A network operating system (NOS) is an operating system that includes additional functions and capabilities to assist in connecting computers and devices, such as printers, to a local area network (LAN). Some of the more familiar network operating systems include Novell’s NetWare and PC Micro’s LANtastic. For most modern operating systems, including Windows Server, Solaris, and Linux, the terms operating system and network operating system are used interchangeably because they perform all the basic functions and provide enhanced capabilities for connecting to LANs. Network operating system can also apply to the operational software that controls managed switches and routers, such as Cisco’s IOS and Juniper’s Junos.

Protection Rings

Protection rings were devised in the Multics operating system in the 1960s to deal with security issues associated with time-sharing operations. Protection rings can be enforced by hardware, software, or a combination of the two, and they serve as a means of managing privilege in a hierarchical manner. Ring 0 is the level with the highest privilege and is the element that acts directly with the physical hardware (CPU and memory). Higher levels, with less privilege, must interact with adjoining rings through specific gates in a predefined manner. Use of rings separates elements such as applications from directly interfacing with the hardware without going through the OS and, specifically, the security kernel, as shown here.

Images

OS Security

The operating system itself is the foundation of system security. The operating system does this through the use of a security kernel. The security kernel is also called a reference monitor and is the component of the operating system that enforces the security policies of the operating system. The core of the OS is constructed so that all operations must pass through and be moderated by the security kernel, placing it in complete control over the enforcement of rules. Security kernels must exhibit some properties to be relied upon: they must offer complete mediation, as just discussed, and must be tamperproof and verifiable in operation. Because they are part of the OS and are in fact a piece of software, ensuring that security kernels are tamperproof and verifiable is a legitimate concern. Achieving assurance with respect to these attributes is a technical matter that is rooted in the actual construction of the OS and technically beyond the level of this book.

Data Execution Prevention

Data Execution Prevention (DEP) is a collection of hardware and software technologies to limit the ability of malware to execute in a system. Windows uses DEP to prevent code execution from data pages.

OS Types

Many different systems have the need for an operating system. Hardware in networks requires an operating system to perform the networking function. Servers and workstations require an OS to act as the interface between applications and the hardware. Specialized systems such as kiosks and appliances, both of which are forms of automated single-purpose systems, require an OS between the application software and hardware.

Network

Network components use a network operating system to provide the actual configuration and computation portion of networking. There are many vendors of networking equipment, and each has its own proprietary operating system. Cisco has the largest footprint with its IOS (for Internetworking Operating System). Juniper has Junos, which is built on a stripped-down FreeBSD core. As networking moves to software-defined networking (SDN), the concept of a network operating system will become more important and mainstream because it will become a major part of day-to-day operations in the IT enterprise.

Server

Servers require an operating system to bridge the gap between the server hardware and the applications that are being run. Currently, server OSs include Microsoft Windows Server, many flavors of Linux, and more and more VM/hypervisor environments. For performance reasons, Linux has a significant market share in the realm of server OSs, although Windows Server with its Active Directory technology has made significant inroads into market share.

Workstation

The OS on a workstation exists to provide a functional working space for a user to interact with the system and its various applications. Because of the high level of user interaction on workstations, it is very common to see Windows in this role. In large enterprises, the ability of Active Directory to manage users, configurations, and settings easily across the entire enterprise has given Windows client workstations an advantage over Linux.

Appliance

Appliances are standalone devices, wired into the network and designed to run an application to perform a specific function on traffic. These systems operate as headless servers, preconfigured with applications that run and perform a wide range of security services on the network traffic they see. For reasons of economics, portability, and functionality, the vast majority of appliances are built on top of a Linux-based system. As these are often customized distributions, keeping them patched becomes a vendor problem because this sort of work is outside the scope or ability of most IT people to properly manage.

Kiosk

Kiosks are standalone machines, typically operating a browser instance on top of a Windows OS. These machines are usually set up to automatically log in to a browser instance that is locked to a website providing all of the desired functionality. Kiosks are commonly used for interactive customer service applications, such as interactive information sites, menus, and so on. The OS on a kiosk needs to be lockable to minimal functionality, provide elements such as automatic login, and offer an easy way to construct the needed applications.

Mobile OS

Mobile devices began as phones with limited additional capabilities. But as the Internet and functionality spread to mobile devices, the capabilities of these devices have expanded as well. From smartphones to tablets, today’s mobile system is a computer, with virtually all the compute capability one could ask for—with a phone attached. The two main mobile OSs in the market today are Apple’s iOS and Google’s Android system.

Trusted Operating System

A trusted operating system is one that is designed to allow multilevel security in its operation. This is further defined by its ability to meet a series of criteria required by the U.S. government. Trusted OSs are expensive to create and maintain because any change must typically undergo a recertification process. The most common criteria used to define a trusted OS is the Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria, or CC), a harmonized set of security criteria recognized by many nations, including the United States, Canada, Great Britain, most of the EU countries, as well as others. Versions of Windows, Linux, mainframe OSs, and specialty OSs have been qualified to various Common Criteria levels.

Images

The term trusted operating system is used to refer to a system that has met a set of criteria and demonstrated correctness to meet requirements of multilevel security. The Common Criteria is one example of a standard used by government bodies to determine compliance to a level of security need.

Patch Management

Patch management is the process used to maintain systems in an up-to-date fashion, including all required patches. Every OS, from Linux to Windows, requires software updates, and each OS has different methods of assisting users in keeping their systems up to date. Microsoft, for example, typically makes updates available for download from its web site. While most administrators or technically proficient users may prefer to identify and download updates individually, Microsoft recognizes that nontechnical users prefer a simpler approach, which Microsoft has built into its operating systems. In Windows 7 forward, Microsoft provides an automated update functionality that will, once configured, locate any required updates, download them to your system, and even install the updates, if that is your preference.

In Windows 10 forward, Microsoft has adopted a new methodology treating the OS as a service and has dramatically updated its servicing model. Windows 10 now has a twice-per-year feature update release schedule, aiming for March and September, with an 18-month servicing timeline for each release. This model is called the Semi-Annual Channel model and is offered as a means of having a regular update/upgrade cycle of improvements over time for the software. For systems requiring longer term service, such as in embedded systems, Microsoft will offer a Long-Term Servicing Channel model. This model has less-frequent releases, expected every two to three years (with the next one for Windows expected in 2019). Each of these releases will be serviced for 10 years from the date of release.

How you patch a Linux system depends a great deal on the specific version in use and the patch being applied. In some cases, a patch will consist of a series of manual steps requiring the administrator to replace files, change permissions, and alter directories. In other cases, the patches are executable scripts or utilities that perform the patch actions automatically. Some Linux versions, such as Red Hat, have built-in utilities that handle the patching process. In those cases, the administrator downloads a specifically formatted file that the patching utility then processes to perform any modifications or updates that need to be made.

Regardless of the method you use to update the OS, it is critically important to keep systems up to date. New security advisories come out every day, and while a buffer overflow may be a “potential” problem today, it will almost certainly become a “definite” problem in the near future. Much like the steps taken to baseline and initially secure an OS, keeping every system patched and up to date is critical to protecting the system and the information it contains.

Vendors typically follow a hierarchy for software updates:

Images   Hotfix This term refers to a (usually) small software update designed to address a specific problem, such as a buffer overflow in an application that exposes the system to attacks. Hotfixes are typically developed in reaction to a discovered problem and are produced and released rather quickly.

Images   Patch This term refers to a more formal, larger software update that can address several or many software problems. Patches often contain enhancements or additional capabilities as well as fixes for known bugs. Patches are usually developed over a longer period of time.

Images   Service pack This refers to a large collection of patches and hotfixes rolled into a single, rather large package. Service packs are designed to bring a system up to the latest known-good level all at once, rather than requiring the user or system administrator to download dozens or hundreds of updates separately.

Disabling Unnecessary Ports and Services

An important management issue for running a secure system is to identify the specific needs of a system for its proper operation and to enable only items necessary for those functions. Disabling unnecessary ports and services prevents their use by unauthorized users, improves system throughput, and increases security. Any ports and connections that are not in use should be disabled.

Images

Disabling unnecessary ports and services is a simple way to improve system security. This minimalist setup is similar to the “implicit deny” philosophy and can significantly reduce an attack surface.

Just as we have a principle of least privilege, we should follow a similar track with least functionality on systems. A system should do what it is supposed to do, and only what it is supposed to do. Any additional functionality is an added attack surface for an adversary and offers no additional benefit to the enterprise.
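
As a simple illustration of auditing for least functionality, the following Python sketch checks which TCP ports on the local host accept connections and flags any that are not in an approved baseline. The approved port list and scan range are hypothetical and would be defined by your own policy; this is a sketch, not a replacement for a proper configuration review.

import socket

# Hypothetical policy: only these TCP ports should be listening on this host.
APPROVED_PORTS = {22, 443}

def open_tcp_ports(host="127.0.0.1", ports=range(1, 1025)):
    """Return the set of ports that accept a TCP connection on the given host."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                found.add(port)
    return found

unexpected = open_tcp_ports() - APPROVED_PORTS
if unexpected:
    print(f"Ports open but not in the approved baseline: {sorted(unexpected)}")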

Secure Configurations

Operating systems can be configured in a variety of manners—from completely open with lots of functionality, whether it is needed or not, to stripped to the services needed to perform a particular task. Operating system developers and manufacturers all share a common problem: they cannot possibly anticipate the many different configurations and variations that the user community will require from their products. So, rather than spending countless hours and funds attempting to meet every need, manufacturers provide a “default” installation for their products that usually contains the base OS and some more commonly desirable options, such as drivers, utilities, and enhancements. Because the OS could be used for any of a variety of purposes, and could be placed in any number of logical locations (LAN, DMZ, WAN, and so on), the manufacturer typically does little to nothing with regard to security. The manufacturer may provide some recommendations or simplified tools and settings to facilitate securing the system, but in general, end users are responsible for securing their own systems. Generally this involves removing unnecessary applications and utilities, disabling unneeded services, setting appropriate permissions on files, and updating the OS and application code to the latest version.

Images

Weak security configurations are a result of many different items, each specific to a particular set of components and operating conditions. The path to avoid weak configurations involves a combination of information sources. One is manufacturer recommendations, another is industry best practices, and the last is testing.

This process of securing an OS is called hardening, and it is intended to make the system more resistant to attack, much like armor or steel is hardened to make it less susceptible to breakage or damage. Each OS has its own approach to security, and although the process of hardening is generally the same, different steps must be taken to secure each OS. The process of securing and preparing an OS for the production environment is not trivial; it requires preparation and planning. Unfortunately, many users don’t understand the steps necessary to secure their systems effectively, resulting in hundreds of compromised systems every day.

Images

System hardening is the process of preparing and securing a system and involves the removal of all unnecessary software and services.

You must meet several key requirements to ensure that the system hardening processes described in this section achieve their security goals. These are OS independent and should be a normal part of all system maintenance operations:

Images   The base installation of all OS and application software comes from a trusted source and is verified as correct by using hash values.

Images   Machines are connected only to a completely trusted network during the installation, hardening, and update processes.

Images   The base installation includes all current patches and updates for both the OS and applications.

Images   Current backup images are taken after hardening and updates to facilitate system restoration to a known state.

These steps ensure that you know what is on the machine, can verify its authenticity, and have an established backup version.

Disable Default Accounts/Passwords

Because accounts are necessary for many systems to be established, default accounts with default passwords are a way of life in computing. Whether the account is for the OS or an application, this is a significant security vulnerability if not immediately addressed as part of setting up the system or installing the application. Disabling default accounts/passwords should be such a common practice that there should be no systems with this vulnerability. This is a simple task, and one that must be done. When you cannot disable the default account (and there will be times when disabling is not a viable option), the other alternative is to change the password to a very long one that offers strong resistance to brute-force attacks.
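
When a default account cannot be disabled, the fallback is a very long random password. The following Python sketch shows one way to generate such a password with the standard library’s secrets module; the length and character set shown are illustrative choices, not a mandated policy.

import secrets
import string

def long_random_password(length: int = 32) -> str:
    """Generate a long, random password for a default account that cannot be disabled."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(long_random_password())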

Configurations

Modern software is configuration driven. This means that setting proper configurations is essential for secure operation of the software. Using weak configurations or allowing access to configuration files so attackers can weaken or misconfigure a system is a security failure. Default configurations should be checked to ensure they employ the desired level of security.

Application Whitelisting/Blacklisting

Applications can be controlled at the OS level at start time via blacklisting or whitelisting. Application blacklisting is essentially noting which applications should not be allowed to run on the machine. This is basically a permanent “ignore” or “call block” type of capability. Application whitelisting is the exact opposite: it consists of a list of allowed applications. Each of these approaches has advantages and disadvantages. Blacklisting is difficult to use against dynamic threats, because the identification of a specific application can easily be avoided through minor changes. Whitelisting is easier to employ from the aspect of identifying the applications that are allowed to run—hash values can be used to ensure the executables are not corrupted. The challenge in whitelisting is the number of potential applications that run on a typical machine. For a single-purpose machine, such as a database server, whitelisting can be relatively easy to employ. For multipurpose machines, it can be more complicated.
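
The following minimal Python sketch illustrates the hash-based whitelisting idea described above: an executable is allowed only if its digest appears on an approved list. The digest and file path shown are hypothetical examples, and a real whitelisting solution would enforce this at the OS level rather than in a script.

import hashlib

# Hypothetical whitelist: SHA-256 digests of the only executables approved to run.
WHITELIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example digest
}

def is_whitelisted(path: str) -> bool:
    """Hash the executable and allow it only if the digest is on the approved list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in WHITELIST

if not is_whitelisted("/usr/local/bin/reporting_tool"):
    print("Blocked: executable is not on the application whitelist.")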

Microsoft has two mechanisms that are part of the OS to control which users can use which applications:

Images

Using OS-level restrictions to control what software can be used can prevent users from loading and running unauthorized software. Unauthorized software, whether because of licensing restrictions or because it is not vetted for use, can present risk to the enterprise. Controlling this risk via an enterprise operational control such as whitelisting can simplify compliance and improve baseline security posture.

Images   Software restriction policies Employed via group policies, these allow significant control over applications, scripts, and executable files. The primary mode is by machine and not by user account.

Images   User account level control Enforced via AppLocker, a service that allows granular control over which users can execute which programs. Through the use of rules, an enterprise can exert significant control over who can access and use installed software.

On a Linux platform, similar capabilities are offered from third-party vendor applications.

Sandboxing

Sandboxing refers to the quarantine or isolation of a system from its surroundings. It has become standard practice for some programs with an increased risk surface to operate within a sandbox, limiting the interaction with the CPU and other processes, such as memory. This works as a means of quarantine, preventing problems from getting out of the sandbox and onto the OS and other applications on a system.

Virtualization can be used as a form of sandboxing with respect to an entire system. You can build a VM, test something inside the VM, and, based on the results, make a decision with regard to stability or whatever concern was present.

Images Secure Baseline

While this process of establishing software’s base state is called baselining, and the resulting product is a baseline that describes the capabilities of the software, this is not necessarily secure. To secure the software on a system effectively and consistently, you must take a structured and logical approach. This starts with an examination of the system’s intended functions and capabilities to determine what processes and applications will be housed on the system. As a best practice, anything that is not required for operations should be removed or disabled on the system; then, all the appropriate patches, hotfixes, and settings should be applied to protect and secure it. This becomes the system’s secure baseline.

Software and hardware can be tied intimately when it comes to security, so they must be considered together. Once the process has been completed for a particular hardware and software combination, any similar systems can be configured with the same baseline to achieve the same level and depth of security and protection. Uniform software baselines are critical in large-scale operations, because maintaining separate configurations and security levels for hundreds or thousands of systems is far too costly.

After administrators have finished patching, securing, and preparing a system, they often create an initial baseline configuration. This represents a secure state for the system or network device and a reference point of the software and its configuration. This information establishes a reference that can be used to help keep the system secure by establishing a known-safe configuration. If this initial baseline can be replicated, it can also be used as a template when similar systems and network devices are being deployed.

Machine Hardening

The key management issue behind running a secure server setup is to identify the specific needs of a server for its proper operation and enable only items necessary for those functions. Keeping all other services and users off the system improves system throughput and increases security. Reducing the attack surface area associated with a server reduces the vulnerabilities now and in the future as updates are required.

Server Hardening Tips

Specific security needs can vary depending on the server’s specific use, but at a minimum, the following are beneficial:

Images   Remove unnecessary protocols such as Telnet, NetBIOS, Internetwork Packet Exchange (IPX), and File Transfer Protocol (FTP).

Images   Remove unnecessary programs such as Internet Information Services (IIS).

Images   Remove all shares that are not necessary.

Images   Rename the administrator account, securing it with a strong password.

Images   Remove or disable the Local Admin account in Windows.

Images   Disable unnecessary user accounts.

Images   Disable unnecessary ports and services.

Images   Keep the operating system (OS) patched and up to date.

Images   Keep all applications patched and up to date.

Images   Turn on event logging for determined security elements.

Images   Control physical access to servers.

Once a server has been built and is ready to be placed into operation, the recording of hash values on all of its crucial files will provide valuable information later in case of a question concerning possible system integrity after a detected intrusion. The use of hash values to detect changes was first developed by Gene Kim and Eugene Spafford at Purdue University in 1992. The concept became the product Tripwire, which is now available in commercial and open source forms. The same basic concept is used by many security packages to detect file-level changes.
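
The following Python sketch illustrates this Tripwire-style concept under simple assumptions: record a baseline of hashes for crucial files, then recompute and compare them later to detect changes. The watch list and baseline file name are hypothetical, and a production tool would also protect the stored baseline itself from tampering.

import hashlib
import json
import os

CRUCIAL_FILES = ["/etc/passwd", "/etc/ssh/sshd_config"]  # hypothetical watch list
BASELINE_FILE = "baseline_hashes.json"

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_baseline():
    """Store the current hash of each crucial file as the known-good reference."""
    baseline = {path: hash_file(path) for path in CRUCIAL_FILES if os.path.exists(path)}
    with open(BASELINE_FILE, "w") as f:
        json.dump(baseline, f, indent=2)

def check_baseline():
    """Recompute each hash and report any file that no longer matches the baseline."""
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    for path, expected in baseline.items():
        if not os.path.exists(path) or hash_file(path) != expected:
            print(f"Integrity change detected: {path}")

record_baseline()   # run once after the server is built and hardened
check_baseline()    # run later to detect changes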

Securing a Workstation

Workstations are attractive targets for crackers because they are numerous and can serve as entry points into the network and the data that is commonly the target of an attack. Although security is a relative term, following these basic steps will increase workstation security immensely:

Images   Remove unnecessary protocols such as Telnet, NetBIOS, and IPX.

Images   Remove unnecessary software.

Images   Remove modems unless needed and authorized.

Images   Remove all shares that are not necessary.

Images   Rename the administrator account, securing it with a strong password.

Images   Remove or disable the Local Admin account in Windows.

Images   Disable unnecessary user accounts.

Images   Disable unnecessary ports and services.

Images   Install an antivirus program and keep abreast of updates.

Images   If the floppy drive is not needed, remove or disconnect it.

Images   Consider disabling USB ports via BIOS to restrict data movement to USB devices.

Images   If no corporate firewall exists between the machine and the Internet, install a firewall.

Images   Keep the operating system (OS) patched and up to date.

Images   Keep all applications patched and up to date.

Images   Turn on event logging for determined security elements.

The primary method of controlling the security impact of a system on a network is to reduce the available attack surface area. Turning off all services that are not needed or permitted by policy will reduce the number of vulnerabilities. Removing methods of connecting additional devices to a workstation to move data—such as optical drives and USB ports—assists in controlling the movement of data into and out of the device. User-level controls, such as limiting e-mail attachment options, screening all attachments at the e-mail server level, and reducing network shares to needed shares only, can be used to limit excessive connectivity that can impact security.

Early versions of home operating systems did not have separate named accounts for separate users. This was seen as a convenience mechanism; after all, who wants the hassle of signing into the machine? This led to the simple problem that all users could then see and modify and delete everyone else’s content. Content could be separated by using access control mechanisms, but that required configuration of the OS to manage every user’s identity. Early versions of many OSs came with literally every option turned on. Again, this was a convenience factor, but it led to systems running processes and services that they never used, thus increasing the attack surface of the host unnecessarily.

Determining the correct settings and implementing them correctly is an important step in securing a host system. The following sections explore the multitude of controls and options that need to be employed properly to achieve a reasonable level of security on a host system.

Hardening Microsoft Operating Systems

Microsoft has spent years working to develop the most secure and securable OS on the market. As a desktop OS, Windows has provided a range of security features for users to secure their systems. Most of these options can be employed via group policies in enterprise setups, making them easily deployable and maintainable across an enterprise.

Here are some of the security capabilities in the Windows environment:

Images   User Account Control allows users to operate the system without requiring administrative privileges. If you’ve used Windows Vista and beyond, you’ve undoubtedly seen the “Windows needs your permission to continue” pop-ups.

Images   Windows Firewall includes an outbound filtering capability. Windows allows filtering of traffic coming into and leaving the system, which is useful for controlling things like peer-to-peer applications.

Images   BitLocker allows encryption of all data on a system, including any data volumes. This capability is only available in the higher-end editions of Windows.

Images   Windows clients work with Network Access Protection. See the discussion of NAP in the following “Hardening Windows Server” section for more details.

Images   Windows Defender is a built-in malware detection and removal tool. Windows Defender detects many types of potentially suspicious software and can prompt the user before allowing applications to make potentially malicious changes.

Hardening Windows Server

Microsoft touted Windows Server 2008 as its “most secure server” to date upon its release. Although Microsoft has not touted security specifically since, many improvements have been continuously evolving across the Windows Server platform, including in Windows Server 2012 and 2016, making it arguably one of the most securable platforms in the enterprise.

Images   BitLocker allows encryption of all data on a server, including any data volumes. BitLocker functionality has been improved to allow reboots without administrator intervention.

Images   Role-based installation of functions and capabilities minimizes the server’s footprint. For example, if a server is going to be a web server, it does not need DNS or SMTP software, and thus those features are no longer installed by default.

Images   Network Access Protection (NAP) controls access to network resources based on a client computer’s identity and compliance with corporate governance policy. NAP allows network administrators to define granular levels of network access based on client identity, group membership, and the degree to which that client is compliant with corporate policies. NAP can also ensure that clients comply with corporate policies. Suppose, for example, that a sales manager connects their laptop to the corporate network. NAP can be used to examine the laptop and see if it is fully patched and running a company-approved antivirus product with updated signatures. If the laptop does not meet those standards, network access for that laptop can be restricted until the laptop is brought back into compliance with corporate standards.

Images   Read-only domain controllers can be created and deployed in high-risk locations, but they can’t be modified to add new users, change access levels, and so on. This new ability to create and deploy “read-only” domain controllers can be very useful in high-threat environments.

Images   More-granular password policies allow for different password policies on a group or user basis. This allows administrators to assign different password policies and requirements for the sales group and the engineering group, for example, if that capability is needed.

Images   Web sites or web applications can be administered within IIS 7. This gives administrators quicker and more convenient administration capabilities, such as the ability to turn on or off specific modules through the IIS management interface. For example, removing CGI support from a web application is a quick and simple operation in IIS 7.

Images   The traditional ROM-BIOS has been replaced with Unified Extensible Firmware Interface (UEFI). Microsoft is using the security-hardened 2.3.1 version, which prevents boot code updates without appropriate digital certificates and signatures.

Images   The trustworthy and verified boot process has been extended to the entire Windows OS boot code with a feature known as Secure Boot. UEFI and Secure Boot significantly reduce the risk of malicious code such as rootkits and boot viruses.

Images   Early Launch Anti-Malware (ELAM) has been instituted to ensure that only known, digitally signed antimalware programs can load immediately after the boot process finishes; ELAM itself does not require UEFI or Secure Boot to function. This permits legitimate antimalware programs to get into memory and start doing their job before fake antivirus programs or other malicious code can act.

Images   DNSSEC is fully integrated.

Images   Data Classification with Rights Management Service is fully integrated so that you can control which users and groups can access which documents based on content or marked classification.

Images   Managed Service Accounts, introduced in Server 2008 R2, allow for advanced self-maintaining features with extremely long passwords, which automatically reset every 30 days, all under the control of Active Directory in the enterprise.

Images   Credential Guard enables the use of virtualization-based security to isolate credential information, preventing password hashes or Kerberos tickets from being intercepted. It uses an entirely new isolated Local Security Authority (LSA) process, which is not accessible to the rest of the operating system. All binaries used by the isolated LSA are signed with certificates that are validated before they are launched in the protected environment, making pass-the-hash-type attacks completely ineffective.

Images   Windows Server 2016 includes Device Guard to ensure that only trusted software can be run on the server. Using virtualization-based security, Device Guard can limit what binaries can run on the system based on the organization’s policy. If anything other than the specified binaries tries to run, Windows Server 2016 blocks it and logs the failed attempt so that administrators can see that there has been a potential breach. Device Guard is also integrated with PowerShell so that you can authorize which scripts can run on your system.

The tools available in each subsequent release of the Windows Server OS are designed to increase the difficulty factor for attackers, eliminating known methods of exploitation. The challenge is in administrating the security functions, although the integration of many of these via Active Directory makes this much more manageable than in the past.

Microsoft Security Compliance Manager

Microsoft provided a tool called Security Compliance Manager (SCM) to assist system and enterprise administrators with the configuration of security options across a wide range of Microsoft platforms. SCM allows administrators to use group policy objects (GPOs) to deploy security configurations across Internet Explorer, the desktop OSs, server OSs, and common applications such as Microsoft Office. Microsoft reluctantly retired SCM in the summer of 2017 in favor of a new tool set called Desired State Configuration (DSC).

Desired State Configuration (DSC)

Desired State Configuration (DSC) is a PowerShell-based approach to configuration management of a system. Rather than having documentation that describes the security settings for a system and expecting a user to set them, DSC performs the work via PowerShell functions. This makes security configuration a managed-by-code process that brings with it many advantages. Using DSC, it is easier and faster to adopt, implement, maintain, deploy, and share system configuration information. DSC brings the advantages of DevOps to system configuration in the Windows environment. While detailed PowerShell implementations are beyond the scope of this book, the concept of programmable configuration control is not. DSC is more than just PowerShell, for DSC configurations separate intent (or “what I want to do”) from execution (or “how I want to do it”). By separating the specifics of deployments, DSC enables a single DSC implementation to service multiple environments, using configuration data to target dev, test, and production environments appropriately.

Microsoft Security Baselines

A security baseline is a group of Microsoft-recommended configuration settings with an explanation of their security impact. There are over 3,000 Group Policy settings for Windows 10, not counting the more than 1,800 Internet Explorer 11 settings. Of these roughly 4,800 settings, only some are security related, and choosing which to set can be a laborious process. Security baselines bring an expert-based consensus view to this task. Microsoft provides a security compliance toolkit to facilitate the application of Microsoft-recommended baselines for a system. The Microsoft Security Compliance Toolkit (SCT) is a set of tools that allows enterprise security administrators to download, analyze, test, edit, and store Microsoft-recommended security configuration baselines for Windows.

Using the toolkit, administrators can compare their current group policy objects (GPOs) with Microsoft-recommended GPO baselines or other baselines. You can also edit them, store them in GPO backup file format, and apply them broadly through Active Directory or individually through local policy. The Security Compliance Toolkit consists of specific baselines based on OS and two tools—the Policy Analyzer tool and the Local Group Policy Object (LGPO) tool.

For further information, see Microsoft Security Compliance Toolkit 1.0 (https://docs.microsoft.com/en-us/windows/security/threat-protection/security-compliance-toolkit-10).

Microsoft Attack Surface Analyzer

One of the challenges in a modern enterprise is understanding the impact of system changes from the installation or upgrade of an application on a system. To help you overcome that challenge, Microsoft has released the Attack Surface Analyzer (ASA), a free tool that can be run on a system before a change and then again afterward to analyze how various system properties were affected by the change.

Using ASA, developers can view changes in the attack surface resulting from the introduction of their code onto the Windows platform, and system administrators can assess the aggregate attack surface change by the installation of an application. Security auditors can use the tool to evaluate the risk of a particular piece of software installed on the Windows platform. And if ASA is deployed in a baseline mode before an incident, security incident responders can potentially use ASA to gain a better understanding of the state of a system’s security during an investigation.

Group Policies

Microsoft defines a group policy as “an infrastructure used to deliver and apply one or more desired configurations or policy settings to a set of targeted users and computers within an Active Directory environment. This infrastructure consists of a Group Policy engine and multiple client-side extensions (CSEs) responsible for writing specific policy settings on target client computers.” Introduced with the Windows 2000 operating system, group policies are a great way to manage and configure systems centrally in an Active Directory environment (Windows NT had policies, but technically not “group policies”). Group policies can also be used to manage users, making these policies valuable tools in any large environment.

Within the Windows environment, group policies can be used to refine, set, or modify a system’s Registry settings, auditing and security policies, user environments, logon/logoff scripts, and so on. Policy settings are stored in a group policy object (GPO) and are referenced internally by the OS using a globally unique identifier (GUID). A single policy can be linked to a single user, a group of users, a group of machines, or an entire organizational unit (OU), which makes updating common settings on large groups of users or systems much easier. Users and systems can have more than one GPO assigned and active, which can create conflicts between policies that must then be resolved at an attribute level. Group policies can also overwrite local policy settings. Group policies should not be confused with local policies. Local policies are created and applied to a specific system (locally), are not user specific (you can’t have local policy X for user A and local policy Y for user B), and are overwritten by GPOs. Further confusing some administrators and users, policies can be applied at the local, site, domain, and OU levels. Policies are applied in hierarchical order—local, then site, then domain, and so on. This means settings in a local policy can be overridden or reversed by settings in the domain policy if there is a conflict between the two policies. If there is no conflict, the policy settings are aggregated.
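
To make this precedence behavior concrete, here is a minimal Python sketch using hypothetical policy settings at each level: policies are applied local, then site, then domain, then OU, with a later policy overriding any conflicting setting and non-conflicting settings aggregating. The setting names and values are illustrative only.

# Hypothetical policy settings at each level, applied local -> site -> domain -> OU.
local_policy  = {"MinimumPasswordLength": 8,  "AuditLogon": "Success"}
site_policy   = {"ScreenLockTimeout": 900}
domain_policy = {"MinimumPasswordLength": 14, "PasswordComplexity": True}
ou_policy     = {"ScreenLockTimeout": 600}

def effective_policy(*policies):
    """Apply policies in order; a later policy wins any conflict, otherwise settings aggregate."""
    result = {}
    for policy in policies:
        result.update(policy)
    return result

print(effective_policy(local_policy, site_policy, domain_policy, ou_policy))
# MinimumPasswordLength comes from the domain policy (14), overriding the local value (8);
# non-conflicting settings such as AuditLogon are simply aggregated.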

Windows Local Security Policies

Open a command prompt as either administrator or a user with administrator privileges on a Windows system. Type the command secpol.msc and press ENTER (this should bring up the Local Security Policy utility). Expand Account Policies on the left side of the Local Security Policy window (which should have a + next to it). Click Password Policy. Look in the right side of the Local Security Policy window. What is the minimum password length? What is the maximum password age in days? Now explore some of the policy settings—but be careful! Changes made to the local security policy can affect the functionality or usability of your system.

Creating GPOs is usually done through either the Group Policy Object Editor, shown in Figure 14.1, or the Group Policy Management Console (GPMC). The GPMC is a more powerful GUI-based tool that can summarize GPO settings; simplify security filtering settings; back up, clone, restore, and edit GPOs; and perform other tasks. After creating a GPO, administrators will associate it with the desired targets. After association, group policies operate on a pull model, meaning that at a semi-random interval, the Group Policy client will collect and apply any policies associated to the system and the currently logged-on user.

images


Figure 14.1   Group Policy Object Editor

Microsoft group policies can provide many useful options, including the following:

Images   Network location awareness Systems are now “aware” of which network they are connected to and can apply different GPOs as needed. For example, a system can have a very restrictive GPO when connected to a public network and a less restrictive GPO when connected to an internal, trusted network.

Images   Ability to process without ICMP Older group policy processes would occasionally time out or fail completely if the targeted system did not respond to ICMP packets. Current implementations in Windows Vista and Windows 7 do not rely on ICMP during the GPO update process.

Images   VPN compatibility As a side benefit of network location awareness, mobile users who connect through VPNs can receive a GPO update in the background after connecting to the corporate network via VPN.

Images   Power management Starting with Windows Vista, power management settings can be configured using GPOs.

Images   Device access blocking Under Windows Vista and Windows 7, policy settings have been added that allow administrators to restrict user access to USB drives, CD-RW drives, DVD-RW drives, and other removable media.

Images   Location-based printing Users can be assigned to various printers based on their location. As mobile users move, their printer locations can be updated to the closest local printer.

Images

In Windows, policies are applied in hierarchical order. Local policies get applied first, then site policies, then domain policies, and finally OU policies. If a setting from a later policy conflicts with a setting from an earlier policy, the setting from the later policy “wins” and is applied. Keep this in mind when building group policies.

Hardening UNIX- or Linux-Based Operating Systems

Although you do not have the advantage of a single manufacturer for all UNIX operating systems (like you do with Windows operating systems), the concepts behind securing different UNIX- or Linux-based operating systems are similar, regardless of whether the manufacturer is Red Hat or Sun Microsystems. Indeed, the overall tasks involved with hardening all operating systems are remarkably similar.

Establishing General UNIX Baselines

General UNIX baselining follows similar concepts as baselining for Windows OSs: disable unnecessary services, restrict permissions on files and directories, remove unnecessary software, apply patches, remove unnecessary users, and apply password guidelines. Some versions of UNIX provide GUI-based tools for these tasks, while others require administrators to edit configuration files manually. In most cases, anything that can be accomplished through a GUI can be accomplished from the command line or by manually editing configuration files.

Like Windows systems, UNIX systems are easiest to secure and baseline if they are providing a single service or performing a single function, such as acting as a Simple Mail Transfer Protocol (SMTP) server or web server. Prior to performing any software installations or baselining, the administrator should define the purpose of the system and identify all required capabilities and functions. One nice advantage of UNIX systems is that you typically have complete control over what does or does not get installed on the system. During the installation process, the administrator can select which services and applications are placed on the system, offering an opportunity to not install services and applications that will not be required. However, this assumes that the administrator knows and understands the purpose of this system, which is not always the case. In other cases, the function of the system itself may have changed.

Runlevels

Runlevels are used to describe the state of init (initialization) and what system services are operating in UNIX systems. For example, runlevel 0 is shutdown. Runlevel 1 is single-user mode (typically for administrative purposes). Runlevels 2 through 5 are user defined (that is, administrators can define what services are running at each level). Runlevel 6 is for reboot.

Services on a UNIX system (called daemons) can be controlled through a number of different mechanisms. As the root user, an administrator can start and stop services manually from the command line or through a GUI tool. The OS can also stop and start services automatically through configuration files (usually contained in the /etc directory). (Note that UNIX systems vary a good deal in this regard, as some use a super-server process, such as inetd, while others have individual configuration files for each network service.) Unlike Windows, UNIX systems can also have different runlevels in which the system can be configured to bring up different services, depending on the runlevel selected.
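The exact commands for inspecting and changing runlevels and service startup vary by distribution. As an illustrative sketch—assuming a Red Hat–style system with the legacy SysV tools, with systemd equivalents noted, and using the CUPS print service purely as an example—the workflow looks roughly like this:

# Show the current (and previous) runlevel
who -r
runlevel

# Switch to single-user mode for maintenance (as root)
telinit 1

# List which services start at each runlevel (SysV-style systems)
chkconfig --list

# Prevent a service from starting at runlevels 3 and 5
chkconfig --level 35 cups off

# Rough systemd equivalents on newer distributions
systemctl list-unit-files --type=service
systemctl disable cups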

Linux Hardening

One of the “strengths” of Linux is the ability of a sysadmin to fully control all of the system’s features—the ultimate in customizable solutions. This can lead to leaner and faster processing, but it can also lead to security problems. Securing a Linux environment involves a couple of different types of operations: how the sysadmin operates and how the system is configured. Added to these are the intricacies of the Linux system itself.

Linux has several separate operating spaces, each with its own characteristics. The application space is where user applications exist and run. These are above the kernel and can be changed while operating by simply restarting the application. The kernel space is integral to the system and can only be changed by rebooting the hardware. Thus, updates to kernel processes require a reboot to finish and become active.

Securing Linux is in many ways like securing any other operating system. Issues such as securing the services, keeping things up to date, and enforcing policies are all the same objectives regardless of the type or version of OS. The differences occur in how one achieves these objectives. Using passwords as an example, there is no centralized method like Active Directory and group policies. Instead, these functions are controlled granularly using commands on the system. It is possible to manage passwords to the same degree as through unified systems; it just takes a bit more work. The same goes for controlling access to administrative or root accounts. On a running UNIX system, you can see which processes, applications, and services are running by using the process status, or ps, command, as shown in Figure 14.2. To stop a running service, you can identify the service by its unique process identifier (PID) and then use the kill command to stop the service. For example, if you wanted to stop the bluetooth-applet service in Figure 14.2, you would use the command kill 2443. To prevent this service from starting again when the system is rebooted, you would have to modify the appropriate runlevels to remove this service, as shown in Figure 14.2, or modify the configuration files that control this service.

images


Figure 14.2   The ps command run on a Fedora system
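As a minimal sketch of that stop-a-service workflow (the process name and PID below are the ones from Figure 14.2 and will differ on your system):

# Find the process of interest and note its PID
ps -ef | grep bluetooth

# Ask the process to terminate gracefully
kill 2443

# If it ignores the default TERM signal, force it to stop
kill -9 2443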

Linux is built around the concept of a file—everything is a file. Ordinary files are files, as are directories. Devices are files, I/O locations are files, and conduits between programs, called pipes, are files. Making everything addressable as a file makes permissions easier. Users are not files; they are subjects in the subject-object model. Subjects act upon objects according to permissions. Users exist both individually and in groups, and permissions are layered between the owner of the object, groups, and single subjects (users). In Linux, a group is a name for a list of users; this allows for shorter access control entry (ACE) lists on objects because groups are checked first. When a subject attempts to act upon an object, the security kernel examines the object’s access control entries until it finds a match. If no match is found, the action is not allowed.

Permissions on files are expressed in bit patterns, as illustrated in Figures 14.3 and 14.4. Permissions are modified using the chmod command and indicating a three-digit number that translates to the appropriate set of read, write, and execute permissions for the item. Figure 14.3 illustrates how the permissions are displayed during a file listing as well as how the relative positions relate to the owner, group, and others. Figure 14.4 illustrates the decoding pattern of the bit structure.

images


Figure 14.3   Linux permissions listing

images


Figure 14.4   Linux permission bit sequence

The common patterns frequently used in Linux systems are illustrated in Table 14.1.

Table 14.1 Common Linux File Permissions

Images

For applications in the user space on a Linux box, setting the correct permissions is extremely important. These permissions are what protect configuration and other settings that enable or disable a lot of functionality—and could, if set erroneously, allow attackers to perform a wide range of attacks, including installing malware that can watch other users. For these reasons and more, Linux can be an awesome system, with great performance and capability. The downside is that it requires significant expertise to do these things securely in today’s computing environment.

Directories also use the same nomenclature as files for permissions, but with minor differences. An r indicates that the contents can be read. A w indicates that the contents can be written, and x allows a directory to be entered. Both r and w have no effect without x being set. A setting of 777 indicates that anyone can list and create/delete files in the directory. 755 gives the owner full access, while others may only list the files. 700 restricts access to only the owner.
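As a short, illustrative sketch (the file and directory names are hypothetical), these common settings are applied with chmod and verified with ls:

# Owner read/write, group read, no access for others
chmod 640 report.txt

# Owner full access to a directory; group and others may list and enter it
chmod 755 /srv/projects

# Restrict a directory to its owner only
chmod 700 /home/alice/private

# Verify the results (ls -ld shows the directory itself rather than its contents)
ls -l report.txt
ls -ld /srv/projects /home/alice/private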

There are times when a user needs more permissions than their account holds, as in needing root permission to perform a task. Rather than logging in as root, and thus losing their identity in logs and such, the user can use the superuser command, su, in order to assume root privilege, provided they have the root password.
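For example (assuming you know the root password), su can be used either for an interactive root session or for a single privileged command; the log file path shown is a Red Hat–style example:

# Start an interactive root shell (exit returns you to your own account)
su -

# Run one privileged command and return immediately
su -c 'tail /var/log/secure'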

Antimalware

In the early days of PC use, threats were limited: most home users were not connected to the Internet 24/7 through broadband connections, and the most common threat was a virus passed from computer to computer via an infected floppy disk (much like the medical definition, a computer virus is something that can infect the host and replicate itself). But things have changed dramatically since those early days, and current threats pose a much greater risk than ever before. According to SANS Internet Storm Center, the average survival time of an unpatched Windows PC on the Internet is less than 60 minutes (http://isc.sans.org/survivaltime.html). This is the estimated time before an automated probe finds the system, penetrates it, and compromises it. Automated probes from botnets and worms are not the only threats roaming the Internet—there are viruses and malware spread by e-mail, phishing, infected web sites that execute code on your system when you visit them, adware, spyware, and so on. Fortunately, as the threats increase in complexity and capability, so do the products designed to stop them.

Malware

Malware comes in many forms and is covered specifically in Chapter 15. Antivirus solutions and proper workstation configurations are part of a defensive posture against various forms of malware. Additional steps include policy and procedure actions, prohibiting file sharing via USB or external media, and prohibiting access to certain web sites.

Antivirus

Antivirus (AV) products attempt to identify, neutralize, or remove malicious programs, macros, and files. These products were initially designed to detect and remove computer viruses, though many of the antivirus products are now bundled with additional security products and features.

Although antivirus products have had over two decades to refine their capabilities, the purpose of the antivirus products remains the same: to detect and eliminate computer viruses and malware. Most antivirus products combine the following approaches when scanning for viruses:

Images   Signature-based scanning Much like an intrusion detection system (IDS), the antivirus products scan programs, files, macros, e-mails, and other data for known worms, viruses, and malware. The antivirus product contains a virus dictionary with thousands of known virus signatures that must be frequently updated, as new viruses are discovered daily. This approach will catch known viruses but is limited by the virus dictionary—what it does not know about it cannot catch.

Images   Heuristic scanning (or analysis) Heuristic scanning does not rely on a virus dictionary. Instead, it looks for suspicious behavior—anything that does not fit into a “normal” pattern of behavior for the OS and applications running on the system being protected.

Images

Most current antivirus software packages provide protection against a wide range of threats, including viruses, worms, Trojans, and other malware. Use of an up-to-date antivirus package is essential in the current threat environment.

As signature-based scanning is a familiar concept, let’s examine heuristic scanning in more detail. Heuristic scanning typically looks for commands or instructions that are not normally found in application programs, such as attempts to access a reserved memory register. Most antivirus products use either a weight-based system or a rule-based system in their heuristic scanning (more effective products use a combination of both techniques). A weight-based system rates every suspicious behavior based on the degree of threat associated with that behavior. If the set threshold is passed based on a single behavior or a combination of behaviors, the antivirus product will treat the process, application, macro, and so on that is performing the behavior(s) as a threat to the system. A rule-based system compares activity to a set of rules meant to detect and identify malicious software. If part of the software matches a rule, or if a process, application, macro, and so on performs a behavior that matches a rule, the antivirus software will treat that as a threat to the local system.
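To make the weight-based idea concrete, the following deliberately simplified shell sketch assigns an invented weight to each suspicious marker found in a file and flags the file when the total crosses a threshold. The marker strings, weights, and threshold are made up for illustration; real heuristic engines analyze program behavior and code structure, not simple text matches.

#!/bin/bash
# Toy weight-based heuristic scorer (illustration only, not a real scanner)
file="$1"
score=0
grep -q "CreateRemoteThread" "$file" && score=$((score + 40))   # process-injection API name
grep -q "keybd_event"        "$file" && score=$((score + 30))   # keystroke-simulation API name
grep -q "autorun.inf"        "$file" && score=$((score + 20))   # removable-media autorun reference
if [ "$score" -ge 60 ]; then
    echo "$file: flag as suspicious (score $score)"
else
    echo "$file: no action (score $score)"
fi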

Images

Heuristic scanning is a method of detecting potentially malicious or “virus-like” behavior by examining what a program or section of code does. Anything that is “suspicious” or potentially “malicious” is closely examined to determine whether or not it is a threat to the system. Using heuristic scanning, an antivirus product attempts to identify new viruses or heavily modified versions of existing viruses before they can damage your system.

Some heuristic products are very advanced and contain capabilities for examining memory usage and addressing, a parser for examining executable code, a logic flow analyzer, and a disassembler/emulator so they can “guess” what the code is designed to do and whether or not it is malicious.

As with IDS/IPS products, encryption and obfuscation pose a problem for antivirus products: anything that cannot be read cannot be matched against current virus dictionaries or activity patterns. To combat the use of encryption in malware and viruses, many heuristic scanners look for encryption and decryption loops. As malware is usually designed to run alone and unattended, if it uses encryption, it must contain all the instructions to encrypt and decrypt itself as needed. Heuristic scanners look for instructions such as the initialization of a pointer with a valid memory address, manipulation of a counter, or a branch condition based on a counter value. While these actions don’t always indicate the presence of an encryption/decryption loop, if the heuristic engine can find a loop, it might be able to decrypt the software in a protected memory space, such as an emulator, and evaluate the software in more detail. Many viruses share common encryption/decryption routines that help antivirus developers.

Current antivirus products are highly configurable and most offerings will have the following capabilities:

Images   Automated updates Perhaps the most important feature of a good antivirus solution is its ability to keep itself up to date by automatically downloading the latest virus signatures on a frequent basis. This usually requires that the system be connected to the Internet in some fashion and that updates be performed on a daily (or more frequent) basis.

Images   Automated scanning Most antivirus products allow for the scheduling of automated scans so that you can designate when the antivirus product will examine the local system for infected files. These automated scans can typically be scheduled for specific days and times, and the scanning parameters can be configured to specify what drives, directories, and types of files are scanned.

Images   Media scanning Removable media is still a common method for virus and malware propagation, and most antivirus products can be configured to automatically scan optical media, USB drives, memory sticks, or any other type of removable media as soon as they are connected to or accessed by the local system.

Images   Manual scanning Many antivirus products allow the user to scan drives, files, or directories (folders) “on demand.”

Images   E-mail scanning E-mail is still a major method of virus and malware propagation. Many antivirus products give users the ability to scan both incoming and outgoing messages as well as any attachments.

Images   Resolution When the antivirus product detects an infected file or application, it can typically perform one of several actions. The antivirus product may quarantine the file, making it inaccessible; it may try to repair the file by removing the infection or offending code; or it may delete the infected file. Most antivirus products allow the user to specify the desired action, and some allow for an escalation in actions, such as cleaning the infected file if possible and quarantining the file if it cannot be cleaned.

Images

The intentions of computer virus writers have changed over the years—from simply wanting to spread a virus in order to be noticed, to creating stealthy botnets as a criminal activity. One method of remaining hidden is to produce viruses that can morph to lower their detection rates by standard antivirus programs. The number of variants for some viruses has increased from less than 10 to greater than 10,000. This explosion in signatures has created two issues. One, users must constantly (sometimes more than daily) update their signature file. Two, and more important, detection methods are having to change as the number of signatures becomes too large to scan quickly. For end users, the bottom line is simple: update signatures automatically, and at least daily.

Antivirus solutions are typically installed on individual systems (desktops, servers, and even mobile devices), but network-based antivirus capabilities are also available in many commercial gateway products. These gateway products often combine firewall, IDS/IPS, and antivirus capabilities into a single integrated platform. Most organizations will also employ antivirus solutions on e-mail servers, as that continues to be a very popular propagation method for viruses.

While the installation of a good antivirus product is still considered a necessary best practice, there is growing concern about the effectiveness of antivirus products against developing threats. Early viruses often exhibited destructive behaviors; they were poorly written, modified files indiscriminately, and were less concerned with hiding their presence than with propagating. We are now seeing an emergence of viruses and malware created by professionals, sometimes financed by criminal organizations or governments, that go to great lengths to hide their presence. These viruses and malware are often used to steal sensitive information or turn the infected PC into part of a larger botnet for use in spamming or attack operations.

Antivirus Software for Servers

The need for antivirus protection on servers depends a great deal on the use of the server. Some types of servers, such as e-mail servers, require extensive antivirus protection because of the services they provide. Other servers (domain controllers and remote access servers, for example) may not require any antivirus software, as they do not allow users to place files on them. File servers need protection, as do certain types of application servers. There is no general rule, so each server and its role in the network will need to be examined to determine whether it needs antivirus software.

Images

Antivirus is an essential security application on all platforms. Numerous compliance schemes mandate antivirus deployment, including the Payment Card Industry Data Security Standard (PCI DSS) and the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) standards.

Antivirus Software for Workstations

Antivirus packages are available from a wide range of vendors. Running a network of computers without this basic level of protection will be an exercise in futility. Even though the number of widespread, indiscriminate broadcast virus attacks has decreased because of the effectiveness of antivirus software, it is still necessary to use antivirus software; the time and money you would spend cleaning up after a virus attack more than equals the cost of antivirus protection. The majority of viruses today exist to create zombie machines for botnets that enable others to control resources on your PC. Even more important, once connected by networks, computers can spread a virus from machine to machine with an ease that’s even greater than simple USB flash drive transfer. One unprotected machine can lead to problems throughout a network as other machines have to use their antivirus software to attempt to clean up a spreading infection.

Apple Mac computers were once considered by many users to be immune because very few examples of malicious software targeting Macs existed. This was not due to anything other than a low market share, and hence the devices were ignored by the malware community as a whole. As Mac has increased in market share, so has its exposure, and today a variety of macOS malware steals files and passwords and is even used to take users’ pictures with the computer’s built-in webcam. All users need to install antivirus software on their machines in today’s environment, because any computer can become a target.

Antispam

If you have an e-mail account, you’ve likely received spam, that endless stream of unsolicited, electronic junk mail advertising get-rich-quick schemes, asking you to validate your bank account’s password, or inviting you to visit one web site or another. Despite federal legislation (such as the CAN-SPAM Act of 2003) and promises from IT industry giants like Bill Gates, who in 2004 said, “Two years from now, spam will be solved,” spam is alive and well and filling up your inbox as you read this. Industry experts have been fighting the spam battle for years, and while significant progress has been made in the development of antispam products, unfortunately the spammers have proven to be very creative and very dedicated in their quest to fill your inbox.

Images

Spam is not a new problem. It’s reported that the first spam message was sent on May 1, 1978, by a Digital Equipment Corporation sales representative. This sales representative attempted to send a message to all ARPANET users on the West Coast.

Antispam products attempt to filter out that endless stream of junk e-mail so you don’t have to. Some antispam products operate at the corporate level, filtering messages as they enter or leave designated mail servers. Other products operate at the host level, filtering messages as they come into your personal inbox. Most antispam products use similar techniques and approaches for filtering out spam:

Images   Blacklisting Several organizations maintain lists of servers or domains that generate or have generated spam. Most gateway- or server-level products can reference these blacklists and automatically reject any mail coming from servers or domains on the blacklists.

Images   Header filtering The antispam products look at the message headers to see if they are forged. E-mail headers typically contain information such as sender, receiver, servers used to transmit the message, and so on. Spammers often forge information in message headers in an attempt to hide where the message is really coming from.

Images   Content filtering The content of the message is examined for certain key words or phrases that are common to spam but rarely seen in legitimate e-mails (“get rich now” for example). Unfortunately, content filtering does occasionally flag legitimate messages as spam.

Images   Language filtering Some spam products allow you to filter out e-mails written in certain languages.

Images   User-defined filtering Most antispam products allow end users to develop their own filters, such as always allowing e-mail from a specific source even if it would normally be blocked by a content filter.

Images   Trapping Some products will monitor unpublished e-mail addresses for incoming spam—anything sent to an unpublished and otherwise unused account is likely to be spam.

Images   Enforcing the specifications of the protocol Some spam-generation tools don’t properly follow the SMTP protocol. By enforcing the technical requirements of SMTP, some spam can be rejected as delivery is attempted.

Images   Egress filtering This technique scans mail as it leaves an organization to catch spam before it is sent to other organizations.

Spam

The topic of spam and all the interesting details of undesired e-mail are presented in Chapter 16. Spam is listed here because it is considered a client threat, but the main methods of combating spam are covered in Chapter 16.

Antispyware

Most antivirus products include antispyware capabilities as well. While antivirus programs were designed to watch for the writing of files to the file system, many current forms of malware avoid the file system to evade this form of detection. Newer antivirus products are adapting by scanning memory as well as watching file system access in an attempt to detect advanced malware. Spyware is the term for malware that is designed to steal information from the system, such as keystrokes, passwords, PINs, and keys. Antispyware helps protect your systems from the ever-increasing flood of malware that seeks to watch your keystrokes, steal your passwords, and report sensitive information back to attackers. Many of these attack vectors work in system memory to avoid easy detection.

Windows Defender

As part of its ongoing efforts to help secure its PC operating systems, Microsoft released a free utility called Windows Defender in February 2006. The stated purpose of Windows Defender is to protect your computer from spyware and other unwanted software. Windows Defender is now standard with all versions of the Windows desktop operating systems and is available via free download in both 32- and 64-bit versions. It has the following capabilities:

Images   Spyware detection and removal Windows Defender is designed to find and remove spyware and other unwanted programs that display pop-ups, modify browser or Internet settings, or steal personal information from your PC.

Images   Scheduled scanning You can schedule when you want your system to be scanned or you can run scans on demand.

Images   Automatic updates Updates to the product can be automatically downloaded and installed without user interaction.

Images   Real-time protection Processes are monitored in real time to stop spyware and malware when they first launch, attempt to install themselves, or attempt to access your PC.

Images   Software Explorer One of the more interesting capabilities within Windows Defender is the ability to examine the various programs running on your computer. Windows Defender allows you to look at programs that run automatically on startup, are currently running on your PC, or are accessing network connections on your PC. Windows Defender provides you with details such as the publisher of the software, when it was installed on your PC, whether or not the software is “good” or considered to be known malware, the file size, publication date, and other information.

Images   Configurable responses Windows Defender lets you choose what actions you want to take in response to detected threats (see Figure 14.5); you can automatically disable the software, quarantine it, attempt to uninstall it, and perform other tasks.

images


Figure 14.5   Windows Defender System Center

Pop-up Blockers

One of the most annoying nuisances associated with web browsing is the pop-up ad. Pop-up ads are online advertisements designed to attract web traffic to specific web sites, capture e-mail addresses, advertise a product, and perform other tasks. If you’ve spent more than an hour surfing the Web, you’ve undoubtedly seen them: they’re created when the web site you are visiting opens a new browser window for the sole purpose of displaying an advertisement. Pop-up ads typically appear in front of your current browser window to catch your attention (and disrupt your browsing), and they can range from mildly annoying, generating one or two pop-ups, to system crippling if a malicious web site attempts to open thousands of pop-up windows on your system. Pop-up blockers are programs, typically built into or added to web browsers, designed to prevent this behavior.

Similar to the pop-up ad is the pop-under ad, which opens up behind your current browser window. You won’t see these ads until your current window is closed, and they are considered by some to be less annoying than pop-ups. Another form of pop-up is the hover ad, which uses Dynamic HTML (DHTML) to appear as a floating window superimposed over your browser window. To some users, pop-up ads are as undesirable as spam, and many web browsers now allow users to restrict or prevent pop-ups with functionality either built into the web browser or available as an add-on.

Firefox, like most modern browsers, contains a built-in pop-up blocker (available by choosing Tools | Options and then selecting the Content tab). Popular add-ons such as the Google and Yahoo! toolbars also contain pop-up blockers. If these freely available options are not enough for your needs, many commercial security suites from McAfee, Symantec, and Check Point contain pop-up-blocking capabilities as well. Users must be careful when selecting a pop-up blocker, as some unscrupulous developers have created adware products disguised as free pop-up blockers or other security tools.

Images

Pop-up blockers are used to prevent web sites from opening additional web browser windows or tabs without specific user consent.

Pop-up ads can be generated in a number of ways, including JavaScript and Adobe Flash, and an effective pop-up blocker must be able to deal with the many methods used to create pop-ups. When a pop-up is created, users typically can click a close or cancel button inside the pop-up or close the new window using a method available through the OS, such as closing the window from the taskbar in Windows. With the advanced features available to them in a web development environment, some unscrupulous developers program the close or cancel button in their pop-ups to launch new pop-ups, redirect the user, run commands on the local system, or even load software.

Pop-ups should not be confused with adware. Pop-ups are ads that appear as you visit web pages. Adware is advertising-supported software. Adware automatically downloads and displays ads on your computer after the adware has been installed, and these ads are typically shown while the software is being used. Adware is often touted as “free” software, as the user pays nothing for the software but must agree to allow ads to be downloaded and displayed before using the software. This approach is very popular on smartphones and mobile devices.

Whitelisting vs. Blacklisting Applications

Applications can be controlled at the OS level when they are started via blacklisting or whitelisting. Blacklisting is essentially noting which applications should not be allowed to run on the machine. This is basically a permanent “ignore” or “call block” type of capability. Whitelisting is the exact opposite: it consists of a list of allowed applications. Each of these approaches has advantages and disadvantages. Blacklisting is difficult to use against dynamic threats, as the identification of a specific application can easily be avoided through minor changes. Whitelisting is easier to employ from the aspect of the identification of applications that are allowed to run—hash values can be used to ensure the executables are not corrupted. The challenge in whitelisting is the number of potential applications that are run on a typical machine. For a single-purpose machine, such as a database server, whitelisting can be relatively easy to employ. For multipurpose machines, it can be more complicated.
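The hash-based approach mentioned above can be sketched with a small wrapper script. This is only an illustration of the idea—the whitelist file location, its one-hash-per-line format, and the wrapper approach itself are assumptions for this example; production whitelisting is enforced by the operating system (for instance, AppLocker, described below), not by a script:

#!/bin/bash
# Allow a program to run only if its SHA-256 hash is on the approved list
whitelist="/etc/approved-hashes.txt"      # assumed format: one SHA-256 hash per line
app="$1"
hash=$(sha256sum "$app" | awk '{print $1}')
if grep -qx "$hash" "$whitelist"; then
    exec "$app"
else
    echo "Blocked: $app is not on the whitelist" >&2
    exit 1
fi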

Microsoft has two mechanisms that are part of the OS to control which users can use which applications:

Images   Software restriction policies Employed via group policies and allow significant control over applications, scripts, and executable files. The primary mode is by machine and not by user account.

Images   User account level control Enforced via AppLocker, a service that allows granular control over which users can execute which programs. Through the use of rules, an enterprise can exert significant control over who can access and use installed software.

On a Linux platform, similar capabilities are offered from third-party vendor applications.

AppLocker

AppLocker is a component of Enterprise licenses of Windows 7 and later that enables administrators to enforce which applications are allowed to run via a set of predefined rules. AppLocker is an adjunct to software restriction policies (SRPs). SRPs required significant administration on a machine-by-machine basis and were difficult to administer across an enterprise. AppLocker was designed so the rules can be distributed and enforced by GPO. They both act to prevent the running of unauthorized software and malware on a machine, but AppLocker is significantly easier to administer. Figure 14.6 shows the AppLocker interface. Some of the features that are enabled via AppLocker are restrictions by user and the ability to run in an audit mode, where results are logged but not enforced, allowing settings to be tested before use.

images


Figure 14.6   AppLocker interface

Host-Based Firewalls

Personal firewalls are host-based protective mechanisms that monitor and control traffic passing into and out of a single system. Designed for the end user, software firewalls often have a configurable security policy that allows the user to determine which traffic is “good” and is allowed to pass and which traffic is “bad” and is blocked. Software firewalls are extremely commonplace—so much so that most modern OSs come with some type of personal firewall included.

Linux-based OSs have had built-in software-based firewalls for a number of years, including TCP Wrapper, ipchains, and iptables (see Figure 14.7).

images


Figure 14.7   Linux firewall

TCP Wrapper is a simple program that limits inbound network connections based on port number, domain, or IP address and is managed with two text files called hosts.allow and hosts.deny. If the inbound connection is coming from a trusted IP address and is destined for a port to which it is allowed to connect, then the connection is allowed.
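As a brief sketch—assuming the services involved were built with TCP Wrapper (libwrap) support, and using example internal addresses—a default-deny configuration might look like this:

# /etc/hosts.allow — permit SSH only from the internal network
sshd: 192.168.1.0/255.255.255.0

# /etc/hosts.deny — deny everything not explicitly allowed above
ALL: ALL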

Ipchains is a more advanced, rule-based software firewall that allows for traffic filtering, Network Address Translation (NAT), and redirection. Three configurable “chains” are used for handling network traffic: input, output, and forward. The input chain contains rules for traffic that is coming into the local system. The output chain contains rules for traffic that is leaving the local system. The forward chain contains rules for traffic that was received by the local system but is not destined for the local system. Iptables is the latest evolution of ipchains. Iptables uses the same three chains for policy rules and traffic handling as ipchains, but with iptables each packet is processed only by the appropriate chain. Under ipchains, each packet passes through all three chains for processing. With iptables, incoming packets are processed only by the input chain, and packets leaving the system are processed only by the output chain. This allows for more granular control of network traffic and enhances performance.
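A minimal iptables sketch of the default-deny idea, using example internal addresses (real rule sets also need loopback and outbound policies, which are omitted here for brevity):

# Drop inbound traffic by default
iptables -P INPUT DROP

# Allow return traffic for connections the host itself initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Permit new SSH connections from the internal network only
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT

# Review the resulting rule set
iptables -L -n -v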

In addition to the “free” firewalls that come bundled with OSs, many commercial personal firewall packages are available. Programs such as ZoneAlarm from Check Point Software Technologies provide or bundle additional capabilities not found in some bundled software firewalls. Many commercial software firewalls limit inbound and outbound network traffic, block pop-ups, detect adware, block cookies, block malicious processes, and scan instant messenger traffic. While you can still purchase or even download a free software-based personal firewall, most commercial vendors are bundling the firewall functionality with additional capabilities such as antivirus and antispyware.

Microsoft Windows has had a personal software firewall since Windows XP SP2. Windows Firewall is now part of Windows Defender (see Figure 14.8), is enabled by default, and provides warnings when disabled. Windows Firewall is fairly configurable; it can be set up to block all traffic, to make exceptions for traffic you want to allow, and to log rejected traffic for later analysis.

images


Figure 14.8   Windows Firewall is enabled by default.

With the introduction of the Vista operating system, Microsoft modified Windows Firewall to make it more capable and configurable. More options were added to allow for more granular control of network traffic as well as the ability to detect when certain components are not behaving as expected. For example, if your Microsoft Outlook client suddenly attempts to connect to a remote web server, Windows Firewall can detect this as a deviation from normal behavior and block the unwanted traffic.

Hardware Security


Certain hardware protection mechanisms should be employed to safeguard information in servers, workstations, and mobile devices. Cable locks can be employed on mobile devices to prevent their theft. Locking cabinets and safes can be used to secure portable media, USB drives, and CDs/DVDs. Physical security is covered in more detail in Chapter 8.

Images

Physical security is an essential element of a security plan. Unauthorized access to hardware and networking components can make many security controls ineffective.

Images Network Hardening

While considering the baseline security of systems, you must consider the role the network connection plays in the overall security profile. The tremendous growth of the Internet and the affordability of multiple PCs and Ethernet networking have resulted in almost every computer being attached to some kind of network, and once computers are attached to a network, they are open to access from any other user on that network. Proper controls over network access must be established on computers by controlling the services that are running and the ports that are opened for network access. In addition to servers and workstations, however, network devices must also be examined: routers, switches, and modems, as well as various other components.

These network devices should be configured with very strict parameters to maintain network security. Like normal computer OSs that need to be patched and updated, the software that runs network infrastructure components needs to be updated regularly. Finally, an outer layer of security should be added by implementing appropriate firewall rules and router ACLs.

Network Devices, NAT, and Security

Chapter 9 discussed NAT (Network Address Translation). How do network devices that perform NAT services help secure private networks from Internet-based attacks?

Software Updates

Maintaining current vendor patch levels for your software is one of the most important things you can do to maintain security. This is also true for the infrastructure that runs the network. While some equipment is unmanaged and typically has no network presence and few security risks, any managed equipment that is responding on network ports will have some software or firmware controlling it. This software or firmware needs to be updated on a regular basis.

The most common device that connects people to the Internet is the network router. Dozens of brands of routers are available on the market, but Cisco Systems products dominate. The popular Cisco Internetwork Operating System (IOS) runs on more than 70 of Cisco’s devices and is installed countless times at countless locations. Its popularity has fueled research into vulnerabilities in the code, and over the past few years quite a few vulnerabilities have been reported. These vulnerabilities can take many forms because routers send and receive several different kinds of traffic, from the standard Telnet remote terminal, to routing information in the form of Routing Information Protocol (RIP) or Open Shortest Path First (OSPF) packets, to Simple Network Management Protocol (SNMP) packets. This highlights the need to update the Cisco IOS software on a regular basis.

Images

Although we focus on Cisco in our discussion, it’s important to note that every network device, regardless of the manufacturer, needs to be maintained and patched to remain secure.

Cisco IOS also runs on many of its Ethernet switching products. Like routers, these have capabilities for receiving and processing protocols such as Telnet and SNMP. Smaller network components do not usually run large software suites and typically have smaller software loaded on internal nonvolatile RAM (NVRAM). While the update process for this kind of software is typically called a firmware update, this does not change the security implications of keeping it up to date. In the case of a corporate network with several devices, someone must take ownership of updating the devices, and updates must be performed regularly according to security and administration policies.

Device Configuration

As important as it is to keep software up to date, properly configuring network devices is equally, if not more, important. Many network devices, such as routers and switches, now have advanced remote management capabilities, with multiple open ports accepting network connections. Proper configuration is necessary to keep these devices secure. Choosing a good password is very important in maintaining external and internal security, and closing or limiting access to any open ports is also a good step for securing the devices. On the more advanced devices, you must carefully consider what services the device is running, just as with a computer. Here are some general steps to take when securing networking devices:

Images   Limit access to only those who need it. If your networking device allows management via a web interface, SSH, or any other method, limit who can connect to those services. Many networking devices allow you to specify which IP addresses are allowed to connect to those management services.

Images   Choose good passwords. Always change default passwords and follow good password-selection guidelines. If the device supports encryption, ensure passwords are stored in encrypted format on the device.

Images   Password-protect the console and remote access. If the device supports password protection, ensure that all local and remote access capabilities are password protected.

Images   Turn off unnecessary services. If your networking equipment supports Telnet but your organization doesn’t need it, turn that service off. It’s always a good idea to disable or remove unused services. Your device may also support the use of ACLs to limit access to services such as Telnet and SSH on the device itself.

Images   Change the SNMP community strings. SNMP is widely used to manage networking equipment and typically supports a “public” string, which usually can only read information from a device, and a “private” string, which can often read and write to a device’s configuration. Some manufacturers use default or well-known strings (such as “public” for the public string). Therefore, you should always change both the public and private strings if you are using SNMP.

Images

The use of “public” as an SNMP community string is an extremely well-known vulnerability. Any system using an SNMP community string of “public” should have the string changed immediately.
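A quick way to check for this condition—assuming the Net-SNMP command-line tools are installed and you are authorized to query the device (192.0.2.1 is a placeholder address)—is to see whether the device answers a query that uses the default string:

# If this returns system information, the device still accepts the default community string
snmpwalk -v2c -c public 192.0.2.1 system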

Securing Management Interfaces

Some network security devices have “management interfaces” that allow for remote management of the devices themselves. Often seen on firewalls, routers, and switches, a management interface allows connections to the device’s management application, an SSH service, or even a web-based configuration GUI—connections that are not allowed on any other interface. Due to this high level of access, management interfaces and management applications must be secured against unauthorized access. They should not be connected to public network connections (the Internet) or to DMZ connections. Where possible, access to management interfaces and applications should be restricted within an organization so employees without the proper access rights and privileges cannot even connect to those interfaces and applications.

VLAN Management

A virtual LAN, or VLAN, is a group of hosts that communicate as if they were on the same broadcast domain. A VLAN is a logical construct that can be used to help control broadcast domains, manage traffic flow, and restrict traffic between organizations, divisions, and so on. Layer 2 switches, by definition, will not bridge IP traffic across VLANs, which gives administrators the ability to segment traffic quite effectively. For example, if multiple departments are connected to the same physical switch, VLANs can be used to segment the traffic such that one department does not see the broadcast traffic from the other departments. By controlling the members of a VLAN, administrators can logically separate network traffic throughout the organization.

Network Segmentation

Network segmentation is the use of network addressing schemes to restrict machine-to-machine communication to within specific boundaries. It uses the network structure and protocols themselves to limit communication. This can prevent outside attackers from accessing machines, even if they have stolen credentials, because the network will not connect the attacker’s machine to the target machine.

IPv4 vs. IPv6

IPv4 (Internet Protocol version 4) is the de facto communication standard in use on almost every network around the planet. Unfortunately, IPv4 contains some inherent shortcomings and vulnerabilities. In an effort to address these issues, the Internet Engineering Task Force (IETF) launched an effort to update or replace IPv4; the result is IPv6. Using a new packet format and a much larger address space, IPv6 is designed to speed up packet processing by routers and to supply roughly 3.4 × 10^38 possible addresses (IPv4 uses only 32 bits for addressing; IPv6 uses 128 bits). Additionally, IPv6 has security “built in,” with mandatory support for network layer security: IPsec, which is optional (though widely adopted) under IPv4, is a mandatory part of IPv6. The issue now is one of conversion. IPv4 and IPv6 networks cannot talk directly to each other and must rely on some type of gateway. Many operating systems and devices currently support dual IP stacks and can run both IPv4 and IPv6. While adoption of IPv6 is proceeding, it is moving slowly and has yet to gain a significant foothold.

Images

If your network is not using IPv6, then you should disable IPv6 on all clients and servers to prevent malicious traffic from using this protocol to bypass security devices. This follows the principle of “if you are not using something, disable it.”
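On a Linux host, one common way to do this is through kernel sysctl settings (assuming the kernel exposes these parameters); Windows clients are typically handled through adapter settings or group policy instead:

# Disable IPv6 on all interfaces for the running system
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Make the change persistent across reboots
echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf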

Images Application Hardening

Perhaps as important as OS and network hardening is application hardening—securing an application against local and Internet-based attacks. Hardening applications is fairly similar to hardening operating systems—you remove the functions or components you don’t need, restrict access where you can, and make sure the application is kept up to date with patches. In most cases, the last step in that list is the most important for maintaining application security. After all, applications must be accessible to users; otherwise, they serve no purpose. As most problems with applications tend to be buffer overflows in legitimate user input fields, patching the application is often the only way to secure it from attack.

Port Scanners

To find out what services are open on a given host or network device, many administrators use a tool called a port scanner. A port scanner is a tool designed to probe remote systems for open TCP and UDP services. Nmap is a very popular (and free) port scanner (see http://nmap.org).
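A few representative Nmap invocations are shown below (192.0.2.10 and 192.0.2.0/24 are placeholder addresses); SYN scans generally require root privileges, and you should only scan systems you are authorized to test:

# TCP SYN scan of the well-known ports on a single host
nmap -sS -p 1-1024 192.0.2.10

# Add a UDP scan and service-version detection (slower, but more informative)
nmap -sS -sU -sV -p 1-1024 192.0.2.10

# Sweep an entire subnet for live hosts and their open ports
nmap -sS 192.0.2.0/24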

Application Configuration Baseline

As with operating systems, applications (particularly those providing public services such as web servers and mail servers) will have recommended security and functionality settings. In some cases, vendors will provide those recommended settings, and, in other cases, an outside organization such as NSA, ISSA, or SANS will provide recommended configurations for popular applications. Many large organizations will develop their own application configuration baseline—the list of settings, tweaks, and modifications that creates a functional and hopefully secure application for use within the organization. Developing an application baseline and using it any time that application is deployed within the organization helps to ensure a consistent (and hopefully secure) configuration across the organization.

Application Patches

As obvious as this seems, application patches are most likely going to come from the vendor that sells the application. After all, who else has access to the source code? In some cases, such as with Microsoft’s IIS, this is the same company that sold the OS that the application runs on. In other cases, such as Apache, the vendor is OS independent and provides an application with versions for many different OSs.

Application patches are likely to come in three varieties: hotfixes, patches, and upgrades. As described for OSs earlier in the chapter, hotfixes are usually small sections of code designed to fix a specific problem. For example, a hotfix may address a buffer overflow in the login routine for an application. Patches are usually collections of fixes, tend to be much larger, and are usually released on a periodic basis or whenever enough problems have been addressed to warrant a patch release. Upgrades are another popular method of patching applications, and they tend to be presented with a more positive spin than patches. Even the term upgrade has a positive connotation—you are moving up to a better, more functional, and more secure application. For this reason, many vendors release “upgrades” that consist mainly of fixes rather than new or enhanced functionality.

Images

Some application “patches” contain new or enhanced functions, and some change user-defined settings back to defaults during installation of the patch. If you are deploying an application patch across a large group of users, it is important to understand exactly what that application patch really does. Patches should first be tested in a nonproduction environment before deployment to determine exactly how they affect the system and the network it is connected to.

Patch Management

In the early days of network computing, things were easy—fewer applications existed, vendor patches came out annually or quarterly, and access was restricted to authorized individuals. Updates were few and easy to handle. Now application and OS updates are pushed constantly as vendors struggle to provide new capabilities, fix problems, and address vulnerabilities. Microsoft created “Patch Tuesday” in an effort to condense the update cycle and reduce the effort required to maintain its products, and has now gone to continuous patching of its newest OS. As the number of patches continues to rise, many organizations struggle to keep up with patches—which patches should be applied immediately, which are compatible with the current configuration, which will not affect current business operations, and so on. To help cope with this flood of patches, many organizations have adopted patch management, the process of planning, testing, and deploying patches in a controlled manner.

Images

Patch management is the process of planning, testing, and deploying patches in a controlled manner.

Patch management is a disciplined approach to the acquisition, testing, and implementation of OS and application patches and requires a fair amount of resources to implement properly. To implement patch management effectively, you must first have a good inventory of the software used in your environment, including all OSs and applications. Then you must set up a process to monitor for updates to those software packages. Many vendors provide the ability to update their products automatically or to automatically check for updates and inform the user when updates are available.
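On Linux hosts, the package manager itself can provide that monitoring step; the commands differ by distribution (Windows hosts are typically handled through WSUS or similar tools, discussed later in this section):

# Red Hat–style systems: list packages with updates available
yum check-update

# Debian/Ubuntu-style systems: refresh package lists, then preview the upgrade
apt-get update
apt-get --simulate upgrade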

Patch Management Solutions

Keeping track of current patch levels in a system or group of systems can be a daunting job. There are a variety of software solutions to assist administrators in this task. One of these programs is Secunia Personal Software Inspector (PSI), at http://secunia.com. This program, which is free for personal use, will track updates for applications installed on a machine.

Images

Keeping track of patch availability is merely the first step; in many environments, patches must be analyzed and tested. Does the patch apply to the software you are running? Does the patch address a vulnerability or critical issue that must be fixed immediately? What is the impact of applying that patch or group of patches? Will it break something else if you apply this patch? To address these issues, it is recommended that you use development or test platforms, where you can carefully analyze and test patches before placing them into a production environment. Although patches are generally “good,” they are not always exhaustively tested; some have been known to “break” other products or functions within the product being patched, and others have introduced new vulnerabilities while attempting to address an existing vulnerability. The extent of analysis and testing varies widely from organization to organization. Testing and analysis will also vary depending on the application or OS and the extent of the patch.

Once a patch has been analyzed and tested, administrators have to determine when to apply the patch. Because many patches require a restart of applications or services or even a reboot of the entire system, most operational environments apply patches only at specific times, to reduce downtime and possible impact and to ensure administrators are available if something goes wrong. Many organizations will also have a rollback plan that allows them to recover the systems back to a known-good configuration prior to the patch, in case the patch has unexpected or undesirable effects. Some organizations require extensive coordination and approval of patches prior to implementation, and some institute “lockout” dates where no patching or system changes (with few exceptions) can be made, to ensure business operations are not disrupted. For example, an e-commerce site might have a lockout between the Thanksgiving and Christmas holidays to ensure the site is always available to holiday shoppers.

Production Patching

Patching of production systems brings risk in the change process. This risk should be mitigated via a change management process. Change management is covered in detail in Chapter 21. Patching of production systems should follow the enterprise change management process.

With any environment, but especially with larger environments, it can be a challenge to track and document the update status of every desktop and server in the organization. However, with a disciplined approach, training, policies, and procedures, even the largest environments can be managed. To assist in their patch-management efforts, many organizations use a patch-management product that automates many of the mundane and manpower-intensive tasks associated with patch management. For example, many patch-management products provide the following:

Patch Availability

Software vendors update software and eventually end support for older versions. Software that has reached end of life can represent a threat to security as it is no longer being patched against problems as they are discovered. This same outcome can result from a vendor going out of business. Software in these cases should be carefully monitored for increased risk to the enterprise.

Images   Ability to inventory applications and operating systems in use

Images   Notification of patches that apply to your environment

Images   Periodic or continual scanning of systems to validate patch status and identify missing patches

Images   Ability to select which patches to apply and to which systems to apply them

Images   Ability to push patches to systems on an on-demand or scheduled basis

Images   Ability to report patch success or failure

Images   Ability to report patch status on any or all systems in the environment

Patch management solutions can also be useful to satisfy audit or compliance requirements, as they can show a structured approach to patch management, show when and how systems are patched, and provide a detailed accounting of patch status within the organization.

Microsoft provides a free patch management product called Windows Server Update Services (WSUS), shown in Figure 14.9. Using the WSUS product, administrators can manage updates for any compatible Windows-based system in their organization. The WSUS product can be configured to download patches automatically from Microsoft based on a variety of factors (such as OS, product family, criticality, and so on). When updates are downloaded, the administrator can determine whether or not to push out the patches and when to apply them to the systems in their environment. The WSUS product can also help administrators track patch status on their systems, which is a useful and necessary feature.

images


Figure 14.9   Windows Server Update Services

Host Software Baselining

To secure, configure, and patch software, administrators must first know what software is installed and running on systems. Maintaining an accurate picture of what operating systems and applications are running inside an organization can be a very labor-intensive task for administrators—especially if individual users have the ability to load software onto their own servers and workstations. To address this issue, many organizations develop software baselines for hosts and servers. Sometimes called “default,” “gold,” or “standard” configurations, a software baseline contains all the approved software that should appear on a desktop or server within the organization. While software baselines can differ slightly due to disparate needs between groups of users, the more “standard” a software baseline becomes, the easier it will be for administrators to secure, patch, and maintain systems within the organization.
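One lightweight way to spot drift from a software baseline on Linux hosts is to compare installed-package lists against the approved list; this Red Hat–style sketch (Debian-based systems would use dpkg -l, and Windows hosts need a separate inventory tool) assumes the baseline file was captured from a reference build:

# On the reference (“gold”) system, capture the approved package list once
rpm -qa | sort > baseline-packages.txt

# On any host built from that baseline, list what is installed now
rpm -qa | sort > current-packages.txt

# Anything added or removed since the baseline shows up in the difference
diff baseline-packages.txt current-packages.txt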

Vulnerability Scanner

A vulnerability scanner is a program designed to probe hosts for weaknesses, misconfigurations, old versions of software, and so on. There are essentially three main categories of vulnerability scanners: network, host, and application.

Images

Due to the number of checks they can perform, network scanners can generate a great deal of traffic and a large number of connections to the systems being examined, so care should be taken to minimize the impact on production systems and production networks.

A network vulnerability scanner probes a host (or hosts) for issues across its network connections. Typically a network scanner will either contain or use a port scanner to perform an initial assessment of the network to determine which hosts are alive and which services are open on those hosts. Each system and service is then probed. Network scanners are very broad tools that can run potentially thousands of checks, depending on the OS and services being examined. This makes them a very good “broad sweep” for network-visible vulnerabilities.
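
As a rough illustration of the initial discovery step described above, the following is a minimal Python sketch of a TCP connect scan against a handful of common ports. It is not a substitute for a real scanner such as Nessus, and it should only be run against hosts you are authorized to test; the target address and port list are placeholders.

import socket

COMMON_PORTS = [22, 25, 80, 110, 143, 443, 3389]  # a small, illustrative port list

def scan_host(address, ports, timeout=0.5):
    """Attempt a TCP connection to each port and report which ones accept it."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((address, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    target = "192.0.2.10"  # placeholder address; use only an authorized target
    print(f"Open ports on {target}: {scan_host(target, COMMON_PORTS)}")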

Network scanners are essentially the equivalent of a Swiss Army knife for assessments. They do lots of tasks and are extremely useful to have around, but they might not be as good as a tool dedicated to examining one specific type of service. However, if you can only run a single tool to examine your network for vulnerabilities, you’ll want that tool to be a network vulnerability scanner. Figure 14.10 shows a screenshot of Nessus from Tenable Network Security, a very popular network vulnerability scanner.

images


Figure 14.10   Nessus—a network vulnerability scanner

Bottom line: If you need to perform a broad sweep for vulnerabilities on one or more hosts across the network, a network vulnerability scanner is the right tool for the job.

Host vulnerability scanners are designed to run on a specific host and look for vulnerabilities and misconfigurations on that host. Host scanners tend to be more specialized because they’re looking for issues associated with a specific operating system or set of operating systems. A good example of a host scanner is the Microsoft Baseline Security Analyzer (MBSA), shown in Figure 14.11. MBSA is designed to examine the security state of a Windows host and offer guidance to address any vulnerabilities, misconfigurations, or missing patches. Although MBSA can be run against remote systems across the network, it is typically run on the host being examined and requires you to have access to that local host (at the Administrator level). The primary thing to remember about host scanners is that they are typically looking for vulnerabilities on the system they are running on.

images


Figure 14.11   Microsoft Baseline Security Analyzer

Images

If you want to scan a specific host for vulnerabilities, weak password policies, or unchanged passwords, and you have direct access to the host, a host vulnerability scanner might be just the tool to use.

Selecting the right type of vulnerability scanner isn’t that difficult. Just focus on what types of vulnerabilities you need to scan for and how you will be accessing the host/services/applications being scanned. It’s also worth noting that to do a thorough job, you will likely need both network-based and host-based scanners—particularly for critical assets. Host- and network-based scanners perform different tests and provide visibility into different types of vulnerabilities. If you want to ensure the best coverage, you’ll need to run both.

Application vulnerability scanners are designed to look for vulnerabilities in applications or certain types of applications. Application scanners are some of the most specialized scanners—even though they contain hundreds or even thousands of checks, they only look for misconfigurations or vulnerabilities in a specific type of application. Arguably the most popular type of application scanners are designed to test for weaknesses and vulnerabilities in web-based applications. Web applications are designed to be visible, interact with users, and accept and process user input—all things that make them attractive targets for attackers. More details on application vulnerability scanners can be found in Chapter 18.

Images

If you want to examine a specific application or multiple instances of the same type of application (such as a web site), an application scanner is the tool of choice.

Images Data-Based Security Controls

Security controls can be implemented on a host machine for the express purpose of providing data protection on the host. This section explores methods to implement the appropriate controls to ensure data security.

Data Security

Data or information is the most important element to protect in the enterprise. Equipment can be purchased, replaced, and shared without consequence; it is the information that is being processed that has the value. Data security refers to the actions taken in the enterprise to secure data, wherever it resides: in transit, at rest, or in use.

Data in Transit

Data has value in the enterprise, but for the enterprise to fully realize that value, data elements need to be shared and moved between systems. Whenever data is in transit, being moved from one system to another, it needs to be protected. The most common method of protection is encryption. What is important is to ensure that data is always protected in proportion to the degree of risk associated with a data security failure.
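
As a simple illustration of encrypting data in transit, the sketch below wraps a TCP connection in TLS using Python's standard ssl module. The host name is a placeholder; the point is only that the application data is encrypted on the wire, with the strength of protection matched to the risk involved.

import socket
import ssl

def fetch_over_tls(host, port=443):
    """Open a TLS-protected connection and send a minimal HTTP request."""
    context = ssl.create_default_context()  # validates the server certificate by default
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            request = b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n"
            tls_sock.sendall(request)
            return tls_sock.recv(4096)

if __name__ == "__main__":
    print(fetch_over_tls("example.com").decode(errors="replace"))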

Data at Rest

Data at rest refers to data being stored. Data is stored in a variety of formats: in files, in databases, and as structured elements. Whether in ASCII, XML, JavaScript Object Notation (JSON), or a database, and regardless of the media on which it is stored, data at rest still requires protection commensurate with its value. Again, as with data in transit, encryption is the best means of protection against unauthorized access or alteration.

Data in Use

Data is processed in applications, is used for various functions, and can be at risk while in system memory or even during the act of processing. Protecting data while in use is a much trickier proposition than protecting it in transit or at rest. While encryption can be used in those other situations, it is not practical to perform operations on encrypted data. This means that other measures need to be taken to protect the data. Protected memory schemes and address space layout randomization are two tools that can be used to prevent data security failures during processing. Secure coding principles, including the definitive wiping of critical data elements once they are no longer needed, can also assist in protecting data in use.

Images

Understanding the need to protect data in all three phases—in transit, at rest, and in use—is an important concept for the exam. The first step is to identify the phase the data is in, and the second is to identify the correct means of protection for that phase.

Data Encryption

Data encryption continues to be the best solution for data security. Properly encrypted, the data is not readable by an unauthorized party. There are numerous ways to enact this level of protection on a host machine.

Full Disk

Full disk encryption refers to the act of encrypting an entire partition in one operation. Then as specific elements are needed, those particular sectors can be decrypted for use. This offers a simple convenience factor and ensures that all of the data is protected. It does come at a performance cost, as the act of decrypting and encrypting takes time. For some high-performance datastores, especially those with latency issues, this performance hit may be critical. Although better performance can be achieved with specialized hardware, as with all security controls there needs to be an evaluation of the risk involved versus the costs.

Database

Major database engines have built-in encryption capabilities. The advantage to these encryption schemes is that they can be tailored to the data structure, protecting the essential columns while not impacting columns that are not sensitive. Properly employing database encryption requires that the data schema and its security requirements be designed into the database implementation. The advantage is in better protection against any database compromise, and the performance hit is typically negligible with respect to other alternatives.

Individual Files

Individual files can also be encrypted in a system. This can be done either at the OS level or via a third-party application. Managing individual file encryption can be tricky, as the problem shifts to one of encryption key security. When using the built-in encryption methods of an OS, the key issue is resolved by the OS itself, with a single key being employed and stored with the user credentials. One of the advantages of individual file encryption comes when transferring data to another user. Transporting a single file via an unprotected channel such as e-mail can be done securely with single-file encryption.
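
As an illustration of single-file encryption, the following sketch uses the Fernet construction from the third-party cryptography package. The file names are placeholders, and in practice the key must itself be stored and exchanged securely (for example, protected by user credentials or a key-management system), since whoever holds the key can read the file.

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

def encrypt_file(path, key):
    """Encrypt the contents of a file and write them alongside it with a .enc suffix."""
    with open(path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(path + ".enc", "wb") as f:
        f.write(ciphertext)

def decrypt_file(path, key):
    """Reverse encrypt_file, returning the original plaintext bytes."""
    with open(path, "rb") as f:
        return Fernet(key).decrypt(f.read())

if __name__ == "__main__":
    key = Fernet.generate_key()       # store this securely; losing it means losing the data
    encrypt_file("report.docx", key)  # placeholder file name
    print(decrypt_file("report.docx.enc", key)[:20])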

USB Encryption

Universal Serial Bus (USB) offers an easy connection mechanism to connect devices to a computer. This acts as the mechanism of transport between the computer and an external device. When data traverses the USB connection, it typically ends up on a portable device and thus requires an appropriate level of security. Many mechanisms exist, from encryption on the USB device itself, to OS-enabled encryption, to independent encryption before the data is moved. Each of these mechanisms has advantages and disadvantages, and it is ultimately up to the user to choose the best method based on the sensitivity of the data.

Mobile Devices

Mobile device security, covered in detail in Chapter 12, is also essential when critical or sensitive data is transmitted to mobile devices. The protection of mobile devices goes beyond simple encryption of the data, as the device can act as an authorized endpoint for the system, opening up avenues of attack.

Handling Big Data

Big data is the industry buzzword for the very large datasets now being used in many enterprises. Datasets in the petabyte, exabyte, and even zettabyte range are being explored in some applications. Datasets of these sizes require special hardware and software to handle them, but this does not alleviate the need for security. Planning for security on this scale requires enterprise-level thinking, but it is worth noting that eventually some subset of the information makes its way to a host machine for use. It is at this point that the data is vulnerable, because whatever protection scheme is in place on the large storage system no longer applies once the data leaves it. This means that local protection mechanisms, such as those provided by Kerberos-based authentication, can be critical in managing this type of protection scheme.

Cloud Storage

Cloud computing is the use of online resources for storage, processing, or both. When data is stored in the cloud, encryption can be used to protect the data, so that what is actually stored is encrypted data. This reduces the risk of data disclosure both in transit to the cloud and back, as well as while in storage.

Storage Area Network

A storage area network (SAN) is a means of storing data across a secondary dedicated network. SANs operate to connect data storage devices as if they were local storage, yet they are separate and can be collections of disks, tapes, and other storage devices. Because the dedicated network is separate from the normal IP network, accessing the SAN requires going through one of the attached machines. This makes SANs a bit more secure than other forms of storage, although loss through a compromised client machine is still a risk.

Permissions/ACL

Access control lists (ACLs) form one of the foundational bases for security on a machine. ACLs can be used by the operating system to make determinations as to whether or not a user can access a resource. This level of permission restriction offers significant protection of resources and transfers the management of the access control problem to the management of ACLs, a smaller and more manageable problem.

Permissions Issues

Permissions are the cornerstone of security, and ACLs are how they are enforced. ACL mistakes and failures result in improperly configured permissions, one of the most common errors in security. This is a problem to keep in mind throughout the material in this book. One question that should be at the forefront of the professional's mind, both when configuring and when testing, is this: are the permissions set correctly?
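
Because misconfigured permissions are so common, automated checks help answer that question. The following is a minimal Python sketch for POSIX-style permissions that flags world-writable files under a directory; the path is a placeholder, and Windows ACLs would require a different API.

import os
import stat

def find_world_writable(root):
    """Walk a directory tree and report files whose mode grants write access to everyone."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip files we cannot stat
            if mode & stat.S_IWOTH:  # the "other" write bit is set
                findings.append(path)
    return findings

if __name__ == "__main__":
    for path in find_world_writable("/srv/app"):  # placeholder directory
        print(f"World-writable file: {path}")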

Images Environment

A modern environment is separated into multiple areas designed to isolate the functions of development, test, and production. These areas are primarily used to prevent accidents arising from untested code ending up in production, and they are segregated by access control lists as well as by hardware, preventing users from accessing different areas of the environment. Special accounts are used to move code between these areas in order to eliminate issues of crosstalk.

Development

A development system is one that is sized, configured, and set up for developers to create applications and systems. The development hardware does not have to scale like production, and it probably does not need to be as responsive for certain transactions. The development platform does need to be of the same type of system, because developing on Windows and deploying to Linux is fraught with difficulties that can be avoided by matching development environments to production in terms of OS type and version. After code is successfully developed, it is moved to a test system.

Test

The test environment is one that fairly closely mimics the production environment, with the same versions of software (down to the patch level), same sets of permissions, file structures, and so on. The purpose of the test environment is to enable a system to be fully tested prior to being deployed into production. The test environment might not scale like production, but from the viewpoint of a software/hardware footprint, it looks exactly like production.

Staging

The staging environment is an optional environment, but it is commonly found when there are multiple production environments. After passing test, the system moves into staging, where it can be deployed to the different production systems. The primary purpose of staging is as a sandbox after test, so the test system can test the next set while the current set is deployed across the enterprise. One method of deployment is a staged deployment, where software is deployed to part of the enterprise and then paused to watch for unforeseen problems. If none occur, the deployment continues, stage by stage, until all of the production systems are changed. By moving software in this manner, you never lose the old production system until the end of the move, giving you time to judge and catch any unforeseen problems. This also prevents the total loss of production to a failed update.

Production

Production is the environment where the systems work with real data, doing the business that the system is supposed to perform. This is an environment where there are by design virtually no changes, except as approved and tested via the system’s change management process.

Images

Understand the different environments so that when a question is asked, you can determine the correct context from the question and pick the best environment—development, test, staging, or production—to answer the question.

Images Automation/Scripting

Automation and scripting are valuable tools that enable system administrators and others to safely and efficiently execute tasks. Although many tasks can be performed by simple command-line execution or through GUI menu operations, the use of scripts has two advantages. First, prewritten and tested scripts remove the chance of error, whether a typo or a click on the wrong option. Errors are common and can take significant time to undo; for example, an accidentally erased file or directory takes time to locate and restore from a backup. Second, scripts can be chained together to provide a means of automating actions.

Automation is a major element of an enterprise security program. There is an entire set of protocols, standards, methods, and architectures developed to support automation. The security community has developed automation methods associated with vulnerability management, including the Security Content Automation Protocol (SCAP), Common Vulnerabilities and Exposures (CVE), and more. Details can be found at http://measurablesecurity.mitre.org/.

Automated Courses of Action

Scripts are the best friend of administrators, analysts, investigators, and any other professional who values efficient and accurate technical work. Scripts allow you to automate courses of action, with the steps tested and, when necessary, approved in advance. Scripts and automation are important enough that they are specified in the National Institute of Standards and Technology (NIST) Special Publication 800-53 series, which specifies security controls. For instance, under patching, the controls specify not only an automated method of determining which systems need patches, but also that the patching mechanism itself be automated. Automated courses of action reduce errors.

Automated courses of action can save time as well. If, during an investigation, you need to take an image of a hard drive on a system, calculate hash values, and record all of the details in a file for chain of custody, all of this can be done in just a few command lines, or with a single script that has been tested and approved for use.
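
A hedged sketch of that idea in Python is shown below: it hashes an existing disk image and appends the details to a chain-of-custody log. The file names are placeholders, and an actual investigation would use validated forensic tooling and follow the organization's approved procedures.

import hashlib
import json
from datetime import datetime, timezone

def hash_image(image_path, algorithm="sha256", chunk_size=1024 * 1024):
    """Compute a hash of a disk image without loading the whole file into memory."""
    digest = hashlib.new(algorithm)
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_custody(image_path, examiner, log_path="chain_of_custody.jsonl"):
    """Append a timestamped custody record, including the image hash, to a log file."""
    entry = {
        "image": image_path,
        "sha256": hash_image(image_path),
        "examiner": examiner,
        "recorded_utc": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

if __name__ == "__main__":
    print(record_custody("evidence/drive01.img", "J. Analyst"))  # placeholder values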

Continuous Monitoring

Continuous monitoring is the term used to describe a system that has monitoring built into it, so rather than monitoring being an external event that may or may not happen, monitoring is an intrinsic aspect of the action. From a big-picture point of view, continuous monitoring is the name used to describe a formal risk assessment process that follows the NIST Risk Management Framework (RMF) methodology. Part of that methodology process is the use of security controls. Continuous monitoring is the operational process by which you can monitor and know if controls are functioning in an effective manner.

As most enterprises have a large number of systems and an even larger number of security controls, part of an effective continuous monitoring plan is the automated handling of the continuous monitoring status data, to facilitate its consumption in a meaningful manner. Automated dashboards and alerts that flag out-of-standard conditions allow operators to focus on the parts of the system that need attention rather than wading through literally tons of data.

Configuration Validation

Configuration validation is a challenge as systems age and change over time. When a system is placed into service, its configuration should be validated against security standards, ensuring that the system will do what it is supposed to do, and only what it is supposed to do, with no added functionality: all extra ports, services, accounts, and so on are disabled, removed, or turned off, and the configuration files, including the ACLs for the system, are correct and working as designed.

Images

Automation/scripting plays a key role in automated courses of action, continuous monitoring, and configuration validation. These elements work together. On the exam, read the context of the question carefully and determine what specifically it is asking, as this will identify the best answer from the related options.

Over time, things change: software is patched, and other things are added to or removed from the server. Updates to the application, the OS, and even other applications on the server change the configuration. Is the configuration still valid? How do you monitor all of your machines to ensure valid configurations? Automated testing is a method that can scale and resolve this issue, making it just another part of the continuous monitoring system. Any manual method eventually fails because of fluctuating priorities that result in routine maintenance being deferred.
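
One simple, automatable check is to compare what a system actually exposes against its approved baseline. The sketch below compares a host's listening TCP ports against an expected list; the baseline and the third-party psutil dependency are assumptions, and a full continuous monitoring solution would also cover services, accounts, ACLs, and more.

import psutil  # third-party package: pip install psutil

# Hypothetical baseline: the only TCP ports this server is approved to listen on.
APPROVED_PORTS = {22, 443}

def listening_ports():
    """Return the set of TCP ports currently in the LISTEN state on this host."""
    return {
        conn.laddr.port
        for conn in psutil.net_connections(kind="tcp")
        if conn.status == psutil.CONN_LISTEN
    }

def validate_configuration():
    """Report any listening ports that are not part of the approved baseline."""
    unexpected = listening_ports() - APPROVED_PORTS
    if unexpected:
        print(f"Configuration drift detected; unapproved ports: {sorted(unexpected)}")
    else:
        print("Listening ports match the approved baseline.")

if __name__ == "__main__":
    validate_configuration()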

Templates

Templates are master recipes for the building of objects, be they servers, programs, or even entire systems. Templates contain all of the required configuration options and setup controls enabling the automation of item deployment. You can have multiple templates for a given service, each tailored to different requirements or circumstances. The end result is that you have predefined the setup and deployment options for the item, whether hardware or software. Security templates can provide directions for securely provisioning a system.

Templates are what make Infrastructure as a Service (IaaS) possible. Establishing a business relationship with an IaaS firm is the time-consuming part: the firm needs to collect billing information, and you need to review a lot of terms and conditions with your legal team. The part you actually want, though, is the standing up of some piece of infrastructure, say, for example, a LAMP stack. A LAMP stack is a popular open source web platform that is ideal for running dynamic sites; it is composed of Linux, Apache, MySQL, and PHP/Python/Perl, hence the term LAMP. You want this server to be secured, patched, and provisioned with specific accounts for access. You fill out a web form, which uses your information to match an appropriate template, specify all the conditions, and click the Create button. If you were going to stand this up on your own, on in-house hardware, it might take days to configure all of these elements from scratch. Instead, the IaaS firm uses templates and master images, and your solution is online within minutes, or even seconds. If you have special needs, it might take a bit longer, but you get the idea: templates allow for the rapid, error-free creation of items such as configurations, the connection of services, testing, and deployment.

Master Image

Master images are premade, fully patched images of systems. A master image, in the form of a virtual machine, can be configured and deployed in seconds to replace a system that has become tainted or is untrustworthy because of an incident. Master images provide the true, clean backup of the operating system, applications, and everything else except the data. When you architect your enterprise to take advantage of master images, you make many administrative tasks easier to automate, easier to do, and substantially less error-prone. Should an error be found, you have one image to fix and then deploy. Master images work very well for enterprises with multiple desktops, because you can create a master image that can be quickly deployed on new or repaired machines, bringing the systems to an identical and fully patched condition.

Images

Master images are key elements of template-based systems and, together with automation and scripting, make many previously laborious and error-prone tasks fast, efficient, and error-free. Understanding the role each of these technologies plays is important when examining the context of a question on the exam. Be sure to answer what is being asked, because all three may play a role in the issue, but only one is the part being asked about.

Nonpersistence

Nonpersistence means that a change to a system is not permanent. Making a system nonpersistent can be a useful tool when you wish to prevent certain types of malware attacks, for example. A system that has been made nonpersistent is not able to save changes to its configuration, its apps, or anything else. There are utility programs that can freeze a machine against change, in essence making it nonpersistent. This is useful for machines deployed in places where users can invoke changes, download content from the Internet, and so on. Nonpersistence offers a means for the enterprise to address these risks by not letting the changes happen in the first place. In some respects this is similar to whitelisting, which only allows approved applications to run.

Snapshots

Snapshots are instantaneous save points in time on virtual machines, allowing the virtual machine to be restored to that point in time. They work because, in the end, a VM is just a file on a machine, and setting the file back to a previous version reverts the VM to the state it was in at that time. Snapshots can be very useful in reducing risk: you can take a snapshot, make a change to the system, and if the change is bad, revert to the snapshot as if the change had never been made.

Reverting to a Known State

Reverting to a known state is akin to reverting to a snapshot. Many OSs now have the capability to produce a restore point, which is a copy of key files that change upon updates to the OS. If you add a driver or update the OS and the update results in problems, you can restore the system to the previously saved restore point. This is a very commonly used option in Microsoft Windows, where the system by default creates restore points before it processes updates to the OS, as well as at set points in time between updates. This gives users the ability to roll back the clock on the OS and restore it to an earlier time, when they know the problem did not exist. Unlike snapshots, which record everything, this only protects the OS and associated files; it also does not result in the loss of users' files, which is something that does happen with snapshots.

Rolling Back to Known Configuration

Rolling back to a known configuration is another way of saying "revert to a known state." It is the specific language Microsoft uses with respect to rolling back the Registry values to a known good configuration on boot. If you make an incorrect configuration change in Windows and the system won't boot properly, you can select the Last Known Good Configuration option from the boot menu and roll back the Registry to the last set of values that completed a successful boot cycle. Microsoft stores most configuration options in the Registry, and this is a way to revert to a previous set of configuration options for the machine.

Images

Last Known Good Configuration works for Windows 7 and earlier versions. This is not the case for Windows 8 onward, as pressing F8 at bootup is not an option unless you change to Legacy boot mode.

Live Boot Media

A live boot media CD/USB is a device that contains a complete bootable system. These devices are specially formatted to be bootable from the media. This gives you a means of booting the machine from an external OS source should the one on the internal drive become unusable. This may be used as a recovery mechanism, although if the internal drive is encrypted, you will need backup keys to access it. It is also a convenient means of booting to a task-specific operating system, such as forensic or incident response tools, that is separate from the OS on the machine.

Wrappers

Wrappers are structures used to enclose or contain some other system. Wrappers have been used in a variety of ways, including to obscure or hide functionality. A Trojan horse is a form of wrapper. Wrappers can also be used to encapsulate information, as in tunneling or VPN solutions. Wrappers can act as a form of channel control, carrying integrity and authentication information that a normal signal cannot. It is common to see wrappers used in alternative environments to prepare communications for IP transmission.

Elasticity

Elasticity is the ability of a system to handle an increased workload by dynamically adding hardware resources on demand, scaling out as needed. If the workload increases, you scale by adding more resources; conversely, when demand wanes, you scale back by removing unneeded resources. Elasticity is one of the strengths of cloud environments, as you can configure them to scale up and down and only pay for the resources you actually use. In a server farm that you own, you pay for the equipment even when it is not in use; in an elastic cloud environment, you literally pay only for what you use.

Scalability

Scalability is the ability of a system to accommodate larger workloads through the addition of resources, either by making the hardware more capable (scaling up) or by adding additional nodes (scaling out). This term is commonly used for server farms and database clusters, as both can have scale issues with respect to workload.

Images

Elasticity and scalability seem to be the same thing, but they are different. Elasticity is related to dynamically scaling a system with workload (scaling out), whereas scalability is a design element that enables both scaling up (to more capable hardware) and scaling out (to more instances).

Distributive Allocation

Distributive allocation is the transparent allocation of requests across a range of resources. When multiple servers are employed to respond to load, distributive allocation handles the assignment of jobs across the servers. When the jobs are stateful, as in database queries, the process ensures that subsequent requests are routed to the same server to maintain transactional integrity. When the system is stateless, as with web servers, other load-balancing routines are used to spread the work.
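
A minimal Python sketch of the two dispatch styles just described is shown below: stateless requests are spread round-robin across the pool, while stateful sessions are pinned to a single server. The server names are placeholders, and real load balancers add health checks, weighting, and failover on top of this basic idea.

import itertools

SERVERS = ["app-01", "app-02", "app-03"]  # placeholder server pool

class Dispatcher:
    """Round-robin for stateless work; sticky assignment for stateful sessions."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
        self._sticky = {}  # session id -> assigned server

    def route_stateless(self):
        """Each stateless request simply goes to the next server in rotation."""
        return next(self._cycle)

    def route_stateful(self, session_id):
        """Stateful requests stay on the server that first handled the session."""
        if session_id not in self._sticky:
            self._sticky[session_id] = next(self._cycle)
        return self._sticky[session_id]

if __name__ == "__main__":
    d = Dispatcher(SERVERS)
    print([d.route_stateless() for _ in range(4)])                   # spreads across the pool
    print(d.route_stateful("user-42"), d.route_stateful("user-42"))  # same server both times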

Images Alternative Environments

Alternative environments are those that are not traditional computer systems in a common IT environment. This is not to say that these environments are rare; in fact, there are millions of systems, composed of hundreds of millions of devices, all across society. Computers exist in many systems where they perform critical functions specifically tied to a particular system. These alternative systems are frequently static in nature; that is, their software is unchanging over the course of its function. Updates and revisions are few and far between. Although this may seem to run counter to current security practices, it doesn’t: because these alternative systems are constrained to a limited, defined set of functionality, the risk from vulnerabilities is limited. Examples of these alternative environments include embedded systems, SCADA (supervisory control and data acquisition) systems, mobile devices, mainframes, game consoles, and in-vehicle computers.

Alternative Environment Methods

Many of the alternative environments can be considered static systems. Static systems are those that have a defined scope and purpose and do not regularly change in a dynamic manner, unlike most PC environments. Static systems tend to have closed ecosystems, with complete control over all functionality by a single vendor. A wide range of security techniques can be employed in the management of alternative systems. Network segmentation, security layers, wrappers, and firewalls assist in the securing of the network connections between these systems. Manual updates, firmware control, and control redundancy assist in the security of the device operation.

Peripherals

Peripherals used to be basically dumb devices, with little to no interaction; however, with the low cost of compute power and the desire for greater functionality, many of these devices now have embedded computers in them. This has led to the hacking of peripherals and the need to understand their security implications. From wireless keyboards and mice, to printers, to displays and storage devices, these items have all become sources of risk.

Wireless Keyboards

Wireless keyboards operate via a short range wireless signal between the keyboard and the computer. The main method of connection is either via a USB Bluetooth connector, in essence creating a small personal area network (PAN), or via a 2.4-GHz dongle. Wireless keyboards are frequently paired with wireless mice, thus removing those troublesome and annoying cables from the desktop. Because of the wireless connection, the signals to and from the peripherals are subject to interception, and attacks have been made on these devices.

Wireless Mice

Wireless mice are similar in nature to wireless keyboards. They tend to connect as a human interface device (HID) class of USB. This is part of the USB specification and is used for mice and keyboards, simplifying connections, drivers, and interfaces through a common specification.

One of the interesting security problems with wireless mice and keyboards has been the development of the mousejacking attack, in which an attacker performs a man-in-the-middle attack on the wireless interface and can control the mouse or intercept the traffic. When this attack first hit the environment, manufacturers had to provide updates to their software interfaces to block this form of attack. Some of the major manufacturers, like Logitech, made this effort for their mainstream product lines, but many older mice were never patched, and smaller vendors have never addressed the vulnerability, so it still exists.

Displays

Computer displays are primarily connected to machines via a cable to one of several types of display connectors. However, for conferences and other group settings, a wide array of devices today can enable a machine to connect to a display via a wireless network. These devices are available from Apple, Google, and a wide range of A/V companies. The risk of using these devices is simple—who else within range of the wireless signal can watch what you are beaming to the display in the conference room? And would you even know if the signal was intercepted? In a word, you wouldn’t. This doesn’t mean these devices should not be used in the enterprise, but just that they should not be used for transmitting sensitive data to the screen.

Printers/MFDs

Printers have CPUs and a lot of memory. The primary purpose for this is to offload the printing from the device sending the print job to the print queue. Modern printers now come standard with a bidirectional channel so that you can send a print job to the printer and then the printer can send back information as to job status, printer status, and other items. Multifunction devices (MFDs) are like printers on steroids. They typically combine printing, scanning, and faxing all into a single device. This has become a popular market segment because it reduces costs and device proliferation in the office.

With printers being connected to the network, multiple people can connect and independently print jobs, thus sharing a fairly expensive high-speed duplexing printer. But with the CPU, firmware, and memory comes the risk of an attack vector, and hackers have demonstrated malware passed via a printer. This is not a mainstream issue yet, but it has passed the proof-of-concept phase, and in the future we will need to have software protect us from our printers.

External Storage Devices

The rise of network-attached storage (NAS) devices moved quickly from the enterprise into form factors that are found in homes. As users have developed large collections of digital videos and music, these external storage devices, running on the home network, solve the storage problem. These devices are typically fairly simple Linux-based appliances with multiple hard drives in a RAID arrangement. With the rise of ransomware, these devices can spread infections to any and all devices that connect to the network. For this reason, precautions should be taken with respect to always-on connections to storage arrays.

Wi-Fi-Enabled MicroSD Cards

A class of Wi-Fi-enabled MicroSD cards were developed to eliminate the need to move the card from device to device in order to move the data. Primarily designed for digital cameras, these cards became very useful for creating Wi-Fi devices out of devices that had an SD slot. These cards have a tiny computer embedded in them that runs a stripped-down version of Linux. One of the major vendors in this space used a stripped-down version of BusyBox and had no security invoked at all, making the device completely open to hackers.

Phones and Mobile Devices

Mobile devices may seem to be a static environment, one where the OS rarely changes or is rarely updated, but as these devices have become more and more ubiquitous, offering greater capabilities, this is no longer the case. Mobile devices receive regular OS software updates, and as users add applications, most mobile devices become a genuine security challenge. Mobile devices frequently come with Bluetooth connectivity mechanisms, so protection of the devices from attacks against the Bluetooth connection, such as bluejacking and bluesnarfing, is an important mitigation. To protect against unauthorized connections, a Bluetooth device should always have discoverable mode turned off unless the user is deliberately pairing the device.

Many different operating systems are used in mobile devices, with the most common by market share being Android and Apple's iOS. Android has by far the largest footprint, followed distantly by iOS. Microsoft and BlackBerry have their own OSs, but neither has a significant number of users.

Android

Android is a generic name associated with the mobile OS based on Linux. Google acquired the Android platform, made it open source, and began shipping devices in 2008. Android has undergone several updates since, and most systems have some degree of customization added for specific mobile carriers. Android has had numerous security issues over the years, ranging from vulnerabilities that allow attackers access to the OS, to malware-infected applications. The Android platform continues to evolve as the code is cleaned up and the number of vulnerabilities is reduced. The issue of malware-infected applications is much tougher to resolve, though, as the ability to create content and add it to the app store (Google Play) is considerably less regulated than in the Apple and Microsoft ecosystems.

The use of mobile device management (MDM) systems is advised in enterprise deployments, especially those with “bring your own device” (BYOD) policies. This and other security aspects specific to mobile devices are covered in Chapter 12.

iOS

iOS is the name of Apple’s proprietary operating system for its mobile platforms. Because Apple does not license the software for use other than on its own devices, Apple retains full and complete control over the OS and any specific capabilities. Apple has also exerted significant control over its application store (the App Store), which has dramatically limited the incidence of malware in the Apple ecosystem.

However, a common hack associated with iOS devices is the jailbreak. Jailbreaking is a process by which the user escalates their privilege level, bypassing the operating system’s controls and limitations. The user still has the complete functionality of the device, but also has additional capabilities, bypassing the OS-imposed user restrictions. There are several schools of thought concerning the utility of jailbreaking, but the important issue from a security point of view is that running any device with enhanced privileges can result in errors that cause more damage, because normal security controls are typically bypassed.

Embedded Systems

Embedded systems is the name given to computers that are included as an integral part of a larger system, typically hardwired in. From computer peripherals like printers, to household devices like smart TVs and thermostats, to the car you drive, embedded systems are everywhere. Embedded systems can be as simple as a microcontroller with fully integrated interfaces (a system on a chip) or as complex as the tens of interconnected embedded systems in a modern automobile. Embedded systems are designed with a single control purpose in mind and have virtually no additional functionality, but this does not mean that they are free of risk or security concerns. The vast majority of security exploits involve getting a device or system to do something it is capable of doing, and technically designed to do, even if the resulting functionality was never an intended use of the device or system.

The designers of embedded systems typically are focused on minimizing costs, with security seldom seriously considered as part of either the design or the implementation. Because most embedded systems operate as isolated systems, the risks have not been significant. However, as capabilities have increased, and these devices have become networked together, the risks have increased significantly. For example, smart printers have been hacked as a way into enterprises, and as a way to hide from defenders. And when next-generation automobiles begin to talk to each other, passing traffic and other information between them, and begin to have navigation and other inputs being beamed into systems, the risks will increase and security will become an issue. This has already been seen in the airline industry, where the separation of in-flight Wi-Fi, in-flight entertainment, and cockpit digital flight control networks has become a security issue.

Images

Understand static environments, systems in which the hardware, OS, applications, and networks are configured for a specific function or purpose. These systems are designed to remain unaltered through their lifecycle, rarely requiring updates.

Camera Systems

Digital camera systems have entered the computing world through a couple of different portals. First, there is the world of high-end digital cameras that have networking stacks, image processors, and even 4K video feeds. These are used in enterprises such as the news, where getting the data live without extra processing delays can be important. What is important to note is that most of these devices, although they are networked into other networks, have built-in virtual private networks (VPNs) that are always on, because the content is considered valuable enough to protect as a feature.

The next set of cameras reverses the quantity and quality characteristics. Whereas the high-end devices are fairly small in number, there is a growing segment of video surveillance cameras, including household surveillance cameras, baby monitors, and the like. Hundreds of millions of these devices are sold, and they all have a sensor, a processor, a network stack, and so on. These are part of the Internet of Things (IoT) revolution, in which millions of devices connect together either on purpose or by happenstance. It was a network of these devices, shipped with default user names and passwords, that led to the Mirai botnet that effectively broke the Internet for a while in the fall of 2016. The true root cause was the combination of default user names and passwords and remote configuration that enabled the devices to be taken over, compounded by a failure to follow a networking RFC concerning source addressing. Two sets of failures, working together, created weeks' worth of problems.

Game Consoles

Computer-based game consoles can be considered a type of embedded system designed for entertainment. The OS in a game console is not there for the user, but rather there to support the specific application or game. There typically is no user interface to the OS on a game console for a user to interact with; rather, the OS is designed for a sole purpose. With the rise of multifunction entertainment consoles, the attack surface of a gaming console can be fairly large, but it is still constrained by the closed nature of the gaming ecosystem. Updates for the firmware and OS-level software are provided by the console manufacturer. This closed environment offers a reasonable level of risk associated with the security of the systems that are connected. As game consoles become more general in purpose and include features such as web browsing, the risks increase to levels commensurate with any other general computing platform.

Mainframes

Mainframes represent the history of computing, and although many people think they have disappeared, they are still very much alive in enterprise computing. Mainframes are high-performance machines that offer large quantities of memory, computing power, and storage. Mainframes have been used for decades for high-volume transaction systems as well as high-performance computing. The security associated with mainframe systems tends to be built into the operating system on specific-purpose mainframes. Mainframe environments tend to have very strong configuration control mechanisms, and very high levels of stability.

Mainframes have become a cost-effective solution for many high-volume applications because many instances of virtual machines can run on the mainframe hardware. This opens the door for many new security vulnerabilities—not on the mainframe hardware per se, but rather through vulnerabilities in the guest OS in the virtual environment.

SCADA/ICS

SCADA is an acronym for supervisory control and data acquisition, a system designed to control automated systems in cyber-physical environments. SCADA systems control manufacturing plants, traffic lights, refineries, energy networks, water plants, building automation and environmental controls, and a host of other systems. SCADA is also known by names such as distributed control systems (DCSs) and industrial control systems (ICSs). The variations depend on the industry and the configuration. Where computers control a physical process directly, a SCADA system likely is involved.

Most SCADA systems involve multiple components networked together to achieve a set of functional objectives. These systems frequently include a human machine interface (HMI), where an operator can exert a form of directive control over the operation of the system under control. SCADA systems historically have been isolated from other systems, but the isolation is decreasing as these systems are being connected across traditional networks to improve business functionality. Many older SCADA systems were airgapped from the corporate network; that is, they shared no direct network connections. This meant that data flows in and out were handled manually and took time to accomplish. Modern systems remove this constraint, with direct network connections between the SCADA networks and the enterprise IT network. These connections increase the attack surface and the risk to the system, and the more they resemble an IT networked system, the greater the need for security functions.

SCADA systems were drawn into the security spotlight by the Stuxnet attack on Iranian nuclear facilities, initially reported in 2010. Stuxnet is malware designed to attack a specific SCADA system and cause failures resulting in plant equipment damage. This attack was complex and well designed, crippling nuclear fuel processing in Iran for a significant period of time. It raised awareness of the risks associated with SCADA systems, whether connected to the Internet or not (Stuxnet crossed an airgap to hit its target).

HVAC

Building-automation systems, climate control systems, HVAC (heating, ventilation, and air conditioning) systems, elevator control systems, and alarm systems are just some of the examples of systems that are managed by embedded systems. Although these systems used to be independent and standalone systems, the rise of hyperconnectivity has shown value in integrating them. Having a “smart building” that reduces building resources in accordance with the number and distribution of people inside increases efficiency and reduces costs. Interconnecting these systems and adding in Internet-based central control mechanisms does increase the risk profile from outside attacks.

Smart Devices/IoT

Smart devices and devices that comprise the Internet of Things (IoT) have taken the world’s markets by storm—from key fobs that can track things via GPS, to cameras that can provide surveillance, to connected household appliances, TVs, dishwashers, refrigerators, crockpots, washers and dryers. Anything with a microcontroller now seems to be connected to the Web so that external controls can be used. From the smart controllers from Amazon, the Echo, and its successors, to Google Home, to Microsoft Cortana, artificial intelligence has entered into the mix, enabling even greater functionality. Computer-controlled light switches, LED light bulbs, thermostats, and baby monitors—the smart home is connecting everything. You can carry a key fob that your front door recognizes, unlocking before you get to it. Of course, the security camera saw you first and alerted the system that someone was coming up the driveway. The only thing that can be said with confidence about this revolution is that someone will figure out how and why to connect virtually anything to the network.

All of these devices have a couple of things in common. They all have a network interface, because connectivity is their purpose as a smart device or a member of the Internet of Things. Behind that network interface is some form of computer platform. With complete computer functionality now available on a System on a Chip (SoC) platform, which is covered in a later section, these tiny devices can include a complete working computer for a couple of dollars in cost. The use of a Linux-type kernel as the core engine makes programming easier because the base of programmers is very large, and the result can be mass-produced at relatively low cost. Spreading software development costs over literally millions of units keeps per-unit costs low, and the driving element is functionality; security, or anything else that might impact new expanded functionality, has taken a backseat.

Wearable Technologies

Wearable technologies include everything from biometric sensors for measuring heart rate, to step counters for measuring how much one exercises, to smart watches that combine both functions, and many more. By measuring biometric signals such as pulse rate and body movements, it is possible to track fitness goals and even hours of sleep. These wearable devices are built using very small computers that run a real-time operating system, usually built from a stripped-down Linux kernel.

Home Automation

Home automation is one of the driving factors behind the IoT movement. From programmable smart thermostats, to electrical control devices that replace wall switches and enable voice-operated lights, the home environment is awash with tech. Locks can be operated electronically, allowing you to lock or unlock them remotely from your smartphone. Surveillance cameras connected to your smartphone can tell you when someone is at your door and allow you to talk to them without even being home or opening the door. Appliances can be set up to run when energy costs are lower, or to automatically order more food when you take the last of an item from the pantry or refrigerator. These are not things of a TV show about the future; they are available today and at fairly reasonable prices.

The tech behind these items is the same tech behind a lot of recent advances: a small System on a Chip, a complete computer system with a real-time operating system designed not as a general compute platform but just to drive the needed elements; a network connection (usually wireless); some sensors to measure light, heat, or sound; and an application to integrate the functionality. The security challenge is that most of these devices have literally no security. Poor networking software led to a legion of baby monitors and other home devices becoming part of a large botnet called Mirai, which attacked the Krebs on Security site with a DDoS rate that exceeded 600 Gbps in the fall of 2016.

Special-Purpose Systems

Special-purpose systems are those designed for a specific use and defined by their intended operating environment. Three primary types of special-purpose systems are medical devices, vehicles, and aircraft. Each of these has significant computer system elements providing much of the functionality and control for the device, and each has its own security issues.

Medical Devices

Medical devices comprise a very wide group of devices—from small implantable devices, such as pacemakers, to multi-ton MRI machines. In between are devices that measure things and devices that actually control things, such as infusion pumps. Each of these has several interesting characteristics, the most important of which is that they can have a direct effect on human life. This makes security a function of safety.

Medical devices such as lab equipment and infusion pumps have been running on computerized controls for years. The standard choice has been an embedded Linux kernel that has been stripped of excess functionality and pressed into service in the embedded device. One of the problems with this approach is how one patches this kernel when vulnerabilities are found. Also, as the base system gets updated to a newer version, the embedded system stays trapped on the old version. This requires regression testing for problems, and most manufacturers will not undertake this labor-intensive chore.

Medical devices are manufactured under strict regulatory guidelines that are designed for static systems that do not need patching, updating, or changes. Any change would force a requalification, which is a lengthy, time-consuming, and expensive process. Because of this, these devices tend never to be patched. With the advent of several high-profile vulnerabilities, including Heartbleed and BASH shell attacks, most manufacturers simply recommended that the devices be isolated and never connected to an outside network. In concept, this is fine, but in reality this can never happen because all the networks in a hospital or medical center are connected.

A recall of nearly a half million pacemakers in 2017 for a software vulnerability that allows a hacker to access and change the performance characteristics of the device is proof of the problem. The good news is that the devices can be updated without being removed, but it will take a doctor’s visit to have the new firmware installed.

SoC

System on a Chip (SoC) technologies involve the miniaturization of the various circuits needed for a working computer system. These systems are designed to provide the full functionality of a computing platform on a single chip. This includes networking and graphics display. Some SoC solutions come with memory, and for others the memory is separate. SoCs are very common in the mobile computing market (both phones and tablets) because of their low power consumption and efficient design. Some SoCs have become household names as mobile phone companies have advertised their inclusion in their system (for example, the Snapdragon processor in Android devices). Quad-core and eight-core SoC systems are already in place, and they even have advanced designs such as Quad Plus One, where the fifth processor is slower and designed for simple processes and uses extremely small amounts of power. This way, when the quad cores are not needed, there is no significant energy usage.

The programming of SoC systems can occur at several different levels. Dedicated OSs and applications can be written for them, such as the Android fork of Linux, which is specific to the mobile device marketplace. At the end of the day, because these devices represent computing platforms for billions of devices worldwide, they have become a significant force in the marketplace.

RTOS

Real-time operating systems (RTOSs) are operating systems designed for systems in which the processing must occur in real time and where data cannot be queued or buffered for any significant time. RTOSs are not for general-purpose machines, but are programmed for a specific purpose. They still have to deal with contention, and scheduling algorithms are needed to deal with timing collisions, but in general an RTOS processes each input as it is received, or within a specific time slice defined as the response time.

Most general-purpose computer operating systems are multitasking by design. This includes Windows and Linux. Multitasking systems make poor real-time processors, primarily because of the overhead associated with separating tasks and processes. Windows and Linux may have interrupts, but these are the exception, not the rule, for the processor. RTOS-based software is written in a completely different fashion, designed to emphasize the thread in processing rather than handling multiple threads.

Vehicles

A modern vehicle has hundreds of computers in it, all interconnected on a bus. The CAN bus (controller area network bus) is a bus designed to allow multiple microcontrollers to communicate with each other without a central host computer. As individual microcontrollers were used in automobiles to control the engine, emissions, transmission, braking, heating, electrical, and other systems, the wiring harnesses used to interconnect everything became a problem. Robert Bosch developed the CAN bus for cars specifically to address the wiring harness issue, and when it was first deployed at BMW in 1986, the weight reduction was over 100 pounds.

By 2008, all new U.S. and European cars had to use the CAN bus per SAE (Society of Automotive Engineers) regulations, and with the addition of more and more subsystems, the technology needed no selling to engineers. The CAN bus comes with a reference protocol specification, but recent auto-hacking discoveries have revealed several interesting points. First, Toyota claimed in court that the only way to make a car go was to step on the gas pedal, and that software alone won't do it; this claim has been proven false. Second, every automobile manufacturer has interpreted or ignored the reference architecture to varying degrees. Finally, as demonstrated by hackers at DEF CON, it is possible to disable cars on the go, over the Internet, as well as tamper with the entertainment console settings and such.

The bottom line for automobiles and vehicles is that they are composed of multiple computers, all operating semi-autonomously and virtually without any security. The U.S. Department of Transportation is pushing for vehicle-to-vehicle communication so that cars can tell each other when traffic conditions are changing ahead of them. Couple that with the advances in self-driving technology, and you can see how important it is that security become a stronger priority in the industry.
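
One reason security is so weak at the bus level is that the CAN protocol itself has no concept of sender authentication: any node on the bus can transmit any frame. As a hedged illustration, the minimal sketch below sends a single raw CAN frame using Linux SocketCAN. It assumes a Linux host with a CAN interface named vcan0 (for example, a virtual interface set up for testing), and the arbitration ID and payload are arbitrary illustrative values, not a real vehicle's message map.

import socket
import struct

# Linux SocketCAN raw frame layout: 32-bit CAN ID, 8-bit data length,
# 3 padding bytes, then up to 8 data bytes (16 bytes total).
CAN_FRAME_FMT = "=IB3x8s"

def send_frame(interface, can_id, data):
    if len(data) > 8:
        raise ValueError("classic CAN payloads are at most 8 bytes")
    with socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW) as sock:
        sock.bind((interface,))
        frame = struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))
        sock.send(frame)

# Illustrative only: nothing on the bus verifies who sent this frame.
send_frame("vcan0", 0x123, b"\x01\x02\x03")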

Aircraft/UAV

Aircraft also have a significant computer footprint inside, as most modern jets have what is called an all-glass cockpit: the old individual gauges and switches are replaced with computer displays and touchscreens. This enables greater functionality and is more reliable than the older systems. But as with cars, connecting all of this equipment onto buses that are eventually connected to outside networks has raised a lot of security questions within the aviation industry. And, as in the medical industry, change is difficult, because the level of regulation and testing makes patching an operating system all but impossible. The result is systems that become more vulnerable over time as the base OS is thoroughly explored and every vulnerability mapped and exploited in non-aviation systems, since those attack techniques can easily be ported to planes.

Recent revelations have shown that in-flight entertainment systems are separated from flight controls not by physically separate networks but only by a firewall. This has led hackers to sound the alarm over aviation computing safety.

Images

This section presented a cornucopia of different special-purpose systems. For the purposes of the exam, it is important to remember three main elements: the technology components (SoC and RTOS), the connectivity component (Internet of Things), and the different marketplaces (home automation, wearables, medical devices, vehicles, and aviation). Read the question for clues as to which element is being asked about.

Unmanned aerial vehicles (UAVs) represent the next frontier of flight. These machines range from hobbyist devices that cost under $300 to full-size aircraft that can fly across oceans. What makes these systems different from regular aircraft is that the pilot is on the ground, flying the device via remote control. These devices have cameras, sensors, and processors to manage the information; even the simple home hobbyist versions have sophisticated autopilot functions. Because of the remote connection, they are either under direct radio control (rare) or connected via a networked system (much more common).

Images Industry-Standard Frameworks and Reference Architectures

Industry-standard frameworks and reference architectures are conceptual blueprints that define the structure and operation of the IT systems in the enterprise. Just as an architectural diagram provides a blueprint for constructing a building, the enterprise architecture provides the blueprint and roadmap for aligning IT and security with the enterprise's business strategy.

Regulatory

Industries under governmental regulation frequently have an approved set of architectures defined by regulatory bodies. The electric industry, for example, has the NERC (North American Electric Reliability Corporation) Critical Infrastructure Protection (CIP) standards, a set of 14 individual standards that, taken together, drive a reference framework/architecture for the bulk electric system in North America. Most industries in the U.S. find themselves regulated in one manner or another. When it comes to cybersecurity, more and more regulations are beginning to apply, from privacy, to breach notification, to due diligence and due care provisions. NIST has been careful to promote its Cybersecurity Framework (CSF), covered later in this chapter, not as a government-driven "must," stating instead that it is optional.

Non-regulatory

Some reference architectures are neither industry specific nor regulatory but rather technology focused, such as the NIST/CSA (Cloud Security Alliance) reference architecture for cloud-based systems. Also in the non-regulatory set is the NIST CSF (Cybersecurity Framework), a consensus-created overarching framework to assist enterprises in their cybersecurity programs. The CSF has three main elements: a Core, Tiers, and Profiles. The Core is built around five functions: Identify, Protect, Detect, Respond, and Recover. For each of these functions, the Core defines categories of actions, subcategories, and normative references to standards. The Tiers are a way of representing an organization's level of achievement, from Partial, to Risk Informed, to Repeatable, to Adaptive; these tiers are similar to maturity model levels. The Profiles element describes the current state of alignment with the framework's elements and the desired state of alignment, a form of gap analysis. The NIST CSF is being mandated for government agencies, but it is completely voluntary in the private sector. This framework has been well received, partly because of its comprehensive nature and partly because of the consensus approach, which produced a usable document.
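
Because a Profile is essentially a structured gap analysis against the Core, comparing a current profile to a target profile lends itself to simple tooling. The following is a minimal sketch only; scoring an entire function with a single tier number is a simplification, and the tier values shown are illustrative assumptions rather than CSF guidance.

# Minimal sketch of a CSF-style profile gap analysis.
# The five functions are the real CSF Core functions; the per-function tier
# scores are illustrative assumptions for this example.
TIERS = {1: "Partial", 2: "Risk Informed", 3: "Repeatable", 4: "Adaptive"}

current_profile = {"Identify": 2, "Protect": 2, "Detect": 1, "Respond": 1, "Recover": 2}
target_profile  = {"Identify": 3, "Protect": 3, "Detect": 3, "Respond": 2, "Recover": 3}

def gap_analysis(current, target):
    """Return the functions whose current tier falls short of the target tier."""
    return {
        fn: {"current": TIERS[current[fn]], "target": TIERS[target[fn]],
             "gap": target[fn] - current[fn]}
        for fn in target
        if current[fn] < target[fn]
    }

for function, detail in gap_analysis(current_profile, target_profile).items():
    print(f"{function}: {detail['current']} -> {detail['target']} (gap {detail['gap']})")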

National vs. International

The U.S. federal government has its own cloud-based reference architecture for systems that use the cloud. Called FedRAMP (the Federal Risk and Authorization Management Program), this process is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for systems using cloud products and services.

One of the more interesting international frameworks has been the harmonization between the U.S. and the EU with respect to privacy (the U.S. term) and data protection (the EU term). The rules and regulations covering privacy issues are so radically different that a special framework was created to harmonize the concepts, allowing the U.S. and EU to do business together effectively. This was referred to as the U.S.–EU Safe Harbor Framework. Changes in EU law, coupled with EU court determinations that the Safe Harbor Framework was not a valid mechanism for complying with EU data protection requirements when transferring personal data from the European Union to the United States, forced a complete refresh of the methodology. The new privacy-sharing methodology is called the EU–U.S. Privacy Shield Framework and became effective in the summer of 2016.

Industry-Specific Frameworks

There are several examples of industry-specific frameworks. Although some of these may not seem to be complete frameworks, they provide instructive guidance on how systems should be architected. Some of these frameworks are regulatory based, like the electric industry CIP standards referenced above. Another industry-specific framework is the HITRUST CSF (Common Security Framework), for use in the medical industry and by enterprises that must address HIPAA/HITECH rules and regulations.

Images Benchmarks/Secure Configuration Guides

Benchmarks and secure configuration guides offer guidance for setting up and operating systems at a security level that is understood and documented. Because organizations differ, a benchmark represents a consensus-based body of knowledge designed to deliver a reasonable level of security across as wide a base of systems as possible. There are numerous sources for these guides, but three main sources exist for most systems: the manufacturer of the software, the government, and an independent organization called the Center for Internet Security (CIS). Not all systems have benchmarks, nor do all sources cover all systems, but finding the correct configuration and setup directives can go a long way toward establishing security.

The vendor/manufacturer guidance source is easy: pick the vendor for your product. The government sources are a bit more scattered, but two solid sources are the U.S. National Institute of Standards and Technology Computer Security Resource Center's National Vulnerability Database National Checklist Program (NCP) Repository (https://nvd.nist.gov/ncp/repository) and the U.S. Department of Defense's Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs). The STIGs are detailed, step-by-step implementation guides, and a list is available at https://iase.disa.mil/stigs/Pages/index.aspx.
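
Benchmark and STIG content delivers the most value when the checks are automated and repeated over time. The following minimal sketch assumes a Linux host and two illustrative settings (an SSH directive and a password-aging value); the specific files, directives, and expected values are assumptions for the example, not quotations from any particular CIS Benchmark or STIG, which enumerate hundreds of such checks.

import re
from pathlib import Path

# Illustrative benchmark-style checks: (description, file, setting, expected value).
CHECKS = [
    ("SSH root login disabled", "/etc/ssh/sshd_config", "PermitRootLogin", "no"),
    ("Password max age set",    "/etc/login.defs",      "PASS_MAX_DAYS",   "90"),
]

def current_value(path, setting):
    """Return the last uncommented value of a 'setting value' style directive."""
    value = None
    text = Path(path).read_text(errors="ignore") if Path(path).exists() else ""
    for line in text.splitlines():
        if line.lstrip().startswith("#"):
            continue
        match = re.match(rf"\s*{setting}\s+(\S+)", line)
        if match:
            value = match.group(1)
    return value

for description, path, setting, expected in CHECKS:
    actual = current_value(path, setting)
    status = "PASS" if actual == expected else "FAIL"
    print(f"[{status}] {description}: {setting}={actual!r} (expected {expected!r})")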

Platform/Vendor-Specific Guides

Setting up secure services is important to enterprises, and some of the best guidance comes from the manufacturer in the form of platform/vendor-specific guides. These guides include installation and configuration guidance, and in some cases operational guidance as well.

Web Server

Many different web servers are used in enterprises. Web servers offer a connection between users (clients) and enterprise resources (the data being provided), and therefore they are prone to adversarial attempts at penetration. Setting up any external-facing application securely is key to preventing unnecessary risk. Fortunately for web servers, several authoritative and prescriptive sources of information are available for properly securing the application. In the case of Microsoft's IIS and SharePoint Server, the company provides solid guidance on the proper configuration of the servers. The Apache Software Foundation provides some information for its web server products as well.

Another good source of information is the Center for Internet Security's benchmark guides. The CIS guides provide authoritative, prescriptive guidance developed as part of a consensus effort between consultants, professionals, and others. This guidance has been subject to significant peer review and has withstood the test of real-world implementation. CIS guides are available for multiple versions of Apache, Microsoft, and other vendors' products.

Operating System

The operating system (OS) is the interface between the applications that perform the tasks we want done and the actual physical computer hardware. As such, the OS is a key component for the secure operation of a system. Comprehensive, prescriptive configuration guides for all major operating systems are available from the manufacturer, in an easier-to-digest form from CIS, as mentioned above, or from the U.S. government through the Department of Defense DISA STIGs (Security Technical Implementation Guides) program.

Media Gateways

A media gateway is a specialty application used to connect voice calling systems to IP networks to enable Voice over IP (VoIP). These servers are a blend of hardware and software, part application server and part network infrastructure, and they perform the functions necessary to integrate and separate voice and IP signals as required. Such systems show how blurred the line has become when classifying systems as either application servers or network devices.

Application Server

Application servers are the part of the enterprise that handles the specific tasks we associate with IT systems. Whether it is an e-mail server, a database server, a messaging platform, or any other server, application servers are where the work happens. Proper configuration of an application server depends to a great degree on the server specifics. Standard application servers, such as e-mail and database servers, have guidance from the manufacturer, CIS, and the STIGs. Less-standard servers with significant customization, such as a set of applications written in-house for inventory control or order processing, or any other custom middleware, also require proper configuration, but in these cases the true vendor is the in-house team that built the software. Ensuring proper security settings and testing should be part of the build program for these systems so that they can be integrated into the normal security audit process to ensure continued proper configuration.

Network Infrastructure Devices

Network infrastructure devices are particularly important to configure properly, for failures at this level can adversely affect the security of the traffic they process. Properly setting up these devices, which include switches, routers, concentrators, and other specialty devices, can be challenging. The criticality of these devices also makes them targets; if a firewall fails, in many cases there are no indications until an investigation finds that it failed to do its job. Ensuring these devices are properly configured and maintained is not a job to gloss over, but one that requires attention by properly trained personnel, backed by routine configuration audits to ensure the devices stay properly configured. For most of these devices, the greatest risk lies in the configuration of the device via rulesets, and these are specific to each deployment and cannot be mandated by a manufacturer installation guide. Proper configuration and verification are site specific and often specific to an individual device. Without a solid set of policies and procedures to ensure this work is properly maintained, these devices may work, but they will not perform the services desired.
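
Those routine configuration audits can themselves be automated. As a minimal sketch, assuming a Linux host acting as a packet filter with iptables, the check below confirms that the built-in chains default to DROP; the expectation is an assumed organizational standard chosen for illustration, since ruleset policy comes from the site, not from a vendor guide.

import subprocess

# Illustrative ruleset audit: verify default-deny policies on a Linux iptables host.
# The expected policies are an assumed organizational baseline, not a vendor mandate.
EXPECTED_POLICIES = {"INPUT": "DROP", "FORWARD": "DROP"}

def audit_default_policies():
    output = subprocess.run(
        ["iptables", "-S"], capture_output=True, text=True, check=True
    ).stdout
    findings = []
    for line in output.splitlines():
        parts = line.split()
        # Policy lines look like: "-P INPUT DROP"
        if len(parts) == 3 and parts[0] == "-P" and parts[1] in EXPECTED_POLICIES:
            if parts[2] != EXPECTED_POLICIES[parts[1]]:
                findings.append(f"{parts[1]} policy is {parts[2]}, expected DROP")
    return findings

for finding in audit_default_policies() or ["default policies match the baseline"]:
    print(finding)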

Images

An example of a network infrastructure device is an SSL decryptor, a piece of hardware designed to streamline SSL/TLS connections in an enterprise, relieving other servers of this computationally intensive task.

General-Purpose Guides

The best general-purpose guide to information security is probably the CIS Critical Security Controls, a set of 20 best-practice security controls. This project, originally known as the SANS Institute Top 20 Security Controls, began as a consensus project out of the U.S. Department of Defense and has, over nearly 20 years, morphed into the de facto standard for selecting an effective set of security controls. The framework is now maintained by the Center for Internet Security and can be found at https://www.cisecurity.org/cybersecurity-best-practices/.

Images

Determining the correct configuration information on the exam requires careful parsing of the scenario in the question. It is common for these systems to consist of multiple major components, such as a web server, database server, and application server, so read the question carefully to see what is specifically being asked. The specifics matter, as they will point to the best answer.

Images For More Information

Microsoft’s Safety & Security Center https://support.microsoft.com/en-us/products/security

SANS Reading Room: Application and Database Security www.sans.org/reading_room/whitepapers/application/

Chapter 14 Review

Images   Chapter Summary


After reading this chapter and completing the exercises, you should understand the following about hardening systems and baselines.

Harden operating systems and network operating systems

Images   Security baselines are critical to protecting information systems, particularly those allowing connections from external users.

Images   The process of establishing a system’s security state is called baselining, and the resulting product is a security baseline that allows the system to run safely and securely.

Images   Hardening is the process by which operating systems, network resources, and applications are secured against possible attacks.

Images   Securing operating systems consists of removing or disabling unnecessary services, restricting permissions on files and directories, removing unnecessary software (or not installing it in the first place), applying the latest patches, removing unnecessary user accounts, and ensuring strong password guidelines are in place.

Images   Securing network resources consists of disabling unnecessary functions, restricting access to ports and services, ensuring strong passwords are used, and ensuring the code on the network devices is patched and up to date.

Images   Securing applications depends heavily on the application involved but typically consists of removing samples and default materials, preventing reconnaissance attempts, and ensuring the software is patched and up to date.

Implement host-level security

Images   Antimalware/spyware/virus protections are needed on host machines to prevent malicious code attacks.

Images   Whitelisting can provide strong protections against malware on key systems.

Images   Host-based firewalls can provide specific protections from some attacks.

Harden applications

Images   Patch management is a disciplined approach to the acquisition, testing, and implementation of OS and application patches.

Images   A hotfix is a single package designed to address a specific, typically security-related problem in an operating system or application.

Images   A patch is a fix (or collection of fixes) that addresses vulnerabilities or errors in operating systems or applications.

Images   A service pack is a large collection of fixes, corrections, and enhancements for an operating system, application, or group of applications.

Establish group policies

Images   Group policies are a method for managing the settings and configurations of many different users and systems in an Active Directory environment.

Images   Group policies can be used to refine, set, or modify a system’s Registry settings, auditing and security policies, user environments, logon/logoff scripts, and so on.

Images   Security templates are collections of security settings that can be applied to a system. Security templates can contain hundreds of settings that control or modify settings on a system, such as password length, auditing of user actions, and restrictions on network access.

Secure alternative environments

Images   Alternative environments include process control (SCADA) networks, embedded systems, mobile devices, mainframes, game consoles, transportation systems, and more.

Images   Alternative environments require security, but are not universally equivalent to IT systems, so the specifics can vary tremendously from system to system.

Images   Key Terms


antispam (484)

antivirus (AV) (481)

application hardening (494)

application vulnerability scanner (500)

baseline (461)

baselining (461)

benchmarks (519)

black listing (487)

continuous monitoring (504)

Desired State Configuration (DSC) (474)

elasticity (507)

firmware update (463)

globally unique identifier (GUID) (475)

group policy (475)

group policy object (GPO) (475)

hardening (460)

hardware security module (HSM) (462)

heuristic scanning (481)

host vulnerability scanner (498)

hotfix (467)

industry-standard frameworks (517)

network operating system (NOS) (464)

network segmentation (494)

network vulnerability scanner (498)

operating system (OS) (464)

patch (468)

patch management (467)

pop-up blocker (487)

process identifier (PID) (479)

reference architectures (517)

reference monitor (465)

runlevels (478)

scalability (508)

secure configuration guides (519)

security kernel (465)

security template (505)

service pack (468)

TCP wrappers (507)

trusted operating system (466)

Trusted Platform Module (TPM) (461)

white listing (488)

Images   Key Terms Quiz


Use terms from the Key Terms list to complete the sentences that follow. Don’t use the same term more than once. Not all terms will be used.

1.   _______________ is the process of establishing a system’s security state.

2.   Securing and preparing a system for the production environment is called _____________.

3.   A(n) _______________ is a small software update designed to address a specific, often urgent, problem.

4.   The basic software on a computer that handles input and output is called the _______________.

5.   ____________ is the use of the network architecture to limit communication between devices.

6.   A(n) _______________ is a bundled set of software updates, fixes, and additional functions contained in a self-installing package.

7.   In most UNIX operating systems, each running program is given a unique number called a(n) _______________.

8.   When a user or process supplies more data than was expected, a(n) _______________ may occur.

9.   _______________ are used to describe the state of init and what system services are operating in UNIX systems.

10.   A(n) _______________ is a collection of security settings that can be applied to a system.

Images   Multiple-Choice Quiz


1.   A small software update designed to address an urgent or specific problem is called what?

A.   Hotfix

B.   Service pack

C.   Patch

D.   None of the above

2.   In a UNIX operating system, which runlevel describes single-user mode?

A.   0

B.   6

C.   4

D.   1

3.   TCP wrappers do what?

A.   Help secure the system by restricting network connections

B.   Help prioritize network traffic for optimal throughput

C.   Encrypt outgoing network traffic

D.   Strip out excess input to defeat buffer overflow attacks

4.   File permissions under UNIX consist of what three types?

A.   Modify, read, and execute

B.   Read, write, and execute

C.   Full control, read-only, and run

D.   Write, read, and open

5.   The mechanism that allows for centralized management and configuration of computers and remote users in an Active Directory environment is called:

A.   Baseline

B.   Group policies

C.   Simple Network Management Protocol

D.   Security templates

6.   What feature in Windows Server 2008 controls access to network resources based on a client computer’s identity and compliance with corporate governance policy?

A.   BitLocker

B.   Network Access Protection

C.   inetd

D.   Process identifiers

7.   To stop a particular service or program running on a UNIX operating system, you might use the ______ command.

A.   netstat

B.   ps

C.   kill

D.   inetd

8.   Updating the software loaded on nonvolatile RAM is called:

A.   A buffer overflow

B.   A firmware update

C.   A hotfix

D.   A service pack

9.   The shadow file on a UNIX system contains which of the following?

A.   The password associated with a user account

B.   Group policy information

C.   File permissions for system files

D.   Network services started when the system is booted

10.   On a UNIX system, if a file has the permissions rwx r-x rw-, what permissions does the owner of the file have?

A.   Read only

B.   Read and write

C.   Read, write, and execute

D.   None

Images   Essay Quiz


1.   Explain the difference between a hotfix and a service pack, and describe why both are so important.

2.   A new administrator needs some help creating a security baseline. Create a checklist/template that covers the basic steps in creating a security baseline to assist them, and explain why each step is important.

Lab Projects

   Lab Project 14.1

Use a lab system running Linux with at least one open service, such as FTP, Telnet, or SMTP. From another lab system, connect to the Linux system and observe your results. Configure TCP wrappers on the Linux system to reject all connection attempts from the other lab system. Now try to reconnect, and observe your results. Document your steps and explain how TCP wrappers work.

   Lab Project 14.2

Using a system running Windows, experiment with the Password Policy settings under the Local Security Policy (Settings | Control Panel | Administrative Tools | Local Security Policy). Find the setting for Passwords Must Meet Complexity Requirements and make sure it is disabled. Set the password on the account you are using to bob. Now enable the Passwords Must Meet Complexity Requirements setting and attempt to change your password to jane. Were you able to change it? Explain why or why not. Set your password to something the system will allow, and explain how you selected that password and how it meets the complexity requirements.
