The many uses for systems and operating systems require flexible components that allow users to design, configure, and implement the systems they need. Yet it is this very flexibility that causes some of the biggest weaknesses in computer systems. Computer and operating system developers often build and deliver systems in “default” modes that do little to secure the systems from external attacks. From the view of the developer, this is the most efficient mode of delivery, as there is no way they can anticipate what every user in every situation will need. From the user’s view, however, this means a good deal of effort must be put into protecting and securing the system before it is ever placed into service. The process of securing and preparing a system for the production environment is called hardening. Unfortunately, many users don’t understand the steps necessary to secure their systems effectively, resulting in hundreds of compromised systems every day.
Hardening systems, servers, workstations, networks, and applications is a process of defining the required uses and needs of a system and then aligning security controls with that desired functionality, limiting the system to only what it must do. Once this is determined, you have a system baseline that you can compare changes against over the course of the system's lifecycle.
The process of establishing a system's operational state is called baselining, and the resulting product is a system baseline that describes the capabilities of a software system. Once the process has been completed for a particular hardware and software combination, any similar systems can be configured with the same baseline to achieve the same level of security. Uniform baselines are critical in large-scale operations, because maintaining separate configurations and security levels for hundreds or thousands of systems is far too costly.
Constructing a baseline or hardened system is similar for servers, workstations, and network operating systems (NOSs). The specifics may vary, but the objectives are the same.
Hardware, in the form of servers, workstations, and even mobile devices, can represent a weakness or vulnerability in the security system associated with an enterprise. While hardware can be easily replaced if lost or stolen, the information that is contained by the devices complicates the security picture. Data or information can be safeguarded from loss by backups, but this does little in the way of protecting it from disclosure to an unauthorized party. There are software measures that can assist in the form of encryption, but these also have drawbacks in the form of scalability and key distribution.
Full drive encryption (FDE) and self-encrypting drives (SED) are methods of implementing cryptographic protection on hard drives and other similar storage media with the express purpose of protecting the data even if the drive is removed from the machine. Portable machines, such as laptops, have a physical security weakness in that they are relatively easy to steal and then can be attacked offline at the attacker’s leisure. The use of modern cryptography, coupled with hardware protection of the keys, makes this vector of attack much more difficult. In essence, both of these methods offer a transparent, seamless manner of encrypting the entire hard drive using keys that are only available to someone who can properly log in to the machine.
FDE and SED began as software-only proprietary solutions, but a hardware-based standard called Opal has been created. Developed by the Trusted Computing Group (TCG), Opal is used for applying hardware-based encryption to mass storage devices, hard drives (rotating media), solid state drives, and optical drives. Having a standard has the advantages of interoperability between vendors and can be OS independent. Having it in hardware improves performance and increases security. The encryption/decryption keys are stored in the hard drive controller and are never loaded into system memory, keeping them safe from attack.
The Trusted Platform Module (TPM) is a hardware solution on the motherboard, one that assists with key generation and storage as well as random number generation. When the encryption keys are stored in the TPM, they are not accessible via normal software channels and are physically separated from the hard drive or other encrypted data locations. This makes the TPM a more secure solution than storing the keys on the machine’s normal storage.
A hardware root of trust is the concept that if one has trust in a source’s specific security functions, this layer can be used to promote security to higher layers of a system. Because roots of trust are inherently trusted, they must be secure by design. This is usually accomplished by keeping them small and limiting their functionality to a few specific tasks. Many roots of trust are implemented in hardware that is isolated from the OS and the rest of the system so that malware cannot tamper with the functions they provide. Examples of roots of trust include TPM chips in computers and Apple’s Secure Enclave coprocessor in its iPhones and iPads. Apple also uses a signed Boot ROM mechanism for all software loading.
A hardware security module (HSM) is a device used to manage or store encryption keys. It can also assist in cryptographic operations such as encryption, hashing, and the application of digital signatures. HSMs are typically peripheral devices, connected via USB or a network connection. HSMs have tamper-protection mechanisms to prevent physical access to the secrets they guard. Because of their dedicated design, they can offer significant performance advantages over general-purpose computers when it comes to cryptographic operations. When an enterprise has significant levels of cryptographic operations, HSMs can provide throughput efficiencies.
Storing private keys anywhere on a networked system is a recipe for loss. HSMs are designed to allow the use of a key without exposing it to the wide range of host-based threats.
Basic Input/Output System (BIOS) is the firmware that a computer system uses as a connection between the actual hardware and the operating system. BIOS is typically stored on nonvolatile flash memory, which allows for updates, yet persists when the machine is powered off. The purpose behind BIOS is to initialize and test the interfaces to the actual hardware in a system. Once the system is running, the BIOS translates low-level access to the CPU, memory, and hardware devices, presenting a common interface for the OS to connect to. This allows a single OS installation to work across hardware from multiple manufacturers and in differing configurations.
Unified Extensible Firmware Interface (UEFI) is the current replacement for BIOS. UEFI offers a significant modernization over the decades-old BIOS, including support for modern peripherals such as high-capacity storage and high-bandwidth communications. UEFI also has more security designed into it, including provisions for secure booting. One of the key characteristics of the UEFI BIOS as opposed to the legacy BIOS is that UEFI BIOS is designed to work with the hardware platform to ensure that the flash memory that holds the BIOS cannot be changed without the proper cryptographic credentials. This forms a root of trust in the contents of the flash memory, specifically in the UEFI BIOS. The private key used to sign the BIOS is controlled by the equipment manufacturer, thus preventing unauthorized changes to the BIOS. The BIOS performs a check on all updates prior to loading them, using the manufacturer's public key stored in the BIOS to verify that every update is properly signed by the manufacturer. These steps create the root of trust for the system.
Measured boot is also a method of depending on the root of trust in starting a system, but rather than using signatures to verify subsequent components, a measured boot process hashes the subsequent processes and compares the hash values to known-good values. This has the advantage that it can be extended beyond items covered by the manufacturer, as the signatures come from the manufacturer and thus are limited to only specific items. The known-good hash values must be stored in a secure location, and the Trusted Platform Module (TPM) platform configuration registers (PCRs) comprise the secure location that is used.
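The measure-and-compare step at the heart of measured boot can be sketched in a few lines of Python. The component names and byte strings below are hypothetical stand-ins for real boot images, and in a real system the known-good values would be protected by the TPM rather than held in plain code:

```python
import hashlib

# Known-good SHA-256 digests for each boot component (illustrative values,
# recorded when the system was in a trusted state).
KNOWN_GOOD = {
    "bootloader": hashlib.sha256(b"bootloader v1.0").hexdigest(),
    "kernel":     hashlib.sha256(b"kernel v5.4").hexdigest(),
}

def measure(component: str, image: bytes) -> bool:
    """Hash the component image and compare it to the stored known-good value."""
    return hashlib.sha256(image).hexdigest() == KNOWN_GOOD[component]

# An unmodified kernel passes; a tampered one fails.
print(measure("kernel", b"kernel v5.4"))          # True
print(measure("kernel", b"kernel v5.4+rootkit"))  # False
```

Any single-bit change to a measured component produces a completely different hash, which is what makes the comparison a reliable tamper signal.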
One of the challenges in securing an OS is the myriad of drivers and other add-ons that hook into the OS and provide specific added functionality. If these additional programs are not properly vetted before installation, this pathway can provide a means by which malicious software can attack a machine. Also, because these attacks can occur at boot time, at a level below security applications such as antivirus software, they can be very difficult to detect and defeat. UEFI offers a solution to the problem of boot integrity, called Secure Boot, which is a mode that when enabled only allows signed drivers and OS loaders to be invoked. Secure Boot requires specific setup steps, but once enabled, it blocks malware that attempts to alter the boot process. Secure Boot enables the attestation that the drivers and OS loaders being used have not changed since they were approved for use. Secure Boot is supported by Microsoft Windows and all major versions of Linux.
Attestation means verifying the authenticity of a platform or device based on a trusted record of evidence. Secure Boot, for example, ensures the system boots into a trusted configuration by having evidence of each step’s authenticity verified.
Integrity measurement is the measuring and identification of changes to a specific system away from an expected value. Whether it’s the simple changing of data as measured by a hash value or the TPM-based integrity measurement of the system boot process and attestation of trust, the concept is the same: take a known value, store a hash or other keyed value, and then, at the time of concern, recalculate and compare values.
In the case of TPM-mediated systems, where the TPM chip provides a hardware-based root of trust anchor, the TPM system is specifically designed to calculate hashes of a system and store them in a platform configuration register (PCR). This register can be read later and compared to a known, or expected, value; if they differ, there is a trust violation. Certain BIOSes, UEFI firmware, and bootloaders can all work with the TPM chip in this manner, providing a means of establishing a trust chain during system boot.
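PCRs are not overwritten with each measurement; they are extended, with each new measurement folded into the previous register value, so the final PCR reflects the entire ordered chain of boot stages. A simplified software sketch of the extend operation follows (the stage names are illustrative; a real TPM performs this in hardware):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = H(old PCR || H(measurement))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at platform reset
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = extend(pcr, stage)

# Replaying the same measurements in the same order reproduces the value;
# any change or reordering yields a different PCR, exposing the tampering.
print(pcr.hex())
```

Because the extend operation is one-way, software cannot set a PCR to an arbitrary value; it can only append to the measurement chain, which is what makes PCR values trustworthy evidence of the boot sequence.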
Firmware is present in virtually every system, but in many embedded systems it plays an even more critical role because it may also contain the OS and application. Maintaining strict control measures over the changing of firmware is essential to ensuring the authenticity of the software on a system. Firmware updates require extreme quality measures to ensure that errors are not introduced as part of an update process. Updating firmware, although only occasionally necessary, is a very sensitive event, because failure can lead to system malfunction. If an unauthorized party is able to change the firmware of a system, as demonstrated in an attack against ATMs, an adversary can gain complete functional control over a system.
Electromagnetic interference (EMI) is an electrical disturbance that affects an electrical circuit. It is due to either electromagnetic induction or radiation emitted from an external source, either of which can induce currents into the small circuits that make up computer systems and cause logic upsets. An electromagnetic pulse (EMP) is a burst of current in an electronic device that results from a pulse of electromagnetic radiation. EMP can produce damaging current and voltage surges in today's sensitive electronics. The main sources of EMP are industrial equipment on the same circuit, solar flares, and nuclear bursts high in the atmosphere.
It is important to shield computer systems from circuits with large industrial loads, such as motors. These power sources can have significant noise, including EMI and EMPs that will potentially damage computer equipment. Another source of EMI is fluorescent lights. Be sure any cabling that goes near fluorescent light fixtures is well shielded and grounded.
Hardware and firmware security is ultimately dependent on the manufacturer for the root of trust. In today's world of global manufacturing with global outsourcing, fully understanding what your manufacturing supply chain looks like, and how it changes from device to device and even between lots, is difficult because many details can be unknown. Who manufactured all the components of the device you are ordering? If you're buying a new PC, where did the hard drive come from? Can the new PC come preloaded with malware? Yes, it has happened.
The supply chain for assembled equipment can be very tricky, because you have to worry not only about where you get the computer but also about where the vendor gets the parts and the software, including who wrote the software and with what libraries. These can be very difficult issues to navigate if you have strict rules concerning country of origin.
The operating system (OS) of a computer is the basic software that handles things such as input, output, display, memory management, and all the other highly detailed tasks required to support the user environment and associated applications. Most users are familiar with the Microsoft family of desktop operating systems: Windows 7, Windows 8, and Windows 10. Indeed, the vast majority of home and business PCs run some version of a Microsoft operating system. Other users may be familiar with macOS, Solaris, or one of the many varieties of the UNIX/Linux operating system.
A network operating system (NOS) is an operating system that includes additional functions and capabilities to assist in connecting computers and devices, such as printers, to a local area network (LAN). For most modern operating systems, including Windows Server, Solaris, and Linux, the terms operating system and network operating system are used interchangeably because they perform all the basic functions and provide enhanced capabilities for connecting to LANs. Network operating system can also apply to the operational software that controls managed switches and routers, such as Cisco’s IOS and Juniper’s Junos.
The Term Operating System
Operating system is the commonly accepted term for the software that provides the interface between computer hardware and the user. It is responsible for the management, coordination, and sharing of limited computer resources such as memory and disk space.
Protection rings were devised in the Multics operating system in the 1960s to deal with security issues associated with time-sharing operations. Protection rings can be enforced by hardware, software, or a combination of the two, and they serve as a means of managing privilege in a hierarchical manner. Ring 0 is the level with the highest privilege and is the element that interacts directly with the physical hardware (CPU and memory). Higher-numbered rings, with less privilege, must interact with adjoining rings through specific gates in a predefined manner. The use of rings prevents elements such as applications from directly interfacing with the hardware without going through the OS and, specifically, the security kernel, as shown here.
The operating system itself is the foundation of system security. The operating system does this through the use of a security kernel. The security kernel is also called a reference monitor and is the component of the operating system that enforces the security policies of the operating system. The core of the OS is constructed so that all operations must pass through and be moderated by the security kernel, placing it in complete control over the enforcement of rules. Security kernels must exhibit some properties to be relied upon: they must offer complete mediation, as just discussed, and must be tamperproof and verifiable in operation. Because they are part of the OS and are in fact a piece of software, ensuring that security kernels are tamperproof and verifiable is a legitimate concern. Achieving assurance with respect to these attributes is a technical matter that is rooted in the actual construction of the OS and technically beyond the level of this book.
Data Execution Prevention
Data Execution Prevention (DEP) is a collection of hardware and software technologies to limit the ability of malware to execute in a system. Windows uses DEP to prevent code execution from data pages.
Many different systems have the need for an operating system. Hardware in networks requires an operating system to perform the networking function. Servers and workstations require an OS to act as the interface between applications and the hardware. Specialized systems such as kiosks and appliances, both of which are forms of automated single-purpose systems, require an OS between the application software and hardware.
Network components use a network operating system to provide the actual configuration and computation portion of networking. There are many vendors of networking equipment, and each has its own proprietary operating system. Cisco has the largest footprint with its IOS (for Internetworking Operating System). Juniper has Junos, which is built on a stripped-down FreeBSD core. As networking moves to software-defined networking (SDN), the concept of a network operating system will become more important and mainstream because it will become a major part of day-to-day operations in the IT enterprise.
Servers require an operating system to bridge the gap between the server hardware and the applications that are being run. Currently, server OSs include Microsoft Windows Server, many flavors of Linux, and more and more VM/hypervisor environments. For performance reasons, Linux has a significant market share in the realm of server OSs, although Windows Server with its Active Directory technology has made significant inroads into market share.
The OS on a workstation exists to provide a functional working space for a user to interact with the system and its various applications. Because of the high level of user interaction on workstations, it is very common to see Windows in this role. In large enterprises, the ability of Active Directory to manage users, configurations, and settings easily across the entire enterprise has given Windows client workstations an advantage over Linux.
Appliances are standalone devices, wired into the network and designed to run an application to perform a specific function on traffic. These systems operate as headless servers, preconfigured with applications that run and perform a wide range of security services on the network traffic they see. For reasons of economics, portability, and functionality, the vast majority of appliances are built on top of a Linux-based system. As these are often customized distributions, keeping them patched becomes a vendor problem because this sort of work is outside the scope or ability of most IT people to properly manage.
Kiosks are standalone machines that typically operate a browser instance on top of a Windows OS. These machines are usually set up to automatically log in to a browser instance that is locked to a website that allows all of the functionality desired. Kiosks are commonly used for interactive customer service applications, such as interactive information sites, menus, and so on. The OS on a kiosk needs to be able to be locked down to minimal function, have elements such as automatic login, and offer an easy way to construct the applications.
Mobile devices began as phones with limited additional capabilities, but as the Internet and functionality spread to mobile devices, the capabilities of these devices have expanded as well. From smartphones to tablets to wearables, today’s mobile system is a computer, with virtually all the compute capability one could ask for—with a phone attached. The two main mobile OSs in the market today are Apple’s iOS and Google’s Android system.
A trusted operating system is one that is designed to allow multilevel security in its operation. This is further defined by its ability to meet a series of criteria required by the U.S. government. Trusted OSs are expensive to create and maintain because any change must typically undergo a recertification process. The most common criteria used to define a trusted OS are the Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria, or CC), a harmonized set of security criteria recognized by many nations, including the United States, Canada, Great Britain, and most EU countries, among others. Versions of Windows, Linux, mainframe OSs, and specialty OSs have been qualified to various Common Criteria levels.
The term trusted operating system is used to refer to a system that has met a set of criteria and demonstrated correctness to meet requirements of multilevel security. The Common Criteria is one example of a standard used by government bodies to determine compliance to a level of security need.
Patch management is the process used to maintain systems in an up-to-date fashion, including all required patches. Every OS, from Linux to Windows, requires software updates, and each OS has different methods of assisting users in keeping their systems up to date.
In Windows 10 forward, Microsoft has adopted a newer methodology, treating the OS as a service, and has dramatically updated its servicing model. Windows 10 now has a twice-per-year feature update release schedule, aiming for March and September, with an 18-month servicing timeline for each release; the long-term servicing editions are the exception, being serviced for 10 years from the date of release. Microsoft regularly issues patches for its Windows and Office products on a monthly schedule, which has become known as "Patch Tuesday." Patch Tuesday occurs on the second Tuesday of each month. Windows 10 checks for updates about once per day. The typical Windows PC will automatically download these updates via Windows Update by Wednesday afternoon if it's powered on and connected to the Internet. Administrators may choose to delay and test these updates before deploying them to PCs in their organizations.
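Because Patch Tuesday is defined as the second Tuesday of the month, its date can be computed directly; a small illustrative Python helper:

```python
import calendar

def patch_tuesday(year: int, month: int) -> int:
    """Return the day of the month on which the second Tuesday falls."""
    # monthcalendar() returns weeks as Mon..Sun lists, with 0 for days
    # that fall outside the month.
    tuesdays = [week[calendar.TUESDAY]
                for week in calendar.monthcalendar(year, month)
                if week[calendar.TUESDAY] != 0]
    return tuesdays[1]  # index 1 = the second Tuesday

print(patch_tuesday(2021, 3))  # -> 9  (Patch Tuesday was March 9, 2021)
```

Administrators sometimes schedule their own test-and-deploy windows relative to this date, which is why being able to calculate it programmatically is handy.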
For critical issues that are currently being exploited, Microsoft will issue out-of-band patches. When these are released, it is best practice to immediately update, as the flaws are being exploited actively by attackers.
For Microsoft cloud-based products such as the Office 365 suite, patching is seamless: updates are applied to the cloud-based product itself and are integrated by users, typically upon next use.
How you patch a Linux system depends a great deal on the specific version in use and the patch being applied. In some cases, a patch will consist of a series of manual steps requiring the administrator to replace files, change permissions, and alter directories. In other cases, the patches are executable scripts or utilities that perform the patch actions automatically. Some Linux versions, such as Red Hat, have built-in utilities that handle the patching process. In those cases, the administrator downloads a specifically formatted file that the patching utility then processes to perform any modifications or updates that need to be made.
Regardless of the method you use to update the OS, it is critically important to keep systems up to date. New security advisories come out every day, and while a buffer overflow may be a “potential” problem today, it will almost certainly become a “definite” problem in the near future. Much like the steps taken to baseline and initially secure an OS, keeping every system patched and up to date is critical to protecting the system and the information it contains.
Vendors typically follow a hierarchy for software updates:
Hotfix This term refers to a (usually) small software update designed to address a specific problem, such as a buffer overflow in an application that exposes the system to attacks. Hotfixes are typically developed in reaction to a discovered problem and are produced and released rather quickly.
Patch This term refers to a more formal, larger software update that can address several or many software problems. Patches often contain enhancements or additional capabilities as well as fixes for known bugs. Patches are usually developed over a longer period of time.
Service pack This refers to a large collection of patches and hotfixes rolled into a single, rather large package. Service packs are designed to bring a system up to the latest known-good level all at once, rather than requiring the user or system administrator to download dozens or hundreds of updates separately.
An important management issue for running a secure system is to identify the specific needs of a system for its proper operation and to enable only the items necessary for those functions. Disabling unnecessary ports and services prevents their use by unauthorized users, improves system throughput, and increases security. Any port or connection not in use should be disabled.
Disabling unnecessary ports and services is a simple way to improve system security. This minimalist setup is similar to the “implicit deny” philosophy and can significantly reduce an attack surface.
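As an illustration of what "unnecessary ports" means in practice, the sketch below probes whether a TCP port is accepting connections; the throwaway listener stands in for an unneeded service that should have been disabled:

```python
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means connected

# Demonstration: start a throwaway listener, probe it, then shut it down.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_open("127.0.0.1", port))  # True  -- the "service" is reachable
listener.close()
print(port_open("127.0.0.1", port))  # False -- disabled, attack surface gone
```

Every port that answers is a potential entry point; scanning your own systems this way and comparing the results against the documented baseline is a quick check for configuration drift.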
Just as we have a principle of least privilege, we should follow a similar track with least functionality on systems. A system should do what it is supposed to do, and only what it is supposed to do. Any additional functionality is an added attack surface for an adversary and offers no additional benefit to the enterprise.
Operating systems can be configured in a variety of manners—from completely open with lots of functionality, whether it is needed or not, to stripped to the services needed to perform a particular task. Operating system developers and manufacturers all share a common problem: they cannot possibly anticipate the many different configurations and variations that the user community will require from their products. So, rather than spending countless hours and funds attempting to meet every need, manufacturers provide a “default” installation for their products that usually contains the base OS and some more commonly desirable options, such as drivers, utilities, and enhancements. Because the OS could be used for any of a variety of purposes, and could be placed in any number of logical locations (LAN, screened subnet, WAN, and so on), the manufacturer typically does little to nothing with regard to security. The manufacturer may provide some recommendations or simplified tools and settings to facilitate securing the system, but in general, end users are responsible for securing their own systems. Generally this involves removing unnecessary applications and utilities, disabling unneeded services, setting appropriate permissions on files, and updating the OS and application code to the latest version.
Weak security configurations are a result of many different items, each specific to a particular set of components and operating conditions. The path to avoid weak configurations involves a combination of information sources. One is manufacturer recommendations, another is industry best practices, and the last is testing.
This process of securing an OS is called hardening, and it is intended to make the system more resistant to attack, much like armor or steel is hardened to make it less susceptible to breakage or damage. Each OS has its own approach to security, and although the process of hardening is generally the same, different steps must be taken to secure each OS. The process of securing and preparing an OS for the production environment is not trivial; it requires preparation and planning. Unfortunately, many users don’t understand the steps necessary to secure their systems effectively, resulting in hundreds of compromised systems every day.
System hardening is the process of preparing and securing a system and involves the removal of all unnecessary software and services.
You must meet several key requirements to ensure that the system hardening processes described in this section achieve their security goals. These are OS independent and should be a normal part of all system maintenance operations:
The base installation of all OS and application software comes from a trusted source and is verified as correct by using hash values.
Machines are connected only to a completely trusted network during the installation, hardening, and update processes.
The base installation includes all current patches and updates for both the OS and applications.
Current backup images are taken after hardening and updates to facilitate system restoration to a known state.
These steps ensure that you know what is on the machine, can verify its authenticity, and have an established backup version.
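The hash verification called for in the first requirement can be scripted with standard tooling. A minimal Python sketch follows, using a throwaway temporary file in place of a real install image and a locally computed digest in place of the value a vendor would publish on its download page:

```python
import hashlib
import os
import tempfile

def sha256_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large images need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstration with a stand-in for an install image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"pretend this is an install image")
    path = f.name

published = hashlib.sha256(b"pretend this is an install image").hexdigest()
print(sha256_file(path) == published)  # True -> safe to proceed with install
os.remove(path)
```

If the computed digest does not match the published one, the media should be discarded and re-downloaded from a trusted source before any installation proceeds.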
Because accounts are necessary for many systems to be established, default accounts with default passwords are a way of life in computing. Whether the account is for the OS or an application, a default account with a default password is a significant security vulnerability if not immediately addressed as part of setting up the system or installing the application. Disabling default accounts/passwords should be such a common practice that there should be no systems with this vulnerability. This is a simple task, and one that must be done. When you cannot disable the default account (and there will be times when disabling is not a viable option), the alternative is to change the password to a very long one that offers strong resistance to brute force attacks.
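A simple audit for default accounts can be scripted. In the sketch below, the list of vendor default account names is hypothetical; in practice it would come from the documentation for the OS and applications in use:

```python
# Hypothetical vendor default credentials worth flagging during an audit.
DEFAULT_ACCOUNTS = {"admin": "admin", "root": "toor", "guest": ""}

def audit_accounts(active_accounts):
    """Return active account names that match known default account names,
    sorted for stable reporting."""
    return sorted(set(active_accounts) & set(DEFAULT_ACCOUNTS))

print(audit_accounts(["alice", "admin", "svc_backup", "guest"]))
# -> ['admin', 'guest']
```

Any names the audit returns should be disabled, or have their passwords replaced with long, strong values, before the system goes into production.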
Modern software is configuration driven. This means that setting proper configurations is essential for secure operation of the software. Using weak configurations or allowing access to configuration files so attackers can weaken or misconfigure a system is a security failure. Default configurations should be checked to ensure they employ the desired level of security.
Applications can be controlled at the OS level at launch time via blacklisting or whitelisting. Application blacklisting is essentially noting which applications should not be allowed to run on the machine. This is basically a permanent "ignore" or "call block" type of capability. Application whitelisting is the exact opposite: it consists of a list of allowed applications. Each of these approaches has advantages and disadvantages. Blacklisting is difficult to use against dynamic threats, as the identification of a specific application can easily be avoided through minor changes. Whitelisting is easier to employ from the aspect of identifying which applications are allowed to run; hash values can be used to ensure the executables are not corrupted. The challenge in whitelisting is the number of potential applications that run on a typical machine. For a single-purpose machine, such as a database server, whitelisting can be relatively easy to employ. For multipurpose machines, it can be more complicated.
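The hash-based identification used in whitelisting is straightforward to sketch; the byte strings below are hypothetical stand-ins for real executable contents:

```python
import hashlib

# Allow list keyed by SHA-256 of each approved executable's contents
# (illustrative byte strings stand in for real binaries).
ALLOW_LIST = {
    hashlib.sha256(b"contents of approved-app.exe").hexdigest(),
}

def may_run(executable_bytes: bytes) -> bool:
    """Permit execution only if the binary's hash appears on the allow list."""
    return hashlib.sha256(executable_bytes).hexdigest() in ALLOW_LIST

print(may_run(b"contents of approved-app.exe"))  # True
print(may_run(b"contents of malware.exe"))       # False
```

Note that hashing the contents, rather than matching on a filename, is what defeats the trivial rename-and-run evasion that makes blacklisting so fragile.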
CompTIA updated a number of terms in the most recent exam objectives for CompTIA Security+ and has moved away from using terms like whitelisting and blacklisting and is now using allow list and block list/deny list, respectively.
Microsoft has two mechanisms that are part of the OS to control which users can use which applications:
Software restriction policies Employed via Group Policy, these allow significant control over applications, scripts, and executable files. The primary mode is by machine and not by user account.
User account level control Enforced via AppLocker, which is a service that allows granular control over which users can execute which programs. Through the use of rules, an enterprise can exert significant control over who can access and use installed software.
Using OS-level restrictions to control what software can be used can prevent users from loading and running unauthorized software. Unauthorized software, whether because of licensing restrictions or because it is not vetted for use, can present risk to the enterprise. Controlling this risk via an enterprise operational control such as whitelisting can simplify compliance and improve baseline security posture.
On a Linux platform, similar capabilities are offered from third-party vendor applications.
Sandboxing refers to the quarantine or isolation of a system from its surroundings. It has become standard practice for some programs with an increased risk surface to operate within a sandbox, limiting their interaction with the CPU and other resources, such as memory. This works as a means of quarantine, preventing problems from getting out of the sandbox and onto the OS and other applications on a system.
Virtualization can be used as a form of sandboxing with respect to an entire system. You can build a VM, test something inside the VM, and, based on the results, make a decision with regard to stability or whatever concern was present.
While the process of establishing software’s base state is called baselining, and the resulting product is a baseline that describes the capabilities of the software, it is not necessarily secure. To secure the software on a system effectively and consistently, you must take a structured and logical approach. This starts with an examination of the system’s intended functions and capabilities to determine what processes and applications will be housed on the system. As a best practice, anything that is not required for operations should be removed or disabled on the system; then, all the appropriate patches, hotfixes, and settings should be applied to protect and secure it. This becomes the system’s secure baseline.
Software and hardware can be integrally tied when it comes to security, so they must be considered together. Once the process has been completed for a particular hardware and software combination, any similar systems can be configured with the same baseline to achieve the same level and depth of security and protection. Uniform software baselines are critical in large-scale operations, because maintaining separate configurations and security levels for hundreds or thousands of systems is far too costly.
After administrators have finished patching, securing, and preparing a system, they often create an initial baseline configuration. This represents a secure state for the system or network device and a reference point of the software and its configuration. This information establishes a reference that can be used to help keep the system secure by establishing a known-safe configuration. If this initial baseline can be replicated, it can also be used as a template when similar systems and network devices are being deployed.
The central management issue behind running a secure server setup is to identify the specific needs of the server for its proper operation and enable only the items necessary for those functions. Keeping all other services and users off the system improves system throughput and increases security. Reducing the attack surface area associated with a server reduces its vulnerabilities, both now and in the future as updates are required.
Securing a Workstation
Workstations are attractive targets for attackers because they are numerous and can serve as entry points into the network and to the data that is commonly the target of an attack. Although security is a relative term, following these basic steps will increase workstation security immensely:
Remove unnecessary protocols such as Telnet and NetBIOS.
Remove unnecessary software.
Remove modems unless needed and authorized.
Remove all shares that are not necessary.
Rename the administrator account, securing it with a strong password.
Remove or disable the Local Admin account in Windows.
Disable unnecessary user accounts.
Disable unnecessary ports and services.
Install an antivirus program and keep abreast of updates.
If the floppy drive is not needed, remove or disconnect it.
Consider disabling USB ports via BIOS/UEFI settings to restrict data movement to USB devices.
If no corporate firewall exists between the machine and the Internet, install a firewall.
Keep the operating system (OS) patched and up to date.
Keep all applications patched and up to date.
Turn on event logging for the security elements you determine need monitoring.
Server Hardening Tips
Specific security needs can vary depending on the server’s specific use, but at a minimum, the following are beneficial:
Remove unnecessary protocols such as Telnet, NetBIOS, and File Transfer Protocol (FTP).
Remove unnecessary programs such as Internet Information Services (IIS).
Remove all shares that are not necessary.
Rename the administrator account, securing it with a strong password.
Remove or disable the Local Admin account in Windows.
Disable unnecessary user accounts.
Disable unnecessary ports and services.
Keep the operating system (OS) patched and up to date.
Keep all applications patched and up to date.
Turn on event logging for the security elements you determine need monitoring.
Control physical access to servers.
Once a server has been built and is ready to be placed into operation, the recording of hash values on all of its crucial files will provide valuable information later in case of a question concerning possible system integrity after a detected intrusion. The use of hash values to detect changes was first developed by Gene Kim and Eugene Spafford at Purdue University in 1992. The concept became the product Tripwire, which is now available in commercial and open source forms. The same basic concept is used by many security packages to detect file-level changes.
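The Tripwire idea can be sketched in a few lines of Python; this is a conceptual illustration of hash-based integrity checking, not Tripwire's actual implementation:

```python
import hashlib
import os

def file_hash(path):
    """SHA-256 digest of a file's current contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths):
    """Record a hash for every crucial file at deployment time."""
    return {p: file_hash(p) for p in paths}

def detect_changes(baseline):
    """Return the files whose current hash no longer matches the baseline."""
    changed = []
    for path, recorded in baseline.items():
        if not os.path.exists(path) or file_hash(path) != recorded:
            changed.append(path)
    return changed
```

Recording the baseline immediately after the build, and storing it somewhere the server cannot modify, is what gives the comparison evidentiary value after a suspected intrusion.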
The primary method of controlling the security impact of a system on a network is to reduce the available attack surface area. Turning off all services that are not needed or permitted by policy will reduce the number of vulnerabilities. Removing methods of connecting additional devices to a workstation to move data—such as optical drives and USB ports—assists in controlling the movement of data into and out of the device. User-level controls, such as limiting e-mail attachment options, screening all attachments at the e-mail server level, and reducing network shares to needed shares only, can be used to limit excessive connectivity that can impact security.
Early versions of home operating systems did not have separate named accounts for separate users. This was seen as a convenience mechanism; after all, who wants the hassle of signing in to the machine? This led to the simple problem that all users could then see, modify, and delete everyone else’s content. Content could be separated by using access control mechanisms, but that required configuration of the OS to manage every user’s identity. Early versions of many OSs came with literally every option turned on. Again, this was a convenience factor, but it led to systems running processes and services that they never used, thus increasing the attack surface of the host unnecessarily.
Determining the correct settings and implementing them correctly is an important step in securing a host system. The following sections explore the multitude of controls and options that need to be employed properly to achieve a reasonable level of security on a host system.
Microsoft has spent years working to develop the most secure and securable OS on the market. As a desktop OS, Windows has provided a range of security features for users to secure their systems. Most of these options can be employed via group policies in enterprise setups, making them easily deployable and maintainable across an enterprise.
Here are some of the security capabilities in the Windows environment:
User Account Control allows users to operate the system without requiring administrative privileges. If you’ve used Windows, you’ve undoubtedly seen the “Windows needs your permission to continue” pop-ups.
Windows Firewall includes an outbound filtering capability. Windows allows filtering of traffic coming into and leaving the system, which is useful for controlling things like peer-to-peer applications.
BitLocker allows encryption of all data on a server, including any data volumes. This capability is only available in the higher-end distributions of Windows.
Windows clients can control applications with AppLocker. AppLocker allows administrators to configure which applications can be run on a Windows machine within an enterprise environment. This is part of the Microsoft OS.
Windows Defender (part of Windows Security) is a built-in malware detection and removal tool. Windows Defender detects many types of potentially suspicious software and can prompt the user before allowing applications to make potentially malicious changes.
Windows Server comes with a host of mechanisms that can be deployed to provide a secure platform:
BitLocker allows encryption of all data on a server, including any data volumes. Improved BitLocker functionality now allows administrator-less reboots.
Role-based installation of functions and capabilities minimizes the server’s footprint. For example, if a server is going to be a web server, it does not need DNS or SMTP software, and thus those features are no longer installed by default.
AppLocker can control which executables can run on a server. This feature, deployable from a central location and managed enterprise wide, enables administrators to define which applications are allowed to run on each server. This feature reduces malware spread and enables compliance with corporate governance policy.
Read-only domain controllers can be created and deployed in high-risk locations; they can’t be modified to add new users, change access levels, and so on, which makes them very useful in high-threat environments.
More-granular password policies allow for different password policies on a group or user basis. This allows administrators to assign different password policies and requirements for the sales group and the engineering group, for example, if that capability is needed.
Websites or web applications can be administered within IIS 10. This allows administrators quicker and more convenient administration capabilities, such as the ability to turn on or off specific modules through the IIS management interface. For example, removing CGI support from a web application is a quick and simple operation in the Web Server (IIS) role and IIS version 10.
The traditional ROM-BIOS has been replaced with Unified Extensible Firmware Interface (UEFI). The current version is 2.8, which prevents boot code updates without appropriate digital certificates and signatures.
The trustworthy and verified boot process has been extended to the entire Windows OS boot code with a feature known as Secure Boot. UEFI and Secure Boot significantly reduce the risk of malicious code such as rootkits and boot viruses.
Early Launch Anti-Malware (ELAM) has been instituted to ensure that only known, digitally signed anti-malware programs can load early in the boot process, right after Secure Boot finishes (although ELAM itself does not require UEFI or Secure Boot). This permits legitimate anti-malware programs to get into memory and start doing their job before fake antivirus programs or other malicious code can act.
DNSSEC is fully integrated.
Data Classification with Rights Management Service is fully integrated so that you can control which users and groups can access which documents based on content or marked classification.
Managed Service Accounts allow for advanced self-maintaining features with extremely long passwords, which automatically reset every 30 days, all under the control of Active Directory in the enterprise.
Credential Guard enables the use of virtualization-based security to isolate credential information, preventing password hashes or Kerberos tickets from being intercepted. Credential Guard uses an entirely new isolated Local Security Authority (LSA) process, which is not accessible to the rest of the operating system. All binaries used by the isolated LSA are signed with certificates that are validated before they are launched in the protected environment, making pass-the-hash-type attacks completely ineffective.
Windows Server 2019 includes Windows Defender Device Guard to ensure that only trusted software can be run on the server. Using virtualization-based security, this system can limit what binaries can run on the system based on the organization’s policy. If anything other than the specified binaries tries to run, Windows OS blocks it and logs the failed attempt so that administrators can see that there has been a potential breach. Windows Defender Device Guard is also integrated with PowerShell so that you can authorize which scripts can run on your system.
The tools available in each subsequent release of the Windows Server OS are designed to increase the difficulty factor for attackers, eliminating known methods of exploitation. The challenge is in administrating the security functions, although the integration of many of these via Active Directory makes this much more manageable than in the past.
In Microsoft Windows Server, both Device Guard and Credential Guard depend on Virtual Secure Mode (VSM). One lens through which to examine security is segmentation, which forces a separation between programmatic elements. The Windows Hyper-V hypervisor separates each virtual machine’s internal processes from the hardware of the host and from other virtual machines (VMs). VSM is based on this concept and leverages the hypervisor to secure the server/desktop. Using VSM, specific processes and their associated memory become isolated from the host operating system. This forces malicious code to operate independently of the host OS and the hardware underneath.
Microsoft provided a tool called Security Compliance Manager (SCM) to assist system and enterprise administrators with the configuration of security options across a wide range of Microsoft platforms. SCM allows administrators to use group policy objects (GPOs) to deploy security configurations across Internet Explorer, the desktop OSs, server OSs, and common applications such as Microsoft Office. Microsoft retired SCM in the summer of 2017 in favor of a new toolset called Desired State Configuration (DSC).
Desired State Configuration (DSC) is a PowerShell-based approach to configuration management of a system. Rather than having documentation that describes the security settings for a system and expecting a user to set them, DSC performs the work via PowerShell functions. This makes security configuration a managed-by-code process that brings with it many advantages. Using DSC, it is easier and faster to adopt, implement, maintain, deploy, and share system configuration information. DSC brings the advantages of DevOps to system configuration in the Windows environment. While detailed PowerShell implementations are beyond the scope of this book, the concept of programmable configuration control is not. DSC is more than just PowerShell because DSC configurations separate intent (“what I want to do”) from execution (“how I want to do it”). By separating the specifics of deployments, DSC enables multiple environments to be serviced by single DSC implementations that via configuration data can target dev, test, and production environments appropriately.
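While DSC itself is PowerShell based, the core idea—declaring intent separately from execution—can be sketched in a few lines of Python (the setting names and the apply function here are purely illustrative, not DSC syntax):

```python
def reconcile(desired, actual, apply_fn):
    """Bring 'actual' settings in line with the declared 'desired' state.

    'desired' and 'actual' are dicts of setting-name -> value. The desired
    dict captures only intent ("what"); apply_fn performs the change ("how").
    """
    # Drift is every setting whose current value differs from the intent.
    drift = {k: v for k, v in desired.items() if actual.get(k) != v}
    for name, value in drift.items():
        apply_fn(name, value)
        actual[name] = value
    return drift
```

Running the same reconciliation repeatedly is harmless once the system matches the declared state, which is what makes this style of configuration management safe to automate.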
One of the challenges in a modern enterprise is understanding the impact of system changes from the installation or upgrade of an application on a system. To help you overcome that challenge, Microsoft has released the Attack Surface Analyzer (ASA), a free tool that can be deployed on a system before a change and then again after a change to analyze the alterations to various system properties as a result of the change.
Microsoft Security Baselines
A security baseline is a group of Microsoft-recommended configuration settings with an explanation of their security impact. There are over 3000 Group Policy settings for Windows 10, which does not include over 1800 browser settings. So of these 4800-plus settings, only some are security related, and choosing which to set can be a laborious process. Security baselines bring an expert-based consensus view to this task. Microsoft provides a security compliance toolkit to facilitate the application of Microsoft-recommended baselines for a system. The Microsoft Security Compliance Toolkit (SCT) is a set of tools that allows enterprise security administrators to download, analyze, test, edit, and store Microsoft-recommended security configuration baselines for Windows.
Using the toolkit, administrators can compare their current group policy objects (GPOs) with Microsoft-recommended GPO baselines or other baselines. You can also edit them, store them in GPO backup file format, and apply them broadly through Active Directory or individually through local policy. The Security Compliance Toolkit consists of specific baselines based on OS and two tools—the Policy Analyzer tool and the Local Group Policy Object (LGPO) tool.
For further information, see Microsoft Security Compliance Toolkit 1.0 (www.microsoft.com/en-us/download/details.aspx?id=55319).
Using ASA, developers can view changes in the attack surface resulting from the introduction of their code onto the Windows platform, and system administrators can assess the aggregate attack surface change by the installation of an application. Security auditors can use the tool to evaluate the risk of a particular piece of software installed on the Windows platform. Also, if ASA is deployed in a baseline mode before an incident, security incident responders can potentially use ASA to gain a better understanding of the state of a system’s security during an investigation.
Microsoft defines a group policy as “an infrastructure used to deliver and apply one or more desired configurations or policy settings to a set of targeted users and computers within an Active Directory environment. This infrastructure consists of a Group Policy engine and multiple client-side extensions (CSEs) responsible for writing specific policy settings on target client computers.” Introduced with the Windows 2000 operating system, group policies are a great way to manage and configure systems centrally in an Active Directory environment (Windows NT had policies, but technically not “group policies”). Group policies can also be used to manage users, making these policies valuable tools in any large environment.
Within the Windows environment, group policies can be used to refine, set, or modify a system’s Registry settings, auditing and security policies, user environments, logon/logoff scripts, and so on. Policy settings are stored in a group policy object (GPO) and are referenced internally by the OS using a globally unique identifier (GUID). A single policy can be linked to a single user, a group of users, a group of machines, or an entire organizational unit (OU), which makes updating common settings on large groups of users or systems much easier. Users and systems can have more than one GPO assigned and active, which can create conflicts between policies that must then be resolved at an attribute level. Group policies can also overwrite local policy settings.

Group policies should not be confused with local policies. Local policies are created and applied to a specific system (locally), are not user specific (you can’t have local policy X for user A and local policy Y for user B), and are overwritten by GPOs. Further confusing some administrators and users, policies can be applied at the local, site, domain, and OU levels. Policies are applied in hierarchical order—local, then site, then domain, and so on. This means settings in a local policy can be overridden or reversed by settings in the domain policy if there is a conflict between the two policies. If there is no conflict, the policy settings are aggregated.
Windows Local Security Policies
Open a command prompt as either administrator or a user with administrator privileges on a Windows system. Type the command secpol and press ENTER (this should bring up the Local Security Policy utility). Expand Account Policies on the left side of the Local Security Policy window (which should have a + next to it). Click Password Policy. Look in the right side of the Local Security Policy window. What is the minimum password length? What is the maximum password age in days? Now explore some of the policy settings—but be careful! Changes made to the local security policy can affect the functionality or usability of your system.
Creating GPOs is usually done through either the Group Policy Object Editor, shown in Figure 14.1, or the Group Policy Management Console (GPMC). The GPMC is a more powerful GUI-based tool that can summarize GPO settings; simplify security filtering settings; back up, clone, restore, and edit GPOs; and perform other tasks. After creating a GPO, administrators will associate it with the desired targets. After association, group policies operate on a pull model, meaning that at a semi-random interval, the Group Policy client will collect and apply any policies associated with the system and the currently logged-on user.
• Figure 14.1 Group Policy Object Editor
Microsoft group policies can provide many useful options, including the following:
Network location awareness Systems are now “aware” of which network they are connected to and can apply different GPOs as needed. For example, a system can have a very restrictive GPO when connected to a public network and a less restrictive GPO when connected to an internal, trusted network.
VPN compatibility As a side benefit of network location awareness, mobile users who connect through VPNs can receive a GPO update in the background after connecting to the corporate network via VPN.
Power management Power management settings can be configured using GPOs.
Device access blocking Policy settings have been added that allow administrators to restrict user access to USB drives, CD-RW drives, DVD-RW drives, and other removable media.
Location-based printing Users can be assigned to various printers based on their location. As mobile users move, their printer locations can be updated to the closest local printer.
In Windows, policies are applied in hierarchical order. Local policies get applied first, then site policies, then domain policies, and finally OU policies. If a setting from a later policy conflicts with a setting from an earlier policy, the setting from the later policy “wins” and is applied. Keep this in mind when building group policies.
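A simplified model of this “last writer wins” ordering (ignoring real-world features such as enforced links and blocked inheritance) might look like this in Python:

```python
def effective_policy(*policies):
    """Merge policies applied in order: local, site, domain, then OU.

    Settings from later policies override earlier ones on conflict;
    non-conflicting settings simply aggregate.
    """
    effective = {}
    for policy in policies:
        effective.update(policy)  # later dict wins on duplicate keys
    return effective
```

For example, a local policy requiring an 8-character password combined with a domain policy requiring 12 characters yields an effective minimum of 12, while a local-only screensaver-lock setting survives untouched.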
Although you do not have the advantage of a single manufacturer for all UNIX operating systems (like you do with Windows operating systems), the concepts behind securing different UNIX- or Linux-based operating systems are similar, regardless of whether the manufacturer is Red Hat or Sun Microsystems. Indeed, the overall tasks involved with hardening all operating systems are remarkably similar.
General UNIX baselining follows similar concepts as baselining for Windows OSs: disable unnecessary services, restrict permissions on files and directories, remove unnecessary software, apply patches, remove unnecessary users, and apply password guidelines. Some versions of UNIX provide GUI-based tools for these tasks, while others require administrators to edit configuration files manually. In most cases, anything that can be accomplished through a GUI can be accomplished from the command line or by manually editing configuration files.
Like Windows systems, UNIX systems are easiest to secure and baseline if they are providing a single service or performing a single function, such as acting as a Simple Mail Transfer Protocol (SMTP) server or web server. Prior to performing any software installations or baselining, the administrator should define the purpose of the system and identify all required capabilities and functions. One nice advantage of UNIX systems is that you typically have complete control over what does or does not get installed on the system. During the installation process, the administrator can select which services and applications are placed on the system, offering an opportunity to not install services and applications that will not be required. However, this assumes that the administrator knows and understands the purpose of this system, which is not always the case. In other cases, the function of the system itself may have changed.
Services on a UNIX system (called daemons) can be controlled through a number of different mechanisms. As the root user, an administrator can start and stop services manually from the command line or through a GUI tool. The OS can also stop and start services automatically through configuration files (usually contained in the /etc directory). (Note that UNIX systems vary a good deal in this regard, as some use a super-server process, such as inetd, while others have individual configuration files for each network service.) Unlike Windows, UNIX systems can also have different runlevels in which the system can be configured to bring up different services, depending on the runlevel selected.
Runlevels are used to describe the state of init (initialization) and what system services are operating in UNIX systems. For example, runlevel 0 is shutdown. Runlevel 1 is single-user mode (typically for administrative purposes). Runlevels 2 through 5 are user defined (that is, administrators can define what services are running at each level). Runlevel 6 is for reboot.
One of the “strengths” behind Linux is the ability of a sysadmin to fully control all of the features—the ultimate in customizable solutions. This can lead to leaner and faster processing, but it also can lead to security problems. Securing a Linux environment involves several types of operations: how the sysadmin operates, how the system is configured, and the intricacies of the Linux system itself.
Linux has several separate operating spaces, each with its own characteristics. The application space is where user applications exist and run. These are above the kernel and can be changed while operating by simply restarting the application. The kernel space is integral to the system and can only be changed by rebooting the hardware. Thus, updates to kernel processes require a reboot to finish and become active.
Securing Linux is in many ways like securing any other operating system. Issues such as securing the services, keeping things up to date, and enforcing policies are all the same objectives regardless of the type or version of OS. The differences occur in how one achieves these objectives. Using passwords as an example, there is no centralized method like Active Directory and group policies. Instead, these functions are controlled granularly using commands on the system. It is possible to manage passwords to the same degree as through unified systems; it just takes a bit more work. The same goes for controlling access to administrative or root accounts.

On a running UNIX system, you can see which processes, applications, and services are running by using the process status, or ps, command, as shown in Figure 14.2. To stop a running service, you can identify the service by its unique process identifier (PID) and then use the kill command to stop the service. For example, if you wanted to stop the bluetooth-applet service in Figure 14.2, you would use the command kill 2443. To prevent this service from starting again when the system is rebooted, you would have to modify the appropriate runlevels to remove this service, as shown in Figure 14.2, or modify the configuration files that control this service.
• Figure 14.2 The ps command run on a Fedora system
Linux is built around the concept of a file—everything is a file. Regular files are files, as are directories; devices are files; I/O locations are files; and conduits between programs, called pipes, are files. Making everything addressable as a file makes permissions easier. Users are not files; they are subjects in the subject-object model. Subjects act upon objects according to permissions. Users exist both singly and in groups, and permissions are layered between the owner of the object, groups, and individual subjects (users). In Linux, a group is a name for a list of users; this allows for shorter access control entry (ACE) lists on objects because groups are checked first. When a subject attempts to act upon an object, the security kernel examines the object’s access control entries until it finds a match. If no match is found, the action is not allowed.
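A simplified, hypothetical model of this first-match checking with default deny might look like the following (real Linux permission checks are considerably more involved; the subjects and actions here are invented for illustration):

```python
def is_action_allowed(user, user_groups, action, acl):
    """Scan an object's access control entries in order; the first entry
    matching the user (or one of the user's groups) decides the outcome.
    If no entry matches, deny by default."""
    for subject, allowed_actions in acl:
        if subject == user or subject in user_groups:
            return action in allowed_actions
    return False
```

Note that ordering matters in this model: a user who matches an early group entry is judged by that entry even if a later, more permissive entry also names them.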
Permissions on files are expressed in bit patterns, as illustrated in Figures 14.3 and 14.4. Permissions are modified using the chmod command and indicating a three-digit number that translates to the appropriate set of read, write, and execute permissions for the item. Figure 14.3 illustrates how the permissions are displayed during a file listing as well as how the relative positions relate to the owner, group, and others. Figure 14.4 illustrates the decoding pattern of the bit structure.
• Figure 14.3 Linux permissions listing
• Figure 14.4 Linux permission bit sequence
The common patterns frequently used in Linux systems are illustrated in Table 14.1.
Table 14.1 Common Linux File Permissions
For applications in the user space on a Linux box, setting the correct permissions is extremely important. These permissions are what protect configuration and other settings that enable or disable a lot of functionality—and could, if set erroneously, allow attackers to perform a wide range of attacks, including installing malware that can watch other users. For these reasons and more, Linux can be an awesome system, with great performance and capability. The downside is that it requires significant expertise to do these things securely in today’s computing environment.
Directories also use the same nomenclature as files for permissions, but with minor differences. An r indicates that the contents can be read. A w indicates that the contents can be written, and x allows a directory to be entered. Both r and w have no effect without x being set. A setting of 777 indicates that anyone can list and create/delete files in the directory. 755 gives the owner full access, while others may only list the files. 700 restricts access to only the owner.
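As a quick sanity check of the bit decoding described above, this small Python helper (a teaching sketch, not a replacement for chmod) translates a three-digit octal mode into the familiar rwx listing notation:

```python
def mode_to_string(mode):
    """Translate a three-digit octal mode (e.g. 754) into rwx notation.

    Each digit covers one scope (owner, group, others); within a digit,
    bit 4 = read, bit 2 = write, bit 1 = execute.
    """
    result = ""
    for digit in str(mode):
        d = int(digit)
        for i, flag in enumerate("rwx"):
            # 4 >> i yields the masks 4, 2, 1 for r, w, x respectively.
            result += flag if d & (4 >> i) else "-"
    return result
```

So 754 decodes to rwxr-xr--: full access for the owner, read and execute for the group, and read-only for everyone else.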
There are times when a user needs more permissions than their account holds, such as needing root permission to perform a task. Rather than logging in as root, and thus losing their identity in logs and such, the user can use the su (substitute user) command to assume root privileges, provided they have the root password.
Endpoint protection is the concept of extending the security perimeter to the devices that are connecting to the network. A variety of endpoint protection solutions can be employed, including antivirus/anti-malware solutions, endpoint detection and response solutions, data loss prevention solutions, and firewalls. Host-based intrusion detection and prevention solutions can also be deployed at endpoints. Not all endpoints are the same with respect to either capability or the risks from attack, and endpoint solutions should be tailored to take those elements into account.
Antivirus (AV) products attempt to identify, neutralize, or remove malicious programs, macros, and files. These products were initially designed to detect and remove computer viruses, though many of the antivirus products are now bundled with additional security products and features. Most current antivirus software packages provide protection against a wide range of threats, including viruses, worms, trojans, and other malware. Use of an up-to-date antivirus package is essential in the current threat environment.
Although antivirus products have had over two decades to refine their capabilities, the purpose of the antivirus products remains the same: to detect and eliminate computer viruses and malware. Most antivirus products combine the following approaches when scanning for viruses:
Signature-based scanning Much like an intrusion detection system (IDS), the antivirus products scan programs, files, macros, e-mails, and other data for known worms, viruses, and malware. The antivirus product contains a virus dictionary with thousands of known virus signatures that must be frequently updated, as new viruses are discovered daily. This approach will catch known viruses but is limited by the virus dictionary—what it does not know about it cannot catch.
Heuristic scanning (or analysis) Heuristic scanning does not rely on a virus dictionary. Instead, it looks for suspicious behavior—anything that does not fit into a “normal” pattern of behavior for the operating system (OS) and applications running on the system being protected.
As signature-based scanning is a familiar concept, let’s examine heuristic scanning in more detail. Heuristic scanning typically looks for commands or instructions that are not normally found in application programs, such as attempts to access a reserved memory register. Most antivirus products use either a weight-based system or a rule-based system in their heuristic scanning (more effective products use a combination of both techniques). A weight-based system rates every suspicious behavior based on the degree of threat associated with that behavior. If the set threshold is passed based on a single behavior or a combination of behaviors, the antivirus product will treat the process, application, macro, and so on that is performing the behavior(s) as a threat to the system. A rule-based system compares activity to a set of rules meant to detect and identify malicious software. If part of the software matches a rule, or if a process, application, macro, and so on performs a behavior that matches a rule, the antivirus software will treat that as a threat to the local system.
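The weight-based approach just described can be sketched as follows. The behavior names, weights, and threshold here are hypothetical illustrations, not values from any real antivirus product:

```python
# Illustrative sketch of a weight-based heuristic scanner.
# Behavior names, weights, and the threshold are all hypothetical.
SUSPICION_WEIGHTS = {
    "writes_to_reserved_memory": 50,
    "modifies_boot_sector": 80,
    "self_replicating_write": 60,
    "hooks_keyboard_input": 30,
}
THRESHOLD = 100  # assumed cutoff; real products tune this carefully

def assess(observed_behaviors):
    """Sum the weights of observed behaviors; flag a threat if the
    combined score meets or passes the threshold."""
    score = sum(SUSPICION_WEIGHTS.get(b, 0) for b in observed_behaviors)
    return score, score >= THRESHOLD

print(assess(["hooks_keyboard_input", "modifies_boot_sector"]))  # (110, True)
```

A rule-based engine would replace the numeric sum with pattern matches against a ruleset; more effective products combine both, as noted above.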
Some heuristic products are very advanced and contain capabilities for examining memory usage and addressing, a parser for examining executable code, a logic flow analyzer, and a disassembler/emulator so they can “guess” what the code is designed to do and whether or not it is malicious.
Heuristic scanning is a method of detecting potentially malicious or “virus-like” behavior by examining what a program or section of code does. Anything that is “suspicious” or potentially “malicious” is closely examined to determine whether or not it is a threat to the system. Using heuristic scanning, an antivirus product attempts to identify new viruses or heavily modified versions of existing viruses before they can damage your system.
As with IDS/IPS products, encryption and obfuscation pose a problem for antivirus products: anything that cannot be read cannot be matched against current virus dictionaries or activity patterns. To combat the use of encryption in malware and viruses, many heuristic scanners look for encryption and decryption loops. As malware is usually designed to run alone and unattended, if it uses encryption, it must contain all the instructions to encrypt and decrypt itself, as needed. Heuristic scanners look for instructions such as the initialization of a pointer with a valid memory address, manipulation of a counter, or a branch condition based on a counter value. While these actions don’t always indicate the presence of an encryption/decryption loop, if the heuristic engine can find a loop, it might be able to decrypt the software in a protected memory space, such as an emulator, and evaluate the software in more detail. Many viruses share common encryption/decryption routines, a fact that helps antivirus developers.
Current antivirus products are highly configurable, and most offerings will have the following capabilities:
Automated updates Perhaps the most important feature of a good antivirus solution is its ability to keep itself up to date by automatically downloading the latest virus signatures on a frequent basis. This usually requires that the system be connected to the Internet in some fashion and that updates be performed on a daily (or more frequent) basis.
Automated scanning Most antivirus products allow for the scheduling of automated scans so that you can designate when the antivirus product will examine the local system for infected files. These automated scans can typically be scheduled for specific days and times, and the scanning parameters can be configured to specify what drives, directories, and types of files are scanned.
Media scanning Removable media is still a common method for virus and malware propagation, and most antivirus products can be configured to automatically scan optical media, USB drives, memory sticks, or any other types of removable media as soon as they are connected to or accessed by the local system.
Manual scanning Many antivirus products allow the user to scan drives, files, or directories (folders) “on demand.”
E-mail scanning E-mail is still a major method of virus and malware propagation. Many antivirus products give users the ability to scan both incoming and outgoing messages as well as any attachments.
Resolution When the antivirus product detects an infected file or application, it can typically perform one of several actions. The antivirus product may quarantine the file, making it inaccessible. It may try to repair the file by removing the infection or offending code, or it may delete the infected file. Most antivirus products allow the user to specify the desired action, and some allow for an escalation in actions, such as cleaning the infected file if possible and quarantining the file if it cannot be cleaned.
The intentions of computer virus writers have changed over the years—from simply wanting to spread a virus in order to be noticed, to creating stealthy botnets as a criminal activity. One method of remaining hidden is to produce viruses that can morph to lower their detection rates by standard antivirus programs. The number of variants for some viruses has increased from less than 10 to greater than 10,000. This explosion in signatures has created two issues. First, users must constantly (sometimes more than daily) update their signature files. Second, and more important, detection methods must change as the number of signatures becomes too large to scan quickly. For end users, the bottom line is simple: update signatures automatically, and at least daily.
While the installation of a good antivirus product is still considered a necessary best practice, there is growing concern about the effectiveness of antivirus products against developing threats. Early viruses often exhibited destructive behaviors; they were poorly written, modified files indiscriminately, and were less concerned with hiding their presence than with propagating. We are now seeing the emergence of viruses and malware created by professionals, sometimes financed by criminal organizations or governments, that go to great lengths to hide their presence. These viruses and malware are often used to steal sensitive information or turn the infected PC into part of a larger botnet for use in spamming or attack operations.
Antivirus is an essential security application on all platforms. There are numerous compliance schemes that mandate antivirus deployment, including the Payment Card Industry Data Security Standard (PCI DSS) and the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) standards.
In the early days of PC use, threats were limited: most home users were not connected to the Internet 24/7 through broadband connections, and the most common threat was a virus passed from computer to computer via an infected floppy disk (much like the medical definition, a computer virus is something that can infect the host and replicate itself). But things have changed dramatically since those early days, and current threats pose a much greater risk than ever before. Automated probes from botnets and worms are not the only threats roaming the Internet—there are viruses and malware spread by e-mail, phishing, infected websites that execute code on your system when you visit them, adware, spyware, and so on. Anti-malware is the general name for products designed to protect your machine from malicious software or malware. Today, most anti-malware solutions are combined with antivirus solutions into a single product. Fortunately, as the threats increase in complexity and capability, so do the products designed to stop them. One of the most dangerous forms of malware is ransomware; it spreads quickly, encrypting a user’s files and locking them until a ransom is paid. For more details on anti-malware products, reread the preceding “Antivirus” section and realize that malware is a different threat than a virus, but the defenses are the same.
Endpoint detection and response (EDR) solutions are integrated solutions that combine individual endpoint security functions into a complete package. Having a packaged solution makes updating easier, and frequently these products are designed to integrate into an enterprise-level solution with a centralized management platform. Some of the common EDR components include antivirus, anti-malware, software patching, firewall, and DLP solutions. Unified endpoint management (UEM) is a newer security model that focuses on managing and securing enterprise devices such as desktops, laptops, and smartphones from a single location.
Data loss prevention (DLP) solutions serve to prevent sensitive data from leaving the network without notice. What better place to check than at endpoints? Well, it is important to understand what an endpoint is. For e-mail, the endpoint really is the server, and this offers a scalable location against multiple mailboxes. Applying DLP across endpoints to chase items such as USB downloads of data can be an exercise fraught with heavy maintenance of DLP rulesets, heavyweight clients that affect endpoint performance, and a lack of discrimination that can cause productivity issues. This has led to endpoint DLP monitoring, where file activity is reported to centralized systems, and to specialized DLP offerings such as the content DLP being rolled out by Microsoft across the Microsoft 365 environment. These endpoint solutions do not provide complete or comprehensive coverage, but taken together they can achieve many of the objectives with less cost and complexity.
Next-generation firewalls (NGFWs) act by inspecting the actual traffic crossing the firewall—not just looking at the source and destination addresses and ports but also at the actual content being sent. This makes next-generation firewalls a potent player in the hunt for malicious content on the way in and company secrets on the way out. As with all of these rule-driven platforms, the challenge is in maintaining appropriate rulesets that catch the desired bad traffic.
Host-based intrusion detection systems (HIDSs) act to detect undesired elements in network traffic to and from the host. Because the intrusion detection system is tied to the host, it can be very specific with respect to threats to the host OS and ignore those that would have no effect. Being deployed at a specific endpoint, it can be tuned to the specifics of the endpoint and endpoint applications, providing greater levels of specific detection. Intrusion detection systems were covered in detail in Chapter 13.
A host-based intrusion prevention system (HIPS) is a HIDS with additional components that permit it to respond automatically to a threat condition. The response can range from simply dropping a packet to killing a connection. Intrusion prevention systems were covered in detail in Chapter 13.
Personal firewalls, or host-based firewalls, are host-based protective mechanisms that monitor and control traffic passing in to and out of a single system. Designed for the end user, software firewalls often have a configurable security policy that allows the user to determine which traffic is “good” and is allowed to pass and which traffic is “bad” and is blocked. The decision for good versus bad is based on the addresses being passed, both IP address and port combinations. Software firewalls are extremely commonplace—so much so that most modern OSs come with some type of personal firewall included. Having the firewall on the host OS provides the ability to tune the firewall to the usage pattern of the specific endpoint.
Linux-based OSs have had built-in software-based firewalls for a number of years, including TCP Wrapper, ipchains, and iptables (see Figure 14.5).
• Figure 14.5 A Linux firewall
TCP Wrapper is a simple program that limits inbound network connections based on port number, domain, or IP address and is managed with two text files called hosts.allow and hosts.deny. If the inbound connection is coming from a trusted IP address and destined for a port to which it is allowed to connect, then the connection is allowed.
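TCP Wrapper's decision order (consult hosts.allow first, then hosts.deny, and permit anything matched by neither) can be modeled roughly as follows. This sketch ignores the real files' pattern syntax and simply matches service/client pairs; the entries shown are illustrative:

```python
# Simplified model of TCP Wrapper's decision order: hosts.allow is
# consulted first, then hosts.deny; no match in either means allow.
# Entries pair a service name with a client address; "ALL" is a wildcard.
hosts_allow = {("sshd", "192.168.1.10")}
hosts_deny = {("sshd", "ALL"), ("ALL", "203.0.113.7")}

def wrapper_decision(service, client_ip):
    if (service, client_ip) in hosts_allow:
        return "allow"   # trusted address on an allowed port/service
    if ((service, "ALL") in hosts_deny
            or ("ALL", client_ip) in hosts_deny
            or (service, client_ip) in hosts_deny):
        return "deny"
    return "allow"       # no rule matched: TCP Wrapper permits by default

print(wrapper_decision("sshd", "192.168.1.10"))  # allow
print(wrapper_decision("sshd", "10.0.0.5"))      # deny
```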
Ipchains is a more advanced, rule-based software firewall that allows for traffic filtering, Network Address Translation (NAT), and redirection. Three configurable “chains” are used for handling network traffic: input, output, and forward. The input chain contains rules for traffic that is coming into the local system. The output chain contains rules for traffic that is leaving the local system. The forward chain contains rules for traffic that was received by the local system but is not destined for the local system. Iptables is the latest evolution of ipchains. Iptables uses the same three chains for policy rules and traffic handling as ipchains, but with iptables each packet is processed only by the appropriate chain. Under ipchains, each packet passes through all three chains for processing. With iptables, incoming packets are processed only by the input chain, and packets leaving the system are processed only by the output chain. This allows for more granular control of network traffic and enhances performance.
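The one-chain-per-packet dispatch that distinguishes iptables from ipchains can be sketched like this; the local address is an illustrative assumption:

```python
# Sketch of iptables-style chain dispatch: each packet traverses exactly
# one chain, chosen by where it is headed relative to the local host.
LOCAL_ADDRESSES = {"10.0.0.5"}  # hypothetical address of this host

def select_chain(src, dst):
    if dst in LOCAL_ADDRESSES:
        return "INPUT"    # arriving for the local system
    if src in LOCAL_ADDRESSES:
        return "OUTPUT"   # leaving the local system
    return "FORWARD"      # routed traffic merely passing through

print(select_chain("192.0.2.1", "10.0.0.5"))  # INPUT
```

Under ipchains, by contrast, every packet would be run through all three rule chains regardless of its direction.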
In addition to the “free” firewalls that come bundled with OSs, many commercial personal firewall packages are available. Many commercial software firewalls limit inbound and outbound network traffic, block pop-ups, detect adware, block cookies and malicious processes, and scan instant messenger traffic. While you can still purchase or even download a free software-based personal firewall, most commercial vendors are bundling the firewall functionality with additional capabilities such as antivirus and anti-spyware.
Microsoft Windows has had a personal software firewall since Windows XP SP2. Today, Windows Firewall is called Windows Defender Firewall (see Figure 14.6). It is enabled by default and provides warnings when disabled. Windows Defender Firewall is fairly configurable; it can be set up to block all traffic, to make exceptions for traffic you want to allow, and to log rejected traffic for later analysis.
In Windows 10, Microsoft modified Windows Defender Firewall to make it more capable and configurable. More options were added to allow for more granular control of network traffic as well as the ability to detect when certain components are not behaving as expected. For example, if your Microsoft Outlook client suddenly attempts to connect to a remote web server, Windows Defender Firewall can detect this as a deviation from normal behavior and block the unwanted traffic.
• Figure 14.6 Windows Defender Firewall is enabled by default
Applications can be controlled at the OS level when they are started via blacklisting or whitelisting. Blacklisting is essentially noting which applications should not be allowed to run on the machine. This is basically a permanent “ignore” or “call block” type of capability. Whitelisting is the exact opposite: it consists of a list of allowed applications. Each of these approaches has advantages and disadvantages. Blacklisting is difficult to use against dynamic threats, as the identification of a specific application can easily be avoided through minor changes. Whitelisting is easier to employ from the aspect of the identification of applications that are allowed to run—hash values can be used to ensure the executables are not corrupted. The challenge in whitelisting is the number of potential applications that are run on a typical machine. For a single-purpose machine, such as a database server, whitelisting can be relatively easy to employ. For multipurpose machines, it can be more complicated.
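Hash-based whitelisting as described above might look like the following sketch, using SHA-256 from Python's standard library. The "binary" here is a stand-in byte string rather than a real executable:

```python
import hashlib

# Sketch of hash-based application whitelisting: only executables whose
# SHA-256 digest appears in the approved set may run. The digest is
# computed inline for the demo rather than taken from a real binary.
approved_binary = b"\x7fELF...trusted-backup-agent"  # stand-in file bytes
WHITELIST = {hashlib.sha256(approved_binary).hexdigest()}

def may_execute(file_bytes):
    """Allow execution only if the file's digest is whitelisted; a single
    changed byte (corruption or tampering) changes the digest and blocks it."""
    return hashlib.sha256(file_bytes).hexdigest() in WHITELIST

print(may_execute(approved_binary))    # True
print(may_execute(b"tampered bytes"))  # False
```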
Microsoft has two mechanisms that are part of the OS to control which users can use which applications:
Software restriction policies Employed via group policies, these allow significant control over applications, scripts, and executable files. The primary mode is by machine and not by user account.
User account level control Enforced via AppLocker, which is a service that allows granular control over which users can execute which programs. Through the use of rules, an enterprise can exert significant control over who can access and use installed software.
On a Linux platform, similar capabilities are offered from third-party vendor applications.
AppLocker is a component of the Enterprise editions of Windows that enables administrators to enforce which applications are allowed to run via a set of predefined rules. AppLocker is an adjunct to software restriction policies (SRPs). SRPs required significant administration on a machine-by-machine basis and were difficult to administer across an enterprise. AppLocker was designed so the rules can be distributed and enforced by Group Policy Object (GPO). They both act to prevent the running of unauthorized software and malware on a machine, but AppLocker is significantly easier to administer. Figure 14.7 shows the AppLocker interface. Some of the features that are enabled via AppLocker are restrictions by user and the ability to run in an audit mode, where results are logged but not enforced, allowing settings to be tested before use.
• Figure 14.7 Microsoft AppLocker interface
Hardware, in the form of servers, workstations, and even mobile devices, can represent a weakness or vulnerability in the security system associated with an enterprise. While hardware can be easily replaced if lost or stolen, the information that is contained by the devices complicates the security picture. Data or information can be safeguarded from loss by backups, but this does little in the way of protecting it from disclosure to an unauthorized party. There are software measures that can assist in the form of encryption, but these also have drawbacks in the form of scalability and key distribution.
Certain hardware protection mechanisms should be employed to safeguard information in servers, workstations, and mobile devices. Cable locks can be employed on mobile devices to prevent their theft. Locking cabinets and safes can be used to secure portable media, USB drives, and CDs/DVDs. Physical security is covered in more detail in Chapter 8.
Physical security is an essential element of a security plan. Unauthorized access to hardware and networking components can make many security controls ineffective.
While considering the baseline security of systems, you must consider the role the network connection plays in the overall security profile. The tremendous growth of the Internet and the affordability of multiple PCs and Ethernet networking have resulted in almost every computer being attached to some kind of network, and once computers are attached to a network, they are open to access from any other user on that network. Proper controls over network access must be established on computers by controlling the services that are running and the ports that are opened for network access. In addition to servers and workstations, however, network devices must also be examined: routers, switches, and modems, as well as various other components.
These network devices should be configured with very strict parameters to maintain network security. Like normal computer OSs that need to be patched and updated, the software that runs network infrastructure components needs to be updated regularly. Finally, an outer layer of security should be added by implementing appropriate firewall rules and router ACLs.
Maintaining current vendor patch levels for your software is one of the most important things you can do to maintain security. This is also true for the infrastructure that runs the network. While some equipment is unmanaged and typically has no network presence and few security risks, any managed equipment that is responding on network ports will have some software or firmware controlling it. This software or firmware needs to be updated on a regular basis.
The most common device that connects people to the Internet is the network router. Dozens of brands of routers are available on the market, but Cisco Systems products dominate. The popular Cisco Internetwork Operating System (IOS) runs on more than 70 of Cisco’s devices and is installed countless times at countless locations. Its popularity has fueled research into vulnerabilities in the code, and over the past few years quite a few vulnerabilities have been reported. These vulnerabilities can take many forms because routers send and receive several different kinds of traffic, from the standard Telnet remote terminal, to routing information in the form of Routing Information Protocol (RIP) or Open Shortest Path First (OSPF) packets, to Simple Network Management Protocol (SNMP) packets. This highlights the need to update the Cisco IOS software on a regular basis.
Although we focus on Cisco in our discussion, it’s important to note that every network device, regardless of the manufacturer, needs to be maintained and patched to remain secure.
Cisco IOS also runs on many of its Ethernet switching products. Like routers, these have capabilities for receiving and processing protocols such as Telnet and SNMP. Smaller network components do not usually run large software suites and typically have smaller software loaded on internal nonvolatile RAM (NVRAM). While the update process for this kind of software is typically called a firmware update, this does not change the security implications of keeping it up to date. In the case of a corporate network with several devices, someone must take ownership of updating the devices, and updates must be performed regularly according to security and administration policies.
As important as it is to keep software up to date, properly configuring network devices is equally, if not more, important. Many network devices, such as routers and switches, now have advanced remote management capabilities, with multiple open ports accepting network connections. Proper configuration is necessary to keep these devices secure. Choosing a good password is very important in maintaining external and internal security, and closing or limiting access to any open ports is also a good step for securing the devices. On the more advanced devices, you must carefully consider what services the device is running, just as with a computer. Here are some general steps to take when securing networking devices:
Limit access to only those who need it. If your networking device allows management via a web interface, SSH, or any other method, limit who can connect to those services. Many networking devices allow you to specify which IP addresses are allowed to connect to those management services.
Choose good passwords. Always change default passwords and follow good password-selection guidelines. If the device supports encryption, ensure passwords are stored in encrypted format on the device.
Password-protect the console and remote access. If the device supports password protection, ensure that all local and remote access capabilities are password protected.
Turn off unnecessary services. If your networking equipment supports Telnet but your organization doesn’t need it, turn that service off. It’s always a good idea to disable or remove unused services. Your device may also support the use of ACLs to limit access to services such as Telnet and SSH on the device itself.
Change the SNMP community strings. SNMP is widely used to manage networking equipment and typically allows a “public” string, which can usually only read information from a device, and a “private” string, which can often read and write to a device’s configuration. Some manufacturers use default or well-known strings (such as “public” for the public string). Therefore, you should always change both the public and private strings if you are using SNMP.
The use of “public” as an SNMP community string is an extremely well-known vulnerability. Any system using an SNMP community string of “public” should have the string changed immediately. The use of older versions of SNMP as well as misconfigurations of SNMP can present a large security hole in a network.
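The first step above, limiting which source addresses may reach management services, can be sketched with Python's standard ipaddress module. The administrative subnet shown is an illustrative assumption:

```python
import ipaddress

# Sketch of restricting management-interface access to an administrative
# subnet; the subnet and client addresses are illustrative assumptions.
MGMT_NETWORK = ipaddress.ip_network("10.10.0.0/24")

def may_manage(client_ip):
    """Permit a management connection only from the admin subnet."""
    return ipaddress.ip_address(client_ip) in MGMT_NETWORK

print(may_manage("10.10.0.25"))   # True
print(may_manage("203.0.113.9"))  # False
```

Real devices implement this same membership test with access control lists on the management interface rather than application code.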
Some network security devices will have “management interfaces” that allow for remote management of the devices themselves. Often seen on firewalls, routers, and switches, a management interface allows connections to the device’s management application, an SSH service, or even a web-based configuration GUI, which are not allowed on any other interface. Due to this high level of access, management interfaces and management applications must be secured against unauthorized access. They should not be connected to public networks (the Internet) and screened subnets (formerly DMZ). Where possible, access to management interfaces and applications should be restricted within an organization so employees without the proper access rights and privileges cannot even connect to those interfaces and applications.
A virtual LAN, or VLAN, is a group of hosts that communicate as if they were on the same broadcast domain. A VLAN is a logical construct that can be used to help control broadcast domains, manage traffic flow, and restrict traffic between organizations, divisions, and so on. Layer 2 switches, by definition, will not bridge IP traffic across VLANs, which gives administrators the ability to segment traffic quite effectively. For example, if multiple departments are connected to the same physical switch, VLANs can be used to segment the traffic such that one department does not see the broadcast traffic from the other departments. By controlling the members of a VLAN, administrators can logically separate network traffic throughout the organization.
Network segmentation is the use of network addressing schemes to restrict machine-to-machine communication within specific boundaries. This mechanism uses the network structure and protocols themselves to accomplish a limitation of communication. This mechanism can restrict outside attackers from accessing machines, even if they have stolen credentials, because the network will not connect the attacker’s machine to the target machine.
IPv4 (Internet Protocol version 4) is the de facto communication standard in use on almost every network around the planet. Unfortunately, IPv4 contains some inherent shortcomings and vulnerabilities. In an attempt to address these issues, the Internet Engineering Task Force (IETF) launched an effort to update or replace IPv4; the result is IPv6. Using a new packet format and much larger address space, IPv6 is designed to speed up packet processing by routers and supply approximately 3.4 × 10^38 possible addresses (IPv4 uses only 32 bits for addressing; IPv6 uses 128 bits). Additionally, IPv6 has security “built in,” with mandatory support for network layer security: IPSec, although widely adopted as an optional extension to IPv4, is mandatory in IPv6. The issue now is one of conversion. IPv4 and IPv6 networks cannot talk directly to each other and must rely on some type of gateway. Many operating systems and devices currently support dual IP stacks and can run both IPv4 and IPv6. While adoption of IPv6 is proceeding, it is moving slowly and has yet to gain a significant foothold.
If your network is not using IPv6, you should disable IPv6 on all clients and servers to prevent malicious traffic from using this protocol to bypass security devices. This follows the principle of “if you are not using something, disable it.”
Perhaps as important as OS and network hardening is application hardening—securing an application against local and Internet-based attacks. Hardening applications is fairly similar to hardening operating systems—you remove the functions or components you don’t need, restrict access where you can, and make sure the application is kept up to date with patches. In most cases, the last step in that list is the most important for maintaining application security. After all, applications must be accessible to users; otherwise, they serve no purpose. As many problems with applications tend to be buffer overflows in legitimate user input fields, patching the application is often the only way to secure it from attack.
To find out what services are open on a given host or network device, many administrators will use a tool called a port scanner. A port scanner is designed to probe remote systems for open TCP and UDP services. Nmap is a very popular (and free) port scanner (see https://nmap.org).
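A minimal TCP connect scan, the simplest of the techniques a tool like Nmap automates, can be sketched with Python's socket module. Only scan hosts you are authorized to probe:

```python
import socket

# Minimal TCP connect scan: a completed three-way handshake means the
# port is open. Real scanners like Nmap add SYN scans, UDP probes,
# service fingerprinting, timing controls, and much more.
def scan(host, ports, timeout=0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                open_ports.append(port)
    return open_ports
```

For example, `scan("127.0.0.1", [22, 80, 443])` would report which of those services are listening on the local machine.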
As with operating systems, applications (particularly those providing public services such as web servers and mail servers) will have recommended security and functionality settings. In some cases, vendors will provide those recommended settings, and, in other cases, an outside organization such as NSA, ISSA, or SANS will provide recommended configurations for popular applications. Many large organizations will develop their own application configuration baseline—a list of settings, tweaks, and modifications that creates a functional and hopefully secure application for use within the organization. Developing an application baseline and using it any time that application is deployed within the organization helps to ensure a consistent (and hopefully secure) configuration across the organization.
As obvious as this seems, application patches are most likely going to come from the vendor that sells the application. After all, who else has access to the source code? In some cases, such as with Microsoft’s IIS, this is the same company that sold the OS that the application runs on. In other cases, such as Apache, the vendor is OS independent and provides an application with versions for many different OSs.
Application patches are likely to come in three varieties: hotfixes, patches, and upgrades. As described for OSs earlier in the chapter, hotfixes are usually small sections of code designed to fix a specific problem. For example, a hotfix may address a buffer overflow in the login routine for an application. Patches are usually collections of fixes, tend to be much larger, and are usually released on a periodic basis or whenever enough problems have been addressed to warrant a patch release. Upgrades are another popular method of patching applications, and they tend to be presented with a more positive spin than patches. Even the term upgrade has a positive connotation—you are moving up to a better, more functional, and more secure application. For this reason, many vendors release “upgrades” that consist mainly of fixes rather than new or enhanced functionality.
Some application “patches” contain new or enhanced functions, and some change user-defined settings back to defaults during installation of the patch. If you are deploying an application patch across a large group of users, it is important to understand exactly what that application patch really does. Patches should first be tested in a nonproduction environment before deployment to determine exactly how they affect the system and the network it is connected to.
In the early days of network computing, things were easy—fewer applications existed, vendor patches came out annually or quarterly, and access was restricted to authorized individuals. Updates were few and easy to handle. Now application and OS updates are pushed constantly as vendors struggle to provide new capabilities, fix problems, and address vulnerabilities. Microsoft created “Patch Tuesday” in an effort to condense the update cycle and reduce the effort required to maintain its products and has now gone to continuous patching of its newest OS. As the number of patches continues to rise, many organizations struggle to keep up with patches—which patches should be applied immediately, which are compatible with the current configuration, which will not affect current business operations, and so on. To help cope with this flood of patches, many organizations have adopted patch management, the process of planning, testing, and deploying patches in a controlled manner.
Patch management is a disciplined approach to the acquisition, testing, and implementation of OS and application patches and requires a fair amount of resources to implement properly. To implement patch management effectively, you must first have a good inventory of the software used in your environment, including all OSs and applications. Then you must set up a process to monitor for updates to those software packages. Many vendors provide the ability to update their products automatically or to automatically check for updates and inform the user when updates are available.
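As an illustration, monitoring for updates can start with a simple inventory check like the following sketch, which compares installed software versions against the latest versions reported by vendor feeds. The package names and versions here are hypothetical examples:

```python
# Sketch: a minimal software inventory used to flag packages that need
# updates. All names and versions are hypothetical examples.

installed = {
    "openssl": "3.0.2",
    "nginx": "1.22.0",
    "postgresql": "14.5",
}

# Latest versions as reported by (hypothetical) vendor update feeds.
latest = {
    "openssl": "3.0.8",
    "nginx": "1.22.0",
    "postgresql": "14.7",
}

def outdated(installed, latest):
    """Return packages whose installed version differs from the latest."""
    return sorted(
        name for name, version in installed.items()
        if latest.get(name) and latest[name] != version
    )

print(outdated(installed, latest))  # openssl and postgresql need updates
```

A real deployment would populate both dictionaries automatically, from the software inventory on one side and vendor notification feeds on the other.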
Keeping track of patch availability is merely the first step; in many environments, patches must be analyzed and tested. Does the patch apply to the software you are running? Does the patch address a vulnerability or critical issue that must be fixed immediately? What is the impact of applying that patch or group of patches? Will it break something else if you apply this patch? To address these issues, it is recommended that you use development or test platforms, where you can carefully analyze and test patches before placing them into a production environment. Although patches are generally “good,” they are not always exhaustively tested; some have been known to “break” other products or functions within the product being patched, and others have introduced new vulnerabilities while attempting to address an existing vulnerability. The extent of analysis and testing varies widely from organization to organization. Testing and analysis will also vary depending on the application or OS and the extent of the patch.
Once a patch has been analyzed and tested, administrators have to determine when to apply the patch. Because many patches require a restart of applications or services or even a reboot of the entire system, most operational environments apply patches only at specific times, to reduce downtime and possible impact and to ensure administrators are available if something goes wrong. Many organizations will also have a rollback plan that allows them to recover the systems back to a known-good configuration prior to the patch, in case the patch has unexpected or undesirable effects. Some organizations require extensive coordination and approval of patches prior to implementation, and some institute “lockout” dates where no patching or system changes (with few exceptions) can be made, to ensure business operations are not disrupted. For example, an e-commerce site might have a lockout between the Thanksgiving and Christmas holidays to ensure the site is always available to holiday shoppers.
Patching of production systems brings risk through change, and that risk should be mitigated by following the enterprise change management process. Change management is covered in detail in Chapter 21.
With any environment, but especially with larger ones, documenting and maintaining the update status of every desktop and server in the organization can be a challenge. However, with a disciplined approach, training, policies, and procedures, even the largest environments can be managed. To assist in their patch-management efforts, many organizations use a patch-management product that automates many of the mundane and manpower-intensive tasks associated with patch management. For example, many patch-management products provide the following:
Ability to inventory applications and operating systems in use
Notification of patches that apply to your environment
Periodic or continual scanning of systems to validate patch status and identify missing patches
Ability to select which patches to apply and to which systems to apply them
Ability to push patches to systems on an on-demand or scheduled basis
Ability to report patch success or failure
Ability to report patch status on any or all systems in the environment
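The reporting capabilities in the list above can be sketched in a few lines; the system names, patch identifiers, and statuses below are hypothetical:

```python
# Sketch: summarizing patch deployment status across systems, as a
# patch-management product might. All names and IDs are hypothetical.

from collections import Counter

deployments = [
    {"system": "web01", "patch": "KB500101", "status": "success"},
    {"system": "web02", "patch": "KB500101", "status": "failed"},
    {"system": "db01",  "patch": "KB500101", "status": "success"},
    {"system": "db01",  "patch": "KB500102", "status": "missing"},
]

def status_summary(deployments):
    """Count deployments by status, for a high-level report."""
    return Counter(d["status"] for d in deployments)

def systems_needing_attention(deployments):
    """Systems with failed or missing patches."""
    return sorted({d["system"] for d in deployments
                   if d["status"] in ("failed", "missing")})

print(status_summary(deployments))
print(systems_needing_attention(deployments))  # ['db01', 'web02']
```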
Software vendors update software and eventually end support for older versions. Software that has reached its end of service life can represent a threat to security, as it is no longer being patched against problems as they are discovered. This same outcome can result from a vendor going out of business. Software in these cases should be carefully monitored for increased risk to the enterprise.
Patch management solutions can also be useful to satisfy audit or compliance requirements, as they can show a structured approach to patch management, show when and how systems are patched, and provide a detailed accounting of patch status within the organization.
Microsoft provides a free patch management product called Windows Server Update Services (WSUS), shown in Figure 14.8. Using the WSUS product, administrators can manage updates for any compatible Windows-based system in their organization. The WSUS product can be configured to download patches automatically from Microsoft based on a variety of factors (such as OS, product family, criticality, and so on). When updates are downloaded, the administrator can determine whether or not to push out the patches and when to apply them to the systems in their environment. The WSUS product can also help administrators track patch status on their systems, which is a useful and necessary feature.
• Figure 14.8 Windows Server Update Service
To secure, configure, and patch software, administrators must first know what software is installed and running on systems. Maintaining an accurate picture of what operating systems and applications are running inside an organization can be a very labor-intensive task for administrators—especially if individual users have the ability to load software onto their own servers and workstations. To address this issue, many organizations develop software baselines for hosts and servers. Sometimes called “default,” “gold,” or “standard” configurations, a software baseline contains all the approved software that should appear on a desktop or server within the organization. While software baselines can differ slightly due to disparate needs between groups of users, the more “standard” a software baseline becomes, the easier it will be for administrators to secure, patch, and maintain systems within the organization.
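Checking a host against a software baseline can be as simple as a set comparison, as in the following sketch; the application names are hypothetical:

```python
# Sketch: comparing software found on a host against the approved
# software baseline. Application names are hypothetical examples.

baseline = {"chrome", "office", "acrobat-reader", "vpn-client"}
found_on_host = {"chrome", "office", "vpn-client", "bittorrent-client"}

unauthorized = sorted(found_on_host - baseline)  # installed but not approved
missing = sorted(baseline - found_on_host)       # approved but not installed

print(unauthorized)  # ['bittorrent-client']
print(missing)       # ['acrobat-reader']
```

The "unauthorized" list is the security-relevant output: software present on a host that the baseline does not allow.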
A vulnerability scanner is a program designed to probe hosts for weaknesses, misconfigurations, old versions of software, and so on. There are essentially three main categories of vulnerability scanners: network, host, and application.
A network vulnerability scanner probes a host (or hosts) for issues across its network connections. Typically a network scanner will either contain or use a port scanner to perform an initial assessment of the network to determine which hosts are alive and which services are open on those hosts. Each system and service is then probed. Network scanners are very broad tools that can run potentially thousands of checks, depending on the OS and services being examined. This makes them a very good “broad sweep” for network-visible vulnerabilities.
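The initial port-scanning step can be sketched with simple TCP connect attempts, as below. The address in the commented example is a documentation placeholder; scan only systems you are authorized to test:

```python
# Sketch: the port-discovery step a network vulnerability scanner
# performs first, using plain TCP connect attempts.

import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports accepting TCP connections on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Example (placeholder address): probe well-known service ports on a lab host.
# print(scan_ports("192.0.2.10", [22, 80, 443, 3389]))
```

A real scanner then fingerprints each open service and runs its vulnerability checks against it; this sketch covers only the discovery phase.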
Due to the number of checks they can perform, network scanners can generate a great deal of traffic and a large number of connections to the systems being examined, so care should be taken to minimize the impact on production systems and production networks.
Network scanners are essentially the equivalent of a Swiss Army knife for assessments. They do lots of tasks and are extremely useful to have around, but they might not be as good as a tool dedicated to examining one specific type of service. However, if you can only run a single tool to examine your network for vulnerabilities, you’ll want that tool to be a network vulnerability scanner. Figure 14.9 shows a screenshot of Nessus from Tenable Network Security, a very popular network vulnerability scanner.
• Figure 14.9 Nessus—a network vulnerability scanner
Bottom line: If you need to perform a broad sweep for vulnerabilities on one or more hosts across the network, a network vulnerability scanner is the right tool for the job.
Host vulnerability scanners are designed to run on a specific host and look for vulnerabilities and misconfigurations on that host. Host scanners tend to be more specialized because they’re looking for issues associated with a specific operating system or set of operating systems.
Selecting the right type of vulnerability scanner isn’t that difficult. Just focus on what types of vulnerabilities you need to scan for and how you will be accessing the host/services/applications being scanned. It’s also worth noting that to do a thorough job, you will likely need both network-based and host-based scanners—particularly for critical assets. Host- and network-based scanners perform different tests and provide visibility into different types of vulnerabilities. If you want to ensure the best coverage, you’ll need to run both.
Application vulnerability scanners are designed to look for vulnerabilities in applications or certain types of applications. Application scanners are some of the most specialized scanners—even though they contain hundreds or even thousands of checks, they only look for misconfigurations or vulnerabilities in a specific type of application. Arguably the most popular application scanners are those designed to test for weaknesses and vulnerabilities in web-based applications. Web applications are designed to be visible, interact with users, and accept and process user input—all things that make them attractive targets for attackers.
Security controls can be implemented on a host machine for the express purpose of providing data protection on the host. This section explores methods to implement the appropriate controls to ensure data security.
Data or information is the most important element to protect in the enterprise. Equipment can be purchased, replaced, and shared without consequence; it is the information that is being processed that has the value. Data security refers to the actions taken in the enterprise to secure data, wherever it resides: in transit, at rest, or in use.
Data has value in the enterprise, but for the enterprise to fully realize the value, data elements need to be shared and moved between systems. Whenever data is in transit, being moved from one system to another, it needs to be protected. The most common method of providing this protection is encryption. What is important is to ensure that data is always protected in proportion to the degree of risk associated with a data security failure.
Data is processed in applications, is used for various functions, and can be at risk when in system memory or even in the act of processing. Protecting data while in use is a much trickier proposition than protecting it in transit or at rest. While encryption can be used in those other situations, it is not practical to perform operations on encrypted data. This means that other measures must be taken to protect the data. Protected memory schemes and address space layout randomization are two tools that can be used to prevent data security failures during processing. Secure coding principles, including the definitive wiping of critical data elements once they are no longer needed, can also help protect data in use.
Data encryption continues to be the best solution for data security. Properly encrypted, the data is not readable by an unauthorized party. There are numerous ways to enact this level of protection on a host machine.
Full disk encryption refers to the act of encrypting an entire partition in one operation. Then, as specific elements are needed, those particular sectors can be decrypted for use. This offers a simple convenience factor and ensures that all of the data is protected. It does come at a performance cost, as the act of decrypting and encrypting takes time. For some high-performance data stores, especially those with latency issues, this performance hit may be critical. Although better performance can be achieved with specialized hardware, as with all security controls there needs to be an evaluation of the risk involved versus the costs.
Major database engines have built-in encryption capabilities. The advantage to these encryption schemes is that they can be tailored to the data structure, protecting the essential columns while not impacting columns that are not sensitive. Properly employing database encryption requires that the data schema and its security requirements be designed into the database implementation. The advantage is in better protection against any database compromise, and the performance hit is typically negligible with respect to other alternatives.
Individual files can also be encrypted in a system. This can be done either at the OS level or via a third-party application. Managing individual file encryption can be tricky, as the problem becomes one of securing the encryption keys. When using built-in encryption methods with an OS, the key issue is resolved by the OS itself, with a single key being employed and stored with the user credentials. One of the advantages of individual file encryption comes when transferring data to another user. Transporting a single file via an unprotected channel such as e-mail can be done securely with single-file encryption.
Universal Serial Bus (USB) offers an easy mechanism to connect devices to a computer. It acts as the mechanism of transport between the computer and an external device. When data traverses the USB connection, it typically ends up on a portable device and thus requires an appropriate level of security. Many mechanisms exist, from encryption on the USB device itself, to OS-enabled encryption, to independent encryption before the data is moved. Each of these mechanisms has advantages and disadvantages, and it is ultimately up to the user to choose the best method based on the sensitivity of the data.
Mobile device security, covered in detail in Chapter 12, is also essential when critical or sensitive data is transmitted to mobile devices. The protection of mobile devices goes beyond simple encryption of the data, as the device can act as an authorized endpoint for the system, opening up avenues of attack.
Big data is the industry buzzword for the very large data sets in use in many enterprises. Data sets in the petabyte, exabyte, and even zettabyte range are now being explored in some applications. Data sets of these sizes require special hardware and software to handle them, but this does not alleviate the need for security. Planning for security on this scale requires enterprise-level thinking, but it is worth noting that eventually some subset of the information makes its way to a host machine for use. It is at this point that the data is vulnerable, because whatever protection scheme is in place on the large storage system, the data is now outside that realm. This means that local protection mechanisms, such as that provided by Kerberos-based authentication, can be critical in managing this type of protection scheme.
Cloud computing is the use of online resources for storage, processing, or both. When data is stored in the cloud, encryption can be used to protect the data, so that what is actually stored is encrypted data. This reduces the risk of data disclosure both in transit to the cloud and back, as well as while in storage.
A storage area network (SAN) is a means of storing data across a secondary dedicated network. SANs operate to connect data storage devices as if they were local storage, yet they are separate and can be collections of disks, tapes, and other storage devices. Because the dedicated network is separate from the normal IP network, accessing the SAN requires going through one of the attached machines. This makes SANs a bit more secure than other forms of storage, although loss through a compromised client machine is still a risk.
Access control lists (ACLs) form one of the foundations of security on a machine. ACLs can be used by the operating system to make determinations as to whether or not a user can access a resource. This level of permission restriction offers significant protection of resources and transfers the management of the access control problem to the management of ACLs, which is a smaller and more manageable problem.
Permissions are the cornerstone of security, and ACLs are how they are enforced. ACL mistakes and failures result in the improper configuration of permissions—one of the most common errors in security. This is a problem to keep in mind throughout the material in the book. One question that should be forefront in any professional’s mind, both in configuring and testing, is “are the permissions being done correctly?”
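A default-deny ACL check can be sketched as follows; the users, resources, and permissions are hypothetical examples:

```python
# Sketch: an OS-style ACL lookup. Resources, users, and permissions
# are hypothetical examples.

acl = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    "config.ini":   {"alice": {"read"}},
}

def is_allowed(user, resource, permission):
    """Default deny: access is granted only if explicitly listed."""
    return permission in acl.get(resource, {}).get(user, set())

print(is_allowed("bob", "payroll.xlsx", "read"))   # True
print(is_allowed("bob", "payroll.xlsx", "write"))  # False
print(is_allowed("carol", "config.ini", "read"))   # False (not listed)
```

The default-deny behavior is the important design point: a user or resource missing from the list yields no access rather than an error, which mirrors how permission checks should fail safe.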
A modern environment is separated into multiple areas designed to isolate the functions of development, test, and production. These areas are primarily used to prevent accidents from arising from untested code ending up in production, and they are segregated by access control list and hardware, thus preventing users from accessing multiple different areas of the environment. Special accounts are used to move code between these areas of the environment in order to eliminate issues of crosstalk.
A development system is one that is sized, configured, and set up for developers to create applications and systems. The development hardware does not have to scale like production, and it probably does not need to be as responsive for certain transactions. The development platform does need to be of the same type of system, because developing on Windows and deploying to Linux is fraught with difficulties that can be avoided by matching development environments to production in terms of OS type and version. After code is successfully developed, it is moved to a test system.
The test environment is one that fairly closely mimics the production environment, with the same versions of software (down to the patch level) and the same sets of permissions, file structures, and so on. The purpose of the test environment is to enable a system to be fully tested prior to being deployed into production. The test environment might not scale like production, but from the viewpoint of a software/hardware footprint, it looks exactly like production.
The staging environment is an optional environment, but it is commonly found when there are multiple production environments. After passing the test, the system moves into staging, where it can be deployed to the different production systems. The primary purpose of staging is as a sandbox after test, so the test system can test the next set while the current set is deployed across the enterprise. One method of deployment is a staged deployment, where software is deployed to part of the enterprise and then paused to watch for unforeseen problems. If none occur, the deployment continues, stage by stage, until all of the production systems are changed. By moving software in this manner, you never lose the old production system until the end of the move, giving you time to judge and catch any unforeseen problems. This also prevents the total loss of production to a failed update.
Production is the environment where the systems work with real data, doing the business that the system is supposed to perform. This is an environment where there are by design virtually no changes, except as approved and tested via the system’s change management process.
Automation and scripting are valuable tools for system administrators and others to safely and efficiently execute tasks. Although many tasks can be performed by simple command-line execution or through the use of GUI menu operations, the use of scripts has two advantages. First, prewritten and tested scripts remove the chance of error, either a typo or clicking the wrong option. Errors are common and can take significant time to undo; for example, erasing the wrong file or directory can take time to locate and restore from a backup. The second advantage is that scripts can be chained together to provide a means of automating action.
Automation is a major element of an enterprise security program. There is an entire set of protocols, standards, methods, and architectures developed to support automation. The security community has developed automation methods associated with vulnerability management, including the Security Content Automation Protocol (SCAP), Common Vulnerabilities and Exposures (CVE), and more. Details can be found at https://measurablesecurity.mitre.org/.
Scripts are the best friend of administrators, analysts, investigators, or any professional who values efficient and accurate technical work. Scripts allow you to automate courses of action, with the subsequent steps tested and, when necessary, approved. Scripts and automation are important enough that they are specified in the National Institute of Standards and Technology (NIST) Special Publication 800-53 series, which describes security controls. For instance, the patching controls specify not only an automated method of determining which systems need patches but also that the patching mechanism itself be automated. Automated courses of action reduce errors.
Automated courses of action can save time as well. If, during an investigation, one needs to take an image of a hard drive on a system, calculate hash values, and record all the details in a file for chain of custody, this all can be done in just a few command lines—or with a single script that has been tested and approved for use.
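The imaging-and-hashing workflow just described can be sketched as follows; the examiner name is hypothetical, and a small temporary file stands in for a drive image:

```python
# Sketch: hashing an evidence file and recording the details for chain
# of custody. A small temporary file stands in for a drive image.

import hashlib
import json
import tempfile
from datetime import datetime, timezone

def hash_file(path, algorithm="sha256", chunk_size=65536):
    """Hash a file in chunks so even large images fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_record(path, examiner):
    """Build a chain-of-custody entry for the hashed file."""
    return {
        "file": path,
        "sha256": hash_file(path),
        "examiner": examiner,  # hypothetical examiner identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

# Demonstrate on a temporary file standing in for a captured image.
with tempfile.NamedTemporaryFile(delete=False, suffix=".img") as f:
    f.write(b"disk image contents")
    image_path = f.name

record = custody_record(image_path, "examiner01")
print(json.dumps(record, indent=2))
```

Once a script like this is tested and approved, every acquisition is recorded the same way, which is precisely the error reduction the automated course of action provides.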
Continuous monitoring is the term used to describe a system that has monitoring built into it, so rather than monitoring being an external event that may or may not happen, monitoring is an intrinsic aspect of the action. From a big-picture point of view, continuous monitoring is the name used to describe a formal risk assessment process that follows the NIST Risk Management Framework (RMF) methodology. Part of that methodology process is the use of security controls. Continuous monitoring is the operational process by which you can monitor and know if controls are functioning in an effective manner.
As most enterprises have a large number of systems and an even larger number of security controls, part of an effective continuous monitoring plan is the automated handling of continuous monitoring status data, to facilitate consumption in a meaningful manner. Automated dashboards and alerts that show out-of-standard conditions allow operators to focus on the parts of the system that need attention rather than sifting through mountains of data.
Configuration validation is a challenge as systems age and change over time. When a system is placed into service, its configuration should be validated against security standards, ensuring that the system will do what it is supposed to do, and only what it is supposed to do, with no added functionality. All extra ports, services, accounts, and so on are disabled, removed, or turned off, and the configuration files, including ACLs for the system, are correct and working as designed.
Over time, things change: software is patched, and components are added to or taken away from the server. Updates to the application, the OS, and even other applications on the server change the configuration. Is the configuration still valid? How does an organization monitor all of its machines to ensure valid configurations? Automated testing is a method that can scale to resolve this issue, making it just another part of the continuous monitoring system. Any manual method eventually fails because fluctuating priorities will result in routine maintenance being deferred.
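This kind of automated validation can be sketched as a comparison of current settings against the baseline; the setting names and values below are hypothetical:

```python
# Sketch: automated configuration validation against a secure baseline,
# suitable for running as part of continuous monitoring. All settings
# and values are hypothetical examples.

secure_baseline = {
    "ssh_root_login": "disabled",
    "password_min_length": 12,
    "guest_account": "disabled",
    "open_ports": [22, 443],
}

def validate(current, baseline):
    """Return (setting, expected, actual) tuples for every deviation."""
    return [(key, expected, current.get(key))
            for key, expected in baseline.items()
            if current.get(key) != expected]

current_config = {
    "ssh_root_login": "enabled",    # drifted from the baseline
    "password_min_length": 12,
    "guest_account": "disabled",
    "open_ports": [22, 443, 8080],  # an extra port has appeared
}

for setting, expected, actual in validate(current_config, secure_baseline):
    print(f"DRIFT: {setting}: expected {expected!r}, found {actual!r}")
```

Run on a schedule, a check like this turns configuration drift into an alert rather than a surprise.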
Templates are master recipes for the building of objects, be they servers, programs, or even entire systems. Templates contain all of the required configuration options and setup controls enabling the automation of item deployment. You can have multiple templates for a given service, each tailored to different requirements or circumstances. The end result is that you have predefined the setup and deployment options for the item, whether hardware or software. Security templates can provide directions for securely provisioning a system.
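Using Python's standard `string.Template`, a minimal provisioning template with security settings baked in might look like the following sketch; the field names and values are hypothetical:

```python
# Sketch: a provisioning template with secure defaults baked in. The
# configuration keys and values are hypothetical examples.

from string import Template

server_template = Template(
    "hostname=$hostname\n"
    "role=$role\n"
    "ssh_root_login=disabled\n"   # security settings fixed by the template
    "auto_updates=enabled\n"
)

# Only the deployment-specific fields vary; the security settings do not.
config = server_template.substitute(hostname="web03", role="webserver")
print(config)
```

Because the security settings are part of the template rather than per-deployment choices, every server stood up from it starts in the hardened configuration.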
Templates are what make Infrastructure as a Service (IaaS) practical. Establishing a business relationship with an IaaS firm is the time-consuming part: the firm needs to collect billing information, and you need to review a lot of terms and conditions with your legal team. The part you actually want, though, is the standing up of some piece of infrastructure, such as a LAMP stack. A LAMP stack is a popular open source web platform that is ideal for running dynamic sites; it is composed of Linux, Apache, MySQL, and PHP/Python/Perl, hence the term LAMP. You want this server to be secured, patched, and have specific accounts for access. You fill out a web form specifying all the conditions, the form matches your information to an appropriate template, and you click the Create button. The IaaS firm then uses templates and master images to bring your solution online within minutes, or even seconds. If you were going to stand this up from scratch on in-house hardware, configuring all of these elements might take days. If you have special needs, it might take a bit longer, but you get the idea: templates allow for the rapid, error-free creation of items such as configurations, the connection of services, testing, and deployment.
Master images are premade, fully patched images of systems. A master image, in the form of a virtual machine, can be configured and deployed in seconds to replace a system that has become tainted or is untrustworthy because of an incident. Master images provide the true, clean backup of the operating systems, applications, and everything else but the data. When you architect your enterprise to take advantage of master images, you make many administrative tasks easier to automate, easier to perform, and substantially more free of errors. Should an error be found, you have one image to fix and then deploy. Master images work very well for enterprises with multiple desktops, because you can create a master image that can be quickly deployed on new or repaired machines, bringing the systems to an identical and fully patched condition.
Nonpersistence is when a change to a system is not permanent. Making a system nonpersistent can be a useful tool when you wish to prevent certain types of malware attacks, for example. A system that has been made nonpersistent is not able to save changes to its configuration, its apps, or anything else. There are utility programs that can freeze a machine against change, in essence making it nonpersistent. This is useful for machines deployed in places where users can invoke changes, download content from the Internet, and so on. Nonpersistence offers a means for the enterprise to address these risks by not letting the changes happen in the first place. In some respects this is similar to whitelisting, which allows only approved applications to run.
Snapshots are instantaneous save points in time on virtual machines, allowing the virtual machine to be restored to that point in time. This works because, in the end, a VM is just a file on a machine, and setting the file back to a previous version reverts the VM to the state it was in at that time. Snapshots can be very useful in reducing risk: you can take a snapshot, make a change to the system, and, if the change is bad, revert to the snapshot as if the change had never been made.
Reverting to a known state is akin to reverting to a snapshot. Many OSs now have the capability to produce a restore point, which is a copy of key files that change upon updates to the OS. If you add a driver or update the OS, and the update results in problems, you can restore the system to the previously saved restore point. This is a very commonly used option in Microsoft Windows, and the system by default creates restore points before it processes updates to the OS, and at set points in time between updates. This gives users the ability to roll back the clock on the OS and restore to an earlier time when they know the problem did not exist. Unlike snapshots, which record everything, a restore point protects only the OS and associated files; restoring one therefore does not discard users’ files, as reverting to a snapshot does.
Rolling back to a known configuration is another way of saying “revert to a known state.” It is the specific language Microsoft used with respect to rolling back the Registry values to a known-good configuration on boot. If you made an incorrect configuration change in Windows and the system would no longer boot properly, you could select the Last Known Good Configuration option in the boot setup menu and roll back the Registry to the last value that properly completed a boot cycle. Microsoft stores most configuration options in the Registry, and this is a way to revert to a previous set of configuration options for the machine. Microsoft discontinued the direct ability to roll back to the last known-good configuration after Windows 7, and now the only option is to boot into safe mode and diagnose the problem via those menu options.
Live boot media, such as a CD or USB device, contains a complete bootable system and is specially formatted so the system can boot from the media. This gives you a means of booting the machine from an external OS source, should the one on the internal drive become unusable. It can serve as a recovery mechanism, although if the internal drive is encrypted, you will need backup keys to access it. It is also a convenient means of booting to a task-specific operating system, such as forensic tools or incident response tools that are separate from the OS on the machine.
Wrappers are structures used to enclose or contain some other system. Wrappers have been used in a variety of ways, including to obscure or hide functionality. A trojan horse is a form of wrapper. Wrappers also can be used to encapsulate information, such as in tunneling or VPN solutions. Wrappers can act as a form of channel control, including integrity and authentication information that a normal signal cannot carry. It is common to see wrappers used in alternative environments to prepare communications for IP transmission.
Elasticity is the ability of a system to handle an increased workload by dynamically adding hardware resources on demand, scaling out as needed. If the workload increases, you scale by adding more resources; conversely, when demand wanes, you scale back by removing unneeded resources. Elasticity is one of the strengths of cloud environments, as you can configure them to scale up and down and pay only for the resources you actually use. In a server farm that you own, you pay for the equipment even when it is not in use. In an elastic cloud environment, you literally pay only for what you use.
Scalability is the ability of the system to accommodate larger workloads through the addition of resources, either by making the hardware more capable (scaling up) or by adding additional nodes (scaling out). This term is commonly used in server farms and database clusters, as both can have scale issues with respect to workload.
Elasticity and scalability seem to be the same thing, but they are different. Elasticity is related to dynamically scaling a system with workload (scaling out), whereas scalability is a design element that enables both scaling up (to more capable hardware) and scaling out (to more instances).
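A minimal sketch of an elastic scale-out/scale-in rule follows; the CPU thresholds and instance bounds are hypothetical:

```python
# Sketch: an elastic scaling rule that adds instances under high load
# and removes them when demand falls. Thresholds are hypothetical.

def desired_instances(current, avg_cpu, scale_out_at=70, scale_in_at=30,
                      minimum=2, maximum=10):
    """Scale out on high CPU, scale in on low CPU, within fixed bounds."""
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)
    return current

print(desired_instances(4, 85))  # 5: demand is high, scale out
print(desired_instances(4, 20))  # 3: demand waned, scale in
print(desired_instances(4, 50))  # 4: steady state, no change
```

The minimum and maximum bounds illustrate why elasticity saves money: capacity tracks demand instead of being provisioned for the peak at all times.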
Distributive allocation is the transparent allocation of requests across a range of resources. When multiple servers are employed to respond to load, distributive allocation handles the assignment of jobs across the servers. When the jobs are stateful, as in database queries, the process ensures that the subsequent requests are returned to the same server to maintain transactional integrity. When the system is stateless, like web servers, other load-balancing routines are used to spread the work.
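The two allocation modes just described can be sketched as follows, using hypothetical server names: round-robin for stateless requests, and hashing of the session ID so stateful requests always return to the same server:

```python
# Sketch: distributive allocation. Stateless requests rotate across the
# pool; stateful sessions hash to the same server every time. Server
# names are hypothetical examples.

import hashlib
from itertools import cycle

servers = ["app01", "app02", "app03"]
round_robin = cycle(servers)  # stateless work: spread evenly

def server_for_session(session_id):
    """Stateful work: a session always maps to the same server."""
    digest = hashlib.sha256(session_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# A session is pinned to one server across repeated requests.
print(server_for_session("sess-42") == server_for_session("sess-42"))  # True

# Stateless requests rotate across the pool.
print([next(round_robin) for _ in range(4)])  # ['app01', 'app02', 'app03', 'app01']
```

Real load balancers use more sophisticated schemes (consistent hashing, health checks, weighted pools), but the distinction between stateless spreading and stateful pinning is the same.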
Alternative environments are those that are not traditional computer systems in a common IT environment. This is not to say that these environments are rare; in fact, there are millions of systems, composed of hundreds of millions of devices, all across society. Computers exist in many systems where they perform critical functions specifically tied to a particular system. These alternative systems are frequently static in nature; that is, their software is unchanging over the course of its function. Updates and revisions are few and far between. Although this may seem to run counter to current security practices, it doesn’t: because these alternative systems are constrained to a limited, defined set of functionality, the risk from vulnerabilities is limited. Examples of these alternative environments include embedded systems, SCADA (supervisory control and data acquisition) systems, mobile devices, mainframes, game consoles, and in-vehicle computers.
Many of the alternative environments can be considered static systems. Static systems are those that have a defined scope and purpose and do not regularly change in a dynamic manner, unlike most PC environments. Static systems tend to have closed ecosystems, with complete control over all functionality by a single vendor. A wide range of security techniques can be employed in the management of alternative systems. Network segmentation, security layers, wrappers, and firewalls assist in the securing of the network connections between these systems. Manual updates, firmware control, and control redundancy assist in the security of the device operation.
Peripherals used to be basically dumb devices, with low to no interaction; however, with the low cost of compute power and the desire to program greater functionality, many of these devices have embedded computers in them. This has led to hacking of peripherals and the need to understand the security aspects of peripherals. Items such as wireless keyboards and mice, printers, displays, and storage devices all become sources of risk.
Wireless keyboards operate via a short-range wireless signal between the keyboard and the computer. The main method of connection is either a USB Bluetooth connector, in essence creating a small personal area network (PAN), or a 2.4GHz dongle. Wireless keyboards are frequently paired with wireless mice, thus removing those troublesome and annoying cables from the desktop. Because of the wireless connection, the signals to and from the peripherals are subject to interception, and attacks have been made on these devices.
Wireless mice are similar in nature to wireless keyboards. They tend to connect as a human interface device (HID) class of USB. This is part of the USB specification and is used for mice and keyboards, simplifying connections, drivers, and interfaces through a common specification.
One of the interesting security problems with wireless mice and keyboards has been the development of the mousejacking attack. This is when an attacker performs a man-in-the-middle attack on the wireless interface and can control the mouse and/or intercept the traffic. When this attack first hit the environment, manufacturers had to provide updates to their software interfaces to block this form of attack. Some of the major manufacturers, like Logitech, made this effort for their mainstream product lines, but many older mice were never patched. Smaller vendors have never addressed the vulnerability at all, so it persists.
Computer displays are primarily connected to machines via a cable to one of several types of display connectors. However, for conferences and other group settings, a wide array of devices today can enable a machine to connect to a display via a wireless network. These devices are available from Apple, Google, and a wide range of A/V companies. The risk of using these devices is simple—who else within range of the wireless signal can watch what you are beaming to the display in the conference room? And would you even know if the signal was intercepted? In a word, you wouldn’t. This doesn’t mean these devices should not be used in the enterprise, but just that they should not be used for transmitting sensitive data to the screen.
Printers have CPUs and a lot of memory. The primary purpose for this is to offload the printing from the device sending the print job to the print queue. Modern printers now come standard with a bidirectional channel so that you can send a print job to the printer and then the printer can send back information as to job status, printer status, and other items. Multifunction devices (MFDs) are like printers on steroids. They typically combine printing, scanning, and faxing all into a single device. This has become a popular market segment because it reduces costs and device proliferation in the office.
With printers being connected to the network, multiple people can connect and independently print jobs, thus sharing a fairly expensive high-speed duplexing printer. But with the CPU, firmware, and memory comes the risk of an attack vector, and hackers have demonstrated malware passed via a printer. This is not a mainstream issue yet, but it has passed the proof-of-concept phase, and in the future we will need to have software protect us from our printers.
Network-attached storage (NAS) devices moved quickly from the enterprise into form factors found in homes. As users have developed large collections of digital videos and music, these external storage devices, running on the home network, solve the storage problem. These devices are typically fairly simple Linux-based appliances, with multiple hard drives in a RAID arrangement. With the rise of ransomware, these devices can spread infections to any and all devices that connect to the network. For this reason, precautions should be taken with respect to always-on connections to storage arrays.
A class of Wi-Fi-enabled MicroSD cards were developed to eliminate the need to move the card from device to device in order to move the data. Primarily designed for digital cameras, these cards became very useful for creating Wi-Fi devices out of devices that had an SD slot. These cards have a tiny computer embedded in them that runs a stripped-down version of Linux. One of the major vendors in this space used a stripped-down version of BusyBox and had no security invoked at all, making the device completely open to hackers.
Mobile devices may seem to be a static environment, one where the OS rarely changes or is rarely updated, but as these devices have become more and more ubiquitous, offering greater capabilities, this is no longer the case. Mobile devices receive regular OS updates, and users continually add applications, making most mobile devices a genuine security challenge. Mobile devices frequently come with Bluetooth connectivity mechanisms. Protection of the devices from attacks against the Bluetooth connection, such as bluejacking and bluesnarfing, is an important mitigation. To protect against unauthorized connections, a Bluetooth device should always have discoverable mode turned off, unless the user is deliberately pairing the device.
Mobile devices are covered in detail in Chapter 12.
Embedded systems is the name given to computers that are included as an integral part of a larger system, typically hardwired in. From computer peripherals like printers, to household devices like smart TVs and thermostats, to the car you drive, embedded systems are everywhere. Embedded systems can be as simple as a microcontroller with fully integrated interfaces (a system on a chip) or as complex as the tens of interconnected embedded systems in a modern automobile. Embedded systems are designed with a single control purpose in mind and have virtually no additional functionality, but this does not mean that they are free of risk or security concerns. The vast majority of security exploits involve getting a device or system to do something it is capable of doing, and technically designed to do, even if the resulting functionality was never an intended use of the device or system.
The designers of embedded systems typically are focused on minimizing costs, with security seldom seriously considered as part of either the design or the implementation. Because most embedded systems operate as isolated systems, the risks have not been significant. However, as capabilities have increased, and these devices have become networked together, the risks have increased significantly. For example, smart printers have been hacked as a way into enterprises, and as a way to hide from defenders. And when next-generation automobiles begin to talk to each other, passing traffic and other information between them, and begin to have navigation and other inputs being beamed into systems, the risks will increase and security will become an issue. This has already been seen in the airline industry, where the separation of in-flight Wi-Fi, in-flight entertainment, and cockpit digital flight control networks has become a security issue.
Digital camera systems have entered the computing world through a couple of different portals. First, there is the world of high-end digital cameras that have networking stacks, image processors, and even 4K video feeds. These are used in enterprises such as the news, where getting the data live without extra processing delays can be important. What is important to note is that most of these devices, although they are networked into other networks, have built-in virtual private networks (VPNs) that are always on, because the content is considered valuable enough to protect as a feature.
The next set of cameras reverses the quantity and quality characteristics. Whereas the high-end devices are fairly small in number, there is a growing segment of video surveillance cameras, including household surveillance, baby monitors, and the like. Hundreds of millions of these devices are sold, and they all have a sensor, a processor, a network stack, and so on. These are part of the Internet of Things (IoT) revolution, where millions of devices connect together either on purpose or by happenstance. It was a network of these devices, along with a default username and password, that led to the Mirai botnet that actually broke the Internet for a while in the fall of 2016. The true root cause was a failure to follow a networking RFC concerning source addressing, coupled with the default username and password and remote configuration, that enabled these devices to be taken over. Two sets of failures, working together, created weeks’ worth of problems.
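Mirai's core trick was nothing more sophisticated than trying a short list of factory-default username/password pairs against exposed devices. A defensive inventory audit along the same lines might look like this sketch; the device list and credential set are illustrative assumptions, not Mirai's actual dictionary.

```python
# Factory-default pairs of the kind Mirai scanned for (illustrative subset)
DEFAULT_CREDS = {("admin", "admin"), ("root", "root"), ("admin", "12345")}


def flags_default_creds(devices):
    """Return the hosts still using a factory-default username/password."""
    return [d["host"] for d in devices
            if (d["user"], d["password"]) in DEFAULT_CREDS]


inventory = [
    {"host": "cam-01", "user": "admin", "password": "admin"},
    {"host": "cam-02", "user": "ops",   "password": "Tr0ub4dor&3"},
]
flags_default_creds(inventory)   # ["cam-01"]
```

The fix for this class of failure is equally simple: force a credential change on first use, a control the affected devices lacked.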
Computer-based game consoles can be considered a type of embedded system designed for entertainment. The OS in a game console is not there for the user but rather to support the specific application or game. There typically is no user interface to the OS on a game console for a user to interact with; rather, the OS is designed for a sole purpose. With the rise of multifunction entertainment consoles, the attack surface of a gaming console can be fairly large, but it is still constrained by the closed nature of the gaming ecosystem. Updates for the firmware and OS-level software are provided by the console manufacturer. This closed environment offers a reasonable level of risk associated with the security of the systems that are connected. As game consoles become more general in purpose and include features such as web browsing, the risks increase to levels commensurate with any other general computing platform.
Mainframes represent the history of computing, and although many people think they have disappeared, they are still very much alive in enterprise computing. Mainframes are high-performance machines that offer large quantities of memory, computing power, and storage. Mainframes have been used for decades for high-volume transaction systems as well as high-performance computing. The security associated with mainframe systems tends to be built into the operating system on specific-purpose mainframes. Mainframe environments tend to have very strong configuration control mechanisms, and very high levels of stability.
Mainframes have become a cost-effective solution for many high-volume applications because many instances of virtual machines can run on the mainframe hardware. This opens the door for many new security vulnerabilities—not on the mainframe hardware per se, but rather through vulnerabilities in the guest OS in the virtual environment.
SCADA is an acronym for supervisory control and data acquisition, a system designed to control automated systems in cyber-physical environments. SCADA systems control manufacturing plants, traffic lights, refineries, energy networks, water plants, building automation and environmental controls, and a host of other systems. SCADA is also known by names such as distributed control systems (DCSs) and industrial control systems (ICSs). The variations depend on the industry and the configuration. Where computers control a physical process directly, a SCADA system likely is involved.
Most SCADA systems involve multiple components networked together to achieve a set of functional objectives. These systems frequently include a human-machine interface (HMI), where an operator can exert a form of directive control over the operation of the system under control. SCADA systems historically have been isolated from other systems, but the isolation is decreasing as these systems are being connected across traditional networks to improve business functionality. Many older SCADA systems were air-gapped from the corporate network; that is, they shared no direct network connections. This meant that data flows in and out were handled manually and took time to accomplish. Modern systems remove this constraint, with direct network connections between the SCADA networks and the enterprise IT network. These connections increase the attack surface and the risk to the system, and the more they resemble an IT networked system, the greater the need for security functions.
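Once the air gap is gone, the control that remains is an explicit policy on which traffic flows may cross between zones. A minimal model of such a policy check follows; the zone names and allowed flows are invented for illustration and loosely echo a Purdue-style segmentation.

```python
# Allowed inter-zone flows (illustrative): enterprise IT reaches the SCADA
# side only through a DMZ historian, never the HMI or controllers directly.
ALLOWED_FLOWS = {
    ("enterprise", "dmz-historian"),
    ("dmz-historian", "scada-hmi"),
    ("scada-hmi", "plc"),
}


def flow_permitted(src: str, dst: str) -> bool:
    """Default-deny: a flow is allowed only if explicitly listed."""
    return (src, dst) in ALLOWED_FLOWS


flow_permitted("enterprise", "plc")            # False: no direct path
flow_permitted("enterprise", "dmz-historian")  # True
```

In practice this policy lives in firewall rule sets between the zones, but the default-deny logic is the same.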
SCADA systems have been drawn into the security spotlight with the Stuxnet attack on Iranian nuclear facilities, initially reported in 2010. Stuxnet is malware designed to attack a specific SCADA system and cause failures, resulting in plant equipment damage. This attack was complex and well designed, crippling nuclear fuel processing in Iran for a significant period of time. This attack raised awareness of the risks associated with SCADA systems, whether connected to the Internet or not (Stuxnet crossed an air gap to hit its target).
Building-automation systems, climate control systems, HVAC (heating, ventilation, and air conditioning) systems, elevator control systems, and alarm systems are just some of the examples of systems that are managed by embedded systems. Although these systems used to be independent and standalone systems, the rise of hyperconnectivity has shown value in integrating them. Having a “smart building” that reduces building resources in accordance with the number and distribution of people inside increases efficiency and reduces costs. Interconnecting these systems and adding in Internet-based central control mechanisms does increase the risk profile from outside attacks.
Smart devices and devices that comprise the Internet of Things (IoT) have taken the world’s markets by storm—from key fobs that can track things via GPS, to cameras that can provide surveillance, to connected household appliances, TVs, dishwashers, refrigerators, crockpots, and washers and dryers. Anything with a microcontroller now seems to be connected to the Web so that external controls can be used. From the smart controllers from Amazon, the Echo, and its successors, to Google Home, to Microsoft Cortana, artificial intelligence has entered into the mix, enabling even greater functionality. Computer-controlled light switches, LED light bulbs, thermostats, and baby monitors—the smart home is connecting everything. You can carry a key fob that your front door recognizes, unlocking before you get to it. Of course, the security camera saw you first and alerted the system that someone was coming up the driveway. The only thing that can be said with confidence about this revolution is that someone will figure out how and why to connect virtually anything to the network.
All of these devices have a couple of similarities. They all have a network interface, because their connectivity is their purpose as a smart device or a member of the Internet of Things. On that network interface is some form of computer platform. With complete computer functionality now included in a System on a Chip (SoC) platform, which will be covered in a later section, these tiny devices can have a complete working computer for a couple of dollars in cost. The use of a Linux-type kernel as the core engine makes programming easier because the base of programmers is very large. Also, you have something that can be mass-produced and at a relatively low cost. The scaling of the software development over literally millions of units makes costs scalable, and the driving element is functionality. Security or anything else that might impact new expanded functionality has taken a backseat.
Wearable technologies include everything from biometric sensors for measuring heart rate, to step counters for measuring how much one exercises, to smart watches that combine both these functions, and many more. By measuring biometric signals such as pulse rate and body movements, it is possible to track fitness goals and even hours of sleep. These wearable devices are built using very small computers that run a real-time operating system, usually built from a stripped-down Linux kernel.
Home automation is one of the driving factors behind the IoT movement. From programmable smart thermostats, to electrical control devices that replace wall switches and enable voice-operated lights, the home environment is awash with tech. Locks can be operated electronically, allowing you to lock or unlock them remotely from your smartphone. Surveillance cameras connected to your smartphone can tell you when someone is at your door and allow you to talk to them without even being home or opening the door. Appliances can be set up to run when energy costs are lower, or to automatically order more food when you take the last of an item from the pantry or refrigerator. These are not things of a TV show about the future; they are available today and at fairly reasonable prices.
The tech behind these items is the same tech behind a lot of recent advances. This includes a small System on a Chip (a complete computer system with a real-time operating system designed not as a general compute platform but just to drive the needed elements); a network connection (usually wireless); some sensors to measure light, heat, or sound; and an application to integrate the functionality. The security challenge is that most of these devices literally have no security. Poor networking software led to a legion of baby monitors and other home devices becoming part of a large botnet called Mirai, which attacked the Krebs on Security site with a DDoS rate that exceeded 600 Gbps in the fall of 2016.
Special-purpose systems are those designed for a particular use and defined by their intended operating environment. Three primary types of special-purpose systems are medical devices, vehicles, and aircraft. Each of these has significant computer system elements providing much of the functionality control for the device, and each of these systems has its own security issues.
Medical devices comprise a very wide group of devices—from small implantable devices, such as pacemakers, to multi-ton MRI machines. In between are devices that measure things and devices that actually control things, such as infusion pumps. Each of these has several interesting characteristics, the most important of which is that they can have a direct effect on human life. This makes security a function of safety.
Medical devices such as lab equipment and infusion pumps have been running on computerized controls for years. The standard choice has been an embedded Linux kernel that has been stripped of excess functionality and pressed into service in the embedded device. One of the problems with this approach is how one patches this kernel when vulnerabilities are found. Also, as the base system gets updated to a newer version, the embedded system stays trapped on the old version. This requires regression testing for problems, and most manufacturers will not undertake this labor-intensive chore.
Medical devices are manufactured under strict regulatory guidelines that are designed for static systems that do not need patching, updating, or changes. Any change would force a requalification, which is a lengthy, time-consuming, and expensive process. Because of this, these devices tend never to be patched. With the advent of several high-profile vulnerabilities, including Heartbleed and BASH shell attacks, most manufacturers simply recommended that the devices be isolated and never connected to an outside network. In concept, this is fine, but in reality, this can never happen because all the networks in a hospital or medical center are connected.
A recall of nearly a half million pacemakers in 2017 for a software vulnerability that allows a hacker to access and change the performance characteristics of the device is proof of the problem. The good news is that the devices can be updated without being removed, but it will take a doctor’s visit to have the new firmware installed.
System on a Chip (SoC) technologies involve the miniaturization of the various circuits needed for a working computer system. These systems are designed to provide the full functionality of a computing platform on a single chip. This includes networking and graphics display. Some SoC solutions come with memory, and for others the memory is separate. SoCs are very common in the mobile computing market (both phones and tablets) because of their low power consumption and efficient design. Some SoCs have become household names as mobile phone companies have advertised their inclusion in their system (for example, the Snapdragon processor in Android devices). Quad-core and eight-core SoC systems are already in place, and they even have advanced designs such as Quad Plus One, where the fifth processor is slower and designed for simple processes and uses extremely small amounts of power. This way, when the quad cores are not needed, there is no significant energy usage.
The programming of SoC systems can occur at several different levels. Dedicated OSs and applications can be written for them, such as the Android fork of Linux, which is specific to the mobile device marketplace. At the end of the day, because these devices represent computing platforms for billions of devices worldwide, they have become a significant force in the marketplace.
Real-time operating systems (RTOSs) are operating systems designed for systems in which the processing must occur in real time and where data cannot be queued or buffered for any significant time. RTOSs are not for general-purpose machines, but are programmed for a specific purpose. They still have to deal with contention, and scheduling algorithms are needed to deal with timing collisions, but in general an RTOS processes each input as it is received, or within a specific time slice defined as the response time.
Most general-purpose computer operating systems are multitasking by design. This includes Windows and Linux. Multitasking systems make poor real-time processors, primarily because of the overhead associated with separating tasks and processes. Windows and Linux may have interrupts, but these are the exception, not the rule, for the processor. RTOS-based software is written in a completely different fashion, designed to emphasize a single processing thread rather than the handling of multiple threads.
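The defining property described above—every input handled within a defined response time rather than queued indefinitely—can be sketched as a deadline check around each unit of work. This is illustrative only; a real RTOS enforces deadlines in the scheduler, not in application code, and the 10 ms figure is an assumption.

```python
import time

RESPONSE_TIME = 0.010  # 10 ms deadline per input (assumed figure)


def process(input_value, handler):
    """Handle one input immediately and flag whether the deadline was met."""
    start = time.monotonic()
    result = handler(input_value)
    elapsed = time.monotonic() - start
    return result, elapsed <= RESPONSE_TIME


result, on_time = process(21, lambda x: x * 2)
# on_time is True whenever the handler finishes inside the response time
```

A general-purpose OS can miss such a deadline at any moment simply because the scheduler preempted the handler, which is precisely why multitasking systems make poor real-time processors.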
A modern vehicle has hundreds of computers in it, all interconnected on a bus. The CAN bus (controller area network bus) is designed to allow multiple microcontrollers to communicate with each other without a central host computer. As individual microcontrollers were used in automobiles to control the engine, emissions, transmission, braking, heating, electrical, and other systems, the wiring harnesses used to interconnect everything became a problem. Robert Bosch developed the CAN bus for cars, specifically to address the wiring harness issue, and when it was first deployed in 1986 at BMW, the weight reduction was over 100 pounds.
By 2008, all new U.S. and European cars had to use the CAN bus per SAE (Society of Automotive Engineers) regulations, and as more and more subsystems were added, the technology needed no selling to engineers. The CAN bus comes with a reference protocol specification, but recent auto hacking discoveries have revealed several interesting points. First, Toyota claimed in court that the only way to make a car go was to step on the gas pedal, and that software alone won't do it. This claim has been proven false. Second, every automobile manufacturer has interpreted or ignored the reference architecture to varying degrees. Finally, as demonstrated by hackers at DEF CON, it is possible to disable cars on the go, over the Internet, as well as fool around with the entertainment console settings and such.
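Much of the risk follows directly from the protocol's design: CAN is a broadcast bus, frames carry an arbitration ID but no sender authentication, so any node (or an attacker with bus access) can inject frames that other ECUs will accept. The following toy model illustrates that broadcast behavior; the class names and IDs are invented for illustration, not a real CAN stack.

```python
from dataclasses import dataclass


@dataclass
class CanFrame:
    arbitration_id: int   # sets priority and nominal identity—no authentication
    data: bytes           # up to 8 bytes of payload in classic CAN


class CanBus:
    """Toy broadcast bus: every attached node receives every frame."""

    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)

    def send(self, frame: CanFrame):
        for node in self.nodes:
            node.append(frame)   # no sender check: any node can spoof any ID


bus = CanBus()
engine_ecu, brake_ecu = [], []
bus.attach(engine_ecu)
bus.attach(brake_ecu)
bus.send(CanFrame(arbitration_id=0x0F6, data=b"\x01\x02"))
# Both ECUs receive the frame, regardless of who sent it.
```

This is why a compromised entertainment console on the same bus can, in principle, talk to safety-critical ECUs.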
The bottom line for automobiles and vehicles is that they are composed of multiple computers, all operating semi-autonomously and virtually without any security. The U.S. Department of Transportation is pushing for vehicle-to-vehicle communication so that cars can tell each other when traffic is changing ahead of them. Couple that with the advances in self-driving technology, and you can see how important it is that security become a stronger issue in the industry.
Aircraft also have a significant computer footprint inside, as most modern jets have what is called an all-glass cockpit. The old individual gauges and switches are replaced with a computer display with touchscreen. This enables greater functionality and is more reliable than the older systems. But as with cars, the connecting of all of this equipment onto buses that are then eventually connected to outside networks has led to a lot of security questions within the aviation industry. And, like the medical industry, change is difficult, because the level of regulation and testing precludes ever patching an operating system. This makes for systems that over time will become vulnerable as the base OS is thoroughly explored and every vulnerability mapped and exploited in non-aviation systems—and these use cases can easily be ported to planes.
Recent revelations have shown that the in-flight entertainment systems are separated from flight controls, not by separate networks, but by a firewall. This has led hackers to sound the alarm over aviation computing safety.
Unmanned aerial vehicles (UAVs) represent the next frontier of flight. These machines range from hobbyist devices that cost under $300 to full-size aircraft that can fly across oceans. What makes these systems different from regular aircraft is that the pilot is on the ground, flying the device via remote control. These devices have cameras, sensors, and processors to manage the information; even the simple home hobbyist versions have sophisticated autopilot functions. Because of the remote connection, they are either under direct radio control (rare) or connected via a networked system (much more common).
Industry-standard frameworks and reference architectures are conceptual blueprints that define the structure and operation of the IT systems in the enterprise. Just as in an architecture diagram, which provides a blueprint for constructing a building, the enterprise architecture provides the blueprint and roadmap for aligning IT and security with the enterprise’s business strategy.
Industries under governmental regulation frequently have an approved set of architectures defined by regulatory bodies. The electric industry, for example, has the NERC (North American Electric Reliability Corporation) Critical Infrastructure Protection (CIP) standards, a set of 14 individual standards that, taken together, drive a reference framework/architecture for the bulk electric system in North America. Most industries in the U.S. find themselves regulated in one manner or another. When it comes to cybersecurity, more and more regulations are beginning to apply—from privacy, to breach notification, to due diligence and due care provisions. NIST has been careful to promote its Cyber Security Framework (CSF), covered later in this chapter, not as a government-driven "must," but as an optional framework.
There are some reference architectures that are neither industry specific nor regulatory, but rather technology focused like the NIST / CSA (Cloud Security Alliance) reference architecture for cloud-based systems. In the nonregulatory set is the NIST CSF (Cyber Security Framework), a consensus-created overarching framework to assist enterprises in their cybersecurity programs. The CSF has three main elements: a core, tiers, and profiles. The core is built around five functions: Identify, Protect, Detect, Respond, and Recover. The core then has elements for each of these covering categories of actions, subcategories, and normative references to standards. The tiers are a way of representing an organization’s level of achievement—from partial, to risk informed, to repeatable, to adaptive. These tiers are similar to maturity model levels. The profiles section describes the current state of alignment for the elements and the desired state of alignment—a form of gap analysis. The NIST CSF is being mandated for government agencies, but it is completely voluntary in the private sector. This framework has been well received, partly because of its comprehensive nature and partly because of its consensus approach, which created a useable document.
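The profiles element of the CSF—current state versus desired state of alignment—is in essence a gap analysis, which can be modeled with a pair of mappings over the five core functions. The tier values below are invented for illustration; only the function names and the 1–4 tier scale come from the framework itself.

```python
# CSF implementation tiers: 1 Partial, 2 Risk Informed, 3 Repeatable, 4 Adaptive
CURRENT = {"Identify": 2, "Protect": 3, "Detect": 1, "Respond": 2, "Recover": 1}
TARGET  = {"Identify": 3, "Protect": 3, "Detect": 3, "Respond": 3, "Recover": 2}


def gap_analysis(current, target):
    """Return the functions whose current tier falls short of the target,
    with the size of each gap."""
    return {fn: target[fn] - current[fn]
            for fn in target if current[fn] < target[fn]}


gap_analysis(CURRENT, TARGET)
# {"Identify": 1, "Detect": 2, "Respond": 1, "Recover": 1}
```

An organization's roadmap then becomes a prioritized plan to close the largest gaps—here, Detect.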
The U.S. federal government has its own cloud-based reference architecture for systems that use the cloud. Called FedRAMP (the Federal Risk and Authorization Management Program), this process is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for systems using cloud products and services.
One of the more interesting international frameworks has been the harmonization between the U.S. and the EU with respect to privacy (U.S.) or data protection (EU). The rules and regulations covering privacy issues are so radically different, a special framework was created to harmonize the concepts, allowing the U.S. and EU to effectively do business together. This was referred to as the U.S.–EU Safe Harbor Framework. Changes in EU law, coupled with EU court determinations that the U.S.–EU Safe Harbor Framework is not a valid mechanism to comply with EU data protection requirements when transferring personal data from the European Union to the United States, forced a complete refreshing of the methodology. The new privacy-sharing methodology is called the EU–U.S. Privacy Shield Framework and became effective in the summer of 2016.
There are several examples of industry-specific frameworks. Although some of these may not seem to be complete frameworks, they provide instructive guidance on how systems should be architected. Some of these frameworks are regulatory based, like the electric industry CIP referenced earlier. Another industry-specific framework is the HITRUST CSF (Common Security Framework) for use in the medical industry and enterprises that must address HIPAA/HITECH rules and regulations.
Benchmarks and secure configuration guides offer a set of guidance for setting up and operating systems to a secure level that is understood and documented. As each organization may differ, the standard for a benchmark is a consensus-based set of knowledge designed to deliver a reasonable set of security across as wide a base as possible. There are numerous sources for these guides, and three main sources exist for many of these systems. You can get benchmark guides from manufacturers of the software, the government, and an independent organization called Center for Internet Security. Not all systems have benchmarks, nor do all sources cover all systems, but searching for the correct configuration and setup directives can go a long way in establishing security.
The vendor/manufacturer guidance source is easy: pick the vendor for your product. The government sources are a bit more scattered, but two solid ones are the U.S. National Institute of Standards and Technology Computer Security Resource Center's National Vulnerability Database National Checklist Program (NCP) Repository (https://nvd.nist.gov/ncp/repository) and the U.S. Department of Defense's Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs). The STIGs are detailed, step-by-step implementation guides, and a list is available at https://public.cyber.mil/stigs/.
Setting up secure services is important to enterprises, and some of the best guidance comes from the manufacturer in the form of platform/vendor-specific guides. These guides include installation and configuration guidance, and in some cases operational guidance as well.
There are many web servers used in enterprises. Web servers provide the connection between users (clients) and enterprise resources (the data being served), and therefore they are prone to adversarial attempts at penetration. Securely configuring any external-facing application is key to avoiding unnecessary risk. Fortunately for web servers, there are several authoritative and prescriptive sources of information available for properly securing the application. In the case of Microsoft's IIS and SharePoint Server, the company provides solid guidance on the proper configuration of the servers. The Apache Software Foundation provides some information for its web server products as well.
Another good source of information is the Center for Internet Security, as part of its benchmarking guides. The CIS guides provide authoritative, prescriptive guidance developed through a consensus effort among consultants, professionals, and others. This guidance has been subjected to, and has withstood, significant peer review via implementation. CIS guides are available for multiple versions of Apache, Microsoft, and other vendors' products.
The operating system (OS) is the interface between the applications that perform the tasks we want done and the actual physical computer hardware. As such, the OS is a key component in the secure operation of a system. Comprehensive, prescriptive configuration guides for all major operating systems are available from the manufacturer, in an easier-to-digest form from CIS, as mentioned earlier, or from the U.S. government through the Department of Defense's DISA STIG (Security Technical Implementation Guide) program.
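Benchmark guides are typically organized as individual checks of this kind: examine a setting, compare it to the documented secure value, and report pass or fail. The sketch below illustrates the pattern for one hypothetical item, verifying that a sensitive file is not world-writable; the scratch file and expected mode are assumptions standing in for a real target such as /etc/passwd.

```shell
# Hypothetical benchmark-style check: confirm a sensitive file is not
# world-writable. A scratch file in /tmp stands in for the real target.
target=/tmp/hardening-check-demo.conf
touch "$target"
chmod 640 "$target"

# GNU stat first; fall back to the BSD flag if that fails.
mode=$(stat -c '%a' "$target" 2>/dev/null || stat -f '%Lp' "$target")

# Modes whose final octal digit includes the write bit (2, 3, 6, 7)
# grant write access to all users.
case "$mode" in
    *[2367]) echo "FAIL: $target is world-writable (mode $mode)" ;;
    *)       echo "PASS: $target mode is $mode" ;;
esac
```

A real benchmark run strings together hundreds of such checks, which is why consensus guides document the expected value and the rationale for each one.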
Application servers are the part of the enterprise that handles the specific tasks we associate with IT systems. Whether it is an e-mail server, a database server, a messaging platform, or any other server, application servers are where the work happens. Proper configuration of an application server depends to a great degree on the server's specifics. Standard application servers, such as e-mail and database servers, have guidance available from the manufacturer, CIS, and the STIGs. Less-standard servers, such as those with significant customizations (for example, a custom set of applications written in-house for your inventory control or order-processing operations) or any other custom middleware, also require proper configuration, but the true vendor in these cases is the in-house builder of the software. Ensuring proper security settings and testing should be part of the build program for these servers so that they can be integrated into the normal security audit process, ensuring continued proper configuration.
A media gateway is a specialty application used to connect voice calling systems to IP networks to enable voice over IP (VoIP). These application servers are a blend of hardware and software, part application server and part network infrastructure, and they perform the functions necessary to integrate and separate voice and IP signals as required. Such systems show how the lines blur when classifying systems as either application servers or network devices.
Network infrastructure devices are particularly important to configure properly, because failures at this level can adversely affect the security of all the traffic they process. Properly setting up these devices, including switches, routers, concentrators, and other specialty devices, can be challenging. Their criticality also makes them targets: if a firewall fails, in many cases there is no indication until an investigation discovers that it failed to do its job. Ensuring these devices are properly configured and maintained is not a job to gloss over; it requires attention from properly trained personnel, backed by routine configuration audits to verify the devices stay properly configured. For most of these devices, the greatest risk lies in the user-defined configuration, the rulesets, which are specific to each deployment and cannot be mandated by a manufacturer's installation guide. Proper configuration and verification are site specific and often specific to the individual device. Without a solid set of policies and procedures to ensure this work is maintained, these devices may continue to run, but they will not provide the protection desired.
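One common form such a configuration audit takes is comparing a device's exported running ruleset against the approved baseline and flagging any drift. The sketch below illustrates the idea with `diff`; the rule syntax and file names are assumptions made up for the example, not output from any particular device.

```shell
# Configuration-drift audit sketch (illustrative rule syntax and paths).
# In practice the running config would be exported from the device.
printf '%s\n' 'permit tcp any host 10.0.0.5 eq 443' 'deny ip any any' \
    > /tmp/baseline.conf
printf '%s\n' 'permit tcp any host 10.0.0.5 eq 443' 'permit ip any any' \
    > /tmp/running.conf

# diff exits 0 when the files match, nonzero when they differ.
if diff -u /tmp/baseline.conf /tmp/running.conf > /tmp/drift.txt; then
    echo "no drift detected"
else
    echo "configuration drift detected:"
    cat /tmp/drift.txt
fi
```

Here the audit catches a final rule that was loosened from deny to permit, exactly the kind of silent change that would otherwise go unnoticed until an incident investigation.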
An example of a network infrastructure device is an SSL decryptor, a piece of hardware designed to streamline SSL/TLS connections in an enterprise, relieving other servers of this computationally intensive task.
The best general-purpose guide to information security is probably the CIS Critical Security Controls, a set of 20 effective, best-practice security controls. This project, originally known as the SANS Institute Top 20 Security Controls, began as a consensus project out of the U.S. Department of Defense and, over nearly 20 years, has become the de facto standard for selecting an effective set of security controls. The framework is now maintained by the Center for Internet Security and can be found at https://www.cisecurity.org/cybersecurity-best-practices/.
Microsoft’s Safety & Security Center https://docs.microsoft.com/en-us/windows/security/
SANS Reading Room: Application and Database Security www.sans.org/reading_room/whitepapers/application/
After reading this chapter and completing the exercises, you should understand the following about hardening systems and baselines.
Security baselines are critical to protecting information systems, particularly those allowing connections from external users.
The process of establishing a system’s security state is called baselining, and the resulting product is a security baseline that allows the system to run safely and securely.
Hardening is the process by which operating systems, network resources, and applications are secured against possible attacks.
Securing operating systems consists of removing or disabling unnecessary services, restricting permissions on files and directories, removing unnecessary software (or not installing it in the first place), applying the latest patches, removing unnecessary user accounts, and ensuring strong password guidelines are in place.
Securing network resources consists of disabling unnecessary functions, restricting access to ports and services, ensuring strong passwords are used, and ensuring the code on the network devices is patched and up to date.
Securing applications depends heavily on the application involved but typically consists of removing samples and default materials, preventing reconnaissance attempts, and ensuring the software is patched and up to date.
Anti-malware/spyware/virus protections are needed on host machines to prevent malicious code attacks.
Whitelisting can provide strong protections against malware on key systems.
Host-based firewalls can provide specific protections from some attacks.
Patch management is a disciplined approach to the acquisition, testing, and implementation of OS and application patches.
A hotfix is a single package designed to address a specific, typically security-related problem in an operating system or application.
A patch is a fix (or collection of fixes) that addresses vulnerabilities or errors in operating systems or applications.
A service pack is a large collection of fixes, corrections, and enhancements for an operating system, application, or group of applications.
Group policies are a method for managing the settings and configurations of many different users and systems in an Active Directory environment.
Group policies can be used to refine, set, or modify a system’s Registry settings, auditing and security policies, user environments, logon/logoff scripts, and so on.
Security templates are collections of security settings that can be applied to a system. Security templates can contain hundreds of settings that control or modify settings on a system, such as password length, auditing of user actions, and restrictions on network access.
Endpoints require protection in the form of anti-malware/antivirus, as well as firewalls and intrusion detection/prevention systems.
Additional controls that should be examined include DLP solutions and EDR solutions.
Controlling what software executes is important via whitelisting/blacklisting and specific application controls like AppLocker, Device Guard, and Credential Guard.
Alternative environments include process control (SCADA) networks, embedded systems, mobile devices, mainframes, game consoles, transportation systems, and more.
Alternative environments require security but are not universally equivalent to IT systems, so the specifics can vary tremendously from system to system.
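Several of the OS-hardening steps summarized above reduce to quick, repeatable checks. The sketch below shows one such check, scanning a directory tree for world-writable files, which generally indicate overly permissive settings; the scratch directory is an illustrative assumption, where a real audit would cover system paths.

```shell
# Sketch of a permissions audit: find world-writable files.
# A scratch directory stands in for real system paths.
demo=/tmp/harden-demo
mkdir -p "$demo"
touch "$demo/ok.txt" "$demo/loose.txt"
chmod 644 "$demo/ok.txt"     # owner rw, others read-only
chmod 666 "$demo/loose.txt"  # world-writable: should be flagged

# -perm -0002 matches any file whose mode includes the
# other-users write bit.
find "$demo" -type f -perm -0002 -print
```

Only the deliberately loosened file is reported, so the output of such a scan becomes a work list for tightening permissions as part of the baseline.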
antivirus (AV) (533)
application hardening (542)
application vulnerability scanner (547)
Basic Input/Output System (BIOS) (514)
continuous monitoring (552)
Desired State Configuration (DSC) (526)
firmware update (516)
globally unique identifier (GUID) (527)
group policy (527)
group policy object (GPO) (527)
hardware security module (HSM) (514)
heuristic scanning (533)
host vulnerability scanner (546)
industry-standard frameworks (565)
measured boot (515)
network operating system (NOS) (517)
network segmentation (542)
network vulnerability scanner (546)
operating system (OS) (516)
patch management (519)
process identifier (PID) (530)
reference architectures (565)
reference monitor (517)
root of trust (514)
Secure Boot (515)
secure configuration guides (566)
security kernel (517)
security template (553)
service pack (520)
TCP Wrapper (555)
trusted operating system (519)
Trusted Platform Module (TPM) (513)
Unified Extensible Firmware Interface (UEFI) (514)
Virtual Secure Mode (VSM) (526)
Use terms from the Key Terms list to complete the sentences that follow. Don’t use the same term more than once. Not all terms will be used.
1. _______________ is the process of establishing a system’s security state.
2. Securing and preparing a system for the production environment is called _____________.
3. A(n) _______________ is a small software update designed to address a specific, often urgent, problem.
4. The basic software on a computer that handles input and output is called the _______________.
5. ____________ is the use of the network architecture to limit communication between devices.
6. A(n) _______________ is a bundled set of software updates, fixes, and additional functions contained in a self-installing package.
7. In most UNIX operating systems, each running program is given a unique number called a(n) _______________.
8. When a user or process supplies more data than was expected, a(n) _______________ may occur.
9. _______________ are used to describe the state of init and what system services are operating in UNIX systems.
10. A(n) _______________ is a collection of security settings that can be applied to a system.
1. A small software update designed to address an urgent or specific problem is called what?
B. Service pack
D. None of the above
2. In a UNIX operating system, which runlevel describes single-user mode?
3. TCP wrappers do what?
A. Help secure the system by restricting network connections
B. Help prioritize network traffic for optimal throughput
C. Encrypt outgoing network traffic
D. Strip out excess input to defeat buffer overflow attacks
4. File permissions under UNIX consist of what three types?
A. Modify, read, and execute
B. Read, write, and execute
C. Full control, read-only, and run
D. Write, read, and open
5. What is the mechanism that allows for centralized management and configuration of computers and remote users in an Active Directory environment called?
B. Group policies
C. Simple Network Management Protocol
D. Security templates
6. What feature in Windows Server 2008 controls access to network resources based on a client computer’s identity and compliance with corporate governance policy?
B. Network Access Protection
D. Process identifiers
7. To stop a particular service or program running on a UNIX operating system, you might use the ______ command.
8. Updating the software loaded on nonvolatile RAM is called what?
A. A buffer overflow
B. A firmware update
C. A hotfix
D. A service pack
9. The shadow file on a UNIX system contains which of the following?
A. The password associated with a user account
B. Group policy information
C. File permissions for system files
D. Network services started when the system is booted
10. On a UNIX system, if a file has the permissions rwx r-x rw-, what permissions does the owner of the file have?
A. Read only
B. Read and write
C. Read, write, and execute
1. Explain the difference between a hotfix and a service pack, and describe why both are so important.
2. A new administrator needs some help creating a security baseline. Create a checklist/template that covers the basic steps in creating a security baseline to assist them, and explain why each step is important.
• Lab Project 14.1
Use a lab system running Linux with at least one open service, such as FTP, Telnet, or SMTP. From another lab system, connect to the Linux system and observe your results. Configure TCP wrappers on the Linux system to reject all connection attempts from the other lab system. Try to reconnect and then observe your results. Document your steps and explain how TCP wrappers work.
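As a starting point for the lab, recall that TCP wrappers are driven by two plain-text files, /etc/hosts.allow and /etc/hosts.deny, each holding `daemon : client` rules. The sketch below writes example rules to a scratch directory so the syntax can be seen without modifying a live system; the daemon name and address are illustrative assumptions, not the values your lab will use.

```shell
# Illustrative TCP wrappers rules written to a scratch directory.
# On a real system these lines would go in /etc/hosts.allow and
# /etc/hosts.deny; the daemon and address here are made up.
mkdir -p /tmp/tcpwrap-demo

cat > /tmp/tcpwrap-demo/hosts.allow <<'EOF'
# Permit FTP only from the admin workstation
vsftpd : 192.168.1.10
EOF

cat > /tmp/tcpwrap-demo/hosts.deny <<'EOF'
# Everything not explicitly allowed is rejected
ALL : ALL
EOF

cat /tmp/tcpwrap-demo/hosts.deny
```

hosts.allow is consulted first, so a deny-all rule in hosts.deny combined with narrow allow entries yields a default-deny posture for wrapped services.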
• Lab Project 14.2
Using a system running Windows, experiment with the Password Policy settings under the Local Security Policy (Settings | Control Panel | Administrative Tools | Local Security Policy). Find the setting for Passwords Must Meet Complexity Requirements and make sure it is disabled. Set the password on the account you are using to bob. Now enable the Passwords Must Meet Complexity Requirements settings and attempt to change your password to jane. Were you able to change it? Explain why or why not. Set your password to something the system will allow and explain how you selected that password and how it meets the complexity requirements.