Chapter 3
Architecture and Design

COMPTIA SECURITY+ EXAM OBJECTIVES COVERED IN THIS CHAPTER INCLUDE THE FOLLOWING:

  • 3.1 Explain use cases and purpose for frameworks, best practices and secure configuration guides.
    • Industry-standard frameworks and reference architectures
      • Regulatory
      • Non-regulatory
      • National vs. international
      • Industry-specific frameworks
    • Benchmarks/secure configuration guides
      • Platform/vendor-specific guides
        • Web server
        • Operating system
        • Application server
        • Network infrastructure devices
      • General purpose guides
    • Defense-in-depth/layered security
      • Vendor diversity
      • Control diversity
        • Administrative
        • Technical
      • User training
  • 3.2 Given a scenario, implement secure network architecture concepts.
    • Zones/topologies
      • DMZ
      • Extranet
      • Intranet
      • Wireless
      • Guest
      • Honeynets
      • NAT
      • Ad hoc
    • Segregation/segmentation/isolation
      • Physical
      • Logical (VLAN)
      • Virtualization
      • Air gaps
    • Tunneling/VPN
      • Site-to-site
      • Remote access
    • Security device/technology placement
      • Sensors
      • Collectors
      • Correlation engines
      • Filters
      • Proxies
      • Firewalls
      • VPN concentrators
      • SSL accelerators
      • Load balancers
      • DDoS mitigator
      • Aggregation switches
      • Taps and port mirror
    • SDN
  • 3.3 Given a scenario, implement secure systems design.
    • Hardware/firmware security
      • FDE/SED
      • TPM
      • HSM
      • UEFI/BIOS
      • Secure boot and attestation
      • Supply chain
      • Hardware root of trust
      • EMI/EMP
    • Operating systems
      • Types
        • Network
        • Server
        • Workstation
        • Appliance
        • Kiosk
        • Mobile OS
      • Patch management
      • Disabling unnecessary ports and services
      • Least functionality
      • Secure configurations
      • Trusted operating system
      • Application whitelisting/blacklisting
      • Disable default accounts/passwords
    • Peripherals
      • Wireless keyboards
      • Wireless mice
      • Displays
      • WiFi-enabled MicroSD cards
      • Printers/MFDs
      • External storage devices
      • Digital cameras
  • 3.4 Explain the importance of secure staging deployment concepts.
    • Sandboxing
    • Environment
      • Development
      • Test
      • Staging
      • Production
    • Secure baseline
    • Integrity measurement
  • 3.5 Explain the security implications of embedded systems.
    • SCADA/ICS
    • Smart devices/IoT
      • Wearable technology
      • Home automation
    • HVAC
    • SoC
    • RTOS
    • Printers/MFDs
    • Camera systems
    • Special purpose
      • Medical devices
      • Vehicles
      • Aircraft/UAV
  • 3.6 Summarize secure application development and deployment concepts.
    • Development life-cycle models
      • Waterfall vs. Agile
    • Secure DevOps
      • Security automation
      • Continuous integration
      • Baselining
      • Immutable systems
      • Infrastructure as code
    • Version control and change management
    • Provisioning and deprovisioning
    • Secure coding techniques
      • Proper error handling
      • Proper input validation
      • Normalization
      • Stored procedures
      • Code signing
      • Encryption
      • Obfuscation/camouflage
      • Code reuse/dead code
      • Server-side vs. client-side execution and validation
      • Memory management
      • Use of third-party libraries and SDKs
      • Data exposure
    • Code quality and testing
      • Static code analyzers
      • Dynamic analysis (e.g., fuzzing)
      • Stress testing
      • Sandboxing
      • Model verification
    • Compiled vs. runtime code
  • 3.7 Summarize cloud and virtualization concepts.
    • Hypervisor
      • Type I
      • Type II
      • Application cells/containers
    • VM sprawl avoidance
    • VM escape protection
    • Cloud storage
    • Cloud deployment models
      • SaaS
      • PaaS
      • IaaS
      • Private
      • Public
      • Hybrid
      • Community
    • On-premise vs. hosted vs. cloud
    • VDI/VDE
    • Cloud access security broker
    • Security as a Service
  • 3.8 Explain how resiliency and automation strategies reduce risk.
    • Automation/scripting
      • Automated courses of action
      • Continuous monitoring
      • Configuration validation
    • Templates
    • Master image
    • Non-persistence
      • Snapshots
      • Revert to known state
      • Rollback to known configuration
      • Live boot media
    • Elasticity
    • Scalability
    • Distributive allocation
    • Redundancy
    • Fault tolerance
    • High availability
    • RAID
  • 3.9 Explain the importance of physical security controls.
    • Lighting
    • Signs
    • Fencing/gate/cage
    • Security guards
    • Alarms
    • Safe
    • Secure cabinets/enclosures
    • Protected distribution/Protected cabling
    • Airgap
    • Mantrap
    • Faraday cage
    • Lock types
    • Biometrics
    • Barricades/bollards
    • Tokens/cards
    • Environmental controls
      • HVAC
      • Hot and cold aisles
      • Fire suppression
    • Cable locks
    • Screen filters
    • Cameras
    • Motion detection
    • Logs
    • Infrared detection
    • Key management

The Security+ exam will test your understanding of the architecture and design of an IT environment and its related security. To pass the test and be effective in implementing security, you need to understand the basic concepts and terminology related to network security design and architecture as detailed in this chapter.

3.1 Explain use cases and purpose for frameworks, best practices and secure configuration guides.

Security is complicated. The task of designing and implementing security can be so daunting that many organizations may put it off until it’s too late and they have experienced a serious intrusion or violation. A means to simplify the process, or at least to get started, is to adopt predefined guidance and recommendations from trusted entities. There are many government, open-source, and commercial security frameworks, best practices, and secure configuration guides that can be used as both a starting point and a goalpost for security programs for large and small organizations.

Industry-standard frameworks and reference architectures

A security framework is a guide or plan for keeping your organizational assets safe. It provides a structure to the implementation of security for both new organizations and those with a long history. A security framework should provide perspective that security is not just an IT concern, but an important business operational function. A well-designed security framework should address personnel issues, network security, portable and mobile equipment, operating systems, applications, servers and endpoint devices, network services, business processes, user tasks, communications, and data storage.

Industry-standard frameworks are those that are adopted and respected by a majority of organizations within a specific line of business. A reference architecture may accompany a security framework. Often a reference architecture is a detailed description of a fictitious organization and how security could be implemented. This concept serves as a guide for real-world organizations to use as a template to follow for adapting and implementing a framework.

Some security frameworks are designed to help new organizations implement their initial and foundational security elements, whereas others are designed to improve the existing in-place security infrastructure.

Regulatory

A regulatory security framework is a security guidance established by a government regulation or law. Regulatory frameworks are thus crafted or sponsored by government agencies. However, this does not necessarily limit their use to government entities. Many regulatory frameworks are publicly available and thus can be adopted and applied to private organizations as well.

A security framework does not have to be designed specifically for an organization, nor does an entire framework need to be implemented. Each organization is unique and thus may need to draw on several frameworks to assemble a solution that addresses its specific security needs.

Non-regulatory

A nonregulatory security framework is any security guidance crafted by a nongovernment entity. This would include open-source communities as well as commercial entities. Nonregulatory frameworks may require a licensing fee or a subscription fee in order to view and access the details of the framework. Some commercial entities will even provide customized implementation guidance or compliance auditing.

National vs. international

A national security framework is any security guidance designed specifically for use within a particular country. The author of a national framework may attempt to restrict access to the details of their framework in order to control or limit implementation to just their local industries. National frameworks also may include country-specific limitations, requirements, utilities, or other concerns that are not applicable to most other countries. Such national nuances may also serve as a limiting factor for the use of such frameworks in other countries.

International security frameworks are intentionally designed to be nation-independent. These are crafted with the goal of avoiding any country-specific limitations or idiosyncrasies in order to support worldwide adoption of the framework. Compliance with international security frameworks simplifies the interactions between organizations located across national borders by ensuring they have compatible and equivalent security protections.

Industry-specific frameworks

Industry-specific frameworks are those crafted to be applicable to one specific industry, such as banking, health care, insurance, energy management, transportation, or retail. These types of frameworks are tuned to address the most common issues within an industry and may not be as easily applicable to organizations outside of that target.


Benchmarks/secure configuration guides

A benchmark is a documented list of requirements that is used to determine whether or not a system, device, or software solution is allowed to operate within a securely managed environment (Figure 3.1). A secure configuration guide is another term for a benchmark. It can also be known as a standard or a baseline.

FIGURE 3.1 The CIS Benchmarks website

A benchmark can include specific instructions on installation and configuration of a product. It may also suggest alterations, modifications, and supplemental tools, utilities, drivers, and controls to improve the security of the system. A benchmark may also recommend operational steps, SOPs (standard operating procedures), and end-user guides to maintain security while business tasks are taking place.

A benchmark can be adopted from external entities, such as government regulations, commercial guidance, or community recommendations. But ultimately, a benchmark should be customized for the organization’s assets, threats, and risks.
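
As a rough illustration of how a benchmark is consumed, the following minimal Python sketch compares a system's current settings against a small list of required values and reports any deviations. The setting names and required values are hypothetical examples, not drawn from any published benchmark.

# Minimal sketch of a benchmark compliance check; the setting names and
# required values are hypothetical, not from any published benchmark.

benchmark = {
    "password_min_length": 14,
    "guest_account_enabled": False,
    "smbv1_enabled": False,
}

current_config = {
    "password_min_length": 8,
    "guest_account_enabled": False,
    "smbv1_enabled": True,
}

def audit(config, required):
    """Return the settings that do not meet the benchmark."""
    return [(name, expected, config.get(name))
            for name, expected in required.items()
            if config.get(name) != expected]

for name, expected, actual in audit(current_config, benchmark):
    print(f"NON-COMPLIANT: {name} is {actual!r}; benchmark requires {expected!r}")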


Platform/vendor-specific guides

Security configuration guides are often quite specific to an operating system/platform, application, or product vendor. These types of guides can be quite helpful in securing a product since they may provide step-by-step, click-by-click, command-by-command instructions on securing a specific application, OS, or hardware product.

Web server

Benchmarks and security configuration guides can focus on specific web server products, such as Microsoft’s Internet Information Services (IIS) or the Apache HTTP Server.

Operating system

Benchmarks and security configuration guides can focus on specific operating systems, such as Microsoft Windows, Apple Macintosh, Linux, or Unix.

Application server

Benchmarks and security configuration guides can focus on specific application servers, such as Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), databases, Network Attached Storage (NAS), Storage Area Network (SAN), directory services, virtual private network (VPN), or Voice over Internet Protocol (VoIP).

Network infrastructure devices

Benchmarks and security configuration guides can focus on specific network infrastructure devices, such as firewalls, switches, routers, wireless access points, VPN concentrators, web security gateways, virtual machines/hypervisors, or proxies.

General purpose guides

General-purpose security configuration guides are more generic in their recommendations rather than being focused on a single software or hardware product. This makes them useful in a wide range of situations, but they provide less detail and instruction on exactly how to accomplish the recommendations. A product-focused guide might provide hundreds of steps for configuring a native firewall, whereas a general-purpose guide may provide only a few dozen general recommendations. This type of guide leaves it up to the system manager to determine the specific actions needed to accomplish the goals or implement the suggestions.

Defense-in-depth/layered security

Defense in depth is the use of multiple types of access controls in literal or theoretical concentric circles or layers. This form of layered security helps an organization avoid a monolithic security stance. A monolithic mentality is the belief that a single security mechanism is all that is required to provide sufficient security.

Only through the intelligent combination of countermeasures can you construct a defense that will resist significant and persistent attempts at compromise. Intruders or attackers would need to overcome multiple layers of defense to reach the protected assets.

As with any security solution, relying on a single security mechanism is unwise. Defense in depth (also called multilayered security or diversity of defense) provides security control redundancy and diversity, so an environment can avoid the pitfalls of a single security feature failing; the environment has several opportunities to deflect, deny, detect, and deter any threat. Of course, no security mechanism is perfect. Each individual security mechanism has a flaw or a workaround just waiting to be discovered and abused by a hacker.

Vendor diversity

Vendor diversity is important for establishing defense in depth in order to avoid security vulnerabilities due to one vendor’s design, architecture, and philosophy of security. No one vendor can provide a complete end-to-end security solution that protects against all known and unknown exploitations and intrusions. Thus, to improve the security stance of an organization, it is important to integrate security mechanisms from a variety of vendors, manufacturers, and programmers.

Control diversity

Control diversity is essential in order to avoid a monolithic security structure. Do not depend on a single form or type of security; instead, integrate a variety of security mechanisms into the layers of defense. Using three firewalls is not as secure as using a firewall, an IDS, and strong authentication.

Administrative

Administrative controls typically include security policies as well as mechanisms for managing people and overseeing business processes. It is important to ensure a diversity of administrative controls rather than relying on a single layer or single type of security mechanism.

Technical

Technical controls include any logical or technical mechanism used to provide security to an IT infrastructure. Technical security controls need to be broad and varied in order to provide a robust wall of protection against intrusions and exploit attempts. Single defenses, whether a single layer or repetitions of the same defense, can fall to a singular attack. Diverse and multilayered defenses require a more complex attack approach requiring numerous exploitations to be used in a series, successfully, without detection in order to compromise the target. The concept of attacking with a series of exploits is known as daisy-chaining.

User training

User training is always a key part of any security endeavor. Users need to be trained in how to perform their work tasks in accordance with the limitations and restrictions of the security infrastructure. Users need to understand, believe in, and support the security efforts of the organization; otherwise, they will inevitably cause compliance problems, reduce productivity, and may accidentally or intentionally sabotage security controls.

Exam Essentials

Be aware of industry-standard frameworks. A security framework is a guide or plan for keeping your organizational assets safe. It provides guidance and a structure to the implementation of security for organizations. Security frameworks may be regulatory, nonregulatory, national, international, and/or industry-specific.

Understand benchmarks. A benchmark is a documented list of requirements that is used to determine whether a system, device, or software solution is allowed to operate within a securely managed environment. Benchmarks may be platform- or vendor-specific or general-purpose.

Define defense in depth. Defense in depth or layered security is the use of multiple types of access controls in literal or theoretical concentric circles or layers. Defense in depth should include vendor diversity and control diversity.

3.2 Given a scenario, implement secure network architecture concepts.

Reliable network security depends on a solid foundation. That foundation is the network architecture. Network architecture is the physical structure of your network, the divisions or segments, the means of isolation and traffic control, whether or not remote access is allowed, the means of secure remote connection, and the placement of sensors and filters. This section discusses many of the concepts of network architecture.

Zones/topologies

A network zone is an area of a network designed for a specific purpose, such as internal use or external use. Network zones are logical and/or physical divisions or segments of a LAN that allow for supplementary layers of security and control (see Figure 3.2). Each security zone is an area of a network that has a single defined level of security. That security may focus on enforcing authorized access, preventing unauthorized access, protecting confidentiality and integrity, or limiting traffic flow. Different security zones usually host different types of resources with different levels of sensitivity. Zones are often designated and isolated through the use of unique IP subnets and firewalls. Another term for network zone is network topology. There are many types of network zones; several are covered in the next sections.

FIGURE 3.2 A typical LAN connection to the Internet

DMZ

A demilitarized zone (DMZ) is a special-purpose subnet. A network consists of networking components (such as cables and switches) and hosts (such as clients and servers). Often, large networks are logically and physically subdivided into smaller interconnected networks. These smaller networks are known as subnets. Subnets are usually fairly generic, but some have special uses and/or configurations.

A DMZ is an area of a network that is designed specifically for low-trust users to access specific systems, such as the public accessing a web server. If the DMZ (as a whole or as individual systems within the DMZ) is compromised, the private LAN isn’t necessarily affected or compromised. Access to a DMZ is usually controlled or restricted by a firewall and router system.

The DMZ can act as a buffer network between the public untrusted Internet and the private trusted LAN. This implementation is known as a screened subnet. It is deployed by placing the DMZ subnet between two firewalls, where one firewall leads to the Internet and the other to the private LAN.

A DMZ can also be deployed through the use of a multihomed firewall (see Figure 3.3). Such a firewall has three interfaces: one to the Internet, one to the private LAN, and one to the DMZ.

FIGURE 3.3 A multihomed firewall DMZ

A DMZ gives an organization the ability to offer information services, such as web browsing, FTP, and email, to both the public and internal clients without compromising the security of the private LAN.

A typical scenario where a DMZ would be deployed is when an organization wants to offer resources, such as web server, email server, or file server, to the general public.
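
As a concrete (and greatly simplified) picture of the traffic policy behind a multihomed firewall DMZ, the Python sketch below models zone-to-zone rules; the zone names and permitted ports are illustrative assumptions, not a reference configuration.

# Simplified model of zone-based rules for a multihomed firewall with a DMZ.
# Zone names and permitted ports are illustrative assumptions only.

ALLOWED = {
    ("internet", "dmz"): {80, 443, 25},   # public may reach web/mail services in the DMZ
    ("lan", "dmz"): {80, 443, 25},        # internal clients may reach DMZ services
    ("lan", "internet"): None,            # None = any destination port permitted
    # No ("internet", "lan") entry: direct Internet-to-LAN traffic is denied.
}

def permitted(src_zone, dst_zone, dst_port):
    """Deny by default; allow only traffic explicitly listed above."""
    ports = ALLOWED.get((src_zone, dst_zone), set())
    return ports is None or dst_port in ports

print(permitted("internet", "dmz", 443))   # True  - public web access to the DMZ
print(permitted("internet", "lan", 445))   # False - blocked at the firewall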

Extranet

An extranet (see Figure 3.4) is a privately controlled network segment or subnet that functions as a DMZ for business-to-business transactions. It allows an organization to offer specialized services to business partners, suppliers, distributors, or customers. Extranets are based on TCP/IP and often use the common Internet information services, such as web browsing, FTP, and email. Extranets aren’t accessible to the general public. They often require outside entities to connect using a VPN. This restricts unauthorized access and ensures that all communications with the extranet are secured. Another important security concern with extranets is that companies that are partners today may be competitors tomorrow. Thus, you should never place data into an extranet that you’re unwilling to let a future competitor have access to.

FIGURE 3.4 A typical extranet between two organizations

A common scenario for use of an extranet is when an organization needs to grant resource access to a business partner or external supplier. This allows the external entity to access the offered resources without exposing those resources to the open Internet and does not allow the external entities access into the private LAN.

Intranet

An intranet is a private network or private LAN. This term was coined in the 1990s when there was a distinction between traditional LANs and those adopting Internet technologies, such as the TCP/IP protocol, web services, and email. Now that most networks use these technologies, the term intranet is no longer distinct from LAN.

All organizations that have a network have an intranet. Thus, any scenario involving a private LAN is also an intranet.

Wireless

A wireless network is a network that uses radio waves as the communication media instead of copper or fiber-optic cables. A wireless network zone can be isolated using encryption (such as WPA2) and unique authentication (so that only users and devices authorized for a specific network zone are able to log into that wireless zone).

Scenarios where wireless is a viable option include workspaces where portable devices are needed or when running network cables is cost prohibitive.

Guest

A guest zone or a guest network is an area of a private network designated for use by temporary authorized visitors. It allows nonemployee entities to partially interact with your private network, or at least with a subset of strictly controlled resources, without exposing your internal network to unauthorized user threats. A guest network can be a wireless or wired network. A guest network can also be implemented using VLAN enforcement.

Any organization that has a regularly recurring need to grant visitors and guests some level of network access, even if just to grant them Internet connectivity, should consider implementing a guest network.

Honeynets

A honeynet consists of two or more networked honeypots used in tandem to monitor or re-create larger, more diverse network arrangements. Often, these honeynets facilitate IDS deployment for the purposes of detecting and catching both internal and external attackers. See the section “Honeynet” in Chapter 2, “Technologies and Tools,” for more information.

NAT

In order for systems to communicate across the Internet, they must have an Internet-capable TCP/IP address. Unfortunately, leasing a sufficient number of public IP addresses to assign one to every system on a network is expensive. Plus, assigning public IP addresses to every system on the network means those systems can be accessed (or at least addressed) directly by external benign and malicious entities. One way around this issue is to use network address translation (NAT) (see Figure 3.5).

FIGURE 3.5 A typical Internet connection to a local network

NAT converts the private IP addresses (see the discussion of RFC 1918) of internal systems found in the header of network packets into public IP addresses. It performs this operation on a one-to-one basis; thus, a single leased public IP address can allow a single internal system to access the Internet. Because Internet communications aren’t usually permanent or dedicated connections, a single public IP address could effectively support three or four internal systems if they never needed Internet access simultaneously. So, when NAT is used, a larger network needs to lease only a relatively small number of public IP addresses.

NAT provides the following benefits:

  • It hides the IP addressing scheme and structure from external entities.
  • It serves as a basic firewall by only allowing incoming traffic that is in response to an internal system’s request.
  • It reduces expense by requiring fewer leased public IP addresses.
  • It allows the use of private IP addresses (RFC 1918).

Closely related to NAT is port address translation (PAT), which allows a single public IP address to host up to 65,536 simultaneous communications from internal clients (a theoretical maximum; in practice, you should limit the number to 100 or fewer in most cases). Instead of mapping IP addresses on a one-to-one basis, PAT uses the Transport layer port numbers to host multiple simultaneous communications across each public IP address.
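
To make the PAT idea concrete, the following minimal Python sketch (the addresses and port numbers are made up) maps each internal address-and-port pair to a unique source port on the single public address, which is how many internal conversations can share one leased IP.

# Minimal sketch of a PAT (NAT overload) translation table.
# Addresses and ports are made-up examples.
import itertools

PUBLIC_IP = "203.0.113.10"
_next_port = itertools.count(49152)        # ephemeral ports on the public side
translation_table = {}                     # (inside_ip, inside_port) -> public_port

def translate_outbound(inside_ip, inside_port):
    """Assign (or reuse) a public-side port for an internal socket."""
    key = (inside_ip, inside_port)
    if key not in translation_table:
        translation_table[key] = next(_next_port)
    return PUBLIC_IP, translation_table[key]

print(translate_outbound("192.168.1.20", 51000))   # ('203.0.113.10', 49152)
print(translate_outbound("192.168.1.21", 51000))   # ('203.0.113.10', 49153)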

The use of the term NAT in the IT industry has come to include the concept of PAT. Thus, when you hear or read about NAT, you can assume that the material is referring to PAT. This is true for most OSs and services; it’s also true of the Security+ exam.

Another issue to be familiar with is that of NAT traversal (NAT-T). Traditional NAT doesn’t support IPSec VPNs, because of the requirements of the IPSec protocol and the changes NAT makes to packet headers. However, NAT-T was designed specifically to support IPSec and other tunneling VPN protocols, such as Layer 2 Tunneling Protocol (L2TP), so organizations can benefit from both NAT and VPNs across the same border device/interface.

As the conversion from IPv4 to IPv6 takes place, there will be a need for NATing between these two IP structures. IPv4-to-IPv6 gateways or NAT servers will become more prevalent as the migration gains momentum, in order to maintain connectivity between legacy IPv4 networks and updated IPv6 networks. Once a majority of systems are using IPv6, the number of IPv4-to-IPv6 NATing systems will decline.

Scenarios where NAT implementation is essential include when using private IP addresses from RFC 1918 or when wanting to prevent external initiation of communications to internal devices.
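
Whether an address belongs to the RFC 1918 private ranges can be checked programmatically; the short Python sketch below uses only the standard-library ipaddress module, and the sample addresses are arbitrary.

# Check whether an address falls in one of the three RFC 1918 private ranges,
# using only the Python standard library.
import ipaddress

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address):
    ip = ipaddress.ip_address(address)
    return any(ip in network for network in RFC1918)

print(is_rfc1918("192.168.4.20"))   # True  - private, not routable on the Internet
print(is_rfc1918("8.8.8.8"))        # False - public address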

Ad hoc

Ad hoc is a form of wireless network, also known as a peer-to-peer network, in which individual hosts connect directly to each other rather than going through a middleman such as a wireless access point. For more on this topic, see the Chapter 2 section “WiFi direct/ad hoc.”

Segregation/segmentation/isolation

Network segmentation involves controlling traffic among networked devices. Complete or physical network segmentation occurs when a network is isolated from all outside communications, so transactions can only occur between devices within the segmented network. Logical network segmentation can be imposed with switches using VLANs, or through other traffic-control means, including MAC addresses, IP addresses, physical ports, TCP or UDP ports, protocols, or application filtering, routing, and access control management. Network segmentation can be used to isolate static environments in order to prevent changes and/or exploits from reaching them.

Security layers exist where devices with different levels of classification or sensitivity are grouped together and isolated from other groups with different security levels. This isolation can be absolute or one-directional. For example, a lower level may not be able to initiate communication with a higher level, but a higher level may initiate with a lower level. Isolation can also be logical or physical. Logical isolation requires the use of classification labels on data and packets, which must be respected and enforced by network management, OSs, and applications. Physical isolation requires implementing network segmentation or air gaps between networks of different security levels.

Bridging between networks can be a desired feature of network design. Network bridging is self-configuring, is inexpensive, maintains collision-domain isolation, is transparent to Layer 3+ protocols, and avoids the 5-4-3 rule’s Layer 1 limitations (see https://en.wikipedia.org/wiki/5-4-3_rule). However, network bridging isn’t always desirable. It doesn’t limit or divide broadcast domains, doesn’t scale well, can cause latency, and can result in loops. In order to eliminate these problems, you can implement network separation or segmentation. There are two means to accomplish this. First, if communication is necessary between network segments, you can implement IP subnets and use routers. Second, you can create physically separate networks that don’t need to communicate. This can also be accomplished using firewalls instead of routers to implement secured filtering and traffic management.

All networks are involved in scenarios where segregation, segmentation, and isolation are needed. Without establishing a distinction between internal private networks and external public networks, maintaining privacy, security, and control is very challenging for the protection of sensitive data and systems. Network segmentation should be used to divide communication areas based on sensitivity of activities, value of data, risk of data loss or disclosure, level of classification, physical location, or any other distinction deemed important to an organization.

Physical

Physical segmentation occurs when no links are established between networks. This is also known as an air gap. If there are no cables and no wireless connections between two networks, then a physical network segregation/segmentation/isolation has been achieved. This is the most reliable means of prohibiting unwanted transfer of data. However, this configuration is also the most inconvenient for the rare events where communications are desired or necessary.

Logical (VLAN)

A virtual local area network (VLAN) is a hardware-imposed network segmentation created by switches. By default, all ports on a switch are part of VLAN 1. But as the switch administrator changes the VLAN assignment on a port-by-port basis, various ports can be grouped together and kept distinct from other VLAN port designations.

VLANs are used for traffic management. Communications between ports within the same VLAN occur without hindrance, but communications between VLANs require a routing function, which can be provided either by an external router or by the switch’s internal software (one reason for the term multilayer switch). VLANs are treated like subnets but aren’t subnets. VLANs are created by switches. Subnets are created by IP address and subnet mask assignments.

VLAN management is the use of VLANs to control traffic for security or performance reasons. VLANs can be used to isolate traffic between network segments. This can be accomplished by not defining a route between different VLANs or by specifying a deny filter between certain VLANs (or certain members of a VLAN). Any network segment that doesn’t need to communicate with another in order to accomplish a work task/function shouldn’t be able to do so. Use VLANs to allow what is necessary and to block/deny anything that isn’t necessary. Remember, “deny by default; allow by exception” isn’t a guideline just for firewall rules, but for security in general.
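
As a simple mental model of this VLAN-based isolation, the following Python sketch (the port numbers and VLAN IDs are arbitrary) treats a switch as a port-to-VLAN map and allows direct Layer 2 communication only between ports assigned to the same VLAN; anything else would require a routing function.

# Toy model of switch port-to-VLAN assignments.
# Port numbers and VLAN IDs are arbitrary examples.

port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 1}   # ports 1-2 in VLAN 10, 3-4 in VLAN 20

def same_broadcast_domain(port_a, port_b):
    """Layer 2 traffic flows freely only within a single VLAN."""
    return port_vlan[port_a] == port_vlan[port_b]

print(same_broadcast_domain(1, 2))   # True  - both in VLAN 10
print(same_broadcast_domain(2, 3))   # False - inter-VLAN traffic needs routing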

A VLAN consists of network divisions that are logically created out of a physical network. They’re often created using switches (see Figure 3.6). Basically, the ports on a switch are numbered; each port is assigned the designation VLAN1 by default. Through the switch’s management interfaces, the device administrator can assign ports other designations, such as VLAN2 or VLAN3, in order to create additional virtual networks.

FIGURE 3.6 A typical segmented VLAN

VLANs function in much the same way as traditional subnets. In order for communications to travel from one VLAN to another, the switch performs routing functions to control and filter traffic between its VLANs.

VLANs are used to segment a network logically without altering its physical topology. They’re easy to implement, have little administrative overhead, and are a hardware-based solution (specifically a Layer 3 switch). As networks are increasingly built in virtual environments or in the cloud, software switches are often used. In those situations, VLANs are not hardware-based but are implemented by the software of the switch, whether that switch is a physical device or a virtual system.

VLANs let you control and restrict broadcast traffic and reduce a network’s vulnerability to sniffers, because a switch treats each VLAN as a separate network division. In order to communicate between segments, the switch must provide a routing function. It’s the routing function that blocks broadcasts between subnets and VLANs, because a router (or any device performing Layer 3 routing functions, such as a Layer 3 switch) doesn’t forward Layer 2 Ethernet broadcasts. This feature of a switch blocks Ethernet broadcasts between VLANs and so helps protect against broadcast storms. A broadcast storm is a flood of unwanted Ethernet broadcast network traffic.

Virtualization

Virtualization technology is used to host one or more OSs in the memory of a single host computer. This mechanism allows virtually any OS to operate on any hardware. It also lets multiple OSs work simultaneously on the same hardware. Common examples include VMware, Microsoft’s Virtual PC or Hyper-V, VirtualBox, and Apple’s Parallels.

Virtualization offers several benefits, such as the ability to launch individual instances of servers or services as needed, real-time scalability, and the ability to run the exact OS version required for an application. Virtualized servers and services are indistinguishable from traditional servers and services from a user’s perspective. Additionally, recovery from damaged, crashed, or corrupted virtual systems is often quick: you simply replace the virtual system’s main hard drive file with a clean backup version, and then relaunch the affected virtual system.

With regard to security, virtualization offers several benefits. It’s often easier and faster to make backups of entire virtual systems rather than the equivalent native hardware installed system. Plus, when there is an error or problem, the virtual system can be replaced by a backup in minutes. Malicious code compromises of virtual systems rarely affect the host OS. This allows for safer testing and experimentation.

Custom virtual network segmentation can be used with virtual machines in order to make guest OSs members of the same network division as the host, to place guest OSs into alternate network divisions, or to place them into a network that exists only virtually and does not relate to the physical network media. See the later section “SDN” for more about this technique, known as software-defined networking.

Air gaps

An air gap is another term for physical network segregation, as discussed in the earlier section “Physical.”

Tunneling/VPN

A virtual private network (VPN) is a communication tunnel between two entities across an intermediary network. In most cases, the intermediary network is an untrusted network, such as the Internet, and therefore the communication tunnel is usually encrypted. Numerous scenarios lend themselves to the deployment of VPNs; for example, VPNs can be used to connect two networks across the Internet (see Figure 3.7) or to allow distant clients to connect into an office local area network (LAN) across the Internet (see Figure 3.8). Once a VPN link is established, the network connectivity for the VPN client is the same as that of a direct LAN cable connection; the only practical difference between the two is speed.

FIGURE 3.7 Two LANs being connected using a VPN across the Internet

FIGURE 3.8 A client connecting to a network via a VPN across the Internet

VPNs offer an excellent solution for remote users to access resources on a corporate LAN. They have the following advantages:

  • They eliminate the need for expensive dial-up modem banks, including landline and ISDN.
  • They do away with long-distance toll charges.
  • They allow a user anywhere in the world with an Internet connection to establish a VPN link with the office network.
  • They provide security for both authentication and data transmission.

Sometimes VPN protocols are called tunneling protocols. This naming convention is designed to focus attention on the tunneling capabilities of VPNs.

VPNs work through a process called encapsulation. As data is transmitted from one system to another across a VPN link, the normal LAN TCP/IP traffic is encapsulated (encased, or enclosed) in the VPN protocol. The VPN protocol acts like a security envelope that provides special delivery capabilities (for example, across the Internet) as well as security mechanisms (such as data encryption).
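
The encapsulation concept can be sketched in a few lines of Python. The toy example below is in no way a real VPN protocol; it simply encrypts an inner LAN packet (using the third-party cryptography package) and wraps it with a made-up outer header for delivery across the untrusted network.

# Toy illustration of VPN-style encapsulation, not a real tunneling protocol.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret held by both tunnel endpoints
tunnel = Fernet(key)

inner_packet = b"SRC=10.0.0.5 DST=10.0.8.9 PAYLOAD=payroll data"
protected = tunnel.encrypt(inner_packet)              # confidentiality and integrity
outer_packet = b"OUTER:198.51.100.1->203.0.113.7|" + protected

# The receiving endpoint strips the outer header and decrypts the inner packet.
recovered = tunnel.decrypt(outer_packet.split(b"|", 1)[1])
assert recovered == inner_packet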

When firewalls, intrusion detection systems, antivirus scanners, or other packet-filtering and -monitoring security mechanisms are used, you must realize that the data payload of VPN traffic won’t be viewable, accessible, scannable, or filterable, because it’s encrypted. Thus, in order for these security mechanisms to function against VPN-transported data, they must be placed outside of the VPN tunnel to act on the data after it has been decrypted and returned to normal LAN traffic.

VPNs provide the following critical functions:

  • Access control restricts unauthorized users from accessing resources on a network.
  • Authentication proves the identity of communication partners.
  • Confidentiality prevents unauthorized disclosure of secured data.
  • Data integrity prevents unwanted changes of data while in transit.

VPN links are established using VPN protocols. There are several VPN protocols, but these are the four you should recognize:

  • Point-to-Point Tunneling Protocol (PPTP)
  • Layer 2 Tunneling Protocol (L2TP)
  • OpenVPN (SSL VPN, TLS VPN)
  • Internet Protocol Security (IPsec) (see the Chapter 2 section “IPSec”)

PPTP was originally developed by Microsoft. L2TP was developed by combining features of Microsoft’s proprietary implementation of PPTP and Cisco’s Layer 2 Forwarding (L2F) VPN protocols. Since its development, L2TP has become an Internet standard (RFC 2661) and is quickly becoming widely supported.

Both L2TP and PPTP are based on Point-to-Point Protocol (PPP) and thus work well over various types of remote-access connections, including dial-up. L2TP can support just about any networking protocol. PPTP is limited to IP traffic. L2TP uses UDP port 1701, and PPTP uses TCP port 1723.

PPTP can use any of the authentication methods supported by PPP, including the following:

  • Challenge Handshake Authentication Protocol (CHAP)
  • Extensible Authentication Protocol (EAP)
  • Microsoft Challenge Handshake Authentication Protocol version 1 (MS-CHAP v.1)
  • Microsoft Challenge Handshake Authentication Protocol version 2 (MS-CHAP v.2)
  • Shiva Password Authentication Protocol (SPAP)
  • Password Authentication Protocol (PAP)

Not all implementations of PPTP can provide data encryption. For example, when working with a PPTP VPN between Windows systems, the authentication protocol MS-CHAP v.2 enables data encryption.

L2TP can rely on PPP and thus on PPP’s supported authentication protocols. This is typically referenced as IEEE 802.1x (see Chapter 4, “Identity and Access Management,” and Chapter 6, “Cryptography and PKI,” for their sections on IEEE 802.1x), which is a derivative of EAP from PPP. IEEE 802.1x enables L2TP to leverage or borrow authentication services from any available AAA server on the network, such as RADIUS or TACACS+. L2TP does not offer native encryption, but it supports the use of encryption protocols, such as Internet Protocol Security (IPSec). Although it isn’t required, L2TP is most often deployed using IPsec.

L2TP can be used to tunnel any routable protocol but contains no native security features. When L2TP is combined with IPsec, it obtains authentication and data-encryption features because IPsec provides them. The main reason to use L2TP-encapsulated IPsec instead of naked IPsec is when you need to traverse a Layer 2 network that is either untrustworthy or whose security is unknown. This can include a telco’s business connection offerings, such as Frame Relay and Asynchronous Transfer Mode (ATM), or the public switched telephone network (PSTN). Otherwise, IPsec can be used without the extra overhead of L2TP.

OpenVPN is based on TLS (formerly SSL) and provides an easy-to-configure but robustly secured VPN option. OpenVPN is an open-source implementation that can use either preshared secrets (such as passwords) or certificates for authentication. Many wireless access points support OpenVPN, providing a native VPN option that allows a home or business WAP to serve as a VPN gateway.

Site-to-site

A site-to-site VPN is a connection between two organizational networks. See the Chapter 2 section “Remote access vs. site-to-site” for more information.

Remote access

A remote-access VPN is a variant of the site-to-site VPN. The difference is that with a remote-access VPN one endpoint is the single entity of a remote user that connects into an organizational network. See the Chapter 2 section “Remote access vs. site-to-site” for more information.

Site-to-site and remote access VPNs are variants of tunnel mode VPN. Another type of VPN is the transport mode VPN, which provides end-to-end encryption and can be described as a host-to-host VPN. In this type of VPN, all traffic is fully encrypted between the endpoints, but those endpoints are only individual systems, not organizational networks.

Security device/technology placement

When designing the layout and structure of a network, it is important to consider the placement of security devices and related technology. The goal of planning the architecture and organization of the network infrastructure is to maximize security while minimizing downtime, compromises, or other interruptions to productivity.

Sensors

A sensor is a hardware or software tool used to monitor an activity or event in order to record information or at least take notice of an occurrence. A sensor may monitor heat, humidity, wind movement, doors and windows opening, the movement of data, the types of protocols in use on a network, when a user logs in, any activity against sensitive servers, and much more.

For sensors to be effective, they need to be located in proper proximity to be able to take notice of the event of concern. This might require the sensor to monitor all network traffic, monitor a specific doorway, or monitor a single computer system.

Collectors

A security collector is any system that gathers data into a log or record file. A collector’s function is similar to the functions of auditing, logging, and monitoring. A collector watches for a specific activity, event, or traffic, and then records the information into a record file. Targets could be, for example, logon events, door opening events, all launches of a specific executable, any access to sensitive files, or all activity on mission-critical servers.

A collector, like any auditing system, needs sufficient space on a storage device to record the data it collects. Such data should be treated as more sensitive than the original data, programs, or systems it was collected from. A collector should be placed where it has the ability to review and retrieve information on the system, systems, or network that it is intended to monitor. This might require a direct link or path to the monitored target, or it may be able to operate on a cloned or mirrored copy of communications, such as the SPAN, audit, mirror, or IDS port of a switch.

Correlation engines

A correlation engine is a type of analysis system that reviews the contents of log files or live events. It is programmed to recognize related events, sequential occurrences, and interdependent activity patterns in order to detect suspicious or violating events. Through a correlation engine’s ability to aggregate and analyze system logs using fuzzy logic and predictive analytics, it may be able to detect a problem or potential problem long before a human administrator would have taken notice.

A correlation engine does not need to be in line on the network or installed directly onto monitored systems. It must have access to the recorded logs or the live activity stream in order to perform its analysis. This could allow it to operate on or near a data warehouse or centralized logging server (which is a system that maintains a real-time cloned copy of all live logs from servers and other critical systems) or off a switch SPAN port.
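
The core idea of correlating related events across a log stream can be sketched as follows; the log format and the "five failed logons within 60 seconds" rule are hypothetical choices for illustration only.

# Minimal sketch of event correlation: flag an account with five or more
# failed logons inside a 60-second window. Log format and threshold are hypothetical.
from collections import defaultdict

events = [
    # (timestamp_seconds, username, event_type)
    (100, "alice", "LOGON_FAIL"), (110, "alice", "LOGON_FAIL"),
    (115, "alice", "LOGON_FAIL"), (118, "alice", "LOGON_FAIL"),
    (121, "alice", "LOGON_FAIL"), (500, "bob", "LOGON_FAIL"),
]

WINDOW, THRESHOLD = 60, 5
recent = defaultdict(list)

for ts, user, etype in sorted(events):
    if etype != "LOGON_FAIL":
        continue
    recent[user] = [t for t in recent[user] if ts - t <= WINDOW] + [ts]
    if len(recent[user]) >= THRESHOLD:
        print(f"ALERT: possible brute-force attack against account {user!r} at t={ts}")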

Filters

A filter is used to recognize or match an event, address, activity, content, or keyword and trigger a response. In most cases a filter is used to block or prevent unwanted activities or data exchanges. The most common example of a filtering tool is a firewall.

A filter should be located in line along any communication path where control of data communications is necessary. Keep in mind that filters cannot inspect encrypted traffic, so filtering of such traffic must be done just before encryption or just after decryption.

Proxies

For an introduction to proxies, see the Chapter 2 section “Proxy.”

The placement or location of a proxy should be between source or origin devices and their destination systems. The location of a transparent proxy must be along the routed path between source and destination, whereas a nontransparent proxy can be located along an alternate or indirect routed path, since source systems will direct traffic to the proxy themselves.

Firewalls

For an introduction to firewalls, see the Chapter 2 section “Firewall.”

The placement or location of a firewall should be at any transition between network segments where there is any difference in risk, sensitivity, security, value, function, or purpose. It is standard security practice to deploy a hardware security firewall between an internal network and the Internet as well as between the Internet, a DMZ or extranet, and an intranet (Figure 3.9).

FIGURE 3.9 A potential firewall deployment related to a DMZ

Software firewalls are also commonly deployed on every host system, both servers and clients, throughout the organizational network.

It may also be worth considering implementing firewalls between departments, satellite offices, VPNs, and even different buildings or floors.

VPN concentrators

For an introduction to VPN concentrators, see the Chapter 2 section “VPN concentrator.”

A VPN concentrator should be located on the boundary or border of the organizational network at or near the primary Internet connection. The VPN concentrator may be located inside or outside of the primary network appliance firewall. If the security policy is that all traffic is filtered entering the private network, then the VPN concentrator must be located outside the firewall. If traffic from remote locations over VPNs is trusted, then the VPN concentrator can be located inside the firewall.

SSL accelerators

For an introduction to SSL/TLS accelerators, see the Chapter 2 section “SSL/TLS accelerators.”

An SSL/TLS accelerator should be located at the boundary or border of the organizational network at or near the primary Internet connection and before the resource server being accessed by those protected connections. Usually the SSL/TLS accelerator is located in line with the communication pathway so that no abusive network access can reach the network segment between the accelerator and the resource host. The purpose of this device or service is to offload the computational burden of encryption in order for a resource host to devote its system resources to serving visitors.

Load balancers

For an introduction to load balancers, see the Chapter 2 section “Load balancer.”

A load balancer should be located in front of a group of servers, often known as a cluster, which all support the same resource. Because its purpose is to distribute the workload of connection requests among the members of the cluster, a load balancer is placed between the requesting clients and the group of servers hosting the resource.
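
A minimal Python sketch of the round-robin distribution a load balancer might perform in front of a cluster follows; the server names are placeholders, and real products support many additional scheduling algorithms.

# Minimal round-robin load balancer sketch; server names are placeholders.
import itertools

cluster = ["web01", "web02", "web03"]          # identical servers behind the balancer
next_server = itertools.cycle(cluster)

def dispatch(request_id):
    """Send each incoming request to the next server in rotation."""
    server = next(next_server)
    print(f"request {request_id} -> {server}")

for i in range(5):
    dispatch(i)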

DDoS mitigator

A DDoS mitigator is a software solution, hardware device, or cloud service that attempts to filter and/or block traffic related to DoS attacks. See Chapter 1, “Threats, Attacks, and Vulnerabilities,” sections “DoS” and “DDoS” for information about these attacks.

A DDoS mitigator will attempt to differentiate legitimate packets from malicious packets. Benign traffic will be sent toward its destination, whereas abusive traffic will be discarded. Low-end DDoS mitigators may be called flood guards. Flood guarding is often a feature of firewalls. However, such solutions only change the focus of the DDoS attack rather than eliminate it. A low-level DDoS mitigator or flood guard solution will prevent the malicious traffic from reaching the target server, but the filtering system may itself be overloaded. This can result in the DoS event still being able to cut off communications for the network, even when the targeted server is not itself harmed in the process.

Commercial-grade DDoS solutions, especially those based on a cloud service, operate differently. Instead of simply filtering traffic on the spot, they reroute traffic to the cloud provider’s core filtering network. The cloud-based DDoS mitigator will often use a load balancer in front of 10,000+ virtual machines in order to dilute and distribute the traffic for analysis and filtering. All garbage packets are discarded, and legitimate traffic is routed back to the target network.

A DDoS mitigator should be positioned in line along the pathway into the intranet, DMZ, and extranet from the Internet. This provides the DDoS mitigator with the ability to filter all traffic from external attack sources before it reaches servers or the network as a whole.
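
The flood-guard behavior described above can be approximated with a simple per-source rate limiter, as in the Python sketch below; the threshold and the sample address are invented for illustration.

# Simplified flood-guard sketch: drop traffic from any source that exceeds
# a per-second packet threshold. The threshold and address are invented examples.
from collections import Counter

THRESHOLD = 100          # packets per source per second before traffic is dropped
counts = Counter()

def handle_packet(src_ip, current_second):
    """Return True if the packet should be forwarded, False if dropped."""
    counts[(src_ip, current_second)] += 1
    return counts[(src_ip, current_second)] <= THRESHOLD

print(handle_packet("198.51.100.23", 0))    # True while the source is under the threshold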

Aggregation switches

An aggregation switch is the main or master switch used as the interconnection point for numerous other switches. In the past, this device may have been known as the main distribution frame (MDF), central distribution frame, core distribution frame, or primary distribution frame. In large network deployments, a primary switch is deployed near the demarcation point (which is the point where internal company wiring meets the external telco wiring), and then additional switches for various floors, departments, or network segments are connected off that primary switch.

Taps and port mirror

A tap is a means to eavesdrop on network communications. In the past taps were physical connections to the copper wires themselves, often using a mechanical means to strip or pierce the insulation to make contact with the conductors. These types of taps were often called vampire taps. Today, taps can be installed in line without damaging the existing cable. To install an inline tap, first the original cable must be unplugged from the switch (or other network management device) and then plugged into the tap. Then the tap is plugged into the vacated original port. A tap should be installed wherever traffic monitoring on a specific cable is required and when a port mirroring function is either not available or undesired.

A port mirror is a common feature found on managed switches; it will duplicate traffic from one or more other ports out a specific port. A switch may have a hardwired Switched Port Analyzer (SPAN) port, which duplicates the traffic for all other ports, or any port can be set as the mirror, audit, IDS, or monitoring port for one or more other ports. Port mirroring takes place on the switch itself.

SDN

The concept of OS virtualization has given rise to other virtualization topics, such as virtualized networks. A virtualized network or network virtualization is the combination of hardware and software networking components into a single integrated entity. The resulting system allows for software control over all network functions: management, traffic shaping, address assignment, and so on. A single management console or interface can be used to oversee every aspect of the network, a task that required physical presence at each hardware component in the past. Virtualized networks have become a popular means of infrastructure deployment and management by corporations worldwide. They allow organizations to implement or adapt other interesting network solutions, including software-defined networks, virtual SANs, guest operating systems, and port isolation.

Software-defined networking (SDN) is a unique approach to network operation, design, and management. The concept is based on the theory that the complexities of a traditional network with on-device configuration (routers and switches) often force an organization to stick with a single device vendor, such as Cisco, and limit the flexibility of the network to adapt to changing physical and business conditions. SDN aims at separating the infrastructure layer (hardware and hardware-based settings) from the control layer (network services of data transmission management). Furthermore, this also negates the need for the traditional networking concepts of IP addressing, subnets, routing, and the like to be programmed into or deciphered by hosted applications.

SDN offers a new network design that is directly programmable from a central location, is flexible, is vendor neutral, and is based on open standards. Using SDN frees an organization from having to purchase devices from a single vendor. It instead allows organizations to mix and match hardware as needed, such as to select the most cost-effective or highest throughput–rated devices, regardless of vendor. The configuration and management of hardware are then controlled through a centralized management interface. In addition, the settings applied to the hardware can be changed and adjusted dynamically as needed.

Another way of thinking about SDN is that it is effectively network virtualization. It allows data transmission paths, communication decision trees, and flow control to be virtualized in the SDN control layer rather than being handled on the hardware on a per-device basis.
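
In practice, an SDN control layer is typically driven through a northbound API. The Python sketch below is a hypothetical example of pushing a "drop Telnet" flow rule to a controller over REST; the URL and JSON schema are invented and do not correspond to any specific controller product.

# Hypothetical sketch of pushing a flow rule to an SDN controller's
# northbound REST API. The URL and JSON schema are invented, not the API
# of any particular controller. Requires the third-party "requests" package.
import requests

CONTROLLER = "https://sdn-controller.example.local/api/flows"   # hypothetical endpoint

flow_rule = {
    "switch": "edge-switch-01",
    "match": {"dst_port": 23},     # match Telnet traffic
    "action": "drop",              # enforce the policy centrally, on any vendor's hardware
}

response = requests.post(CONTROLLER, json=flow_rule, timeout=5)
response.raise_for_status()
print("Flow rule installed:", response.status_code)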

Another interesting development arising out of the concept of virtualized networks is the virtual storage area network (SAN). A SAN is a network technology that combines multiple individual storage devices into a single consolidated network-accessible storage container. A virtual SAN or a software-defined shared storage system is a virtual re-creation of a SAN on top of a virtualized network or an SDN.

A storage area network (SAN) is a secondary network (distinct from the primary communications network) used to consolidate and manage various storage devices. SANs are often used to enhance networked storage devices such as hard drives, drive arrays, optical jukeboxes, and tape libraries so they can be made to appear to servers as if they were local storage.

SANs can offer greater storage isolation through the use of a dedicated network. This makes directly accessing stored data difficult and forces all access attempts to operate against a server’s restricted applications and interfaces.


Exam Essentials

Comprehend network zones. A network zone is an area of a network designed for a specific purpose, such as internal use or external use. Network zones are logical and/or physical divisions or segments of a LAN that allow for supplementary layers of security and control.

Understand DMZs. A demilitarized zone (DMZ) is an area of a network that is designed specifically for public users to access. The DMZ is a buffer network between the public untrusted Internet and the private trusted LAN. Often a DMZ is deployed through the use of a multihomed firewall.

Understand extranets. An extranet is an intranet that functions as a DMZ for business-to-business transactions. Extranets let organizations offer specialized services to business partners, suppliers, distributors, or customers.

Understand intranets. An intranet is a private network or private LAN.

Know about guest networks. A guest zone or a guest network is an area of a private network designated for use by temporary authorized visitors.

Understand honeynets. A honeynet consists of two or more networked honeypots used in tandem to monitor or re-create larger, more diverse network arrangements.

Be aware of NAT. NAT converts the IP addresses of internal systems found in the headers of network packets into public IP addresses. It hides the IP addressing scheme and structure from external entities. NAT serves as a basic firewall by only allowing incoming traffic that is in response to an internal system’s request. It reduces expense by requiring fewer leased public IP addresses, and it allows the use of private IP addresses (RFC 1918).

Understand PAT. Closely related to NAT is port address translation (PAT), which allows a single public IP address to host multiple simultaneous communications from internal clients. Instead of mapping IP addresses on a one-to-one basis, PAT uses the Transport layer port numbers to host multiple simultaneous communications across each public IP address.
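
The following minimal Python sketch illustrates the bookkeeping behind PAT: one public address, with the Transport layer source port used to tell internal sessions apart. It is a toy translation table, not an actual NAT implementation, and all addresses come from documentation/private ranges.

  public_ip = "203.0.113.10"                       # single leased public address

  # (public source port) -> (internal client IP, internal source port)
  pat_table = {
      50001: ("192.168.1.20", 49312),
      50002: ("192.168.1.31", 52114),
      50003: ("192.168.1.20", 49313),              # same client, second session
  }

  def translate_reply(public_port):
      """Look up which internal host/port a returning packet belongs to."""
      return pat_table.get(public_port)

  print(translate_reply(50002))                    # ('192.168.1.31', 52114)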

Know RFC 1918. RFC 1918 defines the ranges of private IP addresses that aren’t routable across the Internet: 10.0.0.0–10.255.255.255 (10.0.0.0 /8 subnet), 1 Class A range; 172.16.0.0–172.31.255.255 (172.16.0.0 /12 subnet), 16 Class B ranges; and 192.168.0.0–192.168.255.255 (192.168.0.0 /16 subnet), 256 Class C ranges.
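
You can verify these ranges quickly with Python's standard ipaddress module, as in the short sketch below; note that is_private is also True for a few other reserved blocks (such as loopback and link-local), not only the RFC 1918 ranges.

  import ipaddress

  for addr in ("10.5.5.5", "172.20.1.1", "192.168.0.50", "8.8.8.8"):
      ip = ipaddress.ip_address(addr)
      print(addr, "private" if ip.is_private else "public/routable")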

Understand network segmentation. Network segmentation involves controlling traffic among networked devices. Logical network segmentation can be imposed with switches using VLANs, or through other traffic-control means, including MAC addresses, IP addresses, physical ports, TCP or UDP ports, protocols, or application filtering, routing, and access control management.

Comprehend VLANs. Switches are often used to create virtual LANs (VLANs)—logical creations of subnets out of a single physical network. VLANs are used to logically segment a network without altering its physical topology. They are easy to implement, have little administrative overhead, and are a hardware-based solution.

Understand virtualization. Virtualization technology is used to host one or more OSs within the memory of a single host computer. Related issues include snapshots, patch compatibility, host availability/elasticity, security control testing, and sandboxing.

Understand VPNs. A virtual private network (VPN) is a communication tunnel between two entities across an intermediary network. In most cases, the intermediary network is an untrusted network, such as the Internet, and therefore the communication tunnel is also encrypted.

Know VPN protocols. PPTP, L2TP, OpenVPN, and IPSec are VPN protocols.

Understand PPTP. Point-to-Point Tunneling Protocol (PPTP) is based on PPP, is limited to IP traffic, and uses TCP port 1723. PPTP supports PAP, SPAP, CHAP, EAP, and MS-CHAP v.1 and v.2.

Know L2TP. Layer 2 Tunneling Protocol (L2TP) is based on PPTP and L2F, supports any LAN protocol, uses UDP port 1701, and often uses IPSec for encryption.

Understand OpenVPN. OpenVPN is based on TLS (formerly SSL) and provides an easy-to-configure but robustly secured VPN option.

Realize the importance of security device placement. When designing the layout and structure of a network, it is important to consider the placement of security devices and related technology. The goal of planning the architecture and organization of the network infrastructure is to maximize security while minimizing downtime, compromises, or other interruptions to productivity.

Understand software-defined networking. Software-defined networking (SDN) is a unique approach to network operation, design, and management. SDN aims to separate the infrastructure layer (hardware and hardware-based settings) from the control layer (the network services that manage data transmission).

3.3 Given a scenario, implement secure systems design.

Any effective security infrastructure is built following the guidelines of a security policy and consists of secure systems. A secure system must be planned and developed with security not just as a feature but as a central core concept. This section discusses some of the important design concepts that contribute to secure systems.

Hardware/firmware security

Security is an integration of both hardware/firmware components and software elements. This section looks at several hardware and firmware security technologies.

FDE/SED

Full-disk encryption (FDE) or whole-disk encryption is often used to provide protection for an OS, its installed applications, and all locally stored data. FDE encrypts all of the data on a storage device with a single master symmetric encryption key. Anything written to the encrypted storage device, including standard files, temporary files, cached data, memory swapped data, and even the remnants of deletion and the contents of slack space, is encrypted when FDE is implemented.

However, whole-disk encryption provides only reasonable protection when the system is fully powered off. If a system is accessed by a hacker while it’s active, there are several ways around hard drive encryption. These include a FireWire direct memory access (DMA) attack, malware stealing the encryption key out of memory, slowing memory-decay rates with liquid nitrogen to recover keys from RAM (a cold boot attack), or even just user impersonation. The details of these attacks aren’t important for this exam. However, you should know that whole-disk encryption is only a partial security control.

To maximize the defensive strength of whole-disk encryption, you should use a long, complex passphrase to unlock the system on bootup. This passphrase shouldn’t be written down or used on any other system or for any other purpose. Whenever the system isn’t actively in use, it should be powered down (hibernation is fine, but not sleep mode) and physically locked against unauthorized access or theft. Hard drive encryption should be viewed as a delaying tactic, rather than as a true prevention of access to data stored on the hard drive.

Hard drive encryption can be provided by a software solution, as discussed previously, or through a hardware solution. One option is self-encrypting drives (SED). Some hard drive manufacturers offer hard drive products that include onboard hardware-based encryption services. However, most of these solutions are proprietary and don’t disclose their methods or algorithms, and some have been cracked with relatively easy hacks.

Using a trusted software encryption solution can be a cost-effective and secure choice. But realize that no form of hard drive encryption, hardware- or software-based, is guaranteed protection against all possible forms of attack.

USB encryption is usually related to USB storage devices, which can include both USB-connected hard drives as well as USB thumb drives. Some USB device manufacturers include encryption features in their products. These often have an autorun tool that is used to gain access to encrypted content once the user has been authenticated. An example of an encrypted USB device is an IronKey.

If encryption features aren’t provided by the manufacturer of a USB device, you can usually add them through a variety of commercial or open-source solutions. One of the best-known, respected, and trusted open-source solutions is VeraCrypt (Figure 3.10) (the revised and secure replacement for its abandoned predecessor, TrueCrypt). This tool can be used to encrypt files, folders, partitions, drive sections, or whole drives, whether internal, external, or USB.


FIGURE 3.10 VeraCrypt encryption dialog box

TPM

The trusted platform module (TPM) is both a specification for a cryptoprocessor and the chip in a mainboard supporting this function. A TPM chip is used to store and process cryptographic keys for a hardware-supported/implemented hard drive encryption system. Generally, a hardware implementation rather than a software-only implementation of hard drive encryption is considered more secure.

When TPM-based whole-disk encryption is in use, the user/operator must supply a password or physical USB token device to the computer to authenticate and allow the TPM chip to release the hard drive encryption keys into memory. Although this seems similar to a software implementation, the primary difference is that if the hard drive is removed from its original system, it can’t be decrypted. Only with the original TPM chip can an encrypted hard drive be decrypted and accessed. With software-only hard drive encryption, the hard drive can be moved to a different computer without any access or use limitations.

HSM

A hardware security module (HSM) is a special-purpose cryptoprocessor used for a wide range of potential functions. The functions of an HSM can include accelerated cryptography operations, managing and storing encryption keys, offloading digital signature verification, and improving authentication. An HSM can be a chip on a motherboard, an external peripheral, a network-attached device, or an add-on or extension adapter or card (which is inserted into a device, such as a router, firewall, or rack-mounted server blade). Often an HSM includes tamper protection technology in order to prevent or discourage abuse and misuse even if physical access is obtained by the attacker. One example of an HSM is the TPM (see the previous section).

UEFI/BIOS

Basic input/output system (BIOS) is the basic low-end firmware or software embedded in the hardware’s electrically erasable programmable read-only memory (EEPROM). The BIOS identifies and initiates the basic system hardware components, such as the hard drive, optical drive, video card, and so on, so that the bootstrapping process of loading an OS can begin. This essential system function is a target of hackers and other intruders because it may provide an avenue of attack that isn’t secured or monitored.

BIOS attacks, as well as complementary metal-oxide-semiconductor (CMOS) and device firmware attacks, are becoming common targets of physical hackers as well as of malicious code. If hackers or malware can alter the BIOS, CMOS, or firmware of a system, they may be able to bypass security features or initiate otherwise prohibited activities.

Protection against BIOS attacks requires physical access control for all sensitive or valuable hardware. Additionally, strong malware protection, such as current antivirus software, is important.

A replacement or improvement to BIOS is Unified Extensible Firmware Interface (UEFI). UEFI provides support for all of the same functions as BIOS with many improvements, such as support for larger hard drives (especially for booting), faster boot times, enhanced security features, and even the ability to use a mouse when making system changes (BIOS was limited to keyboard control only). UEFI also includes a CPU-independent architecture, a flexible pre-OS environment with networking support, secure boot (see the next section), and backward and forward compatibility. It also runs CPU-independent drivers (for system components, drive controllers, and hard drives).

Secure boot and attestation

Secure boot is a feature of UEFI that aims to protect the operating environment of the local system by preventing the loading or installing of device drivers or an operating system that is not signed by a preapproved digital certificate. Secure boot thus protects systems against a range of low-level or boot-level malware, such as certain rootkits and backdoors. Secure boot ensures that only drivers and operating systems that pass attestation (the verification and approval process accomplished through the validation of a digital signature) are allowed to be installed and loaded on the local system.

Although the security benefits of secure boot attestation are important and beneficial to all systems, there is one important drawback to consider: if a system has a locked UEFI secure boot mechanism, it may prevent the system’s owner from replacing the operating system (such as switching from Windows to Linux) or block them from using third-party vendor hardware that has not been approved by the motherboard vendor (which means the third-party vendor did not pay a fee to have their product evaluated and their drivers signed by the motherboard vendor). If there is any possibility of using alternate OSs or changing hardware components of a system, be sure to use a motherboard from a vendor that will provide unlock codes/keys to the UEFI secure boot.

Supply chain

Supply chain security is the concept that most computers are not built by a single entity. In fact, most of the companies we know of as computer manufacturers, such as Dell, HP, Asus, Acer, and Apple, mostly perform the final assembly rather than manufacture all of the individual components. Often the CPU, memory, drive controllers, hard drives, SSDs, and video cards are created by other third-party vendors. Even these vendors are unlikely to have mined their own metals or processed the oil for plastics or etched the silicon of their chips. Thus, any finished system has a long and complex history, known as its supply chain, that enabled it or caused it to come into existence.

A secure supply chain is one in which all of the vendors or links in the chain are reliable, trustworthy, reputable organizations that disclose their practices and security requirements to their business partners (although not necessarily to the public). Each link in the chain is responsible and accountable to the next link in the chain. Each hand-off, from raw materials to refined products to electronics parts to computer components to finished product, is properly organized, documented, managed, and audited. The goal of a secure supply chain is to ensure that the finished product is of sufficient quality, meets performance and operational goals, and provides stated security mechanisms, and that at no point in the process was any element subjected to unauthorized or malicious manipulation or sabotage.

Hardware root of trust

A hardware root of trust is based or founded on a secure supply chain. The security of a system is ultimately dependent upon the reliability and security of the components that make up the computer as well as the process it went through to be crafted from original raw materials. If the hardware that is supporting an application has security flaws or a backdoor, or fails to provide proper HSM-based cryptography functions, then the software running on it cannot compensate for those failings. Only if the root of the system—the hardware itself—is reliable and trustworthy can the system as a whole be considered trustworthy. System security is a chain of many interconnected links; if any link is weak, then the whole chain is untrustworthy.

EMI/EMP

Electromagnetic interference (EMI) is the noise caused by electricity when used by a machine or when flowing along a conductor. Copper network cables and power cables can pick up environmental noise or EMI, which can corrupt the network communications or disrupt the electricity feeding equipment. An electromagnetic pulse (EMP) is an instantaneous high-level EMI, which can damage most electrical devices in the vicinity.

EMI shielding is important for network-communication cables as well as for power-distribution cables. EMI shielding can include upgrading from UTP (unshielded twisted pair) to STP (shielded twisted pair), running cables in shielding conduits, or using fiber-optic networking cables. EMI-focused shielding can also provide modest protection against EMPs, although that depends on the strength and distance of the EMP compared to the device or cable. Generally, these two types of cables (networking and electrical) should be run in separate conduits and be isolated and shielded from each other. The strong magnetic fields produced by power-distribution cables can interfere with network-communication cables.

Operating systems

Any secure system design requires the use of a secure operating system. Although no operating system is perfectly secure, the selection of the right operating system for a particular task or function can reduce the ongoing burden of security management.

Types

There are many ways to categorize or group operating systems. This section includes several specific examples of OS types, labels, and groupings. In all cases, the selection of an OS should focus on features and capabilities without overlooking the native security benefits. Although security can often be added through software installation, native security features are often superior.

Network

A network operating system (NOS) is any OS that has native networking capabilities and was designed with networking as a means of communication and data transfer. Most OSs today are NOSs, but not all OSs are network capable. There are still many situations where a stand-alone or isolated OS is preferred for function, stability, and security.

Server

A server is a form of NOS. It is a resource host that offers data, information, or communication functions to other requesting systems. Servers are the computer systems on a network that support and maintain the network. They require greater physical and logical security protections than workstations because they represent a concentration of assets, value, and capabilities. End users should be restricted from physically accessing servers, and they should have no reason to log on directly to a server—they should interact with servers over a network through their workstations.

Workstation

A workstation is another form of NOS. A workstation is a resource consumer. A workstation is typically where an end user will log in and then from the workstation reach out across the network to servers to access resources and retrieve data. Workstations are also called clients, terminals, or end-user computers. Access to workstations should be restricted to authorized personnel. One method to accomplish this is to use strong authentication, such as two-factor authentication with a smartcard and a password or PIN.

Appliance

An appliance OS is yet another variation of NOS. An appliance NOS is a stripped-down or single-purpose OS that is typically found on network devices, such as firewalls, routers, switches, wireless access points, and VPN gateways. An appliance NOS is designed around a primary set of functions or tasks and usually does not support any other capabilities.

Kiosk

A kiosk OS is either a stand-alone OS or a variation of NOS. A kiosk OS is designed for end-user use and access. The end user might be an employee of an organization or anyone from the general public. A kiosk OS is locked down so that only preauthorized software products and functions are enabled. A kiosk OS will revert to the locked-down mode each time it is rebooted, and some will even revert if they experience a flaw, crash, error, or any attempt to perform an unauthorized command or launch an unapproved executable. The goal and purpose of a kiosk OS is to provide a robust information service to a user while preventing accidental or intentional misuse of the system. A kiosk OS is often deployed in a public location and thus must be configured to implement security effectively for that situation.

Mobile OS

A mobile OS is yet another form of NOS. A mobile OS is designed to operate on a portable device. Although a portable device can be defined as any device with a battery, a mobile OS is designed for portable devices for which traditional NOSs are too large or too resource demanding. Many portable devices have less CPU processing capacity, RAM memory availability, and storage capabilities than full computer notebooks or workstation systems. A mobile OS is designed to optimize performance of limited resources. It may be designed around a few specific mobile device features, such as phone calls, text messages, and taking photographs, or may be designed to support a wide range of user-installed software or applications (“apps”). Some mobile OSs are extremely limited in their functions, whereas others can provide capabilities nearly equivalent to those of a traditional workstation NOS.

Patch management

See the Chapter 2 section “Patch management tools” for the security implications of patch management.

Disabling unnecessary ports and services

A key element in securing a system is to reduce its attack surface. The attack surface is the area that is exposed to untrusted networks or entities and that is vulnerable to attack. If a system is hosting numerous services and protocols, its attack surface is larger than that of a system running only essential services and protocols. Figure 3.11 shows sample output from nmap, a port scanner commonly used to enumerate the open ports (and thus the exposed attack surface) of a system; the image is from the tool’s source site, https://nmap.org/images/nmap-401-demoscan-798x774.gif.


FIGURE 3.11 Output from nmap showing open ports on scanned target systems

It’s tempting to install every service, component, application, and protocol available to you on every computer system you deploy. However, this temptation is in direct violation of the security best practice that each system should host only those services and protocols that are absolutely essential to its mission-critical operations.

Any unused application service ports should be specifically blocked or disabled. Interface or port disabling is a physical or configuration option that renders a hardware connection port unusable, whereas port blocking is a service provided by a software or hardware firewall that blocks or drops packets directed toward disallowed logical (TCP/UDP) ports.

Layer 4, the Transport layer, uses ports to indicate the protocol that is to receive the payload/content of the TCP or UDP packet. Ports also assist in supporting multiple simultaneous connections or sessions over a single IP address. There are 65,536 potential ports (0 through 65,535). See www.iana.org/assignments/port-numbers for a current complete list of ports and protocol associations.
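
As a small illustration of ports in practice, the following Python sketch checks whether a handful of well-known TCP ports accept connections on a host; a real port scanner such as nmap (Figure 3.11) does the same job far more thoroughly. Only test systems you are authorized to assess; the address shown is a placeholder from a documentation range.

  import socket

  def port_open(host, port, timeout=1.0):
      """Return True if a TCP connection to host:port succeeds."""
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  host = "192.0.2.10"                              # placeholder; substitute a system you manage
  for port in (21, 22, 25, 80, 110, 143, 443, 3389):
      print(port, "open" if port_open(host, port) else "closed/filtered")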

There are a number of common protocol default ports you may want to know for exam purposes. Table 3.1 is a brief list of ports to consider memorizing. All listed ports are default ports, and custom configurations can use alternate port selections.

TABLE 3.1 Common protocols and default ports

Protocol/service Port(s) Notes
FTP TCP ports 20 (data) and 21 (control)
SSH TCP port 22 Protocols encrypted by SSH, such as SFTP, SCP, SExec, and slogin, also use TCP port 22.
SMTP TCP port 25
DNS TCP and UDP port 53 TCP port 53 is used for zone transfers, whereas UDP port 53 is used for queries.
HTTP TCP port 80 or TCP port 8080
Post Office Protocol v3 (POP3) TCP port 110
NetBIOS Session service TCP port 139
Internet Message Access Protocol v4 (IMAP4) TCP port 143
HTTPS TCP port 443 (TCP port 80 in some configurations of TLS)
Remote Desktop Protocol (RDP) TCP port 3389

The real issue is that software isn’t trusted. Software (services, applications, components, and protocols) is written by people, and therefore, in all likelihood, it isn’t perfect. But even if software lacked bugs, errors, oversights, mistakes, and so on, it would still represent a security risk. Software that is working as expected can often be exploited by a malicious entity. Therefore, every instance of software deployed onto a computer system represents a collection of additional vulnerability points that may be exposed to external, untrusted, and possibly malicious entities.

From this perspective, you should understand that all nonessential software elements should be removed from a system before it’s deployed on a network, especially if that network has Internet connectivity. But how do you know what is essential and what isn’t? Here is a basic methodology:

  1. Plan the purpose of the system.
  2. Identify the services, applications, and protocols needed to support that purpose. Make sure these are installed on the system.
  3. Identify the services, applications, and protocols that are already present on the system. Remove all that aren’t needed.

Often, you won’t know if a specific service that appears on a system by default is needed. Thus, a trial-and-error test is required. If software elements aren’t clearly essential, disable them one by one and test the capabilities of the system. If the system performs as you expect, the software probably isn’t needed. If the system doesn’t perform as expected, then the software needs to be re-enabled. This process is known as application and system hardening.
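
One practical way to see what is already present is to list the processes that are listening on TCP ports, as in the minimal Python sketch below. It assumes the third-party psutil package is installed (pip install psutil) and usually needs elevated privileges to resolve every socket to a process name.

  import psutil

  for conn in psutil.net_connections(kind="tcp"):
      if conn.status == psutil.CONN_LISTEN:
          proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
          print(f"{conn.laddr.ip}:{conn.laddr.port}  {proc}")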

You may discover that some services and protocols offer features and capabilities that aren’t necessary to the essential functions of your system. If so, find a way to disable or restrict those characteristics. This may include restricting ports or reconfiguring services through a management console.

The essential services on a system are usually easy to identify—they generally have recognizable names that correspond to the function of the server. However, you must determine which services are essential on your specific system. Services that are essential on a web server may not be essential on a file server or an email server. Some examples of possible essential services are as follows:

  • File sharing
  • Email
  • Web
  • File Transfer Protocol (FTP)
  • Telnet
  • SSH
  • Remote access
  • Network News Transfer Protocol (NNTP)
  • Domain Name Service (DNS)
  • Dynamic Host Configuration Protocol (DHCP)

Nonessential services are more difficult to identify. Just because a service doesn’t have the same name as an essential function of your server doesn’t mean it isn’t used by the underlying OS or as a support service. It’s extremely important to test and verify whether any service is being depended on by an essential service. However, several services are common candidates for nonessential services that you may want to locate and disable first (assuming you follow the testing method described earlier). These may include the following:

  • NetBIOS
  • Unix RPC
  • Network File System (NFS)
  • X services
  • R services
  • Trivial File Transfer Protocol (TFTP)
  • NetMeeting
  • Instant messaging
  • Remote-control software
  • Simple Network Management Protocol (SNMP)

Least functionality

One rule of thumb to adopt when designing and implementing security is that of least functionality. If you always select and install the solution with the least functionality or without any unnecessary additional capabilities and features, you will likely have a more secure result than opting for any solution with more options than necessary. This is another perspective on minimizing your attack surface. Rather than removing and blocking components that are unneeded or unwanted, select hardware and software systems that have minimal additional capabilities beyond what is strictly needed for the business function or task.

Secure configurations

All company systems should be operating within expected parameters and compliant with a defined baseline of secure configuration. Any system that is determined to be out of baseline should be removed from the production network in order to investigate the cause. If the deviation was due to a malicious event, then investigate and respond. If the deviation was due to normal work-related actions and activities, it may be necessary to update the baseline and/or implement more restrictive system modification policies, such as whitelisting or using static systems. A static system is an environment in which users cannot make changes or the few changes users can implement are only temporary and are discarded once the user logs out.
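
A minimal Python sketch of detecting this kind of deviation is shown below: compare a system's reported settings against the approved baseline and flag anything out of compliance for investigation. The setting names and values are illustrative only.

  approved_baseline = {
      "firewall_enabled": True,
      "guest_account_disabled": True,
      "smbv1_enabled": False,
      "password_min_length": 14,
  }

  current_config = {
      "firewall_enabled": True,
      "guest_account_disabled": False,             # drift: someone re-enabled the account
      "smbv1_enabled": False,
      "password_min_length": 8,                    # drift: weaker than the baseline
  }

  for setting, expected in approved_baseline.items():
      actual = current_config.get(setting)
      if actual != expected:
          print(f"OUT OF BASELINE: {setting} = {actual!r} (expected {expected!r})")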

For more discussion of secure configurations and establishing a baseline, see the section “Secure baseline” later in this chapter.

Trusted operating system

Trusted OS is an access-control feature that requires a specific OS to be present in order to gain access to a resource. By limiting access to only those systems that are known to implement specific security features, resource owners can be assured that violations of a resource’s security will be less likely.

Another formal definition of trusted OS is any OS that has security features in compliance with government and/or military security standards that enable the enforcement of multilevel security policies (that is, enforcing mandatory access control using classification labels on subjects and objects). Examples of trusted OSs include Trusted Solaris, Apple macOS, HP-UX, and AIX. Many other OSs can be hardened toward trusted-OS status, such as Windows and Windows Server, or Linux with SELinux.

Application whitelisting/blacklisting

Applications can be specifically allowed or disallowed; see the Chapter 2 section “Application whitelisting” for details.

Disable default accounts/passwords

If you don’t need it, don’t keep it. This may be an optional mantra for you in real life, but in terms of security, it’s the first of two—the second is, lock down what’s left. Getting rid of unnecessary services and accounts is just the beginning of proper security and environment hardening. Leaving behind default or unused accounts gives hackers and attackers more potential points of compromise.

Always change default passwords to something unique and complex. All default passwords are available online (Figure 3.12). If available, always turn on password protection and set a complex password. Don’t assume physical access control is good enough or that logical remote access isn’t possible. If you discover you have a device with a hard-coded default password (meaning the original from-the-vendor password cannot be disabled or replaced), then remove that device from the network and replace it with a device that offers real security configuration options.


FIGURE 3.12 CIRT.net’s default password database
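
The following minimal Python sketch illustrates auditing your own equipment for factory-default web logins over HTTP basic authentication. The management address and credential pairs are hypothetical, the third-party requests package is assumed to be installed, and you should only test devices you are authorized to assess.

  import requests

  device = "http://192.0.2.50/"                    # hypothetical management interface
  default_creds = [("admin", "admin"), ("admin", "password"), ("root", "root")]

  for user, pwd in default_creds:
      resp = requests.get(device, auth=(user, pwd), timeout=5)
      if resp.status_code == 200:
          print(f"Default credentials still active: {user}/{pwd} - change them now")
          break
  else:
      print("None of the tested default credentials were accepted.")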

Peripherals

System hardware and peripherals require physical access controls and protections in order to maintain the logical security imposed by software. Without access control over the facility and physical environment, otherwise secured systems can be quickly compromised. Physical protections are used to protect against physical attacks, whereas logical protections protect only against logical attacks. Without adequate layers of protection, security is nonexistent. This section discusses several issues related to peripherals that often lead to security compromise because they’re overlooked or deemed non-serious threats.

Wireless keyboards

A wireless keyboard connects to a computer over Bluetooth, WiFi, or some other radio wave–based communication. In most cases, wireless keyboards do not use encryption, so any listening device within range may be able to eavesdrop on the characters typed into a wireless keyboard. Do not use wireless keyboards for sensitive systems or when you are typing sensitive, confidential, or valuable information.

Wireless mice

Like wireless keyboards, wireless mice connect to a computer over Bluetooth, WiFi, or some other radio wave–based communication. In most cases, wireless mice do not use encryption, so any listening device within range may be able to eavesdrop on the movements and activities of a wireless mouse. Do not use a wireless mouse on sensitive systems.

Displays

A display can show sensitive, confidential, or personal information. It is important to orient your system display so it is hard to see unless you are sitting or standing in your work position. You do not want others in the general area or who just walk by to be able to see the contents of your screen. It may be worthwhile to install screen filters, also called privacy filters. As discussed under “Screen filters” later in the chapter, these devices reduce the range of visibility of a screen down to a maximum of 30 degrees from perpendicular.

Some displays are wireless, and signals from the core computer to the display itself are unlikely to be encrypted. Anyone in the area may be able to eavesdrop on your display communications in order to see what is being shown through your monitor.

WiFi-enabled MicroSD cards

Many portable devices that do not have native wireless support can be enhanced using a microSD or SD card with WiFi. These memory storage cards include their own WiFi adapter, which can in turn provide wireless connectivity to the mobile device. Although this does not turn the mobile device into a fully networked system, it often allows files saved to the storage expansion card to be uploaded or backed up to a cloud service or a network share. These WiFi–enabled SD and microSD cards may support wireless encryption, but not necessarily.

When these cards are used in a device that already has networking capabilities, the device can serve as an additional attack path for hackers—especially if it automatically connects to plain-text WiFi networks. In general, avoid the use of any WiFi–enabled storage expansion card. Although it’s less convenient, manually moving the SD card to a full computer system in order to upload files is faster and likely more secure.

Printers/MFDs

Many printers are network-attached printers, meaning they can be connected directly to the network without being attached to a computer. A network-attached printer serves as its own print server. It may connect to the network via a cable or wirelessly. Some devices are more than just printers and may include fax, scanning, and other functions. These are known as multifunction devices (MFDs). Any device connected to a network is a potential breach point, whether because of flaws in the device’s firmware or because the device does not encrypt its communications.

External storage devices

Universal Serial Bus (USB) devices are ubiquitous these days. Nearly every worker who uses a computer possesses a USB storage device, and most portable devices (such as phones, music players, and still or video cameras) connect via USB. However, this convenience comes at a cost to security. There are at least four main issues:

  • Just about any USB device can be used to either bring malicious code into or leak sensitive, confidential, and/or proprietary data out of an otherwise secure environment. Even a device not specifically designed as a storage device, such as a mobile phone, might still serve that function.
  • Most computers have the ability to boot off USB. This could allow a user to boot a computer to an alternate OS (such as Kali, a live Linux distribution used for hacking and/or penetration testing), which fully bypasses any security the native OS imposes.
  • Some more recent malware uses the AutoRun feature of Windows to spread from infected USB storage devices to the host computer. Such malware will succeed if security measures such as updating, patching, and hardening systems with up-to-date antivirus protection aren’t in place.
  • USB auto-typers have the ability to brute-force logins with thousands of attempts per second.

To protect against USB threats, the only real option is to fully disallow the use of USB devices and lock down all USB ports. Some organizations not only disable USB functionality but also physically fill USB ports with silicon, epoxy, or a similar material, thus ensuring that USB devices can’t be used. As businesses move to USB keyboards and mice, the epoxy trick is less effective: users can simply remove their input devices and attach a USB drive either directly or through a hub. Instead, more businesses are disabling USB boot in the (then-locked) BIOS and disabling USB AutoRun in the OS. Otherwise, allowing the use of USB typically leaves your organization’s systems vulnerable to threats.
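
On Windows systems, one widely used hardening step is to disable the USB mass-storage driver so that storage devices cannot mount even though keyboards and mice keep working. The minimal Python sketch below sets the USBSTOR service's Start value to 4 (disabled); it is Windows-only, needs administrative rights, and in practice this setting is usually pushed by Group Policy rather than a script.

  import winreg

  key_path = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
  with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
      winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)   # 4 = disabled, 3 = load on demand
  print("USB mass-storage driver disabled for subsequently attached devices.")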

Digital cameras

Digital cameras can be a security risk when they are used to take photographs of sensitive documents or information on a computer screen. A digital camera can serve as a storage device when connected via a cable to a computer system, thus allowing the user to transmit confidential files to outside the organization or to bring in malware from outside. A digital camera might support wireless communications natively or have that feature added through the use of a WiFi–enabled storage card. Most digital cameras include GPS chips in order to geotag photos and videos created on the device. This can reveal sensitive or secret locations, as well as the time and date of a photo being taken or a video being recorded.

Exam Essentials

Understand hardware security. System hardware and peripherals require physical access controls and protections in order to maintain the logical security imposed by software. Without access control over the facility and physical environment, otherwise secured systems can be quickly compromised.

Know about FDE and SED. Full-disk encryption (FDE) or whole-disk encryption is often used to provide protection for an OS, its installed applications, and all locally stored data. FDE encrypts all of the data on a storage device with a single master symmetric encryption key. Another option is self-encrypting drives (SEDs).

Understand TPM. The trusted platform module (TPM) is both a specification for a cryptoprocessor and the chip in a mainboard supporting this function. A TPM chip is used to store and process cryptographic keys for a hardware-supported and -implemented hard drive encryption system.

Define HSM. A hardware security module (HSM) is a special-purpose cryptoprocessor used for a wide range of potential functions. The functions of an HSM can include accelerated cryptography operations, managing and storing encryption keys, offloading digital signature verification, and improving authentication.

Understand UEFI and BIOS. Basic input/output system (BIOS) is the basic low-end firmware or software embedded in the hardware’s electrically erasable programmable read-only memory (EEPROM). BIOS identifies and initiates the basic system hardware components. A replacement or improvement to BIOS is Unified Extensible Firmware Interface (UEFI). UEFI provides support for all of the same functions as BIOS with many improvements, such as support for larger hard drives (especially for booting), faster boot times, enhanced security features, and even the ability to use a mouse when making system changes.

Comprehend OS security. There is no fully secure OS. All of them have security flaws. Every OS needs some level of security management imposed on it.

Understand EMI shielding. Shielding is used to restrict or control interference from electromagnetic or radio frequency disturbances. This can include using shielded cabling or cabling that is resistant to interference, or running cables through shielded conduits.

Realize the importance of disabling unnecessary ports and services. If a system is hosting numerous services and protocols, its attack surface is larger than that of a system running only essential services and protocols.

Understand least functionality. One rule of thumb to adopt when designing and implementing security is that of least functionality. If you always select and install the solution with the least functionality or without any unnecessary additional capabilities and features, then you will likely have a more secure result than opting for any solution with more options than necessary.

Know the concept of trusted OS. Trusted OS is an access-control feature that requires a specific OS to be present in order to gain access to a resource. Another formal definition of trusted OS is any OS that has security features in compliance with government and/or military security standards that enable the enforcement of multilevel security policies.

Understand peripheral security. System hardware and peripherals require physical access controls and protections in order to maintain the logical security imposed by software.

3.4 Explain the importance of secure staging deployment concepts.

Secure staging is the controlled process of configuration and deployment for new systems, whether hardware or software. The goal of a secure staging process is to ensure compliance with the organization’s security policies and configuration baselines while minimizing risks associated with exposing an insecure system to a private network or even the Internet. This section discusses several elements that may be part of a secure staging system.

Sandboxing

Sandboxing is a means of quarantine or isolation. It’s implemented to keep new or otherwise suspicious software from being able to cause harm to production systems. It can be used against applications or entire OSs.

Sandboxing is simple to implement in a virtualization context because you can isolate a virtual machine with a few mouse clicks or commands. Once the suspect code is deemed safe, you can release it to integrate with the environment. If it’s found to be malicious, unstable, or otherwise unwanted, it can quickly be removed from the environment with little difficulty.
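
As one possible approach, the minimal Python sketch below runs a suspicious program inside a throwaway container with no network access and a read-only filesystem. It assumes Docker is installed and that /samples/suspect.bin is a hypothetical file under analysis; a real malware-analysis sandbox would add far more instrumentation and isolation than this.

  import subprocess

  result = subprocess.run(
      [
          "docker", "run", "--rm",
          "--network", "none",                     # no network egress for the suspect code
          "--read-only",                           # container filesystem cannot be modified
          "-v", "/samples/suspect.bin:/sandbox/suspect.bin:ro",
          "alpine", "/sandbox/suspect.bin",
      ],
      capture_output=True, text=True, timeout=60,
  )
  print(result.stdout, result.stderr)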

Environment

The organization’s IT environment must be configured and segmented to properly implement staging. This often requires at least four main network divisions: development, test, staging, and production.

Development

The development network is where new software code is being crafted by on-staff programmers and developers. For some organizations, this might also be where custom-built hardware is being created. This network is to be fully isolated from all other network divisions in order to prevent ingress of malware or egress of unfinished products.

Test

The test network is where in-development products or potentially final versions of products are subjected to a battery of evaluations, stress tests, vulnerability scans, and even attack attempts in order to determine whether the product is stable, secure, and ready to be deployed into the production network.

Staging

The staging network is where new equipment, whether developed in-house or obtained from external vendors, is configured to be in compliance with the company’s security policy and configuration baseline. Once a system has been staged, it can be moved to the test network for evaluation. After the system has passed evaluation, it can be deployed into the production network.

Production

The production network is where the everyday business tasks and work processes are accomplished. It should only be operating on equipment and systems that have been properly staged and tested. The production network should be managed so that it is not exposed to the risk and unreliability of new systems and untested solutions. The goal of the production network is to support the confidentiality, integrity, and availability (among other goals) of the organization’s data and business tasks.

Secure baseline

The security posture is the degree to which an organization is capable of withstanding an attack. An organization may have good or poor posture. The documented plan and its implementation are the parts of the security posture often known as the secure baseline or security baseline. These include detailed policies and procedures, implementation in the IT infrastructure and the facility, and proper training of all personnel.

One mechanism often used to help maintain a hardened system is to use a security baseline, a standardized minimal level of security that all systems in an organization must comply with. This lowest common denominator establishes a firm and reliable security structure on which to build trust and assurance. The security baseline is defined by the organization’s security policy. Creating or defining a baseline requires that you examine three key areas of an environment: the OS, the network, and the applications. It may include requirements of specific hardware components, OS versions, service packs, patches and upgrades, configuration settings, add-on applications, service settings, and more.

The basic procedure for establishing a security baseline or hardening a system is as follows:

  1. Remove unneeded components, such as protocols, applications, services, and hardware (including device drivers).
  2. Update and patch the OS and all installed applications, services, and protocols.
  3. Configure all installed software as securely as possible.
  4. Impose restrictions on information distribution for the system, its active services, and its hosted resources.

Documentation is an important aspect of establishing a security baseline and implementing security in an environment. Every aspect of a system, from design to implementation, tuning, and securing, should be documented. A lack of sufficient documentation is often the primary cause of difficulty in locking down or securing a server. Without proper documentation, all the details about the OS, hardware configuration, applications, services, updates, patches, configuration, and so on must be discovered before security improvements can be implemented. With proper documentation, a security professional can quickly add to the existing security without having to reexamine the entire environment.

A security template is a set of security settings that can be mechanically applied to a computer to establish a specific configuration. Security templates can be used to establish baselines or bring a system up to compliance with a security policy. They can be custom-designed for workstations and server functions or purposes. Security templates are a generic concept; however, specific security templates can be applied via Windows’ Group Policy system.

Security templates can be built by hand or by extracting settings from a preconfigured master. Once a security template exists, you can use it to configure a new or existing machine (by applying the template to the target either manually or through a Group Policy object [GPO]), or to compare the current configuration to the desired configuration. This latter process is known as security template analysis and often results in a report detailing the gaps in compliance.

Operating system hardening is the process of reducing vulnerabilities, managing risk, and improving the security provided by or for an OS. This is usually accomplished by taking advantage of an OS’s native security features and supplementing them with add-on applications such as firewalls, antivirus software, and malicious-code scanners. There are several online sources of security hardening and configuration checklists, such as the NIST Security Configuration Checklists Program site at http://csrc.nist.gov/groups/SNS/checklists/ (Figure 3.13), which can be used as a starting point for crafting an organization-specific set of SOPs.


FIGURE 3.13 The NIST Security Configuration Checklists Program site

Hardening an OS includes protecting the system from both intentional directed attacks and unintentional or accidental damage. This can include implementing security countermeasures as well as fault-tolerant solutions for both hardware and software. Some of the actions that are often included in a system-hardening procedure include the following:

  • Deploy the latest version of the OS.
  • Apply any service packs or updates to the OS.
  • Update the versions of all device drivers.
  • Verify that all remote-management or remote-connectivity solutions that are active are secure. Avoid FTP, Telnet, and other clear-text or weak authentication protocols.
  • Disable all unnecessary services, protocols, and applications.
  • Remove or securely configure Simple Network Management Protocol (SNMP).
  • Synchronize time zones and clocks across the network with an Internet time server.
  • Configure event-viewer log settings to maximize capture and storage of audit events.
  • Rename default accounts, such as administrator, guest, and admin.
  • Enforce strong passwords on all accounts.
  • Force password changes on a periodic basis.
  • Restrict access to administrative groups and accounts.
  • Hide the last-logged-on user’s account name.
  • Enforce account lockout.
  • Configure a legal warning message that’s displayed at logon.
  • If file sharing is used, force the use of secure sharing protocols or use virtual private networks (VPNs).
  • Use a security and vulnerability scanner against the system.
  • Scan for open ports.
  • Disable Internet Control Message Protocol (ICMP) functionality on publicly accessible systems.
  • Consider disabling NetBIOS.
  • Configure auditing.
  • Configure backups.

The filesystem in use on a system greatly affects the security offered by that system. A filesystem that incorporates security, such as access control and auditing, is a more secure choice than a filesystem without incorporated security. One great example of a secured filesystem is the Microsoft New Technology File System (NTFS). It offers file- and folder-level access permissions and auditing capabilities. Examples of filesystems that don’t include security are file allocation table (FAT) and FAT32.

Integrity measurement

The primary means of integrity measurement or assessment is the use of a hash. See the Chapter 6 section, “Hashing,” for details.
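
For illustration, the short Python sketch below computes the SHA-256 digest of a file and compares it to a previously recorded known-good value; the reference value and file path shown are placeholders.

  import hashlib

  KNOWN_GOOD = "replace-with-recorded-known-good-sha256-digest"

  def sha256_of(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(65536), b""):
              h.update(chunk)
      return h.hexdigest()

  digest = sha256_of("/boot/vmlinuz")              # example target file
  print("OK" if digest == KNOWN_GOOD else f"MISMATCH: {digest}")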

Exam Essentials

Understand secure staging. Secure staging is the controlled process of configuration and deployment for new systems, whether hardware or software. The goal of a secure staging process is to ensure compliance with the organization’s security policies and configuration baselines while minimizing risks associated with exposing an insecure system to a private network or even the Internet.

Comprehend sandboxing. Sandboxing is a means of quarantine or isolation. It’s implemented to restrict new or otherwise suspicious software from being able to cause harm to production systems.

Understand a secure IT environment. The organization’s IT environment must be configured and segmented to properly implement staging. This often requires at least four main network divisions: development, test, staging, and production.

Realize the importance of a security baseline. A plan and implementation are parts of the security posture often known as the secure baseline or security baseline. These include detailed policies and procedures, implementation in the IT infrastructure and the facility, and proper training of all personnel.

3.5 Explain the security implications of embedded systems.

An embedded system is a computer implemented as part of a larger system. The embedded system is typically designed around a limited set of specific functions in relation to the larger product of which it’s a component. It may consist of the same components found in a typical computer system, or it may be a microcontroller (an integrated chip with on-board memory and peripheral ports). Examples of embedded systems include network-attached printers, smart TVs, HVAC controls, smart appliances, smart thermostats, embedded smart systems in vehicles, and medical devices.

Security concerns regarding embedded systems include the fact that most are designed with a focus on minimizing cost and extraneous features. This often leads to a lack of security and difficulty with upgrades or patches. Because an embedded system is in control of a mechanism in the physical world, a security breach could cause harm to people and property.

A static environment is a set of conditions, events, and surroundings that don’t change. In theory, once understood, a static environment doesn’t offer new or surprising elements. In technology, static environments are applications, OSs, hardware sets, or networks that are configured for a specific need, capability, or function, and then set to remain unaltered. A static IT environment is any system that is intended to remain unchanged by users and administrators. The goal is to prevent or at least reduce the possibility of a user implementing change that could result in reduced security or functional operation.

However, although the term static is used, there are no truly static systems. There is always the chance that a hardware failure, a hardware configuration change, a software bug, a software-setting change, or an exploit may alter the environment, resulting in undesired operating parameters or actual security intrusions. Many embedded systems are implemented as static solutions. It is important to understand the various ways to protect the stability and security of embedded and/or static systems. Static environments, embedded systems, and other limited or single-purpose computing environments need security management. Although they may not have as broad an attack surface and aren’t exposed to as many risks as a general-purpose computer, they still require proper security governance.

Manual Updates Manual updates should be used in static environments to ensure that only tested and authorized changes are implemented. Using an automated update system would allow untested updates to introduce unknown security reductions.

Firmware Version Control Similar to manual software updates, strict control over firmware versions in a static environment is important. Firmware updates should be implemented on a manual basis, only after testing and review. Oversight of firmware version control should focus on maintaining a stable operating platform while minimizing exposure to downtime or compromise.

Wrappers A wrapper is something used to enclose or contain something else. Wrappers are well known in the security community in relation to Trojan horse malware. A wrapper of this sort is used to combine a benign host with a malicious payload.

Wrappers are also used as encapsulation solutions. Some static environments may be configured to reject updates, changes, or software installations unless they’re introduced through a controlled channel. That controlled channel can be a specific wrapper. The wrapper may include integrity and authentication features to ensure that only intended and authorized updates are applied to the system.

SCADA/ICS

Supervisory control and data acquisition (SCADA) is a type of industrial control system (ICS). An ICS is a form of computer-management device that controls industrial processes and machines. SCADA is used across many industries, including manufacturing, fabrication, electricity generation and distribution, water distribution, sewage processing, and oil refining. A SCADA system can operate as a stand-alone device, be networked together with other SCADA systems, or be networked with traditional IT systems.

Most SCADA systems are designed with minimal human interfaces. Often, they use mechanical buttons and knobs or simple LCD screen interfaces (similar to what you might have on a business printer or a GPS navigation device). However, networked SCADA devices may have more complex remote-control software interfaces.

In theory, the static design of SCADA and the minimal human interface should make the system fairly resistant to compromise or modification. Thus, little security was built into SCADA devices, especially in the past. But there have been several well-known compromises of SCADA; for example, Stuxnet delivered the first-ever rootkit to a SCADA system located in a nuclear facility. Many SCADA vendors have started implementing security improvements into their solutions in order to prevent or at least reduce future compromises.

Smart devices/IoT

Smart devices are a range of mobile devices that offer the user a plethora of customization options, typically through installing apps, and may take advantage of on-device or in-the-cloud artificial intelligence (AI) processing. The products that can be labeled “smart devices” are constantly expanding and already include smartphones, tablets, music players, home assistants, extreme sport cameras, and fitness trackers.

Android is a mobile device OS based on Linux; it was originally developed by Android Inc., which Google acquired in 2005. In 2008, the first devices hosting Android were made available to the public. The Android source code is made open source through the Apache license, but most devices also include proprietary software. Although it’s mostly intended for use on phones and tablets, Android is being used on a wide range of devices, including televisions, game consoles, digital cameras, microwaves, watches, e-readers, cordless phones, and ski goggles.

The use of Android in phones and tablets isn’t a good example of a static environment. These devices allow for a wide range of user customization: you can install both Google Play Store apps and apps from unknown external sources (such as Amazon’s App Store), and many devices support the replacement of the default version of Android with a customized or alternate version. However, when Android is used on other devices, it can be implemented as something closer to a static system.

Whether static or not, Android has numerous security vulnerabilities. These include being exposed to malicious apps, running scripts from malicious websites, and allowing insecure data transmissions. Android devices can often be rooted (breaking their security and access limitations) in order to grant the user full root-level access to the device’s low-level configuration settings. Rooting increases a device’s security risk, because malicious or flawed code can then obtain root-level privileges.

Improvements are made to Android security as new updates are released. Users can adjust numerous configuration settings to reduce vulnerabilities and risks. Also, users may be able to install apps that add additional security features to the platform.

iOS is the mobile device OS from Apple that is available on the iPhone, iPad, iPod, and Apple TV. iOS isn’t licensed for use on any non-Apple hardware. Thus, Apple is in full control of the features and capabilities of iOS. However, iOS is also a poor example of a static environment, because users can install any of over one million apps from the Apple App Store. Also, it’s often possible to jailbreak iOS (breaking Apple’s security and access restrictions), allowing users to install apps from third parties and gain greater control over low-level settings. Jailbreaking an iOS device reduces its security and exposes the device to potential compromise. Users can adjust device settings to increase an iOS device’s security and install many apps that can add security features.

The Internet of Things (IoT) is a new subcategory, or even a new class, of devices that are Internet-connected in order to provide automation, remote control, or AI processing to traditional or new appliances or devices in a home or office setting. Some IoT devices are revolutionary adaptations of functions or operations we have been performing locally and manually for decades, and we would not want to be without them again. Other IoT devices are nothing more than expensive, gimmicky gadgets that, after the first few moments of use, are forgotten about and/or discarded. The security issues related to IoT center on access and encryption. All too often, an IoT device is not designed with security as a core concept; at best, security is an afterthought. This has already resulted in numerous home and office network security breaches. Additionally, once an attacker has remote access to or through an IoT device, they may be able to access other devices on the compromised network. When electing to install IoT equipment, evaluate the security of the device as well as the security reputation of the vendor. If the new device does not have the ability to meet or accept your existing security baseline, don’t compromise your security just for a flashy gadget.

One reasonable compromise is to deploy a distinct network for the IoT equipment, kept separate and isolated from the primary network. This configuration is often known as the three dumb routers (Figure 3.14) (see https://www.grc.com/sn/sn-545.pdf or https://www.pcper.com/reviews/General-Tech/Steve-Gibsons-Three-Router-Solution-IOT-Insecurity).

Diagram shows one network of IoT devices and a separate secure network comprising a workstation, printer, and webcam, each connected to a border device, followed by a router and the Internet.

FIGURE 3.14 Three-dumb-router network layout

While we often associate smart devices and IoT with home or personal use, they are also a concern to every organization. This is partly due to the use of mobile devices by employees within the company’s facilities and even on the organizational network. These concerns are often addressed in a BYOD, COPE, or CYOD policy (see the Chapter 2 section “Deployment models” for more information). Another concern for network professionals is that many IoT and networked automation devices are being added to the business environment. This includes environmental controls, such as HVAC management, air quality control, debris and smoke detection, lighting controls, door automation, personnel and asset tracking, and consumable inventory management and auto-reordering (such as coffee, snacks, printer toner, paper, and other office supplies). Thus, both smart devices and IoT devices are potential elements of a modern business network that need appropriate security management and oversight. For some additional reading on the importance of proper security management of smart devices and IoT equipment, please see “NIST Initiatives in IoT” at https://www.nist.gov/itl/applied-cybersecurity/nist-initiatives-iot.

Wearable technology

Wearable technology is an offshoot of smart devices and IoT devices, specifically designed to be worn by an individual. The most common examples of wearable technology are smart watches and fitness trackers. There is an astounding number of options available in these categories, with a wide range of features and security capabilities. When selecting a wearable device, consider the security implications. Is the data being collected in a cloud service that is secured for private use, or is it made publicly available? What other purposes will the collected data be used for? Is the communication between the device and the collection service encrypted? And can you delete your data and profile from the service completely if you stop using the device?

Home automation

A very popular element of smart devices and IoT is home automation devices. These include smart thermostats, ovens, refrigerators, garage doors, doorbells, door locks, and security cameras. These IoT devices may offer automation or scheduling of various mundane, tedious, or inconvenient activities, such as managing the household heating and cooling systems, adding groceries to an online shopping list, automatically opening or unlocking doors as you approach, recording visitors to your home, and cooking dinner so it is ready just as you arrive home from work.

The precautions related to home automation devices are the same as for smart devices, IoT, and wearables. Always consider the security implications, evaluate the included or lacking security features, consider implementing the devices in an isolated network away from your other computer equipment, and only use solutions that provide robust authentication and encryption.

HVAC

HVAC (heating, ventilation, and air conditioning) can be controlled by an embedded solution (which might be also known as a smart device or an IoT device). See the previous discussion on smart devices for security issues, and see the later section “HVAC” under “Environmental controls.” Physical security controls protect against physical attacks, while logical and technical controls only protect against logical and technical attacks.

SoC

A System on a Chip (SoC) is an integrated circuit (IC) or chip that has all of the elements of a computer integrated into a single chip. This often includes the main CPU, memory, a GPU, WiFi, wired networking, peripheral interfaces (such as USB), and power management. In most cases the only item missing from a SoC compared to a full computer is bulk storage. Often a bulk storage device must be attached or connected to the SoC to store its programs and other files, since the SoC usually contains only enough memory to retain its own firmware or OS.

The security risks of an SoC include the fact that the firmware or OS of an SoC is often minimal, which leaves little room for most security features. An SoC may be able to filter input (such as by length or to escape metacharacters), reject unsigned code, provide basic firewall filtering, use communication encryption, and offer secure authentication. But these features are not universally available on all SoC products. A few devices that use a SoC include the mini-computer Raspberry Pi, fitness trackers, smart watches, and some smartphones.

RTOS

A real-time operating system (RTOS) is designed to process or handle data as it arrives on the system with minimal latency or delay. An RTOS is usually stored on read-only memory (ROM) and is designed to operate in a hard real-time or soft real-time condition. A hard real-time solution is for mission-critical operations where delay must be eliminated or minimized for safety, such as autonomous cars. A soft real-time solution is used when some level of modest delay is acceptable under typical or normal conditions, as it is for most consumer electronics, such as the delay between a digitizing pen and a graphics program on a computer.

RTOSs can be event-driven or time-sharing. An event-driven RTOS will switch between operations or tasks based on preassigned priorities. A time-sharing RTOS will switch between operations or tasks based on clock interrupts or specific time intervals. An RTOS is often implemented when scheduling or timing is the most critical part of the task to be performed.

A security concern with RTOSs is that these systems are often very focused and single-purpose, leaving little room for security. They often use custom or proprietary code, which may include unknown bugs or flaws that attackers could discover. Malware might also overload or distract an RTOS with bogus data sets or process requests. When deploying or using RTOSs, use isolation and communication monitoring to minimize abuses.

Printers/MFDs

See the earlier section “Printers/MFDs” under “Peripherals” for an introduction to their security implications. A printer or multifunction device (MFD) can be considered an embedded device if it has integrated network capabilities that allow it to operate as an independent network node rather than a direct-attached dependent device. Thus, network-attached printers and other similar devices pose an increased security risk because they often house full-fledged computers within their chassis. Network security managers need to include all such devices in their security management strategy in order to prevent these devices from being the targets of attack, used to house malware or attack tools, or grant outsiders remote-control access.

Camera systems

See the earlier section “Digital cameras” under “Peripherals” for an introduction to their security implications. Some camera systems include an SoC or embedded components that grant them network capabilities. Cameras that operate as network nodes can be remotely controlled; can provide automation functions; and may be able to perform various specialty functions, such as time-lapse recording, tracking, facial recognition, or infrared or color-filtered recording. Such devices may be targeted by attackers, be infected by malware, or be remotely controlled by hackers. Network security managers need to include all such devices in their security management strategy to prevent these compromises.

Special purpose

The concept of embedded systems is rapidly expanding as computer control, remote access, remote management, automation, monitoring, and AI processing are being applied to professional and personal events, activities, and tasks. In addition to the concepts mentioned previously in this section, there are a handful of additional special purpose embedded systems you should be familiar with. These include mainframes, game consoles, medical devices, vehicles, and aircraft/UAVs.

Mainframes are high-end computer systems used to perform highly complex calculations and provide bulk data processing. Older mainframes may be considered static environments because they were often designed around a single task or supported a single mission-critical application. These configurations didn’t offer significant flexibility, but they did provide for high stability and long-term operation. Many mainframes were able to operate for decades.

Modern mainframes are much more flexible and are often used to provide high-speed computation power in support of numerous virtual machines. Each virtual machine can be used to host a unique OS and in turn support a wide range of applications. If a modern mainframe is implemented to provide fixed or static support of one OS or application, it may be considered a static environment.

Game consoles, whether home systems or portable systems, are potentially examples of static systems and embedded systems. The OS of a game console is generally fixed and is changed only when the vendor releases a system upgrade. Such upgrades are often a mixture of OS, application, and firmware improvements. Although game console capabilities are generally focused on playing games and media, modern consoles may offer support for a range of cultivated and third-party applications. The more flexible and open-ended the app support, the less of a static system it becomes.

Medical devices

A growing number of medical devices have been integrated with IoT technology to make them remotely accessible for monitoring and management. This may be a great innovation for medical treatment, but it also has security risks. All computer systems are subject to attack and abuse. All computer systems have faults and failings that can be discovered and abused by an attacker. Although most medical device vendors strive to provide robust and secure products, it is not possible to consider and test for every possibility of attack, access, or abuse. There have already been several instances of medical devices being remotely controlled, disabled, accessed, or attacked with a DoS. When using any medical device, consider whether remote access, wired or wireless, is essential to the medical care it is providing. If it is not, consider disabling the networking features of the medical device. Although the breach of a personal computer or smartphone may be inconvenient and/or embarrassing, the breach of a medical device can be life-threatening.

Vehicles

In-vehicle computing systems can include the components used to monitor engine performance and optimize braking, steering, and suspension, but can also include in-dash elements related to driving, environment controls, and entertainment. Early in-vehicle systems were static environments with little or no ability to be adjusted or changed, especially by the owner/driver. Modern in-vehicle systems may offer a wider range of capabilities, including linking a mobile device or running custom apps. In-vehicle computing systems may or may not have sufficient security mechanisms. Even if the system is only providing information, such as engine performance, entertainment, and navigation, it is important to consider what, if any, security features are included in the solution. Does it connect to cloud services? Are communications encrypted? How strong is the authentication? Is it easily accessible to unauthorized third parties? If the in-vehicle computing system is controlling the vehicle, which might be called automated driving or self-driving, it is even more important that security be a major design element of the system. Otherwise, a vehicle can be converted from a convenient means of transportation into a box of death.

Aircraft/UAV

Automated pilot systems have been part of aircraft for decades. In most of the airplanes that you have flown on, a human pilot was likely only in full control of the craft during takeoff and landing, and not always even then. For most of the flight, the autopilot system was likely in control of the aircraft. The military, law enforcement, and hobbyists have been using unmanned aerial vehicles (UAVs) for years, but usually under remote control. Now, with flight automation systems, UAVs can take off, fly to a destination, and land fully autonomously. There are even many retail businesses experimenting with, and in some countries implementing, UAV delivery of food and/or other packages.

The security of automated aircraft and UAVs is a concern for all of us. Are these systems secure against malware infection, signal disruption, remote control takeover, AI failure, and remote code execution? Does the UAV have authenticated connections to the authorized control system? Are the UAV’s communications encrypted? What will the aircraft do in the event that all contact with the control system is blocked through DoS or signal jamming? A compromised UAV could result in the loss of your pizza, a damaged product, a few broken shingles, or severe bodily injury.

Exam Essentials

Understand embedded systems. An embedded system is a computer implemented as part of a larger system. The embedded system is typically designed around a limited set of specific functions in relation to the larger product of which it’s a component.

Comprehend static environments. Static environments are applications, OSs, hardware sets, or networks that are configured for a specific need, capability, or function, and then set to remain unaltered. Examples include SCADA, embedded systems, Android, iOS, mainframes, game consoles, and in-vehicle computing systems.

Understand static environment security methods. Static environments, embedded systems, and other limited or single-purpose computing environments need security management. These techniques may include network segmentation, security layers, application firewalls, manual updates, firmware version control, wrappers, and control redundancy and diversity.

Know SCADA and ICS. Supervisory control and data acquisition (SCADA) is a type of industrial control system (ICS). An ICS is a form of computer-management device that controls industrial processes and machines. SCADA is used across many industries.

Understand smart devices. A smart device is a mobile device that offers the user a plethora of customization options, typically through installing apps, and may take advantage of on-device or in-the-cloud artificial intelligence (AI) processing.

Comprehend IoT. The Internet of Things (IoT) is a new subcategory or maybe even a new class of devices connected to the Internet in order to provide automation, remote control, or AI processing to traditional or new appliances or devices in a home or office setting.

Understand SoC. A System on a Chip (SoC) is an integrated circuit (IC) or chip that has all of the elements of a computer integrated into a single chip.

Know RTOS. A real-time operating system (RTOS) is designed to process or handle data as it arrives onto the system with minimal latency or delay. An RTOS is usually stored on read-only memory (ROM) and is designed to operate in a hard real-time or soft real-time condition.

3.6 Summarize secure application development and deployment concepts.

Secure software starts with a secure development and deployment system. Only if a software product was designed, crafted, and distributed in a secure fashion is it possible for the final product to provide reliable and trustable security. This section discusses several aspects of secure software deployment and development.

Development life-cycle models

A development life-cycle model is a methodical ordering of the tasks of creating a new product or revising an existing one. A formal software development life-cycle (SDLC) model helps to ensure a more reliable and stable product by establishing a standardized process by which new ideas become actual software. Software development has only existed as long as computers—less than 100 years. Modern software is only 30 or 40 years old. The earliest forms of software development management were forged in the 1970s and 1980s, but it wasn’t until 1991, when the Software Engineering Institute established the Capability Maturity Model (CMM), that software management concepts were formally established and widely adopted.

Waterfall vs. Agile

Two of the dominant SDLC concepts are the waterfall model and the Agile model. The waterfall model (Figure 3.15) consists of seven stages, or steps. The original idea was that project development would proceed through these steps in order from first to last, with the restriction that returning to an earlier phase was not allowed. The name waterfall is derived from the concept of steps of rocks in a waterfall, where water falls onto each step to then move on down to the next, and the water is unable to flow back up. A more recent revision of the waterfall model allows for some movement back into earlier phases (hence the up arrows in the image) in order to address oversights or mistakes discovered in later phases.

Chart shows seven steps: system requirements, software requirements, preliminary design, detailed design, code and debug, testing, and operations and maintenance.

FIGURE 3.15 The waterfall model

The primary criticism of the waterfall model is that it allows a return only to the immediately preceding phase. This prevents returning to the earliest phases to correct concept and design issues that are not discovered until later in the development process. Thus, it forces the completion of a product that is known to be flawed or that does not fulfill its goals.

To address this concern, the modified waterfall model was crafted. This version adds a verification and validation process to each phase so that as a phase is completed, a review process ensures that each phase’s purposes, functions, and goals were successfully and correctly fulfilled.

However, this model modification was not widely adopted before another variation was crafted, known as the spiral model. The spiral model (Figure 3.16) is designed around repeating the earlier phases multiple times, known as iterations, in order to ensure that each element and aspect of each phase is fulfilled in the final product.

Diagram shows repeated spiral paths divided into four quadrants representing the steps: determine objectives and alternatives; evaluate alternatives; develop and verify next-level product; and plan next phases.

FIGURE 3.16 The spiral model

In the diagram, each spiral traverses the first four initial phases. At the completion of an iteration, a prototype of the solution (P1, P2, …) is developed and tested. Based on the prototype, the spiral path is repeated. Multiple iterations are completed until the prototype fulfills all or most of the requirements of the initial phase or design goals and functions, at which point the final prototype becomes the final product.

One of the most modern SDLC models is Agile, based around adaptive development, where focusing on a working product and fulfilling customer needs is prioritized over rigid adherence to a process, use of specific tools, and detailed documentation. Agile focuses on an adaptive approach to development; it supports early delivery, continuous improvement, and flexible and prompt response to changes.

In 2001, 17 Agile development pioneers crafted the Manifesto for Agile Software Development (http://agilemanifesto.org), which states the core philosophy as follows:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Furthermore, the Agile Manifesto prescribed 12 principles that guide the development philosophy:

“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.

Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.

Business people and developers must work together daily throughout the project.

Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Working software is the primary measure of progress.

Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.

Continuous attention to technical excellence and good design enhances agility.

Simplicity—the art of maximizing the amount of work not done—is essential.

The best architectures, requirements, and designs emerge from self-organizing teams.

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.”

Agile is quickly becoming the dominant SDLC model, adopted by programming groups both large and small.

Secure DevOps

DevOps, or development and operations, is a new IT movement in which many elements and functions of IT management are being integrated into a single automated solution. DevOps typically consists of IT development, operations, security, and quality assurance. Secure DevOps is a variant of DevOps that prioritizes security in the collection of tasks performed under this new umbrella concept. DevOps is adopted by organizations crafting software solutions for internal use as well as products destined for public distribution. DevOps has many goals, including reducing time to market, improving quality, maintaining reliability, and implementing security into the design and development process.

Transforming DevOps into secure DevOps, or at least prioritizing security within DevOps, often includes several components, as discussed in the next sections.

Security automation

Security automation is important to DevOps in order to ensure that issues and vulnerabilities are discovered earlier so they can be properly addressed before product release. This will include automating vulnerability scans and code attacks against preproduction code, using fuzz testing techniques to discover logic flaws or the lack of input sanitization, and using code scanners that evaluate software for flaws and input management mistakes.
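
As a rough illustration of how such automation can be wired into a build pipeline, the following Python sketch runs a static security scanner over the source tree and fails the build if any findings are reported. The choice of the open-source Bandit scanner and the source path are illustrative assumptions, not requirements of DevOps itself.

```python
# Hypothetical CI helper: run a static security scanner over the source tree
# and fail the build when findings are reported. Bandit and the "src" path
# are example choices.
import subprocess
import sys

def run_security_scan(source_dir: str = "src") -> int:
    """Run Bandit recursively over source_dir and return its exit code."""
    result = subprocess.run(
        ["bandit", "-r", source_dir],  # -r scans the directory recursively
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode  # nonzero when issues are found or the scan fails

if __name__ == "__main__":
    # Propagating the exit code causes the CI job to fail, blocking the release.
    sys.exit(run_security_scan())
```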

Continuous integration

In order for security to be successful in any development endeavor, it must be integrated and maintained at the beginning and throughout the development process. Secure DevOps must adopt a continuous integration approach to ensure that automated tools, automated testing, and manual injection of security elements are included throughout the process of product development. Programmers need to adopt secure coding practices, security experts need to train programmers, and security auditors need to monitor code throughout development for proper security elements.

Baselining

Every development project needs a baseline. A baseline is a minimum level of function, response, and security that must be met in order for the project to proceed toward release. Any software product that does not meet or exceed the baseline is rejected and must return to development to be improved. A baseline is a form of quality control. Without quality control, neither the needs of the customer nor the reputation of the vendor are respected.

Immutable systems

An immutable system is a server or software product that, once configured and deployed, is never altered in place. Instead, when a new version is needed or a change is necessary, a revised version is crafted and the new system is then deployed to replace the old one. The purpose of immutable systems is to prevent minor tweaks and changes to individual systems from accumulating into a tangle of configuration differences. In many organizations today, a single server is no longer sufficient to support a resource and its users, so numerous computers, often in a clustered arrangement, are deployed. Immutable systems ensure that each member of the server group is exactly the same; when something needs to change, it is first developed and tested in a staging area, and when finalized the new version fully replaces the previous one.

Infrastructure as code

Infrastructure as code is a change in how hardware management is perceived and handled. Instead of seeing hardware configuration as a manual, hands-on, one-device-at-a-time administration chore, it is viewed as just another collection of elements to be managed in the same way that software and code are managed under DevOps. This alteration in hardware management approach has allowed many organizations to streamline infrastructure changes so that they occur more easily, more rapidly, more securely and safely, and more reliably than before. Infrastructure as code often requires the implementation of hardware management software, such as Puppet. Such solutions bring version control, code review, and continuous integration to the portion of an IT infrastructure that could not previously be managed in this manner.

Version control and change management

Change in a secure environment can introduce loopholes, overlaps, missing objects, and oversights that can lead to new vulnerabilities. The only way to maintain security in the face of change is to manage change systematically. Change management usually involves extensive planning, testing, logging, auditing, and monitoring of activities related to security controls and mechanisms. The records of changes to an environment are then used to identify agents of change, whether those agents are objects, subjects, programs, communication pathways, or the network itself.

The goal of change management is to ensure that no change leads to reduced or compromised security. Change management is also responsible for making it possible to roll back any change to a previous secured state. Change management can be implemented on any system, no matter what its level of security. Ultimately, change management improves the security of an environment by protecting implemented security from unintentional, tangential, or affected diminishments. Although an important goal of change management is to prevent unwanted reductions in security, its primary purpose is to make all changes subject to detailed documentation and auditing and thus able to be reviewed and scrutinized by management.

Change management should be used to oversee alterations to every aspect of a system, including hardware configuration and OS and application software. Change management should be included in design, development, testing, evaluation, implementation, distribution, evolution, growth, ongoing operation, and modification. It requires a detailed inventory of every component and configuration. It also requires the collection and maintenance of complete documentation for every system component, from hardware to software and from configuration settings to security features.

The change-control process of configuration, version control, or change management has several goals or requirements:

  • Implement changes in a monitored and orderly manner. Changes are always controlled.
  • A formalized testing process is included to verify that a change produces expected results.
  • All changes can be reversed.
  • Users are informed of changes before they occur to prevent loss of productivity.
  • The effects of changes are systematically analyzed.
  • The negative impact of changes on capabilities, functionality, and performance is minimized.

One example of a change-management process is a parallel run, which is a type of new system deployment testing where the new system and the old system are run in parallel. Each major or significant user process is performed on each system simultaneously to ensure that the new system supports all required business functionality that the old system supported or provided.

Change is the antithesis of security. In fact, change often results in reduced security. Therefore, secure environments often implement a system of change management to minimize the negative impact of change on security. Change documentation is one aspect of a change-management system: it’s the process of writing out the details of changes to be made to a system, a computer, software, a network, and so on before they’re implemented. Then, the change documentation is transformed into a procedural document that is followed to the letter to implement the desired changes. After the changes are implemented, the system is tested to see whether security was negatively affected. If security has decreased, the change documentation can be used to guide the reversal of the changes to restore the system to a previous state in which stronger security was enforced.

Provisioning and deprovisioning

Provisioning is preallocation. When several new server instances must be deployed to increase resource availability, the IT manager must provision hardware resources to allocate to those new instances. Provisioning is used to ensure that sufficient resources are available to support and maintain a system, software, or solution. It helps prevent the deployment of a new element without sufficient resources to support it.

Deprovisioning can be focused on two elements. It can focus on streamlining and fine-tuning resource allocation to existing systems for a more efficient distribution of resources. This can result in freeing sufficient resources to launch additional instances of a server. Deprovisioning can also focus on the release of resources from a server being decommissioned so that those resources return to the availability pool for use by other future servers.

Secure coding techniques

Secure coding concepts are those efforts designed to implement security into software as it’s being developed. Security should be designed into the concept of a new solution, but programmers still need to code the security elements properly and avoid common pitfalls and mistakes while coding.

Proper error handling

When errors occur, the program should fall back to a secure state. This is generally known as fail-secure design. However, the programmer must code this into the application in order for a true fail-secure response to take place. This should include error and exception handling. When a process, a procedure, or an input results in or causes an error, the system should revert to a more secure state. This could include resetting back to a previous state of operation, rebooting back into a secured state, or recycling the connection state to revert to secured communications. Errors should also provide minimal information to visitors and users, especially outside/external visitors and users. All detailed error messages should be stored in an access-restricted log file for the programmers and administrators. Whenever an exception is encountered, the offending input or operation should be rejected and the fail-secure response triggered.
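
A minimal Python sketch of this fail-secure pattern follows; the handle() helper, the connection object, and the log location are illustrative assumptions rather than parts of any specific framework.

```python
# Fail-secure error handling sketch. handle() and reset_to_secure_state() are
# hypothetical helpers; the log file path is an example.
import logging

# Detailed errors go to an access-restricted log, never to the end user.
logging.basicConfig(filename="app_errors.log", level=logging.ERROR)

def process_request(connection, data):
    try:
        return handle(data)  # normal processing (assumed helper)
    except Exception as exc:
        logging.error("Request failed: %r", exc)   # full detail for admins only
        connection.reset_to_secure_state()         # revert to a known-good state
        return "An error occurred. Please try again later."  # minimal detail to the user
```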

Proper input validation

Input validation is an aspect of defensive programming intended to ward off a wide range of input-focused attacks, such as buffer overflows and fuzzing. Input validation checks each and every input received before it’s allowed to be processed. The check could be a length, a character type, a language type, a domain, or even a timing check to prevent unknown, unwanted, or unexpected content from making it to the core program.
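
The following short Python sketch shows the idea for a single field; the length limit and allowed character set are illustrative assumptions chosen for a username field.

```python
# Allow-list input validation sketch; the limit and character set are examples.
import re

MAX_LEN = 32
ALLOWED = re.compile(r"^[A-Za-z0-9_.-]+$")  # letters, digits, and a few safe symbols

def validate_username(value: str) -> str:
    """Reject input that is empty, too long, or contains disallowed characters."""
    if not value or len(value) > MAX_LEN:
        raise ValueError("input length out of range")
    if not ALLOWED.fullmatch(value):
        raise ValueError("input contains disallowed characters")
    return value
```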

Normalization

Normalization is a database programming and management technique used to reduce redundancy. The goal of normalization is to prevent redundant data, which is a waste of space and can also increase processing load. A normalized database is more efficient and can allow for faster data retrieval operations. Removing duplicate and redundant data ensures that sensitive data will exist in only one table or database (the original source), rather than being repeated within many others. This can reduce the difficulty of securing sensitive data by allowing database security managers to implement access control over the original data source instead of having to lock down every duplicate copy.
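
As a small illustration using Python’s standard sqlite3 module, the sketch below keeps a sensitive value (a customer email address) in exactly one table and has other tables reference it by key; the schema and values are illustrative assumptions.

```python
# Normalization sketch: the email address is stored once, in the customers
# table, and orders reference it by key rather than repeating it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL              -- sensitive data exists only here
);
CREATE TABLE orders (
    id          INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(id),
    item        TEXT NOT NULL        -- no duplicated customer data in this table
);
""")
conn.execute("INSERT INTO customers (name, email) VALUES (?, ?)",
             ("Alice", "alice@example.com"))
conn.execute("INSERT INTO orders (customer_id, item) VALUES (?, ?)", (1, "widget"))
# Access control now only needs to cover the customers table to protect the email.
```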

Stored procedures

A stored procedure is a subroutine or software module that can be called on or accessed by applications interacting with a relational database management system (RDBMS). Stored procedures may be used for data validation during input, managing access control, assessing the logic of data, and more. Stored procedures can make some database applications more efficient, consistent, and secure.

Code signing

Code signing is the act of attaching a digital signature to a software program in order to confirm who it is from and that it has not been changed. See the Chapter 6 section “Digital signatures.”
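
The sketch below uses the third-party Python cryptography package to show the underlying sign-and-verify mechanics; real code signing also involves certificates and protected key storage, and the file name here is only an example.

```python
# Simplified code-signing sketch using the third-party "cryptography" package.
# Real code signing relies on certificates and trusted key storage; this shows
# only the sign/verify mechanics.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

program_bytes = open("release.bin", "rb").read()   # file name is an example

signature = private_key.sign(
    program_bytes,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# verify() raises InvalidSignature if the program was altered after signing.
public_key.verify(
    signature,
    program_bytes,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```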

Encryption

Encryption should be used to protect data in storage and data in transit. Programmers should adopt trusted and reliable encryption systems into their applications. See Chapter 6 for the broad discussion of encryption and cryptography.
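
A minimal sketch of protecting data at rest with an established library rather than a home-grown cipher, assuming the third-party Python cryptography package is available; key management is deliberately omitted.

```python
# Minimal data-at-rest encryption sketch with the "cryptography" package's
# Fernet recipe (AES-based authenticated encryption). Key storage is omitted
# and would need a proper key-management solution in practice.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be stored and protected separately
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"patient record 12345")
plaintext = fernet.decrypt(ciphertext)
assert plaintext == b"patient record 12345"
```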

Obfuscation/camouflage

Obfuscation or camouflage is the coding practice of crafting code specifically to be difficult to decipher by other programmers. These techniques might be adopted in order to prevent unauthorized third parties from understanding proprietary solutions. These techniques can also be adopted by malicious programmers to hide the true intentions and purposes of software.

Code reuse/dead code

Code reuse is the inclusion of preexisting code in a new program. Code reuse can be a way to quicken the development process by adopting and reusing existing code. However, care should be taken not to violate copyright or intellectual property restrictions when reusing code. It is also important to fully understand the reused code to ensure that backdoors or other exploitable flaws are not introduced to the new product through the recycled code.

Dead code is any section of software that is executed but whose output or result is not used by any other process. Effectively the execution of dead code is a waste of time and resources. Programmers should strive to minimize and eliminate dead code from their products in order to improve efficiency and minimize the potential for exploitable errors or flaws. Dead code is sometimes used as part of obfuscation.

Server-side vs. client-side execution and validation

Server-side validation is suited for protecting a system against input submitted by a malicious user. Most client-side executions of scripts or mobile applets can be easily bypassed by a skilled web hacker. Thus, any client-side filtering is of little defense if the submission to the server bypasses those protections. A web hacker can edit JavaScript or HTML, modify forms, alter URLs, and much more. Thus, never assume any client-side filtering was effective—all input should be reassessed on the server side before processing. Server-side validation should include a check for input length, a filter for known scriptable or malicious content (such as SQL commands or script calls), and a filter for metacharacters (see Chapter 1, “Threats, Attacks, and Vulnerabilities”).

Client-side validation is also important, but its focus is on providing better responses or feedback to the typical user. Client-side validation can be used to indicate whether input meets certain requirements, such as length, value, content, and so on. For example, if an email address is requested, a client-side validation check can confirm that it uses supported characters and is of the typical construction username@FQDN.

Although all the validation can take place on the server side, it is often a more complex process and introduces delays to the interaction. A combination of server-side and client-side validation allows more efficient interaction while maintaining reasonable security defenses.
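
The sketch below, assuming the third-party Flask framework, re-validates a submitted email address on the server regardless of whatever checks ran in the browser; the route, field name, and pattern are illustrative assumptions.

```python
# Server-side re-validation sketch using Flask; the endpoint and pattern are
# examples. Client-side checks may improve the user experience, but only the
# server-side check is trusted.
import re
from flask import Flask, abort, request

app = Flask(__name__)
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # rough example pattern

@app.route("/signup", methods=["POST"])
def signup():
    email = request.form.get("email", "")
    if len(email) > 254 or not EMAIL_RE.fullmatch(email):
        abort(400)  # reject bad input even if the browser claimed it was valid
    return "ok"
```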

Memory management

Programmers should include code in their software that focuses on proper memory management. Software should preallocate memory buffers but also limit the input sent to those buffers. Including input limit checks is part of secure coding practices, but it may be seen as busy work during the initial steps of software creation. Some programmers focus on getting new code to function with the intention of returning to the code in the future to improve security and efficiency. Unfortunately, if the functional coding efforts take longer than expected, it can result in the security revisions being minimized or skipped. Always be sure to use secure coding practices, such as proper memory management, to prevent a range of common software exploitations, such as buffer overflow attacks.
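
Although buffer management is most visible in lower-level languages, the underlying rule of bounding input applies everywhere. The Python sketch below caps how much data is accepted from a client socket; the 64 KB limit and chunk size are illustrative assumptions.

```python
# Bounded-input sketch: never accept more data than a fixed maximum, no matter
# what the client sends. The limit and chunk size are example values.
MAX_REQUEST_BYTES = 64 * 1024

def read_request(sock):
    data = b""
    while len(data) < MAX_REQUEST_BYTES:
        chunk = sock.recv(4096)
        if not chunk:                         # client finished sending
            return data
        data += chunk
        if len(data) > MAX_REQUEST_BYTES:
            raise ValueError("request exceeds maximum allowed size")
    return data
```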

Use of third-party libraries and SDKs

Third-party software libraries and software development kits (SDKs) are often essential tools for a programmer. Using preexisting code can allow programmers to focus on their custom code and logic. SDKs provide guidance on software crafting as well as solutions, such as special APIs, subroutines, or stored procedures, which can simplify the creation of software for complex execution environments.

However, when you are using third-party software libraries, the precrafted code may include flaws, backdoors, or other exploitable issues that are unknown and yet undiscovered. Attempt to vet any third-party code before relying on it. Similarly, an SDK might not have security and efficiency as a top priority, so evaluate the features and capabilities provided via the SDK for compliance with your own programming and security standards.

Data exposure

When software does not adequately protect the data it processes, it may result in unauthorized data exposure. Programmers need to include authorization, authentication, and encryption schemes in their products in order to protect against data leakage, loss, and exposure.

Code quality and testing

No amount of network hardening, auditing, or user training can compensate for bad programming. Solid application security is essential to the long-term survival of any organization. Application security begins with secure coding and design, which is then maintained over the life of the software through testing and patching. Code quality needs to be assessed prior to execution. Software testing needs to be performed prior to distribution.

Before deploying a new application into the production environment, you should install it into a lab or pilot environment. Once testing is complete, the deployment procedure should include the crafting of an installation how-to, which must include not only the steps for deployment but also the baseline of initial configuration. This can be a written baseline or a template file that can be applied. The purpose of an application configuration baseline is to ensure compliance with policy and reduce human error. Baselines can be reapplied periodically or validated against changing work conditions as needed.

Static code analyzers

Static code analyzers review the raw source code of a product without executing the program. This debugging effort is designed to locate flaws in the software code before the program is run on a target or customer system. Static code analysis is often a first step in software quality and security testing.

Dynamic analysis (e.g., fuzzing)

Dynamic analysis is the testing and evaluation of software code while the program is executing. The executing code is then subjected to a range of inputs to evaluate its behavior and responses. One method of performing dynamic analysis is known as fuzzing.

Fuzzing is a software-testing technique that generates inputs for targeted programs. The goal of fuzz testing is to discover input sets that cause errors, failures, and crashes, or to discover other unknown defects in the targeted program. Basically, a fuzz tester brute-forces inputs within given parameters, generating far more variations than a normal user or environment would ever produce. The information discovered by a fuzzing tool can be used to improve software as well as to develop exploits for it.

Once a fuzz-testing tool discovers a constructed input that causes an abnormal behavior in the target application, the input and response are recorded into a log. The log of interesting inputs is reviewed by a security professional or a hacker. With the right skills and tools, the results of fuzzing can be transformed into a patch that fixes discovered defects or an exploit that takes advantage of them.
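
The toy Python loop below captures the basic idea: generate random inputs, feed them to a target, and log anything that crashes it. The parse_record() target is hypothetical, and real fuzzers are far more sophisticated about input generation and coverage.

```python
# Toy fuzzing sketch: throw random byte strings at a target function and record
# inputs that raise unexpected exceptions. parse_record() is hypothetical.
import random

def fuzz(target, iterations=10_000, max_len=256):
    crashes = []
    for _ in range(iterations):
        size = random.randrange(max_len)
        data = bytes(random.randrange(256) for _ in range(size))
        try:
            target(data)
        except Exception as exc:              # unexpected failure: keep the evidence
            crashes.append((data, repr(exc)))
    return crashes

# Example use against a hypothetical parser:
# for data, error in fuzz(parse_record):
#     print(error, data[:40])
```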

Stress testing

Stress testing is another variation of dynamic analysis in which a hardware or software product is subjected to various levels of workload in order to evaluate its ability to operate and function under stress. Stress testing can start with a modest level of traffic and then increase to abnormally high levels. The purpose of stress testing is to gain an understanding of how a product will perform, react, or fail in the various circumstances between normal conditions and DoS level traffic or load.
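
A very rough Python sketch of ramping up load against a service follows; the URL and concurrency levels are illustrative assumptions, and a real stress test would also track latency and error details.

```python
# Rough stress-test sketch: issue requests at increasing concurrency levels and
# report how many succeed. The endpoint and levels are example values.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://test-server.example/health"     # hypothetical test endpoint

def hit(url):
    try:
        with urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

for workers in (10, 50, 100, 250):            # modest load ramping toward heavy load
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(hit, [URL] * (workers * 10)))
    print(f"{workers} workers: {sum(results)}/{len(results)} requests succeeded")
```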

Sandboxing

A sandbox is a software implementation of a constrained execution space used to contain an application. Sandboxing (Figure 3.17) is often used to protect the overall computer from a new, unknown, untested application. The sandbox provides the contained application with direct or indirect access to sufficient system resources to execute, but not the ability to make changes to the surrounding environment or storage devices (beyond its own files). Sandboxing is commonly used for software testing and evaluating potential malware, and it is the basis for the concept of virtualization.

Left diagram shows an app without a sandbox having unrestricted access to all user data and all system resources. Right diagram shows a sandboxed app with access only to the user data and system resources within its sandbox.

FIGURE 3.17 Application sandboxing

Model verification

Model verification is a part of the software development process that is often used to ensure that the crafted code remains in compliance with a development process, an architectural model, or design limitations. Model verification can also extend to ensuring that a software solution is able to achieve the desired real-world results by performing operational testing. Model verification can ensure that a product maintains compliance with security baseline requirements during development.

Compiled vs. runtime code

Most applications are written in a high-level language that is more similar to human language, such as English, than to the 1s and 0s that make up machine language. High-level languages are easier for people to learn and use in crafting new software solutions. However, high-level languages must ultimately be converted to machine language in order to execute the intended operations.

If the code is converted to machine language using a compiler crafting an output executable, then the language is described as compiled. The resulting executable file can be run at any time.

If the code remains in its original human-readable form and is converted into machine language only at the moment of execution, the language is a runtime compiled language. Some runtime languages will compile/convert the entire code at once into machine language for execution, whereas others will compile/convert only one line at a time (sometimes known as just-in-time execution or compilation).

Compiled code is harder for an attacker to inject malware into, but it is harder to detect such malware. Runtime code is easier for an attacker to inject malware into, but it is easier to detect such malware.

Exam Essentials

Understand SDLC. A development life-cycle model is a methodical ordering of the tasks of creating a new product or revising an existing one. A formal software development life-cycle (SDLC) model helps ensure a more reliable and stable product by establishing a standard process by which new ideas become actual software.

Know the waterfall model. The waterfall model consists of seven stages or steps. The original idea was that project development would proceed through these steps in order from first to last with the restriction that returning to an earlier phase was not allowed.

Understand Agile. The Agile model is based on adaptive development, where focusing on a working product and fulfilling customer needs is prioritized over rigid adherence to a process, use of specific tools, or detailed documentation. Agile focuses on an adaptive approach to development and supports early delivery and continuous improvement, along with flexible and prompt responses to changes.

Comprehend secure DevOps. DevOps, or development and operations, is a new IT movement in which many elements and functions of IT management are being integrated into a single automated solution. DevOps typically consists of IT development, operations, security, and quality assurance. Secure DevOps is a variant of DevOps that prioritizes security in the collection of tasks performed under this new umbrella concept.

Understand change management. The goal of change management is to ensure that change does not lead to reduced or compromised security. Change in a secure environment can introduce loopholes, overlaps, missing objects, and oversights that can lead to new vulnerabilities. The only way to maintain security in the face of change is to systematically manage change. This usually involves extensive planning, testing, logging, auditing, and monitoring of activities related to security controls and mechanisms.

Know provisioning and deprovisioning. Provisioning is preallocation. Provisioning is used to ensure that sufficient resources are available to support and maintain a system, software, or solution. Deprovisioning can focus on streamlining and fine-tuning resource allocation to existing systems for a more efficient distribution of resources. It can also focus on the release of resources from a server that is being decommissioned so that those resources return to the availability pool for use by other future servers.

Understand secure coding concepts. Secure coding concepts are those efforts designed to implement security into software as it’s being developed. Security should be designed into the concept of a new solution, but programmers still need to code the security elements properly and avoid common pitfalls and mistakes while coding.

Comprehend error handling. When a process, a procedure, or an input causes an error, the system should revert to a more secure state. This could include resetting to a previous state of operation, rebooting back into a secured state, or recycling the connection state to revert to secured communications.

Understand input validation. Input validation checks each and every input received before it’s allowed to be processed. The check could be a length, a character type, a language type, a domain, or even a timing check to prevent unknown, unwanted, or unexpected content from making it to the core program.

Know about normalization. Normalization is a database programming and management technique used to reduce redundancy. The goal of normalization is to prevent redundant data, which is a waste of space and can also increase processing load.

Understand stored procedures. A stored procedure is a subroutine or software module that can be called upon or accessed by applications interacting with an RDBMS.

Know code signing. Code signing is the activity of crafting a digital signature of a software program in order to confirm that it was not changed and who it is from.

Understand obfuscation and camouflage. Obfuscation or camouflage is the coding practice of crafting code specifically to be difficult for other programmers to decipher.

Comprehend code reuse. Code reuse is the inclusion of preexisting code in a new program. Code reuse can be a way to quicken the development process.

Understand dead code. Dead code is any section of software that is executed but the output or result of the execution is not used by any other process. Effectively the execution of dead code is a waste of time and resources.

Know server-side validation. Server-side validation is suited for protecting a system against input submitted by a malicious user. It should include a check for input length, a filter for known scriptable or malicious content (such as SQL commands or script calls), and a metacharacter filter.

Understand client-side validation. Client-side validation focuses on providing better responses or feedback to the typical user. It can be used to indicate whether input meets certain requirements, such as length, value, content, and so on.

Know memory management. Software should include proper memory management, such as preallocating memory buffers but also limiting the input sent to those buffers. Including input limit checks is part of secure coding practices.

Understand third-party libraries and SDKs. Third-party software libraries and software development kits (SDKs) are often essential tools for a programmer. Using preexisting code can allow programmers to focus on their custom code and logic.

Comprehend code quality and testing. Application security begins with secure coding and design, which is then maintained over the life of the software through testing and patching.

Understand static code analyzers. Static code analyzers review the raw source code of a product without the program being executed. This debugging effort focuses on locating flaws in the software code before the program is run on a target or customer system.

Know dynamic analysis. Dynamic analysis is the testing and evaluation of software code while the program is executing. The executing code is then subjected to a range of inputs to evaluate its behavior and responses.

Understand fuzzing. Fuzzing is a software-testing technique that generates inputs for targeted programs. The goal of fuzz-testing is to discover input sets that cause errors, failures, and crashes, or to discover other defects in the targeted program.

Know about stress testing. Stress testing is another variation of dynamic analysis in which a hardware or software product is subjected to various levels of workload in order to evaluate its ability to operate and function under stress.

Understand sandboxing. A sandbox is a software implementation of a constrained execution space used to contain an application. Sandboxing is often used to protect the overall computer from a new, unknown, untested application.

Comprehend model verification. Model verification is often part of software development processes; it is used to ensure that the crafted code remains in compliance with a development process, architectural model, or design limitations.

Understand compiled code. If the code is converted to machine language using a compiler crafting an output executable, then the language is a compiled language.

Know about runtime code. If the code remains in its original human-readable form and then gets converted into machine language only at the moment of execution, the language is a runtime compiled language.

3.7 Summarize cloud and virtualization concepts.

Virtualization technology is used to host one or more OSs in the memory of a single host computer. This mechanism allows practically any OS to operate on any hardware. It also lets multiple OSs work simultaneously on the same hardware. Cloud computing is often remote virtualization. Please review the earlier section “Virtualization” in this chapter.

Cloud computing and virtualization, especially when you are virtualizing in the cloud, have serious risks associated with them. Once sensitive, confidential, or proprietary data leaves the confines of the organization, it also leaves the protections imposed by the organizational security policy and resultant infrastructure. Cloud services and their personnel might not adhere to the same security standards as your organization. It is important to investigate the security of a cloud service before adopting it.

With the increased burden of industry regulations, such as the Sarbanes-Oxley Act of 2002 (SOX), the Health Insurance Portability and Accountability Act (HIPAA), and the Payment Card Industry Data Security Standard (PCI DSS), it is essential to ensure that a cloud service provides sufficient protections to maintain compliance. Additionally, cloud service providers may not maintain your data in close proximity to your primary physical location. In fact, they may distribute your data across numerous locations, some of which may reside outside your country of origin. It may be necessary to add to a cloud service contract a limitation to house your data only within specific logical and geographic boundaries.

It is important to investigate the encryption solutions employed by a cloud service. Do you send your data to them pre-encrypted, or is it encrypted only after reaching the cloud? Where are the encryption keys stored? Is there segregation between your data and that belonging to other cloud users? An encryption mistake can reveal your secrets to the world or render your information unrecoverable.
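
One way to address the first of those questions is to encrypt data on-premise before it is ever sent to the provider. A minimal sketch using the third-party cryptography package follows; the upload_to_cloud() call is a hypothetical placeholder, not a real provider API.

from cryptography.fernet import Fernet

key = Fernet.generate_key()            # keep the key on-premise (e.g., in an HSM)
cipher = Fernet(key)

plaintext = b"quarterly payroll export"
ciphertext = cipher.encrypt(plaintext)

# upload_to_cloud("backups/payroll.bin", ciphertext)   # hypothetical provider call

# After retrieving the object later, only the key holder can recover the data:
assert cipher.decrypt(ciphertext) == plaintext

Because the key never leaves the organization, a disclosure on the provider's side exposes only ciphertext; losing the key, however, renders the stored data unrecoverable, so key backup and escrow matter just as much.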

What is the method and speed of recovery or restoration from the cloud? If you have system failures locally, how do you get your environment back to normal? Also consider whether the cloud service has its own disaster-recovery solution. If it experiences a disaster, what is its plan to recover and restore services and access to your cloud resources?

Other issues include the difficulty with which investigations can be conducted, concerns over data destruction, and what happens if the current cloud-computing service goes out of business or is acquired by another organization.

Snapshots are backups of virtual machines. They offer a quick means to recover from errors or poor updates. It’s often easier and faster to make backups of entire virtual systems than of the equivalent native hardware-installed system.

Virtualization doesn’t lessen the security management requirements of an OS. Thus, patch management is still essential. Patching or updating a virtualized OS is the same process as for a traditional hardware-installed OS. Also, don’t forget that you need to keep the virtualization host updated as well.

When you’re using virtualized systems, it’s important to protect the stability of the host. This usually means avoiding using the host for any purpose other than hosting the virtualized elements. If host availability is compromised, the availability and stability of the virtual systems are also compromised.

Elasticity refers to the flexibility of virtualization and cloud solutions to expand or contract based on need. In relation to virtualization, host elasticity means additional hardware hosts can be booted when needed and then used to distribute the workload of the virtualized services over the newly available capacity. As the workload becomes smaller, you can pull virtualized services off unneeded hardware so it can be shut down to conserve electricity and reduce heat.

Virtualized systems should be security tested. The virtualized OSs can be tested in the same manner as hardware installed OSs, such as with vulnerability assessment and penetration testing. However, the virtualization product may introduce additional and unique security concerns, so the testing process needs to be adapted to include those idiosyncrasies.

Hypervisor

The hypervisor, also known as the virtual machine monitor (VMM), is the component of virtualization that creates, manages, and operates the virtual machines. The computer running the hypervisor is known as the host, and the OSs running within hypervisor-supported virtual machines are known as guest OSs.

Type I

A type I hypervisor (Figure 3.18, top) is a native or bare-metal hypervisor. In this configuration, there is no host OS; instead, the hypervisor installs directly onto the hardware where the host OS would normally reside. Type I hypervisors are often used to support server virtualization. This allows for maximization of the hardware resources while eliminating any risks or resource reduction caused by a host OS.

The top diagram shows a bare-metal hypervisor with guest OSs (Ubuntu, Windows, and so on) running directly on the hardware. The bottom diagram shows a hosted configuration in which a virtual machine manager application runs VMs (Windows XP, Ubuntu, Windows 7, and so on) on top of a host OS.

FIGURE 3.18 Hosted vs. bare-metal hypervisor

Type II

A type II hypervisor (Figure 3.18, bottom) is a hosted hypervisor. In this configuration, a standard OS is present on the hardware, and the hypervisor is then installed as another software application. Type II hypervisors are often used in relation to desktop deployments, where the guest OSs offer safe sandbox areas to test new code, allow the execution of legacy applications, support apps from alternate OSs, and provide the user with access to the capabilities of a host OS.

Application cells/containers

Another variation of virtualization focuses on applications instead of entire operating systems. Application cells or application containers (Figure 3.19) are used to virtualize software applications so that they can be ported to almost any OS.

The diagram shows the layers of the system hardware, host OS, and hypervisor beneath either a VM or a container; a VM includes apps, bins/libs, and a guest operating system, whereas a container includes only apps and bins/libs.

FIGURE 3.19 Application containers vs. a hypervisor

VM sprawl avoidance

VM sprawl occurs when an organization deploys numerous virtual machines without an overarching IT management or security plan in place. Although VMs are easy to create and clone, they have the same licensing and security management requirements as an OS installed on physical hardware. Uncontrolled VM creation can quickly lead to a situation where manual oversight cannot keep up with system demand. To prevent or avoid VM sprawl, a policy for developing and deploying VMs must be established and enforced. This should include establishing a library of initial or foundation VM images that are to be used to develop and deploy new services.

VM escape protection

VM escaping occurs when software within a guest OS is able to breach the isolation protection provided by the hypervisor in order to violate the container of other guest OSs or to infiltrate a host OS. VM escaping can be a serious problem, but steps can be implemented to minimize the risk. First, keep highly sensitive systems and data on separate physical machines. An organization should already be concerned about overconsolidation resulting in a single point of failure, so running numerous hardware servers, each supporting a handful of guest OSs, helps reduce this risk. Keeping enough physical servers on hand to maintain physical isolation between highly sensitive guest OSs will further protect against VM escaping. Second, keep all hypervisor software current with vendor-released patches. Third, monitor attack, exposure, and abuse indexes for new threats to your environment.

Cloud storage

Cloud storage is the idea of using storage capacity provided by a cloud vendor as a means to host data files for an organization. Cloud storage can be used as a form of backup or support for online data services. Cloud storage may be cost effective, but it is not always high speed or low latency. Most do not yet consider cloud storage as a replacement for physical backup media solutions, but rather as a supplement for organizational data protection.

Cloud deployment models

Cloud computing is a popular term that refers to performing processing and storage elsewhere, over a network connection, rather than locally. Cloud computing is often thought of as Internet-based computing. Ultimately, processing and storage occur on computers somewhere, but the distinction is that the local operator no longer needs to have that capacity or capability locally. Thus more users can use cloud resources on an on-demand basis. From the end users’ perspective, all the work of computing is performed “in the cloud,” so the complexity is isolated from them.

Cloud computing is a natural extension and evolution of virtualization, the Internet, distributed architecture, and the need for ubiquitous access to data and resources. However, it does have some security and IT policy issues: privacy concerns, regulation compliance difficulties, use of open-/closed-source solutions, adoption of open standards, and whether cloud-based data is actually secured (or even securable). The primary security concerns related to cloud computing are determining and clarifying what security responsibilities belong to the cloud provider and which are the customer’s obligation—this should be detailed in the SLA/contract.

SaaS

Software as a Service (SaaS) is a derivative of Platform as a Service. It provides on-demand online access to specific software applications or suites without the need for local installation (and with no local hardware and OS requirements, in many cases). Software as a Service can be implemented as a subscription service, a pay-as-you-go service, or a free service.

PaaS

Platform as a Service (PaaS) is the concept of providing a computing platform and software solution stack to a virtual or cloud-based service. Essentially, it involves paying for a service that provides all the aspects of a platform (that is, an OS and a complete solution package). A PaaS solution grants the customer the ability to run custom code of their choosing without needing to manage the environment. The primary attraction of Platform as a Service is that you don’t need to purchase and maintain high-end hardware and software locally.

IaaS

Infrastructure as a Service (IaaS) takes the platform as a service model another step forward and provides not just on-demand operating solutions but complete outsourcing options. These can include utility or metered computing services, administrative task automation, dynamic scaling, virtualization services, policy implementation and management services, and managed/filtered Internet connectivity. Ultimately, Infrastructure as a Service allows an enterprise to quickly scale up new software- or data-based services/solutions through cloud systems without having to install massive hardware locally.

Private

A private cloud is a cloud service that is within a corporate network and isolated from the Internet. The private cloud is for internal use only.

A virtual private cloud is a service offered by a public cloud provider that provides an isolated subsection of a public or external cloud for exclusive use by an organization internally. In other words, an organization outsources its private cloud to an external provider.

Public

A public cloud is a cloud service that is accessible to the general public, typically over an Internet connection. Public cloud services may require some form of subscription or pay per use or may be offered for free. Although an organization’s or individual’s data is usually kept separated and isolated from other customers’ data in a public cloud, the overall purpose or use of the cloud is the same for all customers.

Hybrid

A hybrid cloud is a mixture of private and public cloud components. For example, an organization could host a private cloud for exclusive internal use but distribute some resources onto a public cloud for the public, business partners, customers, the external sales force, and so on.

Community

A community cloud is a cloud environment maintained, used, and paid for by a group of users or organizations for their shared benefit, such as collaboration and data exchange. This may allow for some cost savings compared to accessing private or public clouds independently.

On-premise vs. hosted vs. cloud

An on-premise solution is the traditional deployment concept in which an organization owns the hardware, licenses the software, and operates and maintains the systems on its own, usually within its own building.

A cloud solution is a deployment concept where an organization contracts with a third-party cloud provider. The cloud provider owns, operates, and maintains the hardware and software. The organization pays a monthly fee (often based on a per-user multiplier) to use the cloud solution.

A hosted solution is a deployment concept in which the organization licenses the software and then operates and maintains it, while the hosting provider owns, operates, and maintains the hardware that supports the organization’s software.

On-premise solutions do not have ongoing monthly costs, but may be more costly because of initial up-front costs of obtaining hardware and licensing. On-premise solutions offer full customization, provide local control over security, do not require Internet connectivity, and provide local control over updates and changes. However, they also require significant administrative involvement for updates and changes, require local backup and management, and are more challenging to scale.

Cloud solutions often have lower up-front costs, lower maintenance costs, vendor-maintained security, and scalable resources, and they usually have high levels of uptime and availability from anywhere (over the Internet). However, cloud solutions do not offer customer control over OS and software, such as updates and configuration changes; offer minimal customization; and are often inaccessible without Internet connectivity. In addition, the security policies of the cloud provider might not match those of the organization.

VDI/VDE

See the Chapter 2 section “VDI” for a description of the virtual desktop infrastructure (VDI) model. Virtual desktop environment (VDE) is an alternate term for VDI.

Cloud access security broker

A cloud access security broker (CASB) is a security policy enforcement solution that may be installed on-premise or may be cloud-based. The goal of a CASB is to enforce proper security measures and ensure that they are implemented between a cloud solution and a customer organization.

Security as a Service

Security as a Service (SECaaS) is a cloud provider concept in which security is provided to an organization through or by an online entity. The purpose of an SECaaS solution is to reduce the cost and overhead of implementing and managing security locally. SECaaS often implements software-only security components that do not need dedicated on-premise hardware. SECaaS security components can include a wide range of security products, including authentication, authorization, auditing/accounting, antimalware, intrusion detection, penetration testing, and security event management.

Exam Essentials

Understand the risks associated with cloud computing and virtualization. Cloud computing and virtualization, especially when combined, have serious risks associated with them. Once sensitive, confidential, or proprietary data leaves the confines of the organization, it also leaves the protections imposed by the organizational security policy and resultant infrastructure. Cloud services and their personnel might not adhere to the same security standards as your organization.

Comprehend cloud computing. Cloud computing involves performing processing and storage elsewhere, over a network connection, rather than locally. Cloud computing is often thought of as Internet-based computing.

Understand hypervisors. The hypervisor, also known as the virtual machine monitor (VMM), is the component of virtualization that creates, manages, and operates the virtual machines.

Know about the type I hypervisor. A type I hypervisor is a native or bare-metal hypervisor. In this configuration, there is no host OS; instead, the hypervisor installs directly onto the hardware where the host OS would normally reside.

Know about the type II hypervisor. A type II hypervisor is a hosted hypervisor. In this configuration, a standard OS is present on the hardware, and the hypervisor is then installed as another software application.

Understand application cells/containers. Application cells or application containers are used to virtualize software so they can be ported to almost any OS.

Comprehend VM sprawl avoidance. VM sprawl occurs when an organization deploys numerous virtual machines without an overarching IT management or security plan in place. To prevent or avoid VM sprawl, a policy must be established and enforced regarding the procedure for developing and deploying VMs.

Understand VM escaping. VM escaping occurs when software within a guest OS is able to breach the isolation protection provided by the hypervisor in order to violate the container of other guest OSs or to infiltrate a host OS.

Know about cloud storage. Cloud storage is the idea of using storage capacity provided by a cloud vendor as a means to host data files for an organization. Cloud storage can be used as a form of backup or support for online data services.

Understand cloud deployment models. Cloud deployment models include SaaS, PaaS, IaaS, private, public, hybrid, and community.

Define CASB. A cloud access security broker (CASB) is a security policy enforcement solution that may be installed on-premise or may be cloud-based.

Understand SECaaS. Security as a Service (SECaaS) is a cloud provider concept in which security is provided to an organization through or by an online entity.

3.8 Explain how resiliency and automation strategies reduce risk.

Risk reduction, mitigation, and even elimination should be a core strategy for every organization. Security management consists of the efforts to establish, administer, and maintain security throughout the organization. Many elements of security management focus on establishing resiliency as well as automation in order to reduce risk, improve uptime, and minimize expense.

Automation/scripting

Automation is the control of systems on a regular scheduled, periodic, or triggered basis that does not require manual hands-on interaction. Automation is often critical to a resilient security infrastructure. Automation includes concepts such as scheduled backups, archiving of log files, blocking of failed access attempts, and blocking of communications when initial packets contain invalid content or when traffic patterns resemble a port scan. Automation can also be implemented using scripting. Scripting is the crafting of a file of individual lines of commands that are executed one after another. Scripts can be set to launch on a schedule or based on a triggering event.
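
As an example, the following minimal automation sketch (assuming a hypothetical /var/log/app directory) compresses log files older than seven days into an archive folder; a job like this would normally be launched by cron or Task Scheduler rather than by hand.

import gzip
import shutil
import time
from pathlib import Path

LOG_DIR = Path("/var/log/app")              # hypothetical application log location
ARCHIVE_DIR = LOG_DIR / "archive"
MAX_AGE = 7 * 24 * 3600                     # archive anything older than seven days

ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
now = time.time()

for log_file in LOG_DIR.glob("*.log"):
    if now - log_file.stat().st_mtime > MAX_AGE:
        target = ARCHIVE_DIR / (log_file.name + ".gz")
        with log_file.open("rb") as src, gzip.open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)    # compress the old log into the archive
        log_file.unlink()                   # then remove the original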

Automated courses of action

Automated courses of action ensure that a specific series of steps or activities are performed in the correct order each and every time. This helps ensure consistency of results, which in turn establishes consistent security.

Continuous monitoring

In order for security monitoring to be effective, it must be continuous in several ways. First, it must always be running and active. There should be no intentional time frame when security monitoring isn’t functioning. If security monitoring goes offline, all user activity should cease and administrators should be notified.

Second, security monitoring should be continuous across all user accounts, not just end users. Every single person has responsibilities to the organization to maintain its security. Likewise, everyone needs to abide by their assigned job-specific responsibilities and privileges. Any attempts to exceed or violate those limitations should be detected and dealt with.

Third, security monitoring should be continuous across the entire IT infrastructure. On every device possible, recording of system events and user activities should be taking place.

Fourth, security monitoring should be continuous for each user from the moment of attempted logon until the completion of a successful logoff or disconnect. At no time should the user expect to be able to perform tasks without security monitoring taking place.

Configuration validation

Automation is effective only if it is accurate. Repeated execution of a flawed program may leave the environment with reduced security rather than improved or maintained security. All systems need to have a defined configuration baseline that is clearly documented. The configuration documentation should be used to validate all in-production systems on a regular basis. Only when systems are in proper compliance with a configuration baseline is security likely to be resilient; baseline compliance also supports the results of automated processes.
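
A minimal configuration-validation sketch follows; the baseline and collected settings are illustrative values rather than output from any particular tool, but the comparison logic is the core of automated drift detection.

baseline = {                      # documented secure baseline (illustrative values)
    "ssh_root_login": "no",
    "password_min_length": 14,
    "firewall_enabled": True,
    "telnet_service": "disabled",
}

collected = {                     # settings gathered from an in-production system
    "ssh_root_login": "no",
    "password_min_length": 8,
    "firewall_enabled": True,
    "telnet_service": "enabled",
}

for setting, expected in baseline.items():
    actual = collected.get(setting)
    if actual != expected:
        print(f"NON-COMPLIANT: {setting} expected {expected!r}, found {actual!r}")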

Templates

A template is a preestablished starting point. A template can be crafted for a plethora of concerns in an environment, including a security policy, a procedure, a contract, a submission form, a system image, a software configuration, and a firewall rule set. Starting security documentation, configuration, or management with a template is likely to produce more consistent and reliable results.

Master image

A master image or gold master is a crafted setup and configuration of a software product or an entire computer system. A master image is created just after the target system has been manually installed, patched, and configured. A master image is employed to quickly roll out new versions of a system. For example, when deploying 100 new workstations, you can install the master image of the preferred workstation software deployment configuration to quickly bring the new devices into compliance with production needs and security requirements.

Non-persistence

A nonpersistent system is a computer system that does not allow, support, or retain changes. Thus, between uses and/or reboots, the operating environment and installed software are exactly the same. A persistent system is one where changes are possible. Changes may be performed by authorized users, administrators, automated processes, or malware. To reduce the risk of change, various protection and recovery measures may need to be established.

Snapshots

A snapshot is a copy of the live current operating environment. Snapshots are mostly known relative to virtual machines and guest OSs. However, the term can be loosely employed to refer to any systemwide backup that can be restored to a previous state or condition of configuration and operation. A VM snapshot might only take a few minutes to create and restore, while a hard drive–based snapshot or cloning may take hours to create and restore.

Revert to known state

Revert to known state is a type of backup or recovery process. Many databases support a known state reversion in order to return to a state of data before edits or changes were implemented. Some systems will automatically create a copy of a known state in order to provide a rollback option, whereas others may require a manual creation of the rollback point. An example of a revert-to-known-state system is the restore point system of Windows. Whenever a patch or software product is installed, Windows can create a restore point that can be used to return the system to a previous configuration state.

Rollback to known configuration

Rollback to known configuration is a concept similar to that of reverting to a known state, but the difference is that a state retention may address a larger portion of the environment than just configuration. A known configuration is just a collection of settings, not likely to include any software elements, such as code present before a patch was applied. A rollback to known configuration is useful after a setting change that had undesired consequences, but not after installing a new version of a software product (for that use revert to known state, snapshot, or backup). One example of a rollback to known configuration is the Last Known Good Configuration (LKGC) found in Windows. Each time a user successfully logs into a Windows system, a copy of the registry is made at that moment and stored in the LKGC container. If the system is altered in such a way that the operating environment is unusable, then upon the next reboot an advanced boot option is to restore the LKGC. However, this option is available only once; if the user logs in while the system is still malfunctioning, the current configuration is stored in the LKGC container.

Live boot media

Live boot media is a portable storage device that can be used to boot a computer. Live boot media contains a ready-to-run or portable version of an operating system. Live boot media may include CDs, DVDs, flash memory cards, and USB drives. Live boot media can be used as a portable OS when the local existing OS is not to be trusted (such as on the computer sitting in a library or hotel lobby). Live boot media can also be used as a recovery and repair strategy to gain access to tools and utilities to operate on a target system without the system’s OS running.

Elasticity

Elasticity is the ability of a system to adapt to workload changes by allocating or provisioning resources in an automatic responsive manner. Elasticity is a common feature of cloud computing, where additional system resources or even additional hardware resources can be provisioned to a server when demand for its services increases.
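
The decision rule behind elasticity can be surprisingly small. The sketch below is a hypothetical illustration, not any provider's autoscaling API: it returns the desired instance count for an observed average CPU load.

def desired_instances(cpu_utilization, instance_count, minimum=2, maximum=10):
    """Return how many instances should be running for the observed average load."""
    if cpu_utilization > 0.80 and instance_count < maximum:
        return instance_count + 1      # scale out under heavy load
    if cpu_utilization < 0.20 and instance_count > minimum:
        return instance_count - 1      # scale in to conserve power and cost
    return instance_count              # otherwise hold steady

print(desired_instances(0.92, 3))      # -> 4
print(desired_instances(0.10, 3))      # -> 2
print(desired_instances(0.50, 3))      # -> 3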

Scalability

Scalability is the ability of a system to handle an ever-increasing level or load of work. It can also be the potential for a system to be expanded to accommodate future growth. Some amount of additional capacity can be built into a system so that it can take advantage of the dormant resources automatically as need demands. A cloud system can further automate scalability by enabling servers to auto-clone across other virtualization hosts as demand requires.

Distributive allocation

Distributive or distributed allocation is the concept of provisioning resources across multiple servers or services as needed, rather than preallocation or concentrating resources based exclusively on physical system location. This is a form of load balancing but with a focus on the supporting resources rather than the traffic or request load.

Redundancy

This concept applies to various aspects of operational security, including business continuity, backups, and avoiding single points of failure as a means to protect availability.

Redundancy is the implementation of secondary or alternate solutions. Commonly, redundancy refers to having alternate means to perform work tasks or accomplish IT functions. Redundancy helps reduce single points of failure and improves fault tolerance. When there are multiple pathways, copies, devices, and so on, there is reduced likelihood of downtime when something fails.

When backup systems or redundant servers exist, there needs to be a means by which you can switch over to the backup in the event the primary system is compromised or fails. Rollover, or failover, means redirecting workload or traffic to a backup system when the primary system fails. Rollover can be automatic or manual. Manual rollover, also known as cold rollover, requires an administrator to perform some change in software or hardware configuration to switch the traffic load over from the down primary to a secondary server. With automatic rollover, also known as hot rollover, the switch from primary to secondary system is performed automatically as soon as a problem is encountered. Fail-secure, fail-safe, and fail-soft are terms related to these issues. A system that is fail-secure is able to resort to a secure state when an error or security violation is encountered (also known as fail-closed). Fail-safe is a similar feature, but human safety is protected in the event of system failure. However, these two terms are often used interchangeably in a logical or technical context to mean a system that is secure after a failure. Fail-soft describes a refinement of the fail-secure capability: only the portion of a system that encountered or experienced the failure or security breach is disabled or secured, whereas the rest of the system continues to function normally.
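
A minimal sketch of the health check behind automatic (hot) rollover is shown below; the server addresses are hypothetical, and a production failover system would probe repeatedly and update DNS, a load balancer, or a virtual IP rather than simply printing its choice.

import socket

PRIMARY = ("10.0.0.10", 443)       # hypothetical server addresses
SECONDARY = ("10.0.0.11", 443)

def is_healthy(addr, timeout=2.0):
    """Treat the node as healthy if its service port accepts a TCP connection."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

active = PRIMARY if is_healthy(PRIMARY) else SECONDARY
print(f"routing traffic to {active[0]}:{active[1]}")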

The insecure inverse of these is the fail-open response. With a fail-open result, all defenses or preventions are disabled or retracted. Thus, a door defaults to being unlocked or even wide open, and electric security defaults to open, unlimited access.

Fault tolerance

Fault tolerance is the ability of a system to handle or respond to failure smoothly. This can include software, hardware, or power failure.

Any element in your IT infrastructure, component in your physical environment, or person on your staff can be a single point of failure. A single point of failure is any element—such as a device, service, protocol, or communication link—that would cause total or significant downtime if compromised, violated, or destroyed, affecting the ability of members of your organization to perform essential work tasks. To avoid single points of failure, you should design your networks and your physical environment with redundancy and backups by doing such things as deploying dual-network backbones. By using systems, devices, and solutions with fault-tolerant capabilities, you improve resistance to single-point-of-failure vulnerabilities. Taking steps to establish a way to provide alternate processing, failover capabilities, and quick recovery also helps avoid single points of failure.

Another type of redundancy related to servers is clustering. Clustering means deploying two or more duplicate servers in such a way as to share the workload of a mission-critical application. Users see the clustered systems as a single entity. A cluster controller manages traffic to and among the clustered systems to balance the workload across all clustered servers. As changes occur on one of the clustered systems, they are immediately duplicated to all other cluster partners.

The use of redundant servers is another example of avoiding single points of failure. A redundant server is a mirror or duplicate of a primary server that receives all data changes immediately after they are made on the primary server. In the event of a failure of the primary server, the secondary or redundant server can immediately take over and replace the primary server in providing services to the network.

This switchover system can be either hot or cold. A hot switchover or hot failover is an automatic system that can often perform the task nearly instantaneously. A cold switchover or cold failover is a manual system that requires an administrator to perform the manual task of switching from the primary to the secondary system, and thus it often involves noticeable downtime.

Redundant servers can be located in the same server vault as the primary or can be located offsite. Offsite positioning of the redundant server offers a greater amount of security so that whatever disaster damaged the primary server is unlikely to be able to damage the secondary, offsite server. However, offsite redundant servers are more expensive due to the cost of housing them, as well as real-time communication links needed to support the mirroring operations.

High availability

Availability is the assurance of sufficient bandwidth and timely access to resources. High availability means the availability of a system has been secured to offer very reliable assurance that the system will be online, active, and able to respond to requests in a timely manner, and that there will be sufficient bandwidth to accomplish requested tasks in the time required. Both of these concerns are central to maintaining continuity of operations. Availability is often measured in terms of the nines (Table 3.2), a percentage of availability within a given time frame, such as a year, month, week, or day; the short calculation after the table shows how the downtime figures are derived. Many organizations strive to achieve five or six nines of availability.

TABLE 3.2 Availability percentages and downtimes

Availability % Downtime per year Downtime per month Downtime per week Downtime per day
90% (“one nine”) 36.5 days 72 hours 16.8 hours 2.4 hours
99% (“two nines”) 3.65 days 7.20 hours 1.68 hours 14.4 minutes
99.9% (“three nines”) 8.76 hours 43.8 minutes 10.1 minutes 1.44 minutes
99.99% (“four nines”) 52.56 minutes 4.38 minutes 1.01 minutes 8.64 seconds
99.999% (“five nines”) 5.26 minutes 25.9 seconds 6.05 seconds 864.3 milliseconds
99.9999% (“six nines”) 31.5 seconds 2.59 seconds 604.8 milliseconds 86.4 milliseconds
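
The downtime figures in Table 3.2 follow directly from the availability percentage: allowed downtime is simply the unavailable fraction of the measurement period. A quick check in Python:

PERIOD_SECONDS = {
    "year": 365 * 24 * 3600,
    "month": 30 * 24 * 3600,
    "week": 7 * 24 * 3600,
    "day": 24 * 3600,
}

def downtime_seconds(availability, period):
    """Allowed downtime is the unavailable fraction of the measurement period."""
    return (1 - availability) * PERIOD_SECONDS[period]

print(downtime_seconds(0.999, "year") / 3600)   # ~8.76 hours  ("three nines")
print(downtime_seconds(0.99999, "year") / 60)   # ~5.26 minutes ("five nines")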

High availability is a form of fault tolerance—or, rather, a benefit of providing reliable fault tolerance. Fault tolerance is the ability of a network, system, or computer to withstand a certain level of failures, faults, or problems and continue to provide reliable service. Fault tolerance is also a means of avoiding single points of failure. As mentioned earlier, a single point of failure is any system, software, or device that is mission-critical to the entire environment. If that one element fails, then the entire environment fails. Your environments should be designed with redundancy so that there are no single points of failure. Such a redundant design is fault tolerant.

Another example of a high-availability solution is server clustering (see Figure 3.20). Server clustering is a technology that connects several duplicate systems together so they act cooperatively. If one system in a cluster fails, the other systems take over its workload. From a user’s perspective, the cluster is a single entity with a single resource access name.

FIGURE 3.20 Server clustering

Maintaining an onsite stash of spare parts can reduce downtime. Having an in-house supply of critical parts, devices, media, and so on enables fast repair and function restoration. A replacement part can then be ordered from the vendor and returned to the onsite spare-parts storage. Unexpected downtime due to hardware failure is a common cause of loss of availability. Planning for faster repairs improves uptime and eliminates lengthy downtimes caused by delayed shipping from vendors.

To avoid single points of failure completely, every communication pathway should be redundant. Thus, every link from the LAN to a carrier network or ISP should be duplicated. This can be accomplished by leasing two lines from the same ISP (which is the most basic form of redundant connection) or from different ISPs. The use of redundant ISPs reduces the likelihood that a failure at a single ISP will cause your organization significant connectivity downtime. However, the best redundant ISP configuration requires the two (or more) selected ISPs to use distinct Internet or network backbones.

Power is an essential utility for any organization, but especially those dependent on their IT infrastructure. In addition to basic elements such as power conditioners and UPS devices, many organizations opt for an onsite backup generator to provide power during complete blackouts. A variety of backup generators are available, in terms of both size and fuel.

An uninterruptible power supply (UPS) is an essential element of any computing environment. A UPS provides several important services and features. First, a UPS is a power conditioner that ensures only clean, pure, nonfluctuating power is fed to computer equipment. Second, in the event of a loss of power, the internal battery can provide power for a short period of time. The larger the battery, the longer the UPS can provide power. Third, as the battery nears the end of its charge, the UPS can signal the computer system to initiate a graceful shutdown in order to prevent data loss.

RAID

One example of a high-availability solution is a redundant array of independent disks (RAID). A RAID solution employs multiple hard drives in a single storage volume, as illustrated in Figure 3.21. RAID 0, known as striping, uses multiple drives as a single volume and provides a performance improvement but no fault tolerance. RAID 1 provides mirroring, meaning the data written to one drive is exactly duplicated to a second drive in real time. RAID 5 provides striping with parity: three or more drives are used in unison, and one drive’s worth of space is consumed by parity information, which is distributed across all drives. If any single drive of a RAID 5 volume fails, the parity information is used to rebuild the contents of the lost drive on the fly. A new drive can replace the failed drive, and the RAID 5 system rebuilds the contents of the lost drive onto the replacement drive. RAID 5 can support the failure of only one disk drive. A short XOR example after Figure 3.21 illustrates how parity allows a lost drive’s contents to be rebuilt.

The diagram shows example implementations of RAID 0 (multiple drives presented as a single volume), RAID 1 (a primary drive mirrored to a secondary drive), and RAID 5 (parity blocks A, B, and C distributed across the drives).

FIGURE 3.21 Examples of RAID implementations
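
The following small illustration shows how RAID 5-style parity lets a lost drive be rebuilt: the parity block is the XOR of the data blocks, and XORing the survivors with the parity recreates the missing block. Real RAID operates on disk stripes rather than single bytes, but the arithmetic is the same.

drive_a = 0b10110010
drive_b = 0b01101100
drive_c = 0b11100001
parity = drive_a ^ drive_b ^ drive_c        # stored in the stripe's parity block

# Suppose the drive holding drive_b fails; XOR the survivors with the parity:
rebuilt_b = drive_a ^ drive_c ^ parity
assert rebuilt_b == drive_b
print(f"rebuilt block: {rebuilt_b:08b}")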

Exam Essentials

Understand automation and scripting. Automation is the control of systems on a regular scheduled, periodic, or triggered basis that does not require manual hands-on interaction. Automation is often critical to a resilient security infrastructure. Scripting is the crafting of a file of individual lines of commands that are executed one after another. Scripts can be set to launch on a schedule or based on a triggering event.

Know about master images. A master image is a crafted setup and configuration of a software product or an entire computer system. A master image is created just after the target system has been manually installed, patched, and configured.

Understand nonpersistence. A nonpersistent system is a computer system that does not allow, support, or retain changes; between uses and/or reboots, the operating environment and installed software are exactly the same. A persistent system is one where changes are possible, whether performed by authorized users, administrators, automated processes, or malware. To reduce the risk of change, various protection and recovery measures may need to be established.

Comprehend snapshots. A snapshot is a copy of the live current operating environment.

Understand revert to known state. Revert to known state is a type of backup or recovery process. Many databases support a known state reversion in order to return to a state of data before edits or changes were implemented.

Know about roll back to known configuration. Roll back to known configuration is a concept similar to that of revert to known state, but a known configuration is just a collection of settings, not likely to include any software elements, such as code present before a patch was applied.

Understand live boot media. Live boot media is a portable storage device that can be used to boot a computer. Live boot media contains a ready-to-run or portable version of an operating system.

Comprehend elasticity. Elasticity is the ability of a system to adapt to workload changes by allocating or provisioning resources in an automatic responsive manner.

Understand scalability. Scalability is the ability of a system to handle an ever-increasing level or load of work. It can also be the potential for a system to be expanded to handle or accommodate future growth.

Know about distributive allocation. Distributive allocation or distributed allocation is the concept of provisioning resources across multiple servers or services as needed, rather than using preallocation or concentrating resources based exclusively on physical system location.

Understand redundancy. Redundancy is the implementation of secondary or alternate solutions. Commonly, redundancy refers to having alternate means to perform work tasks or accomplish IT functions. Redundancy helps reduce single points of failure and improves fault tolerance.

Comprehend fault tolerance. Fault tolerance is the ability of a network, system, or computer to withstand a certain level of failures, faults, or problems and continue to provide reliable service. Fault tolerance is also a form of avoiding single points of failure. A single point of failure is any system, software, or device that is mission-critical to the entire environment.

Understand high availability. High availability means the availability of a system has been secured to offer very reliable assurance that the system will be online, active, and able to respond to requests in a timely manner, and that there will be sufficient bandwidth to accomplish requested tasks in the time required. RAID is a high-availability solution.

Know about the continuity of operations/high availability. Availability is the assurance of sufficient bandwidth and timely access to resources. High availability means the availability of a system has been secured to offer very reliable assurance that the system will be online, active, and able to respond to requests in a timely manner, and that there will be sufficient bandwidth to accomplish requested tasks in the time required. Both of these concerns are central to maintaining continuity of operations.

Understand RAID. One example of a high-availability solution is a redundant array of independent disks (RAID). A RAID solution employs multiple hard drives in a single storage volume with some level of drive loss protection (with the exception of RAID 0).

3.9 Explain the importance of physical security controls.

Without physical security, there is no security. No amount or extent of logical and technical security controls can compensate for lax physical security protection. Thus, physical security controls need to be assessed and implemented in the same manner as security controls for the IT infrastructure.

Physical security is an area that is often overlooked when security for an environment is being designed. As you prepare for the Security+ exam, don’t overlook the aspects and elements of physical security. As a security professional, you need to reduce overall opportunities for intrusions or physical security violations. This can be accomplished using various mechanisms, including prevention, deterrence, and detection.

To ensure proper physical security, you should design the layout of your physical environment with security in mind. This means you should place all equipment in locations that can be secured, and control and monitor access or entrance into those locations. Good physical security access control also recognizes that some computers and network devices are more important or mission-critical than others and therefore require greater physical security protection.

Mission-critical servers and devices should be placed in dedicated equipment rooms that are secured from all possible entrance and intrusion (see Figure 3.22). These rooms shouldn’t have windows, and they should have floor-to-roof walls (rather than short walls that end at a drop ceiling). Equipment rooms should be locked at all times, and only authorized personnel should be granted entrance. The rooms should be monitored, and all access should be logged and audited.

The schematic shows a computer center protected by a combination lock, a locked door with a door sensor, and an interior motion detector, with a video camera, perimeter security, and a fence outside.

FIGURE 3.22 An example of a multilayered physical security environment

Physical barriers are erected to control access to a location. Some of the most basic forms of physical barriers are walls and fences. Fences are used to designate the borders of a geographic area where entrance is restricted; a high fence, the presence of barbed wire, or electrified fencing all provide greater boundary protection. Walls provide protection as well, preventing entry except at designated points such as doors and windows. The stronger the wall, the more security it provides. And the greater the number of walls between the untrusted outside and the valuable assets located inside, the greater the level of physical security.

Lighting

Lighting is a commonly used form of perimeter security control. The primary purpose of lighting is to discourage casual intruders, trespassers, prowlers, or would-be thieves who would rather perform misdeeds such as vandalism, theft, and loitering in the dark. However, lighting is not a strong deterrent. It should not be used as the primary or sole protection mechanism except in areas with a low threat level.

Lighting should be combined with guards, dogs, CCTV, or some other form of intrusion detection or surveillance mechanism. Lighting must not cause a nuisance or problem for nearby residents, roads, railways, airports, and so on. It should also never cause glare or a reflective distraction to guards, dogs, and monitoring equipment, which could otherwise aid attackers during break-in attempts.

Signs

Signs can be used to declare areas off limits to those who are not authorized, indicate that security cameras are in use, and disclose safety warnings. Signs are useful in deterring minor criminal activity, establishing a basis for recording events, and guiding people into compliance or adherence with rules or safety precautions.

Fencing/gate/cage

A fence is a perimeter-defining device. Fencing protects against casual trespassing and clearly identifies the geographic boundaries of a property. Fences are used to clearly differentiate between areas that are under a specific level of security protection and those that aren’t. Fencing can include a wide range of components, materials, and construction methods. It can consist of stripes painted on the ground, chain-link fences, barbed wire, concrete walls, or invisible perimeters that use laser, motion, or heat detectors. Various types of fences are effective against different types of intruders:

  • Fences 3 to 4 feet high deter casual trespassers.
  • Fences 6 to 7 feet high are too hard to climb easily and deter most intruders except determined ones.
  • Fences 8 or more feet high with three strands of barbed wire deter even determined intruders.

A gate is a controlled exit and entry point in a fence. The deterrent level of a gate must be equivalent to the deterrent level of the fence to sustain the effectiveness of the fence as a whole. Hinges and locking/closing mechanisms should be hardened against tampering, destruction, or removal. When a gate is closed, it should not offer any additional access vulnerabilities. Keep the number of gates to a minimum. Gates may or may not be staffed by guards; when they’re not protected by guards, the use of dogs or electronic monitoring is recommended.

A cage is an enclosed fence area that can be used to protect assets from being accessed by unauthorized individuals. Cages can be used inside or outside. For a cage to be most effective, it needs to have a secured floor and ceiling.

Security guards

All physical security controls, whether static deterrents or active detection and surveillance mechanisms, ultimately rely on personnel to intervene and stop actual intrusions and attacks. Security guards exist to fulfill this need. Guards can be posted around a perimeter or inside to monitor access points or watch detection and surveillance monitors. The real benefit of guards is that they are able to adapt and react to various conditions or situations. Guards can learn and recognize attack and intrusion activities and patterns, adjust to a changing environment, and make decisions and judgment calls. Security guards are often an appropriate security control when immediate situation handling and decision-making onsite is necessary.

Unfortunately, using security guards is not a perfect solution. There are numerous disadvantages to deploying, maintaining, and relying on security guards. Not all environments and facilities support security guards. This may be because of actual human incompatibility or the layout, design, location, and construction of the facility. Not all security guards are themselves reliable. Prescreening, bonding, and training do not guarantee that you won’t end up with an ineffective or unreliable security guard.

Even if a guard is initially reliable, guards are subject to physical injury and illness, take vacations, can become distracted, and are vulnerable to social engineering. In addition, security guards usually offer protection only up to the point at which their lives are endangered. Security guards are usually unaware of the scope of the operations in a facility and therefore are not thoroughly equipped to know how to respond to every situation. Finally, security guards are expensive.

The presence of security guards at an entrance or around the perimeter of a security boundary serves as a deterrent to intruders and provides a form of physical barrier. Guard dogs can also protect against intrusion by detecting the presence of unauthorized visitors.

A security guard can check each person’s credentials before granting entry. You can also use a biometrically controlled door. In either entrance-control system, a log or list of entries and exits, along with visitors and escorts, can be maintained. Such a log will assist in tracking down suspects or verifying that all personnel are accounted for in the event of an emergency.

In the realm of physical security, access controls are mechanisms designed to manage and control entrance into a location such as a building, a parking lot, a room, or even a specific box or server rack. Being able to control who can gain physical proximity to your environment (especially your computers and networking equipment) lets you provide true security for your data, assets, and other resources.

One method to control access is to issue each valid worker an ID badge that can be either a simple photo ID or an electronic smartcard. A photo ID requires a security guard to view, discriminate, and then grant or deny access. In this process, the security guard can also add the name and action to an access roster. A smartcard can be used with an automated system that can electronically unlock and even open doors when a valid smartcard is swiped. Smartcard use is also easy to log and monitor. Additionally, the same smartcard used for facility access can serve as a photo ID and as an authentication factor for accessing the company network.

Alarms

Alarms or physical IDSs are systems—automated or manual—designed to detect an attempted intrusion, breach, or attack; the use of an unauthorized entry point; or the occurrence of some specific event at an unauthorized or abnormal time. IDSs used to monitor physical activity may include security guards, automated access controls, and motion detectors as well as other specialty monitoring techniques.

Physical IDSs, also called burglar alarms, detect unauthorized activities and notify the authorities (internal security or external law enforcement). The most common type of system uses a simple circuit (aka dry contact switch) consisting of foil tape in entrance points to detect when a door or window has been opened.

An intrusion detection mechanism is useful only if it is connected to an intrusion alarm. An intrusion alarm notifies authorities about a breach of physical security.

Two aspects of any intrusion detection and alarm system can cause it to fail: how it gets its power and how it communicates. If the system loses power, the alarm will not function. Thus, a reliable detection and alarm system has a battery backup with enough stored power for 24 hours of operation.

If communication lines are cut, an alarm may not function, and security personnel and emergency services will not be notified. Thus, a reliable detection and alarm system incorporates a heartbeat sensor for line supervision. A heartbeat sensor is a mechanism by which the communication pathway is either constantly or periodically checked with a test signal. If the receiving station detects a failed heartbeat signal, the alarm triggers automatically. Both measures are designed to prevent intruders from circumventing the detection and alarm system.
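
A minimal sketch of heartbeat-based line supervision follows; the timing values are arbitrary, and receive_heartbeat() and trigger_alarm() are hypothetical hooks standing in for the real signal source and alarm output.

import time

HEARTBEAT_INTERVAL = 5     # seconds between expected test signals
GRACE = 2                  # tolerated jitter before declaring the line dead

def supervise(receive_heartbeat, trigger_alarm):
    """receive_heartbeat() should return a signal, or None if its wait timed out."""
    last_seen = time.monotonic()
    while True:
        if receive_heartbeat() is not None:
            last_seen = time.monotonic()
        elif time.monotonic() - last_seen > HEARTBEAT_INTERVAL + GRACE:
            trigger_alarm("communication path lost; assuming the line was cut")
            return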

Whenever a motion detector registers a significant or meaningful change in the environment, it triggers an alarm. An alarm is a separate mechanism that triggers a deterrent, a repellent, and/or a notification:

Deterrent Alarms Alarms that trigger deterrents may engage additional locks, shut doors, and so on. The goal of such an alarm is to make further intrusion or attack more difficult.

Repellent Alarms Alarms that trigger repellents usually sound an audio siren or bell and turn on lights. These kinds of alarms are used to discourage intruders or attackers from continuing their malicious or trespassing activities and force them off the premises.

Notification Alarms Alarms that trigger notification are often silent from the intruder/attacker perspective but record data about the incident and notify administrators, security guards, and law enforcement. A recording of an incident can take the form of log files and/or CCTV tapes. The purpose of a silent alarm is to bring authorized security personnel to the location of the intrusion or attack in hopes of catching the person(s) committing the unwanted or unauthorized acts.

Alarms are also categorized by where they are located:

Local Alarm System Local alarm systems must broadcast an audible (up to 120 decibels [dB]) alarm signal that can be easily heard up to 400 feet away. Additionally, they must be protected from tampering and disablement, usually by security guards. For a local alarm system to be effective, a security team or guards who can respond when the alarm is triggered must be positioned nearby.

Central Station System A central station system alarm is usually silent locally, but offsite monitoring agents are notified so they can respond to the security breach. Most residential security systems are of this type. Most central station systems are operated by well-known national security companies, such as Brinks and ADT. A proprietary system is similar to a central station system, but the host organization has its own onsite security staff waiting to respond to security breaches.

Auxiliary Station System An auxiliary station capability can be added to either local or central station alarm systems. When the security perimeter is breached, emergency services are notified to respond to the incident at the location. This could include fire, police, and medical services.

Two or more of these types of intrusion and alarm systems can be incorporated into a single solution.

Safe

Any device or removable media containing highly sensitive information should be kept locked securely in a safe when not in active use. You can install a department-wide safe that is managed by a single person, or you can install per-desk safes. A per-desk safe is often smaller, but it lets workers store devices and documentation securely while also allowing quick access.

Long-term storage of media and devices may require safes as well. Safes may be present onsite, or you can contract with an offsite storage facility to provide a safe for secured storage.

Secure cabinets/enclosures

Cabinets, device enclosures, rack-mounting systems, patch panels, wiring closets, and other equipment and cable containers can provide additional physical security through the use of locking mechanisms. Locking cabinets and other forms of containers can block or reduce access to power switches, adapter ports, media bays, and cable runs. Locking cabinets can be used in server rooms or in workspace areas. These can also include desks that give workers access to the monitor, mouse, and keyboard but sequester the main system chassis inside a locked desk compartment.

Protected distribution/Protected cabling

Protected distribution or protective distribution systems (PDSs) (also known as protected cabling systems) are the means by which cables are protected against unauthorized access or harm. The goals of PDSs are to deter violations, detect access attempts, and otherwise prevent compromise of cables. Elements of PDS implementation can include protective conduits, sealed connections, and regular human inspections. Some PDS implementations require intrusion or compromise detection within the conduits.

Airgap

See the earlier section “Physical” for a discussion of the security benefits of this type of network segmentation.

Mantrap

Some high-value or high-security environments may also employ mantraps as a means to control access to the most secured, dangerous, or valuable areas of a facility. A mantrap is a form of high-security barrier entrance device (see Figure 3.23). It’s a small room with two doors: one in the trusted environment and one opening to the outside. The mantrap works like this:

  1. A person enters the mantrap.
  2. Both doors are locked.
  3. The person must properly authenticate to unlock the inner door to gain entry. If the authentication fails, security personnel are notified, and the intruder is detained in the mantrap.

FIGURE 3.23 A mantrap

Mantraps often contain scales and cameras in order to prevent piggybacking. Piggybacking occurs when one person authenticates, opens a door, and lets another person enter without that second person authenticating to the system.

Faraday cage

A Faraday cage (Figure 3.24) is an enclosure that blocks or absorbs electromagnetic fields or signals. Faraday cage containers, computer cases, rack-mount systems, rooms, or even building materials are used to create a blockage against the transmission of data, information, metadata, or other emanations from computers and other electronics. Devices inside a Faraday cage can use electromagnetic (EM) fields for communications, such as wireless or Bluetooth, but devices outside the cage will not be able to eavesdrop on the signals of the systems within the cage.

The diagram shows a Faraday cage isolating a WAP, a portable system, and a smartphone from another portable system outside the cage.

FIGURE 3.24 A Faraday cage prevents WiFi (EM) access outside of the container.

Lock types

Although you need walls and fences to protect boundaries, there must be a means for authorized personnel to cross these barriers into the secured environment. Doors and gates can be locked and controlled in such a way that only authorized people can unlock and/or enter through them. Such control can take the form of a lock with a key that only authorized people possess. Locks are used to keep doors and containers secured in order to protect assets.

Conventional hardware locks and even electronic or smart locks are used to keep specific doors or other access portals closed and to prevent entry or access by all but authorized individuals. Given the risks of lock picking and bumping, locks resistant to such attacks should be used whenever valuable assets must be protected from tampering or theft.

Doors used to control entrance into secured areas can be protected by locks that are keyed to biometrics. A biometric lock requires that the person present a biometric factor, such as a finger, a hand, or a retina to the scanner, which in turn transmits the fingerprint, hand, or retina scan to the validation mechanism. Only after the biometric is verified is the door unlocked and the person allowed entry. When biometrics are used to control entrance into secured areas, they serve as a mechanism of identity proofing as well as authentication.

However, door access systems need not be exclusively biometric. Smartcards and even traditional metal keys can function as authentication factors for physical entry points.

Many door access systems, whether supporting biometrics, smartcards, or even PINs, are designed around the electronic access control (EAC) concept. An EAC system is a door-locking and -access mechanism that uses an electromagnet to keep a door closed, a reader to accept access credentials, and a door-close spring and sensor to ensure that the door recloses within a reasonable timeframe.
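
To make the EAC flow concrete, the following minimal Python sketch models the parts just described: a credential check that decides whether the electromagnet is de-energized, and a reclose timer that raises an alert if the door is held open too long. The badge IDs and the time limit are invented for the example.

  # Hedged sketch of an electronic access control (EAC) cycle; the badge list,
  # magnet control, and door sensor are simulated rather than tied to real hardware.
  AUTHORIZED_BADGES = {"1001", "1002"}   # hypothetical badge IDs
  RECLOSE_LIMIT_SECONDS = 8              # door must reclose within this window

  def eac_cycle(badge_id: str, door_open_duration: float) -> str:
      if badge_id not in AUTHORIZED_BADGES:
          return "access denied; electromagnet stays energized"
      # Valid credential: de-energize the electromagnet so the door can open.
      if door_open_duration > RECLOSE_LIMIT_SECONDS:
          return "alert: door held open past the reclose limit"
      return "access granted; door reclosed normally"

  print(eac_cycle("1001", door_open_duration=4.0))   # normal entry
  print(eac_cycle("1001", door_open_duration=15.0))  # propped-open door triggers an alert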

Biometrics

Biometrics is the term used to describe the collection of physical attributes of the human body that can be used as identification or authentication factors. Biometrics fall into the authentication factor category of something you are: you, as a human, have the element of identification as part of your physical body. Biometrics include fingerprints, palm scans (use of the entire palm as if it were a fingerprint), hand geometry (geometric dimensions of the silhouette of a hand), retinal scans (pattern of blood vessels at the back of the eye), iris scans (colored area of the eye around the pupil), facial recognition, voice recognition, signature dynamics, and keyboard dynamics.

Although biometrics are a stronger form of authentication than passwords alone, biometrics in and of themselves aren’t the best solution. Even with biometrics, implementing multifactor authentication is the most secure solution.

The key element in deploying biometrics as an element of authentication is a biometric device or a biometric reader. This is the hardware designed to read, scan, or view the body part that is to be presented as proof of identification.
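
At a high level, the validation mechanism compares the freshly captured sample against the enrolled template and accepts the match only if the similarity clears a threshold. The following Python sketch illustrates that comparison with invented feature vectors and an invented threshold; real matchers are far more sophisticated.

  import math

  # Hedged illustration of biometric verification: compare a live sample against an
  # enrolled template and accept only if the similarity clears a threshold.
  def similarity(sample, template):
      """Cosine similarity between two equal-length feature vectors (1.0 = identical)."""
      dot = sum(a * b for a, b in zip(sample, template))
      norm = math.sqrt(sum(a * a for a in sample)) * math.sqrt(sum(b * b for b in template))
      return dot / norm if norm else 0.0

  MATCH_THRESHOLD = 0.95   # raising this reduces false accepts but increases false rejects

  enrolled_template = [0.12, 0.87, 0.33, 0.54]
  live_sample = [0.11, 0.88, 0.35, 0.52]

  if similarity(live_sample, enrolled_template) >= MATCH_THRESHOLD:
      print("Door unlocked")
  else:
      print("Access denied")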

See the Chapter 4 section “Biometric factors” for more about the benefits and limitations of biometrics.

Barricades/bollards

Barricades, in addition to fencing (discussed earlier), are used to control both foot traffic and vehicles. K-rails (often seen during road construction), large planters, zigzag queues, bollards, and tire shredders are all examples of barricades. When used properly, they can control crowds and prevent vehicles from being used to cause damage to your building.

Tokens/cards

A token device or an access card can be used as an element in authentication when gaining physical entry into a facility. See the Chapter 4 sections “Tokens,” “Physical access control,” and “Certificate-based authentication.”

Environmental controls

Environmental monitoring is the process of measuring and evaluating the quality of the environment within a given structure. This can focus on general or basic concerns, such as temperature, humidity, dust, smoke, and other debris. However, more advanced systems can include chemical, biological, radiological, and microbiological detectors.

When you’re designing a secure facility, it’s important to keep various environmental factors in mind. These include the following:

  • Controlling the temperature and humidity
  • Minimizing smoke and airborne dust and debris
  • Minimizing vibrations
  • Preventing food and drink from being consumed near sensitive equipment
  • Avoiding strong magnetic fields
  • Managing electromagnetic and radio frequency interference
  • Conditioning the power supply
  • Managing static electricity
  • Providing proper fire detection and suppression

HVAC

Heating, ventilating, and air-conditioning (HVAC) management is important for two reasons: temperature and humidity. In the mission-critical server vault or room, the temperature should be maintained around a chosen set point to support optimal system operation. For many, the “optimal” temperature or preferred set point is in the mid-60s Fahrenheit. However, some organizations are operating as low as 55 degrees and others are creeping upward into the 90s. With good airflow management and environmental monitoring, many companies are saving 4 to 5 percent on their cooling bills for every one degree they increase their server room temperature. Throughout the organization, humidity levels should be managed to keep the relative humidity between 40 and 60 percent. Low humidity allows static electricity buildup, with discharges capable of damaging most electronic equipment. High humidity can allow condensation, which leads to corrosion.
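
As a rough illustration of the cooling-savings claim above, the following short Python calculation assumes a 4.5 percent saving per degree of set-point increase, compounded per degree; the baseline bill and set points are made-up figures.

  # Rough illustration of cooling-cost savings from raising the server room set point.
  # The 4.5% per-degree figure and the $10,000 baseline are assumptions for the example.
  baseline_monthly_cooling_cost = 10_000.00   # dollars at a 65-degree set point (hypothetical)
  savings_per_degree = 0.045                  # 4.5% saved for each 1-degree increase
  degrees_raised = 5                          # e.g., moving the set point from 65 to 70 degrees

  cost = baseline_monthly_cooling_cost * (1 - savings_per_degree) ** degrees_raised
  print(f"Estimated monthly cooling cost: ${cost:,.2f}")   # about $7,944, roughly 20% lower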

Hot and cold aisles

Hot and cold aisles are a means of maintaining optimum operating temperature in large server rooms. The overall technique is to arrange server racks in lines separated by aisles (Figure 3.25). Then the airflow system is designed so hot, rising air is captured by air-intake vents on the ceiling, whereas cold air is returned in opposing aisles from either the ceiling or the floor. Thus, every other aisle is hot, then cold. This creates a circulating air pattern that is intended to optimize the cooling process.

Diagram shows flow of hot and cold air between server racks in lines separated by aisles in server room.

FIGURE 3.25 A hot aisle/cold aisle air management system

Fire suppression

Fire is a common problem that must be addressed in the design of any facility. Electrical fires are common causes of building fires; they may result from overheated computer or networking equipment or improperly managed electrical power cables and distribution nodes (power strips).

Early fire detection and suppression is important because the earlier the discovery, the less damage is caused to the facility and equipment. Personnel safety is always of utmost importance. However, in a dedicated, secured, mission-critical server room (often called a server cage, server vault, or datacenter), the fire-suppression system can be gas discharge–based rather than water-based. A gas discharge–based system removes oxygen from the air and may even suppress the chemical reaction of combustion, often without damaging computer equipment, but such systems are harmful to people. If a water-based system must be used, employ a pre-action system that allows the release of the water to be turned off in the event of a false alarm.

The safety of the facility and personnel should be a priority of a security effort. Human life and safety are without question the top concerns, but sufficient focus needs to be placed on providing physical security for buildings and other real-world assets. The following sections discuss many aspects of security and safety.

Every building needs an escape plan, and a backup escape plan, and even a backup backup escape plan. An escape route is the path someone should take out of a building to reach safety. The preferred and alternate escape routes should be identified, marked, and clearly communicated to all personnel. Accommodations for those with disabilities need to be made.

Employees need to be trained in safety and escape procedures. Once they are trained, their training should be tested using drills and simulations. Having workers go through the routine of escape helps to reinforce their understanding of the escape plans and available routes, and it also helps reduce anxiety and panic in case of a threatening event.

All elements of physical security, especially those related to human life and safety, should be tested on a regular basis. It is mandated by law that fire extinguishers, fire detectors/alarms, and elevators be inspected regularly. A self-imposed schedule of control testing should be implemented for door locks, fences, gates, mantraps, turnstiles, video cameras, and all other physical security controls.

Cable locks

A cable lock is used to keep smaller pieces of equipment from being easy to steal. Many devices, most commonly portable computers, have a Kensington Security Slot (K-Slot) that is designed as a connection point for a cable lock. The K-Slot was originally developed by Kensington, which continues to develop new cable lock security devices.

A cable lock usually isn’t an impenetrable security device, since most portable systems are constructed with thin metal and plastic. However, a thief will be reluctant to swipe a cable-locked device, because the damage caused by forcing the cable lock out of the K-Slot will be obvious when they attempt to pawn or sell the device.

Screen filters

It may be worthwhile to install screen filters, also called privacy filters, which reduce the range of visibility of a screen down to a maximum of 30 degrees from perpendicular (Figure 3.26). These types of screens are designed to prevent someone sitting directly next to you, such as on an airplane, from being able to see the contents of your display.


FIGURE 3.26 The viewing angle for a screen filter

Cameras

Video surveillance, video monitoring, closed-circuit television (CCTV), and security cameras are all means to deter unwanted activity and create a digital record of the occurrence of events. Cameras should be positioned to watch exit and entry points allowing any change in authorization level—for example, doors allowing entry into a facility from outside, doors allowing entry into work areas from common areas, and doors allowing entry into high-security areas from work areas. Cameras should also be used to monitor activities around valuable assets and resources, such as server rooms, safes, vaults, and component closets, as well as to provide additional protection in public areas such as parking structures and walkways.

Cameras should be configured to record to storage media. This has traditionally been some sort of tape, such as VCR tape. However, modern systems may record to DVD, NVRAM, or hard drives and may do so over a wired or even an encrypted wireless connection.

Cameras vary in type. Typical security cameras operate by recording visible-light images and often require additional lighting in low-light areas. Alternative camera types include those that record only when motion is detected, those that are able to record in infrared, and those that can automatically track movement.

Video records may be used to detect policy violations, track personnel movements, or capture an intruder on film. Video recordings should be monitored in real time or reviewed on a periodic basis in order to provide a detective benefit. Just the visible presence of video cameras can provide a deterrent effect to would-be perpetrators.

A camera is primarily used to detect and record unwanted or unauthorized activity. If someone is aware that a camera is present and will record their actions, that person is less likely to perform actions that are violations. This is generally known as a deterrent.

A security guard is able to move around a facility to potentially view places a camera is unable to see. Security guards are often as much a deterrent as they are a detective control. They can respond to varying issues and can adjust their actions based on changing conditions.

Both cameras and guards have useful security features, but both require proper use to be beneficial, both have their own unique requirements for use, and both are costly in their own ways.

Motion detection

A motion detector, or motion sensor, is a device that senses movement or sound in a specific area. Many types of motion detection exist, including infrared, heat, wave pattern, capacitance, photoelectric, and passive audio:

  • An infrared motion detector monitors for significant or meaningful changes in the infrared lighting pattern of a monitored area.
  • A heat-based motion detector monitors for significant or meaningful changes in the heat levels and patterns in a monitored area.
  • A wave-pattern motion detector transmits a consistent low-frequency ultrasonic or high-frequency microwave signal into a monitored area and monitors for significant or meaningful changes or disturbances in the reflected pattern.
  • A capacitance motion detector senses changes in the electrical or magnetic field surrounding a monitored object.
  • A photoelectric motion detector senses changes in visible light levels for the monitored area. Photoelectric motion detectors are usually deployed in internal rooms that have no windows and are kept dark.
  • A passive audio motion detector listens for abnormal sounds in the monitored area.

The proper technology of motion detection should be selected for the environment where it will be deployed, in order to minimize false positives and false negatives.
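
Whatever sensor technology is chosen, one common way to keep false positives down is to require several consecutive triggered readings before raising an alarm. The following is a small, hedged Python sketch of that debounce idea; the reading sequences are simulated values, not output from a real sensor.

  # Simple debounce sketch: only alarm after N consecutive triggered readings.
  CONSECUTIVE_READINGS_REQUIRED = 3

  def should_alarm(readings, required=CONSECUTIVE_READINGS_REQUIRED):
      streak = 0
      for triggered in readings:
          streak = streak + 1 if triggered else 0
          if streak >= required:
              return True    # sustained motion, unlikely to be a transient false positive
      return False

  print(should_alarm([True, False, True, False]))        # False: isolated blips are ignored
  print(should_alarm([False, True, True, True, False]))  # True: sustained motion detected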

Logs

Logs of physical access should be maintained. These can be created automatically through the use of smartcards for gaining access into the facility or manually by a security guard who records each entrance after inspecting the person's ID. The purpose of physical access logs is to establish context for logical logs produced by servers, workstations, and networking equipment. The logs are also helpful in an emergency for determining whether everyone has escaped a building safely or whether rescue teams should be sent in.
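
The following hedged Python sketch shows one way physical entry records could be lined up against logical logon events to provide that context; the record layout, names, and times are invented for illustration.

  # Illustrative correlation of physical access logs with logical logon logs.
  # All records, names, and times below are invented for the example.
  from datetime import datetime, timedelta

  badge_entries = [
      {"user": "jsmith", "door": "Lobby", "time": datetime(2018, 5, 4, 8, 58)},
  ]
  logons = [
      {"user": "jsmith", "host": "WS-042", "time": datetime(2018, 5, 4, 9, 3)},
      {"user": "adoe",   "host": "WS-017", "time": datetime(2018, 5, 4, 9, 5)},
  ]

  WINDOW = timedelta(hours=1)   # a logon this soon after badging in is considered explained

  for logon in logons:
      explained = any(
          e["user"] == logon["user"] and timedelta(0) <= logon["time"] - e["time"] <= WINDOW
          for e in badge_entries
      )
      if not explained:
          print(f"Investigate: {logon['user']} logged on to {logon['host']} with no badge entry")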

Infrared detection

Infrared detection is often used by security cameras to see in perceived darkness or to detect movement in an area. These concepts were discussed in the previous sections “Cameras” and “Motion detection.”

Key management

Key management in relation to physical security focuses on the issuance of physical metal keys to those who need access into secured rooms, areas, or containers. A detailed log should be maintained of who was issued which key, for which room or container, and for what purpose. Regular audits should confirm that the responsible party still possesses the key and should determine whether the key has been exposed to theft or duplication. Keys should be numbered and, when possible, labeled or stamped "do not duplicate" (so a locksmith will refuse to make a copy). Keys should be returned to the key manager when access is no longer required by that individual. When a worker who was in possession of a key is terminated, the corresponding lock and key should be replaced.
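
A minimal sketch of what a physical key issuance log might capture appears below, using invented key numbers, rooms, and names; the fields simply mirror the auditing points described above.

  # Minimal physical key register sketch; key numbers, rooms, and names are invented.
  key_register = [
      {"key_no": "K-014", "room": "Server vault", "issued_to": "jsmith",
       "issued_on": "2018-05-01", "do_not_duplicate": True, "returned_on": None},
  ]

  def keys_to_recover(terminated_user):
      """List keys (and therefore locks) to replace when a key holder is terminated."""
      return [k for k in key_register
              if k["issued_to"] == terminated_user and k["returned_on"] is None]

  for key in keys_to_recover("jsmith"):
      print(f"Replace lock and key {key['key_no']} for {key['room']}")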

Digital or electronic key management relates to cryptography and is covered throughout Chapter 6.

Exam Essentials

Understand physical access control. Physical access control refers to mechanisms designed to manage and control entrance into a location. Being able to control who can gain physical proximity to your environment (especially your computers and networking equipment) allows you to provide true security for your data, assets, and other resources. Without physical access control, you have no security.

Know about lighting. Lighting is a commonly used form of perimeter security control. The primary purpose of lighting is to discourage casual intruders, trespassers, prowlers, or would-be thieves who would rather perform their misdeeds in the dark, such as vandalism, theft, and loitering.

Understand signs. Signs can be used to declare areas as off limits to those who are not authorized, to indicate that security cameras are in use, and to disclose safety warnings.

Know about fencing, gates, and cages. A fence is a perimeter-defining device. Fencing protects against casual trespassing and clearly identifies the geographic boundaries of a property. Fences are used to clearly differentiate between areas that are under a specific level of security protection and those that aren’t. A gate is a controlled exit and entry point in a fence. A cage is an enclosed fence area that can be used to protect assets from being accessed by unauthorized individuals.

Understand security guards. All physical security controls, whether static deterrents or active detection and surveillance mechanisms, ultimately rely on personnel to intervene and stop actual intrusions and attacks. Security guards exist to fulfill this need.

Comprehend alarms. Physical IDSs, also called burglar alarms, detect unauthorized activities and notify the authorities (internal security or external law enforcement).

Understand safes. Any device or removable media containing highly sensitive information should be kept locked securely in a safe when not in active use.

Know about PDS. Protected distribution or protective distribution systems (PDSs) (also known as protected cabling systems) are the means by which cables are protected against unauthorized access or harm.

Understand mantraps. A mantrap is a form of high-security barrier entrance device. It’s a small room with two doors: one to the trusted environment and one to the outside. A person must properly authenticate to unlock the inner door and gain entry.

Realize the importance of a Faraday cage. A Faraday cage is an enclosure that blocks or absorbs electromagnetic fields or signals.

Understand biometrics. Biometrics is the collection of physical attributes of the human body that can be used as authentication factors (something you are). Biometrics include fingerprints, palm scans (use of the entire palm as if it were a fingerprint), hand geometry (geometric dimensions of the silhouette of a hand), retinal scans (pattern of blood vessels at the back of the eye), iris scans (colored area of the eye around the pupil), facial recognition, voice recognition, signature dynamics, and keyboard dynamics.

Comprehend environmental monitoring. Environmental monitoring is the process of measuring and evaluating the quality of the environment within a given structure.

Understand humidity management. Throughout the organization, humidity levels should be managed to keep the relative humidity between 40 and 60 percent. Low humidity allows static electricity buildup, with discharges capable of damaging most electronic equipment. High humidity can allow condensation, which leads to corrosion.

Know about hot and cold aisles. Hot and cold aisles are a means of maintaining optimum operating temperature in large server rooms.

Understand fire suppression. Early fire detection and suppression is important because the earlier the discovery, the less damage will be caused to the facility and equipment. Personnel safety is always of utmost importance.

Know about cable locks. A cable lock is used to keep smaller pieces of equipment from being easy to steal.

Understand screen filters. It may be worthwhile to install screen filters that reduce the range of visibility of a screen down to a maximum of 30 degrees from perpendicular.

Know about cameras. Video surveillance, video monitoring, closed-circuit television (CCTV), and security cameras are all means to deter unwanted activity and create a digital record of the occurrence of events.

Understand motion detection. A motion detector, or motion sensor, is a device that senses movement or sound in a specific area. Many types of motion detection exist, including infrared, heat, wave pattern, capacitance, photoelectric, and passive audio.

Review Questions

You can find the answers in the Appendix.

  1. Which of the following allows the deployment of a publicly accessible web server without compromising the security of the private network?

    1. Intranet
    2. DMZ
    3. Extranet
    4. Switch
  2. An organization has a high-speed fiber Internet connection that it uses for most of its daily operations, as well as its offsite backup operations. This represents what security problem?

    1. Single point of failure
    2. Redundant connections
    3. Backup generator
    4. Offsite backup storage
  3. A security template can be used to perform all but which of the following tasks?

    1. Capture the security configuration of a master system
    2. Apply security settings to a target system
    3. Return a target system to its precompromised state
    4. Evaluate compliance with security of a target system
  4. What technique or method can be employed by hackers and researchers to discover unknown flaws or errors in software?

    1. Dictionary attacks
    2. Fuzzing
    3. War dialing
    4. Cross-site request forgery
  5. What is a security risk of an embedded system that is not commonly found in a standard PC?

    1. Power loss
    2. Access to the Internet
    3. Control of a mechanism in the physical world
    4. Software flaws
  6. To ensure that whole-drive encryption provides the best security possible, which of the following should not be performed?

    1. Screen lock the system overnight.
    2. Require a boot password to unlock the drive.
    3. Lock the system in a safe when it is not in use.
    4. Power down the system after use.
  7. In order to avoid creating a monolithic security structure, organizations should adopt a wide range of security mechanisms. This concept is known as __________.

    1. Defense in depth
    2. Control diversity
    3. Intranet buffering
    4. Sandboxing
  8. When offering a resource to public users, what means of deployment provides the most protection for a private network?

    1. Intranet
    2. Wireless
    3. Honeynet
    4. DMZ
  9. When you are implementing a security monitoring system, what element is deployed in order to detect and record activities and events?

    1. Correlation engine
    2. Tap
    3. Sensor
    4. Aggregation switch
  10. When an enterprise is using numerous guest OSs to operate their primary business operations, what tool or technique can be used to enable communications between guest OSs hosted on different server hardware but keep those communications distinct from standard subnet communications?

    1. VPN
    2. SDN
    3. EMP
    4. FDE
  11. What type of OS is designed for public end-user access and is locked down so that only preauthorized software products and functions are enabled?

    1. Kiosk
    2. Appliance
    3. Mobile
    4. Workstation
  12. When you need to test new software whose origin and supply chain are unknown or untrusted, what tool can you use to minimize the risk to your network or workstation?

    1. Hardware security module
    2. UEFI
    3. Sandboxing
    4. SDN
  13. What is the concept of a computer implemented as part of a larger system that is typically designed around a limited set of specific functions (such as management, monitoring, and control) in relation to the larger product of which it’s a component?

    1. IoT
    2. Application appliance
    3. SoC
    4. Embedded system
  14. What is an industrial control system (ICS) that provides computer management and control over industrial processes and machines?

    1. SCADA
    2. HSM
    3. OCSP
    4. MFD
  15. Which SDLC model is based around adaptive development where focusing on a working product and fulfilling customer needs is prioritized over rigid adherence to a process, use of specific tools, and detailed documentation?

    1. Waterfall
    2. Agile
    3. Spiral
    4. Ad hoc
  16. When an organization wishes to automate many elements and functions of IT management, such as development, operations, security, and quality assurance, they are likely to be implementing which of the following?

    1. SCADA
    2. UTM
    3. IaaS
    4. DevOps
  17. What is not a cloud security benefit or protection?

    1. CASB
    2. SECaaS
    3. VM sprawl
    4. VM isolation
  18. What form of cloud service provides the customer with the ability to run their own custom code but does not require that they manage the execution environment or operating system?

    1. SaaS
    2. PaaS
    3. IaaS
    4. SECaaS
  19. What recovery mechanism is used to return a system back to a previously operating condition when a new software install corrupts the operating system?

    1. Revert to known state
    2. Roll back to known configuration
    3. Live boot media
    4. Template
  20. What type of security mechanism can be used to prevent a vehicle from damaging a facility?

    1. Fencing
    2. Lighting
    3. Bollard
    4. Access cards