Chapter 8
Security Architecture

This chapter discusses enterprise security architecture, a critical component of defense in depth. Years ago, perimeter security and simply having antivirus software on a computer may have been enough. This is not true anymore. Today, every time a user turns on a computer, clicks a link, or opens an email, there is the potential that an attack could occur. This is why host-based solutions such as anti-malware, antivirus, and anti-spyware are important to the defense-in-depth strategy, but these items are only part of the solution. Good logical security is just like good physical security, and it requires hardening. Logical security doesn't require you to build a 10-foot concrete wall around your computer, but it does require you to harden the enterprise in such a way as to make an attacker's job more difficult. That job starts by removing unwanted services. It also includes implementing security policies and controls. Finally, it's about building in the principle of least privilege: provide only what the user needs to do the task at hand, and nothing more, while maintaining a secure baseline.

The term enterprise security refers to a holistic view of security. Our view of IT security has changed over the years in that items such as data security, IT security, and physical security are now seen as just components of a total security solution. Enterprise security is a framework for applying comprehensive controls designed to protect a company while mapping key services to needed information systems. The goals of enterprise security are as follows:

  • To add value to the company
  • To align the goals of IT with the goals of the company
  • To establish accountability
  • To verify that a pyramid of responsibility has been established that starts with the lowest level of employees and builds up to top management

Enterprise security has become much more important over the last decade. It's easy to see why, when you consider the number of security breaches and reports of poor security management.

In this chapter, we'll look at enterprise security architecting and hardening. We will examine topics such as asset management and the role that intrusion detection and intrusion prevention play. CompTIA expects you to know these topics for the exam. You may be presented with scenario questions, simulations, or even drag-and-drop situations in which you must properly position required controls and countermeasures.

Security Requirements and Objectives for a Secure Network Architecture

“Without a solid foundation, you'll have trouble creating anything of value,” according to Erika Fromm, a German psychologist. This quote is true of personal relationships, startup businesses, and cybersecurity architecture. Building a strong foundation and architecture is a means to reduce the risk of compromise and protect your assets using security principles, methods, frameworks, and modeling. In short, security architecture translates enterprise requirements into executable security requirements.

Services

The TCP/IP suite was not built for enterprise security. Its primary design consideration was usability, and although that was acceptable for the early Internet, today secure data flows, protocols, procedures, and services are needed to meet ever-changing business needs. Some weaknesses stem from unnecessary services left running or from flaws in the protocols themselves, whereas others are defects in the software that implements the protocols and runs those services. It is important to know that many transmission protocols do not provide encryption.

The best way to build cyber resilience is through redundancy: having more than one of a system, service, device, or other component. Power, environmental controls, hardware and software, network connectivity, and any other factor that can fail or be disrupted needs to be assessed. Single points of failure—places where a single device or other element failing could disrupt or stop the system from functioning—must be identified, assessed, and either compensated for or documented in the design. Unneeded and extraneous open ports, applications, and services provide additional avenues for attackers, and default accounts are a huge attack vector if they aren't removed or changed. These applications and services can also leave other ports open, providing yet another vector for reconnaissance and attack. Securing the root (Linux) and Administrator (Windows) accounts is equally important: an unsecured root or administrator account could have a serious impact on the entire system and anything it's connected to.

After the assessment work has been finished, a strategy is created that balances needs, requirements, options, and the cost to build and operate the environment. Designs regularly include concessions made to accommodate complexity, staffing, or other limitations, weighed against the overall risk and likelihood of occurrence of the dangers identified in the review and design phases.

Load Balancers

Load balancers make multiple systems or services appear as a single resource, allowing both redundancy and an increased ability to handle the load by distributing it to more than one system. Load balancers are also commonly used to allow system upgrades by redirecting traffic away from systems that will be upgraded and then returning that traffic after the systems are patched or upgraded. Load balancers are also used to provide a centralized point of defense.

A proxy load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Proxy load balancers are used to increase the capacity of concurrent users and the reliability of applications. Proxy load balancers improve the overall performance of applications by decreasing the overall burden on servers associated with managing and maintaining applications and network sessions, as well as by performing application-specific tasks. Proxy load balancers are normally grouped into two categories: layer 4 and layer 7. Layer 4 proxy load balancers act upon data found in Network and Transport layer protocols (IP, TCP, and UDP). Layer 7 proxy load balancers distribute requests based on data found in Application layer protocols such as HTTP. Proxy load balancers ensure reliability and availability by monitoring the health of applications and sending requests only to servers and applications that can respond in a timely manner.
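
To make the health-check and distribution behavior concrete, the following minimal Python sketch shows the selection logic a proxy load balancer might apply; the backend addresses, the /health URL, and the timeout are illustrative assumptions rather than details of any particular product.

    import itertools
    import urllib.request

    # Hypothetical backend pool; the addresses and /health path are placeholders.
    BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]
    _rotation = itertools.cycle(BACKENDS)

    def is_healthy(backend, timeout=2):
        """Keep a backend in rotation only if it answers HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(backend + "/health", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def next_backend():
        """Round-robin over the pool, skipping backends that fail the health check."""
        for _ in range(len(BACKENDS)):
            candidate = next(_rotation)
            if is_healthy(candidate):
                return candidate
        raise RuntimeError("no healthy backends available")

    print("forwarding request to", next_backend())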

Intrusion Detection Systems and Intrusion Prevention Systems

Another element of protection in security architecture is the introduction of an intrusion detection system (IDS). An IDS gathers and analyzes information from the computer or network it is monitoring. An IDS can be considered a type of network management and monitoring tool. The type of activity that the IDS will detect depends on where the sensor is placed. Before discussing the types of intrusion detection systems, let's first review the various ways in which an intrusion is detected.

Intrusions are detected in one of three basic ways:

  • Signature Recognition Signature recognition relies on a database of known attacks, and it is also known as misuse detection. Each known attack is loaded into the IDS in the form of a signature. Once the signatures are loaded, the IDS can begin to guard the network. The signatures are usually given a number or name so that they are easily identified when an attack occurs. For example, these signatures may include the Ping of Death, SYN floods, or Smurf DoS. The biggest disadvantage to signature recognition is that these systems can trigger only on signatures that are known and loaded. Polymorphic attacks and encrypted traffic may not be properly assessed. Tools such as Snort are still an invaluable part of the security administrator's arsenal, and custom signatures will frequently be used depending on the organizational environment. They should, however, be combined with other measures that ensure the integrity of the system on which they are installed for the goal of defense in depth.
  • Anomaly Detection Anomaly detection systems detect an intrusion based on deviations from the expected behavior of a set of characteristics. If an attacker can slowly change their activity over time, an anomaly-based system may not detect the attack and may come to believe that the activity is actually acceptable. Anomaly detection is good at spotting behavior that is significantly different from normal activity. Normal activity is captured over days, weeks, or even months to establish a baseline. Rules can be written to compare current behavior with the baseline to find any anomalies (a minimal sketch of this comparison follows this list). The CASP+ exam may also use the word heuristics to describe monitoring behavior.
  • Protocol Decoding This type of system uses models of the TCP/IP protocols and their specifications. Protocol-decoding IDSs have the ability to reassemble packets and look at higher-layer activity. If the IDS knows the normal activity of a protocol, it can pick out abnormal activity. Protocol-decoding intrusion detection requires the IDS to maintain state information. To detect these intrusions effectively, an IDS must understand a wide variety of Application layer protocols. This can be useful in a situation where an attacker is attempting to use a custom TCP stack on a compromised machine to evade detection.
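
To illustrate the anomaly detection approach referenced above, here is a minimal Python sketch: a baseline is learned from samples of normal activity, and later observations that stray too far from it are flagged. The metric, the sample values, and the three-standard-deviation cutoff are illustrative assumptions.

    import statistics

    def build_baseline(samples):
        """Learn 'normal' behavior from historical observations (e.g., logins per hour)."""
        return statistics.mean(samples), statistics.stdev(samples)

    def is_anomalous(value, baseline, sigmas=3.0):
        """Flag values more than `sigmas` standard deviations away from the learned mean."""
        mean, stdev = baseline
        return abs(value - mean) > sigmas * stdev

    history = [42, 38, 45, 40, 44, 39, 41, 43]      # captured over days or weeks
    baseline = build_baseline(history)
    print(is_anomalous(44, baseline))               # False: close to normal activity
    print(is_anomalous(400, baseline))              # True: likely worth an alert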

Here are some of the basic components of an IDS implementation:

  • Sensors: Detect events and send data to the central monitoring system.
  • Central monitoring system: Processes and analyzes data sent from sensors.
  • Report analysis: Offers information about how to counteract a specific event.
  • Database and storage components: Perform trend analysis and then store the IP address and information about the attacker.
  • Response box: Takes input from the previous components and forms an appropriate response. For example, the response box might decide to block, drop, or even redirect network traffic.
  • Alert definitions: An alert definition is basically the who, what, when, where, and why of the intrusions that occur. The first step in creating an alert definition is choosing the rule action. The rule action tells the IDS what to do when it finds a packet that matches the criteria of the rule. (A brief example of a rule built from these actions follows the list.)
    • Alert: Generates an alert using the selected alert method and then logs the packet
    • Log: Logs the packet
    • Pass: Ignores the packet
    • Activate: Alerts and then turns on another dynamic rule
    • Dynamic: Remains idle until activated by an activate rule and then acts as a log rule
    • Drop: Blocks and logs the packet
    • Reject: Blocks the packet, logs it, and then sends a TCP reset if the protocol is TCP. If UDP, an ICMP port unreachable message is sent.
    • Sdrop: Blocks the packet but does not log it
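
As noted above, the rule action is the first field of a Snort-style rule. The short Python sketch below simply assembles rule strings so that you can see where the action fits; the network variables, messages, and SIDs are illustrative assumptions, not rules intended for production use.

    # Snort-style rule layout: action  protocol  src  src_port  ->  dst  dst_port  (options)
    def make_rule(action, proto, src, sport, dst, dport, msg, sid):
        options = f'(msg:"{msg}"; sid:{sid}; rev:1;)'
        return f"{action} {proto} {src} {sport} -> {dst} {dport} {options}"

    # 'alert' generates an alert and logs the packet; 'drop' blocks and logs it.
    print(make_rule("alert", "icmp", "any", "any", "$HOME_NET", "any",
                    "ICMP ping detected", 1000001))
    print(make_rule("drop", "tcp", "any", "any", "$HOME_NET", "80",
                    "Suspicious HTTP traffic", 1000002))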

There are three types of thresholds that you can apply to your IDS alerts. Most monitoring tools support these thresholds.

  • Fixed Threshold Alert-based technology is often triggered by the use of a fixed threshold: an alert fires whenever a metric crosses a predefined numeric value.
  • State-Based Threshold Metrics are sometimes used with discrete values that involve states of the information system. State-based thresholds have alerts that occur when there is a change in the metric value. An example of a state-based threshold is when a specific program process has started.
  • Historical Threshold Metrics are compared by using historical thresholds, that is, numerical values from the past and present over a set time frame. Network engineers often use historical thresholds to compare traffic spikes during the current week versus past weeks (see the sketch following this list).
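
The historical threshold comparison might look like the following minimal Python sketch, which compares the current week's traffic figure with the average of past weeks; the sample values and the 50 percent tolerance are illustrative assumptions.

    from statistics import mean

    def exceeds_historical(current, past_values, tolerance=0.5):
        """Alert if the current value exceeds the historical average by more than `tolerance`."""
        baseline = mean(past_values)
        return current > baseline * (1 + tolerance)

    past_weeks_gb = [310, 295, 330, 305]            # outbound traffic in past weeks (GB)
    print(exceeds_historical(320, past_weeks_gb))   # False: within the expected range
    print(exceeds_historical(612, past_weeks_gb))   # True: traffic spike worth investigating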

Alert fatigue is the point at which it becomes too difficult for a security analyst to discern important alerts from the stream of all of the information being received. Security analysts have to review each and every alert to decide if it really is a threat or just another false positive, as many alerts appear the same at first. When magnified by multiple systems delivering such alerts, the volume can quickly become overwhelming, making it difficult to distinguish the true positives from the false positives. Some of these alerts require aggregation before the combination can be confirmed as a true positive and the potential business impact can be assessed. This is why many security operations centers have tiered responders with different levels of expertise evaluating the alerts.

Placement of such an IDS system is another important concern. Placement requires consideration because a sensor in the demilitarized zone (DMZ) will work well at detecting misuse but will prove useless against attackers who have already compromised a system and are inside the network. Once placement of sensors has been determined, they still require specific tuning and baselining to learn normal traffic patterns.

IDS sensors can be placed externally in the DMZ or inside the network. Sensors may also be placed on specific systems that are mission critical. Sensor placement will in part drive the decision as to what type of intrusion system is deployed. Sensors may also be placed inline where one IDS performs signature-based scanning and permits valid traffic to the second IDS, which then performs heuristic or another scan type. This helps guarantee that a bottleneck is not created by placing too much demand on a single IDS. Intrusion detection systems are divided into two broad categories: network intrusion detection system and host intrusion detection system.

Network Intrusion Detection System

Much like a network sniffer, a network intrusion detection system (NIDS) is designed to capture and analyze network traffic. A NIDS inspects each packet as it passes by. Upon detection of suspect traffic, the action taken depends on the particular NIDS and its current configuration. It might be configured to reset a session, trigger an alarm, or even disallow specific traffic. NIDSs have the ability to monitor a portion of the network and provide an extra layer of defense between the firewall and host. Their disadvantages include the fact that attackers can perform insertion attacks, session splicing, and even fragmentation to prevent a NIDS from detecting an attack. Also, if an inline network encryptor (INE) is used, the NIDS would see only encrypted traffic.

Host Intrusion Detection System

A host intrusion detection system (HIDS) is designed to monitor a computer system and not the network. HIDSs examine host activity and compare it to a baseline of acceptable activities. These activities are determined by using a database of system objects that the HIDS should monitor. HIDSs reside on the host computer and quietly monitor traffic and attempt to detect suspect activity. Suspect activity can range from attempted system file modification to unsafe activation of commands. Things to remember about HIDSs include the fact that they consume some of the host's resources, but they also have the potential to analyze encrypted traffic and trigger an alert when unusual events are discovered after the traffic is decrypted at the endpoint.

In high-security environments, devices such as an inline network encryptor (INE) and an inline media encryptor (IME) may also be used. An INE is a device that sits along the path of a public or unsecured network when its users need to maintain communications security using that network. The network might be packet switched or ATM (allowing for higher capacity), but the INE permits strong, Type 1 encryption. (Type 1 is NSA-speak for products that ensure encryption while still allowing network addressing.) This is different from a VPN in that an INE is not point-to-point. An INE could be placed for use by a group of users.

An IME is similar to an INE, except that it sits in line between the computer processor and hard drive to secure data in transit.

Although both NIDS and HIDS provide an additional tool for the security professional, they are generally considered passive devices. An active IDS can respond to events in simple ways such as modifying firewall rules. These devices are known as intrusion prevention systems (IPSs). Just as with IDS, an IPS can be either host- or network-based.

Network Intrusion Prevention System

A network intrusion prevention system (NIPS) builds on the foundation of IDS and attempts to take the technology a step further. A NIPS can react automatically and prevent a security occurrence from happening, preferably without user intervention. This ability to intervene and stop known attacks is the greatest benefit of the NIPS; however, it suffers from the same type of issues as the NIDS, such as the inability to examine encrypted traffic and difficulties with handling high network loads.

Host Intrusion Prevention System

A host intrusion prevention system (HIPS) is generally regarded as being capable of recognizing and halting anomalies. The HIPS is considered the next generation of IDS, and it can block attacks in real time. This process monitoring is similar to that performed by antivirus software, and the HIPS has the ability to monitor system calls. HIPSs have disadvantages in that they require host resources and must process identified anomalies at the application level; a HIPS that is configured only to send alerts, rather than to act on them, does not actually prevent attacks.

Wireless Intrusion Detection System and Wireless Prevention System

Wireless intrusion detection systems (WIDSs) and wireless intrusion prevention systems (WIPSs) use devices that are built on the same philosophy as NIDS/NIPS; however, they focus on reacting to rogue wireless devices rather than singular security events. WIDS will alert when an unidentified wireless device is detected. Depending on configuration, a WIPS can do the same, as well as prevent the use of the wireless device. A best practice is to review alerts from a WIDS manually rather than shut a system down immediately. You may knock out another legitimate business's wireless access point by accident.

Web Application Firewalls

Web application firewalls (WAFs) are a technology that helps address the concerns of web application security. The WAF is not a replacement for a traditional firewall, but it adds another layer of protection. Whereas traditional firewalls block or allow traffic, WAFs can protect against cross-site scripting (XSS), hidden field tampering, cookie poisoning, and even SQL injection attacks. WAFs operate by inspecting the upper layers of the OSI model and also tie in more closely with specific web applications.

Think of it in this way: a conventional firewall may deny inbound traffic at the perimeter. A WAF is a firewall sitting between a web client and a web server, analyzing OSI layer 7 traffic. These devices have the ability to perform deep packet inspection and look at requests and responses within the HTTP/HTTPS/SOAP/XML-RPC/Web Service layers. As with any security technology, WAFs are not 100 percent effective; there are various methods and tools used to detect and bypass these firewalls.
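
The following Python sketch shows, in greatly simplified form, the kind of layer 7 inspection a WAF performs: request parameters are checked against patterns associated with XSS and SQL injection before the request is allowed through. The patterns and parameter names are illustrative assumptions; real WAF rule sets are far more extensive.

    import re

    # Simplified signatures for two attack classes a WAF commonly blocks.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"<\s*script", re.IGNORECASE),          # cross-site scripting
        re.compile(r"\bor\s+1\s*=\s*1", re.IGNORECASE),    # classic SQL injection
        re.compile(r"union\s+select", re.IGNORECASE),      # UNION-based SQL injection
    ]

    def inspect_request(params):
        """Return 'block' if any parameter value matches a known-bad pattern, else 'allow'."""
        for value in params.values():
            if any(p.search(value) for p in SUSPICIOUS_PATTERNS):
                return "block"
        return "allow"

    print(inspect_request({"q": "cooking recipes"}))                    # allow
    print(inspect_request({"q": "' OR 1=1 --"}))                        # block
    print(inspect_request({"comment": "<script>alert(1)</script>"}))    # block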

Two examples of automated detection tools are w3af and wafw00f. There are many more. Also available are various methods of exploiting inherent vulnerabilities in WAFs, which differ according to the WAF technology. One of the most prominent is cross-site scripting (XSS), which is one of the very things WAFs are designed to prevent. Some WAFs can detect attack signatures and try to identify a specific attack, whereas others look for abnormal behavior that doesn't fit the website's normal traffic patterns.

It may not be a viable option for a company or organization to cease communications across the Internet. This is where the WAF comes into the picture. A WAF can be instrumental in protecting these organizations from emerging threats inherent in social networking and other web applications that the conventional firewall was not designed to defend against. One open-source example of a WAF is ModSecurity. Commercial options are offered through Barracuda Networks, Fortinet, and Cisco Systems.

Network Access Control

For large and small businesses alike, achieving optimal network security is a never-ending quest. A CASP+ plays a big part in securing critical systems. One potential solution to these issues is network access control (NAC). NAC offers administrators a way to verify that devices meet certain health standards before they're allowed to connect to the network. Laptops, desktop computers, or any device that doesn't comply with predefined requirements can be prevented from joining the network or can be relegated to a controlled network where access is restricted until the device is brought up to the required security standards. Several types of NAC solutions are available:

  • Infrastructure-based NAC requires an organization to upgrade its hardware and operating systems. If your IT organization plans to roll out Windows 11 or has budgeted an upgrade of your Cisco infrastructure, you're well positioned to take advantage of infrastructure NAC.
  • Endpoint-based NAC requires the installation of software agents on each network client. These devices are then managed by a centralized management console.
  • Hardware-based NAC requires the installation of a network appliance. The appliance monitors for specific behavior and can limit device connectivity should noncompliant activity be detected.

Virtual Private Networks

One item that has really changed remote access is the increased number of ways that individuals can communicate with their company and clients. Hotels, airports, restaurants, coffee shops, and so forth now routinely offer Internet access. It's a low-cost way to get connectivity, yet it's a public network. This is why virtual private networks were created.

A virtual private network (VPN) is a mechanism for providing secure, reliable transport over the Internet. VPNs are secure virtual networks built on top of physical networks. The value of a VPN lies in the ability to encrypt data between the endpoints that define the VPN network. Because the data is encrypted, outside observers on a public network are limited in what they can see or access. From a security standpoint, it is important to understand that a VPN is not a protocol. It's a method of using protocols to facilitate secure communications.

VPNs can be either hardware- or software-based. Both hardware and software VPNs offer real value and help protect sensitive company data.

  • Hardware-Based VPNs Hardware-based VPNs offer the ability to move the computational duties from the CPU to hardware. The hardware add-on product handles computationally intensive VPN tasks and can be useful for connecting remote branch offices. These solutions work well but require the purchase of additional hardware, which adds complexity to the network.
  • Software-Based VPNs Software-based VPNs are easy to build and implement. Several companies, such as PublicVPN.com, Anonymizer.com, VyprVPN.com, and many others, offer quick, easy-to-install software VPN solutions. These options do not require an investment in additional hardware and are extremely valuable for smaller firms with a limited IT staff because they are easier for the IT engineer to set up and maintain. However, in these situations, the company is relying on a third-party VPN provider. This approach could be problematic if companies need to control all aspects of their communications, such as those governed by business partner agreements (BPAs).

Companies continually work to improve protocols designed for use with VPNs. For example, in the mid-1990s, Microsoft led a consortium of networking companies to extend the Point-to-Point Protocol (PPP). The goal of the project was to build a set of protocols designed to work within the realm of VPNs. The result of this work was the Point-to-Point Tunneling Protocol (PPTP). The purpose of PPTP was to enable the secure transfer of data from a remote user to a server via a VPN. A VPN can be implemented using several different protocols, but the two that will be discussed here are IPsec and SSL. At a high level, the key distinction between the two is that IPsec VPNs operate at the Network layer, while SSL VPNs operate at the Application layer.

In the context of an SSL VPN, you'll recall that the SSL protocol is no longer regarded as secure. The VPN is actually using TLS. While this VPN implementation is still popularly referred to as an SSL VPN, as a CASP+ you must understand that SSL/TLS is in use.

A big advantage to using an SSL/TLS VPN is stricter control, governing access per application rather than to the network as a whole. In fact, restricting access per user is simpler with SSL/TLS VPNs than with an IPsec VPN. The final and arguably the most common reason why small organizations lean toward an SSL/TLS VPN is cost. IPsec VPNs require specialized end-user software, which likely includes licensing costs. Such costly client software isn't a requirement when connecting via SSL/TLS, which is a common feature in any browser today.

VPN Access and Authentication

The traditional perspective of a VPN is a tunnel. That perspective comes from the IPsec VPN, because functionally an IPsec VPN connects the remote client so that it becomes a part of the local, trusted network. IPsec VPNs function in either tunnel or transport mode. Remote access can be defined as either centralized or decentralized. Centralized access control implies that all authorization verification is performed by a single entity within a system; two such systems are RADIUS and Diameter.

  • RADIUS Configurations Remote Authentication Dial-In User Service (RADIUS) originally used a modem pool for connecting users to an organization's network. The RADIUS server will contain usernames, passwords, and other information used to validate the user. Many systems formerly used a callback system for added security control. When used, the callback system calls the user back at a predefined number.

    RADIUS today carries authentication traffic from a network device to the authentication server. With IEEE 802.1X, RADIUS extends layer-2 Extensible Authentication Protocol (EAP) from the user to the server.

  • Diameter Diameter was designed to be an improvement over RADIUS and to handle mobile users better through IP mobility. It also provides functions for authentication, authorization, and accounting. Despite these efforts, RADIUS still remains popular today.

Decentralized access control can be described as having various entities located throughout a system performing authorization verification. Authentication is a key area of knowledge for the security professional because it serves as the first line of defense. The process consists of two pieces: identification and authentication. As an example, think of identification as me saying, “Hi, I am Alice.” It's great that I have provided Bob with that information, but how does Bob know that it is really Alice? What you need is a way to determine the veracity of the claim. That's the role of authentication. Following the previous scenario, after providing her name, authentication would then require Alice to show her license, offer a secret word, or provide fingerprints.

Examples of access authentication protocols and tools include the following:

  • Password Authentication Protocol Password Authentication Protocol (PAP) is a simple protocol used to authenticate a user to a network access server that passes usernames and passwords in clear text. PAP is an older authentication system designed to be used on phone lines and with dial-up systems. PAP uses a two-way handshake to authenticate, but it is considered weak.
  • Challenge Handshake Authentication Protocol Challenge Handshake Authentication Protocol (CHAP) is used to provide authentication across point-to-point links using the Point-to-Point Protocol (PPP). CHAP uses a challenge/response process and makes use of a shared secret. It's more secure than PAP and provides protection against replay attacks. CHAP provides authentication through a three-way handshake (a short sketch of the response calculation appears after this list). Once the client is authenticated, it is periodically asked to re-authenticate to the connected party through the use of a new challenge message.

    MS-CHAP v2 is Microsoft's updated, password-based version of CHAP, and it is widely used as an authentication method in Point-to-Point Tunneling Protocol (PPTP)–based VPNs.

  • Lightweight Directory Access Protocol Lightweight Directory Access Protocol (LDAP) is an application protocol used to access directory services across a TCP/IP network. LDAP was created to be a lightweight alternative protocol for accessing X.500 directory services. X.500 is a series of computer networking standards covering electronic directory services.
  • Active Directory Active Directory (AD) is Microsoft's implementation of directory services and makes use of LDAP. AD retains information about access rights for all users and groups in the network. When a user logs on to the system, AD issues the user a globally unique identifier (GUID). Applications that support AD can use this GUID to provide access control. Although AD helps simplify sign-on and reduces overhead for administrators, there are several ways that it might be attacked.

    The difficulty in attacking AD is that it is inward facing, meaning that it would be easier for an insider than an outsider to target AD. One attack methodology is escalation of privilege. In this situation, the attacker escalates an existing user's privilege up to administrator or domain administrator. Other potential attack vectors may include targeting password hashes or Kerberos pre-authentication.
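
Returning to CHAP, the following minimal Python sketch shows the response calculation defined in RFC 1994: the response is the MD5 hash of the one-byte identifier, the shared secret, and the challenge, so the secret itself never crosses the wire. The identifier, secret, and challenge values are illustrative assumptions.

    import hashlib
    import os

    def chap_response(identifier, secret, challenge):
        """RFC 1994: response = MD5(identifier || shared secret || challenge)."""
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    secret = b"shared-secret"            # known to both client and authenticator (placeholder)
    challenge = os.urandom(16)           # random challenge sent by the authenticator
    identifier = 1

    client_answer = chap_response(identifier, secret, challenge)
    server_expects = chap_response(identifier, secret, challenge)
    print(client_answer == server_expects)   # True: client proved knowledge of the secret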

The primary vulnerability associated with authentication is dependent on the method used to pass data. PAP passes usernames and passwords via clear text and provides no security. Passwords can be easily sniffed; however, all of these protocols have suffered from varying levels of exploitation in the past.

VPN Placement

VPN technology requires consideration as to placement. You can choose among various design options for placement of VPN devices in your network. For example, you can place the VPN device parallel to a firewall in your network. The advantage of this approach is that it is highly scalable, because multiple VPN devices can be deployed in parallel with the firewall. Yet with this approach, no centralized point of content inspection is implemented.

Another potential placement of the VPN device is in the screened subnet on the firewall in the network. This network design approach allows the firewall to inspect decrypted VPN traffic, and it can use the firewall to enforce security policies. One disadvantage is that this design placement option may impose bandwidth restrictions.

Your final design option is an integrated VPN and firewall device in your network. This approach may be easier to manage with the same or fewer devices to support. However, scalability can be an issue because a single device must scale to meet the performance requirements of multiple features. There is also the question of system reliability. If the VPN fails, does the firewall fail too? Having a mirrored VPN and firewall is the way to ensure reliability if you choose this model.

Domain Name System Security Extensions

Before service is enabled on any DNS server, it should be secured. Some of the most popular DNS server software, such as the Internet Systems Consortium's BIND, has suffered from a high number of vulnerabilities in the past that have allowed attackers to gain access to and tamper with DNS servers. Alternatives to BIND such as Unbound are available, and you should consider them if your infrastructure permits. DNS is one of the services that you should secure, as there are many ways an attacker can target DNS. One such attack is DNS cache poisoning. This type of attack sends fake entries to a DNS server to corrupt the information stored there. DNS can also be susceptible to denial-of-service (DoS) attacks and unauthorized zone transfers. DNS uses UDP port 53 for DNS queries and TCP port 53 for zone transfers. Securing the zone transfer process is an important security control.

The integrity and availability of DNS are critical for the health of the Internet. One common approach to securing DNS is to manage two DNS servers: one internal and one external. Another approach is using Domain Name System Security Extensions (DNSSEC). DNSSEC is a real consideration, since one of the big issues with running two DNS servers is that the external DNS server that provides information to external hosts remains vulnerable to attack. This is because DNS servers have no mechanism of trust. A DNS client cannot normally determine whether a DNS reply is valid.

With DNSSEC, the DNS server digitally signs every response. For DNSSEC to function properly, authentication keys have to be distributed before use. Otherwise, DNSSEC is of little use if the client has no means to validate the authentication. DNSSEC authenticates only the DNS server and not the content. Even if the DNS server is configured for DNSSEC, situations can arise where the server may sign the results for a domain that it is impersonating. You can read more about DNSSEC at www.dnssec.net.
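
As a rough illustration, the sketch below uses the third-party dnspython library to request a zone's DNSKEY records with DNSSEC data and verify the accompanying RRSIG. The domain, the resolver address, and the assumption that the answer section holds the DNSKEY RRset followed by its RRSIG are all illustrative, and the exact API may vary between dnspython releases.

    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdatatype

    zone = dns.name.from_text("example.com.")          # zone to check (assumption)
    resolver_ip = "8.8.8.8"                            # validating resolver to query (assumption)

    # Ask for the zone's DNSKEY records and request DNSSEC records (RRSIG) as well.
    query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
    response = dns.query.udp(query, resolver_ip, timeout=5)

    # Assumes the answer section contains the DNSKEY RRset and then its RRSIG RRset.
    dnskey_rrset, rrsig_rrset = response.answer[0], response.answer[1]

    try:
        dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {zone: dnskey_rrset})
        print("DNSKEY RRset signature validated")
    except dns.dnssec.ValidationFailure:
        print("DNSSEC validation failed")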

Firewall/Unified Threat Management/Next-Generation Firewall

Today, a range of security devices are available to help security professionals secure critical assets. These include antivirus, anti-spyware, host-based firewalls, next-generation firewalls (NGFWs), intrusion detection and prevention systems, and so on. But what if you could combine much of this technology into one common device?

Actually, you can do that, as it is what unified threat management (UTM) is designed to accomplish. UTM is an all-in-one security product that can include multiple security functions rolled into a single appliance. UTMs can provide network firewalling, network intrusion prevention, and gateway antivirus. They also provide gateway anti-spam and offer encrypted network tunnels via VPN capability, content filtering, and log/audit reporting. The real benefit of UTM is simplicity and an all-in-one approach to data flow enforcement. For smaller organizations, a single purchase covers most common security needs, and the device can be controlled and configured from a single management console. UTM devices are typically placed at the edge of the network. They offer the convenience of an all-in-one device, but the drawback is that if the device fails, there is no remaining protection.

Network Address Translation and Internet Gateway

Although more organizations are adopting IPv6 today, IPv4 is still the industry standard. IPv4 worked well for many years, but as more and more devices have been added to the Internet, the number of free addresses has decreased. IPv4 provides for approximately 4.3 billion addresses, which may seem like a lot, but these addresses have been used up at an increasingly rapid rate, mostly due to all of the Class B networks already allocated.

Several methods were adopted to allow for better allocation of IPv4 addresses and to extend the use of the existing address space. One was the concept of variable-length subnet masking (VLSM). VLSM was introduced to allow flexible subdivision into varying network sizes. Another was the introduction of network address translation (NAT). NAT builds on RFC 1918, which set aside three ranges of addresses to be designated as private: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. NAT was designed to help with the shortage of IPv4 addresses, to provide a low level of security, and to ease network administration for small to medium businesses.

A network address translation (NAT) gateway (NGW) is used to enable devices in a private subnet with no public IP addresses to connect to the Internet or cloud services. It also prevents the Internet from connecting directly to those devices. An Internet gateway (IGW), on the other hand, allows a logical connection between an instance with a public IP to connect to the Internet.
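
The RFC 1918 ranges mentioned earlier are easy to test for programmatically. The short Python sketch below uses the standard library ipaddress module to decide whether an address is private and therefore needs a NAT gateway to reach the Internet; the sample addresses are illustrative.

    import ipaddress

    RFC1918_NETWORKS = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(address):
        """Return True if the address falls inside one of the RFC 1918 private ranges."""
        ip = ipaddress.ip_address(address)
        return any(ip in network for network in RFC1918_NETWORKS)

    print(is_rfc1918("192.168.1.25"))   # True: private, needs NAT to reach the Internet
    print(is_rfc1918("172.32.0.1"))     # False: just outside 172.16.0.0/12
    print(is_rfc1918("8.8.8.8"))        # False: public address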

Forward/Transparent and Reverse Proxy

A popular network component that is used for protection is a proxy. Many think of a proxy as a mechanism that “stands in place of.” A transparent proxy, also called a forward proxy, inline proxy, intercepting proxy, or forced proxy, is a server that intercepts the connection between an end user or device and the Internet. It is called transparent because it does not modify requests and responses, yet it can still block malicious incoming traffic.

Using a transparent proxy, for example, an end user on a corporate network requests to view a page on www.cnn.com and views the same information as they would on their local connection at home. The page is delivered from a transparent proxy running on the corporate network. The user's experience is precisely the same, but the user's employer now has the ability to monitor their behavior and restrict access to specific websites. Squid Transparent Proxy Server is a popular open-source transparent proxy tool.

A reverse proxy server sits at the edge of the network and handles the policy management and traffic routing. It receives the connection request from the user, completes a TCP three-way handshake, connects with the origin server, and sends the original request. The reverse proxy is well suited for scrubbing incoming application traffic before it goes to a backend server. This helps an organization with DDoS protection to minimize impact, helps with web application protection to drop malicious packets, and can even reroute traffic to ensure availability.

Routers

Another critical network component is the router. Routers are considered OSI layer 3 components. A router's primary purpose is to forward packets out the correct port toward their destination through a process known as routing. Routers primarily work with two items: routing protocols and routable protocols. A good example of a routable protocol is IP. Routers examine IP packet headers and determine where packets should be forwarded. The header contains the destination address, and the router is already aware of which port connects to the network for which that destination is local. The path packets take is determined by the routing protocol. Examples of routing protocols include Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). Routers can forward IP packets to networks that have the same or different medium types. Routers can also be targeted by attackers. Some common attacks include route poisoning and Internet Control Message Protocol (ICMP) redirect attacks.

Improper router configurations are a big security risk. Although physical controls are important, software controls are needed to prevent router attacks.

  • Transport Security Increased network security risks and regulatory compliances have driven the need for wide area network (WAN) transport security. Examples of transport security include IPsec, TLS, and SSL. IPsec is the Internet standard for security. Designed as an add-on to IPv4, it is also integrated with IPv6. TLS and SSL perform the same service but are implemented differently. The most widely used version of TLS is v1.2, but the latest, v1.3, is already supported in the current version of most major web browsers.
  • Port Security Port security is a layer 2 traffic control feature of Cisco Catalyst switches. Port security gives security analysts and network administrators the capability to configure individual switch ports to allow only a specific number of source MAC addresses to ingress the port. Port security's primary function is to deter users from adding what are known as dumb switches that illegally extend the reach of the network. Adding unmanaged devices can complicate troubleshooting for administrators and is not recommended. Port security is enabled with default parameters by issuing a single command on an interface.
  • Remotely Triggered Black Hole Filtering Remotely Triggered Black Hole (RTBH) filtering is a technique that uses routing protocol updates to manipulate route tables at the network edge, or anywhere else in the network, specifically to drop undesirable traffic before it enters the service provider network.

    One major area that needs to be mitigated is distributed denial-of-service (DDoS) attacks. Consider a DDoS attack launched from the Internet targeting a server: once the attack is detected, black holing can be used to drop all DDoS attack traffic at the edge of an Internet service provider (ISP) network, based on either the destination or source IP address. Black holing is done by forwarding this traffic to a Null0 interface.

    In addition to service degradation of the target, the entire internal infrastructure will be affected due to high bandwidth consumption and processor utilization. Because of the distributed nature of the attack, network administrators must block all inbound traffic destined for the victim at the edge.

  • Trunking Security Trunking security is an important concern when discussing virtual local area networks (VLANs). VLANs started as a security and traffic-control mechanism used to separate network traffic. The VLAN model works by separating its users into workgroups, such as engineering, marketing, and sales. Today, many companies prefer campus-wide VLANs because VLANs have to span and be trunked across the entire network. A trunk is simply a link between two switches that carries more than one VLAN's data.

    From a security perspective, this is a concern. If an attacker can get access to the trunked connection, they can potentially jump from one VLAN to another. This is called VLAN hopping. It is important to make sure that trunked connections are secure so that malicious activity cannot occur. Cisco has several ways to incorporate VLAN traffic for trunking. These techniques may include the IEEE's implementation of 802.1Q or Cisco's Inter-Switch Link (ISL).

Distributed denial-of-service (DDoS) attacks have been around a long time, and they're still a valid area of concern. Let's look at a few protective measures that can be taken for a router or switch.

  • Route Protection Route or path protection means to ensure that controls are in place to protect the network flow end to end. That's a general definition, but the actual technique depends highly on what type of network is being protected, be it an Optical Mesh network, a Multi-Protocol Label Switching (MPLS) network, or something else.
  • DDoS Protection Distributed denial-of-service (DDoS) attacks can be crippling to an organization, particularly if the traffic is so debilitating that the network engineer loses control and connectivity to devices to help stem the attack. One mitigating technique is to use remotely triggered black hole routing. As the name suggests, if network traffic is identified as unwanted, that traffic is sent to the network version of a “black hole” and dropped.

Mail Security

Many individuals would agree that email is one of the greatest inventions to come out of the development of the Internet. It is the most used Internet application. Just take a look around the office and see how many people use Android phones, iPhones, laptops, tablets, and other devices that provide email services. Email provides individuals with the ability to communicate electronically through the Internet or a data communications network.

Although email has many great features and provides a level of communication previously not possible, it's not without its problems. Now, before we beat it up too much, you must keep in mind that email was designed in a different era. Decades ago, security was not as much of a driving issue as usability. By default, email sends information via clear text, so it is susceptible to eavesdropping and interception. Email can be easily spoofed so that the true identity of the sender may be masked. Email is also a major conduit for spam, phishing, and viruses. Spam is unsolicited bulk mail. Studies by Symantec and others have found that spam is much more malicious than in the past. Although a large amount of spam is used to peddle fake drugs, counterfeit software, and fake designer goods, today it is more often aimed at delivering malware via malicious URLs.

As for functionality, email operates by means of several underlying services, which can include the following:

  • Simple Mail Transfer Protocol Simple Mail Transfer Protocol (SMTP) is used to send mail and relay mail to other SMTP mail servers and uses port 25 by default. The secure version uses SSL/TLS on port 587 or 465.
  • Post Office Protocol Post Office Protocol (POP3), the current version, is widely used to retrieve messages from a mail server. POP3 performs authentication in clear text on port 110. The secure version uses SSL/TLS on port 995.
  • Internet Message Access Protocol Internet Message Access Protocol (IMAP) can be used as a replacement for POP3 and offers advantages over POP3 for mobile users. IMAP has the ability to work with mail remotely and uses port 143. The secure version uses SSL/TLS on port 993.

Basic email operation consists of the SMTP service being used to send messages to the mail server. To retrieve mail, the client application, such as Outlook, may use either POP or IMAP. Using a tool like Wireshark, it is very easy to capture clear-text email for review, which reinforces the importance of protecting email with PGP, SSL/TLS, or other encryption methods.
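
To tie the port numbers above to practice, here is a minimal Python sketch that submits a message on port 587 and upgrades the session with STARTTLS before authenticating; the server name, credentials, and addresses are illustrative assumptions.

    import smtplib
    import ssl
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"          # placeholder sender
    msg["To"] = "bob@example.com"              # placeholder recipient
    msg["Subject"] = "Quarterly report"
    msg.set_content("See attached figures.")

    context = ssl.create_default_context()

    # Port 587 (submission): start in clear text, then upgrade the session with STARTTLS.
    with smtplib.SMTP("mail.example.com", 587, timeout=10) as server:
        server.starttls(context=context)
        server.login("alice@example.com", "app-password")   # placeholder credentials
        server.send_message(msg)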

Application Programming Interface Gateway/Extensible Markup Language Gateway

Today's systems are much more distributed than in the past and have a much greater reliance on the Internet. At the same time, there has been a move toward delivering functionality as services.

Have you noticed that many of the web-based attacks today no longer target web servers but are focused on web applications? Web-based applications continue to grow at a rapid pace, and securing them is a huge job. Application programming interfaces (APIs) are interfaces between clients and servers or applications and operating systems that define how the client should ask for information from the server and how the server will respond. This definition means that programs written in any language can implement the API and make requests.

APIs are tremendously useful for building interfaces between systems, but they can also be a point of vulnerability if they are not properly secured. API security relies on authentication, authorization, proper data scoping to ensure that too much data isn't released, rate limiting, input filtering, and appropriate monitoring and logging to remain secure. An API gateway sits between an external client and the application running on premises or in the cloud. An API gateway can validate an incoming request, send it to the right service, and deliver the correct response. An API gateway can act as an API proxy so the original API is not exposed.
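
The following Python sketch captures, in simplified form, the gateway duties just described: validating the incoming request with an API key check, rate limiting the caller, and routing the call to the correct internal service while keeping the original API hidden. The key store, route table, and limits are illustrative assumptions.

    import time
    from collections import defaultdict

    VALID_API_KEYS = {"k-123": "mobile-app"}                 # placeholder key store
    ROUTES = {"/orders": "http://orders.internal:8000",      # placeholder internal services
              "/billing": "http://billing.internal:8000"}
    RATE_LIMIT = 100                                         # requests per minute per key
    _request_log = defaultdict(list)

    def handle(api_key, path):
        """Validate, rate limit, then return the backend the request should be proxied to."""
        if api_key not in VALID_API_KEYS:
            return 401, "invalid API key"
        now = time.time()
        window = [t for t in _request_log[api_key] if now - t < 60]
        if len(window) >= RATE_LIMIT:
            return 429, "rate limit exceeded"
        _request_log[api_key] = window + [now]
        backend = ROUTES.get(path)
        if backend is None:
            return 404, "unknown route"
        return 200, f"proxy to {backend}{path}"   # original API stays hidden from the client

    print(handle("k-123", "/orders"))
    print(handle("bad-key", "/orders"))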

Web services also make use of Extensible Markup Language (XML) messages. XML is a popular format for encoding data, and it is widely used for a variety of applications, ranging from document formats such as Microsoft Office to web formats. An XML gateway serves as an entry point for web traffic and an outbound proxy for internal web service consumers.

Traffic Mirroring

Traffic mirroring is used by cloud vendors to monitor network traffic. Traffic mirroring copies the inbound and outbound traffic from the network interfaces that are attached to the cloud instance or load balancer. The purpose of a traffic mirror is to take the data and share it with monitoring tools for content inspection, threat monitoring, and often, troubleshooting when issues arise.

Switched Port Analyzer Ports

Some tools, such as a NIDS, are limited in that they can see only the span of the network to which they are attached. This brings up an important fact: because a NIDS sees only that span, it is often placed on a mirrored port, that is, a port to which all traffic passing through the network device is forwarded. This could be a mirrored port on a switch or even a router. In the Cisco world, these ports are referred to as Switched Port Analyzer (SPAN) ports for obvious reasons.

Port Mirroring

Port mirroring is a technique often used by network engineers to analyze data and troubleshoot issues. A port mirror is configured on a layer 2 network switch by reserving one port to which a copy of packets is sent for a monitoring device, connection, or another switch. When the switch processes a packet, it makes a copy and sends it to the reserved port. This type of monitoring is vital for observing the network and diagnosing outages and bottlenecks.

Virtual Private Cloud

A virtual private cloud (VPC) is a highly secured, flexible, remote private cloud located inside a public cloud. A VPC user can do anything they would normally do in a private cloud, allowing an enterprise to create its own computing environment on shared infrastructure. A VPC's logical isolation is implemented for security reasons, giving very granular control to administrators.

Network Tap

A network tap is a hardware device plugged into a specific spot in a network where data can be accessed for testing or troubleshooting. Network taps usually have four ports, two for the network and two for monitoring. The network ports collect information. The monitoring ports provide a copy of this traffic to a device attached for monitoring. This way, traffic can be monitored without changing the flow of data.

Sensors

Sensors are increasingly used in cybersecurity; a sensor could be anything from a firewall log source to a network tap. A sensor collects information about the network and feeds it to tools around the environment, where data analysis drives decisions about safety measures and defense. Host-based sensors provide accurate information, and network sensors provide extensive coverage but can be vulnerable to threat actors. Developing a secure sensor network requires redundancy to ensure adequate protection against cyberattack.

Security Information and Event Management

Another popular security solution is security information and event management (SIEM). A SIEM solution helps security professionals identify, analyze, and report on threats quickly based on data in log files. SIEM solutions keep IT security professionals from being overwhelmed with audit data and endless logs so that they can assess security events without drowning in event data. This service is the combination of two separate reporting and recording areas: security information management (SIM) and security event management (SEM). SIM technologies are designed to process and handle the long-term storage of audit and event data. SEM tools are designed for real-time reporting of events. Combining these two technologies provides users with the ability to alert, capture, aggregate, review, and store event data and log information from many different systems and sources, which allows for visualization and monitoring for patterns. The primary drawback to using these systems is that they are complex to set up and require multiple databases. Vendors that offer SIEM tools include ArcSight, Splunk, Lacework, and Exabeam.
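
A small Python sketch of the aggregation and correlation idea behind SIEM follows: events from different log sources are normalized into a common form, failed logins are counted per source address, and an alert is raised past a threshold. The event format and the threshold of five are illustrative assumptions.

    from collections import Counter

    # Normalized events as they might arrive from several log sources.
    events = [
        {"source": "vpn", "type": "failed_login", "ip": "203.0.113.7"},
        {"source": "webapp", "type": "failed_login", "ip": "203.0.113.7"},
        {"source": "windows", "type": "failed_login", "ip": "203.0.113.7"},
        {"source": "webapp", "type": "failed_login", "ip": "203.0.113.7"},
        {"source": "vpn", "type": "failed_login", "ip": "203.0.113.7"},
        {"source": "webapp", "type": "login", "ip": "198.51.100.4"},
    ]

    THRESHOLD = 5   # failed logins from one address across all sources

    failures = Counter(e["ip"] for e in events if e["type"] == "failed_login")
    for ip, count in failures.items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed logins from {ip} across multiple sources")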

File Integrity Monitoring

File integrity monitoring (FIM) works in two modes: agent-based and agentless. With agent-based FIM, an agent sits on a host and provides real-time monitoring of files. The FIM agent also removes the repeated scanning load from the host and network. The biggest disadvantage of a FIM agent is that it consumes host resources. With a FIM agent installed, a local baseline gets established; thereafter, the agent engages, and uses those host resources, only when a qualifying change occurs. FIM agents must possess all of the capabilities for detecting unauthorized changes, be platform independent, and be capable of reporting what has been changed and who has made the changes.

Agentless FIM scanners are effective only at their scheduled scan times; there is no real-time detection or reporting capability. Agentless FIM scanners must also re-baseline and rehash every single file on the system each time they scan. One advantage of agentless FIM scanners is that they are easy to operate without the hassle of maintaining endpoint agents.
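
Conceptually, both FIM modes rest on the same cycle: hash the monitored files, store a baseline, and report any file whose hash later differs. The Python sketch below illustrates that cycle; the monitored directory and the choice of SHA-256 are illustrative assumptions.

    import hashlib
    from pathlib import Path

    def hash_file(path):
        """Return the SHA-256 digest of a file's contents."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def build_baseline(directory):
        """Record a hash for every readable file under the monitored directory."""
        baseline = {}
        for p in Path(directory).rglob("*"):
            if p.is_file():
                try:
                    baseline[str(p)] = hash_file(p)
                except OSError:
                    pass   # skip files the scanner cannot read
        return baseline

    def detect_changes(directory, baseline):
        """Compare current hashes against the baseline and report differences."""
        current = build_baseline(directory)
        added = set(current) - set(baseline)
        removed = set(baseline) - set(current)
        modified = {p for p in current.keys() & baseline.keys() if current[p] != baseline[p]}
        return added, removed, modified

    baseline = build_baseline("/etc")        # monitored directory (assumption)
    # ... later, on a schedule or when the agent is triggered ...
    print(detect_changes("/etc", baseline))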

Simple Network Management Protocol Traps

Simple Network Management Protocol (SNMP) is a UDP service that operates on ports 161 and 162, and it is used for network management. SNMP allows agents to gather such information as network statistics and report back to their management stations. Most large corporations have implemented some type of SNMP management. Some of the security problems that plague SNMP are caused by the fact that v1 and v2 community strings can be passed as clear text and that the default community strings (public/private) are well known. SNMP version 3 is the most current; it offers encryption for more robust security. SNMPv3 uses the same ports as v1 and v2.

SNMP messages are categorized into five basic types: TRAP, GET, GET-NEXT, GET-RESPONSE, and SET. The SNMP manager and SNMP agent use these messages to communicate with each other. SNMP traps are very popular mechanisms for managing and monitoring activity. They are used to send alert messages from an enabled device (agent) to a collector (manager). For example, an SNMP trap might instantly report an event like an overheating server, which would help maintain reliability and prevent possible data loss. Properly configured, traps can also help identify latency and congestion.
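
The sketch below shows how a script might emit a generic trap to a manager using the third-party pysnmp library; the manager address, community string, and coldStart OID are illustrative assumptions, and the exact import paths can differ between pysnmp versions.

    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        NotificationType, ObjectIdentity, sendNotification,
    )

    # Send an SNMPv2c coldStart trap to the manager listening on UDP 162.
    errorIndication, errorStatus, errorIndex, varBinds = next(
        sendNotification(
            SnmpEngine(),
            CommunityData("public", mpModel=1),               # community string (assumption)
            UdpTransportTarget(("192.0.2.50", 162)),          # manager address (assumption)
            ContextData(),
            "trap",
            NotificationType(ObjectIdentity("1.3.6.1.6.3.1.1.5.1")),   # coldStart OID
        )
    )

    if errorIndication:
        print("trap not sent:", errorIndication)
    else:
        print("trap sent to manager")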

NetFlow

Before you take the CASP+ exam, you should be familiar with the tool Wireshark, an example of a network sniffer and analysis tool. The topic of this section is not network sniffers; however, if you were to use a network sniffer to look at the traffic on your local network, you might be surprised by what you would see. That is because most of the network data that is flowing or traversing a typical network is very easy to inspect. This in turn is because most of the protocols, such as FTP, HTTP, SMTP, NTP, Telnet, and others, are sending data across the network via clear text. This simply means that the traffic is not encrypted and is easy to view. The early designers of TCP/IP and what has become the modern Internet were not concerned about encryption; they were simply trying to get everything to work. This is a concern for the security professional because attackers can easily intercept and view this network traffic. The concept of protection of data and secure protocols such as SSH and SSL/TLS came much later.

Even when data is protected with technologies such as SSL/TLS and IPsec, an intruder may still be able to break into your network. One way to detect current threats and to correlate multiple sources of internal and external information is to analyze network data flow. This is known as network flow analysis. The concept involves using existing network infrastructure. Flow analysis provides a different perspective on traffic movement in networks. It allows you to look at how often an event occurred in a given time period. As an example, how often was traffic containing encrypted zip files leaving your network between midnight and 2 a.m. headed to Russia on weekends? With flow analysis tools, security professionals can view this type of user activity in near real time.

Distilling the massive amount of data that flows through modern networks requires tools that allow for the aggregation and correlation of data. Cisco Systems was one of the first to market this technology with the development of NetFlow. Initially, the captured network flow data answered only the most basic questions. Today, NetFlow and similar tools help identify data exfiltration (as mentioned), possibly misconfigured devices, systems misbehaving at odd times, and, of course, unexpected or unaccountable network traffic. NetFlow is still popular but arguably designed for Cisco environments. Alternatively, IPFIX is an Internet Engineering Task Force (IETF) standards-based technology that allows other layer 3 network devices to collect and analyze network flows.
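
To make flow analysis concrete, the Python sketch below aggregates simplified flow records (plain dictionaries standing in for NetFlow or IPFIX exports) and flags hosts that send unusually large volumes outbound during the overnight window described earlier. The field names, volumes, and threshold are illustrative assumptions.

    from collections import defaultdict
    from datetime import datetime

    # Simplified flow records as a collector might export them.
    flows = [
        {"src": "10.0.0.23", "dst": "198.51.100.9", "bytes": 920_000_000,
         "start": datetime(2023, 5, 6, 1, 12)},
        {"src": "10.0.0.23", "dst": "198.51.100.9", "bytes": 640_000_000,
         "start": datetime(2023, 5, 6, 1, 40)},
        {"src": "10.0.0.41", "dst": "203.0.113.80", "bytes": 4_000_000,
         "start": datetime(2023, 5, 6, 14, 5)},
    ]

    THRESHOLD_BYTES = 500_000_000   # flag hosts moving more than ~500 MB overnight

    overnight = defaultdict(int)
    for flow in flows:
        if 0 <= flow["start"].hour < 2:             # midnight to 2 a.m. window
            overnight[flow["src"]] += flow["bytes"]

    for host, total in overnight.items():
        if total > THRESHOLD_BYTES:
            print(f"possible exfiltration: {host} sent {total:,} bytes overnight")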

Data Loss Prevention

Developing a secure network infrastructure requires building management controls for growth. Companies will grow and change over time. Acquisitions, buyouts, adoption of new technologies, and the retirement of obsolete technologies all mean the infrastructure will change. Data loss prevention (DLP) requires the analysis of egress network traffic for anomalies and the use of better outbound firewall controls that perform deep packet inspection. Deep packet inspection is normally performed by a device at a network boundary, for example by a web application firewall at the trusted network's perimeter. To select where such a device should be placed in any organization, it's important to have a data flow diagram depicting where and how data flows throughout the network.

Companies must have the ability to integrate storage devices such as storage area networks (SANs) as needed. A periodic review of the security and privacy considerations of storage integration needs is required to keep track of storage requirements, as companies have a never-ending need for increased data storage and data backups.

Antivirus

Antivirus companies have developed much more effective ways of detecting viruses. Yet the race continues as virus writers have fought back by developing viruses that are harder to detect. Antivirus programs typically use one of several techniques to identify and eradicate viruses. These methods include the following:

  • Signature-Based This technique uses a signature file to identify viruses and other malware. It requires frequent updates. (A minimal signature-matching sketch follows this list.)
  • Heuristic-Based This detection technique looks for deviations from the normal behavior of an application or service. This method is useful against unknown and polymorphic viruses and may use machine learning to look for patterns of malicious code.
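The sketch below shows the signature-based approach in its simplest form: hash each file and compare the result against a set of known-malware hashes. The hash value in the set is a placeholder, not a real signature, and commercial products use far richer signatures and update feeds than whole-file hashes.

 import hashlib
 from pathlib import Path

 # Placeholder signature database; real products ship frequently updated feeds.
 KNOWN_BAD_SHA256 = {"0" * 64}  # not a real malware hash

 def sha256_of(path: Path) -> str:
     digest = hashlib.sha256()
     with path.open("rb") as fh:
         for chunk in iter(lambda: fh.read(65536), b""):
             digest.update(chunk)
     return digest.hexdigest()

 def scan(directory: str) -> None:
     """Flag any file whose hash matches a known-bad signature."""
     for file in Path(directory).rglob("*"):
         if file.is_file() and sha256_of(file) in KNOWN_BAD_SHA256:
             print(f"MATCH: {file} matches a known-malware signature")

 if __name__ == "__main__":
     scan(".")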

Years ago, antivirus may have been considered an optional protection mechanism, but that is no longer true. Antivirus software is the best defense against basic types of malware. Most detection software contains a library of signatures used to detect viruses. Viruses can use different techniques to infect and replicate. The following techniques are common:

  • Boot Record Infectors Reside in the boot sector of the computer
  • Macro Viruses Target Microsoft Office programs such as Word documents and Excel spreadsheets
  • Program Infectors Target executable programs
  • Multipartite Infectors Target both boot records and programs

Once your computer is infected, the computer virus can do any number of things. Some are known as fast infectors. Fast infection viruses infect any file that they are capable of infecting. Other viruses use sparse infection techniques. Sparse infection means that the virus takes its time in infecting other files or spreading its damage. This technique is used to try to avoid detection. Still other types of malware load themselves into memory and remain active there even after the infected file is closed; these are known as RAM-resident (memory-resident) viruses. One final technique used by malware creators is to design the virus to be polymorphic. Polymorphic viruses can change their signature every time they replicate and infect a new file. This technique makes it much harder for the antivirus program to detect the virus.

Preventing viruses and worms begins with end-user awareness. Users should be trained to practice care when opening attachments or running unknown programs. User awareness is a good first step, but antivirus software is essential. There are a number of antivirus products on the market, among them the following programs:

  • Bitdefender
  • Trend Micro
  • Norton Antivirus
  • McAfee Antivirus

Some well-known examples of malware include the following:

  • WannaCry: Massive ransomware attack of 2017, copied in several subsequent attacks
  • Petya: Family of various ransomware types spread through phishing emails
  • CryptoLocker: Malware/Trojan horse/ransomware that gained access to and encrypted files on more than 200,000 Windows-based systems
  • Mirai: The first major malware to spread through Internet of Things devices
  • Melissa: First widespread macro virus
  • ILoveYou: First widespread mass-mailing worm, seducing people to click due to its name
  • Code Red: Well-known worm that targeted Windows servers running IIS
  • Nimda: An older worm, but still one of the worst because it spread through several different infection mechanisms
  • Zeus: Banking Trojan used to steal credentials; its botnet was later used to distribute ransomware such as CryptoLocker
  • Conficker: Widespread worm that could propagate via network shares, removable media such as thumb drives, and an unpatched Windows service vulnerability

Segmentation

Segmentation is used to place systems with different functions or data security levels in different zones or segments of a network. Segmentation is also used in virtual and cloud environments. In principle, segmentation is the process of using security, network, or physical machine boundaries to build separation between environments, systems, networks, and other components.

Incident responders may choose to use segmentation techniques as part of a response process to move groups of systems or services so that they can focus on specific areas. You might choose to segment infected systems away from the rest of your network or to move crucial systems to a more protected segment to help protect them during an active incident.

Microsegmentation

Microsegmentation is a network security method that divides a datacenter into small, distinct security segments. After segmentation, a security architect can define security controls and deliver services for each unique segment. This allows for extremely flexible security policies and application-level security controls, increasing resistance to a cyberattack.

Local Area Network/Virtual Local Area Network

A local area network (LAN) is a computer network that connects computers within a local area such as a home, school, or office. A virtual LAN (VLAN) is used to segment network traffic. VLANs offer many benefits to an organization because they allow the segmentation of network users and resources that are connected administratively to defined ports on a switch. By creating smaller broadcast domains, VLANs reduce network congestion and make more efficient use of available bandwidth. VLANs can also be used to separate portions of the network that have lower levels of security. As a defense-in-depth technique, dedicated VLANs can add protection against sniffing, password attacks, and hijacking attempts. Although VLAN separation can be defeated, it adds a layer of defense that will keep out most casual attackers.

To increase security, network partitioning will segment systems into independent subnets. Network partitioning requires a review of potential network access points. Once these access points have been defined, a number of different technologies can be used to segment the network. Common and upcoming technologies include the use of packet filters, stateful inspection, application proxy firewalls, web application firewalls, multilayer switching, dispersive routing, and virtual LANs (VLANs)/virtual extensible LANs (VXLANs).

Jump Box

A jump box, often called a jump host or jump server, is a hardened device used to manage assets in different security zones. Administrators log in to the jump box as an origin point and then connect, or jump, to another server or untrusted environment. This helps keep threat actors from stealing credentials. Jump boxes are slowly evolving into a technology called a secure admin workstation (SAW). Neither a jump box nor a SAW should ever be used for nonadministrative tasks such as surfing the Internet, opening email, or using office applications. They are strictly for administrators performing administrative tasks.

Screened Subnet

A screened host firewall adds a router and screened host. The router is typically configured to see only one host computer on the intranet network. Users on the intranet have to connect to the Internet through this host computer, and external users cannot directly access other computers on the intranet.

A screened subnet sets up a type of demilitarized zone (DMZ). A screened subnet is a solution that organizations can implement to offer external-facing resources while keeping the internal network protected. Servers that host websites or provide public services are often placed within the DMZ.

DMZs are typically set up to give external users access to services within the DMZ. Basically, shared services such as an external-facing web server, email, and DNS can be placed within a DMZ; the DMZ provides no other access to services located within the internal network. Of course, what traffic is allowed or denied depends on the rules put into effect on either side of the screened subnet. Screened subnets and DMZs are the basis for most modern network designs.

Organizations implement a screened subnet by placing a perimeter network between the Internet and the internal network, bounded by two firewalls (or a single firewall configured with two sets of rules). This allows the organization to offer services to the public by letting Internet traffic pass through a firewall that is less restrictive than the firewall protecting the internal network. The screened subnet is the middle segment, and it provides an extra layer of protection by further isolating the internal network from the Internet. The screened subnet option is not an expensive one for an organization when using a triple-homed firewall, that is, a firewall with three network ports.

Data Zones

Examine security in your network from endpoint to endpoint and consider building security and data zones of protection to limit the reach of an attacker. This extends from where traffic enters the network to where users initially connect to the network and its resources. This requires defense in depth and availability controls. One of several approaches can be used to build different zones. These approaches include the following:

  • Vector-Oriented This approach focuses on common vectors used to launch an attack. Examples include disabling autorun on USB thumb drives, disabling USB ports, and removing CD/DVD burners.
  • Information-Centric This approach focuses on layering controls on top of the data. Examples include information controls, application controls, host controls, and network controls.
  • Protected Enclaves This approach specifies that some areas are of greater importance than others. Controls may include VPNs, strategic placement of firewalls, deployment of VLANs, and restricted access to critical segments of the network.

When considering endpoint security and data zones, who is the bigger threat: insiders or outsiders? Although the numbers vary depending on what report you consult, a large number of attacks are launched by insiders. A trusted insider who decides to act maliciously may bypass controls to access, view, alter, destroy, or remove data in ways that the employer disallows.

Outsiders may seek to access, view, alter, destroy, or remove data or information. Additionally, you must consider the occasional external breach, such as malware, which provides an outsider with access to the inside network. Conventional attacks continue to be reported with high rates of success, including simple attack mechanisms such as various types of malware, spear phishing, and other social engineering attacks. Once inside, the sophistication of these attacks increases dramatically as attackers employ advanced privilege escalation and configuration-specific exploits to establish future access, exfiltrate data, and evade security mechanisms. Configuring data zones helps regulate inbound and outbound traffic through the policies you create.

Staging Environments

A staging environment is another layer of defense in depth where an enterprise can model their production environment in order to test upgraded, patched, or new software to ensure that when the new code or software is deployed it doesn't break existing infrastructure or create new vulnerabilities. When an organization has a staging environment, it needs to match the existing environment as closely as possible so that analysis is accurate.

A best practice in a staging environment is to allow the software to run for a set period of time, historically up to seven days, before deployment in a live environment. However, the SolarWinds breach disclosed in late 2020 taught the cyber community just how crafty an attacker can be. Attackers are aware of our best practices, and the malware used in that attack was set up to execute only after about 14 days in order to outlast a staging period. A staging environment is another layer of defense in depth, and bad actors know how it is used.

Guest Environments

Any large organization will occasionally have scenarios where there are visitors to the organization who need access to a guest sharing environment. These individuals are not employees of the organization and could be bringing unmanaged devices with them. For obvious security reasons, these assets should not be joined to a protected domain. For that reason alone, anyone who is coming to an organization to take a class or visit the campus should sign a terms of use agreement and have web access only for those unmanaged devices. If they are allowed to work on sensitive or classified materials, the following should be put in place:

  • Configure a timeout policy so authentication happens daily.
  • Create an automatically assigned classification type for any sensitive materials guests are touching.
  • Set up multifactor authentication (MFA) for guests.
  • Conduct reviews to validate permissions to corporate sites and information.

VPC/Virtual Network

A virtual private cloud (VPC) is an on-demand, logically isolated cloud environment hosted inside a public cloud. Many VPC customers use this type of environment for testing code, storing information, or hosting a website. Many cloud providers offer a VPC type of ecosystem and require a more secure connection to access it.

A virtual network (VNET) is exactly what it sounds like. A VNET is where all devices, virtual machines, and datacenters are created and maintained with software. Most cloud providers offering VNET capabilities give customers a lot of control over their instances with IP addressing, subnets, routing tables, and firewall rules. This allows for scalability and efficiency.

Availability Zone

Another element of defense in depth is the creation of availability zones. The purpose of an availability zone is to have highly available independent locations within regions for failover and redundancy. For major cloud providers, a single zone has multiple physical datacenters.

Policies/Security Groups

Policies are high-level documents, developed by management, that transmit the overall strategy and philosophy of management to all employees. Senior management and process owners are responsible for the organization. Policies serve as a template in the sense that they translate the wishes of management into guidance. Policies detail, define, and specify what is expected from employees and how management intends to meet the needs of customers, employees, and stakeholders.

There are two basic ways in which policies can be developed. In some organizations, policy development starts at the top of the organization. This approach, known as top-down policy development, means that policies are pushed down from senior management to the lower layers of the company. The big advantage of top-down policy development is that it ensures that policy is aligned with the strategy and vision of senior management. A downside of such a process is that it requires a substantial amount of time to implement and may not fully address the operational concerns of average employees. An alternative approach would be to develop policy from the bottom up. The bottom-up approach to policy development addresses the concerns of average employees. The process starts with their input and concerns and builds on known risks that employees and managers of organizational groups have identified. The big downside is that the process may not always map well to senior management's strategy.

For modern cloud infrastructures, you can use tools such as AWS Firewall Manager security group policies to manage Amazon Virtual Private Cloud security groups for your organization in AWS Organizations. You can apply centrally controlled security group policies to your entire organization or to a select subset of your accounts and resources. You can also monitor and audit the security groups in use across your organization.

You should be able to identify specific types of policies before attempting the CASP+ exam. Some basic policy types include the following:

  • Regulatory A regulatory policy makes certain that the organization's standards are in accordance with local, state, and federal laws. Industries that make frequent use of these documents include healthcare, public utilities, refining, education, and federal agencies.
  • Informative An informative policy is not for enforcement; it is created to teach or help employees and others understand specific rules. The goal of informative policies is to inform employees or customers. An example of an informative policy for a retail store is that it has a 90-day cash return policy on items bought at the store if you keep your receipt.
  • Advisory An advisory policy is designed to ensure that all employees know the consequences of certain behavior and actions. An example of an advisory policy is an acceptable use policy (AUP). This policy may advise how the Internet can be used by employees and may disallow employees from visiting social networking or pornographic websites. The policy might state that employees found to be in violation of the policy could face disciplinary action, up to and including dismissal.

One specific type of policy in which a CASP+ should be interested is a company's security policy. The security policy is the document that dictates management's commitment to the use, operation, and security of information systems. You may think of this policy as only addressing logical security, but most security policies also look at physical controls. Physical security is an essential part of building a secure environment and a holistic security plan. The security policy specifies the role that security plays within a company. The security policy should be driven by business objectives. It must also meet all applicable laws and regulations. For example, you may want to monitor employees, but that doesn't mean placing CCTV in bathrooms or dressing rooms.

The security policy should be used as a basis to integrate security into all business functions and must be balanced in the sense that all organizations are looking for ways to implement adequate security without hindering productivity or violating laws. It's also important not to create an adversarial relationship with employees. Cost is an issue in that you cannot spend more on a security control than the value of the asset. Your job as a security professional is to play a key role in the implementation of security policies based on organizational requirements.

In your role as a security professional, look closely at the security policies that apply to you and your employees. As a CASP+, you should be able to compare and contrast security and privacy policies and procedures based on organizational requirements. If you are tasked with reviewing security policies, consider how well policy maps to actual activity. Also, have you addressed all new technology?

As business goals and strategies change, IT security policies will need to adapt to meet those changes. But factors external to the business, such as technological innovation and changing social expectations, will also force that adaptation of policies. For example, as smartphones became inexpensive and commonplace, this emerging risk called for new policies to address it. Finally, it's important to remember that policies don't last forever. For instance, a policy from 1992 that addressed the use of and restrictions on modems would need to be revisited. Older technologies, such as modems, become obsolete as new technologies become affordable; therefore, business processes have to change. It's sometimes easy to see that low-level procedures need to be updated, but this kind of change applies to high-level policies as well. Policies are just one level of procedural control. Next, our discussion will focus on procedures.

Procedures are documents that fall under policies. Consider procedures as more detailed documents that are built from the parent policy. Procedures provide step-by-step instructions. For example, your company may migrate from a Cisco to a Check Point firewall. In this situation, the policy would not change in that the policy dictates what type of traffic can enter or exit the network. What would change, however, is the procedure, because the setup and configuration of a Cisco device and a Check Point device are different.

Procedures are detailed documents; they are tied to specific technologies and devices. Procedure documents require more frequent changes than policy documents to stay relevant to business processes and procedures. Procedures change when equipment changes, when software changes, when policy changes, and even when the sales season changes. Any change will require a review of the procedure. This review process should be built into change management.

We have seen procedures that look great on paper but that cannot be carried out in real life. When policies are developed, they must be mapped back to real-life activities and validated. Although problems may be caught during an audit, that's after the fact and may mean that poor practices have been ongoing for some time. Misalignment can mean that the procedure doesn't map or is outdated, or that employees have not had the proper training on the procedure you asked to see in operation.

Regions

Geographic segmentation occurs in marketing because not all products are sold everywhere; regional preferences are considered. A global cybersecurity organization may similarly choose to segment networks and applications based on where the workforce resides. This helps create layers of defense in depth, improves performance, and reduces congestion. It can also reduce compliance requirements related to auditing.

Access Control Lists and Network Access Control

Firewalls can be hardware, software, or a combination of both. They are usually located at the demarcation line between trusted and untrusted network elements. Firewalls play a critical role in the separation of important assets. Firewall rules determine what type of traffic is inspected, what is allowed to pass, and what is blocked. The most basic way to configure firewall rules to create network access control (NAC) is by means of an access control list (ACL). A network access control list is a layer of security that controls traffic coming in or out of specific subnets.

There are two types of ACLs, filesystem and networking. Filesystem ACLs work as a filter with rules allowing or denying access to directories or files. This ACL gives the operating system directions about what level of privilege the user has. A networking ACL provides information to switches and routers through rules about who is allowed to interface with the network and what devices can do once inside the network. When NAC is used but an agent is not installed on a device, it is referred to as an agentless configuration. When using agentless NAC, the policy enforcement component is integrated into an authentication system like Microsoft Active Directory. The enforcement of policies is performed when the device logs on or off the network.

An ACL is used for packet filtering and for selecting the types of traffic to be analyzed, forwarded, or influenced in some way by the firewall or device. ACLs are a basic example of data flow enforcement. Simple firewalls, and more specifically ACL configuration, may block traffic based on the source and destination addresses. However, more advanced configurations may deny traffic based on interface, port, protocol, thresholds, and various other criteria. Before implementing ACLs, be sure to perform secure configuration and baselining of networking and security components. Rules placed in an ACL can be used for more than just allowing or blocking traffic. For example, rules may also log activity for later inspection or to record an alarm. Table 8.1 shows the basic rule set.

TABLE 8.1 Basic rule set

Rule number | Action | Protocol    | Port    | Direction     | Comment
Rule 20     | Allow  | DNS         | 53 UDP  | Outbound      | None
Rule 50     | Allow  | HTTP, HTTPS | 80, 443 | Outbound      | None
Rule 100    | Allow  | SMTP        | 25      | Inbound       | To mail server
Rule 101    | Allow  | SMTP        | 25      | Outbound      | From mail server
Rule 255    | Deny   | ALL         |         | Bidirectional | None
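To show how an ordered rule set such as Table 8.1 is evaluated, the following sketch applies first-match logic to a simplified packet description: rules are checked in numeric order, the first matching rule decides the action, and the final catch-all deny applies when nothing else matches. This illustrates generic ACL logic, not the syntax or behavior of any particular firewall product.

 from dataclasses import dataclass
 from typing import Optional

 @dataclass
 class Rule:
     number: int
     action: str          # "allow" or "deny"
     protocol: str        # "DNS", "HTTP", "ANY", ...
     port: Optional[int]  # None matches any port
     direction: str       # "outbound", "inbound", or "any"

 # Simplified version of Table 8.1, ending with an explicit deny-all.
 RULES = [
     Rule(20, "allow", "DNS", 53, "outbound"),
     Rule(50, "allow", "HTTP", 80, "outbound"),
     Rule(50, "allow", "HTTPS", 443, "outbound"),
     Rule(100, "allow", "SMTP", 25, "inbound"),
     Rule(101, "allow", "SMTP", 25, "outbound"),
     Rule(255, "deny", "ANY", None, "any"),
 ]

 def evaluate(protocol: str, port: int, direction: str) -> str:
     """Return the action of the first rule that matches the packet."""
     for rule in sorted(RULES, key=lambda r: r.number):
         if (rule.protocol in ("ANY", protocol)
                 and rule.port in (None, port)
                 and rule.direction in ("any", direction)):
             return f"rule {rule.number}: {rule.action}"
     return "implicit deny"  # reached only if no catch-all rule exists

 print(evaluate("HTTPS", 443, "outbound"))  # rule 50: allow
 print(evaluate("TELNET", 23, "outbound"))  # rule 255: deny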

For the CASP+ exam, you will need to have a basic understanding of ACLs and their format. The command syntax format of a standard ACL in a Cisco IOS environment is as follows:

 access-list access-list-number {permit|deny}
 {host|source source-wildcard|any}

There are also extended ACLs. These rules have the ability to look more closely at the traffic and inspect for more items, such as the following:

  • Protocol
  • Port numbers
  • Differentiated Services Code Point (DSCP) value
  • Precedence value
  • State of the synchronize sequence number (SYN) bit

The command syntax formats of extended IP, ICMP, TCP, and UDP ACLs are shown here:

IP Traffic

access-list access-list-number
 [dynamic dynamic-name [timeout minutes]]
 {deny|permit} protocol source source-wildcard
 destination destination-wildcard [precedence precedence]
 [tos tos] [log|log-input] [time-range time-range-name]

ICMP Traffic

access-list access-list-number
 [dynamic dynamic-name [timeout minutes]]
 { deny|permit } icmp source source-wildcard
 destination destination-wildcard
 [icmp-type [icmp-code] |icmp-message]
 [precedence precedence] [tos tos] [log|log-input]
 [time-range time-range-name]

TCP Traffic

access-list access-list-number
 [dynamic dynamic-name [timeout minutes]]
 { deny|permit } tcp source source-wildcard [operator [port]]
 destination destination-wildcard [operator [port]]
  [established] [precedence precedence] [tos tos]
 [log|log-input] [time-range time-range-name]

UDP Traffic

access-list access-list-number
 [dynamic dynamic-name [timeout minutes]]
 { deny|permit } udp source source-wildcard [operator [port]]
 destination destination-wildcard [operator [port]]
 [precedence precedence] [tos tos] [log|log-input]
 [time-range time-range-name]

Peer-to-Peer

As far as segmentation goes, peer-to-peer (P2P) networks are built on equality among nodes. In a P2P network, all computers and devices share and exchange workloads; there is no privileged node and no central administrator. P2P networks are well suited for file sharing because connected devices can receive and send files at the same time. A P2P network is hard to bring down and very scalable. Large files are often shared on this type of architecture; however, P2P can also be used for illegitimate activities such as piracy of copyrighted materials, which is punishable by law.

Air Gap

Air gap refers to systems that are not connected in any way to the Internet. An air-gapped system cannot be reached directly from the outside over a network, but it can still be compromised from the inside, for example through removable media or a malicious insider.

The 2010 Stuxnet attack is generally recognized as the first implementation of a worm as a cyber weapon. The worm was aimed at the Iranian nuclear program and copied itself to thumb drives to bypass air-gapped computers (physically separated systems without a network connection). Stuxnet took advantage of a number of techniques advanced for its time, including using a trusted digital certificate, searching for specific industrial control systems (ICSs) that were known to be used by the Iranian nuclear program, and specific programming to attack and damage centrifuges while providing false monitoring data to controllers to ensure that the damage would not be noticed until it was too late. For more information about Stuxnet, see this Wired article:

www.wired.com/2014/11/countdown-to-zero-day-stuxnet

Deperimeterization/Zero Trust

“Trust but verify” is an old Russian proverb. The phrase became popular during the late 1980s when U.S. President Ronald Reagan was negotiating nuclear disarmament with the Soviet Union's General Secretary, Mikhail Gorbachev. It fits the mindset of a cybersecurity professional: when an outcome matters more than a relationship, you have to trust but verify. In IT, safety and security are of utmost importance, and the outcome is critical. If a relationship matters more than an outcome, then this philosophy doesn't fit as well.

The zero trust security model, also known as deperimeterization or perimeter-less security, describes a philosophy of design and implementation in which nothing is trusted by default, whether it sits inside or outside the traditional perimeter. Key principles in zero trust include strong access and authorization policies, user and machine authentication, and a single source of truth for user identity.
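To make the never-trust-by-default idea concrete, here is a minimal sketch of a zero trust authorization check: every request must pass user authentication, device posture, and an explicit policy lookup, and anything that fails any check is denied. The user names, resources, and checks are illustrative assumptions, not a specific vendor's implementation.

 from dataclasses import dataclass

 @dataclass
 class AccessRequest:
     user: str
     mfa_passed: bool        # user authentication
     device_compliant: bool  # machine/device trust
     resource: str

 # Illustrative policy: which identities may reach which resources.
 POLICY = {("alice", "payroll-db"), ("bob", "wiki")}

 def authorize(req: AccessRequest) -> bool:
     """Deny by default; every request must pass every check."""
     if not req.mfa_passed:
         return False
     if not req.device_compliant:
         return False
     return (req.user, req.resource) in POLICY

 print(authorize(AccessRequest("alice", True, True, "payroll-db")))  # True
 print(authorize(AccessRequest("alice", True, True, "wiki")))        # False: no policy match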

Cloud

Cloud computing offers users the ability to increase capacity or add services as needed without investing in new datacenters, training new personnel, or maybe even licensing new software. This on-demand, or elastic, service can be added, upgraded, and provided at any time. With cloud computing, an organization is forced to place its trust in the cloud provider. The cloud provider must develop sufficient controls to provide the same or a greater level of security than the organization would have if the cloud were not used. Defense in depth is built on the concept that every security control is vulnerable. Cloud computing places much of the control in someone else's hands. One big area that is handed over to the cloud provider is access. Authentication and authorization are real concerns. Because the information or service now resides in the cloud, there is a much greater opportunity for access by outsiders. Insiders might pose an additional risk.

Insiders, or those with access, have the means and opportunity to launch an attack and lack only a motive. Anyone considering using the cloud needs to look at who is managing their data and what types of controls are applied to individuals who may have logical or physical access. Chapter 9, “Secure Cloud and Virtualization,” is dedicated to everything cloud.

Remote Work

The huge exodus from the corporate office to the home office precipitated by the recent pandemic created massive cyber-related security challenges. The biggest challenge is the mindset of the end user. Many remote workers in a recent poll said they would ignore or circumvent corporate security policies if they got in the way of work, and that includes connecting to a coffee shop Wi-Fi without a VPN. Leaders still get complaints about restrictive policies, and many work-from-home employees feel that these measures waste their time.

Most organizations have had to rework their security policies to adapt to increased remote work while IT teams still deal with ransomware, vulnerabilities, and data exfiltration. The only way past this is to foster a more cohesive and collaborative security culture in the organization and to balance security measures against ease of use and productive operations.

Mobile

Enterprise security is already a challenge, but with the advent of mobile devices it has gotten a whole lot harder. It is enough of a challenge to secure the environment within the four walls of the corporate building. Now wireless connectivity has pushed the network boundary well past the four walls. With mobile devices, employees expect simple and easy connectivity to both the enterprise and the world around them. Thankfully, there are several technologies and techniques available to integrate security controls to support mobile and small form factor devices.

  • Application, Content, and Data Management To ensure that the mobile device's integrity stays within acceptable limits, the IT team will want to maintain oversight and management of the applications, content, and data on the device.
  • Application Wrapping If enterprise users rely on a third-party application but the mobile device management team desires more control or wants to implement policy on that application, then one option is to use a process called application wrapping. Application wrapping involves adding a management layer around an app without making any changes to the app itself.
  • Remote Assistance Access Those managing mobile devices may need to assist users who are away from the office or cannot hand over their phones. The team could use an application such as virtual network computing (VNC), available on several platforms, to access the user's mobile device remotely. Another option is screen mirroring, which duplicates the phone's screen display on an authorized desktop, where an administrator can take control or transfer files as needed.
  • Configuration Profiles and Payloads How can an IT department manage to keep track of every user's mobile device configuration? Now increase that challenge due to the variety of available mobile devices. What makes it possible for IT departments to manage this situation is the use of configuration profiles. By establishing and remotely installing a group of configuration settings and implementing standard configuration profiles, managing a large number of mobile devices is effectively like managing a single device.

    A subset of configuration settings within a configuration profile is called a payload. You might have a “mail” payload that configures the users’ devices for all settings related to email, such as server, port, and address. Security restrictions can also be a part of the configuration profile payload.

  • Application Permissions For each application, your mobile operating system will allow the user effectively to control what that application can do, access, or execute on the user's behalf. Application permissions are accessible within the configuration settings of the mobile OS. Typical objects that an application may seek access to include the camera, microphone, phone application, and address book, among others. Obviously, access to any of these objects should be granted only if the application is deemed trusted and the decision to bypass an added user action is warranted.
  • VPN How mobile users connect to the enterprise environment impacts the risk that they place on the perimeter and network. Allowing users to connect over open coffee shop Wi-Fi without protection is unacceptable and all but guaranteed to cause the CASP+ headaches in the near future. Instead, establishing and using a virtual private network (VPN) is a far safer way for users to connect to the internal, trusted network. The caveat with mobile devices is that an always-on VPN consumes battery and data that much faster. The trick is to find a VPN solution with some intelligence to it and train it (and users) to establish a VPN only when it is truly needed.
  • Over-the-Air Updates

    In addition to managing configuration settings remotely, the team responsible for installing updates can do this remotely. This delivery, known as over-the-air (OTA) updates, avoids waiting for a user to come in or dock their device for the update. Software or the device's firmware can be updated in this way.

  • Remote Wiping Another process that a CASP+ might need to do without relying on the user being “in the building” is remotely wiping the device. For example, as part of the process of a termination, also known as offboarding, an employee's device could be remotely wiped, mitigating the risk of unauthorized device usage or abuse.

In the context of mobile devices, here are some concerns of which the CASP+ should be aware:

  • SCEP The Simple Certificate Enrollment Protocol (SCEP) is a popular method of certificate enrollment, which is the process of a user or device requesting and receiving a digital certificate. SCEP is highly scalable and widely implemented. However, its original designers went on hiatus for several years, and SCEP received little attention to address concerns about its cryptographic strength. According to CERT vulnerability note VU#971035, the authentication methods SCEP uses to validate certificate requests are ill suited for any environment other than a closed one. So, any mobile device management (MDM) deployment involving BYOD opens itself up to risks of unauthorized certificate requests.
  • Unsigned Apps/System Apps The concept of code signing is designed to ensure that an application developed by a third party has not been altered or corrupted since it was signed as valid. Of course, this matters when the original developer is assumed to be a trusted source. However, what about apps that are not signed? If a user is able to download and install unsigned applications and unsigned system apps, the mobile device is no longer assured to be trustworthy.
  • Sideloading Normally, when loading or installing an application, a user would use the device's application-distribution channel. For most devices, this would mean using Google Play or the Apple App Store. But if a user seeks an alternate, back-channel method to load an application that may not be accessible otherwise, that method is called sideloading. Naturally, the security concern for a CASP+ is why such an application requires sideloading and is not available through the normally vetted channel. Sideloading, or loading applications outside a controlled channel, means that the established process of inspecting an application for malicious code is likely being bypassed.
  • Context-Aware Management Security restrictions on mobile devices can be applied based on a user's behavior or physical location. Just as straightforward as location, time-based restrictions can be placed on a user's access and/or authorization. If a user attempts to access a sensitive shared directory after hours, a time-based restriction limits access until the permissible time period; a minimal time-window check is sketched after this list.
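Here is the minimal time-window check mentioned above: access to a sensitive resource is allowed only inside a permitted window, and anything outside it is rejected. The business-hours window is an assumption for illustration; in practice the window and its enforcement come from the MDM or identity platform.

 from datetime import datetime, time
 from typing import Optional

 # Illustrative business-hours window for the sensitive share.
 ALLOWED_START, ALLOWED_END = time(8, 0), time(18, 0)

 def access_permitted(now: Optional[datetime] = None) -> bool:
     """Allow access only inside the permitted time window."""
     current = (now or datetime.now()).time()
     return ALLOWED_START <= current <= ALLOWED_END

 print(access_permitted(datetime(2024, 1, 1, 10, 30)))  # True: during business hours
 print(access_permitted(datetime(2024, 1, 1, 23, 0)))   # False: after hours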

Outsourcing and Contracting

Organizations should go through a source strategy to determine what tasks should be completed by employees, contractors, or third parties. Outsourcing is one common approach. Outsourcing can be defined as an arrangement in which one company provides services for another company that may or may not have been provided in house. Outsourcing has become a much bigger issue in the emerging global economy, and it is something security professionals need to review closely. There will always be concerns when ensuring that third-party providers have the requisite levels of information security.

Outsourcing has become much more common in the IT field throughout the course of the last decade or so. In some cases, the entire information management function of a company is outsourced, including planning and business analysis as well as the installation, management, and servicing of the network and workstations. The following services are commonly outsourced:

  • Application/web hosting
  • Check processing
  • Computer help desk
  • Credit card processing
  • Data entry
  • Payroll and check processing

Crucial to the outsourcing decision is determining whether a task is part of the organization's core competency or proficiency that defines the organization. Security should play a large role in making the decision to outsource because some tasks take on a much greater risk if performed by someone outside the organization.

During any technical deployment, IT security professionals should have a chance to review outsourcing, insourcing, managed services, and security controls and practices of a partnered company. If irregularities are found, they should be reported to management so that expenses and concerns can be properly identified. Before combining or separating systems, a CASP+ should ask these basic questions:

  • Who has access to resources?
  • What types of access will be provided?
  • How will users request access?
  • Who will grant users access?
  • What type of access logs will be used, what will be recorded, and when will it be reviewed?
  • What procedures will be taken for inappropriate use of resources?
  • How will security incidents be reported and recorded, and who will handle them?
  • Who will be responsible for investigating suspicious activity?

The CASP+ should also be concerned with developing a course of action that is based on informed business decisions with the goal to provide for long-term security. Here are some of the technical deployment models with which a CASP+ should be familiar:

  • Outsourcing Outsourcing can be defined as obtaining a resource or item from an outside provider. As an example, consider Dell. Dell might be based in Round Rock, Texas, yet its distribution hub is in Memphis, Tennessee. Further, Dell assembles PCs in Malaysia and has customer support in India. Many parts come from the far corners of the globe.
  • Insourcing Insourcing can be defined as using a company's own personnel to build an item or resource or to perform an activity.
  • Managed Services Some organizations use managed services. One common approach is to use managed cloud services. Managed cloud hosting delivers services over long periods of time, such as months or even years. The secure use of cloud computing is contingent upon your organization reviewing the cloud provider's policies. In such situations you may be purchasing services in time slices or as a service on a virtual server. This means that data aggregation and data isolation can occur.

    The security concern with data aggregation is that your data may be combined with others or may even be reused. For example, some nonprofit donor databases offer their databases for free yet combine your donor database with others and resell it. Data isolation should be used to address how your data is stored. Is your data stored on an isolated hard drive, or does it share virtual space with many other companies?

    Finally, it is important to discuss the issues of data ownership and data sovereignty with the managed service provider. Make sure that your data remains solely yours while in the custody of the managed service provider. As the volume of data grows, also ensure that the managed service provider maintains the same level of assurance.

Resource provisioning and deprovisioning should be examined before a technical deployment in order to understand how resources are handled. For example, is the hard drive holding your data destroyed, or is it simply reused for another client? How about users? Are their records destroyed and accounts suspended after they are terminated?

In addition to users, resource provisioning and deprovisioning can also include the following:

  • Servers
  • Virtual devices
  • Applications
  • Data remnants

When outsourcing is to occur, issues related to IT will be an area of real concern. As in the case of the outsourced MRI diagnosis, sensitive and personal data is often involved with outsourced services. This creates regulatory concerns, most commonly involving the Health Insurance Portability and Accountability Act (HIPAA) or the Sarbanes-Oxley (SOX) Act. Many IT departments have mission statements in which they publicly identify the level of service they agree to provide to their customers. This may be uptime for network services, availability of email, response to help-desk calls, or even server uptime. When you are sharing resources with an outsourced partner, you may not have a good idea of their security practices and techniques. Outsourcing partners face the same risks, threats, and vulnerabilities as the client; the only difference is they might not be as apparent. One approach that companies typically use when dealing with outsourcing, insourcing, managed services, or partnerships is the use of service level agreements (SLAs).

The SLA is a contract that describes the minimum performance criteria a provider promises to meet while delivering a service. The SLA will also define remedial action and any penalties that will take effect if the performance or service falls below the promised standard. You may also consider an operating level agreement (OLA), which is formed between operations and application groups. These are just a few of the items about which a CASP+ should be knowledgeable; however, the CASP+ is required to know only the basics. During these situations, legal and HR will play a big role and should be consulted regarding laws related to IT security.

SLAs define performance targets for hardware and software. There are many types of SLAs, among them the following:

  • Help Desk and Caller Services The help desk is a commonly outsourced service. One way an outsourcing partner may measure the service being provided is by tracking the abandon rate (AR). The AR is simply the number of callers who hang up while waiting for a service representative to answer. Most of us have experienced this when we hear something like, “Your call is extremely important to us; your hold time will be 1 hour and 46 minutes.”

    Another help-desk measurement is first call resolution (FCR). FCR is the number of issues resolved on the user's first call to the help desk, without additional calls or requiring the user to call back for further help.

    Finally, there is the time-to-service factor (TSF). The TSF is the percentage of help desk or response calls answered within a given time. So, if Kenneth calls in at 8 a.m., is the problem fixed by 9:30 a.m.?

  • Uptime and Availability Agreements Another common SLA measurement is an uptime agreement (UA). UAs specify a required amount of uptime for a given service. As an example, a web hosting provider may guarantee 99.999 percent uptime. UAs are commonly found in the areas of network services, datacenters, and cloud computing. A quick downtime calculation for these percentages follows this list.
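The short calculation below converts an SLA uptime percentage into the maximum downtime allowed per year; the figures follow directly from the arithmetic and are not tied to any particular provider.

 MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

 for sla in (99.9, 99.99, 99.999):
     allowed = MINUTES_PER_YEAR * (1 - sla / 100)
     print(f"{sla}% uptime allows about {allowed:.1f} minutes of downtime per year")

 # 99.9%   -> about 525.6 minutes (roughly 8.8 hours)
 # 99.99%  -> about 52.6 minutes
 # 99.999% -> about 5.3 minutes ("five nines")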

Wireless/Radio Frequency Networks

The Institute of Electrical and Electronics Engineers (IEEE) Standards Association is an organization that develops standards for wireless communication, gathering information from subject-matter experts (SMEs). IEEE is not an institution formed by a specific government but a community of recognized leaders who follow the principle of “one country, one vote.”

IEEE 802.11 is a set of specifications for implementing wireless networking over several frequency bands. As technology has evolved, so has the need for more revisions. If you were to go shopping for wireless equipment, you would see an array of choices based on those revisions of 802.11. Most consumer and enterprise wireless devices conform to the 802.11a/b/g/n/ac/ax standards, better known as Wi-Fi. Bluetooth and wireless personal area networks (WPANs) are specialized wireless technologies.

Information is sent from one component called a transmitter and picked up by another called a receiver. The transmitter sends electrical signals through an antenna to create waves that spread outward. The receiver with another antenna in the path of those waves picks up the signal and amplifies it so it can be processed. A wireless router is simply a router that uses radio waves instead of cables. It contains a low-power radio transmitter and receiver, with a range of about 90 meters or 300 feet, depending on what your walls are made of. The router can send and receive Internet data to any computer in your environment that is also equipped with wireless access. Each computer on the wireless network has to have a transmitter and receiver in it as well.

There are advantages and disadvantages to communicating wirelessly. Networks are pretty easy to set up and rather inexpensive, with several choices of frequencies to communicate over. Disadvantages can include keeping this communication secure, the range of the wireless devices, reliability, and, of course, speed. The transmitter and the receiver need to be on the same frequency, and each 802.11 standard has its own set of pros and cons. The latest Wi-Fi technology is called 802.11ax or Wi-Fi 6/6E. 802.11ax is anywhere from four to ten times faster than existing Wi-Fi with wider channels available, and offers less congestion and improved battery life on mobile devices, since data is transmitted faster.

As with any technology, as it evolves, you will start making decisions on what scenario is best for you and your organization. There may be trade-offs on frequency used, speed, or the range of a device from a Wi-Fi hotspot. A hotspot is merely an area with an accessible network.

When building a typical wireless small office or home office (SOHO) environment, after you identify what technology and design is best for your situation, you configure the settings of your router using a web interface. You can select the name of the network you want to use, known as the service set identifier (SSID). You can choose the channel. By default, most routers use channel 6 or 11. You will also choose security options, such as setting up your own username and password as well as encryption.

As a best practice, when you configure security settings on your router, choose WPA3, the latest and recommended standard. Similar to WPA2, WPA3 includes both a Personal and an Enterprise version. WPA3-Enterprise offers an optional 192-bit cryptographic mode for environments that require it, while WPA3-Personal uses 128-bit encryption. WPA3 helps prevent offline password attacks by using Simultaneous Authentication of Equals (SAE).

Another best practice is configuring MAC filtering on your router. This doesn't use a password to authenticate; it uses the MAC address of the device itself. Each device that connects to a router has its own MAC address. You can specify which MAC addresses are allowed on your network as well as set limits on how many devices can join it. One drawback of MAC filtering is that every time you need to add a device, you have to grant network permission; you sacrifice convenience for better protection. After reading this book, the more advanced user will know how to capture packets, examine the data, and identify a permitted MAC address that could then be spoofed, so MAC filtering by itself is a weak control. Combining MAC filtering with WPA3 encryption provides much better protection for your data.

Merging Networks from Various Organizations

Organizations with different cultures, objectives, security practices and policies, topologies, and tools occasionally need to be blended together to work cohesively. Both sides of the newly integrated network will have their own issues from communication to workflows. Configuration options differ from organization to organization, so knowledge and preparation are incredibly important when integrating two companies’ networks.

Peering

Peering on a network is a technique that allows one network to connect directly to another for exchange of information. Benefits of peering include lower cost, greater control, improved performance, and redundancy. There are several types of peering including the following:

  • Public: Using a single port on an Ethernet switch, this has less capacity than a private peer but can connect many networks.
  • Private: This connects two networks directly with a point-to-point link between layer 3 devices. Private peering makes up most of the traffic on the Internet.
  • Partial: Network traffic is regional.
  • Paid: One party pays the other to participate in the peering network.

Cloud to On-Premises

Cloud computing does away with the typical network boundary. This type of deperimeterization and the constantly changing network boundary create a huge impact, as historically this demarcation line was at the edge of the physical network, a point at which the firewall is typically found.

The concept of cloud computing represents a shift in thought in that end users need not know the details of a specific technology. The service is fully managed by the provider. Users can consume the service at a rate that is set by their particular needs so on-demand service can be provided at any time.

Data Sensitivity Levels

Data classification is the process of organizing data into data sensitivity levels based on information characteristics such as degree of sensitivity, risk, and compliance regulations. To merge various organizations' data, sensitive data must be inventoried and classified.

For CASP+ study, data sensitivity labels/types are most often categorized into public, internal-only, confidential, and restricted/highly confidential. Other level name variations you may encounter include Restricted, Unrestricted, and Consumer Protected.

NIST recommends using three categories based on the impact of data disclosure:

  • Low impact: Limited adverse effect such as a job posting
  • Moderate impact: Serious adverse effect such as financial budgets
  • High impact: Severe adverse effect such as account numbers or PHI

For more information about best data classification practices, NIST has published a guide for mapping levels of data to security categories: nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-60v1r1.pdf.
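A classification scheme is often implemented as a simple lookup that tags each data type with a sensitivity label and the impact category behind it. The sketch below shows one way to express such a mapping; the data types, labels, and default value are chosen purely for illustration.

 # Illustrative mapping of data types to sensitivity labels and impact categories.
 CLASSIFICATION = {
     "job_posting":      ("public",       "low"),
     "financial_budget": ("confidential", "moderate"),
     "account_numbers":  ("restricted",   "high"),
     "patient_phi":      ("restricted",   "high"),
 }

 def classify(data_type: str) -> str:
     # Unknown data defaults to a conservative internal-only label (an assumption).
     label, impact = CLASSIFICATION.get(data_type, ("internal-only", "moderate"))
     return f"{data_type}: label={label}, impact={impact}"

 for item in ("job_posting", "account_numbers", "unknown_dataset"):
     print(classify(item))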

Mergers and Acquisitions

Companies are constantly looking for advantages in business. For some companies, mergers and acquisitions offer a path to increased business opportunities. For example, Oracle's acquisition of Cerner was a $28.3 billion ($95.00 per share) deal. The two companies are combining cloud and on-premises datacenters, medical records, distribution, subscriber networks, customer service, and all of the other pieces of their organizations into a single entity. Even though the two companies operate in somewhat similar markets, the amount of work involved with such deals is staggering.

Businesses that have similar cultures and overlapping target audiences but individually reach those audiences differently may work better as a merged entity. To lean on a business buzzword, the merger creates a synergy that potentially benefits both companies. Take the example of Amazon, the online shopping giant, acquiring Whole Foods, the brick-and-mortar chain of organic grocery stores. Between Amazon's reach and delivery capabilities and Whole Foods' appeal to those wanting to buy organic foods, the combination has been an overwhelming success for both parties. While the deal initially stoked fears that grocery prices would rise dramatically, a report one year later noted that actual pricing experienced “downward pressure” and had dropped a few percentage points.

From the standpoint of risk, there are many things that can go wrong. Businesses typically look for synergy, but some businesses just don't fit together. For example, in 2008 Blockbuster moved aggressively to merge with Circuit City. Blockbuster saw the merger as a way to grow, but many outside analysts questioned how two companies with totally different business models, which were both failing, could be combined into one winning business. Because of these questions, the merger never occurred. Within months, Circuit City filed for Chapter 11 protection (bankruptcy), whereas Blockbuster failed and closed outright. Circuit City re-emerged from the ashes some 10 years after the failed merger. In early 2018, Circuit City relaunched online after having been acquired by TigerDirect.

Often, the different businesses cannot coexist as one entity. In other cases, companies enter the merger/acquisition phase without an adequate plan of action. Finally, people don't like change. Once a company's culture is established and people become set in their ways, attitudes are hard to change. Mergers are all about change, and that goes against the grain of what employees expect.

For the security professional, it's common to be asked to quickly establish connectivity with the proposed business partner. Although there is a need for connectivity, security should remain a driving concern. You need to understand the proposed merger partner's security policies and what controls they are enforcing. The last thing that you would want is to allow the ability for an attacker to enter your network through the merging company's network.

Security concerns will always exist when it comes to merging diverse industries. The previous examples illustrate just a few of the problems that companies face when they integrate and become a single entity. A CASP+ should also be concerned with items such as the following:

  • Rules What is or is not allowed by each individual company. Rules affect all aspects of how a business operates, ranging from the required visibility of an ID badge to how easily groups may share information.
  • Policies These are high-level documents that outline the security goals and objectives of the company.
  • Regulations Diverse entities may very well be governed by different regulatory entities or regulations such as PCI DSS or HIPAA. Other regulatory factors include export controls, such as use of encryption software. Companies dealing with controlled data types must meet the regulatory-imposed legal requirements, often related to storing, safeguarding, and reporting on that data.
  • Geography It is all about location. A company that is located in Paris, France, will be operating on different standards than one that is based in Denver, Colorado.
  • Demerger/Divestiture Anytime businesses break apart, you must confront many of the same types of issues. As an example, each organization must now have its own, separate IT security group, and it must implement its own firewalls and other defenses.

Whether the situation involves an acquisition or a company breaking into two or more entities, leadership must revisit the data and its classification. Data ownership is affected by the change in company ownership. Similarly, a change in company structure will likely affect how data is classified or categorized, so a data reclassification will be required as well.

The CASP+ should understand technical deployment models and design considerations such as outsourcing, insourcing, partnerships, mergers, acquisitions, and demergers. The CASP+ and others responsible for the IT security of an organization should play a role during mergers and acquisitions. Generally, a merger can be described as the combining of two companies of equal size, whereas an acquisition is where a larger company acquires a smaller one. In both situations, it's important that proper security control be maintained throughout the process. This can be done through the application of due care and due diligence.

Partnerships are somewhat akin to outsourcing in terms of the security risks they present. A partnership can best be defined as a type of business model in which two or more entities share potential profit and risk with each other, whereas with outsourcing, the customer assigns the work to the contractor. Once the project or service has been delivered, the customer pays the contractor, and the relationship ends.

Partnerships are different; they are much more like a marriage. With marriage, for example, you might want to execute a prenuptial agreement; likewise, in partnerships, you need to be prepared should things go wrong and the partnership ends. There are a number of potential risks when companies decide to acquire, merge, or form partnerships:

  • Loss of Competency Once a new entity begins providing a service, the other may lose the in-house ability to provide the same service. Should the partnership end, the company is forced to deal with the fact that this service can no longer be supported. Even worse, the partner may now become a competitor.
  • Broken Agreements Partnerships don't always work out. In situations where things go wrong, there are costs associated with switching services back in house.
  • Service Deterioration Although the partner may promise great things, can they actually deliver? Over time, do they deliver the same level of service, or does it deteriorate? Metrics must be in place to monitor overall quality. Depending on the task, the level of complexity, or even issues such as a growing customer base, the partner may not be able to deliver the product or service promised.
  • Poor Cultural Fit Some partners may be in other regions of the country or another part of the world. Cultural differences can play a big part in the success of a partnership. Once a partnership is formed, it may be discovered that incentives to provide services and products don't align or that top-level management of the two companies differs.
  • Hidden Costs Sometimes all of the costs of a partnership are not initially seen. Costs can escalate due to complexity, inexperience, and other problems. Partnerships that are focused on tangible production are typically less vulnerable to these interruptions than those that deal in intangible services.

This doesn't mean that partnerships are all bad. Your company may be in a situation where in-house technical expertise has been lacking or hard to acquire. Maybe you need someone to build a specific web application or code firmware for a new device. Consider, too, how strategic multinational partnerships are sometimes motivated by protectionist laws, by local marketing know-how, or by expertise in another country or region. In these situations, the business partner may have the expertise and skills to provide this service.

From an IT security perspective, a major risk of such partnerships is knowledge and technology transfer, both intentional/transparent and unintentional/covert. You should always keep in mind what resources are being provided to business partners, whether they are needed, what controls are being used, and what audit controls are in place.

Cross-Domain

Cross-domain solutions are integrated hardware and software solutions that provide secure access to data across multiple networks. They are extremely helpful when there are different levels of security classification or different domains. A cross-domain policy is the set of rules that grants permission to access data across those disparate domains.

Federation

Federation allows you to link your digital identity to multiple sites and use those credentials to log into multiple accounts and systems that are controlled by different entities. Federation is a collection of domains that have established trust. The level of trust may vary but typically includes authentication and almost always includes authorization. In early 2022, the White House announced that the Office of Management and Budget (OMB) had released the federal zero-trust strategy in support of the executive order “Improving the Nation's Cybersecurity,” calling for strict monitoring controls, identification of priorities, and baseline policies for limiting and verifying federation across network services regardless of where they are located. For more information, you can read about zero-trust architecture (ZTA) here: zerotrust.cyber.gov/federal-zero-trust-strategy.

A typical federation might include a number of organizations that have established trust for shared access to a set of resources. Federation with Azure AD or O365 enables users to authenticate using on-premises credentials and access all resources in the cloud. As a result, it becomes important to have a highly available AD FS infrastructure to ensure access to resources both on-premises and in the cloud.
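As a brief illustration of how a relying party consumes a federated identity, the following minimal Python sketch validates a signed token issued by an identity provider. It assumes the third-party PyJWT library is installed, and the issuer, JWKS URL, and audience values are placeholders; a production federation deployment (for example, with AD FS or Azure AD) would rely on the platform's own validation libraries.

    import jwt                      # PyJWT (pip install pyjwt[crypto]); assumed available
    from jwt import PyJWKClient

    # Placeholder values: substitute your identity provider's published metadata
    ISSUER = "https://login.idp.example.com/"
    JWKS_URL = "https://login.idp.example.com/.well-known/jwks.json"
    AUDIENCE = "api://example-relying-party"

    def validate_federated_token(token: str) -> dict:
        """Verify the token's signature against the IdP's published keys and
        check the issuer and audience claims before trusting the identity."""
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        return jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience=AUDIENCE,
            issuer=ISSUER,
        )

The important point is that the relying party never sees the user's on-premises password; it trusts assertions signed by the federated identity provider.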

Directory Services

Directory services are used in networks to provide information about systems, users, and other objects in an organization. Directory services like the Lightweight Directory Access Protocol (LDAP) are often deployed as part of an identity management infrastructure and are frequently used to make an organizational directory available for email. LDAP uses the same object data format as X.500, where each object is made up of attributes that are indexed and referenced by a distinguished name. Each object has a unique name designed to fit into a global namespace that helps determine the relationship of the object and allows the object to be referenced uniquely. Each entry can also carry additional information such as an email address, phone numbers, and office location.

Security issues with LDAP include the fact that no data encryption method was available in LDAP versions 1 and 2. Security is negotiated during the connection phase, when the client and server begin communications. Options include no authentication or basic authentication, the same mechanism used by other protocols such as HTTP. LDAP v3 (RFC 2251) is designed to address some of the limitations of previous versions in the areas of internationalization, authentication, referral, and deployment. Using extensions and controls, it allows new features to be added without requiring changes to the protocol itself.

Since directories contain significant amounts of organizational data and may be used to support a range of services, including directory-based authentication, they must be well protected. At the same time, business needs often mean that directory servers must be publicly exposed to provide services to systems or business partners that need to access the directory information. In those cases, additional security, tighter access controls, or even an entirely separate public directory service may be needed.

Regardless of which protocols and standards are used to authenticate, there is a real need for strong cryptographic controls. This is the reason for the creation of secure directory services, which make use of Secure Sockets Layer (SSL)/TLS. LDAP over SSL (LDAPS) provides secure communications between LDAP servers and client systems by means of encrypted connections. To use this service, SSL/TLS must be available on both the client and the server, and both must be configured to use certificates. LDAP supports two methods for encrypting communications with SSL/TLS: traditional LDAPS and STARTTLS. LDAPS communication commonly occurs over a dedicated port, TCP 636. STARTTLS, by contrast, begins as a plaintext connection over the standard LDAP port (389), and that connection is then upgraded to SSL/TLS.
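The difference between LDAPS and STARTTLS can be seen in a short sketch using the third-party ldap3 library. The hostname, bind DN, and credentials below are placeholders, and certificate validation is assumed to be configured against a trusted CA.

    import ssl
    from ldap3 import Server, Connection, Tls

    tls = Tls(validate=ssl.CERT_REQUIRED)          # require a valid server certificate
    BIND_DN = "cn=reader,dc=example,dc=com"        # placeholder service account
    PASSWORD = "change-me"                         # placeholder credential

    # LDAPS: the session is encrypted from the first byte on TCP 636
    ldaps_server = Server("ldap.example.com", port=636, use_ssl=True, tls=tls)
    ldaps_conn = Connection(ldaps_server, user=BIND_DN, password=PASSWORD, auto_bind=True)

    # STARTTLS: start in plaintext on TCP 389, then upgrade before binding
    plain_server = Server("ldap.example.com", port=389, tls=tls)
    starttls_conn = Connection(plain_server, user=BIND_DN, password=PASSWORD)
    starttls_conn.open()
    starttls_conn.start_tls()                      # upgrade the existing connection
    starttls_conn.bind()                           # credentials now travel encrypted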

Software-Defined Networking

Software-defined networking (SDN) is a technology that allows network professionals to virtualize the network so that control is decoupled from hardware and given to a software application called a controller.

In a typical network environment, hardware devices such as switches make forwarding decisions: when a frame enters the switch, logic built into the content addressable memory (CAM) table determines the port to which the data frame is forwarded, and all frames with the same destination address are forwarded to the same port. This device-by-device logic is largely static and must be configured on each piece of hardware. SDN is a step in the evolution toward programmable and active networking in that it gives network managers the flexibility to configure, manage, and optimize network resources dynamically by centralizing the network state in the control layer.

Software-defined networking overcomes this roadblock because it allows networking professionals to respond to the dynamic needs of modern networks. With SDN, a network administrator can shape traffic from a centralized control console without having to touch individual switches.

Based on demand and network needs, the network switch's rules can be changed dynamically as needed, permitting the blocking, allowing, or prioritizing of specific types of data frames with a very granular level of control. This enables the network to be treated as a logical or virtual entity. SDN is defined as three layers: application, control, and the infrastructure or data plane layer.

When a network administrator is ready to move to an SDN topology, they will have to evaluate which model to implement:

  • Open SDN: Enables open protocols to control assets that route packets
  • Hybrid SDN: Merges traditional protocols with SDN, allowing the choice of best protocols from each model
  • SDN overlay: Places a virtual network over existing network hardware topology

Often, a hybrid SDN is the first step toward implementing SDN.
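To make the idea of programmatic control concrete, the sketch below pushes a flow rule to an SDN controller's northbound REST interface. The controller URL, endpoint path, and payload format are hypothetical; each controller (OpenDaylight, ONOS, and so on) defines its own API, so treat this only as an illustration of shaping traffic from a central console rather than configuring individual switches.

    import requests

    CONTROLLER = "https://sdn-controller.example.local:8443"   # hypothetical controller
    FLOW_ENDPOINT = f"{CONTROLLER}/api/flows"                   # illustrative path only

    # Hypothetical rule: drop frames arriving from an untrusted subnet at switch 1
    flow_rule = {
        "switch_id": "openflow:1",
        "priority": 200,
        "match": {"ipv4_src": "203.0.113.0/24"},
        "action": "DROP",
    }

    response = requests.post(
        FLOW_ENDPOINT,
        json=flow_rule,
        auth=("admin", "change-me"),                # placeholder credentials
        verify="/etc/ssl/certs/controller-ca.pem",  # validate the controller's certificate
        timeout=5,
    )
    response.raise_for_status()
    print("Flow rule installed:", response.status_code)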

Organizational Requirements for Infrastructure Security Design

An organization must focus on defense in depth in the creation of a secure infrastructure. Defense in depth is a strategic technique for information security where many requirements are thoughtfully layered throughout the enterprise ecosystem to protect the integrity, confidentiality, and availability of the network, the data, and the people.

Scalability

Scalability and fault tolerance are vital to large networking environments, which are often measured by the number of requests they can handle at the same time. If you have a website where customers place orders, transaction volume can vary, and if those servers become inoperable, money can be lost; scaling both vertically and horizontally is one of the best options. Scaling up, or scaling vertically, means adding more machine resources, such as CPU, storage, and RAM, to your current ecosystem. Scaling horizontally, or scaling out, means adding more machines to the pool of resources.

Building scalability also means building flexibility into the organization. Research shows that most on-premises organizations are moving toward horizontally scaled architectures because of a need for reliability through redundancy. Moving to the elasticity of the cloud and using SaaS environments can also improve utilization as well as redundancy, depending on the architecture.

Resiliency

Building a resilient, secure infrastructure requires an understanding of the risks that your organization faces. Natural and human-created disasters or physical and digital attacks can all have a significant effect on your organization's ability to function. Resilience is part of the foundation of the availability leg of the CIA triad, and this chapter explores resilience as a key part of availability.

Common elements of resilient design, such as geographic and network path diversity and high availability design elements like RAID arrays and backups, are important to the resiliency of an organization. Different methods to guarantee that records aren't missing and that services remain online despite failures should be deployed including the following:

  • High availability: The characteristic of a system that delivers an agreed level of uptime and performance
  • Diversity/heterogeneity: Using different technologies, vendors, or configurations so that a single flaw or attack cannot disable every component at once
  • Course of action orchestration: Automated configuration and management of computer systems so that response actions are carried out consistently
  • Distributed allocation: Methodically spreading resources and workloads across systems or locations so that no single point of failure exists
  • Redundancy: Building multiple assets that provide a similar role and can replace each other if disaster occurs
  • Replication: Copying data between hardware or software components so that redundant systems stay consistent
  • Clustering: Grouping multiple servers so that they operate as a single system and can take over for one another if a member fails

Automation

Automation describes many technologies that reduce human intervention. With workflows, relationships, and decisions built into an automation technology, many tasks or functions that used to be done by human staffers are now done by machines, ideally with greater efficiency and reliability. Examples of automation range from self-driving cars to smart homes to continuous delivery of software with manifests and modules.

Autoscaling

One of the distinct advantages of cloud computing is the ability to scale the number of resources up or down quickly and automatically based on traffic or utilization. Benefits of autoscaling include better fault tolerance, availability, and cost management.
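The arithmetic behind target-tracking autoscaling is simple enough to sketch in a few lines of Python. The function below is illustrative only; real platforms add cooldown periods, health checks, and billing constraints on top of this calculation.

    def desired_capacity(current_instances: int, avg_cpu: float,
                         target_cpu: float = 60.0, min_n: int = 2, max_n: int = 20) -> int:
        """Return the instance count that moves average CPU toward the target,
        clamped to the configured minimum and maximum fleet size."""
        if avg_cpu <= 0:
            return min_n
        proposed = round(current_instances * (avg_cpu / target_cpu))
        return max(min_n, min(max_n, proposed))

    # Four instances averaging 90% CPU scale out to six; at 20% they scale in to two
    print(desired_capacity(4, 90.0))   # -> 6
    print(desired_capacity(4, 20.0))   # -> 2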

Security Orchestration, Automation, and Response

SOAR, or security orchestration, automation, and response, tools are devised to automate security responses, allow centralized control of security settings and controls, and provide robust incident response capabilities. Managing multiple security technologies can be challenging. Using information from SOAR platforms and systems can help design your organization's security posture. Managing security operations and remediating issues you have identified is also an important part of security work that SOAR platforms attempt to solve. As a mitigation and recovery tool, SOAR platforms allow you to quickly assess the attack surface of an organization, the state of systems, and where issues may exist. They also allow automation of remediation and restoration workflows.
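A SOAR playbook is, at its core, an automated decision tree wrapped around enrichment and response actions. The following minimal sketch shows the shape of such a playbook; the enrichment logic and firewall call are stand-ins for the integrations a real platform would provide.

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("soar-playbook")

    def enrich(alert: dict) -> dict:
        # Stand-in enrichment: a real playbook would query threat-intelligence
        # feeds, asset inventories, and identity systems here.
        alert["reputation"] = "malicious" if alert["src_ip"].startswith("203.0.113.") else "unknown"
        return alert

    def block_ip(ip: str) -> None:
        # Hypothetical response action; in practice this calls a firewall or EDR API.
        log.info("Blocking %s at the perimeter", ip)

    def run_playbook(alert: dict) -> None:
        alert = enrich(alert)
        if alert["reputation"] == "malicious":
            block_ip(alert["src_ip"])
            log.info("Incident ticket opened for alert %s", alert["id"])
        else:
            log.info("Alert %s routed to an analyst for manual review", alert["id"])

    run_playbook({"id": "A-1001", "src_ip": "203.0.113.45", "rule": "Suspicious outbound traffic"})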

Bootstrapping

Bootstrapping can mean several things in cybersecurity, including the program that initializes during operating system startup. In infrastructure, bootstrapping is the sequence of events that must happen when creating a virtual machine. The term comes from the expression “pulling yourself up by your own bootstraps.”

In security or data science, bootstrapping means extrapolating findings for a larger group based on results from a smaller collection of data. It can be applied in machine learning by inferring results from repeated resampling of a dataset, and it is a technique data scientists use to improve the quality of a learning model; some SOAR tools take advantage of it.
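For the statistical sense of the term, the resampling idea fits in a few lines of standard-library Python. The latency figures below are made up for illustration; the point is that repeated sampling with replacement lets you estimate how stable a metric is without collecting more data.

    import random
    import statistics

    def bootstrap_mean(sample, iterations=1000):
        """Resample with replacement and return the mean of means plus the
        spread of those means (a rough standard error)."""
        means = []
        for _ in range(iterations):
            resample = [random.choice(sample) for _ in sample]
            means.append(statistics.mean(resample))
        return statistics.mean(means), statistics.stdev(means)

    latencies_ms = [12, 15, 11, 30, 14, 13, 45, 12, 16, 14]   # illustrative observations
    estimate, spread = bootstrap_mean(latencies_ms)
    print(f"Bootstrapped mean latency: {estimate:.1f} ms (spread ~{spread:.1f} ms)")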

Containerization

Technologies related to virtual systems continue to evolve. In some cases, you may not need an entire virtual system to complete a specific task. In such situations, a container can now be used. Containerization allows for the isolation of applications running on a server. Containers offer a lower-cost alternative to using virtualization to run isolated applications on a single host. When a container is used, the OS kernel provides process isolation and performs resource management. Determining when to use containers instead of virtualizing the OS mostly breaks down to the type of workload you have to complete.

Picture the large cargo ships that sail products across the oceans. So long as the cargo can fit in a container, that's all that matters. No matter what the cargo is, whatever it's for, or who uses it, it fits in a reliable, standardized container. Thus, containerization ensures the efficient and cost-effective use of the ship on which the cargo is placed.

The same approach is used for a Linux technology that is similar to virtualization, commonly implemented with the open-source Docker program, where each container shares the host's kernel. The container has its own network stack and incremental file system, but it is not a fully virtualized system or a Type 1 (bare-metal) hypervisor. It's about isolating the running container, not virtualizing an entire operating system.

Virtualization

Virtualization is a technology that system administrators have been using in datacenters for many years, and it is at the heart of cloud computing infrastructure. It allows the physical resources of a computer (CPU, RAM, hard disk, graphics card, and so on) to be shared by virtual machines (VMs). Consider the old days, when a single physical hardware platform, the server, was dedicated to a single server application such as a web server. It turns out that a typical web server application didn't utilize many of the underlying hardware resources available. If you assume that a web application running on a physical server utilizes 30 percent of the hardware resources, that means 70 percent of the physical resources are going unused and the server is being wasted.

With virtualization, if three web servers are running via VMs with each utilizing 30 percent of the physical hardware resources of the server, 90 percent of the physical hardware resources of the server are being utilized. This is a much better return on hardware investment. By installing virtualization software on your computer, you can create VMs that can be used to work in many situations with many different applications.

Modern computer systems have come a long way in how they process, store, and access information. One such advancement is virtualization, a method used to create a virtual version of a device or a resource such as a server, storage, or even operating system. Chapter 9 goes into cloud, containerization, and virtualization in much more detail about how to deploy virtual infrastructure securely.

Content Delivery Network

A content delivery network (CDN) is a regionally diverse, distributed network of servers and datacenters with the ultimate goal of high availability for end users. Many interactions on the Internet involve a CDN, from reading a news article to streaming a video on social media. CDNs are built to reduce latency between the consumer and the content. Delays can occur because of physical distance from a hosting server, so CDNs shorten that distance, increasing speed and performance by caching content on servers in multiple geographic locations. Someone in the Asia-Pacific region accessing a site hosted in the United Kingdom will therefore experience minimal, if any, lag time loading the requested information. Caching speeds up future requests for that specific data because the data is stored locally; for example, a web browser can avoid making duplicate trips to a server, saving time and resources.
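The caching behavior that makes a CDN effective can be modeled with a tiny time-to-live cache. This sketch is a simplification, but it shows why a repeat request is fast: the object is served from the local copy until it expires, and only a miss travels back to the origin.

    import time

    class TtlCache:
        """Minimal time-to-live cache, loosely analogous to a CDN edge node."""
        def __init__(self, ttl_seconds: int = 300):
            self.ttl = ttl_seconds
            self._store = {}

        def get(self, key, fetch_from_origin):
            entry = self._store.get(key)
            if entry and time.time() - entry[0] < self.ttl:
                return entry[1]                          # cache hit: no origin round trip
            value = fetch_from_origin(key)               # cache miss: fetch and store
            self._store[key] = (time.time(), value)
            return value

    edge = TtlCache(ttl_seconds=60)
    page = edge.get("/news/today", lambda path: f"<html>origin copy of {path}</html>")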

Integrating Applications Securely into an Enterprise Architecture

Not only do applications themselves need to be developed securely using sound software, hardware, techniques, and best practices, but incorporating those applications into an enterprise architecture securely is also of critical concern for organizations. One of the most important tools an organization can build is an application security checklist to avoid pitfalls and achieve a high level of security in application integration.

Baseline and Templates

A baseline is a minimum level of security to which a system, network, or device must adhere. Baselines are usually mapped to industry standards and may be established by comparing the security activities and events of other organizations. A baseline can be used as an initial point of fact and then used for comparison in future reviews.

A benchmark, by contrast, is a simulated evaluation conducted before purchasing or contracting equipment or services to determine how those items will perform once purchased.

Another area of benchmarking that is evolving is related to best practices, controls, or benchmark requirements against which organizations can be measured. The concept of compliance is sometimes compared to benchmarking because you may need to specify a level of requirements or provide a grade when measured against the predefined requirements.

One example of this is the Federal Information Security Management Act of 2002 (FISMA). This requirement mandates that U.S. government agencies implement and measure the effectiveness of their cybersecurity programs. Another is ISO 27001. In this document, Section 15 addresses compliance with external and internal requirements.

To validate is to check or prove the value or truth of a statement. Think about watching a car commercial and noting that the automobile is rated at 32 mpg for highway driving. Has the statement been validated? Actually, it has through a process governed by the Environmental Protection Agency (EPA). This same process of validation may occur when you purchase computer network gear, equipment, or applications. Here are some government standards for this:

  • NIST Special Publication 800-37 Rev. 2, Risk Management Framework for Information Systems and Organizations
  • NIST Special Publication 800-53A Revision 5, Assessing Security and Privacy Controls in Information Systems and Organizations
  • FIPS Publication 199, Standards for Security Categorization of Federal Information and Information Systems

After all, automakers and computer vendors both make claims about the ratings of their products and what they can do. As sellers, they need to be able to measure these claims just as we, the buyers, need to be able to prove the veracity of them to ourselves.

One example of an early IT standard designed for this purpose is the U.S. DoD Trusted Computer System Evaluation Criteria (TCSEC). This document, also known as the Orange Book, provides a basis for specifying security requirements and a metric with which to evaluate the degree of trust that can be placed in a computer system.

A more current standard is Common Criteria (CC). CC is an international standard (ISO/IEC 15408) and is used for validation and computer security certification. CC makes use of protection profiles and security targets, and it provides assurance that the process of specification, implementation, and evaluation of a computer security product has been conducted in a rigorous and standard manner. The protection profiles maintain security requirements, which should include evaluation assurance levels (EALs).

The levels are as follows:

  • EAL1: Functionally Tested
  • EAL2: Structurally Tested
  • EAL3: Methodically Tested and Checked
  • EAL4: Methodically Designed, Tested, and Reviewed
  • EAL5: Semi-Formally Designed and Tested
  • EAL6: Semi-Formally Verified Design and Tested
  • EAL7: Formally Verified Design and Tested

Secure Design Patterns/Types of Web Technologies

Security by design means that security measures are built in and that security code reviews are carried out to uncover potential security problems during the early stages of the development process; the longer problems go undetected, the greater the cost to fix them. Security by default means that security is set to a secure or restrictive setting by default. As an example, OpenBSD is installed at its most secure setting; that is, security by default. See www.openbsd.org/security.html for more information. Security by deployment means that security is added when the product or application is deployed. Research has shown that every bug removed during a review saves nine hours in testing, debugging, and fixing of the code, so it is much cheaper to address security early than later. Reviews make the application more robust and more resistant to malicious attackers, and a security code review helps new programmers identify common security problems and best practices.

Another issue is secure functionality. Users constantly ask for software that has greater functionality. Macros are an example of this. A macro is just a set of instructions designed to make some tasks easier. This same functionality is used, however, by the macro virus. The macro virus takes advantage of the power offered by word processors, spreadsheets, or other applications. This exploitation is inherent in the product, and all users are susceptible to it unless they choose to disable all macros and do without the functionality. Feature requests drive software development and hinder security because complexity is the enemy of security.

An application security framework provides a structured approach to system development that can make the development process easier to manage for the security manager. It is designed to build in security controls as needed. There are many different models and approaches. Some have more steps than others, yet the overall goal is the same: to control the process and add security to build defense in depth. One industry-accepted approach is a standardized System Development Life Cycle (SDLC) process. NIST defines SDLC in NIST SP 800-34 as “The scope of activities associated with a system, encompassing the system's initiation, development and acquisition, implementation, operation and maintenance, and ultimately its disposal that instigates another system initiation.”

Some development use cases and programming languages used are as follows:

  • Front-end web development: JavaScript
  • Back-end web development: JavaScript, Java, Python, PHP, Ruby
  • Mobile development: Swift, Java, C#
  • Game development: C++, C#
  • Desktop applications: Java, C++, Python
  • Systems programming: C, Rust

Storage Design Patterns

To become more efficient, reduce complexity, and increase performance when working with large volumes of information, architects use repeatable plans and design patterns to simplify big data applications. Storage design patterns have various components, including data sources as well as ingestion, storage, and access layers.

The ingestion layer takes information directly from the source itself and with that comes a great deal of noise, as well as important data. Filtering the noise from the important data can be difficult when you have multiple sources, data compression, and validation. After the separation, architects have to decide which pattern to use. Some of these patterns include the following:

  • Multidestination: Great for multiple data streams
  • Protocol converter: Best for data coming in from different types of systems
  • Real-time streaming: Used for continuous processing of unstructured data

The data storage layer is important to convert any data into a format that can be readily analyzed. Once the data has been normalized, then data access patterns focus on two primary ways to consume the data, through a developer API or an end-to-end user API. There are many ways to structure data at each layer and use those to efficiently pattern the way data will be consumed in an enterprise.

Container APIs

You may not need an entire virtual system to complete a specific task; a container can be used. Containers allow for the isolation of applications running on a server. Containers offer a lower-cost alternative to using virtualization to run isolated applications on a single host. When a container is used, the OS kernel provides process isolation and performs resource management. When exchanging information between the container and other resources, one of the best ways to do that is through an application programming interface (API). A container API allows for building complex functionality between containers, systems, and other services.
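As one example of a container API, the Docker SDK for Python wraps the Docker Engine's REST API. The sketch below assumes the docker package is installed and that the local Docker daemon is reachable; the image tag is simply an example.

    import docker   # Docker SDK for Python (pip install docker); assumed available

    client = docker.from_env()          # connect to the local Docker Engine API

    # Run a short-lived, isolated container and capture its output
    output = client.containers.run("alpine:latest", ["echo", "hello from a container"], remove=True)
    print(output.decode().strip())

    # Enumerate running containers through the same API
    for container in client.containers.list():
        print(container.short_id, container.image.tags, container.status)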

Secure Coding Standards

A standard library for a programming language is the library that is made available in every implementation of that language. Depending on the constructs made available by the host language, a standard library may include items such as subroutines, macro definitions, global variables, and templates. The C++ Standard Library can be divided into the following categories:

  • A standard template library
  • Inputs and outputs
  • Standard C headers

The C++ Standard Library provides several generic containers, functions to use and manipulate these containers, function objects, generic strings and streams (including interactive and file I/O), support for some language features, and functions for everyday tasks. The problem is that some C standard library functions can be used inappropriately or in ways that may cause security problems when programmers don't follow industry-accepted approaches to developing robust, secure applications.

Code quality is the term relating to the utility and longevity of a particular piece of code. Code that will be useful for a specific, short-lived function is deemed “low code quality.” A code module that will be useful for several years to come, possibly appropriate for multiple applications, possesses “high code quality.”

API Management

Application programming interfaces (APIs) are the connection between a consumer and a provider. This could be a client and server or an application and operating system. An API defines how the client should ask for information from the server and how the server will respond. This definition means that programs written in any language can implement an API and make requests.

APIs are useful for building interfaces, but they can also be a point of vulnerability if they are not properly secured and managed. API security relies on authentication, authorization, proper data scoping to ensure that too much data isn't exposed, and appropriate monitoring and logging to remain secure. There are four major types of APIs that need to be managed:

  • Open or Public API: Publicly available to developers, focuses on external users accessing a service or data
  • Internal or Private API: Hidden from external users and used only by internal systems
  • Partner API: Shared with business partners with a specific workflow to get access
  • Composite API: Built with API tooling, allowing access to multiple endpoints in one call

Managing APIs is the process of creating, publishing, documenting, and validating all the APIs used within an organization to ensure security. Benefits of implementing API management also include automation and the ability to change quickly.
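A minimal client-side sketch ties together several of the controls just mentioned: authentication with a bearer token, data scoping through a fields parameter, and logging of every call. The base URL, token, and fields parameter are placeholders for whatever the managed API actually exposes.

    import logging
    import requests

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("api-client")

    BASE_URL = "https://api.example.com/v1"     # placeholder managed API
    TOKEN = "REPLACE_WITH_OAUTH_TOKEN"          # obtained out of band from the API gateway

    def get_orders(customer_id: str) -> dict:
        response = requests.get(
            f"{BASE_URL}/customers/{customer_id}/orders",
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"fields": "id,status,total"},    # request only the data actually needed
            timeout=10,
        )
        log.info("GET orders for %s returned %s", customer_id, response.status_code)
        response.raise_for_status()
        return response.json()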

Middleware

Middleware is the software that works between an application and the operating system and is the enabler for communication, authentication, APIs, and data management. Some common examples are database, application server, web, and transaction processing. Each of these applications needs to be able to communicate with the others, and middleware will perform the communication functions depending on what service is being used and what information needs to be sent. Middleware can use messaging frameworks like SOAP, REST, and JSON to provide messaging services for different applications.

Software Assurance

When development is complete, software is expected to behave in a certain way. The software is not expected to have vulnerabilities, to have issues handling unexpected or large volumes of input, or to degrade when heavily used. The term that describes the level of confidence that the software will behave as expected is software assurance.

There is a long list of controls, testing, and practices, all for the purpose of raising this confidence level. In the following sections, we cover some of these controls and practices, as well as several ways in which software can be tested.

By employing industry-accepted approaches to raise software assurance, two outcomes happen:

  • Software developers can be confident that their time and effort will be spent on translating the requirements definition into workable, secure code.
  • Users can trust that the software development process was executed with a high level of care and due diligence.

Sandboxing/Development Environment

Application sandboxing is the process of writing files to a sandbox or temporary storage area instead of their normal location. Sandboxing is used to limit the ability of code to execute malicious actions on a user's computer. If you launch an application in the sandbox, it won't be able to edit the Registry or even make changes to other files on the user's hard drive.

One application area where sandboxing is used is with mobile code. Mobile code is software that will be downloaded from a remote system and run on the computer performing the download. The security issue with mobile code is that it is executed locally. Many times, the user might not even know that the code is executing. Java is mobile code, and it operates within a sandbox environment to provide additional security. Data can be processed as either client-side processing or server-side processing. Server-side processing is where the code is processed on the server, whereas with client-side processing the code will run on the client. PHP is an example of a server-side processing language, whereas JavaScript is processed on the client side and can be executed by the browser.

Malware is software. Malware sandboxing is a technique used to isolate malicious code so that it can run in an isolated environment. You can think of a malware sandbox as a stand-alone environment that lets you view or execute the program safely while keeping it contained. Microsoft has several good articles on how to build a Windows Sandbox here: docs.microsoft.com/en-us/windows/security/threat-protection/windows-sandbox/windows-sandbox-overview.

When using a sandbox, you should not expect the malware creators to make analysis an easy process. As an example, malware creators build in checks to try to prevent their malware from running in a sandbox environment. The malware may look at the MAC address to try to determine if the NIC is identified as a virtual one, or it may not run if it does not have an active network connection. In such cases you may need additional tools such as FakeNet-NG, which simulates a network connection to fool the malware so that the analyst can observe the malware's network activity from within a sandboxed environment.

Validating Third-Party Libraries

An application issue that plagues a surprisingly large number of companies is the use of third-party libraries. The attraction of using something already created is clear: money and time saved! However, there is a big security risk as well.

When developers at a company can find code that fits their needs and is already working, then why would they reinvent the wheel? They can plug in a third-party library to accomplish the required tasks. The problem is that the third-party library is an unknown, untested product when compared with the policies and safe coding practices of the in-house developers.
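One basic vetting step is to verify that a downloaded library matches the checksum its maintainers publish before it ever reaches a build system. The expected hash and filename below are placeholders.

    import hashlib

    def sha256_of(path: str, chunk_size: int = 65536) -> str:
        """Compute the SHA-256 digest of a file without loading it all into memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    EXPECTED = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder published hash
    actual = sha256_of("third_party_lib-1.4.2.tar.gz")                              # placeholder filename
    if actual != EXPECTED:
        raise SystemExit("Checksum mismatch: do not use this package")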

Code reuse is, as the name implies, the use of a single piece of code several times, whether it's within an application or reused as an application evolves from one version to the next.

Similar to using a third-party library, reusing code saves time and money. The developer doesn't have to rewrite the same block of code (and possibly introduce errors) and instead can direct the application to a single block of code to be used repeatedly when needed.

When practiced with security in mind, code reuse is certainly a positive practice—for example, when a subroutine is required for several aspects of an application. The subroutine was developed securely, so there is little or no added risk. However, if one application's code, however modular it is, were repurposed for the next version or for another application, then this practice might skip the evaluation steps required and introduce new risks.

Defined DevOps Pipeline

Software today is created much faster than in the past, so it is important for software companies, in order to remain competitive, to create a defined DevOps pipeline that keeps pace with customers’ demands and requirements. A DevOps pipeline also helps an organization stay organized and focused.

This DevOps pipeline is a set of tools and automated processes used to write, compile, and deploy code. If effective, this enables a software company to rapidly test new code on an ongoing automated basis. Moving from a manual process to an automated one results in fewer errors and pushes out higher-quality code that is signed and distributed to customers. Code signing is the process of digitally signing a piece of software guaranteeing the integrity of the code from the time of signature. If the software has been tampered with, the signature will appear untrusted and invalid.
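Verification is the consumer's half of code signing. The sketch below uses the third-party cryptography library to check a detached RSA signature over a release artifact; the file names are illustrative, and it assumes the publisher signed with RSA PKCS#1 v1.5 over SHA-256.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    # Illustrative file names: the publisher's public key, the artifact, and its detached signature
    with open("publisher_public_key.pem", "rb") as fh:
        public_key = serialization.load_pem_public_key(fh.read())
    with open("release.tar.gz", "rb") as fh:
        artifact = fh.read()
    with open("release.tar.gz.sig", "rb") as fh:
        signature = fh.read()

    try:
        public_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
        print("Signature valid: the artifact has not changed since it was signed")
    except InvalidSignature:
        print("Signature INVALID: do not distribute or deploy this artifact")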

Application Security Testing and Application Vetting Processes

Application testing is an important part of IT security. Applications are not like any other item. Should you purchase a car that has a defective accelerator or tire, the vendor will typically repair or replace the items. Software, however, is very different. If software is defective, the vendor may no longer support it, may offer a patch, or may even offer an upgrade for a fee. These are just a few of the reasons why testing applications for security issues is so important.

Application security testing is the process of using software, hardware, and procedural methods to prevent security flaws in applications and protect them from exploit. As a CASP+, you are not expected to be an expert programmer or understand the inner workings of a Python program. What the CASP+ must understand, however, is the importance of application security, how to work with programmers during the development of code, and the role of testing code for proper security controls. As a CASP+, you need to understand various testing methodologies. For example, when the code is available, a full code review may be possible. In situations where you do not have access to the code, you may choose no knowledge application assessment techniques.

Regardless of the type of software test performed, your task is to help ensure that adequate controls are developed and implemented and that testing an application meets an organization's security requirements. What security controls are used in part depends on the type of project. A small in-house project will not have the funding of a major product release. The time and money spent on testing commercial applications is typically much greater than for IT projects. IT projects result in software tools and applications that a company uses internally, that a consultant installs in a client's environment, or that the company installs on an internal or external website. Commercial software applications are developed by software manufacturers for sale to external customers. These may be further classified into stand-alone applications (installed on the user's machine), client-server applications, and web applications, all with their own systems development life cycle (SDLC) considerations.

One example of a code analyzer is a Java application that works with C, C++, Java, assembly, and HTML source and supports user-defined software source metrics. Such analyzers calculate metrics across multiple source trees as one project, present a tree view of the project, and offer flexible reporting capabilities.

Analyzing code can be done one of several ways. The first method, using a fuzzer, involves intentionally injecting a variety of input into the code in order to produce an unexpected or unwanted result, such as a crash. The input will come fast and furious, in large volumes of data (dubbed “fuzz”), with the goal that the application will reveal its weaknesses.
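The core of a fuzzer is nothing more than a loop that throws malformed input at a target and watches for unexpected failures. The toy parser below stands in for real code under test; dedicated fuzzers add coverage feedback, input mutation strategies, and crash triage on top of this idea.

    import random

    def parse_record(data: bytes):
        """Toy parsing target: expects input shaped like b'name:123'."""
        name, value = data.split(b":", 1)          # ValueError if the separator is missing
        return name.decode("ascii"), int(value)    # UnicodeDecodeError / ValueError on junk

    random.seed(1)
    for i in range(10_000):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 32)))
        try:
            parse_record(blob)
        except (ValueError, UnicodeDecodeError):
            pass                                   # expected rejection of malformed input
        except Exception as exc:                   # anything else is a finding worth triage
            print(f"Unexpected failure on input #{i}: {blob!r} -> {exc!r}")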

Another approach is to use static application security testing (SAST). With this method, the software is neither run nor executed; rather, it is reviewed statically. As you can imagine, if an application has millions of lines of code, a purely manual static review is impractical, so you must rely on static code analysis tools. The code to be reviewed is fed as input into the static code analysis tool.

There is also dynamic application security testing (DAST). The difference between dynamic and static code analysis is that while a static review does not execute the application, during dynamic code analysis, the application is run, and you can observe how the code behaves.

Finally, when you combine SAST and DAST, you end up with interactive application security testing (IAST). According to the research firm Gartner, “Next generation modern web and mobile applications require a combination of SAST and DAST.” With IAST, an agent performs the analysis in real time from inside the application, and it can be run during development, integration, or production. The agent has access to all the code, runtime control, configuration, and libraries. With access to all of these at the same time, IAST tools can cover more code with more security rules and give better results.

Considerations of Integrating Enterprise Applications

When integrating products and services into the environment, the CASP+ will need to determine what type of interoperability issues exist. One useful tool to help is computer-aided software engineering (CASE). CASE can be used not only for software process activities but also for reengineering, development, and testing. Testing can help find interoperability issues that have not been discovered during the development process.

CASE tools are generally classified into different areas, such as the following:

  • Reverse engineering
  • Requirement management
  • Process management
  • Software design

Interoperability issues that are not found during development may be discovered during deployment. That is one reason a deployment strategy is so important. Deployment techniques include the following:

  • Hard Changeover A hard changeover deploys the new product or service at a specific date. At this point in time, all users are forced to change to the new product or service. The advantage of the hard changeover is that it gets the change over with and completed. We would compare it to removing a Band-Aid quickly, though there is the possibility of some initial pain or discomfort.
  • Parallel Operation With a parallel operation, both the existing system and the new system are operational at the same time. This offers the advantage of being able to compare the results of the two systems. As users begin working with the new system or product, the old system can be shut down. The primary disadvantage of this method is that both systems must be maintained for a period of time, so there will be additional costs.
  • Phased Changeover If a phased changeover is chosen, the new systems are upgraded one piece at a time. So, for example, it may be rolled out first to marketing, then to sales, and finally to production. This method also offers the advantage of reduced upheaval, but it requires additional cost expenditures and a longer overall period for the change to take effect.

Ideally, the users of new products and services have been trained. Training strategies can vary, but they typically include classroom training, online training, practice sessions, and user manuals. After the integration of products and services is complete and employees have been trained, you may be asked to assess return on investment (ROI) or to look at the true payback analysis.

With the rise of the global economy, enterprises have increasingly been faced with the fundamental decision of where to acquire materials, goods, and services. Such resources often extend far beyond the location where products are made and can be found in diverse areas around the globe. Some potential solutions include the following:

  • Customer Relationship Management Customer relationship management (CRM) consists of the tools, techniques, and software used by companies to manage their relationships with customers. CRM solutions are designed to track and record everything you need to know about your customers. This includes items such as buying history, budget, timeline, areas of interest, and their future planned purchases. Products designed as CRM solutions range from simple off-the-shelf contact management applications to high-end interactive systems that combine marketing, sales, and executive information. CRM typically involves three areas: sales automation, customer service, and enterprise marketing.
  • Enterprise Resource Planning Another process improvement method is enterprise resource planning (ERP). The goal of this method is to integrate all of an organization's processes into a single integrated system. There are many advantages to building a unified system that can service the needs of people in finance, human resources, manufacturing, and the warehouse. Traditionally, each of those departments would have its own computer system. These unique systems would be optimized for the specific ways that each department operates. ERP combines them all together into one single, integrated software program that runs off a unified database. This allows each department to share information and communicate more easily with the others. ERP is seen as a replacement for business process reengineering.
  • Configuration Management Database A configuration management database (CMDB) is a database that contains the details of configuration items and the relationships between them. Once created and mapped to all known assets, the CMDB becomes a means of understanding what assets are critical, how they are connected to other items, and their dependencies.
  • Configuration Management System A configuration management system (CMS) is used to provide detailed recording and updating of information that describes an enterprise's hardware and software. CMS records typically include information such as the version of the software, what patches have been applied, and where resources are located. Location data might include a logical and physical location.

Integration Enablers

To complement the enterprise integration methods and systems mentioned in the preceding sections, there are other services that serve to mend or unite dissimilar parts of the enterprise. The following are additional integration enablers:

  • Directory Services Directory services are the means by which network services are identified and mapped. Directory services perform services similar to those of a phone book, as they correlate addresses to names.
  • Enterprise Service Bus The enterprise service bus (ESB) is a high-level concept to describe the middleware between two unlike services. It is used in service-oriented architectures as a technique of moving messages between services. You can describe an ESB as middleware because it acts as a service broker. ESB is a framework in that different ESB products have different capabilities. What they share in common is an abstraction layer. Not all ESBs offer encryption. It depends on the particular vendor's product. ESB is fast becoming the backbone of many service-oriented enterprises’ services.
  • Service-Oriented Architecture Service-oriented architecture (SOA) specifies the overall multilayered and multitier distributed infrastructure for supporting services such as ESB. Although using web services allows you to achieve interoperability across applications built on different platforms using different languages, applying service-oriented concepts and principles when building applications based on using web services can help you create robust, standards-based, interoperable SOA solutions.
  • Domain Name System Domain Name System (DNS) is the service that lets people find what they're looking for on the Internet. DNS operates behind every address translation, working to resolve fully qualified domain names (FQDNs) and human-readable addressing into numeric IP addresses. DNS serves a critical function for people. If DNS as a service were to cease, the Internet would continue, but users would need to know the IP address of every site they wanted to visit. DNS is a request-response protocol. When a DNS server sends a response for a request, the reply message contains the transaction ID and questions from the original request as well as any answers that it was able to find.
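Name resolution is easy to observe from the standard library alone. The short sketch below asks the operating system's resolver, and therefore DNS, to turn a hostname into addresses, which is exactly the translation described above.

    import socket

    # Resolve a fully qualified domain name to the IP addresses behind it
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443, proto=socket.IPPROTO_TCP):
        print(sockaddr[0])      # one IPv4 or IPv6 address per result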

Integrating Security into the Development Life Cycle

Depending on the company, product or service, and situation, different methodologies may be used to develop an end-to-end solution. The first challenge is to select which methodology to use. Choosing a formal development methodology is not simple, because no one methodology always works best. Some popular software models include the spiral model or prototyping. These models share a common element in that they all have a predictive life cycle. This means that at the time the project is laid out, costs are calculated, and a schedule is defined. A second approach, end-to-end development, can be categorized as agile software development. With the agile software development model, teams of programmers and business experts work closely together. Project requirements are developed using an iterative approach because the project is mission-driven and component-based. The project manager becomes much more of a facilitator in these situations. Popular agile development models include extreme programming and scrum programming.

One good source of information on the systems development life cycle (SDLC) is NIST SP 800-64, which the NIST website titles Security Considerations in the System Development Life Cycle. Although there are many models for the SDLC, NIST 800-64 breaks the model into five phases. These phases are described here:

  • Phase 1: Initiation/Requirements The purpose of the initiation phase is to express the need and purpose of the system. System planning and feasibility studies are performed.
  • Phase 2: Development/Acquisition During this phase, the system is designed, requirements are determined, resources are purchased, and software is programmed or otherwise created. This phase often consists of other defined steps, such as the system development life cycle or the acquisition cycle.
  • Phase 3: Fielding After validation and acceptance testing, the system is installed or released. This phase is also called deployment or implementation.
  • Phase 4: Operation/Maintenance During this phase, the system performs its stated and designed work. The system is almost always modified by the addition of hardware and software and by numerous other events, such as patching. While operational, there may also be insertions or upgrades, or new versions made of the software.
  • Phase 5: Disposal and Reuse The computer system is disposed of or decommissioned once the transition to a new system is completed.

Planning is the key to success. You may not have a crystal ball, but once solutions are proposed, you can start to plan for activities that will be required should the proposal become reality. It's much the same as thinking about buying a car. You come up with a car payment each month but will also need to perform operational activities such as buying fuel and keeping insurance current. And there is also maintenance to consider. Should you decide that you no longer want your car, you may decide to sell it or donate it to a charity, but regardless of your choice, it will need to be decommissioned.

Although the purpose of NIST 800-64 is to assist organizations in integrating essential information technology security steps into their established SDLC process, the security systems development life cycle (SecSDLC) is designed to identify security requirements early in the development process and incorporate them throughout it.

The idea of SecSDLC is to build security into all SDLC activities. The goal is to have them incorporated into each step of the SDLC.

Microsoft developed the security development life cycle (SDL) to increase the security of software and to reduce the impact severity of problems in software and code. The SDL is designed to minimize the security-related design and coding bugs in software. An organization that employs the Microsoft SDL is expected to have a central security entity or team that performs security functions. Some objectives of SDL are as follows:

  • The team must create security best practices.
  • The team must consist of security experts and be able to act as a source of security expertise to the organization.
  • The team is responsible for completing a final review of the software before its release.
  • The team must interact with developers and others as needed throughout the development process.

Operational Activities

You should understand the threats and vulnerabilities associated with computer operations and know how to implement security controls for critical activities through the operational period of the product or software. Some key operational activities are as follows:

  • Vulnerability assessment
  • Security policy management
  • Security audits and reviews
  • Security impact analysis, privacy impact analysis, configuration management, and patch management
  • Security awareness and training; guidance documents

Monitoring

A routine and significant portion of operational activity is the monitoring of systems. Monitoring is not limited to servers; it also includes staying aware of software licensing issues and remaining alert to potential incidents or abuses of privilege.

Maintenance

When you are responsible for the security of a network or IT infrastructure, periodic maintenance of hardware and software is required. Maintenance can include verifying that antivirus software is installed and current; ensuring that backups are completed, rotated, and encrypted; and performing patch management. Maintenance should be driven by policy. Policy should specify when activities are performed and the frequency with which these events occur. Policy should align as closely as possible to vendor-provided recommendations. The maintenance program should document the following:

  • Maintenance schedule
  • The cost of the maintenance
  • Maintenance history, including planned versus unplanned and executed versus exceptional

Responsibility for the security of the network also includes incident response. Maintenance of devices and workstations includes systems that have been recovered from an incident; data collected from those systems is passed to the forensics team responsible for investigating the incident.

Test and Evaluation

An important phase of the systems development life cycle is the testing and evaluation of systems. Before a system can be accepted or deemed to reach a milestone in its development, that system must be evaluated against the expected criteria for that phase or project. If the system is tested against its specification requirements and passes, then its owners and users gain the assurance that the system will function as needed.

Testing spans hardware and software development, but the goal is to raise the assurance that the system will operate as expected. Further, testing may seek to verify that undesired aspects do not exist. For example, during the course of software development, an application might be evaluated for forbidden coding techniques.

General Change Management

As far as possible, compliance with standards should be automated in order to ensure that interorganizational change does not reduce the overall level of security. Unless strong controls have been put in place, the result of change will usually be that the level of security is reduced and that system configurations fall to a lower level of security. This is why it is so important to tie security process to change management.

ISO 20000 defines change management as a needed process to “ensure all changes are assessed, approved, implemented and reviewed in a controlled manner.” NIST 800-64 describes change management as a method to ensure that changes are approved, tested, reviewed, and implemented in a controlled way. Regardless of what guidelines or standards you follow, the change management process can be used to control change and to help ensure that security does not fall to a lower state. A typical change management process includes the following:

  1. Change request
  2. Change request approval
  3. Planned review
  4. A test of the change
  5. Scheduled rollout of the change
  6. Communication to those affected by the planned change
  7. Implementation of the change
  8. Documentation of all changes that occurred
  9. Post-change review
  10. Method to roll back the change if needed
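
As a simple illustration of how these steps can be enforced in tooling, the following minimal Python sketch models a change ticket that must move through the stages in order. The class, stage names, and example change are illustrative assumptions, not part of any specific change management product.

```python
from enum import IntEnum, auto


class Stage(IntEnum):
    """Ordered stages of a typical change management workflow."""
    REQUESTED = auto()
    APPROVED = auto()
    REVIEWED = auto()
    TESTED = auto()
    SCHEDULED = auto()
    COMMUNICATED = auto()
    IMPLEMENTED = auto()
    DOCUMENTED = auto()
    POST_REVIEWED = auto()


class ChangeTicket:
    def __init__(self, summary: str, rollback_plan: str):
        self.summary = summary
        self.rollback_plan = rollback_plan  # method to roll back if needed
        self.stage = Stage.REQUESTED
        self.history = [Stage.REQUESTED]

    def advance(self, next_stage: Stage) -> None:
        """Allow only the next stage in sequence; anything else is rejected."""
        if next_stage != self.stage + 1:
            raise ValueError(f"Cannot jump from {self.stage.name} to {next_stage.name}")
        self.stage = next_stage
        self.history.append(next_stage)


ticket = ChangeTicket("Upgrade edge firewall firmware", rollback_plan="Reflash previous image")
ticket.advance(Stage.APPROVED)   # change request approval
ticket.advance(Stage.REVIEWED)   # planned review
```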

Regardless of what change control process is used, it should be documented in the change control policy. Also, what is and is not covered by the policy should be specified. For example, some small changes, like an update to antivirus programs, may not be covered in the change control process, whereas larger institutional changes that have lasting effects on the company are included. The change control policy should also list how emergency changes are to occur, because a situation could arise in which changes must take place quickly without the normal reviews being completed before implementation. In such a situation, all of the steps should still be completed, but they may not be completed before implementation of the emergency change. Change management must be able to address any of the potential changes that can occur, including the following:

  • Changes to policies, procedures, and standards
  • Updates to requirements and new regulations
  • Modified network, altered system settings, or fixes implemented
  • Alterations to network configurations
  • New networking devices or equipment
  • Changes in company structure caused by acquisition, merger, or spinoff
  • New computers, laptops, smartphones, or tablets installed
  • New or updated applications
  • Patches and updates installed
  • New technologies integrated

Disposal and Reuse

Some products can have a long, useful life. For example, Windows XP was released in 2001 and was updated and maintained until support ended in 2014. Regardless of the product, at some point you will have to consider asset object reuse. Asset object reuse is important because of the information that may remain on a hard disk or any other type of media. Even when data has been sanitized, some residual information may remain. This is known as data remanence, the residual data that remains after data has been erased. Best practice is to wipe the drive with a minimum of seven passes of random ones and zeros. For situations where that is not sufficient, physical destruction of the media may be required.

When such information is deemed too sensitive, the decision may be made not to reuse the objects but to dispose of the assets instead. Asset disposal must be handled in an approved manner. As an example, media that has been used to store sensitive or secret information should be physically destroyed. Before decommissioning or disposing of any systems or data, you must understand any existing legal requirements pertaining to records retention. When archiving information, consider the method for retrieving the information.

Testing

Before products are released, they must typically go through some type of unit, integration, regression, validation, and acceptance testing. The idea is to conduct tests to verify that the product or application meets the requirements laid out in the specification documents.

For some entities, validation is also performed. The U.S. federal government specifies this process as certification and accreditation, and federal agencies are required by law to have their IT systems and infrastructures certified and accredited. Certification is the process of validating that implemented systems are configured and operating as expected. When comparing products, all products must be validated with identical tests. If management agrees with the findings of the certification, the report is formally approved; this formal approval is the accreditation process and grants authorization to operate in a given environment.

Testing is discussed in more detail later in the “Best Practices” section of this chapter.

Development Approaches

Carrying out security activities across the system or software life cycle requires that specific controls be put in place. These controls and security checkpoints start at design and development and do not end until decommissioning. Software development models provide the framework needed to plan and execute milestones and delivery cycles. Some of the most popular development approaches include agile, waterfall, and spiral methodologies.

SecDevOps

The word DevOps is a combination of two terms describing two groups of IT workers: development and operations. Traditionally, these teams met only at the point where an application completed development and was rolled out to production. Today, however, applications change more rapidly, and new services are required from developers more often than in the past. This demands closer collaboration among the people who build the software, the people who operate it, and the people who secure it.

In the past, development (Dev) and operations (Ops) were siloed disciplines. As the term implies, SecDevOps is a blended discipline that adds security best practices to the mix. For the collaboration between developers and operations staff to work well, SecDevOps relies on the agile software development model, which follows a continual flow of interaction between coders and users across phases of planning, building, testing, and feedback, and it integrates security directly into the development and deployment workflows. It gets developers thinking about best practices and security standards early in the process so that security can keep up with a rapid DevOps approach.

Agile

Agile software development allows teams of programmers and business experts to work closely together. According to the Agile Manifesto at agilemanifesto.org, this model builds on the following:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Agile project requirements are developed using an iterative approach, and the project is mission-driven and component-based.

Waterfall

The waterfall model was developed by Winston Royce in 1970 and operates as the name suggests. The original model prevented developers from returning to stages once they were complete; therefore, the process flowed logically from one stage to the next. Modified versions of the model add a feedback loop so that the process can move in both directions. An advantage of the waterfall method is that it provides a sense of order and is easily documented. The primary disadvantage is that it does not work for large and complex projects because it does not allow for much revision.

Spiral

The spiral design model was developed in 1988 by Barry Boehm. Each phase of the spiral model starts with a design goal and ends with a client review. The client can be either internal or external and is responsible for reviewing the progress. Analysis and engineering efforts are applied at each phase of the project, and each phase contains its own risk assessment. Each time a risk assessment is performed, the schedule and estimated cost to complete are reviewed, and a decision is made to continue or cancel the project. The spiral design model works well for large projects; its disadvantage is that it is slower, so projects take longer to complete.

Versioning

Version control, also known as versioning, is a form of source control. Everyone, not just software developers, is familiar with the idea of app version 1.0 versus app version 2.0, and the difference between minor revisions and major version upgrades is fairly common knowledge. Moving from version 1.0 to version 1.1 likely includes a small set of changes or fixes, while going from version 1.0 to version 2.0 might include a completely different interface in addition to revisions and updates. This concept carries through not just the software but the documentation as well.
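
As a minimal illustration of the major.minor convention described above, version strings can be compared by breaking them into numeric components. The helper function is an illustrative sketch, not part of any particular versioning tool.

```python
def parse_version(version: str) -> tuple[int, ...]:
    """Split a dotted version string such as '1.10.2' into comparable integers."""
    return tuple(int(part) for part in version.split("."))


# Tuples compare element by element, so 1.10.0 correctly sorts after 1.2.3.
assert parse_version("1.10.0") > parse_version("1.2.3")
assert parse_version("2.0.0") > parse_version("1.9.9")   # major upgrade
assert parse_version("1.1.0") > parse_version("1.0.0")   # minor revision
```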

Continuous Integration/Continuous Delivery Pipelines

Continuous integration/continuous delivery (CI/CD) pipelines allow for the speedy incorporation of new code. Continuous integration (CI) is the process in which source code updates from all developers working on the same project are continually merged into a central repository and monitored. Continuous delivery (CD) automates the building and testing of code changes and prepares the code to be delivered or deployed into test or production after the build stage.

CI allows for the rapid integration of small chunks of new code from various developers into a shared repository. CI also allows you to repeatedly test the code for errors to detect bugs early on, making them simpler to fix. CD is an extension of CI, enabling software developers to execute additional tests such as UI tests, which helps ensure clean deployment. CD also helps the DevOps team improve the regularity of new feature releases and automate the entire software release. These features reduce the overall expense of a project.
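
A CI/CD pipeline is usually defined in a build server's own configuration format, but the gating logic it enforces can be sketched in a few lines of Python. The commands and stage names below are illustrative assumptions, not a specific CI product's syntax.

```python
import subprocess
import sys

# Each stage is a command; the pipeline stops at the first failure,
# so broken code is never packaged or deployed.
STAGES = [
    ("unit tests", ["pytest", "-q"]),
    ("build package", ["python", "-m", "build"]),
    ("deploy to staging", ["./deploy.sh", "staging"]),
]

for name, command in STAGES:
    print(f"Running stage: {name}")
    result = subprocess.run(command)
    if result.returncode != 0:
        print(f"Stage '{name}' failed; stopping the pipeline.")
        sys.exit(result.returncode)

print("All stages passed.")
```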

Best Practices

Programmers must also assume that attempts will be made to subvert the behavior of the program directly, indirectly, or through manipulation of the software. The following are some programming best practices:

  • Do not rely on any parameters that are not self-generated.
  • Avoid complex code, keeping code simple and small when possible.
  • Don't add unnecessary functionality. Programmers should only implement functions defined in the software specification.
  • Minimize entry points and have as few exit points as possible.
  • Verify all input values. Input must be the correct length, range, format, and data type (see the validation sketch after this list).
  • Interdependencies should be kept to a minimum so that any process modules or components can be disabled when not needed.
  • Modules should be developed that have high cohesion and low coupling.
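
The following minimal sketch shows what verifying length, range, format, and data type might look like for a single field. The field name and limits are hypothetical.

```python
import re

# Hypothetical rule: an order quantity must be a whole number between 1 and 999.
QUANTITY_PATTERN = re.compile(r"^\d{1,3}$")

def parse_quantity(raw: str) -> int:
    """Validate length, format, data type, and range before using the value."""
    if len(raw) > 3:                       # length check
        raise ValueError("quantity too long")
    if not QUANTITY_PATTERN.match(raw):    # format check (digits only)
        raise ValueError("quantity must be numeric")
    value = int(raw)                       # data type conversion
    if not 1 <= value <= 999:              # range check
        raise ValueError("quantity out of range")
    return value
```
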
Testing Best Practices

Once the secure code has been written, it will need to be tested. Tests are classified into the following categories:

  • Unit Testing Examines an individual program or module
  • Interface Testing Examines hardware or software to evaluate how well data can be passed from one entity to another
  • System Testing A series of tests that may include recovery testing and security testing
  • Final Acceptance Testing Usually performed at the implementation phase after the team leads are satisfied with all other tests and the application is ready to be deployed

No matter the size of the job, any developer will attest to how important documentation is to software development. It does not matter whether there is a team of developers collaborating across the world or one coder doing it all; documentation answers the same question for everyone: what does this software need to do? Documentation exists to assist and chronicle all phases of development, and documents should detail the various requirements and plans.

Some documents might include a list, such as the requirements definition, which specifies the overall requirements of the software. The software requirements definition document will detail all functional and nonfunctional requirements. While this definition document is nontechnical and high level, it is rarely a small list or simple document. The purpose of the document is to state in clear terms the objectives of that piece of software. From this document, other more technical specifications can be made.

Regression testing is used after a change is made to verify that the inputs and outputs are still correct. This is very important from a security standpoint, as poor input validation is one of the most common security flaws exploited during an application attack.
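
As a hedged illustration, a regression test for the kind of input validation discussed above might be written with Python's built-in unittest module. The validate_username function is a hypothetical example, not taken from any specific application.

```python
import re
import unittest

def validate_username(name: str) -> bool:
    """Accept 3-20 characters: letters, digits, and underscores only."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", name))

class TestUsernameValidation(unittest.TestCase):
    """Re-run after every change to confirm inputs and outputs are still correct."""

    def test_valid_username_accepted(self):
        self.assertTrue(validate_username("alice_01"))

    def test_injection_attempt_rejected(self):
        self.assertFalse(validate_username("alice'; DROP TABLE users;--"))

    def test_overlong_username_rejected(self):
        self.assertFalse(validate_username("a" * 50))

if __name__ == "__main__":
    unittest.main()
```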

User acceptance testing is the stage in software development where users interact with and test the application. The big questions on the developers’ minds consist of the following:

  • Does the application meet users’ needs?
  • Do users operate the application as expected?
  • Can users break the software, either inadvertently or maliciously?

Unit testing is a piecemeal approach to testing software. Depending on the cohesion of individual modules, it may be possible to test a particular unit in development. By examining an individual program or module, developers can ensure that each module behaves as expected. The security questions on their minds might include the following:

  • Is the module accepting input correctly?
  • Does the module provide output properly to the next module?

Integration testing checks the interface between modules of the software application. The different modules are tested first individually and then in combination as a system. Testing the interface between the small units or modules is part of integration testing. It is usually conducted by a software integration tester and performed in conjunction with the development process.

Integration testing is performed to accomplish the following:

  • To verify whether, when combined, the individual modules comply with the necessary standards and yield the expected results.
  • To locate the defects or bugs in all of the interfaces. For example, when modules are combined, sometimes the data that is passed among the modules contains errors and may not produce the expected results.
  • To validate the integration among any third-party tools in use.

A peer review is a walk-through of each module or software unit. That walk-through is a technical, line-by-line review of the program, diving deep into each subroutine, object, method, and so forth. A peer review, also known as a code review, is meant to inspect code to identify possible improvements and to ensure that business needs are met and security concerns are addressed.

A disadvantage of peer reviews, as viewed by many programmers, is that they take too much time. In practice, however, the investment in time and effort is usually repaid by benefits such as fewer bugs, less rework, greater satisfaction with the end product, and better team communication and cohesiveness.

Prototyping is the process of building a proof-of-concept model that can be used to test various aspects of a design and verify its marketability. Prototyping is widely used during the development process and may be used as a first or preliminary model to test how something works. Virtual computing is a great tool for testing multiple application solutions. Years ago, you would have had to build a physical system to test multiple solutions. With virtual computers and tools such as VMware, users have a sophisticated and highly customizable platform that can be used to test complex client-server applications. Elastic cloud computing can also be used to prototype and test multiple solutions; this pay-as-you-go model allows designers to speed up delivery and significantly shorten the prototype phase.

ISO 27002

An ISO document worth reviewing is ISO 27002. This standard is considered a code of best practice for information security management. It grew out of ISO 17799 and British Standard 7799. ISO 27002 is considered a management guideline, not a technical document. You can learn more at www.27000.org/iso-27002.htm. ISO 27002 provides best practice recommendations on information security management for use by those responsible for leading, planning, implementing, or maintaining security. The latest (2022) edition of ISO 27002 can be previewed at www.iso.org/obp/ui/#iso:std:iso-iec:27002:ed-3:v2:en.

Open Web Application Security Project (OWASP)

“The Open Web Application Security Project® (OWASP) is a nonprofit foundation that works to improve the security of software. Through community-led open-source software projects, hundreds of local chapters worldwide, tens of thousands of members, and leading educational and training conferences, the OWASP Foundation is the source for developers and technologists to secure the web,” according to owasp.org. Standards and guidelines may be sourced from government or the open-source community, including the CMU Software Engineering Institute (SEI), NIST, and OWASP. The OWASP project provides resources and tools for web developers. OWASP maintains a collection of tools and documents that are organized into the following categories:

  • Protect Tools and documents that can be used to guard against security-related design and implementation flaws
  • Detect Tools and documents that can be used to find security-related design and implementation flaws
  • Life Cycle Tools and documents that can be used to add security-related activities into the SDLC

Application exploits are a broad category of attack vectors that computer criminals use to target applications. There are many ways in which an attacker can target applications. Regardless of the path taken, if successful, the attacker can do harm to your business or organization. The resulting damage may range from minor impact to putting your company out of business. Depending on how the application has been designed, an application exploit may be easy to find or extremely difficult to pinpoint. With so much to consider, there needs to be a starting point. As such, OWASP lists the top 10 application security risks in 2021 as follows:

  1. Broken access control
  2. Cryptographic failures
  3. Injection
  4. Insecure design
  5. Security misconfiguration
  6. Vulnerable and outdated components
  7. Identification and authentication failures
  8. Software and data integrity failures
  9. Security logging and monitoring failures
  10. Server-side request forgery (SSRF)

Proper Hypertext Transfer Protocol Headers

Hypertext Transfer Protocol (HTTP) is one of the best-known application protocols. HTTP uses TCP port 80, and although it runs over connection-oriented TCP, the protocol itself is stateless. HTTP is a request-response protocol in which a client sends a request and a server sends a response.

HTTP headers are an important part of HTTP requests and responses. They are intended mainly for communication between the server and client. More specifically, they are name-value pairs, separated by a colon, that appear in request and response messages. There are four types of HTTP headers:

  • Request Header: To the server and includes OS used by client and page requested
  • Response Header: To the client and includes type, date, and size of file
  • General Header: For both client and server and includes date, caching information, warnings, and protocol upgrade data if necessary
  • Entity Header: Information for encoding, language, location, and MD5 hashing for integrity of message upon receipt
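
Response headers are also where common browser protections are switched on. The sketch below assumes the third-party Flask framework and shows one way a web application might attach security-related response headers; the exact header values are illustrative defaults, not a universal recommendation.

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_security_headers(response):
    # Force HTTPS for a year, including subdomains.
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    # Stop browsers from MIME-sniffing responses into a different content type.
    response.headers["X-Content-Type-Options"] = "nosniff"
    # Refuse to be framed, which blunts clickjacking.
    response.headers["X-Frame-Options"] = "DENY"
    # Only load scripts and other resources from this origin.
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    return response

@app.route("/")
def index():
    return "hello"
```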

Attacks that exploit HTTP can target the server, browser, or scripts that run on the browser. Cross-site scripting (XSS) and cross-site request forgery (XSRF or CSRF) attacks are two such examples. Basically, hackers craft special packets in a way that exploits vulnerable code on the web server.

Again, there are automated tools that can do this, and a good place to start researching this technology is the Open Web Application Security Project (OWASP) website at www.owasp.org/index.php/Main_Page. Consider using HTTPS over TCP port 443 instead of unencrypted HTTP.

Data Security Techniques for Securing Enterprise Architecture

There are daily headlines of breaches where data is stolen and leaked. No enterprise is immune to data breaches, but with layers of defense in depth, you can make your organization more difficult to attack than others. By taking measures that have been proven to ensure data security, you promote trust in corporate stakeholders, consumers, and auditors.

Data Loss Prevention

Detecting and blocking data exfiltration requires the use of security event management solutions that can closely monitor outbound data transmissions. Data loss prevention (DLP) requires the analysis of egress network traffic for anomalies and the use of better outbound firewall controls that perform deep packet inspection. Deep packet inspection normally is done by a device at a network boundary, for example by a web application firewall at the trusted network's perimeter. To select where such a device should be placed in any organization, it's important to have a data flow diagram, depicting where and how data flows throughout the network.
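
Much of what a DLP product does on egress traffic boils down to pattern matching against known sensitive-data formats. The following minimal sketch uses illustrative patterns only; real DLP products use far more robust detection, such as validation digits, context keywords, and document fingerprinting.

```python
import re

# Illustrative patterns for U.S. Social Security and payment card numbers.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(payload: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the payload."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(payload)]

hits = scan_outbound("Invoice for John, SSN 123-45-6789, thanks!")
if hits:
    print(f"Blocking transmission: matched {hits}")   # e.g., ['ssn']
```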

Blocking Use of External Media

Enterprise organizations rely on controlling which services and device capabilities can be used, and even where they can be used. Limiting or prohibiting the use of cameras and cell phones can help prevent data leakage from secure areas. Limiting the use of external media, and of USB functionality that allows devices to act as hosts for external USB devices such as cameras or storage, can also help to limit the potential for misuse of devices. This can be a useful privacy control and may also be required by organizational policy.

Some organizations go so far as to block printers from accessing the Internet. Print blocking helps when there are many Wi-Fi networks, computers, or printers in one area: it prevents others from printing to your printer or intercepting files being sent to it.

Remote Desktop Protocol Blocking

Desktop sharing programs are extremely useful, but there are potential risks. One issue is that anyone who can connect to and use your desktop can execute or run programs on your computer. A search on the Web for Microsoft Remote Desktop Services returns a list of hundreds of systems to which you can potentially connect if you can guess the username and password.

At a minimum, the associated ports and applications (for example, TCP port 3389 for RDP) should be blocked or restricted to those individuals who have a need for this service. Advertising this service on the Web is also not a good idea. If a remote access page must be public, it should not be indexed by search engines, and it should display a warning banner stating that the service is for authorized users only and that all activity is logged.

Another issue with desktop sharing is the potential risk from the user's point of view. If the user shares the desktop during a videoconference, then others in the conference can see what is on the presenter's desktop. Should there be a folder titled “divorce papers” or “why I hate my boss,” everyone will see it.

Application sharing is fraught with risks as well. If the desktop sharing user then opens an application such as email or a web browser before the session is truly terminated, anybody still in the meeting can read and/or see what's been opened. Any such incident looks highly unprofessional and can sink a business deal.

Clipboard Privacy Controls

When working on any type of application or device, it is often easier to cut, copy, and paste address information into a text message or a URL into a browser. However, security professionals should know that some platforms have historically had very insecure clipboards. On older versions of Android, any installed application that declared the android.permission.READ_CLIPBOARD permission in its AndroidManifest.xml file was granted it automatically and could read the clipboard without further user interaction. In Windows, information copied into Clipboard History is stored in clear text and can be synced across different devices and accounts, which is convenient but not very secure. It is generally recommended that you never copy sensitive data to the clipboard.

Restricted Virtual Desktop Infrastructure Implementation

Remember dumb terminals and the thin client concept? This has evolved into what is known as the virtual desktop infrastructure (VDI). This centralized desktop solution uses servers to serve up a desktop operating system to a host system. Each hosted desktop virtual machine is running an operating system such as Windows 11 or Windows Server 2019. The remote desktop is delivered to the user's endpoint device via Remote Desktop Protocol (RDP), Citrix, or other architecture. Technologies such as RDP are great for remote connectivity, but they can also allow remote access by an attacker. Questions about these technologies are likely to appear on the CASP+ exam.

This system has lots of benefits, such as reduced onsite support and greater centralized management. However, disadvantages include a significant investment in hardware and software to build the back-end infrastructure, as well as new security issues to manage. To create a more restrictive interface, best practices include disabling local USB, segregating networks, restricting access, and doing a thorough job building the master image. An administrator should avoid bundling all applications into the base image; the best way to manage this is to create a clean base image and then manage applications with profiles and groups.

Data Classification Blocking

Not all data is created equal. A data classification policy describes the classification structure used by the organization and the process used to properly assign classifications to data. Data classification informs us of what kind of data we have, where it is, and how well we need to protect it. Combining data classification with a data loss prevention (DLP) policy results in your most sensitive data being the most protected. Security policies around the protection of data should be implemented with realistic and easy-to-understand procedures that are automated and audited periodically. Security policy should also outline how employees are using data and downloading files and how they are protecting them. A rule of not allowing certain data classes to be removed from protected enclaves is one way of blocking sensitive data from leaving the environment.

Data Loss Detection

The Information Age, also known as the computer age, is a period that began in the mid-20th century as the world shifted from the Industrial Revolution to a digital one. Data became instantly accessible to anyone with an Internet connection, as well as a monetized commodity. Data in use, data in motion/transit, and data at rest must be strategically protected to prevent any unauthorized access.

Watermarking

Although the term steganography is typically used to describe illicit activities, watermarking is used for legitimate purposes. It is typically used to identify ownership or the copyright of material such as videos or images. Because a digital copy is identical to the original, watermarking acts as a passive protection tool: it flags the data's ownership but does not degrade the data in any way. It is an example of digital rights management.

Digital Rights Management

Digital rights management (DRM) is an entire suite of technologies designed to protect digital content. As an example, you may be reading a copy of this book on your tablet, yet that does not mean the publisher wants to provide free copies to 100 of your closest friends! That is the situation for which DRM is designed: it helps prevent copyright infringement online and thus helps the copyright holder maintain control of the information.

Network Traffic Analysis

Network traffic analysis (NTA) is important for networks because sophisticated attackers frequently go undetected in a victim's network for an extended period. Attackers can blend their traffic with legitimate traffic in ways that only skilled network analysts know how to detect. A skilled cybersecurity professional should understand network protocols, network architecture, intrusion detection systems, network traffic capture, and traffic analysis, and should know how to identify malicious network activity. Network tools commonly used to analyze captured network traffic include Wireshark and the command-line tools tshark and tcpdump.

Network Traffic Decryption/Deep Packet Inspection

Wireshark can also be used for some packet inspection, but there are tools that go deeper. Deep packet inspection (DPI) is a form of packet filtering that examines not just the headers but the contents of the packet itself. Wireshark can tell that HTTP is being used but cannot view the contents of an encrypted packet. DPI devices can also act on traffic that fits a profile, such as dropping the whole packet or limiting its bandwidth. nDPI is an open-source DPI library, while Cisco's Network Based Application Recognition (NBAR) is built into many Cisco devices.

Data Classification, Labeling, and Tagging

Information classification strengthens the organization in many ways. Labeling information secret or strictly confidential helps employees see the value of the information and give it a higher standard of care. Information classification also specifies how employees are to handle specific information. For example, a company policy might state, “All sensitive documents must be removed from the employee's desk when leaving work. We support a clean desk policy.” Tagging and marking a piece of data based on its category helps make it searchable, trackable, and protected in the most efficient way.

There are two widely used information classification systems that have been adopted. Each is focused on a different portion of the CIA security triad. These two approaches are as follows:

  • Government Classification System This system focuses on confidentiality.
  • Commercial Classification System This system focuses on integrity.

The governmental information classification system is divided into the following categories:

  • Top Secret: Its disclosure would cause grave damage to national security. This information requires the highest level of control.
  • Secret: Its disclosure would be expected to cause serious damage to national security and may divulge significant scientific, technological, operational, and logistical secrets, as well as many other developments.
  • Confidential: Its disclosure could cause damage to national security and should be safeguarded against disclosure.
  • Unclassified: Information is not sensitive and need not be protected unless For Official Use Only (FOUO) is appended to the classification. Unclassified information would not normally cause damage, but over time Unclassified FOUO information could be compiled to deduce information of a higher classification.

The commercial information classification system is focused not just on confidentiality but also on the integrity of information; therefore, it is divided into the following categories:

  • Confidential: This is the most sensitive rating. This is the information that keeps a company competitive. Not only is this information for internal use only, but its release or alteration could seriously affect or damage a corporation.
  • Private: This category of restricted information is considered personal in nature and might include medical records or human resource information.
  • Sensitive: This information requires controls to prevent its release to unauthorized parties. Damage could result from its loss of confidentiality or its loss of integrity.
  • Public: This is similar to unclassified information in that its disclosure or release would cause no damage to the corporation.

Depending on the industry in which the business operates and its specific needs, one of these options will typically fit better than the other. Regardless of the classification system chosen, security professionals play a key role in categorizing information and helping to determine classification guidelines. Once an organization starts the classification process, it's forced to ask what would happen if specific information was released and how its release would damage or affect the organization.

Metadata/Attributes

Metadata generated as a regular part of operations, communications, and other activities can also be used for incident response. Metadata is information about other data. In the case of systems and services, metadata is created as part of files and embedded documents, is used to define structured data, and is included in transactions and network communications, among many other places. Metadata attributes express qualifications on content and can be used to modify how the content is processed or filtered, or to flag content based on values or status. Metadata attributes can provide a list of one or more qualifications, such as administrator and programmer, meaning that the content applies to both administrators and programmers.

The following are common types of metadata to understand:

  • Email: This includes headers and other information found in an email. Email headers provide specifics about the sender, the recipient, the date and time the message was sent, and whether the email had an attachment.
  • Mobile: Data is collected by phones and other mobile devices including call logs, SMS and other message data, data usage, GPS location tracking, and cellular tower information. Mobile metadata is incredibly powerful because of the amount of geospatial information that is recorded about where the phone has traveled.
  • Web: This is embedded into a website as part of the code of the website but is invisible to average users and includes meta tags, headers, cookies, and other information that may help with search engine optimization, website functionality, advertising, and tracking.
  • File: This is a powerful tool when reviewing when a file was created, how it was created, if and when it was modified, and who modified it. Metadata is commonly used for forensics and other investigations, and most forensic tools have built-in metadata viewing capabilities.
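
File metadata of the kind described in the last bullet is easy to inspect programmatically. The following minimal sketch uses Python's standard library to pull timestamps and ownership details from a file; the path is a placeholder.

```python
import os
import stat
from datetime import datetime, timezone

path = "report.docx"   # placeholder path
info = os.stat(path)

print("Size (bytes):", info.st_size)
print("Last modified:", datetime.fromtimestamp(info.st_mtime, tz=timezone.utc))
print("Last accessed:", datetime.fromtimestamp(info.st_atime, tz=timezone.utc))
print("Metadata changed:", datetime.fromtimestamp(info.st_ctime, tz=timezone.utc))
print("Owner UID:", info.st_uid)                    # numeric owner on POSIX systems
print("Permissions:", stat.filemode(info.st_mode))  # e.g., -rw-r--r--
```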

Obfuscation

Obfuscation means being evasive, unclear, or confusing. Encryption is a way to obfuscate data, rendering it extremely difficult to use. Other ways to obfuscate data in an enterprise are to use the following techniques:

  • Tokenization: Replace sensitive data with a unique nonsensitive identification element, called a token, that has no exploitable value
  • Scrubbing: Sometimes called cleansing; fixing incorrect or duplicate data in a data set
  • Masking: Hiding or filtering data so that only the data you intend to expose is shown (a brief sketch of masking and tokenization follows this list)
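
As a minimal, illustrative sketch (not drawn from any particular DLP or database product), masking and tokenization of a payment card number might look like this:

```python
import secrets

# Token vault: maps random tokens back to the original values.
# A real system would keep this in a hardened, access-controlled store.
_vault: dict[str, str] = {}

def mask_card(card_number: str) -> str:
    """Show only the last four digits, e.g. '************1111'."""
    return "*" * (len(card_number) - 4) + card_number[-4:]

def tokenize(card_number: str) -> str:
    """Replace the card number with a random token that has no exploitable value."""
    token = secrets.token_urlsafe(16)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Authorized systems can exchange the token for the original value."""
    return _vault[token]

card = "4111111111111111"
print(mask_card(card))          # ************1111
token = tokenize(card)
print(token)                    # random token that is safe to store or log
```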

Anonymization

Anonymization of data is removing anything in a data set that could identify an individual. By removing sensitive or private information such as names, Social Security numbers, or phone numbers, the data itself remains, but it cannot be attributed to a single individual, preserving a data subject's privacy. When data becomes anonymized, it can be used for the greater good while ensuring the rights of the individual.

Encrypted vs. Unencrypted

Cryptography can be defined as the art of protecting information by transforming it into an unreadable format using a form of encryption. Everywhere you turn you see cryptography. It is used to protect sensitive information, prove the identity of a claimant, and verify the integrity of an application or program. As a security professional for your company, which of the following would you consider more critical if you could choose only one?

  • Provide a locking cable for every laptop user in the organization.
  • Enforce full disk encryption for every mobile device.

Our choice would be full disk encryption. Typically, the data will be worth more than the cost of a replacement laptop. If the data is lost or exposed, you'll incur additional costs such as client notification and reputation loss.

Encryption is not a new concept. The desire to keep secrets is as old as civilization. There are two basic ways in which encryption is used: for data at rest and for data in motion/transit. Data at rest might be information on a laptop hard drive or in cloud storage. Data in motion/transit might be data being processed by SQL, a URL requested via HTTP, or information traveling over a VPN at the local coffee shop bound for the corporate network. In each of these cases, protection must be sufficient.

Data classification should play a large role in deciding which data is encrypted and which is not. If data is available to the public, leaving it unencrypted makes perfect sense; if the data is sensitive, encryption is important to keep that information secure.
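
As a minimal sketch of protecting sensitive data at rest, the example below assumes the third-party cryptography package and its Fernet recipe (authenticated symmetric encryption); key storage and rotation are deliberately out of scope here.

```python
from cryptography.fernet import Fernet

# Generate a key once and protect it (for example, in a key management system);
# anyone holding the key can decrypt the data.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"customer SSN: 123-45-6789"
ciphertext = f.encrypt(plaintext)      # safe to write to disk or cloud storage
print(ciphertext)

recovered = f.decrypt(ciphertext)
assert recovered == plaintext
```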

Data Life Cycle

The data life cycle, sometimes called the information framework, is the process of tracking data through its evolution, start to finish. As shown in Figure 8.1, the data life cycle is called a cycle but is more of a linear process.

FIGURE 8.1 The data life cycle

Data is first generated or captured. After the raw data is captured, it is processed or used in analysis or integrations. At this stage, often it is ingested, reformatted, or summarized into another workflow. Data gets shared when analysis turns into a decision and more information is gained. The storage of data after it has been gathered and analyzed can be kept in a location where it can be easily accessed for even more evaluation. Once the information has served its current purpose and before it's destroyed permanently, it is archived for any possible future value it might have.

Data Inventory and Mapping

Data inventory records basic information about a data asset, including name, contents, privacy, compliance requirements, use license, data owner, and data source. To create a data inventory, a project manager or committee needs to be formed, since data will be gathered from multiple areas of the business. After business needs are collected, the scope of the inventory is defined and a catalog of data assets can be built. With data coming from multiple areas of the business, and possibly from disparate databases, data must be mapped by matching fields from one database to another, much like creating a primary key in a table. For the best decisions to be made, the information must be digested and analyzed in a way that makes logical sense.

Data Integrity Management

Improper storage of sensitive data is a big problem. Sensitive data is not always protected by the appropriate controls or cryptographic solutions. As mentioned previously, during the requirements phase of the SDLC process, security controls for sensitive data must be defined. At this point, these questions must be asked:

  • Does the data require encryption?
  • Is personal information such as credit card numbers, health records, or other sensitive data being stored?
  • Is a strong encryption standard being used?

If you answered yes to any of these questions, controls should be implemented to protect the data. A CASP+ must be concerned with not just the storage of sensitive data but also how the data is accessed.

A CASP+ must also be concerned about what happens to the data at its end of life. This is when data remanence becomes a big concern. Data remanence refers to any residual data left on storage media. Depending on the sensitivity of the information, wiping a drive may be sufficient, or you may need to oversee the destruction of the media. Services are available that offer secure hard drive destruction that shreds or crushes drives before disposing of the pieces that remain. Data remanence is also a major concern when using the cloud.

Data Storage, Backup, and Recovery

Think of how much data is required for most modern enterprises. There is a huge dependence on information for the business world to survive. Although the amount of storage needed continues to climb, there is also the issue of terminology used in the enterprise storage market.

As a CASP+ candidate, you will be expected to understand the basics of enterprise storage and also grasp the security implications of secure storage management. Before any enterprise storage solution is implemented, a full assessment and classification of the data should occur. This would include an analysis of all threats, vulnerabilities, existing controls, and the potential impact of loss, disclosure, modification, interruption, or destruction of the data.

From a security standpoint, one of the first tasks in improving the overall security posture of an organization is to identify where data resides. The advances in technology make this much more difficult than it was in the past. Years ago, Redundant Array of Inexpensive/Independent Disks (RAID) was the standard for data storage and redundancy.

RAID uses multiple disks with data either striped (spread across disks) or mirrored (completely copied), along with parity information, so that the array can survive one or more disk failures without losing or corrupting data.

RAID can be deployed in several ways, depending on whether the system needs speed or redundancy. RAID 0 is all about optimizing the speed of your hard drives: with at least two drives, RAID 0 stripes data across them, which improves read and write speeds, but if one drive fails you lose all of your data. RAID 1 requires a minimum of two drives and is called data mirroring because the same information is written to both drives. RAID 3 and RAID 5 each require a minimum of three drives. RAID 10 (also called 1+0) requires a minimum of four drives and combines RAID 1 and RAID 0; a variant, 0+1, also exists. Both provide fault tolerance and increased performance.

Today, companies have moved to dynamic disk pools (DDPs) and cloud storage. DDP shuffles data, parity information, and spare capacity across a pool of drives so that the data is better protected and downtime is reduced. DDPs can be rebuilt up to eight times faster than traditional RAID.

Enterprise storage infrastructures may not have adequate protection mechanisms. The following basic security controls should be implemented:

  • Know your assets. Perform an inventory to know what data you have and where it is stored.
  • Build a security policy. A corporate security policy is essential. Enterprise storage is just one item that should be addressed.
  • Implement controls. The network should be designed with a series of technical controls, such as the following:
    • Intrusion detection system (IDS)/intrusion prevention system (IPS)
    • Firewalls
    • Network access control (NAC)
  • Harden your systems. Remove unnecessary services and applications.
  • Perform proper updates. Use patch management systems to roll out and deploy patches as needed.
  • Segment the infrastructure. Segment areas of the network where enterprise storage mechanisms are used.
  • Use encryption. Evaluate protection for data at rest and for data in transit.
  • Implement logging and auditing. Enterprise storage should have sufficient controls so that you can know who attempts to gain access, what requests fail, when changes to access are made, or when other suspicious activities occur.
  • Use change control. Use change control and IT change management to control all changes. Changes should occur in an ordered process.
  • Implement trunking security. Trunking security applies to the VLAN trunk links that carry traffic for multiple VLANs between switches. Restrict which VLANs are allowed on each trunk, disable automatic trunk negotiation on user-facing ports, and avoid using the default native VLAN so that an attacker cannot hop VLANs or turn an access port into a trunk.
  • Implement port security. When addressing the control of traffic at layer 2 on a switch, the term used today is port security. Port security limits what traffic is allowed in and out of particular switch ports, controlled per layer 2 (MAC) address. For example, when a network administrator knows that a fixed set of MAC addresses is expected to send traffic through a switch port, the administrator can employ port security to ensure that traffic from any other MAC address is not allowed into that port and cannot traverse the switch.

Now that we've explored some of the security controls of enterprise storage, let's look at some of the storage technology used in enterprise storage.

  • Virtual Storage Over the last 20 years, virtual storage options have grown, evolved, and matured. These online entities typically focus either on storage or on sharing. The storage services are designed for storage of large files. Many companies are entering this market and giving away storage, such as Microsoft's OneDrive, Amazon Drive, and Google Drive.

    Virtual file sharing services are a second type of virtual storage. These services are not meant for long-term use. They allow users to transfer large files. These virtual services work well if you are trying to share very large files or move information that is too big to fit as an attachment.

    On the positive side, there are many great uses for these services, such as keeping a synchronized copy of your documents in an online collaboration environment, sharing documents, and synchronizing documents between desktops, laptops, tablets, and smartphones.

Security Requirements and Objectives for Authentication and Authorization Controls

The major objective for authentication and authorization is to confirm the identity of who is accessing your data, applications, and websites. Authentication confirms someone is who they say they are, and authorization gives them access to use a certain resource. It is one of the major tenets of cybersecurity.

Credential Management

Credential management policies describe the account life cycle from provisioning through active use and decommissioning using a set of tools to manage local, network, or internet credentials. This policy should include specific requirements for personnel who are employees of the organization, as well as third-party contractors. It should also include requirements for credentials used by devices, service accounts, and administrator/root accounts.

Password Repository Application

A password repository is a password vault that keeps all your passwords safe. Just about every website you visit today, from banking and social media accounts to shopping applications, requires a user account and password. It is very difficult for a human being to remember different passwords for all of those accounts, so many end users create simple passwords like "123456789" or "Password123!" if complexity is required. Examples of password repositories include LastPass, Dashlane, and Keychain.

End-User Password Storage

A password repository will store passwords for you securely and assist in the generation of new ones. A password manager is essentially an encrypted vault for storing passwords that is itself protected by a master password. To obtain access to the passwords stored in the manager, a user has to know the master password; in many cases, a second authentication factor is required as well.

Password vaults can be used simply to store passwords for easy recall, but one of the best features of most password managers is their ability to generate passwords. A longer password is harder to crack, and the passwords generated by password managers are long, random combinations of letters, numbers, and symbols.
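
A minimal sketch of that kind of generation, using only Python's standard library, is shown below; the length and character set are illustrative choices.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Build a random password from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # different random output on every run
```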

Another significant feature of most password managers is the ability to automatically fill in passwords to stored sites. By using that feature you won't have to type anything but the master password, and it's also a good way to avoid having passwords stolen by keylogging malware.

Most organizations have somewhat limited visibility into what passwords their employees are using, which can increase their risk both on premises and in the cloud. Another risk factor for passwords is that many of those employees reuse those passwords for multiple accounts, both personal and professional. With the adoption of cloud infrastructure, many organizations still struggle with enforcing password policies on all systems. With so many applications in the SaaS model, a business should look to using a next-generation password management system using multifactor authentication (MFA) and single sign-on (SSO) for security and compliance.

Another option for enterprise organizations is a physical hardware key manager, which helps combat the challenge of memorizing complex passphrases and keeps employees from reusing passwords across multiple applications. Using a hardware key manager is a good way of creating and maintaining passwords for all account logins.

Privileged Access Management

Privileged users have accounts that give them complete access to your IT ecosystem. These accounts can belong to internal administrators or external contractors, allowing them to manage network devices, operating systems, and applications. Because of their level of access, admin credentials are the number-one target of most bad actors. Admins can create other admins.

With an ever-changing landscape and staffing, managing these privileged accounts becomes difficult. Access management tooling allows insight into who is doing what in the environment. This allows secure access from a central location and adapts to security solutions already deployed in the environment.

Password Policies

As a security administrator, you've no doubt heard many stories about how some people do very little to protect their passwords. Sometimes people write their passwords down on sticky notes, place them under their keyboards, or even leave them on a scrap of paper taped to the monitor. As a security professional, you should not only help formulate good password policy but also help users understand why and how to protect passwords. One solution might be to offer password manager programs that can be used to secure passwords. Another approach is migration to biometric or token-based authentication systems.

One of my favorite penetration tester stories is of a targeted phishing campaign launched against a specific department: six people clicked the link in the email, two of them were logged in with their administrator accounts when they clicked, and those accounts were compromised in 22 seconds. Within hours, the entire network belonged to the pen tester. You have to control who has administrative privileges and even how those administrators use their credentials. When you are logged in as an admin, you should not be opening your email under any circumstances. That is what your standard user account is for.

Two very common attacks rely on privilege to execute, which is one reason the Common Vulnerability Scoring System (CVSS) measures whether privileges are required for exploitation. The first type of attack is the one described previously, where a user with elevated credentials opens a malicious attachment. The other is elevation of privilege achieved by cracking an administrator's password. If the password policy is weak or not enforced, the danger increases exponentially.

Educate leadership and help create a robust security posture where you restrict admin privilege. Have IT admins make a list of the tasks that they do on an average day. Check the tasks that require administrative credentials. Create an account for normal tasks that all users can use, and use the admin account for only those tasks where it's necessary. If you have executives insisting that they need admin privileges, remind them that they are the ones that hackers are targeting.

Microsoft has guidance on implementing least privilege. For Linux, each sysadmin should have a separate account and enforce the use of sudo by disabling su. You should also change all default passwords on all assets in your environment as well as making sure that each password is as robust as possible. Use multifactor authentication and configure systems to issue an alert if an admin mistypes their password.

Password hygiene is a fiercely debated topic in IT. If a user thinks they are compromised, immediately change any passwords that might have been revealed. If the same passwords are used on multiple sites for different accounts, change those as well, and don't ever use that password again. Some sites require the password to be a certain length with uppercase, lowercase, and special characters. Some people swear by using password managers like LastPass, Keeper, and Dashlane. A password manager is a tool that does the work of creating and remembering all the passwords for your accounts. It sounds great in theory but is a single point of failure.

To make accounts safer, follow these password practices:

  • Make passwords long and complex. Ideally, your password should be randomized with uppercase and lowercase letters, numbers, and symbols, even though that makes it harder to remember. Try to create a long passphrase out of one of your favorite books, for example, Wh0i$J0hnG@1t!.
  • Do not write them down or use birthdays.
  • Do not reuse passwords.
  • Change passwords when instructed according to policy.
  • Use multifactor authentication whenever possible.
  • Don't be too terribly social on social media. Nearly 90 million Facebook accounts had their profile information shared by researchers using a third-party quiz app. If you aren't paying for it, you are the product. Nothing is ever truly private on social media.

A single weak password can expose your entire network to a bad actor. A password audit uses an application that allows an internal team to verify that the passwords used in the environment cannot easily be recovered by a dictionary or brute-force attack. Storing passwords in a reversible form means that an experienced attacker could recover those passwords and then log into network assets using the compromised accounts.

Password auditing tools run on, and verify passwords for, different operating systems. Applications like RainbowCrack rely on precomputed rainbow tables: candidate plaintexts are hashed ahead of time, and a captured hash is then looked up against the tables until a match reveals the plaintext. Generating the tables is time consuming, but auditing passwords this way is very important to the security of a large enterprise.
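
A stripped-down version of a dictionary-based password audit can be sketched with Python's standard library; the hash algorithm, wordlist, and captured hashes below are purely illustrative, and real audits target the specific, usually salted, hash format in use.

```python
import hashlib

# Hashes pulled from a hypothetical system that stores unsalted SHA-256 hashes.
captured_hashes = {
    "alice": hashlib.sha256(b"Password123!").hexdigest(),
    "bob": hashlib.sha256(b"x9#Tq7!vLm2@pR").hexdigest(),
}

wordlist = ["123456789", "Password123!", "letmein", "qwerty"]

for user, stored_hash in captured_hashes.items():
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
            print(f"{user}: weak password found ('{candidate}')")
            break
    else:
        print(f"{user}: not found in wordlist")
```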

Federation

Wouldn't it be nice if you could log into one site such as outlook.live.com/owa and not have to repeat the login process as you visit other third-party sites? Well, you can do just that with services such as federation. Federation is similar to single sign-on (SSO). SSO allows someone to log in once and have access to any network resource, whereas federation allows you to link your digital identity to multiple sites and use those credentials to log into multiple accounts and systems that are controlled by different entities.

Federated ID is the means of linking a person's electronic identity and attributes, stored across multiple distinct identity management systems. Of course, these kinds of systems are often just as vulnerable as any other. One example is in RSA Federated Identity Manager under CVE-2010-2337. This specific example could potentially allow a remote attacker to redirect a user to an arbitrary website and perform phishing attacks, as referenced in the NIST National Vulnerability Database at nvd.nist.gov/vuln/detail/CVE-2010-2337.

Transitive Trust

Transitive trust means that if A trusts B and B trusts C, then logically A should be able to trust C. In Microsoft Active Directory, transitive trust is a two-way relationship between domains. When a new domain is created, it automatically establishes a two-way transitive trust with its parent domain, which allows users to access resources in both the child and the parent.

OpenID

OpenID is an open standard that is used as an authentication scheme. OpenID allows users to log on to many different websites using the same identity on each of the sites. As an example, you may log in to a news site with your Facebook username and password. OpenID was developed by the OpenID Foundation. OpenID works as a set of standards that includes OpenID Authentication, Attribute Exchange, Simple Registration Extension, and Provider Authentication Policy Exchange.

OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0 that you can use to sign in a user to an application securely. When you use the Microsoft identity platform's implementation of OpenID Connect, you can add sign-in and API access to your apps.
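
One small, concrete piece of OIDC that is easy to show is provider discovery: each identity provider publishes its endpoints at a well-known URL. The sketch below assumes the third-party requests package and uses Microsoft's common endpoint as an example issuer; the application registration details that a real sign-in flow needs are omitted.

```python
import requests

# Every OIDC provider publishes its configuration at this well-known path.
issuer = "https://login.microsoftonline.com/common/v2.0"
config = requests.get(f"{issuer}/.well-known/openid-configuration", timeout=10).json()

# The discovery document tells the client where to send users to sign in,
# where to exchange codes for tokens, and where to fetch signing keys.
print("Authorization endpoint:", config["authorization_endpoint"])
print("Token endpoint:", config["token_endpoint"])
print("JWKS URI:", config["jwks_uri"])
```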

Security Assertion Markup Language

Security Assertion Markup Language (SAML) is one example of a protocol designed for cross-web service authentication and authorization. SAML is an XML-based standard that provides a mechanism to exchange authentication and authorization data between different entities.

Over time, SAML promises to improve new generations of web services. The protocol was created by the Organization for the Advancement of Structured Information Standards (OASIS), a nonprofit consortium that develops and adopts open standards for the global information society. One outcome of its work is SAML, which allows business entities to make assertions regarding the identity, attributes, and entitlements of a subject. This means that SAML allows users to authenticate once and then be able to access services offered by different companies. At the core of SAML is the XML schema that defines the representation of security data; this can be used to pass the security context between applications.

For SAML to be effective on a large scale, trust relationships need to be established between remote web services. The SAML specification makes use of pairwise circles of trust, brokered trust, and community trust. Extending these solutions beyond the Internet has been problematic and has led to the proliferation of noninteroperable proprietary technologies. In terms of protocol sequences, SAML is similar to OpenID.

SAML assertions are communicated by a web browser through cookies or URL strings, and they can be packaged in the following ways:

  • MIME: SAML assertions are packaged into a single Multipurpose Internet Mail Extensions (MIME) security package.
  • SOAP: SAML assertions are attached to the Simple Object Access Protocol (SOAP) document's envelope header to secure the payload.
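
To give a rough sense of what an assertion carries, the following Python sketch parses a minimal, made-up SAML 2.0 assertion and extracts the subject's NameID and a single attribute. This is illustration only; a real service provider must also validate the assertion's XML signature, conditions, and audience before trusting it.

    import xml.etree.ElementTree as ET

    NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

    # Hypothetical, unsigned assertion used purely for illustration
    assertion_xml = """
    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
      <saml:Subject>
        <saml:NameID>jdoe@example.com</saml:NameID>
      </saml:Subject>
      <saml:AttributeStatement>
        <saml:Attribute Name="role">
          <saml:AttributeValue>finance-user</saml:AttributeValue>
        </saml:Attribute>
      </saml:AttributeStatement>
    </saml:Assertion>
    """

    root = ET.fromstring(assertion_xml)
    name_id = root.find("./saml:Subject/saml:NameID", NS).text
    role = root.find(".//saml:Attribute[@Name='role']/saml:AttributeValue", NS).text
    print(name_id, role)   # jdoe@example.com finance-user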

Shibboleth

Shibboleth can be described as a distributed web resource access control system. Shibboleth enhances federation by allowing the sharing of web-based resources. When you use Shibboleth, the target website trusts the source site to authenticate its users and manage their attributes correctly. The disadvantage of this model is that there is no differentiation between authentication authorities and attribute authorities.

Access Control

Authentication is the process of proving the veracity, or truth, of a claim, or to put it differently, proving that a user is who they claim to be. Various network authentication methods have been developed over the years. These are divided into four basic categories known as authentication factors:

  • Something you know
  • Something you have
  • Something you are
  • Somewhere you are

These four authentication factors are discussed again later in the “Multifactor Authentication” section of this chapter.

If authentication is controlled by virtue of a person's location, time, or even behavior, this is an example of context-based authentication. For example, if a user doesn't normally sign in on a weekend or past 7 p.m. on weeknights, you can set a deny rule that blocks that user if they attempt to authenticate after 7 p.m. Monday through Friday or at any time on weekends. Alternatively, the rule could prompt the user for a second form of authentication, such as a key fob.
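
A rule like the one just described can be expressed as a small policy check. The following Python sketch uses hypothetical cutoff times (7 p.m. on weekdays, all day on weekends) and simply returns a decision that an authentication system could act on.

    from datetime import datetime

    def evaluate_login(now: datetime) -> str:
        """Hypothetical context rule: normal sign-ins are Mon-Fri before 7 p.m."""
        is_weekend = now.weekday() >= 5      # 5 = Saturday, 6 = Sunday
        after_hours = now.hour >= 19         # 7 p.m. or later
        if is_weekend or after_hours:
            return "require_second_factor"   # or "deny" for a hard block
        return "allow"

    print(evaluate_login(datetime(2023, 6, 10, 14, 0)))   # Saturday -> require_second_factor
    print(evaluate_login(datetime(2023, 6, 7, 9, 30)))    # Wednesday morning -> allow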

Another form of authentication occurs when, for example, a user opens an application on a mobile device, approves the sign-in request, and is then signed into the system. That process is called out-of-band push-based authentication.

Discretionary Access Control

Discretionary access control (DAC) gives users the capability to access organizational files and directories, based on the decision of the owner of the file or directory. DAC uses access control lists (ACLs), such as those found in the Unix file permission system.

Mandatory Access Control

Mandatory access control (MAC) has been used by the government for many years. All files controlled by MAC policies are labeled at different security levels, and subjects are allowed to access objects only at the same or lower levels than their clearance. Overriding MAC requires authorization from senior management.

Role-Based Access Control

Role-based access control (RBAC) systems rely on roles that are then matched with privileges that are allocated to those roles. This makes RBAC a popular option for enterprises that can quickly categorize personnel with roles like “cashier” or “database administrator” and provide users with the appropriate access to systems and data based on those roles. RBAC systems have some fundamental criteria to follow:

  • Role assignment, which states that subjects can only use permissions that match a role they have been assigned.
  • Role authorization, which states that the subject's active role must be authorized for the subject. This prevents subjects from taking on roles they shouldn't be able to.
  • Permission authorization, which states that subjects can only use permissions that their active role is allowed to use.

These rules together describe how permissions can be applied in an RBAC system and hierarchies can be built that allow specific permissions to be accessible at the right stages based on roles in any particular ecosystem.
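
A minimal sketch of these criteria, using hypothetical roles and permissions, might look like the following in Python:

    # Hypothetical role and user assignments for illustration
    ROLE_PERMISSIONS = {
        "cashier": {"open_register", "process_sale"},
        "database administrator": {"backup_db", "create_user"},
    }

    USER_ROLES = {
        "maria": {"cashier"},
        "derek": {"database administrator"},
    }

    def authorize(user: str, active_role: str, permission: str) -> bool:
        # Role assignment/authorization: the active role must be assigned to the subject
        if active_role not in USER_ROLES.get(user, set()):
            return False
        # Permission authorization: the active role must hold the requested permission
        return permission in ROLE_PERMISSIONS.get(active_role, set())

    print(authorize("maria", "cashier", "process_sale"))              # True
    print(authorize("maria", "database administrator", "backup_db"))  # False: role not assigned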

Rule-Based Access Control

Rule-based access control, sometimes called RBAC (and sometimes RuBAC to differentiate it from role-based access control), is an applied set of rules or ACLs that apply to various objects or resources. When an attempt is made to access an object, the rules are checked to see whether the access is allowed. A popular example of rule-based access control is a firewall rule set.
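
Because a firewall rule set is the classic example, the following Python sketch evaluates a made-up, first-match rule list against a connection attempt; the rules themselves are hypothetical.

    # Hypothetical first-match rule set, similar in spirit to a firewall ACL
    RULES = [
        {"action": "allow", "protocol": "tcp", "port": 443},
        {"action": "allow", "protocol": "tcp", "port": 22},
        {"action": "deny",  "protocol": "any", "port": None},   # catch-all deny
    ]

    def check_access(protocol: str, port: int) -> str:
        for rule in RULES:
            proto_match = rule["protocol"] in ("any", protocol)
            port_match = rule["port"] in (None, port)
            if proto_match and port_match:
                return rule["action"]    # first matching rule wins
        return "deny"

    print(check_access("tcp", 443))   # allow
    print(check_access("udp", 53))    # deny (falls through to the catch-all rule)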

Attribute-Based Access Control

Attribute-based access control (ABAC) schemes are well suited to application security, where they are often used for enterprise systems that have complicated user roles and rights that vary depending on the roles users hold and the way they interact with a system. They're also used with databases, content management systems, microservices, and APIs for similar purposes.
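
As a minimal illustration, an ABAC decision combines attributes of the user, the resource, and the requested action. The attributes and policy below are hypothetical.

    def abac_decision(user: dict, resource: dict, action: str) -> bool:
        """Hypothetical policy: staff may read records owned by their own
        department if their clearance meets the record's sensitivity."""
        return (
            action == "read"
            and user.get("department") == resource.get("owning_department")
            and user.get("clearance", 0) >= resource.get("sensitivity", 0)
        )

    user = {"department": "finance", "clearance": 3}
    record = {"owning_department": "finance", "sensitivity": 2}
    print(abac_decision(user, record, "read"))    # True
    print(abac_decision(user, record, "write"))   # False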

Protocols

Secure protocols are another component of secure network infrastructure design. Routers and other network infrastructure devices should be capable of management by multiple secure access protocols. Insecure protocols such as Telnet and FTP should not be used.

Remote Authentication Dial-in User Service

In the Microsoft world, the component designed for remote access is Remote Access Services (RAS). RAS is designed to facilitate the management of remote access connections through dial-up modems. Unix systems also have built-in methods to enable remote access. Historically, these systems worked well with Remote Authentication Dial-In User Service (RADIUS), a protocol developed to be a centralized sign-on solution that could support authentication, authorization, and accounting. This method of remote access has been around for a while.

When a client device wants to authenticate, it sends the RADIUS server a request message containing the user's credentials. In response, the server returns an Access-Accept message if the credentials are correct and an Access-Reject message if they are not. RADIUS is an open standard and can be utilized on devices from many vendors. Note that RADIUS encrypts only the password, not the username.

Terminal Access Controller Access Control System

Cisco has implemented a variety of remote access methods through its networking hardware and software. Originally, this was Terminal Access Controller Access Control System (TACACS). TACACS has been enhanced by Cisco and expanded twice. The original version of TACACS provided a combined authentication and authorization process. This was extended to Extended Terminal Access Controller Access Control System (XTACACS). XTACACS is proprietary to Cisco and provides separate authentication, authorization, and accounting processes. The most current version is TACACS+. It adds functionality and extends attribute control and accounting processes. TACACS+ also separates the process into three distinct functions: authentication, authorization, and accounting. While these processes could even be hosted on separate servers, it's not necessary.

Diameter

The Diameter protocol was designed to be an improvement over RADIUS and have better handling of mobile users (IP mobility). Diameter provides the functions of authentication, authorization, and accounting. However, RADIUS remains very popular.

Lightweight Directory Access Protocol

Lightweight Directory Access Protocol (LDAP) is an application protocol that is used for accessing directory services across a TCP/IP network. Active Directory (AD) is Microsoft's implementation of directory services and makes use of LDAP.
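
As a quick illustration of how a client queries a directory over LDAP, the following Python sketch uses the third-party ldap3 library. The server address, bind DN, password, and base DN are hypothetical placeholders.

    from ldap3 import Server, Connection, ALL   # third-party library: pip install ldap3

    # Hypothetical directory and service account, for illustration only
    server = Server("ldap://dc01.example.com", get_info=ALL)
    conn = Connection(server, user="cn=svc-reader,dc=example,dc=com", password="REDACTED")

    if conn.bind():
        conn.search(
            search_base="dc=example,dc=com",
            search_filter="(uid=jdoe)",
            attributes=["cn", "mail", "memberOf"],
        )
        for entry in conn.entries:
            print(entry.entry_dn, entry.mail)
        conn.unbind()

In production, the connection should use LDAPS or StartTLS so that credentials and directory data are not sent in the clear.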

Kerberos

Kerberos has three parts: a client, a server, and a trusted third-party key distribution center (KDC) to mediate between them. Clients obtain tickets from the KDC, and they present these tickets to servers when connections are established. Kerberos tickets represent the client's credentials. Kerberos relies on symmetric key cryptography. Kerberos has been the default authentication mechanism used by Windows for both the desktop and server operating systems since Windows 2000. Kerberos offers Windows users faster connections, mutual authentication, delegated authentication, simplified trust management, and interoperability. A Kerberos V5 [RFC4120] implementation may upgrade communication between clients and Key Distribution Centers (KDCs) to use the Transport Layer Security (TLS) [RFC5246] protocol. The TLS protocol offers integrity- and privacy-protected exchanges that can be authenticated using X.509 certificates, OpenPGP keys [RFC5081], and usernames and passwords via SRP [RFC5054].

Kerberos does have some areas that can be targeted by criminals. One of these is the fact that Kerberos tickets, like the credentials of many authentication protocols, have a defined life span. As such, any network using the Kerberos protocol for authentication will need to ensure that the clocks on all systems are synchronized through the use of a protocol such as Network Time Protocol (NTP). Also note that it is important to secure NTP itself. How to do so depends on your particular configuration; if, for instance, your network receives NTP information from pool.ntp.org, then the IP addresses associated with those particular NTP servers should be the only addresses permitted into your network over UDP port 123, the default NTP port. More advanced configurations are available. Kerberos is discussed further in the “Single Sign-On” section later in this chapter.
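
Because Kerberos tolerates only a small amount of clock skew (five minutes by default in Active Directory), it can be useful to verify a host's offset against its time source. The sketch below uses the third-party ntplib package; the server name is a placeholder and, in a hardened environment, should be one of your approved internal NTP sources.

    import ntplib   # third-party library: pip install ntplib

    MAX_SKEW_SECONDS = 300   # Kerberos/AD default tolerance is 5 minutes

    def clock_within_tolerance(ntp_server: str = "time.example.com") -> bool:
        """Return True if the local clock is within the allowed skew of the NTP server."""
        response = ntplib.NTPClient().request(ntp_server, version=3)
        skew = abs(response.offset)    # offset between local clock and server, in seconds
        print(f"Clock offset: {skew:.3f} seconds")
        return skew <= MAX_SKEW_SECONDS

    # clock_within_tolerance()   # requires UDP/123 reachability to the time source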

OAuth

Open Authorization (OAuth) is an authorization standard used by many websites. Its purpose is to allow a user or a service to grant access to resources. It allows a user to authorize a third party to access a resource without providing that third party with the user's credentials. As an example, you might allow a Facebook app to access your Facebook account. In this situation, OAuth would allow an access token to be generated and issued to the third-party application by an authorization server, with the approval of the Facebook account holder. As a side note, the process of substituting a small, nonsensitive piece of data for data that must be kept secure is called tokenization.
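
In the OAuth 2.0 authorization code flow, the third-party application eventually exchanges the authorization code it received for an access token at the provider's token endpoint. The endpoint URL, client ID, client secret, and redirect URI below are hypothetical placeholders; a real provider's documentation defines the exact values and parameters.

    import requests   # third-party library: pip install requests

    TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical token endpoint

    def exchange_code_for_token(auth_code: str) -> dict:
        """Exchange an authorization code for an access token (authorization code grant)."""
        response = requests.post(
            TOKEN_URL,
            data={
                "grant_type": "authorization_code",
                "code": auth_code,
                "redirect_uri": "https://app.example.com/callback",
                "client_id": "example-client-id",
                "client_secret": "example-client-secret",
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()   # typically contains access_token, token_type, expires_in

    # token = exchange_code_for_token("code-received-on-redirect")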

802.1X

IEEE 802.1X is an IEEE standard for port-based Network Access Control (NAC). 802.1X is widely used in wireless environments, and it relies on EAP. The 802.1X authenticator acts much like a middleman, or proxy, in the authentication process.

For wireless authentication, 802.1X allows you to accept or reject a user who wants to be authenticated. There are three basic parts to 802.1X:

  • The user
  • An authentication server
  • An authenticator acting as the go-between, typically the access point or wireless switch

It starts with a user initiating a “start” message to authenticate. The message goes to the authenticator, usually the access point. The AP requests the user's identity. The user responds with a packet containing the identity, and the AP forwards this packet to the authentication server. If the authentication server validates the user and accepts the packet, the AP is notified. The AP then places the user in an authorized state, allowing their traffic to go forward.

Extensible Authentication Protocol

Extensible Authentication Protocol (EAP) is an authentication framework that is used in wireless networks. EAP defines message formats and then leaves it up to the protocol to define a way to encapsulate EAP messages within that protocol's message. There are many different EAP formats in use, including EAP-TLS, EAP-TTLS, EAP-PSK, and EAP-MD5. Two common implementations of EAP are PEAP and LEAP.

Multifactor Authentication

Another important area of control is multifactor authentication. Authentication has moved far beyond simple usernames and passwords. Single sign-on (SSO), multifactor authentication, biometrics, and federated identity management are good examples of how this area is evolving. Multifactor authentication (MFA) is a method where a user is granted access after presenting two or more pieces of evidence. This evidence is usually broken down into the four types that were introduced earlier in the “Access Control” section.

  • Type 1: Something you know (a PIN or password)
  • Type 2: Something you have (a badge or token)
  • Type 3: Something you are (a fingerprint or retina scan)
  • Type 4: Where you are (geography)

Two-factor authentication (2FA) means you present two different types of authentication to a verification source. They cannot be two of the same type, such as a handprint and a voice print; they must be of two different types to be accepted. A username and password, for example, is the knowledge factor (Type 1). A token or generated code is something you have (Type 2). The user must both know and have the required items to gain access.

In May 2021, Google announced that it was automatically enrolling its users into two-step verification. Two-step verification means you present one type and after verification, you are asked to present another, usually a verification code that was sent to your phone via text or voice call. To log in, users must enter that in addition to their usual password.

If that verification code is generated internally and sent via SMS to the same mobile phone from which you just logged into the mobile application, that is considered in-band authentication.

In-band authentication factors are not considered as secure as out-of-band (OOB) authentication. OOB authentication relies on proofs of identity that do not arrive on, or depend on, the same system that is requesting the authentication. It is often used by organizations, such as those in healthcare and banking, that require high security to prevent unauthorized access.

One-Time Passwords

A one-time password (OTP) is exactly what it sounds like. It is a unique password that can be used only once and sometimes has a time expiration on it. Password resets will sometimes generate an OTP that must be used within 15 minutes or it expires. It is valid for a single login session or transaction. It can also be used in conjunction with a regular password for added security.

An HMAC-based one-time password (HOTP) uses a hash-based message authentication code in which the password is generated, and validated by the authentication server, based on a counter. The code is usually valid until you request a new one, which is again validated by the authentication server. YubiKey programmable tokens are an example of HOTP tokens.

A time-based one-time password (TOTP) is similar to an HOTP, but instead of a counter, it's based on a clock. The period of time for which each password is valid is called a timestep. As a rule, a timestep is 30 to 60 seconds. If you don't use your password in that window, you will have to ask for a new one.
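
Both algorithms are standardized (HOTP in RFC 4226 and TOTP in RFC 6238) and are small enough to sketch with the Python standard library. The shared secret below is a made-up example; real tokens provision a random secret for each user or device.

    import hashlib, hmac, struct, time

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226 HOTP: HMAC-SHA1 over a moving counter, dynamically truncated."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
        """RFC 6238 TOTP: HOTP where the counter is the current timestep number."""
        return hotp(secret, int(time.time()) // timestep, digits)

    shared_secret = b"example-shared-secret"   # hypothetical; normally random per token
    print(hotp(shared_secret, counter=1))
    print(totp(shared_secret))

The authentication server runs the same computation with its copy of the secret and accepts the login only if the codes match, usually allowing a small window of adjacent counters or timesteps.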

Hardware Root of Trust

According to NIST SP 800-172, hardware root of trust is defined as a “highly reliable component that performs specific, critical and security functions. Because roots of trust are inherently trusted, they must be secure by design.” A hardware root of trust is a set of functions that is always trusted by an OS. It serves as a separate engine controlling the cryptographic processor on the PC or mobile device it is embedded in. The most popular example is the Trusted Platform Module (TPM), a cryptoprocessor chip on the motherboard that is designed for security-related cryptographic procedures.

Single Sign-On

Another approach to managing a multitude of passwords is single sign-on (SSO). SSO is designed to address this problem by permitting users to authenticate once to a single authentication authority and then access all other protected resources without reauthenticating. One of the most widely used SSO systems is Kerberos.

SSO allows a user to authenticate once and then access all of the resources that the user is authorized to use. Authentication to the individual resources is handled by SSO in a manner that is transparent to the user. There are several variations of SSO, including Kerberos and Sesame. Kerberos is the most widely used. Kerberos, which was previously discussed in the “Kerberos” section earlier in the chapter, is composed of three parts: client, server, and a trusted third party, the key distribution center (KDC), which mediates between them.

The authentication service issues ticket-granting tickets (TGTs) that are good for admission to the ticket-granting service (TGS). Before network clients are able to obtain tickets for services, they must first obtain a TGT from the authentication service. The ticket-granting service issues the client tickets to specific target services.

A common approach to using Kerberos is to use it for authentication and use Lightweight Directory Access Protocol (LDAP) as the directory service to store authentication information. SSO enables users to sign in only once without IT having to manage several different usernames and passwords. By making use of one or more centralized servers, a security professional can allow or block access to resources should changes be needed. The disadvantage of SSO is that the service may become a single point of failure for authentication to many resources, so the availability of the server affects the availability of all of the resources that rely on the server for authentication services. Also, any compromise of the server means that an attacker has access to many resources.

Another concern is mutual authentication; if SSO is not used to authenticate both the client and the server, it can be vulnerable to on-path (formerly known as man-in-the-middle) attacks. Even when SSO is implemented, only the authentication process is secured. If after authentication an insecure protocol is used, such as FTP, passwords and other information can be sniffed, or captured, by keylogging or other means.

JavaScript Object Notation Web Token

JavaScript Object Notation (JSON) is a lightweight, human-readable data format that is a natural fit with JavaScript applications. It's an alternative to XML, and it is primarily used to transmit data between a server and a web application. One potential security issue is that JSON can be used to execute JavaScript. Representational State Transfer (REST) is used in mobile applications and mashup tools as a type of simple stateless architecture that generally runs over HTTP. It is considered easier to use than other technologies, such as SOAP. Its advantages include ease of use and modification, and it helps organize complex datasets.

JavaScript Object Notation (JSON) Web Token (JWT) is a way to securely send information between entities as a JSON object, because the object is digitally signed. It can be signed with a shared secret (HMAC) or with a public/private key pair using RSA. This creates a signed token that verifies the integrity of the information. When a private/public key pair is used, it also gives the data nonrepudiation. JWT is used quite a bit with SSO because of its ability to be used across different domains.
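
To show what the dot-separated header.payload.signature structure looks like, the following sketch builds and verifies an HS256 (HMAC-SHA256) signed JWT using only the Python standard library. The secret and claims are hypothetical; production code would normally rely on a maintained library and would also validate registered claims such as expiration and audience.

    import base64, hashlib, hmac, json

    def b64url(data: bytes) -> bytes:
        return base64.urlsafe_b64encode(data).rstrip(b"=")

    def sign_jwt(claims: dict, secret: bytes) -> str:
        header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
        payload = b64url(json.dumps(claims).encode())
        signing_input = header + b"." + payload
        signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
        return (signing_input + b"." + signature).decode()

    def verify_jwt(token: str, secret: bytes) -> bool:
        header, payload, signature = token.encode().split(b".")
        expected = b64url(hmac.new(secret, header + b"." + payload, hashlib.sha256).digest())
        return hmac.compare_digest(signature, expected)

    secret = b"example-signing-secret"   # hypothetical shared secret
    token = sign_jwt({"sub": "jdoe", "role": "finance-user"}, secret)
    print(token)
    print(verify_jwt(token, secret))     # True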

Attestation and Identity Proofing

Attestation is the act of proving something is true and correct. Attestation is a critical component for trusted computing environments, providing an essential proof of trust. Attestation is used in the authentication process, and it is also part of services such as TPM.

Identity proofing, also called identity verification, is as simple as it sounds: verifying that a user's identity is legitimate. For example, let's say that a certification candidate is ready to take a proctored exam. Being proctored means that a trusted authority will oversee the exam and verify that each person coming in to take the exam is indeed the same registered person who paid for it and who will receive the credential upon passing. The authority will greet the candidate at the entrance and ask for an ID. The next step is identity proofing, or verifying that the person is who they say they are.

Summary

This chapter focused on the duties and responsibilities of a CASP+ with regard to appropriate security controls for an enterprise. This chapter examined the following subjects:

  • Secure network architecture
  • Requirements for proper infrastructure security design
  • Integrating software apps securely
  • Implementing data security techniques
  • Providing authentication and authorization controls

These subjects fit into the broader concepts of network protocols, applications, and services.

Modern networks are built on the TCP/IP protocol stack. As a CASP+, you should have a good understanding of how TCP/IP works. You should also have a grasp of network flow analysis. You must also understand the importance of securing routing protocols and be familiar with controls put in place to improve transport security, trunking security, and route protection.

Applications and services are also of importance to a CASP+. You must know how to secure DNS, understand the importance of securing zone transfers, and know how to configure LDAP.

There is also authentication, along with authentication protocols. This critical component is used as a first line of defense to keep hackers off your network. Systems such as federated ID, AD, and single sign-on can be used to manage this process more securely.

Communication is the lifeblood of most modern organizations, and a failure of these systems can be disastrous. However, these systems must also be controlled. Email is one example of a modern communication system that most organizations rely on. Email is sent in clear text and can easily be sniffed; as such, it requires adequate security controls. With email, there is also the issue of content. A CASP+ must understand threats to the business, potential risks, and ways to mitigate risks. Risk is something that we must deal with every day on a personal and business level. Policy is one way to deal with risk. On the people side of the business, policy should dictate what employees can and cannot do. Even when risk is identified and has been associated with a vulnerability, a potential cost must still be determined. You must also understand aspects of the business that go beyond basic IT security. These items include partnerships, mergers, and acquisitions.

Exam Essentials

Be able to describe IDSs and IPSs. An intrusion detection system (IDS) gathers and analyzes information from a computer or a network that it is monitoring. There are three basic ways in which intrusions are detected: signature recognition, anomaly detection, and protocol decoding. NIDSs are designed to capture and analyze network traffic. HIDSs are designed to monitor a computer system and not the network. A network IPS system can react automatically and actually prevent a security occurrence from happening, preferably without user intervention. Host-based intrusion prevention is generally considered capable of recognizing and halting anomalies.

Be able to describe advanced network design concepts. Advanced network design requires an understanding of remote access and firewall deployment and placement. Firewall placement designs include packet filtering, dual-homed gateway, screened host, and screened subnet.

Be familiar with the process of remote access. Cisco has implemented a variety of remote access methods through its networking hardware and software. Originally, this was Terminal Access Controller Access Control System (TACACS). The most current version is TACACS+. Another, newer standard is Diameter. Although both operate in a similar manner, Diameter improves upon RADIUS by resolving discovered weaknesses.

Be able to describe switches, routers, and wireless devices. A security professional must understand the various types of network equipment and the attacks that can be performed against them. Both switches and routers can be used to increase network security, but techniques such as MAC flooding and route poisoning can be used to overcome their security features.

Be able to describe IPv6. Internet Protocol version 6 (IPv6) is the newest version of the IP and is the designated replacement for IPv4. IPv6 brings many improvements to modern networks. IPv6 increases the address space from 32 bits to 128 bits and has IPsec built in. Security concerns include the fact that older devices may not be compatible or able to provide adequate protection.

Know how DNSSEC works. DNSSEC is designed to provide a layer of security to DNS. DNSSEC allows hosts to validate that domain names are correct and have not been spoofed or poisoned.

Know the importance of securing zone transfers. Securing zone transfers begins by making sure that your DNS servers are not set to allow zone transfers. If your host has external DNS servers and internal DNS servers, the security administrator should also close TCP port 53. Internal DNS servers should be configured to talk only to the root servers.

Know how LDAP operates. LDAP was created to be a lightweight alternative protocol for accessing X.500 directory services. With LDAP, each object is made up of attributes that are indexed and referenced by a distinguished name. Each object has a unique name designed to fit into a global namespace that helps determine the relationship of the object and allows for the object to be referenced uniquely.

Be able to describe the importance of transport security. Increased network security risks and regulatory compliances have driven the need for transport security. Examples of transport security include IPsec, TLS, and SSL. IPsec is the Internet standard for security.

Know the concerns and best practices related to remote access. Remote access is the ability to get access to a computer, laptop, tablet, or other device from a network or remote host. One concern with remote access is how the remote connection is made. Is a VPN used or the information passed without cryptographic controls? Some authentication methods pass usernames and passwords via clear text and provide no security.

Understand SSO. Single sign-on (SSO) allows a user to authenticate once and then access all the resources the user is authorized to use. Authentication to the individual resources is handled by SSO in a manner that is transparent to the user.

Understand the SDLC process. The security/systems/software development life cycle (SDLC) is designed to identify security requirements early in the development process and incorporate them throughout the process.

Understand the advantages and disadvantages of virtualizing servers. Virtualized servers have many advantages. One of the biggest is server consolidation. Virtualization allows you to host many virtual machines on one physical server. Virtualization also helps with research and development. It allows rapid deployment of new systems and offers the ability to test applications in a controlled environment.

Be able to describe virtual desktop infrastructure (VDI). Virtual desktop infrastructure is a centralized desktop solution that uses servers to serve up a desktop operating system to a host system.

Be able to define the purpose of a VLAN. VLANs are used to segment the network into smaller broadcast domains or segments. They offer many benefits to an organization because they allow the segmentation of network users and resources that are connected administratively to defined ports on a switch. VLANs reduce network congestion and increase bandwidth, and they result in smaller broadcast domains. From a security standpoint, VLANs restrict the attacker's ability to see as much network traffic as they would without VLANs in place. VLANs are susceptible to VLAN hopping. This attack technique allows the attacker to move from one VLAN to another.

Know how to secure enterprise storage. Securing enterprise storage requires a defense-in-depth approach that includes security policies, encryption, hardening, patch management, and logging and auditing. These are just a few of the needed controls.

Be able to describe antivirus. Antivirus typically uses one of several techniques to identify and eradicate viruses. These methods include signature-based detection, which uses a signature file to identify viruses and other malware, and heuristic-based detection, which looks for deviation from normal behavior of an application or service. This detection method is useful against unknown and polymorphic viruses.

Know how and when to apply security controls. Controls may or may not be applied. All companies have only limited funds to implement controls, and the cost of the control should not exceed the value of the asset. Performing a quantitative or qualitative risk assessment can help make the case for whether a control should be applied.

Review Questions

You can find the answers in the Appendix.

  1. Nicole is the security administrator for a large governmental agency. She has implemented port security, restricted network traffic, and installed NIDS, firewalls, and spam filters. She thinks the network is secure. Now she wants to focus on endpoint security. What is the most comprehensive plan for her to follow?
    1. Anti-malware/virus/spyware, host-based firewall, and MFA
    2. Antivirus/spam, host-based IDS, and 2FA
    3. Anti-malware/virus, host-based IDS, and biometrics
    4. Antivirus/spam, host-based IDS, and SSO
  2. Sally's CISO asked her to recommend an intrusion system to recognize intrusions traversing the network and send email alerts to the IT staff when one is detected. What type of intrusion system does the CISO want?
    1. HIDS
    2. NIDS
    3. HIPS
    4. NIPS
  3. Troy must make a decision about his organization's File Integrity Monitoring (FIM). Stand-alone FIM generally means file analysis only. Another option is to integrate it with the host so that Troy can detect threats in other areas, such as system memory or I/O. For the integration, which of the following does Troy need to use?
    1. HIDS
    2. ADVFIM
    3. NIDS
    4. Change management
  4. The IT department decided to implement a security appliance in front of their web servers to inspect HTTP/HTTPS/SOAP traffic for malicious activity. Which of the following is the BEST solution to use?
    1. Screened host firewall
    2. Packet filter firewall
    3. DMZ
    4. WAF
  5. Your employees need internal access while traveling to remote locations. You need a service that enables them to securely connect back to a private corporate network from a public network to log into a centralized portal. You want the traffic to be encrypted. Which of the following is the BEST tool?
    1. Wi-Fi
    2. VPN
    3. RDP
    4. NIC
  6. The IT security department was tasked with recommending a single security device that can perform various security functions. The security functions include antivirus protection, antispyware, a firewall, and an intrusion detection and prevention system. What device should the IT security department recommend?
    1. Next-generation firewall
    2. Unified threat management system
    3. Quantum proxy
    4. Next-generation intrusion detection and prevention system
  7. The IT group within your organization wants to filter requests between clients and their servers. They want to place a device in front of the servers that acts as a middleman between the clients and the servers. This device receives the request from the clients and forwards the request to the servers. The server will reply to the request by sending the reply to the device; then the device will forward the reply onward to the clients. What device best meets this description?
    1. Firewall
    2. NIDS
    3. Reverse proxy
    4. Proxy
  8. Your network administrator, George, reaches out to you to investigate why your ecommerce site went down twice in the past three days. Everything looks good on your network, so you reach out to your ISP. You suspect an attacker set up botnets that flooded your DNS server with invalid requests. You find this out by examining your external logging service. What is this type of attack called?
    1. DDoS
    2. Spamming
    3. IP spoofing
    4. Containerization
  9. The Cisco switch port you are using for traffic analysis and troubleshooting, which has a dedicated SPAN port, is in an error-disabled state. What is the procedure to reenable it after you enter privileged EXEC mode?
    1. Issue the no shutdown command on the error-disabled interface.
    2. Issue the shutdown and then the no shutdown command on the error-disabled interface.
    3. Issue the no error command on the error-disabled interface.
    4. Issue the no error-disable command on the error-disabled interface.
  10. Your news organization is dealing with a recent defacement of your website and secure web server. The server was compromised around a three-day holiday weekend while most of the IT staff was not at work. The network diagram, in the order from the outside in, consists of the Internet, firewall, IDS, SSL accelerator, web server farm, internal firewall, and internal network. You attempt a forensic analysis, but all the web server logs have been deleted, and the internal firewall logs show no activity. As the security administrator, what do you do?
    1. Review sensor placement and examine the external firewall logs to find the attack.
    2. Review the IDS logs to determine the source of the attack.
    3. Correlate all the logs from all the devices to find where the organization was compromised.
    4. Reconfigure the network and put the IDS between the SSL accelerator and server farm to better determine the cause of future attacks.
  11. After merging with a newly acquired company, Gavin comes to work Monday morning to find a metamorphic worm from the newly acquired network spreading through the parent organization. The security administrator isolated the worm using a network tap mirroring all the new network traffic and found it spreading on TCP port 445. What should Gavin advise the administrator to do immediately to minimize the attack?
    1. Run Wireshark to watch for traffic on TCP port 445.
    2. Update antivirus software and scan the entire enterprise.
    3. Check your SIEM for alerts for any asset with TCP port 445 open.
    4. Deploy an ACL to all HIPS: DENY-TCP-ANY-ANY-445.
  12. Jonathan is a senior architect who has submitted budget requests to the CISO to upgrade their security landscape. One item to purchase in the new year is a security information and event management (SIEM). What is the primary function of a SIEM tool?
    1. Blocking malicious users and traffic
    2. Administers access control
    3. Automating DNS servers
    4. Monitoring servers
  13. Your security team implemented NAC lists for authentication as well as corporate policy enforcement. Originally, the team installed software on the devices to perform these tasks. However, the security team decided this method is no longer desirable. They want to implement a solution that performs the same function but doesn't require that software be installed on the devices. In the context of NAC, what is this configuration called?
    1. Agent
    2. Agentless
    3. Volatile
    4. Persistent
  14. You had your internal team do an analysis on compiled binaries to find errors in mobile and desktop applications. You would like an external agency to test them as well. Which of these tests BEST suits this need?
    1. DAST
    2. VAST
    3. IAST
    4. SAST
  15. Your company is looking at a new CRM model to reach customers that includes social media. The marketing director, Tucker, would like to share news, updates, and promotions on all social websites. What are the major security risks?
    1. Malware, phishing, and social engineering
    2. DDoS, brute force, and SQLi
    3. Mergers and data ownership
    4. Regulatory requirements and environmental changes
  16. Michael is selected to manage a system development and implementation project. His manager suggests that you follow the phases in the SDLC. In which of these phases do you determine the controls needed to ensure that the system complies with standards?
    1. Testing
    2. Initiation
    3. Accreditation
    4. Acceptance
  17. Your IT group is modernizing and adopting a DevSecOps approach, making everyone responsible for security. Traditionally, storage and security were separate disciplines inside IT as a whole. As a security analyst, what is your primary concern of data at rest?
    1. Encryption
    2. Authentication
    3. Infrastructure
    4. Authorization
  18. Jackie is a software engineer and inherently prefers to use a flexible framework that enables software development to evolve with teamwork and feedback. What type of software development model would this be called?
    1. Prototyping
    2. Ceremony
    3. Agile
    4. Radical
  19. You are working on a high-risk software development project that is large, the releases are to be frequent, and the requirements are complex. The waterfall and agile models are too simple. What software development model would you opt for?
    1. Functional
    2. Cost estimation
    3. Continuous delivery
    4. Spiral
  20. Many of your corporate users are using laptop computers to perform their work remotely. Security is concerned that confidential data residing on these laptops may be disclosed and leaked to the public. What methodology BEST helps prevent the loss of such data?
    1. DLP
    2. HIPS
    3. NIDS
    4. NIPS