
4. IT Security Technical Controls

Virgilio Viegas and Oben Kuyucu, Doha, Qatar

This chapter lists and explains the technical security controls that organizations must have to secure their assets. The implementation of these controls should be based on a zero trust1 model, where all users, whether inside or outside the organization, must be authenticated, authorized, and continuously validated to access corporate resources. Although these controls are essential to secure the organization, they are not enough on their own. Alongside these technical controls, organizations must also implement effective management and operational processes and hire qualified staff with the proper skills to implement, administer, and operate them.

Off-Premises Unmanaged Devices

We have categorized endpoints as off-premises unmanaged devices and managed devices in our approach.

Off-premises unmanaged devices include any Internet-enabled device that the organization has no control over, including anonymous devices, customer devices, mobile devices with corporate mobile applications, partner devices, and employee devices that connect to the organization from the Internet.

Considering that the organization has no control over these devices, it can only protect its resources from these devices. To do that, the organization must implement the following controls.

MDM: Mobile Device Management

Mobile device management (MDM) is the concept associated with bring your own device (BYOD), where personal or corporate-owned mobile devices access corporate resources, email being the most commonly used.

MDM is frequently confused with mobile application management (MAM) and unified endpoint management (UEM) since they are relatively similar.

MDM improves security by managing the mobile devices (owned by the organization or by employees) that access the organization's resources, providing capabilities such as the following.
  • Push or remove apps (MAM)

  • Manage and encrypt data stored in the mobile devices (MCM)

  • Enforce configuration settings

  • Disable unused features to minimize the threat surface

  • Enforce strong passwords to access the mobile device

  • Control removable media (e.g., microSD cards)

  • Manage the device applications, including non-corporate applications (e.g., application whitelisting (MAM))

  • Geolocation

  • Patch management

  • Forensics

  • Remote wiping of the device

Although MDM is a powerful mechanism that allows organizations to secure mobile device access to corporate resources, it raises several concerns that must be addressed before any implementation, namely the following.
  • Privacy: Considering that MDM can have full control (including complete wipe) over mobile devices that the organization might not own, employees must be informed and accept the terms on how their devices are tracked and monitored. The organization must also implement the necessary controls to avoid unauthorized access to employees’ devices and data.

  • Acceptable use: Although devices can be privately owned, employees must be informed and formally accept the organization’s acceptable BYOD use policies to remotely access corporate assets.

  • Off-boarding: The organization must have a process to ensure that all corporate data is removed from the mobile device when the employee leaves the organization or reports a lost device.

  • Data loss: To avoid data loss, MDM must be configured so that data cannot be copied between the corporate container and the personal space.

  • Infrastructure design: MDM implementation design must include all security controls (e.g., IDS/IPS, strict access controls, sandboxing).

MAM: Mobile Application Management

MAM solutions securely manage and deploy corporate applications to mobile devices, including BYOD.

MAM solutions can establish controls over mobile applications: deliver and configure applications, control application updates, manage software licenses, and track application usage. They can also adjust restrictions on applications based on geolocation.

Most MDM solutions can also implement MAM.

NAC: Network Access Control

Network access control acts as a gatekeeper of the corporate network. Its objective is to control the access of all devices and ensure that they comply with the organization's security policies: each device must be authenticated, identified or cataloged, and verified for compliance before it connects to the network.

NAC is a security control that can automatically detect and respond to potential threats in real time when devices try to connect to corporate networks.

The Institute of Electrical and Electronics Engineers (IEEE) 802.1X2 is the standard for port-based network access control (PNAC) and the main NAC implementation reference.

NAC can also quarantine noncompliant devices and allow them only limited access until they comply with security policies such as installing missing security patches, updating antivirus, renewing certificates, or hardening.

Although NAC must be implemented to prevent unmanaged devices from getting unauthorized access to corporate networks, some implementations may allow visitors to access a corporate captive portal, or a landing page, to provide them access to the Internet. Through captive portals, some user information can be collected and validated out-of-band (such as an OTP sent to a mobile phone), which may be mandatory in some jurisdictions.

NAC solutions have been integrated with SOAR (security orchestration, automation, and response) technologies in recent deployments. When an unauthorized device is detected, a vulnerability scan is performed to improve visibility and ensure this unauthorized device does not pose any threat to the environment.

Multi-Factor Authentication

Access to the corporate environment from unmanaged and untrusted devices is prone to several information security risks. Therefore, organizations must implement all applicable security controls to ensure that threat agents do not impersonate their employees, customers, and partners. One of these mechanisms is multi-factor authentication, where two or more separate factors are used simultaneously for authentication. This provides additional security if one factor is compromised.

The following are authentication factors.
  • Factor 1: Something you know

    (e.g., password, PIN, patterns, the name of your favorite pet)

  • Factor 2: Something you have

    (e.g., SMS token, push-based OTP, soft-token, hard-token, smartcard, certificates)

  • Factor 3: Something you are or something you do

    (e.g., biometrics such as fingerprint, palmprint, face ID, voice, retina or iris, DNA, or handwriting analysis, typing speeds)

Using a password as the first step and a PIN as the second step to authenticate is not multi-factor authentication. Although two authentication elements are used in addition to the username, both elements belong to the same authentication factor (something you know), so this is not considered MFA.

The terms multi-factor authentication and multi-step authentication are often used interchangeably; however, it should be noted that they are different. Unlike multi-factor authentication, multi-step authentication may use the same factor as long as it is securely obtained. For example, the one-time password (OTP) token you see in your mobile authenticator app is something you know, and when you use it with a password, it is called multi-step authentication. Instead, if you have clicked on a notification or approved an authentication request from your mobile device, it would be multi-factor authentication because the device is something you have.

One additional important consideration about MFA is that the authentication methods used should not grant access to one another. For example, if you receive an OTP at your email address as the second factor, and the same username and password give access to that email account without MFA, an attacker could access the email account to retrieve the OTP. Hence, preferably, the authentication processes should be out-of-band, meaning that they should be distributed over different networks or channels.
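
To make the "something you have" factor more concrete, the following is a minimal sketch of how a time-based one-time password (TOTP) is typically generated, following RFC 6238 with common defaults (30-second time step, 6 digits, HMAC-SHA1). The Base32 secret shown is a made-up example; in practice it is provisioned to both the authentication server and the user's authenticator app, and the OTP is only a second factor when combined with another factor such as a password.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based OTP (HMAC-SHA1, common defaults)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // step                      # current time step
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Hypothetical Base32 secret shared between the server and the authenticator app
print(totp("JBSWY3DPEHPK3PXP"))
```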

RASP for Mobile Applications

Runtime application self-protection (RASP) is a technology built into or linked to an application or runtime environment. It can control the application execution and detect and prevent real-time attacks.

When deploying mobile applications, organizations should include RASP in their applications to create an additional layer of protection on the client side, since the mobile device may be running an insecure, outdated, or compromised operating system.

By implementing RASP technology in their mobile apps, organizations can prevent their apps from running on compromised, rooted, or jailbroken devices.

RASP technology can also prevent the app from being debugged, identify potential vulnerabilities, reduce functionality, or stop the application from running.

Some RASP solutions produce a device fingerprint and relate it to a user or customer account, assigning a risk score to sessions and supporting user or customer profiling.

Secure Connections

Considering that the Internet is a hostile environment and that non-encrypted traffic can easily be eavesdropped on or manipulated, each organization must implement some form of encryption on all connections from the Internet to corporate resources to ensure information integrity and confidentiality. This includes external partner communications, employee connections to corporate resources, and customer or anonymous visitor traffic to corporate websites and mobile applications.

All connections from the outside world to the organization’s environment must be secured by encrypting the channel to ensure confidentiality, integrity, authentication, and non-repudiation.

OSI Model

Currently, most network connections are based on the TCP/IP model. However, there are many other communications protocols. To standardize the different computer networking methods and protocols developed by different companies since the early days of computer networks, the International Organization for Standardization (ISO) developed the Open Systems Interconnection (OSI) model in the early 1980s. The OSI model is defined by ISO Standard 7498.

The OSI model (Table 4-1) has seven distinct conceptual layers. Each layer is responsible for specific tasks or operations to support the layer above it, and it is supported by the layer below it. Communication between layers is done by standardized protocols.
Table 4-1

OSI Model Layers and Protocols

Application Layer: HTTP, FTP, LPD (Line Print Daemon), SMTP, Telnet, TFTP, EDI, POP3, IMAP, SNMP, NNTP, S-RPC, SET

Presentation Layer: American Standard Code for Information Interchange (ASCII), Extended Binary Coded Decimal Interchange Code (EBCDIC), Tagged Image File Format (TIFF), Joint Photographic Experts Group (JPEG), Moving Picture Experts Group (MPEG), Musical Instrument Digital Interface (MIDI)

Session Layer: Network File System (NFS), Structured Query Language (SQL), Remote Procedure Call (RPC), Simplex/Half-Duplex/Full-Duplex

Transport Layer: Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Sequenced Packet Exchange (SPX), Secure Socket Layer (SSL), Transport Layer Security (TLS)3

Network Layer: Internet Control Message Protocol (ICMP), Routing Information Protocol (RIP), Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), Internet Group Management Protocol (IGMP), Internet Protocol (IP), Internet Protocol Security (IPsec), Internetwork Packet Exchange (IPX), Network Address Translation (NAT)4, Simple Key Management for Internet Protocols (SKIP)

Data Link Layer: Serial Line Internet Protocol (SLIP), Point-to-Point Protocol (PPP), Address Resolution Protocol (ARP), Layer 2 Forwarding (L2F), Layer 2 Tunneling Protocol (L2TP), Point-to-Point Tunneling Protocol (PPTP), Integrated Services Digital Network (ISDN)

Physical Layer: EIA/TIA-232 and EIA/TIA-449, X.21, High-Speed Serial Interface (HSSI), Synchronous Optical Networking (SONET), V.24 and V.35

TCP/IP Model

Unlike the OSI model, the TCP/IP Model (Table 4-2) only has four layers.
Table 4-2

Comparison Between the OSI and TCP/IP Models

TCP/IP is the most used protocol suite, can be found in almost all operating systems, and comprises several individual protocols (Table 4-3).
Table 4-3

TCP/IP Model Layer Protocols

The TCP/IP network traffic can be secured by encrypting connections with a virtual private network (VPN) between the communicating hosts and ensuring confidentiality, integrity, and authentication. VPNs can be established using the following protocols.
  • PPTP: Point-to-Point Tunneling Protocol

  • L2TP: Layer 2 Tunneling Protocol

  • SSH: Secure Shell

  • SSL/TLS: Secure Sockets Layer/Transport Layer Security

  • IPsec: Internet Protocol Security

IPsec, SSH, and TLS

The most common ways to secure connections from the Internet to corporate resources are the implementation of IPsec, SSH (e.g., SFTP), and TLS.

IPsec

Internet Protocol Security (IPsec) is an Internet Engineering Task Force (IETF) open standard encryption suite that implements encryption between two devices connected through an IP network for data authentication, integrity, and confidentiality. These two devices can be two hosts on a host-to-host communication, two security gateways (network-to-network), or a security gateway and a host (network-to-host). IPsec implements VPNs.

IPsec allows mutual authentication between the two connecting devices at the beginning of each session and the negotiation of cryptographic keys.

The following are IPsec functions.
  • Authentication Header (AH): (integrity and non-repudiation) AH protects against replay attacks and ensures data integrity and data source authentication of IP datagrams.

  • Encapsulating Security Payloads (ESP) : (confidentiality and content integrity) ESP provides confidentiality, data integrity, data source authentication, anti-replay attack service (partial sequence integrity), and limited traffic-flow confidentiality.

  • Internet Security Association and Key Management Protocol (ISAKMP) : This is a framework for authentication and key exchange. It can be implemented by manually configuring pre-shared keys, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), or IPSECKEY DNS records. ISAKMP generates the Security Association (SA) with algorithms and parameters needed for AH and/or ESP.

IPsec has the following operation modes (Table 4-4).
  • Transport mode is where only the payload is encrypted (e.g., peer-to-peer).

  • Tunnel mode is where the entire packet is encrypted (e.g., gateway to gateway). Each IP packet is encapsulated into another IPsec packet, which may have different source and destination IP addresses from the “inner” packet.

Table 4-4

IP Packets in IPsec Modes

IPsec supports several integrity and encryption algorithms. Before implementing IPsec, each organization must consider that some of the supported algorithms are considered weak (e.g., DES, 3DES, SHA1) and that in some countries, the use of some algorithms is mandatory (e.g., Saudi Cryptographic Standard5).

SSH

Secure Shell (SSH) is a network protocol for secure client-server communication that employs strong password authentication, key pairs, or both. Although SSH connections from the Internet to the organization's resources should not be allowed, since they pose several security risks, SSH can be used between the organization and partners to exchange files, specifically through the Secure File Transfer Protocol (SFTP), which builds on SSH's security components.

As with all other processes, it is highly recommended to implement additional security controls such as strong authentication methods, source IP whitelisting, and scheduled firewall rules.
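
As an illustration of such a hardened exchange, the following sketch uses the third-party paramiko library to upload a file over SFTP with key-based authentication and strict host key checking; the hostname, account, key path, and file paths are hypothetical.

```python
import paramiko

# Hypothetical partner endpoint and service account; key-based auth instead of passwords
HOST, PORT, USER = "sftp.partner.example.com", 22, "svc_transfer"

client = paramiko.SSHClient()
client.load_system_host_keys()                                 # trust only known host keys
client.set_missing_host_key_policy(paramiko.RejectPolicy())    # never accept unknown hosts
client.connect(HOST, port=PORT, username=USER,
               key_filename="/etc/keys/svc_transfer_ed25519")  # hypothetical private key path

sftp = client.open_sftp()
sftp.put("/data/outbound/report.csv", "/inbound/report.csv")   # upload a file to the partner
sftp.close()
client.close()
```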

TLS

Web browsers use Hypertext Transfer Protocol Secure (HTTPS) to encrypt communications with web servers. HTTPS transmissions use the SSL/TLS protocol. SSL preceded TLS and was gradually replaced by it after Google's security team found that SSL was vulnerable6 to POODLE (Padding Oracle On Downgraded Legacy Encryption) attacks.

TLS, first defined in 19997 and currently at version 1.3, allows client/server applications to communicate over the Internet while preventing eavesdropping, tampering, and message manipulation, and should be used in all connections to the organization's websites and web services (APIs).

TLS version 1.3 is specified in IETF Request for Comments (RFC) 8446, with improvements in performance and privacy compared to the previous version, 1.2, which also has known vulnerabilities, including the following.
  • CVE-2012-4929: Compression Ratio Info-leak Made Easy (CRIME)

  • CVE-2015-7575: Security Losses from Obsolete and Truncated Transcript Hashes (SLOTH)

The following are some of the differences between TLS 1.2 and TLS 1.3.
  • The TLS 1.2 negotiation mechanism was deprecated.

  • Elliptic curve algorithms are part of version 1.3 specifications, and new algorithms like Edwards-curve Digital Signature Algorithm (EdDSA)8 were included.

  • Handshake messages after the ServerHello9 are all encrypted.

  • RSA and Diffie–Hellman static cipher suites were removed, and forward secrecy is now provided by all public key–based key exchange mechanisms.

Organizations must ensure that all their web servers and APIs enforce the use of TLS 1.3 and that downgrading connections to weaker versions such as TLS 1.1 or any version of SSL is disabled. For reference on selecting and configuring TLS implementations, NIST provides a useful guideline, SP 800-52 Rev. 2, Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations,10 which helps organizations configure their TLS implementations according to best practices.
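
As an example of what such enforcement looks like in code, a Python client (the same idea applies to server configurations) can refuse anything below TLS 1.3 with the standard library's ssl module; example.org stands in for a corporate endpoint.

```python
import socket
import ssl

context = ssl.create_default_context()               # sane defaults: certificate and hostname validation
context.minimum_version = ssl.TLSVersion.TLSv1_3      # refuse TLS 1.2 and below (and all versions of SSL)

with socket.create_connection(("example.org", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        print(tls.version())                           # expected: 'TLSv1.3'
```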

Clean Pipes

The clean pipes concept is an anti-DDoS (distributed denial-of-service) mechanism maintained and managed by the Internet service provider (ISP) that protects the organization's Internet connections from well-known bandwidth-consuming denial-of-service attacks. By implementing this mechanism, the organization frees bandwidth for production traffic. This is known as DDoS deflation.

DDoS Mitigation

In addition to clean pipes, organizations must also implement on-premises DDoS protection and mitigation mechanisms.

The following are the main types of DDoS attacks.
  • Volume-based attacks

  • Protocol attacks

  • Application layer attacks

Each of these types requires its own mitigation strategy and tools.

Volume-based attacks consist of generating a large volume of requests, overloading network devices or servers so that they cannot respond to legitimate requests. These attacks include UDP floods, ICMP floods, and NTP amplification, among others. Volume-based attacks can be prevented by clean pipes and volumetric anti-DDoS devices that inspect traffic and drop malicious traffic.

Protocol attacks generate requests to exploit network protocol weaknesses like SYN floods, packet fragmentation, or ping of death.

Anti-DDoS tools protect the organization’s infrastructure by analyzing traffic, identifying, and preventing malicious traffic from reaching the destination target.

Application layer attacks generate a large number of requests to web applications or other application servers, appearing to be generated by legitimate users, and include GET/POST floods, low-and-slow attacks, or attacks that target specific application server limitations or vulnerabilities like Apache or IIS.

Anti-DDoS devices can prevent application layer attacks by analyzing site visitors’ behavior and blocking bad requests. Additional protection mechanisms can also be implemented by challenging unrecognized visitors using cookie challenges or CAPTCHAs.
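
As a simplified illustration of the request-budgeting logic such devices apply per source IP before challenging or dropping traffic, the following sketch implements a token-bucket rate limiter; the rate and burst values are arbitrary examples, not recommended thresholds.

```python
import time
from collections import defaultdict

RATE = 5     # tokens added per second (sustained requests/second allowed)
BURST = 20   # bucket capacity (short bursts tolerated)

_buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow(source_ip: str) -> bool:
    """Return True if the request fits the per-IP budget, False if it should be challenged or dropped."""
    bucket = _buckets[source_ip]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["ts"]) * RATE)  # refill since last request
    bucket["ts"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False

# Example: in a rapid burst from the same IP, the 21st request is rejected
for i in range(25):
    print(i, allow("203.0.113.7"))
```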

Managed Devices

Directory Service Integration

A directory service is a fundamental tool to organize and unify the management of the organization's users and authentication credentials, computers, printers, and other resources. Directory services can mirror the organization's hierarchy, map users and other resources to that hierarchy, and enforce the relevant policies on those users and resources.

Among several advantages of directory services, we can highlight the following.
  • Centralized resource repository

  • Centralized security administration

  • Enhanced single logon to access corporate resources

  • Simplified resource location

  • Corporate policy enforcement

  • Policies applied according to the organizational structure

Although the most well-known directory service is Microsoft Active Directory (AD), the following alternatives can be considered according to the organization’s specific needs.
  • Apache Directory Studio

  • OpenLDAP

  • JXplorer

  • FreeIPA

  • Samba

  • 389 Directory Server

  • OpenDJ

  • Zentyal Active Directory

  • Oracle Directory Server Enterprise Edition

  • RazDC

It is highly recommended to use strong authentication protocols like Kerberos instead of NT LAN Manager (NTLM), which has several known security weaknesses related to password hashing and the lack of salting.

It is also recommended to use security baselines and frequent configuration reviews to confirm compliance with those baselines and other standards (e.g., PCI DSS).

Centralized Endpoint Management

To ensure effective management of their assets, organizations must have centralized endpoint management. This is a platform for managing and administering distributed systems running Windows, macOS, Linux, Unix, and other operating systems.

With centralized endpoint management, organizations can provide remote control, patch management, software distribution, operating system deployment, and network access protection, among others. This solution can also play a fundamental role in supporting an effective asset management process. Microsoft System Center Configuration Manager (SCCM) is an example.

TPM: Trusted Platform Module

A Trusted Platform Module (TPM) is a chip in the device’s motherboard. TPM manages and stores encryption keys used for full-disk encryption and provides access to those keys to the operating system. If the hard disk is removed and installed in another device, the new device does not have access to the encryption keys.

VPN Client

A VPN client is software that allows a device to establish a secure channel with the organization and access the organization’s resources.

Since VPN clients permit direct access to corporate resources, they should only be installed on managed devices, where the organization can install additional security controls. Before connecting to corporate resources, devices that use VPN clients should be checked for the existence of mandatory security controls, such as antivirus with up-to-date signatures, EDR, an up-to-date operating system, hard disk encryption, or a disabled print screen option, and for whether these controls are running the latest versions.

Although it is a widely used solution for remote access, VPN clients raise several security concerns, like data exfiltration. For this reason, organizations should consider implementing alternate, more robust, and secure remote access solutions, like SSL/VPN, where it is possible to access corporate resources through an SSL/VPN portal that does not allow exfiltration of corporate data by blocking the possibility of copying files.

NAC: Network Access Control

NAC agents should be installed on all devices to ensure they comply with corporate policies and are properly updated.

Data Classification

Data classification is a crucial element of information security since it allows you to protect data with suitable security controls according to its confidentiality, sensitivity, or secrecy. Applying the same security controls to data with different levels of confidentiality is either not cost-effective or does not ensure the proper level of security for confidential information, since it treats public and confidential information the same.

The first step of data classification is to define a classification scheme that allows the organization to group assets or objects into categories according to their value, sensitivity, risk (impact × likelihood), potential damage or loss, or anything else considered relevant (e.g., cardholder data in PCI DSS or PII in privacy laws).

The scheme must be adjusted to the organization’s specific needs and operating context and easily understandable by all employees.

One possible scheme that can be used as a reference is the one used by the US military (Table 4-5).
Table 4-5

Classification scheme using the U.S. military classification scheme as a reference

Top Secret (highest): Unauthorized disclosure has drastic effects and causes grave damage to national security.

Secret: Unauthorized disclosure has significant effects and causes critical damage to national security.

Confidential: Unauthorized disclosure has noticeable effects and causes serious damage to national security.

Sensitive but unclassified: Used for internal purposes; unauthorized disclosure can violate individuals' privacy rights.

Unclassified (lowest): Disclosure of unclassified data won't compromise confidentiality or cause any material damage.

Considering that this classification might be too complex to implement and unsuitable for corporate environments, organizations should consider implementing a more adequate and simpler classification scheme with fewer classification levels (Table 4-6), as shown in the following.
Table 4-6

Simplified corporate classification

Confidential / Private (highest)

Sensitive / Corporate

Public / External (lowest)

  • Confidential: The highest classification level to classify extremely sensitive corporate data that could have a significant negative impact on the organization if it was disclosed (e.g., new product specifications or design, corporate formulas or trade secrets like KFC recipes, customer data).

  • Private: Private or personal data that should only be used internally. Like confidential data, private data can have a significant negative impact on the organization or people if it is disclosed.

  • Sensitive, Corporate: Internal corporate data that can harm the organization if disclosed. Examples of sensitive data are internal procedures, accounting, and budget data.

  • Public: This is the lowest classification level that does not negatively impact the organization if it is disclosed.

The following are other concepts associated with data classification.
  • The data owner is responsible for classifying information and ensuring that the appropriate security controls are in place.

  • The data custodian is responsible for implementing the defined security controls for the data according to its classification. The data custodian is also responsible for data backup (and testing those backups), managing storage, and ensuring integrity and availability.

  • The users are individuals who have limited access to data to perform the necessary tasks according to their job description (least privilege principle).

  • The auditor reviews and verifies compliance with security and data protection policies and the adequacy of the applied security controls.

Data protection and data classification policy owners are ultimately responsible for ensuring these policies’ implementation and approving possible exceptions.

To ensure that all data is classified, organizations must provide end-users tools to classify all documents generated in their workstations. These tools are usually Microsoft Office plug-ins that force users to classify all created documents (Word, Excel, PowerPoint, Outlook email, or modified documents that were not previously classified).

UAM: User Activity Monitoring

User activity monitoring (UAM) allows organizations to monitor and track their users’ activities and behavior and generate alerts of potential danger based on the user behavior profile.

UAM can capture all user activities in the operating system and applications, including commands, text entered, and chosen options, and verify whether employees and contractors comply with their assigned tasks or usage profile or pose a potential threat to the organization.

When implementing UAM, organizations must consider the legal implications for employees' privacy. In some countries, monitoring individual behavior is not allowed unless the individual is informed.11 In other countries,12 it is allowed under certain conditions. The key takeaway is that organizations should notify users beforehand of what is tracked and what is not.

Endpoint Protection

This section addresses the security controls that should be present on all the organization's endpoints.

Before deploying any of these controls, it must be ensured that the host devices have the needed resources to avoid performance degradation. It should be noted that several EDR solutions in the market currently have all these features except for full-disk encryption.

Phishing Reporting Tool

To provide users with an easy and fast way to report potential email threats, organizations should install a phishing reporting tool on all desktops.

Usually, this tool is an email client plugin (e.g., Microsoft Outlook add-in), which is displayed as a report phishing button that tags and forwards the email to the security operations center for analysis and deletes the original email.

Similarly, this button recognizes the phishing simulation emails that aim to educate users and collect statistics for measuring the program’s effectiveness.

Host IPS or EDR

A host intrusion prevention system (HIPS) is installed in the organization endpoints and monitors all traffic to and from a host and blocks if an intrusion is detected.

Currently, the concept of HIPS has become outdated, and endpoint detection and response (EDR) has become its next generation. EDR detects threats and follows the entire life cycle of the threat, providing useful information about what happened, how the threat got into the system, its location and activity, and how it was stopped.

Some EDR solutions report to cloud-based services, which allow their vendors to provide additional services like response and forensics to support the organization’s security operations center.

Desktop Firewall

The desktop firewall feature, or personal firewall, is deployed with most operating systems. However, most of the time, this feature is not enabled or not enforced. Desktop firewalls control network traffic from and to computers, allowing or dropping the packets based on security rules.

Although desktop firewall rule set management can be very complex and create constraints by blocking legitimate network traffic, a limited set of generic rules can be implemented to block traffic that clearly should not be present in the organization's networks; for example, certain unused TCP or UDP port ranges or known malware ports.

Antivirus

Antivirus software is one of the oldest and most well-known endpoint security controls.

Typically, antivirus software detects viruses based on signatures. Therefore, to ensure the effectiveness of this security control, antivirus software must be kept up to date and centrally managed so that signature updates can be pushed to the endpoints, scan jobs can be scheduled, and detected virus alerts can be tracked.

End users should not be able to uninstall or stop antivirus software, and scanning exclusions requested for performance or other reasons must be properly assessed before being approved.

Most antivirus software currently offers protection against spyware, worms, rootkits, and others.

One of the biggest challenges related to antivirus software and other agent-based security controls is assuring that the software is installed in all endpoints, which must be supported by an efficient and effective IT asset management process.

Antispyware

Spyware is software that monitors user actions, filters relevant information, and transmits it to a remote system. One example of such user information is Internet banking login credentials.

Although in the past, anti-spyware protection was separately licensed, currently, most endpoint antivirus software also offers anti-spyware protection.

Full-Disk Encryption

Full-disk encryption is a security mechanism that prevents the unwanted disclosure of corporate data.

The most frequently cited use case for disk encryption is the loss of corporate laptops containing sensitive information, which disk encryption mitigates. However, there are other cases where disk encryption also prevents the unwanted disclosure of corporate data, such as corporate media that is not effectively sanitized before being disposed of or used corporate desktops that are sold without any sanitization.

Full-disk encryption can be implemented using dedicated software or using features provided by the operating system like Microsoft Windows BitLocker. TPM is also used for full-disk encryption.

Application Control and Application Whitelisting

Application control is a solution that controls which applications can be installed on a device, forces certain applications to be installed, and enforces application settings. It prevents users from installing or running applications on their workstations and reduces the organization's exposure to potentially malicious software.

Perimeter Security

Firewalls

Firewalls are the most well-known security control. Almost every person in your organization knows the word firewall; however, only a few know exactly what a firewall is.

Firewalls are software or hardware-based network security systems that control incoming or outgoing traffic between two or more network segments based on a defined ruleset. With firewalls, organizations can establish a barrier between untrusted networks (e.g., Internet, partner, vendor, etc.) and internal networks.

The following are the most common firewall types.
  • Static packet-filtering firewalls filter traffic by examining message headers and validating source, destination, and port, regardless of the source interface. This type of firewall can be easily tricked since packets can be spoofed.

  • Application firewalls are also designated proxy firewalls . Packets are copied from one network segment to another. Source and destination addresses can be modified to protect certain network segments.

  • Circuit-level gateway firewalls, designated as circuit proxies , manage communications based on the circuit, not on the traffic content, and are used in communications with trusted partners. These firewalls act as the OSI model session layer (Layer 5). SOCKS (Socket service or Socket secure) is a very popular implementation of circuit-level gateway firewalls.

  • Stateful inspection firewalls are also known as dynamic packet-filtering firewalls. These firewalls validate the connection (session) state, inspecting the source address, destination address, destination port, and the relation between the current packet and the previous packet in the same session by keeping track of the session state.

  • Deep packet inspection firewalls operate at the application level and filter the communication payload content, doing a full packet inspection and identifying domain names, malware, spam, and blocking unwanted traffic.

  • Next-Gen firewalls are designated all-in-one because they can have several security functions like IPS, SSL/TLS proxy, VPN concentrator, and web filtering.

Adding to the misperception of what a firewall does, most vendors provide "all-in-one" solutions in which several security controls (and other functions) are incorporated into a firewall. These include the following.
  • An NTP server

  • An explicit proxy

  • Content filtering

  • Antivirus

  • A honeypot

  • Application control

  • A DNS server

  • IPS/IDS

  • A web application firewall

  • A VPN concentrator

  • A router

  • A remote access platform
    • SSL/VPN

    • VPN

Considering these capabilities, all-in-one firewalls are a very attractive solution for small and medium organizations since they are cheaper and easier to manage. At the same time, all-in-one firewalls can give a false perception of security. To avoid it, IT security teams must have appropriate training in all the used features to ensure that they are properly configured and updated. These devices can also act as a single point of failure or compromise, where a single exploitable vulnerability can compromise all the used features.

We recommend reading the following documents for more information regarding firewall deployment and architecture (Figure 4-1).
  • NIST SP 800-125B13 Secure Virtual Network Configuration for Virtual Machine (VM) Protection

  • NIST SP 800-4114 Guidelines on Firewalls and Firewall Policy

Figure 4-1

Firewall deployment architectures

As one of the most important security controls of an organization, it is important to ensure that these devices' configurations and policies are fine-tuned and compliant with legal requirements and the organization's policies; otherwise, they give a false perception of security.

Intrusion Detection and Intrusion Protection Systems

An intrusion detection system (IDS) is a passive device that monitors network traffic and generates alerts based on a predefined set of rules with attack signatures. An IDS does not need to be placed inline to monitor traffic; it can monitor several network segments by mirroring their ports to the IDS port.

An IPS is an active device placed inline that blocks attack attempts by terminating network connections or user sessions when they match attack signatures or patterns. Since an IPS can also act as an IDS, during the initial stages of an IPS implementation it is advised to set up the IPS in "learning mode" to profile traffic and avoid false positives that can impact the availability of some services.

The following are generic types of IDS/IPS.
  • A network intrusion detection system (NIDS) is deployed to monitor network traffic.

  • A host intrusion detection system (HIDS) is deployed to hosts (e.g., servers and desktops) to monitor traffic to and from those hosts.

  • A network intrusion protection system (NIPS) is deployed inline to monitor all network traffic and block malicious traffic to prevent intrusions. Since these devices are placed inline, they can be a single point of failure. Therefore, it is highly recommended to implement the appropriate redundancy to avoid latency and decide to implement fail-open or fail-close if the device fails based on a comprehensive risk assessment.

  • Host intrusion prevention system (HIPS) is deployed to monitor all traffic to and from a host and block if an intrusion is detected. Before deploying a HIPS, it must be ensured that the host device has the needed resources to avoid performance degradation.

IDS/IPS devices are a fundamental security control to prevent intrusions. However, it must be ensured that they have the proper set of rules to avoid false negatives or false positives that can affect the availability of the protected services.

Depending on the organization's size and infrastructure complexity, these controls may also be implemented in an all-in-one firewall.

Proxy and Content (URL) Filtering

Ideally, to reduce risk, organizations should not allow their employees to have unrestricted access to the Internet. However, considering that this scenario is not an option for most organizations, the next best option would be granting access only to the strictly needed websites. This is our recommendation. Users should only access websites for business needs.

Some, if not most, organizations allow their employees to access the Internet based on content categories, banning access based on each category's risk (e.g., hacking-related sites, some non-business categories, or file sharing) or impact on performance or productivity (e.g., advertising, streaming, social media). This implementation has several risks: although most websites are categorized, it is impossible to ensure that all sites effectively match the assigned category, and even when the category matches, it is impossible to ensure that none of the content is malicious. For example, some websites commonly categorized as "information technologies" can contain malicious software or documents.

Additionally, some attacks often use recently created domains that are still to be categorized. For this reason, access to all uncategorized websites should also be blocked.

All downloaded content should be analyzed by antivirus software and the sandbox.

DLP: Data Loss Prevention

The purpose of a perimeter network-based data loss (or leakage) prevention (DLP) system is to detect and prevent unauthorized data exfiltration by examining web and email content, attached files, and uploaded files, looking for predefined keywords, patterns, and metadata fields such as data classification tags.

Most DLP tools work with defined policies, through which security professionals describe what type of information is monitored in the network. Policies can be based on content properties (file size, metadata, domains of senders/recipients, number of recipients, etc.), file type (works both with MIME types and extensions15), content classifiers (patterns such as credit card numbers, national ID numbers, social security numbers, etc.), or dictionaries (words, phrases, weighted words, predefined terms for various categories). Some DLP solutions find sensitive information in databases and file shares by fingerprinting technologies: they collect indexes from these sources, and when hashes of one or more records are matched (such as a customer name next to his/her credit card number), they can create events to investigate. File fingerprinting also works with document templates, where the overall layout is the same but the content can change (such as common forms, e.g., HR forms, pay slips, CVs, contracts, etc.). When combined with machine learning, where users train the DLP for allowed and disallowed contents, it can produce far fewer false positives.

Indeed, false positives16 and false negatives17 are primary issues in DLP solutions. To decrease false positives, the number of matches required in one piece of content can be increased, but this can cause false negatives, where sensitive data is sent successfully in smaller batches. Some DLP tools track such content over time.

Another method to decrease false positives is to validate the data somehow. One example of a tracked pattern is a credit card number, which is typically 16 digits. However, DLP technologies should never track just any 16 digits, but only those that comply with the Luhn algorithm, which checks the validity of the data. Furthermore, financial institutions may track certain BINs to decrease the possibility of false positives.
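
The Luhn check is straightforward to implement. The following sketch shows the kind of validation a DLP content classifier might apply before raising an event on a card-number-like match; the first sample number is a well-known test value, and the second is an arbitrary 16-digit string, not real card data.

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum (i.e., a plausible card number)."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 13:                      # payment card PANs are 13-19 digits
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:                        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))      # True  - a well-known test PAN
print(luhn_valid("1234 5678 9012 3456"))      # False - 16 digits but fails the checksum
```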

DLP technologies are now merging with NLP (natural language processing) and deep learning to help organizations better understand the semantics of the data to recognize the language and context of the communications. Better data sets give better analysis results and decrease the number of false positives.

Classification of sensitive data at the time of creation would certainly help DLP tools block such data. Tags or labels can be found in the metadata of supporting documents and files. DLP tools can also track that information.

In the first steps of implementing DLP, every organization is overwhelmed by the amount of captured data. By knowing your processes well and automating the internal processes that support DLP exceptions, events are not created for business-as-usual (BAU) activities. Anomaly detection in the network, followed by DLP evaluation, also helps but should be done carefully.

Blocking on day one would impact your business and users. Instead, users and other stakeholders should be educated during the process on what is sensitive data and what is not, for example, by providing pop-ups containing educational material when they try to send sensitive data.

Honeypot

A honeypot system is created to simulate a legitimate system to mislead attackers. These systems should be placed in an isolated network without connection to the remaining production systems and should not host any real data.

The system should be relatively hardened but retain some exploitable vulnerabilities. This way, the attacker focuses on compromising the exploitable system and later moves to other systems without realizing that the system is not a "real" one.

When the attacker compromises the honeypot, an alert should be triggered, and the intrusion is detected before any production system is attacked. From there, the attacker's actions can be closely monitored and blocked, and the analysis of the attack reveals the attacker's modus operandi.

Honeypots should not be used to explicitly provoke attacks.

WAF: Web Application Firewall

A web application firewall (WAF) is a network security device that filters, monitors, and blocks non-essential or malicious web traffic. They provide additional protection against attacks like SQL injection, cross-site scripting, cross-site forgery, security misconfigurations, or design flaws.

WAFs are considered a second layer of defense (firewalls and IPS should sit in front of them) and are essentially reverse-proxy devices that intercept and check traffic against defined security policies. Some WAFs use profiling to understand the application and provide better and more efficient protection. Based on the policies, they either drop the traffic or present other information that would confuse the attacker. WAFs can use a positive or negative security model or a combination of the two. A positive security model contains a whitelist that filters traffic, dropping everything else. A negative security model contains a deny list that blocks only those specific traffic patterns; the rest is allowed.
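
As a toy illustration of the negative security model, the following sketch checks a request against a small deny list of patterns before it would reach the application. Real WAF rule sets are far richer (normalization, anomaly scoring, managed signatures); these patterns are simplified examples only.

```python
import re
from urllib.parse import unquote

# Simplified deny-list patterns (illustrative only, not production WAF rules)
DENY_PATTERNS = [
    re.compile(r"(?i)\bunion\b.+\bselect\b"),      # crude SQL injection indicator
    re.compile(r"(?i)<script\b"),                  # crude cross-site scripting indicator
    re.compile(r"\.\./"),                          # path traversal
]

def blocked(raw_request: str) -> bool:
    """Return True if any deny-list pattern matches the URL-decoded request."""
    decoded = unquote(raw_request)
    return any(p.search(decoded) for p in DENY_PATTERNS)

print(blocked("/search?q=shoes"))                              # False - allowed
print(blocked("/search?q=1%27%20UNION%20SELECT%20password"))   # True  - dropped
```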

WAFs can be host-based, located within the application's software. They are easy to configure; however, they consume the same local resources as the application, so some performance issues can be observed. Network-based WAFs can be used to overcome this problem, where dedicated hardware is placed in front of the application; however, they are more expensive than host-based WAFs. Nowadays, cloud-based WAFs offer a more affordable solution, where all traffic is redirected to the cloud service provider through DNS changes. Traffic is analyzed in the cloud platform and then sent to the actual application. These are easy to deploy and scalable and have no hardware maintenance costs. However, some features may not be configurable in such platforms, and they may not be usable where data privacy laws are stricter and do not permit cloud platforms.

To inspect encrypted traffic, WAFs must have a copy of the private key of the server certificates of the web servers they are protecting. Please see the “TLS Decryption” section for more information.

SSL VPN

Organizations can use SSL/VPN to allow their users to remotely access corporate networks via an encrypted connection between the user's computer (web browser or client software) on the Internet and the SSL/VPN device. This device should enforce the use of TLS, since SSL has several known vulnerabilities.

SSL/VPN can be used in two major ways.
  • Corporate remote access portal: Corporate users use their web browsers to access a web portal and, from there, remotely access internal corporate resources like internal websites, emulated sessions (SSH, Telnet, etc.), or remote desktop sessions that are published according to the user profile. The web portal should only allow a single SSL/TLS session per user. Data loss prevention mechanisms, like blocking remote printing or copy/paste, should be implemented.

  • SSL/TLS as a tunnel to establish a VPN: This allows the remote host to access multiple network services by using SSL/TLS to establish a tunnel and encapsulate other protocols that are not exclusively web-based. VPN tunneling might require the user to install additional software on their computer, like JavaScript components or a VPN client. Since users can directly access internal corporate network resources (e.g., remote desktop), it is harder to implement some ingress and egress controls.

Considering the inherent risk of this connection, multi-factor authentication must be used.

DNS

The Domain Name System (DNS) is a naming system for computers, services, or other connected resources, where a name is associated with an IP address and vice versa.

Organizations have internal DNS servers to translate internal resource names and external DNS servers to publish their domain and service names on the Internet (Figure 4-2).

The following are common DNS records.
  • A record stores a hostname and its IPv4 address.

  • A Canonical Name record (CNAME record) is commonly used to alias a hostname to another hostname.

  • An MX record is a mail exchanger record that specifies a domain's SMTP email server. For an organization, this record typically points to the corporate email gateway that routes outgoing emails to the Internet and receives incoming emails from external servers.

  • An NS record is a name server record that specifies a domain's primary and backup name servers.

  • A PTR record is a reverse-lookup pointer record that contains the hostname that matches a certain IP address.

  • A TXT record is a text record that allows organizations to publish information related to a host or other name, like readable information about a server, network, data center, DKIM, and DMARC, among others.

  • An SOA record is a start of authority record that contains administrative information about the zone indicating the zone authoritative name server, domain administrator contact information, and other relevant information.
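
A quick way to inspect these records for a domain is shown below, assuming the third-party dnspython (2.x) package is installed; google.com is used purely as a publicly resolvable example.

```python
import dns.resolver  # third-party package: dnspython

domain = "google.com"
for rtype in ("A", "MX", "NS", "TXT"):
    try:
        answers = dns.resolver.resolve(domain, rtype)
        for rdata in answers:
            print(f"{domain} {rtype}: {rdata.to_text()}")   # print each record of this type
    except dns.resolver.NoAnswer:
        print(f"{domain} {rtype}: no records")
```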

Figure 4-2

Simplified DNS query process

Internal DNS Servers

Internal DNS servers are used internally to resolve names and other information related to internal corporate assets. Usually, when they are requested to resolve names (or other information) from an external domain, they forward those requests to a predefined forwarder DNS server, so at a certain point an internal DNS server queries a DNS server outside the organization.

It should be noted that DNS can be used as a covert channel, and DNS queries can indicate that internal hosts have been compromised, for example, by malware that is trying to access malicious websites. It is highly recommended to subscribe to or implement DNS security services such as DNS firewalls or DNS-layer security, where all DNS queries are forwarded to this system and analyzed, and the SOC is alerted in case of suspected malicious activity. DNS query logs can also be extremely useful when performing incident investigations.
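
One simple heuristic sometimes applied to DNS query logs to flag possible tunneling or DGA activity is the length and character entropy of the queried labels. The following sketch illustrates the idea; the thresholds are arbitrary illustrations, and real DNS-layer security services combine many more signals.

```python
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character in a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_suspicious(qname: str, max_label_len: int = 40, entropy_threshold: float = 4.0) -> bool:
    """Flag queries with unusually long or high-entropy labels (possible tunneling or DGA)."""
    labels = qname.rstrip(".").split(".")
    return any(len(l) > max_label_len or (len(l) > 12 and shannon_entropy(l) > entropy_threshold)
               for l in labels)

print(looks_suspicious("www.example.com"))                                   # False - ordinary query
print(looks_suspicious("aGVsbG8taGlkZGVuLWRhdGEtZXhmaWw0Mg.evil.example"))   # True  - long, high-entropy label
```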

External DNS Servers

Corporate external DNS servers are authoritative DNS servers accessible from the Internet with all the DNS records related to the organization domain. These servers return the IP address related to an asset from the organization domain (e.g., website, mail gateway, etc.).

Being exposed to the Internet, these servers are susceptible to several types of attacks like DNS poisoning, DNS amplification (which leads to DDoS), or DNS hijacking. Therefore, it is highly recommended to constantly monitor these servers or place them with an external hosting provider. Some providers offer these servers as a service with additional security features.

Message Security

Email, or in broader terms, messaging, is the core of companies’ communication, so it is the number one threat vector used by cyberattackers to find security gaps. Spoofing, ransomware, phishing, zero-day attacks, and business email compromise (BEC) are some examples. Although end-user education is the top mechanism to prevent these attacks from being successful, additional security measures, such as message security, can be implemented.

Incoming emails can be sent by malicious users pretending to send emails from trusted sources (spoofing). To detect that, the DMARC (Domain-based Message Authentication, Reporting, and Conformance) protocol (Figure 4-3) authorizes and authenticates email senders. If the sender has defined a DMARC DNS entry, the secure email gateway (receiving email server) checks for the existence of the corresponding records. If the email passes the authentication, it is delivered and trusted. If the email cannot pass the authentication, then based on the policy, it is delivered, rejected, or quarantined.
Figure 4-3

DMARC record for gmail.​com

DMARC also uses two email authentication mechanisms, Sender Policy Framework (SPF) (Figure 4-4) and DomainKeys Identified Mail (DKIM), for better sender identification, which is called identifier alignment. SPF publishes the IP addresses that are allowed to send email for a sender domain name (Figure 4-5). In the SPF example shown in Figure 4-6 (the record for _netblocks.google.com), when someone sends an email from gmail.com, SPF tells which IP addresses are authorized to send mail.
Figure 4-4

SPF record for gmail.​com, pointing to _spf.google.com

Figure 4-5

SPF record for _spf.google.com, pointing to _netblocks.google.com

Figure 4-6

SPF record for _netblocks.google.com

DKIM provides something similar by adding a digital signature to the outgoing message to show that the sender in the sender domain is authorized to send emails. The recipient email gateway can verify the signature by looking up the sender's public key (the p value in Figure 4-7) published in the DNS record.
Figure 4-7

DKIM value for gmail.​com, the selector is 20161025

These protocols verify the sender. However, we still need some control over the email content. There are technologies to reject or quarantine emails based on the reputation of the domain the email is coming from; however, this configuration needs to be well adjusted because reputation is based on many blacklist providers, which do not publish the same information, and sometimes these providers blacklist domains based on false-positive information. So, tomorrow you may see that a perfectly valid domain is blocked due to a blacklist.

Regardless of the reputation, a verified legitimate sender can still send malicious content, e.g., a virus, malware, or malicious code with auto-execute or auto-download abilities, like a macro-enabled attachment that downloads a payload when opened. Thus, antivirus, anti-malware, and anti-spyware technologies must also be implemented to analyze content, attachments, and any links mentioned in the email body, just like antivirus software on a PC. Some vendors use sandboxing technologies to analyze attachments. Some use cloud-based threat intelligence platforms or file hashes to check the authenticity of the file and sender and whether it contains any malicious code. Most vendors now also check the reputation of the links provided in the email body, including scanning URLs in attachments and managed (shortened) URLs, which minimizes the risk of clicking a malicious link in the content.

Such technologies also allow security professionals to track if end users have received malicious emails and even clicked the links, which can be useful during containment.

Directory Integration for External Applications

For most organizations, LDAP (the most popular implementations are Active Directory, OpenLDAP, and Lotus Domino) plays a central role in identity, authentication, and access management. It serves as the repository for user identities, and it provides access control to on-prem systems such as file shares, networks, and applications.18

Although this works well in LAN or WAN environments, as organizations shift to cloud-based applications, there must be a connection to the LDAP service that checks the credentials. Directory integration provides this connection.

There are several methods to integrate such cloud-based applications with corporate LDAP.

One method exposes LDAP to the Internet, which poses a great risk to the entire organization. A second method is to replicate the same LDAP structure in the cloud and synchronize it often; although this may work for smaller organizations, it imposes a huge overhead on IT administrators in larger environments. A third option is to use Directory-as-a-Service models, where an agent placed on the internal LDAP synchronizes users to a cloud-based directory, serving as the bridge between the on-prem LDAP and the cloud infrastructure. The last approach is often chosen because it is simple, available, and allows security features (access controls on who can access cloud services, MFA) to be implemented.
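
At its core, the credential check such an integration ultimately performs can be sketched as a simple bind over LDAPS against the corporate directory, here using the third-party ldap3 package; the server name and DN layout are hypothetical.

```python
from ldap3 import Server, Connection, ALL
from ldap3.core.exceptions import LDAPBindError

def credentials_valid(user_dn: str, password: str) -> bool:
    """Attempt a simple bind over LDAPS; a successful bind means the credentials are valid."""
    server = Server("ldaps://dc01.corp.example.com", use_ssl=True, get_info=ALL)  # hypothetical DC
    try:
        with Connection(server, user=user_dn, password=password, auto_bind=True):
            return True
    except LDAPBindError:
        return False

# Hypothetical DN layout for the corporate directory
print(credentials_valid("CN=Jane Doe,OU=Users,DC=corp,DC=example,DC=com", "S3cretP@ss"))
```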

Sandbox

Sandboxes are isolated environments that simulate the end-user operating environment to run programs and test incoming files. Untested, unverified, or untrusted attachments, files, code, and programs are run first in sandboxes to see their impact on the host. In the isolated environment, every change or attempt is recorded so the malware can be analyzed thoroughly.

Sandboxing technologies are heavily used in evaluating and mapping the behavior of malware (Figure 4-8). Researchers can evaluate how malware infects and compromises a target host by creating an environment that mimics or replicates the targeted desktops. In a real-life incident, this provides crucial input for containing the malware and restoring normal operation.

Sandboxes can be implemented in security devices and applications, such as web and email gateways and IPS, or even in software testing. They run the content first in the sandbox, analyze the behavior, and then release it to the user. This process provides an additional layer of protection against zero-day attacks, ransomware, and stealthy attacks like APTs (advanced persistent threats). In software testing, the code is first run in the sandbox, which gives the developer the flexibility to experiment with the code and observe its impact.

Sandboxing should not be confused with containers. Traditional containers such as Docker, Linux Containers (LXC), and Rocket (rkt) still share the host OS kernel, which can be compromised through the container application if the necessary security measures are not implemented.
Figure 4-8

Behavior analysis of a WannaCry variant by Tencent HABO19

File Integrity

File integrity monitoring (FIM) is a change-detection mechanism that validates the integrity of a file, folder, or registry setting by comparing its current state with a defined baseline.

Most FIM solutions can detect changes, additions, and deletions of system and application executables, critical configuration and parameter files, and log and audit files across various systems, including servers, workstations, and network devices, and notify security professionals so they can investigate whether the change was expected. If it was not expected, it could be caused by a malicious insider or a cybersecurity attack.

There are many agent-based and agentless FIM solutions in the market, but overall, the technology compares the current file state with a known previous state. If the changed content does not need to be known, the easiest FIM method is to monitor changes in the checksum of the file or folder. If the changed content (diff) is needed, the actual contents are indexed by the solution, showing what has changed. It should be noted that the latter consumes more storage and time than the former.
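The checksum-based approach is straightforward to illustrate. The following is a minimal sketch, assuming a baseline dictionary of path-to-SHA-256 values was captured while the system was in a known-good state; commercial FIM products add agents, scheduling, and reporting on top of this basic idea.

import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a file in chunks so large files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_changes(baseline, monitored_dir):
    """Compare current file hashes against the baseline and report differences."""
    findings = {}
    current = {str(p): sha256_of(p) for p in Path(monitored_dir).rglob("*") if p.is_file()}
    for path, digest in current.items():
        if path not in baseline:
            findings[path] = "added"
        elif baseline[path] != digest:
            findings[path] = "modified"
    for path in baseline:
        if path not in current:
            findings[path] = "deleted"
    return findings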

FIM tools, unfortunately, generate a lot of noise. Even in the BAU activity of an application, temporary files are created and deleted on the fly, and during OS updates, many system files change. So, organizations must integrate their FIM solutions with ITSM or change management platforms. With this integration, the FIM solution knows the time frame of an approved change, expects changes in the defined folders, and generates less noise and fewer false positives.

Encrypted Email

You use SMTP, POP, or IMAP when you send or receive an email. They are the protocols used to transmit email messages from a client to an email server and from one email server to another. They do not employ encryption natively, making the email exchange prone to eavesdropping, sniffing, and MitM attacks.

Nowadays, most email communication is encrypted at the transport layer (STARTTLS, for example, is the SMTP extension that encrypts the channel, Figure 4-9); however, not all systems support TLS. In addition, end users do not know whether the recipient’s email server supports TLS; they simply write the email and click the Send button.
Figure 4-9

A simple SMTP chat with gmail.​com
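The chat in Figure 4-9 can be reproduced from a script. The following is a minimal sketch of upgrading an SMTP session with STARTTLS using Python’s standard smtplib; the relay name, port, and commented-out credentials are placeholders for your own environment, not real values.

import smtplib
import ssl

context = ssl.create_default_context()
with smtplib.SMTP("smtp.example.com", 587) as server:   # hypothetical relay
    server.ehlo()
    if server.has_extn("STARTTLS"):
        server.starttls(context=context)                 # the channel is now encrypted
        server.ehlo()
    else:
        raise RuntimeError("Server does not advertise STARTTLS; the message would travel in cleartext")
    # server.login("user", "password")                   # only authenticate after the channel is encrypted
    # server.sendmail("alice@example.com", ["bob@example.org"], "Subject: test\r\n\r\nhello")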

Therefore, there should be a process that is transparent to users so that when they need to send a confidential email to an external recipient, they do not have to worry about whether the channel is encrypted. There are mainly two ways to ensure security when sending emails.

The first is a secured web interface. In these systems, when the email gateway catches a message with one or more external recipients and a classification of “confidential,” it stores the message on the platform’s external web interface and only sends a link to the recipient. If the recipient has not yet registered on the platform, it also sends an email for registration. The recipient registers on the system, logs in (preferably with MFA), and sees the message in an external web interface that is already served over HTTPS, so it is protected against eavesdropping or sniffing.

The second is key- or certificate-based email exchange (i.e., S/MIME, PGP, or GNU Privacy Guard). Senders and recipients exchange keys or certificates before messaging and then encrypt each message with the correspondent’s key or certificate so that only the private key owner can decrypt it. This method is not as smooth and transparent as the former, and it requires supporting email servers and clients. For example, most webmail clients do not support S/MIME.

On-Premises Support Controls

Access Control

Every client (human or machine20) has an identity or unique user ID in the system that he/she/it is authenticated with. In addition, the group the user ID belongs to is also kept in the device. This user ID or group is used for allowing or disallowing content (file access or privileges) to the user. This is the essence of access control.

Most modern operating systems have access control lists (ACLs), tables containing the user rights for each data object. Whenever a user attempts to read, write, or access a file, the OS verifies whether the user has the appropriate right in the ACL. If the rights are assigned by the owner of the object, the access control model is discretionary access control (DAC). If groups or roles are assigned to users and administrators assign privileges based on roles, it is role-based access control. Most web applications use this approach for better visibility and to save time: separate roles are created for different sections, where users can write to one section but only read the others.
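The role-based example can be reduced to a few lines. The following is a minimal, hypothetical sketch of such a check; the role names, sections, and users are illustrative only, and real applications would load them from a directory or database.

# Hypothetical role and permission definitions for illustration only.
ROLE_PERMISSIONS = {
    "editor": {"reports": {"read", "write"}, "archive": {"read"}},
    "viewer": {"reports": {"read"}, "archive": {"read"}},
}

USER_ROLES = {"alice": "editor", "bob": "viewer"}   # assigned by administrators

def is_allowed(user, section, action):
    role = USER_ROLES.get(user)
    if role is None:
        return False                                 # deny by default
    return action in ROLE_PERMISSIONS.get(role, {}).get(section, set())

assert is_allowed("alice", "reports", "write")
assert not is_allowed("bob", "reports", "write")     # viewers can only read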

There is also a rule-based access control model, which applies rules to all users; it is typically used in firewalls. If further attributes are added to the model, it is called attribute-based access control.

Mandatory access control is based on the classification or sensitivity level assigned to the object. Subjects can only access the object if they have sufficient clearance.

As these examples show, access controls are everywhere in the organization (files, folders, shares, applications, firewall rules, and physical access controls), making them a prime target for attackers. Typically, malicious actors spoof identities to bypass ACLs or gain more privileges. OWASP considers broken access control the most serious web application security risk in the OWASP Top 10 2021.21 MITRE (Table 4-5) has a useful relationship matrix for this type of attack.22
Table 4-5

MITRE CAPEC-151 Identity Spoofing Relationships

Nature    | Type | ID  | Name
ParentOf  | S    | 89  | Pharming
ParentOf  | S    | 98  | Phishing
ParentOf  | S    | 194 | Fake the Source of Data
ParentOf  | S    | 195 | Principal Spoof
ParentOf  | S    | 473 | Signature Spoof
PeerOf    | D    | 665 | Exploitation of Thunderbolt Protection Flaws
CanFollow | D    | 16  | Dictionary-based Password Attack
CanFollow | S    | 49  | Password Brute Forcing
CanFollow | S    | 50  | Password Recovery Exploitation
CanFollow | D    | 55  | Rainbow Table Password Cracking
CanFollow | D    | 70  | Try Common or Default Usernames and Passwords
CanFollow | M    | 94  | Adversary in the Middle (AiTM)
CanFollow | D    | 509 | Kerberoasting
CanFollow | S    | 555 | Remote Services with Stolen Credentials
CanFollow | M    | 560 | Use of Known Domain Credentials
CanFollow | D    | 561 | Windows Admin Shares with Stolen Credentials
CanFollow | D    | 565 | Password Spraying
CanFollow | D    | 568 | Capture Credentials via Keylogger
CanFollow | S    | 600 | Credential Stuffing
CanFollow | D    | 644 | Use of Captured Hashes (Pass the Hash)
CanFollow | D    | 645 | Use of Captured Tickets (Pass the Ticket)
CanFollow | S    | 652 | Use of Known Kerberos Credentials
CanFollow | S    | 653 | Use of Known Windows Credentials

Organizations need to centralize access control, review ACLs, and implement multi-factor authentication mechanisms to prevent unauthorized access. Access should be denied by default and only granted to individuals with a business need, with the least privileges.23 Failed login attempts should be monitored, and the security operations center should be alerted. User accounts should be locked after a defined number of failed attempts. A good list of recommendations for better application development can be found in the OWASP Proactive Control: Enforce Access Controls24 and the OWASP Authorization cheat sheet.25

Secure VLAN Segmentation

VLAN segmentation, also known as zoning, implements physical and/or logical access controls to separate systems with different security and/or functionality needs. The simplest example for logical controls would be a firewall or a router configured to prevent traffic from passing between networks or VLANs. Having separate cabling for different purposes and/or different sensitivity levels would be an example of physical segmentation.

Segmentation is one of the key access controls in a corporate environment, which enables granularity. When properly implemented, VLAN segmentation provides better visibility on the network, improved mapping of the data flows, and better implementation of security. Security professionals can better tailor the needs for the defined VLANs or different types of traffic. For example, VOIP traffic can be isolated from the desktop workstation zones, both for security purposes (i.e., to prevent or hinder eavesdropping) and for efficient networking (i.e., to reduce congestion and avoid bottlenecks in the network) (Figure 4-10).
Figure 4-10

Sample VLAN segmentations

By applying segmentation controls, security professionals can minimize the risk of lateral movement by an attacker from a compromised system to another. For example, if a public-facing application or an internal endpoint is compromised in a flat network, the attacker gains access to the entire organization. However, if access control lists are applied and that server is placed in a DMZ, even when compromised, it is contained within its segment, or at least lateral movement is not as easy as in a flat network.

In a corporate environment, segmentation should be done considering the two golden rules of any access control mechanism: least privilege and need to know. Users, applications, devices, and systems must be separated based on their need to access other networks. Moreover, segmentation should be based on the criticality of the traffic and the nature of the service (Table 4-6). When establishing segments, the following should be considered.
Table 4-6

VLAN Segmentation Considerations

Criticality | Perimeter VLANs are segregated according to the hosted system’s criticality (e.g., highly critical, critical, non-critical).
Service Nature | Perimeter VLANs are segregated according to the hosted system’s nature (e.g., cardholder data environment, front-end, middleware, database, fileserver, printer and scanner devices, ATMs, POS devices).
Type | Perimeter VLANs are segregated according to the hosted system’s environment (e.g., DMZ, production, staging, quality, development, third-party access, extranet).

Segmentation can also make compliance efforts quicker and more efficient, for example, by reducing the number of systems in scope for PCI DSS so that only those systems must undergo a PCI DSS assessment, which is a cost-effective, time-saving approach. It also decreases the threat surface of the cardholder data environment. As a result, the risk to the organization is reduced.

One key consideration is not to create too many different segments, which leads to over-segmentation. This causes network connectivity issues, degraded performance, and unmaintainable access control lists.

Security Baselines

Security baselines (often called benchmarks or configuration checks) are minimum accepted recommendations used to harden an OS, application, database, or service. They typically contain several settings, parameters, and security considerations for the defined system. Most of the requirements in a benchmark are scored to compare the overall security posture of the target system with others in the same region, business function, industry, or type of application. Most of the time, they complement vulnerability scanning; thus, many vulnerability management tools also provide benchmark scanning.

Unfortunately, security hardening and business functionality are opposite sides of a seesaw: where you want full security, there is always a compromise on the business end, which your users never want to see. So, for systems to be resilient to threats while still working efficiently and effectively, security baselines must be fine-tuned. They must be applied to all server and workstation deployments to avoid gaps in the security posture.

Many industry-accepted, mature standards define impact levels for security professionals to consider. Low-impact or level-1 security baselines are generally base recommendations that can be implemented quickly and without any major performance impact; they intend to lower the attack surface while keeping the business functioning. High-impact or level-2 baselines can be used where security is a must; however, if they are implemented blindly, there may be adverse effects on performance or functionality.

Security professionals can create their own security baselines from scratch, use the following baselines, or use a combination, adjusting the parameters to fit their environment without compromising the intended security level.

The following are noticeable security baselines.
  • CIS benchmarks26

  • NIST SP 800-53b Control Baselines for Information Systems and Organizations27

  • Microsoft Security Baselines28

  • Security Technical Implementation Guides (STIGs)29
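Many benchmark checks boil down to comparing a configuration file against expected values. The following is a minimal, hypothetical sketch of such a check for a few OpenSSH server hardening settings; the file path and expected values are illustrative and are not taken from any specific benchmark listed above.

# Illustrative hardening expectations, not an official benchmark.
EXPECTED = {
    "permitrootlogin": "no",
    "passwordauthentication": "no",
    "x11forwarding": "no",
}

def audit_sshd(config_path="/etc/ssh/sshd_config"):
    """Return a list of deviations between the config file and the expected baseline."""
    findings = []
    actual = {}
    with open(config_path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()      # drop comments and whitespace
            if not line:
                continue
            key, _, value = line.partition(" ")
            actual[key.lower()] = value.strip().lower()
    for key, expected in EXPECTED.items():
        if actual.get(key) != expected:
            findings.append(f"{key}: expected '{expected}', found '{actual.get(key, '<unset>')}'")
    return findings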

Redundancy

In its broadest terms, system redundancy means that if a system goes down, another takes its place. Critical systems should be highly available to rapidly recover from any interruption through hot, warm, or cold backups, active-passive, or active-active redundancy.

The lack of redundancy in mission-critical components is called a single point of failure. In security architecture design, it is important to implement redundant, fault-tolerant systems for information security devices.

Unfortunately, in most cases, redundancy implies the purchase of additional hardware or software, which increases costs. For a cost-effective approach, the following can be considered.
  • Revenue or income-generating computing systems must have 99.99% availability, whereas, for others, 99% availability is accepted.

  • Supporting security infrastructure should have a similar approach.

  • RPO (recovery point objectives) and RTO (recovery time objectives) values should be planned and decided. Better (smaller) values mean higher operational costs (more storage, more hardware, etc.). Higher values mean lower operational costs, but you may risk losing important transactional data. The design should aim to reduce the cost and loss to a minimum. Less redundancy can be implemented in systems that are less susceptible to failures.

Load Balancing

Load balancing distributes computing loads over a cluster of (redundant) resources to maximize the system’s efficiency. Current computing systems need to handle millions of requests per day, and it is not always easy for a single system to respond to all requests in time. Thus, additional servers or systems should be added to the environment to process the requests and share the load. However, a front-end “gatekeeper” or supervisor should distribute the load between servers based on several factors, such as availability or the service/function requested. This supervisor should know whether the back-end servers are live and ready to take requests or too busy to handle any. Moreover, it should understand requests and forward them to the correct server for that particular function. These devices are called load balancers (Figure 4-11).
Figure 4-11

Load balancing

Load balancing is used for the following.
  • Scalability: If you know you need more requests coming, you may increase the back-end servers.

  • Redundancy (high availability (HA)): If one server is down, the load balancer knows that it should send the requests to others, which maintains the overall high availability of the application.

  • Maintainability: Upgrades and updates need reboot time. When one server is rebooting, the others can still respond.

  • Security: Modern load balancers also check requests for signs of malicious content such as SQL injection or cross-site scripting attacks and can help mitigate DDoS attacks.

  • Acceleration: Load balancers can offload or terminate TLS communication on themselves so that HTTP traffic (instead of HTTPS) is sent to the server without any TLS overhead. In addition, load balancers can store static content and respond from their cache.

Load balancers distribute the loads based on several techniques .
  • Round robin: The basic load balancing technique where the request is forwarded one by one to every server in the cluster sequentially, without any prioritization.

  • Weighted round robin: The same as round robin, but back-end servers are weighted so that a higher share of requests is sent to the most powerful servers.

  • Randomized static: Requests are forwarded to the servers randomly.

  • Least connection: Requests are sent to the server with the least number of active sessions. The traffic is distributed based on the server load.

  • Weighted least connection: Same as the least connection but back-end servers are weighted. If two servers have the same number of active sessions, the request is forwarded to the server with the higher weight.

  • Least response time method: Requests are sent to the server having the least response time.

  • Least bandwidth method: Requests are sent to the server with the least traffic measured in megabits per second (Mbps).

  • Hashing: Requests are directed to servers based on hashes of various fields of the incoming packet, such as source/destination IP address, port number, URL, or domain name. Some content-aware load balancers use this method.
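Two of the techniques above, round robin and least connection, can be sketched in a few lines. This is a minimal illustration of the selection logic only; the server names and connection counts are made up, and a real load balancer would of course also handle health checks, weights, and connection tracking.

import itertools

servers = ["app-1", "app-2", "app-3"]

# Round robin: cycle through the pool, one request per server in turn.
rr = itertools.cycle(servers)
round_robin_choice = next(rr)

# Least connection: pick the server with the fewest active sessions.
active_sessions = {"app-1": 42, "app-2": 17, "app-3": 30}
least_conn_choice = min(active_sessions, key=active_sessions.get)

print(round_robin_choice)   # app-1, then app-2, app-3, app-1, ...
print(least_conn_choice)    # app-2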

Traditional load balancers distribute the load between servers (Layer 4, server load balancing, or SLB). The industry is now switching to application delivery controllers (ADCs, Layer 7), where the load is distributed between application nodes within an application in a content-aware manner.

For example, for a user authentication request, a conventional load balancer would send the whole request to the application server, which would process it. In contrast, an ADC would direct the request to the authentication module of the application node, where it is handled only by that module, leaving the other nodes free for other requests.

Encryption

We have always used secretive ways to communicate with each other. Even ancient civilizations used cryptography (e.g., substitution ciphers like the Caesar cipher or the Vigenère cipher) to hide messages, and Enigma machines were used during World War II to cipher radio messages. The purpose is always the same: you have the original message (plaintext), submit it to a cryptographic algorithm to encode (or encrypt) it, and, as a result, you get a ciphertext from which only people who know how to decode (or decrypt) it can retrieve the original message. All cryptosystems rely on algorithms, mathematical sets of calculations. For publicly available cryptographic algorithms, it is crucial to protect the encryption key. Algorithms, however, can be made public (Kerckhoffs’ principle) for better analysis of their weaknesses.

Confidentiality ensures that data remains private at rest, in transit, and in use. By encrypting data, you ensure its confidentiality as long as it is not decrypted in any of those phases. Even though data is encrypted, it is prone to cryptographic attacks such as analytic, implementation, and statistical attacks. The ciphertext is also susceptible to brute-force or rainbow table attacks, and if an attacker has sufficient ciphertext and knows the plaintext language, there are methods to retrieve the plaintext, such as frequency analysis, birthday attacks, or replay attacks. This is the primary reason encryption keys need to be rotated.

Symmetric or asymmetric encryption algorithms are used depending on the type of information for which you require confidentiality.

Overall, organizations rely on these methods to protect their data. Portable devices are always prone to being lost or stolen, and if they contain sensitive information, they can harm the organization in various ways. Hence, full-disk encryption technologies exist to transparently encrypt the disk, mostly utilizing the Trusted Platform Module (TPM). Even if the device is stolen, no one except the person who knows the PIN has access to the disk and, therefore, to the sensitive information.

Similarly, files in file shares can be encrypted, and many solutions in the market can perform file and folder encryption. The same principle can be applied to databases, where sensitive data is encrypted and stored in tables as ciphertext. This ensures that even DBAs, along with cyberattackers, cannot see the actual data, assuming the decryption key is stored elsewhere in a protected area. Transparent data encryption (TDE) technologies also exist to provide on-the-fly encryption/decryption using the database encryption keys.

Encrypting your data at rest wherever it is stored does not provide encryption across communication channels. For that, the transport layer should be encrypted (i.e., TLS). HTTPS is the secure version of plaintext HTTP traffic, using TLS certificates. These certificates, ideally, are generated by a trusted certificate authority (CA). If the browser trusts the CA, secure communication between the client and server can be established.

Similarly, the secure version of Telnet, Secure Shell (SSH), provides end-to-end encryption using passwords, keys, or certificates to securely connect and administer the remote device.

VPNs should use secure channels by implementing TLS (as in the case of SSL VPNs) or IPsec as the traffic is transmitted through a public network, the Internet.

Wi-Fi networks also use encryption for communication. Wi-Fi Protected Access (WPA2 or WPA3) provides encryption on the network. Organizations can improve security by using a certificate-based system to authenticate the connecting device, following the IEEE 802.1X standard.

Multi-tier and Multi-layer

Multi-tiering and multi-layering are some of the implementations of the concept of defense-in-depth.

Multi-layering

A multi-layer architecture allows different security controls to be applied between each layer. If one layer is compromised, the remaining layers can still be protected.

One example of multi-layering is the implementation of transactional websites, where applications should have a presentation layer, a business logic or application layer, and a data layer.
  • Presentation layer: The end-user graphical interface

  • Application layer: The application processes requests from the presentation and applies the business logic rules

  • Data layer: Typically, database servers with persistent data are accessed and manipulated by the application layer

Multi-tiering

A multi-tier deployment architecture deploys multiple subnets in the DMZ between the corporate networks and the Internet. Firewalls segment the systems with strict filtering rules, and devices residing in different tiers of the same layer should not communicate. For example, an organization can implement different tiers for its websites’ front ends in the same layer, having one tier for public websites with unauthenticated services, another for remote access front ends, and another for customer-authenticated websites (e.g., Internet banking).

TLS Decryption

Although generally low, encryption creates overhead on the server or the device, and when there is too much traffic, that overhead starts impacting the system’s overall performance. Application servers should not be the ones terminating TLS connections because it degrades the performance of the application component. Instead, ideally, the TLS communication should be terminated at the load balancer or WAF interfaces, which are designed for this process, and the connection between the network device and the server can be in cleartext. This ensures that the application processes the request without the overhead, improving response times. Of course, this setup has shortcomings: someone sniffing the network between the device and the application can collect highly sensitive information. In these cases, additional security features, such as network access control, can come into the picture.

TLS decryption is also needed for inspecting traffic. Secure communication often blinds security and monitoring devices, which can be abused for data exfiltration or malware communication with a command and control (C&C) site. Thus, it is crucial to decrypt and inspect TLS traffic as far as privacy laws allow. By installing a root certificate on clients and acting as a certificate authority for HTTPS requests (unless the application uses certificate pinning), it is possible to decrypt the traffic, perform the inspection that is required and permitted, and then re-encrypt the traffic before sending it to its original destination. There are, as mentioned, some privacy concerns over such inspection, so organizations can choose to implement exceptions for specific site categories such as banking, health, or government. It should be noted that, with TLSv1.3 and Perfect Forward Secrecy (PFS), additional steps may be required to decrypt the traffic.

Perimeter Static Routing

To avoid exposure to misconfigurations, automatic routing protocols should not be implemented between perimeter networks.

Ideally, routing between perimeter networks should be configured manually, and all traffic routed through and filtered by firewalls.

Heartbeat Interfaces

Heartbeat is a concept used when implementing high availability in security devices like firewalls. When implementing a cluster, regardless of the implementation type, active-passive or active-active, each cluster member device must know the other member(s) state. This is implemented with heartbeat interfaces.

For clusters of two devices, the heartbeat can be carried over dedicated interfaces directly connected with patch cables, without intermediate network devices such as switches or hubs, since those devices could be compromised or overloaded and cause heartbeat delays.

For clusters with more than two devices, switches can connect heartbeat interfaces in a distinct Layer 2 VLAN to avoid any impact from other traffic.

Disaster Recovery

Every organization has witnessed (or will witness) a disaster in its life cycle. Whether it is a cyber incident or a natural disaster, IT systems are impacted: they shut down or some data is lost. Disasters can happen anytime, so organizations need to be resilient enough to mitigate their effects. Proper plans and procedures should be designed, implemented, and tested before any actual disruption occurs in the organization.

To identify the mission-critical business systems and find their RTO and RPO, a business impact analysis (BIA) should be performed. The BIA lists the threats to the business, the likelihood of the risks, the potential impact, and the human and technology resources needed to recover the business in case of a disruption. BIAs give an idea of the RPO and RTO values of the services provided so that a business continuity plan (BCP) can be designed. It should be noted that the BCP should reflect the organization’s capabilities, so priorities should be defined.

BIAs are also important for identifying single points of failure, since disruptions mostly originate in those systems. Nowadays, most mission-critical systems can work active-active (one in production, one in the DR site), but budgetary limitations may lead organizations to purchase only one device. In that case, cheaper backup alternatives (full, incremental, or differential backups) can be used.

Security professionals should not simply write the BCP and wait for a disaster. The availability of systems should be addressed by design in the architecture. For example, when designing a new project, depending on the budget provided, a redundant server, a secondary PSU connected to a different electrical network, or disks in a RAID configuration can be considered for fault-tolerant services.

Even though every system can have fault tolerance on its own or backups in place, that does not protect the facility itself. Facilities require electricity, HVAC, Internet connectivity, and many other services, and any failure in these areas degrades performance or even causes service disruptions. Alternate processing sites should therefore be considered. These sites include the following.
  • Cold sites are standby facilities with electrical and HVAC equipment but no hardware or software and no data. They have the longest time to be active, between days to weeks.

  • Hot sites are near-real-time sites with running hardware, software, and data, ready to become the main data center in one DNS update. They have the shortest time to activate, so they have the best RTO and RPO values, although costly to build and maintain.

  • Warm sites have electrical and HVAC equipment, hardware, and software but no data. Time to activate is the time to restore the data to the storage units.

  • Mobile sites are usually cold or warm sites; they provide easy relocation, but their performance is lower than that of other sites.

  • Service bureaus are third-party companies that provide backup and recovery services and locations, servers, and storage units.

  • Cloud service providers deliver “as a service” offerings. IaaS, or infrastructure as a service, provides cloud-hosted physical and virtual servers, storage, and networking. PaaS, or platform as a service, is a ready-to-use, cloud-hosted platform for developing, running, maintaining, and managing applications with zero OS maintenance. SaaS, or software as a service, is ready-to-use, cloud-hosted application software where you only provide data.

Larger organizations may choose to decrease backup and restore costs by having mutual assistance agreements (MAAs) or reciprocal agreements with other companies that have similar hardware and software. When disaster strikes one company, it can use the hardware/software of the other. Although confidentiality and competition are issues here, state regulators may mandate that companies sign such agreements to prevent disruption for the sake of public benefit.

Backup and restore plans, along with the business continuity plan, must be tested annually. This testing can be a read-through test, where the plan is only read to the audience, or a more structured and scenario-based walk-through, also known as a tabletop exercise. Simulation testing can be performed with all the required personnel participating in the exercise. More mature organizations can choose to run a full interruption test, where they fully switch to a recovery site, or relocate only personnel for parallel testing.

Time Synchronization

Time is the quickest and simplest common reference point for all computing systems. Certificates, Kerberos tickets, and events all rely on the accuracy and reliability of time. When the clocks of two systems, a client and a server, differ significantly, the certificate handshake cannot complete, so a secure connection cannot be made. Moreover, inaccurate time means inaccurate time-outs for session management artifacts such as Kerberos tickets, which causes users to drop their sessions.

Time synchronization is also crucial in auditing and forensics. When SIEM tools correlate events to find a true incident, they rely on the accuracy of time.

In a centralized network, time can be provided by at least one time server, which in turn synchronizes its time from a trusted external source. These outside time sources can use NTP (Network Time Protocol), GPS (Global Positioning System), CDMA technology, or other time signals such as IRIG-B, WWVB, JJY, and DCF77.

When it comes to NTP and time synchronization, there is a hierarchical system of time servers in a network, with stratum levels between 0 and 15. The levels indicate the device’s distance to the reference clock.

Stratum 0 is a high-precision timekeeping device (e.g., an atomic clock, GPS, or another radio clock) with very little or no delay. Stratum 0 devices cannot be used directly on the network, so stratum 1 devices get their time from stratum 0, stratum 1 then provides time to stratum 2 servers or clients, and so on down the hierarchy. Considering that there are delays between stratum levels, the higher the stratum number, the less accurate the time. Stratum 16 is considered unsynchronized.
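A basic SNTP (simplified NTP) query illustrates how a client obtains time from such a server. The following is a minimal sketch using only the Python standard library and assuming pool.ntp.org is reachable over UDP port 123 from the host; production systems would of course use a proper NTP daemon rather than a one-shot query.

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def sntp_time(server="pool.ntp.org"):
    """Send one SNTP client request and return the server time as a Unix timestamp."""
    packet = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(512)
    transmit_seconds = struct.unpack("!I", data[40:44])[0]  # seconds field of the transmit timestamp
    return transmit_seconds - NTP_EPOCH_OFFSET

offset = sntp_time() - time.time()
print(f"local clock offset is roughly {offset:.2f} seconds")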

When security professionals decide on time synchronization, they should use the best suited method for their needs and capabilities.

Although NTP from trusted sources works fine most of the time, it is not free of attacks, misuse, and abuse.30

Log Concentrator

Log concentrators, also known as log collectors or aggregators, are systems that back up the logs from devices in near real time; even when the device generating the logs is compromised, the logs are protected. They can also save bandwidth, as some concentrators compress the required logs before sending them to the SIEM (Figure 4-12). Most concentrators support encryption, so the shipped logs travel over a secure channel, ensuring their integrity and confidentiality.
Figure 4-12

Log concentrator

Some log concentrators are “write-once” hardware designed to protect highly sensitive logs. Most of these systems reside in either logically or physically separated zones.

Routing and Management Networks

Management Networks

Management and other traffic not directly related to production (e.g., backups) should be segregated from the remaining VLANs to avoid any possible impact on business traffic and to avoid being impacted in case the “business” networks are compromised. Device management traffic, such as iLO (integrated lights-out) or iDRAC, and the management of network devices, security devices, and virtualization platforms should be carried over distinct VLANs.

Perimeter Routing Networks

Depending on the number of tiers and layers implemented in the DMZ and the need to connect some of them, there may be the need to implement a routing network that directly connects all perimeter firewalls to ensure connectivity inside the DMZ.

Centralized Management

In large organizations with distant or overseas offices, managing network and security devices can become a challenging task. IT and security professionals must spend a lot of time configuring changes or applying patches to these devices. As this is prone to human error, it may result in gaps in the security posture. In addition, troubleshooting an issue can be extremely difficult, as all devices must be checked one by one to pinpoint the problem.

Instead of managing devices one by one, devices should be centrally managed. Security professionals can have full visibility, easier change implementations, and even automation through a single console. Most security devices now support management through APIs or proprietary communication protocols and consoles. When devices are onboarded to centralized management, security policies, signature updates, or patches can be distributed and managed from a central dashboard, increasing efficiency and effectiveness and reducing cost and risk. From the centralized management platform, devices can be easily monitored, and any issues regarding availability can be detected.

Physical Network Segmentation

VLAN hopping is an attack where the attacker can move between VLANs in the same switch. Although most switch vendors have effective protection mechanisms for this kind of attack, organizations should have physical network segmentation between each layer of their perimeter and implement distinct switches for each one of those layers. For example, one switch to support all networks directly connected to the Internet outside the perimeter, another switch to support all presentation layer networks, another switch to support all application layer networks, and another to support data layer networks.

Switch redundancy is also recommended.

Sinkhole

A sinkhole is an anomaly detection concept that is relatively simple to implement and is very useful for detecting misconfigurations and malicious activities.

Considering that internal networks should only route traffic to internal IP addresses and exclusively to the corporate address space, a device (e.g., a router) can be configured as the corporate default route to which all traffic destined for non-corporate IP addresses (public and private) or undefined networks is redirected. Since the sinkhole should not receive any traffic under normal conditions, whenever it does receive network traffic, it can mean that there is a misconfiguration, that the network is being manually or automatically probed, or that a compromised device is trying to contact a command and control (C&C) external IP address.

A sinkhole must be integrated with SIEM, and the appropriate use cases must be defined to maximize the SOC response effectiveness.

The sinkhole concept has evolved into network detection and response (NDR), where enterprise network traffic is continuously analyzed, monitored, and compared with a “learned normal network behavior” baseline. The NDR generates an alert whenever suspicious or anomalous network traffic patterns are detected.

Public Key Infrastructure

Public key infrastructure (PKI) is the infrastructure for managing digital certificates and public key encryption that protects communications between server and client. With PKI, the server and client can be unknown to each other; they are not required to exchange keys or certificates before the actual handshake, as long as their certificates were issued by a certificate authority trusted by the client. PKI is mainly based on digital certificates that verify the identity of devices or users.

PKI has three main components: digital certificates, certificate authority, and registration authority.

Digital certificates (conforming to X.509)31 are electronic identification for users, devices, websites, and organizations.

A CA acts as a notary service for digital certificates. CAs issue and authenticate the digital identities of users based on a hierarchical setup. For example, if a browser trusts one certificate authority, it also trusts every certificate issued by that CA until the expiration of the CA root certificate, the intermediate certificates, or the actual certificate itself. For someone (or some organization) to obtain a certificate from a publicly trusted CA, the organization must provide a certificate request and identity information. After identity verification, the CA issues a certificate (containing the public key) to the organization to install on its web server. Anyone who trusts that CA will trust the digitally signed website certificate.

A registration authority (RA), on the other hand, is authorized by the CA to receive requests from users, collect and verify identity, and submit information to the CA.

It should be noted that the certificates should be checked to see if they are still valid using certificate revocation lists (CRL) or the Online Certificate Status Protocol (OCSP).

PKI can be used for the following purposes.
  • Securing email communication (either the email or the transaction)

  • Securing web communications (HTTPS)

  • Securing other communication channels

  • Digitally signing software code and applications (ensuring their integrity is maintained in transit)

  • Encrypting and decrypting files, folders, disks

  • Smartcard authentication

  • Device authentication when connecting to the network

Public Key and Private Key

Meet Alice and Bob,32 two fictional characters who explain any cryptographic system, and this book is no exception.

Alice and Bob wish to exchange information without other parties eavesdropping.

Alice has a symmetric key, meaning the same key can encrypt and decrypt data. Alice encrypts the plaintext and sends the ciphertext to Bob, and if Bob has the same secret key, he can decrypt and read the data. This process requires the secret key to be shared with all parties that read the ciphertext and is called symmetric encryption. Although it is a fast process, the main drawback is that the key distribution has to be secure, because anyone sniffing the traffic while the key is being shared can see the key and use it. It also does not provide any non-repudiation, because anyone can use the key, and if the key is compromised, all parties must change their keys. Examples are AES, 3DES, and the International Data Encryption Algorithm (IDEA).

In asymmetric encryption, the key has two parts: a private key and a public key. The public key can be distributed to anyone authorized to see the data, and only the private key can decrypt something encrypted with the public key. Examples are RSA, DSA, ECC, and Diffie–Hellman key exchange.

So, Alice wishes to send data to Bob. She requests Bob’s public key, encrypts the data with that public key, and sends the ciphertext to Bob. With his private key, Bob can decrypt and read the message. In other words, you publish a public key to the parties reporting to you so that they may encrypt the information they send you, and you then decrypt that information with your private key. In this way, key distribution is far easier than in symmetric encryption.

In real-world implementations, public-key cryptography is very slow, so public keys are not very commonly used to encrypt actual messages. Instead, a hybrid approach is used.
  1. Bob sends Alice his public key.

  2. Alice generates a random symmetric key (usually called a session key), encrypts it with Bob’s public key, and sends it to Bob.

  3. Bob decrypts the session key with his private key.

  4. Alice and Bob exchange messages using the session key.

  5. The session key is discarded after communication.
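This flow can be sketched with the third-party "cryptography" package (pip install cryptography). The following is a minimal illustration under that assumption, with Fernet standing in for the symmetric session cipher; it is not a production key-exchange implementation.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Bob's key pair; the public half is what he sends to Alice (step 1).
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()

# Alice generates a random session key and wraps it with Bob's public key (step 2).
session_key = Fernet.generate_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = bob_public.encrypt(session_key, oaep)

# Bob unwraps the session key with his private key (step 3); both sides now share it.
recovered_key = bob_private.decrypt(wrapped_key, oaep)

# Messages are exchanged under the session key (step 4).
ciphertext = Fernet(session_key).encrypt(b"Meet at noon")
print(Fernet(recovered_key).decrypt(ciphertext))   # b'Meet at noon'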

     
PKI provides non-repudiation through signing with a private key (digital signatures), which also ensures the integrity of the data after signing.
  6. Alice calculates the one-way hash of a document.

  7. Alice encrypts the hash with her private key. The encrypted hash is now the document’s signature.

  8. Alice sends the document to Bob with the encrypted hash.

  9. Bob uses the same one-way hash function to derive the hash, then he decrypts the encrypted hash with Alice’s public key and compares the values. If they match, it is ensured that the document came from Alice, and the document’s integrity was protected during transmission.
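The signing flow can also be sketched with the "cryptography" package, under the same assumption that it is installed. Note that the library’s sign and verify calls hash the document internally, so steps 6 and 7 collapse into a single call, and verification raises an exception when the values do not match.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

alice_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
alice_public = alice_private.public_key()

document = b"Quarterly report, final version"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

signature = alice_private.sign(document, pss, hashes.SHA256())   # steps 6-7

try:                                                             # step 9, on Bob's side
    alice_public.verify(signature, document, pss, hashes.SHA256())
    print("Signature valid: the document came from Alice and was not altered")
except InvalidSignature:
    print("Signature check failed")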

     

Security Monitoring and Enforcement

Privileged Access Management

Just as user access to systems is supervised, privileged access should also be monitored, tracked and managed, because “with great power comes great responsibility,33” and trust should not prevent controlling that power.

Indeed, at some point, we have to trust IT administrators; they are the ones keeping the wheels turning. However, while they administer the systems, privacy concerns always arise due to potential privilege misuse. To prevent that and implement a true multi-layered defense, even system administrators should be monitored. Privileged access management (PAM) tools are used for this purpose.34 They act as a safeguard for privileged identities, which are stored in a secure vault and used only when required, some without even displaying the password.

PAM is based on the least privilege principle, where users (or accounts) are only allowed the minimum access rights required to perform their BAU activities. In addition, PAM provides more granular visibility, control, and auditing over privileged identities and activities.

In PAM, privileged accounts, such as superuser accounts, domain administrative accounts, local admin accounts, important SSH keys, emergency break glass accounts, application accounts, service accounts, or privileged business accounts, are onboarded to the tool. From that point, these accounts are managed by that tool. The passwords are changed in a defined time frame without any user interaction. Users can only log in to the PAM platform, and from there, they can remotely log in to the systems they are administering. This helps to track all administrative activities—most PAM tools can record entire sessions, including the commands typed by the user, through session management (Figure 4-13).
Figure 4-13

Privileged access management

PAM also supports automation, access approval requests, and workflows. For example, if you have a highly confidential server or database for which you want to enforce the four-eyes principle before login, you can set up a workflow where the system “manager” must approve the request before PAM allows access to the server. This helps audit the server and the system admins’ activities and provides a secure connection, which prevents MitM attacks.

The PAM login process should implement multi-factor authentication to provide an additional security layer and prevent stolen user accounts from being used.

A key consideration is that there should not be any bypass method (admins having direct access to the systems), and admins should have no knowledge of the passwords used to connect. Organizations tend to have only one PAM tool to ensure that key passwords are kept in as few places as possible, but this creates a single point of failure: in case of a disaster, access to critical platforms can be compromised.

Security Information and Event Management

Security information and event management (SIEM) tools are an essential system of an organization, typically managed by the security operations center. A SIEM collects all logs and events in the environment (like sensory input from various parts of the body), analyzes and correlates them (like a brain does), turns them into a meaningful understanding of what is going on in the organization, and supports acting accordingly (like treating a fever).

SIEM systems can aggregate relevant data from multiple log sources, such as servers, workstations, applications, network devices, security devices, and even threat intelligence feeds.

The following is an example scenario.
  1. A user clicks a link in an email, the workstation downloads a malicious file from the Internet, and the file starts encrypting the system.
    • The user receives the email (MTA logs)
    • There is a log of the connection from the workstation to the malicious site, both for downloading the payload and for C&C communication (firewall logs)
    • New, unusual processes, registry keys, and scheduled tasks are created35 (OS logs)
    • A file executes DLLs for bulk encryption (OS logs)

  2. The malware tries several exploits and ports to move laterally.
    • Unusual system-to-system communication, maybe even port scanning (network device logs)
    • Abnormal scripting activities (OS logs)

All these events can be sent to the SIEM for analysis, and the correlated result would be a ransomware outbreak. If the security operations center has the necessary standard operating procedures (SOPs) for such malware outbreaks (or other threats), the outbreak can be contained with minimal impact.
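The correlation logic behind such a use case can be reduced to a small sketch. The following is a minimal, hypothetical illustration that flags a host when several of the event types from the scenario above occur within a short window; the field names, event types, and threshold are made up for illustration and are not taken from any particular SIEM product.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized events, as they might arrive from the log sources above.
events = [
    {"time": datetime(2022, 3, 1, 9, 0), "host": "WS042", "type": "mail_delivered"},
    {"time": datetime(2022, 3, 1, 9, 2), "host": "WS042", "type": "connection_to_blacklisted_site"},
    {"time": datetime(2022, 3, 1, 9, 3), "host": "WS042", "type": "scheduled_task_created"},
]

REQUIRED = {"mail_delivered", "connection_to_blacklisted_site", "scheduled_task_created"}
WINDOW = timedelta(minutes=15)

by_host = defaultdict(list)
for e in events:
    by_host[e["host"]].append(e)

for host, host_events in by_host.items():
    host_events.sort(key=lambda e: e["time"])
    observed_types = {e["type"] for e in host_events}
    span = host_events[-1]["time"] - host_events[0]["time"]
    if REQUIRED <= observed_types and span <= WINDOW:
        print(f"ALERT: possible ransomware delivery chain on {host}")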

SIEMs are also used for the organization’s case management. When events are collected, they are presented as cases to SOC analysts, requiring manual review of each particular incident. Some events lead to false-positive incidents, meaning they are not legitimate information security incidents, but to verify that, SOC analysts need a good representation of the events, which is maintained by the case management modules of SIEM tools. These modules may also include additional data from the affected device, such as the last vulnerability scan date, missing patches, information on resources, crash dumps, running services, and local users and groups, which are critical to identifying the incident (e.g., decreasing the mean time to detect) and its potential impact on the environment. Case management modules also provide granular access to cases and audit trails, which results in an efficient distribution of the analysts’ workloads and better collaboration on the cases.

SIEMs provide key information to SOAR tools for automated countermeasures to security incidents. Certain use cases with SOAR can be designed so that when an incident is confirmed, the SOAR initiates actions such as adding or modifying ACLs on the system to block malicious traffic, changing the privileges of user accounts or even removing the accounts, applying NAC measures to the device, or shutting down a service or even the device.

Database Activity Monitoring

Database activity monitoring (DAM) tools track database activities such as privileged user (primarily DBA) activity, application activity, and unusual or undesirable activities such as cybersecurity attacks and fraudulent transactions. Basic tools only analyze the accounts connected to the DBMS, but newer technologies also allow security professionals to discover and classify data inside a database and then build rules for access to those sensitive tables, conduct vulnerability management to discover missing patches on the DBMS, or run security compliance benchmarks against the DBMS configuration.

DAMs can be implemented as stand-alone configurations that act as a network sniffer, as software modules, or as agents loaded on the database servers. Each implementation has pros and cons, but the general idea is to catch every activity happening in a DBMS. Agent-based approaches can collect nearly all the information required for efficient monitoring; still, the agent must be installed on every database, DBAs are usually very reluctant to install agents on their systems, and the DAM must support different DBMS platforms and versions. A sniffing approach requires no configuration changes to the database and creates zero overhead; however, it only works as long as every ingress and egress channel to and from the database is captured, and the solution must be able to analyze secured traffic. In addition, network monitoring does not know the internal state of the database. Remote connections to the databases are also used for monitoring purposes, but since they create noticeable overhead, they may cause performance issues.

DAMs collect, monitor, and audit all database activities, including but not limited to SELECT and GRANT statements, to keep track of who or what accessed sensitive information. In addition, they can correlate multiple security-related events occurring in the DBMS and alert SOC analysts to a possible threat or a violation of rule-based or heuristic policies.

In large organizations, the workload of discovering sensitive data in databases can become a hassle if there is no continuous communication between DBAs, business teams, and the security operations center. Thus, instead of keeping every activity in a database (which consumes an enormous amount of space), data classification can be used to detect sensitive data. This classification can be based on table or column names, data schemas, regular expression searches, or even a sample data set from a particular column. As a result, activity monitoring becomes more focused, improving the mean time to detect and the mean time to respond to an incident. This information also helps meet compliance requirements such as encrypting sensitive data inside a database or auditing access to it.
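The regular-expression style of classification mentioned above can be sketched briefly. The following is a minimal, hypothetical example that flags a sample of column values as likely cardholder data when they contain digit strings that pass a Luhn check; real products combine many such detectors with schema and dictionary matching.

import re

PAN_PATTERN = re.compile(r"\b\d{13,19}\b")   # candidate payment card numbers

def luhn_valid(number):
    """Standard Luhn checksum used to weed out random digit strings."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def looks_like_cardholder_data(sample_values):
    """Return True if any sampled value contains a Luhn-valid card-like number."""
    for value in sample_values:
        for candidate in PAN_PATTERN.findall(value):
            if luhn_valid(candidate):
                return True
    return False

print(looks_like_cardholder_data(["order 1234", "4111111111111111"]))   # True (well-known test PAN)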

Single Sign-on

Single sign-on (SSO) is a session and user authentication scheme that allows users to consolidate authentication and use only one, or very few, sets of login credentials, keeping the number of sign-ons required from users to a minimum. It is a form of identity federation in which applications (AD-aware, Kerberized, or SAML-based) retrieve the user’s authentication credentials, such as Kerberos tickets, and check with the authentication service that the ticket is valid so that the user is not asked to re-authenticate.

SSO provides a better end-user experience, but it also has security benefits. With SSO, it is possible to keep track of user logins and reject other login attempts: a login attempt fails if the user has already logged in through SSO from a legitimate source. In addition, third-party applications never see the actual user credentials, which minimizes the threat surface.

SSO is also criticized because it provides access to multiple applications: if the user credentials are compromised, malicious threat actors may gain access to every application the user has rights to. To overcome this, the credentials used for SSO should be protected by MFA, using OTPs or smart cards. Moreover, if SSO access is lost, the applications must fall back to their generic authentication mechanisms; otherwise, users cannot log in.

Nowadays, federated logins from social platforms such as Facebook, Google, LinkedIn, Twitter are also possible through OAuth, but these may not be applicable for corporate environments.

Risk Register

A risk register is a GRC (governance, risk, and compliance) tool used for recording identified risks, their likelihood and impact, planned actions to reduce the risks, and the person or department responsible for managing them. In a security sense, it is the repository for information security-related risks, including but not limited to vulnerabilities observed in the organization’s assets, security findings from penetration tests, findings recorded in dynamic or static application security testing (DAST, SAST), audit findings, and others, so that security professionals can prioritize their remediation plans.

A risk register helps security teams effectively integrate their risks into the enterprise risk management program.

According to NIST IR 8286: Integrating Cybersecurity and Enterprise Risk Management (ERM),36 a security risk register should contain the elements listed and described in Table 4-7.
Table 4-7

Descriptions of Notional Cybersecurity Risk Register Template Elements from NISTIR 8286

Register Element | Description
ID (risk identifier) | A sequential numeric identifier for referring to risk in the risk register
Priority | A relative indicator of the criticality of this entry in the risk register, either expressed in ordinal value (e.g., 1, 2, 3) or in reference to a given scale (e.g., high, moderate, low)
Risk Description | A brief explanation of the cybersecurity risk scenario (potentially) impacting the organization and enterprise. Risk descriptions are often written in a cause-and-effect format, such as “if X occurs, then Y happens.”
Risk Category | An organizing construct that enables multiple risk register entries to be consolidated (e.g., using SP 800-53 Control Families: Access Control (AC), Audit and Accountability (AU)). Consistent risk categorization helps compare risk registers during the risk aggregation step of ERM.
Current Assessment: Likelihood | An estimation of the probability of this scenario occurring before any risk response. This may also be considered the initial assessment on the first iteration of the risk cycle.
Current Assessment: Impact | Analysis of the potential benefits or consequences of this scenario if no additional response is provided. This may also be considered the initial assessment on the first iteration of the risk cycle.
Current Assessment: Exposure Rating | A calculation of the probability of risk exposure based on the likelihood estimate and the determined benefits or consequences of the risk. Other common frameworks use different terms for this combination, such as level of risk (e.g., ISO 31000, NIST SP 800-30 Rev. 1). This may also be considered the initial assessment on the first iteration of the risk cycle.
Risk Response Type | The risk response (sometimes referred to as the risk treatment) for handling the identified risk.
Risk Response Description | A brief description of the risk response. For example, “Implement software management application XYZ to ensure that software platforms and applications are inventoried,” or “Develop and implement a process to ensure the timely receipt of threat intelligence from [name of specific information-sharing forums and sources].”
Risk Owner | The designated party responsible and accountable for ensuring that the risk is maintained in accordance with enterprise requirements. The risk owner may work with a designated risk manager responsible for managing and monitoring the selected risk response.
Status | A field for tracking the current condition of the risk.

After identifying the risks, risk treatment activities need to be performed. According to NISTIR 8286, the risk response types are described in Table 4-8.
Table 4-8

Risk Response Types from NISTIR 8286

Type | Description
Accept | Accept cybersecurity risk within risk tolerance levels. No additional risk response action is needed except for monitoring.
Transfer | For cybersecurity risks that fall outside of tolerance levels, reduce them to an acceptable level by sharing a portion of the consequences with another party (e.g., cybersecurity insurance). While some of the financial consequences may be transferable, there are often consequences that cannot be transferred, like losing customer trust.
Mitigate | Apply actions that reduce a given risk’s threats, vulnerabilities, and impacts to an acceptable level. Responses could include those that help prevent a loss (i.e., reducing the probability of occurrence or the likelihood that a threat event materializes or succeeds) or that help limit such a loss by decreasing the amount of damage and liability.
Avoid | Apply responses to ensure that the risk does not occur. Avoiding risk may be the best option if there is no cost-effective method to reduce the cybersecurity risk to an acceptable level. The cost of the lost opportunity associated with such a decision should also be considered.

Once all required information is entered into a risk register, the security professionals can do the following.
  • Identify all risks in the environment.

  • Prioritize the actions.

  • Start building risk treatment/response activities.

  • Notify all related departments of the risks and action plans, involve them in the process, and report the residual risks.

  • Reiterate the cycle for the next risk assessment.

Most GRC tools are capable of building risk registers and have built-in integration capabilities with most vulnerability management tools, security configuration compliance benchmarks, application security testing tools, SOARs, asset inventory, ticketing systems, and third-party risk assessment tools.
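For smaller teams without a GRC tool, the NISTIR 8286 elements of Table 4-7 can even be captured in a simple structure. The following is a minimal, hypothetical sketch of one register entry; every field value is illustrative only.

from dataclasses import dataclass, asdict

@dataclass
class RiskRegisterEntry:
    risk_id: int
    priority: str
    description: str
    category: str
    likelihood: str
    impact: str
    exposure_rating: str
    response_type: str          # Accept, Transfer, Mitigate, or Avoid (Table 4-8)
    response_description: str
    owner: str
    status: str

entry = RiskRegisterEntry(
    risk_id=1,
    priority="high",
    description="If the Internet banking front end is not patched, then exploitation may occur.",
    category="Vulnerability Management",
    likelihood="moderate",
    impact="high",
    exposure_rating="high",
    response_type="Mitigate",
    response_description="Apply the vendor patch and schedule a retest.",
    owner="Head of Digital Channels",
    status="open",
)
print(asdict(entry))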
