Chapter 5. Network and Security Components, Concepts, and Architectures

This chapter covers CAS-003 objective 2.1.

A secure network design cannot be achieved without an understanding of the components that must be included and the concepts of secure design that must be followed. While it is true that many security features come at a cost of performance or ease of use, these are costs that most enterprises will be willing to incur if they understand some important security principles. This chapter discusses the building blocks of a secure architecture.

Physical and Virtual Network and Security Devices

To implement a secure network, you need to understand the available security devices and their respective capabilities. The following sections discuss a variety of devices, both hardware and software based.


Unified threat management (UTM) is an approach that involves performing multiple security functions within the same device or appliance. The functions may include:

  • Network firewalling

  • Network intrusion prevention

  • Gateway antivirus

  • Gateway antispam

  • VPN

  • Content filtering

  • Load balancing

  • Data leak prevention

  • On-appliance reporting

UTM makes administering multiple systems unnecessary. However, some security professionals feel that UTM creates a single point of failure and favor creating multiple layers of devices as a more secure approach. Table 5-1 lists some additional advantages and disadvantages of UTM.


Table 5-1 Advantages and Disadvantages of UTM



Advantages:

  • Lower up-front cost
  • Lower maintenance cost
  • Less power consumption
  • Easier to install and configure
  • Full integration

Disadvantages:

  • Single point of failure
  • May lack the granularity provided in individual tools
  • Performance issues related to one device performing all functions



An intrusion detection system (IDS) is a system responsible for detecting unauthorized access or attacks against systems and networks. It can verify, itemize, and characterize threats from outside and inside the network. Most IDSs are programmed to react in certain ways in specific situations. Event notification and alerts are crucial to an IDS. They inform administrators and security professionals when and where attacks are detected. An intrusion prevention system (IPS) is a system responsible for preventing attacks.

IDS/IPS implementations are further divided into the following categories:

  • Signature based: This type of IDS/IPS analyzes traffic and compares it to attack or state patterns, called signatures, that reside within the IDS database. It is also referred to as a misuse-detection system. Although this type of IDS is very popular, it can only recognize attacks as compared with its database and is only as effective as the signatures provided. Frequent updates are necessary. The two main types of signature-based IDSs/IPSs are:

    • Pattern matching: The IDS/IPS compares traffic to a database of attack patterns. The IDS carries out specific steps when it detects traffic that matches an attack pattern.

    • Stateful matching: The IDS/IPS records the initial operating system state. Any changes to the system state that specifically violate the defined rules result in an alert or a notification being sent.

  • Anomaly based: This type of IDS/IPS analyzes traffic and compares it to normal traffic to determine whether said traffic is a threat. It is also referred to as a behavior-based or profile-based system. The problem with this type of system is that any traffic outside expected norms is reported, resulting in more false positives than with signature-based systems. There are three main types of anomaly-based IDSs:

    • Statistical anomaly based: The IDS/IPS samples the live environment to record activities. The longer the IDS/IPS is in operation, the more accurate the profile that is built. However, developing a profile that will not have a large number of false positives can be difficult and time-consuming. Thresholds for activity deviations are important in this type of IDS/IPS. A threshold that is too low results in false positives, whereas a threshold that is too high results in false negatives.

    • Protocol anomaly based: The IDS/IPS has knowledge of the protocols that it will monitor. A profile of normal usage is built and compared to activity.

    • Traffic anomaly based: The IDS/IPS tracks traffic pattern changes. A sample of normal traffic is taken as a baseline, and all future traffic patterns are compared to the sample. Changing the threshold reduces the number of false positives or negatives. This type of filter is excellent for detecting unknown attacks, but user activity might not be static enough to effectively implement this system.

  • Rule or heuristic based: This type of IDS/IPS is an expert system that uses a knowledge base, an inference engine, and rule-based programming. The knowledge is configured as rules. The data and traffic are analyzed, and the rules are applied to the analyzed traffic. The inference engine uses its intelligent software to “learn.” If characteristics of an attack are met, alerts or notifications trigger. This is often referred to as an if/then, or expert, system.
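The signature-based pattern matching described above can be sketched in a few lines. This is an illustrative toy, not a real IDS engine; the signature names and byte patterns are invented for the example:

```python
# Toy signature-based detection: compare traffic payloads against a
# database of known attack patterns, as a pattern-matching IDS/IPS does.
# Signature names and patterns here are invented for illustration.

SIGNATURES = {
    "sql-injection": b"' OR 1=1",
    "path-traversal": b"../../etc/passwd",
    "xss-probe": b"<script>alert(",
}

def match_signatures(payload: bytes) -> list:
    """Return the names of every signature found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

alerts = match_signatures(b"GET /index.php?id=' OR 1=1-- HTTP/1.1")
print(alerts)  # the sql-injection signature matches
```

As the text notes, a real engine is only as good as its signature database, which is why frequent updates are necessary: a payload with no matching entry produces no alert at all.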

An IPS is a system that is responsible for preventing attacks. When an attack begins, an IPS takes actions to prevent and contain the attack. An IPS can be network or host based, like an IDS. Although an IPS can be signature or anomaly based, it can also use a rate-based metric that analyzes the volume of traffic as well as the type of traffic.

In most cases, implementing an IPS is costlier than implementing an IDS because of the added security of preventing attacks rather than simply detecting them. In addition, running an IPS imposes a greater overall performance load than running an IDS.

While an IDS should be a part of any network security solution, there are some limitations to this technology, including the following:

  • Network noise limits effectiveness by creating false positives.

  • A high number of false positives can cause a lax attitude on the part of the security team.

  • Signatures must be updated constantly.

  • There is lag time between the release of an attack and the release of the corresponding signature.

  • An IDS can’t address authentication issues.

  • Encrypted packets can’t be analyzed.

  • In some cases, IDS software is susceptible to attacks.

The most common way to classify an IPS or IDS is based on its information source: network based or host based.


A host-based IDS/IPS (HIDS/HIPS) monitors traffic on a single system. Its primary responsibility is to protect the system on which it is installed. A HIDS/HIPS uses information from the operating system audit trails and system logs. The detection capabilities of a HIDS are limited by the completeness of the audit logs and system logs.

An application-based IDS/IPS is a specialized IDS/IPS that analyzes transaction log files for a single application. This type of IPS/IDS is usually provided as part of an application or can be purchased as an add-on.

Tools that can complement an IDS/IPS include vulnerability analysis systems, honeypots, and padded cells. Honeypots are systems that are configured with reduced security to entice attackers so that administrators can learn about attack techniques. Padded cells are special hosts to which an attacker is transferred during an attack.


A network IPS (NIPS) scans traffic on a network for signs of malicious activity and then takes some action to prevent it. A NIPS monitors an entire network. You need to be careful to set the filter of a NIPS in such a way that false positives and false negatives are kept to a minimum. A false positive is an unwarranted alarm, and a false negative indicates troubling traffic that doesn’t generate an alarm. The advantages and disadvantages of NIPS devices are shown in Table 5-2.


Table 5-2 Advantages and Disadvantages of NIPS Devices



Advantages:

  • Can protect up to the application layer
  • Take action to prevent attacks
  • Permit real-time correlation
  • Contribute to defense in depth

Disadvantages:

  • False positives can cause problems with automatic response
  • Performance can be slow
  • Can be costly
  • May have trouble keeping up with traffic


The most common IDS, a network IDS (NIDS), monitors network traffic on a local network segment. To monitor traffic on the network segment, the network interface card (NIC) must be operating in promiscuous mode. A NIDS can only monitor the network traffic; it cannot monitor any internal activity that occurs within a system, such as an attack against a system that is carried out by logging on to the system’s local terminal. A NIDS is affected by a switched network because it generally monitors only a single network segment. Table 5-3 lists advantages and disadvantages of NIDS devices.


Table 5-3 Advantages and Disadvantages of NIDS Devices



Advantages:

  • Can protect up to the application layer
  • Detect attacks in real time
  • Permit real-time correlation
  • Contribute to defense in depth

Disadvantages:

  • False positives can cause problems with automatic response
  • Performance can be slow
  • Can be costly
  • May have trouble keeping up with traffic


An in-line network encryptor (INE), also called a high-assurance Internet Protocol encryptor (HAIPE), is a Type I encryption device. Type I designation indicates that it is a system certified by the National Security Agency (NSA) for use in securing U.S. government classified documents. To achieve this designation, the system must use NSA-approved algorithms. Such systems are seen in government deployments, particularly those of the Department of Defense (DoD).

INE devices may also support routing and layer 2 virtual LANs (VLANs). They also are built to be easily disabled and cleared of keys if in danger of physical compromise, using a technique called zeroization. INE devices are placed in each network that needs their services, and the INE devices communicate with one another through a secure tunnel. Table 5-4 lists advantages and disadvantages of INE devices.

Table 5-4 Advantages and Disadvantages of INE Devices




Advantages:

  • Easily disabled and cleared of keys (zeroization)
  • Communicate through a secure tunnel
  • Certified by the NSA
  • May also support routing and layer 2 VLANs

Disadvantages:

  • Introduce a single point of failure for each link if link encryption is 1:1



Network access control (NAC) is a service that goes beyond authentication of the user and includes an examination of the state of the computer the user is introducing to the network when making a remote access or VPN connection to the network.

The Cisco world calls these services network admission control services, and the Microsoft world calls them network access protection (NAP) services. Regardless of the term used, the goals of the features are the same: to examine all devices requesting network access for malware, missing security updates, and any other security issues the devices could potentially introduce to the network. Table 5-5 lists advantages and disadvantages of NAC devices.


Table 5-5 Advantages and Disadvantages of NAC Devices



Advantages:

  • Prevent introduction of malware from infected systems
  • Ensure that updates are current
  • Support BYOD
  • Can limit the reach of less trusted users

Disadvantages:

  • Cannot protect information that leaves the premises via email, laptop theft, printouts, or USB storage devices
  • Cannot defend against social engineering
  • Cannot prevent users with authorized access from using data inappropriately
  • Cannot block known malware from entering over the WAN connection
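A posture check of the kind NAC performs can be sketched as follows. The policy fields (antivirus status, patch level, host firewall) and the threshold values are hypothetical, chosen only to illustrate the admit/deny decision:

```python
# Hypothetical NAC-style posture check: before a device is admitted to the
# network, its reported state is compared against the admission policy.
# Field names and the patch-level threshold are illustrative assumptions.

POLICY_MIN_PATCH_LEVEL = 17

def admit(device: dict) -> bool:
    """Return True only if the device meets every posture requirement."""
    if not device.get("antivirus_running"):
        return False
    if device.get("patch_level", 0) < POLICY_MIN_PATCH_LEVEL:
        return False
    return bool(device.get("firewall_enabled"))

healthy = {"antivirus_running": True, "patch_level": 18, "firewall_enabled": True}
stale = {"antivirus_running": True, "patch_level": 12, "firewall_enabled": True}
print(admit(healthy), admit(stale))  # the out-of-date machine is refused
```

A real deployment would quarantine the failing device into a remediation VLAN rather than simply refusing it, but the decision logic is the same.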


Security information and event management (SIEM) utilities receive information from log files of critical systems and centralize the collection and analysis of this data. SIEM technology is an intersection of two closely related technologies: security information management (SIM) and security event management (SEM). Figure 5-1 shows the relationship between the reporting, event management, and log analysis components.


Figure 5-1 SIEM Reporting, Event Management, and Log Analysis

Log sources for SIEM can include the following:

  • Application logs

  • Antivirus logs

  • Operating system logs

  • Malware detection logs

One consideration when working with a SIEM system is to limit the amount of information collected to what is really needed. Moreover, you need to ensure that adequate resources are available to ensure good performance.
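The advice above — collect only what is really needed — can be illustrated with a toy aggregator that normalizes events from several log sources into one time-ordered stream and drops anything below a chosen severity. The source names, levels, and messages are assumptions for the example:

```python
# Toy sketch of SIEM-style centralization: merge events from several log
# sources, keep only events at or above a chosen severity, and sort them
# by timestamp so they can be correlated. All data here is illustrative.

SEVERITY = {"info": 0, "warning": 1, "critical": 2}

def collect(sources: dict, min_severity: str = "warning") -> list:
    """sources maps a log source name to (timestamp, level, message) tuples."""
    threshold = SEVERITY[min_severity]
    events = [
        (ts, source, level, msg)
        for source, lines in sources.items()
        for ts, level, msg in lines
        if SEVERITY[level] >= threshold
    ]
    return sorted(events)  # time-ordered, ready for correlation

logs = {
    "antivirus": [("2023-01-01T10:01", "critical", "malware detected")],
    "os": [("2023-01-01T09:59", "info", "user logon"),
           ("2023-01-01T10:00", "warning", "service crashed")],
}
for event in collect(logs):
    print(event)  # the routine "info" event is filtered out at collection
```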

In summary, an organization should implement a SIEM system when:

  • More visibility into network events is desired

  • Faster correlation of events is required

  • Compliance issues require reporting to be streamlined and automated

  • It needs help prioritizing security issues

Table 5-6 lists advantages and disadvantages of SIEM.


Table 5-6 Advantages and Disadvantages of SIEM



Advantages:

  • Identifies network threats in real time
  • Enables quick forensics
  • Has a GUI-based dashboard
  • Enables administrators to study the root causes of errors

Disadvantages:

  • Potentially complex deployment
  • Can generate many false positives
  • May not provide visibility into cloud assets


Switches are intelligent and operate at layer 2 of the OSI model. They map to this layer because they make switching decisions based on MAC addresses, which reside at layer 2. This process is called transparent bridging (see Figure 5-2).


Figure 5-2 Transparent Bridging

Switches provide better performance than hubs because they eliminate collisions. Each switch port is in its own collision domain, while all ports of a hub are in the same collision domain. From a security standpoint, switches are more secure in that a sniffer connected to any single port will only be able to capture traffic destined for or originating from that port.
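The transparent bridging process described above can be sketched as a learning table: the switch records which port each source MAC address arrived on, forwards frames out the one learned port when it knows the destination, and floods to all other ports when it does not. The MAC addresses and port numbers are invented for the example:

```python
# Minimal sketch of transparent bridging. A switch learns source MAC
# addresses as frames arrive, then forwards only to the learned port;
# unknown destinations are flooded out every other port.

class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port            # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]         # forward out one port
        return [p for p in self.ports if p != in_port]  # flood

sw = Switch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa", "bb:bb"))  # unknown destination: flood to 2, 3, 4
print(sw.receive(2, "bb:bb", "aa:aa"))  # aa:aa was learned: forward to port 1
```

This learned isolation is exactly why a sniffer on one port sees only that port's traffic (plus floods), which is the security benefit noted above.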

Some switches, however, combine routing and switching functions. Such devices are called layer 3 switches because they both route and switch.

When using switches, it is important to be aware that providing redundant connections between switches is desirable but can introduce switching loops, which can be devastating to the network. Most switches run Spanning Tree Protocol (STP) to prevent switching loops. You should ensure that a switch does this and that it is enabled.
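Why redundant links are dangerous can be shown with a small topology check: redundant connections form a cycle in the link graph, and without STP a frame would circulate around that cycle forever. The sketch below is a simple depth-first cycle detector over an invented three-switch topology; it illustrates the condition STP exists to break, not STP itself:

```python
# Illustrative check for switching loops: treat the switches and links as
# an undirected graph and look for a cycle with depth-first search.
# A cycle is what STP would block by disabling one redundant link.

def has_loop(links):
    """links: dict mapping switch -> set of directly connected switches."""
    visited = set()

    def dfs(node, parent):
        visited.add(node)
        for nbr in links[node]:
            if nbr == parent:
                continue  # don't treat the link we arrived on as a loop
            if nbr in visited or dfs(nbr, node):
                return True
        return False

    return any(dfs(n, None) for n in links if n not in visited)

chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}           # no redundancy
ring = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}  # redundant link
print(has_loop(chain), has_loop(ring))
```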


The network device that is perhaps most connected with the idea of security is the firewall. A firewall can be a software program that is installed over a server or client operating system or an appliance that has its own operating system. In either case, the job of a firewall is to inspect and control the type of traffic allowed.

Firewalls can be discussed on the basis of their type and on the basis of their architecture. They can also be physical devices or can exist in a virtualized environment. The following sections look at them from multiple angles.

Types of Firewalls

When we discuss types of firewalls, we focus on the differences in the way they operate. Some firewalls make a more thorough inspection of traffic than others. Usually there is a trade-off between the performance of a firewall and the type of inspection it performs. A deep inspection of the contents of packets results in a firewall having a detrimental effect on throughput, while a more cursory look at each packet has somewhat less of a performance impact. To wisely select which traffic to inspect, you need to keep this trade-off in mind:

  • Packet-filtering firewalls: These firewalls are the least detrimental to throughput as they only inspect the header of the packet for allowed IP addresses or port numbers. While performing this function slows traffic, it involves only looking at the beginning of the packet and making a quick decision to allow or disallow.

    While packet-filtering firewalls serve an important function, there are many attack types they cannot prevent. They cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake. More advanced inspection firewall types are required to stop these attacks.

  • Stateful firewalls: These firewalls are aware of the proper functioning of the TCP handshake, keep track of the state of all connections with respect to this process, and can recognize when packets trying to enter the network don’t make sense in the context of the TCP handshake. For example, a packet should arrive with both the SYN flag and the ACK flag set only as the second step of a handshake, in response to a packet with the SYN flag set that was sent from inside the network; a stateful firewall disallows such a packet when it is not part of an existing handshake.

    A stateful firewall also has the ability to recognize other attack types that attempt to misuse this process. It does this by maintaining a state table about all current connections and where each connection is in the process. This allows it to recognize any traffic that doesn’t make sense with the current state of the connections. Of course, maintaining this table and referencing the table cause this firewall type to have a larger effect on performance than does a packet-filtering firewall.

  • Proxy firewalls: This type of firewall stands between the internal and external sides of an internal-to-external connection and makes the connection on behalf of the endpoints. A firewall that is used in this fashion is called a forward proxy. With a proxy firewall, there is no direct connection; rather, the proxy firewall acts as a relay between the two endpoints. Proxy firewalls can operate at two different layers of the OSI model:

    • Circuit-level proxies: These proxies operate at the session layer (layer 5) of the OSI model. This type of proxy makes decisions based on the protocol header and session layer information. Because it does no deep packet inspection (at layer 7, or the application layer), this type of proxy is considered application independent and can be used for a wide range of layer 7 protocols. A SOCKS firewall is an example of a circuit-level firewall. It requires a SOCKS client on the computers. Many vendors have integrated their software with SOCKS to make it easier to use this type of firewall.

    • Application-level proxies: These proxies perform a type of deep packet inspection (inspection up to layer 7). This type of firewall understands the details of the communication process at layer 7 for the application. An application-level firewall maintains a different proxy function for each protocol. For example, the proxy can read and filter HTTP traffic based on specific HTTP commands. Operating at this layer requires each packet to be completely opened and closed, which means this firewall has the greatest impact on performance.

  • Dynamic packet filtering: Although this isn’t actually a type of firewall, dynamic packet filtering is a process that a firewall may or may not handle, and it is worth discussing here. When an internal computer attempts to establish a session with a remote computer, it places both source and destination port numbers in the packet. For example, if the computer is making a request of a web server, the destination will be port 80 because HTTP uses port 80 by default.

    The source computer randomly selects the source port from the range above the well-known port numbers (that is, above 1023). Because it is impossible to predict what that random number will be, it is impossible to create a static firewall rule that anticipates and allows traffic back through the firewall on that port. A dynamic packet-filtering firewall keeps track of that source port and dynamically adds a rule to the list to allow return traffic to that port.

  • Kernel proxy firewalls: This type of firewall is an example of a fifth-generation firewall. It inspects a packet at every layer of the OSI model but does not introduce the same performance hit as an application-layer firewall because it does this at the kernel layer. It also follows the proxy model in that it stands between two systems and creates connections on their behalf.
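The stateful and dynamic filtering ideas above can be combined in a small sketch: record each outbound connection in a state table, then admit inbound traffic only when it matches an existing entry — the dynamically added "return" rule. The IP addresses and port numbers are invented for the example:

```python
# Sketch of stateful/dynamic packet filtering. Outbound connections are
# recorded in a state table; inbound packets are admitted only when they
# match an entry, i.e., they are return traffic for a known connection.

state_table = set()

def outbound(src_ip, src_port, dst_ip, dst_port):
    """Record the connection so its return traffic can be recognized."""
    state_table.add((dst_ip, dst_port, src_ip, src_port))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port):
    """Admit inbound traffic only if it reverses a tracked connection."""
    return (src_ip, src_port, dst_ip, dst_port) in state_table

# Internal host 10.0.0.5 picks ephemeral port 51515 and contacts a web server.
outbound("10.0.0.5", 51515, "203.0.113.7", 80)

print(inbound_allowed("203.0.113.7", 80, "10.0.0.5", 51515))   # matching return
print(inbound_allowed("198.51.100.9", 80, "10.0.0.5", 51515))  # unsolicited
```

A real stateful firewall also tracks where each connection is in the TCP handshake and expires idle entries; this sketch shows only the table-lookup core of the idea.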

Table 5-7 lists advantages and disadvantages of these firewall types.


Table 5-7 Advantages and Disadvantages of Firewall Types

Packet-filtering firewalls

  Advantages: Provide the best performance.

  Disadvantages: Cannot prevent IP spoofing, attacks that are specific to an application, attacks that depend on packet fragmentation, or attacks that take advantage of the TCP handshake.

Circuit-level proxies

  Advantages: Secure addresses from exposure; support a multiprotocol environment; allow for comprehensive logging; have only a slight impact on performance.

  Disadvantages: May require a SOCKS client on each computer; have no application layer security.

Application-level proxies

  Advantages: Understand the details of the communication process at layer 7 for the application.

  Disadvantages: Have a big impact on performance.

Kernel proxy firewalls

  Advantages: Inspect packets at every layer of the OSI model; don’t impact performance as much as application-level proxies.



Next-generation firewalls (NGFWs) are a category of devices that attempt to address traffic inspection and application awareness shortcomings of a traditional stateful firewall—without hampering performance. Although UTM devices also attempt to address these issues, they tend to use separate internal engines to perform individual security functions. This means a packet may be examined several times by different engines to determine whether it should be allowed into the network.

NGFWs are application aware, which means they can distinguish between specific applications instead of allowing all traffic coming in via typical web ports. Moreover, they examine packets only once, during the deep packet inspection phase (which is required to detect malware and anomalies). Among the features provided by NGFWs are:

  • Non-disruptive in-line configuration (which has little impact on network performance)

  • Standard first-generation firewall capabilities, such as network address translation (NAT), stateful protocol inspection (SPI), and virtual private networking

  • Integrated signature-based IPS engine

  • Application awareness, full stack visibility, and granular control

  • Ability to incorporate information from outside the firewall, such as directory-based policy, blacklists, and whitelists

  • Upgrade path to include future information feeds and security threats and SSL decryption to enable identifying undesirable encrypted applications

Table 5-8 lists advantages and disadvantages of NGFWs.


Table 5-8 Advantages and Disadvantages of NGFWs



Advantages:

  • Provide enhanced security
  • Provide integration between security services

Disadvantages:

  • Require more involved management than standard firewalls
  • Lead to reliance on a single vendor

Firewall Architecture

Whereas the type of firewall speaks to the internal operation of the firewall, the architecture refers to the way in which firewalls are deployed in the network to form a system of protection. The following sections look at the various ways firewalls can be deployed.

Bastion Hosts

A bastion host may or may not be a firewall; FTP servers, DNS servers, web servers, and email servers are other common examples. The term bastion host actually refers to the position of the device: if it is exposed directly to the Internet or to any other untrusted network while screening the rest of the network from exposure, it is a bastion host. Whether the bastion host is a firewall, a DNS server, or a web server, all standard hardening procedures are especially important because this device is exposed.

In any case where a host must be publicly accessible from the Internet, the device must be treated as a bastion host, and you should take the following measures to protect these machines:

  • Disable or remove all unnecessary services, protocols, programs, and network ports.

  • Use separate authentication services from trusted hosts within the network.

  • Remove as many utilities and system configuration tools as is practical.

  • Install all appropriate service packs, hot fixes, and patches.

  • Encrypt any local user account and password databases.

Implementing such procedures is referred to as reducing the attack surface.

Dual-Homed Firewalls

A dual-homed firewall has two network interfaces: one pointing to the internal network and another connected to the untrusted network. In many cases, routing between these interfaces is turned off. The firewall software will allow or deny traffic between the two interfaces based on the firewall rules configured by the administrator.

The danger of relying on a single dual-homed firewall is that it can be a single point of failure. If this device is compromised, the network is compromised, too. If it suffers a denial-of-service attack, no traffic will pass. Neither of these is a good situation.

The advantages of a dual-homed firewall include:

  • The configuration is simple.

  • It’s possible to perform IP masquerading (NAT).

  • It is less costly than using two firewalls.

Disadvantages of a dual-homed firewall include:

  • There is a single point of failure.

  • It is not as secure as other options.

Multihomed Firewalls

A firewall may be multihomed. One popular type is the three-legged firewall. This configuration has three interfaces: one connected to the untrusted network, a second to the internal network, and the third to a part of the network called a demilitarized zone (DMZ), a protected network that contains systems that need a higher level of protection. A DMZ might contain web servers, email servers, or DNS servers. The multihomed firewall controls the traffic that flows between the three networks, being somewhat careful with traffic destined for the DMZ and treating traffic to the internal network with much more suspicion.

The advantages of three-legged firewalls include:

  • They offer cost savings on devices because you need only one firewall rather than two or three.

  • It is possible to perform IP masquerading (NAT) on the internal network while not doing so for the DMZ.

The following are some of the disadvantages of multihomed firewalls:

  • The complexity of the configuration is increased.

  • There is a single point of failure.

Screened Host Firewalls

While the firewalls discussed thus far typically connect directly to an untrusted network (at least one interface does), a screened host firewall is placed between the final router and the internal network. When traffic comes into the router and is forwarded to the firewall, it is inspected before going into the internal network. This configuration is very similar to that of a dual-homed firewall; the difference is that the separation between the perimeter network and the internal network is logical rather than physical. There is only a single interface.

The advantages of screened host firewalls include:

  • They offer more flexibility than dual-homed firewalls because they use rules rather than interfaces to create the separation.

  • There are potential cost savings.

The disadvantages of screened host firewalls include:

  • The configuration is more complex.

  • It is easier to violate policies than with dual-homed firewalls.

Screened Subnets

A screened subnet takes the screened host concept a step further. In this case, two firewalls are used, and traffic must be inspected at both firewalls before it can enter the internal network. This solution is called a screened subnet because there is a subnet between the two firewalls that can act as a DMZ for resources from the outside world.

The advantages of a screened subnet include:

  • It offers the added security of two firewalls before the internal network.

  • One firewall is placed before the DMZ, protecting the devices in the DMZ.

Disadvantages of a screened subnet include:

  • It is costlier than using either a dual-homed or three-legged firewall.

  • Configuring two firewalls adds complexity.

In any situation where multiple firewalls are in use, such as an active/passive cluster of two firewalls, care should be taken to ensure that TCP sessions are not traversing one firewall while return traffic of the same session is traversing the other. When stateful filtering is being performed, the return traffic will be denied, which will break the user connection.


In the real world, the various firewall approaches are mixed and matched to meet requirements, and you may find elements of all these architectural concepts being applied to a specific situation.

Wireless Controller

Wireless controllers are centralized appliances or software packages that monitor, manage, and control multiple wireless access points. Wireless controller architecture is shown in Figure 5-3.

An 802.1X Authentication Server is connected with a wireless LAN controller, which in turn is connected with six wireless access points.

Figure 5-3 Wireless Controller Architecture

WLAN controllers include many security features that are not possible with access points (APs) operating independently of one another. Some of these features include:

  • Interference detection and avoidance: This is achieved by adjusting the channel assignment and RF power in real time.

  • Load balancing: You can use load balancing to connect a single user to multiple APs for better coverage and increased data rate.

  • Coverage gap detection: This type of detection can increase the power to cover holes that appear in real time.

WLAN controllers also support forms of authentication such as 802.1X, Protected Extensible Authentication Protocol (PEAP), Lightweight Extensible Authentication Protocol (LEAP), Extensible Authentication Protocol (EAP)–Transport Layer Security (EAP-TLS), Wi-Fi Protected Access (WPA), 802.11i (WPA2), and Layer 2 Tunneling Protocol (L2TP).

While in the past wireless access points operated as standalone devices, the move to wireless controllers that manage multiple APs provides many benefits over using standalone APs, including the following:

  • Ability to manage the relative strengths of the radio waves to provide backup and to reduce interference between APs

  • More seamless roaming between APs

  • Real-time control of access points

  • Centralized authentication

The following are disadvantages of wireless controllers that manage multiple APs:

  • More costly

  • More complex configuration


If we’re discussing the routing function in isolation, we can say that routers operate at layer 3. Some routing devices can combine routing functionality with switching and layer 4 filtering. But because routing uses layer 3 information (IP addresses) to make decisions, it is a layer 3 function.

A router uses a routing table that tells the router in which direction to send traffic destined for a particular network. Although routers can be configured with routes to individual computers, typically they route toward networks, not toward individual computers. When a packet arrives at a router that is directly connected to the destination network, that particular router performs an ARP broadcast to learn the MAC address of the computer and sends the packet as a frame at layer 2.
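A routing table lookup of the kind described — route toward networks rather than individual computers, with the most specific (longest-prefix) match winning — can be sketched with Python's standard ipaddress module. The networks and next-hop labels are invented:

```python
# Sketch of a routing lookup: find every route whose network contains the
# destination address, then choose the most specific one (longest prefix).
# A default route (0.0.0.0/0) matches everything as a last resort.

import ipaddress

ROUTES = {
    ipaddress.ip_network("0.0.0.0/0"): "ISP uplink",        # default route
    ipaddress.ip_network("10.0.0.0/8"): "core",
    ipaddress.ip_network("10.1.2.0/24"): "branch office",
}

def next_hop(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    return ROUTES[max(matches, key=lambda n: n.prefixlen)]

print(next_hop("10.1.2.99"))  # most specific /24 wins over the /8
print(next_hop("10.9.9.9"))   # falls back to the /8
print(next_hop("8.8.8.8"))    # only the default route matches
```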

Routers perform an important security function in that access control lists (ACLs) are typically configured on them. ACLs are ordered sets of rules that control the traffic that is permitted or denied the use of a path through the router. These rules can operate at layer 3, in which case they make decisions on the basis of IP addresses, or at layer 4, in which case only certain types of traffic are allowed. An ACL typically references a port number of the service or application that is allowed or denied.
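ACL behavior can be illustrated as an ordered, first-match-wins rule list with an implicit "deny all" at the end, which is how most real routers evaluate them. The addresses, ports, and rules below are invented for the example:

```python
# Illustrative ACL evaluation: rules are processed in order, the first
# matching rule decides, and anything unmatched hits the implicit deny.
# Rule order matters: the quarantine deny must precede the broader permit.

import ipaddress

ACL = [
    ("deny",   "10.0.66.0/24", None),  # quarantined subnet: block everything
    ("permit", "10.0.0.0/8",   443),   # other internal hosts may reach HTTPS
    ("permit", "0.0.0.0/0",    80),    # anyone may reach HTTP
]

def evaluate(src_ip: str, dst_port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for action, network, port in ACL:
        if addr in ipaddress.ip_network(network) and port in (None, dst_port):
            return action
    return "deny"  # implicit deny at the end of every ACL

print(evaluate("10.0.0.9", 443))    # permit: second rule
print(evaluate("10.0.66.5", 443))   # deny: quarantine rule matches first
print(evaluate("192.0.2.1", 8080))  # deny: falls through to the implicit deny
```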

To secure a router, you need to ensure that the following settings are in place:

  • Configure authentication between the routers to prevent them from performing routing updates with rogue routers.

  • Secure the management interfaces with strong passwords.

  • Manage routers with SSH rather than Telnet.


Proxy servers can be appliances, or they can be software installed on a server operating system. Like proxy firewalls, they create web connections on behalf of the systems they serve; however, they can typically allow and disallow traffic on a more granular basis. For example, a proxy server may allow the Sales group to go to certain websites while not allowing the Data Entry group access to those same sites. This functionality extends beyond HTTP to other traffic types, such as FTP traffic.

Proxy servers can provide an additional beneficial function called web caching. When a proxy server is configured to provide web caching, it saves in a web cache a copy of every web page that has been delivered to an internal computer. If any user requests the same page later, the proxy server has a local copy and need not spend the time and effort to retrieve it from the Internet. This greatly improves web performance for frequently requested pages.

Load Balancer

Load balancers are hardware or software products that distribute incoming requests across a group of servers. Application delivery controllers (ADCs) support the same basic scheduling algorithms but also use complex number-crunching processes, such as per-server CPU and memory utilization, fastest response times, and so on, to adjust the balance of the load. The groups of servers behind a load-balancing solution are also referred to as server farms or pools.
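As a sketch of the difference, simple round-robin rotates through the pool, while an ADC-style algorithm weighs per-server state such as active connection counts. The pool members and counts below are made up:

```python
import itertools

servers = ["web1", "web2", "web3"]  # hypothetical pool members

# Round-robin: each new request goes to the next server in turn.
rr = itertools.cycle(servers)

# ADC-style least-connections: hypothetical live connection counts per server.
active = {"web1": 12, "web2": 3, "web3": 7}

def least_connections() -> str:
    """Send the next request to the server with the fewest active connections."""
    return min(active, key=active.get)
```

Round-robin ignores server load entirely, whereas least-connections steers traffic to web2 here because it is the least busy; real ADCs combine several such metrics.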


A hardware security module (HSM) is an appliance that safeguards and manages digital keys used with strong authentication and provides crypto processing. It attaches directly to a computer or server. Among the functions of an HSM are:

  • Onboard secure cryptographic key generation

  • Onboard secure cryptographic key storage and management

  • Use of cryptographic and sensitive data material

  • Offloading of application servers for complete asymmetric and symmetric cryptography

HSM devices can be used in a variety of scenarios, including:

  • In a PKI environment to generate, store, and manage key pairs

  • In card payment systems to encrypt PINs and to load keys into protected memory

  • To perform the processing for applications that use SSL

  • In Domain Name System Security Extensions (DNSSEC; a secure form of DNS that protects the integrity of zone files) to store the keys used to sign the zone file

There are some drawbacks to an HSM, including the following:

  • High cost

  • Lack of a standard for the strength of the random number generator

  • Difficulty in upgrading

When an HSM product is selected, you must ensure that it provides the services needed, based on its application. Remember that each HSM has different features and different encryption technologies, and some HSM devices might not support a strong enough encryption level to meet an enterprise’s needs. Moreover, you should keep in mind the portable nature of these devices and protect the physical security of the area where they are connected.


A microSD HSM is an HSM in microSD card form that fits into a device's microSD card slot. The card is specifically suited for mobile apps written for Android and is supported by most Android phones and tablets with a microSD card slot.

Moreover, these cards can be made to support various cryptographic algorithms, such as AES, RSA, SHA-1, SHA-256, and Triple DES, as well as the Diffie-Hellman key exchange. This advantage over regular microSD cards allows them to provide the same protections as a full-sized HSM.

Application and Protocol-Aware Technologies

Application- and protocol-aware technologies maintain current information about applications and the protocols used to connect to them. These intelligent technologies use this information to optimize the functioning of the protocol and thus the application. The following sections look at some of these technologies.


A web application firewall (WAF) applies rule sets to an HTTP conversation. These rule sets cover common attack types to which these session types are susceptible. Among the common attacks they address are cross-site scripting and SQL injections. A WAF can be implemented as an appliance or as a server plug-in. While all traffic is usually funneled in-line through the device, some solutions monitor a port and operate out-of-band. Table 5-9 lists the pros and cons of these two approaches. Finally, WAFs can be installed directly on web servers.

Table 5-9 Advantages and Disadvantages of WAF Placement Options





In-line

  • Advantages: Can prevent live attacks

  • Disadvantages: May slow web traffic; could block legitimate traffic

Out-of-band

  • Advantages: Doesn't interfere with traffic

  • Disadvantages: Can't block live traffic

The security issues involved with WAFs include the following:

  • The IT infrastructure becomes more complex.

  • Training on the WAF must be provided with each new release of the web application.

  • Testing procedures may change with each release.

  • False positives may occur and can have significant business impacts.

  • Troubleshooting is more complex.

  • The WAF terminating the application session can potentially have an effect on the web application.



Firewalls are covered earlier in this chapter, in the section “Physical and Virtual Network and Security Devices.”

Passive Vulnerability Scanners

Vulnerability scanners are tools or utilities used to probe and reveal weaknesses in a network’s security. A passive vulnerability scanner (PVS) monitors network traffic at the packet layer to determine topology, services, and vulnerabilities. It avoids the instability that can be introduced to a system by actively scanning for vulnerabilities.

PVS tools analyze the packet stream, looking for vulnerabilities through direct traffic analysis. They are deployed much like network IDSs or packet analyzers. A PVS can select a network session that targets a protected server and monitor it for as long as needed. The biggest benefit of a PVS is its ability to do this without impacting the monitored network.

Active Vulnerability Scanners

Whereas passive scanners can only gather information, active scanners can take action to block attacks, such as blocking dangerous IP addresses. They can also be used to simulate an attack to assess readiness. They operate by sending transmissions to nodes and examining the responses—which means they may disrupt network traffic.


Regardless of whether it’s active or passive, a vulnerability scanner cannot replace the expertise of trained security personnel. Moreover, these scanners are only as effective as the signature databases on which they depend, so the databases must be updated regularly. Finally, scanners require bandwidth and potentially slow the network.


Database activity monitors (DAMs) monitor transactions and the activity of database services. They can be used for monitoring unauthorized access and fraudulent activities as well as for compliance auditing. Several implementations exist, and they operate and gather information at different levels. A DAM typically performs continuously and in real time. In many cases, these systems operate independently of the database management system and do not rely on the logs created by these systems. Among the architectures used are:

  • Interception-based model: Watches the communications between the client and the server.

  • Memory-based model: Uses a sensor attached to the database and continually polls the system to collect SQL statements as they are being performed.

  • Log-based model: Analyzes and extracts information from the transaction logs.
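A log-based DAM, for instance, reduces to matching policy patterns against recorded SQL statements. The log entries and alert patterns here are purely illustrative:

```python
import re

# Hypothetical transaction-log entries, as a log-based DAM would extract them.
log = [
    "SELECT name FROM customers WHERE id = 42",
    "SELECT * FROM credit_cards",
    "DROP TABLE audit_trail",
]

# Hypothetical policy: flag destructive statements and access to sensitive tables.
ALERT_PATTERNS = [r"\bDROP\b", r"credit_cards"]

def audit(entries):
    """Return the statements that match any configured policy pattern."""
    return [e for e in entries
            if any(re.search(p, e, re.IGNORECASE) for p in ALERT_PATTERNS)]
```

Note the limitation mentioned below: as the list of configured patterns grows, every statement must be checked against every pattern, which is one reason performance declines as policies accumulate.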

While DAMs are useful tools, they have some limitations:

  • With some solutions that capture traffic on its way to the database, inspection of the SQL statements is not as thorough as with solutions that install an agent on the database; issues may be missed.

  • Many solutions do a poor job of tracking responses to SQL queries.

  • As the number of policies configured increases, performance declines.

Advanced Network Design (Wired/Wireless)

Changes in network design and approaches to securing the network infrastructure come fast and furious, and it is easy to fall behind and cling to outdated approaches. New technologies and new design principles emerge constantly. The following sections cover some recent advances and their costs and benefits.

Remote Access

The day when all workers gathered together in the same controlled environment to do their jobs is fast fading into the past. Workers are increasingly working from other locations, such as their home or distant small offices. A secure remote access solution is critical as remote access becomes a more common method of connecting to corporate resources. The following sections discuss options for securing these connections.


A virtual private network (VPN) connection uses an untrusted carrier network but provides protection of the information through strong authentication protocols and encryption mechanisms. While we typically use the most untrusted network—the Internet—as the classic example, and most VPNs do travel through the Internet, a VPN can be used with interior networks as well whenever traffic needs to be protected from prying eyes.

In VPN operations, entire protocols are wrapped inside other protocols. The protocols involved include:

  • A LAN protocol (required)

  • A remote access or line protocol (required)

  • An authentication protocol (optional)

  • An encryption protocol (optional)

A device that terminates multiple VPN connections is called a VPN concentrator. VPN concentrators incorporate the most advanced encryption and authentication techniques available.

In some instances, VLANs in a VPN solution may not be supported by the ISP if the ISP is also using VLANs in their internal network. Choosing a provider that provisions Multiprotocol Label Switching (MPLS) connections can allow customers to establish VLANs to other sites. MPLS provides VPN services with address and routing separation between VPNs.

VPN connections come in two flavors:

  • Remote access VPNs: A remote access VPN can be used to provide remote access to teleworkers or traveling users. The tunnel that is created has as its endpoints the user’s computer and the VPN concentrator. In this case, only traffic traveling from the user computer to the VPN concentrator uses this tunnel.

  • Site-to-site VPNs: VPN connections can be used to securely connect two locations. In this type of VPN, called a site-to-site VPN, the tunnel endpoints are the two VPN routers, one in each office. With this configuration, all traffic that goes between the offices will use the tunnel, regardless of the source or destination. The endpoints are defined during the creation of the VPN connection and thus must be set correctly, according to the type of remote access link being used.


Several remote access or line protocols (tunneling protocols) are used to create VPN connections, including:

  • Point-to-Point Tunneling Protocol (PPTP): PPTP is a Microsoft protocol based on PPP. It uses built-in Microsoft Point-to-Point Encryption (MPPE) and can use a number of authentication methods, including CHAP, MS-CHAP, and EAP-TLS. One shortcoming of PPTP is that it works only on IP-based networks. If a WAN connection that is not IP based is in use, L2TP must be used.

  • Layer 2 Tunneling Protocol (L2TP): L2TP is a newer protocol that operates at layer 2 of the OSI model. Like PPTP, L2TP can use various authentication mechanisms; however, L2TP does not provide any encryption. It is typically used with Internet Protocol Security (IPsec), which is a very strong encryption mechanism.

When using PPTP, the encryption is included, and the only remaining choice to be made is the authentication protocol.

When using L2TP, both encryption and authentication protocols, if desired, must be added. IPsec can provide encryption, data integrity, and system-based authentication, which makes it a flexible and capable option. By implementing certain parts of the IPsec suite, you can choose to use these features or not.

IPsec is actually a suite of protocols, much like TCP/IP. It includes the following components:

  • Authentication Header (AH): AH provides data integrity, data origin authentication, and protection from replay attacks.

  • Encapsulating Security Payload (ESP): ESP provides all that AH does, as well as data confidentiality.

  • Internet Security Association and Key Management Protocol (ISAKMP): ISAKMP handles the creation of a security association for the session and the exchange of keys.

  • Internet Key Exchange (IKE): Also sometimes referred to as IPsec Key Exchange, IKE provides the authentication material used to create the keys exchanged by ISAKMP during peer authentication. This was proposed to be performed by a protocol called Oakley that relied on the Diffie-Hellman algorithm, but Oakley has been superseded by IKE.

IPsec is a framework, which means it does not specify many of the components used with it. These components must be identified in the configuration, and they must match in order for the two ends to successfully create the required security association that must be in place before any data is transferred. The selections that must be made are:

  • The encryption algorithm (encrypts the data)

  • The hashing algorithm (ensures that the data has not been altered and verifies its origin)

  • The mode (tunnel or transport)

  • The protocol (AH, ESP, or both)

All these settings must match on both ends of the connection. It is not possible for the systems to select them on the fly. They must be preconfigured correctly in order to match.
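The all-or-nothing nature of this matching can be illustrated with a simple comparison of two peers' preconfigured transform sets. The parameter values are examples, not a real IPsec implementation:

```python
# Each peer's preconfigured transform set; every value must match before an SA forms.
peer_a = {"encryption": "AES-256", "hash": "SHA-256", "mode": "tunnel", "protocol": "ESP"}
peer_b = {"encryption": "AES-256", "hash": "SHA-256", "mode": "tunnel", "protocol": "ESP"}

def negotiate(a: dict, b: dict) -> bool:
    # IPsec peers cannot pick parameters on the fly: every setting must match
    # on both ends, or the security association is never established.
    return a == b
```

Change even one value on one side (say, the hash algorithm) and the sketch reports failure, which mirrors the real behavior: a single mismatched parameter prevents the SA from forming.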

When configured in tunnel mode, the tunnel exists only between the two gateways, but all traffic that passes through the tunnel is protected. This is normally done to protect all traffic between two offices. The security association (SA) is between the gateways between the offices. This is the type of connection that would be called a site-to-site VPN.

The SA between the two endpoints is made up of the security parameter index (SPI) and the AH/ESP combination. The SPI, a value contained in each IPsec header, helps the devices maintain the relationship between each SA (and there could be several happening at once) and the security parameters (also called the transform set) used for each SA.

Each session has a unique session value, which helps prevent:

  • Reverse engineering

  • Content modification

  • Factoring attacks (in which the attacker tries all the combinations of numbers that can be used with the algorithm to decrypt ciphertext)

With respect to authenticating the connection, the keys can be preshared or derived from a public key infrastructure (PKI). A PKI creates public/private key pairs that are associated with individual users and computers that use a certificate. These key pairs are used in the place of preshared keys in that case. Certificates that are not derived from a PKI can also be used.

In transport mode, the SA is either between two end stations or between an end station and a gateway or remote access server. In this mode, the tunnel extends from computer to computer or from computer to gateway. This is the type of connection that would be used for a remote access VPN. This is but one application of IPsec.

When the communication is from gateway to gateway or host to gateway, either transport or tunnel mode may be used. If the communication is computer to computer, transport mode is required. When using transport mode from gateway to host, the gateway must operate as a host.

The most effective attack against an IPsec VPN is a man-in-the-middle attack. In this type of attack, the attacker proceeds through the security negotiation phase until the key negotiation, when the victim reveals its identity. In a well-implemented system, the attacker fails when the attacker cannot likewise prove his identity.


Secure Sockets Layer (SSL) is another option for creating secure connections to servers. It works at the application layer of the OSI model and is used mainly to protect HTTP traffic and web servers. Its functionality is embedded in most browsers, and its use typically requires no action on the part of the user. It is widely used to secure Internet transactions and can be implemented in two ways:

  • SSL portal VPN: In this case, a user has a single SSL connection for accessing multiple services on the web server. Once authenticated, the user is provided a page that acts as a portal to other services.

  • SSL tunnel VPN: A user may use an SSL tunnel to access services on a server that is not a web server. This solution uses custom programming to provide access to non-web services through a web browser.

TLS and SSL are very similar but not the same. When configuring SSL, a session key length must be designated; the two options are 40-bit and 128-bit keys. Certificates issued by a trusted certification authority (CA) are used to authenticate the server's public key and help prevent man-in-the-middle attacks; self-signed certificates cannot provide this assurance.

SSL is often used to protect other protocols. FTPS, for example, uses SSL/TLS to secure file transfers between hosts. Table 5-10 lists some of the advantages and disadvantages of SSL.


Table 5-10 Advantages and Disadvantages of SSL



Advantages

  • Data is encrypted.

  • SSL is supported on all browsers.

  • Users can easily identify its use (via https://).

Disadvantages

  • Encryption and decryption require heavy resource usage.

  • Critical troubleshooting components (URL path, SQL queries, passed parameters) are encrypted.


When placing the SSL gateway, you must consider a trade-off: The closer the gateway is to the edge of the network, the less encryption that needs to be performed in the LAN (and the less performance degradation), but the closer to the network edge it is placed, the farther the traffic travels through the LAN in the clear. The decision comes down to how much you trust your internal network.

TLS 1.2

The latest version of TLS, version 1.2, provides access to advanced cipher suites that support elliptic curve cryptography and AEAD block cipher modes. TLS has been improved to support:

  • Hash negotiation: TLS can negotiate any hash algorithm to be used as a built-in feature, and the default cipher pair MD5/SHA-1 has been replaced with SHA-256.

  • Certificate hash or signature control: TLS can configure the certificate requester to accept only specified hash or signature algorithm pairs in the certification path.

  • Suite B–compliant cipher suites: Two cipher suites have been added so that the use of TLS can be Suite B compliant:




In many cases, administrators or network technicians need to manage and configure network devices remotely. Protocols such as Telnet allow technicians to connect to devices such as routers, switches, and wireless access points so they can manage them from the command line. Telnet, however, transmits in cleartext, which is a security issue.

Secure Shell (SSH) was created to provide an encrypted method of performing these procedures. It connects, via a secure channel over an insecure network, a server and a client running SSH server and SSH client programs, respectively. It is a widely used replacement for Telnet and should be considered when performing remote management from the command line.

Several steps can be taken to enhance the security of an SSH implementation:

  • Change the port number in use from the default 22 to something above 1024.

  • Use only version 2, which corrects many vulnerabilities that exist in earlier versions.

  • Disable root login to devices that have a root account (in Linux or UNIX).

  • Control access to any SSH-enabled devices by using ACLs, IP tables, or TCP wrappers.
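These hardening steps lend themselves to automated auditing. The following sketch checks a dictionary of hypothetical sshd settings against the recommendations above (the setting names mirror common sshd_config directives, but the audit logic is illustrative):

```python
# Hypothetical desired sshd settings reflecting the hardening steps above.
HARDENED = {"Port": 2222, "Protocol": 2, "PermitRootLogin": "no"}

def audit_sshd(config: dict) -> list[str]:
    """Return a list of findings for settings that violate the hardening steps."""
    findings = []
    if config.get("Port", 22) <= 1024:
        findings.append("move SSH off a privileged/default port")
    if config.get("Protocol", 2) != 2:
        findings.append("use SSH protocol version 2 only")
    if config.get("PermitRootLogin", "yes") != "no":
        findings.append("disable root login")
    return findings
```

Running the audit against a default-style configuration flags all three issues, while the hardened settings pass cleanly.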


Remote Desktop Protocol (RDP) is a proprietary protocol developed by Microsoft that provides a graphical interface to connect to another computer over a network connection. Unlike Telnet and SSH, which allow only working from the command line, RDP enables you to work on a remote computer as if you were actually sitting at its console.

RDP sessions use native RDP encryption but do not authenticate the session host server. To mitigate this, you can use SSL for server authentication and to encrypt RDP session host server communications. This requires a certificate. You can use an existing certificate or the default self-signed certificate.

While RDP can be used for remote connections to a machine, it can also be used to connect users to a virtual desktop infrastructure (VDI). This allows the user to connect from anywhere and work from a virtual desktop. Each user may have his or her own virtual machine (VM) image, or many users may use images based on the same VM.

The advantages and disadvantages of RDP are described in Table 5-11.


Table 5-11 Advantages and Disadvantages of RDP



Advantages

  • Data is kept in the data center, so disaster recovery is easier.

  • Users can work from anywhere when using RDP in a VDI.

  • There is a potential reduction in the cost of business software when using an RDP model where all users are using the same base VM.

Disadvantages

  • Server downtime can cause issues for many users.

  • Network issues can cause problems for many users.

  • Insufficient processing power in the host system can cause bottlenecks.

  • Implementing and supporting RDP requires solid knowledge.


Virtual Network Computing (VNC) operates much like RDP but uses the Remote Frame Buffer (RFB) protocol. Unlike RDP, VNC is platform independent. For example, it could be used to transmit between a Linux server and a Mac OS laptop. The VNC system contains the following components:

  • The VNC server is the program on the machine that shares its screen.

  • The VNC client (or viewer) is the program that watches, controls, and interacts with the server.

  • The VNC protocol (RFB) is used to communicate between the VNC server and client.


Virtual desktop infrastructures (VDIs) host desktop operating systems within a virtual environment in a centralized server. Users access the desktops and run them from the server. There are three models for implementing VDI:

  • Centralized model: All desktop instances are stored in a single server, which requires significant processing power on the server.

  • Hosted model: Desktops are maintained by a service provider. This model eliminates capital cost and is instead subject to operation cost.

  • Remote virtual desktops model: An image is copied to the local machine, which means a constant network connection is unnecessary.

Figure 5-4 compares the remote virtual desktop models (also called streaming) with centralized VDI.

Virtual Desktop Streaming at left is compared with Virtual Desktop at right.

Figure 5-4 VDI Streaming and Centralized VDI

Reverse Proxy

A reverse proxy is a type of proxy server that retrieves resources on behalf of external clients from one or more internal servers. These resources are then returned to the client as if they originated from the web server itself. Unlike a forward proxy, which is an intermediary for internal clients to contact external servers, a reverse proxy is an intermediary for internal servers to be contacted by external clients. Quite often, popular web servers use reverse-proxying functionality to shield application frameworks that have weaker HTTP handling capabilities.

Forward proxy servers are covered earlier in this chapter, in the section “Physical and Virtual Network and Security Devices.”

IPv4 and IPv6 Transitional Technologies

IPv6 is an IP addressing scheme designed to provide a virtually unlimited number of IP addresses. It uses 128 bits rather than 32, as in IPv4, and it is represented in hexadecimal rather than dotted-decimal format. Moreover, any implementation of IPv6 requires support built in for IPsec, which is optional in IPv4. IPsec is used to protect the integrity and confidentiality of the data contained in a packet.

An IPv6 address looks different from an IPv4 address. When viewed in nonbinary format (it can be represented in binary and is processed by the computer in binary), it is organized into eight sections, or fields, instead of four, as in IPv4. The sections are separated by colons rather than periods, as in IPv4. Finally, each of the eight sections has four characters rather than one to three, as in the dotted-decimal format of IPv4. An IPv4 address and an IPv6 address are presented here for comparison:

IPv4: 192.0.2.1

IPv6: 2001:0db8:85a3:0000:0000:8a2e:0370:7334

The IPv6 address has two logical parts: a 64-bit network prefix and a 64-bit host address. The host address is automatically generated from the MAC address of the device. The host address in the example above consists of the rightmost four sections, or 0000:8a2e:0370:7334. The leftmost four sections are the network portion. This portion can be further subdivided. The first section to the left of the host portion can be used to identify a site within an organization. The other three far-left sections are assigned by the ISP or in some cases are generated automatically, based on the address type.

There are some allowed methods/rules of shortening the representation of an IPv6 address:

  • Leading zeros in each section can be omitted, but each section must be represented by at least one character, unless you are making use of the next rule. By applying this rule, the previous IPv6 address example could be written as follows:

    2001:db8:85a3:0:0:8a2e:370:7334

  • One or more consecutive sections containing only zeros can be represented with a single empty section (a double colon), as shown here applied to the same address:

    2001:db8:85a3::8a2e:370:7334
  • The second rule can be applied only once within an address. For example, the following IPv6 address contains two sets of consecutive all-zero sections:

    2001:0000:0000:0db8:85a3:0000:0000:7334

    The second rule could be applied to either set, but only to one of them, as in 2001::db8:85a3:0:0:7334 or 2001:0:0:db8:85a3::7334. It could not be represented as follows:

    2001::db8:85a3::7334
To alleviate some of the stress of changing over to IPv6, a number of transition mechanisms have been developed, including the following:

  • 6to4: This type of tunneling allows IPv6 sites to communicate with each other over an IPv4 network. IPv6 sites communicate with native IPv6 domains via relay routers. 6to4 effectively treats a wide area IPv4 network as a unicast point-to-point link layer.

  • Teredo: Teredo assigns addresses and creates host-to-host tunnels for unicast IPv6 traffic when IPv6 hosts are located behind IPv4 network address translators.

  • Dual stack: This solution involves running both IPv4 and IPv6 on networking devices.

  • GRE tunnels: An IPv4 network can carry IPv6 packets if they are encapsulated in Generic Routing Encapsulation (GRE) IPv4 packets.

There are many more techniques, but these are some of the most common.

While switching to IPv6 involves a learning curve for those versed in IPv4, there are a number of advantages to using IPv6:

  • Security: IPsec is built into the standard; it’s not an add-on.

  • Larger address space: There are enough IPv6 addresses for every man, woman, and child on the face of the earth to each have the total number of IP addresses that were available in IPv4.

  • Stateless autoconfiguration: It is possible for IPv6 devices to create their own IPv6 address, either link-local or global unicast.

  • Better performance: Performance is better due to the simpler header.

IPv6 does not remove all security issues, though. The following concerns still exist:

  • Lack of training on IPv6: Many devices are already running IPv6, and failure to secure it creates a backdoor.

  • New threats: Current security products may lack the ability to recognize IPv6 threats.

  • Bugs in code of new IPv6 products: Products supporting IPv6 are often rushed to market, and in many cases, not all of the bugs are worked out.

Network Authentication Methods

One of the protocol choices that must be made in creating a remote access solution is the authentication protocol. The following are some of the most important of those protocols:

  • Password Authentication Protocol (PAP): PAP provides authentication, but the credentials are sent in cleartext and can be read with a sniffer.

  • Challenge Handshake Authentication Protocol (CHAP): CHAP solves the cleartext problem by operating without sending the credentials across the link. The server sends the client a random value called a challenge. The client runs the challenge and the password through a hash function and sends the result back. The server then performs the same computation with its stored copy of the password and compares the two results. If they match, the server can be assured that the user or system possesses the correct password without ever needing to send it across the untrusted network. Microsoft has created its own variants of CHAP:

    • MS-CHAP v1: This is the first version of a variant of CHAP by Microsoft. This protocol works only with Microsoft devices, and while it stores the password more securely than CHAP, like any other password-based system, it is susceptible to brute-force and dictionary attacks.

    • MS-CHAP v2: This update to MS-CHAP provides stronger encryption keys and mutual authentication, and it uses different keys for sending and receiving.

  • Extensible Authentication Protocol (EAP): EAP is not a single protocol but a framework for port-based access control that uses the same three components that are used in RADIUS. A wide variety of EAP implementations can use all sorts of authentication mechanisms, including certificates, a PKI, and even simple passwords:

    • EAP-MD5-CHAP: This variant of EAP uses the CHAP challenge process, but the challenges and responses are sent as EAP messages. It allows the use of passwords.

    • EAP-TLS: This form of EAP requires a PKI because it requires certificates on both the server and clients. It is, however, immune to password-based attacks as it does not use passwords.

    • EAP-TTLS: This form of EAP requires a certificate on the server only. The client uses a password, but the password is sent within a protected EAP message. It is, however, susceptible to password-based attacks.
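The CHAP-style challenge-response exchange described above can be sketched as follows. This is a simplified illustration (real CHAP hashes an identifier byte together with the secret and the challenge using MD5), not a production implementation:

```python
import hashlib
import os

SHARED_SECRET = b"correct horse battery staple"  # password known to both ends

def make_challenge() -> bytes:
    """Server side: generate a random challenge for this authentication attempt."""
    return os.urandom(16)

def respond(challenge: bytes, secret: bytes) -> bytes:
    # Client side: hash the challenge with the secret. The secret itself
    # never crosses the link; only this digest does.
    return hashlib.md5(challenge + secret).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    # Server side: repeat the computation with the stored secret and compare.
    return respond(challenge, secret) == response
```

Because each challenge is random, a captured response cannot simply be replayed against a later challenge; an attacker without the secret produces a digest that fails verification.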

Table 5-12 compares the authentication protocols described here.


Table 5-12 Authentication Protocols







PAP

  • Disadvantages: Password sent in cleartext

  • Recommendation: Do not use

CHAP

  • Advantages: No passwords are exchanged; widely supported standard

  • Disadvantages: Susceptible to dictionary and brute-force attacks

  • Recommendation: Ensure complex passwords

MS-CHAP v1

  • Advantages: No passwords are exchanged; stronger password storage than CHAP

  • Disadvantages: Susceptible to dictionary and brute-force attacks; supported only on Microsoft devices

  • Recommendation: Ensure complex passwords; if possible, use MS-CHAP v2 instead

MS-CHAP v2

  • Advantages: No passwords are exchanged; stronger password storage than CHAP; mutual authentication

  • Disadvantages: Susceptible to dictionary and brute-force attacks; supported only on Microsoft devices; not supported on some legacy Microsoft clients

  • Recommendation: Ensure complex passwords

EAP-MD5-CHAP

  • Advantages: Supports password-based authentication; widely supported standard

  • Disadvantages: Susceptible to dictionary and brute-force attacks

  • Recommendation: Ensure complex passwords

EAP-TLS

  • Advantages: The most secure form of EAP; uses certificates on the server and client; widely supported standard

  • Disadvantages: Requires a PKI; more complex to configure

  • Recommendation: No known issues

EAP-TTLS

  • Advantages: As secure as EAP-TLS; only requires a certificate on the server; allows passwords on the client

  • Disadvantages: Susceptible to dictionary and brute-force attacks; more complex to configure

  • Recommendation: Ensure complex passwords


802.1x is a standard that defines a framework for centralized port-based authentication. It can be applied to both wireless and wired networks and uses three components:

  • Supplicant: The user or device requesting access to the network

  • Authenticator: The device through which the supplicant is attempting to access the network

  • Authentication server: The centralized device that performs authentication

The role of the authenticator can be performed by a wide variety of network access devices, including remote access servers (both dial-up and VPN), switches, and wireless access points. The role of the authentication server can be performed by a Remote Authentication Dial-in User Service (RADIUS) or Terminal Access Controller Access-Control System Plus (TACACS+) server. The authenticator requests credentials from the supplicant and, upon receiving those credentials, relays them to the authentication server, where they are validated. Upon successful verification, the authenticator is notified to open the port for the supplicant to allow network access. This process is illustrated in Figure 5-5.

The 802.1x process is depicted.

Figure 5-5 The 802.1x Process
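The relay sequence just described can be sketched as a simplified simulation. The class names, port names, and credentials below are hypothetical, and real deployments use EAP over RADIUS rather than direct password checks; this only illustrates the three roles and the open-port-on-success behavior:

```python
# Simplified 802.1x relay: supplicant -> authenticator -> authentication server.
# Illustrative sketch only; not a real EAP/RADIUS implementation.

class AuthenticationServer:
    """Stands in for a RADIUS/TACACS+ server with a credential store."""
    def __init__(self, credentials):
        self.credentials = credentials  # e.g. {"alice": "s3cret"}

    def validate(self, username, password):
        return self.credentials.get(username) == password

class Authenticator:
    """Edge device (switch/AP): relays credentials, opens the port on success."""
    def __init__(self, auth_server):
        self.auth_server = auth_server
        self.open_ports = set()

    def request_access(self, port, username, password):
        if self.auth_server.validate(username, password):
            self.open_ports.add(port)   # port authorized for the supplicant
            return True
        return False                    # port stays closed

server = AuthenticationServer({"alice": "s3cret"})
switch = Authenticator(server)
print(switch.request_access("Gi0/1", "alice", "s3cret"))  # True -> port opened
print(switch.request_access("Gi0/2", "bob", "wrong"))     # False -> port closed
```

Note that the authenticator never validates credentials itself; it only relays them and acts on the server's verdict, which is the defining trait of the 802.1x design.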

While RADIUS and TACACS+ perform the same roles, they have different characteristics. These differences must be considered in choosing a method. Keep in mind also that while RADIUS is a standard, TACACS+ is Cisco proprietary. Table 5-13 compares them.


Table 5-13 RADIUS and TACACS+




Transport protocol: RADIUS uses UDP, which may result in faster response; TACACS+ uses TCP, which offers more information for troubleshooting.

Encryption: RADIUS encrypts only the password in the access-request packet; TACACS+ encrypts the entire body of the packet but leaves a standard TACACS+ header for troubleshooting.

Authentication and authorization: RADIUS combines authentication and authorization; TACACS+ separates the authentication, authorization, and accounting processes.

Supported layer 3 protocols: RADIUS does not support the AppleTalk Remote Access (ARA) protocol, the NetBIOS Frame Protocol Control protocol, or X.25 PAD connections; TACACS+ supports all protocols.

Command authorization: RADIUS does not support securing the available commands on routers and switches; TACACS+ does.

Traffic: RADIUS creates less traffic; TACACS+ creates more.

Many security professionals consider enabling 802.1x authentication on all devices to be the best protection you can provide for a network.

Mesh Networks

A mesh network is a network in which all nodes cooperate to relay data and are connected to one another. To ensure complete availability, continuous connections are provided through the use of self-healing algorithms that route around broken or blocked paths.

One area where this concept has been utilized is in wireless mesh networking. When one node can no longer operate, the rest of the nodes can still communicate with each other, directly or through one or more intermediate nodes. This is accomplished with one of several protocols, including:

  • Ad Hoc Configuration Protocol (AHCP)

  • Proactive Autoconfiguration (PAA)

  • Dynamic WMN Configuration Protocol (DWCP)

In Figure 5-6, multiple connections between the wireless nodes allow one of these protocols to self-heal the network by routing around broken links in real time.

A mesh network is shown.

Figure 5-6 Mesh Networking
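The self-healing idea can be sketched with a simple graph search. This is an illustrative model (the node names are made up), not an implementation of any of the listed protocols: when a link fails, a breadth-first search over the remaining links finds an alternate path.

```python
from collections import deque

def find_path(links, start, end):
    """Return one path from start to end over undirected links, or None."""
    adjacency = {}
    for a, b in links:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path
        for neighbor in adjacency.get(path[-1], ()):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# A small mesh with two independent paths between A and C.
links = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]
print(find_path(links, "A", "C"))          # e.g. ['A', 'B', 'C']
# Link B-C fails; the mesh "heals" by routing around the broken link.
healed = [l for l in links if l != ("B", "C")]
print(find_path(healed, "A", "C"))         # ['A', 'D', 'C']
```

The redundancy of the mesh is what makes the reroute possible: with only a single path between nodes, a failed link would partition the network instead.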

Application of Solutions

This chapter has already covered a number of network design approaches and solutions. Although knowledge of these solutions is certainly valuable, determining the proper application of these solutions to a given scenario truly tests your understanding. Let’s look at an example. Consider a scenario with the following network:

  • 37 workstations

  • 3 printers

  • 48-port switches

  • The latest patches and up-to-date antivirus software

  • An enterprise-class router

  • A firewall at the boundary to the ISP

  • Two-factor authentication

  • Encrypted sensitive data on each workstation

This scenario seems secure, but can you tell what’s missing? That’s right: There’s no transport security. Data traveling around the network is unencrypted!

Now let’s consider another scenario. This time, two companies are merging, and their respective authentication systems are:

Company A: Captive portal using LDAP

Company B: 802.1x with a RADIUS server

What would be the best way to integrate these networks: Use the captive portal or switch Company A to 802.1x? If you said switch Company A to 802.1x, you are correct. It is superior to using a captive portal; whereas a captive portal uses passwords that can be spoofed, 802.1x uses certificates for devices.

Now let’s consider one more scenario. Suppose you are a consultant and have been asked to suggest an improvement in the following solution:

  • End-to-end encryption via SSL in the DMZ

  • IPsec in transport mode with Authentication Header (AH) enabled and Encapsulating Security Payload (ESP) disabled throughout the internal network

You need to minimize the performance degradation of the improvement.

What would you do? Would you want to enable ESP in the network? No. That would cause all traffic to be encrypted, which would increase security but degrade performance. A better suggestion would be to change from SSL in the DMZ to TLS. TLS versions 1.1 and 1.2 are significantly more secure and fix many vulnerabilities present in SSL v3.0 and TLS v1.0.

Placement of Hardware, Applications, and Fixed/Mobile Devices

The proper placement of the devices and applications described in this chapter is critical for their proper function. The following sections discuss this placement.


A UTM device should be placed between the LAN and the connection to the Internet, as shown in Figure 5-7.

A UTM device is placed between the LAN and the connection to the Internet.

Figure 5-7 Placement of a UTM Device


The placement of an IPS or IDS depends on whether it is network or host based. Let’s look at both:

  • HIDS/HIPS: These devices are located on the hosts to which they are providing protection. Therefore, secure placement is a function of the placement of the host rather than the IDS/IPS.

  • NIDS/NIPS: Where you place a NIDS depends on the needs of the organization. To identify malicious traffic coming in from the Internet only, you should place it outside the firewall. On the other hand, placing a NIDS inside the firewall will enable the system to identify internal attacks and attacks that get through the firewall. In cases where multiple sensors can be deployed, you might place NIDS devices in both locations. When the budget allows, you should place any additional sensors closer to the sensitive systems in the network. When only a single sensor can be placed, all traffic should be funneled through it, regardless of whether it is inside or outside the firewall (see Figure 5-8).

Placement of an NIPS is depicted.

Figure 5-8 Placement of a NIPS


You place an INE or an HAIPE device in a network whose data is to be secured, at the point where the network has a connection to an unprotected network.

In Figure 5-9, any traffic that comes from Network A destined for either Network B or Network C goes through HAIPE A, is encrypted, encapsulated with headers that are appropriate for the transit network, and then sent out onto the insecure network. The receiving HAIPE device then decrypts the data packet and sends it on to the destination network.

Placement of an INE device is depicted.

Figure 5-9 Placement of an INE Device


While the network policy server or the server performing health analysis should be located securely within the protected LAN, the health status of the device requesting access is collected at each point of entry into the network. When agents are in use, the collection occurs on the client, and this information is forwarded to the server. When agents are not in use, the collection of the health status is performed by the edge access device (for example, switch, wireless AP, VPN server, RAS server).


You should place a SIEM device in a central location where all reporting systems can reach it. Moreover, given the security information it contains, you should put it in a secured portion of the network. More important than the placement, though, is the tuning of the system so that it doesn’t gather so much information that it is unusable.


As switches are considered access layer devices in the Cisco three-layer model, they must be located near the devices they connect. Usually this means they are located on the same floor as the devices in order to accommodate the 100-meter cable length limitation of twisted-pair cabling.


The location of a router is dependent on the security zones or broadcast domains you need to create around the router and the desired relationship of the router with other routers in the network. This decision is therefore less about security than it is about performance.


The placement of proxies depends on the type. Although each scenario can be unique, Table 5-14 shows the typical placement of each proxy type.


Table 5-14 Placement of Proxies



Circuit-level proxy: At the network edge

Application-level proxy: Close to the application server it is protecting

Kernel proxy firewall: Close to the systems it is protecting

Load Balancer

Because load balancers distribute workloads across multiple devices, they must be located near those devices. When a load balancer is implemented as a service in a clustering solution, the service runs on one of the clustered devices, so the location choice is the same.


Figure 5-10 shows the typical placement of an HSM. These devices also exist in network card form.

Placement of an HSM is depicted.

Figure 5-10 Placement of an HSM


When a microSD HSM card is in use, it is connected to an SD port on the device to which it is providing cryptography services.


In appliance form, a WAF is typically placed directly behind the firewall and in front of the web server farm; Figure 5-11 shows an example.

Web Traffic passes through firewall and then web application firewall to get connected with Web Servers. The firewall and the web application firewall are placed in a cloud.

Figure 5-11 Placement of a WAF

Vulnerability Scanner

For best performance, you can place a vulnerability scanner in a subnet that needs to be protected. You can also connect a scanner through a firewall to multiple subnets; this complicates the configuration and requires opening ports on the firewall, which could be problematic and could impact the performance of the firewall.


VPN connections are terminated at the edge of the network, so this is where VPN servers should be located.

VPN Controller

When VPNs are terminated at wireless controllers, the controllers reside in the secure LAN. The APs, which in this deployment are just radios, relay the credentials to the controllers.


With VNC, any connections that go through a firewall are on port 5900. It may be necessary to add a rule to the firewall to allow this traffic. Moreover, the VNC server should be safely placed in the internal network, and only local connections should be allowed to it. Any connections from outside the network should use a VPN or should use SSH through a more secure server. The VNC server should also be set to only allow viewing of sessions to minimize the damage in the event of a breach.

Reverse Proxy

The location of a reverse proxy should follow the guidelines specified earlier in this chapter, in the section “Proxy.”


When the 802.1x standard is deployed, the authentication server (TACACS+, RADIUS, or Diameter) should be located securely in the LAN or intranet. The authenticators (switches, APs, VPN servers, RAS servers, and so on) should be located at the network edge, where the supplicants (laptops, mobile devices, remote desktops) will be attempting access to the network.


Although each scenario can be unique, Table 5-15 shows the typical placement of each firewall type.


Table 5-15 Typical Placement of Firewall Types



Packet-filtering firewall: Located between subnets that must be secured

Circuit-level proxy: At the network edge

Application-level proxy: Close to the application server it is protecting

Kernel proxy firewall: Close to the systems it is protecting

An NGFW can be placed in-line (or in-path) or out-of-path. Out-of-path means that a gateway redirects traffic to the NGFW, while in-line placement causes all traffic to flow through the device. The two placements are shown in Figure 5-12.

In-path and out-of-path placements of NGFW are depicted.

Figure 5-12 NGFW Placement Options

A bastion host can be placed as follows:

  • Behind the exterior and interior firewalls: Locating it here and keeping it separate from the interior network complicates the configuration but is safest.

  • Behind the exterior firewall only: Perhaps the most common location for a bastion host is separated from the internal network; this means less complicated configuration (see Figure 5-13).

    A bastion host is separated from the internal network.

    Figure 5-13 A Bastion Host in a Screened Subnet

  • As both the exterior firewall and a bastion host: This setup exposes the host to the most danger.

Figure 5-14 shows the location of a dual-homed firewall (also called a dual-homed host).

Computers and a server are connected in an internal network, which in turn is connected with a dual-homed host. The dual-homed host is connected to the Internet and is enclosed by a firewall.

Figure 5-14 The Location of a Dual-Homed Firewall

Figure 5-15 shows the location of a three-legged firewall.

A three-legged firewall is connected to an internal network, DMZ network, and the Internet. Servers are connected to the internal network and servers are also connected to the DMZ network.

Figure 5-15 The Location of a Three-Legged Firewall

The location of a screened host firewall is shown in Figure 5-16.

The location of a screened host firewall is shown.

Figure 5-16 The Location of a Screened Host Firewall

Figure 5-17 shows the placement of a firewall to create a screened subnet.

Intranet is connected to a firewall which is connected to a web server passing through DMZ. The firewall is also connected to a router, which is connected to the Internet.

Figure 5-17 The Location of a Screened Subnet

WLAN Controller

WLAN controllers are centralized devices used to manage multiple wireless access points. Figure 5-18 shows the layout of a WLAN that uses a controller, and Figure 5-19 shows a layout of a WLAN that does not use a controller.

The layout of a WLAN that uses a controller is shown.

Figure 5-18 A WLAN with a Controller

A layout of a WLAN that does not use a controller.

Figure 5-19 A WLAN with No Controller


Placement of a DAM depends on how the DAM operates. In some cases, traffic is routed through a DAM before it reaches the database. In other solutions, the collector is given administrative access to the database, and it performs the monitoring remotely. Finally, some solutions have an agent installed directly on the database. These three placement options are shown in Figure 5-20.

The three placement options of DAM are depicted.

Figure 5-20 DAM Placement Options

Complex Network Security Solutions for Data Flow

While securing the information that traverses a network is probably the most obvious duty of a security professional, having an awareness of the type of traffic that is generated on the network is just as important. For both security and performance reasons, you need to understand the amounts of various traffic types and the sources of each type of traffic. The following sections talk about what data flows are and how to protect sensitive flows.


Data leakage occurs when sensitive data is disclosed to unauthorized personnel either intentionally or inadvertently. Data loss prevention (DLP) software attempts to prevent data leakage. It does this by maintaining awareness of actions that can and cannot be taken with respect to a document. For example, it might allow printing of a document but only at the company office. It might also disallow sending the document through email. DLP software uses ingress and egress filters to identify sensitive data that is leaving the organization and can prevent such leakage. Ingress filters examine information that is entering the network, while egress filters examine information that is leaving the network. Using an egress filter is one of the main mitigations to data exfiltration, which is the unauthorized transfer of data from a network.

Let’s look at an example. Suppose that product plans should be available only to the Sales group. For that document you might create a policy that specifies the following:

  • It cannot be emailed to anyone other than Sales group members.

  • It cannot be printed.

  • It cannot be copied.

You could then implement the policy in two locations:

  • Network DLP: You could install it at network egress points near the perimeter; network DLP analyzes network traffic.

  • Endpoint DLP: Endpoint DLP runs on end-user workstations or servers in the organization.

You can use both precise and imprecise methods to determine what is sensitive:

  • Precise methods: These methods involve content registration and trigger almost zero false-positive incidents.

  • Imprecise methods: These methods can include keywords, lexicons, regular expressions, extended regular expressions, metadata tags, Bayesian analysis, and statistical analysis.

The value of a DLP system resides in the level of precision with which it can locate and prevent the leakage of sensitive data.
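As a rough illustration of the imprecise, pattern-based approach, an egress filter might scan outbound text against regular expressions and keywords. The patterns below are simplified examples for illustration, not production DLP rules:

```python
import re

# Sketch of an imprecise (regex/keyword) egress filter: flag outbound text
# that appears to contain sensitive identifiers. These patterns would produce
# false positives and negatives in practice.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN format
    "card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),  # 16-digit card format
    "keyword": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def egress_check(payload):
    """Return the names of the patterns the outbound payload matches."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(payload)]

print(egress_check("Quarterly report attached."))            # []
print(egress_check("SSN 123-45-6789 - CONFIDENTIAL draft"))  # ['ssn', 'keyword']
```

A precise, content-registration approach would instead match fingerprints of known registered documents, which is why it triggers almost no false positives.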

Deep Packet Inspection

Earlier in this chapter you learned about application layer firewalls and that they incur a performance penalty. This is because these firewalls perform deep packet inspection; that is, they look into the data portion of a packet for signs of malicious code. Table 5-16 lists the advantage and disadvantage of deep packet inspection. Deep packet inspection should be done at the network edge.


Table 5-16 Advantage and Disadvantage of Deep Packet Inspection



Advantage: Detects malicious content in the data portion of the packet

Disadvantage: Slows network performance
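The core idea can be sketched in a few lines: scan the data portion of each packet against known-bad signatures. The signatures here are made-up examples; real DPI engines reassemble streams and apply far richer, stateful rules, which is where the performance cost comes from.

```python
# Sketch of deep packet inspection: instead of checking only header fields,
# scan the payload (data portion) of each packet for known-bad signatures.
SIGNATURES = [b"cmd.exe", b"/etc/passwd", b"<script>"]

def inspect(packet_payload: bytes):
    """Return the signatures found in the payload (empty list = pass)."""
    return [sig for sig in SIGNATURES if sig in packet_payload]

print(inspect(b"GET /index.html HTTP/1.1"))        # [] -> allow
print(inspect(b"GET /../../etc/passwd HTTP/1.1"))  # [b'/etc/passwd'] -> block
```

Even this toy version shows why DPI is slower than header filtering: every byte of every payload must be examined, not just a fixed set of header fields.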

Data-Flow Enforcement

Data-flow enforcement can refer to controlling data flows within an application, and it can also refer to controlling information flows within and between networks. Both concepts are important to understand and address correctly.

It is critical that developers ensure that applications handle data in a safe manner. This applies to both the confidentiality and integrity of data. The system architecture of an application should be designed to provide the following services:

  • Boundary control services: These services are responsible for placing various components in security zones and maintaining boundary control between them. Generally, this is accomplished by indicating components and services as trusted or not trusted. For example, memory space insulated from other running processes in a multiprocessing system is part of a protection boundary.

  • Access control services: Various methods of access control can be deployed. An appropriate method should be deployed to control access to sensitive material and to give users the access they need to do their jobs.

  • Integrity services: Integrity implies that data has not been changed. Integrity services ensure that data moving through the operating system or application can be verified to not have been damaged or corrupted in the transfer.

  • Cryptography services: If the system is capable of scrambling or encrypting information in transit, it is said to provide cryptography services. In some cases, such services are not natively provided by a system, and if they are desired, they must be provided in some other fashion. But if the capability is present, it is valuable, especially in instances where systems are distributed and talk across the network.

  • Auditing and monitoring services: If a system has a method of tracking the activities of users and of operations of the system processes, it is said to provide auditing and monitoring services. Although our focus here is on security, the value of this service goes beyond security as it also allows for monitoring what the system is actually doing.

Data-flow enforcement can also refer to controlling data within and between networks. A few examples of flow control restrictions include:

  • Preventing information from being transmitted in the clear to the Internet

  • Blocking outside traffic that claims to be from within the organization

  • Preventing the passing to the Internet of any web requests that are not from the internal web proxy

Network Flow (S/flow)

Sampled flow (S/flow or sFlow) is an industry standard for exporting packets at layer 2 of the OSI model. When these packets are exported for monitoring purposes, they are truncated and used along with interface counters. With sFlow, which is supported by multiple network device manufacturers, the sampled data is sent as a UDP packet to the specified host and port. The official port number for sFlow is port 6343, and the current version is sFlow v5.
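On the transport side, a collector simply listens for UDP datagrams on port 6343. The sketch below shows only that socket setup; decoding the sFlow v5 datagram structures is omitted:

```python
import socket

# Minimal sketch of the transport side of an sFlow collector: sampled data
# arrives as UDP datagrams on the official sFlow port, 6343.
SFLOW_PORT = 6343

def open_collector(host="0.0.0.0", port=SFLOW_PORT):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP transport
    sock.bind((host, port))
    return sock

# Usage (blocking): datagram, agent_addr = open_collector().recvfrom(65535)
```

Because sFlow is datagram-based and sampled, losing an occasional datagram is acceptable by design, which is why UDP is a good fit here.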

Network Flow Data

A network flow is a single conversation or session that shares certain characteristics between two devices. You can use tools and utilities such as Cisco’s NetFlow Analyzer to organize these conversations for traffic analysis and planning. You can set tools like this to define the conversations on the basis of various combinations of the following characteristics:

  • Ingress interface

  • Source IP address

  • Destination IP address

  • IP protocol

  • Source port for UDP or TCP

  • Destination port for UDP or TCP and type and code for ICMP (with type and code set as 0 for protocols other than ICMP)

  • IP type of service

The most commonly used network flow identifiers are source and destination IP addresses and source and destination port numbers. You can use the nfdump command-line tool to extract network flow information for a particular flow or conversation. Here is an example:

Date flow start Duration Proto Src IP Addr:Port Dst IP Addr:Port Packets Bytes Flows
2010-09-01 00:00:00.459 0.000 UDP -> 1 46 1
2010-09-01 00:00:00.363 0.000 UDP -> 1 80 1

In this example, in the first flow, a packet is sent from the host machine, using source port 24920, to a destination machine on port 22126. The second flow is the response from that machine back to the original source port 24920.

Tools like this usually provide the ability to identify the top five protocols in use, the top five speakers on the network, and the top five flows or conversations. Moreover, they can graph this information, which makes identifying patterns easier.
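The grouping logic behind such tools can be sketched by keying packets on the common flow identifiers. The addresses and byte counts below are invented for illustration (the ports echo the earlier nfdump example):

```python
from collections import Counter

# Sketch of flow aggregation: group packets by the flow identifiers
# (source/destination address and port, plus protocol) and total the bytes
# per flow. NetFlow-style tools do this at much larger scale.
packets = [
    {"src": "10.0.0.5", "sport": 24920, "dst": "10.0.0.9", "dport": 22126,
     "proto": "UDP", "bytes": 46},
    {"src": "10.0.0.9", "sport": 22126, "dst": "10.0.0.5", "dport": 24920,
     "proto": "UDP", "bytes": 80},
    {"src": "10.0.0.5", "sport": 24920, "dst": "10.0.0.9", "dport": 22126,
     "proto": "UDP", "bytes": 60},
]

byte_counts = Counter()
for p in packets:
    flow = (p["src"], p["sport"], p["dst"], p["dport"], p["proto"])
    byte_counts[flow] += p["bytes"]

# "Top talkers": flows ranked by total byte count.
for flow, total in byte_counts.most_common():
    print(flow, total)
```

Ranking the aggregated flows is exactly how an analyzer produces its top-five speakers and top-five conversations reports.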

Data Flow Diagram

A data flow diagram (DFD) shows the flow of data as transactions occur in an application or a service. It shows what kind of information will be input to and output from the system, how the data will advance through the system, and where the data will be stored. Figure 5-21 provides a simple example of a DFD depicting a workflow in a college. A DFD is often used as a preliminary step to create an overview of a system without going into great detail; the detail can be elaborated later.

Example of a data flow diagram is depicted.

Figure 5-21 Data Flow Diagram

Secure Configuration and Baselining of Networking and Security Components

To take advantage of all the available security features on the various security devices discussed in this chapter, proper configuration and management of configurations must take place. This requires a consistent change process and some method of restricting administrative access to devices. The following sections explore both issues.


ACLs are rule sets that can be implemented on firewalls, switches, and other infrastructure devices to control access. There are other uses of ACLs, such as to identify traffic for the purpose of applying Quality of Service (QoS), but the focus here is on using ACLs to restrict access to devices.

Many of the devices in question have web interfaces that can be used for management, but many are also managed through a command-line interface (and many technicians prefer this method). ACLs can be applied to these virtual terminal interfaces to control which users (based on their IP addresses) have access and which do not.

When creating ACL rule sets, keep in mind the following design considerations:

  • The order of the rules is important. If traffic matches a rule, the action specified by the rule will be applied, and no other rules will be read. Place more specific rules at the top of the list and more general rules at the bottom.

  • On many devices (such as Cisco routers), an implied deny all rule is located at the end of every ACL. If you are unsure, it is always best to configure an explicit deny all rule at the end of the ACL.

  • It is possible to log all traffic that meets any of the rules.

Creating Rule Sets

Firewalls use rule sets to do their job. You can create rule sets at the command line or in a GUI. As a CASP candidate, you must understand the logic that a device uses to process the rules. A device examines rules starting at the top of the rule set, in this order:

  • The type of traffic

  • The source of the traffic

  • The destination of the traffic

  • The action to take on the traffic

For example, the following rule, created as an access list on a Cisco router, denies HTTP traffic from one specified host when it is destined for another specified host:

Access-list 101 deny tcp host host eq www

If the first rule in a list doesn’t match the traffic in question, the next rule in the list is examined. If all the rules are examined and none of them match the traffic type in a packet, the traffic will be denied by a rule called the implicit deny rule. Therefore, if a list doesn’t contain at least one permit statement, all traffic will be denied.
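This first-match logic, including the implicit deny, can be modeled in a few lines. This is a simplified abstraction for illustration, not IOS ACL syntax, and the addresses are hypothetical:

```python
# Sketch of firewall/ACL evaluation: rules are checked top-down, the first
# match wins (no further rules are read), and unmatched traffic falls through
# to an implicit deny.
rules = [
    {"action": "deny",   "proto": "tcp", "src": "10.1.1.5", "dport": 80},
    {"action": "permit", "proto": "tcp", "src": "any",      "dport": 80},
]

def evaluate(packet, rule_set):
    for rule in rule_set:
        if (rule["proto"] == packet["proto"]
                and rule["src"] in ("any", packet["src"])
                and rule["dport"] == packet["dport"]):
            return rule["action"]   # first match wins; stop reading rules
    return "deny"                   # implicit deny at the end of every ACL

print(evaluate({"proto": "tcp", "src": "10.1.1.5", "dport": 80}, rules))  # deny
print(evaluate({"proto": "tcp", "src": "10.1.1.7", "dport": 80}, rules))  # permit
print(evaluate({"proto": "udp", "src": "10.1.1.7", "dport": 53}, rules))  # deny (implicit)
```

Notice that swapping the two rules would change the outcome for the first host, which is why specific rules must be placed above general ones.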

While ACLs can be part of a larger access control policy, you shouldn’t lose sight of the fact that you need to also use a secure method to work at the command line. You should use SSH instead of Telnet because Telnet uses cleartext, and SSH does not.

Change Monitoring

All networks evolve, grow, and change over time. Companies and their processes also evolve and change, which is a good thing. But change should be managed in a structured way to maintain a common sense of purpose about the changes. By following recommended steps in a formal process, you can prevent change from becoming the tail that wags the dog. The following guidelines should be a part of any change control policy:

  • All changes should be formally requested.

  • Each request should be analyzed to ensure that it supports all goals and policies.

  • Prior to formal approval, all costs and effects of the methods of implementation should be reviewed.

  • Once approved, the change steps should be developed.

  • During implementation, incremental testing should occur, and a predetermined fallback strategy should be used, if necessary.

  • Complete documentation should be produced and submitted with a formal report to management.

One of the key benefits of following this method is the ability to make use of the documentation in future planning. Lessons learned can be applied, and the process itself can be improved through analysis.

In summary, these are the steps in a formal change control process:

Step 1. Submit/resubmit a change request.

Step 2. Review the change request.

Step 3. Coordinate the change.

Step 4. Implement the change.

Step 5. Measure the results of the change.

Configuration Lockdown

Configuration lockdown (sometimes also called system lockdown) is a setting that can be implemented on devices including servers, routers, switches, firewalls, and virtual hosts. You set it on a device after that device is correctly configured, and it prevents any changes to the configuration, even by users who formerly had the right to configure the device. This setting helps support change control.

Full tests for functionality of all services and applications should be performed prior to implementing this setting. Many products that provide this functionality offer a test mode, in which you can log any problems the current configuration causes without allowing the problems to completely manifest on the network. This allows you to identify and correct any problems prior to implementing full lockdown.

Availability Controls

While security operations seem to focus attention on providing confidentiality and integrity of data, availability of data is also an important goal. Ensuring availability requires security professionals to design and maintain processes and systems that maintain availability to resources despite hardware or software failures in the environment. Availability controls comprise a set of features or steps taken to ensure that a resource is available for use. The following measures help achieve this goal:

  • Redundant hardware: Failure of physical components, such as hard drives and network cards, can interrupt access to resources. Providing redundant instances of these components can help ensure faster return to access. In some cases, redundancy may require manual intervention to change out a component, but in many cases, these items are hot swappable (that is, they can be changed while the device is up and running), in which case there may be a momentary reduction in performance rather than a complete disruption of access. While the advantage of redundant hardware is more availability, the disadvantage is the additional cost and, in some cases, the opportunity cost of a device never being used unless there is a failure.

  • Fault-tolerant technologies: At the next level of redundancy are technologies that are based on multiple computing systems or devices working together to provide uninterrupted access, even in the event of a failure of one of the systems. Clustering of servers and grid computing are great examples of this approach. As with redundant hardware, many fault-tolerant technologies result in devices serving only as backups and not typically being used.

A number of metrics are used to measure and control availability, including the following:

  • Service-level agreements (SLAs): SLAs are agreements about the ability of the support system to respond to problems within a certain time frame while providing an agreed level of service. These agreements can be internal (between departments) or external (with a service provider). Agreeing on the quickness with which various problems are addressed introduces some predictability to the response to problems; this ultimately supports the maintenance of access to resources. An SLA may include requirements such as the following examples:

    • Loss of connectivity to the DNS server must be restored within a two-hour period.

    • Loss of connectivity to Internet service must be restored within a five-hour period.

    • Loss of connectivity of a host machine must be restored within an eight-hour period.

  • MTBF and MTTR: SLAs are appropriate for services that are provided, but a slightly different approach to introducing predictability can be used with regard to physical components that are purchased. Vendors typically publish values for a product’s mean time between failures (MTBF), which describes the average amount of time between failures during normal operations. Another valuable metric typically provided is the mean time to repair (MTTR), which is the average amount of time it will take to get the device fixed and back online.
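These two metrics combine into a common availability estimate: Availability = MTBF / (MTBF + MTTR). A quick worked example (the hour values here are hypothetical):

```python
# Worked example: availability as the fraction of time a component is usable.
# Availability = MTBF / (MTBF + MTTR)
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component that fails on average every 10,000 hours of operation and takes
# 8 hours to repair:
a = availability(10_000, 8)
print(f"{a:.4%}")   # 99.9201%
```

The formula makes the trade-off explicit: availability improves either by making failures rarer (higher MTBF) or by repairing them faster (lower MTTR).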

CASP candidates must understand a variety of high-availability terms and techniques, including the following:

  • Redundant array of inexpensive/independent disks (RAID): RAID is a hard drive technology in which data is written across multiple disks in such a way that if a disk fails, the data can be quickly made available from the remaining disks in the array without resorting to a backup tape. The most common types of RAID are:

  • RAID 0: Also called disk striping, this method writes the data across multiple drives. While it improves performance, it does not provide fault tolerance. RAID 0 is depicted in Figure 5-22.

Disk 0 and Disk 1 are shown and both the disks are indicated as RAID 0. The data in Disk 0 are indicated as A1, A3, A5, and A7 and that in Disk 1 are indicated as A2, A4, A6, and A8.

Figure 5-22 RAID 0

  • RAID 1: Also called disk mirroring, RAID 1 uses two disks and writes a copy of the data to both disks, providing fault tolerance in the case of a single drive failure. RAID 1 is depicted in Figure 5-23.

    Disk 0 and Disk 1 are shown and both the disks are indicated as RAID 1. The data in Disk 0 and Disk 1 are indicated as A1, A2, A3, and A4 in each of them.

    Figure 5-23 RAID 1

  • RAID 3: This method, which requires at least three drives, writes the data across all drives, as with striping, and then writes parity information to a single dedicated drive. The parity information is used to regenerate the data in the case of a single drive failure. The downfall of this method is that the parity drive is a single point of failure. RAID 3 is depicted in Figure 5-24.

    RAID 3 represents bytes striped in data disks and a dedicated parity disk.

    Figure 5-24 RAID 3

  • RAID 5: This method, which requires at least three drives, writes the data across all drives, as with striping, and then writes parity information across all drives as well. The parity information is used in the same way as in RAID 3, but it is not stored on a single drive, so there is no single point of failure for the parity data. With hardware RAID 5, the spare drives that replace the failed drives are usually hot swappable, meaning they can be replaced on the server while it is running. RAID 5 is depicted in Figure 5-25.

    RAID 5 represents parity spread across disks.

    Figure 5-25 RAID 5

  • RAID 7: While not a standard but a proprietary implementation, this system incorporates the same principles as RAID 5 but enables the drive array to continue to operate if any disk or any path to any disk fails. The multiple disks in the array operate as a single virtual disk.

  • RAID 10: This method combines RAID 1 and RAID 0 and requires a minimum of four disks, although implementations often use more. A RAID 10 deployment stripes data across mirrored pairs of disks. Figure 5-26 depicts RAID 10.

RAID 10 is depicted.

Figure 5-26 RAID 10

RAID can be implemented with software or with hardware, and certain types of RAID are faster when implemented with hardware. Both RAID 3 and RAID 5 are examples of types that are faster in hardware because of the parity calculations involved. Simple striping and mirroring (RAID 0 and 1), however, tend to perform well in software because they do not require parity calculations. When software RAID is used, it is a function of the operating system. Table 5-17 summarizes the RAID types.


Table 5-17 RAID Types

RAID Level | Minimum Number of Drives | Description | Advantages | Disadvantages
0 | 2 | Data striping without redundancy | Highest performance | No data protection; if one drive fails, all data is lost
1 | 2 | Disk mirroring | Very high performance; very high data protection; very minimal penalty on write performance | High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required
3 | 3 | Byte-level data striping with a dedicated parity drive | Excellent performance for large, sequential data requests | Not well suited for transaction-oriented network applications; the single parity drive does not support multiple, simultaneous read and write requests
5 | 3 | Block-level data striping with distributed parity | Best cost/performance for transaction-oriented networks; very high performance and very high data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests | Write performance is slower than with RAID 0 or RAID 1
10 | 4 | Disk striping with mirroring | High data protection, which increases each time you add a new striped/mirror set | High redundancy cost overhead; because all data is duplicated, twice the storage capacity is required
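The parity mechanism that RAID 3 and RAID 5 rely on is simple XOR arithmetic. The following Python sketch (illustrative only, using made-up four-byte stripes) shows how a lost stripe can be rebuilt from the surviving stripes and the parity block:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Data stripes written across three drives.
d0 = b"\x01\x02\x03\x04"
d1 = b"\x10\x20\x30\x40"
d2 = b"\xaa\xbb\xcc\xdd"

parity = xor_blocks([d0, d1, d2])   # written to the parity location

# Simulate losing drive 1: XOR the survivors with parity to rebuild it.
rebuilt_d1 = xor_blocks([d0, d2, parity])
assert rebuilt_d1 == d1
print("drive 1 rebuilt:", rebuilt_d1.hex())
```

Because XOR is its own inverse, any one missing stripe can be recovered from the others, which is why a single-drive failure is survivable but a two-drive failure is not.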

Here are some key terms with regard to fault tolerance:

  • Storage area networks (SANs): These high-capacity storage devices are connected by a high-speed private network, using storage-specific switches.

  • Failover: This is the capacity of a system to switch over to a backup system if a failure occurs in the primary system.

  • Failsoft: This is the capability of a system to terminate noncritical processes when a failure occurs.

  • Clustering: This refers to a software product that provides load balancing services. With clustering, one instance of an application server acts as a master controller and distributes requests to multiple instances, using round-robin, weighted-round-robin, or a least-connections algorithm.

  • Load balancing: Load balancing is covered earlier in this chapter.
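As an illustration of the distribution algorithms named in the clustering definition above (round-robin, weighted round-robin, and least connections), here is a minimal Python sketch; the server names, weights, and connection counts are hypothetical:

```python
import itertools

servers = ["app1", "app2", "app3"]

# Round robin: each request goes to the next server in turn.
rr = itertools.cycle(servers)
rr_order = [next(rr) for _ in range(6)]

# Weighted round robin: app1 (weight 2) receives twice as many requests.
weights = {"app1": 2, "app2": 1, "app3": 1}
wrr = itertools.cycle([s for s, w in weights.items() for _ in range(w)])
wrr_order = [next(wrr) for _ in range(4)]

# Least connections: route to the server with the fewest active sessions.
active = {"app1": 12, "app2": 3, "app3": 7}
target = min(active, key=active.get)

print(rr_order, wrr_order, target)
```

Real load balancers layer health checks and session persistence on top of these basic selection rules, but the distribution logic itself is this simple.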

A single point of failure (SPOF) is not a strategy, but it is worth mentioning that the ultimate goal of any of the approaches described here is to avoid a single point of failure in a system. All components and groups of components and devices should be examined to discover any single element that could interrupt access to resources if a failure occurs. Then each SPOF should be mitigated in some way. For example, if you have a single high-speed Internet connection, you might decide to implement another lower-speed connection to provide backup in case the primary connection goes down. This particular measure is especially important for ecommerce servers.

Software-Defined Networking

In a network, three planes typically form the networking architecture:

  • Control plane: This plane carries signaling traffic originating from or destined for a router. This is the information that allows routers to share information and build routing tables.

  • Data plane: Also known as the forwarding plane, this plane carries user traffic.

  • Management plane: This plane administers the router.

Software-defined networking (SDN) has been classically defined as the decoupling of the control plane and the data plane in networking. In a conventional network, these planes are implemented in the firmware of routers and switches. SDN implements the control plane in software, which enables programmatic access to it.

This definition has evolved over time to focus more on providing programmatic interfaces to networking equipment and less on the decoupling of the control and data planes. An example of this is the provision of APIs by vendors in the multiple platforms they sell.

One advantage of SDN is that it enables very detailed access into and control over network elements. It allows IT organizations to replace a manual interface with a programmatic one that can enable the automation of configuration and policy management.

An example of the use of SDN is using software to centralize the control planes of multiple switches that normally operate independently. (While the control plane normally functions in hardware, with SDN it is performed in software.) This concept is shown in Figure 5-27.

Distributed Control is depicted on the left and Centralized Control on the right.

Figure 5-27 Centralized and Decentralized SDN
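As an illustrative sketch (not any vendor's actual controller API), the following Python fragment captures the centralized model in Figure 5-27: the controller computes forwarding state in software, and the switches merely apply the tables pushed down to them:

```python
class Controller:
    """Hypothetical SDN controller holding the control plane for many switches."""

    def __init__(self):
        self.tables = {}  # switch id -> {destination MAC: output port}

    def host_learned(self, switch, mac, port):
        # Control-plane decision made centrally in software, then
        # pushed down as a forwarding entry to the named switch.
        self.tables.setdefault(switch, {})[mac] = port

    def lookup(self, switch, mac):
        # Data-plane query: the switch simply forwards per its pushed table.
        return self.tables.get(switch, {}).get(mac)

ctl = Controller()
ctl.host_learned("sw1", "aa:aa:aa:aa:aa:aa", 3)
ctl.host_learned("sw2", "aa:aa:aa:aa:aa:aa", 7)
print(ctl.lookup("sw1", "aa:aa:aa:aa:aa:aa"))
```

The single `tables` dictionary is the point: one software process holds the state that would otherwise be built independently by each switch, which is also why loss of the controller is so damaging.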

The advantages of SDN include the following:

  • It is simple to mix and match solutions from different vendors.

  • SDN offers choice, speed, and agility in deployment.

The disadvantages of SDN include the following:

  • Loss of connectivity to the controller brings down the entire network.

  • SDN can potentially allow attacks on the controller.

Network Management and Monitoring Tools

Network management and monitoring tools are essential elements of a security solution. An earlier part of this chapter covered many common network management and monitoring tools, including IDS and NIPS. Additional tools include the following:

  • Audit logs: These logs provide digital proof of who is performing certain activities. This is useful for good guys as well as for bad guys. In many cases, you may need to determine who misconfigured something rather than who stole something. Audit trails based on access and identification codes establish individual accountability. Among the questions that should be addressed when reviewing audit logs are:

    • Are users accessing information or performing tasks that are unnecessary for their job?

    • Are repetitive mistakes (such as deletions) being made?

    • Do too many users have special rights and privileges?

    The level and amount of auditing should reflect the security policy of the company. Audits can be self-audits, or they can be performed by a third party. Self-audits always introduce the danger of subjectivity to the process. Logs can be generated on a wide variety of devices, including IDSs, servers, routers, and switches. In fact, host-based IDSs make use of the operating system logs of the host machine.

    When assessing controls over audit trails or logs, the following questions must be addressed:

    • Does the audit trail provide a trace of user actions?

    • Is access to online logs strictly controlled?

    • Is there separation of duties between security personnel who administer the access control function and those who administer the audit trail?

  • Log management: Typically, system, network, and security administrators are responsible for managing logging on their systems, performing regular analysis of their log data, documenting and reporting the results of their log management activities, and ensuring that log data is provided to the log management infrastructure in accordance with the organization’s policies. In addition, some of the organization’s security administrators act as log management infrastructure administrators, with responsibilities such as the following:

    • Contact system-level administrators to get additional information regarding an event or to request investigation of a particular event.

    • Identify changes needed to system logging configurations (for example, which entries and data fields are sent to the centralized log servers, what log format should be used) and inform system-level administrators of the necessary changes.

    • Initiate responses to events, including incident handling and operational problems (for example, a failure of a log management infrastructure component).

    • Ensure that old log data is archived to removable media and disposed of properly when it is no longer needed.

    • Cooperate with requests from legal counsel, auditors, and others.

    • Monitor the status of the log management infrastructure (for example, failures in logging software or log archival media, failures of local systems to transfer their log data) and initiate appropriate responses when problems occur.

    • Test and implement upgrades and updates to the log management infrastructure’s components.

    • Maintain the security of the log management infrastructure.

      Organizations should develop policies that clearly define mandatory requirements and suggested recommendations for several aspects of log management, including the following: log generation, log transmission, log storage and disposal, and log analysis. Table 5-18 provides examples of logging configuration settings that an organization can use. The types of values defined in Table 5-18 should only be applied to the hosts and host components previously specified by the organization as ones that must or should be logging security-related events.


Table 5-18 Examples of Logging Configuration Settings

Setting | Low-Impact Systems | Moderate-Impact Systems | High-Impact Systems
Log retention duration | 1–2 weeks | 1–3 months | 3–12 months
Log rotation | Optional (if performed, at least every week or every 25 MB) | Every 6–24 hours or every 2–5 MB | Every 15–60 minutes or every 0.5–1.0 MB
Log data transfer frequency (to SIEM) | Every 3–24 hours | Every 15–60 minutes | At least every 5 minutes
Local log data analysis | Every 1–7 days | Every 12–24 hours | At least 6 times a day
File integrity check for rotated logs? | Optional | Yes | Yes
Encrypt rotated logs? | Optional | Optional | Yes
Encrypt log data transfers to SIEM? | Optional | Yes, if feasible | Yes
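As one way to implement the size-based rotation guidance in Table 5-18, the Python standard library's RotatingFileHandler rotates a log when it reaches a size limit and keeps a fixed number of archives. The tiny 1 KB limit below is for demonstration only; a real deployment would use the organization's own size and retention policy:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "audit.log")

# Rotate when the file reaches ~1 KB, keeping 3 archived copies.
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=1024, backupCount=3)
logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

for i in range(200):
    logger.info("event %d: user login succeeded", i)

# The active log plus the rotated archives audit.log.1 ... audit.log.3.
archives = [f for f in os.listdir(log_dir) if f.startswith("audit.log")]
print(sorted(archives))
```

Once `backupCount` is exceeded, the oldest archive is discarded, which is why rotation policy must be paired with the retention and archival requirements in the same table.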




  • Protocol analyzers: Also called sniffers, these devices can capture raw data frames from a network. They can be used as security and performance tools. Many protocol analyzers can organize and graph the information they collect. Graphs are great for visually identifying trends and patterns.

Reading and understanding audit logs requires getting used to the specific layout of the log in use. As a CASP candidate, you should be able to recognize some standard events of interest that tend to manifest with distinct patterns. Figure 5-28 shows output from the protocol analyzer Wireshark. The top pane shows packets that have been captured. The line numbered 384 has been chosen, and the parts of the packet are shown in the middle pane. In this case, the packet is a response from a DNS server to a device that queried for a resolution. The bottom pane shows the actual data in the packet and, because this packet is not encrypted, you can see the name the user was asking the DNS server to resolve. Any packet that is not encrypted can be read in this pane.

A screenshot shows Wireshark window.

Figure 5-28 Wireshark Output

Table 5-19 lists events of interest, clues to their occurrence, and mitigation techniques a CASP candidate needs to know.


Table 5-19 Attacks and Mitigations

Attack Type | Clue | Mitigation | Typical Sources
Authentication attacks | Multiple unsuccessful logon attempts | Alert sent and/or account disabled after 3 failed attempts | Active Directory
Firewall attacks | Multiple drop/reject/deny events from the same IP address | Alert sent on 15 or more of these events from a single IP address in a minute | Firewall logs
IPS/IDS attacks | Multiple drop/reject/deny events from the same IP address | Alert sent on 7 or more of these events from a single IP address in a minute | IPS/IDS logs



Alert Definitions and Rule Writing

Alerts can be sent from various security devices, such as IPS, IDS, and SIEM systems. Some of these alerts are predefined within a tool, while others must be constructed or defined. For example, custom rules can be written for the Snort IDS, which uses a lightweight rules description language that is flexible and quite powerful.

Snort rules are divided into two logical sections: the rule header and the rule options. The rule header contains the rule’s action, protocol, source and destination IP addresses, netmasks, and the source and destination ports information. The rule option section contains alert messages and information on which parts of the packet should be inspected to determine if the rule action should be taken. The following is an example of a rule:

alert tcp !$HOME_NET any -> $HOME_NET 111 (content:"sensitive"; msg:"Sensitive data detected";)

The rule header is the portion that says alert tcp !$HOME_NET any -> $HOME_NET 111.

This rule's IP addresses indicate "any TCP packet with a source IP address not originating from the internal network and a destination address on the internal network," with a destination port of 111. The rule options portion is in parentheses: the content keyword tells Snort what to look for—here, the appearance of the word sensitive in the packet payload—and the msg keyword defines the text of the alert that is generated. Using custom rules to create alert definitions can help tailor an alert and cut down on false positives.

Tuning Alert Thresholds

You can create alert thresholds such that an alert is issued only when a specific number of occurrences of the event have occurred. You can also create a threshold based on the number of events received per second.

Some tools offer other options, such as the following options offered by Microsoft Forefront Threat Management Gateway:

  • If the alert should be reissued immediately if the event recurs, click Immediately.

  • If the alert should be reissued only after the alert is reset, click Only if the alert was manually reset.

  • If the alert should be reissued after a specified amount of time, click If time since last execution is more than Number minutes, and then type the number of minutes that should elapse before the action should be performed.

The number of alerts received is a function of these options and the sensitivity of the system. When there is a scarcity of alerts or if you feel you are missing events (false negatives), you may need to increase the sensitivity of the system or tune the alerts to make them less specific. On the other hand, if you are being overwhelmed with alerts or if many of them are unimportant or faulty (false positives), you may need to decrease the sensitivity or make the alert settings more specific.
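A per-source threshold like the "15 deny events from one IP address in a minute" example in Table 5-19 can be sketched with a sliding window; the addresses, timestamps, and numbers below are illustrative:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60
THRESHOLD = 15          # e.g., 15 firewall denies from one IP in a minute

events = defaultdict(deque)   # source IP -> timestamps of recent deny events

def record_event(src_ip, timestamp):
    """Record a deny event; return True if the alert threshold is crossed."""
    q = events[src_ip]
    q.append(timestamp)
    # Discard events that have aged out of the window.
    while q and q[0] <= timestamp - WINDOW_SECONDS:
        q.popleft()
    return len(q) >= THRESHOLD

# One deny every 2 seconds from the same source address.
alerts = [t for t in range(0, 50, 2) if record_event("203.0.113.9", t)]
print("first alert at t =", alerts[0] if alerts else None)
```

Raising `THRESHOLD` or shrinking `WINDOW_SECONDS` is exactly the "make the alert more specific" tuning described above: fewer, higher-confidence alerts at the cost of possibly missing slow attacks.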

Alert Fatigue

Alert fatigue refers to the effect on the security team that occurs when too many false positives (alerts that do not represent threats) are received. Alert fatigue can lead to a loss of the sense of urgency that should always be present. Using custom rules to create alert definitions can help tailor alerts and cut down on false positives.

Advanced Configuration of Routers, Switches, and Other Network Devices

When configuring routers, switches, and other network devices, some specific advanced configurations should be a part of securing the devices and the networks they support. The following sections discuss some of these and the security concerns they address.

Transport Security

While encryption protocols such as SSL and TLS provide protection to application layer protocols such as HTTP, they offer no protection to the information contained in the transport or network layers of a packet. You can use IPsec to protect the protocols that work in the network layer and all layers above the network layer. IPsec is a suite of protocols that establishes a secure channel between two devices. For more information on IPsec, see the section “IPsec,” earlier in this chapter.

Trunking Security

Trunk links are links between switches and between routers and switches that carry the traffic of multiple VLANs. Normally, a hacker trying to capture traffic with a protocol analyzer is confined to capturing unicast data destined for the switch port to which she is attached and broadcast and multicast data from the VLAN of which her port is a member. However, if the hacker is able to create a trunk link with one of your switches, she can capture traffic in all VLANs on the trunk link. In most cases this is difficult to do, but on Cisco switches it is possible to take advantage of the operation of a protocol called Dynamic Trunking Protocol (DTP) to create a trunk link quite easily.

DTP allows two switches to form a trunk link automatically, based on their settings. A switch port can be configured with the following possible settings:

  • Trunk: The switch port is hard-coded to be a trunk.

  • Access: The switch port is hard-coded to be an access port.

  • Dynamic desirable: The port is willing to form a trunk and will actively attempt to form a trunk.

  • Dynamic auto: The port is willing to form a trunk but will not initiate the process.


If a switch port is set to either dynamic desirable or dynamic auto, it would be easy for a hacker to connect a switch to that port, set her port to dynamic desirable, and thereby form a trunk. This type of attack, called switch spoofing, is shown in Figure 5-29. All switch ports should be hard-coded to trunk or access, and DTP should not be used. Cisco, which created DTP, no longer recommends its use.

Switch spoofing has been depicted.

Figure 5-29 Switch Spoofing

You can use the following command set to hard-code a port on a Cisco switch as a trunk port:

Switch(config)#interface FastEthernet 0/1
Switch(config-if)#switchport mode trunk

To hard-code a port as an access port that will never become a trunk port, thus making it impervious to a switch spoofing attack, you use this command set:

Switch(config)#interface FastEthernet 0/1
Switch(config-if)#switchport mode access

Tags are used on trunk links to identify the VLAN to which each frame belongs. They are involved in a type of attack to trunk ports called VLAN hopping. It can be accomplished by using a process called double tagging. In this attack, the hacker creates a packet with two tags. The first tag is stripped off by the trunk port of the first switch it encounters, but the second tag remains, allowing the frame to hop to another VLAN. This process is shown in Figure 5-30. In this example, the native VLAN number between the Company A and Company B switches has been changed from the default of 1 to 10.

The process of VLAN hopping is depicted.

Figure 5-30 VLAN Hopping
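To see why double tagging works, the following Python sketch builds a frame with two 802.1Q tags at the byte level (native VLAN 10 and target VLAN 20 are assumed for illustration) and shows that stripping the outer tag leaves a frame tagged for the victim VLAN:

```python
import struct

TPID = 0x8100  # 802.1Q tag type

def dot1q_tag(vlan_id):
    # 16-bit TPID followed by a 16-bit TCI (priority 0, CFI 0, 12-bit VLAN ID).
    return struct.pack("!HH", TPID, vlan_id & 0x0FFF)

dst = bytes.fromhex("ffffffffffff")
src = bytes.fromhex("aaaaaaaaaaaa")
payload = struct.pack("!H", 0x0800) + b"IP packet..."   # EtherType + data

# Outer tag uses the native VLAN (10); inner tag targets the victim VLAN (20).
frame = dst + src + dot1q_tag(10) + dot1q_tag(20) + payload

# The first switch strips the outer tag (native-VLAN traffic goes untagged),
# leaving a frame that now appears to belong to VLAN 20.
after_first_switch = frame[:12] + frame[16:]
inner_vlan = struct.unpack("!HH", after_first_switch[12:16])[1] & 0x0FFF
print("frame forwarded into VLAN", inner_vlan)
```

The attack depends on the attacker's access port sharing the trunk's native VLAN, which is why the mitigations below move the native VLAN to an unused ID.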

To prevent this type of attack, you do the following:

  • Set the native VLAN on all trunk ports to an unused VLAN ID rather than the default of VLAN 1, and make sure the native VLAN matches on both ends of each link. To change the native VLAN from 1 to 99, execute this command on the trunk interface:

    switch(config-if)#switchport trunk native vlan 99

  • Move all access ports out of VLAN 1. You can do this for every port on a 12-port switch by using the interface range command, as follows:

    switch(config)#interface range FastEthernet 0/1 - 12
    switch(config-if-range)#switchport access vlan 61

    This example places the access ports in VLAN 61.

  • Place unused ports in an unused VLAN. Use the same command you used to place all ports in a new native VLAN and specify the VLAN number.

Port Security

Port security applies to ports on a switch, and because it relies on monitoring the MAC addresses of the devices attached to the switch ports, we call it layer 2 security. While disabling any ports that are not in use is always a good idea, port security goes a step further and allows you to keep a port enabled for legitimate devices while preventing its use by illegitimate devices.

You can apply two types of restrictions to a switch port:

  • You can restrict the specific MAC addresses allowed to send on the port.

  • You can restrict the total number of different MAC addresses allowed to send on the port.

By specifying which specific MAC addresses are allowed to send on a port, you can prevent unknown devices from connecting to the switch port. Port security is applied at the interface level. The interface must be configured as an access port, so first you ensure that it is by executing the following command:

Switch(config)#int fa0/1
Switch(config-if)#switchport mode access

In order for port security to function, you must enable the feature. To enable it on a switchport, use the following command at the interface configuration prompt:

Switch(config-if)#switchport port-security

Limiting MAC Addresses

Now you need to define the maximum number of MAC addresses allowed on the port. In many cases today, IP phones and computers share a switchport (the computer plugs into the phone, and the phone plugs into the switch), so here you want to allow a maximum of two:

Switch(config-if)#switchport port-security maximum 2

Next, you define the two allowed MAC addresses—in this case, aaaa.aaaa.aaaa and bbbb.bbbb.bbbb:

Switch(config-if)#switchport port-security mac-address aaaa.aaaa.aaaa
Switch(config-if)#switchport port-security mac-address bbbb.bbbb.bbbb

Finally, you set an action for the switch to take if there is a violation. By default, the action is to shut down the port. You can also set it to restrict, which doesn’t shut down the port but prevents the violating device from sending any data. In this case, set it to restrict:

Switch(config-if)#switchport port-security violation restrict

Now you have secured the port to allow only the two MAC addresses required by the legitimate user: one for his phone and the other for his computer. Now you just need to gather all the MAC addresses for all the phones and computers, and you can lock down all the ports. Boy, that’s a lot of work! In the next section, you’ll see that there is an easier way.

Implementing Sticky Mac

Sticky MAC is a feature that allows a switch to learn the MAC addresses of the devices currently connected to a port and convert them to secure MAC addresses (the only MAC addresses allowed to send on the port). All you need to do is specify the keyword sticky in the command where you designate the MAC addresses, and you're done. You still define the maximum number, and sticky MAC will convert up to that number of addresses to secure MAC addresses. Therefore, you can secure all ports by specifying only the number allowed on each port and using the sticky keyword in the switchport port-security mac-address command. To secure a single port, execute the following commands:

Switch(config-if)#switchport port-security
Switch(config-if)#switchport port-security maximum 2
Switch(config-if)#switchport port-security mac-address sticky


When the transport layer learns from the application layer the port number for the service or application required on the destination device, it is recorded in the header as either a TCP or UDP port number. Both UDP and TCP use 16 bits in the header to identify these ports, so there are 65,536 possible port numbers. These port numbers are software based, or logical. Port numbers are assigned in various ways, based on three ranges:

  • System, or well-known, ports (0–1023)

  • User ports (1024–49151)

  • Dynamic and/or private ports (49152–65535)

System ports are assigned by the Internet Engineering Task Force (IETF) for standards-track protocols, as per RFC 6335. User ports can be registered with the Internet Assigned Numbers Authority (IANA) and assigned to the service or application by using the “expert review” process described in RFC 6335. Source devices use dynamic ports as source ports when accessing a service or an application on another machine. For example, if computer A is sending an FTP packet, the destination port will be the well-known port for FTP, and the source will be selected by the computer randomly from the dynamic range.

The combination of the destination IP address and the destination port number is called a socket. The relationship between these two values can be understood through the analogy of an office address. The office has a street address, but the address must also contain a suite number, as there could be thousands (in this case, 65,536) of suites in the building. Both are required to get the information where it should go.
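The source-port selection described above can be observed with the Python standard library's socket module; the operating system picks the client's ephemeral source port (note that the exact ephemeral range varies by operating system and may not match the IANA dynamic range precisely):

```python
import socket

# A listener standing in for the well-known side of the conversation.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # 0 = let the OS choose a free port
server.listen(1)
dest_ip, dest_port = server.getsockname()

# The client does not choose its own source port; the OS assigns one.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((dest_ip, dest_port))
src_ip, src_port = client.getsockname()

# (dest_ip, dest_port) is the destination socket; the full 4-tuple
# uniquely identifies this conversation.
print(f"{src_ip}:{src_port} -> {dest_ip}:{dest_port}")

client.close()
server.close()
```

Running this repeatedly shows a different source port each time while the destination socket stays fixed, which is exactly the asymmetry firewall rules rely on.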

As a security professional, you should be aware of well-known port numbers of common services. In many instances, firewall rules and ACLs are written or configured in terms of the port number of what is being allowed or denied rather than the name of the service or application. Table 5-20 lists some of the most important port numbers. As you can see, some protocols or services use more than one port.


Table 5-20 Common TCP/UDP Port Numbers

Application Protocol | Transport Protocol | Port Number
SNMP | UDP | 161 and 162
FTP | TCP | 20 and 21
FTPS | TCP | 989 and 990
DHCP | UDP | 67 and 68
NetBIOS | TCP and UDP | 137, 138, and 139
RADIUS | UDP | 1812 and 1813
rsh and RCP | TCP | 514
AFP over TCP | TCP | 548



Route Protection

Most networks today use dynamic routing protocols to keep the routing tables of the routers up to date. Just as it is possible for a hacker to introduce a switch to capture all VLAN traffic, she can also introduce a router in an attempt to collect routing table information and, in some cases, edit routing information to route traffic in a manner that facilitates her attacks.

Routing protocols provide a way to configure the routers to authenticate with one another before exchanging routing information. In most cases, you can configure either a simple password between the routers or MD5 authentication. You should always use MD5 authentication when possible because it ensures the integrity of the information contained in the update and verifies the source of the exchange between the routers; simple password authentication does not. Here's how you could configure this between a router named A and one named B, using the Open Shortest Path First (OSPF) routing protocol, MD5 key 1, and the password MYPASS:

A(config)#interface fastEthernet 0/0
A(config-if)#ip ospf message-digest-key 1 md5 MYPASS
A(config-if)#ip ospf authentication message-digest
B(config)#interface fastEthernet 0/0
B(config-if)#ip ospf message-digest-key 1 md5 MYPASS
B(config-if)#ip ospf authentication message-digest

You enter these commands on the interfaces, and the key number and password must match on both ends of the connection.

The first example configures the MD5 authentication at the interface level. You can do this on all interfaces on the router that belong to the same OSPF area by configuring the MD5 authentication on an area basis instead, as shown below:

A(config)#router ospf 1
A(config-router)#area 0 authentication message-digest
B(config)#router ospf 1
B(config-router)#area 0 authentication message-digest

DDoS Protection

A denial-of-service (DoS) attack occurs when attackers flood a device with enough requests to degrade the performance of the targeted device. Some popular DoS attacks include SYN floods and teardrop attacks.

A distributed DoS (DDoS) attack is a DoS attack that is carried out from multiple attack locations. Vulnerable devices are infected with software agents, turning them into zombies (or bots). Collectively, the compromised devices form a botnet, which then carries out the attack. Because of the distributed nature of such an attack, identifying all the attacking machines is virtually impossible. The distribution also helps hide the original source of the attack.

DDoS attacks succeed because of vulnerable software or applications running on machines in a network. Constant vigilance in installing all security patches is key to preventing these attacks. Setting up a firewall that does ingress and egress filtering at the gateway is also a good measure. Make sure your DNS server is protected behind the same type of load balancing as your web and other resources. The next section describes another mitigation technique often used by provider networks.

Remotely Triggered Black Hole

Remotely triggered black hole (RTBH) routing involves the application of Border Gateway Protocol (BGP) as a security tool within service provider networks. RTBH works by injecting a specially crafted BGP route into the network, forcing routers to drop all traffic with a specific next hop, thereby effectively creating a “black hole.” These are the high-level steps:

Step 1. Create a static route that forces any traffic destined for a specified network (not the actual network of the device you are protecting) to be immediately dropped by the router.

Step 2. Create a route map to redistribute certain tagged static routes into BGP with a modified next-hop value that leads to the null route created in step 1.

Step 3. Enable static route redistribution into BGP for the route map to take effect.

Step 4. Once an attack is detected and the decision is made to block traffic, implement the route to the protected device that uses the route tag specified in the route map created in step 2.

The tag value ensures that the RTBH route map redistributes the route into BGP with a modified next hop. Then the route to the modified next hop leads to a black hole, protecting the device and preventing the traffic from even entering the network.
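The steps above might look as follows in simplified IOS-style configuration; the addresses, tag value, AS number, and route-map name are hypothetical:

```
! Step 1: traffic sent toward next hop 192.0.2.1 is discarded
ip route 192.0.2.1 255.255.255.255 Null0
! Step 2: redistribute static routes tagged 666 into BGP with that next hop
route-map RTBH permit 10
 match tag 666
 set ip next-hop 192.0.2.1
 set community no-export
! Step 3: enable static route redistribution through the route map
router bgp 65000
 redistribute static route-map RTBH
! Step 4: during an attack, trigger the black hole for the targeted host
ip route 203.0.113.50 255.255.255.255 Null0 tag 666
```

Only the final static route is entered during the attack; everything else is staged in advance, which is the preconfigured-response advantage described below.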

RTBH routing is appropriate when security professionals want to be ready for an attack on a specific target with a preconfigured response. While the advantage to this approach is that the static route required to implement the modified next hop is ready, its application is still a manual operation that must be deployed with a sense of urgency when the attack first appears.

Security Zones

When designing a network, it is advisable to create security zones separated by subnetting, ACLs, firewall rules, and other tools used for isolation. The following sections discuss some commonly used security zones and measures that can help you protect and shape the flow of data between security zones.


One of the most common implementations of a security zone is a DMZ, which may be used between the Internet and an internal network. (For more information on DMZs, see the “Firewall” section, earlier in this chapter.) The advantages and disadvantages of using a DMZ are listed in Table 5-21.


Table 5-21 Advantages and Disadvantages of Using a DMZ

Advantages | Disadvantages
Allows controlled access to publicly available servers | Requires additional interfaces on the firewall
Allows precise control of traffic between the internal, external, and DMZ zones | Requires multiple public IP addresses for servers in the DMZ

Separation of Critical Assets

Of course, the entire purpose of creating security zones such as DMZs is to separate sensitive assets from those that require less protection. Because the goals of security and of performance/ease of use are typically at odds, not all networks should have the same levels of security.

The proper location of information assets may require a variety of segregated networks. Whereas DMZs are often used to make assets publicly available, extranets are used to make data available to a smaller set of the public, such as a partner organization. An extranet is a network logically separate from the intranet, the Internet, and the DMZ (if both exist in the design) where resources that will be accessed from the outside world are made available. Access may be granted to customers, business partners, or the public in general. All traffic between this network and the intranet should be closely monitored and securely controlled. Nothing of a sensitive nature should be placed on the extranet.

Locating assets in the cloud is another way to segregate sensitive assets from other information assets, although security professionals should be aware that cloud environments introduce unique security concerns, such as commingling of data with the data assets of other tenants and unauthorized access to data by other tenants.

In cases where data security concerns are extreme, it may even be advisable to protect the underlying system with an air gap. This means the device has no network connections, and all access to the system must be done manually, adding and removing items with a flash drive or another external device.

Network Segmentation

An organization may need to segment its network to improve network performance, to protect certain traffic, or for a number of other reasons. Segmenting an enterprise network is usually achieved through the use of routers, switches, and firewalls. A network administrator may decide to implement VLANs by using switches or deploy a DMZ by using firewalls. No matter how you choose to segment the network, you should ensure that the interfaces that connect the segments are as secure as possible. This may mean closing ports, implementing MAC filtering, and using other security controls. In a virtualized environment, you can implement separate physical trust zones. When the segments or zones are created, you can delegate separate administrators who are responsible for managing the different segments or zones.
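As a minimal illustration of subnet-based segmentation, the sketch below maps source addresses to named security zones. The zone names and address ranges are hypothetical, chosen only for illustration:

```python
import ipaddress

# Hypothetical zone map: each security zone is defined by one or more subnets.
# These names and ranges are illustrative, not from any particular design.
ZONES = {
    "dmz": [ipaddress.ip_network("203.0.113.0/24")],
    "internal": [ipaddress.ip_network("10.0.0.0/8")],
    "management": [ipaddress.ip_network("172.16.0.0/24")],
}

def zone_of(address: str) -> str:
    """Return the name of the security zone containing the address,
    or 'untrusted' if it falls in no defined zone."""
    ip = ipaddress.ip_address(address)
    for zone, networks in ZONES.items():
        if any(ip in net for net in networks):
            return zone
    return "untrusted"

print(zone_of("10.1.2.3"))      # internal
print(zone_of("198.51.100.9"))  # untrusted
```

A firewall rule base or ACL would then permit or deny traffic based on the zones of the source and destination rather than on individual hosts.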

Network Access Control

NAC is briefly described earlier in this chapter. This section covers NAC in more detail.

Figure 5-31 shows the steps that occur in Microsoft NAP (which, as discussed earlier in this chapter, is a form of NAC). The health state of the device requesting access is collected and sent to the network policy server (NPS), where the state is compared to requirements. If requirements are met, access is granted.


Figure 5-31 NAP Steps

These are the limitations of using NAP or another form of NAC:

  • They work well for company-managed computers but less so for guests.

  • They tend to react only to known threats and not new threats.

  • The return on investment is still unproven.

  • Some implementations involve confusing configuration.



If you examine step 5 in the process shown in Figure 5-31, you see that a device that fails examination is placed in a restricted network until it can be remediated. A remediation server addresses the problems discovered on the device. It may remove the malware, install missing operating system updates, or update virus definitions. When the remediation process is complete, the device is granted full access to the network.
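The health-check-then-remediate flow can be sketched in a few lines of Python. The attribute names and required values here are illustrative assumptions, not part of Microsoft NAP itself:

```python
# Hypothetical health policy: attribute names and thresholds are
# illustrative, not drawn from any real NAP/NAC product.
POLICY = {"min_patch_level": 10}

def evaluate_health(state: dict) -> tuple[bool, list[str]]:
    """Compare a device's reported health state against policy.
    Returns (compliant, list of failed checks needing remediation)."""
    failures = []
    if not state.get("antivirus_enabled"):
        failures.append("antivirus_enabled")
    if not state.get("firewall_enabled"):
        failures.append("firewall_enabled")
    if state.get("patch_level", 0) < POLICY["min_patch_level"]:
        failures.append("min_patch_level")
    return (not failures, failures)

compliant, todo = evaluate_health(
    {"antivirus_enabled": True, "firewall_enabled": False, "patch_level": 8}
)
# A non-compliant device is placed in the restricted network until the
# remediation server fixes every item in `todo`.
print("full access" if compliant else f"restricted; remediate {todo}")
```

In a real deployment, the policy server (NPS) performs this comparison and the remediation server acts on the failure list before access is re-evaluated.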

Persistent/Volatile or Non-persistent Agent

When agents are used, they can be either persistent or non-persistent. Persistent agents are installed on each endpoint and are there waiting to be called into action. Non-persistent agents are installed and run as needed on an endpoint. Installation can be performed from a USB drive, with a standard IT remote administration tool, or with a dedicated incident response tool that takes a non-persistent approach. Some non-persistent agents install and then uninstall themselves after the connection is taken down. Following the guidelines set out in the next section, when agents are in use, non-persistent agents work best when unknown devices will be connecting.

Agent vs. Agentless

You can implement NAC by installing an agent on the client device, but you don’t have to use such an agent. Agentless NAC is easier to deploy but offers less control and fewer inspection capabilities. Deploying agents can be a significant expense, so an agent must provide ample benefits to warrant installation.

In scenarios where all devices will be managed devices and are known to the organization, an agent-based solution offers many benefits. However, when a large organization has many devices connecting and some are unknown to the organization, this becomes an administrative headache, and in the case of unknown devices, it is an impossibility. In these scenarios, an agentless system is more appropriate.

Network-Enabled Devices

Beyond the typical infrastructure devices, such as routers, switches, and firewalls, security professionals also have to manage and protect specialized devices that have evolved into IP devices. The networking of systems that in the past were managed out-of-band from the IP network continues to grow. The following sections cover some of the systems that have been merged with the IP network.

System on a Chip (SoC)

An SoC is an integrated circuit that includes all components of a computer or another electronic system. SoCs can be built around a microcontroller or a microprocessor (the type found in mobile phones). Specialized SoCs are also designed for specific applications.

Secure SoCs provide the key functionalities described in the following sections.

Secure Booting

Secure booting is a series of authentication processes performed on the hardware and software used in the boot chain. Secure booting starts from a trusted entity (also called the anchor point). The chip's hardware boot sequence and boot ROM are the trusted entities, and they are fabricated in silicon. Hence, it is next to impossible to change the trusted entity and still have a functional SoC.

The process of authenticating each successive stage is performed to create a chain of trust, as depicted in Figure 5-32.


Figure 5-32 Secure Boot
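The chain of trust can be sketched as a hash-verification chain: each stage is refused execution unless its digest matches the value the previous (already trusted) stage expects. This is a simplified sketch; real secure boot verifies cryptographic signatures, and the stage names and contents below are hypothetical:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical boot-stage images. In real silicon, the anchor-point
# digest is fabricated into the chip, so an attacker cannot alter it.
bootloader = b"bootloader image"
kernel = b"kernel image"

# Each trusted stage carries the expected digest of the next stage.
TRUSTED = {
    "bootloader": sha256(bootloader),
    "kernel": sha256(kernel),
}

def verify_chain(stages: dict[str, bytes]) -> bool:
    """Refuse to continue booting if any stage fails verification."""
    for name, image in stages.items():
        if sha256(image) != TRUSTED[name]:
            print(f"halt: {name} failed verification")
            return False
    return True

print(verify_chain({"bootloader": bootloader, "kernel": kernel}))
print(verify_chain({"bootloader": bootloader, "kernel": b"tampered"}))
```

The key property is that each link only runs code the previous link has already authenticated, so trust propagates from silicon outward.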

Secured Memory

Memory can be divided into multiple partitions. Based on the nature of data in a partition, the partition can be designated as a security-sensitive or a non-security-sensitive partition. In a security breach (such as tamper detection), the contents of a security-sensitive partition can be erased by the controller itself, while the contents of the non-security-sensitive partitions can remain unchanged (see Figure 5-33).


Figure 5-33 Secured Memory

Runtime Data Integrity Check

The runtime data integrity check process ensures the integrity of the peripheral memory contents during runtime execution. The secure booting sequence generates a hash value of the contents of individual memory blocks stored in secured memory. In the runtime mode, the integrity checker reads the contents of a memory block, waits for a specified period, and then reads the contents of another memory block. In the process, the checker also computes the hash values of the memory blocks and compares them with the contents of the reference file generated during boot time.

In the event of a mismatch between two hash values, the checker reports a security intrusion to a central unit that decides the action to be taken based on the security policy, as shown in Figure 5-34.


Figure 5-34 Runtime Data Integrity Check
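The checker's compare-against-boot-time-reference loop can be sketched as follows. The block contents are hypothetical, and the delay between block reads is omitted for brevity:

```python
import hashlib

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

# Reference hashes generated during the secure booting sequence,
# one per memory block (hypothetical block contents).
memory_blocks = [b"block A contents", b"block B contents", b"block C contents"]
reference = [digest(b) for b in memory_blocks]

def integrity_sweep(blocks: list[bytes], reference: list[str]) -> list[int]:
    """Re-hash each block and return the indices whose digests no longer
    match the boot-time reference. A real checker waits a specified
    period between blocks; that delay is omitted here."""
    return [i for i, b in enumerate(blocks) if digest(b) != reference[i]]

memory_blocks[1] = b"block B corrupted"   # simulate runtime tampering
mismatches = integrity_sweep(memory_blocks, reference)
print(mismatches)  # any mismatch is reported to the central response unit
```

A non-empty result would be reported as a security intrusion, and the central unit would then act according to policy.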

Central Security Breach Response

The security breach response unit monitors security intrusions. When intrusions are reported by hardware detectors (such as voltage, frequency, and temperature monitors), the response unit moves the SoC to a non-secure state. The non-secure state is characterized by certain restrictions that differentiate it from the secure state. Any further security breach reported to the response unit takes the SoC to the fail state (that is, a non-functional state). The SoC remains in the fail state until a power-on reset is issued. See Figure 5-35.


Figure 5-35 Central Security Breach Response

Building/Home Automation Systems

The networking of facility systems has enhanced the ability to automate the management of systems including the following:

  • Lighting

  • HVAC

  • Water systems

  • Security alarms

Bringing together the management of these seemingly disparate systems allows for the orchestration of their interaction in ways that were never before possible. When industry leaders discuss the Internet of Things (IoT), the success of building automation is often used as a real-world example of where connecting other devices, such as cars and street signs, to the network can lead. These systems can usually pay for themselves in the long run by managing the entire ecosystem more efficiently in real time than a human could ever do. If a wireless version of such a system is deployed, keep the following issues in mind:

  • Interference issues: Construction materials may prevent you from using wireless everywhere.

  • Security: Use encryption, separate the building automation systems (BAS) network from the IT network, and prevent routing between the networks.

  • Power: When Power over Ethernet (PoE) cannot provide power to controllers and sensors, ensure that battery life supports a reasonable lifetime and that procedures are created to maintain batteries.

IP Video

IP video systems provide a good example of the benefits of networking applications. These systems can be used for both surveillance of a facility and facilitating collaboration. An example of the layout of an IP surveillance system is shown in Figure 5-36.


Figure 5-36 IP Surveillance

IP video has also ushered in a new age of remote collaboration. It has saved a great deal of money on travel expenses while at the same time making more efficient use of time.

Issues to consider and plan for when implementing IP video systems include the following:

  • Expect a large increase in the need for bandwidth.

  • QoS needs to be configured to ensure performance.

  • Storage needs to be provisioned for the camera recordings. This could entail cloud storage, if desired. See Chapter 13, “Cloud and Virtualization Technology Integration,” for coverage of cloud issues.

  • The initial cost may be high.

HVAC Controllers

One of the best examples of the marriage of IP networks and a system that formerly operated in a silo is heating, ventilation, and air conditioning (HVAC) systems. HVAC systems usually use a protocol called Building Automation and Control Network (BACnet), which is an application, network, and media access control (MAC) layer communications service. It can operate over a number of layer 2 protocols, including Ethernet.

To use the BACnet protocol in an IP world, BACnet/IP (B/IP) was developed. The BACnet standard makes exclusive use of MAC addresses for all data links, including Ethernet, so to support IP, an equivalent addressing scheme is needed. BACnet/IP Annex J defines an equivalent MAC address composed of a 4-byte IP address followed by a 2-byte UDP port number. A range of 16 UDP port numbers has been registered, hexadecimal BAC0 through BACF.
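To make Annex J's addressing concrete, here is a small sketch that packs an IPv4 address and UDP port into the 6-byte B/IP address. The helper name is ours, not from the standard:

```python
import socket
import struct

def bip_address(ip: str, port: int = 0xBAC0) -> bytes:
    """Pack an IPv4 address and UDP port into the 6-byte B/IP
    equivalent MAC address defined by BACnet/IP Annex J."""
    if not 0xBAC0 <= port <= 0xBACF:
        raise ValueError("BACnet/IP ports fall in the range 0xBAC0-0xBACF")
    # 4-byte IP address followed by 2-byte port, network byte order
    return socket.inet_aton(ip) + struct.pack("!H", port)

addr = bip_address("192.168.1.10")
print(addr.hex())  # c0a8010abac0
```

The default port 0xBAC0 is 47808 in decimal, the port most BACnet/IP devices listen on.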

While putting these systems on an IP network makes them more manageable, it has become apparent that these networks should be kept separate from the internal network. In the infamous Target breach, hackers broke into the network of a third-party vendor that managed Target's HVAC systems. The intruders leveraged the trust and network access Target had granted that vendor and then pivoted from these internal systems into the point-of-sale systems, stealing credit and debit card numbers as well as other personal customer information.


Sensors are designed to gather information of some sort and make it available to a larger system, such as an HVAC controller. Sensors and their role in SCADA systems are covered in the section “Critical Infrastructure,” later in this chapter.

Physical Access Control Systems

Physical access control systems are any systems used to allow or deny physical access to the facility. They can include:

  • Mantrap: This is a series of two doors with a small room between them. The user is authenticated at the first door and then allowed into the room. At that point, additional verification occurs (such as a guard visually identifying the person), and then the person is allowed through the second door. Mantraps are typically used only in very high-security situations. They can help prevent tailgating. A mantrap design is shown in Figure 5-37.


    Figure 5-37 Mantrap

  • Proximity readers: These readers are door controls that read a proximity card from a short distance and are used to control access to sensitive rooms. These devices can also provide a log of all entries and exits.

  • IP-based access control and video systems: When using these systems, a network traffic baseline for each system should be developed so that unusual traffic can be detected.

Some higher-level facilities are starting to incorporate biometrics as well, especially in high-security environments where there are terrorist concerns.

A/V Systems

Audio/visual (A/V) systems can be completely connected to IP networks, providing the video conferencing capabilities discussed earlier, but they operate in other areas as well. Real-time IP production technology integrates network technology and high-definition serial digital interface (HD-SDI), the standard for HD video transmission. This is the technology used to support live video productions, such as sportscasts.

Securing these systems involves the same hardening procedures you should exercise everywhere, including the following:

  • Changing all default passwords

  • Applying password security best practices

  • Enabling encryption for video teleconference (VTC) sessions

  • Disabling insecure IP services (such as Telnet and HTTP)

  • Regularly updating firmware and applying patches

  • When remote access is absolutely required, instituting strict access controls (such as router access control lists and firewall rules) to limit privileged access to administrators only

Moreover, the following are some measures that apply specifically to these systems:

  • Disabling broadcast streaming

  • Disabling the far-end camera control feature (used to adjust a camera remotely)

  • Performing initial VTC settings locally, using the craft port (a direct physical connection to a device) or the menu on the system

  • Practicing good physical security (such as restricting access, turning off the device, and covering the camera lens when not in use)

  • Disabling any automatic answering feature

  • Disabling wireless capabilities when possible

  • Logically separating VTC from the rest of the IP network by using VLANs

Scientific/Industrial Equipment

Both scientific and industrial equipment have been moved to IP networks. In hospitals, more and more devices are now IP enabled. While this has provided many benefits, adding biomedical devices to a converged network can pose significant risks, such as viruses, worms, or other malware, which can severely impact overall network security and availability. It is essential to have a way to safely connect biomedical, guest, and IT devices to the IP network. You should isolate and protect specific biomedical devices from other hosts on the IP network to protect them from malware and provide the appropriate quality of service.

Critical Infrastructure

Industrial equipment and building system controls have mostly been moved to IP networks. In this section we look at two technologies driving this process.

Industrial control systems (ICS) is a general term that encompasses several types of control systems used in industrial production. The most widespread is supervisory control and data acquisition (SCADA). SCADA is a system that operates with coded signals over communication channels to provide control of remote equipment. It includes the following components:

  • Sensors: Sensors typically have digital or analog I/O, and their signals are not in a form that can be easily communicated over long distances.

  • Remote terminal units (RTUs): RTUs connect to the sensors, convert sensor data to digital data, and include telemetry hardware.

  • Programmable logic controllers (PLCs): PLCs connect to the sensors and convert sensor data to digital data; they do not include telemetry hardware.

  • Telemetry system: Such a system connects RTUs and PLCs to control centers and the enterprise.

  • Human interface: Such an interface presents data to the operator.

These systems should be securely segregated from other networks. The Stuxnet worm hit the SCADA systems used for the control and monitoring of industrial processes. SCADA components are considered privileged targets for cyber attacks because, by using cyber tools, it is possible to destroy an industrial process. This was the approach used in the attack on the nuclear plant in Natanz to interfere with the Iranian nuclear program.

Considering the criticality of the systems, physical access to SCADA-based systems must be strictly controlled. Systems that integrate IT security with physical access controls like badging systems and video surveillance should be deployed. In addition, the solution should be integrated with existing information security tools such as log management and IPS/IDS. A helpful publication by the National Institute of Standards and Technology (NIST), Special Publication 800-82, provides recommendations on ICS security. Issues with these emerging systems include the following:

  • Required changes to the system may void the warranty.

  • Products may be rushed to market, with security an afterthought.

  • The return on investment may take decades.

  • There is insufficient regulation regarding these systems.

Exam Preparation Tasks

As mentioned in the section “How to Use This Book” in the Introduction, you have a couple of choices for exam preparation: the exercises here and the practice exams in the Pearson IT Certification test engine.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 5-22 lists these key topics.


Table 5-22 Key Topics for Chapter 5

  • Table 5-1: Advantages and disadvantages of UTM

  • IDS implementations

  • Table 5-2: Advantages and disadvantages of NIPS devices

  • Table 5-3: Advantages and disadvantages of NIDS devices

  • Table 5-4: Advantages and disadvantages of INE devices

  • Table 5-5: Advantages and disadvantages of NAC devices

  • Table 5-6: Advantages and disadvantages of SIEM devices

  • Firewall types

  • Table 5-7: Advantages and disadvantages of firewall types

  • Table 5-8: Advantages and disadvantages of NGFWs

  • WLAN controller features

  • Table 5-9: Advantages and disadvantages of WAF placement options

  • DAM architectures

  • Table 5-10: Advantages and disadvantages of SSL

  • Table 5-11: Advantages and disadvantages of RDP

  • IPv6 transition mechanisms

  • Network authentication methods

  • Table 5-12: Authentication protocols

  • 802.1x components

  • Figure 5-5: 802.1x process

  • Table 5-13

  • Table 5-14: Placement of proxies

  • Table 5-15: Typical placement of firewall types

  • Table 5-16: Advantages and disadvantages of deep packet inspection

  • Table 5-17: RAID types

  • Network architecture planes

  • Table 5-18: Examples of logging configuration settings

  • Table 5-19: Attacks and mitigations

  • Figure 5-29: Switch spoofing

  • Figure 5-30: VLAN hopping

  • Table 5-20: Common TCP/UDP port numbers

  • Table 5-21: Advantages and disadvantages of using a DMZ

  • Figure 5-31: NAP steps

  • SCADA components

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:



access control list (ACL)

application-level proxy

BACnet (Building Automation and Control Network)

bastion host

Challenge Handshake Authentication Protocol (CHAP)

circuit-level proxy


configuration lockdown

control plane

data plane

database activity monitor (DAM)

dual stack

dual-homed firewall

Extensible Authentication Protocol (EAP)




Generic Routing Encapsulation (GRE)

hardware security module (HSM)

Hypertext Transfer Protocol Secure (HTTPS)

in-line network encryptor (INE)

Internet Protocol Security (IPsec)


kernel proxy firewall

load balancing

management plane

mean time between failures (MTBF)

mean time to repair (MTTR)

mesh network

network intrusion detection system (NIDS)

network intrusion prevention system (NIPS)

next-generation firewall (NGFW)

packet filtering firewall

Password Authentication Protocol (PAP)

protocol analyzer

proxy firewall

redundant array of inexpensive/independent disks (RAID)

Remote Desktop Protocol (RDP)

screened host

screened subnet

Secure Shell (SSH)

Secure Sockets Layer (SSL)

security information and event management (SIEM)


service-level agreement (SLA)

signature-based detection

SOCKS firewall

stateful firewall

stateful protocol analysis detection

statistical anomaly-based detection

storage area network (SAN)



three-legged firewall

trunk link

unified threat management (UTM)

virtual local area network (VLAN)

Virtual Network Computing (VNC)

virtual private network (VPN)

virtual switch

web application firewall (WAF)

wireless controller

Review Questions

1. Which of the following is not a command-line utility?

  • RDP

  • Telnet

  • SSH

  • nslookup

2. Which of the following is not a valid IPv6 address?

  • 2001:0db8:85a3:0000:0000:8a2e:0370:7334

  • 2001:0db8:85a3:0:0:8a2e:0370:7334

  • 2001:0db8:85a3::8a2e:0370:7334

  • 2001::85a3:8a2e::7334

3. Which IPv4-to-IPv6 transition mechanism assigns addresses and creates host-to-host tunnels for unicast IPv6 traffic when IPv6 hosts are located behind IPv4 network address translators?

  • GRE tunnels

  • 6to4

  • dual stack

  • Teredo

4. What port number does HTTPS use?

  • 80

  • 443

  • 23

  • 69

5. Which of the following is not a single protocol but a framework for port-based access control?

  • PAP

  • CHAP

  • EAP

  • RDP

6. Which of the following is not a component of 802.1x authentication?

  • supplicant

  • authenticator

  • authentication server

  • KDC

7. Which IDS type analyzes traffic and compares it to attack or state patterns that reside within the IDS database?

  • signature-based IDS

  • protocol anomaly-based IDS

  • rule- or heuristic-based IDS

  • traffic anomaly-based IDS

8. Which of the following applies rule sets to an HTTP conversation?

  • HSM

  • WAF

  • SIEM

  • NIPS

9. Which DAM architecture uses a sensor attached to the database and continually polls the system to collect the SQL statements as they are being performed?

  • interception-based model

  • log-based model

  • memory-based model

  • signature-based model

10. Which form of HSM is specifically suited to mobile apps?

  • USB

  • serial

  • Ethernet

  • microSD
