Intrusion Detection Systems and Network Security


Our adversaries are very adept at hiding attacks in normal traffic. The only true way to protect our networks is to have an intrusion detection system.


In this chapter, you will learn how to

Apply the appropriate network tools to facilitate network security

Determine the appropriate use of tools to facilitate network security

Apply host-based security applications

An intrusion detection system (IDS) is a security system that detects inappropriate or malicious activity on a computer or network. Most organizations use their own approaches to network security, choosing the layers that make sense for them after they weigh the risks, potential for loss, costs, and manpower requirements.

The foundation for a layered network security approach usually starts with a well-secured system, regardless of the system’s function (whether it’s a user PC or a corporate e-mail server). A well-secured system uses up-to-date application and operating system patches, requires well-chosen passwords, runs the minimum number of services necessary, and restricts access to available services. On top of that foundation, you can add layers of protective measures such as antivirus products, firewalls, sniffers, and IDSs.

IDSs, which are to the network world what burglar alarms are to the physical world, are some of the more complicated and interesting types of network/data security devices. The main purpose of an IDS is to identify suspicious or malicious activity, note activity that deviates from normal behavior, catalog and classify the activity, and, if possible, respond to the activity.

History of Intrusion Detection Systems

Like much of the network technology we see today, IDSs grew from a need to solve specific problems. Like the Internet itself, the IDS concept came from U.S. Department of Defense–sponsored research. In the early 1970s, the U.S. government and military became increasingly aware of the need to protect the electronic networks that were becoming critical to daily operations. The U.S. government led the evolution of IDSs, including research sponsored by the U.S. Air Force, through the 1980s and 1990s. In the mid-to-late 1990s, commercial devices began to be marketed, and development shifted to the commercial sector.

IDS Overview

An IDS is somewhat like a burglar alarm. It watches the activity going on around it and tries to identify undesirable activity. IDSs are typically divided into two main categories, depending on how they monitor activity:

Host-based IDS (HIDS)  Examines activity on an individual system, such as a mail server, web server, or individual PC. It is concerned only with an individual system and usually has no visibility into the activity on the network or systems around it.

Network-based IDS (NIDS)  Examines activity on the network itself. It has visibility only into the traffic crossing the network link it is monitoring and typically has no idea of what is happening on individual systems.


Know the differences between host-based and network-based IDSs. A host-based IDS runs on a specific system (server or workstation) and looks at all the activity on that host. A network-based IDS sniffs traffic from the network and sees only activity that occurs on the network.

Whether it is network or host based, an IDS typically consists of several specialized components working together, as illustrated in Figure 13.1. These components are often logical and software based rather than physical and will vary slightly from vendor to vendor and product to product. Typically, an IDS has the following logical components:


Figure 13.1 Logical depiction of IDS components

Traffic collector (or sensor)  Collects activity/events for the IDS to examine. On an HIDS, this could be log files, audit logs, or traffic coming to or leaving a specific system. On a NIDS, this is typically a mechanism for copying traffic off the network link—basically functioning as a sniffer.

Analysis engine  Examines the collected network traffic and compares it to known patterns of suspicious or malicious activity stored in the signature database. The analysis engine is the “brains” of the IDS.

Signature database  A collection of patterns and definitions of known suspicious or malicious activity.

User interface and reporting  Interfaces with the human element, providing alerts when appropriate and giving the user a means to interact with and operate the IDS.


Tech Tip

IDS Signatures

An IDS relies heavily on its signature database, just like antivirus products rely on their virus definitions. If an attack is something completely new, an IDS might not recognize the traffic as malicious.

Let’s look at an example to see how all these components work together. Imagine a network intruder is scanning your organization for systems running a web server. The intruder launches a series of network probes against every IP address in your organization. The traffic from the intruder comes into your network and passes through the traffic collector (sensor), which forwards it to the analysis engine. The analysis engine examines and categorizes the traffic, identifying a large number of probes coming from the same outside IP address (the intruder). The analysis engine compares the observed behavior against the signature database and gets a match: probes sent to many different systems in a short period of time fit the pattern of a TCP port scan. The analysis engine generates an alarm that is passed off to the user interface and reporting mechanisms. The user interface generates a notification to the administrator (icon, log entry, and so on). The administrator sees the alert and can now decide what to do about the potentially malicious traffic.

Alarm storage is simply a repository of alarms the IDS has recorded. Most IDS products allow administrators to run customized reports that sift through the collected alarms for items the administrator is searching for, such as all the alarms generated by a specific IP address.
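The port-scan check in this example can be sketched in code. The following is a simplified, hypothetical analysis-engine check, not any vendor's implementation; the class name, threshold, and window values are invented for illustration:

```python
import time
from collections import defaultdict

# Hypothetical sketch: if one source probes many distinct (host, port) pairs
# within a short time window, flag it as a likely port scan.
SCAN_THRESHOLD = 20   # distinct targets before we call it a scan (assumed value)
WINDOW_SECONDS = 10   # sliding window length (assumed value)

class PortScanDetector:
    def __init__(self, threshold=SCAN_THRESHOLD, window=WINDOW_SECONDS):
        self.threshold = threshold
        self.window = window
        self.events = defaultdict(list)   # source IP -> [(timestamp, (dst, port)), ...]

    def observe(self, src, dst, port, ts=None):
        """Record one connection attempt; return True if src now looks like a scanner."""
        ts = time.time() if ts is None else ts
        history = self.events[src]
        history.append((ts, (dst, port)))
        # Keep only events inside the sliding window.
        self.events[src] = [(t, tgt) for t, tgt in history if ts - t <= self.window]
        distinct_targets = {tgt for _, tgt in self.events[src]}
        return len(distinct_targets) >= self.threshold

detector = PortScanDetector()
# One user repeatedly hitting one FTP server: not a scan.
assert not detector.observe("10.0.0.5", "10.0.1.1", 21, ts=0.0)
# An attacker sweeping port 80 across many hosts trips the alarm.
alarm = False
for i in range(25):
    alarm = detector.observe("203.0.113.9", f"10.0.1.{i}", 80, ts=float(i) * 0.1)
print("scan detected:", alarm)
```

A real engine would receive these events from the traffic collector and hand matches to the user interface and reporting components as alarms.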


Most IDSs can be tuned to fit a particular environment. Certain signatures can be turned off, telling the IDS not to look for certain types of traffic. For example, if you are operating in a pure UNIX/Linux environment, you may not wish to see Windows-based alarms, as they will not affect your systems. Additionally, the severity of the alarm levels can be adjusted depending on how concerned you are over certain types of traffic. Some IDSs also allow the user to exclude certain patterns of activity from specific hosts. In other words, you can tell the IDS to ignore the fact that some systems generate traffic that looks like malicious activity, because it really isn’t.
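Tuning of this kind is often just configuration data. The sketch below is a hypothetical illustration of disabling signatures and excluding known-benign hosts; the signature names and host addresses are invented:

```python
# Hypothetical tuning sketch: disable signatures that don't apply to the
# environment, and exclude trusted hosts from specific alarms.

class TunableIDS:
    def __init__(self, signatures):
        self.signatures = dict(signatures)   # signature ID -> enabled?
        self.exclusions = set()              # (host, signature ID) pairs to ignore

    def disable(self, sig_id):
        self.signatures[sig_id] = False      # e.g. Windows rules on a UNIX-only network

    def exclude(self, host, sig_id):
        self.exclusions.add((host, sig_id))  # e.g. a scanner appliance we run ourselves

    def should_alarm(self, host, sig_id):
        return self.signatures.get(sig_id, False) and (host, sig_id) not in self.exclusions

ids = TunableIDS({"win-rpc-probe": True, "unix-passwd-grab": True})
ids.disable("win-rpc-probe")                 # pure Linux shop: drop Windows alarms
ids.exclude("10.0.0.99", "unix-passwd-grab") # our own vulnerability scanner
print(ids.should_alarm("10.0.0.99", "unix-passwd-grab"))    # False: excluded host
print(ids.should_alarm("203.0.113.7", "unix-passwd-grab"))  # True: still watched elsewhere
```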

In addition to the network-versus-host distinction, some IDS vendors will further categorize an IDS based on how it performs the detection of suspicious or malicious traffic. The different models used are covered in the next section.

IDS Models

In addition to being divided along the host and network lines, IDSs are often classified according to the detection model they use: anomaly or misuse. For an IDS, a model is a method for examining behavior so that the IDS can determine whether that behavior is “not normal” or in violation of established policies.

An anomaly detection model is the more complicated of the two. In this model, the IDS must know what “normal” behavior on the host or network being protected really is. Once the “normal” behavior baseline is established, the IDS can then go to work identifying deviations from the norm, which are further scrutinized to determine whether or not that activity is malicious. Building the profile of normal activity is usually done by the IDS, with some input from security administrators, and can take days to months. The IDS must be flexible and capable enough to account for things such as new systems, new users, movement of information resources, and other factors, but be sensitive enough to detect a single user illegally switching from one account to another at 3 A.M. on a Saturday.


Anomaly detection looks for things that are out of the ordinary, such as a user logging in when they’re not supposed to or unusually high network traffic into and out of a workstation.


Anomaly detection identifies deviations from normal behavior.

Anomaly detection was developed to make the system capable of dealing with variations in traffic and better able to determine which activity patterns are malicious. A perfectly functioning anomaly-based system would be able to ignore patterns from legitimate hosts and users but still identify those patterns as suspicious should they come from a potential attacker. Unfortunately, most anomaly-based systems suffer from extremely high false positives, especially during the “break-in” period while the IDS is learning the network. On the other hand, an anomaly-based system is not restricted to a specific signature set and is far more likely to identify a new exploit or attack tool that would go unnoticed by a traditional IDS.

A misuse detection model is a little simpler to implement, and therefore it’s the more popular of the two models. In a misuse detection model, the IDS looks for suspicious activity or activity that violates specific policies and then reacts as it has been programmed to do. This reaction can be an alarm, e-mail, router reconfiguration, or TCP reset message. Technically, misuse detection is the more efficient model, as it takes fewer resources to operate, does not need to learn what “normal” behavior is, and will generate an alarm whenever a pattern is successfully matched. However, the misuse model’s greatest weakness is its reliance on a predefined signature base—any activity, malicious or otherwise, that the misuse-based IDS does not have a signature for will go undetected. Despite that drawback and because it is easier and cheaper to implement, most commercial IDS products are based on the misuse detection model.


Misuse detection looks for things that violate policy, such as a denial-of-service attack launched at your web server or an attacker attempting to brute-force an SSH session.

Some analysts break down IDS models even further into four categories, depending on how the IDS operates and detects malicious traffic (the same models apply to intrusion prevention systems as well—both NIPS and HIPS):

Behavior based  This model relies on a collected set of “normal behavior”: what should happen on the network and is considered “normal” or “acceptable” traffic. Behavior that does not fit into the “normal” activity categories or patterns is considered suspicious or malicious. This model can potentially detect zero-day or unpublished attacks but carries a high false-positive rate, as any new traffic pattern can be labeled as “suspect.”

Signature based  This model relies on a predefined set of patterns (called signatures). The IDS has to know ahead of time what behavior is considered “bad” before it can identify and act upon suspicious or malicious traffic. Signature-based systems work by matching signatures in the network traffic stream to defined patterns stored in the system. Signature-based systems can be very fast and precise, with low false-positive rates. Their weakness is that they rely on having accurate signature definitions beforehand, and as the number of signatures expands, scalability becomes an issue.

Anomaly based  This model is essentially the same as the behavior-based model. The IDS is first taught what “normal” traffic looks like and then looks for deviations from those “normal” patterns. An anomaly is a deviation from an expected pattern or behavior. Specific anomalies can also be defined, such as Linux commands sent to Windows-based systems, and an AI-based engine can expand the utility of specific definitions.

Heuristic  This model uses artificial intelligence to detect intrusions and malicious traffic. A heuristic model is typically implemented through algorithms that help an IDS decide whether or not a traffic pattern is malicious. For example, a URL containing 10 or more repeating instances of the same character may be considered “bad” traffic as a single signature. With a heuristic model, the IDS understands that if having 10 repeating characters is bad, then having 11 is still bad, and having 20 is even worse. This implementation of fuzzy logic allows this model to fall somewhere between the signature-based and behavior-based models.
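The repeating-character heuristic described above can be sketched as a small scoring function. This is an illustrative toy, not a real product's algorithm; the threshold and scoring formula are assumptions:

```python
import re

# Toy heuristic: instead of a single yes/no signature for "10 repeated
# characters in a URL," score the URL so longer runs look progressively worse.

def longest_run(url):
    """Length of the longest run of one repeated character in the URL."""
    best = 0
    for match in re.finditer(r"(.)\1*", url):
        best = max(best, len(match.group(0)))
    return best

def heuristic_verdict(url, bad_at=10):
    run = longest_run(url)
    if run < bad_at:
        return "ok", 0.0
    # Score grows with run length: 10 repeats is bad, 20 is worse.
    return "suspicious", min(1.0, (run - bad_at + 1) / bad_at)

print(heuristic_verdict("/index.html"))           # ('ok', 0.0)
print(heuristic_verdict("/cgi-bin/" + "A" * 12))  # suspicious, low score
print(heuristic_verdict("/cgi-bin/" + "A" * 25))  # suspicious, maximum score
```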


As you have probably deduced from the discussion so far, one of the critical elements of any good IDS is the signature database—the set of patterns the IDS uses to determine whether or not activity is potentially hostile. Signatures can be very simple or remarkably complicated, depending on the activity they are trying to highlight. In general, signatures can be divided into two main groups, depending on what the signature is looking for: content-based signatures and context-based signatures.


Tech Tip

Advanced IDS Rules

IDSs/IPSs make use of an analysis engine that uses rules to determine whether or not an event of interest has occurred. These rules may be simple signature-based rules, such as Snort rules, or more complex Bayesian rules associated with heuristic/behavioral or anomaly-based systems. Rules are the important part of the NIDS/NIPS capability equation—without an appropriate rule, the system will not detect the desired condition. When new threats are discovered, one of the things that must be updated is the ruleset, adding rules to enable their detection.

Content-based signatures are generally the simplest. They are designed to examine the content of such things as network packets or log entries. Content-based signatures are typically easy to build and look for simple things, such as a certain string of characters or a certain flag set in a TCP packet. Here are some example content-based signatures:

Matching the characters “/etc/passwd” in a Telnet session  On a Linux system, the names of valid user accounts (and sometimes the passwords for those user accounts) are stored in a file called passwd located in the etc directory.

Matching the characters “to: decode” in the header of an e-mail message  On certain older versions of sendmail, sending an e-mail message to “decode” would cause the system to execute the contents of the e-mail.
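Content-based matching can be as simple as searching a payload for fixed byte patterns. The sketch below mirrors the two examples above; the signature names and the alert format are invented for illustration:

```python
# Minimal sketch of content-based matching: scan a payload for fixed byte
# patterns. The two patterns mirror the examples in the text.

CONTENT_SIGNATURES = {
    "telnet-passwd-read": b"/etc/passwd",
    "sendmail-decode":    b"to: decode",
}

def match_content(payload: bytes):
    """Return the names of all content signatures found in the payload."""
    return [name for name, pattern in CONTENT_SIGNATURES.items() if pattern in payload]

print(match_content(b"USER alice\r\ncat /etc/passwd\r\n"))  # ['telnet-passwd-read']
print(match_content(b"Subject: hi\r\nto: decode\r\n"))      # ['sendmail-decode']
print(match_content(b"GET /index.html HTTP/1.0\r\n"))       # []
```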

Context-based signatures are generally more complicated, as they are designed to match large patterns of activity and examine how certain types of activity fit into the other activities going on around them. Context-based signatures generally address the question, How does this event compare to other events that have already happened or might happen in the near future? Context-based signatures are more difficult to analyze and take more resources to match, as the IDS must be able to “remember” past events to match certain context signatures. Here are some examples of context-based signatures:

Match a potential intruder scanning for open web servers on a specific network. A potential intruder may use a port scanner to look for any systems accepting connections on port 80. To match this signature, the IDS must analyze all attempted connections to port 80 and then be able to determine which connection attempts are coming from the same source but are going to multiple, different destinations.

Identify a Nessus scan. Nessus is a vulnerability scanner that allows security administrators (and potential attackers) to quickly examine systems for vulnerabilities. Depending on the tests chosen, Nessus typically performs the tests in a certain order, one after the other. To be able to determine the presence of a Nessus scan, the IDS must know which tests Nessus runs as well as the typical order in which the tests are run.

Identify a ping flood attack. A single Internet Control Message Protocol (ICMP) packet on its own is generally regarded as harmless, certainly not worthy of an IDS signature. Yet thousands of ICMP packets coming to a single system in a short period of time can have a devastating effect on the receiving system. By flooding a system with thousands of valid ICMP packets, an attacker can keep a target system so busy it doesn’t have time to do anything else—a very effective denial-of-service attack. To identify a ping flood, the IDS must recognize each ICMP packet and keep track of how many ICMP packets different systems have received in the recent past.
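The ping flood example shows why context-based signatures need memory: each packet is harmless alone, so the detector must remember recent traffic per destination. A minimal sketch, with the threshold and window values assumed for illustration:

```python
from collections import defaultdict, deque

# Hypothetical context-based signature: alarm only when the count of ICMP
# packets to one destination inside a short window crosses a threshold.

class PingFloodDetector:
    def __init__(self, threshold=100, window=1.0):
        self.threshold = threshold   # packets per window before alarming (assumed)
        self.window = window         # window length in seconds (assumed)
        self.recent = defaultdict(deque)   # destination IP -> timestamps of ICMP packets

    def observe_icmp(self, dst, ts):
        q = self.recent[dst]
        q.append(ts)
        while q and ts - q[0] > self.window:
            q.popleft()              # forget packets that fell out of the window
        return len(q) >= self.threshold

det = PingFloodDetector()
# A routine ping every second never alarms.
assert not any(det.observe_icmp("10.0.0.1", float(t)) for t in range(10))
# 150 ICMP packets to one host inside a second does.
flood = any(det.observe_icmp("10.0.0.2", t * 0.005) for t in range(150))
print("ping flood detected:", flood)
```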


You should note the differences between content-based and context-based signatures. Content-based signatures match specific content, such as a certain string or series of characters (matching the string /etc/passwd in a Telnet session). Context-based signatures match a pattern of activity based on the other activity around it, such as a port scan.

To function, the IDS must have a decent signature base with examples of known, undesirable activity that it can use when analyzing traffic or events. Any time an IDS matches current events against a signature, the IDS could be considered successful, as it has correctly matched the current event against a known signature and reacted accordingly (usually with an alarm or alert of some type).

False Positives and False Negatives

Viewed in its simplest form, an IDS is really just looking at activity (be it host based or network based) and matching it against a predefined set of patterns. When it matches activity to a specific pattern, the IDS cannot know the true intent behind that activity—whether it is benign or hostile—and therefore it can react only as it has been programmed to do. In most cases, this means generating an alert that must then be analyzed by a human who tries to determine the intent of the traffic from whatever information is available.

When an IDS matches a pattern and generates an alarm for benign traffic, meaning the traffic was not hostile and not a threat, this is called a false positive. In other words, the IDS matched a pattern and raised an alarm when it didn’t really need to do so. Keep in mind that the IDS can only match patterns and has no ability to determine intent behind the activity, so in some ways this is an unfair label. Technically, the IDS is functioning correctly by matching the pattern, but from a human standpoint this is not information the analyst needed to see, as it does not constitute a threat and does not require intervention.


To reduce the generation of false positives, most administrators tune the IDS. “Tuning” an IDS is the process of configuring the IDS so that it works in your specific environment—generating alarms for malicious traffic and not generating alarms for traffic that is “normal” for your network. Effectively tuning an IDS can result in significant reductions in false-positive traffic.

An IDS is also limited by its signature set—it can match only activity for which it has stored patterns. Hostile activity that does not match an IDS signature and therefore goes undetected is called a false negative. In this case, the IDS is not generating any alarms, even though it should be, giving a false sense of security.
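The four possible outcomes of pattern matching versus actual intent can be summarized in a few lines of code; this is purely illustrative, and in practice the "hostile" label is ground truth that an analyst must establish by hand:

```python
# The four outcomes when an IDS alarm is compared against what the traffic
# really was. Only a human (or later investigation) can supply "hostile".

def classify(alarmed: bool, hostile: bool) -> str:
    if alarmed and hostile:
        return "true positive"    # alarm on a real attack: the IDS did its job
    if alarmed and not hostile:
        return "false positive"   # alarm on benign traffic: noise for the analyst
    if not alarmed and hostile:
        return "false negative"   # missed attack: a false sense of security
    return "true negative"        # benign traffic, no alarm

print(classify(alarmed=True,  hostile=False))   # false positive
print(classify(alarmed=False, hostile=True))    # false negative
```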

Network-Based IDSs

Network-based IDSs (NIDSs) actually came along a few years after host-based systems. After running host-based systems for a while, many organizations grew tired of the time, energy, and expense involved with managing the first generation of these systems—the host-based systems were not centrally managed, there was no easy way to correlate alerts between systems, and false-positive rates were high. The desire for a “better way” grew along with the amount of interconnectivity between systems and, consequently, the amount of malicious activity coming across the networks themselves. This fueled development of a new breed of IDS designed to focus on the source for a great deal of the malicious traffic—the network itself.


Tech Tip

Network Visibility

A network IDS has to be able to see traffic to find the malicious traffic. Encrypted traffic such as SSH and HTTPS sessions must be decrypted before a network IDS can examine them.

The NIDS integrated very well into the concept of perimeter security. More and more companies began to operate their computer security like a castle or military base (see Figure 13.2), with attention and effort focused on securing and controlling the ways in and out—the idea being that if you could restrict and control access at the perimeter, you didn’t have to worry as much about activity inside the organization. Even though the idea of a security perimeter is somewhat flawed (many security incidents originate inside the perimeter), it caught on very quickly, as it was easy to understand and devices such as firewalls, bastion hosts, and routers were available to define and secure that perimeter. The best way to secure the perimeter from outside attack is to reject all traffic from external entities, but this is impossible and impractical to do, so security personnel needed a way to let traffic in but still be able to determine whether or not the traffic was malicious. This is the problem that NIDS developers were trying to solve.


Figure 13.2 Network perimeters are a little like castles—firewalls and NIDSs form the gates and guards to keep malicious traffic out.

As its name suggests, a NIDS focuses on network traffic—the bits and bytes traveling along the cables and wires that interconnect the systems. A NIDS must examine the network traffic as it passes by and be able to analyze traffic according to protocol, type, amount, source, destination, content, traffic already seen, and other factors. This analysis must happen quickly, and the NIDS must be able to handle traffic at whatever speed the network operates to be effective.

NIDSs are typically deployed so that they can monitor traffic in and out of an organization’s major links: connections to the Internet, remote offices, partners, and so on. Like host-based systems, NIDSs look for certain activities that typify hostile actions or misuse, such as the following:

Denial-of-service attacks

Port scans or sweeps

Malicious content in the data payload of a packet or packets

Vulnerability scanning

Trojans, viruses, or worms

Tunneling

Brute force attacks

In general, most NIDSs operate in a fairly similar fashion. Figure 13.3 shows the logical layout of a NIDS. By considering the function and activity of each component, you can gain some insight into how a NIDS operates.


Figure 13.3 Network IDS components

In the simplest form, a NIDS has the same major components: traffic collector, analysis engine, reports, and a user interface.

In a NIDS, the traffic collector is specifically designed to pull traffic from the network. This component usually behaves in much the same way as a network traffic sniffer—it simply pulls every packet it can see off the network to which it is connected. In a NIDS, the traffic collector will logically attach itself to a network interface card (NIC) and instruct the NIC to accept every packet it can. A NIC that accepts and processes every packet, regardless of the packet’s origin and destination, is said to be in promiscuous mode.
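Promiscuous mode can be illustrated with a simulated collector; the frames here are plain dictionaries standing in for captured packets, and the MAC addresses are invented:

```python
# Simulated NIDS traffic collector. A normal NIC keeps only frames addressed
# to its own MAC (or broadcast); in promiscuous mode it passes every frame
# on the link up to the sensor.

OWN_MAC = "aa:bb:cc:dd:ee:01"
BROADCAST = "ff:ff:ff:ff:ff:ff"

def collect(frames, promiscuous):
    """Return the frames the collector would hand to the analysis engine."""
    if promiscuous:
        return list(frames)   # sensor sees everything on the link
    return [f for f in frames if f["dst_mac"] in (OWN_MAC, BROADCAST)]

traffic = [
    {"dst_mac": OWN_MAC,             "summary": "frame for this host"},
    {"dst_mac": "aa:bb:cc:dd:ee:02", "summary": "frame for another host"},
    {"dst_mac": BROADCAST,           "summary": "broadcast frame"},
]

print(len(collect(traffic, promiscuous=False)))  # 2: own frames plus broadcast
print(len(collect(traffic, promiscuous=True)))   # 3: every frame on the link
```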


Tech Tip

Another Way to Look at NIDSs

In its simplest form, a NIDS is a lot like a motion detector and a video surveillance system rolled into one. The NIDS notes the undesirable activity, generates an alarm, and records what happens.

The analysis engine in a NIDS serves the same function as its host-based counterpart, with some substantial differences. The network analysis engine must be able to collect packets and examine them individually or, if necessary, reassemble them into an entire traffic session. The patterns and signatures being matched are far more complicated than host-based signatures, so the analysis engine must be able to remember what traffic preceded the traffic currently being analyzed so that it can determine whether or not that traffic fits into a larger pattern of malicious activity. Additionally, the network-based analysis engine must be able to keep up with the flow of traffic on the network, rebuilding network sessions and matching patterns in real time.


Cross Check

NIDS and Encrypted Traffic

You learned about encrypted traffic in Chapter 5, so check your memory with these questions. What is SSH? What is a one-time pad? Can you name at least three different algorithms?

The NIDS signature database is usually much larger than that of a host-based system. When examining network patterns, the NIDS must be able to recognize traffic targeted at many different applications and operating systems as well as traffic from a wide variety of threats (worms, assessment tools, attack tools, and so on). Some of the signatures themselves can be quite large, as the NIDS must look at network traffic occurring in a specific order over a period of time to match a particular malicious pattern.

Using the lessons learned from early host-based systems, NIDS developers modified the logical component design somewhat to distribute the user interface and reporting functions. Because many companies had more than one network link, they needed an IDS capable of handling multiple links in many different locations. The early IDS vendors solved this dilemma by dividing the components and assigning them to separate entities. The traffic collector, analysis engine, and signature database were bundled into a single entity, usually called a sensor or appliance. The sensors would report to and be controlled by a central system or master console. This central system, shown in Figure 13.4, consolidated alarms and provided the user interface and reporting functions that allowed users in one location to manage, maintain, and monitor sensors deployed in a variety of remote locations.


Figure 13.4 Distributed network IDS components

By creating separate components designed to work together, the NIDS developers were able to build a more capable and flexible system. With encrypted communications, network sensors could be placed around both local and remote perimeters and still be monitored and managed securely from a central location. Placement of the sensors very quickly became an issue for most security personnel, as the sensors obviously had to have visibility of the network traffic in order to analyze it. Because most organizations with NIDSs also had firewalls, the location of the NIDS relative to the firewall had to be considered as well. Placed before the firewall, as shown in Figure 13.5, the NIDS will see all traffic coming in from the Internet, including attacks against the firewall itself. This includes traffic that the firewall stops and does not permit into the corporate network. With this type of deployment, the NIDS sensor will generate a large number of alarms (including alarms for traffic that the firewall would stop). This tends to overwhelm the human operators managing the system.


Figure 13.5 NIDS sensor placed in front of firewall

Placed after the firewall, as shown in Figure 13.6, the NIDS sensor sees and analyzes the traffic that is being passed through the firewall and into the corporate network. Although this does not allow the NIDS to see attacks against the firewall, it generally results in far fewer alarms and is the most popular placement for NIDS sensors.


Figure 13.6 NIDS sensor placed behind firewall

As you already know, NIDSs examine the network traffic for suspicious or malicious activity. Here are two examples of suspicious traffic to illustrate the operation of a NIDS:

Port scan  A port scan is a reconnaissance activity a potential attacker uses to find out information about the systems they want to attack. Using any of a number of tools, the attacker attempts to connect to various services (web, FTP, SMTP, and so on) to see if they exist on the intended target. In normal network traffic, a single user might connect to the FTP service provided on a single system. During a port scan, an attacker may attempt to connect to the FTP service on every system. As the attacker’s traffic passes by the IDS, the IDS will notice this pattern of attempting to connect to different services on different systems in a relatively short period of time. When the IDS compares the activity to its signature database, it will very likely match this traffic against the port scanning signature and generate an alarm.

Ping of death  Toward the end of 1996, it was discovered that certain operating systems, such as Windows, could be crashed by sending a very large Internet Control Message Protocol (ICMP) echo request packet to them. This is a fairly simple traffic pattern for a NIDS to identify, as it simply has to look for ICMP packets over a certain size.
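The ping-of-death signature reduces to a size check. The sketch below is illustrative, using the IPv4 maximum datagram size of 65,535 bytes as the cutoff:

```python
# Minimal ping-of-death check: flag any ICMP packet whose total reassembled
# size exceeds the IPv4 maximum datagram size.

IPV4_MAX = 65535   # maximum legal IPv4 datagram size in bytes

def is_ping_of_death(protocol: str, total_length: int) -> bool:
    return protocol == "icmp" and total_length > IPV4_MAX

print(is_ping_of_death("icmp", 64))     # False: an ordinary ping
print(is_ping_of_death("icmp", 65600))  # True: oversized echo request
print(is_ping_of_death("tcp", 65600))   # False: not ICMP
```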


Port scanning activity is rampant on the Internet. Most organizations with NIDSs see hundreds or thousands of port scan alarms every day from sources around the world. Some administrators reduce the alarm level of port scan alarms or ignore port scanning traffic because there is simply too much traffic to track down and respond to each alarm.

Advantages of a NIDS

A NIDS has certain advantages that make it a good choice for certain situations:

Providing IDS coverage requires fewer systems. With a few well-placed NIDS sensors, you can monitor all the network traffic going in and out of your organization. Fewer sensors usually equate to less overhead and maintenance, meaning you can protect the same number of systems at a lower cost.

Deployment, maintenance, and upgrade costs are usually lower. The fewer systems that have to be managed and maintained to provide IDS coverage, the lower the cost to operate the IDS. Upgrading and maintaining a few sensors is usually much cheaper than upgrading and maintaining hundreds of host-based processes.

A NIDS has visibility into all network traffic and can correlate attacks among multiple systems. Well-placed NIDS sensors can see the “big picture” when it comes to network-based attacks. The network sensors can tell you whether attacks are widespread and unorganized or focused and concentrated on specific systems.

Disadvantages of a NIDS

A NIDS has certain disadvantages:

It is ineffective when traffic is encrypted. When network traffic is encrypted from application to application or system to system, a NIDS sensor will not be able to examine that traffic. With the increasing popularity of encrypted traffic, this is becoming a bigger problem for effective IDS operations.

It can’t see traffic that does not cross it. The IDS sensor can examine only traffic crossing the network link it is monitoring. With most IDS sensors being placed on perimeter links, traffic traversing the internal network is never seen.

It must be able to handle high volumes of traffic. As network speeds continue to increase, the network sensors must be able to keep pace and examine the traffic as quickly as it can pass the network. When NIDSs were introduced, 10Mbps networks were the norm. Now 100Mbps and even 1Gbps networks are commonplace. This increase in traffic speeds means IDS sensors must be faster and more powerful than ever before.

It doesn’t know about activity on the hosts themselves. NIDSs focus on network traffic. Activity that occurs on the hosts themselves will not be seen by a NIDS.


Tech Tip

TCP Reset

The most common defensive ability for an active NIDS is to send a TCP reset message. Within TCP, the reset message (RST) essentially tells both sides of the connection to drop the session and stop communicating immediately. While this mechanism was originally developed to cover situations such as systems accidentally receiving communications intended for other systems, the reset message works fairly well for NIDSs, but with one serious drawback: a reset message affects only the current session. Nothing prevents the attacker from coming back and trying again and again. Despite the “temporariness” of this solution, sending a reset message is usually the only defensive measure implemented on NIDS deployments, as the fear of blocking legitimate traffic and disrupting business processes, even for a few moments, often outweighs the perceived benefit of discouraging potential intruders.

Active vs. Passive NIDSs

Most NIDSs can be distinguished by how they examine the traffic and whether or not they interact with that traffic. On a passive system, the NIDS simply watches the traffic, analyzes it, and generates alarms. It does not interact with the traffic itself in any way, and it does not modify the defensive posture of the system to react to the traffic. A passive NIDS is very similar to a simple motion sensor—it generates an alarm when it matches a pattern, much as the motion sensor generates an alarm when it sees movement. An active NIDS contains all the same components and capabilities of the passive NIDS with one critical addition—the active NIDS can react to the traffic it is analyzing. These reactions can range from something simple, such as sending a TCP reset message to interrupt a potential attack and disconnect a session, to something complex, such as dynamically modifying firewall rules to reject all traffic from specific source IP addresses for the next 24 hours.

NIDS Tools

There are numerous examples of NIDS tools in the marketplace, from open source projects to commercial entries. Snort has been the de facto standard IDS engine since its creation in 1998. It has a large user base and has set the standard for many IDS elements, including rulesets and rule formats. Snort rules define the activities Snort will alert on and provide the flexible power behind the IDS platform. Snort rulesets are updated by a large, active community as well as by the Vulnerability Research Team (VRT) at Sourcefire, the company behind Snort. Snort VRT rulesets are available to subscribers and provide benefits such as same-day protection for Microsoft Patch Tuesday vulnerabilities. These rules are moved to the open community after 30 days.

A newer entrant to the IDS marketplace is Suricata. Suricata is an open source IDS that began with grant money from the U.S. government and is maintained by the Open Information Security Foundation (OISF). Suricata has one notable advantage over Snort: it supports multithreading, whereas Snort supports only single-threaded operation. Both of these systems are highly flexible and scalable, operating on both Windows and Linux platforms.


Tech Tip

Snort Rules

The basic format for Snort rules is a rule header followed by rule options, as shown here.
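The original illustration is not reproduced here; the following is a representative rule in the standard Snort format, with the header (action, protocol, source and destination addresses and ports, direction) followed by the options in parentheses. The message text and SID are hypothetical examples.

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 21 \
    (msg:"FTP root login attempt"; content:"USER root"; nocase; \
     sid:1000001; rev:1;)
```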


Images Host-Based IDSs

The very first IDSs were host based, designed to examine activity only on a specific host. A host-based IDS (HIDS) examines log files, audit trails, and network traffic coming into or leaving a specific host. HIDSs can operate in real time, looking for activity as it occurs, or in batch mode, looking for activity on a periodic basis. Host-based systems are typically self-contained, but many of the newer commercial products have been designed to report to and be managed by a central system. Host-based systems also take local system resources to operate. In other words, an HIDS will use up some of the memory and CPU cycles of the system it is protecting. Early versions of HIDSs ran in batch mode, looking for suspicious activity on an hourly or daily basis, and typically looked only for specific events in the system’s log files. As processor speeds increased, later versions of HIDSs looked through the log files in real time and even added the ability to examine the data traffic the host was generating and receiving.

Most HIDSs focus on the log files or audit trails generated by the local operating system. On UNIX systems, the examined logs usually include those created by syslog, such as messages, kernel logs, and error logs. On Windows systems, the examined logs are typically the three event logs: Application, System, and Security. Some HIDSs can cover specific applications, such as FTP or web services, by examining the logs produced by those specific applications or examining the traffic from the services themselves. Within the log files, the HIDS is looking for certain activities that typify hostile actions or misuse, such as the following:

Images   Logins at odd hours

Images   Login authentication failures

Images   Additions of new user accounts

Images   Modification or access of critical system files

Images   Modification or removal of binary files (executables)

Images   Starting or stopping processes

Images   Privilege escalation

Images   Use of certain programs

In general, most HIDSs operate in a very similar fashion. (Figure 13.7 shows the logical layout of an HIDS.) By considering the function and activity of each component, you can gain some insight into how HIDSs operate.


Figure 13.7 Host-based IDS components

As on any IDS, the traffic collector on an HIDS pulls in the information the other components, such as the analysis engine, need to examine. For most HIDSs, the traffic collector pulls data from information the local system has already generated, such as error messages, log files, and system files. The traffic collector is responsible for reading those files, selecting which items are of interest, and forwarding them to the analysis engine. On some HIDSs, the traffic collector also examines specific attributes of critical files, such as file size, date modified, and checksum.


Critical files are those that are vital to the system’s operation or overall functionality. They may be program (or binary) files, files containing user accounts and passwords, or even scripts to start or stop system processes. Any unexpected modifications to these files could mean the system has been compromised or modified by an attacker. By monitoring these files, the HIDS can warn users of potentially malicious activity.
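The attribute monitoring described above can be sketched in a few lines: record a baseline of size, modification time, and checksum for each critical file, then flag any attribute that differs on the next pass. This is a minimal illustration, not a production integrity monitor; the attribute set mirrors the ones the text mentions.

```python
import hashlib
import os

def file_attributes(path):
    """Capture the attributes an HIDS typically baselines for a critical file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    st = os.stat(path)
    return {"size": st.st_size, "mtime": st.st_mtime, "sha256": digest}

def detect_changes(baseline, current):
    """Return the names of any attributes that differ from the baseline."""
    return [key for key in baseline if baseline[key] != current[key]]
```

In practice the baseline would be captured at install time and stored off the monitored host, so an attacker cannot simply rewrite it to match their modifications.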

The analysis engine is perhaps the most important component of the HIDS, as it must decide what activity is “okay” and what activity is “bad.” The analysis engine is a sophisticated decision and pattern-matching mechanism—it looks at the information provided by the traffic collector and tries to match it against known patterns of activity stored in the signature database. If the activity matches a known pattern, the analysis engine can react, usually by issuing an alert or alarm. An analysis engine may also be capable of remembering how the activity it is looking at right now compares to traffic it has already seen or may see in the near future, so that it can match more complicated, multistep malicious activity patterns. An analysis engine must also be capable of examining traffic patterns as quickly as possible, because the longer it takes to match a malicious pattern, the less time the HIDS or human operator has to react to malicious traffic. Most HIDS vendors build a decision tree into their analysis engines to expedite pattern matching.

The signature database is a collection of predefined activity patterns that have already been identified and categorized—patterns that typically indicate suspicious or malicious activity. When the analysis engine has an activity or traffic pattern to examine, it compares that pattern to the appropriate signatures in the database. The signature database can contain anywhere from a few to a few thousand signatures, depending on the vendor, type of HIDS, space available on the system to store signatures, and other factors.

The user interface is the visible component of the HIDS—the part that humans interact with. The user interface varies widely, depending on the product and vendor, and could be anything from a detailed GUI to a simple command line. Regardless of the type and complexity, the interface is provided to allow the user to interact with the system: changing parameters, receiving alarms, tuning signatures and response patterns, and so on.


Tech Tip

Decision Trees

In computer systems, a tree is a data structure, each element of which is attached to one or more structures directly beneath it (the connections are called branches). Structures on the end of a branch without any elements below them are called leaves. Trees are most often drawn inverted, with the root at the top and all subsequent elements branching down from the root. Trees in which each element has no more than two elements below it are called binary trees. In IDSs, a decision tree is used to help the analysis engine quickly examine traffic patterns and eliminate signatures that don’t apply to the particular traffic or activity being examined, so that the fewest number of comparisons need to be made. For example, as shown in the illustration, the decision tree may contain a section that divides the activity into one of three subsections based on the origin of the activity (a log entry for an event taken from the system logs, a file change for a modification to a critical file, or a user action for something a user has done).


When the analysis engine looks at the activity pattern and starts down the decision tree, it must decide which path to follow. If it is a log entry, the analysis engine can then concentrate on only the signatures that apply to log entries, and it does not need to worry about signatures that apply to file changes or user actions. This type of decision tree allows the analysis engine to function much faster, as it does not have to compare activities to every signature in the database, just the signatures that apply to that particular type of activity. It is important to note that HIDSs can look at both activities occurring on the host itself and the network traffic coming into or leaving the host.
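The dispatch step described above can be sketched as a small lookup keyed by the activity's origin, so the engine compares an event against only the relevant branch of signatures. The type names and signature strings here are hypothetical.

```python
# Hypothetical signature database, grouped by activity origin so the
# analysis engine never compares an event against an irrelevant branch.
SIGNATURE_TREE = {
    "log_entry":   ["authentication failure", "session opened for user root"],
    "file_change": ["checksum mismatch on critical file"],
    "user_action": ["privilege escalation attempt"],
}

def analyze(activity_type, detail):
    """Follow one branch of the decision tree and return matching signatures."""
    branch = SIGNATURE_TREE.get(activity_type, [])
    return [sig for sig in branch if sig in detail]

# A log entry is compared against only two signatures instead of all four.
alerts = analyze("log_entry", "su: authentication failure for user bob")
```

A real engine would use regular expressions, stateful counters, and a deeper tree, but the principle is the same: eliminate whole branches before any pattern comparison is attempted.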

To better understand how an HIDS operates, take a look at the following examples from a UNIX system and a Windows system.

On a UNIX system, the HIDS is likely going to examine any of a number of system logs—basically, large text files containing entries about what is happening on the system. For this example, consider the following lines from the “messages” log on a Red Hat system:


In the first line beginning with “Jan 5,” you see a session being opened by a user named bob. This usually indicates that whoever owns the account bob has logged in to the system. On the next three lines beginning with “Jan 5,” you see authentication failures as bob tries to become root—the superuser account that can do anything on the system. In this case, user bob tries three times to become root and fails on each try. This pattern of activity could mean a number of different things—bob could be an admin who has forgotten the password for the root account, bob could be an admin and someone changed the root password without telling him, bob could be a user attempting to guess the root password, or an attacker could have compromised bob’s account and is now trying to compromise the root account on the system. In any case, our HIDS will work through its decision tree to determine whether an authentication failure in the message log is something it needs to examine. In this instance, when the HIDS examines these lines in the log, it will note the fact that three of the lines in the log match one of the patterns it has been told to look for (as determined by information from the decision tree and the signature database), and it will react accordingly, usually by generating an alarm or alert of some type that appears on the user interface or in an e-mail, page, or other form of message.
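A toy version of this signature check is shown below. The log lines are illustrative, modeled on the pattern the text describes; real "messages" entries vary by distribution and syslog configuration, and the three-failure threshold is an arbitrary example.

```python
import re

# Illustrative syslog lines of the kind described in the text.
LOG = """\
Jan  5 10:02:11 host sshd(pam_unix)[1234]: session opened for user bob by (uid=0)
Jan  5 10:03:01 host su(pam_unix)[1301]: authentication failure; logname=bob user=root
Jan  5 10:03:09 host su(pam_unix)[1302]: authentication failure; logname=bob user=root
Jan  5 10:03:15 host su(pam_unix)[1303]: authentication failure; logname=bob user=root
"""

FAILURE = re.compile(r"authentication failure.*user=root")

def count_root_failures(log_text):
    """Count failed attempts to become root, as an HIDS signature might."""
    return sum(1 for line in log_text.splitlines() if FAILURE.search(line))

if count_root_failures(LOG) >= 3:   # hypothetical signature threshold
    print("ALERT: repeated root authentication failures")
```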


Tech Tip

Analyst-Driven Log Analysis

Log analysis is the art of translating computer-generated logs into meaningful data. For example, a computer can’t always tell you if an administrator-level login at 3 A.M. on a Saturday is definitely a bad thing, but an analyst can. Human analysts can add value through the interpretation of information in context with other sources of information.

On a Windows system, the HIDS will likely examine the logs generated by the operating system. The three basic types of logs (Application, System, and Security) are similar to the logs on a UNIX system, though the Windows logs are not stored as text files and typically require a utility or application to read them. This example uses the Security log from a Windows system:


In the first three main lines of the Security log, you see Audit Failure entries for the Logon process. These indicate someone tried to log in to the system three times and failed each time (much like our UNIX example), and then succeeded on the fourth try. You won’t see the name of the account until you expand the log entry within the Windows Event Viewer tool, but for this example, assume it was the administrator account—the Windows equivalent of the root account. Here again, you see three login failures—if the HIDS has been programmed to look for failed login attempts, it will generate alerts when it examines these log entries.

Advantages of HIDSs

HIDSs have certain advantages that make them a good choice for certain situations:

Images   They can be very specific to an operating system and have more detailed signatures. An HIDS can be very specifically designed to run on a certain operating system or to protect certain applications. This narrow focus lets developers concentrate on the specific things that affect the particular environment they are trying to protect. With this type of focus, the developers can avoid generic alarms and develop much more specific, detailed signatures to identify malicious traffic more accurately.

Images   They can reduce false-positive rates. When running on a specific system, the HIDS process is much more likely to be able to determine whether or not the activity being examined is malicious. By more accurately identifying which activity is “bad,” the HIDS will generate fewer false positives (alarms generated when the traffic matches a pattern but is not actually malicious).

Images   They can examine data after it has been decrypted. With security concerns constantly on the rise, many developers are starting to encrypt their network communications. When designed and implemented in the right manner, an HIDS will be able to examine traffic that is unreadable to a network-based IDS. This particular ability is becoming more important each day as more and more websites start to encrypt all of their traffic.

Images   They can be very application specific. On a host level, the IDS can be designed, modified, or tuned to work very well on specific applications without having to analyze or even hold signatures for other applications that are not running on that particular system. Signatures can be built for specific versions of web server software, FTP servers, mail servers, or any other application housed on that host.

Images   They can determine whether or not an alarm may impact that specific system. The ability to determine whether or not a particular activity or pattern will really affect the system being protected assists greatly in reducing the number of generated alarms. Because the HIDS resides on the system, it can verify things such as patch levels, presence of certain files, and system state when it analyzes traffic. By knowing what state the system is in, the HIDS can more accurately determine whether an activity is potentially harmful to the system.

Disadvantages of HIDSs

HIDSs also have certain disadvantages that must be weighed in making the decision of whether to deploy this type of technology:

Images   The HIDS must have a process on every system you want to watch. You must have an HIDS process or application installed on every host you want to watch. To watch 100 systems, then, you would need to deploy 100 HIDSs, or remote agents.

Images   The HIDS can have a high cost of ownership and maintenance. Depending on the specific vendor and application, an HIDS can be fairly costly in terms of time and manpower to maintain. Unless some type of central console is used that allows for the maintenance of remote processes, administrators must maintain each HIDS process individually. Even with a central console, an HIDS deployment involves a large number of processes to maintain, software to update, and parameters to tune.

Images   The HIDS uses local system resources. To function, the HIDS must use CPU cycles and memory from the system it is trying to protect. Whatever resources the HIDS uses are no longer available for the system to perform its other functions. This becomes extremely important on applications such as high-volume web servers, where fewer resources usually means fewer visitors served and the need for more systems to handle expected traffic.

Images   The HIDS has a very focused view and cannot relate to activity around it. The HIDS has a limited view of the world, as it can see activity only on the host it is protecting. It has little to no visibility into traffic around it on the network or events taking place on other hosts. Consequently, an HIDS can tell you only if the system it is running on is under attack.

Images   The HIDS, if logging only locally, could be compromised or disabled. When an HIDS generates alarms, it typically stores the alarm information in a file or database of some sort. If the HIDS stores its generated alarm traffic on the local system, an attacker who is successful in breaking into the system might be able to modify or delete those alarms. This makes it difficult for security personnel to discover the intruder and conduct any type of post-incident investigation. A capable intruder may even be able to turn off the HIDS process completely.


A security best practice is to store or make a copy of log information, especially security-related log information, on a separate system. When a system is compromised, the attacker typically hides their tracks by clearing out any log files on the compromised system. If the log files are only stored locally on the compromised system, you’ll know an attacker was present (due to the empty log files) but you won’t know what they did or when they did it.

Active vs. Passive HIDSs

Most IDSs can be distinguished by how they examine the activity around them and whether or not they interact with that activity. This is certainly true for HIDSs. On a passive system, the HIDS is exactly that—it simply watches the activity, analyzes it, and generates alarms. It does not interact with the activity itself in any way, and it does not modify the defensive posture of the system to react to the traffic. A passive HIDS is similar to a simple motion sensor—it generates an alarm when it matches a pattern, much as the motion sensor generates an alarm when it sees movement.

An active IDS will contain all the same components and capabilities of the passive IDS with one critical exception—the active IDS can react to the activity it is analyzing. These reactions can range from something simple, such as running a script to turn a process on or off, to something as complex as modifying file permissions, terminating the offending processes, logging off specific users, and reconfiguring local capabilities to prevent specific users from logging in for the next 12 hours.

Resurgence and Advancement of HIDSs

The past few years have seen a strong resurgence in the use of HIDSs. With the great advances in processor power, the introduction of multicore processors, and the increased capacity of hard drives and memory systems, some of the traditional barriers to running an HIDS have been overcome. Combine those advances in technology with the widespread adoption of always-on broadband connections, the rise in the use of telecommuting, and a greater overall awareness of the need for computer security, and HIDSs start to become an attractive and sometimes effective solution for business and home users alike.

The latest generation of HIDSs has introduced new capabilities designed to stop attacks by preventing them from ever executing or accessing protected files in the first place, rather than relying on a specific signature set that only matches known attacks. The more advanced host-based offerings, which most vendors refer to as host-based intrusion prevention systems (HIPSs), combine the following elements into a single package:

Images   Integrated system firewall  The firewall component checks all network traffic passing into and out of the host. Users can set rules for what types of traffic they want to allow into or out of their system.

Images   Behavioral- and signature-based IDS  This hybrid approach uses signatures to match well-known attacks and generic patterns for catching “zero-day” or unknown attacks for which no signatures exist.

Images   Application control  This allows administrators to control how applications are used on the system and whether or not new applications can be installed. Controlling the addition, deletion, or modification of existing software can be a good way to control a system’s baseline and prevent malware from being installed.

Images   Enterprise management  Some host-based products are installed with an “agent” that allows them to be managed by and report back to a central server. This type of integrated remote management capability is essential in any large-scale deployment of host-based IDS/IPS.

Images   Malware detection and prevention  Some HIDSs/HIPSs include scanning and prevention capabilities that address spyware, malware, rootkits, and other malicious software.


Integrated security products can provide a great deal of security-related features in a single package. This is often cheaper and more convenient than purchasing a separate antivirus product, a firewall, and an IDS. However, integrated products are not without potential pitfalls—if one portion of the integrated product fails, the entire protective suite may fail. Symantec’s Endpoint Protection and McAfee’s Internet Security are examples of integrated, host-based protection products.


When you’re examining endpoint security solutions, one of the key differentiators is in what the system detects. There are single-purpose systems, such as antivirus, anti-malware, and data loss prevention (DLP) tools, while multipurpose systems such as EDR, firewalls, and HIDS/HIPS can look for a variety of types of items. The key to all this is in the definition of the rules for each product.

Images Intrusion Prevention Systems

An intrusion prevention system (IPS) monitors network traffic for malicious or unwanted behavior and can block, reject, or redirect that traffic in real time. Sound familiar? It should: while many vendors will argue that an IPS is a different animal from an IDS, the truth is that most IPSs are merely expansions of existing IDS capabilities. As a core function, an IPS must be able to monitor for and detect potentially malicious network traffic, which is essentially the same function as an IDS. However, an IPS does not stop at merely monitoring traffic—it must be able to block, reject, or redirect that traffic in real time to be considered a true IPS. It must be able to stop or prevent malicious traffic from having an impact. To qualify as an IDS, a system just needs to see and classify the traffic as malicious. To qualify as an IPS, a system must be able to do something about that traffic. In reality, most products that are called IDSs, including the first commercially available IDS, NetRanger, can interact with and stop malicious traffic, so the distinction between the two is often blurred.


The term intrusion prevention system was originally coined by Andrew Plato in marketing literature developed for NetworkICE, a company that was purchased by ISS and is now part of IBM. The term IPS has effectively taken the place of the term active IDS.

Like IDSs, most IPSs have an internal signature database to compare network traffic against known “bad” traffic patterns. IPSs can perform content-based inspections, looking inside network packets for unique packets, data values, or patterns that match known malicious patterns. Some IPSs can perform protocol inspection, in which the IPS decodes traffic and analyzes it as it would appear to the server receiving it. For example, many IPSs can do HTTP protocol inspection, so they can examine incoming and outgoing HTTP traffic and process it as an HTTP server would. The advantage here is that the IPS can detect and defeat popular evasion techniques such as encoding URLs because the IPS “sees” the traffic in the same way the web server would when it receives and decodes it. The IPS can also detect activity that is abnormal or potentially malicious for that protocol, such as passing an extremely large value (over 10,000 characters) to a login field on a web page.
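The URL-encoding example above can be demonstrated directly: a naive byte-for-byte match misses an encoded payload, while decoding the request the way the web server would exposes it. The signature string and URL here are hypothetical.

```python
from urllib.parse import unquote

# Hypothetical content signature an attacker might hide with URL encoding.
SIGNATURE = "/etc/passwd"

def matches_after_decode(url):
    """Decode the URL as the receiving server would, then match the signature."""
    return SIGNATURE in unquote(url)

evasive = "/cgi-bin/view?file=%2Fetc%2Fpasswd"
print(SIGNATURE in evasive)           # naive byte match misses the payload
print(matches_after_decode(evasive))  # protocol inspection catches it
```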

Unlike a traditional IDS, an IPS must sit inline (in the flow of traffic) to be able to interact effectively with the network traffic. Most IPSs can operate in “stealth mode” and do not require an IP address for the connections they are monitoring. When an IPS detects malicious traffic, it can drop the offending packets, reset incoming or established connections, generate alerts, quarantine traffic to/from specific IP addresses, or even block traffic from offending IP addresses on a temporary or permanent basis. As they are sitting inline, most IPSs can also offer rate-based monitoring to detect and mitigate denial-of-service attacks. With rate-based monitoring, the IPS can watch the amount of traffic traversing the network. If the IPS sees too much traffic coming into or going out from a specific system or set of systems, the IPS can intervene and throttle down the traffic to a lower and more acceptable level. Many IPSs perform this function by “learning” what are “normal” network traffic patterns with regard to the number of connections per second, amount of packets per connection, packets coming from or going to specific ports, and so on, and then comparing current traffic rates for network traffic (TCP, UDP, ARP, ICMP, and so on) to those established norms. When a traffic pattern reaches a threshold or varies dramatically from those norms, the IPS can react and intervene as needed.
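Rate-based monitoring reduces to a sliding-window count compared against a learned norm. A minimal sketch, with the window and threshold chosen arbitrarily for illustration:

```python
from collections import deque

class RateMonitor:
    """Sliding-window counter: flag traffic when the event rate exceeds a norm."""

    def __init__(self, window_seconds, threshold):
        self.window = window_seconds
        self.threshold = threshold
        self.events = deque()

    def record(self, timestamp):
        """Record one connection or packet; return True if the rate is abnormal."""
        self.events.append(timestamp)
        # Drop events that have aged out of the observation window.
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()
        return len(self.events) > self.threshold

# Hypothetical norm: more than 100 connections in 10 seconds is suspicious.
monitor = RateMonitor(window_seconds=10, threshold=100)
```

A production IPS learns the threshold per protocol and per host from observed traffic rather than using a fixed constant, and throttles rather than merely flagging, but the windowed comparison is the same.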


Tech Tip

Inline Network Devices

Two sensor placement methods can be employed: an inline sensor and a passive sensor. An inline sensor is one through which the data packets actually pass; a failure of an inline sensor blocks traffic flow. A passive sensor monitors a copy of the traffic, so the actual traffic does not flow through or depend on the sensor for connectivity. Some administrators choose to have their firewalls and IPSs fail “closed,” meaning that if the devices are not functioning correctly, all traffic is stopped until those devices can be repaired. Inline placement is also required for elements designed to interrupt traffic on occasion, such as an IPS, where the “P” refers to an active prevention element.

Like a traditional IDS, the IPS has a potential weakness when dealing with encrypted traffic. Traffic that is encrypted will typically pass by the IPS untouched (provided it does not trigger any non-content-related alarms such as rate-based alarms). To counter this problem, some IPS vendors are including the ability to decrypt Secure Sockets Layer (SSL) sessions for further inspection. To do this, some IPS solutions store copies of any protected web servers’ private keys on the sensor itself. When the IPS sees a session initiation request, it monitors the initial transactions between the server and the client. By using the server’s stored private keys, the IPS will be able to determine the session keys negotiated during the SSL session initiation. With the session keys, the IPS can decrypt all future packets passed between server and client during that web session. This gives the IPS the ability to perform content inspection on SSL-encrypted traffic.


The term wire speed refers to the theoretical maximum transmission rate of a cable or other medium and is based on a number of factors, including the properties of the cable itself and the connection protocol in use (in other words, how much data can be pushed through under ideal conditions).

You will often see IPSs (and IDSs) advertised and marketed by the amount of traffic they can process without dropping packets or interrupting the flow of network traffic. In reality, a network will never reach its hypothetical maximum transmission rate, or wire speed, due to errors, collisions, retransmissions, and other factors; therefore, a 1Gbps network is not actually capable of passing 1 Gbps of network traffic, even if all the components are rated to handle 1 Gbps. When used in a marketing sense, wire speed is the maximum throughput rate the networking or security device equipment can process without impacting that network traffic. For example, a 1Gbps IPS should be able to process, analyze, and protect 1 Gbps of network traffic without impacting traffic flow. IPS vendors often quote their products’ capacity as the combined throughput possible for all available ports on the IPS sensor—for example, a 10Gbps sensor may have 12 Gigabit Ethernet ports but is capable of handling only 10 Gbps of network traffic.


Tech Tip

Detection Controls vs. Prevention Controls

When securing your organization, especially your network perimeter and critical systems, you will likely have to make some choices as to what type of protective measures and controls you need to implement. For example, you may need to decide between detection controls (capabilities that detect and alert on suspicious or malicious activity) and prevention controls (capabilities that stop suspicious or malicious activity). Consider the differences between a traditional IDS and IPS. Although many IDSs have some type of response capability, their real purpose is to watch for activity and then alert when “hostile” activity is noted. On the other hand, an IPS is designed to block, thwart, and prevent that same “hostile” activity.

Parallel examples in the physical security space would be a camera and a security guard. A camera watches activity and can even generate alerts when motion is detected, but a camera cannot stop an intruder from breaking into a facility and stealing something—it only records and alerts. A security guard, however, has the ability to stop the intruder physically, either before they break into the facility or before they can leave with the stolen goods.

Images Network Security Monitoring

Network security monitoring (NSM) is the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions. Although an IDS will provide an indication of a rule being met or some other aspect, it typically provides a singular event. NSM is the process of collecting many different indications and then using these data points, and the context in which they are examined, to come to a more complete understanding of what is happening.

An example of an IDS alert is when an FTP session is opened on a non-FTP server in the enterprise (assuming you had a rule watching for this). What are you as a security analyst going to do with this information? It is a single point-in-time indication of something that has happened and that violates the rules, but what do you do? With NSM, in addition to the same indication of the FTP issue, you have additional data elements available to examine (assuming you are capturing and logging the correct data elements). You could look at the packet that created the alert and then, using this information along with a tool such as Wireshark, reconstruct the conversation and see what actually happened. Was this an intentional attack, or did the user simply enter the wrong server IP address?

Security Onion is a Linux distribution specifically aimed at NSM, and it comes with a whole host of tools preconfigured. Whereas an IDS is an important element in detecting bad activity on a system, NSM takes this considerably further, giving you tools and techniques that can provide greater insight into what is happening.

Images Deception and Disruption Technologies

Deception and disruption have become tools in the defender’s arsenal against advanced threats. Because a threat actor has limited information about how a system is architected, the addition of deceptive elements such as honeypots/nets can lead to situations where the adversary is discovered. Once an adversary is discovered, a campaign can be waged against them, including the use of additional deception elements to disrupt the attacker’s attack methodology. Deception adds a fake layer to your enterprise by placing decoy assets, fake data, and other artifacts in your enterprise. This fake technology is not part of your enterprise configurations, so no system or person should ever touch something fake unless they are actively seeking something or there is a misconfiguration.

Honeypots and Honeynets

As is often the case, one of the best tools for information security personnel has always been knowledge. To secure and defend a network and the information systems on that network properly, security personnel need to know what they are up against. What types of attacks are being used? What tools and techniques are popular at the moment? How effective is a certain technique? What sort of impact will this tool have on my network? Often this sort of information is passed through white papers, conferences, mailing lists, or even word of mouth. In some cases, the tool developers themselves provide much of the information in the interest of promoting better security for everyone.

Information is also gathered through examination and forensic analysis, often after a major incident has already occurred and information systems are already damaged. One of the most effective techniques for collecting this type of information is to observe activity firsthand—watching an attacker as they probe, navigate, and exploit their way through a network. To accomplish this without exposing critical information systems, security researchers often use something called a honeypot.

A honeypot, sometimes called a digital sandbox, is an artificial environment where attackers can be contained and observed without putting real systems at risk. A good honeypot appears to an attacker to be a real network consisting of application servers, user systems, network traffic, and so on, but in most cases it’s actually made up of one or a few systems running specialized software to simulate the user and network traffic common to most targeted networks. Figure 13.8 illustrates a simple honeypot layout in which a single system is placed on the network to deliberately attract attention from potential attackers.


Figure 13.8 Logical depiction of a honeypot

Figure 13.8 shows the security researcher’s view of the honeypot, while Figure 13.9 shows the attacker’s view. The security administrator knows that the honeypot, in this case, actually consists of a single system running software designed to react to probes, reconnaissance attempts, and exploits as if it were an entire network of systems. When the attacker connects to the honeypot, they are presented with an entire “virtual” network of servers and PCs running a variety of applications. In most cases, the honeypot will appear to be running versions of applications that are known to be vulnerable to specific exploits. All this is designed to provide the attacker with an enticing, hopefully irresistible, target.


Figure 13.9 Virtual network created by the honeypot

Any time an attacker has been lured into probing or attacking the virtual network, the honeypot records the activity for later analysis: what the attacker does, which systems and applications they concentrate on, what tools are run, how long the attacker stays, and so on. All this information is collected and analyzed in the hopes that it will allow security personnel to better understand and protect against the threats to their systems.


A honeypot is a system designed to attract potential attackers by pretending to be one or more systems with open network services.

There are many honeypots in use, specializing in everything from wireless to denial-of-service attacks; most are run by research, government, or law enforcement organizations. Why aren’t more businesses running honeypots? Quite simply, the time and cost are prohibitive. Honeypots take a lot of time and effort to manage and maintain, and even more effort to sort, analyze, and classify the traffic the honeypot collects. Unless they are developing security tools, most companies focus their limited security efforts on preventing attacks, and in many cases, companies aren’t even that concerned with detecting attacks as long as the attacks are blocked, are unsuccessful, and don’t affect business operations. Even though honeypots can serve as a valuable resource by luring attackers away from production systems and allowing defenders to identify and thwart potential attackers before they cause any serious damage, the costs and efforts involved deter many companies from using honeypots.

A honeynet is a collection of two or more honeypots. Larger, very diverse network environments can deploy multiple honeypots (thus forming a honeynet) when a single honeypot device does not provide enough coverage. Honeynets are often integrated into an organization-wide IDS/IPS because the honeynet can provide relevant information about potential attackers.


A honeyfile is a file that is designed to look like a real file on a server, but the data it possesses is fake. Honeyfiles serve as attractive targets to attackers. A honeyfile acts as a trap for attackers, and the data in the file can contain triggers to alert DLP solutions. Access to the files can be monitored as well. A variation of a honeyfile is a honeyrecord in a database. These records serve the same purpose: they are fake and are never used, but if they are ever copied, you know there is unauthorized activity.

Honeyfiles and honeyrecords can be comingled with legitimate files and records, making their discovery and exploitation more likely. These elements act as tripwires and can be tracked to alert to unauthorized activity.
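The tripwire idea behind honeyfiles and honeyrecords can be sketched in a few lines. In this minimal illustration, the canary values and the function name are hypothetical, not taken from any particular product; a real deployment would plant such values via a DLP or canary-token system:

```python
# Hypothetical canary values planted in honeyfiles and honeyrecords.
# These values are never used legitimately, so any appearance of one
# in outbound data indicates unauthorized access.
CANARY_VALUES = {
    "ACCT-9912-FAKE-0007",     # fake account number (honeyrecord)
    "jdoe.decoy@example.com",  # fake e-mail address (honeyrecord)
}

def canaries_found(outbound_data: str) -> set:
    """Return the set of canary values present in an outbound stream."""
    return {c for c in CANARY_VALUES if c in outbound_data}
```

Because the planted values are unique and unused, a match carries almost no false-positive risk, which is what makes honeyrecords such reliable tripwires.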

Fake Telemetry

When an intruder is on a system and realizes there is no other traffic, their first thought is that they are no longer in the real enterprise network. To prevent a lack of “normal” traffic from being a dead giveaway that they have entered a fake part of the network, fake telemetry is used. Fake telemetry is synthetic network traffic that resembles genuine communications, delivered at an appropriate volume to make honeynets and honeypots look real.


Fake telemetry is a deception technology used to make honeynets and honeypots look real and appealing to would-be attackers.

DNS Sinkhole

A DNS sinkhole is a DNS server that responds to specific DNS requests with false results. This results in the requester being sent to the wrong address, usually a nonroutable address. When a computer queries a DNS server to resolve a domain name, the server will give a result, if available; otherwise, it will send the resolution request to a higher-level DNS server. This means that the higher a DNS sinkhole sits in this chain, the more requests it will affect and the more beneficial effect it can provide. A typical DNS sinkhole is a standard DNS server that has been configured to return nonroutable addresses for all domains on the sinkhole list, so that every request results in a failure to reach the real site. Some of the larger botnets have been rendered unusable by top-level domain (TLD) sinkholes that span the entire Internet. DNS sinkholes are a useful tool for blocking malicious traffic, and they are used to combat bots and other malware that rely on DNS responses to communicate. A famous example was the use of a DNS sinkhole to halt the spread of the WannaCry malware in 2017.
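The sinkhole decision logic itself is simple. The sketch below uses an invented domain list and simulates upstream resolution with a lookup table; real sinkholes are implemented as zone configuration on a DNS server, not application code:

```python
# Minimal sketch of DNS sinkhole lookup logic (hypothetical domain list).
SINKHOLE_LIST = {"malware-c2.example", "botnet-update.example"}
NONROUTABLE = ""  # a common sinkhole answer

# Stand-in for normal recursive resolution to a higher-level server.
UPSTREAM = {"www.example.com": ""}

def resolve(domain: str) -> str:
    """Return a nonroutable address for sinkholed domains; otherwise
    fall through to (simulated) normal resolution."""
    if domain in SINKHOLE_LIST:
        return NONROUTABLE
    return UPSTREAM.get(domain, "NXDOMAIN")
```

The key property is that sinkholed answers are returned before any real recursion happens, so malware that queries a listed command-and-control domain never learns the real address.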


A DNS sinkhole is a deception and disruption technology that returns specific DNS requests with false results. DNS sinkholes can be used in both destructive and constructive ways. When used in a constructive fashion, a DNS sinkhole prevents users from accessing malicious domains.

Images Analytics

Big data analytics is currently all the rage in the IT industry with claims of how much value can be derived from large data sets. NIDS/NIPS as well as other detection equipment can certainly create large data sets, especially when connected to other data sources such as log files in a SIEM solution. (SIEM is covered next in this chapter.) Using analytics to increase accurate detection of desired events requires planning, testing, and NIDS/NIPS/SIEM solutions that support this level of functionality. In the past, being able to write Snort rules was all that was needed to have a serious NIDS/NIPS solution. Today, it is essential to integrate the data from NIDS/NIPS together with other security data to detect advanced persistent threats (APTs). Analytics is essential today, and tomorrow it will be artificial intelligence (AI) determining how to examine packets.

Images SIEM

Security information and event management (SIEM) systems are a combination of hardware and software designed to classify and analyze security data from numerous sources. Once considered suitable only for the largest of enterprises, SIEMs have become essential in almost all security organizations because of the large number of data sources associated with security. There is a wide range of vendor offerings in this space, from virtually free to systems large enough to handle any enterprise, with a budget to match. During an investigation, the SIEM system can provide a host of information concerning a user, what they have done, and so on. The fundamental purpose of a SIEM system is to provide alerts and relevant information to incident response teams that are investigating incidents. If something happens that initiates an investigation, and the SIEM system has no relevant information, this suggests that the SIEM and its component elements need better tuning to provide meaningful surveillance of the system for potential problems.


SIEMs allow you to identify, visualize, and monitor trends via alerts and a dashboard.

SIEM Dashboards

SIEM dashboards are the windows into the SIEM datastore, a collection of information that can tell you where attacks are occurring and provide a trail of breadcrumbs to show how the attacker got into the network and moved to where they are now. SIEM systems act as the information repository for information surrounding potential and actual intrusions.


Sensors are the devices that provide security data into the security data store. Regardless of where that data store is housed, the security information is important for investigators. Sensors don’t just happen; they have to be placed in the correct locations to collect information. Sensor placement begins with defining collection objectives. Where the data flows, where the information of value resides in a network, where adversaries can gain access, and what information you wish to collect are just some of the factors that go into designing sensor placement. Just as logs can provide a lot of useful information, they can also produce a lot of meaningless data. Sensors are no different. Packet capture sensors can record vital information for an investigation, but they have to be in the correct location (that is, have visibility with respect to the desired packets) while also avoiding common traffic areas where there is a lot of noise. To be properly prepared for future investigations, you need to properly design and place your sensors.


Sensitivity is the quality of being quick to detect or respond to slight changes, signals, or influences. As the purpose of a SIEM system is to alert operators to changes that indicate significant events, sensitivity to those events is important. The biggest problem with SIEMs and sensitivity is the tradeoff between false positives and false negatives. If you alert on too many possible conditions, you increase false positives and create operator fatigue. Wait for too much data, and you miss some, creating a false negative and an impression that the SIEM system doesn’t work. Adjusting the sensitivity until you have the right balance is a tough but important task.
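The sensitivity tradeoff can be made concrete with a toy anomaly-score model. The scores and threshold below are invented purely for illustration; they are not how any particular SIEM product scores events:

```python
def classify(scores, threshold):
    """Alert when an event's anomaly score meets the threshold."""
    return [s >= threshold for s in scores]

def tradeoff(benign_scores, malicious_scores, threshold):
    """Count false positives (benign events alerted on) and false
    negatives (malicious events missed) at a given threshold."""
    false_positives = sum(classify(benign_scores, threshold))
    false_negatives = sum(not alerted
                          for alerted in classify(malicious_scores, threshold))
    return false_positives, false_negatives
```

Lowering the threshold catches more attacks but alerts on more benign activity; raising it quiets the console but lets borderline attacks through. Tuning is the act of finding the threshold your operations team can live with.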


Trends are a series of data points that indicate a change over time. Trends can be increasing, decreasing, cyclical, or related to variability. What is important is that trends indicate some form of change. Not all forms of change are relevant to the SIEM system’s mission, and a key element is in understanding which changes are and which aren’t. Some changes are important in a direct fashion, such as failed logins. If the average number of failed logins is 20 per day, and suddenly you are getting 10,000 in an hour, that indicates something has changed. An attacker? A script with an error? It will take some investigation to find. What if those same failures were spread across four users, all system admins? Trends matter, but so does the information behind them. This makes alerting on multiple items with good comprehensive reports more useful than just an alert stating “this number is too high.” Context matters.
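The failed-login example above can be sketched as a simple baseline comparison. This is a minimal illustration with an invented multiplier; real SIEM trend detection uses richer statistics (seasonality, per-user baselines, and so on):

```python
def baseline_per_hour(daily_totals):
    """Average failed logins per hour, derived from historical daily totals."""
    return sum(daily_totals) / len(daily_totals) / 24

def is_anomalous(hourly_count, daily_totals, factor=10):
    """Flag an hour whose failed-login count far exceeds the baseline.
    The factor of 10 is an arbitrary illustration, not a recommendation."""
    return hourly_count > factor * baseline_per_hour(daily_totals)
```

With a history of roughly 20 failed logins per day, the baseline is under one per hour, so an hour with 10,000 failures trips the check immediately, while ordinary daily noise does not.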


Alerts are the primary method of communication between the SIEM system and operators. When conditions meet the rule requirements, the SIEM system can send an alert. The more information that can be provided in the alert (other related information, the context of the event, and so on), the better the alert. The key isn’t to tell a security engineer “something happened, go find out what it is,” but rather to steer the engineer in the correct direction with supplemental information that the operator can interpret and then devise a plan to investigate effectively.


Correlation is the process of establishing a relationship between two variables. However, as a wise scientist once stated, correlation is not causation, meaning that just because measurements trend together doesn’t mean one causes the other. There is frequently another element at play, some variable not being measured. Think about a series of failed logins coming from an IP address that was also rejected at a firewall for scanning activity. Or how about some access control failures, followed by a successful login with a different username from the same IP address in a short time period? Or a UDP packet with port 67 as the destination port, but the destination address is not one of your DHCP servers? Correlation is a means for a SIEM system to apply rules to combine data sources to fine-tune event detection.

Correlation is the connection of events based on some common basis. Things can correlate based on time, common events, or behaviors—the list can go on and on. Although correlation is not necessarily causation, it is still useful to look for patterns and then use these patterns to find future issues before they reach the end of their cycle. Correlation can identify things like suspicious IP addresses based on recent behavior. For instance, a correlation rule can identify port scanning, a behavior that in and of itself is not hostile, but also not normal; hence, future activity from that IP would be considered suspect.
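A minimal correlation rule can be sketched in code. The event format, time window, and threshold below are invented for illustration; a production SIEM expresses the same idea in its own rule language:

```python
from datetime import datetime, timedelta

def correlate_by_ip(events, window=timedelta(minutes=5), min_kinds=2):
    """Flag source IPs that generate at least `min_kinds` different
    event types within a sliding time window. `events` is a list of
    (timestamp, source_ip, event_type) tuples."""
    by_ip = {}
    for ts, ip, kind in events:
        by_ip.setdefault(ip, []).append((ts, kind))
    suspicious = set()
    for ip, items in by_ip.items():
        items.sort()  # order by timestamp
        for ts, _ in items:
            kinds = {k for t, k in items if ts <= t <= ts + window}
            if len(kinds) >= min_kinds:
                suspicious.add(ip)
                break
    return suspicious
```

An IP that produces, say, a failed login followed shortly by scanning activity is flagged, while an IP showing only a single event type is not. This mirrors the examples above: none of the individual events is decisive, but the combination is.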


SIEM event correlation logs are extremely useful because they can be used to identify malicious activity across a plethora of network devices and programs. This is data that otherwise may go unnoticed.


One of the key functions of a SIEM solution is the aggregation of security information sources. In this instance, aggregation refers to the collecting of information in a central place, in a common format, to facilitate analysis and decision making. The sources that can feed a SIEM solution are many, including system event logs, firewall logs, security application logs, and specific program feeds from security appliances. Having this material in a central location that facilitates easy exploration by a security analyst is very useful during incident response events.

Automated Alerting and Triggers

Through a set of rules and the use of analytical engines, SIEMs can identify specific predetermined patterns and either alert on them or react to them. Automated alerting can remove much of the time delay between specific activity and the security operations reaction. Consider this an IDS on steroids, because it can use external information in addition to current traffic information to provide a much richer pattern-matching environment. A trigger event, such as the previously mentioned scanning activity or the generation of access control list (ACL) failures in log events, can result in a connection being highlighted on an analyst’s workstation or, in some cases, an automated response.
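The rule-and-trigger idea can be sketched as a small rule table. The rule schema, event fields, and action names here are hypothetical; each SIEM product has its own rule language for exactly this pattern:

```python
# Hypothetical trigger rules mapping event patterns to actions.
RULES = [
    {"match": {"type": "port_scan"}, "action": "alert"},
    {"match": {"type": "acl_failure", "count_gte": 5}, "action": "block_ip"},
]

def evaluate(event):
    """Return the list of actions triggered by a single enriched event."""
    actions = []
    for rule in RULES:
        match = rule["match"]
        if event.get("type") != match.get("type"):
            continue
        if "count_gte" in match and event.get("count", 0) < match["count_gte"]:
            continue
        actions.append(rule["action"])
    return actions
```

A scan event produces an analyst alert, while repeated ACL failures cross a threshold and trigger an automated response, the two outcomes described above.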

Time Synchronization

Time synchronization is a common problem for computer systems. When multiple systems handle aspects of a particular transaction, a common time standard is essential if one is going to compare the logs from different systems. This problem becomes even more pronounced when an enterprise has geographically dispersed operations across multiple time zones. Most systems record events in local time, and when multiple time zones are involved, analysts need to be able to work with two time readings synchronously: local time and UTC. UTC is global time and does not have the issues of daylight saving settings or different time zones; it is, in essence, a global time zone. Local time is still important for comparing events to local activities. SIEMs can handle both time readings simultaneously, using UTC for correlation across the entire enterprise and local time for local process meaning.
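Normalizing sensor timestamps to UTC looks like the following sketch. The sensor names and fixed offsets are invented for illustration; a real deployment would use IANA time zone data so that daylight saving transitions are handled correctly:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical sensors and their local time zones (fixed offsets
# for simplicity; real systems should use IANA zone identifiers).
SENSOR_ZONES = {
    "nyc-fw01": timezone(timedelta(hours=-5)),
    "lon-ids01": timezone(timedelta(hours=0)),
}

def to_utc(sensor: str, local_ts: datetime) -> datetime:
    """Attach the sensor's zone to a naive local timestamp and
    normalize it to UTC for enterprise-wide correlation."""
    return local_ts.replace(tzinfo=SENSOR_ZONES[sensor]).astimezone(timezone.utc)
```

Once everything is in UTC, an event logged at 7:00 A.M. in New York and one logged at noon in London can be recognized as simultaneous, which is exactly what cross-site correlation requires.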

Event Deduplication

In many cases, multiple records related to the same item can be generated. A firewall log may note an event, and the system log file on the affected system may also note it. NetFlow data, because of how and where it is generated, is full of duplicate records for the same packet. Having multiple records in a database representing the same event wastes space and processing, and it can skew analytics. To avoid these issues, the SIEM can use a special form of correlation to determine which records are duplicates of a specific event and then delete all but a single record from the set. This event deduplication assists security analysts by reducing clutter in a data set that can obscure real events that have meaning. For this to happen, the events need a central store—something a SIEM solution provides.
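Deduplication reduces to choosing an identity key and keeping the first record per key. The record fields and the key below are a simplified stand-in for the richer correlation keys a real SIEM uses:

```python
def deduplicate(records):
    """Collapse records describing the same event, keeping the first
    occurrence. Identity here is (timestamp, source, event id), a
    simplified stand-in for a SIEM's real correlation key."""
    seen = set()
    unique = []
    for rec in records:
        key = (rec["ts"], rec["src"], rec["event"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```

The hard part in practice is not the loop but the key: records from different sources describe the same event with different fields, which is why deduplication is described above as a special form of correlation.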


Understanding how and when you would use a SIEM solution relates to the problems it can help solve. Understanding the need to aggregate information, correlate events, synchronize times, deduplicate records/events, and use all this for automated detection, alerting, and triggers is the key to understanding the value of a SIEM solution.


Log files exist across a wide array of sources and have a wide range of locations and details recorded. One of the valuable elements of a SIEM solution is the collection of these disparate data sources into a standardized data structure that can then be queried using database tools to create informative reports. Logs are written once into the SIEM data store and can then be read many times by different rules and analytical engines for different decision support processes. This write once, read many (WORM) concept is commonly employed to achieve operational efficiencies, especially when working with large data sets, such as log files on large systems.


One of the most powerful use cases for SIEM solutions is in the identification of log and event anomalies. In the stream of log and event data, anomalies can be difficult to detect, but upon correlation with other information they can be found. This is the primary purpose of a SIEM solution.

Images DLP

Data loss prevention (DLP) refers to technology employed to detect and prevent transfers of data across an enterprise. Deployed at key locations, DLP technology can scan packets for specific data patterns. This technology can be tuned to detect account numbers, secrets, specific markers, or files. When specific data elements are detected, the system can block the transfer. The primary challenge in employing DLP technologies is the placement of the sensor. The DLP sensor needs to be able to observe the data, so if the channel is encrypted, DLP technology can be thwarted.
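At its core, DLP pattern matching is regular-expression scanning of payloads. The patterns below (a 16-digit card-like number and a planted document marker) are illustrative inventions, not the rule set of any real product:

```python
import re

# Hypothetical DLP detection patterns: 16-digit card-like numbers and
# a marker string the organization plants inside sensitive documents.
PATTERNS = [
    re.compile(r"\b\d{4}(?:[- ]?\d{4}){3}\b"),   # card-number shape
    re.compile(r"CONFIDENTIAL-MARKER-\d+"),       # planted document marker
]

def should_block(payload: str) -> bool:
    """Return True if an outbound payload matches any DLP pattern."""
    return any(p.search(payload) for p in PATTERNS)
```

Real DLP engines add validation (for example, checksum tests on candidate card numbers) to cut false positives, but the scan-and-block decision shown here is the essential mechanism.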

USB Blocking

USB devices offer a convenient method of connecting external storage to a system and an easy means of moving data between machines. They also provide a means by which data can be exfiltrated from a network by an unauthorized party. There are numerous methods of performing USB blocking—from the extreme of physically disabling the ports, to software solutions that enable a wide range of controls. Most enterprise-level DLP solutions include a solution for USB devices. Typically this involves preventing the use of USB devices for transferring data to the device without specific authorization codes. This acts as a one-way barrier, allowing USB devices to bring data in but not to take data out.

Cloud-Based DLP

As data moves to the cloud, so does the need for data loss prevention. However, performing cloud-based DLP is not as simple as moving the enterprise edge methodology to the cloud. There are several attributes of cloud systems that can result in issues for DLP deployments. Enterprises move data to the cloud for many reasons, but two primary ones are size (cloud data sets can be very large) and availability (cloud-based data can be highly available across the entire globe to multiple parties), and both of these are challenges for DLP solutions. The DLP industry has responded with cloud-based DLP solutions designed to manage these and other cloud-related issues while still affording the enterprise visibility and control over data transfers.


E-mail is a common means of communication in the enterprise, and it is common to attach files to an e-mail to provide additional information. Transferring information out of the enterprise by e-mail is a concern for many organizations. Blocking e-mail attachments is not practical given their ubiquity in normal business, so a solution is needed to scan e-mails for unauthorized data transfers. This is a common chore for enterprise-class DLP solutions because they can connect to the mail server and use the same scanning technology used for other network connections.

Images Tools

Tools are a vital part of any security professional’s skill set. You may not be an “assessment professional” who spends most of their career examining networks looking for vulnerabilities, but you can use many of the same tools for internal assessment activities, tracking down infected systems, spotting inappropriate behavior, and so on. Knowing the right tool for the job can be critical to performing effectively.

Protocol Analyzer

A protocol analyzer (also known as a packet sniffer, network analyzer, or network sniffer) is a piece of software or an integrated software/hardware system that can capture and decode network traffic. Protocol analyzers have been popular with system administrators and security professionals for decades because they are such versatile and useful tools for a network environment. From a security perspective, protocol analyzers can be used for a number of activities, such as the following:

Images   Detecting intrusions or undesirable traffic. (An IDS/IPS must have some type of capture and decode capabilities to be able to look for suspicious/malicious traffic.)

Images   Capturing traffic during incident response or incident handling.

Images   Looking for evidence of botnets, Trojans, and infected systems.

Images   Looking for unusual traffic or traffic exceeding certain thresholds.

Images   Testing encryption between systems or applications.

From a network administration perspective, protocol analyzers can be used for activities such as these:

Images   Analyzing network problems

Images   Detecting misconfigured applications or misbehaving applications

Images   Gathering and reporting network usage and traffic statistics

Images   Debugging client/server communications


A sniffer must use a NIC placed in promiscuous (promisc) mode; otherwise, it will not see all the network traffic coming into the NIC.

Regardless of the intended use, a protocol analyzer must be able to see network traffic in order to capture and decode it. A software-based protocol analyzer must be able to place the NIC it is going to use to monitor network traffic in promiscuous mode (sometimes called promisc mode). Promiscuous mode tells the NIC to process every network packet it sees regardless of the intended destination. Normally, a NIC processes only broadcast packets (which go to everyone on that subnet) and packets with the NIC’s Media Access Control (MAC) address as the destination address inside the packet. As a sniffer, the analyzer must process every packet crossing the wire, so the ability to place a NIC into promiscuous mode is critical.
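The NIC's filtering decision described above can be modeled in a few lines. This is a deliberately simplified model (multicast handling is omitted) meant only to show why promiscuous mode matters to a sniffer:

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(dst_mac: str, nic_mac: str, promiscuous: bool) -> bool:
    """Simplified model of a NIC's frame filter: normally only
    broadcast frames and frames addressed to this NIC are passed up
    the stack; in promiscuous mode, every frame is accepted."""
    if promiscuous:
        return True
    return dst_mac in (BROADCAST, nic_mac)
```

Without promiscuous mode, frames addressed to other hosts are silently discarded by the NIC before any capture software ever sees them, which is why a sniffer that cannot enable this mode captures only a sliver of the traffic.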

With older networking technologies, such as hubs, it was easier to operate a protocol analyzer because the hub broadcast every packet across every interface, regardless of the destination. With switches now the standard for networking equipment, placing a protocol analyzer becomes more difficult because switches do not broadcast every packet across every port. Although this might make it harder for administrators to sniff the traffic, it also makes it harder for eavesdroppers and potential attackers.

Network Placement

To accommodate protocol analyzers, IDS devices, and IPS devices, most switch manufacturers support port mirroring or a Switched Port Analyzer (SPAN) port (discussed in the next section). Depending on the manufacturer and the hardware, a mirrored port will see all the traffic passing through the switch or through a specific virtual LAN (or multiple VLANs), or all the traffic passing through other specific switch ports. The network traffic is essentially copied (or mirrored) to a specific port, which can then support a protocol analyzer.

Another option for traffic capture is to use a network tap, a hardware device that can be placed inline on a network connection and that will copy traffic passing through the tap to a second set of interfaces on the tap. Network taps are often used to sniff traffic passing between devices at the network perimeter, such as the traffic passing between a router and a firewall. Many common network taps work by bridging a network connection and passing incoming traffic through one tap port (A) and outgoing traffic through another tap port (B), as shown in Figure 13.10.


Figure 13.10 A basic network tap

A popular, open source protocol analyzer is Wireshark. Available for both UNIX/Linux and Windows operating systems, Wireshark is a GUI-based protocol analyzer that allows users to capture and decode network traffic on any available network interface in the system on which the software is running (including wireless interfaces), as demonstrated in Figure 13.11. Wireshark has some interesting features, including the ability to “follow the TCP stream,” which allows the user to select a single TCP packet and then see all the other packets involved in that TCP conversation.


Figure 13.11 Wireshark—a popular, open source protocol analyzer

In-Band vs. Out-of-Band NIDS/NIPS

In-band versus out-of-band NIDS/NIPS is similar to the inline-versus-passive issue in an earlier section. An in-band NIDS/NIPS is an inline sensor coupled to a NIDS/NIPS that makes its decisions “in band” and enacts changes via the sensor. This has the advantage of high security, but it also has implications related to traffic levels and traffic complexity. In-band solutions work great for protecting network segments that have high-value systems and a limited number of traffic types—for instance, in front of a set of database servers with serious corporate data, where the only types of access would be via database connections.

An out-of-band system relies on a passive sensor, or set of passive sensors, and has the ability for greater flexibility in detection across a wider range of traffic types. The disadvantage is the delay in reacting to the positive findings as the traffic has already passed on to the end host.

Switched Port Analyzer

The term Switched Port Analyzer (SPAN) is usually associated with Cisco switches—other vendors refer to the same capability as port mirroring or port monitoring. A SPAN has the ability to copy network traffic passing through one or more ports on a switch or one or more VLANs on a switch and then forward that copied traffic to a port designated for traffic capture and analysis (as shown in Figure 13.12). A SPAN port or mirror port creates the collection point for traffic that will be fed into a protocol analyzer or IDS/IPS. SPAN or mirror ports can usually be configured to monitor traffic passing into interfaces, passing out of interfaces, or passing in both directions. When configuring port mirroring, you need to be aware of the capabilities of the switch you are working with. Can it handle the volume of traffic? Can it successfully mirror all the traffic, or will it end up dropping packets to the SPAN if traffic volume gets too high?


Figure 13.12 A SPAN port collects traffic from other ports on a switch.

Port Scanner

A port scanner is a tool designed to probe a system or systems for open ports. Its job is to probe for open (or listening) ports and report back to the user which ports are closed, which are filtered, and which are open. Port scanners are available for virtually every operating system and almost every popular mobile computing platform—from tablets to smartphones. Having a good port-scanning tool in your toolset and knowing how to use it can be very beneficial. The good news/bad news about port scanners is that the “bad guys” use them for basically the same reasons the good guys use them. Port scanners can be used to do the following:

Images   Search for “live” hosts on a network. Most port scanners enable you to perform a quick scan using ICMP, TCP, or UDP packets to search for active hosts on a given network or network segment. ICMP is still very popular for this task, but with the default blocking of ICMP v4 in many modern operating systems, such as Windows 10, users are increasingly turning to TCP or UDP scans for these tasks.

Images   Search for any open ports on the network. Port scanners are most often used to identify any open ports on a host, group of hosts, or network. By scanning a large number of ports over a large number of hosts, a port scanner can provide you (or an attacker) with a very good picture of what services are running on which hosts on your network. Scans can be done for the “default” set of popular ports, a large range of ports, or every possible port (from 1 to 65535).

Images   Search for specific ports. Only looking for web servers? Mail servers? Port scanners can also be configured to just look for specific services.

Images   Identify services on ports. Some port scanners can help identify the services running on open ports based on information returned by the service or the port/service assigned (if standards have been followed). For example, a service running on port 80 is likely to be a web server.

Images   Look for TCP/UDP services. Most port scanners can perform scans for both TCP and UDP services, although some tools do not allow you to scan for both protocols at the same time.

As a security professional, you’ll use port scanners in much the same way an attacker would: to probe the systems in your network for open services. When you find open services, you’ll need to determine if those services should be running at all, if they should be running on the system(s) you found them on, and if you can do anything to limit what connections are allowed to those services. For example, you may want to scan your network for any system accepting connections on TCP port 1433 (Microsoft SQL Server). If you find a system accepting connections on TCP port 1433 in your Sales group, chances are someone has installed something they shouldn’t have (or someone installed something for them).

So how does a port scanner actually work? Much will depend on the options you select when configuring your scan, but for the sake of this example, assume you’re running a standard TCP connect scan against a target host for ports 1–10000. The scanner will attempt to create a TCP connection to each port in the range 1–10000 on the target. When the scanner sends out a SYN packet, it waits for the responding SYN/ACK. If a SYN/ACK is received, the scanner will attempt to complete the three-way handshake and mark the port as “open.” If an RST packet is received, the scanner will likely mark that port as “closed.” If the sent packet times out, or if an “administratively prohibited” message or something similar comes back, the scanner may mark that port as “filtered.” When the scan is complete, the scanner will present the results in a summary format—listing the ports that are open, closed, filtered, and so on. By examining the responses from each port, you can typically deduce a bit more information about the system(s) you are scanning, as detailed here:

Images   Open  Open ports accept connections. If you can connect to these with a port scanner, the ports are not being filtered at the network level. However, there are instances where you may find a port that is marked as “open” by a port scanner that will immediately drop your connections if you attempt to connect to it in some other manner. For example, port 22 for SSH may appear “open” to a port scanner but will immediately drop your SSH connections. In such a case, the service is likely being filtered by a host-based firewall or a firewall capability within the service itself.

Images   Closed  You will typically see this response when the scanned target returns an RST packet.

Images   Filtered  You will typically see this response when an “ICMP unreachable” error is returned. This usually indicates that the port is being filtered by a firewall or other device.

Images   Additional types  Some port scanners will attempt to further classify responses, such as dropped, blocked, denied, timeout, and so on. These are fairly tool specific, and you should refer to any documentation or help file that accompanies that port scanner for additional information.

In general, you will want to run your scanning efforts multiple times using different options to ensure you get a better picture. A SYN scan may return different results than a NULL scan or FIN scan. You’ll want to run both TCP and UDP scans as well. You may need to alter your scanning approach to use multiple techniques at different times of the day/night to ensure complete coverage. The bad guys are doing this against your network right now, so you might as well use the same tools they do to see what they see. Port scanners can also be very useful for testing firewall configurations because the results of the port scans can show you exactly which ports are open, which ones you allow through, which ports are carrying services, and so on.
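As a concrete illustration, the scan variations described above map to nmap options like the following. The target 192.0.2.10 is a placeholder documentation address; substitute a host you are authorized to scan.

```shell
# Run only against hosts you are authorized to scan.
nmap -sS -p 1-10000 192.0.2.10        # TCP SYN ("half-open") scan
nmap -sT -p 1-10000 192.0.2.10        # full TCP connect scan
nmap -sN 192.0.2.10                   # NULL scan (no TCP flags set)
nmap -sF 192.0.2.10                   # FIN scan
nmap -sU --top-ports 100 192.0.2.10   # UDP scan of the most common ports
```

Comparing the results of several scan types, run at different times, gives a far more reliable picture than any single pass.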

So how do you defend against port scans? Well, it’s tough. Port scans are pretty much a part of the Internet traffic landscape now. Although you can block IP addresses that scan you, most organizations don’t because they run the risk of an attacker spoofing source addresses as decoys for other scanning activity. The best defense is to carefully control what traffic you let in and out of your network, using firewalls, network filters, and host filters. Then carefully monitor any traffic that you do allow in.

Passive vs. Active Tools

Tools can be classified as active or passive. Active tools interact with a target system in a fashion where their use can be detected. Scanning a network with nmap (Network Mapper) is an active act that can be detected. In the case of nmap, the tool itself may not be specifically detectable, but its use, the sending of packets, can be detected. When you need to map out your network or look for open services on one or more hosts, a port scanner is probably the most efficient tool for the job. Figure 13.13 shows a screenshot of Zenmap, a cross-platform graphical front end for the very popular nmap port scanner.


Figure 13.13 Zenmap—a port scanner based on nmap

Passive tools are those that do not interact with the system in a manner that would permit detection, such as by sending packets or altering traffic. An example of a passive tool is Tripwire, which can detect changes to a file based on hash values. Another passive example is OS fingerprinting performed by analyzing TCP/IP traces with a tool such as Wireshark. Passive sensors can use existing traffic to provide data for analysis.


Passive tools receive traffic only and do nothing to the traffic flow that would permit others to know they are interacting with the network. Active tools modify or send traffic and are thus discoverable by their traffic patterns.
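The change-detection idea behind a tool like Tripwire can be illustrated with a short Python sketch: hash every monitored file once to form a baseline, then re-hash later and flag differences. This is a simplified illustration of the concept, not Tripwire's actual implementation.

```python
import hashlib

def baseline(paths):
    """Record a SHA-256 digest for each monitored file (the baseline)."""
    snap = {}
    for path in paths:
        with open(path, "rb") as f:
            snap[path] = hashlib.sha256(f.read()).hexdigest()
    return snap

def changed_files(snap):
    """Re-hash each file in the baseline and return those whose contents differ."""
    changed = []
    for path, digest in snap.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != digest:
                changed.append(path)
    return changed
```

Because the check only reads files and compares digests, it generates no network traffic at all, which is what makes this class of tool passive.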

Banner Grabbing

Banner grabbing is a technique used to gather information from a service that publicizes information via a banner. Banners serve many purposes: for example, they can identify services by type, version, and so forth, and they enable administrators to post information, including warnings, to users when they log in. Attackers can use banners to determine what services are running, and typically do for common banner-issuing services such as HTTP, FTP, SMTP, and Telnet. Figure 13.14 shows a couple of banner grabs being performed from a Telnet client against web servers. In this example, Telnet sends requests to two different web servers and displays the responses (the banners). The top response is from an Apache instance (Apache/2.0.65) and the bottom is from Microsoft IIS (Microsoft-HTTPAPI/2.0).


Figure 13.14 Banner grabbing using Telnet
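The same banner grab shown with Telnet can be scripted. The sketch below, an illustration that assumes an HTTP service willing to return a Server header, sends a minimal HEAD request and extracts the banner:

```python
import socket

def grab_http_banner(host, port, timeout=3.0):
    """Send a minimal HTTP HEAD request and return the Server header
    (the 'banner'), if the service advertises one."""
    request = ("HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n" % host).encode("ascii")
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(request)
        reply = s.recv(4096).decode("latin-1", errors="replace")
    for line in reply.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None   # service answered but published no banner
```

Note that a well-hardened server may suppress or falsify its Server header, so a missing or odd banner is itself a small piece of information.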

Images Indicators of Compromise

Indicators of compromise (IOCs) are just that—indications that a system has been compromised by unauthorized activity. When a threat actor makes changes to a system—either by direct action, malware, or other exploit—forensic artifacts are left behind in the system. IOCs act as breadcrumbs for investigators, providing little clues that can help identify the presence of an attack on a system. The challenge is in looking for, collecting, and analyzing these bits of information and then determining what they mean for a given system. This is one of the primary tasks for an incident responder—gathering and processing these disparate pieces of data and creating a meaningful picture of the current state of a system.

Fortunately, there are toolsets to aid the investigator in this task. Tools such as Yara can take a set of signatures (also called IOCs) and then scan a system for them, determining whether or not a specific threshold is met, thus indicating a particular infection. Although the specific list will vary based on the system and the specific threat being looked for, here is a common set of IOCs that firms should monitor for:

Images   Unusual outbound network traffic

Images   Anomalies in privileged user account activity

Images   Geographical irregularities in network traffic

Images   Account log-in red flags

Images   Increases in database read volumes

Images   HTML response sizes

Images   Large numbers of requests for the same file

Images   Mismatched port-application traffic, including encrypted traffic on plain ports

Images   Suspicious registry or system file changes

Images   Unusual DNS requests

Images   Unexpected patching of systems

Images   Mobile device profile changes

Images   Bundles of data in the wrong place

Images   Web traffic with nonhuman behavior

Images   Signs of DDoS activity, even if temporary

No single compromise will hit all of these IOCs, but monitoring these items will tend to catch most compromises, because at some point in their lifecycle, the compromises will exhibit one or more of these behaviors. Then, once a compromise is detected, a responder can zero in on the information and fully document the nature and scope of the problem.

As with many other sophisticated systems, IOCs have developed their own internal languages, protocols, and tools. Two major, independent systems for communicating IOC information are available: OpenIOC and the STIX/TAXII/CybOX system. OpenIOC was developed by Mandiant to facilitate the sharing of IOC data, whereas MITRE, under contract with the U.S. government, developed STIX/TAXII/CybOX. MITRE designed Structured Threat Information Expression (STIX), Trusted Automated Exchange of Indicator Information (TAXII), and Cyber Observable Expression (CybOX) to specifically facilitate automated information sharing between organizations.

Advanced Malware Tools

Advanced malware tools include Yara, a command-line pattern matcher used to look for indicators of compromise on a system. Yara assists security engineers in hunting down malware infections based on artifacts that the malware leaves behind in memory. Another advanced malware tool is a threat prevention platform that analyzes a system and its traffic in real time and alerts engineers to common malware artifacts such as callbacks to external devices.
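To make the signature-matching idea concrete, here is a much-simplified, Yara-inspired matcher in Python. The signature names and patterns are hypothetical examples invented for illustration, and real Yara rules offer far richer conditions (string counts, offsets, file metadata) than this sketch:

```python
import re

# Hypothetical IOC signatures for illustration only: name -> byte patterns
SIGNATURES = {
    "suspicious-powershell": [rb"-EncodedCommand", rb"DownloadString"],
    "known-bad-domain":      [rb"evil-example\.test"],
}

def scan_bytes(data, signatures=SIGNATURES, threshold=1):
    """Return the names of signatures whose pattern-match count meets
    the threshold, loosely mimicking how a Yara-style scanner flags
    artifacts in a file or memory image."""
    hits = []
    for name, patterns in signatures.items():
        count = sum(1 for p in patterns if re.search(p, data))
        if count >= threshold:
            hits.append(name)
    return hits
```

Raising the threshold requires more of a signature's patterns to match before a hit is reported, trading false positives against false negatives.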

Images For More Information

SANS Intrusion Detection FAQ

SANS Reading Room—Firewalls & Perimeter Protection

The Honeynet Project

Fight Spam on the Internet!

Chapter 13 Review

Images Chapter Summary

After reading this chapter and completing the exercises, you should understand the following facts about intrusion detection systems and network security.

Apply the appropriate network tools to facilitate network security

Images   Intrusion detection is a mechanism for detecting unexpected or unauthorized activity on computer systems.

Images   IDSs can be “host based,” examining only the activity applicable to a specific system, or “network based,” examining network traffic for a large number of systems.

Images   Protocol analyzers, often called sniffers, are tools that capture and decode network traffic.

Images   Honeypots are specialized forms of intrusion detection that involve setting up simulated hosts and services for attackers to target.

Images   Honeypots are based on the concept of luring attackers away from legitimate systems by presenting more tempting or interesting systems that, in most cases, appear to be easy targets.

Determine the appropriate use of tools to facilitate network security

Images   IDSs match patterns known as signatures that can be content or context based. Some IDSs are model based and alert an administrator when activity does not match normal patterns (anomaly based) or when it matches known suspicious or malicious patterns (misuse detection).

Images   Newer versions of IDSs include prevention capabilities that automatically block suspicious or malicious traffic before it reaches its intended destination. Most vendors call these intrusion prevention systems (IPSs).

Images   Analyzers must be able to see and capture network traffic to be effective, and many switch vendors support network analysis through the use of mirroring or SPAN ports.

Images   Network traffic can also be viewed using a network tap, which is a device for replicating network traffic passing across a physical link.

Images   By monitoring activity within the honeypot, security personnel are better able to identify potential attackers, along with their tools and capabilities.

Apply host-based security applications

Images   Host-based IDSs can apply specific context-sensitive rules because of the known host role.

Images   Host-based IPSs can provide better control over specific attacks because the scope of control is limited to a host.

Images Key Terms

analysis engine (481)

anomaly detection model (476)

banner grabbing (505)

content-based signature (478)

context-based signature (478)

digital sandbox (493)

false negative (479)

false positive (479)

honeynet (494)

honeypot (493)

host-based IDS (HIDS) (485)

intrusion detection system (IDS) (474)

intrusion prevention system (IPS) (490)

misuse detection model (477)

network tap (501)

network-based IDS (NIDS) (479)

perimeter security (480)

port mirroring (501)

protocol analyzer (500)

signature database (481)

Snort (484)

Suricata (484)

Switched Port Analyzer (SPAN) (502)

traffic collector (480)

Images Key Terms Quiz

Use terms from the Key Terms list to complete the sentences that follow. Don’t use the same term more than once. Not all terms will be used.

1.   A(n) _______________ is a piece of software or an integrated software/hardware system that can capture and decode network traffic.

2.   When an IDS generates an alarm on “normal” traffic that is actually not malicious or suspicious, that alarm is called a(n) _______________.

3.   An attacker scanning a network full of inviting, seemingly vulnerable targets might actually be scanning a(n) _______________, where the attacker’s every move can be watched and monitored by security administrators.

4.   A(n) _______________ looks at a certain string of characters inside a TCP packet.

5.   An IDS that looks for unusual or unexpected behavior is using a(n) _______________.

6.   _______________ allows administrators to send all traffic passing through a network switch to a specific port on the switch.

7.   Within an IDS, the _______________ examines the collected network traffic and compares it to known patterns of suspicious or malicious activity stored in the signature database.

8.   _______________ is a technique whereby a host is queried and identified based on its response to a query.

9.   _______________ is a technique for matching an element against a large set of patterns and using activity as a screening element.

10.   _______________ is a new entry in the IDS toolset as a replacement for Snort.

Images Multiple-Choice Quiz

1.   What are the two main types of intrusion detection systems?

A.   Network based and host based

B.   Signature based and event based

C.   Active and reactive

D.   Intelligent and passive

2.   What are the two main types of IDS signatures?

A.   Network based and file based

B.   Context based and content based

C.   Active and reactive

D.   None of the above

3.   Which of the following describes a passive, host-based IDS?

A.   It runs on the local system.

B.   It does not interact with the traffic around it.

C.   It can look at system event and error logs.

D.   All of the above.

4.   Which of the following is not a capability of network-based IDS?

A.   It can detect denial-of-service attacks.

B.   It can decrypt and read encrypted traffic.

C.   It can decode UDP and TCP packets.

D.   It can be tuned to a particular network environment.

5.   An active IDS can do which of the following?

A.   Respond to attacks with TCP resets

B.   Monitor for malicious activity

C.   A and B

D.   None of the above

6.   What are honeypots used for?

A.   To attract attackers by simulating systems with open network services

B.   To monitor network usage by employees

C.   To process alarms from other IDSs

D.   To attract customers to e-commerce sites

7.   Connecting to a server and sending a request over a known port in an attempt to identify the version of a service is an example of what?

A.   Port sniffing

B.   Protocol analysis

C.   Banner grabbing

D.   TCP reset

8.   Preventative intrusion detection systems:

A.   Are cheaper

B.   Are designed to stop malicious activity from occurring

C.   Can only monitor activity

D.   Were the first type of IDS

9.   IPS stands for which of the following?

A.   Intrusion processing system

B.   Intrusion prevention sensor

C.   Intrusion prevention system

D.   Interactive protection system

10.   What is a protocol analyzer used for?

A.   To troubleshoot network problems

B.   To collect network traffic statistics

C.   To monitor for suspicious traffic

D.   All of the above

Images Essay Quiz

1.   Discuss the differences between an anomaly-based and a misuse-based detection model. Which would you use to protect a corporate network of 10,000 users? Why would you choose that model?

2.   Pick three technologies discussed in this chapter and describe how you would deploy them to protect a small business network. Describe the protection each technology provides.

Lab Projects

Lab Project 13.1

Design three content-based signatures and three context-based signatures for use in an IDS. Name each signature and describe what the signature should look for, including traffic patterns or characters that need to be matched. Describe any activity that could generate a false positive for each signature.

Lab Project 13.2

Use the Internet to research Snort (an open source IDS). With your instructor’s permission, download Snort and install it on your classroom network. Examine the traffic and note any alarms that are generated. Research and note the sources of the alarm traffic. See if you can track down the sources of the alarm traffic and discover why these sources are generating those alarms on your IDS.
