Network security monitoring

Next-generation firewalls, data loss prevention, malware analysis, and intrusion prevention form the foundation of network security monitoring at the Internet edge and other network boundaries. As integral components of defense in depth, these tools analyze all traffic traversing the network and are typically positioned at the points of greatest criticality. Each of these technologies was covered in depth in earlier chapters from a protection standpoint; this section discusses leveraging the tools from a monitoring perspective.

To understand what traffic is traversing the network and what its intent is, these tools must be deployed strategically so that they provide the most valuable event data. This is particularly difficult because a significant amount of data is collected and analyzed, and a large volume of security events is created. For each of these technologies, an evaluation of capabilities must be undertaken to determine the configuration that best reduces false positives, minimizes impact to production traffic, and gives analysts enough information to investigate potential threats and mitigate them in an acceptable time frame.

Next-generation firewalls

The configuration of next-generation firewalls can be complex, and the temptation is to turn on every feature and log all the output. This may be necessary in some environments, but it must be carefully weighed: the more log data generated, the more there is to store and analyze. This can be costly in terms of storage and management and can reduce the effectiveness of the tool.

There are differing schools of thought on which firewall rules to log in order to capture malicious traffic that is permitted by a valid policy. Consider HTTP access permitted inbound to web servers; logging or not logging the traffic can be a detriment either way. If the service is expecting connections, there will be a significant amount of log data, most of it legitimate, requiring more log storage and giving analysts more information to sift through for security events. If the traffic is not logged, abuse of the permitted access can go unnoticed. This trade-off is a primary reason next-generation firewalls added deeper analysis of permitted traffic: they log and alert on anomalies only, reducing event data and enabling more timely mitigation. By adding denial-of-service checks, intrusion prevention, and protocol analysis capabilities, next-generation firewalls detect real threats more effectively while reducing the log data analysts must review.
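As a simple illustration of anomaly-only alerting, the sketch below counts permitted inbound requests per source and surfaces only sources that exceed a baseline threshold. The event fields, addresses, and threshold are hypothetical; a real next-generation firewall applies far richer protocol and reputation analysis than a simple count.

```python
from collections import Counter

def anomalous_sources(events, threshold):
    """Return source IPs whose permitted-request count exceeds the
    threshold; traffic below it is left unlogged rather than stored."""
    counts = Counter(e["src"] for e in events)
    return {src: n for src, n in counts.items() if n > threshold}

# Hypothetical permitted inbound HTTP events (fields are illustrative).
events = ([{"src": "203.0.113.5", "uri": "/login"}] * 150
          + [{"src": "198.51.100.7", "uri": "/index.html"}] * 12)

alerts = anomalous_sources(events, threshold=100)
print(alerts)  # only the unusually noisy source is reported
```

Instead of 162 log lines, the analyst sees one anomaly record, which is the volume-reduction argument made above.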

If the firewall infrastructure is unable to combine detection engine output into a single incident, it may be prudent to limit the services enabled on the firewall and leverage Security Information and Event Management (SIEM) to collect and analyze data from multiple sources, providing a single-pane view of seemingly disparate events to determine whether an incident is in fact occurring or has occurred.
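A minimal sketch of that correlation idea, assuming events have already been normalized into records with a host, a reporting tool, and a Unix timestamp (all hypothetical fields): a host becomes a candidate incident only when multiple tools report it within a short window.

```python
from collections import defaultdict

def correlate(events, window=300, min_tools=2):
    """Group normalized events by host; a host reported by at least
    min_tools distinct sources within the window becomes a candidate
    incident for analyst review."""
    by_host = defaultdict(list)
    for e in events:
        by_host[e["host"]].append(e)
    incidents = {}
    for host, evs in by_host.items():
        times = [e["ts"] for e in evs]
        tools = {e["tool"] for e in evs}
        if len(tools) >= min_tools and max(times) - min(times) <= window:
            incidents[host] = sorted(tools)
    return incidents

events = [
    {"host": "web01", "tool": "firewall", "ts": 1000},
    {"host": "web01", "tool": "ips", "ts": 1120},
    {"host": "db02", "tool": "firewall", "ts": 1050},  # one source only
]
print(correlate(events))  # db02 never becomes an incident
```

Production SIEM correlation rules are far more expressive (sequencing, thresholds, asset context), but the single-pane principle is the same: disparate events become one reviewable incident.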

Note

Security Information and Event Management (SIEM) will be discussed later in this chapter.

Data loss prevention

Because data loss is a security event, it should be included in the overall security monitoring strategy of the enterprise. Though data loss prevention (DLP) is a very specifically built tool, the presence of sensitive data at an egress point of the network may be indicative of a security incident, not simply a bad business process. IT security should regularly analyze incidents created within the DLP solution, looking for behaviors that suggest malicious intent through human- or malware-based exfiltration.

Care should be taken when handling the data collected by DLP, as it usually consists of sensitive data that should not be viewable outside the teams and individuals responsible for analysis and remediation. If the solution can send generic events through an alerting mechanism, this offers a way to proactively alert security staff to an incident that warrants immediate attention. If the solution cannot send generic alerts with sensitive data removed, then manual analysis of events must continue so that confidentiality remains intact.
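One way such a generic alert can be built is sketched below, assuming a hypothetical DLP event structure: only non-sensitive metadata is forwarded, and the matched content itself never leaves the DLP console.

```python
def generic_alert(dlp_event):
    """Forward policy name, severity, source, and a match count;
    deliberately omit the matched sensitive content."""
    return {
        "policy": dlp_event["policy"],
        "severity": dlp_event["severity"],
        "src_host": dlp_event["src_host"],
        "match_count": len(dlp_event["matches"]),
    }

# Hypothetical DLP event; the matched values stay inside the DLP system.
event = {
    "policy": "PCI-cardholder-data",
    "severity": "high",
    "src_host": "10.1.2.3",
    "matches": ["<redacted match 1>", "<redacted match 2>"],
}
alert = generic_alert(event)
assert "matches" not in alert  # sensitive content is never forwarded
```

The alert tells security staff that something urgent happened and where, while the privileged DLP analysts retain sole access to the matched data.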

The solution may be configured to decrypt traffic, allowing inspection of traffic that would otherwise exit the network without detection. If the enterprise has chosen to implement such a method, excluding personal employee transactions such as online banking and purchasing from inspection should be a topic of discussion, not only for privacy but also to reduce the incidents and alerts generated by benign network activity.

The enterprise should have other forms of data access monitoring, privileged user access monitoring, and system monitoring that, if configured properly, should yield similar results and can be combined with DLP output for validation. DLP is specifically designed for data loss and may not alert on misuse unless it is seen at an egress point of the network, including the end-user system. Typically, an alert is generated in the DLP solution, and further investigation and analysis of the other monitoring components then builds the complete scenario of data access, attempted misuse, and exfiltration.

A network DLP implementation is an edge technology, like a firewall or intrusion prevention system, providing only a small portion of the traffic analysis, if any, that analysts need to correlate events. This technology must therefore be used alongside other technologies to be most effective in the proactive detection and mitigation of data loss-related security events.

Malware detection and analysis

A technology gaining momentum as a must-have security tool is malware detection and analysis at the edge, using local or cloud-based methods. The data output from such advanced tools is only as good as the analyst using them, requiring in-depth knowledge of malware analysis. These tools can, however, aid environments with little malware analysis know-how by detecting, analyzing, and providing actionable data on what the threat is and where it resides on the network. Solutions offering these capabilities include commercial products from FireEye, RSA NetWitness, and Fidelis.

Security teams can then engage desktop and server support teams to take systems offline and remediate them far faster than by attempting to find infected internal hosts manually. Because these tools are highly specialized, little additional information can be gleaned from other security monitoring tools beyond host-specific tools that may have detected the malware; indeed, these tools exist because traditional host solutions cannot detect malware with no known signature. As with most security monitoring, the output from malware tools can reveal details that infringe on the confidentiality of those infected and must be handled in a manner that protects the privacy of those involved. Any correlation in security event data is helpful and should be the goal of using simple to advanced tools to provide comprehensive security monitoring of the enterprise infrastructure.

Intrusion prevention

Whether standalone or integrated within a next-generation firewall, intrusion prevention remains a very effective method not only to detect and mitigate threats, but also to provide valuable alerting for security analysts. Locating the intrusion prevention system (IPS) outside the external firewall and inside the internal firewall, in a basic DMZ design, allows attack patterns to be detected easily and alerts to be sent to security staff for mitigation. Distributed denial-of-service (DDoS) protection is becoming a standard feature of traditional IPS and can alert security personnel to impending service outages before they occur; in most cases it can mitigate the threat, owing to the predictability of the incomplete and bogus service requests that are the basis of DDoS attacks.
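That predictability of incomplete requests can be illustrated with a rough sketch (field names, addresses, and thresholds are all hypothetical): sources whose connection attempts almost never complete look like participants in a SYN-flood style attack.

```python
from collections import defaultdict

def flood_suspects(conns, min_attempts=50, incomplete_ratio=0.8):
    """Flag sources with many connection attempts of which a high
    fraction never complete -- the incomplete-request pattern behind
    SYN-flood style DDoS traffic."""
    stats = defaultdict(lambda: [0, 0])  # src -> [attempts, completed]
    for c in conns:
        stats[c["src"]][0] += 1
        stats[c["src"]][1] += c["completed"]
    return sorted(
        src for src, (att, done) in stats.items()
        if att >= min_attempts and (att - done) / att >= incomplete_ratio
    )

# 90 of 100 attempts from one source never complete; the other source
# completes every connection and is left alone.
conns = ([{"src": "198.51.100.9", "completed": False}] * 90
         + [{"src": "198.51.100.9", "completed": True}] * 10
         + [{"src": "203.0.113.4", "completed": True}] * 60)
print(flood_suspects(conns))
```

An IPS applies this kind of rate and completion analysis inline, which is why it can often drop flood traffic before the protected service is affected.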

Like other edge-focused technologies, an IPS sees only single transactions between source and destination; what happens after an attack reaches its destination may or may not be detected, depending on the resulting callback traffic. An IPS is also good at mitigating low-hanging-fruit attacks and can help conserve firewall sessions when placed in front of the firewall; the downside is that if the IPS does not include a firewall itself, the volume of traffic it must handle will be significant, and this must be considered before placing it outside an Internet edge firewall.

Given the amount of data that could be alerted on, the security team must decide what requires immediate attention, since an alert should indicate urgency. Regardless of alerting, IPS and other security monitoring tools should be monitored constantly to reduce the impact of security incidents; this is where most teams fall short, as they are understaffed and have too many tools to monitor effectively. Careful analysis of IPS capabilities and a deliberate alerting strategy will greatly increase the overall effectiveness of the IPS.
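A deliberately small sketch of such an alerting strategy, using hypothetical severity labels: only a short list of severities interrupts an analyst immediately, while everything else lands in a routine review queue.

```python
def triage(alerts, urgent=("critical", "high")):
    """Split alerts into an immediate-attention queue and a routine
    review queue based on severity."""
    page_now = [a for a in alerts if a["severity"] in urgent]
    review = [a for a in alerts if a["severity"] not in urgent]
    return page_now, review

alerts = [
    {"id": 1, "severity": "critical"},
    {"id": 2, "severity": "low"},
    {"id": 3, "severity": "high"},
]
page_now, review = triage(alerts)
print([a["id"] for a in page_now])  # alerts that warrant interruption
```

Keeping the urgent list short is the point: if everything pages the analyst, nothing effectively does, which is the understaffing problem described above.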
