Chapter 11. Common Pitfalls and Mistakes

Information in this chapter:

• Complacency
• Misconfigurations
• Compliance vs. Security
• Scope and Scale
Even with the best of intentions, a qualified staff, a budget, and time, it can be difficult to implement strong security measures in any network, and even more so in an industrial network. Assessments of real, deployed networks show that many common pitfalls and mistakes are made. One of the most common and dangerous is complacency, whether the result of overconfidence or of a stubborn refusal to recognize the threats that exist against industrial networks. Other pitfalls include simple misconfigurations of both assets and security devices, resulting in a false sense of security while the industrial network systems remain vulnerable; confusing security best practices with compliance requirements; and finally—even when everything else is done correctly—insufficiently sized security products that, despite careful efforts to configure and tune them, simply fail to function under the increased load of an actual cyber incident.

Complacency

Complacency is a danger to any security profile. Just as a boxer needs to always maintain a proper guard, network security professionals need to assume a similarly defensive posture. However, it is easy to let our guard down when there is no real belief or conviction that a threat is real or when there is overconfidence in the defenses that have already been established. The following examples are the result of complacency in some form or another.

Vulnerability Assessments vs. Zero-Days

Most recommendations, including those made within this book, include some form of vulnerability assessment and/or penetration testing. This process will potentially uncover areas of risk that can then be addressed by patching systems and strengthening the policies of firewalls or IPSs. At the end of this process, a common misperception is that the network is now 100% secure, as there are no more open vulnerabilities against it.
However, any vulnerability assessment or penetration test will only identify how susceptible a network is against known attacks, exploits, and penetration techniques. In reality, there are unknown threats that cannot be accounted for. Therefore, no security plan is fully complete without some method of accounting for unknown attacks. This includes
1. Using multiple layers of defense. Some defensive products may have different detection signatures, more accurate profiles, or different threat research that might allow one product to detect something that another product missed.
2. Using alternate threat detection mechanisms. In addition to “blacklist”-based protection, such as that provided by a firewall or an IPS, utilize anomaly detection products to detect abnormal behavior that could indicate a possible threat, and/or utilize “whitelisting” to block anything that is not specifically identified as a known good service or application (a minimal sketch of this approach follows this list).
3. Finally, using the full capability of security monitoring and analysis tools to provide Situational Awareness across the network as a whole, potentially identifying unknown threats that might go undetected by perimeter security devices.
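To make the whitelisting concept concrete, the following is a minimal Python sketch, assuming a predefined list of approved applications; the application names shown are hypothetical, and real whitelisting products enforce equivalent logic at the host or network level.

# Hypothetical whitelist of approved applications for a critical asset;
# anything not explicitly listed is treated as unauthorized.
APPROVED_APPLICATIONS = {"historian.exe", "hmi_client.exe", "opc_server.exe"}

def authorize(application_name):
    """Allow only explicitly approved applications; deny everything else."""
    return application_name.lower() in APPROVED_APPLICATIONS

# Example usage:
for app in ("hmi_client.exe", "unknown_tool.exe"):
    print(app, "->", "allowed" if authorize(app) else "blocked")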

Real Security vs. Policy and Awareness

Security is a process that not only demands a well thought out information security practice but also depends heavily on the human element. Even the strongest network perimeter can be circumvented by an end user—either intentionally as an act of sabotage or in innocence and ignorance. Even a true air gap can be breached if a worker enters the physical security zone and plugs in an unauthorized device, such as a USB drive, an iPod, or some other intelligent mobile device.
While a strong network security practice will anticipate and account for many of these scenarios (e.g., by removing or disabling USB connectors on critical assets), it will never be possible to anticipate every event. Only a properly trained and motivated staff can ultimately ensure that the established technical controls will operate successfully.
Conversely, a well-established security training program coupled with the best and most honest intentions of the entire employee base cannot protect the network against a real threat unless the proper technical security controls are also in place. Knowing not to visit public websites from inside of a “secure” enclave is not enough; if the connection is openly allowed, there is a clear and direct vector into the enclave that can be exploited by an attacker.

The Air Gap Myth

At a time when open networking protocols and wireless networks are in widespread use, there is still the misperception that a true air gap exists, protecting critical industrial systems simply because they are not connected to the IT network.
In reality, even a real air gap (if one truly does exist) is of little use in defending against cyber attacks, because cyber attacks have evolved past physical wires. Many assets that were not designed or intended to support wireless network communications include embedded Wi-Fi capabilities at the microprocessor level,1 which can be exploited by attackers ranging from the skilled cyber terrorist to a disgruntled worker with an understanding of wireless technologies.2
1. J. Larson, Idaho National Laboratories, Control systems at risk: sophisticated penetration testers show how to get through the defenses, in: Proc. 2009 SANS European SCADA and Process Control Security Summit, October, 2009.
2. J. Brodsky, A. McConnell, M. Cajina, D. Peterson, Security and reliability of wireless LAN protocol stacks used in control systems, in: Proc. SCADA Security Scientific Symposium (S4), Kenexis Security Corporation, 2010, Digital Bond Press.
In addition, there is the high possibility that a threat could be walked into a critical network, stepping across the air gap with the aid of a human carrier. Only strong security awareness and strong technical security controls can truly “gap” a networked system.

Misconfigurations

If complacency is an error of intention, misconfigurations are errors of implementation. In a 10-year study completed in 2010, configuration weaknesses accounted for 16% of exploits in industrial control systems.3 With misconfigurations responsible for such a high level of risk, it is no wonder that security recommendations from NERC CIP, CFATS, NRC RG 5.71, ISO/IEC 27002:2005, and the NIST SP 800 documents focus so heavily on configuration control and management. Simple errors, such as using the default password on a firewall, can negate all of the benefits of a specific security device and expose an entire enclave, while misconfigured hosts can provide easy penetration and propagation through a network once it is breached.
3. J. Pollet, R. Tiger, Electricity for free? The dirty underbelly of SCADA and Smart Meters, in: Proc. 2010 BlackHat Technical Conference, July, 2010.
The use of default accounts and passwords is a common misconfiguration. Others include the lack of outbound monitoring or policy enforcement in perimeter controls and the introduction of intentional security holes for a legitimate business purpose, which is given the affectionate moniker of “the executive override.” Perhaps the most common configuration error, however, is the “set it and forget it” approach. Because effective security is an ongoing process, any configuration that is not continuously assessed, monitored, and managed will eventually drift out of alignment with the desired security policies, opening unintentional holes through which an attack can occur.

Tip
While the process of performing vulnerability assessments and penetration tests should uncover most configuration issues, there are also configuration assessment tools to help with configuration assurance. These tools—which may be part of a configuration management system, part of a SIEM, or a standalone product—look for common errors in configuration files. For example, a firewall configuration should not include “allow all” policies, or policies that do not explicitly define the source and destination IP address(es) and port(s). Especially when combined with regular vulnerability assessments, these tools can simplify the process of assuring the strength of a security configuration, so that the validated configuration files can then be monitored and controlled.
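As a simple illustration of the kind of check these tools perform, the following Python sketch scans a list of already-parsed firewall rules for overly permissive entries. The rule structure and field names are assumptions made for this example, not the syntax of any particular product.

# Hypothetical, already-parsed firewall rules; real tools parse vendor-specific
# configuration files into a comparable structure.
rules = [
    {"action": "allow", "src": "any", "dst": "10.10.4.10", "port": "502"},
    {"action": "allow", "src": "10.10.2.5", "dst": "10.10.4.10", "port": "any"},
    {"action": "allow", "src": "10.10.2.5", "dst": "10.10.4.11", "port": "443"},
]

def permissive_rules(rules):
    """Flag allow rules that do not explicitly define source, destination, and port."""
    for number, rule in enumerate(rules, start=1):
        if rule["action"] == "allow" and "any" in (rule["src"], rule["dst"], rule["port"]):
            yield number, rule

# Example usage: the first two rules above would be reported.
for number, rule in permissive_rules(rules):
    print(f"Rule {number} is overly permissive: {rule}")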

Default Accounts and Passwords

The use of default accounts and passwords is common and dangerous. The initial stages of most attacks involve the enumeration of legitimate system and user identities, a process that is necessary to determine vulnerabilities so that an exploit can be attempted (see Chapter 6, “Vulnerability and Risk Assessment”). If the username and password of a system are already known, the attacker—whether an outside entity or an internal user—can simply and easily authenticate, often with administrative privileges since most default accounts exist for the purpose of initial setup and configuration of other user accounts. Regardless of how secure the system is otherwise, the system is now highly vulnerable and at risk: security configurations can be altered to allow broader access, software can be installed, new accounts can be created, etc. In essence, the successful administrative login to any system is the end game of most hacking attempts.
The use of default passwords, or to a lesser degree weak passwords, is therefore a primary concern. A quick web search will turn up most default passwords, as well as sites that specifically track and document default credentials, making them easy to obtain.4 However, these default password lists can be used with benevolent intent as well. The solution is simple: disable all default accounts where possible, and require unique user accounts with strong credentials.
4. C. Sullo, CIRT.net. Default passwords. <http://cirt.net/passwords>, October 4, 2007 (cited: March 15, 2011).
Unfortunately, unless the device in question enforces strong password controls, it is difficult to ensure that all unique user accounts will use strong passwords. Luckily, both default and weak passwords are easy to detect. By using these sources the same way a hacker would, it is possible to define a blacklist of known default passwords, which can then be used by various security products to detect when a default password is in use. Weak passwords can also be easily detected using regular expressions. For example, the following regular expression checks for a password that is a minimum of eight characters, with at least one uppercase letter, one lowercase letter, one number, and no whitespace.5
5. I. Spaanjaars, Regular expression for a strong password. <http://imar.spaanjaars.com/297/regular-expression-for-a-strong-password>, May 14, 2004 (cited: March 15, 2011).
^(?=.{8,})(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$
Applied as a detection signature, the following might be used to detect either weak passwords or default passwords:
((password != /^(?=.{8,})(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$/) || (password == $defaultPasswords))
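Expressed as a short script, the same logic might look like the following minimal Python sketch, which assumes a locally maintained list of known default passwords; the passwords shown are illustrative only.

import re

# The pattern from above: at least eight characters, one digit, one lowercase
# letter, one uppercase letter, and no whitespace.
STRONG_PASSWORD = re.compile(r"^(?=.{8,})(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?!.*\s).*$")

# Hypothetical blacklist compiled from public default-password lists.
DEFAULT_PASSWORDS = {"admin", "password", "1234", "default"}

def password_violation(password):
    """Return a reason if the password is a known default or fails the policy."""
    if password.lower() in DEFAULT_PASSWORDS:
        return "known default password"
    if not STRONG_PASSWORD.match(password):
        return "fails strong password policy"
    return None

# Example usage:
for candidate in ("admin", "abc123", "S3curePassw0rd"):
    print(candidate, "->", password_violation(candidate) or "acceptable")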
Whatever measures are taken to eliminate default passwords and enforce strong password use, establishing unique and strongly authenticated accounts is one of the most basic and necessary steps in securing any network.

Lack of Outbound Security and Monitoring

It is easy to think of an “attack” as an inbound event: someone is attempting to break into the industrial network from “the outside.” However, as shown in Chapter 7, “Establishing Secure Enclaves,” there are many access control points to consider, and the “outside” of one enclave may be the “inside” of another. In addition, there are inside attackers, including but not limited to disgruntled employees and “trusted” third parties. It is therefore critical to enforce access control and traffic flow in both directions, into and out of every enclave, to ensure that an “inbound” attack is not actually originating from inside the network.
In addition, many breaches result in the infection and propagation of malware, which will typically attempt to connect back out of the network to a public IP address. Depending on the sophistication of the attack, the outbound connection may be well hidden or obvious, but if firewall and IPS policies enforce traffic in only one direction, the distinction does not matter: the connection will succeed either way. Monitoring is equally important: even if the perimeter security policies are strong enough to stop the malicious outbound traffic, the fact that the traffic originated from the inside indicates that there is a malicious entity (user or malware) inside the network. Monitoring will alert you to this, and can also help indicate where the attacks originate.
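A minimal Python sketch of this kind of outbound monitoring is shown below, assuming that firewall logs have already been parsed into source and destination address pairs; the enclave address range and sample records are hypothetical.

import ipaddress

# Hypothetical address range of the protected enclave.
ENCLAVE_NET = ipaddress.ip_network("10.10.0.0/16")

def outbound_alerts(connections):
    """Yield (src, dst) pairs where an enclave host initiated a connection to a
    public address, which may indicate malware calling back out of the network."""
    for src, dst in connections:
        src_ip, dst_ip = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        if src_ip in ENCLAVE_NET and dst_ip.is_global:
            yield src, dst

# Example usage with hypothetical, pre-parsed firewall log records:
records = [("10.10.4.21", "10.10.4.22"),  # internal enclave traffic
           ("10.10.4.21", "8.8.8.8")]     # outbound to a public address
for src, dst in outbound_alerts(records):
    print(f"ALERT: outbound connection from {src} to {dst}")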

The Executive Override

The “Executive Override” is an intentional policy allowing traffic through a perimeter firewall for a use that is nonessential from the perspective of industrial operations (there may be a very legitimate business case for the exception). It is almost unavoidable as business operations continue to evolve, but it is addressable.
One example of the “Executive Override” is the need for real-time process data within the business enterprise (in the least secure zone of the network!) so that financial and trading analysis can be made using the absolute latest information on production yields, quality, manufacturing efficiency, etc. This will often be done by extending Historian data through one or more firewalls, “poking holes” in the security perimeter of (potentially) several enclaves. The result, when implemented poorly, is a direct vector of attack from the executive console to the Historian system, which resides inside of a critical enclave.
To secure these inevitable requirements, establish a supervisory enclave that consists purely of read-only data (i.e., no “control”), and replicate the necessary Historians or HMIs into this enclave over a unidirectional gateway or data diode for physical layer separation. These replicated systems can then be allowed to communicate with less secure zones without risk of any malicious backwash into the critical zones.

The Ronco Perimeter

The “set it and forget it” process extolled by clever Ronco kitchen products (www.ronco.com) may be suitable for a rotisserie cooker, but it is not suitable for cyber security. Cyber security is a process, not a product, and therefore needs to be continuously assessed, adjusted, and improved. Even after a vulnerability assessment is complete and security policies are locked down, there are still steps to be taken. Specifically, these steps include
• Monitoring the newly established configuration to ensure that it is not changed, whether by an unknown administrator, a disgruntled insider, or an attacker modifying defenses in order to penetrate deeper into the network (a simple change-detection sketch follows this list).
• Identifying and adapting to new threat types, including new zero-day attacks, new social engineering schemes, and attacks introduced by new technology.
• Adding new security controls, and/or adjusting the configurations of existing controls to minimize risk in an ongoing manner.
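One lightweight way to support the first of these steps is to fingerprint validated configuration files and compare them on a schedule. The following Python sketch illustrates the idea using assumed file names; in practice this function belongs to the configuration management or SIEM tools already discussed.

import hashlib
from pathlib import Path

# Hypothetical configuration files to watch; adjust to the devices and paths
# in the monitored environment.
WATCHED_FILES = [Path("firewall_policy.cfg"), Path("ips_ruleset.cfg")]

def fingerprint(path):
    """Return the SHA-256 digest of a configuration file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def baseline(files):
    """Record known-good fingerprints after a configuration has been validated."""
    return {str(p): fingerprint(p) for p in files}

def detect_changes(known_good, files):
    """Compare current fingerprints to the baseline and list any files that differ."""
    return [str(p) for p in files if known_good.get(str(p)) != fingerprint(p)]

# Example usage: capture baseline(WATCHED_FILES) after the configuration is
# validated, then run detect_changes() on a schedule and alert on any result.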
The smartphone is an excellent example of how the introduction of new technology needs to be accommodated by security policies. Are these devices capable of transporting files (and potentially malware)? Can they be mounted as a removable drive via USB, wireless, or Bluetooth? Is it possible to route between a cellular carrier network and local Wi-Fi networks supported by these devices? Are there existing controls to prevent misuse of intelligent mobile platforms, or are new controls needed?

Compliance vs. Security

Compliance controls represent any number of guidelines and/or mandates that have been developed in order to ensure that organizations have correctly planned and implemented the necessary security measures to protect whatever sensitive materials, systems, or services may need protecting. While the controls discussed in this book (see Chapter 10, “Standards and Regulations”) relate specifically to cyber security, there are compliance regulations spanning almost every foreseeable aspect of information security, including fraud prevention, privacy, safety, financial responsibility, financial integrity, and more.
While compliance controls are presumably developed with good intentions, they can sometimes impact the very security process that they are trying to assure. This is because the steps needed to prove that certain security controls have been implemented and enforced are different from the controls themselves. As a result, it is a common pitfall to focus efforts on obtaining the documentation needed to check compliance boxes rather than on the security goals of the standard. This can result in misplaced effort, especially in preparation for an audit, at which point security analysts might be temporarily repurposed to the role of compliance analyst.

Note
While it is often possible to become compliant with a particular standard or regulation without implementing strong security controls, the documentation and audit trails required by most compliance standards are often easier to obtain when those security controls are in place. Compliance officers and security analysts should work together early in the planning stages to ensure that the intended security controls are implemented in a manner that satisfies both areas of responsibility. Compliance does not equal security, and security does not equal compliance; however, the two can be obtained together.

Audit Fodder

The various requirements excerpted in Chapter 10 from NERC CIP, CFATS, NRC, ISO, and NIST can be summarized quickly as the need to implement measures to protect against attacks, to log and review network activity on a continuous basis, and to prove it by retaining those logs for a set period of time. The issue is that last piece: proving compliance by retaining logs.
Unfortunately, simply collecting event logs for compliance is not going to help with security, unless those logs are collected from a security net that is properly cast, correctly configured, and regularly reviewed. While log retention can prove that certain measures are in place, and while documented plans and policies can prove that an organization’s intentions are well founded, neither can entirely prove or disprove that the ongoing security practices are good, bad, efficient, successful, or complete.
The issue comes when systems are implemented with the primary goal to provide the evidence of compliance and the secondary goal of security. The result in these cases is “audit fodder,” mounded high to satisfy the requirements of the auditor. Instead, security measures should be designed and implemented for security first. The resulting logs and documentation should satisfy the reporting and auditing requirements of the standard, and the network will be more secure for the effort. If the compliance requirements are still not met after the best efforts to secure the network are complete, continue the process of assessment, remediation, monitoring, response, retention, and then back to assessment again—repeating the cycle until all requirements are met.

The “One Week Compliance Window”

The pre-audit reallocation of resources has been observed in many organizations. With an impending audit, documentation must be put in order. Logs that have been retained for (potentially) many years need to be cross-referenced, correlated, collated, and formatted into suitable reports. Networks need to be mapped, vulnerabilities detected and resolved, patches applied, and antivirus signatures updated. In short, the network needs to be put in perfect order, and cleanly represented on paper for review by a compliance auditor. In many cases, however, the technical resource that is utilized to perform this extensive work is the only resource that is available: the security analyst(s). The skilled security professionals are tasked with cleaning house, taking their attention away from real-time, day-to-day security operations.
The result is a flurry of activity that actually weakens the network while the organization “becomes compliant.” Afterward the staff members are reassigned to their original duties, and if all goes well the organization will remain in compliance until the next audit occurs. In reality, new systems are implemented, new patches are applied, there is a merger or a purchase or a reduction—something that misaligns the current security policies and practices with the auditable compliance goals. Until the next audit cycle occurs, these errors may go unnoticed or disregarded.
The solution is obvious, though not always possible or realistic: ensure that dedicated compliance resources are in place to separate the responsibility of the audit from the security practices. Ironically, this separation can be facilitated through the closer integration of security and compliance efforts, for example, by mapping compliance controls to specific security events, so that as incidents occur the responsible security analysts are immediately aware of the impact that those incidents may have on the organization’s compliance goals. Chapter 10, “Standards and Regulations,” begins the process of mapping compliance and security controls together. This effort may also be facilitated through the use of the Unified Compliance Framework (www.unifiedcompliance.com).
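The mapping itself can be as simple as a lookup table relating event categories to the controls they may affect. The following Python sketch is illustrative only; the event names and control descriptions are assumptions and should be replaced with the requirements that actually apply to the organization.

# Hypothetical mapping of security event categories to affected compliance controls.
EVENT_TO_CONTROLS = {
    "unauthorized_config_change": ["Configuration change management requirements"],
    "failed_admin_login": ["Access control and account management requirements"],
    "malware_detected": ["Malicious software prevention requirements"],
}

def compliance_impact(event_type):
    """Return the compliance controls potentially affected by an incident."""
    return EVENT_TO_CONTROLS.get(event_type, ["No mapped controls; review manually"])

# Example usage:
print(compliance_impact("unauthorized_config_change"))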

Scope and Scale

Another common mistake made when attempting to secure a control system is to think of the industrial network as an isolated system. While once air-gapped from the rest of the organization, industrial and automated control systems are now dependent on and heavily influenced by many other systems: the business or enterprise network, new communications infrastructures that are integrated with power systems (i.e., the smart grid), new technologies, tools, etc. The result is that control systems must be assessed (at least for security purposes) as a dynamic system. Without sufficient planning for outside influences and unforeseen growth, the best-laid plans can fail after implementation.

Caution
When implementing new security products, proper sizing and configuration of those products is critical. However, most vendors rate products differently. Similar products may be marketed using entirely different metrics, making it difficult to choose the correct tool for the job.
Especially in an industrial network (where there is likely a compliance requirement to thoroughly test new assets in any case), insist on a trial of significant length to ensure that the product is sufficient for the scope and scale of the network in which it will be deployed. Because it is difficult to measure all of the relevant characteristics of a network in advance, this trial should be performed in a full test network environment that replicates the production network as closely as possible. Such a test environment should be maintained in its own isolated and secured enclave, and to the greatest degree possible it should contain the same network assets and systems that are in the production environment. The use of virtual machines (VMs) can simplify the process of establishing test networks by enabling the easy reimaging of certain systems. However, while some systems may be virtualized for simplicity, the nature of many industrial assets means that an at least partly physical test environment will likely be required.

Project-Limited Thinking

Two common axioms in information security are “Security is a Process, not a Product” and “Every door is a back door.” Taking this advice under consideration, security cannot be treated as a onetime project, with limited scope and definable goals. Rather, security policies should be continuously assessed and reassessed as part of the global information technology strategy. Because of the relatively static nature of industrial operations, there may be an inclination to implement perimeter defenses, lock down configurations, and then consider the job to be finished.
However, even if the industrial network(s) remain unchanged, the networks that surround and interact with them are likely to evolve, often at a rapid pace. New tools and technologies will be implemented that could impact the industrial network in ways that were never considered. The introduction of wireless networking in smartphones, for example, can suddenly introduce traffic into an industrial network as the phone attempts to discover wireless access points and negotiate connections; an executive touring a plant floor with a smartphone in his or her pocket has introduced unexpected change into the industrial network.
Especially in industrial networks where the enterprise or business network and the operational networks may be managed by separate groups, the point of demarcation between the two networks needs constant scrutiny as well. For example, the firewall between the SCADA DMZ and the Business Network might allow certain traffic on a certain port as part of a legitimate policy. Later, a new system or application could be introduced on the Business Network that uses the same port for a different function. Likewise, the enabled system on the Business Network (the firewall rule should explicitly define the source and destination IPs that are allowed to communicate) could be misconfigured, new software could be installed, or some other change made that ultimately violates the originally established policy—after all, business networks are more dynamic than industrial networks by nature. If both networks are not continuously assessed, these changes may go unnoticed, invalidating the security perimeter.
It is therefore necessary to think about industrial network security as broadly as possible, with full consideration of all outside influencing factors—even if those factors are outside of the responsibilities of the industrial network operator.

Insufficiently Sized Security Controls

The last pitfall to be discussed involves the improper implementation of automated security systems. These systems include specific security devices—such as a firewall, IPS, industrial protocol filter, application monitor, or whitelisting agent—as well as systems designed to provide the required situational awareness, log retention, and reporting—such as a SIEM or Log Management system. In either case, the tool may simply not be sized to complete the required task. For firewalls and IPSs, it might be an issue of throughput, latency, or the completeness of the rule set. For situational awareness tools, it might be a limitation on the types of logs, the volume of logs, or the rate at which logs can be assessed. The result is the same: the system will eventually fail.
This pitfall occurs partly because of the difficulties in measuring the required performance, especially in the case of situational awareness tools. The types of devices, the properties of the network(s), how the network is used, and other factors all influence the rate at which event logs are produced.6 As the rate of new events increases, so does the need to store more log files, as do the hardware requirements of the system itself, so that collected logs can be effectively managed, analyzed, and reported against. Additional difficulties arise when an incident occurs. If there is an attempted breach, an unauthorized change in configurations, or some other policy violation, all properly configured security devices will begin to generate an increased number of logs. In the event of a malware infection or an Advanced Persistent Threat, the incident can be prolonged, extending the spike in event volume into an ongoing plateau that can quickly overburden systems that have not been designed with sufficient headroom for growth.7
6. J.M. Butler, Benchmarking security information event management (SIEM), The SANS Institute Analytics Program, February, 2009.
7. Ibid.
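A rough, back-of-the-envelope sizing estimate can help avoid this pitfall. The following Python sketch is illustrative only: every number in it is an assumption and must be replaced with figures measured on the actual network, including the spike factor expected during an incident.

# Illustrative sizing estimate; all values below are assumptions.
devices = 200                # log-producing assets in scope
avg_eps_per_device = 2       # average events per second per device
incident_multiplier = 10     # assumed spike factor during an incident
avg_event_bytes = 500        # assumed average size of a stored event
retention_days = 365         # retention period required by the applicable standard

sustained_eps = devices * avg_eps_per_device
peak_eps = sustained_eps * incident_multiplier
storage_gb = sustained_eps * avg_event_bytes * 86400 * retention_days / 1e9

print(f"Sustained rate: {sustained_eps} EPS; design for peaks of {peak_eps} EPS")
print(f"Approximate storage for {retention_days} days of retention: {storage_gb:,.0f} GB")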

Summary

With the proper intentions, a well-informed network security administrator can plan, implement, and execute best-in-class security measures for any industrial network. By following the basic guidelines presented in this book, as well as those provided by the various compliance standards, regulatory guides, and other publications referenced in the Appendices, industrial networks will be more secure, protecting the valuable—and often critical—automated processes that they operate.