Chapter 9. Software Security Meets Security Operations[1]


A foolish consistency is the hobgoblin of little minds.

 
 --RALPH WALDO EMERSON

Traditionally, software development efforts at large corporations have been about as far removed from information security as they were from HR or any other particular business function. Not only that, but software development also has a tendency to be highly distributed among business units, and for that reason not even practiced in a cohesive, coherent manner. In the worst cases, roving bands of developers are traded like Pokémon cards in a fifth-grade classroom between busy business unit executives trying to get ahead. Suffice it to say, none of this is good.

The disconnect between security and development results in software development efforts that lack any sort of contemporary understanding of technical security risks. Security concerns are myriad for applications in today’s complex and highly connected computing environments. By blowing off the idea of security entirely, software builders ensure that software applications end up with way too many security weaknesses that could have and should have been avoided.

This chapter presents various recommendations to solve this problem by bridging the gap between two disparate fields. The approach is born out of experience in two diverse fields—software security and information security.[2] Central among these recommendations is the notion of using the knowledge inherent in information security organizations to enhance secure software development efforts.

Don’t Stand So Close to Me

Best practices in software security, such as the touchpoints described in this book, include a manageable number of simple security activities that are to be applied throughout any software development process. These activities are lightweight processes to be initiated at the earliest stages of software development (e.g., requirements and specifications) and then continued throughout the development process and on into deployment and operations.

Although an increasing number of software shops and individual developers are adopting the software security touchpoints as their own, they often lack the requisite security domain knowledge to do so. This critical knowledge arises from years of observing system intrusions, dealing with malicious hackers, suffering the consequences of software vulnerabilities, and so on. Put in this position, even the best-intended development efforts can fail to take into account real-world attacks previously observed on similar application architectures. Though books such as Exploiting Software and The Shellcoder’s Handbook are starting to turn this knowledge gap around, the science of attack is a novel one [Hoglund and McGraw 2004; Koziol et al. 2004].

On the other hand, information security staff—in particular, incident handlers and vulnerability/patch specialists—have spent years responding to attacks against real systems and thinking about the vulnerabilities that spawned them. In many cases, they’ve studied application vulnerabilities and their resulting attack profiles in minute detail. However, few information security professionals are software developers, at least on a full-time basis, and their solution sets tend to be limited to reactive techniques such as installing software patches, shoring up firewalls, updating intrusion detection signature databases, and the like. It is very rare indeed to find information security professionals directly involved in major software development projects.

Sadly, these two communities of highly skilled technology experts exist in nearly complete isolation. Their knowledge and experience bases, however, are largely complementary. Finding avenues for interdisciplinary cooperation is very likely to bear fruit in the form of fielded software that is better equipped to resist well-known and easily predicted attacks. A secondary benefit of such cooperation is that information security personnel develop a much better understanding of the applications they are tasked with protecting. This knowledge will no doubt benefit them in their normal job tasks.

Kumbaya (for Software Security)

Software security is a significant and developing topic. The touchpoints described in this book are meant to be carried out by software security specialists in tandem with development teams. The issue at hand is how information security professionals can best participate in the software development process. If you are a CISSP, an operational security professional, or a network administrator, this Bud’s for you. After a brief refresher paragraph on each touchpoint, I will introduce some recommendations relevant to both software developers and information security practitioners. The idea is to describe how best to leverage the complementary aspects of the two disciplines.

  • Requirements: Abuse Cases

    The concept of abuse case development is derived from use case development (see Chapter 8). In an abuse case, an application’s deliberate misuse is considered and the corresponding effect is pondered. For example, when addressing user input, a series of abuse cases can be constructed that describe in some detail how malicious users can and will attempt to overflow input buffers, insert malicious data (e.g., using SQL injection attacks), and generally run roughshod over software vulnerabilities. An abuse case will describe these scenarios as well as how the application should respond to them. As with their use case counterparts, each abuse case is then used to drive a (non)functional requirement and corresponding test scenario for the software.

    Involving information security in abuse case development is such low-hanging fruit that the fruit itself is dirt splattered from the latest hard rain. Simply put, infosec pros come to the table with the (rather unfortunate) benefit of having watched and dissected years of attack data, built forensics tools,[3] created profiles of attackers, and so on. This may make them jaded and surly, but at least they intimately know what we’re up against. Many abuse case analysis efforts begin with brainstorming or “whiteboarding” sessions during which an application’s use cases and functional requirements are described while a room full of experts pontificate about how an attacker might attempt to abuse the system. Properly participating in these exercises involves carefully and thoroughly considering similar systems and the attacks that have been successful against them. Thorough knowledge of attack patterns and the computer security horror stories of days gone by brings this exercise to life. Getting past your own belly button is important to abuse case success, so consider other domains that may be relevant to the application under review while you’re at it. Once again, real battle experience is critical.

    Infosec people are likely to find (much to their amusement) that the software developers in the room are blissfully unaware of many of the attack forms seen every day out beyond the network perimeter. Of course, many of the uninformed are also quite naturally skeptical unbelievers. While converting the unbelievers, great care should be taken not to succumb to the tendency toward hyperbole and exaggeration that is unfortunately common among security types. There’s really nothing worse than a blustery security weenie on his high horse over some minor skirmish. Do not overstate the attacks that you’ve seen and studied. Instead, stick to the facts (ma’am) and be prepared to back your statements up with actual examples. Knowledge of actual software technology is a big plus.
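
    To make this concrete, here is a minimal sketch of how two abuse cases might be recast as executable requirements. It assumes a JUnit test harness; the validator, field name, and length limit are illustrative stand-ins, not prescriptions from any particular project.

        import org.junit.Test;
        import static org.junit.Assert.assertFalse;

        public class UsernameAbuseCaseTest {

            // Derived requirement: usernames are 1-64 characters drawn from a
            // small safe alphabet; everything else is rejected outright. This
            // method stands in for the real application's input handling.
            static boolean isValidUsername(String s) {
                return s != null && s.matches("[A-Za-z0-9_]{1,64}");
            }

            @Test
            public void oversizedInputIsRejected() {
                // Abuse case: attacker submits a buffer-filling payload.
                assertFalse(isValidUsername("A".repeat(100_000)));
            }

            @Test
            public void sqlMetacharactersAreRejected() {
                // Abuse case: a classic SQL injection probe via the username field.
                assertFalse(isValidUsername("alice'; DROP TABLE users; --"));
            }
        }

    The point is traceability: each test names the abuse case it encodes, so when a test fails, everyone knows exactly which attack scenario just became plausible.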

  • Design: Business Risk Analysis

    Assessing the business impact likely to result from a successful compromise of the software is a critical undertaking (see Chapters 2 and 5). Without explicitly taking this on, a security analysis will fall short in the “who cares” department. Questions of cost to the parent organization sponsoring the software are considered relative to the project. This cost is understood both in terms of direct cost (think liability, lost productivity, and rework) and in terms of indirect cost (think reputation and brand damage).

    The most important people to consult when assessing software-induced business risks are the business stakeholders behind the software. In organizations that already practice business-level technology analysis, that fact tends to be quite well understood. The problem is that in a majority of these organizations, technology assessment of the business situation stops well before the level of software. A standard approach can be enhanced with the addition of a few simple questions: What do the people causing the software to be built think about security? What do they expect? What are they trying to accomplish that might be thwarted by successful attack? What worries them about security? The value that information security professionals can bring to answering these questions comes from a wealth of firsthand experience seeing security impact when similar business applications were compromised.

    That puts them in a good position to answer other security-related questions: What sorts of costs have similar companies incurred from attacks? How much downtime was involved? What was the resulting publicity in each case? In what ways was the organization’s reputation tarnished? Infosec people can provide input and flesh out the conversation with relevant stories. Here again, great care should be taken not to overstate the facts. When citing incidents at other organizations, be prepared to back up your claims with news reports and other third-party documentation.

  • Design: Architectural Risk Analysis

    Like the business risk analysis just described, architectural risk analysis assesses the technical security exposures in an application’s proposed design and links these to business impact. Starting with a high-level depiction of the design, each module, interface, interaction, and so on is considered against known attack methodologies and their likelihood of success (see Chapter 5). Architectural risk analyses are often usefully applied against individual subcomponents of a design as well as on the design as a whole. This provides a forest-level view of a software system’s security posture. Attention to holistic aspects of security is paramount, as at least 50% of security defects are architectural in nature.

    At this point we’re beginning to get to the technical heart of the software development process. For architectural risk analysis to be effective, security analysts must possess a great deal of technology knowledge covering both the application and its underlying platform, frameworks, languages, functions, libraries, and so on. The most effective infosec team member in this situation is clearly the one who is a technology expert with solid experience around particular software tools. With this kind of knowledge under her belt, the infosec professional should again be providing real-world feedback into the process. For example, the analysis team might be discussing the relative strengths and weaknesses of a particular network encryption protocol.

    Information security can help by providing perspective to the conversation. All software has potential weaknesses, but has component X been involved in actual attacks? Are there known vulnerabilities in the protocol that the project is planning to use? Is a COTS component or platform a popular attacker target? Or, on the other hand, does it have a stellar reputation and only a handful of properly handled, published vulnerabilities or known attacks? Feedback of this sort should be extremely useful in prioritizing risk and weaknesses as well as deciding on what, if any, mitigation strategies to pursue.
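
    As a concrete (if simplified) illustration of where such feedback lands, here is a sketch of the kind of risk register an analysis team might keep. The 1-5 likelihood and impact ratings, component names, and scoring scheme are all assumptions made for the example; the infosec contribution is the real-world attack data behind the likelihood numbers.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class RiskRanking {

            // One finding from the analysis: where it lives, what it is, and
            // rough 1-5 ratings for likelihood (informed by attack history)
            // and business impact (informed by the business risk analysis).
            record Risk(String component, String description, int likelihood, int impact) {
                int exposure() { return likelihood * impact; }
            }

            public static void main(String[] args) {
                List<Risk> risks = new ArrayList<>(List.of(
                    new Risk("session layer", "home-grown crypto protocol", 4, 5),
                    new Risk("COTS parser", "history of published exploits", 3, 4),
                    new Risk("admin console", "no account lockout", 2, 3)));

                // Highest exposure first: this ordering drives mitigation
                // decisions now and the risk-based test plan later.
                risks.sort(Comparator.comparingInt(Risk::exposure).reversed());
                risks.forEach(r -> System.out.printf("%2d  %-14s %s%n",
                        r.exposure(), r.component, r.description));
            }
        }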

  • Test Planning: Security Testing

    Just as testers typically use functional specifications and requirements to create test scenarios and test plans,[4] security-specific functionality should be used to derive tests against the target software’s security functions (see Chapter 7). These kinds of investigations generally include tests that verify security features such as encryption, user identification, logging, confidentiality, authentication, and so on. Think of these as the “positive” security features that white hats are concerned with.

    Thinking like a good guy is not enough. Adversarial test scenarios are the natural result of the process of assessing and prioritizing software’s architectural risks (see Chapter 7). Each architectural risk and abuse case considered should be described and documented down to a level that clearly explains how an attacker might go about exploiting a weakness and compromising the software. Donning your black hat and thinking like a bad guy is critical. Such descriptions can be used to generate a priority-based list of test scenarios for later adversarial testing.

    Although test planning and execution are generally performed by QA and development groups, testing represents another opportunity for infosec to have a positive impact. Testing—especially risk-based testing—not only must cover functionality but also should closely emulate the steps that an attacker will take when breaking a target system. Highly realistic scenarios (e.g., the security analog to real user behavior) are much more useful than arbitrary pretend “attacks.” Standard testing organizations, if they are effective at all, are most effective at designing and performing tests based around functional specifications. Designing risk-based test scenarios is a rather substantial departure from the status quo and one that should benefit from the experience base of security incident handlers. In this case, infosec professionals who are good at thinking like bad guys are the most valuable resources. The key to risk-based testing is to understand how bad guys work and what that means for the system under test.
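
    Here is one hedged sketch of such a risk-based test. Rather than exercising the application through its UI the way a functional test would, it emulates an attacker who skips the client entirely and posts a tampered parameter straight to the server. The host, endpoint, and parameter names are placeholders for the system under test.

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class PriceTamperingTest {
            public static void main(String[] args) throws Exception {
                // The client-side code "enforces" the price; an attacker
                // simply posts his own price directly to the order endpoint.
                URL url = new URL(
                    "http://test-server.example/order?item=42&price=0.01");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");

                int status = conn.getResponseCode();
                // A server that revalidates the price refuses the request.
                if (status == 200) {
                    System.out.println("FAIL: server accepted a client-supplied price");
                } else {
                    System.out.println("PASS: server rejected tampering (" + status + ")");
                }
            }
        }

    A functional test would never think to try this; an incident handler who has watched price-tampering attacks in the wild will try it first.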

  • Implementation: Code Review

    The design-centric activities described earlier focus on architectural flaws built into software design. They completely overlook, however, implementation bugs that may well be introduced during coding. Implementation bugs are both numerous and common (just like real bugs in the Virginia countryside) and include nasty creatures like the notorious buffer overflow, which owes its existence to the use (or misuse) of vulnerable APIs (e.g., gets(), strcpy(), and so on in C) (see Chapter 4). Code review processes, both manual and (even more important) automated with a static analysis tool, attempt to identify security bugs prior to the software’s release.

    By its very nature, code review requires knowledge of code. An infosec practitioner with little experience writing and compiling software is going to be of little use during a code review. If you don’t know what it means for a variable to be declared in a header or an argument to a method to be static/final, staring at lines of code all day isn’t going to help. Because of this, the code review step is best left in the hands of the members of the development organization, especially if they are armed with a modern source code analysis tool. With the exception of information security people who are highly experienced in programming languages and code-level vulnerability resolution, there is no natural fit for network security expertise during the code review phase. This may come as a great surprise to those organizations currently attempting to impose software security on their enterprises through the infosec division. Even though the idea of security enforcement is solid, making enforcement at the code level successful when it comes to code review requires real hands-on experience with code (see the box Know When Enough Is Too Much).
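
    For readers who want a picture of what reviewers (and static analysis tools) look for, here is a sketch in Java rather than C of one classic finding and its fix. The class and query are illustrative only.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class AccountLookup {

            // BAD: untrusted 'name' is concatenated into the query string.
            // This is the finding a reviewer or analysis tool should flag.
            ResultSet findUnsafe(Connection conn, String name) throws SQLException {
                Statement stmt = conn.createStatement();
                return stmt.executeQuery(
                    "SELECT id, balance FROM accounts WHERE owner = '" + name + "'");
            }

            // BETTER: a parameterized query keeps data out of the SQL parse
            // tree, much as replacing strcpy() with a bounded copy removes an
            // overflow instead of hoping the input stays friendly.
            ResultSet findSafe(Connection conn, String name) throws SQLException {
                PreparedStatement stmt = conn.prepareStatement(
                    "SELECT id, balance FROM accounts WHERE owner = ?");
                stmt.setString(1, name);
                return stmt.executeQuery();
            }
        }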

  • System Testing: Penetration Testing

    System penetration testing, when used appropriately, focuses on people failures and procedure failures made during the configuration and deployment of software. The best kinds of penetration testing are driven by previously identified risks and are engineered to probe risks directly in order to ascertain their exploitability (see Chapter 6).

    While testing software to functional specifications has traditionally been the domain of QA, penetration testing has traditionally been the domain of information security and incident-handling organizations. As such, the fit here for information security participation is a very natural and intuitive one. Of course, there are a number of subtleties that should not be ignored. As I describe in Chapter 6, a majority of penetration testing today focuses its attention on network topology, firewall placement, communications protocols, and the like. It is therefore very much an outside→in approach that barely begins to scratch the surface of applications. Penetration testing needs to encompass a more inside→out approach that takes into account risk analyses and other software security results as it is carried out. This distinction is sometimes described as the difference between network penetration testing and application penetration testing. Software security is much more interested in the latter. Also worth noting is the use of various black box penetration tools. Network security scanners like Nessus, nmap, and other SATAN derivatives are extremely useful since there are countless ways to configure (and misconfigure) complex networks and their various services. Application security scanners (which I lambaste in Chapter 1) are nowhere near as useful. If by an “application penetration test” you mean the process of running an application security testing tool and gathering results, you have a long way to go to make your approach hold water.[5]

    The good news about penetration testing and infosec involvement is that it is most likely already underway. The bad news is that infosec needs to up the level of software clue in order to carry out penetration testing most effectively.
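
    To illustrate the inside→out distinction, here is a sketch of an application-level probe driven by a hypothetical risk analysis finding: the application serves files by user-supplied name. The host, endpoint, and payload are placeholders. A canned scanner has no way of knowing this endpoint even exists; the probe only makes sense because the risk analysis pointed at it.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class TraversalProbe {
            public static void main(String[] args) throws Exception {
                // Risk-analysis finding under test: does a path traversal
                // payload escape the web root of the file-serving endpoint?
                String payload = URLEncoder.encode("../../../../etc/passwd", "UTF-8");
                URL url = new URL("http://test-server.example/download?file=" + payload);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();

                if (conn.getResponseCode() == 200) {
                    try (BufferedReader in = new BufferedReader(
                            new InputStreamReader(conn.getInputStream()))) {
                        String first = in.readLine();
                        // passwd-style content means the risk is exploitable.
                        if (first != null && first.contains("root:")) {
                            System.out.println("VULNERABLE: traversal payload served");
                            return;
                        }
                    }
                }
                System.out.println("Payload not obviously successful; investigate further");
            }
        }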

  • Fielded System: Deployment and Operations

    The final steps in fielding secure software are the central activities of deployment and operations. Careful configuration and customization of any software application’s deployment environment can greatly enhance its security posture. Designing a smartly tailored deployment environment for a program requires following a process that starts at the network component level, proceeds through the operating system, and ends with the application’s own security configuration and setup.

Many software developers would argue that deployment and operations are not even part of the software development process. Even if this view were correct, there is no way that operations and deployment concerns can be properly addressed if the software is so poorly constructed as to fall apart no matter what kind of solid ground it is placed on. Put bluntly, operations organizations have put up with some rather stinky software for a long time, and it has made them wary. If we can set that argument aside for a moment and look at the broader picture—that is, safely setting up the application in a secure operational environment and running it accordingly—then the work that needs doing can certainly be positively affected by information security. The best opportunities exist in fine-tuning access controls at the network and operating system levels, as well as in configuring an event-logging and event-monitoring mechanism that will be most effective during incident response operations. Attacks will happen. Be prepared for them to happen, and be prepared to clean up the mess after they have.[6]
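
As a small sketch of the event-logging half of that advice, the following sets up a dedicated, durable security event log using the standard java.util.logging API. The logger name, file location, and event formats are illustrative assumptions, not a prescription.

    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class SecurityLog {
        private static final Logger LOG = Logger.getLogger("app.security");

        public static void main(String[] args) throws Exception {
            // Append security events to a dedicated audit file so they
            // survive application restarts and can be handed straight
            // to the incident response team.
            FileHandler audit = new FileHandler("security-audit.log", true);
            audit.setFormatter(new SimpleFormatter());
            LOG.addHandler(audit);
            LOG.setLevel(Level.INFO);

            // Record the who, what, and when an incident handler will need.
            LOG.info("login failure: user=alice from=203.0.113.7");
            LOG.warning("authorization denied: user=alice resource=/admin");
        }
    }

The substance here is operational rather than clever: log the who, what, and when that an incident handler will ask for, and make sure the log outlives the process that wrote it.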

Come Together (Right Now)

Let’s pretend that the advice given in this chapter is sound. Even if you accept the recommendations wholesale as worthy, the act of aligning information security and software development is a serious undertaking (and not one for the faint of heart). Close cooperation with the development organization is essential to success. If infosec is perceived as the security police or “those people with sticks who show up every once in a while and beat us soundly for reasons we don’t understand” by dev, you have a problem that must be addressed (see the box The Infosec Boogey Man).

In many cases, dev is more than willing to accept guidance and advice from information security people who know what they’re talking about. One problem is that dev doesn’t know who in information security to talk to, who might help them, and who might just be a blowhard security weenie. To fix this problem, the first step for any information security professional who wants to help out with development efforts is simple: reach out to the developers, roll up your sleeves, and offer to assist.

Once you have made dev aware of your willingness to help, consider taking small steps toward the goals laid out in this chapter. Rather than trying to become involved in every phase of a giant world-changing endeavor all at once, try one at a time. Be careful to not overwhelm the overall system by attempting to make too many changes at the same time. (Much more about this and about adopting software security in large organizations can be found in Chapter 10.)

Another positive step is for the information security troops to take the time to learn as much as they can about software development in general and their organization’s software development environment in particular. Study and learn about the types of applications that your software people develop; why they are working on them (i.e., what business purpose software is being built for); what languages, platforms, frameworks, and libraries are being used; and so on. Showing up with a clue is much better than showing up willing but clueless. Software people are not the most patient people on the planet, and often you have one and only one shot at getting involved. If you help, that’s great. But if you hinder, that’ll be the last time they talk to you.

In the end, success or failure is as likely to be driven by the personalities of the people involved as anything else. Success certainly is not guaranteed, even with the best of intentions and the most careful planning. Beer helps.

Future’s So Bright, I Gotta Wear Shades

The interesting thing about software security is that it appears to be in the earliest stages of development, much as the field of information security itself was ten years or so ago. The security activities I describe in this chapter are only the tip of the best practice iceberg. The good news is that these best practices are emerging at all! Of course, the software security discipline will evolve and change with time, and best practices and advice will ebb and flow like the tide at the beach. But the advice here is likely to bear fruit for some time.

The recommendations in this chapter are based on years of experience with a large dose of intuition thrown in for good measure. They are presented in the hopes that others will take them, consider them, adjust them, and attempt to apply them in their organizations. I believe that companies’ software developers and information security staff can benefit greatly from the respective experiences of the other.

Much work will need to be done before the practical recommendations made here prove themselves to be as useful in practice as I believe that they will be.



[1] Parts of this chapter appeared in original form in IEEE Security & Privacy magazine coauthored with Ken van Wyk [van Wyk and McGraw 2005].

[2] To be completely honest, it is Ken van Wyk who brings vast experience in information security to this chapter. I’m just a software security guy. Ken cowrote the book Secure Coding [Graff and van Wyk 2003], which tackles software security from the point of view of operations-related security people.

[3] See Dan Farmer and Wietse Venema’s excellent new tome on forensics, Forensic Discovery [Farmer and Venema 2005].

[4] Especially those testers who understand the critical notion of requirements traceability <http://www.sei.cmu.edu/str/descriptions/reqtracing_body.html>.

[5] It’s worth noting here for non-software people how amusing the idea of a canned set of security tests (hacker in a box, so to speak) for any possible application is to software professionals. Software testing is not something that can be handled by a set of canned tests, no matter how large the can. The idea of testing any arbitrary program with, say, a few thousand tests determined in advance before the software was even conceived is ridiculous. I’m afraid that the idea of testing any arbitrary program with a few hundred application security tests is just as silly!

[6] This kind of advice is pretty much a “no duh” for information security organizations. That’s one reason why their involvement in this step is paramount.
