Chapter 3. Secure Software Development Management and Organizational Models1

1. Many of the models presented in this chapter were initially discussed in Mead [2010b].

with Julia Allen and Dan Shoemaker

3.1 The Management Dilemma

When managers and stakeholders start a software acquisition or development project, they face a dazzling array of models and frameworks to choose from. Some of those models are general software process models, and others are specific to security or software assurance. Very often the marketing hype that accompanies these models makes it difficult to select a model or set of practices.

In our study of the problem, we realized that there is no single, recognized framework to organize research and practice areas that focuses on building assured systems. Although we did not succeed in defining a single “best” framework, we were able to develop guidance to help managers and stakeholders address challenges such as the following:

• How do I decide which security methods fit into a specific lifecycle activity?

• How do I know if a specific security method is sufficiently mature for me to use on my projects?

• When should I take a chance on a security research approach that has not been widely used?

• What actions can I take when I have no approach or method for prioritizing and selecting new research or when promising research appears to be unrelated to other research in the field?

In this chapter, we present a variety of models and frameworks that managers and stakeholders can use to help address these challenges. We define a framework using the following definitions from the Babylon dictionary [Babylon 2009]:


A framework is a basic conceptual structure used to solve or address complex issues. This very broad definition has allowed the term to be used as a buzzword, especially in a software context.

A structure to hold together or support something, a basic structure.


3.1.1 Background on Assured Systems

The following topics exhibit varying levels of maturity and use differing terminology, but they all play a role in building assured systems:

Engineering resilient systems encompasses secure software engineering as well as requirements engineering, architecture, and design of secure systems and large systems of systems, and service and system continuity of operations.

Containment focuses on the problem of how to monitor and detect a component’s behavior to contain and isolate the effect of aberrant behavior while still being able to recover from a false assumption of bad behavior.
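The containment idea above can be illustrated with a minimal sketch. All names here (such as `ContainmentMonitor` and `error_threshold`) are hypothetical illustrations, not part of any published model: a wrapper quarantines a component after repeated aberrant behavior, yet can re-admit the component when the judgment of bad behavior proves to be a false positive.

```python
class ContainmentMonitor:
    """Illustrative sketch: isolate a misbehaving component, allow recovery."""

    def __init__(self, component, error_threshold=3):
        self.component = component
        self.error_threshold = error_threshold  # failures tolerated before isolation
        self.error_count = 0
        self.quarantined = False

    def call(self, *args):
        if self.quarantined:
            raise RuntimeError("component is quarantined")
        try:
            result = self.component(*args)
            self.error_count = 0          # a healthy call resets the counter
            return result
        except Exception:
            self.error_count += 1
            if self.error_count >= self.error_threshold:
                self.quarantined = True   # contain: isolate the component
            raise

    def restore(self):
        # Recover from a false assumption of bad behavior: re-admit the component.
        self.quarantined = False
        self.error_count = 0
```

A real containment mechanism would, of course, monitor far richer behavioral signals than exception counts; the sketch only shows the isolate-and-recover shape of the problem.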

Architecting secure systems defines the necessary and appropriate design artifacts, quality attributes, and appropriate trade-off considerations that describe how security properties are positioned, how they relate to the overall system/IT architecture, and how security quality attributes are measured.

Secure software engineering (secure coding, software engineering, and hardware design improvement) improves the way software and hardware are developed by reducing vulnerabilities from software and hardware flaws. This work includes technology lifecycle assurance mechanisms, advanced engineering disciplines, standards and certification regimes, and best practices. Research areas in secure software engineering include refining current assurance mechanisms and developing new ones where necessary, developing certification regimes, and exploring policy and incentive options.

Secure software engineering encompasses a range of activities targeting security. The book Software Security Engineering [Allen 2008] presents a valuable discussion of these topics, and further research continues.

Some organizations have begun to pay more attention to building assured systems, including the following:

• Some organizations are participating in the Building Security In Maturity Model [McGraw 2015].

• Some organizations are using Microsoft’s Security Development Lifecycle (SDL) [Howard 2006].

• Some organizations are members of the Software Assurance Forum for Excellence in Code (SAFECode) consortium [SAFECode 2010].

• Some organizations are working with Oracle cyber security initiatives and security solutions [Oracle 2016].

• Members of the Open Web Application Security Project (OWASP) are using the Software Assurance Maturity Model (SAMM) [OWASP 2015].

• The Trustworthy Software Initiative in the UK, in conjunction with the British Standards Institution, has produced Publicly Available Specification 754 (PAS 754), “Software Trustworthiness—Governance and Management—Specification” [TSI 2014].

Software assurance efforts tend to be strongest in software product development organizations, which have provided the most significant contribution to the efforts listed above. However, software assurance efforts tend to be weaker in large organizations that develop systems for use in-house and integrate systems across multiple vendors. They also tend to be weaker in small- to medium-sized organizations developing software products for licensed use. It’s worth noting that there are many small- and medium-sized organizations that have good cyber security practices, and there are also large organizations that have poor ones. For a while, organizations producing industrial control systems lagged behind large software development firms, but this has changed over the past several years.

Furthermore, there are a variety of lifecycle models in practice. Even in the larger organizations that adopt secure software engineering practices, there is a tendency to select a subset of the total set of recommended or applicable practices. Such uneven adoption of practices for building assured systems makes it difficult to evaluate the results using these practices.

Let’s take a look at existing frameworks and lifecycle models for building assured systems. In the literature, we typically see lifecycle models or approaches that serve as structured repositories of practices from which organizations select those that are meaningful for their development projects.

Summary descriptions of several software development and acquisition process models that are in active use appear in Section 3.2, “Process Models for Software Development and Acquisition,” and models for software security are summarized in Section 3.3, “Software Security Frameworks, Models, and Roadmaps.”

3.2 Process Models for Software Development and Acquisition

A framework for building assured systems needs to build on and reflect known, accepted, common practice for software development and acquisition. One commonly accepted expression of the codification of effective software development and acquisition practices is a process model. Process models define a set of processes that, when implemented, demonstrably improve the quality of the software that is developed or acquired using such processes. The Software Engineering Institute (SEI) at Carnegie Mellon University has been a recognized thought leader for more than 25 years in developing capability and maturity models for defining and improving the process by which software is developed and acquired. This work includes building a community of practitioners and reflecting their experiences and feedback in successive versions of the models.

These models reflect commonly known good practices that have been observed, measured, and assessed by hundreds of organizations. Such practices serve as the foundation for building assured systems; it makes no sense to attempt to integrate software security practices into a software development process or lifecycle if this development process is not defined, implemented, and regularly improved. Thus, these development and acquisition models serve as the basis against which models and practices for software security are considered. These development and acquisition models also serve as the basis for considering the use of promising research results.

The models described in this section apply to newly developed software, acquired software, and (extending the useful life of) legacy software.

The content in this section is excerpted from publicly available SEI reports and the CMMI Institute website. It summarizes the objectives of Capability Maturity Model Integration (CMMI) models in general, CMMI for Development, and CMMI for Acquisition. We recommend that you familiarize yourself with software development and acquisition process models in general (including CMMI-based models) to better understand how software security practices, necessary for building assured systems, are implemented and deployed.

3.2.1 CMMI Models in General

The following information about CMMI models is from the CMMI Institute [CMMI Institute 2015]:


The Capability Maturity Model Integration (CMMI®) is a world-class performance improvement framework for competitive organizations that want to achieve high-performance operations. Building upon an organization’s business performance objectives, CMMI provides a set of practices for improving processes, resulting in a performance improvement system that paves the way for better operations and performance. More than any other approach, CMMI doesn’t just help you to improve your organizational processes. CMMI also has built-in practices that help you to improve the way you use any performance improvement approach, setting you up to achieve a positive return on your investment.

CMMI does not provide a single process. Rather, the CMMI framework models what to do to improve your processes, not define your processes. CMMI is designed to compare an organization’s existing processes to proven best practices developed by members of industry, government, and academia; reveal possible areas for improvement; and provide ways to measure progress.

The result? CMMI helps you to build and manage performance improvement systems that fit your unique environment.

CMMI is not just for software development. CMMI helps software and services organizations in a variety of industries to align meaningful process improvement with business and engineering goals for cost, schedule, productivity, quality, and customer satisfaction.

CMMI helps companies to improve operational performance by lowering the cost of development, production, and delivery. CMMI provides the framework for you to consistently and predictably deliver the products and services that your customers want, when they want them.

CMMI offers three constellations—CMMI for Acquisition, CMMI for Development, and CMMI for Services—that help to improve specific business needs, plus the People Capability Maturity Model (People CMM), which uses the process framework as a foundation to help organizations manage and develop their workforce to become an employer of choice. Across these three constellations and the People CMM, CMMI delivers measurable results for organizations of all sizes in a variety of industries, including aerospace, finance, health services, software, defense, transportation, and telecommunications.


3.2.2 CMMI for Development (CMMI-DEV)

The SEI’s CMMI for Development report states the following [CMMI Product Team 2010b]:


Companies want to deliver products and services better, faster, and cheaper. At the same time, in the high-technology environment of the twenty-first century, nearly all organizations have found themselves building increasingly complex products and services. It is unusual today for a single organization to develop all the components that compose a complex product or service. More commonly, some components are built in-house and some are acquired; then all the components are integrated into the final product or service. Organizations must be able to manage and control this complex development and maintenance process.

The problems these organizations address today involve enterprise-wide solutions that require an integrated approach. Effective management of organizational assets is critical to business success. In essence, these organizations are product and service developers that need a way to manage their development activities as part of achieving their business objectives.

In the current marketplace, maturity models, standards, methodologies, and guidelines exist that can help an organization improve the way it does business. However, most available improvement approaches focus on a specific part of the business and do not take a systemic approach to the problems that most organizations are facing. By focusing on improving one area of a business, these models have unfortunately perpetuated the stovepipes and barriers that exist in organizations.

CMMI® for Development (CMMI-DEV) provides an opportunity to avoid or eliminate these stovepipes and barriers. CMMI for Development consists of best practices that address development activities applied to products and services. It addresses practices that cover the product’s lifecycle from conception through delivery and maintenance. The emphasis is on the work necessary to build and maintain the total product.



What Is a Process Area?

A process area is a cluster of related practices in an area that, when implemented collectively, satisfies a set of goals considered important for making improvement in that area.


CMMI-DEV includes the following 22 process areas [CMMI Product Team 2010b]. The 22 process areas appear in alphabetical order by acronym:

• Causal Analysis and Resolution (CAR)

• Configuration Management (CM)

• Decision Analysis and Resolution (DAR)

• Integrated Project Management (IPM)

• Measurement and Analysis (MA)

• Organizational Process Definition (OPD)

• Organizational Process Focus (OPF)

• Organizational Performance Management (OPM)

• Organizational Process Performance (OPP)

• Organizational Training (OT)

• Product Integration (PI)

• Project Monitoring and Control (PMC)

• Project Planning (PP)

• Process and Product Quality Assurance (PPQA)

• Quantitative Project Management (QPM)

• Requirements Development (RD)

• Requirements Management (REQM)

• Risk Management (RSKM)

• Supplier Agreement Management (SAM)

• Technical Solution (TS)

• Validation (VAL)

• Verification (VER)

3.2.3 CMMI for Acquisition (CMMI-ACQ)

The SEI’s CMMI for Acquisition (CMMI-ACQ) report states the following [CMMI Product Team 2010a]:


Organizations are increasingly becoming acquirers of needed capabilities by obtaining products and services from suppliers and developing less and less of these capabilities in-house. This widely adopted business strategy is designed to improve an organization’s operational efficiencies by leveraging suppliers’ capabilities to deliver quality solutions rapidly, at lower cost, and with the most appropriate technology.

Acquisition of needed capabilities is challenging because acquirers have overall accountability for satisfying the end user while allowing the supplier to perform the tasks necessary to develop and provide the solution.

Mismanagement, the inability to articulate customer needs, poor requirements definition, inadequate supplier selection and contracting processes, insufficient technology selection procedures, and uncontrolled requirements changes are factors that contribute to project failure. Responsibility is shared by both the supplier and the acquirer. The majority of project failures could be avoided if the acquirer learned how to properly prepare for, engage with, and manage suppliers.

In addition to these challenges, an overall key to a successful acquirer-supplier relationship is communication.

Unfortunately, many organizations have not invested in the capabilities necessary to effectively manage projects in an acquisition environment. Too often acquirers disengage from the project once the supplier is hired. Too late they discover that the project is not on schedule, deadlines will not be met, the technology selected is not viable, and the project has failed.

The acquirer has a focused set of major objectives. These objectives include the requirement to maintain a relationship with end users to fully comprehend their needs. The acquirer owns the project, executes overall project management, and is accountable for delivering the product or service to the end users. Thus, these acquirer responsibilities can extend beyond ensuring the product or service is delivered by chosen suppliers to include activities such as integrating the overall product or service, ensuring it makes the transition into operation, and obtaining insight into its appropriateness and adequacy to continue to meet customer needs.

CMMI® for Acquisition (CMMI-ACQ) enables organizations to avoid or eliminate barriers in the acquisition process through practices and terminology that transcend the interests of individual departments or groups.


CMMI-ACQ has 22 process areas, 6 of which are specific to acquisition practices, and 16 of which are shared with other CMMI models. These are the process areas specific to acquisition practices:

• Acquisition Requirements Development

• Solicitation and Supplier Agreement Development

• Agreement Management

• Acquisition Technical Management

• Acquisition Verification

• Acquisition Validation

In addition, the model includes guidance on the following:

• Acquisition strategy

• Typical supplier deliverables

• Transition to operations and support

• Integrated teams

The 16 shared process areas include practices for project management, organizational process management, and infrastructure and support.

3.2.4 CMMI for Services (CMMI-SVC)

The SEI’s CMMI for Services (CMMI-SVC) report states the following [CMMI Product Team 2010c]:


The service industry is a significant driver for worldwide economic growth. Guidance on developing and improving mature service practices is a key contributor to the service provider performance and customer satisfaction. The CMMI® for Services (CMMI-SVC) model is designed to begin meeting that need.

All CMMI-SVC model practices focus on the activities of the service provider. Seven process areas focus on practices specific to services, addressing capacity and availability management, service continuity, service delivery, incident resolution and prevention, service transition, service system development, and strategic service management processes.


CMMI-SVC contains 24 process areas. Of those process areas, 16 are core process areas, 1 is a shared process area, and 7 are service-specific process areas. Detailed information on the process areas can be found in CMMI for Services, Version 1.3 [CMMI Product Team 2010c]. The 24 process areas appear in alphabetical order by acronym:

• Capacity and Availability Management (CAM)

• Causal Analysis and Resolution (CAR)

• Configuration Management (CM)

• Decision Analysis and Resolution (DAR)

• Incident Resolution and Prevention (IRP)

• Integrated Work Management (IWM)

• Measurement and Analysis (MA)

• Organizational Process Definition (OPD)

• Organizational Process Focus (OPF)

• Organizational Performance Management (OPM)

• Organizational Process Performance (OPP)

• Organizational Training (OT)

• Process and Product Quality Assurance (PPQA)

• Quantitative Work Management (QWM)

• Requirements Management (REQM)

• Risk Management (RSKM)

• Supplier Agreement Management (SAM)

• Service Continuity (SCON)

• Service Delivery (SD)

• Service System Development (SSD)

• Service System Transition (SST)

• Strategic Service Management (STSM)

• Work Monitoring and Control (WMC)

• Work Planning (WP)

3.2.5 CMMI Process Model Uses

CMMI models are one foundation for well-managed and well-defined software development, acquisition, and services processes. In practice, organizations have used them for many years to improve their processes, identify areas for improvement, and implement systematic improvement programs. Process models have been a valuable tool for executive and middle managers, and many self-improvement programs and consultants support this area.

In academia, process models are routinely taught in software engineering degree programs and in some individual software engineering courses, so that graduates of these programs are familiar with them and know how to apply them. In capstone projects, students are frequently asked to select a development process from a range of models.

The next section describes leading models and frameworks that define processes and practices for software security. Such processes and practices are, in large part, in common use by a growing body of organizations that are developing software to be more secure.

3.3 Software Security Frameworks, Models, and Roadmaps

In addition to considering process models for software development and acquisition, a framework for building assured systems needs to build on and reflect known, accepted, common practice for software security. The number of promising frameworks and models for building more secure software is growing. For example, Microsoft has defined its SDL and made it publicly available. In the recently released version 6 of the Building Security In Maturity Model (BSIMM) [McGraw 2015], the authors collected and analyzed software security practices in 78 organizations.

The following subsections summarize models, frameworks, and roadmaps and provide excerpts of descriptive information from publicly available websites and reports to provide an overview of the objectives and content of each effort. You should have a broad understanding of these models and their processes and practices to appreciate the current state of the practice in building secure software and to aid in identifying promising research opportunities to fill gaps.

3.3.1 Building Security In Maturity Model (BSIMM)

An introduction on the BSIMM website states the following [McGraw 2015]:


The purpose of the BSIMM is to quantify the activities carried out by real software security initiatives. Because these initiatives make use of different methodologies and different terminology, the BSIMM requires a framework that allows us to describe all of the initiatives in a uniform way. Our Software Security Framework (SSF) and activity descriptions provide a common vocabulary for explaining the salient elements of a software security initiative, thereby allowing us to compare initiatives that use different terms, operate at different scales, exist in different vertical markets, or create different work products.

We classify our work as a maturity model because improving software security almost always means changing the way an organization works—something that doesn’t happen overnight. We understand that not all organizations need to achieve the same security goals, but we believe all organizations can benefit from using the same measuring stick.

BSIMM6 is the sixth major version of the BSIMM model. It includes updated activity descriptions, data from 78 firms in multiple vertical markets, and a longitudinal study.

The BSIMM is meant for use by anyone responsible for creating and executing a software security initiative. We have observed that successful software security initiatives are typically run by a senior executive who reports to the highest levels in an organization. These executives lead an internal group that we call the software security group (SSG), charged with directly executing or facilitating the activities described in the BSIMM. The BSIMM is written with the SSG and SSG leadership in mind.

Our work with the BSIMM model shows that measuring a firm’s software security initiative is both possible and extremely useful. BSIMM measurements can be used to plan, structure, and execute the evolution of a software security initiative. Over time, firms participating in the BSIMM show measurable improvement in their software security initiatives.


A maturity model is appropriate for building more secure software—a key component of building assured systems—because improving software security means changing the way an organization develops software over time.

As noted above, most successful software security initiatives are run by a senior executive who reports to the highest levels in the organization, such as the board of directors or the chief information officer. These executives lead the software security group (SSG), which directly executes or facilitates the activities described in the BSIMM.

The BSIMM addresses the following roles:

• SSG (software security staff with deep coding, design, and architectural experience)

• Executives and middle management, including line-of-business owners and product managers

• Builders, testers, and operations staff

• Administrators

• Vendors

As an organizing structure for the body of observed practices, the BSIMM uses the software security framework (SSF) described in Table 3.1.


Table 3.1 BSIMM Software Security Framework [McGraw 2015]
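As a quick summary of the framework's shape, the SSF can be sketched as a simple data structure: four domains, each containing three practices. The practice names below follow the published BSIMM documentation; treat this as a reader's aid, not a substitute for Table 3.1.

```python
# The BSIMM Software Security Framework (SSF): 4 domains x 3 practices.
SSF = {
    "Governance": ["Strategy and Metrics", "Compliance and Policy", "Training"],
    "Intelligence": ["Attack Models", "Security Features and Design",
                     "Standards and Requirements"],
    "SSDL Touchpoints": ["Architecture Analysis", "Code Review",
                         "Security Testing"],
    "Deployment": ["Penetration Testing", "Software Environment",
                   "Configuration Management and Vulnerability Management"],
}

assert len(SSF) == 4
assert sum(len(practices) for practices in SSF.values()) == 12
```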

3.3.2 CMMI Assurance Process Reference Model

The Department of Homeland Security (DHS) Software Assurance (SwA) Processes and Practices Working Group developed a draft process reference model (PRM) for assurance in July 2008 [DHS 2008]. This PRM recommended additions to CMMI-DEV v1.2 to address software assurance. These also apply to CMMI-DEV v1.3. The “assurance thread” description2 includes Figure 3.1, which may be useful for addressing the lifecycle phase aspect of building assured systems.

2. https://buildsecurityin.us-cert.gov/swa/procwg.html


Figure 3.1 Summary of Assurance for CMMI Efforts

The DHS SwA Processes and Practices Working Group’s additions and updates to CMMI-DEV v1.2 and v1.3 are focused at the specific practices (SP) level for the following CMMI-DEV process areas (PAs):

• Process Management: Organizational Process Focus, Organizational Process Definition, Organizational Training

• Project Management: Project Planning, Project Monitoring and Control, Supplier Agreement Management, Integrated Project Management, Risk Management

• Engineering: Requirements Development, Technical Solution, Verification, Validation

• Support: Measurement and Analysis

More recently, the CMMI Institute published “Security by Design with CMMI for Development, Version 1.3,” a set of additional process areas that integrate with CMMI [CMMI 2013].

3.3.3 Open Web Application Security Project (OWASP) Software Assurance Maturity Model (SAMM)

The OWASP website provides the following information on the Software Assurance Maturity Model (SAMM) [OWASP 2015]:


The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. The resources provided by SAMM will aid in:

• Evaluating an organization’s existing software security practices

• Building a balanced software security assurance program in well-defined iterations

• Demonstrating concrete improvements to a security assurance program

• Defining and measuring security-related activities throughout an organization

SAMM was defined with flexibility in mind such that it can be utilized by small, medium, and large organizations using any style of development. Additionally, this model can be applied organization-wide, for a single line-of-business, or even for an individual project. Beyond these traits, SAMM was built on the following principles:

• An organization’s behavior changes slowly over time—A successful software security program should be specified in small iterations that deliver tangible assurance gains while incrementally working toward long-term goals.

• There is no single recipe that works for all organizations—A software security framework must be flexible and allow organizations to tailor their choices based on their risk tolerance and the way in which they build and use software.

• Guidance related to security activities must be prescriptive—All the steps in building and assessing an assurance program should be simple, well-defined, and measurable. This model also provides roadmap templates for common types of organizations.

The foundation of the model is built upon the core business functions of software development with security practices tied to each [see Table 3.2]. The building blocks of the model are the three maturity levels defined for each of the twelve security practices. These define a wide variety of activities in which an organization could engage to reduce security risks and increase software assurance. Additional details are included to measure successful activity performance, understand the associated assurance benefits, estimate personnel, and other costs.


Table 3.2 OWASP SAMM Business Functions and Security Practices [OWASP 2015]



The SAMM presents success metrics for all activities in all 12 practices across the 4 critical business functions. Each practice has 3 objectives (one per maturity level), and each objective has 2 activities, for a total of 72 activities.
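The activity count above follows directly from the model's structure, as this back-of-the-envelope check shows:

```python
# SAMM structure, per the counts given in the text above.
business_functions = 4
practices_per_function = 3
objectives_per_practice = 3   # one objective per maturity level
activities_per_objective = 2

practices = business_functions * practices_per_function
activities = practices * objectives_per_practice * activities_per_objective

assert practices == 12
assert activities == 72
```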

3.3.4 DHS SwA Measurement Work

Nadya Bartol and Michele Moss, both of whom played important roles in the DHS SwA Measurement Working Group, led the development of several important metrics documents, which were published some years ago. Note that we discuss more recent work in the measurement area by the SEI in Chapter 6, “Metrics.”

According to the DHS SwA Measurement Working Group [DHS 2010]:


Practical Measurement Framework for Software Assurance and Information Security provides an approach for measuring the effectiveness of achieving software assurance goals and objectives at an organizational, program, or project level. It addresses how to assess the degree of assurance provided by software, using quantitative and qualitative methodologies and techniques. This framework incorporates existing measurement methodologies and is intended to help organizations and projects integrate SwA measurement into their existing programs.


The following discussion is from the Practical Measurement Framework for Software Assurance and Information Security [Bartol 2008]:


Software assurance is interdisciplinary and relies on methods and techniques produced by other disciplines, including project management, process improvement, quality assurance, training, information security/information assurance, system engineering, safety, test and evaluation, software acquisition, reliability, and dependability [as shown in Figure 3.2].


Figure 3.2 Cross-Disciplinary Nature of SwA [Bartol 2008]

The Practical Measurement Framework focuses principally, though not exclusively, on the information security viewpoint of SwA. Many of the contributing disciplines of SwA enjoy an established process improvement and measurement body of knowledge, such as quality assurance, project management, process improvement, and safety. SwA measurement can leverage measurement methods and techniques that are already established in those disciplines, and adapt them to SwA. The Practical Measurement Framework report focuses on information assurance/information security aspects of SwA to help mature that aspect of SwA measurement.

This framework provides an integrated measurement approach, which leverages five existing industry approaches that use similar processes to develop and implement measurement as follows:

• Draft National Institute of Standards and Technology (NIST) Special Publication (SP) 800-55, Revision 1, Performance Measurement Guide for Information Security

• ISO/IEC 27004 Information technology—Security techniques—Information security management measurement

• ISO/IEC 15939, System and Software Engineering—Measurement Process, also known as Practical Software and System Measurement (PSM)

• CMMI Measurement and Analysis Process Area

• CMMI GQ(I)M—Capability Maturity Model Integration Goal Question Indicator Measure

The Practical Measurement Framework authors selected these methodologies because of their widespread use in the software and systems development community and the information security community. The Framework includes a common measure specification table, which is a crosswalk of the specifications, templates, forms, and other means of documenting individual measures provided by the five industry approaches listed above that were leveraged to create the framework.

Measures are intended to help answer the following five questions:

• What are the defects in the design and code that have a potential to be exploited?

• Where are they?

• How did they get there?

• Have they been mitigated?

• How can they be avoided in the future?
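The five questions above map naturally onto a per-defect record from which simple measures can be derived. The following Python sketch is purely illustrative; the field names and sample data are assumptions, not part of the framework:

```python
from dataclasses import dataclass

@dataclass
class ExploitableDefect:
    """One record capturing the five questions a SwA measure should answer."""
    description: str   # what is the defect?
    location: str      # where is it? (file, module, design element)
    root_cause: str    # how did it get there?
    mitigated: bool    # has it been mitigated?
    prevention: str    # how can it be avoided in the future?

defects = [
    ExploitableDefect("SQL injection in login form", "auth/login.py",
                      "unsanitized user input", True,
                      "parameterized queries; code review checklist"),
    ExploitableDefect("Buffer overflow in parser", "parser/read.c",
                      "unchecked length field", False,
                      "bounds checking; fuzz testing"),
]

# A simple derived measure: percentage of known defects mitigated.
mitigated_pct = 100 * sum(d.mitigated for d in defects) / len(defects)
print(f"{mitigated_pct:.0f}% of known exploitable defects mitigated")
```

Even a minimal record like this lets a project answer the first four questions per defect and aggregate the fifth into process-improvement actions.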

A number of representative key measures for different stakeholder groups are included in the framework to help organizations assess the state of their SwA efforts during any stage of a project:

Supplier—an individual or an organization that offers software and system-related products and services to other organizations. This includes software developers, program managers, and other staff working for an organization that develops and supplies software to other organizations.

Acquirer—an individual or an organization that acquires software and system-related products and services from other organizations. This includes acquisition officials, program managers, system integrators, system owners, information owners, operators, designated approving authorities (DAAs), certifying authorities, independent verification and validation (IV&V), and other individuals who are working for an organization that is acquiring software from other organizations.

Within each supplier and acquirer organization, the following stakeholders are considered:

Executive Decision Maker—a leader who has authority to make decisions and may require quantifiable information to understand the level of risk associated with software to support decision-making processes.

Practitioner—an individual responsible for implementing SwA as a part of their job.


The framework describes candidate goals and information needs for each stakeholder group. The framework then presents examples of supplier measures as a table, with columns for project activity, measures, information needs, and benefits. The framework includes supplier project activities—requirements management (five measures), design (three measures), development (six measures), test (nine measures)—and the entire software development lifecycle (SDLC) (three measures).

Examples of measures for acquirers are also presented and are intended to answer the following questions:

• Have SwA activities been adequately integrated into the organization’s acquisition process?

• Have SwA considerations been integrated into the SDLC and resulting product by the supplier?

The acquisition activities are planning (two measures), contracting (three measures), and implementation and acceptance (five measures).

Ten examples of measures for executives are presented. These are intended to answer the question “Is the risk generated by software acceptable to the organization?” The following are some of these examples of measures:

• Number and percentage of patches published on announced date

• Time elapsed for supplier to fix defects

• Number of known defects by type and impact

• Cost to correct vulnerabilities in operations

• Cost of fixing defects before system becomes operational

• Cost of individual data breaches

• Cost of SwA practices throughout the SDLC
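As an illustration, the first executive measure in the list—number and percentage of patches published on the announced date—can be computed from a simple list of announced and actual publication dates. The data layout here is invented for the example:

```python
from datetime import date

# (announced_date, actual_publication_date) per patch -- illustrative data
patches = [
    (date(2023, 3, 14), date(2023, 3, 14)),
    (date(2023, 4, 11), date(2023, 4, 18)),  # slipped one week
    (date(2023, 5, 9),  date(2023, 5, 9)),
]

# A patch counts as on time if it shipped on or before the announced date.
on_time = sum(1 for announced, published in patches if published <= announced)
print(f"{on_time} of {len(patches)} patches on time "
      f"({100 * on_time / len(patches):.0f}%)")
```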

Fifteen examples of measures for practitioners are presented. They are intended to answer the question “How well are current SwA processes and techniques mitigating software-related risks?”

3.3.5 Microsoft Security Development Lifecycle (SDL)

The Microsoft Security Development Lifecycle (SDL)3 is an industry-leading software security process. A Microsoft-wide initiative and a mandatory policy since 2004, the SDL has played a critical role in enabling Microsoft to embed security and privacy in its software and culture. Taking a holistic and practical approach, the SDL introduces security and privacy early and throughout all phases of the development process.

3. More information is available in The Security Development Lifecycle [Howard 2006], at the Microsoft Security Development Lifecycle website [Microsoft 2010a], and in the document Microsoft Security Development Lifecycle Version 5.0 [Microsoft 2010b].

The reliable delivery of more secure software requires a comprehensive process, so Microsoft defined a collection of principles it calls Secure by Design, Secure by Default, Secure in Deployment, and Communications (SD3+C) to help determine where security efforts are needed [Microsoft 2010b]:


Secure by Design

Secure architecture, design, and structure. Developers consider security issues part of the basic architectural design of software development. They review detailed designs for possible security issues, and they design and develop mitigations for all threats.

Threat modeling and mitigation. Threat models are created, and threat mitigations are present in all design and functional specifications.

Elimination of vulnerabilities. No known security vulnerabilities that would present a significant risk to the anticipated use of the software remain in the code after review. This review includes the use of analysis and testing tools to eliminate classes of vulnerabilities.

Improvements in security. Less secure legacy protocols and code are deprecated, and, where possible, users are provided with secure alternatives that are consistent with industry standards.
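The requirement that threat mitigations be present in all design and functional specifications lends itself to a mechanical completeness check. The structure below is a hypothetical sketch, not part of the SDL itself:

```python
# Each design specification lists its identified threats and mitigations.
design_specs = {
    "authentication": {
        "threats": {"credential theft", "session hijacking"},
        "mitigations": {"credential theft": "MFA",
                        "session hijacking": "token rotation"},
    },
    "file upload": {
        "threats": {"malicious file execution"},
        "mitigations": {},  # gap: no mitigation recorded yet
    },
}

def unmitigated(specs):
    """Return (spec, threat) pairs lacking a recorded mitigation."""
    return [(name, t) for name, s in specs.items()
            for t in s["threats"] if t not in s["mitigations"]]

print(unmitigated(design_specs))
```

A check like this run during design review makes the "threat mitigations are present in all design and functional specifications" criterion auditable rather than aspirational.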

Secure by Default

Least privilege. All components run with the fewest possible permissions.

Defense in depth. Components do not rely on a single threat mitigation solution that leaves users exposed if it fails.

Conservative default settings. The development team is aware of the attack surface for the product and minimizes it in the default configuration.

Avoidance of risky default changes. Applications do not make any default changes to the operating system or security settings that reduce security for the host computer. In some cases, such as for security products, it is acceptable for a software program to strengthen (increase) security settings for the host computer. The most common violations of this principle are games that either open firewall ports without informing the user or instruct users to open firewall ports without informing them of the possible risks.

Less commonly used services off by default. If fewer than 80 percent of a program’s users use a feature, that feature should not be activated by default. Measuring 80 percent usage in a product is often difficult because programs are designed for many different personas. It can be useful to consider whether a feature addresses a core/primary use scenario for all personas. If it does, the feature is sometimes referred to as a P1 feature.
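The 80 percent guideline can be expressed as a simple decision rule. The function below sketches one possible interpretation—combining the usage threshold with the P1 (core-scenario) consideration—and is not an official SDL algorithm:

```python
def default_on(usage_fraction: float, is_p1: bool, threshold: float = 0.80) -> bool:
    """Return True if a feature should be enabled by default.

    Interpretation of the SDL guideline: enable by default only if at
    least `threshold` of users use the feature, or if it addresses a
    core/primary use scenario for all personas (a "P1" feature).
    """
    return usage_fraction >= threshold or is_p1

print(default_on(0.95, is_p1=False))  # widely used feature
print(default_on(0.30, is_p1=False))  # niche feature: keep off by default
print(default_on(0.30, is_p1=True))   # core scenario despite low measured use
```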

Secure in Deployment

Deployment guides. Prescriptive deployment guides outline how to deploy each feature of a program securely, including providing users with information that enables them to assess the security risk of activating non-default options (and thereby increasing the attack surface).

Analysis and management tools. Security analysis and management tools enable administrators to determine and configure the optimal security level for a software release.

Patch deployment tools. Deployment tools aid in patch deployment.

Communications

Security response. Development teams respond promptly to reports of security vulnerabilities and communicate information about security updates.

Community engagement. Development teams proactively engage with users to answer questions about security vulnerabilities, security updates, or changes in the security landscape.


Figure 3.3 shows what the secure software development process model looks like.


Figure 3.3 Secure Software Development Process Model at Microsoft [Shunn 2013]

The Microsoft SDL documentation describes, in great detail, what architects, designers, developers, and testers are required to do during each lifecycle phase. The introduction states, “Secure software development has three elements—best practices, process improvements, and metrics. This document focuses primarily on the first two elements, and metrics are derived from measuring how they are applied” [Microsoft 2010b]. This description indicates that the document contains no concrete measurement-related information; measures would need to be derived from each of the lifecycle-phase practice areas.

3.3.6 SEI Framework for Building Assured Systems

In developing the Building Assured Systems Framework (BASF), we studied the available models, roadmaps, and frameworks. Given our deep knowledge of the MSwA2010 Body of Knowledge (BoK)—the core body of knowledge for a master of software assurance degree from Carnegie Mellon University—we decided to use it as an initial foundation for the BASF.

Maturity Levels

We assigned the following maturity levels to each element of the MSwA2010 BoK:

L1—The approach provides guidance for how to think about a topic for which there is no proven or widely accepted approach. The intent of the area is to raise awareness and aid in thinking about the problem and candidate solutions. The area may also describe promising research results that may have been demonstrated in a constrained setting.

L2—The approach describes practices that are in early pilot use and are demonstrating some successful results.

L3—The approach describes practices that have been successfully deployed (mature) but are in limited use in industry or government organizations. They may be more broadly deployed in a particular market sector.

L4—The approach describes practices that have been successfully deployed and are in widespread use. You can start using these practices today with confidence. Experience reports and case studies are typically available.

We developed these maturity levels to support our work in software security engineering [Allen 2008]. We associated the BoK elements and maturity levels by evaluating the extent to which relevant sources, practices, curricula, and courseware exist for a particular BoK element and the extent to which we have observed the element in practice in organizations.
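One way a manager might apply these maturity levels is as a triage filter over candidate practices: adopt L3/L4 practices with confidence, and pilot or monitor the rest. The practice names and level assignments below are illustrative assumptions, not BASF ratings:

```python
MATURITY = {"L1": 1, "L2": 2, "L3": 3, "L4": 4}

# Hypothetical candidate practices tagged with maturity levels.
candidates = [
    ("threat modeling", "L4"),
    ("formal verification of protocols", "L2"),
    ("secure coding standards", "L4"),
    ("automated exploit prediction", "L1"),
]

# Adopt with confidence at L3/L4; pilot or monitor anything below that.
adopt = [name for name, lvl in candidates if MATURITY[lvl] >= 3]
watch = [name for name, lvl in candidates if MATURITY[lvl] < 3]
print("adopt:", adopt)
print("watch:", watch)
```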

MSwA2010 BoK with Outcomes and Maturity Levels

We found that the maturity of the material proposed for delivery in the MSwA2010 BoK varied; a student would be expected to learn material at all maturity levels. Even if a practice was not very mature, we would still expect the student to be able to master it and use it in an appropriate manner after completing an MSwA program. We reasoned that the MSwA curriculum body of knowledge could be used as a basis for assessing the maturity of software assurance practices, but to our knowledge, it has not been used for this purpose on an actual project, so it remains a hypothetical model. The portion of the table addressing risk management is shown below. The full table is contained in Appendix B, “The MSwA Body of Knowledge with Maturity Levels Added.”


2. Risk Management

Outcome: Graduates will have the ability to perform risk analysis and tradeoff assessment and to prioritize security measures.

2.1. Risk Management Concepts

2.1.1. Types and classification [L4]

Different classes of risks (for example, business, project, technical)

2.1.2. Probability, impact, severity [L4]

Basic elements of risk analysis

2.1.3. Models, processes, metrics [L4] [L3—metrics]

Models, process, and metrics used in risk management

2.2. Risk Management Process

2.2.1. Identification [L4]

Identification and classification of risks associated with a project

2.2.2. Analysis [L4]

Analysis of the likelihood, impact, and severity of each identified risk

2.2.3. Planning [L4]

Risk management plan covering risk avoidance and mitigation

2.2.4. Monitoring and management [L4]

Assessment and monitoring of risk occurrence and management of risk mitigation

2.3. Software Assurance Risk Management

2.3.1. Vulnerability and threat identification [L3]

Application of risk analysis techniques to vulnerability and threat risks

2.3.2. Analysis of software assurance risks [L3]

Analysis of risks for both new and existing systems

2.3.3. Software assurance risk mitigation [L3]

Plan for and mitigation of software assurance risks

2.3.4. Assessment of Software Assurance Processes and Practices [L2/3]

As part of risk avoidance and mitigation, assessment of the identification and use of appropriate software assurance processes and practices
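The basic elements of risk analysis listed under 2.1.2—probability, impact, and severity—combine in the classic risk-exposure calculation (exposure = probability × impact), which supports the prioritization outcome stated above. The risks and figures below are invented for illustration:

```python
# risk: (description, probability 0-1, impact in cost units)
risks = [
    ("SQL injection in legacy module", 0.30, 100_000),
    ("supplier fails to patch on time", 0.10, 250_000),
    ("insider misuse of admin rights", 0.05, 400_000),
]

# Classic exposure = probability x impact; sort to prioritize mitigation.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for desc, p, impact in ranked:
    print(f"{desc}: exposure = {p * impact:,.0f}")
```

Ranking by exposure gives graduates (and managers) a defensible basis for the tradeoff assessment and prioritization of security measures that the outcome calls for.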


3.3.7 SEI Research in Relation to the Microsoft SDL

More recently, the SEI’s CERT Division examined the linkages between CERT research and the Microsoft SDL [Shunn 2013]. An excerpt from this report follows:


Our research has confirmed that decisions made in the acquisition and development of new software and software-based systems have a major impact on operational security. The challenge begins with properly stating software requirements and ensuring they clearly and practically define security. This is fundamental to the development and fielding of effectively secure systems. When these systems must interoperate with other systems built at a different time and with varying degrees of security, effective operational security becomes much more complex. Software and systems acquired, designed, and developed with operational security in mind are more resistant to both intentional attack and unintentional failures. The goal is to build and acquire better, minimally defective software and systems that can

• possess, through testing and analysis, some measurable level of assurance of minimal vulnerabilities

• operate correctly in the presence of most attacks by either resisting the exploitation of weaknesses in the software or tolerating the failures that result from such exploits

• recognize an attack and respond with expected behaviors that support resistance and recovery

• limit the damage from failures caused by attack or unanticipated faults and events and recover as quickly as possible

Managing complexity and ensuring survivability require engineering methods based on solid foundations and the realities of current and emerging systems. A great deal of security response is reactive—addressing security issues in response to an attack. A more effective approach is to reduce the potential of such attacks by removing the vulnerabilities that allow a compromise in the first place. Our efforts to address issues before they become a security problem focus on the following key areas:

Secure Coding addresses tools, techniques, and standards that software developers and software development organizations require to eliminate vulnerabilities resulting from coding errors before software is deployed.

Vulnerability Analysis reduces the security risks posed by software vulnerabilities by addressing both the number of vulnerabilities in software that is being developed and the number of vulnerabilities in software that is already deployed. Our vulnerability analysis work is divided into two areas. Identifying and reducing the number of new vulnerabilities before the software is deployed is the focus of our vulnerability discovery effort, while our vulnerability remediation work deals with existing vulnerabilities in deployed software. We regularly comment on issues of importance to the vulnerability analysis and security community through the CERT/CC Blog.

Cyber Security Engineering addresses research needed to prepare acquirers, managers, developers, and operators of large-scale, complex, networked systems to address security, survivability, and software assurance throughout the design and acquisition lifecycles. This research encompasses four areas: Software Assurance, Security Requirements, Software Supply Chain Risk Management (SCRM), and Software Risk Management. Because much DoD software is vendor developed, the research addresses both internal development and acquired software sources.


The report thus highlights a sample of CERT results with readily apparent connections to the SDL. Table 3.3 maps the CERT results to Microsoft SDL activities.


Table 3.3 Summary Mapping and Recommended Use

3.3.8 CERT Resilience Management Model Resilient Technical Solution Engineering Process Area

As is the case for software security and software assurance, resilience is a property of software and systems. Developing and acquiring resilient4 software and systems requires a dedicated process focused on this property that encompasses the software and system lifecycle. Version 1.1 of the CERT Resilience Management Model’s (CERT-RMM’s)5 Resilient Technical Solution Engineering (RTSE) process area defines what is required to develop resilient software and systems [Caralli 2011] (Version 1.2 is available as a free download. The associated release notes describe its updated features.6):

4. There is substantial overlap in the definitions of assured software (or software assurance) and resilient software (or software resilience). Resilient software is software that continues to operate as intended (including recovering to a known operational state) in the face of a disruptive event (satisfying business continuity requirements) so as to satisfy its confidentiality, availability, and integrity requirements (reflecting operational and security requirements) [Caralli 2011].

5. www.cert.org/resilience/

6. Version 1.2 of the Resilience Management Model document [Caralli 2016] can be downloaded from the CERT website (www.cert.org/resilience/products-services/cert-rmm/index.cfm).


• Establish a plan for addressing resiliency as part of the organization’s (or supplier’s) regular development lifecycle and integrate the plan into the organization’s corresponding development process. Plan development and execution includes identifying and mitigating risks to the success of the project.

• Identify practice-based guidelines that apply to all phases such as threat analysis and modeling as well as those that apply to a specific lifecycle phase.

• Elicit, identify, develop, and validate assurance and resiliency requirements (using methods for representing attacker and defender perspectives, for example). Such processes, methods, and tools are performed alongside similar processes for functional requirements.

• Use architectures as the basis for design that reflect a resiliency and assurance focus, including security, sustainability, and operations controls.

• Develop assured and resilient software and systems through processes that include secure coding of software, software defect detection and removal, and the development of resiliency and assurance controls based on design specifications.

• Test assurance and resiliency controls for software and systems and refer issues back to the design and development cycle for resolution.

• Conduct reviews throughout the development life cycle to ensure that resiliency (as one aspect of assurance) is kept in the forefront and given adequate attention and consideration.

• Perform system-specific continuity planning and integrate related service continuity plans to ensure that software, systems, hardware, networks, telecommunications, and other technical assets that depend on one another are sustainable.

• Perform a post-implementation review of deployed systems to ensure that resiliency (as well as assurance) requirements are being satisfied as intended.

• In operations, monitor software and systems to determine if there is variability that could indicate the effects of threats or vulnerabilities and to ensure that controls are functioning properly.

• Implement configuration management and change control processes to ensure software and systems are kept up to date to address newly discovered vulnerabilities and weaknesses (particularly in vendor-acquired products and components) and to prevent the intentional or inadvertent introduction of malicious code or other exploitable vulnerabilities.


Table 3.4 lists RTSE practices.


Table 3.4 RTSE Practices

Organizations should consider the following goals—in addition to RTSE—when developing and acquiring software and systems that need to meet assurance and resiliency requirements [Caralli 2011]:


Resiliency requirements for software and system technology assets in operation, including those that may influence quality attribute requirements in the development process, are developed and managed in the Resiliency Requirements Development (RRD) and Resiliency Requirements Management (RRM) process areas respectively.

Identifying and adding newly developed and acquired software and system assets to the organization’s asset inventory is addressed in the Asset Definition and Management (ADM) process area.

The management of resiliency for technology assets as a whole, particularly for deployed, operational assets, is addressed in the Technology Management (TM) process area. This includes, for example, asset fail-over, backup, recovery, and restoration.

Acquiring software and systems from external entities and ensuring that such assets meet their resiliency requirements throughout the asset life cycle is addressed in the External Dependencies Management process area. That said, RTSE specific goals and practices should be used to aid in evaluating and selecting external entities that are developing software and systems (EXD:SG3.SP3), formalizing relationships with such external entities (EXD:SG3.SP4), and managing an external entity’s performance when developing software and systems (EXD:SG4).

Monitoring for events, incidents, and vulnerabilities that may affect software and systems in operation is addressed in the Monitoring (MON) process area.

Service continuity plans are identified and created in the Service Continuity (SC) process area. These plans may be inclusive of software and systems that support the services for which planning is performed.


RTSE assumes that the organization has one or more existing, defined processes for software and system development into which resiliency controls and activities can be integrated. If this is not the case, the organization should not attempt to implement the goals and practices identified in RTSE or in the other CERT-RMM process areas described above.

3.3.9 International Process Research Consortium (IPRC) Roadmap

From August 2004 to December 2006, the SEI’s process program sponsored a research consortium of 28 international thought leaders to explore process needs for today, the foreseeable future, and the unforeseeable future. One of the emerging research themes was the relationships between processes and product qualities, defined as “understanding if and how particular process characteristics can affect desired product (and service) qualities such as security, usability, and maintainability” [IPRC 2006]. As an example, or instantiation, of this research theme, two of the participating members, Julia Allen and Barbara Kitchenham, developed research nodes and research questions for security as a product quality. This content helps identify research topics and gaps that could be explored within the context of the BASF.

The descriptive material presented in Table 3.5 is excerpted from A Process Research Framework [IPRC 2006].


Table 3.5 IPRC Research Nodes and Questions for Security as a Product Quality

3.3.10 NIST Cyber Security Framework

The NIST Framework for Improving Critical Infrastructure Cybersecurity is the result of a February 2013 executive order from U.S. President Barack Obama titled Improving Critical Infrastructure Cybersecurity [White House 2013]. The order emphasized that “it is the Policy of the United States to enhance the security and resilience of the Nation’s critical infrastructure and to maintain a cyber-environment that encourages efficiency, innovation, and economic prosperity while promoting safety, security, business confidentiality, privacy, and civil liberties” [White House 2013].

The NIST framework provides an assessment mechanism that enables organizations to determine their current cyber security capabilities, set individual goals for a target state, and establish a plan for improving and maintaining cyber security programs [NIST 2014]. There are three components—Core, Profile, and Implementation Tiers—as discussed in the following excerpt [NIST 2014]:

The Core presents the recommendations of industry standards, guidelines, and practices in a manner that allows for communication of cybersecurity activities and outcomes across the organization from the executive level to the implementation/operations level.

The Core is hierarchical and consists of five cyber security risk functions—Identify, Protect, Detect, Respond, and Recover. Each function is further broken down into categories and subcategories.

The categories include processes, procedures, and technologies such as the following:

• Asset management

• Alignment with business strategy

• Risk assessment

• Access control

• Employee training

• Data security

• Event logging and analysis

• Incident response plans

Each subcategory provides a set of cyber security risk management best practices that can help organizations align and improve their cyber security capability based on individual business needs, tolerance to risk, and resource availability [NIST 2014].

The Core criteria are used to determine the outcomes necessary to improve an organization’s overall security effort. The unique requirements of industry, customers, and partners are then factored into the target profile. Comparing the current and target profiles identifies the gaps that should be closed to enhance cyber security. Organizations then prioritize those gaps to establish a roadmap for improvement.

Implementation tiers create a context that enables an organization to understand how its current cyber security risk management capabilities rate against the ideal characteristics described by the NIST Framework. Tiers range from Partial (Tier 1) to Adaptive (Tier 4). NIST recommends that organizations seeking to achieve an effective, defensible cyber security program progress to Tier 3 or 4.
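The profile comparison described above can be sketched as a simple gap analysis. This simplified example treats each category's state as a numeric tier, which conflates Profiles and Implementation Tiers somewhat; the categories and values are illustrative:

```python
# Current and target profiles: category -> implementation tier (1-4).
current = {"Asset management": 2, "Access control": 3,
           "Incident response plans": 1, "Employee training": 2}
target  = {"Asset management": 3, "Access control": 3,
           "Incident response plans": 3, "Employee training": 3}

# Gaps, sorted largest first, form the basis of a prioritized roadmap.
gaps = sorted(((cat, target[cat] - tier) for cat, tier in current.items()
               if target[cat] > tier), key=lambda g: g[1], reverse=True)
for cat, delta in gaps:
    print(f"{cat}: raise {delta} tier(s)")
```

Sorting by gap size is one simple prioritization heuristic; in practice, organizations would also weigh business impact, risk tolerance, and resource availability, as the framework recommends.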

3.3.11 Uses of Software Security Frameworks, Models, and Roadmaps

Because software security is a relatively new field, the frameworks, models, and roadmaps have not been in use, on average, for as long a time as the CMMI models, and their use is not as widespread. Nevertheless, there are important uses to consider.

Secure development process models are in use by organizations for which security is a priority:

• The Microsoft SDL and variants on it are in relatively wide use.

• BSIMM has strong participation, with 78 organizations represented in BSIMM6. Since its inception in 2008, the BSIMM has studied 104 organizations.

• Elements of CERT-RMM are also widely used.

There is less usage data on secure software process models than on the more general software process models, so the true extent of usage is hard to assess. As organizations and governments become more aware of the need for software security, we expect usage of these models to increase, and we expect to see additional research in this area, perhaps with the appearance of new models.

In academia, in addition to traditional software process models, software security courses often present one or more of the secure development models and processes. Such courses occur at all levels of education, but especially at the master’s level. Individual and team student projects frequently use these models or their components, providing a rich learning environment for students of secure software development.

3.4 Summary

This chapter presents a number of frameworks and models that can be used to help support cyber security decision making. These frameworks and models include process models, security frameworks and models in the literature, and the SEI efforts in this area. We do a deeper dive for some of these topics in Chapter 7, “Special Topics in Cyber Security Engineering,” which provides further discussion of governance considerations for cyber security engineering, security requirements engineering for acquisition, and standards.

As noted earlier in this chapter, we developed maturity levels to support our work in software security engineering [Allen 2008]. Since 2008, some of the practice areas have increased in maturity. Nevertheless, we believe you can still apply the maturity levels to assess whether a specific approach is sufficiently mature to help achieve your cyber security goals. Our earlier work [Allen 2008] also included a recommended strategy and suggested order of practice implementation. This work remains valid today, and we reiterate the maturity levels here. They also appear in Chapter 8, “Summary and Plan for Improvements in Cyber Security Engineering Performance,” where we rate the maturity levels of the methods presented throughout the book.
