Chapter 2. Software Development Process Models

Software metrics and models cannot be discussed in a vacuum; they must be referenced to the software development process. In this chapter we summarize the major process models being used in the software development community. We start with the waterfall process life-cycle model and then cover the prototyping approach, the spiral model, the iterative development process, and several approaches to the object-oriented development process. Processes pertinent to the improvement of the development process, such as the Cleanroom methodology and the defect prevention process, are also described.

In the last part of the chapter we shift our discussion from specific development processes to the evaluation of development processes and quality management standards. Presented and discussed are the process maturity framework, including the Software Engineering Institute’s (SEI) Capability Maturity Model (CMM) and the Software Productivity Research’s (SPR) assessment approach, and two bodies of quality standards—the Malcolm Baldrige assessment discipline and ISO 9000—as they relate to software process and quality.

The Waterfall Development Model

In the 1960s and 1970s software development projects were characterized by massive cost overruns and schedule delays; the focus was on planning and control (Basili and Musa, 1991). The emergence of the waterfall process to help tackle the growing complexity of development projects was a logical event (Boehm, 1976). As Figure 1.2 in Chapter 1 shows, the waterfall process model encourages the development team to specify what the software is supposed to do (gather and define system requirements) before developing the system. It then breaks the complex mission of development into several logical steps (design, code, test, and so forth) with intermediate deliverables that lead to the final product. To ensure proper execution with good-quality deliverables, each step has validation, entry, and exit criteria. This Entry-Task-Validation-Exit (ETVX) paradigm is a key characteristic of the waterfall process and the IBM programming process architecture (Radice et al., 1985).
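As a rough illustration of the ETVX idea, the sketch below (Python, with invented names and criteria) models a single stage as entry criteria, tasks, validations, and exit criteria evaluated against the work products produced so far. It is only an assumed, simplified rendering, not drawn from the IBM programming process architecture itself.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of one Entry-Task-Validation-Exit (ETVX) stage.
# Criteria are modeled as predicates over the work products produced so far.
@dataclass
class ETVXStage:
    name: str
    entry_criteria: List[Callable[[dict], bool]] = field(default_factory=list)
    tasks: List[Callable[[dict], None]] = field(default_factory=list)
    validations: List[Callable[[dict], bool]] = field(default_factory=list)  # e.g., inspections
    exit_criteria: List[Callable[[dict], bool]] = field(default_factory=list)

    def run(self, work_products: dict) -> bool:
        if not all(check(work_products) for check in self.entry_criteria):
            return False                       # entry criteria not met; stage cannot start
        for task in self.tasks:
            task(work_products)                # produce or update intermediate deliverables
        validated = all(v(work_products) for v in self.validations)
        exited = all(c(work_products) for c in self.exit_criteria)
        return validated and exited            # stage completes only with validated deliverables

# Example: a high-level design stage that requires approved requirements on entry.
hld = ETVXStage(
    name="High-Level Design",
    entry_criteria=[lambda wp: wp.get("requirements_approved", False)],
    tasks=[lambda wp: wp.update({"hld_document": "external interfaces and component structure"})],
    validations=[lambda wp: "hld_document" in wp],   # stands in for a design inspection
    exit_criteria=[lambda wp: "hld_document" in wp],
)
print(hld.run({"requirements_approved": True}))      # True
```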

The divide-and-conquer approach of the waterfall process has several advantages. It enables more accurate tracking of project progress and early identification of possible slippages. It forces the organization that develops the software system to be more structured and manageable. This structural approach is very important for large organizations with large, complex development projects. It demands that the process generate a series of documents that can later be used to test and maintain the system (Davis et al., 1988). The bottom line of this approach is to make large software projects more manageable and to deliver them on time without cost overruns. Experiences of the past several decades show that the waterfall process is very valuable. Many major developers, especially those who were established early and are involved with systems development, have adopted this process. This group includes commercial corporations, government contractors, and governmental entities. Although a variety of names have been given to each stage in the model, the basic methodologies remain more or less the same. Thus, the system-requirements stages are sometimes called system analysis, customer-requirements gathering and analysis, or user needs analysis; the design stage may be broken down into high-level design and detail-level design; the implementation stage may be called code and debug; and the testing stage may include component-level test, product-level test, and system-level test.

Figure 2.1 shows an implementation of the waterfall process model for a large project. Note that the requirements stage is followed by a stage for architectural design. When the system architecture and design are in place, design and development work for each function begins. This consists of high-level design (HLD), low-level design (LLD), code development, and unit testing (UT). Despite the waterfall concept, parallelism exists because various functions can proceed simultaneously. As shown in the figure, the code development and unit test stages are also implemented iteratively. Since UT is an integral part of the implementation stage, it makes little sense to separate it into another formal stage. Before the completion of the HLD, LLD, and code, formal reviews and inspections occur as part of the validation and exit criteria. These inspections are called I0, I1, and I2 inspections, respectively. When the code is completed and unit tested, the subsequent stages are integration, component test, system test, and early customer programs. The final stage is release of the software system to customers.

Figure 2.1. An Example of the Waterfall Process Model

The following sections describe the objectives of the various stages from high-level design to early customer programs.

High-Level Design

High-level design is the process of defining the externals and internals from the perspective of a component. Its objectives are as follows:

  • Develop the external functions and interfaces, including:

    • external user interfaces

    • application programming interfaces

    • system programming interfaces: intercomponent interfaces and data structures.

  • Design the internal component structure, including intracomponent interfaces and data structures.

  • Ensure all functional requirements are satisfied.

  • Ensure the component fits into the system/product structure.

  • Ensure the component design is complete.

  • Ensure the external functions can be accomplished—the “doability” of requirements.

Low-Level Design

Low-level design is the process of transforming the HLD into more detailed designs from the perspective of a part (modules, macros, includes, and so forth). Its objectives are as follows:

  • Finalize the design of components and parts (modules, macros, includes) within a system or product.

  • Complete the component test plans.

  • Give feedback about HLD and verify changes in HLD.

Code Stage

The coding portion of the process results in the transformation of a function’s LLD to completely coded parts. The objectives of this stage are as follows:

  • Code parts (modules, macros, includes, messages, etc.).

  • Code component test cases.

  • Verify changes in HLD and LLD.

Unit Test

The unit test is the first test of an executable module. Its objectives are as follows:

  • Verify the code against the component’s

    • high-level design and

    • low-level design.

  • Execute all new and changed code to ensure

    • all branches are executed in all directions,

    • logic is correct, and

    • data paths are verified.

  • Exercise all error messages, return codes, and response options.

  • Give feedback about code, LLD, and HLD.

The level of unit test is for verification of limits, internal interfaces, and logic and data paths in a module, macro, or executable include. Unit testing is performed on nonintegrated code and may require scaffold code to construct the proper environment.
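The following is a minimal sketch, using Python’s standard unittest framework, of how these unit-test objectives might look in practice. The part under test and its return codes are hypothetical, and the test class stands in for the scaffold (driver) code mentioned above; each test exercises a different branch and its return code and message.

```python
import unittest

# Hypothetical part under test: returns a return code and a message,
# mirroring the "error messages, return codes, and response options" objective.
def allocate_buffer(size_kb, limit_kb=64):
    if size_kb <= 0:
        return 8, "ERR_INVALID_SIZE"     # error path
    if size_kb > limit_kb:
        return 4, "ERR_LIMIT_EXCEEDED"   # limit path
    return 0, "OK"                       # normal path

# Scaffold (driver) code: the part is exercised in isolation, before integration.
class AllocateBufferUnitTest(unittest.TestCase):
    def test_normal_path(self):
        self.assertEqual(allocate_buffer(16), (0, "OK"))

    def test_limit_path(self):
        self.assertEqual(allocate_buffer(128), (4, "ERR_LIMIT_EXCEEDED"))

    def test_error_path(self):
        self.assertEqual(allocate_buffer(0), (8, "ERR_INVALID_SIZE"))

if __name__ == "__main__":
    unittest.main()
```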

Component Test

Component tests evaluate the combined software parts that make up a component after they have been integrated into the system library. The objectives of this test are as follows:

  • Test external user interfaces against the component’s design documentation— user requirements.

  • Test intercomponent interfaces against the component’s design documentation.

  • Test application program interfaces against the component’s design documentation.

  • Test function against the component’s design documentation.

  • Test intracomponent interfaces (module level) against the component’s design documentation.

  • Test error recovery and messages against the component’s design documentation.

  • Verify that component drivers are functionally complete and at the acceptable quality level.

  • Test the shared paths (multitasking) and shared resources (files, locks, queues, etc.) against the component’s design documentation.

  • Test ported and unchanged functions against the component’s design documentation.

System-Level Test

The system-level test phase comprises the following tests:

  • System test

  • System regression test

  • System performance measurement test

  • Usability tests

The system test follows the component tests and precedes system regression tests. The system performance test usually begins shortly after system testing starts and proceeds throughout the system-level test phase. Usability tests occur throughout the development process (i.e., prototyping during design stages, formal usability testing during system test period).

  • System test objectives

    • Ensure software products function correctly when executed concurrently and in stressful system environments.

    • Verify overall system stability when development activity has been completed for all products.

  • System regression test objective

    • Verify that the final programming package is ready to be shipped to external customers.

    • Make sure the original functions work correctly after new functions have been added to the system.

  • System performance measurement test objectives

    • Validate the performance of the system.

    • Verify performance specifications.

    • Provide performance information to marketing.

    • Establish base performance measurements for future releases.

  • Usability tests objective

    • Verify that the system contains the usability characteristics required for the intended user tasks and user environment.

Early Customer Programs

The early customer programs (ECP) include testing of the following support structures to verify their readiness:

  • Service structures

  • Development fix support

  • Electronic customer support

  • Market support

  • Ordering, manufacturing, and distribution

In addition to these objectives, a side benefit of having production systems installed in a customer’s environment for the ECP is the opportunity to gather customers’ feedback so developers can evaluate features and improve them for future releases. Collections of such data or user opinion include:

  • Product feedback: functions offered, ease of use, and quality of online documentation

  • Installability of hardware and software

  • Reliability

  • Performance (measure throughput under the customer’s typical load)

  • System connectivity

  • Customer acceptance

As the preceding lists illustrate, the waterfall process model is a disciplined approach to software development. It is most appropriate for systems development characterized by a high degree of complexity and interdependency. Although expressed as a cascading waterfall, parallelism and some amount of iteration among process phases often exist in actual implementation. During this process, the focus should be on the intermediate deliverables (e.g., design document, interface rules, test plans, and test cases) rather than on the sequence of activities for each development phase. In other words, it should be entity-based instead of step-by-step based. Otherwise the process could become too rigid to be efficient and effective.

The Prototyping Approach

The first step in the waterfall model is the gathering and analysis of customers’ requirements. When the requirements are defined, the design and development work begins. The model assumes that requirements are known, and that once requirements are defined, they will not change or any change will be insignificant. This may well be the case for system development in which the system’s purpose and architecture are thoroughly investigated. However, if requirements change significantly between the time the system’s specifications are finalized and when the product’s development is complete, the waterfall may not be the best model to deal with the resulting problems. Sometimes the requirements are not even known. In the past, various software process models have been proposed to deal with customer feedback on the product to ensure that it satisfied the requirements. Each of these models provides some form of prototyping, of either a part or all of the system. Some of them build prototypes to be thrown away; others evolve the prototype over time, based on customer needs.

A prototype is a partial implementation of the product expressed either logically or physically with all external interfaces presented. The potential customers use the prototype and provide feedback to the development team before full-scale development begins. Seeing is believing, and that is really what prototyping intends to achieve. By using this approach, the customers and the development team can clarify requirements and their interpretation.

As Figure 2.2 shows, the prototyping approach usually involves the following steps:

  1. Gather and analyze requirements.

  2. Do a quick design.

  3. Build a prototype.

  4. Customers evaluate the prototype.

  5. Refine the design and prototype.

  6. If customers are not satisfied with the prototype, loop back to step 5.

  7. If customers are satisfied, begin full-scale product development.

Figure 2.2. The Prototyping Approach

The critical factor for success of the prototyping approach is quick turnaround in designing and building the prototypes. Several technologies can be used to achieve such an objective. Reusable software parts could make the design and implementation of prototypes easier. Formal specification languages could facilitate the generation of executable code (e.g., the Z notation and the Input/Output Requirements Language (IORL) (Smith and Wood, 1989; Wing, 1990)). Fourth-generation languages and technologies could be extremely useful for prototyping in the graphical user interface (GUI) domain. These technologies are still emerging, however, and are used in varying degrees depending on the specific characteristics of the projects.

The prototyping approach is most applicable to small tasks or at the subsystem level. Prototyping a complete system is difficult. Another difficulty with this approach is knowing when to stop iterating. In practice, the method of time boxing is being used. This method involves setting arbitrary time limits (e.g., three weeks) for each activity in the iteration cycle and for the entire iteration and then assessing progress at these checkpoints.
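A small sketch of what a time-boxed prototyping loop might look like follows; the three-week box, the iteration limit, and the refine/evaluate callbacks are illustrative assumptions rather than part of any published method.

```python
from datetime import date, timedelta

# Hypothetical sketch of a time-boxed prototyping loop (steps 3 through 7 above).
# The three-week box and the iteration limit are arbitrary, illustrative values.
TIME_BOX = timedelta(weeks=3)
MAX_ITERATIONS = 5

def run_prototyping(refine, evaluate, start=None):
    """refine(i) builds or refines the i-th prototype; evaluate(p) is True when customers accept it."""
    start = start or date.today()
    prototype, checkpoint = None, start
    for i in range(1, MAX_ITERATIONS + 1):
        checkpoint = start + i * TIME_BOX          # progress is assessed at each fixed checkpoint
        prototype = refine(i)
        if evaluate(prototype):
            return prototype, checkpoint           # satisfied: begin full-scale development
    return prototype, checkpoint                   # out of time boxes: reassess the approach

# Toy usage: customers accept the third refinement.
proto, when = run_prototyping(refine=lambda i: f"prototype v{i}",
                              evaluate=lambda p: p.endswith("v3"))
print(proto, "accepted by checkpoint", when)
```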

Rapid Throwaway Prototyping

The rapid throwaway prototyping approach of software development, made popular by Gomaa and Scott (1981), is now used widely in the industry, especially in application development. It is usually used with high-risk items or with parts of the system that the development team does not understand thoroughly. In this approach, “quick and dirty” prototypes are built, verified with customers, and thrown away until a satisfactory prototype is reached, at which time full-scale development begins.

Evolutionary Prototyping

In the evolutionary prototyping approach, a prototype is built based on some known requirements and understanding. The prototype is then refined and evolved instead of thrown away. Whereas throwaway prototypes are usually used with the aspects of the system that are poorly understood, evolutionary prototypes are likely to be used with aspects of the system that are well understood and thus build on the development team’s strengths. These prototypes are also based on prioritized requirements, sometimes referred to as “chunking” in application development (Hough, 1993). For complex applications, it is not reasonable or economical to expect the prototypes to be developed and thrown away rapidly.

The Spiral Model

The spiral model of software development and enhancement, developed by Boehm (1988), is based on experience with various refinements of the waterfall model as applied to large government software projects. Relying heavily on prototyping and risk management, it is much more flexible than the waterfall model. The most comprehensive application of the model is the development of the TRW Software Productivity System (TRW-SPS) as described by Boehm. The spiral concept and the risk management focus have gained acceptance in software engineering and project management in recent years.

Figure 2.3 shows Boehm’s spiral model. The underlying concept of the model is that each portion of the product and each level of elaboration involves the same sequence of steps (cycle). Starting at the center of the spiral, one can see that each development phase (concept of operation, software requirements, product design, detailed design, and implementation) involves one cycle of the spiral. The radial dimension in Figure 2.3 represents the cumulative cost incurred in accomplishing the steps. The angular dimension represents the progress made in completing each cycle of the spiral. As indicated by the quadrants in the figure, the first step of each cycle of the spiral is to identify the objectives of the portion of the product being elaborated, the alternative means of implementation of this portion of the product, and the constraints imposed on the application of the alternatives. The next step is to evaluate the alternatives relative to the objectives and constraints, to identify the associated risks, and to resolve them. Risk analysis and the risk-driven approach, therefore, are key characteristics of the spiral model, in contrast to the document-driven approach of the waterfall model.

From “A Spiral Model of Software Development and Enhancement,” by B. W. Boehm. IEEE Computer (May): 61–72. © 1988 IEEE. Reprinted with permission.

Figure 2.3. Spiral Model of the Software Process

In this risk-driven approach, prototyping is an important tool. Usually prototyping is applied to the elements of the system or the alternatives that present the higher risks. Unsatisfactory prototypes can be thrown away; when an operational prototype is in place, implementation can begin. In addition to prototyping, the spiral model uses simulations, models, and benchmarks in order to reach the best alternative. Finally, as indicated in the illustration, an important feature of the spiral model, as with other models, is that each cycle ends with a review involving the key members or organizations concerned with the product.
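The skeleton below sketches this repeating cycle in Python. The phase names follow Figure 2.3, but the callback structure and the cost bookkeeping are purely illustrative assumptions, not part of Boehm’s formulation.

```python
# Hypothetical sketch of the repeated spiral cycle: each level of elaboration
# goes through the same sequence of steps and ends with a stakeholder review.
PHASES = ["concept of operation", "software requirements",
          "product design", "detailed design", "implementation"]

def spiral(identify, evaluate_and_resolve_risks, develop_and_verify, plan_next, review):
    cumulative_cost = 0.0                              # the radial dimension in Figure 2.3
    for phase in PHASES:                               # the angular dimension: one cycle per phase
        objectives, alternatives, constraints = identify(phase)
        chosen, cost = evaluate_and_resolve_risks(     # prototyping, simulation, benchmarks
            objectives, alternatives, constraints)
        cumulative_cost += cost
        cumulative_cost += develop_and_verify(phase, chosen)
        plan_next(phase)
        review(phase)                                  # each cycle ends with a review
    return cumulative_cost

# Trivial demonstration with stub activities.
total = spiral(
    identify=lambda ph: (["objectives"], ["alt A", "alt B"], ["constraints"]),
    evaluate_and_resolve_risks=lambda o, a, c: (a[0], 1.0),   # pick an alternative, incur cost
    develop_and_verify=lambda ph, alt: 2.0,
    plan_next=lambda ph: None,
    review=lambda ph: None,
)
print(total)   # 15.0 across the five cycles
```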

For software projects with incremental development or with components to be developed by separate organizations or individuals, a series of spiral cycles can be used, one for each increment or component. A third dimension could be added to Figure 2.3 to represent the model better.

Boehm (1988) provides a candid discussion of the advantages and disadvantages of the spiral model. Its advantages are as follows:

  • Its range of options accommodates the good features of existing software process models, whereas its risk-driven approach avoids many of their difficulties. This is the primary advantage. Boehm also discusses the primary conditions under which this model becomes equivalent to other process models such as the waterfall model and the evolutionary prototype model.

  • It focuses early attention on options involving the reuse of existing software. These options are encouraged because early identification and evaluation of alternatives is a key step in each spiral cycle. This model accommodates preparation for life-cycle evolution, growth, and changes of the software product.

  • It provides a mechanism for incorporating software quality objectives into software product development.

  • It focuses on eliminating errors and unattractive alternatives early.

  • It does not involve separate approaches for software development and software enhancement.

  • It provides a viable framework for integrating hardware-software system development. The risk-driven approach can be applied to both hardware and software.

On the other hand, difficulties with the spiral model include the following:

  • Matching to contract software: Contract software relies heavily on control, checkpoint, and intermediate deliverables for which the waterfall model is good. The spiral model has a great deal of flexibility and freedom and is, therefore, more suitable for internal software development. The challenge is how to achieve the flexibility and freedom prescribed by the spiral model without losing accountability and control for contract software.

  • Relying on risk management expertise: The risk-driven approach is the backbone of the model. The risk-driven specification addresses high-risk elements in great detail and leaves low-risk elements to be elaborated in later stages. However, an inexperienced team may also produce a specification that is just the opposite: a great deal of detail for the well-understood, low-risk elements and little elaboration of the poorly understood, high-risk elements. In such a case, the project may fail and the failure may be discovered only after major resources have been invested. Another concern is that a risk-driven specification is people dependent. In the case where a design produced by an expert is to be implemented by nonexperts, the expert must furnish additional documentation.

  • Need for further elaboration of spiral steps: The spiral model describes a flexible and dynamic process model that can be used to its fullest advantage by experienced developers. For nonexperts and especially for large-scale projects, however, the steps in the spiral must be elaborated and more specifically defined so that consistency, tracking, and control can be achieved. Such elaboration and control are especially important in the area of risk analysis and risk management.

The Iterative Development Process Model

The iterative enhancement (IE) approach (Basili and Turner, 1975), or the iterative development process (IDP), was defined to begin with a subset of the requirements and develop a subset of the product that satisfies the essential needs of the users, provides a vehicle for analysis and training for the customers, and provides a learning experience for the developer. Based on the analysis of each intermediate product, the design and the requirements are modified over a series of iterations to provide a system to the users that meets evolving customer needs with improved design based on feedback and learning.

The IDP model combines prototyping with the strength of the classical waterfall model. Other methods such as domain analysis and risk analysis can also be incorporated into the IDP model. The model has much in common with the spiral model, especially with regard to prototyping and risk management. Indeed, the spiral model can be regarded as a specific IDP model, while the term IDP is a general rubric under which various forms of the model can exist. The model also provides a framework for many modern systems and software engineering methods and techniques such as reuse, object-oriented development, and rapid prototyping.

Figure 2.4 shows an example of the iterative development process model used by IBM Owego, New York. With the purpose of “building a system by evolving an architectural prototype through a series of executable versions, with each successive iteration incorporating experience and more system functionality,” the example implementation contains eight major steps (Luckey et al., 1992):

Source: P. H. Luckey, R. M. Pittman, and A. Q. LeVan, 1992, “Iterative Development Process with Proposed Applications,” IBM Federal Sector Division, Route 17C, Owego, NY 13827.

Figure 2.4. An Example of the Iterative Development Process Model

  1. Domain analysis

  2. Requirements definition

  3. Software architecture

  4. Risk analysis

  5. Prototype

  6. Test suite and environment development

  7. Integration with previous iterations

  8. Release of iteration

As illustrated in the figure, the iteration process involves the last five steps; domain analysis, requirements definition, and software architecture are preiteration steps, which are similar to those in the waterfall model. During the five iteration steps, the following activities occur:

  • Analyze or review the system requirements.

  • Design or revise the solution that best satisfies the requirements.

  • Identify the highest risks for the project and prioritize them. Mitigate the highest priority risk via prototyping, leaving lower risks for subsequent iterations.

  • Define and schedule or revise the next few iterations.

  • Develop the iteration test suite and supporting test environment.

  • Implement the portion of the design that is minimally required to satisfy the current iteration.

  • Integrate the software in test environments and perform regression testing.

  • Update documents for release with the iteration.

  • Release the iteration.

Note that test suite development along with design and development is extremely important for the verification of the function and quality of each iteration. Yet in practice this activity is not always emphasized appropriately.
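As a small illustration of the risk-prioritization activity listed above, the sketch below ranks risks by a simple exposure measure (probability times impact, a common but here assumed convention) and defers all but the highest-priority risk to later iterations. The risk items are invented.

```python
# Hypothetical, self-contained sketch of the risk-driven part of an iteration:
# risks are ranked by exposure; only the highest is mitigated by prototyping now.
def prioritize_risks(risks):
    """risks: list of dicts with 'name', 'probability' (0-1), and 'impact' (cost units)."""
    return sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True)

def plan_iteration(risks):
    ranked = prioritize_risks(risks)
    return {"mitigate_now": ranked[0]["name"],
            "defer": [r["name"] for r in ranked[1:]]}

risks = [
    {"name": "unproven database layer", "probability": 0.6, "impact": 90},
    {"name": "UI layout churn",          "probability": 0.8, "impact": 20},
    {"name": "third-party protocol",     "probability": 0.3, "impact": 40},
]
print(plan_iteration(risks))
# {'mitigate_now': 'unproven database layer', 'defer': ['UI layout churn', 'third-party protocol']}
```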

The development of IBM’s OS/2 2.0 operating system is a combination of the iterative development process and the small team approach. Different from the last example to some extent, the OS/2 2.0 iterative development process involved large-scale early customer feedback instead of just prototyping. The iterative part of the process involved the loop of subsystem design → subsystem code and test → system integration → customer feedback → subsystem design. Specifically, the waterfall process involved the steps of market requirements, design, code and test, and system certification. The iterative process went from initial market requirements to the iterative loop, then to system certification. Within the one-year development cycle, there were five iterations, each with increased functionality, before completion of the system. For each iteration, the customer feedback involved a beta test of the available functions, a formal customer satisfaction survey, and feedback from various vehicles such as electronic messages on Prodigy, IBM internal e-mail conferences, customer visits, technical seminars, and internal and public bulletin boards. Feedback from various channels was also statistically verified and validated by the formal customer satisfaction surveys. More than 30,000 customers and 100,000 users were involved in the iteration feedback process. Supporting the iterative process was the small team approach in which each team assumed full responsibility for a particular function of the system. Each team owned its project, functionality, quality, and customer satisfaction, and was held completely responsible. Cross-functional system teams also provided support and services to make the subsystem teams successful and to help resolve cross-subsystem concerns (Jenkins, 1992).

The OS/2 2.0 development process and approach, although it may not be universally applicable to other products and systems, was apparently a success as attested by customers’ acceptance of the product and positive responses.

The Object-Oriented Development Process

The object-oriented (OO) approach to design and programming, which was introduced in the 1980s, represents a major paradigm shift in software development. This approach will continue to have a major effect on software development for many years. Different from traditional programming, which separates data and control, object-oriented programming is based on objects, each of which is a set of defined data and a set of operations (methods) that can be performed on that data. Like the paradigm of structured design and functional decomposition, the object-oriented approach has become a major cornerstone of software engineering. In the early days of OO technology deployment (from the late 1980s to the mid 1990s), much of the OO literature concerned analysis and design methods; there was little information about OO development processes. In recent years object-oriented technology has been widely accepted and object-oriented development is now so pervasive that there is no longer a question of its viability.
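For readers less familiar with the paradigm, the fragment below illustrates the object concept in Python: a class bundles data with the only operations permitted on that data. The example is generic and hypothetical, not tied to any particular OO method discussed in this chapter.

```python
# Minimal illustration of an object: a set of defined data plus the
# operations (methods) that may be performed on that data.
class Account:                      # hypothetical class, for illustration only
    def __init__(self, owner, balance=0):
        self._owner = owner         # data: encapsulated state
        self._balance = balance

    def deposit(self, amount):      # operation defined on the data
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):              # controlled access instead of exposed data
        return self._balance

acct = Account("J. Smith")
acct.deposit(100)
print(acct.balance())               # 100
```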

Branson and Herness (1992) proposed an OO development process for large-scale projects that centers on an eight-step methodology supported by a mechanism for tracking, a series of inspections, a set of technologies, and rules for prototyping and testing.

The eight-step process is divided into three logical phases:

  1. The analysis phase focuses on obtaining and representing customers’ requirements in a concise manner, to visualize an essential system that represents the users’ requirements regardless of which implementation platform (hardware or software environment) is developed.

  2. The design phase involves modifying the essential system so that it can be implemented on a given set of hardware and software. Essential classes and incarnation classes are combined and refined into the evolving class hierarchy. The objectives of class synthesis are to optimize reuse and to create reusable classes.

  3. The implementation phase takes the defined classes to completion.

The eight steps of the process are summarized as follows:

  1. Model the essential system: The essential system describes those aspects of the system required for it to achieve its purpose, regardless of the target hardware and software environment. It is composed of essential activities and essential data. This step has five substeps:

    • Create the user view.

    • Model essential activities.

    • Define solution data.

    • Refine the essential model.

    • Construct a detailed analysis.

  This step focuses on the user requirements. Requirements are analyzed, dissected, refined, combined, and organized into an essential logical model of the system. This model is based on the perfect technology premise.

  2. Derive candidate-essential classes: This step uses a technique known as “carving” to identify candidate-essential classes and methods from the essential model of the whole system. A complete set of data-flow diagrams, along with supporting process specifications and data dictionary entries, is the basis for class and method selection. Candidate classes and methods are found in external entities, data stores, input flows, and process specifications.

  3. Constrain the essential model: The essential model is modified to work within the constraints of the target implementation environment. Essential activities and essential data are allocated to the various processors and containers (data repositories). Activities are added to the system as needed, based on limitations in the target implementation environment. The essential model, when augmented with the activities needed to support the target environment, is referred to as the incarnation model.

  4. Derive additional classes: Additional candidate classes and methods specific to the implementation environment are selected based on the activities added while constraining the essential model. These classes supply interfaces to the essential classes at a consistent level.

  5. Synthesize classes: The candidate-essential classes and the candidate-additional classes are refined and organized into a hierarchy. Common attributes and operations are extracted to produce superclasses and subclasses. Final classes are selected to maximize reuse through inheritance and importation (see the sketch following this list).

  6. Define interfaces: The interfaces, object-type declarations, and class definitions are written based on the documented synthesized classes.

  7. Complete the design: The design of the implementation module is completed. The implementation module comprises several methods, each of which provides a single cohesive function. Logic, system interaction, and method invocations to other classes are used to accomplish the complete design for each method in a class. Referential integrity constraints specified in the essential model (using the data model diagrams and data dictionary) are now reflected in the class design.

  8. Implement the solution: The implementation of the classes is coded and unit tested.
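The sketch below illustrates the idea behind step 5 (synthesize classes): common data and operations are pulled up into a superclass so that the final classes reuse them through inheritance. The classes shown are hypothetical and meant only to make the mechanism concrete.

```python
# Hypothetical sketch of class synthesis: common attributes and operations are
# extracted into a superclass; subclasses add or override only what is specific.
class Message:                              # extracted superclass (common data/operations)
    def __init__(self, msg_id, text):
        self.msg_id = msg_id
        self.text = text

    def format(self):
        return f"{self.msg_id}: {self.text}"

class ErrorMessage(Message):                # subclass adds only what is specific to it
    def __init__(self, msg_id, text, return_code):
        super().__init__(msg_id, text)
        self.return_code = return_code

    def format(self):
        return f"{super().format()} (rc={self.return_code})"

class InfoMessage(Message):                 # reuses the superclass behavior unchanged
    pass

print(ErrorMessage("E100", "file not found", 8).format())
print(InfoMessage("I200", "backup complete").format())
```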

The analysis phase of the process consists of steps 1 and 2, the design phase consists of steps 3 through 6, and the implementation phase consists of steps 7 and 8. Several iterations are expected during analysis and design. Prototyping may also be used to validate the essential model and to assist in selecting the appropriate incarnation. Furthermore, the process calls for several reviews and checkpoints to enhance the control of the project. The reviews include the following:

  • Requirements review after the second substep of step 1 (model essential system)

  • External structure and design review after the fourth substep (refined model) of step 1

  • Class analysis verification review after step 5

  • Class externals review after step 6

  • Code inspection after step 8 code is complete

In addition to methodology, requirements, design, analysis, implementation, prototyping, and verification, Branson and Herness (1993) assert that the object-oriented development process architecture must also address elements such as reuse, CASE tools, integration, build and test, and project management. The Branson and Herness process model, based on their object-oriented experience at IBM Rochester, represents one attempt to deploy the object-oriented technology in large organizations. It is certain that many more variations will emerge before a commonly recognized OOP model is reached.

Finally, the element of reuse merits more discussion from the process perspective, even in this brief section. Design and code reuse gives object-oriented development significant advantages in quality and productivity. However, reuse is not automatically achieved simply by using object-oriented development. Object-oriented development provides a large potential source of reusable components, which must be generalized to become usable in new development environments. In terms of the development life cycle, generalization for reuse is typically considered an “add-on” at the end of the project. However, generalization activities take time and resources. Therefore, developing with reuse is what every object-oriented project aims for, but developing for reuse is difficult to accomplish. This reuse paradox explains why there is little business-level reusable code despite the promise of OO technology, although many general-purpose reusable libraries exist. Therefore, organizations that intend to leverage the reuse advantage of OO development must deal with this issue in their development process.

Henderson-Sellers and Pant (1993) propose a two-library model for the generalization activities for reusable parts. The model addresses the problem of costing and is quite promising. The first step is to put “on hold” project-specific classes from the current project by placing them in a library of potentially reusable components (LPRC). Thus the only cost to the current project is the identification of these classes. The second library, the library of generalized components (LGC), is the high-quality company resource. At the beginning of each new project, an early phase in the development process is an assessment of classes that reside in the LPRC and LGC libraries in terms of their reuse value for the project. If of value, additional spending on generalization is made and potential parts in LPRC can undergo the generalization process and quality checks and be placed in LGC. Because the reusable parts are to benefit the new project, it is reasonable to allocate the cost of generalization to the customer, for whom it will be a savings.

As the preceding discussion illustrates, it may take significant research, experience, and ingenuity to piece together the key elements of an object-oriented development process and for it to mature. In 1997, the Unified Software Development Process, which was developed by Jacobson, Booch, and Rumbaugh (1997) and is owned by the Rational Software Corporation, was published. The process relies on the Unified Modeling Language (UML) for its visual modeling standard. It is use-case driven, architecture-centric, iterative, and incremental. Use cases are the key components that drive this process model. A use case can be defined as a piece of functionality that gives a user a result of value. All the use cases developed can be combined into a use-case model, which describes the complete functionality of the system. The use-case model is analogous to the functional specification in a traditional software development process model. Use cases are developed with the users and are modeled in UML. These represent the requirements for the software and are used throughout the process model. The Unified Process is also described as architecture-centric. This architecture is a view of the whole design with important characteristics made visible by leaving details out. It works hand in hand with the use cases. Subsystems, classes, and components are expressed in the architecture and are also modeled in UML. Last, the Unified Process is iterative and incremental. Iterations represent steps in a workflow, and increments show growth in functionality of the product. The core workflows for iterative development are:

  • Requirements

  • Analysis

  • Design

  • Implementation

  • Test

The Unified Process consists of cycles. Each cycle results in a new release of the system, and each release is a deliverable product. Each cycle has four phases: inception, elaboration, construction, and transition. A number of iterations occur in each phase, and the five core workflows take place over the four phases.

During inception, a good idea for a software product is developed and the project is kicked off. A simplified use-case model is created and project risks are prioritized. Next, during the elaboration phase, product use cases are specified in detail and the system architecture is designed. The project manager begins planning for resources and estimating activities. All views of the system are delivered, including the use-case model, the design model, and the implementation model. These models are developed using UML and held under configuration management. Once this phase is complete, the construction phase begins. From here the architecture design grows into a full system. Code is developed and the software is tested. Then the software is assessed to determine whether the product meets the users’ needs so that some customers can take early delivery. Finally, the transition phase begins with beta testing. In this phase, defects are tracked and fixed and the software is transitioned to a maintenance team.

One very controversial OO process that has gained recognition and generated vigorous debates among software engineers is Extreme Programming (XP) proposed by Kent Beck (2000). This lightweight, iterative and incremental process has four cornerstone values: communication, simplicity, feedback, and courage. With this foundation, XP advocates the following practices:

  • The Planning Game: Development teams estimate time, risk, and story order. The customer defines scope, release dates, and priority.

  • System metaphor: A metaphor describes how the system works.

  • Simple design: Designs are minimal, just enough to pass the tests that bound the scope.

  • Pair programming: All design and coding is done by two people at one workstation. This spreads knowledge better and uses constant peer reviews.

  • Unit testing and acceptance testing: Unit tests are written before code to give a clear intent of the code and provide a complete library of tests.

  • Refactoring: Code is refactored before and after implementing a feature to help keep the code clean.

  • Collective code ownership: By switching teams and seeing all pieces of the code, all developers are able to fix broken pieces.

  • Continuous integration: The more code is integrated, the more likely it is to keep running without big hang-ups.

  • On-site customer: An onsite customer is considered part of the team and is responsible for domain expertise and acceptance testing.

  • 40-hour week: Stipulating a 40-hour week ensures that developers are always alert.

  • Small releases: Releases are small but contain useful functionality.

  • Coding standard: Coding standards are defined by the team and are adhered to.

According to Beck, because these practices balance and reinforce one another, implementing all of them in concert is what makes XP extreme. With these practices, a software engineering team can “embrace changes.” Unlike other evolutionary process models, XP discourages preliminary requirements gathering, extensive analysis, and design modeling. Instead, it intentionally limits planning for future flexibility, promoting a “You Aren’t Gonna Need It” (YAGNI) philosophy that emphasizes fewer classes and reduced documentation. It appears that the XP philosophy and practices may be more applicable to small projects. For large and complex software development, some XP principles become harder to implement and may even run counter to traditional wisdom that is built upon successful projects. Beck stipulates that to date XP efforts have worked best with teams of ten or fewer members.
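As a concrete, if simplified, illustration of the test-first practice in the list above, the sketch below shows a unit test written before the function it exercises, followed by the simplest code that makes it pass. The example is generic and hypothetical, not taken from Beck’s text.

```python
import unittest

# The test expresses the intent first; the function is then written to pass it.
class PriceWithTaxTest(unittest.TestCase):
    def test_adds_five_percent_tax(self):
        self.assertAlmostEqual(price_with_tax(100.0), 105.0)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            price_with_tax(-1.0)

# The simplest code that satisfies the tests ("simple design").
def price_with_tax(price, rate=0.05):
    if price < 0:
        raise ValueError("price must be non-negative")
    return price * (1 + rate)

if __name__ == "__main__":
    unittest.main()
```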

The Cleanroom Methodology

Cleanroom Software Engineering approaches software development as an engineering process with mathematical foundations rather than a trial-and-error programming process (Linger and Hausler, 1992). The Cleanroom process employs theory-based technologies such as box structure specification of user function and system object architecture, function-theoretic design and correctness verification, and statistical usage testing for quality certification. Cleanroom management is based on incremental development and certification of a pipeline of user-function increments that accumulate into the final product. Cleanroom operations are carried out by small, independent development and certification (test) teams, with teams of teams for large projects (Linger, 1993). Figure 2.5 shows the full implementation of the Cleanroom process (Linger, 1993).

From “Cleanroom Software Engineering for Zero-Defect Software,” by R. C. Linger. Proceedings Fifteenth International Conference on Software Engineering, May 17–21. © 1993 IEEE. Reprinted with permission.

Figure 2.5. The Cleanroom Process

The Cleanroom process emphasizes the importance of the development team having intellectual control over the project. The bases of the process are proof of correctness (of design and code) and formal quality certification via statistical testing. Perhaps the most controversial aspect of Cleanroom is that team verification of correctness takes the place of individual unit testing. Once the code is developed, it is subject to statistical testing for quality assessment. Proponents argue that the intellectual control of a project afforded by team verification of correctness is the basis for prohibition of unit testing. This elimination also motivates tremendous determination by developers that the code they deliver for independent testing be error-free on first execution (Hausler and Trammell, 1993).

The Cleanroom process proclaims that statistical testing can replace coverage and path testing. In Cleanroom, all testing is based on anticipated customer usage. Test cases are designed to rehearse the more frequently used functions. Therefore, errors that are likely to cause frequent failures to the users are likely to be found first. In terms of measurement, software quality is certified in terms of mean time to failure (MTTF).
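A rough sketch of the statistical-testing idea follows: test cases are drawn in proportion to an assumed usage profile, and MTTF is estimated from the failures observed during certification. The profile, the sampling scheme, and the numbers are illustrative assumptions only and are far simpler than Cleanroom’s actual certification models.

```python
import random

# Hypothetical usage profile: probabilities of the functions a typical user exercises.
usage_profile = {"open_file": 0.50, "save_file": 0.30, "print": 0.15, "export": 0.05}

def draw_test_cases(n, profile, rng=random.Random(42)):
    """Sample n test cases in proportion to anticipated usage."""
    functions = list(profile)
    weights = [profile[f] for f in functions]
    return rng.choices(functions, weights=weights, k=n)   # frequent functions rehearsed most

def estimate_mttf(execution_hours, failure_count):
    """Naive MTTF estimate from observed failures during certification testing."""
    if failure_count == 0:
        return float("inf")            # no failures observed
    return execution_hours / failure_count

cases = draw_test_cases(1000, usage_profile)
print(cases.count("open_file"), "of 1000 cases exercise open_file")   # roughly 500
print("estimated MTTF:", estimate_mttf(execution_hours=400, failure_count=2), "hours")
```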

The Cleanroom process represents one of the formal approaches in software development that have begun to see application in industry. Other examples of formal approaches include the Vienna Development Method (VDM) and the Z notation (Smith and Wood, 1989; Wing, 1990). It appears that Z and VDM have been used primarily by developers in the United Kingdom and Europe; Cleanroom projects are conducted mostly in the United States.

Since the pilot projects in 1987 and 1988, a number of projects have been completed using the Cleanroom process. As reported by Linger (1993), the average defect rate in first-time execution was 2.9 defects per thousand lines of code (KLOC), which is significantly better than the industry average.

The adoption of Cleanroom thus far is mostly confined to small projects. As with other formal methods, many developers and project managers have questioned its ability to scale up to large projects and the amount of mathematical training it requires. Also, as discussed previously, the prohibition of unit testing is perhaps the most controversial concern. Whether statistical testing could completely replace range/limit testing and path testing remains a key question in many developers’ minds. This is especially true when the software system is complex or when the system is a common-purpose system where a typical customer usage profile is itself in question. Not surprisingly, some Cleanroom projects do not preclude the traditional methods (such as unit test and limit test) while adopting Cleanroom’s formal approaches. Hausler and Trammell (1993) even proposed a phased implementation approach in order to facilitate the acceptance of Cleanroom. The phased implementation framework includes three stages:

  1. Introductory implementation involves the implementation of Cleanroom principles without the full formality of the methodology (e.g., box structure, statistical testing, and certification of reliability).

  2. Full implementation involves the complete use of Cleanroom’s formal methods (as illustrated in Figure 2.5).

  3. Advanced implementation optimizes the process for the local environment (e.g., the use of an automated code generator, Markov modeling and analysis of system usage, and certification using a locally validated reliability model).

In their recent work, the Cleanroom experts elaborate in detail the development and certification process (Prowell et al., 1999). They also show that the Cleanroom software process is compatible with the Software Engineering Institute’s capability maturity model (CMM).

The Defect Prevention Process

The defect prevention process (DPP) is not itself a software development process. Rather, it is a process to continually improve the development process. It originated in the software development environment and thus far has been implemented mostly in software development organizations. Because we would be remiss if we did not discuss this process while discussing software development processes, this chapter includes a brief discussion of DPP.

The DPP was modeled on techniques used in Japan for decades and is in agreement with Deming’s principles. It is based on three simple steps:

  1. Analyze defects or errors to trace the root causes.

  2. Suggest preventive actions to eliminate the defect root causes.

  3. Implement the preventive actions.

The formal process, first used at the IBM Communications Programming Laboratory at Research Triangle Park, North Carolina (Jones, 1985; Mays et al., 1990), consists of the following four key elements:

  1. Causal analysis meetings: These are usually two-hour brainstorming sessions conducted by technical teams at the end of each stage of the development process. Developers analyze defects that occurred in the stage, trace the root causes of errors, and suggest possible actions to prevent similar errors from recurring. Methods for removing similar defects in a current product are also discussed. Team members discuss overall defect trends that may emerge from their analysis of this stage, particularly what went wrong and what went right, and examine suggestions for improvement. After the meeting, the causal analysis leader records the data (defects, causes, and suggested actions) in an action database for subsequent reporting and tracking. To allow participants at this meeting to express their thoughts and feelings on why defects occurred without fear of jeopardizing their careers, managers do not attend this meeting.

  2. Action team: The action team is responsible for screening, prioritizing, and implementing suggested actions from causal analysis meetings. Each member has a percentage of time allotted for this task. Each action team has a coordinator and a management representative (the action team manager). The team uses reports from the action database to guide its meetings. The action team is the engine of the process. Other than action implementation, the team is involved in feedback to the organization, reports to management on the status of its activities, publishing success stories, and taking the lead in various aspects of the process. The action team relieves the programmers of having to implement their own suggestions, especially actions that have a broad scope of influence and require substantial resources. Of course, existence of the action team does not preclude action implemented by others. In fact, technical teams are encouraged to take improvement actions, especially those that pertain to their specific areas.

  3. Stage kickoff meetings: The technical teams conduct these meetings at the beginning of each development stage. The emphasis is on the technical aspect of the development process and on quality: What is the right process? How do we do things more effectively? What are the tools and methods that can help? What are the common errors to avoid? What improvements and actions had been implemented? The meetings thus serve two main purposes: as a primary feedback mechanism of the defect prevention process and as a preventive measure.

  4. Action tracking and data collection: To prevent suggestions from being lost over time, to aid action implementation, and to enhance communications among groups, an action database tool is needed to track action status.

Figure 2.6 shows this process schematically.

Figure 2.6. Defect Prevention Process

Different from postmortem analysis, the DPP is a real-time process, integrated into every stage of the development process. Rather than wait for a postmortem on the project, which has frequently been the case, DPP is incorporated into every sub-process and phase of that project. This approach ensures that meaningful discussion takes place when it is fresh in everyone’s mind. It focuses on defect-related actions and process-oriented preventive actions, which is very important. Through the action teams and action tracking tools and methodology, DPP provides a systematic, objective, data-based mechanism for action implementation. It is a bottom-up approach; causal analysis meetings are conducted by developers without management interference. However, the process requires management support and direct participation via the action teams.
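As an illustration of the action database mentioned above, the sketch below shows the kind of records such a tool might hold: a defect’s root cause from the causal analysis meeting and the preventive actions tracked to closure by the action team. The field names and statuses are invented, not taken from the IBM implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical records for an action-tracking database.
@dataclass
class CausalAnalysisEntry:
    defect_id: str
    stage_found: str           # e.g., "I1 inspection", "component test"
    root_cause: str            # e.g., "interface spec ambiguous"
    suggested_actions: List[str] = field(default_factory=list)

@dataclass
class PreventiveAction:
    action_id: str
    description: str
    owner: str                 # action team member responsible
    status: str = "open"       # open -> in progress -> closed

entry = CausalAnalysisEntry(
    defect_id="D-1042",
    stage_found="component test",
    root_cause="intercomponent interface changed without notification",
    suggested_actions=["add interface change to inspection checklist"],
)
action = PreventiveAction("A-311", entry.suggested_actions[0], owner="action team")
print(entry.root_cause, "->", action.description, f"[{action.status}]")
```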

Many divisions of IBM have had successful experiences with DPP and causal analysis. DPP was successful at IBM in Raleigh, North Carolina, on several software products. For example, IBM’s Network Communications Program had a 54% reduction in error injection during development and a 60% reduction in field defects after DPP was implemented. Also, IBM in Houston, Texas, developed the space shuttle onboard software control system with DPP and achieved zero defects since the late 1980s. Causal analysis of defects along with actions aimed at eliminating the cause of defects are credited as the key factors in these successes (Mays et al., 1990). Indeed, the element of defect prevention has been incorporated as one of the “imperatives” of the software development process at IBM. Other companies, especially those in the software industry, have also begun to implement the process.

DPP can be applied to any development process—waterfall, prototyping, iterative, spiral, Cleanroom, or others. As long as the defects are recorded, causal analysis can be performed and preventive actions mapped and implemented. For example, the middle of the waterfall process includes designing, coding, and testing. After incorporating DPP at each stage, the process will look like Figure 2.7. The important role of DPP in software process improvement is widely recognized by the software community. In the SEI (Software Engineering Institute) software process maturity assessment model (Humphrey, 1989), the element of defect prevention is necessary for a process to achieve the highest maturity level—level 5. The SEI maturity model is discussed in more detail in the next section.

Figure 2.7. Applying the Defect Prevention Process to the Middle Segment of the Waterfall Model

Finally, although the defect prevention process has been implemented primarily in software development environments, it can be applied to any product or industry. Indeed, the international quality standard ISO 9000 has a major element of corrective action; DPP is often an effective vehicle employed by companies to address this element when they implement the ISO 9000 registration process. ISO 9000 is also covered in the next section on process maturity assessment and quality standards.

Process Maturity Framework and Quality Standards

Regardless of which process is used, the degree to which it is implemented varies from organization to organization and even from project to project. Indeed, given the framework of a certain process model, the development team usually defines its specifics such as implementation procedures, methods and tools, metrics and measurements, and so forth. Whereas certain process models are better for certain types of projects under certain environments, the success of a project depends heavily on the implementation maturity, regardless of the process model. In addition to the process model, questions related to the overall quality management system of the company are important to the outcome of the software projects.

This section discusses frameworks to assess the process maturity of an organization or a project. They include the SEI and the Software Productivity Research (SPR) process maturity assessment methods, the Malcolm Baldrige discipline and assessment processes, and the ISO 9000 registration process. Although the SEI and SPR methods are specific to software processes, the latter two frameworks are quality process and quality management standards that apply to all industries.

The SEI Process Capability Maturity Model

The Software Engineering Institute at Carnegie Mellon University developed the Process Capability Maturity Model (CMM), a framework for assessing the maturity of software development processes (Humphrey, 1989). The CMM includes five levels of process maturity (Humphrey, 1989, p. 56):

Level 1: Initial

Characteristics: Chaotic—unpredictable cost, schedule, and quality performance.

Level 2: Repeatable

Characteristics: Intuitive—cost and quality highly variable, reasonable control of schedules, informal and ad hoc methods and procedures. The key elements, or key process areas (KPA), to achieve level 2 maturity follow:

  • Requirements management

  • Software project planning and oversight

  • Software subcontract management

  • Software quality assurance

  • Software configuration management

Level 3: Defined

Characteristics: Qualitative—reliable costs and schedules, improving but unpredictable quality performance. The key elements to achieve this level of maturity follow:

  • Organizational process improvement

  • Organizational process definition

  • Training program

  • Integrated software management

  • Software product engineering

  • Intergroup coordination

  • Peer reviews

Level 4: Managed

Characteristics: Quantitative—reasonable statistical control over product quality. The key elements to achieve this level of maturity follow:

  • Process measurement and analysis

  • Quality management

Level 5: Optimizing

Characteristics: Quantitative basis for continued capital investment in process automation and improvement. The key elements to achieve this highest level of maturity follow:

  • Defect prevention

  • Technology innovation

  • Process change management

The SEI maturity assessment framework has been used by government agencies and software companies. It is meant to be used with an assessment methodology and a management system. The assessment methodology relies on a questionnaire (85 items in version 1 and 124 items in version 1.1), with yes or no answers. For each question, the SEI maturity level that the question is associated with is indicated. Special questions are designated as key to each maturity level. To be qualified for a certain level, 90% of the key questions and 80% of all questions for that level must be answered yes. The maturity levels are hierarchical. Level 2 must be attained before the calculation for level 3 or higher is accepted. Levels 2 and 3 must be attained before level 4 calculation is accepted, and so forth. If an organization has more than one project, its ranking is determined by answering the questionnaire with a composite viewpoint—specifically, the answer to each question should be substantially true across the organization.
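The qualification rule lends itself to a simple calculation. The sketch below applies the 90%/80% thresholds and the hierarchical rule to a toy set of answers; the data structures are assumptions made purely for illustration.

```python
# Hypothetical sketch of the qualification rule: 90% of key questions and 80% of
# all questions for a level must be answered yes, and levels are hierarchical.
def level_attained(answers):
    """answers: list of (is_key_question, answered_yes) tuples for one level."""
    if not answers:
        return False
    key = [yes for is_key, yes in answers if is_key]
    key_ratio = sum(key) / len(key) if key else 1.0
    all_ratio = sum(yes for _, yes in answers) / len(answers)
    return key_ratio >= 0.90 and all_ratio >= 0.80

def maturity_level(answers_by_level):
    """answers_by_level: dict mapping level (2..5) to that level's answer list."""
    level = 1
    for candidate in (2, 3, 4, 5):                 # level 2 must be attained before 3, and so on
        if level_attained(answers_by_level.get(candidate, [])):
            level = candidate
        else:
            break
    return level

# Toy example: level 2 passes; level 3 misses the 80% overall threshold.
answers = {
    2: [(True, True)] * 10 + [(False, True)] * 10,
    3: [(True, True)] * 5 + [(False, False)] * 5,
}
print(maturity_level(answers))   # prints 2
```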

It is interesting to note that pervasive use of software metrics and models is a key characteristic of level 4 maturity, and that defect prevention is a key element of level 5. Following is a list of metrics-related topics addressed by the questionnaire.

  • Profiles of software size maintained for each software configuration item over time

  • Statistics on software design errors

  • Statistics on software code and test errors

  • Projection of design errors and comparison between projected and actual numbers

  • Projection of test errors and comparison between projected and actual numbers

  • Measurement of design review coverage

  • Measurement of test coverage

  • Tracking of design review actions to closure

  • Tracking of testing defects to closure

  • Database for process metrics data across all projects

  • Analysis of review data gathered during design reviews

  • Analysis of data already gathered to determine the likely distribution and characteristics of the errors in the remainder of the project

  • Analysis of errors to determine their process-related causes

  • Analysis of review efficiency for each project

Several questions on defect prevention address the following topics:

  • Mechanism for error cause analysis

  • Analysis of error causes to determine the process changes required for error prevention

  • Mechanism for initiating error-prevention actions

The SEI maturity assessment has been conducted on many projects, carried out by SEI or by the organizations themselves in the form of self-assessment. As of April 1996, based on assessments of 477 organizations by SEI, 68.8% were at level 1, 18% were at level 2, 11.3% were at level 3, 1.5% were at level 4, and only 0.4% were at level 5 (Humphrey, 2000). As of March 2000, based on more recent assessments of 870 organizations since 1995, the percentage distribution by level is: level 1, 39.3%; level 2, 36.3%; level 3, 17.7%; level 4, 4.8%; level 5, 1.8% (Humphrey, 2000). The data indicate that the maturity profile of software organizations is improving.

The SEI maturity assessment framework applies at the organizational or project level. At the individual and team levels, Humphrey developed the Personal Software Process (PSP) and the Team Software Process (TSP) (Humphrey, 1995, 1997, 2000a, 2000b). The PSP shows software engineers how to plan and track their work and how to follow good, consistent practices that lead to high-quality software. Time management, sound software engineering practices, data tracking, and analysis at the individual level are among the focus areas of the PSP. The TSP builds on the PSP and addresses how to apply similar engineering discipline to the full range of a team's software tasks. The PSP and TSP can be viewed as the individual and team versions of the CMM, respectively. Per Humphrey's guidelines, PSP introduction should follow organizational process improvement and should generally be deferred until the organization is working on achieving at least CMM level 2 (Humphrey, 1995).

Since the early 1990s, a number of capability maturity models have been developed for different disciplines. The Capability Maturity Model Integration (CMMI) was developed by integrating practices from four CMMs: for software engineering, for systems engineering, for integrated product and process development (IPPD), and for acquisition. It was released in late 2001 (Software Engineering Institute, 2001a, 2001b). Organizations that want to pursue process improvement across disciplines can now rely on a single consistent model. The CMMI has two representations: the staged representation and the continuous representation. The staged representation provides five levels of process maturity.

Maturity Level 1: Initial

Processes are ad hoc and chaotic.

Maturity Level 2: Managed

Focuses on basic project management. The process areas (PAs) are:

  • Requirements management

  • Project planning

  • Project monitoring and control

  • Supplier agreement management

  • Measurement and analysis

  • Process and product quality assurance

  • Configuration management

Maturity Level 3: Defined

Focuses on process standardization. The process areas are:

  • Requirements development

  • Technical solution

  • Product integration

  • Verification

  • Validation

  • Organizational process focus

  • Organizational process definition

  • Integrated product management

  • Risk management

  • Decision analysis and resolution

  • Organizational environment for integration (IPPD)

  • Integrated teaming (IPPD)

Level 4: Quantitatively Managed

Focuses on quantitative management. The process areas are:

  • Organizational process performance

  • Quantitative project management

Level 5: Optimizing

Focuses on continuous process improvement. The process areas are:

  • Organizational innovation and deployment

  • Causal analysis and resolution

The continuous representation of the CMMI is used to describe the capability level of individual process areas. The capability levels are as follows:

  • Capability Level 0: Incomplete

  • Capability Level 1: Performed

  • Capability Level 2: Managed

  • Capability Level 3: Defined

  • Capability Level 4: Quantitatively Managed

  • Capability Level 5: Optimizing

The two representations of the CMMI take different approaches to process improvement. The staged representation focuses on the organization as a whole and provides a road map for understanding and improving its processes through successive stages. The continuous representation focuses on individual process areas and allows the organization to concentrate on the processes most in need of, or most likely to benefit from, improvement. The rules for moving from one representation to the other have been defined, so choosing one representation does not preclude using the other at a later time.
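
As a simple way to picture the difference, the sketch below shows a hypothetical capability profile under the continuous representation, in which each selected process area carries its own capability level rather than the organization holding a single maturity level. The process areas chosen and the levels assigned are invented for illustration.

    # Hypothetical capability profile under the continuous representation: each
    # process area is rated on its own 0-5 capability level. Under the staged
    # representation, by contrast, the organization as a whole would hold one
    # maturity level (1-5). Process areas and levels below are invented.
    capability_profile = {
        "Requirements management": 3,
        "Configuration management": 2,
        "Measurement and analysis": 1,
        "Risk management": 0,
    }

    for process_area, level in capability_profile.items():
        print(f"{process_area}: capability level {level}")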

The SPR Assessment

Software Productivity Research, Inc. (SPR), developed the SPR assessment method at about the same time the SEI process maturity model was developed (Jones, 1986). There is a large degree of similarity, as well as some substantial differences, between the SEI and SPR methods (Jones, 1992), and some leading U.S. software developers use both methods concurrently. While SEI's questions focus on software organization structure and software process, SPR's questions cover both strategic corporate issues and tactical project issues that affect quality, productivity, and user satisfaction. The SPR questionnaire contains about 400 questions. Furthermore, the SPR questions are linked multiple-choice questions answered on a five-point Likert scale, whereas the SEI method uses a binary (yes/no) scale. The overall process assessment outcome of the SPR method is expressed on the same five-point scale:

  1. Excellent

  2. Good

  3. Average

  4. Below average

  5. Poor

Unlike SEI's five maturity levels, which have defined criteria, the SPR questions are structured so that a rating of "3" is the approximate average for the topic being explored. SPR has also developed an automated software tool (CHECKPOINT) for assessment, resource planning, and quality projection. In addition, the SPR method collects quantitative productivity and quality data from each project assessed, which is another difference between the SPR and SEI assessment methods.
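
For a concrete picture of the response scale, the sketch below records a few five-point answers and reduces them to an overall rating by simple averaging. This is only an illustration of the scale's direction (1 = excellent, 5 = poor, with 3 as the approximate average); it does not reproduce SPR's actual weighting or the aggregation implemented in CHECKPOINT, and the topic names are assumed.

    # Illustrative only: five-point SPR-style responses (1 = excellent ... 5 = poor)
    # reduced to an overall rating by plain averaging. SPR's real aggregation, as
    # implemented in CHECKPOINT, is not reproduced here. Topic names are assumed.
    SCALE = {1: "Excellent", 2: "Good", 3: "Average", 4: "Below average", 5: "Poor"}

    responses = {
        "quality and productivity measurements": 2,
        "pretest defect removal experience": 3,
        "project quality and reliability targets": 4,
    }

    overall = round(sum(responses.values()) / len(responses))
    print(f"Overall rating: {overall} ({SCALE[overall]})")   # prints: 3 (Average)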

With regard to software quality and metrics, topics such as the following are addressed by the SPR questions:

  • Quality and productivity measurements

  • Pretest defect removal experience among programmers

  • Testing defect removal experience among programmers

  • Project quality and reliability targets

  • Pretest defect removal at the project level

  • Project testing defect removal

  • Postrelease defect removal

Findings of the SPR assessments are often divided into five major themes (Jones, 2000):

  • Findings about the projects or software products assessed

  • Findings about the software technologies used

  • Findings about the software processes used

  • Findings about the ergonomics and work environments for staff

  • Findings about personnel and training for management and staff

According to Jones (2000), as of 2000 SPR had performed assessments and benchmarks for nearly 350 corporations and 50 government organizations, with more than 600 sites assessed. The percentage distribution of these assessments across the five assessment levels is: excellent, 3.0%; good, 18.0%; average, 54.0%; below average, 22.0%; poor, 3.0%.

The Malcolm Baldrige Assessment

The Malcolm Baldrige National Quality Award (MBNQA) is the most prestigious quality award in the United States. Established in 1988 by the U.S. Department of Commerce (and named after Secretary Malcolm Baldrige), the award is given annually to recognize U.S. companies that excel in quality management and quality achievement. The examination criteria are divided into seven categories that contain twenty-eight examination items:

  • Leadership

  • Information and analysis

  • Strategic quality planning

  • Human resource utilization

  • Quality assurance of products and services

  • Quality results

  • Customer satisfaction

The system for scoring the examination items is based on three evaluation dimensions: approach, deployment, and results. Each item requires information relating to at least one of these dimensions. Approach refers to the methods the company uses to achieve the purposes addressed in the examination item. Deployment refers to the extent to which the approach is applied. Results refers to the outcomes and effects achieved.

The purpose of the Malcolm Baldrige assessment approach (the examination items and their assessment) is fivefold:

  1. Elevate quality standards and expectations in the United States.

  2. Facilitate communication and sharing among and within organizations of all types based on a common understanding of key quality requirements.

  3. Serve as a working tool for planning, training, assessment, and other uses.

  4. Provide the basis for making the award.

  5. Provide feedback to the applicants.

There are 1,000 points available in the award criteria, and each examination item is given a percentage score (ranging from 0% to 100%). A strong candidate for the Baldrige award generally scores above 70%, which would roughly translate as follows (a small illustration of the scoring arithmetic follows the list):

  • For an approach examination item, continuous refinement of approaches is in place and a majority of the approaches are linked to each other.

  • For a deployment examination item, deployment has reached all of the company’s major business areas as well as many support areas.

  • For a results examination item, the company’s results in many of its major areas are among the highest in the industry. There should be evidence that the results are caused by the approach.
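
To make the arithmetic concrete, the sketch below combines per-item percentage scores with item point values to produce a total against the 1,000 available points. The item names and point values are hypothetical, used only to illustrate the calculation.

    # Hypothetical illustration of the Baldrige scoring arithmetic: each examination
    # item carries a maximum point value, receives a 0-100% score, and the weighted
    # results are summed toward the 1,000 available points. The item names and point
    # values below are made up for illustration.
    items = [
        # (examination item, max points, percentage score)
        ("Senior executive leadership",   45, 0.80),
        ("Quality results",               75, 0.70),
        ("Customer satisfaction results", 85, 0.65),
    ]

    total_available = sum(max_pts for _, max_pts, _ in items)
    total_scored = sum(max_pts * pct for _, max_pts, pct in items)

    print(f"{total_scored:.0f} of {total_available} points "
          f"({100 * total_scored / total_available:.0f}%)")   # prints: 144 of 205 points (70%)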

While the score is important, the most valuable output from an assessment is the feedback, which consists of the observed strengths and, most significantly, the areas for improvement. It is not unusual for even the higher-scoring enterprises to receive hundreds of improvement suggestions. By focusing on and eliminating the high-priority weaknesses, the company can ensure continuous improvement.

To be the MBNQA winner, the four basic elements of the award criteria must be evident:

  1. Driver: The leadership of the senior executive management team.

  2. System: The set of well-defined and well-designed processes for meeting the company’s quality and performance requirements.

  3. Measure of progress: The results of the company’s in-process quality measurements (aimed at improving customer value and company performance).

  4. Goal: The basic aim of the quality process is the delivery of continuously improving value to customers.

Many U.S. companies have adopted the Malcolm Baldrige assessment and its discipline as the basis for their in-company quality programs. In 1992, the European Foundation for Quality Management established the European Quality Award, which is given to the most successful proponents of total quality management in Western Europe. Its criteria are similar to those of the Baldrige award (a 1,000-point maximum, with approach, deployment, and results as scoring dimensions). Although there are nine categories (versus Baldrige's seven), they cover similar examination areas. In 1998, the seven MBNQA categories were reorganized as Leadership, Strategic Planning, Customer and Market Focus, Information and Analysis, Human Resource Focus, Process Management, and Business Results. Many U.S. states have established quality award programs modeled on the Malcolm Baldrige National Quality Award.

Unlike the SEI and SPR assessments, which focus on software organizations, projects, and processes, the MBNQA and the European Quality Award encompass a much broader scope. They are quality standards for overall quality management, regardless of industry. Indeed, the MBNQA covers three broad categories: manufacturing, service, and small business.

ISO 9000

ISO 9000, a set of standards and guidelines for a quality assurance management system, represents another body of quality standards. It was established by the International Organization for Standardization and has been adopted by the European Community, where many companies are ISO 9000 registered. To position their products to compete better in the European market, many U.S. companies are also working to have their development and manufacturing processes registered. Obtaining ISO registration requires passing a formal audit of twenty elements. Guidelines for applying the twenty elements to the development, supply, and maintenance of software are specified in ISO 9000-3. The twenty elements are as follows:

  1. Management responsibility

  2. Quality system

  3. Contract review

  4. Design control

  5. Document control

  6. Purchasing

  7. Purchaser-supplied product

  8. Product identification and traceability

  9. Process control

  10. Inspection and testing

  11. Inspection, measuring, and test equipment

  12. Inspection and test status

  13. Control of nonconforming product

  14. Corrective action

  15. Handling, storage, packaging, and delivery

  16. Quality records

  17. Internal quality audits

  18. Training

  19. Servicing

  20. Statistical techniques

Many firms pursue ISO 9000 registration, and many fail the first audit; the initial failure rate ranges from 60% to 70%. This statistic is probably explained by the complexity of the standards, their bureaucratic nature, the opportunity for omissions, and a lack of familiarity with the requirements.

From the software standpoint, corrective actions and document control are the areas of most nonconformance. As discussed earlier, the defect prevention process is a good vehicle to address the element of corrective action. It is important, however, to make sure that the process is fully implemented throughout the entire organization. If an organization does not implement the DPP, a process for corrective action must be established to meet the ISO requirements.

With regard to document control, ISO 9000 has very strong requirements, as the following examples demonstrate (a minimal record sketch follows the list):

  • Must be adequate for purpose: The document must allow a properly trained person to adequately perform the described duties.

  • Owner must be identified: The owner may be a person or department. The owner is not necessarily the author.

  • Properly approved before issue: Qualified approvers must be identified by organizational title and by name before the document is distributed.

  • Distribution must be controlled: Control may consist of:

    • Keeping a master hard copy with distribution on demand

    • Maintaining a distribution record

    • Having documents reside online available to all authorized users, with the following control statement, “Master document is the online version.”

  • Version identified: The version must be identified clearly by a version level or a date.

  • Pages numbered: All pages must be numbered to ensure sections are not missing.

  • Total pages indicated: The total number of pages must be indicated, at least on the title page.

  • Promptly destroyed when obsolete: When a controlled document is revised or replaced, all copies of it must be recalled or destroyed. Individuals who receive controlled documents are responsible for prompt disposition of superseded documents.
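
The control requirements above map naturally onto a simple document record. The sketch below is a minimal, assumed representation showing the attributes an auditor would look for; the field names are illustrative and are not prescribed by ISO 9000.

    # Minimal, assumed representation of a controlled document. The field names are
    # illustrative and are not prescribed by ISO 9000; each field corresponds to one
    # of the document-control requirements listed above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ControlledDocument:
        title: str
        owner: str                      # person or department; not necessarily the author
        approvers: List[str]            # identified by organizational title and name
        version: str                    # version level or a date
        total_pages: int                # indicated at least on the title page
        distribution: List[str] = field(default_factory=list)   # distribution record
        control_statement: str = "Master document is the online version."
        obsolete: bool = False          # superseded copies must be recalled or destroyed

    # Hypothetical example record.
    doc = ControlledDocument(
        title="Build and Release Procedure",
        owner="Release Engineering",
        approvers=["Development Manager: J. Smith"],
        version="1992-06-01",
        total_pages=12,
    )
    print(doc.title, doc.version)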

From our perspective, the more interesting requirements address software metrics, which are listed under the element of statistical techniques. The requirements address both product metrics and process metrics; a small illustrative sketch follows the list.

  1. Product metrics: Measurements should be used for the following purposes:

    • To collect data and report metric values on a regular basis

    • To identify the current level of performance on each metric

    • To take remedial action if metric levels grow worse or exceed established target levels

    • To establish specific improvement goals in terms of the metrics

      At a minimum, some metrics should be used that represent

    • Reported field failures

    • Defects from customer viewpoint

      Selected metrics should be described such that results are comparable.

  2. Process metrics

    • Check whether in-process quality objectives are being met.

    • Address how well the development process is being carried out in terms of checkpoints.

    • Address how effective the development process is at reducing the probability that faults are introduced or go undetected.
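
As a small illustration of the product-metric requirements listed above (report values regularly, compare them with established targets, and take remedial action when a target is exceeded), the sketch below checks reported values against targets. The metric names and numbers are assumptions for illustration.

    # Illustrative sketch of the product-metric requirement described above: compare
    # regularly reported metric values against established target levels and flag any
    # metric that calls for remedial action. Metric names and targets are assumed.
    targets = {
        "field failures per month": 5,        # reported field failures
        "customer-reported defects": 10,      # defects from the customer's viewpoint
    }

    reported = {
        "field failures per month": 8,
        "customer-reported defects": 7,
    }

    for metric, target in targets.items():
        value = reported.get(metric)
        if value is not None and value > target:
            print(f"Remedial action needed: {metric} = {value} exceeds target {target}")
        else:
            print(f"{metric} = {value} is within target {target}")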

The MBNQA criteria and the ISO 9000 quality assurance system can complement each other as an enterprise pursues quality. Note, however, that Baldrige is a nonprescriptive assessment tool that illuminates improvement items, whereas ISO 9000 registration requires passing a formal audit. Furthermore, while the Malcolm Baldrige assessment focuses on both process and results, the ISO 9000 audit focuses on a quality management system and process control. Simply put, ISO 9000 can be described as "say what you do, do what you say, and prove it." But ISO 9000 does not examine quality results and customer satisfaction, areas toward which the MBNQA is heavily tilted. The two sets of standards are thus complementary, and development organizations that adopt both will have more rigorous processes. Figure 2.8 compares ISO 9000 and the Baldrige scoring system. For the Baldrige system, the length of the arrow for each category is proportional to the maximum score for that category. For ISO 9000, the lengths of the arrows are based on the perceived strength of focus from the IBM Rochester ISO 9000 audit experience (the initial registration audit in 1992 and subsequent yearly surveillance audits). As can be seen, if the strengths of ISO 9000 (process quality and process implementation) are combined with the strengths of the Baldrige discipline (quality results, customer focus and satisfaction, and broader issues such as leadership and human resource development), the resulting quality system will have both broad-based coverage and deep penetration.

Figure 2.8. Malcolm Baldrige Assessment and ISO 9000: A Comparison Based on the Baldrige Scoring

The Baldrige/ISO synergism comes from the following:

  • The formal ISO documentation requirements (e.g., quality record) facilitate addressing the Baldrige examination items.

  • The formal ISO validation requirements (i.e., internal assessments, external audits, and periodic surveillance) assist completeness and thoroughness.

  • The heavy ISO emphasis on corrective action contributes to the company’s continuous improvement program.

  • The audit process itself results in additional focus on many of the Baldrige examination areas.

In recent years, the ISO technical committee responsible for the ISO 9000 family of quality standards has undertaken a major project to update the standards and make them more user-friendly. ISO 9001:2000 contains the first major changes to the standards since their initial issue. Some of the major changes include the following (British Standards Institution, 2001; Cianfrani, Tsiakals, and West, 2001):

  • Use of a process approach and new structure for standards built around a process model that considers all work in terms of inputs and outputs

  • Shift of emphasis from preparing documented procedures to describe the system to developing and managing a family of effective processes

  • Greater emphasis on the role of top management

  • Increased emphasis on the customer, including understanding needs, meeting requirements, and measuring customer satisfaction

  • Emphasis on setting measurable objectives and on measuring product and process performance

  • Introduction of requirements for analysis and the use of data to define opportunities for improvement

  • Formalization of the concept of continual improvement of the quality management system

  • Use of wording that is easily understood in all product sectors, not just hardware

  • Provisions via the application clause to adapt ISO 9001:2000 to all sizes and kinds of organizations and to all sectors of the marketplace

From these changes, it appears ISO 9000 is moving closer to the MBNQA criteria while maintaining a strong process improvement focus.

Summary

This chapter

  • Describes the major process models and approaches in software development—the waterfall process, the prototyping approach, the spiral model, the iterative process, the object-oriented process, the Cleanroom methodology, and the defect prevention process.

  • Discusses two methods of process maturity assessment—the SEI process capability maturity model and the SPR assessment method.

  • Summarizes two bodies of quality management standards—the Malcolm Baldrige National Quality Award assessment discipline and ISO 9000.

The waterfall process is time-tested and is most suitable for the development of complex system software with numerous interdependencies. This process yields clearly defined intermediate deliverables and enables strong project control.

The prototyping approach enables the development team and the customers to clarify the requirements and their interpretation early in the development cycle. It is not a process per se; it can be used with various process models. It has become widely used in application development. It can also be used with subsystems of systems software when external interfaces are involved.

The iterative process and the spiral model have seen wide use in recent years, especially in application development. Coupled with risk management and prototyping, these newer processes increase the likelihood that the final product will satisfy user requirements and help reduce development cycle time.

In terms of object-oriented development, the Unified Process is the best-known process in the object-oriented community. The lightweight Extreme Programming process is one of the more controversial processes.

The Cleanroom approach can be regarded as a process as well as a methodology. As a process, it is well defined. As a methodology, it can be used with other processes such as the waterfall and even object-oriented development. Since the early experimental projects in the late 1980s, the Cleanroom approach has seen increased use in recent years.

The defect prevention process is aimed at improving the development process. When integrated with the development process, it facilitates process maturity because it enables the process to fine-tune itself through closed-loop learning. It can be applied to software development as well as to other industries.

Whereas the process models deal with software development, the SEI and SPR maturity models deal with the maturity of the organization's development process, regardless of the process model being used. They entail defining a set of ideal criteria and measuring an organization's processes against these ideals. This concept has become very popular in the last decade and provides a mechanism for comparing companies with regard to process. The Malcolm Baldrige assessment and ISO 9000 are bodies of quality standards with an even broader scope; they pertain to the quality assurance management system at the company level, regardless of industry. In sum, the specific development process being used, the maturity level of that process, and the company's quality management system are all important factors that affect the quality of a software project.

In the next chapter we focus on some aspects of measurement theory that will set the stage for our discussions of software metrics.
