Chapter 16. Conducting Software Project Assessments

In this chapter we discuss how to conduct software project assessments. [1] The scope of a project assessment includes the end-to-end methodologies for the development and management of the project. Development methodologies cover the development process from requirements and specifications, to design, code, integration and driver build, testing, and early customer programs (when applicable), as well as the tools and development environment, sizing and schedule development, dependency management, and overall project management. The level and scope of concern here differ from those of the last chapter, in which the focus was to determine a project’s in-process quality status and whether it is on track to achieve its quality objectives. Here we are concerned with overall development effectiveness and efficiency, and with improvement opportunities. In-process quality assessments are key activities of the project quality management effort and are conducted by members of the project team. Project assessments are invariably conducted by people external to the project team. A project assessment can be conducted when the project is under development or when it is complete. Usually, a project assessment is triggered by some unfavorable field results, with the intent to improve a follow-on release or another project by the same team.

Software assessments, which originated from the need for improvement in the software industry, started in the 1960s and 1970s as informal assessments. Since then, software assessments have evolved into formal software process assessments based on the maturity concept and standard questionnaires. In recent years, they have become a fast-growing subindustry. Arguably, the standard assessment approaches are all geared toward the organization as the unit of assessment. At the same time, in software development organizations, managers continue to ask the original question: How do I improve my project, or how do I perform my next project better? Conducting project assessments for their organization is still the bona fide responsibility of software quality professionals. Peer reviews of projects are also common in software development organizations. The assessment unit of these project assessments, by definition, is the project. These kinds of project assessments, not organizational-level process assessments, are the subject of this chapter. The interest is in assessing a specific project for immediate improvement actions, or a small number of projects to identify best practices. We propose a systematic approach for conducting software project assessments.

Audit and Assessment

It is important to recognize the difference between an audit and an assessment (Zahran, 1997). The IEEE’s definition (IEEE-STD-610) of an audit is as follows:

An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria.

According to ISO documents (ISO 9000-3), the concepts of certification and audit are defined as follows:

Certification, or third-party assessment (referred to as registration in some countries), is carried out by an independent organization against a particular standard.

The outcome of an audit is binary: in compliance or not in compliance, pass or fail. Humphrey’s view is that “a software process assessment is not an audit but a review of a software organization to advise its management and professionals on how they can improve their operation” (Humphrey, 1989, p. 149). Zahran (1997, p. 149) provides a comprehensive definition of a software process assessment and its objectives according to the maturity framework:

A software process assessment is a disciplined examination of the software processes used by an organization, based on a process model. The objective is to determine the maturity level of those processes, as measured against a process improvement road map. The result should identify and characterize current practices, identifying areas of strengths and weaknesses, and the ability of current practices to control or avoid significant causes of poor (software) quality, cost, and schedule. The assessment findings can also be used as indicators of the capability of those processes to achieve the quality, cost, and schedule goals of software development with a high degree of predictability. (p. 149)

Depending on who plays the key role in an assessment, a software assessment (or audit) can be a self-assessment (or first-party assessment), a second-party assessment, or a third-party assessment. A self-assessment is performed internally by an organization’s own personnel. A second-party or third-party assessment is performed by an external party. The assessing party can be the second party (e.g., a company hires an external assessment team, or a company is being assessed by a customer) or the third party (e.g., a supplier is being assessed by a third party to verify its ability to enter contracts with a customer).

In the SEI (Software Engineering Institute at Carnegie Mellon University) terminology, a distinction is made between software process assessments and software capability evaluations because the two differ in motivation, objective, outcome, and ownership of the results. Software capability evaluations are used by the Department of Defense (DoD) and other major customers for selection and monitoring of software contractors or for assessing the risks associated with the procurement of a given product. The results are known to DoD or the initiator of the evaluation, and no member of the organization being evaluated is on the evaluation team. They are conducted in a more audit-oriented environment. Software process assessments, in contrast, are performed in an open, collaborative environment. They are for the use of the organization to improve its software process, and results are confidential to the organization. The organization being assessed must have members on the assessment team (Zahran, 1997). With the move to the Standard CMMI Appraisal Method for Process Improvement (SCAMPI) by the SEI, this distinction is going away (Software Engineering Institute, 2000). The same assessment method will be used both for internal improvement and external source selection.

Software Process Maturity Assessment and Software Project Assessment

The scope of a software process assessment can cover all processes in the organization, a selected subset of the software processes, or a specific project. For most process assessments that are based on the maturity concept, the target unit of analysis and rating is normally at the organizational level. In fact, most of the standard-based process assessment approaches are invariably based on the concept of process maturity. This is the case for the SEI (Software Engineering Institute at Carnegie Mellon University) capability maturity model (CMM), the SPR (Software Productivity Research, Inc.) approach, the Trillium model (a CMM-based model developed by a consortium of telecommunications companies, headed by Bell Canada), as well as the recently developed BOOTSTRAP methodology (the result of a European Community project) and the ISO/IEC 15504 draft standard (Zahran, 1997).

When the assessment target is the organization, the results of a process assessment may differ, even on successive applications of the same method. Paulk and colleagues (1995) explain the two reasons for the different results. First, the organization being investigated must be determined. For a large company, several definitions of organization are possible and therefore the actual scope of appraisal may differ in successive assessments. Second, even in what appears to be the same organization, the sample of projects selected to represent the organization may affect the scope and outcome. This project sampling effect can be substantial for large organizations with a variety of projects.

When the target unit of assessment is at the project level, the potential problems associated with organizational-level assessments just discussed are not relevant. The ambiguities and vagueness with regard to assessment results do not exist. Furthermore, some process dimensions of a standard process assessment method may not apply to a specific project. On the other hand, a software project assessment should include all meaningful factors that contribute to the success or failure of the project. It should not be limited by established dimensions of a given process maturity model. One should assess not only the processes of the project, but also the degree of implementation and their effectiveness as substantiated by project data. Project assessments address “hows” and “whys” with sufficient depth, in addition to the “whats.” Therefore, exploratory and in-depth probing are key characteristics of software project assessments. In this regard, the standard questionnaires used by the maturity assessment models may not be sufficient. It is well known that the standard questionnaires address the “whats” but not the “hows” by design so that each organization can optimize its own approach to process maturity. Because of this inherent limitation of standard questionnaires, standard-based process assessment models also rely on other data gathering methods such as document reviews and extensive interviewing.

In addition to the difference in the unit of analysis, the very concept of process maturity may not be applicable at the project level. What matters is the success or failure of the project, as measured by field performance and by development effectiveness and efficiency. If a project achieves measurable improvement, whether a certain set of process activities is practiced, or a certain maturity level is achieved, is not relevant. If a project fails, the remedial actions have to aim directly at the causes of failure. Process maturity becomes relevant when an organization intends to embark on an overall long-term improvement strategy. Even then, the additional value derived from the implementation of additional process elements needs to be monitored and verified at the project level.

Software project assessments, informal or formal, must be independent assessments in order to be objective. The assessment team may be in the same organization but must be under a different management chain from the project team. It may come from a different division of the company, it could be an external team, or it could be a combination of internal personnel and external consultants.

The necessity of and demand for project assessments exist regardless of whether the organization is pursuing a long-term process maturity improvement strategy. Within an organization of a specific maturity level, there are always variations among projects with regard to the state of practices of development methodologies, how they are implemented and why, and their correlation with the project outcome. The two types of assessment can be complementary: the process maturity assessments for overall improvement strategy for the organization and specific project assessments to drive immediate and specific improvement actions at the project level. With customization and a shift in the assessment focus (unit of analysis), standard process assessment methods might be applied to project assessments. For small organizations with a few projects, the distinction between a process maturity assessment and a project assessment may be blurred.

Software Process Assessment Cycle

According to Paulk and colleagues (1995), the CMM-based assessment approach uses a six-step cycle. The first step is to select a team. The members of the team should be professionals knowledgeable in software engineering and management. In the second step, the representatives of the site to be appraised complete the standard process maturity questionnaire. Then the assessment team performs an analysis of the questionnaire responses and identifies areas that warrant further exploration according to the CMM key process areas. The fourth step is for the assessment team to conduct a site visit to gain an understanding of the software process followed by the site. At the end of the site visit comes step 5, when the assessment team produces a list of findings that identifies the strengths and weaknesses of the organization’s software process. Finally, the assessment team prepares a key process area (KPA) profile analysis and presents the results to the appropriate audience.

The SEI also developed and published the CMM-Based Appraisal for Internal Process Improvement (CBA IPI) (Dunaway and Masters, 1996). The data collected for CBA IPI is based on key process areas of the CMM as well as non-CMM issues. For an assessment to be considered a CBA IPI, the assessment must meet minimum requirements concerning (1) the assessment team, (2) the assessment plan, (3) data collection, (4) data validation, (5) the rating, and (6) the reporting of assessment results. For example, the assessment team must be led by an authorized SEI Lead Assessor. The team shall consist of between 4 and 10 team members. At least one team member must be from the organization being assessed, and all team members must complete the SEI’s Introduction to the CMM course (or its equivalent) and the SEI’s CBA IPI team training course. Team members must also meet some selection guidelines. With regard to data collection, the CBA IPI relies on four methods: the standard maturity questionnaire, individual and group interviews, document reviews, and feedback from the review of the draft findings with the assessment participants.

The Standard CMMI Assessment Method for Process Improvement (SCAMPI) was developed to satisfy the CMMI model requirements (Software Engineering Institute, 2000). It is also based on the CBA IPI. Both the CBA IPI and the SCAMPI consist of three phases: plan and preparation, conducting the assessment onsite, and reporting results. The activities for the plan and preparation phase include:

  • Identify assessment scope.

  • Develop the assessment plan.

  • Prepare and train the assessment team.

  • Brief the assessment participants.

  • Administer the CMMI Appraisal Questionnaire.

  • Examine questionnaire responses.

  • Conduct initial document review.

The activities for the onsite assessment phase include:

  • Conduct an opening meeting.

  • Conduct interviews.

  • Consolidate information.

  • Prepare presentation of draft findings.

  • Present draft findings.

  • Consolidate, rate, and prepare final findings.

The activities of the reporting results phase include:

  • Present final findings.

  • Conduct executive session.

  • Wrap up the assessment.

The descriptions of the CBA IPI and the SCAMPI assessment cycles are more elaborate, but their resemblance to the assessment approach outlined by Paulk and colleagues in 1995 remains obvious.

The SPR assessment process involves similar steps (Jones, 1994). The initial step is an assessment kickoff session (1), followed by project data collection (2), and then individual project analysis (3). A parallel track is to conduct management interviews (4). The two tracks then merge for benchmark comparison, aggregate analysis, and interpretation (5). The final phase is measurement report and improvement opportunities (6). Data collection and interviews are based on the structured SPR assessment questionnaire. The SPR assessment approach uses multiple models and does not assume the same process steps and activities for all types of software.

Table 16.1. Zahran’s Generic Phases and Main Activities of Software Process Assessment

From Software Process Improvement, Practical Guidelines for Business Success, by Sami Zahran (Table 8.3, p. 161). © 1998 Addison-Wesley Longman. Reprinted by permission of Pearson Education, Inc.

Phase: Preassessment
  Sub-phase: Preplanning
  Main activities:
    - Understanding of business context and justification, objectives, and constraints
    - Securing sponsorship and commitment

Phase: Assessment
  Sub-phase: Planning
  Main activities:
    - Selection of assessment approach
    - Selection of improvement road map
    - Definition of assessment boundaries
    - Selection of assessment team
    - Launching the assessment
    - Training the assessment team
    - Planning fact gathering, fact analysis, and reporting activities

  Sub-phase: Fact gathering
  Main activities:
    - Selecting a fact-gathering approach (e.g., questionnaire, interviews, and group discussion)
    - Defining the target interviewees
    - Distributing and collecting questionnaire responses
    - Conducting the interviews

  Sub-phase: Fact analysis
  Main activities:
    - Analysis of questionnaire responses
    - Analysis of facts gathered in the interviews
    - Analysis of the evidence gathered
    - Collective analysis of the data gathered
    - Calibration of the findings against the road map
    - Identifying strengths and weaknesses and areas of improvement

  Sub-phase: Reporting
  Main activities:
    - Documenting the findings: strengths and weaknesses
    - Documenting the recommendations

Phase: Postassessment
  Sub-phase: Action plan for process improvement
  Main activities:
    - Implementing the process improvement actions
    - Managing and monitoring the process improvement plan

While each assessment approach has its unique characteristics, a common schema should apply to all. Zahran (1997) developed a generic cycle of process assessment that includes four phases: planning, fact finding, fact analysis, and reporting. Besides the assessment cycle per se, Zahran’s generic cycle also includes a preassessment (preplanning) phase and a postassessment (process improvement plan) phase. The main activities of the phases are shown in Table 16.1.

The generic phases and the main activities within each phase serve as a useful overall framework for assessment projects. Zahran also successfully mapped the current process assessment approaches into this framework, including the CMM, the Trillium model, the BOOTSTRAP methodology, and the ISO/IEC 15504 draft standard for software process assessment. In the next sections when we discuss our method for software project assessments, we will refer to the main activities in this framework as appropriate.

A Proposed Software Project Assessment Method

We propose a software project assessment method as shown in Figure 16.1, which is based on our project assessment experience over many years. We will discuss each phase in some detail, but first we note several characteristics of this method that differ from other assessment approaches:


Figure 16.1. A Proposed Software Project Assessment Method

  • It is project based.

  • There are two phases of facts gathering, and the first of them, a complete project review, precedes other methods of data collection. Because the focus is on the project, it is important to understand the complete project history and end-to-end processes from the project team’s perspective before imposing a questionnaire on the team.

  • This method does not rely on a standard questionnaire. There may be a questionnaire in place from previous assessments, or some repository of pertinent questions maintained by the process group of the organization. There may be no questionnaire in place and an initial set of questions needs to be developed in the preparation phase. In either case, customization of the questionnaire after a complete project review is crucial so that each and every question is relevant.

  • Observations, analysis, and possible recommendations are part of an ongoing process that begins at the start of the assessment project. With each additional phase and input, the ongoing analysis and observations are refuted, confirmed, or refined. This is an iterative process.

  • The direct input by the project team/development team with regard to strengths and weaknesses and recommendations for improvement is important, as reflected in steps 3 through 6, although the final assessment still rests on the assessment team’s shoulders.

Preparation Phase

The preparation phase includes all planning and preparation. For an assessment team external to the organization whose project is to be assessed, a good understanding of the business context and justification, objectives, and commitment is important. Because most project assessments are done by personnel within the organization, or from a separate division of the company, this step is often not needed. In this phase, a request for basic project data should be made. General information on the type of software, size, functions, field performance, skills and experience of the development team, organizational structure, development process, and language used is important for the assessment team to start formulating a frame of reference for the assessment. The data do not have to be specific and precise, and should be readily available from the project team. If there is no questionnaire from a previous similar project, the assessment team should start developing a battery of questions based on the basic project data. These questions can be revised and finalized when facts gathering phase 1 is completed.

For overall planning, we recommend the assessment be run as a project with all applicable project management practices. It is important to put in place a project plan that covers all key phases and activities of the assessment. For internal assessments, a very important practice at the preparation phase is to obtain a project charter from the sponsor and commitment from the management team of the project being assessed. The project charter establishes the scope of the assessment and the authority of the assessment team. It should be one page or shorter and probably is best drafted by the assessment leader and signed and communicated by the sponsor executive.

Another easily neglected activity in the preparation phase is a project closeout plan. Good project management calls for planning for the closeout on day 1 of the project. A closeout plan in this case may include the kind of reports or presentations that will be delivered by the assessment team, the audience, and the format.

Facts Gathering Phase 1

The first phase of facts gathering involves a detailed review of all aspects of the project from the project team’s perspective. The format of this phase may be a series of descriptions or presentations by the project team. The assessment team’s request for information should cover at least the following areas:

  • Project description and basic project data (size, functions, schedule, key dates and milestones)

  • Project and development team information (team size, skills, and experience)

  • Project progress, development timeline, and project deliverables

  • End-to-end development process from requirements to testing to product ship

  • Sizing and schedule development, staffing

  • Development environment and library system

  • Tools and specific methodologies

  • Project outcome or current project status

  • Use of metrics, quantitative data, and indicators

  • Project management practices

  • Any aspects of the project that the project team deems important

The assessment team’s role in this phase is to gather as much information as possible and gain a good understanding of the project. Therefore, the members should be in a listening mode and should not ask questions that may mislead the project team. Establishing the whats and hows of the project is the top priority, and sometimes it is necessary to get into the whys via probing techniques. For example, the project may have implemented a joint test phase between the development group and the independent test team to improve the test effectiveness of the project and to make sure that the project meets the entry criteria of the system verification test (SVT) phase. This is a “what” of the actual project practices. The joint test phase was implemented at the end of the functional verification test (FVT) and before SVT start, during the SVT acceptance test activities. Independent test team members and developers were paired for major component areas. The testing targeted possible gaps between the FVT and SVT plans. The testing environment was a network of testing systems maintained by the independent group for SVT. To increase the chances for latent defects to surface, the test systems were stressed by running a set of performance workloads in the background. This test phase was implemented because meeting SVT entrance criteria on time had been a problem in the past, and because of the formal hand-off between FVT and SVT, it was felt that there was room for improvement in the communication between developers and independent testers. The project team believed that this joint test practice contributed significantly to the success of the project: a number of additional defects were found before SVT (as supported by metrics), SVT entrance criteria were met on time, the testers and developers learned from each other and improved their communications as a result, and the test added only minimal time to the testing schedule, so it did not negatively affect the project completion date. These are the “hows” and “whys” of the actual implementation. During the project review, the project team may describe this practice briefly. It is up to the assessment team to ask the right questions to get the details with regard to the hows and whys.

At the end of a project review, critical success factors or major reasons for failure should be discussed. These factors may also include sociological factors of software development, which are important (Curtis et al., 2001; DeMarco and Lister, 1999; Jones, 1994, 2000). Throughout the review process, the assessment team’s detailed note taking is important. If the assessment team consists of more than one person and the project review lasts more than one day, discussions and exchanges of thoughts among the assessment team members are always a good practice.

Questionnaire Customization and Finalization

Now that the assessment team has gained a good understanding of the project, the next step is to customize and finalize the questionnaire for formal data collection. The assumption here is that a questionnaire is in place. It may be from a previous assessment project, developed over time from the assessment team’s experience, from a repository of questions maintained by the software engineering process group (SEPG) of the organization, or from a prior customization of a standard questionnaire of a software process assessment method. If this is not the case, then initial questionnaire construction should be a major activity in the preparation phase, as previously mentioned.

Note that in peer reviews (versus an assessment that is chartered by an executive sponsor), a formal questionnaire is not always used.

There are several important considerations in the construction and finalization of a questionnaire. First, if the questionnaire is a customized version of a standard questionnaire from one of the formal process assessment methods (e.g., CMM, SPR, or the ISO software process assessment guidelines), it must be able to elicit more specific information. The standard questionnaires related to process maturity assessment are usually at a higher level than is desirable at the project level. For example, the following are the first three questions of the Peer Reviews key process area (KPA) in the CMM maturity questionnaire (Zubrow et al., 1994).

  1. Are peer reviews planned? (Yes, No, Does Not Apply, Don’t Know)

  2. Are actions associated with defects that are identified during peer reviews tracked until they are resolved?

  3. Does the project follow a written organizational policy for performing peer reviews?

The following two questions related to peer design reviews were used in some project assessments we conducted:

  1. What is the most common form of design reviews for this project?

    • Formal review meeting with moderators, reviewers, and defect tracking, with issue resolution and rework completion as part of the completion criteria

    • Formal review but issue resolution is up to the owner

    • Informal review by experts of related areas

    • Codeveloper (codesigner) informal review

    • Other ..... please specify

  2. To what extent were design reviews of the project conducted? (Please mark the appropriate cell in each row of the table.)

     Rows: Original design; Design changes/rework

     Columns (extent of design reviews): All design work done rigorously; All major pieces of design items; Selected items based on criteria (e.g., error recovery); Design reviews were occasionally done; Not done

The differences between the two sets of questions are obvious: one focuses on process maturity and organizational policy, and the other focuses on specific project practices and degree of execution.

Second, a major objective of a project assessment is to identify the gaps and therefore the opportunities for improvement. To elicit input from the project team, a vignette-question approach can be used in the questionnaire design with regard to the importance of activities in the development process. Specifically, each vignette includes a question on the project’s state of practice for a specific activity and another question on the project team’s assessment of the importance of that activity. The following three questions provide an example of this approach:

  1. Are there entry/exit criteria used for the independent system verification test phase?

    • If yes, (a) please provide a brief description.

    • (b) How are the criteria used and enforced?

  2. Per your experience and assessment, how important is this practice (entry/exit criteria for SVT) to the success of the project?

    • Very important

    • Important

    • Somewhat important

    • Not sure

  3. If your assessment in question 2 is “very important” or “important” and your project’s actual practice did not match the level of importance, what were the reasons for the disparity (e.g., obstacles, constraints, process, culture)? Please explain.

Third, it is wise to ask for the team’s direct input on the strengths and weaknesses of the project’s practices in each major area of the development process (e.g., design, code, test, and project management). The following are two common questions we used in every questionnaire at the end of each section of questions. Caution: These questions should not be asked before the description of the overall project practices is completed (by the project team) and understood (by the assessment team); otherwise the project team will be led prematurely into a self-evaluative mode. In this assessment method, there are two phases of facts gathering, and asking these questions in the second phase is part of the design of the method.

  1. Is there any practice(s) by your project with regard to testing that you consider to be a strength and that should be considered for implementation by other projects? If so, please describe and explain.

  2. If you were to do this project all over again, what would you do differently with regard to testing and why?

The Appendix in this book shows a questionnaire that we have used as a base for customization for many software project assessments.

Facts Gathering Phase 2

In this phase, the questionnaire is administered to the project team, including development managers, the project manager, and technical leads. The respondents complete the questionnaire separately. Not all sections of the questionnaire apply to all respondents. The responses are then analyzed by the assessment team and validated via a session with the project team. Conflicts among the respondents’ answers, and between information from the project review and the questionnaire responses, should be discussed and resolved. The assessment team can also take up any topic for further probing. In the second half of the session, a brainstorming session on strengths and weaknesses, what was done right, what was done wrong, and what the project team would have done differently is highly recommended.
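Although the method prescribes no particular tooling, consolidating the responses before the validation session makes the conflicts easy to see. The following minimal sketch, in Python, shows one way to do this; the data layout, question identifiers, and sample answers are illustrative assumptions, not part of the method itself.

    # A minimal sketch of consolidating questionnaire responses and flagging
    # conflicting answers for the validation session. The layout (respondent ->
    # question id -> answer) and the example entries are illustrative only.

    def flag_conflicts(responses):
        """Return the question ids whose answers differ across respondents."""
        by_question = {}
        for respondent, answers in responses.items():
            for qid, answer in answers.items():
                if answer is not None:  # None marks a section that did not apply
                    by_question.setdefault(qid, {})[respondent] = answer
        return {qid: per_respondent
                for qid, per_respondent in by_question.items()
                if len(set(per_respondent.values())) > 1}

    responses = {
        "Project manager":  {"design_reviews": "Major pieces", "unit_test": "Yes"},
        "Development lead": {"design_reviews": "Occasionally", "unit_test": "Yes"},
        "Test lead":        {"design_reviews": "Major pieces", "unit_test": None},
    }

    for qid, answers in flag_conflicts(responses).items():
        print(qid, answers)   # topics to resolve with the project team

With the sample data, only the design reviews question is flagged, because the development lead’s answer differs from the other two; such items become the agenda for the validation session.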

Possible Improvement Opportunities and Recommendations

As Figure 16.1 depicts, this phase runs parallel with other phases, starting from the preparation phase and ending with the final assessment report phase. Assessing the project’s strengths and weaknesses and providing recommendations for improvement constitute the purpose of a project assessment. Because the quality of the observations and recommendations is critical, this activity should not be done mechanically. To accomplish this important task, the assessment team can draw on three sources of information:

  1. Findings from the literature. For example, Jones (1994, 1995, 2000) provides an excellent summary of the risks and pitfalls of software projects, and of benchmarks and best practices by type of software, based on SPR’s experience in software assessments. Another example is the assessment results from the CMM-based assessments published by the SEI. Familiarity with the frameworks and findings in the assessment literature broadens the assessment team’s frame of reference for developing recommendations. Of course, findings from the literature cannot be used as recommendations unless they are pertinent to the project being assessed. They are valuable references, however, because they help maintain the big-picture view while the team combs through the tremendous amount of project-specific information. Based on Jones’s findings (2002), the most significant factors associated with success and failure are the following:

    • Successful projects

      • Effective project planning

      • Effective project cost estimating

      • Effective project measurements

      • Effective project milestone tracking

      • Effective project quality control

      • Effective project change management

      • Effective development processes

      • Effective communications

      • Capable project managers

      • Capable technical personnel

      • Significant use of specialists

      • Substantial volume of reusable materials

    • Failing projects

      • Inadequate project planning

      • Inadequate cost estimating

      • Inadequate measurements

      • Inadequate milestone tracking

      • Inadequate quality control

      • Ineffective change control

      • Ineffective development processes

      • Ineffective communications

      • Ineffective project managers

      • Inexperienced technical personnel

      • Generalists rather than specialists

      • Little or no reuse of technical material

  2. Experience. The assessment team’s experience and findings from previous project assessments are essential. What works and what doesn’t, for what types of projects, and under what kind of environment, organization, and culture? This experience-based knowledge is extremely valuable, and it is especially important for internal project assessments. When sufficient findings from various projects in one organization are accumulated, patterns of success and failure may emerge. This is related to the concept of the experience factory discussed by Basili (1995).

  3. Direct input from the project team. As discussed earlier in the chapter, we recommend placing direct questions in the questionnaire on strengths and weaknesses, and on what the team would do differently. Direct input from the team provides an insider’s view and at the same time implies feasibility of the suggested improvement opportunities. Many consultants and assessors know well that in-depth observations and good recommendations often come from the project team itself. Of course, the assessment team must evaluate the input and decide whether, and what part of, the project team’s input will become their recommendations.

Jones’s findings highlight the importance of good project management and project management methods such as estimating, measurements, and tracking and control. Sizing and schedule development without the support of metrics and experience from previous projects can lead to overcommitment and therefore project failure. This can happen even in well-established development organizations.

When developing improvement recommendations, feasibility of implementation in the organization’s environment should be considered. In this regard, the assessment team should think like the project manager, the software development managers, and the team leaders. At the same time, recommendations for strategic improvements that may pose challenges to the team should not be overlooked. In this regard, the assessment team should think like a strategist for the organization. In other words, there will be two sets of recommendations, and the assessment team will wear two hats when developing them. In either case, recommendations should be based on facts and analysis and should never be generic, checklist-type items.

As an example, the following segment of recommendations is extracted from a project assessment report that we were involved with. This segment addresses the intersite communication of a cross-site development project.

It was not apparent whether there were trust issues between the two development teams. However, given that the two teams had never worked together on the same project and the different environments at the two sites, there was bound to be at least some level of unfamiliarity, if not a lack of trust. The following techniques could be employed to address these issues.

  • Nothing can replace face-to-face interaction. If the budget allows it, make time for the Leadership Team (managers, technical leads, project leads) to get together for face-to-face interaction. A quarterly “Leadership Summit” can be useful to review the results of the last 90 days and establish goals for the next 90 days. The social interaction during and after these meetings is as important as the technical content of meetings.

  • A close alternative to travel and face-to-face meetings is the use of video conferencing. Find a way to make video conference equipment available to your teams and encourage its use. While it is quite easy to be distracted with mail or other duties while on a teleconference call, it is much more difficult to get away with it on a video conference. Seeing one another as points are raised in a meeting allows for a measure of nonverbal communication to take place. In addition, the cameras can be focused on the white boards to hold chalk talks or discuss design issues.

  • A simple thing that can be done to enhance cross-site communications is to place a picture board at each site with pictures of all team members.

  • Stress the importance of a single, cross-site team in all area communications and make sure the Leadership Team has completely bought into and is promoting this concept. Ensure processes, decisions, and communications are based on technical merit without unwarranted division by site. One simple example is that the use of site-specific distribution lists may promote communications divided between the sites. We recommend the leadership team work to abolish use of these lists and establish distribution lists based on the needs and tasks of the project.

Team Discussions of Assessment Results and Recommendations

In this phase, the assessment team discusses its findings and draft recommendations with the project team and obtains its feedback before the final report is completed. The two teams may not be in total agreement, and it is important that the assessment team make this clear before the session takes place. Nonetheless, this phase is important because it serves as a validation mechanism and increases the buy-in of the project team.

Assessment Report

The format of the report may vary (a summary presentation, a final written report, or both) but a formal meeting to present the report is highly recommended. With regard to content, at least the following topics should be covered:

  • Project information and basic project data

  • The assessment approach

  • Brief descriptions and observations of the project’s practices (development process, project management, etc.)

  • Strengths and weaknesses, and if appropriate, gap analysis

  • Critical success factors or major project pitfalls

  • What the project team would do differently (improvements from the project team’s perspective)

  • Recommendations

Tables 16.2 through 16.4 show report topics for three real assessed projects so you can relate the outcomes to the points discussed earlier (for example, those on questionnaire construction). All three assessments were based on similar versions of the questionnaire in the Appendix. Project X is the software that supports the service processor of a computer system, Project Y is the microcode that supports a new hardware processor of a server, and Project Z is the software system that supports a family of disk storage subsystem products. The subsystem integrates hundreds of disk drives through a storage controller that provides redundant arrays of inexpensive disks (RAID), disk caching, device emulation, and host attachment functions.

Table 16.2 shows the basic project data for the three projects. Table 16.3 summarizes the state of practices of key development and project management activities throughout the development cycle, and the project team’s self-assessment of the importance of the specific practices. For most cells, the vignette of ‘state of practice’ and ‘importance assessment’ is shown. A gap exists where the importance assessment is Important or Very Important but the state of practice is No, Seldom, Occasionally, or Not Done.

The most gaps were identified for Project X, in the areas of requirements reviews, specifications, design documents in place to guide implementation, design reviews, effective sizing and bottom-up schedule development, major checkpoint reviews, staging and code drop plans, and in-process metrics.

For Project Y, gaps were in the areas of design documents, design reviews, effective sizing and bottom-up schedule development, and in-process metrics. For Project Z, the gaps were in the project management areas, such as sizing and bottom-up schedule development as they related to the planning process and project assumptions. Development/unit test was also identified as a key gap, related to the Cleanroom software process used for the project. The Cleanroom software process focuses on specifications, design and design verification, and mathematical proof of program correctness. For testing, it focuses on statistical testing as it relates to customers’ operations profiles (Mills et al., 1987). However, the process does not focus on program debugging and does not include a development unit test phase (i.e., it goes from coding directly to independent test). Furthermore, questions about its scalability (e.g., to large and complex projects with many interdependencies) and the feasibility of implementing customer operations profiles were raised by critics of the process. For Project Z, the use of this development process was a top-down decision, and faulty productivity and quality assumptions related to this process were used in schedule development.

Table 16.4 shows the project team’s improvement plan as a result of the iterative emphasis of the assessment method. Note that the project team’s own improvement ideas or plan are separate from the final recommendations by the assessment team, although the latter can include or reference the former.

Table 16.2. Basic Project Data for Projects X, Y, and Z

Item | Project X | Project Y | Project Z
Size (KLOC): total | 228 | 4000 | 1625
Size (KLOC): new and changed | 78 | 100 | 690
Team size | 10.5 | 35 | 225
Development cycle time (mo): design to GA | 30 | 18 | 38
Development cycle time (mo): design to development test complete | 23 | 17 | 29
Team experience | Inexperienced (70% <2 yr) | Very experienced (70% >5 yr, some >15 yr) | Very experienced (80% >5 yr)
Cross-site development | Y | Y | Y
Cross-product brand development | Y | N | Y
Development environment/library | CMVC | PDL, Team Connect | AIX CMVC DEV2000
Project complexity (self-rated, 10 points) | 7 | 8 | 10

Table 16.3. State of Practice, Importance Assessment, and Gap Analysis for Projects X, Y, and Z

Project Activity | Project X | Project Y | Project Z
Requirements reviews | Seldom; Very important | Always; Very important | Always; Very important
Develop specifications | Seldom; Very important | Usually; Very important | Always; Very important
Design documents in place | No; Very important | No; Very important | Yes; Very important
Design reviews | Not done; Very important | Occasionally; Very important | Major pieces; Very important
Coding standard/guidelines | No | Yes | Yes
Unit test | Yes (ad hoc) | Yes | No
Simulation test/environment | No; Very important | Yes; Very important | No; Important
Process to address code integration quality and driver stability | Yes; Very important | Yes; Very important | Yes; Very important
Driver build interval | Weekly, then biweekly | Biweekly, with fix support | 1–3 days
Entry/exit criteria for independent test | Yes; Very important | Yes; Very important | Yes; Important
Change control process for fix integration | Yes; Very important | Yes; Very important | Yes; Important
Microcode project manager in place | Yes (midway through project) | No | No
Role of effective project management | Very important | Important | Important
Effective sizing and bottom-up schedule development | No; Very important | No; Important | No; Very important
Staging and code drop plans | No; Very important | Yes; Very important | Yes; Important
Major checkpoint reviews | No; Very important | No; Somewhat important | Yes; Very important
In-process metrics | No (started midway); Very important | No; Very important | Yes; Somewhat important

(Where two entries appear in a cell, the first is the state of practice and the second is the project team’s importance assessment.)

Table 16.4. Project Teams’ Improvement Plans Resulting from Assessment of Projects X, Y, and Z

Project X

Requirements and specifications: Freeze external requirements by a specific date. Create an overall specifications document before heading into the design phase for the components. Force requirements and specifications ownership early in the development cycle.

Design, code, and reviews: Eliminate most or all shortcuts in design and code to get to and through bring-up. The code that goes into bring-up is the basis for shipping to customers, and many times it is not the base that was desired to be working on. Establish project-specific development milestones and work to those instead of only the higher-level system milestones.

Code integration and driver build: Increase focus on unit test for code integration quality; document unit test plans.

Test: Establish entry/exit criteria for test and then adhere to them.

Project management (planning, schedule, dependency management, metrics): Staff a project manager from the beginning. Map dependency management to a specific site, and minimize cross-site dependencies.

Tools and methodologies: Use a more industry-standard toolset. Deploy the mobile toolset recently available on Thinkpad. Establish a skills workgroup to address the skills and education of the team.

Project Y

Requirements and specifications: Focus more on design flexibility and considerations that could deal with changing requirements.

Design, code, and reviews: Conduct a more detailed design review for changes to the system structure.

Test: Improve communications between test groups; make sure there is sufficient understanding among the test groups that are in different locations.

Project management (planning, schedule, dependency management, metrics): Implement microcode project manager(s) for coordination of deliverables in combination with hardware deliverables.

Tools and methodologies: Force the parallel development of test enhancements for regression testing whenever new functions are developed.

Project Z

Requirements and specifications: Link requirements and specifications with the schedule, and review schedule assumptions.

Design, code, and reviews: Document what types of white box testing need to be done to verify design points.

Project management (planning, schedule, dependency management, metrics): Maintain a strong focus on project management, scheduling, staffing, and commitments. Periodically review schedule assumptions and assess the impact as assumptions become invalid.

Summary

Table 16.6 summarizes the essential points discussed under each phase of the proposed software project assessment method.

Table 16.6. Essential Activities and Considerations by Phase of a Proposed Software Project Assessment Method

(1) Preparation
- Gain an understanding of business context and justification, objectives, and constraints (as appropriate).
- Establish the assessment project plan.
- Establish the project charter and secure commitment.
- Request basic project data.
- Develop an initial set of questions based on information available thus far, or use an existing questionnaire.
- Establish the assessment project closeout plan.

(2) Facts gathering — phase 1
- Conduct a detailed project review from the project team’s perspective.
- Focus on whats and hows; at times on whys via probing.
- Formulate ideas at the end of the project review.

(3) Possible improvement opportunities and recommendations (ongoing, in parallel with the other phases)
- Review findings and improvement frameworks in the literature.

(4) Questionnaire customization
- Customize the questionnaire to the project being assessed.
- Use the vignette-question approach for gap analysis.
- Define strengths and weaknesses.
- Gather the team’s improvement ideas.
- Design the questionnaire to include questions on improvements from the project team.

(5) Facts gathering — phase 2
- Administer the questionnaire to project personnel.
- Validate responses.
- Triangulate across respondents and with information gathered from phase 1.
- Brainstorm project strengths and weaknesses; ask what the project team would have done differently.
- Formulate the whole list of recommendations: actions for immediate improvements and for strategic directions.

(6) Team discussions and feedback
- Review assessment results and draft recommendations with the project team.
- Finalize recommendations.

(7) Reporting and closeout
- Complete the final report, including recommendations.
- Meet with the assessment executive sponsor and the management of the assessed project.


Software project assessments are different from software process assessments that are based on the process maturity framework. While process maturity assessments are more applicable to the organization level and are important for long-term improvement, specific project assessments are crucial for driving experience-based improvement. Process maturity assessments focus on the whats: the coverage and maturity level of key process activities and practices. Project assessments focus on the whats, hows, and whys of the specific practices of the target project(s) and address the issue of improvement from the development team’s perspective. The two approaches are complementary.

A seven-step project assessment method, which was derived from experience, is proposed. Effective assessment and quality recommendations depend on three factors: the assessment approach; the experience and quality of the assessment team; and the quality, depth, and breadth of project data and information. The proposed method does not rely on a standard questionnaire to collect data and responses, or use a maturity framework to derive recommendations. The two-phase approach for fact finding in the method enhances the accuracy and depth of relevant project information and data. This method stresses the importance of quality professionals or peer development leaders who have deep understanding of the organization’s cultures and practices. It puts the project team back in the assessment process and therefore enhances the relevance and buy-in of the recommendations.

References



[1] An earlier version of this chapter was presented by Stephen H. Kan and Diane Manlove at the Tenth International Conference on Practical Software Quality Techniques (PSQT 2002 North), St. Paul, Minnesota, September 10–11, 2002.
