Chapter 17. Dos and Don’ts of Software Process Improvement

Patrick O’Toole

This chapter provides an overview of CMMI-based process maturity measures and describes how maturity and capability levels are determined. It offers insight into practices that may heighten the probability of success for an improvement program, and into others that may lead to a less desirable outcome. [1]

Many organizations perceive a correlation between the quality of their software processes and the quality of their resulting products and services. This holds true for software development organizations and the products they produce, as well as for software support organizations and the software they maintain. A number of these organizations use a model-based approach, such as the Capability Maturity Model for Software (CMM) or Capability Maturity Model Integration (CMMI), to guide and measure their process improvement efforts. [2]

Unfortunately, many well-intentioned organizations fail to achieve their stated improvement goals. In many organizations, the goal is stated in terms of attaining a CMMI level, rather than in terms linked directly to project performance. Process maturity is a laudable goal—provided it leads to improved project performance aligned with the organization’s business objectives.

Measuring Process Maturity

Using the CMMI, an organization can plan and execute a process improvement strategy based on industry best practices and a proven approach. The staged representation of the CMMI provides five levels of process maturity:

  • Maturity level 1: Initial

  • Maturity level 2: Managed

  • Maturity level 3: Defined

  • Maturity level 4: Quantitatively Managed

  • Maturity level 5: Optimizing

In the staged representation, maturity levels 2 and higher consist of a series of process areas that form evolutionary plateaus of process improvement and performance. For example, the maturity level 2 process areas are:

  1. Requirements Management

  2. Project Planning

  3. Project Monitoring and Control

  4. Supplier Agreement Management

  5. Measurement and Analysis

  6. Process and Product Quality Assurance

  7. Configuration Management

The determination of an organization’s maturity level is accomplished by conducting a formal assessment such as the CMM-Based Appraisal for Internal Process Improvement (CBA IPI), Software Capability Evaluation (SCE), or Standard CMMI Appraisal Method for Process Improvement.

The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is designed to provide benchmark quality ratings relative to Capability Maturity Model Integration (CMMI) models. It is applicable to a wide range of appraisal usage models, including both internal process improvement and external capability determinations. SCAMPI satisfies all of the Appraisal Requirements for CMMI (ARC) for a Class A appraisal method and can support the conduct of ISO/IEC 15504 assessments.

SCAMPI v1.1 enables a sponsor to do the following:

  1. Gain insight into an organization’s engineering capability by identifying the strengths and weaknesses of its current processes.

  2. Relate these strengths and weaknesses to the CMMI model.

  3. Prioritize improvement plans.

  4. Focus on improvements (correct weaknesses that generate risks) that are most beneficial to the organization given its current level of organizational maturity or process capabilities.

  5. Derive capability level ratings as well as a maturity level rating.

  6. Identify development/acquisition risks relative to capability/maturity determinations. [3]

When conducting a SCAMPI, a lead assessor authorized by the Software Engineering Institute (SEI) at Carnegie Mellon University works with a trained assessment team to gather and examine objective evidence and relate it to the CMMI. Data is gathered from questionnaires, organizational and project documents, interviews with organizational personnel, and presentations.

Based on the evidence, observations are written for each CMMI practice in the scope of the assessment. The ultimate objective for data gathering at the practice level is to characterize the extent to which each practice, or a satisfactory alternative practice, is implemented in the organization, and how well that implementation supports the associated process area goal. As data gathering continues, evidence is captured at the project or group level and then aggregated to the organizational level. After sufficient evidence has been obtained and evaluated, the assessment team characterizes organization-level implementation for each practice in the scope of the assessment according to this scale:

  1. Fully implemented

  2. Largely implemented

  3. Partially implemented

  4. Not implemented

After the assessment team has characterized the implementation status of each practice, preliminary findings are generated. The preliminary findings are presented to the interview participants so that the findings can be confirmed or countervailing evidence can be gathered. Separate presentation sessions may be conducted for managers, project managers, and practitioners in order to encourage open communication, and thereby gather data with higher integrity.

Presentations of the preliminary findings are typically the assessment’s last activity in the data-gathering phase prior to rating process area goals and determining the maturity level. Using the data gathered throughout the assessment, the team exercises professional judgment in determining whether the goals associated with each process area are satisfied. The goal ratings determine the organization’s maturity level. The organization’s maturity level is the highest level at which all goals are satisfied, and all goals at lower levels are also satisfied. In other words, in order for an organization to achieve CMMI maturity level 3, the organization must have satisfied all of the goals associated with the level 2 and level 3 process areas.
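The rating rule described above can be sketched in a few lines of Python. The goal data below is invented purely for illustration; in a real appraisal the ratings come from the team's professional judgment, not a script:

```python
# Illustrative sketch of the maturity-level rating rule: an organization is
# rated at the highest level L such that every goal at levels 2 through L
# is satisfied. Goal ratings here are hypothetical, not appraisal results.

def maturity_level(goal_ratings):
    """goal_ratings maps maturity level -> list of booleans,
    one per goal at that level (True = satisfied)."""
    level = 1  # every organization is at least level 1 (Initial)
    for candidate in sorted(goal_ratings):
        if all(goal_ratings[candidate]):
            level = candidate
        else:
            break  # an unsatisfied goal at a lower level caps the rating
    return level

# All seven level 2 process-area goals satisfied, but one level 3 goal is not:
ratings = {2: [True] * 7, 3: [True, True, False, True]}
print(maturity_level(ratings))  # 2
```

Note that the `break` captures the "all goals at lower levels are also satisfied" condition: satisfying the level 3 goals is worthless, rating-wise, if a level 2 goal remains unsatisfied.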

Measuring Process Capability

When measuring process capability, the CMMI “continuous representation” is used. In the continuous representation, each process area is assigned a capability level as follows:

  • Capability Level 0: Incomplete

  • Capability Level 1: Performed

  • Capability Level 2: Managed

  • Capability Level 3: Defined

  • Capability Level 4: Quantitatively Managed

  • Capability Level 5: Optimizing

In the CMMI continuous representation, capability level profiles are used to depict the list of process areas and their corresponding capability levels. Achievement profiles represent the current state and target profiles represent the desired state. The SCAMPI assessment method is also used for formal determination of an organization’s achievement capability level profile.

Staged versus Continuous—Debating Religion

The CMMI includes two representations, staged and continuous, due largely to the model’s legacy. The CMMI model is founded on source models that employed either the staged representation (software engineering) or the continuous representation (systems engineering). Rather than risk alienating members from either engineering discipline, the authors of the CMMI decided to provide both representations.

Early adopters of the CMMI tend to favor the representation inherited from their legacy model. The CMMI’s continuous representation has taken most software engineering groups out of their comfort zones. It’s new to them, it’s different from what they’re used to, and therefore they perceive it as wrong/bad/evil. Besides, it has a level 0 and everybody knows that real models start at level 1!

Not too terribly long after completing the Intro to CMMI course, a process improvement consultant found herself in the midst of a battle between a Software Engineering Process Group (SEPG) and its process improvement sponsor. The SEPG was very much in favor of the continuous representation, because the members perceived it gave them more flexibility in implementing improvements as well as a more granular means of planning and tracking their progress. The sponsor, who wanted to use the tried-and-true staged representation, brought the consultant in for one day to arbitrate a peaceful resolution to this lingering conflict.

After spending a few hours with the SEPG to gain a better understanding of that perspective, the consultant met individually with the sponsor. They talked for an hour before lunch, and it was pretty obvious that the sponsor was adamant about using the staged representation. Based on the consultant’s experience (as well as her personal comfort) with the CMM for Software, she tended to agree with the sponsor. Now all she had to do was figure out how to ease the SEPG into the “correct” way of thinking.

Over lunch, the sponsor was bragging about his daughter, gloating that she had achieved a 3.8 grade point average in her freshman year at an Ivy League school. “It’s funny,” the consultant mused, “using a ‘staged GPA representation’ she would only be a 3.” His jaw tightened as he pondered the remark. “It’s worse than that,” the sponsor finally admitted, “to date she has received nine A’s and one C, so she’d only be a 2.” The remainder of the lunchtime conversation focused on how to implement the continuous representation throughout the sponsor’s organization!

Measuring Levels Is Not Enough

As indicated, most organizations that follow a model-based approach to process improvement designate a process improvement team, typically called the Software Engineering Process Group (SEPG), to spearhead the effort. Working under the auspices of a senior management sponsor, the SEPG is responsible for documenting, deploying, and improving the processes suggested by the corresponding model.

Imagine that six months ago your senior management directed your SEPG to achieve CMMI maturity level 2 within a year. As an SEPG member, you have invested extensive (some might say “excessive”) time and effort generating all the required policies, procedures, templates, measures, checklists, and training materials, but you are now having trouble getting the development projects to pilot and adopt any of these new process components.

Being a reasonable, proactive change agent, you solicit your sponsor’s assistance. The sponsor eloquently reminds project personnel how important it is to reach their target maturity level and urges them to be more open to helping the organization achieve this distinctive honor. The sponsor instructs software quality assurance personnel to be more aggressive in explaining the value of all this new process stuff and in identifying process deviants. But nothing really changes; the process assets continue to gather dust and the SEPG’s frustration continues to mount.

Why are the project teams being so difficult and how can the SEPG achieve maturity level 2 if they don’t get with the program? Why aren’t they helping the SEPG to be successful?

But wait a minute . . . why is the organization doing process improvement? They’re not really doing it to “achieve CMMI maturity level 2”; they’re doing it to improve. They shouldn’t be forcing the projects to grunt through a pile of administrivia to accommodate the CMMI; they should be employing process aspirin to relieve project pain. Perhaps they can exploit the CMMI to help the projects achieve greater success!

Let’s check this hypothesis by determining which of two possible results would be preferred:

  1. The organization is assessed at CMMI maturity level 2, but the projects achieve no measurable improvement; or

  2. The projects achieve measurable improvement, but never achieve CMMI maturity level 2.

Unless there are compelling business reasons to reach a particular CMMI level (e.g., your customers will not allow you to bid on work unless you have been assessed at maturity level 2), if your answer is 1, you’ve probably been in the “quality” organization too long!

On the other hand, if the SEPG continues to help the projects succeed, its value will be recognized and the group will overcome much of the natural resistance to change. In addition, if the projects continue to demonstrate sustainable improvement, the CMMI level will ultimately come. Remember that using the CMMI is merely one tactic to achieve a higher-level (no pun intended) business strategy through the execution of successful projects.

The CMMI maturity level is intended to be a leading indicator of the organization’s process capability, but the real value of process improvement is in running more successful projects. The organization shouldn’t merely measure the CMMI maturity level; it should also monitor the additional value derived from the implementation of these improved process elements.

Many organizations use the acronym SPI to mean “Software Process Improvement.” The organization should consider keeping the SEPG focused properly by changing the meaning of “SPI” to Software Project Improvement and establishing the SEPG’s motto as: “If we are not helping the projects achieve measurable improvement, we are failing!” The projects do not exist to help the SEPG achieve success, but rather the other way around.

One of the critical steps in beginning (and continuing) a CMMI-based improvement program is to develop a process capability baseline, which serves as the basis for determining what to improve and for assessing progress along the way. The desired outcomes might include:

  • Delivered quality

  • Productivity

  • Schedule

  • Reduced overtime

  • Defect injection rate

  • Defect discovery rate

  • In-process defect removal efficiency

  • Defect distribution
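Two of these baseline measures can be illustrated with a small sketch. The project counts and size figure below are invented; the point is only how the measures are derived:

```python
# Illustrative baseline calculation (all numbers are made up).
# In-process defect removal efficiency: share of total defects caught
# before release. Injection rate: total defects normalized by size.

defects_found_in_process = 180   # found by reviews and testing
defects_reported_in_field = 20   # escaped to customers
size_kloc = 50                   # thousand lines of code delivered

total_defects = defects_found_in_process + defects_reported_in_field
removal_efficiency = defects_found_in_process / total_defects  # 0.90
injection_rate = total_defects / size_kloc                     # 4.0 defects/KLOC

print(f"{removal_efficiency:.0%} removal efficiency, "
      f"{injection_rate} defects/KLOC injected")
```

Tracked release over release, these numbers are what let an organization claim "measurable improvement" rather than "projects seem to be running smoother."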

Establishing the Alignment Principle

Project managers often tell their customers, “Faster, better, or cheaper—pick two.” What they mean, of course, is that if the customer demands a high-quality product in the shortest amount of time, they reserve the right to tell her how much it will cost. Conversely, if the customer prefers a low-cost product in the shortest amount of time, it may have quality problems (typically referred to as “undocumented features”). The point is that the solution can be constrained in only two of the three dimensions—there must be at least one independent variable.

The “alignment principle” requires you, as an SEPG member, to take this concept a step further. You need to tell senior management, “faster, better, or cheaper—pick one.” Since senior management has funding and firing authority over the SEPG, however, you may want to ask questions like:

  • What is the business imperative in our marketplace?

  • What gives us a competitive edge in the minds of our customers?

  • Why do our potential customers keep buying our competitor’s products?

It seems fairly obvious that if your firm manufactures pacemakers, quality is the attribute to be maximized. Such an organization will quickly conclude that it would be willing to sacrifice a bit of time and cost to reduce the number of field-reported defects—especially those reported by the relatives of their late customers.

But what about your company? How would your senior managers answer if the question were posed to them? The response to this question is the most important piece of planning data for the process improvement program, because it is the foundation of the alignment principle. It is imperative that senior management, not the SEPG, establish this. The alignment principle is the strategic business imperative that will be supported by the tactical implementation of improved process elements. Strategic decisions are the responsibility of senior management; tactical plans are generated and executed by organizational personnel based on those decisions.

Suppose senior management has just informed you that quality, defined by field-reported defects, is the most important competitive dimension in the minds of your customers. So now it’s time to craft the alignment principle. Continuing with the pacemaker example, we may establish an alignment principle something like: “Achieve an annual, sustainable 20% reduction in field-reported defects without degrading current levels of cost, schedule, and functional variance.”

Now you know what it means when you say that the SEPG is going to help the projects achieve greater success, and the project teams now know what’s most important to senior management. When the SEPG pilots a new process element that demonstrates a measurable reduction in defects, the projects will be anxious to deploy and adopt this enhancement. Rather than force projects to use process elements in which they perceive little value, the Alignment Principle guides the SEPG to provide services that demonstrate measurable benefit. In this manner, the SEPG and the projects are aligned.

Take Time Getting Faster

Maturity level 2 in the CMMI is called “Managed,” but even maturity level 1 organizations “manage” their projects. For instance, many maturity level 1 projects manage to target project completion on the earliest date with a nonzero probability of success. The thinking is that if everyone works 60+ hours per week, and nobody gets sick or takes weekends or holidays off, the project might just manage to finish on time.

And now your senior management has proclaimed that the overarching goal of the process improvement program is to reduce cycle time. They are not being unreasonable—it’s hard to deny that for many software-intensive products the primary competitive dimension is time to market. Customers, whether internal or external, are always clamoring to get new products and features in an ever-decreasing time frame. But as a member of the SEPG you fear that this edict simply gives the project teams an excuse to circumvent existing processes and standards in order to get the job done any way they see fit.

Customers apply schedule pressure as part of a “ritualistic dance.” They get project managers to commit to delivering in twelve months in the desperate hope that they will receive the full product in fifteen months, or alternatively, that they will get 60% of the core functionality by the committed date. The project manager knows it, the customer knows it, and yet the charade plays out release after release. It’s just like budgeting’s ritualistic dance—if you need $5M for your project you request $7.5M, knowing that one-third of the budget will be cut back during the approval cycle. We all know the tune and the dance continues!

So what do the customers really want? They want to believe! Customers want to know that when you commit to delivering a product on March 31 that it will be available for implementation on April 1 (an appropriate date?). Although they claim they want the software faster, they really want it more reliably.

As much as the customers want to believe, the software development staff wants to be believed. They would like their managers to leave their professional integrity intact by accepting and even defending their estimates. They would like senior management to understand and mitigate the devastating effects of golf course commitments and uncontrolled scope creep. They would like their customers to realize that it is in their mutual best interest to plan and execute projects in a disciplined manner. Finally, they would like to see their spouses and children more often than Sunday evenings!

Rather than establish a goal of finishing projects faster, the organization would be better served initially with the goal of estimating and executing projects more reliably. To achieve this goal, the organization must

  1. gather and analyze historical data to develop and calibrate estimation models,

  2. codify good engineering and project management practices that lead to more reliable results,

  3. establish the means to control the projects’ requirements and configuration baselines, and

  4. enjoy the strong support of leadership in achieving this goal.

Disciplined planning and execution significantly reduce the variability of project results. They also establish a solid foundation for getting projects done faster, the organization’s ultimate goal. Since you are unlikely to achieve sustainable cycle time reductions without first achieving reasonably reliable results, follow Stephen Covey’s advice and “put first things first.” The SEPG’s challenge is to convince senior management that generating more reliable estimates is a necessary prerequisite to reducing cycle time—and that once customers believe and estimation credibility has been established, the tune of the ritualistic dance will be altered forever.

Keep It Simple — or Face Decomplexification

A friend of mine flies all the time. Whenever he boards a plane, he sneaks a peek into the cockpit to see what the crew is up to. He says he feels a sense of comfort when he sees the captain filling out the preflight checklist. This simple act tells him that the pilot’s objectives are similar to his own—making sure this bird can

  • get off the ground safely,

  • fly all the way to their destination, and

  • get back down—preferably in a controlled manner.

That is, the captain fills out the preflight checklist because he/she wants to live as much as the passengers do. On the other hand, imagine how nervous my friend might be if his glance in the cockpit found the captain reading the operator’s manual!

As they board, the passengers’ working assumption is that the pilot knows how to fly the plane. The pilot simply uses the preflight checklist to ensure that the plane is fit for use, and to lower the probability that critical safety precautions are inadvertently overlooked.

So why is it that your organization’s multivolume set of software process documentation gets less use than a preflight checklist? Are you not working on important projects that would make internal headlines if they “crashed and burned?” Is the project team not entrusted with planning the project at the outset, performing project activities throughout, and ultimately delivering products to your customers, preferably in working order? Have you really committed the full process to memory so that you know the processes that you execute once every three months better than those a pilot executes three or four times a day?

One reason that process documentation remains on the shelf is that its authors have a different working assumption about project personnel than passengers have about pilots. Authors of such documentation typically assume that the project personnel do not know how to plan, manage, or fulfill their project responsibilities.

But let’s face facts; when the process was written, your people probably didn’t have all the requisite skills to perform their project responsibilities using the newly defined process. So naturally the documentation was written to fulfill this need. However, just imagine how thick the preflight checklist would be if it were written such that any passenger would be able to fly a modern commercial airliner. (And imagine how quickly you would bolt from the plane if they asked the person sitting next to you to proceed to the cockpit to try!)

Lack of skills is a transient issue that should be addressed by training. Your organization should provide ample skill-building interventions to train novices, to address skill deficiencies, and to introduce major process changes. But it is unrealistic to expect experienced project personnel to need or use the same detailed instructions required by the novice. Is it any surprise, then, that your process documentation remains on the shelf unopened if 90% of its bulk is dedicated to such mundane tasks?

So, differentiate between training material and process documentation. For students, training material is typically a single-use asset—they use it in the classroom and then stick it on the shelf. In contrast, process documentation should serve as a ready reference guide for the process executor. Like a preflight checklist, it should focus on the vital process elements that lower the probability that critical steps will be overlooked. Like the airline’s preflight checklists, software process documentation should be continuously reviewed to assess its effectiveness and improved to allow better control and software quality.

Measuring the Value of Process Improvement

Oscar Wilde said, “Nowadays, people know the cost of everything and the value of nothing.” Unfortunately this is often true for organizations focusing on process improvement. Most organizations can tell you their process improvement budget for the year and how much they have spent to date, but few can relate hard data regarding the benefits they’ve derived from that investment.

When asked about the benefits, management is likely to tell you things like: “Projects seem to be running a lot smoother now.” “We don’t seem to have as many late projects as we used to.” “Customers seem to be happier with the quality of the products that we’re producing.” “We’ve achieved CMMI maturity level 3, so we know we’re doing well!” If a process capability baseline was established before the improvement program began, you should be able to demonstrate improvements in the list of desired outcomes.

Soft, qualitative data is comforting, but it is not likely to sustain the improvement program through the next budgeting cycle, let alone the next economic downturn. When times get tough, programs that cannot support the benefits they provide with hard data are likely to be thanked for their contribution as their people are redeployed to work on “real” projects that can.

So how does an SEPG go about measuring the benefits derived from process improvement? Clearly the alignment principle provides some guidance in this regard. Using the sample alignment principle, the organization must measure:

  • Field-reported defects

  • Cost variance

  • Schedule variance

  • Functional variance

The success (or lack thereof) of the process improvement program can be determined objectively by comparing the results of the projects with those proclaimed in the alignment principle. An SEPG that can demonstrate sustainable benefit in quantifiable terms is much more likely to have the opportunity to repeat this success.

To further establish the “measurement mentality” for process improvement, the SEPG should be encouraged (or is it “expected”?) to hypothesize the value of each improvement in measurable terms prior to piloting it on a real project. The pilot should be conducted in such a way that it confirms or refutes the realization of the hypothesized value, and deployment decisions should be based on this objective analysis. Deploying the process element as is, modifying the process element and running additional pilots, or abandoning the proposed changes are likely outcomes of such postpilot analyses. Remember that pilots are simply data-gathering activities; a successful pilot is one that contributes to organizational knowledge, not necessarily one that demonstrates a proposed process change is ready to be inflicted on all projects.

Measuring Process Adoption

As the hypothesized value of new process elements is proved by pilot projects, the SEPG should prepare for broad deployment and subsequent implementation. Many organizations make the process elements available via the Web and expect that the projects will simply start using them. This Field of Dreams approach (“if you build it, they will come”) is intuitively appealing to the SEPG, but rarely proves to be an effective or efficient means of changing people’s behavior.

Other organizations follow a strategy similar to that employed for their software packages. These organizations package multiple new or modified process elements into periodic “process releases,” and accompany each release with release notes, testimonial data from the pilot projects, process training, internal consultation, and a great deal of fanfare. Typically, these process releases are made one to four times per year. This approach provides sufficient process stability and ensures that project personnel aren’t constantly struggling to figure out what today’s process is.

Regardless of the deployment approach, as new process elements are released, the SEPG should monitor the project adoption rate—the rate at which the new process elements are being adopted by the project teams. The primary question addressed by measuring the adoption rate is: How many of the projects that should be using these new elements are using them?

The SEPG should seek to apply process improvement concepts to its own processes (“physician, heal thyself”), one of which is the deployment process. In an effort to provide better services in the future, the SEPG should ask a series of secondary questions related to the adoption rate such as:

  • Why are some projects resisting adoption of the new elements?

  • Is this resistance based on inadvertent side effects that weren’t experienced on the pilot projects?

  • Should there be additional support mechanisms to assist behavioral change and project adoption?

  • Should there be tailoring guidelines that reduce the administrative overhead of these new process elements when implemented in small projects?

  • How much effort is being invested in supporting the adoption of these new elements?

The data derived from the answers to such questions can be analyzed to accelerate the adoption of the current process release as well as to enhance the means of releasing new process elements in the future.

Measuring Process Compliance

Organizations that are using the CMM or CMMI to guide their process improvement activities have typically established a Software Quality Assurance (SQA) group. SQA is responsible for verifying that project activities and work products conform to the project’s designated processes, procedures, standards, and requirements. SQA conducts audits and reviews on various aspects of the project work and reports the results.

SQA groups generally establish checklists of items to be verified. The checklist for verifying that a peer review was conducted properly may include items such as:

  • Was a qualified peer review moderator designated for the review?

  • Did the review team have the appropriate skills to review the work product adequately?

  • Was the work product to be reviewed published at least three days in advance of the review meeting?

  • Did the moderator verify that at least 80% of the invited reviewers participated in the review?

  • Did the scribe note the preparation time on the peer review data sheet?

  • Did the moderator verify that the reviewers were adequately prepared for the review?

  • Did the scribe note the defects, their location in the work product, and the responsible person on the peer review data sheet?

  • Was the peer review data entered into the peer review database?

  • Were the defects resolved?

  • Did the moderator (or designee) verify the resolution of the defects?

Verifying and measuring compliance can identify:

  • Areas where compliance to the process is degrading

  • Process steps in which additional training or coaching is necessary

  • Process elements that may warrant additional tailoring guidelines (or replacement)

  • Elements of the process that are deemed administratively burdensome

  • Areas where tool support may be beneficial

Project personnel are rarely noncompliant just to be belligerent. Unless this is a relatively new and therefore unproved process element, noncompliance usually indicates a change in the work pattern from that which was in place when the process element was introduced. Monitoring process compliance trends can detect shifts in project behavior and can result in initiation of corrective action in a timely manner.

If the SEPG established a process capability baseline, process results at each step should be verified against the desired outcomes. This will help keep management informed and engaged, based on expectations that were established when the improvement program began. It will also serve as a means to maintain commitment and focus on the effort.

Celebrate the Journey, Not Just the Destination

A few weeks before the Twin Cities Marathon, a local running club traditionally hosts a 20-mile training run followed immediately by a potluck lunch. To get runners of varying skill to finish around the same time, they have the slower runners start out at 8 A.M., the medium runners at 8:30 A.M., and the fast runners at 9 A.M.

A couple of years ago, I participated in the run and went out with the medium runners. At mile 17, I happened across a 285-pound “runner,” who was lumbering along at a pace slower than most people walk. He was breathing heavily (no pun intended), sweating profusely, and looking as though he was about to pass out. I was convinced that it was only the thought of the pending feast that kept him moving at all!

My first concern was medical: "Are you all right?" I asked. "I'm doing OK," he replied, "but I'm sure glad I started out at 6:30." After he assured me that he was going to make it, I promised to save him a plate of grub, bid him a fond farewell, and left him to ponder Newton's first law of motion.

The run ended at an elementary school where we used the cafeteria for our luncheon. The “moving mountain” was the primary topic of conversation as the returning runners loaded up their plates and took their seats. “How could a runner allow himself to get into such bad shape?” was the question of the hour. “No ‘real runner’ would ever allow himself to fall apart like that.”

With lunch just about over, the first runner to set out became the last to finish. As promised, I had saved him some food, and so I brought it to the table where he had plopped down with a sense of self-satisfaction. A few members of the running club are newspaper reporters; they joined us at the table, each carrying another plate of food for our beleaguered colleague. The questions started slowly but the pace picked up as his story unfolded.

It turns out that about a year prior, Bill had tipped the scale at 400 pounds and decided right then to turn his life around. His friends scoffed when he made a public commitment to run a marathon. After consulting a sports doctor, he started off slowly, but still managed to drop over 100 pounds in the next 11 months. With only three weeks to go he was still hoping to run the full 26.2 miles within the six-hour time limit—and darned if he didn’t do it! Bill’s miraculous feat was featured in Runner’s World magazine—an experience that most of us “real runners” will probably never enjoy. OK, to the moral of the story . . .

At every SEPG conference there is at least one presentation entitled something like “How We Achieved Level 2 in Three Months.” These sessions are jammed with hopeful newbies praying for the miracle of instantaneous success. They could probably learn more from talking to Bill, the marathoner. First, they could learn that “one data point does not a trend make.” They shouldn’t judge the distance the presenters have traveled by only seeing them cross the finish line. After all, the presenters can run a 100-yard dash quicker than your organization can run a marathon.

They could also learn that you really can set aggressive goals and achieve them. But, more important, they could reflect on the fact that even if Bill had not successfully completed the marathon that year, he had still dropped 100 pounds and was a better man for it. Bill’s life didn’t miraculously improve because he ran a marathon; his life improved because he worked out every day for 11 months.

Don’t waste time looking for the easy path; start improving your projects’ performance one day at a time and you will be successful, whether or not you ever achieve level 2. As an anonymous marathoner said, “The miracle isn’t that I finished… the miracle is that I had the courage to start.” So take that first step—and enjoy the journey!

Summary

An assessment using the CMMI model is a significant undertaking, and the journey to the next level may take a long time to accomplish. Therefore, it is essential to sustain management commitment along the way. Process measurements and progress against desired outcomes should be communicated to management on a regular basis to avoid having process improvement fall off the management radar screen.

References



[1] From Dos and Don’ts of Software Process Improvement, by Patrick O’Toole. © 2002, Process Assessment, Consulting & Training. Reprinted by permission. All rights reserved. Pat O’Toole is a Principal Consultant at Process Assessment, Consulting & Training (PACT) where he provides a variety of services to his process improvement clients. Pat is one of the most active SEI authorized CBA IPI and SCAMPI lead assessors, and has led assessments spanning all maturity levels, including the largest and most complex Level 5 assessment conducted to date. He is a candidate lead assessor for the People-CMM, and is an SEI transition partner for the Introduction to CMMI course.

[2] Capability Maturity Model® and CMM® are registered in the U.S. Patent and Trademark Office; CMM℠ Integration and SCAMPI℠ are service marks of Carnegie Mellon University.

[3] Standard CMMI Appraisal Method for Process Improvement (SCAMPI), Version 1.1: Method Definition
