Chapter 6. Essays About CMMI for Services

This chapter consists of essays written by invited contributing authors. All of these essays are related to CMMI for Services. Some of them are straightforward applications of CMMI-SVC to a particular field; for example, reporting a pilot use of the model in an IT organization or in education. In addition, we sought some unusual treatments of CMMI-SVC: How can CMMI-SVC be used in development contexts? Can CMMI-SVC be used with Agile methods—or are services already agile? Finally, several essayists describe how to approach known challenges using CMMI-SVC: adding security concerns to an appraisal, using the practices of CMMI-SVC to capitalize on what existing research tells us about superior IT service, and reminding those who buy services to be responsible consumers, even when buying from users of CMMI-SVC.

In this way, we have sought to introduce ideas about using CMMI-SVC both for those who are new to CMMI and for those who are very experienced with prior CMMI models. These essays demonstrate the promise, applicability, and versatility of CMMI for Services.

A Changing Landscape

By Peter Flower

Book authors’ comments: During the past three years, Trinity, where Peter Flower is a founder and managing director, has been working with a major aerospace and defense contractor that is transforming its business from one that develops and sells products to one that provides the services necessary to keep the products operational and to ensure that they are available when and where customers want them. This new form of “availability contracting” supports the company’s strategic move to grow its readiness and sustainment capabilities and support its customers through partnering agreements.

During Peter’s work to help make this transformation happen, he has become aware that this is not just a strategic change in direction by one organization, but is part of a global phenomenon that is becoming known as the “servitization” of products. With this realization, Peter sets the larger stage, or rather, paints the landscape, in which process improvement for service fits.

“It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.”

—Charles Darwin

Changing Paradigms

Services are becoming increasingly important in the global economy; indeed, the majority of current business activity is in the service sector. Services constitute more than 50 percent of the GDP in low-income countries, and as the economies of these countries continue to develop, the importance of services continues to grow. In the United States, the current list of Fortune 500 companies contains more service companies and fewer manufacturers than in previous decades.

At the same time, many products are being transformed into services, with today’s products containing a higher service component than in previous decades. In management literature, this is referred to as the “servitization” of products. Virtually every product today has a service component to it; the old dichotomy between product and service has been replaced with a service-product continuum.

In the past, when rates of change were limited, it was sufficient for management to seek incremental improvements in the performance of existing products and processes. This is generally the domain in which benchmarking has been most effective. However, change in the business, social, and natural environments has been accelerating. A major concern for top management, especially in large and established companies, is the need to expand the company’s scope, not only to ensure survival and success in the present competitive arena, but also to make an effective transition to a turbulent future environment. The transition of an established organization from the present to the future competitive environment is often described in terms of a “paradigm shift.” Think of a paradigm shift as a change from one way of thinking to another. It’s a revolution, a transformation, a sort of metamorphosis. It does not just happen, but rather is driven by agents of change. To adopt a new paradigm, management needs to start thinking differently, to start thinking “outside the box.”

Throughout history, significant paradigm shifts have almost always been led by those at the fringes of a paradigm, not those with a vested interest (intellectual, financial, or otherwise) in maintaining the current paradigm, regardless of its obvious shortcomings. Paradigm shifts do not involve simply slight modifications to the existing model. They instead replace and render the old model obsolete.

The current change that is engulfing every industry is unprecedented, and many organizations, even in the Fortune 500, may not survive the next five years. This shift is not to be mistaken for a passing storm; rather, it is a permanent change in the weather. Understanding and managing the forces behind this change has become one of the primary tasks of management if a business is to survive.

The Principal Agents of Change

Globalization or global sourcing is an irreversible trend in delivery models, not a cyclical shift. Its potential benefits and risks require enterprise leaders to examine what roles and functions in a given business model can be delivered or distributed remotely, whether from nearby or from across the globe. The political, economic, and social ramifications of increased global sourcing are enormous. With China and India taking a much larger slice of the world economy, and with labor-intensive production processes continuing to shift to lower-cost economies, traditional professional IT services jobs are now being delivered by people based in emerging markets. India continues to play a leading role in this regard, but significant additional labor is now being provided by China and Russia, among other countries.

IP telephony and VoIP communications technologies, made possible by the Internet, are driving voice and data convergence activity in most major companies. In turn, this is leading to a demand for new classes of business applications. Developers are finally shifting their focus toward business processes and away from software functionality. As a result, software has become a facilitator of rapid business change, not an inhibitor. The value creation in software has shifted toward subscription services and composite applications, and away from monolithic suites of packaged software. With increased globalization and continued improvements in networking technologies, enterprises will be able to use the world as their supply base for talent and materials. The distinctions among different functions, organizations, software integrators and vendors, and even industries will be increasingly blurred as packaged applications are deconstructed and delivered as service-oriented business applications.

Open source software (OSS) and service-oriented architecture (SOA) are revolutionizing software markets by moving revenue streams from license fees to services and support. In doing so, they are a catalyst for restructuring the IT industry. OSS refers to software whose code is open and can be extended and freely distributed. In contrast to proprietary software, OSS allows for collaborative development and a continuous cycle of development, review, and testing. Most global organizations now have formal open source acquisition and management strategies, and OSS applications are directly competing with closed-source products in every software infrastructure market. The SOA mode of computing, based on real-time infrastructure (RTI) architecture, enables companies to increase their flexibility in delivering IT business services. Instead of having to dedicate resources to specific roles and processes, companies can rely more on pools of multifaceted resources. These improvements in efficiency have enabled large organizations to reduce their IT hardware costs by 10 percent to 30 percent and their labor costs by 30 percent to 60 percent as service quality improves and agility increases. Large companies are now looking to fulfill their application demands through shared, rather than dedicated, sources, and to have their software delivered by external pay-as-you-go providers. For example, Amazon.com now sells its back-office functions to other online businesses, making its IT processes available at a price, as a business separate from selling goods to consumers.

A problem for those moving from one paradigm to another is their inability to see what is before them. The current, but altering, paradigm consists of rules and expectations, many of which have been in place for decades. Any new data or phenomena that fail to fit those assumptions and expectations are often rejected as errors or impossibilities. Therefore, the focus of management’s attention must shift to innovation and customer service, where personal chemistry or creative insight matters more than rules and processes. Improving the productivity of knowledge workers through technology, training, and organizational change needs to be at the top of the agenda in most boardrooms in the coming years. Indeed, a large percentage of senior executives think that knowledge management will offer the greatest potential for productivity gains in the next 15 years and that knowledge workers will be their most valuable source of competitive advantage. We are experiencing a transition from an era of competitive advantage gained through information to one gained from knowledge creation. Investment in information technology may yield information, but the interpretation of that information and the value added to it by the human mind is knowledge. Knowledge is the prerogative of the human mind, not of machines.

Unlike the quality of a tangible product, which can be measured in many ways, the quality of a service is only apparent at the point of delivery and is dependent on the customer’s point of view. There is no absolute measure of quality. Quality of service is mostly intangible, and may be perceived from differing viewpoints as timely delivery of a service, delivery of the right service, convenient ordering and receipt of the service, and value for money. This intangibility requires organizations to adopt a new level of communication with their customers about the services they can order, ensuring that customers know how to request services and report incidents and that they are able to do so. They need to define and formally agree on the exact service parameters, ensure that their customers understand the pricing of the service, ensure the security and integrity of their customers’ data, and provide agreed-upon information regarding service performance in a timely and meaningful manner. Information, therefore, is not only a key input in services production, but also a key output.

In addition, an organization’s “online customers” are literally invisible, and this lack of visual and tactile presence makes it even more crucial to create a sense of personal, human-to-human connection in the online arena. While price and quality will continue to matter, personalization will increasingly play a role in how people buy products and services, and many executives believe that customer service and support will offer the greatest potential for competitive advantage in the new economy.

Management’s Challenge

As a result of this paradigm shift, a number of key challenges face many of today’s senior business managers. The major challenges include gaining visibility into actual performance and control of the activities that deliver the service, to ensure that customers get what they want; controlling costs by exploiting the opportunities presented by the globalization of IT resources; complying with internal processes and rules, applicable standards, regulatory requirements, and local laws; and making continuous service improvement the order of the day.

Operational processes need to be managed in a way that provides transparency of performance, issues, and risks. Strategic plans must integrate and align IT and business goals while allowing for constant business and IT change. Business and IT partnerships and relationships must be developed that demonstrate appropriate IT governance and deliver the required, business-justified IT services (i.e., what is required, when required, and at an agreed cost). Service processes must work smoothly together to meet the business requirements and must provide for the optimization of costs and total cost of ownership (TCO) while achieving and demonstrating an appropriate return on investment (ROI). A balance must be established between outsourcing, insourcing, and smart sourcing.

Following the economic crisis, the social environment is considerably less trusting and less secure, and the public is wary of cascading risks and seems to be supportive of legislation and litigation aimed at reducing those risks, including those posed by IT. As a result, it is very likely that IT products and services will soon be subject to regulation, and organizations must be prepared to meet the requirements that regulated IT will impose on their processes, procedures, and performance.

Organizations will need to measure their effectiveness and efficiency, and demonstrate to senior management that they are improving delivery success, increasing the business value of IT, and using IT to gain a competitive advantage. Finally, organizations must understand how they are performing in comparison to a meaningful peer group, and why.

Benchmarking: A Management Tool for Change

Benchmarking is the continuous search for, and adaptation of, significantly better practices that lead to superior performance, carried out by investigating the performance and practices of other organizations (benchmark partners). In addition, benchmarking can create a sense of crisis that facilitates the change process.

The term benchmark refers to the reference point against which performance is measured. It is the indicator of what can and is being achieved. The term benchmarking refers to the actual activity of establishing benchmarks.

Most of the early work in the area of benchmarking was done in manufacturing. However, benchmarking is now a management tool that is being applied almost everywhere. Benefits of benchmarking include providing realistic and achievable targets, preventing companies from being industry-led, challenging operational complacency, creating an atmosphere conducive to continuous improvement, allowing employees to visualize the improvement (which can, in itself, be a strong motivator for change), creating a sense of urgency for improvement, confirming the belief that there is a need for change, and helping to identify weak areas and indicating what needs to be done to improve. As an example, quality performance in the 96 percent to 98 percent range was considered excellent in the early 1980s. Japanese companies, however, were in the meantime measuring quality in a few hundred defective parts per million, a level achieved by focusing on process control to ensure quality consistency. Thus, benchmarking is the only real way to assess industrial competitiveness and to determine how one company’s process performance compares to that of other companies.
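
To make the benchmark gap concrete, it helps to convert percentage quality into the parts-per-million terms the Japanese companies used. The following minimal sketch (in Python, using only the illustrative figures from this paragraph) performs that conversion:

# Convert percentage quality levels into defective parts per million (ppm).
def defects_ppm(quality_fraction):
    return (1.0 - quality_fraction) * 1_000_000

for quality in (0.96, 0.98, 0.9997):
    print(f"{quality:.2%} quality -> {defects_ppm(quality):,.0f} defective ppm")

# 96.00% quality -> 40,000 defective ppm
# 98.00% quality -> 20,000 defective ppm
# 99.97% quality -> 300 defective ppm (a "few hundred ppm" level)

Even “excellent” 98 percent quality corresponds to 20,000 defective parts per million, roughly two orders of magnitude worse than the few hundred parts per million the Japanese benchmark represented.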

Benchmarking goes beyond comparisons with competitors to understanding the practices that lie behind performance gaps. It is not a method for copying the practices of competitors, but a way of seeking superior process performance by looking outside the industry. Benchmarking makes it possible to gain competitive superiority rather than competitive parity.

With the CMMI for Services reference model, the SCAMPI appraisal method, and the Partner Network of certified individuals such as instructors, appraisers, and evaluators, the Software Engineering Institute (SEI) provides a set of tools and a cadre of appropriately experienced and qualified people that organizations can use to benchmark their own business processes against widely established best practices. It must be noted, however, that there will undoubtedly be difficulties encountered when benchmarking. Significant effort and attention to detail are required to ensure that problems are minimized.

The SEI has developed a document to assist in identifying or developing appraisal methods that are compatible with the CMMI Product Suite. This document is the Appraisal Requirements for CMMI (ARC). The ARC describes a full benchmarking class of appraisals as class A. However, other CMMI-based appraisal methods might be more appropriate for a given set of needs, including self-assessments, initial appraisals, quick-look or mini-appraisals, incremental appraisals, and external appraisals. Of course, this has always been true, but the ARC formalizes these appraisals into three classes by mapping requirements to them, providing a degree of consistency and standardization rarely seen in other appraisal methods. This is important because it recognizes that an organization can get benefits from internal appraisals with various levels of effort.

A CMMI appraisal is the process of obtaining, analyzing, evaluating, and recording information about the strengths and weaknesses of an organization’s processes and the successes and failures of its delivery. The objective is to define the problems, find solutions that offer the best value for the money, and produce a formal recommendation for action. A CMMI appraisal breaks the organization down into discrete areas that are the targets for benchmarking, and it is therefore a more focused study than other benchmarking methods as it attempts to benchmark not only business processes but also the management practices behind them. Some business processes are the same, regardless of the type of industry.

The Standard CMMI Appraisal Method for Process Improvement (SCAMPI) is the official SEI method for providing a benchmarking appraisal. Its objectives are to understand the implemented processes, identify process weaknesses (called “findings”), determine the level of satisfaction against the CMMI model (“gap analysis”), reveal development, acquisition, and service risks, and (if requested) assign ratings. Ratings reflect the extent to which the corresponding practices are present in the planned and implemented processes of the organization, and are based on the aggregate of evidence available to the appraisal team.

SCAMPI appraisals help in prioritizing areas of improvement and facilitate the development of a strategy for consolidating process improvements on a sustainable basis; hence SCAMPI appraisals are predominantly used as part of a process improvement program. Unless an organization measures its process strengths and weaknesses, it will not know where to focus its process improvement efforts. By knowing its actual strengths and weaknesses, it is easier for an organization to establish an effective and more focused action plan.

Summary

Driven by unprecedented technological and economic change, a paradigm shift is underway in the servitization of products. The greatest barrier in some cases is an inability or refusal to see beyond the current models of thinking; however, for enterprises to survive this shift, they must be at the forefront of change and understand how they perform within the new paradigm.

Benchmarking is the process of determining who is the very best, who sets the standard, and what that standard is. If an organization doesn’t know what the standard is, how can it compare itself against it?

One way to achieve this knowledge and to establish the necessary continuous improvement that is becoming a prerequisite for survival is to adopt CMMI for Services.

“To change is difficult. Not to change is fatal.”

—Ed Allen

Expanding Capabilities across the “Constellations”

By Mike Phillips

Book authors’ comments: We frequently hear from users who are concerned about the three separate CMMI models and looking for advice on which is the best for them to use and how to use them together. In this essay, Mike Phillips, who is the program manager for CMMI, considers the myriad ways to get value from the multiple constellations.

As we are finishing the details of the current collection of process areas that span three CMMI constellations, this essay is my opportunity to encourage “continuous thinking.” My esteemed mentor as we began and then evolved the CMMI Product Suite was our chief architect, Dr. Roger Bate. Roger left us with an amazing legacy. He imagined that organizations could look at a collection of “process areas” and choose ones they might wish to use to facilitate their process improvement journey.

Maturity levels for organizations were all right, but not as interesting to him as being able to focus attention on a collection of process areas for business benefit. Small businesses have been the first to see the advantage of this approach, as they often find the full collection of process areas in any constellation daunting. An SEI report, “CMMI Roadmaps,” describes some ways to construct thematic approaches to effective use of process areas from the CMMI for Development constellation. This report can be found on the SEI website at www.sei.cmu.edu/library/abstracts/reports/08tn010.cfm.

As we created the two new constellations, we took care to refer back to the predecessor collection of process areas in CMMI for Development. For example, in CMMI for Acquisition, we note that some acquisition organizations might need more technical detail in the requirements development effort than what we provided in Acquisition Requirements Development (ARD), and that they can “reach back” to CMMI-DEV’s Requirements Development (RD) process area for more assistance.

In CMMI for Services, we suggest that the Service System Development (SSD) process area is useful when the development efforts are appropriately scoped, but the full Engineering process area category in CMMI-DEV may be useful if sufficiently complex service systems are being created and delivered.

Now, with three full constellations to consider when addressing the complex organizations many of you have as your process improvement opportunities, many additional “refer to” possibilities exist. With the release of the V1.3 Product Suite, we will offer the option to declare satisfaction of process areas from any of the process areas in the CMMI portfolio. What are some of the more obvious expansions?

We have already mentioned two expansions—ARD using RD, and SSD expanded to capture RD, TS, PI, VER, and VAL. What about situations in which most of the development is done outside the organization, but final responsibility for effective systems integration remains with your organization? Perhaps a few of the acquisition process areas would be useful beyond Supplier Agreement Management (SAM). A simple start would be to investigate using Solicitation and Supplier Agreement Development (SSAD) and Agreement Management (AM) as a replacement for SAM to get the additional detailed help. And Acquisition Technical Management (ATM) might give some good technical assistance in monitoring the technical progress of the elements being developed by specific partners.

As we add the contributions of CMMI-SVC to the mix, several process areas offer more ways to expand. In V1.2 of CMMI-DEV, for example, we added informative material in Risk Management to begin to address, ahead of time, concerns about continuity of operations after some significant disruption occurs. Now, with CMMI-SVC, we have a full process area, Service Continuity (SCON), to provide robust coverage of continuity concerns. (And for those who need even more coverage, the SEI now has the Resilience Management Model [RMM] to give the greater attention that some financial institutions and similar organizations have expressed as necessary for their process improvement endeavors. For more, see www.cert.org/resilience/rmm.html.)

Another expansion worthy of consideration is to include the Service System Transition (SST) process area. Organizations that are responsible for development of new systems—and maintenance of existing systems until the new system can be brought to full capability—may find the practices contained in SST to be a useful expansion since the transition part of the lifecycle has limited coverage in CMMI-DEV. (See the essay by Lynn Penn and Suzanne Garcia Miller for more information on this approach.) In addition, CMMI-ACQ added two practices to PP and PMC to address planning for and monitoring transition into use, so the CMMI-ACQ versions of these two core process areas might couple nicely with SST.

A topic that challenged the development team for V1.3 was improved coverage of “strategy.” Those of us with acquisition experience knew the criticality of an effective acquisition strategy to program success, so the practice was added to the CMMI-ACQ version of PP. In the CMMI-SVC constellation, Strategic Service Management (STSM) has as its objective “to get the information needed to make effective strategic decisions about the set of standard services the organization maintains.” With minor interpretation, this process area could assist a development organization in determining what types of development projects should be in its product development line. The SVC constellation authors also added a robust strategy establishment practice in the CMMI-SVC version of PP (Work Planning) to “provide the business framework for planning and managing the work.”

Two process areas essential for service work were seriously considered for insertion into CMMI-DEV, V1.3: Capacity and Availability Management (CAM) and Incident Resolution and Prevention (IRP). In the end, expansion of the CMMI-DEV constellation from 22 to 24 process areas was determined to be less valuable than continuing our efforts to streamline coverage. In any case, these two process areas offer another opportunity for the type of expansion I am exploring in this essay.

Those of you who have experienced appraisals have likely seen the use of target profiles that gather the collection of process areas to be examined. Often these profiles specifically address the necessary collections of process areas associated with maturity levels, but this need not be the case. With the release of V1.3, we have ensured that the reporting system (SCAMPI Appraisal System or SAS) is robust enough to allow depiction of process areas from multiple CMMI constellations. As use of other architecturally similar SEI models, such as the RMM mentioned earlier as well as the People CMM, grows, we will be able to depict profiles using mixtures of process areas or even practices from multiple models, giving greater value to the process improvement efforts of a growing range of complex organizations.
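
As a purely hypothetical illustration (the grouping below is an assumption for this sketch, not an official SAS or SEI format), such a mixed target profile could be represented as simple structured data, with the process areas to be appraised grouped by the constellation they come from:

# Hypothetical sketch of a target profile that draws process areas
# from multiple CMMI constellations; not an official SAS format.
target_profile = {
    "CMMI-DEV": ["RD", "TS", "PI", "VER", "VAL"],  # engineering depth
    "CMMI-SVC": ["SCON", "SST", "CAM", "IRP"],     # service coverage
    "CMMI-ACQ": ["SSAD", "AM", "ATM"],             # supplier-side work
}

for model, process_areas in target_profile.items():
    print(f"{model}: {', '.join(process_areas)}")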

CMMI for Services, with a Dash of CMMI for Development

By Maggie Pabustan and Mary Jenifer

Book authors’ comments: Maggie Pabustan and Mary Jenifer implement process improvements at a Washington-area organization, AEM, that many would consider a development house, given that its mission is applied engineering. They have been strong users of the CMMI-DEV model, yet they have found that CMMI-SVC works admirably in their setting for some contracts. Here they provide an early-adopter account of how to use both models and how to switch from one to the other to best advantage.

How do you apply CMMI best practices to a call center that provides a variety of services and develops software? This essay describes the adaptation of the CMMI for Services model to a call center that exists within a larger organization having extensive experience in using the CMMI for Development model. The call center provides a mix of services, from more traditional software support services to more technical software development expertise. Since software development is not the primary work of the call center, however, applying the CMMI for Development model to its processes was insufficient. The CMMI for Services framework helped to provide all of the elements needed for standardized service delivery and software development processes.

The Development Environment

Applied Engineering Management (AEM) Corporation is a 100 percent woman-owned company, founded by Sharon deMonsabert, Ph.D., in 1986. It is located in the Washington Metropolitan Area and historically has focused on engineering, business, and software solutions. Over the years, AEM has contributed significant effort toward implementing processes based on the CMMI for Development model. Its corporate culture of process improvement is one that values the CMMI model and the benefits of using it. Two of AEM’s federal contracts have been appraised and rated as maturity level 3 using the CMMI for Development model. Both of these contracts focus on software development and maintenance. When considering the feasibility of an appraisal for a third federal contract that focuses on customer support, AEM management determined that the nature of the work was a closer match to the CMMI for Services model, so it proceeded with plans for a CMMI for Services appraisal. After considerable preparation, the organization participated in a Standard CMMI Appraisal Method for Process Improvement (SCAMPI) appraisal in 2010.

The Services Environment

This third contract provides 24/7 customer support to the worldwide users of a software application for a federal client. The customer support team, known as the Support Office, is a staff of 16 people who are tasked with six main areas of customer support.

• Site visits: These take the form of site surveys, software installation and training, and revisits/refresher training. Tasks may include user training, data analysis, data migration preparation and verification, report creation, and bar coding equipment setup.

• Support requests: Requests for assistance are received from users primarily via e-mail and telephone.

• Data analysis: Data analysis occurs in relation to the conduct of site visits and in the resolution of requests. Support Office team members provide data migration assistance and verification during site visits and identify and correct the source of data problems received via requests.

• Reporting: The Support Office creates and maintains custom reports for each site. In addition, the Support Office provides software development expertise for the more robust reporting features in the application, including universe creation and ad hoc functionality.

• Configuration Change Board (CCB) participation: Team members participate in CCB meetings. Tasks associated with these meetings include clarifying requirements with users and supporting change request analysis.

• Testing: As new versions of the software are released, the Support Office provides testing expertise to ensure correct implementation of functionality.

The Support Office has been in existence in various forms for more than 20 years. It has been a mainstay of customer service and product support for both AEM and its federal clients, and has been recognized for its dedication. The work has expanded in the past several years to include two more federal clients, and is set to expand to include additional federal clients in the coming months.

Implementing CMMI for Services

The CMMI for Services model has proven to be an excellent choice of a process framework for the Support Office. Its inclusion of core CMMI process areas and its emphasis on product and service delivery allow the Support Office to focus on its standard operating procedures, project management efforts, product development, product quality, and customer satisfaction.

Work Management Processes

Since the services provided by the Support Office are so varied, the project planning and management efforts have had to mature to cover all of these services in detail. Even though the Work Planning and Work Monitoring and Control process areas exist in both the CMMI for Development and CMMI for Services models, the maintenance of project or work strategy continues to be a key practice for Support Office management. Support Office management continuously identifies constraints and approaches to handling them, plans for resources and changes to resource allocation, and manages all associated risks. The project or work strategy drives the day-to-day management activities. Work activities within the CMMI for Services realm typically do not have fixed end dates. This is very much the case with the work performed by the Support Office. When the process implementers were reviewing the Support Office processes, much thought was given to the nature and workflow of the services they provide. As a result, the group’s processes support the continuous occurrence of work management tasks and resource scheduling. Additionally, Support Office management not only answers to multiple federal clients, but in this capacity also has to manage the site visits, support operations, data tasks, product development, and product testing for these clients. As such, the planning and monitoring of these efforts are performed with active client involvement, usually on a daily basis.

During the SCAMPI appraisal, Project Planning (in CMMI-SVC, V1.3, this becomes Work Planning) was rated at capability level 3, which was higher than the capability levels achieved by the organization for any of the other process areas in the appraisal scope. In addition, the extensive involvement of the clients was noted as a significant strength for the group.

Measurement and Analysis Processes

Over the years, the measurements collected by Support Office management have evolved significantly—in both quantity and quality. These measures contribute to the business decisions made by AEM and its clients. Measures are tracked for all of the customer support areas. Examples include travel costs, user training evaluation results, request volume, request completion rate, request type, application memory usage, application availability, change request volume, and defects from testing. Although the Measurement and Analysis process area exists in both the CMMI for Development and CMMI for Services models, of particular use to the Support Office is the relationship between Measurement and Analysis and Capacity and Availability Management in the CMMI for Services model. Because the Support Office provides 24/7 customer support, both capacity and availability are highly important. The measures selected and tracked enable Support Office management and its clients to monitor status and proactively address potential incidents.
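
As a minimal sketch of how two of the measures named above might be computed (the field names and figures are hypothetical, not AEM’s actual data), consider the following Python fragment:

# Illustrative computation of two Support Office-style measures:
# application availability and request completion rate.
# All numbers are hypothetical.
minutes_in_month = 30 * 24 * 60   # reporting period
downtime_minutes = 42             # unplanned outage time observed

availability = 1 - downtime_minutes / minutes_in_month
print(f"Application availability: {availability:.3%}")    # ~99.903%

requests_received = 512
requests_completed = 498
completion_rate = requests_completed / requests_received
print(f"Request completion rate: {completion_rate:.1%}")  # ~97.3%

Tracking such ratios over time is what allows management and clients to spot capacity or availability trends before they become incidents.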

Quality Assurance Processes

Quality assurance checkpoints are found throughout the processes used by the Support Office. Each type of service includes internal review and approval, at both a process level and a work product level. Process implementers found that in the CMMI for Services environment, as in the CMMI for Development environment, the quality of the work products is highly visible to clients and users. They implemented processes to ensure high-quality work products. For example, the successful completion of site visits is of high importance to clients, in terms of both cost and customer satisfaction. Because of this, the work involved with planning for site visits is meticulously tracked. The processes for planning site visits are based on a 16-week cycle and occur in phases, with each phase having a formal checkpoint. At any point in any phase, if the plans and preparations have encountered obstacles and the checkpoint cannot be completed, then management works with the client to make a “go/no-go” decision. As another quality assurance task, process reviews are conducted by members of AEM’s corporate-level Process Improvement Team. Members of this team are external to the Support Office and conduct the reviews using their knowledge and experience with CMMI for Development process reviews and SCAMPI appraisals. They review the work products and discuss processes with members of the Support Office regularly.
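
A minimal sketch of how such a phased, checkpointed cycle could be modeled follows; the phase names, checkpoint weeks, and go/no-go logic are assumptions for illustration rather than AEM’s actual procedure:

# Hypothetical model of a 16-week, phased site-visit planning cycle
# with a formal checkpoint closing each phase.
phases = [
    ("Initial coordination", 4),   # (phase name, checkpoint week)
    ("Logistics and travel", 8),
    ("Data and training prep", 12),
    ("Final readiness review", 16),
]

def checkpoint_passed(open_obstacles):
    """A checkpoint completes only if no obstacles remain open."""
    return len(open_obstacles) == 0

# Example: an obstacle found in week 12 triggers a go/no-go decision
# with the client rather than silently proceeding.
obstacles = {"Data and training prep": ["site network access not confirmed"]}

for name, week in phases:
    if not checkpoint_passed(obstacles.get(name, [])):
        print(f"Week {week}: '{name}' checkpoint blocked; "
              f"escalate go/no-go decision to the client")
        break
    print(f"Week {week}: '{name}' checkpoint complete")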

Software Development Processes

A benefit of using the CMMI for Services model is that even though the model focuses on service establishment and delivery, it is still broad enough to apply to the software development work performed by the Support Office. The model allowed the Support Office to ensure that they have managed and controlled processes in place to successfully and consistently develop quality software that meets their clients’ needs. Using the CMMI for Development model, AEM has been able to address software development activities in a very detailed manner for its other contracts. In applying the CMMI for Services model to the Support Office contract, AEM’s process implementers found that the essential elements of software development still existed in the Services model. They found that these elements, including Configuration Management, Requirements Management and Development, Technical Solution, Product Integration, Verification, and Validation, are addressed sufficiently in the Services model, though not necessarily as separate process areas. Given that the scope of the Support Office’s SCAMPI appraisal was focused on maturity level 2 process areas, Service System Development was not included. As the Support Office improves its processes and strives to achieve higher maturity, Service System Development will come into play.

Organizational Processes

In the past, one of the Support Office’s weaknesses arguably could have been its employee training processes. An individual team member’s knowledge was gained primarily through his or her experiences with handling user requests and working on-site with users. The experiences and relationships built with other team members in the office were also a great source of knowledge. This caused some problems for newer team members who did not have the same exposure and learning experiences as the other team members. The process of applying the CMMI for Services model helped the Support Office focus on documenting standard processes and making them available to all team members. All team members have access to the same repository of information. Lessons learned from site visits are now shared with all members of the team, and lessons learned by one team member more quickly and systematically become lessons learned by all team members. This focus has allowed the Support Office to turn a weakness into a strength. This strength was noted during the SCAMPI appraisal.

Implementation Results

Using the continuous representation, the Support Office achieved a maturity level 2 rating (via equivalent staging) with the CMMI for Services model. One of the greatest challenges the Support Office faces in its attempt to achieve advanced process improvement and higher capability and maturity ratings is an organizational issue. While the nature of the Support Office work is services rather than development, the team must integrate itself with AEM’s overall process improvement efforts, which are more in line with development. The Support Office, AEM’s CMMI for Development groups, and its Process Improvement Team will have to work together to achieve greater cohesiveness and synergy. Interestingly, the Support Office may become the first AEM group to measure its processes against higher maturity levels. Its processes as a service organization, rather than as a development organization, lead it closer to achieving quantitative management and optimization of its processes.

In summary, the CMMI for Services model was an excellent fit for the Support Office. It provides a framework for work management, including capacity and availability management, as well as measurement and analysis opportunities. Its emphasis on service delivery matches the customer support services provided by the group, and helps the group to provide higher-quality and more consistent customer service. The CMMI for Services model is flexible enough to encompass the work involved with developing software products, and also ensures that the processes to produce software and services sufficiently correspond to those used by AEM’s software development contracts.

Enhancing Advanced Use of CMMI-DEV with CMMI-SVC Process Areas for SoS

By Suzanne Miller and Lynn Penn

Book authors’ comments: Suzanne Miller has been working for several years with process improvement and governance for systems of systems. Lynn Penn works at Lockheed Martin, where she leads the process improvement work in a setting with a long history of successful improvement for its development work. Suzanne posited in the last edition of this book that systems engineering can be usefully conceived of as a service. Lynn most recently led a team to improve service processes at Lockheed Martin. Once her team had learned about the CMMI-SVC model, they recognized that service process areas could also provide a next level of capability to already high-performing development teams. In other words, Lynn proved in the field what Suzanne had conceptualized.

The term system of systems (SoS) has long been used to describe development programs that combine a collection of task-oriented systems to produce a larger system whose functionality surpasses the sum of its constituent systems. The developers of these “mega-systems” require discipline beyond the engineering development cycle, extending throughout the production and operations lifecycles. In the U.S. Department of Defense context, many programs like these were among the early adopters of CMMI-DEV. The engineering process areas found within the CMMI-DEV model provide a consistently useful roadmap for developing the independent “subsystems.” Given a shared understanding of the larger vision, CMMI-DEV can also help to maintain a focus on the ultimate functionality and quality attributes of the larger system of systems.

However, the operators of these complex systems of systems are more likely to see the deployment and evolution of these SoS as a service that includes products, people to train operators in the use of the SoS components, procedures for technology refresh, and other elements of a typical service system. From their viewpoint, an engineering organization that doesn’t have a service mindset is only giving them one component of the service system they need. From the development organization’s viewpoint, participating in a system of systems context also feels more like providing a service that contains a product than like the single-product developments of the past. They are expected to provide much more skill and knowledge about their product to other constituents in the system of systems; they have much more responsibility for coordinating updates and upgrades than was typical in single product deliveries; and they are expected to understand the operational context deeply enough to help their customers make best use of their products within the end-user context, including adapting their product to changing needs. Changing the organizational mindset to include a services perspective can make it easier for a traditional development organization to transition to being an effective system of systems service provider.

With the introduction of CMMI-SVC, development programs involved in the system of systems effort are able to further enhance their product development discipline even within the context of their existing engineering lifecycle. As an example, the engineering process area of Product Integration (PI) has always been a cornerstone in effecting the combination of subsystems and the production of the final system. However, as the understanding of the operational context of a system of systems matures, it is also necessary to have the ability to add, modify, or delete subsystems while maintaining the enhanced functionality of the larger system. The Service System Transition (SST) process area provides a useful answer to this need. This CMMI-SVC process area now gives the system of systems developer guidance on methodically improving the product with new or improved functionality, while considering and managing the impact. Impact awareness is important internally to management, the test and integration organization, configuration management, and quality assurance, as well as externally to the ultimate users. All stakeholders share in the benefits of effective transition of new capabilities into the operational context. The planning of the transition coupled with product integration engineering provides both the producer and the ultimate customer with the confidence that a trusted system will continue to operate effectively as the context changes.

Another obvious gap in the development of these systems of systems is the absence of a system continuity plan. At Lockheed Martin, we have found that the CMMI-SVC process area of Service Continuity (SCON) can be effectively translated to “System” Continuity. The practices remain the same, but the object of continuity is the system or subsystem as well as the services that are typical in a sustainment activity. The ability of the producer to plan for subsystem failure, while maintaining the critical functions of the ultimate system, is a focus that can easily be missed during the SoS lifecycle. The prioritization of critical functionality (a practice within SCON) emphasizes the need for the customer and users to focus on the requirements that must not fail even under critical circumstances. Test engineers have always identified and run scenarios to ensure that the system can continue to function under adverse conditions. However, the Service Continuity guidance looks at the entire operations context, which focuses test engineering on the larger system of systems context and the potential for failure there, rather than just the adverse conditions of a particular subsystem.

Capacity and Availability Management (CAM) provides guidance to ensure that all subsystems, as well as deployment and sustainment services, meet their expected functionality and usage. When we use CAM and translate the resources it addresses into subsystems and functionality, CAM becomes a valuable tool in system development. Management can use CAM practices to make sure that costs associated with subsystem development and maintenance are within budget. Developers can use CAM practices to monitor whether the required subsystems are sufficient and available when needed. Risk Management (RSKM), a core process area, can be used to build on CAM to ensure that subsystem failures are minimized. Understanding capacity and availability is critical during SoS stress and endurance testing, since this kind of testing requires an accurate representation of the system within its intended operational environment, as called for in CAM.
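
As a minimal sketch of this translation (the subsystem names, figures, and thresholds are invented for illustration), a CAM-style check might compare each subsystem’s observed capacity use and availability against its agreed targets:

# Hypothetical CAM-style check: compare each subsystem's observed
# capacity use and availability against agreed targets.
subsystems = {
    # name: (capacity_used, availability, capacity_limit, availability_target)
    "sensor-feed": (0.81, 0.9991, 0.90, 0.9990),
    "data-fusion": (0.95, 0.9978, 0.90, 0.9990),
}

for name, (used, avail, cap_limit, avail_target) in subsystems.items():
    flags = []
    if used > cap_limit:
        flags.append("over capacity limit")
    if avail < avail_target:
        flags.append("below availability target")
    status = "; ".join(flags) if flags else "within targets"
    print(f"{name}: {status}")

Flags like these feed naturally into Risk Management (RSKM), which can then track and mitigate subsystem shortfalls before stress and endurance testing exposes them.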

Although these examples of translating CMMI-SVC process areas into the engineering context are specific to large systems of systems, they can be adapted to any development environment. Looking at the CMMI-SVC process areas as an extension to CMMI-DEV should be encouraged, especially for organizations whose product commitments extend into the sustainment and operations portions of the lifecycle. The ability of the one constellation to enhance the guidance from other constellations makes CMMI even more versatile as a process improvement tool than when it is operating as a single constellation. Proactively selecting process areas to meet a specific customer need—for either a product or service, or both—demonstrates both internally and externally a true understanding of the processes necessary to solve customer problems, not just provide a product customers have to adapt to obtain optimal performance. Shifting our mental model from pure product development to product development within the service of supporting an operational need demonstrates a multidimensional commitment to quality throughout the development and operations lifecycle that end users particularly value.

Multiple Paths to Service Maturity

By Gary Coleman

Book authors’ comments: Gary Coleman provides insight from CACI, an organization that has faced a challenge shared by many: which of the CMMI models to use, and whether CMMI and other frameworks, such as ISO, can be used together, or whether an organization must choose one framework and adapt it to all circumstances. Gary demonstrates how CACI has adapted to using the various models and frameworks based on the context of the line of business, following multiple paths to maturity.

CACI is a 13,000-person company that provides professional services and IT solutions in the defense, intelligence, homeland security, and federal civilian government arenas. One of our greatest distinctions is the process improvement program we began more than 15 years ago to achieve industry-recognized credentials that will bring the greatest value and innovation to our clients. This has resulted in CACI earning CMM-SW, CMMI-DEV, ISO-9001, and ISO-20000 qualifications. Across the company, CACI has executed a multiyear push to implement the best practices of the ITIL framework, and the Project Management Institute’s PMBOK. Most recently, CACI achieved an enterprise-wide maturity level 3 rating against CMMI-DEV.

One aspect of the multimodel environment in which we operate is the competing nature of these models. We are becoming adept at dealing with this competition and at selecting the bits and pieces of each model that will support the needs of a given program. Our organizational Process Asset Library contains process assets that support all of these models, as well as best practice examples that we collect and evaluate as we proceed through our project work.

This essay presents three cases of groups within CACI that have pursued credentials supporting their service-related business. None of these teams started from scratch, and each had a different starting point based on prior process work, customer interest, and management vision. They arrived at different end points as well.

• Case 1 adapted CMMI-DEV to its service areas.

• Case 2 uses ISO-9001 to cover services and may move to ISO-20000.

• Case 3 has chosen to move to CMMI-SVC.

All three cases exist in the multimodel world, and have had to make their choices based on a variety of factors, under a range of competitive, technical, and customer pressures.

Case 1: CMMI-DEV Maturity Level 2 to CMMI-DEV Maturity Level 3 Adapted for Services, 2004–2007

CACI’s first move into model-based services took place in a group that was developing products and delivering services. In most cases, the services were not related to the developed product, and while the distinct product and service teams shared a management team and some organizational functions, they had little need to organize across teams or to coordinate joint activities.

The drive to pursue a credential came about as management anticipated their customers’ interest in CMMI credentials. They had already installed an ISO-9001 quality management system to cover their management practices, and that had been a foundation for the first CMMI-DEV effort covering their engineering projects at maturity level 2, which they achieved in 2005.

As the service content of their business grew, it became clear to management that they could apply some of the lessons learned in the engineering areas, and the discipline of the CMMI, to their service activities. In fact, they came to realize that it would be easier to include these service functions in their process activities than it would be to handle them by exception or exclusion. This insight led them to apply the CMMI-DEV model across the board.

There were challenges, of course, especially where the development of common standard processes was concerned. It was not always easy to see how some of the development engineering practices could be applied to services, and there was not much prior industry experience to rely upon. Attempts to write processes that could be applied in both development and service settings sometimes resulted in processes that were so generic in nature that significant interpretational guidance, and complex tailoring guidance, would be needed to support them. A large but very helpful effort went into building this guidance so that the project teams would know what was expected of them in their contexts of either products or services. This guidance was also very useful in putting together the practice implementation indicators for the appraisal, and served as supporting material to remind appraisal team members of the interpretations being relied upon.

In spite of the challenges, there were areas that were easily shared, and their experience showed that many of the project management areas and organizational functions could be defined in a way that allowed for their use in both engineering and service related projects. They defined methods and tools for the management of requirements and risks, for example, that could be easily visualized, taught, and executed in both domains. These successes have encouraged them to continue to look for other areas where their process efforts can be shared, and to concentrate on improvements that can be leveraged across both their services and their product development programs.

Case 2: CMM-SW to CMMI-DEV and ISO 9001

This group consisted of two main organizations: one that developed and maintained a suite of software products, and another that supported the customer’s use of those products in the field (installation, training, help desk, etc.). These two programs shared a common senior management function and an overarching quality assurance support function, but little else. In fact, initially the two programs were under different contracts and interacted with very different client communities.

The drive to CMM for their engineering side began as a result of the customer’s interest, and the CACI team worked closely with the customer teams so that both CACI and the customer achieved separate CMM ratings, and eventually CMMI ratings, at nearly the same time. Successes with the results of CMM-based process improvement led enlightened management to recognize the potential benefit of applying process improvement techniques on the service side. ISO-9001 was chosen at that time because the company had a considerable experience base in implementing ISO-9001-based Quality Management Systems in other areas, and no other suitable frameworks could be found at that time.

Although the ISO-9001 and CMMI systems developed in isolation at first, senior management noticed that similar processes, tools, reporting methods, and training existed in both areas. This revelation was the drive that started an effort to consolidate similar approaches for economies of scale. Even though the types of work being done in each area were quite different, there were many areas of commonality. This was especially true in the management and support processes, where the common process improvement team could define single approaches. A side benefit of this effort was that a new mindset that emphasized the potential for common processes overcame an older mindset that focused on the differences between the development and service teams.

When the customer proposed that the two contracts be brought together under a single umbrella contract, CACI was ready to propose a unified approach that not only included shared processes in areas that made sense, but also showed how the customer would benefit from the increased interconnectedness of the programs under CACI’s management. Tighter connection between the service-side help desk staff and the development-side requirements analysts would improve the refinement of existing requirements and uncover new requirements that would enhance both the products and their support. That same connection improved the testing capabilities because the early interactions of the development and service sides ensured better understanding of customers’ needs. The help desk and other customer-facing services could be more responsive to customers through improved communications with developers, and much of this would be due to the synergy of a common language of process and product.

This group has watched other groups within CACI that have implemented ISO-20000 on their service activities, and has also begun to evaluate whether the CMMI-SVC model offers benefits to them and their customer. They wonder whether they have already achieved many of the benefits of the common set of foundation processes that the CMMI constellations share. They appreciate the flexibility of the ISO-20000 standard, and its relative clarity of interpretation. In addition, the sustainment aspects of the ISO-20000 external surveillance audits (missing in the CMMI world) appeal to the people who have relied on such audits in the ISO-9001 world. In the end, the decision of which way to go next will be heavily influenced by the customer’s direction.

Case 3: CMM-SW to CMMI-DEV Maturity Level 3 and Maturity Level 5 to CMMI-SVC

This is another group with a long history of process improvement within CACI, dating back to the days of CMM for Software. Their engineering and development efforts cover the maintenance of legacy software applications, the re-platforming of some of those legacy programs, and the development of new programs. In addition, this same group provides a variety of specialty services to its customers, ranging from Six Sigma consulting to the installation and maintenance of servers and data centers.

The drive to CMM, and subsequently to CMMI-DEV, came initially from the customer, but was fueled by CACI management’s recognition that a credential would represent a “stake in the ground,” establishing their capability. At the same time, it would also support the improvement and growth of their capability to satisfy their customer’s growing needs for higher-quality deliverables. When the customer raised the bar and identified a requirement for maturity level 4, this team had already begun their move to high maturity, targeting maturity level 5. They achieved level 5 and continue to grow their quantitative skills to this day.

This foundation in process improvement, and a culture of always looking to get better, was the basis for new inquiries into where they could apply their skills, and the obvious candidates were the services areas of their business. By now, the company had achieved several ISO-20000 credentials in a number of other IT groups, and there was certainly a motivation to take advantage of these prior successes. However, the strong CMMI awareness of this group inclined them to consider the CMMI-SVC model. The fact that the CMMI-SVC model covered a broader range of service types was important to the team in its decision, and the fact that the model was organized in a way that allowed them to “reuse” and “retool” existing process assets and tools for the shared process areas is what sealed the decision to go with CMMI-SVC.

A formal analysis was done, and the results were shared with both CACI management and the customer in a kickoff meeting where the plan to move forward with CMMI-SVC was presented. They anticipate that other customers will eventually see the value of the credential. They expect that CMMI-SVC will help them to grow their process excellence into the services areas, giving them the benefits that they have had for so long in their engineering and development areas.

With these three cases, all within one company, it’s clear that the claims of the CMMI product team at the SEI have merit. Adopters can use CMMI-DEV in service domains and CMMI-SVC in development, as well as in their primary intended discipline. In addition, CMMI models coexist in a compatible fashion in organizations using ISO, PMBOK, and other frameworks.

Using CMMI-DEV and ISO 20000 Assets in Adopting CMMI-SVC

By Alison Darken and Pam Schoppert

Book authors’ comments: SAIC has been a participant on both versions of the CMMI for Services model teams, and Alison Darken and Pam Schoppert worked on the initiative to adopt CMMI-SVC at SAIC. Like the prior essay authors, they found that it is feasible. They also found that it required care, even with their deep knowledge of the new model. They offer some detailed experience on what kinds of services seem to suit CMMI-SVC content best and what terminology differences among the models may call for additional understanding and adaptation of even a robust process asset set.

Upon release of Version 1.2 of the CMMI-SVC model, SAIC began developing a corporate-level CMMI-SVC compliant set of process documents, templates, and training as part of SAIC’s EngineeringEdge assets. The objective was to develop a CMMI-SVC compliant asset set consistent with existing process assets without exceeding CMMI-SVC requirements. The expected relationship between CMMI-SVC and the other models and standards already implemented in corporate asset collections was key to planning this development effort. At the time development began, SAIC had asset sets compliant with CMMI for Development (V1.2), ISO 20000, and ISO 9001:2000 (see Figure 6.1).

Figure 6.1 Preexisting SAIC Asset Sources

image

SAIC’s earlier work in organizational process development and adoption provided the CMMI-SVC initiative with the following:

• An understanding of the overall CMMI architecture and approach

• ITIL expertise

• Expectations of reuse of assets in existing “views” (i.e., asset collections)

Nine months after the project started, the CMMI-SVC compliant view was released, on time and within budget, using a part-time development team of four process engineers and managers. We learned that our existing understanding and resources, based on CMMI-DEV and ISO 20000, while helpful overall, were also the source of many unexpected challenges. Following are some of the key lessons we learned from our effort.

Understanding the Service Spectrum

It became apparent very early on that CMMI-SVC was not a standard, cookie-cutter fit for the myriad service types performed by SAIC and other companies. Some services better match the CMMI-SVC definition in that they are simultaneously produced and consumed (e.g., help desk and technical support services). On the other side of the spectrum of service types are those that are more developmental in nature, but do not involve software or system engineering full-lifecycle development. Examples include training development, engineering studies, and independent verification and validation (IV&V).

Organizations need to understand their particular service types and realize that service type can affect application of the CMMI-SVC model (see Figure 6.2). We found that the model’s service specific process areas were easiest to apply on the right side of the spectrum, where service delivery was immediate upon request. Process areas such as Service Delivery (SD) and Incident Resolution and Prevention (IRP) were a natural fit; core process areas in the area of project management required more manipulation. In contrast, services to the left of the spectrum required up-front development before a service request could be satisfied. Often, considerable design, development, and verification were necessary to deliver just a single instance of a service, such as the development and delivery of a training class. We also had to address the development of tangible products that often accompany these services and require the same level of rigor as products produced under CMMI-DEV. The model assumes such products are part of the service system, but in some cases they are unique to that one service request and, at SAIC, are not properly treated as part of the service system (e.g., an engineering study). Because of the developmental feel and reduced volume of service instances, service specific process areas such as Service System Development (SSD) and Capacity and Availability Management (CAM) required more massaging. Understanding the service spectrum and the specific service catalog of an organization can ease adoption and implementation of CMMI-SVC.

Figure 6.2 Applying CMMI-SVC to the Service Spectrum

image

Rethinking the Core Process Areas

Understanding the CMMI-SVC model required a paradigm shift. Prior knowledge of CMMI can be both an asset and a handicap in dealing with CMMI-SVC. To avoid going down the wrong path, we had to make a very conscious effort to avoid over-generalizing from past experience to CMMI-SVC. For instance, for some core process areas (the Process Management process areas, Measurement and Analysis [MA], and Configuration Management [CM]), the practices were the same but needed to be viewed from a different perspective. The following are the areas in which we encountered a particular challenge.

• It was necessary to move from the project-centric view of CMMI-DEV to a “program”-oriented process, where a program was an ongoing activity that accommodated many specific projects, tasks, or individual service requests. ISO 20000 reflects this perspective, but with CMMI-SVC we were fighting the slant of the core process areas and our own past history with CMMI.

• An appreciation of the two-phase nature of a program was key. During establishment, the program matches well with a project and with the practices in CMMI-DEV. This phase produces an actual product, the service system, at its conclusion. After service delivery begins, the picture shifts away from development.

• The centralized notion of metrics used on software projects had to be rethought as a distributed activity in terms of collection, analysis, and use. The focus of metrics also shifted from measuring the progress of a project, which remained applicable during service system development, to measuring performance once delivery began.

• The CM process area in CMMI provided too little guidance and asked too little to responsibly serve an IT service project. We struggled with the tension between maintaining our goal of not exceeding the expectations of the standard and the knowledge that an IT services program would get in a lot of trouble with only that minimal level of rigor.

• It took some retraining to avoid conflating defects with incidents.

• As explained in the next section, we adopted the term problem to refer to underlying causes. The goal in IRP related to this activity did not require the full application of Causal Analysis and Resolution (CAR), nor even CAR stripped of its quantitative management component. We had to make a particular effort not to require more in this area than CMMI-SVC required.

• CMMI-SVC required more detailed, lower-level assets for IT services. We had to find a solution for this while still maintaining a flexible, generic, corporate-level set. We adopted a template for manuals that contains shells for the needed detailed work instructions.

Understanding Customer Relationships

SAIC works through contracts with government and commercial customers. In a company such as SAIC, two situations need to be addressed: SAIC provides a standard set of services for many customers (see Figure 6.3), and SAIC performs one or more dedicated services under contract for a particular customer (see Figure 6.4). Our approach to establishing the service system and on-boarding customers varied based on how and for whom the service was provisioned. This led to distinct service lifecycles.

Figure 6.3 Service Provider Paradigm

image

Figure 6.4 Contract-Based Paradigm

image

The lifecycles shown in Figure 6.3 and Figure 6.4 represent two distinct paradigms in our services business.

• An organization may use the “field of dreams” approach of “If you build it, they will come.” In this paradigm, the organization has a vision for a service before obtaining customer commitments. The activities related to on-boarding customers come after the service catalog is developed in STSM and the service system is established in SSD. This situation was addressed by a service provider lifecycle.

• Organizations may provision services in response to a defined service agreement. This flow begins with the identification of and contractual negotiation with a customer, followed by establishment of the service system and service delivery. In this situation, the contract-based lifecycle is employed. It still mandates alignment to strategic organizational service management.

Both lifecycle models have the same key elements, but the order of activities varies.
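To make the ordering difference concrete, the following minimal Python sketch (ours, with hypothetical phase names rather than actual SAIC asset names) expresses the two lifecycles as different orderings of the same key elements.

# Illustrative sketch only: the two lifecycles expressed as orderings of
# the same key elements. Phase names are ours, not SAIC asset names.
SERVICE_PROVIDER_LIFECYCLE = [
    "envision service",             # vision precedes any customer commitment
    "develop service catalog",      # STSM
    "establish service system",     # SSD
    "on-board customers",           # commitments follow the built system
    "deliver services",             # SD
]

CONTRACT_BASED_LIFECYCLE = [
    "negotiate service agreement",  # a specific customer and contract come first
    "establish service system",     # SSD
    "deliver services",             # SD, still aligned to STSM strategy
]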

To better understand the decisions made, it helps to be aware that, overall, our service asset view, like our other asset views, has the following features.

• They are primarily intended for work efforts that support external customers.

• They must comply with internal SAIC corporate policies including program management, contract management, and pre-award policies. Some of these fall outside of CMMI, but are essential to the way SAIC does business.

Some work efforts or contracts are equivalent to fulfilling a single service request (e.g., one engineering study, one training course). This presents some challenges since process areas such as STSM, SSD, and IRP are not cost-effective or practical for a single request. Our solution was to develop an organization-level approach for these efforts to avoid over-burdening such small contracts with the requirements of these process areas and to shift the overhead to the organization owning the contract.

Understanding the New Terminology

We found that some terminology used in CMMI was not in sync with ISO 20000 and ITIL as already implemented in our environment. Sometimes the ITIL and ISO terminology was a better fit on its own merits. There weren’t many such terms, but their use was pervasive.

• “Service request” has a very specific meaning in the ITIL and ISO 20000 world, and it is not the same as the way it is defined in CMMI. An ITIL service request is a preapproved, standard request (e.g., change out a printer ribbon), as opposed to the use in CMMI to encompass not just preapproved, standard requests, but all customer requests. Because ISO 20000 and ITIL were already present in the organization, we were concerned about confusion. We also thought it was very useful to have a specific term for preapproved, standard requests since their handling differs from nonstandard requests.

• A term was needed to easily refer to the pursuit and management of addressing underlying causes. The term problem is used in ITIL for this purpose and makes it much easier to talk about the subject.

• As discussed earlier, management needed to be addressed in terms of programs, not just projects. With CMMI-SVC, Version 1.3, the term work is introduced in place of project. While this is helpful in avoiding some of the assumptions associated with project, in the SAIC environment, we see a need to address both program type management and project management in a service context. The ongoing service effort is managed as a program and larger individual service requests or service system modifications are managed as projects.

Understanding How to Reuse Existing CMMI Process Assets

At the time we developed our CMMI-SVC compliant set, SAIC had three existing CMMI-DEV compliant asset views from which to draw: Software, Systems Engineering, and Services. The latter attempted to fit services to the CMMI-DEV model. None of the assets unique to the preexisting Services View were reused, nor were any unique to the Systems Engineering View. All CMMI-DEV view assets that proved useful for CMMI-SVC were found in the Software View or were standard assets shared across multiple views. The exact relationship with the Software View is as follows.

• Sixteen percent of the Services View assets, mostly organizational assets, could be shared between views without modification.

• Twenty-seven percent, mostly from PM, QA, CM, and peer review, could be shared with relatively minor modification.

• Twenty-seven percent, unique to the Services View, were developed from a Software View source.

• The Software View was indicated in the Services View as an optional source for assets to support service system development (e.g., the Software Test Specification Template).

The types of modifications required for sharing were almost exclusively due to the following.

• Creation of assets unique to the Services View: References then needed to cite both the Software View version and the Services View version (e.g., citations might be required to both the Configuration Management Plan Template used in Software and Systems Engineering and the Service Configuration Management Plan Template used only in Services).

• Differences in terminology: Even in describing core process area activities, the same terminology would not always work for both software and services. For instance, using problem to describe underlying causes overloaded a term that had always been used as a synonym for defect in SAIC CMMI-DEV views. This meant we could not share our change tracking form between views.

There were some cases, such as metrics, that differed enough to require a unique asset with a different approach. In a significant number of cases, however, we didn’t feel that an asset could be shared despite often directing virtually the same activities and corresponding to the same practices. The motive for not sharing, in the majority of cases, was user convenience. Our philosophy is that ease of use is more important than reducing complexity from the organization’s perspective. User pushback is much more difficult to address than finding ways around maintaining multiple versions of the same thing, each suited to a different user community. We wanted to spare users from plowing through too many references to alternative versions of specific documents and so forth. This was the main reason, for instance, for developing a configuration management plan template for services distinct from that used for software and systems engineering.

We were also able to share or use as a source several of the customized training classes supporting the Software View process. Twenty-nine percent of the classes for the Services View had sources in the Software View. Forty-three percent were shared with the Software View after some modification.

Understanding How to Use and Reuse ISO 20000 Assets

Five percent of the assets unique to the Services View, all related to CMMI-SVC specific process areas, were developed from an ISO 20000 source. The small number of ISO 20000 sources is misleading. The assets developed from them were among the most significant in the revised Services View and would have required a great deal of effort to create without a source. They included plan templates for capacity and availability management, continuity management, and service transition, and a request, incident, and problem manual.

We were surprised that we couldn’t share service specific assets and had to do so much work to create the revised Services View version from the ISO 20000 source. This was due to the following factors.

• Terminology differences. This was not as serious as it might have been, because in many cases, we adopted the ISO 20000 or ITIL terminology (e.g., service request, problem management).

• References to other ISO 20000 assets not included in the Services View.

• Lack of overlap between CMMI and ISO 20000 as to what functions received more detail, rigor, or elaboration. This was a serious issue. Some examples include the following.

• QA: ISO 20000 appears to leave the definition of QA as understood in CMMI to the ISO 9001 standard. The only auditing discussed in the ISO 20000 standard refers to auditing against the ISO 20000 standard itself. Our ISO 9001 View did not contribute to our CMMI-SVC effort. Where the ISO 9001 scope overlapped with CMMI-SVC, we had better-matching CMMI-DEV resources available.

• CM: ISO 20000, as an IT standard, focuses on some recordkeeping issues that are essential to IT services (e.g., a configuration management database [CMDB]), but wouldn’t necessarily be needed in all services. It also delves more deeply into change management. Document management is specifically addressed.

• PM: ISO 20000 frequently focuses on different aspects of project management than CMMI, such as the customer complaint process and financial management.

• Organizational process: In ISO 20000, the organization and the service program are essentially identical; therefore, OPF, OPD, and OT occur within the service program.

• Structural differences as to how activities were grouped. Although this might not seem to make any difference—a practice is a practice no matter how you organize the discussion—it does result in certain natural groupings of responsibilities and practitioner ways of thinking. The main examples are the interplay among availability, continuity, and capacity, and the definition of CM.

• In ISO 20000, availability and continuity are paired, whereas CMMI joins availability and capacity. Furthermore, the expectations for what would be discussed within each differed in some aspects. We had created separate ISO 20000 plan templates for each of these three areas as well as a manual template for capacity management—four documents in all. For the Services View, a Capacity and Availability Plan Template and a Continuity Plan Template were sufficient to address the practices. The requirement for model representations of the service isn’t included in ISO 20000.

• In ISO 20000, the CMMI CM practices are distributed among three functional areas: Release, Change Management, and Configuration Management. Configuration Management is confined to the recordkeeping and repository maintenance function. Attention is also given to document management. These factors introduce a larger number of practices that need to be met and a multiplicity of roles that in the CMMI-SVC View would be assigned to the CM manager and maybe to the CM team.

Specialized ISO 20000 process training courses were heavily used as sources for 29 percent of the CMMI-SVC process classes.

Conclusion

Going into the development effort, we had initially expected a relatively painless exercise in which we would be able to reuse our CMMI-DEV assets for CMMI-SVC core process areas with relatively little modification, and base our assets associated with service specific activities directly on ISO 20000. We discovered that these transitions were not as effortless as we had hoped, and that, in fact, without exercising some care, prior knowledge of CMMI-DEV and ISO 20000 could take us down the wrong paths. On balance, it was very valuable to have a foundation in CMMI-DEV before attempting to take on CMMI-SVC, but implementers should exercise caution about applying what they know, and approach process adoption for services from a fresh perspective.

Experience-Based Expectations for CMMI-SVC

By Takeshige Miyoshi

Book authors’ comments: Takeshige Miyoshi has had a rich and varied career in engineering and process improvement in Japan. Many development organizations realize that CMMI-SVC may be particularly amenable to the service arms of an organization that has enjoyed success with Software CMM or CMMI-DEV. This eminent engineer traces the history of the earlier models and affirms how CMMI-SVC fits into that lineage. Then he goes further, and notes that CMMI-SVC content has something to offer for development environments as well.

One of the impressions that struck me most when I first read through the CMMI-SVC model was that it promises to be a widely useful process model. This impression is based on my ten years of field experience maintaining part of a mainframe operating system, and my experiences observing the implementation status of SW-CMM and CMMI in a variety of real-world situations through performing a number of assessments and appraisals. Also, its thoughtful model structure could provide benefits to many users of CMMI-DEV. Since 1994, I have been heavily involved in assessments and appraisals using the SPICE framework and the CMM (SW-CMM and CMMI) models at many companies in Japanese industries.

Expectations for CMMI-SVC to Be a Promising Model

The most prominent feature of the CMMI-SVC model is its applicability to a variety of business fields. This feature comes from its thoughtful model structure: corresponding to the Engineering category in CMMI-DEV, CMMI-SVC has the Service Establishment and Delivery category, which includes five process areas: Strategic Service Management (STSM), Service System Development (SSD), Service System Transition (SST), Service Delivery (SD), and Incident Resolution and Prevention (IRP).

In this category, the service system most appropriate to an individual business context, for a wide range of service organizations, is developed using SSD according to the strategic needs and plans for the organization’s standard services defined in STSM. After the service system is deployed to the service delivery environment using SST, actual services are delivered to customers by operating the service system using the practices in SD. During service delivery, if service incidents occur, they are handled and resolved using the practices in IRP. This series of service delivery activities is managed and supported by the CMMI model’s common Project and Work Management and Support categories. Furthermore, CMMI-SVC, like other CMMI models, provides a clear improvement path. Namely, the performance and quality of these service activities are improved, step by step, primarily by functions covered by the process areas in the Process Management category.
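As a rough illustration of this flow, the following Python sketch (our own simplification, not part of the model; every function is a hypothetical stub standing in for the practices of one process area) traces the sequence just described.

# A rough sketch (ours, not part of the model) of the flow described above;
# each stub stands in for the practices of one process area.

class ServiceIncident(Exception):
    """Raised when service delivery is interrupted."""

def define_standard_services():          # STSM: strategic needs and plans
    return ["standard service definitions"]

def develop_service_system(standards):   # SSD: build the service system
    return {"components": standards}

def deploy(service_system):              # SST: move to the delivery environment
    print("service system deployed")

def fulfill(service_system, request):    # SD: operate the service system
    print("delivering:", request)

def resolve_and_prevent(incident):       # IRP: handle and resolve incidents
    print("incident handled:", incident)

def run(requests):
    system = develop_service_system(define_standard_services())
    deploy(system)
    for request in requests:             # ongoing service delivery
        try:
            fulfill(system, request)
        except ServiceIncident as incident:
            resolve_and_prevent(incident)

run(["password reset", "printer repair"])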

In CMMI-DEV, software, hardware, or system products that are to be delivered to customers are developed using the process areas in the Engineering category. On the other hand, in CMMI-SVC, the SSD (Service System Development) process area, which is analogous to the Engineering process areas of CMMI-DEV, develops the organization’s service system to be used for delivering services. Actual services are provided to customers by operating this service system.

In CMMI-SVC, to develop an organization’s service system using SSD and to clearly define how to operate the service system, the organization must understand the real status of its working service sites. In what working environments, and using what kinds of tasks, are the workers providing daily services? Therefore, you must inevitably adopt the “site-first” principle, namely, attaching importance to what is happening and what is being done at the service site or on the shop floor. By doing so, “process descriptions are consistent with the way work actually gets done,” as you can see in the first paragraph on slide 8 of Module 2 of the Introduction to the CMMI-DEV, V1.2 material. By holding to the “site-first” principle and following the best practices of SSD, an organization can develop an excellent service system that becomes the basis for providing superior services.

My experience-based intuition tells me that the principle of “providing intangible and non-storable service products by operating an organization’s specific service system” could be thoughtfully applied to a variety of organizations that find implementing all of the CMMI-DEV practices a heavy burden. In addition, it could provide an opportunity for those organizations that want to improve performance and product quality to thoughtfully interpret the model’s practices considering the organization’s own business objectives.

A Prelude

Back in the early 1960s, I started my career as an electrical engineer at an electric company that produced various kinds of broadcasting machines, including large-console-type videotape recorders. I often recall those old days with strong feelings. Having several production divisions with an independent quality assurance group, the company was growing rapidly in the field. However, labor union movements were strong at that time in the electric industry, and we had several weeks of labor strikes every six months in the factory. Even in this unstable working environment, when we had process definitions for the production cycle and management really understood what was happening at the production site, it was much easier to produce high-quality products on schedule. Here, I learned that the “site-first” principle is most important for producing high-quality products. After examining CMMI-SVC, I see that the Service Continuity process area has practices that could have helped us to deal with these disruptions.

After joining a pioneering independent software house in Japan, Software Research Associates, Inc. (SRA), in 1970, I experienced a number of application and basic software development projects and a variety of maintenance projects, as well as a national R&D project. From the early 1970s especially, I was fortunate to participate on the team that developed and maintained the IOCS (Input and Output Control System) of the Operating Systems for UNIVAC 1100 series large-scale computers. This was in the golden days of UNIVAC Japan, one of the leading mainframers in Japan. UNIVAC’s large-scale computers were installed one after another at a number of organizations in various fields. Since IOCS is a necessary piece of software for inputting users’ data to the computer and for seeing the processing results, it was installed and used at more than 100 sites on this island, and accordingly, I experienced various kinds of trouble.

I often visited users’ sites from northern areas to southern regions of the island. In this maintenance service environment, service requests came from users of the UNIVAC 1100 Series via Field Support Groups, using User Service Request sheets. In light of CMMI-SVC, this would correspond to the request management practice in the Service Delivery process area. One time, we received an urgent request about serious trouble from one of the remote site users. Quickly, I jumped to the remote site by taking the Shinkansen and local trains. At midnight, I tried every possible method to reproduce the phenomenon, did huge tracing memory dumps, analyzed them, and found system bugs that could occur in very rare cases in conjunction with hardware functions. When I organized a brief report about my troubleshooting, the bright sun rose from the eastern sky. All of this is a dear memory of the old days.

Although we didn’t have a clearly defined maintenance process, we had a structured flowchart of the IOCS functions. This chart could be the service system representation called for in the Capacity and Availability Management process area or one of the important service system components that is developed using the Service System Development process area and is used in the Service Delivery process area of CMMI-SVC. In this case, instead of waiting for the user’s information from the remote site, I quickly visited the user site by myself and took a series of speedy actions: doing tracing memory dumps, analyzing them, and quickly developing a trouble report to explain it to the real user. In addition, we prevented similar future troubles by updating the IOCS software. Experienced service providers also recognize this activity as incident handling, as expressed in the Incident Resolution and Prevention process area of CMMI-SVC. Through these ten years of service tasks at UNIVAC Japan, by actively dedicating myself to tackling various kinds of troubleshooting, I had an opportunity to learn how software engineering theory works in real-world situations, and also that the “site-first” principle is most important. Those are my favorite memories of the old days.

My CMM Experience

In 1996, I joined the SEPG at Fuji-Xerox Corp. to promote the first formal CMM-based SPI project in Japan. After more than two years of effort, the organization achieved maturity level 2 for the first time in Japan, which had a good effect on many companies in Japan, drawing attention to their processes as a way to keep to their schedules and produce quality products. During my three years of experience on this project, I learned what could lead to a successful SPI project and what could lead to an unsuccessful one. Together with my experiences from my early days, this renewed my belief that the “site-first” principle would work here, too!

Soon after becoming a CBA-IPI lead assessor in 1999, I started supporting CMM training courses and formal assessments at various companies in Japan, including early adopters of SW-CMM such as Toden Software Inc. (currently called “TEPSYS”) and SRA. This professional support for many companies has continued to the present, using CMMI-DEV, and will continue with CMMI-SVC in the very near future.

From Compliance-Driven Improvement to Performance-Driven Improvement

As we can see in one of the three major drivers of revisions to CMMI, “Increasing confidence in and usefulness of SCAMPI appraisal results,” there is no denying the fact that until several years ago, many CMM(I) users were apt to rush to achieve a maturity level by conducting “compliance-driven improvement.” Times have changed, however, and we must promote SPI activities seriously under the motto of “performance-driven improvement,” considering the severe economic situation we have experienced in recent years.

To promote performance improvement, it is most important to think about how to make good use of the process assets sharing framework in a way that functions most appropriately in the organization’s business environment. CMM(I)’s traditional “conceptual software process framework,” depicted in Figure 4.1 of the classic Software CMM book, is still vividly alive in CMMI-SVC and can create organizational business effects. This is, so to speak, the “Process Assets Sharing” framework: the organization’s best practices, lessons learned, and data about its processes are systematically collected, disseminated, and shared throughout the organization. This is one of the most important concepts, and it has been used consistently throughout the model’s evolution, from SW-CMM to CMMI-DEV and CMMI-SVC.

In most large organizations in which large- and mid-size development projects are running, the six process areas in the Engineering category of CMMI-DEV are considered to have reasonable best practices. However, many organizations that have smaller projects or service functions have found that those Engineering best practices have been a heavy load to implement. Until a few years ago, some service organizations used CMMI-DEV practices, although they were not fully aligned with service activities. Using CMMI-SVC, they will be relieved from the restraint of implementing all of the practices of the Engineering category of CMMI-DEV.

Several years ago, I participated in SPI activities at a service organization that provided software support for open source customers. At that time, I had a hard time flexibly interpreting the practices of the Engineering process areas to map them to small-span engineering tasks in the organization’s specific situations, although I seriously referred to the SEI’s technical note on interpreting CMMI-DEV for service, Interpreting CMMI for Service Organizations – a System Engineering and Integration Service Example [SEI 2003].

But now, fortunately, we have the promising process model, CMMI-SVC! This model addresses the needs of a wide range of service types by flexibly and effectively using every past experience of CMM(I) models, and provides a clear improvement path. As I have shown, even in development organizations, some of the practices of the CMMI-SVC process areas are helpful. And I believe that the aforementioned principle of “providing service products by operating an organization’s specific service system” could be thoughtfully applied not only to a variety of organizations that feel that implementing all of the CMMI-DEV practices is a little heavy, but also to many of the potential new CMMI users in a wide range of service types. In addition, by disseminating the usability of this principle, I would like to help produce an atmosphere of “performance-driven improvement” in the real world so that many CMMI-DEV users can really improve their performance and quality by deeply and thoughtfully interpreting the model practices in light of their organization’s business context and objectives.

An IT Services Scenario Applying CMMI for Services: The Story of How HeRus Improved Its IT Services

By Drew Allison of SSCI

Book authors’ comments: In this essay, Drew Allison, a Certified ITIL V3 Expert and Certified Lead Appraiser and Instructor for CMMI-DEV and CMMI-SVC from the Systems and Software Consortium, draws on her experiences bringing CMMI to IT service organizations to provide some observations and a scenario that reflects an amalgam of those experiences. These experiences indicate that some of the challenges that IT service organizations face in implementing CMMI-SVC are similar to those that were faced by development organizations in the early days of adopting SW-CMM and CMMI. However, some challenges are due to the unique characteristics of services discussed earlier in this book. At least for organizations like the one in the scenario, unique challenges are caused by the business and service delivery environments they face as external IT service providers. The good news is that ITIL and CMMI play very well together. Many of the assets developed over years of ITIL and CMMI implementation can be leveraged to speed up and strengthen the adoption of good IT service management and delivery practice, resulting in improved IT service performance and quality (and eventually, reduced cost).

Observations

Organizations like the one in the scenario I’ve provided are external IT service providers, and they have internal IT departments. One of the great challenges they all face as IT contractors is managing the variety of services they provide to many different customers in a competitive environment in which transitioning in a complex service system, operating the service system at required service levels, and transitioning out may be forced to happen within very short time periods, all while operating “to the bone.” Customers often do not know enough about CMMI or ITIL to understand their own critical role in the successful implementation of both frameworks. Therefore, customer participation in fulfilling the true intent of CMMI-SVC and ITIL may be lacking. The customer may not allow adequate time for contractors to institutionalize good practices and experience performance results, which can result in frequent contractor turnover. Poor acquisition practice can further aggravate these issues.

Other challenges relate to who is responsible for implementing CMMI-SVC (often legacy CMMI-DEV process groups with little understanding of services). These issues could be categorized as knowledge management and organizational issues. There isn’t much IT service contractors can do about the competition and customer maturity. However, the following observations will concentrate on knowledge management and organizational issues that companies can influence.

All of the companies that inspired the scenario implemented CMMI-DEV and achieved maturity level 3. This means they had a functioning process infrastructure, which included process and training groups with CMMI-DEV expertise and assets such as standard processes, training, and measurement data collected from development projects and other groups (but not service groups). All of the companies had active IT service improvement groups focused on the ITIL (Information Technology Infrastructure Library) framework, separate from the CMMI process groups. The core members of the CMMI process groups had many challenges to work through, which included these:

• Mastering an understanding of what services are about, including where and how CMMI-DEV assets can and cannot be leveraged

• Being able to communicate with the ITIL group despite different terminology, framework purposes, structures, and levels of abstraction

• Working through the political and organizational challenges (which included obtaining charge codes for the time and resources necessary to coordinate between the two groups)

• Identifying assets developed by the ITIL groups that could be leveraged for the CMMI-SVC effort

Of course, the “elephant in the room” was how or whether these two groups’ paths would cross organizationally. As happens with so many organizations, true coordination between process and performance improvement initiatives for compliance with various standards and frameworks is rarely achieved because they have separate and sometimes competing reporting chains, budgets, incentives, and domain expertise. These differences result in language, cultural, and knowledge barriers. It will take time for the CMMI process groups to either learn about services or recruit members who do understand services and can communicate comfortably with the rest of the CMMI group. Organizational and knowledge management barriers are substantial.

The good news is that if the CMMI process group is operating at maturity level 3, it will have a good training infrastructure in place to bring the new service members of the group “up to speed” quickly on topics they will need to be effective members of the CMMI process group. Topics commonly include the scope of CMMI, process management, measurement, and process and product quality assurance. An active, functional process management infrastructure also serves as an example. Unfortunately, such a functioning infrastructure is not the case in all CMMI maturity level 3 organizations. Some have little or no process maturity in the area of process management and training despite having achieved maturity level 3. For example, no process descriptions for process management activities may be available, or process management may be simply inactive and dysfunctional. (Such dysfunction is often due to a lack of ongoing and consistent senior management support or constant organizational upheaval, including frequent changes in leadership or changes in customer direction regarding the importance of CMMI. Under such circumstances, it is difficult for new process group members to “hit the ground running.”)

Old habits die hard when a process professional has spent years growing and perfecting his or her knowledge in a particular framework. It was difficult for CMMI-DEV groups to stop focusing on schedules, effort, size, and tangible deliverables in favor of capacity, availability, performance, and other aspects of operating, monitoring, and managing the service system. What made understanding these aspects of services even harder is the state of practice in the service industry, which is nowhere near the ideal represented in ITIL and CMMI-SVC.

Most IT service organizations have not yet developed service catalogs (or if they have, the catalogs do not provide great value), are not planning strategically for their service, are not performing capacity and availability management beyond basic monitoring, and are not meeting the intent of service level management (often because they do not have customers mature enough to give them the opportunity to meet the intent). In other words, the processes in a service organization may not “live up to” SVC process area specific goals as well as a development organization might “live up to” DEV Engineering process area specific goals. Therefore, defining processes to satisfy the SVC process area specific goals may require more than discussions with subject matter experts (SMEs) to document how work is currently being done.

Weaknesses in process management, Process and Product Quality Assurance (PPQA), Measurement and Analysis (MA), and training processes plague many service organizations, just as they do many development organizations. Just as MA was the “long pole in the tent” for most organizations implementing CMMI-DEV, so it appears to be for SVC. However, the pole may be even longer given the state of the service industry. Not only are processes not documented, but the practices are neither performed nor managed. There is little focus on measurement objectives, process measurement, or measurement beyond what their tools currently provide automatically.

The situation faced by these implementers of CMMI-SVC was different from their experience with CMMI-DEV in many ways, including the following.

• They didn’t have a background in the services sold by their organization. (Although, of course, they were themselves providers of process improvement services.) For example, CMMI process group members lacked knowledge about how services were managed (e.g., day to day, week to week) and where and how interactions with customers occurred. Attempting to understand and document service activities and to map those activities to CMMI-SVC practices is more difficult when roles and processes are not documented and GP 2.4, Assign Responsibility, is lacking.

• Part of the learning curve involved misperceptions carried over from the use of CMMI-DEV, such as “Configuration Management (CM) doesn’t exist in services because it’s only for software,” or “there’s no place for Decision Analysis and Resolution (DAR) in service operations because that’s about making design decisions during development.” Knowledge of how or even whether services did configuration management was lacking. In one case, the communication barrier between a legacy CMMI-DEV person discussing CM with a services person was so bad that the DEV person walked away with the impression that there was no CM on the services side.

• The gaps between the specific practices of CMMI-SVC and the activities of the organization were larger than they had been with CMMI-DEV due to the state of the industry described earlier. Most shortcomings under CMMI-DEV had related to process maturity and institutionalization practices (e.g., generic goals, process management, training, support) more than to the Engineering practices. This difference left the process team with not only a learning curve to understand services but, for at least some of the specific practices, no SMEs to consult in the organization who could tell them how the practices were performed (because they weren’t). In other words, there was a learning curve for potential SMEs as well as process group members.

• As always, scheduling time with SMEs was a challenge. However, given the dynamic and often unpredictable nature of the services (i.e., the amount and frequency of ad hoc, firefighting activity) and the business pressure to operate “to the bone,” it was more difficult than ever. This shortage of SME availability affected process development and appraisal activities. One appraisal was affected by a major incident that made most interviewees unavailable.

Despite these challenges, there were many bright spots when the “light went on” either for the ITIL group or for the CMMI group. Each realized that an asset existed that one or the other needed. A barrier in communication dropped and they enjoyed an “aha” moment together. Or an organizational barrier showed signs of weakening, such as the CMMI group telling the ITIL group who they needed to contact in the training group to get training defined and coordinated for service roles. Rather than trying to co-opt the ITIL group’s efforts, the CMMI group proved they could be an asset because they had worked through many of the questions and challenges the ITIL group faced. Trust and sharing issues existed between some groups when they feared that their territory was being invaded or co-opted or that their processes would be thrown away or replaced with less useful ones.

Once the CMMI process groups had access to an expert who was fluent in both ITIL and CMMI and who could help them with mapping and other resources, the translation and learning process went considerably faster.

Another bright spot was that existing CMMI-DEV processes for the Process Management, Support, and Project and Work Management process areas were leveraged for CMMI-SVC. However, no “plug and play” or “silver bullet” solutions were available, and the definition of processes was in various stages of completion. The effort required to construct a process solution that works well for both development and service groups should not be underestimated.

ITIL provides insight into how some development processes may be made more useful for IT services. For example, ITIL has excellent IT service processes for CM (see ITIL’s Service Asset and Configuration Management and Change Management processes in the Service Transition book) and Supplier Agreement Management or SAM (see ITIL’s Supplier Management process in the Service Design book). Additional IT service insights for Organizational Process Focus (OPF), Organizational Process Definition (OPD), and MA can be extracted from ITIL’s Continual Service Improvement book and Knowledge Management process in the Service Transition book.

Of course, ITIL provides detailed processes for many of the SVC process areas of CMMI-SVC, such as these:

• Strategic Service Management or STSM (see ITIL’s Service Catalog Management process in the Service Design book and strategic service planning information in the Service Strategy book)

• Service Delivery or SD (see ITIL’s Service Level Management process in the Service Design book and Service Request Fulfillment process and service operation functions in the Service Operation book)

• Capacity and Availability Management or CAM (see ITIL’s Capacity Management and Availability Management processes in the Service Design book)

• Service Continuity or SCON (see ITIL’s IT Service Continuity Management process in the Service Design book)

• Service System Transition or SST (see ITIL’s Release and Deployment Management process in the Service Transition book)

• Incident Resolution and Prevention or IRP (see ITIL’s Incident Management and Problem Management processes in the Service Operation book)

Additional IT service insights may be gained for the Service System Development (SSD) and Project and Work Management process areas by reviewing the Service Design and Service Transition books, though these process areas are more difficult to map into specific ITIL processes because the related content is distributed.
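For quick reference, the mapping just listed can be consolidated into a simple lookup structure. The following Python sketch is our summary of the list above, not an official mapping; it records each CMMI-SVC process area with its primary ITIL processes and source books.

# Our consolidation of the ITIL sources listed above, keyed by CMMI-SVC
# process area abbreviation.
CMMI_SVC_TO_ITIL = {
    "STSM": ("Service Catalog Management", "Service Design; Service Strategy"),
    "SD":   ("Service Level Management; Service Request Fulfillment",
             "Service Design; Service Operation"),
    "CAM":  ("Capacity Management; Availability Management", "Service Design"),
    "SCON": ("IT Service Continuity Management", "Service Design"),
    "SST":  ("Release and Deployment Management", "Service Transition"),
    "IRP":  ("Incident Management; Problem Management", "Service Operation"),
}

for process_area, (itil_processes, itil_books) in CMMI_SVC_TO_ITIL.items():
    print(f"{process_area}: see {itil_processes} ({itil_books})")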

Decades of ITIL use have resulted in additional literature that provides measurement examples for IT services, publicly available service catalog examples, user groups for IT service management (itSMF), and many other resources that will speed the implementation of CMMI-SVC in an IT services organization. Conversely, years of CMMI use have resulted in powerful resources for implementing the effective Process Management (e.g., OPF, OPD, Organizational Training or OT), Project and Work Management (e.g., Work Planning or WP, Work Monitoring and Control or WMC, Integrated Work Management or IWM, SAM, Requirements Management or REQM), and Support (CM, MA, PPQA, DAR, Causal Analysis and Resolution or CAR) practices so critical to institutionalizing good IT service management practice.

What It Looks Like in Practice

With these challenges and opportunities for joint ITIL and CMMI-SVC use in mind, let’s look at a scenario that is fictionalized but drawn from several real-world experiences to demonstrate how ITIL and CMMI-SVC work together in practice. The following scenario describes how a fictional IT service organization called Heroes Are Us (HeRus) applied the CMMI-SVC model to improve its service performance, reduce cost, and increase customer satisfaction. The scenario focuses on four Service process areas in the CMMI-SVC model. Mappings between the scenario and goals in CMMI-SVC are provided to help you make the connection between the scenario and the model and to increase your depth of knowledge about CMMI-SVC. For help with terms, please refer to the glossary.

Introduction to the HeRus Scenario

Ms. Shandra Takie manages the IT department for HeRus, a mid-size (approximately 900 employees), privately held (family-owned) government contractor providing database management, application development, service desk, and data center services primarily to the Department of Defense (DoD). The IT department has 50 employees who support the work of HeRus. Like the employees they support, their motto is to be “Johnny on the spot” (i.e., available and willing to do whatever is needed).

HeRus has aggressive growth plans for the next five years and would like to “go public.” To realize those plans, HeRus must justify and control costs, increase performance, improve quality, and showcase the value its services provide, all while under pressure from competitors, particularly in the area of cost. This means HeRus must adopt industry best practices. Instead of relying on heroes and rewarding “end justifies the means” behavior, HeRus wants to rely on standard procedures and processes across the company that can be adapted to the requirements of each contract.

The business development office scans for requests for proposals (RFPs) from federal and state civil agencies and the DoD for IT services. Bidding on, ramping up for, and shutting down contracts consume a great deal of time and effort at HeRus. The business development office is often far along in developing a proposal before the right technical stakeholders in the company are identified and brought in to provide advice. Sometimes the advice of technical experts is too late and commitments are made to provide services that are not in the best interests of HeRus’s future. The current services and service levels offered are not documented in any centralized fashion. What little information exists on current services is documented in various contracts and service level agreements (SLAs) without a basis in standard services.

Shandra has been assigned the role of IT Service Process Czar with the goal of piloting new IT service processes with the internal IT staff before deploying them to contracts. Shandra attended a recent SEI Software Engineering Process Group (SEPG) conference and learned the importance of aligning services with business goals. Shandra has her own motives for moving forward with the process improvement initiative.

Budget cuts in recent years have reduced support for existing systems and applications as well as delayed the purchasing of new capacity. Shandra wants to show the value to HeRus’s bottom line of the IT services her department is providing. She knows that to support corporate growth plans, an upgrade to IT systems is needed, but in the current climate, strong rationale backed by data would have to be provided.

Shandra also understands that with better data and the means to estimate the capacity and availability required to support HeRus’s growth plans, she can justify needed upgrades and increased automation. Currently, HeRus relies primarily on manual processes that hinder the IT department’s ability to provide quality services at required service levels, so she wants to justify greater investment in tools and automation of processes.

Shandra believes that the more closely IT services and service processes are aligned to business objectives and business processes, the more successful she will be. To achieve success, she must provide greater visibility into the achievements, challenges, performance, quality, costs, and contributions IT makes to HeRus. She must move the IT department from being focused on technology and infrastructure to being focused on service, with business objectives and processes driving IT service plans and processes.

Service Delivery (SD)

Shandra has had no SLA for IT operations, but the number and frequency of complaints indicate that IT is not meeting expectations. A service-level management process owner is appointed to address how HeRus plans, coordinates, agrees (Service Delivery process area), monitors, and reports on SLAs (Work Monitoring and Control process area), and maintains the SLAs (Service Delivery process area). The process owner will provide templates of SLAs for use by HeRus’s service-level managers.

The service-level management process owner decides that, as a first step, service-level managers should base their SLAs on the service catalog and analyze existing SLAs and data. These data include input from the capacity management process, availability management process, incident management process, problem management process, service continuity process, information security process, and various IT functions. With this input, the SLA will then be defined, negotiated, and agreed. Quality Assurance (QA) will check whether the SLA is available to service providers, customers, and end users as planned. QA will also check whether the SLAs are periodically updated (SD SG 1).
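To illustrate what a minimal SLA record along these lines might capture, here is a hypothetical Python sketch; the type and field names are ours, not HeRus’s or ITIL’s actual template. Note how the data sources named above appear as inputs.

# Hypothetical sketch of a minimal SLA record; names and fields are ours,
# not an actual HeRus or ITIL template.
from dataclasses import dataclass, field

@dataclass
class ServiceLevelAgreement:
    service_name: str              # drawn from the service catalog
    customer: str
    availability_target: float     # e.g., 0.999
    response_hours: float          # target response time for requests
    review_period_days: int = 90   # QA checks that updates are periodic
    data_sources: list = field(default_factory=lambda: [
        "capacity management", "availability management",
        "incident management", "problem management",
        "service continuity", "information security",
    ])

pilot_sla = ServiceLevelAgreement("service desk", "internal IT", 0.999, 4.0)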

Up until this time, no documentation existed to describe how to prepare for service delivery and how to deliver service. HeRus had relied on the knowledge of its experienced IT staff. Shandra knows that 50 percent of IT knowledge is in people’s heads, and 45 percent of the IT staff will retire within five years. Because of this, and to increase consistency and quality, Shandra decides it’s time to document how HeRus prepares for and delivers its services. Standard processes and process assets will be stored in a Process Asset Library (PAL) available to the organization and used by QA in its compliance activities. QA is thrilled that it will have better information on what to check, but given the increased awareness of what actually needs QA’s involvement, the group is lobbying for more resources (SG 2).

Shandra’s IT service process improvement steering committee decides to use the service desk as a pilot for its processes for SD preparation and fulfillment. The service-desk manager will document the approach used for SD, including how service requests are handled and required resources. What the service-desk manager documents will likely be elevated to a standard service-desk process for use on future contracts. Service-desk staff members will confirm readiness to deliver services according to procedures, and evidence of having followed readiness check procedures will be documented. Shandra has read the latest literature on the importance of checklists for improving service quality, so she encourages the use of checklists in the new processes (SG 2).

Service requests currently are processed and tracked in the same system as incidents, and there have been problems with the volume of service requests bogging down the incident management staff. Shandra decides that separate processes for service requests are needed. Service requests will be distinguished clearly from incidents, and procedures and mechanisms for storing, accessing, updating, tracking, and reporting service request records will be defined. Shandra will argue for investment in more self-help and self-service mechanisms to free the service-desk staff to work on incidents (SG 2).
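To make the separation concrete, here is a minimal sketch of what a service request record, stored apart from incident records, might look like. The field names, statuses, and history mechanism are illustrative assumptions, not anything the model prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class RequestStatus(Enum):
    """Illustrative lifecycle states for a service request (not an incident)."""
    SUBMITTED = "submitted"
    IN_PROGRESS = "in progress"
    FULFILLED = "fulfilled"
    CLOSED = "closed"


@dataclass
class ServiceRequest:
    """A service request record, tracked and reported separately from incidents."""
    request_id: str
    requester: str
    description: str
    submitted_at: datetime
    status: RequestStatus = RequestStatus.SUBMITTED
    history: list = field(default_factory=list)  # (timestamp, status, note) trail

    def update_status(self, new_status: RequestStatus, note: str = "") -> None:
        """Record each status change so requests can be tracked and reported."""
        self.status = new_status
        self.history.append((datetime.now(timezone.utc), new_status.value, note))
```

Keeping even this much structure separate from the incident system is what makes distinct reporting on request volumes, fulfillment times, and backlog possible.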

The service-desk staff reports that they are receiving and processing service requests according to the SLA and meeting their targets consistently. The incident management staff reports that their performance has improved as a result of having clearer service request processes, including clear assignment of responsibility and authority (GP 2.4). Now that the service-request staff consistently review service-request status and resolution and confirm results with relevant stakeholders, customer satisfaction is way up. The service logs, performance reports, customer satisfaction data, and request management system records all show that the service system is being operated to deliver services according to SLAs and in compliance with processes (QA has confirmed this!). It is clear from looking at maintenance notifications, logs, and schedules that the service system is being maintained to ensure the continuation of service delivery (SG 3).

Capacity and Availability Management (CAM)

The IT department has been achieving a decent 99.9 percent uptime, but the downtime occurs at the worst times, and with the cutbacks in purchasing, increased demand, and lack of demand management, Shandra anticipates that she will not be able to maintain this uptime rate. To support HeRus’s long-term growth plans, a strategic approach to capacity and availability management is needed that considers future capacity and availability requirements.
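For perspective on that figure, 99.9 percent uptime still permits

(1 − 0.999) × 8,760 hours per year ≈ 8.8 hours

of downtime annually. When those hours fall at the worst possible times, as they do at HeRus, the business impact far exceeds what the headline percentage suggests.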

She knows that these requirements are influenced by the other processes being defined, including the service continuity process, as well as by future innovations and emerging technologies. Other influencers are patterns of business activity and demand, and how HeRus can affect them. Up until now, capacity and availability management has had an operational perspective focused on monitoring the performance, utilization, and throughput of the IT infrastructure and some aspects of IT services, such as response to incidents. HeRus has also monitored availability and reliability to a certain extent, forecasting whether agreed targets will be met.

Little analysis is going on, and HeRus relies on the expert knowledge of its IT staff for many of the activities in CAM. HeRus has little documentation about which thresholds are set and why, or about what action should be taken when certain conditions are met. Shandra knows that when the economy improves, some of her expert staff members will leave for “greener pastures.” The reliance on expert judgment and ad hoc practices has led to inconsistent performance and quality and represents a risk for HeRus.

CAM data are rarely consulted when SLAs are documented or when decisions are made about changes to the service system, due in part to the overall lack of data. IT service continuity plans at HeRus likewise lack a firm foundation of data from other processes, such as capacity management and availability management. Shandra would like that to change because she knows the performance of the new processes relies in part on the availability and use of good data.

Shandra judges that IT service quality and performance at HeRus will improve with more analysis, a proactive approach to CAM, more reporting to relevant stakeholders, and more input from CAM to other processes, such as these:

• Service-level management (to enable better decisions about what targets are agreed in SLAs)

• Change management (to enable better decisions about change)

• IT service continuity management (to enable better continuity planning and reduce the risk of not being able to meet IT service continuity requirements)

The approach to CAM has been largely reactive at HeRus. Shandra decides that the approach has to change. She understands that with the budget constraints and competition in the marketplace, including vendors who represent possible IT outsourcing opportunities for HeRus, she must implement more sophisticated CAM practices and tools that will support a more proactive, data-based approach. She wants HeRus to reduce costs and increase performance by using tuning and exploring demand management.

Shandra establishes a process owner for capacity management and another process owner for availability management and reminds them that they need to get started right away on defining measures and analytic techniques to support the analysis she hopes to put into place. Shandra would like to see baseline models of current performance and resource utilization as a start. She knows these baseline models must be established before more predictive models can be established to help answer “what if” questions about changes, workload allocation and volume, SLAs, application sizing, and other questions from the design team, problem management group, and service continuity planning group.
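As a rough illustration of what such a baseline and a first “what if” answer might look like, consider the sketch below. It is a minimal sketch under strong assumptions: utilization samples are already being collected, and growth is linear; a real capacity model would also account for demand patterns and seasonality.

```python
import statistics


def utilization_baseline(samples: list[float]) -> dict:
    """Summarize observed resource utilization (0.0-1.0) into a baseline."""
    return {
        "mean": statistics.mean(samples),
        "p95": sorted(samples)[int(0.95 * (len(samples) - 1))],
        "stdev": statistics.pstdev(samples),
    }


def months_until_threshold(current: float, monthly_growth: float,
                           threshold: float = 0.80) -> float:
    """Answer a simple 'what if': with linear growth, when does utilization
    cross the capacity-planning threshold?"""
    if monthly_growth <= 0:
        return float("inf")
    return max(0.0, (threshold - current) / monthly_growth)


# Example: 62 percent mean utilization today, growing two points per month,
# crosses an 80 percent planning threshold in about nine months.
baseline = utilization_baseline([0.55, 0.60, 0.62, 0.66, 0.67])
print(baseline["mean"])                                # 0.62
print(months_until_threshold(baseline["mean"], 0.02))  # ~9.0
```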

Service Continuity (SCON)

HeRus has weak business continuity plans and policies, which only mention the importance of ensuring that there are contingency plans in place for “computer systems” and IT. Shandra knows this is a woefully inadequate treatment of IT service continuity. She knows that detailed plans must be put into place, personnel need training on the plans, and the plans should be validated to ensure that IT services can be resumed within required, agreed-to time frames (SG 2).

Shandra helps the IT service continuity process owner to begin planning by identifying and prioritizing the essential functions that must be performed and the essential resources needed to ensure service continuity (SG 1). They do this in close coordination with HeRus’s business process owners, knowing that their ultimate goal is to support business continuity. To understand the essential resources, they need input from CM and other HeRus IT service processes.

Having a good start on the service catalog provides valuable input to their planning efforts. To maintain their IT service continuity plan adequately, they must receive inputs from HeRus’s change management process (to assess the potential impact of changes on their plans); CM (to understand the relationships between services, technology, and business processes); and other processes.

With the HeRus IT service continuity plan finished, they establish training to ensure that the plan can be successfully executed. After conducting the training, they analyze the evaluations and determine that some improvements are needed to both the training and the plan before they will be ready to verify and validate it. Once those improvements and the preparations for verification and validation are made, they conduct the verification and validation activities as planned and analyze the results, making additional improvements where necessary (SG 3).

Incident Resolution and Prevention (IRP)

HeRus’s internal IT department has only been able to meet the target response time (35 minutes) for incidents about 30 percent of the time. They have no single repository for incidents, their underlying causes, and approaches to addressing them. Partly because of this lack of information, communication has been poor between the service desk and the rest of the IT department, particularly about known errors, incidents, and their underlying causes. Causes of incidents were not tracked sufficiently, and in fact, no effort was being made to discover the underlying causes of incidents and prevent their recurrence.

Shandra decided to define an incident management process focused on handling interruptions to normal service and returning normal service as quickly as possible. She also defined a process for preventing incidents, developing workarounds, and addressing underlying causes of selected incidents. She decided to clearly assign responsibility and authority (GP 2.4) for incident management, preventing incidents, developing workarounds, and developing action plans for underlying causes when documented criteria were met (SG 3).

Staff members were trained on the processes (GP 2.5). Responsibilities included identifying, controlling, and addressing incidents (SG 2). Using the new processes, staff members now responded in specific ways to specific incidents. They consulted the incident management system to know whether there were workarounds. Information recorded in the incident management system and other sources was used as input to help prevent incidents (e.g., through trend analysis). Information about incidents was recorded and could be grouped and linked to support analysis of trends and underlying causes.

Monitoring the status of incidents and communicating with stakeholders throughout incident handling (SP 2.5, SP 2.6) were emphasized in the training because many complaints had been received in the past about “being kept in the dark” and having to call the service desk to find out what was happening with an incident. These weaknesses were publicly acknowledged, and the new procedures were advertised to make sure stakeholders were aware that the IT department was doing something to address its poor service image.

The processes included preparing for incident resolution and prevention by establishing an approach to them and establishing an incident management system (SG 1). The approach included definitions of incidents and incident categories, incident handling, and incident reporting mechanisms.

Following the introduction of incident management processes based on CMMI-SVC’s IRP process area, the target response time is being met 85 percent of the time, and the number of recurring incidents has dropped.
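A figure like that 85 percent can be computed directly from incident records. The sketch below assumes each record carries a reported and a first-response timestamp; the field names are hypothetical, not taken from any particular incident management system.

```python
from datetime import timedelta

TARGET = timedelta(minutes=35)  # the response-time target from the SLA


def response_target_attainment(incidents: list[dict]) -> float:
    """Fraction of incidents whose first response met the 35-minute target.

    Each incident dict is assumed to carry 'reported_at' and 'responded_at'
    datetime values taken from the incident management system.
    """
    if not incidents:
        return 0.0
    met = sum(
        1 for i in incidents
        if i["responded_at"] - i["reported_at"] <= TARGET
    )
    return met / len(incidents)
```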

Conclusion

Five years after initiating service process improvements at HeRus, Shandra received a Success Contributor Award on behalf of the internal IT department. The improvements implemented there have been adopted throughout HeRus and have been a major contributor to the achievement of its growth plans. Service process improvements have helped HeRus remain competitive by delivering quality and performance while holding costs in check and increasing customer satisfaction. With this foundation of data and measurement to ensure quality and performance now well established, HeRus is positioned for even higher maturity and capability, and for the business results associated with them.

Are Services Agile?

By Hillel Glazer

Book authors’ comments: Practitioners who are champions of Agile principles and practitioners using CMMI have been realizing recently just how much they have in common, rather than what separates them. This isn’t a recent insight for Hillel Glazer from Entinex, however, who has been a thought leader in both communities for some time. In this essay, Hillel considers the ways in which services may already be agile, an interesting insight into the nature of services as a means of organizing product development, and what CMMI for Services might bring to the conversation about using Agile methods and CMMI together. He is a certified instructor and high maturity lead appraiser for CMMI.

Some argue that “Agile” in the context of software development came about in response to an unhealthy trend. That trend distracted the attention of development projects from customer service and product excellence to demonstrable proof of process fidelity. That love affair with tools and an obsession with plans, contracts, and rigidity usurped relationships with customers and calcified responsiveness.

Look at the Agile Manifesto:

We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on
the right, we value the items on the left more [Beck 2001].

The values in the Agile Manifesto are clearly in favor of individuals, interactions, results, customers, and responsiveness: all attributes classically characteristic of the business and the operation of a service.

Services are not performed in the vacuum of a cubicle where the people doing the work can throw their results “over the wall.” Under most circumstances, services require a human touch somewhere in the delivery of the service. Further, services generally require people to work together—whether in sync with policy and management or in coordination with coworkers.

The impact, output, and outcome of services are often detectable by the customer immediately. This characteristic of services makes meeting expectations through demonstrable results imperative to the service provider. People who can recall a great service experience will note that the experience was not with a machine or with a document, but with a person in the business who was working with them to meet their needs.

Truly, if a single attribute of services can be found among the many service situations, it’s that services generally account for a wide variety of inputs simultaneously. These inputs are often unpredictable and as often unknowable until some aspect of the service is provided. Overall, it is very much a dynamic situation in which a broad spectrum of inputs must be normalized to fit the pattern the organization created to provide the consistent “level of service” customers come to expect.

One might argue that whether intentionally, surreptitiously, or serendipitously, the progenitors, proponents, and practitioners of Agile principles and methods were creating a systematic approach to serving software clients better. In other words, in many ways, Agile puts the “services” back into software development.

Providing services for a living involves processes that are among the least likely to work well in “cookie-cutter” fashion. It’s true that at a macro level, many instantiations of a service will have common elements or fit a pattern for that class of service. For example, a hospital has check-in and registration steps, evaluation and analysis steps, diagnosis, prognosis, treatment or prescription, follow-up and discharge, and so forth. But at the specific, case-by-case point of delivery (“work,” “project,” or “patient” in our example), the services have the potential to be as unique as the customer (patient) receiving the service.

To enhance the provision of services amid the delivery of those services is akin to the classic metaphor of “changing the tires on a moving car.” To achieve this state, the processes involved in providing and improving services must themselves be responsive, adaptive, nonobstructive, and unobtrusive.

CMMI for Services was created with a keen eye toward this reality; in other words, the modelers did not want improvement processes that hinder service delivery processes. With this in mind, the service-oriented process areas in CMMI-SVC (as well as the CMMI Model Foundation process areas) are written, and additional informative material is included, to discourage process improvement from overtaking the ongoing service to be provided and to accommodate the dynamic environment in which services are provided.

While agility and responsiveness are critical to services and to Agile software development, a simple concept cannot be overemphasized and must not be dismissed: Think through what will be done before it’s time to do it.

The creators of CMMI for Services do not expect that each time a customer walks into a bank or a patient is rushed into the emergency room, a work plan will be created. However, they do expect that when the banking customer steps into the branch or the patient comes through the doors, the respective organizations have a pretty good idea of what they will be doing with the incoming service request, “work order,” “project,” or “ticket.”

When someone calls the help desk, the person who takes the call should not have to invent how to proceed, what information to collect, and where or how to record the information he or she gathers. A restaurant does not invent its menu with each customer and invent recipes with each order, even though some tailoring is usually allowed.

CMMI for Services operates at this level. It also has provisions for situations in which the customer does, in fact, have an unusual request, or the organization has to stretch its operations to meet a new need, or has to create a “project” to meet a particularly unique situation. We want the print, copy, and ship locations to fill our custom order to our specifications, but we don’t want them learning on the job whether what we’ve requested is a custom order or how to operate the copier machine.

In CMMI for Services, the seven service-specific process areas are designed to facilitate the continuous delivery of services while also providing the infrastructure for the continuous collection of experience and data to help improve those services. In each process area, the notion of standardization prevails. Knowing which services a customer expects, which services a customer can expect, and which services a customer should not expect may seem like an obvious consideration, yet most readers can probably recall a bad service experience that came down to unreconciled expectations. Knowing which services are routine and which aren’t likewise seems like a common-sense, basic notion. Despite their simplicity, these are necessary early steps in the ability to improve services.

These ideas fit well with the development ideas of “agility.” A relentless focus on the customer and on value enhances the relationship between provider and customer. Innovation and creativity used to meet the needs of a customer may trump established processes in most well-run service organizations. Established processes can provide the basis for reliable service, but these processes must not be allowed to hinder the meeting of customer expectations. Ideally, the established processes themselves are set up to encourage innovation and improvement, and may even help the provider anticipate additional needs and tailoring.

It’s interesting to note recent developments in product development management techniques known as “pull” systems, popularized by David J. Anderson under the Japanese term Kanban. In this technique, the concept of “services” and “service levels” is used to differentiate the various paths, and the subsequent expectation management, of product development. For example, requests for features follow different development paths as a function of the type of request, its urgency, and particular attributes of the feature. Based on these characteristics, customer expectations are managed so that the customer knows when to expect delivery of the feature. As in many other services, the state of all requests in this technique is visually conveyed and viewable by all involved. Another parallel between this approach and traditional services is the notion of a continuous flow of value to the customer.

These concepts lead to other innovative ideas in the Agile and traditional product development fields. Borrowing from value streams, lean, and TQM (Total Quality Management), we learn the notion of “internal customers” where each step in the production of a product or service is considered the “customer” of the prior step. Add to this the idea that product development can be modeled as the specific organization of services such that the result of the services produces a product. When viewed in this regard, even the engineering of the product is a service and all the process areas unique to CMMI for Services add value to product development efforts.

It is important for users of CMMI-SVC to never abandon their customer orientation and their ability to respond to the dynamics of their service operations in pursuit of demonstrating a faithful implementation of a process improvement model.

• Service agreements need be no more formal than a list of services provided, costs, and other means of establishing expectations. A published, visible service menu or an order form may suffice for simple services.

• The means by which incidents are resolved and prevented should be no more complicated than the nature of the incidents, but having no means to resolve or prevent incidents would be as unforgivable to a service operation as not testing the product would be to a product operation.

• Managing the service operation’s capacity and availability seems basic enough, though anyone waiting on hold on the phone has clearly experienced the implementation (or lack thereof) of this idea. To an agile organization, knowing where and why bottlenecks occur facilitates workarounds and, preferably, the avoidance of bottlenecks. What can be more disruptive to a service, or to its ability to be agile, than the complete loss of the primary operation? This situation is accounted for in the disaster-recovery and continuity-of-operations concepts found in any well-run service organization, and also in the CMMI-SVC constellation. In agile organizations, this would be the ultimate expression of responding to change: The organization can continue to provide value-added services despite the absence of its usual facility, let alone its processes. But deciding which services must be forgone, which services can still be provided, and how they will be provided under unusual circumstances should be known before the backup plan is needed.

• Ever experience the bumps and hiccups associated with a service provider trying out a new service, or switching from one way of delivering its service to another? Such spikes in the usual operational scenario can be avoided with some consideration of the impact of the change on the customers, on the operations, and on the people who provide the services. This consideration of impact is as much a courtesy as it is a necessity, agile or otherwise.

• While developing products relies on resources just as much as services do, in the context of services, and in a strong parallel to the values of agility, a strategic view of services to be provided relies heavily on the individuals and their interactions. In particular, service businesses must plan the availability of the right kind of people and forecast the types of services to be provided. In some ways, anticipating the direction of markets and resources and deciding which services to standardize and which to keep on the periphery until the market demonstrates the demand and validity of a service are somewhat forward-thinking concepts. But in any business, these are not far-fetched concepts, merely ones that prudent companies pursue. When providing services is your business, these activities are how you ensure that you are relevant now to your customers’ needs, and remain relevant in the future.

Finally, there’s a remaining aspect of CMMI-SVC that bears a clear resemblance to concepts that promote an agile organization: The notion of having to develop a service system in CMMI for Services was derived by taking the absolute minimum practices from the Engineering process areas of CMMI for Development and incorporating them into a single process area in CMMI-SVC. Five unique process areas in CMMI-DEV, comprising 13 goals and 40 practices, were whittled down to one process area comprising three goals and 12 practices. For organizations whose primary efforts are in services, not in developing systems (at least not very complicated ones), CMMI-SVC provides an abridged version of improvement practices in the engineering space. And those organizations that need only simple service systems, perhaps consisting of just people and procedures, can opt out of this process area.

Where Agile development parts ways with CMMI-SVC is that most services themselves tend not to work well when delivered in increments or provided iteratively. People don’t want part of their shirts laundered and pressed, they don’t want some of their stock purchased at the target price, they don’t want a portion of their house saved from fire, and they don’t want to be taken 30 percent of the way to the airport. Customers also don’t want the services rendered for them to be experiments in early or frequent failures. They don’t want their change miscounted, they don’t want their meals undercooked, and they’d prefer to avoid someone getting lost on the way to the airport.

Nonetheless, despite this departure from Agile in the “development” sense, other concepts of agility such as eliminating wasteful effort, promoting self-organization, continuously delivering value, and facilitating trust and high morale among the team are all hallmarks of well-run service organizations.

Should organizations seek to adopt Agile approaches to services or to incorporate a service approach to development, and include an improvement schema that allows Agile approaches to flourish, the lessons learned from CMMI-DEV apply equally well to CMMI-SVC.

• CMMI (regardless of constellation) is a model; how to actually create an improvement system using this model will be unique to each organization.

• The artifacts of an improvement system come from the operation of the improvement system.

• Appraisals for CMMI determine whether (not how well) an organization’s improvement system shows signs that it was created using CMMI as the improvement model.

• It’s critical that the context of an improvement system—the service itself—be the arbiter of how to evaluate the artifacts created by that system.

Each service system requires a custom-fit improvement system or customers will leave. To do otherwise would not be agile and would not be good service. And that would be entirely unforgivable.

What We Can Learn from High-Performing IT Organizations to Stop the Madness in IT Outsourcing

By Gene Kim and Kevin Behr

Book authors’ comments: These two authors—who lead the IT Process Institute and work as C-level executives in commercial practice—have spent a decade researching the processes in IT that lead to high performance. Based on research in more than 1,500 IT organizations, they describe what processes make the difference between high performance and low or even medium performance. Their observations about the distinguishing characteristics of high performers are consistent with the goals and practices in CMMI for Services. They further note the potential downside of the pervasive trend of outsourcing IT services. Without adept and informed management of these outsourced IT contracts, harm is suffered by both provider and client. In response to this trend, they call on CMMI practitioners to use their experience and techniques to bring sanity to the world of IT outsourcing.

Introduction

Since 1999, a common area of passion for the coauthors has been studying high-performing IT operations and information security organizations. To facilitate our studies, in 2001 we co-founded the IT Process Institute, which was chartered to facilitate research, benchmarking, and development of prescriptive guidance.

In our journey, we studied high-performing IT organizations both qualitatively and quantitatively. We initially captured and codified the observed qualitative behaviors they had in common in the book The Visible Ops Handbook: Starting ITIL in Four Practical Steps.1

Seeking a better understanding of the mechanics, practices, and measurements of the high performers, we used operations research techniques to understand which specific behaviors resulted in their remarkable performance. This work led to the largest empirical research project into how IT organizations work; we have benchmarked more than 1,500 IT organizations in six successive studies.

What we learned in that journey will likely be no surprise to CMMI-SVC practitioners. High-performing IT organizations invest in the right processes and controls, combine that investment with a management commitment to enforcing appropriate rigor in daily operations, and are rewarded with a four to five times advantage in productivity over their non-high-performing IT cohorts.

In the first section of this essay, we will briefly outline the key findings of our ten years of research, describing the differences between high- and low-performing IT organizations, both in their performance and in their controls.

In the second section, we will describe a disturbing problem that we have observed for nearly a decade around how outsourced IT services are acquired and managed, both by the client and by the outsourcer. We have observed a recurring cycle of problems that occur in many (if not most) IT outsourcing contracts, suggesting that an inherent flaw exists in how these agreements are solicited, bid upon, and then managed. We believe these problems are a root cause of why many IT outsourcing relationships fail and, when left unaddressed, will cause the next provider to fail as well.

We will conclude with a call to action to the IT process improvement, management, and vendor communities, which we believe can be both a vanguard and a vanquisher of many of these dysfunctions. Our hope is that you will take decisive action, either because you will benefit from fixing these problems or because it is already your job to fix them.

Our Ten-Year Study of High-Performing IT Organizations

From the outset, high-performing IT organizations were easy to spot. By 2001, we had identified 11 organizations that had similar outstanding performance characteristics. All of these organizations had the following attributes:

• High service levels, measured by high mean time between failures (MTBF) and low mean time to repair (MTTR); see the availability relationship sketched just after this list

• The earliest and most consistent integration of security controls into IT operational processes, measured by control location and security staff participation in the IT operations lifecycle

• The best compliance posture, measured by the fewest repeat audit findings and the lowest staff count required to stay compliant

• High efficiencies, measured by high server-to-system administrator ratios and low amounts of unplanned work (reactive work that is unexpectedly introduced during incidents, security breaches, audit preparation, etc.)
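The first of these attributes rests on a standard reliability-engineering relationship (not a definition introduced by the studies themselves): steady-state availability can be expressed as

Availability = MTBF / (MTBF + MTTR)

so, for example, an MTBF of 1,000 hours combined with an MTTR of 1 hour yields roughly 99.9 percent availability. Raising MTBF and cutting MTTR both move the ratio up, which is why high performers attend to both.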

Common Culture Among High Performers

As we studied these high performers, we found three common cultural characteristics.

A culture of change management: In each of the high-performing IT organizations, the first step when IT staff implement changes is not to log in to the infrastructure. Instead, it is to go to a change advisory board and get authorization that the change should be made. Surprisingly, this process is not viewed as bureaucratic, needlessly slowing things down, lowering productivity, or decreasing the quality of life. Instead, these organizations view change management as absolutely critical for maintaining their high performance.

A culture of causality: Each of the high-performing IT organizations has a common way to resolve service outages and impairments. They realize that 80 percent of their outages are due to changes and that 80 percent of their MTTR is spent trying to find what changed. Consequently, when working on problems, they look at changes first in the repair cycle. Evidence of this can be seen in the incident management systems of the high performers: Inside the incident record for an outage are all the scheduled and authorized changes for the affected assets, as well as the actual detected changes on the asset. By looking at this information, problem managers can recommend a fix to the problem more than 80 percent of the time, with a first fix rate exceeding 90 percent (i.e., 90 percent of the recommended fixes work the first time).

A culture of planned work and continuous improvement: In each of the high-performing IT organizations, there is a continual desire to find production variance early, before it causes a production outage or an episode of unplanned work. The difference is analogous to paying attention to the low-fuel warning light on an automobile rather than running out of gas on the highway. In the first case, the organization can fix the problem in a planned manner, without much urgency or disruption to other scheduled work. In the second case, the organization must fix the problem in a highly urgent way, often creating an all-hands-on-deck situation (e.g., six staff members must drop everything they are doing and run down the highway with gas cans to refuel the stranded car).

For long-time CMMI practitioners, these characteristics will sound familiar and the supports for them available in the model will be obvious. For those IT practitioners new to CMMI, CMMI-SVC has not only the practices to support these cultural characteristics, but also the organizational supports and institutionalization practices that make it possible to embrace these characteristics and then make them stick.

The Performance Differences between High and Low Performers

In 2003, our goal was to confirm more systematically that there was an empirically observable link between certain IT procedures and controls and improvements in performance. In other words, we wanted to show that one doesn’t need to implement all the processes and controls described in the various practice frameworks (ITIL for IT operations, CobiT or ISO 27001 for information security practitioners, etc.) to achieve high performance.

The 2006 and 2007 ITPI IT Controls Performance Study was conducted to establish the link between controls and operational performance. The 2007 Change, Configuration, and Release Performance Study was conducted to determine which best practices in these areas drive performance improvement. The studies revealed that, in comparison with low-performing organizations, high-performing organizations enjoy the following effectiveness and efficiency advantages:

• Higher throughput of work

• Fourteen times more production changes

• One-half the change failure rate

• One-quarter the first fix failure rate

• Severity 1 (representing the highest level of urgency and impact) outages requiring one-tenth the time to fix

• One-half the amount of unplanned work and firefighting

• One-quarter the frequency of emergency change requests

• Server-to-system-administrator ratios that are two to five times higher

• More projects completed with better performance to project due date

• Eight times more projects completed

• Six times more applications and IT services managed

These differences validate the Visible Ops hypothesis that IT controls and basic change and configuration practices improve IT operations effectiveness and efficiency. The studies also determined that the same high performers have superior information security effectiveness. The 2007 IT controls study found that when high performers had security breaches, the following conditions were true.

The security breaches are far less likely to result in loss events (e.g., financial, reputational, and customer). High performers are half as likely as medium performers and one-fifth as likely as low performers to experience security breaches that result in loss.

The security breaches are far more likely to be detected using automated controls (as opposed to an external source such as the newspaper headlines or a customer). High performers automatically detect security breaches 15 percent more often than medium performers and twice as often as low performers.

Security access breaches are detected far more quickly. High performers have a mean time to detect measured in minutes, compared with hours for medium performers and days for low performers.

These organizations also had one-quarter the frequency of repeat audit findings.

Which Controls Really Matter

By 2006, after benchmarking about one thousand IT organizations and analyzing the link between controls and performance, we had established that not all controls are created equal. We had concluded that, of all the practices outlined in the ITIL process and CobiT control frameworks, we could predict 60 percent of an organization’s performance by asking three questions: To what extent does the IT organization define, monitor, and enforce the following three types of behaviors?

• A standardized configuration strategy

• A culture of process discipline

• A systematic way of restricting privileged access to production systems

In ITIL, these three behaviors correspond to the release, controls, and resolution process areas, as we had posited early in our journey. In CMMI-SVC, these correspond to the Service System Transition, Service System Development, and Incident Resolution and Prevention process areas.

Throughout our journey, culminating in having benchmarked more than 1,500 IT organizations, we find that culture matters, and that certain processes and controls are required to ensure that those cultural values exist in daily operations.

Furthermore, ensuring that these controls are defined, monitored, and enforced can predict IT operational, information security, and compliance performance with astonishing accuracy.

Although behaviors prescribed by this guidance may be common sense, they are far from common practice.

What Goes Wrong in Too Many IT Outsourcing Programs

When organizations decide to outsource the management and ongoing operations of IT services, they should expect not only that the IT outsourcers will “manage their mess for less,” but also that those IT outsourcers are very effective and efficient. After all, as the logical argument goes, managing IT is their competitive core competency.

However, what we have found over a journey spanning more than ten years is that the opposite is often true: The organizations under the greatest pressure to outsource services are often also the ones with the weakest management capabilities and the lowest process and control maturity.

We postulate two distinct predictors of chronic low performance in IT.

IT operational failures: Technology in general provides business value only when it removes some sort of business obstacle. When business processes are automated, IT failures and outages cause business operations to halt, slowing or stopping the extraction of value from assets (e.g., revenue generation, sales order entry, bill of materials generation, etc.).

When these failures are unpredictable both in occurrence and in duration (as they often are), the business not only is significantly affected, but also loses trust in IT. This is evidenced by many business executives using IT as a two-letter word with four-letter connotations.

IT capital project failures: When IT staff members are consumed with unpredictable outages and firefighting, planned activity (i.e., projects) suffers by definition. Unplanned work and technical escalations due to outages often cause top management to “take the best and brightest staff members and put them on the problem, regardless of what they’re working on.” So, critical project resources are pulled into firefighting instead of working on high-value projects and process improvement initiatives.

Managers will recognize that these critical resources are often unavailable and that there is little visibility into the many sources of urgent work. Dates are often missed for critical-path tasks, with devastating effects on project due dates.

From the business perspective, these two factors lead to the conclusion that IT can neither keep the existing lights on nor install the new lighting that the business needs (i.e., operate or maintain IT and complete IT projects). This conclusion is often the driver to outsource IT management.

However, there is an unstated risk: An IT management organization that cannot manage IT operations in-house may not be able to manage the outsourcing arrangement and governance when the moving parts are outsourced.

A Hypothetical Case Study

This case study reflects a commonly experienced syndrome while protecting the identities of the innocent. The cycle starts when the IT management function is put out for bid. These contracts are often long-term and expensive, often in the billions of dollars and extending over many years. And because IT outsourcing providers operate in a competitive and concentrated industry segment, cost is a significant factor.

Unfortunately, the structure of the cost model for many outsourcing bids is fundamentally flawed. For instance, in a hypothetical five-year contract bid, positive cash flow for the outsourcer is jeopardized by year 2. Year 1 cost reduction goals are often accomplished through pay reductions and consolidation of software licenses. After that, the outsourcer becomes heavily reliant on change fees and on offering new services to cover a growing gap between projected and actual expenditures.

By year 3, the outsourcer often has to reduce its head count, often letting its most expensive and experienced people go. We can tell this is happening because service levels start to decline: There is an ever-increasing number of unplanned outages, more Severity 1 outages become protracted multiday outages, and often the provider never successfully resolves the underlying or root cause.

This leads to more and more service-level agreement (SLA) penalties, with money now being paid from the outsourcer to the client (a disturbing enough trend), but then something far more disturbing occurs: The backlog of client service requests continues to grow. If these projects could be completed by the outsourcer, some of the cash flow problems could be solved; instead, the outsourcer is mired in reactive and unplanned work.

So, client projects never get completed, project dollars are never billed, and client satisfaction continues to drop. Furthermore, sufficient cycles for internal process improvement projects cannot be allocated, and service levels also keep dropping. Thus continues the downward spiral for the outsourcer. By year 4 and year 5, customer satisfaction is so low that it becomes almost inevitable that the client puts the contract out for rebid by other providers.

And so the cycle begins again. The cumulative cost to the client and outsourcer, as measured by human cost, harm to stakeholders, damage to competitive ability, and loss to shareholders, is immense.

An Effective System of IT Operations

We believe that it doesn’t really matter who is doing the work if an appropriate system for “doing IT operations” is not in place. The system starts with how IT contributes to the company’s strategy (what must we do to have success?). A clear understanding of what is necessary, the definition of the work to be done, and a detailed specification of quantity, quality, and time are critical to creating accountability and defect prevention. Only then can a system of controls be designed to protect the goals of the company, and the output of those controls used to illuminate success or failure.

This situation is betrayed by IT management’s focus on SLAs, which is classic after-the-fact management, rather than a broader systemic approach that prevents issues by using leading-indicator measurements. The economics of defects here mirror manufacturing, where orders-of-magnitude expense reductions are realized by building in quality early rather than picking up wreckage, finding the flight recorders, and reassembling a crashed airplane to figure out what happened and who is at fault.

Call to Action

In our research, we find a four to five times productivity difference between high and low performers. The questions that follow can help profile where an organization, or a prospective outsourcer, stands.

IT operations

• Are Severity 1 outages measured in minutes or hours versus days or weeks?

• What percentage of the organization’s fixes work the first time? Because they have a culture of causality, high performers average around 90 percent versus 50 percent for low performers.

• What percentage of changes succeed without causing an episode of unplanned work? High performers have a culture of change management and average around 95 percent to 99 percent, versus around 80 percent for low performers.

Compliance

• What percentage of audit findings are repeat findings? In high performers, typically fewer than 5 percent of audit findings remain unfixed after one year, so repeat findings are rare.

Security

• What percentage of security breaches are detected by an automated internal control? In high performers, security breaches are so quickly detected and corrected that they rarely impact customers.

Many of these indicators can be collected by observation, as opposed to substantive audits, and they are very accurate predictors of daily operations. Formulating such a profile of an outsourcer’s daily operations can help to guide the selection of an effective outsourcer, as well as to confirm that the selected outsourcer remains effective.
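For readers who want to compute rather than merely observe such a profile, here is a minimal sketch of two of the indicators. It assumes change and fix records carry simple boolean outcome flags; the field names are hypothetical.

```python
def change_success_rate(changes: list[dict]) -> float:
    """Share of changes that did not cause an episode of unplanned work.

    Assumes each change record carries a boolean 'caused_unplanned_work'
    flag set when the change is closed out.
    """
    if not changes:
        return 0.0
    ok = sum(1 for c in changes if not c["caused_unplanned_work"])
    return ok / len(changes)


def first_fix_rate(fixes: list[dict]) -> float:
    """Share of recommended fixes that worked on the first attempt."""
    if not fixes:
        return 0.0
    worked = sum(1 for f in fixes if f["worked_first_time"])
    return worked / len(fixes)


# High performers cited in this essay would score around 0.95-0.99 on the
# first measure and around 0.90 on the second.
```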

We can verify that an effective system of operations exists by finding evidence of the following.

• The company has stated its goals.

• IT has defined what it must do to help the company reach its goals.

• IT understands and has documented the work that needs to be done (e.g., projects and IT operations).

• IT has created detailed specifications with respect to the quantity of work, the quality required to meet the company’s goals, and the time needed to do this work.

• IT understands the capabilities needed to deliver the aforementioned work in terms of time horizons and other key management skills, and has constructed the organization needed to do the work.

• IT has created a process infrastructure to accomplish the work consistently in tandem with the organizational design.

• IT has created an appropriate system of controls to instrument the effectiveness of the execution of the system and its key components.

CMMI for Services includes practices for all of these, and with its associated appraisal method, the means to gather the evidence of these practices. Without an understanding of the preceding profile (and there is much more to consider), outsourcing success would be more akin to winning the lottery than picking up a telephone in your office and getting a dial tone.

Public Education in an Age of Accountability

By Betsey Cox-Buteau

Book authors’ comments: The field of education is among the service types in which we frequently hear people say that they hope to see application of CMMI for Services. (The other two areas most commonly mentioned are health care and finance.) Because good results in education are important to society, process champions are eager to see the benefits of process improvement that have been realized in other fields. Betsey Cox-Buteau is a career educator, administrator, and consultant, who works with struggling schools to improve performance and learning. Here she makes the case for how CMMI-SVC could make a difference in U.S. schools.

Federal Legislation Drives Change

On January 8, 2002, then-President George W. Bush signed into law his administration’s version of the expiring Elementary and Secondary Education Act (ESEA). This legislation was given the title of “No Child Left Behind” (NCLB). (You may remember Goals 2000 under the Clinton administration.) As I write, Congress is working on the rewrite of the now expiring NCLB law. Those of us in the field of public K–12 education await the next set of regulations that will be churned out when the Obama administration’s version of ESEA becomes law.

No Child Left Behind was written to force change onto an institution well known for its lethargy. Each state was required under NCLB to formulate an annual assessment process. Under these new requirements, if schools did not show adequate yearly progress (AYP), then they would fall subject to varying levels of consequences to be imposed by their state. The consequences included allowing students to attend other schools in their district, creating written school improvement plans, and even replacing teachers and administrators. School boards and administrations had to reconsider the frequently used excuse that public schools were different from businesses because their products were human beings; therefore, business standards and processes did not apply to them. NCLB forced all stakeholders to revisit the real product of a public school because of this new accountability. The product of the public school became defined as “student learning,” and that is measurable.

As the curriculum accountability required by NCLB became institutionalized through state testing over the life of the law, and the focus tightened on data analysis regarding levels of student learning, the concept that schools provide a “service” to students, parents, and society became clearer to those who work in our schools and those who create educational policy. The only hint of what is to come under the new legislation is the change from the requirement of employing “Highly Qualified Teachers” to employing “Highly Effective Teachers.” Yet how the new law will measure that effectiveness is still unknown to those of us outside the revision process.

Orienting Education to Delivering Services

With the evolving definition of the purpose of our schools being to produce high levels of “student learning,” those of us in the central office find ourselves tired of reinventing the wheel and eager to create processes that we can use efficiently and repeatedly, that are self-improving, and that are entirely data-driven. On the other hand, we must also wrestle with the fact that to produce the high levels of student learning that will make students ready for twenty-first century jobs, we are still expected to do so with twentieth century budgets. This twofold challenge brings a different “look” to what we do: We are to produce the highest quality product (student learning for the twenty-first century) at the lowest cost possible. This is not a new situation. Education administrators have always been expected to deliver well-educated students on a shoestring budget, but we have never before been held accountable through real-time data. With data, we can truly begin to give our stakeholders more than “happy, well-adjusted students”; instead, we can give them a high level of student learning with measurable outcomes while becoming more economically efficient.

A Service Agreement for Education

How can public education begin to take advantage of the CMMI for Services model? The education service system is already in place, and students have been graduating for a long, long time, so the system must be working, right? It may be “working” in some sense, but the same questions present themselves to each new administrator when he or she comes into a building or a central office. Is the present system working well? Is it efficient? Are the processes institutionalized or will they fade away over time? It is time for school administrators, from the central office to the individual buildings, to examine the processes involved in their service system and determine their efficacy against their “service agreement” with taxpayers. The CMMI for Services model provides many process areas within which to accomplish this task.

For example, look at any school’s mission statement. It often begins something like this: “Our school is dedicated to serving the individual academic, social-emotional, and physical needs of each student, to create lifelong learners....” Are these goals real? Are they measurable? If so, do our public schools have in place a reliable method to measure the achievement of these goals? If schools begin to look at themselves as service providers, then those services must be defined in a measurable manner. When the goals are measurable, then the processes to deliver those services can be measured and analyzed. Using these data, the services can be redesigned, refined, and institutionalized.

Although the nomenclature is different, the “mission statement” is, in essence, a school’s “service agreement” with its customers. The CMMI for Services model offers guidance in the Service Delivery process area (SP 1.1 and 1.2) to address the process for developing a measurable service agreement. Once a measurable service agreement is in place, all the stakeholders will have a firm foundation on which to build the processes necessary to successfully meet the requirements of that agreement.

A Process for Producing Consistently High Levels of Student Learning

In this age of measuring levels of student learning, we have to ask several questions of ourselves as educators. Chief among them is, “What do we want our students to learn?” This is the most basic of all questions for the public school system, and it is ever-changing as the needs of society change.

This critical question involves the determination of the desired curriculum. What is it that the student needs to learn? This area is another in which the CMMI for Services model can move a school system toward a streamlined, dynamic curriculum renewal and delivery refinement system. Public schools are ripe for a well-documented structuring of the delivery of their services in this area due to the legislative accountability requirements for student learning. A high-quality curriculum and its delivery tie directly to student learning and ultimately to test scores. The challenge here is to be consistent in how we assess student learning so that progress (or the lack of it) can be recognized and understood. A systematic review of the curriculum and its delivery in the various subject areas needs to be a standardized process.

One of the many possible applications of the Process and Product Quality Assurance process area of the CMMI for Services model is curriculum auditing or program evaluation. This application would benefit much of the curriculum development and delivery area by enabling curriculum refinement and a delivery review system. This process area can also be used to develop an appropriate process for measuring staff compliance in delivering the curriculum efficiently and to the highest level of student learning.

Ideally, curriculum review cycles should remain in place no matter who is in the front or central office. All too often, the superintendent or building principal leaves and the curriculum review and improvement process breaks down. The reasons for this breakdown are many, not the least of which is personnel turnover. Yet when the process is created using the CMMI for Services Process and Product Quality Assurance process area, with stakeholder buy-in, full enculturation of the process, and continuous improvement assured, such personnel changes will not affect the continuation of good practice.

A Process for Efficient Decision Making

Beyond the more obvious areas of application such as curriculum review and its delivery, other education practices can benefit from the discipline of the CMMI for Services model. For example, the decision making in school buildings can be as simple as a librarian choosing a book, or as involved as a large committee choosing a curriculum program. Decisions can be made by a harried teacher attempting to avoid internal conflict, or a principal who wants to defuse the anger of a parent. Many decisions are made. Some decisions affect few, and some affect many. Some decisions may have long-lasting implications for a child’s life, or for a parent’s or taxpayer’s trust in the system; and that trust (or lack of it) shows up in the voting booth each year. If the processes of each service delivery subsystem are mature and transparent, the provider and the customers will be satisfied and trust each other. When applied to refine the decision-making process in a school district, the Decision Analysis and Resolution process area of the model can be instrumental in ensuring that personnel make the best decisions possible using a standard, approved, and embedded process; the result is the establishment of greater trust with the customer.

Providing for Continuity

In this era of rapid turnover in school administration, the institutionalization of effective processes is paramount to providing continuity of high-quality service to all stakeholders. As superintendents and school principals move on to other administrative positions and school districts, the embedding of the generic goals and generic practices can ensure that effective system processes continue. Each time a process area is set in motion and refined, the generic goals and practices require that a framework is in place behind it to sustain it. That is where Part Two of the CMMI for Services model can truly make a difference in our schools. Policies documenting the adopted processes, named positions of responsibility for implementing and following through on these new procedures, and other generic practices will remain intact and in effect long after any one person moves through the organization.

Other Applications for the Model in Education

The process areas discussed so far are just a few of the many in the CMMI for Services model that would be beneficial when applied to the public education system. Others include

• Integrated Work Management: for the inclusion of stakeholders (i.e., parents and the community) in the education of children

• Measurement and Analysis: to ensure the correct and continuous use of data to inform all aspects of the educational process

• Organizational Performance Management: to ensure an orderly and organized process for the piloting and adoption of new curricula and/or educational programs

• Organizational Process Definition: to organize the standard processes in a school district ranging from purchasing supplies to curriculum review cycles

• Organizational Process Performance: to establish the use of data to provide measures of improvement of the processes used in the district in an effort to continually improve them

• Organizational Training: to establish the training of teachers and other staff members so that as they transition in and out of a building or position, continuity of the delivery of curricula and other services is maintained

• Service System Transition: to establish a smooth transition from one way of doing things to another while minimizing disruption to student learning

A Better Future for American Education

With no particular reason to believe that the assessment and accountability measures placed on the public schools will be lifted or eased with the sunset of the No Child Left Behind Act, these institutions of student learning can benefit from the application of this model. If our schools are to deliver the highest rate of student learning using the least amount of taxpayer dollars, the CMMI for Services model is a natural and essential tool for accomplishing this goal.

Applying CMMI-SVC for Educational Institutions

By Urs Andelfinger

Book authors’ comments: Urs Andelfinger is an SEI visiting scientist and CMMI instructor, as well as a university faculty member. Given those roles, he is particularly well positioned to consider the application of CMMI-SVC to education. The work described in the following essay is partially based on ideas jointly developed with Daniel Nyuyki from the University of Applied Sciences Darmstadt. The possible application of CMMI-SVC to education is the most frequent domain inquiry the SEI receives, so the use cases he details are especially interesting and practical.

Introduction

The best practices in the CMMI-SVC model are intended to apply to all organizations in the service industry. Education can also be considered part of the service industry, and educational institutions (both public and private) are the main players. Educational institutions provide educational services to students (external view) and other services to their staff members (internal view). However, most public educational institutions have institutionalized a functional structure rather than a service-oriented structure.

This essay aims to determine how and to what extent CMMI-SVC can be interpreted and applied in the educational sector using the Department of Computer Science (DCS) at the University of Applied Sciences Darmstadt (Germany) as a point of reference. It will also investigate how the CMMI-SVC model can be used to restructure an educational institution to become more service-oriented.

The essay does not aim to reinvent and completely (re)define the entire business model of DCS as a service. The purpose of the essay is instead to investigate the applicability and benefits of CMMI-SVC in an educational environment and to demonstrate how CMMI-SVC can be used as an improvement roadmap. We therefore have adopted a “deep-not-wide” methodology: Selected process areas are treated in considerable depth to demonstrate their applicability and added value, while the whole breadth of process areas in the CMMI-SVC model is not covered. The remainder of the essay is structured as follows:

First we describe our methodological approach, which takes a use-case-based perspective and follows a three-step procedure. Then we describe extracts from our sample interpretation of the Service Delivery (SD) process area for the educational domain. Based on our use-case-driven methodology, we then demonstrate how SD might be interpreted with respect to two selected use cases. We conclude with a description of some of our experiences and lessons learned.

Methodological Approach

Use-Case Orientation

As pointed out in the introduction, educational institutions (at least in the public educational system in Europe) are often organized according to functional criteria. This has led to the emergence of a mindset that is very much aligned with functional responsibilities. This mindset does not necessarily take into account the needs of the people expecting a service (e.g., registering a student, registering for a specific lecture, etc.). Instead, the prevailing functional mindset is often characterized by professionally delivering products or product components, but not necessarily entire solutions. Students therefore often perceive a very fragmented mode of delivery with respect to their original requirement (e.g., registering for a specific lecture).

As a first step toward the required service shift in the organizational mindset, we have therefore chosen a use-case orientation. A use case is a well-known concept in software engineering. It describes the externally visible behavior of an IT system to be developed. Use cases try to capture a description of the functionality that users of the system are expecting. Use cases then drive further development of the IT system.

To us, it seems to be an intuitive and smooth transition to leverage the use-case concept to describe the services that the service system of an educational institution should deliver. What use cases or services is the user expecting from the service system (i.e., the educational institution)?
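To make this borrowed concept concrete, a service use case can be recorded with the same lightweight structure used in software engineering. The following Python sketch is purely illustrative; the field names and the example scenario are our own assumptions, not part of CMMI-SVC or any use-case standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceUseCase:
    """A lightweight use-case description applied to an educational service."""
    name: str
    actor: str                 # who expects the service
    goal: str                  # the externally visible outcome
    steps: List[str] = field(default_factory=list)  # main success scenario

register_for_lecture = ServiceUseCase(
    name="Register for a specific lecture",
    actor="Student",
    goal="The student is enrolled in the lecture and receives a confirmation",
    steps=[
        "Student submits a registration request",
        "System checks prerequisites and capacity",
        "System records the enrollment and confirms it to the student",
    ],
)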

The selected use cases for which we conducted our feasibility study are as follows.

1. Provision of degree programs—bachelor’s, master’s, Ph.D.:

• Besides the strategic value of this use case, the institution should also specify the requirements for each degree program and the final degree that will be obtained upon completion of the course. A student therefore has the option to choose a degree program that he or she wishes to study depending on his or her current educational background.

2. Application and admission for a degree program:

• A guide on how to apply for a degree program should be provided to interested candidates. Deadlines should also be specified.

3. Provision of a detailed syllabus for each degree program.

4. Provision of counseling services (mentoring):

• Counselors should be appointed and each student assigned to a counselor. This is usually an essential service for first-year students.

5. Provision of lectures and lab sessions, including lecture and lab materials, such as presentation slides and/or lecture notes in good quality.

6. Student assessments and exams.

7. Lecture evaluation.

8. Provision of library services.

The Three-Step Methodological Approach

As pointed out in the introduction, the aim of this essay is not to present a complete interpretation of the CMMI-SVC model for educational institutions, but to demonstrate the model’s applicability and to show how it can be effectively applied for process improvement in this domain. Therefore, we have developed the following three-step approach in applying CMMI-SVC.

Step 1: Interpret the SVC-Relevant Process Areas for the Educational Institution (To-Be Requirements)

In the first step, we used the following approach: Take the CMMI-SVC specific process areas (from the Service Establishment and Delivery category) and try to interpret them with respect to the needs and requirements of an educational service provider. As these process areas are not really to be implemented on their own, we complemented them with a collection of other process areas, also mainly from maturity level 2. Overall, we selected the following process areas from the model, for which we then developed an interpretation for our problem domain:

• Service Establishment and Delivery process areas:

1. SD: Service Delivery

2. CAM: Capacity and Availability Management

3. IRP: Incident Resolution and Prevention

4. SCON: Service Continuity

5. SSD: Service System Development

6. SST: Service System Transition

7. STSM: Strategic Service Management

• Additional process areas included in our interpretation for the educational sector:

1. CM: Configuration Management

2. MA: Measurement and Analysis

3. WMC: Work Monitoring and Control

4. WP: Work Planning

5. PPQA: Process and Product Quality Assurance

6. REQM: Requirements Management

We used the sample use cases (as mentioned earlier) as guidance for interpreting the selected process areas. The result of this step is a perfect (To-Be) service-oriented process model and a collection of requirements for a perfect educational service provider. (You can easily extend the result of this step toward a reference process model for educational service providers.) As always with CMMI, the result of this step is not a collection of ready-to-implement master processes, as the process areas are typically not directly implemented in their current form. Instead, they are a type of requirement that has to be met while executing the typical processes in the educational institution. For example, you do not implement SD as is; instead, you will probably implement it in slightly different ways, seamlessly embedded in, for example, the “register for a degree program” use case and the “deliver lectures and lab sessions” use case. However, in both use cases, you will have to meet requirements such as SP 3.2, “Operate the Service System,” including, for example, subpractice 8, “Collect customer satisfaction information immediately after services are delivered or service requests are fulfilled.”

Step 2: Gap Analysis: Analyze the Chosen Use Cases with the Help of the To-Be Requirements

In step 2, the currently executed (As-Is) use cases and As-Is processes were analyzed together with experienced members of the Department of Computer Science with respect to the defined requirements coming from the To-Be processes. This step capitalized on CMMI’s long-standing definition of process: “In the CMMI Product Suite, activities that can be recognized as implementations of practices in a CMMI model.” The gap analysis was based on this understanding of process. The main focus of this step was to find out how the defined use cases were really executed to identify the degree of conformance and domains of deviation with respect to the previously defined To-Be model. We successfully applied graphical process representations, interview sessions, and questionnaires as valuable techniques for conducting this step.

Step 3: Derive Improvement Opportunities

Based on the identified conformances and noncompliances of the As-Is processes with the To-Be model, we identified a prioritized improvement program for DCS. We also successfully applied interview sessions and questionnaires as valuable techniques for conducting this step. It is important to note that during this step, we included GG 2 and GG 3 and the related generic practices. This helped us to identify systematic improvement opportunities in our department, such as a lack of clearly defined and communicated responsibilities and authorities (GP 2.4) and incomplete organizational policies (GP 2.1).

Sample Interpretation of the SD Process Area for the Educational Domain

In this section, we will demonstrate how the SD process area might be interpreted with respect to the following two use cases:

1. Delivering degree programs

2. Offering a specific lecture

The intent of this section is not to completely document all details of the interpretation, but to give an impression of the applicability and added value of interpreting Service Delivery for an educational institution. It should also be noted that the interpretation assumes that the service-oriented shift in the mindset of the relevant stakeholders has already taken place (i.e., they understand their business already as providing educational services).

Interpretive Guidance for Use Case 1: Delivering Degree Programs

SG 1 Establish Service Agreements

SP 1.1 Analyze Existing Agreements and Service Data

To achieve this, the following information should be collected and analyzed:

• Number of students who enrolled in the last semester or academic year

• Feedback from students for each lecture offered in the previous semester

• Student attendance for each lecture offered in the previous semester

• Suggestions from last general staff or board meeting

• Review of the current study regulations

SP 1.2 Establish the Service Agreement

Draw up a detailed study regulation, which should include, among other things:

• The detailed structure of a degree program

• The syllabus of a degree program

• The examination regulations

• The type of certificate to be obtained upon completion of the program

Publicize the study regulations so that students can easily access them without restrictions.

Make sure changes in study requirements are reflected in the regulations by periodic (e.g., annual) reviews.

SG 2 Prepare for Service Delivery

SP 2.1 Establish the Service Delivery Approach

Identify channels to be used by students to submit

• Application for admission

• Enrollment

• Application for withdrawal

Sample channels could be Web forms, telephone, or a service center.

Set deadlines for applications.

Set maximum durations for processing student requests (e.g., a student who applies for a degree program receives a response within six to eight weeks at most).

SP 2.2 Prepare for Service System Operations

Define interfaces (roles and responsibilities) for receiving and processing student requests. For example:

All requests concerning application for admission should be sent to the department secretary, who is responsible for forwarding the requests, if necessary, to the respective individuals or departments in charge of further processing.

Examination issues should be sent directly to the department of examinations. However, in some minor cases, lecturers concerned can be contacted directly.

SP 2.3 Establish a Request Management System

Typical student requests may include

• A new student requesting admission into a degree program

• A registered student requesting withdrawal

• Amendment proposals of the study regulations from the student board

A request management system can be put in place to manage and track the status of each student request.
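As a minimal sketch of what such a system might record, the following Python fragment tracks each student request and its status. The request kinds and status values are illustrative assumptions, not prescribed by the model; a real system would add persistence, notifications, and reporting.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    RECEIVED = "received"
    IN_PROGRESS = "in progress"
    CLOSED = "closed"

@dataclass
class StudentRequest:
    request_id: int
    kind: str                          # e.g., "admission", "withdrawal", "amendment"
    status: Status = Status.RECEIVED

class RequestManagementSystem:
    """Tracks each student request and its status (illustrative only)."""

    def __init__(self) -> None:
        self._requests: dict[int, StudentRequest] = {}
        self._next_id = 1

    def submit(self, kind: str) -> StudentRequest:
        request = StudentRequest(self._next_id, kind)
        self._requests[request.request_id] = request
        self._next_id += 1
        return request

    def update_status(self, request_id: int, status: Status) -> None:
        self._requests[request_id].status = status

# Usage: track an admission request from receipt to closure.
rms = RequestManagementSystem()
req = rms.submit("admission")
rms.update_status(req.request_id, Status.CLOSED)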

SG 3 Deliver Services

SP 3.1 Receive and Process Service Requests

Based on the interfaces defined for receiving and processing student requests, each new request is simply forwarded to the appropriate station.

For each request, define a detailed action line for its processing.

SP 3.2 Operate the Service System

Offer lectures each semester as stipulated in the study regulations.

Offer counseling to students by assigning each student to a mentor.

Offer library services as well as learning centers.

SP 3.3 Maintain the Service System

Regularly maintain the infrastructure and equipment (e.g., lecture rooms, hardware, and software products).

Interpretive Guidance for Use Case 2: Offering a Specific Lecture

SG 1 Establish Service Agreements

SP 1.1 Analyze Existing Agreements and Service Data

• Number of students registered for a lecture

• Lecture materials, such as slides and notes

SP 1.2 Establish the Service Agreement

Draw up regulations for undertaking a lecture, which may include

• Active participation during lectures

• Active participation in lecture assessments (e.g., assignments and lab sessions)

• Exam registration

• Passing examination

SG 2 Prepare for Service Delivery

SP 2.1 Establish the Service Delivery Approach

Lectures will be offered as defined in the lecture schedule (i.e., lecture room, lecturer, and time slot).

Lectures will be provided using digital projectors, whiteboards, or overhead projectors.

SP 2.2 Prepare for Service System Operations

Ensure lecture rooms, lab rooms, hardware, and software are all in good condition.

SP 2.3 Establish a Request Management System

Put a system in place whereby students may register for participation in the lecture, for participation in the exam, and so on.

SG 3 Deliver Services

SP 3.1 Receive and Process Service Requests

Based on the interfaces defined for receiving and processing student requests, allocate each student to the courses he or she has registered for.

SP 3.2 Operate the Service System

Have qualified lecturers give lectures.

Allocate office hours for each lecturer.

Assess students (e.g., through assignments, tests, and/or examinations).

Have students evaluate lectures.

SP 3.3 Maintain the Service System

Maintain the lecture rooms and equipment.

Review lecture material.

Lessons Learned

Understanding the business in our Department of Computer Science as a service helped a lot in clarifying a general understanding of what our department was currently doing and what it should be doing if we want to become a professional provider of educational services. The application of the selected process areas to our business contributed greatly to improving process transparency and identifying improvement opportunities. Additionally, a much better mutual understanding between the department’s staff and the students emerged. This can be interpreted as a big step from a functional mindset toward a (real) service-oriented mindset in our department. Finally, applying generic goals 2 and 3 (in combination with the related generic practices, e.g., GP 2.1 and GP 2.4) contributed toward making the identified process improvements sustainable. In the remainder of this section, we point out two specific findings worth mentioning:

• Recursive nesting of services and service components

• Educational services rely on a very cooperative culture

Finding 1: Recursive Nesting of Services and Service Components: Structural Similarity to Engineering Process Areas

During our analysis, we discovered that one person’s use case typically is composed of use cases of a finer granularity from another person’s perspective. So, what seems to be “a” use case typically is just a component of a use case for another person. This situation appears to be similar to the recursive understanding of the Engineering process areas of the CMMI-DEV model. The authors of the CMMI-SVC model confirm that this is not just similar, but the same intended relationship. Here is an example:

Use case 1, provision of degree programs, can be decomposed into delivering several study programs, each of which leads to one of the degrees offered in the overall degree program. Each study program in turn can be further refined into a set of syllabuses and related courses. Finally, each course can be further decomposed into single lecture units, which will be offered on, for example, a weekly basis. In turn, the service of delivering a lecture on a specific date, for which a specific lecturer is responsible, can be considered a service component when viewed from a more abstract level. This is similar in structure to the relationship between a product and a product component in the CMMI-DEV model.
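One way to make this recursive nesting explicit is with a recursive data type. The following Python sketch is a hypothetical illustration, not part of the model: a service component may itself contain finer-grained service components, mirroring the product and product component relationship just described.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceComponent:
    """A service that may recursively contain finer-grained service components."""
    name: str
    components: List["ServiceComponent"] = field(default_factory=list)

# The decomposition described in the text, from degree program down to lecture unit:
lecture_unit = ServiceComponent("Lecture unit (week 1)")
course = ServiceComponent("Course: Software Engineering", [lecture_unit])
study_program = ServiceComponent("B.Sc. Computer Science", [course])
degree_programs = ServiceComponent("Provision of degree programs", [study_program])

def depth(component: ServiceComponent) -> int:
    """Nesting depth: 1 for a leaf, growing with each level of decomposition."""
    return 1 + max((depth(child) for child in component.components), default=0)

assert depth(degree_programs) == 4  # four levels of service nesting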

We created Figure 6.5 to depict this idea of recursive nesting (and repeated application of the CMMI-SVC process areas) more clearly.

Figure 6.5 Separation of Concerns and Recursive Nesting of Services and Service Components in the Educational Domain


Finding 2: Educational Services Rely on a Very Cooperative Culture

During our analysis, we discovered that in some cases, the service can be successfully delivered with low participation from the service consumers (i.e., the students involved). This is basically true for most administrative processes and services (e.g., the registration processes): Based on some data from the student, this service can successfully be accomplished by the educational institution regardless of the student’s further willingness to cooperate.

But for the core processes, such as providing degree programs or giving specific lectures, successful service delivery relies considerably on the willingness of students to actively cooperate. In particular, successful delivery of a lecture or degree program cannot, by the nature of the learning process, be the sole responsibility of the educational institution. Even the best educational service provider cannot guarantee the success of its degree programs if there is no willingness on the service consumer’s side (the students) to contribute their share of responsibility to the overall process.

Instead, successful delivery of educational services always requires active cooperation on the part of the student as well (e.g., preparing for lectures and actively doing homework). Successful service delivery in the educational domain is therefore more of a joint service production than just a one-way service offering from an educational institution to a student consumer. Nevertheless, the educational institution can do a lot to facilitate and improve such a successful learning process. And the CMMI-SVC model offers a lot of advice for doing just that.

Additional Findings

While we found that CMMI-SVC in general offers a high degree of applicability and practical advice for improving delivery of educational services, we have not yet determined how and where CMMI’s high maturity levels might be useful in this application domain. More work and more practical experience are required.

Also, we found that it is hard for some academics to get used to the idea of understanding (academic) education as a service. Their mindset is often dominated by ideas of being researchers in pursuit of scientific truth and innovation. Our use-case-driven approach and our focus on the process aspects of academic work improved their acceptance of the service paradigm substantially. The secret is to have them understand that the main focus of this mindshift is the process side, while they remain in the driver’s seat for all content-related aspects.

Next Steps

Our initial experience with applying CMMI-SVC in the educational domain was so rewarding and promising that we plan to further pursue that path in two directions.

We are currently investigating relationships between the CMMI-SVC approach and the general approach of Business Process Management (BPM). Some of the research questions in that regard are: To what extent might BPM benefit from a systematic service orientation, and how can BPM capitalize on the maturity model contained in CMMI-SVC?

During our work, we identified many interfaces to central organizational entities of our university. Thus far, we have not actively included them in the scope of our work, as we need them to become more service-aware and eventually service-oriented. Some of the research questions toward that end are the following: Do all departments simultaneously have to undergo a CMMI-SVC restructuring in order to make this work for an educational institution? Can we possibly identify appropriate organizational boundaries so that we can successfully apply CMMI-SVC without having to reorganize the whole institution at once? What incremental transition techniques might be available for this purpose?

Plans Are Worthless

By Brad Nelson

Book authors’ comments: From early in the development of the CMMI-SVC model, we began to hear concerns from users about the guidance or policy that might be imposed by government acquirers on providers bidding on service contracts. We sought the participation of experts such as Brad Nelson on our Advisory Group to ensure that we were considering these issues. In this essay, the author, who works on industrial policy for the Office of the Secretary of Defense, makes clear that it is appropriate capability that is sought, not the single digits of a maturity level rating. Further, he describes the ongoing responsibility of the government acquirer, rather than just the responsibility of the provider.

The Limits of the Maturity Level Number

The CMMI model developed by the CMMI Product Development Team (involving representatives from industry, government, and the Software Engineering Institute [SEI] of Carnegie Mellon) can be thought of as an advanced process planning tool. CMMI defines maturity levels ranging from level 1—ad hoc performance—through level 5—process and subprocess optimization.

Stated colloquially, a level 1 organization accomplishes goals without a well-developed organizational memory to ensure that good decisions leading to work accomplishment will be repeated. It’s sometimes said that a level 1 organization is dependent on individual heroes who react well to events and other people. On the other end of the spectrum, a level 5 organization has measurable processes that repeatedly guide good decisions and those processes are continuously improved.

A level 5 organization has a breadth and depth of institutional capability and culture to reliably optimize workflows and isn’t dependent on any one person. It’s certainly reasonable to expect a much higher probability of project success from a level 5 organization than from a level 1 organization. The SEI certifies individuals to provide CMMI maturity level appraisals using the Standard CMMI Appraisal Method for Process Improvement (SCAMPI). Given this well-organized CMMI infrastructure, wouldn’t it make sense for a buyer to require minimum maturity level ratings for potential suppliers?

What’s wrong with Department of Defense (DoD) staff members who provide opinions that “DoD does not place significant emphasis on capability level or maturity level ratings...?”3 Don’t they get it?

Some understanding of this opinion might be gained through the examination of a quote from General Dwight D. Eisenhower, who said that “plans are worthless but planning is everything.” It’s quite a pithy saying, and a couple of things can be quickly inferred. The first is that given complex endeavors with a large number of variables, plans are at high risk of obsolescence. The second is that the familiarization and insights gained from the planning process are invaluable to making informed adaptations to changing conditions.

Perhaps oversimplifying a bit, a plan is a static artifact and those who rely on static artifacts do so at their own peril. The real value of a plan is that its completion facilitates the detailed situational awareness of the planner and his or her ability to perform well in a changing environment. But don’t continuously improving level 5 organizations avoid the trap of static process planning?

While it may appear like “hairsplitting” to some, it’s critical to observe that the preceding discussion of the value of a plan applies to the CMMI rating, not to process plans. In fact, it’s actually the CMMI rating that’s the static artifact.

Viewed from a different angle, even though a high maturity organization may have process plans that are adapted and optimized over time, the appraiser’s observation of that organization is static. Organizations and the people in them change over time. Furthermore, careful consideration must be given to the relationship between a SCAMPI appraiser’s observations of prior work by one group of people and the new work with possibly different people. What was relevant yesterday may not be relevant today.

Considerations for the Responsible Buyer

When committing hard-earned money to a purchase, a buyer hopes for the best results. Going beyond hope, a smart buyer looks for indicators that a seller actually has the capability to achieve expected results. An experienced smart buyer establishes requirements so that the desired results are accomplished by a capable supplier. Understanding what a CMMI rating is and isn’t helps the smart and experienced buyer evaluate CMMI ratings appropriately.

To estimate a supplier’s probability of success, a good buyer must first understand what is being purchased. A buyer expecting that a supplier’s mature processes will enhance the buyer’s probability of success must do more than hope that a past CMMI rating is applicable to the current project. Due diligence requires the buyer to analyze the relevance of a potential supplier’s processes to the buyer’s particular project.

When this analysis is done, it should carry substantially more weight than a CMMI rating. When it’s not done and significant weight is given to a rating, the buyer is effectively placing due diligence in the hands of the appraiser. This is an obvious misuse of the rating. A CMMI rating can be one of many indicators of past performance, but a rating is a “rating” and not a “qualification.” CMMI appraisers do not qualify suppliers.

Remaining Engaged after Buying

Once a savvy buyer chooses a qualified supplier, the buyer’s real work begins. In the words of ADM Hyman Rickover, father of the nuclear navy, “You get what you inspect, not what you expect.” This principle is applied every day by smart, experienced buyers when they plan and perform contract monitoring. It’s an axiom that performance monitoring should focus on results rather than process.

Nevertheless, intermediate results of many endeavors are ambiguous, and forward-looking process monitoring can reduce the ambiguity and point to future results. CMMI can provide valuable goals and benchmarks that help a supplier to develop mature processes leading to successful results, and SCAMPI ratings can provide constructive independent feedback to the process owner.

A rating, though, carries with it no binding obligation to apply processes at appraised levels to particular projects or endeavors. A rating is not a qualification and it’s also not a license. Qualifications and licenses imply mutual responsibilities and oversight by the granting authority. Once a CMMI rating is granted, neither responsibilities nor oversight is associated with it.

A failure or inability to maintain process performance at a rated level carries no penalty. There is no mechanism for the SEI or a SCAMPI appraiser to reduce or revoke a rating for inadequate performance. The appraiser has no monitoring role after providing an appraisal rating at a point in time, and certainly has no responsibility to a buyer.

The obligation to perform to any particular standard is between the buyer and the supplier. Essentially, a buyer who depends on a supplier’s CMMI rating for project performance and uses it as justification for reducing project oversight both misunderstands and misuses the CMMI rating and abdicates the buyer’s own responsibility.

Seeking Accomplishment as Well as Capability

It is intuitive that mature processes enable high-quality completion of complex tasks. CMMI provides an advanced framework for the self-examination necessary to develop those processes. The satisfaction of reaching a high level of capability represented by a CMMI rating is well justified. It is possible, though, to lose perspective and confuse capability with accomplishments.

It’s been said that astute hiring officials can detect resumes from job applicants that are over-weighted with documented capabilities and under-weighted with documented accomplishments. This may be a good analogy for proposal evaluation teams. The familiar term ticket punching cynically captures some of this imbalance. Herein lies another reason why CMMI can be quite valuable, yet entirely inappropriate, as a contract requirement.

In the cold, hard, literal world of contracts and acquisitions, buyers must be careful what they ask for. Buyers want suppliers that embrace minimum levels of process maturity, but “embrace” just doesn’t make for good contract language. While it might at first seem to be a good substitute to require CMMI ratings instead, it can inadvertently encourage a cynical “ticket punching” approach to the qualification of potential suppliers. Because ratings themselves engender no accountability, there should be no expectation that ratings will improve project outcomes.

Pulling this thread a bit more, it’s not uncommon to hear requests for templates to develop the CMMI artifacts necessary for an appraisal. While templates could be useful to an organization embracing process maturity, they could also be misused by a more cynical organization to shortcut process maturation and get to the artifact necessary for a “ticket punching” rating. Appraisers are aware of this trap and do more than merely examine the standard artifacts. But as a practical matter, it can be difficult to distinguish between the minimum necessary to get the “ticket punch” and a more sincere effort to develop mature processes.

If an influential buyer such as the Department of Defense were to require CMMI ratings, it would likely lead to more CMMI ticket punching rather than CMMI embracing. The best that the DoD can do to accomplish the positive and avoid the negative has already been stated as “not placing significant emphasis on capability level or maturity level ratings, but rather promot[ing] CMMI as a tool for internal process improvement.”4

Summary

To summarize, it certainly appears reasonable to expect more mature organizations to have a higher probability of consistently achieving positive results than less mature organizations. CMMI ratings are external indicators that provide information, but they are only indicators. A savvy buyer must know what the indicators mean and what they don’t mean. Ultimately, accountability for project success is between the buyer and the seller. CMMI and SCAMPI ratings are well structured to provide important guidance and feedback to an organization on its process maturity, but responsible buyers must perform their own due diligence, appraisal of supplier capabilities, and monitoring of work in progress.

CMMI Ensures Vehicle Insurance Services

By Gary H. Lunsford and Tobin A. Lunsford

Book authors’ comments: The vehicle insurance industry is a prime example of a large, complex service arena where the principles of the CMMI for Services model can play very effectively in promoting continuous process improvement. In this essay, the authors present an overview of the vehicle insurance industry and then demonstrate how two process areas from the CMMI for Services model can help to ensure significant process improvement in this highly competitive industry. Dr. Gary Lunsford is a senior principal analyst with ARINC Engineering Services, LLC, an SEI Partner. He is an SEI Certified Lead Appraiser and Instructor for the three CMMI models—Development, Services, and Acquisition. Tobin Lunsford, Dr. Lunsford’s son, is the AVP, National Material Damage, for the Infinity Insurance Companies. He is responsible for salvage, rental management, company estimatics, quality control, and coordination of the material damage training curriculum.

An Overview of the Vehicle Insurance Industry

Vehicle insurance is insurance purchased for cars, trucks, buses, and other vehicles. Vehicle insurance is also known as auto insurance, car insurance, or motor insurance and is considered to be part of property and casualty insurance. Vehicle insurance primarily provides protection against losses incurred as a result of accidents as well as the resulting liability.

In the United States, most states require a driver to purchase insurance before operating a motor vehicle on public roads. Although states vary in requirements and their enforcement, drivers who fail to purchase vehicle insurance may incur fines, have their driver’s license suspended or revoked, or face a jail sentence. Drivers who purchase vehicle insurance can protect themselves against both collision and comprehensive events. Collision insurance covers the insured’s vehicle in the event of a traffic accident. Comprehensive insurance covers vehicle damage incurred in incidents such as fire, theft, vandalism, and weather-related events.

Whenever an insured motorist makes a claim involving repairs, the insured usually pays a fixed fee called a deductible. Normally, the payment is made directly to the repair facility that restores the damaged vehicle to preloss condition. If an accident is severe enough that the damage costs more to repair than the vehicle is worth, the car is declared a “write-off” or “total loss” and payment for the claim, minus the deductible, is made directly to the insured and/or to the lien holder.

Since large amounts of money are involved in providing, tracking, and monitoring these services, the vehicle insurance industry is constantly looking for ways to reduce costs for companies, streamline service for customers, and garner profits for stakeholders. However, a number of factors complicate the effort. We will look briefly at four factors that affect the cost of doing business in the insurance industry: determining premiums, securing reinsurance, providing training, and complying with regulations.

Determining Premiums

An often-overlooked cost factor is determining the premium that a policyholder pays for coverage. The premium may be mandated by the government or determined by the insurance company in accordance with governmental regulations that vary from state to state. Insurance companies hire actuaries to determine best estimates for premiums based on statistical data representing historical activity and future claim projections for given risk factors. These risk factors include the driver’s gender, age, marital status, driving history, anticipated annual mileage, and vehicle classification.
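Real actuarial models are far more sophisticated, but a toy multiplicative rating sketch can show how such risk factors might adjust a base premium. Every rate and relativity in the Python fragment below is invented for illustration and implies nothing about actual pricing practice.

# Hypothetical relativities per risk factor (illustration only, not real rates).
BASE_ANNUAL_PREMIUM = 600.00

RELATIVITIES = {
    "age_under_25": 1.40,
    "clean_driving_history": 0.85,
    "high_annual_mileage": 1.15,
    "sports_car_classification": 1.30,
}

def estimate_premium(factors: list[str]) -> float:
    """Multiply the base rate by the relativity of each applicable factor."""
    premium = BASE_ANNUAL_PREMIUM
    for factor in factors:
        premium *= RELATIVITIES.get(factor, 1.0)
    return round(premium, 2)

# A young driver with a clean history in a sports car:
print(estimate_premium(["age_under_25", "clean_driving_history",
                        "sports_car_classification"]))  # 928.2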

Securing Reinsurance

Another factor driving the vehicle insurance business is the cost of reinsurance. Simply stated, reinsurance is insurance for insurers (the companies) that face special needs or catastrophic loss potential. This type of insurance is designed to mitigate the significant risks that can financially threaten a given insurance company, particularly those with smaller capitalization, and affect the continuity of its service.

Providing Training

Because the vehicle insurance industry is complex, highly competitive, numbers-driven, and ever-changing, it has become very high-tech; therefore, domain training is essential. It is not surprising that several companies have developed specialized software applications exclusively for the insurance industry.

Complying with Regulations

Until recently, the government regulated the vehicle insurance industry mainly at the state level. Although the underlying goal of the regulations is to ensure that the insurance companies properly cover the losses of their policyholders, regulations are not uniform. All states have Departments of Insurance that strongly influence how companies operate within their jurisdictions. Interacting responsibly with 50 state insurance departments is a time-consuming, costly affair for vehicle insurance companies.

Currently, the U.S. Congress is considering legislation that will regulate some insurance functions at the national level. [Note: The vehicle insurance business is part of Finance within the U.S. Department of the Treasury.] In late 2009, the federal government began requiring every insurance carrier to report all instances of total loss to the Department of Justice. These reports and statistics have become the nucleus for the formation of a common database of information for insurers; the effort is being heralded very positively by the insurance companies themselves.

This overview of some of the business considerations within the vehicle insurance industry illustrates the complexity of the marketplace and highlights several performance issues facing insurance companies, stakeholders, and policyholders. Note some of the common terminology used in the industry: incidents, risks, risk mitigation, damages, claims, continuity of service, and training. These terms occur frequently in the CMMI for Services model. We will focus on two selected process areas within the model, namely Work Planning and Service Delivery, and see how they can positively affect insurance companies and policyholders. We will then indicate some specific areas in data management where the vehicle insurance industry can improve by using these process areas and related practices from the Measurement and Analysis process area.

Work Planning

The purpose of Work Planning is to establish and maintain plans for defined service activities. Sometimes the work involves obtaining services from outside vendors or providing services to outside vendors. Some service engagements have definite termination conditions, while others are long-term, ongoing, and continuous. In the flow of business, the vehicle insurance industry uses both types of service engagements, as we will illustrate shortly.

The specific goals of the Work Planning process area are threefold: estimating the attributes of the work products and tasks, developing a work plan, and obtaining commitment to the plan.

Estimating the Attributes of the Work Products and Tasks

The vehicle insurance industry uses a number of approaches to scope and define work products and tasks embedded in a proposed service. As a precursor to developing a strategy, insurance carriers often complete a trend analysis of data covering a two- or three-year period to investigate problem areas or identify new or expanded business areas. Comparisons are made with peer competitors’ performance profiles and strategies. Part of this activity involves asking strategic questions. For example, what is the real driving metric for a proposed project to improve settlement performance for damage claims? Is it to reduce cycle times (i.e., the time elapsed between a vehicle being declared a total loss to when the insured actually receives the settlement check), or to increase profitability in a certain area of salvage operations? The answers to this type of question help to define lifecycle phases for the work, determine necessary resources, and establish the estimates for effort and cost.
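The cycle-time metric just described is straightforward to compute from claim records. The following sketch assumes hypothetical claim data and field names; only the definition of cycle time comes from the text.

from datetime import date

def settlement_cycle_time(total_loss_declared: date, check_received: date) -> int:
    """Days elapsed from total-loss declaration to receipt of the settlement check."""
    return (check_received - total_loss_declared).days

# Mean cycle time across a set of claims (hypothetical data):
claims = [
    (date(2010, 3, 1), date(2010, 3, 18)),
    (date(2010, 3, 5), date(2010, 3, 29)),
]
mean_days = sum(settlement_cycle_time(d, c) for d, c in claims) / len(claims)
print(mean_days)  # 20.5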

Another strategizing approach used is to form a focus group to assist in defining the scope of a proposed service. These special-purpose groups often consider business support and customer service indices in determining strategy. Once begun, they ensure that projects are reevaluated every two or three weeks for fine-tuning and possible revisiting of the early planning activities, assumptions, and rationales.

As another early step in developing estimates, many vehicle insurance companies establish preliminary work breakdown structures (WBS). Constructing a WBS can help in the following ways.

• Clarify understanding of the intended area by interviewing any previous managers and/or former process leaders.

• Determine the status of competitors’ activities—establish performance metrics to capture relative standing.

• Develop a marketing case for upper management buy-in.

• Look for any existing contracts that might constrain or bind activities.

Sizing activities can fan out in several different directions. For example, a focus or study group could do any or all of the following activities.

• Perform an analysis at the individual claim level by asking if adding more staff members would increase the throughput proportionately.

• Define metrics for the nature of the vehicles involved (personal passenger cars, business fleet cars, trucks, custom vehicles, etc.).

• Evaluate whether or not specialized knowledge is required to do specific processing.

• Revamp operations (e.g., introduce more software automation enhancements) in order to free staff members for other assignments.

• Differentiate between fixed costs and variable costs when establishing sizing type attributes.

Developing a Plan

At the heart of developing a work plan are budgeting and scheduling activities. Consider how one vehicle insurance company plans and manages its salvage operations. The salvage unit publishes a schedule that assigns adjusters and handlers to various claims units and identifies where and when various resources will be needed. A Material Damage Quarterly Planner (MDQP) is published and updated as needed. The MDQP, which essentially functions as the work plan for many of the salvage unit’s efforts, indicates when various salvage initiatives will be implemented. Field operating units are given 90 days advanced notice of a specific initiative, which could involve transferring all or most of their operations back to corporate headquarters or could lead to a significant increase of field staff members to handle additional responsibilities. Note: The trend is to move functions back to corporate headquarters from the field as the operations are more efficient in centralized organizations. Impacts on sister organizations are considered in establishing and maintaining schedules. Since dependence on specific tools and software packages is critical, the progress of the schedule is monitored closely and corrective actions are taken as often as biweekly.

Risks are identified early and tracked closely. Using the calendar year as their business operations year, many vehicle insurers formulate risk management activities in January or February for initiatives that will take place later in the year. Risks are reevaluated whenever there are changes in either external or internal circumstances, in the operating environment, or in the behavior or attitudes of customers. One of the continuing risks that plague insurance companies is the constant need to validate that their statistical models are properly tuned to changing market conditions. The trend is toward using more specialized models rather than generalized models.

Vehicle insurance carriers mine extensive amounts of data, especially when analyzing the damage claims of policyholders. Results are reported to management and the set of stakeholders. As mentioned previously, national data banks also provide commonly collected vehicle data.

The involvement of stakeholders is planned early and monitored as work progresses. From the standpoint of the vehicle insurance companies, the primary stakeholders are the members of the upper management team who approve initiatives and provide resources and staffing. The insureds (the policyholders) are really secondary stakeholders whose driving interests are that the insurance companies remain solvent and provide satisfactory resolution of damage claims.

Obtaining Commitment

To be effective, a plan must be feasible and backed by responsible participants committed to executing the provisions of the plan. Two of the biggest factors that can affect the success of a work plan in the vehicle insurance industry are the adequacy and training of staff members and government regulations. As a fundamental commitment, upper management must provide the necessary staffing, resources, and training called for in plans. Furthermore, when changes in state and federal insurance regulations occur (e.g., across-the-board rate increases or significant changes in reporting requirements), vehicle insurers need to quickly reconcile the differences between estimated resources and actual needs.

In conclusion, all plans require continuing commitment from participants. In the complex vehicle insurance industry, performance requires unusual cooperation across many people and facilities—from agents to adjusters to body shop operators and tow truck drivers.

Service Delivery

The purpose of Service Delivery is to deliver services in accordance with service agreements. Vehicle insurance companies set up service delivery agreements with their vendors and policyholders, take care of the various service requests they receive, and strive to maintain and operate their many-faceted service systems as efficiently as possible. Their service delivery operations map to the three specific goals of the Service Delivery process area: establishing service agreements, preparing for service delivery, and delivering services.

Establishing Service Agreements

The essence of service agreements lies in accurately capturing the requirements stated by the customer or end user. The service agreement is the vehicle insurance policy itself; this document establishes the relationship between the insurance company and the policyholder (the insured). Applicants obtain coverage by working with an insurance agent or by directly purchasing a policy from the company itself. Processing applications takes into account several factors including the type of vehicle to be insured and the credit rating and driving history of the applicant. Four different categories of service agreements (policies) are widely recognized: Non-Standard, Standard, Preferred, and Ultra-Preferred. Non-Standard equates to high risk. Standard policies, which account for about 60 percent of the policies written, reflect higher credit scores on the part of the applicants. Preferred indicates that the applicant has a high credit score but may have had an accident in the past. The Ultra-Preferred category is reserved for those applicants with the highest credit scores and lowest risk profiles.
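These category descriptions suggest a simple decision rule based on credit score and accident history. The thresholds in the following sketch are entirely hypothetical; actual underwriting rules vary by company and by state.

def policy_category(credit_score: int, past_accidents: int) -> str:
    """Toy classifier for the four policy categories (hypothetical thresholds)."""
    if credit_score >= 780 and past_accidents == 0:
        return "Ultra-Preferred"   # highest credit score, lowest risk
    if credit_score >= 740:
        return "Preferred"         # high credit score, possibly a past accident
    if credit_score >= 640:
        return "Standard"          # roughly 60 percent of policies written
    return "Non-Standard"          # high risk

print(policy_category(800, 0))  # Ultra-Preferred
print(policy_category(760, 1))  # Preferred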

Preparing for Service Delivery

When preparing to deliver services, a vehicle insurance company must develop an effective system to process requests from the policyholders in accordance with the provisions of their policies (service agreements). This preparation includes the following activities:

• Establishing an infrastructure and the statistical base necessary to determine the appropriate policy provisions and premiums

• Building the capability to write policies and track premium payments

• Establishing an environment to adjudicate and process accident claims—the primary type of service request

• Ensuring that the personnel and facilities are in place to support the full range of required support services, from addressing legal and/or medical issues to establishing national agreements with tow truck companies to certifying body shops for repairing damaged vehicles

Delivering Services

Depending on the situation, policyholders will interact with the vehicle insurance system in different ways. For example, a service request could involve obtaining repair estimates when a vehicle has been involved in an accident. The appropriate, corresponding support service system requires that several components be in place, including adjusters to assess damages and body shops to repair the damaged vehicle. The service delivery approach must also consider the degree of automation involved in submitting a claim in accordance with the consumers’ comfort level and their perceived need to deal with a “real live person” rather than simply filing a claim online.

Establishing and maintaining the service system(s) are complex tasks that must take into account the dynamics of state and federal regulatory laws as well as the types of service incidents themselves. Typical service incidents can include funding considerations, an unusual spate of accidents, or defaults on premium payments. When a policyholder defaults on a premium payment, it means that the insurance company has less discretionary funding to invest in capital improvements.

One particularly important component of service delivery is the claims system. Each carrier’s claims system must interact with both internal and external vendors, as well as submit the reports required by the state and federal government. Currently, the reporting systems’ data collection mechanisms vary widely from paper capture to electronic file transfers. Electronic data collection tools can be either mainframe-based or server-networked. In the vehicle insurance industry, frequent upgrades are common as the operational trend is to move away from mainframes toward server environments in order to interface more easily with vendors, government officials, and policyholders. Most platforms are Oracle-based.

Claims processing is often partitioned between internal servicing and external outsourcing in a way that is designed to be transparent to the service requester. Insurance companies are increasingly concerned with keeping the claimants informed about the progress of their claim requests and following up after the completion of service. Follow-up activities often involve a customer survey; vehicle insurance companies want customers to offer suggestions for service improvement. Some insurance companies define a Customer Service Index (CSI) that measures the customer’s satisfaction with the services performed.
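Definitions of a CSI vary from company to company; one plausible form is a weighted average of survey ratings. The dimensions, weights, and scale in the following sketch are assumptions made for illustration only.

# Hypothetical survey dimensions and weights (must sum to 1.0).
CSI_WEIGHTS = {
    "timeliness_of_settlement": 0.4,
    "communication_during_claim": 0.3,
    "quality_of_repair": 0.3,
}

def customer_service_index(ratings: dict[str, float]) -> float:
    """Weighted average of 1-10 survey ratings, scaled to 0-100."""
    score = sum(CSI_WEIGHTS[dim] * ratings[dim] for dim in CSI_WEIGHTS)
    return round(score * 10, 1)

print(customer_service_index({
    "timeliness_of_settlement": 9,
    "communication_during_claim": 8,
    "quality_of_repair": 7,
}))  # 81.0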

Information technology (IT) departments perform crucial services, including the following vehicle insurance service functions.

• Maintain early policy records, such as initial policy ratings and binding requests.

• Monitor administrative service requests from claimants and the status of premium payments.

• Analyze the scope of each damage claim and monitor the progress of claims processing.

Service maintenance is performed on all components of the insurance delivery system, including corrective and preventive maintenance on the application platforms and upgrade training for the employees who provide services directly to the public. As indicated earlier, both adaptive and perfective maintenance are also the rule of the day as systems are constantly being upgraded in response to dynamic governmental regulations and technological advances.

Areas Where CMMI-SVC Can Help to Improve Service Operations

Given that the vehicle insurance industry already enjoys fair process discipline and effective use of data, we might wonder how CMMI-SVC could be used to further improve the field. The practices of the Measurement and Analysis process area can be used more extensively, and some of the demands for further capability would align with the practices in the four high maturity process areas. In this way, it is possible that the vehicle insurance industry can demonstrate some of the first high maturity behavior in the service industry, and that insurance companies that move to implement these practices may enjoy a market advantage.

1. Identifying trends earlier and enhancing trend analysis activities

Move from a reactive mode to a more proactive mode. Identify vehicle-specific issues sooner, which will save estimating costs, decrease rental expenses, compress claims cycle times, and increase customer satisfaction.

2. Developing a new type of measurement to predict potential issues in a class of vehicles

Drill through collected data to identify “early warning” alerts indicating potential issues with a particular class of vehicle (a minimal sketch of this idea follows the list). Examples might include improper airbag deployment on four-door passenger vehicles, tire manufacturing defects that can create vehicle hazards, and cruise control modules with faulty wiring that can create fire hazards.

3. Recognizing the important data to mine and analyze

“Put the pieces together” from the plethora of data available to property and casualty companies. Incorporate predictive analytics on the truly important components available in the vehicle insurance databases.

4. Thinking outside the box about training

Most current training curricula reflect the data presently being processed by the insurance carriers. Innovative training could enhance carriers’ ability to identify and investigate data trends more imaginatively and target learning experiences toward more effective customer support. These activities would certainly increase an insurance carrier’s ROI.

5. Enhancing processes to accommodate the transition of new technologies more adeptly

The rapid increase of technology change will continue to require vehicle insurance carriers to incorporate and transition innovations into their service delivery systems while improving service response times and quality. The best practices of several process areas of CMMI-SVC can be invoked to help in this important area of vehicle insurance services.
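As a minimal sketch of the “early warning” idea from item 2, the following fragment flags vehicle classes whose recent claim rate exceeds their historical baseline by a chosen ratio. The threshold, data, and rule are invented for illustration; real predictive analytics would be considerably more refined.

def early_warning_alerts(baseline_rates, recent_rates, threshold=1.5):
    """Flag vehicle classes whose recent claim rate exceeds the historical
    baseline by more than the given ratio (illustrative rule of thumb)."""
    alerts = []
    for vehicle_class, baseline in baseline_rates.items():
        recent = recent_rates.get(vehicle_class, 0.0)
        if baseline > 0 and recent / baseline > threshold:
            alerts.append(vehicle_class)
    return alerts

# Claims per 1,000 insured vehicles (hypothetical data):
baseline = {"four-door sedan": 12.0, "light truck": 9.0}
recent = {"four-door sedan": 21.0, "light truck": 9.5}
print(early_warning_alerts(baseline, recent))  # ['four-door sedan']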

Conclusions

In this essay, we showed how two specific process areas from the CMMI for Services model apply directly to the vehicle insurance industry. The authors’ own experiences have shown that practices in the CMMI for Services model can ensure continuous process improvement in vehicle insurance services in most, if not all, of their areas of responsibility and concern.

The vehicle insurance business, like all public and private enterprises, has a responsibility to provide services effectively and efficiently to its stakeholders and the communities that it serves. However, not all business performance can be measured in bottom-line monetary terms or reduced to data analysis. Process improvement and long-term business success must also take into account a company’s contribution to the overall economic, social, and environmental welfare of the larger community that it serves.

References

Selected general information was drawn from Wikipedia, The Free Encyclopedia (online).

Security and CMMI for Services

By Kieran Doyle and Eileen Forrester

Book authors’ comments: CMMI users regularly consider how to include security during implementation of improvement programs and during appraisal. For the first edition of this book, Kieran Doyle, an instructor and certified high maturity lead appraiser, described his field experience meeting a client need to include security during a SCAMPI appraisal for CMMI-SVC. For this edition, the essay has been expanded into three parts. In the first section, Kieran updates that experience and explains how content from any other framework or model might be included in a SCAMPI appraisal. In the second section, Eileen Forrester describes the factors to consider in leaving security in or out of CMMI models, and CMMI-SVC in particular. In the third section, Kieran and Eileen include a work product that shows possible content for something like a CMMI-SVC process area about security management.

How to Appraise Security Using CMMI for Services

By Kieran Doyle

Prior to the first edition of this book, interest in multimodel appraisals was beginning to appear. My own first tangible involvement with the problem came when a client threw down the gauntlet with these words:

“We would like to include security in the scope of our CMMI for Services appraisal.”

The subtext was, “We are already using something that includes security. I like CMMI, but I want to continue covering everything that I am currently doing.” These were the instructions received from the change sponsor. In this particular instance, I needed to determine where this challenge to include security in the appraisal scope was coming from. More importantly, was there a way to use the power of CMMI and SCAMPI to address all of the client’s needs?

Information security is already an intrinsic part of both the Information Technology Infrastructure Library (ITIL) and the international standard for IT service management, ISO 20000. Both are in common use in the IT industry. The ISO 20000 standard provides good guidance on what is needed to implement an appropriate IT service management system. ITIL provides guidance on how IT service management may be implemented.

So, there is at least an implied requirement from many organizations that CMMI-SVC should be able to deal with most, if not all, of the topics that ISO 20000 and ITIL already address; and by and large, it does! Indeed, there are probably advantages to using all three frameworks as useful tools in your process and business improvement toolbox.

As I’ve mentioned, ISO 20000 provides guidance on the requirements for an IT service management system. But it does not have the evolutionary structure that CMMI contains. In other words, CMMI-SVC can provide a roadmap along which the process capability of the organization can evolve.

Similarly, ITIL is a useful library of how to go about implementing IT service processes. In ITIL Version 3, this sense of it being a library of good ideas has come even more to the fore. But it needs something like CMMI-SVC to structure why we are doing it, and to help select the most important elements in the library for the individual implementation.

Thus, ISO 20000, ITIL, and CMMI-SVC work extremely well together. But CMMI-SVC doesn’t cover IT security, and it is not unreasonable for organizations already using ISO 20000 or ITIL to ask a lead appraiser if they can include their security practices in the appraisal scope, particularly when conducting baseline and diagnostic appraisals. So, how can we realistically answer this question?

One answer is just to say, sorry, CMMI-SVC is not designed to cover this area, at least not yet. But there is another tack we can take.

The SCAMPI approach is probably one of the most rigorous appraisal methods available. Although it is closely linked with CMMI, it can potentially be used with any reference framework to evaluate the processes of an organization. So, if we had a suitable reference framework, SCAMPI could readily cope with IT security.

What might such a reference framework look like? Well, we could look to ISO 27001 for ideas. This standard provides the requirements for setting up, then running and maintaining, the system that an organization needs for effective IT information security. How could we use this standard with the principles of both CMMI and SCAMPI?

One thing that CMMI, in all its forms, is very good at helping organizations do is institutionalize their processes. As long-time CMMI users know, the generic goals and practices are extremely effective at getting the right kind of management attention for setting up and maintaining an infrastructure that supports the continued, effective operation of an organization’s processes. No matter what discipline we need to institutionalize, CMMI’s generic goals and practices would need to be in the mix somewhere.

So, in our appraisal of an IT security system, we would need to look for evidence of its institutionalization. The generic practices as they currently stand in CMMI-SVC can be used to look for evidence of planning the IT security processes, providing adequate resources and training for the support of the IT security system, and so on. But it turns out that ISO 27001 has some useful content in this respect as well.

Certain clauses in the ISO 27001 standard map very neatly to the CMMI generic practices. For example, consider the following.

• Clause 4.3, Documentation Requirements: contains aspects of policy (GP 2.1) and configuration control (GP 2.6).

• Clause 5, Management Responsibility: details further aspects of policy (GP 2.1) plus the provision of resources (GP 2.3), training (GP 2.5), and assigning responsibility (GP 2.4).

• Clause 6, Internal ISMS Audits: requires that the control activities, processes, and procedures of the IT security management system are checked for conformance to the standard and that they perform as expected (GP 2.9).

• Clause 7, Management Review of the IT Security Management System: necessitates that managers make sure the system continues to operate suitably (GP 2.10). Additionally, this management check may take input from measurement and monitoring activities (GP 2.8).

• Clause 8, IT Security Management System Improvement: looks to ensure continuous improvement of the system. Some of this section looks similar to GP 2.9, but there is also a flavor of GP 3.1 and GP 3.2.

So, when collecting evidence in a practice implementation indicator (PII) for IT security as we do in SCAMPI, we could use these sections of the ISO 27001 standard like GPs to guide our examination. But what about the material that is unique to setting up and running an IT security management system? In CMMI, this material would be contained in the specific goals and practices.
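To suggest how the clause-to-GP mapping above might be carried into appraisal preparation, here is a minimal sketch that encodes the correspondences as a simple data structure and flattens them into evidence slots for a PII. The names and structure are illustrative only; they are not part of SCAMPI or of either standard.

# Clause titles are abridged from the ISO 27001 list above; each entry
# records the CMMI generic practices that the clause corresponds to.
ISO27001_TO_GPS = {
    "4.3 Documentation Requirements": ["GP 2.1", "GP 2.6"],
    "5 Management Responsibility": ["GP 2.1", "GP 2.3", "GP 2.4", "GP 2.5"],
    "6 Internal ISMS Audits": ["GP 2.9"],
    "7 Management Review of the ISMS": ["GP 2.10", "GP 2.8"],
    "8 ISMS Improvement": ["GP 2.9", "GP 3.1", "GP 3.2"],
}

def evidence_slots(mapping):
    # Flatten the mapping into (clause, generic practice) pairs, one per
    # row of a PII worksheet awaiting artifacts and affirmations.
    return [(clause, gp) for clause, gps in mapping.items() for gp in gps]

Each pair becomes a row in the PII where the appraisal team records the artifacts and affirmations it expects to see; the material unique to security, discussed next, would add pseudo specific practices in the same way.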

Looking once more to ISO 27001, we find material that is a suitable template for this type of content. The following clauses of the standard appear appropriate.

• Clause 4.2.1, Establish the Information Security Management System: This deals with scoping the security system; defining policies for it; defining an approach to identifying and evaluating security threats and how to deal with them; and obtaining management approval for the plans and mechanisms defined.

• Clause 4.2.2, Implement and Operate the Information Security Management System: This deals with formulating a plan to operate the security system to manage the level of threat and then implementing that plan.

• Clause 4.2.3, Monitor and Review the Information Security Management System: This uses the mechanisms of the system to monitor threats to information security. Where action is required to address a threat (e.g., a security breach), it is implemented and tracked to a satisfactory conclusion.

• Clause 4.2.4, Maintain and Improve the Information Security Management System: This uses the data from measuring and monitoring the system to implement corrections or improvements of the system.

Incorporating this content into the typical structure of a CMMI process area could provide a suitable framework for organizing the evidence in a SCAMPI-type appraisal of IT security management. Often, CMMI process areas are structured with one or more specific goals concerned with “preparing for operating a process or system” and one or more specific goals dealing with “implementing or providing the resultant system.” This structure matches the relevant ISO 27001 clauses very well.

We could structure our specific components of the PII to look for evidence in two main blocks.

1. Establishing and Maintaining an Information Security Management System: This involves activities guided by the principles in Clause 4.2.1 of the standard and would look something like this.

• Identify the scope and objectives for the information security management system.

• Identify the approach to identifying and assessing information security threats.

• Identify, analyze, and evaluate information security threats.

• Select options for treating information security threats relevant to the threat control objectives.

• Obtain commitment to the information security management system from all relevant stakeholders.

2. Providing Information Security Using the Agreed Information Security Management System: This would then involve implementing the system devised in part 1 and would look something like this.

• Implement and operate the agreed information security management system.

• Monitor and review the information security management system.

• Maintain and improve the information security management system.

Such an approach allows us to include this discipline in the scope of a SCAMPI appraisal and enables the prior data collection and subsequent verification that are a signature of SCAMPI appraisals. It means that a non-CMMI area can be included alongside CMMI process areas with relative ease.
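Continuing the same illustrative sketch, the two evidence blocks can be encoded as “pseudo specific practices” alongside the generic practice slots shown earlier. The wording is abridged from the list above and remains an experiment, not normative content.

# The two evidence blocks above, abridged into PII entries; combined with
# the generic practice slots sketched earlier, this yields a checklist for
# including IT security in a SCAMPI appraisal scope.
SECURITY_PII_SPECIFICS = {
    "1 Establish and Maintain an ISMS": [
        "Identify the scope and objectives of the ISMS",
        "Identify the approach to assessing security threats",
        "Identify, analyze, and evaluate security threats",
        "Select options for treating security threats",
        "Obtain stakeholder commitment to the ISMS",
    ],
    "2 Provide Security Using the Agreed ISMS": [
        "Implement and operate the agreed ISMS",
        "Monitor and review the ISMS",
        "Maintain and improve the ISMS",
    ],
}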

This approach has now been used a number of times. The advantage to the appraised organization is very clearly that it can use a single event to gain insight into strengths and weaknesses against more than one benchmark. The appraisal results are also in a format that encourages a unified approach to addressing the issues. Separate appraisal and audit results sometimes lead to different groups running different types of improvement activities, so one set of results crossing the different “model” boundaries is more likely to produce a coherent, consistent “solution.” Also, in today’s economic climate, there are savings to be made by running a single event.

This has led to speculation that this approach could be extended even further. Are there other models and standards that we could address in a similar manner? I believe the answer is a confident, “yes.” Recently, I have looked at the British Standard BS 25999 from this perspective. This standard deals with Business Continuity Management and in many ways shares some of the space that SCON occupies in the CMMI for Services model. However, in looking at the standard, it occurred to me that many standards documents are structured in a very similar way.

There are usually sections dealing with the following:

• Policy

• Documentation

• Control of records and documents

• Management responsibility

• Internal audit, and corrective and preventive actions

As with ISO 27001, we can see that this approximates many of CMMI’s generic practices. I would even suggest that if not all of the CMMI GPs can be found in a particular standard, acting as if they were there and making sure they are addressed in a particular implementation is likely to lead to a more enduring and effective system anyway.

In addition, each standard holds some material that is unique to it, whether that involves setting up and running a business continuity system, an environmental management system, or something else. The essence of these unique clauses can be captured as “pseudo specific practices” in a PII in exactly the same way we have done for ISO 27001.

Thus, in principle, any standard could be converted to a “pseudo-CMMI format.” I believe this has advantages for building consistent management systems across the organization. In particular, we can use the strength of the CMMI GPs to institutionalize the right types of behavior across the multiple viewpoints.

For improvement and quality departments, the advantage comes in how we can sell this to the organization. We can use one unified approach to collecting data on the health of a set of potentially diverse models and standards. We can use a single event to collect the information, saving multiple trips to bother the hard-working frontline groups. This will definitely gain brownie points!

It is doubtful that the single-event approach would be acceptable to the standards agencies for their qualifying and certifying purposes. However, this does not detract from the value of this approach significantly. The organization will still need to run some form of audits. So, combining these with CMMI type events saves time and money.

When I first looked at combining security with CMMI, it was a challenge. But now I think the challenge is not how to combine other disciplines with CMMI, but what to combine with it. Which concepts are best suited to examination from a CMMI perspective, and which should we avoid because they dilute the power of this model? I have no firm answer to this latter question yet. However, I think the CMMI for Services model has taken CMMI more strongly into the general realm of business management than before. The answer probably lies in discovering in which business contexts we can successfully apply the model. Only by doing this will we discover the natural bounds of the capabilities of the present CMMI architecture.

Considering Security Content for CMMI for Services

By Eileen Forrester

When the original CMMI for Services model team worked on the scope and architecture for the first CMMI-SVC draft in 2006, we considered whether and how to include security content. We ultimately decided against normative model content on security.

Among the reasons for that decision were the preferences of the CMMI Architecture Team and the larger CMMI Product Team. In their view at the time, security can be conceived of during development as a class of requirement and a type of risk, and therefore was already covered by the process areas treating those topics. In addition, several existing models and frameworks, such as ISO 20000, ISO 27001, COBIT, and ITIL, have security content available. At that time, we also knew that another SEI CERT team was building the Resilience Management Model and including considerable security content; that team has now published its model. Further, in 2008 a team chartered by the CMMI Steering Group began writing an assurance focus topic for the CMMI models. We already regarded these frameworks and materials as complementary to CMMI-SVC, and as we worked, we did not attempt to include everything they included. We considered them excellent sources of practice information likely to cover the security landscape and chose not to write anything like a process area for security—or even a goal and practices in an existing process area—in CMMI-SVC, V1.2.

However, as use of the CMMI-SVC model grows, we frequently hear from users who would prefer, as they say, that we had simply “given us a process area or two on security” rather than asking them to look outside the model. This is not an issue for users already combining CMMI-SVC with ITIL, ISO standards, or RMM, but it is more acute for those using CMMI-SVC alone and wanting to include security in their improvement program. We did not receive sufficient change requests on this topic to add security content to CMMI-SVC for V1.3, and even if we had, such a change would have been beyond the scope of the V1.3 revision. Nonetheless, we increasingly hear requests from users for CMMI model content that bears more directly on security, especially for CMMI-SVC.

In the first edition of CMMI for Services: Guidelines for Superior Service, Kieran Doyle wrote an essay at my invitation about including security in a SCAMPI appraisal on CMMI-SVC, which garnered attention, praise, and requests to take his work further. In addition, during the years before I worked on CMMI-SVC, I was working with colleagues in the SEI CERT program and elsewhere on including security content in CMMI. I still see a need for that content. Kieran and I have also worked as a subteam to a larger team I lead that works on combining CMMI-SVC effectively with other models and approaches. In the course of that work, we have experimented with the idea of what normative model content about security could look like. What follows is a work product from that subteam.

We took Kieran’s idea of how to shape content from any model into something appraisable, and proceeded a step or two further to experiment with model-like or “pseudo process area” content on security. Clearly, this is far from full model content, nor is it presumed to be CMMI content. We chose a wide security management scope, which could include a range of security types. To date, we have experimented only with a purpose statement, goals, specific practice statements, some subpractices, and generic practice elaborations. Mostly missing is the informative content that serves as explanatory material and implementation guidance in a CMMI process area; for example, we have not yet written the notes that are crucial to assist with comprehension, implementation, and improvement. Our purpose on the subteam was to examine whether credible content on security could be created at all in a smaller footprint than models like RMM.

Given the persistence of the requests for fuller content on security to be used with CMMI models, we provide it here to begin to get community comments and input. Should this work be taken further? Is the scope useful for improvement? What could be done next to make it more credible? Would you participate in developing it into something fuller? Our call to action is to ask you to take a look at the example that follows and provide us with comments by writing to [email protected]. This content will change frequently, so please check the CMMI-SVC website for the latest information: www.sei.cmu.edu/cmmi/tools/svc/.

Example Security Content for Comment

By Eileen Forrester and Kieran Doyle

Security Management (SM)

A work product to experiment with an example process area structure for CMMI for Services

Purpose

The purpose of Security Management is to establish and maintain a security management system that safeguards the essential assets of the organization.

Note

Essential assets cover such things as the essential functions and resources on which services and the organization depend. They can include, for example, the staff and intellectual property of the organization, as well as the computing systems themselves (e.g., servers). Some assets may be stored in many different forms, including physical documents, databases, and websites. See the Service Continuity process area for more information on the essential functions and resources on which services depend. See the Resilience Management Model for more information on defining and safeguarding a range of assets.



Example Specific Practices by Example Goal

ESG 1 Establish a Security Management System

A security management system is established and maintained.

ESP 1.1 Establish Security Objectives

Identify the scope and objectives for the security management system.

ESP 1.2 Establish an Approach to Threat Assessment

Establish and maintain an approach to assessing vulnerabilities and threats to essential assets.

Subpractices

1. Select methods for assessing security threats.

2. Define criteria for evaluating and quantifying security threats.

3. Describe responsibility and resources for evaluating vulnerabilities and threats.

ESP 1.3 Identify Security Threats

Identify and record security threats.

Subpractices

1. Identify security threats.

2. Record information about security threats.

3. Categorize security threats.

ESP 1.4 Evaluate and Prioritize Security Threats

Evaluate each identified security threat using defined criteria and determine its relative priority.

ESP 1.5 Establish a Security Management Plan

Establish and maintain a plan for achieving security objectives.

Subpractices

1. Describe responsibility for treating vulnerabilities and threats.

2. Identify resources for treating vulnerabilities and threats.

ESP 1.6 Obtain Commitment to the Security Management Plan

Obtain commitment to the security management system from all relevant stakeholders.

ESG 2 Provide Security

Security is provided using the security management system.

ESP 2.1 Operate the Security Management System

Implement and operate the agreed security management system.

Subpractices

1. Monitor the status of individual security vulnerabilities and threats.

2. Respond to and prevent security incidents. For more information on incident management, see Incident Resolution and Prevention.

3. Maintain and improve the security management system.

ESP 2.2 Monitor the Security Management System

Monitor the security management system.

Subpractices

1. Monitor the performance of the security management system.

2. Evaluate the effectiveness of security.

3. Consult national and international threat agencies on developments in security issues.

Generic Practice Elaborations

GP 2.1 Establish an Organizational Policy

SM Elaboration

This policy establishes the organizational expectation for defining and operating a security strategy and system.

GP 2.2 Plan the Work

SM Elaboration

This plan for performing the security management process can be included in the work management plan, described in the Work Planning process area. This plan encompasses both the strategy for maintaining security and the specific activities to establish, operate, and maintain the security management system.

GP 2.3 Provide Resources

SM Elaboration


GP 2.4 Assign Responsibility

SM Elaboration

Responsibility is assigned for planning, operating, and monitoring the security management system.

GP 2.5 Train People

SM Elaboration


GP 2.6 Manage Configurations

SM Elaboration


GP 2.7 Identify and Involve Relevant Stakeholders

SM Elaboration


GP 2.8 Monitor and Control the Process

SM Elaboration


GP 2.9 Objectively Evaluate Adherence

SM Elaboration


GP 2.10 Review Status with Higher Level Management

SM Elaboration


GP 3.2 Collect Improvement Information

SM Elaboration

