Service System Development: A Service Establishment and Delivery Process Area at Maturity Level 3


SSD Addition: Purpose

The purpose of Service System Development (SSD) is to analyze, design, develop, integrate, verify, and validate service systems, including service system components, to satisfy existing or anticipated service agreements.

Introductory Notes

The Service System Development process area is applicable to all aspects of a service system. It applies to new service systems as well as changes to existing service systems.

A “service system” is an integrated and interdependent combination of service system components that satisfies stakeholder requirements.

A “service system component” is a process, work product, person, consumable, or customer or other resource required for a service system to deliver value. Service system components can include components owned by the customer or a third party.

A “service system consumable” is anything usable by the service provider that ceases to be available or becomes permanently changed by its use during the delivery of a service.

The people who are considered service system components are those who perform tasks as part of the service system, including provider staff and end users, to enable the system to operate and thereby deliver services.

(See the definitions of “service system,” “service system component,” “service system consumable,” and “work product” in the glossary.)

Organizations that wish to improve and appraise their product development processes should rely on the complete CMMI-DEV model, which specifically focuses on development as an area of interest.

Service provider organizations can also choose to use the CMMI-DEV model as the basis for improving and appraising their service system development processes. This use of the CMMI-DEV model is preferred for organizations that are already experienced with CMMI-DEV and for organizations that develop large-scale, complex service systems.

However, the Service System Development process area offers an alternative means of achieving somewhat similar ends by covering requirements development as well as service system development, integration, verification, and validation in a single process area. Using SSD may be preferred by service provider organizations that are new to CMMI, especially those service providers that are developing simple services with relatively few components and interfaces. Even organizations that use the CMMI-DEV model for service system development may wish to refer to the Service System Development process area for helpful guidance on applying development practices to service system components such as people, processes, and consumables.

It is especially important to remember that the components of some service systems can be limited to people and the processes they perform. In those contexts and similar ones in which service systems are fairly simple, exercise care when interpreting the specific practices of this process area so that the implementations that result provide business value to the service provider organization.

The service system development process is driven by service and service system requirements that are collected from various sources, such as service agreements, as well as from defects and problems identified during service delivery and during incident resolution and prevention.

The Service System Development process area focuses on the following activities:

• Collecting, coordinating, analyzing, validating, and allocating stakeholder requirements for service systems

• Evaluating and selecting from alternative service system solutions

• Designing and building or composing (as needed), integrating, and documenting service systems that meet requirements

• Verifying and validating service systems to confirm that they satisfy their intended requirements and that they will satisfy customer and end-user expectations during actual service delivery

CMMI does not endorse particular methods for service system development. How the service organization chooses to develop the service system can range from internal development to outsourcing to commercial product integration. Most service organizations, in their efforts to build their service systems, engage a development team and select a particular development approach. The choice of development methods depends on the requirements to be achieved and on which service system components need to be developed. Agile methods constitute one possible family of approaches, but they may not be appropriate for all (or any) components. (The phrase “Agile method” is shorthand for any development or management method that adheres to the Manifesto for Agile Software Development [Beck 2001] and that typically addresses software development.) For organizations that choose to use Agile methods, the following paragraphs can be helpful in implementing the practices of SSD.

In Agile environments, the requirements, design, development, and validation process is performed incrementally and through continuing engagement with relevant stakeholders, particularly customers and end users. Customer needs and ideas are iteratively elicited, elaborated, analyzed, and validated. Requirements are documented in forms such as user stories, scenarios, use cases, product backlogs, and iteration results. These requirements are prioritized into cycles of development from which design models, operational concepts, and diagrams are evolved to produce service system components. Agile methods give emphasis to a strong working relationship between the development staff, the service provision staff, and the customer (or end user). This iterative and cooperative development approach is used to select and refine the service system solution to provide high degrees of quality and efficiency during service delivery.

Short daily meetings or communications are held to obtain near real-time validation of technical selections and decisions. End-of-cycle reviews are also conducted to validate current development and to review requirements prioritization for the subsequent cycle of development. Due to the emphasis on early exploration and validation of needs and expectations, stakeholder commitment and availability are essential. Also, it is important that all parties understand their roles and are willing to share in addressing the risks that arise from such collaborative work.

Further, when deciding to use an Agile method, consider the implications for other process areas. In particular, the effects on service system transition and delivery may need to be understood up front, and discussions held on how best to mitigate any impacts.

For more information on how to apply Agile methods, see CMMI-DEV Section 5.0 Interpreting CMMI When Using Agile Approaches.

For standard services, the development processes described in this process area can also be applied at the organizational level to identify, develop, and maintain core assets (e.g., components, tools, architectures, operating procedures, service system representations, software) used in developing or customizing service systems for delivery of standard services (or tailored services).

Refer to the Strategic Service Management process area for more information about establishing strategic needs and plans for standard services.

Agility in Service System Development

Those who develop or modify service systems and who are not familiar with the concepts of Agile methods may need some additional explanation to understand why the latest version of CMMI-SVC contains newly added material about these approaches. What makes Agile methods significant, how can they apply to services, and what issues come into play when applying Agile methods to service system development?

Agile methods were initially conceived and promoted primarily by software developers who concluded that too many software development efforts failed because of a lack of agility, or responsiveness to changing needs, by the development teams. The intentions driving Agile methods were defined by the Agile Manifesto as cited in the model; the effect of these methods is to increase the ability of work groups to handle an evolving understanding of product requirements and work processes. Despite their initial focus on software development, Agile methods have been successfully applied in other types of development and management, and organizations delivering services can certainly benefit from their use. (Viewed from some perspectives, services by their nature tend to be Agile to some extent; see the essay “Are Services Agile?” by Hillel Glazer in Chapter 6 for an extended discussion.) However, when applied in the context of service system development, some Agile methods will be more broadly useful than others.

There are many different methods that could be categorized as consistent with an Agile approach, and many different books that discuss them. While the variability among them is great, they also have many similarities that emphasize common themes consistent with the Agile Manifesto:

• Maximizing direct relevant communication and collocation among team members

• Collaboration and frequent interaction with end users

• Rapid incremental development cycles (on the order of a few weeks to months)

• Frequent team review and reflection on what is working well and what is not

• Reliance on just enough process and documentation to accomplish what needs to be done

Most of these methods could be helpful for services in general and for service system development in particular, with the possible exception of rapid incremental development cycles. The difficulties may range from a fundamental inability to deliver only “part” of a service to the difficulty of motivating end users to provide unbiased, continuing feedback on a service system as it rapidly evolves and changes. Unless your end users are themselves familiar with Agile methods, they may well prefer to stick with a stable if somewhat inferior service system over one that is incomplete or changes frequently based on feedback, even if the latter might eventually yield a better service system for them.

If you have a large enough population of end users, you may be able to work around this problem by piloting successive incremental releases of your new or changed service system to different groups of end users over time; this approach limits the exposure of any one end user to at most a single “change experience” prior to the eventual final release. You might also be lucky enough to have a small, loyal corps of representative, experienced end users who are happy to collaborate with you and will provide the ongoing feedback you need. Even in these situations, however, only limited subsets of your end-user population will see a version of a new or changed service system before its full-scale transition into operation.

Of course, those components of a service system that are not directly visible to end users through interactions and interfaces with the service system may well be good candidates for rapid incremental development cycles. For example, a major change to a service system to reduce the cost of operation or increase overall capacity without affecting the nature or quality of delivered services might more easily use rapid incremental development. In that case, the “end users” for Agile purposes become the stakeholders in your organization who will be directly affected by the expected changes.

SSD Addition: Related Process Areas

Refer to the Service Delivery process area for more information about maintaining the service system.

Refer to the Service System Transition process area for more information about deploying the service system.

Refer to the Strategic Service Management process area for more information about establishing standard services.

Refer to the Decision Analysis and Resolution process area for more information about analyzing possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.

Refer to the Organizational Performance Management process area for more information about selecting improvements and deploying improvements.

Refer to the Requirements Management process area for more information about managing requirements of products and product components and ensuring alignment between those requirements and the work plans and work products.

Specific Goal and Practice Summary

SG 1 Develop and Analyze Stakeholder Requirements

SP 1.1 Develop Stakeholder Requirements

SP 1.2 Develop Service System Requirements

SP 1.3 Analyze and Validate Requirements

SG 2 Develop Service Systems

SP 2.1 Select Service System Solutions

SP 2.2 Develop the Design

SP 2.3 Ensure Interface Compatibility

SP 2.4 Implement the Service System Design

SP 2.5 Integrate Service System Components

SG 3 Verify and Validate Service Systems

SP 3.1 Prepare for Verification and Validation

SP 3.2 Perform Peer Reviews

SP 3.3 Verify Selected Service System Components

SP 3.4 Validate the Service System

SSD Addition: Specific Practices by Goal

SG 1 Develop and Analyze Stakeholder Requirements

Stakeholder needs, expectations, constraints, and interfaces are collected, analyzed, and transformed into validated service system requirements.

This goal covers the transformation of collected stakeholder needs, expectations, and constraints into requirements that can be used to develop a service system that enables service delivery.

Needs are collected from sources that can include service agreements; standard defined services; organizational policies; and communication with end users, customers, and other relevant stakeholders. These service needs can define stakeholder expectations of what is to be delivered, specify particular levels or grades of service, or identify constraints on how, when, how often, or to whom services are to be delivered. In particular, the quality attribute related needs, expectations, and constraints of relevant stakeholders should be determined. Quality attributes are properties of the service and service system (e.g., responsiveness, availability, security) that are critical to customer satisfaction and to meeting the needs of relevant stakeholders. (See the definition of “quality attributes” in the glossary.)

These needs, expectations, and constraints in turn may need to be analyzed and elaborated to identify needed details of delivered services not considered by the original sources. The result is a set of stakeholder requirements specified in the language of service system developers, not in the language of those who submitted the requirements.

For example, a customer might establish a requirement to “maintain the equipment listed in Table 25 in working order” with additional details of availability rates, average repair times, and other service levels. However, this requirement may also imply a need for a variety of specialized sub-services, such as diagnostics, field support, and preventive maintenance, each with its own implied sub-service requirements. These refinements may not be of interest or even visible to the original stakeholders, but their full specification is needed to identify everything that a service system must do to meet the service delivery requirements.

As service requirements are analyzed and elaborated, they eventually yield derived service system requirements, which define and constrain what the service system must accomplish to ensure the required service is delivered. For example, if the service has a response time requirement, the service system must have derived requirements that enable it to support that response time.

The process of developing and analyzing requirements can involve multiple iterations that include all relevant stakeholders in communicating requirements and their ramifications so that everyone agrees on a consistent defined set of requirements for the service system. Changes can be driven by changes to stakeholder expectations, or by new needs discovered during subsequent service system development activities, service system transition, or service delivery. Since needs often change throughout the service lifecycle, the development and analysis of requirements should rarely be considered a one-time process.

As with all requirements, appropriate steps are taken to ensure that the approved set of service and service system requirements is effectively managed to support development of the service and service system.

Refer to the Requirements Management process area for more information about managing requirements changes.

SP 1.1 Develop Stakeholder Requirements

Collect and transform stakeholder needs, expectations, constraints, and interfaces into prioritized stakeholder requirements.

The needs of relevant stakeholders (e.g., customers, end users, suppliers, builders, testers, manufacturers, logistics support staff, service delivery staff, the organization) are the basis for determining stakeholder requirements. Stakeholder needs, expectations, constraints, interfaces, operational concepts, and service concepts are analyzed, harmonized, refined, prioritized, and elaborated for translation into a set of stakeholder requirements.

Requirements collected from customers and end users of the service to be delivered are documented in the service agreement. These requirements are also used to derive requirements for the service system. These derived requirements are combined with other requirements collected for the service system to result in the complete set of stakeholder requirements.

Refer to the Service Delivery process area for more information about analyzing existing agreements and service data.

These stakeholder requirements should be stated in language that relevant stakeholders can understand, yet precise enough for the needs of those who develop the service or service system.

Examples of stakeholder requirements include the following:

• Operations requirements

• Customer delivery requirements

• Monitoring requirements

• Instrumentation requirements

• Documentation requirements

• Operating level agreement requirements

• Organizational standards for product lines and standard services

• Requirements from agreements with other relevant stakeholders

Example Work Products

1. Customer requirements

2. End-user requirements

3. Customer and end-user constraints on the conduct of verification and validation

4. Staffing level constraints

Subpractices

1. Engage relevant stakeholders using methods for eliciting needs, expectations, constraints, and external interfaces.

Eliciting goes beyond collecting requirements: it proactively identifies additional requirements not explicitly provided by customers, using methods such as surveys, analyses of customer satisfaction data, prototypes, simulations, and quality attribute elicitation workshops.

2. Transform stakeholder needs, expectations, constraints, and interfaces into prioritized stakeholder requirements.

The various inputs from relevant stakeholders should be consolidated and prioritized, missing information should be obtained, and conflicts should be resolved in documenting the recognized set of stakeholder requirements. (A minimal illustrative sketch of this consolidation follows these subpractices.)

3. Define constraints for verification and validation.
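
CMMI does not prescribe any representation for consolidated stakeholder requirements. As a purely illustrative aid, the following Python sketch (all names and values are assumptions, not part of the model) shows one minimal way to record stakeholder inputs, surface conflicting inputs on the same topic, and produce a prioritized set:

    from dataclasses import dataclass

    @dataclass
    class StakeholderInput:
        source: str     # e.g., "customer", "end user", "delivery staff"
        topic: str      # the aspect of service delivery being addressed
        statement: str  # the need, expectation, or constraint as given
        priority: int   # 1 = highest

    def consolidate(inputs):
        """Group inputs by topic; flag topics whose inputs disagree on priority."""
        by_topic = {}
        for item in inputs:
            by_topic.setdefault(item.topic, []).append(item)
        conflicts = {t for t, items in by_topic.items()
                     if len({i.priority for i in items}) > 1}
        return by_topic, conflicts

    inputs = [
        StakeholderInput("customer", "response time", "Answer calls within 30 seconds", 1),
        StakeholderInput("delivery staff", "response time", "60 seconds is realistic", 2),
        StakeholderInput("end user", "coverage hours", "Support available 24x7", 1),
    ]
    grouped, conflicts = consolidate(inputs)
    # Conflicting topics must be resolved with relevant stakeholders before
    # the consolidated set becomes the recognized stakeholder requirements.
    for topic in sorted(grouped, key=lambda t: min(i.priority for i in grouped[t])):
        print(topic, "CONFLICT" if topic in conflicts else "ok")

However the set is represented, the resolved, prioritized result becomes the stakeholder requirements baseline refined in SP 1.2.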

SP 1.2 Develop Service System Requirements

Refine and elaborate stakeholder requirements to develop service system requirements.

Stakeholder requirements are analyzed in conjunction with the development of the operational concept to derive more detailed and precise sets of requirements called “derived requirements.” These requirements address all aspects of the service system associated with service delivery (work products, services, processes, consumables, and customer and other resources), as well as the functionality and quality attribute needs of relevant stakeholders.

Derived requirements arise from constraints, consideration of issues implied but not explicitly stated in the stakeholder requirements baseline, and factors introduced by the selected service system architecture, the design, the developer’s unique business considerations, and strategic priorities, including industry market trends. The extent and depth of derived requirements vary with the complexity of the service system needed to meet stakeholder requirements.

Refer to the Strategic Service Management process area for more information about establishing standard services.

In some service contexts, derived requirements can be as simple as identification and quantification of required resources. For complex service systems with many types of components and interfaces, the initial requirements are iteratively refined into lower level sets of more detailed requirements that can be allocated to service system components as the preferred solution is refined.

Through such analysis, refinement, derivation, and allocation activities, the functionality and quality attribute requirements for the service system are established.

Example Work Products

1. Derived requirements with relationships and priorities

2. Service requirements

3. Service system requirements

4. Requirement allocations

5. Architectural requirements, which specify or constrain the relationships among service system components

6. Interface requirements

7. Skill level requirements

Subpractices

1. Develop requirements and express them in the terms necessary for service and service system design.

In particular, these requirements include architectural requirements that specify critical quality attributes.

2. Derive requirements that result from solution selections and design decisions.

3. Establish and maintain relationships among requirements for consideration during change management and requirements allocation.

Relationships include dependencies in which a change in one requirement can affect other requirements.

Relationships among requirements can aid in design and in evaluating the impact of changes; a minimal traceability sketch follows these subpractices.

4. Prioritize derived requirements.

Prioritization of requirements can assist in defining iterative development cycles.

5. Allocate the requirements to logical entities, service system components, and other entities as appropriate.

As the operational concept evolves, requirements are allocated to logical entities (e.g., functions, processes) that aid in relating the requirements to the operational concept. These logical entities also serve to organize the requirements and assist in synthesis of the technical solution. As the technical solution is selected or emerges, requirements are allocated to service system components (or the architecture, in the case of many nonfunctional requirements) as appropriate. In the case of an iterative or incremental approach to developing the service system, requirements are also allocated to iterations or increments.

6. Identify interfaces both external and internal to the service system.

7. Develop requirements for the identified interfaces.
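
Subpractices 3 and 5 are often supported by a traceability structure. The following Python sketch is illustrative only (the identifiers and the flat-dictionary representation are assumptions; real projects typically use a requirements management tool): derivation links plus allocations support simple change-impact queries.

    # Illustrative traceability: derivation links between requirements plus
    # allocation of requirements to service system components.
    derived_from = {            # child requirement -> parent requirement
        "SYS-10": "STK-1",      # service system req derived from a stakeholder req
        "SYS-11": "STK-1",
        "IF-3": "SYS-10",       # interface req derived from a system req
    }
    allocated_to = {            # requirement -> service system component
        "SYS-10": "help desk process",
        "SYS-11": "call routing software",
        "IF-3": "call routing software",
    }

    def impacted_by(req_id):
        """Return all requirements derived, directly or transitively, from req_id."""
        children = {c for c, p in derived_from.items() if p == req_id}
        for child in set(children):
            children |= impacted_by(child)
        return children

    print(impacted_by("STK-1"))                              # {'SYS-10', 'SYS-11', 'IF-3'}
    print({allocated_to[r] for r in impacted_by("STK-1")})   # components to revisit on change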

SP 1.3 Analyze and Validate Requirements

Analyze and validate requirements, and define required service system functionality and quality attributes.

Requirements analyses are performed to determine the impact the intended service delivery environment will have on the ability to satisfy the stakeholders’ needs, expectations, constraints, and interfaces. Depending on the service delivery context, factors such as feasibility, mission needs, cost constraints, end-user heterogeneity, potential market size, and procurement strategy should be taken into account. A definition of required functionality and quality attributes is also established. The objectives of the analyses are to determine candidate requirements for service system concepts that will satisfy stakeholder needs, expectations, and constraints and then to translate these concepts into comprehensive service system requirements. In parallel with this activity, the parameters used to evaluate the effectiveness of service delivery are determined based on customer and end-user input and the preliminary service delivery concept.

Requirements are validated by working with relevant stakeholders to increase the probability that the resulting service system will deliver services as intended in the expected delivery environment.

Example Work Products

1. Operational concepts and scenarios (e.g., use cases, activity diagrams, user stories)

2. Service system and service system component installation, training, operational, maintenance, support, and disposal concepts

3. Definition of required functionality and quality attributes

4. Architecturally significant quality attribute requirements

5. New requirements

6. Requirements defect reports and proposed changes to resolve them

7. Assessment of risks related to requirements

8. Record of analysis methods and results

Subpractices

1. Develop operational concepts and scenarios that include operations, installation, development, maintenance, support, and disposal as appropriate.

Identify and develop scenarios, consistent with the level of detail in the stakeholder needs, expectations, and constraints, in which the proposed service system is expected to operate.

2. Develop a detailed operational concept that defines the interaction of the service system, end users, and the environment, and that satisfies operational, maintenance, support, and disposal needs.

The operational concept and scenarios are iteratively refined to include more detail as solution decisions are made and as lower level requirements are developed (e.g., to further describe interactions among the service system, end users, and the environment). Reviews of operational concepts and scenarios are held periodically to ensure that they address the functionality and quality attribute needs of relevant stakeholders, different lifecycle phases, and modes of service system usage. Reviews can be in the form of a walkthrough.

3. Establish and maintain a definition of required functionality and quality attributes.

This definition of required functionality and quality attributes describes what the product is to do. (See the definition of “definition of required functionality and quality attributes” in the glossary.) This definition can include descriptions, decompositions, and partitioning of the functions of the product.

In addition, the definition specifies design considerations or constraints on how the required functionality will be realized in the service system. Quality attributes address such things as service system availability; maintainability; modifiability; timeliness, throughput, and responsiveness; reliability; security; and scalability. Some quality attributes will emerge as architecturally significant and thus drive subsequent service system high-level design activities. A clear understanding of the quality attributes and their importance based on mission or business needs is an essential input to the design process. (An illustrative, measurable form for such attributes follows these subpractices.)

4. Analyze requirements to ensure that they are necessary and sufficient and that they balance stakeholder needs and constraints.

As requirements are defined, their relationship to higher level requirements and the higher level defined functionality should be understood. Key requirements that will be used to track progress are determined. A cost benefit analysis can be performed to assess the impact of architecturally significant quality attribute requirements on service and service system cost, schedule, performance, and risk. Higher level requirements that are found to result in unacceptable costs or risks may need to be renegotiated.

5. Validate requirements to ensure the resulting service system will perform as intended in the end user’s environment.
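
Quality attribute requirements are analyzable and verifiable only when stated measurably. One widely used form, shown here as an assumed illustration in Python (the scenario structure comes from common architecture practice, not from the model itself), captures a stimulus, its environment, and a measurable response:

    from dataclasses import dataclass

    @dataclass
    class QualityAttributeScenario:
        attribute: str
        stimulus: str          # what happens to the service system
        environment: str       # the conditions under which it happens
        response: str          # what the service system should do
        response_measure: str  # the criterion later used in verification and validation

    availability = QualityAttributeScenario(
        attribute="availability",
        stimulus="primary call-routing component fails",
        environment="normal operations during peak calling hours",
        response="incoming calls are rerouted to backup operators",
        response_measure="no more than 5 minutes of degraded service per failure",
    )
    # Architecturally significant scenarios like this one drive high-level
    # design (SG 2) and become validation criteria (SG 3).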

SG 2 Develop Service Systems

Service system components are selected, designed, implemented, and integrated.

A service system can encompass work products, processes, people, consumables, and customer and other resources.

An important and often overlooked component of service systems is the human aspect. People who perform tasks as part of a service system enable the system to operate, and both provider staff and end users can fill this role. For example, a service system that processes incoming calls for a service should have trained staff available who can receive the calls and process them appropriately using the other components of the service system. In another example, end users of an insurance service may need to follow a prescribed claims process to receive service benefits from the service system.

A consumable is anything usable by the service provider that ceases to be available or becomes permanently changed because of its use during the delivery of a service. An example is gasoline for a transportation service system that uses gasoline powered vehicles. Even service systems that are composed primarily of people and manual processes often use consumables such as office supplies. The role of consumables in service systems should always be considered.

This goal focuses on the following activities:

• Evaluating and selecting solutions that potentially satisfy an appropriate set of requirements

• Developing detailed designs for the selected solutions (detailed enough to implement the design as a service system)

• Implementing the designs of service system components as needed

• Integrating the service system so that its functions and quality attributes can be verified and validated

Typically, these activities overlap, recur, and support one another. Some level of design, at times fairly detailed, may be needed to select solutions. Prototypes, pilots, and stand-alone functional tests can be used as a means of gaining sufficient knowledge to develop a complete set of requirements or to select from among available alternatives.

From a people perspective, designs can be skill level specifications and staffing plans, and prototypes or pilots may try out different staffing plans to determine which one works best under certain conditions. From a consumables perspective, designs can be specifications of necessary consumable characteristics and quantities. Some consumables can even require implementation. For example, specific paper forms may need to be designed and printed to test them as part of the service system later.

Development processes are implemented repeatedly on a service system as needed to respond to changes in requirements, or to problems uncovered during verification, validation, transition, or delivery. For example, some questions that are raised by verification and validation processes can be resolved by requirements development processes. Recursion and iteration of these processes enable the work group to ensure quality in all service system components before it begins to deliver services to end users.

SP 2.1 Select Service System Solutions

Select service system solutions from alternative solutions.

Alternative solutions and their relative merits are considered in advance of selecting a solution. Key requirements (including quality attribute requirements), design issues, and constraints are established for use in alternative solution analysis. Architectural features that provide a foundation for service system improvement and evolution are considered.

Refer to the Decision Analysis and Resolution process area for more information about analyzing possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.

A potentially ineffective approach to implementing this practice is to generate solutions based only on the way services have been delivered in the past. It is important to consider alternatives that represent different ways of allocating and performing necessary functions (e.g., manual vs. automated processes, end user vs. service delivery staff responsibilities, prescheduled vs. on-the-fly service request management).

Components of the service system, including service delivery and support functions, can be allocated to external suppliers. As a result, prospective supplier agreements are investigated. The use of externally supplied components is considered relative to cost, schedule, performance, and risk. Externally supplied alternatives can be used with or without modification. Sometimes such items can require modifications to aspects such as interfaces or a customization of some of their features to better meet service or service system requirements.

Refer to the Supplier Agreement Management process area for more information about managing the acquisition of products and services from suppliers.

Example Work Products

1. Alternative solution screening criteria

2. Selection criteria

3. Service system component selection decisions and rationale

4. Documented relationships between requirements and service system components

5. Documented solutions, evaluations, and rationale

Subpractices

1. Establish defined criteria for selection.

2. Develop alternative solutions.

The development of alternative solutions can involve the use of architectural patterns, reuse of components, investigation of commercial off-the-shelf (COTS) solutions, service outsourcing, and consideration of technology maturation and obsolescence.

3. Select the service system solutions that best satisfy the criteria established.

The selection is based on an evaluation of alternatives using the defined criteria. In high-risk situations, simulations, prototypes, or pilots can be used to assist in the evaluation. (A minimal weighted-criteria sketch follows these subpractices.)

Selecting service system solutions that best satisfy the criteria is the basis for allocating requirements to the different aspects of the service system. Lower level requirements are generated from the selected alternative and used to develop the design of service system components. Interface requirements among service system components are described.
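
A formal evaluation in the style of Decision Analysis and Resolution is often summarized as a weighted decision matrix. The following Python sketch is illustrative only; the criteria, weights, and scores are invented for the example, and a real evaluation would also document rationale and sensitivity to the chosen weights.

    # Illustrative weighted-criteria evaluation of alternative solutions.
    criteria = {"cost": 0.4, "time to field": 0.3, "delivery risk": 0.3}

    # Scores per alternative (1 = worst, 5 = best against each criterion).
    alternatives = {
        "in-house development": {"cost": 2, "time to field": 2, "delivery risk": 4},
        "COTS integration":     {"cost": 4, "time to field": 4, "delivery risk": 3},
        "outsourced operation": {"cost": 3, "time to field": 5, "delivery risk": 2},
    }

    def weighted_score(scores):
        return sum(weight * scores[name] for name, weight in criteria.items())

    for name, scores in alternatives.items():
        print(f"{name}: {weighted_score(scores):.2f}")
    # Record the selection together with its rationale.
    print("selected:", max(alternatives, key=lambda a: weighted_score(alternatives[a])))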

SP 2.2 Develop the Design

Develop designs for the service system and service system components.

The term “design” in this practice refers to the definition of the service system’s components and their intended set of relationships; these components will collectively interact in intended ways to achieve actual service delivery.

Service system designs should provide the appropriate content not only for implementation, but also for other aspects of the service system lifecycle such as modification; transition and rollout; maintenance; sustainment; and service delivery. The design documentation provides a reference to support mutual understanding of the design by relevant stakeholders and supports making future changes to the design both during development and in subsequent phases of the lifecycle.

A complete design description is documented in a “design package” that includes a full range of features and parameters including functions, interfaces, operating thresholds, manufacturing and service process characteristics (e.g., which functions are automated versus manually performed), and other parameters. Established design standards (e.g., checklists, templates, process frameworks) form the basis for achieving a high degree of definition and completeness in design documentation.

Examples of other service system design related work products include the following:

• Descriptions of roles, responsibilities, authorities, accountabilities, and skills of people required to deliver the service

• Functional use cases describing roles and activities of service participants

• Designs or templates for manuals, paper forms, training materials, and guides for end users, operators, and administrators

“Designing people” in this context means specifying the skills and skill levels necessary to accomplish needed tasks and can include appropriate staffing levels as well as training needs (if training is necessary to achieve needed skill levels).

“Designing consumables” in this context means specifying the consumable properties and characteristics necessary to support service delivery as well as resource utilization estimates for service system operation.
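
To make these two notions concrete, the following Python sketch shows one assumed (not model-prescribed) way to record a people design and a consumable design as specifications, rather than as named individuals or stock on hand:

    from dataclasses import dataclass, field

    @dataclass
    class RoleDesign:
        # A "people design": skills and staffing levels, not individuals.
        role: str
        required_skills: list[str]
        minimum_skill_level: str
        staff_per_shift: int
        training_needed: list[str] = field(default_factory=list)

    @dataclass
    class ConsumableDesign:
        # A "consumable design": required characteristics plus a usage estimate.
        name: str
        required_characteristics: str
        estimated_use_per_day: float
        unit: str

    operator = RoleDesign(
        role="help desk operator",
        required_skills=["incident logging", "call script use"],
        minimum_skill_level="level 1 certification",
        staff_per_shift=4,
        training_needed=["new ticketing tool"],
    )
    intake_form = ConsumableDesign(
        name="service intake form",
        required_characteristics="two-part carbonless paper with barcode field",
        estimated_use_per_day=120.0,
        unit="forms",
    )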

Example Work Products

1. Service system architecture

2. Designs of service system components and consumables

3. Skill descriptions and details of the staffing solution (e.g., allocated from available staff, hired as permanent or temporary staff)

4. Interface design specifications and control documents

5. Criteria for design and service system component reuse

6. Results of make-or-buy analyses

Subpractices

1. Develop a design for the service system.

Service system design typically consists of two broad phases that can overlap in execution: preliminary and detailed design. Preliminary design establishes service system capabilities and the architecture. Detailed design fully defines the structure and capabilities of the service system components.

2. Ensure that the design adheres to allocated functionality and quality attribute requirements.

3. Document the design.

4. Design interfaces for the service system components using established criteria.

The criteria for interfaces frequently reflect critical parameters that should be defined, or at least investigated, to ascertain their applicability. These parameters are often peculiar to a given type of service system and are often associated with quality attribute requirements (e.g., safety, security, durability, mission critical characteristics). Carefully determine which processes should be automated or partially automated and which processes should be performed manually.

5. Evaluate whether the components of the service system should be developed, purchased, or reused based on established criteria.

SP 2.3 Ensure Interface Compatibility

Manage internal and external interface definitions, designs, and changes for service systems.

Many integration problems arise from unknown or uncontrolled aspects of both internal and external interfaces. Effective management of interface requirements, specifications, and designs helps to ensure that implemented interfaces will be complete and compatible.

In the context of service systems, interfaces can be broadly characterized according to one of four major groups:

• Person-to-person interfaces are interfaces that represent direct or indirect communication between two or more people, any of whom might be service provider staff or end users. For example, a call script, which defines how a help desk operator should interact with an end user, defines a direct person-to-person interface. Log books and instructional signage are examples of indirect person-to-person interfaces.

• Person-to-component interfaces are interfaces that encompass interactions between a person and one or more service system components. These interfaces can include both graphical user interfaces for automated components (e.g., software applications), and operator control mechanisms for automated, partially automated, and non-automated components (e.g., equipment, vehicles).

• Component-to-component interfaces are interfaces that do not include direct human interaction. The interfaces of many interactions between automated components belong to this group but other possibilities exist, such as specifications constraining the physical mating of two components (e.g., a delivery truck, a loading dock).

• Compound interfaces are interfaces that merge or layer together interfaces from more than one of the other three groups. For example, an online help system with “live” chat support might have a compound interface built on an integrated combination of person-to-person, person-to-component, and component-to-component interfaces.

Interfaces can also be characterized as external or internal. “External interfaces” are interactions between components of the service system and any entity external to the service system, including people, organizations, and other systems. “Internal interfaces” include the interactions among the staff, teams, and functions of the service provider organization, as well as interactions between the staff or end users and service system components.

Examples of user interface work products include the following:

• Customer interaction scripts

• Reporting types and frequency

• Application program interfaces

Example Work Products

1. Categories of interfaces with lists of interfaces per category

2. Table or mapping of interface relationships among service system components and the external environment

3. List of agreed interfaces defined for each pair of service system components when applicable

4. Reports from meetings of the interface control working group

5. Action items for updating interfaces

6. Updated interface description or agreement

Subpractices

1. Review interface descriptions for coverage and completeness.

The interface descriptions should be reviewed with relevant stakeholders to avoid misinterpretations, reduce delays, and prevent the development of interfaces that do not work properly.

2. Manage internal and external interface definitions, designs, and changes for service system components.

Management of the interfaces includes maintenance of the consistency of the interfaces throughout the life of the service system, compliance with architectural decisions and constraints, and resolution of conflict, noncompliance, and change issues. It is also important to manage the interfaces between components acquired from suppliers and other service system components.
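
As an illustration of such management (a Python sketch with invented names; real projects typically keep this information in interface control documents or a registry tool), each interface can be defined once, with every component declaring which agreed definition and version it provides or uses, so that mismatches surface before integration:

    # Agreed interface definitions, keyed by (name, version).
    interface_defs = {
        ("call script", 2): "operator greeting, escalation rules, closing",
        ("ticket API", 1): "create/update/close operations over HTTPS",
    }

    # Each component declares the interfaces it provides or uses.
    components = {
        "help desk operator": {"uses": [("call script", 2), ("ticket API", 1)]},
        "ticketing software": {"provides": [("ticket API", 1)]},
        "operator training":  {"provides": [("call script", 1)]},  # stale version
    }

    def compatibility_problems():
        problems = []
        for name, decl in components.items():
            for iface in decl.get("uses", []) + decl.get("provides", []):
                if iface not in interface_defs:
                    problems.append(f"{name}: no agreed definition for {iface}")
        return problems

    for problem in compatibility_problems():
        print(problem)  # -> operator training: no agreed definition for ('call script', 1)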

SP 2.4 Implement the Service System Design

Implement the service system design.

The term “implement” in this practice refers to the actual creation of designed components of the service system in a form that can subsequently be integrated, verified, and validated. “Implement” does not refer to putting the service system into place in the delivery environment. That deployment process occurs later during service system transition.

In some cases consumables and people (e.g., provider staff) may be “implemented.” For example, specialized paper forms may need to be printed. The “implementation” of people may involve hiring new staff or putting into place a new organizational or team structure to handle new kinds of responsibilities. Such new structures should be integrated, verified, and validated prior to the start of service transition.

Refer to the Service System Transition process area for more information about deploying the service system.

Service system components are implemented from previously established designs and interfaces. The implementation can include standalone testing of service system components and usually includes the development of any necessary training materials for staff and end users.

Example activities during implementation include the following:

• Interface compatibility is confirmed.

• Component functionality is incrementally delivered.

• Software is coded.

• Training materials are developed.

• Electrical and mechanical parts are fabricated.

• Procedures that implement process designs are written.

• Facilities are constructed.

• Supplier agreements are established.

• Staff are hired or transferred.

• Organizational and team structures are established.

• Custom consumables are produced (e.g., disposable packaging materials).

Example Work Products

1. Implemented service system components

2. Training materials

3. User, operator, and maintenance manuals

4. Procedure descriptions

5. Records of new hires and staff transfers

6. Records of communications about organizational changes

Subpractices

1. Use effective methods to implement the service system design.

2. Adhere to applicable standards and criteria.

3. Conduct peer reviews of selected service system components.

4. Perform standalone testing of service system components as appropriate.

5. Revise the service system as necessary.

SP 2.5 Integrate Service System Components

Assemble and integrate implemented service system components into a verifiable service system.

Integration of the service system should proceed according to a planned integration strategy and procedures. Before integration, each service system component should be verified for compliance with its interface requirements. Service system components that are manual processes should be performed while making appropriate use of any other necessary service system components to verify compliance with requirements.

During integration, subordinate components are combined into larger, more complex service system assemblies and more complete service delivery functions are performed. These combined service system assemblies are checked for correct interoperation. This process continues until service system integration is complete. During this process, if problems are identified, the problems are documented and corrective actions are initiated.

Some service systems can require assembly with customer or end-user resources to complete full integration. When these resources are available under the terms of a service agreement, they should be incorporated as appropriate in integration activities. When such resources are not available from customers and end users, substitute equivalent resources can be employed temporarily to enable full service system integration.

Example Work Products

1. Service system integration strategy with rationale

2. Documented and verified environment for service system integration

3. Service system integration procedures and criteria

4. Exception reports

5. Assembled service system components

6. Interface evaluation reports

7. Service system integration summary reports

8. Staffing plans that show the sequence of where and when staff are provided

Subpractices

1. Develop a service system integration strategy.

The integration strategy describes the approach for receiving, assembling, and evaluating the service system components that make up the service system.

The integration strategy should be aligned with the service strategy described in the Work Planning process area and harmonized with the service system solution and design. The results of developing a service system integration strategy can be documented in a service system integration plan, which is reviewed with stakeholders to promote commitment and understanding. (An illustrative integration-ordering sketch follows these subpractices.)

2. Ensure the readiness of the integration environment.

3. Confirm that each service system component required for integration has been properly identified, behaves according to its description, and that all interfaces comply with their interface descriptions.

4. Evaluate the assembled service system for interface compatibility and behavior (functionality and quality attributes).
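
The order of assembly in an integration strategy is often driven by dependencies among components. The following Python sketch (illustrative only; the component names are invented) derives a valid assembly order from declared dependencies using the standard library:

    from graphlib import TopologicalSorter  # Python 3.9+

    # Each component maps to the components that must be integrated first.
    dependencies = {
        "service desk": {"ticketing software", "trained operators"},
        "ticketing software": {"incident database"},
        "trained operators": {"training materials"},
        "incident database": set(),
        "training materials": set(),
    }

    order = list(TopologicalSorter(dependencies).static_order())
    print(order)
    # One valid order: incident database and training materials first, then
    # ticketing software and trained operators, and the service desk last.
    # At each step, verify interface compliance before the next assembly.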

SG 3 Verify and Validate Service Systems

Selected service system components and services are verified and validated to ensure correct service delivery.

Some service providers refer to all verification and validation as “testing.” However, in CMMI, “testing” is considered a specific method used for verification or validation. Verification and validation are described separately in this process area to ensure that both aspects are treated adequately.

Examples of verification methods include the following:

• Inspections

• Peer reviews

• Audits

• Walkthroughs

• Analyses

• Architecture evaluations

• Simulations

• Testing

• Demonstrations

• Continuous integration (i.e., an Agile approach for identifying integration issues early)

Examples of validation methods include the following:

• Discussions with users, perhaps in the context of a formal review

• Prototype demonstrations

• Functional presentations (e.g., service delivery run-throughs, end-user interface demonstrations)

• Pilots of training materials

• Tests of services and service system components by end users and other relevant stakeholders

• Cycle reviews for incremental development

Verification practices include verification preparation, conduct of verification, and identification of corrective action. Verification includes testing of the service system and selected service system components against all selected requirements, including existing service agreements, service requirements, and service system requirements.

Examples of service system components that may be verified and validated include the following:

• People

• Processes

• Equipment

• Software

• Consumables

Validation demonstrates that the service system, as developed, will deliver services as intended. Verification addresses whether the service system properly reflects the specified requirements. In other words, verification ensures that “you built it right.” Validation ensures that “you built the right thing.”

Validation activities use approaches similar to verification (e.g., test, analysis, inspection, demonstration, simulation). These activities focus on ensuring the service system enables the delivery of services as intended in the expected delivery environment. End users and other relevant stakeholders are usually involved in validation activities. Both validation and verification activities often run concurrently and can use portions of the same environment. Validation and verification activities can take place repeatedly in multiple phases of the service system development process.

SP 3.1 Prepare for Verification and Validation

Establish and maintain an approach and an environment for verification and validation.

Preparation is necessary to ensure that verification provisions are embedded in service and service system requirements, designs, developmental plans, and schedules. Verification encompasses selection, inspection, testing, analysis, and demonstration of all service system components, including work products, processes, and consumable resources.

Similar preparation activities are necessary for validation to be meaningful and successful. These activities include selecting services and service system components and establishing and maintaining the validation environment, procedures, and criteria. It is particularly important to involve end users and front-line service delivery staff in validation activities because their perspectives on successful service delivery can vary significantly from one another's and from those of service system developers.

Example Work Products

1. Lists of the service system components selected for verification and validation

2. Verification and validation methods for each selected component

3. Verification and validation environment

4. Verification and validation procedures

5. Verification and validation criteria

Subpractices

1. Select the components to be verified and validated and the verification and validation methods that will be used for each.

Service system components are selected based on their contribution to meeting service objectives and requirements and to addressing risks.

2. Establish and maintain the environments needed to support verification and validation.

3. Establish and maintain verification and validation procedures and criteria for selected service system components.
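
The selections, methods, procedures, and criteria are often summarized in a single V&V matrix. The sketch below (Python; the rows and component names are invented examples) also shows the kind of coverage check that keeps selected components from being overlooked:

    # Illustrative verification and validation matrix.
    vv_plan = [
        {"component": "call script", "activity": "verification",
         "method": "peer review", "criterion": "no major defects remain open"},
        {"component": "call script", "activity": "validation",
         "method": "run-through with end users", "criterion": "satisfaction >= 4 of 5"},
        {"component": "ticketing software", "activity": "verification",
         "method": "testing", "criterion": "all selected requirements pass"},
    ]

    selected = {"call script", "ticketing software", "service intake form"}
    covered = {row["component"] for row in vv_plan}
    for missing in sorted(selected - covered):
        print("no verification or validation planned for:", missing)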

SP 3.2 Perform Peer Reviews

Perform peer reviews on selected service system components.

Peer reviews involve a methodical examination of service system components by the producers’ peers to identify defects for removal and to recommend changes.

A peer review is an important and effective verification method implemented via inspections, structured walkthroughs, or a number of other collegial review methods.

Example Work Products

1. Peer review schedule

2. Peer review checklist

3. Entry and exit criteria for service system components and work products

4. Criteria for requiring another peer review

5. Peer review training material

6. Service system components selected for peer review

7. Peer review results, including issues and action items

8. Peer review data

Subpractices

1. Determine what type of peer review will be conducted.

Examples of types of peer reviews include the following:

• Inspections

• Structured walkthroughs

• Active reviews

2. Establish and maintain peer review procedures and criteria for the selected service system components and work products.

3. Define requirements for the peer review.

Peer reviews should address the following guidelines:

• The preparation should be sufficient.

• The conduct should be managed and controlled.

• Consistent and sufficient data should be recorded.

• Action items should be recorded.

Examples of requirements for peer reviews include the following:

• Data collection

• Entry and exit criteria

• Criteria for requiring another peer review

4. Establish and maintain checklists to ensure that service system components and work products are reviewed consistently.

Examples of items addressed by checklists include the following:

• Rules of construction

• Design guidelines

• Completeness

• Correctness

• Maintainability

• Common defect types

Checklists are modified as necessary to address the specific type of work product and peer review. The peers of the checklist developers and potential end users review the checklists.

5. Develop a detailed peer review schedule, including dates for peer review training and for when materials for peer reviews will be available.

6. Prepare for the peer review.

Preparation activities for peer reviews typically include the following:

• Identifying the staff who will be invited to participate in the peer review of each service system component or work product

• Identifying the key reviewers who should participate in the peer review

• Preparing and updating the materials to be used during the peer reviews, such as checklists and review criteria

7. Ensure that the service system component or work product satisfies the peer review entry criteria and make the component or work product available for review to participants early enough to enable them to adequately prepare for the peer review.

8. Assign roles for the peer review as appropriate.

Examples of roles include the following:

• Leader

• Reader

• Recorder

• Author

9. Conduct peer reviews on selected service system components and work products, and identify issues resulting from the peer review.

One purpose of conducting a peer review is to find and remove defects early. Peer reviews are performed incrementally as service system components and work products are being developed.

Peer reviews can be performed on key work products of specification, design, test, and implementation activities and specific planning work products. Peer reviews can be performed on staffing plans, competency descriptions, organizational structure, and other people oriented aspects of a service system. However, they should be used to review individual performance and competency with caution, and should be employed only in coordination with other methods of individual evaluation that the organization already has in place.

When issues arise during a peer review, they should be communicated to the primary developer or manager of the service system component or work product for correction.

10. Conduct an additional peer review if the defined criteria indicate the need.

11. Ensure that exit criteria for the peer review are satisfied.

12. Record and store data related to the preparation, conduct, and results of the peer reviews.

Typical data are service system component or work product name, composition of the peer review team, type of peer review, preparation time per reviewer, length of the review meeting, number of defects found, type and origin of defect, and so on. Additional information on the service system component or work product being peer reviewed can be collected.

Protect the data to ensure that peer review data are not used inappropriately. The purpose of peer reviews is to verify proper development and identify defects to ensure greater quality, not to provide reasons for disciplining staff or publicly criticizing performance. Failure to protect peer review data properly can ultimately compromise the effectiveness of peer reviews by leading participants to be less than fully candid about their evaluations.

13. Analyze peer review data.

Examples of peer review data that can be analyzed include the following:

• Actual preparation time or rate versus expected time or rate

• Actual number of defects versus expected number of defects

• Types of defects detected

• Causes of defects

• Defect resolution impact

(A minimal sketch of such analysis follows.)
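
The following Python sketch illustrates one simple analysis of recorded peer review data (all numbers are invented): comparing actual preparation time against expectations and computing defect density, two of the comparisons listed above.

    # Illustrative analysis of peer review records.
    reviews = [
        {"work_product": "call script v2", "prep_hours": 1.0,
         "expected_prep_hours": 2.0, "defects_found": 9, "size_pages": 12},
        {"work_product": "operations procedure", "prep_hours": 2.5,
         "expected_prep_hours": 2.0, "defects_found": 2, "size_pages": 10},
    ]

    for r in reviews:
        density = r["defects_found"] / r["size_pages"]
        under_prepared = r["prep_hours"] < 0.75 * r["expected_prep_hours"]
        note = " (low preparation; consider another review)" if under_prepared else ""
        print(f'{r["work_product"]}: {density:.2f} defects/page{note}')

Remember that such analysis supports verification quality, not staff evaluation, as noted in subpractice 12.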

SP 3.3 Verify Selected Service System Components

Verify selected service system components against their specified requirements.

The verification methods, procedures, criteria, and environment are used to verify the selected service system and any associated maintenance, training, and support processes. Verification activities should be performed throughout the service system lifecycle.

Example Work Products

1. Verification results and logs

2. Verification reports

3. Analysis report (e.g., statistics on performance, causal analysis of nonconformance, comparison of the behavior between the real service system and models, trends)

4. Trouble reports

5. Change requests for verification methods, criteria, and the environment

Subpractices

1. Perform verification of selected service system components and work products against their requirements.

Verification of the selected components includes verification of their integrated operation with one another and with appropriate external interfaces.

2. Record the results of verification activities.

3. Identify action items resulting from the verification of service system components and work products.

4. Document the “as-run” verification method and deviations from the available methods and procedures discovered during its performance.

5. Analyze and record the results of all verification activities.
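The following is a minimal sketch of subpractices 1 through 3 under stated assumptions: each requirement is paired with an executable check, and the function name, requirement IDs, and log format are invented for the example rather than prescribed by the model.

# Minimal sketch: requirement IDs, checks, and log format are assumptions.
from datetime import datetime, timezone

def verify_component(component, checks):
    """Run each (requirement_id, check) pair and record a pass/fail result."""
    results = []
    for req_id, check in checks:
        results.append({
            "component": component,
            "requirement": req_id,
            "passed": bool(check()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })
    return results

# Example: verifying a help desk component against two derived requirements.
checks = [
    ("REQ-017 responds to a test ticket within target time", lambda: True),   # stub check
    ("REQ-021 escalation path is configured", lambda: False),                 # stub check
]
log = verify_component("help desk", checks)
action_items = [r for r in log if not r["passed"]]  # failures become action items

In practice, many checks are manual (e.g., a reviewer confirming a process description); the point of the sketch is only that each result is recorded against an identified requirement so that action items and "as-run" deviations can be traced.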

SP 3.4 Validate the Service System

Validate the service system to ensure that it is suitable for use in the intended delivery environment and meets stakeholder expectations.

The validation methods, procedures, and criteria are used to validate selected services, service system components, and any associated maintenance, training, and support processes using the appropriate validation environment. Validation activities are performed throughout the service system lifecycle.

Validation of overall service system operation should take place in an environment that provides enough similarities to the delivery environment to confirm the service system will fulfill its intended use. The delivery environment is the complete set of circumstances and conditions under which services are actually delivered in accordance with service agreements. Sometimes validation can be effectively performed in a simulated environment but in other contexts it can only be performed in a portion of the delivery environment. In the latter cases, care should be taken to ensure that validation activities do not perturb ongoing service activities to the point of risking failures of agreed service delivery. (See the definition of “delivery environment” in the glossary.)

Example Work Products

1. Validation reports and results

2. Validation cross-reference matrix

3. Validation deficiency reports and other issues

4. Change requests for validation methods, criteria, and the environment

5. User acceptance (i.e., sign-off) for service delivery validation

6. Focus group reports

Subpractices

1. Perform functionality and quality attribute validation on selected service system components to ensure that they are suitable for use in their intended delivery environment.

The validation methods, procedures, criteria, and environment are used to validate the selected service system components and any associated maintenance, training, and support services.

2. Analyze the results of validation activities.

The data resulting from validation tests, inspections, demonstrations, or evaluations are analyzed against the defined validation criteria. Analysis reports indicate whether stakeholder needs were met. In the case of deficiencies, these reports document the degree of success or failure and categorize the probable causes of failure. The collected test, inspection, or review results are compared with established criteria to determine whether to proceed or to address requirements or design issues.
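As a hedged illustration (the measures and thresholds below are invented assumptions, not measures defined by the model), comparing observed validation results with defined criteria can be as simple as the following:

# Minimal sketch, assuming numeric validation measures with threshold criteria;
# the measure names and values are illustrative assumptions.
criteria = {
    "end_user_satisfaction": 4.0,   # minimum mean survey score on a 1-5 scale
    "task_completion_rate": 0.95,   # minimum fraction of pilot tasks completed
}
observed = {
    "end_user_satisfaction": 4.3,
    "task_completion_rate": 0.91,
}

deficiencies = {
    name: (observed[name], threshold)
    for name, threshold in criteria.items()
    if observed[name] < threshold
}

if deficiencies:
    # Deficiencies feed change requests against requirements or design.
    for name, (actual, required) in deficiencies.items():
        print(f"Deficiency: {name} = {actual} (criterion: >= {required})")
else:
    print("All validation criteria met: proceed")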

Verification and Validation

How do you make sure that your service system works properly and delivers services that your customers actually need? If you think that “working properly” and “delivering the services customers actually need” mean the same thing, keep reading. The distinction is familiar to some people but foreign to most.

When you verify your service system, you are checking that it satisfies all your service requirements. These requirements include all the derived requirements for services, subservices, service system components, and interfaces, as well as the initial requirements derived from service agreements or standard service definitions. Verification can be performed by testing, but many other methods are available and may be appropriate for different situations, including inspections, peer reviews, prototyping, piloting, and modeling and simulation. The important thing to remember about verification is that it can only tell you whether the service system satisfies the expressed requirements. Your service system can meet all of its expressed requirements and still fail to satisfy end users.

A common way this can occur is through a requirements defect, in which one or more of the initial service requirements are ambiguous, incorrectly specified, outright wrong, or completely missing. Even if the initial service requirements are specified well, derived requirements may have been developed inadequately. You may also have customers or end users with conflicting requirements that are not fully reflected in the initial requirements statements, or that are fully expressed but without sufficient guidance for prioritizing requirements or otherwise resolving the conflicts.

Too often, these types of issues come to light only when a service system is actually delivering services to end users. Fixing a requirements defect after a service system has been built can be expensive at best and can create a good deal of customer ill will at worst.

Validation practices help to keep these problems from occurring by making sure that customers and end users are involved throughout service system development. (The distinction between customers and end users is important here because they often have a different perspective on service requirements.) The model states, “Validation demonstrates that the service system, as developed, will deliver services as intended.” What it fails to state explicitly, and what needs some reinforcement, is that the focus is on delivering services as intended by the customer and the end user. If the service system does only what the service provider organization intends, that may not be good enough.

Both verification and validation are important for service system development, but of the two, validation is probably the greater challenge for the work group. Validation makes the work group dependent on both input from and cooperation with customers and end users, and this dependency adds risks and management complications. (If you are employing Agile methods for service system development as described earlier in this process area, then you should automatically achieve some reduction in validation risks, because your end users are more likely to be integrated with development activities, including validation.)

Also, because services are intangible by definition and cannot be stored, validating the actual delivery of services requires a service system to be operational in a way that can legitimately deliver real value to real end users. For some work groups, performing this level of validation before a service system is fully deployed can be difficult. Piloting a new or modified service system with a sample group of informed and willing end users is one method of validating it, and piloting can even work in situations with high-impact risks (e.g., controlled clinical testing of a new medical procedure).

But piloting often requires extra resources, set aside from ongoing service delivery, that some organizations simply cannot spare. In these cases, a work group may be able to complete final validation of service delivery only after a new or changed service system is partially or fully deployed. Customer and end-user feedback can always be solicited at that point to help with validation.

Regardless of how you handle it, validation is not something you want to skip over simply because it can create difficulties for your work group. If you do, you risk encountering much greater challenges in the long run.

