Service System Development

A Service Establishment and Delivery Process Area at Maturity Level 3

Purpose

The purpose of Service System Development (SSD) is to analyze, design, develop, integrate, verify, and validate service systems, including service system components, to satisfy existing or anticipated service agreements.

Introductory Notes

The Service System Development process area is applicable to all aspects of a service system. It applies to new service systems as well as changes to existing service systems.

A “service system” is an integrated and interdependent combination of service system components that satisfies stakeholder requirements.

A “service system component” is a process, work product, person, consumable, or customer or other resource required for a service system to deliver value. Service system components may include components owned by the customer or a third party.

A “service system consumable” is anything usable by the service provider that ceases to be available or becomes permanently changed by its use during the delivery of a service.

The people who are considered service system components are those who perform tasks as part of the service system, whether provider staff or end users; their tasks enable the system to operate and thereby deliver services.

See the definitions of “service system,” “service system component,” “service system consumable,” and “work product” in the glossary.

Organizations that wish to improve and appraise their product development processes should rely on the complete CMMI-DEV model, which specifically focuses on development as an area of interest.

Service provider organizations may also choose to use the CMMI-DEV model as the basis for improving and appraising their service system development processes. This use of the CMMI-DEV model is preferred for organizations that are already experienced with CMMI-DEV and for those that must develop large-scale, complex service systems.

However, the Service System Development process area offers an alternative means of achieving somewhat similar ends by covering requirements development as well as service system development, integration, verification, and validation in a single process area. Using SSD may be preferred by service provider organizations that are new to CMMI, especially those that are developing simple services with relatively few components and interfaces. Even organizations that use the CMMI-DEV model for service system development may wish to refer to the Service System Development process area for helpful guidance on applying development practices to service system components, such as people, processes, and consumables.

It is especially important to remember that the components of some service systems may be limited to people and the processes they perform. In those and similar contexts in which service systems are fairly simple, exercise care when interpreting the specific practices of this process area so that the implementations that result provide business value to the service provider organization.

The service system development process is driven by service and service system requirements collected from various sources, such as service agreements, as well as from defects and problems identified during service delivery and during incident resolution and prevention.

The Service System Development process area focuses on the following activities:

• Collecting, coordinating, analyzing, validating, and allocating stakeholder requirements for service systems

• Evaluating and selecting from alternative service system solutions

• Designing and building or composing (as needed), integrating, and documenting service systems that meet requirements

• Verifying and validating service systems to confirm that they satisfy their intended requirements and will satisfy customer and end-user expectations during actual service delivery

Related Process Areas

Refer to the Service Delivery process area for more information about maintaining the service system.

Refer to the Service System Transition process area for more information about deploying the service system.

Refer to the Strategic Service Management process area for more information about establishing standard services.

Refer to the Decision Analysis and Resolution process area for more information about analyzing possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.

Refer to the Organizational Innovation and Deployment process area for more information about selecting and deploying incremental and innovative improvements.

Refer to the Requirements Management process area for more information about managing requirements.

Specific Practices by Goal

SG 1 Develop and Analyze Stakeholder Requirements

Stakeholder needs, expectations, constraints, and interfaces are collected, analyzed, and transformed into validated service system requirements. This goal covers the transformation of collected stakeholder needs, expectations, and constraints into requirements that can be used to develop a service system that enables service delivery.

Needs are collected from sources that may include service agreements, standard defined services, organizational policies, and communication with end users, customers, and other relevant stakeholders. These service needs may define stakeholder expectations of what is to be delivered, specify particular levels or grades of service, or identify constraints on how, when, how often, or to whom services are to be delivered. These needs, expectations, and constraints in turn may need to be analyzed and elaborated to identify needed details of delivered services not considered by the original sources. The result is a set of stakeholder requirements specified in the language of service system developers, not in the language of those who submitted the requirements.

For example, a customer might establish a requirement to “maintain the equipment listed in Table 25 in working order” with additional details of availability rates, average repair times, and other service levels. However, this requirement may also imply a need for a variety of specialized subservices, such as diagnostics, field support, and preventive maintenance, each with its own implied subservice requirements. These refinements may not be of interest or even visible to the original stakeholders, but their full specification is needed to identify everything that a service system must do to meet the service delivery requirements.

As service requirements are analyzed and elaborated, they eventually yield derived service system requirements, which define and constrain what the service system must accomplish to ensure that the required service is delivered. For example, if the service has a response time requirement, the service system must have derived requirements that enable it to support that response time.

The process of developing and analyzing requirements may involve multiple iterations that include all relevant stakeholders in communicating requirements and their ramifications so that everyone agrees on a consistent defined set of requirements for the service system. Changes may be driven by changes to stakeholder expectations or by new needs discovered during subsequent service system development activities, service system transition, or service delivery. Since needs often change throughout the life of the project, the development and analysis of requirements should rarely be considered a one-time process.

As with all requirements, appropriate steps are taken to ensure that the approved set of service and service system requirements is effectively managed to support development of the service and service system.

Refer to the Requirements Management process area for more information about managing requirements changes.

SP 1.1 Develop Stakeholder Requirements

Collect and transform stakeholder needs, expectations, constraints, and interfaces into stakeholder requirements.

The needs of relevant stakeholders (e.g., customers, end users, suppliers, builders, testers, manufacturers, logistics support personnel, service delivery personnel) are the basis for determining stakeholder requirements. Stakeholder needs, expectations, constraints, interfaces, operational concepts, and service concepts are analyzed, harmonized, refined, and elaborated for translation into a set of stakeholder requirements.

Requirements collected from customers and end users of the service to be delivered are documented in the service agreement. These requirements are also used to derive requirements for the service system. These derived requirements are combined with other requirements collected for the service system to result in the complete set of stakeholder requirements.

Refer to the Service Delivery process area for more information about analyzing existing agreements and service data.

These stakeholder requirements should be stated in language that the stakeholders can understand, yet be precise enough to meet the needs of those developing the service or service system.

Examples of stakeholder requirements include the following:

• Operations requirements

• Customer delivery requirements

• Monitoring requirements

• Instrumentation requirements

• Documentation requirements

• Operating level agreement requirements

• Requirements from agreements with other stakeholders

Typical Work Products

1. Customer requirements

2. End-user requirements

3. Customer and end-user constraints on the conduct of verification and validation

4. Staffing level constraints

Subpractices

1. Engage relevant stakeholders using methods for eliciting needs, expectations, constraints, and external interfaces.

Eliciting goes beyond collecting requirements: it proactively identifies additional requirements not explicitly provided by customers, using techniques such as surveys, analyses of customer satisfaction data, prototypes, and simulations.

2. Transform stakeholder needs, expectations, constraints, and interfaces into stakeholder requirements.

The various inputs from relevant stakeholders must be consolidated, missing information must be obtained, and conflicts must be resolved in documenting the recognized set of stakeholder requirements.

3. Define constraints for verification and validation.

SP 1.2 Develop Service System Requirements

Refine and elaborate stakeholder requirements to develop service system requirements.

Stakeholder requirements are analyzed in conjunction with the development of the operational concept to derive more detailed and precise sets of requirements called “derived requirements.” These requirements address all aspects of the service system associated with service delivery, including work products, services, processes, consumables, and customer and other resources.

See the definition of “operational concept” in the glossary.

Derived requirements arise from constraints, consideration of issues implied but not explicitly stated in the stakeholder requirements baseline, and factors introduced by the selected service system architecture, the design, the developer’s unique business considerations, and strategic priorities, including industry market trends. The extent and depth of derived requirements vary with the complexity of the service system needed to meet stakeholder requirements.

Refer to the Strategic Service Management process area for more information about establishing standard services.

In some service contexts, derived requirements may be as simple as identification and quantification of required resources. For complex service systems with many types of components and interfaces, the initial requirements are reexamined through iterative refinement into lower level sets of more detailed requirements that parallel the functional architecture as the preferred solution is refined.

Typical Work Products

1. Derived requirements with relationships and priorities

2. Service requirements

3. Service system requirements

4. Requirement allocations

5. Design constraints

6. Interface requirements

7. Skill-level requirements

Subpractices

1. Develop requirements and express them in the terms necessary for service and service system design.

2. Derive requirements that result from solution selections and design decisions.

3. Establish and maintain relationships among requirements for consideration during change management and requirements allocation.

Relationships among requirements can aid in design and in evaluating the impact of changes. These relationships include dependencies in which a change in one requirement may affect other requirements. (A minimal traceability sketch follows these subpractices.)

4. Allocate the requirements to service system components.

5. Identify interfaces both external and internal to the service system.

6. Develop requirements for the identified interfaces.
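
The relationships and allocations described in subpractices 3 and 4 can be kept in a simple traceability record. The following Python sketch is one hypothetical way to capture derived requirements, their parent requirements, and their allocations to service system components, and to query the impact of a change; the requirement identifiers, component names, and the impact_of_change helper are illustrative assumptions, not part of the model.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    derived_from: list = field(default_factory=list)   # parent requirement IDs
    allocated_to: list = field(default_factory=list)   # service system components

# Hypothetical requirements for an equipment-maintenance service system (illustrative only)
requirements = {
    "SR-1":  Requirement("SR-1", "Maintain listed equipment in working order"),
    "SSR-1": Requirement("SSR-1", "Dispatch a field technician within 4 hours",
                         derived_from=["SR-1"], allocated_to=["dispatch process", "field staff"]),
    "SSR-2": Requirement("SSR-2", "Track repair status in the service database",
                         derived_from=["SR-1"], allocated_to=["tracking application"]),
}

def impact_of_change(req_id):
    # Derived requirements (and their allocations) affected if req_id changes
    affected = [r for r in requirements.values() if req_id in r.derived_from]
    return [(r.req_id, r.allocated_to) for r in affected]

print(impact_of_change("SR-1"))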

SP 1.3 Analyze and Validate Requirements

Analyze and validate requirements, and define required service system functionality.

Requirements analyses are performed to determine the impact the intended service delivery environment will have on the ability to satisfy the stakeholders’ needs, expectations, constraints, and interfaces. Depending on the service delivery context, factors such as feasibility, mission needs, cost constraints, end-user heterogeneity, potential market size, and procurement strategy must be taken into account. A definition of required functionality is also established. All specific methods of service delivery are considered and a timeline analysis is generated for time-critical sequencing of functions.

The objectives of the analyses are to determine candidate requirements for service system concepts that will satisfy stakeholder needs, expectations, and constraints, and then to translate these concepts into comprehensive service system requirements. In parallel with this activity, the parameters used to evaluate the effectiveness of service delivery are determined based on customer and end-user input and the preliminary service delivery concept.

Requirements are validated by working with relevant stakeholders to increase the probability that the resulting service system will deliver services as intended in the expected delivery environment.

Typical Work Products

1. Operational concepts, use cases, activity diagrams, and timeline scenarios

2. Service system and service system component installation, training, operational, maintenance, support, and disposal concepts

3. New requirements

4. Functional architecture

5. Requirements defects reports and proposed changes to resolve

6. Assessment of risks related to requirements

7. Record of analysis methods and results

Subpractices

1. Develop operational concepts and scenarios that include functionality, performance, maintenance, support, and disposal as appropriate.

Identify and develop scenarios, consistent with the level of detail in the stakeholder needs, expectations, and constraints, in which the proposed service system is expected to operate.

Operational concept and scenario development is an iterative process. Reviews of operational concepts and scenarios should be held periodically to ensure that they agree with the requirements. The review may be in the form of a walkthrough.

2. Develop a detailed operational concept that defines the interaction of the service system, end users, and the environment and that satisfies operational, maintenance, support, and disposal needs.

3. Establish and maintain a definition of required functionality.

The definition of functionality, also referred to as “functional analysis,” is the description of what the service system is designed to do. The definition of functionality can include actions, sequence, inputs, outputs, or other information that communicates the manner in which the service system will operate. (A minimal sketch of such a definition follows these subpractices.)

4. Analyze requirements to ensure that they are necessary and sufficient and that they balance stakeholder needs and constraints.

As requirements are defined, their relationship to higher level requirements and the higher level defined functionality must be understood. This analysis also determines which key requirements will be used to track progress.

5. Validate requirements to ensure that the resulting service system will perform as intended in the user’s environment.
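
As a minimal, hypothetical illustration of such a functional definition, the following Python sketch records the actions of a single service system function in sequence, with their inputs and outputs; the function name and steps are assumptions chosen only to show the form such a definition might take.

# Hypothetical functional definition for one service system function:
# the actions it performs, in sequence, with their inputs and outputs.
handle_incident_call = {
    "function": "handle incident call",
    "steps": [
        {"action": "record caller details",  "inputs": ["incoming call"],     "outputs": ["caller record"]},
        {"action": "classify the incident",  "inputs": ["caller record"],     "outputs": ["incident category"]},
        {"action": "create incident ticket", "inputs": ["incident category"], "outputs": ["open ticket"]},
        {"action": "confirm next steps",     "inputs": ["open ticket"],       "outputs": ["caller confirmation"]},
    ],
}

for i, step in enumerate(handle_incident_call["steps"], start=1):
    print(f"{i}. {step['action']}: {', '.join(step['inputs'])} -> {', '.join(step['outputs'])}")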

SG 2 Develop Service Systems

Service system components are selected, designed, implemented, and integrated.

A service system can encompass work products, processes, people, consumables, and customer and other resources.

An important and often-overlooked component of service systems is the human aspect. People who perform tasks as part of a service system enable the system to operate, and both provider staff and end users may fill this role. For example, a service system that processes incoming calls for a service must have available trained staff that can receive the calls and process them appropriately using the other components of the service system. In another example, end users of an insurance service may need to follow a prescribed claims process to receive service benefits from the service system.

A consumable is anything usable by the service provider that ceases to be available or becomes permanently changed because of its use during the delivery of a service. An example is gasoline for a transportation service system that uses gasoline-powered vehicles. Even service systems composed primarily of people and manual processes often use consumables such as office supplies. The role of consumables in service systems should always be considered.

This goal focuses on the following activities:

• Evaluating and selecting solutions that potentially satisfy an appropriate set of requirements

• Developing detailed designs for the selected solutions (detailed enough to implement the design as a service system)

• Implementing the designs of service system components as needed

• Integrating the service system so that its functions can be verified and validated

Typically, these activities overlap, recur, and support one another. Some level of design, at times fairly detailed, may be needed to select solutions. Prototypes, pilots, and stand-alone functional tests may be used as a means of gaining sufficient knowledge to develop a complete set of requirements or to select from among available alternatives.

From a people perspective, designs may be skill-level specifications and staffing plans, and prototypes or pilots may try out different staffing plans to determine which one works best under certain conditions. From a consumables perspective, designs may be specifications of necessary consumable characteristics and quantities. Some consumables may even require implementation. For example, specific paper forms may need to be designed and printed to test them as part of the service system later.

Development processes are implemented repeatedly on a service system as needed to respond to changes in requirements, or to problems uncovered during verification, validation, transition, or delivery. For example, some questions that are raised by verification and validation processes may be resolved by requirements development processes. Recursion and iteration of these processes enable the project to ensure quality in all service system components before it begins to deliver services to end users.

SP 2.1 Select Service System Solutions

Select service system solutions from alternative solutions.

Alternative solutions and their relative merits are considered in advance of selecting a solution. Key requirements, design issues, and constraints are established for use in alternative solution analysis. Architectural features that provide a foundation for service system improvement and evolution are considered.

Refer to the Decision Analysis and Resolution process area for more information about analyzing possible decisions using a formal evaluation process that evaluates identified alternatives against established criteria.

A potentially ineffective approach to implementing this practice is to generate solutions based only on the way services have been delivered in the past. It is important to consider alternatives that represent different ways of allocating and performing necessary functions (e.g., manual versus automated processes, end user versus service delivery personnel responsibilities, prescheduled versus on-the-fly service request management).

Components of the service system, including service delivery and support functions, may be allocated to external suppliers. As a result, prospective supplier agreements are investigated. The use of externally supplied components is considered relative to cost, schedule, performance, and risk. Externally supplied alternatives may be used with or without modification. Sometimes, such items may require modifications to aspects such as interfaces or a customization of some of their features to better meet service or service system requirements.

Refer to the Supplier Agreement Management process area for more information about managing the acquisition of products and services from suppliers.

Typical Work Products

1. Alternative solution screening criteria

2. Selection criteria

3. Service system component selection decisions and rationale

4. Documented relationships between requirements and service system components

5. Documented solutions, evaluations, and rationale

Subpractices

1. Establish defined criteria for selection.

2. Develop alternative solutions.

3. Select the service system solutions that best satisfy the criteria established.

Selecting service system solutions that best satisfy the criteria is the basis for allocating requirements to the different aspects of the service system. Lower level requirements are generated from the selected alternative and used to develop the design of service system components. Interface requirements among service system components are described (primarily functionally).
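
For straightforward selections, a weighted scoring of the alternatives against the established criteria may be sufficient (formal evaluations are addressed in Decision Analysis and Resolution). The following Python sketch is a minimal, hypothetical example of the mechanics; the criteria, weights, scores, and alternative names are assumptions, not recommendations.

# Hypothetical criteria and weights for evaluating service system alternatives
criteria_weights = {"cost": 0.4, "time_to_deploy": 0.2, "service_levels": 0.3, "risk": 0.1}

# Scores (1 = worst, 5 = best) assigned by the evaluation team for each alternative
alternatives = {
    "manual help desk":         {"cost": 4, "time_to_deploy": 5, "service_levels": 2, "risk": 4},
    "partially automated desk": {"cost": 3, "time_to_deploy": 3, "service_levels": 4, "risk": 3},
    "fully automated portal":   {"cost": 2, "time_to_deploy": 2, "service_levels": 5, "risk": 2},
}

def weighted_score(scores):
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranked = sorted(alternatives.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):.2f}")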

SP 2.2 Develop the Design

Develop designs for the service system and service system components.

The term “design” in this practice refers to the definition of the service system’s components and their intended set of relationships; these components will collectively interact in intended ways to achieve actual service delivery.

Service system designs must provide the appropriate content not only for implementation but also for other aspects of the service system lifecycle, such as modification, transition and rollout, maintenance, sustainment, and service delivery. The design documentation provides a reference to support mutual understanding of the design by relevant stakeholders and supports making future changes to the design both during development and in subsequent phases of the lifecycle.

A complete design description is documented in a “design package” that includes a full range of features and parameters, including functions, interfaces, operating thresholds, manufacturing and service process characteristics (e.g., which functions are automated versus manually performed), and other parameters. Established design standards (e.g., checklists, templates, process frameworks) form the basis for achieving a high degree of definition and completeness in design documentation.

Examples of other service-related work products include the following:

• Descriptions of roles, responsibilities, authorities, accountabilities, and skills of people required to deliver the service

• Functional use cases describing roles and activities of service participants

• Designs or templates for manuals, paper forms, training materials, and guides for end users, operators, and administrators

“Designing people” in this context means specifying the skills and skill levels necessary to accomplish needed tasks and may include appropriate staffing levels as well as training needs (if training is necessary to achieve needed skill levels).

“Designing consumables” in this context means specifying the consumable properties and characteristics necessary to support service delivery as well as resource utilization estimates for service system operation.

Typical Work Products

1. Service system architecture

2. Service system component and consumable designs

3. Skill descriptions and details of the staffing solution (e.g., allocated from available staff, hired as permanent or temporary staff)

4. Interface design specifications and control documents

5. Criteria for design and service system component reuse

6. Results of make-or-buy analyses

Subpractices

1. Develop a design for the service system.

2. Ensure that the design adheres to allocated requirements.

3. Document the design.

4. Design interfaces for the service system components using established criteria.

The criteria for interfaces frequently reflect critical parameters that must be defined, or at least investigated, to ascertain their applicability. These parameters are often peculiar to a given type of service system and are often associated with safety, security, durability, and mission-critical characteristics. Carefully determine which processes should be automated or partially automated and which processes should be performed manually.

5. Evaluate whether the components of the service system should be developed, purchased, or reused based on established criteria.

SP 2.3 Ensure Interface Compatibility

Manage internal and external interface definitions, designs, and changes for service systems.

Many integration problems arise from unknown or uncontrolled aspects of both internal and external interfaces. Effective management of interface requirements, specifications, and designs helps to ensure that implemented interfaces will be complete and compatible.

In the context of service systems, interfaces can be broadly characterized according to one of four major groups:

• Person-to-person interfaces are those that represent direct or indirect communication between two or more people, any of whom might be service provider personnel or end users. For example, a call script, which defines how a help desk operator should interact with an end user, defines a direct person-to-person interface. Log books and instructional signage are examples of indirect person-to-person interfaces.

• Person-to-component interfaces are those that encompass interactions between a person and one or more service system components. These interfaces can include both graphical user interfaces for automated components (e.g., software applications) and operator control mechanisms for automated, partially automated, and nonautomated components (e.g., equipment, vehicles).

• Component-to-component interfaces are those that do not include direct human interaction. The interfaces of many interactions between automated components belong to this group, but other possibilities exist, such as specifications constraining the physical mating of two components (e.g., a delivery truck, a loading dock).

• Compound interfaces are those that merge or layer together interfaces from more than one of the other three groups. For example, an online help system with “live” chat support might have a compound interface built on an integrated combination of person-to-person, person-to-component, and component-to-component interfaces.

Interfaces can also be characterized as external or internal. “External interfaces” are interactions between components of the service system and any entity external to the service system, including people, organizations, and systems. “Internal interfaces” can include the interactions among the staff, teams, and functions of the service provider organization, as well as interactions between the staff or end users and service system components.

Examples of user interface work products include the following:

• Customer interaction scripts

• Reporting types and frequency

• Application program interfaces

Typical Work Products

1. Categories of interfaces with lists of interfaces per category

2. Table or mapping of interface relationships among service system components and the external environment

3. List of agreed interfaces defined for each pair of service system components when applicable

4. Reports from meetings of the interface control working group

5. Action items for updating interfaces

6. Updated interface description or agreement

Subpractices

1. Review interface descriptions for coverage and completeness.

The interface descriptions should be reviewed with relevant stakeholders to avoid misinterpretations, reduce delays, and prevent the development of interfaces that do not work properly.

2. Manage internal and external interface definitions, designs, and changes for service system components.
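
One hypothetical way to maintain the table or mapping of interface relationships listed among the typical work products above is a small interface registry that can be checked for gaps. The following Python sketch illustrates the idea; the interface names, components, and required connections are assumptions used only for illustration.

# Hypothetical interface registry: each interface connects a provider and a consumer
# (components, staff roles, or external entities).
interfaces = [
    {"name": "call script v2",      "provider": "help desk staff",  "consumer": "end user",         "kind": "person-to-person"},
    {"name": "ticketing API v1.3",  "provider": "ticketing system", "consumer": "help desk client", "kind": "component-to-component"},
    {"name": "operator console UI", "provider": "dispatch system",  "consumer": "dispatcher",       "kind": "person-to-component"},
]

# Connections that the service system design says must exist (illustrative only)
required_connections = [
    ("help desk staff", "end user"),
    ("ticketing system", "help desk client"),
    ("dispatch system", "dispatcher"),
    ("billing system", "ticketing system"),   # no interface defined yet
]

defined = {(i["provider"], i["consumer"]) for i in interfaces}
missing = [pair for pair in required_connections if pair not in defined]
print("Connections without an agreed interface:", missing)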

SP 2.4 Implement the Service System Design

Implement the service system design.

The term “implement” in this practice refers to the actual creation of designed components of the service system in a form that can subsequently be integrated, verified, and validated. “Implement” does not refer to putting the service system into place in the delivery environment. That deployment process occurs later, during service system transition.

In some cases, consumables and people (e.g., provider staff) may be “implemented.” For example, specialized paper forms may need to be printed. The “implementation” of people may involve hiring new staff or putting into place a new organizational or team structure to handle new kinds of responsibilities. Such new structures should be integrated, verified, and validated prior to the start of service transition.

Refer to the Service System Transition process area for more information about deploying the service system.

Service system components are implemented from previously established designs and interfaces. The implementation may include stand-alone testing of service system components and usually includes the development of any necessary training materials for staff and end users.

Example activities during implementation include the following:

• Interface compatibility is confirmed.

• Software is coded.

• Training materials are developed.

• Electrical and mechanical parts are fabricated.

• Procedures that implement process designs are written.

• Facilities are constructed.

• Supplier agreements are established.

• Personnel are hired or transferred.

• Organizational and team structures are established.

• Custom consumables are produced (e.g., disposable packaging materials).

Typical Work Products

1. Implemented service system components

2. Training materials

3. User, operator, and maintenance manuals

4. Procedure descriptions

5. Records of new hires and staff transfers

6. Records of communications about organizational changes

Subpractices

1. Use effective methods to implement the service system design.

2. Adhere to applicable standards and criteria.

3. Conduct peer reviews of selected service system components.

4. Perform standalone testing of service system components as appropriate.

5. Revise the service system as necessary.

SP 2.5 Integrate Service System Components

Assemble and integrate implemented service system components into a verifiable service system.

Integration of the service system should proceed according to a planned integration sequence and available procedures. Before integration, each service system component should be verified for compliance with its interface requirements. Service system components that are manual processes should be performed while making appropriate use of any other necessary service system components to verify compliance with requirements.

During integration, subordinate components are combined into larger, more complex service system assemblies, and more complete service delivery functions are performed. These combined service system assemblies are checked for correct interoperation. This process continues until service system integration is complete. During this process, if problems are identified, the problems are documented and corrective actions are initiated.

Some service systems may require assembly with customer or end-user resources to complete full integration. When these resources are available under the terms of a service agreement, they should be incorporated as appropriate in integration activities. When such resources are not available from customers and end users, substitute equivalent resources may be employed temporarily to enable full service system integration.
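
A planned integration sequence can often be derived from the dependencies among service system components. The following Python sketch, using the standard library graphlib module (Python 3.9 or later), is a minimal, hypothetical illustration that orders components so that each is integrated only after the components it depends on; the component names and dependencies are assumptions.

from graphlib import TopologicalSorter

# Hypothetical dependencies: each component maps to the components it depends on
dependencies = {
    "help desk staff (trained)": {"training materials", "ticketing application"},
    "ticketing application":     {"service database"},
    "training materials":        {"call scripts"},
    "service database":          set(),
    "call scripts":              set(),
}

# static_order() yields an integration sequence that respects every dependency
sequence = list(TopologicalSorter(dependencies).static_order())
print("Integration sequence:", sequence)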

Typical Work Products

1. Service system integration sequence with rationale

2. Documented and verified environment for service system integration

3. Service system integration procedures and criteria

4. Exception reports

5. Assembled service system components

6. Interface evaluation reports

7. Service system integration summary reports

8. Staffing plans that show where and when staff members are provided during integration

Subpractices

1. Ensure the readiness of the integration environment.

2. Confirm that each service system component required for integration has been properly identified and functions according to its description, and that all interfaces comply with their interface descriptions.

3. Evaluate the assembled service system for interface compatibility, functionality, and performance.

SG 3 Verify and Validate Service Systems

Selected service system components and services are verified and validated to ensure correct service delivery.

Some service providers refer to all verification and validation as “testing.” However, in CMMI, “testing” is considered a specific method used for verification or validation. Verification and validation are described separately in this process area to ensure that both aspects are treated adequately.

Examples of verification methods include the following:

• Inspections

• Peer reviews

• Audits

• Walkthroughs

• Analyses

• Simulations

• Testing

• Demonstrations

Examples of validation methods include the following:

• Discussions with users, perhaps in the context of a formal review

• Prototype demonstrations

• Functional presentations (e.g., service delivery run-throughs, end-user interface demonstrations)

• Pilots of training materials

• Tests of services and service system components by end users and other relevant stakeholders

Verification practices include verification preparation, conduct of verification, and identification of corrective action. Verification includes testing of the service system and selected service system components against all selected requirements, including existing service agreements, service requirements, and service system requirements.

Examples of service system components that may be verified and validated include the following:

• People

• Processes

• Equipment

• Software

• Consumables

Validation demonstrates that the service system, as developed, will deliver services as intended. Verification addresses whether the service system properly reflects the specified requirements. In other words, verification ensures that “you built it right.” Validation ensures that “you built the right thing.”

Validation activities use approaches similar to verification (e.g., test, analysis, inspection, demonstration, simulation). These activities focus on ensuring that the service system enables the delivery of services as intended in the expected delivery environment. End users and other relevant stakeholders are usually involved in validation activities. Both validation and verification activities often run concurrently and may use portions of the same environment. Validation and verification activities can take place repeatedly in multiple phases of the service system development process.

SP 3.1 Prepare for Verification and Validation

Establish and maintain an approach and an environment for verification and validation.

Preparation is necessary to ensure that verification provisions are embedded in service and service system requirements, designs, developmental plans, and schedules. Verification encompasses selection, inspection, testing, analysis, and demonstration of all service system components, including work products, processes, and consumable resources.

Similar preparation activities are necessary for validation to be meaningful and successful. These activities include selecting services and service system components and establishing and maintaining the validation environment, procedures, and criteria. It is particularly important to involve end users and front-line service delivery personnel in validation activities because their perspectives on successful service delivery can vary significantly from one another and from those of service system developers.

Typical Work Products

1. Lists of the service system components selected for verification and validation

2. Verification and validation methods for each selected component

3. Verification and validation environment

4. Verification and validation procedures

5. Verification and validation criteria

Subpractices

1. Select the components to be verified and validated and the verification and validation methods that will be used for each.

Service system components are selected based on their contribution to meeting project objectives and requirements and to addressing project risks.

2. Establish and maintain the environments needed to support verification and validation.

3. Establish and maintain verification and validation procedures and criteria for selected service system components.
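
The selected components, methods, procedures, and criteria can be recorded in a simple verification and validation plan. The following Python sketch is one hypothetical form such a record might take; the components, methods, and criteria shown are assumptions, not prescribed content.

# Hypothetical verification and validation plan entries: each selected component
# is paired with the method to be used and the criteria for judging the result.
vv_plan = [
    {"component": "incident call script", "activity": "verification",
     "method": "peer review", "criteria": "covers all agreed incident categories"},
    {"component": "dispatch procedure",   "activity": "verification",
     "method": "walkthrough", "criteria": "matches the documented design steps"},
    {"component": "end-user web portal",  "activity": "validation",
     "method": "pilot with end users", "criteria": "80% of pilot users complete a request unaided"},
]

for entry in vv_plan:
    print(f"{entry['activity']:<12} {entry['component']:<22} "
          f"{entry['method']:<22} criteria: {entry['criteria']}")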

SP 3.2 Perform Peer Reviews

Perform peer reviews on selected service system components.

Peer reviews involve a methodical examination of service system components by the producers’ peers to identify defects for removal and to recommend changes.

A peer review is an important and effective verification method implemented via inspections, structured walkthroughs, or a number of other collegial review methods.

Typical Work Products

1. Peer review schedule

2. Peer review checklist

3. Entry and exit criteria for service system components and work products

4. Criteria for requiring another peer review

5. Peer review training material

6. Service system components selected for peer review

7. Peer review results, including issues and action items

8. Peer review data

Subpractices

1. Determine what type of peer review will be conducted.

Examples of types of peer reviews include the following:

• Inspections

• Structured walkthroughs

• Active reviews

2. Establish and maintain peer review procedures and criteria for the selected service system components and work products.

3. Define requirements for the peer review.

Peer reviews should address the following guidelines:

• The preparation must be sufficient.

• The conduct must be managed and controlled.

• Consistent and sufficient data must be recorded.

• Action items must be recorded.

Examples of requirements for peer reviews include the following:

• Data collection

• Entry and exit criteria

• Criteria for requiring another peer review

4. Establish and maintain checklists to ensure that service system components and work products are reviewed consistently.

Examples of items addressed by checklists include the following:

• Rules of construction

• Design guidelines

• Completeness

• Correctness

• Maintainability

• Common defect types

Checklists are modified as necessary to address the specific type of work product and peer review. The checklists are reviewed by the checklist developers’ peers and by potential users.

5. Develop a detailed peer review schedule, including dates for peer review training and for when materials for peer reviews will be available.

6. Prepare for the peer review.

Preparation activities for peer reviews typically include the following:

• Identifying the staff who will be invited to participate in the peer review of each service system component or work product

• Identifying the key reviewers who must participate in the peer review

• Preparing and updating the materials to be used during the peer reviews, such as checklists and review criteria

7. Ensure that the service system component or work product satisfies the peer review entry criteria, and make the component or work product available for review to participants early enough to enable them to adequately prepare for the peer review.

8. Assign roles for the peer review as appropriate.

Examples of roles include the following:

• Leader

• Reader

• Recorder

• Author

9. Conduct peer reviews on selected service system components and work products, and identify issues resulting from the peer review.

One purpose of conducting a peer review is to find and remove defects early. Peer reviews are performed incrementally as service system components and work products are being developed.

Peer reviews may be performed on key work products of specification, design, test, and implementation activities, and on specific planning work products. Peer reviews may also be performed on personnel staffing plans, competency descriptions, organizational structure, and other people-oriented aspects of a service system. However, they should be used with caution to review individual performance and competency, and only in coordination with other methods of individual evaluation that the organization already has in place.

When issues arise during a peer review, they should be communicated to the primary developer or manager of the service system component or work product for correction.

10. Conduct an additional peer review if the defined criteria indicate the need.

11. Ensure that exit criteria for the peer review are satisfied.

12. Record and store data related to the preparation, conduct, and results of peer reviews.

Typical data are service system component or work product name, composition of the peer review team, type of peer review, preparation time per reviewer, length of the review meeting, number of defects found, type and origin of defect, and so on. Additional information on the service system component or work product being peer reviewed may be collected.

Protect the data to ensure that peer review data are not used inappropriately. The purpose of peer reviews is to verify proper development and identify defects to ensure greater quality, not to provide reasons for disciplining personnel or publicly criticizing performance. Failure to protect peer review data properly can ultimately compromise the effectiveness of peer reviews by leading participants to be less than fully candid about their evaluations.

13. Analyze peer review data.

Examples of peer review data that can be analyzed include the following:

• Actual preparation time or rate versus expected time or rate

• Actual number of defects versus expected number of defects

• Types of defects detected

• Causes of defects

• Defect resolution impact
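
As a minimal, hypothetical illustration of such analysis, the following Python sketch compares actual preparation effort and defects found against expectations and computes a simple defect density; the work products, figures, and thresholds are assumptions.

# Hypothetical peer review records (subpractice 13): compare actuals to expectations
reviews = [
    {"work_product": "call script v2",     "prep_hours": 1.5, "expected_prep_hours": 2.0,
     "defects_found": 7, "expected_defects": 5, "size_pages": 10},
    {"work_product": "dispatch procedure", "prep_hours": 0.5, "expected_prep_hours": 1.5,
     "defects_found": 1, "expected_defects": 4, "size_pages": 8},
]

for r in reviews:
    density = r["defects_found"] / r["size_pages"]            # defects per page
    prep_ratio = r["prep_hours"] / r["expected_prep_hours"]   # < 1.0 may signal under-preparation
    defect_ratio = r["defects_found"] / r["expected_defects"]
    print(f"{r['work_product']}: {density:.2f} defects/page, "
          f"prep at {prep_ratio:.0%} of expected, defects at {defect_ratio:.0%} of expected")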

SP 3.3 Verify Selected Service System Components

Verify selected service system components against their specified requirements.

The verification methods, procedures, criteria, and environment are used to verify the selected service system and any associated maintenance, training, and support processes. Verification activities should be performed throughout the service system lifecycle.

Typical Work Products

1. Verification results and logs

2. Verification reports

3. Analysis report (e.g., statistics on performance, causal analysis of nonconformance, comparison of the behavior between the real service system and models, trends)

4. Trouble reports

5. Change requests for verification methods, criteria, and the environment

Subpractices

1. Perform verification of selected service system components and work products against their requirements.

2. Record the results of verification activities.

3. Identify action items resulting from the verification of service system components and work products.

4. Document the “as-run” verification method and deviations from the available methods and procedures discovered during its performance.

5. Analyze and record the results of all verification activities.

SP 3.4 Validate the Service System

Validate the service system to ensure that it is suitable for use in the intended delivery environment and meets stakeholder expectations.

The validation methods, procedures, and criteria are used to validate selected services, service system components, and any associated maintenance, training, and support processes using the appropriate validation environment. Validation activities are performed throughout the service system lifecycle. It is particularly important to involve actual end users and front-line service delivery personnel in validation activities because their perspectives on successful service delivery can vary significantly from one another and from those of service system developers.

Validation must take place in an environment that is similar enough to the delivery environment to ensure that the results are meaningful. The delivery environment is the complete set of circumstances and conditions under which services are actually delivered in accordance with service agreements. Sometimes validation may be effectively performed in a simulated environment, but in other contexts it can be performed only in a portion of the delivery environment. In those cases, care must be taken to ensure that validation activities do not perturb ongoing service activities to the point of risking failures of agreed service delivery.

See the definition of “delivery environment” in the glossary.

Typical Work Products

1. Validation reports and results

2. Validation cross-reference matrix

3. Validation deficiency reports and other issues

4. Change requests for validation methods, criteria, and the environment

5. User acceptance (i.e., sign off) for service delivery validation

6. Focus group reports

Subpractices

1. Perform functional and nonfunctional validation on selected service system components to ensure that they are suitable for use in their intended delivery environment.

The validation methods, procedures, criteria, and environment are used to validate the selected service system components and any associated maintenance, training, and support services.

2. Analyze the results of validation activities.

The data resulting from validation tests, inspections, demonstrations, or evaluations are analyzed against defined validation criteria. Analysis reports indicate whether the needs were met. In the case of deficiencies, these reports document the degree of success or failure and categorize probable cause of failure. The collected test, inspection, or review results are compared with established criteria to determine whether to proceed or to address requirements or design issues.
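
As a minimal, hypothetical illustration of comparing validation results with established criteria, the following Python sketch flags deficiencies and indicates whether to proceed or to address requirements or design issues; the criteria names, thresholds, and pilot results are assumptions.

# Hypothetical validation results compared against defined criteria (subpractice 2)
validation_criteria = {
    "request completion rate": {"threshold": 0.80, "higher_is_better": True},
    "mean response time (s)":  {"threshold": 30.0, "higher_is_better": False},
}

pilot_results = {"request completion rate": 0.74, "mean response time (s)": 22.0}

deficiencies = []
for name, crit in validation_criteria.items():
    value = pilot_results[name]
    ok = value >= crit["threshold"] if crit["higher_is_better"] else value <= crit["threshold"]
    if not ok:
        deficiencies.append(f"{name}: observed {value}, criterion {crit['threshold']}")

if deficiencies:
    print("Address requirements or design issues:", deficiencies)
else:
    print("Criteria met; proceed")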

Verification and Validation

How do you make sure that your service system works properly and delivers services that your customers actually need? If you still think that “working properly” and “delivering actually needed services” mean the same thing, keep reading. The distinction is familiar to some people but is foreign to most.

When you verify your service system, you are checking that it satisfies all your service requirements. These include all the derived requirements for services, subservices, service system components, and interfaces, as well as the initial requirements derived from service agreements or standard service definitions. Verification can be performed by testing, but many other methods are available and may be appropriate for different situations, including inspections, peer reviews, prototyping, piloting, and modeling and simulation. The important thing to remember about verification is that it can only tell you if the service system satisfies all the expressed requirements. Your service system can meet all of its expressed requirements and still fail to satisfy end users.

A common way this can occur is through a requirements defect, in which one or more of the initial service requirements are ambiguous, incorrectly specified, outright wrong, or completely missing. Even if the initial service requirements are specified well, the derived requirements may have been developed inadequately. You may also have customers or end users with conflicting requirements that are not fully reflected in the initial requirements statements, or that are fully expressed but without sufficient guidance for prioritizing requirements or otherwise resolving the conflicts.

Too often, these types of issues come to light only when a service system is actually delivering services to end users. Fixing a requirements defect after a service system has been built can be expensive at best and can create a good deal of customer ill will at worst.

Validation practices help to keep these problems from occurring, by making sure that customers and end users are involved throughout service system development. (The distinction between customers and end users is important here, because they often have a different perspective on service requirements.) The model states, “Validation demonstrates that the service system, as developed, will deliver services as intended.” What it fails to state explicitly, and what needs some reinforcement, is that the focus is on delivering services as intended by the customer and the end user. If the service system does only what the service provider organization intends, that may not be good enough.

Both verification and validation are important for service system development, but of the two, validation is probably the greater challenge for the project. Validation makes the project dependent on both input from and cooperation by customers and end users, and this dependency adds risks and management complications.

Also, because services are intangible by definition and cannot be stored, validating the actual delivery of services requires a service system to be operational in a way that can legitimately deliver real value to real end users. For some projects, performing this level of validation before a service system is fully deployed can be difficult. Piloting a new or modified service system with a sample group of informed and willing end users is one method of validating it, and piloting can even work in situations with high-impact risks (e.g., controlled clinical testing of a new medical procedure).

But piloting often requires extra resources to be set aside from ongoing service delivery that may simply be unavailable for some organizations. In these cases, a project may be able to complete final validation of service delivery only after a new or changed service system is partially or fully deployed. Customer and end-user feedback can always be solicited at that point to help with validation.

However you handle it, validation is not something you want to skip over simply because it can create difficulties for your project. If you do, you risk running into much greater challenges in the long run.
