The Run & Improve roadmap stage usually commences when the Implement stage is completed, although some aspects of the transition will often still be in Implement when the stage begins. If the chosen implementation approach is ‘phased’, the Run & Improve stage will take on elements of delivery incrementally as each phase, service, process or service provider exits the Implement stage (see section 4.1.2 Phased approach).

The inputs to this stage were designed during the Discovery & Strategy and Plan & Build stages, then deployed during the Implement stage. They include:

SIAM model

Process models

Governance framework, including the structural elements

Performance management and reporting framework

Collaboration model for service providers

Tooling strategy

Ongoing improvement framework

5.1 Operate governance structural elements

The structural elements are intended to provide stability, support and governance of the SIAM ecosystem, enabling collaboration, supporting smooth running and focusing on continual improvement.

Governance boards have an important role in the control of the overall SIAM ecosystem. During the Discovery & Strategy stage, the high-level governance framework is created (see section 2.3 Establish a SIAM governance framework). In Implement, this is transferred to the live environment. In Run & Improve, the governance boards, process forums and working groups perform their roles.

A success factor for a SIAM ecosystem is that a customer organization is able to define, establish and continually adapt the service integration governance. For example, Figure 23 illustrates a SIAM governance model where COBIT is used to establish the service integration governance (and structural) elements.

Figure 23: Mapping SIAM roles onto the COBIT 5 business framework

Governance activities are carried out at strategic, tactical and operational levels through governance boards. Boards are decision-making bodies and provide the required level of governance in a SIAM environment. In complex environments with many different service providers, more boards might be created to address specific areas. However, in less complex environments with fewer service providers, fewer boards may be more appropriate.

For example, there may be fewer boards within an ecosystem that has smaller service providers with fewer operational staff who can contribute. To reduce the overhead on these service providers, it might be necessary to reduce the number of meetings by combining boards or process forums, or reducing the meeting frequency (see section Governance boards).


The Bank of Blue is a multinational bank and financial services organization, headquartered in London, UK. It provides the following services:

Retail banking

Corporate and investment banking

Credit card solutions

Home loans services

The IT division of Bank of Blue has recently implemented COBIT for enterprise IT management and IT governance. As a first step, the IT manager decided to implement control objectives for the management of service requests and incidents. As the service integration team in Bank of Blue IT is responsible for end-to-end management of service requests and incidents across multiple service providers, the IT manager asked the service integration manager to propose how an appropriate governance structure could be implemented.

Each COBIT governance and management domain and process is mapped to related guidance. This guidance identifies the relevant standards and/or frameworks, with detailed references to the specific sections of each standard or framework that apply.

At an IT governance and management layer, the COBIT management process ‘Manage Service Requests and Incidents’ was used to define the required governance and enterprise needs from this management process.

Below is an example from the COBIT publication Governance of Enterprise IT based on COBIT 5:

Process Number: DSS02 (a management process in COBIT)

Process Name: Manage Service Requests and Incidents

Area: Management

Domain: Deliver, Service and Support

Process Description: Provide timely and effective response to user requests and resolution of all types of incidents. Restore normal service, record and fulfil user requests, and record, investigate, diagnose, escalate and resolve incidents

Process Purpose Statement: Achieve increased productivity and minimize disruptions through quick resolutions of user queries and incidents

RACI Chart:

DSS02.01: Define incident and service request classification schemes

DSS02.02: Record, classify and prioritize requests and incidents

DSS02.03: Verify, approve and fulfil service requests

DSS02.04: Investigate, diagnose and allocate incidents

DSS02.05: Resolve and recover from incidents

DSS02.06: Close service requests and incidents

DSS02.07: Track status and produce reports

The service integrator defined the integrated service request and incident management process to fulfil the needs of IT governance, as defined by the COBIT management process guidelines.

The service integrator also defined the principles and policies for this process and let the service providers define their own work instructions and procedures to support the process.

5.1.1 Strategic governance: Executive boards

Executive boards provide governance and oversight at the most senior level.

The attendees for these boards are senior staff with accountability for their organization’s role in the SIAM ecosystem.

The central tenet of governance is transparency across decision-making processes. In a SIAM ecosystem, the executive boards need to demonstrate this with regard to any decisions made, from the initial investment in the SIAM model (doing the right thing), right through to delivering the desired benefits during the SIAM transformation (doing things right).

These boards hold the service providers and the service integrator to account for their performance and should:

Adopt agreed standards and policies

Set priorities and approve resource allocation

Make decisions that are acted upon

Reward as well as censure

Adjudicate escalated issues

In addition to the executive board attended by all service providers, each service provider has an individual board with the customer and the service integrator. This allows a service provider to discuss commercial performance and sensitive issues with an appropriate audience. If the service integrator is also an external organization, it may need to leave these individual boards for specific commercial discussions.

The Executive Steering Board is the executive board responsible for setting the SIAM vision and directing the role of SIAM governance within the larger context of IT governance. This board should also include attendees from principal parts of the business to represent consumers of the services (see section Executive Steering Board).

The early focus of this board is on achieving the SIAM implementation. In the Run & Improve stage, it changes its focus to the operation of the SIAM model. This can be a challenge for some attendees who are more at home with project delivery. There is sometimes a change of attendees at this stage, from senior project team members to senior service delivery representatives.

There are likely to be unfinished activities from the previous stages. If a phased transition approach has been adopted, the next phases will still be in the planning, building or implementation stages. The board should maintain a focus on these as well as on the live services. This is normally achieved through having separate agenda items.

Some organizations choose to keep two separate boards until all the phases have completed – one focusing on live services and the other on projects that are not yet live. Although this can seem a good idea and can work well, it can also lead to conflict and tension between the two boards and, if not carefully managed, can result in gaps, overlaps and confusion.

Project boards that never end

A large SIAM transformation project involving multiple service providers had an executive project board that had been established for some time. The executive project board had been in place before the design and implementation of the SIAM model and its associated governance boards. The focus of the executive project board was on project delivery. They had strong relationships with all the service providers. Once the services were live, this board continued to meet and started to discuss live service issues.

For some time, there was conflict and confusion between this board and the Executive Steering Board established under the SIAM model. Both boards considered that they were responsible for live service issues, but there were still project activities to govern as well as the services to be developed and enhanced. The service providers preferred to attend the project board as they had built good relationships with the attendees.

The resolution was to terminate the executive project board and transfer its responsibilities to the Executive Steering Board.

There is a significant risk that the executive boards try to deal with too much detail, including items that should sit with the tactical and operational governance boards. This is particularly the case where the same people sit on the different boards, as can happen with smaller SIAM ecosystems.

In this situation, it is important to establish:

Clear terms of reference for each board

Agenda items that detail what should be discussed

Strong chairing capabilities

Defined procedures for escalating and devolving items between the different board levels

5.1.2 Tactical governance boards

Tactical boards sit between the strategic and operational boards. They undertake preparation activities in readiness for the strategic board, and can be used to carry out discussions before meeting with the customer organization, or another stakeholder, at an executive level. They should also be used to identify items for escalation to the strategic board, and act as a point of escalation for operational boards.

These boards are not typically attended by the customer, and will be chaired by the service integrator, acting as the customer’s agent. In some instances, such as the early days when the service integrator role is being established, representation from the customer organization’s retained capabilities may be needed to provide initial support and to reaffirm the integrator’s role.

Service review board

A service review board is an example of a tactical board. Its activities may include:

Ensuring alignment of SIAM medium-term strategies with IT governance and the vision as given by strategic boards

Optimizing the design, delivery, operations and sourcing of services

Providing recommendations to the strategic boards for change in contracts, service providers or financials

Conducting an annual review of service provider performance, service improvements and the service portfolio

Reviewing the key potential risks from the operational boards

5.1.3 Operational governance boards

The main operational board convenes to discuss service performance at a lower level than either the strategic or tactical boards.

It will review service performance and act as an escalation point for all other operational boards and process forums. For example, it may authorize budget or resources to carry out improvement activities identified in a process forum, where those activities exceed the forum’s approval limit.

Other operational boards will be scheduled as required to support decision making. The most common example of this is the integrated change advisory board. To provide a clear view of the operational environment and support the operational boards, it is commonplace to use a visual management tool to display service performance information.

Using visual tools to assist operational boards

The service integrator at the Bank of Blue implemented and ran a Kanban board (in this context, ‘board’ means a tool for visualization and not a governance body). This used Lean systems thinking to create representations of service status and issues on physical or electronic whiteboards.

The Kanban board was used by an operational governance board at two levels:

The level one operational governance board was attended by team leaders for respective service provider teams at a component level

The leadership team used the level two board, which covered an integrated view of service and any escalations, bottlenecks or concerns from level one

The Bank of Blue found that this approach provided the following advantages:

The status and outcomes were transparent to all, providing a plan and supporting dialogue

It provided an indicator for team leaders and supervisors to react and, if necessary, stop work to initiate a countermeasure plan

It facilitated discussions about performance across teams at all levels

Figure 24 shows an example of a simple Kanban board that can support visual management (see section 5.7.2 Measurement practices for more on visual management).

Figure 24: Kanban board

5.2 Process forums and working groups

At an operational level, working groups and process forums all help to establish relationships and encourage communication between service providers and the service integrator. These working groups and process forums are part of the structural elements of the SIAM ecosystem, spanning the SIAM layers.

There are many possible process forums and working groups that can be implemented. Decisions regarding what is required are considered during the Plan & Build stage, but the value of these must be evaluated on an ongoing basis. The service integrator must balance the benefit of bringing teams together against the impact on service delivery, ensuring that the overhead of participation does not become so great that it negates the value.

Example of a traditional working group: Major problem working group

A major problem is any problem where the severity is such that it is deemed necessary to perform urgent problem analysis on the issue with the intent to identify the root cause. Within a SIAM model, this would be carried out via a working group. The scope of a major problem analysis may include people, process, measurement, environment, technology and material.

Techniques such as Kepner-Tregoe problem analysis can be used to facilitate a major problem review or root cause analysis (RCA). This technique can be used when bringing together a group of subject matter experts (SMEs) within a working group.

The scope of a major problem working group is typically:

Major incident investigation where RCA is required to restore services

Recurring incidents leading to major problem analysis

High-priority problem analysis to avoid possible high-priority incidents

Since many service providers have experience of traditional service management approaches, such as ITIL and COBIT, they are often comfortable with the working group format and can engage successfully in this environment.

Structural elements such as process forums in a SIAM ecosystem are typically aligned to a specific process or practice. Members work together on proactive development, innovations and improvements. It is acceptable to combine process forums and working groups where there is a case to do so. For example, a problem management forum may have within its scope an action to convene a working group when a major problem is identified, or when a backlog of problem records needs to be acted on. Similarly, process forums can be amalgamated, for example, into an integrated change and release forum. In each instance of adaptation, it is necessary to ensure understanding of the scope and intent of the group, and to undertake ongoing value measurement.

When operating in a relatively stable environment, it may make sense to introduce a multi-layer structure for control and governance, aligned to a more traditional and formal structure. In an evolving or change-driven environment, flexibility and less formal structures for control and governance may be best.

It is important to understand that structural elements are not limited to service management frameworks such as ITIL or standards such as ISO/IEC 20000. With the adoption of Agile methodologies into the service management discipline, the structural elements may use practices based on Agile and Scrum. For example, Agile retrospectives could be considered a process forum under the end-to-end continual improvement elements within a SIAM ecosystem. A retrospective can be used to discuss what could be changed to make the team more productive next time.

It is common to use the following questions during retrospectives:

What went well?

What did not go so well?

What should we do differently next time?

Example of Agile-based SIAM structural elements

The Clearwater organization is a service provider of water, wastewater and drainage services. The company employs more than 5,000 people and manages an asset base of $25 billion.

Clearwater primarily provides the following services:

1. Main water supply schemes

2. Wastewater systems



Clearwater was running a significant project to add new services and improved service levels to its portfolio of mains water supply services to end customers, and wanted the IT department to provide a faster service to Clearwater as a business.

The IT division started using Agile principles to support the business. The initiative was focused on improving mains water supply services by enhancing the automated billing system. The IT division was having issues in maintaining consistency with this Agile practice across the many service providers. Because of these ongoing inconsistencies, the head of IT asked the service integration manager to introduce Agile retrospectives as a new process forum for the service providers of the billing system improvement initiative.

Initially, the service integrator felt that Agile rituals could not be considered as structural elements, and they should be managed outside of the SIAM ecosystem. The Scrum Master explained that the activities undertaken within the process forums and governance boards would be appropriate and requested a trial.

The service integration manager ran the trial, which demonstrated some success: service providers unfamiliar with Agile practices recognized its value and implemented it within their own service provider process activities.

Commercial matters should be excluded from operational process forums. The top layers of governance are the appropriate setting for discussing contracts. The operational level should be reserved for discussing operations and improvement activities.

Daily or weekly standups

Daily standups are an Agile technique, now commonly used outside the software development environment. Standups are where members of a team meet every day for a quick status update, ensuring that all the main parties are aware of current issues and major events planned for the day, and have the opportunity to raise any concerns. The idea is to stand up, to encourage keeping the meeting short (no more than 30 minutes). They are often held next to boards that provide visual supporting information. Where teams are not co-located, as is often the case in a SIAM ecosystem, standups may be held using collaboration tools.

Daily standups are associated with frameworks such as Scrum and Kanban. The two terms are often used interchangeably, but there are differences between the approaches.


Scrum is a framework used to organize work into small, manageable pieces that can be completed by a cross-functional team collaborating within a prescribed time period (called a sprint, generally two to four weeks long). The aim is to plan, organize, manage and optimize this process.


Kanban is also a tool used to organize work for the sake of efficiency. Like Scrum, Kanban encourages work to be broken down into manageable chunks, such as backlog (the to-do list) and work in progress (WIP). The work can be visualized as it progresses through the workflow using a Kanban board.

Where Scrum limits the amount of time allowed to complete an amount of work (by means of sprints), Kanban limits the amount of work allowed in any one condition: only so many tasks can be ongoing, only so many can be on the to-do list.
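The WIP-limit idea can be made concrete with a short sketch. The code below is illustrative only; the column names, task names and the WIP limit of two are assumptions, not taken from any particular Kanban tool:

```python
# Minimal sketch of a Kanban board that enforces work-in-progress (WIP) limits.
# Column names, task names and limits are illustrative assumptions.

class KanbanBoard:
    def __init__(self, wip_limits):
        # One list of task names per column; a limit of None means unlimited
        self.columns = {name: [] for name in wip_limits}
        self.wip_limits = wip_limits

    def add(self, column, task):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise ValueError(f"WIP limit reached for '{column}'")
        self.columns[column].append(task)

    def move(self, task, source, target):
        # Moving a task into a full column fails, which is the Kanban constraint
        self.columns[source].remove(task)
        self.add(target, task)

board = KanbanBoard({"backlog": None, "in progress": 2, "done": None})
board.add("backlog", "fix billing defect")
board.add("backlog", "patch server")
board.add("backlog", "update runbook")
board.move("fix billing defect", "backlog", "in progress")
board.move("patch server", "backlog", "in progress")
# Moving a third task into 'in progress' would now raise an error,
# because only two tasks may be in progress at once.
```

Where Scrum would instead bound the sprint duration, this sketch bounds the number of items in a given state, which is the distinction drawn above.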

Whichever approach is used, it is recommended that ground rules are established for standup meetings. Three valuable questions to be answered at a standup are:

What did you do yesterday?

What will you do today?

Are there any impediments in your way?

Daily, or regular standups (sometimes referred to as ‘daily prayer’ meetings) can be used to ensure that service teams have the right focus. Scrum or Kanban standups can be very useful techniques for working groups as they can be used to tackle a specific objective, task or issue.

Example of daily standups

A standup meeting can also be used within a specific service management process activity, such as problem or incident management.

At the Bank of Blue, a problem involving three different service providers was taking a long time to investigate, as the providers were in different geographical locations. A daily standup was held using a collaboration tool. This was led by the service integrator, to share progress and keep track of all investigation actions.

5.3 Ongoing performance management and improvement

Metrics help a business determine whether its goals are being achieved, but are effective only if they have been carefully chosen to represent progress towards objectives.

The model for monitoring and measuring service performance should have been initially considered during the definition of the governance framework in the Discovery & Strategy stage (see section 2.3.14 Monitoring and measuring service performance), and refined during the Plan & Build stage (see section 3.1.7 Performance management and reporting framework), before being implemented. In the Run & Improve stage, the performance of services and the service providers is actively measured.

The framework will continue to develop over the initial period (Run), as experience highlights areas where improvement is necessary or possible (Improve), as business objectives and supporting metrics evolve and knowledge is acquired. The service integrator should own this framework. Once in place, a periodic review undertaken by the service integrator will ensure that the correct elements are measured to assess the ongoing value of the SIAM model.

The performance of all services and processes should be measured and monitored against key performance indicators (KPIs) and defined service level targets. The measurements should be both qualitative and quantitative, and show both point-in-time performance and longer-term trends.

Although it is important that each service provider has measurable service targets to work towards, they need to form part of the end-to-end performance management and reporting framework. This will, in turn, provide evidence of demonstrable achievement of service objectives, business benefits and value.

If there is no clear definition and communication of value or end-to-end metrics, service providers may focus only on their own performance and not see the big picture. Commitment to contractual requirements for managing performance is defined in the Plan & Build stage, and the service integrator is responsible for engaging with service providers to ensure their obligations are met.

5.3.1 Key performance indicator mapping and service-level reporting

Measurements are used to create meaningful and understandable reports for various audiences across the SIAM ecosystem. They provide visibility of performance issues, and support trend analysis to provide early warning of possible failures or potential delivery issues.

In some cases, a service provider might identify that it is likely to miss a target in one area, possibly because it is focusing resources in another area following previous issues. It is a good idea to make the service integrator aware of this as soon as it is identified, as the service integrator could help the service provider to prioritize when there is a conflict between individual targets and end-to-end service targets.

Reports should be used not only to measure service achievement and value but also to identify opportunities for improvement and innovation. Routine service improvement activities should include review and management of actions arising from the information and review of report relevance. Within a SIAM ecosystem, reports also need to include feedback about how the service is perceived by consumers, referred to as qualitative reporting (see section Choosing the right measurements).

The complexity of a SIAM ecosystem can make the production of reports a considerable overhead. Although the reporting provides value at various levels, it should not be allowed to become all-consuming.

The following types of report are useful:

Service provider-focused reporting: this describes how each individual service provider is performing against its commercial service level targets and KPIs. It describes the overall commercial picture, highlighting where measures have been achieved and describing where failures have occurred and why they happened.

Service-focused reporting: this focuses on the performance of the services provided, in terms of service level agreement (SLA) performance and specific targets, for example, the processing of incidents, problems and changes.

Business/customer-focused reporting: this focuses on the performance of the SIAM ecosystem in terms of end-to-end services, and is perhaps the most useful in providing the customer organization with insight into the quality of services being provided, especially when expressed in business terms. The number of major incidents is one measure, but if you can translate that into lost production it has more meaning and can result in better support for corrective action.
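The translation of major incidents into lost production mentioned above is simple arithmetic, sketched below. All figures are invented for illustration; real values would come from the business:

```python
# Illustrative only: translating an incident count into business terms.
# The downtime and production-rate figures are invented assumptions.

def lost_production(major_incidents, avg_downtime_hours, units_per_hour):
    """Estimate the units of production lost to major incidents."""
    return major_incidents * avg_downtime_hours * units_per_hour

# 3 major incidents, each causing about 2 hours of downtime,
# in a business producing 500 units per hour
print(lost_production(3, 2.0, 500))  # 3000.0 units lost
```

Reporting ‘3,000 units of lost production’ rather than ‘3 major incidents’ expresses the same measure in terms the customer organization can act on.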

Example KPI mapping

A service integrator wanted to compare the performance of each service provider’s change management procedures. Each provider used a different internal procedure as part of the end-to-end change management process.

The change management forum was asked to create a set of consistent KPIs. The change managers from the service providers and the service integrator developed a simple set of KPIs that were easy for each provider to measure, but which gave a good indication of performance. At the end of each month they sent their KPI results to the service integrator, who then collated and shared them with all service providers. This drove competition and hence improvement.

The KPIs were:

Percentage of emergency changes

Percentage of late presented changes (the target was five days’ notice for non-emergencies)

Percentage of changes rejected as incomplete by the service integrator

Percentage of failed changes

An aggregate score was created, with varying weightings on each KPI.

All of these were trended over months and presented as part of the service integrator’s consolidated service report.
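The aggregate score described in this example can be sketched as a weighted sum. The weights and sample percentages below are assumptions for illustration; for each of the four KPIs, a lower percentage indicates better performance:

```python
# Sketch of a weighted aggregate change-management score.
# Weights and sample figures are illustrative assumptions; for every KPI
# in this set, a lower percentage indicates better performance.

WEIGHTS = {
    "emergency_changes_pct": 0.2,
    "late_presented_pct": 0.2,
    "rejected_incomplete_pct": 0.2,
    "failed_changes_pct": 0.4,   # failed changes weighted most heavily
}

def aggregate_score(kpis):
    """Weighted aggregate of a provider's monthly change KPIs (lower is better)."""
    return sum(WEIGHTS[name] * value for name, value in kpis.items())

provider_a = {
    "emergency_changes_pct": 5.0,
    "late_presented_pct": 10.0,
    "rejected_incomplete_pct": 2.0,
    "failed_changes_pct": 1.0,
}
print(round(aggregate_score(provider_a), 2))  # 3.8
```

Trending this single number month by month, per provider, gives the comparison that drove competition in the example.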

Shared KPIs are useful when it is necessary to compare the performance of different service providers in a specific process area. There needs to be a consistent definition of what is included in each KPI. They also need to be carefully designed to ensure that any comparison is valid.

Service-focused reporting

This requires metrics that focus on the entire service offering, requiring shared, dependent and related service levels that track the collaborative delivery of the services aligned to business outcomes. Metrics that offer no value to the business, or that cannot be understood in business terms, are likely to be ignored and risk damaging the relationship between the service integrator and the customer organization.

Measuring the performance of a service requires:

A focus on the value provided to the customer organization

Measuring the end-to-end service performance

Using (near) real-time data when reporting on service-focused KPIs can provide many benefits, as depicted in Figure 25.

Figure 25: Reporting using real-time information

In order to facilitate this approach, it is important to:

Implement any changes necessary to be able to capture the required service measurements within the agreed tools

Where possible, develop an automated reporting capability

Monitor service level compliance

Work with service providers to improve performance
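The ‘monitor service level compliance’ step in the list above lends itself to automation. A minimal sketch, using metric names and targets that are purely illustrative assumptions, might be:

```python
# Minimal sketch of automated service level compliance monitoring.
# Metric names and target values are illustrative assumptions.

def check_compliance(measurements, targets):
    """Return the SLA targets breached in a reporting period.

    Both arguments map a metric name to a percentage where higher is
    better (e.g. availability, incidents resolved within target).
    """
    return {
        metric: (measured, targets[metric])
        for metric, measured in measurements.items()
        if measured < targets[metric]
    }

month = {"availability_pct": 99.2, "incidents_in_target_pct": 93.0}
slas = {"availability_pct": 99.5, "incidents_in_target_pct": 90.0}
print(check_compliance(month, slas))  # {'availability_pct': (99.2, 99.5)}
```

Running a check like this against each reporting feed lets the service integrator flag breaches as they occur, rather than discovering them at the monthly report.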

It is important that the service integrator works with the customer organization’s contract management function to ensure any changes are incorporated and reflected in SLAs and contracts.

SLAs and KPIs should be a part of defined service commitments. It is often useful to run a pilot exercise as these measurements are being established or refined, so that there is clarity on expectations, confirmation that service targets are achievable, and that performance baselines can be established.

KPI aggregation

The role of KPI aggregation is to demonstrate both end-to-end service delivery and the effectiveness of the service integration layer. For a KPI reporting framework to be valuable, metrics and targets should be developed jointly with the service providers and support the customer organization’s business outcomes. A performance management and reporting forum can be useful in the development and maintenance of aligned service metrics and the sharing of measurement best practices.

Aggregated KPIs should:

Focus on results of services and demonstrate the impact of service delivery on business outcomes, not the output of service providers.

Provide both qualitative and quantitative measures to give a balanced view.

As much as possible, be objective, to minimize subjective interpretation.

Be specific, measurable, agreed, relevant and time bound (SMART).

Be appropriate, with the total number of KPIs limited across the organization (for example, approximately three per goal)

Have boundaries. If a service provider does not control all the parts of service performance, then that service provider cannot be held responsible for failure to meet targets.

KPI aggregation across a complex multi-service provider model may be challenging unless the concept of shared KPIs is well understood and communicated across all layers within the SIAM ecosystem (see section 3.1.7 Performance management and reporting framework).

Reporting tool

Ideally, reporting should be generated from the service toolset, which should act as the single or central authoritative source of service data from across the SIAM ecosystem. This ‘single source of truth’ reduces the need for manual data manipulation and provides a trusted basis for all reporting. Some of the data may need to come from each service provider’s tool, to be consolidated in the service integrator’s tool, depending on the tooling strategy (see section 3.1.9 Tooling strategy).

If the service integrator’s tool cannot meet the reporting requirements, it may be appropriate to supplement this with a specialist reporting tool with sophisticated data analytics capabilities. Although ‘simple’ spreadsheets are still used even in the largest of organizations, they are not as reliable and can be subject to errors. However, they are a good proving ground for new measures before complex reports are built in a tool.

Analytics capabilities will need to be built gradually to ensure they are sustainable, and to ensure there is strong governance in place to manage ongoing reporting requirements. Failure to do this could result in the service providers and service integrator meeting every reporting requirement presented to them, but failing to provide business value through the provision of insightful and useful reports (leading to the ‘watermelon effect’).

Reconciliation between different providers

There will be occasions where a service provider fails to meet a target because of circumstances outside its control, typically when the failure was because of another service provider. One useful approach to deal with this is the ‘excusing cause’.

In this approach, the affected service provider provides the service integrator with the full information for why the failure occurred, including details of which service provider was the cause, supporting metrics and timelines. The service integrator then considers the request to excuse the failure, often consulting the ‘causing’ service provider.

The service integrator has the following options:

Reject the request.

Accept the request, allow the affected service provider to resubmit its performance report with this failure removed, and ensure that the ‘causing’ service provider has taken it into account in its performance measure.

Accept the request, allow the affected service provider to resubmit its performance report with this failure removed, but leave the issue as ‘unresolved’ because it cannot prove that the ‘causing’ service provider was responsible for the issue.

This may require invocation of a dispute resolution process (see section Dispute resolution). The service integrator needs to have defined delegated authority to prevent decisions needing to be escalated to the customer organization’s retained capabilities.
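The three options open to the service integrator can be sketched as a small decision routine. This is a hypothetical illustration of the ‘excusing cause’ logic described above; the class and function names are assumptions, not part of any real tool or contract schedule.

```python
# Hypothetical sketch of the 'excusing cause' decision options available
# to the service integrator. All names are illustrative.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    REJECTED = "rejected"
    ACCEPTED = "accepted"                         # causing provider confirmed
    ACCEPTED_UNRESOLVED = "accepted_unresolved"   # cause could not be proven

@dataclass
class ExcusingCauseRequest:
    affected_provider: str
    alleged_cause: str        # the service provider said to have caused the failure
    evidence_supplied: bool   # supporting metrics and timelines provided

def adjudicate(request: ExcusingCauseRequest, cause_confirmed: bool) -> Decision:
    """Apply the three options available to the service integrator."""
    if not request.evidence_supplied:
        return Decision.REJECTED
    if cause_confirmed:
        # Affected provider may resubmit its report with the failure removed;
        # the causing provider's performance measure is adjusted accordingly.
        return Decision.ACCEPTED
    # Failure excused, but left 'unresolved' - may go to dispute resolution.
    return Decision.ACCEPTED_UNRESOLVED
```

The key design point is that an accepted request always has a counterpart action: either the causing provider’s measures are adjusted, or the issue is explicitly carried forward as unresolved.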

5.3.2 Differentiation between provider and integrator performance

The contract or agreement for the service integrator will usually have broad, aggregate-level targets to track end-to-end service performance. As these targets are not necessarily directly actionable metrics, they may not provide unfiltered visibility into service performance.

The performance measurement in place in the SIAM model needs to track:

Individual service provider performance

End-to-end service performance

The performance of the service integrator and how it is fulfilling its role

This requires different types of targets.

Measuring the service integrator

It is often challenging to measure the value of the service integrator. On the surface, it may seem simple: if the end-to-end service is running well, the service providers are performing and the customer organization is happy, then things must be going well.

Measuring the service integrator’s value is far more difficult than measuring an individual service provider, so a degree of innovation is needed. The measures need to be experiential and behavioral, for example:

Analysis of governance activities undertaken, for example:

o Reduction in service credits

o Alignment of service performance to the customer organization’s strategic objectives

o Adherence to legislative or regulatory obligations

Effectiveness of the structural elements, for example:

o Percentage attendance at governance meetings of all appropriate service providers

o Reduction in disputes between service providers

o Reduction in disputes with the service integrator

o Reduction in escalations to the customer organization

o Effectiveness of governance meetings in planning and risk reduction terms, including the number of risks identified and how many require mitigation or action plans

Process maturity and integration measures, for example:

o Achievement of end-to-end service targets

o Percentage of incidents/requests allocated to the appropriate service provider

o Reduction in the number of incidents passed between service providers

Ability to coordinate the demand, scheduling and delivery cycles of the customer organization, and feed this into capacity and availability plans

Collaboration – subjective measure of the performance of the service integrator in driving collaboration across the ecosystem

Improvement – driving through improvements, running successful service improvement plans and coordinating actions across service providers within the ecosystem, for example:

o Increasing usage of the shared knowledge management repository

o Percentage of suppliers involved in collaborative improvement initiatives

o Percentage of improvement initiatives that have achieved a quantified positive business impact

Innovation – demonstrable evidence of genuine service innovation, as opposed to improvement

Dealing with incomplete or non-standard data

In any SIAM ecosystem there may be some service providers that are part of the SIAM model but have not agreed to tailor their performance reports to align with the SIAM model reporting standards. The impact of this can include:

Inability to obtain reports

Incomplete measurements and data

Irregular reports

Misaligned reporting periods and deadlines

Different calculation methods

This situation typically occurs when the following service provider types are part of the SIAM ecosystem:

Commodity or standardized service providers that provide the same service to all of their customers with minimal customization, and will therefore not tailor any reporting for a specific contract

Large service providers that provide the same reports to all of their customers

Specialist service providers, where the service integrator’s standard requirements are not aligned with the characteristics of the service delivered

Small service providers, where the cost of meeting the reporting requirements is disproportionally high when compared to the value of the contract

For reporting purposes, non-compliant service providers leave the service integrator with one of the following choices:

Exclude the measurements from end-to-end reports. This can lead to an inaccurate picture of service performance where these services are an essential part of end-to-end delivery, such as hosting services.

Take the data from the service provider and do the calculations again. This can be a challenge if the service provider uses different reporting intervals or does not provide the base data.

Take its own measurements of the performance and availability of the services. This is likely to require specialized tooling to capture the data, but will provide the most accurate information.

Imposing service credits

Service credits are pre-specified financial compensation that the customer organization may become entitled to when a service level or target is not achieved (see section Service credits).

The challenge within a SIAM ecosystem arises in determining when it is fair to apply these, since many service providers are likely to be contributing to the end-to-end service. The service integrator, acting as the customer organization’s agent and provided with delegated authority, often has to consider the appropriateness of levying service credits.

When applying service credits, it is important for the service integrator to approach the calculation by considering how they might affect the other remedies that would otherwise be available under the contract or common law. Unless the contract is carefully drafted, an approach that attempts to impose service credits on a 'no-service-no-pay' basis can lead to situations where the pre-specified service credits become the exclusive remedy for serious performance failures.

The application of service credits can only be done fairly within an effective measurement framework. This is constructed in the Plan & Build stage (see section 3.1.7 Performance management and reporting framework). In practice, when disputes over performance arise, the monitoring of service levels may be the most complete record of performance under the contract. The service integrator may wish to show that there have been breaches and, indeed, that the failure to meet the service levels is a symptom of bigger failings. However, the service provider may also wish to rely on the service-level reporting to show that it did everything that the parties regarded as being important.

This is where the complexities lie in a SIAM ecosystem. Often, service providers are requested to forgo meeting their commercial targets to achieve a benefit for the end-to-end service. In this case, the service integrator must apply credits considerately, in a fair and equitable manner. If this is not done well, relationships will soon break down, making collaborative working unlikely.
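A simple arithmetic sketch shows how excused failures interact with a service credit calculation. The credit scheme here (a fixed percentage of the monthly charge per full percentage point of shortfall, subject to a cap) is an assumption for illustration only; real schemes are defined in each contract.

```python
# Hypothetical sketch: applying a pre-specified service credit when a service
# level is missed, after removing any excused failures. The scheme (2% of the
# monthly charge per percentage point of shortfall, capped at 10%) is an
# assumption for illustration only.

def service_credit(target: float, achieved: float, excused: float,
                   monthly_charge: float,
                   credit_per_point: float = 0.02,
                   cap: float = 0.10) -> float:
    """Return the credit due; excused failures raise the achieved figure."""
    adjusted = min(1.0, achieved + excused)
    shortfall_points = max(0.0, (target - adjusted) * 100)
    credit_rate = min(cap, shortfall_points * credit_per_point)
    return round(monthly_charge * credit_rate, 2)

# 99% target, 97% achieved, 1% of the failure excused as caused by another
# service provider:
print(service_credit(0.99, 0.97, 0.01, 10_000.0))
```

Even a toy model like this makes the fairness point visible: whether the excused portion is credited back to the affected provider, and charged against the causing provider, must be agreed in advance rather than argued case by case.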

5.3.3 Evolving ways of working

Ways of working are established in the Plan & Build stage, but will need to evolve in the Run & Improve stage. The role of the service integrator is to create an appropriate environment through:

Information sharing in an open manner

Transparent decision making

‘Win-win’, ‘can do’ attitude

Impartial and equitable imposition of service credits

Recognizing that if an approach benefits all parties, it is most likely to succeed in:

o Promoting trust and reliance on each other

o Supporting collaborative working

o Discouraging protective behavior, while recognizing commercial realities

o Sharing of concerns and ideas for issue resolution so that no party feels unduly threatened or compromised

Evolving through operational management

The operational management role of the service integrator in a SIAM ecosystem is multi-faceted and typically involves several elements.

Real-time business as usual (BAU) management deals with the monitoring of incident queues, escalations regarding compromised or breached service levels and coordination of the resolution of major incidents and problems. The service integrator must ensure BAU management is successful, without doing the BAU activities. The leadership abilities and cross-functional skills of individuals involved in the ecosystem operation are key to success. Understanding the effectiveness of BAU management, or where it needs to be more effective, can focus attention on evolving improved capability where it is most critical.

Periodic meetings are necessary to review performance and provide a mechanism to ensure governance within the ecosystem. These reviews support vital collaboration activities. Reviews are an opportunity to demonstrate both the capability of the service integrator’s coordination and management role, and the service providers’ service delivery capabilities. Examples include monthly service provider reviews, cross-service provider reviews and single-provider service reviews.

Service review meetings may highlight where some service providers are performing better than others and may have better ways of working to share, or conversely highlight aspects of poor performance, where service providers need to find better ways to deliver service in order to achieve the required objectives (see section Service provider review).

Process forums provide excellent opportunities to evaluate the overall effectiveness of the processes in operation within the SIAM ecosystem. Process forums allow the service integrator to identify operational challenges and drive continual improvement. Examples include a continual service improvement and innovation forum, or a quality management forum that facilitates discussion across teams and at all levels.

Similarly, the process forums themselves should be evaluated for their effectiveness and alignment to the needs of the SIAM ecosystem. Consideration should be given to the need, scope, objectives, achievements and stakeholders for these structural elements. Terms of reference should be reviewed, amended and approved by the appropriate senior staff and circulated to all participating parties to ensure ongoing agreement and value.

Collaboration remains a key attribute required to Run & Improve a SIAM ecosystem. Areas that may be investigated to assess the success of collaboration include:

Participation in cross-organization problem solving working groups

Membership and active participation in the SIAM structural elements, such as cross-organization process forums and meetings

Consistent and prompt payment

Clear and meaningful reports

Decisive action promptly delivered when promised


Delivery on obligations, supported by evidence

Effectiveness of relationships

Sharing of knowledge and experience for the benefit of the organization

Openness and even-handedness in addressing issues, clarity on what both parties need to do for outcomes to be effective

Collaboration in the family

Managing service providers in a SIAM ecosystem has similarities to the dynamics within a family: parents (the service integrator) need to be firm with their children (the service providers), but not authoritarian, instead guiding and directing them (and being fair to all children/service providers equally).

Trust here is not solely based on age and position (‘do what I say’/‘because I said so’), but also on a long-term relationship that has proven beneficial.

Only when working together (‘give and take’) will optimal benefits for all family members be achieved.

Driving improvement and innovation

To drive improvement and innovation, the service integrator should define methods to help stakeholders within all SIAM layers to work productively and collaboratively. The challenge in any environment is the people. People can be both the biggest contributor to and resistor of change.

These elements will provide focus to encourage improvement and innovation:

Focus on individuals

Focus on the team

Customer organization and retained capabilities


Innovation and improvement outcomes

Psychological climate

Physical environment

Organizational culture

Economic climate/market conditions

Geopolitical culture

Focus on individuals – the basic building block of getting things done is an individual. Organizations, departments, divisions, groups, teams, etc. are all units built from individuals. Focus on strengthening the primary building block to start pushing innovation activities forward.

Focus on the team – individuals make things happen, but in most cases, they cannot do it all by themselves. Innovation requires multiple skill sets, whether it is invention, development, funding, marketing, patenting, operations, etc. Those skill sets almost never exist in one person, so it requires several people to move it forward.

Different service providers have different skills. Focus on improving effective and collaborative team dynamics to keep the innovation engine running smoothly. Involving all layers in the creative and innovation process increases the probability that the innovation will be successful.

Customer organization and retained capabilities – even individuals in successful teams can become resistant to change. The successful innovation team of yesterday becomes the ‘this is the way we’ve always done it’ team of tomorrow. The customer organization needs to give thought to creating and sustaining enterprise-wide procedures, policies, metrics, recognition and executive level accountability to keep innovation running.

Processes – establish how to improve the processes or methods being used to drive innovation, but do so across all three levels described below:

The individual level: for example, processes to enhance self-awareness, emotional intelligence and cognitive ability

The group level: for example, using a structured brainstorming, ideation or creative process to support teams in creating innovative solutions

The enterprise level: for example, the organizational system for idea management

The structural elements, particularly process forums, provide an ideal opportunity for this type of collaboration.

Learning to trust

Immediately after a SIAM implementation, a service integrator wanted to approve every change made by every service provider. This was because of two things: a lack of trust, and the service integrator’s change management staff wanting to use their experience as operational change managers.

In view of the number of service providers and high volume of changes, a change advisory board was being held twice a day, every day of the week. This continued for ten months.

Eventually, the service integrator introduced an approach where low-risk, repeatable changes that were local to a service provider could be approved by that provider. This immediately reduced the workload, enabling the board to meet twice a week.

Over time, as trust improved, the service providers were encouraged to approve their own changes, under a change management policy developed by all parties.

Several years later, the integrated change advisory board now only meets by exception.
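The approval policy this case study arrives at can be sketched as a simple routing rule: low-risk, repeatable changes local to one provider are approved by that provider, and everything else still goes to the integrated change advisory board. This is an illustrative sketch, not the policy from the case study; all names and risk categories are assumptions.

```python
# Hypothetical sketch of a delegated change approval policy: low-risk,
# repeatable, single-provider changes are approved locally; everything
# else is routed to the integrated change advisory board.
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    provider: str
    risk: str               # "low", "medium" or "high" (illustrative scale)
    repeatable: bool        # a previously executed, well-understood change
    cross_provider: bool    # touches more than one provider's service

def approval_route(change: ChangeRequest) -> str:
    if change.risk == "low" and change.repeatable and not change.cross_provider:
        return "provider"   # approved locally under the agreed change policy
    return "change_advisory_board"
```

The routing criteria themselves belong in the change management policy developed by all parties, so that delegation expands as trust grows rather than by ad-hoc exception.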

Innovation and improvement outcomes – innovation and improvement are two different but related topics. Improvement is gradual and within scope (doing the same, better), whereas innovation is a step change that could affect many parts of the business (doing the same, differently, or doing different things).

There are various perspectives of the improvement and innovation processes. To focus only on a product or outcome is to overlook services, business models, alliances, processes, channels and more. The service integrator should consider the broader picture of improvement and innovation opportunities, driven by reporting and feedback loops built into the SIAM model.

Psychological climate – reporting alone should not drive the innovation and improvement efforts. There is much to be learned about the quality of services within a SIAM ecosystem by listening to stories. User experience stories, service provider stories and customer organization stories all help provide focus on where improvements are needed.

What is working?

What is not working?

What is acceptable?

What has changed in the industry?

What is our scope?

The SIAM ecosystem will need to evolve along with changing business and customer requirements. To drive innovation, the right amount of personal freedom (within boundaries) should be offered to all the layers of the ecosystem, offering the capacity and scope to explore new areas. Supporting an effective psychological climate is a requirement for sustained innovative output.

Physical environment – this is often an issue in SIAM environments because of the dispersed nature of the parties. There may be physical challenges in terms of separate commercial organizations operating over various geographies and time zones (see section 3.2.3 Virtual and cross-functional teams).

This is a challenge for the service integrator to overcome. It should consider:

Are stakeholders at the various layers able to get together easily to communicate and work?

Do they understand their scope and boundaries?

Are they able to make time for these activities?

Are decision-making accountabilities clearly defined?

Is there an appropriate space to review document prototypes/results/data?

Different people have a different concept of the ideal environment. It is imperative that the service integrator works with key stakeholders to ensure that the appropriate environments for enabling collaboration meet all parties’ needs. This often requires alternative approaches, various forums and methods. Engaging all parties in defining the environments will enhance the likelihood of success.

Organizational culture – in a SIAM ecosystem there will be more than one culture evident. When onboarding service providers into the SIAM model, it is important to consider cultural alignment. This may not always be possible. Unique providers can have a unique culture and cannot (and should not) always be brought ‘into line’.

Developing an understanding of the different cultures allows the service integrator to understand how best to engage with them. To gain clarity about the cultures evident within the layers, it is a good idea to look at the structural elements to understand the stories that people tell about success and failure.

How do people discover and share how things really get done?

Which practices are in force to work around established processes if they are not fit for purpose?

Which processes or activities do people avoid?

Which service providers are deemed as easy to work with, or not?

What organizational leaders say is often drowned out by what people know is really going on. It is not enough to just say innovation is important! The customer organization must provide the framework, scope and boundaries for it during the Plan & Build stage, so that the efforts can happen in the Run & Improve stage. Organizational policies, management behavior, things that are measured and executive messaging must all align to create the stories that explain the desired culture.

Economic climate/market conditions – an innovative culture is easiest to maintain when market conditions mean there is not too much fear, nor too much confidence. These are rare moments in the business cycle.

In a fast-evolving ecosystem where there is significant change, innovation can and will fade away during periods of disruption, such as service provider retirement, organizational cutbacks and restructuring activities. Service providers will usually ‘play it safe’ and stop making innovation and improvement suggestions when they are aware that sales are down, or that the economy is in decline. Similarly, if the customer organization announces market dominance or impressive financial figures, service providers may become complacent.

The customer organization should set the tone by setting resources aside to support innovation in both good times and bad. Paradoxically, many organizations only get radically innovative when they are in distress situations: when there is no other choice but to change things.

Geopolitical culture – this is a significant consideration in a SIAM ecosystem, especially in one that operates over many regions. Local culture elements can make a difference, such as:

Where people were born or live

The language they speak

Where they work

How they were educated

Different cultures communicate differently, see the world differently, perceive different threats and find value in different things. Every culture has strengths and weaknesses. The service integrator must consider which cultural strengths can be exploited, and which cultural impediments must be overcome. Paying attention to the habits and needs of the people in all layers of the SIAM ecosystem will support innovation.

5.3.4 Ongoing service provider management

Operational management will be successful only if supported with the ongoing ability to measure and manage each service provider. The service integrator should maintain a detailed contact matrix for each service provider, defining its individual responsibilities and accountabilities for delivery.

The service integrator should perform an ongoing evaluation of the role of each service provider. It is useful in this instance to use some form of supplier and contract management information system (SCMIS). Ideally, the SCMIS will be an integrated element of a more comprehensive knowledge management system.

The SCMIS should be used by the service integrator to capture ongoing records about all service providers. As well as information about their contract details, it should include:

Details of the type of service(s) or product(s) provided

Service relationships with other service providers (dependencies)

Importance of the service provider’s role in service delivery


Cost information (where available and appropriate)

Information within the SCMIS will provide a complete set of reference information for any service level, service measurement and service provider relationship management procedures and activities undertaken as part of the service integrator role.
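The record contents listed above could be captured in a structure like the following. This is an illustrative sketch only; the field names are assumptions, and a real SCMIS would typically sit inside a wider knowledge management system rather than a standalone script.

```python
# Hypothetical sketch of an SCMIS record holding the reference information
# listed above. All field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScmisRecord:
    provider: str
    contract_reference: str
    services_provided: list[str]
    dependencies: list[str]               # relationships with other providers
    delivery_importance: str              # e.g. "strategic", "tactical", "commodity"
    annual_cost: Optional[float] = None   # where available and appropriate

record = ScmisRecord(
    provider="ExampleHost Ltd",           # illustrative provider
    contract_reference="C-1001",
    services_provided=["hosting"],
    dependencies=["network provider"],
    delivery_importance="strategic",
)
print(record.delivery_importance)
```

Keeping the importance classification and dependencies in the same record is what allows the service integrator to set the monitoring and review schedule per provider, as described below.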

An important ongoing element of the service integrator’s role is to provide information to the customer organization about the performance and value of the various service providers within the SIAM ecosystem. Through measurement and evaluation, the service integrator should identify the position and relevance of each service provider. This information will also help the service integrator to establish the appropriate level of operational monitoring and review the schedule required.

In line with the agreed performance management framework, the service integrator should review the delivery obligations and service performance of service providers in preparation for scheduled meetings.

Any instances of underperformance or conversely of exceptional additional value should be included in the regular service reporting and fed back to the governance boards and customer organization. Underperformance by a service provider should be dealt with through appropriate corrective actions, including launching formal service improvement plans.

On a more granular level, daily standups (see section 5.2 Process forums and working groups) could be convened by the service integrator to consider operational concerns such as support backlogs, major incidents, escalations and planning for the day.

The major success factors for performance and operational management are:

Clearly defined roles and responsibilities

Constant flow of communication within the ecosystem

Well documented measurement framework

Efficient measurement mechanisms

Consistent monitoring and reviews

Course correction mechanisms

Ability to identify and recognize exceptions

Ability to reward exceptional value addition

Defined ‘ways of working’ that are well understood by everyone

Service provider review

Service review meetings provide an important role in assuring the service(s) delivered, as well as enabling continual service improvement and refinement to take place and be tracked formally.

Adopt a layered approach when establishing service review meetings:

Monthly (or fortnightly) operations meeting – to consider:

o Incident status discussion based on monthly reports

o Areas requiring focus

o Major escalations

o Customer feedback

o Action items from previous operations meetings

Quarterly (or monthly) service provider meeting – to consider:

o Service level agreement (SLA) target performance

o Service improvement plan

o Improvement initiatives

o Corrective plans for issues

o Action items from previous meetings

o Challenges faced

o Feedback from the customer and/or from other ecosystem partners

Annual contractual reviews – to consider:

o Engagement with the customer organization as required

o Consolidated SIAM scorecard (see section 2.8.2 Measurement practices)

o Service/service scope reviews

o Service level reviews, aligned to the annual review of the end-to-end key performance indicators (KPIs)/SLAs

o Strategic opportunities

o Roadmap ahead, including any necessary changes

o Security review to ensure there are no specific security risks within the ecosystem

o Regulatory and compliance obligations

Note that timeframes and frequencies are indicative only and may be increased or decreased depending on specific circumstances and whether the service provider is classed as strategic, tactical, operational or commodity (see section Governance boards).

Minutes should be taken at all meetings and made available to relevant parties in an accessible location, with action trackers to monitor activities through to completion. Such documents can be valuable inputs to contract renewal reviews to establish eligibility for renewal or changes necessary to the service levels or commercials.

Adding and removing service providers

Because of the nature of a SIAM ecosystem, with multiple service providers, potentially shorter contract times and agile ‘loose coupling’, there will often be a need for the service integrator (and the customer organization’s retained capabilities) to add and/or remove a service provider (see section Onboarding and offboarding of service providers and 3.3.3 Transition planning).

Reasons to terminate a contract with a service provider include:

Consistently failing to provide services and service levels that meet business requirements

Services no longer align to business needs

Finding a more cost-effective, better or more reliable service provider

Analysis of performance or demand patterns reveals changes in volumes, transactions or service levels and requirements that the incumbent is unable to satisfy (an inability to scale the service)

Natural contract end date occurs and there is no desire to renew

Cultural misalignment

Fraudulent actions

Service provider ceases to trade

In all instances, it is important to check the contract first to see whether there are penalties for terminating early, or notice periods previously agreed. Exit clauses should have been drawn up with the initial contract, and the contract management process should be aware of these (see section End of contract). In instances where termination is required at speed, penalties may have to be accepted by the customer organization.

As well as the financial barriers to changing or removing providers, when there is a switch to a new service provider with different processes or systems, there are likely to be operational challenges. This may lead to disruption from new ways of working, processes, tools and loss of knowledge.

The service integrator, following the defined exit agreements, should ensure that all appropriate information and artefacts are obtained from the outgoing service provider. If possible, the customer organization should negotiate so that the new service provider takes responsibility for handling the changeover process with the incumbent.

Considerations during offboarding or contract change

Disengaging a service provider can be a complicated and risky business, especially if its role has been a strategic one and the services it delivers are deemed to be vital to the organization. The SIAM governance framework provides a mechanism to consider the associated risks and ensure, once identified and understood, plans are created for their mitigation or management. Guidelines should have been defined at the Plan & Build stage to deal with this activity.

The customer organization will expect continued smooth running of operations during on or offboarding, using the defined procedures created for this purpose within the Plan & Build stage. Within the Implement stage, this guidance regarding on and offboarding is used. This provides for repeatable processes with detailed quality gates to govern any service retirements and/or decommissioning activity. Significant lessons learned can be gained from these activities, which should be applied to these processes in readiness for future reuse.

The objective of ‘quality gate’ based transition planning and support is to ensure that all required quality and performance parameters are met, including:

Relevant intellectual property (IP), policy, process and procedural documentation is retained as appropriate

Knowledge transfer is undertaken

Service continuity is retained (where appropriate)

New operational teams are engaged and trained

Customer acceptance is obtained

It is important to avoid alienating a service provider that is still required, for example, if it is providing other services within the ecosystem or it is likely it may return in the future.

Non-conformant service providers

In instances where service providers fail to fulfil service agreements or obligations, depending on the nature of the failure, the service integrator should take the following actions:

Undertake a full review to establish the cause and point of failure.

Quantify the impact of the issue on the customer organization’s business operations.

Consider, and if appropriate and/or possible, quantify the impact of the issue on other service providers.

Convene with appropriate stakeholders through the agreed performance management process. This may be via a board.

Consider the application of any defined contractual remedies such as service credits, versus the application of non-contractual remedies such as improvement plans. In instances where financial penalties are applied, consider consulting with contract management.

It is important to apply contractual remedies, such as service credits, consistently across all service providers to avoid any allegations of favoring one service provider over another. It is also important to apply them consistently throughout the life of a contract, as a decision to apply them later in the contract term may cause challenges from the service provider.

Whether or not contractual remedies are applied, service failures should also be addressed using measures including review meetings and performance improvement plans. During any meetings, be sure to document expectations, the success criteria and how achievement will be measured. This should include planning for how to make the improvements, the agreed communication methods and timescales. In a worst-case scenario, this documentation could be used as a record should any contract breach and subsequent legal proceedings arise.

Ensure that the service provider’s senior management is aware of the failure to meet expectations. Request that they take ownership of the agreed remedial actions within their own organization, providing support as required.

Hold regular, planned progress meetings with the service provider to assess progress of improvement activities, discuss any issues and offer support where required. It is important that these meetings are seen as an opportunity to work together on resolution, otherwise they can damage the relationships between the service integrator and the service provider.

Although the service integrator is usually responsible for managing the service agreements with the service provider, it may be necessary, for example when financial remedies are applicable, to involve the customer organization in discussions with the service provider. This is also the case when an agreement cannot be reached or when a service provider is regularly failing agreed targets, and improvement actions and/or penalties have been unsuccessful.

Dispute resolution

Dispute resolution in a SIAM ecosystem needs to be multi-level, allowing disputes to be resolved at the lowest level of escalation possible. Since the customer organization retains ownership of contracts, mechanisms need to be defined to allow the service integrator to manage most service provider and contract related issues, unless they become serious enough for the customer organization’s retained capabilities to step in.

To this end, it is important to draw a distinction between performance management and relationship management, which are a service integrator’s concern, and contract management, which falls into the realm of the customer organization’s retained capabilities.

Dispute escalations

Often, the complainant will go straight to the top of the customer organization and the issues get blown out of proportion, whereas peer negotiation might work better. The overall culture within the SIAM model can help to reinforce appropriate escalation.

There are always disputes in contract management, but the more clarity about accountability embedded in the contracts, the easier it is to resolve them. To achieve this clarity, the strategy should begin with the end in mind, and the contracts should support the strategy (see section Dispute management).

It is possible for contract management to become adversarial and, in extreme cases, lead to back-and-forth reprisals. This is not only because of a difference of opinion, but also because of different perspectives between the parties. Service managers are often concerned by service quality factors, whereas commercial and financial managers may be more interested in who pays what, to whom.

Within a SIAM ecosystem, contract management should create an environment of collaboration, seeking win-win scenarios. The structure of the contract and the supporting schedules need to facilitate the ability for changes to the service arrangements, rather than a multi-year lock-in with no flexibility.

When engaging in dispute resolution, attention must be paid to:

The law – normally, the governing jurisdiction of the contract defines which law applies

The contract, including dispute resolution clauses

Any precedent which may affect interpretation of clauses

Using collaboration agreements

The division of services between multiple service providers creates the requirement for service integration. The obligations on service providers to participate in coordinated end-to-end delivery may be collected in a single schedule or distributed across the contract. Ideally, these obligations are standardized for efficiency.

Where they are standardized, maintain a single document under change control. These types of documents have been called various names, such as collaboration agreements, cross-functional statements of work, engagement models or operations manuals (see section Collaboration agreements).

Whatever they are called, these agreements must:

Define the roles of customer organization’s retained capabilities, the service integrator and service providers

Define the methods for communication and collaboration

Define how to escalate and resolve operational issues

Define how collaboration will be measured, along with incentives to increase collaboration

Be easy to understand

Contain as much as is necessary, and no more

Simple collaboration agreements

A large organization commissioned a SIAM ecosystem. It decided it needed a collaboration agreement, so it engaged commercial lawyers to create one, at significant cost.

The collaboration agreement that was produced contained more pages than the main contract, and was written using mostly legal language. This scared away many potential bidders.

Once services commenced, the agreement was never used, as the selected service providers understood the required outcomes and wanted to ‘do the right thing’.

The document was worthless. A collaboration agreement should be a living and breathing document that provides reference to support clarity of obligation.

Although it can be difficult to strictly enforce collaboration, having a collaboration agreement helps to set the tone and define the expectations around working arrangements and engagement between the service providers and the service integrator, and with the customer organization’s retained capabilities.

The collaboration agreement sets the baseline for the relationships with the service providers, but should not preclude any additional behaviors that could enhance the delivery of services. Otherwise, there is a risk that some service providers may work strictly to the agreement and go no further, which may also restrict service improvements.

Collaboration agreements provide:

Clarity regarding the overall service outcomes and individual outputs that are sought from the service providers

Easily understood definitions of which party is responsible for what and the mechanism that is best placed to achieve these

Clarity regarding where standardization is to be applied (for example, selection of master tooling set) and where discretion is available (for example, each service provider's internal tooling is acceptable if it integrates and exchanges with the master)

Service agreement schedules that will consider both end-to-end and individual service provider accountabilities

Links to any governing artefacts, including integration and interface requirements

Success will depend largely on the support of the service providers in accepting the conditions defined within the collaboration agreements. The service integrator’s role is to assure operational conformity and take action over issues that affect collaborative working.

Trust-based supplier management

In SIAM, the best outcomes are achieved when there is trust between the customer organization, the service integrator and the service providers. Trust-based supplier management is an approach that recognizes this, varying the amount of governance performed by the service integrator over service providers, depending on the level of trust in each provider. This helps to further build trust, support cooperation and allows the service integrator to allocate its management time in the most effective way.

Many organizations have historically managed suppliers using an approach that relied solely on contracts. This can lead to caution and mistrust when designing a supplier and contract management strategy for SIAM, resulting in very detailed contracts with excessive reporting requirements and penalty clauses. Such organizations can find it difficult to transition to a SIAM model that requires collaboration, cooperation and trust in order to successfully manage suppliers and contracts.

Trust-based supplier management can be used instead of, or in conjunction with, these more traditional supplier management techniques. The choice of approach will depend on the nature of the contracts and the maturity of the relationship between the service integrator and each service provider.

Trust in individuals or trust in organizations?

Although there is often talk about the need to build trust between organizations or teams, trust actually evolves between individuals. Trust is people-based rather than contract-based. Trust can exist between individuals at all levels in organizations. Trust earned at C-level (chief officer level) may not always translate to staff at an operational level.

Trust and goodwill are important foundations for collaboration across the SIAM ecosystem and successful interactions between all layers in a SIAM model. Therefore, trust should also be considered in the design of the wider SIAM model, including the process model, collaboration model, tooling strategy, ongoing improvement framework, and the performance management and reporting framework. This provides surety and consistency across all stakeholders (see also the SIAM Foundation BoK on the challenge of the level of control).

The design of the detailed SIAM model in the Plan & Build stage should consider where trust is required for successful operation. This should include the level of trust required, how to build and maintain that level, and the responsibilities for making it happen. The scope of the design should include the SIAM practices, especially the People and Process practices.

The levels of trust that exist within a SIAM ecosystem will evolve and change over time. Figure 26 shows how trust can increase over time and is also affected by specific events.

Figure 26: Levels of trust impacted by time and events

Example situations where the level of trust can be affected negatively are:

When a new service provider is introduced to the SIAM model, because its performance is not yet proven

If service providers oversell their capabilities to win a contract and cannot deliver what was promised

Following a major incident

When key personnel change and new relationships need to be established

When service scope is not clear and service providers feel they are punished for things that are outside of their control

When expectations and requirements change without communication, and contractual terms and metrics no longer appropriately reflect the customer organization’s needs

When pressure for end-to-end efficiency, to the benefit of the customer, places a greater demand upon a service provider than was contracted

Customer organizations that have decided to use an externally sourced service integrator should try to ensure that their expectations on trust align with those of the service integrator. This will avoid issues where the customer disagrees with the service integrator’s supplier management approach. As the service integrator is acting on behalf of the customer, it is important that it understands and represents the customer’s underlying attitudes and behaviors towards trust.

Missing trust

Two months after the implementation of a new SIAM model, a major incident was caused by a service provider’s engineer shutting down the wrong server in a data center.

The service integrator did not trust the service provider to prevent it from happening again, and insisted on new access control procedures. These required the integrator to approve every request to enter the data center.

The more controlled approach improved the level of trust between the parties. As a result, access control levels were reconsidered and ways of working evolved, so that the controls were gradually relinquished.

The trust management cycle

Trust management is an ongoing journey. Even in tried-and-tested relationships, influencing factors will have a direct impact on the trust relationship at a point in time.

Consider the development of trusted relationships as a journey with many stages:

Define trust. What is it, and how is it measured? Keep it simple; for example, trust might be defined as confidence that you can depend on a service provider or team. This can be displayed by having multi-provider teams share tasks and feel comfortable asking for assistance from each other.

Understand barriers to achieving trust. Move from service provider by service provider-based targets to an end-to-end measurement. This removes the feeling of competition and allows trust to be established (see section Service-focused reporting).

Build the foundations for a trusted relationship to occur. Building trust takes time. It requires an awareness of the service model dynamics and the opportunity to practice the process of trusting others. Team building can be informal, such as social events or as part of the interactions through the structural elements. Evolve activities through process improvements or value-stream optimization, allowing for the development of trust and reinforcing the principles you wish to establish.

Support, maintain and modify the trust environment. As the level of trust develops, there will be situations that impact it. Dealing with these situations immediately and addressing any issues in relation to trust will allow the environment to be maintained. The measurement of trust is a point-in-time indicator. Once you have defined what it looks like, check it often, measure and rate it, and make the ratings visible. Ensure that all layers within the SIAM model understand it, and that trust may increase or decrease based on circumstance. It is not a blame game, but rather a recognition that all relationships, even those that are long established, will change over time and circumstance.

Once a good understanding is gained about what management practices need to change for high-trust relationships to grow, a better understanding of the conditions that make trust-based SIAM practices necessary should be developed. Rigidly formalized methods of cooperation are replaced by new principles of agile cooperation, to achieve high-performing, cohesive teams that cross organizational boundaries.

Examples of SIAM practices to build trust across the ecosystem are:

Create a baseline ‘trust level’ for each service provider and identify where relationships or culture need improvement.

Support collaboration using face-to-face interaction as well as collaboration technology and tools.

Be inclusive of everyone and consider the social elements of the workplace, such as summer parties, project kick-offs and team building across organizations.

Embrace group dynamics and action-based learning. Trust builds over time, and the acceptance of a decision grows when the service providers are not only informed but can also discuss their recommendations and ask for justifications and explanations.

The baseline ‘trust level’ provides the service integrator with a clear, mapped understanding of the levels of trust across the ecosystem. For example:

Low-trust relationships: intensive management is required, often including more detailed reporting and regular meetings. The focus is to confirm the service provider can get the job done to the level agreed.

Medium-trust relationships: management becomes less intensive. The service provider has more autonomy and can focus on improving how the job is done.

High-trust relationships: the focus moves from management and oversight to growing and maintaining a positive and healthy relationship, with a focus on shared goals and innovation.
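The three trust bands above lend themselves to a simple mapping from a recorded trust baseline to a governance intensity. The following Python sketch is illustrative only; the scoring scale, thresholds and provider names are assumptions, not part of any SIAM standard.

```python
# Illustrative sketch: a per-provider trust baseline mapped onto the
# low/medium/high governance bands described above. Scale and thresholds
# are invented for the example.

from dataclasses import dataclass

@dataclass
class ProviderTrust:
    name: str
    score: int  # e.g. 0-100, from periodic relationship reviews

def governance_level(score: int) -> str:
    """Map a trust score onto a governance intensity band."""
    if score < 40:
        return "low-trust: intensive management, detailed reporting, regular meetings"
    if score < 75:
        return "medium-trust: lighter management, provider autonomy on 'how'"
    return "high-trust: relationship focus, shared goals and innovation"

providers = [ProviderTrust("NetworkCo", 30), ProviderTrust("CloudCo", 80)]
for p in providers:
    print(f"{p.name}: {governance_level(p.score)}")
```

Making such a mapping visible supports the recommendation above to measure and rate trust often, and to let all layers of the model see how the rating affects the management approach.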

Developing a sense of unity and cohesiveness creates the trust in which impartiality and openness can take hold. This is sometimes referred to as a ‘one team’ attitude that encompasses all actors in the SIAM ecosystem.

Often, trust is expressed by symbols of identity, for example, logos or specific expressions that have a unique meaning to group members only. Each SIAM ecosystem will develop its own symbols, rituals and communication patterns to create identity. Trust-based supplier management can recognize and develop these expressions of identity.

Gummi bears as a reward

A SIAM team had the habit of bringing gummi bears (sweets) to work and started handing these out – as a reward – whenever someone (from outside the team) actively promoted SIAM in favor of their own way of working.

The gummi bear eventually became a symbol for SIAM, and even posters in the offices had the gummi bear on them.

Challenges for growing trust include:

A high employee attrition rate (as trust is based on a personal relationship)

Dependence on specific persons (or service providers) and their knowledge

Competitive situations (between service providers)

Resistance to change (as in changing ways of working for the service provider staff or even the customer-retained organization)

Inability to physically meet (instead relying on remote interactions)

A lack of cultural alignment

5.4 Audit and compliance

Most businesses are regulated in one way or another. There are compliance standards relating to industry, corporate and business governance, setting out requirements to adhere to industry practices, legal and government requirements, or an internal objective to meet a certain benchmark. This results in service providers needing to meet and maintain adherence to certain quality, audit, compliance and regulatory standards.

In a SIAM ecosystem, when the service integrator has accountability to govern service providers, responsibilities extend to the management of quality and compliance parameters. In addition, the service integrator will need to maintain audit readiness or compliance posture through record-keeping and performing preparatory internal reviews and audits.

Compliance management and audits are defined in the Plan & Build stage (see section 3.1.5 Governance model). In the Run & Improve stage, audits should be carried out according to the schedules in the contracts, or in response to a major issue that highlights any potential non-compliance with obligations.

Audits should only be undertaken by staff with the appropriate experience and/or qualifications. In a SIAM ecosystem, this may include staff from multiple domains or service providers. Although there may be local, specific, confidential areas for individual service providers, this should not impact assurance activities, which should be achievable within an open and collaborative culture.

Driving improvements post audit/assessment

Audits are an integral part of most management systems and are usually a requirement of external standards (such as ISO 9001[33] or ISO 14001[34]) or external regulatory frameworks. Audits or assessments can identify not just non-conformities to be addressed through corrective action, but also issues that are either systemic or show trends indicating potential areas of weakness.

Audit activities help all stakeholders improve the organization by providing useful information to the business in pursuit of continual improvement. Auditing should be considered a way of helping the organization identify and improve the effectiveness and efficiency of its practices in pursuit of its objectives.

This is different from simply identifying compliance or otherwise. The difference between these two approaches may be regarded as the difference between facilitation and observation. Holistic audits ensure the business is ‘doing things right’ and also validate that ‘the right things are being done’. For example, this means not just ensuring that processes are being followed, but that they are appropriate for the needs of the business and that those processes contribute to customer satisfaction, however that is defined or measured.

Value may be added during audits by triggering a discussion on best practices or making suggestions for improvement.

The objectives of audits include:

Supporting stakeholders in the organization to deliver their goals and objectives

Measuring the performance of business processes (efficiency, effectiveness and conformity)

Assessing the organization’s ability to meet customer requirements, and internal and external rules and regulations

Facilitating the process of identifying and sharing best practice

Identifying improvement opportunities, risks and non-conformities

Supporting the adoption of external standards

An audit program, outlining the approach and frequency of audit activities, is defined within the governance framework in the Discovery & Strategy stage (see section 2.3.12 Auditing controls) and undertaken within the Run & Improve stage based on the guidance provided within the governance framework.

Audit reports

Audit reports must be timely, objective, supported by facts and evidence, and agreed between the members of the auditing team; the findings must be well documented.

There are three parts to a well-documented finding:

A record of the requirements against which the finding is identified

The finding statement itself

The audit evidence to support audit findings
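The three parts of a well-documented finding can be captured as a simple record. This Python sketch is purely illustrative; the field names, example finding and validation rule are assumptions, not a prescribed SIAM artefact.

```python
# Illustrative sketch: the three parts of a well-documented audit finding
# (requirement, finding statement, supporting evidence) as a simple record.

from dataclasses import dataclass

@dataclass
class AuditFinding:
    requirement: str     # the requirement the finding is identified against
    statement: str       # the finding statement itself
    evidence: list[str]  # audit evidence supporting the finding

    def is_well_documented(self) -> bool:
        """A finding is well documented only if all three parts are present."""
        return bool(self.requirement and self.statement and self.evidence)

# Hypothetical example data
finding = AuditFinding(
    requirement="Changes must be approved before deployment",
    statement="Three changes were deployed without recorded approval",
    evidence=["change records CHG-101, CHG-115, CHG-120"],
)
```

A record structured like this also makes it straightforward to check, before an audit report is issued, that no finding is missing its requirement, statement or evidence.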

The structure of the audit report should meet the needs of the customer organization and could, for example, include:


An executive summary that gives an overall assessment of the health of the area being audited

A statement of whether the area or activity reviewed conforms to the requirements placed upon it

Any opportunities for improvement

Any findings and areas of concern

Areas that may be considered best practice

Information for future audit planning

Areas that require follow up

Targets for preparing, approving and distributing audit reports should be agreed as part of the governance framework and approved during the audit closing meeting. Where possible, audit reports should be distributed to all the stakeholders as quickly as is practicable. Audit reports that are released and distributed long after the audit has taken place may be discredited or not given priority.

Follow-up activities

The customer organization should set clearly defined timescales for the completion of actions agreed during the audit activity. For external audits, these are usually defined by the body performing the audit.

It is commonplace in many organizations that these timescales are also adopted for the equivalent internal audit finding categories. However, an alternative approach may be to consider the severity and impact of the audit finding when agreeing the target date for closure. A flexible negotiated approach (with defined boundaries) can help ensure agreement and cooperation of all stakeholders, and will increase the likelihood of successful completion.

Improvement post audit

After an audit, process A had a major non-conformity raised against it and process B had a minor non-conformity. Under the governance policy, major non-conformities had to be addressed in six weeks and minor non-conformities in 13 weeks.

Upon further investigation, process A had a six-month cycle and would not run again for another five months, while process B had a monthly cycle and was due to run again in two weeks.

With finite resources available, it would make more sense to negotiate a more effective set of deadlines that reflect this, based on resource requirements, available mitigation and risks to the business.

To help ensure that findings are completed as expected, it is also recommended that any action with a timescale of more than two months has a milestone plan agreed, with any individual milestone no longer than two months. If progress is then monitored against the milestones, the likelihood of successfully meeting the overall target date is improved.
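The milestone rule above (interim milestones no more than two months apart for any action targeted beyond two months) can be sketched as follows. The dates and the 61-day approximation of ‘two months’ are illustrative assumptions.

```python
# Illustrative sketch of the milestone rule: audit actions targeted more than
# ~two months out get interim milestones, each at most ~two months apart.

from datetime import date, timedelta

TWO_MONTHS = timedelta(days=61)  # rough approximation of 'two months'

def milestone_plan(start: date, target: date) -> list[date]:
    """Return milestone dates, each at most ~two months apart."""
    if target - start <= TWO_MONTHS:
        return [target]  # short actions need no interim milestones
    milestones = []
    current = start
    while target - current > TWO_MONTHS:
        current = current + TWO_MONTHS
        milestones.append(current)
    milestones.append(target)
    return milestones

# Hypothetical example: a six-month action gets two interim milestones
plan = milestone_plan(date(2024, 1, 1), date(2024, 7, 1))
```

Monitoring progress against each interim milestone, rather than only the final date, is what improves the likelihood of meeting the overall target.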

The timely closure of agreed actions is very important to the effectiveness of the audit process. However, it is recognized that from time to time audit actions will not be addressed within the agreed timescales. This may be because of any number of factors, such as lack of resources, change of business priorities, recognition of need, etc. To manage such events, a suitable escalation process should be established that can be used in cases where agreement to address the issues cannot readily be obtained.

5.5 Risk and reward mechanisms

Risk and reward systems are designed to align the motivations of service providers with the motivations of the customer(s). Service providers must care about the end outcome and avoid self-serving behavior. Risk and reward mechanisms must be defined during the Plan & Build stage. There are a number of mechanisms that can be employed. In some cases, this is an opportunity to apply service credits or service credit earn-back (see sections Service credits and Incentives).

There is also an option to become more innovative by using mechanisms to align the service providers and the service integrator to the customer organization’s goals. There are anecdotal stories demonstrating this in action, for example, where bonus payments have been tied to the revenue of the customer organization. This level of goal alignment requires maturity and high commitment to partnership and transparency. In a SIAM ecosystem, taking on the risk of whether a customer attains a target can seem daunting to a service provider. Service providers may be reluctant to have revenue dependent on a customer or another service provider (who may also be a competitor).

The typical approach is the use of shared targets, based on a customer key performance indicator (KPI), where attainment of the target allows financial benefit. This benefit may be graded based on the level of attainment. For example, a loss of productivity of no more than ten percent because of the services being provided might earn a reward, while zero loss of productivity earns a much larger reward, perhaps on a sliding scale. This may be linked to the current baseline, and to improved achievement levels over time to drive improvement and innovation.
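The graded, sliding-scale reward described above might be sketched as follows. The bands, percentages and amounts are invented for illustration and are not drawn from any contract template.

```python
# Illustrative sketch: a sliding-scale reward graded on a productivity-loss
# KPI. Zero loss earns the full reward; up to 10% loss earns a partial,
# linearly scaled reward; beyond that, no reward. All figures are invented.

def productivity_reward(productivity_loss_pct: float, max_reward: float) -> float:
    """Grade the reward by KPI attainment."""
    if productivity_loss_pct <= 0:
        return max_reward                               # zero loss: full reward
    if productivity_loss_pct <= 10:
        return max_reward * (1 - productivity_loss_pct / 10)  # sliding scale
    return 0.0                                          # target missed: no reward
```

Because the targets should be managed dynamically, as the text goes on to note, the bands and scale in such a calculation would themselves be subject to change through the agreed governance mechanism.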

The targets and the allocation of service credits should be managed in a dynamic way. There needs to be a mechanism at the governance level to modify the targets, either by negotiation or by a contracted change approach. Since the goal is to encourage positive behavior, locking in an arrangement over a three- or five-year contract term would be unworkable.

Attributes of the risk/reward program would include:

Driving collaborative behavior to the desired outcomes

Service credits and earn-back – this allows a scenario where desired, collaborative behavior is rewarded

Shared KPIs are particularly useful in driving shared risk/reward and hence collaborative outcomes

Monitoring the effects of the reward mechanisms in place – great care should be taken to monitor for negative behavior being encouraged by the program

Those who take risks reap rewards – this means asking providers to step up to higher-level outcomes, particularly with respect to grouped services

Creating a provider performance tree (making results visible and showing who is doing well)

Mechanisms to improve ecosystem culture and transparency between stakeholders

The importance of knowledge management and continual training across the ecosystem

5.6 Ongoing change management

The service provider landscape is likely to change. Once the initial model has been implemented, the customer organization may choose to onboard more service providers to the model.

Transition planning and support is required not only for new service introductions, but also in cases where a service has been significantly changed. It is also important that architecture, security, delivery and other standards and policies are in place.

Harmonization between supplier management in the integration layer and contract management within the customer organization’s retained capabilities is essential to handle service provider exit and entry scenarios, or in the event of a new sourcing requirement. This will create a cascading impact on the supplier management, transition planning and support, change management and release management processes.

There will be situations, such as organizational changes within the ecosystem, that will have significant impact on the people working within it. The service integrator should encourage all the ecosystem service providers to have effective knowledge management and backup plans in case of changes.

Effective operation of a SIAM ecosystem depends upon the ability of all stakeholders to understand the model and demonstrate the desired behavior. Therefore, any change in the people can have significant impact on the SIAM model if not anticipated and mitigated early enough. As the service integrator’s control over service provider staff will be limited, this is a risk that needs to be carefully mitigated.

The people perspective of the change is something that is often neglected. The guidance provided within the Plan & Build stage on organizational change management (OCM) is a useful resource here (see section 3.2 Organizational change management approach). Another key factor to make ongoing change management effective and efficient is the focus on process integration. A change, no matter how small, will impact several elements within the ecosystem. If processes (and technology and people) are not integrated, absorbing the positive or negative impact of any change may result in significant imbalance in the overall model.

Managing a SIAM environment requires early detection of any areas where ‘siloed’ ways of working exist and dealing with them directly. One way to address this is having detailed lessons learned sessions soon after any change in the ecosystem. This allows for the development of an understanding of what went well and what went wrong. The lessons learned should not be superficial, but must address the layers, people, process and technology.

5.7 Applicable SIAM practices

Practice definition

“The actual application or use of an idea, belief or method, as opposed to theories relating to it.”[35]

Within the SIAM Foundation BoK, four types of practice are described:

1. People practices

2. Process practices

3. Measurement practices

4. Technology practices

These practice areas address governance, management, integration, assurance and coordination across the layers, and need to be considered when designing, transitioning or operating a SIAM model. This section looks at each of these practice areas and provides specific, practical considerations within the Run & Improve stage. Note that the people and process practices will be combined and referred to as ‘capability’.

5.7.1 Capability (people/process) considerations

The Run & Improve stage supports the operational delivery of the SIAM ecosystem in an incremental way, as each phase, service, process or service provider exits the Implement stage.

Initially, it will be necessary to ensure that knowledge levels and process capabilities are sufficient and mature. Often, immediately after implementation, knowledge levels and process capability maturity are the minimum required to take on services and processes. In the early stages, they will not always be proven under stress, and execution can be immature. As the model matures, the requirement will be to ensure that the capability of people and processes is optimized based on changing customer needs.

SIAM is a combination of people, processes and tools. These components need to work together effectively for a SIAM environment to run smoothly.

The following activities support the Run & Improve stage within a SIAM environment.

Ongoing capability assessment

The customer organization will have defined the expectations for people capabilities within both the service integrator and service provider layers. They will relate to the standards required to support performance and relevance (see section 3.1.6 Detailed roles and responsibilities). Within the Run & Improve stage, the service integrator will provide assurance against these standards by managing the capability framework.

Each service provider will maintain its own framework and systems for assessing the effectiveness of its people. Examples include the skills of teams or functions, such as project management, service management and specialist IT staff, and the evolving digital systems they design, deliver and support.

Skills mapping

The role of the service integrator is like the captain of a ship. It needs to translate the direction of the customer organization, chart a course and have a crew to help reach the desired destination. Having a good map is critical.

Maintaining a skills map is an ongoing activity throughout the Run & Improve stage. It helps to maximize the skills and capabilities of people while enabling staff to undertake work that is aligned to their skills and aspirations (see Figure 19: Communication skills map). With such diverse teams in a SIAM ecosystem, it is necessary to understand the capabilities required to achieve optimal results.
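A skills map can start as simple structured data. The sketch below (a minimal illustration in Python; the skill names and proficiency scale are invented, not part of the SIAM BoK) shows how required capability levels can be compared with those currently available to surface gaps:

```python
# Illustrative skills map: required capability levels per skill area versus
# the levels currently available in the team (0 = none, 3 = expert).
required = {"service management": 3, "project management": 2, "cloud operations": 2}
available = {"service management": 3, "project management": 1, "cloud operations": 0}

# A gap exists wherever the available level falls short of the required level.
gaps = {skill: need - available.get(skill, 0)
        for skill, need in required.items()
        if available.get(skill, 0) < need}

for skill, shortfall in sorted(gaps.items()):
    print(f"Gap in {skill}: short by {shortfall} level(s)")
```

Kept in this form, the map can be reviewed at regular intervals and the gap list used directly as an input to training plans.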

Each service provider must identify the levels of capability and capacity it needs to deliver its services, and then compare the skills it has against what is needed. This will identify gaps that need to be addressed to avoid the risks associated with not having the right level of capability available at the right time. Gaps can arise from insufficient depth of knowledge, insufficient capacity to cover the volume or hours of work, or single points of failure. The skills map must be regularly reviewed and maintained, and gaps acted on.

Ongoing training needs analysis and training plans

Competency frameworks represent the starting point for staff development and workforce planning initiatives in all layers. Continuing to develop staff can help organizations to stay competitive.


Training can be described as the acquisition of skills, concepts or attitudes that result in improved performance within the job environment.

A training needs analysis (TNA) identifies training gaps by isolating the difference between current and required future skills. A TNA looks at each aspect of an operational domain so that the initial skills, concepts and attitudes of the human elements of a system can be identified and appropriate training can be specified. A TNA is the first stage in the training process and determines whether training will in fact address the problem or gap that has been identified.

Succession planning

Succession planning is a strategic process that identifies critical roles, identifies and assesses possible successors, and provides them with the appropriate skills and experience for present and future opportunities. This facilitates the transfer of corporate skills and knowledge.

Succession planning provides a security net for the customer organization and protects it from risks that may result from service provider staff changes. To preserve organizational memory, it should be a deliberate and systematic effort designed to ensure continued effective performance of the SIAM ecosystem by making provision for the development and replacement of key people over time.

The service integrator should facilitate the transfer of skills and knowledge from service providers moving in and out of the ecosystem, to ensure sustained ways of working, and the right people with the right skills in the right place at the right time. The service integrator must identify those capabilities that are most critical to the success of the SIAM ecosystem, prioritizing succession risks and interventions accordingly. This approach needs to be based on the evolving needs of the customer organization, overcoming both structural rigidity and misalignment between strategic priorities and talent capabilities.

The hazards of stepping into service providers’ responsibilities

In one case, a service provider was viewed as performing poorly on its change control process responsibilities. As a tactical step, the service integrator stood up a change management team and instructed the service provider to allow it to conduct change control.

When attempting to normalize the situation, the service integrator was told that the service provider team had redeployed the change managers, and there would be a cost to re-establish that team. Under the principle of estoppel, the service provider asked for funding to cover this cost.

There are three important phases of succession planning:

Map leadership roles and critical positions: look beyond the basic skills and knowledge required to perform an adequate job and into the deeply rooted capabilities, such as traits and motives.

Define the parameters of critical positions: the service integrator should create tools and templates to help identify critical roles. These should identify specific skills, capabilities, knowledge and qualifications required for success in all critical positions. This should lead to the development of a more comprehensive competency list based on staffing needs and associated risks.

Generate detailed position descriptions: define the knowledge, skills and experience required for success for anyone assuming the role.

In addition, it is important to detail the type of learning and development that must be provided to train team members for these vacancies. This will serve as a learning curriculum, to support those moving into those roles.

Development opportunities within the curriculum may take the form of:

Coaching

Improving processes

Most management frameworks emphasize measuring processes as a key element in ensuring the quality of their output and improving them. The effectiveness of the process is measured by comparing the output to the purpose. The minimum viable process should have everything defined to allow the most important measurements. Process lead time can be used as an efficiency metric.
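Process lead time is straightforward to compute once each process instance records when it started and ended. A minimal sketch (the timestamps are invented for illustration):

```python
from datetime import datetime

# Recorded start and end times for a few instances of the same process.
instances = [
    ("2024-01-08 09:00", "2024-01-08 17:30"),
    ("2024-01-09 10:15", "2024-01-10 12:45"),
]

fmt = "%Y-%m-%d %H:%M"
lead_times = [
    (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600
    for start, end in instances
]

# The mean lead time in hours is a simple efficiency metric to trend over time.
mean_lead_time = sum(lead_times) / len(lead_times)
print(f"Mean process lead time: {mean_lead_time:.1f} hours")
```

Trending this figure release by release shows whether process changes are actually making the work flow faster.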

A minimum viable process must define, as a minimum, its purpose, its outputs and who consumes them, and its triggers.

During the Run & Improve stage, the service integrator needs to apply close (operational) governance to ensure that all service providers (including the service integrator itself) are complying with the process requirements and agreements, especially with respect to process integration (see section 5.4 Audit and compliance).

Measurement is essential during Run & Improve. It should focus on value and effectiveness, as well as clear communication, demonstrated through cohesive team working, clear roles and responsibilities, effective communication and a positive working environment. Processes used in the right environment and for the right reason will streamline work and provide consistency.

During the Plan & Build stage, the service integrator will define process inputs and outputs. The service providers should continue to check how they are performing processes by reviewing the outputs and evaluating each step for relevance and value. The approach should be to provide leaner processes that deliver the required outputs.

Problem management example

The customer organization wishes to have problems managed to reduce the impact of incidents caused by them. The process model defines what the service provider must deliver as output – either an improved workaround or a definitive solution.

It defines to which party that process output must be delivered, often the people resolving incidents or the people making the changes to remove the causes of problems, and what triggers problem management – the criteria for deciding which problems to track or to investigate.

However, there is no fixed set of steps that are guaranteed to lead to a successful result. To paraphrase Tolstoy, each problem is unhappy in its own way. Although there are many possible methods useful to manage problems, it is up to the service providers and the service integrator to apply those methods in a flexible way, and appropriate to the desired results and the available resources.

All providers should be encouraged to simplify their approaches. The fewer steps and interactions there are, the easier it is to provide cohesion across service providers. A minimum viable process is a process that can achieve its purpose with the least possible amount of definition and elaboration.

Traditionally, there are many different process elements:

Purpose of the process

Process owner

Activities to be performed, and in which order


Inputs and outputs of the various activities (and of the process as a whole)

Providers of the various inputs and the consumers of the various outputs

Rules, policies and other constraints that should be respected in performing the process activities

Resources required to perform the activities of the process

Tools used by those resources to support the execution of the process

Process roles and their responsibilities

Mapping of the organizational structure to the process roles

Process documentation

Process metrics

Expected levels of performance

Processes need to evolve over time to accommodate change and to check that non-value adding activities have not been introduced. Poor processes do more harm than good and lead to:

Negative impact to business processes and outcomes

Customer complaints regarding service

User, customer and support staff frustration

Duplicated or missed work

Cost increases

Wasted resources


Reviews of process relevance and value are necessary, either as part of an ongoing improvement initiative or when issues arise.

The following steps offer an action plan for such a review:

1.Map the process

2.Analyze the process

3.Redesign the process

4.Acquire resources, if necessary

5.Implement and communicate change

6.Review the process

Map the process

Process models are designed during the Plan & Build stage (see section 3.1.4 Process models). They should include a flowchart or a swimlane diagram for each sub-process, and show the steps in the process visually. Swimlane diagrams are slightly more complex than flowcharts but are better for processes that involve several people or groups. It is important to explore each process step in detail, as some processes may contain sub-steps that are unknown or assumed.

Analyze the process

Use your flowchart or swimlane diagram to investigate the issues within the process. Consider the following questions:

Where do team members or customers get frustrated?

Which of these steps creates a bottleneck?

Where do costs go up and/or quality go down?

Which of these steps requires the most time, or causes the most delays?

Techniques to trace a problem to its origin can be useful, such as value-stream mapping, root cause analysis, cause and effect analysis or the ‘Five Whys’. Speak to the people who are affected by the process. What do they think is wrong with it? Which suggestions do they have for improving it? Try a workshop setting with appropriate stakeholders from all layers within the SIAM ecosystem, and continually consider the relevance and value of all processes.
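Where per-step timings are recorded, the bottleneck questions above can be answered directly from the data. A simple sketch (the step names and durations are invented for illustration):

```python
# Average elapsed hours recorded for each step of a process (illustrative data).
step_durations = {
    "log request": 0.5,
    "triage": 2.0,
    "await approval": 18.0,
    "implement": 4.0,
    "verify and close": 1.0,
}

# The step with the longest average duration is the first bottleneck candidate.
bottleneck = max(step_durations, key=step_durations.get)
share = step_durations[bottleneck] / sum(step_durations.values())
print(f"Bottleneck: {bottleneck} ({share:.0%} of total lead time)")
```

A view like this gives the workshop a factual starting point before techniques such as the 'Five Whys' are applied to the dominant step.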

Lean systems thinking

Lean thinking is a business methodology that aims to provide a new way of thinking about how to organize human activities to deliver more benefits to society and value to individuals while eliminating waste.

Lean thinking assesses the waste inadvertently generated by the way the process is organized, by focusing on the concepts of:

Value

Value streams

Flow

Pull

Perfection
The aim of Lean thinking is to create a Lean enterprise, one that sustains growth by aligning customer satisfaction with employee satisfaction, and offers innovative products or services profitably while minimizing unnecessary over-costs to customers, suppliers and the environment.

Lean thinking seeks dynamic gains rather than static efficiencies. It is a form of operational excellence aimed at taking costs out of processes. This is relevant in a SIAM ecosystem where double handling and complications can make their way into processes, simply because of the complexity of interactions created from having multiple stakeholders.

Redesign the process

This activity involves re-engineering process activities based on the identified shortcomings. It is best to work with those who are directly involved in the process. Their ideas may reveal new approaches, and they are more likely to buy into changes if they have been involved at an early stage.

Make sure that everyone understands what the process is meant to do. Then, explore how problems identified in previous steps can be addressed. Note down everyone's ideas for change, regardless of the costs involved.

As a next step, narrow the list of possible solutions by considering how the team's ideas would translate to a real-life context. Conduct an impact analysis to understand the full effects of the ideas generated. Then, carry out a risk analysis and a failure mode and effects analysis to spot possible risks and points of failure within your redesigned process. Depending on the focus, there may be an opportunity to consider customer experience mapping at this stage.

These tests will help to demonstrate the full consequences of each proposed idea, so the end result is the right decision for everyone. Once the team agree on a process, create new diagrams to document each step.

It is a good idea to use a process forum to undertake this activity, or a working group if quick results are needed for an issue that has recently occurred and needs prompt action.

Acquire resources

Some resource and cost allocations will be within the scope of the service provider management team or the service integrator. If not, the resources and budget need to be agreed and acquired. This may require the production of an outline business case listing the arguments for how this new or amended process will benefit the SIAM ecosystem, as well as timescales, costs and risks.

Implement and communicate change

Usually, new ways of working will involve changing existing systems, teams or processes. Once approved, the change can commence.

Rolling out a new process could be managed as a project, especially if it affects multiple layers. Plans will support careful management (see section 3.2 Organizational change management approach). Planning includes ensuring training is done at the appropriate level. For a minor change, a briefing note or even a discussion might be all that is required. If the changes are significant, formal training programs may be necessary.

Whoever is leading the implementation should allocate time for dealing with early issues and consider running a pilot first, to check for potential problems. It is also important to ensure that, in the initial weeks of operation, the new process is adopted, and staff do not revert to old ways of working.

Review the process

Few things work perfectly right from the start. After making any change, it is good practice to monitor progress in the weeks and months that follow, to ensure that the change is performing in line with expectations. This monitoring will also allow issues to be identified as they occur. Make it a priority to ask the people involved with the new process how it is working, and what feedback, if any, they have.

Service integrator activities

The activities carried out by the service integrator in the Run & Improve stage will depend on the SIAM model in use within the ecosystem.

Typical activities include:

Table 9: Service integrator activities, with examples
Major incident coordination

Coordinating the investigations by multiple service providers

Communicating the status to users and stakeholders

Obtaining root cause analysis reports

Release planning

Maintaining and publishing an integrated release plan with all providers’ releases (where relevant to the SIAM model)

Identifying and planning for any potential clashes

Assuring integration testing of end-to-end services

Capacity planning

Consolidating business demand forecasts

Maintaining and sharing an integrated capacity plan for the end-to-end services with service providers

Checking service providers’ capacity plans to ensure timely provision of capacity

End-to-end monitoring

Monitoring end-to-end services

Alerting service providers

Supporting investigation of major incidents and problems

Incident coordination

Coordinating the investigations by multiple service providers

Communicating the status to users and stakeholders

Problem coordination

Coordinating the investigations by multiple service providers

Communicating the status to users and stakeholders

Reviewing the priority with the business

Change management

Managing the approval of high risk and high impact changes, and changes that affect multiple providers

5.7.2 Measurement practices

Within the Run & Improve stage, the focus is on end-to-end service delivery. End-to-end service measurement refers to the ability to monitor an actual service, not just its individual technical components or providers. Effective measurement practices support the performance management and reporting framework defined in the Plan & Build stage (see section 5.3 Ongoing performance management and improvement).

Peter Drucker is often quoted as saying, “you can’t manage what you can’t measure”. In a SIAM ecosystem, objective measurements are essential to hold all parties accountable.

Once in place, the SIAM ecosystem needs to be measured in terms of the outcomes delivered to the customer organization. Additionally, a framework for the assessment of the issues contributing to substandard performance must be used.

Metrics support decisions on how to improve the ability to meet the business goals. Metrics act as a guide showing the current state of a service, process or component. In this sense, KPIs act as decision-making indicators of performance against the process or technology aspect serving the organization.

One of the challenges when setting objectives against metrics is the tendency to drift from ‘managing by metrics’ to ‘managing metrics’. Focus on the importance of measuring collaborative outcomes – the ‘sum of the parts’. Metrics are the measurements used for assessing the outcomes. When parties try to ‘game the system’ by achieving metrics without managing the underlying outcomes, undesirable behavior follows.

Ultimately, metrics are needed to perform two activities:

1.Allowing the measurement of aggregated outcomes, showing the links from the end-to-end service down to individual components and from components back up to the end-to-end service

2.Managing the behavior of the service providers to encourage them to be more collaborative and focus on the aggregate goal

In the context of SIAM, the service should meet the requirements of the customer organization and its customers and stakeholders. In general, customers consume the ‘top level’ end-to-end services delivering business outcomes, and so that is what they care about. The service elements that are grouped to deliver these end-to-end services must be measured to assess the source of any issue, in addition to measuring the end-to-end service. Nothing is gained from the components all meeting their targets if the end-to-end service does not.
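The point about components meeting their targets while the end-to-end service misses its own can be shown with simple arithmetic. For components in series, end-to-end availability is approximately the product of the component availabilities (a simplification that ignores redundancy; the figures are invented):

```python
# Each of five serial components meets a 99.5% availability target...
component_availability = [0.995] * 5

end_to_end = 1.0
for a in component_availability:
    end_to_end *= a

# ...yet the combined end-to-end service falls below a 99% target.
print(f"End-to-end availability: {end_to_end:.2%}")  # roughly 97.5%
```

Every provider can report green against its own SLA while the customer experiences a service that misses its target, which is why the end-to-end measurement must exist in its own right.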

The service model used during implementation to map accountabilities is used in the Run & Improve stage to understand the contribution of components to the end-to-end service(s).

Tools and processes such as configuration management show the links and dependencies between components, services and service providers. This data can be used to aggregate information, events and statistics about the end-to-end context and possible impact of the components.
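Configuration data of this kind can be held as a simple dependency map and queried for impact. A sketch (the service and component names are invented for illustration):

```python
# Map each end-to-end service to the components it depends on (illustrative).
service_components = {
    "online ordering": {"web frontend", "order API", "payment gateway"},
    "reporting": {"data warehouse", "order API"},
}

def impacted_services(component: str) -> list[str]:
    """Return the end-to-end services that depend on a failing component."""
    return sorted(s for s, deps in service_components.items() if component in deps)

print(impacted_services("order API"))  # both services depend on this component
```

The same lookup can be used in reverse during incident triage: given a degraded end-to-end service, the dependency map narrows down which providers' components to investigate.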

The following are examples of measurements and targets that would help measure the value of the SIAM ecosystem and end-to-end services:

Reduce critical service outages by x% each quarter

Improve the performance of every service delivered by x% every quarter (continual improvement)

Reduce the cost of managing technology by x% each quarter (exclusive of people)

Increase the use of self-service by x% in those areas where this is appropriate

‘Right first time’ as a metric is imposed across the value stream with the goal of no defects, bugs or incidents passed downstream

Mean time to restore (MTTR) – reduce the time to notice, alert, log, investigate, diagnose, resolve, close and confirm closure of incidents by x% per quarter for all priority one incidents

MTTR – reduce the time to notice, alert, log, investigate, diagnose, resolve, close and confirm closure of incidents by x% per year for all priority two incidents or lower

All changes must be capable of being rolled back or forward fixed within agreed timescales for each change type

All changes must be version controlled, including documentation, software and infrastructure

All services will pass a continuity test annually or, if critical, every quarter

Configuration management system must be accurate to within x%, measured quarterly

No change deemed critical can go live without service integrator approval (automated or manual)

All services provided (people, processes, architecture, software) will meet SIAM or corporate governance policies unless otherwise agreed

Any reporting must be consistent and coordinated across the value stream
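MTTR targets like those above depend on consistent timestamping across the incident lifecycle. A minimal sketch of the quarter-on-quarter check, using invented incident figures:

```python
# Elapsed hours from detection to confirmed closure for priority one incidents,
# recorded in two consecutive quarters (illustrative figures).
mttr_hours = {"Q1": [4.0, 6.5, 5.5], "Q2": [3.5, 5.0, 4.5, 3.0]}

def mean(values):
    return sum(values) / len(values)

q1, q2 = mean(mttr_hours["Q1"]), mean(mttr_hours["Q2"])
reduction = (q1 - q2) / q1

# Check the quarter-on-quarter reduction against an x% target (here, 10%).
target = 0.10
print(f"MTTR fell {reduction:.0%}; target {'met' if reduction >= target else 'missed'}")
```

The same calculation applies per priority level, with the lower-priority incidents measured on an annual rather than quarterly cycle as described above.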

Everything is a priority one

One company had set a contract clause that defined that all incidents were treated as a priority one.

The implications of this were understood but there was a desire to drive the culture of ‘right first time’ and ‘never let a defect go live’.

It took two years, but incident volumes fell by more than 60 percent overall, customer satisfaction was so consistently high that it was no longer measured, and the last recorded cost of service had decreased by 18 percent.

The objectives for SIAM need to be synchronized with the overall customer organization objectives, focusing on both the short- and long-term business objectives of the commissioning organization. With this information, the service integrator can create targets that are aligned with the big picture. When targets are clear, it is easier to agree on what to measure. Metrics are an essential management tool. The right metrics provide the information to make qualified decisions based on facts.

Reports should be designed with care and consider what is important to the stakeholders at the time, but remember these requirements will change and reports may need to change as well.

Visual management

Visual management is the set of practices that allow individuals to see what is happening quickly, understand if there are any issues, highlight opportunities and act as a guide for improvement. Visual management is the ability of a system to quickly show the current status to anyone who stands and observes, using key indicators.

Visual management should use a tool that can display real-time information, such as production status, delivery status, process or technology status. It uses simple representations that everyone (including the customer organization, service integrator, service providers and other stakeholders) will find meaningful.

This approach requires information to be available to those doing the work in a timely fashion, displayed so everyone in the area understands it. The information and metrics provided are intended to drive decisions and actions, and there needs to be a clearly defined process for acting and obtaining support from relevant parties when needed. This aligns with the role of the process forums and working groups.
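Such a display can start very simply. The sketch below renders a red/amber/green view per service provider on a console (the provider names and thresholds are invented; real thresholds would be agreed across the ecosystem):

```python
# Snapshot of open incident counts per service provider (illustrative feed).
open_incidents = {"Provider A": 1, "Provider B": 6, "Provider C": 14}

def rag_status(count: int) -> str:
    """Classify a provider using simple, pre-agreed thresholds."""
    if count < 5:
        return "GREEN"
    if count < 10:
        return "AMBER"
    return "RED"

board = {provider: rag_status(n) for provider, n in open_incidents.items()}
for provider, status in board.items():
    print(f"{provider:12} {status}")
```

Anything beyond green then triggers the defined process for acting and obtaining support, which aligns with the role of the process forums and working groups.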

Visual controls cover more broadly how an ecosystem is operating. Examples could include the status of incidents and problems, a baseline of configuration information, and the flow of operational activity and performance. All service delivery elements can be displayed using such visual controls. This approach to measurement throughout the SIAM ecosystem results in a minimum viable product (MVP) of measurement. Despite the collaborative end-to-end focus for outcomes, measurement must still be able to reveal individual failure or success. Drill-down reporting should be available while maintaining focus on end-to-end outcomes (see Figure 27 below).

Figure 27: Drill-down reporting

The service integrator should maintain documentation that includes the measurement design and calculations to ensure clarity and transparency between all layers. This should include a statement of the intent of the measurement, which in turn enables a wider understanding and helps to provide a foundation for future improvement.

Outcome-focused measurements should be balanced with behavioral measurements, as measuring outcomes alone can drive unintended behavior. The relationship of related measurements should be defined, and where there are competing factors, use hierarchies or categories to articulate criticality. Balance the use of leading and lagging indicators, as each addresses the measurement of an outcome from a different perspective. Define the audiences for each measurement to ensure appropriate relevance and representation.

It is important to continue the journey of defining and documenting measurements throughout the Run & Improve roadmap stage, as knowledge is built and new measurements are identified to support continual improvement.

Visual management

A small organization with limited experience was defining requirements for a visual management solution. It wanted to keep things simple and gradually build on its data.

They agreed that they wanted to:

Build a simple view of the SIAM Run & Improve roadmap stage

Highlight the service providers involved

Visually highlight any issues

Be capable of drilling down for more information

Be adjustable to serve the needs of various viewers

Be easy for all relevant service providers to participate in

Be flexible

Only monitor information that showed the state of something

Make the service integrator accountable for maintaining its view of the visual diagram

Ensure that it was checked at least at every major application release, addition of a new feature or technical change

They defined some simple rules and created a value chain of activities. By bringing various factors together, they iteratively created a model that was flexible to use, change and apply against technology tool(s).

Hint – begin with something as simple as a series of large notes stuck to a wall and see if the flow works.

Figure 28 shows an example flow:

Start high level


Add detail


This can then be mapped to organization models, tools, roles, RACI matrices, etc. When mapping a process, ensure that the level of detail is consistent throughout the map. It is common to find some areas explored in detail and others at a high level, which can create ambiguity and inhibit action.

Figure 28: Visual management – ITIL process to organizational chart

Visual management provides a way of mapping the way the processes work to the organization, tools, metrics, roles, etc.
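When process activities are mapped to roles in a RACI matrix, simple consistency checks catch common errors, such as an activity with no accountable role or more than one. A sketch with invented activities and role names:

```python
# RACI assignments per process activity (illustrative mapping).
raci = {
    "raise change": {"service provider": "R", "service integrator": "A"},
    "assess impact": {"service provider": "C", "service integrator": "A"},
}

def check_single_accountable(matrix: dict) -> list[str]:
    """Return activities that do not have exactly one accountable ('A') role."""
    return [activity for activity, roles in matrix.items()
            if list(roles.values()).count("A") != 1]

print(check_single_accountable(raci))  # an empty list means the mapping is consistent
```

Run as part of each review cycle, a check like this keeps the role mapping aligned with the process map as both evolve.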

5.7.3 Technology practices

Within the Run & Improve stage of the SIAM roadmap, all layers need to keep abreast of emerging technology. The pace of change in the technology sector is accelerating rapidly, and organizations need to understand the potential of new technology.

There are several ways to keep abreast of new and upcoming technologies:

Customer organizations often give early indication of a changing need or requirement. Much recent innovation is based on customer organizations expecting the same level of technology accessibility and functionality that they can get from their personal devices. It is not appropriate to ignore consumer technologies as not ‘enterprise grade’. Customers often follow technology closely and are happy to provide their perspective on what is happening.

It is the role of both the service providers and the service integrator to recognize new technology. For service providers that sell products, keeping up with the latest technologies helps maintain and improve their products, which better serves customers and meets changing needs, and enables market penetration and success. It is a good idea to encourage service providers to share their own product and service roadmaps as an input to future SIAM strategy. Recognize that this can have its challenges where information is commercially sensitive and relates to the service provider’s market position and long-term business success.

For service integrators, especially those that are external, keeping up with the latest trends and advising on them becomes a part of their added value.

Staying relevant in the technology industry is an ongoing challenge. There are some activities that will support the service providers and service integrator in staying abreast of emerging technologies. A technology assessment is the study and evaluation of new technologies. This is an important input into future business strategies and as such is a task for the customer organization to undertake or be involved with (see section 3.1.9 Tooling strategy).

