Performance-Based Project Management produces tangible documents that are evidence that the method has been used. These documents are in addition to the contracts and other program documents normally found in a large, complex systems development effort. The documents and evidence they represent are materials produced during the project that demonstrate the increasing maturity of the products or services. Although these alone are not measures of progress, they are the supporting materials for these measures. The documentation for each deliverable becomes part of a larger set of documents needed to give the customer visibility into the performance of the project. Ultimately, the working products or services are the tangible evidence of progress to plan.
The documents start with the Statement of Work (SOW), which describes the deliverables from the project. The SOW contains a narrative of the work to be performed on the project. This is an anchor usually provided by the customer, but it may also be developed jointly by the customer and the provider of the products or services of the project. In all cases, a well-written SOW lays the groundwork for success of the project.
Some in the project management business say that Performance-Based Project Management requires too many documents or that the documents are too complex. This approach uses documents as a narrative between the requesters of the project’s outcomes and the providers of those outcomes. The physical form of each document is not prescribed; only that each is needed and must contain a minimal set of information that the customer and provider can use to take action or make decisions.
Most of these documents have been introduced in previous chapters. In this chapter, we will develop the details needed for the following documents so that you can actually produce them for your own projects:
Statement of Work (SOW). Establishes and defines all nonspecification requirements for the work efforts of the project.
Statement of Objectives (SOO). States the overall performance objectives.
Concept of Operations (ConOps). A verbal or graphic statement that clearly and concisely expresses what the customer intends to accomplish and how it will be done using available resources.
Work breakdown structure (WBS). A product-oriented family tree of hardware, software, services, data, and facilities that are required for system development, deployment, and sustainment.
Organizational breakdown structure (OBS). The hierarchical description of the staff that works on the project.
Responsibility Assignment Matrix (RAM). The intersection of the WBS and the OBS, showing “who” is doing “what” work to produce the outcomes from the project.
Integrated Master Schedule (IMS). An integrated, networked schedule containing all the detailed discrete work packages and planning packages necessary to accomplish the project.
Risk Management Plan (RMP). A plan to foresee risks, estimate impacts, and define responses to issues.
Performance Measurement Baseline (PMB). A time-phased budget plan for accomplishing work, against which contract performance is measured. It includes the budgets assigned to scheduled control accounts and the applicable indirect budgets.
Requirements Traceability Matrix (RTM). A tool used to ensure that the project’s scope, requirements, and deliverables are connected with each other.
Measures of Effectiveness (MoE). Operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment under a specific set of conditions.
Measures of Performance (MoP). Measures that characterize physical or functional attributes relating to the system operation, measured or estimated under specific conditions.
Key Performance Parameters (KPP). Capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the program.
Technical Performance Measures (TPM). Attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal.
The Statement of Work is the start of the project. It is a document developed by the customer, sometimes with the help of the supplier of the products or services, but written primarily from the customer’s point of view. The SOW, as its name suggests, describes the work to be performed during the project that will produce the project’s outcomes. The term work is broad. It could mean buying something, teaming up with someone to provide the solution, installing a “readymade” solution, or actually building something, although that is not always necessary.
The SOW specifies in clear and understandable language the work to be done in producing the outcomes of the project through the delivered products or services. An effective SOW requires both an understanding of the goods or services that are needed to satisfy a particular requirement and an ability to define what is required in specific, performance-based, quantitative terms. This sounds like a lot of work, but the SOW is the core document for the success of any project. It enables suppliers to clearly understand the customer’s needs. This allows the suppliers to prepare a credible proposal to deliver the required goods or services.
The SOW must state the requirements in general terms of what (result) is to be done, rather than how (method) it is done. It gives the supplier the maximum flexibility to devise the best method to accomplish the required result. However, the SOW must also be descriptive and specific enough to protect the interests of the customer and to promote competition between multiple suppliers. The clarity and explicitness of the requirements in the SOW will enhance the quality of the proposals submitted. A definitive SOW is the source of definitive proposals, which reduces the time needed for proposal evaluation.
Preparing a Statement of Work begins with an analytical process that involves an examination of the customer’s requirements and tends to be a “bottom-up” assessment with “reengineering” potential. Each requirement needs to be connected to a capability the customer is asking for. When we start at the bottom and work our way up, we may have requirements that have no home—no capability to support. One of two things can then be done: drop the requirement, or reassess whether a capability is missing. When we start the Statement of Work at the top and work down, we have no way to determine if we have missed any requirements that may be needed from missing or undiscovered capabilities. This analysis is the basis for establishing performance requirements, developing performance standards, writing the performance work statement, and producing the quality assurance plan. Those responsible for the mission or program are essential to the writing of the SOW and defining the needed capabilities produced by the project.
A SOW describes the work to be performed and usually includes a timeline and level of effort so that a vendor or contractor can respond to the Request for Quote (RFQ) with a proposal and cost estimate. The customer can then select the most qualified vendor at the most affordable cost.
SOWs should include:
Work to be performed and location of the work to demonstrate how the job will be completed as planned
Period of performance, timeline, and deliverable schedule
Any special requirements
The evaluation criteria may include the bidder’s:
Plan for performing the work described in the SOW. This plan may or may not be a full schedule, but the sequence of the work and the identified outcomes need to be described. A partial schedule can be developed for the upcoming work in enough detail to manage the project. For work beyond the planning horizon, providing a budget and a general description of what will be done is called “rolling wave” planning.
Skill and experience of the individuals who will perform the work and their assigned work efforts.
Past performance and completed projects to demonstrate competency, capability, and capacity to perform the work correctly.
Price of the proposed products or services.
The Statement of Objectives is an alternative to a Statement of Work. The SOO is a broader description of the outcomes of the project used when the technical and operational details are not needed to convey what “done” looks like to those implementing the project. It is a summary of key goals, outcomes, or both, which is incorporated into performance-based contracts so that competitors can propose their solutions, including their technical approach, performance standards, and a quality assurance surveillance plan based on commercial business practices.
The SOO should not address each WBS element, but each WBS element should be traceable to, and support, something included in the SOO. For example, a SOO may instruct the bidder to address the engineering approach. That is not a particular WBS element, but several WBS elements might be created to break out the engineering tasks. Try not to group all WBS elements in the same objective. End users will get the best service, and competition will be maintained, if dissimilar objectives are submitted to contractors on separate SOOs.
The Statement of Objectives provides basic, top-level objectives of the project and is provided in the Request for Proposal (RFP) in lieu of a formal Statement of Work. The SOO gives potential suppliers the flexibility to develop cost-effective solutions and the opportunity to propose innovative alternatives meeting the objectives.
A Concept of Operations (ConOps) describes from the user’s perspective a system’s characteristics, the needed capabilities it will fulfill, its relationship to other systems, and the ways it will be used. A ConOps can also describe the user’s organization, mission, and objectives from an integrated systems point of view and is used to communicate overall quantitative and qualitative system characteristics to stakeholders. It describes the characteristics of a proposed system from the viewpoint of those who will use that system. The ConOps includes:
Statement of the goals and objectives of the system assessed through Measures of Effectiveness (MoE).
The current system or situation with background, objectives, and scope. The operational policies and constraints in the current system. This description should identify the operational environment and its characteristics, the major components of the system and the interconnections between these components, the interfaces to external systems, and the capabilities and functions of the current system.
Strategies, tactics, policies, and constraints affecting the system.
Organizations, activities, and interactions among participants and stakeholders.
Clear statement of responsibilities and authorities delegated.
Specific operational processes for fielding the system.
Processes for initiating, developing, maintaining, and retiring the system.
A ConOps should relate a narrative of the process to be followed in implementing a system. It should define the roles of the stakeholders involved throughout the process. The ConOps should provide a clear methodology to realize the goals and objectives of the system, but it should not be an implementation or transition plan itself.
A typical ConOps table of contents looks like this:
Key goals of the resulting system
Key assumptions in achieving those goals
Stated purpose of the ConOps
Major business functions or major mission functions
Entities performing these functions
Major supporting technology
New capabilities’ effect on managing the business or leading the mission
Top-level schedule for deploying the needed capabilities
A formal definition of a project work breakdown structure “is a deliverable or product-oriented grouping of project work elements shown in graphical display to organize and subdivide the total work scope of a project.”1 The WBS is a critically important project tool for the simple reason that it is the first place the actual deliverables of the project are listed and their behaviors named in the WBS dictionary. The WBS dictionary describes the technical and operational measures needed to confirm that the deliverable from the WBS has been correctly implemented. This narrative, often referred to as a “build to specification,” describes the “acceptance criteria” for the deliverable. Thought and planning must be given to the development and implementation of the WBS and the accompanying dictionary so that the need for subsequent changes is minimized.
This WBS framework, as illustrated in Figure 4.3, allows the project to be separated into its logical component parts and makes the relationship of the parts clear. It defines the project in terms of hierarchically related product-oriented elements. Each element provides logical summary points for assessing technical accomplishments and for measuring cost and schedule performance.
A work breakdown structure is the foundation for effective project planning, execution, controlling, statusing, and reporting. All the work contained in the WBS is to be identified, estimated, scheduled, and budgeted using the WBS number. The WBS is the hierarchical structure and a set of codes that integrates project deliverables, from the lowest work element to the final system deliverable—the delivered capabilities. The WBS contains the project’s scope baseline that is used to achieve the technical objectives of the project. The WBS is generally a multilevel framework that organizes and graphically displays elements representing work to be accomplished in logical relationships.2
The project manager structures the project work into WBS elements (work packages) that are:3
Definable. The deliverables of the project can be described and easily understood by project participants.
Manageable. The size and complexity of the work can be assigned to people with specific responsibility and authority.
Estimable. The duration and effort of the work can be estimated with confidence, along with the cost and the resources required to complete the work as planned.
Independent. Each element of the WBS has a minimum interface with or dependence on other elements.
Integrable. Each element of work can be integrated with other project work elements and with higher-level cost estimates and schedules to include the overall project.
Measurable. Each element of work has a means to measure progress. The work has start and completion dates and measurable interim milestones.
Adaptable. Each work element is sufficiently flexible so the addition, change, or elimination of work scope can be accommodated in the WBS framework.
Relationships between the WBS elements and detailed descriptions of each element are presented in the WBS dictionary that accompanies the hierarchical WBS diagram. The WBS dictionary is a project definition tool that defines the scope for each work element; documents the assumptions about the work, including deliverables, milestones, key performance parameters, and quantities; lists required resources and processes to accomplish the work; identifies a completion schedule, including measurable milestones; and provides links to key technical design or engineering documents.
A WBS dictionary is a set of specific definitions that thoroughly describes the scope of each work element identified in the WBS. It defines each WBS element down to the control account or work package level in terms of the content of the work to be performed. The dictionary is composed of two components:
A tabular summary of the dictionary elements cross-referenced to the WBS indenture level, the WBS revision, the element title, the project contractor WBS code, and (if desired) the contractor’s accounting code.
A work element dictionary sheet that provides the title of the work element, the project contractor WBS and the contractor’s accounting codes, the budget and reporting number, and a detailed description of the work to be performed by this element, including deliverables.4
The WBS defines the project’s deliverables and groups these project elements so that the project’s work activities can be managed effectively. The OBS, defined next, shows how the staff is structured to perform this work. The WBS is product centric, not organization centric. It does not include such functions as “design” or “test.” These are not deliverables from the project. Rather they are “functions” performed during the delivery of the products or services from the project. Our primary motivation for using the WBS is that it:
Provides a framework for organizing and managing the approved project scope
Ensures that we have defined all the work that makes up the project
Provides a framework for planning and controlling cost and schedule information
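The coded hierarchy that makes these three things possible can be sketched as a small data structure. The Python fragment below is a hypothetical illustration, not material from this chapter’s figures: the element codes, names, and costs are invented for the kitchen remodel example, and it shows how work-package cost estimates roll up through the WBS codes to the summary elements.

```python
# Hypothetical WBS for a kitchen remodel: each element is keyed by its
# WBS code; only leaf elements (work packages) carry a cost estimate.
wbs = {
    "1":     {"name": "Kitchen Remodel", "cost": None},
    "1.1":   {"name": "Cabinets",        "cost": None},
    "1.1.1": {"name": "Base cabinets",   "cost": 4_200},
    "1.1.2": {"name": "Wall cabinets",   "cost": 3_100},
    "1.2":   {"name": "Countertops",     "cost": 2_800},
}

def rollup(code: str) -> float:
    """Sum the work-package budgets under one WBS element.

    A leaf is any element with no child codes beneath it; summary
    elements get their totals from the leaves, never from direct entry.
    """
    leaves = [
        c for c in wbs
        if (c == code or c.startswith(code + "."))
        and not any(k.startswith(c + ".") for k in wbs)
    ]
    return sum(wbs[c]["cost"] for c in leaves)

print(rollup("1.1"))  # Cabinets summary = base + wall cabinets
print(rollup("1"))    # Total project scope
```

The dotted-code convention is what lets every estimate, schedule activity, and charge be tagged with the WBS number, as the text describes, so cost and schedule data always sum to the same summary points.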
The organizational breakdown structure describes the structure of the workforce that delivers the products and services described in the work breakdown structure. A concise description of the OBS can be found in government documents, where the OBS is a formal document submitted monthly to the contracting officer. Through the WBS, work is defined to a level where unique organizational and personal responsibilities can be established. This may occur at any one of several levels within the project and functional organization. The individual assigned responsibility for accomplishing work at the control account level is often designated a control account manager. Control accounts are divided into smaller, discrete scopes of work called work packages, and a work package manager is assigned to each work package. Integrating the WBS with the project and functional organizations ensures that all contract work is accounted for, and that each element of work is assigned to the level of responsibility necessary for planning, tracking progress, accumulating costs, and reporting.5
The development of the organizational breakdown structure starts with the organization chart. This seems obvious, but without knowing “who” actually works on the project, we can’t really arrange those resources in a way that ensures that we have enough capacity to do the work (see Figure 4.4). This organization chart also identifies the needed resources. This can be done through a skills assessment, via experience with technology or processes, or simply by using the “job titles” or “job classifications” for the available staff. Matching this structure against the elements of the work breakdown structure creates an intersection between “what has to be delivered” and “who has to deliver these outcomes.” The OBS indicates the organizational relationships between the resources providing the work and the assignment of these resources to the actual work.
The intersection of the work breakdown structure, which shows the project’s deliverables, and the organizational breakdown structure, which shows the resources assigned to produce those deliverables, is found in the Responsibility Assignment Matrix (RAM). This document may seem unnecessary for many projects, and, for smaller projects, this might be true. When someone asks, “Who is working on what?” the answer can be found in the OBS, but the RAM displays only the lowest level at which the WBS and the OBS intersect, and identifies the specific responsibilities for specific project work. At this intersection, the budget is assigned to the resources performing the work so they may produce the outcomes of that work, as shown in Figure 4.4.
With the WBS, OBS, and RAM, we have all we need to know about the “who” and “what” of our project:
The work breakdown structure is a tool that defines a project and groups the project’s elements in a way that helps organize and define the total scope of the work to be accomplished. This is done by identifying the final products and the major deliverables of the project, incorporating the appropriate levels of detail to show how the products are “assembled” for final delivery, and obtaining stakeholder agreement that these products (or services) properly represent the Statement of Work. This last piece is a narrative document—the WBS dictionary—describing the attributes of each deliverable using Measures of Effectiveness, Measures of Performance, and Technical Performance Measures.
The OBS indicates the relationships among parts of the organization; it is used as the framework for assigning work responsibilities.
The Responsibility Assignment Matrix merges the WBS and OBS to identify who has specific responsibility for specific project tasks, as shown in Figure 7.1.
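As a sketch of the idea, the RAM can be represented as a sparse matrix keyed by (WBS element, OBS element) pairs, with budget assigned only at the cells where “what” meets “who.” Everything below is a hypothetical illustration for the kitchen remodel example; the element names and budgets are invented, not taken from Figure 7.1.

```python
# Each RAM cell maps (WBS element, OBS element) -> control-account budget.
# Only intersections where real work is assigned appear in the matrix.
ram = {
    ("1.1 Cabinets",    "Carpentry Crew"): 7_300,
    ("1.2 Countertops", "Carpentry Crew"): 2_800,
    ("1.3 Electrical",  "Electrician"):    1_500,
}

def who_does(wbs_element: str) -> list[str]:
    """Answer 'who is working on what' for one deliverable."""
    return [obs for (w, obs) in ram if w == wbs_element]

def budget_for(obs_element: str) -> int:
    """Total budget assigned to one organization across all its work."""
    return sum(b for (w, o), b in ram.items() if o == obs_element)

print(who_does("1.1 Cabinets"))
print(budget_for("Carpentry Crew"))
```

The sparseness is the point: a cell exists only at the lowest level where the WBS and OBS intersect, which is exactly where the text says budget is assigned to the resources performing the work.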
The Integrated Master Schedule (IMS) is a time-based network of detailed tasks necessary to ensure successful program/contract execution. The IMS is traceable to the Integrated Master Plan, the contract WBS, and the Statement of Work. The IMS is used to verify how attainable the contract objectives are, to evaluate progress toward meeting program objectives, and to coordinate the scheduled activities with all related components. Figure 4.5 illustrates how the elements of the IMS are arranged using work packages that contain the work activities needed to produce the deliverables described in the WBS and the staff described in the OBS, who deliver the outcomes described in the SOW that fulfill the customer’s needed capabilities.
There are five core attributes of a credible Integrated Master Schedule. These attributes must be in place before the project can be considered “ready to execute.” The IMS must be:
1. Complete. It must contain the entire project’s scope of work. The WBS and its coding are included in the IMS. This means all the work activities in the IMS must be coded with the WBS number. This tells the reader “why” the work is being done, “what” is being produced, and “when” the work will produce the outcome contained in the WBS.
2. Traceable. This is an extension of completeness. The work in the IMS must tie into the requirements as well as the Statement of Work, Statement of Objectives, and the Concept of Operations. The IMS does this by using the same numbers for each work activity as are used in these other documents.
3. Transparent. This is a comprehensive description of what work needs to be performed, how the work will be performed, what measures of progress to plan are needed for the work, what risks are associated with the work, and the measures of performance for the outcomes from the work. By transparent we mean the work is “clear and concise” so there is no confusion about the outcomes or what “done” looks like.
4. Usable. The IMS must be used daily to execute the project. The project’s participants must use it in their conversations about the status of the project, risks that are impeding progress, and who is working on what. The IMS is the “playbook” for the project. To be “usable,” the IMS has to state not only what “done” looks like but also how the team is going to reach “done.”
5. Controlled. As a critical document, the IMS must be under the change control of a single responsible person. This person is the “scheduler.” The IMS is the guiding roadmap and as such must represent the current and approved direction for the project.
Many projects build an Integrated Master Schedule and call it complete, without considering “how” and “why” they are producing this critical document. They simply lay out the work in some sequence, assign the work, and assume that the IMS is ready to be used to manage the project. However, if we expect to increase our project’s probability of success, the following mistakes must be avoided:
Not aligning the IMS with the customer’s needed capabilities. The IMS must show how each of the capabilities will be developed and deployed through the work activities. This requires that the IMS be a “narrative” of these capabilities, not just a description of the work efforts. This narrative starts with work activities containing the nouns and verbs connecting the activity to the capabilities. For example, “Develop transaction processes for provider network” is a work activity supporting a needed capability of the health insurance IT project developed in Chapter 5.
Not including the entire scope and the “exit criteria” for the work being performed. “Test” is not a credible description of a work activity; but “Test database update service for provider network” is a credible description of the work. The “exit criteria” for this activity would then be contained in the WBS dictionary. These criteria are the Measures of Performance and Technical Performance Measures, and, often, the Key Performance Parameters of the deliverable produced by the work effort. The details of these are described in Chapter 3’s endnotes.
Not cross-referencing the IMS to the project’s other documents. For example, the deliverables described in the Statement of Work must be referenced in the IMS with a SOW number, the work breakdown structure number, all the Measures of Effectiveness and Performance, and—most of all—a reference to the risks, which are described in the Risk Register. The IMS must indicate if these risks are being “retired” with specific work activities.
Not using the IMS as a “game plan” for the project team. Without it, the team members don’t know what work they are accountable for, when that work is to be performed, what the outcomes of that work look like, and, most important, who they are dependent on and who depends on them for products from the project. While specifically not an IMS development issue, the IMS must be “usable” during the management of the project. This means the IMS must be treated as a piece of literature—readable, understandable, unambiguous, not confusing to the project team, and most of all having the integrity needed to make management decisions.
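What makes the IMS more than a task list is that it is a network: finish dates are computed from durations and dependencies, not asserted. As a minimal sketch, assuming hypothetical work packages, durations, and predecessor links (none of which come from this chapter’s figures), a forward pass over the network yields each package’s earliest finish and the project finish date.

```python
from functools import lru_cache

# Hypothetical IMS fragment: work packages keyed by WBS number, each with
# a duration in days and its predecessor work packages.
ims = {
    "1.1.1": {"name": "Demo old cabinets",   "days": 2, "preds": []},
    "1.1.2": {"name": "Install base units",  "days": 5, "preds": ["1.1.1"]},
    "1.3":   {"name": "Rough-in electrical", "days": 3, "preds": ["1.1.1"]},
    "1.2":   {"name": "Fit countertops",     "days": 2, "preds": ["1.1.2", "1.3"]},
}

@lru_cache(maxsize=None)
def early_finish(code: str) -> int:
    """Forward pass: earliest finish day for one work package."""
    start = max((early_finish(p) for p in ims[code]["preds"]), default=0)
    return start + ims[code]["days"]

# The project finishes when its latest work package finishes.
project_finish = max(early_finish(c) for c in ims)
print(project_finish)
```

Because every activity carries its WBS code, this same network is what lets the IMS trace back to the SOW and WBS, as the attributes above require.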
The best way to think about risk management is to reflect on Tim Lister’s advice: “Risk management is how adults manage projects.”6 Risk management is essential to the success of any significant project. Information about key project cost, performance, and schedule attributes is often unknown until the project is under way. These risks can be mitigated, reduced, or retired with a risk management process. Risk management is concerned with the outcomes of a future event, the impacts of which are unknown.7 Risk management is about dealing with this uncertainty.
In Chapter 3, we discussed how to perform Continuous Risk Management. Now, let’s look at the motivation for this process and the beneficial outcomes it produces. Five simple concepts of risk management are represented in the Risk Register and the Risk Management Plan:
1. Hope is not a strategy. Hoping that something positive happens will not lead to success. Preparing for success is the basis of success. We need a written plan for identifying and handling the risks that threaten the success of the project. This is the basis of the Risk Management Plan. If it is not written down, it will not be handled.
2. All single-point estimates are wrong. Single-point estimates of cost, schedule, and technical performance are no better than 50/50 guesses in the absence of knowledge about the variances of the underlying distribution. In our Risk Management Plan, we must have credible estimates for our work and the performance of the outcomes from the work. This means a probabilistic estimate of cost and schedule at a minimum. With these estimates, we can plan for the contingencies needed to deliver on time and on budget.
3. Without integrating cost, schedule, and technical performance, we are looking in the rearview mirror. The effort to produce the product or service and the resulting value cannot be made without making these connections. Our Risk Management Plan must describe how we are going to integrate these three elements and state who is accountable for ensuring that they are properly connected to show not only the risks, but also how they will be handled.
4. Without a model for risk management, you are driving in the dark with the headlights turned off. Risk management is not an ad hoc process that you can make up as you go along. A formal foundation for risk management is needed. Choose one that has worked in high-risk domains, such as defense, nuclear power, or manned spaceflight.
5. Risk communication is everything. Identifying risks without communicating them is a waste of time. The Risk Management Plan must include a Risk Communication Plan that connects the project participants with the customer so all the risks are visible, are agreed upon, and have acceptable handling plans in place.
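The second concept above, that single-point estimates are no better than 50/50 guesses, can be illustrated with a small Monte Carlo simulation. The task names and three-point (minimum, most likely, maximum) duration estimates below are hypothetical; the sketch shows how a probabilistic estimate yields a confidence-based completion date rather than a coin flip.

```python
import random

# Hypothetical three-point duration estimates in days (min, most likely,
# max) for three sequential work packages.
estimates = {"design": (3, 5, 9), "build": (8, 12, 20), "test": (2, 4, 7)}

def simulate(trials: int = 10_000, seed: int = 1) -> list[float]:
    """Sample total duration by drawing each task from a triangular
    distribution and summing; return the sorted outcomes."""
    random.seed(seed)
    totals = [
        sum(random.triangular(lo, hi, mode)
            for lo, mode, hi in estimates.values())
        for _ in range(trials)
    ]
    return sorted(totals)

totals = simulate()
p50 = totals[len(totals) // 2]        # the "50/50" answer
p80 = totals[int(len(totals) * 0.8)]  # a commitment with 80% confidence
print(round(p50, 1), round(p80, 1))
```

The gap between the 50th and 80th percentiles is the schedule contingency the text calls for; committing to the single most-likely sum ignores that gap entirely.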
To be credible, the Risk Management Plan must contain the following four sections:
1. Executive Summary. A short summary of the project and the risks associated with the activities of the project. Each risk needs an ordinal rank, a planned mitigation for any active risks, and approval by the Risk Board, which is accountable for reviewing and approving the risks. Once the risks are approved, the Risk Board reviews progress toward retiring or handling them and confirms that existing risks are not growing and that no new risks have gone unrecognized. The mitigation plans are then included in the IMS with their costs defined and approved.
2. Project Description. A detailed description of the project and the risk associated with each of the deliverables. Each of the descriptions needs to speak to what happens if the risk occurs, how it is going to be prevented from happening, the probability of it happening, and the residual probability of the risk after the mitigation work has been done. This residual risk probability is always there, because in the project management world, there is no such thing as anything being 100 percent certain.
3. Risk Reduction Activities by Phase. Use a formal risk management process that connects each risk, its mitigation, and the IMS. The mitigation efforts need to appear in the schedule.
4. Risk Management Methodology. The risk management process, as shown in Figure 3.3, is a good place to start. This approach is proven and approved for use in high-risk, high-reward projects. The steps in the process are not optional and should be executed for ALL risks.
The Risk Management Plan tells us how we are going to manage risks on the project. The Risk Register is where we record these risks, their probability of occurrence and impacts, and our handling strategy. Figure 7.2 shows a hypothetical Risk Register for our kitchen remodel project, showing the minimum number of elements needed to manage project risks. Risk is composed of two core components:
1. The “Threat.” Circumstances with the potential to produce a loss or harm the project in some way.
2. The “Consequences.” The loss that will occur when the threat is realized.
There are three ways to structure the risk statement in the Risk Register, but each conveys the same core components, the threat and its consequence:
1. An If-Then Statement. “If we miss our next milestone, then the project will fail to achieve its production, cost, and schedule objectives.”
2. A Conditions-Concern Statement. “Data indicate that some tasks are behind schedule and staffing levels may be inadequate. We are concerned the program could fail to achieve its production, cost, and schedule objectives.”
3. A Condition-Event-Consequence Statement. “Data indicate that some tasks are behind schedule and staffing levels may be inadequate (condition). This will mean (event) missing our next milestone, with the project (consequence) failing to achieve its production, cost, and schedule objectives.”
The Risk Register must contain descriptions of the risk, impact, consequences, and conditions before any risk-handling plans can be made. Simply making a list of risks is not sufficient to protect the project from their occurrence.
The next step is to rank the risks so we can prioritize them, along with their impacts and cost to “handle.” In Performance-Based Project Management, risks need to be defined in terms of “cardinal” measures,8 that is, measures that are “calibrated” to the domain of the risk.9 The ordinal risk measure is just a relative ranking of the risks—one risk ranks higher than another. The cardinal risk measure is a numeric value describing the specific impact on the project in terms of cost, schedule, or technical performance. For example, an A-level cardinal risk will have a 15 percent unfavorable impact on cost and a 20 percent unfavorable impact on schedule.
The details of this cardinal approach are described on a scale of A, B, C, D, E, with each value assigned a specific probability of occurrence. For example, the probability of occurrence in cardinal terms might be expressed as:
A = unlikely to occur = 10 percent probability of occurrence
E = highly likely to occur = 90 percent probability of occurrence
The impacts require more detail about what will actually happen, and might be expressed as:
A = minimal or no impact
B = minor reduction in technical capability
E = severe degradation of the technical performance
These cardinal values can then be used to construct a Risk Register like the one shown in Figure 7.3.
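The cardinal scale can be made computable. The text gives A = 10 percent and E = 90 percent; the intermediate values for B, C, and D below are assumptions for illustration:

```python
# Cardinal probability scale: A and E come from the text;
# B, C, and D are assumed intermediate values for illustration.
PROBABILITY = {"A": 0.10, "B": 0.30, "C": 0.50, "D": 0.70, "E": 0.90}

def risk_exposure(level: str, cost_impact_pct: float) -> float:
    """Expected cost impact = probability of occurrence x impact if realized."""
    return PROBABILITY[level] * cost_impact_pct

# An A-level risk with a 15 percent unfavorable cost impact (from the text)
print(round(risk_exposure("A", 15.0), 2))  # 1.5
```

Multiplying the probability of occurrence by the impact if realized yields a simple “risk exposure” number, which lets risks be ranked cardinally rather than merely ordinally.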
The Performance Measurement Baseline may seem redundant, but it is the primary assessment tool for assuring the credibility of the project’s plan. It is the baseline of the cost, schedule, and deliverables for each work activity. Four activities must be done to produce a PMB:
1. Identify the business needs by describing the required business capabilities. These capabilities transcend simple features and function. They are the capabilities required by the business to meet its strategic objectives. The outcome of this effort is:
a. A clear and concise description for the business’s needed capabilities.
b. A description of the value stream these business capabilities enable. This value stream can be connected to the business case to close the loop for project governance.
2. Establish a requirements baseline derived from these business capabilities. These requirements should first be stated in business process terms, then in technical feature terms. Deconstructing these requirements is done using a requirements tree that traces to the work breakdown structure. In this way, the work packages that implement the requirements that, in turn, fulfill the needed capabilities are identified.
3. Establish a Performance Measurement Baseline. This is based on the work derived from the requirements baseline, and is represented in work packages arranged in a logical network, with budgets spread across the packages of work in a time-phased manner showing how much budget is needed and when. In some domains, budget is the same as cash, and this spread is a picture of cash flow. In others, budget and funding are separated, so the budget spread is just an indication of how much money will be consumed during the planned period of work; the “cash” for the work comes through the invoicing process.
a. Balance the budgeted cost for the work for each work activity. Determine how much it is going to cost to deliver a capability, the requirements for that capability, and the actual deliverables that implement the requirements. This “estimate” is just that—an estimate. It does not have to be precise, nor can it be precise. But it needs to be “reasonable” and “credible.” Without these estimates, the customer and the project team have no real understanding of what is ahead.
b. Balance the budget across the entire project. Examine where there is risk in the work to be performed. Develop a “management reserve” for those areas, and explicitly assign that reserve to be used to cover that risk.
c. Identify the physical percent complete measurement criteria for each deliverable. Measure only “tangible outcomes” from effort. Do not measure progress by measuring effort or cost; neither is a measurement of progress.
4. Execute the Performance Measurement Baseline needed to control the work by rolling up the properly spread budgets into a project-level budget assessment.
a. Capture the actual cost of work performed and physical percent complete. Both are needed to assess progress to plan. The cost of the work performed measures the cost variance: “What did the completed work actually cost compared to what we planned to spend for that work?” The physical percent complete measures the schedule variance: “How much physical progress have we made compared to the progress planned at this point in the project?”
b. Make management decisions based on the performance of the project using these numbers. Compare planned versus actual for both cost and schedule. With the variances, take explicit management action for the future work to “get back to GREEN.”
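Steps 4a and 4b can be sketched with the standard earned-value arithmetic. The dollar figures below are hypothetical:

```python
def earned_value(bac: float, physical_pct_complete: float) -> float:
    """Budgeted cost of work performed: physical % complete x budget at completion."""
    return bac * physical_pct_complete

# Hypothetical work activity: $100,000 budget, 40% physically complete,
# planned to be 50% complete by now, with $45,000 actually spent.
bac = 100_000.0
bcwp = earned_value(bac, 0.40)   # value of the work actually performed
bcws = earned_value(bac, 0.50)   # value of the work planned to date
acwp = 45_000.0                  # actual cost of work performed

cost_variance = bcwp - acwp      # negative: the work done cost more than planned
schedule_variance = bcwp - bcws  # negative: physically behind the plan
print(cost_variance, schedule_variance)
```

With these two variances in hand, the “get back to GREEN” decision in step 4b has a quantitative basis: both numbers at zero or above means the project is performing to plan.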
When we say Performance Measurement Baseline, there are actually three baselines. Figure 6.4 shows them at a high level; here are the details:
1. The Technical Performance Baseline is the requirements flow down and traceability map for each deliverable in the program.
a. A critical performance measure of the Technical Performance Baseline is the stability of requirements. The expected technical achievement for the actual progress is compared to the planned progress using periodic measurements or tests. These measures start with the Technical Performance Baseline, which defines the units of measure for a properly delivered product or service.
b. An important aspect of the Technical Performance Baseline is defining the units of measure for each deliverable in order to know what “done” looks like at each incremental assessment of maturity.
2. The Schedule Performance Baseline is the sequence of the work activities that produces the products or services from the program. This baseline contains the schedule margin derived from a Monte Carlo simulation. The Monte Carlo simulation constructs samples of the durations for the work from a probability distribution of the possible duration values. Using the scheduling tools, these samples are used to compute the completion times of the deliverables. A histogram of the “probability of occurrence” of all the possible completion times is then constructed. The benefit of this approach is that it shows the “confidence levels” for all estimates, not just a single number. With these confidence levels, the project manager has visibility into the probability of delivering on time and on budget in ways not possible with a single scalar number.
3. The Cost Performance Baseline is the “authorized time-phased budget-at-completion used to measure, monitor, and control overall cost performance on the program.”10
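A minimal sketch of the Monte Carlo approach, using only Python's standard library. The three work packages and their three-point duration estimates are hypothetical, and a real schedule network would not be purely serial:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

# Three serial work packages with (optimistic, most likely, pessimistic)
# durations in days; hypothetical estimates for illustration.
tasks = [(8, 10, 15), (4, 5, 9), (12, 15, 25)]

def simulate_completion(tasks, trials=10_000):
    """Sample each task's duration from a triangular distribution and
    sum along the serial chain, returning all simulated completion times."""
    return [sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
            for _ in range(trials)]

totals = sorted(simulate_completion(tasks))
p50 = totals[len(totals) // 2]           # median completion time
p80 = totals[int(len(totals) * 0.80)]    # 80 percent confidence level
print(f"50% confidence: {p50:.1f} days, 80% confidence: {p80:.1f} days")
```

The schedule margin is then the difference between the completion time at a chosen confidence level (say, the 80th percentile) and the deterministic sum of the most-likely durations.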
Earlier chapters provided an overview of the Performance Measurement Baseline; now we will look at how that baseline is created.
1. Deconstruct project scope. This means deconstructing the project into a work breakdown structure starting with well-formed technical, business, and project requirements. Well-formed means the requirements are traceable to the business case. Without this connection, the necessary technical, cost, and schedule tradeoffs cannot be made in an analytical manner.
2. Assign responsibility. Identify those accountable for the individual work packages. All aspects of the work package are under the control of the person assigned, including the descriptive information and attributes fields. Although this seems simple, it is a critical aspect of the PMB, because without a single point of integrative responsibility the development of a credible baseline can become a vague and out-of-control process.
3. Arrange work packages. The resulting work packages are organized in a logical network, with predecessors and successors. No widows or orphans, no lead or lag relationships, no hard constraints (only as soon as possible).
4. Develop time-phased budget. Budget spreads are developed from the labor assignments with named resources and their labor rates. The work package manager is accountable for producing a credible budget spread for each work package.
5. Assign performance measures. Objective performance measures are limited to 0 percent/100 percent completion and apportioned milestones with percent assignment. The apportioned milestones must be based on physical evidence of completion and agreed to by the project manager.
6. Set Performance Measurement Baseline. With the budget spreads and objective performance measures in place, the network of work activities can be baselined.
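Step 4 above, the time-phased budget, is simple arithmetic: hours per period for each named resource multiplied by that resource's labor rate. The resources, rates, and hours below are hypothetical:

```python
# Hypothetical labor rates in dollars per hour for the named resources
labor_rates = {"carpenter": 60.0, "electrician": 85.0}

# Hours planned per weekly period for each resource on one work package
assignments = {
    "carpenter":   [40, 40, 20, 0],
    "electrician": [0, 16, 24, 8],
}

def time_phased_budget(assignments, rates):
    """Spread the budget across periods: sum of hours x rate in each period."""
    periods = max(len(hours) for hours in assignments.values())
    return [sum(rates[name] * hours[p] for name, hours in assignments.items())
            for p in range(periods)]

spread = time_phased_budget(assignments, labor_rates)
print(spread)       # budget needed in each period
print(sum(spread))  # this work package's budget at completion
```

Summing the spread gives the work package's budget at completion; rolling the spreads of all work packages together, as in step 6, yields the time-phased project-level budget.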
When we start to develop requirements that are derived from the needed user capabilities, the first impulse is to simply make a list of the requirements. But we actually need a Traceability Matrix in the form of a table that connects the requirements to the other artifacts of the project, starting with a map of the requirements and the needed capabilities. These requirements are traced to the needed capabilities and to the WBS. From the WBS, the requirements are traced to the work packages in the IMS. Each requirement then has a home, a reason for being—the capability; an implementation activity—the WBS and work package; a MoE; a MoP; a KPP; and a TPM. The matrix that contains this information is the guide for assessing if the deliverable is “done.” This leaves no ambiguity about what “done” means.
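As a sketch, one row of such a Traceability Matrix might look like the following; every identifier and measure shown is a hypothetical example, drawn loosely from the ERP case used elsewhere in this chapter:

```python
# One row of a Requirements Traceability Matrix; all values are illustrative.
matrix = [
    {
        "requirement":  "REQ-014: process 500 transactions per minute",
        "capability":   "CAP-03: reduce transaction processing cost",
        "wbs":          "1.4.2",
        "work_package": "WP-1.4.2-01",
        "moe": "cost per transaction reduced 20 percent",
        "mop": "sustained throughput of 500 transactions per minute",
        "kpp": "transaction error rate below 0.1 percent",
        "tpm": "99.9 percent availability worldwide",
    },
]

def untraced(matrix):
    """A requirement can be assessed as 'done' only when every trace is filled in."""
    links = ("capability", "wbs", "work_package", "moe", "mop", "kpp", "tpm")
    return [row["requirement"] for row in matrix
            if not all(row.get(k) for k in links)]

print(untraced(matrix))  # an empty list means every requirement has a home
```

A requirement that fails this completeness check has no verifiable definition of “done.”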
The terms “Measures of Effectiveness” and “Measures of Performance” are likely unfamiliar outside the defense and space flight industries, but they are critical to determining what “done” looks like for any project. These artifacts need to be written down and agreed to by the customer and the project management team before committing to a cost and schedule. To meet the requirements that fulfill the capabilities the project must deliver, we need to know in some clear and concise way how effective the outcomes must be.
In almost every project, the description of the problem to be solved is “ill-formed,” with no clear criteria to guide the selection of the solution.11 These measures are the basis for increasing the probability of the project’s success, and must be established before proceeding to develop the other attributes the project requires. They state what “done” looks like in units meaningful to the decision makers. Here, we’ll simply define what they mean:
Measures of Effectiveness (MoE). These are operational measures of success that are closely related to the achievements of the mission or operational objectives evaluated in the operational environment under a specific set of conditions. The MoEs focus on capabilities independent of any technical implementation. They describe what “success” means for the customer. The MoEs are “mission” or “business case” dependent and must discriminate between the choices that can be made during the development of the project’s outcomes. They measure the extent to which MoPs satisfy their requirements. In the end, the MoE is a measure of the utility of the solution: Can the resulting system “do the job it is intended to do”? And if so, how will we recognize that it is doing that job? The MoE is the basis of customer satisfaction. The utility is defined by the customer. In our ERP example, effectiveness is measured by reducing the costs, but also by establishing a platform for scaling the solution to continue to reduce the cost of transaction processing over time. For our kitchen remodel, the “utility” is the look and feel of the result. This is one of those intangible benefits to the remodel that can’t be “spec’d in” on the drawings. The role of the kitchen designer is to ensure that the “feel” of the result is satisfying.
Measures of Performance (MoP). Characterize physical or functional attributes relating to the system’s operation, measured or estimated under specific conditions. They are the attributes that ensure that the system has the capability to perform the needed functions or services. They assess the system to ensure that the design requirements satisfy the Measures of Effectiveness. For example, the MoP for an automobile might be how fast it can accelerate from zero to sixty. This is meaningful to the driver when merging into traffic on the highway. In other words, it provides some tangible evidence that the resulting product or service is fulfilling its “mission.” The MoP is the way to measure what the system will achieve when it is operational.
Key Performance Parameters (KPP). Represent the capabilities and characteristics so significant that failure to meet them can be cause for reevaluation, reassessment, or termination of the project. The customer gets to define the KPPs, starting with which ones are “key.” The “key” KPP for the kitchen remodel will be the traffic flow for cooking work. If there are collisions all the time between the cooks in the kitchen, then the design and implementation KPP will not have been met. For the ERP system, a KPP would be the transaction rate, but also the error rate for those transactions. Having high transaction capacity but a high error rate does no good, because the corrections of the transactions swamp the throughput.
Technical Performance Measures (TPM). Attributes that determine how well a system or system element is satisfying or expected to satisfy a technical requirement or goal. The TPMs are assigned during the design process. They define compliance with the performance requirements of the outcomes of the project. They are the primary attribute used to describe risk, because if a TPM is not “in compliance,” the outcome of the project will not be acceptable to the customer. One TPM for the ERP system is the availability of the system around the world. These “ilities” are the starting point for TPMs: reliability, sustainability, repairability, maintainability.
The management of projects encompasses a variety of activities such as managing cost, schedule, resources, performance assessments, and so on. But before those can be successful, we need to know what to measure, how to measure it, the units of measure, and the descriptions of what “done” looks like using those measures. In this chapter, we’ve examined a few—the critical few—documents needed to perform these measurements. Starting with the Statement of Work and Statement of Objectives, there is agreement on what the project should be doing to produce the outcomes the customer needs. A Concept of Operations further describes what the outcomes of the project will be “doing” when they arrive. The ConOps is the source of the Measures of Effectiveness and Measures of Performance for the project’s products or services.
In order to identify what work needs to be done and what exactly will result from that work we need a work breakdown structure. The WBS tells us what deliverables will be produced and how they are related. In addition to the WBS, we need the organizational breakdown structure to know “who” is going to be doing the work to produce the “what.” The Responsibility Assignment Matrix shows how these connections are made and the person “accountable” for the successful delivery of the outcomes.
With these documents in hand, we can start to build the Integrated Master Schedule to perform the work needed to produce the project’s outcomes. We also need to identify the impediments to our success through the Risk Management Plan and the resulting Risk Register. Finally, all of these elements are assembled into the Performance Measurement Baseline, which is a “time-phased” cost and schedule of the work to be performed to complete the project.
In order to measure our performance, we need to know how effective we must be for success and what performance goals the project must satisfy. We can’t measure everything on the project, but we need to measure the Key Performance Parameters (KPP) and the technical performance of each deliverable as the project progresses toward its goal. When all the measurement metrics are used, the customer gains visibility into the delivered capabilities that provide the business value, and into the technical performance that ensures those capabilities will be provided on time, on budget, and with the planned performance.
Only when all of these measures are in place, traceable to the plans and scheduling, measured using physical percent complete, and produced on time, on budget, with risks identified and handled, with all the planned resources available and ready to work can the project be considered “under control.”