CHAPTER 1

Introduction

Given the temporary and unique nature of projects (typically within new and different contexts and conditions), the practice of project management is often a complex and challenging endeavor. Project managers must be able to deal with uncertainty and unknowns. Without access to historical data or known information, project managers use estimation, forecasting, and prediction to plan for projects. One of the most commonly used approaches is to gather expert judgment to “fill in the gaps” in information. For example, project managers may obtain the judgment of experts to estimate the resources necessary to successfully complete a project. Likewise, project managers may gather expert judgment to forecast likely future scenarios or risks that may occur during the life of a project.

1.1 Background

Expert judgment is by far the most frequently listed tool/technique in A Guide to the Project Management Body of Knowledge (PMBOK® Guide) – Fifth Edition. As illustrated in Table 1, expert judgment is explicitly listed as a tool/technique for 28 of the 47 project management processes (59.6%) and mentioned implicitly in another six processes (bringing the frequency to 72.3%).

For example, within the PMBOK® Guide, expert judgment is suggested as a potential tool/technique in all six of the processes contained in the Project Integration Management Knowledge Area. On the other end of the spectrum, expert judgment is not explicitly listed as a tool/technique for any of the three processes in the Project Quality Management Knowledge Area.

[Table 1: Expert judgment as a listed tool/technique across the 47 PMBOK® Guide project management processes]

Across all 10 Knowledge Areas, expert judgment is listed as a tool/technique five times more frequently than the next most commonly listed project management tool/technique. Given this prevalence, we may conclude that expert judgment plays an important role in project management.

Despite its prevalence as a project management tool/technique, expert judgment is not fully described within the PMBOK® Guide, which provides only the following short definition:

Judgment provided based upon expertise in an application area, knowledge area, discipline, industry, etc. as appropriate for the activity performed. Such expertise may be provided by any group or individual with specialized education, knowledge, experience, skill, or training. (PMI, 2013, p. 538)

This definition is expanded upon in only a few of the project management processes, and any additional detail is typically confined to brief descriptions of who might be included as experts, lists of quantities or qualities to be characterized using expert judgment, or precautions about taking expert bias into account. There is no description of how the tool/technique of expert judgment may or should be applied. As a result, it would be difficult for a practitioner to understand exactly how to apply expert judgment as a tool/technique using only the definition provided.

By comparison, tools/techniques such as the critical path method (CPM) and the probability-impact (P-I) matrix, which are each referenced in only a single project management process, are much more fully described in the PMBOK® Guide, such that a practitioner can more easily apply that particular tool/technique. Additionally, although these two less frequently invoked tools/techniques (i.e., CPM and P-I matrix) are contained in the PMI Lexicon of Project Management Terms (2012), the ubiquitous expert judgment is not.

There is a vast amount of literature about expert judgment elicitation. There are even entire books devoted to the subject of expert judgment elicitation (e.g., Ayyub, 2001; Cooke, 1999; Meyer & Booker, 2001; O’Hagan et al., 2006). Yet, none of these books focuses exclusively on the expert judgment needs of project managers. In project management, expert judgment includes both qualitative and quantitative methods, both direct and indirect elicitation, and both individual and consensus aggregation. Examples of project management expert judgment elicitation methods include brainstorming, the Delphi method (Dalkey & Helmer, 1963), direct point elicitation, distribution estimation (including the prominent PERT [program evaluation and review technique], a three-point estimation technique developed by Malcolm, Roseboom, Clark, and Fazar [1959]), the analytic hierarchy process developed by Saaty (1980), and scaling methods (e.g., Kent, 1964). Though many of these expert judgment elicitation techniques are well established, much has been learned over the past several decades about how they can be improved (e.g., Armstrong, 2011). Yet, despite the fact that the number of articles about expert judgment has increased steadily in recent years (Jørgensen & Shepperd, 2007), extensive study about how to improve expert judgment in project management seems to be confined to two specific aspects of project management: time and cost estimation (which represent only a small portion of the project management processes that call on expert judgment as a tool/technique).
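To make one of these techniques concrete, the classic PERT three-point estimate combines an optimistic (O), most likely (M), and pessimistic (P) value into a mean of E = (O + 4M + P)/6 with a standard deviation of (P − O)/6. The sketch below is a minimal illustration of that standard formula; the task durations used are purely illustrative.

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Classic PERT (beta-approximation) three-point estimate.

    Returns the expected value E = (O + 4M + P) / 6 and the
    conventional standard deviation (P - O) / 6.
    """
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return mean, std_dev


# An expert judges a task at best 4 days, most likely 6, worst 14:
mean, sd = pert_estimate(4, 6, 14)
print(mean, sd)  # 7.0 and roughly 1.67
```

Note how the weighting toward the most likely value tempers the influence of the extreme judgments, which is the usual rationale for preferring the three-point estimate over a single direct point estimate.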

Even with considerable advances in the areas of time and cost estimation, it seems these advances have not been incorporated into the most frequently used practitioner references (including the PMBOK® Guide), the basic software packages (such as Microsoft Project), or the project management texts (e.g., Kerzner, 2012; Mantel, Meredith, Shafer, & Sutton, 2010). Therefore, based upon a preliminary and limited review of the relevant literature conducted before embarking on this study, we anticipated that, even though much empirical and theoretical work has been conducted regarding the elicitation of expert judgment, these developments have not been widely adopted into the practice of project management.

1.2 Problem Statement

This lack of definition for expert judgment represents a significant gap in the PMBOK® Guide toolkit. Without well-designed elicitation processes, expert judgment is subject to known flaws that can render the resulting estimates inaccurate. When project management processes are based upon flawed judgments and estimates, projects are susceptible to missed deadlines, budget overruns, and/or failure to meet stakeholder expectations. Such outcomes are not uncommon. In general, project management practice lags behind theory, as described by Ahlemann, Arbi, Kaiser, and Heck (2013):

Despite [a] long tradition of prescriptive research, project management methods suffer a number of problems, such as lack of acceptance in practice, limited effectiveness, and unclear application scenarios. We identify a lack of empirical and theoretical foundations as one cause of these deficiencies. (p. 44)

Expert judgment suffers from all three of the above-noted problems. Whether or not practitioners are aware of it, the prescriptive research about how to improve expert judgment has not been widely adopted into project management practice. Further, given the over-reliance on ad hoc and qualitative methods, elicitation of expert judgment is less effective than it might be. Additionally, with such a wide range of diverse application scenarios (e.g., see Table 1), it is not apparent that the most appropriate methods are being applied for each project management process and scenario.

Thus, it is critical that project managers have access to a foundational set of guidelines in order to handle expert judgment appropriately and make more accurate project estimates. This research project seeks to narrow the PMBOK® Guide toolkit gap by addressing the following question: How might expert judgment be better defined to ensure that the most accurate information is elicited for use in today’s project management processes?

1.3 Objectives and Scope

In order for this translational research to address that basic research question, the following three specific aims were identified:

  1. Identify the state of the art/science in expert judgment elicitation broadly (e.g., across a variety of disciplinary areas such as engineering, political science, and environmental management).
  2. Determine the state of the practice in expert judgment elicitation for project management.
  3. Narrow the theory-practice gap in expert judgment elicitation for project management.

1.4 Methodology

The research contained in this report involves a mixed methodology (Creswell, 2013) to achieve the overarching goal of developing a practitioner-ready expert judgment reference for project managers.

[Figure 1: The three interrelated, sequential phases of the mixed methodology]

As illustrated in Figure 1, this mixed methodology comprises three interrelated, sequential phases that address the three specific aims of the research:

  • Phase 1 comprised a literature review to identify the state of the art/science in expert judgment elicitation broadly.
  • Phase 2 employed a descriptive survey to determine the state of the practice in expert judgment elicitation in project management.
  • Phase 3 employed controlled experiments to determine the effectiveness of different methods for selecting experts, addressing several of the theory-practice gaps identified in the first two phases.

Because the design types noted above (i.e., literature review, descriptive survey, and controlled experiments) are straightforward and well established, the details of those design types will not be provided here. Rather, for each of the three phases, details regarding type of research, rationale, process, type of data, source and selection of data, expected outcomes, and potential problems will be provided in the following sections. Further, only the general design type is identified for each phase because research design is logical, rather than logistical, in nature (Yin, 2003).

The following is a high-level description of some of the key elements of the methodological procedures for each of the three phases of this research:

1.4.1 Phase 1

  • Goal: Identify the state of the art/science in expert judgment elicitation broadly.
  • Design Type: Literature Review
  • Rationale: In order to determine which expert judgment techniques are most appropriate for today’s project management processes, it was necessary to first determine the spectrum of expert judgment techniques available for application. A review was conducted of the literature from across many disciplinary areas (e.g., engineering, political science, and environmental management) in addition to project management.
  • Process: A systematic literature review process (Brereton, Kitchenham, Budgen, Turner, & Khalil, 2007) was employed.
  • Type of Data: Qualitative
  • Data Source and Selection: Data were taken from a systematic search of selected bibliographic databases containing published research studies.
  • Expected Outcomes: It was anticipated that hundreds of studies and dozens of expert judgment techniques would be identified from the many disciplinary areas, and that many of these techniques would not be widely used in project management, thereby providing opportunities to enhance the accuracy of the estimates made in project management.
  • Potential Problems/Alternative Approaches: Expert judgment of some form is used in virtually all disciplinary areas, so one major challenge was to develop an effective means of narrowing the search parameters while maintaining an adequate pool of literature. Also, because available subscriptions to certain desired databases are limited, some articles were not readily obtainable. In those cases, alternative sources (such as interlibrary loans) were sought to obtain the relevant publications.

1.4.2 Phase 2

  • Goal: Determine the state of the practice in expert judgment elicitation in project management.
  • Design Type: Descriptive Survey
  • Rationale: It was anticipated that a variety of expert judgment elicitation techniques are in use in project management today. Thus, it was essential to identify which practices are most prominent in current use.
  • Process: A standard survey methodology (Groves et al., 2013) was observed.
  • Type of Data: Qualitative and quantitative
  • Data Collection: An online survey was administered to project management professionals using the Project Management Institute Survey Links program. Based on previous similar studies, it was anticipated that there would be roughly 400 to 500 respondents. Demographic variables included job position, experience, field of specialty, and office location. Expert judgment study variables included frequency of use, purpose of use, context, policies, and methods.
  • Expected Outcomes: It was anticipated that the response rate would be sufficient to provide meaningful results because the survey would be relatively short, relevant to project management professionals, and administered through Survey Links.
  • Potential Problems/Alternative Approaches: A low response rate to the online survey was anticipated to be a potentially significant problem. In such a case, an alternative would have been to deploy the survey through PMI’s regional chapters, as was done in a recent earned value management study (Song, 2010).

1.4.3 Phase 3

  • Goal: Identify how general expert judgment methods (e.g., expert selection) can be adapted to project management.
  • Design Type: Controlled Experiment
  • Rationale: Once a collection of potential practices (from phase 1) and current practices (from phase 2) of expert judgment in project management was identified, it was necessary to determine means of identifying experts to be used for judgment elicitation.
  • Process: An expert elicitation protocol was employed. The protocol consisted of two main parts: expert training and expert elicitation. Subjects were asked to make estimations about known quantities in a variety of modes.
  • Type of Data: Quantitative and qualitative
  • Data Collection: Actual estimations by project management professionals were to be collected using a standard expert elicitation protocol at professional society meetings (such as the PMI® Global Congress—North America, or PMI regional chapter meetings) and also virtually using elicitation techniques such as the Delphi method.
  • Expected Outcomes: It was anticipated that several rounds of experiments would need to be successfully conducted to provide sufficient results.
  • Potential Problems/Alternative Approaches: It was anticipated that gaining agenda time at the various Project Management Institute events to conduct the experiments would be difficult. In such a case, voluntary participation would be sought outside of the agenda or via online methods (e.g., through webinar format). Alternative professional and academic venues were also sought.

1.5 Data Analysis

Phase 1 involved a literature review that was designed to identify the state of the art/science in expert judgment elicitation broadly. In this portion of the study, the data were the individual articles and the findings of the studies contained therein. Each article was recorded with specific attention being paid to the expert judgment elicitation methods used. This phase did not attempt to complete a meta-analysis of the studies’ results.

Phase 2 involved a descriptive survey designed to determine the state of the practice in expert judgment elicitation in project management. The data were the responses of the survey participants. Two forms of analysis were completed in this portion of the study. First, summary statistics were developed for all closed responses. Then, a small cross-sectional analysis was conducted using the demographics as the independent variables to determine if certain expert judgment methods were used in certain situations. For the open responses, contextual content analysis was conducted using raters and software (Krippendorff, 2012).
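The cross-sectional idea described above can be sketched as a simple cross-tabulation of method use against a demographic variable. This is not the study's actual analysis code, and the field names and records below are purely hypothetical; it illustrates only the shape of the analysis.

```python
from collections import Counter


def cross_tab(responses, group_key, method_key):
    """Count elicitation-method mentions within each demographic group.

    Each response is a dict; the result maps (group, method) pairs to counts.
    """
    return Counter((r[group_key], r[method_key]) for r in responses)


# Hypothetical survey records (field names are illustrative only):
responses = [
    {"industry": "IT", "method": "Delphi"},
    {"industry": "IT", "method": "brainstorming"},
    {"industry": "construction", "method": "Delphi"},
    {"industry": "IT", "method": "Delphi"},
]

table = cross_tab(responses, "industry", "method")
print(table[("IT", "Delphi")])  # 2
```

From such a table, one can then test whether method choice varies meaningfully across groups, which is the essence of the cross-sectional comparison described above.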

Finally, after several theory-practice gaps were identified in the first two phases, phase 3 employed controlled experiments to address the gap in how to select experts to provide judgments for project management. The data were the participants’ responses on the expert elicitation worksheet, as well as their scores on the critical thinking, fluency, and numeracy instruments. By gathering experts’ estimates of known quantities on the elicitation forms, the expert selection processes could be evaluated. Most often, the expert judgments consisted of a series of estimates or value judgments to be analyzed as quantities. For example, in the case of elicited distribution parameters, responses were treated using an arcsine transformation for ease of comparison. Hit rates were established to determine the accuracy of experts, and standard statistical methods were used to determine the most effective expert judgment elicitation methods.
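To make the hit-rate idea concrete, the sketch below assumes experts supply credible intervals for known quantities and scores an interval as a "hit" when it contains the true value; the arcsine transformation shown is the standard variance-stabilizing transform for proportions. All values are illustrative, not data from the study.

```python
import math


def hit_rate(intervals, true_values):
    """Fraction of elicited credible intervals that contain the true value."""
    hits = sum(
        1
        for (low, high), truth in zip(intervals, true_values)
        if low <= truth <= high
    )
    return hits / len(true_values)


def arcsine_transform(p):
    """Variance-stabilizing arcsine transformation for a proportion p in [0, 1]."""
    return math.asin(math.sqrt(p))


# Three interval estimates from one expert, scored against known quantities:
intervals = [(10, 20), (5, 9), (100, 250)]
truths = [15, 12, 180]
print(hit_rate(intervals, truths))  # 2 of 3 intervals contain the truth
```

A well-calibrated expert giving, say, 90% credible intervals should capture the truth roughly 90% of the time; comparing observed hit rates (on the transformed scale) against that target is one common way to score candidate experts.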

1.6 Organization of Report

The report will follow an organization that is aligned with the design of the research. This current chapter provides an overview of the study. Chapter 2 provides an overview of the state of the art/science of expert judgment broadly across many domains and disciplines. This state-of-the-art/science information was obtained through a review of the literature in phase 1 of the research project. Chapter 3 provides a summary of the state of the practice for using expert judgment in project management. This state-of-the-practice information was obtained through a survey of project management practitioners in phase 2 of the research project. Chapter 4 provides information about expert selection. This practical information was obtained through a pair of experiments developed to test two methods for selecting experts (i.e., phase 3 of the research project). Chapter 5, the final chapter of this report, provides a discussion about how the information gathered in this research project can be used by project management practitioners to improve their use of expert judgment as a tool/technique.
