CHAPTER 5

Isolation of Program Effects

It’s always good to be underestimated.

—Donald Trump

Route Guidance: Give Credit Where Credit Is Due

We have all been in those meetings where good things happen and everyone takes credit. For example, the CEO of a large financial institution asks his executive team why there has been an increase in consumer loan volume. The executive responsible for consumer lending points out that his loan officers are now more aggressive. The marketing chief adds that she thinks the increase is related to a new promotional program and an increase in advertising. The chief financial officer posits that the increase is due to declining interest rates. The vice president of HR reminds the team that the consumer loan referral incentive plan was recently altered, with an increase in the referral bonus for all employees who refer legitimate customers for consumer loans. She claims, “When you reward employees to bring in customers, they will bring them in, hence the increase in loan volume.” Finally, the chief learning officer speaks up: “We just revised the consumer lending seminar and it was extremely effective. When you have effective training and build skills in sales, loan volume increases.”

While every one of these claims has a sound basis, the responses leave the CEO puzzled as to the actual cause of the increase.

Isolating the effects of a program on improvement in business measures allows program owners to give credit where credit is due and offer an explanation as to how much of an improvement is due to their program. While some might argue that there is no need to include this step when evaluating programs, others will argue that without it, there is no way to answer the question, “How do you know it was your program that caused the results?” There are a variety of ways to isolate the effects of programs on improvement in business measures. Each one has benefits; each one has challenges and opportunities.

Suggested Route: Evidence Versus Proof

Today more than ever, clients and senior executives ask program owners to answer a fundamental question: “How do you know it was your program that caused the results you report?” This simple question causes anxiety for many learning and development professionals. For some, the fear has to do with not understanding the best approach to answering the question. For others, it is the concern that their answer will lack the accuracy with which they would feel most comfortable. The good news is that there are a variety of ways to address this question, allowing learning and development professionals in all organizations to address it with confidence.

The Techniques

Control group arrangement is a classic approach to isolate a program’s effect on performance in a measure. Control group arrangement includes two groups: an experimental group (those involved in the program) and a control group (those not involved in the program). Out of a survey of 235 users of the ROI Methodology, 32 percent identify the control group as a technique they use to isolate the effects of their programs.

Another technique is trend line analysis. This requires tracking existing data for a period of time, then forecasting the trend to determine where it would go if there were no changes in conditions. Following program implementation, the actual data are tracked and compared with the projection. Twenty-nine percent of users of the ROI Methodology apply trend line analysis as a technique for isolating the effects of a program.

Forecasting methods based on regression analysis are also good tools for isolating the effects of programs. While forecasting is applied only 5 percent of the time, use of these models is increasing given the current interest and growth in human capital analytics.

The isolation technique most frequently applied is the estimation process, which is used 55 percent of the time. Estimates require gathering input from sources who can make a reliable judgment about the cause of improvement in business measures, and then adjusting those estimates based on the confidence the sources have in them. This approach ensures that the output is as credible and reliable as possible given the nature of the questioning. The frequent use of estimates is primarily because the other techniques are often not feasible; estimation is a backup technique when all else fails.

Case studies and identifying the contribution of other factors and subsequently allocating what is left to the program are two other techniques, used less frequently than control groups, trend line analysis, forecasting, and estimation. An evaluator might research previous case studies in other organizations, identify the contribution a program made in those cases, and then apply the same contribution level to their own program. The downside to this technique is that case studies described in the literature often do not reflect the exact conditions of the organization. Sometimes it is easier to account for the other influences first; the remaining improvement is then allocated to only a few factors, including the learning and development program.

A final technique is the use of customer input. Customers are an excellent source of data because of the objectivity they have in responding to questions. Unfortunately, customers are not always the best source of data when isolating the effects of a program, because they cannot account for factors that occur within the organization. They can certainly identify why they make a purchase, why they visit a store, and why they interact with an organization. But they would be hard-pressed to pass judgment on the effectiveness of a new pay system, a new technology, or some other internal process of an organization.

Control Group Arrangement

The gold standard technique to isolate the effects of a program is the control group arrangement. While there are many types of control group designs (classified as experimental and quasi-experimental designs), the two most fundamental designs involve two groups: the experimental group and the control group.

Classic Experimental Design

The classic experimental design involves random selection of participants in your “experiment” and random assignment of half to the experimental group and half to the control group. From there you compare pre-program data to post-program data for each group. Because you select participants randomly from a defined, rather homogeneous population and randomly assign them to groups, you essentially control for factors that can influence improvement in the measure of interest. The only factor that is different between the two groups is the program. This design answers the question: “What is the difference in the change in performance between the two groups?” Figure 5-1 depicts this arrangement.

FIGURE 5-1. CLASSIC CONTROL GROUP ARRANGEMENTS

For example, let’s assume your organization has an absenteeism problem, specifically unexpected absences. You plan to implement a program to help supervisors reduce absenteeism, but you want to pilot the program before you roll it out to all supervisors.

Begin in the corporate office because the work environment is similar for all potential participants. Exclude any departments without an absenteeism problem, then randomly identify supervisors from departments that have absenteeism problems and similar environments, including staff size. Next, randomly assign half of the supervisors to the program, leaving the remaining half as the control group.

Prior to the program’s start, take a pre-program measurement of absenteeism for each group. In this example, the experimental group represents 90 employees working 240 days per year with an average absenteeism rate of 4 percent (864 absences per year; an average of 72 per month). The control group represents 87 employees working 240 days per year with an average absenteeism rate of 4 percent (835 absences per year; an average of 70 per month).

Implement the program and track absenteeism for the next three months. At the end of the three-month timeframe, compare the absenteeism rates between the two groups. After the three months of program implementation, the experimental group’s monthly average was 64 absences and the control group’s monthly average was 67. Absenteeism went down in both cases, but it clearly went down at a greater rate for the program group. Compare the experimental group’s pre-program average of 72 monthly absences to its post-program average of 64: a change of eight absences. Then compare pre- versus post-absenteeism for the control group: 70 absences per month before the program and 67 after, a difference of three. So the difference in the change in performance due to the program is five absences.
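
For those who like to see the arithmetic in code, here is a minimal Python sketch of the difference-in-differences calculation, using the monthly averages from the example above (in practice these figures would come from your attendance records):

# Difference-in-differences for the absenteeism pilot described above.
# The monthly averages are the figures from the worked example.
experimental = {"pre": 72, "post": 64}  # program group, avg absences/month
control = {"pre": 70, "post": 67}       # control group, avg absences/month

change_experimental = experimental["pre"] - experimental["post"]  # 8
change_control = control["pre"] - control["post"]                 # 3

# The program's effect is the difference in the two changes: 8 - 3 = 5.
due_to_program = change_experimental - change_control
print(f"Monthly absences avoided due to the program: {due_to_program}")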

FIGURE 5-2. CLASSIC CONTROL GROUP EXAMPLE

It is important to remember, when using experimental or quasi-experimental designs, the experiment is with programs, not people.

There are a variety of ways to randomly select participants for your study and randomly assign them to either the control or the experimental group. Simple random selection and assignment is the most common approach. It is typically done with a random numbers table or a random number generator. Graphpad.com provides a variety of statistical tools, including a random number tool that offers options to randomly select a subset of subjects and randomly assign subjects to groups. Using these two options, you can easily identify subjects for your experiment and assign them to the experimental and control groups.
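
If you prefer to script these steps rather than use an online tool, Python’s standard library handles both random selection and random assignment. This is a minimal sketch; the pool of 60 supervisors and the sample of 20 are illustrative numbers, not taken from the example.

import random

# Hypothetical pool of eligible supervisors (IDs are illustrative).
population = [f"supervisor_{i:03d}" for i in range(1, 61)]

random.seed(42)  # fix the seed so the selection is reproducible

# Step 1: randomly select a subset of subjects for the study.
study_sample = random.sample(population, 20)

# Step 2: randomly assign half to each group.
random.shuffle(study_sample)
experimental_group = study_sample[:10]
control_group = study_sample[10:]
print(experimental_group, control_group, sep="\n")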

Post-Program-Only Design

Another control design is called post-program-only design, which answers the question, “What is the difference between the two groups?” Figure 5-3 depicts this arrangement.

FIGURE 5-3. POST-PROGRAM-ONLY DESIGN

While random assignment can be used to form the experimental and control groups, the lack of a pre-program measure opens the door to selection bias. An alternative way to select your two groups is to work with your team and your client to identify the factors by which you will match the groups. With randomized groups, the randomization process, in theory, controls for factors that could influence improvement in a measure. With nonrandom groups, you identify the factors that will have the greatest influence on performance of the measure and match your groups accordingly. One matching factor is pre-program performance in the measure itself.

For example, in a large retail store chain of 420 retail stores, there was interest in increasing sales. Senior leaders, along with the help of the learning and development team, identified an off-the-shelf program that would help sales representatives better interact with their customers. The thought was that if salespeople interacted more effectively and frequently with customers, there should be an increase in sales. Because there were 420 stores, it was decided that the team would pilot the program in three stores and match those three stores to three other stores that did not receive the training. It was further determined that the focus of the training would be on the electronics department, as that was the department identified with the greatest opportunity for improvement. The measure of interest was weekly sales per person, so an initial match was performance in sales. This, in essence, controlled for the sales volume. It was also determined that three other factors would have the greatest influence on the sales: customer traffic, store location, and store size. So the criteria for matching the groups were department in store (electronics), store performance, customer traffic, store location, and store size.
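
One way to operationalize this matching is a simple nearest-neighbor pairing on the agreed criteria. The sketch below is only illustrative: the store figures are hypothetical, and a real analysis would screen all 417 remaining stores rather than three candidates.

# Match each pilot store to the closest non-pilot store on normalized
# matching criteria. All store figures below are hypothetical.
stores = {
    # store: (weekly sales per person, customer traffic, size in sq. ft.)
    "A": (9700, 4200, 52000),
    "B": (9900, 4500, 48000),
    "C": (10400, 5100, 55000),
    "D": (9650, 4150, 51000),
    "E": (9950, 4480, 47500),
    "F": (10350, 5000, 56000),
}
pilot_stores = ["A", "B", "C"]
candidates = ["D", "E", "F"]

def normalize(values):
    # Scale one criterion to [0, 1] so no single factor dominates.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

criteria = [normalize(column) for column in zip(*stores.values())]
normed = dict(zip(stores, zip(*criteria)))

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(normed[a], normed[b])) ** 0.5

for p in pilot_stores:
    match = min(candidates, key=lambda c: distance(p, c))
    candidates.remove(match)  # use each control store only once
    print(f"Pilot store {p} matched with control store {match}")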

After the program, weekly sales data were collected for three months for both the control group and the experimental group. Table 5-1 shows the data for the first three weeks and the final three weeks (13, 14, and 15) of that period. Averaging the final three weeks is more appropriate than simply using data for the last week of the three months, because a one-week spike in the data could distort the results. As the data show, there is a difference between the two groups. The difference in performance due to the program is $1,626.

TABLE 5-1. POST-PROGRAM-ONLY DATA

Level 4: Average Weekly Sales (Post-Training Data)

Weeks After Training            Trained Groups    Control Groups
1                               $ 9,723           $ 9,698
2                                 9,978             9,720
3                                10,424             9,812
13                              $13,690           $11,572
14                               11,491             9,683
15                               11,044            10,092
Average for weeks 13, 14, 15    $12,075           $10,449
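
The Table 5-1 comparison reduces to a few lines of arithmetic; here is a minimal Python sketch using the final three weeks of data:

# Average weekly sales for weeks 13-15 (Table 5-1).
trained = [13690, 11491, 11044]
control = [11572, 9683, 10092]

avg_trained = sum(trained) / len(trained)  # $12,075
avg_control = sum(control) / len(control)  # $10,449

# Weekly sales difference attributed to the program: $1,626.
print(f"Due to the program: ${avg_trained - avg_control:,.0f}")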

These two classic designs offer as close to proof as possible of the contribution your program makes to the improvement in a measure. The key is in matching the groups. The gold standard is true experimental design, which includes randomly selecting participants for the study from a defined population, then randomly assigning half of the group to the experimental group and the other half to the control group. Unfortunately, we do not have the opportunity to use the gold standard very often; much of the time we offer programs to predefined groups. An alternative, then, is to purposefully define the criteria by which to match the groups. However, there are times when a control group arrangement is not appropriate or feasible given the nature of the program, the organization, and the evaluation. If that is the case, look to alternative approaches.

Trend Line Analysis

Another technique to isolate the effects of the program is trend line analysis, which requires tracking existing data over a period of time and determining the extent to which a trend exists. A trend simply means that the data are stable: They are moving in a consistent direction, whether improving, getting worse, or holding flat. Once you determine the stability in your data, you can use your favorite analysis tool (such as Excel or Numbers) to project the future trend, assuming nothing else happens to influence the measure. If you believe a program can improve on that future trend, implement the program and track the measure over a period of time to determine the extent to which actual performance exceeds the trend. At a predetermined point in time, compare where the improvement is versus where it would have been had nothing else occurred. The difference is the improvement due to the program.

For example, Figure 5-4 shows trend data for an organization’s reject rate. In January, the reject rate was 20 percent. From January to June, the actual reject rate decreased, resulting in a six-month average of 18.5 percent. The organization projected that if the trend continued for the next six months, the reject rate would fall to an average of 14.5 percent. The supervisor suggested the trend could be further improved through a process improvement initiative. The program was implemented in July, and the reject rate was tracked for the next six months. The actual average reject rate post-program was 7 percent.

FIGURE 5-4. TREND LINE ANALYSIS

By comparing the projected reject rate of 14.5 percent with the actual post-program reject rate of 7 percent, the change due to the program is 7.5 percentage points. You would not compare the post-program 7 percent with the pre-program average of 18.5 percent, because that comparison does not account for the trend already in the data. The trend line accounts for the other factors that might influence the reject rate in addition to your program. To use trend line analysis, the following conditions must exist:

• The data must exist.

• The data are stable.

• The trend is likely to continue.

• Nothing else major occurs during the evaluation period that could also influence improvement in the measure.

While this is a valuable and useful tool to isolate the effects of a program, it is not used as frequently as some other techniques. This is because the data often are not available or stable, and, inevitably, some additional investment is made to improve the measure of interest. But when it is appropriate, trend line analysis is a simple, cost-effective, and credible method for isolating the effects of your program.
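
For readers who want to see the mechanics, here is a minimal Python sketch of the reject-rate example. The six monthly values are illustrative numbers chosen so their average matches the 18.5 percent in the example; your own data would replace them.

import numpy as np

# Monthly reject rates, January through June (illustrative values
# consistent with the 18.5% six-month average in the example).
months_pre = np.arange(1, 7)
rejects_pre = np.array([20.2, 19.5, 18.8, 18.2, 17.5, 16.8])

# Fit a straight line to the pre-program data, then project Jul-Dec.
slope, intercept = np.polyfit(months_pre, rejects_pre, 1)
months_post = np.arange(7, 13)
projected = intercept + slope * months_post
print(f"Projected Jul-Dec average: {projected.mean():.1f}%")  # ~14.5%

# Compare the projection with the actual post-program average of 7%.
actual_post = 7.0
print(f"Due to the program: {projected.mean() - actual_post:.1f} points")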

Forecasting Techniques

A third technique used to isolate the effects of a program is based on regression analysis, which compares the movement in independent and dependent variables (measures). The correlation between the two measures indicates the direction and strength of the relationship between those two measures; as one moves, so moves the other at a certain level of statistical significance. Figure 5-5 depicts the relationship between an independent (x-axis) and dependent (y-axis) variable resulting from regression analysis.

FIGURE 5-5. SIMPLE REGRESSION

As many will argue, correlation does not imply causation. But if the correlation is meaningful and strong enough, you can reasonably infer that some cause-and-effect relationship exists. An example of the use of regression to isolate the effects of a program comes from retail sales.

For decades, organizations have tracked the relationship between advertising and sales. Most demonstrate this relationship through some form of regression. In one organization, advertising and sales were tracked over time, and a mathematical relationship became apparent: Y = 140 + 40X, where Y = weekly sales per person and X = advertising dollars ÷ 1,000. In other words, weekly sales per person equal $140 plus $40 for every $1,000 invested in advertising. If the organization decided not to invest in advertising at all, sales of $140 would still occur.
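
To illustrate how such an equation can be recovered from data, the sketch below fits a line to hypothetical historical observations constructed to scatter around the Y = 140 + 40X relationship:

import numpy as np

# Hypothetical history: advertising (in $1,000s) and weekly sales per
# person, constructed to sit around Y = 140 + 40X.
advertising = np.array([18, 20, 22, 24, 26, 28])
sales = np.array([865, 935, 1020, 1100, 1175, 1265])

slope, intercept = np.polyfit(advertising, sales, 1)
print(f"Y = {intercept:.0f} + {slope:.0f}X")  # prints: Y = 140 + 40X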

The organization decided to implement a program to help boost sales. Prior to the program, weekly sales per salesperson were $1,100 and advertising was $24,000. Three months after the program, they discover that sales increased to $1,500 per week per person and advertising increased to $30,000. They know that advertising has some influence on the sales increase, so they first use the mathematical relationship to determine how much of the weekly sales per person is due to advertising. Once they understand that output, they attribute what remains to the program. Table 5-2 shows the math the learning and development team used to determine how much in sales was due to the program. Figure 5-6 demonstrates the model depicting the influence of advertising and training on sales.

TABLE 5-2. TRAINING’S CONTRIBUTION TO SALES

Sales due to advertising     Sales due to advertising     Post-program
prior to the program         post-program                 sales

Y = 140 + 40X                Y = 140 + 40X
  = 140 + 40(24)               = 140 + 40(30)
  = $1,100                     = $1,340                   = $1,500

$1,340 – $1,100 = $240 in sales due to advertising.
$1,500 – $1,340 = $160 in sales due to the program.
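
The attribution logic of Table 5-2, expressed as a short Python sketch:

def predicted_sales(advertising_thousands):
    # Weekly sales per person explained by advertising alone,
    # using the relationship from the example: Y = 140 + 40X.
    return 140 + 40 * advertising_thousands

sales_post = 1500                    # actual post-program weekly sales
expected_pre = predicted_sales(24)   # $1,100, matches pre-program sales
expected_post = predicted_sales(30)  # $1,340, advertising's share

due_to_advertising = expected_post - expected_pre  # $240
due_to_program = sales_post - expected_post        # $160
print(f"Advertising: ${due_to_advertising}; Program: ${due_to_program}")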

These techniques provide credible, reliable data showing that your program is influencing business measures. This is a step beyond assuming that improvement in business outcomes is due to your program merely because there is a connection between the investment, the relevance of the program, the knowledge acquired, and the use of knowledge and skills. Isolating the effects of the program answers the question that the chain of impact does not: “How do you know it was your program that caused the results?”

FIGURE 5-6. ISOLATING THE EFFECTS OF TRAINING

Cause and Effect Using Observational Data

Researchers have long argued that the only way to determine cause and effect is through controlled experimental trials. “Correlation is not causation” is a commonly repeated maxim. But Joris Mooij and his team (2014) at the University of Amsterdam in the Netherlands have begun to explore ways to determine cause and effect using observational data. Their approach assumes that the relationship between X and Y is not symmetrical. In any set of measurements there will always be noise from various causes, and the assumption is that the pattern of the noise in the cause will differ from the pattern of the noise in the effect. Using their additive noise model, they work out which of the variables is the cause and which is the effect. They report that the additive noise model is up to 80 percent accurate in correctly determining cause and effect. Mooij and his team offer statisticians a powerful new tool to help challenge the preconceived notion that it is impossible to determine cause and effect from observational data alone.
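
The toy sketch below conveys only the intuition of the additive noise model; it is not Mooij’s implementation, which relies on kernel-based independence tests rather than the crude correlation proxy used here. The synthetic data are constructed so that X causes Y.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a known direction: X causes Y through a
# nonlinear function plus additive noise independent of X.
x = rng.uniform(-2, 2, 500)
y = x**3 + rng.normal(0, 1, 500)

def dependence_score(cause, effect, degree=5):
    # Fit effect = f(cause) + residual, then measure how strongly the
    # residual's magnitude still tracks the input. A true causal
    # direction should leave residuals independent of the input.
    fit = np.polyfit(cause, effect, degree)
    residuals = effect - np.polyval(fit, cause)
    return abs(np.corrcoef(cause**2, residuals**2)[0, 1])

# The direction whose residuals look more independent wins.
if dependence_score(x, y) < dependence_score(y, x):
    print("Inferred direction: X -> Y")
else:
    print("Inferred direction: Y -> X")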

Detour: When All Else Fails, Estimate

While the previous techniques are the ideal route to take when isolating the effects of a program, alternative routes can still get you where you want to go. Estimation is one of those alternative routes. The key to successful estimation is to ensure you look to the most credible sources of information for input and follow a path that you can explain. If you can explain what you did, how you did it, and why you did it that way, your stakeholders will feel much more comfortable about the estimates you report. Here is an example.

A large financial institution was employing a variety of new initiatives to increase the sales of various products. One of the initiatives was a sales training program, which was intended to give employees in the bank branches the skills they needed to sell new and existing products, including credit card accounts. The sales training team implemented the program. Six months after the program they sent a questionnaire to the bank branch managers, asking them to provide information on what caused the improvement in the credit card accounts. The bank branch managers called on the most credible sources of data, those people working directly with customers, to help them with this estimate. The increase in credit card accounts at one bank branch was an average of 175 per month. In a focus group setting, the team answered three simple questions to identify how many of the new credit card accounts were due to the program:

1.   Given the increase in credit card accounts, what factors caused the improvement?

2.   As a percentage, how many of the new credit card accounts were due to each factor?

3.   As a percentage, how confident are you in your estimates?

Table 5-3 shows the results of the input of one branch’s focus group participants.

TABLE 5-3. ESTIMATION PROCESS

Monthly Increase in Credit Card Accounts: 175

Contributing Factors                    Average Impact    Average
                                        on Results        Confidence Level
Sales Training Program                  32%               83%
Incentive Systems                       41%               87%
Goal Setting and Management Emphasis    14%               62%
Marketing                               11%               75%
Other ________________                   2%               91%
Total                                  100%

Focus group participants identified five factors that influenced the improvement in the measure, as shown in the first column. The second column shows their estimates of the contribution each factor made to the increase in credit card accounts; this column should always total 100 percent to ensure all factors are identified. The third column indicates their confidence in each estimate. This error adjustment addresses some of the subjectivity inherent in estimates.

How many new credit card accounts were due to the sales training program? Given the average of 175 new credit card accounts per month, and the focus group participants’ estimate that 32 percent of those accounts were due to sales training, the number of new accounts attributable to the program was 56. Because this is an estimate, they adjusted it for confidence, in this case 83 percent. The adjusted estimate was 46.48, or 46 new credit card accounts due to the program.

The underlying basis for this process is shown in Table 5-4. There had been an increase in credit card accounts of 175 per month on average, and the most credible sources of data estimated that 32 percent of the increase was due to the program. They are saying they think 56 new credit cards are due to the program, but are not sure; they are only 83 percent confident. This leaves 46 credit card accounts attributable to the program.

By adjusting for the 83 percent confidence, the sources of data are essentially reporting that they are 17 percent uncertain about their estimate. Multiplying the estimated contribution of 56 by 17 percent gives a margin of error of 9.52. Thus, they think 56 new credit card accounts are due to the program, give or take 9.52. Adding 9.52 to 56 gives 65.52, or 66; subtracting 9.52 from 56 gives 46.48, or 46. The contribution of the program can therefore be anywhere from 46 to 66 new credit card accounts. By following standards that require adjusting estimates for error and choosing the most conservative alternative given a range of choices, in this case 46, you can answer the question, “How much improvement is due to the program?” with some level of reliability.
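
A short Python sketch of the adjustment, using the numbers from the example:

# Estimation adjustment from the credit card example: adjust the raw
# estimate for confidence and report the conservative value.
monthly_increase = 175  # new credit card accounts per month
program_share = 0.32    # focus group's attribution to sales training
confidence = 0.83       # focus group's confidence in that estimate

raw_estimate = monthly_increase * program_share    # 56 accounts
margin_of_error = raw_estimate * (1 - confidence)  # 9.52 accounts
low = raw_estimate - margin_of_error               # 46.48
high = raw_estimate + margin_of_error              # 65.52

print(f"Estimate: {raw_estimate:.0f}, range {low:.0f} to {high:.0f}; "
      f"report the conservative {int(low)}")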

Credibility Is in the Eyes of the Beholder

When it comes to getting buy-in into your evaluation results, addressing the following factors that influence credibility is a must:

• reputation of the source of the data

• reputation of the source of the study

• motives of the researchers

• personal bias of the audience

• methodology of the study

• assumptions made in the analysis

• realism of the outcome data

• type of data

• scope of the analysis.

TABLE 5-4. OUTPUT OF ESTIMATIONS

Average monthly increase in credit card accounts          175
Estimated contribution of the sales training program    × 32%
New accounts attributed to the program                     56
Confidence adjustment                                   × 83%
Conservative estimate due to the program             46.48, or 46

While the use of estimates is not ideal, using a conservative approach you can capture reliable data that provide stronger evidence of your program’s contribution than if you had relied on the chain of impact alone. Executives, managers, and professionals of all types routinely rely on estimates. The key is calculating the most conservative estimate using a set of standards to lead the way. In this case, the standards include going to the most credible sources of data, using the most conservative alternative, and adjusting for the error in estimates by asking participants for their confidence in their estimates.

Guideposts

When isolating the effects of your program, follow the guidelines below.

Always isolate the effects of your programs. Following a standard to always isolate the effects of the program is critical if you want to report the results in a credible way. This step answers the question, “How do you know it was your program that caused the results?” While the chain of impact provides evidence of a connection between the investment in a program and the outcomes, this step cinches that connection and informs stakeholders that you took the necessary approach to confirm alignment between business results and the program.

Consider research-based techniques first. While there are a variety of ways to isolate the effects of the program, it is important that you consider the research-based techniques rather than defaulting to estimates. While estimates can provide a conservative result, pursuing research-based techniques, such as control group, trend line analysis, and various regression models, demonstrates your ability to select the most appropriate techniques given the context under which you are working and the resources and time you have available to implement that approach.

Use estimates as a last resort. While there are organizations that use the estimation process to isolate the effects of a program every time they evaluate one, this is not necessarily the best process to follow. Estimates are a useful tool and can provide good information; but if estimates are all you use, stakeholders may wonder whether other techniques would provide more credible data and question your reasoning for relying on that one approach.

Go to the most credible sources of data. The importance of going to the most credible sources of data cannot be reinforced enough. To get to the most credible sources of data, determine who knows best about the measures you are taking and what influences affect those measures. Rank and title in the organization are not synonymous with credibility. Think through who has the best vantage point in terms of influencing factors on improvement in measures and use them as your source.

Adjust for error with estimates. While estimates are an inherent part of measurement and evaluation, when isolating the effects of the program and using the estimation process it is important to adjust estimates for error. Just like any statistical analysis, some level of error exists. By adjusting for that error, you report the worst-case scenario in terms of outcomes. This step in the estimation process improves the credibility and reliability of results.

Gain consensus of project stakeholders. As mentioned in chapter 3, planning your evaluation up front is critical. This is when you gain consensus with key stakeholders on your data collection methods as well as your data analysis methods. This includes the techniques you plan to use to isolate the effects of the program. By gaining consensus at the beginning, you eliminate or avoid questions about your approach and keep the focus on the results.

Know what you have done, how you did it, and why you did it that way. You have heard it before, but it is worth repeating: If you cannot explain what you are doing as a process, you do not know what you are doing. Know what you did, how you did it, and why you did it that way. Be able to explain your approach clearly to stakeholders and why that approach is the best approach.

Point of Interest: Sometimes the Crowd Knows Best

In today’s analytics environment, quantitative analysis and experimental designs are all the rage. Results must come down to a discrete number that can be verified through robust statistical analysis. Thus the thought of asking participants to report on their own behavior or offer input into the factors that influence results is unheard of to some learning and development professionals. Yet, some of the most influential data in an organization are data that come from people—the data based on estimates, perceptions, and subjective measures.

The British scientist, explorer, and anthropologist Francis Galton has many claims to fame, but many remember his research on human intelligence, including his exploration of the implications of his cousin Charles Darwin’s theory of evolution. Galton was one of those who thought that accuracy lived in the input of the elite few rather than the many commoners. The story goes that one day in the fall of 1906, Galton left his home in the town of Plymouth for a country fair. He was 85 years old at the time, but as curious as ever.

The Competition

Galton’s destination was the annual West of England Fat Stock and Poultry Exhibition, where local farmers and townspeople would gather to appraise the quality of one another’s cattle, sheep, chickens, horses, and pigs. Galton had great interest in measures of physical and mental qualities and breeding, so the fair was the optimal environment for him to study the effects of good and bad breeding.

Galton believed that only a few people had the characteristics necessary to keep societies healthy, and much of his career was devoted to measuring them in order to prove that the vast majority of people did not have them. His research left him with little faith in the intelligence of the average person. He believed “Only if power and control stayed in the hands of the select, well-bred few, could a society remain healthy and strong.”

As he walked through the exhibition, Galton came across a weight-judging competition. An ox had been selected and placed on display. A crowd of 800 people was lining up to place wagers on what the weight of the ox would be after it had been slaughtered and dressed. A sixpence bought a stamped and numbered ticket on which each person would fill in name, occupation, address, and estimate. The best guesses would receive prizes.

Many of the bettors were butchers and farmers, who were presumably experts at judging the weight of livestock, but there were also quite a few people with no knowledge of cattle. Galton wrote later in the scientific journal Nature, “the judgments were unbiased by passion and uninfluenced by oratory and the like. The sixpenny fee deterred practical joking, and the hope of a prize and the joy of competition prompted each competitor to do his best.

“The average competitor was probably as well fitted for making a just estimate of the dressed weight of the ox, as an average voter is of judging the merits of most political issues on which he votes,” Galton continued. Wanting to prove that the average voter was capable of very little, Galton turned the competition into an experiment (Galton 1907).

The Analysis

After the prizes were awarded, Galton borrowed the tickets from the organizers and ran a series of statistical tests on them. After discarding 13 tickets for being defective or illegible, he arranged the remaining 787 tickets from highest to lowest, and graphed them to see if they would form a bell curve. Then he calculated the crowd’s collective wisdom by adding all the contestants’ estimates and determining the mean. Galton undoubtedly thought that the group’s average would be way off the mark.

The Results

The crowd had guessed that the ox would weigh 1,197 pounds on average (Surowiecki 2004). The median weight, as shown in Galton’s notes, was 1,207 pounds (Figure 5-7). The actual weight was 1,198 pounds. The crowd’s judgment was essentially perfect. Galton wrote, “This result is, I think, more creditable to the trust-worthiness of a democratic judgment than might have been expected” (Galton 1907).

FIGURE 5-7. CONTESTANTS’ ESTIMATE COMPILATION

That day Francis Galton stumbled on a simple but powerful truth: Under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them. Groups do not need to be dominated by exceptionally intelligent people in order to be smart. Even if most of the people within a group are not especially well informed or rational, they can still reach a collectively wise decision.

Conclusion

Experiments, analytics, and quantitative approaches make us feel comfortable with our analysis and, therefore, our results. But sometimes estimates are just as powerful. For a more recent, real-world application of the wisdom of crowds, watch The Code: The Wisdom of the Crowd, a YouTube video from BBC’s The Code, presented by professor Marcus du Sautoy.

Refuel and Recharge

This chapter focused on how to isolate the effects of your programs on improvement in key business measures, answering the question, “How do you know it was your program that caused the improvement in the measure?” Three research-based techniques were described along with the estimation process. Consider how you can use these techniques to isolate the effects of your programs on improvements in key measures.

Travel Guides

du Sautoy, M. 2011. BBC - The Code - The Wisdom of the Crowd. August 15, www.youtube.com/watch?v=iOucwX7Z1HU&feature=youtu.be&list=FLV1WJ8RpyaJVhJJqgwAH1Tg.

Galton, F. 1907. “Vox Populi (The Wisdom of Crowds).” Nature 75(1949): 450-451. www.all-about-psychology.com/support-files/the-wisdom-of-crowds.pdf.

Mooij, J.M., J. Peters, D. Janzing, J. Zscheischler, and B. Schölkopf. 2014. “Distinguishing Cause From Effect Using Observational Data: Methods and Benchmarks.” Journal of Machine Learning Research, December 11.

Phillips, J.J., and B.C. Aaron. 2008. Isolation of Results: Defining the Impact of the Program. San Francisco: Pfeiffer.

Phillips, J.J., and P.P. Phillips. 2007. Show Me the Money: How to Determine ROI in People, Projects, and Programs. San Francisco: Berrett-Koehler.

Shadish, W.R., T.D. Cook, and D.T. Campbell. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.

Surowiecki, J. 2004. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. New York: Doubleday.
