7

Getting the Most from Measurement Results

We have to admit it: we are fans of 360-degree feedback, especially when it is used to support performance development. Done well, it can provide access to reinforcing praise, constructive criticism, and developmental suggestions that people otherwise may not receive. It can be a low-cost, high-impact process that adds significant value to both individuals and organizations. And yet few measurement tools are quite as likely to evoke a weary sigh in us as 360s. This is because probably no other measurement results are so open to going utterly unused as 360-degree feedback. We have lost count of the number of times that we have seen people receive a 360 report, read it once, and then, well … nothing. Some may put it in a drawer and file it away for another day. Others may put it in their bag along with good intentions to read it at home or on a train or plane, but somehow they never quite get around to doing so.

Some people, of course, do read, reread, and then act on their 360 report. But without the support of a formal development process, the reality is that most people will not improve their performance as a result of their feedback. It is not that they are not motivated to improve. It is just that the pace and demands of everyday business life get in the way.

Because 360-degree feedback is useful only insofar as businesses and individuals put it to work, it is a perfect example of talent measurement's Achilles' heel. So in this chapter we are going to look at this reality of all talent measurement processes and explore what your business can do to address it. We will see that there are two challenges here. The first is how to ensure that everyone applies the outputs of measurement—the ratings, results, and reports—in an effective way, and not just sometimes but consistently. And the second is how to use the talent intelligence we produce for something more than just guiding individual selection decisions or personal development.

To enable measurement to have its full impact, organizations need to stop overlooking and start acting on both of these challenges. Even among companies that manage to use results consistently well, it is rare to find one that uses the data for something more than just hiring, promoting, or selecting people for high-potential programs. Yet getting these things right represents a huge and important opportunity for businesses to ensure that measurement has the business impact it can and should have.

As an example of what we mean, let us stay with 360s for a moment. In terms of ensuring consistent good use in aiding personal development, some simple things can be done. To begin with, the feedback needs to be useful, by which we mean it should contain specific suggestions on how to develop. So the first step is educating feedback givers on the most helpful type of feedback. This need not involve a long training course but simple, clear, and repeated instructions. In addition, it is essential that there be a formal follow-up process around the 360. For instance, individuals may be required to discuss their report with their manager, agree on key development actions, and then review progress at a later date.

As for using the data for something more, the first step is to collate the data (see chapter 6). Once that is done, firms can start doing things like looking at rating tendencies for each question or competency to check how effectively the 360 is working. If they find that people are always rated very high or low on a particular item, then something may need adjusting. Norm groups or benchmarks can be created, too, which show the average strengths and weaknesses of the company overall or particular business units or teams. And the feedback ratings of high and low performers can be compared to identify which behaviors predict performance and which are most valued by the business. These may not be perfect processes, but they are easy wins that can make all the difference to the value that 360-degree feedback offers.
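As a minimal sketch of what this collation can look like, assume the 360 ratings have been exported to a flat file with one row per rater, ratee, and item. The file name and column names here are illustrative assumptions, not part of any particular system:

  # A minimal sketch of the 360 collation ideas above.
  import pandas as pd

  ratings = pd.read_csv("feedback_360.csv")  # hypothetical export

  # 1. Rating tendencies: mean and spread per item, to spot items that are
  #    always rated very high or very low and may need adjusting.
  tendencies = ratings.groupby("item")["rating"].agg(["mean", "std", "count"])
  print(tendencies.sort_values("mean"))

  # 2. Simple norms: average rating per item for each business unit,
  #    usable as benchmarks in future reports.
  norms = ratings.groupby(["business_unit", "item"])["rating"].mean().unstack("item")

  # 3. Compare high and low performers: which behaviors separate the two groups?
  by_perf = ratings.groupby(["ratee_performance", "item"])["rating"].mean().unstack("item")
  gap = (by_perf.loc["high"] - by_perf.loc["low"]).sort_values(ascending=False)
  print(gap.head(10))  # items with the biggest high-versus-low gap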

This, though, is only one tool. How can businesses go about addressing these issues more broadly? In answering this, we will look first at how to use talent measurement results consistently well, before then exploring how to use the data in wider ways and how to build a culture of useful talent measurement within your company.

Using Results Consistently Well

For measurement results to be useful and have an impact, the people applying these results need to do two things with them: understand them and act on them.

Understanding Measurement Results

In some countries, including the United Kingdom, regulations require vendors to sell psychometric tests only to people trained in their use. In South Africa, there is even a law to back this up. Only suitably qualified psychologists are allowed to use these tests. The idea is that psychometrics can be complex and that the people who are using and interpreting the results ought to understand the complexities. It is a laudable idea, yet the reality is that in most countries, measurement results end up in the hands of people who are not trained in the technical complexities of the trade. This is true not only for psychometrics but for all measurement methods and tools. Vendors are aware of this, of course, which is why they produce “manager versions” of test reports to explain and simplify the results for lay users.

The issue remains, though, that the end users of results—business managers, HR managers, and recruiters—often lack a deep understanding of measurement, and this can create problems. Ratings and results may be misinterpreted, and decisions may thereby be misinformed. Many businesses are aware of these risks but are unsure what to do about them. After all, with training programs lasting three to five days and costing thousands of dollars per person, it is not feasible to train everyone who uses results.

Moreover, the widespread availability of preinterpreted results such as manager versions of reports has taken the heat out of the issue. The reports appear to provide the support that end users require, especially when combined with access to a trained person should any questions arise. So from most organizations' perspective, no further action seems necessary.

Yet further action is needed, because subtle yet critical misunderstandings about the nature of results persist. Preinterpreted results have helped to a certain extent in terms of assisting end users to understand what specific scores and ratings mean. Yet they are also prone to oversimplifying results and do not help managers understand broader issues about what results are and what they can and cannot tell us. This may sound harmless, but time and again we have seen these issues undermine the way businesses use the results. So doing nothing and relying on preinterpreted reports is not enough. To ensure proper use, businesses need to improve people's understanding of the tools they are using.

The good news is that acting on these issues need not be difficult or time-consuming. We are not talking about large-scale or expensive training initiatives. In fact, many of the associated problems can be avoided by communicating three things to end users: higher scores are not always better; results are estimates, not facts; and beware of oversimplifying results. These could be part of a training program, but they could just as easily be golden rules that are stated every time results are given to end users.

Higher Scores Are Not Always Better.

One common misunderstanding is that higher scores are always better. It is, of course, correct that people who score higher on certain measures generally go on to perform better. But the key word here is generally. These are general rules. Consider intelligence. We know that it is the single best predictor of success. But a genius may well grow bored in some jobs, and exceptionally high intelligence scores are sometimes accompanied by less desirable qualities—for example, the inability to communicate ideas or think more pragmatically. Similarly, people who score very high on measures of conscientiousness can sometimes come across as inflexible or bureaucratic. And being very high in agreeableness is not always a good thing either, especially in roles that require tough-mindedness. The problem with assuming that higher is always better, then, is that results can be misinterpreted, leaving decisions ill informed.

A related issue here is the risk of homogeneity. One of the problems with companies having a clear view about the type of person they want is that they can end up getting only that type of person. We have seen this in particular when organizations use one of the simpler personality tools that measure only three or four dimensions. Assuming that more is better, these organizations focus on finding people who have these few dimensions in abundance. As a result, they can end up with a bunch of people who are all very similar.

Addressing these issues need not be difficult, but it is important:

  • Communicate the idea. The rule is simple and easily remembered: more is not always better.
  • Measure fit. Checking the level of fit between results and role requirements will help shift the focus away from who has the highest scores to who is most likely to meet the demands of particular roles. The check should include the teams individuals will be part of, the managers and stakeholders they will work with, and the wider organizational culture. To ensure it happens, ask HR and line managers to rate the level of fit between the results and each of these elements. It may not be particularly scientific, but it is quick, easy, and a lot better than doing nothing.

Results Are Estimates, Not Facts.

A less simple but equally common misunderstanding is that users will view the results of measures as facts or truths. For example, a job candidate may obtain a low agreeableness score in a personality test, from which a recruiter may conclude that the individual is not agreeable. This sounds reasonable, but it is not, because ratings and results are not absolute facts or truths: they are more like estimates.

This is partly because every measure is open to inaccuracies. Assessors may make a wrong judgment, or job applicants may pretend to be something they are not. Part of it, though, is also the fact that how people perform varies from day to day, and when you are assessing them, you do not know if you are catching them on a good day or a bad day. The low agreeableness score, for example, does not mean that someone is not agreeable; rather, it suggests that he or she may tend not to behave this way.

Peter Saville, one of the founding figures of modern measurement, uses the analogy of golf to explain this.1 As he notes, golfers have a handicap score—a kind of average score that shows how good they are. But on any one day, the score they achieve may not be in line with this handicap. They may do far better than their handicap would suggest one day and far worse the next. Measurement results are pretty much the same. The ratings and results people achieve are determined not only by how good they are, but also by circumstances. As a result, the ideal, most accurate way to show ability is not with a specific score, but with a range of scores. For example, rather than saying that someone has a score of, say, 21 out of 30, it would be better to say that her or his ability can range from 17 to 22.

Of course, without testing someone many times, there is no way for measures to show this range. So it is therefore important to understand that measurement scores are not perfect indicators; they are just in the ballpark. And it is down to the people who are reading and interpreting the results to work out where exactly in the ballpark they are—whether a score is at the top end of someone's range or the bottom end.
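One way psychometricians express this idea is the standard error of measurement, which converts a test's reliability into a band around any observed score. The reliability and spread figures in this small sketch are illustrative assumptions, not values from any particular test:

  # A small worked sketch of the "range, not score" idea using the standard
  # error of measurement (SEM). All figures are illustrative assumptions.
  import math

  observed_score = 21      # e.g., 21 out of 30
  test_sd = 4.0            # assumed standard deviation of scores
  reliability = 0.85       # assumed test reliability

  sem = test_sd * math.sqrt(1 - reliability)   # about 1.5 points here
  lower = observed_score - sem                 # roughly a 68% band
  upper = observed_score + sem
  print(f"Score {observed_score} is best read as roughly {lower:.0f}-{upper:.0f}")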

As an example of why this is so important, consider an HR leader in a large multinational we observed. In addition to the standard recruitment interviews, the company liked to assess candidates for senior roles using an individual psychological assessment process. Yet it had a rule that applicants who achieved too low a score on an intelligence test were rejected. Never mind if they had a first-class degree, a track record of success, or specialist technical skills that the business needed. If the person scored too low, the company ignored all other information and rejected the person. It was not that this HR leader had researched what level of intelligence was predictive of high performance. It was an arbitrary cut-off point. Because the company did not understand that scores are estimates, it wasted over six thousand dollars per individual psychological assessment on these candidates by dismissing their application on the basis of just this one piece of evidence and no doubt missed out on some potentially good people.

So what should businesses do?

  • Communicate the idea. Results are estimates, and people have ranges, not scores. In our experience, managers know what we mean when we use golfing or ballpark analogies to explain the issue. Some may not like it, because they would prefer to have an easy, quick answer. But they understand it, and most prefer to get it right and make good decisions.
  • Cross-reference results. If someone completes a personality test and you have a chance to talk about the result with her, do so. Ask if she feels that the results are a true reflection and what impact her personality has on her work. If someone has an intelligence test, cross-reference his score with his academic achievement. And if someone attends an individual psychological assessment or assessment center, cross-reference the results with whatever other information you have, such as other interviews.

Beware of Oversimplifying Results.

At present, the market tends to present measurement results—and businesses consequently tend to view them—too simplistically. For example, a typical personality test report lists the various dimensions measured and shows an individual's scores. It then briefly explains what these scores mean. A particular sales manager might be high in conscientiousness, low in agreeableness, and about average in everything else. A really good report might point out that conscientiousness is a reasonable predictor of success in sales staff, though slightly less effective for managers.2 And it may add that having below-average agreeableness is not generally a problem for managers.3

What such a report will usually not mention is that the combination of high conscientiousness and low agreeableness is not a good sign. There is some evidence that others may see people with these traits as micromanaging and inflexible.4 So what is frequently not shown is the interaction among the various factors being measured. Even when interactions are shown for the factors assessed by a single test, they are rarely shown for the factors measured by different tests. We struggle, for instance, to think of an intelligence test report that provides advice on the impact of personality profiles on how people use their intellect.

Similarly, reports from individual psychological assessments, written by consultants for managers, tend not to address interactions either. They typically have one section describing an individual's overall strengths and another describing weaknesses. But they usually do not describe the interplay between these aspects—how one strength affects another, how one weakness exacerbates another, or how a particular strength mitigates a certain weakness.

Why the simplistic view? Well, the ability of experts to interpret the interactions of factors is limited by the fact that researchers have not studied these interplays in great depth. Yet the bigger factor here is that consultants tend to present talent data in the simplest format because that is what the businesses seem to want: as clear and unambiguous a message as possible about the type and level of talent individuals have.

People generally believe that they themselves are complex combinations of qualities and characteristics, yet when it comes to others, they often want a very simple box to put them in, and understandably: managers, who are often the ultimate users of measurement results, are busy enough without having to decipher complex reports. Yet the counterpoint here is that the oversimplification of talent measurement risks ruining it and undermining its potential value to businesses. At a fundamental level, people are complex, and if we ignore this reality, we will inevitably make poor decisions about them.

Yet complexity and clarity are not incompatible. It is perfectly possible to provide more detail and more complex interpretations while also giving a clear message. Managers do not need to become psychologists or test experts; they just need to be aware of the complexities and become educated consumers. Otherwise they risk spending their hard dollars on oversimplified tests that are limited in value and inadvertently misleading.

Importantly, businesses have the power to change this situation:

  • Make sure that measurement results and reports show some of the complexities. Ask vendors to ensure that preinterpreted results and reports show the interactions between the different qualities and aspects of people. Vendors may not be able to provide all the answers, because the research may not yet have been done. But by asking about these complexities, you will be focusing both you and them on these issues. And in doing so, you will help yourself to make better decisions and create market pressure for vendors to develop a better understanding of such interactions.
  • Use a simple model to help managers understand the complexities. To help end users think about some of the complexities of measurement results, provide a simple model—something catchy and quick to prompt their thinking. The one we frequently use with firms is the Three Cs, which we describe next.

The Three Cs
The Three Cs is a simple model designed to help people think about and interpret measurement results. As its name suggests, it focuses on the three Cs of all measurement results: contexts, consequences, and caveats. So whenever we are told that someone has a particular quality or ability, such as being strongly driven to achieve results, we do not automatically accept this as a good thing. Instead, we ask what the contexts, consequences, and caveats of this quality are.
So, yes, an individual may be highly driven to achieve tangible results, but in what contexts would this be more or less relevant for performance? For example, it may be essential for a setup role, in which he is establishing a new business stream. But it may be less essential for a maintenance role, which is more about keeping a process running smoothly than achieving targets.
Next come consequences: How is the characteristic relevant for performance? In other words, why should we care? How does the characteristic help him perform better? Is it directly, by enabling him to do something better? Or is it indirectly, by affecting other characteristics? For example, the ability to consider and use others' opinions can help people to use their intellect to make decisions.
Finally, there are the caveats. What cautions or advisory notices should be given about the person's drive? Under what circumstances would this drive not translate into better performance? What if he is very low on agreeableness? Would he come across as overly pushy and demanding rather than driven? And what if his confidence drops? Will he still be as driven then?
Whenever we are presented with measurement results, then, we look for the Three Cs. It is a simple model that can make a big difference in how we interpret and understand people's talents.

Acting on Measurement Results

Helping end users understand results is merely the first step in ensuring that measures are consistently used well. Users also need to act on the results effectively. As we mentioned earlier with regard to 360s, it is essential to ensure for developmental assessments that there is some kind of formal follow-up process. Individuals who are not motivated to develop themselves will get only so much out of this, but for those who are motivated to some degree, this kind of support can be essential.

When measurement is used in selection processes, however, ensuring effective action can be trickier. The heart of the matter is the degree to which measurement results influence selection decisions. For example, when measures are used as part of sifting processes, the results usually determine or have a direct impact on the decision about whether to proceed with candidates. In other selection processes, however, the impact of the results is less clear-cut.

We tend to encounter two situations: either businesses and individual leaders overrely on results, always following them no matter what, or they are skeptical about measurement and ignore results if they do not match or confirm how they perceive candidates. This can be about individuals' overconfidence in their own judgment. Often, though, it is about politics and power, about managers wanting to feel they are in control and that it is up to them whom they employ. This is especially so at more senior levels where, ironically, the cost of making a poor decision is higher.

In practical terms, the key issue here is what businesses do when a hiring manager disagrees with measurement results. What happens when a manager thinks yes but the results say no? On the one hand, letting managers ignore the results of measures that the business has invested in seems odd. This is why some businesses have a set policy that these recommendations must be followed. A more covert version of this situation occurs when an influential leader believes in a certain measurement process, and managers feel that they have little option but to follow the results.

On the other hand, there is little point trying to force managers to hire someone they do not want or do not believe in. That rarely ends in success. Moreover, as a matter of principle, we believe that people who are accountable for decisions should be the ones making the decisions. No matter how predictive a measure, the decision maker must always be the one who makes the final decision. A balance is required.

The solution here is to create a sense of accountability for both hiring decisions and how measurement results are used:

  • Communicate clear expectations. The business should have a clear and simple policy about how measurement results are used. It need not be detailed—for example: “Measurement results provide important information and should be a key part of all recruitment decisions. However, hiring managers are not expected to always follow the results and recommendations of these measures, and are ultimately responsible for the hiring decision.”
  • Review hiring decisions. Whenever a new employee does not work out, both HR and the hiring manager should review the information available at the time of hiring. The goal is not to find fault but to learn lessons that can help prevent the same situation from arising in the future.
  • Track what happens when results and recommendations are both followed and ignored. When measurement methods deliver a clear hire or no-hire recommendation, both this advice and managers' ultimate hiring decisions should be recorded. These data can be tracked to see what the impact of ignoring measurement results actually is, which can then be relayed to the business.
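As a minimal sketch of what such a tracking log might look like, assume each hiring decision is recorded in a simple table; the column names and category labels here are illustrative:

  # A minimal sketch of the tracking idea: log each candidate alongside the
  # measurement recommendation and, later, the outcome, then cross-tabulate.
  import pandas as pd

  log = pd.DataFrame([
      {"candidate": "A", "recommendation": "hire",    "decision": "hired", "outcome": "good"},
      {"candidate": "B", "recommendation": "no-hire", "decision": "hired", "outcome": "left"},
      {"candidate": "C", "recommendation": "hire",    "decision": "hired", "outcome": "good"},
  ])

  # Was the recommendation followed?
  log["followed"] = (log["recommendation"] == "hire") == (log["decision"] == "hired")

  # Outcomes when advice was followed versus ignored.
  print(pd.crosstab(log["followed"], log["outcome"]))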

Man Versus Machine
In 1954 the American psychologist Paul Everett Meehl breathed new life into an old debate when he published a study looking at how medical diagnoses were made.5 He found that when the results of tests were combined into a diagnosis using statistical methods, the correct diagnosis was achieved more often than when doctors relied only on their clinical judgment. His one caveat was that humans seemed better at identifying unusual bits of information—things that were not part of standard diagnostic tests. But in general, mechanical judgment trumped human judgment.
Although the issue can evoke some passion, the cold, hard facts are that in the half-century since Meehl's work, most of the research has supported his findings. It is not that human judgment cannot be good; in fact, at times it can exceed mechanical judgment. It is simply that human judgment is too unreliable. Sometimes it is great; other times it is not. Mechanical algorithms, though, are always reliable. They are never distracted, never influenced by mood, and never rushed to a premature decision.
One conclusion from this could be that we might be better off doing a series of tests and then plugging the results into an algorithm. Yet it is not quite so simple. In selection decisions, the hiring managers and how they feel about a candidate are a critical part of the equation. And as we have noted, if decision makers are accountable for decisions, they must have the final say.

For measurement to have the impact it should, the outputs and results need to be consistently used effectively. And for this to happen, businesses need to ensure that end users both understand the results and act on them. Organizations can drive this by doing three things. First, they need to communicate a few clear principles about the nature of results and how they should be used. Mass training can be great, but it is not necessary. Good communication and a simple model to help people think about results may be enough. Next, they need to track and review what happens—who makes what decisions and what the outcome is. And, finally, they need to create a sense of accountability for using measurement results.

How measures are applied can be a complex issue, and change certainly cannot be achieved overnight. This is particularly so when organizations have a long tradition of using measures in a certain way. Yet change can come, and the solutions can be quite straightforward. Given this and the cost of not doing anything, we are continually surprised by the number of businesses that seem to have a blind spot here. But businesses must act because investing in measurement solutions without ensuring that they are used properly risks fundamentally undermining their utility and value.

Using Measurement Data to Do More

The vast majority of companies tend to stop here: they focus on ensuring that measurement methods are used effectively to inform and support individual people decisions and development. They tend not to go one step further and use the data for something more. Since much of the potential value of measurement comes from this something extra, that is a massive missed opportunity.

By “something more,” we mean using measurement data to inform and support processes such as onboarding, talent management, and organizational learning. A recent study found that fewer than 20 percent of companies do this.6 Even fewer do it effectively or as much as they could.

Although doing this may sound complicated, it tends to be easier than ensuring that managers use measurement results effectively. You do not necessarily need specialist expertise to do it—just a basic comfort with numbers and a good spreadsheet. That, and the will to do it.

Linking Measurement to Onboarding

Probably the easiest win here is for businesses to link the outputs of recruitment assessments to onboarding. It may sound obvious, but it is done surprisingly infrequently. Indeed, one of the most common mistakes that companies make with new employees is to assume that they will need little support to integrate well and get up to speed.7 To us, it seems odd to go to great lengths to identify candidates' strengths and weaknesses and then not to use this information to ensure that they are successful. Yet in many businesses, the idea that a candidate may require support tends to be viewed as a weakness and raises questions about suitability. So the challenge here is partly about readjusting expectations so that providing support is a common part of onboarding.

Even when hiring managers want to provide support, creating a development plan for new employees can be difficult, not least because managers lack information about them. This is where measurement can come in. Even if the only method businesses use is interviews, measurement processes can provide key bits of information to help direct onboarding support. This can be simple, consisting of nothing more than giving interview notes to managers (especially when interviewers are asked to suggest what onboarding support candidates may require). Or it can be more sophisticated, with the outputs of interviews and other measurement methods being fed to both the hiring manager and an onboarding team. Either way, it is a big and easy win.

  • Collect the information. Make sure the selection process captures candidates' strengths and weaknesses so that you can use this information to support new people. It should not be onerous. As interviewers almost always form ideas about what level of support candidates might require, it is merely a matter of recording them.
  • Have an onboarding process. This can be as uncomplicated as a meeting between the new employee and the manager to agree on an onboarding plan.
  • Review progress. Hold a follow-up session to review the new employee's integration after sixty or ninety days.

Talent Analytics

Probably the biggest win to be achieved from applying measurement data lies in talent analytics. Again, this may sound complicated, but it is just about using the data to inform other people processes, such as talent management or learning and leadership development.

A global business asked us to help it establish measurement processes to support three key people decisions: the recruitment of new employees, identification of people with high potential, and selection for promotion. The processes created were not complex. They mostly involved interviews, supported at more junior levels by sifting methods and at senior levels by individual psychological assessment. But they were implemented with all data centrally collected and just one competency framework used across the business. As a result, we were able to use these simple data to do far more than merely support people decisions:

  • We looked at the competency ratings of new employees in each business division. This enabled us to ask two questions: Were some divisions attracting stronger candidates than others? And were the qualities of new employees aligned with each unit's business objectives? Sure enough, two divisions appeared to be attracting lower-quality candidates. Another unit, whose strategy involved fast organic growth, was hiring relatively risk-averse people. As a result of these findings, all three divisions were able to change their attraction and hiring activities.
  • We compared the average competency ratings of new employees with those of the people nominated as those with high potential. We found that the new people had an uncannily similar pattern of strengths and weaknesses to the current employees. This led to a debate in the business about whether it was “just employing clones,” which eventually led to changes in the hiring process.
  • Allied to this, we compared the average ratings of new employees with those of applicants who were not selected. We found that what most distinguished those who were not selected was that they tended to be extroverts and less risk averse. This reinforced the finding that the company was employing clones.
  • We were able to look at the qualities that distinguished those identified as having high potential and those being promoted. We found that the people labeled as high potential were marked out by strong performance, being outgoing, and showing entrepreneurial spirit. In a business trying to adopt a faster-paced, edgier, and more entrepreneurial approach, this was a good finding. But when we looked at the qualities most likely to lead to promotion, we found something slightly different. When it came to actual career progression, it seemed that the people being chosen were those who performed well and were viewed as team players. For all the encouragement the business was trying to give people with the qualities it thought it wanted, the people being promoted into leadership roles were different. As a result of these findings, the business developed new criteria for promotion.
  • Finally, we looked at the average competency profiles of the groups measured and fed the findings into the learning and leadership development functions. As a result, specific learning and development programs were created to address key competency weaknesses in particular groups of employees. The measurement data thus enabled better targeting of learning and development investment.

These were all simple steps, accomplished with simple data and without resorting to expensive systems. But they led to powerful findings that ultimately helped the business deliver its growth strategy.
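As a minimal sketch of the kind of group comparison described above, assume all the assessment data are collected against the single competency framework in one table, with one row per person per competency. The file and column names here are illustrative:

  # A minimal sketch of comparing average competency profiles across groups.
  import pandas as pd

  data = pd.read_csv("competency_ratings.csv")  # hypothetical central export
  # columns: person_id, group ("new_hire", "high_potential", "not_selected",
  #          "promoted"), division, competency, rating

  # Average profile per group: one row per group, one column per competency.
  profiles = data.groupby(["group", "competency"])["rating"].mean().unstack("competency")

  # Are new hires just clones of the existing high-potential population?
  clone_check = (profiles.loc["new_hire"] - profiles.loc["high_potential"]).abs()
  print(clone_check.sort_values())  # small gaps everywhere suggest very similar profiles

  # Which divisions attract stronger candidates overall?
  by_division = data[data["group"] == "new_hire"].groupby("division")["rating"].mean()
  print(by_division.sort_values(ascending=False))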

A less simple and more headline-grabbing example of using data to achieve more is closed-loop analytics: connecting measurement results with key talent and business performance data points—things like sales revenue, customer service scores, performance ratings, absenteeism, and even bonuses. This may sound daunting, but it is really a matter of collecting the data in one place. Specialist systems do exist and can help considerably, but a large spreadsheet will also do the job.

The data can then be used to identify the impact of each measure on different business outcomes. For example, it might be found that targeting particular personality profiles is associated with lower levels of employee theft. As a result, businesses are able to do two things. They can adjust and improve the measures they use to be able to predict certain outcomes more accurately. And since they will have a better understanding of what measures predict which outcomes, they will be able to adjust their hiring practices to target particular types of candidates.
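In practice, the closed loop can start with nothing more than a join between two tables. The sketch below assumes assessment scores and business outcomes can be matched on an employee identifier; the file names, columns, and outcome measures are illustrative:

  # A minimal sketch of closed-loop analytics: join measurement results to
  # business outcomes and look at simple associations.
  import pandas as pd

  scores = pd.read_csv("assessment_scores.csv")    # person_id, conscientiousness, cognitive, ...
  outcomes = pd.read_csv("business_outcomes.csv")  # person_id, sales_revenue, absence_days, ...

  merged = scores.merge(outcomes, on="person_id", how="inner")

  # Correlation of each measured quality with each outcome; even this crude
  # view suggests which measures appear to predict which outcomes.
  measure_cols = ["conscientiousness", "cognitive"]
  outcome_cols = ["sales_revenue", "absence_days"]
  print(merged[measure_cols + outcome_cols].corr().loc[measure_cols, outcome_cols])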

We have seen a couple of companies go one step further and connect four data points: measurement results, talent data, business metrics, and employee engagement data. They can then see, for example, exactly how various leadership behaviors affect engagement and performance.

However, as we have argued, although smart systems can be great, they are not always necessary. As a recent McKinsey report noted, simply making data more accessible to relevant stakeholders can create tremendous value.8 In this vein, at the company described earlier, we developed annual measurement data reports at the group, divisional, and functional levels. The frequency of reporting should obviously depend on the amount of measurement activity in a business. In our experience, though, the key is to have a reporting period that allows new insights with each report. Producing quarterly reports is possible, but if each new one does not provide new insights, this risks undermining the perceived value of the reports.

As we have seen, measurement data can be used for so much more than individual people decisions and development. Precisely what will depend on your business, but we hope that the examples will provide you with ideas and inspiration. We had originally hoped to give many more examples, with case studies showcasing the companies that excel at this. But the more we investigated, the more we realized that the majority of organizations are not currently leveraging their measurement data to provide this “something extra.” This is a shame, because in every case, it is a missed opportunity. If the rise of Web search engines has taught us nothing else, it has taught us that we should collect, connect, and use data.

Things are changing slowly. In this emerging era of big data, companies are trying to leverage all the information they can get to better understand and improve the way their businesses function. Talent analytics is one of the latest frontlines of this data-driven push for profits, with the promise of talent intelligence held out as the prize. Yet as we noted at the beginning of this book, these systems tend not to include measurement data. This is a critical point because measurement results are the key to turning plain old administrative talent data into genuine talent intelligence.

Measurement brings the ability not only to know the talents and competency profiles of people but also to achieve more effective and targeted hiring, promotion, and development processes. It also affords the opportunity to use these processes to change and develop the competencies of different employee groups to match, support, and drive business objectives.

Building a Culture of Measurement

Of course, building talent intelligence takes time. Initial insights are possible, but it usually takes a year or so to be able to start seeing trends. And although taking steps to ensure that people use results well can have an immediate impact, it often takes some years to achieve fully.

Perhaps this is why so many businesses do not focus on these issues: they seem too long term. But we cannot say too many times that businesses should focus on them. Measurement is useless unless the results are used well, and businesses are extracting only half of the value that measurement has to offer if they do not do more with the data.

Moreover, by pursuing these things, a tipping point can be reached, at which understanding and expectations about measurement change. People come to understand what they can and cannot do with results and assume and expect more from the data. And it is this self-sustaining culture of measurement, not just operational logistics, that should be a key goal of implementing measurement processes. Getting the measure to the participant and the results to the manager is the easy bit. Building the culture is the challenge. And every use of measurement is an opportunity to change the way people think about it and treat it.

A few years ago, we met a global financial services business that claimed to have built a culture of measurement. Impressed, we explored further. We found that there was indeed an expectation that measurement should be an integral part of people decisions, as well as a widespread belief that it could add value. But there was no real attempt to ensure consistent use and only a limited effort to use the data to do more. So whatever culture of measurement existed was limited. Yet it did raise an interesting point: for some businesses, achieving such a culture seems a long way off and simply getting managers to accept the use of measurement can be a big win. For companies in this situation, we have found that three simple steps invariably work to begin moving the business toward measurement: show the business case; start small, get it right, and then expand; and get the interviews right.

Show the Business Case

Sometimes the challenge is as fundamental as convincing managers that they need help and that measurement can provide it. Other times it is simply about showing that measurement need not slow selection processes down or lead to lost candidates. Whatever the specific issue, the answer is always the same: build a compelling business case that senior executives can buy into. The place to begin is with hard data on the current situation—things like failure rates and average performance ratings of new employees. Of course, sometimes these figures do not show any need for measurement. It could be that turnover is generally low and performance ratings are uniformly high. Or it may be that people decisions are already pretty good and the business simply wants to improve them further. In these cases, there is plenty of evidence freely available about the positive impact that measurement can have when done well. Either way, the first step in introducing measurement is to establish evidence of why the business needs it.

Start Small, Get It Right, and Then Expand

We are often asked which measurement processes should be introduced first. Our answer is to start with the process that is most likely to yield the biggest benefit, but begin with a small trial. Use this trial to hone it, tweak it, and get it working well, and only then expand it elsewhere in the business. The trial period will also provide an opportunity to evaluate the process and build evidence of its value, and the senior leaders involved can become champions for the process.

Get Interviews Right

Probably the most frequent touch point that managers have with measurement is the hiring interview. So even if a firm's objective is to introduce psychometrics to selection processes, it needs to make sure that it gets interviews right. This is partly because any positive impact of testing will be limited if the accompanying interviews are not done well. But it is also because getting interviews right provides an ideal opportunity to educate the business at large about some of the basics of measurement—things like the value of being better at measurement, the risks of rating biases, and the benefits of focusing on fit.

So how can businesses get interviews right? The subject is worthy of a book, but there are five main levers to use:

1. Communication with the candidate. Good communication equals a better candidate experience. As with all other measurement processes, then, it is important that candidates know exactly what to expect when they come for an interview.
2. Training for interviewers. All interviewers should be given basic information about regulatory compliance—whether there are any questions they have to ask and what kinds of things they are not legally allowed to ask. Beyond this, though, training can help ensure that interviewers make better judgments and provide a better candidate experience.
Some companies choose to teach interviewers about building rapport and asking different sets of questions. Another current vogue is to educate interviewers on potential biases that they may have and that can reduce the quality of their judgments. Both types of training can help. But there is evidence to suggest that if companies do only one piece of interviewer training, it should be frame-of-reference training: providing interviewers with examples of interviews and then getting them to rate the interviewee.9 The idea is that this helps create a common reference point for what good looks like.
As for who to train, obviously the more people you train the better, but there is another option. One thing we have seen some businesses do when candidates have more than one interview is to focus on training a small cadre of “superinterviewers”: individuals who are trained in interviewing skills to a higher level than other people in the company. They then ensure that every candidate is seen by one of these superinterviewers.
3. Interview guides. Preset lists of questions for interviewers to ask are useful because they provide a process to follow. They can thus ensure consistency, as well as give interviewers suggestions for questions. Most textbooks will tell you that structured interviews are what you need. But as we explained in chapter 4, they are not always the best option. By far the preferred option, especially at middle or senior levels, is the semistructured interview. It provides interviewing managers with an opportunity to build rapport and gives candidates a much better experience.
4. Interview outputs. The typical outputs of interviews are comments and ratings. We know of some businesses in which there simply are no outputs from interviews: no notes are taken, no formal comments given, and no ratings made. We also know of other businesses that require detailed notes to be taken and a page or two of comments and ratings to be made afterward. In our experience, a compromise between these extremes works best. Some ratings need to be made and some brief comments can be helpful to the hiring manager or HR. But these should be kept to a minimum. Managers are busy people, and only information that is going to be subsequently used should be recorded.
5. Accountability. Finally, businesses need to ensure that interviewers feel accountable for their judgments. The flip side of this is that good interviewers should be praised and rewarded. Interviewing is a valuable skill for businesses, and it should be treated as such.

One final point about interviews is that businesses also have the option to buy a ready-made solution like an interview system. This combines interview guides and training in how to do interviews. Some of the interview guides have set structures; others are flexible and can be tailored. They are attractive because they can appear to be a quick route to improving interviews, and some of them are genuinely good. But they are also short-term solutions and, in our experience, are usually overpriced for what they are, in particular those that require all users to be “certified.” Instead, it tends to be cheaper and better to use independent consultants to build your own interview guides and provide training. There is certainly no need to tie yourself into using inflexible systems that require interviewers to be certified. So if you are serious about interviews, build your own. It can be as effective as any off-the-shelf interview system and will be more tailored and considerably cheaper.

Spreading the Expertise

Using measurement results and data effectively, then, is about building a culture of measurement. It is about changing perceptions in the business at large about what measurement is and can do. In this respect, measurement expertise should be distributed throughout the business rather than residing in a few individuals. We do not mean here that everyone needs to be a trained expert, just that the business end users of measurement methods should be effective, educated consumers.

Of course, some specialist expertise will still be required, be that an internally employed expert or external consultants or vendors. And this is the topic that we address in the next chapter: how organizations can source the expertise they need to implement and use measurement methods.


Case Study
Standardizing Interviews Across a Decentralized Global Business
Improving measurement processes need not always be expensive or involve complex technologies. A few years ago, we worked with a large global business in the energy sector, helping it develop its measurement practices. One of the issues we quickly uncovered was that both how interviews were done and how well they were done varied considerably across the business.
To ensure interviews were consistently effective, we wanted to introduce a common process, a standardized way of doing them. Yet the company was hugely decentralized, with a culture of every business unit doing its own thing. And it quickly became clear that there was no budget for training interviewers.
In talking to frontline HR staff and managers, we discovered that almost everyone seemed to struggle with the questions to ask in interviews. To address this, we developed a new interview process. At the heart of it was a single interview guide that could be used for all nontechnical interviews at all levels of the business.
Created in a Microsoft Excel spreadsheet, it presented users with a simple form. All they had to do was provide four pieces of information:
  • Which country the role was within
  • What level of the business the role was in
  • Whether it was a people management or individual contributor role
  • Which four of sixteen competencies were most critical for success in the role
Users then pressed the Print button, and an interview guide appeared. It had a standardized structure, with instructions for how to conduct the interview. It provided suggestions for introductory questions to build rapport with the applicant. It listed the questions that legally had to be asked. Then for each of the four selected competencies, the interview guide suggested five questions, with users being advised to select just two or three. Each question was accompanied by follow-on prompts and behavioral indicators—suggestions of what a good and a poor answer might look like to guide the rating of competencies. Finally, there was space for notes and a form requiring certain ratings and comments to be made.
In total, there were 480 main questions, with many more follow-ons, and a similar number of behavioral indicators. This may sound complicated and expensive, but it was not. To build it, we first asked a vendor to provide a list of questions and behavioral indicators. Since the vendor already had such a list for its own generic competency frameworks, it was not difficult to prepare this. All it had to do was match the competencies in the company's framework with its own. We then asked a second vendor to check and add to the list. Finally, we reviewed the list ourselves before asking a specialist in Excel to build a spreadsheet to contain all the information. All in all, it cost under twelve thousand dollars.
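To illustrate the underlying logic (in Python rather than the Excel tool just described), a toy version of the guide builder might simply filter a question bank by the chosen inputs and competencies; all names and questions below are made up for illustration:

  # A toy sketch of the guide-building logic described in the case study.
  QUESTION_BANK = {
      # competency -> list of (question, behavioral indicators)
      "Drive for results": [
          ("Tell me about a demanding target you hit. What did you do?",
           "Good: specific actions and outcome. Poor: vague, credits circumstances only."),
          # ... more questions per competency in the real bank
      ],
      # ... remaining competencies
  }

  def build_guide(country, level, role_type, competencies):
      """Return a simple interview guide for the selected competencies."""
      guide = [f"Interview guide - {country}, {level} level, {role_type} role",
               "Open with rapport-building questions; cover any legally required items."]
      for comp in competencies:
          guide.append(f"\nCompetency: {comp}")
          for question, indicators in QUESTION_BANK.get(comp, []):
              guide.append(f"  Q: {question}")
              guide.append(f"     Indicators: {indicators}")
      guide.append("\nNotes and ratings: ...")
      return "\n".join(guide)

  print(build_guide("UK", "middle", "people management",
                    ["Drive for results"]))  # the real tool used four competencies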
It succeeded because it made life easier for people, so the business wanted to use it. It was simple and easy to use. The Excel format meant that it could be used anywhere in the world. And the standardized structure meant that a more or less common interview process was followed across the business.
It did not solve all issues, of course. Training was still lacking. It was a good start, though, and it bought us the goodwill and credibility with frontline managers that we needed to do more to improve interviews. We did a lot of things at that company to develop its measurement processes, some of them quite expensive and technical. But in retrospect, it was this simple tool that probably had the biggest impact of all, touching as it did almost every manager in the firm.

Notes

1. Saville, P. (2011). Personal communication.

2. Hurtz, G. M., & Donovan, J. J. (2000). Personality and job performance: The Big Five revisited. Journal of Applied Psychology, 85, 869–879.

3. Barrick, M. R., & Mount, M. K. (1993). Autonomy as a moderator of the relationships between the Big Five personality dimensions and job performance. Journal of Applied Psychology, 78, 111–118.

4. Witt L. A., Burke, L. A., Barrick, M. R., & Mount, M. K. (2002). The interactive effects of conscientiousness and agreeableness on job performance. Journal of Applied Psychology, 87(1), 164–169.

5. Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Lanham, MD: Rowman & Littlefield/Jason Aronson.

6. MacKinnon, R. A. (2010). Assessment and talent management survey. Thame, Oxfordshire: Talent Q.

7. Fernández-Aráoz, C. (2005). Getting the right people at the top. MIT Sloan Management Review, 46(4), 67–72.

8. Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., & Hung Byers, A. (2011). Big data: The next frontier for innovation, competition, and productivity. New York, NY: McKinsey.

9. Roch, S. G., Woehr, D. J., Mishra, V., & Kieszczynska, U. (2012). Rater training revisited: An updated meta-analytic review of frame-of-reference training. Journal of Occupational and Organizational Psychology, 85, 370–395.
