2

Standard Measures of Talent

How Good Are They Really?

In 2011, close to 25 million people in the United Kingdom and United States alone tuned in each week to watch the X-Factor, a TV music competition in which aspiring singers compete for a recording contract. The winner is chosen by a celebrity panel and the audience at home, who phone in to vote for contestants. It is a franchise business, with versions screened in over forty countries: from El Factor X in Colombia to XSeer Al Najah, the pan-Arab version. Talent spotting has become entertainment for the masses, a casual evening activity, and we all think we can do it.

Dictionaries define X factor as a special quality or talent that is essential for success but difficult to describe. Yet over the past hundred years, this is precisely what psychologists have tried to do. They have tried to identify and describe the qualities essential for workplace success. Just like the X-Factor audience, most managers and leaders would probably claim to “know talent when they see it.” What researchers have done is to check whether these ideas and hunches are correct, and in this chapter, we are going to look at what they have found.

It is a good place to start, because many businesses are unsure about what they should measure. So focusing on four main qualities that most assume are essential for success, we explore what the latest research can tell us about these factors. And in doing so, we will notice ways in which organizations currently often over- or underestimate the value of these standard measures of talent.

First, though, let's take a step back and consider how we can tell whether an ability or quality really is an X factor and reliable sign of talent.

Finding Dependable Measures of Talent

For some jobs, it is easy to test people's talent. If you are hiring a data entry clerk, for example, you could use a keystroke test to evaluate his or her speed and accuracy entering data. With financial traders, you could simply look to see who makes the most money (assuming they have the same level of opportunity). Yet there are many jobs where measuring talent or results is not so easy and many situations, such as recruitment, where you do not have access to this information. So rather than measuring talent itself, we usually have to look for the signs and symptoms of it—particular behaviors or characteristics that we think are indicative of talent like intelligence, ambition, and conscientiousness.

For jobs where we are able to measure talent directly, creating measures of this talent tends to be fairly simple. All we need to do is to make sure that our measure accurately represents how good individuals are at this talent. But when we are measuring only the signs of talent, we need to make a second important check: whether the sign or factor we are measuring really is indicative of talent—so with intelligence, whether how bright someone is genuinely relates to or predicts workplace success. These two basic checks are important because they reveal whether a particular measure works and whether a specific factor really is a sign of talent. And as we look at some of the signs of talent that psychologists have tried to measure, we keep returning to these two questions.

For many years, it looked as if it would not be possible to find any signs or measures that reliably predict workplace success. Researchers found factors that appeared to predict performance in specific situations, yet when they looked at whether these factors could do so in different situations—in other jobs or companies—they usually found that they did not appear to be linked to success. Talent, it seemed, was variable: what was required to succeed differed from job to job. There did not seem to be any single X factor.

In the 1970s, however, new statistical methods changed that belief. Researchers started using some sophisticated statistical methods to combine the results of multiple studies in a process known as meta-analysis. The difference this made was that instead of looking at single studies that used the results from, say, a hundred people, researchers were now able to combine the results from thousands of people. And when they did this, they found that some factors did appear to be common to success in a variety of jobs. They discovered that, yes, if you compared just two jobs or two companies, then what was required for success could be very different in each of them. But when you looked at all the jobs and companies together, certain things did seem to signal success in most situations.
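The core idea behind meta-analysis is simpler than it sounds: pool the correlations reported by individual studies, weighting each by the number of people it covered. Here is a minimal sketch in Python; the study figures are invented purely for illustration, and real meta-analyses add corrections (for measurement error and range restriction) that are omitted here:

```python
def combined_validity(studies):
    """Sample-size-weighted mean correlation: the simplest form of
    meta-analytic pooling. Each study is a (sample_size, validity) pair."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Three hypothetical studies of the same predictor. Taken one at a time
# they disagree; pooled, with larger studies counting for more, a
# clearer overall signal emerges.
studies = [(100, 0.10), (300, 0.45), (600, 0.35)]
print(round(combined_validity(studies), 3))  # 0.355
```

The weighting is the point: a study of 600 people moves the pooled estimate six times as much as a study of 100, which is why combining many small, conflicting studies can reveal a consistent effect that no single study shows reliably.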

We are going to look at four of the main potential indicators of talent that researchers have studied:

  • Prior experience—what you have done before
  • Competencies and capabilities—what you can do now
  • Intelligence—how bright you are
  • Personality—your personal characteristics, typical ways of behaving, and the attitudes you hold

Tens of thousands of research papers have been published on these four characteristics, but what follows will summarize the key findings for you in an accessible form. You may be surprised by some of this, as some of the factors that we often think of as important for talent do not in fact seem to be. At the same time, others are highly effective for companies wanting to get ahead of the game in spotting talent.

To begin, let's check what research has to say about most organizations' starting point in the hunt for talent: the experience that people bring.

Experience as a Sign of Talent

Typically the first thing we do is look at a résumé to see what individuals have done before. We look at what types of roles they have held, how long they have stayed in them, and what they have achieved. We look at their educational level, where they have lived and worked, and try to form a sense of what type of person they are. We do this because we assume that prior experience in roles similar to the one that we want them to fill now will enable them to perform better. And we do it because we assume that we can predict how well people will do in the future from how well they have done in the past. But can we?

Well, yes, but there are limits and challenges. Experience can clearly give people useful knowledge and skills that can help them succeed in a new role.1 Sometimes that is all we really want to know: whether someone is sufficiently qualified and has the technical ability to do the job. In these cases, we can supplement a résumé with a test of technical knowledge or a work sample test in which we actually observe someone's skill at something. Studies have found that for highly technical or skilled jobs, these types of measures can be good at predicting performance.

Researchers have also found that prior experience can help us predict people's future performance in more general roles. Yet the results have not been as positive as we might expect.2 The reason, it seems, is that there is more to experience than knowledge and skills. Every workplace has its own way of doing things, its own culture. It can leave people with habits, attitudes, and ways of working that may be well suited to one firm but do not fit a new job or company.3 For instance, a common scenario is to meet a candidate for a senior role who has a strong background in terms of having already done similar jobs in other companies. Yet he may previously have worked only in business cultures that are very different from the one of the company now considering employing him. So people's experiences can hinder their future performance just as they can help it.

The challenge this creates is that although the relevance of job knowledge and technical expertise is often obvious, the impact of experience on ways of working is typically less visible. As a result, we tend to overrate the value of experience by focusing more on the positives associated with it while overlooking the negatives.

Other factors also add to the difficulty of interpreting work experience. Over one-third of résumés are thought to contain inaccuracies, and people increasingly have less predictable career patterns.4 In addition, the importance of various types of work experience can vary too. In many jobs, the experience of having carried out similar tasks before is more important for predicting success than whether a person has worked in a particular industry. But there are some roles (such as auditing) in which experience in a specific industry appears to be the more critical issue.5

Some of our most common assumptions about work experience are not supported by the evidence either. Here's one example that really surprised us. What would you think if you were comparing candidates and one of them had held five jobs in five years while the other had had only one job in that time? Most people will assume that the one-job person is the better candidate and that the five-job person is perhaps unreliable or unable to hold a job for long. Yet a recent study looking at over twenty thousand people found no relationship between how long people have lasted in previous jobs and how long they will stay in their next one.6

Biodata: A Somewhat More Systematic Attempt.

Perhaps because of the difficulty in interpreting work experience, some companies have adopted a broader and more systematic approach to the issue. They collect information about a range of factors from the life histories of employees, referred to as biodata. They then analyze which experiences are most linked to outcomes like performance, retention, or avoiding accidents so that they can look for these things in job applicants. For example, one of the best predictors of training success among trainee pilots in World War II was purportedly the question, “Have you ever built and flown a model airplane?”7

What is important here is how broad a range of factors biodata can consider. Questions used can vary from the clearly work related (“How often were you late to work in your last job?”) to the less obvious (“How many times did you have to take your driving test?”). They can even include the unbelievably odd (“How old were you when you first kissed someone romantically?”).

The use of biodata has persisted because their ability to predict success is generally good when the tools are well developed, with data collected from large numbers of people. (Small companies can still do this by acting in a consortium with other similar businesses.) And biodata seem to be able to predict a wide variety of things, from performance to absenteeism to ethical decision making.8 Just how good a predictor these data are depends on the particular test used and what you are trying to predict. But validities of between 0.3 and 0.38 have been reported for predicting performance (see the box).9


Predictive Validity Explained
Throughout this book, to show how good particular measures are at predicting success, we refer to their predictive validity.
The validity figure is a number between 0 and 1 that indicates how strong the relationship is between a particular factor (such as experience) and a specific outcome (such as performance). If the validity is 0, this means that the factor is no better at predicting the outcome than chance. We might as well flip a coin. If the validity is 1, this means that the factor predicts the outcome perfectly every time; it is never wrong. In practice, a validity of 0.3 is considered good, and a validity of 0.5 is considered great. So when we say that biodata have a validity of somewhere between 0.3 and 0.38, that means the measure is pretty good at predicting performance.
Interestingly, the validity figure can also tell us what proportion or percentage of all the reasons people succeed is accounted for by the measure. To work this out, we simply square the validity number (multiply it by itself). A measure with a validity of 0.3 thus accounts for 9 percent of the reasons people succeed or not (0.3 × 0.3 = 0.09, or 9 percent). And a validity figure of 0.38 for biodata means that it can account for up to 14 percent of the determinants of success. These figures do not sound large. Yet if we think about the sheer number of factors involved in determining people's behavior, they are fairly good.
For more detailed information on validity, including different ways of measuring it, see the section about it in the appendix.
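The squaring rule described in the box takes only a couple of lines of code. This sketch uses the validity figures quoted in this chapter for biodata (0.3 to 0.38) and for intelligence in high-complexity jobs (0.57):

```python
def variance_explained(validity):
    """Convert a predictive validity (a correlation between 0 and 1)
    into the percentage of the variation in success it accounts for:
    square the validity, then express it as a percentage."""
    return validity ** 2 * 100

print(round(variance_explained(0.30)))  # 9  -> biodata, lower bound
print(round(variance_explained(0.38)))  # 14 -> biodata, upper bound
print(round(variance_explained(0.57)))  # 32 -> intelligence, high-complexity jobs
```

Note how quickly the percentage falls away: halving a validity quarters the share of success it explains, which is why the difference between a 0.3 and a 0.5 predictor matters far more than it first appears.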

Businesses, however, have not used biodata as much as we might expect given these good validities. Probably the main reason is that identifying which factors are the best predictors requires research on large samples, ideally thousands of employees, and this needs to be done separately for each role and organization. In addition, no matter how useful they are, biodata factors sometimes just do not look right (psychologists would say that they lack face validity). In one study, for example, it turned out that the question that best predicted success in jewelry sales was, “How many times have you purchased real estate?” Furthermore, biodata do not help us understand why factors like this predict something, which can limit both their usefulness and firms' level of comfort with using them.

Experience and Talent Overall.

So we have found that having the right experience can indeed enable people to bring certain knowledge and skills that will help them perform well in a job. Yet it is not always obvious what the right experiences are, building good tools to measure them is not easy, and we tend to overlook the potential downsides of experience. Prior experience may be the first place we look for an indication of talent, but perhaps it should not be. Even when it is measured systematically, it is not the whole answer, and as we will see, it is not the best predictor. In our hunt for the X factor, we need to look elsewhere.

Competencies and Capabilities as Signs of Talent

Probably the second thing that most people look at when measuring talent is what is called competency or capability. Although some commentators have strong views about what is a competency and what a capability, the terms are widely used to refer to the same things. They are something of a catch-all term for knowledge, skills, abilities, and other characteristics (which you will sometimes see in research papers abbreviated as “KSAOs”). In other words, they refer to pretty much everything that is not experience. They are what people can do and how they tend to do it.

The idea of using competencies to spot talent gained popularity in the 1970s with the work of the noted American psychologist David McClelland.10 He argued that we need to think about how competent people are not only in terms of their knowledge and skills, but also in terms of their broader behaviors, motives, and attitudes. Subsequent research has supported this idea too, showing, for example, that how leaders behave can affect both their individual and their company's results.11

When it comes to accurately measuring competencies, assessment centers (actually a process rather than a place, as we'll describe in chapter 4) are often cited as the most effective method. And there is evidence that competencies measured using assessment centers can predict performance with validities roughly on a par with collecting biodata—around 0.3 to 0.37. This means that they can predict up to 14 percent of the causes of success.

However, assessment centers are not the most frequently used method of measuring competencies. More often the method is unstructured interviews, managers' ratings of their employees, or employees' self-ratings of their own capabilities. These may be easier and cheaper to implement than more formal measurement methods, but they also tend to be a lot less accurate. And for every drop in accuracy, the ability of competencies to predict performance is reduced.

Deciding Which Competencies to Measure.

Perhaps the bigger challenge, though, is knowing which competencies to measure. There are, of course, some usual suspects that are commonly believed to be important no matter what the role—things like drive for results and decision-making and communication skills. Yet while these are largely accepted as necessary foundations for success, it is not clear that they are also what distinguishes great performers from ones who are just okay.

In theory, for each job family, a company would conduct an analysis of high and low performers to identify which competencies are the most essential. This level of detail is important because different roles require different competencies. A cosmetics salesperson in Germany, for example, requires a different set of competencies from a systems engineer in Indianapolis.

Merely in terms of the time and resources required, however, this kind of detailed job analysis is not practical in many organizations, so companies often cut corners. A common compromise is to apply the competencies that are identified in high performers in a particular role more broadly than is warranted. One national health care provider's competency framework, for instance, is applied to all leaders across all parts of the organization. This may sound fine, but the research on which it is based involved only a small group of senior executives.12 Even assuming that their competencies are the right ones to take the business forward, it is not clear that what makes them successful is also required from all other leaders. Compromises like this may seem harmless, but as before, with each corner cut, the utility of competencies in predicting performance is reduced.

A common alternative is for companies to use generic competency frameworks developed by vendors. This is particularly common when trying to predict success a long way in advance—for example, when trying to predict which of a new group of graduates are most likely to be leaders of the future. These models can sound quite compelling. Vendors may be able to provide evidence that high performers in similar businesses tend to be strong in certain competencies. Yet this approach assumes that what has enabled others to succeed to date is the same as what will enable your business's people to succeed. And in many cases, this is just not so. Furthermore, since most of these frameworks are commoditized and owned by vendors, there is a lack of independent research into their efficacy.

Moreover, part of the challenge in trying to pinpoint which competencies are the most important for a particular job is that for most jobs, there is more than one way to do them well. This is especially so for more senior roles.13 Researchers have found that the competencies that the best performers are rated highly on can vary even when these people are in very similar roles. In one study, for example, the competencies that top salespeople were rated highly on differed considerably.14

Implementation Challenges.

So competencies are compelling and clearly potentially useful, but they also have implementation challenges that fundamentally undermine their usefulness. In the few instances where the link between competencies and performance is simple, clear, and close and a good measure exists, competencies can of course help identify talent. Yet beyond such instances, it is tough to find specific behaviors that can reliably predict performance. As noted above, competencies are often either not measured accurately or are not what is genuinely required for success. As with prior experience, using competencies to measure talent appears more popular than is justified by their actual efficacy.

From Not So Good to Better.

Already, then, we are beginning to see why selection failure rates have remained high for so long. The two main signs of talent that most businesses look at, experience and competencies, do not appear to be doing the job. This does not mean that they cannot, however. Indeed, there are some simple things that companies can do to improve the efficacy of these factors in predicting success. We look at these when we start exploring implementation issues in chapter 6. What is important for now, though, is that despite their common use, both have so far generally failed to fulfill their promise in helping identify talent.

So what does work? Well, there are two main contenders left: intelligence and personality. Ironically, McClelland introduced the idea of competencies because he was disillusioned with the effectiveness of intelligence and personality tests. But as the years have passed, these measures have emerged as more promising options.

Intelligence as a Sign of Talent

Over the years, intelligence has been defined in many different ways and called many different things, most recently “reasoning ability” and “cognitive ability.” Then there are the abbreviations: IQ (intelligence quotient), g (general intelligence), GCA (general cognitive ability), and GMA (general mental ability). Whatever we call it, the way we measure it has not changed in the hundred years since the first intelligence tests were devised.

The French started it. In the early twentieth century, the French government decided to identify children who needed specialized education programs and so ordered the creation of a test. By 1916 the test had been adapted for American populations, and for a while, intelligence tests were used mainly to predict academic performance. During World War I, though, military forces started using tests to screen people for service, and intelligence testing formed a large element of this screening.

In the 1960s, the use of intelligence tests took a knock as they became embroiled in political debates about racial differences in scores. But in the 1980s, new evidence emerged from meta-analyses that intelligence is indeed a reliable and capable predictor of work performance. Today it is widely regarded by many psychologists as the single best predictor of workplace success. It is not always the most important factor for every job in every circumstance, but overall, across all roles and all situations, it seems to be able to predict success better than any other factor. And given the growing complexity of the workplace, there is an argument to suggest that its importance is growing too. Just how good is it? A summary of over four hundred studies found that the validity of intelligence in predicting employees' performance was 0.38 for low-complexity jobs, roughly on a par with biodata. But for medium-complexity roles, the validity was 0.51, and for high-complexity jobs, it was as high as 0.57.15 This would mean that for high-complexity jobs, it can account for over 32 percent of the causes of success. Given how unpredictable the world around us is, that really is incredible. Subsequent research has supported these findings and extended them to show that intelligence tests appear able to predict performance in almost all jobs in all cultures.

The Importance of Intelligence.

There is more, though, for intelligence has also emerged as more important than any other personal characteristic in determining a whole host of different life outcomes.16 The list is long but includes some highly relevant to the workplace:

  • Educational achievement in elementary school, high school, and college
  • Ultimate educational level attained
  • Adult occupational level
  • Adult income
  • A wide variety of indexes of adjustment at all ages

In contrast, low scores on intelligence tests have been found to predict:

  • Delinquency and criminal behavior
  • Accident rates on the job
  • Disciplinary problems in school
  • Poverty
  • Divorce
  • Having an illegitimate child

Clearly some of these are politically laden findings and have stimulated often heated debate. What is important for our purpose is that intelligence has proved to be such a strong predictor of life and work success. This is why so many talent measurement processes use intelligence tests: they are perceived to be an efficient and effective means of identifying the brightest and best.

Asked how intelligence helps performance, most people suggest that it is by improving things like problem solving and decision making. And indeed some research has shown that intelligence levels are more important than experience for the ability to think strategically.17 The biggest impact of intelligence, however, seems to be on the acquisition of job knowledge. Simply put, people higher in intelligence acquire more job knowledge and acquire it faster.18 An excellent demonstration of the importance of this comes from a series of studies run by the US military in the 1980s. They found that recruits with below-average intelligence required more than three years to reach the levels of performance that recruits with higher intelligence began with. Moreover, even with on-the-job experience, enlistees with lower intelligence continued to lag behind those with higher intelligence.19

Limitations of Intelligence Tests.

There are, of course, some limits to all this. For instance, intelligence does not seem to be equally good at predicting performance in all jobs. The research on this issue is not complete, but the ability of intelligence to predict sales performance, for example, has been found to be mixed.20 In addition, there is some debate about the type of performance that intelligence can predict. Some researchers have found that it is better at predicting quantity and speed of work than quality of outputs.21 Others have found that intelligence tends to predict best possible performance rather than typical day-to-day performance levels.22

In addition, intelligence levels may be so generally high in some environments that intelligence may not be an effective way of distinguishing between the good and the great.23 We faced this difficulty recently when trying to use an intelligence test to evaluate applicants for roles in financial trading. The candidates were all so intelligent that the test could not discriminate among them.

There is also good anecdotal evidence that more intelligence is not always better. Beyond a certain level, individuals may “intellectualize” things too much and be insufficiently practical, which can have a negative impact on performance. This may be one of the reasons that low intelligence test scores have been shown to be better able to predict failure than high scores can predict success. It seems that while low intelligence may sometimes be enough to ensure failure, high intelligence is not usually sufficient on its own to secure success, and too much of it can be counterproductive.

One last limitation with intelligence tests is that there is more to intelligence than is measured by many of these tests.24 For example, most people would agree that thinking styles and how we use our intelligence are important for success, yet these elements of intelligence have received surprisingly little attention from researchers and vendors alike.25

There have, of course, been attempts to broaden how we measure intelligence and distinguish different types. For instance, in the 1980s, the psychologist Howard Gardner suggested that there are nine different types of intelligence that are all quite separate—things like linguistic, musical, and interpersonal intelligence. Yet subsequent research has shown that these abilities are all measured by standard intelligence tests and do not appear to be separate abilities at all. More promising is the concept of practical intelligence—the ability to deal with the problems and situations of everyday life.26 It does not appear to be as strong a predictor of performance as standard intelligence tests, yet as a broader approach to thinking about intelligence, it does show merit.

Finally, there are measures of complexity of thought: the degree to which people are able to engage in strategic thinking. Some of these measures show promise, such as those based on Elliott Jaques's theory of categories of mental processes.27 Vendors are making substantial claims for them, too, reporting validities of over 0.7 or even 0.9. Nevertheless, caution is advised because of a lack of independent research into just how effective these measures are. The research that does exist is too limited to tell whether this approach can reliably produce these validities.

So the measurement of intelligence has its limits and needs to evolve further. It is the best predictor of talent available today for most types of jobs, especially more complex ones, but even at best estimates, it can account for only around 30 percent of the reasons behind performance. It may often be necessary for success, but on its own, it is not usually sufficient. Something else is involved.

Personality as a Sign of Talent

Character or personality is often claimed as the source of much success. In fact, most people seem intuitively to see it as more important than intelligence. In part, this is because there tends to be much more variability in personality than in levels of intelligence, and so it can appear more salient.28 Yet for all its apparent importance, the emergence of personality tests for measuring talent is relatively new.

A Five-Factor Model.

For many years, the use of personality tests in businesses was held back by the fact that they were mostly developed for clinical settings, and so did not feel or look right to organizations. And progress in developing new tests was hampered by the vast number of different models of personality used, which hindered researchers in comparing results and reaching conclusions. This all changed after World War II, though, with the development of tests designed for organizations and the emergence of a widely accepted model of personality: the five-factor model.

Not all tests available are based on the so-called Big Five factors this model describes. Yet it has been critical in enabling the industry to move forward because it has allowed researchers to compare findings and develop a shared body of knowledge. The model has changed over time, but its modern form has been around for twenty years. The five personality characteristics it describes are:

1. Openness to experience. The degree to which people like to learn new things, have a wide variety of interests, and are imaginative and insightful
2. Conscientiousness. The degree to which people like to be reliable, prompt, organized, methodical, and thorough
3. Extraversion. The degree to which people derive their energy from being with others and enjoy interacting with others
4. Agreeableness. The degree to which people are friendly, cooperative, and considerate
5. Neuroticism or emotional stability. The degree to which people are emotionally labile, experience negative emotions, and can seem moody or tense

From tests on thousands upon thousands of people, these five personality factors have been shown to be more or less distinct from one another. So generally, the score a person obtains on one of these dimensions will have little bearing on how he or she scores on the other four.

The model has generated a mass of research into the links between personality and performance. Overall, studies have found that of the five traits, neuroticism, extraversion, and conscientiousness are often the most relevant for job performance.29 Unsurprisingly, different jobs appear to require different types of personality. Successful managers, for example, tend to be low in neuroticism and moderately high in extraversion and conscientiousness. For skilled and semiskilled jobs, conscientiousness and low neuroticism seem most important. For law enforcement jobs, low neuroticism, conscientiousness, and agreeableness all appear useful.30 As a general rule, there seem to be no jobs in which being high in neuroticism or low in conscientiousness is desirable.

Of all the five factors, conscientiousness has generated the most enthusiasm. In fact, one recent study showed that HR professionals believe it to be more predictive of performance than intelligence.31 In reality, though, the validity of conscientiousness is usually estimated to be between 0.22 and 0.28, below that of intelligence tests.32 Indeed, these validities mean that on its own, conscientiousness can account for only between 5 and 8 percent of the reasons for success.

The link between conscientiousness and performance is not straightforward either. It appears, for example, to be more important for some jobs than others, and possibly less important for managers than other staff.33 It also seems to be a better predictor of performance in experienced employees than in new employees or applicants.34 Finally, high levels of conscientiousness may actually be a negative sign for success in some roles because it can lead to behavior that is seen as bureaucratic and indecisive.35

One of the difficulties for businesses in using personality tools is that interpreting what results mean and which characteristics are most important for which roles is not easy. Interestingly, the research has not confirmed most people's intuitive beliefs about the relative importance of character. Of all the studies done, hardly any have found validities for personality traits to predict performance above the level of 0.30.36 This means that personality tests do not appear able to account for more than 9 percent of the reasons that people succeed or fail. Moreover, even these validities may be overestimated, since evidence has recently been found that publication bias—the tendency to publish only positive results—may have inflated results.37

Beyond the Five Factors

So why do we all think that character is important for success when the research does not back us up? Just as with competencies, one of the problems is likely to be that no single type of personality is the key to success. There is also the issue of how effectively we are measuring personality. An increasingly common theme here is that progress may now be limited by the very five-factor model that enabled the industry to get this far. Although the model has been useful in allowing researchers to progress in their work, test developers may need to look beyond it to develop better tools.

For example, each of the five factors is made up of additional, more specific characteristics. Although these components are described differently by different researchers, it has been suggested, for example, that conscientiousness consists of hard work, orderliness, conformity, and dependability.38 What is interesting is that studies have shown that these more specific components of the five factors may be more able to predict success than the Big Five.39

This may sound odd. If conscientiousness is made up of dependability and orderliness, how can these individual components be better predictors of success than conscientiousness itself? Well, to give an example, writing a book arguably has a lot to do with orderliness but is a lot less about dependability and conformity. If we measure these things separately, we may thus find that orderliness is a good predictor of book-writing success but that dependability and conformity are less effective. So when we bundle all these elements into a single thing—conscientiousness—we combine good predictors with less effective ones. As a result, conscientiousness as a whole ends up being less effective than the best of its more specific ingredients.
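The dilution effect described above can be illustrated with a toy simulation. In this hypothetical setup (the variable names and effect sizes are ours, purely for illustration), one facet of conscientiousness predicts the outcome and another does not; bundling the two into a single composite score yields a lower validity than the stronger facet alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical facets: orderliness drives the outcome, dependability does not
orderliness = rng.standard_normal(n)
dependability = rng.standard_normal(n)
outcome = 0.5 * orderliness + rng.standard_normal(n)

# Bundle both facets into a single "conscientiousness" composite
conscientiousness = (orderliness + dependability) / 2

r_facet = np.corrcoef(orderliness, outcome)[0, 1]
r_bundle = np.corrcoef(conscientiousness, outcome)[0, 1]
print(f"orderliness alone:   {r_facet:.2f}")   # around 0.45
print(f"bundled composite:   {r_bundle:.2f}")  # around 0.32, noticeably lower
```

Averaging a good predictor with an irrelevant one adds noise without adding signal, which is exactly why the specific facets can outpredict the Big Five factor that contains them.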

The five-factor model has also been criticized for not being complete or sufficiently business focused. Some alternative models and approaches have been suggested, and a new generation of tests is emerging.40 None of these has yet challenged the Big Five's dominance in research circles, but two types show particular promise in helping businesses spot talent.

First, a number of tools with more business-focused language are being developed. Saville Consulting's Wave test, for one, certainly sounds very different from traditional personality tests, with factors such as “Adapting Approaches” and “Influencing People.” It may be doing something different too, since a comparison study with traditional personality tests has suggested that it can exceed them for validity, with levels over 0.45.41 Caution is required since more independent research is needed to confirm these findings. Yet the approach of developing more business-oriented tools clearly has promise for organizations.

Second, there is the idea of personality derailers. This work turns on its head the traditional assumption that lack of success is due to people not having the “right stuff.” It suggests instead that people who do not succeed may have some “wrong stuff,” or unhelpful personal characteristics.42 Derailers could be considered competencies, but many of the measurement tools available are based on personality characteristics. The idea has been popularized by Sydney Finkelstein's book, Why Smart Executives Fail, and by the Hogan “Dark Side” personality tests, which try to measure potential derailing behaviors.43 Judging by the popularity of these tests, the idea seems to be striking a chord with many businesses. Again, though, caution is necessary. Although there is some evidence that these measures may predict things like staff turnover, in general the validity evidence for them is scarce at present. As with intelligence tests, choosing the right personality measure for your needs can be challenging, a topic we come back to in chapter 5.

What is clear is that personality tests do not currently appear to be as useful in our hunt for the X factor as we might expect. We say currently because, like many other people, we believe that character is important for success. And we are convinced that with time, better tests will be produced that are capable of showing how character helps. In fact, we come back to one very important role that personality tests can play in the next chapter. The challenge right now, though, is that most personality tests do not do as good a job as we would hope in helping to predict performance. Personality might indeed be more important for success than intelligence. But for the moment, talent measures are not able to prove this and show how.

Values, Social Skills, and Other Possible Signs of Talent

Experience, competencies, intelligence, and personality are, then, the four main places in which we have tended to hunt for talent. Several others are worth mentioning, though. For example, measuring values and integrity, although not new, has become an increasingly popular avenue for research over the past ten years. For the most part, the purpose of these measures is to help detect people who may engage in what are called counterproductive work behaviors—everything from stealing to being prone to accidents. And these tools have generally proved effective at doing this (we look at this in more detail in the appendix).

Yet these tools have also been used to try to predict success. Alignment between the values of managers and their business has been shown to be related to both individuals' success and their intention to remain with a firm.44 And some integrity tests that are based on Big Five personality measures appear able to predict overall performance levels at validity levels roughly on a par with the best personality tests.45

Measuring values more generally can be problematic, however. There are suggestions that tests of values are too easy to fake and are thus unreliable as a source of information. And the move to measure values sometimes appears driven more by idealism than by the potential efficacy of values as a predictor of success. A major oil multinational recently discovered this when it replaced its competency model with a values framework that described six core values of the business. Yet it soon found that the information gained from rating the six values was not enough to make selection decisions, and so it had to reintroduce a competency model alongside the values.

Testing social and political skills is another option. These tests have become increasingly popular as companies have become larger and more global—and more reliant on such skills. Research has shown that these measures pick up something different from intelligence and personality tests and that they may indeed be able to help predict managerial performance.46 Yet the best way to measure these skills is still not clear. For a while, many people advocated tests of emotional intelligence, but recent studies have shown that these measures have disappointing validity levels.47 More recently, measures of social capital have appeared that look at the size and shape of individuals' social networks. These tools certainly look interesting and are getting some headlines, but as yet remain unproven in their ability to predict success.

One other area of study is motivation and drive. The desire to achieve objectives has, hardly surprisingly, been shown to predict performance.48 Yet tests of motivation are relatively rare, and the findings of the research that does exist are mixed when it comes to the importance of different types of motivation for performance.

So as with the main four factors studied, there is promise and some progress in each of these other possible signs of talent but no clear way forward—yet.

Finding a Way Forward

What, then, are the signs of success that businesses should use to identify talent? In this chapter, we have looked at the main factors commonly measured and have reviewed evidence about the ability of each to predict success. We have discovered that all have some predictive validity, which means that all are better than mere chance at predicting success. So to some extent, we are right to look at these factors because they all have the potential to help us spot talent and make decisions about people in some situations.

Yet we have also seen that generally these standard markers of talent have limitations and implementation issues that make them less effective as measures than we might expect. No single, special X factor seems to exist, and we need to be careful in attributing too much to the results of any one measure.

Intelligence may be as close as we will ever get to finding such a factor. And given the number of variables involved in determining success, the fact that intelligence alone can account for up to 30 percent of the reasons that people succeed in some types of roles is genuinely impressive. But intelligence is not equally important for all roles and so has its limits. There simply is no one thing that is the key to predicting success in all situations, no magic bullet to measure that we can always use to spot talent.

This is probably not surprising. After all, most people would agree that success is not just about having one quality in abundance. Instead, it is typically about having the right combination of qualities. But how then can businesses determine which combinations of measures are the best to use? And is there anything they can do to boost their chances of spotting talent and accurately predicting success? It is these critical questions that we address in the next chapter.

Notes

1. Dye, D. A., Reck, M., & McDaniel, M. A. (1993). The validity of job knowledge measures. International Journal of Selection and Assessment, 1, 153–157.

2. Quinones, M. A., Ford, J. K., & Teachout, M. S. (1995). The relationship between work experience and job performance: A conceptual and meta-analytic review. Personnel Psychology, 48, 887–910.

3. Dokko, G., Wilk, S. L., & Rothbard, N. P. (2009). Unpacking prior experience: How career history affects job performance. Organization Science, 20(1), 51–68.

4. Society for Human Resource Management. (2011). Background checking: Conducting reference background checks. Alexandria, VA: Author; Dokko et al. (2009).

5. Moroney, R., & Carey, P. (2011). Industry versus task-based experience and auditor performance. Working paper, AFAANZ Conference, Gold Coast.

6. Housman, M. (2012). Does previous work history predict future employment outcomes? San Francisco, CA: Evolv.

7. Weekley, J. (2009). Biodata: A tried and true means of predicting success. Wayne, PA: Kenexa.

8. Bliesener, T. (1996). Methodological moderators in validating biographical data in personnel selection. Journal of Occupational and Organizational Psychology, 69, 107–120; Barge, B. N., & Hough, L. M. (1986). Utility of biographical data for predicting job performance. In L. M. Hough (Ed.), Utility of temperament, biodata and interest assessment for predicting job performance: A review and integration of the literature. Alexandria, VA: Army Research Institute; Manley, G. C., Benavidez, J., & Dunn, K. (2006). Development of a personality biodata measure to predict ethical decision making. Journal of Managerial Psychology, 22(7), 664–682.

9. Bobko, P., & Roth, P. L. (1999). Derivation and implications of a meta-analytic matrix incorporating cognitive ability, alternative predictors, and job performance. Personnel Psychology, 52, 561–589; Reilly, R. R., & Chao, G. T. (1982). Validity and fairness of some alternative employee selection procedures. Personnel Psychology, 35, 1–62.

10. McClelland, D. (1973). Testing for competence rather than intelligence. American Psychologist, 28, 1–14.

11. Semadar, A., Robins, G., & Ferris, G. R. (2006). Comparing the validity of multiple social effectiveness constructs in the prediction of managerial job performance. Journal of Organizational Behavior, 27, 443–461.

12. Bolden, R. (2010). Leadership competencies: Time to change the tune? Leadership, 2(2), 147–163.

13. Hollenbeck, G. P. (2009). Executive selection—what's right … what's wrong. Industrial and Organizational Psychology, 2, 130–143.

14. Smith, B., & Rutigliano, T. (2003). Discover your sales strengths: How the world's greatest salespeople develop winning careers. New York, NY: Warner Business Books.

15. Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72–98.

16. Schmidt, F. L. (2002). The role of general cognitive ability and job performance: Why there cannot be a debate. Human Performance, 15(1/2), 187–210.

17. Dragoni, L., Oh, I.-S., Vankatwyk, P., & Tesluk, P. E. (2011). Developing executive leaders: The relative contribution of cognitive ability, personality, and the accumulation of work experience in predicting strategic thinking competency. Personnel Psychology, 64, 829–864.

18. Schmidt, F. L. (2012). Cognitive tests used in selection can have content validity as well as criterion validity: A broader research review and implications for practice. International Journal of Selection and Assessment, 20(1), 1–13; Schmidt, F. L., Hunter, J. E., & Outerbridge, A. N. (1986). Impact of job experience and ability on job knowledge, work sample performance, and supervisory ratings of job performance. Journal of Applied Psychology, 71, 432–439.

19. Sellman, W. S., Born, D. H., Strickland, W. J., & Ross, J. J. (2011). Selection and classification in the US military. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection. New York, NY: Routledge.

20. Hauknecht, J. P., & Langevin, A. M. (2011). Selection for service and sales jobs. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection. New York, NY: Routledge.

21. Nathan, B. R., & Alexander, R. A. (1998). A comparison of criteria for test validation: A meta-analytical investigation. Personnel Psychology, 41, 517–535.

22. DuBois, C.L.Z., Sackett, P. R., Zedeck, S., & Fogli, L. (1993). Further exploration of typical and maximum performance criteria: Definitional issues, prediction, and white-black differences. Journal of Applied Psychology, 78, 205–211.

23. Hollenbeck. (2009).

24. Murphy, K., Cronin, B., & Tam, A. (2003). Controversy and consensus regarding the use of cognitive ability testing in organizations. Journal of Applied Psychology, 88, 660–671.

25. Ben-Hur, S., Kinley, N., & Jonsen, K. (2012). Coaching executive teams to reach better decisions. Journal of Management Development, 31(7), 711–723; Guion, R. M. (2011). Employee selection: Musings about its past, present, and future. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection. New York, NY: Routledge.

26. Wagner, R. K., & Sternberg, R. J. (1985). Practical intelligence in real world pursuits: The role of tacit knowledge. Journal of Personality and Social Psychology, 49, 436–458; Sternberg, R. J. (1988). The triarchic mind: A new theory of human intelligence. New York, NY: Penguin Books.

27. Jaques, E. (1956). Measurement of responsibility: A study of work, payment and individual capacity. Cambridge, MA: Harvard University Press.

28. Topor, D. J., Colarelli, S. M., & Han, K. (2007). Influences of traits and assessment methods on human resource practitioners' evaluations of job applicants. Journal of Business and Psychology, 21(3), 361–376.

29. Judge, T., Higgins, C. A., Thoresen, C. J., & Barrick, M. R. (1999). The Big Five personality traits, general mental ability, and career success across the life span. Personnel Psychology, 52, 621–652.

30. Ones, D. S., Dilchert, S., Viswesvaran, C., & Judge, T. A. (2007). In support of personality assessment in organizational settings. Personnel Psychology, 60, 995–1027.

31. Rynes, S. L., Brown, K. G., & Colbert, A. E. (2002). Seven common misconceptions about human resource practices: Research findings versus practitioner beliefs. Academy of Management Executive, 16(3), 92–103.

32. Cook, M. (2009). Personnel selection: Adding value through people. Chichester, West Sussex: Wiley.

33. Robertson, I. T., Baron, H., Gibbons, P., MacIvor, R., & Nyfield, G. (2000). Conscientiousness and managerial performance. Journal of Occupational and Organizational Psychology, 73(2), 171–181.

34. Tracey, J. B., Sturman, M. C., & Tews, M. J. (2007). Ability versus personality: Factors that predict employee job performance. Cornell Hotel and Restaurant Administration Quarterly, 48, 313–322.

35. Robertson et al. (2000).

36. Heggestad, E. D., & Gordon, H. L. (2008). An argument for context-specific personality assessments. Industrial and Organizational Psychology, 1, 320–322.

37. McDaniel, M. A., Rothstein, H. R., & Whetzel, D. L. (2006). Publication bias: A case study of four test vendors. Personnel Psychology, 59, 927–953.

38. Hogan, J., & Ones, D. S. (1997). Conscientiousness and integrity at work. In R. Hogan, J. A. Johnson, & S. R. Briggs (Eds.), Handbook of personality psychology. San Diego, CA: Academic Press.

39. Hough, L., & Dilchert, S. (2011). Personality: Its measurement and validity for employee selection. In J. L. Farr & N. T. Tippins (Eds.), Handbook of employee selection. New York, NY: Routledge.

40. Hough, L. M., & Oswald, F. L. (2000). Personnel selection: Looking toward the future—remembering the past. Annual Review of Psychology, 51, 631–664.

41. Saville, P., MacIver, R., Kurz, R., & Hopton, T. (2008). Project Epsom: How valid is a questionnaire? Jersey: Saville Consulting Group.

42. Lombardo, M. M., Ruderman, M. N., & McCauley, C. D. (1988). Explanations of success and derailment in upper-level management positions. Journal of Business and Psychology, 2, 199–216.

43. Finkelstein, S. M. (2003). Why smart executives fail: And what you can learn from their mistakes. New York, NY: Portfolio.

44. Posner, B. Z., Kouzes, J. M., & Schmidt, W. H. (1985). Shared values make a difference: An empirical test of corporate culture. Human Resource Management, 24(3), 293.

45. Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78, 679–703.

46. Marlowe, H. A. (1986). Social intelligence: Evidence for multidimensionality and construct independence. Journal of Educational Psychology, 78, 52–58; Ferris, G. R., Witt, L. A., & Hochwarter, W. A. (2001). The interaction of social skill and general mental ability on work outcomes. Journal of Applied Psychology, 86, 1075–1082; Semadar, A., Robins, G., & Ferris, G. R. (2006). Comparing the validity of multiple social effectiveness constructs in the prediction of managerial job performance. Journal of Organizational Behavior, 27, 443–461.

47. Van Rooy, D. L., & Viswesvaran, C. (2004). Emotional intelligence: A meta-analytic investigation of predictive validity and nomological net. Journal of Vocational Behavior, 65(1), 71–95; O'Boyle, E. H., Humphrey, R. H., Pollack, J. M., Hawver, T. H., & Story, P. A. (2010). The relationship between emotional intelligence and job performance: A meta-analysis. Journal of Organizational Behavior, 32, 788–818.

48. Payne, S. C., Youngcourt, S. S., & Beaubien, J. M. (2007). A meta-analytic examination of the goal orientation nomological net. Journal of Applied Psychology, 92, 128–150.