Chapter 6

Selecting Employees Who Fit


A MANAGER'S PERSPECTIVE

JAVIER'S MESSAGE FROM THE HUMAN RESOURCE DEPARTMENT FILLS HIM WITH BOTH ANXIETY AND EXCITEMENT. HE JUST RECEIVED AUTHORIZATION TO HIRE AN ADDITIONAL MEMBER FOR HIS CUSTOMER SERVICE TEAM. JAVIER IS EXCITED BECAUSE HIRING THE RIGHT PERSON COULD REALLY BOOST THE TEAM'S PERFORMANCE; HE IS ANXIOUS BECAUSE THIS WILL BE HIS FIRST HIRING DECISION.

Javier has total freedom to hire anybody he wants. What should he focus on when he makes his hiring decision? Should he hire someone who is likely to stay with the company for a long time? Should he look for someone who already has the skills to do the job? Is it more important to hire someone with the potential to be a high performer in several different jobs? Should he try to find someone who is similar to current team members, or should he bring in new ideas by hiring someone very different?

One reason for Javier's anxiety is a story he recently heard about a manager in a different department who asked a number of illegal questions during an interview. He has also heard stories about managers being evaluated negatively because they spent too much money searching for employees. Javier has a general idea of the questions that should be avoided when conducting interviews, but he makes himself a note to ask someone from the human resource department to remind him of potentially problematic questions. He also wants to get some help identifying the most cost-effective hiring methods.

Javier also thinks about the specific methods he might use to evaluate people. He has participated in several job interviews, and he knows that interviews are important. But what questions should he ask? Should he ask everyone the same questions? Will he be able to judge whether an answer is good or bad? Should he have someone else interview a group of finalists for the job?

Javier knows he won't have time to interview everyone who will apply. How should he screen applicants? A friend recently told him about using personality tests for hiring. Javier also remembers taking some type of intelligence test when he applied for a different job a number of years ago. He thought the intelligence test was kind of interesting, but he wonders if such tests really help organizations identify successful employees. Would using tests help him make a better hiring decision? If so, how can he identify the tests that he should use? What about reference checking? He would like to talk to previous employers, but he knows that the policy of his own company is not to give references. Would it be worth the effort to try checking references?

As he continues to reflect, Javier wonders how the results of several different assessments should be combined to arrive at a final hiring decision if he uses tests, reference checking, and interviewing. Would it be best to give scores on all of the measures the same emphasis? Should he give more weight to the interview? The hiring decision is an important one for Javier, as he well knows. He can prove himself as an emerging leader if he makes a good choice. Not only that, his job as team leader will become easier if he hires a new team member who is a real contributor.


WHAT DO YOU THINK?

Suppose you are listening to a conversation between Javier and another manager, Elena. Elena makes the following statements. Which of the statements do you think are true?

  • You should hire people who already have the skills and knowledge they will need on the job.
  • The benefits of making good hiring decisions are highest when the organization has a lot of job applicants.
  • Intelligence tests are very helpful for predicting who will be effective in almost any job.
  • Reference checking provides valuable information about prospective employees.
  • You need to ask each job applicant individualized questions to determine his or her true strengths and weaknesses.

LEARNING OBJECTIVES

After reading this chapter you should be able to:

  • Describe how employee selection practices can strategically align with overall HR strategy.
  • Explain what makes a selection method good; be able to apply the concepts of reliability, validity, utility, legality and fairness, and acceptability to appropriately evaluate different employee selection methods.
  • Describe several commonly used selection methods, evaluate their strengths and weaknesses, and explain how they link with particular employee selection strategies.
  • Explain how to combine scores from several different selection methods to arrive at a final selection decision.

How Can Strategic Employee Selection Improve an Organization?

Employee selection is the process of choosing people to bring into an organization. Effective selection provides many benefits. Selecting the right employees can improve the effectiveness of other human resource practices and prevent numerous problems. For instance, hiring highly motivated employees who fit with the organizational culture can reduce disciplinary problems and diminish costs related to replacing employees who quit. Such benefits help explain why organizations that use effective staffing practices have higher annual profit and faster growth of profit.1 In short, a strategic approach to selecting employees can help an organization obtain and keep the talent necessary to produce goods and services that exceed the expectations of customers.

Employee selection

The process of testing and gathering information to decide whom to hire.

One organization that expends a lot of effort on selection is the military. Each year, the United States Marine Corps selects and trains over 38,000 people. Of course, not everyone who wants to become a marine is accepted into the Corps. Before being accepted and enduring training at either the San Diego, California, or Parris Island, South Carolina, location, recruits must pass both mental and physical examinations.2

Because marines are required to make sound decisions quickly, the Marine Corps bases part of its selection decisions on scores from a mental ability test known as the Armed Services Vocational Aptitude Battery (ASVAB). The test consists of 225 multiple-choice questions in the areas of General Science, Arithmetic Reasoning, Word Knowledge, Paragraph Comprehension, Mathematics Knowledge, Electronics Information, Auto and Shop Information, Mechanical Comprehension, and Assembling Objects. Test results determine not only whether someone will be admitted to the Marine Corps but also what types of occupations can be pursued once basic training is complete. For example, it takes a higher score to become an aerial navigator than it does to become a combat photographer.3

Building Strength Through HR

UNITED STATES MARINE CORPS

The United States Marine Corps uses testing to evaluate not only its 38,000 new recruits each year but also the physical abilities of continuing marines. Testing practices include

  • Using a minimum score on a mental ability test to determine who is accepted, as well as cutoff scores to place marines into specific positions.
  • Requiring a minimum score on a physical fitness test measuring basic strength and endurance.
  • Assessing physical ability on an ongoing basis, not only to ensure a minimum level of fitness but also to determine promotions.

Given that the job also requires physical fitness, recruits must additionally pass an assessment known as the Initial Strength Test (IST). The test for men requires pull-ups, crunches, and a timed run. For women, the test has historically required a flexed-arm hang in place of pull-ups, but test administrators are discussing whether female recruits should be required to perform pull-ups, just as male recruits do. This shift parallels the recent change to allow female marines into combat units.4

Physical testing does not, however, end once someone is accepted into the Marine Corps. Each year a marine must complete an evaluation known as the Physical Fitness Test (PFT), which is similar to the IST. Each marine must also complete an annual Combat Fitness Test (CFT), which includes completing a timed endurance test, a lifting exercise, and an obstacle course requiring activities such as crawling, carrying, and throwing.5

 LEARNING OBJECTIVE 1 

How Is Employee Selection Strategic?

As we can see from the Marine Corps example, hiring the right employees often takes a great deal of planning. An organization's employee selection practices are strategic when they ensure that the right people are in the right places at the right times. This means that good selection practices must fit with an organization's overall HR strategy. As described in Chapter 2, HR strategies vary along two dimensions: whether they have an internal or an external labor orientation and whether they compete through cost or differentiation. These overall HR strategies provide important guidance about the type of employee selection practices that will be most effective for a particular organization.

ALIGNING TALENT AND HR STRATEGY

Consistent with the overall HR strategies, strategic selection decisions are based on two important dimensions. One dimension represents differences in the type of talent sought. At one end of the continuum is generalist talent—employees who may be excellent workers but who do not have particular areas of expertise or specialization. At the other end of the continuum is specialist talent—employees with specific and somewhat rare skills and abilities.6


Figure 6.1 Strategic Framework for Employee Selection.

Another dimension represents the type of relationship between the employees and the organization. At one end of the continuum is long-term talent. Employees in this category stay with the organization for a long time and develop a deep understanding of company practices and operations. At the other end of the continuum is short-term talent. These employees move from organization to organization without developing expertise in how things are done at any particular place.7 Combining the two dimensions yields four general categories shown in Figure 6.1: short-term generalist talent, long-term generalist talent, long-term specialist talent, and short-term specialist talent. Next, we look at each of these categories in turn and consider how they fit with the HR strategies introduced in Chapter 2.

Short-Term Generalists

If you were hired to work at a drive-in restaurant, you would not need specialized skills, you would not earn high wages, and you probably would not keep the job for a very long time. Fast-food workers are short-term generalists, who provide a variety of different inputs but do not have areas of special skill or ability. Other examples include some retail sales clerks and hotel housekeepers. Short-term generalist talent is associated with the Bargain Laborer HR strategy.8 Organizations with this HR strategy fill most positions by hiring people just entering the workforce or people already working in similar jobs at other companies. Selection has the objective of identifying and hiring employees to produce low-cost goods and services, and selection decisions are based on identifying people who can perform simple tasks that require little specialized skill.

Short-term generalists

Workers hired to produce general labor inputs for a relatively short period of time.

Hiring generalists can be beneficial because people without specialized skills do not generally demand high compensation, which keeps payroll costs as low as possible. Because generalists lack specific expertise, they also are usually more willing to work in routine jobs and do whatever they are asked.

Long-Term Generalists

If you were to take a job working for an electricity provider, you might not need specialized skills, but you would most likely plan to remain with the organization for a long career. People working for utility companies are often long-term generalists who do not have technical expertise but who develop skills and knowledge concerning how things are done in a specific organization. Other common examples of long-term generalists are people who work for government agencies and for some package delivery companies. These workers contribute in a number of areas but do not need specific technical skills and abilities. Long-term generalists are beneficial for organizations using the Loyal Soldier HR strategy.9 Organizations with this HR strategy focus on keeping employees once they are hired. Staffing still has the objective of hiring employees to produce low-cost goods and services, but a stronger commitment is formed, and efforts are made to identify people who will remain with the organization for a long time.

Long-term generalists

Workers hired to perform a variety of different jobs over a relatively long period of time.

The generalist's lack of specific expertise allows firms to reduce payroll costs. However, over time employees develop skills and abilities that are only valuable to the specific organization, which reduces the likelihood that they will move to another employer. People develop relationships and form a strong sense of commitment to the organization.

Long-Term Specialists

Suppose you took a job as an accountant with a large firm that makes and sells consumer goods such as diapers and cleaning products. People doing this job are most often long-term specialists who develop deep expertise in a particular area. Pharmaceutical sales representatives and research scientists are commonly employed as long-term specialists. People in these jobs are expected to develop specialized skills and stay with the organization for a long time. The use of long-term specialists fits the Committed Expert HR strategy.10 Organizations that use this HR strategy develop their own talent. Selection has the objective of identifying people capable of developing expertise in a particular area so they can innovate and produce superior goods and services over time.

Long-term specialists

Workers hired to develop specific expertise and establish a lengthy career within an organization.

Hiring people who can develop specialized skills over time enables organizations to create and keep a unique resource of talent that other organizations do not have. Employees are given the time and assets to develop the skills they need to be the best at what they do.

Short-Term Specialists

Information technology specialists often work as short-term specialists—employees who provide specific inputs for relatively short periods of time. These workers are valuable for organizations using the Free Agent HR strategy.11 Organizations with this HR strategy hire people away from other organizations. Staffing is aimed at hiring people who will bring new skills and produce innovative goods and top-quality service. Selection decisions focus on identifying people who have already developed specific skills. Other examples of this type of talent include investment bankers and advertising executives.

Short-term specialists

Workers hired to provide specific labor inputs for a relatively short period of time.

Hiring short-term specialists allows firms to quickly acquire needed expertise. New hires bring unique knowledge and skills to the organization. The organization pays a relatively high price for such knowledge and skills but makes no long-term commitments.

MAKING STRATEGIC SELECTION DECISIONS

Another way to examine how organizations make employee selection decisions focuses on two primary factors: the balance between job-based fit and organization-based fit and the balance between achievement and potential. As you can see in Figure 6.1, both factors relate clearly to the talent categories just discussed.

Balancing Job Fit and Organization Fit

The first area of balance concerns whether employees should be chosen to fit in specific jobs or to fit more generally with the organization. When job-based fit is the goal, the organization seeks to match an individual's abilities and interests with the demands of a specific job. This type of fit is highly dependent on a person's technical skills. For instance, high ability in mathematics results in fit for a job such as financial analyst or accountant. In contrast, organization-based fit is concerned with how well the individual's characteristics match the broader culture, values, and norms of the firm. Organization-based fit depends less on technical skills than on an individual's personality, values, and goals.12 A person with conservative values, for example, might fit well in a company culture of caution and tradition. Employees who fit with their organizations have higher job satisfaction, and better fit with the organization has been shown to lead to higher performance in many settings.13 As described in the “How Do We Know?” feature, the HR strategy has an impact on how job and organization fit are weighted.

Job-based fit

Matching an employee's knowledge and skills to the tasks associated with a specific job.

Organization-based fit

Matching an employee's characteristics to the general culture of the organization.

We can combine the concept of fit with the talent-based categories introduced earlier. In general, job-based fit is more important in organizations that seek to hire specialists than in those that seek generalists. Similarly, organization-based fit is more important for long-term than for short-term employees. These differences provide strategic direction for employee selection practices.

Organizations pursuing Bargain Laborer HR strategies are not highly concerned about either form of fit. Employees do not generally bring specific skills to the organization. Neither are they expected to stay long enough to necessitate close organizational fit. Thus, for firms pursuing a Bargain Laborer HR strategy, fit is not strategically critical, and hiring decisions tend to focus on obtaining the least expensive labor regardless of fit.

Organizations pursuing the Loyal Soldier HR strategy benefit from hiring employees who fit with the overall organization. Job-based fit is not critical. Employees rotate through a number of jobs, and success comes more from loyalty and high motivation than from specific skills. In contrast, lengthy expected careers make fit with the organization very important. Employee selection decisions in organizations with a Loyal Soldier HR strategy should thus focus primarily on assessing personality, values, and goals.

Organizations pursuing a Committed Expert HR strategy require both job-based fit and organization-based fit. Organization-based fit is necessary because employees need to work closely with other members of the organization throughout long careers. Job-based fit is necessary because employees are expected to develop expertise in a specific area. Even though new employees may not yet have developed specific job skills, general aptitude in the specialized field, such as accounting or engineering, is important. Selection decisions in firms pursuing Committed Expert HR strategies should thus be based on a combination of technical skills and personality, values, and goals.

How Do We Know?

WHICH TYPE OF FIT IS MOST IMPORTANT?

Do managers making selection decisions pay more attention to job fit or organization fit? Are candidates who have average fit on both dimensions preferred to candidates who are high on one and low on the other? Does it depend on the nature of the job? Tomoki Sekiguchi and Vandra Huber conducted two studies to answer these questions. In a first study of 120 and a second study of 92 middle- and senior-level executives, participants evaluated candidate profiles to indicate how they would judge the credentials of potential job applicants for positions such as manager, attorney, nurse, and nurse assistant.

The executives were most likely to reject applicants who had very poor job fit. They were more tolerant of poor organization fit. However, individuals with very poor fit for either the organization or the job were more likely to be rejected than individuals with average fit in both areas. Consistent with the contingency approach to human resource management, organization fit was given more weight for permanent compared to temporary positions, whereas job fit was more important for short-term and knowledge-intensive positions. Managers thus seem to take into account characteristics of the position being filled and adjust their fit assessments to match the situation.

The Bottom Line. Poor fit with a job is the most likely reason an individual might not be selected for a position, but being very low on either job or organization fit is likely to result in rejection. Organization fit is more likely to be taken into account when positions are long-term in nature, which suggests that managers do indeed make strategic hiring decisions. Professors Sekiguchi and Huber thus concluded that characteristics of the position influence the balance between job and organization fit.

Source: Tomoki Sekiguchi and Vandra L. Huber, “The Use of Person–Organization Fit and Person–Job Fit Information in Making Selection Decisions,” Organizational Behavior and Human Decision Processes 116 (2011): 203–216.

Job-based fit is critical for organizations pursuing a Free Agent HR strategy. These organizations hire employees specifically to perform specialized tasks and expect them to bring required knowledge and skills with them. An employee's stay with the organization is expected to be relatively short, which means that fit with the organization is not critical. Selection decisions in organizations with a Free Agent HR strategy should thus focus primarily on assessing technical skills and abilities.

Balancing Achievement and Potential

The second area of balance concerns whether employees should be chosen because of what they have already achieved or because of their potential for future accomplishments. Assessments aimed at measuring achievement focus on history and past accomplishments that reveal information about acquired abilities and skills. For instance, a job applicant for an elementary school teaching position might have graduate degrees and years of experience that demonstrate teaching skills. In contrast, assessments aimed at measuring potential are future-oriented and seek to predict how a person will learn and develop knowledge and skill over time.14 In this case, an applicant for an elementary teaching position may just have graduated with high honors from a prestigious university, demonstrating high potential.

Achievement

A selection approach emphasizing existing skills and past accomplishments.

Potential

A selection approach emphasizing broad characteristics that foreshadow capability to develop future knowledge and skill.

Again, we can relate the choice between achievement and potential to the framework in Figure 6.1. Organizations that use Bargain Laborer HR strategies do not require highly developed skills.15 Measures of achievement are not required. For these organizations, selection methods assess potential by predicting whether applicants will be dependable and willing to carry out assigned tasks.

Hiring people based on potential is critical for organizations with long-term staffing strategies. These organizations provide a great deal of training, which suggests that people learn many skills after they are hired. With a Loyal Soldier HR strategy, selection measures should focus on ability, motivation, and willingness to work in a large variety of jobs. For a Committed Expert HR strategy, the focus is on assessing potential to become highly skilled in a particular area.

Organizations seeking short-term specialists focus on measuring achievement, because they seek employees who already have specific skills. Required skills change frequently, and a general lack of training by the organization makes it very difficult for these employees to keep up with new technologies. Hiring practices for organizations with Free Agent HR strategies thus focus on identifying individuals who have already obtained the necessary skills and who have demonstrated success in similar positions.

Gaining Competitive Advantage from Alignment

Of course, not all organizations have selection practices that are perfectly aligned with overall HR strategies. Some firms hire long-term generalists even though they have a Free Agent HR strategy. Other firms hire short-term specialists even though they have a Bargain Laborer HR strategy. The selection practices in such organizations are not strategic, and these organizations often fail to hire employees who can really help them achieve their goals. In short, organizations with closer alignment between their overall HR strategies and their specific selection practices tend to be more effective. They are successful because they develop a competitive advantage by identifying and hiring employees who fit their needs and strategic plans.16 What works for one organization may not work for another organization with a different competitive strategy. A key for effective staffing is thus to balance job fit and organization fit, as well as achievement and potential, in ways that align staffing practices with HR strategy.


CONCEPT CHECK

  1. What are the four types of talent, and how do they fit with the four approaches to overall HR strategy?
  2. What is the difference between organization fit and job fit, and which is most critical for each of the HR strategies?
  3. How do achievement and potential fit with strategic selection?

 LEARNING OBJECTIVE 2 

What Makes a Selection Method Good?

We have considered strategic concerns in employee selection. The next step is to evaluate specific methods that help accomplish strategy. How can an organization identify tests or measures that will pinpoint people who fit or who have the appropriate mix of potential and achievement? Should prospective employees be given some type of paper-and-pencil test? Is a background check necessary? Will an interview be helpful? If so, what type of interview is best? Answers to these questions provide insights into the accuracy, cost-effectiveness, fairness, and acceptability of various selection methods. Next, we examine a few principles related to each question: reliability, validity, utility, legality and fairness, and acceptability. Figure 6.2 illustrates basic questions associated with each principle.

RELIABILITY

Reliability is concerned with consistency of measurement. The selection of university athletes illustrates the concept.

Reliability

An assessment of the degree to which a selection method yields consistent results.

Imagine that two coaches for a football team have just returned from separate recruiting trips. They are meeting to discuss the recruits they visited. The first coach describes a great recruit who weighs 300 pounds. The second coach reports about someone able to bench press 500 pounds. Which player should the coaches select? It is impossible to compare the recruits, since different information was obtained about each person. The measures are not reliable.

The football example may seem a bit ridiculous, but it is not much different from what happens in many organizations. Just think of the interview process. Suppose five different people interview a person for a job. In many organizations, the interviewers' judgments would not be consistent.

How, then, can we determine whether a selection method is reliable? One way to evaluate reliability is to test a person on two different occasions and then determine whether scores are similar across the two times. We call this the test-retest method of estimating reliability. Another way to evaluate reliability is to give two different forms of a test. Since both tests were designed to measure the same thing, we would expect people's scores to be similar. This is the alternate-forms method of estimating reliability. A similar method involves the use of a single test that is designed to be split into two halves that measure the same thing. The odd- and even-numbered questions might be written so that they are equivalent. We call this the split-halves method of estimating reliability. A final method, called the inter-rater method, involves having different raters provide evaluations and then determining whether the raters agree.

Test-retest method

A process of estimating reliability that compares scores on a single selection assessment obtained at different times.


Figure 6.2 What Makes a Selection Method Good?

Alternate-forms method

A process of estimating reliability that compares scores on different versions of a selection assessment.

Split-halves method

A process of estimating reliability that compares scores on two parts of a selection assessment.

Inter-rater method

A process of estimating reliability that compares assessment scores provided by different raters.

Each method of estimating reliability has its own strengths and weaknesses. However, all four methods rely on the correlation coefficient, a numerical indicator of the strength of the relationship between two sets of scores. Correlation coefficients range from a low of 0, which indicates no relationship, to a high of 1, which indicates a perfect relationship. Figure 6.3 provides an illustration of correlation coefficients. Two scores for each person are represented in the graph. The first score is plotted on the horizontal axis, and the second score is plotted on the vertical axis. Each person's two scores are thus represented by a dot. In the graph representing a low correlation, you can see that some people who did very well the first time did not do well the second time. Others improved a lot the second time. The scores are quite scattered, and it would be difficult to predict anyone's second score based on his or her first score. In the graph representing a high correlation, the scores begin to follow a straight line. In fact, scores with a correlation of 1 would plot as a single line where each person's second score could be predicted perfectly by his or her first score.

Correlation coefficient

A statistical measure that describes the strength of the relationship between two measures.

Correlation coefficients can also be negative (indicating that high scores on one measure are related to low scores on the other measure), but we do not generally observe negative correlations when assessing reliability. When it comes to reliability estimates, a higher correlation is always better. A correlation coefficient approaching 1 tells us that people who did well on one of the assessments generally did well on the other.

Just how high should a reliability estimate be? Of course, this depends on many different aspects of the assessment situation. Nevertheless, a good guideline is that a correlation coefficient of .85 or higher suggests adequate reliability for test-retest, alternate-forms, and split-halves estimates.17 Inter-rater reliability estimates are often lower because they incorporate subjective judgment, yet high estimates are still desirable.


Figure 6.3 Graphical Illustration of Correlations.

Knowing in general how high reliability estimates should be makes managers and human resource specialists better consumers of selection procedures. Consulting firms and people within an organization often propose many different selection methods. Important decisions must be made about which of the many possible methods to use. The first question to ask about any selection procedure is whether it is reliable. Information about reliability should be available from vendors who advocate and sell specific tests and interview methods.
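For readers who want to see the arithmetic behind a reliability estimate, the sketch below computes a test-retest correlation coefficient from scratch. The applicant scores are hypothetical, invented purely for illustration; they are not from any study described in this chapter.

```python
# Illustrative sketch: estimating test-retest reliability as the Pearson
# correlation between two administrations of the same test.

def pearson_correlation(x, y):
    """Return the correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical scores for eight applicants tested on two occasions.
time1 = [78, 85, 62, 90, 71, 88, 66, 74]
time2 = [80, 83, 65, 92, 70, 85, 68, 72]

r = pearson_correlation(time1, time2)
print(f"Test-retest reliability estimate: r = {r:.2f}")
# Compare r against the guideline in the text: a value of .85 or higher
# suggests adequate reliability for a test-retest estimate.
```

Because the two sets of scores rise and fall together, the estimate here comes out well above the .85 guideline; scattered scores like those in the left panel of Figure 6.3 would drive it toward 0.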

VALIDITY

Once reliability has been established, we can turn to a selection method's validity. Suppose the football coaches in the earlier example have been taught about reliability. They go back to visit the recruits again and obtain more information. This time they specifically plan to obtain consistent information. When they report back, one of the coaches states that his recruit drives a blue car. The second coach says that his recruit drives a green car. The problem of reliability has been resolved. The coaches are now providing the same information about the two recruits. However, this information most likely has nothing to do with performance on the football field. We thus conclude that the information does not have validity, which means that it is not relevant for job performance.

Validity

The quality of being justifiable. To be valid, a method of selecting employees must accurately predict who will perform the job well.

How do we know if a test is valid? Evidence of validity can come in many forms, and assessments of validity should take into account all evidence supporting a relationship between the assessment technique and job performance.18 Nevertheless, as with reliability, certain methods for determining validity are most commonly used.

One method, called the content validation strategy, involves determining whether the content of the assessment method is representative of the job situation. For instance, a group of computer programmers might be asked to look at a computer programming test to determine whether the test measures knowledge needed to program successfully. The experts match tasks from the job description with skills and abilities measured by the test. Analyses are done to learn if the experts agree. The content validation strategy thus relies on expert judgments, and validity is supported when experts agree that the content of the assessment reflects the knowledge needed to perform well on the job. Content validation is a particularly important step for developing new tests and assessments. As a student, you see content validation each time you take an exam. The course instructor acts as an expert who determines whether the questions on the exam are representative of the course material.

Content validation strategy

A process of estimating validity that uses expert raters to determine if a test assesses skills needed to perform a certain job.

A second method for determining validity is known as the criterion-related validation strategy. This method differs from the content validation strategy in that it uses correlation coefficients to show that test or interview scores are related to measures of job performance. For example, a correlation coefficient could be calculated to measure the relationship between a personality trait and the dollars of business that sales representatives generate. A positive correlation coefficient can indicate that those who have high scores on a test of assertiveness generate more sales. In this case, a negative correlation coefficient might also be instructive, as it would indicate that people who have lower scores on a particular trait, such as anxiety, have higher sales figures. Either way, the test scores will be helpful for making hiring decisions and predicting who will do well in the sales position.

Criterion-related validation strategy

A process of estimating validity that uses a correlation coefficient to determine whether scores on tests predict job performance.

In practice, two methods can be used to calculate criterion-related validity coefficients. One method uses the predictive validation strategy. Here, an organization obtains assessment scores from people when they apply for jobs and then later measures their job performance. A correlation coefficient is calculated to determine the relationship between the assessment scores and performance. This method is normally considered the optimal one for estimating validity. However, its use in actual organizations presents certain problems. One problem is that it requires measures from a large number of people. If an organization hires only one or two people a month, it might take several years to obtain enough information to calculate a proper correlation coefficient. Another problem is that organizations may also be reluctant to pay for ongoing measurement before they have evidence that the assessments are really useful for predicting performance.

Predictive validation strategy

A form of criterion-related validity estimation in which selection assessments are obtained from applicants before they are hired.

A second method for calculating validity coefficients uses the concurrent validation strategy. Here, the organization obtains assessment scores from people who are already doing the job and then calculates a correlation coefficient relating those scores to performance measures that already exist. In this case, for example, a personality test could be given to the sales representatives already working for the organization. A correlation coefficient could be calculated to determine whether sales representatives who score high on the test also have high sales figures. This method is somewhat easier to use, but it too has drawbacks. One problem is that the existing sales representatives do not complete the personality assessment under the same conditions as job applicants. Applicants may be more motivated to obtain high scores and may also inflate their responses to make themselves look better. Existing sales representatives may have also learned things and changed in ways that make them different from applicants, which might reduce the accuracy of the test for predicting who will perform best when first hired.

Concurrent validation strategy

A form of criterion-related validity estimation in which selection assessments are obtained from people who are already employees.

Neither the predictive nor the concurrent strategy is optimal in all conditions. However, both yield important information, and this information comes in the form of a correlation coefficient. How high should this correlation coefficient be? Validity coefficients are generally lower than reliability coefficients. This is because a reliability coefficient represents the relationship between two things that should be nearly identical. In contrast, a validity coefficient represents a relationship between two different things: the test or interview and job performance. Correlation coefficients representing validity rarely exceed .50. Many commonly used assessment techniques are associated with correlation coefficients that range from .25 to .50, and a few that are useful range from .15 to .25. This suggests that, as a guideline for assessing validity, a coefficient above .50 indicates a very strong relationship, coefficients between .25 and .50 indicate somewhat strong relationships, and coefficients between .15 and .25 indicate weaker but often important relationships.19 Once again, this information can help managers and human resource specialists become better consumers of assessment techniques. As with reliability, information about validity should be available for properly developed selection methods.
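These guideline bands can be expressed as a simple lookup. The cutoffs below restate the rough figures from the paragraph above and should be read as approximate guidelines, not exact boundaries:

```python
def interpret_validity(r: float) -> str:
    """Classify a validity coefficient using the rough
    guideline bands described in the text."""
    if r > 0.50:
        return "very strong"
    if r >= 0.25:
        return "somewhat strong"
    if r >= 0.15:
        return "weaker but often important"
    return "little practical value"

# Interpret a few illustrative coefficients.
for coefficient in (0.55, 0.35, 0.18, 0.05):
    print(f"r = {coefficient:.2f}: {interpret_validity(coefficient)}")
```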

One additional concept related to validity is generalizability, which concerns the extent to which the validity of an assessment method in one context can be used as evidence of validity in another context. In some cases, differences in the job requirements across organizations might result in an assessment that is valid in one context but not in another. For instance, a test that measures sociability may predict high performance for food servers in a sports bar but not for servers in an exclusive restaurant. This variability is known as situational specificity. In other cases, differences across contexts do not matter, and evidence supporting validity in one context can be used as evidence of validity in another context, a condition known as validity generalization. A common example of a personality trait that exhibits generalization is conscientiousness. Being organized and goal oriented seems to lead to high performance regardless of the work context. We return to this subject later in discussions about different forms of assessment.

Situational specificity

The condition in which evidence of validity in one setting does not support validity in other settings.

Validity generalization

The condition in which evidence of validity in one setting can be seen as evidence of validity in other settings.

UTILITY

The third principle associated with employee selection methods is utility, which concerns the method's cost effectiveness. Think back to the football example. Suppose the university has decided to give all possible recruits a one-year scholarship, see how they do during the year, and then make a selection decision about which players to keep on the team. (For the moment, we will ignore NCAA regulations.) Given an entire year to assess the recruits, the university would likely be able to make very good selection decisions, but the cost of the scholarships and the time spent making assessments would be extremely high. Would decisions be improved enough to warrant the extra cost?

Utility

A characteristic of selection methods that reflects their cost effectiveness.

Several factors influence the cost effectiveness, or utility, of a selection method. The first issue concerns validity. All other things being equal, selection methods with higher validity also have higher utility. This is because valid selection methods result in more accurate predictions. In turn, more accurate predictions result in higher work performance, which leads to greater organizational profitability.

A second issue concerns the number of people selected into a position. An organization can generate more money when it improves its hiring procedures for jobs it fills frequently. After all, a good selection procedure increases the chances of making a better decision each time it is used. Even though each decision may only be slightly better than a decision made randomly or with a different procedure, the value of all the decisions combined becomes substantial. This explains why even selection decisions with moderate to low validity may have high utility.

A third issue concerns the length of time that people stay employed. Utility is higher when people remain in their jobs for long periods of time. This principle is clear when we compare the probable monetary return of making a good selection decision for someone in a summer job versus someone in a 40-year career. Hiring a great employee for a few months can be very helpful. Hiring a great employee for an entire career, however, can yield a much greater financial benefit.

A fourth issue that influences utility is performance variability. To understand this concept, think about the difference in performance of good and bad cooks at a fast-food restaurant versus the difference in performance of cooks at an elite restaurant. The fast-food cooking process is so standardized that it usually does not matter who cooks the food. In this case, making a great selection decision has only limited value. In contrast, the cooking process at an elite restaurant requires the cook to make many decisions that directly influence the quality of the food. Selecting a good cook in this situation is often the difference between a restaurant's success and failure. Measuring performance variability for specific jobs can be somewhat difficult. Just what is the dollar value associated with hiring a good candidate versus a bad one? A number of studies suggest that salary provides a good approximation of this value.20 Variability in performance increases as salary increases. The dollar value of hiring a good company president is greater than the dollar value of hiring a good receptionist, and this difference is reflected in the higher compensation provided to the company president.

A fifth issue involves the ratio of applicants to hires for a particular position and concerns how choosy an organization can be. An organization that must hire three out of every four applicants is much less choosy than an organization that hires one out of every ten. If an organization hires almost everyone who applies, then it will be required to hire people even when the selection method suggests that they will not be high performers. Because people are hired regardless of the assessment results, very little value comes from developing quality selection procedures. In contrast, an organization that receives a large number of applications for each position can benefit from good selection techniques that help accurately predict which of the applicants will be the highest performer.

Still another issue related to utility is cost. Cost issues associated with selection methods can be broken into two components: fixed costs associated with developing an assessment method and variable costs that occur each time the method is used. For example, an organization may decide to use a cognitive ability test to select computer programmers. The organization will incur some expenses in identifying an appropriate test and training assessors to use it. This cost is incurred when the test is first adopted. Most likely, the organization will also pay a fee to the test developer each time it gives the test to a job applicant. In sum, utility increases when both fixed and variable costs are low. In general, less-expensive tests create more utility, as long as their validity is similar to that of more-expensive tests.

Let's look more closely at the variable costs of the assessment. Because it costs money for each person to take an assessment, utility decreases as the number of people tested or interviewed increases. However, there is a tradeoff between the number of people being assessed and selectivity. Unless a test has low validity and is very expensive, the tradeoff usually works out such that the costs associated with giving the test to a large number of people are outweighed by the advantages of being choosy and hiring only the very best applicants.
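One standard way to combine these factors into a single dollar estimate is the Brogden-Cronbach-Gleser utility formula from the selection literature. The chapter does not present the formula itself, so the sketch below, with invented numbers, is offered only as an illustration of how validity, number of hires, tenure, performance variability, choosiness, and cost interact:

```python
# Hypothetical utility estimate (Brogden-Cronbach-Gleser model).
# All figures below are invented for illustration.

n_hired = 10              # people selected per year
tenure_years = 5          # average time hires stay in the job
validity = 0.35           # correlation between test and performance
sd_performance = 20_000   # dollar variability in performance (SDy),
                          # often approximated from salary
mean_z_hired = 1.0        # average test score of those hired, in
                          # standard-deviation units (reflects how
                          # choosy the organization can be)
n_applicants = 100        # everyone who was assessed
cost_per_applicant = 50   # variable cost of giving the test

# Expected gain = hires x tenure x validity x SDy x average score,
# minus the cost of testing every applicant.
benefit = n_hired * tenure_years * validity * sd_performance * mean_z_hired
cost = n_applicants * cost_per_applicant
print(f"estimated gain: ${benefit - cost:,.0f}")  # → $345,000
```

Notice how the model reflects the discussion above: raising validity, tenure, performance variability, or selectivity increases the estimate, while testing more applicants at a higher per-person cost decreases it.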

Table 6.1 summarizes factors that influence utility. Of course, dollar estimates associated with utility are based on a number of assumptions and represent predictions rather than sure bets. Just like predictions associated with financial investments, marketing predictions, and weather forecasting, these estimates will often be wrong. Some research even suggests that providing managers with detailed, complex cost information does not help persuade them to adopt the best selection methods.21 This does not, however, mean that cost analyses are worthless. Utility estimates can be used to compare human resource investments with other investments such as buying machines or expanding market reach. Estimates are also more likely to be accepted by managers when they are presented in a less complex manner and when they are framed as opportunity costs.22 Managers can use utility concepts to guide their decisions. For instance, managers should look for selection procedures that have high validity and relatively low cost. They should focus their attention on improving selection decisions for jobs involving a large number of people who stay for long periods of time. They should also focus on jobs in which performance of good and bad employees varies a great deal and in which there are many applicants for each open position.

Image

LEGALITY AND FAIRNESS

The fourth principle associated with selection decisions concerns legality and fairness. Think back to the football example again. Suppose the coaches decided to select only recruits who could pass a lie detector test. Is this legal? Chapter 3 specifically described a number of legal issues associated with human resource management.

Validity plays an important role in the legality of a selection method. As we discussed in Chapter 3, if a method results in lower hiring rates for members of a protected subgroup of people—such as people of a certain race—then adverse impact occurs. In this case, the company carries the burden of proof for demonstrating that its selection methods actually link with higher job performance. Because adverse impact exists in many organizations, being able to demonstrate validity is a legal necessity.

High validity may make it legal for an organization to use a test that screens out some subgroups at a higher rate than others, but this does not necessarily mean that everyone agrees that the test is fair and should be used. Fairness goes beyond legality and includes an assessment of potential bias or discrimination associated with a given selection method. Fairness concerns the probability that people will be able to perform satisfactorily in the job, even though the test predicted that they would not.

Fairness

A characteristic of selection methods that reflects individuals' perceptions concerning potential bias and discrimination in the selection methods.

Applicants see selection procedures as fairer when they believe they have been given an opportunity to demonstrate their skills and qualifications.23 Because of this and other factors, assessments of fairness often depend a great deal on personal values. The very purpose of employee selection is to make decisions that discriminate against some people. Under optimal conditions, this discrimination is related only to differences in job performance. Yet no selection procedure has perfect validity. All techniques screen out some people who would actually perform well if given the opportunity. For example, some research has found that tests can unfairly screen out individuals who believe that people like them don't perform well on the specific test.24 For instance, a woman may not perform well on a mathematics test if she believes that women aren't good at math. Simply seeing the test as biased can result in decreased motivation to try hard and thereby lower scores, even though these people have the skills necessary to do the job.

Even tests with relatively high validity screen out a number of people who could perform the job. Thus, some employee selection procedures may provide economic value to organizations at the expense of individuals who are screened out even though they would perform well. This situation creates a tradeoff between a firm's desire to be profitable and society's desire for everyone to have an equal chance to obtain quality employment. Perceptions of the proper balance between these values differ depending on personal values, making fairness a social rather than scientific concept.

ACCEPTABILITY

A final principle for determining the merit of selection techniques is acceptability, which concerns how applicants perceive the technique. Can a selection method make people see the organization as a less-desirable place to work? Think back to the football coaches. Suppose they came up with a test of mental toughness that subjected recruits to intense physical pain. Would completing the test make some recruits see the school less favorably? Would some potential players choose to go to other schools that did not require such a test?

Acceptability

A characteristic of selection methods that reflects applicants' beliefs about the appropriateness of the selection methods.

The football example shows that selection is a two-way process. As an organization is busy assessing people, those same people are making judgments about whether they really want to work for the organization. Applicants see selection methods as indicators of an organization's culture, which can influence not only their decisions to join the organization but also subsequent feelings of job satisfaction and commitment.25 Organizations should thus be careful about the messages that their selection techniques are sending to applicants.

In general, applicants have negative reactions to assessment techniques when they believe that the organization does not need the information being gathered—that the information is not job related. For instance, applicants tend to believe that family and childhood experiences are private and unrelated to work performance. Applicants also tend to be skeptical when they do not think the information from a selection assessment can be evaluated correctly. In this sense, many applicants react negatively to handwriting analysis and psychological assessment because they do not believe these techniques yield information that can be accurately scored.26

One interesting finding is that perceptions of fairness differ among countries. For instance, people in France see handwriting analysis and personality testing as more acceptable than do people in the United States. At the same time, people in the United States see interviews, résumés, and biographical data as more acceptable than do people in France.27

There is also some evidence that applicants react more positively to a particular assessment when they believe they will do well on it. One study, for example, found that people who use illegal drugs view drug testing less favorably.28 Although this is hardly surprising, it does illustrate the complexity of understanding individual reactions to employee selection techniques.

Image

CONCEPT CHECK

  1. What criteria are used to determine whether employee selection methods are good?
  2. What are ways to assess selection method validity?
  3. What influences the cost effectiveness of a selection method?

 LEARNING OBJECTIVE 3 

What Selection Methods Are Commonly Used?

Methods for selecting employees include testing, gathering information, and interviewing. We discuss particular practices associated with each of these categories in the sections that follow.

TESTING

Employment testing provides a method for assessing individual characteristics that help some people be more effective employees than others. Tests provide a common set of questions or tasks to be completed by each job applicant. Different types of tests measure knowledge, skill, and ability, as well as other characteristics, such as personality traits.

Cognitive Ability Testing

Being smart is often measured through cognitive ability testing, which assesses learning, understanding, and ability to solve problems.29 Cognitive ability tests are sometimes referred to as “intelligence” or “mental ability” tests. Some measure ability in a number of specific areas, such as verbal reasoning and quantitative problem solving. However, research suggests that general mental ability, which is represented by a summation of the specific measures, is the best predictor of performance in work contexts.30 Of course, cognitive ability is somewhat related to education, but actual test scores have been shown to predict job performance better than measures of educational attainment.31

Cognitive ability testing

Assessment of a person's capability to learn and solve problems.

In general, cognitive ability tests are very effective selection tools. Specifically, they have high reliability; people tend to score similarly at different times and on different test forms.32 In addition, these tests are difficult to fake, and people are generally unable to substantially improve their scores by simply taking courses that teach approaches to taking the test.33 Validity is higher for cognitive ability tests than for any other selection method.34 This high validity, combined with relatively low cost, results in substantial utility. Cognitive ability tests are good, inexpensive predictors of job performance.

A particularly impressive feature of cognitive ability tests is their validity generalization. They predict performance across jobs and across cultures.35 Everything else being equal, people with higher cognitive ability perform better regardless of the type of work they do.36 Nevertheless, the benefits of high cognitive ability are greater for more complex jobs, such as computer programmer or physician.37 One explanation is the link between cognitive ability and problem solving. People with higher cognitive ability obtain more knowledge.38 Example items from a widely used cognitive ability test are shown in Table 6.2. Can you see why these tests predict performance better in complex jobs? Researchers have also posited that people with higher cognitive ability adapt to change more quickly, although the actual evidence supporting better adaptation is inconsistent.39

Image

Source: Sample items for Wonderlic Personnel Test-Revised (WPT-R). Reprinted with permission from Wonderlic, Inc.

A concern about cognitive ability tests is that people from different racial groups tend to score differently.40 This does not mean that every individual from a lower-scoring group will score low. Some individuals from each group will score better and some will score worse, but on average, some groups do worse than others. The result is adverse impact, wherein cognitive ability tests screen out a higher percentage of applicants from some minority groups. Because of their strong link with job performance, cognitive tests can be used legally in most settings. However, a frequent social consequence of using cognitive ability tests is the hiring of fewer minority workers.

In terms of acceptability, managers see cognitive ability as one of the most important predictors of work performance.41 Human resource professionals and researchers strongly believe in the validity of cognitive ability tests, even though some express concern about the societal consequences of their use.42 In contrast, job applicants often perceive other selection methods as being more effective.43 Not surprisingly, negative beliefs about cognitive ability tests are stronger for people who do not perform well on the tests.44

In summary, cognitive ability tests are a useful tool for determining whom to hire. As discussed in the “How Do We Know?” feature, these tests can predict long-term success. Because they predict potential more than achievement, they are particularly valuable for organizations pursuing long-term staffing strategies, in which employees must learn and adapt during long careers. Using cognitive ability tests is thus beneficial for organizations seeking long-term generalists and specialists. Organizations seeking short-term generalists can also benefit by using these tests to inexpensively assess basic math and language ability.

Personality Testing

Personality testing measures patterns of thought, emotion, and behavior.45 Researchers have identified five broad dimensions of personality: agreeableness, conscientiousness, emotional stability, extraversion, and openness to experience.46 The five broad personality dimensions can be accurately measured in numerous languages and cultures, making the tests useful for global firms. Patterns of relationships with work performance are similar across national boundaries.47 A description of each dimension and a summary of its general relationships to job performance and job satisfaction are presented in Table 6.3.

Personality testing

Assessment of traits that show consistency in behavior.

Image How Do We Know?

IS IT BETTER TO BE SMART OR BEAUTIFUL?

Do smart people have a better chance of getting rich? How about people who are physically attractive? Are they more likely to be rich? Timothy Judge, Charlice Hurst, and Lauren Simon sought to answer these questions with a study of 191 randomly selected people between the ages of 25 and 74. Participants completed a cognitive ability measure and reported on their level of education attainment and their core self-evaluations (levels of confidence, self-esteem, sense of internal control, and lack of anxiety). They also provided a photograph that was rated for physical attractiveness. At a later time, participants also reported their income.

Results showed a positive effect on income for both intelligence and beauty. Smarter people had higher income, as did people who were rated higher on physical attractiveness. Smarter people attained more education and had more positive perceptions about themselves, which in turn translated into higher income. The effect was similar for physical attractiveness. Better-looking people similarly attained more education and had more positive self-perceptions, which corresponded with increased income.

The Bottom Line. Being either smart or good looking makes someone more likely to be rich. But if you had to choose one or the other, choose being smart, as the effect of being smart was twice as large as the effect of being beautiful. Nevertheless, the authors conclude that being beautiful does indeed provide people with a seemingly unfair advantage.

Source: Timothy A. Judge, Charlice Hurst, and Lauren S. Simon, “Does It Pay to Be Smart, Attractive, or Confident (or All Three)? Relationships Among General Mental Ability, Physical Attractiveness, Core Self-Evaluations, and Income,” Journal of Applied Psychology 94 (2009): 742–755.

Image

Sources: Information from Timothy A. Judge, Daniel Heller, and Michael K. Mount, “Five-Factor Model of Personality and Job Satisfaction: A Meta-Analysis,” Journal of Applied Psychology 87 (2002): 530–541; Murray R. Barrick, Michael K. Mount, and Timothy A. Judge, “Personality and Performance at the Beginning of the Millennium,” International Journal of Selection and Assessment 9 (2001): 9–30.

Looking at personality tests in general, we find that measures for the five personality dimensions demonstrate adequate reliability.48 Different forms and parts of the test correlate highly with each other.

Personality tests with items that specifically ask about typical actions in employment settings tend to yield consistent measures of behaviors that are important at work.49 Specifically, relationships between personality dimensions and performance, which represent validity, are highest when tests specifically instruct people to respond in relation to their work behavior, rather than in relation to their actions across different settings.50 Personality traits are generally good predictors of citizenship behavior such as helping others and going beyond minimum expectations.51 Yet, strength of validity often differs depending on the personality dimension being measured. In general, personality dimensions associated with motivation are good predictors of performance. One such dimension is conscientiousness.

Conscientious employees are motivated—they set goals and work hard to accomplish tasks.52 Conscientious people also tend to be absent from work less frequently.53 Conscientious workers are more satisfied with their jobs and are more likely to go beyond minimum expectations to make the organization successful.54 Conscientiousness thus exhibits validity generalization in that it predicts work performance regardless of the type of work. Research evidence suggests that emotional stability does not relate as strongly to performance as conscientiousness; yet it too captures aspects of motivation and demonstrates validity generalization. People high on emotional stability are more confident in their capabilities, which in turn increases persistence and effort.55 Yet people who are highly anxious can actually perform better in some contexts, such as air traffic control, that require workers to pay very close attention to detail in a busy environment.56

Relationships with the other three personality dimensions depend on the work situation, meaning that these measures have situational specificity. Extraversion corresponds with a desire to get ahead and receive rewards, making it a useful predictor for performance in sales and leadership positions.57 More extraverted employees who are also more emotionally stable—think happy and bubbly personalities—have also been found to excel in customer-service jobs such as those found in a health and fitness center. Agreeableness is important for interpersonal relationships and corresponds with high performance in teams and service jobs that require frequent interaction with customers.58 Much of this effect occurs because agreeable employees are more likely to go beyond minimum expectations and help their coworkers.59 Openness to experience is seldom related to work performance, but recent research suggests that it can increase performance in jobs that require creativity and adaptation to change.60 One setting requiring adaptation is working in a foreign country, and people more open to experience do indeed perform better in such assignments.61 People who are more open to experience are also more likely to be entrepreneurs.62

A notable feature of personality tests is their helpfulness in predicting the performance of entire teams. Teams that include just one person who is low on agreeableness or conscientiousness have lower performance.63 This means that personality tests predict not only individual performance but also how an individual's characteristics will influence the performance of other people. This feature increases the utility of personality testing, because hiring someone with desirable traits yields benefits related not just to the performance of that individual but also to the performance of others.

As shown in Table 6.4, a survey of selection practices around the world suggests that personality testing is used more frequently in countries other than the United States. Within the U.S., a few states have laws that prohibit personality testing. However, in most cases, the use of personality tests does not present problems as long as organizations use well-developed tests that do not ask outwardly discriminatory questions.64 Personality tests do have some adverse impact for women and minorities, although for minorities the negative effect is smaller than that of cognitive ability tests.65

[Table 6.4]

Source: Information from Ann Marie Ryan, Lynn McFarland, Helen Baron, and Ron Page, “An International Look at Selection Practices: Nation and Culture as Explanations for Variability in Practice,” Personnel Psychology 52 (1999): 359–391.

With regard to acceptability, a common concern about the use of personality tests is the potential for people to fake their responses. Indeed, research has shown that people are capable of faking and obtaining higher scores when instructed to do so. Moreover, people do inflate their scores when they are being evaluated for selection.66 Although faking does have the potential to make personality tests less valid predictors of job performance,67 the overall relationship between personality measures and job performance remains,68 meaning that even with faking, personality tests can be valid selection measures. Using statistical procedures to try to correct for faking does little to improve the validity of tests.69 However, faking does involve issues of fairness. Some people fake more than others, and people who do not inflate their scores may be unfairly eliminated from jobs.70 Faking can thus lead to decisions that are unfair for some individuals, even though it has little negative consequence for the organization. To reduce the potentially negative impact on individuals, organizations can use personality tests in early stages of the selection process to screen out low scorers rather than in later stages to make final decisions about a few individuals.71 Obtaining personality scores from ratings provided by friends and coworkers rather than applicants themselves is also helpful, and current research is exploring the usefulness of techniques such as eye-tracking technology to identify when applicants are faking.72

Another method for reducing faking is to create personality tests with items that have less obvious answers. An example of this approach is a conditional reasoning test. Conditional reasoning tests are designed to assess unconscious biases and motives. With this approach, job applicants are asked to solve reasoning problems that do not have obviously right or wrong answers. People with certain tendencies base their decisions on particular forms of reasoning.73 For example, a person prone to aggression is more likely to interpret the actions of others as hostile. What appears to be the most reasonable answer to the aggressive person (that other people do things because they are mean) is different from what less-aggressive people see as the most reasonable answer. Because they tap into unconscious beliefs, these tests are more difficult to fake.74 Unfortunately, conditional reasoning tests are somewhat difficult to create and as yet do not measure the full array of personality traits.

Personality testing, then, is another generally effective tool for determining whom to hire. These tests are increasingly available on the Internet, as explained in the accompanying “Technology in HR” feature. This makes personality tests relatively simple to administer. Yet, personality tests often relate more to organization fit than to job fit, suggesting that personality measures are most appropriate in organizations that adopt long-term staffing strategies. People with personality traits that fit an organization's culture and work demands are more likely to remain with the organization.75 Personality testing is thus especially beneficial for organizations adopting Committed Expert and Loyal Soldier HR Strategies.

Situational Judgment Tests

Situational judgment tests are a relatively new development. These tests place job applicants in a hypothetical situation and then ask them to choose the most appropriate response. Items can be written to assess job knowledge, general cognitive ability, or practical savvy. Indeed, a potential strength of these tests is their ability to assess interpersonal skills, which are otherwise difficult to measure.76 Situational judgment tests also tend to capture broad personality traits such as conscientiousness and agreeableness, as well as tendencies toward certain behaviors (such as taking initiative) in more specific situations.77

Situational judgment test

Assessment that asks job applicants what they would do, or should do, in a hypothetical situation.

Technology in HR

ADMINISTERING TESTS ON THE INTERNET

Widespread access to computers and the Internet provides a potentially improved method for administering employment tests. Using the Internet, people can take tests whenever and wherever they want. Testing can also be individualized so that responses to early questions are used to choose additional questions. Perhaps more important, scoring can be done quickly and accurately. These potential benefits are accompanied by a number of concerns, however.

One source of concern is test security. If someone takes a test at home, can the organization be sure that the test was actually completed by the applicant? Are scores from a computer version of a test equivalent to scores from a paper-and-pencil version of the test? Do people fake their scores more when using a computer? Will people from racial subgroups score higher or lower on a computerized test?

Given the potential benefits of computer-administered tests, researchers have devoted considerable attention to this area. One large study compared responses from 2,544 people completing a paper-and-pencil version of a personality and biographical data test with responses from 2,356 people completing the same test in a Web-based format. The computer test had higher reliability and less evidence of faking. Other studies have generally concluded that computer-administered tests are just as reliable and valid as traditional tests. In addition, in many instances, computer-based tests have less adverse impact and are seen as more fair by applicants from minority groups. Overall, the results suggest that increased use of technology can result in improved employment testing.

Concerns about faking do, nevertheless, continue to be discussed, and evidence suggests that applicants who take a test multiple times do indeed raise their scores on subsequent tests. Providing a warning that faking will be assessed and detected does, however, appear to reduce instances of distortion.


Sources: Information from Richard N. Landers, Paul R. Sackett, and Kathy A. Tuzinski, “Retesting after Initial Failure, Coaching Rumors, and Warnings Against Faking in Online Personality Measures for Selection,” Journal of Applied Psychology 96 (2011): 202–210; Robert E. Ployhart, Jeff A. Weekley, Brian C. Holtz, and Cary Kemp, “Web-Based and Paper-and-Pencil Testing of Applicants in a Proctored Setting: Are Personality, Biodata, and Situational Judgment Tests Comparable?” Personnel Psychology 56 (2003): 733–752; Wendy L. Richman, Sara Kiesler, Suzanne Weisband, and Fritz Drasgow, “A Meta-Analytic Study of Social Desirability Distortion in Computer-Administered Questionnaires, Traditional Questionnaires, and Interviews,” Journal of Applied Psychology 84 (1999): 754–775.

Some situational judgment tests use a knowledge format that asks respondents to pick the answer that is most correct. Other tests use a behavioral tendency format that asks respondents to report what they would actually do in the situation. Although the questions are framed a bit differently, the end result seems to be the same.78 Situational judgment tests have been found to have good reliability and validity. They predict job performance in most jobs, and they provide information that goes beyond cognitive ability and personality tests.79 Situational judgment tests thus appear to represent an extension of other tests. They closely parallel structured interviews, which we will discuss shortly. Questions can be framed to measure either potential in organizations with long-term orientations or achievement and knowledge in organizations with short-term labor strategies. They can also be designed to emphasize either general traits or specific skills. This makes them useful for organizations pursuing any of the human resource strategies.

Physical Ability Testing

Physical ability testing assesses muscular strength, cardiovascular endurance, and coordination.80 These tests are useful for predicting performance in many manual labor positions and in jobs that require physical strength. Physical ability tests can be particularly important in relation to the Americans with Disabilities Act, as organizations can be held liable for discrimination against disabled applicants. Managers making selection decisions should thus test individuals with physical disabilities and not automatically assume that they cannot do the job.

Physical ability tests have high reliability; people score similarly when the same test is given at different times. Validity and utility are also high for positions that require physical inputs, such as police officer, firefighter, utility repair operator, and construction worker.81 Validity generalization is supported for positions where job analysis has shown work requirements to be physically demanding.82

As long as job analysis has identified the need for physical inputs, physical ability testing presents few legal problems. However, men and women do score very differently on physical ability tests. Women score higher on tests of coordination and dexterity, whereas men score higher on tests of muscular strength.83 Physical ability tests thus demonstrate adverse impact. In particular, selection decisions based on physical ability tests often result in exclusion of women from jobs that require heavy lifting and carrying.

The usefulness of physical ability testing is not limited to a particular HR strategy. Physical tests can be useful for organizations seeking any form of talent, as long as the talent relates to physical dimensions of work.

Integrity Testing

In the past, some employers used polygraph—or lie detector—tests to screen out job applicants who might steal from them. However, the Employee Polygraph Protection Act of 1988 generally made it illegal to use polygraph tests for hiring decisions. Since then, organizations have increasingly turned to paper-and-pencil tests for integrity testing. Such tests are designed to assess the likelihood that applicants will be dishonest or engage in illegal activity.

Integrity testing

Assessment of the likelihood that an individual will be dishonest.

There are two types of integrity tests: overt and covert. Overt tests ask questions about attitudes toward theft and other illegal activities. Covert tests are more personality-based and seek to predict dishonesty by assessing attitudes and tendencies toward antisocial behaviors such as violence and substance abuse.84

Research evidence generally supports the reliability and validity of integrity tests. These tests predict absenteeism and overall performance, but they most strongly correspond with counterproductive work behavior such as theft, property destruction, unsafe actions, poor attendance, and intentional poor performance.85 Most often, such tests are used in contexts that involve the handling of money, such as banking and retail sales.

In many ways, integrity tests are similar to personality tests. In fact, strong correlations exist between integrity test scores and personality test scores, particularly for conscientiousness.86 As with personality tests, a concern is that people may fake their responses when jobs are on the line. The evidence suggests that people can and do respond differently when they know they are being evaluated for a job. Even so, links remain between test scores and subsequent measures of ethical behavior.87 Furthermore, integrity tests show no adverse impact for minorities88 and appear to predict performance consistently across national cultures.89

Integrity tests can be useful for organizations with Bargain Labor HR strategies. These firms hire many entry-level workers to fill positions in which they handle substantial amounts of money. In such cases, integrity tests can provide a relatively inexpensive method for screening applicants. This explains why organizations like grocery stores, fast-food chains, and convenience stores make extensive use of integrity testing to select cashiers.90

Drug Testing

Drug testing normally requires applicants to provide a urine sample that is tested for illegal substances. It is quite common in the United States, perhaps because, according to some estimates, as much as 14 percent of the workforce uses illegal drugs, with as many as 3 percent of workers actually using drugs while at work.91 Illegal drug use has been linked to absenteeism, accidents, and likelihood of quitting.92 Drug testing, which is both reliable and valid, appears to be a useful selection method for decreasing such nonproductive activities. Even though administration costs can be high, basic tests are modestly priced, supporting at least moderate utility for drug testing.

Research related to drug testing has looked at how people react to being tested. In general, people see drug testing as most appropriate for safety-sensitive jobs such as pilot, heart surgeon, and truck driver.93 Not surprisingly, people who use illicit drugs are more likely to think negatively about drug testing.94

Drug testing can be useful for firms that hire most types of talent. Organizations seeking short-term generalists use drug testing in much the same way as integrity testing. Organizations with long-term employees frequently do work that requires safe operational procedures. In these organizations, drug testing is useful in selecting people for positions such as forklift operator, truck driver, and medical care provider.

Work Sample Testing

One way of assessing specific skills is work sample testing, which directly measures performance on some element of the job. Common examples include typing tests, computer programming tests, driving simulator tests, and electronics repair tests. In most cases, these tests have excellent reliability and validity.95 Many work sample tests are relatively inexpensive as well, which translates into high utility. Because they measure actual on-the-job activities, work sample tests also involve few legal problems. However, in some cases work test scores are lower for members of minority groups.96

Work sample testing

Assessment of performance on tasks that represent specific job actions.

A problem with work sample tests is that not all jobs lend themselves to this sort of testing. What type of work sample test would you use for a medical doctor or an attorney, for example? The complexity of these jobs makes the creation of work sample tests very difficult. However, human resource specialists have spent a great deal of time and effort developing a work sample test for the complex job of manager. The common label for this tool is assessment center.

Assessment center

A complex selection method that includes multiple measures obtained from multiple applicants across multiple days.

Assessment center participants spend a number of days with other managerial candidates. Several raters observe and evaluate the participants' behavior across a variety of exercises. In one typical assessment center exercise, the leaderless group discussion, for example, managerial candidates work together in a group to solve a problem in the absence of a formal leader. For the in-basket exercise, participants write a number of letters and memos that simulate managerial decision making and communication. Managers and recruiters from the organization serve as observers who rate the participants in areas such as consideration and awareness of others, communication, motivation, ability to influence others, organization and planning, and problem solving.97

Assessment centers have good reliability and validity, which suggests that they can be excellent selection tools in many contexts.98 Validity improves when assessment center evaluators are trained and when exercises are specifically tailored to fit the job activities of the participants.99 Minority racial groups have been found to score lower in assessment centers, but women often score higher.100 Creating and operating an assessment center can be very expensive, which substantially decreases utility for many organizations. Because of their high cost, assessment centers are normally found only in very large organizations.

Assessment centers are most common in organizations with long-term staffing strategies, particularly those adopting Committed Expert HR strategies. Proper placement of individuals is extremely critical for these organizations, and the value of selecting someone for a long career offsets the high initial cost of assessment. Other types of work sample tests are useful for organizations pursuing any of the staffing strategies. A typing test can be a valuable aid for hiring a temporary employee as part of a Bargain Laborer HR strategy, for example. Similarly, a computer programming test can be helpful when hiring someone as part of a Free Agent HR strategy.

INFORMATION GATHERING

In addition to tests, organizations use a variety of methods to directly gather information about the work experiences and qualifications of potential employees. In fact, as illustrated in the “Building Strength Through HR” feature, most organizations combine multiple methods of testing and information gathering. Common methods for gathering information include application forms and résumés, biographical data, and reference checking.

Application Forms and Résumés

Many entry-level jobs require potential employees to complete an application form. Application forms ask for information such as address and phone number, education, work experience, and special training. For professional-level jobs, similar information is generally presented in résumés. The reliability and validity of these selection methods depend a great deal on the information being collected and evaluated. Measures of work experience and education have at least moderately strong relationships with job performance.101

Building Strength Through HR

TARGET

Target has over 1,700 retail stores that employ 365,000 people. Achieving the overall corporate goal of “friendly service from team members ready to assist with your list, fully stocked shelves and a speedy checkout process” necessitates effective employee selection. Target thus uses a variety of selection tools to help identify job candidates who are friendly and have an upbeat attitude. Specific methods include the following:

  • Personality test
  • Behavioral interview
  • Drug test
  • Background check

Using such selection tools has helped Target develop a reputation as having excellent customer service. The process also seems to be acceptable to most candidates, as two stores opening in Los Angeles recently reported having over 4,000 applicants for 250 positions.

The selection tools do, however, sometimes generate controversy among people who believe the tests and information gathering discriminate unfairly. For example, a number of years ago Target's use of a specific personality test was challenged because some of the questions asked about sexual practices and religious beliefs. Keeping up with advances in test development, Target settled the claims and replaced the test with an improved personality assessment that does not contain problematic inquiries. A more recent controversy focuses on background checks, which are being challenged because they screen out workers with criminal records. A group called TakeAction Minnesota filed complaints with the Equal Employment Opportunity Commission claiming that the practice is unfair and potentially creates adverse impact for some minority groups. Target's response is that the background check is necessary to provide a safe and secure environment, and that the process is not designed to screen out everyone with a criminal background but only those who present an unreasonable risk to safety.


Sources: Information from target.com; Anonymous, “Complaints Filed Against Target Hiring Policies,” St. Cloud Times, February 21, 2013; Clair Gordon, jobs.aol.com, August 31, 2012. Accessed online at http://jobs.aol.com/articles/2012/08/31/target-is-hiring-the-inside-scoop-on-getting-a-job/.

With regard to education, the evidence shows that what you do in college really does matter. Employees with more education are absent less, show more creativity, and demonstrate higher task performance.102 People who complete higher levels of education and participate in extracurricular activities are more effective managers. Those who study humanities and social sciences tend to have better interpersonal and leadership skills than engineers and science majors.103 Grades received, particularly in a major, also have a moderate relationship with job performance, even though managers do not always use grades for making selection decisions.104

Application forms and résumés also provide valuable information about work experience. People with more work experience have usually held a greater variety of positions, stayed in those positions for longer periods, and more often performed important tasks.105 Because they have been exposed to many different tasks, and because they have learned by doing, people with greater experience are more valuable contributors. In addition, success in previous jobs demonstrates high motivation, and executives with more experience are better at strategic thinking.106 Work experience thus correlates positively with performance, particularly when performance is determined by output measures such as production or amount of sales.107

One special advantage of application forms and résumés is their utility. Because these measures are generally inexpensive, they are frequently used as early screening devices. In terms of legality and fairness, measures of education and experience do have some adverse impact.108 Information obtained from application forms and résumés should therefore be related to job performance to ensure validity.

Application forms and résumés can provide important information about past achievements, which makes them most valuable for organizations seeking people with specific skills. However, these selection tools can also capture potential and fit, so many organizations seeking long-term employees find them useful as well. Application forms are used mostly in organizations hiring generalists. They provide good measures of work experience and education that help identify people who have been dependable in jobs and school. Résumés are more commonly used in organizations that hire specialists. In particular, résumés provide information about experience and education relevant to a particular position.

Biographical Data

Organizations also collect biographical data, or biodata, about applicants. Collecting biodata involves asking questions about historical events that have shaped a person's behavior and identity.109 Some questions seek information about early life experiences that are assumed to affect personality and values. Other questions focus on an individual's prior achievements based on the idea that past behavior is the best predictor of future behavior. Common categories for biographical questions include family relationships, childhood interests, school performance, club memberships, and time spent in various leisure activities. Specific questions might include the following:

Biographical data

Assessment focusing on previous events and experiences in an applicant's life.

How much time did you spend with your father when you were a teenager?

What activities did you most enjoy when you were growing up?

How many jobs have you held in the past five years?

Job recruiters frequently see these measures as indicators not only of physical and mental ability but also of interpersonal skill and leadership.110 However, the information provided by biodata measures does not simply duplicate information from other measures, such as personality tests.111

Biodata measures have been around for a long time, and they are generally useful for selecting employees. Scoring keys can be developed so that biodata responses can be scored objectively, just like a test. Objective scoring methods improve the reliability and validity of biodata. With such procedures, biodata has adequate reliability.112 Validity is also good, as studies show relatively strong relationships with job performance and employee turnover.113 In particular, biodata measures appear to have high validity for predicting sales performance.114 One common concern has been the validity generalizability of biodata. Questions that link with performance in one setting may not be useful in other settings. However, some recent research suggests that carefully constructed biographical measures can predict performance across work settings.115 Identifying measures that predict work performance across settings can help overcome a weakness of biodata, which is the high initial cost of creating measures. Finding items that separate high and low performers can take substantial time and effort, making items that predict performance across settings highly desirable.

Some human resource specialists express concern about legality and fairness issues with biodata. Much of the information collected involves things beyond the control of the person being evaluated for the job and is likely to have adverse impact for some. For instance, children from less wealthy homes may not have had as many opportunities to read books. Applicants' responses may also be difficult to verify, which makes faking more likely. Using questions that are objective, verifiable, and job-related can minimize these concerns.116

Biodata measures can benefit organizations, whatever their staffing strategies. Organizations seeking long-term employees want to measure applicants' potential and should therefore use biodata measures that assess core traits and values. In contrast, organizations seeking short-term employees want to measure achievement and can benefit most from measures that assess verifiable achievements.

Reference Checking

Reference checking involves contacting an applicant's previous employers, teachers, or friends to learn more about the applicant. Reference checking is one of the most common selection methods, but available information suggests that it is not generally a valid selection method.117

The primary reason reference checking may not be valid relates to a legal issue. Organizations can be held accountable for what they say about current or past employees. A bad reference can become the basis for a lawsuit claiming defamation of character, which occurs when something untrue and harmful is said about someone. Many organizations thus adopt policies that prevent managers and human resource specialists from providing more than dates of employment and position. Such information is, of course, of little value. Even when organizations allow managers to give more information, the applicant has normally provided the names only of people who will give positive recommendations.

Defamation of character

Information that causes injury to another's reputation or character; can arise as a legal issue when an organization provides negative information about a current or former employee.

Nevertheless, a second legal issue makes reference checks critical in certain situations. This issue is negligent hiring, which can occur when an organization hires someone who harms another person and the organization could reasonably have determined that the employee was unfit.118 For instance, suppose an organization has hired someone to be a daycare provider. Further suppose that the organization did not conduct a thorough background investigation and that, if it had investigated, it could easily have discovered that the person had been previously convicted of child abuse. If this person abuses children in the employment setting, the organization can be held liable.

Negligent hiring

A legal issue that can arise when an organization does not thoroughly evaluate the background of an applicant who is hired and then harms someone.

The competing legal issues of defamation of character and negligent hiring make reference checking particularly troublesome. On the one hand, most organizations are not willing to risk providing reference information. On the other hand, safety concerns make a background check mandatory for many jobs, such as daycare provider, transportation worker, and security guard. One result has been the growth of professional firms that use public information sources, such as criminal records and motor vehicle registrations, to learn about an applicant's history. Such investigations should be conducted only after initial screening tools have been used and only if the applicant signs an authorization release.

INTERVIEWING

The most frequently used selection method is interviewing, which occurs when applicants respond to questions posed by a manager or some other organizational representative. Most interviews incorporate conversation between the interviewer and the applicant. The interview is useful not only for evaluating applicants but also for providing information to applicants and selling the organization as a desirable place to work.

Assessing Interview Effectiveness

Depending on the questions, an interview can be used to measure a variety of characteristics. Typical areas include knowledge of job procedures, mental ability, personality, communication ability, and social skills. The interview also provides an effective format for obtaining information about background credentials, such as education and experience.119 People who are more conscientious and extraverted tend to do better in interviews, partly because they tend to spend more time learning about the company and position before the interview actually occurs.120 Applicants who present themselves well and build rapport with the interviewer also excel in interviews.121 As described in the “How Do We Know?” feature, even how someone shakes hands can make a difference.

Although the research is somewhat mixed, it appears that applicants who receive training in how to act in interviews do indeed perform better.122 One concern about the interview is that candidates seek to impress interviewers, which means that the interviewer is not seeing and evaluating the true person. Evidence does indeed show that job applicants seek to manage impressions in job interviews, and that people who excel at making a good impression are not necessarily higher performers.123

Although researchers have historically argued that the interview is not a reliable and valid selection method, managers have continued to use this method. Recent research suggests that the conclusions of early studies were overly pessimistic and that managers are right in believing that the interview is a useful tool.

The reliability of interviews depends on the type of interview being conducted. We discuss some particularly reliable types of interviews shortly. For these types, reliability can be as high as for other measures, such as personality testing and assessment centers.124 The overall validity of the interview is in the moderate range. However, again, validity varies for different types of interviews, with some types showing validity that is as high as that for any selection method.125 The interview also provides unique information that cannot be obtained through other methods.126

The interview is also valuable in determining whether people “fit” with the job, workgroup, or organization. Interviewers often assess the likelihood that applicants will excel in the particular organization. These judgments are not based on typical qualifications, such as knowledge and experience, but rather on characteristics such as goals, interpersonal skills, and even physical attractiveness.127

Image How Do We Know?

DOES IT MATTER HOW YOU SHAKE HANDS IN AN INTERVIEW?

Can a good handshake really help you get a job? A search of the Internet yields over a million sites that provide information about the proper way to shake hands in an employment interview. Yet, little scientific research has been done to determine if the handshake really matters. So Greg Stewart, Susan Dustin, Murray Barrick, and Todd Darnold designed a study to learn more about the handshake. Students who were seeking jobs participated in practice interviews. During the interview process six different people secretly evaluated each student's handshake. Neither the students nor the interviewers were aware that handshakes were being evaluated. Students shook hands with interviewers before a 30-minute interview. At the end of the interview, interviewers provided ratings of how likely they were to hire students. Ratings of the handshake were then correlated with final interview ratings to determine if the handshake was related to assessments of hirability.

Results showed that people with a better handshake (firm and complete grip, eye contact) were indeed more likely to receive job offers. Women were found to have less firm handshakes than men. However, women with a good handshake got more benefit out of it than did men with a firm handshake. Women may therefore not be as good as men at shaking hands, but those who do it well get extra credit from interviewers.

The Bottom Line. Little things like having a good handshake can indeed make a difference in an interview setting. Job candidates can benefit from a good handshake, which includes a complete grip of the hand, a firm grasp, moderate up-and-down movement, comfortable duration, and eye contact.

Source: Greg L. Stewart, Susan L. Dustin, Murray R. Barrick, and Todd C. Darnold, “Exploring the Handshake in Employment Interviews,” Journal of Applied Psychology 93 (2008): 1139–1146.

One concern about the interview is its expense: The time managers spend conducting interviews can be costly. The interview thus has relatively low utility, and generally, only applicants who have been screened with less-expensive selection methods should be interviewed. Another potential concern is discrimination. Interviewers make a number of subjective judgments, bringing up questions of possible bias. Indeed, research does suggest that interviewers can be biased in their judgments.128 Yet, the general conclusion is that bias is relatively low as long as the structuring techniques described below are used.129 Of course, interviewers must be careful not to ask questions that violate the laws discussed in Chapter 3. In particular, interviewers should avoid questions about family and marital relationships, age, disability, and religion.

Using Structured Interviews

We have seen that reliability and validity vary with the type of interview conducted. What makes some interviews better than others? The biggest difference between types of interviews concerns the amount of structure. The typical interview is an unstructured interview in which a single rater asks a series of questions and then provides an overall recommendation on whether the person interviewed should be hired. The questions asked usually vary from interviewer to interviewer, and interviewers can base their evaluations on anything that they think is important. Managers tend to prefer this type of interview. Research has traditionally shown that this type of unstructured assessment is not as reliable and valid as more-structured interviews.130 According to some newly emerging research, however, the unstructured interview can be a reliable tool when several people conduct interviews and then combine their individual evaluations.131

A different type of interview, generally seen as superior, is the structured interview, which uses a list of predetermined questions based on knowledge and skills identified as being critical for success. This ensures that the questions are appropriate and that all applicants are asked the same questions. The structured interview is conducted by a panel of interviewers rather than by a single person. Members of the rating panel use formal scoring procedures that require them to provide numerical scores for a number of predetermined categories. The basic goal of the structured interview is to make sure that everyone who is interviewed is treated the same. This consistency across interviews improves reliability, which in turn improves validity. More-structured interviews are also more effective in reducing the biasing effect of applicant impression management.132 A method for creating structured interview questions and responses is outlined in Figure 6.4.
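Panel scoring of this kind can be illustrated with a short sketch. This is not a procedure prescribed in the chapter, just a minimal example; the raters, rating categories, and 1-to-5 scale are all hypothetical.

```python
# Illustrative sketch of structured-interview scoring: each panel member
# rates every applicant on the same predetermined categories, and the
# ratings are averaged into a single score per applicant.
# Raters, categories, and the 1-5 scale are hypothetical.

ratings = {  # applicant -> rater -> category -> score (1-5 scale)
    "Applicant 1": {
        "Rater A": {"job knowledge": 4, "communication": 3, "teamwork": 5},
        "Rater B": {"job knowledge": 4, "communication": 4, "teamwork": 4},
    },
    "Applicant 2": {
        "Rater A": {"job knowledge": 3, "communication": 5, "teamwork": 3},
        "Rater B": {"job knowledge": 2, "communication": 4, "teamwork": 3},
    },
}

def overall(applicant):
    # Pool every category rating from every rater and take the mean.
    scores = [s for rater in ratings[applicant].values() for s in rater.values()]
    return sum(scores) / len(scores)

for name in ratings:
    print(name, round(overall(name), 2))
```

Because every applicant is rated on the same categories by the same raters, differences in the averaged scores reflect the applicants rather than the questions asked.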

Structured interview

Employment interview that incorporates multiple raters, common questions, and standardized evaluation procedures.

Most structured interviews fit into two types: (1) the situational interview, in which the interviewer asks questions about what the applicant would do in a hypothetical situation, and (2) the behavioral interview, in which the questions focus on the applicant's behavior in past situations. Researchers disagree about which type is better, with some research supporting each type.133 In general, both types seek to have people discuss actions in a specific context and thus tend to generate responses that are good predictors of job performance. Examples of both types of interview questions are shown in Table 6.5. The table also shows scoring for sample responses; one reason these interview formats work is that they provide raters with clear examples for determining how a response should be scored.

Situational interview

Type of structured interview that uses questions based on hypothetical situations.

Behavioral interview

Type of structured interview that uses questions concerning behavior in past situations.

Linking Interviews to Strategy

Interviews are used by organizations with all of the HR strategies. The focus of the interview questions, however, depends on strategy. Organizations seeking Free Agents focus on assessing achievement. Typical questions relate to job experience and certification in specific skills. In contrast, organizations seeking Loyal Soldiers focus on assessing fit. Specific questions measure personality characteristics, motivation, and social skills. Organizations seeking Committed Experts use a combination approach that assesses both potential and fit. Typical questions measure problem-solving ability and aptitude in a particular field, such as sales or engineering.

Image

Figure 6.4 Creating Structured Interview Questions.

Image

Effective organizations thus begin the interview process by thinking carefully about their HR strategy. After clearly determining their strategy, they develop questions that help them identify individuals with the characteristics they most desire. Using the interview to identify and hire employees who are most likely to engage in behaviors that support either a low-cost or a differentiation strategy is thus an effective way of using human resource management to create competitive advantage. Hiring the right employees also builds an organizational culture that helps the organization meet the needs of customers.

Image

CONCEPT CHECK

  1. What are common methods of testing?
  2. What information can be obtained from application blanks and résumés?
  3. How can the reliability and validity of employment interviews be improved?

 LEARNING OBJECTIVE 4 

How Are Final Selection Decisions Made?

What happens after an organization tests, interviews, and gathers information about job applicants? In most cases, the organization ends up with several different scores from several different methods. How should it combine these bits of information to arrive at a final hiring decision?

One possibility is that decision makers will simply look at the scores from each method and then make a judgment about who should be hired. This is what frequently happens, but it does not usually lead to the best decision.134 A better method is to use a set of decision rules and statistical tactics. Here, decision makers first obtain a numerical score for the outcome of each selection method and then apply decision strategies to the numerical scores. Common decision strategies include weighting the predictors, using minimum cutoffs, establishing multiple hurdles, and banding.135

PREDICTOR WEIGHTING APPROACH

In predictor weighting, we combine a set of selection scores into an overall score in which some measures count more than others. For instance, suppose a manager has three applicants for an engineering position. Each candidate has a cognitive ability score, an interview score, and a biographical test score. One applicant has a high cognitive ability score and a low interview score; the second applicant has a low cognitive ability score and a high biographical test score; and the third applicant has an average score on all three tests. How can the manager use these scores to predict which applicant will perform best?

Predictor weighting

Multiplying scores on selection assessments by different values so that more important measures receive greater weight.

Of course, one approach is to take a simple average of all three test scores, but this procedure ignores the fact that one type of test might provide better information than another. The alternative is to establish a weight for each test, so that the method that provides the most valuable information has a higher influence on the overall decision. For instance, the cognitive ability test and interview might both be weighted as 40 percent of the overall score, with the biodata being weighted as 20 percent. Each score is multiplied by its assigned weight, and final selection decisions are based on an overall score.
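The weighted-composite calculation can be sketched in a few lines. The 40/40/20 weights follow the example above; the applicant scores are invented for illustration.

```python
# Illustrative sketch of predictor weighting with hypothetical scores.
# Weights (40/40/20) follow the example in the text; applicant scores are invented.

WEIGHTS = {"cognitive": 0.40, "interview": 0.40, "biodata": 0.20}

applicants = {
    "A": {"cognitive": 90, "interview": 55, "biodata": 70},
    "B": {"cognitive": 60, "interview": 75, "biodata": 95},
    "C": {"cognitive": 75, "interview": 75, "biodata": 75},
}

def composite(scores):
    # Multiply each score by its weight and sum to get the overall score.
    return sum(WEIGHTS[method] * value for method, value in scores.items())

for name, scores in applicants.items():
    print(name, round(composite(scores), 1))
```

Note how the weighting lets a strong biodata score partly offset a weak cognitive ability score: applicant B's composite ends up close to applicant C's even though their score profiles differ sharply.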

How should the weights be determined? Experts who have a thorough knowledge of what it takes to succeed in the job might set the weights. However, an even better method is to use statistical methods for determining the best set of weights. Regardless of how the weights are determined, the process of predictor weighting is helpful for ensuring that managers and human resource specialists give appropriate attention to the information obtained from each selection method.

MINIMUM CUTOFFS APPROACH

Predictor weighting allows an applicant's strength in one area to compensate for weakness in another area. Someone with a low cognitive ability score might still be hired if interview and biodata scores are high, for example. This makes sense in many contexts but not in every case. For instance, consider an organization that is hiring people to work in self-managing teams. These teams succeed only if team members are able to cooperate and work together. Suppose an applicant for a position on the team has a high cognitive ability score but a very low score on an interview measuring interpersonal skills. In the team setting, high cognitive ability will not make up for problems created by low interpersonal skills, and the organization will need to take this fact into consideration.

In such a situation, the organization can take a minimum cutoffs approach, requiring each applicant to have at least a minimum score on each selection method. An applicant who is very weak on any of the measures will not be hired.

Minimum cutoffs approach

The process of eliminating applicants who do not achieve an acceptable score on each selection assessment.

In practice, many organizations use minimum cutoffs to identify a pool of people who meet at least minimum requirements in a number of areas. Once this pool of people is identified, then weighted predictors are used to make the final hiring decision.
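The two-stage practice just described — screen with minimum cutoffs, then rank the survivors with weighted predictors — might look like this sketch. The cutoff values, weights, and scores are all hypothetical.

```python
# Illustrative sketch: screen with minimum cutoffs, then rank survivors
# by a weighted composite. Cutoffs, weights, and scores are hypothetical.

CUTOFFS = {"cognitive": 50, "interview": 60, "biodata": 40}
WEIGHTS = {"cognitive": 0.40, "interview": 0.40, "biodata": 0.20}

applicants = {
    "A": {"cognitive": 90, "interview": 55, "biodata": 70},  # fails interview cutoff
    "B": {"cognitive": 60, "interview": 75, "biodata": 95},
    "C": {"cognitive": 75, "interview": 75, "biodata": 75},
}

def meets_cutoffs(scores):
    # An applicant must reach the minimum on every method; no compensation allowed.
    return all(scores[m] >= cut for m, cut in CUTOFFS.items())

pool = {name: s for name, s in applicants.items() if meets_cutoffs(s)}
ranked = sorted(pool,
                key=lambda n: sum(WEIGHTS[m] * pool[n][m] for m in WEIGHTS),
                reverse=True)
print(ranked)  # → ['C', 'B']
```

Applicant A is screened out despite a high cognitive ability score, because the cutoff stage does not allow strength in one area to compensate for weakness in another.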

MULTIPLE HURDLES APPROACH

As we have seen, some selection methods are much more expensive than others. Using minimum cutoffs in a number of areas in progressive order can thus increase the utility of the overall selection process. A relatively inexpensive test, such as a cognitive ability test, is given first. Those who achieve at least the minimum score then go on to the next selection method. This second method might be more expensive, such as an interview. The multiple hurdles approach thus involves multiple cutoffs applied in order, and applicants must meet the minimum requirement of one selection method before they can proceed to the next. One advantage of the multiple hurdles approach is that fewer minority candidates may be eliminated, because applicants who meet the acceptable cutoff advance even if they are not the highest scorers on a particular test.136 A potential problem with this approach is that decision makers eliminate applicants without knowing how they would score on all the tests. The process makes sense, though, when organizations use expensive selection tests and wish to limit the number of applicants who take those tests.
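A sketch of sequential hurdles follows, assuming hypothetical cutoffs and an ordering from the cheapest method to the most expensive.

```python
# Illustrative sketch of multiple hurdles: cheap tests come first, and only
# applicants who clear each hurdle take the next (more expensive) method.
# The hurdle order, cutoffs, and scores are hypothetical.

hurdles = [("cognitive", 50), ("work_sample", 60), ("interview", 70)]

applicants = {
    "A": {"cognitive": 80, "work_sample": 55, "interview": 90},
    "B": {"cognitive": 45, "work_sample": 85, "interview": 80},
    "C": {"cognitive": 70, "work_sample": 75, "interview": 72},
}

remaining = set(applicants)
for method, cutoff in hurdles:
    # Applicants eliminated at an earlier hurdle never take this
    # (costlier) assessment, which is what saves money.
    remaining = {n for n in remaining if applicants[n][method] >= cutoff}

print(sorted(remaining))  # → ['C']
```

Notice that applicant B never takes the work sample or interview: the first hurdle eliminates that candidate before the expensive methods are administered.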

Multiple hurdles approach

The process of obtaining scores on a selection method and only allowing those who achieve a minimum score to take the next assessment.

BANDING APPROACH

Because few employment tests are perfectly reliable, two people with slightly different scores may not really differ on the characteristic being measured; the difference may simply reflect measurement error. This possibility has led some experts to create a process called banding. The banding approach uses statistical analysis to identify scores that may not be meaningfully different. People with such scores are placed in a common category, or band, and managers and selection specialists are then free to choose any of the applicants within the band.137
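One common statistical approach, sketched below, treats all scores within roughly two standard errors of the difference (SED) of the top score as equivalent. The chapter does not specify a particular formula, and the reliability, standard deviation, and scores used here are hypothetical.

```python
# Illustrative sketch of banding, assuming one common approach: scores within
# about two SEDs of the top score are treated as equivalent.
# Reliability, standard deviation, and scores are hypothetical.
import math

reliability = 0.80   # reliability of the selection test
sd = 10.0            # standard deviation of test scores
sem = sd * math.sqrt(1 - reliability)   # standard error of measurement
sed = sem * math.sqrt(2)                # standard error of the difference

scores = {"A": 92, "B": 88, "C": 85, "D": 74}
top = max(scores.values())
band_floor = top - 1.96 * sed           # ~95% confidence band below the top score

band = sorted(n for n, s in scores.items() if s >= band_floor)
print(band)  # → ['A', 'B', 'C']
```

With these numbers the band floor is about 79.6, so applicants A, B, and C are treated as tied even though their raw scores differ, while D falls outside the band.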

Banding approach

The process of treating people as doing equally well when they have similar scores on a selection assessment.

The practice of banding is somewhat controversial. Some people argue that banding can help organizations meet affirmative action goals. If the band of applicants includes a member of a minority group, this person can be hired even if someone else had a slightly higher score. Others, however, argue that banding can lead to decreased utility because people with lower scores, and thus lower potential to succeed, are often hired.138

Image

CONCEPT CHECK

  1. How can scores from different selection measures be combined to make a final hiring decision?
  2. How is the multiple hurdles method different from the minimum cutoffs method?

Image

A MANAGER'S PERSPECTIVE REVISITED

IN THE MANAGER'S PERSPECTIVE AT THE BEGINNING OF THE CHAPTER, JAVIER WAS RESPONSIBLE FOR HIRING A NEW MEMBER OF HIS CUSTOMER SERVICE TEAM. HE FACED A NUMBER OF ISSUES CONCERNING WHAT KIND OF PERSON TO HIRE, WHICH SELECTION METHODS TO USE, AND HOW TO MAKE HIS FINAL DECISION. FOLLOWING ARE THE ANSWERS TO THE “WHAT DO YOU THINK?” QUIZ THAT FOLLOWED THE CASE. WERE YOU ABLE TO CORRECTLY IDENTIFY THE TRUE STATEMENTS? COULD YOU DO BETTER NOW?

  1. You should hire people who already have the skills and knowledge they will need on the job. Image Although organizations using short-term employment strategies may prefer to hire employees who already have the necessary skills, the potential of new employees is often more important for organizations using long-term employment strategies.
  2. The benefits of making good hiring decisions are highest when the organization has a lot of job applicants. Image Organizations with numerous applicants can be choosier about whom they hire, which increases the utility, or dollar value, of selection methods.
  3. Intelligence tests are very helpful for predicting who will be effective in almost any job. Image Intelligence tests are good predictors of work performance, and they demonstrate generalizability across settings.
  4. Reference checking provides valuable information about prospective employees. Image Unfortunately, because of problems with defamation of character, reference checking provides very little useful information.
  5. You need to ask each applicant individualized questions to determine his or her true strengths and weaknesses. Image Asking applicants individualized questions creates problems with reliability. Structured interviews in which each applicant is asked the same questions are generally better than unstructured interviews.

Javier's situation is one that almost all managers eventually face. When managers make good hiring decisions, they help the organization secure high-performing employees. These employees, in turn, help produce goods and services of high quality and low cost, resulting in competitive advantage for the organization. The principles discussed in this chapter can help improve hiring decisions.

Image

Image

Employee selection practices should align with overall HR strategy. Employees provide short-term talent when the organization hires from outside sources and long-term talent when the organization promotes from within. Employees offer specialist talent when they possess highly developed expertise in a particular area and generalist talent when they operate in a variety of positions. Combinations of talent can be linked to overall HR strategies. Short-term generalist talent corresponds with a Bargain Laborer HR strategy, long-term generalist talent with a Loyal Soldier HR strategy, long-term specialist talent with a Committed Expert HR strategy, and short-term specialist talent with a Free Agent HR strategy.

Organizations need to achieve a strategic balance between job-based fit and organization-based fit. Fit is not critical for organizations with short-term generalist talent. Organization-based fit is critical for organizations with long-term generalist talent. Job-based fit is critical for organizations with short-term specialist talent. Organization-based fit and job-based fit are both critical for organizations with long-term specialist talent.

Another staffing characteristic that underlies strategic employee selection decisions is the balance between potential and achievement. Organizations with long-term employees who are either generalists or specialists hire based on potential. Organizations with short-term specialist talent hire based on achievement.

Image

Reliability, validity, utility, fairness, and acceptability represent five principles that are helpful for determining whether a selection method is good. Reliability concerns the consistency of the method. Validity represents the relationship between what the method measures and job performance. Utility focuses on the cost effectiveness of the method. Fairness concerns the effect of the method on individuals and minority groups. Acceptability focuses on how applicants react when they complete the selection method.

Image

The usefulness of a particular selection method often differs depending on the context of the organization and job. However, a number of selection methods generally satisfy the five principles for being effective. Common selection tests include cognitive ability testing, personality testing, physical ability testing, integrity testing, drug testing, and work sample testing. Cognitive ability and personality tests can be very useful for assessing potential to succeed. Other methods of information gathering include application forms and résumés, biographical data, and reference checking. Application forms and résumés are generally inexpensive methods for obtaining information about job applicants. The interview is another commonly used method of gathering information. Interviews are more reliable and valid when they are structured to ensure consistent treatment of each person being interviewed.

Image

Managers and human resource specialists should use good decision-making procedures to combine information from different selection methods. One procedure is predictor weighting, which allows more-important selection methods to have a stronger influence on the final decision. Another procedure, labeled minimum cutoffs, requires successful applicants to achieve at least a minimum score on each method. A third procedure is multiple hurdles, where applicants must achieve a minimum score on one selection method before they can advance to the next method. A final procedure is banding, wherein employees with similar scores on a selection method are grouped into categories. People in a given category are seen as having the same score, even though their scores are slightly different.

Image

Acceptability 218

Achievement 209

Alternate-forms method 212

Assessment center 228

Banding approach 237

Behavioral interview 234

Biographical data 230

Cognitive ability testing 219

Concurrent validation strategy 214

Content validation strategy 213

Correlation coefficient 212

Criterion-related validation strategy 213

Defamation of character 231

Employee selection 204

Fairness 217

Integrity testing 226

Inter-rater method 212

Job-based fit 208

Long-term generalists 207

Long-term specialists 207

Minimum cutoffs approach 236

Multiple hurdles approach 237

Negligent hiring 231

Organization-based fit 208

Personality testing 221

Potential 210

Predictive validation strategy 214

Predictor weighting 236

Reliability 211

Short-term generalists 206

Short-term specialists 207

Situational interview 234

Situational judgment test 224

Situational specificity 215

Split-halves method 212

Structured interview 234

Test-retest method 211

Utility 215

Validity 213

Validity generalization 215

Work sample testing 227

Image

  1. How do the concepts of long- and short-term talent and generalist and specialist talent fit with overall HR strategy?
  2. For what type of HR strategy is organization fit most important? When is job fit most needed? What type of organization should base hiring on achievement? What type should hire based on potential?
  3. What is reliability? How is it estimated?
  4. What is validity? How is it estimated?
  5. What factors affect the utility of selection methods?
  6. What is the difference between fairness and legality?
  7. Why do people sometimes react negatively to certain selection methods?
  8. What are the strengths and weaknesses associated with the following selection methods: cognitive ability testing, personality testing, physical ability testing, integrity testing, drug testing, application forms and résumés, biodata, work sample testing, reference checking, and interviewing?
  9. Which selection methods are best for organizations with the various employee selection strategies?
  10. What are the methods for combining scores from different selection methods?

Image

Outback Steakhouse, Inc., now a $3.25 billion company with 65,000 employees and 1,100 restaurants worldwide, began modestly in the spring of 1988. A key to making Outback a great place to work is hiring the right people. One of the things we recognized early on is that you cannot send turkeys to eagle school: Smart leaders do not hire marginal employees and expect them to be able to keep the commitments of the company to customers or to remain very long with the company. If you start with the right people and provide a positive employee experience, turnover stays low. Thus, a rigorous employee selection process was developed in the early years of the company that is rooted in the Principles and Beliefs.

Outback's selection process for hourly and management Outbackers is proprietary; however, we can share some of the details here about the steps involved in the hiring process:

  • All applicants are given a realistic job preview that shares both the benefits and the responsibilities of working for Outback. We explain to applicants that being an Outbacker means taking care of others, and we tell them how they will be held accountable for that.
  • We share a document, called a Dimension of Performance, which provides detailed examples of the kinds of behavior expected of Outbackers and how those behaviors are tied to the vision of Outback. This is a candidate's first exposure to our vision. (At this point, some candidates have withdrawn from the process because these dimensions set a very high standard.)
  • When candidates agree to move forward in the process, they are asked to complete an application. The information they provide is reviewed with an eye toward determining if the candidate can perform the job, fit into the Outback culture, and stay with the company.
  • Successful applicants are assessed for their cognitive ability, personality, and judgment through a series of tests that have been validated against existing Outbackers who have been successful in the company.
  • Applicants who pass these tests are interviewed using questions that probe not only their experience but also their orientation toward aspects of the Outback culture, including service mindedness, hospitality, teamwork, and ability to think on their feet.

QUESTIONS

  1. How do the employee-selection methods at Outback Steakhouse help achieve competitive advantage?
  2. How important is organization fit for Outback Steakhouse?
  3. Why does Outback Steakhouse order the selection methods such that applicants first complete an application, then complete tests, and then participate in an interview?
  4. Why do you think these selection methods are valid?

Source: Tom DeCotiis, Chris Sullivan, David Hyatt, and Paul Avery, “How Outback Steakhouse Created a Great Place to Work, Have Fun, and Make Money,” Journal of Organizational Excellence (Autumn 2004): 23–33. Reprinted with permission of John Wiley & Sons, Inc.

Image

Stringtown Iron Works is a small fictional shipyard on the East Coast dedicated to ship overhaul. It focuses on obtaining government contracts for overhauling naval ships. These overhauls require Stringtown to maintain a quality workforce that is capable of rapid production. The position of pipe fitter is particularly critical for success.

Pipe fitters are responsible for repairing and installing the piping systems on board the vessels. Employees in the pipe fitter classification may also be called on to work in the shop, building pipe pieces that are ultimately installed on the ships. Like most union jobs in the yard, pipe fitters are predominantly white men between the ages of 30 and 45. As part of the most recent bargaining agreement, work is primarily done in cross-functional teams.

Job Description

 

Job: Pipe fitter

Pay: $12.00 to $20.00 per hour

A pipe fitter must:

  1. Read and interpret blueprints and/or sketches to fabricate and install pipe in accordance with specifications.
  2. Perform joint preparation and fit-up to fabricate and install brazed and welded piping systems.
  3. Perform layout and calculations to fabricate and install pipe.
  4. Fabricate pipe pieces up to 10” in diameter and up to 10' long to support shipboard pipe installation.
  5. Install ship's piping, such as water, drains, hydraulics, lube oil, fuel oil, high temperature air, etc. on location and within tolerances per design.
  6. Inspect and hydro test completed piping systems to ensure compliance with ship's specifications.
  7. Use a variety of hand and power tools to perform joint preparation, assembly bolt-up, and positioning during fabrication and installation.
  8. Utilize welding equipment to tack-weld pipe joints and to secure pipe supports to ship's structure.

Completion of the above tasks requires pipe fitters to do the following:

  • Frequent lifting and carrying of 25–50 pounds
  • Occasional lifting and carrying of over 50 pounds
  • Occasional to frequent crawling, kneeling, and stair climbing
  • Frequent pushing, pulling, hammering, and reaching
  • Frequent bending, stooping, squatting, and crouching
  • Occasional twisting in awkward positions
  • Occasional fume exposure

 

QUESTIONS

  1. Which of the overall HR strategies would be best for Stringtown Iron Works?
  2. Should Stringtown focus on job fit or organization fit?
  3. Should Stringtown hire based on achievement or potential?
  4. What selection methods would you recommend for Stringtown? Why?

Image

Interview a family member, friend, or someone else who has a job you would like to someday have. Learn about the hiring practices of the organization where this person works. Ask questions like the following:

  1. What makes the company different from its competitors? Does it focus mostly on reducing costs, or does it try to provide goods and services that are somehow better than what competitors offer?
  2. What tasks do you do on the job? What knowledge, skills, and abilities do you need in order to do this job effectively?
  3. How long do most people stay at the company? Is this a place where most people work for their entire career? How long do you think you will continue working with the company?
  4. What did you have to do to get hired at the company? Did you take any tests? Did they ask for a résumé? What was the interview like?
  5. What type of qualifications do you think are most important for someone who wants to work at your company? If you were making a decision to hire someone to work with you, what characteristics would you want that person to have? How would you measure those characteristics?

Using the information obtained from the interview, do the following:

  1. Identify the competitive business strategy of the organization.
  2. Identify the human resource strategy of the organization.
  3. Evaluate whether the competitive business strategy and the human resource strategy fit.
  4. Evaluate the effectiveness of the organization's selection methods for achieving its human resource strategy.
  5. Make recommendations about the selection methods that you think would be most appropriate for the position of the person you interviewed.

Image

Access the companion website to test your knowledge by completing a Graphics Design, Inc., interactive role play.

In this exercise you have identified several potential candidates for the new positions at GDI, and it is now time to begin the selection process. In designing the appropriate selection system for the company, you must consider reliability, validity, utility, legality, and acceptability, along with common testing methods, information-gathering sources, and interview types. Whatever system you choose, you know that you'll need to gain buy-in from the managers who need these new employees. You know, too, that the system must support GDI's basic HR strategy, the Loyal Soldier strategy. Your recommendations on the appropriate selection system are due this afternoon. What will it look like? Image

ENDNOTES

1. Chad H. Van Iddekinge, Gerald R. Ferris, Pamela L. Perrewe, Fred R. Blass, Thomas D. Heetderks, and Alexa A. Perryman, “Effects of Selection and Training on Unit-Level Performance Over Time: A Latent Growth Modeling Approach,” Journal of Applied Psychology 94 (2009): 829–843; David E. Terpstra and Elizabeth J. Rozell, “The Relationship of Staffing Practices to Organizational Level Measures of Performance,” Personnel Psychology 46 (1993): 27–48.

2. Rod Powers, “Surviving Marine Corps Basic Training,” accessed online at http://usmilitary.about.com/od/marinejoin/a/marinebasic.htm.

3. http://www.marines.com/eligibility/prep-test; http://www.instantasvab.com/score/requirements-for-marine-corps-jobs.html.

4. http://www.marines.com/becoming-a-marine/howto-prepare; Elizabeth Bumiller, “First Pull-Ups, Then Combat, Marines Say,” New York Times, February 1, 2013.

5. Lance Cpl. John Robbart III, “Tips on Training for Marine Corps' Physical Tests,” available online at http://thevillagenews.com/story/50903; http://www.marines.com/becoming-a-marine/how-to-prepare.

6. Paul Osterman, “Choice of Employment Systems in Internal Labor Markets,” Industrial Relations 26 (1987): 46–67; Randall S. Schuler and Susan E. Jackson, “Linking Competitive Strategies with Human Resource Practices,” Academy of Management Executive 1 (1987): 207–219.

7. John E. Delery and D. Harold Doty, “Modes of Theorizing in Strategic Human Resource Management: Tests of Universalistic, Contingency, and Configurational Performance Predictions,” Academy of Management Journal 39 (1996): 802–835; Schuler and Jackson, “Linking Competitive Strategies,” 207.

8. Judy D. Olian and Sara L. Rynes, “Organizational Staffing: Integrating Practice with Strategy,” Industrial Relations 23 (1984): 170–183; Jeffrey A. Sonnenfeld and Maury A. Peiperl, “Staffing Policy as a Strategic Response: A Typology of Career Systems,” Academy of Management Review 13 (1988): 588–601; Peter Bamberger and Ilan Meshoulam, Human Resource Strategy: Formulation, Implementation, and Impact (Thousand Oaks, CA: Sage Publications, 2000).

9. Olian and Rynes, “Organizational Staffing,” 170–183; Sonnenfeld and Peiperl, “Staffing Policy,” 588–601; Bamberger and Meshoulam, Human Resource Strategy.

10. Olian and Rynes, “Organizational Staffing,” 170–183; Sonnenfeld and Peiperl, “Staffing Policy,” 588–601; Bamberger and Meshoulam, Human Resource Strategy.

11. Ibid.

12. Jeffrey R. Edwards, “Person-Job Fit: A Conceptual Integration, Literature Review and Methodological Critique,” in Vol. 6 of International Review of Industrial/Organizational Psychology (London: Wiley, 1991), 283–357; Amy L. Kristof, “Person–Organization Fit: An Integrative Review of Its Conceptualizations, Measurement, and Implications,” Personnel Psychology 49 (1996): 1–49.

13. Winfred Arthur, Jr., Suzanne T. Bell, Anton J. Villado, and Dennis Doverspike, “The Use of Person-Organization Fit in Employment Decision Making: An Assessment of Its Criterion-Related Validity,” Journal of Applied Psychology 91 (2006): 786–801.

14. Olian and Rynes, “Organizational Staffing,” 170–183.

15. Ibid.

16. Delery and Doty, “Modes of Theorizing,” 802–835; Mark A. Youndt, Scott A. Snell, James W. Dean, Jr., and David P. Lepak, “Human Resource Management, Manufacturing Strategy, and Firm Performance,” Academy of Management Journal 39 (1996): 836–866.

17. Robert Gatewood and Hubert S. Field, Human Resource Selection, 5th ed. (Mason, OH: South-Western, 2000).

18. Frank J. Landy, “Stamp Collecting Versus Science: Validation as Hypothesis Testing,” American Psychologist 41 (1986): 1183–1192.

19. Gatewood and Field, Human Resource Selection.

20. Frank L. Schmidt, John E. Hunter, R. McKenzie, and T. Muldrow, “The Impact of Valid Selection Procedures on Workforce Productivity,” Journal of Applied Psychology 64 (1979): 609–626.

21. Gary P. Latham and Glen Whyte, “The Futility of Utility Analysis,” Personnel Psychology 47 (1994): 31–47; Glen Whyte and Gary Latham, “The Futility of Utility Analysis Revisited: When Even an Expert Fails,” Personnel Psychology 50 (1997): 601–611.

22. Kenneth P. Carson, John S. Becker, and John A. Henderson, “Is Utility Really Futile? A Failure to Replicate and an Extension,” Journal of Applied Psychology 83 (1998): 84–96; John T. Hazer and Scott Highhouse, “Factors Influencing Managers' Reactions to Utility Analysis: Effects of SDy Method, Information Frame, and Focal Intervention,” Journal of Applied Psychology 82 (1997): 104–112.

23. Deidra J. Schleicher, Vijaya Venkataramani, Frederick P. Morgeson, and Michael A. Campion, “So You Didn't Get the Job . . . Now What Do You Think? Examining Opportunity-to-Perform Fairness Perceptions,” Personnel Psychology 59 (2006): 559–590.

24. Hannah-Hanh D. Nguyen and Ann Marie Ryan, “Does Stereotype Threat Affect Test Performance of Minorities and Women? A Meta-Analysis of Experimental Evidence,” Journal of Applied Psychology 93 (2008): 1314–1334; Ryan P. Brown and Eric Anthony Day, “The Difference Isn't Black and White: Stereotype Threat and the Race Gap on Raven's Advanced Progressive Matrices,” Journal of Applied Psychology 91 (2006): 979–985.

25. I.T. Robertson, P.A. Iles, L. Gratton, and D. Sharpley, “The Impact of Personnel Selection and Assessment Methods on Candidates,” Human Relations 44 (1991): 963–982; Bradford S. Bell, Darin Wiechmann, and Ann Marie Ryan, “Consequences of Organizational Justice Expectations in a Selection System,” Journal of Applied Psychology 91 (2006): 455–466.

26. Sara L. Rynes, “Who's Selecting Whom? Effects of Selection Practices on Applicant Attitudes and Behavior,” in N. Schmitt, W.C. Borman, and Associates, eds., Personnel Selection in Organizations (San Francisco: Jossey-Bass, 1993); Stephen W. Gilliland, “Effects of Procedural and Distributive Justice on Reactions to a Selection System,” Journal of Applied Psychology 79 (1994): 691–701.

27. Dirk D. Steiner and Stephen W. Gilliland, “Fairness Reactions to Personnel Selection Techniques in France and the United States,” Journal of Applied Psychology 81 (1996): 134–141.

28. Kevin R. Murphy, George C. Thornton III, and Douglas H. Reynolds, “College Students' Attitudes Toward Employee Drug Testing Programs,” Personnel Psychology 43 (1990): 615–631.

29. Wonderlic Personnel Test & Scholastic Level Exam: User's Manual (Libertyville, IL: Wonderlic Personnel Test, 1992).

30. Malcolm J. Ree, James A. Earles, and Mark S. Teachout, “Predicting Job Performance: Not Much More Than g,” Journal of Applied Psychology 79 (1994): 518–524.

31. Christopher M. Berry, Melissa L. Gruys, and Paul R. Sackett, “Educational Attainment as a Proxy for Cognitive Ability in Selection: Effects on Levels of Cognitive Ability and Adverse Impact,” Journal of Applied Psychology 91 (2006): 696–705.

32. Anne Anastasi, Psychological Testing, 6th ed. (New York: Macmillan, 1988).

33. Ann Marie Ryan, Robert E. Ployhart, Gary J. Greguras, and Mark J. Schmit, “Test Preparation Programs in Selection Contexts: Self-Selection and Program Effectiveness,” Personnel Psychology 51 (1998): 599–622.

34. John E. Hunter and Ronda F. Hunter, “Validity and Utility of Alternative Predictors of Job Performance,” Psychological Bulletin 96 (1984): 72–98.

35. Jesus F. Salgado, Neil Anderson, Silvia Moscoso, Cristina Bertua, and Filip de Fruyt, “International Validity Generalization of GMA and Cognitive Abilities: A European Community Meta-Analysis,” Personnel Psychology 56 (2003): 573–606.

36. Mark W. Coward and Paul R. Sackett, “Linearity of Ability–Performance Relationships: A Reconfirmation,” Journal of Applied Psychology 75 (1990): 297–300.

37. John E. Hunter, Frank L. Schmidt, and Michael K. Judiesch, “Individual Differences in Output Variability as a Function of Job Complexity,” Journal of Applied Psychology 75 (1990): 28–42.

38. John E. Hunter, “Cognitive Ability, Cognitive Aptitudes, Job Knowledge, and Job Performance,” Journal of Vocational Behavior 29 (1986): 340–362.

39. Jeffrey A. LePine, Jason A. Colquitt, and Amir Erez, “Adaptability to Changing Task Contexts: Effects of General Cognitive Ability, Conscientiousness, and Openness to Experience,” Personnel Psychology 53 (2000): 563–594; Jonas W.B. Lang and Paul D. Bliese, “General Mental Ability and Two Types of Adaptation to Unforeseen Change: Applying Discontinuous Growth Models to the Task-Change Paradigm,” Journal of Applied Psychology 94 (2009): 411–428.

40. Christopher M. Berry, Malissa A. Clark, and Tara K. McClure, “Racial/Ethnic Differences in the Criterion-Related Validity of Cognitive Ability Tests: A Qualitative and Quantitative Review,” Journal of Applied Psychology 96 (2011): 881–906.

41. Wendy S. Dunn, Michael K. Mount, and Murray R. Barrick, “Relative Importance of Personality and General Mental Ability in Managers' Judgments of Applicant Qualifications,” Journal of Applied Psychology 80 (1995): 500–509.

42. Kevin R. Murphy, Brian E. Cronin, and Anita P. Tam, “Controversy and Consensus Regarding the Use of Cognitive Ability Testing in Organizations,” Journal of Applied Psychology 88 (2003): 660–671.

43. Therese Hoff Macan, Marcia J. Avedon, Matthew Paese, and David E. Smith, “The Effects of Applicants' Reactions to Cognitive Ability Tests and an Assessment Center,” Personnel Psychology 47 (1994): 715–738.

44. David Chan, “Racial Subgroup Differences in Predictive Validity Perceptions on Personality and Cognitive Ability Tests,” Journal of Applied Psychology 82 (1997): 311–320; David Chan, Neal Schmitt, Joshua M. Sacco, and Richard P. DeShon, “Understanding Pretest and Posttest Reactions to Cognitive Ability and Personality Tests,” Journal of Applied Psychology 83 (1998): 471–485.

45. David C. Funder, The Personality Puzzle, 2nd ed. (New York: Norton, 2001).

46. Murray R. Barrick and Michael K. Mount, “The Big Five Personality Dimensions and Job Performance,” Personnel Psychology 44 (1991): 1–26; Gregory M. Hurtz and John J. Donovan, “Personality and Job Performance: The Big Five Revisited,” Journal of Applied Psychology 85 (2000): 869–879.

47. Mark J. Schmit, Jenifer A. Kihm, and Chet Robie, “Development of a Global Measure of Personality,” Personnel Psychology 53 (2000): 153–194; Jesus F. Salgado, “The Five Factor Model of Personality and Job Performance in the European Community,” Journal of Applied Psychology 82 (1997): 30–43.

48. Robert Hogan and Joyce Hogan, Hogan Personality Inventory Manual (Tulsa, OK: Hogan Assessment Systems, 1992); Paul T. Costa and Robert R. McCrae, NEO PI-R Professional Manual (Odessa, FL: Psychological Assessment Resources, 1992).

49. Mark J. Schmit, Ann Marie Ryan, Sandra L. Stierwalt, and Amy B. Powell, “Frame-of-Reference Effects on Personality Scale Scores and Criterion-Related Validity,” Journal of Applied Psychology 80 (1995): 607–620; Mark N. Bing, James C. Whanger, H. Kristi Davison, and Jayson B. VanHook, “Incremental Validity of the Frame-of-Reference Effect in Personality Scale Scores: A Replication and Extension,” Journal of Applied Psychology 89 (2004): 150–157; John M. Hunthausen, Donald M. Truxillo, Talya N. Bauer, and Leslie B. Hammer, “A Field Study of Frame-of-Reference Effects on Personality Test Validity,” Journal of Applied Psychology 88 (2003): 545–551; Filip Lievens, Wilfried DeCorte, and Eveline Schollaert, “A Closer Look at the Frame-of-Reference Effect in Personality Scales and Validity,” Journal of Applied Psychology 93 (2008): 268–279.

50. Jonathan A. Shaffer and Bennett E. Postlethwaite, “A Matter of Context: A Meta-Analytic Investigation of the Relative Validity of Contextualized and Noncontextualized Personality Measures,” Personnel Psychology 65 (2012): 445–493.

51. Dan S. Chiaburu, In-Sue Oh, Christopher M. Berry, Ning Li, and Richard G. Gardner, “The Five-Factor Model of Personality Traits and Organizational Citizenship Behaviors: A Meta-Analysis,” Journal of Applied Psychology 96 (2011): 1140–1166.

52. Ian R. Gellatly, “Conscientiousness and Task Performance: Test of Cognitive Process Model,” Journal of Applied Psychology 81 (1996): 474–482; Murray R. Barrick, Michael K. Mount, and Judy P. Strauss, “Conscientiousness and Performance of Sales Representatives: Test of the Mediating Effects of Goal Setting,” Journal of Applied Psychology 78 (1993): 715–722; Greg L. Stewart, Kenneth P. Carson, and Robert L. Cardy, “The Joint Effects of Conscientiousness and Self-Leadership Training on Employee Self-Directed Behavior in a Service Setting,” Personnel Psychology 49 (1996): 143–164; Timothy A. Judge and Remus Ilies, “Relationship of Personality to Performance Motivation: A Meta-Analytic Review,” Journal of Applied Psychology 87 (2002): 797–807.

53. Timothy A. Judge, Joseph J. Martocchio, and Carl J. Thoresen, “Five-Factor Model of Personality and Employee Absence,” Journal of Applied Psychology 82 (1997): 745–755.

54. Remus Ilies, Ingrid Smithey Fulmer, Matthias Spitzmuller, and Michael D. Johnson, “Personality and Citizenship Behavior: The Mediating Role of Job Satisfaction,” Journal of Applied Psychology 94 (2009): 945–959.

55. Judge and Ilies, “Relationship of Personality to Performance Motivation,” 797–807.

56. Luke D. Smillie, Gilliam B. Yeo, Adrian F. Furnham, and Chris J. Jackson, “Benefits of All Work and No Play: The Relationship Between Neuroticism and Performance as a Function of Resource Allocation,” Journal of Applied Psychology 91 (2006): 139–155.

57. Murray R. Barrick, Greg L. Stewart, and Mike Piotrowski, “Personality and Job Performance: Test of the Mediating Effects of Motivation Among Sales Representatives,” Journal of Applied Psychology 87: 43–51; Greg L. Stewart, “Reward Structure as a Moderator of the Relationship Between Extraversion and Sales Performance,” Journal of Applied Psychology 81 (1996): 619–627. Timothy A. Judge, Joyce E. Bono, Remus Ilies, and Megan W. Gerhardt, “Personality and Leadership: A Qualitative and Quantitative Review,” Journal of Applied Psychology 87 (2002): 765–780.

58. Michael K. Mount, Murray R. Barrick, and Greg L. Stewart, “Five-Factor Model of Personality and Job Performance in Jobs Involving Interpersonal Interactions,” Human Performance 11 (1998): 145–165.

59. Ilies et al., “Personality and Citizenship Behavior,” 945–959.

60. Jeffrey A. LePine, Jason A. Colquitt, and Amir Erez, “Adaptability to Changing Task Contexts: Effects of General Cognitive Ability, Conscientiousness, and Openness to Experience,” Personnel Psychology 53 (2000): 563–593; Jennifer M. George and Jing Zhou, “When Openness to Experience and Conscientiousness Are Related to Creative Behavior: An Interactional Approach,” Journal of Applied Psychology 86 (2001): 513–524.

61. Margaret A. Shaffer, David A. Harrison, Hal Gregersen, Stewart J. Black, and Lori A. Ferzandi, “You Can Take It with You: Individual Differences and Expatriate Effectiveness,” Journal of Applied Psychology 91 (2006): 109–125.

62. Hao Zhao and Scott E. Seibert, “The Big Five Personality Dimensions and Entrepreneurial Status: A Meta-Analytical Review,” Journal of Applied Psychology 91 (2006): 259–271.

63. Murray R. Barrick, Greg L. Stewart, Mitchell J. Neubert, and Michael K. Mount, “Relating Member Ability and Personality to Work-Team Processes and Team Effectiveness,” Journal of Applied Psychology 83 (1998): 377–391; George A. Neuman and Julie Wright, “Team Effectiveness: Beyond Skills and Cognitive Ability,” Journal of Applied Psychology 84 (1999): 376–389; Jeffrey A. LePine, John R. Hollenbeck, Daniel R. Ilgen, and Jennifer Hedlund, “Effects of Individual Differences on the Performance of Hierarchical Decision-Making Teams: Much More Than g,” Journal of Applied Psychology 82 (1997): 803–811.

64. Daniel P. O'Meara, “Personality Tests Raise Questions of Legality and Effectiveness,” HRMagazine 39 (1994): 97–100.

65. Hanna J. Foldes, Emily E. Duehr, and Deniz S. Ones, “Group Differences in Personality: A Meta-Analysis Comparing Five U.S. Racial Groups,” Personnel Psychology 61 (2008): 579–616; Ann Marie Ryan, Robert E. Ployhart, and Lisa A. Friedel, “Using Personality Testing to Reduce Adverse Impact: A Cautionary Note,” Journal of Applied Psychology 83 (1998): 298–307.

66. Jill E. Ellingson, Paul R. Sackett, and Leatta M. Hough, “Social Desirability Corrections in Personality Measurement: Issues of Applicant Comparison and Construct Validity,” Journal of Applied Psychology 84 (1999): 155–166; Leatta M. Hough, Newell K. Eaton, Marvin D. Dunnette, John D. Kamp, et al., “Criterion-Related Validities of Personality Constructs and the Effect of Response Distortion on Those Validities,” Journal of Applied Psychology 75 (1990): 581–595; Rose Mueller-Hanson, Eric D. Heggestad, and George C. Thornton III, “Faking and Selection: Considering the Use of Personality from Select-In and Select-Out Perspectives,” Journal of Applied Psychology 88 (2003): 348–355.

67. Shawn Komar, Douglas J. Brown, Jennifer A. Komar, and Chet Robie, “Faking and the Validity of Conscientiousness: A Monte Carlo Investigation,” Journal of Applied Psychology 93 (2008): 140–154.

68. Hough et al., “Criterion-Related Validities,” 581–595; Murray R. Barrick and Michael K. Mount, “Effects of Impression Management and Self-Deception on the Predictive Validity of Personality Constructs,” Journal of Applied Psychology 81 (1996): 261–272.

69. Neal Schmitt and Frederick L. Oswald, “The Impact of Corrections for Faking on the Validity of Noncognitive Measures in Selection Settings,” Journal of Applied Psychology 91 (2006): 613–621.

70. Lynn A. McFarland and Ann Marie Ryan, “Variance in Faking Across Noncognitive Measures,” Journal of Applied Psychology 85 (2000): 812–821; Joseph G. Rosse, Mary D. Stecher, Janice L. Miller, and Robert A. Levin, “The Impact of Response Distortion on Preemployment Personality Testing and Hiring Decisions,” Journal of Applied Psychology 83 (1998): 634–644.

71. Mueller-Hanson, Heggestad, and Thornton, “Faking and Selection,” 348–355.

72. In-Sue Oh, Gang Wang, and Michael K. Mount, “Validity of Observer Ratings of the Five-Factor Model of Personality Traits: A Meta-Analysis,” Journal of Applied Psychology 96 (2011): 762–773; Edwin A.J. van Hooft and Marise Ph. Born, “Intentional Response Distortion of Personality Tests: Using Eye-Tracking to Understand Response Distortion When Faking,” Journal of Applied Psychology 97 (2012): 287–300.

73. L.R. James, M.D. McIntyre, C.A. Glisson, J.L. Bowler, and T.R. Mitchell, “The Conditional Reasoning Measurement System for Aggression: An Overview,” Human Performance 17 (2004): 271–295.

74. James M. LeBreton, Cheryl D. Barksdale, Jennifer Robin, and Lawrence R. James, “Measurement Issues Associated with Conditional Reasoning Tests: Indirect Measurement and Test Faking,” Journal of Applied Psychology 92 (2007): 1–16.

75. Benjamin Schneider, “The People Make the Place,” Personnel Psychology 40 (1987): 437–453.

76. Filip Lievens and Paul R. Sackett, “The Validity of Interpersonal Skills Assessment Via Situational Judgment Tests for Predicting Academic Success and Job Performance,” Journal of Applied Psychology 97 (2012): 460–468.

77. Ronald Bledow and Michael Frese, “A Situational Judgment Test of Personal Initiative and Its Relationship to Performance,” Personnel Psychology 62 (2009): 229–258; Michael A. McDaniel, Nathan S. Hartman, Deborah L. Whetzel, and W. Lee Grubb III, “Situational Judgment Tests, Response Instructions, and Validity: A Meta-Analysis,” Personnel Psychology 60 (2007): 63–91.

78. Filip Lievens, Paul R. Sackett, and Tine Buyse, “The Effects of Response Instructions on Situational Judgment Test Performance and Validity in a High-Stakes Context,” Journal of Applied Psychology 94 (2009): 1095–1101.

79. Stephan J. Motowidlo, Amy C. Hooper, and Hannah L. Jackson, “Implicit Policies about Relations between Personality Traits and Behavioral Effectiveness in Situational Judgment Items,” Journal of Applied Psychology 91 (2006): 749–761; McDaniel et al., “Situational Judgment Tests,” 63–91.

80. Joyce Hogan, “Structure of Physical Performance in Occupational Tasks,” Journal of Applied Psychology 76 (1991): 495–507.

81. Barry R. Blakley, Miguel A. Quinones, Marnie Swerdlin Crawford, and I. Ann Jago, “The Validity of Isometric Strength Tests,” Personnel Psychology 47 (1994): 247–274.

82. Calvin C. Hoffman, “Generalizing Physical Ability Test Validity: A Case Study Using Test Transportability, Validity Generalization, and Construct-Related Validation Evidence,” Personnel Psychology 52 (1999): 1019–1043.

83. Michael Peters, Philip Servos, and Russell Day, “Marked Sex Differences on a Fine Motor Skill Task Disappear When Finger Size Is Used as Covariate,” Journal of Applied Psychology 75 (1990): 87–90; Richard D. Arvey, Timothy E. Landon, Steven M. Nutting, and Scott E. Maxwell, “Development of Physical Ability Tests for Police Officers: A Construct Validation Approach,” Journal of Applied Psychology 77 (1992): 996–1009.

84. Paul R. Sackett, Laura R. Burris, and Christine Callahan, “Integrity Testing for Personnel Selection: An Update,” Personnel Psychology 42 (1989): 491–530.

85. Chad H. Van Iddekinge, Philip L. Roth, Patrick H. Raymark, and Heather N. Odle-Dusseau, “The Criterion-Related Validity of Integrity Tests: An Updated Meta-Analysis,” Journal of Applied Psychology 97 (2012): 499–530; William G. Harris, John W. Jones, Reid Klion, David W. Arnold, Wayne Camara, and Michael R. Cunningham, “Test Publishers' Perspective on ‘An Updated Meta-Analysis’: Comments on Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012),” Journal of Applied Psychology 97 (2012): 531–536; Deniz S. Ones, Chockalingam Viswesvaran, and Frank L. Schmidt, “Integrity Tests Predict Counterproductive Work Behaviors and Job Performance Well: Comment on Van Iddekinge, Roth, Raymark, and Odle-Dusseau,” Journal of Applied Psychology 97 (2012): 537–542.

86. Joyce Hogan and Kimberly Brinkmeyer, “Bridging the Gap Between Overt and Personality-Based Integrity Tests,” Personnel Psychology 50 (1997): 587–600; James E. Wanek, Paul R. Sackett, and Deniz S. Ones, “Towards an Understanding of Integrity Test Similarities and Differences: An Item-Level Analysis of Seven Tests,” Personnel Psychology 56 (2003): 873–894.

87. Ones, Viswesvaran, and Schmidt, “Comprehensive Meta-Analysis of Integrity,” 679–703; Cunningham, Wong, and Barbee, “Self-Presentation Dynamics,” 643–658.

88. Deniz S. Ones and Chockalingam Viswesvaran, “Gender, Age, and Race Differences on Overt Integrity Tests: Results Across Four Large-Scale Job Applicant Datasets,” Journal of Applied Psychology 83 (1998): 35–42; Bernardin and Cooke, “Validity of an Honesty Test,” 1097–1108.

89. Bernd Marcus, Kibeom Lee, and Michael C. Ashton, “Personality Dimensions Explaining Relationships Between Integrity Tests and Counterproductive Behavior: Big Five, or One in Addition?” Personnel Psychology 60 (2007): 1–34.

90. Sackett, Burris, and Callahan, “Integrity Testing for Personnel Selection,” 491–530.

91. Michael R. Frone, “Prevalence of Illicit Drug Use in the Workforce and in the Workplace: Findings and Implications from a U.S. National Survey,” Journal of Applied Psychology 91 (2006): 856–869.

92. Jacques Normand, Stephen D. Salyards, and John J. Mahoney, “An Evaluation of Preemployment Drug Testing,” Journal of Applied Psychology 75 (1990): 629–639.

93. Bennett Tepper, “Investigation of General and Program-Specific Attitudes Toward Corporate Drug-Testing Policies,” Journal of Applied Psychology 79 (1994): 392–401; Paronto et al., “Drug Testing,” 1159–1166; Kevin R. Murphy, George C. Thornton III, and Kristin Prue, “Influence of Job Characteristics on the Acceptability of Employee Drug Testing,” Journal of Applied Psychology 76 (1991): 447–453.

94. Kevin R. Murphy, George C. Thornton III, and Douglas H. Reynolds, “College Students' Attitudes Toward Employee Drug Testing Programs,” Personnel Psychology 43 (1990): 615–631.

95. Hunter and Hunter, “Validity and Utility,” 72–98; Jerry W. Hedge and Mark S. Teachout, “An Interview Approach to Work Sample Criterion Measurement,” Journal of Applied Psychology 77 (1992): 453–461; Winfred Arthur Jr., Gerald V. Barrett, and Dennis Doverspike, “Validation of an Information-Processing-Based Test Battery for the Prediction of Handling Accidents Among Petroleum-Product Transport Drivers,” Journal of Applied Psychology 75 (1990): 621–628.

96. Philip Roth, Philip Bobko, Lynn McFarland, and Maury Buster, “Work Sample Tests in Personnel Selection: A Meta-Analysis of Black–White Differences in Overall and Exercise Scores,” Personnel Psychology 61 (2008): 637–661.

97. Winfred Arthur Jr., Eric Anthony Day, Theresa L. McNelly, and Pamela Edens, “A Meta-Analysis of the Criterion-Related Validity of Assessment Center Dimensions,” Personnel Psychology 56 (2003): 125–154.

98. Ibid.; Kobi Dayan, Ronen Kastan, and Shaul Fox, “Entry-Level Police Candidate Assessment Center: An Efficient Tool or a Hammer to Kill a Fly?” Personnel Psychology 55 (2002): 827–850; Filip Lievens and Fiona Patterson, “The Validity and Incremental Validity of Knowledge Tests, Low-Fidelity Simulations, and High-Fidelity Simulations for Predicting Job Performance in Advanced-Level High-Stakes Selection,” Journal of Applied Psychology 96 (2011): 927–940.

99. Mark C. Bowler and David J. Woehr, “A Meta-Analytic Evaluation of the Impact of Dimension and Exercise Factors on Assessment Center Ratings,” Journal of Applied Psychology 91 (2006): 1114–1124; Neal Schmitt and Jeffrey R. Schneider, “Factors Affecting Validity of a Regionally Administered Assessment Center,” Personnel Psychology 43 (1990): 1–13; Annette C. Spychalski, Miguel A. Quinones, Barbara B. Gaugler, and Katja Pohley, “A Survey of Assessment Center Practices in Organizations in the United States,” Personnel Psychology 50 (1997): 71–90; Deidra J. Schleicher, David V. Day, Bronston T. Mayes, and Ronald E. Riggio, “A New Frame for Frame-of-Reference Training: Enhancing the Construct Validity of Assessment Centers,” Journal of Applied Psychology 87 (2002): 735–746.

100. Neil Anderson, Filip Lievens, Karen van Dam, and Marise Born, “A Construct-Driven Investigation of Gender Differences in a Leadership-Role Assessment Center,” Journal of Applied Psychology 91 (2006): 555–566; Michelle A. Dean, Philip L. Roth, and Philip Bobko, “Ethnic and Gender Subgroup Differences in Assessment Center Ratings: A Meta-Analysis,” Journal of Applied Psychology 93 (2008): 685–691.

101. Miguel A. Quinones, J. Kevin Ford, and Mark S. Teachout, “The Relationship Between Work Experience and Job Performance: A Conceptual and Meta-Analytic Review,” Personnel Psychology 48 (1995): 887–910; Philip L. Roth, Craig A. BeVier, Fred S. Switzer III, and Jeffrey S. Schippmann, “Meta-Analyzing the Relationship Between Grades and Job Performance,” Journal of Applied Psychology 81 (1996): 548–556.

102. Thomas W. H. Ng and Daniel C. Feldman, “How Broadly Does Education Contribute to Job Performance?” Personnel Psychology 62 (2009): 89–134.

103. Ann Howard, “College Experiences and Managerial Performance,” Journal of Applied Psychology 71 (1986): 530–552.

104. Arlise P. McKinney, Kevin D. Carlson, Ross L. Mecham III, Nicolas C. D'Angelo, and Mary L. Connerley, “Recruiters' Use of GPA in Initial Screening Decisions: Higher GPAs Don't Always Make the Cut,” Personnel Psychology 56 (2003): 823–846; Roth et al., “Meta-Analyzing the Relationship.”

105. Quinones, Ford, and Teachout, “The Relationship Between Work Experience.”

106. Lisa Dragoni, In-Sue Oh, Paul Vankatwyk, and Paul E. Tesluk, “Developing Executive Leaders: The Relative Contribution of Cognitive Ability, Personality, and the Accumulation of Work Experience in Predicting Strategic Thinking Competency,” Personnel Psychology 64 (2011): 829–864; Paul E. Tesluk and Rick R. Jacobs, “Toward an Integrated Model of Work Experience,” Personnel Psychology 51 (1998): 321–356.

107. Quinones, Ford, and Teachout, “The Relationship Between Work Experience.”

108. Philip L. Roth and Philip Bobko, “College Grade Point Average as a Personnel Selection Device: Ethnic Group Differences and Potential Adverse Impact,” Journal of Applied Psychology 85 (2000): 399–406.

109. Fred A. Mael, “A Conceptual Rationale for the Domain and Attributes of Biodata,” Personnel Psychology 44 (1991): 763–792.

110. Barbara K. Brown and Michael A. Campion, “Biodata Phenomenology: Recruiters' Perceptions and Use of Biographical Information in Resume Screening,” Journal of Applied Psychology 79 (1994): 897–908.

111. Michael K. Mount, L. A. Witt, and Murray R. Barrick, “Incremental Validity of Empirically Keyed Biodata Scales over GMA and the Five Factor Personality Constructs,” Personnel Psychology 53 (2000): 299–323; Anthony T. Dalessio and Todd A. Silverhart, “Combining Biodata Test and Interview Information: Predicting Decisions and Performance Criteria,” Personnel Psychology 47 (1994): 303–316.

112. Garnett Stokes Shaffer, Vickie Saunders, and William A. Owens, “Additional Evidence for the Accuracy of Biographical Data: Long-Term Retest and Observer Ratings,” Personnel Psychology 39 (1986): 791–810.

113. Hunter and Hunter, “Validity and Utility,” 72–98; Neal Schmitt, Richard Z. Gooding, Raymond A. Noe, and Michael Kirsch, “Meta-Analyses of Validity Studies Published Between 1964 and 1982 and the Investigation of Study Characteristics,” Personnel Psychology 37 (1984): 407–423; Fred A. Mael and Blake E. Ashforth, “Loyal from Day One: Biodata, Organizational Identification, and Turnover Among Newcomers,” Personnel Psychology 48 (1995): 309–334.

114. Margaret A. McManus and Mary L. Kelly, “Personality Measures and Biodata: Evidence Regarding Their Incremental Predictive Value in the Life Insurance Industry,” Personnel Psychology 52 (1999): 137–148; Andrew J. Vinchur, Jeffery S. Schippmann, Fred S. Switzer III, and Philip L. Roth, “A Meta-Analytic Review of Predictors of Job Performance for Salespeople,” Journal of Applied Psychology 83 (1998): 586–597.

115. Hannah R. Rothstein, Frank L. Schmidt, Frank W. Erwin, William Owens, et al., “Biographical Data in Employment Selection: Can Validities Be Made Generalizable?” Journal of Applied Psychology 75 (1990): 175–184; Kevin D. Carlson, Steven E. Scullen, Frank L. Schmidt, Hannah Rothstein, and Frank Erwin, “Generalizable Biographical Data Validity Can Be Achieved Without Multi-Organizational Development and Keying,” Personnel Psychology 52 (1999): 731–756.

116. Julia Levashina, Frederick P. Morgeson, and Michael A. Campion, “Tell Me More: Exploring How Verbal Ability and Item Verifiability Influence Responses to Biodata Questions in a High Stakes Selection Context,” Personnel Psychology 65 (2012): 359–383; Thomas E. Becker and Alan L. Colquitt, “Potential Versus Actual Faking of a Biodata Form: An Analysis Along Several Dimensions of Item Type,” Personnel Psychology 45 (1992): 389–406.

117. Hunter and Hunter, “Validity and Utility,” 72–98.

118. Ann Marie Ryan and Marja Lasek, “Negligent Hiring and Defamation: Areas of Liability Related to Pre-Employment Inquiries,” Personnel Psychology 44 (1991): 293–319.

119. Richard A. Posthuma, Frederick P. Morgeson, and Michael A. Campion, “Beyond Employment Interview Validity: A Comprehensive Narrative Review of Recent Research and Trends over Time,” Personnel Psychology 55 (2002): 1–82; Allen I. Huffcutt, James M. Conway, Philip L. Roth, and Nancy J. Stone, “Identification and Meta-Analytic Assessment of Psychological Constructs Measured in Employment Interviews,” Journal of Applied Psychology 86 (2001): 897–913.

120. David F. Caldwell and Jerry M. Burger, “Personality Characteristics of Job Applicants and Success in Screening Interviews,” Personnel Psychology 51 (1998): 119–136.

121. Brian W. Swider, Murray R. Barrick, Brad T. Harris, and Adam C. Stoverink, “Managing and Creating an Image in the Interview: The Role of Interviewee Initial Impressions,” Journal of Applied Psychology 96 (2011): 1275–1288; Murray R. Barrick, Brian W. Swider, and Greg L. Stewart, “Initial Evaluations in the Interview: Relationships with Subsequent Interviewer Evaluations and Employment Offers,” Journal of Applied Psychology 95 (2010): 1163–1172.

122. Todd J. Maurer and Jerry M. Solamon, “The Science and Practice of a Structured Employment Interview Coaching Program,” Personnel Psychology 59 (2006): 433–456; Todd Maurer, Jerry Solamon, and Deborah Troxtel, “Relationship of Coaching with Performance in Situational Employment Interviews,” Journal of Applied Psychology 83 (1998): 128–136; Posthuma, Morgeson, and Campion, “Beyond Employment Interview Validity.”

123. Murray R. Barrick, Jonathan A. Shaffer, and Sandra W. DeGrassi, “What You See May Not Be What You Get: Relationships Among Self-Presentation Tactics and Ratings of Interview and Job Performance,” Journal of Applied Psychology 94 (2009): 1394–1411.

124. James M. Conway, Robert A. Jako, and Deborah F. Goodman, “A Meta-Analysis of Interrater and Internal Consistency Reliability of Selection Interviews,” Journal of Applied Psychology 80 (1995): 565–579.

125. Michael A. McDaniel, Deborah L. Whetzel, Frank L. Schmidt, and Steven D. Maurer, “The Validity of Employment Interviews: A Comprehensive Review and Meta-Analysis,” Journal of Applied Psychology 79 (1994): 599–616; Allen I. Huffcutt and Winfred Arthur Jr., “Hunter and Hunter (1984) Revisited: Interview Validity for Entry-Level Jobs,” Journal of Applied Psychology 79 (1994): 184–190.

126. Jose M. Cortina, Nancy B. Goldstein, Stephanie C. Payne, H. Kristl Davison, and Stephen W. Gilliland, “The Incremental Validity of Interview Scores over and above Cognitive Ability and Conscientiousness Scores,” Personnel Psychology 53 (2000): 325–352; Michael A. Campion, James E. Campion, and J. Peter Hudson, “Structured Interviewing: A Note on Incremental Validity and Alternative Question Types,” Journal of Applied Psychology 79 (1994): 998–1002.

127. Sara L. Rynes and Barry Gerhart, “Interviewer Assessments of Applicant ‘Fit’: An Exploratory Investigation,” Personnel Psychology 43 (1990): 13–36.

128. Sharon L. Segrest Purkiss, Pamela L. Perrewe, Treena L. Gillespie, Bronston T. Mayes, and Gerald R. Ferris, “Implicit Sources of Bias in Employment Interview Judgments and Decisions,” Organizational Behavior and Human Decision Processes 101 (2006): 152–167.

129. M. Ronald Buckley, Katherine A. Jackson, Mark C. Bolino, John G. Veres, III, and Hubert S. Feild, “The Influence of Relational Demography on Panel Interview Ratings: A Field Experiment,” Personnel Psychology 60 (2007): 627–646; Joshua M. Sacco, Christine R. Scheu, Ann Marie Ryan, and Neal Schmitt, “An Investigation of Race and Sex Similarity Effects in Interviews: A Multilevel Approach to Relational Demography,” Journal of Applied Psychology 88 (2003): 852–865; Philip L. Roth, Chad H. Van Iddekinge, Allen I. Huffcutt, Carl E. Eidson, Jr., and Philip Bobko, “Corrections for Range Restriction in Structured Interview Ethnic Group Differences: The Values May Be Larger Than Researchers Thought,” Journal of Applied Psychology 87 (2002): 369–376; Allen I. Huffcutt and Philip L. Roth, “Racial Group Differences in Employment Interview Evaluations,” Journal of Applied Psychology 83 (1998): 179–189.

130. Klaus G. Melchers, Nadja Lienhardt, Miriam Von Aarburg, and Martin Kleinmann, “Is More Structure Really Better? A Comparison of Frame-of-Reference Training and Descriptively Anchored Rating Scales to Improve Interviewers' Rating Quality,” Personnel Psychology 64 (2011): 53–87; Karen I. van der Zee, Arnold B. Bakker, and Paulien Bakker, “Why Are Structured Interviews So Rarely Used in Personnel Selection?” Journal of Applied Psychology 87 (2002): 176–184.

131. Frank L. Schmidt and Ryan D. Zimmerman, “A Counterintuitive Hypothesis about Employment Interview Validity and Some Supporting Evidence,” Journal of Applied Psychology 89 (2004): 553–561.

132. Barrick, Shaffer, and DeGrassi, “What You See May Not Be What You Get,” 1394–1411.

133. Steven D. Maurer, “A Practitioner-Based Analysis of Interviewer Job Expertise and Scale Format as Contextual Factors in Situational Interviews,” Personnel Psychology 55 (2002): 307–328; Allen I. Huffcutt, Jeff Weekley, Willi H. Wiesner, Timothy G. DeGroot, and Casey Jones, “Comparison of Situational and Behavior Description Interview Questions for Higher-Level Positions,” Personnel Psychology 54 (2001): 619–644; Elaine D. Pulakos and Neal Schmitt, “Experience-Based and Situational Interview Questions: Studies of Validity,” Personnel Psychology 48 (1995): 289–308.

134. Yoav Ganzach, Avraham N. Kluger, and Nimrod Klayman, “Making Decisions from an Interview: Expert Measurement and Mechanical Combination,” Personnel Psychology 53 (2000): 1–21.

135. Gatewood and Feild, Human Resource Selection.

136. David M. Finch, Bryan D. Edwards, and Craig J. Wallace, “Multistage Selection Strategies: Simulating the Effects on Adverse Impact and Expected Performance for Various Predictor Combinations,” Journal of Applied Psychology 94 (2009): 318–340.

137. Michael A. Campion, James L. Outtz, Sheldon Zedeck, Frank L. Schmidt, Jerard F. Kehoe, Kevin R. Murphy, and Robert M. Guion, “The Controversy over Score Banding in Personnel Selection: Answers to 10 Key Questions,” Personnel Psychology 54 (2001): 149–185.

138. Ibid.
