by Tomas Chamorro-Premuzic, Frida Polli, and Ben Dattner
Artificial intelligence has disrupted every area of our lives—from the curated shopping experiences we’ve come to expect from companies like Amazon and Alibaba to the personalized recommendations that channels like YouTube and Netflix use to market their latest content. But when it comes to the workplace, in many ways, AI is still in its infancy. This is particularly true when we consider the ways it is beginning to change talent management. To use a familiar analogy: AI at work is in its dial-up phase. The 5G Wi-Fi phase has yet to arrive, but we have no doubt that it will.
To be sure, there is much confusion around what AI can and cannot do, as well as different perspectives on how to define it. In the war for talent, however, AI plays a very specific role: to give organizations more accurate and more efficient predictions of a candidate’s work-related behaviors and performance potential. Unlike traditional recruitment methods, such as employee referrals, CV screening, and face-to-face interviews, AI is able to find patterns unseen by the human eye.
Many AI systems use real people as models for what success looks like in certain roles. This group of individuals is referred to as a training data set and often includes managers or staff who have been defined as high performers. AI systems process and compare the profiles of various job applicants to the “model” employee it has created based on the training set. Then, it gives the company a probabilistic estimate of how closely a candidate’s attributes match those of the ideal employee.
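The matching process described above can be sketched in a few lines of Python. Everything here is illustrative: the hand-made feature vectors, the averaging of high performers into a single "model employee" profile, and the cosine-similarity score are simplifying assumptions for the sketch, not a description of any real vendor's system.

```python
import math

def model_profile(high_performers):
    """Average the feature vectors of known high performers into a
    single 'model employee' profile (the training set's ideal)."""
    n = len(high_performers)
    dims = len(high_performers[0])
    return [sum(p[i] for p in high_performers) / n for i in range(dims)]

def match_score(candidate, profile):
    """Cosine similarity between a candidate and the model profile,
    rescaled to 0..1 so it reads like a probability-style estimate."""
    dot = sum(c * p for c, p in zip(candidate, profile))
    norm = (math.sqrt(sum(c * c for c in candidate))
            * math.sqrt(sum(p * p for p in profile)))
    return 0.0 if norm == 0 else (dot / norm + 1) / 2

# Hypothetical feature vectors (e.g., scored assessment dimensions);
# a real system would use far richer, audited data.
high_performers = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]]
profile = model_profile(high_performers)
candidate_score = match_score([0.85, 0.85, 0.65], profile)
```

Note that the sketch also makes the article's central risk concrete: if the `high_performers` list is demographically homogeneous, the "ideal" profile simply encodes that homogeneity, and every candidate is scored against it.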
Theoretically, this method could be used to find the right person for the right role faster and more efficiently than ever before. But, as you may have realized, it has become a source of both promise and peril. If the training set is diverse, if demographically unbiased data is used to measure the people in it, and if the algorithms are also debiased, this technique can actually mitigate human prejudice and expand diversity and socioeconomic inclusion better than humans ever could. However, if the training set, the data, or both are biased, and algorithms are not sufficiently audited, AI will only exacerbate the problem of bias in hiring and homogeneity in organizations.
In order to rapidly improve talent management and take full advantage of the power and potential AI offers, then, we need to shift our focus from developing more ethical HR systems to developing more ethical AI. Of course, removing bias from AI is not easy. In fact, it is very hard. But we believe it is far more feasible than removing bias from humans themselves.
When it comes to identifying talent or potential, most organizations still play it by ear. Recruiters spend just a few seconds looking at a résumé before deciding who to “weed out.” Hiring managers make quick judgments and call them “intuition” or overlook hard data and hire based on cultural fit—a problem made worse by the general absence of objective and rigorous performance measures. Further, the unconscious bias training implemented by a growing number of companies has often been found to be ineffective and, at times, can even make things worse. Often, training focuses too much on individual bias and too little on the structural biases narrowing the pipeline of underrepresented groups.
Though critics argue that AI is not much better, they often forget that these systems are mirroring our own behavior. We are quick to blame AI for predicting that white men will receive higher performance ratings from their (probably also white male) managers. But this is happening because we have failed to fix bias in the performance ratings that are often used in training data sets. We are shocked that AI can make biased hiring decisions but are fine living in a world where human biases dominate hiring. Just take a look at Amazon. The outcry of criticism about their biased recruiting algorithm ignored the overwhelming evidence that current human-driven hiring in most organizations is demonstrably worse. It’s akin to expressing more concern over a very small number of driverless car deaths than the 1.2 million traffic deaths a year caused by flawed and possibly also distracted or intoxicated humans.
Realistically, we have a greater ability to ensure both accuracy and fairness in AI systems than we do to influence or enlighten recruiters and hiring managers. Humans are very good at learning but very bad at unlearning. The cognitive mechanisms that make us biased are often the same tools we use to survive in our day-to-day lives. The world is far too complex for us to process logically and deliberately all the time; if we did, we would be overwhelmed by information overload and unable to make simple decisions, such as buying a cup of coffee (after all, why should we trust the barista if we don’t know him?). That’s why it’s easier to ensure that our data and training sets are unbiased than it is to change the behaviors of Sam or Sally, from whom we can neither remove bias nor extract a printout of the variables that influence their decisions. Essentially, it is easier to unpack AI algorithms than to understand and change the human mind.
To do this, organizations using AI for talent management, at any stage, should start by taking the following steps.
However, this assumed trade-off between predictive accuracy and fairness is based on techniques from half a century ago, developed prior to the advent of AI models that can treat data very differently from traditional models. There is increasing evidence that AI could overcome this trade-off by deploying more dynamic and personalized scoring algorithms that are sensitive to both accuracy and fairness, optimizing for a mix of the two. Therefore, AI developers have no excuse for not doing so. Further, because these new systems now exist, we should question whether the widespread use of traditional cognitive assessments, which are known to have an adverse impact on minorities, should continue without some form of bias mitigation.
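What "optimizing for a mix of both" could look like in practice, as a hedged illustration only: pick a decision threshold that maximizes accuracy minus a penalty on the selection-rate gap between two demographic groups. The toy scores, the `lam` weight, and the choice of selection-rate parity as the fairness metric are all assumptions made for this sketch; a production system would use audited data and a formally justified fairness criterion.

```python
def selection_rate(scores, threshold):
    """Fraction of a group's candidates selected at this threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def accuracy(scores, labels, threshold):
    """Fraction of candidates whose select/reject decision matches
    their (hypothetical) true high-performer label."""
    return sum((s >= threshold) == bool(y)
               for s, y in zip(scores, labels)) / len(scores)

def pick_threshold(group_a, group_b, labels_a, labels_b, lam=0.5):
    """Return the threshold in [0, 1] maximizing
    accuracy - lam * |selection-rate gap between the two groups|."""
    scores = group_a + group_b
    labels = labels_a + labels_b
    best_obj, best_t = float("-inf"), 0.0
    for i in range(101):
        t = i / 100
        gap = abs(selection_rate(group_a, t) - selection_rate(group_b, t))
        obj = accuracy(scores, labels, t) - lam * gap
        if obj > best_obj:
            best_obj, best_t = obj, t
    return best_t
```

Setting `lam` to zero recovers the accuracy-only threshold; raising it pushes the system toward equal selection rates, making the accuracy–fairness mix an explicit, tunable choice rather than an accident of the data.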
If organizations address these issues, we believe that ethical AI could vastly improve organizations not only by reducing bias in hiring but also by enhancing meritocracy and making the association between talent, effort, and employee success far greater than it has been in the past. Further, it will be good for the global economy. Once we mitigate bias, our candidate pools will grow beyond employee referrals and Ivy League graduates. People from a wider range of socioeconomic backgrounds will have more access to better jobs—which can help create balance and begin to remedy class divides.
To make the above happen, however, businesses need to make the right investments, not just in cutting-edge AI technologies but also (and especially) in human expertise—people who understand how to leverage the advantages that these new technologies offer while minimizing potential risks and drawbacks. In any area of performance, a combination of artificial and human intelligence is likely to produce a better result than one without the other. Ethical AI should be viewed as one of the tools we can use to counter our own biases, not as a panacea.
TAKEAWAYS
AI could give organizations faster, better, and more efficient predictions of a job candidate’s behaviors and performance potential. But if the training set, the data, or both are biased, and algorithms are not sufficiently audited, AI will only exacerbate the problems of bias in hiring and homogeneity in organizations. To ensure both accuracy and fairness in AI systems used for talent management, take the following steps:
Adapted from “Building Ethical AI for Talent Management,” on hbr.org, November 21, 2019 (product #H05AD9).