Chapter 12

Skills, Capability, and Talent Management

For centuries, agriculture remained unchanged—farmers plowed their fields with the help of farm animals and irrigated them using water from nearby rivers. In many emerging countries, such as India, small farmers still use these centuries-old methods, which require limited skills. Innovations in agriculture started in the eighteenth century, with crop rotation, and continued in the twentieth century with mechanization, synthetic fertilizers, pesticides, and high-yield seeds.

Today, farming has become a high-tech operation. GPS in autonomous tractors ensures that they do not cover the same ground twice or miss any spots, thus reducing fuel consumption and improving the utilization of fertilizers. Technology has also led to “precision farming.” John Deere’s equipment can plant individual seeds accurately within an inch, and a sensor on its equipment can measure the amount of nutrients in the soil and adjust the amount of fertilizer in real time. The University of Sydney has developed a solar-powered device that can identify individual weeds and zap them.1

An article in The Economist compared farming to matrix algebra: “A farmer must constantly juggle a set of variables, such as the weather, his soil’s moisture levels and nutrient content, competition to his crop from weeds, threats to their health from pests and diseases, and the cost of taking action to deal with these things.”2 Using past and real-time data on these variables from sensors and other sources, companies like John Deere are developing software algorithms to help farmers optimize yield and maximize profits.

These developments in software and data analytics are not limited to any one industry. From farming to financial services, companies are hiring hundreds or even thousands of software engineers with computer-science and data-analytic skills. In 2017, over nine thousand employees, or about one-quarter of Goldman Sachs’s staff, had engineering backgrounds, and 37 percent of its 2016 analyst class had STEM (science, technology, engineering, or math) degrees.3 Data is the new oil, and companies need people with the skills to refine and extract value from this important resource.

Data Analytics, Machine Learning, and Artificial Intelligence

According to an IBM report, we create 2.5 quintillion bytes of data every day. Put differently, 90 percent of the data in the world today has been created in the last two years.4 This data comes from consumer activities such as web browsing, social media posts, and mobile usage—and increasingly from sensors built into machines. Making sense of this vast amount of data has therefore become the key challenge for companies, including every company discussed at length in this book—Adobe, Amazon, GE, Goldman Sachs, Mastercard, the New York Times, and The Weather Company.

One powerful approach for leveraging massive amounts of data is through machine learning and artificial intelligence (AI). Today, AI is the force behind automation, and it creates fear and excitement at the same time. While some people, like Elon Musk, worry about the possibility of AI running amok, others feel that this technology will revolutionize every industry in the future. No matter which view you hold, there is no doubt that AI will have a dramatic impact on the future of work and the skills needed to thrive in the digital era. To understand the potential impact of AI on jobs, it is perhaps useful to understand what it is and how it might enable machines to perform many tasks that are currently done by people.

Origins of Artificial Intelligence

The idea of artificial intelligence dates back to a workshop held at Dartmouth College in the summer of 1956. The early attempts focused on creating rule-based “expert systems,” in which a machine learned rules (e.g., how to play chess) from a human expert and then used its computational power to sift through millions of combinations to arrive at the best decision. In 1996, IBM used this approach to develop its chess supercomputer, Deep Blue, which played against reigning world champion Garry Kasparov. Deep Blue searched six, eight, sometimes even twenty moves ahead, processing two hundred million positions per second. This brute-force computational power allowed Deep Blue to win the first game against Kasparov, but Kasparov won three of the remaining games and drew the other two.

Expert systems leveraged the computing power of machines, but the results, however impressive, were hardly “intelligent.” Most practical situations, such as language translation, don’t follow neat, well-defined rules the way chess does. As a result, interest in AI remained dormant for decades. It resurfaced with the availability of large amounts of data and new techniques for detecting patterns in them.

Big Data and Pattern Recognition

The term big data crept into the business lexicon to represent the large amounts of complex data available today. Is big data a big deal or big hype? After all, data has always formed the backbone of decision-making in business. What does a large amount of data allow us to do that we could not do before? In 2008, researchers at Google provided a glimpse of the power of big data. In a 2015 article, I described this research as follows:

In November 2008, researchers at Google published an article in the journal Nature about Google Flu Trends (GFT)—a model that used hundreds of billions of US consumer searches on Google about influenza during the years 2003–2008 to predict the incidence of the flu. Google scientists claimed that no prior knowledge of influenza was used in building the model. Instead, they analyzed more than 50 million of the most commonly used search queries, and automatically selected the best-fitting search terms by estimating 450 million different models on candidate queries. The final model used a mere 45 search terms to predict rate of flu in several regions of the [United States]. The results of this model were compared with the actual incidence of influenza as reported by the Centers for Disease Control (CDC). The paper reported an incredible accuracy rate with correlations between actual and predicted influenza rates between 0.90 and 0.97.5

Suddenly it seemed that you didn’t need domain expertise to predict the incidence of flu. Although the Google research came under some criticism later, it opened up the possibility of data and algorithms replacing experts.6

Rebirth of Artificial Intelligence

Most studies based on big data, including Google’s flu research, use numerical data such as the number of queries of a search term. This is the kind of data that we are used to dealing with in business through an Excel spreadsheet or an econometric model. However, data now comes in many other forms—texts, images, videos—that don’t fit neatly into the rows and columns of a spreadsheet. How do you sort through thousands of your digital photos to identify only those pictures that show one of your children? This problem was the focus of the ImageNet challenge, built around a database of millions of labeled images, which invited scholars to build models to correctly classify them. In 2010, machines and software algorithms had an accuracy rate of 72 percent, whereas humans were able to classify images with an average accuracy of 95 percent. In 2012, a team led by Geoff Hinton at the University of Toronto used a novel approach, called “deep learning,” which improved image-recognition accuracy to 85 percent. Today, facial-recognition algorithms based on deep learning have an accuracy rate of over 99 percent.7

Deep learning is based on artificial neural networks (ANNs), which are rooted in how the human brain works. An average human brain has 100 billion neurons, and each neuron is connected to up to 10,000 other neurons that allow it to transmit information quickly. When a neuron receives a signal, it sends an electric impulse that triggers other neurons, which in turn propagate the information to neurons connected to them. The output signal of each neuron depends on a set of “weights” and an “activation function.” Using this analogy and massive amounts of data, researchers train an ANN by adjusting the weights and the activation function to get the desired output.
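The weighted-sum-plus-activation mechanism described in this analogy can be sketched in a few lines of Python. This is an illustrative toy, not any production framework; the inputs, weights, and bias below are made-up values chosen only to show the mechanics.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs,
    passed through a sigmoid activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes output to (0, 1)

# Toy example: two input signals with hand-picked weights.
out = neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(out, 3))
```

“Training” a network means nudging those weights and the bias, over many examples, until the output matches the desired answer.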

ANNs have been around for several decades, but the breakthrough came from a new method called the “convolutional neural network.” Effectively, this approach shows that with a single “layer” of network you can identify only simple patterns, but with multiple layers you can find patterns of patterns. For example, the first layer of a network might distinguish an object from the sky in a photo. The second layer might separate a circular object from a rectangular one. The third layer might identify a circular figure as a face, and so on. It is as if with each successive layer the image comes more and more into focus. Networks with twenty to thirty layers are now commonly used. This level of abstraction, dubbed deep learning, is behind the major improvement in machine learning and AI.
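The “patterns of patterns” idea is, at bottom, function composition: the outputs of one layer of neurons become the inputs of the next. A minimal two-layer sketch follows; the four-pixel “image” and all weights are invented for illustration and carry no real meaning.

```python
import math

def layer(inputs, weight_rows, biases):
    """Apply one layer of neurons: each row of weights defines one neuron,
    and each neuron emits a sigmoid of its weighted input sum."""
    return [1.0 / (1.0 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
            for row, b in zip(weight_rows, biases)]

pixels = [0.0, 1.0, 1.0, 0.0]  # a toy 4-pixel "image"
# Layer 1: two neurons looking for simple patterns in the raw pixels.
hidden = layer(pixels, [[1, -1, 1, -1], [0.5, 0.5, -0.5, -0.5]], [0.0, 0.1])
# Layer 2: one neuron combining those patterns into a pattern of patterns.
output = layer(hidden, [[2.0, -1.5]], [0.2])
print(len(hidden), len(output))
```

Stacking twenty or thirty such layers, with learned rather than hand-picked weights, is what the term deep learning refers to.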

Training Machines to Learn

Instead of the rule-based or top-down approach used in the past, machine learning relies on a bottom-up, data-based approach to recognize patterns. Young children don’t learn languages by memorizing the rules of grammar but by simply immersing themselves in the language. Pattern recognition works best when there is a large amount of data, which is where the power of big data becomes apparent. There are several approaches for training machines to do tasks performed by humans today.

The first approach, called “supervised learning,” is now used by Google to identify spam emails or translate a web page into over one hundred different languages. None of these activities involve the manual processing of data. How are machines able to do this automatically? How are computer scientists at Google able to translate one hundred different languages without any knowledge of them? Supervised learning involves using large amounts of “training” data, say emails, which are first classified or labeled by humans as spam or not spam. These emails are then fed to a machine to see if it can correctly identify spam emails. No rules are specified for identifying spam emails. Instead, the machine automatically learns which phrases or sentences to focus on based on the accuracy of its prediction. This trial-and-error process improves as more data becomes available.
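The learn-from-labeled-examples loop can be made concrete with a deliberately simplified word-counting classifier. The four training emails and their labels below are invented; a real spam filter learns from billions of examples and far richer signals, but the principle is the same: no rules are written down, only labeled data.

```python
from collections import Counter

# Hypothetical labeled training data: (email text, is_spam).
training = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("meeting agenda for monday", False),
    ("lunch with the team", False),
]

spam_words, ham_words = Counter(), Counter()
for text, is_spam in training:
    (spam_words if is_spam else ham_words).update(text.split())

def classify(text):
    """Label an email spam if its words appeared more often in the
    spam training examples than in the non-spam ones."""
    spam_score = sum(spam_words[w] for w in text.split())
    ham_score = sum(ham_words[w] for w in text.split())
    return spam_score > ham_score

print(classify("free prize money"))        # learned from data, not from rules
print(classify("team meeting on monday"))
```

Adding more labeled emails sharpens the word counts, which is the toy version of “the trial-and-error process improves as more data becomes available.”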

Similarly, Google Translate would convert a sentence or a page from, say, English to Spanish and back and then measure its own accuracy, without any knowledge of either language. As more data becomes available, the machine learns and becomes more accurate. Today, Google Translate is so accurate that it is hard to distinguish its translations from those done by a linguist. Google can now automatically translate conversations in a different language in real time. Suddenly, language skills have become less critical. And this is not limited to languages. Machines can read X-ray or MRI images as well as or better than radiologists who have years of training and experience. Any repetitive task for which large amounts of data are available for training machines can and will be automated.

While supervised learning requires us to classify or label data to train machines, “unsupervised learning” involves no such guidance. This approach effectively asks the machine to go and look for interesting patterns in the data, without telling the machine what to look for. This can be very useful in identifying cyberattacks, terrorist threats, or credit card fraud. But unsupervised learning is extremely difficult, and scholars struggled for a long time to find a way to make it work. The breakthrough came in 2011, when Quoc Le, a doctoral student of the leading AI scholar Andrew Ng, developed an approach that successfully detected cats from millions of unlabeled YouTube videos.8 The findings, in what is fondly known as “the cat paper,” were published in 2012, and they opened up the area of unsupervised learning, which is currently considered the frontier of machine learning.
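As a contrast to the labeled-data approach, a minimal clustering sketch shows the flavor of unsupervised learning: the algorithm is handed points with no labels and finds the groupings on its own. This toy k-means on invented 2-D points stands in for the far more sophisticated methods used in practice; the starting centers are passed in explicitly to keep the example deterministic.

```python
def kmeans(points, centers, iterations=20):
    """Minimal k-means: group unlabeled 2-D points around the given
    starting centers, refining the centers on each pass."""
    for _ in range(iterations):
        k = len(centers)
        clusters = [[] for _ in range(k)]
        for x, y in points:
            # Assign each point to its nearest current center.
            nearest = min(range(k),
                          key=lambda i: (x - centers[i][0]) ** 2
                                        + (y - centers[i][1]) ** 2)
            clusters[nearest].append((x, y))
        # Move each center to the mean of the points assigned to it.
        centers = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
                   for c in clusters if c]
    return sorted(centers)

# Two obvious blobs; the algorithm is never told which point belongs where.
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.1, 5.0), (4.9, 5.2), (5.0, 4.8)]
print(kmeans(data, centers=[data[0], data[1]]))
```

Even though both starting centers sit in the same blob, the refinement steps pull them apart until each one settles on a genuine cluster.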

In between supervised and unsupervised learning is “reinforcement learning.” Here the machine starts with unsupervised learning, and when it finds an interesting pattern, a researcher sends positive reinforcement to the machine to direct its search. Another approach is “transfer learning,” in which experts (e.g., in cybersecurity) transfer their knowledge to the machine instead of letting the machine start from scratch. Most of the current AI approaches are domain specific—for example, language or medicine. The next frontier of this research is to develop “generalized AI,” which would be capable of synthesizing and finding patterns from multiple domains, just like the human brain does.9
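The reward-signal idea behind reinforcement learning can be illustrated with the classic multi-armed-bandit toy: an agent repeatedly chooses among options with hidden payout rates and learns, purely from the rewards it receives, which option is best. This is a generic textbook sketch, not an example from the chapter; the payout rates are invented.

```python
import random

def bandit(pull_probs, steps=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit: mostly exploit the best-known arm,
    occasionally explore, and update estimates from rewards alone."""
    random.seed(seed)
    counts = [0] * len(pull_probs)
    values = [0.0] * len(pull_probs)  # running average reward per arm
    for _ in range(steps):
        if random.random() < epsilon:                       # explore
            arm = random.randrange(len(pull_probs))
        else:                                               # exploit
            arm = max(range(len(pull_probs)), key=values.__getitem__)
        reward = 1.0 if random.random() < pull_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values

# Three arms with hidden payout rates; the agent is never told them.
estimates = bandit([0.2, 0.5, 0.8])
print(max(range(3), key=estimates.__getitem__))
```

The positive reinforcement here is simply the reward signal steering the agent’s future choices, which is the same principle a researcher applies when reinforcing an interesting pattern.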

Impact of Automation on Jobs

The ability to collect data and train machines to analyze and learn is transforming every industry, and it is going to have a major impact on jobs. In a highly cited 2013 study, Oxford University researchers examined 702 typical occupations and found that 47 percent of workers in the United States had jobs that were at risk of automation. The probability of automation varied by profession: 99 percent for telemarketers, 94 percent for accountants and auditors, 92 percent for those in retail sales, but only 0.3 percent for recreational therapists and 0.4 percent for dentists.10 A 2017 study by McKinsey reported that while less than 5 percent of jobs have the potential for full automation, almost 30 percent of tasks in 60 percent of occupations could be computerized.11 Even highly skilled and well-educated lawyers and radiologists are under threat from automation. Automation in the 1960s and 1970s replaced blue-collar jobs in factories, but the type of automation driven by AI is likely to replace many white-collar jobs. Effectively, jobs that are routine, repetitive, and predictable can be done by machines better, faster, and cheaper. The distinction is no longer between manual and cognitive skills, or blue-collar and white-collar work, but whether a job has large elements of repetition.

Automation creates anxiety among people and politicians because of the prospect of mass unemployment. It evokes scenarios where we won’t have jobs and simply sit around watching Netflix all day. However, history shows that many of these fears are overblown. In the 1930s, John Maynard Keynes, the famous economist, predicted that technology would allow his grandkids to have a fifteen-hour work week—far from what we experience today.12 ATMs were expected to end the careers of bank tellers, and while the number of bank tellers fell from twenty per branch in 1988 to thirteen in 2004, the number of bank branches increased by 43 percent over the same period. ATMs did not destroy jobs. They shifted the role of bank teller from one of simply dispensing cash to one of customer service.13 The same type of shift is likely to happen with AI and the current wave of automation. Routine and repetitive parts of jobs will be automated, and people will need to retrain themselves for the nonrepetitive aspects of the job. While some jobs will be eliminated—as happened in the past, for instance, with elevator operators—new jobs will get created that require new skills.

Talent Management in the Digital Age

Not only are jobs changing, but the process by which firms recruit, develop, and manage talent is also undergoing dramatic change.

Recruiting

Technology is forcing firms to rethink whom they hire and how they hire them. Goldman Sachs has automated many parts of its IPO process and replaced the majority of its traders with software engineers who write algorithms. GE Digital has over thirty thousand people with skills in software, cloud computing, and the internet of things. After becoming CEO of Gap, Art Peck decided that instead of relying on the instincts and vision of his creative directors, the company should mine data from Google Analytics and its own sales and customer databases to come up with new designs. This increased focus on data is universal across companies and is leading them to hire people with skills in data analytics. Degrees in computer science and the ability to code are in great demand. The marketing function is also shifting, with greater emphasis on digital-marketing skills.

How firms hire is also undergoing radical change. The traditional approach of interviewing is subjective and can introduce bias. Additionally, it is time consuming and limits the number of candidates a company can screen. Some companies are even beginning to question the reliability of traditional data points, such as college degrees and academic records, for spotting the right talent. Catalyst DevWorks, a software-development company, evaluated hundreds of thousands of IT professionals and found no significant relationship between a college degree and success on the job.14

In 2003, Michael Lewis wrote a provocative book, Moneyball, which described how the Oakland Athletics used an analytics- and evidence-based approach, instead of the judgment of sports scouts, to assemble a powerful baseball team. Can firms use this approach for hiring? Guy Halfteck, founder and CEO of Knack, a startup based in Silicon Valley, believes that by using mobile games and analytics his company can do as well as or better than companies that use the traditional interview process when recruiting consultants, financial analysts, surgeons, or people with just about any skill.

How can a ten- or twenty-minute mobile game identify the right candidates? When I posed this question to Halfteck, he described how Knack works:

Knack’s games create an incredibly immersive and engaging digital experience. Playing a game involves up to 2,500 microbehaviors per game, or about 250 microbehaviors per minute. These include active and passive decisions, actions, reactions, learning, exploration, and more. Knack scores are computed from the patterns of how an individual plays the game, rather than how well they score on the game. From the raw data, our automated analysis distills within-game markers of different behaviors (e.g., how quickly the player processes information, how efficiently they attend to and use social cues like facial expressions of emotion, how they handle challenges, how they learn, how they adapt and change their behavior and thinking, and much more). These behavioral markers are articulated by a combination of machine learning and state-of-the-art behavioral science.

We then combine the behavioral markers to build and validate predictive models of psychological attributes, such as social intelligence, quantitative thinking, resilience, planning, and more. Each Knack score comes from one of these models. Having developed models (“Knacks”) for 35 Human Model behavioral attributes, we predict real-world behavior, such as job performance, leadership impact, ideation, learning success, and more.15

Recognizing that identifying talent through mobile games is likely to elicit skepticism, Halfteck and his team embarked on a series of studies with universities, scholars, and companies to validate their approach. In a 2017 study, researchers at NYU Langone Medical Center asked 120 current and former orthopedic trainees to play Wasabi Waiter, one of Knack’s games, and found that the data from the game was able to predict participants’ performance on the Orthopedic In-Training Exam during their residency.16

Rino Piazzolla, head of human resources at AXA Group, one of the largest insurance companies in the world, with over $100 billion in revenue and 165,000 employees, is among the ardent fans of Knack. He described how he became a huge supporter and user of Knack:

We found out about Knack when we started looking at several startups to understand what innovations were happening in HR. We decided to test Knack to recruit for our call center, where we hire a large number of people. Then we followed the people we hired and did some back testing. We now have statistical evidence that by using Knack we are hiring people who are better fit for the job.17

This data-analytic and gamification approach to recruiting has won Knack many clients, including BCG, Citigroup, Deutsche Telekom, Nestlé, IBM, Daimler, and Tata Hotel and Resorts. In a recent talent-acquisition effort, BCG sent a link to Knack’s mobile game to a large number of universities, many of which BCG had not visited in the past because of the time constraints of its own executives. Within a few weeks it obtained thousands of applicants. Without seeing applicants’ résumés or backgrounds, Knack used its algorithms to analyze the game data and identify the top 5 percent of candidates who would be a good fit for BCG.

Knack and its clients are not the only companies using this approach. Startups such as Gild, Entelo, Textio, Doxa, and GapJumpers as well as established firms such as Unilever, Goldman Sachs, and Walmart are also experimenting with the data-driven approach to spotting talent and broadening their pool of candidates. Typically, algorithms screen candidates in the early stages and face-to-face interviews happen only in the final phase. Unilever found that this approach was faster, more accurate, and less costly and that it increased the reach of the company to a pool of candidates that it had never interviewed before.18

Advances in analytics and AI have significantly improved the power and accuracy of “people analytics,” the Moneyball approach described by Michael Lewis. This is even more important in the “gig economy,” where many freelance workers are available for short periods of time for specific tasks and it would be too costly for a company to spend enormous resources in selecting these part-time employees.

Training and Development

Almost every company has online training courses and tools to help employees update their skills. Using gamification that appeals to millennials, Appical, a Dutch startup, is helping companies onboard their young employees. The next step in this journey is to understand the specific needs of each employee and create customized training courses. While this idea is still in its early stages, it will be achieved by leveraging firms’ knowledge of how to customize content for their customers in real time. Eyeview, a video marketing–technology company, can offer real-time customized video ads to consumers based on their past purchases and browsing behavior. Adobe is able to send specific learning content to users based on their current level of knowledge and potential needs. The same technology can be adapted to customize training content for employees.

Rino Piazzolla of AXA Group is now using Knack for executive development. This is how he described it:

Knack originally positioned itself for recruiting, but I talked to Guy [founder of Knack] and said that if you can see my strengths, then you can also see my weaknesses, which could be valuable for assessment and development purposes. So I decided to be a guinea pig and play the game myself. Over my career at large companies like AXA, GE, and Pepsi, I have done—as well as received—a lot of executive assessment using traditional methods, so I was curious to see what Knack would come up with. Knack results were stunning, and its assessment was one of the best that I had seen in my long career in HR. Traditional assessment methods are usually contextualized and can create bias, but Knack knew nothing about me or my long career, and yet it came back with amazing results in a short period of time. So now we are using Knack for many of our executive-development programs, such as emerging-leaders development. We also used it in the [United States] for strategic workforce planning to gauge what are our current skills and what skills we will need in the future. We asked, on a voluntary basis, all our US employees to play the Knack game. Over 30 percent of the people played the game, and that allowed us to create an inventory of our current human capital by function.

Rapid changes in technology also warrant continuous learning. Senior executives in a company may be familiar with Snapchat and WeChat, which their target customers are immersed in, but it is their junior employees who really know how millennials use and engage with these technologies. Recognizing this gap, Unilever instituted a “reverse mentoring” program, in which a senior executive is paired with a young employee. The junior person helps the senior colleague in understanding the role that new technology plays in young consumers’ lives, and the senior executive mentors the junior partner about company strategy. The CEO of Coke in China told me a few years ago that he created a teen advisory board where he invited a few teenagers every quarter to help him get a deep understanding of their media consumption and buying behavior.

The need for continuous learning also raises important questions for our education system. Is the four-year college degree, which has been around for centuries, the right model today? Should universities and governments invest more in apprentice-based models, as Germany has done so successfully? Many countries, such as India, are investing heavily in skill-based education that needs to be constantly updated as the skill requirements change and evolve. For executives and universities, it means investing more in executive-development programs. According to Rino Piazzolla of AXA Group, the core competency employees will need in the future is the ability and desire to learn.

Performance Evaluation

Every company uses some version of a performance-evaluation system that includes 360-degree feedback and quarterly or annual reviews to set salary and bonuses and suggest improvements for the future. This traditional approach has three major limitations. First, it is very time consuming. Deloitte found that it spends over two million hours per year to do performance evaluations for its 65,000 employees. This in itself may not be bad, because people are a major asset of an organization and spending time to measure and improve their performance is valuable. However, the second problem with the traditional approach is that it is often ineffective. One study examined how 4,492 managers were evaluated by their bosses, peers, and subordinates. It found that raters’ perception accounted for 62 percent of the variance in ratings and that actual performance accounted for only 21 percent of the variance.19 This biased approach leads neither to productivity improvement nor to employee engagement. The third limitation of the current approach is its batch mode, wherein employees often receive feedback only at the end of the year and not in the moment when it could help them improve their performance.

Digital tools and technology are now allowing firms to test new and faster ways to assess employees’ performance. For its 300,000 employees, GE is in the middle of scrapping its decades-old annual reviews, which were notorious under previous CEO Jack Welch, and replacing them with an app, PD@GE, that allows workers to get real-time feedback from peers, subordinates, and bosses.20 GE is also now developing an app that uses past employee data to help leaders do better succession planning and career coaching.21 In April 2017, Goldman Sachs introduced Ongoing Feedback 360+, a system designed to let employees exchange real-time feedback with their managers. The system will also provide a dashboard that summarizes the feedback an employee has received throughout the year.22 Impraise and DevelapMe are other examples of apps for real-time performance feedback.

Technology is not only useful in providing real-time feedback. The data-driven approach can also identify good performers without the bias inherent in rater-driven evaluations. When Hans Haringa, an executive at Royal Dutch Shell who runs a team that solicits, evaluates, and funds disruptive ideas from inside and outside the company, heard about Knack, he decided to see if it could help him with his task. He asked 1,400 people who had contributed ideas in the past to play Knack games. He then gave Knack information on how well the ideas of three-quarters of these people had fared in winning seed funding or progressing further. Using the game and performance data, Knack built a model, which was then used to predict the potential success of the remaining 25 percent of people. “Without ever seeing the ideas,” Haringa noted, “without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process.”23

Talent Retention

Retaining talented employees is a constant challenge for every organization. Often firms learn about imminent departures too late, when an executive has already secured another job and is ready to move on. If we can predict customer churn for a credit card or a wireless company using historical data of customers’ past usage, why can’t we use a similar approach to predict employee churn? This question has led to the development of new tools that are able to predict, well in advance, the likelihood of an employee leaving a firm. GE is currently testing such an application, to predict six months in advance if an employee is likely to leave, so that the company can design an appropriate intervention before it is too late.24 Scores of academics are using their skills in customer-churn modeling to develop machine-learning algorithms for predicting employee churn.
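The churn-modeling analogy translates directly into code. The sketch below trains a tiny logistic-regression model on invented employee records (years at the firm, engagement-survey score) to estimate the probability of leaving. Real attrition-prediction systems, such as the GE application mentioned above, use far more features and data; every number here is hypothetical.

```python
import math

# Hypothetical training data:
# ((years_at_firm, engagement_score), left_within_six_months)
employees = [
    ((1.0, 2.0), 1), ((0.5, 3.0), 1), ((2.0, 2.5), 1),
    ((6.0, 8.0), 0), ((8.0, 7.5), 0), ((5.0, 9.0), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights by plain stochastic gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(2000):
    for (x1, x2), y in employees:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y                      # gradient of the logistic loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def churn_probability(years, engagement):
    """Estimated probability that this employee leaves soon."""
    return sigmoid(w[0] * years + w[1] * engagement + b)

# A short-tenured, disengaged employee should score much higher risk
# than a long-tenured, engaged one.
print(churn_probability(1.0, 2.5) > churn_probability(7.0, 8.0))
```

The same mechanics power customer-churn models for credit card and wireless companies; only the features change when the “customer” is an employee.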

Human resource decisions ranging from recruiting and training to evaluation and retention will increasingly be driven by data and machine-learning algorithms. Machines will not replace human judgment, but they will be major complementary assets to what we currently do to manage talent. The technology revolution is only going to accelerate, and we had better prepare ourselves for it.
