CHAPTER 9

EMBRACING ARTIFICIAL INTELLIGENCE

It has been called the equivalent of the invention of electricity, or even the harnessing of fire. It has also been called God, as John Thornhill, the FT’s innovation editor and author of the foreword to this book, said at an FT Forums meeting. Or, as the tech analyst Benedict Evans said, it may just amount to having an infinite number of interns.

We are talking about artificial intelligence, or AI, a term often used interchangeably with machine learning. It is seen by some as heralding a revolutionary information age, one that will free us from drudge work, help leaders make better decisions, provide patients with more accurate diagnoses and make translation so fast and accurate that no one will need to bother to learn a foreign language. Some believe AI is fraught with danger. Stephen Hawking, the late British physicist, said that AI carried both tremendous potential and human-threatening peril. ‘Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know,’ he said in 2017.1

Elon Musk, space entrepreneur and head of Tesla, told The New York Times in 2020 that ‘we’re headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.’2

AI certainly pervades our lives. When we ask the satellite navigation on our phones to direct us to the restaurant where we are meeting our friends, when we chat to an online bot about why our parcel hasn’t arrived or when, as happened to me, Google Translate assists when our Polish carpenter doesn’t know the English word for the window he is installing (sosna = pine), we are using AI.

Shortly before sitting down to write this chapter, I moderated a Zoom conference (about artificial intelligence, as it happened) with more than 200 managers at one of the world’s leading technology companies. Because not all of them had English as their first language, and some may have had trouble understanding the conversation, we activated Zoom’s subtitling service, so that participants could read what we were saying as we said it. The subtitles (in English) weren’t perfect, but they were pretty good. That, too, was AI.

AI has performed tasks more sophisticated than these. As John Thornhill said in a 2021 FT column: ‘Automated decision-making systems are approving mortgages and consumer loans and allocating credit scores. Natural language processing systems are conducting sentiment analysis on corporate earnings statements and writing personalised investment advice for retail investors. Insurance companies are using image recognition systems to assess the cost of car repairs.’3

Because it has become ubiquitous, leaders need to understand AI. It poses challenges, but it also provides opportunities. Customers benefit from it, but some fear the misuse of it. AI may help leaders make fairer hiring decisions, but also open them to allegations of bias. And governments are struggling with how to regulate it.

So, to begin, what exactly is it? ‘The truth is, there is no one accepted definition of AI,’ Michael Wooldridge, professor of computer science at the University of Oxford and author of The Road to Conscious Machines: The Story of AI, told the FT Forums meeting. There are many views of AI, he said. One is the ‘Hollywood version’, which is ‘about building machines that have the kind of full range of intellectual capabilities that a human being has. If that grand dream became a reality, the Hollywood version, then we would have machines that could do everything that human beings could do. They could read and interpret books, they could engage in conversation. They could create works of art. They could cook an omelette, they could ride a bicycle, drive a car. Anything that human beings could do, they could do – and somewhat more. At the extreme end, perhaps these machines, in the Hollywood version, would be conscious, self-aware, sentient in the same way that human beings are.’

This is not a new thought, he said. ‘It goes back really hundreds of years. Mary Shelley’s Frankenstein is a version of this.’ Are we closer to anything like this today, to machines that could behave in the way we do, although with access to vast amounts more information than any of us could hold in our heads? ‘The science and technology of AI today is nowhere near that. And my belief is that it’s absolutely nowhere in prospect,’ Michael Wooldridge said.

So, if AI is not the Hollywood version, what is it? One way of looking at AI is that it takes in huge quantities of data and recognises patterns in them. The results can be unspectacular but useful. For example, instead of junior lawyers spending hours searching through reams of legal precedents, some law firms are using AI to search for key terms in cases so that they can turn up the relevant documents.4 More excitingly, DeepMind, the Google-owned UK AI company we met in Chapter 6, teamed up with Britain’s National Health Service and Moorfields Eye Hospital in London to detect eye diseases. The hospital provided thousands of 3D retinal scans that doctors had labelled with eye diseases. The patients’ names and details had been removed.

‘Because the images provide rich data with millions of pixels of information, the algorithm can learn to analyse them for signs of the three biggest serious eye diseases: glaucoma, diabetic retinopathy and age-related macular degeneration,’ the FT reported. The DeepMind project demonstrated that AI could then detect these diseases far more quickly than the doctors could.5

You may have noticed something interesting here. The DeepMind programme had no idea what eye diseases it was looking for until the doctors told it, through the mass of labelled retinal images. Thousands of hours of human work were needed to assemble this data. The DeepMind programme merely gathered it all together and, relying on what the eye doctors had said, interpreted the images it was seeing. This is one of AI’s paradoxes. Some, like Elon Musk, fear it could outsmart humans – something we can’t yet predict. But AI relies on humans to provide the information in the first place.
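For readers who want to see the mechanics, here is a minimal, purely illustrative sketch in Python of how such a supervised system is trained. It is not DeepMind’s code: the ‘scans’ are random stand-ins for real images and the ‘diagnosis’ is an arbitrary pixel pattern, but the workflow is the point – the model can only ever learn the labels that humans have supplied.

```python
# Illustrative sketch only -- not DeepMind's system. The 'scans' are random
# stand-ins for retinal images and the 'diagnosis' is an arbitrary pixel
# pattern, but the workflow is the same: humans label, the machine learns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_scans, n_pixels = 1000, 64                  # toy sizes; real scans hold millions of pixels
scans = rng.normal(size=(n_scans, n_pixels))  # stand-in for the hospital's retinal scans
labels = (scans[:, 0] + scans[:, 1] > 0).astype(int)  # stand-in for the doctors' diagnoses

# The model has no notion of 'glaucoma' or 'retinopathy' of its own; it simply
# learns whatever pattern links the pixels to the labels the doctors provided.
model = LogisticRegression(max_iter=1000).fit(scans, labels)

new_scan = rng.normal(size=(1, n_pixels))
print(model.predict(new_scan))                # the output is only as good as the labels
```

Swap in a different set of labels and the same code will happily ‘diagnose’ something else entirely: the intelligence, such as it is, comes from the labelled data.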

The machines can learn fast. Take that Zoom subtitling tool that I used for my online conference: Vivienne Ming, a neuroscientist and entrepreneur, told the FT Forums meeting that this sort of simultaneous subtitling is now available, free of charge, on many platforms. ‘It’s just a throwaway little service, it may not be perfect. But, honestly, even 20 years ago, the idea that a one-time presentation could be close-captioned in real time by a machine would have been truly astounding. And now, of course, we don’t think of it as exciting, or sexy, or maybe even artificial intelligence, just because it seems mundane. But it really is an amazing set of advances.’

Michael Wooldridge pointed to language translation as another area where there has been huge progress. ‘Translating from one language to another, 30 years ago, felt like a very, very long way in the future. And it’s rapidly developed to a usable technology. It won’t translate Proust for you, but it will get you around your holiday in Skopelos, it will get you round your holiday in Beijing,’ he said. And, as I discovered, it will help you find out what wood your windows are made of.

ON YOUR LEADERSHIP AGENDA

  • AI is a field that changes constantly – not just in its technology but in the debates on how it is applied. It is important for leaders to try to keep up, even if they do not become experts. But one of the advantages of leading an organisation is the access it gives you to experts. Track some of them down to speak to them or, if appropriate, employ them.
  • What is your and your organisation’s understanding of what AI means? Are there areas of your business that are using machine learning where you weren’t using it five years ago?
  • Have you had a leadership discussion about what your AI threats and opportunities are? What progress have your competitors made in using machine learning in their manufacturing or customer services?

USING AI TO MAKE BETTER HIRING DECISIONS

Performing in one of the USA’s top orchestras is, for many musicians, the pinnacle of achievement. The prestige is huge and the financial benefits can be too, including not just salary but recording contracts, soloist fees and payments while on tour. For many years, these rewards were available largely to men. In a paper in the American Economic Review in 2000, Claudia Goldin and Cecilia Rouse wrote that, traditionally, symphony orchestra musicians were handpicked by the musical director. ‘Although virtually all had auditioned for the position, most of the contenders would have been the (male) students of a select group of teachers,’ they wrote.6

Openings at these orchestras do not come often. Each orchestra has around 100 members, and that doesn’t change. As Goldin and Rouse wrote, orchestras are not like companies that take on more people as business expands; they couldn’t have increased the number of women musicians very quickly. Some possibly didn’t want to. The esteemed conductor Zubin Mehta is reputed to have said: ‘I just don’t think women should be in an orchestra.’

The outcome was predictable. ‘Among the five highest-ranked orchestras in the nation (known as the “Big Five”) – the Boston Symphony Orchestra (BSO), the Chicago Symphony Orchestra, the Cleveland Symphony Orchestra, the New York Philharmonic (NYPhil), and the Philadelphia Orchestra – none contained more than 12 per cent women until about 1980,’ Goldin and Rouse said.

But, as long ago as 1952, one of those Big Five, the Boston Symphony Orchestra, decided to do something new. Instead of watching and listening to potential recruits while they played, they auditioned them, at least for the preliminary rounds, behind a screen, so that those deciding who to hire knew nothing about them, other than the music they played. Other orchestras followed in the 1970s and 1980s. Some even laid down carpets so that the musician’s gender couldn’t be revealed by the heaviness of their footsteps.

The result was a big rise in women hired to play in these orchestras. The proportion of women musicians in the New York Philharmonic had risen to 35 per cent by the time Goldin and Rouse wrote their article. There were increases at other orchestras that used screens for their auditions. The rises were, the authors of the paper said, ‘extraordinary’, because, as we have said, the turnover of musicians at these top orchestras is so low. ‘The proportion of new players who were women must have been, and indeed was, exceedingly high,’ Goldin and Rouse said.

The orchestra story tells a familiar tale. A small number of managers may be openly prejudiced against people who are of a different gender, race or background to them. Many more try to be fair but, in the end, recruit people who are similar to themselves.

It is not only female musicians who find it hard to get work – many people do, in many different fields. In 2009, the UK Department for Work and Pensions commissioned a study that looked at how potential employers made hiring decisions based on people’s names. Researchers applied for 987 job vacancies under three different names: two were ‘white-sounding’ names; one was ‘non-white sounding’. When the responses came, 10.7 per cent of the applications with white-sounding names got a positive answer, such as being called for an interview or asked their salary expectations, compared with 6.2 per cent of those with apparently ethnic minority names.

But were responses fairer when the names were hidden? ‘There have been experiments in France, Germany, Sweden and the Netherlands over the past 10 years to try to answer that question,’ the FT reported in 2016. ‘According to a summary of these studies by Germany’s Institute for the Study of Labour, anonymous job applications do seem to increase the probability that applicants from ethnic minorities are invited for interview. But the evidence is too patchy to tell whether their chances of a job offer diminish again at the interview stage.’7

In some cases, hiding names clearly did not make recruitment fairer. In the French study, ethnic minority applicants had worse call-back rates even when their names were hidden. ‘The authors of the study speculated that could be because employers were able to discern their background from other information in their applications, such as their addresses, the schools they had attended and their language skills,’ the FT said.

So could AI, which doesn’t care which school you went to or know the cultural connotations of your name, make fairer hiring decisions? Well, AI could hardly make worse decisions than those made by humans, Loren Larsen, chief technology officer of HireVue, a US provider of online recruitment tools, told the FT in 2019. ‘When you’re actually measuring things related to the job and people are being evaluated on that, that’s better than 95 per cent of the processes that are happening today,’ he said.8

But, unfortunately for the advocates of AI-based recruiting, disillusionment set in quickly. The answer to the question ‘A machine couldn’t be biased, could it?’ was ‘Yes, it could.’ The problem, as we have already seen, is that AI relies on masses of data derived from human decisions to make its own decisions. And if those human decisions have been biased, the AI decisions that rely on them will be biased too.

In 2018, Reuters reported that Amazon’s ‘machine learning specialists uncovered a big problem: their new recruiting engine did not like women’.9 Amazon had been working on a tool that would make its recruitment faster and fairer. ‘Everyone wanted this holy grail,’ a source familiar with Amazon’s effort said. ‘They literally wanted it to be an engine where I’m going to give you 100 resumés, it will spit out the top five, and we’ll hire those.’ But those top five largely turned out to be men, because the AI tool was basing its decisions on CVs submitted to Amazon over the previous 10 years – most of which had come from men.

The same happens with AI systems that evaluate how safe it is to lend money to people. These systems look promising because they can draw on vast amounts of data about how likely borrowers have been to repay their loans. ‘But it’s important to know that these judgments are based on historical data,’ Vivienne Ming told the FT Forums meeting. ‘So, while these judgements on average might be better than an analyst might make, they tend to harbour a lot of historical discrimination. Perhaps a mortgage offered to someone might be downgraded based on subtle cues no human might recognise but are still present in the financial data.’ In other words, biased decisions in the past feed into biased AI decisions in the present.
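To make that mechanism concrete, here is a small, entirely hypothetical Python simulation – not any real lender’s or recruiter’s system. A model is trained on historical approval decisions that were partly discriminatory; it is never shown the applicant’s group directly, only a correlated proxy (think postcode or school), yet it reproduces the old bias.

```python
# Toy illustration of how historical bias leaks into a model -- hypothetical,
# not any real lender's or recruiter's system. The model never sees 'group'
# directly, only a correlated proxy such as a postcode or school.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)             # two applicant groups, 0 and 1
ability = rng.normal(size=n)                   # genuinely relevant to repayment or the job
proxy = group + rng.normal(scale=0.3, size=n)  # correlates with group, not with ability

# Historical decisions: partly ability, partly discrimination against group 1.
past_approval = (ability - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression(max_iter=1000).fit(
    np.column_stack([ability, proxy]), past_approval
)

# Score two equally able applicants who differ only in the proxy feature.
scores = model.predict_proba([[0.5, 0.0], [0.5, 1.0]])[:, 1]
print(f"approval probability - group 0: {scores[0]:.2f}, group 1: {scores[1]:.2f}")
# The group-1 applicant scores lower despite identical ability: yesterday's
# biased decisions have become today's 'objective' model.
```

The cure is not more computing power but better data and scrutiny of what the model has actually learned – which is why the remedies discussed next focus on the data that goes in and on independent scrutiny of what comes out.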

So, should leaders simply give up on using AI to recruit people or evaluate loans? No. The benefits, in speed and, yes, in better decision making if handled properly, are too enticing. It isn’t easy. John Thornhill wrote: ‘Ensuring that AI systems are used appropriately is one of the wickedest challenges of our times.’10 He offered five ways to counter AI bias, if not entirely eliminate it. First, increase the diversity of those providing the original data that goes into AI. The vast majority of those in the tech industry are white men. In 2021, FT writer Gillian Tett cited a Google diversity report showing that fewer than a third of the company’s global workforce was female and only 5.5 per cent of its US employees were black, compared with 13 per cent of the country’s population.11

Second, John Thornhill wrote, AI systems should only be used when the net benefit was clear and acceptable to those people affected by their use. He cited facial recognition as an example. Facial recognition has been controversial, first because some people dislike being photographed when they are going about their ordinary business and, second, once again, because of allegations of bias. The FT reported in 2021 that there had been three reported cases of wrongful arrests in the USA based on facial recognition – all of black men.12 In 2019, the city of San Francisco, in spite of being home to some of the world’s greatest AI experts, banned the use of facial recognition on its streets. But, as John Thornhill said, if people believed that AI was working for them, opposition to it tended to fall away.

As a writer of the FT’s business travel column, I became convinced that, if the technology worked, airline passengers would welcome facial recognition to allow them to board their planes, rather than having to endure the tedium of queuing to have their boarding passes validated and their passports given a cursory check.

Third, John Thornhill said, tech companies needed to embed ethical thinking into all their AI decision making. Fourth, they needed to subject their AI systems to independent scrutiny and, fifth, there should be broader regulation, possibly through an AI regulator equivalent to the US Food and Drug Administration, an issue we will return to.

ON YOUR LEADERSHIP AGENDA

  • While much of what we have said about AI can be viewed as criticism, we should not forget the benefits it can bring, in speed, efficiency, and – managed properly – in fairness. Which of your current processes, whether in handling your employees or customers, might be susceptible to the benefits of machine learning?
  • If your organisation has used AI in recruitment, do you have a process that allows you to track the success of recruits that have come in through an AI process, compared with those that arrived through the traditional application and interview route? How do they compare?
  • Adopting AI technologies that are more likely to be bias-free means becoming more aware of the biases within your organisation. AI makes it clear that diversity and inclusion are more than just fads: they influence how effective an organisation like yours will be in an AI-rich age.

ARE WE NEARLY THERE?

In 2018, a video appeared showing a woman reading a book and dozing under a duvet on what looked like a fully reclined chair. She was driving, or rather being driven, by her autonomous car. Volvo, which made the film, asked: ‘What if autonomous travel could eliminate the stress that lurks between A and B?’13

That relaxed vision of the self-driving car was the near future once. Elon Musk told the World Artificial Intelligence Conference in Shanghai that autonomous driving was very close. It ‘will happen and I think it will happen very quickly’. In fact, he said, by the end of that year.14 That year was 2020, which came and went without self-driving cars. ‘The future is not arriving any time soon,’ FT columnist John Gapper wrote at the beginning of 2021.15

In 2015, in an FT Forums talk, Bill Gates called the self-driving car the Rubicon, saying Uber on its own, compared with taxis, was just a ‘reorganisation of the labour pool’. In 2016, Uber itself had called self-driving vehicles ‘basically existential for us’. In 2020, it abandoned its self-driving taxi project, instead taking a minority stake in Aurora, a driverless-car startup.16

This doesn’t mean that attempts to produce more intelligent, AI-driven cars have got nowhere. Many new models have semi-autonomous features, and they are pretty impressive. As John Gapper wrote of his new Volvo: ‘Driving on an uncrowded highway in my car is pleasing: it maintains the right speed, slowing and accelerating smoothly in sympathy with cars in front. It steers around gentle bends, tracking its progress with a camera and radar sensor.’

But the driver needs to stay alert. ‘Approaching a red light the other day, I half-expected my car to stop by itself. A split second later, I realised it would not and put my foot on the brake. That is the problem of driving a car that has half a mind.’

How far are we from truly self-driving cars? Michael Wooldridge told the FT Forums meeting that autonomous vehicles have had their ‘Kitty Hawk moment’ – referring to the Wright brothers’ first aircraft flight at Kitty Hawk, North Carolina in 1903. That flight covered 120 feet in 12 seconds. It was, Michael Wooldridge said, ‘at a very, very low speed with a plane that was not going to do anything useful for anybody. But, you know, within 70 years, we went from that to 747s and flying off on holiday for a weekend in Spain.’

In Michael Wooldridge’s estimation, the Kitty Hawk moment for driverless cars came in 2005, when the US Defense Advanced Research Projects Agency (DARPA) successfully organised a competition for industry and academia to create cars that could drive 130 miles across the desert. The first time DARPA ran the event was actually in 2004. But that time was not a success, Michael said: ‘No car made it more than seven miles. An awful lot of cars didn’t even make it out of the starting grid. And I remember quite vividly at the time, my response was: these guys are crazy. The technology is a long, long way in the future.’

But the next year, DARPA ran the challenge again. Several teams succeeded in sending their cars without drivers along the same route. ‘And the winning car, from Stanford University, averaged about 20 miles an hour. It drove for about six hours across the desert terrain. That, I think, was the Kitty Hawk moment for driverless cars,’ Michael said. ‘What that established is that this technology is really viable. But it’s nowhere near the point of us yet actually jumping into a car and saying, “take me to Middlesbrough”.’

To drive without drivers, autonomous vehicles would have to be able to make all those split-second decisions that we make all the time on the road. And to do that, Michael explained, autonomous vehicles needed to be taught by humans. Michael said that companies developing driverless cars ‘are training them, for the most part, with human beings behind the wheels, but they are driving these cars around millions and millions of miles of real-world tracks, getting data about the environment. And then that data is being laboriously explained to the machine: this is a person, this is a bicycle, this is the stop sign and so on. But that’s huge amounts of data and enormous amounts of money and effort going into that.’

So, we are back to where we were earlier. AI learns from a huge number of human decisions. It finds out from humans what the road conditions look like, what pedestrians are, what the traffic light is telling you to do. Where is this leading? Michael sees driverless cars in the future working well in constrained spaces: ports, airports, military installations, ‘where you don’t have lots of random pedestrians or tourists wandering around the street or wandering into the road and so on’.

Much of the talk about AI brings to mind Charles de Gaulle’s observation about Brazil: that it was the country of the future, and always would be. Will AI always recede into the future? No. As we have said, it is already here. That it has been oversold does not mean it is not happening. It is, and assisted driving – as opposed to autonomous driving – is just one benefit. It is important, though, that we understand where we are with AI. A long way, but not there yet. As John Gapper said, drivers need to know that their car is assisting them, not taking control. They are still in charge.

In 2018, a pedestrian in Arizona was killed by an Uber self-driving car as she wheeled her bicycle across the road. Investigators discovered that the back-up driver in the Uber car was not in a position to take over because she was streaming a television show at the time of the crash.17

Two important issues arise out of this incident. When it comes to AI, who is to blame when things go wrong – the company that created the system or the person operating it? This issue is a long way from resolution. And the question for the next section: given that 1.3 million people die in car accidents around the world every year, why did one self-driving death attract so much attention?

WE EXPECT MORE FROM AI

We met Daniel Kahneman in Chapter 5, where he told us about the unlimited optimism of founders of new businesses. Let’s pick up his story again, because his fascinating work helps us understand many different questions, including some of AI’s consequences. Kahneman has had an extraordinary life. He was born in 1934 in Tel Aviv, in what was then British-controlled Palestine, while his mother was visiting from Paris. Kahneman spent the Second World War in France, on the run and hiding from the Nazis with his Jewish parents. After the war, he and his mother (his father had died of poorly treated diabetes) moved to Jerusalem where, during Israeli army service, he developed his understanding of military psychology – and of all psychology. In particular, he began to understand how persistently we manage to fool ourselves.

Kahneman established a fruitful and often tempestuous working partnership with fellow psychologist Amos Tversky, who died of cancer in 1996. But their research on judgement and decision making laid the basis for what is now called behavioural economics and won Kahneman the Nobel economics prize in 2002. Kahneman showed how leaders and policy makers get things wrong even when they are acting on what they believe to be the facts.

In his best-selling book Thinking, Fast and Slow, Kahneman writes about a study on the incidence of kidney cancer in the 3,141 counties of the USA. The study reached a remarkable conclusion. The counties with the lowest incidence of kidney cancer were ‘mostly rural, sparsely populated and located in typically Republican states’. Why should this be? Did these areas have lower levels of kidney cancer because the air and water were less polluted?

Kahneman then pointed to the counties with the highest incidence of kidney cancer. They, too, were mostly rural, sparsely populated and located in typically Republican-voting states. How could both these findings be true? How could counties with identical characteristics account for both the highest and lowest incidences of kidney cancer? ‘The key factor is not that the counties were rural or predominantly Republican. It is that rural counties have small populations,’ Kahneman wrote. Small sample sizes tend to produce results at either extreme.

It is easier to understand this if we think about Kahneman and Tversky’s ‘hospital problem’. Imagine a town with two hospitals, a larger one in which 45 babies are born every day and a smaller one with 15 births a day. Over a year, each hospital records the number of days on which at least 60 per cent of the babies born are boys. Which is likely to have more such days, the larger or the smaller hospital? Most people participating in Kahneman and Tversky’s experiment thought the larger and smaller hospitals would be about the same. But, in fact, as with the kidney cancer problem, a small hospital is more likely to have 60 per cent of births of one sex than a large one, because the sample size is smaller. The larger the number of babies born in a hospital, the closer the boy/girl ratio will tend to be to 50 per cent.18
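If the intuition still feels slippery, a quick simulation makes it visible. This is a back-of-the-envelope sketch in Python using the same made-up numbers as Kahneman and Tversky’s puzzle; it counts, over a simulated year, how many days each hospital records at least 60 per cent boys.

```python
# Back-of-the-envelope simulation of Kahneman and Tversky's hospital problem:
# over a simulated year, which hospital records more days on which at least
# 60 per cent of the babies born are boys?
import random

random.seed(42)

def extreme_days(births_per_day, days=365, threshold=0.6):
    """Count the days on which the share of boys is at least `threshold`."""
    count = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day >= threshold:
            count += 1
    return count

print("Large hospital (45 births/day):", extreme_days(45))
print("Small hospital (15 births/day):", extreme_days(15))
# The small hospital reports roughly two to three times as many 'extreme'
# days: small samples swing much further from the underlying 50/50 ratio.
```

Run it and the small hospital comes out well ahead, exactly as Kahneman and Tversky’s arithmetic predicts.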

We understand intuitively that large sample sizes will produce a more representative outcome than smaller ones when we think about opinion polls. We know that asking five people a question is likely to produce a less representative answer than asking 2,000 – but Kahneman and Tversky, with the kidney cancer and hospital examples, demonstrate how we often fool ourselves.

We are not as rational as we think. We saw this when we talked about human bias in hiring classical musicians or recruiting people with different-sounding names. We may think we are being fair, but we are often deceiving ourselves, and discriminating against others. AI offers the prospect of changing this. As we have discussed, if we can eliminate the human biases in the data that AI relies on, it is possible to imagine a world in which machines make better decisions, like the DeepMind eye disease project. Kahneman, too, sees enormous potential in areas like disease detection. In an interview with The Guardian in 2021, he said: ‘Some medical specialties are clearly in danger of being replaced, certainly in terms of diagnosis.’ When it is matched against human intelligence, he said, ‘clearly AI is going to win. It’s not even close.’19

But Kahneman made another observation, which is highly relevant to driverless cars, but also to much else that comes from AI: we are far readier to accept human mistakes than those made by machines. Take that one pedestrian fatality from a self-driving car and compare it with the 1.3 million people killed on the roads each year, usually from driver error. The 1.3 million tragedies pass without comment. The one caused by a self-driving car makes headlines around the world. Self-driving cars will have to be far safer than human-driven ones if we are to accept them. ‘They have to be so close to perfection,’ Kahneman said in an interview with the FT.20

It is not enough for AI to make better decisions than humans. For those decisions to be acceptable to society, they have to be much better – close to error-free.

ON YOUR LEADERSHIP AGENDA

  • Adopting AI requires paying attention to what it can achieve and what it can’t. If competitors are making enticing offers to customers based on their previous purchasing decisions, you may lose out if you don’t do the same.
  • On the other hand, you don’t want to alienate customers with, for example, a chat bot that leaves questions unanswered and customers so frustrated that they go somewhere else. You need a human backup.
  • You need to be particularly cautious about adopting AI if you are in a safety-critical industry. You will suffer great damage if your human mistakes lead to injury or loss of life. But mistakes caused by AI can be even more damaging – because they exacerbate fears of machines running out of control.

WHO IS IN CHARGE HERE?

While people like Stephen Hawking and Elon Musk have worried about AI becoming smarter than humans, to many of us it sometimes seems a little stupid. Buy a pair of hiking boots online, and you are deluged with adverts offering you hiking boots. I’ve already bought the boots, you may think. If AI really is as clever as people say, surely it would be trying to sell me boot polish?

AI-generated online adverts can also offend by making overly broad assumptions about people. Kriti Sharma, an AI expert and chief product officer for legal technology at Thomson Reuters, told the FT Forums meeting: ‘I have this weird problem whenever I go to social media platforms. I’m just inundated, because of my age and background and profile, with Indian ads of Indian marriage bureaus which are targeting me. There’s some algorithm’s gone wrong, which can’t understand: I’m a millennial, I’m a snowflake, I’m my own person.’

These incidents make AI seem like a free-for-all. Who is watching what all these machines are doing? Who is monitoring not just the clumsy targeting of ads, but the harm done by all the bias and discrimination we have talked about? ‘In a competitive marketplace, it may seem easier to cut corners. But it’s unacceptable to create AI systems that will harm many people, just as it’s unacceptable to create pharmaceuticals and other products – whether cars, children’s toys, or medical devices – that will harm many people,’ Eric Lander and Alondra Nelson, two Biden White House science advisers, wrote in Wired.21 Just as the USA adopted the Bill of Rights after ratifying its constitution, it needed a bill of rights ‘to guard against the powerful technologies we have created’, they wrote.

Among the provisions in a technology bill of rights could be a rule that we are entitled to know when and how AI is affecting us. Another could ensure we are not subjected to AI unless it has been audited to ‘ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets’. Remedies under a tech bill of rights could include the US federal government refusing to buy technology that does not comply with the provisions, Lander and Nelson said.

In Europe, the courts have intervened to deal with unfair AI-enabled decisions. In 2021, a Dutch court said that Uber and Ola Cabs, the Indian-owned ride-hailing company, had to disclose the data on which they based their decisions about drivers. Jacob Turner, the London barrister who acted for the drivers suing these two companies, said the cases were the first in the world to be based on the right to an explanation of how automated decisions were reached – a right that comes from the EU’s General Data Protection Regulation (GDPR). ‘The court ruled that Uber must disclose data about alleged fraudulent activities by the drivers, based on which Uber deactivated their accounts (“robo-firing”) as well as data about individual ratings,’ Turner wrote. ‘Ola Cabs were required to provide access to “fraud probability scores”, earning profiles, and data that was used in a surveillance system.’22

As we can see, the ways in which people can challenge decisions made by AI are still developing. ‘The trouble with algorithmic decision-making is that the technology has come first, with the due diligence an afterthought,’ FT columnist Anjana Ahuja wrote. In the absence of enforceable worldwide agreements that allowed people the right to know how AI was affecting them, tech companies wielded enormous power, she said.23

AI-using companies can wait to be challenged in the courts, hoping that the worldwide patchwork of developing regulation will make consistent challenges to AI difficult. But that is to risk general consumer disillusionment with AI, and reputational damage to individual organisations if it emerges that they have been using technology in disreputable or discriminatory ways.

So, what can leaders do? The main thing, Kriti Sharma told the FT Forums meeting, was to ensure transparency. ‘The issue if you’re a company that’s using AI to make certain decisions is improving the transparency and explainability, especially in large-scale companies making important business decisions,’ she said.

This is not always easy. As AI legal specialist Andrew Burt wrote in a Harvard Business Review article in 2019, transparency can help allay concerns about fairness and discrimination. But revealing what lies behind an organisation’s AI carries its own risks, both to the technology and the organisation. ‘Explanations can be hacked, releasing additional information may make AI more vulnerable to attacks, and disclosures can make companies more susceptible to lawsuits or regulatory action,’ he wrote.24

Making AI transparent requires monitoring not only by the tech and HR departments; lawyers must be involved too. This is not just an issue in the litigious USA; as we have seen, European courts are scrutinising it as well.

AI is evolving. Each development, whether in recruitment, ad targeting or autonomous driving, brings new business opportunities – and new risks. There are no hard-and-fast answers for leaders beyond talking and reading widely, and adjusting as the technology, the law and society’s acceptance change.

ON YOUR LEADERSHIP AGENDA

  • As we pointed out in Chapter 4, corporate crises, including reputational crises, can break suddenly, damaging an organisation that has spent years building up its reputation. Bias resulting from use of AI programmes can be a potent source of reputational damage.
  • Looking at your AI systems should be an integral part of your reputational risk assessment. Leaders should have regular meetings with all the relevant departments – IT, HR, communications and legal – to talk about the risks the organisation may be exposed to with AI.
  • Rehearsing for crises should include potentially reputation-damaging AI disclosures and accidents.

POINTS TO PONDER

As Elon Musk has appeared in this chapter, here is a little rhyme about trouble he had with the US Securities and Exchange Commission, written by Dr Seuss.

The SEC said, ‘Musk,/your tweets are a blight./They really could cost you your job,/if you don’t stop/all this tweeting at night.’

That wasn’t really written by Dr Seuss, of course. Theodor Seuss Geisel, the pseudonymous Dr Seuss, died in 1991, too early to have written about Elon Musk. Also, as any child who regularly reads Dr Seuss will recognise, the rhyming is a little off. The poem was generated by AI at OpenAI, an artificial intelligence lab in San Francisco that Elon Musk helped to found. And while this poem might not hit the mark, other machine-generated writing can convince, as The Economist observed in 2020: ‘Human readers struggled to distinguish between news articles written by the machine and those written by people.’25 Some journalism, from business to baseball reports, is already being written by robots.26

Is AI going to put people out of work – not just journalists and children’s book writers, but many others too? Is it going to destroy people’s jobs? Not yet. The legal research software we talked about was predicted by some to make junior lawyers’ work redundant, leaving the high-skill advice and advocacy to the fabulously well-paid partners, but raising the question of how people would become senior lawyers if there was no longer any need for junior ones. Instead, the FT reported in 2021 that UK and US law firms based in London were engaged in a bidding war for rookie lawyers that sent their starting salaries leaping.27

AI will probably change the shape of work one day. Vivienne Ming told the FT Forums meeting that she thought those who did the most creative, highly skilled jobs would survive. So would those at the other end of the salary ladder. (That is probably true: robots cannot yet make hotel beds or pick strawberries.) Some jobs in the middle, many of them the sort of data-crunching jobs done by university graduates, may well be automated away.

FURTHER READING

AI is an area in which leaders without great tech skills need to find ways of constantly keeping up with what is happening. There are some pacy and readable books. Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz provides a useful survey of AI’s development, including an intriguing look at the origin and development of DeepMind.

For those interested in the fascinating work of Daniel Kahneman and Amos Tversky, along with their personal stories and the relationship between them, Michael Lewis’s The Undoing Project: A Friendship That Changed Our Minds is both accessible and entertaining.
