CHAPTER 3

Disruption and the Crisis of Technology

The end of World War II marked the beginning of a period that produced remarkable economic and social progress but would eventually give rise to the forces behind the worries expressed around the world in the form of ADAPT (asymmetry, disruption, age, polarization, and trust). It also marked another important point in human history: with the advent of the nuclear bomb, we had created a technology that could eradicate us and much of the natural world. The planet and human society have seen massive disruptions before, but most of those have been the result of natural disasters or outbreaks of disease. When they were the consequence of war, the result was usually limited to a single major region or political system.1 The potential disruptive consequences of a nuclear war required us to rethink how nations interact, how we protect ourselves from ourselves, and how we envision our relationship to each other.

Today we face a similar dilemma. The sheer ubiquity of rapidly and dynamically advancing technology in our lives and our societies threatens us and our environments in ways that are perplexing and seemingly intractable. Although technology certainly contributes to innumerable improvements in critical facets of our day-to-day existence, there are nonetheless two extremely dangerous risks that arise from the outsized role technology plays in our lives. The first is the disruptive impact of information technology on our daily existence, goals, and aspirations. Chapter 5 focuses on the other, arguably greater, disruption: the basic technologies by which we generate energy, grow the food we eat, make and transport goods, go from one place to another, build the things we work and live in, and keep ourselves comfortable are making the world too hot.

IT Disruption and Harmful Effects on People and Society

If you ask a child to draw a robot, more than likely the picture will be of a boxy human—square face, eyes, ears, nose, mouth, and all of the same limbs as people. Meanwhile, since 1950, adults have been administering the Turing Test to computers of every ilk, trying to identify machines that have crossed over into the human sphere by whether they exhibit intelligent behavior indistinguishable from that of a person.

Both statements point to the distinctive relationship that people have with computers and the digital world. No other products or services are so identified with human intelligence, sharing our unique capacity for juggling complex groups of knowledge and information and—to a degree—our sentient characteristics. And none, not even the automobile, has the potential to so disrupt fundamental aspects of how we live and what matters most to us. As a result, over the years the role that new technology plays in our lives has constantly been adjusted and recalibrated, in an effort to balance the good it brings to society with its impact on critical human values involving privacy, livelihoods, quality of life, education, and the way we relate to our friends, families, and communities.

For decades, before the widespread use of the Internet and, importantly, before the more recent emergence of the huge technology platforms that dominate our economies—the Big Four in much of the world (Google, Amazon, Facebook, and Apple) and Baidu, Alibaba, and Tencent (BAT) in the rest—maintaining this balance was difficult but not impossible. Indeed, many of the efforts to bring new digital breakthroughs into our workplaces, homes, and social spaces were founded on the assumption that technology is benign, valuable, and certainly manageable.

But that conclusion is no longer so clear-cut (even among technologists themselves).2 Precisely because digital technologies increasingly mirror characteristics of human intelligence, are more efficient than we are, and are advancing at a speed that makes the past seem slothful, it is becoming harder to temper the disruptions in our lives engendered by new hardware, software, apps, and platforms. As a result, in nearly all industries as well as social, political, and economic circles, technology is freely rewriting assumptions that used to guide day-to-day activities, taking over essential tasks from people while offering new channels of communication and information of varying quality.

The sheer depth (and likelihood of a continuing wave) of technology-based disruptions is provoking a growing chorus of global worries about the threats posed by technology, a sentiment I heard from almost everybody I spoke to. The dangers of the sudden rise and global ubiquity of the giant technology platforms were uppermost in people’s minds.3 These companies have deftly built their business models on key attributes of technological success—from massive economies of scale, to an ability to accumulate vast amounts of data and use artificial intelligence to glean the most valuable information from that data, to completely reshaping the nature of shopping, media, healthcare, social relationships, and finance into a platform economy. As a result, substantial wealth, influence, consumer and personal data, and market control accrue to this small group of companies and those who run them. Meanwhile, other technological advances—for instance, in robotics, artificial intelligence, and virtual reality—are imperiling jobs and livelihoods.

To be sure, technology can be an extraordinarily positive force. Because of it, we get nearly everything better, cheaper, and faster. Goods and services can be tailored to our personal needs. We can do things we never imagined before. Tremendous amounts of knowledge are at our fingertips. New entrepreneurial industries are rising up to replace old and inefficient ones. The challenge is that, because of technology’s pervasiveness in our lives, the harm it can potentially do is also colossal in scale.

Simply put, we face a number of conundrums: how can we have the benefits of this new world and manage or mitigate the liabilities? To what degree should we hold technology companies accountable for negative outcomes stemming from their products? What is the responsibility of technology companies to police themselves from doing harm? How can we safeguard against technology changing us in detrimental ways? Or, as Apple CEO Tim Cook put it in a commencement address at MIT in 2017: “Technology is capable of doing great things, but it doesn’t want to do great things. That part takes all of us. . . . I’m not worried about artificial intelligence giving computers the ability to think like humans. I’m more worried about people thinking like computers without values or compassion, without concern for consequences.” This chapter elaborates on the most salient threats that new technology and digitization pose. Later chapters discuss how to tackle these threats.

Wealth Disparity

Duke University professor Phil Cook and his coauthor, Robert Frank, were among the first to identify a core characteristic of the economy: it has increasingly become “winner take all.”4 Their argument was that in a highly networked, knowledge-rich, platform-based world, a few at the top are disproportionately compensated because their place atop the market enables them to reach more potential customers than their rivals, further solidifying the advantaged position they have. In turn, these few at the top can provide more benefits and convenience to their users (think Amazon Prime) and outpace new competition before it even gets off the ground.

Although this explanation of extreme wealth and power among the big platform companies is important and cautionary, often unnoticed is how this has contributed to wealth disparity. Consider the most valuable firms in the United States—and to a large degree in the world—in 1967 and 2017. Ranked one through five, in 1967 they were IBM, AT&T, Eastman Kodak, General Motors, and Standard Oil of New Jersey; in 2017 they were Apple, Alphabet (Google’s parent), Microsoft, Amazon, and Facebook. The top five firms in 1967 created a huge number of jobs between them, both directly and indirectly through their suppliers. Not so for the five in 2017. In 1967, for example, AT&T had about one million employees. In 2017, Alphabet had only ninety-eight thousand. Software-based companies require fewer people and fewer suppliers to build wealth. As a result, only a small subset of workers can benefit from the best jobs in the most powerful companies, while large portions of the population have to compete for lower wages in less well-heeled organizations.

Regional advantage works in a similar manner.5 In a knowledge economy, technologically oriented cities and regions with highly trained workers, entrepreneurs, and available seed capital have the lion’s share of tech startups and continue to attract more companies because of their privileged position. It’s remarkable that the five most valuable US companies in 2017 are headquartered in just two cities.

There is another side to regional variability. In economies with a shared currency, areas that have invested in and benefited the most from technology far outperform those that have been less able to invest or to derive productivity gains from technology. For example, Gujarat has benefited from India’s GDP growth nearly twice as much as the regions less able to take advantage of the benefits of technology.6 Interior China, parts of the midwestern United States, and southern Europe have similar stories. This is one of the negative consequences for the EU, for example, as southern Europe has been slower to adopt technology in industry and has lost ground on pricing to northern Europe. Left without intervention, the trend is obvious: wealth will accrue to fewer and fewer cities, companies, and people.

Job Losses

One of the most significant factors driving global pessimism is a numbing fear of being downsized out of the job market or forced to take low-wage positions because of new technologies. The 2019 Edelman Trust Barometer, which surveyed some thirty-three thousand people around the world, found that nearly 60 percent of respondents were worried about not having the skills to get a well-paying job in the future and 55 percent of respondents were concerned about automation or other innovations taking away their job. Richard Edelman, CEO and creator of the survey, described the attitudes beyond the numbers: “People are governed by their fears at the moment; by a two to one margin they think that the pace of innovation is too fast. Four out of five actually believe that their economic circumstances are going to be worse 10 years ahead. Those are unprecedented numbers. And it goes down to: I basically am afraid that machines are going to take my job.”7

How realistic is this conclusion? A seminal Oxford Martin School paper published in 2013 concluded that 47 percent of US jobs are at high risk of automation in the next few decades.8 As shown in Figure 3.1, a 2018 PwC study of twenty-nine countries found that close to 30 percent of jobs would be severely disrupted or disappear due to automation.9 Although the precise numbers may vary, most studies agree that those hardest hit by automation will be people with lower levels of education, women, and youths.

Considering the potential scope and scale of automation technology in the future, the coming years could be extremely traumatic in the job market. Urgent responses to minimize the damage are needed. As Oxford Martin economist Carl Benedikt Frey has argued, “the Industrial Revolution created unprecedented wealth and prosperity over the long run, but the immediate consequences of mechanization were devastating for large swaths of the population. Middle-income jobs withered, wages stagnated, the labor share of income fell, profits surged, and economic inequality skyrocketed.”10 Frey suggests that these same trends are emerging during what he calls the Computer Revolution, and that how traumatic the consequences will be depends entirely on how the short term is managed: helping people gain essential skills, generating new jobs, and supporting new job-creating industries and businesses during their nascent phases.

FIGURE 3.1 Percentage of existing jobs at potential risk of automation, by wave. SOURCE: PwC, “Will Robots Really Steal Our Jobs? An International Analysis of the Potential Long-Term Impact of Automation,” 2018. PwC analysis, based on data from the OECD Programme for the International Assessment of Adult Competencies public database, http://www.oecd.org/skills/piaac/publicdataandanalysis/.

Privacy Intrusions

The evolution of IT platforms, cloud computing, and data analytics has resulted in amazing benefits in terms of convenience, efficiency, personalized solutions, advances in knowledge, and availability of information, products, and services. But the trade-off is that the amount of personal data stored in the cloud today exceeds all imagination. This presents a set of new challenges; critically, how do we make data available for all of the uses we find desirable while ensuring that our privacy is safeguarded and that the data collected about us is accurate?11

The core issue with data accumulation and AI-based analysis of this data is that these systems need to know a lot about us—and continue to learn more and more—to generate the most accurate assessments of who we are, what we like, our hobbies, jobs, and lifestyles. More and better information about us is continuously needed to provide the personalized solutions we want technology to deliver. But throughout the world there is a growing backlash against technology platforms and providers that ignore people’s privacy concerns. Because of this, there are increasing calls from policymakers to rein in technology companies that are less than transparent about what personal information people are giving up to use a platform and what the companies may do with this data. Most of us would find it acceptable for personal information to be used to enhance the public good, such as medical research, and to increase the quality of products or services we receive. But we are generally opposed to the loss of privacy and sense of personal security that accompanies these benefits.

Centralized Control

One of the biggest dangers of platform and other technology companies accumulating data about individuals is that this information will be in the hands of either very few, very large organizations or governments and be used for purposes not in our best interest. The platform companies have an incentive to profit from our data and governments have an incentive to use it for surveillance. The consequence of a misplaced profit motive is the subject of this chapter. As to governments and surveillance, to a degree this is already happening. For instance, the Chinese government is putting together a social scoring system that will rank every citizen using metrics such as their bill-paying history, schoolwork, adherence to traffic laws and birth control regulations, use of technology, and shopping patterns. This system will draw on information from banks, mobile phone companies, and e-commerce firms such as Alibaba, among many other sources.12

Less aggressive, but in some cases equally intrusive, programs in Western countries rely on data from online providers, such as Google, Apple, and mobile phone companies, for law enforcement investigations. Although these programs generally require subpoenas or warrants to access the data, there have been examples of government or law enforcement overreach. For many people, the idea that governmental authorities have channels through which to access personal data accumulated by technology companies is cause for alarm and sizably contributes to the loss of trust in public institutions.

The Disturbing Effect of Social Media

With more than two billion total users, Facebook has about as many followers as Christianity. Twitter and Instagram each have more than one hundred million active users, many of them on the platform for long periods each day. As we have seen in the many disinformation and misinformation imbroglios that these social networks have found themselves in, they have almost limitless power to distribute and distort ideas and facts, aggravate opinions and emotions, disseminate real and imagined narratives, and guide the topics that people around the globe end up talking about.

Although the social media companies should be accountable for these problematic activities on their sites, the more insidious deleterious effects of social media are a consequence of human inclinations. People spend much more time on negative content than positive content.13 Thus, not surprisingly, platform business models that depend on attracting and retaining viewers create a skew toward negative content. In addition, people are more likely to bully and demean others on social media platforms than face to face. Moreover, we tend to read things online that align with our view of the world and to favor people who share it. That, of course, solidifies the fracture of societies into idea camps that are unwilling to give credence to what others outside of their orbits have to say.

Perhaps the most dramatic criticism of social media came from Sean Parker, who founded the peer-to-peer music site Napster and advised Mark Zuckerberg when Facebook was being launched. Parker recently sold all of his Facebook holdings because of his concerns about the impact of social media platforms on society. He said that the site grew by “exploiting a vulnerability in human psychology” with people’s need for attention fed by a careful reward system to keep users addicted. “We need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever . . . it’s a social-validation feedback loop . . . a vulnerability in human psychology,” he admitted to news site Axios.14

In establishing successful platform companies, many founding entrepreneurs had noble goals of enhancing how we are governed, bringing society together, unleashing creativity and innovation, and advancing our best ideas; but this isn’t happening and won’t if left unmanaged.

Technology That Makes Us Dumber

University of California at San Francisco neuroscience professor Adam Gazzaley and California State University, Dominguez Hills psychology professor Larry Rosen are concerned about smartphones and their impact on human intelligence. They describe the human brain as serving two core functions. The first is high-level thinking: synthesizing data, connecting to existing knowledge, creating, linking to emotions, planning, and deciding. The second is helping us execute plans and take action. Interestingly, our high-level cognitive functioning is well advanced beyond that of other species, but our critical cognitive execution capabilities—based on short-term memory and attention span—are about as good as a chimpanzee’s.

That’s bad enough, but in their book, aptly titled The Distracted Mind, Gazzaley and Rosen say that the search capacity and interruptive nature of the smartphone are worsening the situation, weakening even more our short-term intelligence. At the heart of this problem are two basic consequences of ubiquitous technology. First, we have dramatically increased our propensity to multitask, which unfortunately is really just rapidly switching from one task to another—the brain cannot really do two attentional tasks at once. Gazzaley and Rosen worry we have lost the ability to single task. “Glance around a restaurant, look at people walking on a city street, pay attention to people waiting in line for a movie or a theater, and you will see busily tapping fingers,” they write. “We appear to care more about the people who are available through our devices than those who are right in front of our faces. And perhaps more critically, we appear to have lost the ability to be simply alone with our thoughts.”15

The second is that we have grown to spend less and less time on any single task. Our capacity to attend has shrunk—from students instructed to study a really important topic, to employees asked to work on a critical task, to drivers behind the wheel. Our patience seems to be continuously shrinking: “More recent work has even suggested that the four-second rule may actually be closer to a ‘two-second rule’ or even a ‘400 millisecond rule’ (less than one half a second), indicating that we are all quite impatient and prone to diverting our attention rapidly from one screen to the next if our needs are not met.”16 All of this is a result of entrancing sounds, compelling visuals, and irresistible vibrations that just cannot be ignored.

The deeper worry is that the brain is neuroplastic—it rewires itself based on use—and thus these tendencies, developed early in life, can persist throughout our lives. If we read and observe in short bursts, we adapt to being able to maintain brain activity in only short bursts. If we immediately search for items we cannot remember, we let memory capacity atrophy. Certainly, technology can and does make us smarter in many ways, but the results of Gazzaley and Rosen’s research point to a troubling consequence of smartphones that will only worsen if neglected.

Contributing to Self-Harm

I am extremely proud of my daughter-in-law and in the context of this chapter she probably has a job for life. Her specialty is dialectical behavior therapy, which treats people who are anxious or depressed and likely to harm themselves through cutting and suicide. We had a fascinating conversation about why self-harm is on the increase. Some of it can be credited to better reporting of patient activities and greater recognition of mental health issues. But technology plays a big role as well, she said. The reasons behind this are eye-opening.

For example, people tend to present idealized versions of themselves, their jobs, their relationships, and their family life on social media. Thus, when people who are dissatisfied with their lives or who feel inferior compare themselves to others on social media, deep depression can set in. Social media interactions can be cruel and bullying, feeding downward spirals. The Web is a hypochondriac’s field of dreams. If you are not worried about a symptom now, just look it up, and you are likely to be certain you have had that symptom for some time.

Artificial Intelligence Is Puzzling

No one doubts that artificial intelligence has the potential to ultimately change our lives for the better. Supporting human intelligence with a machine that is equal in brain power will accelerate advancements in fighting diseases, developing drugs, solving engineering puzzles, managing intractable issues like climate change, freeing people from drudgery, extending creativity, distributing valuable information, and even providing companionship for the lonely and immobile.

But we don’t fully understand how AI programs learn, or how individual facets of an AI program affect the performance of the larger system, well enough to control outcomes. When designing an AI program, we are still somewhat in the dark about why it does things that we did not intend it to do. What type of programming or data analysis interaction led the system to produce an unanticipated outcome? With those uncertainties, an AI system could all too easily be programmed to do something beneficial and still have the unintended consequence of causing greater harm. MIT physicist Max Tegmark provides an amusing example of this.17 “Imagine you ask an obedient intelligent car to take you to the airport as fast as possible,” he says. “It might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for.” But Tegmark is not pessimistic about the future for AI—and even our ability to effectively corral it. “We invented fire, repeatedly messed up, and then invented the fire extinguisher, fire exit, fire alarm and fire department,” he reminds us.

The very notion that computing technology has the potential to be our equal in intelligence presents an array of possibilities for digital devices, software, systems, and networks that go well beyond what the imagination can conjure up today. But there is an even bigger unknown: how will we handle the difficult task of crafting policies and strategies to rein in the deleterious impact of technology platforms and digitization, especially the harm they already inflict on individual livelihoods, social relationships, the political sphere, economic systems, and global comity? To address this, we first need to fully examine the threats that technology poses from both a broad and a granular perspective. That knowledge alone will give us a boost in generating adequate solutions that propel technology toward being an indispensable handmaiden instead of something to worry about.
