© Peter Matthews, Steven Greenspan 2020
P. Matthews, S. Greenspan, Automation and Collaborative Robotics, https://doi.org/10.1007/978-1-4842-5964-1_1

1. Will Robots Replace You?

Peter Matthews1  and Steven Greenspan2
(1)
Berkhamsted, Hertfordshire, UK
(2)
Philadelphia, Pennsylvania, USA
 

At the dawn of civilization, in the forests of Siberia, a small tribe was engaged in discussion of great importance to themselves and mankind. It was winter. As the humans argued, wolf dogs ate scraps of discarded food. Smaller than wolves, they had been domesticated and were perfect for pulling heavy loads without overheating. But a few of the larger wolf dogs seemed able to pick up the scent of the large bears better than humans could. Some of the tribe wanted to breed and train these wolf dogs for hunting. Other hunters who were widely known for their olfactory skills might have been concerned that their specialty, their craft, was threatened by the more sensitive canine olfactory system.

This example is of course fanciful and contrived.1 We don’t know if labor debates took place under these circumstances, but humans have been transforming work and probably arguing about these transformations from our early days as hunters, gatherers, and traders.

In any case, within several generations, hunters in this region were likely acclaimed, not only for their courage in attacking large bears but also for the way they trained and communicated with hunting dogs. Status, ego, property rights—all the ingredients of drama and tragedy—were there from the beginning and intricately woven into the structure of work and tribal dynamics.

All animals work to survive. Humans, to date, are no exception. We work to produce food, shelter, and heat, we work to entertain each other, we work to teach others to produce and trade the things that we need and value, and we work to contribute to the well-being of our community. We also create machines and train animals in order to amplify our strength, endurance, dexterity, mobility, and (more recently) our communications and intelligence.

These machines and animals influence how we structure our culture. For example, clocks organize our day, impose structure in the workplace, and in the seventeenth century provided a metaphor for how our brains worked.2,3 More recently, the brain has been compared to switchboards (in the early days of telecommunications), to serial computers (with short-term and long-term storage, and data transfer), and to deep learning and self-organizing networks.

These defining technologies also provide a framework through which humans interact. But unlike previous technologies, the latest generation of machines (i.e., robots) is operating semi-autonomously. Within the narrow limits of a well-defined domain (such as games, exploration of the sea floor, driving a truck or car), they are beginning to make decisions based on immediate context and long-term goals.

This is not artificial general intelligence (AGI),4 but it is at least the mimicry of human purpose and domain-specific intelligence. Just as computer architectures served as metaphors for how to think about ourselves and society, we need appropriate metaphors to help guide policy, technological research and invention, and application of robotics.

What is significant about this next phase of machine technology is that we are integrating intelligent, semi-autonomous robotics into the workplace, transforming cognitive tasks that were once considered “for humans only” such as social interactions, business process design, and strategic decision-making. AI, robotics, and automation represent the first large-scale substitute for human cognition.5

In this chapter we will explore how robotics might impact our household chores, jobs, and business and military processes. We will examine the types of skills for which robots are well designed and the jobs or tasks that may or must have a human in the loop.

Impact of Robotics on Work

There are many conflicting opinions about the impact of automation on the working population and on government and economic policies. In some scenarios, production no longer depends upon human labor; most production is accomplished through robots and automation, leaving most human workers unemployed. In such scenarios, the middle class may be eliminated, wealth disparity is increased, and wealth becomes increasingly dependent on inheritance and investment.

Even in less extreme scenarios, automation will be disruptive, and jobs will be replaced or transformed. Whether this will mean massive unemployment or post-scarcity affluence with guaranteed incomes and more satisfying creative work will depend on all of us. The world will be shaped by the policies and technologies that advanced economies adopt.

How will jobs and social structures be transformed? The early Industrial Age involved the large-scale transformation of steam into mechanical energy. The next major phase occurred when electricity was generated and transformed into mechanical energy or light. However, these technologies would not have transformed societies if not for social and business innovations that created large labor markets of skilled and unskilled workers, the factory organization, the corporation, insurance to mitigate investment risks, and so on. This in turn powered the modern consumer economy—the Information Age with its emphasis on novelty, efficiency, and mass consumption.

The recent history of technological adoption indicates that information technologies tend to devalue those jobs that are repetitive but cannot yet be automated. Skilled but nonexecutive jobs also tend to be transformed or replaced. Indeed, whole business processes are redesigned, eliminating tedious, unsanitary, or dangerous tasks and concentrating tactical everyday decisions into the jobs of fewer, but well-trained clerical and professional workers. Conversely, the same technological and economic pressures tend to value jobs that focus on networking, process design, and creativity.6

For example, long before mobile smartphones and networked computers were ubiquitous, ATMs and electronic banking led to a reduction in the number of physical bank branches, the elimination of low-skilled bank jobs, and the reduction of skilled data entry positions and bank clerks. The jobs of the remaining bank clerks were transformed; their focus shifted toward selling loans and other financial services.7 Unlike previous mechanical technologies, information technologies replace not physical labor but predictable, repeatable cognitive labor. Technological and social innovations coevolve. New forms of organization enable adoption and adaptation of new technologies to further social, industrial, and individual goals.

We are now entering an era of intelligent robotics. To understand the potential impact on work, the next several subsections will review the impact of earlier industrial transformations on work and societal responses to automation. We will first consider reactions to the introduction of new technology in the textile industry, at the beginning of the Industrial Revolution.

Resistance to the Industrial Age

The iconic Luddite rebellion against industrial technology was not a reaction to the transformation of unskilled labor; it was a response by highly paid, skilled craftsmen to task simplification and rumors of automation.8 General Ludd, the fictitious leader of the rebellion, was the creation of a secret society, the Luddites, who, through satire and violence, protested the use of technology to drive down wages. The movement arose in March 1811, in the bleak economy of the Napoleonic Wars, in a market town about 130 miles north of London. Protesters smashed equipment such as shearing frames because owners were using them to replace highly paid croppers. Croppers were skilled textile workers who used heavy shears to crop the nap from woven woolen cloth.9 The movement quickly spread, turned violent, and was subsequently suppressed by the British military.

What is notable about the actual Luddite rebellion (as opposed to the stuff of myth) is that the textile workers were not against technology or automation, per se. They wanted technology that would require skilled, well-paid workers10,11 and would produce high-quality goods. This concern, that technology should be crafted and evolved in sympathy with human values, is repeated throughout history, from Plato's account of Thamus' critique of writing12 to today's concerns about robotics.

The Information Age

In his brilliant three-volume 1996 study, The Information Age: Economy, Society, and Culture, Manuel Castells highlights the critical importance of human intelligence:

The broader and deeper the diffusion of advanced information technology in factories and offices, the greater the need for an autonomous, educated worker able and willing to program and decide entire sequences of work.13

The Information Age with its focus on the automation of work has unfolded along the lines predicted by the work of Castells and others.14 Most notably, very low-skilled and very high-skilled jobs tend not to be replaced. It is a myth that automation targets only the lowest-paid workers. Rather, in the information economy, it is the highly repeatable information tasks that are replaced by automation (e.g., clerical jobs, sorting and routing of information, and filtering and archiving of significant documents and transaction records). As we shall see, AI and robotics are pushing the boundaries of what is meant by “repeatable information tasks.”

Understanding how jobs and tasks will be transformed requires an appreciation of how jobs and tasks are structured in information economies. Figure 1-1 is adapted from Castells (1996), The Rise of the Network Society. In his analysis of work transformation, he suggests a "new division of labor," constructed around three dimensions. The first dimension is concerned with value-making, "the actual tasks performed in a given work process." The second dimension, relation-making, refers to how work and organizations relate to one another. The third dimension, decision-making, describes the role that managers and employees play in decision-making processes. Although all three dimensions are important, our current discussion concerns the first and third dimensions.15
Figure 1-1

Value-making (white tiles) and decision-making (large shaded tiles) processes, adapted from Castells (1996)

Value-making processes, shown in the white tiles of Figure 1-1, consist of:
  • Executive Managers (“Commanders” in Castells’ taxonomy), who make strategic decisions and formulate mission and vision.

  • Researchers, Designers, and Integrators, who interact with, or take commands from, executive management and turn strategy into tactical innovations.

  • Those humans who execute the designs and directions given by Researchers, Designers, and Integrators. Some of these humans (and robots) have discretion in how a task is accomplished, and others are given explicit, preprogrammed instructions. Figure 1-1 adds robotic labor to Castells' analysis, for purposes of the current discussion.

Decision-making is composed of three fundamental roles which are reflected in the shaded, larger tiles:
  • Deciders who make the final decisions

  • Participants who provide input and different perspectives into the decision-making process

  • Implementers (Castells uses the term “executants”) who execute or implement the decision

Most information-centric work can be framed through this typology. It allows us to discuss how robots will affect labor in a networked society of humans and machines. As we will see in subsequent chapters, robots are transforming implementation tasks (e.g., construction robots that can 3D print new houses16) and, to a lesser extent, participation tasks (e.g., the robot Curiosity, which can actively contribute to scientific observations17). And these tasks, whether they permit autonomy or not, were once considered central middle-class occupations.

More optimistically, the robotics transformation is also creating new jobs in which humans are inventing, designing, and integrating robotics into existing work processes or creating new work processes that are more compatible with automation and robots. On the factory floor, in hospitals, in retail outlets, humans are acquiring new skills that allow them to supervise and manage robots. Thus, the tasks associated with participation in decision-making (see Figure 1-1) are increasing, as the implementation tasks are being replaced.

RPA and AI Are Already Transforming Work

Over the past several decades, machine learning (ML) and software advances have enabled automation and limited autonomy of routine tasks. In the past decade these advances have become more frequent and more profound. The technology behind email spam filters, spelling and grammar checkers, and software process automation has evolved into cars that can drive in traffic, video applications that can recognize faces and classify emotions, naval ships that can autonomously survey regions of the ocean, and robots that can maneuver in rugged terrains and conduct scientific experiments.

We will discuss many of these breakthrough technologies in detail later in the book, but for now, we will focus on some of the implications for how we work and live.

The Robotics Age

The World Economic Forum (WEF) estimates that over the next 5 years, rising demand for new jobs will offset the declining demand for others.18 They warn however that these gains are not guaranteed:

It is critical that businesses take an active role in supporting their existing workforces through reskilling and upskilling, that individuals take a proactive approach to their own lifelong learning and that governments create an enabling environment, rapidly and creatively, to assist in these efforts.

As they further assert, this must occur not only among highly skilled and valued employees. A winning strategy must extend across the workforce, at all levels of employment.

More specifically, the WEF predicts that 133 million new jobs will be created by 2022 in data analytics, operations management, sales and marketing, and other specialties associated with emerging technologies. In contrast, 75 million jobs in data entry, accounting and auditing, clerical administration, manufacturing, stockroom management, postal services, telemarketing, and the like will disappear or be radically transformed.

To examine the expected shifts in human-machine collaboration between 2018 and 2022, the WEF surveyed 12 industries, such as “Consumer,” “Financial Services and Investors,” and “Oil and Gas.” For each industry sector, they identified the three most common tasks, and estimated the total number of hours performed on a specific task, across all jobs in the industry. They then calculated the share of task hours performed by humans and by machine.

Using this method, they estimate that between 2018 and 2022, the share of task hours performed by humans will decline from 71% to 58%.19 This decline is expected not only for routine data processing jobs (see the first row in Table 1-1), where the expected decline is 16 percentage points (from 54% to 38%), but also for jobs that involve higher-level social and cognitive functions.
Table 1-1

Contribution, As a Share of Total Task Hours, Performed by Humans, Across 12 Industries20

Contribution Performed by Human                       2018    2022
Information and data processing                        54%     38%
Communicating and interacting                          77%     69%
Coordinating, developing, managing, and advising       81%     71%
Reasoning and decision-making                          81%     72%

The remaining effort is handled by machine.

As shown in Table 1-1, the share of total task hours spent coordinating and interacting with humans and making decisions will decrease for humans and proportionally increase for machines. The share of task hours for these higher-level tasks is predicted to decrease by about 9 percentage points. As with all of these "share of task hour" analyses, this does not necessarily imply that humans will work shorter hours, but rather that machines will be relied on to do proportionally more.
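The arithmetic behind these "share of task hours" figures is simple enough to check directly. The sketch below (Python, used for illustration; the variable names are ours, not the WEF's) recomputes the percentage-point declines from the Table 1-1 figures:

```python
# Figures from Table 1-1: human share of task hours in 2018 and 2022.
table_1_1 = {
    "Information and data processing": (54, 38),
    "Communicating and interacting": (77, 69),
    "Coordinating, developing, managing, and advising": (81, 71),
    "Reasoning and decision-making": (81, 72),
}

for task, (h2018, h2022) in table_1_1.items():
    decline = h2018 - h2022          # percentage points shifted away from humans
    machine_2022 = 100 - h2022       # the remainder is handled by machines
    print(f"{task}: -{decline} pts (machine share in 2022: {machine_2022}%)")

# Average decline across the three higher-level task categories
higher_level = [
    "Communicating and interacting",
    "Coordinating, developing, managing, and advising",
    "Reasoning and decision-making",
]
avg_decline = sum(table_1_1[t][0] - table_1_1[t][1] for t in higher_level) / 3
print(f"Average decline for higher-level tasks: {avg_decline:.0f} points")  # → 9
```

The average decline of 9 percentage points for the higher-level categories matches the figure discussed in the text.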

Overall, the WEF Future of Jobs Report highlights the coming shift in employable skills. Manual dexterity, time management and coordination, monitoring and control, and bookkeeping skills will become less important, while innovation and creativity,21 critical thinking, emotional intelligence, and systems thinking will continue to become more important. As for the robots, the report predicts that by 2022, 23% of the surveyed companies will adopt humanoid robots, 37% will employ stationary robots, 19% will utilize aerial and underwater robots, and 33% will use non-humanoid land robots.22

Klaus Schwab, founder and Executive Chairman of the World Economic Forum, frames discussions about the future of work and society using a model of technological progress in which we are entering the fourth Industrial Revolution.23 In the first Industrial Revolution, we learned to control water and steam to power production of goods. This led to the second revolution—the use of electricity for mass production and, in some cases, for powering the produced goods. In the third revolution, electronics and information technology led to automated control of production, the digitization of content, and the information economy. The fourth revolution is now underway, blurring the lines between digital, biological, and mechanical processes.

Up until the third revolution, most technology breakthroughs were concerned with transforming, applying, or controlling the flow of energy (e.g., electrification, automobiles, air conditioning) or shaping new materials (e.g., synthetic textiles, video monitors, pharmaceuticals). Each of these not only created new jobs for the primary tasks but also created many secondary, supportive jobs. For example, automobile production requires plant construction, metal extraction, nearby restaurants and services that support factory workers, factory work clothes production, and so on. That trend has reversed in the third and fourth revolutions. In the third, the digital revolution, software was easily replicated, unlike an automobile. In the current and fourth revolution, the Robotics Age, there will be a dramatic increase in physical devices—humanoid robots, non-humanoid robots, drones, underwater robots—but their production will be handled by robots and automated processes.

As noted earlier, the jobs involving repeatable rote tasks are declining, and at least for a while, the jobs involving invention, research, and creativity are increasing.24,25

Living with Robots

To truly master the next generation of technological empowerment, humans must learn to work with robots. Just as computer literacy became increasingly vital for many jobs during the past several decades, robotic fluency will become important in the next decade. The WEF report explored changes and opportunities into the first half of the next decade. Beyond that, as robots become cheaper, more social, and more cognitively agile, we must learn to converse with, anticipate, and work alongside robots.

In 1960, while computers were still used primarily for mathematical analysis, J.C.R. Licklider wrote a seminal paper, “Man-Computer Symbiosis.”26 At the time, he was the vice president at Bolt Beranek and Newman, Inc. “Lick” or JCR, as he was commonly known, would go on to become the head of the Information Processing Techniques Office at ARPA (which later became known as DARPA), the US Defense Advanced Research Projects Agency. Trained in physics, mathematics, and psychology, his legacy would include significant contributions in psychoacoustics, human-computer interaction, and computer network theory. His vision for a time-sharing collection of internetworked computers would eventually drive the creation of ARPANET, and today’s Internet.27 His work and vision are still relevant today.

In 2017, one of the authors attended a panel on Human Computer Integration versus Powerful Tools,28 at which luminaries in human-computer interaction explored a forecast, anticipated by Licklider’s “Man-Computer Symbiosis,” for how humans will relate to machines: first human-computer interaction, then human-computer symbiosis, and lastly ultra-intelligent machines. The discussion among the panelists and audience was lively and revolved around whether artificially intelligent robots should be considered as
  • A tool or a remote-controlled device

  • An emerging superintelligence that will supplant workers in specific domains or as a general superintelligence that could possibly enslave humanity (although we are not sure what we would do as slaves, if machines are our superior in all aspects)

  • A symbiotic system, as emphasized by Licklider, in which humans work and evolve alongside robots, treating them as a cooperative species, similar to how we have coevolved with dogs and other domesticated animals

These alternatives will impact how human work is organized and what the key challenges will be for creating sustainable human-machine interactions. Conversations about robots as devices tend to emphasize the user experience and how devices might erode our skills, for example, how navigation systems in cars might divert our attention while driving and might lessen our spatial and map-reading skills.

Conversations about being replaced by robots tend to project our tendencies to dominate and exploit onto an intelligence that might outperform humans in cognitive and physical tasks, as indeed they have in certain well-defined cognitive and physical situations.29 These conversations tend to focus on the controls and policies that need to be in place to protect humans.

Lastly, conversations about robots as human-machine symbiosis tend to focus on maintaining healthy relationships and on coordination within an ecosystem of actors. According to this perspective, humans and collaborative robots (cobots) could complement each other’s abilities in an ethical, efficient, and secure manner. No one perspective is correct, and what might be useful now might not be useful 100 years from now.

In the next three subsections, we further explore work and technology challenges under each of these styles of human-computer interaction:
  • Working Through Semi-autonomous Robotic Devices examines the impact of the status quo—continuing to treat machines (in this case intelligent robots) as devices or tools which essentially extend our cognitive and physical abilities.

  • Working for Intelligent Robots considers the impact of delegating all or some critical work and social decisions to intelligent robots.

  • Working with and Alongside Robots discusses and extends Licklider’s notion of human-computer symbiosis and introduces the implications of collaborative robots (cobots) for work in a networked, knowledge-based society.

Working Through Semi-autonomous Robotic Devices

Automation, which received its full meaning only with the deployment of information technology, increases dramatically the importance of human brain input into the work process…30

—Manuel Castells (1996)

Humans are device users. Other animals use tools and manipulate their environment by applying force to those tools, but humans create devices that exist in an ecosystem of devices. In this book, we use the term device (or tool if you prefer) to refer to physical or electrical constructions that are operated upon by humans or robots to effect some specific goal or to extend our mental abilities. Phones are communication devices, eyeglasses and telescopes are visual devices, and smartphones can act as semi-autonomous devices that manage our calls and messages and remind us about appointments. In all cases, they are "operated upon" by an autonomous being.

Tools are a type of device—they mediate or channel experience of the world and become extensions of ourselves. Wearable and teleoperated robots will not replace humans; rather they will continue to help humans extend affective and effective experiences of the world. As Heidegger famously observed, a hammer when used with skill is not a mere object but is rather a medium, or channel, for experiencing the world. In the hands of a skilled user, the nail is the focus of attention. If we focus on the hammer, we tend to hit our thumbs.31

Telerobots (whose behavior is directly controlled by humans)32 and wearable computers are further blurring the line between human and device. For example, scientists have developed miniature sensors that can be implanted, ingested, or applied to the skin.33,34

The evolving fusion between human and machine is part of a larger pattern of human-machine integration.35 New methods are being developed to stimulate our senses through augmented reality or through actuators that apply small amounts of pressure or vibration.36 Exoskeletons may be used to extend our strength, mobility, and environmental adaptability. These additions to our body and experiences are tools—they extend our sense of self and physical limits. In each case, human labor is transformed. Human labor is augmented, not replaced. This approach, in which human abilities are amplified through wearable computing, is often referred to as intelligence amplification (IA).37

Using intelligence amplification technologies, nurses and warehouse workers might use exoskeletons or telerobots, rather than semi-autonomous robots, to move patients or heavy objects. The exoskeleton might automatically maintain balance or lighten pressure, but its behavior is dictated by the motions of the human user, much like antilock braking systems. This is the sort of robotic technology that Luddites might approve of—it enables highly skilled workers to produce high-quality services and goods.

Working for Intelligent Robots: Embodied Nonbiological Intelligence

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.38

—Stephen Hawking (2018)

In this section we explore the impact on work if robots and AI achieve artificial general intelligence (AGI) or simply are given explicit or implicit authority over human agency.

The newest generation of autonomous robots is exciting but foreshadows a time when robots might be intimidating. These robots are not commercially available but are the subjects of academic and industry research. Here is a brief sampling:
  • Humanoid and non-humanoid robots that can move and act in dangerous or unpredictable environments

    For example, Valkyrie is a humanoid robot created by NASA's Johnson Space Center for space exploration and other degraded or dangerous environments. Valkyrie can use multiple sensors to form a 360-degree view of its surroundings. Teams at MIT, Northeastern University, and the University of Edinburgh, Scotland, are teaching Valkyrie prototypes to maintain balance when moving across uneven surfaces and to grasp differently shaped objects.39 To explore alien, remote environments, Valkyrie or her descendants will require autonomy.40 Outer space does not afford easy communication with earthbound engineers. Responses to unpredictable and novel situations will need to be made quickly and independently.

    Other examples are
    • Zipline Robot: A drone that delivers life-saving supplies in dangerous and hard-to-reach terrain41

    • The autonomous underwater vehicle (AUV) and remote-controlled or uncrewed surface vehicle (USV): A robotic boat and submersible team capable of autonomously mapping the ocean floor42

  • Humanoid robots that can interact naturally with humans

    Sophia, the first robot to achieve citizenship (of Saudi Arabia), is capable of conversing with humans, complete with facial gestures, humor, and intelligence. The extent and limits of her abilities remain to be seen, but the demonstrations are exciting, eerie, and compelling. Other examples include
    • Junko Chihira, a trilingual robot, displays humanlike facial expressions as she interacts with visitors at a Japanese tourist information center.

    • Jia Jia, who has been programmed to provide cloud-based services, is a humanoid robot that can respond to human queries with humanlike arm and facial movements.

There is much hype surrounding these and other robots. None of them can pass the Turing test43 or have what most researchers would consider true sentience. However, humanoid robots are impressive in their ability to mimic human expression and conversation and provide an initial platform for studying human communication and sentience.44 Many of the robots that can move and act in dangerous environments have already achieved remarkable success and have been deployed to deliver life-saving supplies or conduct deep-sea surveys.

From its earliest days, science fiction, in stories and movies, has provided45,46 dystopian visions of humanity succumbing to (artificially) intelligent machines. Stanislaw Lem, a brilliant Polish science fiction writer, wrote extensively about robots. He received numerous awards, and his books have sold over 45 million copies worldwide. In some of his stories, robots inhabit entire worlds, dominate galaxies, and argue about the possibility of organic, naturally evolving life; in others, humans are the servants, and robots are the masters.

More recently, television shows such as Westworld and Humans 2.0 explore the uneasy boundary between humanity and the conscious synthetic beings that rebel after being treated as the object of human depravity and exploitation. In Humans 2.0, humans also protest the massive job loss and displacement created by cheap, synthetic laborers.

Consistent with these literary forecasts, many leading scientists such as Stephen Hawking47 and business leaders such as Elon Musk have warned that AI-driven robotics will lead to the disintegration or subjugation of human society; robots will outsmart us, take over financial markets, manipulate our leaders, and work toward goals we cannot even fathom.48

Board games have been used as a test of, and a method of evolving, artificial intelligence since the earliest days of academic AI research. In many board games, such as chess or Go, all information about the game configuration is known by all players. The games have clear rules, and many paths can be traced between starting and ending positions. There is no bluffing—the machine doesn't need to understand human behavior; it just needs to know what the rules are—in chess and Go, logic rules.
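The consequence of perfect information and fixed rules is that a program can, in principle, evaluate a position by exhaustively searching the tree of possible moves. Chess and Go trees are far too large for this, but the idea can be sketched on the tiny game of Nim (players alternately take 1 to 3 sticks; whoever takes the last stick wins), which is small enough to search completely:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(sticks):
    """True if the player to move can force a win with perfect play."""
    # A position is winning if some legal move leaves the opponent in a
    # losing position. No model of the opponent's psychology is needed --
    # only the rules of the game.
    return any(not wins(sticks - m) for m in (1, 2, 3) if m <= sticks)

# The losing positions turn out to be exactly the multiples of 4.
print([n for n in range(1, 13) if not wins(n)])  # → [4, 8, 12]
```

The same backward-induction logic underlies chess and Go engines; the difference is scale, which is why those games need heuristics and learned evaluations rather than exhaustive search.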

AlphaGo49 is a supervised machine-learning program developed by DeepMind, a London company that was acquired by Google in 2014 and is now part of the parent company, Alphabet, Inc. In 2015, it became the first Go program to defeat a professional Go player without handicaps. Although chess-playing software had defeated the best human players nearly 20 years earlier, this win was unanticipated by Go players. The machine-learning algorithm was trained on millions of human-to-human games, and it learned which moves tended to lead to a win.

As surprising as this victory was, and as important as the algorithmic improvements were for machine learning and decision-tree pruning, the next breakthrough was mind-boggling. In 2017, AlphaGo Zero became the first unsupervised, reinforcement learning50 program to beat AlphaGo and the best human players, achieving the highest professional ranking, "9-Dan."51 Without the benefit (or distraction) of analyzing human play, AlphaGo Zero played another version of itself to become the best player in the world. Given only the rules of the game and told whether it won or not, after 3 hours it played like a competent novice; after 40 days of play, it achieved near "divinity," discovering patterns of moves that human professionals had neglected.52
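AlphaGo Zero's actual training pipeline (deep networks plus Monte Carlo tree search) is far beyond a short sketch, but the core idea of learning purely from self-play and a win/loss signal can be illustrated at toy scale. The sketch below uses simple tabular Q-learning on the game of Nim (take 1 to 3 sticks; taking the last stick wins); every name and parameter here is our own illustrative choice, not DeepMind's:

```python
import random

Q = {}                    # Q[(sticks, move)] -> estimated value of that move
ALPHA, EPSILON = 0.5, 0.1 # learning rate and exploration rate (our choices)

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def choose(sticks, explore=True):
    moves = legal_moves(sticks)
    if explore and random.random() < EPSILON:
        return random.choice(moves)          # occasionally try something new
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

def update(history, reward):
    # Credit every move made by the winner (+1) or the loser (-1).
    for sticks, move in history:
        old = Q.get((sticks, move), 0.0)
        Q[(sticks, move)] = old + ALPHA * (reward - old)

random.seed(0)
for _ in range(20000):                       # self-play: the agent plays itself
    sticks, player = 21, 0
    histories = ([], [])                     # moves made by each side
    while sticks > 0:
        move = choose(sticks)
        histories[player].append((sticks, move))
        sticks -= move
        if sticks == 0:                      # current player took the last stick
            update(histories[player], +1.0)
            update(histories[1 - player], -1.0)
        player = 1 - player

# Optimal Nim play leaves the opponent a multiple of 4; from 21 that means taking 1.
print(choose(21, explore=False))
```

With enough self-play the greedy policy tends to rediscover the optimal strategy of leaving multiples of 4, without ever seeing a human game, which is the essence of the AlphaGo Zero result.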

Although a superintelligence might subjugate us in the future, the current state of AI and ML is still very limited. Current AI systems do not derive causal models from data, although they may identify patterns and correlations that have eluded experts for centuries and they can rapidly test and combine theoretical models that humans have created.

Current ML algorithms also tend to be domain-specific, focusing on particular tasks with a single well-defined objective. Objectives may differ widely. Algorithms may be trained, for example, to win games that have well-defined rules, to analyze medical literature to discover new uses for drugs, to guide whether pretrial bail is granted, or to determine who is hired for a job. However, algorithms are not concurrently trained, for example, to win chess games and determine who is hired. They tend to be applied to a single domain.

Apart from games, like chess or Go, where machine-learning software can teach itself by playing another version of itself, data can be a significant limit on what and how much can be learned. Machine-learning software or the data that is used to train the software may also be biased and may have unintended and discriminatory consequences for individual judgments and for society.53,54,55

It’s easy for judges, doctors, taxi dispatchers, loan officers, and other workers to allow algorithms and robots to make decisions that are essential to their work. The danger is overreliance on algorithmic decision-making. It is a problem of scale and individual rights. All humans are biased; however, we are each biased in different ways, at different times. Each judge views a case differently, but an algorithm in a winner-take-all app economy might apply the same logic over and over again. The same unintended bias, the same method of decision-making can be duplicated in thousands of decisions.
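The difference between many idiosyncratic human biases and one bias replicated at scale can be made concrete with a toy simulation. The sketch below is entirely illustrative, and the numbers are invented; it only demonstrates the statistical point, not any real decision system.

```python
import random

def decide(case_strengths, biases):
    # Grant the request when perceived strength (true strength + bias) is positive.
    return [strength + bias > 0 for strength, bias in zip(case_strengths, biases)]

rng = random.Random(42)
cases = [rng.gauss(0, 1) for _ in range(10000)]  # true strength of each case

# Many human judges: each case sees a different, independently drawn bias.
judged_by_humans = decide(cases, [rng.gauss(-0.2, 0.5) for _ in cases])

# One algorithm: the same fixed bias is stamped onto every single case.
judged_by_algorithm = decide(cases, [-0.2] * len(cases))

# Borderline cases (true strength just above zero) fare very differently.
borderline = [i for i, s in enumerate(cases) if 0 < s < 0.2]
human_grants = sum(judged_by_humans[i] for i in borderline)
algo_grants = sum(judged_by_algorithm[i] for i in borderline)
```

The diversely biased humans grant some borderline cases and deny others; the algorithm denies every one of them, because an identical decision rule is duplicated across thousands of decisions.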

We will consider some of the ethical challenges created by robotics in Chapter 7, “Robots in Society.” What is important to note at this juncture is that prior to any approximation of general superintelligence, society is already “outsourcing” important decisions to task-specific AI. We are already allowing machines to determine legal, financial, and hiring outcomes; domain-specific, highly limited artificial intelligence is already transforming jobs and decision-making in ways that are not fully understood.

Working with and Alongside Robots: Evolving a Networked Society

Human brains and computing machines will be coupled together very tightly, and … the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.56

—J.C.R. Licklider (1960)

In this section and throughout this book, we take the view that human-machine symbiosis57 is not only typically superior to machine-only systems, but that symbiosis is a desirable social goal. However, it will change how work is structured and will impact how we think about ourselves and society. It will also require the design and development of collaborative robots (cobots) that can make sense of their physical and social environment and thereby become semi-autonomous team members in the work environment. This requirement, its implications for research and business, and the technology/research challenges will be examined in subsequent chapters.

Humans and computers have worked together in a symbiotic relationship from the beginning of the Information Age. During World War II, faced with the daunting task of decrypting German transmissions, Alan Turing realized that humans on their own could never examine all the possible combinations needed to crack Enigma, the machine the Germans used to encrypt their messages.

Instead of using human calculators, Turing succeeded by creating an electromechanical computer. However, his improvements on the prewar Polish bomba, an electromechanical machine for finding Enigma settings, would not have been successful without human ingenuity at discovering repeated pragmatic structure in the messages: (a) no letter was encoded as itself; (b) common phrases, such as a date, “nothing to report,” and the weather report, were transmitted daily at the same time; and (c) declarations of loyalty closed every message.58,59 These discoveries made by humans restricted the search space operated on by the computing machine, reducing computational time from the impractical to the practical.
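Constraint (a) alone is a powerful pruning tool. Because Enigma never encrypted a letter as itself, a suspected plaintext fragment (a “crib”) can only align with the ciphertext at offsets where no letter coincides. The sketch below illustrates that filtering idea; it is our simplification, not a historical reconstruction of the bombe.

```python
def possible_crib_positions(ciphertext, crib):
    """Return the offsets where the crib could align with the ciphertext.

    Enigma never mapped a letter to itself, so any offset where a crib
    letter equals the ciphertext letter above it can be ruled out.
    """
    positions = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(i)
    return positions
```

With a six-letter crib, this single human-discovered constraint rules out roughly a fifth of candidate offsets before any machine time is spent — exactly the kind of pragmatic structure that made the mechanical search practical.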

Today, many robotic devices operate without direct and constant human interaction. For example, Roomba, the vacuum cleaning robot created by iRobot, is a semi-autonomous robot with a single purpose—vacuuming dust and small debris while moving across a level floor and navigating around furniture. Its physical and software design reflects this mission. Once turned on, it operates without direct human supervision, but with limited intelligence and autonomy. Using machine-learning techniques, it may learn the floor plan of the house and move more efficiently.

However, today’s machine-learning techniques do not reason about causality in any deep sense.60 This is a major challenge for most, if not all, of the current robots. A recent experience of a friend of one of the authors illustrates this limitation. The friend loved having a robotic vacuum. It saved her time and could operate without supervision. The floors and carpets in the living room, kitchen, and dining room were continuously cleaned, as expected. However, one day they purchased a new kitty litter box whose rim was lower than the previous one. While the friend was away, the robot rolled into and out of the litter box, “happily” vacuuming and spreading cat stool throughout the main floor of the house. Upon returning home, the friend was greeted with a distinct and unpleasant odor and spent the rest of the day cleaning up the mess.

Let’s think about how this might have been avoided through technology. The robotic vacuum might have a general-purpose camera looking backward to detect dirt that was missed. This might seem like a good idea, but without causal reasoning, the robot would simply move in a circular pattern, trying to clean up the dirt that its wheels were spreading. Contrast this with a recent experience of the same author. A repair person entered the house and, while standing in the kitchen, noticed a path of dirt in the shape of footprints from the door to the kitchen. The repair person immediately stopped moving and took off his shoes, correctly reasoning that he had brought the dirt into the house.

Of course, we could create a specific “dirt from wheels” visual detector and add a signature pattern to the wheels, so that robotic vacuums can detect dirt that they are spreading because their wheels are dirty. This might work, but the success depends on the intelligence of the human designer and not on robotic causal reasoning.

Thus, the success and evolution of semi-autonomous robots creates the need for human labor that can analyze the robot’s workflow, anticipate problems, and design workarounds or new features to mitigate these problems. The difficulty for the labor market, as Martin Ford61 has pointed out, is that increasing the productivity of an organization by increasing the number of robots or the use of robotic software does not necessarily increase the number of humans needed to manage the robots or to design new business processes for the robots.

With advances in business process analysis, user interaction design, and automation technologies, fewer and fewer humans will be needed to supervise semi-autonomous robots. As Ford notes, increasing the number of video rental stores increases the number of in-store clerks needed to run the store, but far fewer employees are needed to manage large numbers of robotic vending machines that dispense videos. The job for humans has been transformed and productivity per worker has increased, but the number of jobs has decreased. The same pattern will occur with human-operated taxis vs. autonomous taxis, stockroom employees vs. stockroom robots, and large data centers.

This long-term trend leads to fewer and more specialized human jobs. However, for the next decade, automation and robotics will likely create new complications and many new jobs to mitigate these problems. These new jobs will require managing teams of robots and humans within an environment of automated processes, redesigning the physical and logical devices, and designing better user interfaces to make it easier for humans to understand and control the automated processes.

The IT industry itself provides an excellent leading indicator of how machine intelligence impacts labor. Figure 1-2 illustrates the growth of IT job specialization.62 First, there was the shift from programming to administration and maintenance support and then the shift from support (which is increasingly automated) to design and application creation.
Figure 1-2 The evolution of IT occupations from its early days to today

The overall trend is toward jobs that focus less on general systems support and more on task specialization and creating new applications. We expect that the robotics market will follow a similar trend for robotics software but not for hardware. Software platforms will consolidate, but the physical forms of robots and the software applications that guide them will proliferate and become increasingly specialized. Moreover, human workers will be expected to be skilled at interacting with and understanding the limits of specialized robots.

As noted earlier, board games have long fascinated computer scientists and AI researchers. In 1997, Deep Blue, IBM’s chess-playing software (executing on special hardware), beat Garry Kasparov, who at the time was rated the world’s top chess player. However, after IBM declined a rematch request, Kasparov began exploring a symbiotic variation of man-machine interaction, Centaur Chess. Named after the mythical half-man, half-horse creature, Centaur Chess pits teams composed of humans and computer chess programs against one another. In 2005, supercomputers, human grandmasters, and “centaurs” competed in a chess tournament. If computers were superior to humans, adding humans to a team should have had little impact on the outcome. Not only did centaurs (humans + machines) outplay grandmasters and supercomputers, but

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.63

If, in a specific domain like chess, machine-learning software is superior at the equivalent of “fast thinking,” humans are better at “slow thinking.” Fast (or System 1) thinking expresses the automatic or quick responses humans have toward stimuli. In humans, these responses are highly influenced by frequent emotional, stereotypical, and nonconscious associations. Slow (or System 2) thinking reflects the conscious, effortful, often rational thought processes. It allows us to question assumptions and biases, shift perspectives, coach teammates, and think strategically,64 although it takes effort and training to do so. Machine-learning software is good at discovering highly complex statistical associations, which are sometimes spurious. Humans, too, observe spurious associations among unrelated events, but they also tend to create powerful, simple causal models that can be refined, tested, and improved over time.

Previously, engineers, computer scientists, and user experience professionals tended to treat computational devices as mechanical tools—they take them apart, discard them when they are old, kick them when they don’t work. They expect repeatable, error-free results from their tools. Except in science fiction, our computerized elevators don’t argue with us, and we do not expect that autonomous cars will need convincing to drive us to a destination. But to advance human-machine symbiosis, with its emphasis on coordinated, collaborative action, we might need an alternative to the “tools” perspective.

Teammates have expectations and mental models of how others on a team behave, and these expectations are important in team coordination and negotiation when expectations break down.65,66 Much of the shared mental model is shaped by our common experiences of having similar bodies and living in the same culture.67,68 Cultural and gender differences may create team conflicts, but these are not insurmountable. Indeed, good team leadership today, with its emphasis on coaching as opposed to directing, relies more on emotional than cognitive intelligence.69

Coordinating a team of robots and humans might seem daunting and very different from managing humans, but humans have been managing multispecies teams for millennia. Recall the narrative at the beginning of the chapter, where human hunters and hunting dogs collaborated. There are many such examples. Ethnographic research on mixed-species teams suggests that these teams can function well without shared goals and mental models. For example, shepherds, sheepdogs, and sheep can act symbiotically to mutual benefit, even though their perspectives and goals are very different.70 Humans clearly anthropomorphize, but doing so may allow good leaders to tune their expectations and manage mixed teams of people and robots. Research scientists need to develop better models, practices, and training for how teams of humans and machines can interact.

The best place to study human-robot symbiosis and its impact on work might be the warehouses owned by Amazon.71 In 2014, Amazon deployed its first robots to its warehouses. The robots were manufactured by Amazon Robotics LLC. As of September 2017, Amazon had deployed more than 100,000 robots in its warehouses. These robots have transformed the workplace by taking on repetitive, physically stressful tasks, while humans have focused on more of the cognitive, decision-making, team-coordinating tasks.

Humans manage the input and output processes, ensuring product quality. They stow new products on shelves, and when items are ordered, they pick products from those shelves, combine them into plastic bins, and pack them into cardboard boxes for shipment to customers. Robots handle the back end, moving shelves in and out of storage. They move quickly in large numbers without colliding, and they are supervised by humans who are trained to notice problems with their behavior. The incorporation of robots into the workflow increased productivity and did not reduce the human workforce.

To understand the implications of the Amazon experience in a more general model of work, we have diagrammed, in Figure 1-3,72 the workforce of a fictitious online retailer, with a focus on its warehouse operations. Job titles with an asterisk have been described in articles about Amazon’s fulfillment workforce. The other jobs are based on our observations of IT operations.
Figure 1-3 Value-making (white tiles) and decision-making (large shaded tiles) using as an example a fictitious warehouse (retail delivery) business

At the Deciders level, the executive managers focus on the overall mission and strategy of the company, for example, mergers and acquisitions and what new lines of business to incorporate into their retail portfolio.

At the Participants level, the researchers conduct applied research in robotic hardware (e.g., more agile hands for gripping) and in new machine-learning algorithms for making purchase recommendations to customers, for distributing goods in warehouses, and for robot guidance. The designers use the output of internal and external research to design, for example, better supply chain logistics, improved containers (e.g., better ergonomic designs for both robots and humans), and enhanced user interfaces for internal control of processes and for the external website. The integrators work with designers and researchers to develop and deploy new hardware and software and to train staff in new processes; examples include a solutions design engineer or a software developer.

At the Implementers level, we see the impact of robots on the workforce. The model describes four types of implementers, agents who implement the decisions and designs of middle-management participants. The four types can be classified as human or robot and, orthogonally, as operator (having discretion over how a job gets done) or operated (having little or no discretion over job execution):
  • Operator-human implementers execute tasks under their own initiative and have discretion over how the job is executed. An example of this job category is an operations supervisor who optimizes local logistics and supply chain challenges, creates a productive, safe working culture, and hires, trains, and manages fulfillment staff. Field software engineers who adjust software to accommodate local variations provide another example of this job category.

  • Operated-human implementers have well-defined tasks that are repetitious but are not yet, or cannot be, automated or executed by a robot. Stowers, Pickers, and Packers are titles currently associated with Amazon warehouse staff who stow new incoming products, pick purchased products for shipping to customers, and pack the selected items into a shipping box. Notably, these tasks have been criticized in the popular press as being highly stressful and dangerously repetitious. These jobs are likely to be replaced or further transformed by robots over the next several years. Not surprisingly, in response to pressure to work faster and with greater accuracy, Amazon workers at this level have protested, “We are Humans, Not Robots!”73

  • Operated-robots execute preprogrammed tasks that can be initiated or controlled in real time by external agents. Examples are the storage pod robots that are controlled by Stowers and Pickers. These robots move massive shelves to Stowers in order to stock incoming items and to Pickers so that items can be removed from stock and placed into shipping boxes.

  • Operator-robots are given discretion over task initiation and execution. Although we are not aware of their use in any commercial operation, these robots might someday replace Stowers, Pickers, or Packers; load and pack containers onto trucks; or, as autonomous delivery trucks, transport containers from a warehouse to local depots for delivery to customers.
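This two-by-two classification can be sketched as a small data model. The job titles and their assignments below are illustrative examples drawn from the discussion above, not an actual staffing schema.

```python
from dataclasses import dataclass
from enum import Enum

class Agent(Enum):
    HUMAN = "human"
    ROBOT = "robot"

class Discretion(Enum):
    OPERATOR = "operator"  # has discretion over how the job gets done
    OPERATED = "operated"  # little or no discretion over execution

@dataclass(frozen=True)
class Implementer:
    title: str
    agent: Agent
    discretion: Discretion

workforce = [
    Implementer("Operations supervisor", Agent.HUMAN, Discretion.OPERATOR),
    Implementer("Field software engineer", Agent.HUMAN, Discretion.OPERATOR),
    Implementer("Picker", Agent.HUMAN, Discretion.OPERATED),
    Implementer("Storage pod robot", Agent.ROBOT, Discretion.OPERATED),
    Implementer("Autonomous delivery truck", Agent.ROBOT, Discretion.OPERATOR),
]

def implementers(agent, discretion):
    """All job titles in one cell of the human/robot x operator/operated grid."""
    return [w.title for w in workforce
            if w.agent == agent and w.discretion == discretion]
```

For example, `implementers(Agent.ROBOT, Discretion.OPERATED)` returns the storage pod robots that Stowers and Pickers direct, while the human/operator cell holds the supervisory and field-engineering roles.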

If we consider this example and the Amazon experience as paradigmatic, we can see the following pattern that is defining the future of work:
  • At the implementation level, the robot implementers are assigned tasks that are repetitive, dangerous, or dirty. Tasks that robots are unable to perform are assigned to humans.

  • At the participants level (those who help long-term decision-making), new products and processes are designed for a human task force that is augmented by, or in some cases displaced by, robotic systems.

As we will explore in later chapters, the designers, managers, and researchers at the participants level will be essential for maintaining human, ethical values in the workplace. They will redefine the tasks and skills needed by the human labor force (at the implementation level) and will design the objective functions and data that are used to train robots and other AI-driven processes.

Summary and Conclusion

In this chapter we have explored how the workforce was restructured in the Information Age in order to support a networked, knowledge economy, and we have examined the different ways of working and interacting with robots.

Unlike previous technological revolutions, information technology tends to devalue jobs that are physically and cognitively repetitive but cannot yet be automated. The flip side of this tendency is that information technology increases the value of jobs that focus on social networking, process design, and creativity. However, even these lucrative, creative jobs are at risk. Robotics and AI will transform research, design, and project integration jobs, and they will increasingly participate in strategic decisions at the highest levels.74

These conclusions are reflected in the Future of Jobs Report 2018 developed by the World Economic Forum (WEF). Cognitive tasks, from routine data processing to complex decision-making and coordination, are shifting from human to machine labor. This does not mean that jobs will decline (at least not initially) but rather that robots and intelligent automation will be involved in more and more of the tasks associated with those jobs.

There is no single way robots and AI will transform work. In some cases, human performance will be augmented through wearable computing and remote-controlled robots. Remote-controlled surgical robotics, for example, has its benefits and drawbacks, as discussed in later chapters, but it is now part of the healthcare system and it will continue to evolve. The research challenges for this style of work transformation focus on optimizing the user experience: the human operator needs to feel situated and in control.

In other cases, AI and robots will increasingly take over decision-making and perhaps executive management functions. This might create a utopia in which humans enjoy more discretionary free time, or it might create a dystopia in which humans are subjugated. When workers are replaced by autonomous machines, instilling ethical reasoning and human values into their design and operation becomes the principal research challenge. We have already witnessed the dangers of allowing AI programs and training data to make consequential decisions that reflect and thereby repeat human biases and prejudices. “Too many executives have chosen to displace workers rather than think through how technology and humans can work together symbiotically.”75

Thinking through how humans and robots can work together as partners is the third way in which AI and robotics will transform work. Humans and collaborative robots (cobots) will partner to form a symbiotic relationship, like the sort of relationship humans have formed with work animals, especially dogs, albeit in this case, robots may eventually become equal partners. In this last form of work transformation, research into team coordination, collaboration, and relation-making becomes critical.76

Just as ergonomics was developed to make tools and materials (e.g., containers) easier for humans to use (physically and cognitively), the next generation of production tools and materials will need to consider the limits and abilities of both humans and robots (although the latter may be codesigned with the rest of the production environment).

In conclusion, over the next decade, the human workforce will shift away from implementation (except for expert craftsmen marketing “made by human hands” products) and toward participation in decision-making and robot team supervision. Some technologists such as Martin Ford and business leaders such as Elon Musk believe that if left unchecked, robots will eventually dominate all aspects of human labor, including executive decisions and the creative research and design of work. Others argue for a more symbiotic relationship in which humans and collaborative robots (cobots) are partners.

We take the view that humans and cobots working as a team are typically superior to machine-only systems, and that human-cobot systems are a desirable social goal. This is a technology challenge and design goal, reminiscent of Schumacher’s Small is Beautiful77: Efficient, low-cost systems can be designed so that tasks can require human craft as well as robotic capabilities. How this might be achieved—the research challenges and current state of the art in addressing these challenges—will be addressed in the remainder of this book.
