CHAPTER 10

Culture: Refreshing Technology

Innovation as a Social Good

Recently, a cross-section of US residents was asked about their feelings toward some of the more highly publicized new technologies predicted to play a big role in shaping the future. The results were less than flattering—if you are a piece of digital equipment, that is. According to the poll, conducted by Pew Research, 73 percent of Americans are very or somewhat worried about a world in which robots and computers are capable of performing many human jobs, and 76 percent of respondents said they believed that automation of jobs would intensify economic inequality. A full 75 percent of respondents were skeptical that new, better-paying positions will be created for humans who lose their jobs to machines.1

Autonomous vehicles (AVs) fared only slightly better: 56 percent of Americans said they would never trust the technology enough to ride in one. Although one of the primary selling points of self-driving cars is that they will in time eliminate fatal vehicle accidents, as many as 30 percent of Pew respondents said that AVs would do the opposite—lead to an increase in road deaths.

These findings are not surprising when you consider that much of the new technology we are just beginning to get used to today is a very sharp double-edged sword. Some of it is already making the world a better place. The most important events in our lives—buying a home or a car, for instance, or planning a wedding—as well as essential chores like shopping are simplified, made more convenient, and enhanced with a personal touch through customization. Information that once took hours or days to find is at our fingertips. The business models of slow-to-change, inefficient industries are being modernized and their assumptions questioned. Staying in touch with each other and our communities has never been easier.

Yet despite the obvious ways that technology serves us well, headlines dwell on technology’s unrelenting dark side: its impact on jobs and wealth, its intrusions on our privacy and political systems, the odd isolation that social media engenders, the disinformation channels that manipulate truth, online bullying, and more. The list is long and portends peril. Because of how readily we bring new technologies—hardware, software, platforms, and apps—into our lives (even as we fear them), the harm they can visit upon us as individuals and citizens of global communities can be enormous and near permanent.

We urgently have to make tough decisions about how technology is implemented and used today. With appropriate thoughtfulness—dispassionately identifying the undesirable elements of technology and championing its advantages—our choices can lead to positive outcomes and mitigate negative ones. We need to forge disciplines, practices, and policies to help people make those decisions smartly—and we have to protect each other from online retribution, reputational risk, or threats to our livelihoods when we attack the excesses of technology. Five steps can be taken immediately to rein in technology. They are basic but essential: (1) upskilling for a digital world; (2) safeguarding data from misuse; (3) enhancing the role of civil society to help find solutions to privacy questions and better police technology; (4) setting controls on artificial intelligence; and (5) improving our personal behavior and controlling our appetites vis-à-vis technology.

We Can’t Control Technology If We Don’t Understand It

Technology is leaving a large number of us behind. On the one hand, there are those who are adept with platforms and apps or who have access to large amounts of cloud storage and various pieces of connected hardware to ingest information and quickly transform it into usable knowledge. These people are in a better position than the second group, who are Luddites or who otherwise lack the wherewithal or the access to participate in the digital revolution.

This technology divide is a costly and unviable situation and can be ameliorated only by an enormous global upskilling campaign aimed at achieving three urgent goals. The first goal is ensuring that people everywhere are as future-proof as possible, minimizing employment disruption, particularly among aging populations, as jobs transition into requiring advanced digital skills. The second goal is allowing everyone to engage thoughtfully and intelligently in finding ways to humanize technology; the search for solutions to the excesses of technology must be citizen-led, while business and government leaders must participate. The third goal is beginning to reduce the knowledge, social, and economic gaps between the haves and have-nots, gaps that are expanding as technology increasingly drives these elements of daily life.

An upskilling effort of this magnitude can succeed only as a massive, fast-moving campaign. That effort can be broken down into these actions:

•   Building workforce capabilities. Individuals in every region must obtain the minimum level of digital skills needed for the jobs of today and tomorrow.

•   Amplifying digital understanding. Leaders in the public and private sectors must have sufficient understanding of the potential harm that technology can do, as well as sensitivity to the fears that global residents harbor about technology. Without that, they can neither play a role in driving technology for the public good nor lead technology-propelled organizations effectively. In addition, citizens around the world must grasp technology well enough to participate in managing it and in holding their leaders accountable.

•   Helping the have-nots. Finding and training people currently excluded from the opportunities that technology offers because they opted out or live in places that are digitally disadvantaged.

Carrying out these actions requires that the case for upskilling be made and agreed upon globally. Ongoing sharing of intellectual property and best practices in the name of creating a digitally prepared global populace is necessary. Moreover, at the local and regional levels, a governance structure for the upskilling campaign must be drawn up that includes clear definitions of priorities, strategies for addressing those priorities, funding channels, partnerships, and the capabilities needed to train and retrain large numbers of people. And perhaps most important, there must be concerted, large-scale engagement from the private sector; governments and civil society are important, but without private sector buy-in the training efforts would neither focus on the types of skills people need to retain jobs during digital disruption nor be adequately funded.

We Have a Data Dilemma

Information, especially personal information, is both a public and a private good. In various forms—anonymized, folded into statistics, or even raw—big data pools of individual habits and preferences, health histories and shopping records, information accessed, and what we read are extremely valuable. These data pools help companies use technology for positive outcomes: delivering efficient, customizable services; catalyzing breakthroughs in, for instance, medical equipment and pharmaceuticals; and giving us more hours in the day for leisure. Equally important, though, is that personal and sensitive data about each of us be protected from public distribution and from misuse by corporations or governments.

Some parts of the world take that mission very seriously. Since the end of World War II, Europe has led the way in legislating strong privacy protections for data about individuals and their activities. The latest EU regulations require companies to get permission from European customers before collecting their personal data. The United States has lagged in this—just look at how much private information Facebook secretly hoovers up and turns into a revenue stream. And Asian countries are even worse in this regard. While data privacy legislation is necessary, the private sector—and particularly the leaders of major technology companies—must be at the forefront of finding and implementing solutions that safeguard the trillions of pieces of personal information generated by hundreds of millions of customers and clients. Although many technology executives claim to be sympathetic to data protection concerns, few seem to back that sympathy up with action.

One CEO who deserves credit for at least moving his very large company in the right direction is Microsoft’s Satya Nadella. Only the third chief of Microsoft, after Bill Gates and Steve Ballmer, Nadella was born in Hyderabad, India, and earned his master’s in computer science at the University of Wisconsin–Milwaukee. Nadella has set Microsoft on a new mission: where company cofounder Gates hoped for “a PC on every desk and in every home, running Microsoft software,” Nadella wants to “empower every person and every organization on the planet to achieve more.”2

As part of this goal, Nadella believes that data privacy is a “human right” and has vowed that Microsoft will not monetize customers’ personal data or use it for profit. His cloud computing strategy is a window into his thinking on this topic. The centerpiece of the offering is the separation of public from private data. Microsoft’s cloud servers would store and manage nonsensitive data that reveals nothing about the identity of its subjects or where it came from, while each client’s local servers, accessible only to the client itself, would maintain and protect private data. In other words, when a client wants to use Microsoft’s cloud-based analytical tools, it extracts data from its private servers, strips out anything that identifies where the data came from, performs the analysis in the cloud, and then ships the results back to its private servers. All of this is accomplished virtually instantaneously; otherwise, the time cost of each transaction would make the private server–public cloud approach too slow.
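
To make the pattern concrete, here is a minimal sketch of that flow in Python, assuming hypothetical records, helper names (pseudonymize, export_for_cloud, cloud_analytics), and a toy analysis. It illustrates the private/public split described above, not Microsoft’s actual implementation.

import hashlib

# Illustrative only: nothing that identifies a customer ever leaves
# the private server; the cloud sees salted tokens and numbers.
PRIVATE_RECORDS = {
    "cust-001": {"name": "Ada", "postcode": "98052", "spend": 120.0},
    "cust-002": {"name": "Grace", "postcode": "98052", "spend": 80.0},
}

def pseudonymize(record_id: str, salt: str) -> str:
    """Replace a real identifier with a salted one-way token."""
    return hashlib.sha256((salt + record_id).encode()).hexdigest()[:12]

def export_for_cloud(records: dict, salt: str) -> list:
    """Ship only de-identified, nonsensitive fields to the cloud."""
    return [{"token": pseudonymize(rid, salt), "spend": r["spend"]}
            for rid, r in records.items()]

def cloud_analytics(rows: list) -> dict:
    """Stands in for the cloud-side analysis; it never sees identities."""
    return {row["token"]: row["spend"] * 1.1 for row in rows}

# Back on the private server, the client alone can re-attach results
# to real customers, because only it holds the token-to-ID mapping.
salt = "local-secret"
mapping = {pseudonymize(rid, salt): rid for rid in PRIVATE_RECORDS}
results = cloud_analytics(export_for_cloud(PRIVATE_RECORDS, salt))
for token, score in results.items():
    print(mapping[token], round(score, 2))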

Other alternatives for addressing the data protection issue are being considered as well. One of the more prominent recent ones is blockchain, which essentially turns every data transaction into a series of anonymized blocks. The transaction can be verified and tracked at every stage but cannot be decoded, and the individuals initiating the communication hold private keys that allow them to control how the information is used and who is allowed to view it. Blockchain is an attractive idea with a lot of merit, although crucial issues remain to be worked out. For instance, how do you make it reversible? You may want to call an interaction off and erase it entirely, but at this stage you can’t do that, even though erasure is a necessary component of data protection as well. Another question involves figuring out how to make some of the data in the blockchain public, but only for a short period of time and for a specific use.
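
A toy hash chain, sketched below using only Python’s standard library, makes both properties visible: each block commits to its predecessor, so the record can be verified at every stage, yet altering or erasing any entry invalidates every block after it. That is exactly why reversibility is so hard to retrofit. The sketch illustrates the general mechanism, not any particular blockchain.

import hashlib, json

def make_block(payload: dict, prev_hash: str) -> dict:
    """Seal a payload together with the hash of the previous block."""
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return {"payload": payload, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Walk the chain, recomputing every hash; any tampering breaks it."""
    prev = "0" * 64  # conventional all-zero "genesis" predecessor
    for block in chain:
        body = json.dumps({"payload": block["payload"], "prev": block["prev"]},
                          sort_keys=True)
        if block["prev"] != prev or \
           hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for tx in [{"from": "a", "to": "b"}, {"from": "b", "to": "c"}]:
    block = make_block(tx, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))              # True: every stage checks out
chain[0]["payload"]["to"] = "x"   # attempt to alter (or "erase") history
print(verify(chain))              # False: tampering is immediately detectable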

Today’s alternatives will likely look primitive in a few years. But whatever solutions we arrive at must address the public–private information problem head-on, in a manner completely divorced from the current massive, centralized data farms that serve as the basis of most information and economic exchanges today.3

Where Is Civil Society?

To help institute controls on platforms like Facebook, Google, Amazon, and Baidu, and to control the proliferation of artificial intelligence and its impact on our social structures, civil society—technology-focused organizations, NGOs, nonprofits—must play a part in determining who controls new technology and what rules and social norms are needed to protect society from it. These civil society organizations will have to be given access to the resources, both data and financial, that allow them to do so. We need to find a way to strengthen the role of civil society and provide it with resources for research and development focused on crafting AI and platforms with people at the heart of the solution.

So what might that look like? One solution is to provide equivalent access to data for those whose primary interest is the creation of public goods: a pool (or, in cloud parlance, a lake) of information, along with access to resources, for organizations that have clearly met the standard of a legitimate not-for-profit. A few principles should hold for those wishing to access resources or information in civil society. All results should be made available in the public domain, personal information must be protected with state-of-the-art cybersecurity and data-masking protocols, and governance practices of the type discussed below need to be in place. Organizations wishing to be considered should go through some form of peer review by academics and informed citizens to determine the type of access to data provided and the level of funding. Funding this kind of initiative will of course be a key part of the solution. One answer might be to require platforms to set aside a fund for it.
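
What might data masking mean in practice for such a pool? A minimal sketch follows; the field names and masking rules are illustrative assumptions rather than a standard, but they show the basic moves: drop direct identifiers and generalize quasi-identifiers before a record ever enters the shared lake.

def mask(record: dict) -> dict:
    """Strip identity and coarsen quasi-identifiers before sharing."""
    masked = dict(record)
    masked.pop("name", None)    # drop direct identifiers outright
    masked.pop("email", None)
    masked["age"] = f"{record['age'] // 10 * 10}s"    # generalize: 37 -> "30s"
    masked["zip"] = record["zip"][:3] + "**"          # truncate quasi-identifier
    return masked

raw = {"name": "Ada Lovelace", "email": "ada@example.org",
       "age": 37, "zip": "94110", "diagnosis": "asthma"}
print(mask(raw))
# {'age': '30s', 'zip': '941**', 'diagnosis': 'asthma'}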

How Can You Govern What You Can’t See?

As noted in chapter 3, we are about to enter a world in which artificial intelligence plays an increasingly dominant role, yet we aren’t close to understanding how AI programs learn or how they will respond to new commands or programming inputs. We don’t comprehend how AI software develops biases toward some items, language, or characteristics of individuals, or how its reactions reflect the unique realities it embodies. To most of us, the way AI programs analyze new information is still fairly alien, opaque, and sometimes seemingly random. Because of these features, even well-intended AI-based solutions could have seriously harmful consequences, and we may not know it until it’s too late.

Despite these unpredictable outcomes, AI has huge potential. It could add $15.7 trillion to the global economy by 2030, with the greatest economic gains in China and North America. But while we rush headlong into the era of artificial intelligence, another PwC study found that only 25 percent of executives surveyed would consider prioritizing the ethical implications of an AI solution before investing in it.4 That’s not a promising statistic. Business and governmental leaders cannot abdicate their responsibility for the consequences of artificial intelligence and must work toward building an ethical AI environment focused on these five primary dimensions:

  1. Establish effective governance across all elements of AI strategy, planning, ecosystem (internal and external players in a project), development, deployment, operations, and measurement. These questions should be addressed: Who is accountable for each facet of an AI implementation? How does AI align with the business strategy? What processes could be modified to improve the outputs and make them less opaque? What controls need to be in place to track performance and identify problems? Are the results consistent and reproducible?
  2. Create a set of practical, frontline-relevant principles for ethical AI practice, educate everyone involved, and hold them accountable for applying those principles. In instances of organizational culture change, such as the development of whole new principles, leadership from the top is essential.
  3. Insist on the development of transparent artificial intelligence—that is, AI whose data sources and decision rules can be inspected, debated, and adjusted in response to legitimate concerns from thoughtful reviewers.
  4. Design AI systems that are robust and secure. That means the AI program is sufficiently self-reflective that it can correct faulty decisions quickly with adjustments to its core algorithms. In addition, it should be designed so that a flaw in visualizing or understanding an idea, expression, statement, or object in a physical space cannot cause serious harm. In other words, no autonomous vehicle should be on the road if there is any chance that it could misread a stop sign and go full speed through a busy intersection.
  5. Root out bias from AI systems as much as possible. Recently reported cases of AI programs exhibiting racial bias in the criminal justice realm and gender discrimination in hiring were disconcerting. Of course, all decisions ultimately entail disappointing some people to the benefit of others; in each case, companies, managers, or individuals must balance the choices they make against the harm those choices cause and to whom. But those decisions should not be clouded by unfair or unethical biases. Similarly, with an AI system, developers and anyone responsible for managing these programs must be extremely mindful of tuning the technology to mitigate bias (a simple audit of the kind sketched after this list is one starting point) and to enable decisions that are as fair as possible and that adhere to an organization’s code of ethics as well as antidiscrimination regulations.
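
As a concrete starting point for item 5, the sketch below audits a model’s positive-decision rates across groups and applies the four-fifths disparate-impact heuristic used in US hiring guidance. The data, group labels, and 0.8 threshold are illustrative assumptions; a real audit would use many more records and additional fairness measures.

from collections import defaultdict

# Each entry records which group a person belongs to and whether the
# model made a positive decision (a hire, a loan approval, and so on).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    positives[group] += approved  # True counts as 1, False as 0

# Compare each group's selection rate; flag disparate impact when the
# lowest rate falls below 80% of the highest (the four-fifths rule).
rates = {g: positives[g] / totals[g] for g in totals}
worst, best = min(rates.values()), max(rates.values())
print(rates)
if best > 0 and worst / best < 0.8:
    print("Disparate impact flagged: ratio =", round(worst / best, 2))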

Unfortunately, we are unlikely to make the investments necessary to implement these five AI protective dimensions as long as the vast majority of executives do not believe we need to. Perhaps the way to begin changing that attitude is by approaching every AI development project with this question: What if it were my family member, friend, colleague, community, or organization that was inadvertently and seriously harmed by the unforeseen consequences of the AI system I am responsible for?

We Need to Learn How to Behave Ourselves

While technophobia is real and nearly every survey about technology adoption reveals a rippling undertow of fear about new hardware and software, we can’t seem to resist buying the stuff or jumping onto every platform that glides across our screens. We are well aware of the ability of social media to divide us, mislead us, and increase our ambient anxiety, and of tablets and phones to distract us and compel us to respond immediately to text messages, email, and other alerts, pulling us away from what we should be doing, often against our better judgment and to the detriment of our well-being.5 Yet we are unable to resist the allure of these shiny new technologies.

Because these technologies are such a recent phenomenon and so different from what we have experienced before, we cannot draw on earlier research about the impact of media like TV, radio, or even the early Internet on people to help us develop good habits, create smart policies and laws, or derive appropriate practices and safeguards for using and integrating this technology. Given that vacuum of information, it is imperative that we now study quickly, carefully, and honestly how a platform-based, machine-filled world is affecting individuals, communities, organizations, and governments.

Some of the best initial analysis in this area has come from University of California, San Francisco, neuroscience professor Adam Gazzaley, whose work with Cal State psychology professor Larry Rosen was highlighted in chapter 3. Among the solutions Gazzaley offers for altering our technology-obsessive behavior is the use of technology itself to mitigate its negative impact. To demonstrate this, Gazzaley developed a game called NeuroRacer, a three-dimensional interactive program in which players steer a car up a treacherous hilly road with their left thumb while watching for signs of specific shapes and colors that they must shoot down with their right thumb. The mixture of essential cognitive skills required to play NeuroRacer well—such as the ability to focus attention on two things at once and to use temporary memory to hold multiple pieces of information for immediate recollection—appears to improve neuroplasticity and the facility to filter out distractions.6

In addition, Gazzaley and others have stressed the importance of developing new habits and disciplines not only to neutralize the negative consequences of technology on cognition but also to enhance our ability to function in a world in which technology is ubiquitous. One of the more charming examples of new behaviors is the growing practice among young adults of stacking their cell phones in the center of the table during dinner or other social gatherings. The person who can’t resist the vibrating phones any longer and looks at their device first picks up the tab for everyone. Other suggestions include partitioning your day into separate designated times for work, social networking, answering emails, and browsing the Web so that the technology doesn’t become a constant interrupter; taking frequent breaks during which you get away from technology and spend a few minutes outside; or even setting aside ten minutes every few hours to daydream, nap, or meditate without distractions.


We all need to be good students of technology and its impact on society and ourselves. To take on this task responsibly and effectively, we need a somewhat paradoxical mindset; we need to be what I call tech-savvy humanists. That is, we must understand both what people require from technology to make their lives better and the full breadth of technology’s potential to alter the world around us in ways big and small. The problem with being merely an astute student of people and human systems is that, in a technology-driven world, you are irrelevant. But if you are a tech whiz with no understanding of people, you could do real harm. Ideally, the path to developing tech-savvy humanists would begin in elementary school.

As we seek to implement the solutions proposed in this chapter, we will need to do massive things fast on a global scale and adopt personal behavioral changes as well. A fractured world makes the larger answers harder to implement, but with technology increasingly dominating our existence—a trend that will accelerate in the coming years—the option to do nothing is not viable.
