CHAPTER 4

The Wider Impacts of Learning Technologies

While business objectives and user experiences are key considerations in evaluating any learning technology, an emerging technology also has wider impacts. In this chapter we go beyond business considerations and individual user experiences to consider how to evaluate those broader effects.

No technology exists in isolation from the rest of society, and its adoption can have short-, medium-, and long-term internal and external consequences for any organization. A technology’s impact can be felt in its effects on other technologies, social organizations, economic systems, and even politics (Gibbons and Voyer 1974). One of the future roles of learning and talent development departments will be to design workplace learning to consider these broader issues.

As jobs become more precarious and less stable, as brick-and-mortar offices decline in importance, and as employees become part of diverse connected systems and decentralized teams, learning leaders will become a critical part of developing collaborative networks that bring together disparate types of workers in a virtual setting. At the same time, greater flexibility and agility will be required from everyone to make such a system work. In the near future, we will all have to become learning designers or at least be prepared to engage in design thinking.

The objectives, techniques, and processes that allow users to act as co-designers are known as meta-design. Learning technologies are examples of socio-technical systems (STSs) that can be analyzed using meta-design methods. Meta-design “does not provide fixed solutions but a framework within which all stakeholders (designers and users) can contribute to the development of technical functionality and the evolution of the social side such as organizational change, knowledge construction, and continuous learning” (Fischer and Herrmann 2015). Using a meta-design approach allows us to develop technologies that empower users to be more than just consumers of experiences that have been created for them. It requires engagement at broader levels than a focus on individual user experiences (although optimal user experiences remain critically important). Because design must continually change to meet the needs of new users, emerging technologies, and not-yet-invented processes, meta-design is vital to facilitating the redesign of existing designs.

In this chapter we propose evaluation criteria for judging the impact of emerging learning technologies at individual, organizational, and global levels. Our objective is to make sure that any learning technologies of the future are life-affirming tools that allow users to infuse their own world with meaning and to use these tools for the purposes they have chosen themselves (Fischer 2003a). At the same time, we need to recognize that most technologies are networked and interdependent, creating a dynamic and complex social-ecological system that requires multiple perspectives to understand. And, while it is well beyond our scope in this publication to do a full in-depth evaluation of all the issues accompanying the use of emerging learning technologies (or technology in general), you should review these three levels to see if they are relevant to the decisions about whether to use a particular learning technology in your workplace.

Impact on Individuals

In chapter 3 we looked at individuals as users to evaluate their experiences with a specific learning technology. But technologies can have other kinds of impacts on individuals beyond the user experiences of the learning technologies they encounter. For example, there are non-users within an organization (including other computer systems that employ data generated by a new platform) that may be indirectly affected by the introduction of a new technology: The adoption of a sales enablement platform will have the most impact on a firm’s sales staff, but will also have implications for marketing staff, designers of the company’s brand image, the IT department, C-suite executives, and the computer systems that manage both accounting and customer relationships. Technologies can have a positive or negative impact, depending on whose interests are being served.

Here are some possible positive impacts of emerging digital technologies for employees:

• Positive work changes. New work experiences may be less boring and repetitive than many industrial forms of work. For example, mobile devices that allow people to work from anywhere at any time and retrieve information at the time of need might be more compatible with the demands of their home life.

• Connectivity. Keep in touch with family, friends, and peers worldwide, and meet new people at almost no cost to the individual.

• Information sharing. Share medical, safety, and other valuable information almost instantly.

• Trust building. Use peer recommendations and new technologies like blockchain to help build trusting relationships.

• Democratization of content creation. Anyone can make user-created content available.

• Health and fitness monitoring. Track the status of your body through self-tracking (also known as the quantified self).

• Coordination and collaboration. Communicate with others through social media to make it easier to work together at a distance and to organize for collective action.

• “Sousveillance.” Offer protection from oppressive authority by making the actions of those in power visible.

• Lower costs. Save money through access to wider markets and price comparison.

• Save time. Perform online tasks that would normally require travel and queuing.

• Entertainment. Pursue hobbies and interests, and view an immense amount of video content.

• Learning. Expand knowledge and skills through online courses, educational materials, and search. Learn at the time of need or as a by-product of being online.

But all technologies have a dark side, and negative impacts may include:

• Negative work changes. Work may become more intense, with expectations of being available and performing 24 hours a day.

• Information overload. Too much information or connectedness can cause stress and anxiety.

• Addictive design. Internet platforms are often designed to be addictive in order to increase profitability.

• Polarization. Open systems without regulation may contribute to polarization around issues.

• Constrained thinking. Computational thinking may limit human creativity and the range of thought.

• Built-in biases. Biases in the default settings of forms, standard responses, and algorithms can lead to exclusion and a lack of diversity, and have produced software designs that allow and even facilitate harassment and abuse.

• Reduced human contact. Digital technologies may reduce face-to-face interaction and the sense of other people’s presence, which can lead to a shallow engagement with life and little time or motivation for reflection and concentration.

• Threats to privacy and control. There are obvious threats to privacy and a danger of others controlling individuals, especially vulnerable populations; employees may be required to wear devices that track their every movement.

• Breaches of trust. Rapidly spreading conspiracy theories, fake news, and other breaches of trust are problematic.

• Online abuse. Negative social pathologies such as trolling, bullying, and anonymous defacement of digital properties have occurred.

• Questionable information. Questions arise about the authority and accuracy of the information presented, and about how to make rational judgments about the information we receive.

• Concentrated ownership. Issues of ownership of information and the channels or platforms through which it is transmitted can lead to a digital divide, in which a few wealthy people gain immense control while most people become more impoverished and less powerful.

• Barriers to access. Barriers to accessing technology may work against specific groups, creating another digital divide in which one group has access to learning materials while another does not.

• Workplace ethics. The use of digital devices often raises ethical issues in the workplace.

Computational Thinking

One of the dangers raised by critics of digital technologies is that we start to think like computers and accept the information presented by technologies as more real than our own thinking. As James Bridle (2018) notes, “we have come to live inside computation,” and it often has become the foundation of our thought. He adds, “As computation and its products increasingly surround us, are assigned power and the ability to generate truth, and step in to take over more and more cognitive tasks, so reality itself takes on the appearance of a computer; and our modes of thought follow suit. … That which is possible becomes that which is computable.”

The expression user experience, and how we design for it, often assumes a passive role for users, where UX designers make decisions based on their observations and intuitions about how a piece of hardware or software will be used by individuals. But human beings are naturally creative, and often want to modify or extend the tools they use, adapting them to address the problems they face. A movement for end-user development, led by Gerhard Fischer at the University of Colorado, develops technologies that empower people to be more than just users or consumers, and to invest technology with their own meanings and purposes (Fischer 2003b; Fischer and Giaccardi 2006). This movement builds open, evolvable systems that put users in charge by “under designing” programs to allow for design elaboration at the time of use (Fischer and Scharff 2000; Fischer, Fogli, and Piccinno 2017).

Biases

For the most part, our digital technologies have been developed by white males, with a notable lack of representation from women or people of color. In Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech, Sara Wachter-Boettcher (2017) ably documents the many biases found in digital technologies. Such biases can be found in learning content, forms that need to be filled out, canned responses to questions and actions in interactive software, and in the stereotypes deep in programming algorithms that drive many types of software. Even robots have gender stereotypes attached to their design; for example, “research shows that users tend to like a male voice when an authoritative presence is needed and a female voice when receiving helpful guidance,” and this is often reflected in robot design (Simon 2018). Sometimes, there is pressure on programmers to build in biases for a client’s product to meet the requirements of a contract (Sourour 2016).
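To make this concrete, here is a small sketch of how bias can hide in a form’s validation rules. The field names, options, and limits below are hypothetical illustrations of the kinds of defaults Wachter-Boettcher describes, not examples taken from her book:

```python
import re

# A hypothetical signup form. Every "default" below is a design decision
# that can quietly exclude real users.
EXCLUSIONARY_FORM = {
    "title_options": ["Mr.", "Mrs."],      # assumes binary gender and marital status
    "name_max_length": 10,                 # rejects many real names
    "name_pattern": r"^[A-Za-z]+$",        # rejects hyphens, apostrophes, and accents
}

# A more inclusive revision of the same form.
INCLUSIVE_FORM = {
    "title_options": ["Mr.", "Mrs.", "Ms.", "Mx.", "Dr.", "Prefer not to say"],
    "name_max_length": 100,
    "name_pattern": r"^\D+$",              # accept anything that isn't a digit
}

def accepts_name(form: dict, name: str) -> bool:
    """Return True if the form's validation rules would accept this name."""
    return (len(name) <= form["name_max_length"]
            and re.match(form["name_pattern"], name) is not None)

for name in ["O'Connor", "Nguyễn", "Wachter-Boettcher"]:
    print(name, accepts_name(EXCLUSIONARY_FORM, name), accepts_name(INCLUSIVE_FORM, name))
```

Run against these three real names, the “exclusionary” rules reject all of them; none of the rejections is malicious, just a default nobody questioned.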

These biases seem to be an inherent part of tech culture, which originated from the libertarian ideas of maximum freedom and individual eccentricity. Meredith Broussard (2018), in Artificial Unintelligence: How Computers Misunderstand the World, summarizes the situation:

We have a small, elite group of men who tend to overestimate their mathematical abilities, who have systematically excluded women and people of color in favor of machines for centuries, who tend to want to make science-fiction real with little regard for social convention, who don’t believe that social norms or rules applied to them, who have unused piles of government money sitting around, and who have adopted the ideological rhetoric of far-right libertarian anarcho-capitalists. What could possibly go wrong?

One of the things that has gone wrong is the astounding amount of harassment and abuse that takes place among users of online platforms such as Twitter and Facebook. Some of the ways that social media can lead to harassment in the workplace include (Farrell 2012):

virtual harassment: harassment through a social media site, for example, friending a co-worker on Facebook and then sending offensive messages (or repeated requests for a date)

textual harassment: harassing, intimidating, or sending inappropriate text messages

sexting: sending sexually explicit or offensive photos or videos through electronic media

cyberstalking: harassing employees by following them on blogs, posts, and social websites.

Recently, executives responsible for these platforms have tried to clean up their content by removing the most offensive material, reducing the influence of bots, eliminating fake accounts, hiring content moderators, and using AI to help police their platforms. But they still have a long way to go (Nieva and Hautala 2018).

Privacy and Control

Facial recognition software has become so accurate that artificial intelligence algorithms are able to recognize and track people as they move around their world. In a 2018 article about China, Paul Mozur of The New York Times reports, “Beijing is embracing technologies like facial recognition and artificial intelligence to identify and track 1.4 billion people. It wants to assemble a vast and unprecedented national surveillance system, with crucial help from its thriving technology industry.” Already, at least 18 countries have purchased this software from China to monitor their citizens (Freedom House 2018). “It’s like Christmas for repressive regimes,” says Sophie Richardson, China director at Human Rights Watch (CBC 2018). As facial recognition software becomes more readily available, other governments, large multinational corporations, and many more organizations will use this type of software as a matter of course, unless it is regulated.

One danger inherent in workplace learning software is that companies will monitor employees in order to control them, or to detect when they are stepping out of alignment with company policy. As Douglas Heaven (2017) writes in his anthology of New Scientist articles on AI, “Start to slack off or show signs of going rogue, and an algorithm could tattle to your boss. … The idea is that it could detect when someone might pose a security risk by stepping outside their usual behavioural patterns.”

This has already started at large corporations such as Uber and Amazon, which use forms of control software. Uber’s ethical problems in its treatment of drivers are well known, including tolerance of sexual harassment of female drivers, sexist promotions, and failure to pay minimum wage or benefits, along with many other scandals and controversies involving its drivers, executives, and customers (K. Taylor 2017). Across Europe, Amazon workers have complained about “timed toilet breaks and strict targets, with many falling asleep on the warehouse floor” (The Week 2018). In his book New Dark Age: Technology and the End of the Future, James Bridle (2018) describes conditions for Amazon workers in the UK that are facilitated by new tracking technologies:

The handheld devices carried by Amazon’s workers and mandated by its logistics are also tracking devices, recording their every movement and keeping score of their efficiency. Workers are docked points—meaning money—for failing to keep up with the machine, for toilet breaks, for late arrival from home or meals, while constant movement prevents association with fellow employees. They have nothing to do but follow the instructions on the screen, pack and carry. They are intended to act like robots, impersonating machines while remaining, for now, slightly cheaper than them. Reducing workers to meat algorithms, useful only for their ability to move and follow orders, makes them easier to hire, fire, and abuse.

Another interesting impact playing out with Uber and Lyft (and the gig economy generally; Airbnb and VRBO are two other examples) is rating inflation. A rating of less than five stars from a patron is seen as a hostile rebuke of the service provider. (Basically, it’s supposed to be like third-period health class in high school—show up and you’ll get an A.) This artificial control by crowdsourcing, which turns a reward system into a punitive one, is another dark side of this social technology.

The Ethics of Learning Technologies

In general, ethics is about what people should do in specific situations based on accepted community norms, including the fair resolution of conflicts of interest according to shared principles. Community norms can involve social influence on others; political pressure; support for cultural, linguistic, and bodily diversity; and etiquette—how people treat one another while interacting through their digital devices.

Concerns over the ethics of using digital devices have been expressed for several decades. In a 1986 article for MIS Quarterly, Richard O. Mason, then a professor of management sciences at Southern Methodist University, identified four ethical issues of the information age:

privacy: which information can be withheld and which cannot, under what conditions and with what safeguards

accuracy: the authenticity and fidelity of stored information

ownership: both of the information and the channels through which it is transmitted

accessibility: what information a person or an organization has a right or privilege to obtain, under what conditions, and with what safeguards.

Many more ethical issues with learning technologies have been identified since then. In 2006, Lin and Kolb wrote about “ethical issues experienced by learning technology practitioners in design and training situations.” They noted, “A laundry list of such ethical issues includes, but is not limited to, digital copyright infringement, violation of online private information, and misuse of learning technologies in learning situations.” Ashman and colleagues (2014) zeroed in on ethical issues with personalization technologies used in e-learning, listing privacy compromise, lack of control, reduced individual capability, and the commodification of education as some of their concerns. Mayes, Natividad, and Spector (2015) add that as educational technologies “continue to evolve, ethical issues such as equal access to resources become imperative.” Pardo and Siemens (2014) add trust, accountability, transparency, and data ownership and access to the list of ethical issues for learning analytics.

In particular, workers and trainees in settings with vulnerable populations, such as hospitals, schools, day care centers, prisons, rehabilitation centers, and professional offices, need to ensure proper informed consent and maintenance of privacy before recording, photographing, or videotaping individuals or groups in these settings. The casual creation of digital data or materials for professional or personal purposes, and the ways they are used, raise issues of power, accountability, and vulnerability.

For example, employees of prisons, hospitals, and psychiatric facilities are in the position of being able to take advantage of those in their care through the recording and distribution of personally embarrassing or compromising materials that are potentially harmful. Preethi Shivayogi (2013), a doctor serving on an ethics committee in Bangalore, India, says organizations serving vulnerable populations need to have solid safety monitoring plans in place with data safety monitoring committee supervision and, wherever applicable, observational study monitoring boards. A framework developed by Asif and colleagues (2013) for an integrated management system for corporate social responsibility includes a case study of how the nonprofit organization Truckee Meadows Tomorrow (TMT), in partnership with Charles Schwab Bank, facilitates community responsiveness to the most vulnerable populations in its geographical area.

A big concern, of course, is privacy, but this is also a two-way street. The rise of “sousveillance,” whereby ordinary citizens use mobile devices to document and distribute images of abuse by those in positions of power, can counter some of the issues of privacy and use of power that technology also enables (Mann, Nolan, and Wellman 2003). However, this is a complex area that is still in flux and needs further exploration.

With the advent of big data and machine learning, new ethical concerns have come to the forefront, including “the abuse of probabilistic prediction” by emerging artificial intelligence systems (Heaven 2017; Bostrom 2014). AI systems now can push us toward purchases or new behaviors, and will soon be able to predict our future behavior and then try to move us in that direction, or in a direction that reflects the values and biases of those who program or own learning and social platforms. “The trick,” writes Heaven (2017), “will be to accept that we cannot know why these choices were made, and to recognize the choices for what they are: recommendations, mathematical possibilities. There is no oracle behind them.”

At the most extreme is software that, without human guidance, makes decisions that end up changing people’s lives. For example, AI systems can “decide who gets a bank loan, who gets a job, who counts as a citizen and who should be considered for parole” (Heaven 2017). And, with immutable blockchain technology available to store a person’s complete social history, work history, successes, and failures, AI algorithms will have a much richer source of data with which to work (Ahmed 2018).

Today, for better or worse, these technologies are being introduced gradually through a process known as nudging (Thaler and Sunstein 2009). This occurs when people’s decisions and actions are manipulated or nudged by the state to reach certain outcomes desired by officials or politicians (Helbing 2015). This tendency toward conformity is also seen in the well-publicized “filter bubble” effect that the AI-driven personalized search results from search engines like Google are known to produce (Pariser 2011).

As this situation moves beyond the control of individuals, there is a call for new forms of governance of these powerful tools. Ian Harris and his colleagues (2011) in the UK have developed a framework called DIODE to assess the ethics involved in the use of specific technologies, including radio frequency identity devices (RFID), smart dust, biometrics, nanotechnology, and robotics. DIODE reflects the five stages of their methodology, namely, definitions, issues, options, decisions, and explanations (a simple sketch of how such an assessment might be recorded in code follows the list):

• Define questions. Ensures that the assessor has defined the technology or project to be examined and is, therefore, able to frame the ethical questions.

• Issues analysis. Ensures that all relevant parties who might be affected are considered (and where appropriate consulted) and that the relevant risks and rewards are examined from both teleological and deontological perspectives.

• Options evaluation. Ensures that relevant choices are made. This is not merely a go/no go assessment; often the answer will be to go ahead, with appropriate safeguards or checkpoints along the way.

• Decision determination. Ensures that the assessor can clearly state the ethical decisions made and reasoning behind them. It encourages the assessor to revisit minority interests at the stage before making the decision. The decision should include guidance on the circumstances that would lead the assessor to revisit the problem.

• Explanations dissemination. Ensures that the decisions are communicated appropriately, including public domain publication wherever possible.
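DIODE is a methodology rather than a piece of software, but its five stages map naturally onto a simple assessment record. The following minimal sketch is our own illustration, not part of Harris and colleagues’ framework, and the example technology and answers are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class DiodeAssessment:
    """One ethics assessment following the five DIODE stages."""
    technology: str
    definitions: str = ""                          # D: the technology and the ethical questions framed
    issues: list = field(default_factory=list)     # I: affected parties, risks, and rewards
    options: list = field(default_factory=list)    # O: choices, including safeguards or checkpoints
    decision: str = ""                             # D: the decision, its reasoning, revisit conditions
    explanation: str = ""                          # E: how and where the decision is communicated

    def is_complete(self) -> bool:
        """All five stages must be documented before the assessment is finished."""
        return all([self.definitions, self.issues, self.options,
                    self.decision, self.explanation])

# Hypothetical example: assessing an activity-tracking module in a learning platform.
review = DiodeAssessment(technology="LMS activity-tracking module")
review.definitions = "Should learner activity data be collected, and for what purpose?"
review.issues = ["learners (privacy)", "managers (performance pressure)", "IT (data security)"]
review.options = ["collect nothing", "anonymized aggregates only", "opt-in consent with full data"]
review.decision = "Anonymized aggregates only; revisit if de-anonymization becomes feasible."
review.explanation = "Publish the policy on the company intranet and in onboarding materials."
print(review.is_complete())  # True
```

A record like this also serves the explanations stage: because the reasoning is written down, it can be disseminated and revisited later.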

More technology will not ensure the ethical behavior of those taking training. Karen Fields (2016) holds that many employees follow the ethics of their senior leadership. She writes that if executives are “genuinely interested in avoiding ethical breaches, then they need to be proactive in communicating with their management teams to make it clear that they’re serious about what is being taught to the rank-and-file.”

The Shift to Lifelong Learning

Rapid technological change and the provision of new platforms and forms of content are transforming the learning landscape. For many, the shift from rigid credential-based models to flexible learner-driven ecologies of creativity and collaboration gives hope for the future, as we move toward knowledge-based societies in which lifelong learning is the norm.

It used to be that post-secondary students took courses aimed at a specific field, culminating in a terminal degree that indicated they were ready to join the workforce and (hopefully) find a lifelong career. Now, the average worker changes jobs every 4.4 years, and by 2020, work-related knowledge acquired by college students is expected to have a shelf-life of less than five years (Hagel et al. 2015). Workers navigating this ever-changing world of jobs have been described as having “Protean” careers (D. Hall 2004), which require continuous learning and development to adapt to the changing professional skills, interests, and career identities demanded by this new reality. A new report from the National Academies of Sciences, Engineering, and Medicine (2018) suggests that, instead of relying on traditional training, an organization’s culture can play a key role in facilitating an employee’s development in this new work environment by doing the following:

• Promoting a “big picture” perspective from which employees know what the goals of the organization are. This enables workers to align development with organizational goals.

• Providing assignments that permit people to stretch beyond their job description. In learning organizations, people are assigned tasks that provide opportunities to do new things, learn new skills, and apply what they learn back on the job.

• Fostering a climate where people can learn from their mistakes. In learning organizations mistakes are tolerated, particularly when people are trying new things in the early stages of learning. Research suggests that error-prone practice can enhance learning, so if mistakes are tolerated, they can lead to greater development.

• Making employees accountable for their own development. For example, performance evaluations might include ratings for engaging in autonomous career-related professional development.

The move to lifelong learning is well underway. A 2016 Pew Research Center survey found that 73 percent of American adults agree that the phrase “I think of myself as a lifelong learner” applies “very well” to them and another 20 percent say it applies “somewhat well” (Horrigan 2016). This shift toward lifelong learning started in the late 1960s and early 1970s (R. Harris 1999), but has accelerated in the past decade:

The Great Recession that began in 2008 was an especially brutal reckoning for many American workers about their place in a changing economy, the reliability of their jobs, the value of their skills and education, their place in the class structure of America, the state of the benefits safety net, and their prospects for retirement. The recession has prompted much commentary about the “skill recession” and the role of learning centers both in traditional settings and in cutting-edge digital platforms in helping workers adjust to new economic realities. (Horrigan 2016)

A new ecosystem of learning providers and support technologies is emerging, independent of more traditional providers of workplace learning, such as post-secondary institutions, company-sponsored training, and trade conferences. Some of the innovations at the edges identified by John Hagel and his colleagues (2015) include:

• third-party education programming providers, including MOOCs, microlearning, and boot camps

• learning mobilizers, which facilitate collaboration among not just students, but a diverse array of community members and corporate partners

• creation spaces, which provide locations and tools for students to build things

• open-source communities for people to share skills and knowledge

• agent businesses, which help students strategically navigate through all their learning options

• third-party learning aggregation platforms that gather all learning experiences and recognition of learning together.

It’s clear that the new user-centered design approaches and the “What’s in it for me?” calculations that make these technologies attractive to individuals can also deliver significant advantages at the organizational and societal levels.

Impact on Organizations

Beyond the impact on individuals, emerging learning technologies can exert a variety of influences on an entire organization. These influences can include:

• changes in the structure of the organization

• shifts in power relationships and politics within an organization

• improved knowledge flow within an organization

• spillover effects from one organization to another.

From Rigid to “Liquid” Organizations

New networked platforms enable collaboration, support communications among members of virtual teams, help increase trust, allow for mobility and working from anywhere at any time, and grant access to immense stores of shared resources and information. Organizations have moved from being rigidly organized fixed entities to “liquid” ecosystems that can evolve and change rapidly (Bounfour 2016). Change results in fluid knowledge that can be valid one day and invalid the next.

Organization charts have become passé; today, dynamic maps are needed to tell us what is happening within systems. Person-to-person networking has been displaced by virtual networking that stretches around the globe and is not bound by time or place. And as the need for speed has increased, business units have been given more autonomy so they can respond quickly to changing conditions.

Workplace Politics and Power Relationships

Much of what we think about learning is based on our own classroom experiences in schools and postsecondary education. New understandings of workplace learning can be obtained by abandoning an educational perspective on learning, because workplaces are often quite different from the reality portrayed by the slowly changing curricula of educational institutions.

Power relations in the workplace are particularly different. Within an organization, internal politics can influence opportunities to learn, and how employees interact with the workplace. As Dutch researchers Doornbos, Bolhuis, and Simons (2004) explain:

Workplaces can be highly competitive and the opportunities to learn unevenly distributed. … Cliques, politics, and power may intentionally or unintentionally influence the distribution of opportunities to learn. Those with more access to power can claim learning opportunities, and they can also deny opportunities for learning, whereas those with less power may find access to what they want difficult. In contrast, access to learning is assumed to be equal within a formal education setting.

Political behaviors within an organization can have a strong impact on workplace learning as documented by Cacciattolo (2013), who identified political behaviors such as “narcissism,” “the new employee considered as a threat,” and “bureaucracy.” Many of these behaviors do not fit with our models of learning within classrooms, but can occur in a work setting.

External politics, those which stakeholders cannot control, can also influence the provision of workplace learning. Examples of external politics that can affect workplace learning are employment laws, tax policies, trade restrictions, trade reforms, international agreements, environmental regulations, government funding programs, and military spending. Indeed, many of the emerging learning technologies used around the world were first funded by the U.S. Department of Defense, including the Internet, e-learning, and GPS.

Networking and Knowledge Flow

Computer networking has had a tremendous impact on knowledge flow within and among organizations. In the past, traditional bureaucracies hoarded and guarded information, producing the famous silo effect that has often been criticized as contributing to organizational inefficiencies. However, starting with the first email sent in 1971, communication through digital networks has exploded around the world, changing the flow of knowledge within and beyond organizations.

Within organizations, networking has vastly improved internal communications and the sharing of company knowledge. In addition, knowledge is flowing out of organizations through the same networks, causing spillover effects because knowledge cannot be contained within the walls of a single organization. The movement of internal information to outside entities became even more pronounced with the popularity of mobile phones equipped with cameras, Internet connections, and a variety of messaging apps.

Because of the rapid spread of connectivity around the world, new approaches to evaluation frameworks for innovation and learning networks have been developed. Elise Ramstad (2009) saw this coming a decade ago when she wrote, “new types of broader networks that aim to achieve widespread effects in the working life have emerged. These are typically based on an interactive innovation approach, where knowledge is created jointly together with diverse players. At the moment, the challenge is how to evaluate these complex networks and learning processes.” Her solution is an evaluation framework with three elements: “the micro level of the work organization, the meso level of the innovation infrastructure and the macro level of innovation policy and broader society.” Learning is not something that only happens within individuals; it is a group phenomenon that involves employees and workplaces, research and development units, and policymakers all working and sharing together.
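Ramstad’s three levels also suggest a simple way to organize evaluation questions for a learning network. The questions in this sketch are our own illustrative examples, not drawn from her framework:

```python
# Illustrative evaluation questions organized by Ramstad's three levels.
EVALUATION_LEVELS = {
    "micro (work organization)": [
        "Did employees' skills and work practices improve?",
        "Is knowledge being shared across teams?",
    ],
    "meso (innovation infrastructure)": [
        "Are R&D units, consultants, and workplaces learning from one another?",
        "Do joint projects produce knowledge that outlives the project itself?",
    ],
    "macro (innovation policy and society)": [
        "Are results disseminated beyond the participating organizations?",
        "Do policymakers adjust programs based on what the networks learn?",
    ],
}

for level, questions in EVALUATION_LEVELS.items():
    print(level)
    for question in questions:
        print("  -", question)
```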

Impact at a Global Level

Based on the libertarian ideals of many of the original founders of the Internet, global networking has been seen as a commons to be used by everyone. Electronic technologies were originally seen as technologies of freedom that would give a voice to all and usher in a new age of peace, love, and understanding (de Sola Pool 1983). Unfortunately, it hasn’t quite turned out that way.

Control of these emerging digital technologies is critical because some will attempt to use them to gain great power and wealth at the expense of others. All these new technologies have the potential to hijack our minds and steal our attention away from more important things in life, argues Tristan Harris (2016), founder of the Center for Humane Technology and the “Time Well Spent” movement. In his work as a design ethicist, Harris shows how a few hundred people, using principles of behavioral psychology, can influence and manipulate billions of people to buy more or to act against their own interests.

With the rapid growth of e-commerce and the building of “walled gardens” by a few giant platforms that aggregate users and collect data from them, the Internet has mostly become a commercial space governed by the rules of the market. While users treat the offerings of digital providers as public spaces, in reality, these providers are, for the most part, businesses dedicated to maximizing profits for their shareholders. Companies like Amazon, Apple, Facebook, Microsoft, and Google control the vast majority of transactions on the Internet, and make immense amounts of money doing so. They do this by using the data they collect to manipulate us into buying even more goods, so they can make even more money (Galloway 2017). Beyond using our data for increasing their own business revenue, these large Internet companies sell large quantities of our data to other companies. For these Internet giants, we have become the product.

We are at a crossroads as to how the Internet will be governed in the future. The United States continues its market-centric approach with light or no regulation, which allows innovation to grow rapidly. Americans have done this by exempting the big platforms like Facebook, Google, and Twitter from liability for content posted by their users. While this has promoted rapid growth, it has also led to many of the abuses that we now see on these platforms.

Some countries, such as China and others ruled by dictatorships, see the need for “cyber-sovereignty,” the desire to control all computer networking within their national borders, and to project their influence around the world (Segal 2018). On the other hand, India has developed a new inclusive network, where the government has built open systems as public goods (Nilekani 2018). The Europeans took a third approach and introduced the General Data Protection Regulation (GDPR), a new set of rules that went into effect in May 2018 that greatly strengthens the protection of private data (Dixon 2018). It is not clear at this point which approach will win out in regulating the future of the Internet.

The stakes are high. Emerging information and communications technologies (ICT) are being networked and interconnected to create something that has never existed before—a global “brain” that is open to all and will be connected to the various specialized artificial intelligences we are building. What we don’t know is how this new form of intelligence will turn out—some think it might take over and enslave the human race, while others argue that it will be used to solve some of our most intractable problems (Heaven 2017). It is surely a critical question as to who controls and regulates these powerful platforms and the global infrastructure that makes it all possible (Kornbluh 2018).

Perhaps the greatest crisis that humanity has ever faced is the prospect of climate change caused by the burning of fossil fuels since the beginning of the first industrial revolution. More than 97 percent of the world’s climate scientists agree that the planet is heating up due to the use of non-renewable fossil fuels, and that the problem may be accelerating faster than we think (Wallace-Wells 2017). Most of the countries in the world (with the notable exception of the United States) have signed on to a global agreement to limit the impact of this threat to life as we know it.

You may be wondering how emerging learning technologies are implicated in this trend. Like all computer-based technologies, they use a lot of electricity, which is currently generated largely by burning fossil fuels such as coal and oil. According to Bryan Walsh (2013), “the digital economy uses a tenth of the world’s electricity—and that share will only increase with serious consequences for the economy and the environment.” By 2015, the data centers around the world that make up the “cloud” consumed 3 percent of the world’s electrical power, a carbon footprint equal to that of the airline industry and more electricity than the United Kingdom used that year. The demand for electricity for data centers is expected to triple in the next 10 years. A study in Japan suggested that the power required for all digital devices and services would outstrip that country’s generating capacity by 2030 (Bridle 2018).
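Putting the cited figures together gives a sense of the pace. This back-of-envelope sketch uses only the numbers mentioned above, plus the simplifying assumption that total world electricity use stays flat:

```python
# Back-of-envelope projection from the figures cited in the text.
current_share = 0.03   # data centers' share of world electricity (circa 2015)
growth_factor = 3      # demand expected to triple...
years = 10             # ...over the next 10 years

annual_growth = growth_factor ** (1 / years) - 1
print(f"Implied annual growth in data-center demand: {annual_growth:.1%}")  # ~11.6%

# If total world electricity use stayed flat (a simplifying assumption),
# the data-center share after 10 years would be:
print(f"Share of world electricity in 10 years: {current_share * growth_factor:.0%}")  # 9%
```

Even allowing for growth in total generation, a sustained double-digit growth rate in demand is hard to reconcile with flat or slowly growing electricity supplies, which is exactly the problem the Japanese study points to.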

On the other hand, there are valid arguments that the use of computer networking and mobile devices in learning and development is actually good for the environment because it reduces the need for travel to a specific location for training, leading to cost savings in transportation, accommodation, and food necessary to gather employees together. Against that argument, we need to consider the costs for manufacturing, transporting, selling, powering, and maintaining all the equipment and content that goes into online training. We also need to recognize that global networking and online learning platforms may be the best technologies to support the spread of learning worldwide that will be necessary if we are to solve environmental and other major international problems in the near future.

Evaluating the Wider Impact of Emerging Technologies

In the last chapter we focused on different types of users and their experiences with emerging learning technologies. But the impacts of adopting a new technology are wide-ranging and far-reaching. They can affect many people who are not end users; shake up teams, companies, and other types of organizations; and even have a significant influence on regions, nations, and the planet as a whole. While most of us will focus first on local impact for our end users, it is important to evaluate other potential consequences of introducing new technologies into the workplace.

For example, people with a disability may be affected differently than those without one. This may be positive: a new technology can enable new forms of accessibility or augment the abilities that a person with a disability might have. Because innovation often starts at the edges of society, many new devices and uses of new technologies are initially developed for people with disabilities. Examples include text readers, speech recognition software, speech synthesis, electric wheelchairs with navigation using a person’s thoughts, exoskeletons, cochlear implants, and the telephone (initially developed for Alexander Graham Bell’s wife, who was deaf). At the same time, however, new technologies can also have a negative impact if they introduce a barrier that excludes people with a disability from full participation in society. Barriers can be part of the impact of emerging technologies for other groups, as well, because of biases related to gender, race, or culture. All these impacts on individuals need to be evaluated.

The introduction of new technologies can also affect the work experiences of people within an organization. Instead of slowly learning about a process or a customer from more experienced employees, a great deal of information about what is needed for performing one’s job is often available instantly on computer monitors or mobile devices. Similarly, training that was once only available in classrooms can now be offered to each employee on demand through their mobile phone. These are only a few of the effects new technologies can have on work experiences.

Changes in how work is experienced can ultimately affect how a company is organized and managed. When roles change, so does an organization’s structure, and how people within that structure relate to one another. This can raise ethical issues such as privacy, control, harassment, and abuse, especially if your organization is working with people with vulnerabilities. Do you at least have a checklist for evaluating these issues?

Finally, you need to be aware that all technologies affect the environment and may or may not be sustainable in the long term. Does the adoption of the technology you are evaluating entail harsh working conditions for people in other countries? For example, some of the rare minerals used in the production of mobile phones come from mines in countries with horrible working conditions or that use child labor. By investigating and evaluating potential issues, you are in a position to make environmentally friendly and sustainable choices, while becoming aware of the working conditions of other people in the supply chain for the technology you are using. In this age of abundant information, it’s not difficult to find answers to questions about a technology’s ethical impact.

At minimum we recommend asking the following questions about the impact of an emerging learning technology on your employees, organization, and the world in which you live (a simple way to turn these questions into a review checklist is sketched after the list):

1.   Is the emerging learning technology accessible for people with a disability? Does it have any obvious biases or barriers for specific groups (for example, groups based on gender, race, or culture)?

2.   Would the adoption of this technology introduce new work experiences into your company? Does it allow for mobile learning?

3.   How would the adoption of this technology change your company’s organizational structure?

4.   Does this technology introduce ethical issues or problems (such as at-risk populations, privacy, and control)?

5.   Is this emerging technology environmentally friendly and sustainable?
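One way to keep these questions from being skipped is to treat them as a required checklist in every technology review. Here is a minimal sketch of that idea; the question wording is condensed from the list above, and the example review is hypothetical:

```python
# The five wider-impact questions as a reusable review checklist.
IMPACT_QUESTIONS = [
    "Is the technology accessible, and free of biases or barriers for specific groups?",
    "What new work experiences (including mobile learning) would adoption introduce?",
    "How would adoption change the company's organizational structure?",
    "Does the technology raise ethical issues (at-risk populations, privacy, control)?",
    "Is the technology environmentally friendly and sustainable?",
]

def unanswered_questions(answers: dict) -> list:
    """Return the questions that still lack a documented answer."""
    return [q for q in IMPACT_QUESTIONS if not answers.get(q)]

# Hypothetical in-progress review of a VR training platform.
answers = {
    IMPACT_QUESTIONS[0]: "Accessibility audit passed; headset fit tested with diverse users.",
}
for question in unanswered_questions(answers):
    print("Still to evaluate:", question)
```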

Looking Ahead

In the end, it is a matter of choosing the kind of world in which we want to live, and collectively, as a human community, anticipating the future and deciding how to get there—a concept known as anticipatory governance. Helbing (2015) articulates the choices we are facing in light of emerging technologies: “We must decide between a society in which the actions are determined in a top-down way and then implemented by coercion or manipulative technologies (such as personalized ads and nudging) or a society, in which decisions are taken in a free and participatory way and mutually coordinated.”

What is needed is nothing less than a revolution in individual and collective learning, whereby humanity solves the problems it is facing rather than surrendering to those who control and own emerging technologies.

A key question for us as learning professionals is whether the use of digital learning technologies is sustainable and effective; that is, does it promote change in a positive way that meets and enhances present and future human needs? We need to be able to show that educational and workplace learning work better with emerging technologies than without them. We turn to that question next.
