Chapter 9. The Digital Divide and the IoT

The IoC already exacerbates class differences. Can we keep the IoT from doing the same?

The dawn of the web (and the original Internet of Computers) introduced concerns about the digital divide: how this new technology might amplify class differences, particularly because education and affluence appeared to be necessary to get on board in the first place. By reaching into more aspects of life and even into basic physical infrastructure, the IoT may also increase digital division. This chapter will consider:

  • How digital divides emerged in the IoC

  • How digital divides may continue in the IoT

  • How digital divides may emerge when IT is required to support basic rights

  • How IoT applications may enforce preexisting socioeconomic divides

  • How IoT applications may create divides even among connected classes

How Digital Divides Emerged in the IoC

I remember the dawn of the web. Through the distorted lens of hindsight, it’s easy to wax lyrical about the sudden emergence of pervasive client browsers, the common HTTP protocol—and the electricity as we all realized that this ability for ordinary citizens and enterprises to start exchanging information and interacting would change everything. Fast-forward now a quarter century, and one could add the conclusion: “See, we were right—it did change everything.”

To some extent, these stories are true. At the beginning, there was an electric excitement—and I even had the chance, as a government security analyst, to work with a number of citizen-facing agencies that were exploring how this new technology could increase efficiency and access to services. To paraphrase a standard client vision:

We are legally mandated to provide information service X to citizens but we are only given a limited budget, so maybe this internet web thing can help us reach more citizens without having to build more offices. (Can you tell us what the security and technology risks are?)

And from the perspective of anyone reading these words, “everything” did change: the internet and web are how we now read the news, shop, stay connected with our friends, consume audio and video media, do banking, pay bills—and also take care of various government services.

However, these rosy stories sneak in some implicit assumptions. Words such as “everything,” “everyone,” and “pervasive” imply universality—but they are really universal only to the experience of the people telling the stories.

In the early 1990s, the community I was a part of and whose excitement I shared about this new thing consisted mainly of computer science researchers affiliated with universities and laboratories that happened to be close to the network backbone—and where funding and circumstance left the researchers with then-fancy computers on their desktops and time to play with them for not-necessarily-work-related purposes. This small community was only a tiny sliver of society at large.

The Digital Divide

As the decades passed, citizenship in this new world grew to encompass a larger fraction of the population, but a fraction still.

As Susan Crawford wrote in the New York Times in 2011 [3]:

Telecommunications, which in theory should bind us together, has often divided us in practice. Until the late 20th century, the divide split those with phone access and those without it. Then it was the Web: in 1995 the Commerce Department published its first look at the digital divide, finding stark racial, economic and geographic gaps between those who could get online and those who could not.

Many demographics do not share in the connectivity of the geek elite. In the US alone, even in 2015, the data showed differences. According to the White House [13]:

  • The highest-income quartile has almost twice the internet usage as the lowest quartile (see Figure 9-1).

  • Households where the head has finished college have almost twice the internet usage as households where the head has not finished high school.

  • Households headed by whites and Asians have more usage than households headed by other races.

  • Usage drops off with age (see also Figure 9-1).1

Figure 9-1. Home internet use by age and income, 2013. (Source: the White House [13].)

Again, even in the US alone, the data shows that more than just straightforward socioeconomic factors are at play. Internet usage strongly correlates with geography, although not always by rural/urban divisions (see Figure 9-2)—bandwidth may be a bigger factor [3]:

While we still talk about “the” internet, we increasingly have two separate access marketplaces: high-speed wired and second-class wireless.

Figure 9-2. Internet adoption in the US in 2013. (Source: the White House [13].)

In 2013, Mashable shared this anecdote from a school in Newark, New Jersey (“a city with one of the highest poverty rates in the U.S.”) [6]:

“If they knew someone who could turn their phone into a hot spot, they would actually pay other students to use their data,” said Robert Fabriano, a 30-year-old teacher at the school. “They would trade bus tickets…if they lived two or more miles away, or a couple of bucks if they had it.”

School examples help reveal the slippery slope of the digital divide. What starts out as innocuous—for instance, the inability to share in the cultural experience of watching funny cat videos—quietly becomes a more substantial handicap: the inability to apply for jobs, the inability to do homework. The Pew Research Center quotes a university librarian [1]:

What I see are a handful of first-world white men touting their shiny new toys. Put this in context with someone struggling to get by on a daily basis—in the US or in other countries: what these devices primarily signify is a growing gulf between the tech haves and have-nots. That said, I’m not boycotting these devices—I see them as interesting and important. But just as students today are burdened if they don’t have home Internet—and at the university where I work, that is true of some of our commuter students, much as people might find that hard to believe—there will be an expectation that successful living as a human will require being equipped with pricey accoutrements…. Reflecting on this makes me concerned that as the digital divide widens, people left behind will be increasingly invisible and increasingly seen as less than full humans.

Even in the IoC, even when we consider a single developed country such as the US, we see a digital divide. If we extend internationally to countries with different stages of economic development, different types of government, and different levels of stability, the situation gets worse.

Other factors also play into the digital divide. Can a blind person be a full cyber-citizen? Can someone who does not speak the dominant language of the region they live in? In my own classroom, I was confronted with a challenge: how to enable a student who’d injured his dominant arm to still do his computer architecture project (where text-based workarounds such as dictation would not suffice).

How Digital Divides May Continue in the IoT

The IoC age itself has already created and exacerbated digital divides. Although they would likely not agree on the appropriate solution, most political ideologies would regard it as a problem when segments of society are systematically cut off from society’s basic infrastructure. As IoC access becomes more important to education and economic advancement, the divide becomes even more troubling, since it becomes self-perpetuating.

As we rush to the IoT and distribute intelligence throughout the physical world, will we risk making the digital divide even bigger?

Connectivity to Machines

One avenue to consider is the basic plumbing of network connectivity. The standard full vision of the IoT has the smart things talking back to the big data back-end; where this channel does not exist or is constrained, the communication will be limited. Will the applications even work without this connectivity?

In a 2015 analysis on the digital divide, Huawei lamented [7]:

A lack of locally relevant, quality and accessible services for many users is limiting the benefits they can achieve through digital technologies. These are often the very people that could most benefit from these services: those who do not have quality education or healthcare systems, those with poor infrastructure and geographic difficulties, or those with poor eyesight, hearing or mobility. Though not always necessary, many services are built for—or operate best with—high internet speeds.

Others have noted that the global (or at least multinational) aspect of many IoT applications will require a widespread IoT ecosystem to come to fruition—for example, Weber and Weber raise the example of RFID-enhanced smart shipping [12].

The security vision of the IoT stresses the need for pushing patches to the smart things (or bringing about a revolution in software engineering that eliminates that need). Lack of sufficient connectivity will reduce this ability. Will the shading in the map of Figure 9-2 also correspond to malware infections in the future IoT? (For that matter, what about in the present IoC?)

Connectivity Between People

In May 2015, Mary Catherine O’Connor (in the IoT Journal, writing about the IoT and agriculture) identified a different digital divide, between the IoT technologists and the experts in the domains in which they’re trying to embed the IoT [9]:

Technologists tend to be more excited about the IoT on the farm than farmers and chefs are…. [T]here is—at least in Central California—a divide between Silicon Valley and food producers…. [I]t seemed as though the subtext to questions and comments from farmers was: What do the folks in Silicon Valley know about how to produce food? Didn’t all those VCs already do enough damage by over-hyping Internet technology before the dot-com bust in the early 2000s? What damage can they inflict on the ag industry?

Indeed, an element of the history of computer science not often mentioned by computer scientists has been excessive optimism (or perhaps hubris) that real-world processes could be easily captured with just the proverbial small amount of programming (see Figure 9-3). One example of this excessive optimism was the prediction of early AI researchers that exact computational reproduction of the human thought process was imminent. Examples in the academy include complexity and computability theory: useful problems turn out to be uncomputable, or probably intractable (e.g., NP-complete), or apparently intractable with unknown foundation (e.g., factoring large integers). The potentially revolutionary (or potential dead end) field of quantum computing arose because of the apparent intractability of simulating quantum physics on classical computers. Less academic examples include the continued failure of digital rights management to capture the nuances of fair use (see Chapter 8), and the continued struggle of health IT and EMRs to capture the nuances of medical workflow (e.g., see [11]). Will the IoT bring information technology solutions to the world’s problems—or just information technologists trying to solve them?

Figure 9-3. As this XKCD cartoon illustrates, not all real-world processes can be captured easily by software. (Source: xkcd.com.)

When IT Is Required to Support Basic Rights

In the context of humans, “rights” is a dangerous term to use. Discussion of rights threatens to bring in cans of worms: government structures, ideologies, social contracts, etc. For the purposes of this book, let’s sidestep those issues and use the working definition that a “right” is something that society decides—formally or informally—a human merits simply by being a citizen, or perhaps even by just being human.

Certificates

Even in the world of bricks and mortar and paper, technology does not always nicely mesh with human and citizen rights. For example, in the US in recent decades, many social processes—such as boarding an airplane, or (more recently, in some states) voting—operate on the implicit assumption that everyone has a government-issued photo ID. However, as the Brennan Center for Justice at NYU reports, about 11 percent of Americans do not [4]:

Many seniors and many poor people don’t drive. In big cities, many minorities rely on public transit. And many young adults, especially those in college, don’t yet have licenses. A good number of these people, particularly seniors, function well with the IDs they have long had—such as Medicaid cards, Social Security cards or bank cards. Among the elderly, many of them have banked at the same branch for so long that tellers recognize them without needing to see their IDs.

Hence the news has seen much discussion of laws requiring birth certificates and other documentation in order for a citizen to vote—and of the large numbers of people who have bona fide trouble obtaining this documentation. I personally have a relative who had trouble getting a copy of his birth certificate (due to lax bureaucracy in a New York mill town 80 years ago), a colleague who doesn’t have one at all (due to migration from Cuba), and a spouse who has continual trouble flying because the name on her passport does not match the name on her driver’s license.

So the simple idea of “use this special piece of paper” to support rights processes on a national scale has problems—11 percent of the US population has problems. If we can’t manage paper certificates and IDs, then how are we going to manage digital credentials such as public key certificates? How are we going to manage citizen identification in the IoT without excluding segments of society?

(Indeed, how to authenticate a citizen was a concern for our egovernment explorers back in the 1990s. The IRS “Get Transcript” debacle described in Chapter 6 shows that it’s still a problem.)

Entitlements and Risks

Consider the legacy credit card system—and in particular, the risks and potential losses to the banks that issue credit cards and the merchants who accept them as payment. Someone who steals a card—or finds a lost one—can make fraudulent charges. An impostor who merely learns a card’s magic numbers and expiration date can make fraudulent charges without ever physically possessing the card. A dishonest consumer might make a number of purchases but then disavow them by falsely claiming the card had been lost. A careless consumer might repeatedly lose their card. A financially irresponsible consumer might stop paying the monthly bills.

Despite all this exposure, two key features have enabled this system to persist. First, financial mechanisms exist to move the risk around. Banks can shift the cost of bad transactions to merchants; banks can charge riskier customers higher interest rates and fees; merchants can require minimum charges for transactions. Second, parties can decline to participate once the risks are too high. Banks can refuse credit cards to consumers deemed too likely to default or to lose their cards; merchants can refuse to accept credit cards; merchants and banks can deny specific transactions if conditions are not satisfactory.

But as some of my egovernment clients decades ago were well aware, these mechanisms do not apply when the service in question shifts to a legal entitlement or other kind of basic right. The security and economic features that make credit cards work did not immediately extend to benefits programs like food stamps—credit card issuers are free to deny cards to segments of the population for whom deployment is judged too risky, but if citizens are entitled to something as a legal right, one cannot deny them that right just because it’s inconvenient.

In the Smart City

At a NIST workshop on smart cities, speakers posited visions of roads and bridges wirelessly identifying passing vehicles and charging tolls to their owners’ credit cards. Will the IoT close off public infrastructure to those without credit cards?

If standard treatment for healthcare evolves to use IoT-based monitoring at home, will citizens be denied treatment if they live in an area with poor connectivity—or themselves cannot afford home WiFi?

If sophisticated cryptographic aggregation and blinding requires sophisticated computing technology in one’s smart grid home, will only the affluent have privacy?
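
Crypto details aside, the core idea behind such aggregation-and-blinding schemes is simple enough to sketch. The following toy is hypothetical, not any deployed protocol: it assumes the meters can jointly choose random masks that cancel out (real schemes use pairwise keys or homomorphic encryption to remove that coordination step), but it illustrates why the approach demands computation at every endpoint, and thus why homes without capable hardware could be left out.

```python
import random

MODULUS = 2**32  # all arithmetic is modulo a fixed public modulus

def blind_readings(readings, modulus=MODULUS):
    """Additively blind a group of meter readings.

    Each meter adds a random mask; the masks are chosen to sum to zero
    (mod the modulus), so the utility can recover the group total while
    learning nothing about any individual household's reading.
    """
    masks = [random.randrange(modulus) for _ in readings[:-1]]
    masks.append((-sum(masks)) % modulus)  # final mask cancels the others
    return [(r + m) % modulus for r, m in zip(readings, masks)]

# The utility sees only the blinded values, yet their sum is the true total.
readings = [12, 7, 30]  # kWh for three households
print(sum(blind_readings(readings)) % MODULUS)  # 49, the group total
```

Note that each household must run this computation locally for its own reading to stay private; a meter too limited to participate must either report in the clear or not at all.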

Klint Finley in Wired wrote how this smart future may “leave many people behind” [5]:

Developing nations—precisely the ones that could most benefit from IoT’s environmental benefits—will be least able to afford them…. [T]he IoT could lead to a much larger digital divide, one in which those who cannot or choose not to participate are shut out entirely from many daily activities. What happens when you need a particular device to pay for items at your local convenience store?

The IoT Enforcing Preexisting Socioeconomic Divides

The preceding sections considered various ways the IoC and IoT can lead to divisions between the connected class and the disconnected class—divisions that may partly align with preexisting socioeconomic divisions.

However, for some applications, the division may be more explicit, almost seeming to result from a conscious choice to optimize for one group of people over another. I’ve talked to seniors who feel that way already about being unable to get traditional printed media from public libraries, or who have to go online to get tax manuals. Over the decades, I’ve regularly encountered services (such as medical questionnaires from my primary care physician or grant paperwork from universities) that implicitly assume the only possible machine one can use is a Windows PC and the only possible browser is Internet Explorer. In April 2016, Gerry McGovern wrote in CMSWire of the consequences of optimizing for electronic in retail [8]:

Digital self-service is a double-edged sword. Although it reduces costs, it creates distance between customer and organization. From a customer perspective, self-service means behaving in a semi-automatic, instinctive way…. Designing for this sort of environment requires an incredibly deep understanding of human behavior. Yet, as organizations roll out self-service they get rid of the very employees who actually understand and regularly deal with customers.

I remember in the early days of ATMs in the US, banks would charge customers a fee to use the ATM. In recent years, banks have instead started charging customers to use tellers. Will the grocery store or bookstore or post office start charging if I don’t use the electronic version?

Will smart infrastructure serve to ease life for the affluent at the expense of the poor? For example, consider the Surtrac smart traffic light system developed at Carnegie Mellon in Pittsburgh, Pennsylvania (e.g., [10]). Pollution and wasted time are significant costs of automobile traffic, and using IT to coordinate traffic lights to shape traffic flow and reduce these costs sounds like a good thing. The research presentations discussed impressive results at pilot sites—but then I noticed where the pilot sites were (Penn Circle and East Liberty, in Pittsburgh).

When I lived in Pittsburgh as a graduate student a few decades ago, East Liberty was a neighborhood in which one was very careful at night, and Penn Avenue/Route 8 was the corridor connecting the university and medical neighborhoods of the city through disadvantaged and dangerous Wilkinsburg to nicer suburbs. The pilot sites were probably chosen because those immediate areas have gone through some recent redevelopment, and so provided a nice opportunity to insert prototypes into real infrastructure. On the other hand, when I saw the numbers about improved wait times, I wondered: whose lives were being optimized? Potentially, such systems could make life wonderful for affluent commuters passing through disadvantaged neighborhoods, at the expense of the people who live there.

Another area that may require design choice is smart medicine. IoC (and eventually IoT) medical applications can potentially improve healthcare and make it accessible to wider populations. However, health informatics researchers such as Kay Connelly at Indiana University point out that many of the populations these services try to reach are educationally disadvantaged or even “functionally illiterate”—and designing effective web and mobile interfaces for such groups is substantially different from designing them for other demographics (e.g., [2]). Even the assumption that a personal cellphone is indeed a personal avenue of communication comes into question.

Another IoT-style domain where design requires a demographic choice is EMRs for children’s hospitals. As my colleague Ross Koppel of UPenn has documented, data details for clinicians treating children can have significant and safety-relevant differences from data about general patients. For one example, age may need to be expressed in hours or even minutes—and perhaps even as a negative, for patients still in utero. For another, medication may critically depend on body weight, so the body weight units need to be clear (kilograms or pounds?), and dosages need to be clearly indicated as “mg per kg” or “mg total.” However, clinicians lament that children’s hospitals are an “orphan subset” of the EMR market in the US—and it’s not economically worth it for a vendor to design specifically for that demographic.
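
The children’s-hospital example shows how easily a unit ambiguity becomes a safety problem, and the fix can be sketched in a few lines. This is a hypothetical illustration (the `Dose` type and unit strings here are my own invention, not any real EMR’s data model): by making the unit an explicit, mandatory part of the record, the software, rather than a tired clinician, resolves “mg per kg” versus “mg total.”

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dose:
    """A dose whose unit is explicit, so 'mg per kg' cannot be misread as 'mg total'."""
    amount: float
    unit: str  # "mg_per_kg" or "mg_total"

def total_mg(dose: Dose, weight_kg: float) -> float:
    """Resolve an order to a total amount in milligrams for this patient."""
    if dose.unit == "mg_per_kg":
        return dose.amount * weight_kg
    if dose.unit == "mg_total":
        return dose.amount
    raise ValueError(f"unknown dose unit: {dose.unit!r}")

# A 15 mg/kg order for a 4 kg neonate resolves to 60 mg total; an interface
# that displayed just "15" would invite a large error in either direction.
print(total_mg(Dose(15.0, "mg_per_kg"), weight_kg=4.0))  # 60.0
```

The design point is that the ambiguous bare number never exists on its own: any record lacking a recognized unit is rejected rather than guessed at.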

The IoT Creating Divides Among Connected Classes

Another avenue to consider is the role of the IoT in promoting—or splintering—social cohesion.

Even in the early days of “BlitzMail” and electronic life at Dartmouth, it wasn’t clear whether this new thing helped or hurt the sense of community. On the one hand, students might bury their noses in computer screens instead of actually meeting and doing things with other students. On the other hand, students might connect with other students with shared interests but who otherwise moved in different social circles.

As the IoC progressed, the same issues continued. Does “social” networking actually promote or hinder social connection? The common meme and cartoon motif of people together ignoring each other but paying attention to their cellphones suggests one answer. An emeritus professor of social science once quipped to me about the conflict between IT and human evolution: to paraphrase, “How can you work on a team with people when you don’t even know how they smell?” Is ecommerce destroying or rescuing Main Street? The conventional wisdom is the former—why pay higher prices at a local shop when Amazon Prime can bring it to you cheaply in two days? On the other hand, some analysts argue that by opening up market connections to the whole world, ecommerce can help a specialty business in a small town stay alive.

As we move into the IoT, what else will happen?

One area of concern is the consequences of the transformative power of the IoT on media. Will the end-user convenience (and advertiser delight) of individually customized radio stations and newspapers and billboards and even logos on sports uniforms shatter the sense of community, previously held together by common media experiences? Media can define and shape community. Go back a few decades, and cities only offered a few newspapers and a few mainstream TV and radio stations. The fact that we were all reading the same newspaper, listening to the same drive-time DJs, watching the same nightly newscast all created and contributed to a sense of “us”: the local community sharing this experience. Advertising and wider-scale broadcasting still helped with this: we remember a particular outrageous clothing store ad “we all saw”; the local businesses advertised on the rink sideboards on the televised NHL game gave a sense of where that place was.

As the IoT transforms media by enabling fine-tuning of content and advertising to the end consumer, society risks losing these connections. Rather than sharing the same drive-time DJ, we each listen to our own personal Pandora stations. Rather than sharing in a common presentation of the news, we read or watch items handpicked for our own interests (and political beliefs). We may even see different advertisements on the rink boards and outfield walls: our own personalized ghost advertisements and businesses appearing as our smart glasses sense our location or context. What will happen to the sense of community? Will we be citizens of Town X, or of Internet Chat Channel Y? Will we care as much about the welfare of our neighbors if we have fewer connections to them?

(On the other hand, as with Main Street, this connectivity may also serve to promote geographically diverse communities. Besides computer-generated streaming, actual human-hosted radio shows focusing on specialized genres and specialized podcasts stream on the internet. The IoC makes it possible for more people to form a personal weekly connection with WGBH’s Brian O’Donovan.)

As McGovern observed [8]:

As people grow closer to their friends, family and peers through use of digital, they grow more distant from the establishment, brands and organizations.

Self-driving cars may have a similar effect. Instead of connecting to the streets through which they drive (“Hey, maybe I should stop at that coffee house”) or feeling part of the community of other commuters, drivers will instead be absorbed in their own virtual worlds.

Looking Forward

As we move into the IoT, how can we mitigate these social divisions?

Early in the chapter, we discussed the divisions following from the basic lack of network plumbing. One way to reduce these divisions is to promote more plumbing, just as in previous generations societies promoted telephony and electricity and running water. Many private and public advocacy groups are doing just that. Another angle would be to jump to a new technology (8G?) that eliminates the obstacles posed by the lack of physical wire.

Lack of technological infrastructure in a society also creates an opportunity—because with it comes a lack of inertial resistance from the established infrastructure. Arguably, this may be why cellphone technology caught on first in the developing world—and why some predict that innovative smart grid technology may first catch on in developing countries that do not have a large investment already in the traditional grid. Maybe the IoT can overcome digital divides via such leapfrogging.

The drive for profits may sometimes cloud technological development, but it can also drive players to look for strategies to reach across digital divides (in other words, to reach new markets). The Huawei report stressed the need to reach these groups [7]:

Business models that create value are critical, even if what’s on offer is “free.” Poor and disadvantaged groups often targeted for digital enablement should be treated like any customer. They need to be convinced that they can benefit in order to “invest” in a digital enablement solution, whether it actually costs them money or not.

Huawei and others also stress the role of increased “digital literacy” to bridge the divide.

If participation in the IoT becomes an implicit part of the human experience, we need to make sure that everyone has the option to be fully human.

Works Cited

  1. J. Anderson and L. Rainie, The Internet of Things Will Thrive by 2025. Pew Research Center, May 14, 2014.

  2. B. Chaudry and others, “Mobile interface design for low-literacy populations,” in Proceedings of the ACM SIGHIT International Health Informatics Symposium, January 2012.

  3. S. P. Crawford, “The new digital divide,” The New York Times, December 3, 2011.

  4. C. Dade, “Why new photo ID laws mean some won’t vote,” National Public Radio, January 28, 2012.

  5. K. Finley, “Why tech’s best minds are very worried about the Internet of Things,” Wired, May 19, 2014.

  6. J. Goodman, “The digital divide is still leaving Americans behind,” Mashable, August 18, 2013.

  7. Huawei, Digital Enablement: Bridging the Digital Divide to Connect People and Society. 2015.

  8. G. McGovern, “The new digital divide,” CMSWire, April 11, 2016.

  9. M. C. O’Connor, “IoT on the farm: Bridging the digital divide,” IoT Journal, May 19, 2015.

  10. S. F. Smith and others, Real-Time Adaptive Traffic Signal Control for Urban Road Networks: The East Liberty Pilot Test. Carnegie Mellon University Robotics Institute Technical Report CMU-RI-TR-12-20, 2012.

  11. S. W. Smith and R. Koppel, “Healthcare information technology’s relativity problems: A typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ,” Journal of the American Medical Informatics Association, June 2013.

  12. R. H. Weber and R. Weber, Internet of Things Legal Perspectives. Springer, 2010.

  13. The White House, Mapping the Digital Divide. Council of Economic Advisors Issue Brief, July 2015.

1 Of course, it can be challenging to find the causality behind this correlation. One older friend laments “millennials who are just not interested in becoming geeks.”
