Chapter 10. The Future of Humans and Machines

The IoT deepens and changes the interconnections between human space and cyberspace. To conclude, this chapter considers some of the deeper implications of these changes by presenting a semiotic framework for this interconnection, using this framework to examine the IoC to IoT transition, then examining other issues—ethics, boundaries, economics, life—relevant to being human in the new information age.

A Framework for Interconnection

In the coming IoT age, newfangled computer systems will smash into human processes. However, as Chapter 3 noted, this future has been here before. Much of my research work (and, before I returned to academia, my professional work) focused on the issues that arise when this occurs. What are the security and privacy issues if a citizen-facing government agency offers services over the web? Why does adding EMR systems to a hospital sometimes cause problems instead of solving them? Why do password rules intended to improve security actually make it worse?

Semiotic Triads, in 2013

This work led to a framework. When my colleague Ross Koppel and I were analyzing usability trouble in health IT [27], we found it useful to consider three things:

  • The mental model of the clinician working with the patient and the health IT system

  • The representation of medical reality in the health IT system

  • The actual medical reality of patients

Figure 10-1 illustrates. (This model could clearly extend to include other actors with mental states—for example, the patients.)

Figure 10-1. The basic Ogden–Richards triad, moved into 21st-century IT; the arrows indicate the main direction of mappings. (Adapted from my paper [26].)

In theory, these three things should show an exact correspondence. The IT system should actually describe the medical reality; the output of the IT system should enable the clinician to infer correct facts about medical reality; inputs to the IT system should correctly reflect clinician actions affecting the reality of the patient. With advances in medical IT, the IT may even directly interact with reality: computers may control the drug dosage administered by a “smart pump”; sensors on a patient may input data directly into the patient’s electronic record.

In practice, the usability problems we identified in our fieldwork showed a lack of correspondence. The problems organized nicely according to mismatches: between the expressiveness of the representation “language” and the details of reality, and between a clinician’s mental model and both the representations and the reality.

Semiotic Triads, in the 1920s

Somewhat to our chagrin, we discovered we had been scooped by almost a century. In their seminal 1920s work on the meaning of language, Ogden and Richards [22] constructed what is sometimes called the semiotic triad. The vertices are the three principal objects:

  • What the speaker (or listener/reader) thinks

  • The symbol they use in the language

  • The actual item to which they are referring

Much of Ogden and Richards’s analysis stems from the observation that there is not a direct connection from symbol to referent. Rather, when speaking or writing, the referent maps into the mental model of the speaker and then into the symbol; when reading (or listening), the symbol maps into the reader’s (listener’s) mental model, which then projects to a referent, but not necessarily the same one. For example, Alice may think of “Mexico” when she writes “this country,” but when Bob reads those words, he thinks of “Canada”—and (besides not being Mexico) his imagined Canada may differ substantially from the real one. Thanks to the connection of IT and reality, we now have a direct symbol–referent connection, complicating the merely linguistic world Ogden and Richards explored.

The semiotics of language and the effective communication of meaning focus on morphisms—“structure-preserving mappings”—between nodes of the triad. However, with IT usability problems we are concerned instead with ineffective communication and hence focus on what we called mismorphisms: mappings that fail to preserve important structure when we go from z in one node of the triad to its corresponding z' in another. (See Figure 10-2.) Indeed, we later explored the mismorphisms that lie at the heart of user circumvention of security controls, because they characterize the scenarios that frustrate users—and often the resulting circumvention itself [26].

Figure 10-2. (a) Standard semiotics considers structure-preserving mappings between the nodes of the triad. (b) In circumvention semiotics, we think about mappings that fail to preserve structure. (c) For example, in a standard mismorphism scenario, the generated reality fails to embody a property the user regards as critical. (Adapted from my paper [26].)
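
To make the idea concrete, here is a minimal sketch (mine, not from the cited papers) of a mismorphism check in Python: the same critical quantity carried by all three nodes of the triad, and a test for mappings that fail to preserve it. The patient-weight scenario and all names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Reality:            # the patient as they actually are
        weight_lb: float

    @dataclass
    class Representation:     # what the IT system records (signs and symbols)
        weight_lb: float

    @dataclass
    class MentalModel:        # what the clinician believes
        weight_lb: float

    def mismorphisms(real, rep, mind):
        """List the mappings that fail to preserve the critical property."""
        problems = []
        if rep.weight_lb != real.weight_lb:
            problems.append("representation does not match reality")
        if mind.weight_lb != rep.weight_lb:
            problems.append("mental model does not match representation")
        return problems

    # A hypothetical scenario: the clinician knows the true weight, but the
    # system was told something else (compare the smart-pump workaround
    # later in this chapter).
    print(mismorphisms(Reality(400), Representation(200), MentalModel(400)))

With true morphisms, that list always comes back empty; the fieldwork kept turning up cases where it did not.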

To reestablish context: when we have computers in real-world applications, we can see the computation as a depiction—in signs and symbols—of reality. The computing reflects assumptions and beliefs about how things work. Consequently, when we have computers and reality and humans involved, we have the opportunity for semiotic confusion. The developers produce systems and code (signs and symbols) that reflect their own mental model of what the real world is; the mental model, the code, and the world itself are three different things.

Human/Machine Interconnection in the IoT

The emerging IoT runs the risk of taking this confusion to a whole new level, for several reasons:

  • The IoT will tie orders of magnitude more computers to vastly more parts of physical reality.

  • Unlike the linguistic scenarios of the last century, the IoT will greatly expand the direct connection between computing and reality (even without human intermediaries).

  • The vast decentralization and potential permanence of IoT devices lead to far more chances for texts to be wrong—and for wrong texts to remain uncorrected.

As a result, this triad becomes a convenient framework for looking at how these new machines will affect humanity.

Mapping, Literally

The preceding discussion talked about “mapping” in the mathematical sense (as a “function” or “correspondence”). A computer in some real-world application (say, monitoring the health of a human) has internal state; the configuration of this internal state should (in theory) correspond to the configuration of the human.

However, many IoT applications require literal mapping: representation of the actual real world. For example, Google’s extensive mapping database is a key part of the secret sauce that makes Google’s self-driving cars possible. In physical space alone, we already see interesting incidents of friction between things as they are and things as the computers think they are.

The region of Vermont and western New Hampshire where I live regularly provides amusing and mostly harmless examples. Over the last two centuries, the rural areas here underwent “negative development,” as people discovered farming was easier in the flatlands of the Midwest and moved away. Farms, houses, and towns were abandoned; forests grew back. However, the roads have persisted: sometimes as legal rights of way, sometimes not; sometimes on maps, sometimes not; sometimes traversable with an ordinary car, sometimes only with a four-wheel-drive car, or only with a full-suspension mountain bike, or only on foot. These “ancient roads” lead to lots of fun. Google Maps used to direct some travellers to drive on one that goes underneath the Hanover Reservoir, and to send others on ones that go through an abandoned copper mine. In summer 2016, it gave me directions to an Appalachian Trail parking area by having me drive there on the Appalachian Trail (neither legal nor possible); today, it still shows Goss Road going over Moose Mountain, even though dedicated hikers have not been able to find any evidence of the route on the ground.

A few times a year, local news reports on tourists who need to be rescued because they followed their fancy GPS navigation devices and drove their cars places where cars really shouldn’t go. On a more serious note, international news also reports on drivers who have died this way: rather than merely being stuck on rocks or snow, they drove cars into water, or into remote backcountry where rescue was not possible.

Human migration isn’t the only cause of mapping error here. In summer 2016, the Australian Broadcasting Corporation reported how Australia’s GPS coordinates are wrong by “more than 1.5 meters,” and noted that Geoscience Australia’s Dan Jaksa observes “with the applications that are coming in intelligent transport systems—like driverless cars—if you’re 1.5m out then you’re in another lane” [4]. Also in summer 2016, Nilay Patel wrote how even more rudimentary mapping problems hamper smart infrastructure such as self-driving cars [23], due to “the ‘egress problem’—the way we locate buildings on a map doesn’t really describe how people move in and out of those buildings.”
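
As a back-of-the-envelope illustration (the lane width and the arithmetic are mine, not from the ABC article), a 1.5-meter datum error is tiny in latitude/longitude terms yet large relative to a traffic lane:

    import math

    EARTH_RADIUS_M = 6_371_000                    # mean Earth radius

    def meters_per_degree_latitude():
        return math.pi * EARTH_RADIUS_M / 180

    datum_error_m = 1.5                           # the figure quoted above
    lane_width_m = 3.5                            # assumed typical lane width

    print(f"1 degree of latitude is about {meters_per_degree_latitude():,.0f} m")
    print(f"1.5 m is only {datum_error_m / meters_per_degree_latitude():.7f} degrees")
    print(f"...yet about {datum_error_m / lane_width_m:.0%} of a {lane_width_m} m lane")

In other words, a fix accurate enough to find the right road can still put a driverless car in the wrong lane.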

Mapping, Figuratively

Embedding IT in the real world raises another mapping challenge: encoding real-world processes and flow in the IT algorithms and architecture. Computers only do what they are told to. But as anyone who has ever been frustrated by inept bureaucracy knows, simply following rules is extremely problematic when the process they represent isn’t so simple.

The IoC has already shown this frustrating friction.

One famous area is digital rights management (e.g., [2]). Until the latter part of the 20th century, artistic creations (such as texts) were explicitly grounded in the physical media (such as paper) underlying them. Humanity had millennia to evolve understandings of reasonable behavior with regard to such art and the underlying media and to see that the two coincided, more or less. However, the explosion of digital media changed all that, as stakeholders such as the recording industry, YouTube, and scholarly reviewers seeing massive copy-and-paste plagiarism can attest. DRM emerged as an attempt to have technology reintroduce “correct” behavior to this new digital media. However, among computer professionals, DRM is widely regarded as simply not working (e.g., [11]). We regularly see sad but amusing incidents where a website takes down material due to a robot-generated copyright infringement claim that is incorrect but automatically believed. Codifying all the nuances about fair use and such is surprisingly hard to do—particularly a priori. Quoting someone else, Ed Felten famously lamented that “computers are too stupid to look the other way” [13]. (This isn’t to say that the problem of knowing when to look the other way is not something that could be solved by a computer—indeed, colleagues who assert that the human mind is nothing more than a computer made of meat would argue that we already have working examples! I’m not sure if I would go that far—but I am comfortable asserting that the problem of codifying such behavior in a way that our human-made computers can carry it out has turned out to be far, far more complicated than envisioned.)

For that matter, the onslaught of lawsuits about online music piracy (e.g., [1]) itself rests on a mapping problem: IP addresses do not equal human actors.

Another area that’s less famous (although I’m trying to correct that) is that of circumvention of computer controls and processes. Over and over again, we see scenarios in IT-enhanced workplaces where ordinary users, trying to get their jobs done, circumvent the security controls in the IT. For just a few examples from my previous papers [26, 27]:

  • Some smart pumps assume the patient never weighs more than 350 pounds; for heavier patients, clinicians must distort the IT representation, either by hooking up two pumps (each allegedly serving a patient of half the actual weight) or by telling the pump it is delivering a different medication that, for a weight within the pump’s limit, works out to the correct drip rate. (A sketch of the arithmetic behind the two-pump workaround appears after this list.)

  • A vendor of power grid equipment had a marketing slide showing their default password and the default passwords of all the competitors. The slide was intended to show how secure this vendor was, since they used a more secure default password. However, a deeper issue here is that access to equipment during an emergency is critical, since availability of the grid is far more important than other classical security aspects. Any scheme to replace default passwords with a stronger scheme needs to preserve this availability.

  • For some medications, a clinician may need to prescribe a tapered decline (sometimes called staged reduction) of dosage rather than an abrupt end. However, the EMR IT does not allow for a taper; what the clinician thinks of as a single unit—the tapered end of a medication—must be instantiated as a sequence of separate non-tapered medication orders, with the clinician needing to remember to terminate the earlier items in the sequence.
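
Here is a minimal sketch (toy numbers and a made-up interface, not any vendor’s firmware) of the arithmetic behind the two-pump workaround: a weight-based infusion rate, a hard 350-pound cap in the representation, and the way two half-weight orders add up to the rate the single honest order would have produced.

    LB_PER_KG = 2.20462
    MAX_WEIGHT_LB = 350        # the limit assumed by the pump's representation

    def drip_rate_ml_per_hr(dose_mg_per_kg_hr, weight_lb, conc_mg_per_ml):
        """Toy model of a weight-based infusion-rate calculation."""
        if weight_lb > MAX_WEIGHT_LB:
            raise ValueError("weight exceeds what this pump can represent")
        return dose_mg_per_kg_hr * (weight_lb / LB_PER_KG) / conc_mg_per_ml

    dose, conc, true_weight = 0.5, 1.0, 400        # hypothetical values

    # The honest order is rejected:
    #   drip_rate_ml_per_hr(dose, true_weight, conc)   -> ValueError
    # The workaround: two pumps, each told the patient weighs half as much,
    # together deliver the rate the honest order would have specified.
    total = 2 * drip_rate_ml_per_hr(dose, true_weight / 2, conc)
    print(f"{total:.1f} mL/hr from the two half-weight pumps")

The drip rate is right, but the record now describes two light patients where one heavy patient exists: exactly the kind of mismorphism the framework highlights.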

One medical clinician told of screen-scraping a medical record into PowerPoint and then emailing the result to a colleague for a second opinion, because her hospital’s data access policy did not allow what her ethical duty required. Another medical clinician even asked us [24]:

Are you trying to build a better policeman, or do you want to help patients? Because they’re not the same thing!

(Those trained in the traditional “confidentiality/integrity/availability” definition of security would assert that forgetting to provide the core purpose of the system violates the third principle of security: availability.)

Even in these current IoC (and IoT) application domains, trouble arises because the workflow embedded in the IT does not match the workflow in the real world. In the future IoT, when the tie between IT and the real world is even more intimate and ubiquitous, what will we see? Maybe even more trouble—or maybe we (the computer science community) will learn from our past mistakes. My own team’s circumvention work suggests two potential promising avenues:

  • Closing the loop. Don’t just stop with the IT codification of the real-world process: measure if it actually works. (When we asked a senior clinician at a major New York hospital whether the IT developers had any idea of the trouble their incorrect assumptions had caused, he said: “Not at all.”)

  • Allowing for override. When end users perceive that the system does not match reality, let the end users change the system. (At least this way, the system actually knows what’s happening; with circumvention, the system departs further from reality.)
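
As one illustration of the second avenue, here is a minimal sketch (my own design, not from the cited work; the file name and field names are invented) of an override hook that accepts the user’s assertion about reality while keeping the mismatch visible for later loop-closing:

    import datetime
    import json

    OVERRIDE_LOG = "overrides.jsonl"        # hypothetical audit trail

    def record_override(user, field, system_value, asserted_value, reason):
        """Accept the user's view of reality, but log the disagreement so
        developers can measure how often their assumptions fail."""
        entry = {
            "when": datetime.datetime.now().isoformat(),
            "user": user,
            "field": field,
            "system_value": system_value,
            "asserted_value": asserted_value,
            "reason": reason,
        }
        with open(OVERRIDE_LOG, "a") as log:
            log.write(json.dumps(entry) + "\n")
        return asserted_value               # the system now tracks what is happening

    # The weight-limited pump scenario, expressed as an override rather than
    # a silent circumvention:
    weight = record_override("clinician_17", "patient_weight_lb", 350, 400,
                             "pump UI caps weight below the patient's actual weight")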

Uncanny Descents

Mismorphism can hamper reasoning about IoT applications in deeper ways, as well. Many application areas might have some desirable and measurable property and some kind of tunable parameter that, in theory, affects this property. For example, my earlier work on security circumvention considered password-based authentication; here, the property might be something such as “net aggregate security,” and the parameters might be such things as “minimum password length” or “frequency of required password changes.” In a human’s mental model, turning these parameters “up” should make the property go up. However, in reality, dialing up the parameter can make things worse: the mapping from IT to reality fails to preserve this monotonic structure.

My team started using the term uncanny descent for this kind of mismorphism, inspired by computer graphics’ use of the term uncanny valley for when dialing up realism makes things worse before it makes things better. But since we don’t know whether things will get better, we stick with just one slope.
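
A toy model (the numbers and the threshold are invented purely for illustration) shows the shape of the problem: nominal strength keeps climbing with the parameter, but once circumvention kicks in, the net property heads back down.

    def nominal_strength(min_length):
        return min_length                   # pretend strength grows with length

    def circumvention_penalty(min_length):
        # pretend users start writing passwords down past 12 characters
        return 0 if min_length <= 12 else 3 * (min_length - 12)

    def net_security(min_length):
        return nominal_strength(min_length) - circumvention_penalty(min_length)

    for n in (8, 10, 12, 14, 16, 20):
        print(n, net_security(n))           # rises until 12, then descends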

Many IoT applications already in deployment demonstrate uncanny descents.

Aging in place

One family of “smart home” applications receiving much research attention is aging in place. Helping older people stay in their own homes longer (rather than moving into retirement and assisted-care facilities) can both improve their quality of life and save them money. However, staying in one’s own home can increase risks—what if the elder suffers a sudden health-related crisis? To mitigate these risks, researchers have been exploring the use of IT; for instance, augmenting the elder’s household with telecommunications devices that let remote relatives and caregivers monitor their health.

The idea here is that adding IT to the household will help improve quality of life. However, researchers at Indiana University observe that it may have the opposite effect [16]:

Finally, such systems may have the unintended consequence of reducing the number of phone calls or visits from caregivers, because the caregiver now knows that the older adult being monitored in their home is safe and secure for the present moment…. A primary concern that older adults express about these types of in-home technologies is that they will replace human contact with formal and informal caregivers.

The conclusion here is not to give up, but rather to be aware—to keep in mind the overall goal of the smartening, and to measure and tune appropriately.

Self-checkout

Another area receiving attention (and hype) is incorporation of smart technology into retail shopping. For myself (and probably many readers), a tangible manifestation of this trend is the recent proliferation of self-checkout stations at grocery stores. The idea here is that adding IT to this part of retail can improve revenue due to quicker checkout and reduced costs (since fewer employees are needed). However, again, researchers Adrian Beck and Matt Hopkins at the University of Leicester have noted it may have the opposite effect [20]:

One million shopping trips were audited in detail, amounting to six million items checked. Nearly 850,000 were found not to have been scanned, the report said, making up 4 percent of the total value of the purchases.

Whether the items were unscanned due to intentional theft or inadvertent error is not clear.

Self-checkout is also a wonderfully tangible example of a mismatch between the real-world process expected by a shopper and the way this process is codified inside the smart checkout system—with (as Beck observed) “the phrase ‘unexpected item in the bagging area’ striking dread into many a shopper.”

In the IoT, as with energy, friction generates waste.

Safety

Adding smart technology to automobiles is intended to make them safer, but may have the opposite effect. Freakonomics’s Stephen Dubner quotes Glenn Beck [12]:

I was looking at an Audi as well, and the guy said to me, he said, “this has some amazing safety features, it knows when the car is going to roll, if your window is rolled down, it immediately rolls the window up, it has the side airbags, your seats, depending on what the car senses it’s going to do, it puts the seats in the right position,” you know, it makes me want to flip the car! I’m going to put my seat in the most awkward position, and I’m gonna flip it! This is, like, the safest car on the road, he used the term “death-proof.” But honestly I didn’t even think about it until we were—until I was driving it. And I thought—I really was taking a corner a little too fast, and I’m like “I can handle it, what’s the worst that can happen?”…“What? So I didn’t stop at the stop light, and I’m going a hundred and ninety? What? I can flip it, I’ll survive, it’s the death-proof car!” What a dope!

As a middle ground between traditional automobiles and self-driving automobiles, Liviu Iftode and his colleagues have been considering the challenges of remote driving: that is, a car on the road that is in fact “driven” by a human at a remote control center (e.g., [19]). Besides the engineering challenges, this concept also raises psychological ones, as the physical separation isolates the human pilot from the physical consequences of their actions. Will separation reduce the propensity to road rage and cause the pilot to drive more safely? Or, as with Glenn Beck’s “death-proof” car, will separation reduce the sense of personal risk and cause the pilot to drive more dangerously? (Similar questions have been raised about the use of remotely controlled drones in combat.)

Of course, this problem predates smart IT. Studies of antilock brakes in automobiles suggest that this safety-promoting technology can increase the risk of accidents because it also promotes more “aggressive” driving [28]. In American football, the use of pads and helmets can similarly increase the risk of injuries, both because they decrease the perceived risk of injury to a player and because (with helmets, at least) the safety devices themselves can be used as weapons. As Dubner observes: “As the safety equipment gets better, our behavior becomes more aggressive” [12].

In the workplace

Uncanny descents have been seen when moving IT into the workplace too. As a security researcher, I have heard multiple colleagues (all from the public sector) rant about how important it was to block employees from doing personal web browsing during work hours—even though studies have shown that such activity can boost productivity [3], essentially since it provides micro-breaks that are more time-efficient than walking to the water cooler. On the other hand, a recent university study “showed that employees’ performance improved 26 per cent when their smartphones were taken away” [25].

Others

Some recent items in the news demonstrate scenarios where stakeholders are aware of what they perceive as negative effects of bringing IT into real life—and are taking action.

To further its mission of promoting the right to bear firearms in the US, the National Rifle Association (NRA) has ensured that the US facility where law enforcement traces guns is not allowed to keep a searchable computer database of gun records [18]:

That’s been a federal law, thanks to the NRA, since 1986: No searchable database of America’s gun owners. So people here have to use paper, sort through enormous stacks of forms and record books that gun stores are required to keep and to eventually turn over to the feds when requested. It’s kind of like a library in the old days—but without the card catalog. They can use pictures of paper, like microfilm (they recently got the go-ahead to convert the microfilm to PDFs), as long as the pictures of paper are not searchable. You have to flip through and read. No searching by gun owner. No searching by name.

Cyclists Nairo Quintana and Alejandro Valverde, while sitting in first and second place in the Vuelta a España, called for the banning of power meters (IT in the bicycle, to help measure and then tune rider performance) [14]:

“They take away a lot of spectacle and make you race more cautiously,” Quintana said. “I’d be the first in line to say they should be banned.” “I think they’re really useful for training, but they take out a lot of drama from the sport,” added Valverde. “In competition you should be racing on feelings.”

Ethical Choices in the IoT Age

In his novel A Clockwork Orange, Anthony Burgess considered whether moral decisions made by clockwork would have the same value as moral decisions made by unconstrained humans.1

In an IoT world, many scenarios exist where choice and action move from a human in a direct situation to a machine, or perhaps to the human who programmed that machine at some point long ago. Some of these choices will inevitably have moral and ethical dimensions. How will the reality—and the human perception—of the choices change when the actors move?

Ethicists have long considered the Trolley Problem (e.g., see Figure 1 in [15]). One can construct many scenarios involving unfortunate arrangements of trains and human bodies in which a human actor must make a choice between two options. For example:

  • If a runaway train is about to hit a fork where it can go in one direction and kill five people or can go in another and kill one, and Alice controls the switch at this fork, which option should she choose?

  • If a runaway train is about to hit five people, but Alice is standing on a bridge over the tracks next to a very large Bob, should she let the five people die, or push Bob off the bridge, so he dies but stops the train and thus saves the other five?

Psychologists have studied why humans tend to make different choices in the different scenarios, even though (when evaluated solely in terms of net utility, or lives lost) the choice in each scenario should be the same—lose one life to save five. There’s a mismorphism here: something changes when the situation is mapped from end reality to human perception of the action.

Now, fast-forward to a future of smart technology, when the actor is no longer human. If a self-driving car is in an unfortunate scenario where it must kill either five pedestrians in one lane or one in another, which should it choose? What if the choice is between killing five pedestrians, or crashing into a wall and killing its driver? And what about all the other kinds of smart machine actors the IoT will bring?

These questions are grist for much philosophical discussion (e.g., see [10]).

Perception of Boundaries in the IoT Age

Another structure that can be lost in the mapping between smart IT, the real world, and human perception is that of boundaries.

Even in the plain IoC, the borders in IT infrastructure do not match the borders in reality. Back at the turn of the century, Bill Cheswick’s Internet Mapping Project revealed internet connectivity suggesting mergers between companies that had not yet announced they were merging, and connectivity suggestive of covert nation-state influence. In modern times, Doug Madory of Dyn has done analysis of internet routing data indicating interesting connections with modern political events, among other things; Dyn even markets “internet intelligence” as a business.

As we move into the IoT, as Chapter 1 noted, humans may have difficulty perceiving the risks that compromise of a particular IoT infrastructure, such as smart meters, can pose to apparently unrelated systems, such as the cellphone network; likewise, human perception of security risks to a particular IoT infrastructure, such as Target’s point-of-sale terminals, may not take into account attacks launched at apparently unrelated systems, such as HVAC controls. Chapter 4 noted an instance where a port for a car’s CAN bus—through which one can unlock the vehicle—can be found outside the locked perimeter. In summer 2016, a security analysis of the massive privacy breach at Banner Health noted “it was odd that the point of sale systems at Banner’s 27 food service locations that were affected appear to have been on the same network as clinical systems” [8].

The business relationships in IT manufacturing—who buys and repackages what software and components from whom—also complicate accurate perception of and reasoning about boundaries. As Chapter 4 noted, vulnerabilities in the firmware of one CCTV device affect dozens of vendors, who simply re-packaged that device. The ThinkPwn exploit for a low-level UEFI driver from one particular small vendor affects the larger machines (such as some Lenovo laptops) that used that driver [9]. The flaws enabling hackers to shut down Andy Greenberg’s Jeep were not in the Jeep itself but in a radio manufactured by someone else—and used in other brands of cars as well [21].

Human Work in the IoT Age

Humans often define themselves by work—“your work is your worth.” The IoT changes the workplace. Does it change human worth?

As noted previously, mixing technology into the workplace changes the workplace. One might think that it would make things “better,” by some definition: people could get more things done more accurately in less time. For example, NPR’s Planet Money notes [17]:

The economist John Maynard Keynes predicted [in 1930] that his grandkids would work just 15 hours a week. He imagined by now, we would basically work Monday and Tuesday, and then have a five-day weekend.

Why did Keynes get this wrong? NPR posits it was because although technology improved productivity, people still choose to work, due to some combination of opportunity cost (that hour of leisure is not worth the lost income) and personal satisfaction. Observers taking a long view of the US economy might add that even the productivity improvement cannot compensate for the decrease in wages per unit of productivity, and the increase in cost of living. Back in the 1980s, the older businessman who ran one of the first tech companies I worked for lamented how (even then) both parents working would not support a family as well as one parent working when he was younger. Paraphrasing, “I never thought I would see the next generation do worse than the previous one.”

The IoT promises an even more disruptive technological revolution in the workplace. In terms of the triad discussed earlier, the permeation of IT into physical reality fundamentally changes the way business processes work—and human understanding of these processes (and of the role of human workers in them) lags behind.

In particular, the new technology enables a sort of arbitrage: tasks that required specialized humans can now be done by machines; tasks that required expensive local infrastructure (such as big computing) can now be outsourced inexpensively (e.g., to the cloud). In a series of articles [5, 6, and 7], writers Bernard Condon, Jonathan Fahey, and Paul Wiseman analyzed this impact:

Five years after the start of the Great Recession, the toll is terrifyingly clear: Millions of middle-class jobs have been lost in developed countries the world over. And the situation is even worse than it appears. Most of the jobs will never return, and millions more are likely to vanish as well, say experts who study the labor market. What’s more, these jobs aren’t just being lost to China and other developing countries, and they aren’t just factory work. Increasingly, jobs are disappearing in the service sector, home to two-thirds of all workers….

For more than three decades, technology has reduced the number of jobs in manufacturing….

Start-ups account for much of the job growth in developed economies, but software is allowing entrepreneurs to launch businesses with a third fewer employees than in the 1990s….

Those jobs are being replaced in many cases by machines and software that can do the same work better and cheaper….

Reduced aid from Indiana’s state government and other budget problems forced the Gary, Ind., public school system last year to cut its annual transportation budget in half, to $5 million. The school district responded by using sophisticated software to draw up new, more efficient bus routes. And it cut 80 of 160 drivers….

The analysis concludes by considering what will happen in a society where (thanks to technology) a majority of humans cannot find employment—or must compete for a vanishing number of “midskill” jobs. The machines will be wonderful and enable a wonderful life for the innovators. But what about the rest of us?

As far back as 1958, American union leader Walter Reuther recalled going through a Ford Motor plant that was already automated. A company manager goaded him: “Aren’t you worried about how you are going to collect union dues from all these machines?” “The thought that occurred to me,” Reuther replied, “was how are you going to sell cars to these machines?”

Many in the field already talk about dark factories (with no human employees). However, humans like doing things—that’s one of the reasons Keynes was wrong. What will our future hold?

Brave New Internet, with Brave New People in It

The previous chapter used the fashionable term “digital divide.” A similarly fashionable term, “digital natives,” refers to humans who grow up with new information technology. Rather than having to adapt to a new world (as “digital immigrants” must do), a digital native has always seen the universe as having this IT enhancement.

Confronted with the internet, the web, digital music, YouTube, iPods, and smartphones, digital immigrants are often frightened by the skill and ease with which digital natives interact with the digital. As Groucho Marx quipped in technologically ancient times:

A child of five would understand this. Send someone to fetch a child of five.

In slightly less ancient times, when I was considering leaving industry for academia, I asked a mentor how you lead students. His response: “You don’t lead them—you follow them.”

The IT of the 2010s is considerably advanced from the IT of the 1990s or the 1970s, and the teenagers of the 2010s perceive a very different world from their predecessors. However, the IoT of the 2020s promises (or threatens) to be a far greater advancement. What world will the digital natives of the IoT grow up in? Will they regard our current dumb houses and dumb cars and dumb bridges as hopelessly archaic? Alternatively, if we don’t adequately resolve the security and privacy risks, will they be able to cope if their world reverts to a cyber Love Canal?

After all, they are us.2

Works Cited

  1. R. Beckerman, “Large recording companies vs. the defenseless: Some common sense solutions to the challenges of the RIAA litigations,” The Judges Journal, American Bar Association, July 2008.

  2. L. J. Camp, “DRM: Doesn’t really mean digital copyright management,” in Proceedings of the 9th ACM Conference on Computer and Communications Security, 2002.

  3. J. Cheng, “Study: Surfing the internet at work boosts productivity,” Ars Technica, April 2, 2009.

  4. E. Clark, “Driverless cars need Australia’s latitude and longitude coordinates to be corrected,” Australian Broadcasting Corporation News, July 28, 2016.

  5. B. Condon, J. Fahey, and P. Wiseman, “Practically human: Can smart machines do your job?,” AP: The Big Story, January 24, 2013.

  6. B. Condon and P. Wiseman, “AP IMPACT: Recession, tech kill middle-class jobs,” AP: The Big Story, January 23, 2013.

  7. B. Condon and P. Wiseman, “Will smart machines create a world without work?,” AP: The Big Story, January 25, 2013.

  8. J. Conn, “Banner Health cyberattack impacts 3.7 million people,” Modern Healthcare, August 3, 2016.

  9. L. Constantin, “Firmware exploit can defeat new Windows security features on Lenovo ThinkPads,” PC World, July 1, 2016.

  10. C. Doctorow, “The problem with self-driving cars: Who controls the code?,” The Guardian, December 23, 2015.

  11. C. Doctorow, “DRM: You Have the Right to Know What You’re Buying!,” Electronic Frontier Foundation, August 5, 2016.

  12. S. J. Dubner, “The Dangers of Safety” (full transcript), Freakonomics Radio, August 13, 2015.

  13. E. Felten, “Too stupid to look the other way,” Freedom to Tinker, October 29, 2002.

  14. A. Fotheringham, “Quintana calls for power meters to be banned from racing,” Cyclingnews, August 30, 2016.

  15. M. Hauser and others, “A dissociation between moral judgments and justifications,” Mind & Language, February 2007.

  16. L. Huber and others, “How in-home technologies mediate caregiving relationships in later life,” International Journal of Human–Computer Interaction, 2013.

  17. D. Kestenbaum, “Keynes predicted we would be working 15-hour weeks. Why was he so wrong?,” NPR Planet Money, August 13, 2015.

  18. J. M. Laskas, “Inside the Federal Bureau of Way Too Many Guns,” GQ, August 30, 2016.

  19. R. Liu and others, Remote Driving: A Ready-to-Go Approach to Driverless Car? Technical Report DCS-TR-712, Rutgers University, Department of Computer Science, February 2015.

  20. C. Mele, “Self-service checkouts can turn customers into shoplifters, study says,” The New York Times, August 10, 2016.

  21. D. Morgan, “Car hacking risk may be broader than Fiat Chrysler: U.S. regulator,” Reuters, July 31, 2015.

  22. C. Ogden and I. Richards, The Meaning of Meaning. Harcourt, Brace and Company, 1927.

  23. N. Patel, “Self-driving cars aren’t going to be so great until we make our maps way better,” The Verge, August 24, 2016.

  24. S. Sinclair and S. W. Smith, “What’s Wrong with Access Control in the Real World?,” IEEE Security and Privacy, July/August 2010.

  25. S. Shinde-Nadhe, “Not using smartphones can improve productivity by 26%, says study,” Business Standard, August 30, 2016.

  26. S. W. Smith and others, Mismorphism: A Semiotic Model of Computer Security Circumvention (Extended Version). Dartmouth Computer Science Technical Report TR2015-768, March 2015.

  27. S. W. Smith and R. Koppel, “Healthcare information technology’s relativity problems: A typology of how patients’ physical reality, clinicians’ mental models, and healthcare information technology differ,” Journal of the American Medical Informatics Association, June 2013.

  28. C. Winston and others, “An exploration of the offset hypothesis using disaggregate data: The case of airbags and antilock brakes,” Journal of Risk and Uncertainty, March 2006.

1 Of course, this was back in an era when most intellectuals believed in free will; such belief seems unfashionable in many modern scientific circles. I think it’s safe to say this issue may be beyond the scope of this book.

2 Parts of “A Framework for Interconnection” are adapted from portions of my paper [26].
