Chapter 6. The Internet of Tattletale Devices

The IoT’s deep reach into society means we will need to reexamine how we think about technology and privacy.

In the emerging IoT, the “things” connect more deeply into an individual’s life and behavior than did the computers of previous IT infrastructure. They also transmit this data to a wider range of players. Consequently, the IoT may enable aggregation across large populations of individuals, and across a larger set of attributes for any one individual.

These new dimensions challenge the way we think about privacy: access and control of data about ourselves. This chapter considers some of these challenges, discussing:

  • Cautionary tales about IoT and IoC privacy leakage

  • Areas where current IoT data collection works as intended but is still surprising

  • How the emerging IoT may enable widespread monitoring of individuals

  • Why an effective privacy policy is a hard goal to achieve

Cautionary Tales

Chapter 2 discussed how, in a typical IoT architecture, distributed sensors and actuators connect to a big data backend, and Chapter 3 observed how the future has been here before. Let’s take a look at a few cautionary tales.

IoC Privacy Spills

Big data backends already have a history of privacy spills. Spills of personal data records have become so common that they’re barely newsworthy anymore. One nicely ironic incident was the 2015 compromise of the US government’s Office of Personnel Management (OPM); information (including mine) collected during background investigations of government employees, in order to ensure their trustworthiness, was then provided in bulk to attackers apparently working for a foreign nation [18]. In the medical domain alone, Forbes reports over 112 million records were spilled in 2015 [43].

Many of these spills are due to attackers breaching the servers where the data is stored—a standard matter of software holes and missing patches. In 2015, BinaryEdge surveyed the internet for installations of four specific database tools and found over a petabyte (10¹⁵ bytes) of data exposed online due to lack of authentication [3] (recall Chapter 4). Some leaks stem from lost laptops, but others arise from the complexities of connection. For one example, the FTC reports [47]:

Medical transcript files prepared between March 2011 and October 2011 by Fedtrans, GMR’s service provider, were indexed by a major internet search engine and were publicly available to anyone using the search engine. Some of the files contained notes from medical examinations of children and other highly sensitive medical information, such as information about psychiatric disorders, alcohol use, drug abuse, and pregnancy loss.

For another example, the White House bragged about moving citizen services from paper to electronic [58]:

In 2014, the Internal Revenue Service made it possible for tax-payers to digitally access their last three years of tax information through a tool called Get Transcript. Individual taxpayers can use Get Transcript to download a record of past tax returns, which makes it easier to apply for mortgages, student loans, and business loans, or to prepare future tax filings.

Unfortunately, in 2015, attackers used this “Get Transcript” service to download other people’s past returns, and then, using this information, filed fraudulent returns for the current year and collected the resulting refunds. The New York Times reports that over $50 million was stolen from over 100,000 individuals [53].

Given that the IoT will interconnect a larger number of systems in more complex ways, we might also expect more spills resulting from inadvertent interconnection. What else do these IoC issues say about privacy in the IoT?

For one thing, as we go from the IoC to the IoT, there’s no reason to suspect the backend servers will become more secure. It’s also interesting to note that root factors in the OPM case included dependence on software platforms no longer supported by their vendors, and on web portals using obsolete cryptographic protocols. As Chapter 1 noted, the IoT will likely bring lifetime mismatches among things, software, and vendors; as Chapter 4 noted, the lifetime of things (and the difficulty of patching them) may lead to more trouble with aging cryptography.

We also might expect the IoT to inherit the risk of exposure from physical distribution, since it will have even more separate pieces, more broadly distributed and more likely to be misplaced. Indeed, things may even live longer than the people and enterprises that use them—and researchers and journalists already report finding interesting personal data on used machines purchased on eBay and elsewhere. Even now in the IoC, part of the problem is how deeply data can penetrate and hide in systems. A colleague who worked in IT at a large hospital used to report that when clinicians would return borrowed laptops—after conscientiously trying to scrub them of patient data—he could always find some such data remaining somewhere. Tools that would help purge devices of such sensitive remnants would help both in the IoC and the IoT.

Unfortunately, we are already seeing risks to the IoT from IoC privacy problems becoming more than just theoretical. In 2015, Motherboard reported on a spill of backend data from VTech, an IT-enhanced toy company [16]:

The personal information of almost 5 million parents and more than 200,000 kids was exposed earlier this month after a hacker broke into the servers of a Chinese company that sells kids toys and gadgets…. The hacked data includes names, email addresses, passwords, and home addresses…. The dump also includes the first names, genders and birthdays of more than 200,000 kids. What’s worse, it’s possible to link the children to their parents, exposing the kids’ full identities and where they live

IoT Privacy Worries

Besides merely inheriting the privacy risks of their backend servers, IoT products and applications have also been introducing new concerns of their own.

Adding “smart” functionality to previously less-smart home appliances is one area where this concern manifests. For example, consider Vizio smart TVs [31]:

Vizio has sold more than 15 million smart TVs, with about 61 percent of them connected as of the end of June [2015]. While viewers are benefiting from those connections, streaming over 3 billion hours of content, Vizio says it’s watching them too, with Inscape software embedded in the screens that can track anything you’re playing on it—even if it’s from cable TV, videogame systems and streaming devices.

Of additional concern is the 2015 discovery that, thanks to the “bad PKI” pattern described in Chapter 4, adversaries can intercept the Vizio data transmissions [22]. The TV, its corporate partners, and adversarial middlemen are watching you. Robert Bork1 might be rolling over in his grave.

Adding voice interaction to home devices adds another vector of concern. Apple plans to have Siri listen to and transcribe her owner’s voicemail, using various servers and services in the process [11]. Samsung smart TVs have a voice recognition feature that allows users to control the TV by speaking to it. Implementing this feature requires sending the audio back through the cloud, including to players other than Samsung. ITworld notes this advice from Samsung [10]:

Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.

Your TV is also listening to your living room!

It’s interesting to note the Samsung implementation also followed the insecurity design pattern of forgetting to encrypt [9]:

Following the incident, David Lodge, a researcher with a U.K.-based security firm called Pen Test Partners, intercepted and analyzed the Internet traffic generated by a Samsung smart TV and found that it does send captured voice data to a remote server using a connection on port 443.

This port is typically associated with encrypted HTTPS (HTTP with SSL, or Secure Sockets Layer) communications, but when Lodge looked at the actual traffic he was surprised to see that it wasn’t actually encrypted.

(As we move forward into the IoT, “closing the loop” with testing like this will be vital.)
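
Closing that loop does not require much: capture the device’s outbound traffic (say, at a home router you control) and check whether the bytes headed for port 443 actually begin a TLS handshake. Here is a minimal sketch of such a check in Python, assuming you have already extracted the first payload bytes of each outbound connection; the hostnames and payloads in the example are invented.

    # Heuristic check: does a captured TCP payload look like the start of a
    # TLS session, or like plaintext merely riding on port 443?
    # Sketch only; assumes the first payload bytes of each outbound
    # connection from the device under test have already been captured.

    def looks_like_tls_client_hello(payload: bytes) -> bool:
        """A TLS record starts with content type 0x16 (handshake) followed
        by a 0x03,0x0X version byte pair (SSL 3.0 through TLS 1.3)."""
        return (
            len(payload) >= 3
            and payload[0] == 0x16                      # handshake record
            and payload[1] == 0x03                      # major version 3
            and payload[2] in (0x00, 0x01, 0x02, 0x03, 0x04)
        )

    def audit(connections):
        """connections: iterable of (destination, port, first_payload_bytes)."""
        for dest, port, payload in connections:
            if port == 443 and not looks_like_tls_client_hello(payload):
                print(f"WARNING: traffic to {dest}:443 does not look like TLS")

    # A fake capture in which one "443" flow is really plaintext HTTP.
    audit([
        ("tv-backend.example.com", 443, bytes([0x16, 0x03, 0x03]) + b"..."),
        ("voice.example.com", 443, b"POST /voice HTTP/1.1\r\n..."),
    ])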

In 2015, Mattel announced “Hello Barbie,” bringing cloud-connected voice interaction to children’s dolls [55]. The Register described it as:

A high-tech Barbie that will listen to your child, record its words, send them over the internet for processing, and talk back to your kid. It will email you, as a parent, highlights of your youngster’s conversations with the toy.

The article further noted the conversations might persist in the cloud, as grist for the computational engines. According to ToyTalk’s privacy policy:

When users interact with ToyTalk, we may capture photographs or audio or video recordings (the “Recordings”) of such interactions, depending upon the particular application being used. We may use, transcribe and store such Recordings to provide and maintain the Service.

Although “Hello Barbie” repeated some cryptographic insecurity patterns (as Chapter 4 discussed), no major privacy disaster has emerged. However, the implications are deep. Is one’s home still a sanctuary? Who gets to know what one’s child whispers to a toy? Will these recordings result in parents being visited by child welfare agents—or (depending on the society) being prosecuted for political thoughtcrime? Will the whispers show up when the child grows and applies for a job or a security clearance, or is a defendant in court?

The consumer-side smart grid—a sexy IoT application domain discussed back in Chapter 3—has also triggered privacy concerns. In its intended functionality, the smart grid will instrument appliances and other electrical devices in the home with sensors and maybe even actuators so that the rest of the grid can better keep things in balance (bringing the user along by offering better pricing and such). The privacy concern here is that this set of measurements—how much power each device in a house or apartment is using at any given moment in time—becomes a signature of what is going on there: who is present and what they are doing. The patterns that emerge over time can then be the basis for prediction of future household activity—useful to the grid, perhaps, but also useful to criminals who might want to know when the home is empty, or when an intended assault or kidnap victim is there alone.
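
To see why a stream of meter readings is so revealing, consider a toy sketch of the inference an eavesdropper (or an overly curious backend) could run over interval data. The baseline, thresholds, and appliance “signatures” below are invented for illustration; real load-disaggregation systems are far more sophisticated.

    # Toy load-signature inference: given watt readings sampled during the
    # day, guess whether anyone is home and what they might be doing.
    # Baseline, thresholds, and "signatures" are invented for illustration.

    BASELINE_WATTS = 250          # fridge, router, standby loads (assumed)
    SIGNATURES = {                # hypothetical appliance draws, in watts
        "electric kettle": 1500,
        "television": 120,
        "oven": 2400,
    }

    def infer_activity(readings):
        """readings: list of (hour_of_day, watts). Yields crude guesses."""
        for hour, watts in readings:
            excess = watts - BASELINE_WATTS
            if excess < 50:
                yield hour, "probably nobody home (or asleep)"
                continue
            guesses = [name for name, draw in SIGNATURES.items()
                       if abs(excess - draw) < 100]
            yield hour, "someone is home (" + (", ".join(guesses) or "unknown load") + ")"

    for hour, guess in infer_activity([(7, 1760), (13, 260), (19, 370), (23, 280)]):
        print(f"{hour:02d}:00  {guess}")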

As with any privacy exposure in information technology, the issue exists on two levels: not only what the authorized backend entities might know, but also what might be gleaned by unauthorized entities who penetrate communication channels or servers.

When Things Betray Their Owners

The preceding scenarios mostly described potential privacy problems in the IoT, inspired by IoC privacy issues. However, there have already been many actual privacy problems arising from what IoT things actually do: collect lots of data about the physical world around them—including their owners.

Your Things May Talk to Police

In March 2015 in Pennsylvania, a woman called 911 to report she had been raped by someone who broke into her house and assaulted her while she was sleeping. However, police investigators concluded she had made the story up, in part because of her Fitbit [24]:

The device, which monitors a person’s activity and sleep, showed Risley was awake and walking around at the time she claimed she was sleeping.

Big Brother may not be watching you, but Little Fitbit is.

In December 2015 in Florida, a driver (allegedly) was involved in a car accident and then fled [40]. When telephoned by an emergency dispatcher, she responded, “Ma’am, there’s no problem. Everything was fine.” Hit-and-run accidents are nothing new. However, what’s interesting in this case is how the police became involved:

The dispatcher responds: “OK, but your car called in saying you’d been involved in an accident. It doesn’t do that for no reason. Did you leave the scene of an accident?” [Emphasis added]

The woman’s car, like many new cars enhanced with computing magic, was set up to call 911 if its GPS information indicated a potential crash (e.g., because of a rapid change in direction and momentum). Your car can know where you are and will call 911 to help you, even if that’s not your plan. But what else will happen with this data? The article notes further:

Privacy campaigners concerned that governments might use the technology to keep permanent track of a vehicle’s movements have been told the new rules only allow for GPS information to be collected in the event of a collision, and that it must be deleted once it’s been used.

But will the GPS data really “only” be used for the noble purpose of accident reporting? Indeed, modern IT-enhanced cars collect a great deal of data, and police know it.

In Vermont in 2015, a cyclist was killed by a car whose driver was allegedly intoxicated [13]:

Scott said investigators obtained a search warrant for the Gonyeau car to download information from its computer. He said once the information from the car’s sensors can be reviewed, police will know more about the crash.

The investigation later concluded the cyclist was mostly to blame.

In 2016, Canada’s CBC News reported [6]:

From July 1 to Dec. 31 of last year, there were five fatal vehicle collisions in the parts of Halifax policed by the RCMP. Information from event data recorders was used in two of those investigations, according to an access to information request filed by CBC News.

CBC also noted the various implications:

  • Are the owners aware their cars collect this data?

  • Will the data only speak of things such as car accidents, and not other aspects of driver and passenger identity and behavior?

  • Will the police only use the data for correct purposes?

  • Is the data actually correct?

It’s worth noting that a colleague of mine who spent a career in law enforcement (in the US, a country with constitutional privacy protections for citizens) observed that it’s common practice for police to use illegal means to find out who’s guilty—after which they then use legal means to obtain evidence for court. It’s also worth noting that just because a computer allegedly measured something doesn’t mean that it actually happened; “Things ‘on the witness stand’” on page 180 will consider further the legal implications.

Your Things May Phone Home

Law enforcement officials aren’t the only people your smart things may talk to.

In February 2013, John Broder wrote an unfavorable review of the Tesla Model S in the New York Times [5]. Broder was unhappy with the performance of the high-end electric car, and supported this conclusion with his firsthand observations of speeds and charges and such as he test-drove it. What’s interesting here from the IoT perspective is that the reviewer was not the only witness—the car itself was recording data about its experiences and sending this data back to Tesla. Unhappy with the review, Tesla chair Elon Musk published a retort [44] using the car’s logs to dispute what the reviewer claimed happened during this “most peculiar test drive.” For example, one of the diagrams showed a speed versus distance graph, with annotations appearing to show that Broder’s claims of “cruise control set at 54 miles per hour” and how he “limped along at about 45 mph” did not match recorded reality. A back-and-forth ensued, with no clear winner [23]. (Tesla would not give me permission to republish any of these diagrams, but you can see them in [44, on page T3].)

In 2016, this pattern continues, with high-profile incidents (e.g., [20]) of customers claiming their Teslas did something odd, and Tesla using its logs to claim otherwise.

In the IoT, your things are also witnesses to what you witness—and they may see it differently.

Given the computationally intensive engineering challenges of high-tech and high-end cars such as Teslas, the fact that they log data and send it back home would appear reasonable. The more data is collected, the more the engineers can analyze and tune both the design in general and that car in particular. Tesla is not alone in doing this. One colleague reported his BMW decided it needed servicing and told BMW, which called my colleague—while he was driving. (The message was something like “Your car says it needs to be serviced right now.”) Another colleague who handles IoT security for the company whose machines generate “half the world’s electricity” talks about the incredible utility of being able to instrument complex machines, send the data back home, and feed it into computerized models that the engineers trust more than physical observation.

However, in February 2015, Brian Krebs wrote about a family of IoT devices that appear to phone home for no reason at all [30]:

Imagine buying an internet-enabled surveillance camera…only to find that it secretly and constantly phones home to a vast peer-to-peer (P2P) network run by the Chinese manufacturer of the hardware.

In fact, this is what newer IP cameras from Foscam were doing—which came to light when a user “noticed his IP camera was noisily and incessantly calling out to more than a dozen online hosts in almost as many countries.” To make things even more interesting, the camera UI does let the user tick a box to opt out of P2P—but ticking it doesn’t actually change the camera’s behavior. In this case, it’s harder to see a reasonable argument for the P2P network; Foscam claims it helps with maintenance.
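
Spotting this kind of chattiness is within reach of anyone who controls the local network: capture the device’s traffic and count where it goes. Here is a minimal sketch, assuming a capture file taken on the camera’s network segment and the third-party scapy library; the device address and filename are placeholders, not taken from the Foscam analysis itself.

    # Summarize where a device "phones home": read a packet capture from
    # the camera's network segment and count its outbound destinations.
    # The address and filename are placeholders; requires the scapy package.

    from collections import Counter
    from scapy.all import rdpcap, IP

    CAMERA_IP = "192.168.1.50"      # assumed address of the device under test

    destinations = Counter()
    for pkt in rdpcap("camera.pcap"):
        if IP in pkt and pkt[IP].src == CAMERA_IP:
            destinations[pkt[IP].dst] += 1

    for dst, count in destinations.most_common(20):
        print(f"{dst:>15}  {count} packets")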

Your Things May Talk to the Wrong People

In the preceding cases, IoT things shared data about their experiences in perhaps surprising ways—but at least they were sharing it in accordance with their design (e.g., to authorized law enforcement officers, or to the original vendor for maintenance and tuning).

However, a problem with exposing interfaces is that, perhaps due to one of the standard insecurity patterns of Chapter 4 or perhaps due to a new one, a vendor may inadvertently provide these services to more parties than intended. Unfortunately, this has already happened with IoT data collection.

GM brags that its OnStar system for collecting and transmitting car data has “been the Kleenex of telematics for a long time” [19]. In 2011, Volt owner Mike Rosack so much enjoyed tracking the telematics he received on his phone from his car that he reverse-engineered the protocol and set up the Volt Stats website, which enabled a broader population of Volt owners to share their telematics. Unfortunately, doing this required that the owners share their credentials with Volt Stats (the “lack of delegation” pattern from Chapter 4). GM decided this was an unacceptable privacy risk and shut down the API, but then provided an alternate one that allowed Volt Stats data sharing to continue but without this risk. Unfortunately, in 2015, researcher Samy Kamkar found a way to surreptitiously capture owner credentials (the “easy exposure” pattern from Chapter 4). The resulting OwnStar tool allows unauthorized adversaries to usurp all owner rights [17].
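
The “lack of delegation” half of this story has a well-understood fix: rather than handing a third-party site your credentials, you obtain a scoped, expiring, revocable token that grants only the access that site needs. The sketch below illustrates the idea with an invented token format; a real deployment would use OAuth 2.0 or a similar standard.

    import hashlib
    import hmac
    import json
    import secrets
    import time

    # Toy delegation: the telematics backend issues a token scoped to
    # "read telematics" that a stats site can present instead of the
    # owner's password. Token format and names are invented.

    SERVER_KEY = secrets.token_bytes(32)        # held only by the backend

    def issue_token(owner_id: str, scope: str, ttl_s: int = 30 * 24 * 3600) -> str:
        claims = json.dumps({"owner": owner_id, "scope": scope,
                             "exp": int(time.time()) + ttl_s})
        mac = hmac.new(SERVER_KEY, claims.encode(), hashlib.sha256).hexdigest()
        return claims + "." + mac

    def check_token(token: str, required_scope: str) -> bool:
        claims_json, _, mac = token.rpartition(".")
        expected = hmac.new(SERVER_KEY, claims_json.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, expected):
            return False
        claims = json.loads(claims_json)
        return claims["scope"] == required_scope and claims["exp"] > time.time()

    token = issue_token("volt-owner-42", scope="telematics:read")
    print(check_token(token, "telematics:read"))    # True: stats site can read data
    print(check_token(token, "doors:unlock"))       # False: but cannot unlock the car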

In Australia, four shopping malls set up “smart parking” that used license-plate readers to track when cars entered and left, and gave users the option of receiving text alerts when their parking time was close to expiring. However, the malls discontinued this service when it was noticed that anyone could request notification for any vehicle (the “no authentication” pattern from Chapter 4) [12].

Chapter 1 discussed the Waze crowdsourcing traffic mapping application. Chapter 4 mentioned the “bad PKI” design pattern that has been surfacing in IoT applications. One place it has surfaced is in Waze: in 2016, scholars at UC Santa Barbara demonstrated that (due to flaws in checking certificates) they could intercept Waze’s encrypted SSL communications, and then introduce “thousands of ‘ghost drivers’ that can monitor the drivers around them—an exploit that could be used to track Waze users in real-time” [26]. Here, the service being usurped by the unauthorized party (“Where is driver X right now?”) was not really one of the intended services to begin with.
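
The flaw class underneath such “bad PKI” incidents is easy to state: a client that encrypts but does not verify the server’s certificate chain and hostname will happily talk TLS to an impostor. Here is a minimal sketch of the difference, using Python’s standard ssl module; the endpoint name is illustrative, not Waze’s.

    import socket
    import ssl

    HOST = "api.example.com"    # illustrative endpoint

    # What a careful client does: verify the certificate chain against
    # trusted roots and check that the certificate matches the hostname.
    good_ctx = ssl.create_default_context()

    # The "bad PKI" pattern: encryption without authentication. A middleman
    # can present any certificate at all and still be accepted.
    bad_ctx = ssl.create_default_context()
    bad_ctx.check_hostname = False
    bad_ctx.verify_mode = ssl.CERT_NONE

    def fetch(ctx: ssl.SSLContext) -> bytes:
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                tls.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
                return tls.recv(4096)

    # Against an impostor, fetch(good_ctx) fails loudly with a certificate
    # error; fetch(bad_ctx) silently hands the impostor everything.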

As an extreme case of unauthorized access to unintended services, researchers at SRI (recall Figure 1-3 in Chapter 1) have been worrying about not just adversarial access to the internal IT of government automobiles, but even mere detection that a particular vehicle is passing by. For a terrorist or assassin, the ability to build a roadside bomb that explodes when one particular vehicle goes by would be useful indeed. In this case, even the natural solution of “disable all electronic emanations” would not work, since the bomb could simply wait for the car that is suspiciously silent.

Emerging Infrastructure for Spying

The previous section closed by considering how the IoT could be useful to terrorists. IoT applications can also have utility for other adversaries (such as spies or corrupt government officials) interested in systematically monitoring an individual’s activity.

Wearables and Health

One set of issues arises from IoT technology tied to a person: smartphones and applications, Fitbits and other wearables, Garmin-style devices on bicycles, mobile health (mHealth) technologies, etc.

Such technology can have upsides for the individual: monitoring aspects of health, tuning and improving athletics, tracking and sharing bicycle and running routes, etc. Friends concerned about their weight track calorie intake with iPhone applications connected to databases of food items; friends interested in exploring fine beer track their drinking with a similar application. When I was a serious bike racer, everyone in the peloton started using wearable heart-rate monitors to track and tune performance. The local cycling and running communities are full of people who religiously monitor each ride or run with Strava.

Initial privacy (and security) concerns about wearables arise from straightforward issues:

  • By design, can they expose data to the wrong parties? For a positive example, Strava addresses this concern by permitting users to set up a privacy zone around their residence, so that cycling or running routes from there will actually appear to have an endpoint somewhere nearby (see the sketch following this list).

  • Does the core device have secure interfaces? For a negative example, in 2015 researcher Simone Margaritelli discovered that the Nike+ Fuelband fell into some of the standard patterns from Chapter 4: flawed authentication allowed “anyone to connect to your device,” and inadvertent inclusion of debug code allowed the ability to alter the internal programming [34].

  • Does the device’s supporting infrastructure have problems? For example, in a survey of Android mHealth applications, researchers from the University of Illinois at Urbana-Champaign discovered many did not encrypt communications (thus exposing user data to anyone listening in) and used third-party services (thus incurring privacy dependence on cloud parties the user might not know about) [25].
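
A privacy zone of this sort is conceptually simple: before a route is shared, drop (or blur) the track points that fall within some radius of the protected location. Here is a minimal sketch with an invented radius and home coordinate; it illustrates the idea, not Strava’s actual algorithm.

    import math

    # Toy "privacy zone": strip GPS points within a radius of a protected
    # location before a route is shared. Radius and home coordinates are
    # invented for illustration.

    HOME = (43.7022, -72.2896)        # hypothetical protected location
    RADIUS_M = 500.0

    def haversine_m(p, q):
        """Great-circle distance in meters between two (lat, lon) points."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = (math.sin(dlat / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
        return 6371000.0 * 2 * math.asin(math.sqrt(a))

    def scrub(track):
        """Remove points inside the privacy zone before publishing a route."""
        return [pt for pt in track if haversine_m(pt, HOME) > RADIUS_M]

    ride = [(43.7023, -72.2897), (43.7101, -72.2810), (43.7250, -72.2600)]
    print(scrub(ride))   # the point nearest home is dropped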

Beyond these straightforward issues, deeper privacy issues arise when one considers who else (besides the user) might benefit from these personal technologies.

The fact that many (US) employers also pay for their employees’ health insurance costs creates a business motivation for employers to promote wellness via Fitbits or the like, as Chapter 7 will discuss. However, what else should the employer know? In California in 2015, a woman filed a lawsuit claiming her employer required her to run an iPhone application that “tracked her every move 24 hours a day, seven days a week” [29]. In 2016, the Dutch government ruled that employer monitoring of employees via wearables violated employee privacy, even if the employee gave consent, since the power relationship makes true consent impossible [46].

What if the business itself depends on physical performance? At a wearables event in Canada connected to the 2015 Pan Am games, panelists discussed a variety of issues, ranging from upsides for everyone (better performance) to the privacy downsides for athletes [59]:

If predictive analytics can suggest when an athlete’s performance will start to decline, will team owners use that data to shorten players’ contracts or pay them less, even while they’re still at the top of their game?

Yet another issue: who actually owns the athlete’s data? (In the US, we’ve already seen a similar kerfuffle for the general population: who owns your health record?)

The provider of a wearable service benefits monetarily, and advertisers benefit by gaining precise information about their target audience. Putting the two together, a wearable service might expose a user’s personal information to advertisers. In fact, in May 2016, news reports indicated the operators of Runkeeper were doing that, and more, in apparent violation of European privacy laws: “It turns out that Runkeeper tracks its users’ location all the time—not just when the app is active—and sends that data to advertisers” [7].

Internet of Big Brother’s Things

George Orwell’s 1984 posited a dystopian future where a governmental Big Brother monitored every move of every citizen. Many aspects of the IoT may be useful to Big Brothers (government or otherwise).

To start with, many current IoT applications seem targeted directly at improving surveillance. In 2015, the Mercury News reported that the city of San Jose was considering adding license-plate readers to its trash trucks so that they could scan and report to the police what cars they saw along their routes [21]. The ACLU voiced objections:

If it’s collected repeatedly over a long period of time, it can reveal intimate data about you like attending a religious service or a gay bar. People have a right to live their lives without constantly being monitored by the government.

For other examples of fusion, Bill Schrier, formerly the CTO of Seattle, speculates on the “Internet of First Responder Things” [51]; Macon-Bibb County in Georgia is considering deploying drones as first responders [14].

Big Brother can also watch via cameras. While CCTV cameras in urban areas have been emerging over the last few decades, recent years have seen commercial products that embed such things in other devices. Sensity bundles active surveillance into previously inert infrastructure such as lighting:

NetSense for Airports offers a unique and effective solution for enhanced security through the airport terminal, parking lots and perimeter. By embedding video and other sensors inside LED luminaires, energy savings are combined with high-power security technology.

The ACLU again expressed concern [36]:

These lightbulbs-of-the-not-so-distant-future will also be able to GPS track individual shoppers as they travel through stores. Wait. What? The light bulbs can function as tracking devices? We would have to imagine that if they can GPS-track shoppers in stores, they could work just as effectively to track people as they walk the streets of our cities and towns. In fact, if you traveled through Newark Liberty International Airport in the past year, these spy-bulbs were already watching you. And there’s more: the bulbs can be programmed to “pick up on suspicious behavior.” What exactly does that mean? If two women wearing head scarves decide to chat in a parking lot after seeing a late night movie, are the police going to be notified?

On a lighter note, mass video surveillance has an upside: the band The Get Out Clause allegedly used ubiquitous government cameras to film their breakout music video [8]:

“We wanted to produce something that looked good and that wasn’t too expensive to do,” guitarist Tony Churnside told Sky News.

“We hit upon the idea of going into Manchester and setting up in front of cameras we knew would be filming and then requesting that footage under the Freedom Of Information act.”

One can harvest entertaining screenshots from this video; however, I could not find anyone to ask for publication permission, so no screenshot will appear here.

In 2016, Matt Novak of Paleofuture observed [45]:

Back in March, I filed a Freedom of Information request with the FBI asking if the agency had ever wiretapped an Amazon Echo. This week I got a response: “We can neither confirm nor deny.”

In 2016, the Guardian reported that James Clapper, former US Director of National Intelligence, had observed [1]:

“In the future, intelligence services might use the [Internet of Things] for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.”

Richard Ledgett of the NSA concurred [37]:

Biomedical devices could be…“a tool in the toolbox”…. When asked if the entire scope of the Internet of Things—billions of interconnected devices—would be “a security nightmare or a signals intelligence bonanza,” he replied, “Both.”

In 2016, the US has seen ongoing debate about whether law enforcement officials should have warrantless access to citizens’ prescription information [41]:

“It has become the status quo that when a person comes under their radar they run to the prescription drug database and see what they are taking,” said Sen. Todd Weiler, a Republican—who said that police in Utah searched the PDMP database as many as 11,000 times in one year alone. “If a police officer showed up at your home and wanted to look in your medicine cabinet and you said no, he would have to go and get a search warrant.”

Interestingly, such access has actually enabled drug abuse, as witnessed by the report of:

An opioid-addicted police officer who was caught on video stealing pills from an elderly couple’s home after tracking their prescriptions in the state’s PDMP database.

On a brighter note, in 2014 the Supreme Court ruled, in Riley v. California, against warrantless searching of a cellphone [32]:

Even the word cellphone is a misnomer, [Chief Justice Roberts] said. “They could just as easily be called cameras, video players, Rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps or newspapers.”

Instead of asking what’s possible, perhaps we should be asking what isn’t possible. Researchers from Nanjing University have shown that accelerometers alone (and not GPS or cellular connection) can suffice to track an individual’s motion through an underground train system [48]. Researchers here at Dartmouth College have shown data from a student’s smartphone can predict both GPA and psychological depression [39]. As I wrote these words, news reports indicated that (thanks to the “default password” pattern) the Quebec Liberal Party’s videoconference system could be used to spy on party meetings [15]. No wonder InfoWorld’s Fahmida Rashid fears the Google Home of the future [50]:

Always-listening devices accelerate our transformation into a constantly surveilled society. That’s a problem not only for us but for our kids, too.

Getting What We Want

Current and proposed IoT applications have privacy risks. What can we do about this?

Saying What We Want

First, there’s an old saying in software engineering that if you can’t say what correct behavior is, then the system can never behave incorrectly. An overarching problem with saying that the IoT will reveal too much personal information to the wrong parties is what such a statement implicitly assumes:

  • That some amount of information is “not too much”

  • That some parties are acceptable recipients of this information

  • That we (as individuals or as a society) are able to express exactly what these rules are

  • That computers are able to enforce these rules

Let the individual decide?

For example, in the case of electronic embodiments of personal health information, one often hears the assertion that each individual patient should be able to decide what to share with whom. Although perhaps compelling, this doctrine has problems.

First, it’s not what happens in the pre-IoT health world. In the US, law requires that clinicians treating certain kinds of issues (e.g., gunshot wounds, suicidal tendencies, potential molestation, certain communicable diseases) report these to various authorities. The question of whether a parent can legally see health information about his or her child is also surprisingly nuanced.

Second, the patient may not necessarily be in a position to judge “need to know” accurately. An MD friend who works in medical informatics loves to point out annoying counterexamples:

  • In the 1984 Libby Zion case, an 18-year-old woman receiving emergency treatment in a New York City hospital died unexpectedly, leading to lengthy litigation. According to some versions of the story, Zion died in part because neither she nor her family decided it was relevant to tell clinicians she had been on antidepressants and had recently taken cocaine. (See [28] for one viewpoint.)

  • Should your heart doctor know about your dental health? Current consensus in the medical community says yes: gum issues connect to heart disease [2].

Another challenge is that the problem of Alice deciding to share X with Bob is not well defined. When does Alice make this decision? When she makes the decision, does she express what she really wants? Over the last few decades, psychology has produced many reproducible results in cognitive bias: how human minds can form perceptions and judgments in surprisingly bizarre and “incorrect” ways. In my own research, I’ve looked at how cognitive bias can complicate security and privacy [54]. For example:

  • Perhaps due to dual process issues, educating users about social network privacy issues can lead them to make quantitatively worse privacy decisions [56].

  • Perhaps due to the empathy gap, reasonable electronic medical record (EMR) users in policy meetings will make access control decisions that reasonable EMR users in practice will find overly constraining [57].

An older friend of mine in the US even laments that he would like to have all of his medical information widely accessible by default, since that might lead to better treatment as he ages—but US medical privacy regulations do not give him that option.

Finally, what about aggregation and anonymization? Even in the medical case, most individuals would probably accept having their individual cases factored into some larger and appropriately blinded counts of diagnoses and treatments, for the greater social good. However, even this noble sentiment hides a minefield: what is “appropriate blinding”? Cynical cryptographic colleagues often lament that reports of anonymization are greatly exaggerated.

Policy tools

Even in the legacy IoC, the problem of specifying an access policy—whether Alice should be allowed to see X under conditions Y—is vexing. Way back in 1976, Harrison, Ruzzo, and Ullman proved that the problem “do these policy rules allow anything bad to ever happen?” (for a fairly simple and formal definition of “bad”) was computationally undecidable in the general case. Way back in the early days of the World Wide Web, computer scientist Lorrie Cranor crafted the Privacy Bird tool to help ordinary users ensure that websites respected their privacy wishes—only to find that while users had subtle and nuanced privacy preferences, developing a language in which they could specify them was extremely complicated. Later work by Cranor showed how apparently minor choices in file access policy language can make it easy or hard for ordinary users to correctly express their desired behavior. See [52] for some of my ranting in this access control hygiene space.

When media (such as text and songs) started merging with the IoC world, the challenge of digital rights management—how to express appropriate usage rights in a way that machines can enforce—became another nightmare problem, still echoing today (e.g., see https://www.eff.org/issues/drm). The human world had the concept of “fair use,” but it’s hard to tell computers what that means.

When we go from the IoC to the IoT, we add even more dimensions to the privacy policy problem. Do users fully understand all the players that are involved in an interconnected cloud-supported service? Do users understand the implications of large aggregations of personal data? (For example, in the Dartmouth StudentLife project [39], test subjects who were willing to share some smartphone data would probably not have been willing to share their GPAs and mental health diagnoses—even though the former correlated with the latter.) What about things that last longer than the companies that support them or the people to whom they belonged? Can my wife still see her health record even though her name changed? Can my children listen to my music after I die?

As we’ve seen earlier in this chapter, IoT and IoC applications may often share more data with more parties than intended. Perhaps it would be useful to develop tools to “fuzz-test” policy: for instance, if P turns to P' due to some standard blunders, is P' still acceptable?
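
Here is a minimal sketch of what such policy fuzz-testing might look like: represent a policy as a set of (subject, object, right) grants, mutate it with “blunders” modeled on common misconfigurations, and check whether a stated invariant still holds. The policy, blunders, and invariant are all invented for illustration.

    import random

    # Toy policy fuzz-tester: a policy is a set of (subject, object, right)
    # triples; "blunders" are mutations modeled on common misconfigurations;
    # an invariant expresses what must never be allowed. All names invented.

    POLICY = {
        ("parent", "child_recordings", "read"),
        ("vendor", "child_recordings", "transcribe"),
    }

    def never_public(policy):
        """Invariant: recordings are never readable by 'anyone'."""
        return ("anyone", "child_recordings", "read") not in policy

    BLUNDERS = [
        lambda p: p | {("anyone", "child_recordings", "read")},   # default-open share
        lambda p: p | {("vendor", "child_recordings", "read")},   # scope creep
        lambda p: {(s, o, r) for (s, o, r) in p if r != "read"},  # over-zealous revoke
    ]

    def fuzz(policy, invariant, rounds=1000):
        for _ in range(rounds):
            mutated = set(policy)
            for blunder in random.sample(BLUNDERS, k=random.randint(1, len(BLUNDERS))):
                mutated = blunder(mutated)
            if not invariant(mutated):
                return mutated
        return None

    bad = fuzz(POLICY, never_public)
    print("invariant violated by mutated policy:" if bad else "no violation found", bad or "")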

Law and Standards

Industry and legal standards can be another vector to address “greater social good” concerns such as privacy. We already see this happening in the IoT space.

Even back in 2013, researchers in medical informatics lamented the lack of formal privacy policies in mHealth applications [42]. In February 2015, the office of Senator Edward J. Markey released a report looking at privacy in the IT-enhanced car [35] and described several disturbing findings, including:

Nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.

Automobile manufacturers collect large amounts of data on driving history and vehicle performance….

most do not describe effective means to secure the data….

Customers are often not explicitly made aware of data collection and, when they are, they often cannot opt out without disabling valuable features, such as navigation.

The report urged the US National Highway Traffic Safety Administration (NHTSA) and Federal Trade Commission to take action.

Around the same time, Edith Ramirez, chair of the FTC, warned [33]:

Connected devices that provide increased convenience and improve health services are also collecting, transmitting, storing, and often sharing vast amounts of consumer data, some of it highly personal, thereby creating a number of privacy risks.

She urged the following:

(1) adopting “security by design”; (2) engaging in data minimization; and (3) increasing transparency and providing consumers with notice and choice for unexpected data uses

In August 2015, the Online Trust Alliance (OTA) proposed a set of IoT rules to promote privacy [49]. Many of these will sound welcome to the reader who’s read this far and seen the consequences of bad authentication, bad encryption, flawed interfaces, unpatchability, and lifetime troubles:

  1. Default passwords must be prompted to be reset or changed on first use or uniquely generated….

  2. All user sites must adhere to SSL2 best practices using industry standard testing mechanisms….

  3. All device sites and cloud services must utilize HTTPS encryption by default.

  4. Manufacturers must conduct penetration testing for devices, applications and services….

  5. Manufacturers must have capabilities to remediate vulnerabilities in a prompt and reliable manner.

  6. All updates, patches, revisions, etc. must be signed/verified.

  7. Manufacturers must provide a mechanism for the transfer of ownership including providing updates for consumer notices and access to documentation and support.

Although it does not seem to have any ability to force compliance, the OTA is led by the heavy hitters of consumer IT (e.g., Microsoft) and web security infrastructure (e.g., DigiCert and Verisign), so one hopes it has clout. The proposed IoT Trust Framework has since gone through three more revisions and is now complemented by a “Consumer IoT Security and Privacy Checklist.”
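
Some of these rules are also technically cheap to get right. Rule 6, for instance, amounts to verifying a digital signature over an update image before installing it. The sketch below demonstrates the idea with an Ed25519 signature and the third-party cryptography package; the key handling, image contents, and install step are illustrative assumptions, not any vendor’s actual scheme.

    # Verify a firmware image against its signature before installing it
    # (OTA rule 6). In practice the private key stays with the vendor and
    # only the public key is baked into the device; both are generated
    # here just to make the demo self-contained. Requires the third-party
    # "cryptography" package.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_priv = Ed25519PrivateKey.generate()
    vendor_pub = vendor_priv.public_key()

    firmware = b"\x7fELF...imaginary firmware image..."
    signature = vendor_priv.sign(firmware)          # shipped alongside the image

    def verify_before_install(image: bytes, sig: bytes) -> bool:
        try:
            vendor_pub.verify(sig, image)           # raises on any mismatch
        except InvalidSignature:
            print("update rejected: bad signature")
            return False
        print("signature OK; installing")
        # install(image) would go here
        return True

    verify_before_install(firmware, signature)                 # accepted
    verify_before_install(firmware + b"tampered", signature)   # rejected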

Also in summer 2015, the Healthcare Information Technology Policy Committee (HITPC), under the umbrella of the US federal efforts for electronic health, put forth a proposed set of rules for its space so that “patients should not be surprised about or harmed by collections, uses or disclosures of their information” [27].

Technological Enforcement

Even if we specify what “correct” behavior is (and perhaps have laws and standards to drive and guide that specification), how do we (again, as individuals or as a society) trust that our smart things actually follow these rules?

Consider two recent examples. In the Vizio analysis mentioned earlier, researchers discovered shenanigans [38]:

From this, it is obvious that the same data is being sent to Cognitive Networks servers through UDP and HTTP. This data is the fingerprint of what you’re watching being sent through the Internet to Cognitive Networks. This data is sent regardless of whether you agree to the privacy policy and terms of service when first configuring the TV.

Ars Technica made similar observations about Windows 10 [4]:

Windows 10 uses the Internet a lot to support many of its features. The operating system also sports numerous knobs to twiddle that are supposed to disable most of these features and the potentially privacy-compromising connections that go with them.

Unfortunately for privacy advocates, these controls don’t appear to be sufficient to completely prevent the operating system from going online and communicating with Microsoft’s servers.

For example, even with Cortana and searching the Web from the Start menu disabled, opening Start and typing will send a request to www.bing.com to request a file called threshold.appcache which appears to contain some Cortana information, even though Cortana is disabled. The request for this file appears to contain a random machine ID that persists across reboots.

When I was a young computer scientist in industry taking a product through federal security validation, I was surprised to have to demonstrate not just that my product did what I said it would do, but also that it did not do what it did not say it did. I objected, but in hindsight, I see the wisdom.

Achieving this is hard, though. The long history of side-channel analysis shows that computing devices can communicate in ways observers (and owners) may not expect; applied cryptography shows how these communications can be made practically indistinguishable from random noise; and Scott Craver’s Underhanded C Contest shows how even direct inspection of source code can fail to reveal what it really does. (Maybe the issues described in “Cryptographic Decay” are in fact positive features: if you wait enough decades, you can finally decrypt your device’s sneaky espionage reports.)

Works Cited

  1. S. Ackerman and S. Thielman, “US intelligence chief: We might use the Internet of Things to spy on you,” The Guardian, February 9, 2016.

  2. American Academy of Periodontology, Healthy Gums and a Healthy Heart: The Perio-Cardio Connection. June 1, 2009.

  3. BinaryEdge, “Data, technologies and security—Part 1,” blog.binaryedge.io, August 10, 2015.

  4. P. Bright, “Even when told not to, Windows 10 just can’t stop talking to Microsoft,” Ars Technica, August 12, 2015.

  5. J. M. Broder, “Stalled out on Tesla’s electric highway,” The New York Times, February 8, 2013.

  6. D. Burke, “Are tighter rules needed on recording devices in cars?,” CBC News, May 24, 2016.

  7. K. Carlon, “Runkeeper is secretly tracking you around the clock and sending your data to advertisers,” Android Authority, May 13, 2016.

  8. T. Chivers, “The Get Out Clause, Manchester stars of CCTV,” The Telegraph, May 8, 2008.

  9. L. Constantin, “Samsung smart TVs don’t encrypt the voice data they collect,” ITworld, February 18, 2015.

  10. L. Constantin, “Smart TVs raise privacy concerns,” ITworld, February 9, 2015.

  11. J. Cook, “Apple is preparing to launch a voicemail service that will use Siri to transcribe your messages,” Business Insider, August 3, 2015.

  12. A. Coyne, “Westfield ditches SMS feature over privacy issues,” iTnews, February 3, 2016.

  13. M. Donoghue, “Arraignment delayed in fatal car–bike crash,” Burlington Free Press, June 25, 2015.

  14. S. Dunlap, “Drones could be used in Macon-Bibb for emergency response,” The Telegraph, July 13, 2015.

  15. J. Foster, “Someone gained access to private PLQ meetings, very easily,” CJAD News, June 17, 2016.

  16. L. Franceschi-Bicchierai, “One of the largest hacks yet exposes data on hundreds of thousands of kids,” Motherboard, November 27, 2015.

  17. S. Gallagher, “OwnStar: Researcher hijacks remote access to OnStar,” Ars Technica, July 30, 2015.

  18. S. Gallagher, “‘EPIC’ fail—How OPM hackers tapped the mother lode of espionage data,” Ars Technica, June 21, 2015.

  19. S. Gallagher, “OnStar gives Volt owners what they want: Their data, in the cloud,” Ars Technica, November 25, 2012.

  20. J. M. Gitlin, “Another driver says Tesla’s Autopilot failed to brake; Tesla says otherwise,” Ars Technica, May 13, 2016.

  21. R. Giwargis, “San Jose looks at using garbage haulers to catch car thieves,” The Mercury News, August 19, 2015.

  22. D. Goodin, “Man-in-the-middle attack on Vizio TVs coughs up owners’ viewing habits,” Ars Technica, November 11, 2015.

  23. R. Grenoble, “Tesla, New York Times still feuding over Model S review: Elon Musk releases data, reviewer counters,” The Huffington Post, February 14, 2013.

  24. B. Hambright, “Woman staged ‘rape’ scene with knife, vodka, called 9-1-1, police say,” LancasterOnline, June 19, 2015.

  25. D. He and others, “Security concerns in Android mHealth apps,” in Proceedings of the American Medical Informatics Association Annual Symposium, November 2014.

  26. K. Hill, “If you use Waze, hackers can stalk you,” Fusion, April 26, 2016.

  27. Health IT Policy Committee Privacy and Security Workgroup, Health Big Data Recommendations, August 11, 2015.

  28. S. Knope, “October 4, 1984 and Libby Zion: The day medicine changed forever,” The Pearl, November 7, 2013.

  29. D. Kravets, “Worker fired for disabling GPS app that tracked her 24 hours a day,” Ars Technica, May 11, 2015.

  30. B. Krebs, “This is why people fear the ‘Internet of Things,’” Krebs on Security, February 18, 2016.

  31. R. Lawler, “Vizio IPO plan shows how its TVs track what you’re watching,” Engadget, July 24, 2015.

  32. A. Liptak, “Major ruling shields privacy of cellphones,” The New York Times, June 25, 2014.

  33. N. Lomas, “The FTC warns Internet of Things businesses to bake in privacy and security,” TechCrunch, January 8, 2015.

  34. S. Margaritelli, “Nike+ FuelBand SE BLE protocol reversed,” evilsocket.net, January 29, 2015.

  35. Staff of E. Markey, Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk. Office of the United States Senator for Massachusetts, February 2015.

  36. C. Marlow, Building a Mass Surveillance Infrastructure Out of Light Bulbs. American Civil Liberties Union, July 23, 2015.

  37. J. McLaughlin, “NSA looking to exploit Internet of Things, including biomedical devices, official says,” The Intercept, June 10, 2016.

  38. A. McSorley, “The anatomy of an IoT hack,” Avast Blog, November 11, 2015.

  39. M. Mirhashem, “Stressed out? Your smartphone could know even before you do,” New Republic, September 22, 2014.

  40. T. Mogg, “Hit-and-run suspect arrested after her own car calls cops,” Digital Trends, December 7, 2015.

  41. C. Moraff, “DEA wants inside your medical records to fight the war on drugs,” The Daily Beast, June 10, 2016.

  42. J. Mottl, “Mobile app privacy practices scarce, lack transparency,” FierceHealthcare, August 24, 2014.

  43. D. Munro, “Data breaches in healthcare totaled over 112 million records in 2015,” Forbes, December 31, 2015.

  44. E. Musk, “A most peculiar test drive,” Tesla Blog, February 13, 2013.

  45. M. Novak, “The FBI can neither confirm nor deny wiretapping your Amazon Echo,” Paleofuture, May 11, 2016.

  46. NU.nl, “Bedrijven mogen gezondheid medewerkers niet volgen via wearables” [Companies may not monitor employees’ health via wearables], March 8, 2016.

  47. Office of Public Affairs, Bureau of Consumer Protection, Provider of Medical Transcript Services Settles FTC Charges That It Failed to Adequately Protect Consumers’ Personal Information. Federal Trade Commission, January 31, 2014.

  48. P. H. O’Neill, “New research suggests that hackers can track subway riders through their phones,” The Daily Dot, May 25, 2015.

  49. Online Trust Alliance, IoT Trust Framework—Discussion Draft, August 11, 2015.

  50. F. Y. Rashid, “Home invasion? 3 fears about Google Home,” InfoWorld, June 15, 2016.

  51. B. Schrier, “The Internet of First Responder Things (IoFRT),” The Chief Seattle Geek Blog, May 25, 2015.

  52. S. Sinclair and S. W. Smith, “What’s wrong with access control in the real world?,” IEEE Security and Privacy, July/August 2010.

  53. J. F. Smith, “Cyberattack exposes I.R.S. tax returns,” The New York Times, May 26, 2015.

  54. S. W. Smith, “Security and cognitive bias: Exploring the role of the mind,” IEEE Security and Privacy, September/October 2012.

  55. I. Thomson, “Hello Barbie: Hang on, this Wi-Fi doll records your child’s voice?,” The Register, February 19, 2015.

  56. S. Trudeau and others, “The effects of introspection on creating privacy policy,” in Proceedings of the 8th ACM Workshop on Privacy in the Electronic Society, November 2009.

  57. Y. Wang and others, “Access control hygiene and the empathy gap in medical IT,” in Proceedings of the 3rd USENIX Conference on Health Security and Privacy, 2012.

  58. The White House, Big Data: Seizing Opportunities, Preserving Values. Executive Office of the President, May 2014.

  59. C. Wong, “Sports wearables may affect athletes’ privacy, paycheques as well as performance,” IT Business, July 13, 2015.

1 History lesson: when Robert Bork was nominated for the U.S. Supreme Court in 1987, a reporter obtained and published his video rental records—which led to Congress passing a law making this particular kind of privacy spill illegal.

2 As one colleague notes, we hope they mean TLS, as (strictly speaking) SSL is obsolete.
