The IoT crosses many boundaries (cultural, jurisdictional, and national); laws and management of the IoT will also need to cross these boundaries.
Law is undeniably another large force in shaping how things happen in this world. The IoT traverses not only geographical lines separating governmental jurisdictions, but also domains of life previously covered by separate customs. This boundary crossing leads to interesting interactions between the IoT and law, and this chapter surveys some of the principal ones:
The use of smart technology to hide from the law
The use of law to keep smart technology from being scrutinized
How the IoT introduces new things, neither fish nor fowl, that raise hard questions about which legal framework they should inherit
In theory, law governs behavior by penalizing behaviors deemed sufficiently harmful to the social contract. However, in the IoC and IoT, we have already seen scenarios where the behavior of smart technology has somehow sidestepped this governance. Let's consider a few.
In the Garden of Eden story from Genesis, eating from the Tree of Knowledge enables humans to sin. A cynic might predict that in the IoT story, making things smart will enable them to sin: to use the adaptiveness and resourcefulness of computing to do objectively bad things, such as cheat deceptively. Unfortunately, this has already happened.
Powering internal combustion engines with diesel instead of gasoline offers many efficiency advantages. However, burning diesel can generate more pollution. In recent years, diesel-powered cars have become popular in Europe; they also made inroads in the US, as cars equipped with “clean diesel” technology passed the strict pollution standards of the Environmental Protection Agency (EPA).
Some researchers were curious. Eric Niler in Wired writes [21]:
In 2013, a small non-profit group decided to compare diesel emissions from European cars, which are notoriously high, with the US versions of the same vehicles. A team led by Drew Kodjak, executive director of the International Council on Clean Transportation, worked with emissions researchers at West Virginia University to test three four-cylinder 2.0-liter diesel cars in the Los Angeles area: a Jetta, a Passat, and a BMW.
The EPA tests evaluate emissions when the car is stationary. The ICCT/WVU team instrumented the cars to evaluate emissions when actually driving on roads. Figure 8-1 summarizes the result: the VW diesel cars (“Vehicle A” and “Vehicle B”) somehow emitted far more pollutants when being driven on roads than when being tested while stationary. The stationary tests were under the EPA threshold, but the actual road tests far exceeded it.
This difference was puzzling, and a long conversation began [26]:
VW engineers continued to suggest technical reasons for the test results. None of the explanations satisfied regulators, who indicated the models wouldn’t be certified.
But then the bombshell dropped:
“Only then did VW admit it had designed and installed a defeat device in these vehicles in the form of a sophisticated software algorithm that detected when a vehicle was undergoing emissions testing,” the EPA said in its letter to VW.
Wired provides more details about this “sophisticated software algorithm”:
Computer sensors monitored the steering column. Under normal driving conditions, the column oscillates as the driver negotiates turns. But during emissions testing, the wheels of the car move, but the steering wheel doesn't. That seems to have been the signal for the "defeat device" to turn the catalytic scrubber up to full power, allowing the car to pass the test.
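The mechanism Wired describes is disturbingly simple. Here is a minimal sketch of that detection logic in Python; the real defeat device was firmware in the engine control unit, and the sensor and actuator interfaces below are invented for illustration, not VW's actual code.

```python
# A minimal sketch of the detection logic Wired describes, assuming
# hypothetical sensor/actuator interfaces; this is not VW's code.

WINDOW_SECONDS = 120        # how long to watch the steering column
OSCILLATION_DEGREES = 0.5   # wheel movement below this looks "not driven"

def looks_like_emissions_test(steering_angles, wheel_speed_kph):
    # On a dynamometer the drive wheels spin, but nobody is steering:
    # the steering wheel stays almost perfectly still.
    wheels_turning = wheel_speed_kph > 5
    steering_static = (max(steering_angles) - min(steering_angles)
                       < OSCILLATION_DEGREES)
    return wheels_turning and steering_static

def control_loop(sensors, scrubber):
    angles = sensors.recent_steering_angles(WINDOW_SECONDS)
    if looks_like_emissions_test(angles, sensors.wheel_speed_kph()):
        scrubber.set_power(1.0)   # full NOx treatment: pass the test
    else:
        scrubber.set_power(0.3)   # favor performance and economy on the road
```

The point is not the particular thresholds but the scale: a few dozen lines buried among millions can change a machine's behavior precisely when regulators are watching.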
Subsequent investigation suggests that this smart circumvention was an intentional VW strategy for a long time. The New York Times reports [12]:
A PowerPoint presentation was prepared by a top technology executive at Volkswagen in 2006, laying out in detail how the automaker could cheat on emissions tests in the United States…. In a laboratory, regulators would try to replicate a variety of conditions on the road. The pattern of those tests, the presentation said, was entirely predictable. And a piece of code embedded in the software that controlled the engine could recognize that pattern, activating equipment to reduce emissions just for testing purposes.
News reports from Belgium indicate that Opel may be doing something similar [25].
For Volkswagen, consequences included a $14.7 billion settlement in the US, with actions still pending in other countries. Researchers also concluded that the cheating software led to deaths [34]:
According to the model, the extra NOx from VW’s cars [in the US] will cause about 10 to 150 (or a median of 59) people to die 10 to 20 years early. Hospital bills and other social costs add up to $450 million.
For the Internet of Cheating Things, this may be only the tip of the iceberg. Considering other potential examples is left as an exercise for the reader—although “Things ‘on the witness stand’” on page 180 gives a few.
A good way to ensure that things are not cheating is to ensure that their internals can be scrutinized, as “When Law Stops Scrutiny of Technology” will discuss.
In the US and elsewhere, the legal framework codifies a notion of negligence. If something bad happens to Alice because of a basic flaw in something Bob designed and built, then Bob should be held responsible.
It’s long been observed that somehow the software industry has avoided this doctrine—vendors of software systems sidestep responsibility for negligent behavior in a way that would make vendors of automobiles, airbags, and hot coffee jealous. It’s tempting to posit a general negligence principle: hard things require care, but soft things are blame-free.
In the IoT, the soft things are deeply permeating the space of the hard things. Unfortunately, we are also starting to see the negligence principle permeate as well.
For one example, consider the privacy spill from toy company VTech’s backend servers, discussed back in Chapter 6. Motherboard quoted the president of VTech assuring customers, “We are committed to the privacy and protection of the information you entrust with VTech” [15]. However, Motherboard also noted that VTech’s terms and conditions now include this advisory (in all caps):
YOU ACKNOWLEDGE AND AGREE THAT ANY INFORMATION YOU SEND OR RECEIVE DURING YOUR USE OF THE SITE MAY NOT BE SECURE AND MAY BE INTERCEPTED OR LATER ACQUIRED BY UNAUTHORIZED PARTIES.
When it comes to legal issues, one can usually find opposing points of view. Motherboard further notes:
Rik Ferguson, the vice president of security research at Trend Micro, said the clause is "outrageous, unforgivable, ignorant, opportunistic, and indefensible," and likened it to "weasel words." Despite this surprising change—a British law professor told me he's "never seen a clause like that before"—legal experts doubt the provision has any real value.
Which view will dominate?
In the domain of smart health, one can find examples with more dire consequences. My colleague Harold Thimbleby cites the Mersey Burns app (approved by the UK NHS), which helps a clinician determine how much fluid a burn victim needs based on the extent of their burns. Harold notes first that the app’s legal warranty removes responsibility from the vendor and regulator:
You agree to indemnify and hold…the NHS harmless from any claim…as a result of your use or misuse of the app.
Harold notes further that the warranty allows itself to change arbitrarily:
The NHS may modify this disclaimer…at any time…without giving notice to you.
In conversation, Harold asks what the reader is likely asking now: what good is a written warranty whose terms can change at any point without notice?
Unfortunately, the Mersey Burns case is not simply a case of weasel words. The application prompts the clinician to input the extent of a patient’s burns first on the front side of the body, and then on the back side of the body. The app also gives the clinician two ways to enter these measurements: graphically, or via a percentage. Harold identified a dangerous bug: indicating severe burns on the front side graphically followed by indicating minor burns on the back via percentage somehow causes the app to forget about the fluids required because of the front injuries.
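I do not have the app's source, but the symptom Harold describes is consistent with a classic state-handling slip: the percentage-entry path resets the whole estimate rather than just the back-of-body portion. Here is a hypothetical Python sketch of that failure pattern (the fluid formula is Parkland-style; nothing here is the actual Mersey Burns code):

```python
# Hypothetical sketch of the failure pattern, not the Mersey Burns source.
class BurnEstimate:
    def __init__(self):
        self.front_pct = 0.0   # % body surface burned, front
        self.back_pct = 0.0    # % body surface burned, back

    def set_front_graphical(self, shaded_region_pcts):
        # Graphical entry: sum the body regions the clinician shaded.
        self.front_pct = sum(shaded_region_pcts)

    def set_back_percentage(self, pct):
        # BUG: this path was written against a fresh estimate, so it
        # reinitializes front_pct too -- silently discarding the severe
        # front-of-body burns already entered graphically.
        self.front_pct = 0.0
        self.back_pct = pct

    def first_24h_fluid_ml(self, weight_kg):
        # Parkland-style calculation: 4 ml x weight (kg) x % total burn area.
        return 4 * weight_kg * (self.front_pct + self.back_pct)
```

Under such a bug, 40% front burns entered graphically followed by 2% back burns entered numerically yields a fluid prescription for a 2% burn rather than a 42% one: exactly the kind of silent underestimate that can kill.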
Patients might die from this bug, but somehow no one is responsible.
My colleague Ross Koppel has long lamented how this pattern is endemic in health IT (at least in the US), and specifically calls out clauses commonly found in the contracts hospitals have with the vendors of health IT systems [19]:
One clause prohibited clinicians from publicly displaying screen shots…even if they felt those screen shots illustrate a danger to patient safety…. A part of that clause also prohibited clinicians from speaking pejoratively about the vendor’s product. The second clause, “hold harmless,” said that the vendor was not responsible for any errors committed because of their products even if the vendor had been repeatedly informed that the product was defective in some way…. The legal logic of the clause is that the vendor merely creates a “tool” used by a learned intermediary…[who] has the authority to take whatever information is shown and make a considered professional judgment, including realizing that the information shown to him or her via the software is incorrect.
The legal disclaimers say the vendor is not responsible, the vendor does not have to fix bugs, and the clinicians are not allowed to disclose those bugs. Sadly, Ross documents many cases where patients were harmed—and even killed—because of such bugs.
In the years since Ross first published these concerns, other organizations have joined the call to eliminate these clauses.
The scientific world preaches the value of peer review and reproducible results: advances are not considered valid unless they can be carefully examined and verified. Engineering preaches the value of closed-loop systems: not just acting, but measuring the result and adaptively tuning. Even the basic 20th-century American mythos celebrates tinkering: the computer giant that started in a garage, the young woman wrenching on an old car.
A common thread through all of this is the ability for individuals to examine technology. However, even in the IoC, we began to see situations where law was used to discourage such examination. (Noted security and public policy researcher Ed Felten even titled his blog Freedom to Tinker in response to this situation.)
For example, May 2016 brought news of a particularly ironic case: in Florida, security researcher David Levin—working with local elections supervisor Dan Sinclair—demonstrated a security hole in a state elections website. The government response was to arrest him [24]:
“Dave didn’t cause these problems, he only reported them,” Sinclair said, adding that the elections office could not previously detect intrusions. Levin also provided defensive measures to the state about how it could fix the hole and detect further intrusions.
June brought news that the ACLU had “filed a lawsuit with the US Department of Justice contending that the Computer Fraud and Abuse Act’s (CFAA’s) criminal prohibitions have created a barrier for those wishing to conduct research and anti-discrimination testing online” [5], because the act is used to “criminalize violations of websites’ ‘terms of service’” [3]:
The CFAA violates the First Amendment because it limits everyone, including academics and journalists, from gathering the publicly available information necessary to understand and speak about online discrimination.
Closer to the IoT space, in 2012 researchers Roel Verdult, Flavio Garcia, and Baris Ege discovered flaws in the wireless key/starter protocol used by many vehicles, including Volkswagens. Their result was to be published at the 2013 USENIX Security Symposium, but VW used the UK courts to stop publication [2]:
VW and Thales argued that the algorithm was confidential information, and whoever had released it on the net had probably done so illegally. Furthermore, they said, there was good reason to believe that criminal gangs would try to take advantage of the revelation to steal vehicles.
Two years later, the paper was finally published [17]. It is interesting to note the involvement of VW (which at the same time was using IT to evade EPA rules) in stopping scrutiny of its IT.
Unfortunately, this wasn’t the first time that a USENIX Security research paper was delayed due to legal action. In 2000, the Secure Digital Music Initiative—an industry consortium focused on digital rights management (DRM)—held a challenge for researchers to scrutinize various DRM techniques. Researchers from Princeton (including Ed Felten, later to start the Freedom to Tinker blog) and Rice were largely successful [7], but court action by SDMI delayed publication of their results.
The Digital Millennium Copyright Act (DMCA) is a 1998 US law often invoked by players trying to suppress scrutiny and decried by advocates for the freedom to tinker. As the Electronic Frontier Foundation puts it:
The Digital Millennium Copyright Act prohibits “circumventing” digital rights management (DRM) and other “technological measures” used to protect copyrighted works. While this ban was meant to deter copyright infringement, many have misused the law to chill competition, free speech, and fair use.
In the IT space, the ostensible intention of the DMCA was to protect the software embedded in devices as intellectual property. If a pirate—or a researcher or curious tinkerer—"circumvented" a manufacturer's barrier (no matter how small) to examine this software, that could be interpreted as a violation of the DMCA.
What behaviors the DMCA actually prohibits has been a matter of ongoing contention. Nonetheless, it's hard to dispute the chilling effect. In the VW emissions scandal discussed earlier, the discovery that VWs were programmed to provide acceptable levels of pollution only when they detected they were being tested happened indirectly, because researchers tried an alternate way of measuring pollution. If researchers could have looked directly at the code, that discovery might have happened much earlier. The EFF argues [31]:
Automakers argue that it's unlawful for independent researchers to look at the code that controls vehicles without the manufacturer's permission…. The legal uncertainty created by the Digital Millennium Copyright Act also makes it easier for manufacturers to conceal intentional wrongdoing…. Volkswagen had already programmed an entire fleet of vehicles to conceal how much pollution they generated, resulting in a real, quantifiable impact on the environment and human health. This code was shielded from watchdogs' investigation by the anti-circumvention provision of the DMCA.
(Personally, as someone who spoke out against the DMCA when it was first enacted, I found it ironic to hear it being discussed on NPR nearly two decades later.)
One example of the evolving contention over what the DMCA prohibits is unlocking cellphones. The Copyright Office (part of the Library of Congress) determined that unlocking a cellphone violated the DMCA and was prohibited; it took federal legislation in 2014 to make it legal again [6].
In Slate, Kyle Wiens then observed in January 2015 [33]:
How many people does it take to fix a tractor?…. [I]t actually takes an army of copyright lawyers, dozens of representatives from U.S. government agencies, an official hearing, hundreds of pages of legal briefs, and nearly a year of waiting. Waiting for the Copyright Office to make a decision about whether people like me can repair, modify, or hack their own stuff.
As the Copyright Office considered, manufacturers offered an opposing view [32]:
John Deere—the world’s largest agricultural machinery maker—told the Copyright Office that farmers don’t own their tractors. Because computer code snakes through the DNA of modern tractors, farmers receive “an implied license for the life of the vehicle to operate the vehicle.”
It’s John Deere’s tractor, folks. You’re just driving it.
In October 2015, the Copyright Office ruled that "computer programs that are contained in and control the functioning of a motorized land vehicle" were exempt from the DMCA "when circumvention is a necessary step undertaken by the authorized owner of the vehicle to allow the diagnosis, repair or lawful modification of a vehicle function" and also "for the purpose of good-faith security research."
Wiens, however, worries about the IoT's slippery slope [33]:
Phones are just the beginning. Thanks to the “smart” revolution, our appliances, watches, fridges, and televisions have gotten a computer-aided intelligence boost. But where there are computers, there is also copyrighted software, and where there is copyrighted software, there are often software locks. Under Section 1201 of the DMCA, you can’t pick that lock without permission. Even if you have no intention of pirating the software. Even if you just want to modify the programming or repair something you own.
Churn continues. In December 2015, Techdirt reported that Philips had released a firmware update that would prevent purchasers of the Philips Hue smart lighting bridge from using third-party lightbulbs [9]. Presumably, circumventing these restrictions would be considered a copyright violation.
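Mechanically, the "lock" being circumvented can be almost nothing. Reports at the time suggested the bridge simply stopped pairing with bulbs whose manufacturer identifier was not on an approved-partner list; here is a hypothetical sketch (the codes and names are invented for illustration):

```python
# Hypothetical sketch of a firmware allowlist lockout; the manufacturer
# codes and function names here are invented, not Philips's code.
APPROVED_VENDORS = {0x100B, 0x10DC}   # hypothetical "approved partner" codes

def accept_bulb(manufacturer_code: int) -> bool:
    # Before the update: any bulb speaking the standard protocol paired.
    # After the update: only allowlisted vendors do.
    return manufacturer_code in APPROVED_VENDORS
```

The technical barrier is a single membership test; what gives it teeth is the legal risk of circumventing it.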
In May 2016, state legislators in Michigan (historical home of the US automobile industry) bypassed the DMCA altogether and introduced legislation that would make hacking into a car a crime, possibly meriting life imprisonment. In Computerworld, Darlene Storm responded [28]:
Of course we don’t want to wait until hackers are remotely taking control and crashing cars before we figure out what should be done to malicious attackers, but if security researchers can’t look for vulnerabilities without fear of life in prison, then aren’t we all less safe?
Would we have been better off not to know that hackers could remotely seize control of a Jeep as it is speeding down the highway? I don’t think so.
A common motif in this book is that layering networked IT on top of previously dumb objects creates new sorts of things whose existence and behaviors raise new questions—including legal ones, when these new beasts, neither fish nor fowl, do not fit into the standard paradigms. Even way back at the dawn of the IoC, sharp-eyed legal scholars pointed out that putting a click-through license on the front of an “electronic” book changed the governance of reader/publisher behavior from copyright law to contract law, which was somehow worse for the reader. The new age of the IoT brings new dilemmas.
For an extreme example of adding networked computing (and a degree of autonomy) to a previously dumb thing, consider the teenager who in 2015 attached a gun to a drone and posted a video of it on YouTube. CNN reported [20]:
The gun drone in Connecticut appears to have been fired on private property and—so far, authorities said—it did not appear any laws were broken…. “It appears to be a case of technology surpassing current legislation,” police in Clinton, Connecticut, said.
In 2016, Popular Science reported on a Finnish project that equipped a drone with a chainsaw [1]. This effort also yielded an amusing video—but, sadly, did not give rise to a dry observation from police on whether it was illegal. Bringing things back to cyber, in 2015 the Intercept reported that an Italian hacking company was in discussions with a Boeing drone subsidiary to explore using drones to deliver malware by hovering over targets and intercepting wireless communications [8].
Another standard item in the IoT vision is replacing our population of cars with a fleet of self-driving vehicles that can drive more safely and efficiently. One of the upsides this vision presents is freeing humans from having to commit time and attention to driving: instead, commuting time becomes work or relaxation time; alcohol-impaired humans can still “drive” home safely; humans with disabilities (e.g., vision impairment) preventing them from driving traditional cars can become autonomously mobile.
However, thanks to at least a century of evolution, driving is surrounded by sociocultural processes with foundations in law. When the car becomes the driver, what should happen with these processes and laws?
For one example, consider licensing. In the US and many other nations, humans need to obtain (and then carry) a government-issued license before driving a vehicle on public roads. Should the human who is not driving a self-driving car also require a license?
At first glance, this question sounds ridiculous. Of course not—the human is not driving!
On the other hand, what if the car provides the ability for a human driver to take control, perhaps as a safety feature? In this case, perhaps we do need licenses after all—but then what happens to the upsides of freeing humans from driving themselves?
Even without the ability for a human to take over fully, a self-driving car may still accept some control from its human—for instance, over destination, route options, speed preferences, etc. In this case, should we still require some kind of licensing (or at least a minimum age)? If there are multiple passengers, which ones require licenses?
In summer 2015, the UK Department for Transport released The Pathway to Driverless Cars: A Code of Practice for Testing, outlining legal guidelines for this transition [10]. This document distinguished between a test driver ("the person who is seated in the vehicle in a position where they are able to control the speed and direction using manual controls at any time") and a test operator ("someone who oversees testing of an automated vehicle without necessarily being seated in the vehicle"). However:
The test driver or test operator must hold the appropriate category of driving licence for the vehicle under test, if testing on a public road. This is true even if testing a vehicle’s ability to operate entirely in an automated mode. It is strongly recommended that the licence holder also has several years’ experience of driving the relevant category of vehicle…
Test drivers and operators should remain alert and ready to intervene if necessary throughout the test period.
Another legal/social aspect of driving is insurance. Accidents will happen; how do we handle who pays for the damage? Indeed, in most places in the US (but not New Hampshire, where I live), the biggest obstacle for young drivers is neither earning the license nor buying an inexpensive used car—rather, it is obtaining the legally required insurance. With traditional vehicles, we (as a society) have worked out a system of financial responsibility for car accidents that works, mostly—there are still lawsuits against negligent manufacturers of cars and servers of alcohol, and (in New Hampshire specifically) the problem of uninsured drivers. How does this system translate when vehicles somewhat, mostly, or entirely drive themselves (Figure 8-2)?
Reporting on discussions between Google and the UK on testing self-driving cars, the Telegraph also observed [30]:
Google has taken a special interest in the thorny issue of how driverless cars will be insured. Because a computer program, rather than a human, would be controlling the vehicle, experts have suggested that manufacturers will be held responsible.
In IEEE Spectrum, Nathan Greenblatt went even further [18]:
It is the year 2023, and for the first time, a self-driving car navigating city streets strikes and kills a pedestrian. A lawsuit is sure to follow. But exactly what laws will apply? Nobody knows.
(Chapter 7 discussed a related problem: if autonomous vehicles have fewer accidents, what happens to the business case for selling insurance?)
Yet another standard aspect of our legal/social driving practice is interaction with law enforcement. A driver being pulled over by police has been a standard cultural motif since the days of the Keystone Cops movies. Law also dictates specific driver behaviors when encountering other privileged vehicles, such as fire trucks, ambulances, and school buses. How should these behaviors translate to self-driving cars?
Will Oremus of Slate repeated a scenario from RAND [23]:
The police officer directing traffic in the intersection could see the car barreling toward him and the occupant looking down at his smartphone. Officer Rodriguez gestured for the car to stop, and the self-driving vehicle rolled to a halt behind the crosswalk.
As a thought experiment, this may sound reasonable. But Oremus considers the slippery slope:
If a police officer can command a self-driving car to pull over for his own safety and that of others on the road, can he do the same if he suspects the passenger of a crime? And what if the passenger doesn’t want the car to stop—can she override the command, or does the police officer have ultimate control?
These scenarios make me think of technical challenges reminiscent of Chapter 1 and Chapter 5. How do we (as a society) set up an authentication infrastructure that permits all the vehicles from different manufacturers and countries to verify "Stop, police!" commands from officials of all the different types of law enforcement agencies? Given the historical tendency of interfaces to unintentionally permit too much power, what will happen if someone other than authorized law enforcement personnel can do this to a vehicle? Given also the unfortunate historical tendency of some law enforcement officers to overreach their authority, what will happen when a rogue officer does this?
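As a sketch of the verification half of that problem: each agency's certifying authority could issue signing keys to its officers, and vehicles could check a stop command against a store of trusted public keys. The fragment below uses the Python cryptography package's Ed25519 primitives; the message format, trust-store contents, and agency names are invented, and the hard parts (cross-jurisdiction key distribution, revocation, abuse resistance) are exactly what it leaves out.

```python
# Minimal sketch of verifying a signed "stop" command. The trust store,
# message format, and agency identifiers are hypothetical; distributing
# and revoking keys across agencies, manufacturers, and countries is the
# actual hard problem, and is not addressed here.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

TRUSTED_AUTHORITIES: dict[str, ed25519.Ed25519PublicKey] = {}

def verify_stop_command(agency: str, command: bytes, signature: bytes) -> bool:
    pubkey = TRUSTED_AUTHORITIES.get(agency)
    if pubkey is None:
        return False            # unknown agency: ignore the command
    try:
        pubkey.verify(signature, command)
        return True             # note: without a timestamp or nonce in
    except InvalidSignature:    # the command, a recorded command could
        return False            # be replayed against any car, any time

# Toy usage: one agency enrolls, signs a command, and a vehicle verifies it.
agency_key = ed25519.Ed25519PrivateKey.generate()
TRUSTED_AUTHORITIES["example-pd"] = agency_key.public_key()
msg = b"STOP|plate=XYZ123|ts=2016-08-24T10:00:00Z"
assert verify_stop_command("example-pd", msg, agency_key.sign(msg))
```

Even this toy version shows where the social risk concentrates: every key in the trust store can stop every car that trusts it, so one stolen or misused key recreates the rogue-officer problem at scale.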
Smartphones have provided a ubiquitous networked platform for IoT-style applications. Many of these applications are medical, ranging from directly measuring properties of the user’s body (e.g., heart rate) to helping the user track and manage other medical-related issues, such as calories consumed or anxiety/depression incidents.
In 2014, Brian Dolan noticed something interesting [11]:
This week MobiHealthNews sought out apps in Apple’s AppStore that appeared to pitch themselves as useful medical or health-focused apps, but also included some iteration of that common legal disclaimer: “For entertainment purposes only”. While none of the apps we found appear to be trying to take advantage of their users with a fantastical claim…the inclusion of the “entertainment” disclaimer is still a bit puzzling….
After all, how entertaining is a medical calculator app that helps you figure out the stages of a patient’s acute tubular necrosis? I’m not a doctor. I’ve never attempted such a calculation myself. But I’m guessing it’s not particularly fun.
The working assumption here is that installing a medical application on a smartphone makes it a medical device, and thus brings it under the purview of the US FDA, whose mission includes regulating and certifying medical devices in order to ensure the safety of patients and the effectiveness of treatments. What's troubling here is the emergence of this class of apps, which are clearly being used for medical purposes while sidestepping that oversight. The FDA is trying to address this problem [13].
A related issue here is how to dovetail the need to continually update software (to fix the inevitable bugs, security and otherwise—recall Chapter 1) with the need for FDA certification on medical devices. A commonly held view is that vendors and clinicians are loath to patch software on medical devices because doing so would require taking the devices through FDA certification again. Medical security researcher Kevin Fu disputes that recertification is always required but discusses the difficult gray area [16]:
Guidance documents are peppered with conditional language open to interpretation. If you want to scare a regulatory affairs specialist, just add a bunch of implicit “if” or “unless” conditional branch statements…. [T]he absolute claim that “FDA rules prevent software security patches” is false. But there is a half truth hiding behind the sentiment. The rules are sufficiently fuzzy to cause misunderstandings and unintended interpretations.
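Fu's metaphor can be taken literally. Rendered as code, with entirely hypothetical rule names, the guidance looks something like this, and the problem is visible at a glance:

```python
# Literalizing Fu's metaphor with hypothetical pseudo-rules; this is not
# actual FDA guidance. Each fuzzy predicate is a branch that two
# reasonable readers can evaluate differently.
def recertification_required(patch) -> bool:
    if patch.could_affect_safety():        # "could affect" how remotely?
        return True
    if patch.is_significant_change():      # "significant" according to whom?
        return True
    # ...unless the change is "solely" a security fix, which "generally"
    # does not require resubmission...
    return False
```

Two regulatory affairs specialists running this "program" on the same patch can honestly return different answers, which is Fu's half truth.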
Chapter 1 quoted from a Bloomberg article on the penetration tests performed by Billy Rios and others on medical devices at the Mayo Clinic [27]. The same article reported on subsequent testing Rios performed on the Hospira infusion pump. Rios found holes: “He could set the machine to dump an entire vial of medication into a patient.” However, Rios could not get regulators to pay attention: “The FDA seems to literally be waiting for someone to be killed before they can say, ‘OK, yeah, this is something we need to worry about.’” These concerns came full circle when Rios himself was hospitalized—and connected to a Hospira infusion pump.
Eventually, the FDA did issue a warning with specific guidance for how healthcare facilities could reduce the risk from these holes. Although perhaps less than the dire "Fix this now!" requirement a security analyst might have wanted, this action did demonstrate a welcome transition in regulatory culture for medical devices—from "Patching for security is bad" to "Patching for security is good."
Chapter 6 discussed the potential for IoT devices to betray their owners. Law enforcement officials are already using data from such devices as part of investigations, and it’s not hard to foresee a day when such data is used in court; your devices may testify against you.
However, this sort of thing has been happening for a while (the future has been here before!). Data from speed traps, red light cameras, and breathalyzers has been used in legal proceedings for decades. In recent years, the US has seen a spate of copyright infringement lawsuits launched by the recording industry based on computer-generated data.
Looking at the copyright cases in particular, hackers Sergey Bratus and Anna Shubina, working with law professor Ashlyn Lembree, explored the issues that arise when things start testifying on the witness stand [4]:
Thus it appears that the only entity to “witness” the alleged violations and to produce an account of them for the court—in the form of a series of print-outs—was in fact an autonomous piece of software, programmed by a company acting on behalf of the plaintiffs and RIAA, and running on a computer controlled by this company.
One can find incidents where bugs in breathalyzers and speed-trap devices have resulted in cases being dismissed because the bugs rendered the output unreliable. Bratus, Lembree, and Shubina cite a particular case where a developer intentionally introduced an error to increase revenue from traffic tickets; the annual Underhanded C Contest shows how effectively a malicious programmer can hide evil behavior in code (see the sketch below). So, rationally, one must conclude that testimony from things should not be automatically trusted. But have legal mechanisms caught up to this reality?
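To make the Underhanded point concrete, here is a hypothetical sketch (not from any real device) of how a small, deniable slip can bias a reading:

```python
# Hypothetical sketch of an "underhanded" calibration bug, in the spirit
# of the traffic-device case above; not code from any real product.
def blood_alcohol(sensor_counts, temp_c):
    CAL = 0.00042                  # per-count calibration factor
    reading = sensor_counts * CAL
    # What looks like routine temperature compensation, with a sloppy
    # "boundary fix," quietly applies the correction twice on warm days,
    # inflating every warm-weather reading (and ticket revenue).
    if temp_c > 25.0:
        reading *= 1.02
    if temp_c >= 25.0:             # easy to wave off in code review as a
        reading *= 1.02            # boundary-case correction
    return reading
```

Bratus, Lembree, and Shubina contrast the treatment of such computer-generated evidence with how courts treat human witnesses: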
Witnesses in court make their statements under oath, with severe consequences of deviating from the truth in their testimony. Witnesses are then cross-examined in order to expose any biases or conflicts of interest they might have. Computer-generated evidence comes from an entity that cannot take an oath…nor receive an adversarial examination….
In short, a human witness’ testimony is not automatically assumed to be trustworthy. Specific court procedures such as cross-examination and deposition by the opposing lawyers have evolved for challenging such testimony.
The authors of this report point out that developing such procedures to scrutinize testimony from things is still a matter for “research by legal scholars.” Fortunately, there have been several cases where US courts have required things like breathalyzers to undergo code reviews, and Bratus and Lembree themselves defended perhaps the only case where the Recording Industry Association of America (RIAA) withdrew its suit with prejudice—so some progress is happening.
Moving from the IoC to the IoT puts us in interesting times.
This chapter discussed many ways in which legal issues have arguably increased the chance for a dangerous future with the IoT. However, we have also seen progress.
In January 2015, the US FTC issued a report outlining its thinking on the IoT [14], discussing topics such as how "the long-standing Fair Information Practice Principles (FIPPs), which include such principles as notice, choice, access, accuracy, data minimization, security, and accountability, should apply to the IoT space." The report recommended that "companies should build security into their devices at the outset" and "continue to monitor products throughout the life cycle and, to the extent feasible, patch known vulnerabilities." The concept of data minimization was raised too: "Companies should limit the data they collect and retain, and dispose of it once they no longer need it."
The FTC report also wrestled with the balance of enacting new legislation versus working within existing regulatory frameworks, and with the balance of regulation versus innovation.
In the same month, Ofcom (the “communications regulator in the UK”) issued its own report looking ahead to the IoT [22]. As might be expected from an office focused on communications, the report concentrated on issues such as networking and RF spectrum—but it did also address regulatory frameworks for privacy:
In so far as the IoT involves the collection and use of information identifying individuals, it will be regulated by existing legislation such as the Data Protection Act 1998. We have concluded that a common framework that allows consumers easily and transparently to authorise the conditions under which data collected by their devices is used and shared by others will be critical to future development of the IoT sector.
Ofcom also stressed the need for consumer literacy:
Some respondents identified the benefit of advocating and communicating the potential benefits associated with the IoT more broadly. In particular, respondents noted the need to raise consumer awareness on how new devices and apps will be collecting and using personal data to deliver IoT services.
As noted above, these are interesting times; one should expect many more developments in the legal and regulatory arenas.
[1] K. Atherton, "Finnish filmmakers gave a drone a chainsaw," Popular Science, April 1, 2016.
[2] BBC, "Car key immobiliser hack revelations blocked by UK court," BBC News, July 29, 2013.
[3] E. Bhandari and R. Goodman, ACLU Challenges Computer Crimes Law That Is Thwarting Research on Discrimination Online. American Civil Liberties Union, June 29, 2016.
[4] S. Bratus, A. Lembree, and A. Shubina, "Software on the witness stand: What should it take for us to trust it?," in Proceedings of the Third International Conference on Trust and Trustworthy Computing, 2010.
[5] N. Cappella, "ACLU lawsuit challenges Computer Fraud and Abuse Act," The Stack, June 29, 2016.
[6] R. Cox, "Senate passes bill to allow 'unlocking' cell phones," The Hill, July 15, 2014.
[7] S. Craver and others, "Reading between the lines: Lessons from the SDMI challenge," in Proceedings of the 10th USENIX Security Symposium, 2001.
[8] C. Currier, "Hacking team and Boeing subsidiary envisioned drones deploying spyware," The Intercept, July 18, 2015.
[9] T. Cushing, "Light bulb DRM: Philips locks purchasers out of third-party bulbs with firmware update," Techdirt, December 14, 2015.
[10] Department for Transport, The Pathway to Driverless Cars: A Code of Practice for Testing. February 2015.
[11] B. Dolan, "The rise of the seemingly serious but 'just for entertainment purposes' medical app," MobiHealthNews, August 7, 2014.
[12] J. Ewing, "VW presentation in '06 showed how to foil emissions tests," The New York Times, April 26, 2016.
[13] FDA, CDRH, and CBER, Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff. U.S. Department of Health and Human Services, Food and Drug Administration, Center for Devices and Radiological Health, and Center for Biologics Evaluation and Research, February 9, 2015.
[14] Federal Trade Commission, Internet of Things: Privacy & Security in a Connected World. FTC Staff Report, January 2015.
[15] L. Franceschi-Bicchierai, "Hacked toy company VTech's TOS now says it's not liable for hacks," Motherboard, February 9, 2016.
[16] K. Fu, False: FDA Does Not Allow Software Security Patches. October 17, 2012.
[17] S. Gallagher, "Researchers reveal electronic car lock hack after 2-year injunction by Volkswagen," Ars Technica, October 12, 2015.
[18] N. Greenblatt, "Self-driving cars will be ready before our laws are," IEEE Spectrum, January 19, 2016.
[19] R. Koppel, "Great promises of healthcare information technology deliver less," in Healthcare Information Management Systems: Cases, Strategies, and Solutions, A. C. Weaver and others, Eds. Springer International Publishing, 2016.
[20] M. Martinez and others, "Handgun-firing drone appears legal in video, but FAA, police probe further," CNN, July 21, 2015.
[21] E. Niler, "VW could fool the EPA, but it couldn't trick chemistry," Wired, September 22, 2015.
[22] Ofcom, Promoting Investment and Innovation in the Internet of Things. January 27, 2015.
[23] W. Oremus, "Should cops be allowed to take control of self-driving cars?," Slate, August 24, 2015.
[24] D. Pauli, "Researcher arrested after reporting pwnage hole in elections site," The Register, May 9, 2016.
[25] L. Pauwels, "Are Opel dealers modifying the software of polluting Zafiras?," FlandersNews, January 18, 2016.
[26] J. Plungis and D. Hull, "VW's emissions cheating found by curious clean-air group," Bloomberg, September 19, 2015.
[27] M. Reel and J. Robertson, "It's way too easy to hack the hospital," Bloomberg Businessweek, November 2015.
[28] D. Storm, "Hack a car in Michigan, go to prison for life if new bill becomes law," Computerworld, May 2, 2016.
[29] G. J. Thompson and others, In-Use Emissions Testing of Light-Duty Diesel Vehicles in the United States. Center for Alternative Fuels, Engines & Emissions, West Virginia University, May 15, 2014.
[30] J. Titcomb, "Google's meetings with UK Government over driverless cars revealed," The Telegraph, December 12, 2015.
[31] K. Walsh, Researchers Could Have Uncovered Volkswagen's Emissions Cheat If Not Hindered by the DMCA. Electronic Frontier Foundation, September 21, 2015.
[32] K. Wiens, "We can't let John Deere destroy the very idea of ownership," Wired, April 21, 2015.
[33] K. Wiens, "Before I can fix this tractor, we have to fix copyright law," Slate, January 13, 2016.
[34] S. Zhang, "New study links VW's emissions cheating to 60 early deaths," Wired, October 30, 2015.