Frameworks to Address IoT Privacy Risks

Now that we’ve explored what the IoT is, examined some of the many views of privacy, and considered the privacy risks the IoT portends, we can turn to different frameworks and tools that can be brought to bear on those risks.

Historical Methods of Privacy Protection

In many ways, IoT privacy risks reflect general historical privacy risks: surveillance, unbridled collection, poor security practices, limited privacy management knowledge inside companies, weak consent models, and loss of user control. Similarly, there are established, general tactics that we can employ at various layers of IoT system design:

Data minimization
Emerging from the 1970s, one of the oldest strategies in privacy and data protection is to minimize collection and use. The idea is very simple: limit the amount and type of data collected, limit its use, and limit its storage. As the FTC neatly states: “Thieves cannot steal data that has been deleted after serving its purpose; nor can thieves steal data that was not collected in the first place.”67 Further, limiting use helps to ensure that the data is used in the context in which it was collected, thereby avoiding function creep. In the IoT, minimization can occur at two levels:
 
Design: Designers should include only the sensors, functions, and capabilities necessary for a device’s core feature set, rather than building in the ability to capture information for some future, yet-undetermined use.
 
Storage: Devices and systems shouldn’t retain data that’s no longer in use or relevant, nor should they necessarily keep raw data.
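
As a concrete illustration of storage-level minimization, here is a minimal sketch in Python, using made-up names and values, of a device extracting only the derived summary its feature needs and discarding the raw sensor stream:

    from statistics import mean

    def summarize_and_discard(raw_samples, retained_count=0):
        """Keep only the derived value the feature needs, not the raw stream."""
        summary = {"sample_count": len(raw_samples), "average": mean(raw_samples)}
        # Retain at most `retained_count` raw samples (default: none).
        retained = raw_samples[-retained_count:] if retained_count else []
        return summary, retained

    # A heart-rate feature that only needs an hourly average can drop the
    # raw readings as soon as the summary is computed.
    summary, retained = summarize_and_discard([62, 64, 70, 68, 65])
    # summary == {'sample_count': 5, 'average': 65.8}; retained == []
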
Encryption
Encryption is the scrambling of messages so that they can be unscrambled only with a related key.68 As mentioned earlier, in the modern sense, encryption relies on complex math executed by computers or dedicated hardware to make messages unreadable. For connected devices, the main use of encryption is to protect data in storage and in transmission so that unauthorized parties cannot see the information that’s been collected.
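
As a hedged sketch of what this looks like in practice, the following Python fragment uses the third-party cryptography package’s Fernet construction to encrypt a reading before it is stored or transmitted. The device name and fields are illustrative, and key management (ideally a secure element or key store) is outside the scope of the sketch:

    import json
    from cryptography.fernet import Fernet  # pip install cryptography

    # In a real device the key would come from a secure key store,
    # not be generated ad hoc at runtime.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    reading = {"device_id": "thermostat-42", "temp_c": 21.5}  # illustrative fields
    ciphertext = cipher.encrypt(json.dumps(reading).encode("utf-8"))

    # Only a holder of the key can recover the plaintext reading.
    recovered = json.loads(cipher.decrypt(ciphertext).decode("utf-8"))
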
Transparency
Transparency refers to practices that ensure data subjects know what is being collected about them, when, how it is used, and with whom it is shared. This is a central principle underpinning the use of privacy policies. Given how IoT devices can fade into the background or otherwise invisibly collect personal data, transparency remains a critical strategy. However, because IoT devices might have reduced user interactions in comparison with traditional computing, the challenge of meaningfully informing users is magnified. Thankfully, this challenge is being taken up by usable privacy researchers (see the section “Usable privacy and security”).
Anonymization/pseudonymization/de-identification
These three terms all point to the same strategy: removing identifying information from data collected about a person (name, IP address, phone number, etc.). De-identification is a cornerstone of medical research, where ethics and policy mandate its use. In most other areas, its use is encouraged rather than required. Law and policy often point to de-identification as a desirable strategy, but research and news reports have shown that it’s neither easy nor a panacea.69 Also, de-identification can conflict with business goals because identifiable data is far more valuable for marketing purposes. Further, de-identification is not binary—data is not simply identifiable or not. Recent work by the Future of Privacy Forum describes a spectrum of characteristics—direct versus indirect identifiers, potentially identifiable versus not readily identifiable, de-identified versus “protected” de-identified, and others.70
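
One common pseudonymization tactic is to replace direct identifiers with keyed hashes so that records can still be linked internally without exposing the identifier itself. The following is a minimal Python sketch using only the standard library; the key and record fields are illustrative, and as noted above, indirect identifiers left in the data may still permit re-identification:

    import hmac, hashlib

    SECRET_KEY = b"stored-separately-from-the-dataset"  # illustrative

    def pseudonymize(identifier: str) -> str:
        """Replace a direct identifier (name, IP address, phone number)
        with a stable keyed hash."""
        return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"user": "alice@example.com", "steps": 8412}
    safe_record = {"user": pseudonymize(record["user"]), "steps": record["steps"]}
    # Location traces, timestamps, and behavior patterns remain indirect
    # identifiers, which is why de-identification is no panacea.
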

Emerging Frameworks for IoT Privacy Challenges

Even though IoT privacy risks reflect historical risks, there are also particular challenges related to technology, sector, scale, and mode of governance. The frameworks and best practices that follow represent some of the current thinking about how to address IoT-specific challenges.

The view of the US Federal Trade Commission

In late 2013, the FTC hosted a workshop called The Internet of Things: Privacy and Security in a Connected World. The workshop, which included leading technologists and academics, industry representatives, and consumer advocates, was a broad review of IoT concepts and potential privacy and security challenges. The resultant report collected the participants’ and FTC staff’s recommendations for best practices for companies in the IoT space:

  • Conduct a privacy and/or security risk assessment.

  • Test security measures before launching products.

  • Incorporate the use of smart defaults, such as requiring consumers to change default passwords during the setup process.

  • Implement reasonable access control measures to limit the ability of an unauthorized person to access a consumer’s device, data, or network.

  • Inform consumers about the “shelf-life” of products—how long a company plans to support them and release software and security patches.

  • Impose reasonable limits on the collection and retention of consumer data (in other words, data minimization).

  • Consider de-identifying stored consumer data, publicly commit not to re-identify it, and have enforceable contracts in place with any third parties with whom the data is shared, requiring them to commit to not re-identifying it as well.

  • Continue to implement Notice and Choice, that is, provide consumers with data use or privacy policies and give them the ability to agree to or decline data collection. The report states, “Whatever approach a company decides to take, the privacy choices it offers should be clear and prominent, and not buried within lengthy documents.”

The view of the EU Article 29 Working Party

When Europe enacted its Data Protection Directive in 1995, it also created a watchdog group called the Article 29 Working Party (Art29WP), made up of data protection regulators from each of the EU member states. This independent group keeps an eye on data protection and privacy issues across all of Europe, issuing advice and proposing guidelines as new technology develops. In its 2014 Opinion on the Internet of Things,71 it proposed a wide variety of recommendations:

  • Believing that organizations mainly need aggregate data, the Art29WP states that raw data should be deleted as soon as the necessary data has been extracted, and that developers who do not need raw data should be prevented from ever seeing it. The transport of raw data from the device should be minimized as much as possible.

  • If a user withdraws consent, device manufacturers should be able to communicate that fact to all other concerned stakeholders.

  • IoT devices should offer a “Do Not Collect” option to schedule or quickly disable sensors, similar to the “Do Not Disturb” feature on mobile phones and to the silencing of the chips, discussed in a moment.

  • Devices should disable their own wireless interfaces when not in use or use random identifiers (such as randomized MAC addresses) to prevent location tracking via persistent IDs; a minimal sketch of MAC randomization appears after this list.

  • Users should be given a friendly interface to be able to access the aggregate or raw data that a device or service stores.

  • Devices should have settings that distinguish between the different people using them, so that one user cannot learn about another’s activities.

  • Manufacturers and service providers should perform a Privacy Impact Assessment on all new devices and services before deploying them (see “Privacy Impact Assessments”).

  • Applications and devices should periodically notify users when they are recording data.

  • Information published by IoT devices on social media platforms should, by default, not be public nor indexed by search engines.
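
To make the randomized-identifier recommendation concrete, here is a minimal Python sketch, with illustrative values, of how a device might generate a fresh, locally administered MAC address for each session so that observers cannot track it across locations via a persistent hardware ID. It illustrates the idea only; real network stacks expose their own interfaces for this:

    import secrets

    def random_mac() -> str:
        """Generate a locally administered, unicast MAC address."""
        octets = bytearray(secrets.token_bytes(6))
        # Set the "locally administered" bit and clear the multicast bit
        # in the first octet, per the MAC address format.
        octets[0] = (octets[0] | 0x02) & 0xFE
        return ":".join(f"{b:02x}" for b in octets)

    # A device could use a fresh identifier per scan or per session, e.g.:
    print(random_mac())  # something like '36:9f:01:aa:4c:7d'
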

Silencing of the chips

In the mid-2000s, the European Commission funded a great deal of research into the IoT, though much of this work focused on Radio Frequency ID (RFID) technologies. Out of this research came a belief that people have a right to disconnect from their networked environment, and therefore to be able to deactivate the tracking functions of their RFID devices. French Internet expert Bernard Benhamou coined the term the “silence of the chips” to capture this belief:

[Citizens] must be able to control the way in which their personal data are used, and even the way in which these [RFID] chips can be deactivated. So in the future, citizens will have to intervene in the architecture of these systems in order to enjoy a new kind of freedom: the “silence of the chips.”72

The oft-cited example for the expression of this right was in the retail sector.73 If a person bought goods with embedded RFID tags, the principle of the silence of the chips would ensure that consumers could kill the tags temporarily or permanently so that purchased goods could not be tracked outside the store. An updated version of this right could be formulated as a Do Not Collect feature added to devices, wherein users could simply “blind” all of the sensors on a device (see the section “The view of the EU Article 29 Working Party”).

Privacy engineering

As the earlier discussion in Chapter 3 shows, privacy is complex, culturally infused, ambiguous, and conceptually dense. For lawyers, researchers, compliance officers, policy-makers, and others, this comes with the territory. However, for those tasked with embedding privacy directives into technical systems—engineers, programmers, system architects, and the like—this contested, indefinite character can be detrimental. Engineers and their kin work in a world of definitions, specifications, constrained vocabularies, repeatability, and structured change. To bridge the two worlds, the ambiguous and the specified, a new field has begun to emerge: privacy engineering.74 Although a unified definition has yet to be established, key characteristics of privacy engineering are requirements gathering, diagramming and modeling, use cases, classification, business rules, auditing, and system lifecycles. As such, privacy engineering overlaps with and complements risk management frameworks and compliance activities (see the section “Privacy Impact Assessments”). Even though this field is not particular to the IoT, it’s an important advancement in the ways that companies can approach the challenge of building privacy-preserving, ethical, respectful technical systems.

Vehicle privacy protection principles

In November 2014, two car-manufacturing trade bodies released a set of Privacy Principles for Vehicle Technologies and Services.75 Modeled largely on the White House’s Consumer Privacy Bill of Rights,76 the automakers’ privacy principles call for transparency, choice, respect for context, data minimization, and accountability. Twenty members77 of the two organizations have adopted the voluntary principles, committing to obtain affirmative consent before using or sharing geolocation, biometrics, or driver behavior information. Such consent is not required, though, for internal research or product development, nor is consent needed to collect the information in the first place. One could reasonably assert that biometrics and driver behavior are not necessary to the basic functioning of a car, so there should be an option to disable most or all of these monitoring functions if a driver wishes; the automakers’ principles do not include such a provision. Still, the auto industry is one of the few to be proactive regarding consumer privacy in the IoT space, and the vehicle privacy principles provide a foundation for critical discussion of the impact of new technologies in cars and trucks.

Usable privacy and security

The field of usable privacy and security examines how people interact with systems, and the design and use challenges that arise from those systems’ privacy and security characteristics. Jason Hong, Lorrie Cranor, and Norman Sadeh, three senior professors in the field, write:78

There is growing recognition that privacy and security failures are often the results of cognitive and behavioral biases and human errors. Many of these failures can be attributed to poorly designed user interfaces or secure systems that have not been built around the needs and skills of their human operators: in other words, systems which have not made privacy and security usable.

The field draws upon a wide variety of disciplines, including human–computer interaction, computer security, mobile computing, networking, machine learning, cognitive psychology, social psychology, decision sciences, learning sciences, and economics.79 Pioneering work has been done at the CyLab Usable Privacy and Security Lab80 at Carnegie Mellon University and similar labs, and at the annual Symposium on Usable Privacy and Security.81 Researchers have addressed issues directly relating to the IoT, including the following:

Authentication
Passwords have been a necessary evil since the 1960s,82 but there is widespread agreement that people have too many to contend with, resulting in poor choices and weakened system security. Usable privacy and security researchers measure the usability and efficacy of authentication interactions and offer new methods to improve the overall system. IoT devices might lack keyboards, screens, or biometric readers, further complicating authentication. Research in this area serves the twin goals of improving the user experience and helping to ensure devices retain strong authentication features.
Privacy notices
The use of privacy notices to inform people of what data is collected about them, how it’s used, and with whom it’s shared is a common practice in the US and Europe. However, it’s also widely agreed that these notices are ineffective because they are too long and people are exposed to too many of them.83 Again, a lack of screens on IoT devices exacerbates the problem. Usable privacy researchers have addressed this issue head-on, proposing the following design practices:84  
 
Create different notices for different audiences, such as primary, secondary and incidental users.
 
Provide relevant and actionable information, in particular explaining when data is collected or shared in ways that a user would not expect.
 
Use layered and contextual notices. Researchers argue that “showing everything at once in a single notice is rarely effective. Instead, all but the most simple notices should consist of multiple layers.”85 Different times, methods, and granularity of information displayed help users to absorb what’s being presented.
 
Involve users in the design of notices through user-centered86 or participatory design.87 Include user testing and usability evaluation as part of the overall system’s quality assurance.

Because privacy notices are mandated by regulation and user testing involves cost, businesses have little incentive to be progressive or experimental. As such, university-based experimentation and research is vital to advance the state of the art in notifying users and in interface design. The field of Usable Privacy and Security is essential to address the particular interface challenges of the IoT.

Privacy Impact Assessments

A Privacy Impact Assessment (PIA) is a systematic process to evaluate the impact and risks of collecting, using, and disseminating personally identifiable information in a project, product, service, or system. The goal is to identify privacy risks; ensure compliance with national or local laws, contractual requirements, or company policy; and put risk mitigation strategies in place. Privacy scholar Gary T. Marx writes that a PIA “anticipates problems, seeking to prevent, rather than to put out fires.”88 As such, a PIA is an integral part of planning and development rather than an afterthought. PIAs have traditionally been used by government agencies, but they have clear and direct application in the commercial sphere. The recently passed EU General Data Protection Regulation requires PIAs when data processing is “likely to result in a high risk for the rights and freedoms of individuals.”89 Each EU country will determine exactly what those activities will be, but it’s safe to assume that some IoT systems will trigger this requirement when the GDPR comes into effect in 2018.

According to expert Toby Stevens,90 PIAs analyze risks from the perspective of the data subject and are complementary to security risk assessments, which are done from the perspective of the organization. A security risk assessment might conclude that the loss of 10,000 customer records is an acceptable risk for the organization, but the PIA will consider the impact on the affected individuals. PIAs are also directly beneficial to the organization by preventing costly redesigns or worse—helping to curtail regulator fines, irreparable brand damage, lawsuits, or loss of customers because of a significant privacy failure. They are, as the New Zealand PIA Handbook states, an “early warning system... enabling [organizations] to identify and deal with their own problems internally and proactively rather than awaiting customer complaints, external intervention or bad press.”91 PIAs allow business stakeholders to get their ethics down on paper and into a process that can be applied over and over as new products and services are developed; this in turn enables staff to understand executive risk appetite.

A PIA is a flexible instrument, and can be configured to meet a variety of needs, policies, and regulations. Here are some basic elements it can include (a structured sketch follows the list):

  • Data sources

  • Data flows through the product/service lifecycle

  • Data quality management plan

  • Data use purpose

  • Data access inventory—who inside and outside the organization can access the data

  • Data storage locations

  • Data retention length

  • Applicable privacy laws, regulations, and principles

  • Identification of privacy risks to users and to the organization, along with their severity (e.g., High, Medium, Low)

  • Privacy breach incident response strategy
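
One way to operationalize these elements, sketched below in Python with illustrative field names and values, is to capture them in a structured template that can be reused for every new product or service, so assessments stay consistent and auditable:

    from dataclasses import dataclass, field
    from typing import List, Dict

    @dataclass
    class PrivacyImpactAssessment:
        """Illustrative template covering the basic PIA elements listed above."""
        data_sources: List[str]
        data_flows: List[str]            # through the product/service lifecycle
        use_purposes: List[str]
        access_inventory: List[str]      # who, inside and outside, can access the data
        storage_locations: List[str]
        retention_period_days: int
        applicable_laws: List[str]
        risks: List[Dict[str, str]] = field(default_factory=list)  # description + severity
        breach_response_plan: str = ""

    pia = PrivacyImpactAssessment(
        data_sources=["door sensor", "companion mobile app"],
        data_flows=["device -> cloud -> analytics vendor"],
        use_purposes=["presence detection"],
        access_inventory=["support team", "analytics vendor"],
        storage_locations=["EU data center"],
        retention_period_days=90,
        applicable_laws=["GDPR"],
        risks=[{"description": "location inference from presence data",
                "severity": "Medium"}],
    )
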

In 2011, a group of industry players and academics authored an RFID PIA framework92 that was endorsed by the European Commission. At the time, RFID technology was considered a cornerstone of the IoT ecosystem, and the framework focuses on it to the exclusion of other IoT-like technologies. Use of the framework is not required by law, but is instead “part of the context of other information assurance, data management, and operational standards that provide good data governance tools for RFID and other Applications.”93

Whether a PIA merely meets the letter of the law or goes far beyond it, incorporating broad ethical concerns and sensitivities for users, it can help organizations get a better sense of the personal data they handle, the associated risks, and how to manage issues before disaster strikes.

Identity management

The field of identity management (IDM) is concerned with authentication, attributes, and credentials—methods of identification and access. Not only is this domain important for engineering-level objectives about how users and devices identify and connect to one another, but it also provides a framework and language for privacy design considerations.

For many years, identity practices have been converging around what is called federated identity, where people use a single sign-on (SSO) to access multiple, disparate resources. Examples include Facebook logins to access news sites, university logins to access academic publishers, Gmail logins to access other Google functions, and national IDs to log in to government websites. Using SSO means there’s always someone looking over your shoulder online—unless a system is designed specifically not to. This and other challenges inherent to IDM systems have yielded several strategies to strengthen privacy protection. Three in particular are valuable for the IoT:94

Unlinkability
This is the intentional separation of data events and their sources, breaking the “links” between users and where they go online. In the IDM world, this means designing systems so that one website does not know you are using another website even though you use the same login on both. In the IoT context, the analogy would be that your bathroom scale does not need to know where you drive, and your fitness band does not need to know which websites you visit. There are certainly advantages to commingling data from different contexts, and many people will feel comfortable with it happening automatically. The point is for there to be options for those who do not. Hence, there is a design imperative for IoT devices not to share cross-contextual data without explicit user consent, and for defaults to require users to opt in to sharing rather than having to opt out; a minimal sketch of per-context identifiers appears after this list.
Unobservability
Identity systems can be built to be blind to the activities that occur within them. People can use credentials and log in to various websites, and the “plumbing” of the system is unaware of what goes on. We can apply this same design principle to the various intermediaries, transport subsystems, and middle layers that make up the IoT ecosystem’s connective tissue of communications.
Intervenability
This is exactly what it sounds like—the ability for users to intervene with regard to the collection, storage, and use of their personal data. Intervenability is a broad design and customer relationship goal; it aims to give users more knowledge and control over data that’s already been collected about them, what raw data is stored, and what inferences a company has made. The ability to delete and withdraw consent, to determine who gets to see personal data and how it’s used, and to correct erroneous information all support transparency, user control and rights, and autonomy.
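
To make unlinkability more concrete, here is a minimal Python sketch, with illustrative names, of deriving a different pseudonymous identifier for each context from a single per-user secret (similar in spirit to pairwise pseudonymous identifiers in federated identity), so that two services cannot correlate the same user by identifier alone:

    import hmac, hashlib

    MASTER_SECRET = b"per-user secret held by the identity provider"  # illustrative

    def context_pseudonym(user_id: str, context: str) -> str:
        """Derive a stable identifier that differs per context, so the smart
        scale's service and the car's service see unrelated IDs."""
        msg = f"{user_id}|{context}".encode("utf-8")
        return hmac.new(MASTER_SECRET, msg, hashlib.sha256).hexdigest()[:16]

    print(context_pseudonym("alice", "smart-scale"))    # one identifier
    print(context_pseudonym("alice", "car-telemetry"))  # a different, unlinkable identifier
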

Standards

A standard is an agreed-upon method or process. Standards create uniformity—a common reference for engineers, programmers, and businesses to rely upon so that products made by different companies can interoperate with one another. Standards reduce costs and complexity because companies seeking to enter a new market don’t need to invent everything from scratch. Standards abound in the technical world: DVD, USB, electrical outlets, the screw threads on a lightbulb, WiFi, TCP/IP, Ethernet, RFID, the C programming language, Bluetooth... information age technologies are typified by standardization. Standards can originate with noncommercial or public organizations, such as the Institute of Electrical and Electronics Engineers (IEEE), or with commercial organizations and groups, such as the AllSeen Alliance, “a cross-industry consortium dedicated to enabling the interoperability of billions of devices, services, and apps that comprise the Internet of Things.”95

Successful standards wield much influence because they can specify what devices can and cannot do. As such, they are a powerful intervention point for privacy in a technical sense. There is a clear need for more research into which IoT standards affect the privacy landscape, and how. Given the complexity of building respectful, secure, privacy-preserving systems, IoT-specific and more general standards play a critical role in the evolution of connected devices. See the Further Reading section for references to existing and emerging standards.

67 Federal Trade Commission. 2015. Internet of Things: Privacy & Security in a Connected World. Available at http://bit.ly/2dwxDIY.

68 Research shows that this method of protecting information originates around 1900 BC. See Waddell, K. 2016. The Long and Winding History of Encryption. http://theatln.tc/2debU8g.

69 See, e.g., Ohm, P. 2010. Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review 57(6):1701-1777. Available at http://ssrn.com/abstract=1450006.

70 Polonetsky, J., Tene, O. and Finch, K. 2016. Shades of Gray: Seeing the Full Spectrum of Data De-identification. Available at http://bit.ly/2deeT08; Future of Privacy Forum. 2016. A Visual Guide to Practical Data De-identification. Available at http://bit.ly/2d41FkL.

71 Article 29 Working Party. 2014. Opinion 8/2014 on Recent Developments on the Internet of Things. Available at http://bit.ly/2cXhOZM.

72 Quoted in Santucci, G. 2013. Privacy in the Digital Economy: Requiem or Renaissance? Available at http://bit.ly/2dlpFDq.

73 Baldini, G. et al. 2012. RFID Tags: Privacy Threats and Countermeasures. European Commission: Joint Research Centre. Available at http://bit.ly/2dlrKPo.

74 Dennedy, M., Fox, J. and Finneran, T. 2014. The Privacy Engineer’s Manifesto: Getting from Policy to Code to QA to Value. New York: Apress. Available at https://www.apress.com/9781430263555; Bracy, J. 2014. Demystifying Privacy Engineering. IAPP. Available at http://bit.ly/2dbbhdV.

75 Alliance of Automobile Manufacturers and Association of Global Automakers. 2014. Consumer Privacy Protection Principles: Privacy Principles for Vehicle Technologies and Services. Available at http://bit.ly/2ddCvhT; see also FAQ at http://bit.ly/2d445ji.

76 See footnote 50.

77 See Participating Members at http://bit.ly/2cQ4h4w.

78 Hong, J., Cranor, L. and Sadeh, N. 2011. Improving the Human Element: Usable Privacy and Security. Available at http://bit.ly/2cyrifS.

79 Ibid.

80 https://cups.cs.cmu.edu/.

81 https://www.usenix.org/conference/soups2016.

82 Yadron, D. 2014. Man Behind the First Computer Password: It’s Become a Nightmare. Available at http://on.wsj.com/2cQ4MLD.

83 A 2014 report to President Obama observed: “Only in some fantasy world do users actually read these notices and understand their implications before clicking to indicate their consent.” See http://bit.ly/2d44pP6; see also Madrigal, A. 2012. Reading the Privacy Policies You Encounter in a Year Would Take 76 Work Days. The Atlantic 1 Mar. Available at http://theatln.tc/2ddD6QK.

84 This section is drawn from Schaub, F., Balebako, R., Durity, A. and Cranor, L. 2015. A Design Space for Effective Privacy Notices. Available at http://bit.ly/2dwBkhZ.

85 See footnote 84.

86 Usability.gov defines user-centered design as a process that “outlines the phases throughout a design and development life-cycle all while focusing on gaining a deep understanding of who will be using the product.” See http://bit.ly/2cXjmTS.

87 Computer Professionals for Social Responsibility defined participatory design as “an approach to the assessment, design, and development of technological and organizational systems that places a premium on the active involvement of workplace practitioners (usually potential or current users of the system) in design and decision-making processes.” See http://bit.ly/2cwDUiL.

88 Marx, G. 2012. Privacy is Not Quite Like the Weather. In D. Wright and P. De Hert (eds.), Privacy Impact Assessment (pp. v-xiv). Dordrecht: Springer.

89 Maldoff, G. 2016. The Risk-Based Approach in the GDPR: Interpretation and Implications. Available at http://bit.ly/2d44diR.

90 http://privacygroup.org/.

91 Office of the Privacy Commissioner. 2007. Privacy Impact Assessment Handbook. Auckland: Office of the Privacy Commissioner. Available at http://bit.ly/2d3Qev4.

92 See http://bit.ly/2dmorbf; also, for much more context on the RFID PIA and its development, see Spiekermann, S. 2012. The RFID PIA—Developed by Industry, Endorsed by Regulators. In D. Wright and P. De Hert (eds.), Privacy Impact Assessment, (pp. 323–346). Dordrecht: Springer. Available at http://bit.ly/2cXjbb0.

93 See first reference in footnote 92.

94 Rost, M. and Bock, K. 2011. Privacy by Design and the New Protection Goals. Available at http://bit.ly/2cFN4gf.

95 AllSeen Alliance. 2016. Home page. Available at https://allseenalliance.org/.
