CHAPTER 2

Understanding Privacy

In order to appropriately address privacy issues and challenges in mobile and pervasive computing, we first need to better understand why we—as individuals and as a society—might want and need privacy. What does privacy offer? How does privacy affect our lives? Why is privacy necessary? Answering these questions naturally helps us better understand what “privacy” actually is, e.g., what it means to “be private” or to “have privacy.” Only by examining the value of privacy, beyond our maybe intuitive perception of it, will we be able to understand what makes certain technology privacy-invasive and how it might be designed to be privacy-friendly.

Privacy is a complex concept. Robert C. Post, Professor of Law and former dean of the Yale Law School, states that “[p]rivacy is a value so complex, so entangled in competing and contradictory dimensions, so engorged with various and distinct meanings, that I sometimes despair whether it can be usefully addressed at all” [Post, 2001]. In this chapter, we aim to untangle the many perspectives on and motivations for privacy. In order to better understand both the reasons for—and the nature of—privacy, we examine privacy from three perspectives. A first understanding comes from a historical overview of privacy, in particular from a legal perspective. Privacy law, albeit only one particular perspective on privacy, certainly is the most codified incarnation of privacy and privacy protections. Thus, it lends itself well as a starting point. Privacy law also has a rich history, with different approaches in different cultures and countries. The legal understanding of privacy has also changed substantially over the years, often because of technological advances. As we discussed in Chapter 1, technology and privacy are tightly intertwined, as technological innovations often tend to “change the playing field” in terms of making certain data practices and incursions on privacy possible that weren’t possible before. Our historic overview hence also includes key moments that prompted new views on what privacy constitutes.

Our second perspective on privacy then steps back from the codification of privacy and examines arguments for and against privacy—the motivation for protecting or curtailing privacy. This helps us to not only understand why we may want privacy, but also what we might lose without privacy. Is privacy something valuable worth incorporating into technology?

With both the historic backdrop and privacy motivations in mind, we then present contemporary conceptualizations of privacy. We will see that there are many views on what privacy is, which can make it difficult to understand what someone is referring to when talking about “privacy.” Precision is important when discussing privacy, in order to ensure a common understanding rather than arguing based on diverging perspectives on what privacy is or ought to be. The discussion of different conceptualizations and understandings of privacy is meant to help us evaluate the often nuanced privacy implications of new technologies.

2.1  CODIFYING PRIVACY

There is certainly no lack of privacy definitions—in fact, this whole chapter is about defining privacy in one way or another. However, at the outset, we take a look at definitions of privacy that have received broader societal support, i.e., by virtue of being actually enshrined in law. This is not meant as legal scholarship, but rather as an overview of what are considered fundamental aspects of privacy worth protecting.

2.1.1  HISTORICAL ROOTS

Privacy is hardly a recent fad. Questions of privacy have been a focus of society for hundreds of years. In fact, references to privacy can already be found in the Bible, e.g., in Luke 12(2–3): “What you have said in the dark will be heard in the daylight, and what you have whispered in the ear in the inner rooms will be proclaimed from the roofs” [Carroll and Prickett, 2008]. The earliest reference in common law1 can be traced back to the English Justices of the Peace Act of 1361, which provided for the arrest of eavesdroppers and peeping toms [Laurant, 2003]. In 1763, William Pitt the Elder, at that time a member of the English parliament, framed the privacy of one’s home in his speech on the Excise Bill as follows [Brougham, 1839]:

The poorest man may in his cottage bid defiance to all the forces of the Crown. It may be frail—its roof may shake—the wind may blow through it—the storm may enter—the rain may enter—but the King of England cannot enter!—all his forces dare not cross the threshold of the ruined tenement.

One of the earliest explicit definitions of privacy came from the future U.S. Supreme Court Justice Louis Brandeis and his colleague Samuel Warren. In 1890, the two published the essay “The Right to Privacy” [Warren and Brandeis, 1890], which created the basis for privacy tort law2 in the U.S. legal system. They defined privacy as “the right to be let alone.” The fact that this definition is so often quoted can probably be attributed in equal parts to it being the first legal text on the subject and to it being easy to memorize. While it encompasses in principle all of the cases mentioned previously, such as peeping toms, eavesdroppers, and trespassers, it is still a very limited definition of privacy. Warren and Brandeis’ definition focuses on only one particular “benefit” of privacy: solitude. As we will see later in this chapter, privacy has other benefits beyond solitude.

Probably the most interesting aspect of Warren and Brandeis’ work from today’s perspective is what prompted them to think about the need for a legal right to privacy at the end of the 19th century:

Recent inventions and business methods call attention to the next step which must be taken for the protection of the person, and for securing to the individual what Judge Cooley calls the right ‘to be let alone.’ …Numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops’ [Warren and Brandeis, 1890].


Figure 2.1: The Kodak Camera. George Eastman’s “Snap Camera” made it suddenly simple to take anybody’s image on a public street without their consent.

In this context, Warren and Brandeis’ quote of Luke 12(2–3) (in a translation slightly different from the Bible [Carroll and Prickett, 2008]) sounds like a prescient description of the new possibilities of mobile and pervasive computing. Clearly, neither the Evangelist Luke nor Warren and Brandeis had anything like modern mobile and pervasive computing in mind. In Warren and Brandeis’ case, however, it actually was a reference to a then novel technology—photography. Before 1890, getting one’s picture taken usually required visiting a photographer in their studio and sitting still for a considerable amount of time, otherwise the picture would be blurred. But on October 18, 1884, George Eastman, the founder of the Eastman Kodak Company, received U.S. Patent #306,594 for his invention of the modern photographic film. Instead of having to use a large tripod-mounted camera with heavy glass plates in the studio, everybody could now take Kodak’s “Snap Camera” (see Figure 2.1) out to the streets and take a snapshot of just about anybody—without their consent. It was this rise of unsolicited pictures, which more and more often found their way into the pages of the (at the same time rapidly expanding) tabloid newspapers, that prompted Warren and Brandeis to paint this dark picture of a world without privacy.

Today’s developments of smartphones, wearable devices, smart labels, memory amplifiers, and IoT-enabled smart “things” seem to mirror the sudden technology shifts experienced by Warren and Brandeis, opening up new forms of social interaction that change the way we experience our privacy. However, Warren and Brandeis’ “right to be let alone” looks hardly practical today: given the multitude of interactions in today’s world, we constantly need to deal with people (or rather: services) that do not know us in person and hence require some form of personal information from us in order to judge whether such an interaction would be beneficial. From opening bank accounts, applying for credit, or obtaining a personal yearly pass for trains or public transportation, to buying goods online—we constantly have to “connect” with others (i.e., give out our personal information) in order to participate in today’s life. Even when we are not explicitly providing information about ourselves, we constantly leave digital traces. Such traces range from what websites we visit or what news articles we read, to surveillance and traffic cameras recording our whereabouts, to our smartphones revealing our location to mobile carriers, app developers, and advertisers. Preserving our privacy through isolation is simply not as much of an option anymore as it was over 100 years ago.

Privacy as a Right

Warren and Brandeis’ work put privacy on the legal map, yet it took another half century before privacy made further legal inroads. After the end of the Second World War, in which Nazi Germany had used detailed citizen records to identify unwanted subjects of all kinds [Flaherty, 1989], privacy became a key human right across a number of international treaties—the most prominent being the Universal Declaration of Human Rights, adopted by the United Nations in 1948, which states in its Article 12 that [United Nations, 1948]:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. Everyone has the right to the protection of the law against such interference or attacks.

Similar protections can be found in Article 8 of the Council of Europe’s Convention of 1950 [Council of Europe, 1950], and again in 2000 with the European Union’s Charter of Fundamental Rights [European Parliament, 2000], which for the first time in the European Union’s history sets out in a single text the whole range of civil, political, economic, and social rights of European citizens and all persons living in the European Union [Solove and Rotenberg, 2003]. Article 8 of the Charter, concerning the Protection of Personal Data, states the following [European Parliament, 2000].

1.  Everyone has the right to the protection of personal data concerning him or her.

2.  Such data must be processed fairly for specified purposes and on the basis of the consent of the person concerned or some other legitimate basis laid down by law. Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.

3.  Compliance with these rules shall be subject to control by an independent authority.

The rise of the Internet and the World Wide Web in the early 1990s prompted many to proclaim the demise of national legal frameworks, as their enforcement in a borderless cyberspace seemed difficult at best.3 However, the opposite effect could be observed: at the beginning of the 21st century, many national privacy laws had not only been adjusted to the technical realities of the Internet, but had also undergone substantial international harmonization, facilitating cross-border enforcement.

Today, more than 100 years after Warren and Brandeis laid the foundation for modern data protection laws, two distinct approaches to legal privacy protection have emerged: the European approach of favoring comprehensive, all-encompassing data protection legislation that governs both the private and the public sector, and the sectoral approach popular in the United States that favors sector-by-sector regulation in response to industry-specific needs and concerns, in conjunction with voluntary industry self-regulation. In both approaches, however, privacy protection is broadly modeled around what is known as the “Fair Information Practice Principles.”

The Fair Information Practice Principles

If one wanted to put a date to it, modern privacy legislation was probably born in the late 1960s and early 1970s, when governments first began to systematically make use of computers in administration. Alan Westin’s book Privacy and Freedom, published in 1967 [Westin, 1967], had a significant impact on how policymakers in the following decades would address privacy. Clarke [2000] reports how a 1970 German translation of Westin’s book significantly influenced the world’s first privacy law, the “Datenschutzgesetz” (data protection law) of the West German state of Hesse. In the U.S., a Westin-inspired 1973 report of the United States Department of Health, Education, and Welfare (HEW) set forth a code of Fair Information Practice (FIP), which has become a cornerstone of U.S. privacy law [Privacy Rights Clearinghouse, 2004], and has become equally popular worldwide. The five principles are as follows [HEW Advisory Committee, 1973].

1.  There must be no personal data record keeping systems whose very existence is secret.

2.  There must be a way for an individual to find out what information about him is in a record and how it is used.

3.  There must be a way for an individual to prevent information about him that was obtained for one purpose from being used or made available for other purposes without his consent.

4.  There must be a way for an individual to correct or amend a record of identifiable information about him.

5.  Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.

In the early 1980s, the Organization for Economic Cooperation and Development (OECD) took up those principles and issued “The OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data” [OECD, 1980], which expanded them into eight practical measures aimed at harmonizing the processing of personal data in its member countries. By setting out core principles, the organization hoped to “obviate unnecessary restrictions to transborder data flows, both on and off line.” The eight principles are as follows [OECD, 2013].4

1.  Collection Limitation Principle. There should be limits to the collection of personal data and any such data should be obtained by lawful and fair means and, where appropriate, with the knowledge or consent of the data subject.

2.  Data Quality Principle. Personal data should be relevant to the purposes for which they are to be used, and, to the extent necessary for those purposes, should be accurate, complete and kept up-to-date.

3.  Purpose Specification Principle. The purposes for which personal data are collected should be specified not later than at the time of data collection and the subsequent use limited to the fulfillment of those purposes or such others as are not incompatible with those purposes and as are specified on each occasion of change of purpose.

4.  Use Limitation Principle. Personal data should not be disclosed, made available or otherwise used for purposes other than those specified in accordance with the Purpose Specification principle except:

(a)  with the consent of the data subject; or

(b)  by the authority of law.

5.  Security Safeguards Principle. Personal data should be protected by reasonable security safeguards against such risks as loss or unauthorised access, destruction, use, modification or disclosure of data.

6.  Openness Principle. There should be a general policy of openness about developments, practices and policies with respect to personal data. Means should be readily available of establishing the existence and nature of personal data, and the main purposes of their use, as well as the identity and usual residence of the data controller.5

7.  Individual Participation Principle. Individuals should have the right:

(a)  to obtain from a data controller, or otherwise, confirmation of whether or not the data controller has data relating to them;

(b)  to have communicated to them, data relating to them

  i.  within a reasonable time;

 ii.  at a charge, if any, that is not excessive;

iii.  in a reasonable manner; and

iv.  in a form that is readily intelligible to them;

(c)  to be given reasons if a request made under subparagraphs (a) and (b) is denied, and to be able to challenge such denial; and

(d)  to challenge data relating to them and, if the challenge is successful, to have the data erased, rectified, completed or amended.

8.  Accountability Principle. A data controller should be accountable for complying with measures which give effect to the principles stated above.

Even though the OECD principles, just as the HEW guidelines before them, carried no legal obligation, they nevertheless constituted an important international consensus that substantially influenced national privacy legislation in many countries in the years to come [Solove and Rotenberg, 2003]. In what Michael Kirby, former Justice of the High Court of Australia, has called the “decade of privacy” [Clarke, 2006], many European countries (and the U.S.) followed the German state of Hesse in passing comprehensive data protection laws—the first national privacy law was passed in Sweden in 1973, followed by the U.S. (Privacy Act of 1974, regulating the processing of personal information by federal agencies), Germany (1977), and France (1978).

The FIPs, while an important landmark in privacy protection, are, however, not without their flaws. Clarke [2000] calls them a “movement that has been used by corporations and governments since the late 1960s to avoid meaningful regulation.” Instead of taking a holistic view on privacy, Clarke finds the FIPs too narrowly focused on “data protection,” only targeting the “facilitation of the business of government and private enterprise” rather than the human rights needs that should be the real goal of privacy protection: “the principles are oriented toward the protection of data about people, rather than the protection of people themselves” [Clarke, 2006]. More concrete omissions of the FIPs are the complete lack of data deletion or anonymization requirements (i.e., after the data has served its purpose), or the absence of clear limits on what may be collected and in what quantities (the FIPs only require that the data collected is “necessary”). Similarly, Cate [2006] notes that, in their translation into national laws, the broad and aspirational fair information practice principles have often been reduced to narrow legalistic concepts, such as notice, choice, access, security, and enforcement. These narrow interpretations of the FIPs focus on procedural aspects of data protection rather than the larger goal of protecting privacy for the benefit of individuals and society.

2.1.2  PRIVACY LAW AND REGULATIONS

Many countries have regulated privacy protections through national laws—often with reference to or based on the fair information practice principles. We provide an overview of those laws with a specific emphasis on the U.S. and Europe, due to their prominent roles in developing and shaping privacy law and their differing approaches for regulating privacy.

Privacy Law and Regulations in the United States

The U.S. Constitution does not lay out an explicit constitutional right to privacy. However, in a landmark case, Griswold v. Connecticut (1965),6 the U.S. Supreme Court recognized a constitutional right to privacy, emanating from the First, Third, Fourth, Fifth, and Ninth Amendments of the U.S. Constitution.7 The First Amendment guarantees freedom of worship, speech, press, assembly, and petition. Privacy under First Amendment protection usually refers to being unencumbered by the government with respect to one’s views (e.g., being able to speak anonymously or keeping one’s associations private). The Third Amendment provides that troops may not be quartered (i.e., allowed to reside) in private homes without the owner’s consent (an obvious relationship to the privacy of the home). The Ninth Amendment declares that the listing of individual rights is not meant to be comprehensive, i.e., that the people have other rights not specifically mentioned in the Constitution [National Archives]. The right to privacy is primarily anchored in the Fourth and Fifth Amendments [Solove and Rotenberg, 2003].

•  Fourth Amendment: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

•  Fifth Amendment: No person shall be […] compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.

In addition, the Fourteenth Amendment’s due process clause has been interpreted to provide a substantive due process right to privacy.8

•  Fourteenth Amendment: No state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.

While the U.S. Constitution recognizes an individual right to privacy, the constitution only describes the rights of citizens in relationship to their government, not to other citizens or companies9 [Cate, 1997]. So far, no comprehensive legal privacy framework exists in the United States that equally applies to both governmental and private data processors. Instead, federal privacy law and regulation follows a sectoral approach, addressing specific privacy issues that arise in certain public transactions or industry sectors [Solove and Schwartz, 2015].

Privacy with respect to the government is regulated by the Privacy Act of 1974, which only applies to data processing at the federal level [Gormley, 1992]. The Privacy Act roughly follows the Fair Information Principles set forth in the HEW report (mentioned earlier in this section), requiring government agencies to be transparent about their data collections and to support access rights. It also restricts what information different government agencies can share about an individual and allows citizens to sue the government for violating these provisions. Additional laws regulate data protection in other interactions with the government, such as the Driver’s Privacy Protection Act (DPPA) of 1994, which restricts states in disclosing or selling personal information from motor vehicle records, or the Electronic Communications Privacy Act (ECPA) of 1986, which extended wiretapping protections to electronic communication.

Privacy regulation in the private sector is largely based on self-regulation, i.e., industry associations voluntarily enact self-regulations for their sector to respect the privacy of their customers. In addition, federal or state privacy laws are passed for specific industry sectors in which privacy problems emerge. For instance, the Family Educational Rights and Privacy Act (FERPA) of 1974 regulates student privacy in schools and universities; and the Children’s Online Privacy Protection Act (COPPA) of 1998 restricts information collection and use by websites and online services for children under age 13.

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 gives the Department of Health and Human Services rule-making authority regarding the privacy of medical records. The HIPAA Privacy Rule requires privacy notices to patients and patient authorization for data processing and sharing, limits data processing to what is necessary for healthcare, gives patients data access rights, and prescribes physical and technical safeguards for health records. Commonly, federal privacy laws are amended over time to account for evolving privacy issues. For instance, the Genetic Information Nondiscrimination Act (GINA) of 2008 limits the use of genetic information in health insurance and employment decisions.

Privacy in the financial industry is regulated by multiple laws. The Fair Credit Reporting Act (FCRA) of 1970 governs how credit reporting agencies can use consumer information. It was most recently amended by the Economic Growth, Regulatory Relief, and Consumer Protection Act of 2018, which, as a reaction to the 2017 Equifax data breach, gave consumers the right to free credit freezes to limit access to their credit reports and thus reduce the risk of identity theft. The Gramm-Leach-Bliley Act (GLBA) of 1999 requires financial institutions to store financial information in a secure manner and to provide customers with an annual privacy notice, and gives consumers the right to opt out of or limit the sharing of personal information with third parties.

The Telephone Consumer Protection Act (TCPA) of 1991 provides remedies against repeated telephone calls by telemarketers and created the national Do Not Call registry.10 The Controlling the Assault of Non-Solicited Pornography And Marketing (CAN-SPAM) Act of 2003 created penalties for the transmission of unsolicited email and requires that email newsletters and marketing emails contain an unsubscribe link. The Video Privacy Protection Act (VPPA) of 1988 protects the privacy of video rental records.

Those federal privacy laws are further complemented by state laws. For instance, many states have passed RFID-specific legislation that prohibits unauthorized reading of RFID-enabled cards and other devices (e.g., the state of Washington’s Business Regulation Chapter 19.300 [Washington State Legislature, 2009]). The state of Delaware enacted four privacy laws in 2015, namely the Online and Personal Privacy Protection Act (DOPPA), the Student Data Privacy Protection Act (SDPPA), the Victim Online Privacy Act (VOPA), and the Employee/Applicant Protection for Social Media Act (ESMA).

One of the more well-known state privacy laws is California’s Online Privacy Protection Act (CalOPPA) of 2004, which poses transparency requirements, including the posting of a privacy policy, for any website or online service that collects and maintains personally identifiable information from a consumer residing in California. Because California is the most populous U.S. state with a large consumer market and due to the difficulty of reliably determining an online user’s place of residence, CalOPPA, despite being a state law, affected almost all US websites as well as international websites. In 2018, California became the first US state to enact a comprehensive (i.e., non-sectoral) privacy law. The California Consumer Privacy Act of 2018, which will go into effect in 2020, requires improved privacy notices, a conspicuous opt-out button regarding the selling of consumer information, and grants consumers rights to data access, deletion and portability.

Due to the fractured nature of privacy legislation, privacy enforcement authority is also divided among different entities, including the Department of Health and Human Services (for HIPAA), the Department of Education (for FERPA), State Attorneys General (for respective state laws), and the Federal Trade Commission (FTC). The FTC, as the U.S. consumer protection agency, has a prominent privacy enforcement role [Solove and Hartzog, 2014], including the investigation of deceptive and unfair trade practices with respect to privacy, as well as statutory enforcement (e.g., for COPPA). The FTC further has enforcement power with respect to Privacy Shield, the U.S.–European agreement for cross-border data transfers. Due to its consumer protection charge, the FTC can also bring privacy-related enforcement actions against companies in industries without a sectoral privacy law [Solove and Hartzog, 2014], such as mobile apps, online advertising, or smart TVs. In addition to monetary penalties, FTC consent decrees typically require companies to submit to independent audits for 20 years and to establish a comprehensive internal security or privacy program. The FTC’s enforcement creates pressure for industries to adhere to their self-regulatory privacy promises and practices.

In addition to federal and state laws, civil privacy lawsuits (i.e., between persons or corporations) are possible. Prosser [1960] documented four distinct privacy torts common in US law,11 i.e., ways for an individual who feels their privacy has been violated to sue the violator for damages:

•  intrusion upon seclusion or solitude, or into private affairs;

•  public disclosure of embarrassing private facts;

•  adverse publicity which places a person in a false light in the public eye; and

•  appropriation of name or likeness.

In summary, privacy is protected in the U.S. by a mix of sector-specific federal and state laws, with self-regulatory approaches and enforcement by the FTC in otherwise unregulated sectors. An advantage of this sectoral approach is that the resulting privacy laws are often specific to the privacy issues, needs, and requirements of a given sector; a downside is that such laws are often outpaced by the advancement of technology, thus requiring periodic amendments.

Privacy Law and Regulation in the European Union

On the other side of the Atlantic, a more civil-libertarian perspective on personal data protection prevails. Individual European states began harmonizing their national privacy laws as early as the mid-1970s. In 1973 and 1974, the European Council12 passed resolutions (73)22 and (74)29, containing guidelines for national legislation concerning private and public databases, respectively [Council of Europe, 1973, 1974]. In 1985, the “Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data” (108/81) went into effect, providing a normative framework for the national privacy protection laws of its member states [Council of Europe, 1981]. Convention 108/81 is open to any country to sign (i.e., not only CoE members), and has since seen countries such as Uruguay, Mauritius, Mexico, and Senegal join.13 While the convention offered a first step toward an international privacy regime, its effect on national laws remained relatively limited [Mayer-Schönberger, 1998].

It was the 1995 Data Protection Directive 95/46/EC [European Parliament and Council, 1995] (in the following simply called “the Directive”) that achieved what Convention 108/81 set out to do, namely a lasting harmonization of the various European data protection laws and providing an effective international tool for privacy protection even beyond European borders.

The Directive had two important aspects that advanced its international applicability. On the one hand, it required all EU member states14 to enact national law that provided at least the same level of protection as the Directive stipulated. This European harmonization allowed for a free flow of information among all its member states, as personal data enjoyed the same minimum level of protection set forth by the Directive in any EU country.

On the other hand, the Directive’s Article 25 explicitly prohibited the transfer of personal data into “unsafe third countries,” i.e., countries with data protection laws that would not offer an adequate level of protection as required by the Directive. After European officials made it clear that they intended to pursue legal action against the European branch offices of corporations that would transfer personal data of EU residents to their corresponding headquarters in such unsafe third countries, a large number of non-European countries around the world began to adjust their privacy laws in order to become a “safe” country with regard to the Directive, and thus become part of the European Internal Information Market. Eventually, a dozen countries were considered “safe” third countries with respect to personal data transfers: Andorra, Argentina, Canada, Switzerland, the Faeroe Islands, the British Crown dependencies of Guernsey, Jersey, and the Isle of Man, Israel, New Zealand, the U.S.,15 and Uruguay.

However, despite its significant impact, the 1995 Directive was woefully ignorant of the rapid technological developments of the late 1990s and early 2000s. It was created before the Web took off, before smartphones appeared, before Facebook and Twitter and Google were founded. It is not surprising then that many criticized it for being unable to cope with those realities [De Hert and Papakonstantinou, 2012]. While the Directive was specifically written to be “technology neutral,” it also meant that it was unclear how it would apply to many concrete technical developments, such as location tracking, Web cookies, online profiling, or cloud computing. In order to bring the European privacy framework more in line with the realities of mobile and pervasive computing, as well as to create a single data protection law that applies in all EU member states, an updated framework was announced in 2012 and finally enacted in early 2016—the General Data Protection Regulation (GDPR). The GDPR then went into effect on May 25, 2018. Its main improvements over the 1995 Directive can be summarized as follows [De Hert and Papakonstantinou, 2012, 2016].

1.  Expanded Coverage: As per its Article 3, the GDPR now also applies to companies outside of the EU who offer goods or services to customers in the EU (“marketplace rule”)—the 1995 Directive only applied to EU-based companies (though it attempted to limit data flows to non EU-based companies).

2.  Mandatory Data Protection Officers (DPO): Article 37 requires companies whose “core activities… require regular and systematic monitoring of data subjects on a large scale” to designate a DPO as part of their accountability program, who will be the main contact for overseeing legal compliance.

3.  Privacy by Design: Article 25 requires that all data collection and processing must now follow a “data minimization” approach (i.e., collect only as much data as absolutely necessary), that privacy is provided by default, and that entities use detailed impact assessment procedures to evaluate the safety of their data processing.

4.  Consent: Article 7 stipulates that those who collect personal data must demonstrate that it was collected with the consent of the data subject, and that the consent was “freely given.” For example, if a particular piece of data is not necessary for a service, but the service is withheld from a customer unless they provide it, such consent would not qualify as “freely given.”

5.  Data Breach Notifications: Article 33 requires those who store personal data to notify national data protection authorities if they are aware of a “break-in” that might have resulted in personal data being stolen. Article 34 extends this to also notify data subjects if the breach “is likely to result in a high risk to the rights and freedoms of natural persons.”

6.  New Subject Rights: Articles 15–18 give those whose data is collected more explicit rights, such as the right to object to certain uses of their data, the right to obtain a copy of the personal data undergoing processing, or the right to have personal data being deleted (“the right to be forgotten”).

How these changes will affect privacy protection in Europe and beyond will become clearer over the coming years. When the GDPR finally came into effect in May 2018, its most visible effect was a deluge of email messages that asked people to confirm that they still wanted to be on a mailing list (i.e., giving “unambiguous” consent, as per Article 4) [Hern, 2018, Jones, 2018], as well as a pronounced media backlash questioning both the benefits of the regulation [Lobo, 2018] and its (seemingly extraordinarily high) costs [Kottasová, 2018]. Many of the new principles in the GDPR sound simple, but can be challenging to implement in practice (e.g., privacy by design, the right to erasure). We will discuss some of these challenges in Chapter 6. Also, the above-mentioned Council of Europe “Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data” (108/81) [Council of Europe, 1981] has recently been updated [Council of Europe, 2018] and is now being touted as a first step for non-EU countries to receive the coveted status of a “safe third country” (adequacy assessment) [European Commission, 2017] with respect to the new GDPR [Greenleaf, 2018].

Privacy Law and Regulation in Other Countries

Beyond the U.S. and Europe, many countries have adopted data protection or privacy laws [Greenleaf, 2017, Swire and Ahmad, 2012]. An increasing number of countries have been adopting comprehensive data protection laws, which not only follow the European model but are often directly based on EU Directive 95/46/EC or the GDPR. For instance, the data protection laws of Switzerland, Russia, and Turkey are similar to the EU Directive. Mexico’s 2010 Federal Law on the Protection of Personal Data Held by Private Entities also follows a comprehensive approach similar to the EU Directive, in particular with respect to data subjects’ rights, obligations of data controllers and processors, and international data transfer requirements. The Mexican law further incorporates the Habeas Data concept common in Latin American legal regimes [Swire and Ahmad, 2012]. Habeas Data refers to the constitutional right that citizens “may have the data” that is stored about them, i.e., they have the right to pose habeas data requests to entities to learn whether and what information is stored about them and to request correction. The Mexican law requires data controllers to designate a contact for such requests and to process them in a timely manner. The GDPR’s data portability right (Art. 20, GDPR) provides a similar right for data subjects and obligations for data controllers. In 2018, Brazil adopted the General Data Privacy Law (LGPD), which goes into effect in 2020. The LGPD closely mirrors the GDPR in its key provisions.

Canada also employs a comprehensive data protection approach. PIPEDA, the Personal Information Protection and Electronic Documents Act, regulates data protection for the private sector in Canada. A key difference between the GDPR and PIPEDA is that under PIPEDA individual informed consent is the only basis for lawful data collection, processing, and sharing, with limited exceptions [Banks, 2017].

Australia employs a co-regulatory model. Australia’s Federal Privacy Act defines National Privacy Principles for government agencies and the private sector. Industries then define self-regulatory codes that reflect the National Privacy Principles, with oversight by the Australian National Privacy Commissioner.

The Privacy Framework of the Asia-Pacific Economic Cooperation (APEC) aims to promote interoperability of privacy regimes across the 21 APEC countries. In contrast to Europe’s GDPR, the APEC Privacy Framework [APEC, 2017] is not a law but rather defines nine privacy principles, based on the OECD privacy guidelines, which APEC countries can choose to subscribe to. The Framework further defines Cross-Border Privacy Rules (CBPR) as a code of conduct to enable cross-border data transfers among countries committing to the CBPR. The CBPR requires a local accountability agent (i.e., a governmental institution) that certifies organizations’ CBPR compliance. As of 2018, six APEC countries are participating in the CBPR, namely the U.S., Japan, Mexico, Canada, South Korea, and Singapore. In addition to the CBPR, the APEC Cross-border Privacy Enforcement Agreement (CPEA) facilitates cooperation and information sharing among APEC countries’ privacy enforcement authorities.

2.2  MOTIVATING PRIVACY

When the UK government in 1994 tried to rally support for its plans to significantly expand CCTV surveillance in Britain, it coined the slogan “If you’ve got nothing to hide, you’ve got nothing to fear” [Rosen, 2001]—a slogan that has been a staple in counter-privacy arguments ever since. What is so bad about having less privacy in today’s day and age, unless you are a terrorist, criminal, or scoundrel? Surely, people in Britain, with its over 6 million surveillance cameras (one for every 11 people) [Barrett, 2013], seem to be no worse off than, say, their fellow European neighbors in France or Germany, which both have nowhere near that many cameras.16 Would those who maintain an active Facebook page say they are worse off than those who only use email, text messages, or, say, written letters to communicate with friends and family? Why not let Google monitor all Web searches and emails sent and received, so that it can provide better search results, a cleaner inbox, and more relevant targeted advertising, rather than the random spam that usually makes it into one’s inbox? Who would not want police and other national security institutions to have access to our call records and search history in order to prevent terrorists and child molesters from planning and conducting their heinous crimes?

One might assume that making the case for privacy should be easy. Privacy is one of the leading consumer concerns on the Internet, dominating survey responses for more than 20 years now (e.g., Westin’s privacy surveys between 1990 and 2003 [Kumaraguru and Cranor, 2005], the 1999 IBM Multi-National Consumer Privacy Survey [IBM Global Services, 1999], or recent consumer reports from KPMG [2016] or International Data Corporation (IDC) [2017]). Everybody seems to want privacy. However, when separating preferences from actual behavior [Berendt et al., 2005, Spiekermann et al., 2001], most people in their everyday life seem to care much less about privacy than surveys indicate—something often called the “privacy paradox” [Norberg et al., 2007]. Facebook, with its long history of privacy-related issues [Parakilas, 2017], is still growing significantly every year, boasting over 2.23 billion “active monthly users”17 at the end of June 2018 [Facebook, Inc., 2018]. Back in 2013, with only about half that many active users (1.2 billion) [Facebook, Inc., 2018], Facebook users already shared almost 3.3 million pieces of content (images, posts, links) per minute [Facebook, Inc., 2013]. Within the same 60 s, Google serves an estimated 3.6 million search queries [James, 2017], each feeding into the profile of one of its more than 1 billion unique users18 in order to better integrate targeted advertising into their search results, Gmail inboxes, and YouTube videos. Of course, more privacy-friendly alternatives exist, and they do see increasing use. For example, a service like the anonymous search engine DuckDuckGo saw its traffic double within days19 after Edward Snowden revealed the extent to which many Internet companies, including Google, were sharing data with the U.S. government. However, DuckDuckGo’s share of overall searches remains minuscule. Even though its share has been on the rise ever since the Snowden leaks of June 2013, its current20 11 million queries a day (roughly seven times its pre-Snowden traffic) are barely more than 0.3%21 of Google’s query traffic.

Why aren’t more people using a privacy-friendly search engine like DuckDuckGo? Does this mean people do not care about privacy? Several reasons come to mind. First, not many people may have heard about DuckDuckGo. Second, “traditional” search engines might simply provide superior value over their privacy-friendly competitors. Or maybe people simply think that they do. Given that the apparent cost of the services is the same (no direct charge to the consumer), the fact that one offers more relevant results than the other may be enough to make people not want to switch. Third, and maybe most important: indirect costs like a loss of privacy are notoriously hard to assess [Solove, 2013]. What could possibly happen if Yahoo, Microsoft, or Google knows what one is searching for? What is so bad about posting holiday pictures on Facebook or Instagram? Why would chatting through Signal22 be any better than through WhatsApp?23 Consider the following cases.

•  In 2009, U.S. Army veteran turned stand-up comedian Joe Lipari had a bad customer experience in his local Apple store [Glass, 2010]. Maybe unwisely, Joe went home and took out his anger via a Facebook posting that quoted a line from the movie he started watching—Fight Club (based on the 1996 book by Palahniuk [1996]): “And this button-down, Oxford-cloth psycho might just snap, and then stalk from office to office with an Armalite AR-10 carbine gas-powered semi-automatic weapon, pumping round after round into colleagues and co-workers.” Lipari posted the slightly edited variant: “Joe Lipari might walk into an Apple store on Fifth Avenue with an Armalite AR-10 carbine gas-powered semi-automatic weapon and pump round after round into one of those smug, fruity little concierges.” An hour later, a full SWAT team arrived, apparently alerted by one of Joe’s Facebook contacts who had seen the posting and contacted homeland security. After a thorough search of his place and a three-hour interrogation downtown, Joe assumed that his explanation of this being simply a bad movie quote had clarified the misunderstanding. Yet four months later, Joe Lipari was charged with two “Class D” felonies—“PL490.20: Making a terroristic threat” [The State of New York, 2018b] and “PL240.60: Falsely reporting an incident in the first degree” [The State of New York, 2018a]—each carrying prison terms of 5–10 years. Two years and more than a dozen court appearances later the case was finally dismissed in February 2011.

•  In 2012, Leigh Van Bryan and Emily Bunting, two UK residents just arriving in Los Angeles for a long-planned holiday, were detained at Customs and locked up for 12 hours in a cell for interrogation [Compton, 2012]. Van Bryan’s name had been placed on a “One Day Lookout” list maintained by Homeland Security for “intending to come to the US to commit a crime,” while Bunting was charged for traveling with him. The source of this was two tweets Van Bryan had posted several weeks before his departure. The first read “3 weeks today, we’re totally in LA pissing people off on Hollywood Blvd and diggin’ Marilyn Monroe up!”—according to Van Bryan a quote from his favorite TV show “Family Guy.” The second tweet read “@MelissaxWalton free this week, for quick gossip/prep before I go and destroy America?” Despite explaining that “destroying” was British slang for “party,” both were denied entry and put on the next plane back to the UK. Both were also told that they had been removed from the customary Visa Waiver program that is in place for most European passport holders and instead had to apply for visas from the U.S. Embassy in London before ever flying to the U.S. again [Hartley-Parkinson, 2012].

In both cases, posts on social media that were not necessarily secret, yet implicitly assumed to be for friends only, ended up being picked up by law enforcement, who did not appreciate the “playful” nature intended by the poster. Did Joe Lipari or Leigh Van Bryan do “something wrong” and hence had “something to hide”? If not, why should they have anything to fear?

“Knowledge is power” goes the old adage, and as these two stories illustrate, one aspect of privacy certainly concerns controlling the spread of information. Those who lose privacy will also lose control over some parts of their lives. In some cases, this is intended. For example, democracies usually require those in power to give up some of their privacy for the purpose of being held accountable, i.e., to control this power. Citizens routinely give up some of their privacy in exchange for law enforcement to keep crime at bay. In a relationship, we usually show our trust in one another by opening up and sharing intimate details, hence giving the other person power over us (as repeatedly witnessed when things turn sour and former friends or lovers start disclosing these details in order to embarrass and humiliate the other).

In an ideal world, we are in control of deciding who knows what about us. Obviously, this control will have limits: your parents ask you to call in regularly to say where you are; your boss might require you to “punch in/out” when you arrive at work and leave, respectively; the tax office may request a full disclosure on your bank accounts in order to compute your taxes; and police can search your house should they have a warrant24 from a judge.

In the following two sections we look at both sides of the coin: Why do we want privacy, and why might one not want it (in certain circumstances)? Some of the motivations for privacy will be distilled from the privacy laws we have seen in the previous section: what do these laws and regulations attempt to provide citizens with? What are the aims of these laws? By spelling out possible reasons for legal protection, we can try to better frame both the values and the limits of privacy. However, many critics argue that too much privacy will make the world a more dangerous place. Privacy should (and does) have limits, and we will thus also look at the arguments of those that think we should have less rather than more privacy.

2.2.1  PRIVACY BENEFITS

While the fact that so many countries around the world have privacy legislation in place (over 120 countries in 2017 [Greenleaf, 2017]) clearly marks privacy as an important “thing” to protect, it is far from clear to what extent society should support individuals with respect to keeping their privacy. Statements by Scott McNealy, president and CEO of Sun Microsystems,25 pointing out that “you have no privacy anyway, get over it” [Sprenger, 1999], as well as Peter Cochrane’s editorial in Sovereign Magazine (when he was head of BT26 Research) claiming that “all this secrecy is making life harder, more expensive, dangerous and less serendipitous” [Cochrane, 2000], are representative of a large part of society that questions the point of “too much” secrecy (see our discussion in Section 2.2.2 below).

In his book Code and Other Laws of Cyberspace [Lessig, 1999], Harvard law professor Lawrence Lessig tries to discern possible motivations for having privacy27 in today’s laws and social norms. He lists four major driving factors for privacy.

•  Privacy as empowerment: Seeing privacy mainly as informational privacy, its aim is to give people the power to control the dissemination and spread of information about themselves. A legal discussion surrounding this motivation revolves around the question whether personal information should be seen as private property [Samuelson, 2000], which would entail the right to sell all or parts of it as the owner sees fit, or as a “moral right,” which would entitle the owner to assert a certain level of control over their data even after they sold it.

•  Privacy as utility: From the data subject’s point of view, privacy can be seen as a utility providing more or less effective protection from nuisances such as unsolicited calls or emails, as well as more serious harms, such as financial harm or even physical harm. This view probably best follows Warren and Brandeis’ “The right to be let alone” definition of privacy, where the focus is on reducing the amount of disturbance for the individual, but can also be found, e.g., in U.S. tort law (see Section 2.1.1) or anti-discrimination laws.

•  Privacy as dignity: Dignity can be described as “the presence of poise and self-respect in one’s deportment to a degree that inspires respect” [Pickett, 2002]. This not only entails being free from unsubstantiated suspicions (for example when being the target of a wire tap, where the intrusion is usually not directly perceived as a disturbance), but rather focuses on the balance in information available between two people: analogous to having a conversation with a fully dressed person while being naked oneself, any relationship where there is a considerable information imbalance will make it much more difficult for those with less information about the other to keep their poise.

•  Privacy as constraint of power: Privacy laws and moral norms to that extent can also be seen as a tool for keeping checks and balances on a ruling elite’s powers. By limiting information gathering of a certain type, laws or moral norms pertaining to that type of information cannot be effectively enforced. As Stuntz [1995] puts it: “Just as a law banning the use of contraceptives would tend to encourage bedroom searches, so also would a ban on bedroom searches tend to discourage laws prohibiting contraceptives” (as cited in Lessig [1999]).

Depending upon the respective driving factor, an individual might be more or less willing to give up part of their privacy in exchange for a more secure life, a better job, or a cheaper product. The ability of privacy laws and regulations to influence this interplay between government and citizen, between employer and employee, and between manufacturer or service provider and customer, creates a social tension that requires a careful analysis of the underlying motivations in order to balance the protection of the individual and the public good. An example of how a particular motivation can drive public policy is anti-spam legislation enacted both in Europe [European Parliament and Council, 2002] and in the U.S. [Ulbrich, 2003], which provides privacy-as-a-utility by restricting the unsolicited sending of e-mail. In a similar manner, in March 2004 the Bundesverfassungsgericht (the German Federal Constitutional Court) ruled that a 1998 amendment to Germany’s Basic Law enlarging law enforcement’s access to wire-tapping (“Der Grosse Lauschangriff”) was unconstitutional, since it violated human dignity [Der Spiegel, 2004].

This realization that privacy is more than simply providing secrecy for criminals is fundamental to understanding its importance in society. Clarke [2006] lists five broad driving principles for privacy.

•  Philosophical: A humanistic tradition that values fundamental human rights also recognizes the need to protect an individual’s dignity and autonomy. Protecting a person’s privacy is inherent in a view that values an individual for their own sake.

•  Psychological: Westin [1967] points out the emotional release function of privacy—moments “off stage” where individuals can be themselves, finding relief from the various roles they play on any given day: “stern father, loving husband, car-pool comedian, skilled lathe operator, union steward, water-cooler flirt, and American Legion committee chairman.”

•  Sociological: Societies do not flourish when they are tightly controlled, as countries such as East Germany have shown. People need room for “minor non-compliance with social norms” and to “give vent to their anger at ‘the system,’ ‘city hall,’ ‘the boss’:”

The firm expectation of having privacy for permissible deviations is a distinguishing characteristic of life in a free society [Westin, 1967].

•  Economical: Clarke notes that “all innovators are, by definition, ‘deviant’ from the norms of the time,” hence having private space to experiment is essential for a competitive economy. Similarly, an individual’s fear of surveillance—from both private companies and the state—will dampen their enthusiasm for participating in the online economy.

•  Political: The sociological need for privacy directly translates into political effects if people are not free to think and discuss outside current norms. Having people actively participate in political debate is a cornerstone of a democratic society—a lack of privacy would quickly produce a “chilling effect” that directly undermines this democratic process.

As Clarke [2006] points out, many of today’s data protection laws, in particular those drafted around the Fair Information Principles, are far from addressing all of those benefits, and instead rather focus on ensuring that the collected data is correct—not so much to protect the individual but more so to ensure maximum economic benefits. The idea that privacy is more of an individual right, a right that people should be able to exercise without unnecessary burden, rather than simply an economic necessity (e.g., to make sure collected data is correct), is a relatively recent development. Representative of this paradigm shift was the so-called “census-verdict” of the German federal constitutional court (Bundesverfassungsgericht) in 1983, which extended the existing right to privacy of the individual (Persönlichkeitsrecht) with the right of self-determination over personal data (informationelle Selbstbestimmung) [Mayer-Schönberger, 1998].28 The judgment reads as follows.29

If one cannot with sufficient surety be aware of the personal information about oneself that is known in certain parts of one’s social environment, …one can be seriously inhibited in one’s freedom of self-determined planning and deciding. A society in which the individual citizen would not be able to find out who knows what when about them, would not be reconcilable with the right of self-determination over personal data. Those who are unsure if differing attitudes and actions are ubiquitously noted and permanently stored, processed, or distributed, will try not to stand out with their behavior. …This would not only limit the chances for individual development, but also affect public welfare, since self-determination is an essential requirement for a democratic society that is built on the participatory powers of its citizens [Reissenberger, 2004].

The then-president of the federal constitutional court, Ernst Benda, summarized his private thoughts regarding the court’s decision as follows.30

The problem is the possibility of technology taking on a life of its own, so that the actuality and inevitability of technology creates a dictatorship. Not a dictatorship of people over people with the help of technology, but a dictatorship of technology over people [Reissenberger, 2004].

The concept of self-determination over personal data31 constitutes an important part of European privacy legislation with respect to ensuring the autonomy of the individual. First, it extends the Fair Information Principles with a participatory approach, which would allow the individual to decide beyond a “take it or leave it” choice over the collection and use of his or her personal information. Second, it frames privacy protection no longer only as an individual right, but emphasizes its positive societal and political role. Privacy not as an individual fancy, but as an obligation of a democratic society, as Julie Cohen notes:

Prevailing market-based approaches to data privacy policy …treat preferences for informational privacy as a matter of individual taste, entitled to no more (and often much less) weight than preferences for black shoes over brown, or red wine over white. But the values of informational privacy are far more fundamental. A degree of freedom from scrutiny and categorization by others promotes important noninstrumental values, and serves vital individual and collective ends [Cohen, 2000].

The GDPR additionally includes a number of protection mechanisms that are designed to strengthen the usually weak bargaining position of the individual. For example, Article 9 of the GDPR specifically restricts the processing of sensitive information, such as ethnicity, religious beliefs, political or philosophical views, union membership, sexual orientation, and health, unless for medical reasons or with the explicit consent of the data subject.

2.2.2  LIMITS OF PRIVACY

There are certainly limits to an individual’s privacy. While the GDPR obliges EU member states to offer strong privacy protection, many of them at the same time keep highly detailed records on their citizens, both in the interest of fighting crime but also in order to provide social welfare and other civil services.32

A good example of this tension between the public good and the protection of the individual can be found in the concept of communitarianism. Communitarians like Amitai Etzioni, professor of sociology at George Washington University in Washington, D.C., and founder of the Communitarian Network, constantly question the usefulness of restricting society’s power over the individual through privacy laws, or, more generally, seek to “articulate a middle way between the politics of radical individualism and excessive stateism” [Etzioni, 1999].

In his 1999 work The Limits of Privacy [Etzioni, 1999], Etzioni gives the example of seven-year-old Megan Kanka, who in 1994 was raped and strangled by her neighbor Jesse Timmendequas. No one in the neighborhood knew at the time that Timmendequas had been tried and convicted of two prior sex offenses, and had served six years in prison for them just prior to moving in next to the Kankas. Megan Kanka’s case triggered a wave of protests in many U.S. states, leading to virtually all states implementing some sort of registration law for convicted sex offenders, collectively known as “Megan’s Law.” Depending on the individual state, such registration procedures range from registering with the local police station upon moving to a new place, to leaving blood and saliva samples or even having to post signs in one’s front yard reading “Here lives a convicted sex offender”33 [Solove and Rotenberg, 2003].

While many criticize Megan’s Law for punishing a person twice for the same crime (after all, the prison sentence has been served by then—the perpetual registration requirement amounts to a lifelong sentence and thus contradicts the aim of re-socialization), others would like even more rigorous surveillance (e.g., with the help of location-tracking ankle bracelets) or even lifelong imprisonment in order to prevent any repeated offenses.34 A similar lifelong-custody mechanism passed a public referendum in Switzerland in 2004: before a sex offender is released from prison, psychologists have to assess their likelihood of relapse; those with a negative prognosis are then taken directly into lifelong custody.

But it is not only violent crimes and homeland security that make people wonder whether the effort spent on protecting personal privacy is worth it. Especially mundane everyday data, such as shopping lists or one’s current location—things that usually manifest themselves in public (in contrast to, say, one’s diary, or one’s bank account balance and transactions)—seem to have no reason for protection whatsoever. In many cases, collecting such data means added convenience, increased savings, or better service for the individual: using detailed consumer shopping profiles, stores will be able to offer special discounts, send only advertisements for items that really interest a particular customer, and provide additional information that is actually relevant to an individual. And, as Lessig remarks, any such data collection is not really about any individual at all: “[N]o one spends money collecting these data to actually learn anything about you. They want to learn about people like you” [Lessig, 1999].

What could be some of the often cited dangers of a transparent society then? What would be the harm if stores had comprehensive profiles on each of their customers in order to provide them with better services?

One potential drawback of more effective advertisement is the potential for manipulation: if, for example, one is identified as a mother of teenagers who regularly buys a certain breakfast cereal, a targeted advertisement to buy a competitor’s brand at half the price (or with twice as many loyalty points) might win the kids’ favor, thus prompting the mother to switch to the potentially more expensive product (with a higher profit margin). A similar example of “effective advertising” can be found in the Cambridge Analytica scandal of 2018 [Lee, 2018, Meyer, 2018], which saw a political data firm harvest the private profiles of over 50 million Facebook users (mostly without their knowledge) in order to create “psychographic” profiles that were then sold to several political campaigns (the 2016 Trump campaign, the Brexit “Leave” campaign) in order to target online ads. Presumably, such information allowed those campaigns to identify voters most open to their respective messages. Profiles allow a process that sociologist David Lyon calls social sorting [Lyon, 2002]:

The increasingly automated discriminatory mechanisms for risk profiling and social categorizing represent a key means of reproducing and reinforcing social, economic, and cultural divisions in informational societies [Lyon, 2001].

This has implications on both the individual and the societal level. For democratic societies, a thoroughly profiled population exposed to highly targeted political ads may become increasingly divided [The Economist, 2016]. On an individual level, the benefits of profiling would depend on the existing economic and social status. For example, since a small percentage of customers (whether it be in supermarkets or when selling airline tickets) typically makes a large percentage of profits,35 using consumer loyalty cards or frequent flyer miles would allow vendors to more accurately determine whether a certain customer is worth fighting for, e.g., when having to decide if a consumer complaint should receive fair treatment.

This might not only lead to withholding information from customers based on their profiles, but also to holding this information against them, as the example of Ron Rivera in Chapter 1 showed. In a similar incident, a husband’s preference for expensive wine, well documented in his supermarket profile, allowed his wife to claim higher alimony after she had subpoenaed the profile in court. Even if such examples pale in comparison to the huge number of transactions recorded every day worldwide, they nevertheless indicate that this massive collection of mundane everyday facts will further increase through the use of mobile and pervasive computing, ultimately adding a significant burden to our lives, as Lessig explains:

The burden is on you, the monitored, first to establish your innocence, and second, to assure all who might see these ambiguous facts, that you are innocent [Lessig, 1999].

This silent reversal of the classical presumption of innocence can lead to significant disadvantages for the data subject, as the examples of comedian Joe Lipari (page 23), UK-couple Leigh Van Bryan and Emily Bunting (page 23), and firefighter Philip Scott Lyons (page 1) have shown. Another example of the sudden significance of these profiles is the fact that shortly after the September 11 attacks, FBI agents began collecting the shopping profiles and credit card records of each of the suspected terrorists in order to assemble a terrorist profile [Baard, 2002].36 First reports of citizens who were falsely accused, e.g., because they shared a common name with a known terrorist [Wired News] or had a similar fingerprint [Leyden, 2004], illustrate how difficult it can be for an individual to contest findings from computerized investigative tools.

Complete transparency, however, may also help curb governmental power substantially, according to David Brin, author of the book “The Transparent Society” [Brin, 1998]. In his book, Brin argues that losing our privacy can ultimately also have advantages: while up to now only the rich and powerful had been able to spy on common citizens at will, new technologies would enable even ordinary individuals to “spy back,” to “watch the watchers” in a society without secrets, where everybody’s actions could be inspected by anybody else and thus could be held accountable, and where the “surveillance” from above could be counteracted by “sousveillance” from below [Mann et al., 2003].

Critics of Brin point out that “accountability” is a construct defined by public norms and thus will ultimately lead to a homogenization of society, where the moral values of the majority will threaten the plurality of values that forms an integral part of any democracy, simply by holding anybody outside of the norm “accountable” [Lessig, 1999].

The ideal level of privacy can thus take very different shapes, depending on what is technically feasible and socially desirable. The positions raised above can be summarized as follows.

1.  Communitarian: Personal privacy needs to be curbed for the greater good of society (trusting the government). Democratic societies may choose to appoint trusted entities to oversee certain private matters in order to improve life for the majority.

2.  Convenience: The advantages of a free flow of information outweigh the personal risks in most cases. Only highly sensitive information, such as sexual orientation or religion, might be worth protecting. Semi-public information like shopping habits, preferences, contact information, and even health information might better be publicly known so that one can enjoy the best service and protection possible.

3.  Egalitarian: If everybody has access to the same information, it ceases to be a weapon in the hands of a well-informed few. Only when the watchers are being watched is the information they hold about an individual balanced by the information the individual holds about them. Eventually, new forms of social interaction will evolve that are built upon these symmetrical information assets.

4.  Feasibility: What can technology achieve (or better: prevent)? All laws and legislation require enforceability. If privacy violations are not traceable, the much stressed point of accountability (as developed in the fair information practices) becomes moot.

2.3  CONCEPTUALIZING PRIVACY

The prior sections should already have helped to illustrate some of the many different ways of understanding what privacy is (or should be). In this section, we will present some of the many attempts to “capture” the nature of privacy in definitions and conceptual models.

2.3.1  PRIVACY TYPES

The historic overview in Section 2.1.1 provides a sense of how the understanding of privacy has changed over the years—continuously adding “things to protect,” often in response to novel technological developments. Clearly, back in the late 19th century, with no computerized data processing around, privacy of “data” was not much of an issue. Instead, as evident in Warren and Brandeis’ work or William Pitt’s Excise Bill speech (“The poorest man may in his cottage bid defiance to all the forces of the Crown…”) [Brougham, 1839], the protection of the home—territorial privacy—was the most prevalent aspect of privacy protection. While this conception dates back to the 18th century, defining privacy as “protected spaces” is still relevant today. For example, workplace privacy—which is rooted in the concept of territorial privacy—sees renewed interest, given how fewer and fewer people have an actual desk in an office that would mark a defined “territory,” but instead use hot desking37 or work in coffee shops or co-working spaces. What exactly is the “territory” that one should protect here?

With the advent of the telegraph and the telephone, communication privacy became another facet of privacy concerns. While letters had long since enjoyed some sort of privacy guarantees—either by running a trusted network of private messengers (e.g., religious orders), or by “corporate” guarantees from early postal companies such as the Thurn-and-Taxis Post [Schouberechts, 2016]—these new forms of remote communication quickly required legal provisions for safeguarding their contents. Email and, more recently, instant messaging and Voice-over-IP have again made communication privacy a timely issue.

The idea of bodily privacy perhaps dates back furthest, visible in the earliest “privacy laws” (which were not called privacy laws at the time) against peeping toms [Laurant, 2003]. Today, bodily privacy remains relevant as, e.g., international travelers may be subject to strip searches at airports in countries with strict drug laws, or workers may be forced to accept mandatory drug tests by their employer.

Today’s most prevalent “privacy type” comes from the 1960s, when automated data processing first took place on a national scale. Alan Westin, then professor of public law and government at Columbia University, defined privacy in his groundbreaking book Privacy and Freedom as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others” [Westin, 1967]. The “thing” to protect here is a person’s “data”—information about them that, once collected, may be shared with others without their knowledge or consent. Not surprisingly, Westin’s type of privacy is called information privacy or sometimes also data privacy.

Figure 2.2 illustrates how each of these privacy types relates to a different aspect (in the center of the figure): bodily privacy to the person, territorial privacy to physical space, communication privacy to our social interactions, and information privacy to stored “files” about us. Figure 2.2 also shows how, as technology moves forward, these privacy types are being challenged anew. For example, Westin’s “information privacy” is being challenged by today’s online profiles: with each website we visit being connected to countless ad networks, content delivery networks, performance trackers, affiliate sites, and other third parties, a single page view can easily result in dozens if not hundreds of traces being left in databases around the world—practically impossible for the individual to keep track of, let alone control. Similarly, communication privacy has long ceased to be an issue that pertains only to postal services: email, chat, mobile messaging, and online social networking potentially allow both governments and companies to observe our communication patterns in a more fine-grained fashion than ever before. And by using a modern smartphone for our communication with others, we also provide countless companies, if not governments, with detailed information on our whereabouts and activities. Now, DNA analysis is becoming affordable and commonplace,38 which means that biological profiling will challenge our bodily privacy like never before.


Figure 2.2: Core Privacy Types and Today’s Challenges. Across history, privacy concerns shifted as technology made things possible that were not possible before. The four broad types of privacy—bodily, territorial, communication, and information—are being challenged by new technology: online profiling, social networking, activity tracking, and biological profiling.

2.3.2  PRIVACY CONSTITUENTS

The early characterization of privacy as “the right to be let alone” by Warren and Brandeis [1890] clearly captured only a narrow part39 of the complex privacy problem. Westin [1967] expanded this concept of “privacy as solitude” to capture a wider range of settings that can all be motivated by the need for privacy, stressing the fact that “the individual’s desire for privacy is never absolute, since participation in society is an equally powerful desire” [Westin, 1967]. Westin enumerated four distinct “privacy states” (see Figure 2.3) that all describe different forms of privacy: beyond the idea of “privacy as solitude,” Westin describes “intimacy”—the state of sharing intimate information with another person; “reserve”—the act of “standing apart,” e.g., at a party, in order to (even if only temporarily) disengage from others (“the creation of a psychological barrier against unwanted intrusion”); and “anonymity”—blending in with a crowd so as to be non-distinguishable from others (“the individual is in public places or performing public acts but still seeks, and finds, freedom from identification and surveillance”). Each of these states offers some sort of privacy, even though they involve very different physical and social settings.


Figure 2.3: Westin’s Privacy States. Westin [1967] defines four privacy states, or “experiences”: Solitude, Intimacy, Reserve, and Anonymity.

Solitude is the “positive” version of loneliness, the act of “being alone without being lonely.” Solitude plays an important role in psychological well-being, offering benefits such as freedom, creativity, and spirituality [Long and Averill, 2003].

Intimacy is probably a concept as complex as privacy, as it may refer to “feelings, to verbal and nonverbal communication processes, to behaviors, to people’s arrangements in space, to personality traits, to sexual activities, and to kinds of long-term relationships” [Reis et al., 1988]. Yet, it is clear that intimacy is an essential component of forming the types of close relationships [Levinger and Raush, 1977] that are essential to our psychological well-being [Baumeister and Leary, 1995]. Gerstein [1978] argues that “intimacy simply could not exist unless people had the opportunity for privacy.” Paradoxically, Nisenbaum [1984] finds that even solitude often helps create intimacy, by prompting “feelings of connection with another person” [Long and Averill, 2003].

Anonymity provides us with what Rössler [2001] calls decisional privacy: “securing the interpretational powers over one’s life.” The freedom to decide for oneself “who do I want to live with; which job to take; but also: what clothes do I want to wear.” Anonymity thus helps to ensure the autonomy of the individual, protecting one’s independence in making choices central to personhood.

The interplay between solitude, intimacy, and anonymity (and hence autonomy) ultimately shapes our identity. Westin describes this as follows.

Each person is aware of the gap between what he wants to be and what he actually is, between what the world sees of him and what he knows to be his much more complex reality. In addition, there are aspects of himself that the individual does not fully understand but is slowly exploring and shaping as he develops [Westin, 1967].

According to Arendt [1958], privacy is essential for developing an individual identity because it facilitates psychological and social depth, and protects aspects of a person’s identity that cannot withstand constant public scrutiny—be it publicly chastised preferences and practices, or silly tendencies and other behavior people self-censor when being observed.

Autonomy allows us to be what we want to be; intimacy and solitude help us to explore and shape our “complex reality,” both on our own and through the intimate exchange with others. This not only connects with the previously mentioned emotional release function of privacy, but also with what Westin [1967] calls the “safety-valve” function of privacy, e.g., the “minor non-compliance with social norms” and the ability to “give vent to their anger at ‘the system,’ ‘city hall,’ ‘the boss’”:

The firm expectation of having privacy for permissible deviations is a distinguishing characteristic of life in a free society [Westin, 1967].

In that sense, privacy protects against extrinsic and intrinsic losses of freedom [Nissenbaum, 2009, p. 75]. Nissenbaum argues that “privacy is important because it protects the diversity of personal choices and actions, not because it protects the freedom to harm others and commit crimes” [Nissenbaum, 2009, p. 77].

Other privacy constituents can be drawn from its implementation, i.e., how current social, legal, and technical solutions attempt to provide privacy to citizens, customers, employees, or users. We can roughly group these approaches into three categories: secrecy, transparency, and control.

•  Secrecy is often equated with privacy, in particular in security scholarship, where privacy is simply another word for confidentiality. At the outset, it seems as if privacy without secrecy does not make sense: if others know our information, we have lost our privacy, surely? Such thinking implies a binary nature of privacy: we either have it or do not have it, based on the knowledge of others. Similar to Warren and Brandeis’ conception of privacy as “the right to be let alone,” such a binary view is neither realistic nor practical. When I confide something to a trusted friend in private, I surely do not expect my disclosure to invalidate my privacy. I instead expect my friend to hold this information in confidence, making it a shared secret between the two of us, rather than public information. I expect my doctor, who knows a lot about my health, to keep this information private—in fact, many professionals are often bound by law to keep private information of others secret (e.g., lawyers, clerics). Note that in these cases, secrecy does not entirely vanish; it just includes more people who share the same confidential information. An interesting corner case of secrecy is anonymity, often also called unlinkability. While confidentiality makes it hard for others to find out a certain secret information, unlinkability removes its connection to a person. As is the case with confidentiality, unlinkability is not a binary value but comes in many different shades. Data minimization is the bridge between the two: it aims to ensure that only those data elements are collected that are essential for a particular purpose—all non-essential information is simply not collected.

•  Transparency can be seen as the flip side of secrecy. If secrecy prevents others from knowing something about me, transparency allows me to know what others know about me. In its simplest form, transparency requires notice: informing someone about what data is being collected about them, or why this information is being collected, or what is already known about them. Transparency is often the most basic privacy requirement, as it thwarts secret record keeping. While individuals may have no say about their data being collected, they may at least understand that their information is being recorded and act accordingly (and thus retain their autonomy). Obviously, transparency by itself does not mean that one’s privacy is being protected—the fact that one spots a nosy paparazzi taking photos from afar does not help much once one’s secret wedding pictures appear in a tabloid paper.

•  Control is often the key ingredient that links secrecy and transparency: if people can freely control “when, how, and to what extent information about them is communicated to others” [Westin, 1967], they can decide whom they want to take into their confidence and when they would like to keep information private or even remain anonymous. A practical (and minimal) form of control is “consent,” i.e., a person’s affirmative acknowledgment of a particular data collection or data processing practice. Consent may be implicit (e.g., by posting a sign “video surveillance in progress at this premise” by the door through which visitors enter) or explicit (e.g., by not starting an online data collection practice unless a person has checked a box in the interface). Implicit and explicit consent are thus seen as giving individuals a choice: if they do not want their data collected, they have the option of not selecting a particular option, or simply not proceeding beyond a particular point (either physical or in a user interface). Together, notice and choice offer what Solove [2013] calls “privacy self-management” (a minimal sketch of how these constituents interlock follows this list). Such control tools form the basis for many privacy laws worldwide, as we discussed in Section 2.1. While control seems like the ideal “fix” for enabling privacy, the feasibility of individuals effectively exercising such control is questionable. Solove [2013] notes that individuals are often ill-placed to make privacy choices: “privacy decisions are particularly susceptible to problems such as bounded rationality, the availability heuristic, and framing effects because privacy is so complex, contextual, and difficult to conceptualize.” In most cases, people simply “lack enough background knowledge to make an informed choice,” or “there are simply too many entities that collect, use, and disclose people’s data for the rational person to handle.”
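
To make the interplay of secrecy, transparency, and control more concrete, the following short Python sketch combines notice (a human-readable purpose statement), choice (explicit consent), and data minimization in a single collection step. It is purely illustrative; the names (PurposeSpec, collect) and the all-or-nothing consent model are our own simplifications and are not drawn from any standard, law, or existing privacy library.

    # Illustrative sketch only: a toy "privacy self-management" gate that combines
    # notice (a human-readable purpose statement), choice (explicit consent), and
    # data minimization (only fields essential to the stated purpose are kept).
    # All names and structures are hypothetical, not taken from any real library.

    from dataclasses import dataclass
    from typing import Dict, Set

    @dataclass
    class PurposeSpec:
        purpose: str                 # shown to the data subject (notice)
        essential_fields: Set[str]   # the only fields that may be collected

    def collect(record: Dict[str, str], spec: PurposeSpec, consent_given: bool) -> Dict[str, str]:
        """Return a minimized record, or nothing at all if consent was withheld."""
        if not consent_given:
            return {}  # choice: no affirmative consent, no collection
        # data minimization: drop everything not essential for the stated purpose
        return {k: v for k, v in record.items() if k in spec.essential_fields}

    spec = PurposeSpec(
        purpose="Ship your order and send a delivery notification.",
        essential_fields={"name", "street", "city", "email"},
    )
    raw = {"name": "A. Subject", "street": "Main St. 1", "city": "Springfield",
           "email": "a@example.org", "birthdate": "1980-01-01", "religion": "n/a"}
    print(collect(raw, spec, consent_given=True))   # birthdate and religion are never stored
    print(collect(raw, spec, consent_given=False))  # {}: no consent collected, so nothing is stored

Even in this toy form, the sketch makes the limits discussed above visible: the data subject can only accept or reject the stated purpose, which is exactly the “take it or leave it” choice that notice-and-choice regimes are criticized for.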

With these constituents in mind, Gavison [1984] goes on to define privacy as being comprised of “solitude, anonymity, and control,” while Simmel [1968] puts it similarly, yet expands somewhat on Gavison.

Privacy is a concept related to solitude, secrecy, and autonomy, but it is not synonymous with these terms; for beyond the purely descriptive aspects of privacy as isolation from the company, the curiosity, and the influence of others, privacy implies a normative element: the right to exclusive control to access to private realms [Simmel, 1968].

Contrary to Westin and Rössler, Gavison and Simmel describe privacy not as an independent notion, but rather as an amalgam of a number of well-established concepts, something that constitutes itself only through a combination of a range of factors. While Westin also relates privacy to concepts such as solitude, group seclusion, anonymity, and reserve [Cate, 1997], he calls them privacy states, indicating that these are merely different sides of the same coin.

2.3.3  PRIVACY EXPECTATIONS

The boundaries implicit in the privacy states described by Westin [1967] are a key part of modeling privacy expectations. As is also the case for security, privacy is not a goal in itself, not a service that people want to subscribe to, but rather an expectation of being in a state of protection without having to actively pursue it. All else being equal, users would undoubtedly prefer systems without passwords or similar access control mechanisms, as long as they would not suffer any disadvantages from this. Only when their files are maliciously deleted or illegally copied will users regret not having any security precautions in place. So what would be the analogy to a “break-in” from a privacy point of view?

Marx [2001] identified personal border crossings as a core concept for understanding privacy: “Central to our acceptance or sense of outrage with respect to surveillance …are the implications for crossing personal borders.” Marx differentiates between four such border crossings that are perceived as privacy violations.

•  Natural borders: Physical limitations of observations, such as walls and doors, clothing, darkness, but also sealed letters, telephone calls. Even facial expressions can form a natural border against the true feelings of a person.

•  Social borders: Expectations about confidentiality for members of certain social roles, such as family members, doctors, or lawyers. This also includes expectations that your colleagues will not read personal correspondence addressed to you, or material that you left lying around the photocopy machine.

•  Spatial or temporal borders: The usual expectations of people that parts of their life, both in time and social space, can remain separated from each other. This would include a wild adolescent time that should not interfere with today’s life as a father of four, or different social groups, such as your work colleagues and friends in your favorite bar.

•  Borders due to ephemeral or transitory effects: This describes what is best known as a “fleeting moment,” an unreflected utterance or action that we hope gets forgotten soon, or old pictures and letters that we put out in our trash. Seeing audio or video recordings of such events later, or observing someone sifting through our trash, will violate our expectations of being able to have information simply pass away unnoticed or forgotten.

Whenever personal information crosses any of these borders without our knowledge, our potential for possible actions—our decisional privacy—is affected. When someone at the office suddenly mentions family problems that one has at home, or if circumstances of our youth are suddenly being brought up again even though we assumed that they were long forgotten, we perceive a violation of our local, informational, or communication privacy. This violation is by no means an absolute measure, but instead depends greatly on the individual circumstances, such as the kind of information transgressed, or the specific situation under which the information has been disclosed. The effects such border crossings have on our lives, as well as the chances that they actually happen, are therefore a highly individual assessment.

Similarly, Nissenbaum [1998] investigated expectations of privacy in public situations, e.g., private conversations in a public restaurant, ultimately proposing a framework of contextual integrity [Nissenbaum, 2004, 2009] that defines privacy along contextualized privacy expectations, rather than a private/public dichotomy. To Nissenbaum, privacy expectations relate to context-dependent norms of information flows that are characterized by four key aspects: contexts, actors, attributes, and transmission principles [Nissenbaum, 2009].

•  Contexts: Contexts describe the general institutional and social circumstances (e.g., healthcare, education, family, religion, etc.) in which information technology is used or information exchange takes place. Contexts also include the activities in which actors (in different roles) engage, as well as the purposes and goals of those activities (Nissenbaum calls this values). Contexts and associated informational norms can be strictly specified or only sparsely and incompletely defined—for example, the procedure of voting vs. an informal business meeting: expectations of confidentiality are clearly defined in the first case, but much less clear in the second. People often engage in multiple contexts at the same time, which can be associated with different, potentially conflicting informational norms, for instance when talking about private matters at work.

•  Actors: Actors are senders, receivers, and information subjects who participate in activities. Actors fill specific roles and capacities depending on the contexts. Roles define relationships between various actors, which express themselves through the level of intimacy, expectations of confidentiality, and power dynamics between actors. Informational norms regulate information flow between actors.

•  Attributes: Attributes describe the type and nature of the information being collected, transmitted, and processed. Informational norms render certain attributes appropriate or inappropriate in certain contexts. The concept of appropriateness serves to describe what are acceptable actions and information practices.

•  Transmission principles: Transmission principles constrain the flow of information between entities. They are associated with specific expectations. Typical transmission principles are confidentiality, reciprocity or fair exchange of information, and whether an actor deserves or is entitled to receive information.

Norms of information flows can be either implicitly understood or explicitly codified. Common types of norms are moral norms, conventions of etiquette, rules, and procedures. In order to identify the potential of a novel technology to increase the danger of privacy violations, Nissenbaum proposes a multi-step process [Nissenbaum, 2009, p. 148ff, 182ff], similar to a privacy impact assessment (see, e.g., Wright and De Hert [2012]). Her process starts by establishing the social context in which a technology is used, the key actors, the affected attributes, i.e., the involved information, and the principles of transmission. Contextual integrity and privacy expectations are violated if the introduction of an information technology or practice changes any of these aspects. Furthermore, one should analyze how the technology affects moral and political factors, e.g., power structures, fairness, or social hierarchies. Identified threats should then be further analyzed for their impact on goals and values in the specific context. Based on these assessments, contextual integrity recommends either for or against the information practice.
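
The first step of such an assessment, describing an information flow and checking it against entrenched norms, can be sketched in a few lines of code. The following Python fragment is our own simplification of Nissenbaum’s framework, not her formalism: the Flow structure, the example healthcare norms, and the function name are invented for illustration, and a real analysis would of course continue with the evaluation of values, purposes, and moral factors described above.

    # Minimal sketch (not Nissenbaum's own formalism): an information flow is
    # checked against a hypothetical set of entrenched, context-relative norms.
    # A flow that matches no norm is flagged as a prima facie violation of
    # contextual integrity, which would then trigger the fuller assessment of
    # values, purposes, and moral factors described in the text.

    from dataclasses import dataclass
    from typing import List

    @dataclass(frozen=True)
    class Flow:
        context: str     # e.g., "healthcare"
        sender: str      # role of the sending actor
        receiver: str    # role of the receiving actor
        attribute: str   # type of information transmitted
        principle: str   # transmission principle, e.g., "confidentiality"

    # Invented example norms for the healthcare context.
    NORMS: List[Flow] = [
        Flow("healthcare", "patient", "physician", "symptoms", "confidentiality"),
        Flow("healthcare", "physician", "specialist", "diagnosis", "confidentiality"),
    ]

    def respects_contextual_integrity(flow: Flow, norms: List[Flow] = NORMS) -> bool:
        return flow in norms

    ok = Flow("healthcare", "patient", "physician", "symptoms", "confidentiality")
    suspect = Flow("healthcare", "physician", "advertiser", "diagnosis", "sale")
    print(respects_contextual_integrity(ok))       # True: matches an entrenched norm
    print(respects_contextual_integrity(suspect))  # False: prima facie violation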

Altman [1975] proposes a corresponding process-oriented view on privacy in his privacy regulation theory. He defines privacy as “the selective control of access to the self or to one’s group” and argues that individuals adjust their privacy based on internal and external changes. Internal changes can be caused by changes in personal preference, past experiences, or new knowledge. External changes pertain to changes in the environment and the context. According to Altman, privacy is a dynamic, dialectic, and non-monotonic process. An individual regulates privacy and social interaction by adjusting their own outputs as well as the inputs from others. Outputs roughly correspond to information that is being disclosed or observable; inputs to the potential for invasions and disturbances. In social interaction, adjustments rely on verbal, paraverbal, and nonverbal behavioral mechanisms, such as revealing or omitting information (verbal), changing intonation and speaking volume (paraverbal), or using posture and gestures to non-verbally express and control personal space and territory. Schwartz [1968] analyzes how entering and leaving personal spaces, and even the use of doors, is governed by rules of appropriateness and privacy expectations.

A critical part of Altman’s theory is the distinction between desired privacy and achieved privacy. Desired privacy is defined by an individual’s privacy preferences and privacy expectations. Achieved privacy is the actual privacy level obtained or achievable in a given situation with the means for privacy control available in that situation. If achieved privacy is lower than desired privacy, privacy expectations are violated and the individual feels exposed. Achieving more privacy than desired causes social isolation. Thus, the privacy regulation process aims for an optimal privacy level in which desired and achieved privacy are aligned.
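
The comparison at the heart of Altman’s regulation process can be stated very compactly. The tiny Python sketch below is only meant to restate the three outcomes in executable form; the numeric privacy scale and the wording of the outcomes are our own inventions, not Altman’s.

    # Minimal sketch of Altman's desired-vs-achieved comparison; the numeric
    # privacy scale and the outcome labels are invented for illustration.

    def regulation_outcome(desired: float, achieved: float) -> str:
        if achieved < desired:
            return "exposed: expectations violated, tighten boundary controls"
        if achieved > desired:
            return "isolated: more privacy than desired, open up interaction"
        return "optimal: desired and achieved privacy are aligned"

    print(regulation_outcome(desired=0.8, achieved=0.4))  # exposed
    print(regulation_outcome(desired=0.3, achieved=0.3))  # optimal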

2.3.4  A PRIVACY TAXONOMY

Solove [2008] argues that striving for a singular definition covering all privacy characteristics may be misguided due to the diversity of the topic. Furthermore, information alone may not be a sufficient indicator for associated privacy expectations, because the sensitivity of information depends on the purposes for which data subjects want to conceal it or other parties want to use it. Thus, Solove argues for a more pragmatic, contextualized, and pluralistic view on privacy. Privacy problems disrupt particular activities; whenever a privacy problem surfaces in any given situation, a corresponding privacy interest must exist. As a consequence, Solove suggests defining privacy through its disruptions and issues in specific contextual situations rather than through general core characteristics. Privacy protection mechanisms and regulations have to address multiple interconnected problems and balance conflicting interests. In order to map the topology of these interconnected problems, and to support the identification of compromises that protect both privacy and the conflicting interest, Solove proposes a comprehensive privacy taxonomy structured by generalized privacy problems [Solove, 2006, 2008].

Solove divides privacy problems into four major categories, as shown in Figure 2.4, loosely following an information flow from the data subject to data controllers and further on to potential third parties. Each of these categories is further divided into subcategories.


Figure 2.4: The four categories of privacy issues in Solove’s privacy taxonomy, based on Solove [2008].

Information Collection

Solove distinguishes two types of information collection: surveillance and interrogation. Surveillance is the passive observation of the data subject by others. He argues that while not all observations disrupt privacy, continuous monitoring does. Surveillance disrupts privacy because people may feel anxious and uncomfortable and even alter their behavior when they are being watched. Covert surveillance has the additional effect that it creates a power imbalance because the data subject can be observed without being able to see the observer.

Bentham’s Panopticon purposefully leverages this effect in the architectural design of a prison [Bentham, 1787]. Bentham designed the panopticon as a circular prison with cells on the outside that are open toward the middle. A guard tower at the center is fitted with small window slits facing in all directions. Thus, guards could watch any prisoner at any time, while inmates would not know when they are actually being watched.

In a less Orwellian sense, surveillance also includes privacy issues caused by incidental observations, such as private information visible on a screen or someone observing a typed password, also known as shoulder surfing [Schaub et al., 2012]. Hawkey and Inkpen [2006] investigate the dimensions of incidental information privacy.

In contrast, interrogation constitutes active information collection. The data subject is directly exposed to an inquisitive party, which may pressure the data subject to disclose details. Similar to surveillance, less evocative interrogation issues also occur in common situations. For example, when a questionnaire or a registration form asks for more information than required, or when social pressure leads to revealing information one would have kept private otherwise.

Information Processing

Solove’s second category, information processing, contains five potential privacy harms, which all occur after information has been collected, and therefore without direct involvement of the data subject: aggregation, identification, insecurity, secondary use, and exclusion.

Aggregation of information about one person facilitates profiling. While such aggregation can have benefits, it often violates the data subjects’ expectations in terms of what others should be able to find out about them. However, the effects of aggregation are typically less direct, because the data has already been collected previously. The main issue is that multiple innocuous pieces of information gain privacy sensitivity when combined.

Identification is the process of linking some information to a specific individual, sometimes also called re-identification or de-anonymization. Presumably anonymized data may contain sufficient information to link the provided information back to the individual. For instance, Sweeney [2002] showed that zip code, gender, and date of birth provided in U.S. census data are sufficient to uniquely identify 87% of the U.S. population. The risk of de-anonymization has also been demonstrated in the context of presumably anonymized medical records [Benitez and Malin, 2010], genome research data [Malin and Sweeney, 2004], location traces [Gruteser and Hoh, 2005], and home/work location pairs [Golle and Partridge, 2009].
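
The mechanics of such a linking attack can be illustrated with a few lines of Python. The sketch below is in the spirit of Sweeney’s result but uses entirely invented data and names: “anonymized” records are joined with a public register on the quasi-identifiers zip code, gender, and date of birth, re-attaching a name to each sensitive record.

    # Toy illustration of re-identification via quasi-identifiers, with invented
    # data: records stripped of names are linked to a public register by joining
    # on zip code, gender, and birth date.

    QUASI_IDS = ("zip", "gender", "birthdate")

    # "Anonymized" records: names removed, sensitive attribute kept.
    medical = [
        {"zip": "53706", "gender": "F", "birthdate": "1960-07-22", "diagnosis": "asthma"},
        {"zip": "02139", "gender": "M", "birthdate": "1985-03-14", "diagnosis": "diabetes"},
    ]

    # Public register (e.g., a voter list) that does include names.
    register = [
        {"name": "Jane Roe", "zip": "53706", "gender": "F", "birthdate": "1960-07-22"},
        {"name": "John Doe", "zip": "02139", "gender": "M", "birthdate": "1985-03-14"},
    ]

    def reidentify(anonymized, public):
        """Link each 'anonymous' record to register entries sharing its quasi-identifiers."""
        index = {}
        for person in public:
            key = tuple(person[q] for q in QUASI_IDS)
            index.setdefault(key, []).append(person["name"])
        for record in anonymized:
            key = tuple(record[q] for q in QUASI_IDS)
            yield record["diagnosis"], index.get(key, [])

    for diagnosis, names in reidentify(medical, register):
        print(diagnosis, "->", names)  # each diagnosis links back to exactly one name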

The lack of proper security (“insecurity”) of data processing and stored data is also a significant privacy risk, because it facilitates identity theft and distortion when information about an individual is more readily accessible by unauthorized entities than it should be. Therefore, all discussed data protection frameworks place a strong emphasis on the security of collected personal data.

The term “secondary use” describes any use of collected information beyond the purposes for which it was collected. Since, by definition, the data subject did not consent to such secondary use, it violates privacy expectations. The main issue here is that data subjects might have provided different information had they been aware of the secondary use.

Solove calls the lack of appropriate means for data subjects to learn about the existence of collected personal data, to access those data, and to rectify it “exclusion.” Exclusion runs contrary to the data protection principles of participation and transparency.

Information Dissemination

Solove’s third category, information dissemination, summarizes seven privacy issues that concern the further disclosure or spread of personal information.

Breach of confidentiality is the violation of trust in a specific relationship by revealing secret information associated with that relationship. Disclosure is the dissemination of true information about a data subject without consent. It violates the data subject’s information self-determination. Disclosure can adversely affect the data subject’s reputation. Distortion is similar to disclosure with the difference that false or misleading information about a person is being willfully disseminated, often with the intention of harming that person’s reputation.

Exposure is very similar to disclosure, but Solove notes that it pertains to revealing physical or emotional attributes about a person, such as information about the person’s body and health. Thus, exposure violates bodily privacy and affects the person’s dignity rather than reputation.

Increased accessibility does not directly disclose information to any specific party but makes it generally easier to access aggregated information about an individual. Although information might have been previously publicly available, aggregation and increased accessibility increase the risk of actual disclosure.

Blackmail is the threat to expose private information if the blackmailer’s demands are not met. Blackmail is a threat of disclosure enabled by a power imbalance created by information obtained by the blackmailer. Appropriation, on the other hand, is the utilization of another person’s identity for one’s own benefit. It is sometimes also referred to as exploitation.

Invasion

Solove’s fourth category is concerned with privacy invasion, featuring two privacy issues: intrusion and decisional interference. While Solove’s other three categories mainly deal with information privacy, invasion does not involve personal information directly.

Intrusion is the violation of someone’s personal territory, however that territory may be defined. One can intrude in someone’s physical territory or private space, but intrusion can also pertain to disrupting private affairs. Such “realms of exclusion” [Solove, 2008] facilitate interaction with specific people without interference, and also exist in otherwise public environments, e.g., having a private conversation at a restaurant.

Decisional interference is a privacy issue where governmental regulations interfere with the freedom of personal decisions and self-determination (Rössler’s concept of decisional privacy), e.g., in the case of sexuality or religious practices. Solove argues that these are not merely issues of autonomy but are strongly associated with information privacy. The risk of potential disclosure can severely inhibit certain decisions of an individual.

Solove’s taxonomy provides a comprehensive framework to reason about types of privacy violations. His goal was to facilitate the categorization of privacy violations in specific cases to obtain appropriate legal regulations and rulings. He ignores individual privacy preferences in his taxonomy on purpose, because, according to him, it is virtually impossible to protect individual, varying privacy expectations in a legal framework [Solove, 2008, p. 70]. However, he recognizes that personal privacy preferences play an important role in shaping individual expectations of privacy.

2.4  SUMMARY

If there is anything this chapter should have demonstrated, it is that privacy is a complex concept—hiding many different meanings in many different situations in a simple seven-letter word. Without a clear understanding of what people expect from “having privacy,” we cannot hope to create technology that will work accordingly, offering “privacy-aware” or “privacy-friendly” behavior. Law offers a good starting point for this exploration,40 as it contains a society-sanctioned codification of privacy (such as the GDPR [European Parliament and Council, 2016]) that goes beyond the views and opinions of individual scholars. However, understanding the raison d’être behind these laws—Why do they exist? What function do they serve in society?—offers further insights that help us shape the solution space for privacy-aware technology: privacy means empowerment, dignity, utility, a constraint of power [Lessig, 1999]; privacy functions as an emotional release, a human right, a staple of democracy, or as a driver for innovation [Clarke, 2006, Westin, 1967]. Last but not least, we discussed various conceptualizations of privacy from the literature (e.g., privacy constituents such as solitude, intimacy, reserve, anonymity, autonomy, control) that further rounded out the many possible uses for and benefits of privacy, and how the actions of others can affect this (e.g., Solove’s privacy taxonomy).

1The common law is the legal system of many Anglo-American countries. It is based on traditions and customs, dating back to historic England, and heavily relies on precedents. This is in contrast to “civil law” jurisdictions where judgments are predominantly based on codified rules.

2In common law jurisdictions, tort law governs how individuals can seek compensation for the loss or harm they experienced due to the (wrongful) actions of others.

3In his 1996 “Declaration of Independence of Cyberspace,” John Barlow, co-founder of the Electronic Frontier Foundation (EFF), declared “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather” [Barlow, 1996].

4In 2013, the OECD published a revision of its privacy guidelines [OECD, 2013]. The OECD privacy principles remain unchanged in the 2013 version, except for a gender neutral reformulation of the individual participation principle, which we provide here. The revisions primarily updated the OECD’s recommendations regarding the principles’ implementation with a focus on the practical implementation of privacy protection through risk management and the need to address the global dimension of privacy through improved interoperability.

5Privacy law typically considers three principal roles: data subjects, data controllers, and data processors. Data subjects are natural persons whose personal data (i.e., “any information relating to an identified or identifiable natural person”) is collected and processed; a data controller is “the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data;” a data processor is “a natural or legal person, public authority, agency or other body which processes personal data on behalf of the controller” [European Parliament and Council, 2016].

6Griswold vs. Connecticut involved the directors of the Planned Parenthood League of Connecticut, a nonprofit agency which disseminated birth control information, who challenged a Connecticut law criminalizing contraceptives and counseling about contraceptives to married couples. The Court held that the law was unconstitutional, and specifically described two interests for protecting privacy: (1) “the individual interest in avoiding disclosure of personal matters” and (2) “the interest in independence in making certain kinds of important decisions” [Solove and Rotenberg, 2003].

7The first ten amendments to the U.S. Constitution are collectively known as the “Bill of Rights.” They were added as a result of objections to the original Constitution of 1787 during state ratification debates. Congress approved these amendments as a block of twelve in September 1789, and the legislatures of enough states had ratified 10 of those 12 by December 1791 [Wikipedia].

8That the Fourteenth Amendment provides a due process right to privacy was first recognized in concurring opinions of two Supreme Court Justices in Griswold v. Connecticut. It was also recognized in Roe v. Wade 1973, which invoked the right to privacy to protect a woman’s right to an abortion, and in Lawrence v. Texas 2003, which invoked the right to privacy regarding the sexual practices of same-sex couples.

9An exception is the 13th Amendment, which prohibits slavery and thus also applies to private persons.

10National Do Not Call Registry: https://www.donotcall.gov.

11A tort is a civil wrong for which the law provides remedy. The “law of torts” is part of the common law, which is the legal system of many Anglo-American countries, such as the UK or the U.S. In contrast to civil law practiced in most European countries (which is derived from Roman law, and has the form of statutes and codes written and enacted by emperors, kings, and, today, by national legislatures), common law is based on traditions, customs, and precedents dating back to historical England.

12The European Council was founded in 1949 in order to harmonize legal and social practices across Europe. It groups together 47 countries—the 28 EU member states and additional mostly central and eastern European countries. Since 1989, its main job has become assisting the post-communist democracies in central and eastern Europe in carrying out political, legal, and economic reform.

13As of August 2018, all 47 CoE member states have ratified the convention, while six non-member states have done so; see conventions.coe.int/Treaty/Commun/ChercheSig.asp?NT=108&CM=8&DF=8/18/04&CL=ENG for latest figures.

14The directive actually applies to the so-called “European Economic Area” (EEA), which not only includes the EU-member states but also Norway, Iceland, and Liechtenstein.

15Data transfer between Europe and the U.S. has been regulated by a separate agreement called the Safe Harbor Agreement, which was later replaced by the EU-US Privacy Shield (see https://www.privacyshield.gov/) [Weiss and Archick, 2016].

16Note that surveillance cameras seem to have no significant effect on violent crime—one of the main reasons for having all those cameras in the first place. According to a 2013 report from the UK College of Policing [National Police Library – College of Policing, 2013], CCTV leads only to “a small reduction in crime,” mostly in the area of vehicle theft, but that it has “no impact on the levels of violent crimes.”

17An active user is someone who has logged in at least once in the last 30 days.

18As announced at Google’s 2017 I/O developer conference [Popper, 2017].

19See https://duckduckgo.com/traffic.html.

20As of January 2017, see https://duckduckgo.com/traffic.html.

21According to Internetlivestats.com [2018], Google serves over 3.5 billion queries a day. Google does not publicly disclose the number of queries they serve.

22See https://www.signal.org/.

23See https://www.whatsapp.com/. Note that since 2016, WhatsApp supports the encryption of all information being exchanged, though the metadata, i.e., who is chatting with whom and when, is still available to WhatApp’s owner, Facebook.

24Such a warrant should only be issued based on sufficient evidence (“probable cause”).

25Sun Microsystems was once a key software and hardware manufacturer of Unix workstations. It was acquired by Oracle in 2009.

26BT was formerly called British Telecom, which was the state-owned telecommunication provider in the UK.

27A similar categorization but centering around privacy harms can be found in Joyee De and Le Métayer [2016].

28The finding was triggered by the controversy surrounding the national census announcement on April 27, 1983, which chose the unfortunate wording “Totalzählung” and thus resulted in more than a hundred constitutional appeals (Verfassungsbeschwerde) to the federal constitutional court [Reissenberger, 2004].

29Translation by the authors.

30Translation by the authors.

31Often abbreviated to data self-determination.

32In its Article 23, the GDPR allows member states to limit the applicability of the Regulation in order to safeguard, e.g., national or public security.

33In May 2001, a judge in Texas ordered 21 convicted sex offenders not only to post signs in their front yards, but also place bumper stickers on their cars stating: “Danger! Registered Sex Offender in Vehicle” [Solove and Rotenberg, 2003].

34Another problem with this approach is its broad application toward any “sex-offenses:” in some states, this also puts adult homosexuals or underage heterosexual teenagers having consensual sex on such lists.

35The Guardian cites IBM-analyst Merlin Stone as saying “In every sector, the top 20% of customers give 80% of the profit” [Guardian].

36Interestingly enough, the main shopping characteristic shared by all of the suspected terrorists wasn’t a preference for Middle-Eastern food, but rather a tendency to order home-delivery pizza and pay for it by credit card.

37Hot desking describes the practice of office workers sharing a single physical desk/workplace, often on a first come–first use basis. Personal belongings are kept in movable (and lockable) containers that can easily be moved to another empty desk the next morning.

38Companies like AncestryDNA (www.ancestry.com/dna/) and 23andMe (www.23andme.com) already sell kits for less than USD $100 that allow consumers to get an in-depth analysis of their DNA. They typically reserve the right to “share aggregate information about users genomes to third parties” [Seife, 2013].

39Even within such a narrow definition, however, the inherent conceptual model can become quite complex, as the various privacy torts derived from it (protection from unwanted intrusion, public disclosure, adverse publicity, and appropriation) illustrate.

40Note that our discussion on privacy law in this chapter is only cursory. An excellent overview of U.S. privacy legislation can be found, e.g., in Gormley [1992], Solove [2006], and Solove and Schwartz [2018]. For an international perspective, Bygrave [2014] offers a detailed discussion, while Greenleaf [2014] focuses explicitly on Asia, and Burkert [2000] on Europe. Voss [2017] and De Hert and Papakonstantinou [2016] specifically focus on the GDPR. Greenleaf [2017] offers a current overview of over 120 national and transnational privacy laws.
