Privacy can be defined as the power to control what others know about you and what they can do with that information. In the computer age, personal information forms the basis for many decisions, from credit card transactions for purchasing goods to the ability to buy an airplane ticket and fly. Although it is theoretically possible to live an almost anonymous existence today, the price for doing so is high—from higher prices at the grocery store (no frequent shopper discount), to higher credit costs, to challenges with air travel, opening bank accounts, and seeking employment.
Information is an important asset in today’s society. From instant credit, to digital access to a wide range of information via the Internet, to electronic service portals such as e-commerce sites, e-government sites, and so on, our daily lives have become intertwined with privacy issues. Information has become a valuable entity because it is an enabler of many functions. The creation of an information-centric economy is as dramatic a revolution as the adoption of money as an economic medium that simplified barter. This revolution and reliance on information imbues information with value, creating the need to protect it.
Privacy is the right to control information about you and what others can do with that information.
Data retention is the determination of what records require storage and for how long. There are several reasons for retaining data: billing and accounting, contractual, warranty, and local, state, and national government rules are some of the obvious. Maintaining data stores for longer than is required is a source of risk, as is not storing the information long enough. Some information, like protected health information (PHI) for workers in some industries or workers who have been exposed to specific hazards, can have very long retention periods.
Failure to maintain the data in a secure state is a retention issue, as is failing to retain it at all. In some cases, destruction of data, specifically data subject to legal hold in a legal matter, can result in adverse court findings and sanctions. Legal hold can add significant complexity to data retention efforts because it effectively forces a separate store of the data until the legal issues are resolved: once data is on the legal hold track, its retention clock does not expire. This makes determining, labeling, and maintaining data associated with legal hold an added dimension beyond normal storage timelines.
When a company loses data that it has stored on its network, the term used is data breach. Data breaches have become an almost daily news item, and the result is that people are becoming desensitized to their occurrence. Data breaches act as a means of notification that security efforts have failed.
Verizon publishes an annual data breach report that examines the types and causes of data breaches over the previous calendar year. These results are presented in multiple forms, by attribution to attack type, attacker type, industry, geographic region, company size, and more, providing a significant level of detailed analysis into the incidents. This report is a framework of what actually happened to real companies with real security programs, or in spite of their security programs. It is an extremely valuable collection of data that can provide guidance with respect to current threat environments and results of actual attacks and errors.
Reputation damage is damage to a firm’s brand. Customers exercise a choice when they engage in a commerce transaction, and businesses spend a lot of time and resources building brands that steer the purchase decision toward their firms. Having to notify all customers of a breach/disclosure event is truly damaging to a firm’s brand. An online computer vendor, Egghead, suffered a breach/disclosure event near the holiday shopping season, saw sales dry up in that critical period, and went bankrupt shortly thereafter.
Target Corporation continues to be the example of record for costly breaches, with a breach in 2013 that cost hundreds of millions of dollars and cost multiple senior executives their jobs. In 2017, Yahoo! disclosed that breaches had compromised three billion accounts, which led to delays and a $350 million price reduction during its acquisition. Facebook joined this club with its Cambridge Analytica scandal of 2018, where it failed to protect the personal information of its users. Facebook also faced legal and regulatory inquiries, as well as responses under EU data protection rules.
Identity theft occurs when a criminal, using stolen information, assumes the identity of another individual to obtain and use credit in the victim’s name. If the data disclosure results in loss of customer personal information, regulations may hold a firm responsible for sharing in the risk of identity theft for the victims. The usual response on the part of a company is to purchase an identity theft protection service policy for the affected individuals of a breach. This can cost over $50 per person affected, making a breach of a million records a costly issue.
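The arithmetic behind that cost is straightforward. The per-person figure comes from the text; the one-million-record breach is hypothetical:

```python
# Back-of-the-envelope cost of identity theft protection after a breach.
# The $50-per-person figure is from the text; the record count is illustrative.
COST_PER_PERSON = 50          # identity protection service, dollars per person
records_breached = 1_000_000  # size of a hypothetical breach

total_cost = COST_PER_PERSON * records_breached
print(f"${total_cost:,}")  # $50,000,000
```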
Regulatory agencies, such as the Federal Trade Commission (FTC), have the ability to levy fines when regulations are not followed. These fines are not minor. In the EU, General Data Protection Regulation (GDPR) fines can reach 4 percent of a firm’s annual worldwide revenue, and fines in the hundreds of millions of euros have been levied. In the U.S., Equifax agreed to pay up to $700 million, much of it in restitution to consumers affected by its data breach.
One of the primary targets of an attacker on a system is intellectual property. IP theft is a major organizational consequence because the damage may not become evident until the stolen material is used by a competitor. In organizations with significant levels of IP, it is one of the most important items to be protected against loss. Years of investment, and more years of potential sales and profits, can vanish quickly if IP is stolen and used actively against a firm.
Be aware that organizational consequences of data privacy breaches can include reputation damage, identity theft, fines, and IP theft.
Effective data classification programs include data sensitivity labeling, which enables personnel handling the data to know whether it is sensitive and to understand the levels of protection required. When the data is inside an information-processing system, the protections should be designed into the system. But when the data leaves this cocoon of protection, whether by printing, downloading, or copying, it becomes necessary to ensure continued protection by other means. This is where data labeling assists users in fulfilling their responsibilities. Training to ensure that labeling occurs and that it is used and followed is important for users whose roles can be impacted by this material.
Training plays an important role in ensuring proper data handling and disposal. Personnel are intimately involved in several specific tasks associated with data handling and data destruction/disposal; if properly trained, they can act as a security control. Untrained or inadequately trained personnel will not be a productive security control and, in fact, can be a source of potential compromise.
A key component of IT security is the protection of the information processed and stored on the computer systems and network. Organizations deal with many different types of information, and they need to recognize that not all information is of equal importance or sensitivity. This requires classification of information into various categories, each with its own requirements for its handling. Factors that affect the classification of specific information include its value to the organization (what will be the impact to the organization if it loses this information?), its age, and laws or regulations that govern its protection. The most widely known system of classification of information is that implemented by the U.S. government (including the military), which classifies information into categories such as Confidential, Secret, and Top Secret. Businesses have similar desires to protect information and often use categories such as Publicly Releasable, Proprietary, Company Confidential, and For Internal Use Only. Each policy for the classification of information should describe how it should be protected, who may have access to it, who has the authority to release it and how, and how it should be destroyed. All employees of the organization should be trained in the procedures for handling the information that they are authorized to access.
Public data is data that can be seen by the public and has no needed protections with respect to confidentiality. It is important to protect the integrity of public data, lest one communicate incorrect data as being true. Public-facing web pages, press releases, corporate statements—these are examples of public data that still needs protection, but specifically with respect to integrity.
Data is labeled private if its disclosure to an unauthorized party would potentially cause harm or disruption to the organization. Passwords could be considered private. The term private data is usually associated with personal data belonging to a person and less often with corporate entities. The level of damage typically associated with private data is lower than confidential but still significant to the organization.
Sensitive data is a generalized term that typically represents data classified as restricted from general or public release. This term is often used interchangeably with confidential data.
Data is labeled confidential if its disclosure to an unauthorized party would potentially cause serious harm to the organization. This data should be defined by policy, and that policy should include details regarding who has the authority to release the data. Common examples of confidential data include pricing and cost data, customer data, internal business plans, and so on, as the release of these could result in significant loss to the firm.
Data is labeled critical if its disclosure to an unauthorized party would potentially cause extreme harm to the organization. This data should be defined by policy, and that policy should include details regarding who has the authority to release the data. Common examples of critical data include trade secrets, proprietary software code, and new product designs, as the release of these could result in significant loss to the firm. The level of damage from a critical data release would be extreme, material to the business, and could result in the highest levels of loss.
The difference between critical and confidential data lies in the level of potential damage should the information be released.
Proprietary data is data that is restricted to a company because of potential competitive use. If a company has data that could be used by a competitor for any particular reason (say, internal costs and pricing data), then it needs to be labeled and handled in a manner to protect it from release to competitors. Proprietary data may be shared with a third party that is not a competitor, but in labeling the data “proprietary,” you alert the party you have shared the data with that it is not to be shared further.
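One way to make sensitivity labels like those described above actionable in software is to encode them as an ordered type so that handling code can compare levels. This is only an illustrative sketch; the class and function names are hypothetical, not from any standard:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    # Ordered from least to most restrictive, per the classifications above.
    PUBLIC = 1
    PRIVATE = 2
    CONFIDENTIAL = 3
    CRITICAL = 4

def requires_release_approval(label: Sensitivity) -> bool:
    """Policy should name who may release anything beyond public data."""
    return label > Sensitivity.PUBLIC

print(Sensitivity.CRITICAL > Sensitivity.CONFIDENTIAL)  # True
print(requires_release_approval(Sensitivity.PUBLIC))    # False
```

Because the labels are ordered, a single comparison can enforce rules such as “anything confidential or above requires named release authority.”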
When information is about a person, failure to protect it can have specific consequences. Business secrets are protected through trade secret laws, government information is protected through laws concerning national security, and privacy laws protect information associated with people. A set of elements that can lead to the specific identity of a person is referred to as personally identifiable information (PII). By definition, PII can be used to identify a specific individual, even if an entire set is not disclosed.
As little information as the ZIP code, gender, and date of birth can resolve to a single person.
PII is an essential element of many online transactions, but it can also be misused if disclosed to unauthorized parties. For this reason, it should be protected at all times, by all parties that possess it. And when PII is no longer needed, it should be destroyed in accordance with the firm’s data destruction policy in a complete, nonreversible manner.
PII refers to information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other personal or identifying information that is linked or is linkable to a specific individual.
PII is by nature sensitive to end users. Loss or compromise of end-user PII can result in financial and other impacts borne by the end user. For this reason, collection of PII should be minimized to what is actually needed. Here are three great questions to ask when determining whether to collect PII:
Do I need each specific data element?
What is my business purpose for each specific element?
Will my customers/end users agree with my rationale for collecting each specific element?
If the accidental disclosure of user data could cause the user harm, such as discrimination (political, racial, health related, or lifestyle), then the best course of action is to treat the information as sensitive PII.
Because privacy is defined as the power to control what others know about you and what they can do with this information, and PII represents the core items that should be controlled, communication with the end user concerning privacy is paramount. Privacy policies are presented later in the chapter, but with respect to PII, three words can govern good citizenry when collecting PII. Notice refers to informing the customer that PII will be collected and used and/or stored. Choice refers to the opportunity for the end user to consent to the data collection or to opt out. Consent refers to the positive affirmation by a customer that they have read the notice, understand their choices, and agree to release their PII for the purposes explained to them.
The Health Insurance Portability and Accountability Act (HIPAA) regulations define protected health information (PHI) as “any information, whether oral or recorded in any form or medium,” that
“[i]s created or received by a health care provider, health plan, public health authority, employer, life insurer, school or university, or health care clearinghouse” and
“[r]elates to the past, present, or future physical or mental health or condition of an individual; the provision of health care to an individual; or the past, present, or future payment for the provision of health care to an individual.”
HIPAA’s language is built upon the concepts of PHI and Notice of Privacy Practices (NPP). HIPAA describes “covered entities,” including medical facilities, billing facilities, and insurance (third-party payer) facilities. Patients are to have access to their PHI and should have an expectation of appropriate privacy and security associated with medical records. HIPAA mandates a series of administrative, technical, and physical security safeguards for information, including elements such as staff training and awareness as well as specific levels of safeguards for PHI when in use, stored, or in transit between facilities.
Financial information is a major source of PII. Items such as bank accounts, loans, and payment amounts can all be leveraged against knowledge-based authentication systems to achieve access to even more information, such as credit reports. Financial information is one of the most sought-after types of PII because it is the easiest type of information to monetize.
Search for Your Own PII
Modern Internet search engines have the ability to catalog tremendous quantities of information and make wide-area searches for specific elements easy. Using your own elements of PII, try searching the Internet and see what is returned on your name, address, phone number, Social Security number, date of birth, and so forth. For security reasons, be sure to be anonymous when doing this—that is, log out of Google applications before using Google Search, Microsoft/Live applications before using Bing, or Yahoo! applications before using Yahoo! Search. This step may seem minor, but with search records being stored, the last thing you want to do is provide records that can cross-correlate data about yourself. If you find data on yourself, analyze the source and whether or not the data should be publicly accessible.
The U.S. government as well as governments worldwide collect information as part of their operations. Government regulations concerning the collection, storage, and use of government data exist to assist the government agencies in the proper management of data during its lifecycle in government systems. Government data can include PII about people, and this information needs protection in accordance with current rules and regulations.
Customer data is the primary source of PII in an enterprise’s systems. This information was collected in response to a specific business need, and it requires appropriate levels of protection to prevent disclosure or release.
Multiple personnel are associated with the control and administration of data. These data roles include data owners, controllers, processors, custodians/stewards, protection officers, and users. Each of these has a role in the protection and control of the data. The leadership of this effort is under the auspices of the privacy officer.
Data requires a data owner. Data ownership roles for all data elements need to be defined in the business. Data ownership is a business function, where the requirements for security, privacy, retention, and other business functions must be established. Not all data requires the same handling restrictions, but all data requires these characteristics to be defined. This is the responsibility of the data owner.
The data controller is the person responsible for managing how and why data is going to be used by the organization. In the era of GDPR and other privacy laws and regulations, this is a critical position because, under GDPR and other privacy laws, the data controller is the position responsible for protecting the privacy and rights of the data’s subject, such as the user of a website. Whether the data is primary data or data from a third party, the data controller remains the point of responsibility for specifying how data is going to be used and processed either internally or externally. There can be multiple data controllers in an organization, with responsibilities over different sets of data.
In the European Union (EU), the General Data Protection Regulation (GDPR) treats the data controller as the data manager. In other words, the data controller manages the data.
With respect to data with privacy implications, under most privacy regulations and GDPR, the data controller is responsible for deciding the following:
What data is collected
Where and how it is used
With whom and how data is shared
How long the data is kept and how it is disposed of at end of life (EOL)
The data processor is the entity that processes data given to it by the data controller. Data processors do not own the data, nor do they control it. Their role is the manipulation of the data as part of business processes. Data processors can be personnel or systems; an example of a system is the use of Google Analytics to manipulate certain elements of data, making them useful for business analysts.
With respect to data with privacy implications, under most privacy regulations and GDPR, data processors are responsible for the following:
Developing and implementing IT processes and systems that manage personal data
Implementing security measures that would safeguard personal data
Using tools and strategies to properly handle personal data
Data custodians or stewards are the parties responsible for the day-to-day caretaking of data. The data owner sets the relevant policies, and the steward or custodian ensures these policies are followed.
The data privacy officer also plays an important role if information on European customers is involved because the EU has strict data protection (privacy) rules. The privacy officer who is accountable for the protection of consumer data from the EU must ensure compliance with EU regulations.
The data privacy officer is responsible for ensuring legal compliance with data privacy regulations.
When data is no longer being used, whether it is on old printouts, old systems being discarded, or broken equipment, it is important to destroy the data before losing physical control over the media it is on. Many criminals have learned the value of dumpster diving to discover information that can be used in identity theft, social engineering, and other malicious activities. An organization must concern itself not only with paper trash, but also with the information stored on discarded objects such as computers. Several government organizations have been embarrassed when old computers sold to salvagers proved to contain sensitive documents on their hard drives. It is critical for every organization to have a strong disposal and destruction policy and related procedures. This section covers data destruction and media sanitization methods.
Data/information has a lifecycle—a beginning, a middle, and, at some point, an end. Understanding the lifecycle of information assets—from the point of collection, use, and storage as well as how the assets are shared, protected, and ultimately destroyed—is important if one is to properly handle the information. Not all information has the same time periods, or even steps, associated with it, so lifecycles are unique to different information sources and elements. The lifecycle forms a foundation upon which information management resides.
Burning is considered one of the gold-standard methods of data destruction. Once the storage media is rendered into a form that can be destroyed by fire, the chemical processes of fire are irreversible and render the data lost forever. The typical method is to shred the material, even plastic disks and hard drives (including SSDs), and then put the shred in an incinerator and oxidize the material back to base chemical forms. When the material is completely combusted, the information that was on it is gone.
Shredding is physical destruction by tearing an item into many small pieces, which can then be mixed, making reassembly difficult if not impossible. Important papers should be shredded, and important in this case means anything that might be useful to a potential intruder or dumpster diver. It is amazing what intruders can do with what appear to be innocent pieces of information. Shredders come in all sizes, from little desktop models that can handle a few pages at a time, or a single CD/DVD, to industrial versions that can handle even phone books and multiple discs at the same time. The ultimate in industrial shredders can even shred hard disk drives, metal case and all. Many document destruction companies have larger shredders on trucks that they bring to their clients' locations to do on-site shredding on a regular schedule.
Pulping is a process by which paper fibers are suspended in a liquid and recombined into new paper. If you have data records on paper and you shred the paper, the pulping process removes the ink by bleaching and recombines all the shredded material into new paper, completely destroying the physical layout of the old paper.
Pulverizing is a physical process of destruction using excessive physical force to break an item into unusable pieces. Pulverizers are used on items like hard disk drives, destroying the platters so that they cannot be reconstructed. A more modern method of pulverizing the data itself is the use of encryption: the data on the drive is encrypted, and then the key is destroyed, rendering the data nonrecoverable to the extent of the encryption's strength. This method has unique advantages of scale: a small business can pulverize its own data this way, whereas it would otherwise need expensive equipment or a third-party service to physically pulverize the few disks it needs to destroy each year.
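The cryptographic form of pulverizing can be sketched in a few lines. Here a toy one-time pad stands in for the drive-level encryption a real self-encrypting drive would use; everything in this example is illustrative:

```python
import secrets

# Crypto-erasure sketch: encrypt the data, then "pulverize" it by destroying
# only the key. A one-time pad (XOR with a random key of equal length) stands
# in for the full-disk encryption a real deployment would use.
plaintext = b"customer records to be destroyed"
key = secrets.token_bytes(len(plaintext))                  # random key, same length
ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))

# While the key exists, the data is recoverable.
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == plaintext

# Destroying the key is the erasure step: without it, the remaining
# ciphertext carries no usable information about the plaintext.
del key
print("key destroyed; ciphertext bytes remaining:", len(ciphertext))
```

The stored bytes remain on the media, but with the key gone they are indistinguishable from random noise, which is why destroying the key alone suffices.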
A safer method for destroying files on magnetic storage devices (that is, magnetic tape and hard drives) is to destroy the data magnetically, using a strong magnetic field to degauss the media. Degaussing realigns the magnetic particles, removing the organized structure that represented the data. This effectively destroys all data on the media. Several commercial degaussers are available for this purpose.
Data purging is a term commonly used to describe methods that permanently erase and remove data from a storage space. The key phrase is “remove data”: unlike deletion, which typically just removes the references to the data, purging removes the data itself and opens the storage space for reuse. A circular buffer is a great example of an automatic purge mechanism. It stores a given number of data elements, after which the space is reused. Once a 64-MB circular buffer is full, it overwrites the oldest material as new material is added to the buffer.
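The circular buffer just described can be sketched in a few lines; a five-slot capacity stands in for the 64-MB example:

```python
from collections import deque

# A circular buffer as an automatic purge mechanism: once full, the oldest
# entry is overwritten as each new entry arrives.
log = deque(maxlen=5)
for event in range(8):   # write 8 events into a 5-slot buffer
    log.append(event)

print(list(log))  # [3, 4, 5, 6, 7] -- events 0-2 were purged automatically
```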
Wiping data is the process of rewriting the storage media with a series of patterns of 1s and 0s. This is not done once but multiple times, to ensure that every trace of the original data has been eliminated. There are data-wiping protocols for various security levels of data, with three, seven, or even 35 passes. Solid-state drives deserve particular note: they use a different storage methodology and require special utilities to ensure that all the sectors are wiped.
Data wiping is nondestructive to the media, unlike pulping and shredding, and this makes it ideal for another purpose. Media sanitization is the clearing of previous data off of a media device before the device is reused. Wiping can be used to sanitize a storage device, making it clean before use. This can be important to remove old trace data that will later show up in free and unused space.
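A minimal sketch of a multi-pass wipe on a single file follows, assuming a conventional filesystem. Real wiping utilities must also contend with filesystem journaling, slack space, and SSD wear leveling, none of which this toy handles:

```python
import os
import tempfile

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with alternating 0x00/0xFF patterns,
    then delete it. A simplified illustration of multi-pass wiping."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            pattern = b"\x00" if i % 2 == 0 else b"\xff"
            f.seek(0)
            f.write(pattern * size)
            f.flush()
            os.fsync(f.fileno())   # push each pass out to the device
    os.remove(path)

# Demonstrate on a throwaway temp file.
fd, name = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"sensitive data")
wipe_file(name, passes=3)
print(os.path.exists(name))  # False
```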
Identity privacy and the establishment of identity theft crimes are governed by the Identity Theft and Assumption Deterrence Act, which makes it a violation of federal law to knowingly use another’s identity. The collection of information necessary to do this is also governed by the Gramm-Leach-Bliley Act (GLBA), which makes it illegal to gather identity information on another person under false pretenses. In the education area, privacy laws have existed for years. See “Family Educational Rights and Privacy Act (FERPA),” later in the chapter.
Major Elements of the Privacy Act
The Privacy Act has numerous required elements and definitions. Among other things, the major elements require federal agencies to do the following:
Publish in the Federal Register a notice of each system of records that it maintains, including information about the type of records maintained, the purposes for which they are used, and the categories of individuals on whom they are maintained.
Maintain only such information about an individual as required by law, or is needed to perform a statutory duty.
Maintain information in a timely, accurate, relevant, secure, and complete form.
Inform individuals about access to PII upon inquiry.
Notify individuals from whom it requests information as to what authorizes it to request the information, whether disclosure is mandatory or voluntary, the purpose for which the information may be used, and penalties for not providing the requested information.
Establish appropriate physical, technical, and administrative safeguards for the information that is collected and used.
Additional elements can be found by examining provisions of the act itself, although it is drafted in legislative form and requires extensive cross-referencing and interpretation.
Two major privacy initiatives followed from the U.S. government: the Privacy Act of 1974 and the Freedom of Information Act of 1966.
In the United States, the Federal Trade Commission has a significant role in addressing privacy concerns. The core principles the FTC uses are referred to as the Fair Information Practice Principles (FIPPs). The FIPPs and their components, as detailed in OMB Circular A-130, are as follows:
Access and Amendment: Agencies should provide individuals with appropriate access to PII and appropriate opportunity to correct or amend PII.
Accountability: Agencies should be accountable for complying with these principles and applicable privacy requirements, and should appropriately monitor, audit, and document compliance. Agencies should also clearly define the roles and responsibilities with respect to PII for all employees and contractors and should provide appropriate training to all employees and contractors who have access to PII.
Authority: Agencies should only create, collect, use, process, store, maintain, disseminate, or disclose PII if they have authority to do so, and should identify this authority in the appropriate notice.
Minimization: Agencies should only create, collect, use, process, store, maintain, disseminate, or disclose PII that is directly relevant and necessary to accomplish a legally authorized purpose, and should only maintain PII for as long as is necessary to accomplish the purpose.
Quality and Integrity: Agencies should create, collect, use, process, store, maintain, disseminate, or disclose PII with such accuracy, relevance, timeliness, and completeness as is reasonably necessary to ensure fairness to the individual.
Individual Participation: Agencies should involve the individual in the process of using PII and, to the extent practicable, seek individual consent for the creation, collection, use, processing, storage, maintenance, dissemination, or disclosure of PII. Agencies should also establish procedures to receive and address individuals’ privacy-related complaints and inquiries.
Purpose Specification and Use Limitation: Agencies should provide notice of the specific purpose for which PII is collected and should only use, process, store, maintain, disseminate, or disclose PII for a purpose that is explained in the notice and is compatible with the purpose for which the PII was collected, or that is otherwise legally authorized.
Security: Agencies should establish administrative, technical, and physical safeguards to protect PII commensurate with the risk and magnitude of the harm that would result from its unauthorized access, use, modification, loss, destruction, dissemination, or disclosure.
Transparency: Agencies should be transparent about information policies and practices with respect to PII, and should provide clear and accessible notice regarding the creation, collection, use, processing, storage, maintenance, dissemination, and disclosure of PII.
The Privacy Act of 1974 was an omnibus act designed to affect the entire federal information landscape. This act has many provisions that apply across the entire federal government, with only minor exceptions for national security (classified information), law enforcement, and investigative provisions. This act has been amended numerous times, and you can find current, detailed information at the Electronic Privacy Information Center (EPIC) website, https://epic.org/privacy/laws/privacy_act.html.
The Freedom of Information Act (FOIA) of 1966 is one of the most widely used privacy acts in the United States, so much so that its acronym, FOIA (pronounced “foya”), has reached common use. FOIA was designed to enable public access to U.S. government records, and “public” includes the press, which purportedly acts on the public’s behalf and widely uses FOIA to obtain information. FOIA carries a presumption of disclosure; the burden is on the government, not the requesting party, to substantiate why information cannot be released. Upon receiving a written request, agencies of the U.S. government are required to disclose those records, unless they can be lawfully withheld under one of nine specific exemptions in FOIA. The right of access is ultimately enforceable through the federal court system. The nine specific exemptions, listed in Section 552 of U.S. Code Title 5, fall within the following general categories:
1. National security and foreign policy information
2. Internal personnel rules and practices of an agency
3. Information specifically exempted by statute
4. Confidential business information
5. Inter- or intra-agency communication that is subject to deliberative process, litigation, and other privileges
6. Information that, if disclosed, would constitute a clearly unwarranted invasion of personal privacy
7. Law enforcement records that implicate one of a set of enumerated concerns
8. Agency information from financial institutions
9. Geological and geophysical information concerning wells
FOIA is frequently used and generates a tremendous amount of work for many federal agencies, resulting in delays to requests. This in itself is a testament to its effectiveness.
Record availability under FOIA is less of an issue than the backlog of requests. To defray some of the costs associated with record requests, and to discourage numerous trivial requests, agencies are allowed to charge for research time and duplication costs. These costs vary by agency but are typically nominal, in the range of $8.00 to $45.00 per hour for search/review fees and $0.10 to $0.35 per page for duplication. Agencies are not allowed to require a requester to make an advance payment unless the agency estimates that the fee is likely to exceed $250 or the requester previously failed to pay proper fees. For many uses, the first 100 pages are free, and under some circumstances the fees can be waived.
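As a back-of-the-envelope illustration of how these fee rules combine, the sketch below estimates a request's cost under assumed figures (the $20.00/hour and $0.15/page rates are hypothetical examples within the ranges above; actual schedules vary by agency):

```python
# Hypothetical FOIA fee estimate. Rates vary by agency; the defaults
# below are illustrative assumptions, not any agency's actual schedule.
def estimate_foia_fee(search_hours, pages,
                      hourly_rate=20.00, per_page=0.15,
                      free_pages=100, advance_threshold=250.00):
    """Return (total_fee, advance_payment_may_be_required)."""
    billable_pages = max(0, pages - free_pages)  # first 100 pages often free
    fee = search_hours * hourly_rate + billable_pages * per_page
    # Agencies may request advance payment only when the estimated fee
    # is likely to exceed $250 (or when prior fees went unpaid).
    return round(fee, 2), fee > advance_threshold

fee, advance = estimate_foia_fee(search_hours=3, pages=250)
```

A three-hour search producing 250 pages would thus bill only the 150 pages beyond the free allotment, staying below the advance-payment threshold.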
Student records have significant protections under the Family Educational Rights and Privacy Act (FERPA) of 1974, which includes significant restrictions on information sharing. FERPA operates on an opt-in basis, as the student must approve the disclosure of information prior to the actual disclosure. FERPA was designed to provide limited control to students over their education records. The law allows students to have access to their education records, an opportunity to seek to have the records amended, and some control over the disclosure of information from the records to third parties. For example, if the parent of a student who is 18 or older inquires about the student’s schedule, grades, or other academic issues, the student has to give permission before the school can communicate with the parent, even if the parent is paying for the education.
FERPA is designed to protect the privacy of student information. At the K–12 school level, students are typically too young to have legal standing associated with exercising their rights, so FERPA recognizes the parents as part of the protected party. FERPA provides parents with the right to inspect and review their children’s education records, the right to seek to amend information in the records they believe to be inaccurate, misleading, or an invasion of privacy, and the right to consent to the disclosure of PII from their children’s education records. When a student turns 18 years old or enters a postsecondary institution at any age, these rights under FERPA transfer from the student’s parents to the student.
The U.S. Computer Fraud and Abuse Act (CFAA, as amended in 1994, 1996, 2001, and 2008) and privacy laws such as the EU GDPR have several specific objectives, but one of the main ones is to prevent unauthorized parties from accessing information they should not have. Fraudulent access, or even exceeding one’s authorized access, is defined as a crime and can be punished. Although the CFAA is intended for broader purposes, it can be used to protect privacy related to computer records through its enforcement of violations of authorized access.
Websites that collect information from children under the age of 13 are required to comply with the Children’s Online Privacy Protection Act (COPPA). The U.S. FTC provides an informational website on COPPA and compliance issues at www.ftc.gov/tips-advice/business-center/privacy-and-security/children%27s-privacy.
Considered by many privacy advocates to be the strongest U.S. privacy law, the Video Privacy Protection Act of 1988 provides civil remedies against unauthorized disclosure of personal information concerning video tape rentals and, by extension, DVDs and games as well. This is a federal statute, crafted in response to media searches of rental records associated with Judge Bork when he was nominated to the U.S. Supreme Court. Congress, upset with the liberal release of information, reacted with legislation, drafted by Senator Leahy, who noted during the floor debate that new privacy protections are necessary in “an era of interactive television cables, the growth of computer checking and check-out counters, of security systems and telephones, all lodged together in computers....” (S. Rep. No. 100-599, 100th Cong., 2nd Sess. at 6 ).
This statute, civil in nature, provides for civil penalties of up to $2500 per occurrence, as well as other civil remedies. The statute provides the protections by default, thus requiring a video rental company to obtain the renter’s consent to opt out of the protections if the company wants to disclose personal information about rentals. Exemptions exist for issues associated with the normal course of business for the video rental company as well as for responding to warrants, subpoenas, and other legal requests. This law does not supersede state laws, of which there are several.
Many states have enacted laws providing both wider and greater protections than the federal VPPA statute. For example, Connecticut and Maryland laws brand video rental records as confidential, and therefore not subject to sale, while California, Delaware, Iowa, Louisiana, New York, and Rhode Island have adopted state statutes providing protection of privacy with respect to video rental records. Michigan’s video privacy law is as sweeping as its broad super-DMCA state statute. This state law specifically protects records of book purchases, rentals, and borrowing as well as video rentals.
Medical and health information also has privacy implications, which is why the U.S. Congress enacted the Health Insurance Portability and Accountability Act (HIPAA) of 1996. HIPAA calls for sweeping changes in the way health and medical data is stored, exchanged, and used. From a privacy perspective, HIPAA includes significant restrictions on data transfers to ensure privacy, including security standards and electronic signature provisions. HIPAA security standards mandate a uniform level of protection for all health information that pertains to an individual and is housed or transmitted electronically. The standards mandate safeguards for physical storage, maintenance, transmission, and access to individuals’ health information. HIPAA mandates that organizations using electronic signatures meet standards ensuring information integrity, signer authentication, and nonrepudiation. These standards leave to industry the task of specifying the technical solutions and mandate compliance only with the significant levels of protection provided by the rules released by industry.
Protected Health Information (PHI)
HIPAA regulations define protected health information (PHI) as “any information, whether oral or recorded in any form or medium” that “[i]s created or received by a health care provider, health plan, public health authority, employer, life insurer, school or university, or health care clearinghouse” and “[r]elates to the past, present, or future physical or mental health or condition of an individual; the provision of health care to an individual; or the past, present, or future payment for the provision of health care to an individual.”
HIPAA’s language is built on the concepts of protected health information (PHI) and Notice of Privacy Practices (NPP). HIPAA describes “covered entities,” including medical facilities, billing facilities, and insurance (third-party payer) facilities. Patients are to have access to their PHI and an expectation of appropriate privacy and security associated with medical records. HIPAA mandates a series of administrative, technical, and physical security safeguards for information, including elements such as staff training and awareness, as well as specific levels of safeguards for PHI when in use, stored, or in transit between facilities.
Notice of Privacy Practices
Visit your local doctor’s office, hospital, or clinic and ask for their Notice of Privacy Practices (NPP). This notice to patients details what information will be collected and the uses and safeguards that are applied. These can be fairly lengthy and detailed documents, and in many cases are in a booklet form.
In 2009, as part of the American Recovery and Reinvestment Act of 2009, the Health Information Technology for Economic and Clinical Health Act (HITECH Act) was passed into law. Although the primary purpose of the HITECH Act was to provide stimulus money for the adoption of electronic medical records (EMR) systems at all levels of the healthcare system, it also contained new security and privacy provisions to add teeth to those already in HIPAA. HIPAA protections were confined to the direct medical profession and did not cover entities such as health information exchanges and other “business associates” engaged in the collection and use of PHI. Under HITECH, business associates will be required to implement the same security safeguards and restrictions on uses and disclosures, to protect individually identifiable health information, as covered entities under HIPAA. It also subjects business associates to the same potential civil and criminal liability for breaches as covered entities. HITECH also specifies that the U.S. Department of Health and Human Services (HHS) is now required to conduct periodic audits of covered entities and business associates.
HIPAA civil penalties for willful neglect are increased under the HITECH Act. These penalties can extend up to $250,000, and repeat/uncorrected violations can extend up to $1.5 million. Under HIPAA and the HITECH Act, an individual cannot bring a cause of action against a provider. The laws specify that a state attorney general can bring an action on behalf of state residents.
In the financial arena, the Gramm-Leach-Bliley Act (GLBA) introduced the U.S. consumer to privacy notices, requiring firms to disclose what they collect, how they protect the information, and with whom they will share it. Annual notices are required as well as the option for consumers to opt out of the data sharing. The primary concept behind U.S. privacy laws in the financial arena is that consumers be allowed to opt out. This was strengthened in GLBA to include specific wording and notifications as well as requiring firms to appoint a privacy officer. Most U.S. consumers have witnessed the results of GLBA, every year receiving privacy notices from their banks and credit card companies. These notices are one of the visible effects of GLBA on changing the role of privacy associated with financial information.
California Senate Bill 1386 (SB 1386) was a landmark law concerning information disclosures. It mandates that Californians be notified whenever PII is lost or disclosed. Since the passage of SB 1386, numerous other states have modeled legislation on this bill, though national legislation has so far been blocked by political procedural moves. The current list of U.S. states and territories that require disclosure notices is up to 49, with only Alabama, New Mexico, and South Dakota without bills. Each of these disclosure notice laws is different, making the case for a unifying federal statute compelling, but such a statute currently sits low on the priority lists of most politicians.
Banking has always had an element of PII associated with it, from who has deposits to who has loans. As the scale of operations increased, both in numbers of customers and products, the importance of information for processing grew. Checks became a utility instrument to convey information associated with funds transferred between parties. As a check was basically a promise to pay, in the form of directions to a bank, occasionally the check was not honored and a merchant had to track down the party to demand payment. Thus, it became industry practice to write additional information on a check to assist a firm in later tracking down the drafting party. This information included items such as address, work phone number, a credit card number, and so on. This led to the co-location of information about an individual, and this information was used at times to perform the crime of identity theft. To combat this and prevent the gathering of this type of information, a series of banking and financial regulations were issued by the U.S. government to prohibit this form of information collection. Other regulations addressed items such as credit card numbers being printed on receipts, mandating only the last five digits be exposed.
As described in Chapter 24, the major credit card firms, such as MasterCard, Visa, American Express, and Discover, designed a private-sector initiative to deal with privacy issues associated with credit card transaction information. PCI DSS is a standard that provides guidance on what elements of a credit card transaction need protection and the level of expected protection. PCI DSS is not a law, but rather a contractual regulation, enforced through a series of fines and fees associated with performing business in this space. PCI DSS was a reaction to two phenomena: data disclosures and identity theft.
The Fair Credit Reporting Act (FCRA) of 1970 brought significant privacy protections to the consumer credit reporting agencies (CRAs). This act requires that the agencies provide consumers notice of their rights and responsibilities. The agencies are required to perform timely investigations on inaccuracies reported by consumers. The agencies are also required to notify the other CRAs when consumers close accounts. The act also addresses technical issues associated with data integrity, data destruction, data retention, and consumer and third-party access to data. The details of FCRA proved to be insufficient with respect to several aspects of identity theft, and in 2003, the Fair and Accurate Credit Transactions Act (FACTA) was passed, modifying and expanding on the privacy and security provisions of FCRA.
FACTA and Credit Card Receipts
One of the provisions of FACTA compels businesses to protect credit card information on receipts. Before FACTA, it was common for receipts to show entire credit card numbers as well as additional information. Today, receipts can display only the last five digits of the card number and cannot include the card expiration date. These rules went into effect in 2005, and merchants had one year to comply.
The Fair and Accurate Credit Transactions Act of 2003 was passed to enact stronger protections for consumer information from identity theft, errors, and omissions. FACTA amended portions of FCRA to improve the accuracy of customer records in consumer reporting agencies, to improve timely resolution of consumer complaints concerning inaccuracies, and to make businesses take reasonable steps to protect information that can lead to identity theft.
FTC Disposal Rule
The FTC’s Disposal Rule applies to consumer reporting agencies as well as to any individuals and businesses that use consumer reports, such as lenders, insurers, employers, and landlords.
FACTA also had other “disposal rules” associated with consumer information. FACTA mandates that information that is no longer needed must be properly disposed of, by burning, pulverizing, or shredding. Any electronic information must be irreversibly destroyed or erased. Should third-party firms be used for disposal, the rules still apply to the original contracting party, so third parties should be selected with care and monitored for compliance.
Red Flag Rules
The FTC has adopted a set of red flag rules that are invoked to assist entities in determining when extra precautions must be taken concerning PII records. The following are some examples of red flags that should prompt an organization to initiate additional, specific data handling steps to protect data:
Change of address request. This is a common tool for identity thieves, and as such, firms should provide protection steps to verify change-of-address requests.
Sudden use of an account that has been inactive for a long time, or radical changes in use of any account.
A suspicious address or phone number. Many fraudulent addresses and numbers are known, and repeated applications should be quickly noted and stopped.
Request for credit on a consumer account that has a credit freeze on a credit reporting record.
Additional information is available from the FTC at www.ftc.gov/tips-advice/business-center/guidance/fighting-identity-theft-red-flags-rule-how-guide-business.
Whenever a red flag issue occurs, the business must have special procedures in place to ensure that the event is not fraudulent. Calling the customer and verifying information before taking action is one example of this type of additional action.
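Screening for the red flags listed above can be automated as simple rules that trigger the special procedures. The sketch below is illustrative only; the field names, account structure, and watch list are hypothetical, not anything mandated by the FTC rule:

```python
# Illustrative red-flag screen. Field names and the phone watch list
# are assumptions for the example, not FTC-specified data structures.
KNOWN_BAD_PHONES = {"555-0100"}  # hypothetical fraud watch list

def red_flags(account, request):
    """Return the list of red flags raised by a request on an account."""
    flags = []
    if request.get("change_of_address") and not request.get("address_verified"):
        flags.append("unverified change of address")
    if account.get("months_inactive", 0) >= 24 and request.get("activity"):
        flags.append("sudden use of a long-dormant account")
    if request.get("phone") in KNOWN_BAD_PHONES:
        flags.append("suspicious phone number")
    if request.get("credit_requested") and account.get("credit_freeze"):
        flags.append("credit request on a frozen record")
    return flags
```

Any nonempty result would route the request into the verification procedures described above (for example, calling the customer) before the business acts on it.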
Privacy is not a U.S.-centric phenomenon, but it does have strong cultural biases. Legal protections for privacy tend to follow socio-cultural norms by geography; hence, European nations have different policies than the United States. In the United States, the primary path to privacy is via opt-out, whereas in Europe and other countries it is via opt-in. What this means is that the fundamental nature of control shifts. In the U.S., a consumer must notify a firm that they wish to block the sharing of personal information; otherwise, the firm has permission by default. In the EU, sharing is blocked unless the customer specifically opts in to allow it. The Far East has significantly different cultural norms with respect to individualism versus collectivism, and this is reflected in its privacy laws and legal systems as well. Even in countries with common borders, distinct differences exist, such as between the United States and Canada; Canadian laws and customs have strong roots in their UK history and in many cases follow European ideals rather than U.S. ones. One of the primary sources of intellectual and political thought on privacy has been the Organisation for Economic Co-operation and Development (OECD). This multinational entity has for decades conducted multilateral discussions and policy formation on a wide range of topics, including privacy.
OECD Fair Information Practices are the foundational element for many worldwide privacy practices. Dating to 1980, Fair Information Practices are a set of principles and practices that set out how an information-based society may approach information handling, storage, management, and flows with a view toward maintaining fairness, privacy, and security. Members of the OECD recognized that information was a critical resource in a rapidly evolving global technology environment, and that proper handling of this resource was critical for long-term sustainability of growth.
OECD’s Privacy Code
OECD’s privacy code was developed to help “harmonise national privacy legislation and, while upholding such human rights, [to] at the same time prevent interruptions in international flows of data. [The Guidelines] represent a consensus on basic principles which can be built into existing national legislation, or serve as a basis for legislation in those countries which do not yet have it.” (Source: “OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,” www.oecd.org/sti/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm.)
The EU has developed a comprehensive concept of privacy, which is administered via a set of statutes known as data protection. These privacy statutes cover all personal data, whether collected and used by government or by private firms, and are administered by state and national data protection agencies in each country. This common, comprehensive approach stands in distinct contrast to the patchwork of laws in the United States.
Privacy laws in Europe are built around the concept that privacy is a fundamental human right that demands protection through government administration. When the EU was formed, many laws were harmonized across the original 15 member nations, and data privacy was among those standardized. The initial harmonization related to privacy was the Data Protection Directive, adopted by EU members, which has a provision allowing the European Commission to block transfers of personal data to any country outside the EU that has been determined to lack adequate data protection policies. The impetus for the EU directive is to establish the regulatory framework to enable the movement of personal data from one country to another, while at the same time ensuring that privacy protection is “adequate” in the country to which the data is sent. This can be seen as a direct result of early United States Department of Health, Education, and Welfare (HEW) task force and OECD directions. If the recipient country has not established a minimum standard of data protection, it is expected that the transfer of data will be prohibited.
Two factors led to what can only be seen as a complete rewrite of EU data protection regulations. In light of the Snowden revelations, the EU began a new round of examining data protection when shared with the U.S. and others. This brought Safe Harbor provisions into the spotlight as the EU wanted to renegotiate stronger protections. Then, the European Court of Justice invalidated the Safe Harbor provisions. This led the way to the passage of the General Data Protection Regulation (GDPR), which went into effect in May of 2018.
The GDPR ushers in a brand-new world with respect to data protection and privacy. Because global trade rests upon information transfers, including transfers of personal data, the ability to move such data between parties is important to every trading country. Enshrined in the Charter of Fundamental Rights of the EU is the fundamental right to the protection of personal data, including when such data elements are transferred outside the EU. Recognizing that, the new set of regulations is more expansive and restrictive, making the Safe Harbor provisions obsolete. For all firms that wish to trade with the EU, there is now a set of privacy regulations that requires specific programs to address its requirements.
The GDPR brings many changes, one being the appointment of a data protection officer (DPO). This role may be filled by an employee or a third-party service provider (for example, consulting or law firm), and it must be a direct report to the highest management level. The DPO should operate with significant independence, and provisions in the GDPR restrict control over the DPO by management.
The GDPR requires significant consideration, including the following:
Assessing personal data flows from the EU to the U.S. to define the scale and scope of the cross-border privacy-compliance challenge
Assessing readiness to meet model clauses, remediate gaps, and organize audit artifacts of compliance with the clauses
Updating privacy programs to ensure they are capable of passing an EU regulator audit
Conducting EU data-breach notification stress tests
Monitoring changes in EU support for model contracts and binding corporate rules
The GDPR specifies requirements regarding consent, and they are significantly more robust than previous regulations. Consent requirements are also delineated for specific circumstances:
Informed/affirmative consent to data processing. Specifically, “a statement or a clear affirmative action” from the data subject must be “freely given, specific, informed and unambiguous.”
Explicit consent to process special categories of data. Explicit consent is required for “special categories” of data, such as genetic data, biometric data, and data concerning sexual orientation.
Explicit parental consent for children’s personal data.
Consent must be specific to each data-processing operation, and the data subject can withdraw consent at any time.
The GDPR provides protections for new individual rights, and these may force firms to adopt new policies to address these requirements. The rights include the Right to Information, Right to Access, Right to Rectification, Right to Restrict Processing, Right to Object, Right to Erasure, and Right to Data Portability. Each of these rights is clearly defined with technical specifics in the GDPR. The GDPR also recognizes the risks of international data transfer to other parties and has added specific requirements that data protection issues be addressed by means of appropriate safeguards, including binding corporate rules (BCRs), model contract clauses (MCCs), also known as standard contractual clauses (SCCs), and legally binding documents. These instruments must be enforceable between public authorities or bodies, as well as all who handle data.
The differences in approach between the U.S. and the EU with respect to data protection led the EU to issue expressions of concern about the adequacy of data protection in the United States, a move that could have paved the way to the blocking of data transfers. This has forced U.S. and other international companies to adapt their privacy protections to at least align with the GDPR for EU customers.
Encryption and Privacy
Encryption has long been held by governments to be a technology associated with the military. As such, different governments have regulated it in different manners. The U.S. government has greatly reduced controls over encryption in the past decade. Other countries, such as Great Britain, have enacted statutes that compel users to turn over encryption keys when asked by authorities. Countries such as France, Malaysia, and China still tightly control and license end-user use of encryption technologies. The primary driver for Phil Zimmermann to create Pretty Good Privacy (PGP) was the need for privacy in countries where the government was considered a threat to civil liberties.
Another major difference between U.S. and European regulation lies in where the right of control is exercised. In European directives, the right of control over privacy is balanced in such a way as to favor consumers. Rather than having to pay to opt out, as with unlisted phone numbers in the United States, consumers have such services for free. Rather than users having to opt out at all, the default privacy setting is deemed to be the highest level of data privacy, and users have to opt in to share information. This default setting is a cornerstone of the European Union’s Directive on Protection of Personal Data and is enforced through national laws in all member nations.
Like many European countries, Canada has a centralized form of privacy legislation that applies to every organization that collects, uses, or discloses personal information, including information about employees. These regulations stem from the Personal Information Protection and Electronic Documents Act (PIPEDA), which requires that personal information be collected and used only for appropriate purposes. Individuals must be notified as to why the information is requested and how it will be used. The act has safeguards associated with storage, use, reuse, and retention.
To ensure leadership in the field of privacy issues, Canada has a national-level privacy commissioner, and each province has a provincial privacy commissioner. These commissioners act as advocates on behalf of individuals and have used legal actions to enforce the privacy provisions associated with PIPEDA to protect personal information.
Japan has the Personal Information Protection Law, which requires protection of personal information used by the Japanese government, third parties, and the public sector. The Japanese law has provisions where the government entity must specify the purpose for which information is being collected, specify the safeguards applied, and, when permitted, discontinue use of the information upon request.
Hong Kong has an office of the Privacy Commissioner for Personal Data (PCPD), a statutory body entrusted with protecting the personal data privacy of individuals and ensuring compliance with the Personal Data (Privacy) Ordinance in Hong Kong. One main task of the Commissioner is public education, creating greater awareness of privacy issues and the need to comply with the Ordinance.
China has long had a reputation for poor privacy practices. Some of this comes from the cultural bias toward collectivism, and some from the long-standing government tradition of surveillance. News of the Chinese government eavesdropping on Skype and other Internet-related communications has heightened this concern. China’s constitution has provisions for privacy protections for its citizens. Even so, enforcement and penalties have been inconsistent, and judicial treatment of privacy matters has been far from uniform.
One principal connection between information security and privacy is that without information security, you cannot have privacy. If privacy is defined as the ability to control information about oneself, then the aspects of confidentiality, integrity, and availability from information security become critical elements of privacy. Just as technology has enabled many privacy-impacting issues, technology also offers the means in many cases to protect privacy. An application or tool that assists in such protection is called a privacy-enhancing technology (PET).
Encryption is at the top of the list of PETs for protecting privacy and anonymity. As noted earlier, one of the driving factors behind Phil Zimmermann’s invention of PGP was the desire to enable people living in repressive cultures to communicate safely and freely. Encryption can keep secrets secret, and it’s a prime choice for protecting information at any stage in its lifecycle. The development of Tor routing to permit anonymous communications, coupled with high-assurance, low-cost cryptography, has made many web interactions securable and safe from eavesdropping.
Other PETs include small application programs called cookie cutters that are designed to prevent the transfer of cookies between browsers and web servers. Some cookie cutters block all cookies, while others can be configured to selectively block certain cookies. Some cookie cutters also block the sending of HTTP headers that might reveal personal information but might not be necessary to access a website, as well as block banner ads, pop-up windows, animated graphics, or other unwanted web elements. Some related PET tools are designed specifically to look for invisible images that set cookies (called web beacons or web bugs). Other PETs are available to PC users, including encryption programs that allow users to encrypt and protect their own data, even on USB keys.
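The selective blocking that cookie cutters perform can be sketched as a filter over a response's Set-Cookie headers. The blocklist domains and header layout below are illustrative assumptions, not any real product's interface:

```python
# Sketch of a selective "cookie cutter": drop Set-Cookie headers whose
# Domain attribute matches a blocklist of (hypothetical) tracking hosts.
BLOCKED_DOMAINS = {"ads.example.net", "tracker.example.com"}

def filter_set_cookie(headers):
    """Return the header list with blocklisted cookies removed.

    `headers` is a list of (name, value) pairs as seen in an HTTP
    response; a real tool would also handle case and quoting quirks.
    """
    kept = []
    for name, value in headers:
        if name.lower() == "set-cookie":
            attrs = {}
            for part in value.split(";"):
                if "=" in part:
                    k, v = part.strip().split("=", 1)
                    attrs[k.lower()] = v
            domain = attrs.get("domain", "").lstrip(".")
            if domain in BLOCKED_DOMAINS:
                continue  # block the tracking cookie
        kept.append((name, value))
    return kept
```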
Data minimization is one of the most powerful privacy-enhancing technologies. In a nutshell, it involves not keeping what you don’t need. Limiting the collection of personal information to that which is directly relevant and necessary to accomplish a specified purpose still allows the transactions to be accomplished, but it also reduces risk from future breaches and disclosures by not keeping “excess” data. In the EU, privacy rules are built around the idea that individuals own the rights to the reuse of their data; unless they grant that right to a company, storing and reusing the data beyond the immediate transaction is prohibited. This serves several purposes, but one important outcome is that when a breach/disclosure event occurs, the reach of the PII loss is limited.
While you may need to have a reasonable amount of PII to process and ship an order, once that process has concluded, do you need the data? There may be a need for a reasonable period for returns, warranty claims, and so on, but once that period has passed, destroying unneeded PII removes it from the chance of disclosure.
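In practice, minimization after the returns window can be as simple as stripping a stored order record down to the fields still needed; the field names below are illustrative assumptions:

```python
# Minimal sketch of data minimization: once the returns/warranty window
# closes, retain only non-PII order fields (names here are illustrative).
ORDER_FIELDS_AFTER_RETURNS = {"order_id", "order_date", "total"}

def minimize(order):
    """Drop PII (name, address, card data) once it is no longer needed."""
    return {k: v for k, v in order.items() if k in ORDER_FIELDS_AFTER_RETURNS}

order = {"order_id": "A-100", "order_date": "2023-05-01", "total": 59.95,
         "name": "Pat Doe", "address": "1 Main St", "card_last4": "4242"}
minimized = minimize(order)  # keeps only order_id, order_date, total
```

Whatever is never retained can never be breached, which is the entire point of the practice.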
Data masking involves the hiding of data by substituting altered values. A mirror version of a database is created, and data modification techniques such as character shuffling, encryption, and word or character substitution are applied to change the data. Another form is to physically redact elements by substituting a symbol such as * or x. This is seen on credit card receipts, where the majority of the digits are removed in this fashion. Done properly, data masking makes reverse engineering or detection of the original values infeasible.
Data masking hides personal or sensitive data but does not render it unusable.
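The receipt-style redaction and character shuffling described above can be sketched in Python. The card number shown is a test value, not real data:

```python
import random

def mask_card(number, visible=4):
    """Redact all but the last `visible` digits, as on a receipt."""
    digits = number.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - visible) + digits[-visible:]

def shuffle_field(value, seed=None):
    """Character shuffling, one technique for a masked mirror database."""
    rng = random.Random(seed)
    chars = list(value)
    rng.shuffle(chars)
    return "".join(chars)

print(mask_card("4111 1111 1111 1234"))  # ************1234
```

The masked value remains usable for display and testing while the original digits are hidden.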
Tokenization is the use of a random value to take the place of a data element that has traceable meaning. A good example is a credit card approval: you do not need to keep a record of the card number, the cardholder’s name, or sensitive data such as the card verification code (CVC), because the transaction agent returns an approval code, which is a token unique to that transaction. You can store this approval code, the token, in your system, and if there comes a time you need to reference the original transaction, the token provides complete traceability to it, yet if disclosed to an outside party, it reveals nothing.
Tokens are used all the time in data transmission systems involving commerce because they protect the sensitive information from being reused or shared, yet they maintain the desired nonrepudiation characteristics of the event. Tokenization is not an encryption step because encrypted data can be decrypted. By substituting a nonrelated random value, tokenization breaks the ability for any outside entity to “reverse” the action because there is no connection.
Tokenization assigns a random value that can be traced back to the original data only through the tokenization system that issued it.
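A minimal tokenization sketch in Python, assuming a simple in-memory vault (a real system would use a hardened, access-controlled token vault):

```python
import secrets

class TokenVault:
    """Minimal tokenization sketch: a random token stands in for the
    sensitive value; only the vault can map the token back."""

    def __init__(self):
        self._vault = {}

    def tokenize(self, sensitive):
        token = secrets.token_hex(16)  # random, not derived from the data
        self._vault[token] = sensitive
        return token

    def detokenize(self, token):
        return self._vault[token]

vault = TokenVault()
approval = vault.tokenize("4111 1111 1111 1234")
# The token can be stored freely; it reveals nothing if disclosed.
assert vault.detokenize(approval) == "4111 1111 1111 1234"
```

Because the token is generated randomly rather than derived from the card number, there is no mathematical relationship for an outsider to reverse.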
Data anonymization is the process of protecting private or sensitive information by removing identifiers that connect the stored data to an individual. Separating the PII elements such as names, Social Security numbers, and addresses from the remaining data through a data anonymization process retains the usefulness of the data but keeps the connection to the source anonymous. Data anonymization is easier said than done, because data exists in many places in many forms. This permits data aggregators to collect multiple instances and then, through algorithms and pattern matching, de-anonymize the data through multiple cross-references against multiple sources.
Pseudo-anonymization is a de-identification method that replaces private identifiers with fake identifiers or pseudonyms (for example, replacing the value of the name identifier “Mark Sands” with “John Doe”). Not all uniquely identifying fields are changed, because some, such as date of birth, may need to be preserved to maintain statistical accuracy. Noise can be added to some fields to remove direct connections while maintaining an approximate value; for example, randomly adding or subtracting up to three days to/from the actual date of birth preserves the approximate age while breaking the direct link to the original record. Pseudo-anonymization preserves statistical accuracy and data integrity, allowing the modified data to be used for training, development, testing, and analytics while protecting data privacy.
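The name-replacement and date-of-birth noise techniques can be sketched in Python. The field names and the “John Doe” pseudonym follow the example above; the seed is for reproducibility only:

```python
import datetime
import random

def pseudonymize(record, seed=None):
    """Replace the name with a pseudonym and add +/- 3 days of noise
    to the date of birth, preserving approximate age."""
    rng = random.Random(seed)
    out = dict(record)              # leave the original record untouched
    out["name"] = "John Doe"        # fake identifier
    noise = rng.randint(-3, 3)
    out["dob"] = record["dob"] + datetime.timedelta(days=noise)
    return out

rec = {"name": "Mark Sands", "dob": datetime.date(1980, 6, 15)}
masked = pseudonymize(rec, seed=7)
```

The modified record remains statistically useful (age is approximately preserved) while the direct identifier is gone.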
Privacy Compliance Steps
To ensure that an organization complies with the numerous privacy requirements and regulations, a structured approach to privacy planning and policies is recommended:
1. Identify the role in the organization that will be responsible for compliance and oversight.
2. Document all applicable laws and regulations, industry standards, and contract requirements.
3. Identify any industry best practices.
4. Perform a privacy impact assessment (PIA) and a risk assessment.
5. Map the identified risks to compliance requirements.
6. Create a unified risk mitigation plan.
The legal description of terms of agreement (commonly known as terms and conditions) is a set of items that both parties agree upon before some joint activity. This is used all the time with any external-facing interface, where you have the responding party agree to a published terms of agreement document before granting them access or processing their data elements. A typical terms of agreement document includes the terms, the rules, the guidelines of acceptable behavior, and other sections to which users must agree in order to use or access an IT resource, such as a website, a mobile app, an order placement page, and so on. Important items in the terms of agreement document include legal terms, governing law, agreement to operating rules, what services are offered and under what business conditions, liabilities, remedies for disagreements (for example, arbitration), and business terms such as the right to cancel, refunds, service level agreements, and so on. This becomes a license that binds the parties to the terms the business wishes to enforce.
A related document is the privacy notice, which informs users of an organization’s data practices. Key elements of a privacy notice include the following:
When you collect personal information
Why you collect personal information
What information is collected
How the information will be protected
When the information can or will be shared
Who to contact and where questions should be directed concerning the notice
How to opt out or opt in
An effective date of the document
A privacy impact assessment (PIA) is a structured approach to determining the gap between desired privacy performance and actual privacy performance. A PIA is an analysis of how PII is handled through business processes and an assessment of risks to the PII during storage, use, and communication. A PIA provides a means to assess the effectiveness of a process relative to compliance requirements and identify issues that need to be addressed. A PIA is structured with a series of defined steps to ensure a comprehensive review of privacy provisions.
The following steps comprise a high-level methodology and approach for conducting a PIA:
1. Establish PIA scope. Determine the departments involved and the appropriate representatives. Determine which applications and business processes need to be assessed. Determine applicable laws and regulations associated with the business and privacy concerns.
2. Identify key stakeholders. Identify all business units that use PII. Examine staff functions such as HR, Legal, IT, Purchasing, and Quality Control.
3. Document all contact with PII:
PII collection, access, use, sharing, and disposal
Processes and procedures, policies, safeguards, data-flow diagrams, and any other risk assessment data
Website policies, contracts, HR, and administrative for other PII
4. Review legal and regulatory requirements, including any upstream contracts. The sources are many, but some commonly overlooked issues are agreements with suppliers and customers over information sharing rights.
5. Document gaps and potential issues between requirements and practices. All gaps and issues should be mapped against where the issue was discovered and the basis (requirement or regulation) that the gap maps to.
6. Review findings with key stakeholders to determine accuracy and clarify any issues. Before the final report is written, any issues or possible miscommunications should be clarified with the appropriate stakeholders to ensure a fair and accurate report.
7. Create a final report for management.
In late 2020 and into 2021, Apple entered the privacy debate with a bold and strong statement supporting users’ right to privacy. The company backed these words with changes to the operating systems for its iPhone and iPad devices to significantly reduce the information shared via apps. This has created a battle between Apple and the marketing giants Facebook and Google, but Apple seems firm in its resolve that technology should support privacy. For more information, see https://time.com/collection/davos-2019/5502591/tim-cook-data-privacy/.
The Internet acts as a large information-sharing domain and, as such, can be a conduit for the transference of information among many parties. The Web offers much in the form of communication between machines, people, and systems, and this same exchange of information can be associated with privacy based on the content of the information and the reason for the exchange.
Cookies are small bits of text that are stored on a user’s machine and sent to specific websites when the user visits those sites. Cookies can store many different things, from tokens that reference a database server behind the web server to help maintain state through an application, to the contents of a shopping cart. Cookies can also hold data directly, in which case there are possible privacy implications. When a cookie holds a token that is meaningless to outsiders but meaningful to a back-end server, the loss of the cookie represents no loss at all, because the token contains no PII. But when the cookie text contains meaningful information, such as a ship-to address for an order, it can represent PII, and its loss can result in a privacy violation. It is common to encode the data in cookies, but Base64 encoding is not encryption and can be decoded by anyone, thus providing no confidentiality.
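A quick Python demonstration of why Base64 encoding provides no confidentiality (the cookie contents are illustrative):

```python
import base64

# A cookie value that merely Base64-encodes PII only looks opaque.
cookie_value = base64.b64encode(b"ship_to=123 Elm St, Dallas TX").decode()
print(cookie_value)                              # appears scrambled...
print(base64.b64decode(cookie_value).decode())   # ...but anyone can decode it
```

Encoding is a reversible transformation with no key; only encryption provides confidentiality.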
Cookies provide the useful service of allowing state to be maintained in the stateless process of web serving (see “Cookies” in Chapter 17). But because of the potential for PII leakage, many users have sworn off cookies. This leads to issues on numerous websites, because when properly implemented, cookies pose no privacy danger and can greatly enhance website usefulness.
The bottom line for cookies is fairly clear: done correctly, they do not represent a security or privacy issue. Done incorrectly, they can be a disaster. A simple rule solves most problems with cookies: never store meaningful data directly in a cookie; instead, store a random key, and let a server-side application resolve that key to the correct data and actions.
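The rule above can be sketched in Python, assuming a simple in-memory session store (a production system would use a durable, expiring store):

```python
import secrets

SESSIONS = {}  # server-side store; the cookie carries only an opaque key

def create_session(ship_to):
    """Store the PII server-side; return a random key for the cookie."""
    key = secrets.token_urlsafe(32)
    SESSIONS[key] = {"ship_to": ship_to}
    return key

def lookup(cookie_value):
    """Resolve the opaque cookie value back to the server-side record."""
    return SESSIONS.get(cookie_value)

cookie = create_session("123 Elm St, Dallas TX")
# The random cookie value contains no PII and means nothing to outsiders,
# yet the server can still recover the full record from it.
assert lookup(cookie)["ship_to"] == "123 Elm St, Dallas TX"
```

If such a cookie is stolen, the attacker gains a session reference that the server can revoke, not the PII itself.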
Shared information still requires control, and in this case the control function has shifted to the party that obtained the information. That party may store it for future use, for record purposes, or for other uses. If it fails to adequately protect the information from loss or disclosure, then the original owner loses control over the ways the information may be used. Data disclosures and information thefts both result in unauthorized use of information. Users can take actions to protect their information and to mitigate risk from unauthorized sharing and use.
Users have to share information for a variety of legitimate purposes. Information has value, both to the authorized user and to those who would steal the information and use it for unauthorized purposes. If users are going to control their information, they have to take certain precautions. This is where security and privacy intersect at an operational level. Security functionality enables control and thus enables privacy functionality.
One aspect of maintaining control over information lies in the proper security precautions presented throughout this book, so they will not be repeated here. A second level of actions can be employed by users to maintain awareness of how their information is used. The value of information is in its use, and in many cases this use can be tracked. The two main types of information that have immediate value are financial and medical. Financial information, such as credit card information, identity information, and banking information, can be used by criminals to steal from others. Many times the use of identity or financial information will show up on the systems of record associated with the information. This is why it is important to actually read bank statements and verify charges.
Users should periodically, at least annually, request copies of their credit bureau reports and examine them for unauthorized credit requests, accounts, or other activity. Periodic checks of healthcare insurance accounts and statements are essential for the same reason. Even if you have paid all your copays, do not shred unopened envelopes from the insurance company: if someone else is using your information, failing to review statements and alert the insurer to the misuse allows it to continue undetected. These checks do not take much time and provide a means to prevent long-term misuse of a stolen identity.
Data breaches continue to plague firms. Here are some recent major breaches and the number of records they affected:
Equifax 143,000,000 records
Friend Finder Network 412,000,000 records
River City Media 1,370,000,000 records
Spambot 700,000,000 records
Philippine Commission on Elections 550,000,000 records
Uber 57,000,000 records
There are many additional breaches, varying in size and in data sensitivity. While large numbers of leaked e-mail addresses capture the headlines, the release of every Swedish car registration in the country went largely unnoticed because of the limited number of cars in Sweden, yet the impact for Swedes could be significant. For further reference and additional information, see www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/.
When a company loses data that it has stored on its network, the term used is data breach. Data breaches have become an almost daily news item, and as a result people are becoming desensitized to their occurrence. A data breach is, in effect, notification that security efforts have failed. Verizon regularly publishes its Data Breach Investigations Report (DBIR), examining the root causes behind hundreds of breach events. The report found that breaches can be described by the following distinct patterns:
Point-of-sale (POS) intrusions
Web app attacks
Insider and privilege misuse
Physical theft and loss
Miscellaneous errors (misdelivery, misconfiguration, user errors)
Payment card skimmers
Denial of service
In 2020, the report found that 70 percent of breaches were caused by outsiders, 86 percent of the breaches were financially motivated, 43 percent of breaches were attacks on web applications (more than double the previous year), and 27 percent of malware incidents were attributed to ransomware.
Rebecca Herold, Privacy Professor
Monthly Privacy Professor tips www.privacyguidance.com/eTips.html
Information Is Beautiful (visualizations) www.informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/
Verizon data breach investigations report https://enterprise.verizon.com/resources/reports/dbir/
After reading this chapter and completing the exercises, you should understand the following aspects of privacy.
Privacy is the power to control what others know about you and what they can do with that information.
The concept of privacy does not translate directly to information about a business because it is not about a person.
Numerous U.S. federal statutes have privacy provisions, including FERPA, VPPA, GLBA, HIPAA, and so on.
Numerous state and local laws also address privacy issues, most notably data breach notification statutes.
A wide array of international laws address privacy issues, including those of the EU, Canada, and other nations.
Policies drive corporate actions, and privacy policies are required by several statutes and are essential to ensure compliance with the myriad of mandated actions.
Cookies represent a useful tool to maintain state when surfing the Web, but if used incorrectly, they can represent a risk to security and privacy.
Data sensitivity labels are used to identify the sensitivity of data and the level of protection it requires.
Assignment of duties to data owners, controllers, custodians/stewards, processors, and privacy officers is done by management.
A direct relationship exists between information security and privacy—one cannot have privacy without security.
Privacy-enhancing technologies (PETs) are used in the technological battle to preserve anonymity and privacy.
Specific constituent elements of PII need to be protected.
Corporate responsibilities associated with PII include the need to protect PII appropriately when in storage, use, or transmission.
cookie cutters (951)
data custodian (937)
data owner (936)
data privacy officer (DPO) (937)
data processor (937)
data protection (948)
data retention (931)
data roles (936)
data sensitivity labeling (933)
data steward (937)
Disposal Rule (947)
Fair Information Practice Principles (FIPPs) (941)
Freedom of Information Act (FOIA) (942)
General Data Protection Regulation (GDPR) (949)
Health Insurance Portability and Accountability Act (HIPAA) (944)
identity theft (946)
Notice of Privacy Practices (NPP) (945)
Personal Information Protection and Electronic Data Act (PIPEDA) (950)
personally identifiable information (PII) (935)
Privacy Act of 1974 (942)
privacy-enhancing technology (PET) (951)
privacy impact assessment (PIA) (954)
proprietary data (934)
protected health information (PHI) (945)
red flag (947)
red flag rules (947)
sensitive data (934)
Use terms from the Key Terms list to complete the sentences that follow. Don’t use the same term more than once. Not all terms will be used.
1. In the United States, the standard methodology for consumers with respect to privacy is to _______________, whereas in the EU it is to ______________.
2. _______________ is the right to control information about oneself.
3. The FTC mandates firms’ use of _______________ procedures to identify instances where additional privacy measures are warranted.
4. The newer set of privacy rules and regulations in the EU are referred to as the _______________.
5. Data that can be used to identify a specific individual is referred to as _______________.
6. Programs used to control the use of ___________ during web browsing are referred to as _________.
7. The major U.S. privacy statutes are the ____________ and the _______________.
8. Medical information in the United States is protected via the _______________.
9. Many privacy regulations have specified that firms provide an annual _______________ to customers.
10. To evaluate the privacy risks in a firm, a(n) _______________ can be performed.
1. HIPAA requires which of the following controls for medical records?
A. Encryption of all data
B. Technical safeguards
C. Physical controls
D. Administrative, technical, and physical controls
2. Which of the following is not PII?
A. Customer name
B. Customer ID number
C. Customer Social Security number or taxpayer identification number
D. Customer birth date
3. A privacy impact assessment:
A. Determines the gap between a company’s privacy practices and required actions
B. Determines the damage caused by a breach of privacy
C. Determines what companies hold information on a specific person
D. Is a corporate procedure to safeguard PII
4. Which of the following should trigger a response under the red flag rule?
A. All credit requests for people under 25 or over 75
B. Any new customer credit request, except for name changes due to marriage
C. Request for credit from a customer who has a history of late payments and poor credit
D. Request for credit from a customer with a credit freeze on their credit reporting record
5. Which of the following is an acceptable PII disposal procedure?
C. Electronic destruction per military data destruction standards
D. All of the above
6. Key elements of GDPR include which of the following?
A. Conducting EU data-breach notification stress tests
B. Appointing a data protection officer reporting directly to top-level management of the firm
C. Right to Erasure
D. All of the above
7. European privacy laws are built upon which of the following?
A. General Data Protection Regulations
B. Personal Information Protection and Electronic Data Act (PIPEDA)
C. Safe Harbor principles
D. Common law practices
8. In the United States, company responses to data disclosures of PII are regulated by which of the following?
A. Federal law, the Privacy Act
B. A series of state statutes
C. Contractual agreements with banks and credit card processors
D. The Gramm-Leach-Bliley Act (GLBA)
9. What is/are the primary factor(s) behind data-sharing compliance between U.S. and European companies?
A. U.S. firms adopting provisions of the GDPR
B. Safe Harbor provisions
C. U.S. FTC enforcement actions
D. All of the above
10. Privacy is defined as:
A. One’s ability to control information about oneself
B. Being able to keep one’s information secret
C. Making data-sharing illegal without consumer consent
D. Something that is outmoded in the Internet age
1. Privacy and technology often clash, especially when technology allows data collection that has secondary uses. In the case of automotive technology, black boxes to collect operational data are being installed in new cars in the United States. What are the privacy implications, and what protections exist?
2. Privacy policies are found all over the Web. Pick three websites with privacy policies and compare and contrast them. What do they include and what is missing?
3. The EU has dramatically changed its privacy infrastructure and requirements as a result of several events, including court cases, the Snowden revelations, and government activism. Examine the new world of data privacy regulations under the GDPR and then compare and contrast this to both the U.S. system and the previous EU system.
• Lab Project 25.1
Privacy-enhancing technologies can do much to protect a user’s information and/or maintain anonymity when using the Web. Research onion routing and the Tor project. What do these things do? How do they work?