CHAPTER 5

Privacy and Security in Healthcare

This chapter covers Domain 5, “Privacy and Security in Healthcare,” of the HCISPP certification. After you read and study this chapter, you should be able to:

•   Identify key information security objectives and attributes

•   Understand common information security definitions and concepts

•   Know fundamental privacy terms and principles used in information protection

•   Comprehend the interdependence of privacy and security in healthcare organizations

•   Categorize sensitive health information according to US and international guidelines

•   Define privacy and security terms as they apply to healthcare

•   Distinguish methods for reducing or mitigating the sensitivity of healthcare information

The importance of understanding and applying proper privacy and security controls on healthcare information is foundational to your success as a healthcare information privacy and security professional. The healthcare industry is highly regulated in the United States, as is the protection of personal data in most other countries. Moving on from our discussion of the regulatory environments that affect healthcare organizations, we will examine the specific information security and privacy definitions and concepts that regulations, and ultimately our policies and procedures, are built upon. In this chapter, you will learn about security objectives and attributes, including the principles of confidentiality, integrity, and availability. You will also learn about accountability, which is often a major part of the discussion of security as it relates to healthcare organizations, as well as general security definitions as they apply to healthcare.

As with most healthcare organizations, your organization likely faces a multitude of challenges, not the least of which is the need to apply a reasonable standard of due care and due diligence to safeguard the confidentiality, integrity, and availability of patient healthcare information. Whether the intent is to improve patient care, to protect the sensitive information we need to serve our customers, or to ensure compliance with regulatory requirements, the challenges are significant.

This chapter addresses security- and privacy-related topics together, in part because it also includes an overview of the relationship between information security and privacy. Privacy involves controlling access to personal information and the control a person has over how that information is discovered and shared. Security comprises the administrative, technical, and physical mechanisms that protect information from unauthorized access, alteration, and loss. In short, privacy is about what we protect, and security is about how we protect it.1

EXAM TIP   You will be tested on your basic understanding of security and privacy concepts and principles, the relationship between security and privacy, and the types of information requiring protection in the healthcare industry.

Privacy and security are important to everyone involved in healthcare, including health facility employees, patients, family members, and caregivers. They also matter to anyone who works for organizations that play supporting roles in patient care; for example, workers at a healthcare information clearinghouse never provide direct patient care, yet they still have obligations to ensure that the privacy and security of patient data are maintained in accordance with their employers' policies and applicable regulations.

Guiding Principles of Information Security: Confidentiality, Integrity, and Availability

Data security has three guiding principles: confidentiality, integrity, and availability (CIA).2 In general, it does not matter where you work, where you live, or what organizations you support; these principles remain the same. In addition to understanding CIA, you need to understand the importance of accountability, another central concept akin to CIA.

The CIA model is driven by the implementation of a combination of technical, administrative, and physical information protection controls. You should understand the relationship between CIA components in general and within your organization. Depending on a variety of concerns, including your role in the organization, the organization’s mission and size, applicable regulatory authorities, and sensitivity of information, one component may be emphasized over the others. For example, as a system administrator, providing integrity and availability may be more appropriate to your job description than providing confidentiality.

The prevailing illustration used for the CIA triad is an equilateral triangle that indicates the “weight” of each component as being equal to the others. The reality of these relationships, however, depends on situational factors. This is an important concept, because the emphasis placed on these three factors represents the assessment and balance of choices for security and privacy tools within your organization. So, for example, in your organization, confidentiality may be considered more of a priority than the other two factors, which means you’ll have increased focus on access controls and encryption. Or if data availability takes precedence, you may invest more in technical solutions for disaster recovery.

Figure 5-1 shows a comparison of the CIA triad in two scenarios. In the solid line triangle, each component is weighted equally. The triangle comprising dotted lines depicts an emphasis on confidentiality in implementing security and privacy controls.

Figure 5-1  The CIA triad is often depicted as a triangle that implies the relationship of the three components.

Confidentiality

Confidentiality relates to protecting sensitive proprietary information or personally identifiable information (PII) from unauthorized disclosure. The objective is to restrict access to data so that only those with proper authorization are allowed to access it. Security controls are implemented to protect confidentiality and prevent unauthorized disclosure. In healthcare, a breach of confidentiality could include an employee calling a media outlet to “anonymously” inform it that a public figure has been admitted to a rehabilitation facility. A similar confidentiality breach, though beyond the scope of healthcare, occurs when a fan magazine publishes photos of famous people enjoying personal time at a beach or other locale.

Although confidentiality is an important aspect in protecting information in any organization, healthcare information, in particular, warrants a higher level of protection and enforcement—consider, for example, an unauthorized disclosure that an individual has HIV, a psychiatric disorder, or another health concern.

Healthcare data confidentiality requirements are recognized internationally. For example, in the United States, the National Institute of Standards and Technology (NIST) has issued directives regarding CIA for healthcare information, and within the scope of confidentiality it states the following in a 2008 publication:

The confidentiality impact level is the effect of unauthorized disclosure of health care delivery services on the ability of responsible agencies to provide and support the delivery of health care to its beneficiaries will have only a limited adverse effect on agency operations, assets, or individuals. Special Factors Affecting Confidentiality Impact Determination: Some information associated with health care involves confidential patient information subject to the Privacy Act and to HIPAA [Health Insurance Portability and Accountability Act]. The Privacy Act Information provisional impact levels are documented in the Personal Identity and Authentication information type. Other information (e.g., information proprietary to hospitals, pharmaceutical companies, insurers, and care givers) must be protected under rules governing proprietary information and procurement management. In some cases, unauthorized disclosure of this information such as privacy-protected medical records can have serious consequences for agency operations. In such cases, the confidentiality impact level may be moderate.3

In Canada, the Canadian Privacy Act, Section 63, states the following:

Subject to this Act, the Privacy Commissioner and every person acting on behalf or under the direction of the Commissioner shall not disclose any information that comes to their knowledge in the performance of their duties and functions under this Act.4

In healthcare, confidentiality is not only important to protect individuals from medical and financial identity theft, but research shows that a breach of confidentiality can impact patient care. Where breaches are all too common, patients may worry that their private information will fall into the wrong hands.5 As mentioned in Chapter 4, providers understand that patients who fear their information might be disclosed in an unauthorized manner may delay seeking care or withhold information.

Specific circumstances require additional confidentiality considerations. For example, patient care information related to HIV, behavioral health, substance abuse, and children’s health often has even more restrictive confidentiality requirements than other types of information.

Whether in the United States under HIPAA or in the European Union under the General Data Protection Regulation (GDPR), confidentiality requirements often continue even after an employee’s role has changed or after an employee leaves a position or an organization. Most healthcare regulatory requirements clearly state that even when an individual no longer has access to the information, he or she is still required to keep the information confidential. For example, in the European Union, the Data Protection Directive, Article 28, Section 7, states the following:

Member States shall provide that the members and staff of the supervisory authority, even after their employment has ended, are to be subject to a duty of professional secrecy with regard to confidential information to which they have access.6

By law, data collectors are responsible, in the United States and in most other nations, for maintaining the confidentiality of the information forever, even if the patient discloses the information. Of course, the patient can give consent for specific disclosures, but generally, regulations do not permit the healthcare organization to disclose the information outside of legal allowances. Further, the organization must protect the information from disclosure until it can legally and properly destroy it.

Integrity

Integrity is a security objective intended to protect information from unauthorized editing, alteration, or amendment. Imagine a scenario in which malicious code (such as a software virus) is introduced into a medical device through a malware application. If, for example, that medical device is responsible for dosing medications to a patient, and the virus causes all decimal points to move one space to the right, the resulting dosage change can be significant, and potentially life-threatening, to the patient (for example, a dose of 0.5 ml may have a disastrous effect if only 0.05 ml is indicated). The integrity of healthcare data is important for patient safety and for many other reasons.

Data integrity is achieved by protecting the accuracy, quality, and completeness of the information. Integrity is maintained by ensuring that any changes made to data are authorized and correct, or not made at all. Security controls for integrity follow the data flow through the lifecycle of the information. When you examine the process of data collection and use in a healthcare setting, the data often changes format, and various data elements are combined, parsed out, or even aggregated. Throughout this process, however, the integrity of the data must remain intact. A patient name or date of birth, for instance, must remain the same even if it is collected as Jane Doe, December 14, 1970, at admissions and then changed to Doe, Jane, 12/14/70, after it is transcribed into the billing system. Although this is an exceedingly simplified example, maintaining data integrity across data flows is one reason for the existence of standards such as ICD-10-CM, the US clinical modification of ICD-10 maintained by the Centers for Disease Control and Prevention (CDC), for coding patient encounters, and Health Level Seven (HL7), a set of international standards for transmitting health information across organizations and systems. Using standard data sets and transaction codes helps to assure data integrity.
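To make the canonicalization idea concrete, the following minimal Python sketch reduces a date of birth captured in different formats to a single canonical value so a downstream check can confirm the underlying data has not changed. The date formats and function name are illustrative assumptions, not part of any healthcare standard.

from datetime import datetime

def canonical_dob(raw: str) -> str:
    """Parse a date of birth written in any of several known formats
    and return it in a single canonical form (ISO 8601)."""
    for fmt in ("%B %d, %Y", "%m/%d/%y", "%m/%d/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")

# The admissions and billing systems record the same birth date differently,
# but both canonicalize to the same value, so integrity is preserved.
assert canonical_dob("December 14, 1970") == canonical_dob("12/14/70")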

To accomplish integrity, several methods are used, including error checking and validation procedures. The following list includes some generic data integrity approaches, sample technical methods, and the security improvements they address. The list is adapted from data integrity guidance in NIST SP 1800-11 (Draft), Data Integrity: Recovering from Ransomware and Other Destructive Events:7

•   Corruption testing  This procedure includes the use of extract-transform-load (ETL) data testing applications, reliable backups, and TCP/IP checksum testing to detect unauthorized changes. The process includes logging and auditing for a retrospective review of data. The testing uses file hashing and encryption algorithms to identify cybersecurity events and data alteration (a minimal file-hashing sketch follows this list).

•   Secure storage  This process includes encrypted backups and immutable (unchangeable) storage solutions with write-once, read-many (WORM) properties. Technical processes, such as redundant storage—namely RAID—are storage configuration solutions that satisfy secure storage and data integrity protection.

•   Logging  A significant component of data integrity is to collect and enable the review of access to data and user activity. In this case, logging is used in alerting and analysis to discover any unusual or unauthorized activity and in legal discovery and e-forensics. Logs can be generated from individual systems. Several analysis tools can be used, such as security information and event management (SIEM) applications and network data capture systems.

•   Backup capability  Data integrity is preserved through procedures that enable data to be replicated and recovered periodically. Related to secure storage, backup tools support full, incremental, and differential schedules for backup. Another approach to backups is mirroring, which is similar to a full backup except that an exact copy of the data is stored separately, matching the source. Other backup procedures store files in one encrypted storage repository.
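As a simple illustration of the file-hashing technique mentioned in the corruption-testing item, the following Python sketch records a SHA-256 digest of a file and later recomputes it to detect alteration. The file name is a hypothetical example, and production tooling such as that described in NIST SP 1800-11 is far more complete.

import hashlib

def sha256_of(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At backup time, record the known-good digest alongside the file.
baseline = sha256_of("lab_results_2024.csv")   # hypothetical file name

# Later, during a periodic integrity check or after restoring from backup,
# recompute the digest; any difference signals unauthorized alteration.
if sha256_of("lab_results_2024.csv") != baseline:
    print("Integrity check failed: file contents have changed")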

In healthcare, information integrity has a strong association with patient care and patient safety. While unauthorized disclosure (a confidentiality failure) may lead to an unauthorized individual having access to healthcare data, and while the unavailability of data may hinder care, the fact remains that if the data we have on a patient is not accurate, it can in fact lead to death. For example, an unconscious patient undergoing treatment who has an allergy to a specific medication cannot advise staff about the allergy. In such a case, the availability of accurate data can save the patient’s life.

NOTE   Security controls for integrity also apply to technology and processes that assure nonrepudiation. This means that the controls in place assure the authentication of a data user or sender without the possibility of another actor impersonating the user.

Availability

Information is valuable only if it is accessible and timely. The data can be accurate and kept private, but if it is not available when it is needed, this third part of the CIA triad has failed. Availability of data is generally described as proper access at the time the information is needed. In healthcare, we readily understand the consequences of failing to protect PII or to keep patient records accurate. But experiencing network downtime with no contingency operations plan in place means the information is not available at the point of care. If the provider does not have the ability to access the information he or she needs, patient care is affected and patient safety risks increase.

Paper-based health records and manual processes can exacerbate the availability issue, because they may not be as easily accessed or enacted as digitally stored information. Lack of available information can result in improper diagnosis, inefficient or redundant tests, and in some cases adverse drug-to-drug interactions. A major assurance of availability from an information security and privacy perspective is achieved through implementation of business continuity and disaster recovery procedures. These focus areas require the use of administrative, technical, and physical controls to oversee high-availability system architectures, reliable backups, secondary operating locations, and practiced recovery procedures.

Availability also relates to having only the necessary information available. Having too much information available or having unorganized raw data can pose a security issue. Privacy and security frameworks such as the DPD, GDPR, and HIPAA, for instance, address the issue of having relevant information versus having more information than is needed. Consider an example: A provider who requires a relevant prior MRI image when treating an orthopedic injury must certainly have the most recent MRI on the affected body part to compare against the latest image. However, that provider would be overwhelmed by having to search through all the images on record for unrelated care of that patient. If nothing else, the search would be time consuming and wasteful.

In addition, by limiting availability, we can prevent unauthorized disclosures or data breaches simply by not sharing unused or extra data in the transaction. For illustration, consider an example from the past. There was a time in the United States when credit card numbers were printed in their entirety on receipts. This was useful for identification purposes and convenient for the payer when proving a purchase or seeking a refund. But eventually the practice ceased, because proof of purchase could be determined in more discrete ways, such as using only the last four to six digits on the card with the other digits masked. The practice of including the entire credit card number introduced too much risk of data loss and identity theft. This is a good example of the security impact of unnecessarily disclosing too much information.

Accountability

While generally not considered part of the CIA principles, accountability is often included as a high-level security principle. Within the healthcare environment, compliance standards often treat accountability as a basic principle. Accountability in information security and privacy refers to determining who is responsible for proper and improper information access and use. For example, a clinician who enters data into a patient record while treating the patient, thereby editing that record, is accountable for those entries and should be responsible for ensuring that the information is accurate when it is entered. Accountability intersects with the CIA principles: the individual with access should ensure the integrity of the data entered while leveraging the availability of the system and preserving confidentiality so that the information is available only to those with proper authorization.

An organization must also demonstrate accountability for the information it collects and uses. Data use actions must be logged and audited to various degrees to prove that measures of accountability are in place. Accountability includes tracking actions and identifying responsible parties retrospectively during cybersecurity incident and data breach recovery. Auditing information disclosure reports enables us to view and remediate any disclosures that may have been unauthorized, or at least to prove to government regulators that disclosures are tracked as required. Nonrepudiation also applies to accountability. By providing protections such as digital signatures and encryption algorithms, an organization can ensure that the sender of an electronic message cannot deny sending it and that the receiver cannot deny receiving it. In this way, nonrepudiation assists organizations by providing ways to prove accountability.
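The following short Python sketch illustrates, in hedged form, how a digital signature supports nonrepudiation. It assumes the third-party cryptography package; the message content and patient number are invented for the example, and a real deployment would manage keys and certificates through a PKI rather than in application code.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The sender signs the message with a private key only he or she holds.
private_key = Ed25519PrivateKey.generate()
message = b"Discharge summary for patient 12345, 2024-03-01"  # illustrative content
signature = private_key.sign(message)

# Anyone holding the corresponding public key can verify the signature.
# A valid signature proves the message came from the private-key holder
# and was not altered; the sender cannot later deny having sent it.
public_key = private_key.public_key()
try:
    public_key.verify(signature, message)
    print("Signature valid: sender and content are authenticated")
except InvalidSignature:
    print("Signature invalid: message altered or sender not authentic")

Because only the sender holds the private key, a verified signature ties the message to that sender and to its exact content, which is the kind of proof an auditor needs when accountability is questioned.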

Understanding Security Concepts

The concepts that shape information security can seem abstract and complex. To help you understand the approaches and practices of information security, this section describes some basic approaches and methods to security that you should be familiar with. These concepts are central to information security practices and compliance, and many progressive security processes are based on them.

You should be familiar with three basic aspects of security that will help you better understand the information that follows in this and other chapters: security controls, defense-in-depth, and security categorization. Understanding these important concepts may also improve your ability to do a good job in supporting and providing security and privacy in your organization.

Security Controls  Security controls include management controls (often administrative, such as policies or procedures), operational controls (the processes we follow to do things), and technical controls (hardware and software implementations to assist in securing computer-based resources). The organization’s cost considerations and interrelationships between security controls have a great deal of influence on the ability of the organization to deliver on its mission.

Defense-in-Depth  Defense-in-depth consists of implementing various defensive controls that work together across your systems or applications to protect the overall security of organizational assets. In the IT world, examples of defense-in-depth include the integrated use of antivirus and antimalware software, firewalls, encryption, intrusion detection and prevention systems, and biometric authentication. An example of defense-in-depth in your home is an alarm system for the house that includes smoke and carbon monoxide detectors (which may be combined or separate units); cameras that you can check remotely over the Internet; and smartphone apps that enable you to control lighting, entry doors, and so on. Figure 5-2 demonstrates the defense-in-depth principle, and although it may not depict a system used in larger organizations, it provides a basic understanding of how the layers of the system must rely on one another to be effective.

Figure 5-2  Simplified defense-in-depth

Security Categorization  Security categorization enables you to determine the level of security required for a system based on the information (or data) type the system uses or maintains. This book specifically addresses working with healthcare information, which includes sensitive information such as protected health information (PHI) and electronic protected health information (ePHI), personal health records (PHRs), personally identifiable information (PII), and a number of other terms, depending on where you work (which can include the specific nation, continent, province, or state).

In determining the security categorization of a system, an application, or an organization overall, you must first identify, or categorize, the type of information or data involved. You then review the security controls required, including those you have implemented and those that should be implemented (referred to as “planned”). Organizations must determine the category, or categories, of information stored or used in their system, such as the healthcare information categories discussed in NIST SP 800-60, Vol. 2 Rev. 1, Section D.14.4. A similar system is shown in Figure 5-3. This method for security categorization can be adapted to fit any organization.

Figure 5-3  Categorizing information for security

Note that one system may include multiple types of data—for example, your health information system may also store insurance data, billing information, employee records, and so on, which may be categorized differently than PHI and PII. To categorize a variety of data, based on the information type, the organization sets a “provisional impact level.” Here’s an example of how a healthcare security categorization system process might work:

If Confidentiality = Medium, Integrity = High, and Availability = Medium, the overall security impact may be considered High if data integrity carries the most weight. An example in which data integrity might be weighted higher than the other information protection categorizations might involve a medical device such as the linear accelerator used in precision radiology treatment of cancer. The data that informs the actions of the device must be protected from tampering, arguably above all other considerations, so that the patient does not receive more or less radiation than prescribed and the treatment targets exactly the right area. This initial categorization can be accomplished by an individual with knowledge of the data system. However, the next step should be done by representatives who serve different functions within the organization, such as IT, biomedical engineering, administration, patient care services, and so on.
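One common way to implement this kind of weighting is the “high water mark” used in FIPS 199-style categorization: the overall impact is simply the highest of the three individual impact levels. The following minimal Python sketch assumes Low/Medium/High levels to match the example above; the system and the assigned levels are illustrative only.

# A minimal sketch of high-water-mark security categorization: the overall
# impact level is the highest impact assigned to confidentiality, integrity,
# or availability. The level assignments below are illustrative assumptions.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    ratings = [confidentiality, integrity, availability]
    return max(ratings, key=lambda level: LEVELS[level])

# Linear accelerator treatment-planning data: integrity is rated High,
# so the provisional categorization of the system is High.
print(overall_impact("Medium", "High", "Medium"))   # -> High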

The next part of the process involves two phases: the first phase involves reviewing the controls based on the provisional impact of High. In the second phase, the organization adjusts the controls based on its organization and systems. The organization also determines which controls are already in place (such as access control policies) and which ones are planned (such as training for system administrators).

In the final part, the organization finalizes the security categorization. Beyond that process, the organization works through the security controls and implements and documents whether, and how, each control meets the established criteria.

Although this process may appear cumbersome, it is necessary to ensure that the organization adheres to security standards and can prove it to patients, government agencies, and other stakeholders. Healthcare organizations must develop strategies for protecting sensitive data. Security categorization supports developing processes, procedures, and policies as part of the organization’s strategic vision. Categorization leads to effective implementation of established security standards. The result is useful documentation of the organization’s security approach that guides the implementation of security controls in a tiered fashion, based on the systems operated and the data protected.

Identity and Access Management

An organization’s identity and access management (IAM) strategy defines and manages the roles and access privileges of network users and the circumstances in which users are granted or denied those privileges. NIST guidelines describe IAM as a set of critical cybersecurity controls that function together to ensure the right people and things have the right access to the right resources at the right time.8 A successful healthcare IAM strategy must reconcile information use requirements with a complex technical environment and progressively demanding compliance requirements.

Although many technologies are available to help an organization create a general IAM strategy, each organization’s IAM strategy must align with its particular requirements, especially with regard to accessing PHI, PHRs, and PII. Following are a few examples of IAM:

•   Password-management tools  These can be used to create complex passwords on request, and they provide a secure, encrypted repository for current passwords for retrieval as needed.

•   Privileged account management systems  For users and accounts with elevated levels of permissions, or for administrative accounts, these systems provide management assistance, often with automation, and an auditable record of account use.

•   Provisioning (and deprovisioning) software  As employees are hired, this software maps permissions and access to various resources based on users’ roles or organizational policy (provisioning). User permissions and access are reviewed periodically, and when they’re no longer needed (such as upon employee termination), the software supports removing the user’s permissions and access (deprovisioning).

•   Certificate management  Certificates are digital credentials that bind encryption keys to their owners. Certificate management includes approval, issuance, inventory, distribution, control, suspension, and retirement. Certificates held by subscribers are documented by a registration authority, or responsible certificate authority, which is normally an external entity that issues trusted digital keys based on the authority’s initial root certificate.

•   Single sign-on (SSO) applications  This access management solution enables secure authentication to more than one system managed by an organization using a single user credential. The underlying identity framework is often the Lightweight Directory Access Protocol (LDAP), together with the LDAP credential directories needed to coordinate information services.

•   Multifactor authentication (MFA)  This additional layer of identification and access protection enhances the use of a password and user ID as credentials. MFA requires two or more pieces of evidence, which can include something you have (such as a key fob or smartcard), something you know (such as a passcode or PIN), or something you are (such as facial recognition). A minimal verification sketch follows this list.
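The following minimal Python sketch illustrates the factor-counting logic behind MFA. The factor names and helper parameters are hypothetical placeholders; in practice, MFA is enforced by an identity provider or authentication service rather than by application code like this.

# A minimal sketch of a multifactor check: evidence must come from at least
# two *different* factor categories, which is why two passwords never qualify.
def authenticate(password_ok: bool, token_ok: bool, biometric_ok: bool) -> bool:
    factors = {
        "something you know": password_ok,   # password or PIN
        "something you have": token_ok,      # key fob, smartcard, phone app
        "something you are": biometric_ok,   # fingerprint or facial match
    }
    satisfied = [name for name, ok in factors.items() if ok]
    return len(satisfied) >= 2

print(authenticate(password_ok=True, token_ok=True, biometric_ok=False))   # True
print(authenticate(password_ok=True, token_ok=False, biometric_ok=False))  # False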

NOTE   MFA is also referred to as two-factor authentication (2FA) because it requires at least two layers of identification. However, the use of two passwords does not constitute MFA or 2FA; the factors must be from different authentication components.

IAM helps the organization administer and secure credentials, privileges, and authentication processes in a consistent way. The various systems and tools that can be used to provide IAM can scale to the size of the organization and distribute access across multiple business and clinical entities. Centralized IAM models often use a directory service such as Active Directory, which holds user credentials in one repository. Federated architectures use applications such as PingFederate to distribute user credentials across multiple systems with authentication requirements. Federated IAM may use SSO to help reduce the administrative fatigue of having to provide credentials for each system. The SSO solution provides a level of trust that is shared with each individual system even though user credentials are not stored in each. In any case, an IAM system provides a capability to guard against excessive authorization levels and to protect against compromised user credentials.

The IAM security controls and mechanisms that accompany security processes constitute a valuable information protection resource for monitoring and enforcing policies and procedures in the healthcare organization. The processes of granting access, creating access roles, reviewing access periodically, and terminating access were once accomplished manually, instance by instance, using rosters and spreadsheets, a labor-intensive and frustrating approach. Today, many IAM systems are available to help organizations automate these tasks and improve security by reducing the human error and neglect that the monotony of the required tasks invites.

Access Control

The ability to use a technical asset is defined as access. The term usually refers to a technical capability such as reading, creating, modifying, or deleting a file; executing a program; or using an external connection. The term access is often confused with “authorization” and “authentication.” Access control protects sensitive information and is made up of all of the actions and controls put in place to regulate viewing, storing, copying, modifying, transferring, and deleting information. Controlling access also includes limitations of time and situation. For instance, an end user may be allowed to access a system at a predetermined time, but not afterward, or for certain reasons, but not others. An example of situational access occurs in an emergency room, where physicians are given some access to a patient’s behavioral health records at that point of care, but the same physicians would not have access to those records in a primary care setting.

Access control includes the following:

•   Authorization  Policies and procedures for determining which permissions an end user will have and when (that is, a user’s level of access).

•   Authentication  The process used to verify the identity of end users and validate that they are who they claim to be.

•   Identification  The act of indicating who a person is. In computing technologies, a person is usually identified by a username or identity code (for example, firstname.lastname). Note that identification does not include combining this identity code with a password; doing so is authentication, which pairs a simple identity code with a verification code, or password.

•   Nonrepudiation  Use of identification and authentication methods and tools in a digital transaction to provide indisputable or assured proof of who the sender and/or the recipient of the data is.

Organizations implement access controls to permit legitimate use by authorized personnel while preventing unauthorized use. Keep in mind that “legitimate use” can change depending on the situations at hand, however, so access control must occur at various levels and at key intervals of access. For instance, access control can be implemented at the operating-system level to safeguard files and storage media, and at the database level to guard against unauthorized access and potential corruption of data. In addition, well-designed applications and web services typically enforce several independent access controls to layer the protection, even if an SSO technology is used.

Access control is an effort to prevent unauthorized use, but it also allows for sharing of sensitive information at an acceptable level of risk to the organization. Without access controls in place, most organizations would consider it too risky to share PHI or PII, and that would prevent or degrade patient care in a digitized healthcare environment. For access control to be effective, an authentication process must support the controls.

Identifying personnel properly and granting them appropriate access levels can involve two types of mechanisms. The first, MFA, is present when at least two conditions are satisfied independently by the same individual who wants to access a network, system, or application. In many contexts, MFA and 2FA are synonymous, based on how many conditions must be satisfied. When an organization implements proper MFA, the process is recognized as strong authentication. The second type, computer-based access controls, or logical access controls, determine who (a person) or what (a process) can have access to a category of data or a computing system. These controls may be built into the operating system; may be incorporated into applications, programs, or major utilities (for example, database management systems or communications systems); or may be implemented through add-on security packages. Logical access controls may be implemented internally in the computer system being protected, or they may be implemented in external devices.

Logical access controls can help protect the integrity and availability of the following:

•   Software applications that include EHRs, as well as underlying operating systems, such as Windows and Unix, from unauthorized changes and misuse

•   Sensitive information through management and limitation of access by people and system processes, which reduces risk of unauthorized access or disclosure

Access controls can be administered globally for access to an entire network or at a single system level. A single system access control could be implemented to restrict access to one or two people for the ER pharmaceutical cabinet, for example. Access controls augment and enforce the administrative and physical controls that work together to protect sensitive information and, in the case of the pharmaceutical cabinet, access to controlled assets.

EXAM TIP   Know the exception to these access controls in HIPAA termed “break glass” procedures. This provision in HIPAA allows systems that include ePHI to be accessed in emergency situations, when, under normal conditions, access would be prohibited. The intention is to avoid patient safety or patient care adverse events resulting from lack of information and system availability.

Access control models are used to enforce authentication and authorization guidelines, and the controls used are automated. (Imagine how difficult it would be to check the credentials of each end user manually every time they desired access to data or to a system!) The common models for access control are mandatory, discretionary, role-based, attribute-based, and context-based. Some hybrid models combine features of each, but these are the most prevalent structures and approaches. You will recognize some overlap and common features in each and how they work together.

NOTE   Although these access control models are enforced by computer policies and configurations, some information security guidelines have provisions for overriding the policies and configurations, or for emergency access. The best example is the provision in HIPAA to provide “break glass” access in emergency situations. In other words, the system has a method for providing access to someone who may not have authorization or access under normal circumstances.

Mandatory Access Control

In mandatory access control (MAC), a central authority, such as the organization’s chief information officer (CIO), creates the access control policy, which is implemented by the IT department. The actual access control is enforced at the hardware or operating system level as a technical control. Most often, MAC is used in organizations that handle classified information, such as the military and, in some cases, a healthcare organization, where rigid, centralized controls are used in some applications and networked resources. In this model, individual system or data owners cannot change the level of access allowed.

A MAC model depends on proper security categorization of information, because the access policy in this case most likely will be determined by the sensitivity of the information. In healthcare organizations, information confidentiality is vital, so the central authority can determine who is allowed access to information. Improperly categorized information can lock out individuals who may need access, or it may allow access to those who have no need to know the information. Neither case is desired. On the positive side, having a central authority enforce the access control makes standard, equitable policies possible.

Discretionary Access Control

A discretionary access control (DAC) model is used if access control is more decentralized or delegated to the owner of an individual system or to the owner of the data itself. Privileges are granted by the system owner or data owner to whomever he or she considers authorized to access the information. DAC is more flexible than MAC, but it introduces greater risk. For instance, once someone has access to view a file, the system owner has little control over whether that person decides to copy it and share it with others.

Role-Based Access Control

Role-based access control (RBAC) is probably the most prevalent type of nondiscretionary access control model; in it, system owners determine the level of access based on a user’s or group’s job function or organizational role rather than individual attributes. Role-based access is implemented to match access to data or systems according to the functional or structural role an individual plays within the organization. For example, a doctor who works in the emergency department will have the same access to data granted to another doctor in the emergency department, but this access would not be granted to a doctor who works in the ophthalmology department. In some cases, there may be one type of access control for physicians in pediatrics and another for nurses in the same department. An advantage of RBAC is that when a new doctor is assigned to the emergency department, the menus, access, and capabilities of another doctor in the department can be copied if the new doctor is to have identical privileges.
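A minimal Python sketch of the role-to-permission lookup at the heart of RBAC follows. The role names, permission names, and user names are illustrative assumptions rather than the vocabulary of any particular EHR product.

# Permissions attach to roles; users inherit whatever their assigned role grants.
ROLE_PERMISSIONS = {
    "ed_physician":    {"read_ed_chart", "write_ed_orders", "read_behavioral_health"},
    "ed_nurse":        {"read_ed_chart", "record_vitals"},
    "ophthalmologist": {"read_eye_clinic_chart", "write_eye_clinic_orders"},
}

USER_ROLES = {"dr_lopez": "ed_physician", "dr_chen": "ophthalmologist"}

def is_permitted(user: str, permission: str) -> bool:
    role = USER_ROLES.get(user)
    return role is not None and permission in ROLE_PERMISSIONS.get(role, set())

print(is_permitted("dr_lopez", "write_ed_orders"))   # True: ED role grants it
print(is_permitted("dr_chen", "write_ed_orders"))    # False: different role

Onboarding a new emergency department physician then becomes a single role assignment rather than a copy of individual permissions, which is the administrative advantage described above.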

Rules-Based Access Control

Rules-based access control (RuBAC) uses rules, policies, or attributes to set predetermined conditions for user access. RuBAC is a policy-based control that sets policies or rules enabling access and permissions for particular users, actions, objects, or contexts. RuBAC differs from RBAC in that RuBAC depends on established rules to grant access, rather than user roles. In RuBAC, for example, a rule could allow network or resource access to some IP addresses but block others. Rules require more administrative design and maintenance, as specific combinations of attributes are built to allow precise levels and conditions for access. Consider the scenario, for example, in which a newly hired physician is granted access identical to that of other physicians with the same role under RBAC. RuBAC can add controls that increase or decrease access according to differences in job functions or dynamic needs for access, such as the time of day or the location from which the physician requests information. RuBAC uses properties that are described to the access control engine and become preset rules.
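The following minimal Python sketch layers a rules-based check on top of a role decision like the one sketched in the preceding section. The rule attributes (permitted hours and network range) are invented for illustration; production systems express such rules in a policy engine rather than in application code.

from datetime import time
from ipaddress import ip_address, ip_network

RULES = [
    {   # Remote access to behavioral health notes only during day shift
        "permission": "read_behavioral_health",
        "allowed_hours": (time(7, 0), time(19, 0)),
        "allowed_networks": [ip_network("10.20.0.0/16")],  # hypothetical VPN range
    },
]

def rule_allows(permission: str, now: time, source_ip: str) -> bool:
    for rule in RULES:
        if rule["permission"] != permission:
            continue
        start, end = rule["allowed_hours"]
        in_hours = start <= now <= end
        on_network = any(ip_address(source_ip) in net for net in rule["allowed_networks"])
        return in_hours and on_network
    return True  # no rule for this permission; fall back to the role decision

print(rule_allows("read_behavioral_health", time(22, 30), "10.20.5.9"))  # False: after hours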

NOTE   RuBAC is closely related to another emerging model, attribute-based access control (ABAC). In ABAC, a high-precision policy or rule for access can be enacted based on very specific conditions or attributes. ABAC has shown promise in the healthcare environment, where patient privacy can be maintained in emergency situations. Role-based access control may preclude access in some circumstances, particularly emergency situations, where it is more important to allow access based on environmental conditions than strictly on the person’s identity or normal job function.9

Context-Based Access Control

For context-based access control (CBAC), controls are not established at the user level but are based on settings within the firewall that control traffic flow based on application layer protocol session information. CBACs demonstrate that access controls are not always connected to a person. They are used to manage access by systems, services, and other computing assets. CBAC limits traffic using access control lists (ACLs), which implement packet examination at the network layer or at the transport layer. CBACs can inspect data at the network or transport layer but can also examine application layer protocol information.

Data Encryption

Encryption is another mechanism that can be used for logical access control, for both data in transit and at rest. This technical security control gives an organization the ability to limit who has access to sensitive data and to protect information confidentiality. Strictly speaking, encryption does not focus on protecting or providing data integrity or availability; however, cryptographic algorithms, called hashing algorithms, do provide for data integrity. Sensitive data, such as PHI and PII, must be encrypted under the prevailing information security regulations and standards, such as HIPAA and ISO 27001, to name just two. There are two basic states in which data can (and should) be encrypted: when it is at rest and when it is in transit.

Data at rest refers to data that is stored on any media and is not currently being processed or transmitted. If data at rest has been encrypted, media that is lost or stolen and recovered by someone unauthorized to view the data will be unusable or unreadable to the person who finds it, because that person does not have authorized access to it. In this case, the only people who can access the data are those who possess the cryptographic key to decrypt it. The principal concept is that the data is protected from unauthorized access and disclosure by being rendered unusable to anyone other than someone with authorized access. This type of encryption can be applied in two common ways: encrypt the entire volume of information on the disk with the same encryption key, or encrypt at the file level. Encrypting the entire volume of the disk has benefits in that human error is minimized, because users may forget or neglect to encrypt data if any part of the process is manual or discretionary, as file-level encryption can be. An advanced form of encryption of data at rest includes encryption of database files at the table, column, row, or even individual cell level. At this advanced level, you will encounter the tokenization process, in which identifying data is replaced within the file or database by an encrypted token. A re-identifying key is stored separately to enable reinsertion of the column-, row-, or cell-level data in the original file or database. However, to satisfy most leading security standards and regulations, data-at-rest encryption at the disk level and file level is sufficient.
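As a hedged illustration of file-level encryption of data at rest, the short Python sketch below uses the third-party cryptography package's Fernet construction (an assumption; whole-disk products and database-level encryption accomplish the same goal at other layers). The record content is invented, and in practice the key would be held in a key management system, never stored beside the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # must be stored and managed separately from the data
cipher = Fernet(key)

record = b"Doe, Jane, 12/14/70, penicillin allergy"   # illustrative PHI
encrypted = cipher.encrypt(record)                    # safe to write to disk

# Without the key, the stored bytes are unreadable; with it, they decrypt exactly.
assert cipher.decrypt(encrypted) == record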

Data in transit includes sensitive data transferred in e-mail messages as well as data transferred over networked connections. During these transfers, PHI and PII must be protected by encryption. The cryptographic key process is similar whether the data is at rest or in transit. To encrypt data in transit, an encryption key is attached to a digital certificate related to the public and private keys for each sender and recipient. The private key is secret, but the public key is not and can be used by anyone to encrypt the message. Only the private key of the intended recipient can decipher the message. These two keys are different, but they are related by a mathematical algorithm that makes this process possible. A certificate is the mechanism used to uniquely identify an encryption key and associate it with the asset owner, which assures confidentiality and prevents unauthorized disclosure.
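The following minimal Python sketch shows the public/private key relationship just described, using RSA-OAEP from the third-party cryptography package (an assumption; real e-mail and network encryption wrap this exchange in protocols such as S/MIME or TLS and bind the keys to certificates). The message content is invented for the example.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes only the public key.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can encrypt with the public key...
message = b"Referral note for patient 12345"          # illustrative content
ciphertext = recipient_public.encrypt(message, oaep)

# ...but only the holder of the matching private key can decrypt.
assert recipient_private.decrypt(ciphertext, oaep) == message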

It is important to note that the leading standard for cryptographic modules and their keys is NIST publication FIPS 140-2, Security Requirements for Cryptographic Modules. If an encryption solution has been tested and certified under FIPS 140-2, it can be used to provide safe harbor protection under HIPAA in the United States. Keep in mind the distinction between HIPAA safe harbor and the former DPD safe harbor provisions, as described in Chapter 4. Several important resources provide guidelines for implementation and use of encryption for data at rest and in transit. You will not need them on a daily basis, but you will use them to ensure that your configurations and solutions from product suppliers or developers meet your legal and local policy guidelines for encryption. The following are a few to consult:

•   NIST SP 800-175B, Guideline for Using Cryptographic Standards in the Federal Government: Cryptographic Mechanisms

•   NIST SP 800-111, Guide to Storage Encryption Technologies for End-User Devices

•   NIST SP 800-52 Rev. 2, Guidelines for the Selection, Configuration, and Use of Transport Layer Security (TLS) Implementations

•   NIST SP 800-77, Guide to IPsec VPNs

•   NIST SP 800-113, Guide to SSL VPNs

•   ISO/IEC 18033-x, Information technology – Security techniques – Encryption algorithms (This publication has multiple parts, indicated after the hyphen: Part 1 covers general techniques and more specific technologies are covered in later parts.)

Figure 5-4 depicts a simple e-mail transfer using encryption for data in transit. It illustrates components of public key infrastructure (PKI), which consists of all technology, personnel, and policies that work together to create, manage, and share digital certificates.10 The encrypted information can be decrypted, allowing access only by those possessing the appropriate cryptographic key. This is especially useful if strong physical access controls cannot be provided, such as for laptops or mobile media devices. This ensures that if information is encrypted on a laptop computer and the laptop is stolen, the information cannot be accessed, because it is rendered unreadable without the decryption key. Encryption can provide strong access control, but it must be accompanied by effective key management.

Figure 5-4  The transfer of encrypted data in transit

EXAM TIP   The cryptography described is public-key cryptography, also called asymmetric cryptography, in which one of the two keys used is public and the other is private. In contrast, private-key cryptography is called symmetric cryptography; both parties share the same secret key, which is known only to the key holders.

Training and Awareness

An active employee training and awareness program is one of the most cost-effective security controls that any organization can implement. Training and awareness can help prevent the breaches caused by employee mistakes and reduce complacency around handling PHI and PII. Training and awareness practices must be used in tandem to conduct a comprehensive workforce information security program.

Security training teaches employees skills and techniques that can help them be more successful at keeping healthcare information private and secure. Often, classes are delivered based on the end users’ level of access, whether they are system administrators or standard end users, for example.

Training is most effective when it’s delivered regularly. In fact, many required training courses are mandated for annual or more frequent recurrence, sometimes by law. This training is tracked, and compliance is reported to relevant leadership or to the appropriate government agencies. HIPAA, for example, requires that a healthcare organization provide training to all workforce members on relevant security policies and procedures.

Training can be especially effective when it’s offered on an ad hoc or as-needed basis. A class on how to encrypt an e-mail containing PHI may be helpful as a weekly in-service topic for a specific audience, for example. The more often ad hoc trainings are offered on a variety of timely topics, the more effective they are at influencing proper information security behaviors.

Training can address many levels, from basic security practices to advanced or specialized skills, and often focuses on role-based access requirements. For example, a system administrator who manages a server farm with a specific operating system and desktop/laptop environment may need security-relevant training to perform her role. A network administrator who manages a network that uses specific vendor products also requires security training that is specific to the systems he manages, but this training would be quite different from that of the server administrator. Although these two examples are specific to technology, training is also important for employees with regard to specific duties performed in nontechnology roles. An example of such training could include payment card industry (PCI) requirements for staff who work in billing or cashiers in the insurance support areas of the healthcare organization.

Security awareness is the preferred outcome of training activities. Awareness is often incorporated into basic security training and can involve any method that encourages employee awareness of best security practices. Most awareness programs include marketing and communication of relevant information through information security campaigns using posters displayed in prominent areas, newsletters, banners that pop up during login, or announcements made in staff meetings, for example. Although delivery of security information could also be viewed as a form of ad hoc training, the targeted nature of the messages and the audience provide a distinction.

Awareness improves when all employees or certain groups of employees are regularly informed of proper security practices at work—such as how to create appropriate passwords, what to do in the event of a virus or other security issue, or how and when to notify appropriate staff of a potential security violation. Many organizations require annual reviews of policies or procedures to test employee awareness. Awareness also improves employee security practices, such as logging off a computer system when it’s not in use.

Sanction Policy

Even the best training and awareness programs cannot prevent every employee-caused security incident. To address employee violations of the organization’s privacy and security practices and policies, a sanction policy is required. A sanction policy is a set of prescribed actions that management can take with regard to employee security violations.

Although some incidents are so serious that immediate termination is appropriate, the majority of incidents are accidental in nature. Therefore, the policy should outline progressive levels of discipline and provide for management discretion whenever possible. To facilitate this, the organization must categorize the types of infractions and match them against the various types of penalties to be considered. In the end, a good sanction policy will be fair and consistent, not only with regard to who commits a violation but also with regard to other organizational human resources disciplinary policies.

As a part of the organization’s training and awareness program, data from the sanction policy enforcement process is invaluable in determining trending issues and reasons for breaches, as well as for aggregate reporting of outcomes. Research has demonstrated that informing employees specifically about the sanction policy and actions related to it helps improve employee compliance with information protection policies in healthcare settings.11

Logging and Monitoring

Within the information security environment are many different event logs and types of logging. Basically, a log is a record that is generated by the processing of events on the network and on systems, applications, and end-user devices. Each specific event is recorded. The logs that relate to potential security events, such as failed login attempts or denied access incidents at the firewall, can be helpful tools. One of the principal duties of an information security and privacy professional is to review logs actively. The information gained from logs is invaluable in supporting performance improvement, detecting abnormalities from a security perspective, supporting forensics, and responding to legal requests for historical data.

Monitoring and tracking the health and status of the system and its operations may require specialized training by staff members to understand what information is being collected, how to determine whether a potential violation or suspected event may have occurred, and how to identify issues up to and including vulnerabilities to a specific system or application. In short, logging and monitoring are functions of generating performance and security data and acting on any events that trigger alerts.

Reviewing logs is a daunting task, because the sheer volume and complexity of logs have increased over time. Manual review approaches are infeasible as the need to monitor the computing environment has grown. Monitoring today requires automated methods, in which rules and/or parameters are set to distinguish normal network behavior from potential incidents or events. For instance, a good monitoring process would provide an alert when a simultaneous logon by the same end user occurs on the network inside the computing domain and on the organization’s virtual private network. This incident would indicate the likelihood of spoofed or stolen credentials. Although automated monitoring is the norm, administrators require the appropriate training and knowledge to set up the monitoring tools and to understand the complex rules of behavior, what normal behavior is, and what should trigger an alert.
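A minimal Python sketch of the simultaneous-logon rule described above follows. The log record format and field names are invented for illustration; a SIEM would normally apply a correlation rule like this across events collected from many systems.

from collections import defaultdict

# Hypothetical normalized logon events gathered from the domain and the VPN.
events = [
    {"user": "jdoe",   "source": "domain", "action": "logon", "time": "09:02"},
    {"user": "jdoe",   "source": "vpn",    "action": "logon", "time": "09:03"},
    {"user": "asmith", "source": "domain", "action": "logon", "time": "09:05"},
]

active_sources = defaultdict(set)
for event in events:
    if event["action"] == "logon":
        active_sources[event["user"]].add(event["source"])
        # Concurrent sessions from inside the domain and over the VPN suggest
        # spoofed or stolen credentials and should raise an alert for review.
        if {"domain", "vpn"} <= active_sources[event["user"]]:
            print(f"ALERT: {event['user']} has concurrent domain and VPN sessions")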

A growing trend within logging and monitoring, enabled by improved automated tools and processes in security automation, is information security continuous monitoring (ISCM). NIST SP 800-137, Information Security Continuous Monitoring (ISCM) for Federal Information Systems and Organizations, defines ISCM as the ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions. More important, the publication outlines domains of potential automation and best-practice philosophies around those domains, as depicted in Figure 5-5. Although these domains are only introduced here, the key point is that logging and monitoring in these domains is increasingly automated and continuous.

Images

Figure 5-5  Security automation domains

Vulnerability Management

A vulnerability is a weakness in the state of a computing asset that creates potential risk for the organization and its information assets. A vulnerability is what makes it possible for an internal employee’s mistake or an external attack to compromise security. We often think of vulnerabilities when we assess whether systems and applications are up to date from a security perspective. But along with software and other technical controls, a vulnerability can also relate to the lack of physical and administrative controls. For example, an unlocked office door could allow physical access by an unauthorized person, and the lack of a privileged access review process could lead to credential theft or misuse.

The measure of vulnerabilities is a central component of evaluating risk, particularly the likelihood of an event happening. Vulnerabilities are simply indicators that alert risk managers to consider actions and to balance risk, factoring in the likelihood of occurrence versus cost implications. Commonly, the cumulative impact of vulnerabilities increases concern and drives risk management priorities. In some cases, a single vulnerability could be significant enough that risk management actions are elevated to the highest priority. Figure 5-6 shows the interrelationships of vulnerability and risk.

Images

Figure 5-6  Relationships among threats, vulnerabilities, safeguards, and assets

Images

NOTE   Updating systems and networks while they are operating in the production environment is important. However, a foundational component of vulnerability management starts before a system or network is put into operation. The concept of system hardening addresses this initial phase of vulnerability management. System hardening consists of configuration management that assures a baseline security level for any asset in production that includes industry-recommended security configurations and settings. For information about Windows baselines, or benchmarks, visit https://www.cisecurity.org/benchmark/microsoft_windows_desktop. This information can be helpful in hardening or securing an endpoint system using Microsoft operating systems, as recommended by the Center for Internet Security (CIS).
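As a rough illustration of baseline checking, the following Python sketch compares a host’s reported settings against an approved hardening baseline. The setting names and values are invented for illustration; they are not taken from the CIS benchmarks themselves.

```python
# Minimal sketch: compare a host's reported settings against a hardening baseline.
# Setting names and expected values here are assumptions, not actual benchmark content.
baseline = {
    "password_min_length": 14,
    "screen_lock_timeout_minutes": 15,
    "smbv1_enabled": False,
}

host_settings = {
    "password_min_length": 8,
    "screen_lock_timeout_minutes": 15,
    "smbv1_enabled": True,
}

def baseline_deviations(baseline, actual):
    """Return settings that do not match the approved hardening baseline."""
    return {
        name: (expected, actual.get(name))
        for name, expected in baseline.items()
        if actual.get(name) != expected
    }

for name, (expected, found) in baseline_deviations(baseline, host_settings).items():
    print(f"Deviation: {name} expected {expected}, found {found}")
```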

If we implement the proper safeguards, we can reduce vulnerabilities and mitigate the risks of most threats. Vulnerability management can never eliminate all vulnerabilities, however. Some threats will exploit a vulnerability that we are unable to manage with a security control or safeguard. For example, if a system runs an obsolete operating system that is no longer supported by the manufacturer, operating system upgrades and security patches are no longer available. If the outdated system is valuable or important to the business or clinical practice, the vulnerability may have to be accepted, and compensating controls such as network segmentation and limited access may be the best approaches to mitigating the risk, though they do not eliminate it.

The better an organization manages its implementation of safeguards, the less the chance that a vulnerability will affect its systems. In reality, however, there will always be new vulnerabilities leading to risk. The role of a healthcare information security and privacy professional is to design, implement, and enforce the security controls or safeguards that reduce the risk of vulnerability exposure. To measure and evaluate whether the proper safeguards are in place and operating effectively and efficiently, a security controls assessment is a necessary part of your role.

The practice of vulnerability management includes the process of patching systems and applications with updated code or software changes. This patch management process is a technical control that is vital to maintaining a properly safeguarded network and computing environment. Operating systems and applications are generally designed and implemented to be secure when they are introduced to the marketplace. However, vulnerability management, specifically patch management, is a dynamic, ongoing process that addresses newly identified vulnerabilities in established application code. Sometimes a vulnerability is found through exploitation, such as a hacking attempt or the introduction of malicious code. Other times, software developers discover and disclose the vulnerability before anyone exploits it (the preferred scenario, of course). In such cases, the patch required to fix the vulnerability is coded, developed, tested, and distributed to the marketplace as quickly as possible in hopes of preventing malicious activity.

You may have heard of a vulnerability being exploited “in the wild.” That indicates an active exploit in the production environment of one or more organizations, in contrast with an exploit created under lab or research conditions. “In the wild” exploits increase the criticality of patches, because the likelihood of further exploits against other organizations increases significantly.

The process of patch management has benefited from security automation. Patches (once tested and validated) can be distributed automatically across a local area network and applied to all networked resources. With some exceptions for certain operating systems, application compatibility, and patient safety concerns (see the following Exam Tip), this approach is efficient and effective.

Images

EXAM TIP   An example of where best-practice security in other industries must be tailored when applied to healthcare organizations is medical device vulnerability management. For FDA-regulated medical devices, the operating system patching process should not be automatic. Each patch must be evaluated and approved by the device’s manufacturer before being implemented on that device. In short, medical devices can often be patched, but the manufacturer must first test and approve the patch for each particular device. Otherwise, the addition of a patch, which in reality is a piece of third-party software, can result in a patient safety issue if the medical device malfunctions after the patch is applied. Vulnerability management for medical devices is improving all the time. Get familiar with the dynamic content at https://www.fda.gov/medical-devices/digital-health/cybersecurity to stay abreast of trends in this area.
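The following Python sketch illustrates how a patch-deployment process might gate automatic patching on whether an asset is an FDA-regulated medical device with manufacturer approval for that specific patch. The asset fields, patch identifiers, and approval workflow are assumptions for illustration only.

```python
# Illustrative patch-eligibility check; asset fields and the approval workflow are assumptions.
assets = [
    {"name": "nurse-station-07", "is_medical_device": False},
    {"name": "infusion-pump-12", "is_medical_device": True,
     "manufacturer_approved_patches": {"KB500123"}},
]

def can_auto_patch(asset, patch_id):
    """General-purpose endpoints can receive the patch automatically; FDA-regulated
    medical devices require the manufacturer's approval of that specific patch first."""
    if not asset["is_medical_device"]:
        return True
    return patch_id in asset.get("manufacturer_approved_patches", set())

for asset in assets:
    action = "auto-deploy" if can_auto_patch(asset, "KB500999") else "hold for manufacturer approval"
    print(f"{asset['name']}: {action}")
```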

Segregation of Duties

The segregation of duties is a sort of checks and balances system implemented to reduce the risk of accidental or deliberate misuse of information. It involves processes and controls that help create and maintain a separation of security roles and responsibilities within an organization to ensure that the integrity of security processes is not jeopardized, and to ensure that no single person has the ability to disrupt a critical computing process or security function. A segregation of duties policy, for example, prevents an individual from making system changes and then changing the audit logs so that there is no record of the changes. Or, for example, in another area of the healthcare organization’s operations, an individual who is permitted to request a payment to a vendor or customer should not be the same person who issues the check for that payment.

Vulnerabilities are not always technical in nature. The ability of one individual to cause unauthorized changes or disruptions, inadvertently or intentionally, is an example of a nontechnical vulnerability to the overall information security program.
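A simple way to picture a segregation-of-duties control is a check that the same person cannot both request and issue a payment, as in the earlier example. The following Python sketch assumes hypothetical payment records and user IDs.

```python
# Minimal segregation-of-duties check for the payment example above; records are illustrative.
def violates_segregation(requested_by, issued_by):
    """A payment request and its issuance must come from different individuals."""
    return requested_by == issued_by

payments = [
    {"id": 1001, "requested_by": "mlee", "issued_by": "kpatel"},
    {"id": 1002, "requested_by": "mlee", "issued_by": "mlee"},
]

for p in payments:
    if violates_segregation(p["requested_by"], p["issued_by"]):
        print(f"Payment {p['id']}: segregation-of-duties violation ({p['requested_by']})")
```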

Images

NOTE   Segregation of duties in the financial sector is bolstered by a required policy that forces employees in designated critical positions, such as those with elevated access and authorization privileges, to take mandatory time off, commonly for five consecutive business days. This practice is meant to increase the organization’s ability to detect insider threats. Some healthcare organizations use this practice as well as job rotation, which minimizes the control a single individual can have by forcing periodic breaks in duties and moving employees into other roles on the team.

Least Privilege (Need to Know)

The term “least privilege” may be more familiar to many of us as the “need to know” principle. Least privilege refers to each user having only the permissions, rights, and privileges necessary to perform his or her assigned duties, and no more. In addition, users of information should be granted access only to the information they need to perform their duties. Least privilege does not mean that all users have extremely limited functional access. Some employees will be granted significant access to data if it is required for their position. Following this principle can decrease the risk of and limit the damage resulting from accidents, errors, or unauthorized use of system resources.

Many of the concepts within information security (and privacy) relate to avoiding access or disclosure that is not needed. Generally, least privilege or minimal use concepts must support every information protection program. It is imperative to protect information from individuals who have no need for it, have no reason to use it, or no longer need it. Limited access policies go hand in hand with least privilege policies. These policies require that only as much information as needed should be disclosed, transferred, used, and stored, and as soon as the information is no longer required, it should be destroyed.
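Least privilege is often implemented as role-based permissions, where each role is granted only the rights its duties require. The following Python sketch uses invented role names and permission strings to illustrate the idea; a real healthcare system would draw these from its identity and access management platform.

```python
# Sketch of least privilege as role-based permissions; role names and rights are assumptions.
role_permissions = {
    "registration_clerk": {"read_demographics"},
    "nurse": {"read_demographics", "read_clinical", "write_clinical"},
    "billing_specialist": {"read_demographics", "read_billing", "write_billing"},
}

def is_authorized(role, permission):
    """Grant only the permissions explicitly assigned to the role; deny everything else."""
    return permission in role_permissions.get(role, set())

print(is_authorized("registration_clerk", "read_clinical"))  # False: not needed for the job
print(is_authorized("nurse", "write_clinical"))              # True: required for assigned duties
```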

Images

NOTE   It is important to ensure that the implementation of least privilege is complemented by a plan for substitute or rapid granting of access to other personnel when an authorized individual is not available. Without careful planning, access control can interfere with contingency plans and ultimately, patient safety.

One of the distinctive elements of providing information protection in a healthcare environment is that least privilege concerns can be highly dynamic. For example, a physician may have access to records one day because she is dealing with a specific patient’s case or a specific responsibility, but the next day that access may be restricted if her duties change. This commonly happens as physicians see patients in emergency situations, such as when a behavioral health patient arrives in the emergency room. Another example is peer review under medical records management processes, in which physicians may be granted temporary access to pediatric records even though their normal clinical duties do not include patients under the age of 18. This flexibility is important within a healthcare setting for continuity of patient care.

Business Continuity

Business continuity, or continuity of operations, includes all the actions taken to enable a healthcare organization to perform clinical and business services with minimal to no interruption or degradation. The organization must be able to perform certain activities in an effort to continue its mission should an unforeseen event occur. The ability to deliver medical care, for example, could be affected by electrical outages, weather-related events, community-based events such as riots, accidents, or even a serious outbreak of a virus that affects staffing levels. The healthcare organization must plan for and have procedures in place for such events.

Few industries are required to function 24/7, but in healthcare, this is a must. Not only is this driven by regulatory pressures, but many governments at the national, provincial, and state levels also require reporting procedures when a healthcare organization is not functioning. For our purposes, we concentrate on information asset functioning, so network or application downtime is the primary issue. In the United States, healthcare is considered critical infrastructure, and the NIST publication Framework for Improving Critical Infrastructure Cybersecurity provides guidelines to help assure continuity of operations.

As with all security controls, business continuity plans also include time-related elements that help ensure that a facility can recover from an issue quickly. Having a continuity of operations plan (COOP) for disasters is a preventive aspect of security control, and monitoring network activity is a detection function. Once a disaster happens, the optimal method for business continuity is a redundant system or an alternative source of recovery. For example, if an electrical outage occurs, a healthcare organization may switch temporarily to power provided by diesel-powered generators, so that patient care and business processes are not interrupted. When network resources are unavailable, because of power outages or for another reason, manual processes should be implemented. As an example, a lab system that processes samples using bar code technology may also have a manual process that uses handwritten intake forms. Personnel should be trained and able to use the manual forms in the event that the bar code–scanning system is unavailable.

Images

NOTE   Most organizations have implemented and rely upon advanced information technology solutions. In some cases, the manual processes once used are no longer possible. Consider, for example, a healthcare facility that currently uses a computerized system to order patient lab procedures. If the system goes down, healthcare workers would need an alternative way to order labs, such as using paper lab request slips with checkmarks, signatures, and so on, for the applicable tests to be conducted. However, if there is no supply of those paper slips, ordering the appropriate lab procedures would be difficult, if not impossible. Or perhaps the medical staff has not been trained in how to access and use those paper slips if the computerized system goes down. These examples illustrate that we tend to rely on the technological improvements to the point at which certain manual processes can no longer be relied upon or accomplished.

Disaster Recovery

Disaster recovery is another important security control that supports business continuity. The ability to recover systems, specifically information technology systems, is a vital aspect of operations in any business, but in healthcare organizations, especially, it can mean the difference between life and death.

Power outages, weather events, or other incidents can cause damage to the data center where the systems are housed. Even if the system itself is not damaged, restarting or configuring the system is an important consideration in a disaster recovery plan. Once a system has been remediated (and after the threat is no longer present), a process for bringing the system back online is needed. This is true for every system or application. For example, if a database server is corrupted by an attack, the server must obviously be tested and evaluated prior to putting it back into the production environment.

Images

NOTE   Disaster recovery procedures must be in place to prioritize operations and restore systems in a staggered process. Following a total network outage, restarting all systems immediately and simultaneously can result in unintended failures at some network locations, which can result from electrical surges or load balancing issues, for example.

The best COOP, specifically for system recovery, includes a plan for a scheduled, prioritized system restart. Clinical systems and business systems would be restarted first. Then additional systems could be started, while the impacts of the restarts are monitored. Depending on the duration of the outage or system failure, a significant amount of new and sensitive healthcare information may have been collected. That data will need to be integrated into the entirety of the patient record and related business records.
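The following Python sketch illustrates one way a scheduled, prioritized restart might be sequenced, with a pause between systems so their impact can be observed. The system names, priorities, and delay value are assumptions for illustration.

```python
import time

# Illustrative restart schedule; system names, priorities, and the delay are assumptions.
systems = [
    {"name": "EHR", "priority": 1},
    {"name": "lab-results", "priority": 1},
    {"name": "billing", "priority": 2},
    {"name": "patient-portal", "priority": 3},
]

STAGGER_SECONDS = 5  # pause between restarts so load and power draw can be monitored

def staggered_restart(systems):
    """Restart systems in priority order, pausing between each to observe impact."""
    for system in sorted(systems, key=lambda s: s["priority"]):
        print(f"Restarting {system['name']} (priority {system['priority']})")
        # In practice this would call the platform's restart mechanism and check health.
        time.sleep(STAGGER_SECONDS)

staggered_restart(systems)
```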

System Backup and Recovery

System backup processes and recovery capabilities are fundamental components of a healthcare organization’s resiliency. To illustrate, a University of Texas researcher estimated that large hospitals’ losses can reach $1 million per hour during an unplanned outage of their EHR.12 Having secondary or duplicate copies of data, high-availability systems, and alternate operating locations are all part of a robust resiliency program to ensure the availability of data and reduce downtime.

Events such as ransomware attacks or natural disasters may cause primary systems or sources of data to be unusable. System backups and recovery plans alleviate the impact of these risks. Backups are needed for other reasons as well, such as for audits, forensics, and other regulatory information requests. For the most part, we will concentrate on the use of backups and recovery as part of a security control to maintain the CIA triad. Factors that determine the strategies for system backup and recovery include the sensitivity of the information, the risk tolerance of the organization, and operational requirements.

Backup Storage Approaches

There are several types of backup approaches with distinct purposes. With advancing technology such as cloud platforms and cloud storage solutions, you will encounter variations in these approaches. Learning the traditional backup cycles is a good start, however, so we will cover the most traditional forms here—full backups, incremental backups, and differential backups:

•   Full backups  A full backup includes a complete copy of the entire system, including the entire database, operating system, boot files, and an exact copy of the system drive. Usually, the first backup copy is a full backup, and subsequent backups are usually not full backups. Storage requirements are a significant concern if previous backups are not deleted. To be safe, a full backup is good practice before any major system upgrade or migration. This backup can be used to fully restore the last known good copy of the system files. Creating a full backup is the slowest and most storage-intensive of the options presented here, although restoring from a single full copy is the most straightforward.

•   Differential backups  This process backs up only the changes made since the last full backup, and each subsequent differential copy accumulates all modifications made since that full backup. Differential backups occur each day until another full backup occurs. Differential backups help to compress the time it takes to restore a system, but depending on their frequency, storage costs can become a problem because each copy grows larger. To restore from a differential backup, you would normally use the latest differential copy together with the full backup copy.

•   Incremental backups  An incremental backup copies only the files, data, and system information that have changed since the previous backup of any type. You can run incremental backups more often because of the relatively small storage requirements and the cost-effectiveness of the solutions. Restoration from incremental backups requires the last full backup plus every incremental backup taken since it, applied in sequence.
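The restore chains described above can be illustrated with a short Python sketch that selects which backup copies a restore would need under each strategy. The backup records and field names are invented; the list mixes both strategies only so the two restore chains can be shown side by side.

```python
# Sketch of which backup copies a restore needs under each strategy; records are illustrative.
backups = [
    {"day": 0, "type": "full"},
    {"day": 1, "type": "incremental"},
    {"day": 2, "type": "incremental"},
    {"day": 3, "type": "differential"},  # a real site picks one strategy; both shown for comparison
]

def restore_set(backups, strategy):
    """Return the backups needed to restore, in the order they would be applied."""
    last_full_idx = max(i for i, b in enumerate(backups) if b["type"] == "full")
    chain = [backups[last_full_idx]]
    if strategy == "incremental":
        # Full backup plus every incremental since it, applied in sequence.
        chain += [b for b in backups[last_full_idx + 1:] if b["type"] == "incremental"]
    elif strategy == "differential":
        # Full backup plus only the most recent differential.
        diffs = [b for b in backups[last_full_idx + 1:] if b["type"] == "differential"]
        if diffs:
            chain.append(diffs[-1])
    return chain

print([b["day"] for b in restore_set(backups, "incremental")])   # [0, 1, 2]
print([b["day"] for b in restore_set(backups, "differential")])  # [0, 3]
```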

Backup Storage Locations

Related to the ability to restore backups and operate during a business disruption is the location of backups. A risk assessment and business requirements evaluation will help determine which location approach the organization should use. The correct choice depends on how much downtime the organization can accept, any pertinent regulatory requirements, and which solution or combination of solutions offers the best options regarding data availability.

The most secure locations for data storage during a catastrophic event are offsite locations, particularly because onsite data centers may be unusable or destroyed. There are three types of offsite configurations:

•   Hot site  A hot site is a computing environment that is almost identical to the onsite environment. This is the best choice for critical systems. To minimize disruption of services, the hot site runs concurrently with the main production environment. A hot site is the costliest of the location options, because these sites continuously operate and secure data assets. System backups at a hot site may occur via two main approaches: If the hot site is intended to serve as a high-availability solution, backups are continuous or flow to the site in real time as a secondary data stream. In this scenario, hot site systems provide a redundant capability that maintains uptime close to 99.999 percent. Otherwise, hot sites typically receive at least daily backups so the site can be made operational in a very short amount of time, usually measured in a few hours. A good hot site is located far enough away from the source site to be outside of any shared risks, such as earthquakes, hurricanes, and so on.

•   Warm site  Warm sites are locations with adequate equipment, such as hardware and software, that can be accessed fairly quickly when needed. The warm site normally has electricity, telecommunications, and networking infrastructure at a minimum. Backups are stored there in case they are needed for recovery. In the event of a disaster or cyber incident that causes disruption at the source site, the warm site would be brought to full power. There would be a short delay to production levels for critical systems, because immediate recovery is not expected.

•   Cold site  The least costly option is a cold site. A cold site includes the bare minimum and may store nothing but backups, or backups could be transported or transferred to the cold site when required. A cold site would take the longest to power up and bring to production levels.

Images

NOTE   An emerging alternative to physical (on-premises) backup sites is the use of cloud solutions that can act as hot or warm sites. An availability zone can provide an isolated, logical data center in the cloud environment. Backups can be transferred from the physical site and stored in the availability zone, which can then be initiated as the alternate production environment when needed. Using the cloud can be a more reliable and cost-effective solution than physical facilities for data storage and backups.

Configuration, or Change Management

To help ensure integrity within the parameters of the CIA triad, organizations establish a configuration, or change management process. This process is part of an organization’s overall information governance approach and a valuable security control measure to ensure consistency in how changes are made to the network, systems, and applications. The goal of change management is to establish standard procedures for managing changes efficiently to control risk and minimize disruption to IT services and business operations.

A change management process is a critical best practice that’s included in the Information Technology Infrastructure Library (ITIL) framework. ITIL recognizes the value of an efficient, organized methodology for taking new products or updates of existing products from design to operations without adding risk.13

In the realm of healthcare, a change management process may affect a laboratory department that, for example, wants to purchase a new information system for processing lab results. The new system would require network access and must interface with current systems, including the EHR. Imagine the result if the acquisition and implementation of this new system did not include a formal, management-level review of the operating system, interfacing requirements, physical installation environment, and other significant factors. Without a change management process in place to ensure that the new acquisition is consistent with existing systems, the results could be disastrous: the new system might fail to interoperate with existing systems, chiefly the EHR, or it might not work at all.

Other considerations for change management could involve required changes to perimeter security defenses, such as border firewalls. If a networked system needs to communicate with external entities across the Internet, its data traffic must exit the organization’s network through the firewall via distinct ports using specific protocols. Outside of the ports and protocols approved in the firewall’s access control list (ACL), all other traffic is blocked. Special-purpose computing systems, such as medical device systems, may require the use of a port not on the current ACL. To request or ultimately make changes to perimeter defenses, a formal change management process is required.
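To illustrate how a perimeter change might be gated by the formal process, the following Python sketch models a hypothetical change request for a new firewall rule that cannot be implemented until risk review and change-board approval are complete. The fields, statuses, and port number are assumptions, not drawn from any specific change-management product.

```python
# Illustrative change-request record for a firewall rule; fields and statuses are assumptions.
change_request = {
    "id": "CR-2043",
    "summary": "Open TCP 2575 outbound for medical device HL7 interface",
    "rule": {"direction": "outbound", "protocol": "tcp", "port": 2575, "destination": "vendor-gateway"},
    "risk_review_complete": False,
    "approved_by_change_board": False,
}

def may_implement(cr):
    """A firewall change is implemented only after risk review and change-board approval."""
    return cr["risk_review_complete"] and cr["approved_by_change_board"]

print(may_implement(change_request))  # False: the request must go through the formal process first
```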

Incident Response

Incident response is a corrective measure that takes effect after a security incident. A successful incident response strategy can limit the duration and impact of an exploited vulnerability. Today, as data breaches seem almost inevitable no matter how well you implement security processes and controls, the one security control that may make the most difference is your incident response strategy. NIST SP 800-61 Rev. 2, Computer Security Incident Handling Guide, provides direction for establishing a solid incident response program that accomplishes detection, analysis, prioritization, and handling of security events, such as data breaches. According to Stroz Friedberg (Aon), a leading global cybersecurity services firm with specific expertise in incident response, “The way an organization responds can be the difference between exacerbating the reputational and financial damages from a breach, and mitigating them.”14 A solid incident response strategy will also account for the significant regulatory requirements regarding notification, such as HIPAA and HITECH in the United States and the European Union’s DPD internationally.
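As a rough illustration of the detect, analyze, prioritize, and handle flow, the following Python sketch triages hypothetical incident records by severity and flags those that need breach-notification analysis. The categories, fields, and notification trigger are simplifications for illustration and are not prescribed by NIST SP 800-61.

```python
# Minimal triage sketch; severities, fields, and the notification trigger are assumptions.
def triage(incident):
    """Assign a handling priority and flag events that need breach-notification analysis."""
    severity = "high" if incident["phi_involved"] or incident["systems_down"] > 0 else "low"
    return {"id": incident["id"], "severity": severity,
            "notify_privacy_officer": incident["phi_involved"]}

incidents = [
    {"id": "IR-101", "phi_involved": True, "systems_down": 0},
    {"id": "IR-102", "phi_involved": False, "systems_down": 0},
]

for result in map(triage, incidents):
    print(result)
```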

Understanding Privacy Concepts

As an HCISPP, you are expected to have a solid understanding of privacy concepts. Some concepts are used independently of security controls. However, as the healthcare industry becomes ever more digitally based rather than paper based, the integration of privacy and security controls also increases. The significance of privacy cannot be overstated. Worldwide, the emphasis on privacy as a human right can lead to enforcement under criminal law. In this section, we examine how various privacy frameworks incorporate privacy concepts. Another current concern is preserving privacy in the face of social media, public surveillance, and information sharing.

Within healthcare, the roles and responsibilities of those who are charged with protecting information converge around distinct roles that may or may not have previously involved working with digital or electronic information. Some roles originate from traditional privacy or legal roles in health information management, with a shift from paper-based information storage to digital storage; others come from IT support backgrounds, such as local area networking, application management, and end-user support, where new concerns over protected health information are relevant. Still others may come from the clinical engineering or biomedical technology professions, where the interconnectivity of medical devices with internal and external networks is rapidly evolving. Figure 5-7 depicts this intersection.

Images

Figure 5-7  Convergence of healthcare competencies with information privacy and security responsibilities

For these previously distinct and somewhat separated communities, this section provides a primer in privacy compliance for those with stronger backgrounds in security, and in security compliance for those with stronger backgrounds in privacy. It offers guidance to those who have a responsibility for complying with information privacy in healthcare, as well as those with traditional privacy or legal compliance roles in healthcare, who have increasing roles in protecting digital information through information security management.

Images

EXAM TIP   The distinction between privacy and security has begun to narrow. Some advocate that privacy is a concept embedded in the practice of providing information security or cybersecurity. However, for purposes of this text and the exam, we will maintain a distinction: Information privacy concerns what is being protected and why. Security addresses how an organization can protect the private information.

The following concepts and definitions are found in the leading privacy frameworks and regulations. Minor differences may exist, particularly where terms are combined to make one principle rather than two distinct principles in the framework. Refer back to Chapter 4 to review these terms in the context of the framework that advocates them.

US Approach to Privacy

The United States does not apply a uniform data privacy policy across all industries and data collectors. Because of a variety of factors, and to maintain a free-market economy, the United States approaches data privacy from a sector, industry, or functional perspective. The central principle in this approach is that government does not set a singular policy that transcends industries. Instead, each industry is governed by a combination of self-imposed guidelines and government-originated regulations specific to that industry. What results is incremental legislation that is focused on specific concerns. Examples include the Electronic Communications Privacy Act of 1986, the Children’s Online Privacy Protection Act (COPPA) of 1998, the Fair Credit Reporting Act, and the 2010 Massachusetts Data Privacy Regulations.

To understand how the United States approaches privacy law, you must consider the US Constitution and that the establishment of the federal government included an intentional reluctance to influence or meddle in private business and the economy. In this case, the result is a reluctance (at least initially) for the federal government to legislate privacy. Additionally, privacy in the United States tends to be limited by what society is willing to accept, and that can change significantly over time. Privacy laws have emerged and are enforced within industries at a federal level, such as HIPAA in healthcare, and at the state level, where some states have relatively stringent privacy laws. The recent California Consumer Privacy Act (CCPA) is a good example.

European Approach to Privacy

The European Union approached the concept of privacy from a very different perspective than the United States. Because of historical experiences such as the Nazi regime, many Europeans have a natural suspicion and fear of intrusive, unnecessary, unfettered access to personal information, which has led to the creation of stringent privacy protection laws.15 This is reflected not only in the overarching data protection approach of the DPD, which has been updated to the GDPR, but also in how the European Union views data transfer to non-EU nations. Additionally, there is variation between the European Union and other nations in what identifiers are considered personal information. For instance, the European Union includes race, ethnicity, and union status as protected, sensitive information. Identification of individuals based on such information was once the catalyst for secret denunciations by neighbors and friends in the 1930s, leading to detentions of targeted individuals and groups (for example, people with Jewish heritage) who were sent to work camps and concentration camps.16 The European Union favors stringent and widespread privacy protections for its citizens and has led the world in establishing and enforcing data protection laws.

Consent

In the privacy context, consent is a voluntary action by an individual to allow collection and sharing of their personal information for purposes that are disclosed to them beforehand. For consent to be legitimate, a few conditions must be satisfied: Individuals must provide consent voluntarily and must also be informed about their rights, one of which is the right to change their minds after they provide consent. Of course, individuals must also be able to understand these conditions and be capable of communicating their decisions.

According to the HIPAA Privacy Rule, a healthcare organization must obtain authorization for the exchange of information, with a general exception for information needed for purposes of treatment, payment, and operations. Consent can be informed or granted on behalf of the patient by a competent individual. In granted consent, a competent individual such as a family member can provide consent when the patient cannot do so, such as when the patient is heavily medicated or injured and unable to make a rational decision. When complying with HIPAA guidelines and using or disclosing PHI for purposes of treatment, payment, or healthcare operations, a healthcare organization does not have to obtain consent. However, the organization may opt to obtain consent where practical.

For other uses and disclosures, such as for research and third-party population health analytics, additional patient consent may be required. Under the DPD, data should not be disclosed without the data subject’s consent, and that provision extends to the GDPR as well. The DPD addresses both informed consent, where the subject has sufficient information before making a decision, and specific consent, indicating the subject is advised of the precise nature of the data and the purpose for which it is collected. The DPD does not differentiate among purposes such as treatment, payment, or healthcare operations.

Choice

Choice is defined under various privacy standards. Choice provides an individual the option of whether to provide information freely or to withhold it. The distinction between choice and consent is that choice is about providing options, whereas consent is about providing permission. The choice offered to an individual must be between legitimate options, and the options must be presented in a clear manner without deception. Choice is offered through opt-in and opt-out provisions. For example, a healthcare organization may provide a statement to patients about their choice to provide personal information or withhold it. An organization can structure its opt-in and opt-out processes in multiple ways to obtain patient choice. If the individual does nothing, the default constitutes an implicit opt-in or opt-out choice; if the individual makes an active choice (by selecting or unselecting an option), it is an explicit choice. When collecting sensitive information, it is best to require the individual to make an explicit opt-in or opt-out choice. Examples of implicit and explicit opt-in and opt-out statements are shown in Figure 5-8.

Images

Figure 5-8  Examples of implicit and explicit opt-in and opt-out statements

With the growing adoption of EHRs, the opt-in or opt-out choice can have a profound impact. When providing consent to receive care, patients should be offered an informed choice about whether their information is collected into the EHR and then automatically shared as part of a health information exchange (HIE). This additional sharing may be outside of what the patient expects, so additional consent may be required. To obtain such informed consent, whether by patients opting in or opting out, healthcare organizations may have to provide notice to thousands of patients, which is logistically difficult and costly.

The DPD mandates that an individual can exercise free consent, which it identifies using the term “choice.” The data subject must be able to exercise a real choice, and there should be no deception, intimidation, coercion, or significant negative consequences if the subject does not consent. The DPD does not, however, mention the right to withdraw consent. Under the recently enacted GDPR, individual choice is still preserved: consent requests must be easily understood and accessible, and the intended uses of the information must be communicated to the individual in plain language.

Per the HIPAA guidelines, the term “choice” is more routinely associated with the right to revoke a previously granted authorization at any time. This revocation must be in writing and is not effective until the covered entity has received it. The Privacy Rule requires that the authorization clearly state the individual’s right to revoke. There are two permissible ways a healthcare organization can satisfy the mandatory communication of the revocation process to the individual: it can describe the process as part of the authorization itself, or it can include the revocation process in the notice of privacy practices. In either case, the revocation process must be described in clear terms that the individual can understand.17

Images

NOTE   It is easy to confuse the appropriate use of the terms opt in and opt out. For instance, a patient portal web site may provide a choice for patients to opt in to the healthcare organization’s data collection and storage system by selecting a checkbox that reads, “I agree to the terms and conditions found in the hospital privacy statement.” In this case, the checkbox should not be prechecked. A preselected, affirmative statement effectively functions as an opt-out choice, because the patient would have to physically uncheck the checkbox to decline, rather than as a true explicit opt-in.

Limited Collection

A principle common across data protection best practices and regulations is the collection limitation principle. This principle basically asserts that organizations can collect only what is necessary and nothing more. In accordance with the DPD, any personal data that is collected must be used for a specific purpose that is disclosed to the individual, and the collection must be limited to the specified use of the data. The HIPAA Privacy Rule has a similar provision that defines a limited data set as a collection of data elements that comprise only the required protected health information. All other identifying information is removed.

The following list represents a sample of the identifiers of the individual or of the individual’s relatives, employers, or household members that would be removed under this principle (a minimal filtering sketch follows the list):

•   Name

•   E-mail address

•   Certificate or license numbers

•   Web URL

•   Address

•   Social Security number

•   Vehicle identification number/license plate

•   Device numbers and serial numbers

•   Telephone numbers

•   Medical record numbers (patient ID)

•   Account numbers

•   Biometric identifiers, including fingerprints, retina scans, and voice prints

•   Fax numbers

•   Health plan beneficiary numbers

•   Full face photographic images/comparable images
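The filtering sketch referenced above shows, in Python, how direct identifier fields like those in the list might be stripped from a record before sharing. The field names are illustrative; an actual limited data set is defined by the HIPAA Privacy Rule, which also permits certain elements (such as dates and some geographic detail) that are not modeled here.

```python
# Minimal sketch: strip direct identifier fields, like those listed above, before sharing a record.
# Field names are illustrative and do not reproduce the Privacy Rule's exact definitions.
DIRECT_IDENTIFIERS = {
    "name", "email", "address", "telephone", "fax", "ssn",
    "medical_record_number", "account_number", "health_plan_id",
    "license_number", "vehicle_id", "device_serial", "url", "photo", "biometric_id",
}

def to_limited_data_set(record):
    """Return a copy of the record with direct identifier fields removed."""
    return {field: value for field, value in record.items() if field not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "ssn": "000-00-0000", "diagnosis_code": "E11.9", "service_date": "2023-04-01"}
print(to_limited_data_set(record))  # {'diagnosis_code': 'E11.9', 'service_date': '2023-04-01'}
```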

Limited collection is a safeguard that helps assure individuals that data collectors, such as healthcare organizations, do not collect excess information simply because that is easier than specifying the data that is required, or because the extra data could be used for other purposes, such as marketing. It also guards against overcollection, which can compound the damage if data loss occurs. Numerous data breaches are caused or exacerbated by persistent data that was never really needed but was nonetheless collected, stored, maintained, and ultimately lost. Think about the forms patients fill out while waiting to see a doctor, or the web forms customers fill out prior to gaining access to an online service. If all of that information is required—or relevant, as the principle generally states—the requests probably follow the collection limitation principle. However, if some of that information is not needed, or the data collector uses it for a purpose other than that understood by the individual providing it, the practice is inappropriate.

Legitimate Purpose

If healthcare organizations are to state the purpose of the data collection, collect it in a lawful or fair manner, and use the data only for the stated purpose, they must also ensure that the collection purpose is a legitimate one. Legitimate purpose is a principle that takes collection limitation a step further. Under HIPAA, a legitimate purpose is a business purpose for which an organization requires the information to perform its mission. The HIPAA Privacy Rule allows public health authorities and providers to collect and disclose health information relevant to their public health responsibilities as a legitimate purpose. In this case, legitimate purpose is aligned with the importance and public interest in identifying threats to community and individual well-being. A healthcare organization in the United States must collect patient information only insofar as that information assists in the provision of treatment, payment, or operations. If the organization shares that information, it must still limit disclosure to reasons that are defined as legitimate under the law. One such provision is public health information disclosure.18 To carry out public health and safety responsibilities, healthcare organizations are permitted to provide data access to public health authorities and those who have a public health mission. This provision extends to third parties who handle PHI for the healthcare organization. If the third party is required to disclose PHI to public health entities, that disclosure possibility should be stated in the business associate agreement.

Legitimate purpose is present in the EU DPD, which allows personal data to be processed only for specified explicit and legitimate purposes. The DPD mentions a related privacy concept, fair processing, which instructs the data collector to obtain an additional or updated consent from the individual if the processing of the data changes from the legitimate purpose originally presented to the individual.

Purpose Specification

The purpose specification explicitly states that individuals should be informed of why their information is needed and how it will be used. If the purpose changes, new consent and choice options should be provided to the individual. The principle of purpose specification protects data subjects or individuals by also setting limits on the collection and further processing of their data. Combined with legitimate purpose, this concept makes it necessary for data collectors to disclose the specific reasons or intentions for collecting patient data. When an individual provides personal data to a company or another organization, that person usually has a right to know, and can choose to consent to, the purpose for collection and use. Purpose specification may also include disclosure of reasons or ways the information will not be used—for example, sold to third parties.

Disclosure Limitation

Disclosure of information must be limited and controlled. Disclosure of healthcare information is limited to treatment, payment, and operations. Leading privacy frameworks generally limit disclosure but also provide specific exceptions where public safety and health considerations outweigh the need for individual privacy; dramatic examples are cases of child abuse or certain communicable diseases. There are also provisions permitting disclosure when that use is clearly provided in authorization forms signed by the individual or outlined in privacy practices notifications. Disclosure can be further limited based on an individual’s request. Although the provider does not need to agree to additional limitations beyond the legal requirements, it may choose to accept them; if the provider voluntarily agrees to the additional limitations, it is then obligated to honor the patient’s request. In short, privacy frameworks may authorize disclosures, but they do not mandate them. The data collector and individual can establish additional limits.

Transfer to Third Parties (or Countries)

HIPAA allows for third-party transfer or sharing of PHI for the purposes of healthcare payment, treatment, or healthcare operations. The law also mandates a special type of contract, called a business associate agreement (BAA), between the covered entity and the third party, known as a business associate. As you’ll recall from Chapter 1, BAAs should address the possibility of PHI being transferred to business associates. The healthcare organization can inform the individual that the business associate is legally obligated to protect the information using applicable security controls to maintain confidentiality for as long as it has the data. The BAA will also include terms and conditions that outline the appropriate uses of the information by the business associate, such as prohibiting any further transfer.

Disclosure limitation includes concerns about data being transferred to third parties in other countries. The European Union uses the term “third countries” to refer to foreign nations that are third parties in information-sharing transactions. Disclosure limitation to third countries is a component of adhering to the “onward transfer” principle included in the DPD, the GDPR, and the Organisation for Economic Co-operation and Development (OECD) guidelines. EU laws such as the DPD and the GDPR place clear responsibility for onward transfer to third countries on the EU data controller, which must ensure that the third-country entity provides information privacy protections at the same level as the DPD or GDPR. Under the DPD, that assurance could come from working with a US organization vetted under the Safe Harbor framework. Today, similar assurance comes from mechanisms such as certification under the EU–US Privacy Shield framework or an adequacy decision under the GDPR.

Transborder Concerns

Electronic information transfer and advancing technologies such as cloud computing have surfaced transborder information privacy concerns. The central issue in the transfer of personal data across international borders is the assurance that each nation provides an equal amount of protection to its citizens’ sensitive information. The ability to transfer electronic information internationally is generally viewed as a positive, though varying privacy approaches and lack of jurisdiction hinder or prevent such information flow. This can be legislated, as in the case of DPD and GDPR, where specific authorizations must be in place before data transfer is allowed, such as safe harbor and privacy shield.

Although HIPAA does not specifically prohibit transfer of electronic PHI, healthcare organizations must understand the risks in sharing information with third parties outside of HIPAA jurisdiction. Guidance from the US Department of Health and Human Services (HHS) on this topic includes recommendations to have the cloud service provider enter into a BAA, document the risks in the covered entity’s risk assessment, and implement reasonable and appropriate technical security controls.19 This is not always aligned with how a cloud service provider configures its environment, however.

Cross-jurisdictional or transborder transfers should be accepted only when the clinical or business requirement is substantial. Cloud service providers have begun to recognize the reluctance of healthcare organizations to take the risks of transborder transfer. International cloud services, such as those provided by Microsoft or Amazon, now offer solutions that store and transfer data within the United States only and enter into BAAs, easing the transborder concerns of US healthcare organizations.

Access Limitation

Limiting access to data follows the concept of allowing only the minimum access necessary. Healthcare organizations must implement and enforce reasonable levels of control over PHI to permit access only to those individuals with a legitimate need. This requirement may include certain physical controls in addition to technical safeguards. For example, locking doors to rooms where data is kept or installing ID badging systems with video surveillance at data centers are considered reasonable controls, based on the data’s sensitivity. Technical measures such as passwords, intrusion detection systems, user behavior monitoring, and identity management systems are relevant technical solutions to help control and restrict access.

Access limitation can extend into an individual record as well. Consider an individual’s medical record that includes substance abuse information as well as recent emergency department visits and pharmaceutical prescriptions. Some of that information may be needed by an authorized healthcare provider, but other portions of the record—in this case, the information related to substance abuse—may be unrelated to the emergency department visits and therefore should be unavailable to an attending physician. Although not always possible, many record systems can be configured to provide that level of granularity in system access and authorization. In smaller organizations, such as a medical group of a few associated healthcare providers, it may be infeasible and unnecessary to be so restrictive, given the sophistication of their IT and their continued use of paper-based records. It is always necessary to provide training and awareness regarding access limitations; if technical and physical controls are not reasonable, training and awareness are even more important.
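The following Python sketch illustrates record-level access limitation by returning only the sections of a record that a given role may view; the substance abuse section is withheld from the emergency department physician, as in the example above. Section names and role-to-section mappings are assumptions for illustration.

```python
# Illustrative record-level access filter; section names and role mappings are assumptions.
record_sections = {
    "demographics": {"summary": "basic patient info"},
    "emergency_visits": {"summary": "recent ED encounters"},
    "prescriptions": {"summary": "current medications"},
    "substance_abuse_treatment": {"summary": "specially protected notes"},
}

viewable_by_role = {
    "ed_physician": {"demographics", "emergency_visits", "prescriptions"},
    "substance_abuse_counselor": {"demographics", "substance_abuse_treatment"},
}

def sections_for(role, record):
    """Return only the record sections the role has a legitimate need to see."""
    allowed = viewable_by_role.get(role, set())
    return {name: data for name, data in record.items() if name in allowed}

print(sorted(sections_for("ed_physician", record_sections)))
# ['demographics', 'emergency_visits', 'prescriptions'] -- substance abuse section is withheld
```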

Accuracy

As covered in previous chapters, the accuracy of data (referred to in security as integrity) is vital to the health of the patient, and it covers the entire scope of the overall healthcare record. A patient (or, for that matter, anyone, including care providers) who determines that information within the record is in error has the right to request an amendment to the record or entry. The right to request the change remains in effect as long as the healthcare organization maintains the record. The healthcare organization may decline the request to amend the record if any of the following conditions are present:

•   The healthcare organization did not create the entry. If the individual who is the subject of the information can demonstrate that the entity that actually created the information no longer has the ability to make the changes, the healthcare organization may be in the position to accomplish the amendment.

•   The healthcare organization established that the data in question is not included in what the organization defines as its “designated record set.”

•   The information is not subject to review under the HIPAA Privacy Rule.

•   The healthcare organization decides the information is, in fact, accurate and verifiable.

One final note: even if a healthcare organization does not need to amend the record because the conditions support declining the individual’s request, it must still inform individuals of their right to ask. The healthcare organization is required to disclose, through its Notice of Privacy Practices or another notice document that satisfies relevant regulatory authorities, that an individual has the right to request an amendment to their records and how to do so.

Completeness

Data must be accurate and current according to the principle of completeness. This principle is found in HIPAA, the DPD, and the GDPR, as well as many other leading frameworks. The requirement consists of using reasonable measures to ensure that the data remains up to date based on the purpose for which it was collected and used. EU member states and US healthcare organizations are required to take steps to erase or fix incorrect data when they learn about it. Organizations that intend to share information bear an increased responsibility to assure its completeness before sharing it.

Quality

Certainly, in our healthcare organizations the term “quality” has a multitude of meanings and uses. With respect to privacy, its meaning likely depends on which part of the organization you support. Quality of data is associated with quality of care, and knowing who is responsible for the quality of the data is also of value. The concepts of data integrity align with assuring the quality of data, underscoring the continuous integration of privacy and security. The more assurance the organization can provide that the data it manages is of high quality, the more capable it is of ensuring that the information is complete and accurate and is used only for the reasons stated when it was collected.

Management

This privacy framework principle identifies the importance of an organization establishing a function to build, implement, enforce, and document the processes that assure the confidentiality of sensitive information. A central component of management is the governance structure, which must include assigned accountability for demonstrating that the privacy program’s policies and procedures are working effectively.

Management of privacy is dependent on involvement and support from all senior leaders of the organization, not just senior leaders who are accountable for privacy or security. The tone from the top of the organization structure is associated with the behavior and culture of the entire organization. Without support from the senior leaders, employees are likely to give privacy practices less priority, and processes may be circumvented. Additionally, management at all levels must support the reporting of privacy-related incidents and monitor the effectiveness of the program through ongoing measurement of important indicators. If you can find evidence of senior-leader involvement, privacy reporting, and documentation of program requirements that are communicated to the organization, you can be confident that this privacy principle is being met.

Images

NOTE   You should be familiar with the concepts of privacy by design and privacy by default. These approaches instruct organizations to integrate and engineer privacy into relevant processes such as design, purchase, implementation, and decommissioning of systems as well as vendor selection and all aspects of patient care. Privacy by design and privacy by default have long traditions and are written into recent regulatory guidance such as GDPR. See https://iapp.org/resources/article/privacy-by-design for additional resources on Privacy by Design.

Privacy Officer

Regardless of where you are in the world and which regulatory requirements you must comply with, most privacy regulations require management to designate a privacy officer in writing, and most require specific experience or training for that individual. While the individual’s title may not be privacy officer, the intent is to create a role that ensures compliance with specific requirements. In the United States, designating a privacy official is a primary requirement of the HIPAA Privacy Rule. The GDPR mandates the assignment of a Data Protection Officer with similar responsibilities. The privacy official is responsible for establishing a privacy program with the required policies and procedures to comply with regulatory direction. The privacy official will manage and oversee several specific tasks:

•   Develop and publicize the notice of privacy practices (covered later in this chapter in the section “Notice”).

•   Make sure patients are provided and acknowledge receipt of the notice of privacy practices.

•   Authorize exceptions for disclosure of information when warranted for purposes such as research, marketing, and fundraising.

•   Administer patient requests for amendment of their records.

•   Assess the need to apply additional information protection controls to increase privacy of sensitive records.

•   Serve as the focal point for any internal employee complaints concerning regulatory guidance (such as HIPAA Privacy Rule) and compliance.

•   Handle inquiries from patients and patients’ families for information about privacy practices and access to information.

If you are employed in a large healthcare organization or facility, the privacy officer role may be filled by a senior management-level professional, who typically requires staff support just to handle all these administrative tasks. In a small clinic or practice, privacy officer responsibilities may be only part of an individual’s job, likely added to responsibilities for managing medical records, security, and patient administration. You can infer that this arrangement is not always helpful for strong privacy practices.

The person selected to be a privacy officer may seem to face a difficult role, whatever the size of the organization. The privacy official must also understand and implement a program following other regulatory requirements such as existing requirements of state or other local laws and professional codes of ethics. The most important part of a privacy official’s job is to develop a culture in the organization that considers privacy protection as part of everyone’s role and responsibility, not just the privacy official.

Supervisory Authority

The supervisory authority is an independent public authority that each EU member state is required to establish to oversee compliance with the DPD and GDPR. This entity is responsible for overseeing the effectiveness of data protection within its jurisdiction. The supervisory authority works with government agencies to govern data protection, including legal enforcement of noncompliance and violations of the regulations. The supervisory authority is also focused on aiding the secure transfer of information within the European Union. Individuals who believe their privacy rights have been violated bring their complaints to the supervisory authority for legal action. Data controllers interact with supervisory authorities by providing notice before collecting personal information; this transaction is recorded in a public register.

Processing Authorization

Processing authorization refers to the obligation of data controllers to restrict the use of information so as to minimize the risks to individuals as much as possible. The supervisory authority reviews transfers and uses before processing and provides an evaluation. The authority’s decision may come with requirements for measures to further reduce risk, including additional safeguards needed before information processing. Data controllers must also ensure that processing does not occur without prior written authorization between the controller and its data processors that adheres to the purpose of data collection and outlines any transfers to other third parties on behalf of a processor. If this sounds complicated, think of the relationship between US covered entities, business associates, and business associates of business associates. The downstream sharing of data must be controlled through specific authorizations.

Accountability

As discussed earlier in the chapter, although accountability is generally not considered part of the basic principles of security, it is relevant and should be included as a principle of privacy. Accountability can be described as the ability to determine a responsible party for an action taken regarding privacy or security. An organization demonstrates accountability through processes that assign responsibility for the effective operation of privacy controls. The responsibility should be irrefutable, and individuals should be able to determine who controls the sensitive information that is collected and used about them. Accountability can be associated with the liability of healthcare organizations for adhering to regulations and laws, such as HIPAA, DPD, and GDPR. Fines and penalties may be levied because authorities hold healthcare organizations accountable for following basic principles of privacy (and security).

Training and Awareness

As with information security training and awareness as a security control, educating the workforce on privacy principles is a cost-effective way to protect personal information. One of the key differences you may find in privacy training and awareness is the need to educate employees on the regulations that govern data privacy. In comparison with information security, which is guided more by standards such as NIST SP 800-53 (covering security and privacy controls), data privacy is steeped in national and local (state) laws. Consider the impact and focus that international laws such as the DPD, GDPR, and the Personal Information Protection and Electronic Documents Act (PIPEDA) have on the privacy of data. The fact that the HIPAA Privacy Rule came first and has been in effect since 2003 illustrates that the approach to assuring data privacy has been a legislative one. Training and awareness programs therefore need to focus on the laws and regulations that enact and enforce data privacy.

Good training and awareness programs on data privacy also focus on how employees should comply with local policies and procedures. It helps to create training content that explains why keeping sensitive information confidential, and using it only for the purposes for which it was collected, supports compliance. Perhaps the most important aspect of training is providing clear and relatively easy ways for employees to report privacy-related incidents as they happen. Reducing administrative barriers to reporting, such as providing a hotline or direct reporting system, and minimizing fear of retribution for reporting are important components of the incident reporting process.

Images

NOTE   A valuable topic to cover in privacy training and awareness curricula is recognizing threats to data privacy. Excessive use of information by employees, for instance, can be presented in scenarios that employees may otherwise consider business as usual based on who is sharing the information and conditions in the healthcare setting.

Openness and Transparency

Openness and transparency are principles that emphasize that trust in the collection and use of sensitive information, such as the exchange of electronic health information, can best be established in an open and transparent environment. Each principle stresses the importance of individuals knowing and understanding what personal information is being collected and stored, how that information is used and disclosed, and how reasonable choices can be exercised with respect to that information. Openness and transparency work together to support the important privacy requirements of choice and consent. An individual cannot reasonably provide explicit or implicit choice and consent without knowing and understanding the actions of the data collectors and processors.

Openness

The concept of openness enables any member of the public who has a legitimate interest to be provided with information about the processing of personal data. Although this does not imply access to the data itself, it does mean that the organization should disclose how the data it maintains on behalf of others is kept secure, processed, or shared. According to the Organization for Economic Cooperation and Development (OECD) privacy principles, organizations are advised to maintain a policy and procedure approach that exemplifies openness in all practices and decisions related to the use of sensitive personal information. An organization should be able to demonstrate or provide answers to an individual about the type of information the organization maintains, how it is safeguarded, and how it is used.20 Of course, openness would include disclosing all purposes for collecting information and the intended uses before collection.

Transparency

In general, transparency, while related to openness, refers to permitting an individual whose data is maintained by an organization to be aware of specifically how that data is maintained, processed, or shared. In fact, in many regulations, such as the DPD, specific notice must be given when data is being processed to keep data subjects informed of how information is maintained, processed, or shared. Data controllers must demonstrate transparency by requesting approval that certifies the processing of the information is fair. Transparency supports the right of data subjects to access all data related to them and to request deletion or correction of any data that is not accurate or that is being used outside of prior consent. Another aspect of transparency is the willingness of organizations to share information about the policies and procedures that govern the information sharing. PIPEDA is a good example of a regulatory framework that includes transparency in disclosure of policies and procedures as a part of privacy best practices.

Proportionality

Proportionality addresses assurances that personal data collected and shared is limited to only what is necessary and is used for the intended and described purpose. Proportionality is included in the Articles of the DPD, which add the need for data to be accurate and current to meet the intent of this principle. The GDPR has carried the principle forward in its data processing guidance. This principle would, for example, advise against collecting sensitive genetic information through a mouth swab to establish identification data when sufficient personal information could be obtained from a government-issued identification card.

Proportionality also refers to the level of risk involved in maintaining and using the information. In the Generally Accepted Privacy Principles (GAPP) framework and in the DPD, the principle cautions that holding information longer than needed falls within the definition of excessive data collection and use. The OECD privacy principles warn against transfer of information when the risk of unauthorized use and disclosure exceeds the benefit of the transfer.

Use and Disclosure

The DPD includes particular direction with regard to how data is intended to be used and how long it can be retained. The data collector must follow the purpose specification principle and use the data only for its intended purpose, as disclosed to the data subject. The data should be retained only as long as needed to fulfill its intended purpose; at that point, the data must be erased. Data can be maintained for analysis, but laws such as PIPEDA direct organizations to make the data anonymous. Entities that want to adhere to GAPP must use collected data only according to what is disclosed in the notice provided to individuals. Under HIPAA, PHI can be used for treatment, payment, and operations, as well as research and other specified purposes, as long as there is informed patient choice and consent. Use of PHI for reasons outside these conditions is prohibited.
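
As one illustration of the retention aspect of this principle, the short Python sketch below flags records whose assumed retention period has elapsed so they can be erased or anonymized. The field names, the 400-day retention period, and the purge_expired helper are hypothetical and are not drawn from any regulation or product; a real program would take its retention schedule from policy and legal counsel.

from datetime import date, timedelta

# Hypothetical retention period; real values come from policy and law.
RETENTION = timedelta(days=400)

records = [
    {"record_id": "A-100", "collected_on": date(2019, 1, 15), "purpose": "treatment"},
    {"record_id": "A-101", "collected_on": date(2021, 6, 1), "purpose": "billing"},
]

def purge_expired(records, today=None):
    """Return (kept, expired) lists based on the assumed retention period."""
    today = today or date.today()
    kept, expired = [], []
    for rec in records:
        if today - rec["collected_on"] > RETENTION:
            expired.append(rec)   # candidate for erasure or anonymization
        else:
            kept.append(rec)
    return kept, expired

kept, expired = purge_expired(records)
print(f"{len(expired)} record(s) past the assumed retention period")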

Images

EXAM TIP   You should be aware of the right to be forgotten, or the right of individuals to request erasure of their personal data. GDPR strengthened the DPD principle for use and retention. There is, however, a caveat for EU healthcare providers that allows retention of health data despite an individual’s request when the data is needed and the provider protects it according to its legal obligations.

Use Limitation

When disclosing PHI, the amount or content of data provided must be limited to what is required to satisfy the request and nothing more. Although the additional data might be useful at a later date or for another purpose unrelated to the patient’s current visit, that data should not be collected.

The use limitation principle requires that the healthcare organization first determine that the disclosure does not violate the disclosure limitation principle, and then it must disclose only the minimum necessary information. Even with legal use or consent, the healthcare organization is obligated to limit use to the volume of data and content required for the intended purpose. For instance, if an audit log is required by law enforcement because a hospital employee is suspected of snooping in a medical record, producing an entire access log for all personnel on a certain day, showing all accessed or modified patient information, would be inappropriate because it would involve not only disclosure of the name of the hospital employee and the PHI he or she accessed but possibly also a tremendous amount of PHI irrelevant to the investigation.
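
To make the audit-log example concrete, the minimal Python sketch below filters a full access log down to the single employee under investigation before anything is disclosed. The log structure, field names, and the minimum_necessary helper are hypothetical and used only for illustration; the point is simply that the minimum necessary subset, not the entire log, is what gets produced.

# Hypothetical access-log entries; field names are illustrative only.
access_log = [
    {"employee": "jdoe",   "patient_id": "P-001", "action": "view",   "time": "2023-03-01T09:10"},
    {"employee": "asmith", "patient_id": "P-417", "action": "modify", "time": "2023-03-01T09:12"},
    {"employee": "jdoe",   "patient_id": "P-417", "action": "view",   "time": "2023-03-01T09:15"},
]

def minimum_necessary(log, employee_under_investigation):
    """Return only the entries needed to satisfy the request, nothing more."""
    return [entry for entry in log if entry["employee"] == employee_under_investigation]

disclosure = minimum_necessary(access_log, "jdoe")
print(f"Disclosing {len(disclosure)} of {len(access_log)} log entries")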

A permitted or required use or disclosure does not absolve an organization of excessive use if reasonable efforts are not made to restrict access or control subsequent handling of the information. Privacy principles and laws, such as the OECD principles and HIPAA, typically include provisions for use without consent in specific law enforcement and legal proceedings.

Access

Whether you are referencing the PIPEDA in Canadian law, the DPD or GDPR in European law, or the HIPAA Privacy Rule in US law, you will find provisions regarding the rights of individuals to request access to their personal information that the organization collected. Furthermore, individuals should be given the ability to correct, or rectify, any information that they believe to be in error.

In a healthcare setting, changing some information may not be permissible under other governing medical-legal considerations. However, an annotation of the individual record can be made to highlight the alleged discrepancy. In other cases, if the record can be corrected, provisions must be in place to do so.

In the European Union, data subjects have the right to demand that the data controller notify all third parties that received the data in error, in order to correct the record. Given the havoc that medical identity theft can wreak on an individual’s life and finances, it’s easy to see that the right to access health information and correct a health record is vitally important. Imagine the issues a person could face if his or her insurance rates increase because an imposter is receiving care under his or her identity. That individual may initially be the only one to suspect something is wrong, so having the right to access his or her personal information in a timely manner and without excessive cost can limit personal distress and the damage caused by fraud.

Images

NOTE   Accounting of disclosures is a concept related to the right of access under HIPAA. Along with a right to access the record, the individual has the right to know to whom their PHI was disclosed.

Individual Participation

One of the central tenets of all privacy principles is the acknowledgment that privacy works best when the affected individuals are active and involved in the use of their data. The OECD privacy principles clearly outline the privacy expectations individuals should have in this regard, many of which are reflected in subtopics such as notice, consent, choice, and access and correction, which are covered in this chapter.

HIPAA, for example, was enacted in 1996 in large part because the use and disclosure of a patient’s health information was too often deemed the privilege of the healthcare organization, without the individual’s participation in such disclosure. HIPAA helped to rectify this by clearly outlining the rights of the individuals to have access to, and to participate in, the use of their healthcare information.

Using the OECD privacy principles as a comprehensive illustration, individual participation charges data collectors to do the following:

•   Provide data collected to individuals or attest to not having such data in the data collector’s control.

•   Respond to individuals in a timely and reasonable manner, in a form the individual can understand, and at a fair cost to the individual.

•   Explain why a request to access information is denied and provide instructions for appealing that decision.

•   Permit individual requests to erase, fix, complete, or amend data pertaining to them. Where the challenge is justified, the data collector should change the record accordingly.

Notice

Those who have experienced healthcare treatment in the United States have become painfully aware of the requirement to sign a document each year (and, it seems, at every visit) indicating that they understand HIPAA. In fact, what a patient is signing is a statement that he or she understands the notice of privacy practices of the healthcare provider or organization. The ability to provide choice and consent is related very closely to the organization’s responsibility to provide notice to the affected individual. Under the HIPAA Privacy Rule, US healthcare organizations must not only provide notice at the point of care, but they must also display the notice of privacy practices on their web site, if they have one, in a prominent, easy-to-find way.

Healthcare organizations should obtain a signature from each patient to indicate that a copy of the notice of privacy practices was received and understood. To complement the process, the notice should be posted in plain view onsite at the organization. If a patient wants a copy, it should be relatively easy to provide one. In the United States, healthcare organizations are required to provide notice to patients upon enrollment, at the time of the first appointment, when receiving healthcare services (for example, before undergoing procedures), and any time a patient asks to see such documentation. The notice is generally in paper form and presented to the patient at the required occasions. Notice, as a privacy principle, is satisfied when the patient signs an acknowledgment of understanding of the notice of privacy practices published by that healthcare organization, which contains the following:

•   An explanation of how the covered entity’s protected health information may be used and disclosed (if the healthcare organization is required to use it in any other way, the patient must consent to this first)

•   The healthcare organization’s acknowledgment of its duties to protect health information privacy

•   The patient’s privacy rights, which include the right to view the information on hand, the right to file a complaint with the federal government (specifically, HHS) if a violation is suspected, and the right to request an amendment to the record if it is incorrect

•   Contact information relative to the privacy practices and complaint process

In contrast to a healthcare-specific privacy framework, the data privacy framework of the DPD requires notice every time data subjects have their data collected. The DPD allows a data collector to give data subjects a choice to opt in or opt out of notices based on various criteria, permitting individuals to decide on the level of detail and frequency of notice they prefer for data collection.

Events, Incidents, and Breaches

The terms “events,” “incidents,” and “breaches” are probably not words you want to hear or use on a daily basis. However, with the increase in cybersecurity attacks and the sophistication of attackers, preparing for and responding to them make up a major portion of your responsibilities.

Events and incidents are seemingly synonymous data protection terms, but the difference lies in what characterizes them. According to NIST SP 800-53 Rev. 4, an event is simply and broadly defined as anything that happens in an information system that is observed. Under that definition, it could be a reading from a logging record or an informational alert from a monitoring system. An incident, however, is much more complex and related to risk. The publication describes an incident as an occurrence that actually or potentially impacts the information security of an information system or the sensitive information used by the system. An incident may result in or progress to a violation of policy or procedures. In short, an event can lead to an incident. From an incident, you may experience the next step, a breach. A breach is an unauthorized disclosure, regardless of whether it was caused intentionally or unintentionally. Measures for dealing with breach notification may be specific to a country, state, or other jurisdiction. Understanding these measures is important as you prepare yourself to protect data in a healthcare organization.
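
The following Python sketch is one simplified way to picture the escalation path from observed events to incidents and possible breaches. The classify function, its input fields, and its categories are illustrative assumptions, not NIST definitions or the logic of any real monitoring product.

def classify(observation):
    """Illustrative triage of a monitoring observation (not a formal NIST process)."""
    if observation.get("unauthorized_disclosure"):
        return "breach"      # suspected or confirmed unauthorized disclosure
    if observation.get("policy_violation") or observation.get("impact_to_phi"):
        return "incident"    # actual or potential impact to the security of PHI
    return "event"           # any observable occurrence, such as a log record

print(classify({"source": "firewall log"}))            # event
print(classify({"policy_violation": True}))            # incident
print(classify({"unauthorized_disclosure": True}))     # breach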

Images

CAUTION   Be aware that you need to strike a balance in how you approach data breach notification. Failing to notify individuals and regulators within the required timeframes is negligent, and most countries levy fines and penalties for such negligence. You can also contribute to “data breach notification fatigue” if you overnotify, which can open your organization to undue scrutiny and needless bad publicity. Remember that data breaches have an impact on the financial bottom line of organizations, on patient care, and on organizational reputation. Knowing the rules and the actions to take will therefore likely position you in a key role in your organization’s data breach notification procedure.

Notification in the United States

To comply with HIPAA as it has been amended most recently by the Omnibus HIPAA Final Rule (2013), healthcare providers have very specific notification requirements.21 First, the threshold for when to notify is very clear. A healthcare organization must measure against a risk of disclosure standard established in this regulation. For instance, when a healthcare organization suspects that health information may have been disclosed in an unauthorized manner, it must determine how great a risk exists that the information was actually viewed or used by an unauthorized recipient. An unencrypted e-mail sent to a valid recipient is not likely a reportable breach if the e-mail was not intercepted or sent to other unauthorized recipients; in that case, there is little or no risk of unauthorized disclosure, and no additional reporting is required. If the information may have been disclosed (for example, as a result of a lost, unencrypted laptop), however, the healthcare organization would be required to promptly notify affected individuals of a breach. If a breach involves 500 or more individuals, the organization must also notify HHS and the media. Where the breach affects fewer than 500 individuals, the healthcare organization is required to report these to HHS in aggregate on a yearly basis.
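
As a rough illustration of the notification tiers just described, the sketch below maps the number of affected individuals to the parties that must be told. The notification_targets function and its return values are hypothetical simplifications of the breach notification requirements, not a compliance tool.

def notification_targets(affected_individuals):
    """Simplified view of HIPAA breach notification tiers (illustrative only)."""
    targets = ["affected individuals"]            # always notified promptly
    if affected_individuals >= 500:
        targets += ["HHS (without unreasonable delay)", "media"]
    else:
        targets += ["HHS (annual aggregate report)"]
    return targets

print(notification_targets(1200))   # individuals, HHS, and the media
print(notification_targets(40))     # individuals, plus the annual report to HHS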

Notification in the European Union

Although many European nations have experienced breaches, data breach notification traditionally has tended to depend on the rules established by each nation. That is changing, however. The latest amendments to the DPD and the enactment of GDPR strengthen compulsory obligations to inform EU regulators when breaches of personal data occur. Under GDPR, the notification must be made without undue delay and, with few exceptions, no later than 72 hours after the controller becomes aware of the breach. The regulation standardizes the data breach notification process across EU member states.

Images

EXAM TIP   As discussed at the beginning of the chapter, because privacy principles differ from location to location, you should familiarize yourself with the specific requirements of other nations and locations. In other words, study the DPD and GDPR if you are non-European; study Canadian and EU requirements if you live in the United States. The (ISC)2 exam is likely to include material on regulations not specific to your home country.

Notification in Canada

Canadian breach notification rules vary across national privacy guidance rules (PIPEDA) and provincial-level privacy rules. Where provinces have guidance that is considered substantially similar to PIPEDA, Canada’s federal law, the provincial guidance is sufficient according to federal regulators. That said, PIPEDA currently has limited requirements for notifying individuals. However, some provincial laws do have mandates. Those that require data breach notification are Ontario’s Personal Health Information Protection Act, Newfoundland and Labrador’s Personal Health Information Act, New Brunswick’s Personal Health Information Privacy and Access Act, and Alberta’s Personal Information Protection Act.

The Relationship Between Privacy and Security

Security is focused on protecting information from unauthorized disclosure, as well as on the capabilities needed to detect, respond to, and recover from events that become incidents. The focus of privacy is restricting and managing authorized access to sensitive information and making sure individuals provide personal information only after they are informed of and understand their rights. In performing your role as a healthcare information security and privacy professional in your organization, you must be familiar with certain general and specific areas, but it’s also important for you to know how these areas relate to one another. For example, you must understand how consent relates to authorization, how openness relates to transparency, and how legitimate purpose relates to purpose specification. Knowing how HIPAA, DPD, PIPEDA, and other regulations differ is also important.

Privacy and security have evolved over time to become a combined general category of “information protection.” The progression was natural, as we find more and more texts and seminars on healthcare privacy and security with the terms used synonymously. However, there is still a distinction between the two that you need to understand. Primarily, information security will never focus on privacy principles such as notice, consent, and accounting of disclosures. Privacy, it can be argued, focuses on more than just digital assets; it also fulfills obligations to the patient. For example, the organization’s promises in the notice of privacy practices may create obligations that are not related to ensuring confidentiality, integrity, and availability of the data. Perhaps its obligations are focused on ensuring the relevancy of the information it collects. Such a consideration may not have any impact on information security concerns for the same information.

Granted, the domains of privacy and security are closely related, and increasingly so. However, it is unlikely we will ever get to a point where they are indistinguishable and synonymous. We can be relatively certain that for privacy to be effectively provided, security controls must exist and operate effectively. With that in mind, there are a few concepts that demonstrate the interconnected nature of privacy and security, particularly in healthcare, where an unbreakable bond exists between the two: these concepts are dependency, integration, and ownership.

Dependency

The relationship between security and privacy has developed into one of dependency. To achieve security in the healthcare industry, there are certainly elements of privacy that must be addressed. Security controls are more effective and efficient when privacy principles such as data retention limits and purpose specification are followed, because they reduce the amount of unneeded data that must be protected and that could potentially be breached. At the same time, privacy is often provided through one or more information security controls.

Within the regulatory process for protecting the privacy of personal information (for example, under HIPAA), encryption is seen as an adequate information security control for ensuring the confidentiality of information transferred via e-mail. Integrated within this security control is the ability to make sure that the person to whom you want to send the e-mail is authorized to view it. The patient experiences the dependency of privacy on security in that the confidentiality they expect is usually provided by access controls and detection of unauthorized use, as examples. In short, information security tools are used to protect against unauthorized disclosure from a privacy perspective.
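
Here is a minimal sketch of the encryption control described above, using the Fernet recipe from the widely available Python cryptography package. Assume the key is distributed out of band to the authorized recipient only; that assumption is where the privacy dependency on security shows up. This is a sketch, not a complete secure e-mail solution.

from cryptography.fernet import Fernet  # pip install cryptography

# Assume this key is shared only with the authorized recipient, out of band.
key = Fernet.generate_key()
cipher = Fernet(key)

message = b"Lab results for patient P-417: ..."
ciphertext = cipher.encrypt(message)           # what actually travels in the e-mail
assert cipher.decrypt(ciphertext) == message   # only a key holder can read it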

Privacy depends on good information security practices to preserve the right of individuals to choose who has access to their information. In fact, maintaining the right to refuse to share the information at all is an element of privacy that information security is designed and implemented to protect. In the use of EHRs, identification, authentication, and access management technologies serve to allow credentialed access to defined amounts of data. Without proper credentials, access is denied. Based on the patient’s choice and consent, access is even more defined. For example, when patients choose to disallow any requests for their patient status, information found in the EHR cannot be shared with friends, family, or individuals calling the reception desk. Of course, there are usually additional instructions provided to allow specific family members or powers of attorney to receive patient status updates.

Integration

Privacy and security depend on each other, and that dependence results in an integration of the two. In other words, providing information security may involve privacy issues. Conversely, providing privacy can introduce unintended information security concerns that may have nothing to do with whatever privacy protection is being implemented. For example, a number of security safeguards (surveillance cameras and facility access logs, for example) require monitoring people or collecting personal information. These safeguards introduce privacy concerns, because while they keep data and people more secure, they collect personal data in the process. So, while you may initially be concerned only with unauthorized access to a patient portal, you may end up with an additional privacy concern created by the very controls you deploy. As information privacy and security professionals, we must balance such information security measures against the privacy impacts of collecting personal information, constantly assessing risk. Almost daily, we see integration of privacy and security processes. The goal is to ensure that we understand the implications of privacy and security actions on one another as well as on the problem we intended to address.

This is not to say that integration necessarily produces a negative consequence. Most integration of privacy and security is positive in nature. Information security controls in a digital environment successfully provide privacy as they automate routine processes such as access management. They also reduce errors in enforcement that would exist in paper-based environments, where policy adherence or human action is the only line of defense. For instance, a network firewall or access control list programmed into a router is certainly less fallible than a records room clerk in charge of clearing individuals for facility entry. Moreover, privacy is the intended consequence of many information security practices. Where organizations enforce role-based access configuration of their EHR systems, the privacy of each individual’s information is protected by allowing access only to those providers who have a requirement to use the data. In a paper-based records system, this level of data segmentation and constrained availability is nearly impossible; eavesdropping and easy access to data in plain view are too likely. In the context of integration of information protection, it is relevant to reiterate that the introduction of HIT often improves privacy and security, even as we examine the impacts that HIT capabilities generate for information protection.
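
A brief sketch of the role-based access idea referenced above: each role maps to the minimum set of record sections it needs, and every access check consults that mapping. The roles and section names are invented for illustration and do not reflect any particular EHR product.

# Hypothetical role-to-permission mapping for an EHR (illustrative only).
ROLE_PERMISSIONS = {
    "attending_physician": {"demographics", "notes", "labs", "medications"},
    "billing_clerk":       {"demographics", "billing_codes"},
    "front_desk":          {"demographics", "schedule"},
}

def can_access(role, record_section):
    """Allow access only when the role's duties require that section."""
    return record_section in ROLE_PERMISSIONS.get(role, set())

print(can_access("billing_clerk", "notes"))         # False
print(can_access("attending_physician", "labs"))    # True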

Another example, and a timely one, of how privacy concerns are integrated with information security involves bring-your-own-device (BYOD) initiatives. While healthcare organizations are increasingly allowing individuals to bring their own smartphones, laptops, tablets, and mobile devices to work, under these initiatives, they are also instituting information security policies and procedures to protect the PHI and PII on their networks—the same networks these devices are accessing. One such procedure is data wiping: In the event an employee quits or is terminated, and that person used his or her own device to access the organization’s network resources, the BYOD policy likely gives the organization the right to remotely and completely erase everything on the device. This would include the work-related information along with any potential PHI or PII. It could also include pictures, personal information, and personal property.22 Because of this, the healthcare organization’s effort to protect privacy through information security may actually infringe on the privacy of the employee. When implementing the BYOD policy, the integration of privacy and security issues should be considered.

Ownership of Healthcare Information

When it comes to healthcare, traditional expertise grew independently around privacy (such as protecting identity) and information security (such as protecting resources). Over time, both disciplines evolved and developed into specific competencies found in the workforce.

Today that reality has changed. Privacy and security have been integrated into an almost singular competency that every person handling PHI or PII requires. The reasons for the integration have already been discussed—the digitization of health information, networking of medical systems and devices, and regulatory pressures to safeguard health information, to review a few. This is a global reality.

Let’s examine the impact of privacy and security from the perspective of information use, beginning with a quick look at health information ownership according to international law and customs, with a focus on the key concern of ownership of the information once it is collected by a healthcare organization. This concern is addressed differently in different countries, based on each country’s views on data ownership and laws. Recognizing how authorities view this concern helps you understand how relevant guidelines, laws, and customs affect the overall privacy and security approaches the country expects healthcare organizations (or data collectors) to take.

United States (HIPAA)

True ownership of health information is hard to determine. Any comparison between how the United States regulates property rights and a notion of data ownership is flawed. To clarify, the issue is what level of control a patient in the United States actually has over the use of their private information. Property rights give owners control over how their property is used or not used. The rights enforced by US laws provide guidance about how the information is used, but patients don’t have ownership rights, in that some nonconsensual uses of PHI are authorized, such as use for public health reasons or use under the purview of an institutional review board (IRB).

Images

NOTE   An IRB is an internal body in an academic healthcare environment where research is conducted; it is required when clinical trials are performed with human subjects. The IRB governs baseline consent and authorization guidelines that would not necessarily include additional input from the patient.

Patients do have the right to know what information is collected about them, the right to access that information, the right to request amendment when the information is believed to be incorrect, and the right to know who else has seen the information. Once the data is collected, however, the healthcare organization owns the information in the recorded format, whether written or electronic, such as a file folder or a digital file. The legal responsibility to safeguard the information under HIPAA stems from a perspective of proper caretaking of the data, but the law favors healthcare organization ownership.

European Union (GDPR)

In the European Union, GDPR makes it very clear that the individuals who provide their personal information are the data owners. Data collectors have a responsibility to protect sensitive information continually, but the rights individuals have over their information do not change as the information changes hands. There are strict provisions for gathering personal data, which allow collection of data only for legitimate purposes. Once data is collected, the healthcare organization must respect the rights of the individuals as the data owners. Chief among the rights of data owners under GDPR is the right to complain and obtain redress if an individual believes his or her information is not being used in a way the data collector indicated. In fact, as mentioned earlier in the chapter, as the data owner, the individual has the right to be forgotten from that organization’s databases.

United Kingdom

Because healthcare is funded and provided almost exclusively by the National Health Service (the United Kingdom’s government healthcare system), health data and medical records in the United Kingdom are seen as government property. Controls must be in place to safeguard the information, of course. There are provisions for patients to view and address perceived discrepancies in their records, but the philosophy of ownership leans toward the government. The overall responsibility for the records lies in the authority of the Secretary of State for Health.

Images

NOTE   The UK implementation of GDPR includes a national data opt-out, which became effective in March 2020. Under this provision, individuals can choose not to allow their sensitive information to be used for research and healthcare planning.

Germany

Germany is presented here outside of the governance of the GDPR because Germany passed its own law, known as an implementing law for GDPR. The Federal Data Protection Act (FDPA) entered into force on May 25, 2018, the same day the GDPR became applicable. Germany was the first EU member state to issue its own implementing law.

One of the most important focuses of the FDPA is its extensive provisions on the processing of personal data of employees. The law serves to clarify and strengthen the obligation of the provider not only to safeguard the information but also to document all health information completely. For example, the provider must document information such as patient history, diagnoses, treatment, and prognoses. The law mandates that the provider properly maintain the records (whether paper or digital) and preserves ownership with the individual. For example, the law mandates that any and all information be made available to the patient upon his or her request. However, there is some ambiguity in the implementing law about secondary uses of health data. A debated issue is that data controllers in special cases may process health data for a purpose that is different from the original one. The FDPA references the GDPR, which has an exception that permits the processing of data for scientific or historical research, or statistical purposes, without consent.

Understand Sensitive Data and Handling

The process of collecting, recording, storing, and exchanging data electronically introduces risks of disclosure, so understanding how healthcare data might be affected by the way you handle it is an important aspect of your role in protecting that data. Most individuals obviously do not want their healthcare records disclosed to others without their permission. Confidentiality is essential to privacy, which is essential to patient care. Confidentiality practices are in place to protect the dignity of patients and to ensure that patients feel free to reveal the complete and accurate information required for them to receive the correct medical treatment.

Images

EXAM TIP   You should be familiar with recent breach reports from the Internet about healthcare data disclosures, even if your organization has never experienced one. While you review the reports, make an effort to read the details and determine, even when it’s not part of the report, what aspect of security failed the organization. This will help you prepare for scenario-based questions on the exam.

Sensitivity Mitigation

Personal information, especially PHI according to HIPAA, is sensitive because of what the information contains and how the information can be misused. Identifying information is often unchangeable, and disclosing it in an unauthorized way invades a person’s privacy and can affect their safety. People have a right, or at least an expectation, that employers, the community, or family members do not need to know certain sensitive details about their personal lives. With healthcare data, the fact that the information cannot be changed easily means that fraud and identity theft based on medical information can be almost impossible to undo. As a certified HCISPP professional, you must recognize the critical importance of protecting sensitive information and take every reasonable precaution in reducing the risk of unauthorized data use. Some of the methods to mitigate the sensitivity of the information are to anonymize it or de-identify the data sets. In these ways, the information remains useful for healthcare research or data analytics but cannot be attributed to an individual. When we remove sensitive information from a data set so that it can be useful, but it no longer is attributable to an individual, we preserve privacy.

Anonymization

By definition, the anonymization of data is a process of replacing PII and PHI with a string of X’s or some other values that render the data unreadable, yet maintain the location for the data. For example, in a database, the record may have a field for Last Name. The actual last name can be replaced with several X’s. The original data is lost, but a data field for last name remains in the database schema. Some identifying data may be useful to include, such as the patient’s address postal code, and this useful information can be retained while achieving anonymization. Your analysis of an anonymization procedure should factor in details specific to the context. For example, some small geographic locations may provide identifiable information based on postal code only, so this should be factored into your analysis. In unstructured data, such as a provider’s note section, the use of X’s could replace the actual last name but still allow analysis of the information.
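
Here is a minimal sketch of the masking approach just described: direct identifiers are overwritten with X’s while the fields themselves, and lower-risk values such as a postal code, are retained. The field names and the choice of which fields to mask are assumptions made for illustration.

# Fields treated as direct identifiers in this illustration.
MASK_FIELDS = {"first_name", "last_name", "ssn"}

def anonymize(record):
    """Replace direct identifiers with X's; keep field structure and other values."""
    return {k: ("X" * len(str(v)) if k in MASK_FIELDS else v) for k, v in record.items()}

patient = {"first_name": "Maria", "last_name": "Lopez", "ssn": "123-45-6789",
           "postal_code": "10001", "diagnosis_code": "E11.9"}
print(anonymize(patient))
# {'first_name': 'XXXXX', 'last_name': 'XXXXX', 'ssn': 'XXXXXXXXXXX', 'postal_code': '10001', ...}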

If you need to re-identify the data later, you can use related and similar processes of pseudonymization and tokenization. Pseudonymization calls for using a fictitious value in place of the original data. Tokenization includes using encryption of the data to create a unique value for each data element. Both processes require a key or code to re-identify the data. The value provided by using pseudonymization and tokenization is that the data is still useful for data analysis and data processing because the data attributes are maintained, and re-identification is possible, unlike in anonymization.
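
The sketch below contrasts the two reversible techniques: a protected pseudonym lookup table for names and an encrypted token (using the same Fernet recipe as before) for a record number. Both depend on safeguarding the mapping or the key, which is what makes re-identification possible; the details here are illustrative assumptions rather than a prescribed method.

import itertools
from cryptography.fernet import Fernet  # pip install cryptography

# Pseudonymization: a protected lookup table maps real values to fictitious ones.
_pseudonyms = {}
_ids = itertools.count(1)

def pseudonymize(value):
    """Return a stable fictitious identifier for the original value."""
    if value not in _pseudonyms:
        _pseudonyms[value] = f"PATIENT-{next(_ids):04d}"
    return _pseudonyms[value]

# Tokenization (simplified here as encryption): re-identification needs the key.
token_key = Fernet.generate_key()
tokenizer = Fernet(token_key)
mrn_token = tokenizer.encrypt(b"MRN-000417")

print(pseudonymize("Lopez, Maria"))           # e.g., PATIENT-0001
print(tokenizer.decrypt(mrn_token).decode())  # MRN-000417, recovered with the key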

De-identification

De-identification is a sensitivity mitigation process by which data is rendered no longer unique to a specific person and, therefore, not subject to regulation such as the HIPAA Privacy Rule. Using the HIPAA Privacy Rule as a guide, there are two ways to achieve de-identification: the safe harbor method and the expert determination method.

The safe harbor method involves removing all data elements, or direct identifiers, that constitute individually identifiable information. Safe harbor is achieved when the 18 data elements listed in the HIPAA Privacy Rule are stripped from the record. As long as you are clear on which data elements are applicable, this method is probably the easiest to satisfy. Here are those direct identifiers from HIPAA (a brief code sketch of the stripping process follows the list):

•   Names

•   Addresses (the first three digits of a postal code may be retained for large communities)

•   Most dates related to the individual (such as birth date, except for year)

•   Telephone numbers

•   Vehicle identifiers and serial numbers, including license plate numbers

•   Fax numbers

•   Device identifiers and serial numbers

•   Personal e-mail addresses

•   Universal resource locators (URLs)

•   Social Security numbers

•   Internet Protocol (IP) addresses

•   Medical record numbers

•   Biometric identifiers, including fingerprints and voice prints

•   Health plan beneficiary numbers

•   Photographs and images of the entire face

•   Account numbers

•   Certificate/license numbers

•   Other unique identifying numbers, characteristics, or codes that are likely to identify a specific person
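
As flagged before the list, here is a minimal sketch of the safe harbor idea: drop any field that corresponds to a direct identifier. The field names and the abbreviated identifier set are assumptions; a real implementation must cover all 18 categories, including the special handling of dates and postal codes.

# Abbreviated, hypothetical mapping of record fields to direct identifiers.
DIRECT_IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id", "ip_address", "photo_url",
}

def safe_harbor_strip(record):
    """Drop fields that map to direct identifiers; keep everything else."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIER_FIELDS}

record = {"name": "Maria Lopez", "ssn": "123-45-6789", "birth_year": 1980,
          "diagnosis_code": "E11.9", "medical_record_number": "MRN-000417"}
print(safe_harbor_strip(record))   # {'birth_year': 1980, 'diagnosis_code': 'E11.9'}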

The expert determination method for de-identification is a bit more nuanced. The expert determination process uses a statistically proven method that removes both direct and indirect identifiers and requires an expert in the process of de-identifying sensitive information to assess the risk of anyone re-identifying the data based on the data that remains in the database or record. Here’s how it works: Some data sources include direct identifiers and indirect identifiers. Indirect identifiers are bits of information that can be pieced together to identify a single individual—such as race, ethnicity, hair color, and occupation. As mentioned, postal codes can also fall in this category when the population within one postal code is small enough to allow re-identification based on a combination of data elements. Think of this effect as describing someone based on attributes that they may share with a population. As you combine these indirect identifiers, it may become possible to identify who you are describing in that population.
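
The expert determination method itself requires statistical expertise, but the sketch below illustrates the underlying intuition: count how many records share each combination of indirect identifiers, since very small groups are easier to re-identify. The chosen quasi-identifiers and the threshold of 5 are arbitrary assumptions for illustration, not a recognized standard.

from collections import Counter

QUASI_IDENTIFIERS = ("postal_prefix", "birth_year", "gender")  # illustrative choice
MIN_GROUP_SIZE = 5                                             # arbitrary threshold

def risky_groups(records):
    """Return quasi-identifier combinations shared by fewer than MIN_GROUP_SIZE records."""
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in records)
    return {combo: n for combo, n in groups.items() if n < MIN_GROUP_SIZE}

records = [
    {"postal_prefix": "100", "birth_year": 1980, "gender": "F", "dx": "E11.9"},
    {"postal_prefix": "100", "birth_year": 1980, "gender": "F", "dx": "I10"},
    {"postal_prefix": "945", "birth_year": 1955, "gender": "M", "dx": "J45"},
]
print(risky_groups(records))   # small groups signal elevated re-identification risk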

The general outcome of sensitivity mitigation is de-identified data, whether established using a safe harbor approach or an expert determination method. If the organization needs to reinstate the identity of the individuals in the future, re-identification is the reversal process. A code or key is created to link the de-identified data to corresponding PII and PHI. That code or key is protected by the organization and maintained separately, sometimes in a literal safe or a software application. De-identified and anonymized data have beneficial uses in healthcare, as public health, data analytics, and population health efforts are enabled by being able to examine large data sets. Because the de-identified or anonymized data sets are not subject to the HIPAA Privacy Rule, they can be used and shared more broadly to assist research and marketing as well.

Categories of Sensitive Data

As you know, personal healthcare data is internationally protected by regulation or law. Almost every privacy and security framework includes specific direction for healthcare data. Within the general category of healthcare information (for example, PHI), there are some noteworthy subtypes that bear mentioning, because they have additional or even separate handling provisions for those that require access or use. As a representative sample, the following subsections present EU public health information protection requirements and some US-specific requirements.

European Union

The DPD gave EU member states’ data collection agencies the authority to deviate from regular handling procedures for privacy-protected information where there are concerns for the welfare of the public and public health and safety. GDPR has continued that. The DPD allowed for special handling considerations for health data that should be disclosed in the interest of public safety. The directive also clearly outlines an acceptance of member states that decide to deviate from normal processes for the purposes of scientific research, gathering of government statistics, and settling claims for benefits and services in the health insurance system. However, data collectors must otherwise continue to provide safeguards for protecting sensitive data from other, unauthorized disclosure.

United States

Although it is possible to assume that HIPAA contains the complete set of regulatory concerns and controls relative to healthcare, this is not completely accurate. Other regulatory laws and sets of controls exist with special purposes relative to specific patient populations.

Substance Abuse  Healthcare organizations that treat patients for drug and alcohol abuse must not only comply with HIPAA, but they must also comply with the Confidentiality of Substance Use Disorder Patient Records rule, updated in 2017; the Comprehensive Alcohol Abuse and Alcoholism Prevention, Treatment, and Rehabilitation Act of 1970; and the amended language of the Drug Abuse Office and Treatment Act of 1972. The considerations addressed by these regulations center on providing additional confidentiality. Even providers that treat a patient in one care setting, such as primary care, must obtain an additional privilege to access drug and alcohol treatment records from another care setting, such as a drug and alcohol treatment center, if that record is relevant to the primary care treatment. Typically, substance abuse treatment and records are regulated under behavioral health laws, and additional access clearance is required to access them.

Education Records of Minors  The Family Educational Rights and Privacy Act (FERPA) governs the use and disclosure of the PII of minor students and, where applicable, PHI. For the most part, FERPA disallows an educational entity from disclosing any information to a third party without parental consent. While FERPA would not at first seem to have any relationship with HIPAA, it does overlap because schools in the United States commonly employ a nurse or healthcare provider. The data collected by these individuals is covered under FERPA, even if the school has no obligation under HIPAA. The HIPAA privacy and security rules exclude from additional or redundant governance any PHI collected by school healthcare providers. FERPA is generally considered sufficient to protect the health information of minor students as part of securing the defined education record. In short, properly safeguarded information under FERPA would be compliant with HIPAA if the school is under HHS regulatory control.

HIV/AIDS  The accidental disclosure of HIV/AIDS records can be a particular source of distress for patients. You can read summaries of complaints and investigations from HHS that document mistakes and errors that resulted in sensitive HIV/AIDS disclosures to unauthorized recipients.23 The HIPAA Privacy Rule is applicable to protecting HIV/AIDS records. However, the confidentiality of these diagnoses was also protected under the Rehabilitation Act of 1973 and the Americans with Disabilities Act of 1990 before HIPAA. The dilemma with HIV/AIDS historically is that healthcare organizations have to weigh public health concerns against individual patient confidentiality. This is a serious decision with real impacts, as patients with HIV/AIDS have faced discrimination over the years. Unauthorized disclosure of this category of PHI has potentially devastating consequences for patients’ employment, health insurance, and standing in the communities where they live.

Mental Health  Mental health treatment by a qualified healthcare provider will result in a special type of medical document called psychotherapy notes, which are kept separate from a patient’s regular medical record. There are many provisions for healthcare providers to share mental health information for the care of an individual, but stringent controls over this information are warranted because of its sensitivity and the risk that unauthorized disclosure could cause unwanted effects, such as causing the patient to delay or stop seeking needed care. Special consent may be needed from the individual to release the contents of psychotherapy notes. Because of the risk of harm to patients themselves and to others, HHS and its Office for Civil Rights have published clarifying guidance that relaxes disclosure rules when the provider believes a patient poses an imminent danger. This includes notification of law enforcement and of specific individuals, such as a spouse, who may be the target. These exceptions fall outside the regulated requirements for patient consent.

Genetic Information  As technology improves in collecting and using genetic information, there is little doubt that this data will become a clinically significant and useful part of the comprehensive medical record. However, the potential to misuse this information exists. For example, employers and health insurers may discriminate based on potential conditions or predispositions to disease or disability indicated in a person’s genetic code. To begin to address these unwanted actions against people, in 2008 the United States enacted the Genetic Information Nondiscrimination Act (GINA). Without controls to protect an individual’s genetic information, including specific notices of privacy practices and obligatory informed consent documents, individuals would be reluctant to participate in pioneering research and genetic studies.

Images

NOTE   Pregnancy information also requires special sensitivity handling. Normally, pregnancy is not a diagnosis with similar stigma to other situations covered in this section, but a history of pregnancy must be kept confidential and cannot be used to discriminate against women. In addition, pregnancy cannot be used as a pre-existing condition for health insurance purposes, and employers must offer maternity care as part of any insurance plans that are a benefit of employment.

Chapter Review

This chapter introduced security objectives and attributes. The evolution from information security to cybersecurity reflects the evolution of healthcare from a paper-based system to a digital one. Responsibilities to protect health information have increased now that duties include electronic information security (cybersecurity) roles and responsibilities as well as traditional, paper-based information protection. The information in this chapter is central to your overall understanding of healthcare information security and privacy. There are many types of security controls, such as access models, encryption, business continuity, data disposal, and incident response. Each control focuses on preventing, detecting, or correcting data incidents, or any combination of those. Mastering the implementation and management of information security controls is fundamental, yet the skills required can be complex and challenging.

Under most regulatory requirements, including national, state, and local requirements, organizations are tasked with designating privacy officers, who are responsible for assuring the privacy of data, for assisting patients and customers with issues or complaints, and for monitoring the quality and coordination of security within the organization. One of the many duties of the privacy officer is processing authorizations for certain kinds of research, marketing, fundraising, and other activities that are not directly health related. Use relates to the intended use of the data at the time of collection. Retention relates to the time for which the data must be accounted for and kept secure. Chapter 7 will address what should happen when required retention periods end.

If privacy data is disclosed in an unintended or unauthorized way, regulatory requirements include measures to advise those affected by the loss or breach. Openness enables any member of the public who has legitimate interest to be provided information about the processing of data, while transparency refers to permitting an individual who has data maintained by an organization to be aware of specifically how their data is stored, processed, or shared.

Under the HIPAA Privacy Rule, use limitation states that covered entities must reasonably safeguard protected health information to limit incidental uses or disclosures made pursuant to an otherwise permitted or required use or disclosure. Access limitation covers the topic of limiting access to protected health information to those in the workforce who need access based on their roles. Under the OECD privacy principles, individual participation requires that individuals be permitted to obtain the status of data related to them, communicate with the data controller, and challenge data related to them. In sum, the integration and dependency between information privacy and security are fundamental components that you, as an HCISPP, must understand and master.

Questions

1.  A healthcare provider is examining a new patient with complaints of severe back pain. The provider suspects substance abuse and dependency on pain medication based on her initial observation. She requests access to previous substance abuse records. That request is denied to protect patient privacy by the health information management director who controls access to the electronic health record. He claims to be following governing law and ethical practices. Which of the following principles is impeding patient care?

A.  Confidentiality

B.  Integrity

C.  Availability

D.  Accountability

2.  In a healthcare office environment, which of the following applications may possibly have sensitive data included within its storage media?

A.  E-mail

B.  Scheduling

C.  Billing

D.  All of the above

3.  Which of the following are elements of business continuity?

A.  Continuing mission capabilities after a power loss

B.  Continuing contact with a patient who moves from the care area

C.  Keeping in contact with a former business associate

D.  Preparing for an external assessment while continuing to see patients

4.  Which of the following describes the ability for a user to do something with a computer resource, such as permission to review, edit, or delete?

A.  Least privilege

B.  Logging

C.  Monitoring

D.  Access control

5.  Which of the following is defined as a condition or weakness in (or absence of) security procedures or technical, physical, or other controls?

A.  Threat

B.  Risk

C.  Vulnerability

D.  Exploitation

6.  The principles of security, often referred to as CIA, are

A.  Confidentiality, integrity, accountability

B.  Contingency, integrity, accountability

C.  Confidentiality, integrity, availability

D.  Confidentiality, interoperability, availability

7.  To protect health information in an e-mail sent to a colleague, which of the following would be a proper security control?

A.  Logical controls

B.  Strong authentication

C.  Encryption

D.  Least privilege

8.  Based on some concerns you have with the organization’s firewall, you export a report of data packet transfers during the last 15 minutes. You see evidence of packet loss and a stalled data transfer. Which of these terms best defines the type of information you are reviewing?

A.  Incidents

B.  Breaches

C.  Events

D.  Alerts

9.  According to many leading privacy principles and regulations, which concept would be applied to make edits or amendments to the record if a mistake is identified after a patient requests a copy of the record of her last medical procedure?

A.  Accuracy

B.  Access

C.  Individual participation

D.  Openness

10.  Which of the following may be included in a limited data set under HIPAA?

A.  Social Security numbers

B.  First three digits of your postal code

C.  E-mail address

D.  Bank personal identification number

Answers

1.  C. Because the director will not make the patient’s data available, the decision is likely to affect the treatment of this patient negatively. A, confidentiality, is incorrect. Under some circumstances, such as substance abuse, additional provisions are necessary to maintain the confidentiality of patient identification, but that is not the case in this scenario, where there is a legitimate medical need for access. B and D are incorrect because the security principles of integrity and accountability are not at issue with disclosing previous prescriptions and the potential abuse of pain medication in this scenario.

2.  D. Even though an organization may have policies in place that prohibit the use of e-mail for communications with patients and others about specific sensitive healthcare diagnoses, users and patients could still include this information in their e-mail. As a result, you should assume that e-mail data requires the same security controls as other systems that store sensitive data. Patient scheduling and billing data clearly contain personally identifiable data as well as protected health information.

3.  A. Business continuity addresses the organization’s ability to deliver its mission (healthcare, for our purposes) when it may be affected by electrical outages, weather-related events, or community-based events such as riots, accidents, or an outbreak of a virus that affects staffing levels. Although maintaining contact with former patients or business associates and preparing for external assessments are part of doing business, they are not part of business continuity.

4.  D. Access control defines the technical ability to perform a function with a computer resource. Although A, least privilege, also concerns the level of access, it is a guiding principle for granting only the minimum access necessary rather than the mechanism that defines what a specific user or role can do with a resource. B is incorrect because logging describes the process of capturing system information related to the asset, not setting any privileges. Likewise, C, monitoring, is a passive activity defined by oversight of system activity, usually through review of logs.

5.  C. A vulnerability is a condition of weakness that can be exploited by a threat. Although a vulnerability can be exploited and poses a risk to the system or organization, only vulnerability matches the definition given. A, threat, is an action or condition that presents a source of concern for the security of an organization, such as a phishing attack. B, risk, is a measure of the likelihood and impact of a negative event, taking existing mitigations and the frequency of the event into account. D, exploitation, is the result of a realized risk: an attack that takes advantage of a weakness and results in a successful security incident.

6.  C. The principles of security are confidentiality, integrity, and availability. Although accountability, included in A, is an often-mentioned principle, it is not considered a component of the CIA triad. Likewise, contingency, included in B, is frequently discussed, especially in the healthcare setting, but it is not part of the triad. D, interoperability, is also discussed, but it is not a primary principle.

7.  C. Encryption is the technical security control that enables the safe transfer of sensitive information via e-mail (see the brief sketch following these answers). A is incorrect because logical controls include access controls, which may be built into operating systems or applications to permit technical capabilities such as reading, creating, editing, modifying, or deleting; they do not protect the message in transit. B, strong authentication, is important for gaining access to a system in general but would not protect the information from unauthorized access during data transfer. Least privilege, D, is also not the best answer because that concept governs access and authorization to information, not its secure transfer.

8.  C. Events are all the observable items or entries that describe activity on the network or within device logs. These observations are not necessarily a problem; further investigation determines whether an event is in fact an incident. A, incidents, are events that require you to take action and possibly remediate. This scenario does not describe a breach, B, because the review has only identified traffic that failed before leaving the organization through the firewall, and there is no indication that the data was sensitive. D, alerts, is incorrect because the scenario does not mention that these events triggered a notification to you as the security professional.

9.  A. Accuracy is the appropriate principle: providing patients with the ability to request corrections to their records helps ensure the records’ accuracy. The scenario does involve access, B, because the patient’s request to view her record is honored, but the more significant element is using that access to achieve accuracy. C, individual participation, is incorrect for a similar reason: knowing the information exists and obtaining access are aspects of individual participation, but the scenario centers on the accuracy of the information. You could assume the organization follows the principle of openness, D, but the scenario does not state that the patient knew and understood the policies that were actually followed.

10.  B. The first three digits of a postal code may be included in a limited data set under HIPAA, because postal address information at the level of town or city, state, and ZIP code is not among the direct identifiers that must be removed. A, C, and D are incorrect: Social Security numbers, e-mail addresses, and bank personal identification numbers are direct identifiers or account credentials that must be excluded from a limited data set.
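Question 7 identifies encryption as the control that protects health information sent by e-mail. The following minimal sketch, written against the third-party Python cryptography package (an assumption for illustration; key management and key exchange with the recipient are deliberately omitted), shows the basic idea that only ciphertext should leave the sender:

    # Illustrative sketch only: symmetric encryption of a message body using the
    # "cryptography" package (pip install cryptography). In practice, keys would be
    # generated and protected by a key-management process and shared securely.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()              # managed and protected outside this sketch
    cipher = Fernet(key)

    plaintext = b"Patient follow-up: lab results attached."
    ciphertext = cipher.encrypt(plaintext)   # this is what would be transmitted
    recovered = cipher.decrypt(ciphertext)   # only a holder of the key can recover it

    assert recovered == plaintext

In real deployments, e-mail encryption is more commonly provided by S/MIME, TLS between mail servers, or a secure messaging gateway than by hand-written code, but the principle is the same: protected health information should be unreadable to anyone who intercepts it in transit.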

References

1.  Dean, B. 2017. Privacy vs. Security: Do today’s models work with the Internet of Things and its cousin, big data? SecureWorks blog, March 23, https://www.secureworks.com/blog/privacy-vs-security.

2.  Lewis, K. 2017. Security Policies and Plans Development. In Computer and Information Security Handbook (pp. 565–570). Morgan Kaufmann.

3.  Stine, K., R. Kissel, W. Barker, A. Lee, and J. Fahlsing. 2008. Guide for Mapping Types of Information and Information Systems to Security Categories: Appendices. NIST SP 800-60 Vol. 2 Rev. 1, Section D.14.4. https://csrc.nist.gov/publications/detail/sp/800-60/vol-2-rev-1/final.

4.  Government of Canada. Privacy Act (R.S.C., 1985, c. P-21). http://laws-lois.justice.gc.ca/eng/acts/P-21/index.html.

5.  Borland, S. 2014. “2,000 NHS patients’ records are lost every day with more than two million serious data breaches logged since the start of 2011.” February 14, Daily Mail.com, http://www.dailymail.co.uk/news/article-2559876/2-000-NHS-patients-records-lost-day-two-million-data-breaches-logged-start-2011.html#ixzz3CgQ3zTL9.

6.  European Parliament and the Council of the European Union. 1995. “DIRECTIVE 95/46/EC OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data.” Chapter VI, “Supervisory Authority and Working Party on the Protection of Individuals with Regard to the Processing of Personal Data,” Article 28, Section 7. http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:31995L0046&from=en.

7.  McBride, T., M. Ekstrom, L. Lusty, J. Sexton, and A. Townsend. 2017. Special Publication (NIST SP 1800-11, Draft). Data Integrity: Recovering from Ransomware and Other Destructive Events. Available for download at https://csrc.nist.gov/publications/detail/sp/1800-11/draft.

8.  NIST Information Technology. 2020. “Identity and Access Management.” https://www.nist.gov/topics/identity-access-management.

9.  Afshar, M., S. Samet, and T. Hu. 2018. An Attribute Based Access Control Framework for Healthcare System. Journal of Physics: Conference Series 933. https://iopscience.iop.org/article/10.1088/1742-6596/933/1/012020/pdf.

10.  Breaux, T. 2014. “Encryption and Other Technologies.” In Introduction to IT Privacy: A Handbook for Technologists. Portsmouth, NH: International Association of Privacy Professionals, p. 126.

11.  Yang, C. and H. Lee. 2016. A study on the antecedents of healthcare information protection intention. Inf Syst Front 18, 253–263. Can be purchased at https://link.springer.com/article/10.1007/s10796-015-9594-x.

12.  Ziegler, A. 2018. Hospitals, Doctors and Patients Impacted by Unplanned EHR Downtime. Healthcare IT Today (online). https://www.healthcareittoday.com/2018/06/18/hospitals-doctors-and-patients-impacted-by-unplanned-ehr-downtime.

13.  UCISA (University of Oxford). 2019. “ITIL – Introducing service operation.” http://docshare01.docshare.tips/files/21461/214619725.pdf.

14.  Aon. nd. “Is Communications Planning Part of Your Incident Response Plan?” Reprinted from E. Strotz, July 2016, at securityroundtable.org. https://www.aon.com/cyber-solutions/thinking/develop-breach-response-communications-plan.

15.  Onitiu, D. 2019. “‘The Duty to Remember v the Right to be Forgotten: Holocaust Archiving and Research, and European Data Protection Law’: Notes from Arye Schreiber’s seminar hosted by NINSO, the Northumbria Internet & Society Research Interest Group.” Northumbria Legal Studies Working Paper No. 2019/01. Download from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3343476.

16.  Black, E. 2001. IBM and the Holocaust: The Strategic Alliance Between Nazi Germany and America’s Most Powerful Corporation. New York: Crown Publishers/Random House.

17.  US Department of Health and Human Services Office for Civil Rights. 2018. “Guidance on HIPAA and Individual Authorization of Uses and Disclosures of Protected Health Information for Research.” https://www.hhs.gov/sites/default/files/hipaa-future-research-authorization-guidance-06122018%20v2.pdf.

18.  US Department of Health and Human Services. 2003. HIPAA Privacy Rule, “Public Health,” 45 CFR 164.512(b). https://www.hhs.gov/hipaa/for-professionals/special-topics/public-health/index.html.

19.  US Department of Health and Human Services. 2017. “Guidance on HIPAA & Cloud Computing.” https://www.hhs.gov/hipaa/for-professionals/special-topics/cloud-computing/index.html.

20.  Gerber, B. 2020. “OECD Privacy Principles,” Organization for Economic Cooperation and Development (OECD). http://oecdprivacy.org.

21.  Holloway, M. 2013. “HHS Finalizes HIPAA Privacy and Data Security Rules, Including Stricter Rules for Breaches of Unsecured PHI.” Lockton blog, https://www.lockton.com/Resource_/PageResource/MKT/01232013_HHS%20Finalizes%20HIPAA_Data%20Security_Unsecured%20PHI.pdf.

22.  Alotaibi, B., and H. Almagwashi. 2018. “A review of BYOD Security Challenges, Solutions and Policy Best Practices,” 2018 1st International Conference on Computer Applications & Information Security (ICCAIS), Riyadh, 2018, pp. 1–6.

23.  US Department of Health and Human Services. 2020. “Health Information Privacy Enforcement Examples Involving HIV/AIDS.” https://www.hhs.gov/civil-rights/for-providers/compliance-enforcement/examples/aids/cases/index.html.
